
[19] S. Nelakuditi and Z. L. Zhang, "A localized adaptive proportioning approach to QoS routing," IEEE Commun. Mag., vol. 40, no. 6, pp. 66-71, 2002.

[20] G. Apostolopoulos, R. Guerin, S. Kamat, and S. Tripathi, "Quality of service based routing: A performance perspective," in ACM SIGCOMM, 1998.

[21] Q. Ma and P. Steenkiste, "On path selection for traffic with bandwidth guarantees," in IEEE International Conference on Network Protocols, 1997.

[22] N. Spring, R. Mahajan, and D. Wetherall, "Measuring ISP topologies with Rocketfuel," in ACM SIGCOMM, 2002.

[23] D. P. Heyman and T. V. Lakshman, "What are the implications of long range dependence for VBR video traffic engineering," IEEE/ACM Transactions on Networking, vol. 4, no. 3, pp. 301-317, 1996.

[24] G. R. Grimmett and D. Stirzaker, Probability and Random Processes. Oxford Science Publications, 2nd edition, 1998.

[25] H. Kushner and D. Clark, Stochastic Approximation Methods for Constrained and Unconstrained Systems. Springer-Verlag, 1978.

[26] H. Kushner and G. Yin, Stochastic Approximation Algorithms and Applications. Springer-Verlag, 1997.

Appendix I

Proof of Proposition 4.1

First, note that the algorithm defined in (9) has the same form as the unicast algorithm defined in [18]. In the unicast case, the link cost functions, and consequently the overall cost function, were assumed to be continuously differentiable with respect to the input variables. However, as can be seen from (2), this assumption no longer holds because of the $x_o^s$ terms, although convexity is preserved. Here, using convex analysis and the concept of a subgradient,⁹ we show that the proof still holds even when the cost function is not differentiable. We closely follow [18]. Collecting the terms of (9) for all sources, we have:

$$x(k + 1) = \Pi_{\Theta}\left[x(k) - a(k)\hat{g}(k)\right], \quad (13)$$

where $x(k) = (x_s(k),\, s \in S)$, $\hat{g}(k) = (\hat{g}_s(k),\, s \in S)$, $a(k)$ is an $N \times N$ diagonal matrix with $N = \sum_{s \in S} (N_s \cdot |D^s|)$, and the diagonal entries of $a(k)$ equal the corresponding step sizes $a_s(k)$ of the different sources.
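As a purely numerical illustration (not part of the proof), the projected iteration (13) can be sketched in a few lines. The constraint set $\Theta$ is taken here to be the probability simplex, and the nondifferentiable objective $f(x) = \max_i x_i$ is an assumed stand-in for the nonsmooth cost; both are choices made for this example only.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1},
    playing the role of Pi_Theta in (13)."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def projected_subgradient_step(x, sg, a_k):
    """One iteration x(k+1) = Pi_Theta[x(k) - a(k) * sg(x(k))]."""
    return project_simplex(x - a_k * sg)

# Minimize the nonsmooth convex f(x) = max_i x_i over the simplex;
# a valid subgradient is the indicator vector of an argmax coordinate.
x = np.array([0.7, 0.2, 0.1])
for k in range(1, 201):
    sg = np.zeros_like(x)
    sg[np.argmax(x)] = 1.0                    # subgradient of max_i x_i
    x = projected_subgradient_step(x, sg, a_k=1.0 / k)  # diminishing steps
# the iterates drift toward the minimizer (1/3, 1/3, 1/3)
```

With the diminishing step sizes $a_k = 1/k$, the iterates approach the uniform point, consistent with the standard convergence theory for projected subgradient methods.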

We can follow the definitions given in [18], except that every gradient term $\nabla C(x)$ is now replaced by a subgradient $sg(x)$ satisfying certain conditions, to be specified shortly. Rewrite (13) in the following form:
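For reference, the property required of $sg(x)$ is the standard subgradient inequality from convex analysis: a vector $sg(x)$ is a subgradient of the convex cost $C$ at $x$ if

$$C(y) \;\geq\; C(x) + sg(x)^{T}(y - x) \quad \text{for all } y \in \Theta,$$

and the set of all such vectors is the subdifferential $\partial C(x)$; when $C$ is differentiable at $x$, this set reduces to $\{\nabla C(x)\}$.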

$$\begin{aligned} x(k+1) &= x(k) + a(k)\left[-sg(x(k)) + \xi(k) + b(k)\right] + \tau(k) + \phi(k) \\ &= v(k) + \tau(k) + \phi(k), \end{aligned}$$

⁹See [17] for details on subgradients ($sg(x)$) and subdifferentials ($\partial C(x)$).