Clinic of Advanced Biological Medicine The Clinic of Advanced Biological Medicine in Frankfurt am Main aims to boost the immune system, maintain and improve quality of life in case of disease, minimize the side effects of toxic treatments, and increase both the chances of recovery and life span through the use of effective treatments. University Hospital Ulm 8.7/10 from 83 votes Department of Urology and Pediatric Urology Surgical treatment of testicular teratoma #235559 The Department of Urology at Ulm University Hospital offers a wide range of modern services for the treatment of urological and pediatric urological conditions. Through attentive patient care and personal consultation, the clinic consistently demonstrates its professional competence.
Imitating Mussolini, in this day and age??!! I'm amazed that it's the artistic director doing it (in #eurovisione!). And these jokes are prepared and written into a script! If singers get suspended from the competition, then #Baglioni should be too, for one evening... #parcondicio #Sanremo2018 #Sanremo — Maria Francesca (@19MF90), 7 February 2018 #Baglioni just did a Mussolini impression, but apparently everything is fine #sanremo2018 — Daniele Vincioni (@DandyVOGUE), 7 February 2018 Last updated: Thursday, 8 February 2018, 01:51 © All rights reserved. Claudio does it again: after the gaffe about the Alpini, he slips up with a Mussolini impression. In the first evening, the "artistic dictator" had stirred controversy by saying his Festival would not be "an Alpini reunion", a controversy that subsided during Wednesday's press conference, when he struck up a chorus: "I made a joke about the Alpini and they got angry," he said. "I love Alpini choirs so much that I can do one all by myself. I would like to reconcile with the category." During the second evening, however, before introducing Pippo Baudo, standing behind a podium with his hands on his hips, Baglioni said: "Instead of calling me artistic director, they called me artistic dictator, and set up this podium for me... So, audience of ITALIAAANS..." It was precisely that Mussolini-style shout that did not go down well, and a real storm broke out on social media.
Bison Range Recalls Reservation History; Land Trust Restrictions get five-year extension; Forestry grazing notes; Land in irrigation district must be properly classified; Letters to the Editor; Editorially speaking (W. W. McDonald). 1960 Fires Bad-Better Methods Lower Loss; Wiprud Paintings Were Saved When Old Jocko Church Burned; Proposed Indian Charter; Reports on Students Tell of New Jobs - Those at Schools; Morigeau Writes Views of Proposed Indian Charter Goal is Nearer in Flathead Claims Settlement; New St. Ignatius Hospital Reminds of Health History on Flathead; Man-Caused Fires Point to Problem of Trespassing; Seven Years Since Third Wheel Started at Kerr Dam; 4-H Member Writes of Trip to... Flathead Lakers elect Fouty to Board; Sunday... 117th anniversary of Treaty; Montana Power to pay $11 and a quarter million by August; Malatare seated after Council double reverse; Wheeler acquitted on firecracker charge; Council committee wading in... Treaty hunt, fish rights to be tested; Flathead irrigation project; People of Tribe to decide on judgment: check with BIA pending polling of the people; St. Ignatius judgment Pow Wow slated; Labor Dept. finds fault in area Indian hiring; Zectran... Indian justice conference; Tribal range hunt-right in courts; Council shifts on children's shares: Total split of MP check must go through channels; Tribes plan to take over education, employment; Council committees: are they doing the job?;... Five firms to bid for valley; Cracking down on trespassers; Jurisdiction rundown: Pierre awaits decision; High court to decide Indian cases in March; Justice department stalling case; Trust land transfer is unreasonably hard; Game conviction... Tribe wants boundary action; Jurisdictional 'misunderstanding' causes a number of problems; Tribe-BIA must comply with environmental law; Jocko River mud; Blackfeet in line for Governor; What's new on the Reservation; Tribe donates $5,000 to St....
Recreational permit law upheld in Federal Court; No more state income taxes for Indians; Kootenais want their share; Dixon JOM: Bringing Indian involvement; Mist over southern half of Flathead Lake is lifted; ITB wants more government employment;... Council fails to show in Elmo, Kootenais plan split; Forestry practices questioned; BIA outlines absentee voting; 90% payout vote set June 30; New buildings, same old pow-wow; Tribe cracks down on crime against Indians; Salish language; Kootenai...
Q: Add query condition for referenced user by user ID

I have created a content type with machine name user_points which has, let's say, the following fields: nid, title, reference_id (text), user (a reference to a user entity) and points (integer). The following code gives me the sum of all points:

    $storage = $this->entityTypeManager()->getStorage('node');
    $total_points = $storage->getAggregateQuery()
      ->condition('status', 1)
      ->aggregate('points', 'SUM')
      ->condition('type', 'user_points')
      ->execute();

I would like to add a condition to get the total points for a specific referenced user, by user ID. How can I do it?

A: If the machine name of the user entity reference field in the user_points content type is, for example, field_my_user, the following condition can be added to the query above to select only nodes whose referenced user equals $user_id:

    ->condition('field_my_user.entity:user.uid', $user_id)
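Combining the answer's condition with the question's aggregate query, the whole call might look like the sketch below. This is an untested illustration: it assumes a Drupal controller context where $this->entityTypeManager() is available, and field_my_user is only an example field name taken from the answer.

```php
<?php
// Sketch (assumed field name field_my_user, as in the answer above):
// sum the 'points' field over published user_points nodes that
// reference the user with ID $user_id.
$storage = $this->entityTypeManager()->getStorage('node');
$total_points = $storage->getAggregateQuery()
  ->condition('status', 1)
  ->condition('type', 'user_points')
  // Follow the entity reference to the user entity and match its uid.
  ->condition('field_my_user.entity:user.uid', $user_id)
  ->aggregate('points', 'SUM')
  ->execute();
```

execute() on an aggregate query returns an array of result rows; the summed value should appear under a key such as points_sum, though the exact alias is worth verifying against your Drupal version.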
About Us TeamAsia is an award-winning integrated marketing communication firm. Our biggest interest is bringing brands to the next level of visibility. We do this by exploring innovation to its fullest potential through our five core services: integrated marketing, experience, creative, content and digital. In our 24 years in the industry, TeamAsia has been recognized for excellence by several organizations, including the Mobile Web Awards, the Philippine Quill Awards and the Web Awards. Our team is composed of some of the most inspired and passionate minds today, and together we seek to continuously learn, connect and share. Our Story It all began in 1992, when Michael and Monette Hamlin founded TeamAsia. We started small and ambitious, organizing the Asian Management Awards in Hong Kong with only a team of three in a cramped garage. Since then, determination and talent have allowed us to flourish as a pioneer of brand visibility in the Philippines. Now, with five core services and a multitalented team, we have blossomed into a premier integrated marketing communication agency catering to diverse local and international clients. We attract accomplished international business leaders through our exceptional service, which has opened doors to a new era of business tourism throughout the Philippines. Proactively finding ways to innovate and sustain brand visibility, we prospered through the Asian financial crisis of the 1990s and continue through the software and social revolutions of the new millennium. Two decades since our establishment, a brighter era has come. We are a hub of innovative ideas born of brilliant minds. We are now a large team of highly talented individuals driven by a passion for creating larger-than-life experiences and intricately crafted stories that spark life into brands.
In our 25 years in the industry, TeamAsia has been recognized for excellence by several organizations, including the Mobile Web Awards, the Philippine Quill Awards and the Web Awards. Our biggest interest is bringing brands to the next level of visibility. Our expertise is developing strategic integrated marketing communication campaigns. We focus on connecting brands to their audiences through various innovative and relevant touchpoints that work harmoniously together, and we do this by exploring innovation to its fullest potential through our core services. Our digital team connects your brand to online, tech-savvy audiences by coming up with strategic content plans that highlight your products and services; we do this through digital marketing, social media, analytics, website development and mobile app development. Our creative team pushes the limits of imagination to create meaningful and strategic ideas for your brand. Our expertise lies in creative storytelling through the development of compelling designs and visual marketing materials.
/* dporfsx.f -- translated by f2c (version 20061008). You must link the resulting object file with libf2c: on Microsoft Windows system, link with libf2c.lib; on Linux or Unix systems, link with .../path/to/libf2c.a -lm or, if you install libf2c.a in a standard place, with -lf2c -lm -- in that order, at the end of the command line, as in cc *.o -lf2c -lm Source for libf2c is in /netlib/f2c/libf2c.zip, e.g., http://www.netlib.org/f2c/libf2c.zip */ #include "f2c.h" #include "blaswrap.h" /* Table of constant values */ static integer c_n1 = -1; static integer c__0 = 0; static integer c__1 = 1; /* Subroutine */ int dporfsx_(char *uplo, char *equed, integer *n, integer * nrhs, doublereal *a, integer *lda, doublereal *af, integer *ldaf, doublereal *s, doublereal *b, integer *ldb, doublereal *x, integer * ldx, doublereal *rcond, doublereal *berr, integer *n_err_bnds__, doublereal *err_bnds_norm__, doublereal *err_bnds_comp__, integer * nparams, doublereal *params, doublereal *work, integer *iwork, integer *info) { /* System generated locals */ integer a_dim1, a_offset, af_dim1, af_offset, b_dim1, b_offset, x_dim1, x_offset, err_bnds_norm_dim1, err_bnds_norm_offset, err_bnds_comp_dim1, err_bnds_comp_offset, i__1; doublereal d__1, d__2; /* Builtin functions */ double sqrt(doublereal); /* Local variables */ doublereal illrcond_thresh__, unstable_thresh__, err_lbnd__; integer ref_type__, j; doublereal rcond_tmp__; integer prec_type__; extern doublereal dla_porcond__(char *, integer *, doublereal *, integer * , doublereal *, integer *, integer *, doublereal *, integer *, doublereal *, integer *, ftnlen); doublereal cwise_wrong__; extern /* Subroutine */ int dla_porfsx_extended__(integer *, char *, integer *, integer *, doublereal *, integer *, doublereal *, integer *, logical *, doublereal *, doublereal *, integer *, doublereal *, integer *, doublereal *, integer *, doublereal *, doublereal *, doublereal *, doublereal *, doublereal *, doublereal *, doublereal *, integer *, 
doublereal *, doublereal *, logical *, integer *, ftnlen); char norm[1]; logical ignore_cwise__; extern logical lsame_(char *, char *); doublereal anorm; logical rcequ; extern doublereal dlamch_(char *); extern /* Subroutine */ int xerbla_(char *, integer *), dpocon_( char *, integer *, doublereal *, integer *, doublereal *, doublereal *, doublereal *, integer *, integer *); extern doublereal dlansy_(char *, char *, integer *, doublereal *, integer *, doublereal *); extern integer ilaprec_(char *); integer ithresh, n_norms__; doublereal rthresh; /* -- LAPACK routine (version 3.2.1) -- */ /* -- Contributed by James Demmel, Deaglan Halligan, Yozo Hida and -- */ /* -- Jason Riedy of Univ. of California Berkeley. -- */ /* -- April 2009 -- */ /* -- LAPACK is a software package provided by Univ. of Tennessee, -- */ /* -- Univ. of California Berkeley and NAG Ltd. -- */ /* .. */ /* .. Scalar Arguments .. */ /* .. */ /* .. Array Arguments .. */ /* .. */ /* Purpose */ /* ======= */ /* DPORFSX improves the computed solution to a system of linear */ /* equations when the coefficient matrix is symmetric positive */ /* definite, and provides error bounds and backward error estimates */ /* for the solution. In addition to normwise error bound, the code */ /* provides maximum componentwise error bound if possible. See */ /* comments for ERR_BNDS_NORM and ERR_BNDS_COMP for details of the */ /* error bounds. */ /* The original system of linear equations may have been equilibrated */ /* before calling this routine, as described by arguments EQUED and S */ /* below. In this case, the solution and error bounds returned are */ /* for the original unequilibrated system. */ /* Arguments */ /* ========= */ /* Some optional parameters are bundled in the PARAMS array. These */ /* settings determine how refinement is performed, but often the */ /* defaults are acceptable. 
If the defaults are acceptable, users */ /* can pass NPARAMS = 0 which prevents the source code from accessing */ /* the PARAMS argument. */ /* UPLO (input) CHARACTER*1 */ /* = 'U': Upper triangle of A is stored; */ /* = 'L': Lower triangle of A is stored. */ /* EQUED (input) CHARACTER*1 */ /* Specifies the form of equilibration that was done to A */ /* before calling this routine. This is needed to compute */ /* the solution and error bounds correctly. */ /* = 'N': No equilibration */ /* = 'Y': Both row and column equilibration, i.e., A has been */ /* replaced by diag(S) * A * diag(S). */ /* The right hand side B has been changed accordingly. */ /* N (input) INTEGER */ /* The order of the matrix A. N >= 0. */ /* NRHS (input) INTEGER */ /* The number of right hand sides, i.e., the number of columns */ /* of the matrices B and X. NRHS >= 0. */ /* A (input) DOUBLE PRECISION array, dimension (LDA,N) */ /* The symmetric matrix A. If UPLO = 'U', the leading N-by-N */ /* upper triangular part of A contains the upper triangular part */ /* of the matrix A, and the strictly lower triangular part of A */ /* is not referenced. If UPLO = 'L', the leading N-by-N lower */ /* triangular part of A contains the lower triangular part of */ /* the matrix A, and the strictly upper triangular part of A is */ /* not referenced. */ /* LDA (input) INTEGER */ /* The leading dimension of the array A. LDA >= max(1,N). */ /* AF (input) DOUBLE PRECISION array, dimension (LDAF,N) */ /* The triangular factor U or L from the Cholesky factorization */ /* A = U**T*U or A = L*L**T, as computed by DPOTRF. */ /* LDAF (input) INTEGER */ /* The leading dimension of the array AF. LDAF >= max(1,N). */ /* S (input or output) DOUBLE PRECISION array, dimension (N) */ /* The row scale factors for A. If EQUED = 'Y', A is multiplied on */ /* the left and right by diag(S). S is an input argument if FACT = */ /* 'F'; otherwise, S is an output argument. 
If FACT = 'F' and EQUED */ /* = 'Y', each element of S must be positive. If S is output, each */ /* element of S is a power of the radix. If S is input, each element */ /* of S should be a power of the radix to ensure a reliable solution */ /* and error estimates. Scaling by powers of the radix does not cause */ /* rounding errors unless the result underflows or overflows. */ /* Rounding errors during scaling lead to refining with a matrix that */ /* is not equivalent to the input matrix, producing error estimates */ /* that may not be reliable. */ /* B (input) DOUBLE PRECISION array, dimension (LDB,NRHS) */ /* The right hand side matrix B. */ /* LDB (input) INTEGER */ /* The leading dimension of the array B. LDB >= max(1,N). */ /* X (input/output) DOUBLE PRECISION array, dimension (LDX,NRHS) */ /* On entry, the solution matrix X, as computed by DGETRS. */ /* On exit, the improved solution matrix X. */ /* LDX (input) INTEGER */ /* The leading dimension of the array X. LDX >= max(1,N). */ /* RCOND (output) DOUBLE PRECISION */ /* Reciprocal scaled condition number. This is an estimate of the */ /* reciprocal Skeel condition number of the matrix A after */ /* equilibration (if done). If this is less than the machine */ /* precision (in particular, if it is zero), the matrix is singular */ /* to working precision. Note that the error may still be small even */ /* if this number is very small and the matrix appears ill- */ /* conditioned. */ /* BERR (output) DOUBLE PRECISION array, dimension (NRHS) */ /* Componentwise relative backward error. This is the */ /* componentwise relative backward error of each solution vector X(j) */ /* (i.e., the smallest relative change in any element of A or B that */ /* makes X(j) an exact solution). */ /* N_ERR_BNDS (input) INTEGER */ /* Number of error bounds to return for each right hand side */ /* and each type (normwise or componentwise). See ERR_BNDS_NORM and */ /* ERR_BNDS_COMP below. 
*/ /* ERR_BNDS_NORM (output) DOUBLE PRECISION array, dimension (NRHS, N_ERR_BNDS) */ /* For each right-hand side, this array contains information about */ /* various error bounds and condition numbers corresponding to the */ /* normwise relative error, which is defined as follows: */ /* Normwise relative error in the ith solution vector: */ /* max_j (abs(XTRUE(j,i) - X(j,i))) */ /* ------------------------------ */ /* max_j abs(X(j,i)) */ /* The array is indexed by the type of error information as described */ /* below. There currently are up to three pieces of information */ /* returned. */ /* The first index in ERR_BNDS_NORM(i,:) corresponds to the ith */ /* right-hand side. */ /* The second index in ERR_BNDS_NORM(:,err) contains the following */ /* three fields: */ /* err = 1 "Trust/don't trust" boolean. Trust the answer if the */ /* reciprocal condition number is less than the threshold */ /* sqrt(n) * dlamch('Epsilon'). */ /* err = 2 "Guaranteed" error bound: The estimated forward error, */ /* almost certainly within a factor of 10 of the true error */ /* so long as the next entry is greater than the threshold */ /* sqrt(n) * dlamch('Epsilon'). This error bound should only */ /* be trusted if the previous boolean is true. */ /* err = 3 Reciprocal condition number: Estimated normwise */ /* reciprocal condition number. Compared with the threshold */ /* sqrt(n) * dlamch('Epsilon') to determine if the error */ /* estimate is "guaranteed". These reciprocal condition */ /* numbers are 1 / (norm(Z^{-1},inf) * norm(Z,inf)) for some */ /* appropriately scaled matrix Z. */ /* Let Z = S*A, where S scales each row by a power of the */ /* radix so all absolute row sums of Z are approximately 1. */ /* See Lapack Working Note 165 for further details and extra */ /* cautions. 
*/ /* ERR_BNDS_COMP (output) DOUBLE PRECISION array, dimension (NRHS, N_ERR_BNDS) */ /* For each right-hand side, this array contains information about */ /* various error bounds and condition numbers corresponding to the */ /* componentwise relative error, which is defined as follows: */ /* Componentwise relative error in the ith solution vector: */ /* abs(XTRUE(j,i) - X(j,i)) */ /* max_j ---------------------- */ /* abs(X(j,i)) */ /* The array is indexed by the right-hand side i (on which the */ /* componentwise relative error depends), and the type of error */ /* information as described below. There currently are up to three */ /* pieces of information returned for each right-hand side. If */ /* componentwise accuracy is not requested (PARAMS(3) = 0.0), then */ /* ERR_BNDS_COMP is not accessed. If N_ERR_BNDS .LT. 3, then at most */ /* the first (:,N_ERR_BNDS) entries are returned. */ /* The first index in ERR_BNDS_COMP(i,:) corresponds to the ith */ /* right-hand side. */ /* The second index in ERR_BNDS_COMP(:,err) contains the following */ /* three fields: */ /* err = 1 "Trust/don't trust" boolean. Trust the answer if the */ /* reciprocal condition number is less than the threshold */ /* sqrt(n) * dlamch('Epsilon'). */ /* err = 2 "Guaranteed" error bound: The estimated forward error, */ /* almost certainly within a factor of 10 of the true error */ /* so long as the next entry is greater than the threshold */ /* sqrt(n) * dlamch('Epsilon'). This error bound should only */ /* be trusted if the previous boolean is true. */ /* err = 3 Reciprocal condition number: Estimated componentwise */ /* reciprocal condition number. Compared with the threshold */ /* sqrt(n) * dlamch('Epsilon') to determine if the error */ /* estimate is "guaranteed". These reciprocal condition */ /* numbers are 1 / (norm(Z^{-1},inf) * norm(Z,inf)) for some */ /* appropriately scaled matrix Z. 
*/ /* Let Z = S*(A*diag(x)), where x is the solution for the */ /* current right-hand side and S scales each row of */ /* A*diag(x) by a power of the radix so all absolute row */ /* sums of Z are approximately 1. */ /* See Lapack Working Note 165 for further details and extra */ /* cautions. */ /* NPARAMS (input) INTEGER */ /* Specifies the number of parameters set in PARAMS. If .LE. 0, the */ /* PARAMS array is never referenced and default values are used. */ /* PARAMS (input / output) DOUBLE PRECISION array, dimension NPARAMS */ /* Specifies algorithm parameters. If an entry is .LT. 0.0, then */ /* that entry will be filled with default value used for that */ /* parameter. Only positions up to NPARAMS are accessed; defaults */ /* are used for higher-numbered parameters. */ /* PARAMS(LA_LINRX_ITREF_I = 1) : Whether to perform iterative */ /* refinement or not. */ /* Default: 1.0D+0 */ /* = 0.0 : No refinement is performed, and no error bounds are */ /* computed. */ /* = 1.0 : Use the double-precision refinement algorithm, */ /* possibly with doubled-single computations if the */ /* compilation environment does not support DOUBLE */ /* PRECISION. */ /* (other values are reserved for future use) */ /* PARAMS(LA_LINRX_ITHRESH_I = 2) : Maximum number of residual */ /* computations allowed for refinement. */ /* Default: 10 */ /* Aggressive: Set to 100 to permit convergence using approximate */ /* factorizations or factorizations other than LU. If */ /* the factorization uses a technique other than */ /* Gaussian elimination, the guarantees in */ /* err_bnds_norm and err_bnds_comp may no longer be */ /* trustworthy. */ /* PARAMS(LA_LINRX_CWISE_I = 3) : Flag determining if the code */ /* will attempt to find a solution with small componentwise */ /* relative error in the double-precision algorithm. Positive */ /* is true, 0.0 is false. 
*/ /* Default: 1.0 (attempt componentwise convergence) */ /* WORK (workspace) DOUBLE PRECISION array, dimension (4*N) */ /* IWORK (workspace) INTEGER array, dimension (N) */ /* INFO (output) INTEGER */ /* = 0: Successful exit. The solution to every right-hand side is */ /* guaranteed. */ /* < 0: If INFO = -i, the i-th argument had an illegal value */ /* > 0 and <= N: U(INFO,INFO) is exactly zero. The factorization */ /* has been completed, but the factor U is exactly singular, so */ /* the solution and error bounds could not be computed. RCOND = 0 */ /* is returned. */ /* = N+J: The solution corresponding to the Jth right-hand side is */ /* not guaranteed. The solutions corresponding to other right- */ /* hand sides K with K > J may not be guaranteed as well, but */ /* only the first such right-hand side is reported. If a small */ /* componentwise error is not requested (PARAMS(3) = 0.0) then */ /* the Jth right-hand side is the first with a normwise error */ /* bound that is not guaranteed (the smallest J such */ /* that ERR_BNDS_NORM(J,1) = 0.0). By default (PARAMS(3) = 1.0) */ /* the Jth right-hand side is the first with either a normwise or */ /* componentwise error bound that is not guaranteed (the smallest */ /* J such that either ERR_BNDS_NORM(J,1) = 0.0 or */ /* ERR_BNDS_COMP(J,1) = 0.0). See the definition of */ /* ERR_BNDS_NORM(:,1) and ERR_BNDS_COMP(:,1). To get information */ /* about all of the right-hand sides check ERR_BNDS_NORM or */ /* ERR_BNDS_COMP. */ /* ================================================================== */ /* .. Parameters .. */ /* .. */ /* .. Local Scalars .. */ /* .. */ /* .. External Subroutines .. */ /* .. */ /* .. Intrinsic Functions .. */ /* .. */ /* .. External Functions .. */ /* .. */ /* .. Executable Statements .. */ /* Check the input parameters. 
*/ /* Parameter adjustments */ err_bnds_comp_dim1 = *nrhs; err_bnds_comp_offset = 1 + err_bnds_comp_dim1; err_bnds_comp__ -= err_bnds_comp_offset; err_bnds_norm_dim1 = *nrhs; err_bnds_norm_offset = 1 + err_bnds_norm_dim1; err_bnds_norm__ -= err_bnds_norm_offset; a_dim1 = *lda; a_offset = 1 + a_dim1; a -= a_offset; af_dim1 = *ldaf; af_offset = 1 + af_dim1; af -= af_offset; --s; b_dim1 = *ldb; b_offset = 1 + b_dim1; b -= b_offset; x_dim1 = *ldx; x_offset = 1 + x_dim1; x -= x_offset; --berr; --params; --work; --iwork; /* Function Body */ *info = 0; ref_type__ = 1; if (*nparams >= 1) { if (params[1] < 0.) { params[1] = 1.; } else { ref_type__ = (integer) params[1]; } } /* Set default parameters. */ illrcond_thresh__ = (doublereal) (*n) * dlamch_("Epsilon"); ithresh = 10; rthresh = .5; unstable_thresh__ = .25; ignore_cwise__ = FALSE_; if (*nparams >= 2) { if (params[2] < 0.) { params[2] = (doublereal) ithresh; } else { ithresh = (integer) params[2]; } } if (*nparams >= 3) { if (params[3] < 0.) { if (ignore_cwise__) { params[3] = 0.; } else { params[3] = 1.; } } else { ignore_cwise__ = params[3] == 0.; } } if (ref_type__ == 0 || *n_err_bnds__ == 0) { n_norms__ = 0; } else if (ignore_cwise__) { n_norms__ = 1; } else { n_norms__ = 2; } rcequ = lsame_(equed, "Y"); /* Test input parameters. */ if (! lsame_(uplo, "U") && ! lsame_(uplo, "L")) { *info = -1; } else if (! rcequ && ! lsame_(equed, "N")) { *info = -2; } else if (*n < 0) { *info = -3; } else if (*nrhs < 0) { *info = -4; } else if (*lda < max(1,*n)) { *info = -6; } else if (*ldaf < max(1,*n)) { *info = -8; } else if (*ldb < max(1,*n)) { *info = -11; } else if (*ldx < max(1,*n)) { *info = -13; } if (*info != 0) { i__1 = -(*info); xerbla_("DPORFSX", &i__1); return 0; } /* Quick return if possible. 
*/ if (*n == 0 || *nrhs == 0) { *rcond = 1.; i__1 = *nrhs; for (j = 1; j <= i__1; ++j) { berr[j] = 0.; if (*n_err_bnds__ >= 1) { err_bnds_norm__[j + err_bnds_norm_dim1] = 1.; err_bnds_comp__[j + err_bnds_comp_dim1] = 1.; } else if (*n_err_bnds__ >= 2) { err_bnds_norm__[j + (err_bnds_norm_dim1 << 1)] = 0.; err_bnds_comp__[j + (err_bnds_comp_dim1 << 1)] = 0.; } else if (*n_err_bnds__ >= 3) { err_bnds_norm__[j + err_bnds_norm_dim1 * 3] = 1.; err_bnds_comp__[j + err_bnds_comp_dim1 * 3] = 1.; } } return 0; } /* Default to failure. */ *rcond = 0.; i__1 = *nrhs; for (j = 1; j <= i__1; ++j) { berr[j] = 1.; if (*n_err_bnds__ >= 1) { err_bnds_norm__[j + err_bnds_norm_dim1] = 1.; err_bnds_comp__[j + err_bnds_comp_dim1] = 1.; } else if (*n_err_bnds__ >= 2) { err_bnds_norm__[j + (err_bnds_norm_dim1 << 1)] = 1.; err_bnds_comp__[j + (err_bnds_comp_dim1 << 1)] = 1.; } else if (*n_err_bnds__ >= 3) { err_bnds_norm__[j + err_bnds_norm_dim1 * 3] = 0.; err_bnds_comp__[j + err_bnds_comp_dim1 * 3] = 0.; } } /* Compute the norm of A and the reciprocal of the condition */ /* number of A. */ *(unsigned char *)norm = 'I'; anorm = dlansy_(norm, uplo, n, &a[a_offset], lda, &work[1]); dpocon_(uplo, n, &af[af_offset], ldaf, &anorm, rcond, &work[1], &iwork[1], info); /* Perform refinement on each right-hand side */ if (ref_type__ != 0) { prec_type__ = ilaprec_("E"); dla_porfsx_extended__(&prec_type__, uplo, n, nrhs, &a[a_offset], lda, &af[af_offset], ldaf, &rcequ, &s[1], &b[b_offset], ldb, &x[ x_offset], ldx, &berr[1], &n_norms__, &err_bnds_norm__[ err_bnds_norm_offset], &err_bnds_comp__[err_bnds_comp_offset], &work[*n + 1], &work[1], &work[(*n << 1) + 1], &work[1], rcond, &ithresh, &rthresh, &unstable_thresh__, & ignore_cwise__, info, (ftnlen)1); } /* Computing MAX */ d__1 = 10., d__2 = sqrt((doublereal) (*n)); err_lbnd__ = max(d__1,d__2) * dlamch_("Epsilon"); if (*n_err_bnds__ >= 1 && n_norms__ >= 1) { /* Compute scaled normwise condition number cond(A*C). 
*/ if (rcequ) { rcond_tmp__ = dla_porcond__(uplo, n, &a[a_offset], lda, &af[ af_offset], ldaf, &c_n1, &s[1], info, &work[1], &iwork[1], (ftnlen)1); } else { rcond_tmp__ = dla_porcond__(uplo, n, &a[a_offset], lda, &af[ af_offset], ldaf, &c__0, &s[1], info, &work[1], &iwork[1], (ftnlen)1); } i__1 = *nrhs; for (j = 1; j <= i__1; ++j) { /* Cap the error at 1.0. */ if (*n_err_bnds__ >= 2 && err_bnds_norm__[j + (err_bnds_norm_dim1 << 1)] > 1.) { err_bnds_norm__[j + (err_bnds_norm_dim1 << 1)] = 1.; } /* Threshold the error (see LAWN). */ if (rcond_tmp__ < illrcond_thresh__) { err_bnds_norm__[j + (err_bnds_norm_dim1 << 1)] = 1.; err_bnds_norm__[j + err_bnds_norm_dim1] = 0.; if (*info <= *n) { *info = *n + j; } } else if (err_bnds_norm__[j + (err_bnds_norm_dim1 << 1)] < err_lbnd__) { err_bnds_norm__[j + (err_bnds_norm_dim1 << 1)] = err_lbnd__; err_bnds_norm__[j + err_bnds_norm_dim1] = 1.; } /* Save the condition number. */ if (*n_err_bnds__ >= 3) { err_bnds_norm__[j + err_bnds_norm_dim1 * 3] = rcond_tmp__; } } } if (*n_err_bnds__ >= 1 && n_norms__ >= 2) { /* Compute componentwise condition number cond(A*diag(Y(:,J))) for */ /* each right-hand side using the current solution as an estimate of */ /* the true solution. If the componentwise error estimate is too */ /* large, then the solution is a lousy estimate of truth and the */ /* estimated RCOND may be too optimistic. To avoid misleading users, */ /* the inverse condition number is set to 0.0 when the estimated */ /* cwise error is at least CWISE_WRONG. */ cwise_wrong__ = sqrt(dlamch_("Epsilon")); i__1 = *nrhs; for (j = 1; j <= i__1; ++j) { if (err_bnds_comp__[j + (err_bnds_comp_dim1 << 1)] < cwise_wrong__) { rcond_tmp__ = dla_porcond__(uplo, n, &a[a_offset], lda, &af[ af_offset], ldaf, &c__1, &x[j * x_dim1 + 1], info, & work[1], &iwork[1], (ftnlen)1); } else { rcond_tmp__ = 0.; } /* Cap the error at 1.0. */ if (*n_err_bnds__ >= 2 && err_bnds_comp__[j + (err_bnds_comp_dim1 << 1)] > 1.) 
{ err_bnds_comp__[j + (err_bnds_comp_dim1 << 1)] = 1.; } /* Threshold the error (see LAWN). */ if (rcond_tmp__ < illrcond_thresh__) { err_bnds_comp__[j + (err_bnds_comp_dim1 << 1)] = 1.; err_bnds_comp__[j + err_bnds_comp_dim1] = 0.; if (params[3] == 1. && *info < *n + j) { *info = *n + j; } } else if (err_bnds_comp__[j + (err_bnds_comp_dim1 << 1)] < err_lbnd__) { err_bnds_comp__[j + (err_bnds_comp_dim1 << 1)] = err_lbnd__; err_bnds_comp__[j + err_bnds_comp_dim1] = 1.; } /* Save the condition number. */ if (*n_err_bnds__ >= 3) { err_bnds_comp__[j + err_bnds_comp_dim1 * 3] = rcond_tmp__; } } } return 0; /* End of DPORFSX */ } /* dporfsx_ */
Glory of Fellowland Glory of Fellowland (GoF) is a browser-based massively multiplayer online game (MMOG) in the strategy and fantasy genre, produced by Feeltainment Ltd. and released in 2008. It is set in the Middle Ages and is available in eight languages. There are five races: human, dwarf, orc, elf and hobbit. Gameplay It is similar to games like Tribal Wars by InnoGames GmbH and Land of Destiny. The objective of the game is to build a powerful empire out of small villages, either alone or with the help of allies. The game requires balancing peace and war. The player can construct new buildings, gather resources and develop sciences. As the game progresses, new skills, like spying or trading, become available. The game continues even while the player is logged out. At the start, the player chooses a location and a race according to his preferred style of play, which can be developed by constructing different buildings, completing units and advancing science. Later on, players can train units for defense and offense, and conjure magic for support. As the village expands toward becoming an empire, the player may send messages to other players, conclude peace agreements and trade. Once fully established, the player can spy on and capture other villages. Requirements The game requires a monitor at least 1024 pixels wide and Internet Explorer 7 for it to work properly. Races Each race has its own specialization. Humans suit new players best, as they allow an all-around style of play. Hobbits are usually for defensive players. Orcs have "Great Wisdom" and suit offensive players. Dwarves have "dwarf wisdom", and elves are best for magic and spells. Alliances and Guilds Because the game allows peace agreements between players, alliances and guilds are possible, and they help the player's village with quests.
Resources and Commerce Resources are produced and spent on buildings, units, spells and battles. The resources are iron, wood, stone, culture, science, spirit, gold, honor, diamonds, amulets and food. Iron is needed for military units and is found in mountain terrain. Wood is used for construction and is abundant in forests. Stone is used for larger buildings such as towers and castles. Culture is needed for special units, science for researching new discoveries and advancing, and spirit for spells. Gold is the most commonly used resource, especially in trading. Honor can be gained or lost during battles. Diamonds are used to increase culture production, while amulets increase spirit production. Food is produced by grain, rice and potato fields, so it is usually recommended that new players place their village on the plains. The village's population is determined by its buildings and units, and this population consumes food. The tax rate is used to balance culture production against food consumption. Resource production depends on the location of the village. References Category:Massively multiplayer online real-time strategy games Category:Browser-based multiplayer online games
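The economy loop described above (population from buildings and units, food upkeep, and a tax rate trading culture against food) can be sketched roughly as follows. This is purely illustrative: the formula and every number below are invented for the example, not Glory of Fellowland's actual rules.

```python
# Hypothetical sketch of the village economy: buildings and units add
# population, population consumes food, and a higher tax rate is assumed
# to raise consumption per head. All numbers are invented.

def food_balance(buildings, units, food_production, tax_rate):
    """Return net food per tick for a village.

    buildings/units: lists of per-item population costs.
    tax_rate: 0.0-1.0; assumed to scale consumption per head.
    """
    population = sum(buildings) + sum(units)
    consumption = population * (1.0 + tax_rate)  # assumed scaling
    return food_production - consumption

# A village on the plains (good grain fields) stays fed at moderate tax:
surplus = food_balance(buildings=[5, 3], units=[2, 2, 2],
                       food_production=20, tax_rate=0.2)
```

A positive return value means the village feeds itself; a negative one means the player must add fields or lower the tax.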
{ "pile_set_name": "Wikipedia (en)" }
Our scene has a huge problem with girls. One search of the word within our recent Facebook comments will show you all manner of vitriol aimed at young women, including but not limited to: “Calling all Austin Carlile fangirls that don't care about the rest of the band members” “Just because the majority of their fans are girls does not mean that they're a bad band. Don't ever judge a band off their fanbase. I love how everybody thinks that bands with mostly-male fans are “legendary” and make “fantastic music” but as soon as a female-appealing one pops up, they're suddenly shit.” “AP's hate comments on just about every band they post about is actually worst than the “fangirl bands” & “fangirls” they constantly bitch about.” “95% of their fanbase are fat, annoying, 15-year-old scene girls” “What is so wrong with being a 15-year-old girl?” has been my mental response to such venom even since I was, well, a 15-year-old girl. Now I say it with sass, narrowed eyes and a few swear words thrown in and can write off those comments for what they are: blatantly ignorant. But as a 15-year-old, it wasn’t so easy. Flashback to 2005: I was that reviled age and waiting in line, clutching tickets for my first My Chemical Romance show (which I had cried and screamed over at my birthday party the week prior). I heard the man in front of me going on about “stupid, little My Chemical Romance fangirls” wearing the band’s T-shirts and “ruining” the show. I looked down at my brand new MCR shirt and hoodie and turned my back, self-consciously zipping up. It was easier then to think, “Is there something wrong with me? Am I stupid? Maybe he’s right. 
I can’t act like a stupid fangirl.” What I hope you will take away from this is that you should not think that or be afraid of being a “fangirl,” because what I should have thought back then was, “How pathetic is it that a 20-something-year-old guy is picking on young fans who are showing their enthusiasm for a band they like?” So what is so wrong with being a fangirl? Why are people afraid of the passion of young girls? Fangirls are accused of the unspoken crime of being young, female and excited about the art they like—a “crime” people never seem to take the time to realize is very silly. Being young is awesome. Being a girl is awesome. Being passionate about something is awesome. What’s the problem? Here is where naysayers might interject, “They just like the band members for their looks” or “They’re blind followers” or “They’re creepy stalkers” or “They’re obsessive.” Actual instances of stalking (which are not okay under any circumstances) and blind following of things that are problematic aside, if someone likes a band in the way that makes sense to them and is not hurting you or anyone else, then I have to ask again, What’s the problem? And if you’re here to argue that musicians having fangirls discredits them, then I would ask you to think deeper about why you think that. What is implied when someone says “fangirls are ruining the band” is: “It's not okay to like things that girls like. Things that girls like are not credible.” AP contributor Annie Zaleski wrote eloquently and extensively for the AV Club about such overshadowing of the musical merits of teen idols. “Teenage girls, the major audience for teen idols, aren’t given enough credit for being savvy culture consumers,” Zaleski wrote. “The possibility that they could appreciate and want to hear music with substance (and not just blindly worship the cute guy or perky pop star) often isn’t even considered. 
Besides being insulting and sexist, such assumptions are also wildly incorrect.” Female interest in literally anything is criticized. People villainize women for believing in things that help them. People don't like feminism. People don't like women with wedding boards on Pinterest. People don't like women who want to have serious discussions about race, gender, sexuality and politics. People don't like women who enjoy dressing up. People don't like men who wear things coded feminine. People don't like girls who love Bring Me The Horizon a little “too much.” But what is too much? By some standards, wearing a band’s T-shirt to a show is excessive. By others’ standards, plastering one’s room with posters is. “Too much” can’t be defined universally. And being a 15-year-old? Yikes! Want to know what young people are facing? Being too socially aware for their hometowns where teachers shut them down in grammar class for introducing them to gender-neutral pronouns. The pain that simply comes with being a teenager. The uncertainty of their futures. The pressures of their parents. Being regarded as a marketing demographic rather than a human being. Potentially struggling with mental illness, perhaps with unsupportive parents who won’t get them help. Having some of their earliest (and thus most painful) encounters with racism, sexism and homophobia. And to top it off, the constant invalidation for loving and taking comfort in something positive that helps them escape that psychic maelstrom. So is it really a crime when a young girl wears her Motionless In White hoodie like armor, paints her face and plasters photos of Chris Motionless all over her walls? If the world doesn't understand her, she at least has the hope their music gives her to hold onto. 
Instead, that girl is told she’s “everything that’s wrong with music these days” because self-perceived rock ’n’ roll crusaders need to defend music from the evil powers that, you know, actually put their energy, time and money into (gasp) actually keeping the music world alive. And demonizing fangirls is not an issue that solely harms female fans. A male friend recently confided to me, “Man, I love My Chemical Romance, but I almost feel like I have to defend that as a 20-something man” because of the perception of their fanbase. Because we live in a society where we’ve taught men it’s not okay to like things that young girls do, where they have to explain or completely conceal their own passions. A fangirl’s devotion is the precise kind of fervor that can't be taught. It's the thing that puts them at the front row of shows now, and later in life, will put them anywhere else, doing anything they want to do. Being able to believe in something with unbridled love is so special and beautiful. Loving something so much it makes you cry and do things others think are “crazy” is something many people may never have, but these fangirls do. Why does that offend you? Think about it.
{ "pile_set_name": "OpenWebText2" }
1. Field of the Invention The invention is related to the field of hardening the bores of rifle barrels and in particular to forming nitrided and nitrocarburized surfaces in the bores of rifle barrels using a fluidized bed furnace. 2. Description of the Prior Art The hardening of the internal surfaces or bores of rifle barrels, gun barrels and cannons is well known in the art. These hardened surfaces reduce friction and wear of the bore increasing the accuracy and life of the rifles and gun barrels. The bores of the rifles may be hardened by heat treatment followed by a rapid quenching as taught by Somes in U.S. Pat. No. 2,541,114 or Polcha in U.S. Pat. No. 3,780,465. Alternatively, the bores of the rifles or guns may be hardened by nitriding as taught by Chenault et al in U.S. Pat. No. 2,596,981 and Osborn in U.S. Pat. No. 2,799,959. Chenault et al teach nitriding at a temperature of approximately 1000.degree. F. at a pressure of 100 atmospheres for approximately 15 hours while Osborn teaches nitriding at a temperature of 950.degree. F. to 975.degree. F. for 38 hours. Siemers et al, in U.S. Pat. No. 4,577,431, and Gstettner et al, in U.S. Pat. No. 4,747,225, disclose the application of a hard material over the internal surface of the rifle's bore. Siemers et al disclose coating the bore with a layer of refractory metal such as tantalum alloy by means of a vacuum plasma spray while Gstettner et al teach sintering of a thin heat resistant nickel based alloy on the surface of the bore. The use of a fluidized bed furnace for nitriding or nitrocarburizing various metals is taught by Ross in U.S. Pat. No. 4,461,656 and Staffin et al in U.S. Pat. No. 4,512,821. Ross teaches the treatment of ferrous metal components in a particulate medium fluidized with ammonia gas, a hydrocarbon gas, and nitrogen gas while Staffin et al teach the use of an atmosphere precursor, such as methanol or ethyl acetate in the fluidized bed furnace to produce the desired atmosphere. 
In their paper "Nitriding of Titanium with Ammonia" presented before the Thirty-fifth Annual Convention of the American Society of Metallurgy, held Oct. 17 through 23, 1953, and published in the Transactions of ASM, Volume 46, 1954, pp. 191 through 218, James L. Wyatt and Nicholas J. Grant presented a detailed process for nitriding titanium and titanium alloys by the decomposition of ammonia at elevated temperatures. The direct application of the methods taught by Ross and Wyatt et al to steel and aluminum alloy rifle barrels using a fluidized bed furnace failed to produce satisfactory nitrided or nitrocarburized surfaces within the bores of these rifle barrels. The invention is a solution to this problem.
{ "pile_set_name": "USPTO Backgrounds" }
Results of allogeneic bone marrow transplantation with unrelated or mismatched donors. As most patients are not fortunate enough to have an HLA-matched sibling to use as a bone marrow donor, attention has focused on the use of either HLA-matched but unrelated donors or HLA-mismatched family members. With the maturation of the field of histocompatibility testing, it is now possible to quantitate with relative precision the degree of disparity between patient and donor. In general, it appears that with respect to histocompatibility differences between donors other than HLA-matched siblings, there is an increased incidence of acute graft-versus-host disease, with the risk correlated with the degree of histoincompatibility. However, the overall disease-free survival is not always adversely affected, as a graft-versus-leukemia effect may counterbalance the increased death rate from graft-versus-host disease. To find donors for most patients, efforts are under way to recruit a large number of unrelated volunteers into the National Marrow Donor Program.
{ "pile_set_name": "PubMed Abstracts" }
import Postbox

/// Contents type for entries in Postbox's recent-hashtags ordered item list.
/// The class carries no stored fields of its own, so decoding and encoding
/// are intentionally no-ops.
public final class RecentHashtagItem: OrderedItemListEntryContents {
    public init() {
    }

    public init(decoder: PostboxDecoder) {
    }

    public func encode(_ encoder: PostboxEncoder) {
    }
}
{ "pile_set_name": "Github" }
Lady GaGa Talks ‘Delicious’ Breasts & ‘Mental Orgasm’ In the same interview Lady G also spoke about having a larger chest when she was studying. “At that time my breasts were much bigger, and firm and delicious. I was 15 to 20 pounds heavier than I am now. “I would wear shirts that were low-cut and the teachers would tell me that I couldn't wear them, and I'd point to another girl who was wearing the same thing, and they would say, 'Well, it looks different on her.' It wasn't fair.”
{ "pile_set_name": "Pile-CC" }
South Sudan has agreed to the deployment of a 4,000-strong regional protection force approved by the U.N. Security Council after first rejecting the peacekeepers as a violation of its sovereignty. Sunday's announcement came after the Security Council met with South Sudan President Salva Kiir in Juba, the capital, during a rare visit to the turbulent East African country. South Sudan President Salva Kiir walks with Samantha Power, the U.S. ambassador to the United Nations, and other members of the U.N. Security Council in Juba, South Sudan, on Sunday. Justin Lynch / AP The threat of an arms embargo loomed over the meeting, as the council has said it would pursue one if South Sudan didn't accept the additional peacekeepers. The U.N. already has 12,000 peacekeepers in the country, and South Sudan has been wary of giving it more authority. "The Security Council came to achieve what we have secured," U.S. Ambassador Samantha Power said. Protecting civilians has become an even more critical issue after fighting erupted in Juba in July, killing hundreds of people and sparking fears of a return to civil war in the already devastated country. Both civilians and foreigners, including aid workers, were targeted in the July chaos by South Sudanese soldiers who raped women and girls, conducted mock executions and forced people at one hotel compound to watch as a local journalist was shot dead. Challenges already lie ahead for the 4,000 additional peacekeepers, who are tasked with protecting civilians in Juba and perhaps beyond. U.N. officials say the new force needs more than two months to deploy. 
Senegal's ambassador to the U.N., Fode Seck, said there has been difficulty getting enough troops pledged by regional countries that will make up the force. The council meets with African Union officials in Ethiopia on Monday. Both government and rebel forces have been accused of widespread abuses in the civil war that began in December 2013 between supporters of Kiir, an ethnic Dinka, and former Vice President Riek Machar, a Nuer. Tens of thousands of people have died. Ethnic tensions remain. Kiir told council diplomats that the peacekeeping mission's neutrality has been compromised because its camps that shelter tens of thousands of South Sudanese mostly are protecting supporters of the opposition, the U.N. official and council diplomat said.
{ "pile_set_name": "OpenWebText2" }
Efficacy of a triple therapy with rabeprazole, amoxicillin, and faropenem as second-line treatment after failure of initial Helicobacter pylori eradication therapy. Triple therapy consisting of lansoprazole, amoxicillin, and clarithromycin (LAC regimen) is widely used to eradicate Helicobacter pylori in Japan. However, the need for appropriate treatment after failure of initial therapy to eradicate H. pylori has been increasing. We therefore assessed the efficacy of a combination of rabeprazole, amoxicillin, and faropenem for second-line eradication therapy. The subjects were 116 patients positive for H. pylori infection. Patients initially received lansoprazole 60 mg/day, amoxicillin 1500 mg/day and clarithromycin 400 mg/day in two divided doses for 7 days. Patients in whom eradication treatment failed were given rabeprazole 20 mg/day and amoxicillin 1500 mg/day in two divided doses, and faropenem 600 mg/day in three divided doses (RAF regimen) for 7 consecutive days. H. pylori status was assessed by the 13C-urea breath test combined with rapid urease test or H. pylori culture method 8 weeks after completion of therapy. Susceptibility to clarithromycin was determined by the agar dilution method, and genetic polymorphism of CYP2C19 was analyzed by polymerase chain reaction-restriction fragment length polymorphism. The initial H. pylori eradication rate with the LAC regimen was 76.4% (84/110). Assessment of the CYP2C19 genotypes of the patients in whom eradication therapy failed revealed that homozygous extensive metabolizers accounted for 70.0% (16/23) and heterozygous extensive metabolizers for 30.0% (7/23), with no poor metabolizers. The acquired resistance rate for clarithromycin was 52.0% (12/23). The success rate of re-eradication with the RAF regimen was 91.3% (21/23) with no serious adverse effects. Triple therapy comprising rabeprazole, amoxicillin, and faropenem is effective for second-line eradication treatment of H. 
pylori infection, regardless of the genetic polymorphism of CYP2C19 or the presence of resistance to clarithromycin.
{ "pile_set_name": "PubMed Abstracts" }
Wrestling at the 1998 Asian Games – Men's freestyle 76 kg The men's freestyle 76 kg wrestling competition at the 1998 Asian Games in Bangkok was held on 16 and 17 December at Thammasat Gymnasium 1. The gold and silver medalists were determined by the final match of the main single-elimination bracket, while the losers advanced to the repechage, whose matches determined the bronze medalist for the event. Schedule All times are Indochina Time (UTC+07:00) Results Round 1 Round 2 Round 3 Round 4 Round 5 Finals Final standing
{ "pile_set_name": "Wikipedia (en)" }
Ek Rishta Sajhedari Ka November 29th 2016 Video Update Watch the November 29th episode of the Ek Rishta Sajhedari Ka serial online. This is the video update of Ek Rishta Sajhedari Ka for Tuesday, November 29, 2016. Watch the Ek Rishta Sajhedari Ka November 29 video update online. All links, videos and mirrors are not part of Apni TV. These are user-contributed links that are found on the internet and search engines. We do not claim responsibility for the reliability, validity, legality, or safety of these external weblinks. Visit them at your sole discretion. Some may contain harmful material, or pop-ups beyond user control.
{ "pile_set_name": "Pile-CC" }
[ { "type":"banner_only", "url":"http://www.co-meeting.com/", "banner_url":"http://pic.crowy.net/ad/images/co-meeting-banner20120411.png", "height": 100 } ]
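A consumer of this configuration might load and sanity-check it like so. This is a hypothetical sketch: only the field names ("type", "url", "banner_url", "height") come from the file above; the validation rules and function name are assumptions.

```python
import json

# Required keys per banner entry, taken from the config snippet above.
REQUIRED = {"type", "url", "banner_url", "height"}

def load_banners(raw):
    """Parse the banner list and reject structurally broken entries."""
    banners = json.loads(raw)
    for b in banners:
        missing = REQUIRED - b.keys()
        if missing:
            raise ValueError(f"banner missing fields: {sorted(missing)}")
        if b["type"] == "banner_only" and b["height"] <= 0:
            raise ValueError("banner_only entries need a positive height")
    return banners

raw = ('[{"type":"banner_only","url":"http://www.co-meeting.com/",'
       '"banner_url":"http://pic.crowy.net/ad/images/co-meeting-banner20120411.png",'
       '"height": 100}]')
banners = load_banners(raw)
```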
{ "pile_set_name": "Github" }
The Streetly Academy and Future First Vicky O’Connor, careers lead at The Streetly Academy, talks about their partnership with Future First. I was brought in to The Streetly Academy to lead on the careers programme within the school; the head and SLT are very passionate about raising the aspirations of the pupils and wanted to improve their current careers programme by having someone who was dedicated to this full time. Before I came in to the school the provision was quite ad-hoc and minimal, whereas now the careers programme is implemented across all year groups and events and activities are scheduled frequently across the year. I began by creating a questionnaire to find out what the goals and aspirations were of our pupils, we then used the answers to map out what kind of careers provision we would need. We cater to the aspirations of the students through their own responses and needs and we track this across the school. We’ve seen that goals and aspirations are a lot higher in year 7 and then dip in year 9 and 10. Based on the way that this programme is catered to the goals of the students, we then address these ‘dips’ in aspiration/goal setting through more specific intervention and activities. I want to make sure that as well as thinking about their grades and how important they are, that the pupils get a chance to see how these are important within the wider context of jobs and careers. One of the main reasons that I chose to work with Future First is that alumni can provide direct relevance to the students. With an alumni speaker the pupils are given a link to what they’re saying as they know this person had been sat in their seat, in their position. Alumni provide more ways for pupils to believe in themselves. Using the alumni portal has been a much easier way of engaging and sustaining our alumni network; in previous roles I’d worked with alumni before, mostly using spreadsheets and my own database of alumni information. 
Future First have made it easier to contact alumni, we don’t have to manually input any data, it’s easier and safer to use when we engage our alumni community. We have a lot of events planned in the coming year where we know our alumni will be brilliant role models for our students; for our year 7s we’ll be running a ‘The Apprentice’ style event where we want our alumni to act as judges. We’ll be having Future First in to lead an event called ‘What is The World of Work Like’ for our year 8s which will go through key skills and information about choosing and entering into a career, as well as Year 9 option days where we’ll be having our alumni in to speak about each subject and its relevance to their jobs, so that we can link the curriculum to potential career paths. As well as these events we’ve got CV workshops, mock interviews, and careers speaker sessions where alumni will get a chance to talk about their careers for students unsure of what their paths might be. Working with Future First so far we’ve really appreciated how easy it’s going to be to keep track of our students as they go on to get jobs or go into further study after finishing their exams. We’ll be able to keep track of what our students go on to do. We want to build a community from this, so that we can keep in touch and keep that connection with students who are at The Streetly Academy now and in the future.
{ "pile_set_name": "Pile-CC" }
Flagellin in combination with curli fimbriae elicits an immune response in the gastrointestinal epithelial cell line HT-29. Flagellin is the major cytokine-releasing factor when Salmonella enterica serovar Typhimurium (S. Typhimurium) infects intestinal epithelial cells. In this work it is shown that curli, an adhesive proteinaceous surface component of Enterobacteriaceae involved in biofilm formation of S. Typhimurium and Escherichia coli strains can bind flagellin and thus elicit an immune response by the intestinal epithelial cell line HT-29.
{ "pile_set_name": "PubMed Abstracts" }
THERE IS A VANDAL IN the Tasmania brothel town of Launceston who keeps defacing signs bearing the city’s name – and police want to talk to him. Though dubbed the “Launceston Bandit” by The Daily Examiner, the identity of the pervert is unknown. However, footprints found near the scene of one of the crimes indicate that it’s a man, or a big-hoofed female. Speaking to the media this morning, Tasmanian Police superintendent Gai Docking said they’re closing in on a suspect, but stopped short of naming names. “We know you’re out there, Inceston Bandit,” she said, looking down the barrel of a Southern Cross Seven camera. “What you’re doing isn’t funny. Not only is it frowned upon to have sexual relations or ejaculate on a blood relative in Launceston, it’s actually illegal,” “You will be forced to repair the signs at your own cost. Also, we know it’s you who keeps giving Ricky Ponting’s greyhounds chocolate. Please don’t do that, you’re making them sick.” However, a recreational drug user from Invermay has hit back at police, saying that a little bit of graffiti never hurt anybody. Dennis Coin spoke to The Advocate today after spending the morning throwing rocks at passing buses, a popular pastime in the Tamar capital. “I think it’s piss funny,” said Coin. “There’s not much to do here. When they stopped the booze bus from the Country Club Casino because too many people were throwing rocks at it, that really put a wet blanket on the fun fire. You can still smoke on public transport here,” “Have you ever had a bad pinga on a Tuesday night? I have.” Do you know who the Inceston Bandit is? The Betoota Advocate encourages whistleblowers who aren’t dobbing on a mate, and others with access to information they believe should be revealed for the public good, to contact us. If you want to dob on a mate, then wake up to yourself, cunt.
{ "pile_set_name": "Pile-CC" }
What's this growing team working on? Welcome to our monthly sneak peek at the cogs that keep Iron Harvest moving forward. This month we asked our team to report in and tease what they’ve been working on as we plough on towards Beta. News New Community Manager The Iron Harvest team is growing; welcome The Iron Doctor! He’s British, experienced with indie game development, recently received an actual doctorate in the sciences and has a huge passion for gaming, sci-fi, fantasy and history. He’ll be keeping the community growing and be one of your primary contact points. Ask him your questions, show him cool fan art and put him to good use. If you backed Iron Harvest on Kickstarter or pre-ordered it at www.iron-harvest.com, be sure to join us in our Backer's Discord Channel. If you've claimed your copy of Iron Harvest, you can find the link here. Updated FAQs Thanks to you we’ve had a lot of feedback from Alpha 2 and some interesting questions have been asked about how we will move forward. Therefore, we’ve updated the FAQs on Kickstarter and the backer’s Discord channel. We hope this will continue to be the best way to get answers to your immediate questions. The Kickstarter FAQs can be found here. Info Multiplayer Servers Are Now Offline We have taken the multiplayer servers offline. A big thank you to everyone who played and contributed during the extended Alpha 2 multiplayer test period. While it’s a shame to be offline, this gives us a chance to process your feedback and data. We’ll be back as soon as we have more to share with you. Let’s collect the intel, and when it's ready, we’ll be seeing you again on the battlefield. Team Spotlight Jan (Game Director) I spent a few days in Munich with Julian. We visited Deep Silver and talked to a bunch of very capable people about marketing, servers, collector's editions, QA, localization and a thousand other topics. 
I also finished writing the Rusviet campaign and started working on the Saxony campaign. Julian (Executive Producer) I presented Iron Harvest at GDC and PAX East (and met some of you awesome people there). After that I worked on project plan adjustments and organized the voice recordings. We’re going to record the first voices later this month in Poland and early next month in Germany. Marilena (Junior Producer) I've been working on project optimization so that we can have more efficient workflows in the future. Together with the other producers of the project, I am making sure the administration of the project plan is as smooth and clean as possible. Not an easy task with such a huge game with so many different elements. Tessa (Junior Producer) I’m new to Iron Harvest and I have spent most of my time getting to know the project and everyone involved. I am getting to grips with an overview of the asset creation, because a lot of assets from many different sources come into the project and one has to keep track of everything ;-) Max (Lead Game Designer) I’ve been coordinating with the game & level design team in the creation of all new content. We have been making good progress with regard to the heroes, a few new units and of course new multiplayer and campaign maps for you! New WIP multiplayer map: Will you wade through the oil-spills of forgotten mechs or hunker down in the abandoned village? Dominik (Game Designer) The birthplace of our mechs. Can you guess from these scrawlings what they will become? Over the past month, I’ve worked mainly on our hero concept. This includes general questions as well as very specific ones. For example, how will veterancy work for heroes? How do they get unlocked and revived? In addition, each hero receives their own set of abilities and is effective against different types of enemies. 
Their abilities are especially interesting since they are intended to support the hero’s role, look and feel, while also enhancing overall gameplay. Elliott (Game Designer) I've been working on several features: As you may have noticed, we have an early version of a veterancy system implemented now, but there's still a lot planned. We will be fleshing it out further in the future. We've just implemented the system that will handle unit and notification voice lines in our studio's internal build. Now it will undergo testing to ensure we give players enough feedback while not overwhelming them with information. After listening to your thoughts, and plenty of discussion within the team, we have determined how we want to implement hero units and what their gameplay roles will be within each faction. Though this is still very early in development, you can expect more news about this in due course. We will be making strides to reduce the feeling of clunkiness when controlling units, particularly mechs. This should especially make slower units much less frustrating to play with. Philipp (Lead Programmer) You will be able to defend an area by garrisoning infantry in buildings. Let’s hope you chose one sturdy enough. While managing all the other programmers, I also worked on new gameplay features such as map buildings that can be occupied by squads of human soldiers. All the buildings on the map that you can destroy with your mechs in the latest Alpha 2 can be occupied in the game's next released version. That will open up more interesting strategies because right now map buildings mainly serve as navigation points and as line-of-sight blockers. Soon, every map building could host a deadly veteran gunner squad that rains down terror onto your precious machines. And to counter this, we expect more havoc and destruction on the battlefield as you try to get opposing squads out of their building. 
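As a rough illustration of the garrison mechanic Philipp describes: a squad shelters inside a building until attackers bring the structure down, at which point it is forced back into the open. This is a hypothetical sketch; the class names, capacity rule and all numbers are invented, not taken from Iron Harvest's code.

```python
# Toy model of occupying destructible map buildings with infantry squads.
# Every value below is made up for illustration.

class Building:
    def __init__(self, hitpoints, capacity):
        self.hitpoints = hitpoints
        self.capacity = capacity
        self.garrison = []

    def occupy(self, squad):
        """Move a squad inside if the building stands and has room."""
        if self.hitpoints > 0 and len(self.garrison) < self.capacity:
            self.garrison.append(squad)
            return True
        return False

    def take_damage(self, amount):
        """Apply structural damage; a collapsed building ejects its garrison."""
        self.hitpoints = max(0, self.hitpoints - amount)
        if self.hitpoints == 0:
            ejected, self.garrison = self.garrison, []
            return ejected  # squads forced back into the open
        return []

house = Building(hitpoints=100, capacity=2)
house.occupy("veteran gunner squad")
ejected = house.take_damage(120)  # a mech crashes through the wall
```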
Thomas (Technical Director) Looks trippy, but there is logic to this pathfinding map, or so Thomas tells us. I’ve started to investigate an alternative to our current navigation and spatial reasoning implementation. Right now, Iron Harvest is using visibility graphs for pathfinding. Their main advantage is that they are easy to implement, and they quickly provided us with something to work with during the prototyping and alphas phase of the game. But by now we have a growing list of features we would like to add that are not possible with the current implementation. One of our main grievances is that performance scales badly with increased map complexity. A different approach would be based on a triangulation of the pathable space that can be adapted locally when obstacles get destroyed or built. There are a lot of 3rd party solutions and academic literature to review, but it looks like we’re going to write from scratch something tailored to our specific needs. In the screenshot above you can see the first baby steps of something I hope to develop into a fully featured spatial reasoning library that will allow us to overcome all the current limitations and implement some new great features we have in mind. For example, if the detour to avoid an obstacle such as a fence or wall would be unacceptably long, infantry could vault over them or big mechs just crash through them. Lukas (Programmer) I have spent countless hours in profiling tools to locate performance problems. Performance optimization contains three main areas: CPU, GPU and memory. To reduce CPU usage, we optimized code and assets to reduce the amount of computation that is done with every frame and optimized the rendering to minimize draw calls. GPU utilization depends on the amount of geometry we are rendering and shader complexity. There's still plenty of work to do in this area but for now we have added some quality settings to reduce GPU usage. 
Memory-wise, we had to refactor a lot of code that is executed each frame to minimize memory allocations. This reduces the number of lag spikes caused by garbage collection. Furthermore, I upgraded the project to Unity 2019.1, which was released in a stable version just a few weeks ago.

Janek (Programmer)

I’ve been busy working on the dialogue system for Iron Harvest. There is now a way to properly organize and manage all our in-game and cutscene dialogues, as well as export them to be translated into different languages and used as a script for voice recordings. I also created a small, handy tool for fancy in-game camera tracking shots to be used for trailers and content for presentations.

Janek uses some iconic revelations to test his dialogue system.

Arne (Programmer)

I've been on vacation, but beforehand I did some research on how to split our one large Unity project into smaller projects that are easier to build and maintain. While doing that, I also discovered a way that could enable our community to build their own custom maps! So far this is only a theory, but stay tuned ;)

Chris (Programmer)

I am mainly working on extensions and improvements to our internal scripting logic, which is used by the level designers to work on the campaigns. The main goal is to create a flexible toolset that allows the designers to implement every aspect of the various missions. Currently, I take care of extensions that affect the animation and visualization of in-game sequences. Furthermore, based on the feedback of our level designers, I am implementing new mission goals, triggers and conditions and fixing bugs in the existing ones.

Patrick (Programmer)

Currently, I am mostly working on developing, supporting and improving the software tools our game designers and level designers use to make their work easier. The biggest part of this was my work on the editor used for scripting our single-player campaign.
The editor needs to be able to display many different operations, from spawning units to checking victory conditions, in an organized and extendable manner. When there is time, I am also fixing as many bugs as possible and working on parts of the code that need some improvement.

Lars (Programmer)

I recently started to work a bit more closely with the game and developed a first prototype for a benchmarking system. It will help us to evaluate different builds of Iron Harvest more easily in the future. This way we'll prevent performance problems from sneaking into our game when we're adding or changing features or content.

Thomas (Programmer)

I worked on a client-based player color system. The player's color will no longer be set in the map settings when starting a match. Instead, each player can define in the settings what color the player, the allies and the enemies should have on their system. If you want, you can even set the same color for multiple allies or enemies (for example, set the color of all enemies to red and the color of all allies to green). I also worked on playing unit shouts when certain events happen. For example, a unit should play a voice line when you give it an order or when it finishes conquering a resource structure. There are also some notification shouts, for example when you lose a unit or when a structure is attacked.

Thorsten (Quality Assurance)

After receiving bug reports from our backers (through the in-game tools or via our Discord), I tried to get to the bottom of why these things had happened. I was successful most of the time! I also automated Iron Harvest, so the game would play itself non-stop to see if it would survive a really long gaming session. As mentioned in a past DevBlog, it is technically possible to play over 200 matches back to back, in a single session! I was asked to take a screenshot of a “funny bug”.
But, out of principle, I think bugs are unfunny, so I will give you a screenshot of a mech army I can spawn at will.

Making sure that no stone is left unturned as we solve game issues sometimes requires filling up the map with an army of mechs. We have that power!

Stefan (Art Director)

Production of new units and characters is continuing at full speed. This gives me time to create new environments, like mountains and lake-land areas, for our level designers to use.

Stefan is hard at work creating new environments that blend the rural with the industrial.

Robert (3D Artist)

Inspired by the Masuren landscape in Poland, I created two asset kits including objects like boats, nets and tools for the farm and fishing village areas. The texture work was done with one 1k texture atlas consisting of tileable patterns combined with uniquely baked objects for each set.

These objects are being created so our peaceful Polania fishing village will feel alive and lived-in.

Michiel (3D Artist)

Detailed work, but sometimes looking too closely at the details just gets creepy.

During the past two weeks, I worked on creating the eye shader in Unity from scratch and getting it to a final state. I also made some progress on making block-out models for the Rusviet structures. I did some promotional art and researched cutscene development in Unity. I am going to test out a couple of these approaches this week.

Arfri (3D Artist)

The beginnings of a female farmer who will populate our maps.

I've been working mostly on character-related assets, for example an adult male, a female and a boy farmer, as well as statues and a cavalry horse. First, I block out the major forms, usually in multiple versions of whatever it is I’m working on. I then refine the model until it has a certain amount of detail (this part is called high-poly creation), and then I “retopologize” the model to reduce the poly count.

Daniel (Concept Artist)

I’ve done some paint-overs for new mechs and Rusviet buildings.
I’ve also been heavily involved in the design and development of the last heroes and characters. We’re down to less than a handful!

Our version of “Brunhilde”, Gunter von Duisburg’s legendary mech. It is old, it is slow, but it is also almost unstoppable.

Creature Factory

They are our 3D outsourcing company and are working on the much-anticipated new units and characters. They just finished all the Rusviet soldiers and are now mainly working on new cutscene characters, new hero units and the first Rusviet mechs!

Alex (Art Director and Lead Level Artist)

I’ve been working with Robert, one of our 3D artists, to plan and create assets that will be used in different levels and environments of the game. This requires a constant back and forth of feedback with our level artists while we perform the first visual pass for all new levels.

A moment of peace on the long road to the battlefront.

Dirk (Level Designer)

This month I created more first-draft concept maps and extended other, more familiar maps. I’ve designed a new scripting tool that defines events and triggers them at certain times. I have also prepared all the in-game dialogues for language localization.

Here are the hidden boundaries that will trigger in-game events.

Magnus (Level Designer)

I‘ve focused on the campaign of Iron Harvest. A lot has happened since we created those maps, so I did quite a lot of work updating them using the new tools and assets we now have. Apart from polishing them, I also started properly scripting many of the events and missions that you will encounter! This involved talking a lot with our programmers to make sure the scripting process works well for everyone and gives us the options we need.

Justin (Level and Game Designer)

I’ve been working on several maps, including the challenge map, some single- and multiplayer maps, and a small map that can be used as a playground and testing area.
Besides the whole level design process, we are working hard on our scripting tools so we are able to deliver an amazing gameplay experience. Furthermore, I’m managing external level designers and level artists. We are working closely with extremely talented people from all around the world, including some who are working directly with us on single- and multiplayer maps. We are looking forward to showing you more of it.

The iteration process behind the 'bootcamp' map: from basic layout to block-out to first visual pass. A pleasant environment where infantry and mechs alike can play, train and hone their skills before battle.

Daniel (Level Designer)

I’m currently living in Uruguay, but I’m working on the multiplayer maps for Iron Harvest. Our latest Iron Harvest map is meant to provide a base for solid competitive gameplay and deliver a canvas that will allow varied tactics to emerge. These environments are one of many examples of how we are creating new, unseen settings and locations for the world of 1920+. This map focuses on the dieselpunk theme of the land being taken over by a relentless oil industry, which has damaged the environment with oil spillages. On the map itself there’s a big oil-swamped river that provides high-mobility areas for mechs, while the village is more welcoming of siege-based and infantry-based fights. Overall, the map is meant to provide players with a balanced canvas where they can take advantage of the terrain by setting up their own strategies and machinations.

August (Level Artist)

I’m from Stockholm, Sweden, but lived in Germany for five years while working at Ubisoft. I’m currently working on my first Iron Harvest campaign level. The landscape features a castle above an old, picturesque village with surrounding farmlands. My current focus is mostly researching other similar levels and learning how they are made in Unity, using third-party tools such as Vegetation Studio Pro for nature and roads.
Vegetation Studio generates materials, foliage and even forests based on areas, big or small, defined by the level artist.

Sabrina (Design Intern)

Since I started my internship in March, I have done some research for the occupation of buildings and documented the various level design tools that we are using internally for Unity. Right now, I’m helping with the design and scripting of campaign missions. We are working quite closely with the programmers to improve our scripting tools, so we have more options for designing action sequences.

Thomas (Lead Animator)

My main concern is getting everything up and running for our cinematics production. I'm currently laying out the first couple of scenes from the Polania campaign. Once done, we'll be able to test new workflows, with motion capture being the big new one. We will implement this rough animation so that every other department (lighting, VFX, etc.) is able to follow through with their own preparations.

A suspenseful moment for our heroine in one of our WIP campaign cutscenes.

Vladislav (Animator)

Gorilla behavior inspired the movement of this mech concept.

I've been working on the run and walk animation concept for a new mech, working title “Rebel Leader”. It was a challenge to find a good compromise between the movements of a gorilla and a mech. I also added new life to “Wojtek“, our fans' favorite bear.

Theresa (Animator)

My focus has been on bringing the world of Iron Harvest even more to life with animations for various fauna and farm animals. As well as this, a new Rusviet unit arrived and – most exciting – a new mech. One hell of a scale, and with some serious strength behind it. We’ll see who stands a chance against it.

Valentin (VFX)

Increased realism: units will now have bullet-casing effects.

During the past few weeks I´ve worked on some new visual effects. One of them is an effect for bullet casings that is triggered by some mechs after they’ve fired.
Right now, I’m working on another effect that prevents parts of destroyed mechs from just “plopping” away. Instead, they will dissolve and burn down over time.

Thomas (Lead Technical Artist)

I’ve been setting up and testing our motion capture studio. It’s been a lot of fun and will add new life to our animations. This has also involved updating rigs and streamlining our motion capture pipeline for our Iron Harvest cutscenes.

Dennis (Technical Artist)

Hello world: testing the digital pen that will guide you during mission briefings.

I've mainly been working on our asset pipeline and improving the general speed of things, such as our animation export tools. I've also been involved in improving the rendering performance of the soldier units, so more units can be drawn on screen with less of a performance hit. Developing the map-stroke prototype was probably the task I had the most fun with. It allows our mission designers to record drawings and markings on the overview map in a natural fashion (using a digital pen) and have those played back in the same way during a mission briefing.

Yuhuang (Technical Artist)

I've mainly worked on some upcoming units and mechs, building rough block-outs and adding small adjustments to existing designs to make sure that their mechanical structure (joints) can achieve the pose and silhouette we want them to achieve.

Look Ahead

As you can see, a lot is going on as we gear up for the next update, to be released in early June. Our multiplayer servers will be back online once we are ready for new testing and have completed the transfer of our server infrastructure from Amazon Web Services to Google. We’re looking forward to sharing with you the beginnings of our campaigns, new maps and voice recordings. As always, we enjoy receiving your feedback, so join the discussion on any of our social media channels and tell us what you think about what you’ve seen here.

If you want to support us, you can pre-order Iron Harvest here.
You'll not only get the game at a discount, you'll also get access to Alpha and Beta builds and to our private Backers Discord Channel for exclusive behind-the-scenes material.

Never want to miss an Iron Harvest DevBlog or update? Like us on Facebook, follow us on Twitter and Reddit, or join our Mailing List.
I shoot mostly a CZ75B Single Action Only 9mm. I am not a very good shot. The creepy trigger doesn't help, but I think the gun is fine. At 15 yards, I can't do much better than saucer-sized groups. I do notably better with my Dan Wesson Pointman 7 in 45ACP (a much higher-end gun), but would like to see some kind of good self-help videos on how to best shoot a handgun for accuracy. I may need personal instruction, but for now, some Internet-type instruction would be useful (I hope) for smaller groups.

I realize you are asking for videos to temporarily "help" you out. The reality is, you need hands-on training with a competent trainer. You'd be amazed at how much better you will shoot after a session with a good trainer. Well worth the money. Just skip a few shooting sessions, save your money on ammo, and use it for future training. Easily the best money I have ever spent on firearms was spent on good training. So many people own multiple firearms and have never spent the money on good training. I guess I would rather be proficient with one firearm than be a crappy shooter with two firearms. YMMV

After spending the first 6 months or so shooting my 1911 and a Sig SP2022 (.45 and 9mm respectively) and not getting a whole lot better, I rented a .22 and spent some serious time at the range and focused on all of the basics: stance, grip, breathing, trigger finger. After 2 trips, my groups got much, much smaller!!! I was SO happy to see some improvement! Then, I switched to the 1911 and kept the basics in mind. Shot only 5 shots. All in the black bullseye or 10 ring. So, I HIGHLY recommend training on a small-caliber pistol to get the fundamentals down. I'll see if I can dig up some other threads on it, but I know of at least two in recent memory: one on breathing and another on slow, deliberate training. The one thing you don't want to do is to aimlessly (pun intended!) go shooting. This will only reinforce habits that will be hard to break later on.
I second the suggestion on a personal coach/trainer. And I'm sure if you're near someone, you can meet up with them at a range and they can give you some pointers! (HEY!!! Wait a minute! I'm a San Ramonian too!!!) I'd offer my help, but I'm a newbie too, and for all I know, my smaller groups could be a fluke! I've got a Ruger 22/45 in jail right now and pick it up on the 6th. I'll probably be shooting again after that. Where do you normally shoot? I like Livermore, but have been going to Reeds since it's been Arctic-cold out here lately.

I'd agree. Take a class, do snap-cap training, have a partner come with you and critique your stance, flinch, and other aspects from grip to posture (someone who knows what they are doing). But first and foremost, take a good handgun class.

Quote:
I realize you are asking for videos to temporarily "help" you out. The reality is, you need hands-on training with a competent trainer. You'd be amazed at how much better you will shoot after a session with a good trainer. Well worth the money. Just skip a few shooting sessions, save your money on ammo, and use it for future training. Easily the best money I have ever spent on firearms was spent on good training. So many people own multiple firearms and have never spent the money on good training. I guess I would rather be proficient with one firearm than be a crappy shooter with two firearms. YMMV

^^ This! I took a CCW class recently and needed range qualification for the FL CCW permit. I only shot 25 rounds, but the trainer was familiar with Glocks and gave me tips that improved my groupings immensely! I can only imagine how much my shooting would improve if I spent more time with a trainer.

One thing that is also very useful is to take pictures of your targets. Then analyze them. Use one of the shooting charts to tell you what you're doing wrong. Ex: shooting low and slightly left is usually flinching or anticipating the recoil.
Other tips: dry fire, and use snap caps when you're out on the range. Have a friend load the mag with one or two snap caps in it. Then, when you come to a snap cap, it will tell you whether or not you're flinching. (Kind of embarrassing; ask me how I know.)

Go to YouTube and do a search for "pistol shooting tips" or "pistol shooting techniques". The videos are good, but it's all about good live instruction and trigger time... Nothing trumps trigger time. Ask at the range if they can recommend an instructor for some one-on-one. Have a friend videotape you at the range so you can see how you actually look when you pull that trigger; a little video goes a long way... you'll probably see things you had no idea you were doing.

Buy a laser bore sighter (about $30-$40 at Walmart) so you can see where the bullet is actually going relative to where the sights tell you it's going. I have a Glock 17 that kept consistently shooting high and to the right at anything more than 15 feet... I went through hundreds of rounds; finally, just for kicks, I bought a bore sighter (Walmart, $30)... Lo and behold, with my Trijicon sights perfectly lined up and level at 20 feet away from my target, I was 1" high and right at the 2 o'clock from the dead center of the target. A couple of adjustments later and it was right where it should be. Most of the time (IMHO) it's an operator inconsistency in lining up the sights, or the sights themselves being a bit off (it doesn't take much). I would also recommend some Hi-Viz targets. They are stickers that go on your paper targets and turn from black to neon green where the bullet strikes them: a great tool for correcting follow-up shots. The rest is just trigger time. ~Best of luck!
Remove ALL ammo from the room you'll be dry firing in, double-check that the gun is unloaded, and pick a spot with a safe backdrop to dry fire at (I use my brick fireplace, which would stop a bullet if something went horribly wrong). Practice a consistent grip/stance, pull the trigger straight back, and keep your eye(s) on the front sight the entire time. The front sight shouldn't move at all when the hammer drops. You can balance a coin on the front sight or the top of the slide, and try to dry fire without letting the coin fall off when you pull the trigger.

At the range, bring some snap caps and have someone load a couple of them in a mag along with live ammo (so you don't know when you'll get a snap cap or a live round)... this is very useful in catching yourself flinching. As XDJYo mentioned, it can be pretty embarrassing when you flinch on a snap cap, and I've found that shame is a particularly good motivator. When my groups start to open up at the range, I stop, take a few minutes to rest, then dry fire 20-30 times before I load up another mag. This helps to eliminate flinching if you don't have a buddy to mix snap caps in with live rounds for you. If you dry fire just 5-10 minutes every night, you should see some improvement in your groups pretty quickly. Remember: keep your eyes on the front sight, keep a consistent grip/stance, pull the trigger straight back, and just "let" the gun go off. Resist the urge to peek at the target as/after you fire each shot... keep your eyes on that front sight as the gun levels out, even if you aren't firing multiple shots in succession.

Another piece of advice that helped me is to not worry so much about getting a perfect sight picture, but rather concentrate more on sight alignment and a straight trigger pull. Many people will wait and wait until they think the sights are perfectly lined up with the center of the bullseye, then snap off a round quickly before they lose that sight picture.
This leads to anticipating the recoil, flinching and sloppy trigger pulls, which will cause you to throw the shot much worse than you would have with solid sight alignment and a clean trigger pull, even if the POA is slightly off center when the trigger breaks. I'm not a crack shot by ANY stretch of the imagination, but all the above advice has really helped me tighten up my groups over the past few months. As others have mentioned, there's no substitute for good professional training, but there are some things you can do by yourself to improve your technique. Good luck, have fun, and always follow the 4 main safety rules at all times!

__________________
“That rifle on the wall of the labourer’s cottage or working class flat is the symbol of democracy. It is our job to see that it stays there.” -George Orwell

Nothing in particular to add except some advice from someone at the range on grip: "Ask 20 different shooters how to hold a pistol, and you'll get 40 different answers that are all the 'only right way.' The main thing is to hold it so that you are comfortable and hit the target consistently. It doesn't have to look pretty, it just has to work."

Besides that, thanks everyone for the great tips.

__________________
My AR is 7.62x39, so that if/when we get invaded, I can shoot their ammo back at them!

Quote: Originally Posted by Falstaff
Where is this ammo "black market" he speaks of? Do they have .223 in stock?

Solid fundamentals are essential. If you fail to master trigger control, sight alignment and sight picture, you will never be anything but a mediocre shooter. Get some professional, in-person assistance. Videos are great for shooters with experience, but you have to be somewhat up to speed when you view and attempt to put into practice the techniques you viewed. Videoing yourself can be of great benefit to improving performance on the range. Practice makes permanent. Only perfect practice makes perfect. Practice crap for 10 years and you will be a crap master.
Practice perfectly, and it pays off in the end.

Dry fire and snap caps helped me a lot. The first gun I ever shot was a .40 and I did OK, but being mostly self-taught, I definitely learned the wrong way. It took me a long time to correct my bad habits, and even today I have to slow waaaaay down and concentrate if I'm shooting for accuracy. Training obviously is the best way to improve your shooting. If you still don't believe us, you can watch a few videos. I learned a lot by watching the Magpul Dynamic Handgun DVD. They covered the basics pretty well.

Quote:
I'd agree. Take a class, do snap-cap training, have a partner come with you and critique your stance, flinch, and other aspects from grip to posture (someone who knows what they are doing). But first and foremost, take a good handgun class.

+1 to that. I have many friends who own or have owned weapons, including some who are in a line of work that requires them to be armed every day. The best advice I've received wasn't from any of them; it was from a class that focused on fundamentals. As others have suggested, mixing live and dummy rounds at the range is a quick way to find out how clean your presses are and whether or not you're anticipating. I'll be practicing those drills tonight myself.

Based on these responses, I intend to get a Browning Buckmark .22 rimfire, learn the fundamentals through a class if I can find one and through videos from reputable sources, practice a lot (live and with snap caps on all the pistols I have), and then apply the lessons learned to live fire with my 9mm and 45ACP. Yes, I hoped for better SAO trigger performance, but the trigger is not great. I like a light pull and a very crisp let-off. Not happening with this gun. My Dan Wesson Pointman 7 is much better, but it should be for twice the price of the CZ75B.
Quote:
Based on these responses, I intend to get a Browning Buckmark .22 rimfire, learn the fundamentals through a class if I can find one and through videos from reputable sources, practice a lot (live and with snap caps on all the pistols I have), and then apply the lessons learned to live fire with my 9mm and 45ACP.

Phil

Hi Phil - Just as an option (not sure how much the Buckmarks are), but I just bought a Ruger 22/45 (same grip angle as the 1911) for $275 at Reeds in Santa Clara. That was with a $25 credit for the Ruger I rented. I pick it up on the 6th. Anyway, if you're willing to head up to USI in Concord, I know a bunch of Calgunners meet there on Sundays. Maybe you can get a few hints and make a pal or two! And every time I'm at Livermore, the Range Officers are always very friendly and they give me pointers for free (they probably take pity on me!). I don't know of any Calgunners that hit Livermore, although I'm sure they exist! PM me the next time you hit Livermore or Reeds or something and maybe we can meet up!

First, try competing in Bullseye matches (in slow fire, the main emphasis is to get all the shots to score, at 50 yds, strong hand only). Scoring your shots helps you understand what you're doing wrong. Having people around at the matches who have a few pointers helps also. Second, eye correction is a big deal. I'm near-sighted and now do my pistol work without correction. Not wearing corrective glasses really helped my groups. When I use even half-correction glasses, the front sight becomes too blurry (and the rear sight is worse).

__________________
Spreading the WORD according to COLT. and Smith, Wesson, Ruger, HK, Sig, High Standard, Browning

I also have a 75B SA and a Buckmark. If you're used to a 1911 trigger, everything else seems "long and creepy". You will find a similar issue with the Buckmark: it has a very short and light trigger pull. It definitely spoils you, and I don't think this particular .22 will help you with improving accuracy on a CZ75.
Quote:
Based on these responses, I intend to get a Browning Buckmark .22 rimfire, learn the fundamentals through a class if I can find one and through videos from reputable sources, practice a lot (live and with snap caps on all the pistols I have), and then apply the lessons learned to live fire with my 9mm and 45ACP.

Phil

While you don't necessarily need the .22, it's just a LOT cheaper to get the trigger time. I'd look up Bill Tidwell. He makes his rounds through the Bay Area regularly. But yeah, basically the point of taking the class is that when you're shooting, unless you know what to look or feel for, you can only see the results of your shooting and try to infer what you did wrong. With an instructor, you have someone who's watching you through every step of the process and can give you targeted advice. The payment you make to a good fundamentals instructor is usually less than what you'd pay in ammo for a centerfire pistol to get the same results.

Yes, Bill Tidwell is who I got my First Steps pistol course from. He teaches out of Livermore and Reeds in Santa Clara. IIRC, there are others who advertise pistol or target training at the ranges. Next time you're there, chat with one of the range officers.

Quote:
I shoot mostly a CZ75B Single Action Only 9mm. I am not a very good shot. The creepy trigger doesn't help, but I think the gun is fine. At 15 yards, I can't do much better than saucer-sized groups. I do notably better with my Dan Wesson Pointman 7 in 45ACP (a much higher-end gun), but would like to see some kind of good self-help videos on how to best shoot a handgun for accuracy. I may need personal instruction, but for now, some Internet-type instruction would be useful (I hope) for smaller groups. Thanks.

Phil

I recommend lots of focused dry fire, but here's my video on grip; it might help you out.

Here's a quick old trick. Have someone stack 5 or 6 quarters on top of your slide, as close to the front sight post as you can, and pull the trigger. Pull it over, and over, and over, and...
You get the point. It teaches you a nice, smooth trigger press. If you want to get crazy, punish yourself every time the coins fall off. Hahahha. Depending on your level of masochism, you can improve quickly.
Background
==========

The production of functional foods containing probiotic bacteria such as lactobacilli is gaining high significance throughout the world. These bacteria enhance microbial safety and offer organoleptic, technological, nutritional, and health benefits to consumers. As a consequence of the large-scale production of fermented foods incorporating probiotics, the industrial production of these bacteria at low cost is becoming more important. Therefore, growth parameters, including cost, the ability to produce a large number of cells, and the harvesting method, should be considered while optimizing the growth medium. Moreover, designing a new medium for enhanced biomass production can lead to more economical probiotic production [1, 2].

Among the lactobacilli, *Lactobacillus plantarum* is the most common probiotic bacterium traditionally used in fermented foods such as vegetables, meat and dairy products [3]. The most striking characteristic of *L. plantarum* and other lactic acid bacteria (LAB) is their ability to produce lactic acid, acetic acid and other metabolites when cultivated through batch or fed-batch fermentation. Lactic acid bacteria are strain-dependent, fastidious bacteria with respect to nutrient and environmental requirements. A rich medium and suitable conditions are the key environmental parameters required for good bacterial growth [4]. Nutrient supplements such as yeast extract and casein hydrolysate can improve the nutritional quality of the medium, as they contain growth-promoting compounds in addition to organic nitrogen and carbonaceous compounds. However, the use of these nutrient supplements in large quantities is very expensive [4].
A number of studies have shown significantly increased lactic acid and cell biomass production in most lactobacilli in the presence of yeast extract, amino acids, protein concentrates, hydrolysates, vitamins and inorganic compounds such as (NH₄)₂SO₄ and (NH₄)₂HPO₄ [5-7]. Similarly, supplementing the culture medium with cheese whey (an industrial waste) along with commercially available growth supplements results in enhanced cell biomass production [1, 7, 8]. However, studies on the use of industrial wastes (cheese whey and corn steep liquor) as nutrient supplements for enhanced biomass yield of lactobacilli are scarce. Cheese whey is a major by-product of the dairy industry, retaining 55% of milk nutrients, including lactose (4.5-5.0% w/v), soluble proteins (0.6-0.8% w/v), lipids and mineral salts [9]. Whey is normally discarded into the environment as a waste product, posing a threat due to the volume disposed of and its biochemical oxygen demand [10, 11]. Similarly, corn steep liquor, a by-product of the corn milling industry, has been used as an inexpensive nutrient source for fermentation. It is an excellent source of nitrogen for most microorganisms due to its high content of amino acids and polypeptides, with considerable amounts of B-complex vitamins [12]. Therefore, cheese whey and corn steep liquor can be used as low-cost carbon and nitrogen sources, respectively, for enhanced biomass production by *Lactobacillus*. Increased biomass may facilitate the recovery process and reduce the production cost. Other benefits of higher amounts of biomass include a shortened fermentation time, reduced wastewater volume, and accelerated downstream processing. Besides, a higher cell density of *Lactobacillus* also enables industries to produce higher concentrations of chemicals such as lactic acid or bacteriocins. Increased biomass production of *L.
plantarum* LP02 and *L. plantarum* Pi06 \[[@CR13]\] through medium optimization using a combination of the Taguchi array design and the Box-Behnken design \[[@CR14]\] has recently been reported. Optimization of fermentation media has been extensively studied for the production of exopolysaccharides and bacteriocin \[[@CR15]--[@CR17]\] and for the fermentation of olive juice \[[@CR18]\] by *L. plantarum*. Various studies have performed LAB culture medium optimization using a time-effective statistical approach, response surface methodology (RSM) \[[@CR14], [@CR18]--[@CR20]\], instead of the conventional "one factor at a time" approach. Response surface methodology with a central composite design has often been used to optimize the biomass yield of *Lactobacillus rhamnosus* \[[@CR21]\], *Bacillus coagulans* \[[@CR22]\], and *Bifidobacterium longum* \[[@CR23]\]. Nevertheless, the concomitant use of the Taguchi and RSM statistical methods has not yet been employed in *Lactobacillus* culture optimization studies using cheese whey as the main carbon source. Therefore, the current study was designed to determine the most significant variables among the culture parameters, including the cost-effective carbon source cheese whey together with corn steep liquor in all possible combinations, for enhanced biomass production of our recently isolated and characterised *Lactobacillus plantarum* AS-14 \[[@CR24]\], using the Taguchi design and the Box-Behnken design (RSM).

Results and discussion {#Sec2}
======================

The optimization of five nutrient variables {#Sec3}
-------------------------------------------

Three different media compositions were used for preliminary screening of dry cell mass production by *Lactobacillus plantarum* AS-14 (Table [1](#Tab1){ref-type="table"}). Among the three media compositions, medium M3 produced a higher biomass than the other media except MRS. However, no significant difference in dry cell mass production was observed between medium M3 and MRS under agitation.
Five out of the ten ingredients in medium M3 were selected for medium optimization by the Taguchi array design with an L16 (4^5^) array (Table [4](#Tab4){ref-type="table"}). The five independent nutrient variables were glucose (X~1~), yeast extract (X~2~), cheese whey (X~3~), (NH~4~)~2~SO~4~ (X~4~), and corn steep liquor (X~5~), and the two operating conditions were pH (A) and temperature (B); the design codes of the nutrient variables and their corresponding levels, coded 1 to 4, are listed in Tables [2](#Tab2){ref-type="table"} and [3](#Tab3){ref-type="table"}. All 16 experiments designed for media composition were conducted under static conditions. The results obtained from the optimization of the 5 nutrient variables (X~1~ to X~5~) and 2 operating conditions (A, B) using Taguchi's experimental design for the production of dry cell mass are summarized in Table [4](#Tab4){ref-type="table"}. The highest dry cell mass production achieved in the verification experiment was 13.88 g/l, as seen in run 14 (Table [4](#Tab4){ref-type="table"}), with the X~1~--X~5~ and A, B levels in the order of 4, 3, 1, 4, 3, 3, and 3. First-order regression was performed by multiple regression analysis and the following regression equation Eq.
[3](#Equ1){ref-type=""} was obtained:$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathrm{Dry}\kern0.5em \mathrm{cell}\kern0.5em \mathrm{mass}\kern0.5em \left(\mathrm{Ya}\right)=+13.48+0.40{\mathrm{X}}_1+0.74{\mathrm{X}}_2+0.16{\mathrm{X}}_3+7.02{\mathrm{X}}_4+0.68{\mathrm{X}}_5 $$\end{document}$$ where Ya is the predicted response, i.e. the dry cell mass (g/l) of *Lactobacillus plantarum* AS-14, and X~1~, X~2~, X~3~, X~4~ and X~5~ are the coded values of the test variables glucose, yeast extract, cheese whey, (NH~4~)~2~SO~4~ and corn steep liquor, respectively.

Table 1 Compositions of the media used for the growth of *Lactobacillus plantarum* (AS-14)

| Component (g/l) | M1 | M2 | M3 |
|---|---|---|---|
| Cheese whey | - | - | 75 |
| Glucose | 50 | 75 | 20 |
| Tryptone | 20 | 10 | - |
| Yeast extract | 8 | 8 | 8 |
| Corn steep liquor | 10 | 10 | 20 |
| (NH~4~)~2~SO~4~ | - | 15 | 20 |
| KH~2~PO~4~ | 0.2 | - | 0.2 |
| Sodium acetate | - | - | 5 |
| Tween 80 | - | 0.2 | 0.2 |
| MgSO~4~.7H~2~O | - | - | 0.05 |
| FeCl~3~ | 0.05 | - | 0.05 |

Table 2 Experimental levels of the independent nutrient variables and operating conditions

| Independent variable (g/l) | Symbol | Level 1 | Level 2 | Level 3 | Level 4 |
|---|---|---|---|---|---|
| Glucose | X~1~ | 5 | 10 | 15 | 20 |
| Yeast extract | X~2~ | 2 | 4 | 6 | 8 |
| Cheese whey | X~3~ | 30 | 45 | 60 | 75 |
| (NH~4~)~2~SO~4~ | X~4~ | 5 | 10 | 15 | 20 |
| Corn steep liquor | X~5~ | 5 | 10 | 15 | 20 |
| pH | A | 6.1 | 6.2 | 6.3 | 6.4 |
| Temperature (°C) | B | 30 | 35 | 40 | 45 |

Table 3 Coded and real values of the experimental variables for the Box-Behnken method

| Independent variable (g/l) | Symbol | −1 (Low) | 0 (Centre) | 1 (High) |
|---|---|---|---|---|
| Glucose | X~1~ | 10 | 15 | 20 |
| Yeast extract | X~2~ | 4 | 6 | 8 |
| Cheese whey | X~3~ | 45 | 60 | 75 |
| (NH~4~)~2~SO~4~ | X~4~ | 10 | 15 | 20 |
| Corn steep liquor | X~5~ | 10 | 15 | 20 |
| pH | A | 6.2 | 6.3 | 6.4 |
| Temperature (°C) | B | 35 | 40 | 45 |

Other components included sodium acetate (5 g/l), MgSO~4~.7H~2~O (0.3 g/l), MnSO~4~.4H~2~O (0.04 g/l) and Tween 80 (0.2 g/l)

Table 4 Optimization of five nutrient variables (X~1~--X~5~) and two operating conditions (A, B) using Taguchi's experimental design for the production of dry cell mass

| Run | X~1~ | X~2~ | X~3~ | X~4~ | X~5~ | A | B | Dry cell mass (g/l) |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 2 | 2 | 2 | 2 | 1 | 1 | 8.99 ± 0.02 |
| 2 | 3 | 1 | 3 | 4 | 2 | 2 | 1 | 9.12 ± 0.06 |
| 3 | 2 | 3 | 4 | 1 | 2 | 3 | 1 | 10.22 ± 0.04 |
| 4 | 4 | 1 | 4 | 2 | 3 | 4 | 1 | 13.47 ± 0.12 |
| 5 | 4 | 4 | 1 | 3 | 2 | 1 | 2 | 13.01 ± 0.01 |
| 6 | 3 | 2 | 2 | 3 | 1 | 2 | 2 | 8.21 ± 0.07 |
| 7 | 4 | 4 | 2 | 1 | 4 | 3 | 2 | 13.55 ± 0.01 |
| 8 | 2 | 1 | 3 | 3 | 4 | 4 | 2 | 11.32 ± 0.05 |
| 9 | 1 | 3 | 3 | 3 | 3 | 2 | 3 | 10.91 ± 0.05 |
| 10 | 2 | 2 | 4 | 4 | 3 | 2 | 3 | 11.21 ± 0.07 |
| 11 | 2 | 4 | 2 | 2 | 1 | 2 | 3 | 12.87 ± 0.06 |
| 12 | 1 | 4 | 2 | 4 | 4 | 2 | 3 | 12.71 ± 0.06 |
| 13 | 1 | 4 | 1 | 1 | 3 | 3 | 1 | 11.82 ± 0.02 |
| **14** | **4** | **3** | **1** | **4** | **3** | **3** | **3** | **13.88 ± 0.03** |
| 15 | 3 | 3 | 3 | 2 | 4 | 4 | 3 | 11.49 ± 0.08 |
| 16 | 1 | 1 | 2 | 1 | 1 | 3 | 4 | 8.21 ± 0.05 |

The bold row marks the experimental run with maximum *L. plantarum* biomass production. X~1~, glucose; X~2~, yeast extract; X~3~, cheese whey; X~4~, (NH~4~)~2~SO~4~; X~5~, corn steep liquor. Other components included sodium acetate (5 g/l), MgSO~4~.7H~2~O (0.3 g/l), MnSO~4~.4H~2~O (0.04 g/l) and Tween 80 (0.2 g/l)

The response surface Box-Behnken method was used to further optimize and examine the three most important factors, glucose (X~1~), cheese whey (X~3~), and corn steep liquor (X~5~); the results in terms of the respective coded values and three levels in the medium are listed in Tables [5](#Tab5){ref-type="table"} and [6](#Tab6){ref-type="table"}. The biomass reached its highest value (15.41 g/l) in run 15, with nutritional levels of 15 g/l glucose, 60 g/l cheese whey, and 15 g/l corn steep liquor.
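The first-order Taguchi regression model above can be evaluated directly from its coded coefficients. A minimal sketch in Python (coefficients taken from the equation as printed; the inputs are coded levels, not g/l):

```python
def dry_cell_mass_linear(x1, x2, x3, x4, x5):
    """First-order Taguchi regression model for dry cell mass (g/l).

    Coefficients are those printed in the equation above; the inputs are
    the coded levels of glucose, yeast extract, cheese whey, (NH4)2SO4
    and corn steep liquor, respectively.
    """
    return 13.48 + 0.40 * x1 + 0.74 * x2 + 0.16 * x3 + 7.02 * x4 + 0.68 * x5

# At the centre of the coded space the model returns the intercept:
print(dry_cell_mass_linear(0, 0, 0, 0, 0))  # 13.48
```

Such a helper is only a restatement of the fitted equation; it is useful for tabulating predicted responses over the coded design space.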
A second-order regression equation was then derived by multiple regression analysis and evaluated by ANOVA, as given below:

**Final equation in terms of coded factors:** $$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \begin{array}{l}\mathrm{Dry}\kern0.5em \mathrm{cell}\kern0.5em \mathrm{mass}\kern0.5em \left(\mathrm{Ya}\right)=+12.74+2.77{\mathrm{X}}_1+1.16{\mathrm{X}}_3+9.48{\mathrm{X}}_5+0.898{\mathrm{X}}_1^2+0.0571{\mathrm{X}}_3^2+0.786{\mathrm{X}}_5^2\hbox{-} 5.34{\mathrm{X}}_1{\mathrm{X}}_3+\\ {}14.69{\mathrm{X}}_1{\mathrm{X}}_5+5.64{\mathrm{X}}_3{\mathrm{X}}_5\end{array} $$\end{document}$$ Quadratic and linear interaction effects were calculated for the optimization process, and significant coefficients were preserved to create the corresponding response surfaces.

Table 5 Box-Behnken design and experimental response for the three nutrient variables (X~1~, glucose; X~3~, cheese whey; X~5~, corn steep liquor) in terms of coded factors

| Run | X~1~ | X~3~ | X~5~ | Dry cell mass (g/l) |
|---|---|---|---|---|
| 1 | 0 | −1 | 0 | 13.66 ± 0.04 |
| 2 | −1 | 0 | 1 | 14.96 ± 0.07 |
| 3 | 0 | 0 | 0 | 13.28 ± 0.03 |
| 4 | −1 | 0 | 0 | 15.34 ± 0.06 |
| 5 | 0 | 1 | 1 | 15.35 ± 0.08 |
| 6 | 1 | 0 | 1 | 14.55 ± 0.01 |
| 7 | −1 | 1 | 1 | 15.12 ± 0.11 |
| 8 | 0 | 1 | −1 | 15.14 ± 0.16 |
| 9 | −1 | −1 | 0 | 15.18 ± 0.04 |
| 10 | 1 | 0 | 0 | 13.69 ± 0.06 |
| 11 | 1 | −1 | 0 | 15.25 ± 0.07 |
| 12 | 1 | 0 | −1 | 15.20 ± 0.06 |
| 13 | 0 | 1 | −1 | 14.87 ± 0.05 |
| 14 | 0 | −1 | 1 | 14.99 ± 0.08 |
| **15** | **1** | **−1** | **1** | **15.41 ± 0.06** |
| 16 | −1 | 1 | −1 | 15.17 ± 0.04 |

The bold row marks the experimental run with maximum *L. plantarum* biomass production

Table 6 Box-Behnken design and experimental response for the two operating conditions (A, B) in terms of coded factors

| Run | A | B | Dry cell mass (g/l) |
|---|---|---|---|
| 1 | 1 | 1 | 13.66 ± 0.08 |
| 2 | 0 | 1 | 14.96 ± 0.06 |
| 3 | 0 | 0 | 15.28 ± 0.01 |
| 4 | −1 | 1 | 15.44 ± 0.04 |
| 5 | −1 | 0 | 16.04 ± 0.01 |
| 6 | 0 | 1 | 14.55 ± 0.13 |
| 7 | −1 | 0 | 16.12 ± 0.06 |
| 8 | 1 | −1 | 15.14 ± 0.05 |
| 9 | −1 | 0 | 15.88 ± 0.05 |
| 10 | 0 | 0 | 14.69 ± 0.02 |
| 11 | 1 | 0 | 15.55 ± 0.02 |
| 12 | −1 | 0 | 16.20 ± 0.03 |
| 13 | 1 | −1 | 14.87 ± 0.05 |
| 14 | −1 | 1 | 15.99 ± 0.07 |
| 15 | 0 | −1 | 15.91 ± 0.06 |
| 16 | −1 | 0 | 16.17 ± 0.07 |
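The coded levels used throughout the Box-Behnken analysis map linearly to actual concentrations (centre ± one step). A small illustrative helper, with example centre/step values taken from Table 3:

```python
def to_coded(actual, center, step):
    """Map an actual concentration to its coded level: (actual - center) / step."""
    return (actual - center) / step

def to_actual(coded, center, step):
    """Inverse transform: center + coded * step."""
    return center + coded * step

# Glucose (Table 3): levels 10/15/20 g/l correspond to coded -1/0/+1
print(to_coded(20.0, 15.0, 5.0))   # 1.0
# Cheese whey (Table 3): coded -1 corresponds to 45 g/l
print(to_actual(-1, 60.0, 15.0))   # 45.0
```

The same transform converts any stationary point found in coded units back to real operating values.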
The ANOVA results in Table [7](#Tab7){ref-type="table"} show that the independent variable X~1~ had a significant effect, with a positive coefficient, meaning that an increase in its concentration led to an increased yield. The independent variables X~3~ and X~5~ were also significant within the range of this study, as was the X~1~X~5~ interaction. The negative signs of the X~1~X~3~ interaction and the squared variables X~1~^2^, X~3~^2^ and X~5~^2^ indicate a reduction in dry cell mass production when their concentrations were increased in the system.

Table 7 Analysis of variance (ANOVA) for the response surface of the full quadratic model for optimization of three variables (X~1~, glucose; X~3~, cheese whey; X~5~, corn steep liquor)

| Source | Sum of squares | Degrees of freedom | Mean square | F-value | *P*-value |
|---|---|---|---|---|---|
| X~1~ | 70.46 | 1 | 70.46 | 1182.44 | 0.0002 |
| X~1~^2^ | 0.021 | 1 | 0.021 | 0.345 | 0.4098 |
| X~3~ | 10.19 | 1 | 10.19 | 171.07 | 0.0004 |
| X~3~^2^ | 1.160 | 1 | 1.1605 | 21.321 | 0.001601 |
| X~5~ | 34.60 | 1 | 34.60 | 580.57 | 0.00131 |
| X~5~^2^ | 0.029 | 1 | 0.029 | 0.551 | 0.23481 |
| X~1~X~3~ | 23.78 | 1 | 23.78 | 399.10 | 0.0001 |
| X~1~X~5~ | 23.59 | 1 | 23.50 | 395.80 | 0.0035 |
| X~3~X~5~ | 20.52 | 1 | 20.52 | 344.37 | 0.0003 |
| Lack of fit | 0.01336 | 3 | 0.004112 | 2.001 | 0.31131 |
| Pure error | 0.061 | 2 | 0.056 | | |
| Total | 184.425 | 14 | | | |

R^2^: 0.9955; adjusted R^2^: 0.9936

Figures [1](#Fig1){ref-type="fig"}, [2](#Fig2){ref-type="fig"} and [3](#Fig3){ref-type="fig"} show that a higher concentration of cheese whey resulted in higher biomass production, whereas higher glucose and corn steep liquor concentrations might have inhibited cell growth. The lack of fit of the model was non-significant, indicating good agreement between the data and the model. The high adjusted R^2^ value (0.9936) indicated that the data were close to the values predicted by the model. The software suggested an optimized formulation at the following concentrations: 15 g/l glucose, 60 g/l cheese whey, and 15 g/l corn steep liquor.
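The fit statistics reported with Table 7 follow the standard definitions of R^2^ and adjusted R^2^; a generic sketch (the numbers in the usage line are illustrative, not recomputed from the table):

```python
def r_squared(ss_residual, ss_total):
    """Coefficient of determination: fraction of response variability explained."""
    return 1.0 - ss_residual / ss_total

def adjusted_r_squared(r2, n_runs, n_terms):
    """Adjusted R^2 penalises model size; n_terms excludes the intercept."""
    return 1.0 - (1.0 - r2) * (n_runs - 1) / (n_runs - n_terms - 1)

# Illustrative: a model explaining 90% of variance over 15 runs with 9 terms
print(round(adjusted_r_squared(0.90, 15, 9), 2))  # 0.72
```

The gap between R^2^ and adjusted R^2^ grows as terms are added without improving the fit, which is why both are quoted in the text.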
Live cells were also counted, for comparison and to confirm the viability of cells in the MRS and optimized culture media (\>1 × 10^10^ CFU/ml) (Fig. [4](#Fig4){ref-type="fig"}). The results indicated that both biomass and viable counts in the optimized culture medium were significantly higher than those in the MRS medium. Because the enhanced biomass was almost proportional to the increase in viable cell numbers, it was deduced that the volume (or size) per cell was similar in each culture and that the cells in the optimum culture were alive.

Fig. 1 Response surface of dry cell mass production by *Lactobacillus plantarum* (AS-14), showing the interaction of glucose (X~1~) and cheese whey (X~3~) at constant levels of corn steep liquor (15 g/l), yeast extract (5 g/l), sodium acetate (5 g/l), MgSO~4~ 7H~2~O (0.3 g/l), and MnSO~4~ 4H~2~O (0.04 g/l)

Fig. 2 Response surface of dry cell mass production by *Lactobacillus plantarum* (AS-14), showing the interaction of glucose (X~1~) and corn steep liquor (X~5~) at constant levels of cheese whey (60 g/l), yeast extract (5 g/l), sodium acetate (5 g/l), MgSO~4~ 7H~2~O (0.3 g/l), and MnSO~4~ 4H~2~O (0.04 g/l)

Fig. 3 Response surface plot in relation to temperature and pH for dry cell mass production

Fig. 4 Comparison of dry cell mass and viable counts of *Lactobacillus plantarum* (AS-14) cultivated in MRS medium and the optimum medium. Fermentation conditions: 5% inoculum, 40 °C, 24 h, 100 ml medium in a 250 ml Hinton flask without shaking. Each value represents the mean; columns are significantly different (*p* \< 0.05) compared to the MRS medium

The optimization of physical operating conditions {#Sec4}
-------------------------------------------------

Tables [4](#Tab4){ref-type="table"} and [6](#Tab6){ref-type="table"} show the design matrix of the variables in coded units together with the experimental results. The highest dry cell mass production was 16.20 g/l, as seen in run 12.
For the experimental data, the following equations were obtained after multiple regression analysis:$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathrm{Y}\mathrm{b}={\mathrm{b}}_0+{\mathrm{b}}_1\mathrm{A}+{\mathrm{b}}_2\mathrm{B}+{\mathrm{b}}_{12}\mathrm{AB}+{\mathrm{b}}_{11}{\mathrm{A}}^2+{\mathrm{b}}_{22}{\mathrm{B}}^2 $$\end{document}$$ $$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathrm{Y}\mathrm{b}=16.04+0.61\mathrm{A}+2.54\mathrm{B}\hbox{-} 1.07\mathrm{AB}\hbox{-} 1.09{\mathrm{A}}^2\hbox{-} 5.36{\mathrm{B}}^2 $$\end{document}$$ where Yb is the predicted response, i.e. dry cell mass (g/l), and A and B are the coded values of the test variables pH and temperature, respectively. Table [8](#Tab8){ref-type="table"} shows the results of the second-order response surface model for dry cell mass production in the form of an analysis of variance (ANOVA). The ANOVA of the quadratic regression model demonstrates that the model is highly significant, as evident from the very low probability value \[*p*-value Prob \> F \< 0.0001\]. The R-squared of 0.9411 is in reasonable agreement with the adjusted R-squared of 0.9083, i.e. the difference is less than 0.1. The coefficient of determination (R^2^) indicates that 94.11% of the variability in the response could be explained by the model. The adjusted determination coefficient (adj. R^2^ = 0.9083) was also satisfactory for confirming the significance of the model. Figure [3](#Fig3){ref-type="fig"} displays the surface response plot of the model equation. In Eq. [3](#Equ1){ref-type=""}, only the isolated variables A and B significantly influence the process.
A positive but lower value of A and a higher value of B demonstrate that a rise in temperature and a reduction in pH lead to increased dry cell mass production. The determination coefficient was 0.9083, indicating that 90.8% of the variability in the response could be explained by the model. The coordinates of the stationary points for dry cell mass production were calculated from the complete Eq. [3](#Equ1){ref-type=""}. Figure [3](#Fig3){ref-type="fig"} illustrates that an increase in temperature leads to increased production of dry cell mass. Maximal dry cell mass production was obtained at pH 6.2. The optimal ranges for dry cell mass production were 34 to 39.6 °C and pH 6.1 to 6.4.

Table 8 Analysis of variance (ANOVA) for the response surface of the full quadratic model for optimization of two variables (A, pH; B, temperature)

| Source | Sum of squares | Degrees of freedom | Mean square | F-value | *P*-value |
|---|---|---|---|---|---|
| A | 43.30 | 1 | 43.30 | 38.01 | 0.0021 |
| B | 56.28 | 1 | 56.28 | 51.34 | 0.0001 |
| AB | 5.65 | 1 | 5.65 | 5.16 | 0.0493 |
| A^2^ | 3.45 | 1 | 3.45 | 3.15 | 0.1096 |
| B^2^ | 84.01 | 1 | 84.01 | 76.64 | \<0.0001 |
| Lack of fit | 0.0125 | 3 | 0.00291 | 1.809 | 0.37651 |
| Pure error | 0.041 | 2 | 0.096 | | |
| Total | 192.75 | 10 | | | |

R^2^: 0.9411; adjusted R^2^: 0.9083

Statistical design of experiments can be employed to model the relationship between variables and one or more responses of a process. Since the cost of the culture medium has a remarkable impact on the mass production of probiotics, the optimization of growth conditions, substitution with low-price nutrient ingredients and simplification of the medium are vital for economical production. The Taguchi method can be employed to screen the significant nutritional parameters and design a simple medium. Response surface methodology can be employed to estimate a polynomial model representing the effect of significant factors on viable cell counts in probiotic products, as well as to optimize the process variables. Such a combination of statistical designs of experiments is useful for optimizing bioprocesses.
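The stationary point mentioned above is the point where both partial derivatives of the quadratic model vanish, which for two factors reduces to a 2 × 2 linear system. A sketch using the printed coefficients of Yb (everything in coded units, so the result must still be mapped back to actual pH and temperature):

```python
# Yb = b0 + b1*A + b2*B + b12*A*B + b11*A^2 + b22*B^2 (coefficients as printed)
b1, b2, b12, b11, b22 = 0.61, 2.54, -1.07, -1.09, -5.36

# grad Yb = 0  =>  2*b11*A + b12*B = -b1  and  b12*A + 2*b22*B = -b2
det = 4.0 * b11 * b22 - b12 ** 2
A = (-b1 * 2.0 * b22 + b2 * b12) / det   # Cramer's rule
B = (-b2 * 2.0 * b11 + b1 * b12) / det

# Both gradient components vanish at the stationary point (A, B)
grad_A = b1 + 2.0 * b11 * A + b12 * B
grad_B = b2 + b12 * A + 2.0 * b22 * B
print(round(A, 3), round(B, 3), grad_A, grad_B)
```

Because b11 and b22 are both negative, the stationary point is a maximum of the fitted surface.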
Many workers have employed a two-phase procedure, a screening phase using the Taguchi design followed by an optimization phase using the Box-Behnken design, to develop an economical broth for the growth of lactobacilli, for example *L. casei* ATCC 334 \[[@CR2]\], *Lactobacillus* sp. LMI8 \[[@CR10]\] and *Lactobacillus plantarum* Pi06 \[[@CR13]\]. The selected lactobacilli strains grew relatively slowly in MRS broth. The growth medium was therefore reformulated and simplified by testing three different media compositions. Furthermore, the extensive use of nitrogenous sources such as peptones from poultry or beef extract is environmentally unfriendly due to the high amount of waste. Therefore, we used cheese whey as a low-cost carbon source, with corn steep liquor and ammonium sulphate as low-cost nitrogen sources. A step towards simplification is the omission of peptone and the evaluation of the remaining components with respect to the growth performance of the strain. Each component was tested in a range of different concentrations, while all the remaining parameters were kept constant, equal to the initial concentrations in the MRS medium. According to our results, there was no significant difference in dry cell mass production between the optimum medium M3 and MRS under agitation. Generally, higher dry cell mass was obtained under non-agitated conditions than with shaking. These results indicate that *L. plantarum* strain AS-14 is an anaerobic microorganism. Based on the Taguchi array design factor optimization, medium M3 was chosen as the initial medium for further optimization experiments. The growth results on MRS medium reported by Zacharof and Lovitt \[[@CR25]\] showed that this medium was unsuitable for intensive propagation of *Lactobacillus plantarum* NCIMB 8014, because a medium containing numerous nitrogen sources did not facilitate good growth.
The absence of peptone from the medium led to an improved growth rate and higher growth yields of the bacteria; therefore, yeast extract was chosen as the primary and sole source of nitrogen. Several studies \[[@CR26]--[@CR29]\] have introduced the idea of partial dependence of biomass development and metabolite production by lactobacilli on the amount of nitrogen sources, such as yeast extract, in defined growth media. Yeast extract serves as a carbon, nitrogen and vitamin source needed to satisfy the growth requirements of the microorganisms. Several studies have underlined the important influence of metal ions on the growth of lactobacilli \[[@CR29]--[@CR31]\]. We also evaluated the effect of yeast extract, ammonium sulphate, manganese and magnesium salt concentrations on dry cell mass production; according to the results, they are necessary for growth but have no significant effect on the growth response surface. As the biomass yield of anaerobic bacteria strongly depends on the carbohydrate feed \[[@CR27]\], glucose was used as the energy source. The 3D response surface plots are graphical representations of the regression equation; they were plotted to show the interactions of the variables and to identify the optimum level of each variable for a maximum response (Figs. [1](#Fig1){ref-type="fig"}, [2](#Fig2){ref-type="fig"} and [3](#Fig3){ref-type="fig"}). Each response surface for biomass production represents a different combination of two test variables at a time. As a result, glucose (X~1~) and the dominant nutrients cheese whey (X~3~) and corn steep liquor (X~5~) significantly enhanced biomass production, whereas yeast extract did not significantly affect cell growth. Moreover, the log value of viable cells in the optimized medium based on cheese whey and corn steep liquor was increased compared with the complex and expensive MRS medium (Fig. [4](#Fig4){ref-type="fig"}).
Further, cheese whey and corn steep liquor, as low-cost carbon and nitrogen sources, could be appropriate substitutes for many other carbon and nitrogen sources, such as glucose, yeast extract and KH~2~PO~4~, in different medium compositions. Combining all the optimised growth parameters in the desired quantities yielded a formulated liquid medium. This modified medium served the aim of enhancing cellular productivity and ensuring high growth. When comparing the growth of the selected lactobacilli on MRS and the formulated media (Fig. [4](#Fig4){ref-type="fig"}), it was clearly shown that the log value of viable cells in the optimized medium based on cheese whey and corn steep liquor was increased relative to that in the complex and expensive MRS medium, demonstrating that the optimized medium improves growth with a significant increase in dry cell mass. Corn steep liquor and lactose have long been proven to be inexpensive dominant nutrients, alternatives to much more expensive materials such as yeast extract and peptone. They have been used as alternative nitrogen and carbon sources for optimizing lactic acid production by LAB \[[@CR21], [@CR32]\]. Because LAB are nutritionally fastidious and require various amino acids and vitamins for growth, choosing a suitable nitrogen source appears to be very important. Lactose and corn steep liquor are the dominant nutrients that control the biosynthesis of lactic acid produced by LAB; hence, a strong interaction between them in lactic acid fermentation is inevitable. Lima et al. \[[@CR10]\] reported that maximal lactic acid production (18.31 g/l) was obtained for values of cheese whey (total lactose 55 g/l) and corn steep liquor (15 g/l) in the central point region. Hwang et al. \[[@CR13]\] investigated the fermentative production of dry cell mass of *Lactobacillus plantarum* Pi06, up to 8.94 g/l, from three response variables of glucose, yeast extract, and corn steep liquor.
The resulting optimum medium consisted of 35 g/l glucose, 35 g/l yeast extract, and 40 ml/l corn steep liquor, obtained using the Box-Behnken method. Wee et al. \[[@CR33]\] explored lactic acid production (up to 91 g/l) by *Lactobacillus* sp. RKY1 from corn steep liquor (15--60 g/l) and cheese whey containing 100 g/l of lactose as cheap raw materials. Plessas et al. \[[@CR34]\] used cheese whey (initial lactose concentration of 36 g/l) and sourdough (1%), which resulted in a maximum production of 6.9 g/l of lactic acid (by single culture) and 8.8 g/l of lactic acid (by mixed culture). To further optimize the growth rate and the dry cell mass concentration, the effect of the physical operating conditions, temperature and pH, on the growth of lactobacilli was investigated in a 3 l fermenter containing the optimum formulated medium. The optimum temperature and pH, which resulted in 16.20 g/l dry cell mass, were 40 °C and 6.2, respectively. At the stationary point, in the MRS broth (pH 6.0), the dry cell mass concentration was 8.94 g/l, while in the medium without pH control it was only 6.02 g/l. These values represent an increase of 165% in dry cell mass production when the pH of the supplemented hydrolysate was controlled. An incubation temperature for lactobacilli in the range of 25 to 38 °C has been proposed by several researchers \[[@CR26]--[@CR29]\]. Zacharof and Lovitt \[[@CR25]\] reported that the maximum specific growth rate of three lactobacilli was enhanced at a controlled pH of 6.5, although in the cases of *L. lactis* and *L. plantarum* pH 7 also supported good growth. These experiments gave higher biomass yields and maximum specific growth rates compared with uncontrolled-pH growth systems.

Conclusion {#Sec5}
==========

*Lactobacillus plantarum* AS-14 proves to have great potential for dry cell mass production in the presence of cheese whey and corn steep liquor.
Moreover, MRS medium, although it can support the growth of lactobacilli, is unsuitable for use in large quantities owing to its high formulation cost and potential environmental hazards. Hence, this study addresses the limitations of conventional MRS medium for the growth of *Lactobacillus plantarum*, and we present a simplified, cost-effective medium with the potential to be employed in the industrial production of the different types of lactobacilli used in dairy products.

Methods {#Sec6}
=======

Microorganism {#Sec7}
-------------

Recently, we isolated and characterised fifty-four different species from spoiled fruits and vegetables \[[@CR24]\]; among these, the novel strain *Lactobacillus plantarum* AS-14 was used for the current study. The AS-14 strain was isolated and characterised for the first time from rotten vegetables (brinjal) collected from a local fruit market in Sargodha, Punjab, Pakistan. The strain was stored in de Man, Rogosa and Sharpe (MRS) broth with 20% glycerol at −20 °C.

Media composition and growth conditions {#Sec8}
---------------------------------------

Whey powder containing 82% lactose was obtained from Noorpur Dairy Industry (Bhalwal, Sargodha). Deproteinization of the whey was carried out by heat treatment (100 °C for 15 min) of the acidified (pH 4.0) whey solution, with some modification of the method reported by Lima et al. \[[@CR10]\]. The resulting solution was centrifuged at 12,000 × *g* and the supernatant was diluted to reach the desired lactose concentration. Corn steep liquor was obtained from Refhan Industries Pvt. Ltd., Lahore, Pakistan. The inoculum of *L. plantarum* was prepared by transferring glycerol stock culture (1 ml) to an Erlenmeyer flask containing 50 ml of MRS medium and incubating at 37 °C for 18 h (the time required for the microorganism to reach the exponential growth phase) without agitation.
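Because the whey powder is 82% lactose, the amount of powder needed to reach a target lactose level in the diluted solution is a simple mass balance. An illustrative sketch (the function name and the example numbers are ours, not from the paper):

```python
def whey_powder_g_per_l(target_lactose_g_per_l, lactose_fraction=0.82):
    """Grams of whey powder per litre needed to supply a target lactose level.

    Assumes lactose_fraction is the mass fraction of lactose in the powder
    (0.82 for the powder used here).
    """
    return target_lactose_g_per_l / lactose_fraction

# e.g. a medium formulated around 61.5 g/l lactose would need about 75 g/l powder
print(round(whey_powder_g_per_l(61.5), 1))  # 75.0
```

The same calculation, inverted, gives the lactose actually contributed by a stated whey dosage.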
Erlenmeyer flasks containing the production medium were inoculated with 1% (v/v) inoculum grown in the MRS medium. The composition of the MRS medium was (g/l): peptone 10, yeast extract 5, beef extract 10, glucose 20, sodium acetate 5, Na~2~HPO~4~ · 2H~2~O 2, triammonium citrate 2, MgSO~4~ · 7H~2~O 0.1 and MnSO~4~ · 4H~2~O 0.05. For the optimization of dry cell mass (DCM) production, the experiments were performed in 250 ml Erlenmeyer flasks containing 100 ml of production medium. Three different media compositions were screened to identify a suitable one for further experiments (Table [1](#Tab1){ref-type="table"}). All media were adjusted to pH 6.2 before sterilization at 121 °C for 15 min. The culture with the highest biomass production was selected for subsequent optimization experiments. The production medium consisted of the same salts used in the growth medium, with the addition of whey lactose (30 to 75 g/l), corn steep liquor (5 to 20 g/l) and (NH~4~)~2~SO~4~ (5 to 20 g/l). The initial pH was 6.5 and was not kept constant throughout the experiments. The optimization of temperature (30 °C to 45 °C) and pH (6.1 to 6.4) was also carried out in a 3.0 l fermenter (Eyela MBF jar fermenter, Tokyo, Japan) with a working volume of 1.5 l.

Measurement of dry cell mass {#Sec9}
----------------------------

The dry cell mass of the fermented broth was measured with a UV-visible spectrophotometer (Shimadzu Co., Tokyo, Japan). Aliquots of the cell culture obtained at different time intervals were centrifuged, washed twice and suspended in distilled water, and their absorbance was measured at 600 nm. Washed cells were dried in an oven at 80 °C for 16--24 h and weighed to constant weight. Dry cell mass was determined from a calibration curve relating the absorbance values of cell density (OD~600nm~) to dry cell weight (g/l). The glucose concentration in the supernatant was measured by the dinitrosalicylic acid (DNS) method.
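The calibration curve relating OD~600~ to dry cell weight is an ordinary least-squares line; a minimal sketch with made-up calibration points (the actual calibration data are not reported in the paper):

```python
def fit_calibration(od_values, dcm_values):
    """Least-squares slope and intercept for a DCM (g/l) vs OD600 calibration line."""
    n = len(od_values)
    mean_x = sum(od_values) / n
    mean_y = sum(dcm_values) / n
    sxx = sum((x - mean_x) ** 2 for x in od_values)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(od_values, dcm_values))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical calibration points lying exactly on DCM = 0.4 * OD600
slope, intercept = fit_calibration([0.5, 1.0, 2.0], [0.2, 0.4, 0.8])
print(slope, intercept)
```

Once fitted, the line converts any measured OD~600~ into an estimated dry cell mass via `slope * od + intercept`.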
The number of viable cells of *L. plantarum* AS-14 was determined using serial 10-fold dilutions in sterile physiological saline. Secondary dilutions (0.1 ml) from 10^4^ to 10^8^ were injected into anaerobic tubes (containing MRS agar at 50 °C) and immediately rotated in ice water. The anaerobic tubes (capped with butyl rubber stoppers) were placed in an incubator at 37 °C for 24--48 h, and the colony-forming units were estimated by counting viable cells cultivated on MRS agar plates at 37 °C after a series of sample dilutions in 0.85% physiological saline.

Optimization of growth media {#Sec10}
----------------------------

The response surface methodology was conducted by first applying the Taguchi array design to understand the interactions of the various variables and operating conditions and to find the optimum concentrations of the main medium components affecting the response (dry cell mass). To examine the effect of the five growth factors (X~1~ to X~5~) on the production of dry cell mass, a standard orthogonal L~16~ (4^5^) array design was used. In L~16~, "L" and "16" denote the Latin square and the number of experimental runs, respectively. Every run consisted of a particular combination of levels and factors; the value of each level is listed in Table [2](#Tab2){ref-type="table"}. Each selected combination of factors and levels was tested in a Hinton flask containing 100 ml of culture under static conditions. To estimate the optimal point of the important variables identified from the results of the Taguchi array design, a second-order polynomial model based on the Box-Behnken design (BBD) was fitted to the experimental results. Thus, the influence of all experimental variables, factors and interaction effects on the response was investigated. The objective of the second experiment was to obtain a more precise estimate of the optimal operating conditions for the factors involved.
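The plate counts described at the start of this section convert to viable counts by dividing the colony count by the plated volume times the dilution factor; a small sketch (the example numbers are illustrative):

```python
def cfu_per_ml(colonies, plated_volume_ml, dilution_factor):
    """Viable count: colonies / (volume plated x dilution factor)."""
    return colonies / (plated_volume_ml * dilution_factor)

# 120 colonies from 0.1 ml of a 10^-8 dilution -> 1.2e11 CFU/ml
print(cfu_per_ml(120, 0.1, 1e-8))
```

Counts are conventionally taken only from plates bearing roughly 30 to 300 colonies, so the appropriate dilution is selected before applying the formula.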
Thus, a BBD factorial 3^3^ experimental design was developed with three variables at three levels. Only a small number of experimental runs (i.e. 16 runs) was necessary for the optimization of the nutrient variables (glucose, cheese whey and corn steep liquor) and, separately, of the two operating variables (pH and temperature) listed in Table [3](#Tab3){ref-type="table"}.$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \begin{array}{l}\mathrm{Ya}={\mathrm{a}}_0+{\mathrm{a}}_1{\mathrm{X}}_1+{\mathrm{a}}_2{\mathrm{X}}_2+{\mathrm{a}}_3{\mathrm{X}}_3+{\mathrm{a}}_4{\mathrm{X}}_4+{\mathrm{a}}_5{\mathrm{X}}_5+{\mathrm{a}}_{12}{\mathrm{X}}_1{\mathrm{X}}_2+{\mathrm{a}}_{13}{\mathrm{X}}_1{\mathrm{X}}_3+{\mathrm{a}}_{14}{\mathrm{X}}_1{\mathrm{X}}_4+{\mathrm{a}}_{15}{\mathrm{X}}_1{\mathrm{X}}_5\\ {}{\mathrm{a}}_{23}{\mathrm{X}}_2{\mathrm{X}}_3+{\mathrm{a}}_{11}{\mathrm{X}}_1^2+{\mathrm{a}}_{22}{\mathrm{X}}_2^2+{\mathrm{a}}_{33}{\mathrm{X}}_3^2+{\mathrm{a}}_{44}{\mathrm{X}}_4^2+{\mathrm{a}}_{55}{\mathrm{X}}_5^2\end{array} $$\end{document}$$ $$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathrm{Y}\mathrm{b}={\mathrm{b}}_0+{\mathrm{b}}_1\mathrm{A}+{\mathrm{b}}_2\mathrm{B}+{\mathrm{b}}_{12}\mathrm{AB}+{\mathrm{b}}_{11}{\mathrm{A}}^2+{\mathrm{b}}_{22}{\mathrm{B}}^2 $$\end{document}$$ where Ya and Yb are the predicted response (dry cell biomass) values; a~0~ and b~0~ are the constants; a~1~, b~1~, a~2~, b~2~, a~3~, a~4~ and a~5~ are the linear coefficients; a~12~, b~12~, a~14~, a~15~ and a~23~ are the cross-product coefficients; and a~11~, b~11~, a~22~, b~22~, a~33~, a~44~ and a~55~ are the quadratic coefficients.
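A textbook Box-Behnken design for k factors places runs at the midpoints of the design-cube edges (±1 on each pair of factors, 0 elsewhere) plus replicated centre points. A generic sketch of that construction (the paper's actual run lists are given in Tables 5 and 6):

```python
from itertools import combinations

def box_behnken(n_factors, n_center=4):
    """Coded Box-Behnken runs: +/-1 on each factor pair, 0 elsewhere, plus centres."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * n_factors
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([0] * n_factors for _ in range(n_center))
    return runs

# Three factors: 3 pairs x 4 sign combinations + 4 centre runs = 16 runs
print(len(box_behnken(3)))  # 16
```

The 16-run count matches the number of runs used here for three nutrient variables; each non-centre run varies exactly two factors at a time, which is what makes the quadratic and two-way interaction terms estimable.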
Data analysis {#Sec11}
-------------

The effects of the factors on biomass production were statistically analysed by analysis of variance (ANOVA). Response surface methodology was carried out using the Design Expert software package (version 9.0.3.1, Stat-Ease Inc., USA).

BBD : Box-Behnken design

DCM : Dry cell mass

DNS : Dinitrosalicylic acid

*L. lactis* : *Lactococcus lactis*

MRS : de Man, Rogosa and Sharpe

RSM : Response surface methodology

The authors would like to acknowledge the Higher Education Commission Pakistan.

Funding {#FPar1}
=======

This project was financially supported by the Higher Education Commission Pakistan through the award of a Ph.D. scholarship under the Indigenous PhD 5000 Fellowship Program Phase-VI. The funding supported all of the research, including the design of the study, the collection, analysis and interpretation of data, and the writing of the manuscript.

Availability of data and materials {#FPar2}
==================================

Since these data have not yet been published and we will prepare a new paper by refining new experimental results based on these data, the authors do not wish to share their data at this time.

Authors' contributions {#FPar3}
======================

Concept, design, and writing of the manuscript: AM; data analysis, critical revision and editing of the manuscript: AR and JIQ; contribution toward the development of the protocol: I-ul-H and HM. All authors read and approved the final manuscript.

Competing interests {#FPar4}
===================

The authors declare that they have no competing interests.

Consent for publication {#FPar5}
=======================

Not applicable.

Ethics approval and consent to participate {#FPar6}
==========================================

This paper is in compliance with ethical standards.

Publisher's Note {#FPar7}
================

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
*(-2/37) assuming b is positive. b**(-34/481) Simplify ((((s/((s/s**(-4))/s*s))/s)/s)/s)/s*s**(-6/7)*s*(s/((s*s*s*s*s**(-5/2)*s*s)/s))/s*s*s*s*s/(s*s**7*s)*s assuming s is positive. s**(-201/14) Simplify (u/u**(-3))**(-42) assuming u is positive. u**(-168) Simplify ((q/q**(-1/2))/(q/((q*q**(-6)*q)/q)))**(1/9) assuming q is positive. 1/sqrt(q) Simplify (k**(5/4)*k)**(-1/8) assuming k is positive. k**(-9/32) Simplify ((n**(-2/7))**(-21))**(2/51) assuming n is positive. n**(4/17) Simplify ((a**(-39)/a)/a)/(a**(-32)*a) assuming a is positive. a**(-10) Simplify (w**(-2/3))**24/(w**(-3)/w**1) assuming w is positive. w**(-12) Simplify a**(-2/61)*a/a**12 assuming a is positive. a**(-673/61) Simplify ((c/c**(1/22)*c)/c)**11 assuming c is positive. c**(21/2) Simplify ((y/y**0)/y*(y*y**2*y)/y)/(y**1)**(-31) assuming y is positive. y**34 Simplify (l*l*l/((l*l**(-47))/l))**(3/2) assuming l is positive. l**75 Simplify (v**(-7))**(1/6) assuming v is positive. v**(-7/6) Simplify g**(-5/4)/(g**9*g)*(g**(-3/10)*g)/g*g/(g**(-2/5)/g) assuming g is positive. g**(-183/20) Simplify (j**(2/7))**(47/5)/(j**(2/7))**(-1/29) assuming j is positive. j**(2736/1015) Simplify ((b*b/(b*b**24/b*b)*b*b)/b)**(-7) assuming b is positive. b**154 Simplify (i**(-2/25)/(i**(-12)*i))**21 assuming i is positive. i**(5733/25) Simplify u**(-5/2)*u**(-8)*u*u*u/(u*u**(-6))*u/(u/((u/(u**2/u))/u)) assuming u is positive. u**(-7/2) Simplify z**(-2/5)/z*z**(1/4)*z**(-1/8)*z**(2/11)/z assuming z is positive. z**(-921/440) Simplify y/y**(-5)*(y*y/((y*y*y**(-3/7)*y*y)/y)*y)/y*(y**(-2/7))**6 assuming y is positive. y**(26/7) Simplify ((o*o**0)**(-7/10))**22 assuming o is positive. o**(-77/5) Simplify n**(-2/7)*n*n*(n/n**(-1/49)*n*n)/n assuming n is positive. n**(183/49) Simplify (o**7/(o*o*o/(o/(o*o**(-5)))))/(o**(-1/7)/o**(-2/9)) assuming o is positive. o**(562/63) Simplify m**(3/5)*m**(-2/11)/m assuming m is positive. m**(-32/55) Simplify ((c/c**(-1/3)*c*c*c)/(c/(c**4/c*c)))**(2/39) assuming c is positive. 
c**(44/117) Simplify (c/(c/((c/c**(9/4))/c*c)))**24 assuming c is positive. c**(-30) Simplify (i*i/(i*(i*i**(-5/3)*i)/i))/(i*i**12) assuming i is positive. i**(-34/3) Simplify (((((m*m*m**(-1/3)*m)/m)/m)/m)/m**5)/(m**(-1/2))**(6/13) assuming m is positive. m**(-199/39) Simplify (b**4/(b*(b*b*b*b**4)/b))/(((b**(2/7)/b)/b)/(b/b**(2/5))) assuming b is positive. b**(-24/35) Simplify (q**42*q)/q**(-11) assuming q is positive. q**54 Simplify a**(-2/7)*a**(-8)*(a**0)**(-3/34) assuming a is positive. a**(-58/7) Simplify (d**6*d)/d**4*d/d**(-3)*d/d**(-1) assuming d is positive. d**9 Simplify ((d/(d**(-5)/d))/(d/d**6))/(d**(-5)/d*d*d/(d/(d**1*d*d)*d)*d) assuming d is positive. d**14 Simplify (r**(-1)/(r**(-3)/r))/(r/(r**(-1/3)*r))**35 assuming r is positive. r**(-26/3) Simplify (w**8)**8 assuming w is positive. w**64 Simplify (k**(4/5))**(-23) assuming k is positive. k**(-92/5) Simplify (((k*k**2)/k)/k)/(k/(k/(((k**(-2/3)*k*k)/k)/k))) assuming k is positive. k**(5/3) Simplify (a/(a/((a*a**(-9/8))/a)*a))**(2/137) assuming a is positive. a**(-17/548) Simplify (l/l**0)**(1/59) assuming l is positive. l**(1/59) Simplify f**(-1/35)/(f*f*f**(-12/7)/f*f) assuming f is positive. f**(-11/35) Simplify i**(-1/9)*i**(2/71) assuming i is positive. i**(-53/639) Simplify (s**(-2/3)/s)**(-12/11)/(s**(-1)/(s*(s/((((s**(-5/3)/s)/s)/s*s)/s))/s)) assuming s is positive. s**(280/33) Simplify r**(2/5)/r*(r**(1/7)*r)/r*(r**0)**(8/3) assuming r is positive. r**(-16/35) Simplify (m/m**(-16)*m)**(-2/5) assuming m is positive. m**(-36/5) Simplify (n/(n/((n**(1/3)/n)/n))*n)/n*n**(-19) assuming n is positive. n**(-62/3) Simplify (g/g**7*g**(-6)*g)/(g*g**(1/2))**(-11) assuming g is positive. g**(11/2) Simplify (b**(2/7)*b)**(-29) assuming b is positive. b**(-261/7) Simplify (w*w*w/(w*w**(-16))*w*w*w*w*w/((w*(w**(2/5)*w)/w)/w))**27 assuming w is positive. w**(3051/5) Simplify m**(11/4)*m*m/(m*(m*m/(m**(-1/20)/m*m)*m)/m) assuming m is positive. 
m**(17/10) Simplify ((t**1)**(6/5))**(2/21) assuming t is positive. t**(4/35) Simplify v**(2/37)/(v**(-11)/v) assuming v is positive. v**(446/37) Simplify o/(o*o*o**(11/3))*o*o*o**(9/2)*o assuming o is positive. o**(17/6) Simplify (v/(v/(v*v**(-20))))/(v**(3/2)/v) assuming v is positive. v**(-39/2) Simplify ((m*m/m**11*m)/m)**(-2/75) assuming m is positive. m**(6/25) Simplify (b**(2/13)*b**(-1/4)/b*b*b*b)/((b*b**4)/(b*b*b**(-1/6)*b)) assuming b is positive. b**(-41/156) Simplify ((i/(((i**(-4/5)*i)/i)/i))/(i*i**3*i))/(i/i**(-1)*i**(4/7)) assuming i is positive. i**(-167/35) Simplify v**2/((v*v**(-2/5))/v)*(v*v*v**3/v)/(v**(-1/6)*v*v) assuming v is positive. v**(137/30) Simplify (d/((d/(d*d**12*d))/d)*d)/((d**45/d)/d) assuming d is positive. d**(-27) Simplify h**(2/11)*h/((h**(-4)*h*h)/h)*h**(-2/7)*h*h**(-3/5) assuming h is positive. h**(1654/385) Simplify u**(7/5)*u**(-2/11)*u**(-1/6)/((u*u**(2/5)*u)/u) assuming u is positive. u**(-23/66) Simplify ((b**(-2)*b*b)/b**(1/2))**47 assuming b is positive. b**(-47/2) Simplify q**(-7)/(((q/((q*q*q*q**(-8)/q)/q*q))/q)/q)*(q**(1/3))**(4/19) assuming q is positive. q**(-680/57) Simplify (((n/n**(-1))/n)/n*n)**(-5)/(n**1*n*n)**(1/19) assuming n is positive. n**(-98/19) Simplify (m*m**(-1)/m)**(-45)/((m/m**(1/4))/m**(4/7)) assuming m is positive. m**(1255/28) Simplify (y**3*y**3*y*y)**(-16) assuming y is positive. y**(-128) Simplify (v**2/v)**5 assuming v is positive. v**5 Simplify f**(-3/10)/f*f**(3/4)*f*f*(f**0)**(-15) assuming f is positive. f**(29/20) Simplify (k**(-2)/k**(-3))**(-1) assuming k is positive. 1/k Simplify ((t/(t/((t**(-1/3)/t)/t)))**(2/5))**(-44) assuming t is positive. t**(616/15) Simplify (a/(a*a/(a**(-4/7)*a))*a**(-3))**(-21) assuming a is positive. a**75 Simplify (j**(-1/2))**(-17)/((j/(j/(j/j**(-4)*j)))/j*j**(6/7)) assuming j is positive. j**(37/14) Simplify (q/((q*(q*q**(2/5)*q)/q*q)/q))**(-32)/(q*q*q*q/(q**(-5)*q)*q*q*q*q**(-1/10)) assuming q is positive. 
q**(339/10) Simplify (i*i*i*i*i**(2/5)*i*i)/i**9*((i*i**(2/15)/i*i)/i)/i**(-5) assuming i is positive. i**(38/15) Simplify (a**4)**9 assuming a is positive. a**36 Simplify z/z**(-1)*z/z**(-2/21)*(z/z**(2/3))**3 assuming z is positive. z**(86/21) Simplify (u*(u*u**(-1/8)/u)/u)/u*u**(-7)/u*(u*u**5*u)/u**(-7) assuming u is positive. u**(39/8) Simplify t**(-2/3)/t*t*t*t/(t*t*t**(-4/11)/t*t*t*t*t)*(t/((t**(-4)/t)/t*t))/(t/(t*t**4*t)) assuming t is positive. t**(254/33) Simplify (k/k**(-40))**(-27) assuming k is positive. k**(-1107) Simplify ((c**(1/4)*c)/(c*c/(c/(c**0/c))))/(c**(-3)*c**4) assuming c is positive. c**(1/4) Simplify (v/(v/(v/v**(6/5)))*v)/v*v*v/v**(-24) assuming v is positive. v**(129/5) Simplify ((n**(4/5)/n)/(n/(n/n**(-1/8))))/(n*n**0*n**(-8)/n) assuming n is positive. n**(317/40) Simplify (s**15*s**(-6))**(-9/8) assuming s is positive. s**(-81/8) Simplify (w**4)**(-39) assuming w is positive. w**(-156) Simplify k**(2/9)*k*k*k/(k**(-1/2)*k)*k*(k/(k**5/k))/k*k*(k*k**(-8))/k assuming k is positive. k**(-131/18) Simplify p**16*p/(p*p**16)*p assuming p is positive. p Simplify (f**(-2))**(1/4) assuming f is positive. 1/sqrt(f) Simplify ((q**8/q)/q**(-8))**(-28/3) assuming q is positive. q**(-140) Simplify (((a**(2/15)*a)/a)/(a*a**(-1/5)))**(-49) assuming a is positive. a**(98/3) Simplify f**(-2/3)*f**(3/8) assuming f is positive. f**(-7/24) Simplify ((q*q**(-1/9))/q**4)/(q**4/q**(-3/5)) assuming q is positive. q**(-347/45) Simplify ((c*c**(-4/7))/(c**(-8)/c))**(2/87) assuming c is positive. c**(44/203) Simplify x**(-2/75)*x/x**3 assuming x is positive. x**(-152/75) Simplify o**(-13)/o**(-1/3) assuming o is positive. o**(-38/3) Simplify b**(-14)*(b*b**(-6))/b assuming b is positive. b**(-20) Simplify (g*g/(((g/g**(-1/2))/g)/g)*g)**(-2/43)/(g/g**(-1))**(-3/19) assuming g is positive. g**(125/817) Simplify (u*u*u*(u*(u/(u/u**(-1/4)))/u)/u*u*(u/((u*u*u**(5/2)/u)/u*u)*u)/u*u)/(u**(1/3)*u**(-2/3)) assuming u is positive. 
u**(19/12) Simplify s**(-3/2)*s*s**16/s assuming s is positive. s**(29/2) Simplify (y*y*y**13)**(3/44) assuming y is positive. y**(45/44) Simplify q/((q**(-6/5)/q)/q)*q*q*q**(-5/3)*(q**(1/4))**(-1/49) assuming q is positive. q**(13313/2940) Simplify (q/((q**(-4/9)*q*q)/q*q))/q**(-11) assuming q is positive. q**(94/9) Simplify (x**(-1/2))**(-16) assuming x is
Save up to 75% on Allen Edmonds Shoes, Boots and More

For over 95 years, Wisconsin-based Allen Edmonds has crafted a great selection of Goodyear-welted shoes and boots. Its dress styles are essentially the American standard, and its casual boots are on par with some of the best. If you're in the market for some new footwear and have been waiting for a good sale, head over to Allen Edmonds today. Many clearance styles — including American-made oxfords and Italian-made loafers — are up to 75 percent off. You can even score a great deal on shirts, sweaters and belts if your closet is also in need of some extra attention.
Martín Solís Alatorre Martín Hugo Solís Alatorre (born 30 January 1959) is a Mexican politician from the National Action Party. From 2000 to 2003 he served as Deputy of the LVIII Legislature of the Mexican Congress representing the State of Mexico. References Category:1959 births Category:Living people Category:Politicians from the State of Mexico Category:Members of the Chamber of Deputies (Mexico) Category:National Action Party (Mexico) politicians Category:21st-century Mexican politicians
WE NO LONGER DISTRIBUTE A DVD OF THE MIA! And we are sorry... ...but don't cry... The entire Marxists Internet Archive is over 240GB in size. Additionally, we now include the Riazanov Library Project, a sister project of the MIA that carries very high resolution scans of many of the same files we offer on the MIA at lower resolution, along with multiple versions of the same files. This latter project is over 230GB by itself. The DVD set could only include about 12GB. It is simply outdated, and we have moved on to make available a 1 TB portable USB hard drive to take its place. Unfortunately, given the much higher price of the hard drive, we are unable to offer this for free to those in qualifying countries.
V ==K Pair ( Pair ( 1 , Wrapper ( 2 , .Bottoms ) , .Bottoms ) , Pair ( 3 , 4 , .Bottoms ) , .Bottoms )
Q: MonetDB refresh data in background: best strategy with active connections making queries

I'm testing MonetDB and getting amazing performance while querying millions of rows on my laptop. I expect to work with billions in production, and I need to update the data as often as possible — let's say every 1 minute, or every 5 minutes in the worst case. Just updating existing records or adding new ones; deletion can be scheduled once a day. I've seen good performance for the updates in my tests, but I'm a bit worried about the same operations over three or four times more data. As for BULK insert, I got 1 million rows in 5 secs, so good enough performance right now as well. I have not tried deletion. Everything works fine unless you run queries at the same time you update the data; in that case everything seems to freeze for a very long time. So, what's the best strategy to keep MonetDB updated in the background? Thanks

A: You could do each load into a new table with the same schema, then create a VIEW that unions them all together. Queries will run on the view, and dropping and recreating that view is very fast. However, it would probably be best to merge some of these smaller tables together every now and then. For example, a nightly job could combine all load tables from the previous day(s) into a new table (it runs independently, so no problem) and then recreate the view again. Alternatively, you could use BINARY COPY INTO to speed up the loading process in the first place.
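A minimal sketch of the view-swap pattern described in the answer. The table and view names are hypothetical, and the exact `COPY INTO` options (delimiters, escaping, `CREATE OR REPLACE VIEW` availability) depend on your MonetDB version — check its documentation; on older versions use `DROP VIEW` followed by `CREATE VIEW`:

```sql
-- Each load lands in its own table (names are illustrative).
CREATE TABLE readings_load_1 (ts TIMESTAMP, sensor INT, val DOUBLE);
COPY INTO readings_load_1 FROM '/data/batch_1.csv' USING DELIMITERS ',';

-- Queries go through the view; recreating it after each load is cheap,
-- so active readers only ever see a consistent union of completed loads.
CREATE OR REPLACE VIEW readings AS
    SELECT * FROM readings_load_0
    UNION ALL
    SELECT * FROM readings_load_1;
```

A nightly job can then merge the small load tables into one (`CREATE TABLE ... AS SELECT * FROM readings`) and recreate the view over the merged table.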
Clyde, New Zealand Clyde, formerly Dunstan, is a small town in Central Otago, New Zealand, with a population of 1023 in 2018. It is located on the Clutha River, between Cromwell and Alexandra. Clyde grew up around the former settlement of Dunstan during the Central Otago goldrush of the 1860s. The town could once claim to be the most populous in New Zealand during the height of gold fever. The town's post office (and thus the town) was officially renamed from Dunstan to Clyde on 22 May 1865, after Lord Clyde. More recently the town has been known for the Clyde Dam, a giant hydroelectric dam at the north end of the town, behind which lies Lake Dunstan. The Clutha River is the swiftest river (by volume) in the southern hemisphere. The river then runs to the Roxburgh Dam before finally meeting the sea at Balclutha. The town is a popular holiday spot. It lies at the western end of the Otago Central Rail Trail. The Otago Central Branch Railway originally terminated at Cromwell, but this section of the railway was closed in 1980, with the railway to Clyde used to bring materials for the dam project. The rail trail is nowadays often cycled and walked by visitors and locals alike. The township is home to Dunstan Hospital, serving the surrounding district, including Alexandra and Cromwell. The hospital was rebuilt in 2006 with the original building remaining. Clyde is fast becoming known as a tourist haven. The location is particularly attractive to those visiting the region's many vineyards and orchards. The regional weather is particularly warm and dry during the summer months due to the rain shadow effect caused by the Southern Alps (given New Zealand's westerly winds). There is a single school in Clyde, the Clyde Primary School. The closest high school is Dunstan High School, which is situated 10 km away in Alexandra. The closest university is the University of Otago, 200 km away in Dunedin.
During the week beginning 22 September, the Clyde/Alexandra district hosts a Blossom Festival. This event celebrates the beginning of spring, which brings the blossoming of fruit trees in the area's orchards. Entertainment at the festival includes a parade with floats made by local businesses, fun park rides, bands and drag races. Notable people born here Steffan Browning, former Green Party MP Jason Hewett, All Black Roy Scott, cricketer References External links Clyde (official website) Promote Dunstan (Promotions Group for the region) Weather Page (Un-Official) (Uploaded from Private Weather Station) Category:Central Otago District Category:Clutha River Category:Populated places in Otago
Introduction {#s0001}
============

For advanced chronic kidney disease patients who will soon be initiated on HD therapy, the choice of vascular HD access remains a big challenge. Guidelines from different countries strongly recommend the native arteriovenous fistula (AVF) as the best method of vascular access for patients undergoing hemodialysis \[[@CIT0001]\]. It is well established that the AVF is superior to the other types of vascular access, the central venous catheter (CVC) and arteriovenous graft (AVG), since it provides the best longevity, lower rates of infection, and the least association with mortality and morbidity in the majority of patients \[[@CIT0012]\]. Despite these advantages of AVF, the number of patients undergoing dialysis through a CVC or AVG is high \[[@CIT0016]\]. In 2003, the Fistula First Breakthrough Initiative (FFBI), a national access improvement initiative to encourage the use of AVF as vascular access in the HD population, was established as a collaboration among the Centers for Medicare & Medicaid Services (CMS), the end stage renal disease (ESRD) Networks, and the entire renal community \[[@CIT0010]\]. The FFBI's initial goal was to increase the percentage of native AVF use to 44%; in 2009 the percentage of HD patients having an AVF was 65%, which exceeded the initial goal \[[@CIT0017]\]. Meanwhile, the overall proportion of prevalent AVF utilization increased from 33% of all HD patients in 2003 to 62.7% by mid-2016 \[[@CIT0010],[@CIT0016]\]. However, the 2015 United States Renal Data System (USRDS) annual data reported that the percentage of patients initiating HD therapy through a CVC was 80% \[[@CIT0016]\]. Achieving optimal AVF access is a complicated process, and many barriers have been described, including hospital system-, patient-, and provider-related factors \[[@CIT0018]\].
According to the 2017 annual health report of the Palestinian Ministry of Health, the overall number of HD patients increased from 1014 patients in 2015 to 1119 patients in 2016 across 12 different dialysis centers in the West Bank, Palestine \[[@CIT0018]\]. To the best of our knowledge, this is the first study in Palestine to investigate vascular access-relevant issues among HD patients, including perceptions of and barriers to AVF use. The aim of this study is to explore patients' perceptions of the advantages of AVF and the perceived barriers that impede its utilization as the first vascular access choice.

Material and methods {#s0002}
====================

Study design and setting {#s0003}
------------------------

In this cross-sectional study, we investigated the attitudes toward AVF and the perceived barriers to its creation among Palestinian HD patients. We recruited all adult participants aged 18--85 years receiving HD as outpatients from August to December 2018 at the Palestinian Medical Complex Hospital in Ramallah, Palestine, which is considered one of the largest Ministry of Health dialysis units in Palestine by the total number of patients who undergo hemodialysis weekly.

Participants {#s0004}
------------

We screened 198 participants who had the diagnosis of ESRD and were undergoing regularly scheduled HD sessions on Saturday--Monday--Wednesday or Sunday--Tuesday--Thursday. Exclusion criteria included pediatric age group (less than 18 years); acute dialysis; major mental or neurological illness that precluded the ability to provide full informed consent; refusal to participate; death before completing their data; and unavailability at the time of the study. This study was carried out in accordance with the recommendations of the Al-Quds University Research Ethics Committee, with written informed consent from all subjects. The protocol was approved by the Al-Quds University Research Ethics Committee.
Data collection {#s0005}
---------------

All participants underwent in-person interviews either before, after, or during the HD session using structured questions. Patients' medical records were reviewed to collect demographic and characteristics information. Demographic data collected included age, sex, weight, height, and body mass index (BMI). The presence of comorbidities, including diabetes mellitus, hypertension, dyslipidemia, coronary artery disease, or cerebrovascular disease, was recorded. Data pertaining to the cause of ESRD (diabetes mellitus, hypertension, polycystic kidney disease, glomerulonephritis, other, or unknown) were also obtained. Information was collected regarding HD initiation, access, and attitudes toward fistula creation and use, including time in months from HD initiation, current access method, and whether vein mapping was done before vascular access creation. In addition, data were gathered on whether patients had previously received sufficient education about AVF, and for those patients who did not have a fistula or had a delay in its creation, perceived barriers were explored in detail. Furthermore, patients were asked whether they would recommend AVF as the preferred access to others and the reasons for their recommendation, as well as the characteristics of those who refused a fistula.

Statistical methods {#s0006}
-------------------

Data were summarized by calculating means and standard deviations (*SD*) or medians and ranges for quantitative variables and percentages for categorical variables. Descriptive terms were used where appropriate. The reported attitudes and perceived barriers were analyzed as categorical variables.

Results {#s0007}
=======

We screened 198 Palestinian patients who had the diagnosis of ESRD and were undergoing regularly scheduled HD therapy during the study period from August to December 2018.
Of them, 156 were included in our study and 42 were excluded (3 were in the pediatric age group (less than 18 years); 2 refused to participate; 22 died before completing their data; and 15 were unavailable at the time of the study). Patients' ages ranged from 18 to 85 years (*M* = 55; *SD* = 15); there were 92 males and 64 females (59% and 41%, respectively), and 29 (19%) were smokers. The average BMI was 26 (*SD* = 6). At the time of the study, patients had been on dialysis for an average of 24 months (range, 1 to 216). Detailed demographic characteristics, including the cause of ESRD and major associated comorbidities, are shown in [Table 1](#t0001){ref-type="table"}. The current access method for hemodialysis by age group showed that the AVF is most used in the patient groups younger than 55 years and between the ages of 67 and 79, while 60% of patients between 55 and 66 years use a permanent CVC ([Table 2](#t0002){ref-type="table"}). Patient attitudes and perceived barriers toward AVF creation are presented in [Tables 3](#t0003){ref-type="table"} and [4](#t0004){ref-type="table"}.

###### Baseline demographics and characteristics of study participants.
  Patient characteristics                      Overall *n* = 156
  -------------------------------------------- -------------------
  Baseline demographics                        
  Age (years), mean ± *SD*                     55 ± 15
  Gender                                       
   Male, *n* (%)                               92 (59)
   Female, *n* (%)                             64 (41)
  Weight (kg), mean ± *SD*                     74.2 ± 16.6
  Height (m), mean ± *SD*                      1.66 ± 8.5
  BMI (kg/m^2^), mean ± *SD*                   26 ± 6
  Smoker, *n* (%)                              29 (19)
  Cause of ESRD                                
   Diabetes mellitus, *n* (%)                  68 (44)
   Hypertension, *n* (%)                       23 (15)
   Adult polycystic kidney disease, *n* (%)    8 (5)
   Glomerulonephritis, *n* (%)                 21 (13)
   Other, *n* (%)                              19 (12)
   Unknown, *n* (%)                            17 (11)
  Associated comorbidities                     
   Diabetes mellitus, *n* (%)                  87 (56)
   Hypertension, *n* (%)                       108 (69)
   Dyslipidemia, *n* (%)                       60 (38)
   Coronary artery disease, *n* (%)            67 (43)
   Cerebrovascular disease, *n* (%)            11 (7)
   Peripheral vascular disease, *n* (%)        34 (22)

BMI: body mass index; ESRD: end stage renal disease.

###### Current access method and duration of HD based on age group.

                                                        \<55 years      55--66 years    67--79 years    ≥80 years
  ----------------------------------------------------- --------------- --------------- --------------- -----------
  Current access method                                                                                  
   Temporary CVC, *n* (%)                                1 (1.5%)        3 (5.5%)        --              1 (100%)
   Permanent CVC, *n* (%)                                21 (31.3%)      33 (60%)        14 (42.4%)      --
   AVF, *n* (%)                                          43 (64.2%)      19 (34.5%)      19 (57.6%)      --
   AVG, *n* (%)                                          2 (3%)          0               --              --
  Time in months since HD initiation, median (range)    37.9 (1--216)   25.1 (2--108)   38.6 (1--216)   --

HD: hemodialysis; CVC: central venous catheter; AVF: arteriovenous fistula; AVG: arteriovenous graft.

###### Perceived barriers toward AVF creation based on age group.
                                                             \<55 years   55--66 years   67--79 years   ≥80 years
  ---------------------------------------------------------- ------------ -------------- -------------- -----------
  Perceived barrier to AVF[^a^](#TF3){ref-type="table-fn"}                                               
   Late referral to surgical evaluation                       21 (38.2%)   14 (27.5%)     7 (25.9%)      --
   Refusal to undergo AVF surgery                             27 (49.1%)   28 (54.9%)     17 (63%)       1 (100%)
   Too long to surgical appointments after referral           7 (12.7%)    9 (17.6%)      3 (11.1%)      --

^a^Out of 134 patients with non-AVF dialysis access or delayed AVF creation.

###### Attitudes toward AVF creation.

  Reported outcome                                                  *n* (%)
  ----------------------------------------------------------------- ------------
  Previously received sufficient education about AVF?               
   Yes                                                              72 (46)
   No                                                               84 (54)
  Previous vein mapping done?                                       
   Yes                                                              87 (56)
   No                                                               69 (44)
  Would you recommend AVF to other HD patients?                     
   Yes                                                              115 (73.7)
   No                                                               26 (16.7)
   Not reported/not certain                                         15 (9.6)
  If the answer to the above question is Yes, why?                  
   Less infection                                                   71 (60.2)
   Easier to care for                                               12 (10.2)
   Easier showering                                                 10 (8.5)
   Better hygiene                                                   2 (1.7)
   All of the above                                                 20 (16.9)
   Other/unspecified                                                3 (2.5)

The most common cause for lack of an AVF was refusal to undergo AVF surgical procedures, in 73 patients (54.5%), with no difference among those patients based on age group, followed by late referral to a surgical evaluation in 42 patients (31.3%) and too long a wait for surgical appointments after referral in 19 (14.2%; [Figure 1](#F0001){ref-type="fig"}). Of the patients who refused to undergo surgical procedures, 31 (42.5%) were concerned about the surgical procedure itself, 17 (23.3%) had a poor understanding of their disease and access needs, 11 (15.1%) feared needles, 13 (17.8%) denied their disease or even their need for HD, and 1 (1.4%) patient cited other causes, including cosmetics ([Table 5](#t0005){ref-type="table"}).
![Causes of lack of AVF as dialysis access.](IRNF_A_1748650_F0001_C){#F0001}

###### Characteristics of those who refused fistula or do not recommend it.

  Characteristics of those who refused fistula              *N* (%)
  --------------------------------------------------------- -----------
  Total number of those who refused/do not recommend AVF    73 (54.5)
  Reason for refusing AV fistula                            
   Concern about the surgical procedure/refused surgery     31 (42.5)
   Poor understanding of disease/access needs               17 (23.3)
   Fear of needles                                          11 (15.1)
   Denial of disease or need for HD                         13 (17.8)
   Other, including cosmetic reasons                        1 (1.4)

HD: hemodialysis; AVF: arteriovenous fistula.

Of the overall group, 72 (46%) reported that they had received sufficient education and information about AVF prior to creation; on the other hand, 84 patients (54%) thought that was not the case. Vein mapping was done for 87 patients (56%) prior to an attempt at fistula creation. One hundred fifteen patients (73.7%) would strongly recommend AVF to other HD patients as a method of vascular access. Patients attributed their preferences and recommendations for AVF to many reasons, including decreased risk of infection (71; 60.2%), easier care (12; 10.2%), easier showering (10; 8.5%), and better associated hygiene (2; 1.7%); three patients (2.5%) reported unspecified causes, and 20 (16.9%) cited all of these reasons in combination ([Figure 2](#F0002){ref-type="fig"}). Overall, 26 patients (16.7%) did not recommend AVF as the method of HD access to other patients ([Table 4](#t0004){ref-type="table"}).

![Why would you recommend AVF?](IRNF_A_1748650_F0002_C){#F0002}

Discussion {#s0008}
==========

Current clinical practice guidelines from different countries strongly advocate the AVF as the best vascular access, since it has been considered to have the lowest risk of complications and need for interventions, the best long-term patency, and superior patient survival \[[@CIT0020]\].
Having an AVF prior to the commencement of HD is not only associated with lower morbidity and mortality but also with better patient-reported quality of life and lower health care expenditure \[[@CIT0025],[@CIT0026]\]. Despite this, many patients maintained on HD therapy use a CVC \[[@CIT0027]\]. A recent study conducted in the same area to investigate the rate of pre-dialysis nephrology care and AVF usage among 156 chronic HD patients showed a high incidence of CVC use, and a relatively large portion of HD patients did not have any pre-dialysis nephrology care. Furthermore, a low incidence of AVF utilization was found even in patients who received pre-dialysis care \[[@CIT0028]\]. Investigating the system, physician, and patient characteristics that may be responsible for delays in AVF creation remains an ongoing challenge. To the best of our knowledge, no previous studies have presented data that help address the attitudes and perceived barriers to timely AVF creation and utilization among HD patients in Palestine, which is the aim of this analysis. Our study found that most HD patients believe that the AVF is the best choice of vascular access for many reasons; the vast majority of them would recommend it to fellow patients who are newly starting dialysis because the existing patients believe it carries a lower risk of infection compared with a CVC. Other reported advantages relate to quality of life, including easier care, better hygiene, and easier showering. These patient attitudes toward AVF could be attributed to their personal experience with CVC-related complications, in particular infection, when they used it as the method of initial HD vascular access; they may also have been influenced by the experiences of other patients who suffered from the drawbacks associated with CVC.
A previous study of a national random sample of 1563 HD patients conducted in the United States investigated the relationship of initial HD vascular access type with patient-reported health status and quality of life scores at the time of HD initiation and at day 60. Their results showed that patients with an AVF at initiation and at day 60 reported greater perceived physical activity and energy, better emotional and social well-being, fewer symptoms, less effect of dialysis and burden of kidney disease, and better sleep compared with patients with persistent CVC use \[[@CIT0025]\]. In addition, Do Hyoung Kim et al. investigated the effects of vascular access type on survival, quality of life, and depression among 1461 patients who newly initiated HD. The primary outcomes were all-cause mortality, HRQOL, and depression; the secondary outcome was all-cause hospitalization. Kidney Disease Quality of Life Short Form 36 (KDQOL-36) and Beck Depression Inventory scores were measured to assess HRQOL and depression. In the survival analysis, patients with an AVF had better survival and lower hospitalization rates, and patients with an AVF or AVG showed both higher HRQOL and lower depression scores than those with a CVC \[[@CIT0029]\]. In another cohort study, preferences and concerns regarding HD vascular access were reviewed by asking 128 HD patients and 64 dialysis nurses, technicians, HD access surgeons, and nephrologists; it was found that patients preferred a superficial access in the forearm that was easy to cannulate, had minimal effect on their appearance, provided quick hemostasis after dialysis, and enabled arm comfort during dialysis, whereas from their point of view the most common problem was pain during needle insertion \[[@CIT0030]\].
In our study, we found that the most reported perceived barriers for those who were not dialyzing through an AVF, or who had a delay in AVF creation, were refusal to undergo AVF surgery, late referral to surgical evaluation, and too long a wait for surgical appointments after referral. In a study of a cohort of 319 HD patients conducted in nine nephrology centers in New Zealand and Australia, barriers to timely AVF creation were investigated. Their results revealed that lack of formal policies for patient referral, absence of a patient database for access purposes that could facilitate management, and long wait times to surgical evaluation and access creation were the perceived barriers to access creation \[[@CIT0031]\]. These barriers were previously implicated by nephrologists and primary care providers in a qualitative study to identify modifiable challenges to adequate preparation of patients for renal replacement therapy \[[@CIT0032]\]. With regard to the patients who refused to undergo AVF creation, which was the main barrier in our sample (54.5%), more details were explored to find the reasons behind their refusal. Our study revealed that concern about the surgical procedure (42.5%), poor understanding of the disease or access needs, fear of needles, denial of disease or of the need for HD, and cosmetic reasons were the most cited patient-related barriers. In a related systematic review of qualitative studies aiming to understand the attitudes, beliefs, preferences, and values of 375 patients who refused AVF creation or use, Xi et al. performed interviews with those patients investigating the rationale for their decision making \[[@CIT0033]\]. Three main reasons that affected the decision to refuse an AVF were identified: poor previous experience with the fistula, such as issues with cannulation, bleeding, or appearance; issues with knowledge transfer and informed decision making; and patient acceptance of the current status quo without a desire for change.
Patients can have a strong preference for the status quo and be reluctant to switch from an acute-start CVC access to a long-term AVF, which may explain why a large number of patients in our sample refused AVF \[[@CIT0034]\]. In contrast to our study, decreasing the infection rate was not the focus of the 375 patients who refused AVF creation or use in the study of Xi et al. \[[@CIT0033]\]. The same was seen in another study that investigated patients' perspectives on complications of vascular access-related interventions and found that infectious complications were not reported as a major concern by patients when access modalities were compared. On the other hand, physical complications, manifested as fear and pain associated with cannulation, were more likely a cause of patients' dissatisfaction with AVF compared to CVC access \[[@CIT0035]\]. In another study, fear of painful and difficult cannulation and patients' trust in their ability to manage complications of CVC were the reasons to avoid AVF \[[@CIT0036]\]. Nearly half of our patients reported that they received insufficient education about the different types of HD access and the pros and cons of each. It was previously noted that patients with less dialysis knowledge were less likely to use arteriovenous access at dialysis initiation or to transition to an AVF after starting hemodialysis, since poor understanding of the AVF is an important patient-related barrier \[[@CIT0037]\]. In fact, 23% of our sample reported a poor understanding of vascular access; this is a modifiable challenge, and improving patient education may facilitate AVF utilization \[[@CIT0038]\]. Several factors, including age (young vs. old), contribute to the heterogeneity in prevalent AVF use and its distribution \[[@CIT0039]\]. In our study, the largest percentage of patients currently using an AVF were under the age of 55 or between 67 and 76.
Conclusion {#s0009}
==========

In this study among dialysis patients in Ramallah/Palestine, most participants would recommend an AVF as the mode of access. Barriers to AVF use fell into three major categories: provider-related, in which there is late referral to surgical evaluation; system-based, including long waits for surgical appointments after referral; and issues pertaining to the patient, who may refuse or be reluctant to undergo AVF surgery. The reasons behind patients' refusal of AVF were concern about the surgical procedure, poor understanding of the disease or access needs, denial of disease or need for HD, and fear of needles. These results suggest the need for a systematic evaluation of the attitudes that precede AVF creation to identify potential targets for care improvement; timely referral to surgical evaluation, together with sufficient education about HD access methods, may allow better AVF utilization in HD patients in Palestine. Furthermore, engaging patients in care planning and decision making may improve patient knowledge about treatment options and adherence.

Geolocation information {#s0010}
=======================

This study was conducted at the national dialysis center in Ramallah, West Bank, Palestine, which is considered one of the largest Ministry of Health dialysis units in Palestine by the total number of patients who undergo hemodialysis weekly.

Disclosure statement {#s0011}
====================

The authors declare no conflict of interest.
{ "pile_set_name": "PubMed Central" }
Sorry Not Sorry

10 Days

Sometimes there are things that just need to be said. This Bible plan walks through some of the values that tend to get lost in the local church because, over time, we tend to make church all about ourselves. It's time to get back to the basics. Buckle up: this might hurt a little. Sorry, not sorry.

Publisher

We would like to thank Journey Church for providing this plan. For more information, please visit: http://JRNY.church
{ "pile_set_name": "Pile-CC" }
“We’ll get to that later.” That was the dismissive answer of Speaker John Boehner on Thursday, when asked if the House would restore the food stamp program it had just coldly ripped out of the farm bill. “Later,” he said, Republicans will deal with the nation’s most important anti-hunger program. “Later,” maybe, they will think about the needs of 47 million people who can’t afford adequate food, probably by cutting the average daily subsidy of $4.39. But right then their priorities were clear, as a bare majority rushed to provide $195.6 billion over 10 years to Big Agriculture. Most of the money went to subsidies for crop insurance and commodities, demanded by the corn, rice and sugar barons who fill campaign coffers. The choice made by the House in cutting apart the farm bill was one of the most brutal, even in the short history of the House’s domination by the Tea Party. Last month, the chamber failed to pass a farm bill that cut $20.5 billion from food stamps because that was still too generous for the most extreme Republican lawmakers. So, in the name of getting something — anything — done, Mr. Boehner decided to push through just the agriculture part of the bill. For decades, farm subsidies and food stamps have been combined for simple reasons of political expediency. Farm-state lawmakers went along with food stamps to keep the crop subsidies flowing; urban lawmakers did the reverse. The coalition may have been an uneasy one, and it cost the taxpayers untold billions in wasteful payments to growers, but that was the price for helping the hungry.
{ "pile_set_name": "OpenWebText2" }
Q: How to display city limits using Google Maps API V3 If I do a search for a city or neighbourhood which Google recognizes, like Fernwood, Victoria, I get a pretty map with a drawn boundary. Is there any way to access this kind of information with the Google Maps API? I am not looking to draw my own lines on maps; I am looking to make a map showing multiple neighbourhoods with limits on the same map. Bonus points for the ability to style these limits (e.g. fill them in like a regular polygon)?

A: I'm pretty sure that this kind of functionality is not supported by the API and works only for Google Maps itself. I implemented this by finding the boundaries I needed, inserting them into my PostGIS DB, and then sending them as JSON whenever the polygon was within the map viewport. Hope it helps
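The PostGIS-plus-JSON approach from the answer can be sketched in plain JavaScript. Everything below (`boundsIntersect`, `visibleNeighbourhoods`, the sample coordinates) is an illustrative assumption, not part of the Maps API; it only shows the "send the polygon when it is within the viewport" filtering step.

```javascript
// An axis-aligned bounding box: { south, west, north, east } in degrees.
// Two boxes intersect when they overlap on both axes.
function boundsIntersect(a, b) {
  return a.west <= b.east && b.west <= a.east &&
         a.south <= b.north && b.south <= a.north;
}

// Given the current viewport and a list of { name, bounds } records
// (as they might come out of a PostGIS query), keep only the
// neighbourhood boundaries worth sending to the client as JSON.
function visibleNeighbourhoods(viewport, neighbourhoods) {
  return neighbourhoods.filter(n => boundsIntersect(viewport, n.bounds));
}

// Example data, roughly around Victoria, BC (made-up coordinates).
const viewport = { south: 48.41, west: -123.38, north: 48.46, east: -123.32 };
const areas = [
  { name: 'Fernwood',  bounds: { south: 48.42, west: -123.36, north: 48.44, east: -123.34 } },
  { name: 'Elsewhere', bounds: { south: 49.00, west: -122.00, north: 49.10, east: -121.90 } },
];
// visibleNeighbourhoods(viewport, areas) keeps only 'Fernwood'
```

On the client, each returned boundary could then be rendered with the Maps API's Data layer, e.g. `map.data.addGeoJson(boundary)`, and styled with `map.data.setStyle({ fillColor: '...', fillOpacity: 0.3 })`, which would also cover the "fill them in like a regular polygon" bonus in the question.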
{ "pile_set_name": "StackExchange" }
Two Hands Full Of… Hollows

When someone pays your band a compliment you hope nothing more than that it's not just hollow words, lip service to appease your concerns rather than a genuine appreciation. Luckily, when I tell you that Hollows are ruddy brilliant and that you're missing out if you've not listened to them before, I'm being very, very serious. The north-west quartet strike a solid resemblance to Editors and seem to be forging their way up to become an alternative force to be reckoned with. Here, the band picks ten WHOLE tracks they love and that they just couldn't wait to share with you. Which member of the band loves Notorious B.I.G.? Only one way to find out… But first, here's a bit of Hollows themselves…

Sean Davies – Hollows, Lead Vocals & Guitar

THE CURE – Letter To Elise
It's really hard to pick one Cure song as there are so many albums. This one embodies what the band's all about though. Lyrically gorgeous…

At least I'd lose this sense of sensing something else
That hides away
From me and you
There're worlds to part
With aching looks and breaking hearts
And all the prayers your hands can make
Oh I just take as much as you can throw
And then throw it all away

RADIOHEAD – The Bends
The guitar work is brilliant and the vocals seem to really show off Thom's range. From one of my favourite albums of all time.

Josh Owen – Hollows, Bass

ARCTIC MONKEYS – Do Me A Favour
This track has great lyrics and I love how the whole song just builds into a massive outro. Also, the bass line in it is perfect.

ELBOW – Dear Friends
It's just very chilled. I associate the song with finishing uni, as I listened to it when I was completing my final assignment; it definitely puts me in a positive mood.

Andy Hill – Hollows, Lead Guitar & Vocals

THERE WILL BE FIREWORKS – River
Their album, The Dark, Dark Bright, is probably my favourite of all time. I love the way they mash together elements of indie music, folk and post rock. Their singer Nicky McManus' voice is unreal too; I just wish they'd release some new music.

AGENT FRESCO – See Hell
They managed to make music that's progressive with its use of rhythms and strong structures, but don't end up sounding self-indulgent or like they're trying too hard. They're catchy, unique and write some really great guitar riffs.

Conor Williams – Hollows, Drums

JUICY – The Notorious B.I.G.
You just get that glorious summer feeling from this track and it gets you motivated for whatever you're doing.

FOALS – Spanish Sahara
The ambience and sounds in this tune are tingling; it builds epically into a beautiful and powerful track.

Full Band Picks

EDITORS – Smokers Outside The Hospital Doors
We're all fans of the band and they definitely have an influence on us. The delayed guitar tones, straightforward but dynamic drumming and Tom's vocals on this particular track all come together so well. Dark and moody is what we like, and Editors do it pretty well.

NEW ORDER – Ceremony
Being bigger fans of Joy Division material than New Order in general, it's strange to pick this track. It's the closest you'll probably get to the changing of the band and a look at where the sound might have been going. Performed and recorded brilliantly by New Order but carried by the lyrics written by Ian – beautiful!
{ "pile_set_name": "Pile-CC" }
Early neuromuscular electrical stimulation to optimize quadriceps muscle function following total knee arthroplasty: a case report. Case report. Following total knee arthroplasty (TKA), restoration of normal quadriceps muscle function is rare. One month after surgery, quadriceps torque (force) is only 40% of preoperative values and quadriceps activation is only 82% of preoperative levels, despite initiating postoperative rehabilitation the day after surgery. Early application of neuromuscular electrical stimulation (NMES) offers a possible approach to minimize loss of quadriceps torque more effectively than traditional rehabilitation exercises alone. A 65-year-old female underwent a right, cemented TKA. Isometric quadriceps and hamstrings muscle torque were measured preoperatively and at 3, 6, and 12 weeks after TKA. Quadriceps muscle activation was measured using a doublet interpolation technique at the same time points. The patient participated in a traditional TKA rehabilitation program augmented by NMES, which was initiated 48 hours after surgery and continued twice a day for the first 3 weeks, and once daily for 3 additional weeks. Preoperatively, the involved quadriceps produced 75% of the torque of the uninvolved side and demonstrated only 72.9% activation. At 3, 6, and 12 weeks after TKA, quadriceps torque was greater than the preoperative values of the involved side by 16%, 29%, and 56%, respectively. Similarly, activation improved to 93.4%, 94.6%, and 93.5% at 3, 6, and 12 weeks after TKA. Mitigating quadriceps muscle weakness immediately after TKA using early NMES may improve functional outcomes, because quadriceps weakness has been associated with numerous functional limitations and an increased risk for falls. 
Despite presenting preoperatively with substantial quadriceps torque and activation deficits, the patient in this case demonstrated improvements in quadriceps function at all the times measured, all of which were superior to those reported in the literature. The patient also made substantial improvements in functional outcomes, including the Knee Injury and Osteoarthritis Outcome Score (KOOS), 6-minute walk test, timed up and go (TUG) test, stair-climbing test, and the SF-36 Physical Component Score. Appropriately controlled clinical trials will be necessary to determine whether such favorable outcomes following TKA are specifically attributable to the addition of NMES to the rehabilitation program.
{ "pile_set_name": "PubMed Abstracts" }
Rangan

Rangan may refer to:

Rangan, Iran, a village in Isfahan Province
Rangan, Razavi Khorasan, a village in Razavi Khorasan Province
Venkat Rangan, Indian computer scientist
Gumbok Rangan, a mountain of India
Rangan Chakraborty (b. 1957), Indian filmmaker
{ "pile_set_name": "Wikipedia (en)" }
Using the importer

I am trying to use the import tool in bbPress to transfer all the posts and users from my vBulletin forum to my new bbPress forum. The vBulletin forum was set up and hosted by a web company. I emailed them and asked for the required set-up information and they responded with the following:

However, when I enter this information into the database settings page on the import page and press start, it comes back with the following:

Repair any missing information: Continue
Conversion Complete
No reply_to parents to convert
No replies to convert
No tags to convert
No super stickies to stick
No stickies to stick
No topics to convert
No forum parents to convert
No forums to convert
No passwords to clear
No users to convert
No data to clean
Starting Conversion

I don't delete any databases until I have a backup. If you have a backup, and are 100% confident it is a backup of the correct database, and that database is no longer connected to a forum install (and you have backups of that also), then delete away — you can't get it back once you delete it 😉
{ "pile_set_name": "Pile-CC" }
In the article titled "Chitosan Prevents Gentamicin-Induced Nephrotoxicity via a Carbonyl Stress-Dependent Pathway" \[[@B1]\], the name of the first author was given incorrectly as Chu-Kung Chou. The author\'s name should have been written as Chu-Kuang Chou. The revised authors\' list is shown above. Also, there was an error in Figure 2. Figures 2(b) and 2(e) were inadvertently reused from Yi-Chieh Li, Yi-Min Shih, and Jen-Ai Lee, "Gentamicin caused renal injury deeply related to methylglyoxal and Nɛ-(carboxyethyl)lysine (CEL)," Toxicology Letters, Volume 219, Issue 1, <https://www.doi.org/10.1016/j.toxlet.2013.01.024>. Additionally, the same picture of Figure 2(e) was presented as Figure 2(c). The corrected [Figure 2](#fig1){ref-type="fig"} is as follows. ![LMWC-induced changes in histology. Light micrographs of rat kidney sections were stained with hematoxylin and eosin. (a) Histology of kidney tissue in the control group. (b) Necrotic tubules and desquamation were apparent after treatment with 150 mg/kg/day GM for 6 days. (c) Treatment of GN rats with 165 mg/kg/day LMWC for 13 days improved histology. (d) Treatment of GN rats with 825 mg/kg/day LMWC for 13 days significantly improved histology. (e) Treatment of GN rats with 100 mg/kg/day metformin for 13 days significantly improved histology.](BMRI2017-7686249.001){#fig1}
{ "pile_set_name": "PubMed Central" }
var searchData= [ ['g_5fregistry',['g_Registry',['../classFling_1_1Engine.html#a57187f219f0d07f63227723f0a4d9d77',1,'Fling::Engine']]], ['gamma',['Gamma',['../structFling_1_1CameraInfoUbo.html#a1be655f49401cdcf26d428fb1edf0037',1,'Fling::CameraInfoUbo']]] ];
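For context, these rows are Doxygen's generated search-index entries: each pairs an escaped, lower-cased key (Doxygen encodes non-alphanumeric characters as `_XX` hex, so `g_Registry` becomes `g_5fregistry`) with a display name and link tuple. Below is a hedged sketch of how such rows might be queried by prefix; the `lookup` helper is illustrative and not part of Doxygen's own search script, and the data is reproduced so the sketch is self-contained.

```javascript
// The same two index rows as above: [encodedKey, [displayName, [url, isInPage, scope]]].
var searchData = [
  ['g_5fregistry', ['g_Registry', ['../classFling_1_1Engine.html#a57187f219f0d07f63227723f0a4d9d77', 1, 'Fling::Engine']]],
  ['gamma', ['Gamma', ['../structFling_1_1CameraInfoUbo.html#a1be655f49401cdcf26d428fb1edf0037', 1, 'Fling::CameraInfoUbo']]]
];

// Return the display names whose encoded key starts with the
// (lower-cased) query string — a prefix match, as a search box would do.
function lookup(query, data) {
  var q = query.toLowerCase();
  return data
    .filter(function (row) { return row[0].indexOf(q) === 0; })
    .map(function (row) { return row[1][0]; });
}
// lookup('ga', searchData) matches only the 'gamma' row
```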
{ "pile_set_name": "Github" }
Note: this article is over a year old and may contain outdated information.

The idea behind long-term development aid is that rich countries should supply poor developing countries with the capital and knowledge they lack, so that they can make the best possible use of their natural resources and labor. Aid is meant to support countries' efforts to give their populations better health and education and good framework conditions for business and employment. As people earn better incomes, taxes and duties will cover an ever larger share of public expenditure, and aid gradually becomes superfluous. In reality there are few, if any, countries in sub-Saharan Africa, where much of the aid ends up, where there is reason to believe that such positive development spirals are under way. After 60 years and billions upon billions of aid dollars, countries where the population's needs are well taken care of are few and far between. Economic growth is, with few exceptions, minimal, while population growth is high. Aid seems to make little difference. Nevertheless, Western aid authorities continue to insist that long-term aid makes a large and lasting difference to people's welfare, including in sub-Saharan Africa. Stories of improvements in health and education are used for all they are worth: ever more people receive HIV/AIDS treatment, more are vaccinated, fewer children die and more girls get an education. As long as the donors contribute large aid funds, that is only to be expected. The countries themselves do not have the economic backbone to take over these heavy tasks. Experience from countless projects in many fields shows that when the donors disappear, the services usually do too. Nor can economic research demonstrate that aid contributes to economic growth and sustainability in the long term. Sub-Saharan Africa is, for all practical purposes, as economically backward today as it was 50 years ago. Norad has estimated that in 20 years, between 80 and 90 percent of the world's extremely poor will live precisely there.
A comparison between Tanzania and South Korea illustrates how thoroughly sub-Saharan Africa has fallen behind. In Tanzania, gross national income per capita increased fivefold from 1970 to 2015. In South Korea, which started at roughly the same income level, average income increased 98-fold over the same period. Today one country is a modern industrial state and itself an aid donor. The other, which is among the world's largest aid recipients, is still a poor agrarian society based on hoe farming. The situation is even less favorable in other African countries. Whatever other blessings aid has brought to Tanzania and the rest of Africa, successful help to self-help it is not. There are many reasons why development in this region has fallen so decisively behind. One of the greatest challenges, in addition to all those inherited from the colonial era, drought and the trade-related problems, is that politicians and the senior civil service in many countries see the state as a source of power and personal enrichment. Corruption propagates down the ranks to police, customs officers and others who control scarce goods. It is hard to find a single country where this does not happen on a large scale. In her book "Fighting Corruption is Dangerous", published in April this year, Ngozi Okonjo-Iweala describes how extreme the political-economic exploitation of society can become. She served two terms as Nigeria's finance minister, but had to give up trying to stop corrupt politicians after her mother was kidnapped and the family received death threats. In the end they had to flee. Nigeria is an extreme case, but we are talking about differences of degree. When a country's resources are diverted through embezzlement, bribes and sky-high parliamentary salaries, and hidden away abroad, one cannot expect economic development, or that more aid will make a difference.
The determination of those in power to secure their own survival and claim a share of their countries' resources is a major problem almost everywhere in sub-Saharan Africa. Rulers also feel threatened by demands for openness and accountability. Many countries do not publish statistics showing internal economic differences, because doing so could create ethnic unrest. In a number of countries the press is not free, and access to and sharing of information on social media is viewed with skepticism for fear of political conspiracies and agitation. The Norwegian government has just published its new digital strategy for aid. It notes that countries that want to develop must be well integrated into the international information network. The strategy mentions in passing that the authorities in Tanzania and Uganda do not like too much openness on social media, as if that were a side issue. It is not. Here the strategy should have taken a step back from the technical details and discussed in depth how likely it is that its many good proposals will ever benefit the poor. It does not. I have not asked, but the answer is presumably, here as elsewhere, that aid for good governance will eventually solve the problems by creating a core of strong democratic institutions that can finally keep corrupt, power-hungry and information-shy politicians in check. One may doubt it. The verdict of the Office of the Auditor General in 2015 was that Norwegian aid for good governance works poorly. Long-term aid has little chance of succeeding where self-serving elites hold power, which is the case in most countries in sub-Saharan Africa. It should have been phased out, but it will probably survive, because it serves other important functions. Aid is not only a tool for help to self-help. For everyone employed in the enormous aid industry, NGOs, funds, researchers, private actors, development institutions and states, it also means employment and income. This is a strong lobby with an interest in keeping the story of aid's success alive.
For a small country like Norway, aid is also a handy tool for solving cabinet puzzles and a piggy bank that buys influence in international power politics.
{ "pile_set_name": "OpenWebText2" }
Human Resources - Service Request Management

It's always a challenge for multiple departments to get the most productivity out of the entire staff while maintaining quality. HR is one of the main departments hit by cutbacks, even though it still needs to provide a high level of service. There is a better way to maintain that level, called Service Request Management. This approach focuses on automating all routine requests across the organization. It gives all employees one main point at which to submit and view their requests using a service ticket. This helps them focus on their job rather than worrying about the status of their request. Staff productivity usually improves, as HR managers can report on request fulfillment and help measure service levels. Additional compliance requirements are also easier to implement. Problems like fragmentation can be avoided: with so many different applications and processes, the focus is easily pulled away from the main reason they exist. Without the right process and resources in place, many HR managers struggle to get everything done in a timely manner. Overall, it's always important to put the right tools in place to make life easier on the small issues while focusing on the big overall plan of running the organization smoothly. Having the right HR resource can be one of the biggest assets a company could have. Human resources staff are the main people who help organizations with profits and the advancement of the business. It's very critical to have the right human resources, as they manage and ensure that the entire working atmosphere of the organization runs smoothly, free from issues or discomfort. The main area of focus is always the work environment.
A happy and healthy workplace makes an employee more eager and a lot more productive, depending on the tasks assigned. With high productivity, a company can grow much more organically, which results in further growth. Every employee who joins needs to understand what their role is; knowing and loving what they do helps keep them motivated. An effective system can also help them with internal issues, which can earn an employer strong loyalty from employees and helps maintain a good working relationship among stakeholders. A feedback process should also be introduced to give everyone the freedom to express their wishes when necessary, for the betterment of the business. Policies and regulations must be well understood and maintained to ensure everyone is compliant with them. With this in mind, a disciplining authority must be established to uphold all compliance and company rules. Everyone should understand that these rules govern the entire company and will not be based on management's discretion alone. With this, employees will feel much safer, and a fair atmosphere will be maintained for all. All these practices benefit both the company and the workers and maintain a strong and healthy relationship with everyone in the organization. I am Michle Disuza, and I have been providing HR services for many years. Providing end-to-end solutions, Cornerstone helps organizations assess and improve their workforce management, improving the bottom line of operations.
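The single-point-of-request workflow described in the first half of this piece — submit a request, track it via a service ticket, and report on fulfillment — can be pictured as a tiny ticket store. This is an illustrative sketch only; no real HR product or API is implied, and all names are made up.

```javascript
// Minimal service-request store with the three capabilities described:
// submitting a request, checking its status, and reporting fulfillment.
function createTicketStore() {
  var tickets = [];
  return {
    // An employee submits a request; a new open ticket is created.
    submit: function (employee, request) {
      var id = tickets.length + 1;
      tickets.push({ id: id, employee: employee, request: request, status: 'open' });
      return id;
    },
    // HR fulfills the request and closes the ticket.
    close: function (id) {
      tickets.forEach(function (t) { if (t.id === id) t.status = 'closed'; });
    },
    // The employee's "one main point" to view a request's progress.
    status: function (id) {
      var match = tickets.filter(function (t) { return t.id === id; })[0];
      return match ? match.status : undefined;
    },
    // Fulfillment rate (closed / total): the kind of service-level
    // number an HR manager would report on.
    fulfillmentRate: function () {
      if (tickets.length === 0) return 0;
      var closed = tickets.filter(function (t) { return t.status === 'closed'; }).length;
      return closed / tickets.length;
    }
  };
}
```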
{ "pile_set_name": "Pile-CC" }
We are back!!! Over 100 new Masonic files in the Masonic Educational Center. We took a slight hiatus from the website this summer while traveling the U.S. and overseas. These are some of the sites we visited: Jacksonville, FL; Miami, FL; Valdosta, GA; Houston, TX; San Antonio, TX; San Diego; Los Angeles; Las Vegas; Denver, CO; Columbia, SC; Charlotte, NC; Washington, DC and many others. In addition, we also visited London, England. We had many "open" discussions about Freemasonry in the hope of improving our Masonic knowledge, improving our current product line at MasterMason.biz, and providing interesting new projects/products on both of our Masonic sites. We accomplished these goals. We have many new Masonic "projects" and "products" in the works... stay tuned.... 1) We opened a new section of the website: www.MasterMason.info/join . This area is intended for "those interested in Masonry" to get answers to questions. There is an area for Masons to add content on why they joined Freemasonry... we would welcome input from current Freemasons in this section... what you write could help someone decide to join.... 2) Please visit www.MasterMason.info/links and add your Grand Lodge and Lodge website to our links section! We are looking for all current Grand Lodge links. Welcome to all the new Brothers/Sisters to this list... we welcome you with open arms. You are invited to e-mail us at the contact page if you have questions, comments or would like to donate material for the newsletter. As you can tell, this newsletter is geared more toward Masonic Education than anything else... make no mistake, we have extra material to keep things interesting, but education is our main goal. To the Brothers/Sisters who have been around a while... thank you for the support! We look forward to providing you with material that is easy to obtain.
{ "pile_set_name": "Pile-CC" }
+ z - 6*z + 3 = 0. Is 0 < y? False Let b = 1.3 - 1.23. Which is greater: 1 or b? 1 Let s = 2.4 + -0.4. Is s less than -12? False Let m = -58/15 - -4. Let i(n) = -n**3 + 4*n**2 - 2*n - 3. Let a be i(3). Which is bigger: a or m? m Let x = -106 + 42. Which is greater: -0.2 or x? -0.2 Suppose -5*d - 4*z - 10 = 0, 0 = 2*d + 4*z + 3 + 1. Let n be (0/(0 - 2))/d. Is n less than 0? False Suppose 0 = -2*x + 3 + 3. Let n be (-3)/x*(5 - 3). Which is smaller: -0.2 or n? n Let d be -2*1 + 1628/(-126). Let y = -106/7 - d. Is y less than -2? False Suppose -5*y + 28 = 3. Does 19/5 = y? False Let g be 0/(2 - (4 + 1)). Let n = g + 0. Suppose -5*r = -n*h - h + 23, 3*h - 5 = -r. Is -5/2 < r? False Let p = 0.046 + -0.276. Let w = p + 0.46. Let d = w + 0.07. Which is smaller: 2/5 or d? d Let b = 0.06 - 0.09. Let s = -3.97 + b. Which is greater: s or 1? 1 Let g be 1/5 - (-384)/30. Which is greater: 14 or g? 14 Let q be (-91)/2 - (-1)/2. Let s = -48657/5 + 9686. Let c = q - s. Do c and -1 have different values? True Let z(v) = v**2 - 12*v + 17. Let x be z(9). Is x at least as big as -8? False Let d = -267/241 + 101698/75915. Let k = d - 2/63. Suppose 12 = 4*y - 0*y, 2*q + 2*y = 8. Which is greater: q or k? q Let h = 8/147 - -1088/1617. Which is bigger: h or 1? 1 Let x = 0.38 + -0.08. Let l = 1.3 - x. Which is smaller: -0.2 or l? -0.2 Suppose -h + 2 = -5. Which is smaller: h or 8? h Let x = 0.1 + 0. Let a = 0.1 + x. Let t = 2 - 2.1. Which is smaller: t or a? t Let o(d) = -d - 13. Let g be o(-6). Is g less than or equal to 2/3? True Let k = 33 + -46. Which is smaller: 2 or k? k Let m = 2.9 - -10.1. Let h = m - 14. Which is smaller: 3/7 or h? h Let y(w) = -w - 12. Suppose -3*g = 2*p + 62, -4*g + 23 = -6*g - 5*p. Let x = 11 + g. Let l be y(x). Which is greater: -2/15 or l? l Let h = -1784/11 + 162. Suppose -4*f = -2 - 2. Which is bigger: h or f? f Let g(h) = h**3 + h**2 - 5*h - 4. Let a be g(-3). Let t be (-8)/(-28) - (-4)/a. Which is bigger: 0 or t? 0 Let z = -2872/3 + 12983/15. 
Let d = z + 91. Which is smaller: d or -1? -1 Let i(t) = -t**2 - 3*t - 2. Let q be i(-4). Let u(g) = -g**2 - 6*g - 7. Suppose -8 = -2*d, -2*o - 3*o = -5*d + 50. Let v be u(o). Which is bigger: v or q? q Let w = 44.4 - -4.6. Let l = 46.1 - w. Let g = l + 3. Are 1 and g non-equal? True Let i(l) = -l + 12. Let u be i(8). Let r = -2 + 9. Let k = r + -4. Which is smaller: u or k? k Let m(i) = i**2 + i - 1. Let y be m(0). Is -2 at least as big as y? False Suppose -2*k = -2 + 6, -4*v - k = -70. Let g = v + -19. Does g = -1/29? False Let v = -16.8 + 16.4. Which is smaller: -23 or v? -23 Let l(f) = -f**2 + 6*f + 10. Let g be l(7). Suppose -g*p + 1 = -2. Which is bigger: 9/5 or p? 9/5 Let j(t) = t + 8. Let q be j(-13). Which is smaller: q or 0? q Let a = -1.93 - -2.1. Let n = a - 0.87. Let k = 6 - 5. Which is bigger: n or k? k Let b = -219/145 - -9/29. Are b and -2 non-equal? True Let r(c) = c**2 - 4*c - 4. Let j = 1 - -4. Let b be r(j). Is b greater than or equal to -1/11? True Let q = -6 + 6. Suppose -3*p - 3*w + 5 + 7 = q, -2*w + 9 = p. Which is greater: 2/5 or p? 2/5 Let x = 9 + 6. Which is smaller: x or 14? 14 Suppose -7*h = 4*m - 9*h + 6, -4*m = -h + 5. Let g be 29/(-148) + (-3)/(-12). Which is greater: m or g? g Let u = -22.2 + 23.2. Which is greater: u or -5? u Let n = -26 - -33. Is 7 less than or equal to n? True Let l be (1 - 1)/(-4)*1. Suppose l = z + 3*t + 13, 3*z + 0*t + 18 = -2*t. Is z less than or equal to -7? False Suppose -1 + 11 = 3*k - 4*u, -5*k + 5*u + 10 = 0. Is k >= 1? False Let l = 570 + -51301/90. Let k = 113/45 + l. Let s = 10 + -7. Does k = s? False Let t(i) = 5*i**2 - 4*i + 3. Let x be t(4). Let h = 123/2 - x. Let y = 5 + h. Is y equal to 0.1? False Suppose 15 = -3*h - 5*z, -z + 0*z + 3 = 0. Let r be ((-4)/h)/((-2)/(-10)). Suppose 2 + 2 = r*v. Is 2 < v? False Let s(q) = q**3 + 7*q**2 + 6*q + 5. Let y be s(-6). Suppose 0*l - y = 5*l. Which is greater: l or -3? l Let y(s) = -s**2 + s + 8. Let f be y(4). Is f at most as big as -19/4? 
False Let p be 5/(-3) + 3/(-9). Let c = p - -9/4. Which is smaller: c or 0? 0 Let w(j) = j + 3. Let f be w(-3). Let k = 4626/13 + -356. Which is bigger: f or k? f Let n(r) = r**2 + 6*r + 6. Let v be n(-5). Let g(x) = -x**2 - 4*x + 7. Suppose -3*t - 12 = 6. Let d be g(t). Is v at least d? True Let k be 87/27 + (-4)/18. Suppose 6*c = 4*c + 4. Is c equal to k? False Let c = 17 - 4. Let h = -10 + c. Which is smaller: -2 or h? -2 Suppose -54 = -2*p - 12. Which is bigger: 20 or p? p Suppose -3*a + 4*t - 228 = -7*a, 3*a = 5*t + 195. Let h = a + -301/5. Which is smaller: h or 0? h Let s(x) = -14*x - 24. Let o(q) = 5*q + 8. Let p(a) = -17*o(a) - 6*s(a). Let i be p(10). Let u be (2*(-1)/5)/i. Which is smaller: 1 or u? u Let q be -2 - (-42)/(-9)*432/42. Are q and -99/2 non-equal? True Let v = 813/18404 - -1/428. Which is bigger: v or 0? v Let z = -36.8 - -38. Which is smaller: z or -1/2? -1/2 Let v(d) = d**3 - 4*d**2 - 6*d + 4. Let n be v(5). Let h be 90/76 + 48/(-32). Which is smaller: n or h? n Let u(x) = -x**3 + 9*x**2 - 7*x - 8. Let n be u(8). Suppose n*l = -3*l - 3. Let d be 2 - 1 - (-1 + 1). Which is smaller: d or l? l Let d be (1 + (-4)/6)*0. Is -1/5 smaller than d? True Suppose 0*s - 4*s + 8 = 0. Let t be s/1 + (3 - 1). Let m = t + -4. Which is greater: m or -2/7? m Let o be (-2)/3 + 1/(-3). Suppose -i + 0*i + 4*y = -20, 2*i + 10 = -2*y. Which is bigger: o or i? i Let r be (1 - 1)/(3 - 1). Suppose r*f = -f. Suppose f = -3*l + 5*v + 5, 3*l + 8*v + 5 = 3*v. Which is smaller: 2 or l? l Let i = -0.04 + -0.06. Which is smaller: i or 0.14? i Let d = 10 + -15. Let z = -8 + 7.8. Which is smaller: d or z? d Let j(l) = l**3 - 3*l**2 - 2*l + 4. Let x be j(3). Which is greater: x or -3/5? -3/5 Suppose -2*m + 2 = 3*a + 3, a = -5*m + 17. Suppose 3*k - 12 = 2*j, -2*j + 2*k - k = m. Is j less than 0? False Let x = -175.58 - -177. Let s = x - 7.52. Let q = s - -6. Which is smaller: q or 1? q Suppose 2*r - 3 + 9 = 0. Let f be r/6*(-2 + -2). Which is smaller: f or 5/3? 
5/3 Suppose 2*p - w = 3*p - 2, 4*w = 5*p + 8. Let q be (-2)/(-10)*(4 + -2). Which is greater: p or q? q Let r = -6 + 9. Let g = 23 + -20. Are g and r nonequal? False Let b = -2653/59 - -45. Which is greater: -1 or b? b Let p be ((-1)/4)/(0 - 1). Suppose 3*g - 675 = 3*y + 6*g, -2*y - 455 = -3*g. Let f = y + 2030/9. Is f bigger than p? False Let l = -5 + 8. Suppose 5*y = l*y. Do -2/13 and y have different values? True Suppose 0 = -3*t + m + 44, 2*t - 2*m - 32 = -0*t. Let a(k) = -k + 13. Let j be a(t). Is j > 1? False Let d(v) = 2*v**2 - 2*v - 1. Let z(o) = -o**3 + 7*o**2 + 7*o + 11. Let t be z(8). Let u be d(t). Suppose -u = 3*g - 2. Is g at most as big as -3? True Let k(g) = -g**3 - 4*g**2 + 5*g + 2. Let p be k(-5). Suppose 0 = -2*z - 2*d - 2, -4*d + 2 = -p*z - 0. Is -3 bigger than z? False Suppose -4*o - 39 = 5*b, -2*b - b - 21 = 2*o. Is o greater than -0.1? False Let x be (-134)/126 + 14/(-63). Is -1 >= x? True Suppose -2*l - 6 + 2 = 0. Which is bigger: -4/7 or l? -4/7 Suppose 0 = -2*f + 3 - 1. Let g = 977/8 + -122. Is f less than or equal to g? False Let q be 14 + -11 + (-19)/9. Do q and 0 have the same value? False Let k(l) = -8*l - 2*l**2 + 4*l**2 - 2 - 3*l**2. Let h be k(-6). Let z be ((-2)/5)/((-12)/h). Which is smaller: 1 or z? z Let j = -107 + 97. Which is smaller: j or -3? j Let z be (-30619)/(-11518) - 2/13. Let j = -2/443 + z. Which is smaller: j or 3? j Let x = -0.02 + -0.28. Let q = -8 - -29. Let d = q + -21. Is x greater than d? False Suppose 5*s = -0*s + 30. Let q = -77 - -83. Is s smaller than q? False Suppose o - 6 = -2*o. Suppose 0 = 5*t + 4*w - o, -5 = 5*w + 5. Suppose 0 = t*b - b - 5. Do b and 4 have the same value? False Let x(h) = h + 4. Let i(z) = -z + 4. Let t be i(9). Let d be x(t). Is d >= -3? True Suppose 0 = -2*p + 82 - 14. Let a be p/(-18) - 10/(-45). Which is greater: -1 or a? -1 Let i be (3/(-9) - -1)*(45 - 0). Which is smaller: 28 or i? 28 Let g(z) = z**2 + 4*z - 6. Suppose -5*q - 5*s + 3 = -3*q, -9 = 3*q + 3*s. Let c be g(q). 
Suppose c*f + 2 = 2*k + f, -3*f = 0. Is -2/25 at most k? True Let c be (-20)/(-3)*(-30)/(-20). Let h = c + -10. Is h at most as big as 1/2? True Suppose -4*n - 31 = -7. Is n smaller than -6? False Let d(z) = z**2 + 7*z + 4. Let u be d(-6). Let l be 2
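These auto-generated comparison problems all reduce to evaluating a short chain of assignments and then comparing two values. As an illustrative check (not part of the dataset itself), the problem "Let w = 44.4 - -4.6. Let l = 46.1 - w. Let g = l + 3. Are 1 and g non-equal?" can be verified with exact rational arithmetic:

```python
from fractions import Fraction

# Use exact rationals so decimal inputs like 44.4 carry no floating-point error.
w = Fraction("44.4") - Fraction("-4.6")  # 49
l = Fraction("46.1") - w                 # -29/10
g = l + 3                                # 1/10
print(1 != g)  # True, matching the answer given above
```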
{ "pile_set_name": "DM Mathematics" }
Streptomyces collinus

Streptomyces collinus is a bacterial species in the genus Streptomyces that was isolated from soil in Baden, Germany. Streptomyces collinus produces ansatrienin A2, ansatrienin A3, ansatrienin B, naphthomycin A, collinomycine, toromycin, streptocollin, kirromycin and rubromycine.

See also
List of Streptomyces species

External links
Type strain of Streptomyces collinus at BacDive - the Bacterial Diversity Metadatabase

Category:Bacteria described in 1952
{ "pile_set_name": "Wikipedia (en)" }
The relationship between retrospective reports of early child temperament and adjustment at ages 10-12. For 131 highly stressed 4th- to 6th-grade urban children, retrospective parental reports of child temperament along an easy-difficult dimension, for the infancy (ages 0-2) and preschool (ages 2-5) periods, were obtained during in-depth interviews. Parent judgments of an easier temperament in each of the two age periods, and their sum, related consistently and significantly to positive ratings of current child adjustment. The latter reflected both multiple sources (i.e., parents, former teachers, and current teachers) and different aspects of adjustment (e.g., fewer problem behaviors and more competencies).
{ "pile_set_name": "PubMed Abstracts" }
Fandango (Popularity: ): Purchase movie tickets online for showtimes and find theater listings in your area.
Ain't It Cool News (Popularity: ): Harry Knowles specializes in revealing behind-the-scenes information and reviews of upcoming films.
Rotten Tomatoes (Popularity: ): Read reviews of current movies from the nation's top critics and many other sources, used by Rotten Tomatoes to formulate ...
IFILM (Popularity: ): Videos and movies from iFilm, the streaming media site. See the latest music videos, short films, TV clips, games, or ...
Watch Movies (Popularity: ): Watch all latest movies for many current and past releases. All the movies are available in high-quality HD, watch them ...
Watch TV Online (Popularity: ): The largest resource available on the web for viewing Free Internet Television. Live streaming TV, News, broadband internet TV stations, ...
Film Production Services (Popularity: ): Video based production company - production services India. Angles Unlimited India Productions is a New Delhi & Mumbai based world-class client-focused ...
Find Acting Jobs (Popularity: ): Find modeling jobs, acting jobs and open casting calls.
Explore Talent for acting jobs (Popularity: ): Find modeling jobs, acting auditions and open calls. For many years, professionals in the entertainment industry have been scurrying all over ...
Download TV Shows USA (Popularity: ): Edogo is one stop destination for download tv shows. You will find never seen speed while you download tv shows ...
Regional Environmental Services (Popularity: ): Provides specialist services in London bird control and pest control, as well as cleaning and waste removal services.
Electrifire Limited (Popularity: ): A Kent engineer who installs security and fire alarm systems. These include CCTV cameras, wireless intruder alarms, access control and ...

Popular Sites

Watch TV Online (Popularity: ): The largest resource available on the web for viewing Free Internet Television. Live streaming TV, News, broadband internet TV stations, ...
New Capitol Cinema Botswana (Popularity: ): Newcapitolcinema Movie Show timings Movie schedules at newcapitolcinema Botswana Movie theaters show timings Riverwalk movies in gaborone. The Riverwalk cinema ...
Watch Hindi Movies Online (Popularity: ): If you like to watch Hindi movies online then Movies Vatika is an amazing option where you can access the ...
Movies Fun Zone (Popularity: ): The new world of entertainment where download movies is just a simple task to do. You just need to subscribe ...
Fandango (Popularity: ): Purchase movie tickets online for showtimes and find theater listings in your area.
{ "pile_set_name": "Pile-CC" }
When a semiconductor device or an FPD (flat panel display) is microprocessed by using a plasma, it is extremely crucial to control a temperature and a temperature distribution of a substrate and a plasma density distribution on a substrate to be processed (a semiconductor wafer, a glass substrate or the like). If the temperature of the substrate is not properly controlled, it is difficult to secure process uniformity on a surface of the substrate, thereby deteriorating a production yield of a semiconductor device or a display device. Generally, a mounting table or a susceptor for mounting thereon a substrate to be processed inside a chamber of a plasma processing apparatus, especially a capacitively coupled plasma processing apparatus, functions as a high frequency electrode for applying a high frequency power to a plasma space, as a support member for supporting a substrate by an electrostatic adsorption or the like and as a temperature control unit for controlling the substrate at a predetermined temperature by heat conduction. The mounting table serving as the temperature control unit is required to properly compensate a heat distribution caused by a substrate supporting structure or a distribution of heat transfer characteristics on the substrate caused by nonuniformity of a radiant heat from a plasma or a chamber wall. Conventionally, in order to control a temperature of a top surface of the susceptor (and further a temperature of the substrate), there has been widely used a method for supplying a coolant whose temperature controlled by a chiller unit into a coolant passageway provided inside a susceptor or a susceptor support to be circulated therein. However, the above method is disadvantageous in that it is difficult to change a temperature of the coolant at a high speed and, also, the temperature cannot be raised and lowered at a high speed due to poor responsiveness in temperature control. 
Recently, a plasma processing, e.g., a plasma etching, requires a method for successively processing a multilayer film on a substrate to be processed inside a single chamber instead of multiple chambers. In order to implement such method, it is crucial to have a technique capable of raising and lowering a temperature of a mounting table at a high speed. For the above reasons, a heater capable of precisely controlling a susceptor temperature and further a substrate temperature at a high speed by controlling Joule heat of a heating element is attracting attention again. Meanwhile, in case where a lower plate dual frequency application type in which a high frequency power supply is connected to a susceptor in view of plasma control and the above heater in which a heating element is provided in a susceptor in view of temperature control are used at the same time, if a part of a high frequency applied to the susceptor enters a heater power supply via a heater power feed line, an operation or a performance of the heater power supply may deteriorate. Especially, the heater power supply capable of high-speed control performs an ON/OFF control or a switching control with high sensitivity by using a semiconductor switching device such as an SSR (solid state relay) or the like, so that misoperation may easily occur by high frequency noise. To that end, it is common to provide in the heater power feed line a filter circuit for efficiently reducing the high frequency noise. Generally, such filter circuit includes a plurality of LC low pass filters each having a single coil (inductor) and a single capacitor, the LC low pass filters being connected at multiple stages in the form of a ladder. For example, if the high frequency noise can be reduced by 1/10 in each stage of the LC low pass filter, it can be reduced by 1/100 in a second-stage connection and to 1/1000 in a third-stage connection. (Patent Document 1) Japanese Patent Application Publication No. 
2006-286733 As set forth above, in the conventional plasma processing apparatus, the function of the filter circuit provided in the heater power feed line focuses on reducing the high frequency noise from the high frequency power supply via the susceptor in view of ensuring normal operation and performance of the heater power supply. Thus, a coil having a small inductance and a capacitor having a large capacitance are used in each of the LC low pass filters in the filter circuit. However, the inventors of the present invention have found, during the development and the evaluation of a plasma processing apparatus using a heater in a susceptor together with applying a high frequency power to a lower plate, that the conventional filter circuit has a problem in processing performance. Namely, they have found that the RF power loss in the conventional filter is so large that it cannot be neglected in the processing performance, in addition to the known fact that a predetermined correlation exists between a processing performance (e.g., an etching rate) and a loss of high frequency power applied from a high frequency power supply to a susceptor (i.e., the processing performance deteriorates as the RF power loss increases). Moreover, the inventors of the present invention have found that the RF power loss in the filter circuit is not determined by the circuit design alone, and varies even between plasma apparatuses of the same configuration, thereby causing differences in the processing performance. The inventors of the present invention have conducted numerous tests and wholehearted studies in view of the above drawbacks, thereby conceiving the present invention.
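The stage-by-stage noise reduction described above compounds multiplicatively in an ideal LC ladder, which is why a 1/10-per-stage filter reaches 1/100 at two stages and 1/1000 at three. A minimal sketch of that arithmetic (not from the patent; the component values and function names are assumed for illustration):

```python
import math

def lc_cutoff_hz(inductance_h: float, capacitance_f: float) -> float:
    """Cutoff (resonant) frequency of a single LC low-pass stage: 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def cascaded_attenuation(per_stage: float, stages: int) -> float:
    """In an ideal ladder connection, per-stage attenuation compounds multiplicatively."""
    return per_stage ** stages

# The example from the text: 1/10 per stage -> ~1/100 at two stages, ~1/1000 at three.
print(cascaded_attenuation(0.1, 2))  # ~0.01
print(cascaded_attenuation(0.1, 3))  # ~0.001
# Illustrative (assumed) component values: a 1 uH coil with a 1 nF capacitor.
print(round(lc_cutoff_hz(1e-6, 1e-9) / 1e6, 2), "MHz")
```

Real filter behavior also depends on parasitics and loading, which is consistent with the inventors' observation that the RF power loss is not determined by the nominal circuit design alone.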
{ "pile_set_name": "USPTO Backgrounds" }
AdSense program policies

All publishers are required to adhere to the following policies, so please read them carefully. If you fail to comply with these policies without permission from Google, we reserve the right to disable ad serving to your site and/or disable your AdSense account at any time. If your account is disabled, you will not be eligible for further participation in the AdSense program. Because we may change our policies at any time, please check here often for updates. In accordance with our online Terms and Conditions, it's your responsibility to keep up to date with, and adhere to, the policies posted here. Exceptions to these policies are permitted only with authorization from Google.

Invalid clicks and impressions

Publishers may not click their own ads or use any means to inflate impressions and/or clicks artificially, including manual methods. Learn more

Clicks on Google ads must result from genuine user interest. Any method that artificially generates clicks or impressions on your Google ads is strictly prohibited. These prohibited methods include, but are not limited to, repeated manual clicks or impressions, automated click and impression generating tools and the use of robots or deceptive software. Please note that clicking your own ads for any reason is prohibited.

Encouraging clicks

Publishers may not ask others to click their ads or use deceptive implementation methods to obtain clicks. This includes, but is not limited to, offering compensation to users for viewing ads or performing searches, promising to raise money for third parties for such behavior or placing images next to individual ads. Learn more

In order to ensure a good experience for users and advertisers, publishers participating in the AdSense program may not:

Compensate users for viewing ads or performing searches, or promise compensation to a third party for such behavior.
Encourage users to click the Google ads using phrases such as "click the ads", "support us", "visit these links" or other similar language.
Direct user attention to the ads using arrows or other graphical gimmicks.
Place misleading images alongside individual ads.
Place ads in a floating box script.
Format ads so that they become indistinguishable from other content on that page.
Format site content so that it is difficult to distinguish it from ads.
Place misleading labels above Google ad units. For instance, ads may be labelled "Sponsored Links" or "Advertisements", but not "Favourite Sites" or "Today's Top Offers".

Content guidelines

Publishers may not place AdSense code on pages with content that violates any of our content guidelines. Some examples include content that is adult, violent or advocating racial intolerance. Please see our prohibited content article for more information. View full content policies.

Sites with Google ads may not include or link to:

Pornography, adult or mature content
Violent content
Hate speech (including content that incites hatred or promotes violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity), harassment, bullying, or similar content that advocates harm against an individual or group.
Any other content that is illegal, promotes illegal activity or infringes on the legal rights of others

Publishers are also not permitted to place AdSense code on pages with content primarily in an unsupported language.

Copyrighted material

AdSense publishers may not display Google ads on webpages with content protected by copyright law unless they have the necessary legal rights to display that content. This includes sites that display copyrighted material, sites hosting copyrighted files, or sites that provide links driving traffic to sites that contain copyrighted material. Please see our DMCA policy for more information.
Counterfeit goods

AdSense publishers may not display Google ads on webpages that offer for sale or promote the sale of counterfeit goods. Counterfeit goods contain a trademark or logo that is identical to or substantially indistinguishable from the trademark of another. They mimic the brand features of the product in an attempt to pass themselves off as a genuine product of the brand owner.

Webmaster guidelines

Do not place excessive, repetitive or irrelevant keywords in the content or code of webpages.
Avoid hidden text or hidden links.
Avoid "doorway" pages created just for search engines or other "cookie cutter" approaches such as affiliate programs with little or no original content.
Do not include deceptive or manipulative content or construction to improve your site's search engine ranking (e.g., your site's PageRank).
Create a useful, information-rich site and write pages that clearly and accurately describe your content.

Traffic sources

Google ads may not be placed on pages receiving traffic from certain sources. For example, publishers may not participate in paid-to-click programs, send unwanted emails or display ads as the result of the action of any software application. Also, publishers using online advertising must ensure that their pages comply with Google's Landing Page Quality Guidelines. Learn more

Use third-party services that generate clicks or impressions such as paid-to-click, paid-to-surf, autosurf and click-exchange programs.
Be promoted through unsolicited mass emails or unwanted advertisements on third-party websites.
Display Google ads, search boxes or search results as a result of the actions of software applications such as toolbars.
Be loaded by any software that can trigger pop-ups, redirect users to unwanted websites, modify browser settings or otherwise interfere with site navigation. It is your responsibility to ensure that no ad network or affiliate uses such methods to direct traffic to pages that contain your AdSense code.
Receive traffic from online advertising unless the site complies with the spirit of Google's Landing Page Quality Guidelines. For instance, users should easily be able to find what your ad promises.

Ad behavior

Publishers are permitted to make modifications to the AdSense ad code so long as those modifications do not artificially inflate ad performance or harm advertisers. Please see Modification of the AdSense ad code for more information.

Ad placement

Publishers are encouraged to experiment with a variety of placements and ad formats. However, AdSense code may not be placed in inappropriate places such as pop-ups, emails or software. Publishers must also adhere to the policies for each product used. Please see our ad placement policies article for more information. View full ad placement policies.

Google ads, search boxes or search results may not be:

Integrated into a software application (does not apply to AdMob) of any kind, including toolbars.
Displayed in pop-ups or pop-unders.
Placed in emails, email programs, including webmail, or on pages where dynamic content (such as live chat, instant messaging, or auto-refreshing comments) is the primary focus. (Does not apply to AdMob.)
Placed in emails, email programs, or chat programs. (Does not apply to AdMob.)
Obscured by elements on a page.
Placed on any non-content-based page. (Does not apply to AdSense for search, mobile AdSense for search, or AdMob.)
Placed on pages published specifically for the purpose of showing ads.
Placed on pages whose content or URL could confuse users into thinking it is associated with Google due to the misuse of logos, trademarks or other brand features.
Placed on, within or alongside other Google products or services in a manner that violates the policies of that product or service.

Site behavior

Sites showing Google ads should be easy for users to navigate. Sites may not change user preferences, redirect users to unwanted websites, initiate downloads, include malware or contain pop-ups or pop-unders that interfere with site navigation.

Google advertising cookies

AdSense publishers must have and abide by a privacy policy that discloses that third parties may be placing and reading cookies on your users' browsers, or using web beacons to collect information as a result of ad serving on your website. Learn more

Google uses the DoubleClick cookie on publisher websites displaying AdSense for content ads. Subject to any applicable laws, rules and regulations, you will have the sole and exclusive right to use all data derived from your use of the DoubleClick cookie for any purpose related to your business, provided that Google may use and disclose this data subject to the terms of Google's advertising privacy policies and any applicable laws, rules and regulations. If your current advertising services contract with Google or DoubleClick already has a specific provision defining data ownership, that provision instead of this policy will govern with regard to the data collected under that contract.

Privacy

You must disclose clearly any data collection, sharing and usage that takes place on any site, app or other property as a consequence of your use of any Google advertising service. To comply with this disclosure obligation with respect to Google's use of data, you have the option to display a prominent link to How Google uses data when you use our partners' sites or apps.
Children's Online Privacy Protection Act (COPPA)

If you implement any Google advertising service on a site or section of a site that is covered by the Children's Online Privacy Protection Act (COPPA), (a) you must notify Google of those sites or sections of sites covered by COPPA using the tools found here: https://www.google.com/webmasters/tools/coppa, or the method for apps described here: https://developers.google.com/mobile-ads-sdk/docs/admob/additional-controls, and (b) you must not use interest-based advertising (including remarketing) to target: (i) past or current activity by users known by you to be under the age of 13 years or (ii) past or current activity on sites directed at users under the age of 13 years.

Gambling content

AdSense restricts the placement of ads on gambling sites and gambling-related content. We have different policies for gambling content based on the country in which a publisher is located. Publishers outside a limited group of countries are not allowed to place ads on any gambling content or on any pages with links to gambling content. This includes any content that allows users to place bets or play games in exchange for an opportunity to earn money or other prizes. Learn more

Currently, these publishers must meet Google's stringent requirements for consideration and need to be selected and approved by the AdSense Policy Team before ads can be shown. If you're eligible, AdSense will be sure to contact you. Publishers located outside of these countries aren't permitted to place ads on content that allows any type of betting or has links to such content. To keep the Global Display Network family safe, ads will only be shown on gambling content if both the publisher and the page's visitor are located within a gambling-approved country and if the visitor is of legal age. Additionally, publishers now have the opportunity to opt in to receiving gambling ads through the category filtering feature. If you don't wish to receive gambling ads on your site, no action is required on your part.

Sites that drive traffic to online gambling sites, such as through organic links

Publishers who have been selected and approved for gambling content, and are located in one of the following countries: Austria, Belgium, Canada, Denmark, Finland, France, Greece, Ireland, Italy, Israel, Norway, Peru, Portugal, Romania, Serbia, Spain, Sweden, United Kingdom:

Product-specific policies

AdSense for content: Publishers may place up to three AdSense for content units on one webpage. This includes a maximum of one 300x600 ad unit (or similar sized ad) per page. In addition to three AdSense for content ad units, publishers may also place up to three link units and two search boxes on each page. These policies apply to both desktop and high-end mobile optimized sites.

AdSense for search: A maximum of two Google AdSense for search boxes may be placed per page. Also, a single link unit or image ad only may be placed on pages with AdSense for search results. Queries must originate from users inputting data directly into the search box and cannot be modified. This includes pre-populating the search box with terms or hard-coding direct links to search results pages. AdSense for search code may not be integrated into any software application such as a toolbar. The online AdSense for search product is limited to five (5) billion queries per account from the period of July 1st to June 30th of the following year.
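The per-page limits quoted in the product-specific policies are concrete enough to check mechanically. A hypothetical helper (not a Google tool; the names and structure are assumptions for illustration) might audit a page layout like this:

```python
# Hypothetical checker (not a Google tool) for the per-page limits quoted above:
# 3 content units, 1 large 300x600 unit, 3 link units, 2 search boxes.
LIMITS = {
    "content_units": 3,
    "large_units_300x600": 1,
    "link_units": 3,
    "search_boxes": 2,
}

def violations(page_counts: dict) -> list:
    """Return the names of any limits the given page's ad counts exceed."""
    return [name for name, cap in LIMITS.items() if page_counts.get(name, 0) > cap]

print(violations({"content_units": 3, "link_units": 4}))   # ['link_units']
print(violations({"content_units": 2, "search_boxes": 1}))  # []
```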
{ "pile_set_name": "Pile-CC" }
{
  "name": "@datafire/wmata_rail_station",
  "version": "5.0.0",
  "main": "index.js",
  "description": "DataFire integration for Rail Station Information",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/DataFire/integrations.git"
  },
  "author": "DataFire",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/DataFire/integrations/issues"
  },
  "homepage": "https://github.com/DataFire/integrations#readme",
  "datafire": {
    "origin": "https://api.apis.guru/v2/specs/wmata.com/rail-station/1.0/swagger.json",
    "type": "openapi"
  },
  "peerDependencies": {
    "datafire": "^2.0.0"
  },
  "dependencies": {
    "datafire": "^2.0.0"
  }
}
{ "pile_set_name": "Github" }
The -514 polymorphism in the hepatic lipase gene (LIPC) does not influence androgen-mediated stimulation of hepatic lipase activity. The -514T allele of hepatic lipase is associated with increased high density lipoprotein-cholesterol levels in men, but not in women. This observation suggests that the -514C to T polymorphism may diminish the response of hepatic lipase to androgens. To test this hypothesis, five -514T and five -514C homozygous men were treated with the anabolic steroid stanozolol for 6 days. The mean increase in hepatic lipase activity was similar in the two groups (45+/-10 vs. 51+/-10 mmol x hr(-1) x l(-1), P = 0.5). To evaluate the association between the -514 polymorphism and hepatic lipase activity at different physiological androgen concentrations, hepatic lipase genotypes and activities were measured in 44 men and 40 premenopausal women. The effect of the -514T allele on hepatic lipase activity was significant and quantitatively similar in both sexes. These data indicate that the -514 polymorphism does not influence the response of hepatic lipase activity to androgens, and that the effects of this polymorphism on hepatic lipase activity are independent of androgen action.
{ "pile_set_name": "PubMed Abstracts" }
Dang, this was a fun episode! Let’s talk about it (no pun intended)! Silence is Golden Like last week, we pick up right where we left off. Elliot grabs Tyrell’s cell phone (which had a pic of Joanna and the baby on the lock screen…aww…), the Dark Army operative’s gun, pours gasoline over everything, lights the lighter and throws it in the van, setting it ablaze. Light ’em up, burn ’em up, flame on! Elliot and Mr. Robot watch as the van gets cooked ’til tender. And that my friends, is the only appearance Mr. Robot makes in this entire episode. I guess runnin’ around with Elliot non-stop for the last day and some change wore him out. Mr. Alderson, however, has stamina in spades, as we’ll soon learn. Darlene is driving around the area, still looking for Elliot. The cell reception there seems to be working somewhat now, because she’s able to pick up his location again. However, the phone pings the country store he went to hours ago. Darlene doesn’t see him there, of course, but she notices smoke nearby and decides to check it out. Sure enough, when she gets to the source, she sees the Dark Army van, still aflame. Darlene stares at the carnage for a minute, before Elliot knocks on the passenger door. She lets him in and pulls off. Elliot is about to tell her something, and she says, “It’s cool, dude. We don’t have to talk.” And they don’t talk. Not for the next 49 min. Neither does anyone else. Back at Elliot’s apartment, he’s printing a fake Virtual Realty ID card for Darlene, using her hacker handle, Dolores Haze. He notices the blood on his hands and goes over to the kitchen to wash it off. He breaks down, and eventually crouches in the corner, trying his best to hold back his tears. 
A lot of folks think that he was crying because of what happened to Tyrell, but I personally think it was a combination of things: Tyrell, Angela, his mother’s death, the fact that he and Darlene aren’t really on the best of terms right now, the realization that time is running out for him and Darlene, etc. Darlene emerges from the bathroom and Elliot quickly straightens up and finishes washing his hands, not wanting her to see him upset. The ID card finishes printing, and Elliot sits down to conduct his research on Virtual Realty. What Do the Lonely Do At Christmas? Dom is packing store bought Christmas cookies to take to her mother’s house when she gets a text from Janice. She tells Janice she can’t talk (LOL), but ol’ girl lets Dom know that one of the Dark Army vans was found in Pike’s Hollow burnt to a crisp with a body inside. Dom has to get out there and use her FBI credentials to ID the body and gain intel…now. Fuck your plans, Dom. Somewhere on the rich folks’ side of town, Price is dining alone at a restaurant on Christmas morning when he gets a text from Elliot telling them that Tyrell won’t be at the Deus Group meeting. Price is visibly annoyed, but his annoyance turns to sadness when he sees a happy family sitting nearby, enjoying each other. You can see that he’s mourning the loss of his daughter and the family that he could have had with the woman he loved. A waiter slams the check on Price’s table, and when he opens the booklet, he sees the receipt instructs him to go to the E-Corp building and tip the trombone player outside $20.00. $20.00!? When I was in New York, the street performers were lucky to get a buck from me! Elliot and Darlene: The Dynamic Duo! The plan commences at 11:00 AM sharp. Darlene shows up at Virtual Realty, wearing a black wig and a frumpy outfit. Elliot stays in the car, putting his gloves on and getting ready for action. The security guards are chillin’, watching Die Hard. 
When Darlene enters the building, she scans her ID badge, but it doesn’t work. She continues trying, but she still can’t gain access past the lobby. This is all well and fine, however. It’s all part of the plan. Elliot gets out of the car and walks over to the building, stopping to briefly smoke a cigarette. Darlene gives the security guard a warm smile and lets him know that her ID badge isn’t working (of course, you can’t hear any of this, but you can put two and two together). She knocks her bag over and the guard helps her pick it up. That’s Elliot’s cue. He sprints through the lobby with all the grace and silence of a cheetah and runs down the hall to the door of the control room. He uses a decryption key to override the lock and once Elliot’s in, he hacks the building database and adds Darlene’s credentials in the computer. While all this is happening, the security guard is looking for Darlene’s name in the employee directory. As he searches, Darlene wipes off the screen of her phone and places the device down on the front desk. Elliot uploads the info in the nick of time, because the security guard finds Dolores Haze listed as an associate. He opens the gate for her, and Darlene walks away slowly…very slowly. A second later, the security guard whistles at her. She turns and sees him holding her phone with his thumb resting on the front screen. Got ’em! Darlene gingerly takes the phone from him (gripping it on the sides) and walks out of the lobby at a normal pace. In the control room, Elliot uploads a firmware update for the the surveillance cameras that knocks them out for 40 minutes. That gives him and Darlene and little over half an hour to do their thing. In the elevator, Darlene takes the tempered glass off her phone and places it in a case. She has the elevator go down to the basement to meet Elliot, and then they both go to Kraftwerks, a 3D printing office in the building. Side note, I wonder if Sam Esmail is a big fan of the group Kraftwerk. 
He used their song “The Hall of Mirrors” in the second season (love that track!), and now he’s using the name of their band in the show. Just an observation.

When the Aldersons get to the printing office, Darlene puts some super glue and hot water in a case, along with the tempered glass containing the thumbprint, a la Eddie Murphy in Beverly Hills Cop 2 (which was the best film in the trilogy, IMHO). Once the fumes from the super glue stick on the fingerprint, Darlene scans it, and Elliot uses the image to print a 3D version of it!

Scavenger Hunt

Price arrives at the E-Corp building and wistfully looks up at the E logo. The trombone player sitting nearby starts playing his sad tune of woe. Price walks over and hands him the 20 bucks (doggone). The trombone player turns his hat upside down for Price to place the money inside, and it turns out there’s a ticket sitting inside. When Price takes the ticket, he sees it’s for a dry cleaning company. Another dag gum breadcrumb to follow. Sheesh.

Pike’s Hollow PD

Dom is in Pike’s Hollow, where the Dark Army operative has finished roasting on an open fire. The coroner handles the body while Dom leaves the crime scene and heads off to the police station. On the way there, she’s stopped by a red light. No one else is coming so she runs the light, and sure enough, the camera takes her picture.

By the way, it irks me to my soul when a traffic light is red, but there’s no other cars driving on the opposite side. But the doggone light stays red because…reasons. Also, I can’t believe this little po-dunk town has cameras on their traffic lights. Not that I want it to get any, but the little town I live in doesn’t even have those, and we have a whole Wal-Mart here. A Walgreens and a Lowe’s, too!

Anyway, the camera gives Dom an idea. Maybe the people that burned the van made the same mistake at the intersection she did… Later, Dom’s at the police station, sitting at an officer’s desk.
She gets up for a few moments and disappears in the back. Once Dom’s out of sight, a female cop in the office hears her phone going off. She looks around for the phone, but she can’t find it. The ringtone seems to be sounding off from the back. Sure enough, the cop finds her phone inside a pastry box. She didn’t think to question why the hell it was in there…or who put it in there? As she retrieves her phone and goes back to her desk, Dom leaves, and we see that she placed a device on the officer’s computer.

Dom goes out to her car and activates the device to spy on the cop’s desktop. Darlene must’ve taught Dom some hacking skills during their torrid night together. Dom texts Janice and lets her know she hacked the local PD’s network and to look out for any red light camera reports. After speaking with Janice, she looks over at the cookies she was supposed to give to her family and starts to cry. Dom soon gets another text—this one is from the FBI, letting her know that the Irish sex trafficker guy has been released. Hmm…

Stalkin’ Around the Christmas Tree

Krista, who’s planning a hot Yuletide date with her new man, is shopping for groceries. While she’s looking around, Young MA—yeah, that Young MA—bumps into her. Krista sees this as nothing and keeps shopping. However, when she goes to pay for her items, Young MA is outside the store, watching her… I have to say, that has to be the UGLIEST nativity scene I’ve ever seen in my life! And this is supposed to be a so-called high end store! Who the hell okayed that monstrosity!?

Wonder Siblings Powers Activate! Form of…Hackers!

Back at Virtual Realty, a badge is needed to gain entry to the server room on the 9th floor, so Elliot picks the firefighters’ lock to override the scan. Once he and Darlene get to the 9th floor, Elliot breaks into the IT room to print a badge to allow Darlene access into the actual servers.
Darlene uses some putty to make an impression of the 3D thumbprint they printed, and once they press it on the scanner for the server room, they’re in!

The security guard tries to override the firmware upgrade to see what’s going on with the surveillance cameras, but to no avail. He seems to let the issue go, and he walks his rounds. While on the elevator, he sees the firefighters’ panel is open. Realizing that the firefighters’ lock is the only way to enter the 9th floor without a badge, the guard decides to investigate. Once he’s up there, he uses an app on his phone to see if anyone may have legally gone in the server room. The app shows that he just went inside. The guard sends a text to his co-worker downstairs stating that someone went into the server room using his credentials, and to call the cops. Oooohh…

In the meantime, Darlene locates a computer in the server room and gets to work. However, time is running out. Elliot looks at the timer on his cell phone and sees that they’re down to about 56 seconds. He shows Darlene the elapsed time, but the poor child can only move so fast. To make matters worse, the security guard has now entered the server room, looking to see who’s in there. Thankfully, he can’t see or hear Elliot and Darlene…yet. Elliot spots a sticker on the pipes overhead, showing that the lights are powered digitally. Elliot gets to work and hacks into the system, just as time runs out and the cameras come back on…

The lights go out mere seconds after the surveillance cameras come back online. Darlene is still working, adding the credentials in the servers. The security guard doesn’t give up his search. He continues to walk around the server room with a flashlight, looking for the intruders. When he finally comes to the row we all hoped he wouldn’t walk down, Elliot and Darlene are gone! Whew! We see the dynamic duo darting out of the server room and Elliot zip ties the door shut so the security guard can’t get out.
They’re about to take the elevator, but they see that the other security guard is coming up. They hurry away and take the stairs, resulting in a camera shot that I’m certain will become iconic (property of USA Networks/NBCUniversal).

They make it to the lobby, but the cops just pulled up, and they see the security guards are heading back down on the elevator! Elliot and Darlene run back into the stairwell and hide. However, while Elliot looks out of the little window on the door to see what’s happening, the security guard spots him! The security guard tries his best to open the stairwell door to get to Elliot, but Elliot’s strooooong. He manages to hold the door closed, and a struggle ensues. He looks over at Darlene, who’s inwardly panicking, and knows what he has to do. He throws his backpack off, knocks the security guard down with the door, and sprints out of the building. Go, Elliot, go!

Skinny Superman (In a Black Hoodie)

Elliot starts running and the cops follow, which results in a foot chase through most of Manhattan. GO, GO, ELLIOT!!! 🏃🏾 The cops chase him through Central Park, and it’s especially funny when Elliot makes his way across the ice skating rink. He goes down a hill and tumbles a bit, hurting his leg, but still manages to evade the cops and get on a bus.

Darlene, on the other hand, is still in the building. She runs upstairs and hides in an empty office, not sure how she’s going to escape. She spots some water bottles and a sweatsuit laying around and remembers there’s a gym downstairs… We see her back in the lobby some time later, sans wig, now holding the bottled water and wearing the sweatsuit. She pretends to exit the gym and slowly walks out the door, managing to avoid the cops and the guards. Boss.

When Darlene gets to the car, she tracks Elliot’s signal again. He’s somewhere on the Upper West Side. Elliot’s still on the bus, and breathes a sigh of relief when he sees that he’s lost the cops chasing him on foot.
Oh, but he’s not out of the woods yet (once again, no pun intended)! A police car pulls up and tries to cut the bus off! Elliot jumps out and takes off running again. Unfortunately, during the chase, Elliot gets hit by an SUV. Y’all know it takes more than a sports utility vehicle to keep our Elliot down (literally)! He wakes up and hobbles away, clearly injured this time. He comes to a nearby bridge, and looks over the side. When he looks to his left and right side, he sees the cops coming after him. Elliot braces himself, climbs on the railing of the bridge AND JUMPS! Y’all know it’ll take more than a neck breaking fall to keep our Elliot down (literally)! He takes some hard tumbles down the cliff below (turns out it wasn’t a straight drop) and rolls over to the sidewalk, where Darlene is waiting for him. Elliot hobbles to the car and they pull off.

Elliot struggles to catch his breath as Darlene drives. In a sweet moment, Elliot reaches out and places his hand on top of hers, just as he did when they were children (Darlene mentioned this before on “402 Payment Required”). No words are spoken between them, but Elliot keeps holding her hand. Awww…

All I Want For Christmas Is My Life Back

Price is back at his house, with the suit he retrieved from the dry cleaners. The invitation to the Deus Group meeting is tucked in the lapel, instructing Price to be at The Brentano at 9:00 PM. Price texts Elliot and lets him know that Tyrell or no Tyrell, the meeting is happening.

At Mrs. DiPierro’s house, everyone’s enjoying their Christmas, except poor Dom. While she’s walking the family dog, she notices the Dark Army vans parked around the neighborhood. Dom takes note of the license plates of each van and where they’re located, and jots the info down in a notebook when she gets back to the house. Bump relinquishing control. After making her notes, she sees a news story about Elliot in Central Park channeling his inner Usain Bolt. Then her phone vibrates. It’s Janice, again.
Someone should dedicate this classic track to her. This time, she wants Dom to drop everything to find “a couple of troublemakers.” She sends Dom a pic of the folks she’s talking about, and it’s Elliot and Darlene. Turns out Darlene ran the red light and the camera caught her. Dammit.

The Gift That Ain’t Givin’

Krista happily walks home from the grocery store with all the goodies she needs for her hot date. When she gets to the door, she has an uneasy feeling and looks across the street. There’s Young MA, staring at her. Young MA proceeds to cross the street, and Krista nervously drops her keys and her groceries. *Sigh* Young MA helps her pick up the Ben and Jerry’s that rolled onto the sidewalk. Krista takes it from her, and when she turns around, she’s face to face with none other than crazy ass Vera. “It’s time we talked,” he says. Now it’s okay to be scared, Krista. The end.

Y’all…this episode was so doggone good!! Hands down, “405 Method Not Allowed” is now my favorite episode of the season. Everything was on point: the acting, the music, the cinematography, the plot, everything. And the fact that there were only two doggone lines of dialog spoken? Brilliant. It’s official: Sam Esmail is a genius. The folks that dogged out “404 Not Found” last week and started writing the entire fourth season off put away all reservations after watching this episode. As of now, it holds a 9.8 rating on IMDB.com. Mr. Robot was even the number one trending topic on Twitter after the show aired!

This episode deserves every accolade; it was great. It reminded me a lot of “eps3.4_runtime-error.r00,” which you all may know as the episode shot with one long tracking shot. Both installments had nail biting tension and an artistic technique utilized to convey the story (that a lot of people didn’t notice right away thanks to the brilliant performances and intriguing plot).
“eps3.4_runtime-error.r00” still reigns supreme between the two (along with the conclusion, “eps3.5_kill-process.inc”), but “405…” runs a very close second.

Let’s talk about the acting for a minute. Everybody did the damn thing, and I mean everybody. You could look into the actors’ eyes during their respective scenes and see what they were thinking and/or planning. Darlene’s panic. Price’s remorse. Dom’s hopelessness. The security guard’s suspicion. Elliot being overwhelmed and later determined. The camera work was nothing to gloss over, either. Man, that stairwell shot was ingenuity at its finest!

I also loved all the Die Hard allusions that were incorporated into the episode. First, with “Ode to Joy” playing in the first part of the episode (I know what you’re thinking, “‘Ode to Joy’ was around centuries before Die Hard was released!” True, but it was also featured a great deal in the film, and when most folks hear that song, they think of that movie), then with the fellas actually watching Die Hard in the lobby. This time though, the good guys are the ones pulling the heist in the building.

I’m glad Elliot and Darlene were able to patch up their relationship…at least at the moment. Elliot made up for the way he treated his sister throughout the episode, showing that he meant what he said to Tyrell. The way he led her away from the elevator when he noticed the security guards were coming up, how he ran to place all the cops’ attention on him, allowing her to escape; and don’t get me started on the hand holding scene. Loved it all.

Unfortunately, the target on Elliot and Darlene’s back will be even bigger at this point. With the break in, the photo from the intersection, the deep fried Dark Army operative, Elliot’s hot one night stand with the Dark Army’s account manager and Price’s impromptu Deus Group meeting, there’ll be no doubt in Whiterose’s or Ms. Thang Jr.’s mind that Elliot is plotting against them.
Speaking of which, in the last 24 hrs., Elliot’s broken into someone’s apartment, had drinks at a club, had sex, walked through the woods, disposed of a body, prepped for a break in, pulled off the break in, ran through the majority of Manhattan, has had zero hours of sleep, AND IS STILL GOING. Just give me a third of that energy. I could make miracles happen.

On to Dom…you’re too damn smart for your own good. Doggone your time for hacking that computer and getting Darlene and Elliot in even deeper trouble. Despite all that, I do hope that Dom finds a way to be free of the Dark Army. Maybe relinquishing control will keep her alive, but she’ll always be restrained and never have peace of mind. I mean hell, the poor woman couldn’t even enjoy Christmas with her family thanks to Janice’s messy ass blowin’ up her phone every five seconds (kind of like in this old school PSA!).

I see the Irish sex trafficker guy is going to be more important to the story than we initially thought. I’m very interested to see how he plays a role in the narrative. Will he help Dom gain her freedom?

As for Mr. Phillip Price, I can’t wait to see what happens during his Deus Group meeting. I have a sinking feeling this isn’t going to end well for him…

I’m still a bit irked at Krista for the way she treated Elliot in the “403 Forbidden” episode, but I don’t want anything bad to happen to her. With Vera however, her chances of survival are slim to none, I hate to say. Can Elliot save her? Will she blame Elliot for all of this happening? We’ll just have to wait and see!

Next ep, Vera and Young MA hold Krista hostage in her own house, Elliot asks Olivia for a favor, Whiterose demands that Elliot be brought to her, and Dom and Darlene are reunited…and it don’t feel so good. Can’t wait! By the way, next week’s episode comes on USA at 8:00 PM instead of 10:00! I hope they don’t cut the cuss words!
—Written by Nadiya

So what did y’all think about “405 Method Not Allowed?” Was it better than the previous episode? Did you notice there was no dialog? Did you enjoy the Die Hard references? What do you think about Elliot being a slender Superman? Did you get the feels from Elliot taking Darlene’s hand? Are you scared for Krista? Do you think that Dom will escape the Dark Army? What do you think will happen at the Deus Group meeting? Let me know in the comments section!
1. Field of this Invention

This invention relates to a process for the separation of a gas mixture containing hydrogen, carbon monoxide and methane to obtain product streams of substantially pure hydrogen and carbon monoxide.

2. Description of the Prior Art

The prior art has commonly employed cryogenic processes for the separation of synthesis gas to yield hydrogen and carbon monoxide as recovered products. Such processes typically involve at least a partial liquefaction of the feed gas mixture and require the efficient use of vapor-liquid contacting and separation equipment for overall economic operation.

When manufactured for the production of carbon monoxide by primary steam reforming of natural gas or by partial oxidation of higher hydrocarbon fossil fuels, the synthesis gas mixture contains residual methane as well as the hydrogen and carbon monoxide common to all synthesis gas streams. The cryogenic processes employed for the separation of such synthesis gas mixtures are designed to reject methane and produce carbon monoxide and hydrogen at a purity consistent with the end use requirement. These designs are intended to minimize the carbon monoxide content of the rejected hydrogen and methane streams in order to maximize carbon monoxide recovery. Characteristically the gas mixture will contain approximately 50 to 70 mol % hydrogen, 15 to 45 mol % carbon monoxide and 2 to 6 mol % methane, together with minor impurities, as for example trace amounts of nitrogen.

Since essentially three primary components are present in the above-described synthesis gas mixture--hydrogen, carbon monoxide and methane--the prior art has commonly employed two serial multiple-plate column liquid-vapor contactors to carry out the synthesis gas separation.
In one conventional process arrangement employing such liquid-vapor contactors, the synthesis gas feed stream is provided at elevated pressure and cooled by heat exchange to form a vapor-liquid mixture which is introduced to the first contacting column. In the first column, the introduced feed is contacted with a chilled methane wash liquid for absorption of the carbon monoxide in the methane wash liquid. Hydrogen is obtained from the first column as carbon monoxide-free overhead product and bottoms liquid is recovered comprising methane and the absorbed carbon monoxide. The recovered bottoms liquid is then throttled to reduced pressure and fractionated in the second contacting column. From the second column, carbon monoxide is recovered as overhead and methane is recovered as bottoms. The methane bottoms are chilled and recycled as the aforementioned methane wash liquid for the first contacting column.

Although the above separation system entails a comparatively simple apparatus arrangement, the carbon monoxide product recovered by the process is unsatisfactory for use in most chemical synthesis applications by virtue of its relatively high hydrogen content. Accordingly, the prior art has attempted to obtain improvement in purity of the carbon monoxide product by removal of the hydrogen contaminant upstream of the second contacting column.

In one such improvement scheme, the synthesis gas is cooled by heat exchange, as before, and introduced as a vapor to the first contacting column. The bottoms liquid from the first contacting column is throttled to lower pressure and passed to a flash drum for vapor-liquid separation. In the flash drum an equilibrium vapor-liquid separation is achieved to reject the bulk of the hydrogen which would otherwise be contained in the feed to the second contacting column.
The liquid from the flash drum thus freed from the hydrogen contaminant is then throttled to still lower pressure prior to its introduction to the second contacting column. By the above-described improvement modifications, a carbon monoxide overhead product from the second contacting column can be obtained with hydrogen contaminant concentrations of less than 5000 parts per million (p.p.m.).

Nonetheless the product recovery attainable in such modified systems is extremely sensitive to product purity. As a result high losses are encountered in the provision of product carbon monoxide containing hydrogen contaminant at concentration levels of less than 5000 p.p.m. Such losses occur by flash-off of carbon monoxide with the hydrogen in the equilibrium flash drum and consequent removal of the flashed carbon monoxide with the hydrogen withdrawn from the drum. Inasmuch as end use specifications for the carbon monoxide product in many applications, as for example for acrylic and polyurethane resin production, require a hydrogen content of less than about 3000 p.p.m., it has been necessary to operate the prior art process with comparatively low recovery levels, with a maximum recovery of about 90%, as based on the content of carbon monoxide in the synthesis gas feed mixture, to meet such end use carbon monoxide product specifications.

In the prior art, the refrigeration content of the reduced pressure, low temperature product streams has been utilized to cool the synthesis gas feed mixture prior to its introduction to the absorber column. Nonetheless, refrigeration is generally required to provide reflux for the second contacting column and to cool the feed gas mixture and the methane wash liquid for the first contacting column. Under such conditions, the minimum pressure at which the final contacting column can be economically operated is about 20 psia.
Such minimum pressure constraint is imposed by the requirement of providing sufficient pressure to overcome the flow resistance associated with the product transfer lines. Since the process involves two substantial reductions in main stream pressure in the aforementioned throttling steps, considerable compression energy must be expended in initial pressurization of the synthesis gas feed mixture for the process.

Accordingly, it is an object of the present invention to provide an improved process for the separation of a synthesis gas mixture containing hydrogen, carbon monoxide and methane to provide a high purity (carbon monoxide-free) hydrogen product and a high purity (hydrogen-free) carbon monoxide product. It is another object of the invention to provide an improved process of the above type wherein high recovery of carbon monoxide is achieved. It is still another object of the invention to provide an improved process of the above type characterized by low process energy requirements. Other objects and advantages of the invention will be apparent from the ensuing disclosure and appended claims.
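The flash-drum loss mechanism described in the prior-art discussion above lends itself to a small worked example. The sketch below is a toy single-stage flash (Rachford-Rice) calculation; the absorber-bottoms composition and the equilibrium K-values are invented for illustration only and are not taken from this disclosure.

```python
# Toy Rachford-Rice flash calculation illustrating carbon monoxide
# loss in the prior-art equilibrium flash drum. All numbers (the
# assumed absorber-bottoms composition and the K-values at the
# assumed cryogenic flash condition) are hypothetical.

def rachford_rice(z, K, tol=1e-10):
    """Solve sum z_i (K_i - 1) / (1 + beta (K_i - 1)) = 0 for the
    vapor fraction beta by bisection on (0, 1)."""
    def f(beta):
        return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:       # f is monotonically decreasing in beta
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical absorber-bottoms feed: mostly methane wash liquid with
# absorbed CO and a little co-absorbed hydrogen (mole fractions).
z = [0.03, 0.40, 0.57]         # H2, CO, CH4
K = [20.0, 1.5, 0.3]           # assumed vapor-liquid equilibrium ratios

beta = rachford_rice(z, K)     # fraction of feed moles flashed to vapor
x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid
y = [Ki * xi for Ki, xi in zip(K, x)]                         # vapor

h2_rejected = beta * y[0] / z[0]   # fraction of feed H2 flashed off
co_lost = beta * y[1] / z[1]       # CO carried off with the hydrogen

print(f"vapor fraction beta    = {beta:.3f}")
print(f"H2 rejected to vapor   = {h2_rejected:.1%}")
print(f"CO lost with the vapor = {co_lost:.1%}")
```

With these assumed numbers the drum rejects well over half of the dissolved hydrogen to the vapor, but roughly a tenth of the carbon monoxide flashes off with it, which illustrates why, as noted above, recovery in the prior-art scheme tops out near 90% when tight hydrogen specifications must be met.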
When and Where?

The week of March 25th, 2019, eleven judges of the 9th circuit court of appeals and the players for the two sides will assemble in the James R. Browning U.S. Courthouse, San Francisco to hear oral argument in the case of George Young, Jr. v. State of Hawaii.

The Nature of the Game

The case before the judges is a simple one. George Young appeals from the district court’s dismissal of his civil rights action challenging under the Second Amendment provisions of Hawaii law pertaining to the issuance of permits to carry a concealed or unconcealed weapon. In short, is there a Second Amendment right to keep and bear arms outside of the interior of our homes or, as the district court held, does the Second Amendment disappear once we step outside the door to our home? A high stakes game to be sure.

Possible Outcomes

If the en banc panel publishes a decision, which is likely, then that decision will be binding on all subsequent three-judge panels in the 9th circuit court of appeals unless or until the United States Supreme Court issues a decision which casts doubt on the en banc decision in Young v. Hawaii. The en banc panel of judges could simply issue an unpublished decision, which is binding on nobody except the parties to the lawsuit, and/or kick the case back to the district court for a do-over.

The Judges

Because of its size, the 9th circuit court of appeals does not hold “full court” en banc hearings which normally consist of all active, non-recused circuit judges in a particular circuit. Had President Trump been able to fill all of the vacancies on the 9th circuit court of appeals last year then there would now be 29 active judges. As it is, there are 23 active judges on the 9th circuit court of appeals. Which is more than on any other Federal circuit court of appeals.
The en banc panel of judges will consist of 9th Circuit Chief Judge Sidney Thomas and 10 circuit judges who were randomly drawn from a pool of judges which consists of all active circuit judges in this circuit plus senior circuit court judges O’Scannlain and Clifton because they sat on the three-judge panel which decided Mr. Young’s case. These two Senior circuit judges are not required to be a member of the en banc pool. Their participation in the pool is voluntary and only then if they request to be included in the pool of judges.

The one judge who we do know who will be on the panel is 9th circuit Chief Judge Sidney Thomas who serves on every en banc panel in the 9th circuit. This leaves a potential pool of at most 24 judges, the 22 remaining active circuit judges plus the two senior circuit judges who sat on the Young v. Hawaii three-judge panel. Although it is impossible to predict with absolute certainty how an individual judge will decide, we do have enough information on some of them which is sufficient to make an educated guess.

Chief Judge Sidney Thomas – The Chief Judge sat on both the three-judge and en banc panels in the NRA’s concealed carry appeal, Peruta v. San Diego, which was decided (lost) alongside the SAF’s and CalGuns.nuts appeal, Richards v. Prieto. Chief Judge Thomas filed a dissent to the three-judge panel decision in Peruta which the seven-judge majority in Peruta/Richards adopted. Namely, concealed carry is not a right. Chief Judge Thomas’ view is if there is a right to carry arms in public under the Second Amendment then that right is to openly carry firearms. And since the plaintiffs in both the Peruta and Richards concealed carry lawsuits had argued that states can ban Open Carry in favor of concealed carry and had not sought to Open Carry, anywhere or anytime, the en banc court did not decide the Second Amendment Open Carry question because that question was not before the court.
My sense is that if Chief Judge Thomas were allowed to rewrite the legislation then he would require that firearms carried in cities, towns and villages be carried unloaded (and openly) and allow for Loaded Open Carry outside of these places. But Chief Judge Thomas is also of the generation of judges who knows that they are not allowed to legislate from the bench. And so it is a coin toss as to what he decides in Young v. Hawaii with the coin likely landing on its edge. Which is to say that Judge Thomas will hold that the Second Amendment is limited to the home until the Supreme Court says otherwise, or he will concur with the decision which comes closest to what he would write, if he could rewrite the Hawaii statute.

In any event, whatever the court decides Chief Judge Thomas will stand by the decision in Peruta v. San Diego en banc, that there is no right to concealed carry as will the other members of the Peruta v. San Diego en banc panel who are still active judges if they happen to be picked to sit on the Young v. Hawaii en banc panel. Namely, Circuit Judges Graber, McKeown, Fletcher, Paez, and Owens. Keep in mind, the Chief Judge has only one vote. There are ten other members of the en banc panel and it takes at least six of them to agree on a decision.

Senior Circuit Judge O’Scannlain – Judge O’Scannlain wrote the majority three-judge panel decision in both Peruta v. San Diego and Young v. Hawaii. He will vote for a decision which holds that the Second Amendment extends outside the door to our home. Having now held in his Young v. Hawaii three-judge panel decision that the Second Amendment, at a minimum, protects the right to Open Carry one can only hope he doesn’t change his mind if he is able to cobble enough votes on the en banc panel to say that Open Carry can be banned in favor of concealed carry.

Senior Circuit Judge Clifton – Judge Clifton filed a dissent in the Young v. Hawaii decision.
He will vote for a decision which holds that either the Second Amendment is limited to the interior of our homes or he will vote that Hawaii’s de facto ban on the Second Amendment right to keep and bear arms is Constitutional.

Circuit Judge Susan P. Graber – This is an easy prediction. She will vote for a decision which holds that either the Second Amendment is limited to the interior of our homes or she will vote that de facto bans on the Second Amendment right to keep and bear arms are Constitutional.

Circuit Judge M. Margaret McKeown – She will vote for a decision which holds that either the Second Amendment is limited to the interior of our homes or she will vote that de facto bans on the Second Amendment right to keep and bear arms are Constitutional.

Circuit Judge Kim McLane Wardlaw – She will vote for a decision which holds that either the Second Amendment is limited to the interior of our homes or she will vote that de facto bans on the Second Amendment right to keep and bear arms are Constitutional.

Circuit Judge William A. Fletcher – Another easy prediction. Judge Fletcher will vote for a decision which holds that either the Second Amendment is limited to the interior of our homes or he will vote that de facto bans on the Second Amendment right to keep and bear arms are Constitutional.

Circuit Judge Ronald M. Gould – Judge Gould filed a dissent to a 9th circuit decision which held that the Second Amendment is not an individual right. What his views are today is anyone’s guess.

Circuit Judge Richard A. Paez – Judge Paez will vote for a decision which holds that either the Second Amendment is limited to the interior of our homes or he will vote that de facto bans on the Second Amendment right to keep and bear arms are Constitutional.

Circuit Judge Marsha S. Berzon – If Hawaii law allowed some people, other than security guards, to obtain permits then Judge Berzon would uphold a law which made exceptions for “good cause” or “heightened need.” Which way she comes down in the Young v. Hawaii case is anyone’s guess. As Mr. Young did not claim to have good cause or any heightened need then my guess is Judge Berzon will side against Mr. Young and side in favor of a decision which says Hawaii must issue permits to some folks, other than security guards, who have “good cause” for a carry permit.

Circuit Judge Johnnie B. Rawlinson – Unknown.

Circuit Judge Jay Bybee – Prior to the oral argument in my appeal, Charles Nichols v. Edmund G. Brown Jr., et al, I would have placed Judge Bybee on Mr. Young’s side. I am not so sure anymore. Judge Bybee is therefore an Unknown.

Circuit Judge Consuelo María Callahan – This is an easy one. Judge Callahan will side with Mr. Young.

Circuit Judge Carlos Bea – Another easy one. Judge Bea will side with Mr. Young.

Circuit Judge Milan Smith – Unknown.

Circuit Judge Sandra Segal Ikuta – Another easy one. Judge Ikuta will side with Mr. Young. She, along with Senior Judge O’Scannlain, was in the majority in the now vacated three-judge panel decision in Young v. Hawaii.

The Seven Obama Circuit Judges – Judges Watford, Owens, and Friedland will side with the State of Hawaii. Judges Murguia, Nguyen and Hurwitz will probably side with the State of Hawaii. Judge Christen is an unknown.

The Two Trump Circuit Judges – Judge Bennett will side with the State of Hawaii. Judge Ryan D. Nelson is an unknown. We knew when Judge Bennett was nominated that he opposes the Second Amendment. We knew because he said so. Where Judge Nelson stands is anyone’s guess.

The Players

The State of California and the National Rifle Association through its official state organization, the California Rifle and Pistol Association (CRPA, no really) tried to join the game but their petition was denied.
So they will have to sit and wait on the sidelines along with me, the author of this piece and the sole plaintiff in Charles Nichols v. Newsom et al (formerly v. Edmund G. Brown Jr., et al). The difference being I chose to sit this one out. Six months or a year from now when the results of the Young v. Hawaii game are released, the playing field will be much more favorable for me and my California Open Carry appeal.

As to Young v. Hawaii: On one side, we have the Appellant George Young Jr. and his two attorneys, Alan Beck and Stephen Stamboulieh. On the other side we have the Appellees and their somewhat longer list of players:

STATE OF HAWAII, Defendant - Appellee:
John M. Cregor, Jr., Esquire, Deputy Attorney General
Kaliko'Onalani Diara Fernandes, Deputy Solicitor
Neal Kumar Katyal
Mitchell Reich
Colleen Sinzdak
Clyde James Wadsworth

NEIL ABERCROMBIE, in his capacity as Governor of the State of Hawaii, Defendant - Appellee:
John M. Cregor, Jr., Esquire, Deputy Attorney General
Neal Kumar Katyal

DAVID MARK LOUIE I, Esquire, in his capacity as State Attorney General, Defendant - Appellee:
John M. Cregor, Jr., Esquire, Deputy Attorney General
Neal Kumar Katyal

COUNTY OF HAWAII, as a sub-agency of the State of Hawaii, Defendant - Appellee:
D. Kaena Horowitz, Deputy Corporation Counsel
Neal Kumar Katyal
Laureen L. Martin, Assistant Corporation Counsel

WILLIAM P. KENOI, in his capacity as Mayor of the County of Hawaii, Defendant - Appellee:
D. Kaena Horowitz, Deputy Corporation Counsel
Neal Kumar Katyal
Laureen L. Martin, Assistant Corporation Counsel

HILO COUNTY POLICE DEPARTMENT, as a sub-agency of the County of Hawaii, Defendant - Appellee:
Neal Kumar Katyal
Melody A. Parker, Deputy Corporation Counsel

HARRY S. KUBOJIRI, in his capacity as Chief of Police, Defendant - Appellee:
D. Kaena Horowitz, Deputy Corporation Counsel
Neal Kumar Katyal
Laureen L. Martin, Assistant Corporation Counsel

Six Votes to Win!

It takes six votes to win an en banc decision.
As you can see from the above list of 24 judges, there are six votes to hold that the Second Amendment is limited to the home. There are six votes to hold that the Second Amendment extends to public places and that Hawaii’s de facto ban on the issuance of permits is unconstitutional. There are six votes to hold that the Second Amendment extends outside the home (or will assume that it does) but the State of Hawaii can require one to show “good cause” or a heightened need for the issuance of a permit, which Mr. Young has not shown or alleged. Ironically, there are enough judges, whose likely position is “unknown,” that we will not be able to make any prediction at all if they are ultimately revealed to be a majority of the judges chosen to decide this case, or in sufficient number to decide the outcome of this case. We will have to wait until March 18, 2019, to find out which judges were chosen. The greater irony is with eight vacancies remaining on the 9th circuit court of appeals, if the Senate Judiciary Committee stops sitting on its thumb and confirms President Trump’s 9th circuit court nominees then we might get to go through this all over again, because win or lose, either I or the State of California will file an en banc petition in my California Open Carry appeal, Nichols v. Brown. But if we do have to go through another en banc panel rehearing six months or a year from now then the playing field will be much more favorable to me both here in the 9th circuit and, all other things being equal, before the United States Supreme Court. Charles Nichols
Proteins brighten the brain. Through selective activation or inhibition and dissection of neuronal circuits, optogenetic tools have raised hopes for a better understanding of neuropsychiatric mechanisms and of therapeutic targets for various disorders. Although optogenetics overcomes serious limitations of conventional neuronal circuit studies, the method has imperfections of its own; one proposed remedy is optogenetic modulation of neural activity using an internal, animal-generated light source. In this review, the limitations of external light delivery systems and possible approaches for using internal light sources in laboratory animals, and perhaps in human beings, are addressed.
<?xml version="1.0" encoding="utf-8"?> <layout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools"> <data> <variable name="viewModel" type="com.skydoves.mvvm_coroutines.ui.main.MainActivityViewModel" /> <variable name="adapter" type="com.skydoves.common_ui.adapters.TvListAdapter" /> </data> <RelativeLayout android:layout_width="match_parent" android:layout_height="match_parent"> <androidx.recyclerview.widget.RecyclerView android:id="@+id/recyclerView" android:layout_width="match_parent" android:layout_height="match_parent" android:clipToPadding="false" app:adapter="@{adapter}" app:adapterTvList="@{viewModel.tvListLiveData}" app:layoutManager="androidx.recyclerview.widget.GridLayoutManager" app:spanCount="2" tools:listitem="@layout/item_tv" /> </RelativeLayout> </layout>
Frequently Asked Questions

Director of Liberal Studies Dr. Behr held orientation Q&A sessions with about 50 students this semester as students approach the new Practicum requirement for graduation. The Practicum options (study abroad, internship, thesis) give students an excellent credential and distinction on their resumé no matter what direction their LS degree takes them: job market, professional or graduate school.

FAQ

A: The minors you choose to make your B.A. in Liberal Studies will depend on your personal interests and professional goals. Up to one of the minors can be chosen from outside of the College of Liberal Arts and Social Sciences. You should plan on meeting with the Academic Advisor for Liberal Studies or with the Director of Liberal Studies to discuss your program as soon as you think you may want to do the Liberal Studies degree.

A: The possibilities depend only on you and your goals. You can combine minors in History, English and Classics, for instance, for a well-rounded Humanities degree. You could combine Anthropology, Sociology and Psychology for a powerful social science orientation. You could combine Political Science, History, and Classics, or perhaps English, Economics, and Philosophy, as great pre-law degrees. Or how about History, Political Science, and Economics for going into secondary education in social studies? Or if you want to go into secondary foreign language instruction, you might choose German, French, and English or History. With the option of using a minor from outside of the College of Liberal Arts and Social Sciences, the sky is really the limit on tailoring a degree to complement your goals.
With all these options, you should work closely with the Liberal Studies Academic Advisor and with the Liberal Studies Director, as well as with the Academic Advisors in your three minor programs.

A: The trend of many MBA programs, here at UH and across the country, is in favor of incoming students with a well-rounded Liberal Arts education. The first year of MBA studies at many schools provides all the foundation in Economics and Accounting that the degree requires. Be sure to check with the specific MBA program you have in mind for their particular entrance requirements and preferences.

A: The Practicum requirement for the Liberal Studies degree is meant to complement your degree program with a powerful, job-market-valuable experience that will distinguish your degree and give you a competitive edge in your career. There are three options for completing the Practicum requirement, each of which requires enrollment in a UH credit-bearing course, which may or may not be part of one of your minor requirements. The Study Abroad option can be through any approved UH faculty-led course abroad, for which there are many options. The Research Project option must be for an existing designated capstone course identified as such in the UH catalogue or website, or for an Honors thesis. The Internship option may be with either a commercial or non-profit concern with an established internship program. Approval requests for these options are to be submitted to the Director of Liberal Studies before the start of the semester in which the relevant course is to be offered.
An England international, who has won every honour in the game, scoring 230 tries in 324 Leeds appearances in the process. Anybody with those stats is worth a serious look at. As such, Hull KR were last night credited with having a nosey, and that is of little surprise. With former Leeds stars Jamie Peacock, Danny McGuire and many others now at KCOM Craven Park, the Rhinos connection with Rovers is massive.

TOUGH MOMENT: Leeds' Ryan Hall touches down for his second try against Hull FC in last year's Challenge Cup semi. (Image: SWpix)

Whilst no offer has yet been tabled by KR, the pulling power of the aforementioned duo may tempt Hall to consider a move to east Hull. But, and I cannot stress this enough, the fact of the matter is that so many other teams – in both codes and in both hemispheres – will undoubtedly now see the former Oulton Raiders amateur as a realistic target. He could still cut it in the NRL, where he'd probably earn a considerably bigger salary than what he could pocket if he remains in Super League. Moreover, rugby union offers a different avenue for Hall to explore, but let's hope we don't lose one of England's greatest products to the 15-a-side code.

For Rovers, what a massive coup he'd be. The six-times Grand Final winner, still aged only 30, has a handful of years ahead of him and continues to star regularly for the Rhinos. Currently, he's the second-top metre-maker in Super League. With Justin Carney out of contract at the end of this season, there may well be a gap opening on the KR flanks for someone to grab. Whilst the Robins have Ryan Shaw, Joe Wardill, Will Oakes and Elliot Wallis as options on the edges, Hall's experience would be welcomed with open arms. The latter trio are still in the early stages of their career, with Shaw too only 25 years old, and Hall would be a huge complement and assistance to the talent already at the club.
Tim Sheens and his boardroom staff will no doubt be looking closely to see if they can work their magic, but they know they'll face stiff competition to attract the winger to the club for 2019. Yet who thought Rovers would secure the signing of McGuire for this term and next? Stranger things have happened, and you can forgive some KR fans who may already be dreaming of seeing Hall in a red and white shirt.
IT has become part of the accepted history of our time: The bursting of the housing bubble was the primary cause of a financial crisis, a sharp recession and prolonged slow growth. The story makes intuitive sense, since the economic crisis included a collapse in the prices of housing and related securities. The movie “The Big Short,” which is based on a book by Michael Lewis, takes this cause-and-effect relationship as a given. But there is an alternative story. In recent months, Senator Ted Cruz has become the most prominent politician to give voice to the theory that the Federal Reserve caused the crisis by tightening monetary policy in 2008. While Mr. Cruz (who is an old friend of one of the authors of this article) has been criticized for making this claim, he shouldn’t back down. He’s right, and our understanding of the great recession needs to be revised. What the housing-centric view underemphasizes is that the housing bust started in early 2006, more than two years before the economic crisis. In 2006 and 2007, construction employment fell, but overall employment continued to grow, as did the economy generally. Money and labor merely shifted from housing to other sectors of the economy. This housing decline caused financial stress by sowing uncertainty about the value of bonds backed by subprime mortgages. These bonds served as collateral for institutional investors who parked their money overnight with financial firms on Wall Street in the “shadow banking” system. As their concerns about the bonds grew, investors began to pull money out of this system.
Actress Jennifer Lawrence has denied claims that she had sex with Harvey Weinstein after a lawsuit against the disgraced Hollywood producer quoted him bragging about having slept with her. According to the lawsuit filed by a woman identified as ‘Jane Doe,’ Weinstein pushed her to the ground during a meeting in his office in 2013, before forcibly removing her underwear and performing oral sex on her. “Do you even want to be an actress?” he allegedly asked as the woman began to cry in protest. “I slept with Jennifer Lawrence and look where she is; she has just won an Oscar.” In a statement Friday, Lawrence denied having had sex with Weinstein, with the two having worked together on the 2012 film Silver Linings Playbook. “My heart breaks for all the women who were victimized by Harvey Weinstein,” Lawrence said. “I have never had anything but a professional relationship with him. This is yet another example of the predatory tactics and lies that he engaged in to lure countless women.” When the initial slew of allegations against Harvey Weinstein emerged last year, Lawrence denied having any knowledge of “deeply disturbed behavior,” despite having appeared alongside him at countless high-profile fashion events, awards ceremonies, and other star-studded events. “I worked with Harvey five years ago and I did not experience any form of harassment personally, nor did I know about any of these allegations,” Lawrence said at the time. “My heart goes out to all of the women affected by these gross actions. And I want to thank them for their bravery to come forward.” Since the first allegations broke in October 2017, dozens of women have come forward to accuse Harvey Weinstein of sexual crimes ranging from harassment to rape. He has consistently denied any allegations of non-consensual activity. Follow Ben Kew on Facebook, Twitter at @ben_kew, or email him at bkew@breitbart.com.
Personal email systems and file synchronization and sharing tools like Dropbox and Gmail have become prevalent, but have inherent risks in the business world. The Compliance-as-a-Service vendor Sookasa provides a self-service turnkey encryption and compliance solution to ensure files are encrypted wherever they're placed.

Toward the end of 2012, the file storage company Nasuni released data indicating one in five employees admits to using Dropbox at work, even if it's against company policy. The usage numbers have probably gotten higher since, given that Dropbox now claims to have more than 275 million users. CIOs and CISOs hate to admit it, but they know employees use Dropbox and other unauthorized cloud services like Gmail to enhance productivity. Personal email systems and file synchronization and sharing tools have become prevalent in the business world, even if they are not officially sanctioned. Even if you look at highly regulated industries like healthcare, education, legal and financial services, you'll see high penetration of consumer-oriented cloud services.

Perhaps the biggest problem resulting from use of such services is the scattering of files. If you look at services like Dropbox, Box, Gmail, Evernote and numerous others, they all have a similar property. They don't just keep a copy of your data in the cloud; they also scatter or download a copy of that data to all your devices through synchronization. And if you share a file with someone else, the data goes onto their devices as well. Needless to say, this creates quite a problem if a device containing sensitive or regulated data is lost or stolen, or if data is shared with someone who has no business receiving it. According to the U.S. Department of Health and Human Services, the most common cause of a breach of unsecured protected health information (PHI) – a clear violation of HIPAA – is the loss or theft of a device containing the data.
Organizations that allow (or don't prevent) BYOD now have even more unmanaged devices that are connected to these cloud services and receiving company data on them. It's a ticking time bomb in terms of data security and compliance.

A new company emerged from stealth mode a few weeks ago to address this very problem. Sookasa claims to be the first company to enable professionals to natively use their favorite mobile devices and cloud services, such as Dropbox and Gmail, while transparently encrypting sensitive data and addressing regulations such as HIPAA and FERPA. The Compliance-as-a-Service vendor provides a self-service turnkey encryption and compliance solution that promises to encrypt files anywhere they are placed – including in the cloud and on mobile devices and desktops – and remain protected even when shared externally.

Sookasa says it addresses three critical risks of sharing files through cloud services:

1. Unencrypted data can be exposed when it is on a device that is lost or stolen. Even if the data is not accessed illegitimately, the event is still technically a breach of regulations such as HIPAA, FERPA or GLBA, depending on the data type, and must be treated as a breach.

2. Files shared through cloud services might accidentally be shared with people who don't have a legitimate need for the data. For example, an email attachment can be forwarded without the data owner's knowledge or permission, creating an opportunity for a breach.

3. Unencrypted data that is stored in a cloud service could potentially be accessed by the service provider or other authorities. This can be a violation of corporate policy or government or industry regulation.

While the third risk grabs headlines, other solutions already exist to address data encryption for cloud services. It's the other two risks that Sookasa points to as a differentiator.
Sookasa itself is a cloud service that connects to the various cloud providers like Dropbox through APIs and does encryption and access control through those APIs. In addition, you can download lightweight apps to mobile and desktop devices, and through the apps do encryption, decryption and access control on the fly while preserving the cloud service's user experience on the device. After Sookasa is initially set up on the device, it works in the background. This is best illustrated with an example, so I'll continue with my Dropbox scenario.

When Sookasa is installed on a user's device, it creates a "secure" folder within Dropbox, and anything that is placed in this folder automatically gets encrypted, access controlled and audited. A file within the secure folder will have a .sookasa file extension. This indicates the file has Sookasa's unique encryption properties that follow the file no matter where it is scattered. Meanwhile, all of the normal sharing features of Dropbox are preserved.

Through a central dashboard, an administrator of the Sookasa account controls who can access the encrypted files. This administrator creates a team of users on Sookasa. The team members can come from inside or outside the organization, and the names can be drawn from Active Directory if desired. Think of this team as a whitelist of people who are authorized to access the Sookasa-protected files. Anyone who is not whitelisted by the admin cannot open the encrypted files. The admin has the power to revoke access at any time so that team members can be de-provisioned as needed. Device access can be revoked as well in the event that someone loses a device that has files on it. Sookasa also has shared links that let you share encrypted files with people who are not Sookasa users. For example, a doctor can send test results to a patient in a secure fashion, but the patient doesn't have to be on the whitelist.
He can simply access the encrypted file through a special Web link after the patient's identity is authenticated. The administrator can set policies for file access, and files can be set to be accessible even when a device is offline. Files are completely audited so that the administrator can see when people access files or change permissions on files. The audit trail is a requirement for many regulations.

Key management is critical with all such tools. With Sookasa, every file placed in the secure folder has its own encryption key. Files that get the .sookasa extension have metadata that contains the file key of that specific file encrypted by a public master key. This encrypted file key goes with the file when it is replicated to the cloud or users' devices. When you double-click on a file to open it, a request is sent to the Sookasa server to retrieve the file key that is needed to decrypt the file using the private part of the master key. Since Sookasa doesn't store any data files (in this example, they are stored by Dropbox), the files and the master keys are always separate, and neither Sookasa nor the cloud app vendor ever has access to both. For larger enterprises that want to host or exclusively own their private master key, Sookasa has a solution for them to do so.

Sookasa elegantly addresses the three risks mentioned above. If a device containing protected files is lost or stolen, the files are encrypted and only people who are authorized to access the files can open them. Files placed in the secured Sookasa folder are encrypted before they ever get distributed to the cloud service or scattered devices. Protection follows the files wherever they go, and users don't need to change anything about their routines to use the solution. The solution's architecture is said to be scalable.
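The per-file ("envelope") key scheme just described can be sketched in a few lines. Sookasa has not published its implementation, so the following Python is only an illustrative toy using standard-library primitives: all function names are invented here, the "cipher" is a plain HMAC-based XOR keystream, and a real product would use vetted constructions such as AES-GCM plus asymmetric wrapping with the public master key, as the article describes.

```python
import hashlib
import hmac
import os

def _stream(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher keyed by HMAC-SHA256 in counter mode.
    # Illustrative only; not a substitute for a vetted authenticated cipher.
    out = bytearray()
    for counter, start in enumerate(range(0, len(data), 32)):
        block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ k for b, k in zip(data[start:start + 32], block))
    return bytes(out)

def encrypt_file(plaintext: bytes, master_key: bytes) -> dict:
    file_key = os.urandom(32)        # every file gets its own key
    nonce = os.urandom(16)
    # Wrap the file key so it can travel with the file's metadata;
    # only a holder of the master key can unwrap it.
    wrap_key = hmac.new(master_key, nonce, hashlib.sha256).digest()
    wrapped = bytes(a ^ b for a, b in zip(file_key, wrap_key))
    return {"nonce": nonce,
            "wrapped_key": wrapped,
            "ciphertext": _stream(file_key, plaintext)}

def decrypt_file(blob: dict, master_key: bytes) -> bytes:
    wrap_key = hmac.new(master_key, blob["nonce"], hashlib.sha256).digest()
    file_key = bytes(a ^ b for a, b in zip(blob["wrapped_key"], wrap_key))
    return _stream(file_key, blob["ciphertext"])
```

Because the wrapped file key rides along in the file's metadata while the master key stays on a separate server, neither the storage provider nor a stolen laptop holds both pieces, which is the separation the article attributes to Sookasa's design.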
Sookasa's key management is independent of the cloud application service it is interacting with, so it would work the same way on any cloud service or device platform you choose to add.

David Crump, director of Operations for Choice Medical Healthcare, implemented Sookasa for his medical supply company a few months ago. Because his company handles medical records, it must conform to HIPAA regulations for data protection. Choice Medical had built some business processes around Dropbox to facilitate sending files among critical contacts and customers. "I shopped around for solutions so we could do this and still collaborate," says Crump. "I tried a couple of options and they were just disastrous. When I tried Sookasa it allowed us to keep everything we had already created. We didn't lose any functionality and it added on this protection layer that kept everything seamless, kept everything together for us. There wasn't any extra work that we had to put into our processes. By using Sookasa we were HIPAA compliant without any issues."

Linda Musthaler is a Principal Analyst with Essential Solutions Corp., which researches the practical value of information technology and how it can make individual workers and entire organizations more productive. Essential Solutions offers consulting services to computer industry and corporate clients to help define and fulfill the potential of IT.
The DHL logo on its own isn’t an extraordinary piece of design. But it is just one element of a holistic visual identity that we created for Deutsche Post and its subsidiaries back in the mid-1990s. Helge Rieder As a creative director for the Nitsch Design agency in Düsseldorf from 1995 until 2003, I was responsible for the brand design of Deutsche Post. And when Deutsche Post acquired DHL, the migration of the DHL brand into what would become the Deutsche Post DHL Group became part of my remit. From the beginning we developed a strong, concise and remarkable colour concept as the main anchor for the Deutsche Post identity. At that time DHL was already an established brand in the US and we decided to make only slight changes to the logo itself (adjusting the angle of the letters and the spacing between them, opening the counterforms and adjusting proportions and the shade of red) to bring it into line with the Deutsche Post brand family. Vetements' DHL T-shirt on the SS16 catwalk © Catwalking The colour concept (replacing the white with the Deutsche Post yellow) was important for brand recognition. Still, establishing a global brand on a long-term basis requires market penetration and patience. Today DHL is a brand leader and almost everyone recognises its yellow vans. For me the hype about the fashion label [Vetements, the collective led by Demna Gvasalia] is typical of the fashion industry and a consumer-driven and sense-devoid society. Selling a T-shirt with a DHL logo for £185 is crazy. Buying it for this price is beyond reason. I can’t see any creativity in printing a well-known brand on a T-shirt. What is genius is persuading people to freak out about a simple T-shirt. I give the designers and marketing people at Vetements credit if they can sell an “ugly” T-shirt for £185. It’s totally nuts. We live in an overexcited world. As consumers we have to deal with thousands of products and brands that vie for our attention. 
So it becomes more and more difficult for designers and marketers to create something completely new to serve glutted markets. For Vetements the DHL T-shirt is a lucky stunt, and it’s good PR for DHL, too. DHL tolerates the T-shirt (or perhaps it has a deal with Vetements) because it promotes its brand on a channel and in a market where it wouldn’t normally get any attention. “Ugly” or not, the DHL T-shirt is the best choice from the Vetements website. I wouldn’t wear any of the other tops, even if I got them for free. Second photograph: Catwalking
Line6 DC 3-G power supply for the POD HD series. Input: 100-240 V AC, 1.0 A, 50-60 Hz. Output: 9 V DC, 3 A (27 W max). Polarity: plus outside, minus inside.
When Andrew Kim was working on his project, he probably did not expect to become popular among online communities of DIY enthusiasts and Apple-devoted fans with his custom iPad/iPhone/iPod stand. Claimed to be among the best hand-made "iDevice" stands ever, Polyply, as Andrew named it, has been meticulously crafted out of milky-white acrylic plastic and birch plywood with the goal of being just as minimalistic as Apple's gadgets are. Andrew admits that it took him some trial and error before he was finally happy with the appearance and the ergonomics of Polyply. The stand is not meant to be a commercially available accessory yet, but who knows, one of its future versions may end up on your desk one day. Good job, Andrew!

posted on Mar 21, 2011, 2:26 PM
Normally I would say this is a stupid idea, but there are some dumb arses that don't mind owning three devices that do practically exactly the same thing in practically exactly the same way: iPod, iPod that makes calls, aaaand a giant iPod. I mean, it's just like doing a zoom on the same device: 1x zoom, 2x zoom, 3x zoom.

posted on Mar 22, 2011, 12:15 AM
Yea, it seems like a good idea as a display to hold/charge those devices... I'd like to see the placement for each device have its own power source built in so you can just plug the device in and it gets charged without all the cord business. The right side seems off to me and I think that line would look better square. Put another kickstand on the side so you can rotate the display to watch a movie on the pad in landscape. Wrap a nice piece of aluminum on the outside edge and then he's got something to really go to market with!
Q: Sign of eigenvectors change depending on specification of the symmetric argument for symmetric matrices

The signs of the eigenvectors in the eigen function change depending on the specification of the symmetric argument. Consider the following example:

set.seed(1234)
data <- matrix(rnorm(200), nrow = 100)
cov.matrix <- cov(data)
vectors.1 <- eigen(cov.matrix, symmetric = TRUE)$vectors
vectors.2 <- eigen(cov.matrix, symmetric = FALSE)$vectors

# The second and third eigenvectors have opposite sign
all(vectors.1 == vectors.2)
FALSE

This also has implications for principal component analysis, as the princomp function appears to calculate the eigenvectors for the covariance matrix using the eigen function with symmetric set to TRUE.

pca <- princomp(data)
# princomp uses vectors.1
pca$loadings

Loadings:
     Comp.1 Comp.2
[1,] -0.366 -0.931
[2,]  0.931 -0.366

               Comp.1 Comp.2
SS loadings       1.0    1.0
Proportion Var    0.5    0.5
Cumulative Var    0.5    1.0

vectors.1
           [,1]       [,2]
[1,] -0.3659208 -0.9306460
[2,]  0.9306460 -0.3659208

Can someone please explain the source or reasoning behind the discrepancy?

A: Eigenvectors remain eigenvectors after multiplication by a scalar (including -1). The proof is simple: if v is an eigenvector of matrix A with matching eigenvalue c, then by definition Av = cv. Then, A(-v) = -(Av) = -(cv) = c(-v). So -v is also an eigenvector with the same eigenvalue.

The bottom line is that this does not matter and does not change anything.

A: Linear algebra libraries like LAPACK contain multiple subroutines for carrying out operations like eigendecompositions. The particular subroutine used in any given case may depend on the type of matrix being decomposed, and the pieces of that decomposition needed by the user. As you can see in this snippet from eigen's code, it dispatches different LAPACK subroutines depending on whether symmetric=TRUE or symmetric=FALSE (and also, on whether the matrix is real or complex).
if (symmetric) {
    z <- if (!complex.x) .Internal(La_rs(x, only.values))
         else .Internal(La_rs_cmplx(x, only.values))
    ord <- rev(seq_along(z$values))
} else {
    z <- if (!complex.x) .Internal(La_rg(x, only.values))
         else .Internal(La_rg_cmplx(x, only.values))
    ord <- sort.list(Mod(z$values), decreasing = TRUE)
}

Based on pointers in ?eigen, La_rs() (used when symmetric=TRUE) appears to refer to dsyevr while La_rg() refers to dgeev. To learn exactly why those two algorithms switch some of the signs of the eigenvectors of the matrix you've handed to eigen(), you'd have to dig into the FORTRAN code used to implement them. (Since, as others have noted, the sign is irrelevant, I'm guessing you won't want to dig quite that deep ;).
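The algebra in the first answer is easy to sanity-check numerically. The snippet below is a small pure-Python illustration (the matrix, helper name, and values are invented for the example; NumPy or R would be the natural tools) showing that a symmetric matrix maps both v and -v to the same eigenvalue multiple, which is why the two LAPACK code paths are free to disagree on sign:

```python
def matvec(A, v):
    # Multiply a matrix (given as a list of rows) by a vector.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2.0, 1.0],
     [1.0, 2.0]]   # symmetric; eigenvalues are 3 and 1
v = [1.0, 1.0]     # eigenvector for eigenvalue c = 3
c = 3.0

for candidate in (v, [-x for x in v]):   # try both v and -v
    assert matvec(A, candidate) == [c * x for x in candidate]
```

Both assertions hold, so either sign choice returned by eigen() is a valid eigenvector, and downstream results such as princomp loadings differ only by a sign flip.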
Pliny The Younger _Letter to Sura_ Our leisure furnishes me with the opportunity of learning from you, and you with that of instructing me. Accordingly, I particularly wish to know whether you think there exist such things as phantoms, possessing an appearance peculiar to themselves, and a certain supernatural power, or that mere empty delusions receive a shape from our fears. For my part, I am led to believe in their existence, especially by what I hear happened to Curtius Rufus. While still in humble circumstances and obscure, he was a hanger-on in the suite of the Governor of Africa. While pacing the colonnade one afternoon, there appeared to him a female form of superhuman size and beauty. She informed the terrified man that she was "Africa," and had come to foretell future events; for that he would go to Rome, would fill offices of state there, and would even return to that same province with the highest powers, and die in it. All which things were fulfilled. Moreover, as he touched at Carthage, and was disembarking from his ship, the same form is said to have presented itself to him on the shore. It is certain that, being seized with illness, and auguring the future from the past and misfortune from his previous prosperity, he himself abandoned all hope of life, though none of those about him despaired. Is not the following story again still more appalling and not less marvelous? I will relate it as it was received by me: There was at Athens a mansion, spacious and commodious, but of evil repute and dangerous to health. In the dead of night there was a noise as of iron, and, if you listened more closely, a clanking of chains was heard, first of all from a distance, and afterwards hard by. Presently a specter used to appear, an ancient man sinking with emaciation and squalor, with a long beard and bristly hair, wearing shackles on his legs and fetters on his hands, and shaking them. 
Hence the inmates, by reason of their fears, passed miserable and horrible nights in sleeplessness. This want of sleep was followed by disease, and, their terrors increasing, by death. For in the daytime as well, though the apparition had departed, yet a reminiscence of it flitted before their eyes, and their dread outlived its cause. The mansion was accordingly deserted, and, condemned to solitude, was entirely abandoned to the dreadful ghost. However, it was advertised, on the chance of some one, ignorant of the fearful curse attached to it, being willing to buy or to rent it. Athenodorus, the philosopher, came to Athens and read the advertisement. When he had been informed of the terms, which were so low as to appear suspicious, he made inquiries, and learned the whole of the particulars. Yet none the less on that account, nay, all the more readily, did he rent the house. As evening began to draw on, he ordered a sofa to be set for himself in the front part of the house, and called for his notebooks, writing implements, and a light. The whole of his servants he dismissed to the interior apartments, and for himself applied his soul, eyes, and hand to composition, that his mind might not, from want of occupation, picture to itself the phantoms of which he had heard, or any empty terrors. At the commencement there was the universal silence of night. Soon the shaking of irons and the clanking of chains was heard, yet he never raised his eyes nor slackened his pen, but hardened his soul and deadened his ears by its help. The noise grew and approached: now it seemed to be heard at the door, and next inside the door. He looked round, beheld and recognized the figure he had been told of. It was standing and signaling to him with its finger, as though inviting him. He, in reply, made a sign with his hand that it should wait a moment, and applied himself afresh to his tablets and pen. Upon this the figure kept rattling its chains over his head as he wrote. 
On looking round again, he saw it making the same signal as before, and without delay took up a light and followed it. It moved with a slow step, as though oppressed by its chains, and, after turning into the courtyard of the house, vanished suddenly and left his company. On being thus left to himself, he marked the spot with some grass and leaves which he plucked. Next day he applied to the magistrates, and urged them to have the spot in question dug up. There were found there some bones attached to and intermingled with fetters; the body to which they had belonged, rotted away by time and the soil, had abandoned them thus naked and corroded to the chains. They were collected and interred at the public expense, and the house was ever afterwards free from the spirit, which had obtained due sepulture. The above story I believe on the strength of those who affirm it. What follows I am myself in a position to affirm to others. I have a freedman, who is not without some knowledge of letters. A younger brother of his was sleeping with him in the same bed. The latter dreamed he saw some one sitting on the couch, who approached a pair of scissors to his head, and even cut the hair from the crown of it. When day dawned he was found to be cropped round the crown, and his locks were discovered lying about. A very short time afterwards a fresh occurrence of the same kind confirmed the truth of the former one. A lad of mine was sleeping, in company with several others, in the pages' apartment. There came through the windows (so he tells the story) two figures in white tunics, who cut his hair as he lay, and departed the way they came. In his case, too, daylight exhibited him shorn, and his locks scattered around. Nothing remarkable followed, except, perhaps, this, that I was not brought under accusation, as I should have been, if Domitian (in whose reign these events happened) had lived longer. 
For in his desk was found an information against me which had been presented by Carus; from which circumstance it may be conjectured--inasmuch as it is the custom of accused persons to let their hair grow--that the cutting off of my slaves' hair was a sign of the danger which threatened me being averted. I beg, then, that you will apply your great learning to this subject. The matter is one which deserves long and deep consideration on your part; nor am I, for my part, undeserving of having the fruits of your wisdom imparted to me. You may even argue on both sides (as your way is), provided you argue more forcibly on one side than the other, so as not to dismiss me in suspense and anxiety, when the very cause of my consulting you has been to have my doubts put an end to.
Q: Visual Studio 2008 with SQL Server 2005 Developer Edition

I am trying to add a database to the App_Data location in an ASP.NET MVC 2 application in Visual Studio 2008 (VS). I have SQL Server 2005 Developer Edition installed on the local machine. However, when adding the database, VS complains that SQL Server 2005 Express is required. I configured VS to use the local server instance (MSSQLSERVER), which is the Developer edition. It still failed. I then installed SQL Server 2005 Express on the machine, configured VS to use the Express server instance (SQLEXPRESS), and the database creation started working. My question is whether there is a way to get VS to use the Developer edition of SQL Server.

A: Sure, you can use SQL Server 2005 Developer edition - you just cannot add the .mdf to App_Data if you do this. SQL Server 2005 Express has this extra feature that you can simply drop a .mdf/.ldf into the App_Data folder and get going. But this is an Express-only feature.

If you want to use SQL Server 2005 Developer, you need to create a database on the server, using SQL Server Management Studio, and you need to connect to it using a regular connection string. The .mdf/.ldf files will be placed in the usual SQL Server data directory and used from there.
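For illustration, here is roughly how the two setups differ at the web.config level. This is only a sketch: "MyDb", the connection string names, and the instance names are placeholders you would adjust to your own server and file names.

```xml
<!-- Sketch only: "MyDb" and the name attributes are placeholders. -->
<connectionStrings>
  <!-- Developer/Standard edition: database created on the server instance -->
  <add name="MyDbServer"
       connectionString="Data Source=.;Initial Catalog=MyDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
  <!-- Express edition: .mdf attached directly out of App_Data -->
  <add name="MyDbAppData"
       connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyDb.mdf;Integrated Security=True;User Instance=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

The AttachDbFilename/User Instance style is exactly the Express-only convenience described above; the Initial Catalog style is the "regular connection string" you would use with Developer edition.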
Saturday, May 12, 2012

Evolutionary Blackballing No Longer in the Closet

Evolutionists don't usually advertise their McCarthyist blackballing, for even they realize that manipulating the message, controlling information, ruining careers and the like doesn't look good. But in the wake of the uproar over this year's commencement speaker—neurosurgeon Ben Carson, who doubts all of biology arose spontaneously as evolutionists insist—Emory University President Jim Wagner had no choice but to implement an evolutionary "background checking step" to filter out all future commencement speakers who might say something interesting, and to make it clear to all that such a blackballing procedure would be formally implemented. Of course McCarthyites always believe they are right, for after all they hold the truth. And so there is always the hypocritical twist that while engaging in their blackballing activities evolutionists tell each other they are upholding truth. So it is not surprising that, according to Jaap, President Wagner "expressed his hopes that this discussion can be followed up in the fall, with a College-wide discussion on truth and systems of belief." Yes, it will be another teachable moment with the evolutionists. In that fact-free ambiance all will be assured that the good doctor is a fine man with good intentions, but that the objective, unquestionable scientific truth that everything came from nothing can be a bit too raw and hard-hitting for the sentimental. How can we better communicate the hard truth of evolution while not offending those not equipped to handle it? That will be the question of the day in that polite, collegiate gathering where evolutionists, having controlled the message, continue to drink their own bathwater. You can be sure that there will be no more evolution doubters at Emory University; blackballing is now out of the closet.
31 comments:

Given that people who appeal to general-purpose methods of denying anything are essentially solipsists, it's unclear why you'd expect the science community to treat them any differently than they would, well, solipsists. Ultimately, if you accept *X, you dismiss ethics, you don't have to abide by my set of moral codes, you determine your own conscience based on false premises. *X = Capitalism, Free Will, Christianity, Islam, Judaism, Atheism, Hinduism

Is it right for a commencement speaker for a large, diverse audience to portray a good portion of that audience as "dismissing ethics?" Hunter thinks it is CRAZY to pick a commencement speaker that wouldn't alienate and insult the audience. Do you think you have the right to tell a PRIVATE university how to select a commencement speaker?

The fact is, a lot has been written by Darwinists themselves about how evolutionary theory - or rather the materialistic paradigm - undermines ethics. This logical conclusion is recognized both by proponents and opponents of materialism. It seems like Darwinists think that they can say anything they like, and when it becomes inconvenient, they can just deny it was ever said. Please observe this, from Prof. Provine's keynote address at the Darwin Day event at U Tenn in 1998, a well-known statement that is conspicuously not denounced by the elites:

>> Naturalistic evolution has clear consequences that Charles Darwin understood perfectly. 1) No gods worth having exist; 2) no life after death exists; 3) no ultimate foundation for ethics exists; 4) no ultimate meaning in life exists; and 5) human free will is nonexistent . . . >>

I have never ever heard of Darwinists being offended, signing petitions and complaining about this, or similar things from the likes of Crick, or Dawkins, or any number of others. Now, explain to me how this differs substantially in underlying meaning -- evolutionary materialism leads to subjectivism and/or relativism on morals -- from what Dr Carson said.
Or for that matter, from what Plato said in The Laws, Bk X, 2350 years ago. This is sounding uncommonly like it is not what is being said but who says it that is the problem. Absent a convincing explanation, I think I have a right on fair comment to hold that what is really going on is anti-Christian bigotry, disguised as huffing and puffing and finding offence over that which is routinely implied or outright said by leading Darwinists, once it comes from the mouth of someone who is taking exception to it.

"No ultimate foundation" and "equates the acceptance of evolution with a lack of ethics and morality" are very different. The letter to the editor believes Carson said the latter. Having ethics with other foundations and having a lack of ethics are very different things. I know you believe ALL ethics are founded in your God, but let's just agree for now that that belief is not universal. If you hold that outside your view of Christianity there is no ethics, you might be a poor commencement speaker for Emory as well.

And bigotry? Persecution? McCarthyism? All these terms have been used in reference to this situation. He's still speaking. Says one professor: "Dr. Carson was a childhood hero of mine, and he still is a hero of mine..." "The professors say this is no protest and they still want Carson to speak at the commencement."

Can one be a Christian and accept evolutionary theory, or the present theory of the big bang? Dr Carson in his interview says no, you can have no foundation for your ethics if you do not accept his beliefs. He is free to believe that; Professor Roode is as free to express his opinion. It seems to me that there is only one group bitterly denouncing the exercise of free civil dialog.

You can SAY that.
The problem is that if naturalistic evolution = evolutionary materialism is so, and we have no real choice or purpose, not only is it so that there is no ultimate foundation for ethics, but the relative "foundations" we erect boil down to might and manipulation make "right." Which is precisely what nihilism is; just what Plato warned against 2350 years ago in the Laws, Bk X: >> [The avant garde philosophers, teachers and artists c. 400 BC say that] The elements are severally moved by chance and some inherent force according to certain affinities among them-of hot with cold, or of dry with moist, or of soft with hard, and according to all the other accidental admixtures of opposites which have been formed by necessity. After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only . . . . [[T]hese people would say that the Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT.] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. 
They are told by them that the highest right is might [[ Evolutionary materialism leads to the promotion of amorality], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [[Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles], these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others [[such amoral factions, if they gain power, "naturally" tend towards ruthless tyranny; here, too, Plato hints at the career of Alcibiades], and not in legal subjection to them . . . >>

So the logic here is that because you agree with Carson, no one is even allowed to question his invitation to a private affair for a community that a substantial percentage of is offended by his suggestion they are amoral because they believe in evolution?

"The elements are severally moved by chance and some inherent force according to certain affinities among them" Not a bad guess - I'm glad you and Plato didn't have your way, and the Avant-garde has kept searching and pushing human knowledge forward for the last 2400 years.

KF: You can SAY that. The problem is that if naturalistic evolution = evolutionary materialism is so, and we have no real choice or purpose, not only is it so that there is no ultimate foundation for ethics, but the relative "foundations" we erect boil down to might and manipulation make "right."

Exactly why would this necessarily be the case? The fact that you haven't actually argued for this in any detailed way indicates you do not recognize this as an idea that would be subject to criticism. For example, is it not logically possible that God created evolutionary mechanisms as a secondary cause and allowed great freedom as to what kind of life would arise, just as he supposedly created gravity as a secondary cause?
If one is genuinely open to the idea that we simply do not know what happens after we die, is it not logically possible that human consciousness could exist in some form after death even if God doesn't exist? Is it not logically possible that God purposely designed us to be material beings that had a finite existence, making that our purpose? I'm guessing you recognize the above as ideas that would be subject to criticism. But, it would seem you're unable to recognize your own assumptions as such.

On that note, I wonder how many posters here work for, or went to, or support institutions that have explicit doctrinal statements, as opposed to Emory, which has an inclusive and tolerant statement of purpose. Didn't Dr. Dembski have to walk back some statements on the flood recently? No cries of academic freedom, intimidation or bullying then. For goodness sake, somewhere that has had Newt Gingrich as a student, Jimmy Carter and the Dalai Lama as faculty, a School of Theology, and some great evolutionary biology research has probably had some nice dialogues. What shared dogma is there to punish people for not adhering to? Who's doing the bullying?

I think you would be well instructed to have a look here, to see more of the problem. Michael Ruse, 1985:

>> "Ethics as we understand it is an illusion fobbed off on us by our genes to get us to co-operate." >>

KF

PS: Thorton, you begin with a smear: that you insist on falsely labelling design theory as creationism at this stage is willful dishonesty. I need say nothing further to you and simply note for record to onlookers, as you want to drag this discussion down into personalities.

F/N: For record, onlookers, I think you will notice there was a problem of persistent, willful obstructionism at UD, including a nest of sock-puppets and trolls, extending over about a year; and associated with indefensible behaviour at several hate sites.
In the case of Dr Liddle, with all due respect to her gentility (and pardon my having to speak to such, to give the rest of the story), she was in fact unfortunately associated with the enabling behaviour of going along meekly with abusive commentary at one of the notorious anti-design hate sites; sites that fully warrant my general policy of not engaging in discussions where participants on the other side routinely resort to abuse and harassment. BTW, you the astute onlooker would be well advised to note that I hold no moderating powers at UD and from time to time have made my objections known publicly and privately where I think things have gone wrong or too far (as will be inevitable in the real world, not the one where Thorton's side gets away with abuse, name-calling, slander, threats targeting uninvolved family members, outing tactics, invasion of unrelated web sites, and probable email tampering while demanding passivity on ours . . . ); and not without some positive effect. All of this was brought out in detail months ago, but of course telling the rest of the story on things like mafioso style threats made against my family would not serve Thorton's purpose, or the posting of RW pictures with defacing that serve as both mockery and targeting information. In addition, you know or should know that I have been subjected to considerable web harassment from Thorton's side of the fence, with very little protest or self-correction on that side. What that tells me is that what Thorton is doing here is diverting from a substantial issue into atmosphere-poisoning and polarisation in place of addressing serious issues on the merits. In short, he plainly has nothing cogent to say so he thinks that by letting off stink bombs he can score points.
All he is doing in the end is substantiating the concern that Carson made that evolutionary materialism encourages an atmosphere of relativist nihilism that lends itself to the sort of ruthless faction tactics we are seeing above. I had hoped that this blog was showing some improvement on that side, but it is quite evident that that is not the case. In any case I have said enough for responsible people to see what is going on. G'day.

>> of course Gentle Ben (and he is indeed one of the gentlest, kindest people one could ever meet) doesn't believe that his Darwinist friends and colleagues are necessarily unethical. What he believes is that Darwinism is necessarily materialistic. (This is a view about Darwinism that he shares with some devout Darwinists themselves.) And he believes that materialism, if true, is incompatible with free will and with ethical norms (which must be, after all, norms for the guidance of free choices, if they are to have any standing, force, and validity at all). Now, he knows perfectly well that people who believe in materialism are in many cases decent, honorable, ethical people. But he thinks that they lead lives that are much better than their formal philosophical beliefs would require them to lead. He believes that their commitment to materialism makes it impossible for them to give a sound account of the ethical norms which they themselves, to their credit, live by. Of course, he might be wrong about that (though I don't think he is), just as he might be wrong about the validity of Darwinism as a scientific theory, or the compatibility of Darwinism with the rejection of materialism. But it's certainly not a mean or crazy thing to believe or say. It's scarcely a cause for "concern" about having him as a Commencement speaker.
>> That is what the objectors of the ilk of Thorton don't want you thinking about (and clearly cannot cogently answer to), so the best retort to the sort of tactics that have begun to play out above, is to put the issue back on the merits.

>> Beyond the specific episode involving Emory University and Ben Carson, the general point needs to be emphasized. Dr. Carson is protected both by his renown and by the fact that a Commencement address, however high profile, is still just a one-shot event. It's not an academic appointment. If Carson's brief comments on evolution drew this kind of harsh and distorting criticism, imagine the results if he were someone else: a young scientist seeking a strong start to his career, a not so young but still untenured scientist with his livelihood to protect, even a tenured academic worried about his reputation and the future careers of his own grad students. Imagine one of those folks harboring private doubts about Darwin -- as, in fact, we know that plenty do. He would have to be nearly suicidal, in disregarding his future job prospects -- either that or fantastically brave -- to breathe a word about his opinions. This is, once again, how Darwinists maintain the fiction that the scientific community has reached a freely determined "consensus" in favor of Darwinian evolution and against competing scientific views like intelligent design. The consensus is maintained by intimidation. It's a farce -- but for vulnerable people in academic life, a scary farce. >>

The central flaw in creationism is the same flaw found in the pre-enlightenment conception of human knowledge, in that it's irrational, supernatural or completely absent. Specifically, you're assuming that morality isn't an idea that would be subject to criticism, but is exhaustively true. As such, it cannot or should not become more accurate over time. However, I'm a fallibilist and a critical rationalist.
As such, we make progress by conjecturing theories, then exposing them to criticism. So, I have no reason to expect our answers to moral questions to be exhaustively true. Nor am I surprised that our assumptions about the impact of evolutionary theory on morality would contain errors, but would become more accurate over time. New discoveries reveal new problems. But those problems eventually get solved. While this might be a problem for your conception of human knowledge, it's not a problem for mine. In fact, it's a direct consequence of mine. Since we cannot predict what kind of knowledge we will create, we cannot predict what impact it will have on morality. For example, at some point, we will create the knowledge of how to bring back someone who has died less than 30 minutes ago due to serious injury. What would be the effect this would have on ethics? Is it OK to violently kill someone as long as you revive them within 30 minutes? Do you see how an open-ended process of knowledge creation makes the idea of a fixed moral code that never changes and never becomes more accurate untenable? Darwin's discovery is no different. It revealed new problems to be solved. This includes moral problems that we couldn't have predicted. Nor does this mean that we exhaustively solve them right away. But, again, this is only a problem for your conception of human knowledge, not mine.

I guess the answer to my question, "So the logic here is that because you agree with Carson, no one is even allowed to question his invitation to a private affair for a community that a substantial percentage of is offended by his suggestion they are amoral because they believe in evolution?", is yes. Why should the professors and graduates of Emory University, who believe in evolution and feel quite moral, not have the right to a dialogue about the choice of commencement speaker?
RC: You obviously do not want to acknowledge the difference between dialogue and intimidation, as Klinghoffer has so aptly pointed out. I need not repeat myself on that score. In addition, you do not seem to appreciate that reiterating a false talking point does not transform it into truth. Since onlookers can see for themselves why I pointed out that no small number of Darwinists -- including Darwin -- have said or implied the substantial claim made by Carson, it is plain that the real objection, and reason why he was turned into a strawman and scapegoat to be smeared publicly, is that he spoke as a Christian.

KF says it is "anti-Christian bigotry" to complain when someone invited to your home institution describes you as devoid of ethics because of his religious beliefs. I wonder how an evolutionary biology student, after working hard for 6 years on her Ph.D., will feel at her graduation, with her family watching, if Dr. Carson chooses to describe her as amoral and irrational. Hopefully he won't--but I think the warning by the Emory faculty is valid, given his public statements. Let's hope he talks about his life and accomplishments in a positive manner, not detracting from others. Of course, in the spirit of open dialogue, KF posted this (even quoting me) on the site where those who don't conform to Barry's beliefs aren't allowed to post.
Alcohol has many health benefits, but it also leads to over 80,000 deaths in America per year. And if you're engaging in any of these events (below), then you ARE drinking a little too much! Here are the signs:

You drink too much on one occasion: Gregory A. Smith, MD says “drinking too much on just one occasion can change your life for the worse."

You only drink a lot on the weekend: Dr. Smith says, “if you don’t drink daily but are drinking regularly, such as every Friday night, that’s a red flag."

You don't care about your responsibilities as much: Keith Humphreys, PhD says, “drinking is a problem when you notice that you’ve started to neglect things that are important to you for the sake of alcohol."

Alcoholic beverages sneak up on you: Not knowing your limits may be a clue that you're a binge drinker.

You lose your memory: You forget parts of your night after drinking a lot the night before.

Your loved ones worry about you: Deidra Roach, MD says, “the first step is to recognize that you’re drinking more than you should, and then to set some goals for yourself." And Dr. Humphreys added, “And if you’re afraid to ask people if you drink too much, that’s probably a sign that you’re overdoing it, too."

Now that you know whether you do or do NOT drink too much, you can set a plan and never get into irresponsible drinking. Of course, having one or two drinks now and then won't hurt you, but overindulging in booze can lead to craziness and even death. So the next time you're out having an AH-MAZYZING time, remember to drink RESPONSIBLY.
Q: addEventListener and getElementsByClassName

Hello. Here is the HTML:

<div id="arr">
  <div class="test">sdvsdvsdv</div>
  <div class="test">4349567294</div>
</div>

Using addEventListener, getElementById and document.getElementsByClassName, and without using jQuery, I need to get the HTML content of the element that was clicked when one of the <div class="test"></div> elements is clicked.

A: It can be done, for example, like this: https://jsfiddle.net/matkdLxf/

The same thing using getElementsByClassName: https://jsfiddle.net/23w21c3z/

var test_items = document.querySelectorAll(".test");
var result = document.getElementById("result");

for (var i = 0; i < test_items.length; i++) {
    test_items[i].addEventListener("click", function () {
        result.innerHTML = this.innerHTML;
    }, false);
}

#result {
    margin-top: 20px;
    border-top: 2px solid #ccc;
    padding-top: 10px;
}

<div id="arr">
  <div class="test">sdvsdvsdv</div>
  <div class="test">4349567294</div>
</div>

<div id="result">
</div>
Here is THE rumor that has been setting NeoGaf and the Japanese forums ablaze for the past two days: Playstation would announce a new PS Vita, carrying the name PSE, at this E3. It would basically be a portable PS4 running at 720p, and a direct competitor to the Nintendo Switch. Games would run at 720p at 30 or 60 fps, and in docked mode at 900p (the dock would be priced at $59). But compressing 60-70 GB of data would be complicated, so the console would be a cloud console using 4G, or would rely on a compression system via cartridges. It all remains unclear… The code name is: Transformacija. It seems that developers would not need to make additional patches to run PS4 games. Here are the official documents that allegedly leaked. Real leak or hoax? We will have to wait for the Playstation conference at E3 on June 13 at 3 a.m.
---
abstract: 'We examine scaling relations of dispersion-supported galaxies over more than eight orders of magnitude in luminosity by transforming standard fundamental plane parameters into a space of mass, radius, and luminosity. The radius variable $r_{1/2}$ is the de-projected (3-D) half-light radius, the mass variable $M_{1/2}$ is the total gravitating mass within this radius, and $L_{1/2}$ is half the luminosity. We find that from ultra-faint dwarf spheroidals to giant cluster spheroids, dispersion-supported galaxies scatter about a one-dimensional “fundamental curve” through this MRL space. The mass-radius-luminosity relation transitions from $M_{1/2} \sim r_{1/2}^{1.44} \sim L_{1/2}^{0.30}$ for the faintest dwarf spheroidal galaxies to $M_{1/2} \sim r_{1/2}^{1.42} \sim L_{1/2}^{3.2}$ for the most luminous galaxy cluster spheroids. The weakness of the $M_{1/2}-L_{1/2}$ slope on the faint end may imply that potential well depth limits galaxy formation in small galaxies, while the stronger dependence on $L_{1/2}$ on the bright end suggests that baryonic physics limits galaxy formation in massive galaxies. The mass-radius projection of this curve can be compared to median dark matter halo mass profiles of $\Lambda$CDM halos in order to construct a virial mass-luminosity relationship ($M_{\rm vir} - L$) for galaxies that spans seven orders of magnitude in $M_{\rm vir}$. Independent of any global abundance or clustering information, we find that (spheroidal) galaxy formation needs to be most efficient in halos of $M_{\rm vir} \sim 10^{12} \, M_\odot$ and to become inefficient above and below this scale. Moreover, this profile matching technique for deriving the $M_{\rm vir} - L$ is most accurate at the high and low luminosity extremes (where dark matter fractions are highest) and is therefore quite complementary to statistical approaches that rely on having a well-sampled luminosity function.
We also consider the significance and utility of the scatter about this relation, and find that in the dSph regime observational errors are almost at the point where we can explore the intrinsic scatter in the luminosity-virial mass relation. Finally, we note that purely stellar systems like Globular Clusters and Ultra Compact Dwarfs do not follow the fundamental curve relation. This allows them to easily be distinguished from dark-matter dominated dSph galaxies in MRL space.' author: - 'Erik J. Tollerud, James S. Bullock, Genevieve J. Graves, Joe Wolf' bibliography: - 'paper.bib' title: 'From Galaxy Clusters to Ultra-Faint Dwarf Spheroidals: A Fundamental Curve Connecting Dispersion-supported Galaxies to Their Dark Matter Halos' --- Introduction {#sec:intro} ============ Galaxy observables such as size, luminosity, and velocity dispersion are known to follow scaling relations. The study of these relations provides a window into the processes that regulate galaxy formation. The $\Lambda$CDM dark matter halos that host these galaxies are also predicted to follow structural scaling relations, including relations between their central densities and total virial masses. In this paper, we seek to link galaxy observables to dark matter halo properties by studying galaxy dynamical masses ($M_{1/2}$) within their 3-D half-light radii ($r_{1/2}$) as a function of galaxy luminosity ($L_{1/2} = L/2$). This coordinate space of intrinsic parameters (MRL Space) is obtained via a simple transformation of the standard [*observed*]{} parameters of fundamental plane space. Our approach is motivated by the work of @wolf09, who showed that the dynamical mass of a spheroidal galaxy within $r_{1/2}$ can be determined accurately from observed sizes and velocity dispersions without knowledge of the stellar velocity dispersion anisotropy. 
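For orientation, the anisotropy-insensitive estimator of @wolf09 can be written approximately as
$$M_{1/2} \simeq 3 \, G^{-1} \langle \sigma_{\rm los}^2 \rangle \, r_{1/2} \simeq 4 \, G^{-1} \langle \sigma_{\rm los}^2 \rangle \, R_e \, ,$$
where $\langle \sigma_{\rm los}^2 \rangle$ is the luminosity-weighted mean square line-of-sight velocity dispersion and $R_e \simeq (3/4) \, r_{1/2}$ is the projected (2-D) half-light radius; we quote this form only as a guide, and refer the reader to that work for the derivation and its accuracy limits.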
This fact enables manifestly apparent physical interpretations of MRL space and, in principle, a method to connect central galaxy densities to global dark matter halo properties. It is well established that when placed in a parameter space of observed velocity dispersion ($\sigma$), 2-D effective radius ($R_e$), and surface brightness ($I_e$), bright ($\gtrsim L_*$) early-type galaxies lie approximately within a two-dimensional “fundamental plane” [@dj87fp; @dressler87dnsig; @faber87fp]. Other work [e.g. @nieto90faintfp; @b2f1; @burbend97; @pb02fundline; @zar06fman; @shankar06; @woo08; @forbes08] has expanded upon or considered similar such relations, sometimes including galaxies that have significant rotationally-supported components. These scaling relations provide a wealth of opportunities to examine what physical processes generate them [e.g. @dantas00; @dekel03fundline; @robertson06ellscale; @zar08eqgal; @hopkins08fpdiss; @korm09ESph; @bovill09; @graves09ii], and hence further constrain scenarios of galaxy formation. Zaritsky and collaborators [@zar06fmdwarfs; @zar06fman] explored a unified description of the fundamental plane parameters for all spheroids that are embedded within their own dark matter halos. They found that dwarf spheroidal galaxies (dSphs), dwarf elliptical galaxies (dE), normal elliptical galaxies (E), and the extended stellar spheroidal components of galaxy clusters (cluster spheroids, CSphs) could be characterized by a 2-D fundamental manifold in ($\sigma$, $R_e$, $I_e$) space (although curved relations had been noted in sub-spaces, e.g. @gra05). The CSph of a galaxy cluster halo is the sum of the brightest cluster galaxy (BCG) and the extended intra-cluster stars (ICS). Empirically, the inclusion of CSphs is motivated by the fact that they demonstrate a relationship between $R_e$ and $I_e$ that is similar to elliptical galaxies in many respects [@gonz05icl]. 
From a theoretical/cosmological perspective, a CSph is the most natural single stellar system to associate with the host dark matter halo of a cluster [while the cluster galaxies themselves are more readily associated with subhalos, e.g. @conroy07; @purcell07icl]. Typically, the CSph ($L \gtrsim 10^{11} L_\odot$) contains many more stars than the BCG by itself [@gonz05icl] and, importantly for our purposes, it extends to a fair fraction of the cluster virial radius, and thus (in principle) allows a more global probe of the cluster potential. We will include CSphs as the cluster-halo counterparts to normal spheroidal galaxies in our work. Hence, our definition of “galaxy” here is the central luminous component of a distinct dark matter halo (although it may be a subhalo of a larger halo, as is the case for dSph satellites or non-BCG galaxies in clusters). On the opposite luminosity/size extreme from giant CSphs are the ultra-faint dSphs [e.g. @will05wI; @bel07catsdogs; @walsh07boo; @bel2009seg2]. Discovered in searches within the Sloan Digital Sky Survey dataset [SDSS, @sdssdr6], these systems can have luminosities smaller than $\sim 1000 L_\odot$ and have been shown to be the most dark matter dominated systems known [e.g. @martin07; @sandg07; @pen08dwarfdm; @geha09seg1; @simon10seg1]. The ultra-faint dSphs provide a means to study galaxy formation within the smallest dark matter halos that host stars [@stri08dmdom]. By including them in our analysis, we extend galaxy scaling relation studies to span more than eight orders of magnitude in luminosity. By exploring galaxy properties over a very wide range in luminosity, we are able to address one of the broader questions in astrophysics: how and why galaxy formation efficiency varies as a function of dark matter halo mass. 
Remarkably, observed galaxy luminosity functions and two-point clustering statistics can be explained fairly well under the assumption that $L$ (or stellar mass) maps to dark matter halo mass $M_{\rm vir}$ in a monotonic way [e.g. @krav04hod; @CW08; @moster09abund], such that the halo virial mass-to-light ratio $M_{\rm vir}/L$ is minimized near $L \sim L_*$ and rises steeply at larger and smaller $L$. An understanding of this behavior – how and why it happens – is hampered at the smallest and largest mass scales because luminosity functions become less complete and less well-sampled in the extremes. One of the goals of this work is to use galaxy MRL relations to inform the $M_{\rm vir} - L$ mapping in a way that is independent of large-scale abundance and clustering studies. The paper is organized as follows: in §\[sec:dat\], we describe the data set used for this study and the relevant sources. In §\[sec:mlrspace\] we consider the scaling relations of our data, introducing a new space (“MRL” Space) that is designed to provide a bridge between the scaling relations of galaxies and the scaling relations of dark matter halos over the full dynamic range of known galaxies ($\sim10$ orders of magnitude in $L$ and $M_{\rm vir}$). In §\[sec:curve\], we introduce a one-dimensional curve that the galaxies follow in this space, and apply this curve to canonical $\Lambda$CDM halos to map halos onto their galaxies. In §\[sec:err\] we address the scatter in the fundamental curve, in §\[sec:err2\] we address the errors and scatter in the halo mapping, and in §\[sec:conc\], we conclude. Throughout this paper we assume a $\Lambda$CDM cosmology with WMAP7 [@WMAP7] parameters of $h=0.704$, $\Omega_M=0.272$, $\Omega_\Lambda=1-\Omega_M$, $\sigma_8=0.809$, and $n_s=0.963$. Further, we use the symbol $\log$ to represent base-10 logarithms. 
A0122 & 2.83 & 2.03 & 2.15 & 11.3 & 13.7 & 13.7 & CSph & & 1\ E Bin 1 & 1.92 & 0.04 & 0.16 & 9.22 & 9.85 & 9.5 & E & 36 & 2\ VCC452 & 1.38 & -0.15 & -0.02 & 8.04 & 8.57 & 8.37 & dE & & 3\ Draco & 1.0 & -0.66 & -0.53 & 5.03 & 7.32 & 7.32 & dSph & & 4\ 47 Tuc & 1.31 & -2.66 & -2.54 & 5.20 & 5.92 & 5.92 & GC & & 5,6\ F-19 & 1.36 & -1.05 & -0.92 & 7.00 & 7.64 & 7.64 & UCD & & 7 \[tab:dat\] Data {#sec:dat} ==== The data sources for this study are varied by necessity due to the wide dynamic range covered. Table \[tab:dat\] gives the relevant parameters for the objects in this study and the sources for each. Starting with the least luminous objects that are embedded within dark matter halos ($L \lesssim 10^{8} L_\odot$), our dwarf spheroidal (dSph) data set is taken from the summary table of @wolf09 and draws from various sources for photometric properties and resolved-star kinematic measurements for Milky Way dSph galaxies. Moving up in brightness ($L \simeq 10^{8-9} L_\odot$), our “dwarf elliptical” (dE) sample is taken from the Virgo Cluster dE study of @geha03deii. Note that while dEs are not as clearly dark matter dominated as dSph galaxies within $r_{1/2}$ (see below), they are believed to be embedded in their own dark matter halos based on extended kinematic samples [@geha10]. Data for normal elliptical galaxies (E) are from @graves09i [$L \simeq 10^{10} L_\odot$] and are discussed in more detail toward the end of this section. The brightest ($L \simeq 10^{11} L_\odot$) cluster spheroid (CSph) data are from the imaging of @gonz05icl and spectra of @zar06fman. These data are also described in more detail below. We also examine two comparison populations as examples of systems that are not embedded within dark matter halos: Milky Way globular clusters (GCs, $L\simeq 10^{5} L_\odot$) and ultra-compact dwarfs (UCDs, $L \simeq 10^{6} L_\odot$). For GC photometry we use the 2003 revision of the Harris catalog [@harris96gcs] and take velocity dispersions from @PM93. 
For UCDs we use data from @mieske08ucds. Note that while the status of UCDs as large examples of purely stellar systems is debated [e.g. @evst07ucd; @goerdt08ucds; @baum08ucds; @dab09ucds; @taylor10gcucds and references therein], we find that their scaling relations are more in line with those of GCs than of similarly luminous dSphs, and therefore treat them as lacking dark matter halos below. The CSph data set stands out compared to the other data sets in two distinct ways. First, while all other data sources are in the V band, the CSph data [summarized by @zar06fman] use Cousins I-band luminosities. We convert these data to V-band using averaged colors of E galaxies from @fuk95. While this does not account for the possibility of a systematic error in $R_e$ for the CSph data points due to a different choice of band, this effect is likely to be small given the large dynamic range in this data set. Furthermore, @labarbera08 find that the fundamental plane for early-type galaxies is nearly independent of band from the optical to the K-band. Given the similarity of the stellar populations for those galaxies and the CSph, it is therefore likely that the band mismatch is not a significant effect. The second way that the CSph data set differs is that the velocity dispersions from [@zar06fman] are derived from galaxies in the cluster, rather than from the CSph (mostly ICS) light itself. This is of course not ideal, but the measurement of ICS velocity dispersions is very difficult with current spectroscopic capabilities. While this has been accomplished both in integrated light [@kel02] and in planetary nebula kinematics [e.g. @arn04] for a few clusters, there is not yet a large, homogeneous sample. Such a sample is required for generality and for comparison to our other samples, and hence we are forced to use galaxy dispersions until large direct-measurement samples become available. 
In principle this could impose a bias in our mass estimator (described below) because the ICS and cluster galaxies follow different distribution functions. We explore in more detail how this bias might affect our results in §\[sec:err\]. The normal elliptical galaxy data comprise a sample of $\sim$16,000 galaxies selected from the Sloan Digital Sky Survey (SDSS, @york00sdss) Main Galaxy Sample [@strauss02sdss], as described in @graves09i. Galaxies are selected to be passively evolving quiescent objects with no emission lines in their spectra. The individual galaxies are sorted into bins in the 3-D Fundamental Plane parameter space defined by $\sigma$, $R_e$, and $I_e$. Values reported here are the median values for each bin of galaxies. Before continuing, we summarize our galaxy terminology and the symbol codes we use when presenting each galaxy type. The CSph population of @zar06fman is presented as orange squares. The “E” or “bright E” terminology refers to the @graves09i data set and is represented as red circles of varying size such that the size of the data point is proportional to the number of galaxies in each bin. The dE or “dwarf elliptical" label refers to the @geha03deii data set and is presented as yellow diamonds of uniform size. The Milky Way dSph satellites here are represented by magenta triangles. In some cases, a distinction will be drawn between the “SDSS dSphs” and the “classical dSphs,” referring to those discovered by SDSS and those known before. The SDSS dwarfs are almost exclusively fainter, and include the “ultra-faint dSphs.” Finally, the GC and UCD populations are represented by the green and blue star-symbols and pentagons, respectively. MRL Space {#sec:mlrspace} ========= We now examine the data set described in the previous section in the context of the scaling relations of the observables. We emphasize the use of the MRL space described below to understand this data set. 
First, we provide a sample projection of the data set described in the previous section (Table \[tab:dat\]). Figure \[fig:fjplot\] plots this data set in the 2-D space of luminosity ($L$) and stellar velocity dispersion ($\sigma$)—the Faber-Jackson relation [@fjrel]. We also show best-fit power laws ($L \propto \sigma^{\gamma}$) for each of our classes of objects. We compute slopes by fitting a linear relation in log space with $\log L$ ($\log \sigma$) as the parametric variable. For the CSphs, Es, dEs, dSphs, UCDs, and GCs, this results in slopes of $\gamma=$ 1.5 (0.5), 2.6 (1.8), 6.0 (1.1), 11.1 (6.3), 2.4 (1.2), and 3.4 (1.5), respectively. In this plane the slopes increase towards smaller luminosities, suggesting a definite scaling relation (the original Faber-Jackson relation). We note, however, that the dSphs, UCDs, and GCs are mixed together in this projection, a clear drawback of interpreting these objects in this space. Further, there is structure to the E sample not fully aligned with this 2-D parameter space. The structure here is the fundamental plane [@dj87fp; @dressler87dnsig; @faber87fp] for E galaxies, distinguished from the Faber-Jackson relation by being a 3-D parameter space with the inclusion of the effective radius ($R_e$, the radius enclosing half the total luminosity) and use of the mean surface brightness $I_e = L/(2 \pi R_e^2)$ in place of the luminosity. In Appendix \[apx:altdata\] we show this data set in the fundamental plane space (and the related $\kappa$ space of @b2f1) for reference and comparison, but here we emphasize the use of a different parameter space, described below. While the fundamental plane is a valuable parameter space of observables, the connection to this space from typical dark matter scaling relations is non-trivial. In order to facilitate manifestly apparent theoretical interpretations, we introduce a set of physical variables – a mass, a size, and a luminosity – that are derived from the same observables. 
Hence, we call this space “MRL Space” for the three variables: 1. The half-light mass $M_{1/2} \equiv M(<r_{1/2})$ – the total dynamical mass within $r_{1/2}$. 2. The 3-D half-light radius $r_{1/2}$, the radius enclosing the half-luminosity $L_{1/2}$. 3. The half-luminosity, $L_{1/2}$, half of the total luminosity emitted from the galaxy (*not* necessarily the same as half the observed luminosity). We note that the luminosity variable here is defined in terms of the total luminous material in the galaxy, ignoring any attenuation that may occur as light propagates out of the galaxy. Below we describe the transformation of observables used to closely approximate this space for the data set here. A major motivation for the choice of these coordinates is the explicit use of the mass within the 3-D half-light radius as the mass variable, $M_{1/2} \equiv M(<r_{1/2})$. The adoption of this mass in particular is motivated by @wolf09, who showed that while dynamical masses within $r \ll r_{1/2}$ and $r \gg r_{1/2}$ are largely unconstrained from 1-D velocity dispersion data (due to weak constraints on the stellar velocity dispersion anisotropy), $M_{1/2}$ can be determined simply and accurately for spherical systems without knowledge of the anisotropy: $$M_{1/2} = 3 \, G^{-1} \, \sigma^2 \, r_{1/2} \,. \label{eqn:Mh}$$ @wolf09 showed that as long as the stellar velocity dispersion profile is fairly flat with radius, this mass estimator for $M_{1/2}$ is accurate for a wide range of light profiles, including the types of profiles used to fit all of the types of objects shown in Table \[tab:dat\]. Hence, for stellar systems with negligible rotational support, this formula provides a good estimate for the total dynamical mass within $r_{1/2}$ (assuming spherical symmetry). Note that Equation \[eqn:Mh\] was *not* derived using the virial theorem, but rather follows from the Jeans Equation. 
The virial theorem provides only an integral constraint on the total mass traced by a stellar system and therefore cannot be used to infer precise masses (see @merr87 Appendix A and @wolf09 §2.1). Similar estimators [e.g. @spitzer69; @ill76; @cappellari06] have the same form (by dimensional analysis), but for most of these the coefficient is calibrated by examining high-quality data and assuming that mass follows light. These calibrations are less useful for a wide variety of spheroidal galaxies because there is no reason to expect that all spheroidal galaxies are homologous. Equation \[eqn:Mh\] is derived analytically rather than empirically, and shows that there is a *particular* radius at which the mass is unbiased at any scale ($\approx r_{1/2}$). Estimators that do not use this radius must have different virial coefficients as a function of scale. Further, Equation \[eqn:Mh\] assumes neither mass-follows-light nor isotropy, and hence is suited to the range of objects with various dark matter fractions that we consider here. Further, we note that the approximation $r_{1/2} = 4 R_e/3$ is accurate for the light profiles of relevance in this paper. As shown in @ciotti91 and @limaneto99, deprojected spherical Sersic [@sersic63] profiles for a range of Sersic indices are within a few percent of this relation, and the same is demonstrated for Plummer [@plummer1911] and King [@kingprof] profiles in @spitz87plummer and @wolf09. The objects presented here are well fit by at least one of these profiles, motivating the use of the approximation. We note here that these deprojections must assume spherical symmetry, like the $M_{1/2}$ estimator described above. With these estimators chosen, the MRL space as derived from the observables consists of: 1. $M_{1/2} = 3 \, G^{-1} \, \sigma^2 \, r_{1/2} \,$. 2. $r_{1/2} = 4 R_e/3$. 3. $L_{1/2} = L/2 = I_e \pi R_e^2$. 
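As a concrete illustration, this transformation from fundamental plane observables to MRL space can be written as a short function. This is our own sketch, not code from the paper; the unit choices (km/s, pc, solar units) and the numerical value of $G$ in those units are assumptions made for the example.

```python
import math

# Gravitational constant in pc (km/s)^2 / M_sun (assumed unit system).
G = 4.301e-3

def mrl_from_observables(sigma, R_e, I_e):
    """Map fundamental plane observables to (M_1/2, r_1/2, L_1/2).

    sigma : line-of-sight stellar velocity dispersion [km/s]
    R_e   : projected (2-D) half-light radius [pc]
    I_e   : mean surface brightness within R_e [L_sun / pc^2]
    """
    r_half = 4.0 * R_e / 3.0              # deprojection: r_1/2 ~ 4 R_e / 3
    M_half = 3.0 * sigma**2 * r_half / G  # anisotropy-independent mass estimator
    L_half = math.pi * I_e * R_e**2       # L_1/2 = L/2 = pi I_e R_e^2
    return M_half, r_half, L_half
```

For a Draco-like dwarf ($\sigma \approx 10$ km/s, $R_e \approx 200$ pc) this gives $M_{1/2} \sim 2 \times 10^7 \, M_\odot$, roughly consistent with the value in Table \[tab:dat\].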
Here $R_e$ is the 2-D (projected) half-light radius, $G$ is the gravitational constant, $\sigma$ is the stellar velocity dispersion of the galaxy, and $I_e$ is the mean surface brightness within $R_e$. We note that the observables here are the same as those used for the fundamental plane, and thus this space can be viewed as a transformation of the fundamental plane space. The use of $L/2$ as $L_{1/2}$ would be invalid in the presence of significant attenuation due to dust, but the objects described here have very low gas fractions and hence likely have negligible attenuation. Thus the interpretation of $L_{1/2} = L(< r_{1/2})$ as the light emitted within $r_{1/2}$ is a reasonable one for these objects, and the above transformations of observables are close approximations to the actual MRL variables. Later, we will also consider a modified version of MRL space that we call dMRL space. In dMRL space, the mass variable is ${M_{1/2}}^{\rm DM} \equiv {M_{1/2}}- M_{\rm baryon}(<r_{1/2})$, the [*dark matter*]{} mass within $r_{1/2}$. For our purposes, the difference between ${M_{1/2}}^{\rm DM}$ and $M_{1/2}$ will only be substantial for E and dE galaxies, and is obtained by subtracting out the stellar mass within the half-light radius for these galaxies (which contain negligible gas fractions); explicitly, ${M_{1/2}}^{\rm DM} \simeq {M_{1/2}}- M_*/2$. It is important to recognize that the presence of radial gradients in $M_*/L$ due to metallicity variation could render the use of our formula for ${M_{1/2}}^{\rm DM}$ invalid by shifting the radius enclosing half the stellar mass away from $r_{1/2}$. However, as shown in @smith09, typical metallicity gradients for the E galaxies (for which $M_*$ is most important) are $\delta \log(Z)/\delta \log(r) \approx -0.1$. Using this gradient with a typical ancient (13.7 Gyr) solar metallicity stellar population from @bc03, we find $M_*/L$ shifts by 0.07 dex from $R_e$ to $0.1 R_e$. 
Hence, this is a small effect for our galaxies and we disregard it[^1]. We return to dMRL space in the next section. In Figure \[fig:mlr2d\] we plot the data set described in §\[sec:dat\] transformed into MRL space and projected along the coordinate axes. For each of the projections, we have also plotted lines to reflect scalings of interest. The left panel represents $L_{1/2}$ as a function of $r_{1/2}$. The galaxies show a trend of increasing luminosity with increasing $r_{1/2}$, but occupy a relatively small fraction of the available detection space. The GCs and UCDs, meanwhile, are much more scattered in this plot, and are consistently smaller than the dSphs at similar luminosities (i.e., higher surface brightness); as described below, we interpret this (along with similar behavior in the other projections) as a clear sign that they are separate populations. The dashed black line in the left panel is a line of constant surface brightness ($L_{1/2} \propto r_{1/2}^2$), $\mu_V=30$ mag arcsec$^{-2}$. Below this surface brightness limit, detection bias in this plane becomes significant for MW dSphs [@kop08; @walsh09]. This likely biases the observed $r_{1/2} - L_{1/2}$ relation to small $r_{1/2}$ at the faint end [@bull09stealth]. We discuss the effect of this bias on our parameterization of the $r_{1/2} - L_{1/2}$ relation in §\[subsec:mrlcurve\]. In the middle panel we show a projection into the $r_{1/2} - M_{1/2}$ space. We include lines of constant mass density ($M_{1/2} \propto r_{1/2}^3$, black dash-dotted line) and constant *surface* mass density ($M_{1/2} \propto r_{1/2}^2$, black dotted line), with normalizations arbitrarily set to bracket this data set. Because spherical geometry is assumed in almost all cases, a slope of 2 is more properly characterized as a 3-D density profile that varies as $\rho \propto r^{-1}$ (somewhat cuspier than constant density). 
A slope of 3, meanwhile, is the scaling expected if all galaxies had a single constant density within their half-light radii. This slope has been noted previously at some scales [@gentile09; @napo10cendm; @walker10dsphsp]. The fact that the dSph galaxies lie above the constant density line (black dash-dotted) that is normalized to intersect the most massive cluster population suggests that they are slightly denser than galaxy clusters (but not that much denser) at their half-light radii. For a figure that explicitly compares the implied mean density of these objects, see Appendix \[apx:altdata\]. Finally, in the right panel we show $M_{1/2}$ vs. $L_{1/2}$, and a mass-follows-light line ($M_{1/2} \propto L_{1/2}$) normalized at $M/L = 3$ in solar units to reflect the mass-to-light ratio of a uniform, fairly old stellar population. Note that the deviation of a population from $M_{1/2} \propto L_{1/2}$ is equivalent to the “tilt” that is often discussed in the context of the fundamental plane. It is clear from this figure that the CSphs and dSphs deviate from this scaling substantially owing to their high dark matter fractions, while the other populations are more consistent, although the Es do show the well-known tilt, and the UCDs show a possible tilt (discussed below). Figure \[fig:mlr3d\] shows the same information, now presented in a 3-D representation. The red plane outlined with a solid line is the @graves09ii fundamental plane (transformed into MRL space). The blue plane outlined with a dashed line is a plane with mass proportional to luminosity with $M_{1/2} = 3 L_{1/2}$ and is indicative of the plane we would expect uniformly old, purely stellar systems to lie within. We note that, in fundamental plane space, this last scaling is sometimes called the “virial plane” (even though systems can be in virial equilibrium regardless of whether or not they lie within this plane). 
In MRL space it is manifestly apparent that this plane is defined by the assumption that mass-follows-light with a fixed $M/L$. Another feature revealed by examination of the populations in Figures \[fig:mlr2d\] and \[fig:mlr3d\] is a distinct separation between dSphs (magenta) along one sequence and UCDs/GCs (blue/green) along another [a similar situation is noted by @forbes08 in the K-band]. Specifically, the UCDs and GCs cluster more closely around the $M_{1/2} \propto L_{1/2}$ plane (shown as dashed, transparent blue) while the dSphs (at similar luminosity) peel sharply up from it, reflecting a significant dark matter component and larger sizes. This difference is clearly visible in the two-dimensional projections of MRL space shown in Figure \[fig:mlr2d\], and manifests itself as a wishbone-shaped bifurcation of the spheroidal sequence in Figure \[fig:mlr3d\]. We also note here that the UCD sample seems to show a slight tilt from the $M_{1/2} \propto L_{1/2}$ relation, most clearly apparent in the right panel of Figure \[fig:mlr2d\]. This could be a sign of a very small amount of dark matter, but could also reflect systematic variation in the $M_*/L$ ratio due to stellar effects. These objects have uniquely large luminosity densities, and hence are the most likely places to show changes in star formation conditions [@dab09ucds] or simply an extension of scalings that exist everywhere (such variation for the Es is described in more detail in §\[sec:err\]). Alternatively, the tilt may be due to dynamical evolution or more complex formation scenarios [e.g. @goerdt08ucds; @taylor10gcucds]. Regardless, the significance of this tilt is not clear from this data set (although it is stronger than for the GCs), and the UCDs and GCs are quite distinct from the dSph sample. Given the observation that the MW dSphs are dark matter-dominated [@sandg07; @stri08dmdom; @simon10seg1], and GCs have $M/L$ consistent with purely stellar systems [e.g. 
@PM93], we consider whether there is a clean separation between these systems based on the MRL space parameters. We separate the dSphs from the GCs by finding a plane that lies perpendicular to the best least-squares fit through all of the dSphs and GCs and perpendicular to the best-fit line through the dSph sequence; we then offset the plane until it evenly divides the two populations, giving the plane rendered in Figure \[fig:sepplot\]. This plane is a convenient empirical way to determine if an object is a faint dSph or a globular cluster. In the MRL space for our data set, the best-fit separation plane is given by $$\label{eqn:mlrsepeq} 0.34 \log{M_{1/2}} - 0.50 \log{L_{1/2}} + 0.79 \log{r_{1/2}} = -1.35.$$ Specifically, objects that lie at lower $M_{1/2}$, lower $r_{1/2}$, or higher $L_{1/2}$ are GCs, while others are galaxies. This same relation can easily be transformed into fundamental plane space, providing the separation plane $$0.68 \log{\sigma} - 0.50 \log{I_e} + 0.13 \log{R_e} = -3.23,$$ such that objects with lower $\sigma$, higher $\log{I_e}$, or lower $\log{R_e}$ are GCs, while others are galaxies. The fact that this single plane easily separates the GCs and dSphs in MRL space implies that these are distinct classes of objects (see also the discussion in Appendix \[apx:UCDs\] - the arguments there for UCDs also apply to GCs). It is possible that future studies of faint/low surface brightness GCs may change the location of this separation plane, or even fill in the gap, rendering the plane completely arbitrary. But for this data set, the classes are completely separated by the plane of Figure \[fig:sepplot\]. Further, we note that this plane implies that a galaxy/cluster separation using a single variable [e.g. @gil07] is not sufficient to distinguish these populations, as is apparent from Figure \[fig:mlr2d\]. All three dimensions are necessary to account for the most extreme objects. 
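The separation criterion of Equation \[eqn:mlrsepeq\] is simple to evaluate directly. The sketch below is our own illustration rather than code from the paper; it assumes masses and luminosities in solar units and $r_{1/2}$ in kpc (our reading of the conventions in Table \[tab:dat\]).

```python
import math

def is_galaxy(M_half, L_half, r_half):
    """Evaluate the best-fit MRL separation plane (Eq. [eqn:mlrsepeq]).

    M_half, L_half in solar units; r_half in kpc (assumed units).
    Returns True on the dSph/galaxy side of the plane; False on the
    GC/UCD side (lower M_1/2, lower r_1/2, or higher L_1/2).
    """
    lhs = (0.34 * math.log10(M_half)
           - 0.50 * math.log10(L_half)
           + 0.79 * math.log10(r_half))
    return lhs > -1.35
```

A Draco-like point ($M_{1/2} \sim 2\times10^7 \, M_\odot$, $L_{1/2} \sim 10^5 \, L_\odot$, $r_{1/2} \sim 0.3$ kpc) falls on the galaxy side, while a 47 Tuc-like point ($M_{1/2} \sim 8\times10^5 \, M_\odot$, $r_{1/2} \sim 3$ pc) falls on the GC side.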
Additionally, we include UCDs in Figure \[fig:sepplot\] and find that they also lie clearly separated by the plane, even though they are *not* included in the determination of the best-fit separation plane. This suggests that they are in the same class as GCs, and not on the galaxy sequence. However, given the tilt discussed above, we cannot discount the possibility that this is simply due to a relative rarity of the most massive UCDs, which would bridge the gap. Given that GCs and UCDs both lack clear evidence for dark matter and sit in a distinct region of MRL space, we are inclined to treat them as stellar systems rather than “galaxies”, which we define operationally as stellar systems that are bound to a dominant dark matter halo (as discussed in §\[sec:intro\]). Alternatively, a second scenario is possible where UCDs do contain significant dark matter. If this is the case, then an interesting implication follows: there would need to be a dichotomy in galaxy formation efficiency in dark matter halos of a fixed virial mass. Specifically, as shown in Appendix \[apx:UCDs\], most UCDs are consistent with no dark matter given the uncertainties in the expected stellar mass-to-light ratios. If we force a stellar mass-to-light ratio of 2 (such that their dark matter densities are comparable to their dynamical mass densities) then the implied dark matter densities are incredibly high – comparable to the central densities of the most massive galaxy clusters ($M_{\rm vir} \sim 10^{16} M_\odot$). dSphs of similar luminosities sit in $M_{\rm vir} \sim 10^9 M_\odot$ halos. UCD dark matter mass fractions would need to be extremely fine-tuned (and different from object to object) in order to avoid a dichotomy in galaxy formation efficiency at a fixed dark matter halo mass – a dichotomy that is not seen for any other type of spheroidal system. This is an interesting possibility and may call for more investigation, as such a result would be difficult to explain in $\Lambda$CDM. 
Nevertheless, we regard the above scenario to be unlikely, and adopt the simpler interpretation that UCDs are purely stellar systems that occasionally have unusually high $M_*/L$ due to unique star formation conditions or dynamical evolution. From here on we omit the GCs and UCDs from consideration, restricting our analysis to systems that clearly contain dark matter halos of their own. In the alternative scenario where UCDs are to be regarded as galaxies, our approach could be viewed as restricting ourselves to the simpler dSph “branch” of the MRL relation. Once we remove the UCDs and GCs, we are left with a galaxy sequence in Figures \[fig:mlr2d\] and \[fig:mlr3d\] that scatters about a 1-D relation through MRL space. In the next section we work towards characterizing this 1-D curve. Figure \[fig:mlr\_mtol\] provides yet another representation of the MRL data, now presented as the dynamical half-light mass-to-light ratio $\Upsilon_{1/2} \equiv M_{1/2}/L_{1/2}$ (in $M_\odot/L_\odot$) plotted as a function of each of the MRL variables individually. Along the top of each panel we show characteristic observational uncertainties for our galaxies of each type across the MRL sequence. We discuss these errors in the context of measuring scatter in the MRL relation in §6. Each panel in Figure \[fig:mlr\_mtol\] clearly reveals a minimum $\Upsilon_{1/2} \simeq 3$ that spans a broad regime of spheroidal galaxies, from $M_{1/2} \simeq 10^{9-11} M_\odot$ (left); $r_{1/2} \simeq 1-10$ kpc (middle); and $L_{1/2} \simeq 10^{6-10} L_\odot$ (right). As discussed by @wolf09 in the context of a similar figure in their paper (Figure 4), the dramatic increase in dynamical half-light mass-to-light ratios at both smaller and larger scales is likely indicative of a decrease in the efficiency of galaxy formation in the smallest and largest dark matter halos – as discussed above, the influence of radial variations in $M_*/L$ is $\sim 0.1$ dex, far less than the variation observed here. 
For the biggest, brightest, most massive galaxies, the increase in $\Upsilon_{1/2}$ implies a sharp threshold for galaxy formation in luminosity (not in mass) at $L_{1/2} \simeq 10^{11} L_\odot$, as shown by the strong break in the right panel of Figure \[fig:mlr\_mtol\]. The strong sensitivity to luminosity suggests that baryonic processes are responsible for this transition. Meanwhile, the smallest, faintest, least massive galaxies seem to exhibit a sharp rise at a particular mass scale (not luminosity scale) near $M_{1/2} \simeq 10^6 M_\odot$ (left panel of Figure \[fig:mlr\_mtol\]). This indicates that they are more tied to the size of their potential wells than to star formation (although this does not preclude an interaction between the two, e.g. @dekel03fundline [@woo08]). We connect these scaling trends to dark matter halo virial masses and relate them broadly to galaxy formation in Sections \[sec:halomatch\] and \[sec:curve\]. Fundamental Curve {#sec:curve} ================= It is evident in Figures \[fig:mlr2d\] and \[fig:mlr3d\] that CSphs, Es, dEs, and dSphs seem to curve through MRL space along a 1-D sequence [see also @gra06; @gra08 for dEs and Es]. We refer to this sequence as the “fundamental curve” and we plot analytic representations of this curve in the left panel of Figure \[fig:fcurve3d\] along with the associated data points. We discuss these analytic curve representations in Sections \[subsec:mrlcurve\] and \[subsec:dmrlcurve\]. It is important to note that the existence of this 1-D curve does not imply that these objects are a single-parameter family, nor that the curve is a more suitable fit than a higher-dimensional construct. As the fundamental plane [@graves09ii] for Es and the fundamental manifold [@zar06fman] show, galaxies do show systematic variation along multiple directions in fundamental plane or MRL space. 
We do not aim to compare the statistical significance of these relations to the fundamental curve, as the applications of 1-D and 2-D relations are quite different. Instead, the best way to think of the fundamental curve is as the direction of largest variation of this set of dispersion-supported galaxy properties. Thus, it is useful as the first-order scaling relation, and hence the first priority is to understand galaxies’ positions along the curve. The other significant scalings are then encoded in the “intrinsic scatter” about the fundamental curve (discussed and quantified in §\[sec:err\] and §\[sec:err2\]).

The right panel of Figure \[fig:fcurve3d\] shows the same data, but now in dMRL space. Recall that the only difference between dMRL space and MRL space is that the dynamical mass within the half-light radius, $M_{1/2}$, is replaced by the dark matter mass within the same radius: $M_{1/2} \rightarrow M_{1/2}^{\rm DM}$. The half-light dark matter mass is determined by subtracting the stellar mass of each system via $M_{1/2}^{\rm DM} = M_{1/2} - M_*/2$. For the E galaxies of @graves09ii we use stellar masses derived from the estimates of @gall05mstar [see @graves10iii for more details]. For the dE sample of @geha03deii, explicitly computed stellar masses are unavailable, so we assign them stellar masses from their observed integrated colors using the prescription of @bell03. For the CSphs and dSphs we assume $M_{1/2} = M_{1/2}^{\rm DM}$, because the dynamical mass-to-light ratios in these systems are very large.

The motivation for exploring dMRL space and its fundamental curve is that we would like to use the dark matter mass density within $r_{1/2}$ as an estimator for the halo virial mass. With a virial mass estimate in hand, the fundamental curve relation can be used to provide an approximate, average relationship between halo virial mass ($M_{\rm vir}$) and galaxy luminosity ($L$).
This necessitates comparison to a 1-D dMRL relation, as halo virial masses are a one-parameter family. We discuss this effort in §\[sec:halomatch\].

MRL Curve Models {#subsec:mrlcurve}
----------------

We have chosen to quantify the fundamental curve by treating ${r_{1/2}}$ as the parametric variable. We fit two relations, one in the ${r_{1/2}}- {L_{1/2}}$ plane and another in the ${r_{1/2}}- {M_{1/2}}$ plane. The derived pair of relations (RL and RM) define our fundamental curve relation for the three MRL variables. We also fit the curve directly in three dimensions for some models, but the derived parameters were effectively identical, and hence we use the simpler two dimensional fits for clarity. We now describe our choice of functional forms for modeling these relations, followed by a set of five best-fit models for the fundamental curve, distinguished by slight differences in the fitting procedure and the choice of ${M_{1/2}}^{\rm DM} = {M_{1/2}}- M_*/2$ as the mass variable in place of the raw ${M_{1/2}}$.

For the ${r_{1/2}}- {L_{1/2}}$ relation, we define ${\tilde{r}}_L \equiv \log ({r_{1/2}}/r_L)$ and ${\tilde{L}}\equiv \log ({L_{1/2}}/L_0)$ and employ a fit following the empirically-motivated form $$\begin{aligned}
{\tilde{L}}= {\tilde{r}}_L \, \frac{a+b}{2} + \left[ s - {\tilde{r}}_L (a-b) \right] \frac{\arctan({\tilde{r}}_L / w)}{\pi}.
\label{eqn:arctanrvsl}\end{aligned}$$ Equation \[eqn:arctanrvsl\] has the property of smoothly transitioning from an asymptotic slope $a$ (such that $L_{1/2} \propto r_{1/2}^a$) for $r_{1/2} \ll r_L$ to $b$ (i.e. $L_{1/2} \propto r_{1/2}^b$) for $r_{1/2} \gg r_L$, with the width of the transition zone at $r_L$ defined by $w$. The parameter $L_0$ is then the characteristic luminosity at $r=r_L$, and the final parameter $s$ determines the size of a luminosity offset that occurs in the transition region (e.g. the break in luminosity at $\log(r_{1/2}) \approx 0$ in the upper-middle panel of Figure \[fig:fcurve2d\]).
This fitting function simply yet generically captures the behavior of a data set that has distinct asymptotic power laws and a smooth transition region between them. For the ${r_{1/2}}- {M_{1/2}}$ relation we utilize a fitting function with a form identical to Equation \[eqn:arctanrvsl\]: $$\begin{aligned}
{\tilde{M}}= {\tilde{r}}_M \, \frac{\alpha + \beta}{2} + \left[ \sigma - {\tilde{r}}_M (\alpha - \beta) \right] \frac{\arctan({\tilde{r}}_M / \omega)}{\pi},
\label{eqn:arctanrvsm}\end{aligned}$$ where ${\tilde{r}}_M \equiv \log ({r_{1/2}}/r_M)$, so that $r_M$ defines the transition radius and ${\tilde{M}}\equiv \log ({M_{1/2}}/M_0)$ defines a characteristic mass scale $M_0$ at $r=r_M$. Using this method, the $M_{1/2}$ vs. $L_{1/2}$ relations are generated by eliminating our chosen parametric variable $r_{1/2}$ in Equations \[eqn:arctanrvsl\] and \[eqn:arctanrvsm\]. For comparison, we also directly fit the ML relation using the form of Equation \[eqn:arctanrvsl\], and find very similar relations to those shown below. Hence, the results presented here are likely not very sensitive to the choice of $r_{1/2}$ as the parametric variable.

Motivated by the fact that we are interested in understanding each type of galaxy universally (CSph, E, dE, and dSph), we weight the data points such that each of the four groups has equal weight (i.e. the weight for each point is $1/N_{type}$, where $N_{type}$ is the number of objects of that type). Furthermore, for the E data set of @graves09ii, we weight each point by the relative fraction of galaxies in that particular bin so as to properly represent the full SDSS population rather than the choice of bin locations. With these weights for the data set, a non-linear least-squares fit for the parameters in Equations \[eqn:arctanrvsl\] and \[eqn:arctanrvsm\] (using a Levenberg–Marquardt algorithm) fully determines the one-dimensional relations.
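The shared functional form of Equations \[eqn:arctanrvsl\] and \[eqn:arctanrvsm\] is straightforward to implement. The sketch below (with illustrative parameter values, not our fits) verifies numerically that the model approaches slope $a$ well inside the transition radius and slope $b$ well outside it:

```python
import math

def arctan_broken_power_law(log_r, log_r0, log_y0, a, b, w, s):
    """Shared form of Eqs. [eqn:arctanrvsl] / [eqn:arctanrvsm]:

        y~ = r~ (a + b)/2 + [s - r~ (a - b)] arctan(r~ / w) / pi

    with r~ = log10(r) - log10(r0).  Returns log10(y) = log10(y0) + y~.
    The slope tends to a for r << r0 and to b for r >> r0; w sets the
    transition width and s a vertical offset across the transition.
    """
    rt = log_r - log_r0
    return (log_y0 + rt * (a + b) / 2.0
            + (s - rt * (a - b)) * math.atan(rt / w) / math.pi)

# Illustrative parameters (not our fits): slope 1.7 inside, 0.3 outside
f = lambda lr: arctan_broken_power_law(lr, 0.0, 8.0, 1.7, 0.3, 0.3, 0.0)
slope_inner = f(-9.0) - f(-10.0)   # approaches a = 1.7 far below r0
slope_outer = f(10.0) - f(9.0)     # approaches b = 0.3 far above r0
```

Eliminating the parametric variable then amounts to evaluating both the RL and RM versions of this function on a common grid of $\log r_{1/2}$.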
With the models for the fundamental curve and this fitting procedure in place, we call our empirically fit fundamental curve model “MRL-1,” with best-fit parameters given in the first column of Table \[tab:curveparams\]. The relation is shown in projection on the MRL axes as a blue dashed line in the top panels of Figure \[fig:fcurve2d\], along with data points for the individual galaxies and their associated observational error bars (error bars are discussed in detail in §\[sec:err\]). We show the same curves and data points as 3-D representations in Figure \[fig:fcurve3d\], with MRL-1 shown as the dashed green line in the left panel.

The black dotted line in the upper left panel of Figure \[fig:fcurve2d\] shows the surface brightness detection limit for dSphs, $\mu_V = 30$ mag arcsec$^{-2}$ [@kop08; @walsh09]. Given that the detection limit indicates that the least luminous dSph galaxies are at the edge of detectability, it is plausible that the shallow slope in RL at faint $L_{1/2}$ is due to a selection effect. The “stealth galaxies” of @bull09stealth, if present, could substantially alter the slope at the faint end. Thus, we also include an “MRL-2” model in which the $s$ parameter is forced to be 0, causing the faint-end slope to trace the full dSph population instead of being strongly driven by the faintest of them. This model is shown as the solid black line in the left panel of Figure \[fig:fcurve3d\] and the upper panels of Figure \[fig:fcurve2d\], and the best-fit parameters are given in the second column of Table \[tab:curveparams\]. Given the fact that most of the faint dSphs skirt the edge of this detection limit [e.g. @walsh09], we consider the MRL-2 model to be the more robust choice for characterizing the MRL fundamental curve.
The fit parameters listed in Table \[tab:curveparams\] for the MRL-2 model reveal that the smallest galaxies with $L \lesssim L_0/2 \simeq 4\times 10^9 \, L_\odot$ follow a mass-luminosity relationship that varies weakly with luminosity $$M_{1/2} \propto L_{1/2}^{\alpha/a} \propto L_{1/2}^{0.30} \, ,$$ while the largest galaxies ($L \gtrsim 4\times 10^9 \, L_\odot$) obey a steep mass-luminosity relationship with $$M_{1/2} \propto L_{1/2}^{\beta/b} \propto L_{1/2}^{3.2} \, .$$ Both regimes are clearly very far from mass-follows-light scalings[^2] (i.e., $M_{1/2} \propto L_{1/2}$). For the smallest galaxies, large changes in luminosity correspond to fairly minor changes in half-light mass. Conversely, for the largest galaxies, a factor of $\sim 2$ change in luminosity corresponds to more than an order of magnitude change in half-light mass. This is the same effect noted in §\[sec:mlrspace\] (with regard to Figure \[fig:mlr\_mtol\]), and without any appeal to theory it suggests that two qualitatively different processes act to suppress baryon conversion into stars along the transition from small galaxies to large. The smallest galaxies seem to be limited by the dark matter mass itself (e.g., by the potential well depth), while the largest galaxies seem to be baryon limited (e.g., by the supply of cool gas for star formation).

Also of interest is the sharp transition in the RM relation at $\log(r_{1/2}) \simeq 0.5$ and $\log(M_{1/2}) \simeq 9$, where the half-light mass suddenly jumps with increasing radius. This transition scale corresponds closely to the point where the dynamical mass-to-light ratios of galaxies reach their minimum (Figure \[fig:mlr\_mtol\]) and thus where baryons contribute substantially to the mass compared to dark matter. It is possible that this feature is enhanced or even caused by the effects of baryonic contraction [@blumenthal86], as discussed in the context of dark matter masses below.
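The quoted exponents follow by eliminating ${r_{1/2}}$ between the asymptotic power laws ($L \propto r^{a}$, $M \propto r^{\alpha}$ on the faint branch, and likewise $b$, $\beta$ on the bright branch), using the MRL-2 values from Table \[tab:curveparams\]:

```python
# MRL-2 asymptotic slopes from Table [tab:curveparams]
a, b = 4.77, 0.44         # RL relation: L ∝ r^a (faint), r^b (bright)
alpha, beta = 1.44, 1.42  # RM relation: M ∝ r^alpha, r^beta

faint_exponent = alpha / a   # M_1/2 ∝ L_1/2^0.30 (smallest galaxies)
bright_exponent = beta / b   # M_1/2 ∝ L_1/2^3.2  (largest galaxies)
```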
dMRL Curve Models {#subsec:dmrlcurve}
-----------------

Recall that the dMRL relation is distinguished from the MRL relation by the use of ${M_{1/2}}^{\rm DM} = {M_{1/2}}- M_*/2$ as the mass variable in place of the raw dynamical ${M_{1/2}}$. The fit to the data in this space using Equations \[eqn:arctanrvsl\] and \[eqn:arctanrvsm\] is our “dMRL-1” model. Trying a variety of starting values for the parameters revealed that $r_M$ is not well-constrained by the data and often would end up outside the data set regardless of the starting value. Hence, we used the RL relation to set the scale, through the constraint $r_M = r_L$. Using this constraint, the final parameters are given in the third column of Table \[tab:curveparams\] and plotted in the right panel of Figure \[fig:fcurve3d\] and the lower panels of Figure \[fig:fcurve2d\] as the red dashed line.

As Table \[tab:curveparams\] shows, the dMRL-1 model best-fit parameters have $\alpha \approx \beta$, with $\sigma$ preferring 0. Equation \[eqn:arctanrvsm\] for dMRL-1 reduces to a power law for $\alpha = \beta$ and $\sigma=0$, so the ${r_{1/2}}- {M_{1/2}}^{\rm DM}$ relation turns out to be very close to a single power law (linear in ${\tilde{M}}$ and ${\tilde{r}}_M$). Hence, the RM relation can be modeled as a simple power law $$\begin{aligned}
{M_{1/2}}^{\rm DM} = M_0 (r_{1/2}/r_M)^{\alpha},
\label{eqn:powerlawrvsm}\end{aligned}$$ where $r_M$ is determined from the dMRL-1 fit to simplify comparisons. The value of the slope $\alpha = 2.33$ is also given in Table \[tab:curveparams\]. The lower-middle panel of Figure \[fig:fcurve2d\] compares this fit (red dotted line) to dMRL-1, showing an insignificant difference. Thus, in the second dMRL model (dMRL-2), we adopt Equation \[eqn:powerlawrvsm\] as the model for the RM relation, together with the RL model of MRL-2, selected due to the likely presence of the stealth-galaxy selection effect.
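The dMRL-2 bookkeeping can be sketched in a few lines, using Equation \[eqn:powerlawrvsm\] with the dMRL-2 parameters of Table \[tab:curveparams\] as defaults; the stellar-subtraction example values below are hypothetical, not measurements from our sample:

```python
def half_light_dm_mass(m_half, m_star=0.0):
    """M_1/2^DM = M_1/2 - M_*/2; for dark-matter-dominated systems
    (dSphs, CSphs) the stellar term is negligible.  Masses in Msun."""
    return m_half - 0.5 * m_star

def power_law_rm(r_half_kpc, log_m0=8.50, log_rm=-0.04, alpha=2.32):
    """Eq. [powerlawrvsm]: M_1/2^DM = M_0 (r_1/2 / r_M)^alpha,
    with dMRL-2 parameters (Table [tab:curveparams]) as defaults."""
    return 10**log_m0 * (r_half_kpc / 10**log_rm)**alpha

# Doubling r_1/2 raises the enclosed dark matter mass by 2^2.32 ~ 5x:
ratio = power_law_rm(2.0) / power_law_rm(1.0)

# Hypothetical E-like system: M_1/2 = 4e11, M_* = 3e11 -> M^DM = 2.5e11
m_dm = half_light_dm_mass(4e11, 3e11)
```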
We tabulate the best-fit parameters for this model in the second-to-last column of Table \[tab:curveparams\], and plot it as the black solid line in the lower panels of Figure \[fig:fcurve2d\] and the right panel of Figure \[fig:fcurve3d\].

In the RM relation of the dMRL space, we include for comparison the @walker10dsphsp relation derived using Milky Way dSphs for the faint end and spiral galaxy rotation curves for the galaxy regime (black dash-dotted line on the lower-middle panel of Figure \[fig:fcurve2d\]). We note here that while the @walker10dsphsp non-dSph sample is a very different set of galaxies that may obey different scaling relations from our sample[^3], it is fairly close to our relation in the galaxy regime. However, the relation steepens with the inclusion of Es and CSphs, so our derived slope is somewhat steeper than a $M^{\rm DM} \propto r^2$ relation. Motivated partly by this $M^{\rm DM} \propto r^2$ result on the faint end, as well as the greater uncertainty in $M_{1/2}^{\rm DM}$ for the dEs and Es (see §\[sec:err\] and \[sec:err2\]), we consider a third dMRL model (dMRL-3). In this model we use Equation \[eqn:arctanrvsm\] for the RM relation, but we force the faint-end slope to 2 and set the normalization to pass through the dSphs. We then force the $r_M$ scale to match $r_L$ (from MRL-2), set $\omega=0.01$ to ensure a small transition region, and fit the remaining parameters. We also continue to use the RL model of MRL-2 for dMRL-3. In the last column of Table \[tab:curveparams\], we show the best-fit parameters of this model, and in the lower panels of Figure \[fig:fcurve2d\] and the right panel of Figure \[fig:fcurve3d\], we plot it as the green dashed line.

Before continuing, we note a discrepancy for the E galaxies in the dMRL models, most apparent in the lower-middle panel of Figure \[fig:fcurve2d\] – the Es tend to have higher ${M_{1/2}}^{\rm DM}$ than the best-fit relations.
Recall, however, that the primary motivation for exploring the ${M_{1/2}}^{\rm DM}$ as a parameter is that it will allow us to map galaxy properties to an underlying dark matter halo mass. This mapping is hindered somewhat by the contraction of baryons. An anomalously high dark matter mass for the galaxies with the highest baryonic-to-dark matter ratio is precisely what is expected if dark matter halos contract due to central condensation of baryonic matter [@blumenthal86]. Thus, we might expect an offset in the scaling relations of galaxies at the scale where baryonic condensation has been the most significant. In §\[sec:halomatch\] we estimate the degree to which baryonic contraction has increased the ${M_{1/2}}^{\rm DM}$ masses in our E galaxy sample and show that this increase approximately accounts for the discrepancy. Further, as discussed more in §\[sec:err\], a power law for the RM relation is in general more robust to the problem of a non-monotonic mapping of baryonic galaxies to dark matter halos. Thus, use of a power law for the RM model is a reasonable choice for the exercise of halo profile matching (described in §\[sec:halomatch\]), while still being a decent fit to this data set. In the RL space, as described above for MRL-2, it is more appropriate to use the $s=0$ model so as to prevent the stealth-galaxy selection effect from strongly biasing the faint-end slope. Thus, we adopt dMRL-2 as our fiducial model in the latter sections of this paper.

| Parameter | MRL-1 | MRL-2 | dMRL-1 | dMRL-2 | dMRL-3 |
|:----------------------|:-----:|:-----:|:------:|:------:|:------:|
| Mass Variable | $M_{1/2}$ | $M_{1/2}$ | $M_{1/2}^{\rm DM}$ | $M_{1/2}^{\rm DM}$ | $M_{1/2}^{\rm DM}$ |
| RM Model | Eqn. \[eqn:arctanrvsm\] | Eqn. \[eqn:arctanrvsm\] | Eqn. \[eqn:arctanrvsm\] | Eqn. \[eqn:powerlawrvsm\] | Eqn. \[eqn:arctanrvsm\] |
| $\log(r_L/{\rm kpc})$ | $-0.04$ | 0.54 | $-0.04$ | $-0.04$ | $-0.04$ |
| $\log(L_0/L_\odot)$ | 7.54 | 9.95 | 7.54 | 7.54 | 7.54 |
| $a$ | 1.67 | 4.77 | 1.66 | 1.66 | 1.66 |
| $b$ | 0.26 | 0.44 | 0.26 | 0.26 | 0.26 |
| $w$ | 0.32 | 0.42 | 0.32 | 0.32 | 0.32 |
| $s$ | 6.58 | 0 | 6.58 | 6.58 | 6.58 |
| $\log(r_M/{\rm kpc})$ | 0.09 | 0.09 | $-0.04$ | | $-0.04$ |
| $\log(M_0/M_\odot)$ | 9.12 | 9.12 | 8.40 | 8.50 | 8.32 |
| $\alpha$ | 1.44 | 1.44 | 2.33 | 2.32 | 2.00 |
| $\beta$ | 1.42 | 1.42 | 2.28 | | 2.27 |
| $\omega$ | 0.27 | 0.27 | 0 | | 0.01 |
| $\sigma$ | 3.13 | 3.13 | 0 | | 0.69 |

Table: Best-fit fundamental curve parameters. \[tab:curveparams\]

Scatter and Uncertainty in the Fundamental Curve {#sec:err}
------------------------------------------------

It is interesting to ask about the degree of intrinsic scatter within the fundamental curve that was defined in the previous section, but in order to do that we need to estimate the observational uncertainties on the MRL variables. Representative error bars for $M_{1/2}$, $r_{1/2}$, and $L_{1/2}$ are shown in Figure \[fig:mlr\_mtol\] for several galaxy types, and individual error bars for each data point, including those on $M_{1/2}^{\rm DM}$, are shown in Figure \[fig:fcurve2d\]. Note that for the faint dSphs and the CSphs, the measured mass-to-light ratios are much larger than any reasonable stellar population (e.g. $M_{1/2}/L_{1/2} \gg 1$). They are therefore dark matter-dominated ($M_{1/2} \approx M_{1/2}^{\rm DM}$), and hence the $M_{1/2}$ and $M_{1/2}^{\rm DM}$ errors are similar to each other. For the dE and E galaxies, however, the mass-to-light ratios are closer to those expected of stellar populations, so a significant amount of mass within $r_{1/2}$ is in stars rather than dark matter, and the $M_{1/2}^{\rm DM}$ errors are larger for these objects due to the errors on $M_*$. For the E galaxies, the uncertainty in $M_*$ due to stellar population modeling is the dominant source of error.
While the observational errors play a role in general, for the large stacked E data sets used here the errors are certainly dominated by systematics, of which there are three major components [@graves10iii]. First, there is variation due to the method used to derive $M_*/L$ (e.g. integrated colors or particular spectral features). As shown in @graves10iii, this contributes a $1\sigma$ scatter of $\sim 0.08$ dex. Second, the assumed star formation history affects the inferred stellar mass, at the level of 0.15 dex for this sample [@graves10iii]. Third, the choice of IMF has a major effect on the inferred $M_*$: for a (conservative) comparison of a Chabrier to a Kroupa IMF [@long09], the inferred $M_*$ varies by 0.26 dex. More detailed studies of individual objects can potentially reduce the systematics [e.g. @cappellari06], but the analysis above is appropriate for the large data set in use here. Thus we derive error bars by adding the above three components in quadrature, yielding a factor of 2 uncertainty in the $M_*$ used for mapping $M_{1/2}$ to $M_{1/2}^{\rm DM}$. This error on the Es is shown in Figure \[fig:fcurve2d\] as the error bar on $M_{1/2}^{\rm DM}$, and we also adopt it in the next sections as the error for $M_{1/2}^{\rm DM}$.

The error bars shown in Figure \[fig:fcurve2d\] account for the uncertainty in measuring the dark matter mass as it is today, but do not include the systematic uncertainty that remains in our ability to map an observed dark matter density to the virial properties of that dark matter halo. Baryonic contraction [@blumenthal86] in particular can make the mapping between density and global halo mass quite difficult. We expect this uncertainty to be particularly important for E galaxies because they have the highest baryon fractions. We discuss this effect in more detail in §\[sec:halomatch\].

For the dE sample, $M_*$ is inferred from SDSS colors as described in §\[sec:halomatch\].
Errors can be estimated from this procedure by comparing the inferred $M_*$ for each band. Using this procedure, the scatter in the inferred $M_*$ is about $30\%$, comparable to the observational errors for ${M_{1/2}}$. This estimate has its own set of systematic errors like those described above – we do not quantify this here due to the smaller sample size (and hence larger random errors) and more simplistic method compared to the Es. Regardless, the error bars are large enough to be consistent with the fundamental curve. For the CSph population, the uncertainty in ${M_{1/2}}$ is difficult to characterize, as it is primarily due to the use of the galaxies to trace the velocity dispersion instead of the ICS. The effect this will have is not as well understood, as represented by the “?” in the error bar of Figure \[fig:mlr\_mtol\]. The simulations of @dolag10csphsims find a disagreement in $\sigma$ of $\sim 20\%$ between galaxies and the ICS component (i.e. approximately 50% in mass), although this is not necessarily representative of the clusters in our sample. In order to broadly represent this uncertainty, we have assumed a factor of 2 uncertainty on $\sigma$ in deriving the error bars in the next section. Adopting these observational error bars, it is clear from the top panel of Figure \[fig:fcurve2d\] that the actual scatter about the fundamental curve is larger than the observational errors. We estimate the scatter by computing the residuals of ${M_{1/2}}$ and ${L_{1/2}}$ from the fundamental curve, and measure the standard deviation with weights as described in §\[sec:curve\]. The resulting as-observed scatter in $M_{1/2}$ at fixed $r_{1/2}$ about the MRL-2 relation is $\delta \log {M_{1/2}}= 0.41$. Subtracting the observational error in ${M_{1/2}}$ (including the contribution due to the error in ${r_{1/2}}$) in quadrature from this value, we obtain an estimated intrinsic scatter of $\Delta \log {M_{1/2}}= 0.38$. 
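Both of the operations above are simple quadrature manipulations: the E-galaxy $M_*$ error budget adds its components in quadrature, and the intrinsic scatter subtracts the observational error from the as-observed scatter in quadrature. A sketch (the 0.15 dex error in the second example is an illustrative stand-in for our weighted $M_{1/2}$ error, not a tabulated value):

```python
import math

def combine_in_quadrature(components_dex):
    """Total error from independent components, all in dex."""
    return math.sqrt(sum(e * e for e in components_dex))

def intrinsic_scatter(observed_dex, error_dex):
    """Subtract observational error from as-observed scatter in
    quadrature; returns 0 if the errors exceed the observed scatter."""
    diff = observed_dex**2 - error_dex**2
    return math.sqrt(diff) if diff > 0 else 0.0

# E-galaxy M* systematics: method (0.08), SFH (0.15), IMF (0.26) dex
total = combine_in_quadrature([0.08, 0.15, 0.26])  # ~0.31 dex, ~2x in M*

# Observed 0.41 dex scatter with ~0.15 dex errors -> ~0.38 dex intrinsic
intr = intrinsic_scatter(0.41, 0.15)
```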
Using the dMRL-2 relation, the observed scatter in $M_{1/2}^{\rm DM}$ at fixed $r_{1/2}$ is $\delta \log M_{1/2}^{\rm DM} = 0.60$, and the estimated intrinsic scatter is $\Delta \log M_{1/2}^{\rm DM} = 0.20$, smaller because of the larger uncertainties in ${M_{1/2}}^{\rm DM}$. We emphasize that this estimate of intrinsic scatter is only approximate, given our small sample size and our rough characterization of observational errors over the entire (disjoint) population of our objects. Applying the same method to the RL relation (identical for MRL-2 and dMRL-2), we get an observed scatter in ${L_{1/2}}$ at fixed ${r_{1/2}}$ of $\delta \log {L_{1/2}}= 0.73$, and an estimated intrinsic scatter $\Delta \log {L_{1/2}}= 0.71$. This is relatively high, but is driven almost entirely by a few dSph outliers (the low dSph points in the upper-left panel of Figure \[fig:fcurve2d\]) that render the distribution non-Gaussian. The dSphs generally have relatively large error bars, but this is not accounted for in the averaging process above. Thus, removing the discrepant dSphs gives an observed scatter in ${L_{1/2}}$ at fixed ${r_{1/2}}$ of $\delta \log {L_{1/2}}= 0.42$, and $\Delta \log {L_{1/2}}= 0.37$.

These values for the scatter are purely empirical measurements of the deviation of individual galaxies from the fundamental curve. As discussed in §\[sec:curve\], the intrinsic portion of this scatter encodes all of the additional scalings in galaxy formation that are sub-dominant to the curve itself. In the next section, we describe the theoretically expected scatter based on the profile matching scheme.

Dark Matter Profile Matching {#sec:halomatch}
============================

We now describe a technique to use the fundamental curve described in the last section to derive global relations connecting dark matter halos to the luminous properties of the galaxies they host. The main relationship we would like to derive is the median relation between $M_{\rm vir}$ and $L$.
We refer to this method as “profile matching,” as it matches the observed mass profiles of galaxies to those of dark matter halos. While the analysis presented here relies on NFW halos [@NFW97] in $\Lambda$CDM, the general approach is applicable to any halo form or variant cosmology. $\Lambda$CDM simulations predict that at a fixed physical radius $r$, a more massive dark matter halo will be denser, on average, than a less massive dark matter halo [e.g. @NFW97]. Moreover, the [*typical*]{} mass profile for a given halo is determined by its virial mass in a one-to-one way, such that knowledge of $M_{1/2}^{\rm DM}$ and $r_{1/2}$ for a galaxy can be mapped to the unique dark matter halo virial mass that gives $M_{\rm halo}(r=r_{1/2};M_{\rm vir}) = M_{1/2}^{\rm DM}$. Of course, this mapping is not without scatter, and we address this issue in §\[sec:err2\]. This mapping is also made more difficult by the fact that some of the galaxies we consider reside within subhalos. We also address this point in §\[sec:err2\].

We assume that each galaxy resides at the center of a dark matter halo and that galaxies have $M_{1/2}^{\rm DM}$, $r_{1/2}$, and $L_{1/2}$ values specified by the dMRL fundamental curve. We also assume that the dark matter densities within $r_{1/2}$ can be mapped to a virial mass using density scaling relations derived for dark matter halos from [*dissipationless*]{} simulations. This is a reasonable assumption for most of our galaxies because most of them are dark matter dominated. This is not a good assumption for E galaxies, which have fairly high baryon mass fractions and have likely had their dark matter masses enhanced within $r_{1/2}$ by baryonic contraction [@blumenthal86; @gnedin04; @napo10cendm]. But as discussed in the previous section, the dMRL curves tend to lie below the dark matter masses of E galaxies in dMRL space.
Indeed, we will show that a first-order correction for the effects of baryonic contraction yields “uncontracted” masses for E galaxies that sit along our dMRL fits.

We consider an ensemble of dark matter halos with a range of virial masses $10^7 < M_{\rm vir}/M_\odot < 10^{16.5}$. Each halo is assumed to follow an NFW mass profile $M_{\rm halo}(r) = M(r; M_{\rm vir})$ with a concentration parameter ($c \equiv r_{\rm vir}/r_s$) set by the [*median*]{} concentration-mass relations provided by @klypin10bolshoi from the Bolshoi simulations. This simulation was run with cosmological parameters ($n=0.95$, $\sigma_8 = 0.82$, $h=0.7$, and $\Omega_m = 0.27$) that are very similar to those favored by WMAP7 [@WMAP7]. We define virial mass and virial radius as in @klypin10bolshoi, using the virial overdensity as calculated by the spherical collapse approximation. Note that we have extrapolated their fitted concentration-mass-redshift relation to masses beyond those directly probed by the Bolshoi simulation ($M_{\rm vir} = 10^{10.3-14.5} M_\odot$). However, these extrapolations are consistent with the scaling behaviors expected from previous simulations that have probed higher and lower mass regimes directly [e.g. @neto07; @springel08aquarius; @mac08].

The implied dark matter mass profiles for many different virial masses are illustrated as colored lines in Figure \[fig:massplot\]. For reference, the half-mass radii for the dark matter halos, $R_{1/2}^{\rm halo}$, are plotted as large colored circles at their associated half-mass values, $M_{1/2}^{\rm halo} = M_{\rm vir}/2$. The slope of this relation is almost exactly $M_{1/2}^{\rm halo} \propto (R_{1/2}^{\rm halo})^3$, and therefore significantly steeper than the $M_{1/2}^{\rm DM} \propto r_{1/2}^{2.3}$ slope favored by our fiducial fit to the fundamental curve of stellar systems. The virial mass associated with each mass profile plotted is indicated to the right of the associated colored circle.
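The inversion at the heart of profile matching can be sketched as follows. The mean density, virial overdensity, and $c(M_{\rm vir})$ relation below are simplified placeholders (not the Bolshoi fits used in the text), but the structure (an NFW enclosed mass that grows monotonically with $M_{\rm vir}$ at fixed radius, inverted by bisection) is the same:

```python
import math

RHO_MEAN = 3.7e10    # assumed mean matter density, Msun/Mpc^3 (~Omega_m * rho_crit, h=0.7)
DELTA_VIR = 360.0    # assumed virial overdensity w.r.t. the mean density

def concentration(m_vir):
    """Toy median c(Mvir) relation, a stand-in for the Bolshoi fits."""
    return 9.6 * (m_vir / 1e12) ** -0.1

def nfw_enclosed_mass(r_mpc, m_vir):
    """Mass of an NFW halo of virial mass m_vir enclosed within r."""
    r_vir = (3.0 * m_vir / (4.0 * math.pi * DELTA_VIR * RHO_MEAN)) ** (1.0 / 3.0)
    c = concentration(m_vir)
    mu = lambda y: math.log(1.0 + y) - y / (1.0 + y)
    x = min(c * r_mpc / r_vir, c)   # cap the profile at r_vir
    return m_vir * mu(x) / mu(c)

def match_virial_mass(r_half_mpc, m_half_dm, lo=1e7, hi=1e16):
    """Bisect in log(Mvir) for the halo whose median profile encloses
    m_half_dm within r_half_mpc (enclosed mass grows with Mvir here)."""
    for _ in range(120):
        mid = math.sqrt(lo * hi)
        if nfw_enclosed_mass(r_half_mpc, mid) < m_half_dm:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Round trip on a dSph-like scale (r_1/2 = 0.3 kpc = 3e-4 Mpc):
m_vir = match_virial_mass(3e-4, nfw_enclosed_mass(3e-4, 1e9))
```

With the real concentration-mass relation swapped in, repeating this inversion along the fundamental curve traces out the implied $M_{\rm vir}-L$ mapping.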
Overlaid on Figure \[fig:massplot\] as a thick, black solid line is the $M_{1/2}^{\rm DM}$ vs. $r_{1/2}$ relation for our preferred fundamental curve fit (Model dMRL-2 in Table \[tab:curveparams\]). The thick green, dashed line is the alternative dMRL-3 relation. Each point along these curves is mapped to a single luminosity via its respective dMRL relation. Each point on the line can also be mapped in a one-to-one way to a median dark matter halo virial mass, set by the particular $M_{\rm halo}(r) = M(r; M_{\rm vir})$ halo line it intersects. This allows us to back out an implied median relationship between galaxy luminosity and halo virial mass across the range of galaxies considered.

Figure \[fig:matchplots\] shows the implied $M_{\rm vir}-L$ mapping for each of these curves (dMRL-2: solid blue with points; dMRL-3: green dashed) in the upper right panel. Associated relationships between $M_{\rm vir}$ and the other fundamental curve parameters are shown in the other panels of Figure \[fig:matchplots\]. Full analytic descriptions of these relations are provided in Appendix \[apx:matchfit\] (see Table \[tab:matchtab\]). For dMRL-2, the $M_{1/2}$ vs. $M_{\rm vir}$ and $r_{1/2}$ vs. $M_{\rm vir}$ relations are fairly well characterized by power laws with ${M_{1/2}}\propto M_{\rm vir}^{1.36}$ and ${r_{1/2}}\propto M_{\rm vir}^{0.59}$. The $L$-to-$M_{\rm vir}$ relation, meanwhile, can be approximated on the faint end as $L \propto M_{\rm vir}^{2.84}$ and on the bright end as $L \propto M_{\rm vir}^{0.26}$. As expected from our $M_{1/2} - L_{1/2}$ scalings, one interpretation is that mass is the limiting factor in galaxy formation for faint galaxies while baryonic feedback of some kind limits galaxy formation for bright galaxies.

Returning to Figure \[fig:massplot\], we have also plotted the galaxy data points used to fit the fundamental curve as colored symbols, with error bars reproduced from the lower middle panel of Figure \[fig:mlr2d\].
The symbol types are the same as those described in §\[sec:dat\] and Figures \[fig:mlr\_mtol\]-\[fig:fcurve2d\], except for the red (E) points, as described below. Clearly, these points exhibit a large scatter at fixed radius. As we discuss (and illustrate) in the next section, one of the reasons for the apparent scatter and offsets is that the measurement errors on each data point are quite large. This is particularly important for the red symbols (Es), for which small errors in stellar mass estimation can propagate to very large errors in the dark matter masses plotted, potentially in a systematic way. We discuss intrinsic vs. observational scatter in detail in §\[sec:err2\].

Another effect that adds uncertainty to the mapping between halo mass and galaxy luminosity is baryonic contraction [@blumenthal86; @gnedin04], which increases the dark matter density within a given radius from what it otherwise would have been absent the infall of baryons. The E points (red circles) in Figure \[fig:massplot\] have been modified in their $M_{1/2}^{\rm DM}$ masses from those shown in Figures \[fig:fcurve3d\] and \[fig:fcurve2d\] in order to approximately account for this effect. Specifically, the DM masses for the E galaxies in this plot are estimates of the “intrinsic” dark matter masses within $r_{1/2}$ prior to the infall of baryons. We make this estimate using the *contra* code of @gnedin04 applied to the E galaxy bin with the largest number of galaxies. In order to estimate the degree of the mass enhancement from baryonic contraction, we assume that the initial virial mass is that implied by our fiducial curve in Figure \[fig:matchplots\] (dMRL-2) for the $r_{1/2}$ of the chosen E bin. We use the concentration-mass relation discussed above to determine the $c_{\rm vir}$ for an NFW profile. For simplicity we assume a @hernq90 model for the stellar distribution with $M_*$ and $r_{1/2}$ set by the E bin.
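The @blumenthal86 prescription conserves $r\,M(r)$ for each shell under the assumption that dark matter shells do not cross. A self-contained sketch of the solve for a single shell (the profile and all numbers below are illustrative toys, not our E-bin inputs; the full treatment uses the *contra* code):

```python
def hernquist_mass(r, m_star, a):
    """Stellar mass enclosed within r for a Hernquist profile of total
    mass m_star and scale radius a (radii in kpc, masses in Msun)."""
    return m_star * (r / (r + a))**2

def contracted_radius(r_i, m_i, m_star, a_star, f_bar=0.17):
    """Blumenthal et al. adiabatic contraction: solve for r_f in

        r_i M_i(r_i) = r_f [ M_b(r_f) + (1 - f_bar) M_i(r_i) ],

    where m_i = M_i(r_i) is the initial total mass enclosed and the
    dark matter interior to the shell, (1 - f_bar) m_i, is conserved.
    The left side is fixed and the right side grows with r_f, so
    bisection applies."""
    target = r_i * m_i
    m_dm = (1.0 - f_bar) * m_i
    lo, hi = 1e-3 * r_i, 10.0 * r_i
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * (hernquist_mass(mid, m_star, a_star) + m_dm) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A shell initially at 5 kpc enclosing 2e11 Msun, with a 1.2e11 Msun
# Hernquist stellar component of 3 kpc scale radius, moves inward:
r_f = contracted_radius(5.0, 2e11, 1.2e11, 3.0)
```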
We determine the ratio of the mass within $r_{1/2}$ before and after the contraction, and correct our profile-matching $M_{1/2}^{\rm DM}$ by this ratio. The points shown in Figure \[fig:massplot\] assume the @blumenthal86 adiabatic contraction formula, but we find that with both the @gnedin04 and @blumenthal86 methods, the correction is large enough to move the E galaxies onto the dMRL-2 relation. For simplicity, the error bars on the E points here are simply scaled versions of the direct uncertainty in $M_{1/2}^{\rm DM}$ as presented in Figure \[fig:fcurve2d\] and do not include the additional uncertainty in the baryonic contraction correction, which is certainly large but hard to quantify. The errors shown here are conservatively small for this reason.

The uncertainty in profile matching in the E/dE regime is nicely illustrated by the differences between the solid curve (from dMRL-2) and the green dashed curve (from dMRL-3) in Figure \[fig:matchplots\]. The dMRL-3 relation yields bumps (e.g. a plateau in $L$ around $M_{\rm vir} \sim 10^{12} M_{\odot}$) due to the enhanced $M_{1/2}^{\rm DM}$ at $\log(r_{1/2}) \sim 0$ associated with this relation. This break in the RM relation maps onto an increased $M_{\rm vir}$, creating this unexpected feature, which is likely an artifact of baryonic contraction, possibly with a component due to uncertainties in $M_*$. Regardless of the nature of this bump, however, the dMRL-3 scaling does a slightly better job of matching the properties of the faintest galaxies, as it was designed to have an RM relation that is overweighted in the dSph regime (compare the dashed and solid lines in Figure \[fig:massplot\]). Interestingly, the green dashed curves in Figure \[fig:matchplots\] reveal features in the scaling relations of the smallest galaxies at $M_{\rm vir} \sim 10^9 M_\odot$ in the form of a wall in $M_{\rm vir}$.
Strictly speaking, this is a breakdown in monotonicity of the $L-M_{\rm vir}$ relation (discussed further in §\[subsec:abund\]), but for dMRL-3 this is because $M_{\rm vir}$ is very nearly constant with $L$. This might be indicative of a common mass scale for small galaxies [@stri08nat; @pen08scale; @ok09scale; @wolf09] below which luminous galaxies do not inhabit dark matter halos. Abundance matching does not constrain the existence of such a scale, as the galaxies in those halos are too faint to be observed in statistically significant quantities outside the Local Group. As we discuss below, profile matching is just approaching the point where we can begin to test this possibility as part of a global relation. Comparison to Abundance Matching {#subsec:abund} -------------------------------- Figure \[fig:abundmatch\] compares our fiducial profile matched results (blue lines, dMRL-2) to those of the independent technique of abundance matching (red lines). The implied ratios $M_{\rm vir}/L$ vs. $M_{\rm vir}$ are shown in the left panel and the equivalent relations for $M_{\rm vir}/L$ vs. $L$ are shown in the right panel. The blue profile-matching lines are shown as dashed in the regime where the average dynamical mass-to-light ratio within $r_{1/2}$ is indicative of a significant stellar component, with $M_{1/2}/L_{1/2} < 9$. The line is solid in the regime where our stellar mass subtraction is less important for the dark matter mass determination within $r_{1/2}$. The line types emphasize the point that our profile matching technique is most trustworthy in the luminosity/mass extremes. We return to this point again in §\[sec:err2\]. 
The red curves, specifically, illustrate the $M_{\rm vir} - L$ relation that is set by forcing the cumulative abundance of dark matter halos more massive than $M_{\rm vir}$ to match the observed cumulative abundance of all galaxies brighter than $L$ [described, for example, in @krav04hod; @CW08; @busha09reion; @moster09abund]. We use the SDSS luminosity function of @blanton05dwarfs and the halo mass function of @tinker08 [for WMAP7 cosmological parameters]. To convert from the SDSS bands used in @blanton05dwarfs to the $V$-band used in this work, we use the transformation $V = g - 0.59\,(g-r) - 0.01$ from @jest05sdsstrans, implicitly assuming all galaxies have average colors. The line becomes dashed where we have extrapolated beyond the luminosity function completeness limit and becomes dotted at large luminosities where statistical uncertainties affect our ability to quantify the luminosity function. It is encouraging in Figure \[fig:abundmatch\] that our derived profile matching relation for dMRL-2 (blue, with circles) reveals a U-shape similar to that of the abundance matching relation (red). In particular, our profile matched curve reveals a minimum of $M_{\rm vir}/L \simeq 80$ at $M_{\rm vir} \simeq 2 \times 10^{12} M_\odot$ and $L \simeq 2 \times 10^{10} L_\odot$, reflecting scales where galaxy formation efficiency is maximized. Similarly, the abundance-matched curve reaches its minimum of $M_{\rm vir}/L \simeq 80$ at $M_{\rm vir} \simeq 3 \times 10^{11} M_\odot$ and $L \simeq 4 \times 10^{9} L_\odot$. This factor of $\sim 6$ agreement is reasonably encouraging, considering that the minimum of the abundance-matched curve occurs well within the regime where abundance matching is most affected by baryonic uncertainties. 
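The abundance-matching construction behind the red curves can be written in a few lines. The sketch below assumes ideal, monotonic matching with no scatter, and the power-law abundances in the usage example are toys chosen only to make the answer checkable by hand; it also includes the @jest05sdsstrans band transformation quoted above:

```python
import numpy as np

def abundance_match(log_m, log_n_halo, log_l, log_n_gal):
    """Assign to each halo mass the luminosity with equal cumulative number
    density, n_halo(>M) = n_gal(>L).  All inputs are log10 grids; both
    cumulative densities must be monotonically decreasing."""
    # np.interp requires increasing x, so reverse the decreasing arrays
    matched = np.interp(log_n_halo[::-1], log_n_gal[::-1], log_l[::-1])
    return matched[::-1]

def sdss_to_v(g, r):
    """V band from SDSS g, r (Jester et al. 2005): V = g - 0.59 (g - r) - 0.01."""
    return g - 0.59 * (g - r) - 0.01

# toy example: n(>M) ~ 1/M and n(>L) ~ L^(-1/2) imply L ~ M^2 exactly
log_m = np.linspace(10.0, 15.0, 6)
log_l_matched = abundance_match(log_m, -log_m,
                                np.linspace(18.0, 32.0, 100),
                                -0.5 * np.linspace(18.0, 32.0, 100))
```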
Compare the minima to the mass-to-light ratio that would result in the limiting case where 100% of each halo’s baryons are converted to stars: $(M_{\rm vir}/L)_{\rm min} = \Upsilon_*/f_{\rm baryon} \simeq 12$ with $\Upsilon_* \approx 2$ set by the average stellar mass-to-light ratio of the E sample in this work ($\Upsilon_*^{\rm E} = 1.89$). The range $1 < \Upsilon_* < 3$ is shown in Figure \[fig:abundmatch\] as the gray shaded region, clearly below any of the matching curves. The implication is that even for galaxies that are maximally efficient in converting their baryons into stars, some $\sim 70\%$ of their baryons remain unconverted. Of course, the inefficiency of baryon conversion into stars is a well-known result of CDM-based comparisons to galaxy luminosity functions. Nevertheless, it is encouraging that our profile matching analysis seems to imply the same level of inefficiency (on average) without appealing to abundance information in any way. While the broad-brush agreement between abundance matching and profile matching is encouraging, distinct differences are present for dMRL-2. There could be several explanations for this. The most straightforward is that our profile matching $M_{\rm vir}/L$ relations are applicable to dispersion-supported galaxies, while abundance matching applies to galaxies of all types. This is particularly important in the mass range $M_{\rm vir} \simeq 10^{10-13} M_\odot$, where the population of disky late-type galaxies becomes much more important relative to spheroidal early-types as mass decreases. The star-forming galaxies will have higher luminosities (lower $M_{\rm vir}/L$) than their pressure-supported/passive counterparts at the same $M_{\rm vir}$, and it is only this latter category that is reflected in our profile matching data set. Hence, if the star formation efficiency peaks at a different mass for early-type galaxies than late-types, the two methods will give different results for the galaxies in this mass range. 
Additionally, at the bright end, abundance matching typically matches the largest dark matter halos to bright E galaxies. Thus they do not include the more diffuse, harder-to-measure intra-cluster stars. We have included the full CSph light, and therefore the profile matched relation has a larger $L$ at fixed $M_{\rm vir}$ (or lower $M_{\rm vir}/L$). With this in mind, it is important to note that at the cluster scale, direct object-by-object comparisons of the measured efficiency [@gonz07barycen] are complementary to the scaling relation approach for comparison to galaxy formation models. Further, it is possible to directly compare lensing-based mass estimates to the stellar mass [e.g. @zar08eqgal]. With a large enough sample, this could potentially determine whether there is a discrepancy in either abundance matching or profile matching, although the abundance matching estimates are rather uncertain at these mass ranges due to the impact of small numbers of large clusters (discussed above). However, because clusters are, by nature, systems where the subhalos/lower-luminosity galaxies are near the peak of efficiency, the host halo of a cluster will always be significantly above the peak. Thus, this scale cannot probe the mismatch at peak efficiency. As larger lensing samples at lower masses become available, however, it may be possible to perform direct comparisons at those scales. The disagreement between abundance matching and profile matching could be further influenced by the use of a luminosity function instead of the $M_*$ mass function. Because the luminosity function varies depending on both galaxy type (and thus color) and choice of band, it could bias the inferred abundance matching scales differentially for different galaxy types. 
This explanation for the difference in Figure \[fig:abundmatch\] is supported by results such as @moster09abund that find a characteristic scale in the $M_{\rm vir} - M_*$ relation at $M_{\rm vir} \simeq 10^{12} M_\odot$, just where our profile matching efficiency is highest. Other issues affect our interpretation of the dSph galaxies in our sample. First, almost all of them are located within the virial radius of the Milky Way, meaning that their dark matter halos are subhalos, which may follow different scaling relations. We consider the effect of this on our derived relations in the next section. Also, for the very faintest galaxies, we are approaching a regime where surface brightness effects could lead to an observational bias to detect only the highest $M_{\rm vir}/L$ galaxies [@bull09stealth]. Despite these caveats, Figure \[fig:abundmatch\] does clearly show similar patterns to those noted in §\[sec:curve\]. On the faint end, $\Upsilon_{\rm vir}=M_{\rm vir}/L$ shows a much steeper dependence on dark matter mass (this time $M_{\rm vir}$), while the CSph on the bright end are much more sensitive to $L$. This continues to suggest the dark matter halos are of greater importance for dSphs, while Es and CSph scalings are more controlled by baryonic physics. A final intriguing property of the profile matching scheme is that there is a built-in consistency check for monotonicity in the $M_{\rm vir} - L$ relation. Specifically, if the $M_{1/2}$ vs. $r_{1/2}$ relation is anywhere shallower than the $M_{\rm halo}(r)$ profile it is matching, then the assumption of a monotonic, one-to-one mapping from averaged halo mass (and density profile) to averaged galaxy luminosity must break down. The fact that the model used here does *not* have this problem implies self-consistency, although it does not guarantee this property in the actual universe. 
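The monotonicity check just described can be made concrete. The observed $M_{1/2}$–$r_{1/2}$ relation must be at least as steep as the local logarithmic slope of the halo mass profile it is matched to, and for an NFW profile that slope approaches 2 well inside the scale radius and falls below 2 at larger radii. A short numerical verification (arbitrary normalization):

```python
import numpy as np

def m_nfw_shape(x):
    """NFW enclosed mass, up to normalization, vs. x = r / r_s."""
    return np.log(1.0 + x) - x / (1.0 + x)

def log_slope(x, eps=1e-4):
    """Local logarithmic slope d log M / d log r of the NFW profile."""
    return (np.log(m_nfw_shape(x * (1 + eps)))
            - np.log(m_nfw_shape(x * (1 - eps)))) / np.log((1 + eps) / (1 - eps))
```

Deep inside the scale radius `log_slope` returns values just under 2, which is why an observed $M_{1/2} \propto r_{1/2}^2$ relation sits exactly at the limit of monotonicity; the slope decreases monotonically toward larger radii.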
Clearly, given the size of the measurement errors (see below) the data at this point are not accurate enough to determine whether or not the relation becomes shallow enough to make the mapping double-valued over a small $r$ range. We note, however, that if we only consider the smallest (magenta, dSph) galaxies ($r_{1/2} \lesssim 1$ kpc), the relation appears consistent with $M_{1/2} \propto r_{1/2}^2$. For $r \ll r_s$ (true for most of the dSphs here), NFW halos obey $M_{\rm halo}(r) \propto r^2$, so the profile matching is just at the limit of monotonicity in the relevant halo mass range [see @walker09; @wolf09 for related discussions]. We return to this issue in the next section. Uncertainty and Scatter in the $M_{\rm vir}-L$ relation. {#sec:err2} -------------------------------------------------------- Profile matching to the fundamental curve provides a potentially strong constraint on galaxy formation models, and in principle this method offers a means to test whether or not there is an average, monotonic $L - M_{\rm vir}$ relation between galaxy luminosity and halo mass, and to investigate the degree of scatter about this relation. Unfortunately, this level of precision testing is hindered by several uncertainties. First, as discussed in §\[sec:err\], there is observational uncertainty that affects our ability to measure the scatter about, and the underlying shape of, the fundamental curve. Second, there is theoretical uncertainty in the [*average*]{} mapping between an inner mass $M_{1/2}^{\rm DM} = M_{\rm halo}(r_{1/2})$ and halo virial mass, which is particularly difficult (and somewhat ill-defined) for the dSph population we consider because they are subhalos. 
Finally, even in the limit where the theoretical mapping between the [*average*]{} $M(r)$ profile and $M_{\rm vir}$ is perfect, there is a well-known scatter in halo profiles at fixed mass [@Jing00; @bul01; @Wech02; @BK09] and this imposes a limiting cosmic scatter in the map between $M_{1/2}$ and $M_{\rm vir}$. We discuss how all of these issues affect the $M_{\rm vir} - L$ relation in what follows. Figures \[fig:mlscatter\] and \[fig:mlscatter\_altmodel\] provide visual presentations of the observational and theoretical uncertainties in the profile matching relations for $M_{\rm vir}$ vs. $L$ (left) and the equivalent implied relations of $M_{\rm vir}/L$ vs. $M_{\rm vir}$ (middle) and $M_{\rm vir}/L$ vs. $L$ (right). Starting with observational uncertainties, the error bar on $M_{\rm vir}$ for each data point is estimated by offsetting the observables by their $1\sigma$ errors in $M_{1/2}^{\rm DM}$ and $r_{1/2}$, and performing the profile matching for each data point individually (using the mean fundamental curve relation for Model 3). For the Es, we use the error bars adopted in the previous section (factor of 2 on $M_*$). We note that this implies very large errors on $M_{1/2}^{\rm DM}$ for the E (and dE) galaxies, because these are the systems for which $M_*/2$ is closest to $M_{1/2}$, and hence the possible error in $M_*$ has the largest effect on $M_{1/2}^{\rm DM}$. This large uncertainty in $M_{1/2}^{\rm DM}$ maps to an even larger (relative) uncertainty in $M_{\rm vir}$. Figures \[fig:mlscatter\] and \[fig:mlscatter\_altmodel\] are distinguished by use of the dMRL-2 and dMRL-3 models, respectively. 
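A schematic version of this offset-and-rematch procedure is below. The concentration–mass relation and virial-radius scaling are placeholders with illustrative coefficients, not the relations used in the text; the point is only the structure of the calculation (solve for the NFW $M_{\rm vir}$ whose enclosed mass at $r_{1/2}$ equals the observed $M_{1/2}^{\rm DM}$, then repeat with the observables shifted by their errors):

```python
import numpy as np
from scipy.optimize import brentq

def mu(x):
    """NFW mass shape function."""
    return np.log(1.0 + x) - x / (1.0 + x)

def concentration(mvir):
    """Illustrative power-law c(M) relation (placeholder values)."""
    return 10.0 * (mvir / 1e12) ** (-0.1)

def rvir_kpc(mvir):
    """Approximate virial radius in kpc for mvir in solar masses."""
    return 250.0 * (mvir / 1e12) ** (1.0 / 3.0)

def profile_match(m_half_dm, r_half_kpc):
    """Find the NFW virial mass whose enclosed mass at r_half equals the
    observed dark matter mass within the (3-D) half-light radius."""
    def resid(log_mvir):
        mvir = 10.0 ** log_mvir
        c = concentration(mvir)
        return mvir * mu(c * r_half_kpc / rvir_kpc(mvir)) / mu(c) - m_half_dm
    return 10.0 ** brentq(resid, 6.0, 16.0)

# 1-sigma style error propagation: redo the match with offset observables
mvir_best = profile_match(1e9, 1.0)
mvir_hi = profile_match(2e9, 1.0)   # e.g. a factor-of-two error in M_1/2^DM
mvir_lo = profile_match(5e8, 1.0)
```

Note how a modest offset in $M_{1/2}^{\rm DM}$ produces a much larger spread in the matched $M_{\rm vir}$, since the enclosed mass at fixed small radius is a slowly varying function of virial mass.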
![image](ml_scatter){width="33.00000%"} ![image](mtol_scatter_M){width="33.00000%"} ![image](mtol_scatter_L){width="33.00000%"} ![image](ml_scatter_altmodel){width="33.00000%"} ![image](mtol_scatter_M_altmodel){width="33.00000%"} ![image](mtol_scatter_L_altmodel){width="33.00000%"} Next we consider the cosmological scatter in the dark matter mass enclosed within a given radius for an ensemble of halos with identical virial masses. For field halos, this scatter can be accounted for by the scatter in the concentration-mass relation for dark halos, which is approximately log-normal in concentration with a scatter of $\Delta \log(c) = 0.14$ [@Wech02]. In principle, this cosmic scatter provides a lower limit on the point-to-point scatter that can be measured in a profile matched $M_{\rm vir}-L$ relation. We illustrate the magnitude of this cosmic scatter by the middle (dark gray) shaded band, which traces our best-fit relation (shown as a solid blue line connecting blue circles) in each panel. We see that this cosmic variance is particularly important for the smallest galaxies. This cosmic variance scatter is the minimal possible scatter expected for galaxies in $\Lambda$CDM. Even if galaxy properties tracked virial mass in a precisely one-to-one fashion, they would scatter about the profile matching relation with at least this amplitude. [^4] An additional component of scatter and uncertainty must be considered for the dSph galaxies – because they are satellites of the MW, their dark matter halos are subhalos, and hence do not obey the same scaling relations as field halos [e.g. @bul01; @springel08aquarius]. More specifically, it is inappropriate to speak of a virial mass for a subhalo, because subhalos tend to be tidally truncated at radii that are smaller than the virial radius they had when they were first accreted. A more meaningful mass to be associated with each dSph is its halo’s virial mass at the time it was accreted. 
It is this mass, the virial mass at accretion, that would most likely show a strong correlation with galaxy luminosity. Two competing effects may act to modify the standard (field) mapping between inner mass and virial mass. First, at fixed virial mass, a halo at higher redshift will tend to be denser at a fixed physical radius than a halo of the same virial mass at a later redshift (because the virial density scales roughly with the density of the universe). Therefore, if a subhalo was accreted at some high redshift (e.g. $z=3$) and it experienced no mass loss in its central regions (unlikely), then our virial mass estimates are biased high. The lower (red) shaded region in the $L < 10^7$ L$_\odot$ band of Figure \[fig:mlscatter\] illustrates the degree by which the median relation would need to be shifted down in order to account for a $z \le 3$ accretion that experienced no mass loss within its central region after accretion. The lower edge of the red band corresponds to the relation expected if *all* dSphs were accreted at $z=3$ with no mass loss. The second competing process that adds uncertainty to profile matching estimates for subhalos is tidal mass loss. Halos tend to lose mass at all radii after they are accreted, and this acts to decrease their central density for a fixed virial mass at accretion. The cosmological simulation of @BK09 shows that the median subhalo at $z=0$ in a Milky-Way-type host has lost $75 \%$ of its initial [*total*]{} mass, while $\sim85\%$ of subhalos have lost $< 90 \%$ of their initial *total* mass (Boylan-Kolchin 2010, private communication). However, the mass loss is far less significant in the inner regions we are probing here [@K04; @pen08; @wetzel10; @pen10]. The simulations of @BJ05 show that a $75\%$ ($90\%$) loss of [*total*]{} mass results in a mass loss fraction within the inner 300 pc of only $20 \%$ ($40 \%$) – where $r_{1/2} = 300$ pc is the median half-light radius for our dSph sample. 
For the mass range of relevance, $M_{\rm vir} \propto M_{300}^{3.3}$ [@bull09stealth], which implies that our fiducial $M_{\rm vir}$ determination from field halo profile matching would be under-estimated by a factor of $(0.8)^{-3.3} \sim 2$ for median subhalo mass loss, and by a factor of $(0.6)^{-3.3} \sim 5$ in the case of 90% total mass loss. Thus, in Figure \[fig:mlscatter\] we include an upper (green) shaded region corresponding to a factor of 5 increase in the inferred $M_{\rm vir}$, as a conservative estimate of the maximal scatter. This treatment is conservative because we expect that systems with the most mass loss will also have been accreted earlier, and therefore to have had higher virial densities overall. This offsetting effect has been ignored in the upper green shaded band. Of course, if we knew the redshift of accretion and orbital trajectory (including mass loss) of each dSph in our sample, we could perform the profile matching in a more exacting way, but this is not practical with present data due to the uncertainties in the orbits of the MW satellites [@lake10dwarforbits]. Therefore, we have added both effects in quadrature to the cosmic variance error band in order to derive a limiting scatter estimate, shown as the outer light gray regions in Figure \[fig:mlscatter\]. Thus, the shaded bands about the average relations in Figure \[fig:mlscatter\] can be thought of as a limiting theoretical scatter about the relation. In principle, if the data at a particular scale scatter about the relation with a larger variance than indicated by the shaded band, then this would be indicative of intrinsic scatter in the $M_{\rm vir} - L$ relationship. This then implies that the secondary scalings in galaxy formation (e.g. 2-D scalings such as the fundamental plane) can be fit to the data set to provide useful information. 
Conversely, if the scatter (including observational errors) is consistent with the theoretical scatter, the secondary scaling relations cannot be measured at that scale. The possibility of detectable intrinsic scatter is particularly interesting at the faint end, where it has been noted that despite the wide ranges of luminosities, the MW dSphs appear to have similar halo masses, albeit with large scatter [@stri08nat; @wolf09] – this could be due to scatter (observational or intrinsic) masking a weak relation, scatter in halo mass about a new scale in galaxy formation, selection effects (e.g. the stealth galaxies’ influence as discussed in §\[sec:halomatch\]), or some as yet unknown effect. This scale appears particularly strongly in Figure \[fig:mlscatter\_altmodel\] due to the preferential fitting of the dSphs. These data also admit a steepening power law instead of a true scale in the low-mass regime, as suggested by @krav09drev to match the dSph luminosity function, so we plot this relation in Figure \[fig:abundmatch\]. We note in Figure \[fig:mlscatter\] that there is a systematic offset for the bright dSphs. This is primarily due to the tension between fitting the RM relation for the dEs and the bright dSphs with a single power law, as is used for dMRL-2. In Figure \[fig:mlscatter\_altmodel\], this offset is essentially gone, as the fit in the RM relation is tailored to fit best for the dSphs. This comes at the price of a poorer fit for the other galaxies, however, as well as an anomalously low $M_{\rm vir}/L$ apparent in Figure \[fig:abundmatch\] (green-dashed line in lower-right panel). It is unclear if this tension is due to problems with $\Lambda$CDM accounting for the existence of galaxies in the halos of the bright dSphs, evolutionary effects on subhalos (as discussed above), or the influence of baryonic contamination of $M_{1/2}$, which is unaccounted for in our analysis of the dSphs. 
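The subhalo corrections invoked above are easy to quantify: with the $M_{\rm vir} \propto M_{300}^{3.3}$ scaling quoted earlier, a given fractional loss of inner mass maps directly to a multiplicative bias in the field-halo $M_{\rm vir}$ estimate. A one-line sketch:

```python
def mvir_bias(inner_mass_retained, slope=3.3):
    """Factor by which field-halo profile matching under-estimates M_vir for
    a subhalo retaining the given fraction of its mass within ~300 pc,
    assuming M_vir scales as M_300^3.3 (as quoted in the text)."""
    return inner_mass_retained ** (-slope)

median_case = mvir_bias(0.8)    # 20% inner mass loss: factor ~2
extreme_case = mvir_bias(0.6)   # 40% inner mass loss: factor ~5
```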
Unfortunately, as is clear from comparing the data points to the shaded band in Figure \[fig:mlscatter\], the observational uncertainty is still slightly too large on the faint end to determine if there is significant *intrinsic* scatter about the fundamental curve, although the data are close. Similarly, while the very faintest galaxies show deviation from the fundamental curve in a way consistent with a new scale of flat $M_{\rm vir}/L$, this level of deviation is not statistically significant. Similar uncertainties likely apply to M31 satellites, making it difficult to interpret the possible existence of an offset [@kal10]. Marginal improvements in data quality may be enough to shed light on these questions, however, as observational errors could be brought to the level of cosmological scatter. Furthermore, the predicted existence of far more faint dSphs in the Local Group to be detected in upcoming surveys [@toll08; @martin09cubs; @bull09stealth] provides hope that this degeneracy between intrinsic and observational scatter may be broken by sheer statistics. Nevertheless, the current data are not good enough to definitively address this question. There is also hope on the bright end. Interestingly, the most massive, luminous objects are the ones that face the least cosmological scatter associated with the profile matching technique. As can be seen in Figure \[fig:massplot\], as one travels along the fundamental curve projection to large values of $r_{1/2}$ and $M_{1/2}^{\rm DM}$, the associated $M_{\rm vir}$ determinations become more cleanly defined. Unfortunately, it is in this regime where our inability to determine CSph velocity dispersions limits our ability to cleanly determine $M_{1/2}$. Conclusions {#sec:conc} =========== We have examined the scaling relations for a broad collection of spheroidal stellar systems in an intrinsic MRL space of half-light mass (${M_{1/2}}$; Equation \[eqn:Mh\]), half-light radius (${r_{1/2}}$), and half-luminosity ($L_{1/2}$). 
These MRL coordinates are a theoretically-motivated transformation of the familiar fundamental plane variables and can serve as a bridge between direct observables and the predictions of galaxy formation models. The latter is facilitated by considering an alternative space we refer to as dMRL space. In dMRL space, the mass variable is ${M_{1/2}}^{\rm DM}$ – the dark matter mass within ${r_{1/2}}$ rather than the dynamical mass. Our main findings are as follows. 1. All spheroidal [*galaxies*]{}—stellar systems with their own dark matter halos—track a 1-D fundamental curve through MRL space. This curve is visualized in 3-D in Figure \[fig:fcurve3d\] and represented analytically in Equations \[eqn:arctanrvsl\] and \[eqn:arctanrvsm\] (with parameters from Table \[tab:curveparams\]). The fundamental mass-radius-luminosity relation transitions from $M_{1/2} \propto r_{1/2}^{1.44} \propto L_{1/2}^{0.30}$ for the faintest dwarf spheroidal (dSph) galaxies to $M_{1/2} \propto r_{1/2}^{1.42} \propto L_{1/2}^{3.2}$ for the most luminous cluster spheroids (CSphs). This ${r_{1/2}}-{L_{1/2}}$ scaling (MRL-2 model) is a good fit for the dSphs if we take into account the fact that the lowest luminosity dwarf galaxies suffer from surface brightness incompleteness (which biases the sample towards smaller ${r_{1/2}}$). If we ignore this bias, the raw empirical relation (MRL-1 model) gives $M_{1/2} \propto r_{1/2}^{1.44} \propto L_{1/2}^{0.86}$ on the faint end. 2. Dwarf ellipticals (dEs) and normal ellipticals (Es) inhabit the transition regime between the limiting power laws, where the dynamical mass-to-light ratio within ${r_{1/2}}$ is minimized at $\Upsilon_{1/2} \simeq 3$. The dynamical mass as a function of ${r_{1/2}}$ transitions quite abruptly as the galaxies become baryon-dominated (see Figure \[fig:fcurve2d\]). 
When we subtract out the baryonic component with estimates for the stellar mass (although these are subject to uncertain systematic errors), the relation is better fit by a power law, particularly when we include an estimate for the effect of baryonic contraction (see the inset of Figure \[fig:massplot\]). The inferred slope for the ${r_{1/2}}- {M_{1/2}}^{\rm DM}$ relation is ${M_{1/2}}^{\rm DM} \propto r_{1/2}^{2.32}$, slightly steeper than the $M \propto r^2$ relation that has been discussed in the literature [@gentile09; @napo10cendm; @walker10dsphsp]. 3. Globular clusters (GCs) and ultra-compact dwarfs (UCDs) do not follow the fundamental curve relation. Instead, GCs and UCDs inhabit overlapping/connecting regions in MRL space that resemble sections of mass-follows-light planes near $M_{1/2} = 3 \, L_{1/2}$, as illustrated in Figure \[fig:sepplot\]. See Equation \[eqn:mlrsepeq\] for the exact form of the plane that separates this GC locus from the dSph portion of the fundamental curve. Note that the UCDs in our sample exhibit a small “tilt” away from the mass-follows-light plane, while GCs exhibit no such tilt – thus it cannot be ruled out that UCDs are a part of the galaxy sequence, but are intrinsically rare in the region where they meet the fundamental curve. However, dSphs separate distinctly from GCs and UCDs in MRL space, implying that if UCDs are actually embedded in dark matter halos, an irreducible dichotomy exists in galaxy formation. 4. The fundamental curve relation in dMRL space allows us to connect galaxies to their dark matter halos via an approach we call profile matching. Specifically, at each luminosity, an [*average*]{} galaxy sits at a specific point in the ${M_{1/2}}^{\rm DM} - {r_{1/2}}$ plane. This mass-density point can be mapped to an [*average*]{} dark matter halo virial mass, as illustrated in Figure \[fig:massplot\]. 
While we assume standard NFW halos in $\Lambda$CDM, this technique is easily adaptable to any dark matter halo type that can be cast as a single-parameter family. In the end, we can construct relationships between luminous galaxy properties and their dark matter halo masses. This profile matching technique for deriving the $M_{\rm vir} - L$ relation is most accurate at the high and low luminosity extremes (where dark matter fractions are highest) and is therefore quite complementary to statistical approaches that rely on having a well-sampled luminosity function. 5. Independent of any global abundance or clustering information, we find that (spheroidal) galaxy formation needs to be most efficient in $\Lambda$CDM halos of virial mass $M_{\rm vir} \simeq 10^{12} \, M_\odot$ and to become [*sharply*]{} inefficient in masses smaller than $M_{\rm vir} \lesssim 10^{10} \, M_\odot$. On the other hand, the inefficiency of galaxy formation seems to occur more gradually as halos become more massive than $M_{\rm vir} \simeq 10^{13} \, M_\odot$. Rather, the inefficiency sets in sharply in luminosity at $L \simeq 10^{11} \, L_\odot$. These results are qualitatively consistent with the expectations of abundance matching (see Figure \[fig:abundmatch\]), although only if we use models that account for surface brightness selection effects on the faint end (dMRL-2 and dMRL-3). The sharpness of the transition with $M_{\rm vir}$ on the faint end may imply the dark matter halo or potential depth drives scaling relations for low-mass galaxies, while the stronger dependence on $L$ on the bright end suggests baryonic physics controls the massive galaxy regime. 6. Object-by-object scatter about the $M_{\rm vir} - L$ relation remains very difficult to quantify. 
Nevertheless, despite the large theoretical uncertainties associated with our profile matching technique at the low-mass end, the observational data for dSphs are almost to the point where we can explore intrinsic scatter about this relation in the smallest systems. On the other hand, the theoretical uncertainty in the mapping between points in the ${M_{1/2}}^{\rm DM} - {r_{1/2}}$ plane and halo virial mass is much smaller on the scale of CSphs, so there is promise at the bright end in that respect. Unfortunately, stellar velocity dispersions for CSphs remain very difficult to obtain directly. A better approach would be to consider alternative mass-radius measurements for CSphs (based, for example, on X-ray studies) as has recently been explored by @tgkly10. We close by mentioning that the existence of a fundamental curve in MRL space is not out of line with an understanding that galaxy properties show strong correlation with a single parameter [see, e.g. @disney08simplegals for similar results on an HI-selected sample]. Nevertheless, this fact does *not* imply that all galaxies belonging to a given evolutionary sequence are completely or even primarily controlled by a single parameter – only that their first-order scaling relation is characterized by a single parameter when the galaxy properties are averaged. Our viewpoint is rather that the MRL relation presented above provides a useful bridge between observational properties and theoretical models. At the very least, models should be able to reproduce the 1-D scaling relation presented. Some guidance to that aim is provided by our dMRL-inspired profile matching, which seeks to unite galaxies across a space of virial mass, stellar luminosity, and stellar radius self-consistently. We wish to acknowledge Sandra Faber, Andrey Kravtsov, and Chris Purcell for helpful discussions. We also thank the anonymous referee for a useful and clarifying report. This work was supported by the Center for Cosmology at UC Irvine. E.J.T. 
was supported by the UCI Physics & Astronomy GAANN fellowship. Analytic Fit to Halo Matching Relations {#apx:matchfit} ======================================= In order to provide an analytic description of the derived $M_{\rm vir}$ relations for our fiducial (dMRL-2) model results, we perform a least-squares fit of $y$ vs. $M_{\rm vir}$ for each of $y = L_{1/2}$, $r_{1/2}$, $M_{1/2}$, and $(M_{\rm vir}/L)$ using the same fitting form as Equation \[eqn:arctanrvsl\] $$\begin{aligned} \log\left(\frac{y}{y_0}\right) = {\tilde{\mathcal{M}}}\, \frac{A + B}{2} + \left[ S - {\tilde{\mathcal{M}}}(A - B) \right] \frac{\arctan({\tilde{\mathcal{M}}}/ W)}{\pi}. \label{eqn:matchfit}\end{aligned}$$ Here, ${\tilde{\mathcal{M}}}\equiv \log (M_{\rm vir}/M^0_{\rm vir})$ defines a characteristic virial mass scale $M^0_{\rm vir}$ at $y = y_0$, $W$ sets the width of the transition from $y \propto M_{\rm vir}^A$ at small $M_{\rm vir}$ to $y \propto M_{\rm vir}^B$ at large $M_{\rm vir}$, and $S$ sets the offset in $\log(y)$ over the transition region. Each of the fit parameters $A$, $B$, $W$, $S$, $M^0_{\rm vir}$, and $y_0$ is provided in Table \[tab:matchtab\] for our four $y$ relations, corresponding to the four panels of Figure \[fig:matchplots\]. We find that the ${M_{1/2}}^{\rm DM}$ vs. $M_{\rm vir}$ and ${r_{1/2}}$ vs. $M_{\rm vir}$ relations for dMRL-2 are also very well characterized by power laws. Specifically we find $$M_{1/2}^{\rm DM} \simeq \left(\frac{M_{\rm vir}}{1.35 \times 10^{5} M_{\odot}}\right)^{1.36} \, M_\odot \, ,$$ and $$r_{1/2} \simeq \left(\frac{M_{\rm vir}}{2.17 \times 10^{11} M_{\odot}}\right)^{0.59} \, {\rm kpc}.$$ The $L$-to-$M_{\rm vir}$ relation, meanwhile, can be approximated on the faint end as $L \propto M_{\rm vir}^{2.84}$ and flattens on the bright end to $L \propto M_{\rm vir}^{0.26}$. 
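Equation \[eqn:matchfit\] is simple to evaluate numerically. The sketch below codes the fitting form directly and, using the $L_{1/2}$ row of Table \[tab:matchtab\], recovers the quoted limiting slopes:

```python
import numpy as np

def matchfit(mvir, y0, mvir0, A, B, W, S):
    """Broken power law with an arctan transition (the paper's fitting form):
    y -> (M/M0)^A at small M and y -> (M/M0)^B at large M."""
    m = np.log10(mvir / mvir0)
    log_y = m * (A + B) / 2.0 + (S - m * (A - B)) * np.arctan(m / W) / np.pi
    return y0 * 10.0 ** log_y

# L_1/2 vs. M_vir parameters from Table [tab:matchtab]
L = lambda mvir: matchfit(mvir, 8.95e9, 1.78e12, 2.84, 0.26, 0.71, 0.0)
```

At $M_{\rm vir} = M^0_{\rm vir}$ the transition term vanishes and `matchfit` returns `y0` exactly, which fixes the normalization of each row in the table.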
| $y$ | $y_0$ | $M^0_{\rm vir}\;[M_\odot]$ | $A$ | $B$ | $W$ | $S$ |
|---|---|---|---|---|---|---|
| $M_{1/2}^{\rm DM}$ | $2.03 \times 10^{12}\,M_\odot$ | $2.85 \times 10^{14}$ | 1.38 | 1.31 | 2.40 | 0 |
| $L_{1/2}$ | $8.95 \times 10^{9}\,L_\odot$ | $1.78 \times 10^{12}$ | 2.84 | 0.26 | 0.71 | 0 |
| $r_{1/2}$ | $70 \; {\rm kpc}$ | $2.85 \times 10^{14}$ | 0.60 | 0.56 | 2.40 | 0 |
| $M_{\rm vir}/L$ | $199\,(M/L)_\odot$ | $1.78 \times 10^{12}$ | $-1.84$ | 0.74 | 0.71 | 0 |

\[tab:matchtab\]

Alternative Data Projections {#apx:altdata} ============================ The fundamental plane of bright elliptical galaxies [@dj87fp; @dressler87dnsig; @faber87fp] lies within a 3-D parameter space that consists of the velocity dispersion ($\sigma$), the 2-D half-light (effective) radius ($R_e$), and the surface brightness ($I_e$). Here we define $I_e$ such that it is the *mean* surface brightness within $R_e$, in units of $L_\odot\,{\rm pc}^{-2}$, although we note that slightly different definitions are sometimes used in the literature. The fact that these three variables are direct observables that scale together motivates the consideration of galaxies in this space. Two-dimensional projections of our data set (Table \[tab:dat\]) on the fundamental plane axes are shown in the three panels of Figure \[fig:fp2d\]. In Figure \[fig:fp3d\] we plot a 3-D rendering of these same data. Also shown in transparent red in Figure \[fig:fp3d\] is the best-fit fundamental plane of @graves09ii. From these plots, it is apparent that while the normal elliptical galaxies (E) lie well within the fundamental plane of @graves09ii, the CSphs and dSphs lift away from the plane in a non-trivial manner, in contrast to data sets that do not reach those extremes in luminosity [e.g. @burbend97]. However, it has been noted in the literature that the faint end of the fundamental plane (towards dEs) shows curvature up off the plane [@zar06fmdwarfs; @hyde09curvefp], and bright-end deviations from the fundamental plane are discussed in further detail in @zar06fman. 
Here we note that the separation from the plane is much more significant when the dSph galaxies discovered in the SDSS ($R_e \lesssim 450$ pc) are included alongside the “classical” dwarfs, as the SDSS dSphs extend nearly perpendicularly from the fundamental plane. With that in mind, the deviation from the plane is significant far beyond the scatter in the fundamental plane derived for bright E galaxies. The “tilt” of the E fundamental plane here can be interpreted in the context of this curvature; the tilt of the fundamental plane is simply the shift of the observational fundamental plane from the expected virial plane (see §\[sec:mlrspace\] and the two planes in Figure \[fig:mlr3d\]). Curvature off the plane is then just a continuation of this tilt past the typical regime of Es. The tilt in the Es can potentially be driven by a mix of stellar mass-to-light ratio variations and/or variation in the dark matter-to-baryon fraction within the halo of the galaxy in question [@cappellari06; @bolton07; @humph10; @napo10cendm; @treu10imf; @graves10iii]. It may also be an aperture effect due to dissipation causing a change in the apparent dark matter fraction by packing more baryonic material into the same volume of dark matter halo [@robertson06ellscale; @hopkins08fpdiss]. For our purposes, however, it is sufficient to note that the magnitude by which the largest and smallest spheroidal galaxies peel away from the fundamental plane cannot be explained by baryonic effects; it can only be addressed in terms of dark matter content, due to the very large mass-to-light ratios.
For comparison with other work, Figure \[fig:fpproj\] shows the projection of the data onto the kappa ($\kappa$) space of @b2f1, a coordinate rotation that enables a reasonably physical interpretation with $\kappa_1 \propto \log{M}$, $\kappa_2 \propto \log{\left((M/L)I_e^3\right)}$, and $\kappa_3 \propto \log(M/L)$ such that $\kappa_1$ and $\kappa_2$ define a plane that is approximately parallel to the fundamental plane for ellipticals. In Figure \[fig:mlr3dfman\] we show this data set again in MRL space (as in Figure \[fig:mlr3d\]), but we now overplot the fundamental manifold of @zar06fman in transparent green. We use the fundamental manifold from @zar08eqgal Table 1, using the transforms from §\[sec:mlrspace\] to convert from fundamental plane space to MRL space. We also adjust the luminosity from the value for I-band [used in @zar08eqgal] to V-band assuming all objects have V-I colors of typical E galaxies from @fuk95. While there will be an additional bias because $R_e$ for V and I bands will differ, this is likely small relative to the scatter and hence we disregard it. Finally, we show the mean mass density of our data set as derived from the middle panel of Figure \[fig:mlr2d\]. This is derived simply by tilting the $\log(r_{1/2}) - \log(M_{1/2})$ relation to give density within $r_{1/2}$ instead of mass (i.e. residuals from the dashed-dotted line in the middle panel of Figure \[fig:mlr2d\], but with a different normalization fixed to standard density units). UCD Mass Estimates {#apx:UCDs} ================== As discussed in §\[sec:mlrspace\], UCDs present a puzzle in the MRL space. The most massive UCDs approach the fundamental curve, although with a gap that could potentially be a result of selection effects. Regardless, the current sample of UCDs forms a distinct group (with GCs) from the dSphs for the faintest/smallest objects.
Thus, if this sample of UCDs and dSphs are both galaxies, the scaling relations split into a dichotomy or bimodality at the faint end. For this paper we have focused on the dSph side of this relation, but here we consider the UCDs in the profile matching context. Figure \[fig:UCDdSphmassplots\] is analogous to Figure \[fig:massplot\], but zoomed in on the faint end and with UCDs added. Note that for UCDs, we determine the dark matter mass as $M_{1/2}^{\rm DM} = M_{1/2} - L_{1/2} \Upsilon$, where $\Upsilon$ is taken to be fixed at 1 (open circles) or 2 (filled circles). For the latter, we also show error bars based on a possible factor of 2 systematic uncertainty in stellar models, based on the discussion in §\[sec:err\] for E galaxies. The dSph error bars are from Figure \[fig:massplot\], based on the observational error bars in $r_{1/2}$ and $\sigma$. The grid of NFW halos in Figure \[fig:UCDdSphmassplots\] clearly shows that the implied dark matter densities for UCDs are most consistent with cluster-sized (or larger) dark matter halos ($M_{\rm vir} \gtrsim 10^{15}\,M_\odot$). Taken at face value, this is impossible, as there are not enough such halos where UCDs are found, and they would have clear kinematic effects on neighbors if UCDs had such massive halos. A few possibilities might explain these large virial masses. If UCDs do indeed have dark matter halos, baryonic contraction might boost their central densities (as described in §\[sec:halomatch\] for E galaxies). However, given the extreme stellar densities and small sizes of UCDs (and hence short dynamical times), it seems unlikely that any baryonic contraction would be adiabatic. Hence we cannot apply adiabatic contraction as we have used to correct masses for the Es. A baryonic contraction model appropriate for UCDs could be used in the same way, although we do not attempt such a correction here, as no appropriate model yet exists.
An alternative possibility is that the stellar population estimates are systematically in error. The error bars in Figure \[fig:UCDdSphmassplots\] imply that such errors could explain most (possibly all) UCDs as entirely stellar objects; this corresponds to those cases where the error bars are upper limits. Alternatively, they may have dark matter halos with much smaller virial masses, but without better stellar population models, there is no way to tell the difference. Thus, while UCDs are possibly consistent with lying inside dark matter halos, this would imply a dichotomy in galaxy formation and would require halos that cannot be explained with standard $\Lambda$CDM. We therefore favor the simplest view that they are purely stellar systems. [^1]: In principle, a radial $M_*/L$ shift could be resolved by replacing $L_{1/2}$ by $M_{*1/2}$ and defining the appropriate $r_{*1/2}$. However, the data quality is not sufficient to derive $M_*$ for our full sample, so we use $L$ here. [^2]: For $r_L$ and $r_M$ values somewhat different from the best-fit for this data set, the values of these slopes can be quite different, but mass-follows-light never holds for any reasonable fits. [^3]: See @mg10 for a discussion of how dSph scaling relations connect to spirals. [^4]: In principle, if galaxy luminosity had a secondary dependence on halo concentration, then the covariance could act to reduce the cosmic scatter from profile matching, but this seems tuned and unlikely.
Had to stop drinking coffee and soda because of a kidney infection, started a regular sleeping schedule, and my skin is clear.
Q: Does this Dirac Delta Exist in multivariate form? Is there a multi-variate equivalent to the Dirac delta? Like $\delta (X - A)$ that is infinity whenever $X=A$ and zero everywhere else, with $X$, $A$ positive-definite $n \times n$ matrices, so that I could integrate this over the Haar measure, say as in \begin{eqnarray} \int_{X>0} \delta (X-A)=I_n \end{eqnarray} ? Thanks! A: Yes, you can use the identity \begin{eqnarray} \delta(X-A) = \int_{-\infty}^\infty \frac{e^{ik(X-A)}}{2 \pi}\,dk. \end{eqnarray} Then you can compute the matrix exponential.
Sarah Solemani Sarah Solemani (born 4 September 1982) is an English actress, writer and activist, best known for starring in the BAFTA-winning sitcom Him & Her, playing Renee Zellweger's best friend 'Miranda' in Working Title's Bridget Jones's Baby, for which she was nominated for an Evening Standard Best Actress Award, and for her role as Rosie Gulliver in Bad Education. Early life Solemani was born in the London Borough of Camden and grew up in Crouch End. Her father is a Persian Jewish mathematics lecturer (now retired). Her mother, who died of cancer when Sarah was 16, was a sociology teacher of Northern Irish descent. After passing her A levels at the Henrietta Barnett School, she took a gap year before reading Social and Political Sciences (now the Human, Social and Political Sciences Tripos) at New Hall, Cambridge and graduating with an MA (Hons). At Cambridge, she joined the Cambridge University Footlights Dramatic Club and became social secretary during her first year and then vice president. She went down in Footlights history for writing the biggest all-female comedy sketch ever performed on a Footlights stage, writing twenty-five female students into her sketch 'Brawl'. Career Theatre Solemani was a member of the National Youth Theatre during her gap year, starring as Elaine in the West End theatre production of The Graduate and as 'Ayesha' in the critically acclaimed National Theatre production of Sanctuary. Solemani was a member of the Young Writer's Group attached to the Royal Court Theatre, and a writer at the Young Vic Theatre. Two plays she wrote were produced at Soho Theatre. Another of her works, The Cost of Things (2010), was presented at the Public Theater New York under the aegis of the Old Vic Theatre as part of the TS Eliot Project. In 2011, she wrote The Baron which received the Old Vic New Voices 'Ignite' award.
In 2009, she appeared in Simon Stephens' Pornography at the Tricycle Theatre in London and in 2012, she appeared as Maryam in the play The House of Bernarda Alba at the Almeida Theatre. She wrote Up the Royal Borough, part of an evening of plays in response to Owen Jones' Chavs at the Lyric Hammersmith. It gained good reviews. Television and film Solemani's first film role was as a tableaux girl in Mrs Henderson Presents, which she performed during her third year of college. Her first major TV role was as "Becky" in BBC Three sitcom Him & Her, which was first broadcast in November 2010, and ran for four series ending in 2013. In 2012, Solemani starred in the BBC Three comedy Bad Education, including its spin-off movie The Bad Education Movie. In 2013, she featured in the BBC and Hulu's The Wrong Mans alongside James Corden. She went on to reprise the role in the second series. Solemani wrote and starred in an episode titled Aphrodite Fry in the Sky TV series Love Matters that aired in 2013. In 2014, she wrote the television film The Secrets on BBC One at 9pm to critical praise. In Hollywood, Solemani was chosen by Bill Hader and Alec Berg to be part of their writing team on Hader's new HBO show Barry. While working in the United States, she has found the American television industry has a more positive attitude towards commissioning work by women and featuring female characters in its series. Print Solemani has contributed to the New Statesman, The Guardian, The Independent and Harper's Bazaar. She writes regularly for the publications Red and Glamour. Awards and acclaim Solemani was awarded third place in the Barry Amiel and Norman Melburn Trust/New Statesman Prize for New Political Writing on the subject: "Do women's rights remain the privilege of the developed world?" in 2005. In 2011, Solemani won the Royal Television Society award for best Comedy Performance for her role in Him & Her along with her co-star Russell Tovey.
In 2012, Solemani was named one of the year's Broadcast Hot Shots. Activist Solemani is against the criminalisation of sex work, and has been a champion for sex worker rights since 2002. She was nominated by the English Collective of Prostitutes (ECP) to represent them in Parliament in order to halt further efforts to criminalise clients. She was an active supporter of former shadow Home Secretary Yvette Cooper in the 2015 Labour Leadership contest. She has introduced Cooper at various Labour Party events and has contributed to her speeches. Personal life Solemani married Daniel E. Ingram, a sustainable investment expert specialising in climate change, in Petah Tikva, Israel, on 3 June 2012. Their daughter was born in December 2013 and their son was born in May 2018. The couple revealed their daughter's godparents at a ceremony in the New London Synagogue. Filmography Film and television Stage References External links Category:21st-century English actresses Category:1982 births Category:Alumni of New Hall, Cambridge Category:English film actresses Category:English people of Iranian-Jewish descent Category:English people of Northern Ireland descent Category:English television actresses Category:Jewish comedians Category:Jewish English actresses Category:Living people Category:National Youth Theatre members
Other Executive Jobs

Director of People and Organizational Culture
Description: This is an opportunity for a highly skilled and experienced leader to help shape the next phase of impact of a dynamic organization whose budget and staff have more than doubled in recent years, and whose (more...)
Company: Non Profit Organization
Location: Oakland
Posted on: 01/22/2019

DISTRICT MANAGER - FACILITIES SERVICES (ENGINEERING) - San Jose, CA
Description: Responsible for the ABM operations and service delivery of an assigned area within a designated geographic territory. Leads, manages and oversees the managers, supervisors and staff. (more...)
Company: ABM
Location: San Jose
Posted on: 01/22/2019

Director of Marketing
Description: SRA OSS is looking for a smart and articulate Brand Strategist to join our team. Reporting to the CEO, this individual will have responsibility for the overall branding of the company. This individual (more...)
Company: SRA OSS
Location: San Jose
Posted on: 01/22/2019

Restaurant Manager
Description: At Red Robin our Assistant Managers support and ensure that the restaurant operates within Red Robin International guidelines, while meeting/exceeding sales and profitability objectives established during (more...)
Company: Red Robin
Location: Lincoln
Posted on: 01/22/2019
Inflammation and inflammatory processes play a major role in the pathophysiology of numerous diseases and conditions. Conditions of the brain in which increased levels of inflammation mediators were found include severe traumatic brain injury, relapsing-remitting multiple sclerosis, cerebral artery occlusion, ischemia, and stroke. Conditions of the heart in which mediators such as the selectins are suggested to play a role include acute myocardial infarct, arterial injury, such as produced by angioplasty, and ischemia. Similarly, selectins are involved in conditions of the kidneys, such as renal injury from ischemia and reperfusion, and renal failure. Furthermore, selectins appear to play a role in organ transplant rejection, cold ischemia, hemorrhagic shock, septic shock, tumour metastasis, chronic inflammation, rheumatoid arthritis, inflammatory bowel disease, atherosclerosis, restenosis, angiogenesis, disseminated intravascular coagulation, adult respiratory stress syndrome, and circulatory shock. Cell surface adhesion molecules have become recognised as key mediators in numerous cellular processes including cell growth, differentiation, immune cell transmigration and response, and cancer metastasis. Four major categories of adhesion molecules have been identified: the immunoglobulin superfamily cell adhesion molecules (CAMs), cadherins, integrins, and selectins. The selectins represent a family of presently three transmembraneous, carbohydrate-binding glycoproteins: “endothelial” E-selectin, “leukocyte” L-selectin, and “platelet” P-selectin. All three selectins are divalent cation (e.g. calcium) dependent and possess an extracellular domain with a carbohydrate recognition motif, an epidermal growth factor-like motif, and some smaller domains related to complement-regulatory proteins. Human P-selectin (also referred to as GMP-140, LECAM-3, PADGEM, CD62, CD62P) is expressed by platelets and endothelial cells.
When expressed on these cell surfaces, its most notable effect is the slowing of leukocytes as these leave the capillaries and enter the postcapillary venules, the latter representing the major site of leukocyte-endothelium adhesion. The slowing process is observed as leukocyte rolling, signifying an initial adhesion with relatively low affinity. The firm adhesion of rolling leukocytes is primarily mediated by integrins. In endothelial cells, P-selectin is stored in Weibel-Palade bodies; in platelets, it is found in the α-granules. Following activation, P-selectin is mobilised to the cell surfaces within a few minutes in response to a variety of inflammatory or thrombogenic agents. The endothelial P-selectin's primary function is to recruit leukocytes into postcapillary venules, while platelet P-selectin also results in the formation of thrombi. One of the presently known natural ligands of P-selectin is PSGL-1 (P-selectin glycoprotein ligand-1), a 160 kDa sialoprotein expressed on the surface of leukocytes where it is concentrated at the uropod. More detailed descriptions of the structure and functions of P-selectin are found in numerous publications, such as J. Panes, Pathophysiology 5: 271 (1999); F. Chamoun et al., Frontiers in Bioscience 5: e103 (Nov. 1, 2000); S.-I. Hayashi, Circulation 102: 1710 (2000). P-selectin also appears to be involved more directly in platelet aggregation, as was shown recently by studies of the Ca-independent interactions of P-selectin with 3-sulfated galactosyl ceramide (also referred to as sulfatides). This interaction probably takes place at a different binding site of P-selectin, as the binding can be inhibited by the antibody WASP12.2, but not by AK4, whereas the binding of the natural P-selectin ligand PSGL-1, which is involved in leukocyte adhesion, is blocked by both WASP12.2 and AK4. However, it appears that the binding sites are overlapping. It is assumed that sulfatide interactions stabilise platelet aggregates.
On the one hand, it would seem feasible to improve these and other conditions involving the activation of endothelial cells and leukocytes, and specifically the mobilisation and expression of P-selectin by specifically interrupting the P-selectin cascades. This can be done, for instance, by the administration of ligands which selectively bind to human P-selectin, but which do not possess its bioactivity. By this method, mobilised P-selectin could be inactivated and leukocyte-induced tissue damage prevented. Potentially, the same effect could be achieved by gene therapy, provided the P-selectin ligand or antagonist is a peptide or modified peptide. According to this method, somatic cells of a person in need of the therapy would be transfected with an expression vector carrying a DNA sequence encoding a P-selectin antagonist. On the other hand, P-selectin-related diseases and conditions may also be treated or prevented by drugs which do not directly interact with P-selectin, but which suppress some of the detrimental effects of P-selectin activation in the respective cells and tissues. Among the drug substances potentially useful for therapeutic intervention are anti-inflammatory agents such as glucocorticoids. One of the major drawbacks of any systemic therapy with highly active compounds is their distribution within the organism and the exposure of unaffected cells and tissues, potentially leading to substantial side effects. It would be most desirable to have methods and drug delivery systems available which allow the targeted delivery of active agents specifically to affected cells, without substantially exposing unaffected cells. While there is no pharmaceutical product comprising a cell-specifically targeted drug delivery system available on the market today, a number of experimental delivery systems have been described in the scientific and patent literature. 
Drug targeting may be based on conjugates of active principles with target-recognising ligands, such conjugates representing molecular drug delivery systems. A general disadvantage of such conjugates is the low ratio of drug substance per ligand (often only 1:1), resulting in the exposure to high levels of ligands. As an example, Everts et al. (J. Immunol. 168: 883 (2002)) report the selective intracellular delivery of dexamethasone into activated endothelial cells using an E-selectin-directed immunoconjugate. Dexamethasone was covalently attached to an anti-E-selectin Ab, resulting in the so-called dexamethasone-anti-E-selectin conjugate. Binding of the conjugate to E-selectin was studied using surface plasmon resonance and immunohistochemistry. Furthermore, internalisation of the conjugate was studied using confocal laser scanning microscopy and immuno-transmission electron microscopy. It was demonstrated that the dexamethasone-anti-E-selectin conjugate, like the unmodified anti-E-selectin Ab, selectively bound to TNF-alpha-stimulated endothelial cells and not to resting endothelial cells. After binding, the conjugate was internalised and routed to multivesicular bodies, which is a lysosome-related cellular compartment. After intracellular degradation, pharmacologically active dexamethasone was released, as shown in endothelial cells that were transfected with a glucocorticoid-responsive reporter gene. Furthermore, intracellularly delivered dexamethasone was able to down-regulate the proinflammatory gene IL-8. Alternatively, carrier-based drug delivery systems may be rendered target-specific by attaching appropriate target-recognising ligands to their surface. For instance, this approach has been employed using liposomes as carriers. Some of the recent developments based on this approach have been reviewed by Maruyama (Biosci. Rep. 22: 251 (2002)). Methods for E-selectin-targeted drug delivery have been investigated by Spragg et al. (Proc. Nat.
Acad. Sci USA 94: 8795 (1997)). According to this document, E-selectin was selected as a molecular target for endothelial-selective delivery of therapeutic drugs or genes for treating various disease states. Liposomes of various types (classical, sterically stabilised, cationic, pH-sensitive), each conjugated with mAb H18/7, a murine monoclonal antibody that recognises the extracellular domain of E-selectin, bound selectively and specifically to IL-1 beta-activated HUVEC at levels up to 275-fold higher than to unactivated HUVEC. E-selectin-targeted immunoliposomes appeared in acidic, perinuclear vesicles 2-4 hr after binding to the cell surface, consistent with internalisation via the endosome/lysosome pathway. Activated HUVEC incubated with E-selectin-targeted immunoliposomes, loaded with the cytotoxic agent doxorubicin, exhibited significantly decreased cell survival, whereas unactivated HUVEC were unaffected by such treatment. On the other hand, there is some evidence that P-selectin may also be at least as appropriate a molecular target for activated endothelial cells involved in inflammatory processes, as was described above. Therefore, there is a need for drug delivery systems which are specifically targeted to this member of the selectin family, and thereby to cells and tissues showing (increased) P-selectin expression or presentation. The majority of P-selectin binding compounds known today are carbohydrates, based on sialyl Lewis X (sLeX), a tetrasaccharide and natural ligand for the selectins. However, these mimics have the disadvantage of displaying low affinity (micromolar to millimolar range) and low specificity, as they tend to bind to other members of the selectin family with approximately the same affinity as they have for P-selectin. Therefore, there also is a need for such P-selectin-directed, targeted drug delivery systems which have a high affinity and specificity for the target molecule.
Welsh Government: Plans to replace Landfill Tax in Wales The Minister for Finance and Government Business recently launched a 12-week consultation on proposals for Landfill Disposals Tax which will replace Landfill Tax in Wales from April 2018. 27th March 2015 Many of you will be affected by these proposals, whether as waste producers or waste managers or as a community that has benefited through the Landfill Community Fund. It is therefore important that the Welsh Government hears your views on these proposals, including the importance of maintaining consistency with England and Scotland as well as where changes might be made to ensure that the replacement tax meets Welsh needs and circumstances. A number of events will be held across Wales during the consultation period which you are invited to attend. Events will be held at:

Cardiff, 23 April
Llandudno, 29 April
Carmarthen, 6 May

If you would like to attend any of the above or be included on the Welsh Government's mailing list and hear how you can share your views in other ways, please contact the Welsh Government directly through the links below. For further information, please e-mail the Financial Reform Mailbox or, to download the consultation, visit the Welsh Government website.
Q: Measuring distance with iPhone camera How to implement a way to measure distances in real time (video camera?) on the iPhone, like this app that uses a card to compare the size of the card with the actual distance? Are there any other ways to measure distances? Or how to go about doing this using the card method? What framework should I use? A: Well you do have something for reference, hence the use of the card. That said, after watching the video for the app, it doesn't seem too user-friendly. So you either need a reference object that has some known size, or you need to deduce the size from the image. One idea I just had that might help you do it is to use the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some stuff). Here's what I think. When the user wants to measure something, he takes a picture of it, but you're actually taking two separate images, one with flash on, one with flash off. Then you can analyze the lighting differences in the image and the flash reflection to determine the scale of the image. This will only work for close and not too shiny objects I guess. But that's about the only other way I thought of for deducing scale from an image without any fixed objects. A: I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work! An alternative approach would be to use the auto-focus feature of the camera. Point-and-shoot cameras often have a laser range finder that they use to auto-focus. iPhone doesn't have this and the f-stop is fixed. However, users can change the focus by tapping the camera screen. The phone can also switch between regular and macro focus. If the API exposes the current focus settings, maybe there's a way to use this to determine range?
Hyperprolactinaemic amenorrhoea in Hong Kong. In a retrospective study of 595 patients attending the Menstrual Disorder Clinic from January, 1978 to December, 1981, 92 patients (15.5%) had raised serum prolactin (PRL) levels (greater than 25 ng/ml) on 2 or more separate occasions with a mean (+/- S.E.M.) value of 67.1 +/- 2.5 ng/ml. Galactorrhoea was found in 27.2% of the hyperprolactinaemic patients. Primary amenorrhoea was observed in 1 patient (1.1%) with a serum PRL level of 68 ng/ml. Secondary amenorrhoea of longer than 6 months' duration occurred in 61 patients (66.3%) with mean PRL level 84.2 +/- 3.3 ng/ml. The 30 patients (32.6%) with irregular menstruation had a mean PRL level of 47.2 +/- 3.3 ng/ml. Investigations revealed that 43 patients (46.7%) had idiopathic hyperprolactinaemia, 14 patients (15.4%) had drug-induced hyperprolactinaemia and 1 patient (1.1%) had hypothyroidism; 18 patients (19.5%) had suspected pituitary microadenoma and 16 patients (17.2%) had abnormal radiographic findings. Bromocriptine treatment was given to 38 patients: 13 with abnormal tomographic findings (mean serum PRL greater than 100 ng/ml); 18 with suspected pituitary microadenoma (mean serum PRL 94 +/- 2.7 ng/ml); and 7 with idiopathic hyperprolactinaemia (mean serum PRL 65 +/- 4.7 ng/ml). All patients (38/38) responded to treatment with restoration of menstruation and cessation of galactorrhoea within 1 to 3 months. Mean PRL level was 21.6 +/- 5.2 ng/ml at the time of response. Thirteen patients subsequently became pregnant and all delivered healthy babies.
(Beijing) – Baidu Inc. says it will start giving the public the chance to ride in driverless cars within three years and begin mass production of the autos within five years, after a recent test in the capital proved successful. The Internet search company made the announcement on December 14 as it unveiled plans for developing self-driving vehicles, which include launching a dedicated business unit. Lin Yuanqing, who was named deputy general manager of that unit, said Baidu will start running driverless cars on fixed routes in several big cities so the public can get familiar with them. The autos would be advanced enough to handle roads in rain and snow, he said. Baidu recently announced that a test of a self-driving vehicle was a success. It said that on December 8 a driverless car carrying one passenger traveled the 30-kilometer trip from its headquarters in the Zhongguancun tech area to the Olympic Forest Park on major roads. Baidu launched its driverless car project, which it says uses its map, data processing and artificial intelligence technologies, in 2013. The search engine company also provides Net users with desktop and mobile application map services. Baidu's self-driving car project is similar to that of the U.S. search giant Google Inc., which entered the field in 2009. Google has run dozens of tests on U.S. roads in recent years. Both companies are cooperating with automakers in their projects. Google is working with Toyota, Audi and Lexus, and Baidu is cooperating with BMW, whose 3-Series model was used in the test on Beijing's roads. Other automakers, such as Tesla Motors, also have plans to develop self-driving cars and autos linked to the Net. Those plans are slightly different than the efforts of Baidu and Google in that the automakers are adding functions such as computer-assisted driving to their models to see how consumers respond.
Companies may have a ways to go before the public embraces the new form of transportation, if the results of a survey this year by Boston Consulting Group and the World Economic Forum are any indication. Only 16 percent of 5,500 respondents from 10 countries said they trusted driverless cars. Li Deyi, a member of the Chinese Academy of Engineering and head of the Chinese Association for Artificial Intelligence, said that high costs are another barrier for consumers. The laser radar, or lidar, technology that Baidu uses to help its driverless car avoid obstacles costs hundreds of thousands of yuan, which is more than the price of some cars, he said. "For 200,000 I would buy a car and hire a driver," he said. (Rewritten by Guo Kai)
Q: Subclassing method with ellipsis array argument? I would like to subclass an object that has the ellipsis syntax in the init header. i.e. -(void) initObjectWith:(NSString*)argument arguments:(NSString*)someArgument,...; I'm unsure how to pass along the arguments array in this case. I suspect it would be something like: - (void) initObjectWithCustomInitializer:(NSString*)argument additionalArgument:(NSString*)additionalArgument argument:(NSString*) someArgument,... { self = [super initObjectWith:argument arguments:someArgument,...]; if (self) { //custom init code here } return self } This compiles but the nil-terminated 'arguments' array is only getting the first argument. How do I pass along the objects of a nil-terminated array? A: The superclass that declares that variadic initializer should also declare a non-variadic one that takes a va_list (analogous to how printf has vprintf, for example). Assuming that case, where the superclass has both: -(void)init:(id)a arguments:(id)b, ...; and -(void)init:(id)a arguments:(id)b variadicArgs:(va_list)args; You would do something like: - (void)myInit:(id)a newArg:(id)c arguments:(id)b, ... { va_list v; va_start(v, b); self = [super init:a arguments:b variadicArgs:v]; if (self) { //custom init code here } va_end(v); return self; } Of course, you should be sure to have a non-variadic version of your new initializer, too! A: Due to the way varargs are actually implemented, and the limitations of C language, it isn't possible to pass ... args down the callchain without a va_list-taking function to call unless you: Use assembly language appropriate to every platform on which your code may run Know intimate details of how the compiler implements va_list et al., or Try to write a function that somehow computes every possible combination of argument types and passes them along manually. Of these options, (3) is obviously impractical in any realistic case, and (2) is subject to change without notice at any time. 
That leaves us with (1), assembly language for each platform on which your code runs. Internally, varargs are implemented in an ABI-specific manner for each architecture. Conceptually, ... says "I'm going to pass all the arguments I want as if I were calling a function that took those arguments, and it's up to you to figure out where to pick up each argument from."

Let's take the example of an architecture that passes all its arguments on the stack, such as i386 on OS X and the iOS Simulator. Given the following function prototype and call:

    void f(const char * const format, ...);
    /* ... */
    f("lUf", 0L, 1ULL, 1.0);

The compiler will generate the following assembly (as written by me; a real compiler will probably produce a somewhat different calling sequence with the same effect):

    leal  L_str, %eax
    pushl %eax
    movl  $0x3f800000, %eax
    pushl %eax
    movl  $0x00000000, %eax
    pushl %eax
    movl  $0x00000001, %eax
    pushl %eax
    movl  $0x00000000, %eax
    pushl %eax
    call  _f

The effect of this is to push each parameter onto the stack in reverse order. Here's the secret trick: the compiler would have done the same thing if f() had been declared like this:

    void f(const char * const format, long arg1, unsigned long long arg2, float arg3);

This means that if your function can copy the parameter area of the stack and call the vararg-taking function, the args will simply pass through. Problem: there's no generic way to figure out how big this parameter area is! On i386, in a function that has a frame pointer that is also called from a function that has a frame pointer, you can cheat and copy rbp - *rbp bytes, but that's inefficient and won't work for all cases (especially functions that take struct parameters or return structs).
Then you have architectures like armv6 and armv7, where most parameters are passed in registers which must be carefully preserved; x86_64, where parameters are passed in registers and an xmm register count is passed in %al; and ppc, where stack locations and registers are both mapped to parameters! The only bulletproof way to forward arguments without using a va_list is to reimplement the entire architecture ABI logic in your code using assembly for each architecture, the same way the compiler does. This is also essentially the same problem that objc_msgSend() solves.

"So wait!" you now say. "Why can't I just call objc_msgSend instead of messing around with assembly this way?!" Answer: Because you have no way to tell the compiler, "don't mangle anything on the stack and don't wipe out any registers you don't see me using". You would still have to write an assembly routine that forwarded the call to the superclass implementation - before doing any work whatsoever in your subclass implementation - and then returned to yours, all while minding the same things objc_msgSend() does, such as the need for _stret and _fpret variants and implementations on at least three architectures (armv7, i386, x86_64 - and depending upon your need for backwards and forwards compatibility, also potentially ppc, ppc64, armv6, and armv7s).

For plain varargs, the compiler is using its intimate knowledge of your calls and the calling conventions of the target(s) to do this work behind the scenes when it creates a va_list. C doesn't give direct access to any of this information. And objc_msgSend() is the Objective-C compiler and runtime redoing it all again so you can write method calls without using va_list all the time. (Also, on some architectures, it's more efficient to be able to pass parameters to a known calling list than to use varargs conventions.) So, unfortunately, you can't do it without putting hugely more work into the effort than it's likely to be worth.
Class implementors, let this be a lesson to you - whenever you provide a method that takes variadic arguments, also provide a version of the same method that accepts a va_list in lieu of .... NSString is a great example, with initWithFormat: and initWithFormat:arguments:.
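The printf/vprintf split recommended above looks like this in plain C; the function and variable names here are illustrative, not from any real API:

```c
#include <stdarg.h>
#include <stdio.h>

/* Illustrative variadic wrapper: it captures the trailing arguments in a
 * va_list and hands them to the va_list-taking twin (vsnprintf) - the
 * same split a class should offer alongside any variadic method. */
static int format_line(char *buf, size_t cap, const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);                    /* start after the last named arg */
    int n = vsnprintf(buf, cap, fmt, args); /* forward the whole list at once */
    va_end(args);
    return n;                               /* chars that would be written */
}
```

Any number of such wrappers can be layered, as long as each layer passes the va_list - never the raw ... - down to the next.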
Pressure effects on the interactions of the sarcoplasmic reticulum calcium transport enzyme with calcium and para-nitrophenyl phosphate. The effect of hydrostatic pressure on calcium dependent p-nitrophenyl phosphate hydrolysis of the sarcoplasmic reticulum calcium transport enzyme has been investigated at different degrees of enzyme saturation by calcium and Mg-p-nitrophenyl phosphate to distinguish between activation and binding volumes. The enzyme saturated by both ligands displays a significant dependence of the activation volume on pressure, rising from 20 ml/mol at atmospheric pressure (0.1 MPa) to 80 ml/mol at 100 MPa. At subsaturating concentration of Mg-p-nitrophenyl phosphate an activation volume of 35 ml/mol prevails between 0.1 and 40 MPa. At subsaturating concentration of calcium the activation volume approximates 80 ml/mol in the same pressure range. The binding volume for both substrates is likewise pressure dependent, falling from 20 ml/mol to 0 ml/mol for Mg-p-nitrophenyl phosphate and rising from 67 ml/mol to 155 ml/mol for calcium. The pressure dependence of activation and binding volumes is analysed on account of a simplified reaction scheme yielding activation volumes and rate constants for individual reaction steps.
BlackBerry Bold Touch previewed in leaked tutorials: prepare to pinch-to-zoom - evo_9 http://www.engadget.com/2011/04/04/blackberry-bold-touch-previewed-in-leaked-tutorials-prepare-to/ ====== jrsmith1279 Anyone else feel like RIM is grasping at straws here? As an ex-blackberry user I feel like the more that RIM tries to fit in with the "cool kids" the worse their products get quality-wise. That, coupled with the fact that using corporate email on one of their devices is so much more difficult than using an iPhone or Android with activesync, is going to be what kills RIM. Unfortunately it seems like they're either oblivious, or that they're too arrogant to care.
Background
==========

Busy clinicians must choose carefully what reports of medical research findings to read, and in searching the literature should choose publications of relevance to practice with the methodological rigour capable of changing practice \[[@B1]\]. Physicians\' perception of research quality \[[@B2]\] and generalisability \[[@B3]-[@B5]\] are likely to affect their willingness to change practice in response to research findings. Little research has been done to determine if the site of research or publication affects the likely impact of research findings on clinical practice. The need to look at this issue is even greater in developing countries where resources to do research are limited \[[@B5],[@B6]\] and physicians have a more acute need to rely on research done in other countries to guide their clinical decisions.

The purpose of this study is to determine the likely impact of research location and journal location on physicians\' practice in developing countries. More specifically, the objectives are to answer the following questions: how likely is research published in journals from different regions of the world, including their own, to effect change in physicians\' clinical practice? and, how likely is research done in different regions of the world, including their own, to effect change in physicians\' clinical practice? In order to determine how perception of research quality affects the latter answer, we repeated the latter question with the proviso that research quality is the same in all regions. In addition, we also seek to identify the factors that are likely to explain variation in responses.

Methods
=======

The International Clinical Epidemiology Network (INCLEN) is a network dedicated to improving the quality of health research in the Developing World through institutional capacity building for evidence based medicine \[[@B7],[@B8]\].
Clinicians who are members of the International Clinical Epidemiology Network (INCLEN) were invited to participate, and six centres agreed: Shanghai and Chengdu in China, Bangkok in Thailand, Nagpur in India, Ismalia in Egypt and Nairobi in Kenya. The study was coordinated in the Centre for Clinical Epidemiology and Biostatistics in Newcastle, Australia. Each centre was asked to identify hospitals and physicians within those hospitals who would be expected to treat patients with pneumonia, in a way that would represent the generality of tertiary and secondary hospital settings in their region. Physicians within the hospitals were chosen from those working in Internal Medicine either at random or to represent a spread of academic/non-academic and seniority levels. A questionnaire was given to each consenting doctor. The sampling procedure varied between centres due to local circumstances. In Bangkok, Chengdu, Shanghai and Nagpur a sample of tertiary care (3 in Bangkok and Nagpur, 5 in Chengdu and Shanghai) and secondary care (4 in Bangkok and 5 in Chengdu, Shanghai and Nagpur) hospitals were selected either at random or to cover the spread of teaching/non-teaching, geography and hospital size. In Ismailia, a random sample was taken from a list of doctors working in all three city hospitals. In Nairobi, a list of all the physicians in the country rather than hospitals was the sampling frame. The questionnaire asked respondents how they rate research in journals from North America, Europe, their region and their country with regard to the likelihood of influencing their clinical practice. They were also asked the same question, but with reference to clinical research from the same regions. They were then asked to re-answer the latter question with the assumption that research quality is the same in all regions. Answers were given on a scale of 1 to 5, (very unlikely, unlikely, neutral, likely, very likely) to influence clinical practice. 
The study formed part of a larger study which examined variations in stated clinical practice based on a case scenario of a patient with pneumonia \[[@B9]\]. One reminder was sent to non-respondents. The questionnaire was translated into the local language by respective investigators at local centres. Pretesting was done prior to the definitive study, where each investigator gave the questionnaire to a sample of physicians to assess comprehension and feasibility. On the basis of this pretest, the Thai sample excluded the question that assumed equal quality as it was found that this modification did not change the physicians\' perceptions in that centre. Questionnaires or computer discs with coded data were sent to the coordinating centre in Newcastle, where analyses were performed.

Statistical analysis
--------------------

The data were tabulated and proportions calculated. The differences in impact score (the 5 point Likert scale that assessed likely influence) between current research and research assuming equal quality were assessed using the Wilcoxon ranked sum test. The p values here were adjusted after calculating the relevant design effect induced by the clustered nature of the data. The proportional odds model \[[@B10]\] for ordered categorical data was used to analyse the impact scores; the comparison is presented as a proportional odds ratio of more influence compared with the baseline category. The proportional odds assumption was checked \[[@B11]\] and where appropriate, generalised ordered logistic regression models were fitted instead \[[@B12]\]. The models were fitted to the data using the Huber estimator of variance \[[@B13],[@B14]\]. The models thus took account of the fact that individuals were clustered within hospitals which were stratified by the centres within which the samples were taken. Statistical significance was determined by p values after calculating Wald and F ratio statistics.
In some cases, the 5-point scale was collapsed into a 3- or 4-point outcome to reduce problems caused by zero cells. The variables investigated include sex, number of years since graduation, physician specialty, access to a medical library, rural versus urban/suburban location and country of practice. Variables remained in the model if the relevant p value was less than 0.10. Graphs are presented to show the difference between perceived influence of respective research/journals in comparison with local research/journals as the reference. If the difference was -2 or less, then the graph reported \"prefer local\"; if the difference was -1, 0, or 1, then the graph reported \"little difference\"; and if the difference was greater than or equal to 2, the graph reported \"prefer other\". The statistical program Stata release 5.0 \[[@B15]\] was used for all the analyses. All p values are two sided.

Results
=======

Response rates were high, with one exception. The Chinese and Indian samples had response rates of 100%. The response rates in the Egyptian, Thai and Kenyan samples were 91%, 80% and 48% respectively. Table [1](#T1){ref-type="table"} shows the demographic and practice features of the physicians in the sample.

###### Demographic and practice features of the physicians included in the study

                                                                                SPECIALTY (%)
  ---------- ---- ---- ---- --------- ------------- -------------- --------- --------- --------- ---------
  Chengdu    10   50   25   26 (52)   40 (23--60)   15 (1--37)     14 (28)   36 (72)   0 (0)     49 (98)
  Shanghai   10   50   25   28 (56)   40 (22--64)   14.5 (1--40)   1 (2)     46 (92)   3 (6)     44 (88)
  Bangkok    7    40   25   10 (25)   28 (24--49)   4.5 (1--24)    25 (63)   15 (38)   0 (0)     37 (93)
  Nagpur     8    28   18   11 (39)   38 (26--51)   15 (2--30)     17 (61)   4 (14)    5 (18)    24 (86)
  Ismalia    3    20   7    1 (5)     28 (26--47)   4 (3--23)      3 (16)    16 (84)   0 (0)     5 (75)
  Nairobi    \*   40   \*   10 (25)   40 (34--51)   14 (10--36)    11 (28)   17 (43)   12 (30)   30 (78)

\* The physicians in Kenya were not selected by hospital.
Factors affecting physician journal preferences
-----------------------------------------------

Table [2](#T2){ref-type="table"} and Figure [1](#F1){ref-type="fig"} show the distribution of preferences for publications in journals from different regions. In general, North American journal articles were ranked fairly highly in ability to influence clinical practice with some variation from country to country. Ordinal logistic regression revealed that country was the only factor that statistically significantly affected physicians\' impression of the likely effect of studies published in North American journals on their practice: physicians from Kenya and Egypt reported that these publications were most likely to influence a change in practice and Thai doctors were the least likely to be so influenced (F~3,31~= 6.48, p = 0.0007).

![JOURNALS: Preference for journals published in other regions (US, Europe or Regional) compared to local journals](1471-2458-3-6-1){#F1}

###### The likelihood of journals published/research done in various regions to affect physicians\' clinical practice according to country of practice. Figures are the number (%) of physicians choosing the highest two of five influence categories.
                                       Number (%) of physicians likely to be influenced
  ------------------- ---------------- ------------ ----------
  Country             Origin           Journals     Research
  China (n = 100)     North American   61 (61)      55 (55)
                      European         19 (19)      20 (20)
                      Regional         22 (22)      14 (14)
                      Local            96 (96)      94 (94)
  Thailand (n = 40)   North American   19 (48)      19 (48)
                      European         8 (20)       15 (38)
                      Regional         7 (18)       12 (30)
                      Local            26 (65)      33 (83)
  India (n = 28)      North American   17 (61)      17 (61)
                      European         18 (64)      20 (71)
                      Regional         14 (50)      16 (57)
                      Local            26 (93)      27 (96)
  Egypt (n = 20)      North American   14 (70)      19 (95)
                      European         11 (55)      17 (85)
                      Regional         1 (5)        1 (5)
                      Local            2 (10)       7 (35)
  Kenya (n = 40)      North American   29 (73)      29 (73)
                      European         34 (85)      32 (80)
                      Regional         32 (80)      27 (68)
                      Local            36 (90)      30 (75)
  Overall (n = 228)   North American   140 (61)     139 (61)
                      European         90 (39)      104 (46)
                      Regional         76 (33)      70 (31)
                      Local            186 (82)     191 (84)

Egyptian, Indian and Kenyan physicians were more likely than Chinese doctors to be influenced by European journals \[odds ratios 6.5 (95% CI 3.1, 13.7); 6.7 (95% CI 2.7, 16.3); and 23.9 (95% CI 8.8, 65.0) respectively, F~3,31~= 12.0, p \< 0.0001\] while the Thai responses were not statistically significantly different from those of their Chinese counterparts. Physicians working in tertiary care hospitals were more likely to be influenced by European journals than those in secondary hospitals \[odds ratio 2.3, (95% CI 1.3, 4.0)\] and the other factors studied were not statistically significantly related to likely influence of European journals.

In general, the physicians studied were unlikely to change their practice on account of papers in regional medical journals (that is, those from regions surrounding the country of practice), although the Indian and Kenyan physicians were exceptions to this \[odds ratios 3.4 (95% CI 1.1, 10.5) and 10.3 (95% CI 2.7, 39.5) relative to Chinese physicians respectively\].
Subspecialist physicians were less likely to be influenced by regional journals relative to primary care and other doctors \[odds ratio 0.56 (95% CI 0.33, 0.94)\]. Physicians from Kenya, China, India and to a lesser extent Thailand were likely to be influenced to change clinical practice as a result of local publications. This however was not the case with the Egyptian physicians \[odds ratio 0.008 relative to Chinese physicians, (95% CI 0.0004, 0.11)\]. Physicians in urban and suburban centres were more likely to be influenced by local journals relative to those from rural centres \[odds ratio 2.2, (95% CI 0.99, 4.9)\]. Subspecialist physicians revealed less tendency to be influenced by local journals relative to primary care physicians \[odds ratio 0.33, (95% CI 0.13, 0.84)\].

Factors affecting likely influence of research done in different regions
------------------------------------------------------------------------

Table [2](#T2){ref-type="table"} and Figure [2](#F2){ref-type="fig"} show the distribution of preferences for research performed in different regions. Research done in North America ranked fairly highly in ability to change clinical practice with significant variation from country to country. The proportional odds assumption was not fulfilled and thus stratum specific estimates were obtained. Kenyan and Egyptian physicians were much more likely to be influenced by North American research than physicians in India, Thailand and China (χ^2^~12~= 35, p = 0.005). Those who had access to a medical library (compared to those without) \[χ^2^~3~= 14.9, p = 0.002\] and those working in tertiary hospitals (compared to those in secondary hospitals) \[χ^2^~3~= 10.4, p = 0.016\] were also more likely to be influenced. In addition, more experienced physicians were more likely to choose the highest two Likert categories of influence than those less experienced (χ^2^~3~= 13.6, p = 0.004).
![RESEARCH: Preference for research done in other regions (US, Europe or Regional) compared to local research](1471-2458-3-6-2){#F2}

Kenyan, Egyptian and Indian physicians appeared more likely to be influenced by European research than their Chinese and Thai colleagues (F~4,31~= 12.7, p \< 0.0001). Physicians working in tertiary hospitals were twice as likely as their colleagues in secondary hospitals to be influenced by European research \[odds ratio 2.0, (95% CI 1.1, 3.6)\]. The other factors studied were not statistically significantly associated with the likely influence of European research.

Regional medical research was more likely to influence physicians in India and Kenya compared to those in Egypt and in China (F~4,31~= 13.8, p \< 0.001). There was also a tendency for subspecialist physicians to be less readily influenced by regional research compared to physicians without subspecialty training (F~1,34~= 7.9, p = 0.008).

The proportional odds assumption was not fulfilled in the analysis of local research. Local research was judged as very likely to change clinical practice by most physicians studied except those from Egypt. Local research was most likely to change practice in physicians from China and India (χ^2^~8~= 42.5, p \< 0.001). Other factors positively associated with likely change of practice by local research in the highest Likert category were being a primary care physician (compared to those with subspecialty training and other doctors, χ^2^~4~= 21.9, p \< 0.001); access to a medical library (χ^2^~2~= 12.9, p = 0.002); and seniority as measured by years since medical school graduation (χ^2^~2~= 11.6, p = 0.003).
Difference in research influence scores if research quality was the same in all regions
---------------------------------------------------------------------------------------

Very little change was observed in the influence scores obtained for North American research if research quality became the same from region to region (Wilcoxon signed ranked sum test adjusted for clustering, z = -1.1, p = 0.28). The median change in influence score was 0 (interquartile range 0,0). This was different in Egypt and to a lesser extent in Kenya (χ^2^~6~= 23.9, p \< 0.001) where 50% and 30% of physicians decreased their scores respectively. The findings for European research were similar in that there was no statistically significant overall change in influence scores (Wilcoxon signed ranked sum test adjusted for clustering, z = -0.64, p = 0.53). The median change in influence score was 0 (interquartile range 0,0). Egyptian and Kenyan physicians reported reduced scores here relative to their Indian and Chinese colleagues (χ^2^~6~= 23.0, p \< 0.001).

With regard to regional research, there was a statistically significant increase in perceived influence reported by physicians if research quality should become the same in all regions (Wilcoxon signed ranked sum test adjusted for clustering, z = 4.6, p \< 0.0001). The median change in scores was 0 (interquartile range 0, +1). Although statistically significant, there was less country to country variation in this instance (χ^2^~6~= 19.0, p = 0.004). Physicians in the Kenyan sample were more likely than their colleagues in the other countries studied to have an increase in influence scores. Subspecialist physicians were more likely to increase their influence score relative to primary care physicians and other doctors \[χ^2^~2~= 16.6, p \< 0.001\]. There was also a statistically significant increase in influence scores for local research if research quality became the same.
(Wilcoxon signed ranked sum test adjusted for clustering, z = 2.15, p = 0.031). Seventeen percent of physicians showed an increase in their influence scores. There was also a significant country effect here (F~2,27~= 4.6, p = 0.01). The proportion of Egyptian and Kenyan physicians showing an increase in influence scores was 35% and 25% respectively. Subspecialist physicians and other doctors were more likely to increase scores relative to primary care physicians (F~2,27~= 6.2, p = 0.006). Physicians with access to a medical library were less likely to increase scores relative to those without (F~1,28~= 6.24, p = 0.019). None of the other factors studied had a statistically significant effect on change in the influence scores should research quality become the same.

Discussion
==========

With the exception of Egyptian physicians, research findings published in local journals are more likely to result in change in clinical practice relative to journals published in other regions, followed closely by research findings published in North American journals. This was demonstrated by the fact that more than 80% of the physicians in this study chose the highest two influence categories indicating a high willingness to change practice in response to findings in local journals. This contrasted with approximately 60% for North American journals and less for European and regional publications. The relative influence of research done in different regions is similar to the pattern for journal information. It is also clear that the physicians\' impressions of the difference in research quality in different regions affect the degree to which they are willing to change their practices. This was evident from the fact that there was a statistically significant increase in influence scores for local and regional research if research quality is considered the same. The changes are most evident among the Kenyan and Egyptian physicians.
The changes in impact scores for North American and European research are however not statistically significant. These contrasting findings suggest that developing world physicians think that the quality of medical research in North America and Europe is better than that in their own regions and countries. This study also demonstrated that there is significant variation between countries in the likely influence of journals from and research studies done in different regions. Most physicians are likely to be influenced by North American publications. There is more variation with regard to the likely effects of European publications on physician practice: physicians from Kenya, Egypt and India are more likely to be influenced by European research relative to those sampled in China and in Thailand. This may directly relate to the relative contact between the medical establishment in Europe and those in these countries. Europe has had a longer history of influence in the medical establishments in Kenya, India and in Egypt relative to those in China and in Thailand. Differences in language may also add to this influence. This study also reveals that physicians working in tertiary care hospitals are more likely to be influenced by North American and European publications than physicians from secondary care hospitals. This may relate to the greater exposure these physicians have to publications and research done in North America and Europe. There is also much variation with regard to the physicians\' impressions of the likely impact of regional research and publication on their practice. Kenyan and Indian physicians are more likely to be influenced by their regional publications and research than are physicians from the other countries studied. Egyptian physicians are especially unlikely to be influenced by their regional journals The design of this study involved random sampling of physicians after an initial random sampling of hospitals. 
This was however not carried out uniformly, and where random sampling was performed the method was left to the individual investigators. There is a strong possibility of selection bias being present in this study, thus limiting the interpretation of between-country differences in the results. It is unlikely, however, that any such selection would be related to the outcome factor examined (the relative importance of the source of the research or publication) and hence internal validity should not be compromised. In some centres all hospitals were used since there were only a few physicians located in each. In Kenya, a national sampling frame was used rather than identifying hospitals first, and the 48% response rate indicates uncertainty about the validity of the results. We report answers to a questionnaire rather than observations on practice, and have not established the validity of the stated responses. It is possible that \'national pride\' may explain the large difference seen between local and regional journals. The understanding of \'region\' may also be difficult, we gave examples in the question of East Africa, Asia and Latin America. In addition, it is possible that the influence and credibility of various information sources may be different for different clinical problems in different settings. The study did not differentiate between type of research study -- a randomized controlled trial would usually be more highly regarded than a descriptive study, wherever it was conducted or published. In order to allow for this issue, we asked the question about change in perceptions of the research if the quality were the same in all regions. This is the first study to assess the differences in likely impact of medical research and medical journals published in different parts of the world on physicians\' practices. 
The study was carried out in developing countries where few resources are available for doing local medical research and for guiding health policy \[[@B6]\], although the burden of disease is great \[[@B16]\]. Insufficient numbers of clinical trials are performed in sub-Saharan Africa despite the heavy disease burden \[[@B17]\]. Hepatitis B and C \[[@B18]\], the AIDS epidemic \[[@B19]\], the emergence of resistant strains of organisms to antibiotics \[[@B20]\], the need for culture specific and cost-effective methods for child care \[[@B19]\], and appropriate contraceptive methods \[[@B21]\] are only a few of the problems facing developing countries. Given these burdens and that so little financial resources are available for health, it is essential that doctors in developing countries use the most cost-effective methods of health management. Although the respondents to our survey reported high levels of access to medical libraries (Table [2](#T2){ref-type="table"}), and also reported high levels of access to \"up to date\" medical journals, we do not know which journals they are or if they were read. Unfortunately, even in the \'best\' settings worldwide, medical practice is not necessarily driven by peer-reviewed evidence. It is therefore important that we identify how physicians use evidence to guide their practice. This can in turn lead to appropriate education programs to guide developing world physicians on how to use evidence. Evidence-based practice needs to be taught to developing world medical practitioners \[[@B22]\]. Initiatives like the International Clinical Epidemiology Network which build research and education capacity in evidence based medicine \[[@B7],[@B8]\] should therefore be encouraged and supported. In addition, given that physicians are more likely to respond to local research than research from other countries, local researchers need to be given support to improve the quality and quantity of local research output. 
This obviously makes sense since local research is more likely to be directly applicable to the population involved \[[@B5]\]. However, it is neither sensible nor cost-effective to repeat every study in local settings. It is therefore important that the development of culturally sensitive evidence-based guidelines which guide physicians on how to use the results of research findings from settings other than their own be encouraged.

Conclusions
===========

Since local research and publications were considered most likely to change clinical practice, the conduct of high quality local research is likely to be an effective way of getting research findings into practice in developing countries. Local research should be encouraged through education and collaboration and supplemented by appropriate education programs to guide physicians on how to use evidence.

Competing interests
===================

None declared.

Authors\' contributions
=======================

The study was conceived by RFH and JP and designed by all authors in collaboration. Statistical analysis was performed by JP and LL. The paper was drafted by JP and RFH with contributions from LL and SK, and approved and corrected by all authors.

Pre-publication history
=======================

The pre-publication history for this paper can be accessed here: <http://www.biomedcentral.com/1471-2458/3/6/prepub>

Acknowledgments
===============

Funding was provided by the International Clinical Epidemiology Network (INCLEN).
Q: Is Symfony2 ready for production?

I am in the process of planning a custom web application which will be sold (not SaaS) and so will be required to be installed on different servers. Do you think it would be a bad/good idea to go with Symfony2 or Zend Framework? I have to choose one and can't go with any other framework as I only have extensive knowledge of both of these. Despite my experience with Symfony2, I would still appreciate another opinion.

My main concerns are ease of install on servers and source code protection. Sadly, it would seem ZF already has this going for it in that you don't need 5.3 like Symfony2, and we have Zend Guard. Any advice is welcome! I am looking to nurture and grow this app and I really want to be sure the first step is the right one.

A: The Symfony2 download page still says:

    Be warned that Symfony 2.0 is not stable yet; use it with caution (current version is Beta 1).

So I would wait just a bit for Symfony2. I'm not sure what you mean by source code protection, but there is no point in encoding any part of either of the two frameworks since they are both open source (and you should see if their licenses actually do permit that!).

Zend Framework 2 is still in the oven and by the looks of it, Symfony2 will be out of beta way before ZF2. If you can't wait, then use the one that you are most comfortable with. Otherwise, wait for Symfony2 to come out of beta and then wait a little bit more until they iron out its bugs.

Now, about bundling the framework in your application: you are probably going to need to write an installer of sorts. You could first look at the "sandbox" version of Symfony to see how they did that. It's basically an unzip-it-and-it-works kind of install. No need to set anything up. That could give you some pointers. Whatever you do, you'll need to write a minimum specs script that users can download and run to check whether their system has everything ready to run your app (check configs, php modules, etc.).
See SlideShowPro Director for an example of such scripts.

Subjective answer: I'd go with ZF because that's what I know better, but having said that, performance-wise I've had better results with Symfony. Apparently ZF2 will see huge speed improvements.
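As an aside on the minimum-specs idea above: the check itself is just a version comparison plus an extension lookup. Here is a minimal sketch (in Python purely for illustration; a real pre-flight script for a PHP app would normally be written in PHP itself, and the 5.3.0 threshold and extension names below are example values, not any framework's official requirements list):

```python
# Illustrative pre-flight check: compare a reported PHP version against a
# minimum, and verify that required extensions are present. The specific
# requirements (5.3.0, pdo/mbstring) are placeholders for this example.

REQUIRED_VERSION = (5, 3, 0)
REQUIRED_EXTENSIONS = {"pdo", "mbstring"}

def parse_version(version_string):
    """Turn a string like '5.2.17' into (5, 2, 17) so tuples compare numerically."""
    return tuple(int(part) for part in version_string.split(".")[:3])

def check_environment(reported_version, available_extensions):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    if parse_version(reported_version) < REQUIRED_VERSION:
        problems.append(f"PHP {reported_version} is older than required "
                        f"{'.'.join(map(str, REQUIRED_VERSION))}")
    for ext in sorted(REQUIRED_EXTENSIONS - set(available_extensions)):
        problems.append(f"missing PHP extension: {ext}")
    return problems

if __name__ == "__main__":
    print(check_environment("5.2.17", {"pdo"}))
```

The same shape works for config checks (writable directories, ini settings): collect every failure into one report rather than bailing on the first, so the user can fix everything in one pass.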
Jared Janes and Jason Snyder talk with Linda Ceriello and Greg Dember about metamodernism as a cultural sensibility, traditionalism, modernism, postmodernism, what led them to start What Is Metamodern?, metamodern methods that media commonly uses, the ways metamodernism is interpreted, Buffy the Vampire Slayer, The Listening Society, and much more in this episode of Both/And.

Support Both/And by becoming a patron and/or by subscribing and reviewing us on iTunes.

Jared Janes participates in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn commissions by linking to Amazon. In more human terms, this means that whenever you buy a book on Amazon from a link on here, a small percentage of its price is sent to us.
Introduction
============

Traumatic spondylolisthesis of the axis is considered one of the most frequent forms of upper cervical spine injury. The injury comprises 4 to 7% of all traumatic cervical spine fractures and is classified based on the system described by Effendi et al with modifications proposed by Levine and Edwards (L--E). [@JR170022-1] Recent studies suggest that, based on the L--E system, type I and type II traumatic spondylolisthesis of the axis have been satisfactorily treated, mostly using a conservative approach, which can result in progressively improved function and a low incidence of neurological deficits and complications. [@JR170022-2] [@JR170022-3] The majority of cases of traumatic spondylolisthesis of the axis can be treated nonoperatively with reduction and subsequent immobilization in a rigid cervical collar or halo device. For unstable fractures, including some L--E type II and most type IIa/III fractures, or when external brace immobilization is ineffective (for example, in cases of delayed unions or nonunions), surgical management is indicated and can be accomplished by using either anterior or posterior fusion techniques. [@JR170022-4] Although surgical techniques for traumatic spondylolisthesis of the axis have advanced, problems such as the risk of neurovascular injury and postoperative complications still remain. [@JR170022-5] [@JR170022-6] A low-intensity pulsed ultrasound (LIPUS) device has been developed for the acceleration of fracture healing. Recent studies have reported the benefits of LIPUS for fresh fractures as well as delayed unions and nonunions. [@JR170022-7] [@JR170022-8] These results suggest that LIPUS could be used as an alternative treatment to surgery. Herein, we report a unique case of delayed union of a traumatic spondylolisthesis of the axis that was successfully treated with halo immobilization and LIPUS.
To the best of our knowledge, this is the first report showing that LIPUS might be a feasible treatment for cervical spine fractures.

Case Report
===========

A 20-year-old woman presented with neck pain after cervical trauma sustained in a motor vehicle accident. Plain radiographs and computed tomography (CT) scans of the cervical vertebrae showed type I traumatic spondylolisthesis of the axis using the modified L--E classification. Conservative therapy with a rigid cervical collar was the immediate postinjury treatment used. Twelve weeks after the injury, the patient was referred to our orthopedic department because of nonunion and the development of angulation and displacement ( [Fig. 1](#FI170022-1){ref-type="fig"} ). An initial examination in our department did not reveal the presence of any neurologic deficit. The patient was placed in a neutral position with her head and neck in a halo vest. Immediately after halo immobilization, treatment with a LIPUS device (SAFHS 4000J; Teijin Pharma, Tokyo, Japan) was applied for 20 minutes once daily to the right and left fracture sites after marking the fracture position under fluoroscopic guidance ( [Fig. 2](#FI170022-2){ref-type="fig"} ). The LIPUS device had a frequency of 1.5 MHz, a signal burst width of 200 μs, a signal repetition frequency of 1 kHz, and an intensity of 30 mW/cm ^2^ . Radiographs and a CT scan showed improved healing of the fracture 3 and 10 weeks after the initiation of LIPUS ( [Fig. 3](#FI170022-3){ref-type="fig"} ). Eleven weeks after LIPUS was started, the halo vest was removed, and the patient had structurally and functionally recovered. The clinical follow-up at 12 months revealed no symptoms, such as neck pain and discomfort, that would suggest pseudarthrosis.
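As a consistency check, the pulse parameters reported above imply a duty cycle and a temporal-peak intensity that can be computed directly. The short calculation below is illustrative and not part of the original report; it assumes the 30 mW/cm ^2^ figure is a temporal-average intensity, which is the usual convention for LIPUS devices:

```python
# Illustrative check of the reported LIPUS pulse parameters.
# Assumption: the 30 mW/cm^2 intensity is a temporal-average value.

burst_width_s = 200e-6     # signal burst width: 200 microseconds
repetition_hz = 1000       # signal repetition frequency: 1 kHz
average_mw_per_cm2 = 30.0  # reported intensity, mW/cm^2

# Fraction of each second the transducer is actually emitting:
duty_cycle = burst_width_s * repetition_hz          # ~0.2, i.e., 20%

# Intensity during each burst implied by that duty cycle:
peak_mw_per_cm2 = average_mw_per_cm2 / duty_cycle   # ~150 mW/cm^2

print(f"duty cycle ~ {duty_cycle:.0%}, peak intensity ~ {peak_mw_per_cm2:.0f} mW/cm^2")
```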
Because the application of LIPUS for spinal fractures is considered to be an off-label use of this device, it was approved by the ethical committee of Yamanashi University before LIPUS treatment was started (approval number: 152).

![( **A** ) Cervical plain radiograph at the time of injury and ( **B** ) 3 months later. The white arrow denotes delayed union and the development of angulation and displacement.](10-1055-s-0037-1607425-i170022-1){#FI170022-1}

![Macroscopic picture showing low-intensity pulsed ultrasound being used as an adjuvant therapy after halo immobilization.](10-1055-s-0037-1607425-i170022-2){#FI170022-2}

![( **A** ) Radiographs with an arrow showing the C2 vertebra at admission, 3 weeks after initiation of low-intensity pulsed ultrasound (LIPUS), and 10 weeks after initiation of LIPUS. ( **B** ) Computed tomography (CT) scans showing lateral (right and left side) views at admission, 3 weeks after initiation of LIPUS, and 10 weeks after initiation of LIPUS, and axial views (bottom two panels) of the C2 vertebra at 3 and 10 weeks after the initiation of LIPUS. White lines indicate the locations of the sagittal sections. Favorable healing of the fracture can be seen on the radiographs and CT scans taken after initiation of LIPUS. ( **C** ) Time series of treatments is shown.](10-1055-s-0037-1607425-i170022-3){#FI170022-3}

Discussion
==========

Both delayed unions and nonunions can lead to additional suffering and prolonged functional impairment for patients, as well as increased health care system costs. [@JR170022-9] Delayed unions and nonunions often require additional complex surgical procedures to heal. [@JR170022-10] In the current case, 3 months of conservative therapy using a cervical collar failed to prevent increasingly severe angulation and displacement indicative of instability at the fracture site.
For unstable fractures that are the result of traumatic spondylolisthesis of the axis (including some L--E type II and most type IIa/III fractures, or when external brace immobilization is ineffective), surgical management is indicated. [@JR170022-1] When the decision to proceed with surgical fixation has been made, the various surgical techniques suggested include a posterior approach, an anterior approach, or a combined anterior and posterior approach. [@JR170022-11] The anatomy of the upper cervical spine has large individual variations, and the presence of surrounding neurovascular structures makes pedicle screw fixation even more technically challenging. Misplacement and complications of pedicle screws placed using fluoroscopic techniques have been reported in up to 21.6% of cervical trauma patients. [@JR170022-12] In contrast, the disadvantage of conservative treatment with the halo device is prolonged immobilization for an additional 3 to 6 months with an uncertain outcome. Therefore, after halo immobilization, and before any decision was made to proceed with surgical fixation, we proposed the use of LIPUS as an adjuvant therapy and obtained an excellent outcome. We could see evidence of bone union only 10 weeks after the initiation of conservative treatment using halo immobilization. Importantly, we did not observe any adverse events. It has been reported that clinical success rates with LIPUS for delayed unions and nonunions in long bones can range from 67 to 90%. [@JR170022-13] Interestingly, a positive effect of LIPUS on spinal fusion has been demonstrated in several animal experiments. [@JR170022-8] [@JR170022-14] [@JR170022-15] However, clinical data are completely lacking with respect to the efficacy and safety of LIPUS for spinal fractures in humans.
Despite this, we cannot conclude from the current case that LIPUS alone can achieve a successful union in cases of delayed union or nonunion of traumatic spondylolisthesis of the axis, but these results might indicate that the combination of halo immobilization and LIPUS can synergistically induce bone union. To the best of our knowledge, this is the first report describing that the combination of halo immobilization and LIPUS therapy might be a safe, effective, and feasible method by which to treat cervical spine fractures.

**Conflict of Interest** All authors confirm that there are no conflicts of interest.
Chex Quest begins with an emergency meeting of the members of the Intergalactic Federation of Cereals. In it, it is brought to everyone's attention that a volcano exploded on the surface of Bazoik, a peaceful mining planet renowned for its quality nutritional products. The Chex Squadron captured fragments from the explosion, and discovered that they contained strange, slimy larvae of a creature from another dimension. When exposed to nutritional substances, these larvae abruptly grow into huge, slimy creatures with the capacity to launch slime from their bodies in various ways, using it as a weapon. Even more disturbing, communications with Bazoik have been interrupted, and the Federation cannot contact anyone.

Luckily, the scientists from the Federation have found a way to counter the threat. Although regular weapons do not affect the slimy invaders, the scientists have modified the "zorchers," the main weapon of the Federation, to transport any object they hit into another dimension. With this new weapon, the Federation believes that it can subdue the threat, but they need a volunteer. Luckily, Chex Warrior of the Chex Squadron is here to help! Using the new weapon, he is set to face the invaders. Will he succeed? You must play to find out!

Undoubtedly the strangest and most complete conversion of all DOOM engine licensees, Chex Quest was a free CD-ROM action game released by General Mills on a CD stuffed free into boxes of Chex Cereals. This promotion lasted only for half a year before it was discontinued. (Okay, so the game isn't really "freeware," since you do need to buy cereal to play it...
but hey, I'm sure everyone loves Chex cereal anyway ;)) DoomWorld gave the following great review of this old game that I'd like to quote here in full:

"Chex Quest, a game made specifically for distribution with specially marked packages of Chex cereal about half a year ago, accomplishes its primary mission of plugging Chex cereal through countless wall decals and carefully located billboards, but also does so with a sense of its own silliness.

"The general plot? You are a giant piece of cereal. Your mission is to kill a wide variety of strange, green soggy creatures known as "Flemoids," probably ex-Chex pieces which were transported through the Milk Dimension and came back mutated and dripping. You have numerous super-powerful weapons at your disposal, all with vomit-inducing names like the "Zorcher," the "Rapid Zorcher," and the ultra-powerful "Phasing Zorcher." With these weapons, you are able to return the Flemoids to their home dimension -- that's right, you don't kill anything in Chex Quest; everything disappears in a flash of light, and with a strange sparkly noise. It's Doom, but you don't kill anything. It's probably the strangest thing you'll ever see.

"Well, you're probably asking, how does Chex Quest play? Surprisingly well, actually. The only major problem was a nasty bug upon startup, easily remedied by simply using Boom as the .EXE of choice. Given its kiddie nature, Chex Quest is, not surprisingly, quite easy. Only the trooper, sergeant, imp, and demon's alter egos make an appearance, except for a lone Baron of Hell at the end which is meant to be easily disposed of, given a nearby BFG (known in this world as the Laz Device). Demo gods and Hell Revealed veterans will find this a walk in the park, but keep in mind that those same demo gods' 5-year-old cousins have to have a fair chance too. Can't you just see little Johnny Donner picking up a Powerfork?

"Since Chex Quest was packaged in boxes of cereal, it's also a true total conversion.
Absolutely no standard Doom textures, flats, or other assorted graphics rear their heads. This makes the levels have a completely different feel, as everything you see -- the textures, the monsters, the weapons -- is new. The replacements for the rocket launcher and plasma rifle are especially inspiring. The sounds are completely novel as well, but of less importance; expect lots of strange gurgles, whistles, and chimes. They work within the context, but are of little consequence.

"In the end, Chex Quest is one of the most novel total conversions around. Forget the fact that it has about as little product placement as Disneyland. It's a fun escape into a world where gore is goo and Total Kills is zero. Now excuse me, I'm off to return some more Flemoids."

If anyone is wondering where Return of the Queen is, it was just a port of Freedoom into Chex, and not a very good one. It had a few betas, but they have long since gone from the web. I haven't been able to find Final Fight, unless someone can help me.

Trivia: The original Chex Quest game replaced the maps and resources for DOOM episode 1, but the DOOM maps for episodes 2-4 are there if you start the game with command line options to load a map directly. Monsters are invisible and stuff doesn't work because of missing resources.
After many weeks of emailing and calling, I was able to receive a copy of a study done by Dr. Lorrin Pang, who had rainwater samples taken in Maui, checked for aluminum levels, and attempted to correlate them with sightings of "chemtrails". Dr. Pang is the Maui County Health Officer and is known as an environmental activist. He described this study in several videos here and here. Dr. Pang told me that some public funds had been used for the study. Here was his protocol:

Dr. Lorrin Pang said:

Background: There are claims that for the past several years aircraft have been spraying particles into the atmosphere (chemtrails). No agency admits to this. Scientists have claimed that chemtrails could be an effective, cheap way to counter global warming by reflecting back sunlight (and heat) into space. Chemicals used would be primarily aluminum with barium and strontium components. Another possibility is to use seawater. Californians who claimed that they have been repeatedly sprayed have noted high levels of aluminum in their water and soil. It is possible that nearby heavy industry has contaminated the air and dust with aluminum, but researchers in California claim that rain water samples show a correlation of high aluminum levels with chemtrail observations. It is not clear what the side effects of chemtrails might be. It could have an unwanted effect on the environment or human health. Maui residents claim to see chemtrails used both visually and by remote radar sensing.

Purpose: The purpose of this project is to see if the chemtrail observations (visual and radar) correlate with high levels of aluminum (barium and strontium) in rain water.

Methods: One group will make observations of chemtrail spraying (daily observations for 30 days). Another will collect rain samples (daily collections during the same 30 days). We will test the water samples at a laboratory in California to see if the highest aluminum levels correlate with the highest observations of spraying.
Principles:

A. Water samples can be contaminated by contact with metal, maybe wood, and any chemicals (soap, toothpaste, etc.). Use only clean plastic and glass to touch the water.

B. Keep good records. Record accurately. If you miss a day or spill a sample this is not important. Just record accurately what went wrong.

What to do every day for 30 days (observe and record, collect a sample, rinse and set up a new cup):

1. Observe/record - In the book write the time and date. Look at the cup. Put a check mark if the cup was empty, partly filled or filled.
2. Collect a sample - if the jar is full collect it, cover it, write the date on the jar and store it. Go to #4.
3. If the jar is part full (or empty) rinse the funnel with enough water to fill the jar. Then cover it, write the date on the jar and store it. Go to #4.
4. Rinse - Rinse the funnel (with about 100 cc) - let the water drain to the ground.

The working hypothesis is that there will be a positive correlation between observations (daily reports of visual or satellite images) and levels of aluminum in rainwater caught that day. The Chem Trail observation diaries must be blinded to the laboratory results. We have the diary tallies of Manis but wait for the satellite image diaries of Bruce Douglas (he is having a hard time finding archives of the satellite images). The 24-hour rainfall collections were made daily at 8:00 am and their date of collection is recorded. Next to each collection is Manis's observation on a scale of high, medium or low, which he later put on a scale of 4, 3, 2, 1, with 4 being the highest. The diary entry is for the previous day - for example, if the collection was on the 10th Nov (8:00 am) the diary represents Manis's observation on the 9th. Also noted are whether the container was empty, partially full or full. If we use 40 ug/liter (40 ppb) as elevated (the California standard) one can see that only 2 of the 26 samples were elevated, despite many days of high levels observed.
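The elevated/not-elevated classification just described is simple to sketch. In the illustration below the threshold matches the 40 ug/L (40 ppb) California standard cited in the text, but the daily sample values are made-up placeholders, since the report summarizes the 26 actual measurements rather than listing them:

```python
# Sketch of the elevated/not-elevated classification used in the analysis.
# The cutoff matches the 40 ppb standard cited above; the sample values
# are hypothetical placeholders, NOT the study's actual measurements.

CUTOFF_PPB = 40.0  # California standard: 40 ug/L == 40 ppb

def classify(aluminum_ppb, cutoff=CUTOFF_PPB):
    """Label a single rainwater reading against the cutoff."""
    return "elevated" if aluminum_ppb > cutoff else "not elevated"

# Hypothetical daily readings in ppb (ug/L):
samples = [12.0, 8.5, 44.0, 19.0, 27.5, 41.2, 15.0]

elevated = [s for s in samples if classify(s) == "elevated"]
mean_ppb = sum(samples) / len(samples)

print(f"{len(elevated)} of {len(samples)} samples elevated; mean {mean_ppb:.3f} ppb")
```

The same high/low split, taken at a lower cutoff and tabulated against the daily diary scores, is what feeds the chi-square tables discussed next.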
If one sets a "cutoff" of 20 ppb as high or low, the 3 by 2 table analyzing for chi-square trend shows no correlation (P value of .41) of Manis's observations to aluminum levels. To get some idea of the statistical power to detect a trend, the next 3 by 2 table shows that hypothetical numbers in this table would have "triggered" a finding of a positive correlation (P value of .05). One can also imagine that heavy rains which fill the cup could dilute the aluminum levels. Perhaps we should only use data from partially filled containers. A 2 by 2 chi-square analysis of full vs. partially filled containers does not show it as a confounder for aluminum levels. Furthermore, both high and low diary readings have full and partial collections. Thus the possibility of statistical confounding (unequal distribution and correlation to outcome) should be minimal. I neither see elevated aluminum levels nor a more subtle correlation to observations from this data. It is possible that aluminum was not used during this period (sea water can be used to reflect sunlight). One could also adjust the data for longer lag periods (rainfall following spraying) but this would introduce other types of error.

Some context is needed here. The "Manis" mentioned in Pang's study is Manis Martin, who operates a New Age retreat and farm called Pangaia, and a food business called Pangaia's Pantree on Maui. Manis is a good friend of Michael J. Murphy, who was featured in the movie "What in the World Are They Spraying?". In an effort to get some details of the way that Bruce Douglas and Manis Martin evaluated the sky for Dr. Pang, I spoke to Bruce Douglas earlier this week. He told me:

- The University of Washington did not participate in the study. He had tried to use the UW GOES satellite archive, but was unable to use the resource to see what he wanted for some reason.
- He bases his observations of "spraying", which were not included for correlation in the study, on the amount of "stringy, weird looking clouds".
His description sounds like cirrus clouds to me.

- He doesn't see linear trails from airplanes often, but has seen up to five at one time over Maui. I have been unable to find any such pictures over Maui; most just show one trail at a time.
- Manis Martin paid $2000.00 to get the samples in Pang's study tested at a lab.
- Manis Martin determined days that he believed "spraying" was taking place during the study by looking at haze in the sky.

I got the impression that while there are some trails seen, and some can be seen in photos and videos, there really aren't very many persistent contrails in Hawaii, and that both Martin and Douglas are stretching things by including ordinary clouds as evidence for "spraying".

Another example of Bruce Douglas' misunderstanding of ordinary atmospheric occurrences was a major topic of our telephone discussion this week. His latest worry is what he calls "chem-bombs": he views satellite images, sees thunderstorm development, and believes this to be some sort of bomb being set off at ground level. The images he is showing display ordinary thunderstorms which build into major cells high into the troposphere, where high-altitude winds shear off the thunderstorm tops. When I asked Douglas what a satellite infrared image was showing, he had no idea that it detected temperature; he only knew that high-altitude clouds appeared brighter in that type of imagery. Yes, high-altitude clouds are colder and brighter. In this video he shows both satellite and ground video of an ordinary thunderstorm offshore in Hawaii, yet he calls it a chem-bomb. He also makes several remarkable statements, such as claiming, when a water vapor loop shows intense rainstorms over Mexico, that moisture is being "sucked away" as the thunderstorm moves offshore. Mind you that this study took place last spring and despite having full knowledge of it, Michael J. Murphy, Bruce Douglas, and Manis Martin have kept quiet about the conclusions reached.
One last thing, for each of you mentioned here. The levels of aluminum found in the rainfall were extremely low, considering that mainland aluminum levels measured in 1967 and 1973 were many times higher, averaging 800 ug/L, while the recent "chemtrail" rainfall samples average 484 ug/L. Bruce Douglas also mentioned to me that strontium should never be found in rainfall over Hawaii. I think you had better take that up with Dr. Vitousek, who has been intensely studying atmospheric deposition in Hawaii for well over a decade.

I spoke to him on the phone and he sent me a copy via email. What I have posted here is a 100% copy of what he sent. I believe this is the only place to view it online. Interesting that no chemtrail believer site has displayed it, and certainly not Michael J. Murphy's Coalition Against Geoengineering. By the way, the average of all aluminum levels in the samples is 23.768 ug/L (micrograms per liter).

Interesting, but it looks like a big can of worms that no good can come from. That page sounds very reminiscent of those from people who think they are being "gang stalked" by shadowy groups in collusion with the authorities.

PANGAIA IS TRULY A DANGEROUS SATANIC CULT WITH TIES TO POLICE, AND WHICH RITUALLY RAPE AND MURDER YOUNG WOMEN. Click to expand...

I think these are people who (to varying degrees) simply accept chemtrails as part of the vast conspiracy they imagine themselves subject to. Their lives would be no different if chemtrails were either admitted or unequivocally debunked.
Apparently Maui has no natural clouds or weather, ever. Really. I would put up a link, but think I should stop now before breaking the politeness clause here. Nothing in Scott Stevens' wikipedia bio or his website, "WeatherWars," indicates a degree in meteorology or related subjects. He is a Pocatello native and worked for the local station for nine years. Does anyone have more information on him? Mind you that this study took place last spring and despite having full knowledge of it, Michael J. Murphy, Bruce Douglas, and Manis Martin have kept quiet about the conclusions reached. One last thing, for each of you mentioned here. The levels of aluminum found in the rainfall were extremely low considering that mainland aluminum measured in 1967 and 1973 were many many times higher, averaging 800 ug/L and the recent "chemtrail" rainfall samples average 484 ug/L. Bruce Douglas also mentioned to me that strontium should never be found in rainfall over Hawaii Click to expand... If not already aware, you may also be interested in MJM's following quote @25:50 from a recent radio show Content from external source ....he (Dr. Pang) told me a couple of years ago once you go up (to altitude) and do those tests (air sampling) there's no question in my mind that they will be spraying steam MJM phrases it as if "spraying steam" would be a deliberate act of subterfuge to thwart their efforts to gather court-admissable evidence that chemtrails exist. I wonder, seeing as you've had previous positive communications with Dr Pang, would it be possible to clarify his position. Does Dr Pang believe "they" will cover it up, as MJM is inferring or does he believe there really is nothing to find except "steam"? In which case, is this a concession that spraying "steam" (aka contrails) just happens to mimick what MJM think are chemtrails?
Newsletters: Newsbites

SANS NewsBites is a semiweekly high-level executive summary of the most important news articles that have been published on computer security during the last week. Each news item is very briefly summarized and includes a reference on the web for detailed information, if possible. Spend five minutes per week to keep up with the high-level perspective of all the latest security news. New issues are delivered free every Tuesday and Friday.

INTERNET STORM CENTER TECH CORNER

It is no longer a matter of if, but when, attackers will break into your network. Today's enterprises need a well-established plan for responding to security incidents quickly and effectively. Download our white paper to learn about the people, processes, and technologies needed to build a strong incident response strategy and avoid damaging data breaches. http://www.sans.org/info/194630

***************************************************************************

TOP OF THE NEWS

Smaller Nations Developing Cyberespionage Programs (May 3, 2017)

Cybersecurity companies are noticing an increasing number of countries that are building up their cyberespionage capabilities. Unlike physical weaponry, the cost is not prohibitive and lives are not at risk, which lets smaller nations enter the arena.

[Editor Comments]

[Pescatore] Sophisticated attack code for cybercrime has long come from cyber criminals outside the US, China, Russia or North Korea. The important part: the vulnerabilities that enable the nation-state attacks are what allow all other attacks to succeed. There is an old saying: "People who live in glass houses should build stronger houses."

[Murray] No surprise here. As some costs of attack fall, we must use security measures to increase others and decrease the value of success.

[Williams] The Shadow Brokers' release of (what are reportedly) NSA hacking tools will accelerate the development of nation-state hacking programs.
Even though the exploits have been patched and the tools will have antivirus signatures, the tools offer insight into how well-funded nation states build their programs. The documentation released through the Wikileaks disclosures of CIA hacking tools offers additional insight for nations developing offensive cyber programs.

DHS Wants Broader Authority to Secure Mobile Networks (May 4, 2017)

According to a report from the US Department of Homeland Security's (DHS's) Science and Technology Directorate, the DHS lacks sufficient authority to take the steps it believes are necessary to secure mobile phone networks as part of its job of securing federal IT systems. Currently, DHS cannot inspect mobile carrier infrastructure without the carrier's authorization, and cannot require mobile carriers to implement security measures. The DHS Study on Mobile Device Security notes that "the enhanced capabilities that mobile devices provide, the ubiquity and diversity of mobile applications, and the typical use of the devices outside the agency's traditional network boundaries requires a security approach that differs substantially from the protections developed for desktop workstations."

[Editor Comments]

[Pescatore] The DHS study ignores the government's 5 years of experience in widely using smartphones and tablets, no lessons learned at all. It recommends that the DHS Continuous Diagnostics and Mitigation program, which has been slow to address basic security hygiene issues on government PCs and servers, be expanded to include mobile devices. Rather than look at how the GSA FedRAMP program has been able to use the government's procurement power to drive cloud providers to high levels of security and visibility, it recommends DHS get new authority. The report also recommends that the government only use NIAP-certified devices - a snail's-pace approach which failed for PCs, servers, and software that have much longer life cycles than mobile devices.

[Murray] Wow!
First, the report is "threat," rather than risk, oriented. Second, the attacks and breaches that the government continues to suffer are aimed at desktops and, to a lesser degree, servers and legacy mainframes, not mobiles. Third, while the government MAY have unique requirements to which the market MIGHT not respond, that does not justify giving DHS broad authority over the private sector. Fortunately, we remain very far from legislation, much less law.

Google Docs Phishing Scam (May 3, 4, & 5, 2017)

An enormous phishing scheme disguised as a Google Docs request has been sent to as many as one million users. The attackers used Google developer tools to create an app that was designed to trick users into thinking they were viewing the real Google Docs app. It displayed a legitimate OAuth screen seeking permission to access and manage users' email and contacts. Within an hour of learning about the phishing scheme, Google had taken steps to protect users.

[Editor Comments]

[Honan] Companies will be judged on how well they respond and deal with the breach rather than the breach itself. Incident response teams need to test their capabilities regularly so they know how to operate during a breach; good training enables teams to respond well to breach scenarios they may not have thought about.

SS7 Flaws Exploited in Online Bank Account Heists (May 3, 2017)

Attackers recently exploited vulnerabilities in the Signaling System 7 (SS7) protocol to steal money from bank accounts protected with two-factor authentication. The SS7 protocol allows mobile phone networks to talk to each other. The attacks, which began in January 2017, exploited flaws in SS7 to intercept text messages with mobile transaction authentication numbers (mTANs) or single-use passwords sent by banks as part of two-factor authentication schemes for funds transfers.
The attackers used mTAN interception only after they had compromised bank account holders' accounts by more traditional means to obtain access passwords and view balances.

[Editor Comments]

[Murray] While it has limitations, and while some implementations and uses may be vulnerable to (difficult) man-in-the-middle attacks, strong authentication is far from "so broken" as the ZDNet headline says. "Nothing useful can be said about the security of a mechanism except in the context of a specific application and (threat) environment." While I use strong authentication on all my financial and e-commerce accounts, my security does not rely upon it exclusively.

THE REST OF THE WEEK'S NEWS

WordPress Password Reset Zero Day Vulnerability (May 4, 2017)

An unpatched flaw in WordPress Core could be exploited to obtain a user's account password reset link. The issue affects all versions of WordPress, including the most up-to-date, version 4.7.4.

[Editor Comments]

[Williams] Exploitation of this is VERY difficult. The attacker must send the email to an account that bounces the complete content of the message (with the reset link) to the address specified in the reply-to header. So the attacker would have to know the reset address, fill up the mailbox where the reset link is sent, and finally hope that the configured mail server bounces the complete contents of the email message to the attacker. This is a vulnerability, but not one that is easily exploited.

BondNet Botnet Mines Cryptocurrencies (May 4, 2017)

The BondNet botnet has harnessed the processing resources of roughly 15,000 Windows Server machines to mine cryptocurrencies. The culprit appears to be operating out of China and is earning about 25,000 USD a month from the botnet's activity. BondNet's presence was first detected in December 2016.

Barts Health NHS Trust IT Failure Forces Cancellations

An unspecified IT equipment failure at Barts Health NHS Trust has forced the cancellation of more than 100 operations and hundreds of chemotherapy appointments.
The issues began on April 20, 2017, and have not been entirely resolved.

House Committee Chides IRS and Education Department Over Breach Disclosure Delay

The US House Oversight and Government Reform Committee took the Internal Revenue Service (IRS) and the Department of Education to task earlier this week for delaying the disclosure of a breach that may have compromised sensitive personal information of as many as 100,000 families who applied for federal student financial aid. The IRS's Data Retrieval Tool allowed financial aid applicants to populate the Free Application for Federal Student Aid (FAFSA) with information from tax returns. Attackers were abusing the tool to steal information and file fraudulent tax returns to obtain refunds. The tool was shut down in early March.

FIN7 Carbanak Group May Be Behind Chipotle Breach (May 3 & 4, 2017)

A cybercrime group known as FIN7/Carbanak is believed to be responsible for the payment card breach at Chipotle and several other restaurants. The group appears to be misusing the Windows Application Compatibility Infrastructure, which lets app developers create patches called shims to help apps run smoothly on newer versions of Windows. FIN7/Carbanak is believed to have registered a shim database that allowed it to inject a backdoor into targeted computers.

Google Releases Android Updates (May 2, 2017)

On Monday, May 1, Google released patches for Android. In all, the update fixes 17 critical vulnerabilities. Of those, six are related to the operating system's Mediaserver component. Four of the critical flaws lie in Qualcomm components in Android handsets. Android updates are released in two batches: the May 1 batch was the partial security patch level; the complete security patch level update is expected to be released on Friday, May 5.
A unique method for measuring filtration in single isolated glomeruli has been developed. This technique permits study of the ultrafiltration characteristics of glomeruli from any species. Values for the ultrafiltration coefficient of normal glomeruli of rats, rabbits, and dogs have been established. An increase in glomerular size and ultrafiltration in the developing rabbit has been documented. Glomerular function in several forms of acute renal failure has been studied. Further studies will be directed toward developing a more precise description of the changes in capillary volume that occur during filtration in vitro and toward further defining the relation between glomerular size and cell morphology and filtration. In addition, glomerular function in several physiologic states, e.g., intravascular loading and depletion, will be studied, and the in vitro effects of several vasoactive substances will be investigated. These studies will enhance our understanding of normal glomerular function and of the mechanisms which regulate glomerular function in normal and disease states.
Bobby Hunter investigates the controversial scoring that "disgusted" Paul Smith

SOMETIMES you just have to wonder, in the world of boxing, what goes on inside the head of a professional judge when they produce a card that bears no relevance to the fight they are scoring. Last night in Kiel, Germany, WBO 168lb champion Arthur Abraham and Liverpool's Paul Smith battled for 12 close rounds. There is no doubt in many rational minds that the fight could have gone either way, but when the ring announcer came up with two tallies of 117-111 and an even worse 119-109 scoreline for the champion, most pundits and fans simply shook their heads in disgust.

Something stinks here, let's not mess around. A 9-3 score for Abraham is bad enough; Abraham did not throw enough punches in this fight to win nine rounds. To have him winning 11 rounds, though, illustrates total incompetence. The three judges for this fight were from the USA, Spain and Hungary. The WBO President Paco Valcarcel tweeted this on Friday: "What really improves boxing is more qualified officials and executives." I'm sorry, but 11-1 in rounds in a close fight makes a mockery of that statement.

Fernando Laguna of Spain is the offending party here with that awful score. The other two are not much better: Mr Waleska for the USA and Mr Zoltan Enyedi from Hungary. What is apparent is that the WBO President should be investigating these rotten scores. Eddie Hearn will no doubt appeal and ask for a rematch. He is correct in doing this, as his fighter put up the fight of his life; Paul Smith deserved better. Mr Abraham is a proud champion and, let's be clear here, he didn't score the fight, but he should fight Smith again.

The 51 media scores attached to this article show that 59 per cent had Arthur Abraham winning. I honestly do not think many people have a problem with the result – it's the wideness of the final scores. Of the 51 scores below, the average media score was 115-114 Abraham.
That's 51 people scoring a fight, mostly from the UK and America, and it shows there is no bias when the vast majority had the home fighter winning. Even Smith's head trainer Joe Gallagher had the fight a draw. The WBO must consider striking Mr Laguna off their approved championship official list. That score is just wrong. Personally I am surprised by Mr Enyedi's score, as in the past his scores have been very good. An investigation into the scores is what is being asked for here. Let's hope that happens, along with a rematch for Paul Smith. Those scores gave boxing another black eye – we don't need it, as casual fans already think boxing is corrupt. What are the new fans to think after seeing THAT in Germany last night? As usual my full scorecard is below, along with the media scores and what percentage of the media scored for each fighter.

Bobby Hunter's Scorecard

Round 1: 10-9 Smith
Round 2: 9-10 Abraham
Round 3: 10-9 Smith
Round 4: 9-10 Abraham
Round 5: 9-10 Abraham
Round 6: 10-9 Smith
Round 7: 10-9 Smith
Round 8: 9-10 Abraham
Round 9: 10-9 Smith
Round 10: 10-9 Smith
Round 11: 9-10 Abraham
Round 12: 9-10 Abraham

TOTAL: 114-114

Average Boxing Media Score (51 in Total): 115-114 ABRAHAM
Abraham: 30 Media Scores (59 per cent)
Smith: 9 Media Scores (18 per cent)
Draw: 12 Media Scores (23 per cent)

Boxing Media Scores from Ringside and TV:

Boxing Guru: 115-113 ABRAHAM
Steve Adams Jnr (RingNews24): 116-113 SMITH
Scott Christ (BadLeftHook): 114-114 DRAW
Tom Gray (Ring Magazine): 115-113 SMITH
Daniel Vano (CheckHookBoxing): 115-113 ABRAHAM
Kurt Ward (BoxingAsylum): 115-113 ABRAHAM
FirstClassBoxing: 114-114 DRAW
Shaun Brown (Boxing Monthly): 115-113 ABRAHAM
Jim Watt (Sky TV): 115-113 SMITH (Ringside)
Andy Paterson (BoxingAsylum): 115-113 ABRAHAM
Iconic Boxing: 115-113 SMITH
Corey Quincy (Freelance): 114-114 DRAW
Martin Murray (Sky TV): 114-114 DRAW
Alex Morris (BoxingAsylum): 114-114 DRAW
Nathan Cleverly (Sky TV): 114-114 DRAW
I Edit Boxing: 115-113 SMITH
Fight Ghost: 115-113 ABRAHAM
Ron Lewis (The Times): 115-113 ABRAHAM (Ringside)
Tommy Allan (BoxingAsylum): 114-114 ABRAHAM
Phil D Jay (WorldBoxingNews): 115-113 SMITH
Ciaran Shanks (Irvine Times): 114-114 DRAW
Alex Steedman (BoxNation TV): 115-113 ABRAHAM
Johnny Nelson (Sky TV): 117-116 SMITH (Ringside)
Andy Clarke (BoxNation TV): 115-114 ABRAHAM
Boxingscene: 115-113 ABRAHAM
V2 Boxing: 115-113 SMITH
Graham Houston (Boxing Monthly): 116-112 ABRAHAM
John Hoolan (Freelance): 115-113 ABRAHAM
Andrew McKart (FirstClassBoxing): 115-113 ABRAHAM
KO Radio Online: 114-114 DRAW
Victor M Salazar (Tha Boxing Voice): 114-114 DRAW
Graeme Young (Daily Record): 115-113 ABRAHAM
Paul Daley (TopClassBoxing): 114-114 DRAW
Mark Butcher (BoxingMonthly): 116-113 ABRAHAM
Asian Boxing: 116-112 ABRAHAM
Matt Mojica (The Fight Source): 115-113 ABRAHAM
Nathan Orr (BoxingScene): 115-113 ABRAHAM
Instant Boxing: 115-113 ABRAHAM
Marius Vibe (Freelance): 116-112 ABRAHAM
Irish Boxing: 115-113 ABRAHAM
WildPunchBoxing: 115-113 ABRAHAM
ATR Boxing Tipster: 115-113 ABRAHAM
The Boxing Tribune: 114-114 DRAW
Robert Palmer (CheckHookBoxing): 115-113 ABRAHAM
Rachel Aylett (RingNews24): 116-112 ABRAHAM
John Wharton (Freelance): 115-114 SMITH
Livefight: 116-113 ABRAHAM
Sam Sheppard (The Queensberry Rules): 115-113 ABRAHAM
Kasim Aslam (Global Boxing): 115-113 ABRAHAM
John Evans (Livefight): 115-114 ABRAHAM
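For readers who want to double-check the arithmetic, the round-by-round card and the media split above can be tallied in a few lines of Python. This is only a quick sketch using the figures quoted in this article; the round winners and the 30/9/12 split are taken directly from the lists printed here.

```python
# Round winners from Bobby Hunter's card above:
# 'S' = 10-9 round for Smith, 'A' = 10-9 round for Abraham.
rounds = ["S", "A", "S", "A", "A", "S", "S", "A", "S", "S", "A", "A"]

smith = sum(10 if r == "S" else 9 for r in rounds)
abraham = sum(10 if r == "A" else 9 for r in rounds)
print(f"Hunter card: {smith}-{abraham}")  # Hunter card: 114-114

# The 51-card media split reported above.
media = {"Abraham": 30, "Smith": 9, "Draw": 12}
total = sum(media.values())
for verdict, n in media.items():
    print(f"{verdict}: {n}/{total} ({100 * n / total:.1f} per cent)")
```

With six 10-9 rounds apiece the card works out to a 114-114 draw, and 30 of the 51 media cards (58.8 per cent, i.e. the 59 per cent quoted above once rounded) went to Abraham.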
Key points {#sec1}
==========

- The ongoing coronavirus disease 2019 (COVID-19) pandemic has affected hundreds of thousands of people.
- Children have so far accounted for 1.7% to 2% of diagnosed cases of COVID-19.
- Children often have milder disease than adults, and child deaths have been rare.
- Risk factors for severe disease from COVID-19 in children are reported to be young age and underlying comorbidities, although this is not confirmed in all studies.
- It is unclear whether male gender and certain laboratory and imaging findings can also be considered as risk factors, because of insufficient data.

Introduction {#sec2}
============

Until recently, 6 different coronaviruses (CoVs) had been identified in humans (human CoVs \[HCoVs\]): HCoV-OC43, HCoV-229E, HCoV-NL63, HCoV-HKU1, severe acute respiratory syndrome (SARS)-CoVs, and MERS-CoVs. Endemic HCoV-OC43 and HCoV-229E were described in the 1960s, and HCoV-NL63 and HCoV-HKU1 in 2004 and 2005, respectively.[@bib1] ^,^ [@bib2] The first serious CoV disease outbreak occurred in China in 2002, when the novel SARS-CoV emerged, which was thought to have been transmitted from civet cats or bats to humans.[@bib3] ^,^ [@bib4] The second novel CoV, the Middle East respiratory syndrome (MERS)--CoV, emerged in Saudi Arabia in 2012[@bib5] and is transmitted from dromedary camels to humans.[@bib6] Collectively, these 2 CoV diseases did not affect children widely, because of the short-term nature of the SARS epidemic and the restricted transmission route of MERS.

Since December 2019, SARS-CoV-2 has been recognized as the cause of severe pneumonia and potential damage to vital organs in humans. The first cases of SARS-CoV-2 infection originated in Wuhan in the Hubei province of China, and the virus subsequently spread to other countries throughout the world.[@bib7] In February 2020, the World Health Organization (WHO) designated the disease coronavirus disease 2019 (COVID-19).
A substantial number of studies have already been published on adults with COVID-19, but reports on children with COVID-19 are scarce. This article analyzes the current knowledge on the risk factors for the progression and severity of COVID-19 in infants and children. The possible mechanisms of the aberrant clinical features of COVID-19 in children are also presented. To the best of our knowledge, this is the first review addressing the risk factors associated with the progression and severity of COVID-19 in children.

Methods {#sec3}
=======

Original research studies published in English between February 26, 2020 and June 10, 2020 were identified using PubMed and Scopus. The search used combinations of the key words "COVID-19," "SARS-CoV2," "mechanism," "risk factor," "severity," and "child." In addition, the reference lists of the retrieved articles were checked for other relevant articles. The initial search yielded 293 articles, of which, after screening of their titles, 72 studies were considered relevant to the aim of this review. Studies on adults and neonates were not included, and 7 studies were excluded because they were in Chinese. Pediatric case reports of COVID-19 were included only if they provided information about risk factors for severe disease. Thus, 23 studies were eventually selected, as shown in [Fig. 1](#fig1){ref-type="fig"}, and are discussed here. The factors that may introduce bias into the findings of this article are the restriction to articles in English, together with database and citation bias.

Fig. 1: The literature search on risk factors for severe COVID-19 in childhood (February 26 to June 10, 2020).

Most of the studies originated in China, the United States, Italy, Spain, and South Korea, despite the large number of patients diagnosed with COVID-19 throughout the world.
Some published studies relating to COVID-19 in children do not provide detailed information on the mechanisms, triggering factors, or clinical features that led to the deterioration of the patients' status. In addition, the current studies do not provide a uniform definition of severe or critical disease. The information from all the studies related to the risk factors for severe COVID-19 in infants and children is summarized in [Table 1](#tbl1){ref-type="table"}.

Table 1: Studies on severity and risk factors of coronavirus disease 2019 in children (February 26 to June 10, 2020)

| First Author | Region | Study Period | Number of Children | Mean Age (% of Young Children) | Underlying Diseases Present | Severity | Risk Factors |
|---|---|---|---|---|---|---|---|
| Bialek et al[@bib9] | United States (33% from New York City, 23% from the rest of New York State, 15% from New Jersey, 29% from other jurisdictions) | February 12 to April 2, 2020 | 2572 | 11 (\<1 y, 15%) | 23% (chronic lung disease, cardiovascular disease, immunosuppression) | 5.7%--20% hospitalized, 0.58%--2% admitted to ICU; aged \<1 y: 15%--62% hospitalized; 3 deaths | Children aged \<1 y, underlying condition |
| Dong et al,[@bib10] 2020 | Chinese CDC, cases from Hubei province and Anhui, Henan, Hunan, Jiangxi, Shanxi, and Chongqing | January 16 to February 8, 2020 | 2135 suspected and confirmed cases | 7 (\<1 y, 17.6%) | Not available | 90% had asymptomatic to moderate disease; severe or critical disease in 10.6% \<1 y, 7.3% 1--5 y, 4.1% 6--10 y, 3% \>16 y; one 14-year-old boy died | Young age |
| Lu et al,[@bib30] 2020 | Wuhan Children's Hospital, China | January 28 to February 26, 2020 | 171 | 6.7 (\<1 y, 18%) | 3 patients (hydronephrosis, leukemia, intussusception) | 3 patients with invasive mechanical ventilation (all with underlying condition), 1 death | Underlying condition |
| DeBiasi et al,[@bib22] 2020 | Children's National Hospital, Washington | March 15 to April 30, 2020 | 177 | 9.6 | 39% (asthma, neurologic condition, DM, obesity, cardiac problem, hematological disease, oncological condition) | 9 critically ill patients | Adolescents and young adults |
| Parri & Lenge,[@bib32] 2020 | Italy, 17 pediatric emergency departments, the CONFIDENCE study | March 3 to March 27, 2020 | 100 | 3.3 (40% \<1 y, 14% \<5 y) | 27%: cystic fibrosis; neurologic, hematological, cardiac, immunologic, oncological conditions; metabolic disease; prematurity syndrome | 1% had severe disease, 1% were in critical condition | Underlying medical condition, young age |
| Chao et al,[@bib44] 2020 | Single tertiary children's hospital, New York City | March 15 to April 13, 2020 | 67 | 13.1 | Obesity and asthma | 33 admitted to ICU | Higher levels of CRP, procalcitonin, and proBNP; higher platelet count |
| Whittaker et al,[@bib24] 2020 | 8 hospitals in the United Kingdom | March 23 to May 16, 2020 | 58 | 9 | 3 had asthma, 1 neurodisability, 1 epilepsy, 1 sickle cell disease, 1 alopecia | All had multisystem inflammatory syndrome; 50% developed shock and 14% coronary artery aneurysm | Increased CRP and ferritin levels, older age, black or Asian race |
| Shekerdemian et al,[@bib29] 2020 | 46 North American ICUs | March 14 to April 3, 2020 | 48 | 13 | 83% | All admitted to ICU; 23% had multiorgan failure, 2% needed extracorporeal membrane oxygenation, 4% died | Underlying comorbidities |
| Tagarro et al,[@bib35] 2020 | 30 hospitals in Madrid, Spain | March 2 to March 16, 2020 | 41 | 1 | 27% had underlying disease | 60% hospitalized, 9.7% admitted to ICU, 9.7% needed respiratory support (1 had underlying condition) | Perhaps young age, underlying condition |
| Qiu et al,[@bib43] 2020 | 3 hospitals, Zhejiang, China | January 17 to March 1, 2020 | 36 | 8.3 (\<5 y, 28%) | Not available | All patients had mild or moderate type | Radiographic presentation, decreased lymphocyte counts, increased body temperature, high levels of procalcitonin, D-dimers, and creatine kinase-MB |
| Belhadjer et al,[@bib49] 2020 | 14 ICUs in France and Switzerland | March 22 to April 30, 2020 | 35 | 10 | 28% had comorbidities (asthma, overweight) | Multisystem inflammatory syndrome with acute cardiac failure | Cytokine storm and macrophage activation |
| Bandi et al,[@bib23] 2020 | University COVID-19 clinic, Chicago, IL | March 12 to April 20, 2020 | 25 | 9.7 | Not available (1 sickle cell acute pain crisis) | 20% hospitalized, 12% admitted to ICU, 1 intubated | Older age, African American race |
| Zheng et al,[@bib33] 2020 | 10 hospitals, Hubei, China | February 1 to February 10, 2020 | 25 | 3 (\<3 y, 40%) | 8% (congenital heart disease, malnutrition, suspected hereditary metabolic diseases) | Most patients had mild disease; two had critical disease (both with underlying disorder) | Underlying disorders |
| Cheung et al,[@bib51] 2020 | Columbia University Irving Medical Center/New York-Presbyterian Morgan Stanley Children's Hospital, New York City | April 18 to May 5, 2020 | 17 | 8 | 3 mild asthma | Multisystem inflammatory syndrome | Inflammatory markers, troponin T, and NT-proBNP levels |
| Verdoni et al,[@bib48] 2020 | Bergamo province, Italy | February 18 to April 20, 2020 | 10 | 7.5 | None | Multisystem inflammatory syndrome | Older age, features of macrophage activation |
| Riphagen et al,[@bib47] 2020 | ICU, United Kingdom | Mid-April 2020 | 8 | 9 | None | Multisystem inflammatory syndrome | Afro-Caribbean descent, male gender |
| Sun et al,[@bib34] 2020 | ICU of Wuhan Children's Hospital, China | January 24 to February 24, 2020 | 8 | 7 (3 children ≤1 y) | 1 acute lymphoblastic leukemia | All admitted to ICU | Increased levels of CRP, LDH, and procalcitonin; abnormal liver function; cytokine storm; abnormalities on chest CT |
| Liu & Zhang,[@bib19] 2020 | 3 branches of Tongji Hospital, Wuhan, China | January 7 to January 15, 2020 | 6 | 3 (4 children ≤3 y) | None | All 4 patients ≤3 y had pneumonia, 1 admitted to ICU | Young age |
| Cui et al,[@bib18] 2020 | Hubei Province, China | January 28, 2020 | 1 | 55 d | None | Pneumonia, myocardial injury, acute liver injury | Young age |
| Shi et al,[@bib42] 2020 | Hubei Province, China | February 3, 2020 | 1 | 2 mo | None | Severe pneumonia, need for noninvasive ventilation | Young age, coinfection with RSV[^2] |

Epidemiology of coronavirus disease 2019 {#sec4}
========================================

COVID-19 worldwide is less common in children than in adults.
A review of 72,314 cases by the Chinese Center for Disease Control and Prevention showed that less than 1% of the cases were in children younger than 10 years and 1% of the cases were in children aged 10 to 19 years.[@bib8] In the United States, among 149,082 reported cases of COVID-19, 1.7% were in children aged less than 18 years.[@bib9] From the currently available data, it seems that children tend to have asymptomatic or mild disease more commonly than adults,[@bib8] ^,^ [@bib10] but severe cases and even deaths have been reported worldwide in patients younger than 18 years. In a cohort study of 32,583 confirmed cases of COVID-19 from Wuhan, China, 4.1% of severe and critical cases were in patients aged less than 20 years.[@bib11]

According to a large retrospective study conducted in China, 4 HCoVs, HCoV-OC43, HCoV-229E, HCoV-NL63, and HCoV-HKU1, were more common in children, with an overall prevalence of 4.3%, and the highest prevalence was among infants aged 7 to 12 months.[@bib12] Infection by these 4 strains usually causes acute respiratory disease, with severe manifestations in some children.[@bib13]

Regarding SARS-CoV, only 6 case series have been reported, including a total of 135 pediatric cases, from Canada, Hong Kong, Taiwan, and Singapore.[@bib14] A milder form of the disease was observed in children compared with adults, and no child death was recorded.[@bib15] In the MERS-CoV epidemic, pediatric cases were even fewer: only 2 small case series of children were reported, both originating from Saudi Arabia, 1 of 31 children with a mean age of 10 years[@bib16] and 1 of 7 children with a mean age of 8 years.[@bib17] In both studies, 42% of the infected children were asymptomatic,[@bib16] ^,^ [@bib17] and in 1, 2 of the 7 had severe disease,[@bib17] whereas in the other, 2 of the 31 children died (6%).[@bib16]

Risk factors for severity in coronavirus disease 2019 and other coronavirus infections {#sec5}
======================================================================================

The Impact of Age {#sec5.1}
-----------------

### Severe acute respiratory syndrome--coronavirus-2 {#sec5.1.1}

In a series of 2135 children with suspected and confirmed COVID-19 from China, severe disease was defined as the occurrence of dyspnea, central cyanosis, and oxygen saturation of less than 92%. Critical disease was defined as progression to acute respiratory distress syndrome, shock, encephalopathy, myocardial injury, coagulation dysfunction, and acute kidney injury.[@bib10] Severe and critical cases were reported in 10.6% of the children aged less than 1 year, 7.3% of those aged 1 to 5 years, 4.1% of those aged 6 to 10 years, and 3% of the children aged greater than 16 years. One 14-year-old boy died, but no further information was provided about this patient, and the study gave no data on underlying comorbidity or other possible risk factors. It is of note that, of the 2135 children, only 728 had laboratory confirmation, and the severe symptoms in the suspected cases may have been caused by pathogens other than SARS-CoV-2.

Two case reports from the same country, China, referred to children with severe disease: a 55-day-old female infant and a 3-year-old girl with no apparent risk factor apart from their young age.[@bib18] ^,^ [@bib19] Cases have also been reported of infants in China and in Vietnam who, despite their young age, had mild disease, including 10 infants diagnosed with COVID-19 who were otherwise healthy, with mild or no symptoms.[@bib20] ^,^ [@bib21]

In a study of 177 children from the Children's National Hospital in Washington, DC, the adolescents and young adults were more commonly critically ill than the younger children.[@bib22] Another study from the United States reported that the mean age of COVID-19--positive children was significantly higher than that of those testing negative (9.72 vs 4.85 years).
In that study, ethnicity was also examined, and African American children had a significantly higher rate of positive tests for COVID-19: 6.8% versus 1.7% of white children.[@bib23] In a study in the United Kingdom, among 58 children, race (black or Asian) was described as a risk factor for COVID-19.[@bib24]

### Other coronaviruses {#sec5.1.2}

In the United States, in the case of other CoVs, specifically 229E, HKU1, NL63, and OC43, age less than 2 years has been reported as a risk factor for severe disease, defined as the need for respiratory support.[@bib25] In contrast, in a series of 44 children in China with SARS-CoV, an age of greater than 12 years was associated with severe illness, requiring methylprednisolone therapy and oxygen supplementation.[@bib15] In adults, older age has been reported to be an independent risk factor for severity and mortality, not only in SARS-CoV-2 but also in the previous epidemics of SARS and MERS.[@bib26] ^,^ [@bib27]

The Impact of Male Gender {#sec5.2}
-------------------------

Male gender is a risk factor for severe CoV disease in adults.[@bib28] A predominance of boys was reported in all age subgroups among 2490 pediatric cases of COVID-19 in a series in the United States, but no details were given about the impact of gender on the severity of the disease.[@bib9] Among 2135 Chinese children with COVID-19 in the study of Dong and colleagues,[@bib10] no significant difference was reported in the number of cases between boys and girls, and no detailed information was given on the gender of the severe and critical cases. In a cross-sectional study of 48 children with COVID-19 admitted to US and Canadian intensive care units (ICUs), 52% were boys.[@bib29] Severe disease has also been reported in girls, and the current data suggest that, in children, male gender is not an independent risk factor for severe COVID-19.
Underlying Medical Comorbidity {#sec5.3}
------------------------------

### Severe acute respiratory syndrome--coronavirus-2 {#sec5.3.1}

In a series of 171 children with COVID-19 from Wuhan, the Chinese city where SARS-CoV-2 was first described, 3 patients required ICU support and invasive mechanical ventilation, all of whom had underlying comorbidities. One was a 10-month-old male infant with intussusception who developed multiorgan failure and died 4 weeks after admission.[@bib30] The second child had leukemia and was in the maintenance phase of chemotherapy, and the third, aged 13 months, had bilateral hydronephrosis and a calculus of the left kidney.[@bib30] ^,^ [@bib31] It was not reported whether any of the 168 children who did not need ICU admission had an underlying condition.

In the recently published Coronavirus Infection in Pediatric Emergency Departments (CONFIDENCE) study from Italy, which included 100 children, 27% had an underlying medical condition. Of the 9 children needing respiratory support, 5 were aged less than 1 year and 6 had an underlying condition. The severe (1) and critical (1) cases were both in children with underlying medical conditions.[@bib32]

Among 25 pediatric cases of COVID-19 from Hubei province in China, two 1-year-old boys needed invasive mechanical ventilation, both of whom had congenital heart disease. One of them also had malnutrition and a suspected hereditary metabolic disease, and the other had a coinfection with *Enterobacter aerogenes*.[@bib33]

The first report from the United States concerning children with COVID-19 covered 2572 pediatric cases. Among the children for whom hospitalization status was known, 20% were hospitalized. Because of the lack of information on specific disease features, hospitalization was considered to be an indicator of serious illness, and it was most often reported in children younger than 1 year.
An underlying medical condition was noted in 77% of hospitalized children, in contrast with 12% of those not hospitalized. The most common comorbidities were chronic lung disease (including asthma), cardiovascular disease, and immune suppression. Three deaths were reported, but their association with COVID-19 is still under investigation.[@bib9] In another US study, among 48 children admitted to an ICU, 83% had a significant preexisting comorbidity.[@bib29]

Severe and critical cases have also been reported in children with no underlying comorbidity. Sun and colleagues[@bib34] reported 8 severe and critical cases of children in a hospital in Wuhan, 7 of whom were previously completely healthy. In this study, severe cases were defined as the coexistence of tachypnea, oxygen saturation less than 93%, and arterial partial pressure of oxygen less than or equal to 300 mm Hg, whereas critical cases were defined as the presence of septic shock or the need for mechanical ventilation or ICU admission. The age range of the patients in the 8 severe cases was from 2 months to 15 years, 6 were boys, and only 1 of them had an underlying medical condition (acute lymphocytic leukemia).[@bib34]

Information from a registry of 30 hospitals in Madrid, Spain, showed that, of 41 children with COVID-19, 60% were hospitalized, 4 children were admitted to an ICU, and 4 needed respiratory support. Of these children, 1 had a previous condition (recurrent wheezing) and no patient died.[@bib35] In a recent report from Paris, France, of 27 children with severe COVID-19, 70% had an underlying medical condition.
Of the 5 children who died, 3 had no underlying comorbidity, suggesting that comorbidities may be a risk factor for severe disease and fatality, but that other mechanisms may also be implicated in the severity of the disease.[@bib36]

It seems, therefore, that although underlying medical comorbidity may be a risk factor for severe disease in childhood, it is not the only factor in the progression of the disease and the development of complications. It would be of interest to gather further information on the children with underlying medical problems and assess the percentages with severe or mild disease, and their other risk factors. To date, such data are lacking in the literature, although, in adults, specific comorbidities are well documented as risk factors not only for admission to the ICU but also for mortality.[@bib37]

### Other coronaviruses {#sec5.3.2}

Severe pediatric disease from other CoVs reported in the United States, specifically 229E, HKU1, NL63, and OC43, defined as the need for respiratory support or pediatric ICU admission, has been associated with underlying comorbidity, in particular cardiovascular, chronic respiratory, and genetic/congenital conditions.[@bib25] Ogimi and colleagues[@bib38] in the United States showed that both an immunocompromised state and an underlying pulmonary disorder were associated with lower respiratory tract disease or severe lower respiratory tract disease from HCoV.
No significant difference was found in the severity of illness among hospitalized children with different HCoV types.[@bib25] The 2 deaths reported in children with MERS-CoV in Saudi Arabia were in a 2-year-old child with cystic fibrosis[@bib39] and a 9-month-old infant with infantile nephrotic syndrome,[@bib40] whereas a 14-year-old girl with Down syndrome needed hospital admission but eventually recovered.[@bib39]

Coinfection with Another Pathogen {#sec5.4}
---------------------------------

### Severe acute respiratory syndrome--coronavirus-2 {#sec5.4.1}

Coinfection with other pathogens may be a risk factor for severe disease. One child in Wuhan with a history of congenital heart disease and severe illness was found to have a coinfection with *E aerogenes*.[@bib33] In a study of 20 pediatric cases from the same region, 40% had an underlying coinfection, but there was no report on their severity.[@bib41] A severe case of COVID-19 has been reported in a Chinese 2-month-old infant who had a coinfection with respiratory syncytial virus (RSV).[@bib42]

### Other coronaviruses {#sec5.4.2}

The presence of copathogens, whether more than 1 HCoV strain (229E, HKU1, NL63, and OC43) or other respiratory pathogens, is a risk factor for febrile illness. Patients infected with a single strain of HCoV were more likely to present with pulmonary rales than those infected by more than 1 HCoV strain or other respiratory pathogens.[@bib12] The presence of RSV has been associated with lower respiratory tract disease or severe lower respiratory tract disease from HCoV.[@bib38]

Laboratory Findings {#sec5.5}
-------------------

This article reports only the laboratory information available in the current literature on severe compared with mild cases; several publications did not provide relevant data.
### Severe acute respiratory syndrome--coronavirus-2 {#sec5.5.1}

Based on currently available data, it is not possible to document a pattern of laboratory values in pediatric COVID-19 according to the severity of the disease. In the study of Qiu and colleagues[@bib43] from China, no laboratory data were reported for severe cases, but only for 36 children with moderate and mild disease. Moderate cases (19 patients) compared with mild cases (17 patients) were associated with increased body temperature, a decrease in lymphocyte counts, higher levels of procalcitonin and creatine kinase-MB (myocardial band), and increased D-dimer levels.[@bib43] Laboratory data from 8 severe pediatric cases in the same country showed normal or increased leukocyte counts and high levels of C-reactive protein (CRP), procalcitonin, and lactate dehydrogenase, whereas half had abnormal liver function tests.[@bib34] In a study of 67 children in the United States, admission to an ICU was associated with higher levels of CRP, procalcitonin, and pro--B-type natriuretic peptide and an increased platelet count.[@bib44] Henry and colleagues[@bib45] reviewed 2020 case reports and case series providing laboratory data on pediatric cases of COVID-19. In that review, 69.6% of the children had a normal leukocyte count, and the investigators commented that the absence of lymphopenia in children may in part be explained by the milder disease. Another assumption was that increased procalcitonin levels could be caused by a bacterial coinfection as a complication of COVID-19.[@bib45] Procalcitonin level was increased in 80% of Chinese pediatric patients in the study of Xia and Shao,[@bib41] and, in that series, 40% of the children had a coinfection.
### Other coronaviruses {#sec5.5.2}

Neutrophilia was a predictor of severe illness among 44 children with SARS.[@bib15] Lymphopenia was detected in 10 children with SARS, of whom 4 needed oxygen therapy and 2 needed assisted ventilation.[@bib46]

Risk factors for pediatric multisystem inflammatory syndrome associated with severe acute respiratory syndrome--coronavirus-2 {#sec6}
=============================================================================================================================

A syndrome of fever and multisystem inflammation, termed multisystem inflammatory syndrome (MIS), has recently been described in children with COVID-19. Some of these children presented with shock and multiorgan failure, and others had characteristics of Kawasaki disease or a combination of Kawasaki-like disease and shock, named the Kawasaki disease shock syndrome.[@bib47] ^,^ [@bib48] These children presented with acute cardiac decompensation,[@bib49] and some developed coronary artery aneurysms.[@bib24] Among 44 children hospitalized in the United States with MIS, 84.1% had gastrointestinal symptoms as the presenting clinical complaint.[@bib50] Most studies to date have reported that MIS presents in children at an older age, with a median age of 8 to 10 years.[@bib24] ^,^ [@bib49] ^,^ [@bib51] In a retrospective study of 35 children with MIS admitted to ICUs in France and Switzerland, comorbidities were present in 28% of the children, including asthma and being overweight,[@bib49] but most of the children in other studies reported from Europe, specifically Italy and the United Kingdom, were previously healthy.[@bib24] ^,^ [@bib48] In a study of 8 children from the United Kingdom with MIS, 6 were Afro-Caribbean and 5 were male.[@bib47] It has been suggested that black and Asian races may be predisposed to this clinical complication.[@bib24] These limited data indicate a possible gender and race predilection for MIS.
The laboratory findings in children with MIS were characterized by a marked increase in levels of inflammatory markers such as CRP and ferritin,[@bib24] and a cytokine storm, with a specific increase in the level of interleukin (IL)-6 and macrophage activation.[@bib49] ^,^ [@bib51] The patients often had a significant increase in B-type natriuretic peptide and troponin T.[@bib48] MIS is considered to be the result of a continuous immune response rather than injury from an acute SARS-CoV-2 infection. The disease presented 2 to 3 weeks after the peak of the infection, and most children had negative COVID-19 polymerase chain reaction but positive viral serology.[@bib52]

What mechanisms play a role in the atypical picture of coronavirus disease 2019 in children? {#sec7}
============================================================================================

SARS-CoV-2 is a β CoV of group 2B, with more than 70% similarity in genetic sequence to SARS-nCoV.[@bib53] The established scientific evidence on SARS-nCoV has enabled elucidation of the host defense mechanisms against SARS-CoV-2 and helped to explain the lower susceptibility of children to the virus and the variability between children. The reasons for the different pattern of COVID-19 in children are still unclear, but several hypotheses have been put forward.

Environment-Epigenetics {#sec7.1}
-----------------------

The effect of the environment must be considered a factor with significant impact on infection with COVID-19. Children have healthier airways, because they have less exposure to cigarette smoke, air pollution, chemicals, and industrial pollutants than adults.
In adults, these environmental factors, and especially smoking, have a negative epigenetic impact on epithelial and immune cells, leading to increased vulnerability to all respiratory viruses, including SARS-CoV-2.[@bib54] ^,^ [@bib55] CoVs are known to alter the epigenetic cellular mechanisms of the host associated with viral entry, replication, and innate immune control.[@bib56] Most children hospitalized with COVID-19, especially those in the ICU, were less than 3 years of age.[@bib33] ^,^ [@bib35] This finding may be explained by the immaturity of the immune system in this age period, the low likelihood of wearing face masks in this age group, and the subsequent high viral load.[@bib57] Another reason for the different clinical picture of COVID-19 in children is that they have fewer underlying disorders that may predispose to severe COVID-19 than adults.[@bib58] The severity of COVID-19 is higher in children with preexisting conditions, such as asthma, malignancies, cardiovascular disorders, and immunosuppression.[@bib33] ^,^ [@bib35] In certain chronic diseases, including systemic lupus erythematosus (SLE), epigenetic dysregulation might enable viral entry, replication, and a disproportionate immune response to SARS-CoV-2.[@bib59]

Entry of the Virus into the Cells {#sec7.2}
---------------------------------

Angiotensin-converting enzyme 2 (ACE2) is a zinc-containing metalloenzyme located on the surface of endothelial and other cells that counters the activity of the related angiotensin-converting enzyme (ACE) by reducing the amount of angiotensin-II.[@bib60] ACE2 serves as the entry point into cells for NL63 and SARS-CoV, and recent studies indicate that ACE2 is also likely to be the receptor for SARS-CoV-2 and the key region responsible for the interaction.[@bib61] ^,^ [@bib62] Differences in the distribution, maturation, and functioning of ACE2 during childhood development are a possible reason for milder SARS-CoV-2 infection.
Newborn infants and children have higher ACE activities, with serum levels showing an increase until puberty and a progressive reduction after maturity.[@bib63] In contrast, ACE2 expression in rat lung has been found to decrease dramatically with age.[@bib64] Studies have provided evidence that ACE2 also protects against the severe acute lung injury that can be triggered by sepsis, SARS, and avian influenza A H5N1 virus infection.[@bib65] It may be that children are protected against SARS-CoV-2 because ACE2 is less mature at younger ages. Epigenetic alteration of ACE2, which is further exacerbated by virus infections, is another potential mechanism in the severity of COVID-19 in patients with chronic diseases such as SLE.[@bib59] Another aspect in the variability of severity is the genetic variation of ACE among different populations. The D/I polymorphism in ACE1, an enzyme with amino acid identity and function similar to ACE2, could explain the varying rate of COVID-19 infection between European countries; specifically, the prevalence of COVID-19 infections has been shown to be correlated with the ACE D allele frequency.[@bib66]

Immune Antiviral Response {#sec7.3}
-------------------------

Frequent exposure of children to viral infections boosts the immune system and possibly enhances the response to SARS-CoV-2, and the presence of other concurrent viruses in the airway mucosa may limit the replication and the viral load of SARS-CoV-2.[@bib67] It has been shown that the number of viral copies is correlated with the severity of COVID-19.[@bib68] The immune system undergoes significant changes from birth to adulthood, especially in lymphocyte biology,[@bib69] and the interaction of lymphocytes with SARS-CoV-2 may be different in children from that in adults.
It is of note that, when documented, lymphocytopenia is frequent in adults with COVID-19 (83%)[@bib70] but not in children (3%).[@bib30] ^,^ [@bib45] However, in the 2003 SARS epidemic, lymphocytopenia was reported in 77% of infected children.[@bib15] The changing level of T lymphocytes with age may also be a reason for the mild disease phenotype in childhood.[@bib71]

The interferon-mediated response to HCoVs is essential for the disease course. Virus-induced suppression of interferon-induced pathways leads to viral replication and disease progression, along with the production of other proinflammatory cytokines, such as IL-2, IL-6, and tumor necrosis factor, in the lower respiratory tract and other tissues.[@bib72] In some cases, the increase in cytokine levels is uncontrolled, leading to a detrimental cytokine storm, with a poor outcome.[@bib73] The percentage of children with COVID-19 with increased levels of inflammatory markers is reported to be low, and this could be a cofactor for nonsevere disease.[@bib45] In contrast, an unusual immune response accompanied by cytokine storm and macrophage activation is thought to result in MIS, which has been linked to COVID-19 in children.[@bib24]

Another immunologic aspect that could be related to the mild disease in children is trained innate immunity, resulting from the routine use of various vaccines, including bacillus Calmette-Guérin (BCG). BCG vaccination induces epigenetic changes in monocytes and increased cytokine production in response to several different pathogens.[@bib74] In mice, BCG also enhances nonspecific defense against influenza virus infection.[@bib75]

Several studies have identified links between inadequate vitamin D concentrations and the development of upper and lower respiratory tract infections in infants and young children.
Although the mechanism of the vitamin D effect on immunity is complex, currently available data support the hypothesis that cathelicidins and defensins can reduce viral replication rates and the levels of proinflammatory cytokines.[@bib76] Studies in small children with influenza have shown that high doses of vitamin D resulted in fast relief from symptoms, a rapid decrease in viral load, and early disease recovery. In addition, high daily doses of vitamin D have been shown to be effective in the prevention of seasonal influenza.[@bib77]

Summary {#sec8}
=======

Although children are less susceptible to COVID-19, and the clinical picture in childhood is often distinct from that in adults, in both age groups chronic underlying medical problems can predispose to severe disease. In contrast with adults, in whom older age is an independent risk factor for severity and mortality, very young age is considered a risk factor for severity in children, although this has recently been questioned, and MIS occurs in older children. Although a distinct pattern of laboratory findings has not emerged as being associated with severity of the disease in pediatric cases of COVID-19, lymphopenia seems to be a risk factor for severe disease in children. Increased levels of the inflammatory markers procalcitonin and CRP could be caused by a bacterial coinfection as a complication of COVID-19. The recently described pediatric MIS seems to be the result of a continuous immune response rather than an injury from an acute SARS-CoV-2 infection, but further studies are needed to reach definitive conclusions. Several other aspects could be implicated in the severity of COVID-19 in children, such as coinfection with RSV, responsiveness of the immune system, vaccination history, levels of vitamin D, and genetic polymorphisms, but the present paucity of data limits the ability to draw such conclusions.
It is important to further study the potential risk factors for severe disease in children and to clarify the underlying mechanisms in order to improve the management of children with COVID-19 and to help in the development of new forms of treatment.

Contributors {#sec9}
============

S. Tsabouri and A. Makis designed the study, and S. Tsabouri, A. Makis, and C. Kosmeri did the literature search. A. Makis, C. Kosmeri, and E. Siomou were responsible for the data collection. S. Tsabouri and C. Kosmeri collected and analyzed the data. S. Tsabouri, A. Makis, C. Kosmeri, and E. Siomou analyzed the data and wrote the article.

Funding: None.

Conflicts of interest: The authors have no conflicts to disclose.

[^1]: Equal contributors.

[^2]: *Abbreviations:* CDC, Centers for Disease Control and Prevention; CRP, C-reactive protein; CT, computed tomography; DM, diabetes mellitus; ICU, intensive care unit; LDH, lactate dehydrogenase; MB, myocardial band; NT-proBNP, N-terminal pro--B-type natriuretic peptide; proBNP, pro--B-type natriuretic peptide; RSV, respiratory syncytial virus.
76 F.3d 372

NOTICE: Fourth Circuit Local Rule 36(c) states that citation of unpublished dispositions is disfavored except for establishing res judicata, estoppel, or the law of the case and requires service of copies of cited unpublished dispositions of the Fourth Circuit.

Lawrence DUKES, Plaintiff-Appellant,
v.
W.K. JONES; Alvin Newman, Defendants-Appellees.

No. 95-7334.

United States Court of Appeals, Fourth Circuit.

Submitted Jan. 18, 1996.
Decided Feb. 1, 1996.

Appeal from the United States District Court for the Eastern District of North Carolina, at Raleigh. James C. Fox, Chief District Judge. (CA-95-232-5-F)

Before HAMILTON and LUTTIG, Circuit Judges, and CHAPMAN, Senior Circuit Judge.

Lawrence Dukes, Appellant Pro Se.

E.D.N.C.

AFFIRMED.

PER CURIAM:

1

Appellant appeals from the district court's order denying relief on his 42 U.S.C. § 1983 (1988) complaint. We have reviewed the record and the district court's opinion and find no reversible error. Accordingly, we affirm on the reasoning of the district court. Dukes v. Jones, No. CA-95-232-5-F (E.D.N.C. Aug. 9, 1995). We dispense with oral argument because the facts and legal contentions are adequately presented in the materials before the court and argument would not aid the decisional process.

AFFIRMED
India at the 2016 Summer Olympics

India competed at the 2016 Summer Olympics in Rio de Janeiro, Brazil, from 5 to 21 August 2016. Indian athletes have appeared in every edition of the Summer Olympics since 1920, although they made their official debut at the 1900 Summer Olympics in Paris. 117 Indian athletes participated in Rio 2016, 63 men and 54 women, across 15 sports at the Games. It was the nation's largest ever delegation sent to the Olympics, due to the historic comeback of the women's field hockey squad after 36 years and the proliferation of track and field athletes making the cut. Among the sporting events represented by its athletes, India made its Olympic debut in golf (new to the 2016 Games) and women's artistic gymnastics.

The Indian roster featured three Olympic medalists from London, including badminton star Saina Nehwal, freestyle wrestler and four-time Olympian Yogeshwar Dutt, and rifle shooter Gagan Narang. Tennis ace and 1996 bronze medalist Leander Paes topped the roster lineup by competing at his record seventh Olympics, while air rifle marksman Abhinav Bindra, who became the nation's first and only individual gold medalist in history (2008), led the Indian delegation as the flag bearer in the opening ceremony at his fifth consecutive Games. Other notable Indian athletes also included tennis player Sania Mirza in the women's doubles, artistic gymnast and Commonwealth Games bronze medalist Dipa Karmakar, and multiple-time world medalist Jitu Rai in men's pistol shooting.

India left Rio de Janeiro with only two medals, saving its pride from the humiliation of returning empty-handed for the first time since Barcelona 1992. These medals were awarded only to female athletes for the first time in history: a silver to badminton player P. V. Sindhu in the women's singles, and a bronze to freestyle wrestler Sakshi Malik in the women's 58 kg.
Several Indian athletes came close to increasing the medal haul, including tennis tandem Mirza and Rohan Bopanna in the mixed doubles; Bindra, who narrowly missed out on the podium by a half-point in the men's 10 m air rifle before retiring from the sport; and Karmakar, who surprised the global audience with her high-risk Produnova routine in the women's vault. The Indian shooters failed to earn a single medal for the first time since 2004, and the boxers for the first time since 2012.

Medalists

Competitors

The following is the list of the number of competitors participating in the Games. Note that reserves in fencing, field hockey, football, and handball are not counted as athletes.

Archery

Three Indian women's archers and one Indian men's archer qualified after having secured a top-eight finish in the women's team recurve event and the men's individual event at the 2015 World Archery Championships in Copenhagen, Denmark.

Athletics

Indian athletes achieved the qualifying standard in the following athletic events (up to a maximum of 3 athletes in each event).

Indian shot putter Inderjeet Singh and 200 metres sprinter Dharambir Singh were suspended from participating in the Olympics after having failed both of the administered doping tests.

Key
Note–Ranks given for track events are within the athlete's heat only
Q = Qualified for the next round
q = Qualified for the next round as a fastest loser or, in field events, by position without achieving the qualifying target
NR = National record
SB = Seasonal best
N/A = Round not applicable for the event
Bye = Athlete not required to compete in round

Men
Track & road events
* Reserves in the relay team.

Field events

Women
Track & road events
* Reserves in the relay team.
Field events

Badminton

India has qualified seven badminton players for each of the following events at the Olympics, based on their BWF World Rankings as of 5 May 2016:

Men

Women

Boxing

India has entered three boxers to compete in each of the following classes in the Olympic boxing tournament. London 2012 Olympian Shiva Thapa claimed his Olympic spot with a semifinal victory at the 2016 Asia & Oceania Qualification Tournament in Qian'an, China, while Manoj Kumar and Vikas Krishan Yadav secured additional places on the Indian roster with their quarterfinal triumphs at the 2016 AIBA World Qualifying Tournament in Baku, Azerbaijan.

Field hockey

Summary

Key: FT – After full time. P – Match decided by penalty shootout.

Men's tournament

The India men's field hockey team qualified for the Olympics by receiving a berth and earning the gold medal at the 2014 Asian Games in Incheon.

Team roster

Group play

Quarterfinal

Women's tournament

The India women's field hockey team qualified for the Olympics by achieving a top-five finish at the 2014–15 Women's FIH Hockey World League Semifinals, signifying its historic Olympic comeback after 36 years.

Team roster

Golf

India has entered three golfers into the Olympics. Anirban Lahiri (rank 62), Shiv Chawrasia (rank 207), and Aditi Ashok (rank 444) qualified directly among the top 60 players for their respective individual events, based on the IGF World Rankings as of 11 July 2016.

Gymnastics

Artistic

India has qualified one artistic gymnast for the Olympic competition for the first time since 1964. Dipa Karmakar became the first Indian female ever to book an Olympic spot in the apparatus events (vault, balance beam, uneven bars, and floor exercise) and the all-around event at the Olympic Test Event in Rio de Janeiro.

Women

Judo

India has qualified one judoka in the men's middleweight category (90 kg) for the Olympics.
Avtar Singh earned a continental quota from the Asian region as the highest-ranked Indian judoka outside of a direct qualifying position in the IJF World Ranking List of 30 May 2016.

Rowing

India has qualified one boat in the men's single sculls for the Olympics at the 2016 Asia & Oceania Continental Qualification Regatta in Chungju, South Korea.

Qualification Legend: FA=Final A (medal); FB=Final B (non-medal); FC=Final C (non-medal); FD=Final D (non-medal); FE=Final E (non-medal); FF=Final F (non-medal); SA/B=Semifinals A/B; SC/D=Semifinals C/D; SE/F=Semifinals E/F; QF=Quarterfinals; R=Repechage

Shooting

Indian shooters have achieved quota places for the following events by virtue of their best finishes at the 2014 and 2015 ISSF World Championships, the 2015 ISSF World Cup series, and the Asian Championships, as long as they obtained a minimum qualifying score (MQS) by 31 March 2016. On 19 March 2016, the National Rifle Association of India (NRAI) announced the squad of eleven Indian shooters for the Games, featuring four-time Olympian and Beijing 2008 air rifle champion Abhinav Bindra, London 2012 bronze medalist Gagan Narang, and multiple-time Worlds medalist Jitu Rai. Aiming to appear at his fourth Olympics, Manavjit Singh Sandhu became the twelfth Indian to join the team, as the NRAI decided to exchange a spot in the 50 m rifle 3 positions (won by Sanjeev Rajput) for the men's trap.

Men

Women

Qualification Legend: Q = Qualify for the next round; q = Qualify for the bronze medal (shotgun)

Swimming

India has received a Universality invitation from FINA to send two swimmers (one male and one female) to the Olympics.

Table tennis

India has entered four athletes into the table tennis competition at the Games.
2012 Olympian Soumyajit Ghosh and Manika Batra secured Olympic spots in the men's and women's singles, respectively, as the highest-ranked players coming from the South Asia zone, while Sharath Kamal Achanta and 2004 Olympian Mouma Das each scored a second-stage draw victory to take the remaining spots on the Indian team at the Asian Qualification Tournament in Hong Kong.

Tennis

India has entered four tennis players into the Olympic tournament. Sania Mirza (world no. 1) and Rohan Bopanna (world no. 10) teamed up with their partners Prarthana Thombare and six-time Olympian Leander Paes, respectively, in the women's and men's doubles by virtue of their top-10 WTA and ATP rankings as of 6 June 2016.

Weightlifting

India has qualified one male and one female weightlifter for the Rio Olympics by virtue of a top-seven national finish (for men) and a top-six finish (for women), respectively, at the 2016 Asian Championships.

Wrestling

India has qualified eight wrestlers in each of the following weight categories for the Olympics. One Olympic spot in the men's freestyle 74 kg was earned at the 2015 World Championships, while two more Olympic places were awarded to Indian wrestlers who progressed to the top two finals at the 2016 Asian Qualification Tournament. Three further wrestlers claimed the remaining Olympic slots at separate World Qualification Tournaments: one in the men's freestyle 57 kg at the initial meet in Ulaanbaatar, and two more, one each in the women's freestyle 48 kg and 58 kg, at the final meet in Istanbul. On 11 May 2016, United World Wrestling awarded two additional Olympic licenses to India in the men's Greco-Roman 85 kg and women's freestyle 53 kg, after doping violations were discovered among the seven qualified wrestlers.

Freestyle wrestler Narsingh Pancham Yadav, who had qualified for the men's 74 kg event, failed both the A and B sample doping tests on 25 June and 5 July.
He was provisionally replaced by Parveen Rana but was later reinstated on 3 August, when the National Anti-Doping Agency of India gave him a clean record on the grounds that he had been a victim of sabotage. However, the World Anti-Doping Agency appealed against this decision to drop the doping charges, following which Yadav was suspended for four years and disqualified from the Olympics by the Court of Arbitration on 18 August.

Key: VT – Victory by fall. PP – Decision by points – the loser with technical points. PO – Decision by points – the loser without technical points. ST – Technical superiority – the loser without technical points and a margin of victory of at least 8 (Greco-Roman) or 10 (freestyle) points. VB – Victory by injury

Men's freestyle

Men's Greco-Roman

Women's freestyle

See also

India at the 2016 Summer Paralympics
2016 Summer Olympics medal table
India at the Olympics

References

External links

Olympics 2016
Category:Nations at the 2016 Summer Olympics
A "drive-through" is a type of service provided by a business, such as a fast-food restaurant, bank, pharmacy, or coffee shop, that permits a customer to purchase a product without leaving their vehicle. Such drive-through services provide the customer with fast and convenient service while increasing the number of customers that may be served relative to conventional walk-in transactions. Orders are generally placed utilizing a microphone and picked up in person at the window. As the order is being placed, an order-taker enters the order information into an order management system. The order information can be displayed on a display so that the order can be assembled by a runner.

Conventionally, ordering paradigms utilize a single-queue approach that makes customers with small, quick orders wait behind customers with large, complex orders. The problem associated with such an approach is that the vehicles can get out of order between the time the order is placed and the time the vehicle receives the product. Additionally, such prior art approaches do not ensure that the correct product is being delivered to the vehicle that placed the order, which further reduces order accuracy and efficiency. Such problems are exacerbated in highly trafficked locations where multiple lanes of order placement exist for each order-processing window, resulting in decreased customer satisfaction and significant loss of revenues.

Based on the foregoing, it is believed that a need exists for an improved system and method for providing signature-based drive-through order tracking, as described in greater detail herein.
Reversing oliguria in critically ill patients. Oliguria is a common occurrence in the ICU setting. In patients with preserved renal function, fluid challenges or low doses of diuretics are generally successful. In patients with oliguric renal failure, it is still essential to ensure adequate intravascular fluid volume, especially in critically ill patients. Loop diuretics remain the mainstay of treatment. When diuretic resistance is encountered, physicians should consider further optimization of hemodynamics, alternative loop diuretics, and combined drug therapy. In some cases, continuous renal replacement therapy can be very effective. Yet, while these interventions can help reduce the morbidity of severe volume overload, they have not been shown to improve mortality rates.
Q:

C# ADO.NET UPDATE doesn't change database record, but still returns 1 affected row(s)

I'm trying to update values in a table, and as far as I can tell there is nothing wrong with the code. It even returns 1 row(s) affected as wanted, but when I look in the database the record has not changed. I'd appreciate any help you could offer.

    public void UpdateContactInDB(int IDtoUpdate, string editedColumn, string value)
    {
        using (connection)
        {
            SqlCommand command = new SqlCommand("UPDATE ContactSet SET @column = @value WHERE Id = @ID", connection);
            command.Parameters.AddWithValue("@column", editedColumn);
            command.Parameters.AddWithValue("@value", value);
            command.Parameters.AddWithValue("@ID", IDtoUpdate);
            connection.Open();
            int rowsaffected = command.ExecuteNonQuery();
            MessageBox.Show("Rows affected: " + rowsaffected.ToString());
        }
    }

A:

I don't think this works how you are anticipating. The query:

    UPDATE ContactSet SET @column = @value WHERE Id = @ID

When executed, this does not do a 'string replacement' with your parameters; e.g., it does not translate to:

    UPDATE ContactSet SET MyColumn = 1 WHERE Id = 789

Instead, what is happening is that you are updating the SQL parameter @column with the value of the parameter @value wherever there is a matching row in the database with Id = @ID. You are getting '1 row affected' because there is a matching row for your update, but this is not actually changing anything within your ContactSet table.
You could do this like so (note that it must be string.Format, with a capital F — C# is case-sensitive, so string.format will not compile):

    public void UpdateContactInDB(int IDtoUpdate, string editedColumn, string value)
    {
        using (connection)
        {
            SqlCommand command = new SqlCommand(
                string.Format("UPDATE ContactSet SET {0} = @value WHERE Id = @ID", editedColumn),
                connection);
            command.Parameters.AddWithValue("@value", value);
            command.Parameters.AddWithValue("@ID", IDtoUpdate);
            connection.Open();
            int rowsaffected = command.ExecuteNonQuery();
            MessageBox.Show("Rows affected: " + rowsaffected.ToString());
        }
    }

However, you will need to be careful where the string values in editedColumn come from, as this would be open to SQL injection. Better still, write an update that sets every column you might need to change, bind all the appropriate parameters, and you don't need dynamic SQL for this at all.
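If the column name really does need to be dynamic, one common mitigation (a sketch only, not part of the original answer — the column names here are illustrative, not taken from the real ContactSet table) is to validate editedColumn against a whitelist of known columns before building the SQL, so that only vetted identifiers ever reach the formatted query:

    // Hypothetical whitelist of updatable columns; replace with the real
    // column names of your ContactSet table.
    private static readonly HashSet<string> AllowedColumns =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "FirstName", "LastName", "Email", "Phone"
        };

    public void UpdateContactInDB(int IDtoUpdate, string editedColumn, string value)
    {
        // Reject any column name we don't recognize before it can reach the SQL text.
        if (!AllowedColumns.Contains(editedColumn))
            throw new ArgumentException("Unknown column: " + editedColumn);

        using (connection)
        {
            // Only the vetted identifier is interpolated; the value and the
            // key remain ordinary parameters.
            SqlCommand command = new SqlCommand(
                string.Format("UPDATE ContactSet SET [{0}] = @value WHERE Id = @ID", editedColumn),
                connection);
            command.Parameters.AddWithValue("@value", value);
            command.Parameters.AddWithValue("@ID", IDtoUpdate);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }

This keeps the convenience of a single generic update method while ensuring that arbitrary strings can never be injected into the identifier position of the query.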
Meals

Fully cooked breakfast, lunch and dinner are always available to in-house guests on an à la carte basis. Menus vary in accordance with the fresh produce brought in throughout the week. Pies, tarts, cakes and scones are made on the premises to supplement the tasty and wholesome cuisine provided every day.

For day trippers, a range of choices is available for lunch every day of the week, and dinner on Friday and Saturday nights. Sunday evening is our homemade Pizza Night, cooked using our wood-fired oven. An extensive wine list complements our menus.

For special functions, meetings and events, menus may be tailored to your needs and budget.