Dataset columns: qid (int64, 1 to 74.7M), question (string, 15 to 58.3k chars), date (string, 10 chars), metadata (list), response_j (string, 4 to 30.2k chars), response_k (string, 11 to 36.5k chars).
44,015,241
I'm new at Prolog and it is messing up my head. Could you guys give me a simple example, like... the days of the week! Let's say I have a ``` day(mon, tue, wed, thu, fri). ``` and I want to know which day of the week I'm on (assuming on start it'll always be set as "monday"; I don't even know how to do that either, but I think I can figure it out), and I want to set a variable to "tomorrow" (like, if today is Monday I want to know tomorrow is Tuesday). I know it sounds stupid, but I'm used to C and Java and this is so hard for me... Thank you!
2017/05/17
[ "https://Stackoverflow.com/questions/44015241", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1985937/" ]
You would probably want something like ``` day([mon,tue,wed,thu,fri,sat,sun]). ``` so that you have a list to work with. Then Monday is `mon` and you can use ``` Tomorrow = mon ``` but if you want a general rule for the next day, this is not optimal. So maybe it would be better to have a predicate `day/2` instead of `day/1`: ``` day(mon,tue). day(tue,wed). day(wed,thu). day(thu,fri). day(fri,sat). day(sat,sun). day(sun,mon). ``` Now you can just call `day(mon, Tomorrow)` to get `Tomorrow = tue`. And if you want the list of all the days you can say ``` findall(X,day(X,_),Days) ``` You can modify the rules (e.g. `day/2`) using `assert/1` and `retract/1`.
Here comes the canonical answer—based on [`append/3`](https://www.complang.tuwien.ac.at/ulrich/iso-prolog/prologue#append): ``` today_tomorrow(T0, T1) :- Days = [mon,tue,wed,thu,fri,sat,sun,mon], append(_, [T0,T1|_], Days). ``` Let's ask the *most general query*! ``` ?- today_tomorrow(X, Y). X = mon, Y = tue ; X = tue, Y = wed ; X = wed, Y = thu ; X = thu, Y = fri ; X = fri, Y = sat ; X = sat, Y = sun ; X = sun, Y = mon ; false. % no more solutions ```
44,015,241
If you don't mind using libraries, you can also do something like this: ``` :- use_module(library(clpfd)). ord_weekday(0, mon). ord_weekday(1, tue). ord_weekday(2, wed). ord_weekday(3, thu). ord_weekday(4, fri). ord_weekday(5, sat). ord_weekday(6, sun). day_next(D, N) :- (X+1) mod 7 #= Y, ord_weekday(X, D), ord_weekday(Y, N). ``` First we mapped 0 to Monday, 1 to Tuesday, and so on; then, we mapped 0 to 1, 1 to 2, ..., 6 to 0. Now you can query this like so: ``` ?- day_next(mon, X). X = tue. ?- day_next(X, mon). X = sun. ``` Importantly, you can leave both arguments free variables and enumerate all possible combinations: ``` ?- day_next(D, N). D = mon, N = tue ; D = tue, N = wed ; D = wed, N = thu ; D = thu, N = fri ; D = fri, N = sat ; D = sat, N = sun ; D = sun, N = mon. ``` This gives you the exact same results as [this solution](https://stackoverflow.com/a/44015672/1812457). I would prefer the other solution for this particular problem (next day of the week), but there might be something else to be learned from the example here.
This may help you to get started: ``` days([mon,tue,wed,thu,fri,sat,sun]). tomorrow(X,Y) :- days(Z), tomorrow(X,Y,Z). tomorrow(X,Y,[X,Y|_]). tomorrow(X,Y,[_|Z]) :- tomorrow(X,Y,Z). ?- tomorrow(X,tue), write(X), nl. ``` It writes out `mon`.
44,015,241
You would probably want something like ``` day([mon,tue,wed,thu,fri,sat,sun]). ``` so that you have a list to work with. Then Monday is `mon` and you can use ``` Tomorrow = mon ``` but if you want a general rule for the next day, this is not optimal. So maybe it would be better to have a predicate `day/2` instead of `day/1`: ``` day(mon,tue). day(tue,wed). day(wed,thu). day(thu,fri). day(fri,sat). day(sat,sun). day(sun,mon). ``` Now you can just call `day(mon, Tomorrow)` to get `Tomorrow = tue`. And if you want the list of all the days you can say ``` findall(X,day(X,_),Days) ``` You can modify the rules (e.g. `day/2`) using `assert/1` and `retract/1`.
This may help you to get started: ``` days([mon,tue,wed,thu,fri,sat,sun]). tomorrow(X,Y) :- days(Z), tomorrow(X,Y,Z). tomorrow(X,Y,[X,Y|_]). tomorrow(X,Y,[_|Z]) :- tomorrow(X,Y,Z). ?- tomorrow(X,tue), write(X), nl. ``` It writes out `mon`.
44,015,241
An approach similar to what's been presented, but with a different means of determining "tomorrow"... Probably the most canonical way to lay out valid days of the week in Prolog would be with individual facts: ``` % Valid days of the week, in order day_of_week(mon). day_of_week(tue). day_of_week(wed). day_of_week(thu). day_of_week(fri). day_of_week(sat). day_of_week(sun). ``` Then a query like `day_of_week(Day)` will succeed for any given valid day of the week `Day`. To determine the day after a given day, you can use a general circular successor predicate: ``` succ(X, [H|T], S) :- succ(X, H, [H|T], S). succ(X, H, [X], H). succ(X, _, [X,Y|_], Y). succ(X, H, [_|T], S) :- succ(X, H, T, S). ``` So determining the next day would be: ``` day_after(Day, NextDay) :- % List valid days of the week in order findall(DayOfWeek, day_of_week(DayOfWeek), DaysOfWeek), succ(Day, DaysOfWeek, NextDay). % NextDay is successor to Day ```
Here comes the canonical answer—based on [`append/3`](https://www.complang.tuwien.ac.at/ulrich/iso-prolog/prologue#append): ``` today_tomorrow(T0, T1) :- Days = [mon,tue,wed,thu,fri,sat,sun,mon], append(_, [T0,T1|_], Days). ``` Let's ask the *most general query*! ``` ?- today_tomorrow(X, Y). X = mon, Y = tue ; X = tue, Y = wed ; X = wed, Y = thu ; X = thu, Y = fri ; X = fri, Y = sat ; X = sat, Y = sun ; X = sun, Y = mon ; false. % no more solutions ```
44,015,241
If you don't mind using libraries, you can also do something like this: ``` :- use_module(library(clpfd)). ord_weekday(0, mon). ord_weekday(1, tue). ord_weekday(2, wed). ord_weekday(3, thu). ord_weekday(4, fri). ord_weekday(5, sat). ord_weekday(6, sun). day_next(D, N) :- (X+1) mod 7 #= Y, ord_weekday(X, D), ord_weekday(Y, N). ``` First we mapped 0 to Monday, 1 to Tuesday, and so on; then, we mapped 0 to 1, 1 to 2, ..., 6 to 0. Now you can query this like so: ``` ?- day_next(mon, X). X = tue. ?- day_next(X, mon). X = sun. ``` Importantly, you can leave both arguments free variables and enumerate all possible combinations: ``` ?- day_next(D, N). D = mon, N = tue ; D = tue, N = wed ; D = wed, N = thu ; D = thu, N = fri ; D = fri, N = sat ; D = sat, N = sun ; D = sun, N = mon. ``` This gives you the exact same results as [this solution](https://stackoverflow.com/a/44015672/1812457). I would prefer the other solution for this particular problem (next day of the week), but there might be something else to be learned from the example here.
You would probably want something like ``` day([mon,tue,wed,thu,fri,sat,sun]). ``` so that you have a list to work with. Then Monday is `mon` and you can use ``` Tomorrow = mon ``` but if you want a general rule for the next day, this is not optimal. So maybe it would be better to have a predicate `day/2` instead of `day/1`: ``` day(mon,tue). day(tue,wed). day(wed,thu). day(thu,fri). day(fri,sat). day(sat,sun). day(sun,mon). ``` Now you can just call `day(mon, Tomorrow)` to get `Tomorrow = tue`. And if you want the list of all the days you can say ``` findall(X,day(X,_),Days) ``` You can modify the rules (e.g. `day/2`) using `assert/1` and `retract/1`.
44,015,241
An approach similar to what's been presented, but with a different means of determining "tomorrow"... Probably the most canonical way to lay out valid days of the week in Prolog would be with individual facts: ``` % Valid days of the week, in order day_of_week(mon). day_of_week(tue). day_of_week(wed). day_of_week(thu). day_of_week(fri). day_of_week(sat). day_of_week(sun). ``` Then a query like `day_of_week(Day)` will succeed for any given valid day of the week `Day`. To determine the day after a given day, you can use a general circular successor predicate: ``` succ(X, [H|T], S) :- succ(X, H, [H|T], S). succ(X, H, [X], H). succ(X, _, [X,Y|_], Y). succ(X, H, [_|T], S) :- succ(X, H, T, S). ``` So determining the next day would be: ``` day_after(Day, NextDay) :- % List valid days of the week in order findall(DayOfWeek, day_of_week(DayOfWeek), DaysOfWeek), succ(Day, DaysOfWeek, NextDay). % NextDay is successor to Day ```
This may help you to get started: ``` days([mon,tue,wed,thu,fri,sat,sun]). tomorrow(X,Y) :- days(Z), tomorrow(X,Y,Z). tomorrow(X,Y,[X,Y|_]). tomorrow(X,Y,[_|Z]) :- tomorrow(X,Y,Z). ?- tomorrow(X,tue), write(X), nl. ``` It writes out `mon`.
44,015,241
You would probably want something like ``` day([mon,tue,wed,thu,fri,sat,sun]). ``` so that you have a list to work with. Then Monday is `mon` and you can use ``` Tomorrow = mon ``` but if you want a general rule for the next day, this is not optimal. So maybe it would be better to have a predicate `day/2` instead of `day/1`: ``` day(mon,tue). day(tue,wed). day(wed,thu). day(thu,fri). day(fri,sat). day(sat,sun). day(sun,mon). ``` Now you can just call `day(mon, Tomorrow)` to get `Tomorrow = tue`. And if you want the list of all the days you can say ``` findall(X,day(X,_),Days) ``` You can modify the rules (e.g. `day/2`) using `assert/1` and `retract/1`.
### program ``` ([user]) . %%%% database-style prolog program . %%% for day , tomorrow , yesterday . day( day( sk( 10'7 ) , pk( 36'sun ) , nm( 'sunday' ) , next( 36'mon ) ) ) . day( day( sk( 10'6 ) , pk( 36'sat ) , nm( 'saturday' ) , next( 36'sun ) ) ) . day( day( sk( 10'5 ) , pk( 36'fri ) , nm( 'friday' ) , next( 36'sat ) ) ) . day( day( sk( 10'4 ) , pk( 36'thu ) , nm( 'thursday' ) , next( 36'fri ) ) ) . day( day( sk( 10'3 ) , pk( 36'wed ) , nm( 'wednesday' ) , next( 36'thu ) ) ) . day( day( sk( 10'2 ) , pk( 36'tue ) , nm( 'tuesday' ) , next( 36'wed ) ) ) . day( day( sk( 10'1 ) , pk( 36'mon ) , nm( 'monday' ) , next( 36'tue ) ) ) . tomorrow( tomorrow( day( _tomorrow_ ) ) , today( day( _today_ ) ) ) :- ( _today_ = ( day( sk( _ ) , pk( _ ) , nm( _ ) , next( _next_ ) ) ) , _tomorrow_ = ( day( sk( _ ) , pk( _next_ ) , nm( _ ) , next( _ ) ) ) , day( _today_ ) , day( _tomorrow_ ) ) . yesterday( yesterday( day( _yesterday_ ) ) , today( day( _today_ ) ) ) :- ( tomorrow( tomorrow( day( _today_ ) ) , today( day( _yesterday_ ) ) ) ) . %%% legend %% pk --- primary key %% sk --- secondary (sort) key %% nm --- name %% 10'2 --- a number whereby each digit has 10 possibilities ( [0-9] ) %% 16'ff --- a number whereby each digit has 16 possibilities ( [0-9,a-f] ) %% 36'mon --- a number whereby each digit has 36 possibilities ( [0-9,a-z] ) ``` ### example usage ``` %%%% example usage %% query for all day ?- day( DAY ) . %@ DAY = day(sk(7),pk(37391),nm(sunday),next(29399)) ? ; %@ DAY = day(sk(6),pk(36677),nm(saturday),next(37391)) ? ; %@ DAY = day(sk(5),pk(20430),nm(friday),next(36677)) ? ; %@ DAY = day(sk(4),pk(38226),nm(thursday),next(20430)) ? ; %@ DAY = day(sk(3),pk(41989),nm(wednesday),next(38226)) ? ; %@ DAY = day(sk(2),pk(38678),nm(tuesday),next(41989)) ? ; %@ DAY = day(sk(1),pk(29399),nm(monday),next(38678)) %% query for all day , sorted ?- setof( _day_ , day( _day_ ) , VECTOR ) . 
%@ VECTOR = %@ [ %@ day(sk(1),pk(29399),nm(monday),next(38678)) , %@ day(sk(2),pk(38678),nm(tuesday),next(41989)) , %@ day(sk(3),pk(41989),nm(wednesday),next(38226)) , %@ day(sk(4),pk(38226),nm(thursday),next(20430)) , %@ day(sk(5),pk(20430),nm(friday),next(36677)) , %@ day(sk(6),pk(36677),nm(saturday),next(37391)) , %@ day(sk(7),pk(37391),nm(sunday),next(29399)) %@ ] %% query for all day , sorted ?- use_module( library( lists ) ) . % for ``member`` . ?- setof( _day_ , day( _day_ ) , _vector_ ) , member( DAY , _vector_ ) . %@ DAY = day(sk(1),pk(29399),nm(monday),next(38678)) ? ; %@ DAY = day(sk(2),pk(38678),nm(tuesday),next(41989)) ? ; %@ DAY = day(sk(3),pk(41989),nm(wednesday),next(38226)) ? ; %@ DAY = day(sk(4),pk(38226),nm(thursday),next(20430)) ? ; %@ DAY = day(sk(5),pk(20430),nm(friday),next(36677)) ? ; %@ DAY = day(sk(6),pk(36677),nm(saturday),next(37391)) ? ; %@ DAY = day(sk(7),pk(37391),nm(sunday),next(29399)) ? ; %% query for all yesterday ?- _query_ = ( yesterday( yesterday( day( _yesterday_ ) ) , today( day( _today_ ) ) ) ) , setof( [ _yesterday_ , _today_ ] , _query_ , VECTOR ) . %@ VECTOR = %@ [ %@ [ day(sk(1),pk(29399),nm(monday),next(38678)) , day(sk(2),pk(38678),nm(tuesday),next(41989)) ] , %@ [ day(sk(2),pk(38678),nm(tuesday),next(41989)) , day(sk(3),pk(41989),nm(wednesday),next(38226))] , %@ [ day(sk(3),pk(41989),nm(wednesday),next(38226)) , day(sk(4),pk(38226),nm(thursday),next(20430)) ] , %@ [ day(sk(4),pk(38226),nm(thursday),next(20430)) , day(sk(5),pk(20430),nm(friday),next(36677)) ] , %@ [ day(sk(5),pk(20430),nm(friday),next(36677)) , day(sk(6),pk(36677),nm(saturday),next(37391)) ] , %@ [ day(sk(6),pk(36677),nm(saturday),next(37391)) , day(sk(7),pk(37391),nm(sunday),next(29399)) ] , %@ [ day(sk(7),pk(37391),nm(sunday),next(29399)) , day(sk(1),pk(29399),nm(monday),next(38678)) ] %@ ] %% query for all tomorrow . % format results as a json-style map . 
?- _query_ = ( tomorrow( tomorrow( day( _tomorrow_ ) ) , today( day( _today_ ) ) ) , _tomorrow_ = day( _ , _ , nm( _nm_tomorrow_ ) , _ ) , _today_ = day( _ , _ , nm( _nm_today_ ) , _ ) ) , _each_ = ( { tomorrow: _nm_tomorrow_ , today: _nm_today_ } ) , setof( _each_ , _query_ , VECTOR ) . %@ VECTOR = [{tomorrow:friday,today:thursday}] ? ; %@ VECTOR = [{tomorrow:monday,today:sunday}] ? ; %@ VECTOR = [{tomorrow:saturday,today:friday}] ? ; %@ VECTOR = [{tomorrow:sunday,today:saturday}] ? ; %@ VECTOR = [{tomorrow:thursday,today:wednesday}] ? ; %@ VECTOR = [{tomorrow:tuesday,today:monday}] ? ; %@ VECTOR = [{tomorrow:wednesday,today:tuesday}] %% query for the day named monday ?- _nm_ = 'monday' , _day_ = day( sk( _sk_ ) , pk( _pk_ ) , nm( _nm_ ) , _etc_ ) , setof( _day_ , day( _day_ ) , VECTOR ) . %@ VECTOR = %@ [ %@ day(sk(1),pk(29399),nm(monday),next(38678)) %@ ] %% query for the today of the tomorrow named monday ?- _today_ = day( _ , _ , nm( _nm_today_ ) , _ ) , _tomorrow_ = day( _ , _ , nm( _nm_tomorrow_ ) , _ ) , _nm_tomorrow_ = 'monday' , _query_ = tomorrow( tomorrow( day( _tomorrow_ ) ) , today( day( _today_ ) ) ) , _each_ = { tomorrow: _nm_tomorrow_ , today: _nm_today_ } , setof( _each_ , _query_ , VECTOR ) . %@ VECTOR = %@ [ %@ { tomorrow: monday , today: sunday } %@ ] ```
44,015,241
If you don't mind using libraries, you can also do something like this: ``` :- use_module(library(clpfd)). ord_weekday(0, mon). ord_weekday(1, tue). ord_weekday(2, wed). ord_weekday(3, thu). ord_weekday(4, fri). ord_weekday(5, sat). ord_weekday(6, sun). day_next(D, N) :- (X+1) mod 7 #= Y, ord_weekday(X, D), ord_weekday(Y, N). ``` First we mapped 0 to Monday, 1 to Tuesday, and so on; then, we mapped 0 to 1, 1 to 2, ..., 6 to 0. Now you can query this like so: ``` ?- day_next(mon, X). X = tue. ?- day_next(X, mon). X = sun. ``` Importantly, you can leave both arguments free variables and enumerate all possible combinations: ``` ?- day_next(D, N). D = mon, N = tue ; D = tue, N = wed ; D = wed, N = thu ; D = thu, N = fri ; D = fri, N = sat ; D = sat, N = sun ; D = sun, N = mon. ``` This gives you the exact same results as [this solution](https://stackoverflow.com/a/44015672/1812457). I would prefer the other solution for this particular problem (next day of the week), but there might be something else to be learned from the example here.
An approach similar to what's been presented, but with a different means of determining "tomorrow"... Probably the most canonical way to lay out valid days of the week in Prolog would be with individual facts: ``` % Valid days of the week, in order day_of_week(mon). day_of_week(tue). day_of_week(wed). day_of_week(thu). day_of_week(fri). day_of_week(sat). day_of_week(sun). ``` Then a query like `day_of_week(Day)` will succeed for any given valid day of the week `Day`. To determine the day after a given day, you can use a general circular successor predicate: ``` succ(X, [H|T], S) :- succ(X, H, [H|T], S). succ(X, H, [X], H). succ(X, _, [X,Y|_], Y). succ(X, H, [_|T], S) :- succ(X, H, T, S). ``` So determining the next day would be: ``` day_after(Day, NextDay) :- % List valid days of the week in order findall(DayOfWeek, day_of_week(DayOfWeek), DaysOfWeek), succ(Day, DaysOfWeek, NextDay). % NextDay is successor to Day ```
9,533,484
In the jQuery source: ``` eq: function( i ) { i = +i; return i === -1 ? this.slice( i ) : this.slice( i, i + 1 ); }, ``` Is it used to make sure `i` is parsed to an int?
2012/03/02
[ "https://Stackoverflow.com/questions/9533484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/524711/" ]
> > Is it used to make sure `i` is parsed to an int? > > > No, it is to make sure that `i` is a **number** (either `float` or `int`). Given what the function is doing, it might have been better to convert `i` to a non-decimal value, though; I'm not sure how `slice` handles decimals. More information: [`MDN - Arithmetic Operators`](https://developer.mozilla.org/en/JavaScript/Reference/Operators/Arithmetic_Operators#.2B_%28Unary_Plus%29)
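On the point left open above: a quick check (mine, not the answer author's) shows that `slice` truncates a fractional index toward zero, so a decimal index behaves like its integer part, while unary `+` itself preserves the fraction:

```javascript
// slice converts its arguments with ToInteger, so 1.7 acts like 1.
var arr = [10, 20, 30, 40];
console.log(arr.slice(1.7)); // [20, 30, 40], same as arr.slice(1)

// Unary + alone does not truncate: the result is a Number, not an int.
console.log(+"3.5"); // 3.5
```

So the jQuery code works with fractional input too; `slice` quietly does the truncation.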
Almost, but *any* number is fine. > > `[ECMA-262: 11.4.6]:` The unary `+` operator converts its operand to `Number` type. > > > The production *UnaryExpression : + UnaryExpression* is evaluated as follows: > > > 1. Let *expr* be the result of evaluating UnaryExpression. > 2. Return ToNumber(GetValue(*expr*)). > > >
9,533,484
Almost, but *any* number is fine. > > `[ECMA-262: 11.4.6]:` The unary `+` operator converts its operand to `Number` type. > > > The production *UnaryExpression : + UnaryExpression* is evaluated as follows: > > > 1. Let *expr* be the result of evaluating UnaryExpression. > 2. Return ToNumber(GetValue(*expr*)). > > >
> > `+i` makes sure that `i` is parsed into a number. > > >
9,533,484
> > Is it used to make sure `i` is parsed to an int? > > > No, it is to make sure that `i` is a **number** (either `float` or `int`). Given what the function is doing, it might have been better to convert `i` to a non-decimal value, though; I'm not sure how `slice` handles decimals. More information: [`MDN - Arithmetic Operators`](https://developer.mozilla.org/en/JavaScript/Reference/Operators/Arithmetic_Operators#.2B_%28Unary_Plus%29)
Yes, it will make sure it is a number (not necessarily an int; any number in general, as @Felix says). Try this code out: ``` var i = "2"; console.log(i === 2); // false console.log(+i === 2); // true ```
9,533,484
> > Is it used to make sure `i` is parsed to an int? > > > No, it is to make sure that `i` is a **number** (either `float` or `int`). Given what the function is doing, it might have been better to convert `i` to a non-decimal value, though; I'm not sure how `slice` handles decimals. More information: [`MDN - Arithmetic Operators`](https://developer.mozilla.org/en/JavaScript/Reference/Operators/Arithmetic_Operators#.2B_%28Unary_Plus%29)
Yes, applying the unary `+` to a variable ensures that if it's some other type (a string, for instance), it gets [converted to a number](http://es5.github.com/#x9.3) (not an int, JavaScript doesn't have ints although it uses them internally for some operations). The number can be fractional, and if it's a string it's parsed in the usual way JavaScript parses numbers (so for instance, `+"0x10"` is `16` decimal).
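A few illustrative checks of that conversion behavior (my examples, assuming any ES5+ engine):

```javascript
console.log(+"0x10"); // 16: a hex string parses as a number
console.log(+"1.25"); // 1.25: fractions survive, nothing is truncated
console.log(+true);   // 1: ToNumber applies to non-strings too
console.log(+"abc");  // NaN: unparseable strings become NaN
```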
9,533,484
> > Is it used to make sure `i` is parsed to an int? > > > No, it is to make sure that `i` is a **number** (either `float` or `int`). Given what the function is doing, it might have been better to convert `i` to a non-decimal value, though; I'm not sure how `slice` handles decimals. More information: [`MDN - Arithmetic Operators`](https://developer.mozilla.org/en/JavaScript/Reference/Operators/Arithmetic_Operators#.2B_%28Unary_Plus%29)
> > `+i` makes sure that `i` is parsed into a number. > > >
9,533,484
Yes, it will make sure it is a number (not necessarily an int; any number in general, as @Felix says). Try this code out: ``` var i = "2"; console.log(i === 2); // false console.log(+i === 2); // true ```
> > `+i` makes sure that `i` is parsed into a number. > > >
9,533,484
Yes, applying the unary `+` to a variable ensures that if it's some other type (a string, for instance), it gets [converted to a number](http://es5.github.com/#x9.3) (not an int, JavaScript doesn't have ints although it uses them internally for some operations). The number can be fractional, and if it's a string it's parsed in the usual way JavaScript parses numbers (so for instance, `+"0x10"` is `16` decimal).
> > `+i` makes sure that `i` is parsed into a number. > > >
59,416,308
**Json** *table\_data* ``` [ { "country": "country one", "city": "city one", "Start Time": "21th December 2019 09:00 AM", "End time": "22th December 2019 03:00 PM" }, { "country": "country two", "city": "city two", "Start Time": "23th December 2019 09:00 AM", "End time": "23th December 2019 03:00 PM" }, { "country": "country three", "city": "city three", "Start Time": "24th December 2019 09:00 AM", "End time": "24th December 2019 03:00 PM" } ] ``` So what I want to do is look for a country, and once I find it, print the data related to it. I have done this so far: ``` for dt in table_data:# this is not correct i think if("country one" == dt): print() ``` I am not sure how to approach this in Python; any assistance would be appreciated.
2019/12/19
[ "https://Stackoverflow.com/questions/59416308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11495299/" ]
Try: ``` import json data = json.loads(""" [ { "country": "country one", "city": "city one", "Start Time": "21th December 2019 09:00 AM", "End time": "22th December 2019 03:00 PM" }, { "country": "country two", "city": "city two", "Start Time": "23th December 2019 09:00 AM", "End time": "23th December 2019 03:00 PM" }, { "country": "country three", "city": "city three", "Start Time": "24th December 2019 09:00 AM", "End time": "24th December 2019 03:00 PM" } ] """) for this_country in data: if this_country['country'] == 'country two': print("\n".join([": ".join(k) for k in this_country.items()])) ``` Output: ``` country: country two city: city two Start Time: 23th December 2019 09:00 AM End time: 23th December 2019 03:00 PM ```
Something like this. ``` for data in table_data: if data['country'] == 'country one': print(data) ```
59,416,308
Try: ``` import json data = json.loads(""" [ { "country": "country one", "city": "city one", "Start Time": "21th December 2019 09:00 AM", "End time": "22th December 2019 03:00 PM" }, { "country": "country two", "city": "city two", "Start Time": "23th December 2019 09:00 AM", "End time": "23th December 2019 03:00 PM" }, { "country": "country three", "city": "city three", "Start Time": "24th December 2019 09:00 AM", "End time": "24th December 2019 03:00 PM" } ] """) for this_country in data: if this_country['country'] == 'country two': print("\n".join([": ".join(k) for k in this_country.items()])) ``` Output: ``` country: country two city: city two Start Time: 23th December 2019 09:00 AM End time: 23th December 2019 03:00 PM ```
I suggest you start with something simpler. Let's say you have a `country` object: ``` country = { "country": "country one", "city": "city one", "Start Time": "21th December 2019 09:00 AM", "End time": "22th December 2019 03:00 PM" } ``` Now we want a function to print this object: ``` def print_country(c): pass ``` The first step is to figure out how to fill in the details of this function.
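One way the exercise could be completed, as a sketch (the `format_country` helper and its name are mine, not from the answer):

```python
def format_country(c):
    # Build one "key: value" line per field, in the dict's insertion order.
    return "\n".join(f"{key}: {value}" for key, value in c.items())

def print_country(c):
    print(format_country(c))

country = {
    "country": "country one",
    "city": "city one",
    "Start Time": "21th December 2019 09:00 AM",
    "End time": "22th December 2019 03:00 PM",
}
print_country(country)
```

Once `print_country` works for a single object, the original loop reduces to calling it for each matching element of `table_data`.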
48,558,073
I have followed this link to train YOLO with my own dataset. I'm not using the CIFAR-10 data. <https://pjreddie.com/darknet/train-cifar/> This is the cfg file, named cifar.cfg: ``` [net] batch=128 subdivisions=1 height=28 width=28 channels=3 max_crop=32 min_crop=32 hue=.1 saturation=.75 exposure=.75 learning_rate=0.001 policy=poly power=4 max_batches =1000 momentum=0.9 decay=0.0005 [convolutional] batch_normalize=1 filters=32 size=3 stride=1 pad=1 activation=leaky [maxpool] size=2 stride=2 [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=leaky [maxpool] size=2 stride=2 [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=leaky [convolutional] filters=4 size=1 stride=1 pad=1 activation=leaky [avgpool] [softmax] groups=1 [cost] type=sse ``` It classifies the test images on Ubuntu 16.04 properly. I have used this cfg file and the corresponding weights in the OpenCV 3.4 DNN module. I'm using Visual Studio 2017. ``` String modelFile = "cifar_small.cfg"; String modelBinary = "cifar_small.weights"; ``` When the line below is executed, I get the error message: ``` dnn::Net net = readNetFromDarknet(modelFile,modelBinary); ``` Error message: OpenCV Error: Parsing error (Unknown layer type: avgpool) in `cv::dnn::darknet::ReadDarknetFromCfgFile`, file C:\build\master\_winpack-build-win64-vc14\opencv\modules\dnn\src\darknet\darknet\_io.cpp, line 503. I get similar errors for the `softmax` and `sse` entries. It looks like the DNN module is not able to parse these layer types. Can someone let me know how to fix these issues?
2018/02/01
[ "https://Stackoverflow.com/questions/48558073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4645831/" ]
If the pings fail consistently during Endpoint creation, we will treat the container as unhealthy and fail the Endpoint with an error message: "ClientError: The primary container for production variant [xxx] did not pass the ping health check. Please check CloudWatch logs for this endpoint." If the pings fail consistently after Endpoint creation (while the Endpoint is up and running), we will try our best to replace the instance while keeping your Endpoint in service. Here is the documentation page: <https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html#your-algorithms-inference-algo-ping-requests> You can implement a more sophisticated health check. However, the ping response should return within the 2-second timeout. Hope this helps! -Han
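For illustration, here is a minimal sketch of a `/ping` handler using only Python's standard library. The handler name and setup are my own illustration, not SageMaker's API — a real serving container typically uses a full web stack (e.g. Flask behind Gunicorn) — but the contract is the same: `/ping` must answer 200 within the 2-second timeout mentioned above.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    """Answer GET /ping with 200 quickly; anything else gets 404."""
    def do_GET(self):
        status = 200 if self.path == "/ping" else 404
        self.send_response(status)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(("127.0.0.1", 0), PingHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Simulate the health-check probe with the same 2-second budget
with urllib.request.urlopen(f"http://127.0.0.1:{port}/ping", timeout=2) as resp:
    status = resp.status
server.shutdown()
print(status)  # 200
```

Whatever check you add inside `do_GET` (model loaded, dependencies reachable, etc.) must stay fast enough to fit in that budget.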
Amazon SageMaker is a managed service, and the responsibility of the service team is to make sure that it is available. They are monitoring your endpoint and will replace the containers and instances for you if needed. You can monitor the performance of your endpoint using the CloudWatch metrics (<https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html> ), but these are tuned more toward the right selection of the instance type and the number of instances that you want to have (your cost) than toward (ping) availability.
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
Git is a platform, Mercurial is “just” an application. Git is a versioned filesystem platform that happens to ship with a DVCS app in the box, but as normal for platform apps, it is more complex and has rougher edges than focused apps do. But this also means git’s VCS is immensely flexible, and there is a huge depth of non-source-control things you can do with git. That is the essence of the difference. Git is best understood from the ground up – from the repository format up. [Scott Chacon’s Git Talk](http://blip.tv/file/4094854) is an excellent primer for this. If you try to use git without knowing what’s happening under the hood, you’ll end up confused at some point (unless you stick to only very basic functionality). This may sound stupid when all you want is a DVCS for your daily programming routine, but the genius of git is that the repository format is actually very simple and you *can* understand git’s entire operation quite easily. For some more technicality-oriented comparisons, the best articles I have personally seen are Dustin Sallings’: * [The Differences Between Mercurial and Git](http://www.rockstarprogrammer.org/post/2008/apr/06/differences-between-mercurial-and-git/) * [Reddit thread where git-experienced Dustin answers his own git neophyte questions](http://www.reddit.com/r/programming/comments/64g43/aristotle_pagaltzis_why_i_chose_git/c02t4se) He has actually used both DVCSs extensively and understands them both well – and ended up preferring git.
This link may help you to understand the difference <http://www.techtatva.com/2010/09/git-mercurial-and-bazaar-a-comparison/>
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
Git is a platform, Mercurial is “just” an application. Git is a versioned filesystem platform that happens to ship with a DVCS app in the box, but as normal for platform apps, it is more complex and has rougher edges than focused apps do. But this also means git’s VCS is immensely flexible, and there is a huge depth of non-source-control things you can do with git. That is the essence of the difference. Git is best understood from the ground up – from the repository format up. [Scott Chacon’s Git Talk](http://blip.tv/file/4094854) is an excellent primer for this. If you try to use git without knowing what’s happening under the hood, you’ll end up confused at some point (unless you stick to only very basic functionality). This may sound stupid when all you want is a DVCS for your daily programming routine, but the genius of git is that the repository format is actually very simple and you *can* understand git’s entire operation quite easily. For some more technicality-oriented comparisons, the best articles I have personally seen are Dustin Sallings’: * [The Differences Between Mercurial and Git](http://www.rockstarprogrammer.org/post/2008/apr/06/differences-between-mercurial-and-git/) * [Reddit thread where git-experienced Dustin answers his own git neophyte questions](http://www.reddit.com/r/programming/comments/64g43/aristotle_pagaltzis_why_i_chose_git/c02t4se) He has actually used both DVCSs extensively and understands them both well – and ended up preferring git.
If you are a Windows developer looking for basic disconnected revision control, go with Hg. I found Git to be incomprehensible while Hg was simple and well integrated with the Windows shell. I downloaded Hg and followed [this tutorial (hginit.com)](http://hginit.com/) - ten minutes later I had a local repo and was back to work on my project.
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
I think the best description about "Mercurial vs. Git" is: ["Git is Wesley Snipes. Mercurial is Denzel Washington"](http://www.ericsink.com/entries/hg_denzel.html)
Yet another interesting comparison of mercurial and git: [Mercurial vs Git](http://rg03.wordpress.com/2009/04/07/mercurial-vs-git/). Main focus is on internals and their influence on branching process.
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
Git is a platform, Mercurial is “just” an application. Git is a versioned filesystem platform that happens to ship with a DVCS app in the box, but as normal for platform apps, it is more complex and has rougher edges than focused apps do. But this also means git’s VCS is immensely flexible, and there is a huge depth of non-source-control things you can do with git. That is the essence of the difference. Git is best understood from the ground up – from the repository format up. [Scott Chacon’s Git Talk](http://blip.tv/file/4094854) is an excellent primer for this. If you try to use git without knowing what’s happening under the hood, you’ll end up confused at some point (unless you stick to only very basic functionality). This may sound stupid when all you want is a DVCS for your daily programming routine, but the genius of git is that the repository format is actually very simple and you *can* understand git’s entire operation quite easily. For some more technicality-oriented comparisons, the best articles I have personally seen are Dustin Sallings’: * [The Differences Between Mercurial and Git](http://www.rockstarprogrammer.org/post/2008/apr/06/differences-between-mercurial-and-git/) * [Reddit thread where git-experienced Dustin answers his own git neophyte questions](http://www.reddit.com/r/programming/comments/64g43/aristotle_pagaltzis_why_i_chose_git/c02t4se) He has actually used both DVCSs extensively and understands them both well – and ended up preferring git.
There's one **huge** difference between *git* and *mercurial*: the way they represent each commit. *git* represents commits as snapshots, while *mercurial* represents them as diffs. What does this mean in practice? Well, many operations are faster in git, such as switching to another commit, comparing commits, etc. Especially if these commits are far away. AFAIK there's no advantage to mercurial's approach.
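You can see the snapshot model for yourself with a quick throwaway repo (a sketch; the file names and commit messages are arbitrary, and it assumes `git` is installed):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo; cd repo
git config user.email you@example.com
git config user.name you
echo one > a.txt
git add a.txt && git commit -qm "first"
echo two >> a.txt
git add a.txt && git commit -qm "second"
# Each commit object references a complete tree, not a delta:
git cat-file -p HEAD | head -1   # tree <sha> — a full snapshot
git ls-tree HEAD                 # lists every file as of this commit
```

Note that git still delta-compresses objects inside packfiles for storage; the snapshot model is about the *logical* content of each commit.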
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
There's one **huge** difference between *git* and *mercurial*: the way they represent each commit. *git* represents commits as snapshots, while *mercurial* represents them as diffs. What does this mean in practice? Well, many operations are faster in git, such as switching to another commit, comparing commits, etc. Especially if these commits are far away. AFAIK there's no advantage to mercurial's approach.
If you are migrating from SVN, use Mercurial as its syntax is MUCH more understandable for SVN users. Other than that, you can't go wrong with either. But do check [GIT tutorial](http://www.kernel.org/pub/software/scm/git/docs/gittutorial.html) and [HGinit](http://hginit.com/) before selecting one of them.
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
If I understand them correctly (and I'm far from an expert on each) they fundamentally each have a different philosophy. I first used mercurial for 9 months. Now I've used git for 6. hg is version control software. Its main goal is to track versions of a piece of software. git is a time based file system. Its goal is to add another dimension to a file system. Most have files and folders, git adds time. That it happens to work so well as a VCS is a byproduct of its design. In hg, there's a history of the entire project it's always trying to maintain. By default I believe hg wants all changes to all objects by all users when pushing and pulling. In git there's just a pool of objects and these tracking files (branches/heads) that determine which set of those objects represents the tree of files in a particular state. When pushing or pulling git only sends the objects needed for the particular branches you are pushing or pulling, which is a small subset of all objects. As far as git is concerned there is no "1 project". You could have 50 projects all in the same repo and git wouldn't care. Each one could be managed separately in the same repo and live fine. Hg's concept of branches is branches off the main project or branches off branches etc. Git has no such concept. A branch in git is just a state of the tree, everything is a branch in git. Which branch is official, current, or newest has no meaning in git. I don't know if that made any sense. If I could draw pictures hg might look like this where each commit is a `o` ``` o---o---o / o---o---o---o---o---o---o---o \ / o---o---o ``` A tree with a single root and branches coming off of it. While git can do that and often people use it that way that's not enforced. A git picture, if there is such a thing, could easily look like this ``` o---o---o---o---o o---o---o---o \ o---o o---o---o---o ``` In fact in some ways it doesn't even make sense to show branches in git. 
One thing that is very confusing for the discussion: git and mercurial both have something called a "branch" but they are not remotely the same things. A branch in mercurial comes about when there are conflicts between different repos. A branch in git is apparently similar to a clone in hg. But a clone, while it might give similar behavior, is most definitely not the same. Consider me trying these in git vs hg using the chromium repo, which is rather large. ``` $ time git checkout -b some-new-branch Switched to new branch 'some-new-branch' real 0m1.759s user 0m1.596s sys 0m0.144s ``` And now in hg using clone ``` $ time hg clone project/ some-clone/ updating to branch default 29387 files updated, 0 files merged, 0 files removed, 0 files unresolved. real 0m58.196s user 0m19.901s sys 0m8.957 ``` Both of those are hot runs. I.e., I ran them twice and this is the second run. hg clone is actually the same as git-new-workdir. Both of those make an entirely new working dir almost as though you had typed `cp -r project project-clone`. That's not the same as making a new branch in git. It's far more heavyweight. If there is a true equivalent of git's branching in hg I don't know what it is. I understand at some level hg and git *might* be able to do similar things. If so then there is still a huge difference in the workflow they lead you to. In git, the typical workflow is to create a branch for every feature. ``` git checkout master git checkout -b add-2nd-joypad-support git checkout master git checkout -b fix-game-save-bug git checkout master git checkout -b add-a-star-support ``` That just created 3 branches, each based off a branch called master. (I'm sure there's some way in git to make those 1 line each instead of 2) Now to work on one I just do ``` git checkout fix-game-save-bug ``` and start working. Commit things, etc. Switching between branches even in a project as big as chrome is nearly instantaneous. I actually don't know how to do that in hg. 
It's not part of any tutorials I've read. One other big difference. Git's stage. Git has this idea of a stage. You can think of it as a hidden folder. When you commit you only commit what's on the stage, not the changes in your working tree. That might sound strange. If you want to commit all the changes in your working tree you'd do `git commit -a` which adds all the modified files to the stage and then commits them. What's the point of the stage then? You can easily separate your commits. Imagine you edited joypad.cpp and gamesave.cpp and you want to commit them separately ``` git add joypad.cpp // copies to stage git commit -m "added 2nd joypad support" git add gamesave.cpp // copies to stage git commit -m "fixed game save bug" ``` Git even has commands to decide which particular lines in the same file you want to copy to the stage so you can split up those commits separately as well. Why would you want to do that? Because as separate commits others can pull only the ones they want or if there was an issue they can revert just the commit that had the issue.
If you are migrating from SVN, use Mercurial as its syntax is MUCH more understandable for SVN users. Other than that, you can't go wrong with either. But do check [GIT tutorial](http://www.kernel.org/pub/software/scm/git/docs/gittutorial.html) and [HGinit](http://hginit.com/) before selecting one of them.
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
These articles may help: * [Git vs. Mercurial: Please Relax](http://importantshock.wordpress.com/2008/08/07/git-vs-mercurial/) (Git is MacGyver and Mercurial is James Bond) * [The Differences Between Mercurial and Git](http://www.rockstarprogrammer.org/post/2008/apr/06/differences-between-mercurial-and-git/) **Edit**: Comparing Git and Mercurial to celebrities seems to be a trend. Here's one more: * [Git is Wesley Snipes, Mercurial is Denzel Washington](http://www.ericsink.com/entries/hg_denzel.html)
Sometime last year I evaluated both git and hg for my own use, and decided to go with hg. I felt it looked like a cleaner solution, and worked better on more platforms at the time. It was mostly a toss-up, though. More recently, I started using git because of git-svn and the ability to act as a Subversion client. This won me over and I've now switched completely to git. I think it's got a slightly higher learning curve (especially if you need to poke around the insides), but it really is a great system. I'm going to go read those two comparison articles that John posted now.
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
I work on Mercurial, but fundamentally I believe both systems are equivalent. They both work with the same abstractions: a series of snapshots (changesets) which make up the history. Each changeset knows where it came from (the parent changeset) and can have many child changesets. The recent [hg-git](http://hg-git.github.com/) extension provides a two-way bridge between Mercurial and Git and sort of shows this point. Git has a strong focus on mutating this history graph (with all the consequences that entails) whereas Mercurial does not encourage history rewriting, [but it's easy to do anyway](http://www.selenic.com/mercurial/wiki/EditingHistory) and the consequences of doing so are exactly what you should expect them to be (that is, if I modify a changeset you already have, your client will see it as new if you pull from me). So Mercurial has a *bias* towards non-destructive commands. As for light-weight branches, then Mercurial has supported repositories with *multiple branches* since..., always I think. Git repositories with multiple branches are exactly that: multiple diverged strands of development in a single repository. Git then adds names to these strands and allow you to query these names remotely. The [Bookmarks](http://www.selenic.com/mercurial/wiki/BookmarksExtension) extension for Mercurial adds local names, and with Mercurial 1.6, you can move these bookmarks around when you push/pull.. I use Linux, but apparently TortoiseHg is faster and better than the Git equivalent on Windows (due to better usage of the poor Windows filesystem). Both <http://github.com> and <http://bitbucket.org> provide online hosting, the service at Bitbucket is great and responsive (I haven't tried github). I chose Mercurial since it feels clean and elegant -- I was put off by the shell/Perl/Ruby scripts I got with Git. 
Try taking a peek at the [`git-instaweb.sh` file](http://repo.or.cz/w/git.git?a=blob;f=git-instaweb.sh;hb=HEAD) if you want to know what I mean: it is a **shell** script which generates a **Ruby** script, which I think runs a webserver. The shell script generates another shell script to launch the first Ruby script. There is also a bit of **Perl**, for good measure. I like the [blog post](http://importantshock.wordpress.com/2008/08/07/git-vs-mercurial/) that compares Mercurial and Git with James Bond and MacGyver -- Mercurial is somehow more low-key than Git. It seems to me, that people using Mercurial are not so easily impressed. This is reflected in how each system do what Linus described as ["the coolest merge EVER!"](http://osdir.com/ml/version-control.git/2005-06/msg00792.html). In Git you can merge with an unrelated repository by doing: ``` git fetch <project-to-union-merge> GIT_INDEX_FILE=.git/tmp-index git-read-tree FETCH_HEAD GIT_INDEX_FILE=.git/tmp-index git-checkout-cache -a -u git-update-cache --add -- (GIT_INDEX_FILE=.git/tmp-index git-ls-files) cp .git/FETCH_HEAD .git/MERGE_HEAD git commit ``` Those commands look quite arcane to my eye. In Mercurial we do: ``` hg pull --force <project-to-union-merge> hg merge hg commit ``` Notice how the Mercurial commands are plain and not special at all -- the only unusual thing is the `--force` flag to `hg pull`, which is needed since Mercurial will abort otherwise when you pull from an unrelated repository. It is differences like this that makes Mercurial seem more elegant to me.
Some people think that VCS systems have to be complicated. They encourage inventing terms and concepts in the field. They would probably think that numerous PhDs on the subject would be interesting. Among those are probably the ones that designed Git. Mercurial is designed with a different mentality. Developers should not care much about VCS, and they should instead spend their time on their main function: software engineering. Mercurial allows users to use and happily abuse the system without letting them make any non-recoverable mistakes. Any professional tool must come with a clearly designed and intuitive CLI. Mercurial users can do most of the work by issuing simple commands without any strange options. In Git, double-dash, crazy options are the norm. Mercurial has a substantial advantage if you are a CLI person (and to be honest, any self-respecting Software Engineer should be). To give an example, suppose you do a commit by mistake. You forgot to edit some files. To undo your action in Mercurial you simply type: `$ hg rollback` You then get a message that the system undoes your last transaction. In Git you have to type: `$ git reset --soft HEAD^` So ok, suppose you have an idea what reset is about. But in addition you have to know what "--soft" and "--hard" resets are (any intuitive guesses?). Oh and of course don't forget the '^' character at the end! (now what in Ritchie's name is that...) Mercurial's integration with 3rd party tools like kdiff3 and meld is much better as well. Generate your patches, merge your branches without much fuss. Mercurial also includes a simple http server that you activate by typing `hg serve` and let others browse your repository. The bottom line is, Git does what Mercurial does, in a much more complicated way and with a far inferior CLI. Use Git if you want to turn the VCS of your project into a scientific-research field. Use Mercurial if you want to get the VCS job done without caring much about it, and focus on your real tasks.
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
If you are a Windows developer looking for basic disconnected revision control, go with Hg. I found Git to be incomprehensible while Hg was simple and well integrated with the Windows shell. I downloaded Hg and followed [this tutorial (hginit.com)](http://hginit.com/) - ten minutes later I had a local repo and was back to work on my project.
One thing to notice between the Mercurial hosting of bitbucket.org and the git hosting of github: on Bitbucket, Mercurial can have as many private repositories as you want, but on github you have to upgrade to a paid account. So, that's why I go for bitbucket, which uses mercurial.
35,837
I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git. Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
2008/08/30
[ "https://Stackoverflow.com/questions/35837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3713/" ]
Sometime last year I evaluated both git and hg for my own use, and decided to go with hg. I felt it looked like a cleaner solution, and worked better on more platforms at the time. It was mostly a toss-up, though. More recently, I started using git because of git-svn and the ability to act as a Subversion client. This won me over and I've now switched completely to git. I think it's got a slightly higher learning curve (especially if you need to poke around the insides), but it really is a great system. I'm going to go read those two comparison articles that John posted now.
I'm currently in the process of migrating from SVN to a DVCS (while blogging about my findings, my first real blogging effort...), and I've done a bit of research (=googling). As far as I can see you can do most of the things with both packages. It seems like git has a few more or better implemented advanced features, I do feel that the integration with windows is a bit better for mercurial, with TortoiseHg. I know there's Git Cheetah as well (I tried both), but the mercurial solution just feels more robust. Seeing how they're both open-source (right?) I don't think either will be lacking important features. If something is important, people will ask for it, people will code it. I think that for common practices, Git and Mercurial are more than sufficient. They both have big projects that use them (Git -> linux kernel, Mercurial -> Mozilla foundation projects, both among others of course), so I don't think either are really lacking something. That being said, I am interested in what other people say about this, as it would make a great source for my blogging efforts ;-)
70,409,918
I am practicing sets in python and wrote the below script to search books. It works, but not properly (please see below). How can I fix the problem? ```py book_set = {"Harry Potter", "Angels and Demons", "Atlas Shrugged"} q = input('Search our Catalog: ') for book in book_set: if book == q: print(book) else: print('sorry We ran out of this book') ``` My expected result should be the title of the book if present, and the string 'sorry We ran out of this book' if the book doesn't exist in book\_set, without any more results — but see the example ### Output ``` Search our Catalog: Harry Potter Harry Potter sorry We ran out of this book ```
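For reference, a minimal sketch of one way to get the expected behavior: test set membership once with `in` instead of comparing against every element in a loop (the loop's `else` fires once per non-matching book, which is why the extra message appears). `q` is hard-coded here in place of `input()`:

```python
book_set = {"Harry Potter", "Angels and Demons", "Atlas Shrugged"}
q = "Harry Potter"  # stands in for: q = input('Search our Catalog: ')

# A single membership test on the set, so exactly one message prints
if q in book_set:
    result = q
else:
    result = 'sorry We ran out of this book'
print(result)
```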
2021/12/19
[ "https://Stackoverflow.com/questions/70409918", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17522882/" ]
You could do this with the help of the *itertools* module. There may be better / more efficient mechanisms. Note that the *process()* function will return None when there is no possible solution. ``` import itertools def process(L1, L2): for s in set(itertools.permutations(L1, len(L1))): if all([a != b for a, b in zip(s, L2)]): return s return None list1 = ['A', 'B', 'C'] list2 = ['X', 'B', 'Z'] print(process(list1, list2)) ```
I solved it using this piece of code - however, as noted above, there are many possible cases where this very solution may not work. In this very case, you can try this: ``` import random list1 = ['A', 'B', 'C'] list2 = ['X', 'B', 'Z'] for i in list1: for j in list2: if i == j in list2: while list1.index(i) == list2.index(j): list1.remove(i) z = len(list1) rand = random.choice(range(z)) list1.insert(rand, i) print(list1, list2) ``` Also, note that you use quite weird apostrophes inside the list: `‘` instead of `'` or `"`. AFAIK, Python won't be able to comprehend them correctly.
70,409,918
I am practicing sets in python and wrote the below script to search books. It works, but not properly (please see below). How can I fix the problem? ```py book_set = {"Harry Potter", "Angels and Demons", "Atlas Shrugged"} q = input('Search our Catalog: ') for book in book_set: if book == q: print(book) else: print('sorry We ran out of this book') ``` My expected result should be the title of the book if present, and the string 'sorry We ran out of this book' if the book doesn't exist in book\_set, without any more results — but see the example ### Output ``` Search our Catalog: Harry Potter Harry Potter sorry We ran out of this book ```
2021/12/19
[ "https://Stackoverflow.com/questions/70409918", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17522882/" ]
```py from itertools import permutations def diff(list1, list2): for item in permutations(list1, len(list1)): if all(map(lambda a, b: a!=b, item, list2)): return list(item) return None list1 = ['A', 'B', 'C'] list2 = ['X', 'B', 'Z'] result = diff(list1, list2) if result: print(result, 'vs', list2) ``` ```py ['A', 'C', 'B'] vs ['X', 'B', 'Z'] ```
I solved it using this piece of code - however, as noted above, there are many possible cases where this very solution may not work. In this very case, you can try this: ``` import random list1 = ['A', 'B', 'C'] list2 = ['X', 'B', 'Z'] for i in list1: for j in list2: if i == j in list2: while list1.index(i) == list2.index(j): list1.remove(i) z = len(list1) rand = random.choice(range(z)) list1.insert(rand, i) print(list1, list2) ``` Also, note that you use quite weird apostrophes inside the list: `‘` instead of `'` or `"`. AFAIK, Python won't be able to comprehend them correctly.
58,108,807
I'm getting surprising behavior trying to convert a microsecond string date to an integer: ``` n = 20181231235959383171 int_ = np.int(n) # Works int64_ = np.int64(n) # "OverflowError: int too big to convert" ``` Any idea why? **Edit** - Thank you all, this is informative, however please see my actual problem: [Dataframe column won't convert from integer string to an actual integer](https://stackoverflow.com/questions/58108913/dataframe-column-wont-convert-from-integer-string-to-an-actual-integer)
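For context: the overflow is expected, because that number exceeds the signed 64-bit maximum (2\*\*63 - 1), while Python's built-in int (which `np.int` aliased) is arbitrary-precision. A small sketch to check the bound (assuming NumPy is installed):

```python
import numpy as np

n = 20181231235959383171
i64_max = np.iinfo(np.int64).max   # 9223372036854775807, i.e. 2**63 - 1

print(n > i64_max)   # True — n cannot fit in a signed 64-bit integer
print(int(n) == n)   # True — plain Python ints have no fixed width
```

This is also why microsecond-resolution timestamps are usually stored as offsets from an epoch rather than as packed `YYYYMMDDHHMMSSffffff` digits: the offset form fits comfortably in int64.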
2019/09/26
[ "https://Stackoverflow.com/questions/58108807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8076158/" ]
A fairly direct logic: * print a line unless it ends with a comma (need to check on the next line, perhaps remove it) * print the previous line (`$p`) if it had a comma, without it if the current line starts with `)` ``` perl -ne' if ($p =~ /,$/) { $p =~ s/,$// if /^\s*\)/; print $p }; print unless /,$/; $p = $_ ' file ``` Efficiency of this can be improved some, by losing one regex (so engine startup overhead goes) and some data copy but at the expense of clumsier code, having additional logic and checks. Tested with `file` ``` hello here's a comma, which was fine (but here's another, ) which has to go, and that was another good one. end ``` --- The above fails to print the last line if it ends in a comma. One fix for that is to check our buffer (previous line `$p`) in an [`END` block](https://perldoc.perl.org/perlmod.html#BEGIN%2c-UNITCHECK%2c-CHECK%2c-INIT-and-END), so to add at the end ``` END { print $p if $p =~ /,$/} ``` This is a fairly usual way to check for trailing buffers or conditions in `-n`/`-p` one-liners. Another fix, less efficient but with perhaps cleaner code, is to replace the statement ``` print unless /,$/; ``` with ``` print if (not /,$/ or eof); ``` This does run an `eof` check on every line of the file, while `END` runs *once*.
If using `\n` newline as a record separator is awkward, use something else. In this case you're specifically interested in the sequence `,\n)`, and we can let Perl find that for us as we read the file: ``` perl -pe 'BEGIN{ $/ = ",\n)" } s/,\n\)/\n)/' input.txt >output.txt ``` This portion: `$/ = ",\n)"` tells Perl that instead of iterating over lines of the file, it should iterate over records that terminate with the sequence `,\n)`. That helps us ensure that every chunk will have at most one such sequence, but more importantly, that this sequence will not span chunks (or records, or file-reads). Every chunk read will either end in `,\n)` or, in the case of the final record, may have no record terminator (by our definition of terminator). Next we just use substitution to eliminate that comma in our `,\n)` record separator sequence. The key here really is that by setting the record separator to the very sequence we're interested in, we guarantee the sequence will not get broken across file-reads. As has been mentioned in the comments, this solution is most useful only if the span between `,\n)` sequences doesn't exceed the amount of memory you are willing to throw at the problem. It is most likely that newlines themselves occur in the file more often than `,\n)` sequences, and so, this will read in larger chunks. You know your data set better than we do, and so are in a better position of judging whether the simplicity of this solution is outweighed by the footprint it consumes in memory.
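The buffered previous-line idea in the first answer translates readily to other languages; here is a hypothetical Python sketch of the same logic (the function name and sample data are illustrative, not from the original answers):

```python
def strip_trailing_commas(lines):
    """Remove a trailing comma from a line when the next line starts with ')'.

    Mirrors the Perl approach: buffer the previous line and only decide
    whether to keep its comma once the following line is known.
    """
    out = []
    prev = None
    for line in lines:
        if prev is not None:
            # Drop the comma only when the next line opens with ')'.
            if prev.endswith(',') and line.lstrip().startswith(')'):
                prev = prev[:-1]
            out.append(prev)
        prev = line
    if prev is not None:
        out.append(prev)  # flush the buffered last line
    return out

print(strip_trailing_commas(['(a,', 'b,', ')']))  # ['(a,', 'b', ')']
```

The final flush plays the role of the `END { ... }` block in the Perl one-liners: without it, a file ending in a comma would lose its last line.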
58,108,807
I'm getting surprising behavior trying to convert a microsecond string date to an integer: ``` n = 20181231235959383171 int_ = np.int(n) # Works int64_ = np.int64(n) # "OverflowError: int too big to convert" ``` Any idea why? **Edit** - Thank you all, this is informative, however please see my actual problem: [Dataframe column won't convert from integer string to an actual integer](https://stackoverflow.com/questions/58108913/dataframe-column-wont-convert-from-integer-string-to-an-actual-integer)
2019/09/26
[ "https://Stackoverflow.com/questions/58108807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8076158/" ]
A fairly direct logic: * print a line unless it ends with a comma (need to check on the next line, perhaps remove it) * print the previous line (`$p`) if it had a comma, without it if the current line starts with `)` ``` perl -ne' if ($p =~ /,$/) { $p =~ s/,$// if /^\s*\)/; print $p }; print unless /,$/; $p = $_ ' file ``` Efficiency of this can be improved some, by losing one regex (so engine startup overhead goes) and some data copy but at the expense of clumsier code, having additional logic and checks. Tested with `file` ``` hello here's a comma, which was fine (but here's another, ) which has to go, and that was another good one. end ``` --- The above fails to print the last line if it ends in a comma. One fix for that is to check our buffer (previous line `$p`) in an [`END` block](https://perldoc.perl.org/perlmod.html#BEGIN%2c-UNITCHECK%2c-CHECK%2c-INIT-and-END), so to add at the end ``` END { print $p if $p =~ /,$/} ``` This is a fairly usual way to check for trailing buffers or conditions in `-n`/`-p` one-liners. Another fix, less efficient but with perhaps cleaner code, is to replace the statement ``` print unless /,$/; ``` with ``` print if (not /,$/ or eof); ``` This does run an `eof` check on every line of the file, while `END` runs *once*.
Delay printing out the trailing comma and line feed until you know it's ok to print it out. ``` perl -ne' $_ = $buf . $_; s/^,(?=\n\))//; $buf = s/(,\n)\z// ? $1 : ""; print; END { print $buf; } ' ``` Faster: ``` perl -ne' print /^\)/ ? "\n" : ",\n" if $f; $f = s/,\n//; print; END { print ",\n" if $f; } ' ``` [Specifying file to process to Perl one-liner](https://stackoverflow.com/q/41742890/589924)
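As for the `OverflowError` in the question itself: `np.int64` is a fixed-width signed 64-bit integer, while Python's built-in `int` is arbitrary-precision (and `np.int` was simply an alias for the built-in `int`, which is why that call worked). A minimal sketch of the boundary, in pure Python (reproducing the actual `OverflowError` would additionally require NumPy):

```python
n = 20181231235959383171  # the microsecond timestamp from the question

# Largest value a signed 64-bit integer can hold (name is ours, for clarity).
INT64_MAX = 2**63 - 1

# Python's built-in int is arbitrary-precision, so n itself is fine,
# but it does not fit in a signed 64-bit slot:
print(n > INT64_MAX)  # True
print(len(str(n)))    # 20 digits; INT64_MAX has only 19
```

So any fixed 64-bit representation of this 20-digit value must overflow; the usual workaround is to keep it as a Python `int`, or split date and time components.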
58,108,807
I'm getting surprising behavior trying to convert a microsecond string date to an integer: ``` n = 20181231235959383171 int_ = np.int(n) # Works int64_ = np.int64(n) # "OverflowError: int too big to convert" ``` Any idea why? **Edit** - Thank you all, this is informative, however please see my actual problem: [Dataframe column won't convert from integer string to an actual integer](https://stackoverflow.com/questions/58108913/dataframe-column-wont-convert-from-integer-string-to-an-actual-integer)
2019/09/26
[ "https://Stackoverflow.com/questions/58108807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8076158/" ]
A fairly direct logic: * print a line unless it ends with a comma (need to check on the next line, perhaps remove it) * print the previous line (`$p`) if it had a comma, without it if the current line starts with `)` ``` perl -ne' if ($p =~ /,$/) { $p =~ s/,$// if /^\s*\)/; print $p }; print unless /,$/; $p = $_ ' file ``` Efficiency of this can be improved some, by losing one regex (so engine startup overhead goes) and some data copy but at the expense of clumsier code, having additional logic and checks. Tested with `file` ``` hello here's a comma, which was fine (but here's another, ) which has to go, and that was another good one. end ``` --- The above fails to print the last line if it ends in a comma. One fix for that is to check our buffer (previous line `$p`) in an [`END` block](https://perldoc.perl.org/perlmod.html#BEGIN%2c-UNITCHECK%2c-CHECK%2c-INIT-and-END), so to add at the end ``` END { print $p if $p =~ /,$/} ``` This is a fairly usual way to check for trailing buffers or conditions in `-n`/`-p` one-liners. Another fix, less efficient but with perhaps cleaner code, is to replace the statement ``` print unless /,$/; ``` with ``` print if (not /,$/ or eof); ``` This does run an `eof` check on every line of the file, while `END` runs *once*.
This can be done more simply with just awk. ``` awk 'BEGIN{RS=".\n."; ORS=""} {gsub(",\n)", "\n)", RT); print $0 RT}' ``` Explanation: `awk`, unlike Perl, allows a regular expression as the Record Separator, here `.\n.` which "captures" the two characters surrounding each newline. Setting `ORS` to empty prevents `print` from outputting extra newlines. Newlines are all captured in `RS`/`RT`. `RT` represents the actual text matched by the `RS` regular expression. The `gsub` removes any desired comma from `RT` if present. Caveat: You'd need gnu `awk` (`gawk`) for this to work. It seems that POSIX-only `awk` will lack the regexp-`RS` with `RT` variable feature, according to `gawk` man page. Note: `gsub` is not really needed, `sub` is good enough and probably should have been used above.
58,108,807
I'm getting surprising behavior trying to convert a microsecond string date to an integer: ``` n = 20181231235959383171 int_ = np.int(n) # Works int64_ = np.int64(n) # "OverflowError: int too big to convert" ``` Any idea why? **Edit** - Thank you all, this is informative, however please see my actual problem: [Dataframe column won't convert from integer string to an actual integer](https://stackoverflow.com/questions/58108913/dataframe-column-wont-convert-from-integer-string-to-an-actual-integer)
2019/09/26
[ "https://Stackoverflow.com/questions/58108807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8076158/" ]
A fairly direct logic: * print a line unless it ends with a comma (need to check on the next line, perhaps remove it) * print the previous line (`$p`) if it had a comma, without it if the current line starts with `)` ``` perl -ne' if ($p =~ /,$/) { $p =~ s/,$// if /^\s*\)/; print $p }; print unless /,$/; $p = $_ ' file ``` Efficiency of this can be improved some, by losing one regex (so engine startup overhead goes) and some data copy but at the expense of clumsier code, having additional logic and checks. Tested with `file` ``` hello here's a comma, which was fine (but here's another, ) which has to go, and that was another good one. end ``` --- The above fails to print the last line if it ends in a comma. One fix for that is to check our buffer (previous line `$p`) in an [`END` block](https://perldoc.perl.org/perlmod.html#BEGIN%2c-UNITCHECK%2c-CHECK%2c-INIT-and-END), so to add at the end ``` END { print $p if $p =~ /,$/} ``` This is a fairly usual way to check for trailing buffers or conditions in `-n`/`-p` one-liners. Another fix, less efficient but with perhaps cleaner code, is to replace the statement ``` print unless /,$/; ``` with ``` print if (not /,$/ or eof); ``` This does run an `eof` check on every line of the file, while `END` runs *once*.
This might work for you (GNU sed): ``` sed 'N;s/,\n)/\n)/;P;D' file ``` Keep a moving window of two lines throughout the file and if the first ends in a `,` and the second begins with `)`, remove the `,`. If there is white space and it needs to be preserved, use: ``` sed 'N;s/,\(\s*\n\s*)\)/\1/;P;D' file ```
58,108,807
I'm getting surprising behavior trying to convert a microsecond string date to an integer: ``` n = 20181231235959383171 int_ = np.int(n) # Works int64_ = np.int64(n) # "OverflowError: int too big to convert" ``` Any idea why? **Edit** - Thank you all, this is informative, however please see my actual problem: [Dataframe column won't convert from integer string to an actual integer](https://stackoverflow.com/questions/58108913/dataframe-column-wont-convert-from-integer-string-to-an-actual-integer)
2019/09/26
[ "https://Stackoverflow.com/questions/58108807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8076158/" ]
If using `\n` newline as a record separator is awkward, use something else. In this case you're specifically interested in the sequence `,\n)`, and we can let Perl find that for us as we read the file: ``` perl -pe 'BEGIN{ $/ = ",\n)" } s/,\n\)/\n)/' input.txt >output.txt ``` This portion: `$/ = ",\n)"` tells Perl that instead of iterating over lines of the file, it should iterate over records that terminate with the sequence `,\n)`. That helps us ensure that every chunk will have at most one such sequence, but more importantly, that this sequence will not span chunks (or records, or file-reads). Every chunk read will either end in `,\n)` or, in the case of the final record, may have no record terminator (by our definition of terminator). Next we just use substitution to eliminate that comma in our `,\n)` record separator sequence. The key here really is that by setting the record separator to the very sequence we're interested in, we guarantee the sequence will not get broken across file-reads. As has been mentioned in the comments, this solution is most useful only if the span between `,\n)` sequences doesn't exceed the amount of memory you are willing to throw at the problem. It is most likely that newlines themselves occur in the file more often than `,\n)` sequences, and so, this will read in larger chunks. You know your data set better than we do, and so are in a better position of judging whether the simplicity of this solution is outweighed by the footprint it consumes in memory.
This can be done more simply with just awk. ``` awk 'BEGIN{RS=".\n."; ORS=""} {gsub(",\n)", "\n)", RT); print $0 RT}' ``` Explanation: `awk`, unlike Perl, allows a regular expression as the Record Separator, here `.\n.` which "captures" the two characters surrounding each newline. Setting `ORS` to empty prevents `print` from outputting extra newlines. Newlines are all captured in `RS`/`RT`. `RT` represents the actual text matched by the `RS` regular expression. The `gsub` removes any desired comma from `RT` if present. Caveat: You'd need gnu `awk` (`gawk`) for this to work. It seems that POSIX-only `awk` will lack the regexp-`RS` with `RT` variable feature, according to `gawk` man page. Note: `gsub` is not really needed, `sub` is good enough and probably should have been used above.
58,108,807
I'm getting surprising behavior trying to convert a microsecond string date to an integer: ``` n = 20181231235959383171 int_ = np.int(n) # Works int64_ = np.int64(n) # "OverflowError: int too big to convert" ``` Any idea why? **Edit** - Thank you all, this is informative, however please see my actual problem: [Dataframe column won't convert from integer string to an actual integer](https://stackoverflow.com/questions/58108913/dataframe-column-wont-convert-from-integer-string-to-an-actual-integer)
2019/09/26
[ "https://Stackoverflow.com/questions/58108807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8076158/" ]
If using `\n` newline as a record separator is awkward, use something else. In this case you're specifically interested in the sequence `,\n)`, and we can let Perl find that for us as we read the file: ``` perl -pe 'BEGIN{ $/ = ",\n)" } s/,\n\)/\n)/' input.txt >output.txt ``` This portion: `$/ = ",\n)"` tells Perl that instead of iterating over lines of the file, it should iterate over records that terminate with the sequence `,\n)`. That helps us ensure that every chunk will have at most one such sequence, but more importantly, that this sequence will not span chunks (or records, or file-reads). Every chunk read will either end in `,\n)` or, in the case of the final record, may have no record terminator (by our definition of terminator). Next we just use substitution to eliminate that comma in our `,\n)` record separator sequence. The key here really is that by setting the record separator to the very sequence we're interested in, we guarantee the sequence will not get broken across file-reads. As has been mentioned in the comments, this solution is most useful only if the span between `,\n)` sequences doesn't exceed the amount of memory you are willing to throw at the problem. It is most likely that newlines themselves occur in the file more often than `,\n)` sequences, and so, this will read in larger chunks. You know your data set better than we do, and so are in a better position of judging whether the simplicity of this solution is outweighed by the footprint it consumes in memory.
This might work for you (GNU sed): ``` sed 'N;s/,\n)/\n)/;P;D' file ``` Keep a moving window of two lines throughout the file and if the first ends in a `,` and the second begins with `)`, remove the `,`. If there is white space and it needs to be preserved, use: ``` sed 'N;s/,\(\s*\n\s*)\)/\1/;P;D' file ```
58,108,807
I'm getting surprising behavior trying to convert a microsecond string date to an integer: ``` n = 20181231235959383171 int_ = np.int(n) # Works int64_ = np.int64(n) # "OverflowError: int too big to convert" ``` Any idea why? **Edit** - Thank you all, this is informative, however please see my actual problem: [Dataframe column won't convert from integer string to an actual integer](https://stackoverflow.com/questions/58108913/dataframe-column-wont-convert-from-integer-string-to-an-actual-integer)
2019/09/26
[ "https://Stackoverflow.com/questions/58108807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8076158/" ]
Delay printing out the trailing comma and line feed until you know it's ok to print it out. ``` perl -ne' $_ = $buf . $_; s/^,(?=\n\))//; $buf = s/(,\n)\z// ? $1 : ""; print; END { print $buf; } ' ``` Faster: ``` perl -ne' print /^\)/ ? "\n" : ",\n" if $f; $f = s/,\n//; print; END { print ",\n" if $f; } ' ``` [Specifying file to process to Perl one-liner](https://stackoverflow.com/q/41742890/589924)
This can be done more simply with just awk. ``` awk 'BEGIN{RS=".\n."; ORS=""} {gsub(",\n)", "\n)", RT); print $0 RT}' ``` Explanation: `awk`, unlike Perl, allows a regular expression as the Record Separator, here `.\n.` which "captures" the two characters surrounding each newline. Setting `ORS` to empty prevents `print` from outputting extra newlines. Newlines are all captured in `RS`/`RT`. `RT` represents the actual text matched by the `RS` regular expression. The `gsub` removes any desired comma from `RT` if present. Caveat: You'd need gnu `awk` (`gawk`) for this to work. It seems that POSIX-only `awk` will lack the regexp-`RS` with `RT` variable feature, according to `gawk` man page. Note: `gsub` is not really needed, `sub` is good enough and probably should have been used above.
58,108,807
I'm getting surprising behavior trying to convert a microsecond string date to an integer: ``` n = 20181231235959383171 int_ = np.int(n) # Works int64_ = np.int64(n) # "OverflowError: int too big to convert" ``` Any idea why? **Edit** - Thank you all, this is informative, however please see my actual problem: [Dataframe column won't convert from integer string to an actual integer](https://stackoverflow.com/questions/58108913/dataframe-column-wont-convert-from-integer-string-to-an-actual-integer)
2019/09/26
[ "https://Stackoverflow.com/questions/58108807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8076158/" ]
Delay printing out the trailing comma and line feed until you know it's ok to print it out. ``` perl -ne' $_ = $buf . $_; s/^,(?=\n\))//; $buf = s/(,\n)\z// ? $1 : ""; print; END { print $buf; } ' ``` Faster: ``` perl -ne' print /^\)/ ? "\n" : ",\n" if $f; $f = s/,\n//; print; END { print ",\n" if $f; } ' ``` [Specifying file to process to Perl one-liner](https://stackoverflow.com/q/41742890/589924)
This might work for you (GNU sed): ``` sed 'N;s/,\n)/\n)/;P;D' file ``` Keep a moving window of two lines throughout the file and if the first ends in a `,` and the second begins with `)`, remove the `,`. If there is white space and it needs to be preserved, use: ``` sed 'N;s/,\(\s*\n\s*)\)/\1/;P;D' file ```
344,863
Are the statements of the Heine-Borel Thm and Bolzano-Weierstrass Thm equivalent?
2013/03/28
[ "https://math.stackexchange.com/questions/344863", "https://math.stackexchange.com", "https://math.stackexchange.com/users/69911/" ]
See the [Wikipedia page for reverse mathematics](http://en.wikipedia.org/wiki/Reverse_mathematics). Heine-Borel needs the $WKL\_0$ axiom set, while Bolzano-Weierstrass needs $ACA\_0$. $ACA\_0$ is the stronger theory - all theorems of $WKL\_0$ are theorems in $ACA\_0$ but not vice versa. There is a base theory for which adding Heine-Borel as an additional axiom yields $WKL\_0$, and adding Bolzano-Weierstrass yields $ACA\_0$. In particular, B-W is not a theorem in $WKL\_0$. The basic approach of reverse mathematics is to start with a base set of axioms, $RCA\_0$, and then find various additional axioms that one could add to this set to get other results. This notion, then, that Heine-Borel is weaker, is a very specific notion, relative to this base theory $RCA\_0$.
Yes, they are: both are equivalent to having a finite-dimensional normed vector space over a complete field. (Here "Bolzano-Weierstrass" is taken as "every bounded sequence has a convergent subsequence"; there is more than one formulation of the Bolzano-Weierstrass theorem.)
344,863
Are the statements of the Heine-Borel Thm and Bolzano-Weierstrass Thm equivalent?
2013/03/28
[ "https://math.stackexchange.com/questions/344863", "https://math.stackexchange.com", "https://math.stackexchange.com/users/69911/" ]
Yes, they are: both are equivalent to having a finite-dimensional normed vector space over a complete field. (Here "Bolzano-Weierstrass" is taken as "every bounded sequence has a convergent subsequence"; there is more than one formulation of the Bolzano-Weierstrass theorem.)
Heine-Borel and Bolzano-Weierstrass theorems are equivalent in the sense that their proofs can be derived from each other. In fact, there are other axioms and results such as the completeness axiom, the nested interval property, the Dedekind cut axiom of continuity and Cauchy’s general principle of convergence which are equivalent to these theorems, stated as: Theorem A. (Heine-Borel). Every bounded closed subset of R is compact. Theorem B. (Bolzano-Weierstrass). Every bounded infinite subset of R has a limit point. A proof of the equivalences of these theorems is given in an article (Classroom Notes), available via the link: <http://www.researchgate.net/publication/232863146> (Thanks to Peter for his suggestion.)
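For reference, the two statements being compared, in the classical $\mathbb{R}^n$ setting where the answers agree they are equivalent, can be written as:

```latex
% Heine-Borel (classical form):
K \subseteq \mathbb{R}^n \text{ is compact} \iff K \text{ is closed and bounded}.

% Bolzano-Weierstrass (sequence formulation):
\text{Every bounded sequence } (x_k) \text{ in } \mathbb{R}^n
\text{ has a convergent subsequence}.
```

The reverse-mathematics answer above is about a finer notion of equivalence over the weak base theory $RCA_0$, where these two are no longer interchangeable.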
344,863
Are the statements of the Heine-Borel Thm and Bolzano-Weierstrass Thm equivalent?
2013/03/28
[ "https://math.stackexchange.com/questions/344863", "https://math.stackexchange.com", "https://math.stackexchange.com/users/69911/" ]
See the [Wikipedia page for reverse mathematics](http://en.wikipedia.org/wiki/Reverse_mathematics). Heine-Borel needs the $WKL\_0$ axiom set, while Bolzano-Weierstrass needs $ACA\_0$. $ACA\_0$ is the stronger theory - all theorems of $WKL\_0$ are theorems in $ACA\_0$ but not vice versa. There is a base theory for which adding Heine-Borel as an additional axiom yields $WKL\_0$, and adding Bolzano-Weierstrass yields $ACA\_0$. In particular, B-W is not a theorem in $WKL\_0$. The basic approach of reverse mathematics is to start with a base set of axioms, $RCA\_0$, and then find various additional axioms that one could add to this set to get other results. This notion, then, that Heine-Borel is weaker, is a very specific notion, relative to this base theory $RCA\_0$.
Heine-Borel and Bolzano-Weierstrass theorems are equivalent in the sense that their proofs can be derived from each other. In fact, there are other axioms and results such as the completeness axiom, the nested interval property, the Dedekind cut axiom of continuity and Cauchy’s general principle of convergence which are equivalent to these theorems, stated as: Theorem A. (Heine-Borel). Every bounded closed subset of R is compact. Theorem B. (Bolzano-Weierstrass). Every bounded infinite subset of R has a limit point. A proof of the equivalences of these theorems is given in an article (Classroom Notes), available via the link: <http://www.researchgate.net/publication/232863146> (Thanks to Peter for his suggestion.)
47,903,446
I have a function which should check if an Object has a `toString()` function and output it, or otherwise return the object. The problem is, it also triggers on plain objects and finally returns `[Object object]` as a string, which obviously looks awful on the GUI. Is there a way to determine if an Object uses the default `toString()` method that returns the ugly `[Object object]`, or has a custom `toString()` function that returns a pretty string? Here is my current function: ``` (data: any) => (data != null && typeof data.toString === 'function') ? data.toString() : data; ```
2017/12/20
[ "https://Stackoverflow.com/questions/47903446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5653638/" ]
**1)** You can compare with `Object.prototype.toString`. If it is not overridden the references are equal ```js const obj1 = {}; console.log(obj1.toString === Object.prototype.toString); const obj3 = { toString() { } }; console.log(obj3.toString === Object.prototype.toString); ``` **2)** You can check the existence via `hasOwnProperty` ```js const obj1 = {}; console.log(obj1.hasOwnProperty('toString')); const obj3 = { toString() { } }; console.log(obj3.hasOwnProperty('toString')); ```
``` o = new Object(); o.prop = 'exists'; o.hasOwnProperty('prop'); // returns true o.hasOwnProperty('toString'); // returns false o.hasOwnProperty('hasOwnProperty'); // returns false ``` prototype properties will return false if you check with `hasOwnProperty`
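The same "default vs. overridden" test carries over to other languages; as a hypothetical Python analogue of the identity-comparison approach, one can check whether a class overrides `__str__` instead of inheriting it from `object` (class names are illustrative):

```python
class Plain:
    pass

class Pretty:
    def __str__(self):
        return "pretty!"

def has_custom_str(obj):
    """True if the object's class overrides object.__str__.

    Attribute lookup on type(obj) walks the MRO, so an un-overridden
    __str__ resolves to the very same function object as object.__str__.
    """
    return type(obj).__str__ is not object.__str__

print(has_custom_str(Plain()))   # False
print(has_custom_str(Pretty()))  # True
```

This is the analogue of `myObj.toString !== Object.prototype.toString`: identity against the base implementation, not mere presence of the attribute.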
47,903,446
I have a function which should check if an Object has a `toString()` function and output it, or otherwise return the object. The problem is, it also triggers on plain objects and finally returns `[Object object]` as a string, which obviously looks awful on the GUI. Is there a way to determine if an Object uses the default `toString()` method that returns the ugly `[Object object]`, or has a custom `toString()` function that returns a pretty string? Here is my current function: ``` (data: any) => (data != null && typeof data.toString === 'function') ? data.toString() : data; ```
2017/12/20
[ "https://Stackoverflow.com/questions/47903446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5653638/" ]
**1)** You can compare with `Object.prototype.toString`. If it is not overridden the references are equal ```js const obj1 = {}; console.log(obj1.toString === Object.prototype.toString); const obj3 = { toString() { } }; console.log(obj3.toString === Object.prototype.toString); ``` **2)** You can check the existence via `hasOwnProperty` ```js const obj1 = {}; console.log(obj1.hasOwnProperty('toString')); const obj3 = { toString() { } }; console.log(obj3.hasOwnProperty('toString')); ```
We have two options: 1. `myObj.hasOwnProperty('toString')` 2. `myObj.toString !== Object.prototype.toString` I think it is better to use the second option, because the method using `hasOwnProperty` isn't as robust. Check out this code snippet: ```js class MyObj { toString() { return "myObj"; } } class MyObjEx extends MyObj { something() {} } var a = new MyObj(); var b = new MyObjEx(); var c = new Object(); c.toString = () => "created by new Object()"; var d = { test: "test", toString: () => "created with object literal" }; var unchanged = { test: "test" }; console.log( "hasOwnProperty with 'native' toString method", unchanged.hasOwnProperty("toString") ); console.log("hasOwnProperty object from class", a.hasOwnProperty("toString")); console.log( "hasOwnProperty object from extended class", b.hasOwnProperty("toString") ); console.log( "hasOwnProperty object from new Object()", c.hasOwnProperty("toString") ); console.log( "hasOwnProperty object from object literal", d.hasOwnProperty("toString") ); console.log( "compare with Object.prototype on unchanged object", unchanged.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from class", a.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from extended class", b.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from new Object()", c.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from object literal", d.toString !== Object.prototype.toString ); ``` Output will be: ``` "hasOwnProperty with 'native' toString method" false "hasOwnProperty object from class" false "hasOwnProperty object from extended class" false "hasOwnProperty object from new Object()" true "hasOwnProperty object from object literal" true "compare with Object.prototype on unchanged object" false "compare with Object.prototype object from class" true "compare with Object.prototype object from extended class" true "compare with Object.prototype object from new Object()" true "compare with Object.prototype object from object literal" true ```
47,903,446
I have a function which should check if an Object has a `toString()` function and output it, or otherwise return the object. The problem is, it also triggers on plain objects and finally returns `[Object object]` as a string, which obviously looks awful on the GUI. Is there a way to determine if an Object uses the default `toString()` method that returns the ugly `[Object object]`, or has a custom `toString()` function that returns a pretty string? Here is my current function: ``` (data: any) => (data != null && typeof data.toString === 'function') ? data.toString() : data; ```
2017/12/20
[ "https://Stackoverflow.com/questions/47903446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5653638/" ]
**1)** You can compare with `Object.prototype.toString`. If it is not overridden the references are equal ```js const obj1 = {}; console.log(obj1.toString === Object.prototype.toString); const obj3 = { toString() { } }; console.log(obj3.toString === Object.prototype.toString); ``` **2)** You can check the existence via `hasOwnProperty` ```js const obj1 = {}; console.log(obj1.hasOwnProperty('toString')); const obj3 = { toString() { } }; console.log(obj3.hasOwnProperty('toString')); ```
None of the answers worked for me. The only one that worked was: ``` if (obj1.toString) { console.log(obj1.toString()) } ```
47,903,446
I have a function which should check if an Object has a `toString()` function and output it, or otherwise return the object. The problem is, it also triggers on plain objects and finally returns `[Object object]` as a string, which obviously looks awful on the GUI. Is there a way to determine if an Object uses the default `toString()` method that returns the ugly `[Object object]`, or has a custom `toString()` function that returns a pretty string? Here is my current function: ``` (data: any) => (data != null && typeof data.toString === 'function') ? data.toString() : data; ```
2017/12/20
[ "https://Stackoverflow.com/questions/47903446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5653638/" ]
We have two options: 1. `myObj.hasOwnProperty('toString')` 2. `myObj.toString !== Object.prototype.toString` I think it is better to use the second option, because the method using `hasOwnProperty` isn't as robust. Check out this code snippet: ```js class MyObj { toString() { return "myObj"; } } class MyObjEx extends MyObj { something() {} } var a = new MyObj(); var b = new MyObjEx(); var c = new Object(); c.toString = () => "created by new Object()"; var d = { test: "test", toString: () => "created with object literal" }; var unchanged = { test: "test" }; console.log( "hasOwnProperty with 'native' toString method", unchanged.hasOwnProperty("toString") ); console.log("hasOwnProperty object from class", a.hasOwnProperty("toString")); console.log( "hasOwnProperty object from extended class", b.hasOwnProperty("toString") ); console.log( "hasOwnProperty object from new Object()", c.hasOwnProperty("toString") ); console.log( "hasOwnProperty object from object literal", d.hasOwnProperty("toString") ); console.log( "compare with Object.prototype on unchanged object", unchanged.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from class", a.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from extended class", b.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from new Object()", c.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from object literal", d.toString !== Object.prototype.toString ); ``` Output will be: ``` "hasOwnProperty with 'native' toString method" false "hasOwnProperty object from class" false "hasOwnProperty object from extended class" false "hasOwnProperty object from new Object()" true "hasOwnProperty object from object literal" true "compare with Object.prototype on unchanged object" false "compare with Object.prototype object from class" true "compare with Object.prototype object from extended class" true "compare with Object.prototype object from new Object()" true "compare with Object.prototype object from object literal" true ```
``` o = new Object(); o.prop = 'exists'; o.hasOwnProperty('prop'); // returns true o.hasOwnProperty('toString'); // returns false o.hasOwnProperty('hasOwnProperty'); // returns false ``` prototype properties will return false if you check with `hasOwnProperty`
47,903,446
I have a function which should check if an Object has a `toString()` function and output it, or otherwise return the object. The problem is, it also triggers on plain objects and finally returns `[Object object]` as a string, which obviously looks awful on the GUI. Is there a way to determine if an Object uses the default `toString()` method that returns the ugly `[Object object]`, or has a custom `toString()` function that returns a pretty string? Here is my current function: ``` (data: any) => (data != null && typeof data.toString === 'function') ? data.toString() : data; ```
2017/12/20
[ "https://Stackoverflow.com/questions/47903446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5653638/" ]
We have two options: 1. `myObj.hasOwnProperty('toString')` 2. `myObj.toString !== Object.prototype.toString` I think it is better to use the second option, because the method using `hasOwnProperty` isn't as robust. Check out this code snippet: ```js class MyObj { toString() { return "myObj"; } } class MyObjEx extends MyObj { something() {} } var a = new MyObj(); var b = new MyObjEx(); var c = new Object(); c.toString = () => "created by new Object()"; var d = { test: "test", toString: () => "created with object literal" }; var unchanged = { test: "test" }; console.log( "hasOwnProperty with 'native' toString method", unchanged.hasOwnProperty("toString") ); console.log("hasOwnProperty object from class", a.hasOwnProperty("toString")); console.log( "hasOwnProperty object from extended class", b.hasOwnProperty("toString") ); console.log( "hasOwnProperty object from new Object()", c.hasOwnProperty("toString") ); console.log( "hasOwnProperty object from object literal", d.hasOwnProperty("toString") ); console.log( "compare with Object.prototype on unchanged object", unchanged.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from class", a.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from extended class", b.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from new Object()", c.toString !== Object.prototype.toString ); console.log( "compare with Object.prototype object from object literal", d.toString !== Object.prototype.toString ); ``` Output will be: ``` "hasOwnProperty with 'native' toString method" false "hasOwnProperty object from class" false "hasOwnProperty object from extended class" false "hasOwnProperty object from new Object()" true "hasOwnProperty object from object literal" true "compare with Object.prototype on unchanged object" false "compare with Object.prototype object from class" true "compare with Object.prototype object from extended class" true "compare with Object.prototype object from new Object()" true "compare with Object.prototype object from object literal" true ```
None of the answers worked for me. The only one that worked was:

```
if (obj1.toString) {
  console.log(obj1.toString())
}
```
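As an aside (not from the answers above), the same "is `toString` overridden?" question can be answered with reflection in Java; this is just an analogous sketch for comparison:

```java
// Java analogue of the check above: a class overrides toString() iff
// the most specific public toString() is declared somewhere other than Object.
class ToStringCheck {

    static boolean overridesToString(Class<?> type) {
        try {
            return type.getMethod("toString").getDeclaringClass() != Object.class;
        } catch (NoSuchMethodException e) {
            throw new AssertionError("every class has a public toString()", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(overridesToString(Object.class)); // false
        System.out.println(overridesToString(String.class)); // true
    }
}
```

Note that, like the `obj.toString !== Object.prototype.toString` check in JavaScript, this reports `true` when any superclass other than `Object` supplies the override.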
11,359,333
What is the best (idiomatic) way to concatenate Strings in Groovy?

Option 1:

```
calculateAccountNumber(bank, branch, checkDigit, account) {
    bank + branch + checkDigit + account
}
```

Option 2:

```
calculateAccountNumber(bank, branch, checkDigit, account) {
    "$bank$branch$checkDigit$account"
}
```

I've found an interesting point about this topic on the old Groovy website, in "Things you can do but better leave undone":

> As in Java, you can concatenate Strings with the "+" symbol. But Java
> only needs that one of the two items of a "+" expression to be a
> String, no matter if it's in the first place or in the last one. Java
> will use the toString() method in the non-String object of your "+"
> expression. But in Groovy, you just should be sure the first item of
> your "+" expression implements the plus() method in the right way,
> because Groovy will search and use it. In Groovy GDK, only the Number
> and String/StringBuffer/Character classes have the plus() method
> implemented to concatenate strings. To avoid surprises, always use
> GStrings.
2012/07/06
[ "https://Stackoverflow.com/questions/11359333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/462015/" ]
I always go for the second method (using the GString template), though when there are more than a couple of parameters like you have, I tend to wrap them in `${X}` as I find it makes it more readable.

Running some benchmarks (using [Nagai Masato](https://twitter.com/#!/nagai_masato)'s excellent [GBench module](http://code.google.com/p/gbench/)) on these methods also shows templating is faster than the other methods:

```
@Grab( 'com.googlecode.gbench:gbench:0.3.0-groovy-2.0' )
import gbench.*

def (foo,bar,baz) = [ 'foo', 'bar', 'baz' ]

new BenchmarkBuilder().run( measureCpuTime:false ) {
  // Just add the strings
  'String adder' {
    foo + bar + baz
  }
  // Templating
  'GString template' {
    "$foo$bar$baz"
  }
  // I find this more readable
  'Readable GString template' {
    "${foo}${bar}${baz}"
  }
  // StringBuilder
  'StringBuilder' {
    new StringBuilder().append( foo )
                       .append( bar )
                       .append( baz )
                       .toString()
  }
  'StringBuffer' {
    new StringBuffer().append( foo )
                      .append( bar )
                      .append( baz )
                      .toString()
  }
}.prettyPrint()
```

That gives me the following output on my machine:

```
Environment
===========
* Groovy: 2.0.0
* JVM: Java HotSpot(TM) 64-Bit Server VM (20.6-b01-415, Apple Inc.)
  * JRE: 1.6.0_31
  * Total Memory: 81.0625 MB
  * Maximum Memory: 123.9375 MB
* OS: Mac OS X (10.6.8, x86_64)

Options
=======
* Warm Up: Auto
* CPU Time Measurement: Off

String adder               539
GString template           245
Readable GString template  244
StringBuilder              318
StringBuffer               370
```

So with readability and speed in its favour, I'd recommend templating ;-)

NB: If you add `toString()` to the end of the GString methods to make the output type the same as the other metrics, and make it a fairer test, `StringBuilder` and `StringBuffer` beat the GString methods for speed.
However, as GString can be used in place of String for most things (you just need to exercise caution with Map keys and SQL statements), it can mostly be left without this final conversion.

Adding these tests (as has been asked in the comments):

```
'GString template toString' {
  "$foo$bar$baz".toString()
}
'Readable GString template toString' {
  "${foo}${bar}${baz}".toString()
}
```

Now we get the results:

```
String adder                        514
GString template                    267
Readable GString template           269
GString template toString           478
Readable GString template toString  480
StringBuilder                       321
StringBuffer                        369
```

So as you can see (as I said), it is slower than StringBuilder or StringBuffer, but still a bit faster than adding Strings... But still lots more readable.

### Edit after comment by ruralcoder below

Updated to latest gbench, larger strings for concatenation and a test with a StringBuilder initialised to a good size:

```
@Grab( 'org.gperfutils:gbench:0.4.2-groovy-2.1' )

def (foo,bar,baz) = [ 'foo' * 50, 'bar' * 50, 'baz' * 50 ]

benchmark {
  // Just add the strings
  'String adder' {
    foo + bar + baz
  }
  // Templating
  'GString template' {
    "$foo$bar$baz"
  }
  // I find this more readable
  'Readable GString template' {
    "${foo}${bar}${baz}"
  }
  'GString template toString' {
    "$foo$bar$baz".toString()
  }
  'Readable GString template toString' {
    "${foo}${bar}${baz}".toString()
  }
  // StringBuilder
  'StringBuilder' {
    new StringBuilder().append( foo )
                       .append( bar )
                       .append( baz )
                       .toString()
  }
  'StringBuffer' {
    new StringBuffer().append( foo )
                      .append( bar )
                      .append( baz )
                      .toString()
  }
  'StringBuffer with Allocation' {
    new StringBuffer( 512 ).append( foo )
                           .append( bar )
                           .append( baz )
                           .toString()
  }
}.prettyPrint()
```

gives

```
Environment
===========
* Groovy: 2.1.6
* JVM: Java HotSpot(TM) 64-Bit Server VM (23.21-b01, Oracle Corporation)
  * JRE: 1.7.0_21
  * Total Memory: 467.375 MB
  * Maximum Memory: 1077.375 MB
* OS: Mac OS X (10.8.4, x86_64)

Options
=======
* Warm Up: Auto (- 60 sec)
* CPU Time Measurement: On

                                    user  system  cpu  real

String adder                         630       0  630   647
GString template                      29       0   29    31
Readable GString template             32       0   32    33
GString template toString            429       0  429   443
Readable GString template toString   428       1  429   441
StringBuilder                        383       1  384   396
StringBuffer                         395       1  396   409
StringBuffer with Allocation         277       0  277   286
```
Reproducing tim\_yates' answer on current hardware and adding leftShift() and concat() methods to check the finding:

```
'String leftShift' {
  foo << bar << baz
}
'String concat' {
  foo.concat(bar)
     .concat(baz)
     .toString()
}
```

The outcome shows concat() to be the faster solution for a pure String, but if you can handle GString somewhere else, GString template is still ahead, while honorable mention should go to leftShift() (bitwise operator) and StringBuffer() with initial allocation:

```
Environment
===========
* Groovy: 2.4.8
* JVM: OpenJDK 64-Bit Server VM (25.191-b12, Oracle Corporation)
  * JRE: 1.8.0_191
  * Total Memory: 238 MB
  * Maximum Memory: 3504 MB
* OS: Linux (4.19.13-300.fc29.x86_64, amd64)

Options
=======
* Warm Up: Auto (- 60 sec)
* CPU Time Measurement: On

                                    user  system  cpu  real

String adder                         453       7  460   469
String leftShift                     287       2  289   295
String concat                        169       1  170   173
GString template                      24       0   24    24
Readable GString template             32       0   32    32
GString template toString            400       0  400   406
Readable GString template toString   412       0  412   419
StringBuilder                        325       3  328   334
StringBuffer                         390       1  391   398
StringBuffer with Allocation         259       1  260   265
```
11,359,333
What is the best (idiomatic) way to concatenate Strings in Groovy?

Option 1:

```
calculateAccountNumber(bank, branch, checkDigit, account) {
    bank + branch + checkDigit + account
}
```

Option 2:

```
calculateAccountNumber(bank, branch, checkDigit, account) {
    "$bank$branch$checkDigit$account"
}
```

I've found an interesting point about this topic on the old Groovy website, in "Things you can do but better leave undone":

> As in Java, you can concatenate Strings with the "+" symbol. But Java
> only needs that one of the two items of a "+" expression to be a
> String, no matter if it's in the first place or in the last one. Java
> will use the toString() method in the non-String object of your "+"
> expression. But in Groovy, you just should be sure the first item of
> your "+" expression implements the plus() method in the right way,
> because Groovy will search and use it. In Groovy GDK, only the Number
> and String/StringBuffer/Character classes have the plus() method
> implemented to concatenate strings. To avoid surprises, always use
> GStrings.
2012/07/06
[ "https://Stackoverflow.com/questions/11359333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/462015/" ]
I always go for the second method (using the GString template), though when there are more than a couple of parameters like you have, I tend to wrap them in `${X}` as I find it makes it more readable.

Running some benchmarks (using [Nagai Masato](https://twitter.com/#!/nagai_masato)'s excellent [GBench module](http://code.google.com/p/gbench/)) on these methods also shows templating is faster than the other methods:

```
@Grab( 'com.googlecode.gbench:gbench:0.3.0-groovy-2.0' )
import gbench.*

def (foo,bar,baz) = [ 'foo', 'bar', 'baz' ]

new BenchmarkBuilder().run( measureCpuTime:false ) {
  // Just add the strings
  'String adder' {
    foo + bar + baz
  }
  // Templating
  'GString template' {
    "$foo$bar$baz"
  }
  // I find this more readable
  'Readable GString template' {
    "${foo}${bar}${baz}"
  }
  // StringBuilder
  'StringBuilder' {
    new StringBuilder().append( foo )
                       .append( bar )
                       .append( baz )
                       .toString()
  }
  'StringBuffer' {
    new StringBuffer().append( foo )
                      .append( bar )
                      .append( baz )
                      .toString()
  }
}.prettyPrint()
```

That gives me the following output on my machine:

```
Environment
===========
* Groovy: 2.0.0
* JVM: Java HotSpot(TM) 64-Bit Server VM (20.6-b01-415, Apple Inc.)
  * JRE: 1.6.0_31
  * Total Memory: 81.0625 MB
  * Maximum Memory: 123.9375 MB
* OS: Mac OS X (10.6.8, x86_64)

Options
=======
* Warm Up: Auto
* CPU Time Measurement: Off

String adder               539
GString template           245
Readable GString template  244
StringBuilder              318
StringBuffer               370
```

So with readability and speed in its favour, I'd recommend templating ;-)

NB: If you add `toString()` to the end of the GString methods to make the output type the same as the other metrics, and make it a fairer test, `StringBuilder` and `StringBuffer` beat the GString methods for speed.
However, as GString can be used in place of String for most things (you just need to exercise caution with Map keys and SQL statements), it can mostly be left without this final conversion.

Adding these tests (as has been asked in the comments):

```
'GString template toString' {
  "$foo$bar$baz".toString()
}
'Readable GString template toString' {
  "${foo}${bar}${baz}".toString()
}
```

Now we get the results:

```
String adder                        514
GString template                    267
Readable GString template           269
GString template toString           478
Readable GString template toString  480
StringBuilder                       321
StringBuffer                        369
```

So as you can see (as I said), it is slower than StringBuilder or StringBuffer, but still a bit faster than adding Strings... But still lots more readable.

### Edit after comment by ruralcoder below

Updated to latest gbench, larger strings for concatenation and a test with a StringBuilder initialised to a good size:

```
@Grab( 'org.gperfutils:gbench:0.4.2-groovy-2.1' )

def (foo,bar,baz) = [ 'foo' * 50, 'bar' * 50, 'baz' * 50 ]

benchmark {
  // Just add the strings
  'String adder' {
    foo + bar + baz
  }
  // Templating
  'GString template' {
    "$foo$bar$baz"
  }
  // I find this more readable
  'Readable GString template' {
    "${foo}${bar}${baz}"
  }
  'GString template toString' {
    "$foo$bar$baz".toString()
  }
  'Readable GString template toString' {
    "${foo}${bar}${baz}".toString()
  }
  // StringBuilder
  'StringBuilder' {
    new StringBuilder().append( foo )
                       .append( bar )
                       .append( baz )
                       .toString()
  }
  'StringBuffer' {
    new StringBuffer().append( foo )
                      .append( bar )
                      .append( baz )
                      .toString()
  }
  'StringBuffer with Allocation' {
    new StringBuffer( 512 ).append( foo )
                           .append( bar )
                           .append( baz )
                           .toString()
  }
}.prettyPrint()
```

gives

```
Environment
===========
* Groovy: 2.1.6
* JVM: Java HotSpot(TM) 64-Bit Server VM (23.21-b01, Oracle Corporation)
  * JRE: 1.7.0_21
  * Total Memory: 467.375 MB
  * Maximum Memory: 1077.375 MB
* OS: Mac OS X (10.8.4, x86_64)

Options
=======
* Warm Up: Auto (- 60 sec)
* CPU Time Measurement: On

                                    user  system  cpu  real

String adder                         630       0  630   647
GString template                      29       0   29    31
Readable GString template             32       0   32    33
GString template toString            429       0  429   443
Readable GString template toString   428       1  429   441
StringBuilder                        383       1  384   396
StringBuffer                         395       1  396   409
StringBuffer with Allocation         277       0  277   286
```
```
def my_string = "some string"
println "here: " + my_string
```

Not quite sure why the answer above needs to go into benchmarks, string buffers, tests, etc.
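For what it's worth, the Java-side behavior the question's quoted passage describes (only one operand of `+` needs to be a String, with left-to-right evaluation, and `toString()` used for non-String operands) can be seen in plain Java. This is just an illustrative sketch, not part of the original answer:

```java
// Illustration of Java's `+`: a single String operand is enough, and
// left-to-right evaluation decides whether ints are summed or appended.
class PlusDemo {

    // 1 + 2 happens first as int addition, then "3" + "x".
    static String intsFirst() {
        return 1 + 2 + "x";
    }

    // "x" + 1 is already string concatenation, so 2 is appended too.
    static String stringFirst() {
        return "x" + 1 + 2;
    }

    // A non-String operand is converted via its toString().
    static String withObject(Object o) {
        return "value: " + o;
    }

    public static void main(String[] args) {
        System.out.println(intsFirst());   // 3x
        System.out.println(stringFirst()); // x12
        System.out.println(withObject(java.util.Collections.emptyList())); // value: []
    }
}
```

Groovy's `+` differs in that it dispatches to the left operand's `plus()` method, which is exactly why the quoted passage recommends GStrings.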
11,359,333
What is the best (idiomatic) way to concatenate Strings in Groovy?

Option 1:

```
calculateAccountNumber(bank, branch, checkDigit, account) {
    bank + branch + checkDigit + account
}
```

Option 2:

```
calculateAccountNumber(bank, branch, checkDigit, account) {
    "$bank$branch$checkDigit$account"
}
```

I've found an interesting point about this topic on the old Groovy website, in "Things you can do but better leave undone":

> As in Java, you can concatenate Strings with the "+" symbol. But Java
> only needs that one of the two items of a "+" expression to be a
> String, no matter if it's in the first place or in the last one. Java
> will use the toString() method in the non-String object of your "+"
> expression. But in Groovy, you just should be sure the first item of
> your "+" expression implements the plus() method in the right way,
> because Groovy will search and use it. In Groovy GDK, only the Number
> and String/StringBuffer/Character classes have the plus() method
> implemented to concatenate strings. To avoid surprises, always use
> GStrings.
2012/07/06
[ "https://Stackoverflow.com/questions/11359333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/462015/" ]
```
def my_string = "some string"
println "here: " + my_string
```

Not quite sure why the answer above needs to go into benchmarks, string buffers, tests, etc.
Reproducing tim\_yates' answer on current hardware and adding leftShift() and concat() methods to check the finding:

```
'String leftShift' {
  foo << bar << baz
}
'String concat' {
  foo.concat(bar)
     .concat(baz)
     .toString()
}
```

The outcome shows concat() to be the faster solution for a pure String, but if you can handle GString somewhere else, GString template is still ahead, while honorable mention should go to leftShift() (bitwise operator) and StringBuffer() with initial allocation:

```
Environment
===========
* Groovy: 2.4.8
* JVM: OpenJDK 64-Bit Server VM (25.191-b12, Oracle Corporation)
  * JRE: 1.8.0_191
  * Total Memory: 238 MB
  * Maximum Memory: 3504 MB
* OS: Linux (4.19.13-300.fc29.x86_64, amd64)

Options
=======
* Warm Up: Auto (- 60 sec)
* CPU Time Measurement: On

                                    user  system  cpu  real

String adder                         453       7  460   469
String leftShift                     287       2  289   295
String concat                        169       1  170   173
GString template                      24       0   24    24
Readable GString template             32       0   32    32
GString template toString            400       0  400   406
Readable GString template toString   412       0  412   419
StringBuilder                        325       3  328   334
StringBuffer                         390       1  391   398
StringBuffer with Allocation         259       1  260   265
```
12,555,940
Hello all, I am integrating a jQuery vertical accordion menu, to be more specific [THIS ONE HERE](http://www.designchemical.com/lab/jquery-vertical-accordion-menu-plugin/examples/). As you can see, this nice easing menu can be set to open on hover or on click. What I'm trying to achieve here is to use a combination of the two: specifically, I want the first and second child to open with an onclick function and the third one with a hover. I tried different techniques; none of them seems to work so far. I'm pretty new to jQuery... any ideas? Thanks in advance
2012/09/23
[ "https://Stackoverflow.com/questions/12555940", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1503518/" ]
This answer explains the difference between using an object literal and a function for defining a view model: [Difference between knockout View Models declared as object literals vs functions](https://stackoverflow.com/questions/9589419/difference-between-knockout-view-models-declared-as-object-literals-vs-functions)
It is better because the entire logic of a ViewModel can be contained in (encapsulated by) this constructor function. Logic can be very complex. It can include definitions of new functions and variables that are not global anymore. I find the following pattern:

```
ko.applyBindings((function() {
    // Define local variables and functions that the view model instance can
    // access and use, but that are not visible outside this function.
    var firstName = ko.observable("John");
    var lastName = ko.observable("Doe");
    var getFullName = function() {
        return firstName() + " " + lastName();
    };

    // Return the view model instance.
    return {
        fullName : ko.computed(getFullName)
    };
})());
```

to be even better, because it doesn't introduce any new global variable (like a constructor function) and still has great encapsulation capabilities.
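For comparison only (not from the original answer): the same encapsulation idea, expressed in Java terms. Locals captured by a factory method stay hidden from callers, exactly like the local variables in the IIFE above; only the derived value escapes:

```java
import java.util.function.Supplier;

// Sketch: the factory's locals (firstName, lastName) are invisible to
// callers; only the computed fullName supplier is exposed, mirroring
// the "return view model instance" step of the IIFE pattern.
class ViewModelFactory {

    static Supplier<String> fullNameViewModel() {
        String firstName = "John";  // private to this "constructor"
        String lastName = "Doe";
        return () -> firstName + " " + lastName;
    }

    public static void main(String[] args) {
        System.out.println(fullNameViewModel().get()); // John Doe
    }
}
```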
6,200,433
When the user decides to leave a field in the form empty, Apache Struts binds an empty `String` as the value for the corresponding property in the `ActionForm`. Is there any way to modify this behavior globally and opt for `null` instead of an empty `String`?

I know that Spring MVC does exactly the same, but there is also [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html) that can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
I think you might as well use your own implementation of BeanUtils, overriding the classes [org.apache.commons.beanutils.converters.AbstractConverter](http://commons.apache.org/beanutils/api/org/apache/commons/beanutils/converters/AbstractConverter.html) and [org.apache.commons.beanutils.converters.StringConverter](http://commons.apache.org/beanutils/api/org/apache/commons/beanutils/converters/StringConverter.html).
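A minimal sketch of the conversion logic such a custom converter would carry, kept in plain Java so it stands alone. In a real implementation this would live inside a subclass of the commons-beanutils classes linked above; the class and method names here are made up for illustration:

```java
// Hypothetical core of a custom String converter for the BeanUtils
// route: empty (or whitespace-only) request values become null
// instead of "".
class EmptyToNullStringConverter {

    static String convert(Object value) {
        if (value == null) {
            return null;
        }
        String s = value.toString().trim();
        return s.isEmpty() ? null : s;  // collapse "" to null
    }

    public static void main(String[] args) {
        System.out.println(convert(""));      // null
        System.out.println(convert(" abc ")); // abc
    }
}
```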
It's a little old question, but I resolved this issue by implementing another solution (an easier one, I think). I implemented a `TypeConverter` to convert the empty string to null. Two files are necessary:

The converter:

```
public class StringEmptyToNullConverter implements TypeConverter {

    private static final Logger log = LoggerFactory.getLogger(StringEmptyToNullConverter.class);

    @Override
    public Object convertValue(Map arg0, Object arg1, Member member, String arg3, Object obj, Class arg5) {
        String[] value = (String[]) obj;
        if (value == null || value[0] == null || value[0].isEmpty()) {
            logDebug("is null or empty: return null");
            return null;
        }
        logDebug("not null and not empty: return '{}'", value[0]);
        return value[0];
    }

    private void logDebug(String msg, Object... obj) {
        if (log.isDebugEnabled()) {
            log.debug(msg, obj);
        }
    }
}
```

And the register, named `xwork-conversion.properties`. You have to put this file on your classpath:

```
# syntax: <type> = <converterClassName>
java.lang.String = StringEmptyToNullConverter
```

See the struts [converter documentation](http://struts.apache.org/release/2.0.x/docs/type-conversion.html#TypeConversion-ApplyingaTypeConverterforanapplication).
6,200,433
When the user decides to leave a field in the form empty, Apache Struts binds an empty `String` as the value for the corresponding property in the `ActionForm`. Is there any way to modify this behavior globally and opt for `null` instead of an empty `String`?

I know that Spring MVC does exactly the same, but there is also [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html) that can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
A possible solution - one that will allow a single conversion entry point for all your String fields - would be to register a custom converter, not with Struts but with [BeanUtils](http://commons.apache.org/beanutils/).

For mapping request parameters to form properties, Struts makes use of the [populate method of its RequestUtils class](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html#populate%28java.lang.Object,%20javax.servlet.http.HttpServletRequest%29). This class in turn uses a `BeanUtils` implementation to do its work. A simplified flow would be something like Struts' [RequestUtils](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html) > [BeanUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtils.html) > [BeanUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtilsBean.html) > [ConvertUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html) > [ConvertUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtilsBean.html) > [Converter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/Converter.html).

The interesting thing is that there is also a [StringConverter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/converters/StringConverter.html) which converts from String to... aaaaaa... String! The [ConvertUtils class has a register](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html#register%28org.apache.commons.beanutils.Converter,%20java.lang.Class%29) method which you can use to register converters, overwriting the existing ones.
This means you could write yourself the custom String convertor which returns null for empty strings and then you could wait for your Struts application to load completely so that you have no surprises (i.e. make sure your converter is the last registered for type `String`). After the application loads, you step in and overwrite the default String convertor with your own implementation. For example, you could do that using a [ServletContextListener](http://download.oracle.com/javaee/1.4/api/javax/servlet/ServletContextListener.html) and call the `ConvertUtils.register(...)` in the `contextInitialized` method. You then configure the listener in `web.xml` and you [should be good to go](http://www.java-tips.org/java-ee-tips/java-servlet/how-to-work-with-servletcontextlistener-4.html).
See Apache's [`StringUtils.stripToNull()`](http://commons.apache.org/lang/api-2.3/org/apache/commons/lang/StringUtils.html#stripToNull%28java.lang.String%29) method. As for a configuration option, Struts has not given us that functionality (as far as I recall). I would have suggested overriding the `reset()` method from `ActionForm`, but that is called before the controller repopulates the form bean.
6,200,433
When the user decides to leave a field in the form empty, Apache Struts binds an empty `String` as the value for the corresponding property in the `ActionForm`. Is there any way to modify this behavior globally and opt for `null` instead of an empty `String`?

I know that Spring MVC does exactly the same, but there is also [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html) that can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
A possible solution - one that will allow a single conversion entry point for all your String fields - would be to register a custom converter, not with Struts but with [BeanUtils](http://commons.apache.org/beanutils/).

For mapping request parameters to form properties, Struts makes use of the [populate method of its RequestUtils class](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html#populate%28java.lang.Object,%20javax.servlet.http.HttpServletRequest%29). This class in turn uses a `BeanUtils` implementation to do its work. A simplified flow would be something like Struts' [RequestUtils](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html) > [BeanUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtils.html) > [BeanUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtilsBean.html) > [ConvertUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html) > [ConvertUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtilsBean.html) > [Converter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/Converter.html).

The interesting thing is that there is also a [StringConverter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/converters/StringConverter.html) which converts from String to... aaaaaa... String! The [ConvertUtils class has a register](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html#register%28org.apache.commons.beanutils.Converter,%20java.lang.Class%29) method which you can use to register converters, overwriting the existing ones.
This means you could write yourself the custom String convertor which returns null for empty strings and then you could wait for your Struts application to load completely so that you have no surprises (i.e. make sure your converter is the last registered for type `String`). After the application loads, you step in and overwrite the default String convertor with your own implementation. For example, you could do that using a [ServletContextListener](http://download.oracle.com/javaee/1.4/api/javax/servlet/ServletContextListener.html) and call the `ConvertUtils.register(...)` in the `contextInitialized` method. You then configure the listener in `web.xml` and you [should be good to go](http://www.java-tips.org/java-ee-tips/java-servlet/how-to-work-with-servletcontextlistener-4.html).
It's a little old question, but I resolved this issue by implementing another solution (an easier one, I think). I implemented a `TypeConverter` to convert the empty string to null. Two files are necessary:

The converter:

```
public class StringEmptyToNullConverter implements TypeConverter {

    private static final Logger log = LoggerFactory.getLogger(StringEmptyToNullConverter.class);

    @Override
    public Object convertValue(Map arg0, Object arg1, Member member, String arg3, Object obj, Class arg5) {
        String[] value = (String[]) obj;
        if (value == null || value[0] == null || value[0].isEmpty()) {
            logDebug("is null or empty: return null");
            return null;
        }
        logDebug("not null and not empty: return '{}'", value[0]);
        return value[0];
    }

    private void logDebug(String msg, Object... obj) {
        if (log.isDebugEnabled()) {
            log.debug(msg, obj);
        }
    }
}
```

And the register, named `xwork-conversion.properties`. You have to put this file on your classpath:

```
# syntax: <type> = <converterClassName>
java.lang.String = StringEmptyToNullConverter
```

See the struts [converter documentation](http://struts.apache.org/release/2.0.x/docs/type-conversion.html#TypeConversion-ApplyingaTypeConverterforanapplication).
6,200,433
When the user decides to leave a field in the form empty, Apache Struts binds an empty `String` as the value for the corresponding property in the `ActionForm`. Is there any way to modify this behavior globally and opt for `null` instead of an empty `String`?

I know that Spring MVC does exactly the same, but there is also [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html) that can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
A possible solution - one that will allow a single conversion entry point for all your String fields - would be to register a custom converter, not with Struts but with [BeanUtils](http://commons.apache.org/beanutils/).

For mapping request parameters to form properties, Struts makes use of the [populate method of its RequestUtils class](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html#populate%28java.lang.Object,%20javax.servlet.http.HttpServletRequest%29). This class in turn uses a `BeanUtils` implementation to do its work. A simplified flow would be something like Struts' [RequestUtils](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html) > [BeanUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtils.html) > [BeanUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtilsBean.html) > [ConvertUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html) > [ConvertUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtilsBean.html) > [Converter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/Converter.html).

The interesting thing is that there is also a [StringConverter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/converters/StringConverter.html) which converts from String to... aaaaaa... String! The [ConvertUtils class has a register](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html#register%28org.apache.commons.beanutils.Converter,%20java.lang.Class%29) method which you can use to register converters, overwriting the existing ones.
This means you could write yourself the custom String convertor which returns null for empty strings and then you could wait for your Struts application to load completely so that you have no surprises (i.e. make sure your converter is the last registered for type `String`). After the application loads, you step in and overwrite the default String convertor with your own implementation. For example, you could do that using a [ServletContextListener](http://download.oracle.com/javaee/1.4/api/javax/servlet/ServletContextListener.html) and call the `ConvertUtils.register(...)` in the `contextInitialized` method. You then configure the listener in `web.xml` and you [should be good to go](http://www.java-tips.org/java-ee-tips/java-servlet/how-to-work-with-servletcontextlistener-4.html).
Cf. <http://struts.apache.org/development/1.x/userGuide/configuration.html> for `convertNull` ;-)

> convertNull - Force simulation of the version 1.0 behavior when populating forms. If set to "true", the numeric Java wrapper class types (like java.lang.Integer) will default to null (rather than 0). (Since version 1.1) [false]
6,200,433
When the user decides to leave a field in the form empty, Apache Struts binds an empty `String` as the value for the corresponding property in the `ActionForm`. Is there any way to modify this behavior globally and opt for `null` instead of an empty `String`?

I know that Spring MVC does exactly the same, but there is also [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html) that can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
I think you might as well use your own implementation of BeanUtils, overriding the classes [org.apache.commons.beanutils.converters.AbstractConverter](http://commons.apache.org/beanutils/api/org/apache/commons/beanutils/converters/AbstractConverter.html) and [org.apache.commons.beanutils.converters.StringConverter](http://commons.apache.org/beanutils/api/org/apache/commons/beanutils/converters/StringConverter.html).
Cf. <http://struts.apache.org/development/1.x/userGuide/configuration.html> for `convertNull` ;-)

> convertNull - Force simulation of the version 1.0 behavior when populating forms. If set to "true", the numeric Java wrapper class types (like java.lang.Integer) will default to null (rather than 0). (Since version 1.1) [false]
6,200,433
When the user leaves a form field empty, Apache Struts binds an empty `String` as the value of the corresponding property in the `ActionForm`. Is there any way to change this behavior globally and bind `null` instead of an empty `String`? I know that Spring MVC does exactly the same, but it also offers [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html), which can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
I think you might as well use your own implementation of BeanUtils, overriding the classes [org.apache.commons.beanutils.converters.AbstractConverter](http://commons.apache.org/beanutils/api/org/apache/commons/beanutils/converters/AbstractConverter.html) and [org.apache.commons.beanutils.converters.StringConverter](http://commons.apache.org/beanutils/api/org/apache/commons/beanutils/converters/StringConverter.html)
Declare the String value in the ActionForm as `null` by default, e.g. `private String str = null;`. EDIT: a better solution is this: the attribute has setter and getter methods; in the getter method, check whether the value is empty and, if it is, explicitly return `null`.
6,200,433
When the user leaves a form field empty, Apache Struts binds an empty `String` as the value of the corresponding property in the `ActionForm`. Is there any way to change this behavior globally and bind `null` instead of an empty `String`? I know that Spring MVC does exactly the same, but it also offers [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html), which can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
A possible solution - one that will allow a single conversion entry point for all your String fields - would be to register a custom converter, but not with Struts but with [BeanUtils](http://commons.apache.org/beanutils/). For mapping request parameters to form properties, Struts makes use of the [populate method of its RequestUtils class](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html#populate%28java.lang.Object,%20javax.servlet.http.HttpServletRequest%29). This class in turn uses a `BeanUtils` implementation to do its work. A simplified flow would be roughly Struts' [RequestUtils](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html) > [BeanUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtils.html) > [BeanUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtilsBean.html) > [ConvertUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html) > [ConvertUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtilsBean.html) > [Converter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/Converter.html). The interesting thing is that there is also a [StringConverter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/converters/StringConverter.html) which converts from String to... aaaaaa... String! The [ConvertUtils class has a register](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html#register%28org.apache.commons.beanutils.Converter,%20java.lang.Class%29) method which you can use to register converters, overwriting the existing ones.
This means you could write yourself a custom String converter which returns null for empty strings, and then wait for your Struts application to load completely so that there are no surprises (i.e. make sure your converter is the last one registered for type `String`). After the application loads, you step in and overwrite the default String converter with your own implementation. For example, you could do that using a [ServletContextListener](http://download.oracle.com/javaee/1.4/api/javax/servlet/ServletContextListener.html) and call `ConvertUtils.register(...)` in the `contextInitialized` method. You then configure the listener in `web.xml` and you [should be good to go](http://www.java-tips.org/java-ee-tips/java-servlet/how-to-work-with-servletcontextlistener-4.html).
I think you might as well use your own implementation of BeanUtils, overriding the classes [org.apache.commons.beanutils.converters.AbstractConverter](http://commons.apache.org/beanutils/api/org/apache/commons/beanutils/converters/AbstractConverter.html) and [org.apache.commons.beanutils.converters.StringConverter](http://commons.apache.org/beanutils/api/org/apache/commons/beanutils/converters/StringConverter.html)
6,200,433
When the user leaves a form field empty, Apache Struts binds an empty `String` as the value of the corresponding property in the `ActionForm`. Is there any way to change this behavior globally and bind `null` instead of an empty `String`? I know that Spring MVC does exactly the same, but it also offers [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html), which can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
A possible solution - one that will allow a single conversion entry point for all your String fields - would be to register a custom converter, but not with Struts but with [BeanUtils](http://commons.apache.org/beanutils/). For mapping request parameters to form properties, Struts makes use of the [populate method of its RequestUtils class](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html#populate%28java.lang.Object,%20javax.servlet.http.HttpServletRequest%29). This class in turn uses a `BeanUtils` implementation to do its work. A simplified flow would be roughly Struts' [RequestUtils](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html) > [BeanUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtils.html) > [BeanUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtilsBean.html) > [ConvertUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html) > [ConvertUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtilsBean.html) > [Converter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/Converter.html). The interesting thing is that there is also a [StringConverter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/converters/StringConverter.html) which converts from String to... aaaaaa... String! The [ConvertUtils class has a register](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html#register%28org.apache.commons.beanutils.Converter,%20java.lang.Class%29) method which you can use to register converters, overwriting the existing ones.
This means you could write yourself a custom String converter which returns null for empty strings, and then wait for your Struts application to load completely so that there are no surprises (i.e. make sure your converter is the last one registered for type `String`). After the application loads, you step in and overwrite the default String converter with your own implementation. For example, you could do that using a [ServletContextListener](http://download.oracle.com/javaee/1.4/api/javax/servlet/ServletContextListener.html) and call `ConvertUtils.register(...)` in the `contextInitialized` method. You then configure the listener in `web.xml` and you [should be good to go](http://www.java-tips.org/java-ee-tips/java-servlet/how-to-work-with-servletcontextlistener-4.html).
Struts 2 offers a good mechanism for this with interceptors, which I think is much safer and easier than mucking around with `BeanUtils`. Here is the code I used, based on [Cimballi's Blog](http://cimballisblog.blogspot.de/2009/10/struts2-interceptor-to-remove-empty.html) but edited to compile in Java 7 (the original was from 2009): ``` import com.opensymphony.xwork2.ActionContext; import com.opensymphony.xwork2.ActionInvocation; import com.opensymphony.xwork2.interceptor.Interceptor; import java.util.ArrayList; import java.util.Collection; import java.util.Map; import org.apache.commons.lang.ArrayUtils; import org.apache.commons.lang.StringUtils; public class RemoveEmptyParametersInterceptor implements Interceptor { public RemoveEmptyParametersInterceptor() { super(); } /** * @see com.opensymphony.xwork2.interceptor.Interceptor#destroy() */ public void destroy() { // Nothing to do. } /** * @see com.opensymphony.xwork2.interceptor.Interceptor#init() */ public void init() { // Nothing to do. } public String intercept(final ActionInvocation invocation) throws Exception { final String result; final ActionContext actionContext = invocation.getInvocationContext(); final Map<String, Object> parameters = actionContext.getParameters(); if (parameters == null) { // Nothing to do. 
} else { final Collection<String> parametersToRemove = new ArrayList<String>(); for (final Map.Entry entry : parameters.entrySet()) { final Object object = entry.getValue(); if (object instanceof String) { final String value = (String) object; if (StringUtils.isEmpty(value)) { parametersToRemove.add((String) entry.getKey()); } } else if (object instanceof String[]) { final String[] values = (String[]) object; final Object[] objects = ArrayUtils.removeElement(values, ""); if (objects.length == 0) { parametersToRemove.add((String) entry.getKey()); } } else { throw new IllegalArgumentException(); } } for (final String parameterToRemove : parametersToRemove) { parameters.remove(parameterToRemove); } } result = invocation.invoke(); return result; } } ``` And here is how I use it in my struts.xml file: ``` <package name="webdefault" namespace="/" extends="struts-default"> <interceptors> <interceptor name="removeEmptyParameters" class="com.sentrylink.web.struts.RemoveEmptyParametersInterceptor"/> <interceptor-stack name="webStack"> <interceptor-ref name="removeEmptyParameters"/> <interceptor-ref name="defaultStack"/> </interceptor-stack> </interceptors> <default-interceptor-ref name="webStack"/> ... </package> ``` It was pointed out to me that ActionForm in the original question is a Struts 1 convention (the question has since been correctly tagged) but since Google still brings one here with a Struts 2 query I hope this answer will be useful to someone else.
6,200,433
When the user leaves a form field empty, Apache Struts binds an empty `String` as the value of the corresponding property in the `ActionForm`. Is there any way to change this behavior globally and bind `null` instead of an empty `String`? I know that Spring MVC does exactly the same, but it also offers [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html), which can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
A possible solution - one that will allow a single conversion entry point for all your String fields - would be to register a custom converter, but not with Struts but with [BeanUtils](http://commons.apache.org/beanutils/). For mapping request parameters to form properties, Struts makes use of the [populate method of its RequestUtils class](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html#populate%28java.lang.Object,%20javax.servlet.http.HttpServletRequest%29). This class in turn uses a `BeanUtils` implementation to do its work. A simplified flow would be roughly Struts' [RequestUtils](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html) > [BeanUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtils.html) > [BeanUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtilsBean.html) > [ConvertUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html) > [ConvertUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtilsBean.html) > [Converter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/Converter.html). The interesting thing is that there is also a [StringConverter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/converters/StringConverter.html) which converts from String to... aaaaaa... String! The [ConvertUtils class has a register](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html#register%28org.apache.commons.beanutils.Converter,%20java.lang.Class%29) method which you can use to register converters, overwriting the existing ones.
This means you could write yourself a custom String converter which returns null for empty strings, and then wait for your Struts application to load completely so that there are no surprises (i.e. make sure your converter is the last one registered for type `String`). After the application loads, you step in and overwrite the default String converter with your own implementation. For example, you could do that using a [ServletContextListener](http://download.oracle.com/javaee/1.4/api/javax/servlet/ServletContextListener.html) and call `ConvertUtils.register(...)` in the `contextInitialized` method. You then configure the listener in `web.xml` and you [should be good to go](http://www.java-tips.org/java-ee-tips/java-servlet/how-to-work-with-servletcontextlistener-4.html).
In `web.xml`, use the code below: ``` <init-param> <param-name>convertNull</param-name> <param-value>true</param-value> </init-param> ```
6,200,433
When the user leaves a form field empty, Apache Struts binds an empty `String` as the value of the corresponding property in the `ActionForm`. Is there any way to change this behavior globally and bind `null` instead of an empty `String`? I know that Spring MVC does exactly the same, but it also offers [StringTrimmerEditor](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/beans/propertyeditors/StringTrimmerEditor.html), which can be registered as a property editor to trim strings to `null`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77278/" ]
A possible solution - one that will allow a single conversion entry point for all your String fields - would be to register a custom converter, but not with Struts but with [BeanUtils](http://commons.apache.org/beanutils/). For mapping request parameters to form properties, Struts makes use of the [populate method of its RequestUtils class](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html#populate%28java.lang.Object,%20javax.servlet.http.HttpServletRequest%29). This class in turn uses a `BeanUtils` implementation to do its work. A simplified flow would be roughly Struts' [RequestUtils](http://struts.apache.org/1.1/api/org/apache/struts/util/RequestUtils.html) > [BeanUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtils.html) > [BeanUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/BeanUtilsBean.html) > [ConvertUtils](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html) > [ConvertUtilsBean](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtilsBean.html) > [Converter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/Converter.html). The interesting thing is that there is also a [StringConverter](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/converters/StringConverter.html) which converts from String to... aaaaaa... String! The [ConvertUtils class has a register](http://commons.apache.org/beanutils/commons-beanutils-1.7.0/docs/api/org/apache/commons/beanutils/ConvertUtils.html#register%28org.apache.commons.beanutils.Converter,%20java.lang.Class%29) method which you can use to register converters, overwriting the existing ones.
This means you could write yourself a custom String converter which returns null for empty strings, and then wait for your Struts application to load completely so that there are no surprises (i.e. make sure your converter is the last one registered for type `String`). After the application loads, you step in and overwrite the default String converter with your own implementation. For example, you could do that using a [ServletContextListener](http://download.oracle.com/javaee/1.4/api/javax/servlet/ServletContextListener.html) and call `ConvertUtils.register(...)` in the `contextInitialized` method. You then configure the listener in `web.xml` and you [should be good to go](http://www.java-tips.org/java-ee-tips/java-servlet/how-to-work-with-servletcontextlistener-4.html).
Declare the String value in the ActionForm as `null` by default, e.g. `private String str = null;`. EDIT: a better solution is this: the attribute has setter and getter methods; in the getter method, check whether the value is empty and, if it is, explicitly return `null`.
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
This will change the value of the first [node](https://developer.mozilla.org/en-US/docs/Web/API/Node) (which is a text node in your example): ``` $('#1').contents()[0].nodeValue = 'new text'; ``` [JSFiddle](http://jsfiddle.net/q5jn6/) --------------------------------------
If plausible, do something like the following: ``` <div id="1"> <span id="edit">some text</span> <div id="2"></div> </div> ``` Then, you can edit your text like so: ``` $('#edit').text('new text'); ``` Basically, you're putting the text you want to edit in its own element with an ID. Keep in mind that you can use any type of element you want; you don't have to use `span` specifically. You can change this to `div` or `p` and you'll still achieve the same result.
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
If plausible, do something like the following: ``` <div id="1"> <span id="edit">some text</span> <div id="2"></div> </div> ``` Then, you can edit your text like so: ``` $('#edit').text('new text'); ``` Basically, you're putting the text you want to edit in its own element with an ID. Keep in mind that you can use any type of element you want; you don't have to use `span` specifically. You can change this to `div` or `p` and you'll still achieve the same result.
My solution as an alternative to others: ``` var text_to_keep = $('#2').text(); $('#1').text('new text').append('<div id="2">' + text_to_keep + '</div>'); ``` <http://jsfiddle.net/LM63t/> P.S. I don't think `id`'s that begin with a number are valid.
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
This will change the value of the first [node](https://developer.mozilla.org/en-US/docs/Web/API/Node) (which is a text node in your example): ``` $('#1').contents()[0].nodeValue = 'new text'; ``` [JSFiddle](http://jsfiddle.net/q5jn6/) --------------------------------------
Try the following: ``` <div id="1"> <span id='myspan'>some text</span> <div id="2"></div> </div> ``` Then use: ``` $('#myspan').text('new text'); ```
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
Try the following: ``` <div id="1"> <span id='myspan'>some text</span> <div id="2"></div> </div> ``` Then use: ``` $('#myspan').text('new text'); ```
My solution as an alternative to others: ``` var text_to_keep = $('#2').text(); $('#1').text('new text').append('<div id="2">' + text_to_keep + '</div>'); ``` <http://jsfiddle.net/LM63t/> P.S. I don't think `id`'s that begin with a number are valid.
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
This will change the value of the first [node](https://developer.mozilla.org/en-US/docs/Web/API/Node) (which is a text node in your example): ``` $('#1').contents()[0].nodeValue = 'new text'; ``` [JSFiddle](http://jsfiddle.net/q5jn6/) --------------------------------------
By the way, it is bad practice to use bare numbers as element ids. You can add a span if you don't want to change the style of your element. So in your case you would do something like this (I replaced the numeric ids): ``` <!-- HTML --> <div id="text-container"> <span id="some-text">some text</span> <div id="something-else"></div> </div> ``` And in your JavaScript: ``` //JavaScript $('#some-text').text('new text'); ```
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
This will change the value of the first [node](https://developer.mozilla.org/en-US/docs/Web/API/Node) (which is a text node in your example): ``` $('#1').contents()[0].nodeValue = 'new text'; ``` [JSFiddle](http://jsfiddle.net/q5jn6/) --------------------------------------
My solution as an alternative to others: ``` var text_to_keep = $('#2').text(); $('#1').text('new text').append('<div id="2">' + text_to_keep + '</div>'); ``` <http://jsfiddle.net/LM63t/> P.S. I don't think `id`'s that begin with a number are valid.
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
This will change the value of the first [node](https://developer.mozilla.org/en-US/docs/Web/API/Node) (which is a text node in your example): ``` $('#1').contents()[0].nodeValue = 'new text'; ``` [JSFiddle](http://jsfiddle.net/q5jn6/) --------------------------------------
The best solution is to check that the node type is `Node.TEXT_NODE` and that `nodeValue` is non-empty: ``` $('#1') .contents() .filter(function() { return this.nodeType === Node.TEXT_NODE && this.nodeValue && this.nodeValue.trim(); }) .replaceWith("new text"); ```
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
By the way, it is bad practice to use bare numbers as element ids. You can add a span if you don't want to change the style of your element. So in your case you would do something like this (I replaced the numeric ids): ``` <!-- HTML --> <div id="text-container"> <span id="some-text">some text</span> <div id="something-else"></div> </div> ``` And in your JavaScript: ``` //JavaScript $('#some-text').text('new text'); ```
My solution as an alternative to others: ``` var text_to_keep = $('#2').text(); $('#1').text('new text').append('<div id="2">' + text_to_keep + '</div>'); ``` <http://jsfiddle.net/LM63t/> P.S. I don't think `id`'s that begin with a number are valid.
24,493,294
I need to modify the following HTML using Javascript, ``` <div id="1"> some text <div id="2"></div> </div> ``` I have tried using `$('#1').text('new text');` However, this unintentionally removes `<div id="2">` How should I change `some text`, without changing the surrounding elements?
2014/06/30
[ "https://Stackoverflow.com/questions/24493294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893345/" ]
The best solution is to check that the node type is `Node.TEXT_NODE` and that `nodeValue` is non-empty: ``` $('#1') .contents() .filter(function() { return this.nodeType === Node.TEXT_NODE && this.nodeValue && this.nodeValue.trim(); }) .replaceWith("new text"); ```
My solution as an alternative to others: ``` var text_to_keep = $('#2').text(); $('#1').text('new text').append('<div id="2">' + text_to_keep + '</div>'); ``` <http://jsfiddle.net/LM63t/> P.S. I don't think `id`'s that begin with a number are valid.
37,036,631
I am trying to make a web tool that will allow people to replace colors in simple two-color images that I am selling. The images I am using are saved in JPG format. I looked here and there and came up with the following code: ``` var canvas = document.getElementById("canvas"); var context = canvas.getContext("2d"); img = new Image(); img.onload = function() { canvas.width = img.width; canvas.height = img.height; context.drawImage(img,0,0,canvas.width,canvas.height); var imageData = context.getImageData(0, 0, canvas.width, canvas.height); var pixelArray = imageData.data; var length = pixelArray.length / 4; for (var i = 0; i < length; i++) { var index = 4 * i; var r = pixelArray[index]; var g = pixelArray[++index]; var b = pixelArray[++index]; var a = pixelArray[++index]; if (r === 0 && g === 0 && b === 0 && a === 255) { pixelArray[--index] = 47; pixelArray[--index] = 255; pixelArray[--index] = 173; } } context.putImageData(imageData, 0, 0); } img.src = "assets/images/tijana-mala.jpg"; ``` Unfortunately I came to the realization that many of the pixels are not just "black or white"; there are different shades of each. The images are rather simple: I sell quotes, i.e. text on a background, using a variety of fonts. Is there a way to do this beyond my piece of code? Thank you P.S. My competition did it here: <http://www.crownprintsart.com/color> username: your email address password: crownprintsdemo
2016/05/04
[ "https://Stackoverflow.com/questions/37036631", "https://Stackoverflow.com", "https://Stackoverflow.com/users/653874/" ]
Transfer Luma To Alpha Channel ------------------------------ In the method I would recommend, we can first convert the black data to an alpha channel based on the "luminance" of the color. This will keep the anti-aliasing, which is the cause of the various shades along the edges, as well as allowing us to use composition modes. Although not stated, if the black is not pure black (i.e. is actually dark grey) we can compress the range by using a simple threshold value in combination. Then color the white background using `source-atop` composition mode, and finally draw in the new color for black (which is now the alpha channel) using `destination-atop` composition mode. Using composition modes is better for performance as you don't have to bother mixing the colors manually. Using Threshold Value --------------------- A different method is to use a threshold value. If the color (grey/luminance) is above a certain value, turn it into color A; if below, into color B, etc. However, the problem with this is that anti-aliased edges will lose their purpose and become a solid part of the edge, which is not desirable as the edge will end up jaggy as if not anti-aliased. We can reduce the problem by using a very high-resolution original which we later scale down, but the penalty is that there is more data to iterate. Manual Color Mixing =================== We can also mix the colors manually. Use a value from one of the channels as a normalized luminance/grey value and mix the two colors based on this value (pseudo): ``` luma = redColor / 255; // or green/blue assuming image is grey-scale newColor = Math.round(colorA * luma + colorB * (1 - luma)); ``` Example Using Composition Modes =============================== This example uses the first technique.
Grey below is the browser background color to show the alpha channel - ![stage 1](https://i.stack.imgur.com/8nGhv.jpg) ![stage 2](https://i.stack.imgur.com/n5xb5.jpg) **Final result** ![result](https://i.stack.imgur.com/jGyZY.png) ```js var img = new Image, ctx = c.getContext("2d"); img.onload = setup; img.crossOrigin = ""; img.src = "https://i.imgur.com/ZJHabAD.jpg"; function setup() { // draw in image (scaled for demo) c.width = this.naturalWidth; c.height = this.naturalHeight; ctx.drawImage(this, 0, 0); // create alpha channel from "black" pixels var idata = ctx.getImageData(0, 0, c.width, c.height), data = idata.data; for(var i = 0, v; i < data.length; i += 4) { v = data[i]; if (v < 30) v = 0; // compress grey to black (arbitrary threshold here) data[i+3] = v; } ctx.putImageData(idata, 0, 0); // replace white color ctx.fillStyle = "#AF4979"; ctx.globalCompositeOperation = "source-atop"; ctx.fillRect(0, 0, c.width, c.height); // replace "black" (alpha) ctx.fillStyle = "#C6AF47"; ctx.globalCompositeOperation = "destination-atop"; ctx.fillRect(0, 0, c.width, c.height); } ``` ```css body {background:#777} ``` ```html <canvas id=c></canvas> ```
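The manual-mixing pseudo code in the answer above can be sketched as a plain function. The helper names and the array representation of colors are assumptions for illustration; only the `colorA * luma + colorB * (1 - luma)` formula comes from the answer:

```javascript
// Blend one channel of two colors by a normalized luma value (0..1).
function mixChannel(a, b, luma) {
  return Math.round(a * luma + b * (1 - luma));
}

// Mix two [r, g, b] colors for a grey-scale source pixel (0..255):
// colorA shows where the source is white, colorB where it is black,
// and anti-aliased edge pixels blend smoothly between the two.
function mixColors(colorA, colorB, grey) {
  var luma = grey / 255;
  return colorA.map(function (a, i) {
    return mixChannel(a, colorB[i], luma);
  });
}
```

Run per pixel inside the same `getImageData` loop the question already uses.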
Instead of checking for the exact value, you can check whether the color is near a specific value. There may be a better way to do it, but here's my solution (`r`, `g`, `b`, `a` are the channel values of the current pixel):

```
var precision = 10;

var replaceColor = {
    r: 0,
    g: 0,
    b: 0,
    a: 255,
};

var isInRange = (
    r >= replaceColor.r - precision && r <= replaceColor.r + precision &&
    g >= replaceColor.g - precision && g <= replaceColor.g + precision &&
    b >= replaceColor.b - precision && b <= replaceColor.b + precision &&
    a >= replaceColor.a - precision && a <= replaceColor.a + precision
);

if(isInRange) {
    // replace the pixel color here
}
```

Hope this helps
6,494
I'm new to all grain brewing and one of the things I haven't been able to find much information on is the importance of water volumes for the mash and sparge steps. There are a lot of calculators out there, and I'm trying to find out if water volumes play a key role in the extraction of sugars from the grain. It makes sense that too little water in a mash wouldn't allow for the extraction of sugars because not all of the grain would be soaking in the mash water. But, can the mash efficiency be lessened when mashing with too much water? And, once mashing is complete, does it really matter how much sparge water is used, so long as you reach your target pre-boil volume?
2012/02/28
[ "https://homebrew.stackexchange.com/questions/6494", "https://homebrew.stackexchange.com", "https://homebrew.stackexchange.com/users/1750/" ]
Mash thickness has a small impact on your beer, but not really enough to stress about. I aim for a constant thickness, regardless of beer style. Some brewers give their stronger beers a thicker mash, while their low-gravity beers get a thinner mash. The logic behind the latter approach is that thinner mashes encourage a more fermentable beer. My opinion is that lowering the mash temp a degree or two has a much bigger impact than mash thickness.

I don't believe that mash thickness will have much effect on your efficiency, as long as you are in the 'normal' range of mash thickness -- about 1-3 quarts of water per lb of grain.

As for how much sparge water you use, the approach I follow is that First Runnings + Second Runnings (i.e. the sparge water) = Pre-Boil Volume. You don't have to do it this way, but it's a good solid approach that doesn't waste water.

To figure out how much water you'll need requires a small bit of math and knowledge about your system. You'll need to know how much water you'll lose to your mash tun's false bottom (1-2 liters is common) and how much water will be absorbed by the grain (about .9 liters per kg of grain for me).

```
Water lost to grain + water lost to false bottom + pre-boil volume = total water needed for mash & sparge.
```
@Hopwise addressed the issue of efficiency and mash thickness. But the amount of sparge water will affect your efficiency as well. The more water you sparge with, the more sugar you'll extract from the mash.

The grist will absorb a constant amount of water (around 0.13 gallons per pound of grist). When you add your sparge water and stir, all the sugars are dissolved into the water. If you add a lot of water, the sugars are very dilute. If you add a little water, they're more concentrated. When you drain your mash tun, you're leaving behind just the water absorbed by the grain. If the sugars are highly concentrated (i.e. you added a small amount of sparge water), then you're leaving behind more sugar than if the sugars are dilute.

But this doesn't mean that you should add huge quantities of sparge water. The increase in extract is countered by a number of factors. For one, you'll have to boil longer to evaporate all that water you added. Also, you'll have to heat all that extra sparge water. The gains in efficiency are countered by using more energy and spending more time.

I calculate my sparge water volume from the boil time and evaporation rate. For most beers I boil for 90 minutes. With my setup, this translates to an evaporation loss of around 1.5 gallons. So to get 5.5 gallons into the fermenter I aim for a pre-boil volume of around 7 gallons. Your mileage will, of course, vary.
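The volume arithmetic in these answers can be sketched as a tiny calculation. This is just an illustration of the formulas above — the absorption rate, dead-space loss, and boil-off figures are the example numbers quoted in the answers, not universal constants:

```javascript
// Rough brewing water arithmetic (all volumes in US gallons).
// The 0.13 gal/lb absorption rate is the figure quoted above;
// your system will differ.
var ABSORPTION_PER_LB = 0.13;

// Target pre-boil volume = volume into the fermenter + boil-off loss.
function preBoilVolume(batchSize, boilOffLoss) {
  return batchSize + boilOffLoss;
}

// Water lost to grain + water lost to the false bottom + pre-boil volume.
function totalWaterNeeded(batchSize, boilOffLoss, grainLbs, deadSpace) {
  return grainLbs * ABSORPTION_PER_LB + deadSpace +
         preBoilVolume(batchSize, boilOffLoss);
}

// Example from the answer: 5.5 gal batch, ~1.5 gal evaporation loss.
console.log(preBoilVolume(5.5, 1.5)); // 7
```

So a 10 lb grain bill with half a gallon of dead space would need roughly `10 * 0.13 + 0.5 + 7 ≈ 8.8` gallons of total water for mash and sparge.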
6,494
I'm new to all grain brewing and one of the things I haven't been able to find much information on is the importance of water volumes for the mash and sparge steps. There are a lot of calculators out there, and I'm trying to find out if water volumes play a key role in the extraction of sugars from the grain. It makes sense that too little water in a mash wouldn't allow for the extraction of sugars because not all of the grain would be soaking in the mash water. But, can the mash efficiency be lessened when mashing with too much water? And, once mashing is complete, does it really matter how much sparge water is used, so long as you reach your target pre-boil volume?
2012/02/28
[ "https://homebrew.stackexchange.com/questions/6494", "https://homebrew.stackexchange.com", "https://homebrew.stackexchange.com/users/1750/" ]
Mash thickness has a small impact on your beer, but not really enough to stress about. I aim for a constant thickness, regardless of beer style. Some brewers give their stronger beers a thicker mash, while their low-gravity beers get a thinner mash. The logic behind the latter approach is that thinner mashes encourage a more fermentable beer. My opinion is that lowering the mash temp a degree or two has a much bigger impact than mash thickness.

I don't believe that mash thickness will have much effect on your efficiency, as long as you are in the 'normal' range of mash thickness -- about 1-3 quarts of water per lb of grain.

As for how much sparge water you use, the approach I follow is that First Runnings + Second Runnings (i.e. the sparge water) = Pre-Boil Volume. You don't have to do it this way, but it's a good solid approach that doesn't waste water.

To figure out how much water you'll need requires a small bit of math and knowledge about your system. You'll need to know how much water you'll lose to your mash tun's false bottom (1-2 liters is common) and how much water will be absorbed by the grain (about .9 liters per kg of grain for me).

```
Water lost to grain + water lost to false bottom + pre-boil volume = total water needed for mash & sparge.
```
Also, over-sparging can extract extra, undesirable tannins from the husks.
6,494
I'm new to all grain brewing and one of the things I haven't been able to find much information on is the importance of water volumes for the mash and sparge steps. There are a lot of calculators out there, and I'm trying to find out if water volumes play a key role in the extraction of sugars from the grain. It makes sense that too little water in a mash wouldn't allow for the extraction of sugars because not all of the grain would be soaking in the mash water. But, can the mash efficiency be lessened when mashing with too much water? And, once mashing is complete, does it really matter how much sparge water is used, so long as you reach your target pre-boil volume?
2012/02/28
[ "https://homebrew.stackexchange.com/questions/6494", "https://homebrew.stackexchange.com", "https://homebrew.stackexchange.com/users/1750/" ]
@Hopwise addressed the issue of efficiency and mash thickness. But the amount of sparge water will affect your efficiency as well. The more water you sparge with, the more sugar you'll extract from the mash.

The grist will absorb a constant amount of water (around 0.13 gallons per pound of grist). When you add your sparge water and stir, all the sugars are dissolved into the water. If you add a lot of water, the sugars are very dilute. If you add a little water, they're more concentrated. When you drain your mash tun, you're leaving behind just the water absorbed by the grain. If the sugars are highly concentrated (i.e. you added a small amount of sparge water), then you're leaving behind more sugar than if the sugars are dilute.

But this doesn't mean that you should add huge quantities of sparge water. The increase in extract is countered by a number of factors. For one, you'll have to boil longer to evaporate all that water you added. Also, you'll have to heat all that extra sparge water. The gains in efficiency are countered by using more energy and spending more time.

I calculate my sparge water volume from the boil time and evaporation rate. For most beers I boil for 90 minutes. With my setup, this translates to an evaporation loss of around 1.5 gallons. So to get 5.5 gallons into the fermenter I aim for a pre-boil volume of around 7 gallons. Your mileage will, of course, vary.
Also, over-sparging can extract extra, undesirable tannins from the husks.
32,042,240
I'm new to JavaScript and I'm learning about recursive functions. Specifically, I'm working on Euclid's algorithm. It all makes sense to me except for the base case on line 2 "if ( ! b) { return a; }. I understand that at some point a % b will be equal to NaN and this is when the recursive call should stop. But what does "if ( ! b)" mean in it's simplest form? I can't wrap my head around this. I appreciate the feedback in advance! ``` // Euclid's Algorithm var gcd = function(a, b) { if (!b) { return a; } return gcd(b, a % b); }; console.log(gcd(462, 910)); ```
2015/08/17
[ "https://Stackoverflow.com/questions/32042240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5233635/" ]
This is the rationale behind Euclid's algorithm:

```
gcd(a, b) = gcd(b, a%b)
```

But there is an exception: when `b` is 0. What is the result in that case? You can imagine that 0 actually has infinitely many divisors: you can divide 0 by anything and the remainder will always be 0. From this, we can derive that

```
gcd(a, 0) = a
```

And that's the stop case of the algorithm. `if (!b)` is actually checking whether `b === 0`, and returning `a` in that case.
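To watch the stop case fire, the recursion can be instrumented with a trace. This is my own variant of the function from the question with a log line added, not part of the original answer:

```javascript
// Euclid's algorithm with a trace of each recursive call,
// so you can watch the arguments shrink until b hits 0.
function gcd(a, b) {
  console.log("gcd(" + a + ", " + b + ")");
  if (!b) { return a; } // b === 0: a is the answer
  return gcd(b, a % b);
}

// Trace for the question's example:
// gcd(462, 910) -> gcd(910, 462) -> gcd(462, 448)
//              -> gcd(448, 14)  -> gcd(14, 0) -> 14
console.log(gcd(462, 910)); // 14
```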
Put very simply, in this case, `!b` is the same as `b == 0`. Zero is a falsy value, so `!0` will be true. In other words, the function returns `a` when `b` is zero; otherwise it carries on through more recursive levels. Your contention that `a % b` will become `NaN` is actually false - the fact that `b` becomes zero, and that you *detect* that, means you never actually *divide* anything by zero to get `NaN`.
32,042,240
I'm new to JavaScript and I'm learning about recursive functions. Specifically, I'm working on Euclid's algorithm. It all makes sense to me except for the base case on line 2 "if ( ! b) { return a; }. I understand that at some point a % b will be equal to NaN and this is when the recursive call should stop. But what does "if ( ! b)" mean in it's simplest form? I can't wrap my head around this. I appreciate the feedback in advance! ``` // Euclid's Algorithm var gcd = function(a, b) { if (!b) { return a; } return gcd(b, a % b); }; console.log(gcd(462, 910)); ```
2015/08/17
[ "https://Stackoverflow.com/questions/32042240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5233635/" ]
This is the rationale behind Euclid's algorithm:

```
gcd(a, b) = gcd(b, a%b)
```

But there is an exception: when `b` is 0. What is the result in that case? You can imagine that 0 actually has infinitely many divisors: you can divide 0 by anything and the remainder will always be 0. From this, we can derive that

```
gcd(a, 0) = a
```

And that's the stop case of the algorithm. `if (!b)` is actually checking whether `b === 0`, and returning `a` in that case.
```
//I'm glad that I can help others, here is my version:

//Euclid's algorithm:
//1) Find the remainder of the division of m by n
//2) If the remainder is zero, n is a solution
//3) Reduce (if r != zero)

const euclid = function(num1, num2){
    //compute the remainder:
    var remainder = num1 % num2;
    if(remainder == 0){
        return num2;
    }else{
        //step 3:
        num1 = num2;
        num2 = remainder;
        return euclid(num1, num2);
    }
}

//test the result:
console.log(euclid(80, 30));
```
32,042,240
I'm new to JavaScript and I'm learning about recursive functions. Specifically, I'm working on Euclid's algorithm. It all makes sense to me except for the base case on line 2 "if ( ! b) { return a; }. I understand that at some point a % b will be equal to NaN and this is when the recursive call should stop. But what does "if ( ! b)" mean in it's simplest form? I can't wrap my head around this. I appreciate the feedback in advance! ``` // Euclid's Algorithm var gcd = function(a, b) { if (!b) { return a; } return gcd(b, a % b); }; console.log(gcd(462, 910)); ```
2015/08/17
[ "https://Stackoverflow.com/questions/32042240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5233635/" ]
`(!b)` evaluates the variable `b` as a boolean. In JavaScript, `NaN` is falsy, which means it isn't literally `false`, but it evaluates to `false` in a boolean context. As others have mentioned, the variable `b` will equal 0 and not `NaN`. I was just explaining how `NaN` evaluates for the sake of the original question.
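A quick way to see this coercion in action is to pass the values through `Boolean` - a tiny standalone sketch, not tied to any particular library:

```javascript
// Both 0 and NaN coerce to false in a boolean context,
// so !0 and !NaN are both true.
console.log(Boolean(0));   // false
console.log(Boolean(NaN)); // false
console.log(!0);           // true
console.log(!NaN);         // true
// In the gcd from the question, b reaches 0 (never NaN),
// so it is 0 that makes !b true and stops the recursion.
```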
Put very simply, in this case, `!b` is the same as `b == 0`. Zero is a falsy value, so `!0` will be true. In other words, the function returns `a` when `b` is zero; otherwise it carries on through more recursive levels. Your contention that `a % b` will become `NaN` is actually false - the fact that `b` becomes zero, and that you *detect* that, means you never actually *divide* anything by zero to get `NaN`.
32,042,240
I'm new to JavaScript and I'm learning about recursive functions. Specifically, I'm working on Euclid's algorithm. It all makes sense to me except for the base case on line 2 "if ( ! b) { return a; }. I understand that at some point a % b will be equal to NaN and this is when the recursive call should stop. But what does "if ( ! b)" mean in it's simplest form? I can't wrap my head around this. I appreciate the feedback in advance! ``` // Euclid's Algorithm var gcd = function(a, b) { if (!b) { return a; } return gcd(b, a % b); }; console.log(gcd(462, 910)); ```
2015/08/17
[ "https://Stackoverflow.com/questions/32042240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5233635/" ]
The base case for the Euclidean Algorithm happens when one of the parameters is equal to zero, which means the greatest common divisor of both numbers is the non-zero number. You can refer to this link to see some examples: <https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/the-euclidean-algorithm> This is how I implemented the Euclidean Algorithm:

```
function gcd(a, b) {
  // base cases
  if(a === 0) { return b;}
  if(b === 0) { return a;}

  // decrease and conquer - recursion
  return gcd(b, a % b);
}

console.log(gcd(9, 6));
console.log(gcd(6, 9));
```
Put very simply, in this case, `!b` is the same as `b == 0`. Zero is a falsy value, so `!0` will be true. In other words, the function returns `a` when `b` is zero; otherwise it carries on through more recursive levels. Your contention that `a % b` will become `NaN` is actually false - the fact that `b` becomes zero, and that you *detect* that, means you never actually *divide* anything by zero to get `NaN`.
32,042,240
I'm new to JavaScript and I'm learning about recursive functions. Specifically, I'm working on Euclid's algorithm. It all makes sense to me except for the base case on line 2 "if ( ! b) { return a; }. I understand that at some point a % b will be equal to NaN and this is when the recursive call should stop. But what does "if ( ! b)" mean in it's simplest form? I can't wrap my head around this. I appreciate the feedback in advance! ``` // Euclid's Algorithm var gcd = function(a, b) { if (!b) { return a; } return gcd(b, a % b); }; console.log(gcd(462, 910)); ```
2015/08/17
[ "https://Stackoverflow.com/questions/32042240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5233635/" ]
`(!b)` evaluates the variable `b` as a boolean. In JavaScript, `NaN` is falsy, which means it isn't literally `false`, but it evaluates to `false` in a boolean context. As others have mentioned, the variable `b` will equal 0 and not `NaN`. I was just explaining how `NaN` evaluates for the sake of the original question.
```
//I'm glad that I can help others, here is my version:

//Euclid's algorithm:
//1) Find the remainder of the division of m by n
//2) If the remainder is zero, n is a solution
//3) Reduce (if r != zero)

const euclid = function(num1, num2){
    //compute the remainder:
    var remainder = num1 % num2;
    if(remainder == 0){
        return num2;
    }else{
        //step 3:
        num1 = num2;
        num2 = remainder;
        return euclid(num1, num2);
    }
}

//test the result:
console.log(euclid(80, 30));
```
32,042,240
I'm new to JavaScript and I'm learning about recursive functions. Specifically, I'm working on Euclid's algorithm. It all makes sense to me except for the base case on line 2 "if ( ! b) { return a; }. I understand that at some point a % b will be equal to NaN and this is when the recursive call should stop. But what does "if ( ! b)" mean in it's simplest form? I can't wrap my head around this. I appreciate the feedback in advance! ``` // Euclid's Algorithm var gcd = function(a, b) { if (!b) { return a; } return gcd(b, a % b); }; console.log(gcd(462, 910)); ```
2015/08/17
[ "https://Stackoverflow.com/questions/32042240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5233635/" ]
The base case for the Euclidean Algorithm happens when one of the parameters is equal to zero, which means the greatest common divisor of both numbers is the non-zero number. You can refer to this link to see some examples: <https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/the-euclidean-algorithm> This is how I implemented the Euclidean Algorithm:

```
function gcd(a, b) {
  // base cases
  if(a === 0) { return b;}
  if(b === 0) { return a;}

  // decrease and conquer - recursion
  return gcd(b, a % b);
}

console.log(gcd(9, 6));
console.log(gcd(6, 9));
```
```
//I'm glad that I can help others, here is my version:

//Euclid's algorithm:
//1) Find the remainder of the division of m by n
//2) If the remainder is zero, n is a solution
//3) Reduce (if r != zero)

const euclid = function(num1, num2){
    //compute the remainder:
    var remainder = num1 % num2;
    if(remainder == 0){
        return num2;
    }else{
        //step 3:
        num1 = num2;
        num2 = remainder;
        return euclid(num1, num2);
    }
}

//test the result:
console.log(euclid(80, 30));
```
27,612,119
I have a UIImageView that I want to scale its outter edges towards the center (the top and bottom edges to be precise). The best I am able to get so far is just one edge collapsing into the other (which doesn't move). I want both edges to collapse towards the middle. I have tried setting the the anchor point to 0.5,0.5 but that doesnt address the issue (nothing changes). Is there a way to setup an ImageView such that it will scale inwards towards the middle so the outter edges collapse towards the center? Here was my best attempt: ``` override public func viewWillAppear(animated: Bool) { super.viewWillAppear(animated) var uimg = UIImage(named: "img.PNG")! var img:UIImageView = UIImageView(image: uimg) img.layer.anchorPoint = CGPointMake(0.5, 0.5) view.addSubview(img) UIView.animateWithDuration(5, animations: { () -> Void in img.frame = CGRectMake(img.frame.origin.x,img.frame.origin.y,img.frame.width, 0) }) { (finished) -> Void in // } } ```
2014/12/22
[ "https://Stackoverflow.com/questions/27612119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3324388/" ]
You can use scaling via UIView's `transform` property. It seems that scaling does not allow the value to be exactly 0 - the view jumps to the final value without any animation. So you can make the y scaling value negligible, like 0.001:

```
imageView.transform = CGAffineTransformMakeScale(1, 0.001)
```

You could also use Core Animation to scale, using CABasicAnimation; this is simpler, like this:

```
let animation = CABasicAnimation(keyPath: "transform.scale.y")
animation.toValue = 0
animation.duration = 2.0
animation.fillMode = kCAFillModeForwards
animation.removedOnCompletion = false
imageView.layer.addAnimation(animation, forKey: "anim")
```

Or you can also use the approach you have been using, simply setting the frame:

```
var finalFrame = imageView.frame
finalFrame.size.height = 0
finalFrame.origin.y = imageView.frame.origin.y + (imageView.frame.size.height / 2)

UIView.animateWithDuration(2.0, animations: { () -> Void in
    self.imageView.frame = finalFrame
})
```
Your new frame has the origin in the same place. You need to move the origin down, at the same time as you reduce the height: ``` img.frame = CGRectMake(img.frame.origin.x, img.frame.origin.y + (img.frame.size.height/2), img.frame.size.width, 0); ```
10,920,216
I was just wondering if it's possible to alter the color of the Less mixin gradient by applying something like lighten or darken to the CSS code? Here is the Less Mixin: ``` .css-gradient(@from: #20416A, @to: #3D69A5) { background-color: @to; background-image: -webkit-gradient(linear, left top, left bottom, from(@from), to(@to)); background-image: -webkit-linear-gradient(top, @from, @to); background-image: -moz-linear-gradient(top, @from, @to); background-image: -o-linear-gradient(top, @from, @to); background-image: -ms-linear-gradient(top, @from, @to); background-image: linear-gradient(top, @from, @to); } ``` And in Less file I would like to do something like this: ``` #div { width:100px; height:100px; .css-gradient (darken, 10%); } ``` Don't know if this is possible or if there is another way I should do this.
2012/06/06
[ "https://Stackoverflow.com/questions/10920216", "https://Stackoverflow.com", "https://Stackoverflow.com/users/965879/" ]
I'd do this like so: ``` .css-gradient(darken(#20416A,10%),darken(#3D69A5,10%)) ``` Of course you could also do: ``` .css-gradient(@from: #20416A, @to: #3D69A5) { background-color: @to; background-image: -webkit-gradient(linear, left top, left bottom, from(@from), to(@to)); background-image: -webkit-linear-gradient(top, @from, @to); background-image: -moz-linear-gradient(top, @from, @to); background-image: -o-linear-gradient(top, @from, @to); background-image: -ms-linear-gradient(top, @from, @to); background-image: linear-gradient(top, @from, @to); } .css-gradient(darken,@amount: 10%, @from: #20416A, @to: #3D69A5){ .css-gradient(darken(@from,@amount),darken(@to,@amount)); } ``` And then just call it: ``` .css-gradient(darken,10%); ``` or: ``` .css-gradient(#20416A, #3D69A5); ``` or: ``` .css-gradient(darken,5%,#00ff00,#ff0000); ```
Less mixin:

```
.gradient(@dir: 0deg; @colors; @prefixes: webkit, moz, ms, o; @index: length(@prefixes)) when (@index > 0) {
    .gradient(@dir; @colors; @prefixes; (@index - 1));

    @prefix : extract(@prefixes, @index);
    @dir-old: 90 - (@dir);

    background-image: ~"-@{prefix}-linear-gradient(@{dir-old}, @{colors})";
    & when ( @index = length(@prefixes) ) {
        background-image: ~"linear-gradient(@{dir}, @{colors})";
    }
}
```

Usage:

```
.gradient(90deg, #FFAA64, #FFCD73);
```
4,673,250
Can I offer the authentication, authorization, etc created using "ASP.NET MVC Open Id website" extension.. as a REST service in ASP.NET MVC? How can I create this service(maybe using WCF)? (Please if you can, offer me some examples please).
2011/01/12
[ "https://Stackoverflow.com/questions/4673250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/409468/" ]
Yes, you can. OpenID is *not* about authorizing web services at all. That's what OAuth does. But DotNetOpenAuth does both OpenID and OAuth, so your users can authenticate with OpenID, then authorize RESTful clients via OAuth, and the user story is probably exactly what you're looking for. There is a [project template that shows you exactly how to do it](http://visualstudiogallery.msdn.microsoft.com/81153747-70d7-477b-b85a-0374e7edabef) (does it for you, actually) available on the Visual Studio Gallery.
Have a look at the latest nerddinner tutorial on codeplex. It has OpenId integration built into the MVC example application: <http://nerddinner.codeplex.com/>
4,673,250
Can I offer the authentication, authorization, etc created using "ASP.NET MVC Open Id website" extension.. as a REST service in ASP.NET MVC? How can I create this service(maybe using WCF)? (Please if you can, offer me some examples please).
2011/01/12
[ "https://Stackoverflow.com/questions/4673250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/409468/" ]
You can easily create REST services using just MVC. WCF is not necessary. There are tons of posts on restful architecture in ASP.NET MVC. There is code available with a base API for Restful services using ASP.NET MVC available here: <http://code.msdn.microsoft.com/MvcWebAPI> . The author of this library has an excellent article explaining how to create such a service that can serve both JSON and XML. It can be read at: <http://omaralzabir.com/create_rest_api_using_asp_net_mvc_that_speaks_both_json_and_plain_xml/>

There are plenty of tools that can help you implement the OpenId service, such as <http://www.dotnetopenauth.net/> or the solution outlined at <http://www.west-wind.com/weblog/posts/899303.aspx>. You said you've already created an OpenId logging system. Basically, take the logging system, create an interface like:

```
public interface IOpenIdService{
    bool Login(string login, string password);
}
```

and execute it in your Controller Action method. If it is successful, return a JSON or XML success message. If it fails, return a JSON or XML failure message.

\*I have also found this article helpful for REST with MVC: <http://blog.wekeroad.com/2007/12/06/aspnet-mvc-using-restful-architecture/>. Also, if you want to extend JSON functionality, look into JSON.NET.
Have a look at the latest nerddinner tutorial on codeplex. It has OpenId integration built into the MVC example application: <http://nerddinner.codeplex.com/>
4,673,250
Can I offer the authentication, authorization, etc created using "ASP.NET MVC Open Id website" extension.. as a REST service in ASP.NET MVC? How can I create this service(maybe using WCF)? (Please if you can, offer me some examples please).
2011/01/12
[ "https://Stackoverflow.com/questions/4673250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/409468/" ]
Yes, you can. OpenID is *not* about authorizing web services at all. That's what OAuth does. But DotNetOpenAuth does both OpenID and OAuth, so your users can authenticate with OpenID, then authorize RESTful clients via OAuth, and the user story is probably exactly what you're looking for. There is a [project template that shows you exactly how to do it](http://visualstudiogallery.msdn.microsoft.com/81153747-70d7-477b-b85a-0374e7edabef) (does it for you, actually) available on the Visual Studio Gallery.
You can easily create REST services using just MVC. WCF is not necessary. There are tons of posts on restful architecture in ASP.NET MVC. There is code available with a base API for Restful services using ASP.NET MVC available here: <http://code.msdn.microsoft.com/MvcWebAPI> . The author of this library has an excellent article explaining how to create such a service that can serve both JSON and XML. It can be read at: <http://omaralzabir.com/create_rest_api_using_asp_net_mvc_that_speaks_both_json_and_plain_xml/>

There are plenty of tools that can help you implement the OpenId service, such as <http://www.dotnetopenauth.net/> or the solution outlined at <http://www.west-wind.com/weblog/posts/899303.aspx>. You said you've already created an OpenId logging system. Basically, take the logging system, create an interface like:

```
public interface IOpenIdService{
    bool Login(string login, string password);
}
```

and execute it in your Controller Action method. If it is successful, return a JSON or XML success message. If it fails, return a JSON or XML failure message.

\*I have also found this article helpful for REST with MVC: <http://blog.wekeroad.com/2007/12/06/aspnet-mvc-using-restful-architecture/>. Also, if you want to extend JSON functionality, look into JSON.NET.
19,146,345
I'm looking for some kind of delegate method that will be triggered when a subview is added to a view. I need this because the compass view on the iOS7 map is added to the mapview when it starts rotating (it is not there previously and unhidden). However when I try to manipulate the compass view in `regionWillChangeAnimated` it doesn't trigger until the user lets go of the map. Basically I need anything along the lines of: `subviewWasAdded`, `subviewsDidChange`, or anything like that. **EDIT:** After taking Daij-Djan's suggestion, I made MPOMapView.m and MPOMapView. ``` #import "MPOMapView.h" @implementation MPOMapView - (id)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code } return self; } - (void)addSubview:(UIView *)view { [super addSubview:view]; [self moveCompass:&view]; } - (void)moveCompass:(UIView **)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return; } if(![*view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return; } //Gets here NSLog(@"Was the compass"); float width = (*view).frame.size.width; float height = (*view).frame.size.height; (*view).frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); } @end ``` The method is called, but the frame won't change. I've also tried: ``` - (void)addSubview:(UIView *)view { [super addSubview:view]; [self moveCompass:view]; } - (void)moveCompass:(UIView *)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return; } if(![view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return; } //Gets here NSLog(@"Was the compass"); float width = view.frame.size.width; float height = view.frame.size.height; view.frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); } ``` but this doesn't work either. And lastly.... 
``` - (void)addSubview:(UIView *)view { [super addSubview:[self movedCompass:view]]; } - (UIView *)movedCompass:(UIView *)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return view; } if(![view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return view; } //Gets here NSLog(@"Was the compass"); float width = view.frame.size.width; float height = view.frame.size.height; view.frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); return view; } ``` does not work either
2013/10/02
[ "https://Stackoverflow.com/questions/19146345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2565551/" ]
There is no such delegate. BUT what you can do is subclass MKMapView and implement insertSubview and addSubview and build your own solution. BUT see this better way:

```
- (void)layoutSubviews {
    [super layoutSubviews];

    if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) {
        return;
    }

    for(id view in self.subviews) {
        if(![view isKindOfClass:NSClassFromString(@"MKCompassView")]) {
            continue;
        }
        [self moveCompass:view];
    }
}

- (void)moveCompass:(UIView *)view {
    //Gets here
    NSLog(@"Was the compass");

    float width = view.frame.size.width;
    float height = view.frame.size.height;

    //view.frame = CGRectMake(40, self.frame.size.height - 50, width, height);
    view.frame = CGRectMake(40, self.bounds.size.height - 50, width, height);
}
```
I don't know if it will work, buy you could try using Key Value Observing with the mapview's `camera`, which has a `heading` property.
19,146,345
I'm looking for some kind of delegate method that will be triggered when a subview is added to a view. I need this because the compass view on the iOS7 map is added to the mapview when it starts rotating (it is not there previously and unhidden). However when I try to manipulate the compass view in `regionWillChangeAnimated` it doesn't trigger until the user lets go of the map. Basically I need anything along the lines of: `subviewWasAdded`, `subviewsDidChange`, or anything like that. **EDIT:** After taking Daij-Djan's suggestion, I made MPOMapView.m and MPOMapView. ``` #import "MPOMapView.h" @implementation MPOMapView - (id)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code } return self; } - (void)addSubview:(UIView *)view { [super addSubview:view]; [self moveCompass:&view]; } - (void)moveCompass:(UIView **)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return; } if(![*view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return; } //Gets here NSLog(@"Was the compass"); float width = (*view).frame.size.width; float height = (*view).frame.size.height; (*view).frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); } @end ``` The method is called, but the frame won't change. I've also tried: ``` - (void)addSubview:(UIView *)view { [super addSubview:view]; [self moveCompass:view]; } - (void)moveCompass:(UIView *)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return; } if(![view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return; } //Gets here NSLog(@"Was the compass"); float width = view.frame.size.width; float height = view.frame.size.height; view.frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); } ``` but this doesn't work either. And lastly.... 
``` - (void)addSubview:(UIView *)view { [super addSubview:[self movedCompass:view]]; } - (UIView *)movedCompass:(UIView *)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return view; } if(![view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return view; } //Gets here NSLog(@"Was the compass"); float width = view.frame.size.width; float height = view.frame.size.height; view.frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); return view; } ``` does not work either
2013/10/02
[ "https://Stackoverflow.com/questions/19146345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2565551/" ]
there is no such delegate. BUT what you can do is subclass MKMapView and implement insertSubview and addSubview and build your own solution. BUT see the better way below: ``` - (void)layoutSubviews { [super layoutSubviews]; if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return; } for(id view in self.subviews) { if(![view isKindOfClass:NSClassFromString(@"MKCompassView")]) { continue; } [self moveCompass:view]; } } - (void)moveCompass:(UIView *)view { //Gets here NSLog(@"Was the compass"); float width = view.frame.size.width; float height = view.frame.size.height; //view.frame = CGRectMake(40, self.frame.size.height - 50, width, height); view.frame = CGRectMake(40, self.bounds.size.height - 50, width, height); } ```
Daij-Djan has posted the correct answer, credit to him! This is just an improvement on what I feel is too much code: ``` - (void)layoutSubviews { [super layoutSubviews]; for(id view in self.subviews) { if([view isKindOfClass:NSClassFromString(@"MKCompassView")]) [self moveCompass:view]; } } - (void)moveCompass:(UIView *)view { //change compass frame as shown by Daij-Djan } ```
19,146,345
I'm looking for some kind of delegate method that will be triggered when a subview is added to a view. I need this because the compass view on the iOS7 map is added to the mapview when it starts rotating (it is not there previously and then unhidden). However when I try to manipulate the compass view in `regionWillChangeAnimated` it doesn't trigger until the user lets go of the map. Basically I need anything along the lines of: `subviewWasAdded`, `subviewsDidChange`, or anything like that. **EDIT:** After taking Daij-Djan's suggestion, I made MPOMapView.m and MPOMapView.h. ``` #import "MPOMapView.h" @implementation MPOMapView - (id)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code } return self; } - (void)addSubview:(UIView *)view { [super addSubview:view]; [self moveCompass:&view]; } - (void)moveCompass:(UIView **)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return; } if(![*view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return; } //Gets here NSLog(@"Was the compass"); float width = (*view).frame.size.width; float height = (*view).frame.size.height; (*view).frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); } @end ``` The method is called, but the frame won't change. I've also tried: ``` - (void)addSubview:(UIView *)view { [super addSubview:view]; [self moveCompass:view]; } - (void)moveCompass:(UIView *)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return; } if(![view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return; } //Gets here NSLog(@"Was the compass"); float width = view.frame.size.width; float height = view.frame.size.height; view.frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); } ``` but this doesn't work either. And lastly....
``` - (void)addSubview:(UIView *)view { [super addSubview:[self movedCompass:view]]; } - (UIView *)movedCompass:(UIView *)view { if([[[UIDevice currentDevice] systemVersion] floatValue] < 7) { return view; } if(![view isKindOfClass:NSClassFromString(@"MKCompassView")]) { return view; } //Gets here NSLog(@"Was the compass"); float width = view.frame.size.width; float height = view.frame.size.height; view.frame = CGRectMake(40, self.frame.size.height - 50, width, height); //Frame doesn't change NSLog(@"Frame should change"); return view; } ``` does not work either
2013/10/02
[ "https://Stackoverflow.com/questions/19146345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2565551/" ]
Daij-Djan has posted the correct answer, credit to him! This is just an improvement on what I feel is too much code: ``` - (void)layoutSubviews { [super layoutSubviews]; for(id view in self.subviews) { if([view isKindOfClass:NSClassFromString(@"MKCompassView")]) [self moveCompass:view]; } } - (void)moveCompass:(UIView *)view { //change compass frame as shown by Daij-Djan } ```
I don't know if it will work, but you could try using Key Value Observing with the mapview's `camera`, which has a `heading` property.
31,538,765
I am using Atlassian Javascript framework to create tabs in my page. Every time I refresh the page it goes back to the default tab and leaves the selected tab. I added this piece of code after searching for the same problem in stack overflow but it is not working. ``` <script> $('#prod-discovery a').click(function (e) { e.preventDefault(); $(this).tab('show'); }); // store the currently selected tab in the hash value $("ul.tabs-menu > li > a").on("shown.bs.tab", function (e) { var id = $(e.target).attr("href").substr(1); window.location.hash = id; }); $('.active').removeClass('active'); // on load of the page: switch to the currently selected tab var hash = window.location.hash; $('#prod-discovery a[href="' + hash + '"]').tab('show'); </script> ``` This is the code I have for creating tabs: ``` <ul class="tabs-menu" role="tablist" id="prod-discovery"> <li class="menu-item active-tab" role="presentation"> <a href="#one" id="aui-uid-0-1430814803876" role="tab" aria-selected="true"><strong><h2>Tab One</h2></strong></a> </li> <li class="menu-item" role="presentation"> <a href="#two" id="aui-uid-1-1430814803876" role="tab" aria-selected="false"><strong><h2>Tab Two</h2></strong></a> </li> <li class="menu-item" role="presentation"> <a href="#three" id="aui-uid-1-1430814803876" role="tab" aria-selected="false"><strong><h2>Tab three</h2></strong></a> </li> <li class="menu-item" role="presentation"> <a href="#four" id="aui-uid-1-1430814803876" role="tab" aria-selected="false"><strong><h2>Tab Four</h2></strong></a> </li> <li class="menu-item" role="presentation"> <a href="#five" id="aui-uid-1-1430814803876" role="tab" aria-selected="false"><strong><h2>Tab5</h2></strong></a> </li> </ul> ``` For reference you can also have a look into <http://jsfiddle.net/alok15ee/5wpmsqe5/1/>
2015/07/21
[ "https://Stackoverflow.com/questions/31538765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2900150/" ]
The PivotTable will always sort alphabetically unless you give it something else to sort by. If you add a new field to your data which contains the sort order, then you can tell PowerPivot to use that as the sort order. First, select your Room Category column, click the *Sort By Column* button, then select the column that contains your sort order: ![enter image description here](https://i.stack.imgur.com/nuvfU.png) And the result: ![enter image description here](https://i.stack.imgur.com/UH8fD.png)
I think I understand the issue (I've had the same problem with Dates as Text). I think if you create a "Named Set" based on the row Items, it will work.
14,182,286
I've tried a few tab scripts but I think I'm looking for something more specific. Basically I want something that shows a div and hides all others just like tabs. Something very similar to this <http://jsfiddle.net/6UcDR/2/> But I want to be able to use something other than buttons. I will have a set of images lined up horizontally like a gallery. When one image is clicked some content in another div tag is displayed above it right in the middle. When another image is clicked, the one that was being displayed is hidden while the other is shown. Almost exactly like how it's being done at that jsfiddle demo. However, I want to be able to use some type of div IDs / classes to connect the images with the content ... so `<div id="blahblah"> Related Content Here </div><div id="blahblah"> Image Here </div>` or something like this `<div id="content123" class="tab_content" style="display: block;"> content here <li class="active"><a href="#content123" rel="episodelist">Episode List</a></li>` This would make it really easy for me to put into a php loop where I can pull out multiple images and its content by using image IDs and so on. Again, I want the content that's being displayed above the link or image that I'm going to have. So it would look something like this ... ``` Dynamic Content Displayed Image1 Image2 Image3 Image4 Image5 ``` This dynamic content would switch depending on the image that is clicked. If someone could help, it would be great!
2013/01/06
[ "https://Stackoverflow.com/questions/14182286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1952811/" ]
When learning jQuery it is easy to focus on needing an ID for everything since ID selectors are very specific. Using IDs can get complicated to code when faced with multiple scenarios for each group of IDs. However there are many ways to match different elements using other methods that can greatly simplify code without needing to use any ID. For example, adding a wrapper around your buttons and around the content items in your demo, the code can be written using the `index()` method: HTML: ``` <div id="content"> <div id=a style="display:none">1</div> <div id=b style="display:none">2</div> <div id=c style="display:none">3</div> <div id=d style="display:none">4</div> </div> <div id="button_wrap"> <input type=button value=1 id=w/> <input type=button value=2 id=x/> <input type=button value=3 id=y/> <input type=button value=4 id=z/> </div> ``` JS: ``` $(function(){ $('#button_wrap input').click(function(){ /* hide all content, show item that matches index of this button*/ $('#content').children().hide().eq( $(this).index() ).show() }); }); ``` Adding classes to similar items is also a big helper for selectors DEMO: <http://jsfiddle.net/6UcDR/38/> API reference: <http://api.jquery.com/index/> <http://api.jquery.com/eq/> Spend some time looking through the various traversal methods and examples in the API. Understanding what the various traverses do will help you generate more dynamic layouts and user interfaces.
``` $(".contet").each( function() { $(this).hide(); }); $(".tab").on("click", function() { var s = $(this).attr("id"); var g = $("#"+s+".contet").html(); $(".content").html(g); }); ``` [Working Fiddle With Pure jQuery](http://jsfiddle.net/AR6nr/1/)
14,182,286
I've tried a few tab scripts but I think I'm looking for something more specific. Basically I want something that shows a div and hides all others just like tabs. Something very similar to this <http://jsfiddle.net/6UcDR/2/> But I want to be able to use something other than buttons. I will have a set of images lined up horizontally like a gallery. When one image is clicked some content in another div tag is displayed above it right in the middle. When another image is clicked, the one that was being displayed is hidden while the other is shown. Almost exactly like how it's being done at that jsfiddle demo. However, I want to be able to use some type of div IDs / classes to connect the images with the content ... so `<div id="blahblah"> Related Content Here </div><div id="blahblah"> Image Here </div>` or something like this `<div id="content123" class="tab_content" style="display: block;"> content here <li class="active"><a href="#content123" rel="episodelist">Episode List</a></li>` This would make it really easy for me to put into a php loop where I can pull out multiple images and its content by using image IDs and so on. Again, I want the content that's being displayed above the link or image that I'm going to have. So it would look something like this ... ``` Dynamic Content Displayed Image1 Image2 Image3 Image4 Image5 ``` This dynamic content would switch depending on the image that is clicked. If someone could help, it would be great!
2013/01/06
[ "https://Stackoverflow.com/questions/14182286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1952811/" ]
When learning jQuery it is easy to focus on needing an ID for everything since ID selectors are very specific. Using IDs can get complicated to code when faced with multiple scenarios for each group of IDs. However there are many ways to match different elements using other methods that can greatly simplify code without needing to use any ID. For example, adding a wrapper around your buttons and around the content items in your demo, the code can be written using the `index()` method: HTML: ``` <div id="content"> <div id=a style="display:none">1</div> <div id=b style="display:none">2</div> <div id=c style="display:none">3</div> <div id=d style="display:none">4</div> </div> <div id="button_wrap"> <input type=button value=1 id=w/> <input type=button value=2 id=x/> <input type=button value=3 id=y/> <input type=button value=4 id=z/> </div> ``` JS: ``` $(function(){ $('#button_wrap input').click(function(){ /* hide all content, show item that matches index of this button*/ $('#content').children().hide().eq( $(this).index() ).show() }); }); ``` Adding classes to similar items is also a big helper for selectors DEMO: <http://jsfiddle.net/6UcDR/38/> API reference: <http://api.jquery.com/index/> <http://api.jquery.com/eq/> Spend some time looking through the various traversal methods and examples in the API. Understanding what the various traverses do will help you generate more dynamic layouts and user interfaces.
``` <script src="../../jquery-1.8.0.js"></script> <script src="../../ui/jquery.ui.core.js"></script> <script src="../../ui/jquery.ui.widget.js"></script> <script src="../../ui/jquery.ui.tabs.js"></script> <link rel="stylesheet" href="../demos.css"> <script> $(function() { $( "#tabs" ).tabs(); }); </script> <div class="demo"> <div id="tabs"> <ul> <li><a href="#tabs-1">Nunc tincidunt</a></li> <li><a href="#tabs-2">Proin dolor</a></li> <li><a href="#tabs-3">Aenean lacinia</a></li> <li><a href="#tabs-4">Tab four</a></li> <li><a href="#tabs-5">Tab five</a></li> </ul> <div id="tabs-1"> <p>Image1</p> </div> <div id="tabs-2"> <p>Image2</p> </div> <div id="tabs-3"> <p>Image3</p> </div> <div id="tabs-4"> <p>Image4</p> </div> <div id="tabs-5"> <p>Image5</p> </div> </div> </div> ```
64,722,898
Here's my problem: I have this list I've generated containing a large number of links and I want to take this list and apply a function to it to scrape some data from all those links; however, when I run the program it only takes the data from the first link of that element, reprinting that info for the correct number of iterations. Here's all my code so far: ``` library(tidyverse) library(rvest) source_link<-"http://www.ufcstats.com/statistics/fighters?char=a&page=all" source_link_html<-read_html(source_link) #This scrapes all the links for the pages of all the fighters links_vector<-source_link_html%>% html_nodes("div ul li a")%>% html_attr("href")%>% #This seq selects the 26 needed links, i.e. from a-z .[1:26] #Modifies the pulled data so the links become usable and contain all the fighters instead of just some links_vector_modded<-str_c("http://www.ufcstats.com", links_vector,"&page=all") fighter_links<-sapply(links_vector_modded, function(links_vector_modded){ read_html(links_vector_modded[])%>% html_nodes("tr td a")%>% html_attr("href")%>% .[seq(1,length(.),3)]%>% na.omit(fighter_links) }) ###Next Portion: Using the above links to further harvest #Take all the links within an element of fighter_links and run it through the function career_data to scrape all the statistics from said pages. fighter_profiles_a<-map(fighter_links$`http://www.ufcstats.com/statistics/fighters?char=a&page=all`, function(career_data){ #Below is where I believe my problem lies read_html()%>% html_nodes("div ul li")%>% html_text() }) ``` The issue I'm having is in the last section of code,`read_html()`. I do not know how to apply each link in the element within the list to that function. Additionally, is there a way to call all of the elements of `fighter_links` instead of doing it one element at a time? Thank you for any advice and assistance!
2020/11/06
[ "https://Stackoverflow.com/questions/64722898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14358258/" ]
Use `mask` with `str.contains()` to perform the operation on rows with the specified condition, and then use the following operation: `.str.split(', ').str[0:2].agg(', '.join)`: ``` df['Col'] = df['Col'].mask(df['Col'].str.contains('County, Texas'), df['Col'].str.split(', ').str[0:2].agg(', '.join)) ``` Full Code: ``` import pandas as pd df = pd.DataFrame({'Col': {0: 'Jack Smith, Bank, Wilber, Lincoln County, Texas', 1: 'Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas', 2: 'Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas, Branch, Landing, Services', 3: 'Jack Smith, Union, Credit, Bank, Wilber, Branch, Landing, Services'}}) df['Col'] = df['Col'].mask(df['Col'].str.contains('County, Texas'), df['Col'].str.split(', ').str[0:2].agg(', '.join)) df Out[1]: Col 0 Jack Smith, Bank 1 Jack Smith, Union 2 Jack Smith, Union 3 Jack Smith, Union, Credit, Bank, Wilber, Branc... ``` --- Per the updated question, you can use `np.select`: ``` import numpy as np import pandas as pd df = pd.DataFrame({'Col': {0: 'Jack Smith, Bank, Wilber, Lincoln County, Texas', 1: 'Jack Smith, Bank, Credit, Bank, Wilber, Lincoln County, Texas', 2: 'Jack Smith, Bank, Union, Credit, Bank, Wilber, Lincoln County, Texas, Branch, Landing, Services', 3: 'Jack Smith, Bank, Credit, Bank, Wilber, Branch, Landing, Services'}}) df['Col'] = np.select([df['Col'].str.contains('County, Texas') & ~df['Col'].str.contains('Union'), df['Col'].str.contains('County, Texas') & df['Col'].str.contains('Union')], [df['Col'].str.split(', ').str[0:2].agg(', '.join), df['Col'].str.split(', ').str[0:3].agg(', '.join)], df['Col']) df Out[2]: Col 0 Jack Smith, Bank 1 Jack Smith, Bank 2 Jack Smith, Bank, Union 3 Jack Smith, Bank, Credit, Bank, Wilber, Branch... ```
You can simply use a combination of `map` with a `lambda`, `split` and `join`: ``` df['Example'] = df['Example'].map(lambda x: ','.join(x.split(',')[0:2]) if 'County, Texas' in x else x) ``` In this case: ``` import pandas as pd df = pd.DataFrame({'Example':["Jack Smith, Bank, Wilber, Lincoln County, Texas","Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas", "Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas, Branch, Landing, Services", "Jack Smith, Union, Credit, Bank, Wilber, Branch, Landing, Services"]}) df['Example'] = df['Example'].map(lambda x: ','.join(x.split(',')[0:2]) if 'County, Texas' in x else x) ``` We get the following output: ``` Example 0 Jack Smith, Bank 1 Jack Smith, Union 2 Jack Smith, Union 3 Jack Smith, Union, Credit, Bank, Wilber, Branc... ```
64,722,898
Here's my problem: I have this list I've generated containing a large number of links and I want to take this list and apply a function to it to scrape some data from all those links; however, when I run the program it only takes the data from the first link of that element, reprinting that info for the correct number of iterations. Here's all my code so far: ``` library(tidyverse) library(rvest) source_link<-"http://www.ufcstats.com/statistics/fighters?char=a&page=all" source_link_html<-read_html(source_link) #This scrapes all the links for the pages of all the fighters links_vector<-source_link_html%>% html_nodes("div ul li a")%>% html_attr("href")%>% #This seq selects the 26 needed links, i.e. from a-z .[1:26] #Modifies the pulled data so the links become usable and contain all the fighters instead of just some links_vector_modded<-str_c("http://www.ufcstats.com", links_vector,"&page=all") fighter_links<-sapply(links_vector_modded, function(links_vector_modded){ read_html(links_vector_modded[])%>% html_nodes("tr td a")%>% html_attr("href")%>% .[seq(1,length(.),3)]%>% na.omit(fighter_links) }) ###Next Portion: Using the above links to further harvest #Take all the links within an element of fighter_links and run it through the function career_data to scrape all the statistics from said pages. fighter_profiles_a<-map(fighter_links$`http://www.ufcstats.com/statistics/fighters?char=a&page=all`, function(career_data){ #Below is where I believe my problem lies read_html()%>% html_nodes("div ul li")%>% html_text() }) ``` The issue I'm having is in the last section of code,`read_html()`. I do not know how to apply each link in the element within the list to that function. Additionally, is there a way to call all of the elements of `fighter_links` instead of doing it one element at a time? Thank you for any advice and assistance!
2020/11/06
[ "https://Stackoverflow.com/questions/64722898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14358258/" ]
Use `mask` with `str.contains()` to perform the operation on rows with the specified condition, and then use the following operation: `.str.split(', ').str[0:2].agg(', '.join)`: ``` df['Col'] = df['Col'].mask(df['Col'].str.contains('County, Texas'), df['Col'].str.split(', ').str[0:2].agg(', '.join)) ``` Full Code: ``` import pandas as pd df = pd.DataFrame({'Col': {0: 'Jack Smith, Bank, Wilber, Lincoln County, Texas', 1: 'Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas', 2: 'Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas, Branch, Landing, Services', 3: 'Jack Smith, Union, Credit, Bank, Wilber, Branch, Landing, Services'}}) df['Col'] = df['Col'].mask(df['Col'].str.contains('County, Texas'), df['Col'].str.split(', ').str[0:2].agg(', '.join)) df Out[1]: Col 0 Jack Smith, Bank 1 Jack Smith, Union 2 Jack Smith, Union 3 Jack Smith, Union, Credit, Bank, Wilber, Branc... ``` --- Per the updated question, you can use `np.select`: ``` import numpy as np import pandas as pd df = pd.DataFrame({'Col': {0: 'Jack Smith, Bank, Wilber, Lincoln County, Texas', 1: 'Jack Smith, Bank, Credit, Bank, Wilber, Lincoln County, Texas', 2: 'Jack Smith, Bank, Union, Credit, Bank, Wilber, Lincoln County, Texas, Branch, Landing, Services', 3: 'Jack Smith, Bank, Credit, Bank, Wilber, Branch, Landing, Services'}}) df['Col'] = np.select([df['Col'].str.contains('County, Texas') & ~df['Col'].str.contains('Union'), df['Col'].str.contains('County, Texas') & df['Col'].str.contains('Union')], [df['Col'].str.split(', ').str[0:2].agg(', '.join), df['Col'].str.split(', ').str[0:3].agg(', '.join)], df['Col']) df Out[2]: Col 0 Jack Smith, Bank 1 Jack Smith, Bank 2 Jack Smith, Bank, Union 3 Jack Smith, Bank, Credit, Bank, Wilber, Branch... ```
Data ``` import pandas as pd df = pd.DataFrame({'text':["Jack Smith, Bank, Wilber, Lincoln County, Texas","Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas", "Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas, Branch, Landing, Services", "Jack Smith, Union, Credit, Bank, Wilber, Branch, Landing, Services"]}) ``` Solution: use `.str.extract` ``` df['newtext']=df.text.str.extract(r'(^\w+\s\w+,\s\w+)') text newtext 0 Jack Smith, Bank, Wilber, Lincoln County, Texas Jack Smith, Bank 1 Jack Smith, Union, Credit, Bank, Wilber, Linco... Jack Smith, Union 2 Jack Smith, Union, Credit, Bank, Wilber, Linco... Jack Smith, Union 3 Jack Smith, Union, Credit, Bank, Wilber, Branc... Jack Smith, Union ```
64,722,898
Here's my problem: I have this list I've generated containing a large number of links and I want to take this list and apply a function to it to scrape some data from all those links; however, when I run the program it only takes the data from the first link of that element, reprinting that info for the correct number of iterations. Here's all my code so far: ``` library(tidyverse) library(rvest) source_link<-"http://www.ufcstats.com/statistics/fighters?char=a&page=all" source_link_html<-read_html(source_link) #This scrapes all the links for the pages of all the fighters links_vector<-source_link_html%>% html_nodes("div ul li a")%>% html_attr("href")%>% #This seq selects the 26 needed links, i.e. from a-z .[1:26] #Modifies the pulled data so the links become usable and contain all the fighters instead of just some links_vector_modded<-str_c("http://www.ufcstats.com", links_vector,"&page=all") fighter_links<-sapply(links_vector_modded, function(links_vector_modded){ read_html(links_vector_modded[])%>% html_nodes("tr td a")%>% html_attr("href")%>% .[seq(1,length(.),3)]%>% na.omit(fighter_links) }) ###Next Portion: Using the above links to further harvest #Take all the links within an element of fighter_links and run it through the function career_data to scrape all the statistics from said pages. fighter_profiles_a<-map(fighter_links$`http://www.ufcstats.com/statistics/fighters?char=a&page=all`, function(career_data){ #Below is where I believe my problem lies read_html()%>% html_nodes("div ul li")%>% html_text() }) ``` The issue I'm having is in the last section of code,`read_html()`. I do not know how to apply each link in the element within the list to that function. Additionally, is there a way to call all of the elements of `fighter_links` instead of doing it one element at a time? Thank you for any advice and assistance!
2020/11/06
[ "https://Stackoverflow.com/questions/64722898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14358258/" ]
You can simply use a combination of `map` with a `lambda`, `split` and `join`: ``` df['Example'] = df['Example'].map(lambda x: ','.join(x.split(',')[0:2]) if 'County, Texas' in x else x) ``` In this case: ``` import pandas as pd df = pd.DataFrame({'Example':["Jack Smith, Bank, Wilber, Lincoln County, Texas","Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas", "Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas, Branch, Landing, Services", "Jack Smith, Union, Credit, Bank, Wilber, Branch, Landing, Services"]}) df['Example'] = df['Example'].map(lambda x: ','.join(x.split(',')[0:2]) if 'County, Texas' in x else x) ``` We get the following output: ``` Example 0 Jack Smith, Bank 1 Jack Smith, Union 2 Jack Smith, Union 3 Jack Smith, Union, Credit, Bank, Wilber, Branc... ```
Data ``` import pandas as pd df = pd.DataFrame({'text':["Jack Smith, Bank, Wilber, Lincoln County, Texas","Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas", "Jack Smith, Union, Credit, Bank, Wilber, Lincoln County, Texas, Branch, Landing, Services", "Jack Smith, Union, Credit, Bank, Wilber, Branch, Landing, Services"]}) ``` Solution: use `.str.extract` ``` df['newtext']=df.text.str.extract(r'(^\w+\s\w+,\s\w+)') text newtext 0 Jack Smith, Bank, Wilber, Lincoln County, Texas Jack Smith, Bank 1 Jack Smith, Union, Credit, Bank, Wilber, Linco... Jack Smith, Union 2 Jack Smith, Union, Credit, Bank, Wilber, Linco... Jack Smith, Union 3 Jack Smith, Union, Credit, Bank, Wilber, Branc... Jack Smith, Union ```
24,269
According to big endian byte ordering or network byte order the bits are transmitted in this order: bits 0-7 first, then bits 8-15, then 16-23 and bits 24-31 last. Does this mean that bits from version, identification, TTL etc. go first and then bits from the next fields? [![enter image description here](https://i.stack.imgur.com/CH80Q.png)](https://i.stack.imgur.com/CH80Q.png)
2015/11/10
[ "https://networkengineering.stackexchange.com/questions/24269", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/20572/" ]
There is a confusion here. The network byte order does not specify how bits are transmitted over the network. It specifies how values are stored in multi-byte fields. **Example**: The *Total Length* field is composed of two bytes. It specifies in bytes the size of the packet. Let's say we have the value 500 for that field. Using the Network Byte Order it will be seen over the wire like this, transmitted from left to right: ``` 00000001 11110100 ``` If we used the little-endian format, it would be seen over the wire like this: ``` 11110100 00000001 ``` After the whole packet is constructed the bits will be sent starting with the lowest addressed bit of the header (bit 0), so the transmission will start with the *Version* field. A final point to make here is that the Network byte order is, as you mentioned, the Big Endian Order. This was chosen arbitrarily to have a common format for all network protocols and implementations.
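To make the two bit patterns above concrete, here is a short Python sketch (Python chosen just for illustration; the standard-library `struct` module's `>` and `<` prefixes select big- and little-endian packing) that reproduces them for the value 500:

```python
import struct

def bit_image(data: bytes) -> str:
    """Render bytes as space-separated binary octets, left to right."""
    return ' '.join(f'{b:08b}' for b in data)

# Network byte order (big endian): most significant byte stored first
print(bit_image(struct.pack('>H', 500)))   # 00000001 11110100

# Little endian: least significant byte stored first
print(bit_image(struct.pack('<H', 500)))   # 11110100 00000001
```

The same `>H` format is what functions like C's `htons()` produce for 16-bit header fields.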
It's very easy to think that internet packets go on the wire in a very simple "serial port" kind of way. In practice there is nothing inherently serial about it. If you think about some interface details it might make this clearer: * Consider Parallel port IP, which actually sends the data 4-bits at a time over four wires. <https://en.wikipedia.org/wiki/Parallel_Line_Internet_Protocol> * Actual 100baseTX scrambles 4-bit blocks and sends them as 5 bits serially but the original data isn't visible in the output, so the question about what order they go in doesn't have an answer. <https://en.wikipedia.org/wiki/4B5B> * When you send a packet across a loopback interface, it might be copied inside the computer's bus 64-bits at a time; or indeed just by memory remapping which would really be whole packet in parallel. Of course parallel port IP isn't common, but it illustrates the point; the other two are ubiquitous. Hope that helps Jonathan.
24,269
According to big endian byte ordering or network byte order the bits are transmitted in this order: bits 0-7 first, then bits 8-15, then 16-23 and bits 24-31 last. Does this mean that bits from version, identification, TTL etc. go first and then bits from the next fields? [![enter image description here](https://i.stack.imgur.com/CH80Q.png)](https://i.stack.imgur.com/CH80Q.png)
2015/11/10
[ "https://networkengineering.stackexchange.com/questions/24269", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/20572/" ]
There is a confusion here. The network byte order does not specify how bits are transmitted over the network. It specifies how values are stored in multi-byte fields. **Example**: The *Total Length* field is composed of two bytes. It specifies in bytes the size of the packet. Let's say we have the value 500 for that field. Using the Network Byte Order it will be seen over the wire like this, transmitted from left to right: ``` 00000001 11110100 ``` If we used the little-endian format, it would be seen over the wire like this: ``` 11110100 00000001 ``` After the whole packet is constructed the bits will be sent starting with the lowest addressed bit of the header (bit 0), so the transmission will start with the *Version* field. A final point to make here is that the Network byte order is, as you mentioned, the Big Endian Order. This was chosen arbitrarily to have a common format for all network protocols and implementations.
Other protocols may be different, but **Ethernet** transmits **most significant octet/byte first** and within each byte **least significant bit first**. So, a 16-bit field is transmitted 8-9-10-11-12-13-14-15 - 0-1-2-3-4-5-6-7 (0=least significant bit, 15=most significant bit). Check IEEE 802.3 Clauses 3.1.1, 3.2.6, and 3.3. (This is for purely serial Ethernet - depending on the physical layer, up to eight bits may be transferred simultaneously. Additionally, the bit order goes only for the unencoded layer 1.) IPv4 also uses most significant octet first, check RFC 791. However, numbering in IETF RFCs is in order of transmission with the bit numbering *in reverse to Ethernet*: Bit 0 = most significant bit = transmitted first (where not otherwise defined).
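As an illustration (a Python sketch, not an Ethernet implementation), the wire order this answer describes — most significant octet first, least significant bit first within each octet — can be computed for a 16-bit field like this:

```python
def wire_bits_16(value: int) -> str:
    """Bit order of a 16-bit field on purely serial Ethernet:
    most significant octet goes on the wire first, and within
    each octet the least significant bit is transmitted first."""
    bits = []
    for octet in value.to_bytes(2, 'big'):  # most significant octet first
        # within the octet, emit bit 0 (LSB) first, bit 7 (MSB) last
        bits.append(''.join(str((octet >> i) & 1) for i in range(8)))
    return ' '.join(bits)

# 500 = 0x01F4: octet 0x01 leaves the wire as 10000000,
# octet 0xF4 (11110100) leaves the wire as 00101111
print(wire_bits_16(500))  # 10000000 00101111
```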
24,269
According to big endian byte ordering or network byte order the bits are transmitted in this order: bits 0-7 first, then bits 8-15, then 16-23 and bits 24-31 last. Does this mean that bits from version, identification, TTL etc. go first and then bits from the next fields? [![enter image description here](https://i.stack.imgur.com/CH80Q.png)](https://i.stack.imgur.com/CH80Q.png)
2015/11/10
[ "https://networkengineering.stackexchange.com/questions/24269", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/20572/" ]
There is a confusion here. The network byte order does not specify how bits are transmitted over the network. It specifies how values are stored in multi-byte fields. **Example**: The *Total Length* field is composed of two bytes. It specifies the size of the packet in bytes. Let's say we have the value 500 for that field. Using the network byte order, it will be seen over the wire like this, transmitted from left to right: ``` 00000001 11110100 ``` If we used the little-endian format, it would be seen over the wire like this: ``` 11110100 00000001 ``` After the whole packet is constructed, the bits will be sent starting with the lowest-addressed bit of the header (bit 0), so the transmission will start with the *Version* field. A final point to make here is that the network byte order is, as you mentioned, the big-endian order. This was chosen arbitrarily to have a common format for all network protocols and implementations.
It is important to note that "ordering" is a very personal concept, dependent upon culture and experience. "Most people" fail to disclose their own personal concept of "ordering" when engaging in conversation, simply assuming that the other person holds the same concept of ordering as themselves. That assumption, then, is very likely to be both "hidden" and wrong. I grew up speaking English, and reading "left to right", "top to bottom". So I tend to "image" in that format. Most of my assembly programming was on Intel microprocessors, with storage "imaged" with "the top of memory" - the memory location with the largest address - at the "top" of a written page. So *that's* my reference point, and any other ordering format will seem "altered" to me. My Intel storage image seems "natural" to me because, in English, by convention, we write numbers with the largest weighted symbol on the left. This format has the "natural" advantage of the storage image being independent of the word width. Bit locations in storage do not change when writing different length "chunks" into storage. The bits are always "imaged" in the same order, "left to right", "top to bottom". That is the "little endian" way. In contrast, with "big endian" format, the bit order in the storage image appears "scrambled", changing with the length of the "chunk" being written. Bits are written "MSB to LSB", and words are written in the opposite order, "Lowest Address to Highest Address". But "scrambled" is *only relative* to my personal ordering system. Similarly, display utilities like hexdump appear to me to be "bit-swapped". On the other hand, though, hexdump will display C language text strings in the conventional, readable, left to right order. However, from my perspective as a "hardware person", then, my "little endian" format is, instead, "backward".
I will draw a hardware counter/divider with the fast input clock on the left, which would represent the *least* weighted symbol in a counter, and draw divider stages "left to right", which is like representing a number "LSB to MSB" - which is "backward". Similarly, converting "storage order" to network "wire order" depends very much upon your concept of the hardware. Serial or Parallel? Shift Left or Shift Right? Bits or Bauds? With respect to the original poster's RFC 791 "Internet Header Format" example, we can then refer to "APPENDIX B: Data Transmission Order" from that same document. We find, perhaps counterintuitively, that: > > Whenever a diagram shows a > group of octets, the order of transmission of those octets is the normal > order in which they are read in English. For example, in the following > diagram *the octets are transmitted in the order they are numbered*. > > > > ``` > > 0 1 2 3 > 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 1 | 2 | 3 | 4 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 5 | 6 | 7 | 8 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 9 | 10 | 11 | 12 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > > Transmission Order of Bytes > > Figure 10. > > ``` > > Whenever an octet represents a numeric quantity the left most bit in the > diagram is the high order or most significant bit. That is, *the bit > labeled 0 is the most significant bit*. > > > People might be inclined, instead, to consider "the bit labelled 0" to be the *least* significant bit, but that is not the convention here. In particular, note that the document describes the transmission *octet* order, and says *nothing* about the transmission *bit* order. 
In practice, then, the Internet Protocol transmission *bit* order is only meaningful in the context of bit-serial transmission, and meaningless in the context of multi-bit symbol transmission, as, for example, with WiFi. Internet Protocol transmission will depend entirely upon the underlying Link Layer and Physical Hardware Layer, using the OSI Network Stack Model terminology. Furthermore, the computer "storage image" of this "Octet-Oriented" Internet Protocol will also depend entirely upon the word size and byte ordering of the underlying Programming Language and Physical Hardware of the computer system. There are many "moving parts" here, and too many moving parts to provide a simple answer to the original question about "which bits *go first*". Context is everything. Still, for Internet Protocol itself, the answer is: octet by octet, in "the normal order in which they are read in English", left to right, top to bottom, as read from each diagram in the Internet Protocol documentation. "Altered", "natural", "scrambled", "bit-swapped", "backward" - depends upon your point of view. While there is no "one true way" of ordering, it is very useful to, first, *choose* a personal "reference ordering system", and then second, to always communicate that reference in conversation.
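To make the "bit 0 is the most significant bit" convention concrete, here is a small sketch (mine, with a made-up header byte) extracting the IPv4 Version and IHL fields, which RFC 791 places in bits 0-3 and 4-7 of the first octet:

```python
first_octet = 0x45  # hypothetical first header byte of an IPv4 packet

# RFC 791 numbers bits from the left, so bit 0 is the high-order bit:
# bits 0-3 = Version, bits 4-7 = IHL (header length in 32-bit words).
version = first_octet >> 4
ihl = first_octet & 0x0F

print(version, ihl)  # 4 5
```

In other words, "bit 0" of the diagram is the high nibble's top bit, which is why Version is read by shifting right, not by masking the low bits.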
24,269
According to big-endian byte ordering, or network byte order, the bits are transmitted in this order: bits 0-7 first, then bits 8-15, then 16-23, and bits 24-31 last. Does this mean that bits from version, identification, TTL, etc. go first and then bits from the next fields? [![enter image description here](https://i.stack.imgur.com/CH80Q.png)](https://i.stack.imgur.com/CH80Q.png)
2015/11/10
[ "https://networkengineering.stackexchange.com/questions/24269", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/20572/" ]
It's very easy to think that internet packets go on the wire in a very simple "serial port" kind of way. In practice there is nothing inherently serial about it. If you think about some interface details it might make this clearer: * Consider parallel port IP, which actually sends the data 4 bits at a time over four wires. <https://en.wikipedia.org/wiki/Parallel_Line_Internet_Protocol> * Actual 100BASE-TX scrambles 4-bit blocks and sends them as 5 bits serially, but the original data isn't visible in the output, so the question about what order they go in doesn't have an answer. <https://en.wikipedia.org/wiki/4B5B> * When you send a packet across a loopback interface, it might be copied inside the computer's bus 64 bits at a time; or indeed just by memory remapping, which would really be the whole packet in parallel. Of course parallel port IP isn't common, but it illustrates the point; the other two are ubiquitous. Hope that helps, Jonathan.
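To illustrate the 4B5B point: each 4-bit data nibble is replaced by a 5-bit code group, so the bits on the wire are not the original data bits at all. A sketch using the standard 4B5B data code table (transcribed from the FDDI/100BASE-TX convention; treat it as illustrative, not authoritative):

```python
# Standard 4B5B data code groups for nibbles 0x0 through 0xF.
FOUR_B_FIVE_B = [
    "11110", "01001", "10100", "10101",  # 0 1 2 3
    "01010", "01011", "01110", "01111",  # 4 5 6 7
    "10010", "10011", "10110", "10111",  # 8 9 A B
    "11010", "11011", "11100", "11101",  # C D E F
]

def encode_nibble(nibble: int) -> str:
    """Map one data nibble to its 5-bit code group."""
    return FOUR_B_FIVE_B[nibble & 0xF]

# Four zero data bits leave the encoder as mostly ones:
print(encode_nibble(0x0))  # 11110
```

The code groups are chosen so the serial stream never carries long runs of zeros (which would starve clock recovery), which is exactly why the original bit order is unrecoverable by eye.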
It is important to note that "ordering" is a very personal concept, dependent upon culture and experience. "Most people" fail to disclose their own personal concept of "ordering" when engaging in conversation, simply assuming that the other person holds the same concept of ordering as themselves. That assumption, then, is very likely to be both "hidden" and wrong. I grew up speaking English, and reading "left to right", "top to bottom". So I tend to "image" in that format. Most of my assembly programming was on Intel microprocessors, with storage "imaged" with "the top of memory" - the memory location with the largest address - at the "top" of a written page. So *that's* my reference point, and any other ordering format will seem "altered" to me. My Intel storage image seems "natural" to me because, in English, by convention, we write numbers with the largest weighted symbol on the left. This format has the "natural" advantage of the storage image being independent of the word width. Bit locations in storage do not change when writing different length "chunks" into storage. The bits are always "imaged" in the same order, "left to right", "top to bottom". That is the "little endian" way. In contrast, with "big endian" format, the bit order in the storage image appears "scrambled", changing with the length of the "chunk" being written. Bits are written "MSB to LSB", and words are written in the opposite order, "Lowest Address to Highest Address". But "scrambled" is *only relative* to my personal ordering system. Similarly, display utilities like hexdump appear to me to be "bit-swapped". On the other hand, though, hexdump will display C language text strings in the conventional, readable, left to right order. However, from my perspective as a "hardware person", then, my "little endian" format is, instead, "backward".
I will draw a hardware counter/divider with the fast input clock on the left, which would represent the *least* weighted symbol in a counter, and draw divider stages "left to right", which is like representing a number "LSB to MSB" - which is "backward". Similarly, converting "storage order" to network "wire order" depends very much upon your concept of the hardware. Serial or Parallel? Shift Left or Shift Right? Bits or Bauds? With respect to the original poster's RFC 791 "Internet Header Format" example, we can then refer to "APPENDIX B: Data Transmission Order" from that same document. We find, perhaps counterintuitively, that: > > Whenever a diagram shows a > group of octets, the order of transmission of those octets is the normal > order in which they are read in English. For example, in the following > diagram *the octets are transmitted in the order they are numbered*. > > > > ``` > > 0 1 2 3 > 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 1 | 2 | 3 | 4 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 5 | 6 | 7 | 8 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 9 | 10 | 11 | 12 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > > Transmission Order of Bytes > > Figure 10. > > ``` > > Whenever an octet represents a numeric quantity the left most bit in the > diagram is the high order or most significant bit. That is, *the bit > labeled 0 is the most significant bit*. > > > People might be inclined, instead, to consider "the bit labelled 0" to be the *least* significant bit, but that is not the convention here. In particular, note that the document describes the transmission *octet* order, and says *nothing* about the transmission *bit* order. 
In practice, then, the Internet Protocol transmission *bit* order is only meaningful in the context of bit-serial transmission, and meaningless in the context of multi-bit symbol transmission, as, for example, with WiFi. Internet Protocol transmission will depend entirely upon the underlying Link Layer and Physical Hardware Layer, using the OSI Network Stack Model terminology. Furthermore, the computer "storage image" of this "Octet-Oriented" Internet Protocol will also depend entirely upon the word size and byte ordering of the underlying Programming Language and Physical Hardware of the computer system. There are many "moving parts" here, and too many moving parts to provide a simple answer to the original question about "which bits *go first*". Context is everything. Still, for Internet Protocol itself, the answer is: octet by octet, in "the normal order in which they are read in English", left to right, top to bottom, as read from each diagram in the Internet Protocol documentation. "Altered", "natural", "scrambled", "bit-swapped", "backward" - depends upon your point of view. While there is no "one true way" of ordering, it is very useful to, first, *choose* a personal "reference ordering system", and then second, to always communicate that reference in conversation.
24,269
According to big-endian byte ordering, or network byte order, the bits are transmitted in this order: bits 0-7 first, then bits 8-15, then 16-23, and bits 24-31 last. Does this mean that bits from version, identification, TTL, etc. go first and then bits from the next fields? [![enter image description here](https://i.stack.imgur.com/CH80Q.png)](https://i.stack.imgur.com/CH80Q.png)
2015/11/10
[ "https://networkengineering.stackexchange.com/questions/24269", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/20572/" ]
Other protocols may be different, but **Ethernet** transmits **most significant octet/byte first** and within each byte **least significant bit first**. So, a 16-bit field is transmitted 8-9-10-11-12-13-14-15 - 0-1-2-3-4-5-6-7 (0 = least significant bit, 15 = most significant bit). Check IEEE 802.3 Clauses 3.1.1, 3.2.6, and 3.3. (This is for purely serial Ethernet - depending on the physical layer, up to eight bits may be transferred simultaneously. Additionally, the bit order applies only to the unencoded layer 1.) IPv4 also uses most significant octet first; check RFC 791. However, numbering in IETF RFCs is in order of transmission with the bit numbering *in reverse to Ethernet*: Bit 0 = most significant bit = transmitted first (where not otherwise defined).
It is important to note that "ordering" is a very personal concept, dependent upon culture and experience. "Most people" fail to disclose their own personal concept of "ordering" when engaging in conversation, simply assuming that the other person holds the same concept of ordering as themselves. That assumption, then, is very likely to be both "hidden" and wrong. I grew up speaking English, and reading "left to right", "top to bottom". So I tend to "image" in that format. Most of my assembly programming was on Intel microprocessors, with storage "imaged" with "the top of memory" - the memory location with the largest address - at the "top" of a written page. So *that's* my reference point, and any other ordering format will seem "altered" to me. My Intel storage image seems "natural" to me because, in English, by convention, we write numbers with the largest weighted symbol on the left. This format has the "natural" advantage of the storage image being independent of the word width. Bit locations in storage do not change when writing different length "chunks" into storage. The bits are always "imaged" in the same order, "left to right", "top to bottom". That is the "little endian" way. In contrast, with "big endian" format, the bit order in the storage image appears "scrambled", changing with the length of the "chunk" being written. Bits are written "MSB to LSB", and words are written in the opposite order, "Lowest Address to Highest Address". But "scrambled" is *only relative* to my personal ordering system. Similarly, display utilities like hexdump appear to me to be "bit-swapped". On the other hand, though, hexdump will display C language text strings in the conventional, readable, left to right order. However, from my perspective as a "hardware person", then, my "little endian" format is, instead, "backward".
I will draw a hardware counter/divider with the fast input clock on the left, which would represent the *least* weighted symbol in a counter, and draw divider stages "left to right", which is like representing a number "LSB to MSB" - which is "backward". Similarly, converting "storage order" to network "wire order" depends very much upon your concept of the hardware. Serial or Parallel? Shift Left or Shift Right? Bits or Bauds? With respect to the original poster's RFC 791 "Internet Header Format" example, we can then refer to "APPENDIX B: Data Transmission Order" from that same document. We find, perhaps counterintuitively, that: > > Whenever a diagram shows a > group of octets, the order of transmission of those octets is the normal > order in which they are read in English. For example, in the following > diagram *the octets are transmitted in the order they are numbered*. > > > > ``` > > 0 1 2 3 > 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 1 | 2 | 3 | 4 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 5 | 6 | 7 | 8 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | 9 | 10 | 11 | 12 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > > Transmission Order of Bytes > > Figure 10. > > ``` > > Whenever an octet represents a numeric quantity the left most bit in the > diagram is the high order or most significant bit. That is, *the bit > labeled 0 is the most significant bit*. > > > People might be inclined, instead, to consider "the bit labelled 0" to be the *least* significant bit, but that is not the convention here. In particular, note that the document describes the transmission *octet* order, and says *nothing* about the transmission *bit* order. 
In practice, then, the Internet Protocol transmission *bit* order is only meaningful in the context of bit-serial transmission, and meaningless in the context of multi-bit symbol transmission, as, for example, with WiFi. Internet Protocol transmission will depend entirely upon the underlying Link Layer and Physical Hardware Layer, using the OSI Network Stack Model terminology. Furthermore, the computer "storage image" of this "Octet-Oriented" Internet Protocol will also depend entirely upon the word size and byte ordering of the underlying Programming Language and Physical Hardware of the computer system. There are many "moving parts" here, and too many moving parts to provide a simple answer to the original question about "which bits *go first*". Context is everything. Still, for Internet Protocol itself, the answer is: octet by octet, in "the normal order in which they are read in English", left to right, top to bottom, as read from each diagram in the Internet Protocol documentation. "Altered", "natural", "scrambled", "bit-swapped", "backward" - depends upon your point of view. While there is no "one true way" of ordering, it is very useful to, first, *choose* a personal "reference ordering system", and then second, to always communicate that reference in conversation.
148,118
I am trying to make a little wireless-connected gizmo to attach to my grandfather's walking cane. The idea is that when it senses the cane has fallen down, it would alert/call one of us so that we can rush to him if we're not nearby. The question is: What sensor or combination of sensors could I use to reliably differentiate between these two scenarios? **NORMAL situation**: The cane is being held upright (or at only a slight angle from vertical) and used normally. *versus* **CRITICAL situation**: The cane has been let go of, falls down and continues to lie on the floor/ground Obviously, occasional false positives would be fine, but detection misses (false negatives) should be minimized. --- I am thinking a combination of two things might help: --sense whether my grandpa's hand is holding the cane or not (perhaps a light sensor that is blocked when the hand is on it?) --sense whether the cane is upright or has taken a fall (perhaps an accelerometer+gyroscope+vibration sensor to determine orientation and shock events?)
2015/01/08
[ "https://electronics.stackexchange.com/questions/148118", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/11636/" ]
A very simple solution is a tilt sensor such as the AT-407 from Sparkfun. It consists of 2 steel balls inside a small tube and 2 leads. When the tube is tilted, the balls separate and continuity is lost between the 2 leads. It costs only $1.95.
Seems like an accelerometer and a small microcontroller (a PIC, for example) could do this quite easily. You could then set the angle at which the alarm goes off in the code. With the light sensor approach you'd need to be careful that the area stayed clean. Otherwise, if he covers the sensor with crud, it will always indicate that the cane is vertical even when it is not. With a vibration sensor you'd have to get creative about detecting the impulse event (when the cane falls) and then latch the output so it continues to sound the alarm even after the cane has come to rest.
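A sketch of the angle-threshold idea from this answer, assuming a 3-axis accelerometer mounted with its z-axis along the cane and readings in g; the axis convention and the 60-degree threshold are my assumptions, not from the answer:

```python
import math

TILT_ALARM_DEG = 60.0  # assumed alarm threshold, measured from vertical

def tilt_from_vertical(ax: float, ay: float, az: float) -> float:
    """Angle in degrees between the cane axis (z) and gravity,
    estimated from a static accelerometer reading."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

def should_alarm(ax: float, ay: float, az: float) -> bool:
    return tilt_from_vertical(ax, ay, az) > TILT_ALARM_DEG

print(should_alarm(0.0, 0.0, 1.0))  # False -- cane upright
print(should_alarm(1.0, 0.0, 0.0))  # True  -- cane lying flat
```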
148,118
I am trying to make a little wireless-connected gizmo to attach to my grandfather's walking cane. The idea is that when it senses the cane has fallen down, it would alert/call one of us so that we can rush to him if we're not nearby. The question is: What sensor or combination of sensors could I use to reliably differentiate between these two scenarios? **NORMAL situation**: The cane is being held upright (or at only a slight angle from vertical) and used normally. *versus* **CRITICAL situation**: The cane has been let go of, falls down and continues to lie on the floor/ground Obviously, occasional false positives would be fine, but detection misses (false negatives) should be minimized. --- I am thinking a combination of two things might help: --sense whether my grandpa's hand is holding the cane or not (perhaps a light sensor that is blocked when the hand is on it?) --sense whether the cane is upright or has taken a fall (perhaps an accelerometer+gyroscope+vibration sensor to determine orientation and shock events?)
2015/01/08
[ "https://electronics.stackexchange.com/questions/148118", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/11636/" ]
I agree with one of the above replies in that you need something (probably two accelerometers) to detect the angle at which the cane stands (calibrated at the factory if you are industrializing this). This way you have a continuum of what angle the cane is at, relative to upright. This gives a distinct advantage over a simple tilt switch for the following reasons. 1. You can calculate the angle with two accelerometers. With a tilt (on/off) switch, you cannot. 2. Not only can you calculate the angle with two accelerometers, you can detect **the rate at which the angle changes** relative to the cane standing up straight. **This is key** as you can then at least have the information to characterize what the rate of fall looks like with a simple drop test and base your fall detection algorithm model on that.
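The rate-of-fall idea in point 2 could be sketched like this; the sample period and both thresholds are illustrative values that the drop test the answer mentions would actually determine:

```python
FALL_RATE_DEG_PER_S = 120.0  # assumed: a dropped cane tips much faster than a leaning user
LYING_FLAT_DEG = 70.0        # assumed: tilt beyond this means "on the ground"

def detect_fall(tilt_samples_deg, sample_period_s=0.05):
    """tilt_samples_deg: successive tilt-from-vertical angles in degrees.
    Flags a fall when the tilt rate exceeds the drop-test threshold
    and the cane ends up near horizontal."""
    for prev, cur in zip(tilt_samples_deg, tilt_samples_deg[1:]):
        rate = (cur - prev) / sample_period_s
        if rate > FALL_RATE_DEG_PER_S and cur > LYING_FLAT_DEG:
            return True
    return False

# Slow lean while walking never trips; a sudden drop to horizontal does.
print(detect_fall([5, 6, 8, 10]))    # False
print(detect_fall([5, 30, 60, 88]))  # True
```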
Seems like an accelerometer and a small microcontroller (a PIC for example) could do this quite easily. You could then set the angle at which the alarm goes off in the code. With the light sensor approach you'd need to be careful that the area stayed clean. Otherwise if he covers the sensor with crud it will always indicate that the cane is vertical even when it is not. With a vibration sensor you'd have to get creative about detecting the impulse event (when the cane falls) and then latch the output so it continues to sound the alarm even after the can has come to a rest.