The parameters of a function are assigned to the special variables `$1`, `$2`, etc. (`$0` keeps the name of the script itself). I suggest giving them more meaningful names by declaring variables inside the function as `local`:

```
function foo {
    local bar="$1"
}
```

**Error-prone situations**

In bash, unless you declare otherwise, an unset variable expands to an empty string. This is very dangerous in case of a typo: the mistyped variable will not be reported, and will silently evaluate as empty. Use

```
set -o nounset
```

to prevent this from happening. Be careful though, because with this setting the program will abort every time you evaluate an undefined variable. For this reason, the only way to check whether a variable is defined is the following:

```
if test "x${foo:-notset}" == "xnotset"
then
    echo "foo not set"
fi
```

You can declare variables as readonly:

```
readonly readonly_var="foo"
```

**Modularization**

You can achieve "python-like" modularization with the following code:

```
set -o nounset

function getScriptAbsoluteDir {
    # @description used to get the script path
    # @param $1 the script $0 parameter
    local script_invoke_path="$1"
    local cwd=`pwd`

    # absolute path? if so, the first character is a /
    if test "x${script_invoke_path:0:1}" = 'x/'
    then
        RESULT=`dirname "$script_invoke_path"`
    else
        RESULT=`dirname "$cwd/$script_invoke_path"`
    fi
}

script_invoke_path="$0"
script_name=`basename "$0"`
getScriptAbsoluteDir "$script_invoke_path"
script_absolute_dir=$RESULT

function import() {
    # @description importer routine to get external functionality.
    # @description the first location searched is the script directory.
    # @description if not found, search the module in the paths contained in $SHELL_LIBRARY_PATH environment variable
    # @param $1 the .shinc file to import, without .shinc extension
    module=$1

    if test "x$module" == "x"
    then
        echo "$script_name : Unable to import unspecified module. Dying."
        exit 1
    fi

    if test "x${script_absolute_dir:-notset}" == "xnotset"
    then
        echo "$script_name : Undefined script absolute dir. Did you remove getScriptAbsoluteDir? Dying."
        exit 1
    fi

    if test "x$script_absolute_dir" == "x"
    then
        echo "$script_name : empty script path. Dying."
        exit 1
    fi

    if test -e "$script_absolute_dir/$module.shinc"
    then
        # import from script directory
        . "$script_absolute_dir/$module.shinc"
        return
    elif test "x${SHELL_LIBRARY_PATH:-notset}" != "xnotset"
    then
        # import from the shell script library path
        # save the separator and use ':' instead
        local saved_IFS="$IFS"
        IFS=':'
        for path in $SHELL_LIBRARY_PATH
        do
            if test -e "$path/$module.shinc"
            then
                # restore the standard separator before sourcing
                IFS="$saved_IFS"
                . "$path/$module.shinc"
                return
            fi
        done
        # restore the standard separator
        IFS="$saved_IFS"
    fi
    echo "$script_name : Unable to find module $module."
    exit 1
}
```

You can then import files with the extension `.shinc` with the following syntax:

```
import "AModule/ModuleFile"
```

which will be searched for in `SHELL_LIBRARY_PATH`. As you always import into the global namespace, remember to prefix all your functions and variables with a proper prefix, otherwise you risk name clashes. I use a double underscore as the python dot.

Also, put this as the first thing in your module:

```
# avoid double inclusion
if test "${BashInclude__imported+defined}" == "defined"
then
    return 0
fi
BashInclude__imported=1
```

**Object-oriented programming**

In bash, you cannot do object-oriented programming unless you build a quite complex system of object allocation (I thought about it; it's feasible, but insane). In practice, you can however do "singleton-oriented programming": you have one instance of each object, and only one. What I do is define an object in a module (see the modularization entry). Then I define empty vars (analogous to member variables), an init function (constructor) and member functions, as in this example code:

```
# avoid double inclusion
if test "${Table__imported+defined}" == "defined"
then
    return 0
fi
Table__imported=1

readonly Table__NoException=""
readonly Table__ParameterException="Table__ParameterException"
readonly Table__MySqlException="Table__MySqlException"
readonly Table__NotInitializedException="Table__NotInitializedException"
readonly Table__AlreadyInitializedException="Table__AlreadyInitializedException"

# an example of module enum constants, used in the mysql table, in this case
readonly Table__GENDER_MALE="GENDER_MALE"
readonly Table__GENDER_FEMALE="GENDER_FEMALE"

# private: prefixed with p_ (a bash variable cannot start with _)
p_Table__mysql_exec=""      # will contain the executed mysql command
p_Table__initialized=0

function Table__init {
    # @description init the module with the database parameters
    # @param $1 the mysql config file
    # @exception Table__NoException, Table__ParameterException

    EXCEPTION=""
    EXCEPTION_MSG=""
    EXCEPTION_FUNC=""
    RESULT=""

    if test $p_Table__initialized -ne 0
    then
        EXCEPTION=$Table__AlreadyInitializedException
        EXCEPTION_MSG="module already initialized"
        EXCEPTION_FUNC="$FUNCNAME"
        return 1
    fi

    local config_file="$1"

    # yes, I am aware that I could put default parameters and other niceties, but I am lazy today
    if test "x$config_file" = "x"; then
        EXCEPTION=$Table__ParameterException
        EXCEPTION_MSG="missing parameter config file"
        EXCEPTION_FUNC="$FUNCNAME"
        return 1
    fi

    p_Table__mysql_exec="mysql --defaults-file=$config_file --silent --skip-column-names -e "

    # mark the module as initialized
    p_Table__initialized=1

    EXCEPTION=$Table__NoException
    EXCEPTION_MSG=""
    EXCEPTION_FUNC=""
    return 0
}

function Table__getName() {
    # @description gets the name of the person
    # @param $1 the row identifier
    # @result the name

    EXCEPTION=""
    EXCEPTION_MSG=""
    EXCEPTION_FUNC=""
    RESULT=""

    if test $p_Table__initialized -eq 0
    then
        EXCEPTION=$Table__NotInitializedException
        EXCEPTION_MSG="module not initialized"
        EXCEPTION_FUNC="$FUNCNAME"
        return 1
    fi

    id=$1
    if test "x$id" = "x"; then
        EXCEPTION=$Table__ParameterException
        EXCEPTION_MSG="missing parameter identifier"
        EXCEPTION_FUNC="$FUNCNAME"
        return 1
    fi

    local name=`$p_Table__mysql_exec "SELECT name FROM table WHERE id = '$id'"`
    if test $? != 0 ; then
        EXCEPTION=$Table__MySqlException
        EXCEPTION_MSG="unable to perform select"
        EXCEPTION_FUNC="$FUNCNAME"
        return 1
    fi

    RESULT=$name
    EXCEPTION=$Table__NoException
    EXCEPTION_MSG=""
    EXCEPTION_FUNC=""
    return 0
}
```

**Trapping and handling signals**

I found this useful to catch and handle exceptions:

```
function Main__interruptHandler() {
    # @description signal handler for SIGINT
    echo "SIGINT caught"
    exit
}
function Main__terminationHandler() {
    # @description signal handler for SIGTERM
    echo "SIGTERM caught"
    exit
}
function Main__exitHandler() {
    # @description signal handler for end of the program (clean or unclean).
    # probably redundant call, we already call the cleanup in main.
    exit
}

# catch signals and exit
trap Main__interruptHandler INT
trap Main__terminationHandler TERM
trap Main__exitHandler EXIT

function Main__main() {
    # body
}

Main__main "$@"
```

**Hints and tips**

If something does not work for some reason, try to reorder the code. Order is important and not always intuitive. Do not even consider working with tcsh: it does not support functions, and it's horrible in general.

Hope it helps, although please note: if you have to use the kind of things I wrote here, it means that your problem is too complex to be solved with shell. Use another language. I had to use it due to human factors and legacy.
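As a side note to the definedness check described above: bash's parameter expansions also offer a more direct test that stays safe under `set -o nounset`, without the `x`-prefix idiom. A small illustrative sketch (the variable names are made up for the example):

```shell
#!/usr/bin/env bash
set -o nounset

foo="hello"

# ${var+set} expands to "set" only if var is defined (even if empty),
# and to "" otherwise -- it never trips nounset.
if test "${foo+set}" = "set"; then
    echo "foo is defined"
fi

if test "${undefined_var+set}" != "set"; then
    echo "undefined_var is not defined"
fi

# ${var:-default} substitutes a fallback instead of aborting:
echo "${undefined_var:-fallback}"    # prints: fallback
```

This is the same mechanism the module guards above rely on (`${BashInclude__imported+defined}`).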
I want to do something like:

```
MyObject myObj = GetMyObj(); // Create and fill a new object
MyObject newObj = myObj.Clone();
```

And then make changes to the new object that are not reflected in the original object. I don't often need this functionality, so when it's been necessary, I've resorted to creating a new object and then copying each property individually, but it always leaves me with the feeling that there is a better or more elegant way of handling the situation. How can I clone or deep copy an object so that the cloned object can be modified without any changes being reflected in the original object?

Whereas one approach is to implement the [`ICloneable`](http://msdn.microsoft.com/en-us/library/system.icloneable.aspx) interface (described [here](https://stackoverflow.com/questions/78536/cloning-objects-in-c/78568#78568), so I won't regurgitate it), here's a nice deep-clone object copier I found on [The Code Project](http://www.codeproject.com/Articles/23832/Implementing-Deep-Cloning-via-Serializing-objects) a while ago and incorporated into our code. As mentioned elsewhere, it requires your objects to be serializable.

```
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

/// <summary>
/// Reference Article http://www.codeproject.com/KB/tips/SerializedObjectCloner.aspx
/// Provides a method for performing a deep copy of an object.
/// Binary Serialization is used to perform the copy.
/// </summary>
public static class ObjectCopier
{
    /// <summary>
    /// Perform a deep copy of the object via serialization.
    /// </summary>
    /// <typeparam name="T">The type of object being copied.</typeparam>
    /// <param name="source">The object instance to copy.</param>
    /// <returns>A deep copy of the object.</returns>
    public static T Clone<T>(T source)
    {
        if (!typeof(T).IsSerializable)
        {
            throw new ArgumentException("The type must be serializable.", nameof(source));
        }

        // Don't serialize a null object, simply return the default for that object
        if (ReferenceEquals(source, null)) return default;

        using var stream = new MemoryStream();
        IFormatter formatter = new BinaryFormatter();
        formatter.Serialize(stream, source);
        stream.Seek(0, SeekOrigin.Begin);
        return (T)formatter.Deserialize(stream);
    }
}
```

The idea is that it serializes your object and then deserializes it into a fresh object. The benefit is that you don't have to concern yourself with cloning everything when an object gets too complex.

If you prefer to use the new [extension methods](http://en.wikipedia.org/wiki/Extension_method) of C# 3.0, change the method to have the following signature:

```
public static T Clone<T>(this T source)
{
   // ...
}
```

Now the method call simply becomes `objectBeingCloned.Clone();`.

**EDIT** (January 10 2015) Thought I'd revisit this, to mention I recently started using (Newtonsoft) Json to do this; it [should be](http://maxondev.com/serialization-performance-comparison-c-net-formats-frameworks-xmldatacontractserializer-xmlserializer-binaryformatter-json-newtonsoft-servicestack-text/) lighter, and avoids the overhead of `[Serializable]` tags. (**NB** @atconway has pointed out in the comments that private members are not cloned using the JSON method.)

```
/// <summary>
/// Perform a deep copy of the object, using Json as a serialization method.
/// NOTE: Private members are not cloned using this method.
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T CloneJson<T>(this T source)
{
    // Don't serialize a null object, simply return the default for that object
    if (ReferenceEquals(source, null)) return default;

    // initialize inner objects individually
    // for example in default constructor some list property initialized with some values,
    // but in 'source' these items are cleaned -
    // without ObjectCreationHandling.Replace default constructor values will be added to result
    var deserializeSettings = new JsonSerializerSettings { ObjectCreationHandling = ObjectCreationHandling.Replace };

    return JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source), deserializeSettings);
}
```
In Perl one would simply do the following to store and iterate over a list of names:

```
my @fruit = (apple, orange, kiwi);
foreach (@fruit) {
    print $_;
}
```

What would the equivalent be in bash?

bash (unlike POSIX sh) supports arrays:

```
fruits=(apple orange kiwi "dried mango")
for fruit in "${fruits[@]}"; do
    echo "${fruit}"
done
```

This has the advantage that array elements may contain spaces or other members of `$IFS`; as long as they were correctly inserted as separate elements, they are read out the same way.
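As a follow-up sketch to the answer above: bash arrays also expose their length and their indices, which covers the Perl `scalar(@fruit)` and positional-access idioms as well (the array contents here are just the example data from the answer):

```shell
#!/usr/bin/env bash
fruits=(apple orange kiwi "dried mango")

# number of elements, analogous to scalar(@fruit) in Perl
echo "${#fruits[@]}"          # prints: 4

# individual element by index (zero-based)
echo "${fruits[3]}"           # prints: dried mango

# iterate over the indices rather than the values
for i in "${!fruits[@]}"; do
    printf '%s: %s\n' "$i" "${fruits[$i]}"
done
```

The quoted `"${fruits[@]}"` and `"${!fruits[@]}"` forms are what keep multi-word elements like `"dried mango"` intact.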
What is the fastest way you know to convert a floating-point number to an int on an x86 CPU? Preferably in C or assembly (that can be inlined in C) for any combination of the following:

* 32/64/80-bit float -> 32/64-bit integer

I'm looking for some technique that is faster than just letting the compiler do it.

It depends on whether you want a truncating conversion or a rounding one, and at what precision. By default, C performs a truncating conversion when you go from float to int. There are FPU instructions that do it, but it's not an ANSI C conversion, and there are significant caveats to using it (such as knowing the FPU rounding state). Since the answer to your problem is quite complex and depends on some variables you haven't expressed, I recommend this article on the issue: <http://www.stereopsis.com/FPU.html>
I'm starting work on a program which is perhaps most naturally described as a batch of calculations on database tables, and will be executed once a month. All input is in Oracle database tables, and all output will be to Oracle database tables. The program should stay maintainable for many years to come.

It seems straightforward to implement this as a series of stored procedures, each performing a sensible transformation, for example distributing costs among departments according to some business rules. I can then write unit tests to check if the output of each transformation is as I expected.

Is it a bad idea to do this all in PL/SQL? Would you rather do heavy batch calculations in a typical object-oriented programming language, such as C#? Or isn't it more expressive to use a database-centric programming language such as PL/SQL?

Normally I say put as little in PL/SQL as possible - it is typically a lot less maintainable - at one of my last jobs I really saw how messy and hard to work with it could get. However, since this is batch processing - and since the input and output are both the DB - it makes good sense to put the logic into PL/SQL, to minimize "moving parts". If it were business logic - or components used by other pieces of your system - I would say don't do it.
I'm trying to find out if there is any way to elevate a specific function within an application. For example, I have an app with system and user settings that are stored in the registry; I only need elevation for when the system settings need to be changed. Unfortunately, all of the info I've come across only talks about starting a new process with elevated privileges.

What you really need to do is store your settings in the Application Data folder.
Does anyone know of a good calendar (not a datepicker, but a BIG browsable calendar) plugin for one of the major JavaScript frameworks? I'd prefer jQuery.

I just published a new open-source project (a jQuery plugin) that sounds exactly like what you want: [FullCalendar](http://arshaw.com/fullcalendar/). Hope it works well for you!
I have a 'foreach' macro I use frequently in C++ that works for most STL containers:

```
#define foreach(var, container) \
  for(typeof((container).begin()) var = (container).begin(); \
      var != (container).end(); \
      ++var)
```

(Note that 'typeof' is a gcc extension.) It is used like this:

```
std::vector< Blorgus > blorgi = ...;
foreach(blorgus, blorgi) {
    blorgus->draw();
}
```

I would like to make something similar that iterates over a map's values. Call it "foreach\_value", perhaps. So instead of writing

```
foreach(pair, mymap) {
    pair->second->foo();
}
```

I would write

```
foreach_value(v, mymap) {
    v.foo();
}
```

I can't come up with a macro that will do this, because it requires declaring two variables: the iterator and the value variable ('v', above). I don't know how to do that in the initializer of a for loop, even using gcc extensions. I could declare it just before the foreach\_value call, but then it will conflict with other instances of the foreach\_value macro in the same scope. If I could suffix the current line number to the iterator variable name, it would work, but I don't know how to do that.

You can do this using two loops. The first declares the iterator, with a name which is a function of the container variable (and you can make this uglier if you're worried about conflicts with your own code). The second declares the value variable.

```
#define ci(container) container ## iter
#define foreach_value(var, container) \
  for (typeof((container).begin()) ci(container) = container.begin(); \
       ci(container) != container.end(); ) \
    for (typeof(ci(container)->second)* var = &ci(container)->second; \
         ci(container) != container.end(); \
         (++ci(container) != container.end()) ? \
           (var = &ci(container)->second) : var)
```

By using the same loop termination condition, the outer loop only happens once (and if you're lucky, gets optimized away). Also, you avoid calling ->second on the iterator if the map is empty. That's the same reason for the ternary operator in the increment of the inner loop; at the end, we just leave var at the last value, since it won't be referenced again. You could inline ci(container), but I think it makes the macro more readable.
I have read that using database keys in a URL is a bad thing to do. For instance, my table has 3 fields: `ID:int`, `Title:nvarchar(5)`, `Description:Text`. I want to create a page that displays a record, something like:

```
http://server/viewitem.aspx?id=1234
```

1. First off, could someone elaborate on why this is a bad thing to do?
2. And secondly, what are some ways to work around using primary keys in a URL?

I think it's perfectly reasonable to use primary keys in the URL. Some considerations, however:

1) Avoid SQL injection attacks. If you just blindly accept the value of the id URL parameter and pass it into the DB, you are at risk. Make sure you sanitise the input so that it matches whatever format of key you have (e.g. strip any non-numeric characters).

2) SEO. It helps if your URL contains some context about the item (e.g. "big fluffy rabbit" rather than 1234). This helps search engines see that your page is relevant. It can also be useful for your users (I can tell from my browser history which record is which without having to remember a number).
How can I programmatically make a query in MS Access default to landscape when printed, specifically when viewing it as a PivotChart? I'm currently attempting this in MS Access 2003, but would like to see a solution for any version.

The following function should do the trick:

```
Function SetLandscape()
    Application.Printer.Orientation = acPRORLandscape
End Function
```

You should be able to call this from the autoexec macro to ensure it always runs.
> Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence **plan to throw one away; you will, anyhow.**
>
> -- Fred Brooks, [*The Mythical Man-Month*](http://en.wikipedia.org/wiki/The_Mythical_Man-Month) [Emphasis mine]

Build one to throw away. That's what they told me. Then they told me that we're all [agile](http://agilemanifesto.org/) now, so we should [Refactor Mercilessly](http://c2.com/cgi/wiki?RefactorMercilessly). What gives? Is it **always** better to refactor my way out of trouble? If not, can anyone suggest a rule of thumb to help me decide when to stick with it, and when to give up and start over?

If you're doing test-driven development, you can refactor your way out of almost any trouble. I've changed major design decisions without much trouble, and rescued decade-old codebases. The only exception is when you've discovered that your architecture is completely wrong from beginning to end, for example if you wrote your app using threads but discovered that you actually wanted a bunch of asynchronous state machines. At that point, go ahead and throw away the first draft.
How do you use `gen_udp` in Erlang to do [multicasting](https://en.wikipedia.org/wiki/Multicast)? I know it's in the code; there is just no documentation behind it. Sending out data is obvious and simple. I was wondering how to add memberships. Not only adding memberships at start-up, but adding memberships while running would be useful too.

Here is example code on how to listen in on Bonjour / Zeroconf traffic.

```
-module(zcclient).

-export([open/2,start/0]).
-export([stop/1,receiver/0]).

open(Addr,Port) ->
   {ok,S} = gen_udp:open(Port,[{reuseaddr,true}, {ip,Addr}, {multicast_ttl,4}, {multicast_loop,false}, binary]),
   inet:setopts(S,[{add_membership,{Addr,{0,0,0,0}}}]),
   S.

close(S) -> gen_udp:close(S).

start() ->
   S=open({224,0,0,251},5353),
   Pid=spawn(?MODULE,receiver,[]),
   gen_udp:controlling_process(S,Pid),
   {S,Pid}.

stop({S,Pid}) ->
   close(S),
   Pid ! stop.

receiver() ->
   receive
       {udp, _Socket, IP, InPortNo, Packet} ->
           io:format("~n~nFrom: ~p~nPort: ~p~nData: ~p~n",[IP,InPortNo,inet_dns:decode(Packet)]),
           receiver();
       stop -> true;
       AnythingElse ->
           io:format("RECEIVED: ~p~n",[AnythingElse]),
           receiver()
   end.
```
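Note that the `{add_membership, {MulticastAddr, InterfaceAddr}}` option maps onto the standard `IP_ADD_MEMBERSHIP` socket option, so memberships can be joined (or dropped with `{drop_membership, ...}`) at any point after the socket is open, not only at start-up. A rough sketch of the same join-at-runtime idea using Python's stdlib `socket` module (the mDNS group/port are just the example values from the Erlang code above):

```python
import socket
import struct

MCAST_GRP = "224.0.0.251"   # example: the mDNS/Bonjour group
MCAST_PORT = 5353

# Open the socket first...
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# ...then join the group whenever you like: the membership request is
# the group address plus a local interface address (INADDR_ANY here),
# packed as an ip_mreq struct.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GRP),
                   socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError as exc:
    # joining can fail on hosts without a multicast-capable interface
    print("join failed:", exc)

# leaving the group later is the symmetric call:
# sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
```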
I am new to developing for Office Forms Server / MOSS 2007. I have to choose between designing my web-based forms and writing code for them in Visual Studio Tools for Applications (aka VSTA) or Visual Studio Tools for Office (aka VSTO). VSTA is included free as part of the license for InfoPath 2007; VSTO, also free, requires Visual Studio 2005 / 2008. I have licenses for both of the products and cannot easily decide what the pros and cons of each IDE might be. This explains it better than I can: <http://blogs.msdn.com/andreww/archive/2006/02/21/536179.aspx> Given the fact that the license for VSTA comes
with InfoPath, I'd probably run with that.
I have a few scripts on a site I recently started maintaining. I get those Object Not Found errors in IE6 (which Firefox fails to report in its Error Console?). What's the best way to debug these: any good cross-browser-compatible IDEs, or JavaScript debugging libraries of some sort? There's no cross-browser JS debugger that I know of (because most browsers use different JS engines). For Firefox, I'd definitely recommend Firebug (<http://www.getfirebug.com>). For IE, the best I've found is Microsoft Script Debugger (<http://www.microsoft.com/downloads/details.aspx?familyid=2f465be0-94fd-4569-b3c4-dffdf19ccd99&displaylang=en>). If you have Office installed, you may also have Microsoft Script Editor installed. To use either of these, you need to
turn on script debugging in IE. (uncheck Tools -> Internet Options -> Advanced -> Disable Script debugging).
I'm working on a regular expression in a `.NET` project to get a specific tag. I would like to match the entire DIV tag and its contents:

```
<html>
  <head><title>Test</title></head>
  <body>
    <p>The first paragraph.</p>
    <div id='super_special'>
      <p>The Store paragraph</p>
    </div>
  </body>
</html>
```

Code:

```
Regex re = new Regex("(<div id='super_special'>.*?</div>)", RegexOptions.Multiline);

if (re.IsMatch(test))
    Console.WriteLine("it matches");
else
    Console.WriteLine("no match");
```

I want to match this:

```
<div id="super_special">
  <p>Anything could go in here...doesn't matter. Let's get it all</p>
</div>
```

I thought `.` was supposed to match all characters, but it seems to be having trouble with the carriage returns. What is my regex missing? Thanks.

Out-of-the-box, without special modifiers, most regex implementations don't go beyond the end-of-line to match text. You should look in the documentation of the regex engine you're using for such a modifier (in .NET it is `RegexOptions.Singleline` that makes `.` match newlines; `RegexOptions.Multiline` only changes the meaning of `^` and `$`).

I have one other piece of advice: beware of greed! Traditionally, regexes **are** greedy, which means that a greedy pattern would probably match this:

```
<div id="super_special">
 I'm the wanted div!
</div>
<div id="not_special">
 I'm not wanted, but I've been caught too :(
</div>
```

You should use a "not-greedy" modifier, so that your regex stops matching text at the **first** occurrence of `</div>`, not at the **last** one.

Also, as others have said, consider using an HTML parser instead of regexes. It will save you a lot of headache.

*Edit: even a non-greedy regex wouldn't work as expected if `<div>`s are nested; another reason to consider using an HTML parser.*
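The answer's two points (a dot-matches-newline modifier, plus greediness) can be reproduced in any engine; here they are in Python's `re` module, where the modifier is spelled `re.DOTALL`, the analogue of .NET's `RegexOptions.Singleline`:

```python
import re

html = """<div id='super_special'>
<p>The Store paragraph</p>
</div>
<div id='not_special'>
<p>Unwanted</p>
</div>"""

# Without DOTALL, '.' stops at newlines, so the pattern can never
# span from the opening tag to a </div> on another line.
print(bool(re.search(r"<div id='super_special'>.*?</div>", html)))  # False

# With DOTALL and a lazy quantifier, matching stops at the FIRST </div>.
m = re.search(r"<div id='super_special'>.*?</div>", html, re.DOTALL)
print(m.group(0))

# A greedy '.*' swallows everything up to the LAST </div>,
# capturing the unwanted sibling div as well.
g = re.search(r"<div id='super_special'>.*</div>", html, re.DOTALL)
print(len(g.group(0)) > len(m.group(0)))  # True
```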
I am looking for a best practice for End to End Authentication for internal Web Applications to the Database layer. The most common scenario I have seen is to use a single SQL account with the permissions set to what is required by the application. This account is used by all application calls. Then when people require access over the database via query tools or such a separate Group is created with the query access and people are given access to that group. The other scenario I have seen is to use complete Windows Authentication End to End. So the users themselves
are added to groups which have all the permissions set, so the user is able to update and change outside the parameters of the application. This normally involves securing people down to the appropriate stored procedures so they aren't updating the tables directly.

The first scenario seems relatively easy to maintain, but raises a concern: if there is a security hole in the application, then the whole database is compromised. The second scenario seems more secure, but has the opposite concern of having too much business logic in stored procedures on the database. This seems to limit the use of some really cool technologies like NHibernate and LINQ. However, in this day and age where people can use data in so many different ways we don't foresee (e.g. mash-ups), is this the best approach?

Dale - That's it exactly. If you want to provide access to the underlying data store to those users, then do it via services. And in my experience, it is those experienced computer users coming out of Uni/College that damage things the most. As the saying goes, they know just enough to be dangerous.

If they want to automate part of their job, and they can demonstrate that they have the requisite knowledge, then go ahead, grant their **domain account** access to the backend. That way anything they do via their little VBA automation is tied to their account and you know exactly who to go look at when the data gets hosed.

My basic point is that the database is the proverbial holy grail of the application. You want as few fingers in that particular pie as possible. As a consultant, whenever I hear that someone has allowed normal users into the database, my eyes light up because I know it's going to end up being a big
paycheck for me when I get called to fix it.
Briefly: Does anyone know of a GUI for gdb that brings it on par or close to the feature set you get in the more recent version of Visual C++? In detail: As someone who has spent a lot of time programming in Windows, one of the larger stumbling blocks I've found whenever I have to code C++ in Linux is that debugging anything using commandline gdb takes me several times longer than it does in Visual Studio, and it does not seem to be getting better with practice. Some things are just easier or faster to express graphically. Specifically, I'm looking
for a GUI that:

* Handles all the basics like stepping over & into code, watch variables and breakpoints
* Understands and can display the contents of complex & nested C++ data types
* Doesn't get confused by, and preferably can intelligently step through, templated code and data structures while displaying relevant information such as the parameter types
* Can handle threaded applications and switch between different threads to step through or view the state of
* Can handle attaching to an already-started process or reading a core dump, in addition to starting the program up in gdb

If such a program does not exist, then
I'd like to hear about experiences people have had with programs that meet at least some of the bullet points. Does anyone have any recommendations? **Edit:** Listing out the possibilities is great, and I'll take what I can get, but it would be even more helpful if you could include in your responses: (a) Whether or not you've actually used this GUI and if so, what positive/negative feedback you have about it. (b) If you know, which of the above-mentioned features are/aren't supported Lists are easy to come by, sites like this are great because you can get an idea
of people's personal experiences with applications. You won't find *anything* overlaying GDB which can compete with the raw power of the Visual Studio debugger. It's just too powerful, and it's just too well integrated inside the IDE. For a Linux alternative, try DDD if free software is your thing.
Kind of a special case problem: * I start a process with `System.Diagnostics.Process.Start(..)` * The process opens a splash screen -- this splash screen becomes the main window. * The splash screen closes and the 'real' UI is shown. The main window (splash screen) is now invalid. * I still have the Process object, and I can query its handle, module, etc. But the main window handle is now invalid. I need to get the process's UI (or UI handle) at this point. Assume I cannot change the behavior of the process to make this any easier (or saner). I have looked around online but I'll
admit I didn't look for more than an hour. Seemed like it should be somewhat trivial :-( If you don't mind using the Windows API, you could call `EnumWindows` with an [`EnumWindowsProc`](http://msdn.microsoft.com/en-gb/library/windows/desktop/ms633498.aspx) callback, and check each of the handles it turns up using [`GetWindowThreadProcessId`](http://msdn.microsoft.com/en-gb/library/windows/desktop/ms633522.aspx) (to see that it belongs to your process), and then maybe `IsWindowVisible`, `GetWindowText` and `GetWindowTextLength` to determine which `hWnd` in your process is the one you want. Though if you haven't used those functions before that approach will be a real pain, so hopefully there's a simpler way.
So, I'm trying to write some code that utilizes Nvidia's CUDA architecture. I noticed that copying to and from the device was really hurting my overall performance, so now I am trying to move a large amount of data onto the device.

As this data is used in numerous functions, I would like it to be global. Yes, I can pass pointers around, but I would really like to know how to work with globals in this instance.

So, I have device functions that want to access a device allocated array. Ideally, I could do something like:

```
__device__ float* global_data;

main()
{
  cudaMalloc(global_data);
  kernel1<<<blah>>>(blah); //access global data
  kernel2<<<blah>>>(blah); //access global data again
}
```

However, I haven't figured out how to create a dynamic array. I figured out a work around by declaring the array as follows:

```
__device__ float global_data[REALLY_LARGE_NUMBER];
```

And while that doesn't require a cudaMalloc call, I would prefer the dynamic allocation approach.

Something like this should probably work.

```
#include <algorithm>

#define NDEBUG
#define CUT_CHECK_ERROR(errorMessage) do {                                 \
        cudaThreadSynchronize();                                           \
        cudaError_t err = cudaGetLastError();                              \
        if( cudaSuccess != err) {                                          \
            fprintf(stderr, "Cuda error: %s in file '%s' in line %i : %s.\n",    \
                    errorMessage, __FILE__, __LINE__, cudaGetErrorString( err) );\
            exit(EXIT_FAILURE);                                            \
        }                                                                  \
    } while (0)


__device__ float *devPtr;

__global__
void kernel1(float *some_neat_data)
{
    devPtr = some_neat_data;
}

__global__
void kernel2(void)
{
    devPtr[threadIdx.x] *= .3f;
}


int main(int argc, char *argv[])
{
    float* otherDevPtr;
    cudaMalloc((void**)&otherDevPtr, 256 * sizeof(*otherDevPtr));
    cudaMemset(otherDevPtr, 0, 256 * sizeof(*otherDevPtr));

    kernel1<<<1,128>>>(otherDevPtr);
    CUT_CHECK_ERROR("kernel1");

    kernel2<<<1,128>>>();
    CUT_CHECK_ERROR("kernel2");

    return 0;
}
```

Give it a whirl.
I'm building a function to extend the `Enum.Parse` concept that

* Allows a default value to be parsed in case an Enum value is not found
* Is case insensitive

So I wrote the following:

```
public static T GetEnumFromString<T>(string value, T defaultValue) where T : Enum
{
    if (string.IsNullOrEmpty(value)) return defaultValue;
    foreach (T item in Enum.GetValues(typeof(T)))
    {
        if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item;
    }
    return defaultValue;
}
```

I am getting the error *Constraint cannot be special class `System.Enum`*.

Fair enough, but is there a workaround to allow a generic Enum, or am I going to have to mimic the `Parse` function and pass a type as an attribute, which forces the ugly boxing requirement to your code?

**EDIT** All suggestions below have been greatly appreciated, thanks. Have settled on (I've left the loop to maintain case insensitivity - I am using this when parsing XML):

```
public static class EnumUtils
{
    public static T ParseEnum<T>(string value, T defaultValue) where T : struct, IConvertible
    {
        if (!typeof(T).IsEnum) throw new ArgumentException("T must be an enumerated type");
        if (string.IsNullOrEmpty(value)) return defaultValue;

        foreach (T item in Enum.GetValues(typeof(T)))
        {
            if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item;
        }
        return defaultValue;
    }
}
```

**EDIT:** (16th Feb 2015) Christopher Currens has posted [a compiler enforced type-safe generic solution in MSIL or F#](https://stackoverflow.com/a/8086788) below, which is well worth a look, and an upvote. I will remove this edit if the solution bubbles further up
the page.

**EDIT 2:** (13th Apr 2021) As this has now been addressed, and supported, since C# 7.3, I have changed the accepted answer, though full perusal of the top answers is worth it for academic, and historical, interest :)

This feature is finally supported in C# 7.3!
--------------------------------------------

The following snippet (from [the dotnet samples](https://github.com/dotnet/samples/blob/3ee82879284e3f4755251fd33c3b3e533f7b3485/snippets/csharp/keywords/GenericWhereConstraints.cs#L180-L190)) demonstrates how:

```
public static Dictionary<int, string> EnumNamedValues<T>() where T : System.Enum
{
    var result = new Dictionary<int, string>();
    var values = Enum.GetValues(typeof(T));

    foreach (int item in values)
        result.Add(item, Enum.GetName(typeof(T), item));
    return result;
}
```

Be sure to set your language version in your C# project to version 7.3.

---

Original Answer below:

I'm late to the game, but I took it as a challenge to see how it could be done. It's not possible in C# (or VB.NET, but scroll down for F#), but *is possible* in MSIL. I wrote this little....thing

```
// license: http://www.apache.org/licenses/LICENSE-2.0.html
.assembly MyThing{}
.class public abstract sealed MyThing.Thing
       extends [mscorlib]System.Object
{
  .method public static !!T GetEnumFromString<valuetype .ctor ([mscorlib]System.Enum) T>(string strValue,
                                                                                         !!T defaultValue) cil managed
  {
    .maxstack  2
    .locals init ([0] !!T temp,
                  [1] !!T return_value,
                  [2] class [mscorlib]System.Collections.IEnumerator enumerator,
                  [3] class [mscorlib]System.IDisposable disposer)
    // if(string.IsNullOrEmpty(strValue)) return defaultValue;
    ldarg strValue
    call bool [mscorlib]System.String::IsNullOrEmpty(string)
    brfalse.s HASVALUE
    br RETURNDEF          // return default if empty

    // foreach (T item in Enum.GetValues(typeof(T)))
  HASVALUE:
    // Enum.GetValues.GetEnumerator()
    ldtoken !!T
    call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
    call class [mscorlib]System.Array [mscorlib]System.Enum::GetValues(class [mscorlib]System.Type)
    callvirt instance class [mscorlib]System.Collections.IEnumerator [mscorlib]System.Array::GetEnumerator()
    stloc enumerator
    .try
    {
      CONDITION:
        ldloc enumerator
        callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
        brfalse.s LEAVE

      STATEMENTS:
        // T item = (T)Enumerator.Current
        ldloc enumerator
        callvirt instance object [mscorlib]System.Collections.IEnumerator::get_Current()
        unbox.any !!T
        stloc temp
        ldloca.s temp
        constrained. !!T

        // if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item;
        callvirt instance string [mscorlib]System.Object::ToString()
        callvirt instance string [mscorlib]System.String::ToLower()
        ldarg strValue
        callvirt instance string [mscorlib]System.String::Trim()
        callvirt instance string [mscorlib]System.String::ToLower()
        callvirt instance bool [mscorlib]System.String::Equals(string)
        brfalse.s CONDITION
        ldloc temp
        stloc return_value
        leave.s RETURNVAL

      LEAVE:
        leave.s RETURNDEF
    }
    finally
    {
        // ArrayList's Enumerator may or may not inherit from IDisposable
        ldloc enumerator
        isinst [mscorlib]System.IDisposable
        stloc.s disposer
        ldloc.s disposer
        ldnull
        ceq
        brtrue.s LEAVEFINALLY
        ldloc.s disposer
        callvirt instance void [mscorlib]System.IDisposable::Dispose()
      LEAVEFINALLY:
        endfinally
    }

  RETURNDEF:
    ldarg defaultValue
    stloc return_value

  RETURNVAL:
    ldloc return_value
    ret
  }
}
```

Which generates a function that **would** look like this, if it were valid C#:

```
T GetEnumFromString<T>(string valueString, T defaultValue) where T : Enum
```

Then with the following C# code:

```
using MyThing;
// stuff...
private enum MyEnum { Yes, No, Okay }

static void Main(string[] args)
{
    Thing.GetEnumFromString("No", MyEnum.Yes);        // returns MyEnum.No
    Thing.GetEnumFromString("Invalid", MyEnum.Okay);  // returns MyEnum.Okay
    Thing.GetEnumFromString("AnotherInvalid", 0);     // compiler error, not an Enum
}
```

Unfortunately, this means having this part of your code written in MSIL instead of C#, with the only added benefit being that you're able to constrain this method by `System.Enum`. It's also kind of a bummer, because it gets compiled into a separate assembly. However, it doesn't mean you have to deploy it that way.

By removing the line `.assembly MyThing{}` and invoking ilasm as follows:

```
ilasm.exe /DLL /OUTPUT=MyThing.netmodule
```

you get a netmodule instead of an assembly.

Unfortunately, VS2010 (and earlier, obviously) does not support adding netmodule references, which means you'd have to leave it in 2 separate assemblies when you're debugging. The only way you can add them as part of your assembly would be to run csc.exe yourself using the `/addmodule:{files}` command line argument. It wouldn't be *too* painful in an MSBuild script. Of course, if you're brave or stupid, you can run csc yourself manually each time. And it certainly gets more complicated as multiple assemblies need access to it.

So, it CAN be done in .Net. Is it worth the extra effort? Um, well, I guess I'll let you decide on that one.

---

### F# Solution as alternative

Extra Credit: It turns out that a generic restriction on `enum` is possible in at least one other .NET language besides MSIL: F#.

```ml
type MyThing =
    static member GetEnumFromString<'T when 'T :> Enum> str defaultValue: 'T =
        /// protect for null (only required in interop with C#)
        let str = if isNull str then String.Empty else str

        Enum.GetValues(typedefof<'T>)
        |> Seq.cast<_>
        |> Seq.tryFind(fun v -> String.Compare(v.ToString(), str.Trim(), true) = 0)
        |> function Some x -> x | None -> defaultValue
```

This one is easier to maintain since it's a well-known language with full Visual Studio IDE support, but you still need a separate project in your solution for it. However, it naturally produces considerably different IL (the code *is* very different) and it relies on the `FSharp.Core` library, which, just like any other external library, needs to become part of your distribution.

Here's how you can use it (basically the same as the MSIL solution), and to show that it correctly fails on otherwise synonymous structs:

```
// works, result is inferred to have type StringComparison
var result = MyThing.GetEnumFromString("OrdinalIgnoreCase", StringComparison.Ordinal);

// type restriction is recognized by C#, this fails at compile time
var result = MyThing.GetEnumFromString("OrdinalIgnoreCase", 42);
```
When working with tables in Oracle, how do you know when you are setting up a good index versus a bad index? This depends on what you mean by 'good' and 'bad'. Basically you need to realise that every index you add will increase performance on any search by that column (so adding an index to the 'lastname' column of a person table will increase performance on queries that have "where lastname = " in them) but decrease write performance across the whole table. The reason for this is when you add or update a row, it must add-to or update both
the table itself and every index that row is a member of. So if you have five indexes on a table, each insert must write to six places (five indexes plus the table itself), and an update may touch up to six places in the worst case. Index creation is therefore a balancing act between query speed and write speed. In some cases, such as a datamart that is only loaded with data once a week in an overnight job but queried thousands of times daily, it makes a great deal of sense to overload with indexes and speed
the queries up as much as possible. In the case of online transaction processing systems, however, you want to find a balance between the two. In short: add indexes to columns that are used a lot in select queries, starting with the most-used columns, but avoid adding too many. After that it's a matter of load testing to see how the performance reacts under production conditions, and a lot of tweaking to find an acceptable balance.
I am looking to have 4 virtual servers (various Linux flavors) running on a Windows Server 2003 R2 64-bit Edition server located at a datacenter. I can also purchase a 2008 server or 32-bit 2k3 if needed. They would each have their own IP address for networking so that they could be publicly accessed. I do not know much about VPS software but have worked with it before.

[Virtual Server 2005 R2 SP1](http://technet.microsoft.com/en-gb/bb738033.aspx) is free (registration required) and supports x64 hosts. It does not support x64 guests. Windows Server 2008 includes Hyper-V, Microsoft's new virtualization technology, which supports x64 guests and
multiple virtual processors. There are editions without Hyper-V as well, for marginally less money, to satisfy the anti-trust authorities. The Hyper-V [update](http://support.microsoft.com/?kbid=950050) has to be downloaded as it was completed after the rest of Windows Server 2008 was released. [VMware Server](http://www.vmware.com/products/server/) is also free. It supports (experimentally) up to 2 virtual CPUs. To get best performance you need drivers and patches in the virtual machine which work well with the virtualization environment. In Virtual Server these are called Additions, in Hyper-V they are Integration Components, and for VMware, VMware Tools. Because of the nature of kernel binary compatibility (there are no guarantees),
only specific distributions are generally supported.

* [Download Virtual Server Additions for Linux](http://technet.microsoft.com/en-us/virtualserver/bb676671.aspx)
* [Download Hyper-V Linux Integration Components](https://connect.microsoft.com/SelfNomination.aspx?ProgramID=1863&pageType=1&SiteID=495)
If I were to want to create a nice-looking widget that stays running in the background with a small memory footprint, where would I start building the Windows application? Its goal is to keep an updated list of items from a web service, similar to an RSS reader. Note: the data layer will be connecting through REST, for which I already have a C# DLL, which I assume will not affect the footprint too much. Obviously I would like to use a nice WPF project, but the ~60,000k initial size is too big.

* C# Forms application is about ~20,000k
* C++ Forms ~16,000k
* CLR or MFC much smaller, under 5

Is there a way to strip down the WPF or Forms? And if I'm stuck using CLR or MFC, what would be the easiest way to make it pretty? (My experience with MFC is making very awkward forms.)

*Update: Clarification* The above sizes are the memory being used as the process is run, not the executable.

re:

> Update: Clarification. The above sizes are the memory being used as the process is run, not the executable.

Okay, when you run a tiny C# Win Forms app, the smallest amount of RAM that is reserved for it is around
2 meg, maybe 4 meg. This is just a working set that it creates. It's not actively using all of this memory, or anything like it. It just reserves that much space up front so it doesn't have to do long/slow/expensive requests for more memory later as needed. Reserving a smaller size upfront is likely to be a false optimization. (You can reduce the working set with a P/Invoke call if it really matters. See [pinvoke for 'set process working set size'](http://www.pinvoke.net/default.aspx/kernel32.SetProcessWorkingSetSize).)
I've drawn an ellipse in the XZ plane, and set my perspective slightly up on the Y-axis and back on the Z, looking at the center of the ellipse from a 45-degree angle, using gluPerspective() to set my viewing frustum. [![ellipse](https://farm4.static.flickr.com/3153/2863703051_a768ed86a9_m.jpg)](http://www.flickr.com/photos/rampion/2863703051/ "ellipse by rampion, on Flickr") Unrotated, the major axis of the ellipse spans the width of my viewport. When I rotate 90 degrees about my line of sight, the major axis of the ellipse now spans the height of my viewport, thus deforming the ellipse (in this case, making it appear less eccentric). [![rotated ellipse](https://farm4.static.flickr.com/3187/2863703073_24c6549d4b_m.jpg)](http://www.flickr.com/photos/rampion/2863703073/ "rotated ellipse by rampion, on Flickr") What do I need to do
to prevent this deformation (or at least account for it), so that rotation about the line of sight preserves the perceived major axis of the ellipse (in this case, causing it to extend beyond the viewport)?

It looks like you're using 1.0 as the aspect when you call gluPerspective(). You should use width/height. For example, if your viewport is 640x480, you would use 1.33333 as the aspect argument.
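A quick sketch of the fix, as plain Python arithmetic independent of any OpenGL binding (the helper name is made up): derive the aspect from the actual viewport dimensions rather than hard-coding 1.0.

```python
def perspective_aspect(viewport_width, viewport_height):
    # The aspect argument to gluPerspective should be width / height,
    # so the projection is not squashed to the viewport's shape.
    return viewport_width / viewport_height

# e.g. gluPerspective(fovy, perspective_aspect(640, 480), z_near, z_far)
print(round(perspective_aspect(640, 480), 5))  # → 1.33333
```

With the correct aspect, rotating the scene about the line of sight no longer changes the apparent shape of the ellipse, only its orientation.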
In a multitasking operating system context, sometimes you hear the term round-robin scheduling. What does it refer to? What other kinds of scheduling are there?

**Round Robin Scheduling**

If you are a host in a party of 100 guests, round-robin scheduling would mean that you spend 1 minute (a fixed amount) per guest. You go through the guests one by one, and after 100 minutes you would have spent 1 minute with each guest. More on [Wikipedia](http://en.wikipedia.org/wiki/Round-robin_scheduling "Round-robin scheduling"). There are many other types of scheduling, such as priority-based (i.e. most important people first), first-come-first-serve, earliest-deadline-first (i.e. person leaving earliest first), etc. You can
start off by googling for scheduling algorithms or check out [scheduling at Wikipedia](http://en.wikipedia.org/wiki/Scheduling_algorithm "scheduling algorithms").
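The party analogy above can be sketched as a small simulation (the job names and durations are made up for illustration): each job gets a fixed time slice, and any job that isn't finished goes to the back of the queue, which is why short jobs don't get stuck behind long ones.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Run jobs (name -> required time) with a fixed time slice;
    return the clock time at which each job completes."""
    queue = deque(jobs.items())
    clock = 0
    finished = {}
    while queue:
        name, remaining = queue.popleft()
        step = min(quantum, remaining)
        clock += step
        remaining -= step
        if remaining:
            queue.append((name, remaining))  # not done: back of the line
        else:
            finished[name] = clock
    return finished

# Hypothetical workloads: with a quantum of 1, "short" completes at t=4
# instead of waiting until t=7 as it would under first-come-first-serve.
print(round_robin({"long": 5, "short": 2}, quantum=1))
```

A priority-based or earliest-deadline-first scheduler would differ only in how the next job is picked from the queue.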
I think it's associating my Web Service's CS files with the related ASMX files. But whatever's happening, I can't double-click to open the CS files - I have to "View Code" or it opens in the designer. Does anyone know how to turn off this automatic behavior? I just want to edit the code!

Try right-clicking, select "Open With...", mark "CSharp Editor" and select "Set as Default". That works for avoiding the WinForms designer.
I need to add role-based permissions to my Rails application, and am wondering what the best plugins out there are to look into. I am currently using the RESTful authentication plugin to handle user authentication. Why is the plugin you suggest better than the other ones out there?

I use, and really like, role\_requirement: <http://code.google.com/p/rolerequirement/>
I've set up Passenger in development (Mac OS X) and it works flawlessly. The only problem came later: now I have a custom `GEM_HOME` path and ImageMagick binaries installed in `"/usr/local"`. I can put them in one of the shell rc files that get sourced, and this solves the environment variables for processes spawned from the console; but what about Passenger? The same application cannot find my gems when run this way.

I know of two solutions. The first (documented [here](http://www.viget.com/extend/rubyinline-in-shared-rails-environments/)) is essentially the same as manveru's: set the ENV variable directly in your code. The second is to create a wrapper around
the Ruby interpreter that Passenger uses, and is documented [here](http://blog.rayapps.com/2008/05/21/using-mod_rails-with-rails-applications-on-oracle/) (look for passenger\_with\_ruby). The gist is that you create (and point PassengerRuby in your Apache config to) /usr/bin/ruby\_with\_env, an executable file consisting of:

```
#!/bin/bash
export ENV_VAR=value
exec /usr/bin/ruby "$@"
```

Both work; the former approach is a little less hackish, I think.