I am trying to determine the most time-efficient algorithm to accomplish the task described below.

I have a set of records. For this set of records I have connection data which indicates how pairs of records from this set connect to one another. This basically represents an undirected graph, with the records being the vertices and the connection data the edges. All of the records in the set have connection information (i.e. no orphan records are present; each record in the set connects to one or more other records in the set).

I want to choose any two records from the set and be able to show all simple paths between the chosen records. By "simple paths" I mean the paths which do not have repeated records in the path (i.e. finite paths only).

Note: The two chosen records will always be different (i.e. the start and end vertex will never be the same; no cycles).

For example:

```
If I have the following records:
    A, B, C, D, E, F
and the following represents the connections:
    (A,B),(A,C),(B,A),(B,D),(B,E),(B,F),(C,A),(C,E),
    (C,F),(D,B),(E,C),(E,F),(F,B),(F,C),(F,E)

[where (A,B) means record A connects to record B]
```

If I chose B as my starting record and E as my ending record, I would want to find all simple paths through the record connections that would connect record B to record E.

```
All paths connecting B to E:
    B->E
    B->F->E
    B->F->C->E
    B->A->C->E
    B->A->C->F->E
```

This is an example; in practice I may have sets containing hundreds of thousands of records.

It appears that this can be accomplished with a depth-first search of the graph. **The depth-first search will find all non-cyclic (simple) paths between two nodes.** The traversal itself is fast, and the graph data structure is sparse so it only uses as much memory as it needs to. Bear in mind, though, that the number of simple paths can itself grow exponentially with graph size, so enumerating all of them over hundreds of thousands of records may be expensive no matter which algorithm is used. I noticed that the graph you specified above has only one edge that is directional, (B,E). Was this a typo, or is it really a directed graph? This solution works regardless. Sorry I was unable to do it in C; I'm a bit weak in that area.
I expect that you will be able to translate this Java code without too much trouble though.

**Graph.java:**

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.LinkedList;
import java.util.Map;
import java.util.Set;

public class Graph {
    private Map<String, LinkedHashSet<String>> map = new HashMap<>();

    public void addEdge(String node1, String node2) {
        LinkedHashSet<String> adjacent = map.get(node1);
        if (adjacent == null) {
            adjacent = new LinkedHashSet<>();
            map.put(node1, adjacent);
        }
        adjacent.add(node2);
    }

    public void addTwoWayVertex(String node1, String node2) {
        addEdge(node1, node2);
        addEdge(node2, node1);
    }

    public boolean isConnected(String node1, String node2) {
        Set<String> adjacent = map.get(node1);
        if (adjacent == null) {
            return false;
        }
        return adjacent.contains(node2);
    }

    public LinkedList<String> adjacentNodes(String last) {
        LinkedHashSet<String> adjacent = map.get(last);
        if (adjacent == null) {
            return new LinkedList<>();
        }
        return new LinkedList<>(adjacent);
    }
}
```

**Search.java:**

```java
import java.util.LinkedList;

public class Search {

    private static final String START = "B";
    private static final String END = "E";

    public static void main(String[] args) {
        // this graph is directional
        Graph graph = new Graph();
        graph.addEdge("A", "B");
        graph.addEdge("A", "C");
        graph.addEdge("B", "A");
        graph.addEdge("B", "D");
        graph.addEdge("B", "E"); // this is the only one-way connection
        graph.addEdge("B", "F");
        graph.addEdge("C", "A");
        graph.addEdge("C", "E");
        graph.addEdge("C", "F");
        graph.addEdge("D", "B");
        graph.addEdge("E", "C");
        graph.addEdge("E", "F");
        graph.addEdge("F", "B");
        graph.addEdge("F", "C");
        graph.addEdge("F", "E");

        LinkedList<String> visited = new LinkedList<>();
        visited.add(START);
        new Search().depthFirst(graph, visited);
    }

    private void depthFirst(Graph graph, LinkedList<String> visited) {
        LinkedList<String> nodes = graph.adjacentNodes(visited.getLast());
        // examine adjacent nodes
        for (String node : nodes) {
            if (visited.contains(node)) {
                continue;
            }
            if (node.equals(END)) {
                visited.add(node);
                printPath(visited);
                visited.removeLast();
                break;
            }
        }
        for (String node : nodes) {
            if (visited.contains(node) || node.equals(END)) {
                continue;
            }
            visited.addLast(node);
            depthFirst(graph, visited);
            visited.removeLast();
        }
    }

    private void printPath(LinkedList<String> visited) {
        for (String node : visited) {
            System.out.print(node);
            System.out.print(" ");
        }
        System.out.println();
    }
}
```

Program Output:

```
B E
B A C E
B A C F E
B F E
B F C E
```
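For readers translating away from Java, here is a minimal Python sketch of the same depth-first enumeration. The dictionary-based `graph` and the function name `all_simple_paths` are illustrative only; it finds the same five paths as the Java version, possibly in a different order.

```python
def all_simple_paths(graph, start, end, path=None):
    """Yield every path from start to end that repeats no vertex."""
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for neighbour in graph.get(start, []):
        if neighbour not in path:  # skip vertices already on the current path
            yield from all_simple_paths(graph, neighbour, end, path)

# adjacency lists in the same insertion order as the Java example
graph = {
    "A": ["B", "C"], "B": ["A", "D", "E", "F"], "C": ["A", "E", "F"],
    "D": ["B"], "E": ["C", "F"], "F": ["B", "C", "E"],
}
paths = list(all_simple_paths(graph, "B", "E"))
```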
The Recent Projects panel on the Start Page of VS2008 Professional doesn't appear to work, and constantly remains empty. I've noticed this on 3 of our developers' VS2008 installations; in fact, all the installations that weren't upgraded from 2005 but installed from scratch. I generally treated this as a bit of a curiosity, but now that I have a new laptop and a fresh install of VS2008, it's also happening to me, and I've upgraded the phenomenon from a curio to an annoyance. Does anyone know if this is a bug, or if there is a setting I'm missing somewhere? Thanks

**EDIT** Thanks, but Tools | Options | Environment | General | "items shown in recently used lists" was and is set to 6 by default.

Finally worked it out! The Recent Projects list is driven by (or at least shares a 'Show' flag with) the Recent Documents list in the Start Menu. For some reason our SOE has this hidden. Both of the following need to be set to 0:

HKEY\_CURRENT\_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoRecentDocsHistory
HKEY\_CURRENT\_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoRecentDocsMenu
I am facing a problem with .NET generics. What I want to do is store an array of generic types (GraphicsItem):

```
public class GraphicsItem<T>
{
    private T _item;

    public void Load(T item)
    {
        _item = item;
    }
}
```

How can I store such an open generic type in an array?

Implement a non-generic interface and use that:

```
public class GraphicsItem<T> : IGraphicsItem
{
    private T _item;

    public void Load(T item)
    {
        _item = item;
    }

    public void SomethingWhichIsNotGeneric(int i)
    {
        // Code goes here...
    }
}

public interface IGraphicsItem
{
    void SomethingWhichIsNotGeneric(int i);
}
```

Then use that interface as the item type in the list:

```
var values = new List<IGraphicsItem>();
```
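The same pattern can be sketched in Python with `typing` generics (class and method names here simply mirror the C# above and are illustrative): the generic class implements a non-generic base class, and the collection is typed against that base, so differently parameterised items can live side by side.

```python
from abc import ABC, abstractmethod
from typing import Generic, List, Optional, TypeVar

T = TypeVar("T")

class IGraphicsItem(ABC):
    """Non-generic interface: the part of the API that never mentions T."""
    @abstractmethod
    def something_which_is_not_generic(self, i: int) -> None: ...

class GraphicsItem(IGraphicsItem, Generic[T]):
    def __init__(self) -> None:
        self._item: Optional[T] = None

    def load(self, item: T) -> None:
        self._item = item

    def something_which_is_not_generic(self, i: int) -> None:
        print(f"called with {i}")

# The list is typed against the non-generic interface
values: List[IGraphicsItem] = [GraphicsItem[int](), GraphicsItem[str]()]
for v in values:
    v.something_which_is_not_generic(42)
```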
How can I have SQL repeat some set-based operation an arbitrary number of times without looping? How can I have SQL perform an operation against a range of numbers? I'm basically looking for a way to do a set-based for loop. I know I can just create a small table with integers in it, say from 1 to 1000, and then use it for range operations that are within that range. For example, if I had that table I could make a select to find the sum of numbers 100-200 like this:

```
select sum(n) from numbers where n between 100 and 200
```

Any ideas?
I'm kinda looking for something that works for T-SQL, but any platform would be okay.

[Edit] I have my own solution for this using SQL CLR which works great for MS SQL 2005 or 2008. [See below.](https://stackoverflow.com/questions/58429/sql-set-based-range#59657)

I think the very short answer to your question is to use WITH clauses to generate your own. Unfortunately, the big names in databases don't have built-in queryable number-range pseudo-tables, or, more generally, easy pure-SQL data generation features. Personally, I think this is a **huge** failing, because if they did it would be possible to move a lot of code that is currently locked up in procedural scripts (T-SQL, PL/SQL, etc.) into pure SQL, which has a number of benefits for performance and code complexity.

So anyway, it sounds like what you need in a general sense is the ability to generate data on the fly. Oracle and T-SQL both support a WITH clause that can be used to do this. They work a little differently in the different DBMSs, and MS calls them "common table expressions", but they are very similar in form. Using these with recursion, you can generate a sequence of numbers or text values fairly easily. Here is what it might look like...

In Oracle SQL:

```
WITH
  digits AS -- Limit recursion by just using it for digits.
    (SELECT LEVEL - 1 AS num
     FROM DUAL
     CONNECT BY LEVEL <= 10),
  numrange AS
    (SELECT ones.num + (tens.num * 10) + (hundreds.num * 100) AS num
     FROM digits ones
     CROSS JOIN digits tens
     CROSS JOIN digits hundreds
     WHERE hundreds.num IN (1, 2)) -- Use the WHERE clause to restrict each digit as needed.
SELECT
  -- Some columns and operations
FROM numrange
-- Join to other data if needed
```

This is admittedly quite verbose. Oracle's recursion functionality is limited: the syntax is clunky, it's not performant, and it is limited to 500 (I think) nested levels. This is why I chose to use recursion only for the first 10 digits, and then cross (cartesian) joins to combine them into actual numbers.

I haven't used SQL Server's Common Table Expressions myself, but since they allow self-reference, recursion is MUCH simpler than it is in Oracle. Whether performance is comparable, and what the nesting limits are, I don't know.

At any rate, recursion and the WITH clause are very useful tools in creating queries that require on-the-fly generated data sets. Then by querying this data set, doing operations on the values, you can get all sorts of different types of generated data: aggregations, duplications, combinations, permutations, and so on. You can even use such generated data to aid in rolling up or drilling down into other data.

**UPDATE:** I just want to add that, once you start working with data in this way, it opens your mind to new ways of thinking about SQL. It's not just a scripting language. It's a fairly robust, data-driven [declarative language](http://en.wikipedia.org/wiki/Declarative_programming_language). Sometimes it's a pain to use because for years it has suffered a dearth of enhancements to aid in reducing the redundancy needed for complex operations. But nonetheless it is very powerful, and a fairly intuitive way to work with data sets as both the target and the driver of your algorithms.
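As a small runnable illustration of the recursive-CTE approach, here is the 100-200 range from the question generated and summed on the fly. This uses SQLite via Python's stdlib purely because it is easy to test; the `WITH RECURSIVE` syntax is close to SQL Server's.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 100                               -- anchor member: start of the range
        UNION ALL
        SELECT n + 1 FROM numbers WHERE n < 200  -- recursive member: count up to 200
    )
    SELECT SUM(n) FROM numbers
""").fetchone()
print(row[0])  # 15150
```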
How do you randomly select a table row in T-SQL based on an applied weight for all candidate rows? For example, I have a set of rows in a table weighted at 50, 25, and 25 (which adds up to 100 but does not need to), and I want to select one of them randomly with a statistical outcome equivalent to the respective weight.

Dane's answer includes self joins in a way that introduces a square law: `(n*n/2)` rows after the join where there are n rows in the table. What would be more ideal is to be able to parse the table just once.

```
DECLARE @id int, @weight_sum int, @weight_point int
DECLARE @table TABLE (id int, weight int)

INSERT INTO @table(id, weight) VALUES(1, 50)
INSERT INTO @table(id, weight) VALUES(2, 25)
INSERT INTO @table(id, weight) VALUES(3, 25)

SELECT @weight_sum = SUM(weight)
FROM @table

SELECT @weight_point = FLOOR(((@weight_sum - 1) * RAND() + 1))

SELECT
    @id = CASE WHEN @weight_point < 0 THEN @id ELSE [table].id END,
    @weight_point = @weight_point - [table].weight
FROM @table [table]
ORDER BY [table].Weight DESC
```

This will go through the table, setting `@id` to each record's `id` value while at the same time decrementing `@weight_point`. Eventually, the `@weight_point` will go negative. This means that the `SUM` of all preceding weights is greater than the randomly chosen target value. This is the record we want, so from that point onwards we set `@id` to itself (ignoring any further IDs in the table).

This runs through the table just once, but does have to run through the entire table even if the chosen value is the first record. Because the average position is halfway through the table (and less if ordered by descending weight), writing a loop could possibly be faster... (especially if the weightings are in common groups):

```
DECLARE @id int,
    @weight_sum int,
    @weight_point int,
    @next_weight int,
    @row_count int

DECLARE @table TABLE (id int, weight int)

INSERT INTO @table(id, weight) VALUES(1, 50)
INSERT INTO @table(id, weight) VALUES(2, 25)
INSERT INTO @table(id, weight) VALUES(3, 25)

SELECT @weight_sum = SUM(weight)
FROM @table

SELECT @weight_point = ROUND(((@weight_sum - 1) * RAND() + 1), 0)

SELECT @next_weight = MAX(weight) FROM @table
SELECT @row_count = COUNT(*) FROM @table WHERE weight = @next_weight
SET @weight_point = @weight_point - (@next_weight * @row_count)

WHILE (@weight_point > 0)
BEGIN
    SELECT @next_weight = MAX(weight) FROM @table WHERE weight < @next_weight
    SELECT @row_count = COUNT(*) FROM @table WHERE weight = @next_weight
    SET @weight_point = @weight_point - (@next_weight * @row_count)
END

-- # Once the @weight_point is less than 0, we know that the randomly chosen record
-- # is in the group of records WHERE [table].weight = @next_weight

SELECT @row_count = FLOOR(((@row_count - 1) * RAND() + 1))

SELECT
    @id = CASE WHEN @row_count < 0 THEN @id ELSE [table].id END,
    @row_count = @row_count - 1
FROM @table [table]
WHERE [table].weight = @next_weight
ORDER BY [table].Weight DESC
```
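The same single-pass idea can be sketched in Python (the function name and `(id, weight)` row format are my own): walk the rows once, subtracting each weight from a random point in `[0, total)` until the point falls inside the current row's weight band.

```python
import random

def weighted_pick(rows):
    """rows: list of (id, weight); returns one id with probability weight/total."""
    total = sum(weight for _, weight in rows)
    point = random.uniform(0, total)
    for row_id, weight in rows:
        if point < weight:
            return row_id
        point -= weight
    return rows[-1][0]  # guard against floating-point edge cases

random.seed(0)  # seeded only to make the demonstration repeatable
rows = [(1, 50), (2, 25), (3, 25)]
draws = [weighted_pick(rows) for _ in range(20000)]
share_of_1 = draws.count(1) / len(draws)  # should be close to 0.5
```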
How do I unit test an MVC redirection?

```
public ActionResult Create(Product product)
{
    _productTask.Save(product);
    return RedirectToAction("Success");
}

public ActionResult Success()
{
    return View();
}
```

Is [Ayende's](http://www.ayende.com/Blog/archive/2007/12/13/Dont-like-visibility-levels-change-that.aspx) approach still the best way to go with preview 5:

```
public static void RenderView(this Controller self, string action)
{
    typeof(Controller).GetMethod("RenderView").Invoke(self, new object[] { action });
}
```

Seems odd to have to do this, especially as the MVC team have said they are writing the framework to be testable.

```
[TestFixture]
public class RedirectTester
{
    [Test]
    public void Should_redirect_to_success_action()
    {
        var controller = new RedirectController();
        var result = controller.Index() as RedirectToRouteResult;
        Assert.That(result, Is.Not.Null);
        Assert.That(result.Values["action"], Is.EqualTo("success"));
    }
}

public class RedirectController : Controller
{
    public ActionResult Index()
    {
        return RedirectToAction("success");
    }
}
```
Selecting a large amount of text that extends over many screens in an IDE like Eclipse is fairly easy, since you can use the mouse. But what is the best way to, e.g., select and delete multiscreen blocks of text, or write, e.g., three large methods out to another file and then delete them for testing purposes, in Vim when using it via putty/ssh, where you cannot use the mouse?

I can easily yank-to-the-end-of-line or yank-to-the-end-of-code-block, but if the text extends over many screens, or has lots of blank lines in it, I feel like my hands are tied in Vim. Any solutions?

And a related question: is there a way to somehow select 40 lines, and then comment them all out (with "#" or "//"), as is common in most IDEs?

Well, first of all, you can set `vim` to work with the mouse, which would allow you to select text just like you would in `Eclipse`. You can also use the Visual selection - `v`, by default. Once selected, you can `yank`, `cut`, etc.

As far as commenting out the block, I usually select it with `VISUAL`, then do

```
:'<,'>s/^/# /
```

replacing the beginning of each line with a `#`. (The `'<` and `'>` markers are the beginning and end of the visual selection.)
How do I take a set of polygons which contain arbitrary values and create a corresponding bitmap where each pixel contains the value of the polygon at that location?

To put the question into context, my polygons contain information about the average number of people per square kilometre within the polygon. I need to create a raster/bitmap that contains pixels representing the population in 200 metre bins.

I've done something similar in the past where I've used a polygon to create a mask by drawing into a bitmap, filling values, then converting the bitmap into an array that I can manipulate. I'm sure there's a better method for doing this!

I'm clarifying the question a bit more as requested.

1. There are multiple polygons, each polygon is a set of vectors
2. Each polygon will have a single unique value
3. The polygons don't overlap

Thanks, Nick

@Nick R

> I was originally using ArcGIS 9.2, but that doesn't work well with C# and 64 bit, so I am now using GDAL (<http://www.gdal.org>).

Doesn't [gdal\_rasterize](http://www.gdal.org/gdal_rasterize.html) do exactly what you want?
I would like to write a small program in C# which goes through my jpeg photos and, for example, sorts them into dated folders (using MY dating conventions, dammit...). Does anyone know a relatively easy way to get at the EXIF data, such as Date And Time or Exposure, programmatically? Thanks!

Check out this [metadata extractor](https://www.drewnoakes.com/code/exif/). It is written in Java but has also been ported to C#. I have used the Java version to write a small utility to rename my jpeg files based on the date and model tags. Very easy to use.

---

**EDIT** *metadata-extractor* supports .NET too. It's a very fast and simple library for accessing metadata from images and videos. It fully supports Exif, as well as IPTC, XMP and many other types of metadata, from file types including JPEG, PNG, GIF, ICO, WebP, PSD, ...

```
var directories = ImageMetadataReader.ReadMetadata(imagePath);

// print out all metadata
foreach (var directory in directories)
    foreach (var tag in directory.Tags)
        Console.WriteLine($"{directory.Name} - {tag.Name} = {tag.Description}");

// access the date time
var subIfdDirectory = directories.OfType<ExifSubIfdDirectory>().FirstOrDefault();
var dateTime = subIfdDirectory?.GetDateTime(ExifDirectoryBase.TagDateTime);
```

It's available via [NuGet](https://www.nuget.org/packages/MetadataExtractor/) and the [code's on GitHub](https://github.com/drewnoakes/metadata-extractor-dotnet).
I want to be able to generate PDF output from my (native) C++ Windows application. Are there any free/open source libraries available to do this? I looked at the answers to [this question](https://stackoverflow.com/questions/177/how-do-i-programmatically-create-a-pdf-in-my-net-application), but they mostly relate to .Net.

[LibHaru](https://github.com/libharu/libharu)

> Haru is a free, cross platform, open-sourced software library for generating PDF written in ANSI-C. It can work as both a static-library (.a, .lib) and a shared-library (.so, .dll).

Didn't try it myself, but maybe it can help you.
I want to copy the entire contents of a directory from one location to another in C#. There doesn't appear to be a way to do this using `System.IO` classes without lots of recursion. There is a method in VB that we can use if we add a reference to `Microsoft.VisualBasic`:

```
new Microsoft.VisualBasic.Devices.Computer().
    FileSystem.CopyDirectory( sourceFolder, outputFolder );
```

This seems like a rather ugly hack. Is there a better way?

Much easier:

```
private static void CopyFilesRecursively(string sourcePath, string targetPath)
{
    // First create all of the directories
    foreach (string dirPath in Directory.GetDirectories(sourcePath, "*", SearchOption.AllDirectories))
    {
        Directory.CreateDirectory(dirPath.Replace(sourcePath, targetPath));
    }

    // Copy all the files & replace any files with the same name
    foreach (string newPath in Directory.GetFiles(sourcePath, "*.*", SearchOption.AllDirectories))
    {
        File.Copy(newPath, newPath.Replace(sourcePath, targetPath), true);
    }
}
```
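For comparison, in Python the equivalent recursive copy is a single stdlib call; a minimal sketch (the temporary paths are created only for the demonstration, and `dirs_exist_ok` requires Python 3.8+):

```python
import os
import shutil
import tempfile

def copy_directory(source, target):
    # dirs_exist_ok=True mirrors the "replace files with the same name" behaviour
    shutil.copytree(source, target, dirs_exist_ok=True)

# usage: build a small source tree, copy it, and read a file back
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "sub"))
with open(os.path.join(src, "sub", "a.txt"), "w") as f:
    f.write("hello")

copy_directory(src, dst)

with open(os.path.join(dst, "sub", "a.txt")) as f:
    copied = f.read()
print(copied)  # hello
```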
Is it possible to obtain raw logs from Google Analytics? Is there any tool that can generate the raw logs from GA?

No, you can't get the raw logs, but there's nothing stopping you from getting the exact same data logged to your own web server logs. Have a look at the [Urchin code](https://ssl.google-analytics.com/urchin.js) and borrow that, changing the following two lines to point to your web server instead.

```
var _ugifpath2="http://www.google-analytics.com/__utm.gif";
if (_udl.protocol=="https:") _ugifpath2="https://ssl.google-analytics.com/__utm.gif";
```

You'll want to create a `__utm.gif` file so that the hits don't show up in the logs as 404s.

Obviously you'll need to parse the variables out of the hits in your web server logs. The log line in Apache looks something like this. You'll have lots of "fun" parsing out all the various stuff you want from it, but everything Google Analytics gets from the basic JavaScript tagging comes in like this.

```
127.0.0.1 - - [02/Oct/2008:10:17:18 +1000] "GET /__utm.gif?utmwv=1.3&utmn=172543292&utmcs=ISO-8859-1&utmsr=1280x1024&utmsc=32-bit&utmul=en-us&utmje=1&utmfl=9.0%20%20r124&utmdt=My%20Web%20Page&utmhn=www.mydomain.com&utmhid=979599568&utmr=-&utmp=/urlgoeshere/&utmac=UA-1715941-2&utmcc=__utma%3D113887236.511203954.1220404968.1222846275.1222906638.33%3B%2B__utmz%3D113887236.1222393496.27.2.utmccn%3D(organic)%7Cutmcsr%3Dgoogle%7Cutmctr%3Dsapphire%2Btechnologies%2Bsite%253Arumble.net%7Cutmcmd%3Dorganic%3B%2B HTTP/1.0" 200 35 "http://www.mydomain.com/urlgoeshere/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/0.2.153.1 Safari/525.19"
```
[ -0.030478082597255707, 0.3026386499404907, 0.5954393744468689, 0.10395190864801407, -0.2817312180995941, -0.13930627703666687, 0.5337015390396118, -0.19209568202495575, -0.3246985077857971, -0.8273776769638062, -0.15588849782943726, 0.1851687729358673, -0.41606301069259644, 0.3729316294193...
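Pulling the Analytics variables back out of such a log line is mostly query-string parsing. Here's a minimal sketch (Python purely for illustration; the `utm*` field names are the ones visible in the sample hit above):

```python
from urllib.parse import urlparse, parse_qs

def parse_utm_hit(log_line):
    """Extract the Google Analytics variables from a logged __utm.gif request."""
    # The request is the first quoted field: 'GET /__utm.gif?... HTTP/1.0'
    request = log_line.split('"')[1]
    path = request.split()[1]
    # parse_qs URL-decodes the values and returns lists; keep the first of each
    return {key: values[0] for key, values in parse_qs(urlparse(path).query).items()}

line = ('127.0.0.1 - - [02/Oct/2008:10:17:18 +1000] '
        '"GET /__utm.gif?utmwv=1.3&utmdt=My%20Web%20Page&utmhn=www.mydomain.com'
        '&utmp=/urlgoeshere/ HTTP/1.0" 200 35')
hit = parse_utm_hit(line)
print(hit['utmdt'])  # -> My Web Page
```

The nested `utmcc` cookie field needs a second round of the same treatment, since it is itself a URL-encoded structure.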
What is the best way to do per-user database connections in `Rails`? I realize this is a poor Rails design practice, but we're gradually replacing an existing web application that uses one database per user. A complete redesign/rewrite is not feasible.

Put something like this in your application controller. I'm using the subdomain plus "\_clientdb" to pick the name of the database. I have all the databases using the same username and password, so I can grab that from the db config file. Hope this helps!

```
class ApplicationController < ActionController::Base
  before_filter :hijack_db

  def hijack_db
    db_name = request.subdomains.first + "_clientdb"

    # let's manually connect to the proper db
    ActiveRecord::Base.establish_connection(
      :adapter  => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['adapter'],
      :host     => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['host'],
      :username => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['username'],
      :password => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['password'],
      :database => db_name
    )
  end
end
```
I want to paste something I have cut from my desktop into a file open in Vi. But when I paste, extra tabs pile up and the lines cascade across the page. I think it is some sort of mode change but I can't find the command.

If you're using plain vi:
-------------------------

You probably have `autoindent` on. To turn it off while pasting:

```
<Esc> :set noai
<paste all you want>
<Esc> :set ai
```

I have in my `.exrc` the following shortcuts:

```
map ^P :set noai^M
map ^N :set ai^M
```

Note that these have to be the actual control characters - insert them using `Ctrl`-`V` `Ctrl`-`P` and so on.

If you're using vim:
--------------------

Use the [`paste`](http://www.vim.org/htmldoc/options.html#%27paste%27) option. In addition to disabling `autoindent`, it will also set other options such as `textwidth` and `wrapmargin` to paste-friendly defaults:

```
<Esc> :set paste
<paste all you want>
<Esc> :set nopaste
```

You can also set a key to toggle paste mode. My `.vimrc` has the following line:

```
set pastetoggle=<C-P>   " Ctrl-P toggles paste mode
```
OK, I am not sure if the title is completely accurate - open to suggestions!

I am in the process of creating an ASP.NET custom control; this is something that is still relatively new to me, so please bear with me.

I am thinking about the event model. Since we are not using Web Controls there are no events being fired from buttons; rather, I am manually calling **\_\_doPostBack** with the appropriate arguments. However, this can obviously mean that there are a lot of postbacks occurring when, say, selecting options (which render differently when selected). In time, I will need to make this more Ajax-y and responsive, so I will need to change the event binding to call local JavaScript.

So, I was thinking I should be able to toggle the "mode" of the control: it can either use postback and handle itself, or you can specify the JavaScript function names to call instead of the doPostBack.

* What are your thoughts on this?
* Am I approaching the raising of the events from the control in the wrong way? (totally open to suggestions here!)
* How would you approach a similar problem?

---

Edit - To Clarify
-----------------

* I am creating a custom rendered control (i.e. inherits from WebControl).
* We are not using existing Web Controls since we want complete control over the rendered output.
* AFAIK the only way to get a server-side event to occur from a custom rendered control is to call doPostBack from the rendered elements (please correct if wrong!).
* ASP.NET MVC is not an option.

You might like to try this [improved Javascript syntax highlighter](http://www.vim.org/scripts/script.php?script_id=1491) rather than the one that ships with VIMRUNTIME.
My boss found a bug in a query I created, and I don't understand the reasoning behind the bug, although the query results prove he's correct. Here's the query (simplified version) before the fix:

```
select PTNO,PTNM,CATCD
from PARTS
left join CATEGORIES on (CATEGORIES.CATCD=PARTS.CATCD);
```

and here it is after the fix:

```
select PTNO,PTNM,PARTS.CATCD
from PARTS
left join CATEGORIES on (CATEGORIES.CATCD=PARTS.CATCD);
```

The bug was that null values were being shown for column CATCD, i.e. the query results included results from table CATEGORIES instead of PARTS.

Here's what I don't understand: if there was ambiguity in the original query, why didn't Oracle throw an error? As far as I understood, in the case of left joins, the "main" table in the query (PARTS) has precedence in ambiguity. Am I wrong, or just not thinking about this problem correctly?

Update: Here's a revised example, where the ambiguity error is not thrown:

```
CREATE TABLE PARTS (PTNO NUMBER, CATCD NUMBER, SECCD NUMBER);
CREATE TABLE CATEGORIES(CATCD NUMBER);
CREATE TABLE SECTIONS(SECCD NUMBER, CATCD NUMBER);

select PTNO,CATCD
from PARTS
left join CATEGORIES on (CATEGORIES.CATCD=PARTS.CATCD)
left join SECTIONS on (SECTIONS.SECCD=PARTS.SECCD);
```

Anybody have a clue?

I'm afraid I can't tell you why you're not getting an exception, but I can postulate as to why it chose CATEGORIES' version of the column over PARTS' version.

> As far as I understood, in the case of left joins, the "main" table in the query (PARTS) has precedence in ambiguity

It's not clear whether by "main" you mean simply the left table in a left join, or the "driving" table, as you see the query conceptually... But in either case, what you see as the "main" table in the query as you've written it will not necessarily be the "main" table in the actual execution of that query. My guess is that Oracle is simply using the column from the first table it hits in executing the query. And since most individual operations in SQL do not require one table to be hit before the other, the DBMS will decide at parse time which is the most efficient one to scan first.

Try getting an execution plan for the query. I suspect it may reveal that it's hitting CATEGORIES first and then PARTS.
Using VB.NET, how do I toggle the state of Caps Lock?

From: <http://www.vbforums.com/showthread.php?referrerid=61394&t=537891>

```
Imports System.Runtime.InteropServices

Public Class Form2

    Private Declare Sub keybd_event Lib "user32" ( _
        ByVal bVk As Byte, _
        ByVal bScan As Byte, _
        ByVal dwFlags As Integer, _
        ByVal dwExtraInfo As Integer _
    )

    Private Const VK_CAPITAL As Integer = &H14
    Private Const KEYEVENTF_EXTENDEDKEY As Integer = &H1
    Private Const KEYEVENTF_KEYUP As Integer = &H2

    Private Sub Button1_Click( _
        ByVal sender As System.Object, _
        ByVal e As System.EventArgs _
    ) Handles Button1.Click

        ' Toggle CapsLock

        ' Simulate the Key Press
        keybd_event(VK_CAPITAL, &H45, KEYEVENTF_EXTENDEDKEY Or 0, 0)

        ' Simulate the Key Release
        keybd_event(VK_CAPITAL, &H45, KEYEVENTF_EXTENDEDKEY Or KEYEVENTF_KEYUP, 0)

    End Sub

End Class
```
I have a site which contains several ashx handlers, and on a couple of the handlers I want to reject non-SSL requests. Is there a way that I can do this in code?

If you must do it programmatically, a way I've done it in the past is to inspect the URL and look for "https" in it. Redirect if you don't see that.

`Request.IsSecureConnection` should be the preferred method, however. You may have to add additional logic to handle a loopback address.
The Open/Closed Principle states that software entities (classes, modules, etc.) should be open for extension, but closed for modification. What does this mean, and why is it an important principle of good object-oriented design?

Specifically, it is about a "Holy Grail" of design in OOP: making an entity extensible enough (through its individual design or through its participation in the architecture) to support future unforeseen changes without rewriting its code (and sometimes even without re-compiling \*\*).

Some ways to do this include Polymorphism/Inheritance, Composition, Inversion of Control (a.k.a. DIP), Aspect-Oriented Programming, Patterns such as Strategy, Visitor, and Template Method, and many other principles, patterns, and techniques of OOAD.

\*\* See the 6 "package principles", [REP, CCP, CRP, ADP, SDP, SAP](http://www.butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod)
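As a concrete sketch of the principle via polymorphism (a hypothetical example in Python, not from the original answer): the `checkout` function below is closed for modification, while the set of discount rules stays open for extension by adding new subclasses.

```python
from abc import ABC, abstractmethod

class DiscountPolicy(ABC):
    """Extension point: new pricing rules are added as new subclasses."""
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountPolicy):
    def apply(self, price: float) -> float:
        return price

class PercentageDiscount(DiscountPolicy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

def checkout(price: float, policy: DiscountPolicy) -> float:
    # Closed for modification: this code never changes when a new
    # DiscountPolicy subclass is introduced elsewhere.
    return policy.apply(price)

print(checkout(100.0, PercentageDiscount(50)))  # -> 50.0
```

Adding, say, a `BuyOneGetOneFree` rule later means writing one new subclass; `checkout` and every existing policy stay untouched.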
The question is pretty self-explanatory: I want to do a simple find and replace, like you would in a text editor, on the data in a column of my database (which is MSSQL on MS Windows Server 2003).

The following query replaces each and every `a` character with a `b` character:

```
UPDATE YourTable
SET Column1 = REPLACE(Column1,'a','b')
WHERE Column1 LIKE '%a%'
```

This will not work on SQL Server 2003.
How do I save each sheet in an Excel workbook to separate `CSV` files with a macro? I have an Excel workbook with multiple sheets and I was looking for a macro that will save each sheet to a separate `CSV (comma separated file)`. Excel will not allow you to save all sheets to different `CSV` files.

Here is one that will give you a visual file chooser to pick the folder you want to save the files to and also lets you choose the CSV delimiter (I use pipes '|' because my fields contain commas and I don't want to deal with quotes):

```
' ---------------------- Directory Choosing Helper Functions -----------------------
' Excel and VBA do not provide any convenient directory chooser or file chooser
' dialogs, but these functions will provide a reference to a system DLL
' with the necessary capabilities
Private Type BROWSEINFO    ' used by the function GetFolderName
    hOwner As Long
    pidlRoot As Long
    pszDisplayName As String
    lpszTitle As String
    ulFlags As Long
    lpfn As Long
    lParam As Long
    iImage As Long
End Type

Private Declare Function SHGetPathFromIDList Lib "shell32.dll" _
    Alias "SHGetPathFromIDListA" (ByVal pidl As Long, ByVal pszPath As String) As Long
Private Declare Function SHBrowseForFolder Lib "shell32.dll" _
    Alias "SHBrowseForFolderA" (lpBrowseInfo As BROWSEINFO) As Long

Function GetFolderName(Msg As String) As String
    ' returns the name of the folder selected by the user
    Dim bInfo As BROWSEINFO, path As String, r As Long
    Dim X As Long, pos As Integer
    bInfo.pidlRoot = 0&    ' Root folder = Desktop
    If IsMissing(Msg) Then
        bInfo.lpszTitle = "Select a folder."    ' the dialog title
    Else
        bInfo.lpszTitle = Msg    ' the dialog title
    End If
    bInfo.ulFlags = &H1    ' Type of directory to return
    X = SHBrowseForFolder(bInfo)    ' display the dialog
    ' Parse the result
    path = Space$(512)
    r = SHGetPathFromIDList(ByVal X, ByVal path)
    If r Then
        pos = InStr(path, Chr$(0))
        GetFolderName = Left(path, pos - 1)
    Else
        GetFolderName = ""
    End If
End Function
'---------------------- END Directory Chooser Helper Functions ----------------------

Public Sub DoTheExport()
    Dim FName As Variant
    Dim Sep As String
    Dim wsSheet As Worksheet
    Dim nFileNum As Integer
    Dim csvPath As String

    Sep = InputBox("Enter a single delimiter character (e.g., comma or semi-colon)", _
                   "Export To Text File")
    'csvPath = InputBox("Enter the full path to export CSV files to: ")

    csvPath = GetFolderName("Choose the folder to export CSV files to:")
    If csvPath = "" Then
        MsgBox ("You didn't choose an export directory. Nothing will be exported.")
        Exit Sub
    End If

    For Each wsSheet In Worksheets
        wsSheet.Activate
        nFileNum = FreeFile
        Open csvPath & "\" & _
            wsSheet.Name & ".csv" For Output As #nFileNum
        ExportToTextFile CStr(nFileNum), Sep, False
        Close nFileNum
    Next wsSheet

End Sub

Public Sub ExportToTextFile(nFileNum As Integer, _
    Sep As String, SelectionOnly As Boolean)

    Dim WholeLine As String
    Dim RowNdx As Long
    Dim ColNdx As Integer
    Dim StartRow As Long
    Dim EndRow As Long
    Dim StartCol As Integer
    Dim EndCol As Integer
    Dim CellValue As String

    Application.ScreenUpdating = False
    On Error GoTo EndMacro:

    If SelectionOnly = True Then
        With Selection
            StartRow = .Cells(1).Row
            StartCol = .Cells(1).Column
            EndRow = .Cells(.Cells.Count).Row
            EndCol = .Cells(.Cells.Count).Column
        End With
    Else
        With ActiveSheet.UsedRange
            StartRow = .Cells(1).Row
            StartCol = .Cells(1).Column
            EndRow = .Cells(.Cells.Count).Row
            EndCol = .Cells(.Cells.Count).Column
        End With
    End If

    For RowNdx = StartRow To EndRow
        WholeLine = ""
        For ColNdx = StartCol To EndCol
            If Cells(RowNdx, ColNdx).Value = "" Then
                CellValue = ""
            Else
                CellValue = Cells(RowNdx, ColNdx).Value
            End If
            WholeLine = WholeLine & CellValue & Sep
        Next ColNdx
        WholeLine = Left(WholeLine, Len(WholeLine) - Len(Sep))
        Print #nFileNum, WholeLine
    Next RowNdx

EndMacro:
    On Error GoTo 0
    Application.ScreenUpdating = True

End Sub
```
I am stress testing a .NET web application. I did this for two reasons: I wanted to see what performance was like under real-world conditions, and also to make sure we hadn't missed any problems during testing. We had 30 concurrent users in the application using it as they would during the normal course of their jobs. Most users had multiple windows of the application open.

* 10 Users: Not bad
* 20 Users: Slowing down
* 30 Users: Very, very slow but no timeouts

It was loaded on the production server. It is a virtual server with a 2.66 GHz Xeon processor and 2 GB of RAM. We are using Win2K3 SP2. We have .NET 1.1 and 2.0 loaded and are using SQLExpress SP1. We rechecked the indexes on all of the tables afterward and they were all as they should be.

How can we improve our application's performance?

This is just something that I thought of, but check to see how much memory SQL Server is using when you have 20+ users - one of the limitations of the Express version is that it is [limited to 1GB of RAM](http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx). So it might just be a simple matter of there not being enough memory available to the server due to the limitations of Express.
Visually, both of the following snippets produce the same UI. So why are there two controls?

**Snippet1**

```
<TextBlock>Name:</TextBlock>
<TextBox Name="nameTextBox" />
```

**Snippet2**

```
<Label>Name:</Label>
<TextBox Name="nameTextBox" />
```

(*Well, I am gonna answer this myself... thought this is a useful tidbit I learnt today from [Programming WPF](https://rads.stackoverflow.com/amzn/click/com/0596510373)*)

The WPF TextBlock inherits from **FrameworkElement** instead of deriving from **System.Windows.Control** like the Label control. This means that the TextBlock is much more lightweight. The downside of using a TextBlock is no support for access/accelerator keys, and there is no link to other controls as a target.

***When you want to display text by itself, use the TextBlock.*** The benefit is a light, performant way to display text.

***When you want to associate text with another control like a TextBox, use the Label control.*** The benefits are access keys and references to the target control.
I'm converting an application to use Java 1.5 and have found the following method:

```
/**
 * Compare two Comparables, treat nulls as -infinity.
 * @param o1
 * @param o2
 * @return -1 if o1&lt;o2, 0 if o1==o2, 1 if o1&gt;o2
 */
protected static int nullCompare(Comparable o1, Comparable o2) {
    if (o1 == null) {
        if (o2 == null) {
            return 0;
        } else {
            return -1;
        }
    } else if (o2 == null) {
        return 1;
    } else {
        return o1.compareTo(o2);
    }
}
```

Ideally I would like to make the method take two Comparables of the same type. Is it possible to convert this, and how? I thought the following would do the trick:

```
protected static <T extends Comparable> int nullCompare(T o1, T o2) {
```

but it has failed to get rid of a warning in IntelliJ "Unchecked call to 'compareTo(T)' as a member of raw type 'java.lang.Comparable'" on the line:

```
return o1.compareTo(o2);
```

Change it to:

```
protected static <T extends Comparable<T>> int nullCompare(T o1, T o2) {
```

You need that because Comparable is itself a generic type.
When I press F5 in Visual Studio 2008, I want Google Chrome launched as the browser that my ASP.NET app runs in. May I know how this can be done? Right click on an .aspx file and click "Browse with..." then select Chrome and click "Set as Default." You can select more than one browser in the list if you want. There's also this really great [WoVS Default Browser Switcher Visual Studio extension](http://visualstudiogallery.msdn.microsoft.com/bb424812-f742-41ef-974a-cdac607df921/).
So I have an object which has some fields - doesn't really matter what. I have a generic list of these objects:

```
List<MyObject> myObjects = new List<MyObject>();
myObjects.Add(myObject1);
myObjects.Add(myObject2);
myObjects.Add(myObject3);
```

I want to remove objects from my list based on some criteria, for instance `myObject.X >= 10`. I would like to use the `RemoveAll(Predicate<T> match)` method to do this. I know I can define a delegate which can be passed into RemoveAll, but I would like to know how to define this inline with an anonymous delegate, instead of creating a bunch of delegate functions which are only used in one place.

There are two options: an explicit delegate, or a delegate disguised as a lambda construct.

Explicit delegate:

```
myObjects.RemoveAll(delegate (MyObject m) { return m.X >= 10; });
```

Lambda:

```
myObjects.RemoveAll(m => m.X >= 10);
```

---

Performance-wise both are equal. As a matter of fact, both language constructs generate the same IL when compiled. This is because C# 3.0 is basically an extension on C# 2.0, so it compiles to C# 2.0 constructs.
I'm using a deploy project to deploy my ASP.net web application. When I build the deploy project, all the .compiled files are re-created. Do I need to FTP them to the production web server? If I do a small change do I need to copy all the web site again? From my own research, the .compiled files must be copied to the production server, but not needed to copied every time from [Rick Strahl](http://www.west-wind.com/) excellent blog: > The output from the merge utilitity > can combine all markup and CodeBeside > code into a single assembly, but you > will still end up with
[ 0.48566654324531555, 0.12524229288101196, -0.05120258405804634, -0.04902004823088646, 0.24990509450435638, -0.009527557529509068, 0.39204663038253784, -0.3229733109474182, -0.22452418506145477, -0.8388596177101135, -0.10974496603012085, 0.2976040244102478, 0.044362086802721024, -0.10821378...
> the .compiled files which are required for ASP.NET
> to associate the page requests with a
> specific class contained in the
> assembly. However, because the file
> names generated are fixed you don’t
> need to update these files unless you
> add or remove pages. In effect this
> means that in most situations you can
> simply update the single assembly to
> update your Web.
>
> [Source](http://www.west-wind.com/presentations/AspNetCompilation/AspNetCompilation.asp)
How does the **open-source/free software community** develop drivers for products that offer no documentation? How do you reverse engineer something?

* You observe the input and output, and develop a set of rules or models that describe the operation of the object.

Example: Let's say you want to develop a USB camera driver. The "black box" is the software driver.

1. Develop hooks into the OS and/or driver so you can see the inputs and outputs of the driver
2. Generate typical inputs, and record the outputs
3. Analyze the outputs and synthesize a model that describes the relationship between the input and output
4. Test the model
- put it in place of the black box driver, and run your tests
5. If it does everything you need, you're done; if not, rinse and repeat

Note that this is just a regular problem solving/scientific process. For instance, weather forecasters do the same thing - they observe the weather, test the current conditions against the model, which predicts what will happen over the next few days, and then compare the model's output to reality. When it doesn't match they go back and adjust the model. This method is slightly safer (legally) than clean room reverse engineering, where someone actually decompiles the
code, or disassembles the product, analyzes it thoroughly, and makes a model based on what they saw. Then the model (*AND NOTHING ELSE*) is passed to the developers replicating the functionality of the product. The engineer who took the original apart, however, cannot participate, because they might inadvertently carry copyrighted portions of the original code/design into the new code. If you never disassemble or decompile the product, though, you should be in legally safe waters - the only problem left is that of patents. -Adam
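The observe/model/test loop described above can be sketched in a few lines of Python; the "black box" here is a toy function standing in for the undocumented component, and every name is illustrative:

```python
# Toy stand-in for the component whose internals we cannot read.
def black_box(x):
    return 3 * x + 7

# Steps 1-2: generate typical inputs and record the observed outputs.
observations = [(x, black_box(x)) for x in range(10)]

# Step 3: synthesize a model from the observations
# (here, fit a line through the first two points).
(x0, y0), (x1, y1) = observations[:2]
slope = (y1 - y0) // (x1 - x0)
intercept = y0 - slope * x0

def model(x):
    return slope * x + intercept

# Steps 4-5: test the model in place of the black box;
# if it disagreed with any observation, we would refine it and repeat.
assert all(model(x) == y for x, y in observations)
```

A real driver's behavior is far richer than a linear function, of course; the point is only that the loop (probe, record, hypothesize, verify) is the same.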
What's the easiest way to add a header and footer to a .NET PrintDocument object, either programmatically or at design-time? Specifically I'm trying to print a 3rd party grid control (Infragistics GridEx v4.3), which takes a PrintDocument object and draws itself into it. The resulting page just contains the grid and its contents - however I would like to add a header or title to identify the printed report, and possibly a footer to show who printed it, when, and ideally a page number and total pages. I'm using VB.NET 2.0. Thanks for your help! The PrintDocument object fires the PrintPage event for each page
to be printed. You can draw text/lines/etc. into the print queue using the PrintPageEventArgs event parameter: <http://msdn.microsoft.com/en-us/library/system.drawing.printing.printdocument.aspx> `Dim` it `WithEvents` when you pass it to the grid, so you can handle the event.
I am using ActiveScaffold in a Ruby on Rails app, and to save space in the table I have replaced the default "actions" text in the table (ie. "edit", "delete", "show") with icons using CSS. I have also added a couple of custom actions with action\_link.add ("move" and "copy"). For clarity, **I would like to have a tooltip pop up with the related action** (ie. "edit", "copy") when I hover the mouse over the icon. I thought I could do this by adding a simple "alt" definition to the tag, but that doesn't appear to work. Can somebody point me in the right
direction? The `alt` attribute is to be used as an alternative to the image, in the case of the image missing, or in a text-only browser. IE got it wrong when they made `alt` appear as a tooltip; it was never meant to be that. The correct attribute for this is `title`, which of course doesn't do a tooltip in IE. So, to have a tooltip show up in both IE and Firefox/Safari/Chrome/Opera, use both an `alt` attribute and a `title` attribute.
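In markup terms, the suggestion amounts to something like this (the icon path and label are made up for illustration):

```html
<img src="/images/icons/edit.png" alt="edit" title="edit" />
```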
Starting from ASP.NET MVC Preview 3, Html.Button (and other related HTML controls) are no longer supported. The question is, what is the equivalent for them? I have an app that was built using Preview 2, and now I have to make it compatible with the latest CTP releases. I figured it out. It goes something like this:

```
<form method="post" action="<%= Html.AttributeEncode(Url.Action("CastUpVote")) %>">
  <input type="submit" value="<%=ViewData.Model.UpVotes%> up votes" />
</form>
```
I'm trying to create a standard way of rolling out web applications for our company. Currently we do it with zip files, VBScript/JavaScript, and some manual steps. For thick client installs we generate MSI installers using Wise/WiX. We don't currently create installers for websites, as in general they are just xcopy deploys. However, we have some config files that need to be changed, and we need to verify that certain handlers are registered in IIS... The list goes on. Do most people use MSI installers for web applications as well, or some other tool/scripting language? Do consider MSDeploy; that is the direction Microsoft will be investing
in the future for deployment of web applications... Know more about the future direction at [Overview Post for Web Deployment in VS 2010](http://vishaljoshi.blogspot.com/2009/09/overview-post-for-web-deployment-in-vs.html)
I have the following query:

```
select column_name, count(column_name)
from table
group by column_name
having count(column_name) > 1;
```

What would be the difference if I replaced all calls to `count(column_name)` with `count(*)`? This question was inspired by [How do I find duplicate values in a table in Oracle?](https://stackoverflow.com/questions/59232/how-do-i-find-duplicate-values-in-a-table-in-oracle).

---

To clarify the accepted answer (and maybe my question), replacing `count(column_name)` with `count(*)` would return an extra row in the result that contains a `null` and the count of `null` values in the column. `count(*)` counts NULLs and `count(column)` does not. [edit] Added this code so that people can run it:

```
create table #bla(id int, id2 int)
insert #bla values(null,null)
insert #bla values(1,null)
insert #bla values(null,1)
insert #bla values(1,null)
insert #bla values(null,1)
insert #bla values(1,null)
insert #bla values(null,null)

select count(*), count(id), count(id2) from #bla
```

results:

```
7    3    2
```
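The same experiment can be reproduced outside SQL Server; here is a quick sketch using Python's built-in sqlite3 module (COUNT's NULL handling is the same, and the table is renamed to `bla` since SQLite has no `#temp` tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table bla(id int, id2 int)")

# The same seven rows as the #bla example above.
rows = [(None, None), (1, None), (None, 1), (1, None),
        (None, 1), (1, None), (None, None)]
con.executemany("insert into bla values (?, ?)", rows)

counts = con.execute(
    "select count(*), count(id), count(id2) from bla"
).fetchone()
print(counts)  # (7, 3, 2): count(*) counts every row, count(col) skips NULLs
```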