| lang | desc | code | title |
|---|---|---|---|
C#
|
I am rendering some simple text to a PDF in C# using Telerik: This works great locally. I've set up an Azure app service, and I am getting the following error when I run reportProcessor.RenderReport(): I feel like this might be an issue with the way I've configured my Azure app service. Here is my configuration:

- Location: West US (we're based in Utah)
- "F1 Free" pricing tier (also West US) with 1 GB storage, used by 2 App Services

Is my Azure app service configuration missing something?
|
using Telerik.Reporting;
using Telerik.Reporting.Processing;
...
ReportProcessor reportProcessor = new ReportProcessor();
InstanceReportSource instanceReportSource = new InstanceReportSource();
instanceReportSource.ReportDocument = new MyReport();
RenderingResult result = reportProcessor.RenderReport("PDF", instanceReportSource, null);

An error has occurred while rendering the report:
System.ArgumentException: Parameter is not valid.
   at System.Drawing.Graphics.GetHdc()
   at Telerik.Reporting.Pdf.PdfContext..ctor()
   at Telerik.Reporting.Pdf.PdfDocument..ctor()
   at Telerik.Reporting.ImageRendering.DocumentPdf.FindOrCreateDocument()
   at Telerik.Reporting.ImageRendering.DocumentPdf..ctor(PdfRenderingContext context, IMeasureContext measureContext)
   at Telerik.Reporting.ImageRendering.PdfReport.CreateDocument(IDictionary renderingInfo, IDictionary deviceInfo, CreateStream createStreamCallback, PageSettings pageSettings)
   at Telerik.Reporting.BaseRendering.DocumentRenderingExtensionBase.CreateWriter(IDictionary renderingContext, IDictionary deviceInfo, CreateStream createStreamCallback, PageSettings pageSettings)
   at Telerik.Reporting.ImageRendering.PdfReport.CreateWriter(IDictionary renderingContext, IDictionary deviceInfo, CreateStream createStreamCallback, PageSettings pageSettings)
   at Telerik.Reporting.BaseRendering.RenderingExtensionBase.Render(Report report, Hashtable renderingContext, Hashtable deviceInfo, CreateStream createStreamCallback, EvaluateHeaderFooterExpressions evalHeaderFooterCallback)
|
Telerik Reporting produces PDF locally, but not on Azure
|
C#
|
I am currently working with .NET 2.0 and have an interface whose generic type is used to define a method's return type, something like this: My problem is that some classes that implement this interface do not really need to return anything. In Java you can use java.lang.Void for this purpose, but after quite a bit of searching I found no equivalent in C#. More generally, I also did not find a good way around this problem. I tried to find how people would do this with delegates, but found nothing either, which makes me believe that the problem is that I suck at searching :) So what's the best way to solve this? How would you do it? Thanks!
|
interface IExecutor<T>
{
    T Execute();
}
|
java.lang.Void in C#?
|
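Since the BCL has no type that can stand in for void as a generic argument, one common workaround is a tiny placeholder type. This is a sketch; `Unit` is a hypothetical name, not a BCL type:

```csharp
// Hypothetical placeholder: System.Void cannot be used as a type argument.
public sealed class Unit
{
    public static readonly Unit Value = new Unit();
    private Unit() { }
}

// An implementor that has nothing meaningful to return.
public class SideEffectExecutor : IExecutor<Unit>
{
    public Unit Execute()
    {
        // do the side-effecting work here
        return Unit.Value;
    }
}
```

Callers that care about a result keep using `IExecutor<T>` unchanged; callers that don't simply ignore the returned `Unit.Value`.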
C#
|
We have a legacy .NET Remoting-based app. Our client library currently supports only synchronous operations. I would like to add asynchronous operations with TPL-based async Task<> methods. As proof of concept, I have set up a basic remoting server/client solution based on a modified version of these instructions. I have also found this article that describes how to convert APM-based asynchronous operations to TPL-based async tasks (using Task.Factory.FromAsync). What I'm unsure about is whether I'm compelled to specify the callback function in .BeginInvoke() and also to specify the .EndInvoke(). If both are required, what exactly is the difference between the callback function and .EndInvoke()? If only one is required, which one should I use to return values and also ensure that I have no memory leaks? Here is my current code, where I don't pass a callback to .BeginInvoke():
|
public class Client : MarshalByRefObject
{
    private IServiceClass service;

    public delegate double TimeConsumingCallDelegate();

    public void Configure()
    {
        RemotingConfiguration.Configure("client.exe.config", false);
        var wellKnownClientTypeEntry = RemotingConfiguration.GetRegisteredWellKnownClientTypes()
            .Single(wct => wct.ObjectType.Equals(typeof(IServiceClass)));
        this.service = Activator.GetObject(typeof(IServiceClass), wellKnownClientTypeEntry.ObjectUrl) as IServiceClass;
    }

    public async Task<double> RemoteTimeConsumingRemoteCall()
    {
        var timeConsumingCallDelegate = new TimeConsumingCallDelegate(service.TimeConsumingRemoteCall);
        return await Task.Factory.FromAsync(timeConsumingCallDelegate.BeginInvoke(null, null), timeConsumingCallDelegate.EndInvoke);
    }

    public async Task RunAsync()
    {
        var result = await RemoteTimeConsumingRemoteCall();
        Console.WriteLine($"Result of TPL remote call: {result} {DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss")}");
    }
}

public class Program
{
    public static async Task Main(string[] Args)
    {
        Client clientApp = new Client();
        clientApp.Configure();
        await clientApp.RunAsync();
        Console.WriteLine("Press any key to continue...");
        Console.ReadKey(false);
    }
}
|
Wrap .NET Remoting async method in TPL Task
|
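For context, Task.Factory.FromAsync also has overloads that take the Begin/End pair directly rather than an already-started IAsyncResult; in that form the factory supplies its own AsyncCallback and calls EndInvoke for you, so no user callback is needed. A sketch against the delegate from the question:

```csharp
// Sketch: let FromAsync own the callback and the EndInvoke call.
public Task<double> RemoteTimeConsumingRemoteCallAlt()
{
    var del = new TimeConsumingCallDelegate(service.TimeConsumingRemoteCall);
    // (beginMethod, endMethod, state) overload: FromAsync passes its own
    // AsyncCallback into BeginInvoke and invokes EndInvoke exactly once.
    return Task.Factory.FromAsync<double>(del.BeginInvoke, del.EndInvoke, state: null);
}
```

Because EndInvoke is always called exactly once, the result (or exception) flows into the Task and the async resources are released.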
C#
|
I was looking at the IL code of a valid method with Reflector and I've run into this: Instructions with the suffix .s are supposed to take an int8 operand, and sure enough this should be the case with leave.s as well. However, 0x0103 is 259, which exceeds the capacity of an int8. The method somehow works, but when I read the instructions with the method Mono.Reflection.Disassembler.GetInstructions it retrieves 3 instead of 259, because it's supposed to be an int8. So, my question: how is the original instruction (leave.s L_0103) possible? I have looked at the ECMA documentation for that (Partition III: CIL Instruction Set) and I can't find anything that explains it. Any ideas? Thanks.

EDIT #1: OK, I'm an idiot. In the case of branch instructions the offset must be counted from the beginning of the instruction following the current instruction. I swear I read the documentation, but somehow I managed to skip that. In my defence, I'm pretty sick today. Sigh. Thank you. (And thanks for not calling me an idiot, even though this was pretty idiotic :P)

EDIT #2: By the way, in case anyone is interested, when Mono.Reflection.Disassembler.GetInstructions disassembles the instructions it changes the meaning of the operand in branch instructions. In particular, as has been pointed out, the operand of a branch instruction represents the offset from the beginning of the next instruction, not from 0. However, Mono.Reflection gives back the offset starting at 0 (which may be why I was confused, although it doesn't explain how I managed to skip part of the documentation). An extract of MethodBodyReader.ReadOperand(Instruction instruction): As you can see, it adds il.position, which is the offset (starting at 0) of the next instruction. It also casts to sbyte, which is the reason I'm getting 3 instead of 259. This appears to be a bug (the offset starting from 0 may be larger than an sbyte). I'll ask Jb Evain (the author) and report back.

EDIT #3: He hasn't answered yet, but I've changed it to the second version and it seems to have solved my problem. I cast to sbyte to get the sign right, in case it's a backwards jump (negative offset), and then when I add il.position (which is an int) the result is an int. I'll let you know what he says anyway.

EDIT #4: I forgot to report back. The author confirms this was a bug.
|
L_00a5: leave.s L_0103
L_00a5: leave.s L_0003

switch (instruction.OpCode.OperandType)
{
    ...
    case OperandType.ShortInlineBrTarget:
        instruction.Operand = (sbyte)(il.ReadByte() + il.position);
        break;
    ...
}

switch (instruction.OpCode.OperandType)
{
    ...
    case OperandType.ShortInlineBrTarget:
        instruction.Operand = ((sbyte)il.ReadByte()) + il.position;
        break;
    ...
}
|
IL short-form instructions aren't short?
|
C#
|
I started a UWP app on a laptop running Visual Studio 2015 Update 3. All was well and good; I was able to run and test it on both my laptop and my phone with no issues at all. I added the project to source control (private Git server) and pulled the repo on my home PC. The project opens in VS2015 Update 3 on my PC and I can develop and build with no issues. However, I can't seem to run the app on my PC running Windows 10 Build 15063 (same as the laptop). At first I thought it was the temporary certificate, but I tried both creating a new one and adding the one from the working laptop to source control. Here is the error and stack trace: It doesn't even hit the OnLaunched event in App.xaml.cs. I have Developer Mode enabled on both machines, as I've used my PC for other projects. It's also worth noting that creating a blank Universal app on the PC works fine. Any help would be greatly appreciated, as I don't want to be stuck with developing on a laptop when I have a more powerful PC (which also runs a LOT cooler than the laptop in this UK heatwave...)

Edit: The laptop I created the project on is encrypted with BitLocker. Could this cause the issue?

Edit 2: If I create a brand new blank UWP app, it runs fine; however, if I then copy over the source files from the original, install all packages, and update namespaces, I get the access denied error again.
|
An unhandled exception of type 'System.UnauthorizedAccessException' occurred in MyUwpApp.exe

Additional information: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
   at Windows.UI.Xaml.Application.Start(ApplicationInitializationCallback callback)
   at MyUwpApp.Program.Main(String[] args)
|
UWP app won't launch on second machine
|
C#
|
I created a method to abstract try/catch functionality. I have about 30 methods that have this exact same try/catch scenario, so I put it in one method. Now, most of the methods call it like this: My issue is that I have just a couple of methods that call it without needing to return a value: However, I can't use void there. The Invoke() method takes a Func, but in this case it needs to be an Action. I did some research and it looks like I may have to create another Invoke() method that takes an Action. Those suggestions were from 2009 and 2010, though. Is it possible to somehow use my Func method without having to create another Invoke() method like Invoke2()?
|
private T Invoke<T>(Func<T> func)
{
    try
    {
        return func.Invoke();
    }
    catch (Exception ex)
    {
        throw LogAndThrowFaultException(ex);
    }
}

public IEnumerable<PingResponse> GetAllForPingRequest(PingRequest pingRequest)
{
    return Invoke(() => PingResponseLogic.GetAllForPingRequest(pingRequest));
}

Invoke<void>(() => CustomVariableGroupLogic.Delete(customVariableGroup)); // does not compile
|
Use Func method for an Action also?
|
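Two ways to reuse the existing `Invoke<T>` for void-returning calls, sketched with names from the question; the try/catch still lives in exactly one place either way:

```csharp
// Option 1: wrap at the call site so the generic Invoke<T> is reused as-is.
Invoke<object>(() => { CustomVariableGroupLogic.Delete(customVariableGroup); return null; });

// Option 2: a thin Action overload that forwards to the Func version.
private void Invoke(Action action)
{
    Invoke<object>(() => { action(); return null; });
}
```

Option 2 is technically "another Invoke() method", but since it contains no duplicated error handling it avoids the maintenance cost the asker is worried about.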
C#
|
So, I just hate using true/false as method arguments for "enabled"/"disabled". To freely quote Jeff: "I dislike it on a fundamental level". I repeatedly find myself defining my own enums on every new project, in different namespaces all over the place, like these: Is there a generic enum I can use for these scenarios?
|
public enum Clickability { Disabled, Enabled }
public enum Editability { Disabled, Enabled }
public enum Serializability { Disabled, Enabled }
|
Is there a (well hidden) generic enum anywhere in the BCL for Enabled/Disabled?
|
C#
|
Help me settle an argument here. Is this: treated exactly the same as this: i.e., does it make a difference if I state specifically that the string s is a const? And, if it is not treated in the same way, why not?
|
SqlCommand cmd = new SqlCommand("sql cmd", conn);

const string s = "sql cmd";
SqlCommand cmd = new SqlCommand(s, conn);
|
C# - Is this declared string treated as a const?
|
C#
|
In the code below, I am assigning a string to a text box. The text box text is wrapped, so words will be shifted to the next line if they cannot fit on the same line. C#: XAML: Now, with the example above, it may happen that the word "Eyes" is the only word on the last line in the text box, due to wrapping. If the last line has only one word, I would like to decrease the font size so that the last line has at least two words. So, in short, the last line should never have only one word; it may have two or more words. Example (wrong): Example (right): I am not asking how to increase/decrease the font or on what basis the new font size should be calculated; that is a different question that I need to figure out. But the first step of my problem is to find out whether there is a single word on the last line. How do I check if the last line in a text box has only one word?
|
textbox.Text = "Norma went to bed. It was eleven o'clock. She turned out the light. She lay in bed. It was dark. It was quiet. She couldn't sleep. She closed her eyes.";

<TextBox SelectionBrush="#FF54FF50" x:Name="textbox" Margin="10,53,0,0" FontSize="24" HorizontalAlignment="Left" Width="341" Height="285" VerticalAlignment="Top" TextChanged="Textbox_TextChanged" IsReadOnly="True" CaretBrush="Black" BorderBrush="Black" Foreground="Black" FontWeight="Bold" Grid.ColumnSpan="2" Padding="0,5,0,0" HorizontalContentAlignment="Center" VerticalContentAlignment="Center" VerticalScrollBarVisibility="Auto" TextWrapping="Wrap" />

Norma went to bed. It was
eleven o'clock. She turned out the light. She lay in bed. It was dark. It was quiet. She couldn't sleep. She closed her
eyes.

Norma went to bed. It was
eleven o'clock. She turned out the light. She lay in bed. It was dark. It was quiet. She couldn't sleep. She closed
her eyes.
|
How to check if the last line in a text box has only one word?
|
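A sketch of the first step, assuming a WPF TextBox (its LineCount property and GetLineText method report the lines as wrapped on screen; the helper name is hypothetical):

```csharp
// Returns true when the last rendered (wrapped) line contains exactly one word.
// Assumes a System.Windows.Controls.TextBox with TextWrapping="Wrap" whose
// layout has been updated (LineCount is -1 before layout runs).
bool LastLineHasOneWord(TextBox box)
{
    if (box.LineCount < 1)
        return false;

    string lastLine = box.GetLineText(box.LineCount - 1);
    string[] words = lastLine.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    return words.Length == 1;
}
```

With that predicate in hand, the font-size adjustment can loop: shrink the size, force a layout pass, and re-test until the last line holds at least two words.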
C#
|
I've noticed something odd about using the bitwise XOR operator on bytes in C#. Odd to my mind, at least. I also see this issue using short, but not int or long. I thought the last two lines were equivalent, but that doesn't seem to be the case. What's going on here?
|
byte a = 0x11;
byte b = 0xAA;
a ^= b;    // works
a = a ^ b; // compiler error: Cannot implicitly convert type "int" to "byte"
|
C# XOR operators: ^ vs ^= and implicit type conversion
|
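For reference: `a ^ b` on two bytes promotes both operands to int (there is no byte-sized `^` operator), so the result is an int, while the compound form `a ^= b` is defined by the language to include an implicit cast back to the target type. A sketch of the explicit equivalent:

```csharp
byte a = 0x11;
byte b = 0xAA;

// a ^= b is shorthand for the following, cast included:
a = (byte)(a ^ b);   // a is now 0xBB
```

The same promotion rule explains why short shows the issue while int and long don't: only the sub-int types get widened before the operator runs.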
C#
|
I always see people talking about using frameworks like Ninject, Unity, or Windsor to do dependency resolution and injection. Take the following code for example: My question is: why can't we simply write it as: In that case it seems we don't need any framework; even for unit tests we can easily mock. So what's the real purpose of those frameworks? Thanks in advance!
|
public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }
}

public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController() : this(null) { }

    public ProductsController(IProductRepository repository)
    {
        _repository = repository ?? new ProductRepository();
    }
}
|
Why do we need a framework to do dependency resolution?
|
C#
|
Given two implementations of comparison methods: Why won't the following conditional-operator code block compile: Compiler error: "Type of conditional expression cannot be determined because there is no implicit conversion between 'method group' and 'method group'". However, the equivalent code block using if-else does not have any issue (all is good in both assignments). And the conditional operator does work if I cast to the Comparison delegate (all is good in the assignment below, even though the cast was only on the true part):
|
// compares by Key...
private static int CompareByKey(KeyValuePair<int, string> x, KeyValuePair<int, string> y)
{
    return x.Key.CompareTo(y.Key);
}

// compares by Value...
private static int CompareByValue(KeyValuePair<int, string> x, KeyValuePair<int, string> y)
{
    return x.Value.CompareTo(y.Value);
}

Comparison<KeyValuePair<int, string>> sortMethod;
sortMethod = isSortByActualValue ? CompareByKey : CompareByValue;

Comparison<KeyValuePair<int, string>> sortMethod;
if (isSortByActualValue)
    sortMethod = CompareByKey;
else
    sortMethod = CompareByValue;

Comparison<KeyValuePair<int, string>> sortMethod;
sortMethod = isSortByActualValue
    ? (Comparison<KeyValuePair<int, string>>)CompareByKey
    : CompareByValue;
|
Conditional operator and Comparison Delegate
|
C#
|
I want to retrieve the number sequence at the end of a string. For example, the code below gives me the result 56, but I want the result to be 1234. How should I do this?
|
string contentDbIndex = Regex.Match("ab56cd1234", @"\d+").Value;
|
How to get the number at the end of a string?
|
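A sketch of one fix: anchoring the pattern with `$` makes the regex engine match the digit run that ends the string instead of the first digit run it encounters.

```csharp
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        // \d+$ only matches digits that run up to the end of the string.
        string contentDbIndex = Regex.Match("ab56cd1234", @"\d+$").Value;
        System.Console.WriteLine(contentDbIndex); // prints 1234
    }
}
```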
C#
|
I am building a little 2048 WinForms game just for fun. Note that this is not about a 2048 AI; I am just trying to make a 2048 game that can be played by humans. I first decided to use 0-17 to represent the tiles: 0 represents an empty tile, 1 represents a 2 tile, 2 represents a 4 tile, 3 represents an 8 tile, and so on. Then I was thinking about how to calculate the resulting board, given the direction of movement and the board before the move. Here's what I thought about: to move up, it's just rotating the board counterclockwise by 90 degrees, moving left, then rotating the board back; to move right, it's just rotating the board clockwise by 180 degrees, moving left, then rotating back; to move down, it's just rotating the board clockwise by 90 degrees, moving left, then rotating back. So I just need to figure out how to calculate the resulting board when the player moves left, and I can get the rest of the directions by rotating the board, moving left, and rotating back. I then came up with this quite bizarre algorithm for moving left:

1. Convert each of the initial board's integers into characters by adding 96 and casting to char. Now a backtick (`) represents an empty tile, a represents a 2 tile, b represents a 4 tile, and so on, all the way to p.
2. Concatenate the characters to form 4 strings, each representing a row of the board. An example board might look like this (first block below).
3. For each string, remove all the backticks.
4. Use the regex (yes, I'm using a regex in a 2048 game) ([a-p])\1 and get the first match of the string; replace the first match with the new tile; keep matching the rest of the string which hasn't been matched yet, until no more matches are found.
5. Pad the string to the right if it has fewer than 4 characters.
6. Turn each string back into an array of integers by subtracting 96.

So this is how I evaluate each row: However, there is a really big problem with my current algorithm. It only tells me the final result of a move, but I don't know which picture box (I'm using picture boxes to show the tiles) I need to move, how many spaces each picture box should move, and which picture boxes need to show a new image. I really don't want to use another solution; I just want to make some changes to my current one. Here are the things I need to get from each row (string):

- A List<(int x, int spaces)>. Each element represents which tile needs to move (the x coordinate) and how many spaces it should move (spaces).
- A List<int>. Each element represents the x coordinate of a tile that is merged into.

How can I get this information from a row string? Example: the row string shown last below will produce a list like [(1, 1), (3, 3)] and another list like [1].
|
aa``
````
```b
``cb

int[] EvaluateRow(int[] row)
{
    // RowToString converts an int[] to a string like I said above
    StringBuilder rowString = new StringBuilder(RowToString(row));
    rowString.Replace("`", "");
    var regex = new Regex("([a-p])\\1");
    int lastIndex = -1;
    while (true)
    {
        var match = regex.Match(rowString.ToString(), lastIndex + 1);
        if (match.Success)
        {
            // newChar is the new tile after the merge
            char newChar = (char)(match.Value[0] + 1);
            rowString.Remove(match.Index, match.Length);
            rowString.Insert(match.Index, newChar);
            lastIndex = match.Index;
            Score += // some calculation for score, irrelevant
        }
        else
        {
            break;
        }
    }
    // StringToRow converts a string to an int[]
    return StringToRow(rowString.ToString());
}

`a`a
|
How can I figure out which tiles move and merge in my implementation of 2048?
|
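One way to recover the animation data is to scan the raw row string (before stripping backticks) in a single left-to-right pass, tracking where each tile lands. This is a sketch, not the asker's code; it records merge positions as destination columns, which may need adjusting to match the exact indexing convention wanted in the question.

```csharp
using System.Collections.Generic;

// For each non-empty tile: (x, spaces) = original column and distance moved.
// merges holds the destination columns where two tiles combined.
(List<(int x, int spaces)> moves, List<int> merges) AnalyzeRow(string row)
{
    var moves = new List<(int x, int spaces)>();
    var merges = new List<int>();
    int target = 0;          // next free column on the left
    char pending = '\0';     // tile waiting for a possible merge partner
    int pendingTarget = -1;  // column where the pending tile lands

    for (int x = 0; x < row.Length; x++)
    {
        char c = row[x];
        if (c == '`') continue;                 // empty tile

        if (pending == c)                       // merges with the waiting tile
        {
            moves.Add((x, x - pendingTarget));  // slides onto the pending tile
            merges.Add(pendingTarget);
            pending = '\0';                     // a tile can only merge once
        }
        else
        {
            if (x != target) moves.Add((x, x - target));
            pending = c;
            pendingTarget = target;
            target++;
        }
    }
    return (moves, merges);
}
```

For the row "`a`a" this yields moves [(1, 1), (3, 3)], matching the question's first list; the merge column it reports is the landing position of the combined tile.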
C#
|
I am in the process of converting a Classic ASP/VBScript application to C#/ASP.NET. The VBScript part of the application is a series of individual scripts performed each month on a specific date, with each individual task set up in Windows Scheduler. There are about 35 tasks, which include database inserts (saving monthly historical data, saving summary billing data) as well as exporting a series of Crystal Reports as PDFs and Excel. Each task is scheduled about a minute apart, as I do not want to overwhelm the database server with simultaneous requests. As I approach this in C#, I wonder if there is a way to do this in one application rather than the separate-script approach I took with VBScript. I am comfortable with C#, as I have been using it to convert the website part of the application to ASP.NET MVC over the past year or so, but I have no experience with timed events in C#. I was thinking of approaching this by making a separate function for each task, with a sleep between tasks to allow enough time for the database processing to occur before moving on to the next task. Something like the following: This would be wrapped up in a console app, which itself would be set up in Windows Scheduler. Is this a sound approach? Are there better approaches? I have read about timers and tasks, but I am not sure if they are appropriate for what I am trying to accomplish.
|
p.DoTask1();
Thread.Sleep(60000);
p.DoTask2();
Thread.Sleep(60000);
p.DoTask3();
Thread.Sleep(60000);
// etc...
|
C# Scheduled Series of Tasks
|
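If blocking a thread in Thread.Sleep is a concern, the same strictly sequential schedule can be sketched with async/await; Task.Delay keeps the one-minute gap without tying up a thread during the waits. Method names here are taken from the question:

```csharp
using System;
using System.Threading.Tasks;

// Runs the monthly tasks in order, pausing between DB-heavy steps.
// Each task fully completes before the delay and the next task begin.
static async Task RunMonthlyTasksAsync(Program p)
{
    var tasks = new Action[] { p.DoTask1, p.DoTask2, p.DoTask3 /* , ... */ };
    foreach (var task in tasks)
    {
        task();
        await Task.Delay(TimeSpan.FromMinutes(1));
    }
}
```

For a console app launched once a month by Windows Scheduler, the original Thread.Sleep version is also workable; the async form mainly matters if the process needs to do anything else while waiting.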
C#
|
I'm converting a VB.NET app into C#, and have noticed that in the VB.NET code there is a private member variable which is initialised using Me, like this: When I convert this to C# code like this: I get the error "Argument is value while parameter type is ref". If I put ref in front of the parameter this, I get the error "cannot use this in member initializer". I've read here that members are initialized before the base class, and so this cannot be used in member initializers as it may not yet be initialised. My question is: why is it legal in VB.NET and not C#? Is this down to the compilers handling it differently? It seems weird that the two have different behaviours. To get around it, I guess I'll initialize the member in the constructor.
|
Private m_ClassA As New MyCollection(Of ClassA)(Me)

private MyCollection<ClassA> _classA = new MyCollection<ClassA>(this);
|
Why is 'this' not allowed in a C# member initializer, when Me is allowed in VB.NET?
|
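The usual workaround is the one the asker lands on at the end: move the initialization into the constructor, where `this` is available. A sketch with a hypothetical containing class:

```csharp
public class MyClass   // hypothetical containing class
{
    private readonly MyCollection<ClassA> _classA;

    public MyClass()
    {
        // 'this' is legal inside a constructor body, unlike in a
        // field initializer, because by now base-class construction
        // has started and the reference is usable.
        _classA = new MyCollection<ClassA>(this);
    }
}
```

Note the field can also become readonly, which the initializer form already permitted; only the location of the assignment changes.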
C#
|
I have two ServiceStack servers, X and Y. Server X has functionality to register and authenticate users. It has the RegistrationFeature, CredentialsAuthProvider, MemoryCacheClient and MongoDbAuthRepository features to handle authentication. Recently, I introduced server Y, and GUI forms that talk to server Y, to handle another part of my business domain. Server Y needs to make requests to authenticated endpoints on server X. How do I configure server Y in such a way that when it gets login requests from the GUI forms, it passes that responsibility to server X, which has access to the user information? I tried implementing a custom CredentialsAuthProvider in server Y like so: but later, when I try to make a request from a service in server Y to an authenticated endpoint in server X, I get an Unauthorized error.
|
public override bool TryAuthenticate(IServiceBase authService, string userName, string password)
{
    // authenticate through server X
    try
    {
        var client = new JsonServiceClient("http://localhost:8088");
        var createRequest = new Authenticate
        {
            UserName = userName,
            Password = password,
            provider = Name,
        };
        var authResponse = client.Post(createRequest);
        return true;
    }
    catch (WebServiceException ex)
    {
        // Unauthorized
        return false;
    }
}

public class MyServices2 : Service
{
    public object Any(TwoPhase request)
    {
        try
        {
            // make a request to server X on an authenticated endpoint
            var client = new JsonServiceClient("http://localhost:8088");
            var helloRequest = new Hello { Name = "user of server Y" };
            var response = client.Post(helloRequest);
            return new TwoPhaseResponse { Result = $"Server X says: {response.Result}" };
        }
        catch (WebServiceException e)
        {
            Console.WriteLine(e);
            throw;
        }
    }
    ...
}
|
Passthrough Authentication in ServiceStack
|
C#
|
I see several StackOverflow questions on this already, but none of them seem to match my scenario. I promise I looked. I have some queries against my database that I'm writing with LINQ, and I can't figure out why the incorrect SQL is being generated. This is happening in several places in my code. I'm hoping we're just falling into some well-known gotcha, but I can't wrap my head around why LINQ seemingly decides my where clause is dumb and shouldn't be added to the generated SQL query. Why is this? Example: the first query below returns the SQL shown after it. However, the second query generates the correct SQL shown last.
|
var testing = (from i in context.TableName1
               where i.Param1 == object1.GuidParam
               select i).ToList();

SELECT
    [Extent1].[RecordId] AS [RecordId],
    [Extent1].[AnotherId] AS [AnotherId],
    [Extent1].[YetAnotherId] AS [YetAnotherId],
    [Extent1].[WeLikeIds] AS [WeLikeIds],
    [Extent1].[WeReallyLikeIds] AS [WeReallyLikeIds]
FROM [dbo].[SomeTable] AS [Extent1]

var testing = (from i in context.TableName1
               where i.Param1 == object1.GuidParam
               select i);
var testingToList = testing.ToList();

SELECT
    [Extent1].[RecordId] AS [RecordId],
    [Extent1].[AnotherId] AS [AnotherId],
    [Extent1].[YetAnotherId] AS [YetAnotherId],
    [Extent1].[WeLikeIds] AS [WeLikeIds],
    [Extent1].[WeReallyLikeIds] AS [WeReallyLikeIds]
FROM [dbo].[SomeTable] AS [Extent1]
WHERE [Extent1].[RecordId] = '78e49f5c-0ff8-e311-93f4-00155d514a6d'
|
Why does LINQ ignore my where clause?
|
C#
|
I am new to C#, so please bear with me, as I have inherited a script that I'm attempting to tweak. I want to get the output of SQL PRINT/RAISERROR statements to show up in a log file that has been declared in another part of the script. This is the method I'm calling: This is the info handler method: Instead of outputting to the Console, I want to write to the LogFileNameAndPath variable via File.AppendAllText(LogFileNameAndPath, err.Message); however, I have looked at many posts over the web and NOBODY provides a solution. Is there a way to do this? Please be kind. Thanks!

[ADDED 2015-07-27 16:06 EDT] If I change this line: ...to... ...it fails to compile. How does LogFileNameAndPath get passed to the method? Here's the new method:
|
public void ProcessData(string StoredProcedure, int StartDate, int EndDate, string Directory, string LogFileNameAndPath)
{
    SqlConnection sqlConnection = null;
    SqlCommand sqlCommand = null;
    SqlParameter sqlParameter = null;
    // String outputText = null;
    try
    {
        sqlConnection = new SqlConnection(_ConnectionString);
        sqlConnection.Open();
        sqlCommand = new SqlCommand();
        sqlCommand.CommandType = CommandType.StoredProcedure;
        sqlCommand.CommandText = StoredProcedure;
        sqlCommand.Connection = sqlConnection;
        sqlCommand.CommandTimeout = 0;

        sqlParameter = new SqlParameter("@StartDt", SqlDbType.Int);
        sqlParameter.Value = StartDate;
        sqlCommand.Parameters.Add(sqlParameter);

        sqlParameter = new SqlParameter("@EndDt", SqlDbType.Int);
        sqlParameter.Value = EndDate;
        sqlCommand.Parameters.Add(sqlParameter);

        sqlParameter = new SqlParameter("@stringDirs", SqlDbType.VarChar);
        sqlParameter.Value = Directory;
        sqlCommand.Parameters.Add(sqlParameter);

        sqlConnection.InfoMessage += new SqlInfoMessageEventHandler(OnInfoMessage);
        sqlCommand.ExecuteNonQuery();
    }
    catch (SqlException sqlEx)
    {
        throw sqlEx;
    }
    catch (Exception ex)
    {
        throw new Exception(ex.ToString());
    }
    finally
    {
        if (sqlConnection != null)
        {
            if (sqlConnection.State != ConnectionState.Closed)
            {
                sqlConnection.Close();
            }
        }
    }
}

public void OnInfoMessage(object sender, SqlInfoMessageEventArgs args) // , String LogFileNameAndPath)
{
    foreach (SqlError err in args.Errors)
    {
        //File.AppendAllText(LogFileNameAndPath, err.Message);
        Console.WriteLine("{0}", err.Message);
        //return err.Message;
        // "The {0} has received a severity {1}, state {2} error number {3}\n" +
        // "on line {4} of procedure {5} on server {6}:\n{7}",
        // err.Source, err.Class, err.State, err.Number, err.LineNumber,
        // err.Procedure, err.Server, err.Message);
    }
}

sqlConnection.InfoMessage += new SqlInfoMessageEventHandler(OnInfoMessage);

sqlConnection.InfoMessage += new SqlInfoMessageEventHandler(OnInfoMessage(LogFileNameAndPath));

public void OnInfoMessage(object sender, SqlInfoMessageEventArgs args, String LogFileNameAndPath)
{
    foreach (SqlError err in args.Errors)
    {
        File.AppendAllText(@LogFileNameAndPath, err.Message);
        //Console.WriteLine("{0}", err.Message);
    }
}
|
C# output SQL Server messages to text file
|
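The event signature is fixed by SqlInfoMessageEventHandler, so a third parameter cannot be added to the handler; one way around that is to subscribe with a lambda that closes over the local LogFileNameAndPath. A sketch inside ProcessData:

```csharp
// The lambda keeps the required (object, SqlInfoMessageEventArgs) shape,
// but can still see LogFileNameAndPath from the enclosing method.
sqlConnection.InfoMessage += (sender, args) =>
{
    foreach (SqlError err in args.Errors)
    {
        File.AppendAllText(LogFileNameAndPath, err.Message + Environment.NewLine);
    }
};
```

This is why `OnInfoMessage(LogFileNameAndPath)` fails to compile: the `+=` needs a delegate matching the event's signature, not the result of calling a method.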
C#
|
At the risk of asking a question that has already been asked: is there a counterpart in Java for the Type type available in C#? What I want to do is fill an array with elements which reflect several primitive types such as int, byte, etc. In C# it would be the following code:
|
Type[] types = new Type[] { typeof(int), typeof(byte), typeof(short) };
|
type of types in Java
|
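Since this question is explicitly about Java, here is a sketch on that side: the closest counterpart to C#'s Type is java.lang.Class<?>, and the primitive types are represented by class literals such as int.class.

```java
public class TypeArrayDemo {
    public static void main(String[] args) {
        // Class<?> plays the role of C#'s Type; int.class etc. are the
        // Class objects for the primitive types themselves.
        Class<?>[] types = { int.class, byte.class, short.class };
        for (Class<?> t : types) {
            System.out.println(t.getName());
        }
    }
}
```

Note that int.class is distinct from Integer.class, the wrapper's Class object; the primitive literals match C#'s typeof(int) most closely here.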
C#
|
I am working on some code to use HttpWebRequest asynchronously. If any of you have ever done this before, then you know that error handling can be a bit of a pain, because if an exception is thrown in one of the callback methods, it can't be passed back to the calling code via a try/catch block. What I want to do is handle errors by saving exceptions in my state object that gets passed to each callback method. If an exception is caught, the state object will be updated and then the HTTP call will be aborted. The problem I have is that in my state object, I have to use an Exception property so that any type of exception can be stored. When the calling code checks the state object and "sees" an Exception, it doesn't know what type of exception it is. Is there a way to allow my state object to hold any type of exception, but still keep the exception strongly typed?

State object:
|
public class HttpPostClientAsyncModel
{
    public HttpResponseSnapshot Response { get; set; }
    public HttpPostClientAsyncStatus Status { get; set; }
    public Exception Exception { get; set; }
    public WebRequest Request { get; set; }
}
|
What's the best way to handle asynchronous HttpWebRequest exceptions in C#?
|
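For what it's worth, the property can stay typed as Exception without losing the concrete type: the runtime object keeps its actual type, and callers can test for it or rethrow. A sketch (ExceptionDispatchInfo requires .NET 4.5; `asyncModel` stands for the state object from the question):

```csharp
using System.Net;
using System.Runtime.ExceptionServices;

// Recover the concrete type with a test...
var webEx = asyncModel.Exception as WebException;
if (webEx != null)
{
    Console.WriteLine(webEx.Status);   // strongly-typed access
}
// ...or rethrow in the calling code, preserving the original stack trace,
// so an ordinary try/catch with typed catch blocks works again.
else if (asyncModel.Exception != null)
{
    ExceptionDispatchInfo.Capture(asyncModel.Exception).Throw();
}
```

On frameworks older than 4.5, a plain `throw asyncModel.Exception;` also re-dispatches to typed catch blocks, at the cost of resetting the stack trace.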
C#
|
Given the following setup in TPL Dataflow, I am wondering how I can mark this as complete, because of the cycle. A directory is posted to the dirBroadcast broadcaster, which posts to the dirfinder, which might post new directories back to the broadcaster, so I can't simply mark it as complete, because that would block any directories being added from the dirfinder. Should I redesign it to keep track of the number of directories, or is there anything in TPL for this?
|
var directory = new DirectoryInfo(@"C:\dev\kortforsyningen_dsm\tiles"); var dirBroadcast = new BroadcastBlock<DirectoryInfo>(dir => dir); var dirfinder = new TransformManyBlock<DirectoryInfo, DirectoryInfo>((dir) => { return dir.GetDirectories(); }); var tileFilder = new TransformManyBlock<DirectoryInfo, FileInfo>((dir) => { return dir.GetFiles(); }); dirBroadcast.LinkTo(dirfinder); dirBroadcast.LinkTo(tileFilder); dirfinder.LinkTo(dirBroadcast); var block = new XYZTileCombinerBlock<FileInfo>(3, (file) => { var coordinate = file.FullName.Split('\\').Reverse().Take(3).Reverse().Select(s => int.Parse(Path.GetFileNameWithoutExtension(s))).ToArray(); return XYZTileCombinerBlock<CloudBlockBlob>.TileXYToQuadKey(coordinate[0], coordinate[1], coordinate[2]); }, (quad) => XYZTileCombinerBlock<FileInfo>.QuadKeyToTileXY(quad, (z, x, y) => new FileInfo(Path.Combine(directory.FullName, string.Format("{0}/{1}/{2}.png", z, x, y)))), () => new TransformBlock<string, string>((s) => { Trace.TraceInformation("Combining {0}", s); return s; })); tileFilder.LinkTo(block); using (new TraceTimer("Time")) { dirBroadcast.Post(directory); block.LinkTo(new ActionBlock<FileInfo>((s) => { Trace.TraceInformation("Done combining: {0}", s.Name); })); block.Complete(); block.Completion.Wait(); }
|
How to mark a TPL dataflow cycle to complete?
|
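One way an answer might approach completion in a cyclic graph (a sketch with a hypothetical in-flight counter, not the poster's pipeline): count directories entering the cycle, decrement as each finishes, and call Complete() on the broadcast block when the count reaches zero.

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks.Dataflow;

class Demo
{
    static int pending = 1; // the root directory is already in flight

    static void Main()
    {
        BroadcastBlock<DirectoryInfo> broadcast = null;
        broadcast = new BroadcastBlock<DirectoryInfo>(d => d);
        var finder = new TransformManyBlock<DirectoryInfo, DirectoryInfo>(dir =>
        {
            var subs = dir.GetDirectories();
            // Register children before retiring the parent, so the count
            // can only hit zero when the whole tree has been walked.
            Interlocked.Add(ref pending, subs.Length);
            if (Interlocked.Decrement(ref pending) == 0) broadcast.Complete();
            return subs;
        });
        broadcast.LinkTo(finder, new DataflowLinkOptions { PropagateCompletion = true });
        finder.LinkTo(broadcast);

        broadcast.Post(new DirectoryInfo("."));
        finder.Completion.Wait();
        Console.WriteLine("done");
    }
}
```

With PropagateCompletion on the broadcast-to-finder link, completing the broadcaster drains the cycle and completes the finder, so downstream blocks can propagate completion normally.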
C#
|
The first enum below compiles, but (in my view) shouldn't; the second errors, but shouldn't. Compiler error text: "-2147483648 cannot be converted to a ulong". Question: I would expect the opposite to occur. Can anyone explain why this is? Also, how can I print this flags value to a byte[] for inspection?
|
[Flags] enum TransactionData : long /* 64 bits. Last bit is the sign bit, but I'm putting data there */ { None = 0, Color1 = 1 << 63, } [Flags] enum TransactionData : ulong /* 64 bits. No sign bit. Not allowed to put data there */ { None = 0, Color1 = 1 << 63, } var eee = TransactionData.None | TransactionData.Color1; /* How do I convert eee to byte[]? */
|
Unexpected behavior between [Flags] enum : long vs [Flags] enum : ulong
|
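The core of the observed behavior: `1 << 63` is an int shift, and int shift counts are masked to 5 bits, so `1 << 63` is really `1 << 31`, which is -2147483648. That value implicitly converts to long (so the long enum compiles, with the wrong value) but not to ulong (so the ulong enum errors). A sketch of the fix, plus one way to dump the value to bytes:

```csharp
using System;

[Flags]
enum TransactionData : ulong
{
    None = 0,
    // A ulong literal makes the shift happen in 64-bit arithmetic,
    // where shift counts are masked to 6 bits instead of 5.
    Color1 = 1UL << 63,
}

class Demo
{
    static void Main()
    {
        var eee = TransactionData.None | TransactionData.Color1;
        byte[] bytes = BitConverter.GetBytes((ulong)eee); // 8 bytes, machine endianness
        Console.WriteLine(BitConverter.ToString(bytes));  // 00-00-00-00-00-00-00-80 on little-endian
    }
}
```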
C#
|
I've been struggling with a problem when downloading very big files (>2GB) in Silverlight. My application is an out-of-browser download manager running with elevated permissions. When the file reaches a certain amount of data (2GB), it throws the following exception: ... The only clue I have is this site, which shows the BeginRead implementation. This exception only occurs when count is < 0. My code is below. The error occurs with different kinds of files, and they come from public buckets on Amazon S3 (with regular HTTP requests).
|
System.ArgumentOutOfRangeException was caught Message=Specified argument was out of the range of valid values. Parameter name: count StackTrace: in MS.Internal.InternalNetworkStream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state) in MS.Internal.InternalNetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count) in MySolution.DM.Download.BeginResponseCallback(IAsyncResult ar) InnerException: Null /* "Target" is a File object. "source" is a Stream object */ var buffer = new byte[64 * 1024]; int bytesRead; Target.Seek(0, SeekOrigin.End); /* The file might exist when resuming a download */ /* The exception is thrown from inside "source.Read" */ while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0) { Target.Write(buffer, 0, bytesRead); _fileBytes = Target.Length; Deployment.Current.Dispatcher.BeginInvoke(() => { DownloadPercentual = Double.Parse(Math.Round((decimal)(_fileBytes / (_totalSize / 100)), 5).ToString()); }); } Target.Close(); logFile.Close();
|
ArgumentOutOfRangeException when downloading file via Stream.Read
|
C#
|
Is there any way to write generic programs and algorithms in C# while avoiding the overhead of a dynamic solution? Consider a simple example: ... which you might call as: ... While seemingly efficient, this benign-looking example performs an indirect (i.e. virtual) call for every comparison. Obviously, the processor cannot optimize indirect calls, and therefore they perform poorly. On my computer, this translates into a 25% decrease in performance, from about 3,600 items/ms to 2,700 items/ms. Is there any way to avoid such indirect calls in writing generic code? No matter how much juggling I do with delegates, DynamicMethod, and the like, it seems like there is always an indirect call between the library code and the user's code, which obviously impacts performance very negatively.
|
static void QuickSort<T>(T[] arr, int left, int right, Comparison<T> compare) { do { int i = left; int j = right; var x = arr[i + ((j - i) >> 1)]; do { while (i < arr.Length && compare(x, arr[i]) > 0) i++; while (j >= 0 && compare(x, arr[j]) < 0) j--; if (i > j) { break; } if (i < j) { var temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; } i++; j--; } while (i <= j); if (j - left <= right - i) { if (left < j) QuickSort(arr, left, j, compare); left = i; } else { if (i < right) QuickSort(arr, i, right, compare); right = j; } } while (left < right); } QuickSort(buffer, 0, buffer.Length - 1, (a, b) => a.CompareTo(b));
|
How to write generic code while avoiding indirect calls?
|
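A technique an answer might propose: accept the comparer as a struct implementing a constrained interface. The JIT generates a specialized body per value-type instantiation, so the interface call on a struct comparer compiles to a direct (often inlined) call rather than an indirect one. A sketch with a simpler sort, to show only the calling convention:

```csharp
using System;
using System.Collections.Generic;

struct IntComparer : IComparer<int>
{
    public int Compare(int x, int y) => x.CompareTo(y);
}

static class Sorter
{
    // Because TComp is a struct, the JIT specializes this method and
    // cmp.Compare becomes a direct call, not a virtual dispatch.
    public static void InsertionSort<T, TComp>(T[] arr, TComp cmp)
        where TComp : struct, IComparer<T>
    {
        for (int i = 1; i < arr.Length; i++)
        {
            T key = arr[i];
            int j = i - 1;
            while (j >= 0 && cmp.Compare(arr[j], key) > 0)
            {
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = key;
        }
    }
}

class Demo
{
    static void Main()
    {
        var data = new[] { 3, 1, 2 };
        Sorter.InsertionSort(data, new IntComparer());
        Console.WriteLine(string.Join(",", data)); // 1,2,3
    }
}
```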
C#
|
Whenever I try to upload a large video via direct upload using the YouTube API, I get an OutOfMemoryException. Is there anything I can do to get rid of this? The YouTube API documentation does not say anything about a video size limit for direct upload. I gave up on direct upload and am now trying the resumable upload way. My code is below. Can anyone tell me what is wrong? My 2GB video is not uploading.
|
YouTubeRequest request; YouTubeRequestSettings settings = new YouTubeRequestSettings("YouTube Upload", ClientKey, "Username", "Password"); request = new YouTubeRequest(settings); Video newVideo = new Video(); ResumableUploader m_ResumableUploader = null; Authenticator YouTubeAuthenticator; m_ResumableUploader = new ResumableUploader(256); /* chunk size: 256 kilobytes */ m_ResumableUploader.AsyncOperationCompleted += new AsyncOperationCompletedEventHandler(m_ResumableUploader_AsyncOperationCompleted); m_ResumableUploader.AsyncOperationProgress += new AsyncOperationProgressEventHandler(m_ResumableUploader_AsyncOperationProgress); YouTubeAuthenticator = new ClientLoginAuthenticator("YouTubeUploader", ServiceNames.YouTube, "kjohnson@resoluteinnovations.com", "password"); /* AtomLink link = new AtomLink("http://uploads.gdata.youtube.com/resumable/feeds/api/users/uploads"); link.Rel = ResumableUploader.CreateMediaRelation; newVideo.YouTubeEntry.Links.Add(link); */ System.IO.FileStream stream = new System.IO.FileStream(filePath, System.IO.FileMode.Open, System.IO.FileAccess.Read); byte[] chunk = new byte[256000]; int count = 1; while (true) { int index = 0; while (index < chunk.Length) { int bytesRead = stream.Read(chunk, index, chunk.Length - index); if (bytesRead == 0) { break; } index += bytesRead; } if (index != 0) { /* Our previous chunk may have been the last one */ newVideo.MediaSource = new MediaFileSource(new MemoryStream(chunk), filePath, "video/quicktime"); if (count == 1) { m_ResumableUploader.InsertAsync(YouTubeAuthenticator, newVideo.YouTubeEntry, new MemoryStream(chunk)); count++; } else m_ResumableUploader.ResumeAsync(YouTubeAuthenticator, new Uri("http://uploads.gdata.youtube.com/resumable/feeds/api/users/uploads"), "POST", new MemoryStream(chunk), "video/quicktime", new object()); } if (index != chunk.Length) { /* We didn't read a full chunk: we're done */ break; } }
|
YouTube Direct Upload - OutOfMemoryException
|
C#
|
I'm using Facebook as a login provider for my web application (ASP.NET MVC). My login works similarly to another Stack Overflow post, "How to securely authorize a user via Facebook's Javascript SDK", and I share that user's concerns. The flow for my login is as follows: 1. The user presses the login button. 2. The user must accept the app. 3. A JavaScript callback retrieves the response. Object returned: ... I've heard that I can use the signed_request to validate the user's request, but all the examples online are for PHP. How do I do this in .NET?
|
var authResponse = response.authResponse; { accessToken: "...", expiresIn: 1234, signedRequest: "...", userID: "123456789" }
|
How do I use a Facebook signed_request in .NET?
|
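The PHP examples translate fairly directly. A hedged C# sketch (names are illustrative; the app secret comes from the Facebook app settings): split the signed_request on '.', base64url-decode both halves, recompute an HMAC-SHA256 of the payload with the app secret, and compare before trusting the decoded JSON:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class SignedRequest
{
    static byte[] Base64UrlDecode(string s)
    {
        // Facebook uses URL-safe base64 without padding.
        s = s.Replace('-', '+').Replace('_', '/');
        switch (s.Length % 4) { case 2: s += "=="; break; case 3: s += "="; break; }
        return Convert.FromBase64String(s);
    }

    // Returns the decoded JSON payload if the signature checks out, otherwise null.
    public static string ValidateAndDecode(string signedRequest, string appSecret)
    {
        var parts = signedRequest.Split('.');
        if (parts.Length != 2) return null;
        byte[] signature = Base64UrlDecode(parts[0]);
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(appSecret)))
        {
            byte[] expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(parts[1]));
            if (signature.Length != expected.Length) return null;
            for (int i = 0; i < expected.Length; i++)
                if (signature[i] != expected[i]) return null;
        }
        return Encoding.UTF8.GetString(Base64UrlDecode(parts[1]));
    }
}
```

The returned JSON can then be deserialized with any JSON library; per Facebook's docs the payload's algorithm field should also say HMAC-SHA256.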
C#
|
How does the is operator work with respect to the DLR? To make my question a little more explicit, consider the following signature: ... By default, what conditions are necessary for Is<T> to return true? Furthermore, does the DLR provide any mechanism to customize this behavior?
|
public bool Is<T>(Func<dynamic> getInstance) { return getInstance() is T; }
|
How does the "is" operator work with dynamic objects?
|
C#
|
I have a simple parent/child table pair in a database, shown below, along with the data in them. These tables are mapped to Parent and Child C# objects using the LINQ to SQL designer in Visual Studio with no non-standard options. I made a simple test program to query all children with their parents. The output of the program is shown below. As you can see, a Parent object was created for every Child in the database, only to be discarded eventually. Questions: Why does LINQ to SQL create these unnecessary extra Parent objects? Are there any options to avoid creation of the extra Parent objects?
|
CREATE TABLE [Parent] ([Id] [int] IDENTITY(1,1) NOT NULL, [Name] [nvarchar](256) NOT NULL) ALTER TABLE [Parent] ADD CONSTRAINT [PK_Parent_Id] PRIMARY KEY ([Id]) CREATE TABLE [Child] ([Id] [int] IDENTITY(1,1) NOT NULL, [ParentId] [int] NOT NULL, [Name] [nvarchar](256) NOT NULL) ALTER TABLE [Child] ADD CONSTRAINT [PK_Child_Id] PRIMARY KEY ([Id]) ALTER TABLE [Child] ADD CONSTRAINT [FK_Child_Parent_ID] FOREIGN KEY ([ParentId]) REFERENCES [Parent] ([Id]) Id Name 1 John Id ParentId Name 1 1 Mike 2 1 Jake 3 1 Sue 4 1 Liz public partial class Parent { static int counter = 0; /* default OnCreated created by the LINQ to SQL designer */ partial void OnCreated() { Console.WriteLine(string.Format("CreatedParent {0} hashcode={1}", ++counter, GetHashCode())); } } class Program { static void Main(string[] args) { using (var db = new SimpleDbDataContext()) { DataLoadOptions opts = new DataLoadOptions(); opts.LoadWith<Child>(c => c.Parent); db.LoadOptions = opts; var allChildren = db.Childs.ToArray(); foreach (var child in allChildren) { Console.WriteLine(string.Format("Parent name={0} hashcode={1}", child.Parent.Name, child.Parent.GetHashCode())); } } } } CreatedParent 1 hashcode=53937671 CreatedParent 2 hashcode=9874138 CreatedParent 3 hashcode=2186493 CreatedParent 4 hashcode=22537358 Parent name=John hashcode=53937671 Parent name=John hashcode=53937671 Parent name=John hashcode=53937671 Parent name=John hashcode=53937671
|
Why does LINQ to SQL create extra unnecessary objects?
|
C#
|
I read that directly blocking on a Task can sometimes lead to a deadlock of the main thread. Here's my async method: ... I tried a lot of ways to run this task in a sync function; here are some examples: ... I want to know which is the better solution to run the async method synchronously in syncFoo() without causing deadlocks. Should I do it like in syncFoo2()? PS: syncFoo() is called from a Windows service's OnStart() and OnStop().
|
public async Task<List<JobsWithSchedules>> fillJobsAsync() { IOlapJobAccess jobAccess = new OlapJobAccess(_proxy, CentralPointPath); List<OlapJob> jobs = await jobAccess.GetAllJobsAsync(); List<JobsWithSchedules> quartzJobs = null; if (jobs != null) { quartzJobs = fillQuartzJobs(jobs); } return quartzJobs; } public void syncFoo1() { var fillJobsTask = fillJobsAsync().ContinueWith((task) => { if (task.Status == TaskStatus.RanToCompletion && task.Result != null) { List<JobsWithSchedules> quartzJobs = task.Result; /* ... */ } else { /* ... */ } }); fillJobsTask.Wait(); } public void syncFoo2() { Task.Run(() => fillJobsAsync()).ContinueWith((task) => { if (task.Status == TaskStatus.RanToCompletion && task.Result != null) { List<JobsWithSchedules> quartzJobs = task.Result; /* ... */ } else { /* ... */ } }); }
|
Prevent deadlock by running a Task synchronously - Windows Service
|
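For code called from a service's OnStart/OnStop (no UI or ASP.NET SynchronizationContext to deadlock against), a common pattern an answer might suggest is to push the work to the thread pool and block on the result; GetAwaiter().GetResult() also unwraps the AggregateException that Task.Wait would throw. A sketch with a stand-in async method:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Demo
{
    // Stand-in for fillJobsAsync(); the real method talks to the OLAP service.
    static async Task<List<string>> FillJobsAsync()
    {
        await Task.Delay(10);
        return new List<string> { "job1", "job2" };
    }

    static void SyncFoo()
    {
        // Task.Run avoids capturing the caller's context, so awaits inside
        // FillJobsAsync resume on the pool and cannot deadlock the caller.
        List<string> jobs = Task.Run(() => FillJobsAsync()).GetAwaiter().GetResult();
        Console.WriteLine(jobs.Count);
    }

    static void Main() => SyncFoo();
}
```

Note syncFoo2() in the question never waits at all, so OnStart would return before the jobs are loaded; blocking explicitly, as above, keeps the service lifecycle honest.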
C#
|
I have a list of bools and a list of strings. I want to use Enumerable.Zip to combine the lists, so that if the value at each index of the first list is true, the result contains the corresponding item from the second list. In other words: ... The simplest solution I could come up with is: ... but I suspect there's a simpler way to do this. Is there?
|
List<bool> listA = new List<bool> { true, false, true, false }; List<string> listB = new List<string> { "alpha", "beta", "gamma", "delta" }; IEnumerable<string> result = listA.Zip(listB, [something]); /* result contains "alpha", "gamma" */ listA.Zip(listB, (a, b) => a ? b : null).Where(a => a != null);
|
LINQ: Exclude results using Zip
|
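Since both sources are lists, an answer might skip Zip entirely and filter listB with the Where overload that passes the element's index, testing the corresponding flag in listA. A sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    static void Main()
    {
        var listA = new List<bool> { true, false, true, false };
        var listB = new List<string> { "alpha", "beta", "gamma", "delta" };

        // Where((item, index) => ...) keeps items whose flag is true.
        IEnumerable<string> result = listB.Where((item, index) => listA[index]);
        Console.WriteLine(string.Join(",", result)); // alpha,gamma
    }
}
```

Unlike the null-sentinel approach, this also works when listB can legitimately contain nulls.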
C#
|
When I use expression trees to replace a method such as Math.Max, it looks like the method is successfully replaced in the expression tree. But when I go to use it in Entity Framework, it throws an exception about Math.Max not being supported by Entity Framework, even though I am explicitly replacing it. Does anyone know why? And a way to fix the code? If you call ... it will work. But if you replace the above GetTestItems() with an Entity Framework _dbContext.Items, it will not work. To test this code, you would need to add the Item structure to the EF project, make a migration, and push it into the database. I wish I could make this less technical so this question could be answered by a wider variety of people. Hopefully the bounty is adequate for an answer. If not, please PM me.
|
using System; using System.Collections.Generic; using System.Linq; using System.Linq.Expressions; namespace ConsoleApplication1 { public static class CalculateDatabase { public static void Main(string[] args) { var calcs = GetCalculateToAmounts(GetTestItems(), 0.5m).ToList(); } public static IQueryable<Item> GetTestItems() { var items = new List<Item>(); items.Add(new Item() { DoNotItem = true, ReductionAmount = 2, PreviousDiscountAmount = 3, CurrentDiscountAmount = 10, CurrentAmount = 100, PreviousAmount = 50, CurrentBillAmount = 75 }); return items.AsQueryable(); } public class Item { public bool DoNotItem { get; set; } public decimal ReductionAmount { get; set; } public decimal PreviousDiscountAmount { get; set; } public decimal CurrentDiscountAmount { get; set; } public decimal CurrentAmount { get; set; } public decimal PreviousAmount { get; set; } public decimal CurrentBillAmount { get; set; } } public static IQueryable<CalculateToAmount> GetCalculateToAmounts(this IQueryable<Item> entityItems, decimal percentage) { return entityItems.Select(CalculateAmountExpression(percentage)); } public class CalcType { } public class CalculateToAmount { public CalcType CalcType { get; set; } public Item Item { get; set; } public decimal ItemAmount1 { get; set; } public decimal ItemAmount2 { get; set; } public decimal ItemAmount3 { get; set; } public decimal Bonus { get; set; } public decimal Discounts { get; set; } public decimal Total { get; set; } } private static Expression<Func<Item, CalculateToAmount>> CalculateAmountExpression(this decimal percentage) { Expression<Func<Item, CalculateToAmount>> lambda = item => new CalculateToAmount() { Item = item, Bonus = item.DoNotItem ? 0 : item.CurrentBillAmount * (1 - percentage) + item.ReductionAmount, Discounts = item.PreviousDiscountAmount + item.CurrentDiscountAmount, Total = Math.Max(item.CurrentAmount + item.PreviousAmount, item.CurrentBillAmount) }; var test = MathModifier.Modify(lambda); return test; } public class MathModifier : ExpressionVisitor { protected override Expression VisitMethodCall(MethodCallExpression node) { var isMinMethod = node.Method.Name.Equals("Min", StringComparison.InvariantCultureIgnoreCase); var isMaxMethod = node.Method.Name.Equals("Max", StringComparison.InvariantCultureIgnoreCase); if (!isMinMethod && !isMaxMethod) return base.VisitMethodCall(node); var left = node.Arguments[0]; var right = node.Arguments[1]; var minMaxReplaceMethod = isMinMethod ? Expression.Condition(Expression.LessThan(left, right), left, right) : Expression.Condition(Expression.GreaterThan(left, right), left, right); return minMaxReplaceMethod; } public static Expression<Func<TIn, TOut>> Modify<TIn, TOut>(Expression<Func<TIn, TOut>> expression) { var modifier = new MathModifier(); return (Expression<Func<TIn, TOut>>)modifier.Visit(expression); } } } } var calcs = GetCalculateToAmounts(GetTestItems(), 0.5m).ToList();
|
Expression Tree - Math.Max replacement
|
C#
|
Suppose I have three objects: 'a', 'b' and 'c'. Objects 'a' and 'c' are long-lived, statically referenced service singletons. Object 'b' is short-lived, i.e. no static references keep it alive. Now suppose object 'a' creates an instance of object 'b' in the scope of one of its methods, e.g. ... Further suppose that class B looks something like this: ... Now, how long does object 'b' live? My presumption is that it lives beyond the scope of the method that called its constructor; specifically, for as long as its method is still in the ActionList of object 'c'. Is that correct? If not, and it gets garbage collected, what happens when 'c' runs all the methods in its ActionList? Bonus question: what if the method on 'b' is not named, but anonymous and written in the constructor as follows:
|
B b = new B(); public B() { C.ActionList.Add(SomeMethod); } void SomeMethod() { ... } public B() { C.ActionList.Add(() => { ... }); }
|
Does passing a method of one object to another object keep the first object alive?
|
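The presumption can be checked directly: a delegate created from an instance method stores a reference to its target instance in its Target property, so adding b.SomeMethod to a long-lived list keeps 'b' reachable. A sketch using WeakReference (forcing collections is for demonstration only):

```csharp
using System;
using System.Collections.Generic;

class B
{
    public void SomeMethod() { Console.WriteLine("still alive"); }
}

class Demo
{
    static readonly List<Action> ActionList = new List<Action>(); // stands in for C.ActionList

    static WeakReference MakeB()
    {
        var b = new B();
        ActionList.Add(b.SomeMethod); // the delegate's Target references b
        return new WeakReference(b);
    }

    static void Main()
    {
        WeakReference wr = MakeB();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine(wr.IsAlive); // True: the delegate in ActionList roots b
        ActionList[0]();
    }
}
```

The same holds for the anonymous-lambda variant whenever the lambda captures `this` or instance state; a lambda that captures nothing from 'b' would not keep 'b' alive.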
C#
|
I'm in the process of designing a system that will allow me to represent broad-scope tasks as workflows, which expose their work items via an IEnumerable method. The intention here is to use C#'s yield mechanism to allow me to write pseudo-procedural code that the workflow execution system can execute as it sees fit. For example, say I have a workflow that includes running a query on the database and sending an email alert if the query returns a certain result. This might be the workflow: ... CheckInventory and EmailWarehouse are objects deriving from WorkItem, which has an abstract Execute() method that the subclasses implement, encapsulating the behavior for those actions. The Execute() method gets called in the workflow framework; I have a WorkflowRunner class which enumerates the Workflow(), wraps pre- and post-events around the work item, and calls Execute in between the events. This allows the consuming application to do whatever it needs before or after work items, including canceling, changing work item properties, etc. The benefit to all this, I think, is that I can express the core logic of a task in terms of the work items responsible for getting the work done, and I can do it in a fairly straightforward, almost procedural way. Also, because I'm using IEnumerable and C#'s syntactic sugar that supports it, I can compose these workflows: higher-level workflows that consume and manipulate sub-workflows. For example, I wrote a simple workflow that just interleaves two child workflows together. My question is this: does this sort of architecture seem reasonable, especially from a maintainability perspective? It seems to achieve several goals for me: self-documenting code (the workflow reads procedurally, so I know what will be executed in what steps), separation of concerns (finding low-inventory items does not depend on sending email to the warehouse), etc. Also, are there any potential problems with this sort of architecture that I'm not seeing? Finally, has this been tried before; am I just rediscovering this?
|
public override IEnumerable<WorkItem> Workflow() { /* These would probably be injected from elsewhere */ var db = new DB(); var emailServer = new EmailServer(); /* other work items here */ var ci = new FindLowInventoryItems(db); yield return ci; if (ci.LowInventoryItems.Any()) { var email = new SendEmailToWarehouse("Inventory is low.", ci.LowInventoryItems); yield return email; } /* other work items here */ }
|
Is this good design of a workflow-esque system?
|
C#
|
I have some working code which produces a correct signature of a string if I load a certificate from a file or from the current user's store. However, if I load the exact same certificate (same .p12 and same thumbprint) from the machine certificate store, it behaves differently. When loaded from that store, the signatures generated by my C# code are half the length (1024 bits instead of 2048) and are incorrect. The private key appears to load properly in both cases. Why does the store the certificate is loaded from make any difference to which signature is generated? And why would the signature be half the length? Loaded from CurrentUser (correct): ... Loaded from LocalMachine (incorrect; note the 1024-bit key size and signature length): ... Here's the C# I'm using:
|
Thumbprint: FBBE05A1C5F2AEF637CDE20A7985CD1011861651 Has private key: True rsa.KeySize (bits) = 2048 Signature Length (bits): 2048 Signature: kBC2yh0WCo/AU8aVo+VUbRoh67aIJ7SWM4dRMkNvt... Thumbprint: FBBE05A1C5F2AEF637CDE20A7985CD1011861651 Has private key: True rsa.KeySize (bits) = 1024 Signature Length (bits): 1024 Signature: RijmdQ73DXHK1IUYkOzov2R+WRdHW8tLqsH... string s = "AE0DE01564,1484821101811,http://localhost:8080/example_site/CallBack"; var inputData = Encoding.UTF8.GetBytes(s); var store = new X509Store(StoreName.My, StoreLocation.LocalMachine); store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly); string thumbprint = CleanThumbPrint("fb be 05 a1 c5 f2 ae f6 37 cd e2 0a 79 85 cd 10 11 86 16 51"); X509Certificate2Collection col = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false); /* TODO: close store. */ X509Certificate2 certificate = null; Console.WriteLine("Cert count: " + col.Count); if (col.Count == 1) { certificate = col[0]; RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)col[0].PrivateKey; /* Force use of the Enhanced RSA and AES Cryptographic Provider with openssl-generated SHA256 keys */ var enhCsp = new RSACryptoServiceProvider().CspKeyContainerInfo; var cspparams = new CspParameters(enhCsp.ProviderType, enhCsp.ProviderName, rsa.CspKeyContainerInfo.KeyContainerName); rsa = new RSACryptoServiceProvider(cspparams); Console.WriteLine("Name: " + certificate.SubjectName.Name); Console.WriteLine("Thumbprint: " + certificate.Thumbprint); Console.WriteLine("Has private key: " + certificate.HasPrivateKey); Console.WriteLine("Sig algorithm: " + certificate.SignatureAlgorithm); Console.WriteLine("rsa.KeySize (bits) = " + rsa.KeySize); var sha256 = CryptoConfig.CreateFromName("SHA256"); byte[] signature = rsa.SignData(inputData, sha256); Console.WriteLine("Signature Length (bits): " + signature.Length * 8); Console.WriteLine("Signature: " + System.Convert.ToBase64String(signature)); Console.WriteLine(); }
|
Cryptography: Why am I getting different RSA signatures depending on which certificate store the certificate was loaded from?
|
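A detail worth noting in the posted code: constructing `new RSACryptoServiceProvider(cspparams)` with a container name that does not resolve in the current context can silently create a fresh key container with the default 1024-bit key, which would explain both the shorter signature and why it fails verification. On .NET Framework 4.6 and later, an answer might sidestep hand-built CspParameters entirely (a sketch; assumes the certificate's provider supports SHA-256):

```csharp
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;

static class Signer
{
    // Let the framework resolve the right key provider for this certificate
    // instead of rebuilding CspParameters by hand.
    public static byte[] Sign(X509Certificate2 certificate, string text)
    {
        using (RSA rsa = certificate.GetRSAPrivateKey())
        {
            return rsa.SignData(Encoding.UTF8.GetBytes(text),
                                HashAlgorithmName.SHA256,
                                RSASignaturePadding.Pkcs1);
        }
    }
}
```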
C#
|
With this code for a very basic logger: ... when I try it from a few threads simultaneously I quickly get the error: ... Why is the lock not preventing the threads from accessing the file at the same time? It doesn't matter if the threads call the same instance or different instances pointing to the same file. I also thought it could be because of some deferral when writing files on Windows, but the same thing happens on Linux.
|
lock (string.Concat("LogWritter_", this.FileName)) { using (var fileStream = File.Open(this.FileName, FileMode.Append, FileAccess.Write, FileShare.Read)) { using (var w = new StreamWriter(fileStream)) { w.Write(message); } } } The process cannot access the file because it is being used by another process.
|
Why is the lock in this code not working?
|
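The likely culprit: `string.Concat(...)` builds a new string object on every call, so each thread locks a different instance and the Monitor excludes nobody (locking on strings is discouraged in any case, since interning can make unrelated code share your lock). A sketch of the usual fix, locking one shared object (a per-file lock dictionary would be a further refinement):

```csharp
using System.IO;

class LogWriter
{
    // One shared lock object; every thread acquires the same monitor.
    private static readonly object Gate = new object();

    public string FileName { get; set; }

    public void Write(string message)
    {
        lock (Gate)
        {
            File.AppendAllText(FileName, message);
        }
    }
}
```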
C#
|
I'm really curious about the program below (yes, run in Release mode without a debugger attached). The first loop assigns a new object to each element of the array and takes about a second to run. So I was wondering which part was taking the most time: object creation or assignment. So I created the second loop to test the time required to create the objects, and the third loop to test assignment time, and both run in just a few milliseconds. What's going on?
|
static class Program { const int Count = 10000000; static void Main() { var objects = new object[Count]; var sw = new Stopwatch(); sw.Restart(); for (var i = 0; i < Count; i++) { objects[i] = new object(); } sw.Stop(); Console.WriteLine(sw.ElapsedMilliseconds); /* ~800 ms */ sw.Restart(); object o = null; for (var i = 0; i < Count; i++) { o = new object(); } sw.Stop(); Console.WriteLine(sw.ElapsedMilliseconds); /* ~40 ms */ sw.Restart(); for (var i = 0; i < Count; i++) { objects[i] = o; } sw.Stop(); Console.WriteLine(sw.ElapsedMilliseconds); /* ~50 ms */ } }
|
C# performance curiosity
|
C#
|
I'm designing an application that will allow me to draw some functions on a graphic. Each function will be drawn from a set of points that I pass to this graphic class. There are different kinds of points, all inheriting from a MyPoint class. Some kinds of points are just printed on the screen as they are, others can be ignored, others added, so there is some logic associated with them that can get complex. How to actually draw the graphic is not the main issue here. What bothers me is how to structure the code so that this GraphicMaker class doesn't become the so-called God object. It would be easy to make something like this: ... How would you do something like this? I have a feeling the correct way would be to put the drawing logic in each Point object (so each child class of Point would know how to draw itself), but two problems arise: some kinds of points need to know all the other points that exist in the GraphicMaker class in order to draw themselves; and I could make a lot of the methods/properties of the Graphic class public so that all the points have a reference to the Graphic class and can run whatever logic they want, but isn't that a big price to pay for not wanting to have a God class?
|
class GraphicMaker { List<Point> points = new List<Point>(); public void AddPoint(Point point) { points.Add(point); } public void DoDrawing() { foreach (Point point in points) { if (point is PointA) { /* some logic here */ } else if (point is PointXYZ) { /* ... etc. */ } } } }
|
Designing a class in such a way that it doesn't become a "God object"
|
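One way to keep the type switches out of GraphicMaker without making everything public: give each point a polymorphic Draw method that receives a narrow context interface, so points see exactly what they need (including the other points) and nothing more. A sketch; all names are illustrative:

```csharp
using System;
using System.Collections.Generic;

// The narrow view of the graphic that points are allowed to see.
interface IGraphicContext
{
    IReadOnlyList<Point> Points { get; }
    void Plot(double x, double y);
}

abstract class Point
{
    public abstract void Draw(IGraphicContext context);
}

class SimplePoint : Point
{
    public double X, Y;
    public override void Draw(IGraphicContext context) => context.Plot(X, Y);
}

class GraphicMaker : IGraphicContext
{
    private readonly List<Point> points = new List<Point>();
    public IReadOnlyList<Point> Points => points;
    public void AddPoint(Point p) => points.Add(p);
    public void Plot(double x, double y) => Console.WriteLine($"plot {x},{y}");

    public void DoDrawing()
    {
        foreach (var p in points) p.Draw(this); // no type switches here
    }
}
```

Points that need neighbors read context.Points; GraphicMaker's other members stay private, so the God-object trade-off is limited to whatever the interface deliberately exposes.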
C#
|
I am doing an experiment translating .NET IL to C++ in a human-readable fashion. Here is the issue: C# allows you to implement multiple interfaces with the same method name that differ only in return type. C++ doesn't seem to support this, however, making implementing both interfaces impossible using the vtable (or am I wrong?). I've found a way to replicate the C# approach in C++ using templates, but am wondering if there is a way that doesn't require templates to solve the same issue. Templates are verbose and I'd prefer not to use them for every interface type if possible. Here is the C++ version, followed by the C# reference source the C++ one is based on. The final C++ version below doesn't work, but I wish it did (it's more what I'm going for).
|
template<typename T> class IMyInterface { public: short (T::*Foo_IMyInterface)() = 0; }; template<typename T> class IMyInterface2 { public: int (T::*Foo_IMyInterface2)() = 0; }; class MyClass : public IMyInterface<MyClass>, public IMyInterface2<MyClass> { public: MyClass() { Foo_IMyInterface = &MyClass::Foo; Foo_IMyInterface2 = &MyClass::IMyInterface2_Foo; } public: virtual short Foo() { return 1; } private: int IMyInterface2_Foo() { return 1; } }; class MyClass2 : public MyClass { public: virtual short Foo() override { return 2; } }; void InvokeFoo(IMyInterface<MyClass>* k) { (((MyClass*)k)->*k->Foo_IMyInterface)(); } int main() { auto a = new MyClass2(); InvokeFoo(a); } interface IMyInterface { short Foo(); } interface IMyInterface2 { int Foo(); } class MyClass : IMyInterface, IMyInterface2 { public virtual short Foo() { return 1; } int IMyInterface2.Foo() { return 1; } } class MyClass2 : MyClass { public override short Foo() { return 2; } } namespace CSTest { class Program { static void InvokeFoo(IMyInterface k) { k.Foo(); } static void Main(string[] args) { var a = new MyClass2(); InvokeFoo(a); } } } class IMyInterface { public: virtual short Foo() = 0; }; class IMyInterface2 { public: virtual int Foo() = 0; }; class MyClass : public IMyInterface, public IMyInterface2 { public: virtual short Foo() { return 1; } private: int IMyInterface2::Foo() /* compiler error */ { return 1; } }; class MyClass2 : public MyClass { public: virtual short Foo() override { return 2; } }; void InvokeFoo(IMyInterface* k) { k->Foo(); } int main() { auto a = new MyClass2(); InvokeFoo(a); }
|
C++ multiple interfaces that only differ in return type?
|
C#
|
As you're developing, you often use things like ... or ... as a placeholder to remind you to finish something off, but these can be missed and mistakenly end up in the release. You could use something like ... so it won't compile in a Release build, but is there a more elegant way?
|
throw new NotImplementedException("Finish this off later"); /* TODO: finish this off later */ #if RELEASE Finish this off later #endif
|
Elegant way to stop release compilation with error
|
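One concrete variant worth knowing: the C# preprocessor has an #error directive, and DEBUG is the symbol defined by default in Debug builds (RELEASE is not predefined; you would have to define it yourself). Guarding on !DEBUG therefore fails the build in any non-debug configuration:

```csharp
// Compiles in Debug; fails the build in any configuration without DEBUG defined.
#if !DEBUG
#error Finish this off later
#endif

class Program
{
    static void Main() { }
}
```

There is also a softer #warning directive for reminders that should show up in build output without breaking it.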
C#
|
So I have a case where the layout has evolved to become more complicated. There are the usual things like @section styleIncludes { ... }, then other sections that define all kinds of things that each page can optionally (but almost always) specify, like the structure of the current page's breadcrumb. The reason all these things are sections is that they are embedded in the structure of the layout. I find myself making copies of previous pages, because there are eight or so different sections, rather than trying to remember the exact spelling of them, or piecemeal copy/paste. I am thinking it would be better to create a fluent API for these, so that I have some object with eight functions, each one returning the object itself, so you can do something like Sections.Style(some MVC text template or Razor delegate?).Breadcrumb(etc.). The main purpose is to be able to code these sections in a guided way and strongly type the names instead of relying on perfect typing or copy/paste. However, extensions/helpers in Razor return MvcHtmlString, and I imagine a @section is represented by something completely different. I'm not asking you to write a complete solution for me, just some ideas on how to proceed. What object should a helper return to represent a @section declaration, i.e. the analogue of MvcHtmlString? What would you suggest as the parameter type for the fluent methods, like Style or Breadcrumb? I would like the Razor passed in to be similar in capability to writing Razor inside the curly braces of a section declaration; for example, the ability to access local variables declared on the Razor page, just as you can with a regular section declaration. I don't want string concatenation like .SomeSection("<div ...> Bunch of html stuffed in a string </div>"). In other words, if many of my .cshtml pages begin something like the first snippet below, I'd rather have some sort of fluent API like the second snippet, not really for the resulting style of code, but because it will be easier to write and avoid problems with typos etc., since IntelliSense will assist:
|
@ { string title = `` Edit Person '' ViewBag.Title = title ; } @ section styles { .someOneOffPageSpecificStyle { width:59px } } @ section javascript { //javascript includes which the layout will place at the bottom ... } @ section breadcrumb { < a ... > Parent Page < /a > & gt ; < a ... > Sub Page < /a > & gt ; @ title } @ { string title = `` Edit Person '' ViewBag.Title = title ; } @ Sections.Styles ( @ < text > .someOneOffPageSpecificStyle { width:59px } < /text > ) .Javascript ( @ < text > //javascript includes which the layout will place at the bottom ... < /text > ) .Breadcrumb ( @ < text > < a ... > Parent Page < /a > & gt ; < a ... > Sub Page < /a > & gt ; @ title < /text > )
|
Hint/Fluent for Razor section names?
|
C#
|
When exposing a set of related functions as PowerShell cmdlets, is it possible to share the property names and summary help so they are normalized across cmdlets in an assembly? I know this can be done with derived classes, but that solution is awkward at best when there are multiple cmdlets with different properties to be shared. Here is an extremely simple example. I would like to share the property 'Name' and all related comments so that they are the same across the N cmdlets we are producing, but I cannot think of a good way to do this in C#. Ideally any sharing would allow specification of Parameter attributes such as Mandatory or Position.
|
namespace FrozCmdlets { using System.Management.Automation ; /// < summary > /// Adds a new froz to the system./// < /summary > [ Cmdlet ( VerbsCommon.Add , `` Froz '' ) ] public class AddFroz : Cmdlet { /// < summary > /// The name of the froz . /// For more information on the froz , see froz help manual . /// < /summary > [ Parameter ] public string Name { get ; set ; } protected override void ProcessRecord ( ) { base.ProcessRecord ( ) ; // Add the froz here } } /// < summary > /// Removes a froz from the system./// < /summary > [ Cmdlet ( VerbsCommon.Remove , `` Froz '' ) ] public class RemoveFroz : Cmdlet { /// < summary > /// The name of the froz . /// For more information on the froz , see froz help manual . /// < /summary > [ Parameter ] public string Name { get ; set ; } protected override void ProcessRecord ( ) { base.ProcessRecord ( ) ; // Remove the froz here } } }
|
Is it possible to share properties and comments between PowerShell cmdlets in C#?
|
C#
|
Given a list of dates in descending order, this code will find the largest date where the date is <= searchDate. How would I write a binary search function to replace this method? I'm struggling to implement it for an inexact comparison like this. This method is called frequently and can scan several thousand records, which is why I wish to replace it with a binary search.
|
List<CurrencyHistoricExchangeRate> history = GetOrderedHistory();
foreach (var record in history)
{
    if (record.Date < searchDate)
    {
        return record;
    }
}
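A sketch of one way to do this (assuming, as in the question, that the list is sorted in descending order): binary search for the boundary rather than an exact value. In a descending list, the largest date satisfying the predicate is the first (lowest-index) element that satisfies it, so the search narrows toward the front whenever the predicate holds. List<T>.BinarySearch can't express this inexact match directly, hence the hand-rolled loop:

```csharp
// Sketch: binary search a descending list of dates for the first (i.e. largest)
// date that is <= searchDate. Returns the index, or -1 if none qualifies.
static int FindLargestAtOrBefore(List<DateTime> history, DateTime searchDate)
{
    int lo = 0, hi = history.Count - 1, result = -1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (history[mid] <= searchDate)
        {
            result = mid;  // candidate; an earlier index may also qualify
            hi = mid - 1;  // in a descending list, earlier indices hold larger dates
        }
        else
        {
            lo = mid + 1;
        }
    }
    return result;
}
```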
|
Binary search list of dates for largest date where date <= n
|
C#
|
I did a search for an HSB-to-RGB converter in the docs but didn't find anything containing "hsb" or "hsl", so I'm guessing it just doesn't exist. Just to make sure, though: are there any classes that support this conversion? Update: I ended up going with the code below, which is slightly different from 0xA3's. I also added an AhsbToArgb so I can convert to RGB and set the alpha channel in one shot. AhsbToArgb allows for an alpha channel; HsbToRgb converts hue-saturation-brightness to red-green-blue; RawRgbToRgb converts doubles to ints and returns a Color object:
|
public static Color AhsbToArgb ( byte a , double h , double s , double b ) { var color = HsbToRgb ( h , s , b ) ; return Color.FromArgb ( a , color.R , color.G , color.B ) ; } public static Color HsbToRgb ( double h , double s , double b ) { if ( s == 0 ) return RawRgbToRgb ( b , b , b ) ; else { var sector = h / 60 ; var sectorNumber = ( int ) Math.Truncate ( sector ) ; var sectorFraction = sector - sectorNumber ; var b1 = b * ( 1 - s ) ; var b2 = b * ( 1 - s * sectorFraction ) ; var b3 = b * ( 1 - s * ( 1 - sectorFraction ) ) ; switch ( sectorNumber ) { case 0 : return RawRgbToRgb ( b , b3 , b1 ) ; case 1 : return RawRgbToRgb ( b2 , b , b1 ) ; case 2 : return RawRgbToRgb ( b1 , b , b3 ) ; case 3 : return RawRgbToRgb ( b1 , b2 , b ) ; case 4 : return RawRgbToRgb ( b3 , b1 , b ) ; case 5 : return RawRgbToRgb ( b , b1 , b2 ) ; default : throw new ArgumentException ( `` Hue must be between 0 and 360 '' ) ; } } } private static Color RawRgbToRgb ( double rawR , double rawG , double rawB ) { return Color.FromArgb ( ( int ) Math.Round ( rawR * 255 ) , ( int ) Math.Round ( rawG * 255 ) , ( int ) Math.Round ( rawB * 255 ) ) ; }
|
Does the .NET Framework 3.5 have an HsbToRgb converter or do I need to roll my own?
|
C#
|
If I run this test, I get the first set of results below. When I use a regular for loop, I get the second set. The last result is a triangular distribution, which is the expected output. The purpose of my question is not to discuss the applicability of parallelism; the question is why Parallel.For behaves that way.
|
var r = new Random();
var ints = new int[13];
Parallel.For(0, 2000000, i =>
{
    var result = r.Next(1, 7) + r.Next(1, 7);
    ints[result] += 1;
});

2 : 92,14445
3 : 0,41765
4 : 0,62245
5 : 0,82525
6 : 1,04035
7 : 1,25215
8 : 1,0531
9 : 0,8341
10 : 0,6334
11 : 0,4192
12 : 0,2109

for (int i = 0; i < 2000000; i++)
{
    var result = r.Next(1, 7) + r.Next(1, 7);
    ints[result] += 1;
}

2 : 2,7797
3 : 5,58645
4 : 8,3414
5 : 11,09935
6 : 13,8909
7 : 16,6731
8 : 13,82895
9 : 11,10205
10 : 8,3424
11 : 5,5712
12 : 2,7845
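For context, a hedged sketch of the usual workaround (this explanation is an assumption about the cause, not part of the question): System.Random is not thread-safe, and concurrent calls to Next can corrupt its internal state so it keeps returning 0, which is why bucket 2 dominates. The Parallel.For overload with thread-local state gives each worker its own Random; it assumes `using System;`, `using System.Threading;` and `using System.Threading.Tasks;`:

```csharp
// Sketch: one Random per worker thread, so no instance is called concurrently.
// Seeds come from a locked global Random so threads don't share time-based seeds.
var seeder = new Random();
var ints = new int[13];
Parallel.For(0, 2000000,
    () => { lock (seeder) return new Random(seeder.Next()); },  // per-thread init
    (i, state, r) =>
    {
        var result = r.Next(1, 7) + r.Next(1, 7);
        Interlocked.Increment(ref ints[result]);  // ints[x] += 1 isn't atomic either
        return r;
    },
    r => { });
```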
|
Parallel.For and For yield different results
|
C#
|
I have the following function in C#. Since TModel is clear from a function parameter, I want some way to avoid specifying its type when calling the function; ideally I want to call it as Handle<MyCommandHandler>(model). Since this is probably impossible, I came up with the solution below, which I now call as Handle(model).With<MyCommandHandler>(). Are there other possibilities? Did I do something completely wrong with my solution?
|
bool Handle<TCommandHandler, TModel>(TModel model)
    where TCommandHandler : ICommandHandler<TModel>
{
    // ...
    _container.Resolve<TCommandHandler>();
    // ...
}

Handle<MyCommandHandler>(model);

HandleTemp<TModel> Handle<TModel>(TModel model)
{
    return new HandleTemp<TModel>(model);
}

public class HandleTemp<TModel>
{
    private TModel _model;

    public HandleTemp(TModel model)
    {
        _model = model;
    }

    public bool With<TCommandHandler>()
        where TCommandHandler : ICommandHandler<TModel>
    {
    }
}

Handle(model).With<MyCommandHandler>();
|
Syntax sugar for double-generic function
|
C#
|
How do I write a correct LINQ expression for a generic "where" condition? The FilterBy method in Repository.cs below gives an error, but the non-generic version with an inline LINQ expression runs.
|
public static class ConStr
{
    public static MySqlConnection Conn()
    {
        return new MySqlConnection(ConfigurationManager.ConnectionStrings["DBCN"].ConnectionString);
    }
}

private IDbConnection cn;

public IEnumerable<TEntity> FilterBy(Expression<Func<TEntity, bool>> expression)
{
    using (cn = ConStr.Conn())
    {
        return cn.GetAll<TEntity>(null).Where(expression); // <-- error: does not contain definition of Where
    }
}

using (IDbConnection cn = ConStr.Conn())
{
    var que = cn.GetAll<Cause>(null).Where(x => x.cause_id == 1);
    bool dbIE = Utils.IsAny<Cause>(que);
    if (dbIE == true)
    {
        DGRID.DataSource = que;
    }
    else
    {
        MessageBox.Show("Sorry No Value");
    }
}
|
LINQ expression: IEnumerable<TEntity> does not contain definition of Where
|
C#
|
It is second nature for me to whip up some elaborate SQL set-processing code to solve various domain model questions. However, the trend is not to touch SQL anymore. Is there some pattern reference or conversion tool out there that helps convert the various SQL patterns to LINQ syntax? I would look up ways to code things like the first query below (it has a subquery: grab the top five highest total orders, with side effects). Alternatively, how do you know LINQ executes as a single statement without using a debugger? I know you need to follow the enumeration, but I would assume you could just look up the patterns somewhere. The second snippet is from the MSDN site and is their example of doing a SQL difference. I am probably wrong, but I wouldn't think this uses set processing on the server (I think it pulls both sets locally and then takes the difference, which would be very inefficient). I am probably wrong, and this could be one of the patterns in that reference. Update: Microsoft's "101 LINQ Samples in C#" is a closer means of constructing LINQ in a pattern to produce the SQL you want. I will post more as I find them. I am really looking for a methodology (patterns or a conversion tool) to convert SQL to LINQ. Update: the last snippet is the SQL generated from Microsoft's difference pattern in LINQ. That's what we wanted, not what I expected. So that's one pattern to memorize.
|
SELECT * FROM orders X
WHERE (SELECT COUNT(*) FROM orders Y
       WHERE Y.totalOrder > X.totalOrder) < 6

var differenceQuery =
    (from cust in db.Customers
     select cust.Country)
    .Except(from emp in db.Employees
            select emp.Country);

SELECT DISTINCT [t0].[field] AS [Field_Name]
FROM [left_table] AS [t0]
WHERE NOT (EXISTS (
    SELECT NULL AS [EMPTY]
    FROM [right_table] AS [t1]
    WHERE [t0].[field] = [t1].[field]))
|
How to get LINQ to produce exactly the SQL I want?
|
C#
|
I have a small 3D vector class in C# 3.0, based on a struct, that uses double as its basic unit. An example: one vector's y-value is -20.0, and I subtract a vector with a y-value of 10.094999999999965. The value for y I would expect is -30.094999999999963 (1). Instead I get -30.094999313354492 (2). When I do the whole computation in one single thread, I get (1); the debugger and VS quick-watch return (1) as well. But when I run a few iterations in one thread and then call the function from a different thread, the result is (2) - and now the debugger returns (2) as well! We have to keep in mind that the .NET JIT might write the values back to memory (see Jon Skeet's website), which reduces the accuracy from 80 bit (FPU) to 64 bit (double). However, the accuracy of (2) is far below that. The vector class looks basically like the code below, and the computation is as easy as the last line.
|
-20.0
10.094999999999965
-30.094999999999963 (1)
-30.094999313354492 (2)

public struct Vector3d
{
    private readonly double _x, _y, _z;
    ...
    public static Vector3d operator -(Vector3d v1, Vector3d v2)
    {
        return new Vector3d(v1._x - v2._x, v1._y - v2._y, v1._z - v2._z);
    }
}

Vector3d pos41 = pos4 - pos1;
|
Can floating-point precision be thread-dependent?
|
C#
|
For some reason my stored procedure executes without any error from the C# code-behind, but it does not delete anything that it should. I have all the correct parameters and everything. I ran the query from SQL Server with the same parameters as the C# code and it works perfectly. I don't get why it works when I run it from SQL Server but not when I run it from my C# code in Visual Studio. Below is the C# code that passes the data through to the stored procedure, then the stored procedure itself (it executes dynamic SQL text), and finally what PRINT @SQLTEXT outputs. When I actually go into SQL Server and run that printed query, it works perfectly. So why does it not work when executed from the C# code? Any help?
|
string reportType = `` PostClaim '' ; string GRNBRs = `` 925 ' , '926 ' , '927 '' ; string PUNBRs = `` 100 ' , '100 ' , '100 '' ; string beginningDates = `` 20120401 '' ; string endDates= `` 20120430 '' ; try { conn = new SqlConnection ( ConnectionInfo ) ; conn.Open ( ) ; SqlDataAdapter da = new SqlDataAdapter ( `` RemoveReport '' , conn ) ; da.SelectCommand.CommandType = CommandType.StoredProcedure ; da.SelectCommand.Parameters.AddWithValue ( `` @ ReportType '' , reportType ) ; da.SelectCommand.Parameters.AddWithValue ( `` @ GRNBR '' , GRNBRs ) ; da.SelectCommand.Parameters.AddWithValue ( `` @ PUNBR '' , PUNBRs ) ; da.SelectCommand.Parameters.AddWithValue ( `` @ DATE1 '' , beginningDates ) ; da.SelectCommand.Parameters.AddWithValue ( `` @ DATE2 '' , endDates ) ; da.SelectCommand.CommandTimeout = 360 ; } catch ( SqlException ex ) { //something went wrong throw ex ; } finally { if ( conn.State == ConnectionState.Open ) conn.Close ( ) ; } ALTER PROCEDURE [ dbo ] . [ RemoveReport ] ( @ ReportType NVARCHAR ( 20 ) , @ GRNBR VARCHAR ( 4000 ) , @ PUNBR VARCHAR ( 4000 ) , @ DATE1 DATETIME , @ DATE2 DATETIME ) ASDECLARE @ SQLTEXT VARCHAR ( 4000 ) BEGINSET @ SQLTEXT = 'DELETE FROM TestingTable WHERE Report= '' '+ @ ReportType+ '' ' AND PUNBR IN ( `` '+ @ PUNBR+ '' ' ) AND [ Group ] IN ( `` '+ @ GRNBR+ '' ' ) AND StartedAt BETWEEN `` '+CONVERT ( VARCHAR ( 10 ) , @ DATE1,121 ) + '' ' AND `` '+CONVERT ( VARCHAR ( 10 ) , @ DATE2,121 ) + '' ''PRINT @ SQLTEXT < -- -I 'll print this out to show you what exactly it is executing.EXECUTE ( @ SQLTEXT ) END DELETE FROM MonthlyReportScheduleWHERE Report='PostClaim ' AND PUNBR IN ( '100 ' , '100 ' , '100 ' ) AND [ Group ] IN ( '925 ' , '926 ' , '927 ' ) AND StartedAt BETWEEN '2012-04-01 ' AND '2012-04-30 '
|
Stored procedure not running correctly with dynamic SQL text
|
C#
|
When loading an RTF file into a Windows Forms RichTextBox, it loses the background colour of table cells. If we use a WPF RichTextBox and load the same file, everything is formatted as it should be. Am I missing something when I load the file into the Windows Forms RichTextBox? In the Windows Forms code snippet below I have also tried richTextBox1.Rtf = File.ReadAllText(fDialog.FileName) and richTextBox1.LoadFile(fDialog.FileName). The WPF snippet follows, along with a screenshot from both versions. Thanks in advance for any help. Steve.
|
private void button1_Click ( object sender , EventArgs e ) { OpenFileDialog fDialog = new System.Windows.Forms.OpenFileDialog ( ) ; fDialog.Filter = `` Rich Text Files ( *.rtf ) |*.rtf '' ; fDialog.Multiselect = false ; fDialog.RestoreDirectory = true ; if ( fDialog.ShowDialog ( ) == System.Windows.Forms.DialogResult.OK ) { if ( fDialog.FileName ! = `` '' ) { richTextBox1.LoadFile ( fDialog.FileName , RichTextBoxStreamType.RichText ) ; } } } richTextBox1.Rtf = File.ReadAllText ( fDialog.FileName ) ; richTextBox1.LoadFile ( fDialog.FileName ) ; private void load_file_Click ( object sender , RoutedEventArgs e ) { System.Windows.Forms.OpenFileDialog fDialog = new System.Windows.Forms.OpenFileDialog ( ) ; fDialog.Filter = `` Rich Text Files ( *.rtf ) |*.rtf '' ; fDialog.Multiselect = false ; fDialog.RestoreDirectory = true ; if ( fDialog.ShowDialog ( ) == System.Windows.Forms.DialogResult.OK ) { if ( fDialog.FileName ! = `` '' ) { FileStream fStream ; fStream = new FileStream ( fDialog.FileName , FileMode.Open , FileAccess.Read , FileShare.Read ) ; richtextbox1.SelectAll ( ) ; richtextbox1.Selection.Load ( fStream , DataFormats.Rtf ) ; fStream.Close ( ) ; } } }
|
Windows.Forms.RichTextBox loses table background colours
|
C#
|
This was mentioned in my other question, and I thought it might be useful to add it to the record. In the following program, which (if any) of the locally defined delegates are cached between calls to the Work method, instead of being created from scratch each time?
|
namespace Example
{
    class Dummy { public int age; }

    class Program
    {
        private int field = 10;

        static void Main(string[] args)
        {
            var p = new Program();
            while (true)
            {
                p.Work();
            }
        }

        void Work()
        {
            int local = 20;
            Action a1 = () => Console.WriteLine(field);
            Action a2 = () => Console.WriteLine(local);
            Action a3 = () => Console.WriteLine(this.ToString());
            Action a4 = () => Console.WriteLine(default(int));
            Func<Dummy, Dummy, bool> dummyAgeMatch = (l, r) => l.age == r.age;

            a1.Invoke();
            a2.Invoke();
            a3.Invoke();
            a4.Invoke();
            dummyAgeMatch.Invoke(new Dummy() { age = 1 }, new Dummy() { age = 2 });
        }
    }
}
|
Which (if any) locally defined delegates are cached between method calls?
|
C#
|
For each entity that has a one-to-many relation with another entity, when I try to add a new item it seems I have to define the list of items related to that entity. For example, let's say I have a ProductType entity that has a list of Products, as below. When I try to add a new ProductType, it gives me an exception that Products is null ("Object reference not set to an instance of an object"). The same problem occurs with all entities that have one-to-many relations to other entities. How can I allow null for the EntitySet<> that represents a one-to-many relation?
|
[Table]
public class ProductType
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int Id { get; private set; }

    [Column]
    public string Name { get; set; }

    private EntitySet<Product> _products;

    [Association(Storage = "_products", ThisKey = "Id", OtherKey = "ProductTypeId")]
    public EntitySet<Product> Products
    {
        get { return _products; }
        set { _products.Assign(value); }
    }
}

ProductType newType = new ProductType { Name = "newType" };
_productRepository.Add(newType); // InsertOnSubmit() and SaveChanges()

Object reference not set to an instance of an object
|
How can I define a nullable EntitySet<>?
|
C#
|
I'm looking for a way to navigate between screens in my app. Basically, what I've seen so far consists of passing a string URI to the NavigationService, complete with query-string parameters, e.g. the line below. I'm not really keen on this, though, because ultimately it requires magic strings, and they can lead to problems down the road. Ideally I'd just create an instance of the class I want to navigate to, passing the parameters as arguments to the constructor. Is this possible? If so, how?
|
NavigationService.Navigate(new Uri("/MainPage.xaml?selectedItem=" + bookName.Id, UriKind.Relative));
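A hedged sketch of one common workaround (the names here are hypothetical, not from the question): on Windows Phone the framework instantiates pages itself, so constructor arguments aren't possible, but the magic string can at least be confined to a single strongly typed helper so callers never build it by hand:

```csharp
// Sketch: wrap each page's URI and query string in one typed extension method.
// The string literal now lives in exactly one place.
public static class NavigationExtensions
{
    public static void NavigateToMainPage(this NavigationService nav, int selectedItemId)
    {
        nav.Navigate(new Uri(
            string.Format("/MainPage.xaml?selectedItem={0}", selectedItemId),
            UriKind.Relative));
    }
}

// Usage: NavigationService.NavigateToMainPage(bookName.Id);
```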
|
Is there a typesafe way of navigating between screens in Windows Phone?
|
C#
|
The Azure role setting is very useful, since it lets you change values on the fly while IIS is running. But the problem is, if you have plenty of users and it reads the config value from the file every time, it is not best practice to use it without putting it in a static variable. The next problem: if you put it in a static variable, then you have to reset IIS every time you change it. I did some research and found a similar question on Stack Overflow, whose answer says the value is only read from the file the first time and then stored in a cache - but that question was about ConfigurationManager; mine is about RoleEnvironment from Azure. The first line below gets the current setting on Azure; the second is the one I know caches, e.g. a connection string from web.config:
|
RoleEnvironment.GetConfigurationSettingValue("Appname.settingKey");

ConfigurationManager.ConnectionStrings["SettingKey"].ConnectionString;
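A hedged sketch of one way to get both properties at once (not from the question; it assumes `using System.Collections.Concurrent;` and the Microsoft.WindowsAzure.ServiceRuntime namespace): cache the values yourself and invalidate the cache from the RoleEnvironment.Changed event, which Azure raises after a configuration change is applied - avoiding both repeated reads and an IIS reset:

```csharp
// Sketch: read-through cache for role settings, cleared whenever Azure
// signals that a new service configuration has been applied.
public static class CachedSettings
{
    private static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    static CachedSettings()
    {
        // Raised after a configuration change takes effect in the role.
        RoleEnvironment.Changed += (s, e) => Cache.Clear();
    }

    public static string Get(string key)
    {
        return Cache.GetOrAdd(key, RoleEnvironment.GetConfigurationSettingValue);
    }
}
```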
|
Does RoleEnvironment.GetConfigurationSettingValue read every time from the cfg file?
|
C#
|
Basic C# question here. What is the difference between instantiating a class field either where you declare it or in the constructor of the object in question? For example, the first snippet below vs. the second:
|
public class MyClass
{
    public MyObject MyObject = new MyObject();
}

public class MyClass
{
    public MyObject MyObject;

    public MyClass()
    {
        MyObject = new MyObject();
    }
}
|
C# - What's the difference between these two ways of instancing a class property?
|
C#
|
The Silverlight Unit Test Framework has an [Asynchronous] attribute (AsynchronousAttribute) that causes tests to end only when EnqueueTestComplete() is called. This allows a simple way to write tests that need to wait for an event to occur before they end. Now I am trying to pick a favorite general-purpose unit test framework out of the ones that seem to be the most popular choices - VSUTF, NUnit, xUnit.NET, MbUnit - and I was wondering: how would you do asynchronous testing using these frameworks? I suppose I can roll out some custom code that does Monitor.Wait or a reset event's WaitOne, call it at the end of the test method, and then Pulse/Set when the test is over, but I was looking for an existing common/built-in solution. Below is a sample of how it is done in SUTF (from http://smartypantscoding.com/a-cheat-sheet-for-unit-testing-silverlight-apps-on-windows-phone-7).
|
[TestClass]
public class AsyncTests : SilverlightTest
{
    [Asynchronous]
    [TestMethod]
    public void AsyncAppendStringTest()
    {
        var appendStrings = new List<string>() { "hello", "there" };
        StringJoiner.AsyncAppendStringsWithDashes(appendStrings, (returnString) =>
        {
            Assert.IsTrue(string.Compare(returnString, "hello-there") == 0);
            EnqueueTestComplete();
        });
    }
}
|
Asynchronous tests in VSUTF, NUnit, xUnit.NET, MbUnit vs. SUTF?
|
C#
|
I am querying an XML file and returning 3 attributes per selection (each entry that meets my criteria returns details for 3 attributes). I need to store these values, then later look up the first attribute and return the 2 other stored attributes related to it. The code below returns 3 attributes per item found. I need to store those 3 entries so they are associated together, then later perform a lookup on the stored entries. For example, I need to be able to look up a stored value of id=1 and return the corresponding category and selection entries. I was researching the Lookup method in C# but don't understand how to use it. Lists seem like they might work, but I don't know how to store multiple pieces of data in one list entry (maybe concatenate them into a single entry, but then I'm not sure about performing the lookup on it). Any suggestions on how to do this with a List or Lookup (or another unmentioned way) are appreciated.
|
var items = from item in doc.Descendants("SUM")
            select new
            {
                id = (string)item.Attribute("id"),
                category = (string)item.Attribute("cat"),
                selection = (string)item.Attribute("sel")
            };
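A hedged sketch of one option (the SumEntry type is made up for illustration): project the two dependent attributes into a small named type and index it by id with ToDictionary, assuming ids are unique - ToLookup would be the analogue if an id can repeat:

```csharp
// Sketch: keep the three attributes associated and index them by id.
class SumEntry
{
    public string Category { get; set; }
    public string Selection { get; set; }
}

var byId = doc.Descendants("SUM").ToDictionary(
    item => (string)item.Attribute("id"),
    item => new SumEntry
    {
        Category = (string)item.Attribute("cat"),
        Selection = (string)item.Attribute("sel")
    });

// Later: look up id "1" and get the other two attributes back.
SumEntry entry;
if (byId.TryGetValue("1", out entry))
{
    Console.WriteLine(entry.Category + " / " + entry.Selection);
}
```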
|
How to store and look up data based on multiple XML attributes?
|
C#
|
The question seems simple. The English documentation says it does (the first declaration below), yet the code that follows gives an error saying there is no conversion from KeyCollection<T> to IReadOnlyCollection<T>. Moreover, the Polish documentation (French too, for that matter) says it does not (the last declaration below). Which is it? And in case it's an error in the English documentation, a bonus question: is there a way to get Keys as a read-only collection?
|
public sealed class KeyCollection : ICollection<TKey>, IReadOnlyCollection<TKey>, IEnumerable<TKey>, ICollection, IEnumerable

class MyKeys<T>
{
    readonly Dictionary<T, T> dict = new Dictionary<T, T>();

    public IReadOnlyCollection<T> Keys
    {
        get { return dict.Keys; }
    }
}

[SerializableAttribute]
public sealed class KeyCollection : ICollection<TKey>, IEnumerable<TKey>, ICollection, IEnumerable
|
Does `Dictionary<TKey,TValue>.KeyCollection` implement `IReadOnlyCollection` or not?
|
C#
|
When choosing a character, I currently have a base class, and my characters derive from this class; lastly, I use the line shown to select the Warrior. This works pretty fine, but when it comes to cooldowns etc. I want to keep the code clean, so I thought about creating an Ability class. Below are my abstract parent class, the Warrior class, and my skill class. The Warrior passes in his two skills and creates two Ability objects. In the game I could write the last two lines, and this would result in a smash with a cooldown of 3 seconds. Is this possible to implement... somehow?
|
abstract class CharacterClass
{
    public abstract void AbilityOne();
    public abstract void AbilityTwo();
}

class Warrior : CharacterClass
{
    public override void AbilityOne()
    {
        // implement ability one here
    }

    public override void AbilityTwo()
    {
        // implement ability two here
    }
}

CharacterClass selectedClass = new Warrior(); // start the game as a Warrior

abstract class CharacterClass
{
    public Ability AbilityOne { get; set; }
    public Ability AbilityTwo { get; set; }
}

class Warrior : CharacterClass
{
    public Warrior()
    {
        // Set the new Abilities
        AbilityOne = new Ability(3, Smash());      // pass in the method as a parameter??
        AbilityTwo = new Ability(7, Shieldblock());
    }

    private void Smash()
    {
        // Ability 1
    }

    private void ShieldBlock()
    {
        // Ability 2
    }
}

class Ability
{
    public Ability(int cooldown, [the ability method here])
    {
        CoolDown = cooldown;
        TheAbility = ?the ability parameter?
    }

    public int CoolDown { get; set; }

    public void TheAbility()
    {
    }
}

CharacterClass selectedClass = new Warrior();
selectedClass.AbilityOne();
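A hedged sketch of one way to fill in the placeholders (it reuses the question's names and assumes the second CharacterClass, with Ability properties): C# methods can be passed as delegates, so Ability can hold an Action - note the method group is passed without parentheses, i.e. Smash rather than Smash():

```csharp
// Sketch: an ability wraps a cooldown and the method that implements its effect.
class Ability
{
    private readonly Action _effect;

    public Ability(int cooldown, Action effect)
    {
        CoolDown = cooldown;
        _effect = effect;
    }

    public int CoolDown { get; private set; }

    public void Use()
    {
        // Cooldown bookkeeping would go here before firing the effect.
        _effect();
    }
}

class Warrior : CharacterClass
{
    public Warrior()
    {
        AbilityOne = new Ability(3, Smash);        // method group, no ()
        AbilityTwo = new Ability(7, ShieldBlock);
    }

    private void Smash() { /* Ability 1 */ }
    private void ShieldBlock() { /* Ability 2 */ }
}

// Usage: selectedClass.AbilityOne.Use();
```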
|
Creating ability objects in Unity
|
C#
|
I have a design problem that I can't figure out. Here's what I've got: in general, there are two types of objects, Strikes and Options, which have been abstracted into two interfaces, IStrike and IOption. Let's say IOption has the following fields - in reality there are about 10 times as many, but these three illustrate the problem. Now, that's all well and good, but let's say I've got the following method for performing some "math" on the IOption implied vol. Again, not a problem, but when I'm writing mock objects for my tests, it's not clear to me whether I need to implement Bid and Ask. I don't, but I wouldn't know that unless I knew the guts of SquareImpliedVol, which means I'm writing tests against the code - which is bad. So to fix this, I could create another interface, IOptionImpliedVol, that contains just the ImpliedVol property, have IOption inherit from it, and then switch up SquareImpliedVol - and we're great: I can write mock objects and everything is sweet. Except... I want to write a method that operates on a List, where the only properties I need out of IStrike are Call.ImpliedVol and Put.ImpliedVol. I want to create something like IStrikeImpliedVol, and then also have IStrike inherit from it - except that isn't legal. I feel like there has to be some design pattern to work this out, but I'm stuck in some kind of web of composition and inheritance.
|
interface IOption
{
    double Bid { get; set; }
    double Ask { get; set; }
    double ImpliedVol { get; set; }
}

interface IStrike
{
    IOption Call { get; set; }
    IOption Put { get; set; }
}

public double SquareImpliedVol(IOption opt)
{
    return Math.Pow(opt.ImpliedVol, 2);
}

interface IOption : IOptionImpliedVol
{
    double Bid { get; set; }
    double Ask { get; set; }
}

public double SquareImpliedVol(IOptionImpliedVol opt)
{
    return Math.Pow(opt.ImpliedVol, 2);
}

interface IStrikeImpliedVol
{
    IOptionImpliedVol Call;
    IOptionImpliedVol Put;
}

interface IStrike : IStrikeImpliedVol
{
    IOption Call;
    IOption Put;
}
|
Can I combine composition and inheritance with interfaces in C#?
|
C#
|
Excel 2016 seems to trigger a programmatically added undo level upon saving, which does not happen in earlier versions of Excel (2013, 2010, and 2007). To reproduce this apparent bug, open a new workbook and save it as a macro-enabled workbook (.xlsm file). Paste the first block of code below into the ThisWorkbook module. Then insert a new module named modTest and paste the second block into it. Finally, save the workbook and reopen it. Enter any value in any cell to trigger the Application.SheetChange event. Save the workbook (you may need to do this twice, for some reason), and the message in modTest will appear. Can anyone explain what may be going on here, and/or how to work around this problem? If this is indeed a bug, what is the best way to report it to Microsoft? This code is VBA, but since the problem affects VSTO add-ins written in VB.NET and C# too, I am including those tags.
|
Option Explicit

Public WithEvents App As Application

Private Sub Workbook_Open()
    Set App = Application
End Sub

Private Sub App_SheetChange(ByVal Sh As Object, ByVal Target As Range)
    Application.OnUndo "foo", "modTest.Undo"
End Sub

Public Sub Undo()
    MsgBox "This is the Excel 2016 bug."
End Sub
|
Excel 2016 triggers undo upon save - bug?
|
C#
|
I've borrowed the code below from another question (slightly modified) to use in my own code. The original author correctly adheres to the warnings given in MSDN's implicit & explicit documentation, but here's my question: is explicit always necessary in potentially exceptional code? I've got some types in my code (e.g. "Volume") that derive from PositiveDouble, and I'd like to be able to set instances conveniently, as in the first line of the second snippet. Being forced to use explicit casts everywhere makes the code much less readable. How does it protect the user? In the semantics of my program, I never expect a Volume to be negative; indeed, if that ever happens I expect an exception to be thrown. So if I use an implicit conversion and it throws, what "unexpected results" might clobber me?
|
internal class PositiveDouble
{
    private double _value;

    public PositiveDouble(double val)
    {
        if (val < 0)
            throw new ArgumentOutOfRangeException("Value needs to be positive");
        _value = val;
    }

    // This conversion is safe, we can make it implicit
    public static implicit operator double(PositiveDouble d)
    {
        return d._value;
    }

    // This conversion is not always safe, so we're supposed to make it explicit
    public static explicit operator PositiveDouble(double d)
    {
        return new PositiveDouble(d); // this constructor might throw exception
    }
}

Volume v = 10; // only allowed by implicit conversion
Volume v = new Volume(10); // required by explicit conversion, but gets messy quick
|
Why/when is it important to specify an operator as explicit?
|
C#
|
I have a Web API 2 endpoint where I want to asynchronously carry out an operation while I retrieve and verify a user. If the user does not exist, I want to return a 404 Not Found, like so. Could this cause me potential issues if the user were null and the method returned without awaiting getCatTask, or is it considered bad practice?
|
public async Task<IHttpActionResult> Get()
{
    var getCatTask = GetCatAsync();
    var user = await GetUserAsync();

    if (user == null)
    {
        return NotFound();
    }

    var cat = await getCatTask;
    return Ok(cat);
}
|
Bad practice to return from method before async operation completes?
|
C#
|
I'm looking for a way to program a custom authorization filter in ASP.NET 5, as the current implementation relies on Policies/Requirements, which in turn rely solely on the use of Claims - and thus on the umpteenth, ever-changing identity system, of which I'm really tired (I've tried all its flavours). I have a large set of permissions (over 200) which I don't want to encode as claims, as I have my own repository for them and a much faster way to check against it than comparing hundreds of strings (which is what claims are, in the end). I need to pass a parameter in each attribute to be checked against my custom repository of permissions, as below. I know this is not the most frequent scenario, but I don't think it's an edge case. I've tried implementing it the way described by @leastprivilege in his magnificent post "The State of Security in ASP.NET 5 and MVC 6: Authorization", but I've hit the same walls as the author, who even opened an issue on the ASP.NET 5 GitHub repo; it was closed in a not-too-clarifying manner (link). Any idea how to achieve this? Maybe using another kind of filter? In that case, how?
|
[Authorize(Requires = enumPermission.DeleteCustomer)]
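One way this is often approached is a DI-activated filter rather than a policy: `TypeFilter` lets dependency injection supply the repository while the attribute supplies the permission. This is only a sketch; `IPermissionRepository` and `enumPermission` stand in for the question's own types, and the namespaces shown are the later ASP.NET Core ones (the ASP.NET 5 betas used slightly different names).

```csharp
// Sketch of a DI-friendly permission filter. IPermissionRepository and
// enumPermission are placeholders for the question's own repository/enum.
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

public enum enumPermission { DeleteCustomer /* ...200+ more... */ }

public interface IPermissionRepository
{
    Task<bool> HasPermissionAsync(ClaimsPrincipal user, enumPermission permission);
}

public class PermissionFilter : IAsyncAuthorizationFilter
{
    private readonly IPermissionRepository _permissions;
    private readonly enumPermission _required;

    public PermissionFilter(IPermissionRepository permissions, enumPermission required)
    {
        _permissions = permissions;
        _required = required;
    }

    public async Task OnAuthorizationAsync(AuthorizationFilterContext context)
    {
        // Check against the custom repository instead of comparing claim strings.
        if (!await _permissions.HasPermissionAsync(context.HttpContext.User, _required))
        {
            context.Result = new ForbidResult();
        }
    }
}

// Usage on an action; TypeFilter activates the filter through DI and
// passes the permission as a constructor argument:
// [TypeFilter(typeof(PermissionFilter), Arguments = new object[] { enumPermission.DeleteCustomer })]
```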
|
DI into a Requirement/Policy in ASP.NET MVC 6
|
C#
|
I was recently attempting to answer a question that a user posted about why the decimal struct does not declare its Min/Max values as const like every other numeric primitive; rather, the Microsoft documentation states that they are static readonly. In researching that, I dug through the Microsoft source code and came up with an interesting discovery: the source (.NET 4.5) makes them look like const, which is in opposition to what the documentation clearly states (source and relevant struct constructor pasted below). The thread continues to unravel, because I can't see how this would compile legally under the rules of C#: while it technically is a constant, the compiler thinks it isn't and will give you the error "The expression being assigned to ... must be constant", hence what I believe is the reason the docs call it static readonly. Now this begs a question: is this file from the Microsoft source server actually the source for decimal, or has it been doctored? Am I missing something?
|
public const Decimal MinValue = new Decimal(-1, -1, -1, true, (byte)0);
public const Decimal MaxValue = new Decimal(-1, -1, -1, false, (byte)0);

public Decimal(int lo, int mid, int hi, bool isNegative, byte scale)
{
    if ((int)scale > 28)
        throw new ArgumentOutOfRangeException("scale", Environment.GetResourceString("ArgumentOutOfRange_DecimalScale"));
    this.lo = lo;
    this.mid = mid;
    this.hi = hi;
    this.flags = (int)scale << 16;
    if (!isNegative)
        return;
    this.flags |= int.MinValue;
}
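What the compiler actually emits for a decimal "constant" can be checked with reflection: decimal constants are not CLR literal fields at all; they compile to static readonly fields carrying a DecimalConstantAttribute, which is consistent with what the documentation says. A small check:

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class Program
{
    static void Main()
    {
        FieldInfo field = typeof(decimal).GetField("MaxValue");

        // A true CLR constant would be IsLiteral; decimal.MaxValue is not.
        Console.WriteLine(field.IsLiteral);
        Console.WriteLine(field.IsInitOnly);

        // The "const-ness" is recorded in metadata instead.
        bool hasDecimalConstant =
            field.GetCustomAttributes(typeof(DecimalConstantAttribute), false).Length > 0;
        Console.WriteLine(hasDecimalConstant);
    }
}
```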
|
'Decimal' source code from Microsoft - will it build?
|
C#
|
I have a requirement that calls for matching a Sample Set of color values against a Known Set of values to find either an exact match, or matches that are within an acceptable distance. I'm not entirely sure what algorithm would be best suited for this and I'm looking for suggestions. I thought about using a SQL query, as that would be a straightforward approach; however, ideally this would be done in memory on the application server, or even on a GPU, for maximum speed.

Example: let's say we are given a set of three RGB color values, two blues and an orange:

Sample Set: Color 1: 81,177,206 (blue); Color 2: 36,70,224 (blue); Color 3: 255,132,0 (orange)

This set of 3 color values must be matched against a much larger set of color values to see if this set exists within it, either with exactly the same RGB values for each of the 3 colors, or if any pattern exists where an RGB value of the colors varies by an acceptable degree. Let's assume any of the RGB components can be up to 3 digits higher or lower in value. Let's say our big set of known color values that we'll search against looks like the Known Set below. Given this scenario, we would find zero matches when we run our sample set against it, because none of the known colors has a Color 1 anywhere close to our sample set values. However, let's add another color to the Known Set that would return a positive match: if Sample F existed with these values in the Known Set, we would get a positive hit, because it has exactly the same RGB values as Color 1 in our Sample Set.

Also, we need to accept a varying degree of differences in the RGB values, so the following would also return positive hits, because each RGB value is within 3 digits of Color 1's values from the Sample Set. Positive hits (remember Color 1 is 81,177,206): Sample F: 80,177,206 (red channel is 1 digit away); Sample F: 81,175,204 (green and blue channels 2 digits away); Sample F: 82,179,208 (all three channels within 3 digits). However, if the distance is too great, then a match would not be found; every RGB component must be within 3 digits to trigger a positive result. So if Sample F looked like the following, we would not get a positive result because the distance is too great. Negative hits: Sample F: 85,177,206 (red channel is 4 digits away); Sample F: 81,170,206 (green channel is 7 digits away); Sample F: 81,177,200 (blue channel is 6 digits away).

So far we've only taken Color 1 from the Sample Set into consideration. However, the requirement calls for taking the entire Sample Set into consideration. If no positive matches can be found for Color 1, we assume no match at all and don't consider Colors 2 and 3 from the Sample Set. However, if we find a positive result for Color 1, say 80,177,206, which is just 1 digit off in the red channel (80 vs 81), then we do continue processing Color 2, and if we find a positive match for that, we process Color 3, and so on.

What are your suggestions for the algorithm best suited for this problem? I need something that will allow the Known Set to scale very large without too much of a performance hit; there will probably be 1M+ samples in the Known Set at scale. I thought about using hashtables, one per color, to construct the Known Set: I could test for a match on Color 1, and if found, test the hashtable for Color 2, stopping when I find no more hits. If I got through all 3 colors/hashtables with positive hits, I'd have an overall positive match; otherwise I would not. However, this approach doesn't allow for the variance needed in each of the RGB channels for each color; there would be too many combinations to construct hashtables holding it all. Thanks in advance, and thanks for reading this far down!
|
           Color 1           Color 2         Color 3
Sample A : [25, 25, 25],     [10, 10, 10],   [100, 100, 100]
Sample B : [125, 125, 125],  [10, 10, 10],   [200, 200, 200]
Sample C : [13, 87, 255],    [10, 10, 10],   [100, 100, 100]
Sample D : [67, 111, 0],     [10, 10, 10],   [200, 200, 200]
Sample E : [255, 255, 255],  [10, 10, 10],   [100, 100, 100]
Sample F : [81, 177, 206],   [36, 70, 224],  [255, 132, 0]
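For reference, the per-channel tolerance test described above is cheap to express directly; a sketch, using the question's tolerance of 3 per channel:

```csharp
using System;

struct Rgb
{
    public int R, G, B;
    public Rgb(int r, int g, int b) { R = r; G = g; B = b; }

    // True when every channel differs by at most `tolerance`.
    public bool Matches(Rgb other, int tolerance)
    {
        return Math.Abs(R - other.R) <= tolerance
            && Math.Abs(G - other.G) <= tolerance
            && Math.Abs(B - other.B) <= tolerance;
    }
}

class Demo
{
    static void Main()
    {
        var color1 = new Rgb(81, 177, 206);

        Console.WriteLine(color1.Matches(new Rgb(80, 177, 206), 3)); // red 1 away
        Console.WriteLine(color1.Matches(new Rgb(82, 179, 208), 3)); // all within 3
        Console.WriteLine(color1.Matches(new Rgb(85, 177, 206), 3)); // red 4 away
        Console.WriteLine(color1.Matches(new Rgb(81, 170, 206), 3)); // green 7 away
    }
}
```

To scale to 1M+ entries, the same predicate can sit behind a coarse spatial index, for example bucketing each channel into tolerance-sized cells and probing only the neighboring cells, so that just a handful of candidates need the exact check. That indexing scheme is a suggestion, not something from the question.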
|
Suggest an algorithm for color pattern matching against a large known set
|
C#
|
In Xamarin Google Maps for Android, using C#, you can create polygons like so, based on this tutorial: However, I have downloaded a CSV file from my Fusion Table layer from Google Maps, as I think this might be the easiest option to work with polygon/polyline data. The output looks like this: I uploaded a KML file to a Google Maps Fusion Table layer, which then created the map. I then went File > Download > CSV and it gave me the above example. I have added this CSV file to the Assets folder of my Xamarin Android Google Maps app, and my question would be: because LatLng takes two doubles as its input, is there a way I could feed the above data from the CSV file into this method, and if so, how? I'm not sure how to read the above CSV, extract the <coordinates>, and then add those coordinates as new LatLng values in the example code above. Notice, however, that the coordinates are split into lat and lng, with the next lat/lng pair separated by a space: -5.657018,57.3352 -5.656396,57.334463. Pseudocode (this may or may not require Xamarin or Android experience, and may just require C#/LINQ): As there is no way of using Fusion Table layers in Xamarin Android with Google Maps API v2, this may provide a quick and easy workaround for those who need to split maps into regions.
|
public void OnMapReady ( GoogleMap googleMap ) { mMap = googleMap ; PolylineOptions geometry = new PolylineOptions ( ) .Add ( new LatLng ( 37.35 , -37.0123 ) ) .Add ( new LatLng ( 37.35 , -37.0123 ) ) .Add ( new LatLng ( 37.35 , -37.0123 ) ) ; Polyline polyline = mMap.AddPolyline ( geometry ) ; } description , name , label , geometry , Highland,61 , '' < Polygon > < outerBoundaryIs > < LinearRing > < coordinates > -5.657018,57.3352 -5.656396,57.334463 -5.655076,57.334556 -5.653439,57.334477 -5.652366,57.334724 -5.650064,57.334477 -5.648096,57.335082 -5.646846,57.335388 -5.644733,57.335539 -5.643309,57.335428 -5.641981,57.335448 -5.640451,57.33578 -5.633217,57.339118 -5.627278,57.338921 -5.617161,57.337649 -5.607948,57.341015 -5.595812,57.343583 -5.586043,57.345373 -5.583581,57.350648 -5.576851,57.353609 -5.570088,57.354017 -5.560732,57.354102 -5.555254,57.354033 -5.549713,57.353146 -5.547766,57.352275 -5.538932,57.352255 -5.525891,57.356217 -5.514888,57.361865 -5.504272,57.366027 -5.494515,57.374515 -5.469829,57.383765 -5.458661,57.389781 -5.453695,57.395033 -5.454057,57.402943 -5.449189,57.40731 -5.440583,57.411447 -5.436133,57.414616 -5.438312,57.415474 -5.438628,57.417955 -5.440956,57.417909 -5.444013,57.414976 -5.450778,57.421362 -5.455035,57.422333 -5.462081,57.420719 -5.468775,57.416975 -5.475205,57.41135 -5.475976,57.409117 -5.47705,57.407092 -5.478101,57.406056 -5.478901,57.40536 -5.479489,57.404534 -5.480051,57.403782 -5.481036,57.403107 -5.484538,57.402102 -5.485647,57.401856 -5.487358,57.401287 -5.488709,57.400962 -5.490175,57.400616 -5.491116,57.400176 -5.493832,57.399318 -5.495279,57.399134 -5.496726,57.39771 -5.498724,57.396836 -5.49974,57.396314 -5.501317,57.39627 -5.502869,57.395426 < /coordinates > < /LinearRing > < /innerBoundaryIs > < /Polygon > '' , Strathclyde,63 , '' < Polygon > < outerBoundaryIs > < LinearRing > < coordinates > -5.603129,56.313564 -5.603163,56.312536 -5.603643,56.311794 -5.601467,56.311875 -5.601038,56.312481 
-5.600697,56.313489 -5.60071,56.31535 -5.60159,56.316107 -5.600729,56.316598 -5.598625,56.316058 -5.596203,56.317477 -5.597024,56.318119 -5.596095,56.318739 -5.595432,56.320116 -5.589343,56.322469 -5.584888,56.325178 -5.582907,56.327169 -5.581414,56.327472 -5.581435,56.326663 -5.582355,56.325602 -5.581515,56.323891 -5.576993,56.331062 -5.57886,56.331475 -5.57676,56.334449 -5.572748,56.335689 -5.569012,56.338143 -5.564802,56.342113 -5.555237,56.346668 -5.551214,56.347448 -5.547651,56.346391 -5.54444,56.344945 -5.541247,56.345945 -5.539099,56.349674 -5.533874,56.34763 -5.525195,56.342888 -5.523518,56.345066 -5.52345,56.346605 -5.526417,56.354361 -5.535455,56.353681 -5.537463,56.35508 -5.536035,56.356271 -5.538923,56.357205 -5.53891,56.359336 -5.539952,56.361491 -5.538102,56.36372 -5.535934,56.36567 -5.53392,56.367705 -5.531369,56.369729 -5.529853,56.371022 -5.532371,56.371274 -5.534177,56.371708 -5.532846,56.373256 -5.529845,56.37496 -5.527675,56.375327 -5.528531,56.375995 -5.526732,56.376343 -5.525442,56.377809 -5.524739,56.379843 -5.526069,56.380561 < /coordinates > < /LinearRing > < /innerBoundaryIs > < /Polygon > '' Read CSV var sr = new StreamReader ( Read csv from Asset folder ) ; Remove description , name , label , geometryForeach line in CSV Extract Item that contains double qoutes Foreach Item Remove Qoutes and < Polygon > < outerBoundaryIs > < LinearRing > < coordinates > from start and end Foreach item seperated by a space Extract coordinates ( This will now leave a long list of 37.35 , -37.0123 coordinates for each line ) Place in something like this maybe ? 
:

public class Row
{
    public double Lat { get; set; }
    public double Lng { get; set; }

    public Row(string str)
    {
        string[] separator = { "," };
        var arr = str.Split(separator, StringSplitOptions.None);
        Lat = Convert.ToDouble(arr[0]);
        Lng = Convert.ToDouble(arr[1]);
    }
}

private void OnMapReady()
    var rows = new List<Row>();
    Foreach name/new line
        PolylineOptions geometry = new PolylineOptions()
        ForEach (item in rows)  // not sure how PolylineOptions will take a foreach
            .Add(new LatLng(item.Lat, item.Lng))
        Polyline polyline = mMap.AddPolyline(geometry);
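A minimal parse of one <coordinates> payload might look like the sketch below. One caveat worth flagging: KML lists longitude first, then latitude, so the Row constructor above, which assigns arr[0] to Lat, would swap the axes. The sketch assumes lon,lat order in the input.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

class CoordinateParser
{
    // Parses a KML coordinate string such as
    // "-5.657018,57.3352 -5.656396,57.334463"
    // into (lat, lng) pairs. KML stores longitude first, latitude second.
    public static List<Tuple<double, double>> Parse(string coordinates)
    {
        return coordinates
            .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(pair =>
            {
                var parts = pair.Split(',');
                double lng = double.Parse(parts[0], CultureInfo.InvariantCulture);
                double lat = double.Parse(parts[1], CultureInfo.InvariantCulture);
                return Tuple.Create(lat, lng);
            })
            .ToList();
    }
}

class Demo
{
    static void Main()
    {
        var points = CoordinateParser.Parse("-5.657018,57.3352 -5.656396,57.334463");
        Console.WriteLine(points.Count);
        Console.WriteLine(points[0].Item1); // latitude
        Console.WriteLine(points[0].Item2); // longitude
    }
}
```

Each parsed list could then feed a polyline via `.Add(new LatLng(lat, lng))` in OnMapReady.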
|
Extracting data from CSV file (Fusion Table and KML workaround)
|
C#
|
During development I had a TempTextBlock for testing, and I've removed it now. The project builds successfully, but when I try to create a package for the Store, I get this error: error CS1061: 'MainPage' does not contain a definition for 'TempTextBlock' and no extension method 'TempTextBlock' accepting a first argument of type 'MainPage' could be found (are you missing a using directive or an assembly reference?) In MainPage.g.cs I see this: So TempTextBlock is used there. If I remove the whole method, I get this error: error CS0535: 'MainPage' does not implement interface member 'IComponentConnector.Connect(int, object)'. What is that Connect method in MainPage.g.cs, and how can I resolve this issue? Thanks.
|
/// <summary>
/// Connect()
/// </summary>
[global::System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.Windows.UI.Xaml.Build.Tasks", "14.0.0.0")]
[global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
public void Connect(int connectionId, object target)
{
    switch (connectionId)
    {
        case 1:
            {
                this.TempTextBlock = (global::Windows.UI.Xaml.Controls.TextBlock)(target);
            }
            break;
        default:
            break;
    }
    this._contentLoaded = true;
}
|
Page doesn't contain a definition for X
|
C#
|
Here's the simplified case. I have a class that stores a delegate that it will call on completion: I have another utility class that I want to subscribe to various delegates. On construction I want it to register itself to the delegate, but other than that it doesn't care about the type. The thing is, I don't know how to express that in the type system. Here's my pseudo-C#. Thanks in advance! Update: thanks to Alberto Monteiro, I now just use System.Action as the type for the event. My question now is how to pass the event to the constructor so it can register itself. This might be a very dumb question.
|
public class Animation
{
    public delegate void AnimationEnd();
    public event AnimationEnd OnEnd;
}

public class WaitForDelegate
{
    public delegateFired = false;

    // How to express the generic type here?
    public WaitForDelegate<F that's a delegate>(F trigger)
    {
        trigger += () => { delegateFired = true; };
    }
}

public class Example
{
    Animation animation; // assume initialized

    public void example()
    {
        // Here I can't pass the delegate, and get an error like
        // "The event can only appear on the left hand side of += or -="
        WaitForDelegate waiter = new WaitForDelegate(animation.OnEnd);
    }
}
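Even with the event typed as System.Action, the event itself still cannot be passed as an argument: outside its declaring class it may only appear on the left of += or -=. What can be passed is a small subscription delegate. A sketch (FinishForDemo exists only to raise the event for the demo):

```csharp
using System;

public class Animation
{
    public event Action OnEnd;

    // Demo-only helper so the test can raise the event.
    public void FinishForDemo() { OnEnd?.Invoke(); }
}

public class WaitForDelegate
{
    public bool DelegateFired { get; private set; }

    // The caller hands us "how to subscribe" rather than the event itself.
    public WaitForDelegate(Action<Action> subscribe)
    {
        subscribe(() => DelegateFired = true);
    }
}

class Demo
{
    static void Main()
    {
        var animation = new Animation();
        var waiter = new WaitForDelegate(handler => animation.OnEnd += handler);

        Console.WriteLine(waiter.DelegateFired);
        animation.FinishForDemo();
        Console.WriteLine(waiter.DelegateFired);
    }
}
```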
|
Constructor that takes any delegate as a parameter
|
C#
|
My expectation is that AutoMapper (3.3.0) does not automatically resolve string -> DateTime conversions, even when the string is in a well-understood format. The lack of a default string -> DateTime converter is noted (albeit four years ago) in a comment by the library author, Jimmy Bogard, on this Stack Overflow answer: https://stackoverflow.com/a/4915449/1675729. However, I have a .NET Fiddle which seems to suggest that AutoMapper can handle this mapping by default: https://dotnetfiddle.net/dDtUGx. In that example, the Zing property is mapped from a string in Foo to a DateTime in Bar without a custom mapping or resolver being specified. However, when this code runs in my solution's unit tests (using the same AutoMapper version), it produces the exception I expect, which is: What is causing this inconsistent behavior? For completeness, the code inside the .NET Fiddle is reproduced here:
|
AutoMapper.AutoMapperMappingException
Missing type map configuration or unsupported mapping.

Mapping types:
String -> DateTime
System.String -> System.DateTime

Destination path:
Bar.Zing

Source value:
Friday, December 26, 2014

using System;
using AutoMapper;

public class Program
{
    public static void Main()
    {
        var foo = new Foo();
        foo.Zing = DateTime.Now.ToLongDateString();

        Mapper.CreateMap<Foo, Bar>();
        var bar = Mapper.Map(foo, new Bar());

        Console.WriteLine(bar.Zing);
    }

    public class Foo
    {
        public string Zing { get; set; }
    }

    public class Bar
    {
        public DateTime Zing { get; set; }
    }
}
|
AutoMapper inconsistently automatically resolving string -> DateTime
|
C#
|
OK, custom policy-based authorization in ASP.NET Core. I kind of understand the idea of this new identity framework, but I'm still not 100% clear on what you can achieve with it. Assume we have an action in HomeController called List. This action will query and display a list of products from the database. The users who access this list must be part of the Marketing division, so in our policy we check whether the user has a claim called Division whose value is Marketing. If yes, he is allowed to see the list; otherwise not. We can decorate our action like this: All good with this; it will work perfectly. Scenario 1: What if I want to apply the policy at the product level, so that based on the policy the user will see only the products from his division? The marketing guy will see his products, R&D will see theirs, and so forth. How can I achieve that? Can it be done with a policy? If yes, how? Scenario 2: What about access at the field level? Let's say I want to hide certain fields. Example: all products have certain columns that must be visible to managers and hidden from the rest of the users. Can this be done using custom policies? If yes, how?
|
[Authorize(Policy = "ProductsAccess")]
public IActionResult List()
{
    // do the query and return the products view model
    return View();
}
|
ASP.NET Core Custom Policy Based Authorization - unclear
|
C#
|
I'm launching an external application from a ContextMenu, and I must block the source application while the target application is running. To achieve this I'm using Process.WaitForExit() to stop the source application responding to events. The problem is that the context menu remains visible in front of the target application. Let's see it with a simple example: This is the code I'm using for the example. How could I make the ContextMenu disappear before the target application is displayed?
|
public MainWindow()
{
    InitializeComponent();

    this.ContextMenu = new ContextMenu();
    MenuItem menuItem1 = new MenuItem();
    menuItem1.Header = "Launch notepad";
    menuItem1.Click += MyMenuItem_Click;
    this.ContextMenu.Items.Add(menuItem1);
}

void MyMenuItem_Click(object sender, RoutedEventArgs e)
{
    Process p = new Process();
    p.StartInfo.FileName = "notepad.exe";
    p.StartInfo.CreateNoWindow = false;
    p.Start();
    p.WaitForExit();
    p.Close();
}
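The menu never repaints because WaitForExit blocks the UI thread before the menu's close is rendered. One plausible workaround, sketched and untested against this exact scenario, is to defer the blocking work via the dispatcher at a priority below rendering, so the menu closes and the window repaints first:

```csharp
// Sketch: defer the blocking launch until the menu has closed and the UI
// has repainted. DispatcherPriority.Background is processed after
// pending rendering work.
void MyMenuItem_Click(object sender, RoutedEventArgs e)
{
    Dispatcher.BeginInvoke(
        System.Windows.Threading.DispatcherPriority.Background,
        new Action(() =>
        {
            using (Process p = new Process())
            {
                p.StartInfo.FileName = "notepad.exe";
                p.Start();
                p.WaitForExit(); // still blocks the UI thread, as required
            }
        }));
}
```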
|
WPF ContextMenu still visible after launching an external process
|
C#
|
Is there a way I can determine in .NET, for any arbitrary SQL Server result set, whether a given column in the result can contain nulls? For example, if I have the statements below and I get a data reader like this, can I have a function like this? I want it to return true for the first statement and false for the second. rdr.GetSchemaTable() doesn't work for this because it returns whether the underlying column can be null, which is not what I want. There are functions on the data reader that return the underlying SQL type of the field, but none seem to tell me whether it can be null.
|
Select NullableColumn From MyTable

Select IsNull(NullableColumn, '5') as NotNullColumn From MyTable

var cmd = new SqlCommand(statement, connection);
var rdr = cmd.ExecuteReader();

bool ColumnMayHaveNullData(SqlDataReader rdr, int ordinal)
{
    // ????
}
|
SqlDataReader find out if a data field is nullable
|
C#
|
I have a List<T> of available times within a 24-hour day, and two TimeSpans, minTime and maxTime. I need to find a time of day within the List<T> that lands between minTime and maxTime. However, because this is used across multiple time zones, minTime and maxTime can be on separate days and span something like 1pm to 1am the next day. The closest I've come is this, but I feel like I'm missing some major component here, or doing something really inefficient, since I'm fairly new to the TimeSpan object. I just can't figure out what... The list of times uses EST (my local time); however, minTime and maxTime can be based on other time zones. For example, if we ran this algorithm for a Hawaii time zone, we would end up with minTime = new TimeSpan(13, 0, 0) and maxTime = new TimeSpan(25, 0, 0), since 8am-8pm HST = 1pm-1am EST. The Times collection is a List<AppointmentTime>, and AppointmentTime is a class that looks like this: I'm fairly sure I'm missing something major here, or that there is a more efficient way of doing this that I'm not aware of, but I really can't think of what it could be. Is there something wrong with my algorithm? Or a more efficient way of finding a TimeOfDay between two TimeSpans that may span separate days?

Update: I figured out what I was missing, based on CasperOne's answer. I was forgetting that the date actually does matter, since my times cross different time zones. Using my Hawaii example above, scheduling appointments for Monday would result in incorrectly scheduling the Hawaii appointments on Sunday night. My solution was to check that the previous day was valid before scheduling appointments for the "first window" of the 24-hour day, and to adjust the appointment date by .AddDays(maxTime.Days) when comparing with maxTime.
|
// Make a new TimeSpan out of maxTime to eliminate any extra days (TotalHours >= 24),
// then check if the time on maxTime is earlier than minTime
if (new TimeSpan(maxTime.Hours, maxTime.Minutes, maxTime.Seconds) < minTime)
{
    // If the time on maxTime is earlier than minTime, the two times span separate days,
    // so find the first time after minTime OR before maxTime
    nextAvailableTime = Times.FirstOrDefault(p =>
        (p.Time.TimeOfDay >= minTime || (p.Time.TimeOfDay < maxTime))
        && p.Count < ConcurrentAppointments);
}
else
{
    // If the time on maxTime is later than minTime, the two times are for the same day,
    // so find the first time after minTime AND before maxTime
    nextAvailableTime = Times.FirstOrDefault(p =>
        (p.Time.TimeOfDay >= minTime && p.Time.TimeOfDay < maxTime)
        && p.Count < ConcurrentAppointments);
}

class AppointmentTime
{
    DateTime Time { get; set; }
    int Count { get; set; }
}

// If the time on maxTime is earlier than minTime, the two times span separate days,
// so find the first time after minTime OR before maxTime if the previous day has
// appointments set as well
var isPreviousDayValid = IsValidDate(AppointmentDate.AddDays(-1));

nextAvailableTime = Times.FirstOrDefault(p =>
    (p.Time.TimeOfDay >= minTime
        || (p.Time.AddDays(maxTime.Days).TimeOfDay < maxTime && isPreviousDayValid))
    && p.Count < ConcurrentAppointments);
|
What am I missing in this algorithm to find a TimeOfDay between two TimeSpans that may span separate days?
|
C#
|
I was using Dapper and having it return a dynamic IEnumerable, like this: Here, rows is of type IEnumerable<dynamic>. IntelliSense says FirstOrDefault() is awaitable, with the usage await FirstOrDefault(). Not all LINQ queries are shown as awaitable, but it seems especially those that somehow single out elements are. As soon as I use strong typing instead, this behavior goes away. Is it because .NET can't know whether the type you receive at runtime is awaitable or not, so it "allows" it in case you need it, but doesn't enforce it? Or am I supposed to, due to some Dynamic Language Runtime behavior, actually use await here? I have kept searching but haven't found the smallest thing about this online.
|
var rows = conn.Query("SELECT * FROM T WHERE ID = @id", new { id = tableId });
var row = rows.FirstOrDefault();
|
Why is First() or ElementAt() on a dynamic IEnumerable awaitable?
|
C#
|
I have a Control, lblDate, in a User Control, MainScreen. I would like to modify it in a method in class Date, which is in another project, AoWLibrary. I can't reference it because AoWLibrary is a dependency of the first project. I tried to make lblDate static, but the compiler kept throwing errors at me, and I have a public property that Date can't seem to access: In class Date, I need the public method CalculateDate to modify the Text property of lblDate. Also, I can't even access lblDate from other controls in the same project.
|
public Label LabelDate
{
    get { return lblDate; }
    set { lblDate = value; }
}

public static void CalculateDate()
{
    GameDate = month.ToString() + "/" + displayDay.ToString() + "/" + year.ToString();
    // LabelDate.Text = GameDate; // This is essentially what I need to do
}
|
Modify Windows Forms Control from another Project
|
C#
|
When writing a method chain for LINQ, I can do the Where statements one of two ways: ... or ... Are there any benefits of one over the other? Don't worry too much about the data types in this example, but if there are issues with data types, that would be good to know too. The obvious one is that the object is already referenced, so hitting two properties at once is easier on the application, right?
|
var blackOldCats = cats.Where(cat => cat.Age > 7 && cat.Colour == "noir")

var blackOldCats = cats.Where(cat => cat.Age > 7).Where(cat => cat.Colour == "noir")
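Semantically the two forms select the same cats; for LINQ to Objects, the stacked version adds one intermediate iterator and one extra delegate invocation per surviving element, while query providers (IQueryable) typically translate both to the same query. A quick equivalence check, with a hypothetical Cat type:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Cat
{
    public int Age;
    public string Colour;
}

class Demo
{
    static void Main()
    {
        var cats = new List<Cat>
        {
            new Cat { Age = 9,  Colour = "noir" },
            new Cat { Age = 3,  Colour = "noir" },
            new Cat { Age = 12, Colour = "roux" },
        };

        var combined = cats.Where(cat => cat.Age > 7 && cat.Colour == "noir").ToList();
        var stacked  = cats.Where(cat => cat.Age > 7).Where(cat => cat.Colour == "noir").ToList();

        // Both queries select exactly the same cats.
        Console.WriteLine(combined.Count);
        Console.WriteLine(combined.SequenceEqual(stacked));
    }
}
```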
|
LINQ Where clauses - better to stack or combine?
|
C#
|
I'm currently working on an emulation server for a Flash-client-based game, which has a "pets system", and I was wondering if there is a simpler way of checking the level of a specified pet. Current code: Yes, I'm aware I've misspelt Experience; I made the mistake in a previous function and haven't gotten around to updating everything.
|
public int Level
{
  get
  {
    if (Expirience > 100) // Level 2
    {
      if (Expirience > 200) // Level 3
      {
        if (Expirience > 400) // Level 4 - Unsure of Goal
        {
          if (Expirience > 600) // Level 5 - Unsure of Goal
          {
            if (Expirience > 1000) // Level 6
            {
              if (Expirience > 1300) // Level 7
              {
                if (Expirience > 1800) // Level 8
                {
                  if (Expirience > 2400) // Level 9
                  {
                    if (Expirience > 3200) // Level 10
                    {
                      if (Expirience > 4300) // Level 11
                      {
                        if (Expirience > 7200) // Level 12 - Unsure of Goal
                        {
                          if (Expirience > 8500) // Level 13 - Unsure of Goal
                          {
                            if (Expirience > 10100) // Level 14
                            {
                              if (Expirience > 13300) // Level 15
                              {
                                if (Expirience > 17500) // Level 16
                                {
                                  if (Expirience > 23000) // Level 17
                                  {
                                    return 17; // Bored
                                  }
                                  return 16;
                                }
                                return 15;
                              }
                              return 14;
                            }
                            return 13;
                          }
                          return 12;
                        }
                        return 11;
                      }
                      return 10;
                    }
                    return 9;
                  }
                  return 8;
                }
                return 7;
              }
              return 6;
            }
            return 5;
          }
          return 4;
        }
        return 3;
      }
      return 2;
    }
    return 1;
  }
}
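One common simplification is a threshold table plus a short loop; a sketch whose behavior matches the nested ifs above (including the thresholds marked "Unsure of Goal"):

```csharp
using System;

public class Pet
{
    // Experience that must be exceeded to reach each successive level;
    // index i holds the threshold for level i + 2. Values copied from
    // the original nested ifs.
    private static readonly int[] Thresholds =
    {
        100, 200, 400, 600, 1000, 1300, 1800, 2400,
        3200, 4300, 7200, 8500, 10100, 13300, 17500, 23000
    };

    public int Expirience { get; set; }

    public int Level
    {
        get
        {
            int level = 1;
            // Each threshold we pass bumps the level by one.
            while (level - 1 < Thresholds.Length && Expirience > Thresholds[level - 1])
                level++;
            return level;
        }
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(new Pet { Expirience = 50 }.Level);
        Console.WriteLine(new Pet { Expirience = 150 }.Level);
        Console.WriteLine(new Pet { Expirience = 5000 }.Level);
        Console.WriteLine(new Pet { Expirience = 99999 }.Level);
    }
}
```

The same table could use Array.BinarySearch for large level counts, though for 17 levels the linear walk is already trivial.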
|
Simpler / more efficient method of nested if...else flow?
|
C#
|
I am writing a Cmdlet and need to pass object structures that may contain PSObjects into an API client. Currently, these serialize as a JSON string containing CLIXML. Instead, I need each one to be treated like an object (including the NoteProperties in PSObject.Properties as properties, with their values serialized recursively). I tried writing my own JsonConverter, but for some reason it only gets called for the top-level object, not for nested PSObjects: Additionally, I am serializing to camel case using CamelCasePropertyNamesContractResolver. Is there a way to make the converter respect that?
|
public class PSObjectJsonConverter : JsonConverter
{
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        if (value is PSObject)
        {
            JObject obj = new JObject();
            foreach (var prop in ((PSObject)value).Properties)
            {
                obj.Add(new JProperty(prop.Name, prop.Value));
            }
            obj.WriteTo(writer);
        }
        else
        {
            JToken token = JToken.FromObject(value);
            token.WriteTo(writer);
        }
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        throw new NotImplementedException();
    }

    public override bool CanRead
    {
        get { return false; }
    }

    public override bool CanConvert(Type objectType)
    {
        return true;
    }
}
|
How can I serialise PSObjects in C# with JSON.NET?
|
C#
|
I have a method with the following signature: From a point in my code I need to move up the stack trace to find the closest method with the SpecificationAttribute (performance is not an issue here). I find this method, but I cannot find any custom attributes on it. I don't think I've ever seen this happen. What might be the reason? This is a unit-testing assembly with Optimization disabled in the build.
|
[Specification]
public void slide_serialization()
{
|
Why can I not find a custom attribute on this MethodInfo
|
C#
|
Preamble: I'm trying to disassemble and reverse-engineer a program whose author is long gone. The program provides some unique features that I have yet to find elsewhere, and... I'm curious and intrigued by reverse-engineering the program. If you're just going to try to help me find another program... don't bother.

The problem: I'm using IDA Pro with the Hex-Rays decompiler to get some halfway-decent pseudocode to try to speed up the reverse engineering. One big thing that I think will help is figuring out what the strings mean. So far, this is what I'm finding for strings that are more than 4 characters long: From looking at similar pseudocode for three-character strings, and using the Hex-Rays hover-overs for type information, here's how I understand this: runtimeVersion is a const wchar; this means it has Unicode characters (UTF-16); the string is embedded in memory, but in this case weakly encrypted (XOR?). The above pseudocode is the same for all the big strings, except the constant "882" is different for every string. I'm assuming this is some sort of compile-time encryption or macro that finds strings one by one and "encrypts" them uniquely. The problem is, though, that I can't seem to get a proper-looking string by replicating the pseudocode. Here's what I have in C#: rawCharacters is a ushort array. I split each of those dword entries in half and treat each one as a ushort. I put them in the array starting from the bottom to the top... so the value assigned to runtimeVersion[0] gets added to my array first, then the value from dword_131893E, then dword_1318942, etc. I'm not sure what I'm missing here. This seems so simple that it should be cake to reverse and recover the strings, but I'm getting stumped on the conversion from pseudocode to actual code. Thoughts?
|
dword_131894E = 54264588;
dword_131894A = 51381002;
dword_1318946 = 51380998;
dword_1318942 = 52429571;
dword_131893E = 52298503;
runtimeVersion[0] = 836;
szIndex = 0;
do
{
    runtimeVersion[szIndex] = (runtimeVersion[szIndex] - 1) ^ (szIndex + 882) ^ 0x47;
    ++szIndex;
}
while (szIndex < 11);

ushort[] newCharArray = new ushort[rawCharacters.Length];

// Go through and decode all of the characters.
ushort i = 0;
do
{
    newCharArray[i] = (ushort)((i + 882) ^ (rawCharacters[i] - 1) ^ 0x47);
    ++i;
}
while (i < 11);
|
Can't decrypt these strings
|
C#
|
Does anyone know why the last one doesn't work?
|
object nullObj = null;
short works1 = (short)(nullObj ?? (short)0);
short works2 = (short)(nullObj ?? default(short));
short works3 = 0;
short wontWork = (short)(nullObj ?? 0); // Throws: Specified cast is not valid
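The behavior comes down to boxing: the static type of `nullObj ?? 0` is object, so the int literal 0 is boxed as an int, and unboxing must target exactly the boxed type. A small demonstration of both the failure and a working two-step cast:

```csharp
using System;

class Program
{
    static void Main()
    {
        object nullObj = null;

        // "nullObj ?? 0" has static type object, so 0 is boxed as an int.
        // Unboxing must match the exact boxed value type, so a direct cast
        // from the boxed int to short throws.
        object boxed = nullObj ?? 0;
        bool threw = false;
        try
        {
            short s = (short)boxed; // invalid unbox: boxed int -> short
        }
        catch (InvalidCastException)
        {
            threw = true;
        }
        Console.WriteLine(threw);

        // Unbox to int first, then convert to short.
        short ok = (short)(int)(nullObj ?? 0);
        Console.WriteLine(ok);
    }
}
```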
|
Null coalescing operator giving "Specified cast is not valid" casting int to short
|
C#
|
I am trying to get the heart rate from a Microsoft Band. It should update whenever the value changes, and I then want to display that value in a TextBlock. I first create an instance of IBandClient and hook its HeartRate.ReadingChanged event like this: Then I try to update the value like this: HeartRate is an int set like so: The TextBlock text is then bound to HeartRate. However, I keep getting this error when trying to set HeartRate: The application called an interface that was marshalled for a different thread. (Exception from HRESULT: 0x8001010E (RPC_E_WRONG_THREAD)) My guess is that it's trying to set HeartRate while it is still being set from the call before.
|
bandClient.SensorManager.HeartRate.ReadingChanged += HeartRate_ReadingChanged;

private void HeartRate_ReadingChanged(object sender, Microsoft.Band.Sensors.BandSensorReadingEventArgs<Microsoft.Band.Sensors.IBandHeartRateReading> e)
{
    HeartRate = e.SensorReading.HeartRate;
}

public int HeartRate
{
    get { return (int)GetValue(HeartRateProperty); }
    set { SetValue(HeartRateProperty, value); }
}

// Using a DependencyProperty as the backing store for HeartRate. This enables animation, styling, binding, etc...
public static readonly DependencyProperty HeartRateProperty =
    DependencyProperty.Register("HeartRate", typeof(int), typeof(MainPage), new PropertyMetadata(0));
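RPC_E_WRONG_THREAD is the classic cross-thread marshaling error: ReadingChanged fires on a background thread, but a DependencyProperty may only be set on the UI thread that owns it. A common fix (sketched here, assuming a Windows Runtime / UWP page where Dispatcher is available) is to marshal the assignment back to the UI thread:

```csharp
private async void HeartRate_ReadingChanged(object sender,
    Microsoft.Band.Sensors.BandSensorReadingEventArgs<Microsoft.Band.Sensors.IBandHeartRateReading> e)
{
    // Capture the value on the sensor thread...
    int reading = e.SensorReading.HeartRate;

    // ...then set the DependencyProperty on the UI thread.
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal,
        () => HeartRate = reading);
}
```

In a WPF app the equivalent would be `Dispatcher.Invoke`/`Dispatcher.BeginInvoke` instead of `Dispatcher.RunAsync`.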
|
Get Heart Rate From Microsoft Band
|
C#
|
Eric Lippert has explained in his blog post at http://blogs.msdn.com/b/ericlippert/archive/2009/12/10/constraints-are-not-part-of-the-signature.aspx why constraints are not considered for type inference, which makes sense given that methods cannot be overloaded by simply changing type constraints. However, I would like to find a way to instantiate an object using two generic types, one which can be inferred and another which could be inferred if constraints were considered, without having to specify any of the types.

Given the types: and the factory: the following desired code will not compile: The error message is "error CS0411: The type arguments for method 'yo.Factory1.Create(T)' cannot be inferred from the usage. Try specifying the type arguments explicitly.", which is in line with what Eric said in his blog post. Thus, we can simply specify the generic type arguments explicitly, as the error message suggests: If we don't wish to specify type arguments and we don't need to retain type C, we can use the following factory: and now specify: However, I wish to retain type C in the returned object and not specify the types.
|
public interface I<T>
{
    Other<T> CreateOther();
}

public class C : I<string>
{
    public Other<string> CreateOther()
    {
        return new Other<string>();
    }
}

public class Other<T> { }

public static class Factory1
{
    public static Tuple<T, Other<T1>> Create<T, T1>(T o) where T : I<T1>
    {
        return new Tuple<T, Other<T1>>(o, o.CreateOther());
    }
}

public void WontCompile()
{
    C c = new C();
    var v = Factory1.Create(c); // won't compile
}

public void SpecifyAllTypes()
{
    C c = new C();
    var v = Factory1.Create<C, string>(c); // type is Tuple<C, Other<string>>
}

public static class Factory2
{
    public static Tuple<I<T1>, Other<T1>> CreateUntyped<T1>(I<T1> o)
    {
        return new Tuple<I<T1>, Other<T1>>(o, o.CreateOther());
    }
}

public void Untyped()
{
    C c = new C();
    var v = Factory2.CreateUntyped(c); // type is Tuple<I<string>, Other<string>>
}
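One workaround that keeps the concrete type without spelling out type arguments is to route creation through the interface itself (a form of double dispatch): the implementing class fixes T1, so only the self type needs inference. This is a sketch of the idea, not the only possible design:

```csharp
using System;

public class Other<T> { }

public interface I<T>
{
    Other<T> CreateOther();

    // T is fixed by the implementing class; only TSelf needs inference.
    Tuple<TSelf, Other<T>> Pair<TSelf>(TSelf self) where TSelf : I<T>;
}

public class C : I<string>
{
    public Other<string> CreateOther() { return new Other<string>(); }

    public Tuple<TSelf, Other<string>> Pair<TSelf>(TSelf self) where TSelf : I<string>
    {
        return Tuple.Create(self, self.CreateOther());
    }
}

public static class Demo
{
    public static void Run()
    {
        C c = new C();
        var v = c.Pair(c); // inferred as Tuple<C, Other<string>>, no explicit type arguments
    }
}
```

The cost is the slightly awkward `c.Pair(c)` call shape and a bit of boilerplate in each implementation, but both type arguments now come out of inference.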
|
Is there a workaround for C# not being able to infer generic type arguments using type constraints?
|
C#
|
Is there any difference between the following two statements? They both work.
|
if (((Func<bool>)(() => true))()) { .... };
if (new Func<bool>(() => true)()) { .... };
|
Cast to Func vs new Func?
|
C#
|
I declared the function Process32FirstW and the structure PROCESSENTRY32W like this: When calling Process32FirstW (from a 64-bit process), I always get a TypeLoadException saying:

The type ProcessEntry couldn't be loaded, because the object field at offset 44 is aligned wrong or is overlapped by another field, which isn't an object field.

I also tried using char[] instead of string for ProcessEntry.ExeFile, and using Pack=4 and Pack=8 in the structure's StructLayoutAttribute. I always set ProcessEntry.Size to 568, and I copied the offset data from a C++ program (64-bit build): I can't figure out what is going wrong, so how do I declare PROCESSENTRY32W in C# for a 64-bit application? Do I have to use C++/CLI, or am I simply doing something wrong here?

EDIT: Running this code as a 64-bit program works perfectly fine for me:
|
[DllImport("KERNEL32.DLL", CallingConvention = CallingConvention.StdCall, EntryPoint = "Process32FirstW")]
private static extern bool Process32FirstW(IntPtr hSnapshot, ref ProcessEntry pProcessEntry);

[StructLayout(LayoutKind.Explicit, CharSet = CharSet.Unicode, Size = 568)]
internal struct ProcessEntry
{
    [FieldOffset(0)]
    public int Size;
    [FieldOffset(8)]
    public int ProcessId;
    [FieldOffset(32)]
    public int ParentProcessID;
    [FieldOffset(44), MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)]
    public string ExeFile;
}

typedef unsigned long long ulong;

PROCESSENTRY32W entry;
wcout << sizeof(PROCESSENTRY32W) << endl;                           // 568
wcout << (ulong)&entry.dwSize - (ulong)&entry << endl;              // 0
wcout << (ulong)&entry.th32ProcessID - (ulong)&entry << endl;       // 8
wcout << (ulong)&entry.th32ParentProcessID - (ulong)&entry << endl; // 32
wcout << (ulong)&entry.szExeFile - (ulong)&entry << endl;           // 44

HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
PROCESSENTRY32W entry;
entry.dwSize = sizeof(PROCESSENTRY32W);
if (Process32FirstW(hSnapshot, &entry))
{
    do
    {
        // Do stuff
    } while (Process32NextW(hSnapshot, &entry));
}
CloseHandle(hSnapshot);
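The CLR rejects the explicit layout because an object field (the marshaled string) sits at offset 44, which is not pointer-aligned on 64-bit. A common way to sidestep this is to declare every field of the native struct and let LayoutKind.Sequential reproduce the native offsets; the pointer-sized th32DefaultHeapID (ULONG_PTR) is what pushes the x64 struct to 568 bytes. A sketch based on the usual Win32 declaration:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
internal struct ProcessEntry
{
    public uint dwSize;                // offset 0
    public uint cntUsage;              // offset 4
    public uint th32ProcessID;         // offset 8
    public IntPtr th32DefaultHeapID;   // ULONG_PTR: offset 16 on x64 (8-byte aligned)
    public uint th32ModuleID;          // offset 24
    public uint cntThreads;            // offset 28
    public uint th32ParentProcessID;   // offset 32
    public int pcPriClassBase;         // offset 36
    public uint dwFlags;               // offset 40
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)]
    public string szExeFile;           // offset 44, matching the C++ dump
}
```

Rather than hard-coding 568, setting `entry.dwSize = (uint)Marshal.SizeOf(typeof(ProcessEntry));` before the call should keep the declaration correct for both 32-bit and 64-bit builds.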
|
`PROCESSENTRY32W` in C#?
|
C#
|
I get file size = 0. The finalizer should have executed, because I derive from CriticalFinalizerObject. I don't want to use Trace.Close() anywhere but in the finalizer.

Edit (after @Eric Lippert's reply): I've re-edited the code trying to match it to a constrained execution region (but still no success).
|
class Program : CriticalFinalizerObject
{
    static void Main(string[] args)
    {
        Program p = new Program();
        TextWriterTraceListener listener = new TextWriterTraceListener(@"C:\trace.txt");
        Trace.Listeners.Clear(); // Remove default trace listener
        Trace.Listeners.Add(listener);
        Trace.WriteLine("First Trace"); // Generate some trace messages
        Trace.WriteLine("Perhaps last Trace.");
    }

    ~Program()
    {
        Trace.Close();
    }
}

[ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
class Program : CriticalFinalizerObject
{
    static void Main(string[] args)
    {
        RuntimeHelpers.PrepareConstrainedRegions();
        try { }
        catch (Exception e) { }
        finally
        {
            Program p = new Program();
            TextWriterTraceListener listener = new TextWriterTraceListener(@"C:\trace1.txt");
            Trace.Listeners.Clear();
            Trace.Listeners.Add(listener);
            Trace.WriteLine("First Trace");
            Trace.WriteLine("Perhaps last Trace.");
        }
    }

    ~Program()
    {
        Trace.Flush();
    }
}
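One likely problem with both versions: finalization order is not guaranteed, so by the time ~Program runs at shutdown, the TextWriterTraceListener's underlying writer may already have been finalized, and touching static Trace state from a finalizer is unreliable in general. The deterministic alternative is to flush before Main returns, for example in a try/finally. A sketch of that shape (illustrative, not the only fix):

```csharp
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        var listener = new TextWriterTraceListener(@"C:\trace.txt");
        Trace.Listeners.Clear();          // Remove default trace listener
        Trace.Listeners.Add(listener);
        try
        {
            Trace.WriteLine("First Trace");
            Trace.WriteLine("Perhaps last Trace.");
        }
        finally
        {
            Trace.Flush();                // deterministic: runs before any finalizers
            Trace.Close();
        }
    }
}
```

Setting `Trace.AutoFlush = true;` at startup is another deterministic option if closing from the finalizer is a hard requirement elsewhere.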
|
Why isn't my Close function called?
|
C#
|
I have been trying to use the Deedle F# library to write an F# batch program. It has worked perfectly. However, I am not sure about the best design for the following two tasks:

- Combine the F# module into an existing ASP.NET MVC/Web API system.
- Create a WPF interface to serve as a control panel and visual dependency controller for the various F# modules.

The F# modules process time series and apply statistical processes to derive new time series.

I have been trying to create a class wrapper for the existing module so it can be called from C# code. I read in C# Deep Dives that this is a better way to expose F# modules to C# callers. The following is a sample wrapper: The following is a sample module where most of the logic should reside: I have been using NUnit to serve as a C# caller. I found myself putting most of the logic in the class's do/let bindings, with the member methods just passing the results to the caller. I don't think my approach is correct.

Can someone point me in the right direction? (I attempted to learn the F# WPF framework on GitHub, but I am not yet up to the task.) I am aware Deedle is also available for C#, but I really want to use F#. The sample code admittedly has too many side effects.
|
type TimeSeriesDataProcessor(fileName : string) =
    let mutable _fileName = fileName
    let _rawInputData = loadCsvFile _fileName
    let _pivotedData =
        _rawInputData
        |> pivotRawData
        |> fillPivotedRawData
        |> calculateExpandingZscore

    // read and write
    member this.FileName
        with get () = _fileName
        and set (value) = _fileName <- value

    member this.RawInputData with get () = _rawInputData
    member this.PivotedData with get () = _pivotedData
    member this.rawInputDataCount with get () = _rawInputData.RowCount
    member this.pivotedDataCount with get () = _pivotedData.RowCount

module Common =
    let loadCsvFile (fileName : string) : Frame<int, string> =
        let inputData = Frame.ReadCsv(fileName)
        inputData

    let pivotRawData inputData : Frame<DateTime, string> =
        let pivotedData =
            inputData
            |> Frame.pivotTable
                (fun k r -> r.GetAs<DateTime>("Date"))
                (fun k r -> r.GetAs<string>("Indicator"))
                (fun f ->
                    let maxVal = f?Value |> Stats.max
                    match maxVal with
                    | Some mv -> mv
                    | _ -> Double.NaN)
        pivotedData

    let fillPivotedRawData inputData : Frame<DateTime, string> =
        let filledA = inputData?A |> Series.fillMissing Direction.Forward
        inputData?A <- filledA
        let filledB = inputData?B |> Series.fillMissing Direction.Forward
        inputData?B <- filledB
        inputData

    let calculateExpandingZscore inputData : Frame<DateTime, string> =
        let expandingMeanColA = inputData?A |> Stats.expandingMean
        let expandingMeanColB = inputData?B |> Stats.expandingMean
        let expandingStdevColA = inputData?A |> Stats.expandingStdDev
        let expandingStdevColB = inputData?B |> Stats.expandingStdDev
        let expandingZscoreColA = (inputData?A - expandingMeanColA) / expandingStdevColA
        let expandingZscoreColB = (inputData?B - expandingMeanColB) / expandingStdevColB
        inputData?ExpdingMeanA <- expandingMeanColA
        inputData?ExpdingMeanB <- expandingMeanColB
        inputData?ExpdingStdevA <- expandingStdevColA
        inputData?ExpdingStdevB <- expandingStdevColB
        inputData?ExpdingZscoreA <- expandingZscoreColA
        inputData?ExpdingZscoreB <- expandingZscoreColB
        inputData
|
Designing an F# module to be called by C# (Console/MVC/WPF)
|
C#
|
I want to serialize and deserialize an object which contains a Lazy collection of some custom objects. Normally everything works perfectly fine, but if the namespaces of the classes used for serialization are changed, then this issue occurs. I have written a SerializationBinder to point to the right classes while deserializing, but for some reason I am not getting the deserialized values. The following code snippets explain the problem I am getting.

Classes used for serialization: The same classes are used for deserialization, but with the namespace changed to ConsoleApplication14.OtherNamespace. For such deserialization to work, I have used the following SerializationBinder class: Serialization and deserialization of an object of MyOuterClass: I am getting a NullReferenceException when the Value property of the deserialized Lazy object is called (i.e. when objOfOtherNamespaceClass.CollectionOfInnerObj.Value is called).

Please help me resolve this issue...
|
namespace ConsoleApplication14
{
    [Serializable]
    public class MyInnerClass : ISerializable
    {
        private string _stringInInnerClassKey = "StringInInnerClass";

        public string StringInInnerClass { get; set; }

        public MyInnerClass() { }

        private MyInnerClass(SerializationInfo info, StreamingContext context)
        {
            StringInInnerClass = info.GetString(_stringInInnerClassKey);
        }

        public void GetObjectData(SerializationInfo info, StreamingContext context)
        {
            info.AddValue(_stringInInnerClassKey, StringInInnerClass);
        }
    }

    [Serializable]
    public class MyOuterClass : ISerializable
    {
        private string _collectionOfObjKey = "CollectionOfInnerObj";

        public Lazy<Collection<MyInnerClass>> CollectionOfInnerObj { get; set; }

        private MyOuterClass(SerializationInfo info, StreamingContext context)
        {
            if (info == null)
                throw new ArgumentNullException("serializationInfo");
            CollectionOfInnerObj = (Lazy<Collection<MyInnerClass>>)info.GetValue(
                _collectionOfObjKey, typeof(Lazy<Collection<MyInnerClass>>));
        }

        public MyOuterClass() { }

        public void GetObjectData(SerializationInfo info, StreamingContext context)
        {
            if (info == null)
                throw new ArgumentNullException();
            info.AddValue(_collectionOfObjKey, CollectionOfInnerObj,
                typeof(Lazy<Collection<MyInnerClass>>));
        }
    }
}

public class MyBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        if (assemblyName.Equals("ConsoleApplication14, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"))
        {
            if (typeName.Equals("ConsoleApplication14.MyOuterClass"))
                return typeof(ConsoleApplication14.OtherNamespace.MyOuterClass);
            if (typeName.Equals("ConsoleApplication14.MyInnerClass"))
                return typeof(ConsoleApplication14.OtherNamespace.MyInnerClass);
        }
        if (assemblyName.Equals("mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"))
        {
            if (typeName.Equals("System.Collections.ObjectModel.Collection`1[[ConsoleApplication14.MyInnerClass, ConsoleApplication14, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]"))
                return typeof(Collection<ConsoleApplication14.OtherNamespace.MyInnerClass>);
            if (typeName.Equals("System.Collections.Generic.List`1[[ConsoleApplication14.MyInnerClass, ConsoleApplication14, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]"))
                return typeof(List<ConsoleApplication14.OtherNamespace.MyInnerClass>);
            if (typeName.Equals("System.Lazy`1[[System.Collections.ObjectModel.Collection`1[[ConsoleApplication14.MyInnerClass, ConsoleApplication14, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]"))
                return typeof(Lazy<Collection<ConsoleApplication14.OtherNamespace.MyInnerClass>>);
            // I THINK, MAYBE THIS 'IF' CONDITION IS THE PROBLEM, BUT DON'T KNOW HOW TO FIX THIS.
            if (typeName.Equals("System.Lazy`1+Boxed[[System.Collections.ObjectModel.Collection`1[[ConsoleApplication14.MyInnerClass, ConsoleApplication14, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]"))
                return typeof(Lazy<Collection<ConsoleApplication14.OtherNamespace.MyInnerClass>>);
        }
        return Type.GetType(String.Format("{0}, {1}", typeName, assemblyName));
    }
}

public static void Main(string[] args)
{
    // ---------------- Object Creation ----------------
    var objToSerialize = new MyOuterClass
    {
        CollectionOfInnerObj = new Lazy<Collection<MyInnerClass>>(() =>
            new Collection<MyInnerClass>
            {
                new MyInnerClass { StringInInnerClass = "a" },
                new MyInnerClass { StringInInnerClass = "aa" },
            })
    };

    // ---------------- Serialization ----------------
    using (var stream = File.Create("E:\\tempFile.tmp"))
    {
        var binaryFormatter = new BinaryFormatter();
        binaryFormatter.Serialize(stream, objToSerialize);
        stream.Close();
    }

    // ---------------- DeSerialization ----------------
    using (var stream = File.OpenRead("E:\\tempFile.tmp"))
    {
        var binaryFormatter = new BinaryFormatter { Binder = new MyBinder() };
        var objOfOtherNamespaceClass = (OtherNamespace.MyOuterClass)binaryFormatter.Deserialize(stream);

        // Getting NullReferenceException when the Value property of
        // objOfOtherNamespaceClass.CollectionOfInnerObj is called
        foreach (OtherNamespace.MyInnerClass stringVal in objOfOtherNamespaceClass.CollectionOfInnerObj.Value)
            Console.WriteLine(stringVal.StringInInnerClass);
        stream.Close();
    }
}
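One plausible reading of the failure: Lazy<T> stores its materialized value inside a private nested type (the `Lazy`1+Boxed` name the binder is matching), and a binder cannot legally map the old Boxed type onto the public Lazy<T> itself, so the rebuilt Lazy ends up with no box and faults when Value is touched. A common workaround is to not serialize the Lazy wrapper at all: force the value in GetObjectData and re-wrap it on deserialization. A sketch of just the two affected members of MyOuterClass (same field names as in the question):

```csharp
public void GetObjectData(SerializationInfo info, StreamingContext context)
{
    if (info == null)
        throw new ArgumentNullException("info");

    // Serialize the materialized collection, not the Lazy wrapper itself.
    info.AddValue(_collectionOfObjKey, CollectionOfInnerObj.Value,
        typeof(Collection<MyInnerClass>));
}

private MyOuterClass(SerializationInfo info, StreamingContext context)
{
    var collection = (Collection<MyInnerClass>)info.GetValue(
        _collectionOfObjKey, typeof(Collection<MyInnerClass>));

    // Re-wrap on this side; the binder then only has to map the
    // collection and element types, which it already does.
    CollectionOfInnerObj = new Lazy<Collection<MyInnerClass>>(() => collection);
}
```

The trade-off is that serialization forces evaluation of the Lazy, which may or may not be acceptable for your use case.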
|
Not able to deserialize Lazy object
|
C#
|
I encountered some interesting behavior in the interaction between Nullable and implicit conversions. I found that providing an implicit conversion to a reference type from a value type permits the Nullable type to be passed to a function requiring the reference type, when I instead expect a compilation error. The code below demonstrates this:

If the conversion code is removed from Cat, then you get the expected errors:

Error 3: The best overloaded method match for 'ConsoleApplication2.Program.PrintCatAge(ConsoleApplication2.Program.Cat)' has some invalid arguments
Error 4: Argument 1: cannot convert from 'int?' to 'ConsoleApplication2.Program.Cat'

If you open the executable with ILSpy, the generated code is as follows. In a similar experiment I removed the conversion and added an overload of PrintCatAge that takes an int (not nullable) to see if the compiler would perform a similar operation, but it does not.

I understand what is happening, but I don't understand the justification for it. This behavior is unexpected to me and seems odd. I did not have any success finding any reference to this behavior on MSDN in the documentation for conversions or Nullable<T>.

The question I pose, then, is: is this intentional, and is there an explanation why this is happening?
|
static void Main(string[] args)
{
    PrintCatAge(new Cat(13));
    PrintCatAge(12);
    int? cat = null;
    PrintCatAge(cat);
}

private static void PrintCatAge(Cat cat)
{
    if (cat == null)
        System.Console.WriteLine("What cat?");
    else
        System.Console.WriteLine("The cat's age is {0} years", cat.Age);
}

class Cat
{
    public int Age { get; set; }

    public Cat(int age)
    {
        Age = age;
    }

    public static implicit operator Cat(int i)
    {
        System.Console.WriteLine("Implicit conversion from " + i);
        return new Cat(i);
    }
}

The cat's age is 13 years
Implicit conversion from 12
The cat's age is 12 years
What cat?

int? num = null;
Program.PrintCatAge(num.HasValue ? num.GetValueOrDefault() : null);
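What the compiler appears to be doing is applying the user-defined conversion in "lifted" form: int? converts to Cat by converting the wrapped int when a value is present and producing a null reference otherwise. Hand-expanding that shape makes the ILSpy output less surprising (illustrative only; Cat is the class from the question):

```csharp
int? maybe = null;

// The compiler-generated shape of PrintCatAge(maybe):
Cat cat = maybe.HasValue
    ? (Cat)maybe.GetValueOrDefault()   // user-defined implicit operator runs here
    : (Cat)null;                       // no value: null reference, operator never runs
```

This also suggests why the int overload experiment behaves differently: there is no "null goes to null" escape hatch when the target is a non-nullable value type, so no lifted form exists for it.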
|
What is the justification for this Nullable < T > behavior with implicit conversion operators
|
C#
|
In .NET Core and .NET Framework 4.x the following code works as expected: However, in netstandard, the Name property on Group is gone. I'm wondering if there is a new way of achieving the same thing, or if this is a bug.

Edit: I first thought this was a netstandard 2.0 issue, but it looks like the property is missing from all netstandard versions.

Workaround for now: .Where(grp => ((string)((dynamic)grp).Name).StartsWith("val")), which is obviously less than ideal.
|
var match = Regex.Match(src, pattern)
    .Groups
    .Cast<Group>()
    .Where(grp => grp.Name.StartsWith("val"));
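Until Group.Name is available, one workaround is to take the names from the Regex instance instead (Regex.GetGroupNames is present in netstandard) and index into the match's Groups collection by name. A sketch with an illustrative pattern and input:

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

class GroupNameWorkaround
{
    static void Main()
    {
        string src = "abc123";                        // illustrative input
        string pattern = @"(?<valLetters>[a-z]+)(?<valDigits>\d+)";

        var regex = new Regex(pattern);
        var match = regex.Match(src);

        // Group names come from the Regex, not from each Group instance.
        var values = regex.GetGroupNames()
            .Where(name => name.StartsWith("val"))
            .Select(name => match.Groups[name].Value);

        foreach (var v in values)
            Console.WriteLine(v);                     // abc, then 123
    }
}
```

This avoids the dynamic cast, at the cost of needing the Regex object in scope alongside the match.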
|
netstandard - Regular Expression Group Name inaccessible
|
C#
|
Take the following code: using these types: Compiled with the C# 5 compiler against .NET 4.5.1 (the behaviour is probably the same using older compiler/framework versions), this generates the following error: Now, I have a pretty good idea what is happening under the covers (I blogged about it here), but I can't come up with a satisfying answer as to why.
|
ICanQuack quack = new Duck();
var map = (object)"a map";
quack.Fly((dynamic)map);

public interface ICanFly
{
    void Fly<T>(T map);
}

public interface ICanQuack : ICanFly
{
    void Quack();
}

public class Duck : ICanQuack
{
    public void Fly<T>(T map)
    {
        Console.WriteLine("Flying using a {0} map ({1})", typeof(T).Name, map);
    }

    public void Quack()
    {
        Console.WriteLine("Quack Quack!");
    }
}
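For anyone hitting the same error: when only an argument is dynamic, the runtime binder re-resolves the call against the receiver's static type (here ICanQuack), and it does not appear to consider members inherited from base interfaces. Two workarounds that follow from that reading (both sketches against the code above):

```csharp
// 1. Re-type the receiver as the interface that actually declares Fly<T>:
ICanFly flyer = quack;
flyer.Fly((dynamic)map);

// 2. Or make the receiver dynamic too, so binding happens against the
//    runtime type (Duck), where Fly<T> is a declared public method:
((dynamic)quack).Fly(map);
```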
|
C# dynamic fails invoking method from a base interface
|
C#
|
I'm writing a Tetris clone and I'm prototyping in C#. The final code is supposed to run on an embedded system (using an 8-bit CPU and very little RAM), so I'm trying to use a naïve algorithm to do line clears. Right now, my playfield is a 2D array: (where TetrominoType is an enum to indicate either None or one of the 7 types, used for coloring blocks). When a line is cleared, I want to modify this array in place, which is where my problem is. Take this example: I have already determined that lines 5, 7 and 8 need to be removed, and thus the other lines should fall down, leaving me with the state on the right. My naïve way is to iterate backwards and copy the line above a cleared one, basically: The problem here is that the line above might also be cleared (e.g., if iy == 8 then I don't want to copy line 7 but line 6), and also that I need to clear the copied line (iy-1), or copy the line above that one, which in turn needs to trickle upward. I tried counting how many lines I already skipped, but that only works if I create a new array and then swap them out; I can't get the math to work for in-place modification of the playfield array. It's possibly really simple, but I'm just not seeing the algorithm. Does anyone have some insight into how I can do this?
|
private readonly TetrominoType[][] _playfield;

  Before         After
0 #      #     #      #
1 #      #     #      #
2 #      #     #      #
3 #      #     #      #
4 #      #     #      #
5 #xxxxxx#     #      #
6 #x  xx #     #      #
7 #xxxxxx#     #      #
8 #xxxxxx#     #x  xx #
9 #x xxxx#     #x xxxx#
  ########     ########

for (int iy = 9; iy >= 0; iy--)
{
    if (_linesToClear.Contains(iy))
    {
        for (int ix = 0; ix < 6; ix++)
        {
            _playfield[iy][ix] = _playfield[iy - 1][ix];
        }
    }
}
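One way to make the in-place version work is the classic two-pointer compaction: walk upward from the bottom with a read index that skips cleared lines and a write index that only advances when a line was kept, then blank whatever remains at the top. Single pass, in place, O(1) extra memory, which suits the embedded target. A sketch against the structures above (assumes _linesToClear is a set-like collection of row indices, as in the question):

```csharp
// Compact kept rows downward, bottom-up, then clear the freed rows at the top.
int writeRow = 9;
for (int readRow = 9; readRow >= 0; readRow--)
{
    if (_linesToClear.Contains(readRow))
        continue;                          // cleared line: don't copy, don't advance write

    if (writeRow != readRow)
        Array.Copy(_playfield[readRow], _playfield[writeRow], 6);
    writeRow--;
}

// Everything above the last written row is now empty space.
for (; writeRow >= 0; writeRow--)
    Array.Clear(_playfield[writeRow], 0, 6);
```

On the 8-bit target, Array.Copy/Array.Clear would become small memcpy/memset-style loops, but the read/write pointer scheme carries over unchanged.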
|
Naive Gravity for Tetris game using 2D Array for the playfield
|
C#
|
In C++ the compiler knows about primitive data types such as int, but in C# these are basically structures (e.g. System.Int32). But can I assume that C# knows about these types? I think that it does, because an int literal in C# is basically an instance of System.Int32. For example, this will work: Output:
|
Console.WriteLine((12345).GetType());

System.Int32
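The C# keyword int is indeed part of the language: it is a reserved word that the specification defines as an alias for System.Int32, so the two names denote the same type rather than being a conversion pair. A quick illustrative check:

```csharp
using System;

class AliasDemo
{
    static void Main()
    {
        // 'int' and 'System.Int32' are the identical type.
        Console.WriteLine(typeof(int) == typeof(Int32));   // True

        int x = 5;
        Int32 y = x;   // no cast needed; same type on both sides
        Console.WriteLine(x + y);                          // 10
    }
}
```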
|
Are C# primitive data types part of the language?
|
C#
|
I have a Windows application that is connecting to a WCF Data Service hosted on the same machine. The first thing that occurs when the application starts is a query that returns 0 to 3 results. Here's the code: The very next thing I do is check if (environments.Count() == 0), which takes about 10 seconds to evaluate. It seems to be slowest the first time, but always takes more than 6 seconds. However, if I'm running Fiddler, I always get the results back immediately. Why does running Fiddler make it faster?
|
var environments = ctx.Environments
    .AddQueryOption("$filter", "Environment eq '" + ConfigurationManager.AppSettings["environment"] + "'")
    .AddQueryOption("$expand", "Departments,SecurityGroups");
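A pattern that matches these symptoms is proxy auto-detection: while Fiddler runs it registers itself as the system proxy, so .NET has a known proxy immediately; without it, each new connection can spend seconds probing for one. One thing worth trying (a sketch, to run once at startup before the first query) is disabling the default proxy:

```csharp
using System.Net;

// Skip proxy auto-detection for outgoing HTTP requests.
WebRequest.DefaultWebProxy = null;
```

The same effect can be had declaratively with `<system.net><defaultProxy enabled="false" /></system.net>` in app.config, which avoids touching code.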
|
Why is WCF Data Service performing better while Fiddler is running?
|
C#
|
I am testing my ASP.NET Core 2.2 web API with Postman. I write the JSON manually like this (HTTP PATCH): Now I am wondering how I can build the patch body on the client side. My question is: how can I get the equivalent of this code in JSON, so it looks like the one I write manually? I guess it's all about serialization; any idea how I can do it?

-- Update --

I found this: but it combines all sets into one, rather than grabbing only the last set (my bad, thanks to @Simon Mourier for pointing out my mistake!)
|
{
    "query": "{\"name\":\"foo\"}",
    "update": [
        "{\"$set\":{\"name\":\"foo2\"}}",
        "{\"$set\":{\"path\":\"foo2 path\"}}"
    ]
}

var query = Builders<T>.Filter.Eq(e => e.name, "foo");
var updates = Builders<T>.Update.Set(e => e.name, "foo2").Set(e => e.Path, "foo2 path");

var serializerRegistry = BsonSerializer.SerializerRegistry;
var documentSerializer = serializerRegistry.GetSerializer<T>();
var upList = updates.Render(documentSerializer, serializerRegistry);
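Since a chained builder renders to one combined $set document, one way to get the array of separate $set strings shown in the Postman body is to keep each UpdateDefinition separate and render them individually. A sketch, assuming the same Builders<T> setup as above:

```csharp
// Keep one UpdateDefinition per $set so each renders to its own document.
var updateDefs = new[]
{
    Builders<T>.Update.Set(e => e.name, "foo2"),
    Builders<T>.Update.Set(e => e.Path, "foo2 path")
};

var serializerRegistry = BsonSerializer.SerializerRegistry;
var documentSerializer = serializerRegistry.GetSerializer<T>();

// Each entry becomes a JSON string like {"$set":{"name":"foo2"}}.
var updateJson = updateDefs
    .Select(u => u.Render(documentSerializer, serializerRegistry).ToJson())
    .ToArray();
```

The resulting strings can then be placed into the "update" array of the PATCH body alongside the rendered filter as "query".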
|
Mongodb Bson type to Json
|
C#
|
I have an interesting situation. When I run a query on a remote SQL Server in Microsoft SQL Server Management Studio it runs fast (12 sec), but when I run the same query in Entity Framework using DbContext.Database.SqlQuery<EntityType>(script) it takes 48 seconds. I tried setting set arithabort on. The setting got applied, but it didn't change the performance. I am not able to provide query execution plans because I only have limited permissions on the SQL Server, but I can say 100% that this is not a query issue.

Consider this query: The @t variable contains about 35k rows. The execution times are pretty similar in EF and in SSMS. But when I remove the top 1, strange things begin to happen: in SSMS I get 10 sec, but in EF about 40 sec. I guess this little experiment can rule out the possibility of SQL Server choosing the wrong execution plan and slowing things down.

Another point of interest would be entity materialization done by EF. I think this also is not a bottleneck, because when I run a similar query with a similar-size result set on a local SQL Express, I get the results almost instantly in both cases.

So my next guess is network issues. I installed Microsoft Network Monitor 3.4 and monitored network traffic for both SSMS and EF. The interesting thing I found is that, for some reason, there are many packets of smaller size and also some TLS packets in the EF version. In the SSMS version the packet size is more stable and there are no TLS packets.

So the question is: is it possible to speed up the EF version? What are those TLS packets, and is it possible to get rid of them?

Update: Entity Framework v6.1.3, .NET v4.5.1, SQL Server v10.50.2550.0, Local SQLExpress v12.0.4213.0, Windows 7 Pro.

Update: This code yields time-wise the same results.
|
declare @t table ( ... )

insert into @t
select <long query>

select top 1 * from @t

using (var connection = new SqlConnection(DbContext.Database.Connection.ConnectionString))
using (var cmd = new SqlCommand(script, connection))
{
    connection.Open();
    cmd.CommandType = CommandType.Text;
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        reader.Read();
        do
        {
        } while (reader.Read());
    }
}
|
Entity Framework data reading performance
|
C#
|
Given this XML... and this C# code: I know the XDocument is not empty and contains the right XML. I also implemented some ScottGu code (http://weblogs.asp.net/scottgu/archive/2007/08/07/using-linq-to-xml-and-how-to-build-a-custom-rss-feed-reader-with-it.aspx) as a sanity check, and it works exactly as expected.
|
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>public.rpmware.com</Name>
  <Prefix></Prefix>
  <Marker></Marker>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>0.dir</Key>
    <LastModified>2008-06-25T16:09:49.000Z</LastModified>
    <ETag>"0ba2a466f9dfe225d7ae85277a99a976"</ETag>
    <Size>16</Size>
    <Owner>
      <ID>1234</ID>
      <DisplayName>kyle</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Contents>
  <!-- repeat similar 100x -->
</ListBucketResult>

XDocument doc = XDocument.Load(xmlReader);
var contents = from content in doc.Descendants("Contents")
               select new
               {
                   Key = content.Element("Key").Value,
                   ETag = content.Element("ETag").Value
               };

foreach (var content in contents)
{
    Console.WriteLine(content.Key);
    Console.WriteLine(content.ETag);
}

XDocument doc2 = XDocument.Load(@"http://weblogs.asp.net/scottgu/rss.aspx");
var posts = from items in doc2.Descendants("item")
            select new { Title = items.Element("title").Value };

foreach (var post in posts)
{
    Console.WriteLine(post.Title);
}
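The likely difference between the two feeds is the default namespace: the S3 document declares xmlns="http://s3.amazonaws.com/doc/2006-03-01/", so every element's XName is namespace-qualified and Descendants("Contents") (with no namespace) matches nothing, whereas ScottGu's RSS items carry no default namespace. A sketch of the namespace-qualified query:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class S3Query
{
    static void Run(XDocument doc)
    {
        // Qualify every name with the document's default namespace.
        XNamespace ns = "http://s3.amazonaws.com/doc/2006-03-01/";

        var contents = from content in doc.Descendants(ns + "Contents")
                       select new
                       {
                           Key = content.Element(ns + "Key").Value,
                           ETag = content.Element(ns + "ETag").Value
                       };

        foreach (var content in contents)
        {
            Console.WriteLine(content.Key);
            Console.WriteLine(content.ETag);
        }
    }
}
```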
|
Why isn't this LINQ to XML query working (Amazon S3)?
|
C#
|
I have a project where I am extensively using the generic C# dictionary. I require composite keys, so I've been using tuples as keys. At some point I wondered whether it would be beneficial to use a custom class which caches the hash code: I used new instead of override because I thought it would not make a difference for this test, since I was defining the dictionary using the concrete type: I noticed, however, that my custom GetHashCode method is not called at all. When I changed new to override, it got called as expected. Can somebody explain why the hidden GetHashCode method is not called? I would expect this behavior if I defined the dictionary like this: but not if I specify the CompositeKey type explicitly, as in my example.

P.S. I know that hiding the GetHashCode method is probably not a good idea.
|
public class CompositeKey<T1, T2> : Tuple<T1, T2>
{
    private readonly int _hashCode;

    public CompositeKey(T1 t1, T2 t2)
        : base(t1, t2)
    {
        _hashCode = base.GetHashCode();
    }

    public new int GetHashCode()
    {
        return _hashCode;
    }
}

var dict = new Dictionary<CompositeKey<string, int>, int>();

var dict = new Dictionary<Tuple<string, int>, int>();
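The dictionary never sees the compile-time type of the key at the hashing call site: it hashes through EqualityComparer<TKey>.Default, whose code is compiled against object.GetHashCode (and IEquatable<T>), so only a virtual override participates; a new method on the concrete type is invisible from there. The cached-hash idea still works once the method is overridden. A sketch:

```csharp
using System;

public class CompositeKey<T1, T2> : Tuple<T1, T2>
{
    private readonly int _hashCode;

    public CompositeKey(T1 t1, T2 t2) : base(t1, t2)
    {
        // Tuple's hash depends only on the (immutable) components,
        // so computing it once in the constructor is safe.
        _hashCode = base.GetHashCode();
    }

    // 'override', not 'new': virtual dispatch from EqualityComparer<T>.Default
    // (and any object-typed call site) now reaches the cached value.
    public override int GetHashCode()
    {
        return _hashCode;
    }
}
```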
|
Why does the C# dictionary not call the hidden GetHashCode method?
|