diff --git "a/stack_exchange/SE/SE 2016.csv" "b/stack_exchange/SE/SE 2016.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2016.csv" @@ -0,0 +1,144146 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense,, +306303,1,306306,,1/1/2016 5:13,,-7,1000,"

I would like to build an application that will heavily rely on client-side development (JavaScript), and I want it to communicate with the server. Is there something similar to SignalR that lets me keep communicating with the server to get new updates, if any?

+ +

Something similar to LinkedIn, where, when there are new updates, it shows you that 1 or 2 updates are available, so you can click to see them.

+ +

I know I can use setTimeout and keep polling the server, but is that the right way?

+",208578,,,,,42370.30764,Technology like SignalR but client side,,2,0,0,42372.95069,,CC BY-SA 3.0,, +306314,1,306316,,1/1/2016 14:32,,28,6866,"

I'm going to save some string payload in the database. I have two global configurations:

+ + + +

Each can be enabled or disabled independently via configuration: only one of them may be enabled, both, or neither.

+ +

My current implementation is this:

+ +
if (encryptionEnable && !compressEnable) {
+    data = encrypt(data);
+} else if (!encryptionEnable && compressEnable) {
+    data = compress(data);
+} else if (encryptionEnable && compressEnable) {
+    data = encrypt(compress(data));
+} // else: leave data unchanged
+
+ +

I'm thinking about the Decorator pattern. Is it the right choice, or is there perhaps a better alternative?
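For illustration, the flag combinations above collapse into one composable pipeline; this is a minimal sketch (in Python for brevity, and compress/encrypt are illustrative stand-ins, not the real implementations):

```python
def compress(data):
    # stand-in for the real compressor
    return b'compressed(' + data + b')'

def encrypt(data):
    # stand-in for the real encryptor
    return b'encrypted(' + data + b')'

def build_pipeline(encryption_enabled, compression_enabled):
    # Collect the enabled steps once; the order is fixed as
    # compress-then-encrypt, matching the original branch logic.
    steps = []
    if compression_enabled:
        steps.append(compress)
    if encryption_enabled:
        steps.append(encrypt)

    def process(data):
        for step in steps:
            data = step(data)
        return data

    return process

pipeline = build_pipeline(encryption_enabled=True, compression_enabled=True)
print(pipeline(b'payload'))  # b'encrypted(compressed(payload))'
```

Adding a third flag then means appending one more step rather than doubling the number of branches.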

+",208350,,134647,,42379.90972,42379.90972,Is there a design pattern to remove the need to check for flags?,,6,7,4,,,CC BY-SA 3.0,, +306321,1,306323,,1/1/2016 17:40,,4,456,"

I am reading GoF, where the intent of Builder is stated as separating the construction of a complex object from its representation.

+ +

I couldn't understand what representation means in this context. What does it mean?

+ +

The text parsing example provided in the motivation section doesn't seem to separate construction from representation, but rather to separate the algorithm for interpreting the textual format from the creation and representation of the converted format.

+ +

+",204027,,,,,42370.75903,Builder Pattern : Separation of construction from representation,,1,0,,,,CC BY-SA 3.0,, +306325,1,,,1/1/2016 18:30,,3,676,"

I'm making a framework in Java and have a method called processAction in my abstract class BasicPhase (snippet below). It takes two abstract types as parameters, since the user is expected to implement them in child classes. However, this requires the user to downcast every time they override the method. How can this be avoided?

+ +

BasicPhase.java

+ +
public abstract class BasicPhase {
+    ...
+    public abstract BasicGameState processAction(BasicAction action, BasicGameState state);
+    ...
+}
+
+ +

Example User Class

+ +

MyPhase.java

+ +
public class MyPhase extends BasicPhase {
+    @Override
+    public BasicGameState processAction(BasicAction basicAction, BasicGameState basicState)  {
+        MyGameState state = (MyGameState) basicState; //undesired
+        MyAction action = (MyAction) basicAction; //undesired
+
+        //Game Logic Here
+    }
+}
+
+",185305,,,,,42370.86667,Automatic Downcasting,,2,2,,,,CC BY-SA 3.0,, +306330,1,,,1/1/2016 22:50,,11,1357,"

I watched Raymond Hettinger's PyCon talk ""Super Considered Super"" and learned a little bit about Python's MRO (Method Resolution Order), which linearises a class's ""parent"" classes in a deterministic way. We can use this to our advantage, as in the code below, to do dependency injection. So now, naturally, I want to use super for everything!

+ +

In the example below, the User class declares its dependencies by inheriting from both LoggingService and UserService. This isn't particularly special. The interesting part is that we can also use the Method Resolution Order to mock out dependencies during unit testing. The code below creates a MockUserService which inherits from UserService and provides an implementation of the methods we want to mock; in this example, validate_credentials. In order to have MockUserService handle any calls to validate_credentials we need to position it before UserService in the MRO. This is done by creating a wrapper class around User called MockUser and having it inherit from User and MockUserService.

+ +

Now, when we call MockUser.authenticate and it, in turn, calls super().validate_credentials(), MockUserService comes before UserService in the Method Resolution Order, and since it offers a concrete implementation of validate_credentials, that implementation will be used. Yay - we've successfully mocked out UserService in our unit tests. Consider that UserService might make some expensive network or database calls - we've just removed that latency factor. There is also no risk of UserService touching live/prod data.

+ +
class LoggingService(object):
+    """"""
+    Just a contrived logging class for demonstration purposes
+    """"""
+    def log_error(self, error):
+        pass
+
+
+class UserService(object):
+    """"""
+    Provide a method to authenticate the user by performing some expensive DB or network operation.
+    """"""
+    def validate_credentials(self, username, password):
+        print('> UserService::validate_credentials')
+        return username == 'iainjames88' and password == 'secret'
+
+
+class User(LoggingService, UserService):
+    """"""
+    A User model class for demonstration purposes. In production, this code authenticates user credentials by calling
+    super().validate_credentials and having the MRO resolve which class should handle this call.
+    """"""
+    def __init__(self, username, password):
+        self.username = username
+        self.password = password
+
+    def authenticate(self):
+        if super().validate_credentials(self.username, self.password):
+            return True
+        super().log_error('Incorrect username/password combination')
+        return False
+
+class MockUserService(UserService):
+    """"""
+    Provide an implementation for validate_credentials() method. Now, calls from super() stop here when part of MRO.
+    """"""
+    def validate_credentials(self, username, password):
+        print('> MockUserService::validate_credentials')
+        return True
+
+
+class MockUser(User, MockUserService):
+    """"""
+    A wrapper class around User to change its MRO so that MockUserService is injected before UserService.
+    """"""
+    pass
+
+if __name__ == '__main__':
+    # Normal usage of the User class, which uses UserService to resolve super().validate_credentials() calls.
+    user = User('iainjames88', 'secret')
+    print(user.authenticate())
+
+    # Use the wrapper class MockUser which positions the MockUserService before UserService in the MRO. Since the class
+    # MockUserService provides an implementation for validate_credentials() calls to super().validate_credentials() from
+    # MockUser class will be resolved by MockUserService and not passed to the next in line.
+    mock_user = MockUser('iainjames88', 'secret')
+    print(mock_user.authenticate())
+
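The mechanism can be seen directly by inspecting the linearization; this stripped-down sketch (minimal stand-ins for the classes above) shows why the mock wins:

```python
class UserService:
    def validate_credentials(self, username, password):
        return False  # the 'expensive' production path

class MockUserService(UserService):
    def validate_credentials(self, username, password):
        return True   # the cheap test double

class User(UserService):
    pass

class MockUser(User, MockUserService):
    pass

# C3 linearization places MockUserService before UserService,
# so attribute lookup (and super()) reaches the mock first.
print([c.__name__ for c in MockUser.__mro__])
# ['MockUser', 'User', 'MockUserService', 'UserService', 'object']
```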
+ +

This feels quite clever, but is it a good and valid use of Python's multiple inheritance and Method Resolution Order? When I think about inheritance the way I learned OOP in Java, this feels completely wrong, because we can't say User is a UserService or User is a LoggingService. Thinking that way, using inheritance the way the above code does doesn't make much sense. Or does it? If we use inheritance purely for code reuse, and not in terms of parent->child relationships, then it doesn't seem so bad.

+ +

Am I doing it wrong?

+",74679,,,,,43113.13542,Using Python's Method Resolution Order for Dependency Injection - is this bad?,,1,4,3,,,CC BY-SA 3.0,, +306332,1,,,1/1/2016 23:51,,0,347,"

Languages like Java support true method overloading:

+ +
class Overload {
+  void demo (int a) {
+   System.out.println (""a: "" + a);
+  }
+  void demo (int a, int b) {
+   System.out.println (""a and b: "" + a + "","" + b);
+  }
+}
+
+ +

Languages like PHP, however, do not. So when you face the problem of processing something depending on context, which approaches would you consider, and which would you usually choose?

+ +

A typical problem could be some abstract payment system, where the outside user should not think about concrete payment provider details (which could differ) and should just call Provider.pay(PayDetails). In the PaymentProvider interface we must specify that it works with PayDetails. But a concrete provider implementation might work, for example, only with its own ConcretePayDetails.

+ +

Is it worth polyfilling things such as ad-hoc polymorphism in languages that don't support it?

+ +

How is context-dependent method dispatch best solved in such languages?

+ +

Example of possible implementation available at https://github.com/garex/php-ad-hoc-polymorphism-polyfill/
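For comparison, Python also lacks built-in overloading; one standard-library pattern is dispatch on the argument's type via functools.singledispatch. This is an illustrative sketch using the payment example (PayDetails/ConcretePayDetails are the hypothetical names from the question, not a real library):

```python
from functools import singledispatch

class PayDetails:
    pass

class ConcretePayDetails(PayDetails):
    pass

@singledispatch
def pay(details):
    # default: no provider registered for this details type
    raise TypeError('unsupported details: %s' % type(details).__name__)

@pay.register
def _(details: ConcretePayDetails):
    # dispatched here whenever details is a ConcretePayDetails
    return 'paid via concrete provider'

print(pay(ConcretePayDetails()))  # paid via concrete provider
```

Each provider registers its own overload, so callers keep the single pay(details) entry point.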

+",29028,,,,,42372.52292,How to simulate method overloading in langs without such feature?,,2,3,0,42382.55625,,CC BY-SA 3.0,, +306336,1,,,1/2/2016 0:59,,9,706,"

Given that Python allows for multiple inheritance, what does idiomatic inheritance in Python look like?

+ +

In languages with single inheritance, like Java, inheritance would be used when you could say that one object ""is-a"" kind of another object and you want to share code between them (from the parent object to the child object). For example, you could say that Dog is an Animal:

+ +
public class Animal {...}
+public class Dog extends Animal {...}
+
+ +

But since Python supports multiple inheritance we can create an object by composing many other objects together. Consider the example below:

+ +
class UserService(object):
+    def validate_credentials(self, username, password):
+        # validate the user credentials are correct
+        pass
+
+
+class LoggingService(object):
+    def log_error(self, error):
+        # log an error
+        pass
+
+
+class User(UserService, LoggingService):
+    def __init__(self, username, password):
+        self.username = username
+        self.password = password
+
+    def authenticate(self):
+        if not super().validate_credentials(self.username, self.password):
+            super().log_error('Invalid credentials supplied')
+            return False
+        return True
+
+ +

Is this an acceptable or good use of multiple inheritance in Python? Instead of saying inheritance is when one object ""is-a"" of another object, we create a User model composed of UserService and LoggingService.

+ +

All the logic for database or network operations can be kept separate from the User model by putting them in the UserService object and keep all the logic for logging in the LoggingService.

+ +

Some problems I see with this approach are:

+ +
  • Does this create a God object? Since User inherits from, or is composed of, UserService and LoggingService, is it really following the single responsibility principle?
  • In order to access methods on a parent/next-in-line object (e.g., UserService.validate_credentials) we have to use super. This makes it a little more difficult to see which object is going to handle the method, and isn't as clear as, say, instantiating UserService and doing something like self.user_service.validate_credentials.
+ +

What would be the Pythonic way to implement the above code?
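One composition-based alternative (an illustrative sketch; the default-construction of the services is just for brevity) would inject the services instead of inheriting from them, so User has-a service rather than is-a service:

```python
class UserService:
    def validate_credentials(self, username, password):
        # toy check standing in for a real DB/network lookup
        return username == 'alice' and password == 'secret'

class LoggingService:
    def log_error(self, error):
        pass

class User:
    def __init__(self, username, password,
                 user_service=None, logging_service=None):
        self.username = username
        self.password = password
        # injected dependencies, with defaults for convenience;
        # tests can pass in mocks explicitly
        self.user_service = user_service or UserService()
        self.logging_service = logging_service or LoggingService()

    def authenticate(self):
        if self.user_service.validate_credentials(self.username, self.password):
            return True
        self.logging_service.log_error('Invalid credentials supplied')
        return False

print(User('alice', 'secret').authenticate())  # True
```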

+",74679,,,,,42513.88125,"Is Python's inheritance an ""is-a"" style of inheritance or a compositional style?",,2,0,,,,CC BY-SA 3.0,, +306354,1,306357,,1/2/2016 11:41,,4,218,"

I am reading ""Less is exponentially more"", and there is a list of advantages of Go, the first of which is, quote, ""regular syntax (don't need a symbol table to parse)"".

+ +

What does ""regular syntax"" mean? What properties determine whether ""language X does (not) have a regular syntax""?

+",66354,,33920,,42372.32292,42372.32292,Regular syntax -- what does it mean?,,2,2,,,,CC BY-SA 3.0,, +306360,1,,,1/2/2016 14:52,,5,387,"

Suppose I were to modify a given piece of code and put it back on GitHub; would I have to update the license header?

+ +

This is the code (licensed under the Apache License, Version 2.0): https://github.com/MasterAwesome/android_device_oneplus_onyx/commits/master/liblight/lights.c

+ +

The commit ""Use liblights-caf"" is from Google's repository. The next few commits I have made modify the source code for my device. Should I add my copyright notice in the header too?

+",209547,,,user53019,42371.64167,42371.64444,Do I update the copyright header when i modify the source?,,1,0,1,,,CC BY-SA 3.0,, +306366,1,,,1/2/2016 16:54,,3,259,"

So, I'm still working on my small GUI ""library"" (C++ with Boost). The goal is to provide a simple solution for small SPI displays, using a Raspberry Pi or some embedded board.

+ +

Thus, I ended up having a Widget class from which concrete widgets are derived; examples would be a text box or an image frame. Widgets can contain other widgets to allow groups. This is achieved via aggregation using an STL container. In the end, the complete GUI data set can be seen as a kind of tree, with some nodes having child nodes.

+ +

Each widget shall have a draw-method which looks like

+ +
    void drawMe(brush *myBrush);
+
+ +

A brush provides primitive drawing methods like ""setPixel"" or ""drawRectangle"". This allows the widget to draw itself, using the methods provided by the brush. A widget first draws itself, then it invokes the drawMe method of each child widget, if any.

+ +

Now let's talk about my problem. Each widget has its coordinates relative to its parent. This is not enough: the brush needs the absolute coordinates of the widget, because a brush does not know anything about children or parents. A brush just provides methods like setPixel(int x, int y) and draws into a pixel buffer.

+ +

But I somehow have a bad feeling about letting a widget know its absolute coordinates. This feels wrong. On the other hand, at the moment a child does not know it is owned by a parent; thus, there's no way a widget can compute its absolute coordinates.
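One common resolution, sketched in Python for brevity (the names are illustrative, not from any framework): the parent hands each child a brush whose origin is translated by the child's relative position, so widgets only ever draw in their own local coordinates.

```python
class Brush:
    def __init__(self, buffer, ox=0, oy=0):
        self.buffer = buffer  # shared pixel 'buffer' (a set of coordinates here)
        self.ox, self.oy = ox, oy

    def set_pixel(self, x, y):
        # translate local coordinates into buffer coordinates
        self.buffer.add((self.ox + x, self.oy + y))

    def translated(self, dx, dy):
        # same buffer, shifted origin
        return Brush(self.buffer, self.ox + dx, self.oy + dy)

class Widget:
    def __init__(self, x, y):
        self.x, self.y = x, y  # position relative to the parent
        self.children = []

    def draw(self, brush):
        brush.set_pixel(0, 0)  # draw something at the widget's own origin
        for child in self.children:
            child.draw(brush.translated(child.x, child.y))

buf = set()
root = Widget(0, 0)
root.children.append(Widget(5, 7))
root.draw(Brush(buf))
print(sorted(buf))  # [(0, 0), (5, 7)]
```

Neither widgets nor the buffer ever see absolute coordinates directly; only the transient brush carries the accumulated offset.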

+ +

I could implement a workaround in the brush, so that the brush supports multiple layers instead of just one canvas (the pixel buffer). A parent would then have to set up the layers for its children. That sounds like a dirty hack.

+ +

Also, is it a good idea to let the parent know its children's coordinates?

+ +

You see, I'm somewhat lost with these issues. There are GUI frameworks out there, like Qt, so my problem should already be known and solved. Do you have any suggestions? How would you implement the drawing process?

+ +

The whole project is about learning new stuff, mainly OOP paradigms. I'd like to hear your opinion on that.

+",207591,,,,,43134.18958,Managing widgets in a simple GUI framework,,2,3,,,,CC BY-SA 3.0,, +306376,1,306378,,1/3/2016 4:03,,6,13375,"

I'm new to REST APIs, so I decided to get familiar with them by designing a small web service API. I have its design written down and would like you to review it. I feel like I may have made some mistakes in designing it and in understanding REST concepts, which I try to address with questions on my design at the very end. I'm mostly unsure about my use of URLs for the API.

+

The web service I'm trying to design provides a way for a user to store key=value pairs, i.e. retrieve a value knowing its key, update the value of an existing key, and delete an existing key. The whole CRUD set of actions!

+

Here is what I have came up with.

+

Versioning

+

Since my API might evolve, I want a notion of API versions, so I have

+
GET api.example.com/supported-versions
+
+

This returns a JSON list of integers denoting the supported API versions.

+

The API will be available at api.example.com/{VERSION}/ endpoint, e.g. api.example.com/1/ for the first version.

+

Key=value storage

+
GET api.example.com/1/keys/{KEY}
+
+

Allows a user to get a value associated with a key KEY. The server responds with a status code (200, 404, etc.) and a text value on success, both encoded in JSON.

+
+
POST and PUT api.example.com/1/keys/{KEY}?value={VALUE}
+
+

Allows a user to create/update the value associated with the key KEY. The server returns a status code (200, 404, etc.). VALUE is a text string.

+
DELETE api.example.com/1/keys/{KEY}
+
+

Allows a user to delete an existing key=value pair with key KEY. The server will delete the key=value pair and return a status code (200, 404, etc.).

+
+

Let's imagine that there is an OAuth2 authentication set up, which is used for the POST/PUT/DELETE methods above (but not GET; anyone can GET, no authentication needed for that), so there is a way to uniquely identify a user and keep track of which key=value pairs belong to which user.

+

Now, I want an authenticated user to be able to get a list of all keys they have, so that they don't have to store this information on the client side in order to send DELETE or PUT requests later.

+
GET api.example.com/1/keys?page={PAGE}
+
+

This allows a user to get a list of all existing keys they have created. PAGE is an optional parameter; the user can increase the page count until they get an error status code. It returns a paged list of all existing keys the user has created, plus a status code (200, 404, etc.).
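The CRUD semantics sketched above can be modeled with a toy in-memory store (no framework, illustrative only; note that returning 201 on creation and 200 on update is a common convention, not something the design above mandates):

```python
store = {}

def put_key(key, value):
    # 201 when the key is created, 200 when an existing value is replaced
    created = key not in store
    store[key] = value
    return 201 if created else 200

def get_key(key):
    # (status, value); 404 with no value for a missing key
    return (200, store[key]) if key in store else (404, None)

def delete_key(key):
    if key in store:
        del store[key]
        return 200
    return 404

print(put_key('color', 'blue'))  # 201
print(get_key('color'))          # (200, 'blue')
print(delete_key('color'))       # 200
print(get_key('color'))          # (404, None)
```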

+
+

Is there anything wrong with this API?

+

Should the first GET/POST/PUT/DELETE from Key=value storage section be on /keys/ or /values/?

+

Is it fine that the last "GET", the one which returns all keys a user has, is also /keys/?

+",209580,,-1,,43998.41736,42375.60208,How to correctly implement key=value storage REST API,,2,0,2,,,CC BY-SA 3.0,, +306377,1,306379,,1/3/2016 4:13,,10,12243,"

In response to Aaronaught's response to the question at:

+ +

Can't I just use all static methods?

+ +

Isn't less memory used for a static method? I am under the impression that each object instance carries around its own executable version of a non-static member function.

+ +

Regardless of how much overhead is involved in calling a static method, regardless of poor OO design, and possible headaches down the road, doesn't it use less memory at runtime?

+ +

Here is an example:

+ +

I make a vector of zero-initialized objects. Each object contains one piece of data (a triangle consisting of nine doubles). Each object is populated in sequence from data read from a .stl file. Only one static method is needed. Proper OO design dictates that a method dealing with the data directly should be distributed to each object. Here is the standard OO solution:

+ +
for (auto& obj : vec) {
+  obj.readFromFile(fileName);
+}
+
+ +

Each obj carries readFromFile's compiled code alongside the data!

+ +

Memory is more of a concern than performance in this case, and there is a LOT of data on a constrained system.

+ +

Solutions:

+ +
  1. Namespace method (great for C++ but not possible in Java)
  2. One static method in obj's class. Executable code is kept in one place at runtime. There is a small overhead to call the method.
  3. A parent class from which obj is derived, which contains the private method readFromFile. Call with super.callPrivateMethod() which calls readFromFile. Messy, and still some memory overhead in each object.
  4. Implement readFromFile outside obj's scope, so in vec's class or in the calling class. This, in my opinion, breaks data encapsulation.
+ +

I realize for large amounts of data one explicit object for each triangle is not the best approach. This is only an example.

+",209593,,-1,,42837.31319,43281.29306,Does making a method static save memory on a class you'll have many instances of?,,5,10,4,,,CC BY-SA 3.0,, +306380,1,,,1/3/2016 7:04,,2,1133,"

I am working on a legacy code base and I need to find a way to write unit tests for this project.

+ +

The project has a three-layer architecture (UI-Biz-DAL, as we call them); the DAL is implemented entirely using ADO.NET and typed DataSets, and it is full of SQL scripts.

+ +

Our Biz classes have methods that are responsible for business logic, and they depend on other helper classes and DAL classes.

+ +

I know I can use DI to inject these classes into my Biz classes, but I think that would require changing a lot of code. Here's a solution that I can think of:

+ +

There's a TestContext class that acts as a container and can hold mock objects for tests, but it holds nothing when the actual code runs, so real objects are used instead. Here is an example:

+ +
var dal=TestContext.Current.Resolve<IMyDAL>(@default:new MyDal());
+
+ +

As you can see, the Resolve method accepts a default argument of type IMyDAL that will be used when we are not running our tests.

+ +

First, I would like to know what you think about this solution.

+ +

Second, I am still thinking about a way to test the SQL scripts that are hardcoded in our code base. How can I test them?

+",49472,,49472,,42372.29861,42372.48542,Having a TestContext to test methods instead of Dependency Injection,,2,4,,,,CC BY-SA 3.0,, +306392,1,,,1/3/2016 9:38,,3,719,"

Short version of the question: What is a proper way to implement object cloning with deep copy, using generally accepted OOP principles?

+ +
+ +

I ran into this while looking into the Prototype Design Pattern in the GoF Design Patterns book, but I think it applies to general object cloning.

+ +

Wouldn't each class have to properly implement its own instance method deep_copy, because each class has its own way of ""going through"" all its elements, such as left and right for a binary tree? Sometimes, an object A holding references to two other objects B and C means that A owns B and C, and therefore B and C should also be cloned; in other cases, such as a node object in a graph, A holding references to B and C just means it is pointing to B and C and does NOT own them (other nodes in the graph may also point to B and C).

+ +

One way to clone is to serialize and deserialize the object (which should be the same as data marshalling?), but it doesn't handle the case when the object doesn't own another object. And in the case of a node in a graph, can you serialize and deserialize it, and get back a cloned node that points to the proper nodes in the graph, as the original node object does?

+ +

Another complication may arise if object A has an instance variable foo holding a data structure that references object B twice; we really should not clone B twice. Or, if foo references it once, but another instance variable bar also references B, then again we should clone B only once. And if A doesn't own B, then we should not clone B at all.

+ +

But let's say we ignore the complication above:

+ +

Then, roughly speaking, every class in your application should implement its own deep_copy method, roughly like this:

+ +
# Pseudo code:
+
+class SomeClass
+  def deep_copy
+    new_object = self.clone()  # to have all the instance variables and 
+                               # methods cloned, but just a shallow copy, and 
+                               # also, all the inheritance, access to
+                               # class variables, methods, and inheritance 
+                               # hierarchy should be properly set up
+
+    for all objects that is referenced by my instance variables
+      if I own the object (by the design of my class), then
+        # rely on polymorphism to make a proper deep_copy of this object
+        new_object.this_instance_variable = self.this_instance_variable.deep_copy() 
+      end
+    end
+
+    return new_object
+  end
+end
+
+ +

Depending on whether the primitive types are objects or not, it may just say: if I own the object, but it is primitive, then don't clone it. Or, in case the primitive types (like Fixnum: 1, 2, 3) are objects too, as in Ruby, just let it clone them (because you don't want to do type checking to see whether something is clonable); then, in the self.clone line, an exception will be raised saying that this type is not for cloning, and in that case, just catch the exception and return the same object without cloning it (which is the base case of the recursion).
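For reference, Python's standard copy module already implements the protocol sketched above; this is an illustrative example (Node and its ownership convention are made up for the demonstration) showing how a class opts out of cloning references it does not own via __deepcopy__, and how the memo dict prevents cloning the same object twice:

```python
import copy

class Node:
    def __init__(self, payload, neighbor=None):
        self.payload = payload    # owned: should be deep-copied
        self.neighbor = neighbor  # not owned: shared, copied by reference

    def __deepcopy__(self, memo):
        # deep-copy only what we own; share the neighbor reference
        clone = Node(copy.deepcopy(self.payload, memo), self.neighbor)
        memo[id(self)] = clone  # prevents cloning this node twice
        return clone

shared = Node(['shared'])
a = Node(['a-data'], neighbor=shared)
b = copy.deepcopy(a)
print(b.payload is a.payload, b.neighbor is shared)  # False True
```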

+ +

But the key point is, using generally accepted OOP principles, every class in your app has to have deep_copy implemented, and its contract (the interface contract) is that it will indeed return a clone of ""myself"" together with deep copies of the objects that I own (recursively). And it may be difficult, because a lot of the time we define a class and don't really implement a deep_copy. If our app has 12 classes and we need a clone with deep copy, then we actually have to implement such a clone with deep copy for all 12 classes (or for all classes that may participate in the deep copy). Is the above correct, or are there corrections according to OOP principles?

+",5487,,5487,,42372.45,42372.6875,"What is a proper way to implement object cloning with deep copy, using generally accepted OOP principles?",,2,1,1,,,CC BY-SA 3.0,, +306395,1,,,1/3/2016 10:16,,4,395,"

I came from a highly functional and procedural background in programming, and never knew that a type is the same as an interface.

+ +

As in the Design Patterns book by GoF, it says:

+ +
+

A type is a name used to denote a particular interface. We speak of an object as having the type ""Window"" if it accepts all requests for the operations defined in the interface named ""Window."" An object may have many types, and widely different objects can share a type. (p. 13)

+
+ +

The surprising thing is, I had thought of a type as char (a character, or 1 byte), int (a word, 4 or 8 bytes), or a pointer to character (a string in the C language). Maybe even a struct with x and y as the coordinates of a point, or an array, but I never thought of a type as being ""an interface"".

+ +

So it looks like a Car object can be of the types Moveable and Soundable, a Dog object can be of the types Moveable and Soundable, and a Circle object may be of the type Moveable only, until we decide that a Shape object also needs to give out sound when a user clicks on it; then we let the Shape class implement the Soundable interface, and now a Circle object is also of the type Soundable?
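This type-as-interface view can be made concrete with a small duck-typing sketch (illustrative names): any object implementing make_sound satisfies an informal Soundable type, with no shared base class required.

```python
class Dog:
    def make_sound(self):
        return 'woof'

class Car:
    def make_sound(self):
        return 'vroom'

def play(soundable):
    # any object with make_sound() is 'of type Soundable' here
    return soundable.make_sound()

print(play(Dog()))  # woof
print(play(Car()))  # vroom
```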

+ +

I wonder when and how this happened. Was it first stated by the GoF book when it was published in 1994, or is it an existing idea that came from long before?

+ +

It actually sounds exactly like duck typing, but duck typing seems like a newer concept that began around 2003 in the Python and Ruby communities, not an idea that existed in 1994 or earlier.

+",5487,,5487,,42374.50556,42374.50556,"How and when did it happen that, a type is an interface?",,2,2,,,,CC BY-SA 3.0,, +306398,1,,,1/3/2016 10:38,,-2,111,"

Imagine that you first write a compiler for your language, where you necessarily report errors to the user. The compiler also collects location information for back-end tools, which must know where program elements are located. Later, when you are done with your compiler, you decide to provide IDE support as well; the editor is actually one more back-end tool. Having correct locations for program components helps a lot with syntax highlighting and error reporting. At this moment, you suddenly realize that the locations reported by the compiler are questionable.

+ +

It seems the EOL definition is more or less specified by the language, so you can report lines correctly -- there is always an agreement between compiler and editor. But what about the column? If the compiler reports an error for an identifier located at line:col, the editor may highlight something different, depending on its tab settings. It seems impossible to have an exact line:col location, however useful it would be, if the tab width is editor-specific. Nevertheless, I see that JavaCC provides a getLine method together with getBeginColumn. I wonder how it is implemented: how is it possible in principle to track the offset? How does the lexer match your editor's tab width?
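In principle a lexer can track a raw character offset (stable and editor-independent) and derive a visual column for any chosen tab width; this is an illustrative sketch, not how JavaCC actually does it:

```python
def visual_column(line, offset, tab_width=8):
    # Convert a raw character offset within a line into a visual
    # column, expanding tabs to the next multiple of tab_width.
    col = 0
    for ch in line[:offset]:
        if ch == '\t':
            col = (col // tab_width + 1) * tab_width
        else:
            col += 1
    return col

print(visual_column('\tx', 1))     # 8  (one tab at width 8)
print(visual_column('\tx', 1, 4))  # 4  (same offset, width 4)
```

Reporting the raw offset lets every editor compute the column it should highlight using its own tab setting.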

+",202598,,202598,,42372.67014,44067.84097,Tracking column (offset) in presence of tab characters,,2,3,,,,CC BY-SA 3.0,, +306399,1,,,1/3/2016 11:26,,3,2343,"

I'm planning a project that consists of the following parts:

+ +
  1. REST API in Lumen
  2. Web client in Laravel
  3. Product website in Jekyll
+ +

These separate products are going to be running on the same server.

+ +

I'd like to have a development environment that mimics the production environment as closely as possible, that's why I'm going with Vagrant in combination with a single Ansible playbook that provisions both environments.

+ +

What's the best way to organise such a project? Should I keep everything in one repository, or should I split up the parts into multiple repositories?

+ +

What about the development environment and provisioning scripts? Should I put these in a separate repository? How would I go about referencing the separate parts of the project in the development environment?

+ +

I'd like to be able to clone a single repository and spin up a development environment with as little work as possible.

+ +

Any recommendations?

+ +

Edit: My question differs from this question because I'm especially struggling with things related to production and development environments.

+",168722,,60357,,43024.28403,43024.28403,Should I use a single repo when multiple parts of the same project are running on the same server?,,2,3,2,42388.87431,,CC BY-SA 3.0,, +306402,1,,,1/3/2016 12:59,,3,1410,"

I thought a class is supposed to define, or give a blueprint of, the attributes and methods for an object, and an interface is to provide a set of methods as a contract for its clients (and so a class also provides an interface, because a class also defines a set of methods).

+ +

But say we have a class called Shape, and we add an interface to it called Moveable. Now we may have to add attributes to the class, such as velocity. But an interface is only about methods, so how should we think about adding attributes to the class because of an interface?
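One way to see the relationship, sketched here with Python's abc module (the Moveable/Shape names follow the question; the velocity value is arbitrary): the interface only promises behavior, and each implementing class privately introduces whatever attributes it needs to fulfill that promise.

```python
from abc import ABC, abstractmethod

class Moveable(ABC):
    @abstractmethod
    def move(self, dt):
        ...  # the interface promises only this method

class Shape(Moveable):
    def __init__(self):
        self.x = 0.0
        self.velocity = 2.0  # attribute added to support move(); not part of the interface

    def move(self, dt):
        self.x += self.velocity * dt
        return self.x

s = Shape()
print(s.move(1.0))  # 2.0
```

Another implementer of Moveable could satisfy move() with entirely different attributes, which is exactly why the interface itself stays silent about them.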

+",5487,,,,,42376.77569,"If Class is to define attributes and methods, and Interface is to define (a set of) methods, then how to think of interface needing new attributes?",,6,5,,,,CC BY-SA 3.0,, +306419,1,306420,,1/3/2016 20:23,,18,1714,"

I've seen that it's very common for a compiler to be written in the language it compiles. What is the benefit of this? It seems like it makes the process more difficult for outsiders (and, for a while, for the developers).

+ +

Take for instance:

+ + + +

It seems like it just makes things trickier for the developers (at least in the early stages) and for users of the compiler...

+",,user160274,,user160274,42373.02083,42373.02083,What is benefit that a compiler is implemented in the same language it compiles?,,3,4,1,42373.14583,,CC BY-SA 3.0,, +306423,1,,,1/3/2016 21:57,,1,473,"

My friend asked me for help with his task: ""Redeem a lattice parser with any programming language"" - the problem is that he can't clearly explain to me how the lattice parser should work. I've tried to google a bit, but all I've found are some academic documents, which don't help.

+ +

All I know at the moment that it is related/used very often with speech recognition - but in our case it will be only text-based for sure.

+ +

I'm looking for a clear explanation of how this parser works (I'm not asking for a solution in any programming language - I want to do it with my friend by ourselves).

+ +

I know that this lattice parser is somehow related to the Earley parser (link to wiki, and to something called an academic parser; I still don't know how that is supposed to help me understand this).
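For what it's worth, my current (possibly wrong) understanding is that a word lattice is a DAG whose edges carry competing word hypotheses, and that parsing a lattice means parsing every sentence some path through it spells out. A toy sketch in Python, with a made-up lattice:

```python
# Edges of a tiny hypothetical lattice: node -> [(word, next_node), ...]
lattice = {
    0: [("the", 1)],
    1: [("tall", 2), ("ball", 2)],  # two competing hypotheses
    2: [("man", 3)],
    3: [],                          # final node
}

def sentences(node=0):
    # Enumerate every sentence spelled out by a path through the lattice.
    if not lattice[node]:
        return [[]]
    return [[word] + rest
            for word, nxt in lattice[node]
            for rest in sentences(nxt)]
```

A lattice parser would then parse each of these candidate sentences (or, more cleverly, share work between them); if this reading of the term is wrong, corrections are welcome.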

+",136427,,1204,,42373.16528,42534.90069,What is a lattice parser?,,1,4,1,,,CC BY-SA 3.0,, +306425,1,306448,,1/3/2016 22:39,,6,114,"

Context: +I am developing a Visual Studio plugin that generates layer diagrams. +I want the tool to be able to produce an intermediate output, which is the data representation of what is being rendered in the diagram. The idea is that this serialized data can then be checked in together with the source code, which would allow developers to see more quickly how the architecture changes between commits. This would be done either through general diff tools (like git-diff) or a dedicated diff tool; however, I want the file format to be optimized for the former, because of their accessibility. The data would be represented as a tree structure of nodes, where each node has a name and a list of dependencies (other nodes, identified by their path + name).

+ +

Question: +When designing a file format, are there choices of data-interchange format or syntax, and of how that syntax is formatted (use of line breaks, for example), that would make different versions of the file easier or harder to compare using tools like git-diff? For example, ensuring that the tool sees a ""rename"" as a ""rename"" and not as a ""delete"" followed by an unrelated ""add"".
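To illustrate the kind of formatting decision I mean (node names invented), I have been considering a line-oriented serialization with one node per line and stable sorting, so that a change to one node shows up as exactly one changed line in git-diff:

```python
# Hypothetical node data: {node path: [dependency paths]}
def serialize(nodes):
    # One record per line, sorted by a stable key, so a change to one
    # node shows up in a diff as exactly one changed line.
    lines = []
    for path in sorted(nodes):
        deps = " ".join(sorted(nodes[path]))
        lines.append((path + ": " + deps).rstrip())
    return "\n".join(lines) + "\n"

text = serialize({"App/UI": ["App/Core"], "App/Core": []})
```

I am unsure, though, whether this kind of layout also helps the rename-detection heuristics, which is part of what I am asking.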

+",170809,,170809,,42372.97569,43582.89167,Optimizing a file type for compare tools,,1,1,0,,,CC BY-SA 3.0,, +306429,1,,,1/3/2016 23:49,,-3,5005,"

From previous experience, I had always thought that, if you are going to use variables inside of a for loop, it was much better to declare them outside of the loop vs. inside the loop itself. I recently had a code review done and had several younger developers claim that this was not true and that I should be putting my variables declarations inside the {} of the for loop.

+ +

Maybe compilers have just become more efficient, but it seems this would cause quite a number of memory releases/garbage collects since each iteration would be declaring a new instance of each variable, especially given that the majority of them are strings which are immutable.

+",209668,,213,,42373.02083,42373.12569,Best way to handle variables used in a for loop?,,3,2,0,42373.14167,,CC BY-SA 3.0,, +306432,1,306438,,1/4/2016 0:03,,3,217,"

I’ve released a piece of code into the public domain (henceforth “PD”), via The Unlicense. Recently, I’ve found someone forked that code and made some modifications without any acknowledgment/attribution. That is absolutely fine (it’s to allow that kind of freedom that I’ve released it into the PD in the first place). Fine, that is, until I see that no code was actually changed.

+ +

The only changes he did were replacing my name and website for his, and changing the icon.

+ +

So what this person did was the equivalent of taking Alice’s Adventures in Wonderland, stripping every reference to Lewis Carroll and replacing them with the equivalent references to himself, changing the cover, and republishing.

+ +

No license was attached to it by the person (and the PD one was removed), but this still feels like a dickish thing to do. I’m not exactly bothered by it and am not thinking about pursuing it (it was my decision to put it into the PD, after all) but I’m wondering about not only the ethics but the legality of this. If I understand correctly, PD still does not allow you to take credit for something you haven’t done, and that is still plagiarism. Is that correct, or is it jurisdiction-specific?

+",94821,,94821,,42373.72431,42373.97569,Taking credit for the public domain work of others,,1,6,,,,CC BY-SA 3.0,, +306436,1,306439,,1/4/2016 1:32,,7,2535,"

Simply put: I'm new to the company. Should I write code using advanced techniques (templates, the standard library, etc.) to make a good first impression and earn my colleagues' trust, or should I be more concerned with writing solid, standard code for each problem at hand?

+",65797,,,,,43089.67569,Should I be using advanced techniques most of the time at my new job just because I can?,,4,2,1,42373.56597,,CC BY-SA 3.0,, +306447,1,,,1/4/2016 6:11,,5,4811,"

On comp.lang.c++.moderated@googlegroups.com, Greg Herlihy posted the following extern ""C"" function:

+ +
extern ""C"" 
+{
+    int func()
+    {
+        wchar_t memoryName[256];
+        wchar_t mutexName[256];
+        wchar_t eventName[256];
+        mbstowcs(memoryName, ""MemoryName"", 256);
+        mbstowcs(mutexName, ""MutexName"", 256);
+        mbstowcs(eventName, ""EventName"", 256);
+        std::wstring memoryString(memoryName);
+        std::wstring mutexString(mutexName);
+        std::wstring eventString(eventName);
+        CDataTransferServer *srv = new CDataTransferServer();
+        srv->Initialize(1, CC_SAMPLETYPE_MPEG4,128,256,64);
+        printf(""Inside entry point tester 1\n"");
+        srv->AddUser(5, memoryString, mutexString, eventString);
+        printf(""Inside entry point tester 2\n"");
+        delete srv;
+        printf(""Exiting entry point tester 3\n"");                                                      
+    }
+} 
+
+ +

which is a g++ entry point different from main(int argc, char *argv[]).

+ +

Greg Herlihy then wrote:

+ +
+

Calling _exit() (which presumably should be ""_Exit()"") - does not ""prevent"" + C++ global objects from being destroyed. The destruction of a C++ program's + global objects is not inevitable. Instead a C++ program is responsible for + destroying its own global objects - and can do so in one of two ways: the + program returns from main() or it calls exit(). So a C++ program that fails + to exit main() and neglects to call exit() before it terminates - will not + have destroyed its global objects by the time it ended.

+
+ +

I don't believe that follows from what is written in the standard.

+ +

As far as I can see, when I call exit(), the spec guarantees that:

+ +
    +
  • Destructors for objects with automatic storage duration +will not be called.

  • +
• Destructors for objects with static storage duration +will be called, in reverse order of construction.

  • +
+ +
+
+

The worrying thing is that the standard doesn't say anything about + exit() or _exit(), so I'm relying on implementation-dependent behavior.

+
+ +

Not so. The C++ Standard specifies that calling exit() destroys global + objects[3.6.3/1] And _Exit() is part of the C99 Standard (and will + presumably be incorporated into the next C++ Standard by reference).

+
+ +

Right, the C++ standard says what exit() does, but it doesn't say what +_exit() or _Exit() do. And the C standard certainly doesn't say +anything about C++ destructors.

+ +
+

I don't see any implementation-defined behavior here. Calling _Exit() is no + more likely to destroy a C++ program's global objects than calling printf() + - or calling any other function that is not exit().

+
+ +

_exit() and _Exit() are specified not to call any functions registered with +atexit() or on_exit(). However, that's not useful for C++ because the +C++ spec doesn't say by what means the runtime takes care of calling +destructors of objects with static storage duration. A compliant +implementation could use a mechanism other than atexit() or on_exit() +to invoke static destructors.

+ +

In other words, the C++ spec does not say anything all about the +behavior of _exit() or _Exit(). Therefore, I cannot make any assumptions +about whether or not calling either function will cause destructors +of static objects to run or not.

+ +

Any comments are welcome.

+",199407,,161917,,42373.91736,42375.05972,Is it true that calling _exit() instead of exit() won't prevent static destructors from being called?,,4,4,1,,,CC BY-SA 3.0,, +306450,1,,,1/4/2016 7:45,,5,816,"

Came across this problem: +You are given a grid with numbers that represent the elevation at a particular point. From each box in the grid you can go north, south, east, or west, but only if the elevation of the box you are moving into is less than that of the one you are in, i.e. you can only descend. You can start anywhere on the map, and you are looking for a starting point with the longest possible path down, as measured by the number of boxes you visit. If there are several paths down of the same length, you want to take the one with the steepest vertical drop, i.e. the largest difference between your starting elevation and your ending elevation. +Example grid:

+ +
4 8 7 3 
+2 5 9 3 
+6 3 2 5 
+4 4 1 6
+
+ +

On this particular map the longest path down is of length=5 and it is: 9-5-3-2-1.

+ +

There is another path that is also length five: 8-5-3-2-1. Dropping from 9 to 1 (a drop of 8) is steeper than dropping from 8 to 1 (a drop of 7). +Write a program to find the longest (and then steepest) path.

+ +

I've worked through Floyd-Warshall, DAG + topological sorting, tried to think of DFS, thought about DP, and various other things, but I am not able to find a way to think it through. Any pointers, plus simple pseudo-code for an approach, would be helpful.
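For concreteness, here is a memoized-DFS sketch in Python of one direction I tried to reason about (helper names are my own): each cell caches its best path length together with the lowest elevation reachable along such a path, so ties resolve to the steepest drop.

```python
def longest_steepest_descent(grid):
    # Returns (path length, vertical drop) of the best descent: longest
    # strictly-decreasing N/S/E/W path, ties broken by the largest drop.
    rows, cols = len(grid), len(grid[0])
    memo = {}

    def best_from(r, c):
        # (length, lowest end elevation) over best paths starting at (r, c)
        if (r, c) in memo:
            return memo[(r, c)]
        length, end = 1, grid[r][c]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] < grid[r][c]:
                nlen, nend = best_from(nr, nc)
                if nlen + 1 > length or (nlen + 1 == length and nend < end):
                    length, end = nlen + 1, nend
        memo[(r, c)] = (length, end)
        return length, end

    best = (0, 0)
    for r in range(rows):
        for c in range(cols):
            length, end = best_from(r, c)
            best = max(best, (length, grid[r][c] - end))
    return best
```

On the example grid this yields length 5 with a drop of 8, matching the 9-5-3-2-1 path, but I would still like to hear whether this is the intended way to think about the problem.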

+",184305,,,,,44044.57292,Find path of steepest descent along with path length in a matrix,,3,1,2,,,CC BY-SA 3.0,, +306454,1,,,1/4/2016 9:49,,2,1189,"

I'm storing some data in Cassandra; after analysis it is written into several tables, aggregated on a daily, weekly, monthly, and yearly basis. Later, when a user reads some of the content, I need to change its status from unread to read based on that user activity.

+ +

As per my current design, either I need to update all the tables at once (more than 5 tables, and the number may grow), or I need to create a single table for the read/unread status, but that would require joining tables, which is not recommended in NoSQL.

+ +

Is there an existing architecture that handles this well? I looked at the Lambda architecture but didn't find a good solution.

+",209702,,31260,,42373.41875,42373.45417,How to architect frequent updates in No-sql db (Cassandra) - architecture,,1,0,,,,CC BY-SA 3.0,, +306464,1,,,1/4/2016 12:27,,4,76,"

I have been reading every now and then on the virtual machines of programming languages like Java, Python and Lua. They all have a notion of bytecode, into which the source code is translated and that is excutable on a virtual machine (register or stack based).

+ +

Now on an x86 architecture, all code resides in RAM which is addressable using the CPU's address space. An instruction pointer register points to the position in memory that is currently executed. Jumps modify this instruction pointer register, but basically software in memory is linearized into one piece of RAM.

+ +

With Virtual Machines, I am not so certain. Before executing the virtual machine, is all bytecode copied/linked together into a contiguous array with all instructions? Or does the VM keep various modules in different bytecode pages that are swapped/exchanged as needed?
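As one concrete data point, CPython at least does not seem to use a single contiguous image: each function gets its own code object with its own bytecode string, reachable from the enclosing module's constants.

```python
src = "def f(x):\n    return x + 1\n"
module_code = compile(src, "<demo>", "exec")

# The module's constants contain a separate code object for f; its
# bytecode lives in f's own co_code, not in one linear module image.
func_code = next(c for c in module_code.co_consts if hasattr(c, "co_code"))
```

Whether other VMs copy or link modules into one contiguous region is exactly what I am asking about.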

+",80034,,,,,42373.57153,Do Virtual Machines (for execution of PL) operate on one contiguous array for their bytecode?,,1,0,,,,CC BY-SA 3.0,, +306475,1,306479,,1/4/2016 13:49,,4,2026,"

Let's say we have an app where the users gain points and can exchange them for rewards. The exchange request, in pseudo-code, could look like this:

+ +
function exchangePointsForReward(userId, rewardId){
+    user = getUser(userId)
+    reward = getReward(rewardId)
+    if (user.points >= reward.requiredPoints){
+        giveRewardToUser(userId, rewardId)
+        reduceUserPoints(userId, reward.requiredPoints)
+    }
+}
+
+ +

But if we have a malicious user, what stops them from crafting a request in their favorite programming language and sending it 20 times at the same time? Before the first request reaches reduceUserPoints(), ten others might already have got as far as giveRewardToUser(). Sure, the user's points at the end of the day might go deep into negative, but what stops the user from quickly grabbing the rewards and using them up? How can I ensure that only one operation can be executed for a user at the same time?

+ +

One solution I can think of is the operation tries to acquire a lock at the beginning of the operation and only a single lockable operation can run for a user at any moment:

+ +
function acquireLock(userId){
+    lockId = getRandomLockId()
+    database.query(""UPDATE user SET lock={lockId} WHERE user={userId} AND lock IS NULL"");
+    return database.query(""SELECT lock WHERE user = {userId}"").first === lockId;
+}
+
+function exchangePointsForReward(userId, rewardId){
+    if (!acquireLock(userId)){
+        throw new Error(""Failed to acquire lock"");
+    }
+    user = getUser(userId)
+    reward = getReward(rewardId)
+    if (user.points >= reward.requiredPoints){
+        giveRewardToUser(userId, rewardId)
+        reduceUserPoints(userId, reward.requiredPoints)
+    }
+    releaseLock(userId);
+}
+
+ +

But is there any better strategy here? The question is database-agnostic.
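One database-agnostic variant I have been considering (sketched here with SQLite purely for illustration) is to skip the explicit lock entirely and make the deduction itself an atomic conditional UPDATE, checking the affected row count:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, points INTEGER)")
db.execute("INSERT INTO user VALUES (1, 100)")

def exchange_points(user_id, cost):
    # The WHERE clause makes check-and-deduct a single atomic statement,
    # so two parallel requests cannot both succeed on the same points.
    cur = db.execute(
        "UPDATE user SET points = points - ? WHERE id = ? AND points >= ?",
        (cost, user_id, cost))
    return cur.rowcount == 1  # grant the reward only when this is True

granted = exchange_points(1, 80)  # succeeds, 20 points remain
denied = exchange_points(1, 80)   # fails, not enough points left
```

I am not sure whether this generalizes cleanly once granting the reward involves several statements, which is why I am asking about better strategies.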

+",81816,,81816,,42373.58403,42373.90625,How to prevent user from requesting API method multiple times in parallel?,,4,0,0,,,CC BY-SA 3.0,, +306478,1,306484,,1/4/2016 14:12,,-2,305,"

We have a class in which data processing (of XML nodes) is done mainly by 3 methods. The code itself strictly follows the DRY principle, for example:

+ +
    +
  1. Process Children (reads data from child nodes)
  +
  2. Process Choose Element (conditional data check)
  +
  3. Extract Single Field (extract data from single node with no children)
  +
+ +

To give you an overview: say we reach node A; we then call ProcessChildren(), and if any child is a Choose element we call ProcessChoose(), which in turn calls ProcessChildren() recursively on its result, and so forth. Although this code is easy to read and largely bug-free, debugging it is very difficult, since control keeps jumping from one function to another. Is there any way to remove this hurdle so that debugging becomes easier?
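One aid I have tried for this kind of mutual recursion (generic sketch, not our real code) is a decorator that prints an indented call trace, so the hops between the methods become visible without single-stepping:

```python
import functools

_depth = 0  # indentation level shared by all traced functions

def traced(fn):
    # Print an indented line on entry so mutually recursive calls
    # show up as a readable tree instead of a flat jumble.
    @functools.wraps(fn)
    def wrapper(*args):
        global _depth
        print("  " * _depth + "-> " + fn.__name__ + repr(args))
        _depth += 1
        try:
            return fn(*args)
        finally:
            _depth -= 1
    return wrapper

@traced
def is_even(n):
    return True if n == 0 else is_odd(n - 1)

@traced
def is_odd(n):
    return False if n == 0 else is_even(n - 1)
```

Applied to our ProcessChildren/ProcessChoose pair, the indentation would show exactly which call chain led to a given node.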

+",209722,,161917,,42373.925,42373.925,Debugging mutually recursive functions,,1,4,,,,CC BY-SA 3.0,, +306483,1,306489,,1/4/2016 14:37,,46,86726,"

I have three classes that are circular dependant to each other:

+ +

TestExecuter executes the requests of a TestScenario and saves a report file using the ReportGenerator class. +So:

+ +
    +
  • TestExecuter depends on ReportGenerator to generate the report
  • +
  • ReportGenerator depends on TestScenario and on parameters set from TestExecuter.
  • +
  • TestScenario depends on TestExecuter.
  • +
+ +

I can't figure out how to remove those dependencies.

+ + + +
public class TestExecuter {
+
+  ReportGenerator reportGenerator;  
+
+  public void getReportGenerator() {
+     reportGenerator = ReportGenerator.getInstance();
+     reportGenerator.setParams(this.params);
+     /* this.params several parameters from TestExecuter class example this.owner */
+  }
+
+  public void setTestScenario (TestScenario  ts) {
+     reportGenerator.setTestScenario(ts); 
+  }
+
+  public void saveReport() {
+     reportGenerator.saveReport();    
+  }
+
+  public void executeRequest() {
+    /* do things */
+  }
+}
+
+ + + +
public class ReportGenerator{
+    public static ReportGenerator getInstance(){}
+    public void setParams(String params){}
+    public void setTestScenario (TestScenario ts){}
+    public void saveReport(){}
+}
+
+ + + +
public class TestScenario {
+
+    TestExecuter testExecuter;
+
+    public TestScenario(TestExecuter te) {
+        this.testExecuter=te;
+    }
+
+    public void execute() {
+        testExecuter.executeRequest();
+    }
+}
+
+ + + +
public class Main {
+    public static void main(String [] args) {
+      TestExecuter te = new TestExecuter();
+      TestScenario ts = new TestScenario(te);
+
+      ts.execute();
+      te.getReportGenerator();
+      te.setTestScenario(ts);
+      te.saveReport();
+    }
+}
+
+ +

EDIT: in response to an answer, more details about my TestScenario class:

+ +
public class TestScenario {
+    private LinkedList<Test> testList;
+    TestExecuter testExecuter;
+
+    public TestScenario(TestExecuter te) {
+        this.testExecuter=te;
+    }
+
+    public void execute() {
+        for (Test test: testList) {
+            testExecuter.executeRequest(test); 
+        }
+    }
+}
+
+public class Test {
+  private String testName;
+  private String testResult;
+}
+
+public class ReportData {
+/*shall have all information of the TestScenario including the list of Test */
+    }
+
+ +

An example of the xml file to be generated in case of a scenario containing two tests:

+ +
<testScenario name=""scenario1"">
+   <test name=""test1"">
+     <result>false</result>
+   </test>
+   <test name=""test1"">
+     <result>true</result>
+   </test>
+</testScenario >
+
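One direction I have been considering (sketched in Python for brevity; the names mirror my Java classes) is to make TestScenario plain data and let TestExecuter drive the iteration, which removes the scenario's back-reference entirely:

```python
class TestScenario:
    def __init__(self, tests):
        self.tests = tests  # plain data: no reference back to the executor

class TestExecuter:
    def execute_request(self, test):
        return "ran " + test  # placeholder for the real request logic

    def run(self, scenario):
        return [self.execute_request(t) for t in scenario.tests]

results = TestExecuter().run(TestScenario(["test1", "test2"]))
```

I am unsure whether this is also the cleanest way to untangle the ReportGenerator dependency, hence the question.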
+",202067,,4,,42699.79236,42942.55486,How to solve circular dependency?,,3,2,18,,,CC BY-SA 3.0,, +306486,1,306492,,1/4/2016 15:06,,23,22277,"

I'm freshly coming to the Python world after years of Java and PHP. While the language itself is pretty much straightforward, I'm struggling with some 'minor' issues that I can't wrap my head around — and to which I couldn't find answers in the numerous documents and tutorials I've read this far.

+ +

To the experienced Python practitioner, this question might seem silly, but I really want an answer to it so I can go further with the language:

+ +

In Java and PHP (although not strictly required), you are expected to write each class on its own file, with file's name is that of the class as a best practice.

+ +

But in Python, or at least in the tutorials I've checked, it is ok to have multiple classes in the same file.

+ +

Does this rule hold in production, deployment-ready code or it's done just for the sake of brevity in educative-only code?

+",209730,,102438,,43027.70764,43027.70903,Is it ok to have multiple classes in the same file in Python?,,3,0,3,,,CC BY-SA 3.0,, +306488,1,306491,,1/4/2016 15:52,,2,1460,"

I'm designing an analytics system that feeds all events to Elasticsearch. The event lifecycle is as follows:

+ +
    +
  1. Visitor does something.
  +
  2. A custom analytics server gathers data, makes an event out of it, and puts it into Elasticsearch.
  +
  3. Once a day or so, a custom batch processor aggregates a large set of events from Elasticsearch, transforms them into smaller chunks of data, and puts the results back into Elasticsearch.
  +
+ +

I know Logstash is made to gather and process events. Why should I consider it over my own custom solution?

+ +

I have no log files in the process. Events travel from one component to another via HTTP. I don't mind transforming the data into events on my own either.

+ +

I'm especially concerned about not having the flexibility of dealing with the events and Elasticsearch directly from my own choice of programming language.

+",9059,,9059,,42373.66944,42373.67708,"Feeding events to Elasticsearch, do I really need Logstash?",,1,0,1,,,CC BY-SA 3.0,, +306506,1,,,1/4/2016 18:44,,0,69,"

I am rewriting a website that has a back-end database for containing meta-data that is displayed to the user when they enter a text code.

+ +

Background:

+ +

The old/existing system stored this data in XML files with a master index XML file that pointed the correct data file based on a GUID. The way this is currently handled for publishing to production is all changes are done in a ""staging"" environment with the files being stored on Cloudfront under a ""staging"" folder. When they want to push changes to production, they simply copy the file from ""staging"" to ""production"".

+ +

I have updated the system to use a RDB (Postgres locally and it will be Aurora on AWS for production). I am using Java for the API with JPA annotations and Spring Boot. I have a separate Java Admin UI App that manages the meta-data and a separate user-facing application that is the website in Angular. The product owner still wants to be able to test changes in lower environments before publishing them to production.

+ +

Question:

+ +

I was trying to find a solution for publishing and was thinking that I may just have a staging env and a production env. A coworker here said that maybe have the Staging Admin App normally point to the Staging API, but when publishing, have it point to the Production API instead and not have a production Admin App. The problem I am running into is primary keys. You cannot specify the primary key in JPA AND have it auto-generated if it doesn't exist. I tried several different ways to do that and it breaks if I try to insert 1, 3, 2.

+ +

Does anybody have any recommendations on how to handle this situation: transferring data from one RDB to another RDB in a different environment while keeping all relationships intact?

+",209754,,,,,42374.14167,Handle publishing data across environments,,1,0,,,,CC BY-SA 3.0,, +306518,1,306520,,1/4/2016 21:44,,28,9008,"

This Stack Overflow question is about a child having reference to its parent, through a pointer.

+ +

Comments were pretty critical initially of the design being a horrible idea.

+ +

I understand this is probably not the best idea in general. From a general rule of thumb it seems fair to say, ""don't do this!""

+ +

However, I am wondering what sorts of conditions would exist where you would need to do something like this. This question here and its associated answers/commentary suggest not doing something like this even for graphs.
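One case I keep coming back to where the back-pointer seems natural is tree nodes that must reconstruct their own path by walking up to the root (illustrative sketch, names invented):

```python
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)  # parent and child reference each other

    def path(self):
        # Walk up via the parent pointer to build the full path.
        return (self.parent.path() + "/" if self.parent else "") + self.name

root = Node("root")
leaf = Node("leaf", Node("mid", root))
```

Without the parent pointer, computing leaf.path() would require an external lookup structure or passing the ancestry around explicitly.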

+",52929,,-1,,42878.52778,43134.14028,When is a circular reference to a parent pointer acceptable?,,6,8,8,,,CC BY-SA 3.0,, +306530,1,,,1/5/2016 0:33,,-3,60,"

Just a quick question about a design pattern for creating custom exceptions; the question is really about the order of parameters. If your exception can carry extra data, should the constructor parameter for it come before or after the standard overload parameters?

+ +

Before:

+ +
public class FooException : Exception
+{
+    public string Bar { get; private set; }
+
+    public FooException(string bar) : base()
+    {
+        this.Bar = bar;
+    }
+
+    public FooException(string bar, string message) : base(message)
+    {
+        this.Bar = bar;
+    }
+
+    public FooException(string bar, string message, Exception inner) : base(message, inner)
+    {
+        this.Bar = bar;
+    }
+}
+
+ +

After:

+ +
public class FooException : Exception
+{
+    public string Bar { get; private set; }
+
+    public FooException(string bar) : base()
+    {
+        this.Bar = bar;
+    }
+
+    public FooException(string message, string bar) : base(message)
+    {
+        this.Bar = bar;
+    }
+
+    public FooException(string message, Exception inner, string bar) : base(message, inner)
+    {
+        this.Bar = bar;
+    }
+}
+
+",161992,,,,,42374.06319,Convention for exception argument order,<.net>,1,1,,42374.76875,,CC BY-SA 3.0,, +306535,1,306540,,1/5/2016 2:00,,6,1481,"

I find myself unsure about what exactly it means to have different representations of a RESTful resource. The canonical example is for an API to provide an endpoint - say /v1/users/:id - and allow the client to select the best representation of the resource between JSON, XML, HTML or PDF depending on the media range value of the ACCEPT headers.

+ +

I was under the impression that this definition of representation could be extended to encompass more than just content-types, but actually response schemas. Say for example a client wants extended information about the user they could get it by specifying a supported header.

+ +

So for instance, my application could supply different schemas for the same resource i.e.

+ +
# get the default user representation
+GET /v1/users/1234
+ACCEPT: application/vnd.myapp.v1+json
+
+# server responds with
+{""id"": 1234, ""name"": ""Jeffery Lebowski""}
+
+# get the extended user representation
+GET /v1/users/1234
+ACCEPT: application/vnd.myapp.v1.extended+json
+
+# server responds with
+{""id"": 1234, ""name"": ""Jeffery Lebowski"", ""sport"": ""Bowling""}
+
+ +

Am I correctly understanding the concept of representations in REST? Or is the concept of resource representations only applicable to content types and content negotiation?

+ +

If so, how can I document the various representations that my API can return? It doesn't seem like either of the big documentation tools (swagger.io or RAML) support documenting multiple schema representations of a single resource.

+",23288,,5099,,42374.30278,42374.30278,Resource representations and REST API documentation tools,,1,0,1,,,CC BY-SA 3.0,, +306543,1,,,1/5/2016 7:18,,17,3757,"

We are using Java as a backend development language.

+ +

One year back, we wrote a method which uses switch cases based on enum values. Since we are continuously adding enum members, and accordingly adding cases to the method, the method has grown very large. Currently we have around 100 enum fields and a corresponding number of switch cases, e.g.:

+ +
 class AClass{
+
+   enum Option{ o1, o2, o3...on}
+
+   Value method(Option o){
+
+      Value value = null;
+      switch(o){
+        case o1:
+          value = deriveValue(p1,p2,p3);
+          break;
+        case o2:
+          value = deriveValue(p2,p3,p4);
+          break;
+        .
+        .
+        .
+        case on:
+          value = deriveValue(p1,p2,p3);
+          break;
+      }
+      return value;
+   } 
+ }
+
+ +

Thus each time a business requirement comes in, we add an enum member and a corresponding switch case. The method has now become too long and will be unmanageable if we keep adding the same kind of logic in future.

+ +

To clean this up, we thought of replacing the switch with polymorphism by creating a class per case, but then we would end up creating n classes.

+ +

We are looking for a small, simple, and manageable solution.

+ +

----------------- Update ---------------------

+ +

As suggested, to elaborate: the enum values are field names for which a client needs a value. If a client needs the value of a new field, we add the field to the enum and add a corresponding switch case defining how to fetch the value for the newly added field.

+ +

In the switch case, we have a common reusable method e.g. deriveValue() (Please refer to the example given) to which we pass parameters required for deriving the value for the newly added field.
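To make one of the options we considered concrete, here is a sketch (in Python, with a stand-in derivation) of replacing the switch with a lookup table, so that each new field is one new table entry rather than one new case:

```python
def derive_value(*params):
    return sum(params)  # stand-in for the real derivation logic

# One entry per enum member; adding a field means adding one line here.
HANDLERS = {
    "o1": lambda p: derive_value(p["p1"], p["p2"], p["p3"]),
    "o2": lambda p: derive_value(p["p2"], p["p3"], p["p4"]),
    # ...
    "on": lambda p: derive_value(p["p1"], p["p2"], p["p3"]),
}

def value_for(option, params):
    return HANDLERS[option](params)
```

This avoids both the giant switch and the n-classes explosion, but I would like to hear whether there is a still simpler, more manageable approach.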

+",194161,,194161,,42375.25486,42375.25486,Refactoring a long method which is based on large number of switch cases,,8,10,1,42374.97778,,CC BY-SA 3.0,, +306550,1,306571,,1/5/2016 9:50,,2,128,"

I've only just become interested in this domain, so sorry if I'm not using the correct terminologies.

+ +

What I want is the following: Say I have a set of rules (or constraints), I want to derive some implications of those rules.

+ +

For example, in Conway's Game of Life, there are 4 basic rules. From these rules, we can see a few patterns emerge. I want a system in which I can input the rules (in some formal language), and it would output at least some of these patterns. Also, if I make a change in any rule, or add a new rule, it should show me the implication of this change (or I should be able to derive it myself from comparing the two outputs).

+ +

This should ideally apply to any game that has a set of rules. For example in chess, it should say that the knight can move two squares in front of it by performing two L moves. In checkers it could be that having a piece behind another prevents the other playing from taking that piece.

+ +

Has anything like this ever been made? Is it even feasible? Can you recommend any courses or books where I could start my (re)search?

+ +

All I've found so far are Automated Theorem Provers, but from what I can tell so far, they are way too generic and mathematically oriented (they aim to solve any theory in maths, which has a lot of rules, I just want it for simple games with a small number of rules).

+",72045,,72045,,42374.43542,42374.54167,"From a set of rules, derive the implications?",,2,6,1,42374.76944,,CC BY-SA 3.0,, +306553,1,,,1/5/2016 10:12,,3,876,"

We use Git and Jenkins as our build&release system and each build is assigned a version number which looks like this: 6.0.12345. Here, the 12345 part is a counter which increments with each build. The counter is stored on the build server in a simple text file, which is rewritten every time a build occurs. It's worth noticing that this file is not under version control.

+ +

I was thinking about more elegant way to manage build numbers, which will allow for easier portability (say move build server to a new machine) and scalability (adding another build server would be very tricky with the current setup).

+ +

The only thing I can think of now is using the commit hash instead of a number, but this will make it difficult to say which of two builds came first; also, I don't think customers would be willing to accept such a change.

+ +

Any ideas are appreciated.

+",51613,,,,,42374.49722,An elegant way to store build counter,,1,0,2,42396.62639,,CC BY-SA 3.0,, +306558,1,306576,,1/5/2016 10:56,,2,261,"

I have some VHDL design that I plan to release as open source.

+ +

I like MIT License for its simplicity and would like to use it for my code. +However, I am not sure if it is OK to use it with hardware design, as it says ""software"" pretty much in every sentence.

+ +

Are there any particular things I should consider when releasing VHDL under MIT license? And can I do it at all?

+",153812,,,,,42374.58333,Hardware under MIT license?,,1,0,,,,CC BY-SA 3.0,, +306563,1,306810,,1/5/2016 11:23,,6,198,"

When Instant was introduced with JSR-310, convenience methods were added to perform conversion between Date and Instant:

+ +
Date input = new Date();
+Instant instant = input.toInstant();
+Date output = Date.from(instant);
+
+ +

I wonder why it was chosen to be this way instead of, let's say, having Date output = instant.toDate(). My guess is that it was intended to avoid making Instant dependent on Date. Is this right?

+ +

But to me it is somehow strange that the new class is made independent of the already existing one, while the old one is made dependent on the newly introduced one. Was this done because of an expected deprecation of Date, or is there another reason?

+",207355,,,user22815,42377.19097,42377.19097,Why was conversion between Instant and Date named the way it was?,,1,4,1,,,CC BY-SA 3.0,, +306565,1,306575,,1/5/2016 11:52,,0,92,"

I've just refactored some code that managed a global state cache of values without any locking, so that it now uses double-checked locking. Other than moving the initialisation to a single place (the cache was being set in a controller method), the main reason was to make sure only one thread performs the initialisation at a time.

+ +

The lifetime of the cache is for the lifetime of the application, so once it is set, it should be fine.

+ +

I've locked around the code that does the web request.

+ +

My colleague, who wrote the initial code, is concerned about the queued threads waiting for the lock: the initialisation makes a web request, and if the request is failing, the last thread has to wait for all the others to go through first, so it may wait a while. He therefore thinks the request should be outside of the lock. My view is that a request is expensive and unreliable, and should be minimised.

+ +

As this cache doesn't expire, it's not a big deal, but what are your thoughts? I suggested that if we wanted the correct solution, we could test the locks, using Monitor.TryLock etc for some more sophisticated locking, but it's probably overkill.
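For reference, the shape of what I ended up with is roughly the following (sketched in Python; expensive_fetch stands in for the web request my colleague is worried about):

```python
import threading

_cache = None
_lock = threading.Lock()

def expensive_fetch():
    # Stand-in for the slow, unreliable web request.
    return {"ready": True}

def get_cache():
    global _cache
    if _cache is None:            # first check, taken without the lock
        with _lock:
            if _cache is None:    # second check, now under the lock
                _cache = expensive_fetch()
    return _cache
```

The second check under the lock is what stops the threads that queued up from repeating the request once the first one has populated the cache.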

+",54644,,,,,42374.88958,Double-check locking around initialisation which performs a web request,,2,1,,,,CC BY-SA 3.0,, +306572,1,,,1/5/2016 12:55,,3,2771,"

I'm writing my first ""large"" program in C# and WPF. It's a Database system (MySQL) that has three main items, Contracts, Companies and People. I have the main UI down for the Companies section of the Database and I'm now looking at managing user privileges before I go any further.

+ +

I've hit a roadblock really on what would be a way to manage user privileges. The privileges would include access like being able to add, edit or delete a Company, the ability to view a Person's private details (home address etc) and access to specific Contract details (value of the job, whose working on it).

+ +

There are going to be many different privileges throughout the Database system as many will be very specific to the section you are in (it's not going to be a simple add/edit/delete privilege system). Another layer of complication to this is that the user is going to see different parts of the Database based on which department they work in. For example, a salesman will see some details specific to a company whereas a builder will see something totally different, but on the same company.

+ +

My initial thought is to have a privileges table in MySQL, something that looks like this:

+ +

+ +

Every user has multiple entries in this table, based on which part of the program they are in. For example, if they are in sales, then they will have about 7/8 entries purely based on what sales can do, and so on and so forth. This would be managed in a system like this:

+ +

+ +

Where each CheckBox would correspond to a 0 or a 1 in that entry for that person in the Database. When the user then logs into my Database program, I set up their privileges at the very beginning - it could reach something like 75 global variables for what the user can or cannot do in the program, which I look at further down the line when necessary.

+ +

My question really is: what would be a good way to manage privileges in this system? I haven't had much experience in this area before, so I am open to suggestions!
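One alternative to ~75 boolean globals is to model the privileges as an enum and give each user a set of them; a sketch (Java, privilege names hypothetical — the set would be loaded from the privileges table at login):

```java
import java.util.EnumSet;
import java.util.Set;

public class UserPrivileges {
    // Each row/column of the privileges table becomes one enum constant.
    public enum Privilege {
        ADD_COMPANY, EDIT_COMPANY, DELETE_COMPANY,
        VIEW_PERSON_PRIVATE, VIEW_CONTRACT_VALUE
    }

    private final Set<Privilege> granted;

    public UserPrivileges(Set<Privilege> granted) {
        this.granted = EnumSet.copyOf(granted); // defensive copy
    }

    public boolean can(Privilege p) {
        return granted.contains(p);
    }
}
```

A single `can(...)` check then replaces scattered global flag lookups throughout the UI code.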

+",209848,,1204,,42374.63403,42375.77847,How to Manage Privileges in C# WPF applications,,1,2,,,,CC BY-SA 3.0,, +306574,1,306580,,1/5/2016 13:39,,9,1266,"

One part of my program fetches data from many tables and columns in my database for processing. Some of the columns might be null, but in the current processing context that is an error.

+ +

This should ""theoretically"" not happen, so if it does it points to bad data or a bug in the code. The errors have different severities, depending which field is null; i.e. for some fields the processing should be stopped and somebody notified, for others the processing should be allowed to continue and just notify somebody.

+ +

Are there any good architecture or design principles to handle the rare but possible null entries?

+ +

The solutions should be possible to implement with Java but I didn't use the tag because I think the problem is somewhat language-agnostic.

+ +
+ +

Some thoughts that I had myself:

+ +

Using NOT NULL

+ +

Easiest would be to use a NOT NULL constraint in the database.

+ +

But what if the original inserting of the data is more important than this later processing step? So in case the insert would put a null into the table (either because of bugs or maybe even some valid reason), I wouldn't want the insert to fail. Let's say that many more parts of the program depend on the inserted data, but not on this particular column. So I would rather risk the error in the current processing step instead of the insert step. That's why I don't want to use a NOT NULL constraint.

+ +

Naively depending on NullPointerException

+ +

I could just use the data as if I expect it to always be there (and that should really be the case), and catch resulting NPEs at an appropriate level (e.g. so that the processing of the current entry stops, but not the whole processing run). This is the ""fail fast"" principle and I often prefer it. If it is a bug, at least I get a logged NPE.

+ +

But then I lose the ability to differentiate between various kinds of missing data. E.g. for some missing data I could leave it out, but for others the processing should be stopped and an admin notified.

+ +

Checking for null before each access and throwing custom exceptions

+ +

Custom exceptions would let me decide the correct action based on the exception, so this seems like the way to go.

+ +

But what if I forget to check it somewhere? Also I then clutter my code with null checks which are never or rarely expected (and so definitely not part of the business logic flow).

+ +

If I choose to go this way, what patterns are best suited for the approach?
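For instance, a single helper that turns each null into a severity-specific exception would keep the checks to one line per field and out of the business flow; a sketch (Java, exception names hypothetical):

```java
public final class Require {
    // Thrown for fields where processing of this entry should be skipped and logged.
    public static class MissingFieldException extends RuntimeException {
        public MissingFieldException(String msg) { super(msg); }
    }

    // Thrown for fields where the whole run should stop and an admin be notified.
    public static class FatalMissingFieldException extends MissingFieldException {
        public FatalMissingFieldException(String msg) { super(msg); }
    }

    public static <T> T field(T value, String name) {
        if (value == null) throw new MissingFieldException(name + " was null");
        return value;
    }

    public static <T> T criticalField(T value, String name) {
        if (value == null) throw new FatalMissingFieldException(name + " was null");
        return value;
    }
}
```

The caller then catches `MissingFieldException` per entry and lets `FatalMissingFieldException` propagate to the top-level handler.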

+ +
+ +

Any thoughts and comments on my approaches are welcomed. Also better solutions of any kind (patterns, principles, better architecture of my code or models etc.).

+ +

Edit:

+ +

There is another constraint, in that I am using an ORM to do the mapping from DB to persistence objects, so doing null checks on that level would not work (as the same objects are used in parts where the null does not do any harm). I added this because the answers provided so far both mentioned this option.

+",128542,,128542,,42374.66528,42375.62361,Designs and practices to guard against erroneous null entries from database,,8,8,1,,,CC BY-SA 3.0,, +306589,1,306592,,1/5/2016 17:03,,13,707,"

I work for a big company and I'm responsible for a large Java application with thousands of JUnit tests. Since I moved to this role, there have been 200-300 broken tests (likely broken for years). The tests are old and fragile, and they're a mess of spaghetti dependencies that typically end with live sandbox data.

+ +

My goal is 100% passing tests so we can break the build on unit test failures, but I can't do it until I address the broken tests. I have very little budget because the maintenance budget is primarily for support, but my team has identified and fixed the low-hanging fruit tests (mostly config/local resource issues) and we're down to 30-40 really ugly tests.

+ +

What are some opinions on best practice? I don't think the tests are valuable, but I also don't know what they're testing or why they don't work without digging in, which takes time and money we probably don't have.

+ +

I'm thinking we should document the statuses of the broken tests with anything we know, then either delete or ignore the broken tests completely and enter a lower-priority bug/work item to investigate and fix them. We'll then be at 100% and start to get real value out of the other tests and if we have a maintenance/refactoring windfall we'll be able to pick them up again.

+ +

What would be the best approach?

+ +

Edit: I think this is a different question than this question because I have a clear direction for the tests that we should be writing going forward, but I inherited legacy failing tests to address before the large current set of tests becomes meaningful.

+",209885,,-1,,42837.31319,42375.83125,Broken Old/Legacy Unit Tests,,5,13,1,,,CC BY-SA 3.0,, +306597,1,,,1/5/2016 18:26,,5,564,"

What is the origin of ""Program to an interface, not an implementation"" -- does it originate from Design Patterns, 1994, by GoF, or from a computer scientist or from some concepts in computer science earlier?

+",5487,,5487,,42374.86389,42374.95417,"What is the origin of ""Program to an interface, not an implementation""?",,1,7,1,42375.10208,,CC BY-SA 3.0,, +306603,1,306611,,1/5/2016 19:08,,2,267,"

I have often wanted the same feature which is asked for e.g. here and in many other questions on SO:

+ +

Being able to specify that something satisfies multiple interfaces without specifying the concrete type.

+ +

e.g. in C# pseudo syntax

+ +
(IEnumerable<string>, INotifyCollectionChanged) GetStringData() {
+    return /* an object which implements both interfaces */;
+}
+
+ +

It is possible to emulate this for method parameters using generics, but not for return values, fields, properties, etc.

+ +

Is there a name for this in type theory which I could use to find more information about this?

+ +

Are there any (strongishly typed) languages which implement this?

+ +

One example where this could be useful would be an imaginary implementation of Stream.

+ +

Currently there is an abstract class with many methods / properties and feature-check properties which enable/disable functionality.

+ +

With this feature you could have many interfaces IReadStream, IWriteStream, ISeekable, IHasFixedLength, etc. and then say Ok, I need something where I can read and seek, so I take (IReadStream + ISeekable).
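Incidentally, Java's generics already allow an intersection bound of this kind on type variables (`<T extends A & B>`), though only where a type variable can appear (parameters work directly; return types only via a caller-chosen type variable). A sketch with the Stream example:

```java
public class Intersection {
    public interface ReadStream { int read(); }
    public interface Seekable { void seek(long pos); }

    // Implements more than the caller below needs.
    public static class FileLike implements ReadStream, Seekable {
        public int read() { return 42; }
        public void seek(long pos) { /* no-op for the sketch */ }
    }

    // The argument must be BOTH a ReadStream and Seekable; no concrete type is named.
    public static <T extends ReadStream & Seekable> int readAt(T stream, long pos) {
        stream.seek(pos);
        return stream.read();
    }
}
```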

+ +

=============== +(Too long for a comment)

+ +

I think the best way to implement this in C# would be a combination of returning object / explicit casting and checkers implemented with Roslyn which verify that you only cast to 'allowed' interfaces. e.g.

+ +
[MultiReturn(typeof(IReaderStream), typeof(ISeekable))]
+object GetSeekableReaderStream() {
+    var stream = new ConcreteReaderWriterSeekableStream();
+    // stream actually implements IReaderStream, ISeekable AND IWriterStream
+    // but I only want to expose the first two
+    return stream;
+}
+
+ +

This could then be used like

+ +
var stream = GetSeekableReaderStream();
+(stream as ISeekable).Seek(5); // OK
+(stream as IReaderStream).Read(...); // OK
+(stream as IWriterStream).Write(...); // legal for the compiler AND at runtime, but the custom analyzer would scream
+
+ +

Something similar (but different) which is discussed here in roslyn would be ""structural interfaces"", but they are similar to ducktyping, they only enforce that specific methods are implemented, not that the object implements specific interfaces. Still this would be ""near enough"" that most of it would be possible.

+",128801,,-1,,42878.52778,42374.88264,Is there a name for this in type theory? Specify that a value satisfies multiple interfaces without specifying the concrete type,,2,12,,,,CC BY-SA 3.0,, +306628,1,,,1/6/2016 2:50,,6,959,"

I'm an iOS developer primarily. In building my current app, I needed a server that would have a REST API with a couple of GET requests. I spent a little time learning Ruby, and landed on using Sinatra, a simple web framework. I can run my server script, and access it from a browser at localhost:4567, with a request then being localhost:4567/hello, as an example.

+ +

Here's where I feel out of my depth. I set up an Ubuntu droplet at DigitalOcean, and felt my way around setting up all the necessary tools via the command line, until I could again run my server, now on this droplet.

+ +

The problem then is that I couldn't access my server via droplet.ip.address:4567, and a bit of research led me to discover I need Passenger and an Apache HTTP Server to be set up, and not with simple instructions.

+ +

I'm way in over my head here, and I don't feel comfortable. There must be a better way to take my small group of Ruby files and run them on a server than this, but I have no idea what I'm doing.

+ +

Any help or advice would be greatly appreciated.

+",149371,,,,,42827.74861,How do I go about setting up my Sinatra REST API on a server?,,1,0,,,,CC BY-SA 3.0,, +306633,1,306655,,1/6/2016 4:13,,11,461,"

Q: What is the best way to move a large company to Cucumber with at least 15 years of legacy software requirements maintained in a requirements database?

+ +

Currently considering:

+ +

1) Migrate Everything

+ +

Downside: we don't have unlimited time/budget, we have to move forward to survive, we can't stop everything and GC 100% of our legacy requirements and legacy test suites.

+ +

2) Boy Scout Rule

+ +

Leave everything better than you found it. If you touch requirements or change them write/update a Cucumber feature. Downside: We will have two systems of record (Cucumber, legacy req. DB), possibly for ever assuming there are corners of a given application that don't get touched for a very long time.

+ +

3) Boy Scout Rule Plus

+ +

Same as #2, but put requirements which we are not yet moving to Cucumber into Features with a single pending scenario, and copy/paste the legacy requirements into the description section. This way we get metrics (via pending scenarios) as to how ""covered"" we are by Cucumber, while also ridding us of the need to maintain the old requirements system. I can't find any downsides to this other than it might be a huge mess within Cucumber.

+ +

4) Insert your idea here.

+ +

Background:

+ +

Some projects moving to Cucumber have automated test suites, some only ever used manual testing. All of them maintain their requirements in a legacy requirements database. We have to do this because our requirements are a mixture of laws/regulation and complex logic for financial instruments (risk, pricing, structure, etc...).

+ +

Keep in mind this is a very large company making the move, which complicates solutions further.

+ +

We already have some projects using Cucumber for their ""new"" requirements, so we have piloted the tech and it's worked for us so far. We have a mixture of web and purely data projects.

+ +

Thanks

+ +

Edit: To respond to the questions... +The legacy requirements management DB does not connect requirements to tests. It is not ""testable"". Today, connecting requirements to tests is done through an arduous and error-prone manual process of linking requirements to our test case management system at the end of each project. Cucumber is an obviously better solution for us. There is no question about that. The question is just how to make the move for a large organization with an immense amount of important requirements that cannot be lost, for legal and other reasons.

+",209930,,209930,,42376.65208,42376.65208,Migrate legacy requirements to BDD,,1,4,,,,CC BY-SA 3.0,, +306634,1,306635,,1/6/2016 6:09,,5,612,"

Is there an existing named software design pattern similar to Observer, but for the case where only a single observer is supported rather than a collection of observers?

+ +

I find that I use this pattern fairly often, particularly in cases where an adapter has the ability to provide fan-out (notify multiple observers) on behalf of the object with the simpler call-back behavior. I would like to be able to use a name for what I'm doing that will make the most sense to others.
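The shape I mean is roughly this — one replaceable callback slot instead of a list of observers (a Java sketch, names hypothetical):

```java
import java.util.function.Consumer;

public class Download {
    private Consumer<Integer> onProgress; // at most one observer

    public void setOnProgress(Consumer<Integer> callback) {
        this.onProgress = callback; // replaces any previously set observer
    }

    public void reportProgress(int percent) {
        if (onProgress != null) {
            onProgress.accept(percent);
        }
    }
}
```

An adapter that needs fan-out can register itself as the single observer and forward to its own collection of listeners.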

+",22603,,22603,,42375.26111,42376.71528,Name of design pattern for single-observer,,1,6,2,,,CC BY-SA 3.0,, +306637,1,,,1/6/2016 8:18,,0,1320,"

I am new to the world of Javascript and their frameworks, and I feel a bit lost with this. I am trying to follow the official toutorial of AngularJS. In one of the first sections, it reads

+ +
+

Install Node.js If you want to run the preconfigured local web-server + and the test tools then you will also need Node.js v0.10.27+.

+ +

You can download a Node.js installer for your operating system from + http://nodejs.org/download/.

+
+ +

The doubt is: why do I have to use Node.js 0.10.27? It's an old version, given that now (as I've just seen on the Node.js website) the project is on version 4.x/5.x.

+ +

If I am getting anything wrong here, please correct me.

+ +

EDIT:

+ +

I tried to follow the steps of the tutorial with the 4.* version and I couldn't, due to the Node.js version required by the dependencies.

+",207227,,207227,,42375.35417,42638.70278,AngularJS and NodeJS required version,,1,1,,,,CC BY-SA 3.0,, +306638,1,,,1/6/2016 9:10,,6,939,"

I want to model (TV)Events and Reminders and I’m wondering what’s ‘the best’ way to model this.

+ +

The requirements are roughly

+ +
    +
  • When an Event has no Reminder, a Reminder can be created
  • +
  • When an Event has a Reminder + +
      +
    • the Reminder can be deleted
    • +
    • an icon is shown on the Event
    • +
  • +
  • A separate Screen exists presenting all Reminders
  • +
  • Reminders are stored in the backend and have multiple properties
  • +
+ +

Obviously, our real domain is larger and also contains Channels, Recordings, ...

+ +

Basically, we need functions to

+ +
    +
  • create a Reminder
  • +
  • delete a Reminder
  • +
  • get the list of all Reminders
  • +
  • check if an Event has a Reminder
  • +
+ +

This seems a relatively simple problem but I'm struggling with it. Below, I’ve collected a number of possible options and I would greatly appreciate your feedback.

+ +

Option 1

+ +

Event creates Reminders

+ +

Event API

+ +
Reminder createReminder() //adds the Reminder to the backend 
+boolean  hasReminder()
+
+ +

Reminder API

+ +
void deleteReminder() //removes the Reminder from the backend
+static List<Reminder> getAllReminders()
+
+ +

But

+ +
    +
  • deleting and creating a Reminder are symmetric functions and hence I would expect them on a single API (now both Event and Reminder have an interface to the backend)
  • +
  • I'm not a big fan of the static function on Reminder. It seems to indicate that a higher-order object should exist
  • +
+ +

Option 2

+ +

Reminder API

+ +
Reminder createReminder(Event event) //adds the Reminder to the backend
+void deleteReminder() //removes the Reminder from the backend
+boolean hasReminder(Event event)
+static List<Reminder> getAllReminders()
+
+ +

An advantage is that the Event API doesn't change when adding Reminder functionality (=> good with regard to extensibility, I guess). +But

+ +
    +
  • creating a Reminder for a Event (and checking whether an event has a reminder) seems to be more a function on Event so I would expect them on the Event API
  • +
+ +

Option 3

+ +

Event API

+ +
Reminder createReminder() //adds the Reminder to the backend
+void deleteReminder() //removes the Reminder from the backend
+boolean hasReminder()
+static List<Reminder> getAllReminders()
+
+ +

But

+ +
    +
  • the getAllReminders() seems more a function on Reminder
  • +
+ +

Option 4

+ +

Create a ReminderManager/ReminderService (singleton) with the following API

+ +
Reminder createReminder(Event) //adds the Reminder to the backend
+void deleteReminder(Reminder) //removes the Reminder from the backend
+boolean hasReminder(Event event)
+List<Reminder> getAllReminders()
+
+ +

This is my preferred option but

+ +
    +
  • This results in anemic Event/Reminder objects (just getters and setters but no logic). There seems to be discussion whether this is an anti-pattern or not
  • +
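For concreteness, a minimal sketch of option 4 (Java; the in-memory List stands in for the real backend, and Event/Reminder stay plain data holders):

```java
import java.util.ArrayList;
import java.util.List;

public class ReminderService {
    public static class Event {
        public final String id;
        public Event(String id) { this.id = id; }
    }

    public static class Reminder {
        public final Event event;
        public Reminder(Event event) { this.event = event; }
    }

    private final List<Reminder> backend = new ArrayList<>(); // stands in for the backend store

    public Reminder createReminder(Event event) {
        Reminder r = new Reminder(event);
        backend.add(r);
        return r;
    }

    public void deleteReminder(Reminder reminder) {
        backend.remove(reminder);
    }

    public boolean hasReminder(Event event) {
        return backend.stream().anyMatch(r -> r.event == event);
    }

    public List<Reminder> getAllReminders() {
        return new ArrayList<>(backend); // defensive copy
    }
}
```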
+",188716,,188716,,42377.45903,42381.56042,Alternative to Anemic domain objects (Simple example provided),,3,0,,,,CC BY-SA 3.0,, +306639,1,,,1/6/2016 9:17,,19,4444,"

We are studying a prospect of using Ghostscript in a commercial product (Windows desktop application).

+ +

I read about sidestepping licensing GS altogether by simply suggesting to the users that they can download and install GS on their own to improve their experience (the software actually works without it, too, but most users would want to render/load PDF documents).

+ +

So, suppose we don't ship Ghostscript ourselves, but instead have our software check its availability and, in case if it is absent, suggest how to obtain it (short explanatory text and link to their download page). In case if it is installed, the program would use the Ghostscript API.

+ +

To me it sounds legal, as Artifex says something like ""you are not allowed to ship GS if your application doesn't meet so-and-so conditions"". Would anybody care to share an opinion on this?

+",209957,,31260,,42375.57222,42674.44653,Licensing Ghostscript in a commercial product,,2,2,4,,,CC BY-SA 3.0,, +306658,1,,,1/6/2016 14:21,,12,11537,"

As I understand it, in DDD, it is appropriate to use a repository pattern with an aggregate root. My question is, should I return the data as an entity or domain objects/DTO?

+ +

Maybe some code will explain my question further:

+ +

Entity

+ +
public class Customer
+{
+  public Guid Id { get; set; }
+  public string FirstName { get; set; }
+  public string LastName { get; set; }
+}
+
+ +

Should I do something like this?

+ +
public Customer GetCustomerByName(string name) { /*some code*/ }
+
+ +

Or something like this?

+ +
public class CustomerDTO
+{
+  public Guid Id { get; set; }
+  public FullName { get; set; }
+}
+
+public CustomerDTO GetCustomerByName(string name) { /*some code*/ }
+
+ +

Additional question:

+ +
    +
  1. In a repository, should I return IQueryable or IEnumerable?
  2. +
  3. In a service or repository, should I do something like.. GetCustomerByLastName, GetCustomerByFirstName, GetCustomerByEmail? or just make a method that is something like GetCustomerBy(Func<string, bool> predicate)?
  4. +
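Regarding the main question, if the repository is to return DTOs, the entity-to-DTO mapping can live in the repository (or a dedicated mapper) so callers never see the entity; a sketch (Java, mirroring the C# example above, names hypothetical):

```java
public class CustomerMapper {
    // Persistence entity: stays behind the repository boundary.
    public static class Customer {
        public String id, firstName, lastName;
    }

    // What the repository hands back to callers.
    public static class CustomerDto {
        public final String id, fullName;
        public CustomerDto(String id, String fullName) {
            this.id = id;
            this.fullName = fullName;
        }
    }

    public static CustomerDto toDto(Customer c) {
        return new CustomerDto(c.id, c.firstName + " " + c.lastName);
    }
}
```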
+",209856,,60357,,43024.83056,43024.83056,"in DDD, should repositories expose an entity or domain objects?",,3,1,5,,,CC BY-SA 3.0,, +306664,1,340728,,1/6/2016 15:42,,1,650,"

When Apache released its list of best practices, they recommended avoiding the addition of an empty panel conditionally and gave the following example of what NOT to do:

+ +
if(MySession.get().isNotLoggedIn()) {
+    add(new LoginBoxPanel(""login""))
+}
+else {
+    add(new EmptyPanel(""login""))
+}
+
+ +

However, I see a lot of this type of code when I peer review, except that the second condition usually looks like this:

+ +
    add(new EmptyPanel(""login"")).setVisible(false);
+
+ +

The programmers justify code like this by saying they don't want to build the original component if it is not going to display. That sounds like a logical argument to me. Can someone explain what we are missing and why this is a bad practice?

+",209994,,34183,,42375.65833,42758.45208,Why is conditionally using an empty panel in Apache Wicket a bad practice?,,1,0,,,,CC BY-SA 3.0,, +306668,1,349602,,1/6/2016 16:16,,0,3748,"

We are working on a Java EE project which handles a huge amount of data, but has to provide a full-text-search option (in the Hungarian language). +So we started to think about what kind of architecture could fulfill our requirements. My thoughts are the following:

+ +

Using ElasticSearch as a database is an antipattern so it must be used just for indexing and searching

+ +

MongoDB is fit for our expectations so it seems to be a good choice as database.

+ +

The problem is, how to index MongoDB data with ElasticSearch? I created a POC with 13 million documents. I iterated through the documents and in each iteration I saved them into MongoDB (it gave me an ID for each document) then I put the documents into ElasticSearch but stored only the Mongo ID. Document indexing was quite fast, average 4,8 ms per document.

+ +

When I search with Elastic, it gives me back the matching document IDs and I can load the documents from Mongo with the $in operator. This also seemed quite fast.

+ +

All of that suggests it can be a good approach, but is it really? I can't figure out when this architecture slows down or what could be a bottleneck. Maybe synchronizing ElasticSearch with Mongo, but that can be run in a distributed environment (Hadoop).

+ +

So my question: is there a better way to synchronize MongoDB with ElasticSearch?

+",138914,,138914,,42880.69653,42880.69653,How to properly index MongoDB with ElasticSearch?,,2,2,0,,,CC BY-SA 3.0,, +306669,1,,,1/6/2016 16:19,,0,12852,"

Can someone please explain the differences in C# between:

+ +
    +
  • throw
  • +
  • throw new
  • +
+ +

and exactly how exceptions ""bubble up"" as I've heard they do?

+ +
+ +

In my daily job, I've used just try/catch to mostly control the flow. I've divided the code into various ""blocks"" each one enclosed in a try/catch so I could log and debug exactly where a program crashed/had a problem, if there was any.

+ +

In researching, I've found there is a ""bubbling"" behaviour behind exceptions, which is relied upon in more structured software development. For example, if I have a system in which I build modules/plugins/complex complementary classes, I might want the exceptions to point me to exactly where the code failed, OR bubble up to a more generic part of the program to do a wider debug.

+ +

How we can do this sort of ""bubble up"" exception catching, I do not know, but is my understanding correct? Can someone perhaps give a clearer explanation of C#'s ""bubble up"" exception behaviour?
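For illustration only (the mechanics are the same in Java as in C#): an uncaught exception unwinds the call stack frame by frame until the first matching catch is found:

```java
public class Bubbling {
    static void inner() {
        throw new IllegalStateException("failed deep in the call stack");
    }

    static void middle() {
        inner(); // no try/catch here, so the exception passes straight through
    }

    public static String outer() {
        try {
            middle();
            return "no error";
        } catch (IllegalStateException e) {
            // First frame with a matching catch handles it; the frames below were unwound.
            return "caught: " + e.getMessage();
        }
    }
}
```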

+ +

then about throw:

+ +

I see there is throw and throw new.

+ +

From what I've seen, throw is used when you check for a particular error/condition in your code and you want to warn the outer block that this condition occurred.

+ +

Is this correct? I don't really understand why I would use throw or throw new, or the differences between them.

+",94642,,31260,,42375.80972,42375.86667,"Differences between `throw` and `throw new` and exactly how exceptions ""bubble up""",,5,15,,42377.10208,,CC BY-SA 3.0,, +306677,1,,,1/6/2016 17:33,,24,12537,"

I recently thought about the use of unsigned integers in C# (and I guess a similar argument can be made about other ""high level languages"").

+ +

When in need of an integer, I am normally not faced with the dilemma of the size of the integer; an example would be an age property of a Person class (but the question is not limited to properties). With that in mind there is, as far as I can see, only one advantage of using an unsigned integer (""uint"") over a signed integer (""int"") - readability. If I wish to express the idea that an age can only be positive, I can achieve this by setting the age's type to uint.

+ +

On the other hand, calculations on unsigned integers can lead to errors of all sorts and it makes it difficult to do operations such as subtracting two ages. (I read this is one of the reasons Java omitted unsigned integers)

+ +

In the case of C#, I can also think that a guard clause on the setter would be a solution that gives the best of both worlds, but this would not be applicable when, for example, an age is passed to some method. A workaround would be to define a class called Age and have the age property be the only thing there, but this pattern would have me create many classes and would be a source of confusion (other developers would not know when an object is just a wrapper and when it's something more sophisticated).
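The wrapper idea might look like this (a Java sketch of a hypothetical Age value type; the same shape works in C#): the constructor enforces the invariant, while arithmetic such as the difference of two ages stays an ordinary signed int.

```java
public final class Age {
    private final int value;

    public Age(int value) {
        if (value < 0) {
            throw new IllegalArgumentException("age cannot be negative: " + value);
        }
        this.value = value;
    }

    public int asInt() { return value; }

    // Subtraction returns a plain signed int, so a negative difference is fine.
    public int minus(Age other) { return this.value - other.value; }
}
```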

+ +

What are some general best practices regarding this issue? How should I deal with this type of scenario?

+",51990,,168744,,42965.83681,42965.83681,Should I avoid using unsigned int in C#?,,3,6,1,,,CC BY-SA 3.0,, +306687,1,306699,,1/6/2016 19:50,,-2,68,"

For example:

+ +
class Data {
+    private String field1;
+    private String field2;
+
+    public void someEditMethod() {}
+}
+
+class DataRepository {
+    public void save(Data data) {
+        // save to DB
+    }
+}
+
+ +

I need save Data when editing.

+ +

How better?

+ +

internally:

+ +
....
+public void someEditMethod() {
+    repo.save(this);
+}
+....
+
+ +

or externally:

+ +
....
+DataRepository repo = new DataRepository(dbConnection);
+Data data = new Data(repo);
+data.someEditMethod();
+....
+
+ +

Externally is awkward, and I do not want to impose knowledge about the repository on the class's clients.

+",210007,,,,,42375.90903,SRP. Save to repository when edit. Internally or externally?,,1,3,0,42387.86389,,CC BY-SA 3.0,, +306694,1,,,1/6/2016 21:06,,2,979,"

I'm in the midst of deciding which platform to use to implement a new kind of display server / window manager. It seems that it isn't possible to replace the DWM on Windows systems according to MSDN docs.

+ +

But then, how do folks go about doing fancy 3d stuff like this video of a 3d window manager framework for the DWM without resorting to hacks like this.

+ +

For example, in the Linux world we can configure the OS to use entirely different window managers (eg. Enlightenment, Compiz, KWin) and entirely different desktop environments (eg. KDE, Gnome).

+ +

How might one go about truly ""swapping in"" different window managers / desktop environments on the Windows platform, getting access to the buffers that applications draw to, being able to control and delegate user interface events, etc.?

+",4970,,4970,,42376.64444,42376.64444,How to replace or manipulate the Microsoft Desktop Window Manager (DWM)?,,0,7,,42376.64514,,CC BY-SA 3.0,, +306695,1,306696,,1/6/2016 21:30,,7,2188,"

For an open source project created by 2 authors, is it proper for an MIT License to have two authors?

+ +
The MIT License (MIT)
+
+Copyright (c) 2016 FIRST NAME and SECOND NAME
+
+",200008,,,,,42375.89861,MIT License - Dual Authors,,1,1,,,,CC BY-SA 3.0,, +306703,1,306723,,1/6/2016 21:23,,3,1052,"

How do you make a class that properly warns a developer in the future that they've made a mistake somewhere in their implementation that resulted in an object being destructed in a state that prevents the release of its resources?

+ +

Background:

+ +

I recently upgraded to Visual Studio 2015 and began reloading and compiling code for a game engine I'm working on and ran into a new series of warnings ""warning C4297: '*': function assumed not to throw an exception but does"". A quick search revealed a C++ convention that I'd missed and the reasons behind said convention: destructors should not throw exceptions. I also can't really argue with the reasons, but I'm also not sure how to work around the problem.

+ +

Within OpenGL a Context basically holds all of the state information for the OpenGL engine. Only one thread may have a context at any given time and each thread may only have one current context. When the engine starts it creates the context and then relinquishes control over the context and starts up another thread which picks it up and proceeds to handle the graphics rendering for the engine. To handle all of this, I created a graphics engine class that uses semantics similar to a mutex to claim and relinquish the graphics engine and make sure that no mistakes are made that might some day result in someone attempting to do things with a context that it doesn't own.

+ +

During destruction, the graphics engine and a number of other classes that rely on it all check to make sure that the current thread has claimed the graphics engine before they perform actions that are necessary in their destruction. If the thread didn't have the graphics context claimed, the destructor was throwing. My goal was really to provide some basic protection against the class being used improperly on accident in the future, not to make the graphics engine thread-safe. Now... I'm uncertain of how best to handle this.

+ +

I've contemplated just switching over to a mutex-based approach which I could use to block access to the graphics context until a thread was done, possibly making the graphics engine class fully multi-threading capable (not that I can understand why you'd want to perform multi-threading with an OpenGL context, as the calls needed to do so are expensive enough to negate any benefit you might get out of it from what I understand).

+ +

The most tempting option has been to just log an error and terminate any thread that attempts to misuse the class. Unfortunately, I can't find an OS-independent way of terminating just the current thread. If I were to go this route, I'd have to look up OS-appropriate ways to terminate the current thread.

+ +

I'm also not certain that I'm not being overly paranoid. Maybe I should just document the proper use of the class and if someone misuses it let them and hope that they're able to figure out why their application isn't doing what it's supposed to. I'm also worried about myself being the fool who misuses the class some day in the future.

+",210161,Darinth,,,,42384.06597,"OpenGL, multithreading, and throwing destructors",,6,3,1,,,CC BY-SA 3.0,, +306713,1,306719,,1/7/2016 0:02,,6,1271,"

According to this question and just about every style guide I've ever read, large methods are bad. However, suppose I have a method that does the following:

+ +
    +
  1. Receive a JSON String from some service and store it as a Dictionary
  2. +
  3. For various parameters in the Dictionary, creates connections to other services, such as using AWS keys to open connections to AWS services
  4. +
  5. Downloads, Authenticates, or Manipulates data based on values in the Dictionary and new data received from step 2.
  6. +
  7. Creates a file on the local directory and writes various values to it, dependent on step 2 and 3.
  8. +
  9. Pushes the file from step 4 to another location
  10. +
+ +

Clearly, this should not all be one large method. However, I'm not sure how to split this up into smaller methods without passing around a ton of parameters. For example, the first method is simple because I only need to tell the program one thing, and only one thing is returned. However, starting at step 3, I will either have to pass the entire Dictionary to the new method, or make the Dictionary an instance or global variable.

+ +

So, in short, here is my question. When breaking up a large method like this, what is the better alternative: should I make the variables which need to be used by multiple methods instance/global variables (whichever is appropriate for this particular task), or should I create new methods which take many parameters?

+ +

To me, it seems like the first option is much worse, because it seems to break encapsulation: each method created in this way now has some strange dependency on a global or instance variable which is outside of its scope. However, the second method feels really messy.
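To make the trade-off concrete, here is a minimal sketch (in Python, with hypothetical names and stubbed-out steps) of a middle road: a short-lived job object whose fields hold the shared state, so nothing is truly global and each step is a small focused method.

```python
import json

# Hypothetical sketch: the shared Dictionary becomes instance state of a small,
# short-lived job object, so each step is a focused method and the shared
# variables live only for the duration of one run (no lasting globals).
class PipelineJob:
    def __init__(self, json_text):
        self.params = json.loads(json_text)   # step 1: parse the payload
        self.connections = {}

    def open_connections(self):               # step 2 (stubbed out here)
        for name in self.params.get('services', []):
            self.connections[name] = 'conn-to-' + name

    def build_output(self):                   # steps 3-4 (stubbed out here)
        return '\n'.join(k + '=' + v for k, v in sorted(self.connections.items()))

    def run(self):                            # step 5 would push the result
        self.open_connections()
        return self.build_output()

job = PipelineJob('{"services": ["aws", "db"]}')
result = job.run()
```

Because the job object exists only for one run, its fields behave like well-named locals shared between the steps rather than like global state.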

+",210061,,-1,,42837.31319,42376.63472,What is the best way to split up large methods where each subtask depends on previous tasks?,,4,1,,,,CC BY-SA 3.0,, +306717,1,,,1/7/2016 0:40,,1,442,"

I have recently come across two terms: Software Engineering and Craftsmanship. +I would like to know what the difference is between Software Engineering and developing software in a Craftsmanship style. Intuitively I understand that these two approaches are not exactly the same, but I am not sure whether they are completely different.

+",210064,,,,,42376.06111,Difference between Software Engineering and developing software in a Craftsmanship style,,1,3,0,42376.47431,,CC BY-SA 3.0,, +306718,1,306766,,1/7/2016 0:40,,9,457,"

There is a portion of our codebase written in the following style:

+ +
// IScheduledTask.cs
+public interface IScheduledTask
+{
+    string TaskName { get; set; }
+    int TaskPriority { get; set; }
+    List<IScheduledTask> Subtasks { get; set; }
+    // ... several more properties in this vein
+}
+
+// ScheduledTaskImpl.cs
+public class ScheduledTaskImpl : IScheduledTask
+{
+    public string TaskName { get; set; }
+    public int TaskPriority { get; set; }
+    public List<IScheduledTask> Subtasks { get; set; }
+    // ... several more properties in this vein, 
+    // perhaps a constructor or two for convenience.
+}
+
+ +

That is, there are a large number of interfaces, each specifying just a set of properties with no behavior, each with a sole corresponding implementation that implements these with auto-properties. The code is written by someone fairly senior (much more so than myself) and is, apart from this use of interfaces, reasonable procedural code. I was wondering if anyone else had encountered/used this style and if it has any advantages over just using concrete DTOs everywhere without the interfaces.

+",198855,,,,,42377.88958,Programming to Data Oriented Interfaces,,2,11,,,,CC BY-SA 3.0,, +306727,1,,,1/7/2016 3:11,,3,131,"

I am investigating Camel for connecting various services. I understand the core concepts but I was curious about more specific implementation details. This application would have a browser client front-end that communicates using Ajax REST calls into the ESB.

+ +

Imagine that the end user sends personal information and credit card information to the ESB to register for car insurance. All the information is sent, and the ESB should return a result that the application was processed. The different routes along the bus would include verifying personal information, then processing the credit card information, then storing that data in a database, and then returning a success. So, let's imagine that there is service1-personal, service2-creditcard, and service3-database. Between those points, I would like to send an asynchronous message using Camel and MQ/Queue, and service1 can consume the message, service2 will consume the result of that message, etc.

+ +

I have two questions and one of them is the important one.

+ +

How should the browser initiate the message and send the initial application data? Should it just post it as a form/REST? And then, should the client wait asynchronously and poll for the result? Check the last message from service3 with the result of the application? Or could it be done synchronously? What is the best approach? Should websockets be used, so that the ESB can send a response once the application is complete?

+ +

Also, does anyone have more granular details on the best approach for doing this with Camel? For example, do you know what routes or EIP type to use, as I describe here? Do you know what the Camel XML config would look like, say, using Camel MQ/Spring DSL?

+",75803,,,,,42380.58958,Camel/EIP/ESB with messaging from a website to process an order, synchronous and asynchronous,,0,1,1,,,CC BY-SA 3.0, +306731,1,,,1/7/2016 6:53,,3,198,"

I recently started working with JS Promises. What I always like to do is create a function which returns a Promise with the final desired result, but I only do one operation per .then. Consider something like this (this fetches the versions of an npm package in an array in reverse order):

+ +
function packageVersions(name) {
+  return fetchJson(npmUrls.getPackageUrl(name))
+    .then(npmPackage => npmPackage.versions)
+    .then(versionsObject => Object.keys(versionsObject))
+    .then(versions => versions.sort((a, b) => b.localeCompare(a)));
+}
+
+ +

It's as if I were doing lots of .maps sequentially.

+ +

My question would be: is this considered a good practice in general? Could this have significant performance drawbacks if I do a lot of chaining? Do you consider this unreadable in any way?

+ +

Would this be better?

+ +
function packageVersions(name) {
+  return fetchJson(npmUrls.getPackageUrl(name))
+    .then(npmPackage => {
+        const versions = Object.keys(npmPackage.versions);
+        return versions.sort((a, b) => a.localeCompare(b));
+    });
+}
+
+",126755,,,,,42376.85278,Javascript Promise chaining,,1,0,,42377.10139,,CC BY-SA 3.0,, +306735,1,,,1/7/2016 8:33,,1,85,"

We're looking to add a personal pages generator for our users, which is simple enough while all of them are on our domain. We also want to enable them to purchase their own domains through us, and serve these pages on their domains.

+ +

We currently use Google App Engine for all of our deployment and server needs, and it has been pretty great so far. Now the problem is that I can't really point a domain to a single page handler on App Engine, so I'll have to resort to an external solution.

+ +

We can easily handle the pages generation, the main issue I'm facing is serving a different dynamic page for different domains from a single server.

+ +

What architecture do you suggest? What technologies/platforms should I research? +(We're looking at significant user traffic when this thing is up.)

+",119770,,119770,,42376.39028,42379.91389,Personal pages domains architecture,,1,2,,,,CC BY-SA 3.0,, +306739,1,306746,,1/7/2016 9:17,,-1,716,"

I have a library where I want to use a logger, but I don't have (and cannot create) any interface for it because I'll be using it on different systems that have nothing in common (and won't have). So this won't work:

+ +
public void Foo(ILogger logger, ... other params)
+{
+    logger.Info(...);
+}
+
+ +

To overcome this problem and to make it possible to use any logging framework a particular system is using I came up with this solution:

+ +

I created a class that would provide delegates for the logging methods:

+ +
public class LoggerProxy
+{
+    private Action<string> _info;
+
+    public LoggerProxy(Action<string> info)
+    {
+        _info = info;
+    }
+
+    public void Info(string message)
+    {
+        if (_info != null) _info(message);
+    }
+}
+
+ +

And would use it like this:

+ +
static void Main()
+{
+    var logger = new LoggerProxy(SomeLoggingSystem.Info);
+    Foo(logger, ...);
+}
+
+public void Foo(LoggerProxy logger, ... other params)
+{
+    logger.Info(...);
+}
+
+ +

I call it proxy but I'm just curious if it is really a proxy or maybe an adapter or still something else. I read the answers to How do the Proxy, Decorator, Adapter, and Bridge Patterns differ? but I cannot say that I can for sure say that any of those names is fine here.
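For comparison, the same delegate-based wiring can be shown in a short language-agnostic sketch (Python, hypothetical names): the wrapper holds plain callables instead of depending on any logger interface, and silently drops calls when no delegate is wired up.

```python
# Sketch of the delegate-holding wrapper: no interface dependency, just a
# callable that is invoked if present (hypothetical names).
class LoggerProxy:
    def __init__(self, info=None):
        self._info = info

    def info(self, message):
        if self._info is not None:
            self._info(message)

messages = []
logger = LoggerProxy(info=messages.append)
logger.info('hello')                     # forwarded to messages.append
LoggerProxy().info('dropped silently')   # no delegate wired up, no error
```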

+",160257,,-1,,42878.52778,42376.57222,Which design pattern is it if any for providing a general logger interface?,,1,5,,,,CC BY-SA 3.0,, +306747,1,,,1/7/2016 11:21,,4,255,"

I'm an amateur programmer and I'm not used to working with Git. It's my job to help test new code; for a while now this has included pulling the branch containing the issue, merging it with the main branch, and pushing it to the test environment.

+ +

There are a lot of steps involved, with an incredible number of different options that are of no use to me. Since I always pull from and push to the same two servers, all I'm really doing differently every time is selecting the branch for the issue that I'm testing.

+ +

My question basically is: Which practice is considered state-of-the-art to make this easier? I can't be the only tester with this problem, what are some solutions to this? Should I get a command line version of git running and see if I can script the basic steps?

+ +

Note: step three in this post seems to be roughly what I'm looking for but there is no elaboration on how the asker plans to do this.

+",210127,,-1,,42837.31319,42633.34028,Is there a simpler way to help testers push code to a testing branch?,,2,6,1,,,CC BY-SA 3.0,, +306749,1,306755,,1/7/2016 12:20,,5,16399,"

In the past few years, we've been using dynamically typed versions of PHP. However, in PHP7 we have an option to enable 'strict types':

+ + + +

(Note: Scalar Type Declarations repeatedly uses the phrase ""weak type checking"" or ""strictly typed"" versus static/dynamic.)

+ +

Is it right to say that PHP7 is a statically typed language? Either way, why?

+",210135,,17429,,42376.6,42376.60278,Is PHP7 a static or dynamic typed language?,,1,13,1,42377.78333,,CC BY-SA 3.0,, +306767,1,,,1/7/2016 16:36,,-1,122,"

In C#, when you call a constructor, you can add one or more property initializers in curly braces:

+ +
var foo = new Bar() { Armpit = new Flapdoodle() { Limpet = 2 } };
+
+ +

What if that feature were generalized to ""set properties on a return value"":

+ +
var baz = foo.Armpit { Limpet = 4 };
+
+ +

This would be handy in cases like this:

+ +
var things = coll
+    .Select(x => {
+        var a = CreateA(x);
+        a.Parent = this;
+        return a;
+    });
+
+ +

That's a common pattern in projecting collections via LINQ, but you see it elsewhere too: You have to declare a local variable and add some noise just to set one property on a return value before you pass it along to something else.

+ +

You write

+ +
f2(f1());
+
+ +

...but then whoops, something needs to change, so it becomes three lines:

+ +
A a = f1();
+a.B = ""c"";
+f2(a);
+
+ +

It seems like it might be nicer to be able to express the same code as follows:

+ +
var things = coll.Select(x => CreateA(x) { Parent = this } );
+
+f2(f1() { B = ""c"" });
+
+ +

My question is this: Is this merely not useful enough to bother with, or is it actively a bad idea for some reason that's obvious to the C# team but not to me?
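As an aside, the same ergonomics can be approximated today with a small helper; here is a hypothetical sketch in Python (the helper name and class are made up for illustration), which sets attributes on a value and hands it back so it can be used inline.

```python
# Hypothetical helper approximating the proposed syntax: set attributes on a
# value and return it, so it can be used inline without a named temporary.
def with_props(obj, **props):
    for name, value in props.items():
        setattr(obj, name, value)
    return obj

class A:
    def __init__(self):
        self.b = ''

a = with_props(A(), b='c')   # roughly the proposed f1() { B = "c" }
```

A generic extension method in C# could play the same role, at the cost of slightly noisier call sites than dedicated syntax would give.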

+",127387,,-1,,42878.52778,42376.71597,Hypothetical extension to C# property initializer syntax,,2,0,,,,CC BY-SA 3.0,, +306774,1,306782,,1/7/2016 18:20,,6,9352,"

The problem

+ +

I have this structure that I want to create a ""constructor"" for.

+ +
struct example {
+    int x, y, z; /* various members */
+    struct another *another; /* pointer to another structure */
+};
+
+ +

The two different ways I know of

+ +
    +
  1. Using functions that create and delete structures on heap
  2. +
+ +

I can create structures by making functions that allocates them on the heap like this:

+ +
/* create example */
+struct example *example_new(int x, int y, int z) {
+    struct example *p = malloc(sizeof *p);
+    p->x = x;
+    p->y = y;
+    p->z = z;
+    p->another = another_new();
+    return p;
+}
+
+/* delete example */
+void example_del(struct example *p) {
+    another_del(p->another);
+    free(p);
+}
+
+ +
    +
  1. Using functions that initializes and frees memory for structure through pointer
  2. +
+ +

Or I could initialize the structure by passing a pointer to the function and have the user be responsible for allocating and deallocating memory for it.

+ +
/* initialize example */
+void example_init(struct example *p, int x, int y, int z) {
+    assert(p);
+    p->x = x;
+    p->y = y;
+    p->z = z;
+    p->another = another_new();
+}
+
+/* free memory allocated for example */
+void example_free(struct example *p) {
+    another_del(p->another);
+}
+
+ +

My thoughts

+ +

I usually prefer the first approach when creating recursive structures (like trees and linked lists) but use the second approach in all other cases.

+ +

I tried to use the second approach for a recursive structure and it turned out to be quite messy.

+ +

The question

+ +

How do you choose between these two ways? Is there a third way you would use to solve this problem?

+",154627,,,user53019,42376.79861,42376.79861,"How to create ""constructors"" for structures in C",,3,1,2,,,CC BY-SA 3.0,, +306780,1,306799,,1/7/2016 18:46,,1,1506,"

One of the things the coffeescript programming language is criticized for is its treatment of variable declarations and scope (example).

+ +

The answers to this question (and the blog I linked to above) seem to hinge on the false dichotomy of variables either being able to be shadowed or else being global. In any language with reasonable scoping rules, this conceit seems (to me) to be a non sequitur. In the aforementioned coffeescript:

+ +
foo = 5
+f = () ->
+  foo = 12 #foo in the outer scope is closed-over
+  bar = 3  #binds a name in f's scope
+  null
+
+f()
+foo #12
+bar #undefined because it was bound in an inner scope
+
+ +

So clearly, bar is not global. If the snippet above were itself inside a non-global scope then foo would not be either. It seems to me that shadowing a variable from an outer scope in a language with lexical closure could only possibly create confusion. So why does coffeescript get slammed for removing the possibility? Is there an important use case here I'm missing? Regardless of how one may feel about the rest of coffeescript, this behavior seems desirable to me.

+",176474,,-1,,42837.31319,42376.92361,What is the use case for shadowing variables?,,2,5,,,,CC BY-SA 3.0,, +306781,1,306785,,1/7/2016 19:02,,1,745,"

I have spent quite some time learning all possible design patterns but I cannot find the ideal one for the following case. I am developing an iOS app where we are using multiple analytics tools like Google Analytics/Facebook Analytics and it is possible that in the future we will add some more (our company is obsessed with measuring :D).

+ +

Each of them uses its own custom SDK to log an event, each with its own method to send an event and its own parameters. I would love to refactor this and create a module where adding a new analytics tool wouldn't lead to code smells, and in the best-case scenario there would be a central place which would dispatch the events all at once.

+ +

I was thinking about this:

+ +
    +
  1. Defining a protocol AnalyticsProtocol which requires implementation of sendEvent method
  2. +
  3. Defining AnalyticsEvent class for every tool (FacebookAnalyticsEvent, GoogleAnalyticsEvent) which conforms to the protocol above and implements sendEvent method. Each of these classes would have its own init method.
  4. +
  5. Central dispatch class which takes an array of id<AnalyticsProtocol> objects and calls sendEvent on every single one of them.
  6. +
+ +

Do you have any better ideas? Thanks.

+",210185,,,,,42377.63611,Which design pattern to choose when supporting multiple analytic tools?,,2,2,1,,,CC BY-SA 3.0,, +306784,1,306788,,1/7/2016 19:28,,0,217,"

I have two objects as member variables of a class.

+ +
std::unique_ptr<Object> specificObject;
+std::vector<std::unique_ptr<Object>> objects;
+
+ +

I know that specificObject will always be within objects. How can I point to specificObject without using shared_ptrs as I do not want the ownership of the objects in the container to be considered shared? Is this a case where a raw pointer can be used, or is shared_ptr really the solution?

+",166667,,,,,42376.825,Containers and shared ownership within a class instance,,1,0,,,,CC BY-SA 3.0,, +306789,1,309104,,1/7/2016 19:56,,2,154,"

I'm working on a Scala data-processing program. Essentially we start with a collection of many small data objects, say, (eventId: String, basicInfo: Basic) and gradually filter out some objects and add more information by joining the original collections with other data sources (typically on the eventId field), or by computing new fields as functions of existing fields.

+ +

Now, I'm having a certain amount of trouble designing this properly. It's rather ad hoc now - the original data item is modeled as case class FirstStage(eventId: String, basicInfo: Basic) and subsequent stages are modeled compositionally: case class SecondStage(first: FirstStage, someOtherId: Long), and so on. This results in rather ugly chains of accessors (e.g. third.second.first.basicInfo) and makes the code very difficult to modify. For example, we might want to change some function persist:ThirdStage => to operate instead or additionally on SecondStage, but the lack of polymorphism in the design makes that pretty hard.

+ +

One idea might be to start out with the terminal stage, initially set all its fields to null, and gradually fill them out, but it seems as if it would be difficult to keep track of what happens when. Or, we could ditch the case classes in order to have SecondStage extend FirstStage and so on, but I think it's probable that the program will lose its linearity in the next few months, which will make the inheritance approach rather clunky.
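The first idea, one flat record whose later fields start out unset and get filled in stage by stage, can be kept honest by making the record immutable and having each stage return an updated copy. Here is a hedged sketch in Python with dataclasses (hypothetical field names); Scala case classes give you the same shape via .copy(...).

```python
from dataclasses import dataclass, replace
from typing import Optional

# Sketch of the 'one flat record, later fields optional' alternative
# (hypothetical field names): every stage returns a fresh copy with more
# fields filled in, so no stage mutates what an earlier stage produced.
@dataclass(frozen=True)
class Event:
    event_id: str
    basic_info: str
    other_id: Optional[int] = None

first = Event('e1', 'info')            # after the first stage
second = replace(first, other_id=42)   # the second stage fills in more data
```

This keeps accessors flat (second.basic_info rather than second.first.basicInfo) at the cost of Optional fields, which document which stages may not have run yet.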

+",210188,,,user22815,42388.97778,42402.93056,How to represent data objects gradually getting augmented in a pipeline,,1,2,,42402.93056,,CC BY-SA 3.0,, +306791,1,306806,,1/7/2016 20:13,,29,3049,"

The question above is an abstract example of a common problem I encounter in legacy code, or more accurately, problems resulting from previous attempts at solving this problem.

+ +

I can think of at least one .NET framework method that is intended to address this problem, like the Enumerable.OfType<T> method. But the fact that you ultimately end up interrogating an object's type at runtime doesn't sit right with me.

+ +

Beyond asking each horse ""Are you a unicorn?"" the following approaches also come to mind:

+ +
    +
  • Throw an exception when an attempt is made to get the length of a non-unicorn's horn (exposes functionality not appropriate for each horse)
  • +
  • Return a default or magic value for the length of a non-unicorn's horn (requires default checks peppered throughout any code that wants to crunch horn stats on a group of horses that could all be non-unicorns)
  • +
  • Do away with inheritance and create a separate object on a horse that tells you if the horse is a unicorn or not (which is potentially pushing the same problem down a layer)
  • +
+ +
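For reference, the Enumerable.OfType<T>-style approach mentioned earlier, filter by runtime type, then aggregate, looks like this in a Python sketch (the classes and field name are hypothetical):

```python
from statistics import mean

# Sketch of the OfType<T> approach: interrogate the runtime type once, at the
# boundary of the aggregation, instead of exposing horn length on every horse.
class Horse:
    pass

class Unicorn(Horse):
    def __init__(self, horn_length):
        self.horn_length = horn_length

def average_horn_length(herd):
    horns = [h.horn_length for h in herd if isinstance(h, Unicorn)]
    return mean(horns) if horns else None

herd = [Horse(), Unicorn(10.0), Unicorn(14.0)]
```

Returning None (rather than a magic value) for a herd with no unicorns pushes the "no data" case to the caller explicitly, which is one way to avoid the default-value checks listed above.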

I have a feeling this is going to be best answered with a ""non-answer."" But how do you approach this problem and if it depends, what's the context around your decision?

+ +

I'd also be interested in any insights on whether this problem still exists in functional code (or maybe it only exists in functional languages that support mutability?)

+ +

This was flagged as a possible duplicate of the following question: +How to avoid downcasting?

+ +

The answer to that question assumes that one is in possession of a HornMeasurer by which all horn measurements must be made. But that's quite an imposition on a codebase that was formed under the egalitarian principle that everyone should be free to measure the horn of a horse.

+ +

Absent a HornMeasurer, the accepted answer's approach mirrors the exception-based approach listed above.

+ +

There's also been some confusion in the comments on whether horses and unicorns are both equines, or if a unicorn is a magical subspecies of horse. Both possibilities should be considered--perhaps one is preferable to the other?

+",161165,,-1,,42837.31319,42381.46875,"Given a herd of horses, how do I find the average horn length of all unicorns?",,10,12,6,,,CC BY-SA 3.0,, +306792,1,,,1/7/2016 20:20,,-1,132,"

I am developing a feature related to Microsoft Outlook which has support only for Windows library.

+ +

However, I have to put this into a web page for the users. In the process of consuming the windows application into a web page, I have written a Windows application using C# and integrated it with a WPF solution and trying to package this as an ActiveX.

+ +

This would require the users to download the ActiveX on their local machines which I would like to avoid. Is there a better way to achieve this (Windows app to web)?

+",210198,,31260,,42377.32431,42377.32431,.net Windows app in a web page,<.net>,1,4,,42387.66875,,CC BY-SA 3.0,, +306801,1,306803,,1/7/2016 21:35,,64,11015,"

Today I had an argument with someone.

+ +

I was explaining the benefits of having a rich domain model as opposed to an anemic domain model. And I demoed my point with a simple class looking like that:

+ +
public class Employee
+{
+    public Employee(string firstName, string lastName)
+    {
+        FirstName = firstName;
+        LastName = lastName;
+    }
+
+    public string FirstName { get; private set; }
+    public string LastName { get; private set;}
+    public int CountPaidDaysOffGranted { get; private set;}
+
+    public void AddPaidDaysOffGranted(int numberOfdays)
+    {
+        // Do stuff
+    }
+}
+
+ +

As he defended his anemic model approach, one of his arguments was: ""I am a believer in SOLID. You are violating the single responsibility principle (SRP) as you are both representing data and performing logic in the same class.""

+ +

I found this claim really surprising, as following this reasoning, any class having one property and one method violates the SRP, and therefore OOP in general is not SOLID, and functional programming is the only way to heaven.

+ +

I decided not to reply to his many arguments, but I am curious what the community thinks on this question.

+ +

If I had replied, I would have started by pointing to the paradox mentioned above, and then indicate that the SRP is highly dependent on the level of granularity you want to consider and that if you take it far enough, any class containing more than one property or one method violates it.

+ +

What would you have said?

+ +

Update: The example has been generously updated by guntbert to make the method more realistic and help us focus on the underlying discussion.

+",143959,,143959,,42378.75417,43135.92708,Does this class design violate the single responsibility principle?,,11,18,13,,,CC BY-SA 3.0,, +306818,1,,,1/8/2016 0:19,,1,78,"

I would like to discuss a question about best practices regarding exception handling (e.g. in Java). Normally, when setting the attributes of a class, I check the arguments in the setters for validity, and if the arguments are not valid, I throw an IllegalArgumentException.

+ +

Besides the setters, sometimes the constructor of the class also allows setting values for some of the attributes. So I also have to check the arguments in the constructor for validity. This, however, would lead to duplicate code. I can only think of one solution to this problem: instead of directly setting the value of the attribute in the constructor, I call the setter, which then also validates the given value.

+ +

However, as the setters are public, this has the problem that I would be calling an overridable method from within the constructor, which is bad. To solve this problem in turn, I could make the setters final, but I have never seen this done.

+ +

Another possibility I could think of, theoretically, is to have one or more private methods which check the arguments for validity and which are called by both the constructors and setters. But this would lead to even more code. What do you think are the best practices for this problem?
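That last option (a shared private validator) is quite compact in practice; here is a hedged sketch in Python (hypothetical class, the same shape works in Java with a private static check method):

```python
# Sketch of the shared-validator option: the rule lives in exactly one place,
# and both the constructor and the setter call it (hypothetical class).
class Account:
    def __init__(self, balance):
        self._balance = self._checked(balance)

    @staticmethod
    def _checked(value):
        if value < 0:
            raise ValueError('balance must be non-negative')
        return value

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        self._balance = self._checked(value)
```

Because the validator is private and static, the constructor never calls an overridable method, sidestepping the final-setter issue entirely.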

+",210220,,31260,,42377.01667,42377.02361,Exception Handling for class attributes in setters and constructors,,2,0,,42387.66597,,CC BY-SA 3.0,, +306821,1,,,1/8/2016 0:50,,-1,1680,"

For example, you are given the source 1234 and the target 24. The task is to use the standard arithmetic operators +-/* placed within source, without changing the order of the digits of source to create target. In this example the solutions are:

+ +
1 * 2 * 3 * 4 = 24 = target
+12 + 3 * 4 = 24 = target
+
+ +

Another example:

+ +
target=346
+source=8675309
+# solution
+867 - 530 + 9 = 346
+
+ +

My question is what data structures and algorithms I should be thinking about for this problem. I've written a solution to this using Python, but it is terribly naive and brute forces the problem. It creates all possible splits in source, and then creates all possible expressions by interleaving every combination of +-/* within the splits of source.

+ +

It takes a LONG time to find this solution:

+ +
source=314159265358 
+target=27182    
+# solution
+3141 * 5 / 9 * 26 / 5 * 3 - 5 * 8
+
+ +

I believe the complexity is O(2^n), where n is the number of digits in source. After each digit you can split or not.

+ +

Code below (note it errors if source has a 0):

+ +
#!/usr/bin/env python
+
+from __future__ import print_function
+from itertools import combinations
+from itertools import product
+from sys import argv
+
+def split_number(s):
+    """""" takes input string number and splits it into strings of separate digits """"""
+    return [_ for _ in s]
+
+def partition_list(aList, indices):
+    """""" split a list into parts, splitting at the indices """"""
+    indices = list(indices)
+    return [aList[a:b] for a, b in zip([0] + indices, indices+[None])]
+
+def create_index_splits(list_len, n):
+    """""" returns a list of all places to create splits to create n splits in a
+    list of len list_len """"""
+    return list(combinations(range(1, list_len), n))
+
+def generate_expression(l, operators):
+    """""" takes a list of list l and a sequence of operators and creates an
+    expression of the intreleaved list and operators """"""
+    expression = [item for pair in zip(l, operators) for item in pair]
+
+    # interleave operands and operators
+    return ' '.join(expression + [l[-1]])
+
+
+def generate_expressions(l):
+    """"""generate all possible combinations of digits splits and operators """"""
+    l = convert_list_ints_to_list_str(l)
+    operators = '+/*-'
+
+    # cartesian product of operators
+    operator_combinations = product(operators, repeat=len(l) - 1)
+    return [generate_expression(l, _) for _ in operator_combinations]
+
+
+def convert_list_ints_to_list_str(l):
+    """"""converst list of lists of digits to list of numbers """"""
+    return [''.join(map(str, _)) for _ in l]
+
+
+def find_solutions(a, b):
+    l = split_number(a)
+    index_splits = [create_index_splits(len(l), _) for _ in range(1, len(l))]
+    index_splits = [i for j in index_splits for i in j]
+    m = [partition_list(l, _) for _ in index_splits]
+    b = int(b)
+    expressions = list([generate_expressions(_) for _ in m])
+    expressions = [i for j in expressions for i in j]
+
+    return [_ for _ in expressions if eval(_) == b]
+
+
+def main():
+    try:
+        a = argv[1]
+        b = argv[2]
+    except (ValueError, IndexError):
+        print(""invalid args"")
+
+    [print(_) for _ in find_solutions(a, b)]
+
+if __name__ == '__main__':
+    main()
+
+ +

To run it do python thisfile.py [source] [target]

+",210224,,,,,42407.16319,Create arithmetic expression from number using +-/* that equals target,,1,0,,42409.20972,,CC BY-SA 3.0,, +306836,1,,,1/8/2016 4:38,,7,1101,"

I have to generate a code that will be sent through SMS or email to implement the One Time Password (OTP) requirement of our client.

+ +

I just finished creating the design using the strategy pattern (UML diagrams omitted here).

+ +

This is the first time I've used the strategy pattern in real-life projects, so I would like to ask if I'm doing it right.

+ +

Below is just sample code that will be used by the client.

+ +
 //OTPGeneratorStrategy generatorStrategy = new EmailGeneratorStrategy(""test@gmail.com""); for sending to Email for example
+OTPGeneratorStrategy generatorStrategy = new SmsGeneratorStrategy(993454454545L);
+OTPGeneratorContext generatorContext = new OTPGeneratorContext(generatorStrategy);
+generatorContext.generate(""1-12345-0"");
+
+OTPValidatorStrategy validatorStrategy = new SmsValidatorStrategy();
+OTPValidatorContext validatorContext = new OTPValidatorContext(validatorStrategy);
+try {
+    validatorContext.validate(""1-12345-0"", ""e7241f"");
+} catch (InvalidVerificationKeyException ivk) {
+    System.out.println(""Verification key is invalid!"");
+} catch (VerificationKeyExpiredException vke) {
+    System.out.println(""Verification key is already expired, please regenerate"");
+}
+
+ +

Updated:

+ +

Thank you for the feedback. Actually, I had not been able to update my latest UML design; I have already separated the generation and validation.

+ +

And the reason why I've used Exceptions rather than enums is that the project was built on Java 1.4.

+",206866,,206866,,42377.33194,42378.31389,Strategy Pattern Implementation,,2,1,2,,,CC BY-SA 3.0,, +306837,1,306840,,1/8/2016 5:09,,5,314,"

I have a script that throws exceptions when something goes wrong. However, for the purposes of testing I also want to capture specific points, although I'm not sure whether they would be deemed errors or not.

+ +

Below is an example:

+ +
if ($model->fetchCurrentlyProcessing() > 0) {
+    throw new App_Exception_CurrentProcess('There is a currently process running. Stopping process.');
+}
+
+if ($model->fetchQueuedItems() == 0) {
+    throw new App_Exception_NothingInQueue('Nothing in queue. Stopping process.');
+}
+
+ +

In my tests I want to mock the response from these $model methods and detect which exception was thrown, and whether it was the expected exception or not. By throwing exceptions in these cases I can identify exactly at what point the script stopped. But I'm not sure if this is the correct usage of Exceptions, as these, to be precise, possibly aren't errors: nothing actually went wrong.

+",142254,,,,,42377.31667,Are exceptions only for handling errors?,,3,0,1,,,CC BY-SA 3.0,, +306845,1,306848,,1/8/2016 7:37,,1,278,"

Suppose I have an array of n numbers. The rules for the array is as below:

+ +
    +
  1. All the numbers must add up to 1000
  2. +
  3. If a number changes (positive or negative), all the other numbers in the array must be adjusted so that the sum of the array is still 1000
  4. +
  5. The hard part - all numbers must be greater than or equal to 0.
  6. +
+ +

Conditions 1 and 2 can easily be handled by getting the delta of the change and spreading it among the other numbers. The trick is condition 3.

+ +

In case you're wondering about the application: the numbers are percentages to be fed into a pie chart; the user can adjust each pie segment with a slider. As it is a pie chart using percentages, all the sliders must add up to 100% - meaning if the user drags a slider to reduce the size of one pie slice, all the other pie slices should increase. No slice of the pie should be less than 0 or more than 100.
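One common way to satisfy all three conditions is to clamp the changed value and then scale the other entries in proportion to their current sizes; proportional scaling of non-negative values can never go below zero. A hedged sketch (Python, hypothetical function name, assumes at least two entries and ignores rounding):

```python
# Sketch: set values[index] to new_value (clamped into 0..total), then spread
# the remaining budget over the other entries in proportion to their current
# sizes, so the sum stays at total and nothing goes negative.
def rebalance(values, index, new_value, total=1000):
    new_value = max(0, min(total, new_value))
    others = [v for i, v in enumerate(values) if i != index]
    remainder = total - new_value
    old_sum = sum(others)
    if old_sum == 0:
        scaled = [remainder / len(others)] * len(others)
    else:
        scaled = [v * remainder / old_sum for v in others]
    scaled.insert(index, new_value)
    return scaled
```

For integer percentages you would additionally need to round and dump the leftover into one entry so the total stays exact.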

+",10123,,10123,,42377.33889,42377.69583,"How do I have a group of numbers which total does not exceed a limit, and each number must be >= 0?",,2,5,,,,CC BY-SA 3.0,, +306846,1,306847,,1/8/2016 7:39,,19,28943,"

What is ES6? Is it JavaScript, or something that supports multiple languages?

+ +

I searched for it but can't understand it, especially the page on Wikipedia. Is it better than JavaScript? And what can I do in my web development using this language?

+ +

So what is ES6, and how can I use it in developing my web app?

+",209605,,209605,,42377.32361,42719.60278,Difference between ES6 and JavaScript,,1,1,8,42408.59375,,CC BY-SA 3.0,,

I'm finding it difficult to understand Microsoft's motive behind building conceptual ""projects"" like this. They ship MSBuild.exe and NMAKE.exe together with a Visual Studio 2015 install but they both appear to serve the same purpose.

+ +

If they are both (in a practical sense) equivalent but using different formats, I won't bother learning how to write both. If they both have their own purposes, I might reconsider.

+ +

The following text is just my current theory on the whole thing... but I asked this question because there could be people here with experience customizing these files unlike myself...

+ +

It seems like MSBuild is made for those that depend heavily on the Visual Studio environment (.sln files) for compilation. And upon reading a little bit about it, I asked myself: Couldn't compilation of dependencies be separated by different projects allowing for partial compilation when using MSBuild? That would involve having multiple projects in a solution and then building the solution to build the entire product.

+ +

As you can see... I'm trying to understand the philosophy of MSBuild and why it's needed in the build process. At the moment, I'm starting to wonder if it's actually based on NMAKE and compiles down into NMAKE commands. That would make a whole lot of sense because I cannot see why anyone would need both in the same build process.

+",144981,,,,,42377.45903,Do MSBuild project files serve the same purpose as NMAKE makefiles in a build process? (practically equivalent),,1,1,,,,CC BY-SA 3.0,, +306856,1,,,1/8/2016 10:02,,4,1738,"

As part of an algorithm I'm making, I need a function that takes the current date and gives out a number between 1 and 6, never the same number for two consecutive dates. It should also always return the same number for a given date.

+ +

A simple solution would be to make any Monday 1, Tuesday 2, Wednesday 3, etc, but since we only have 6 values just give Sunday 3 again. And then next Monday 1.

+ +

The problem with this algorithm is that it's really predictable. A good algorithm should be harder to guess for others.

+ +

Any tips on how to approach this?
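One possible sketch, and only one of many designs: derive a per-date step in 1..5 from a cryptographic hash and accumulate it modulo 6 from a fixed epoch. Adding a step of 1..5 modulo 6 can never map a value onto itself, so consecutive dates always differ, and the hash makes the sequence hard to guess without knowing the scheme. The epoch and hash choice here are arbitrary assumptions, and only dates on or after the epoch are supported:

```python
import hashlib
from datetime import date, timedelta

EPOCH = date(2015, 1, 1)  # arbitrary fixed starting point

def _step(d):
    # Stable per-date step in 1..5, taken from a cryptographic hash so
    # the sequence is hard to predict without knowing the scheme.
    digest = hashlib.sha256(d.isoformat().encode()).digest()
    return digest[0] % 5 + 1

def value_for(d):
    # Deterministic value in 1..6; consecutive dates can never collide,
    # because adding a step of 1..5 modulo 6 never maps a value onto itself.
    v = 1
    day = EPOCH
    while day < d:
        day += timedelta(days=1)
        v = (v - 1 + _step(day)) % 6 + 1
    return v
```

The loop from the epoch is O(days) per call; if that matters, the running value could be cached per date instead.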

+",183475,,31260,,42377.42917,42412.56528,"How to design a function that takes a date and gives out a number between 1-6, always the same for all dates",,4,12,1,,,CC BY-SA 3.0,, +306862,1,,,1/8/2016 11:30,,0,546,"

I heard that AngularJS is becoming quite popular, and that ReactJS is as well. But if AngularJS already takes care of MVC with two-way binding between model and view, would ReactJS actually be needed?

+ +

I thought ReactJS is binding model to view, but not binding view to model (say, if it is a form text input field, if the value changes, it won't go to the model?) In fact, ReactJS seems to do what CanJS was doing, by reflecting what the model is by building a template, and whenever the model changes, the view is automatically updated -- there is no need to do anything at all. But doesn't AngularJS already do that?

+",5487,,,,,42990.37986,"If AngularJS already takes care of the MVC, would using ReactJS be needed if it is just the View part?",,1,0,,,,CC BY-SA 3.0,, +306871,1,306879,,1/8/2016 13:41,,0,208,"

E.g. in a domain model with two aggregate roots:

+ +
    +
  • book
  • +
  • author
  • +
+ +

Is there a specific term that refers to a collection of aggregates of the same root type (e.g. a collection of books)?

+ +

In Confusion about the meaning of the word aggregate in domain driven design I read the term aggregate type, which might or might not fit, but I haven't found a direct mention in Evans' blue book.

+",163879,,-1,,42837.31319,42378.54375,What is a Collection of Aggregates Referred to in Domain Driven Design?,,1,5,,,,CC BY-SA 3.0,, +306872,1,,,1/8/2016 14:53,,4,287,"

I'm reading this tutorial on Perlin noise: +http://www.angelcode.com/dev/perlin/perlin.html +which seems to be the clearest one but still not perfect. A lot of details are skipped and a lot of code unexplained.

+ +

My general question is, why do we need vectors for Perlin noise (instead of just noise values at specific coordinates), why do they have to be unit vectors and how do we combine them with the given input point coordinates?

+ +

Also, the article gives this piece of code as the vector calculation, which looks like it is trying to find the square cell coordinates (except that 1 is subtracted instead of added, for some reason):

+ +
// Computing vectors from the four points to the input point
+float tx0 = x - floorf(x);
+float tx1 = tx0 - 1;
+
+float ty0 = y - floorf(y);
+float ty1 = ty0 - 1;
+
+ +

This doesn't look like any vector operation.
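For what it's worth, in gradient (Perlin) noise each lattice corner holds a pseudo-random unit gradient, and a corner's contribution is the dot product of that gradient with the offset vector from the corner to the input point. That reading explains the subtraction: tx1 = x - (floor(x) + 1) = tx0 - 1 is the x-component of the offset measured from the right-hand corners. A small Python sketch of that interpretation of the snippet (names are mine):

```python
import math

def corner_offsets(x, y):
    # Offset vectors from the four surrounding lattice corners to (x, y).
    # tx1 = tx0 - 1 because the right-hand corners sit at floor(x) + 1:
    # x - (floor(x) + 1) == tx0 - 1.
    tx0 = x - math.floor(x)
    tx1 = tx0 - 1
    ty0 = y - math.floor(y)
    ty1 = ty0 - 1
    return [(tx0, ty0), (tx1, ty0), (tx0, ty1), (tx1, ty1)]

def corner_contribution(gradient, offset):
    # Each corner's pseudo-random unit gradient is combined with its
    # offset vector by a dot product; the four results are then
    # interpolated to produce the final noise value.
    return gradient[0] * offset[0] + gradient[1] * offset[1]

print(corner_offsets(1.25, 2.5))  # [(0.25, 0.5), (-0.75, 0.5), (0.25, -0.5), (-0.75, -0.5)]
```

Unit-length gradients keep the contribution magnitudes comparable across corners, which keeps the noise range consistent.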

+",148122,,,,,43609.89306,"In Perlin noise, why need vectors and how to use them exactly?",,2,6,,,,CC BY-SA 3.0,, +306878,1,,,1/8/2016 16:11,,6,985,"

This is a pretty general question that I am asking.

+ +

Scenario 1

+ +

I have a product, 'Pen', which has a quantity of 1. Now two users, a and b, have come to buy the product. User 'a' clicks on buy now and starts the payment processing; once the payment is completed, the order for the product will be created, I will reduce the quantity to zero, and user b will see the product as 'Out of stock'. This is a more macroscopic explanation.

+ +

Scenario 2

+ +

User a is making a payment, and in between, user b also clicks on pay and proceeds with the payment process. User a completes the order and the quantity is reduced to zero. User b completes the payment and the order fails!

+ +

How is this problem managed by e-commerce sites? Is it that when a user proceeds to payment for a product, the quantity is reduced to zero, and if that order fails (due to a payment failure), the quantity is incremented back so that another user can buy it?

+ +

How is this condition managed?
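One common scheme is exactly the reserve-and-release idea described above: atomically decrement the stock when checkout starts, and put the unit back if payment fails. A toy Python sketch of the idea, not how any particular shop actually implements it (real systems do the same thing with a database transaction or a conditional UPDATE):

```python
import threading

class Inventory:
    # Toy sketch of stock reservation for a single product.
    def __init__(self, quantity):
        self.quantity = quantity
        self._lock = threading.Lock()

    def reserve(self):
        # Atomic check-and-decrement: only one of two racing buyers wins.
        with self._lock:
            if self.quantity > 0:
                self.quantity -= 1
                return True
            return False

    def release(self):
        # Payment failed: put the reserved unit back on sale.
        with self._lock:
            self.quantity += 1

inv = Inventory(1)
a_ok = inv.reserve()  # user a starts checkout
b_ok = inv.reserve()  # user b is told the item is unavailable
inv.release()         # a's payment failed, stock returns to 1
```

The key point is that the check and the decrement happen as one atomic step, so scenario 2 cannot occur.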

+",210315,,,,,42377.69861,How does an product order payment works during a race condition?,,1,1,1,,,CC BY-SA 3.0,, +306880,1,306950,,1/8/2016 16:34,,1,1440,"

I have been programming professionally for a couple of decades. Some years ago, the word ""framework"" was not as widely used as it is today, but I still believe we had been using ""frameworks"" for a very long time before they were called that.

+ +

For example, the Java SDK has parts that can be considered frameworks; like for example the Collections framework. Also J2EE gives some structure, classes, interfaces and directives for building web solutions. Isn't that a framework? Well, Wikipedia calls it a platform.

+ +

Usually throughout the years, one builds up a plethora of interfaces and classes that make up a common practice, allowing you to build apps in less time than before by leveraging your previous achievements. Isn't that body of structures, steps, analysis and design products (interfaces to implement and abstract classes to extend) a framework?

+ +
    +
  • Is there a canonical, and quotable, hard line separating what a framework is and what a comprehensive set of APIs or a SDK is?
  • +
  • When was the term framework introduced in computer science?
  • +
+ +

Note: this other question is about API vs SDK, so it's not a duplicate.

+ +

Wikipedia only says this:

+ +
+

""An architecture framework establishes a common practice for creating, + interpreting, analyzing and using architecture descriptions within a + particular domain of application or stakeholder community.""

+
+",61852,,-1,,42837.31319,42555.55694,Blurred line between a framework and an SDK,,2,2,,,,CC BY-SA 3.0,, +306881,1,306885,,1/8/2016 16:36,,3,7535,"

In PHP I have this if statement ( $first and $second will evaluate to true or false):

+ +
if ($first && $second) {
+    // evereything is OK
+} else {
+    throw new Exception()...
+}
+
+ +

My real code is much more complicated, I am trying to make simple example here.

+ +

I want to turn this if/else into one if with negation like this:

+ +
if (!($first && $second)){
+    throw new Exception()...
+}
+
+// everything is OK
+
+ +

As you can see in this example, I've put the ! negation sign in front of the parentheses. Is this correct? Do I need to negate each condition itself, like this:

+ +

if (!$first && !$second)

+ +

Or I should use || operator:

+ +

if (!$first || !$second) // I am using OR here

+ +

I am not sure how these conditions are going to evaluate in the end, and I am confused by my dummy testing results. I really hope that someone can explain to me how all these checks evaluate.
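For the record, De Morgan's laws say that !($first && $second) is equivalent to !$first || !$second, not to !$first && !$second. A quick exhaustive check over all four input combinations, sketched in Python rather than PHP:

```python
from itertools import product

for first, second in product([True, False], repeat=2):
    negated_whole = not (first and second)           # !($first && $second)
    or_of_negations = (not first) or (not second)    # !$first || !$second
    and_of_negations = (not first) and (not second)  # !$first && !$second
    assert negated_whole == or_of_negations          # equal for every input
    if first != second:
        # the && version disagrees whenever exactly one flag is false
        assert negated_whole != and_of_negations
```

So the single-if version should read: throw when !($first && $second), or equivalently when !$first || !$second.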

+ +

Thanks to everyone who answered my question. Due to my low rep, I can not up-vote or pick some answer as the right one. You are all good for me :)

+",210317,,210317,,42377.70833,42384.79167,How to properly reverse the if statement when you have two conditions in it?,,4,6,5,,,CC BY-SA 3.0,, +306889,1,,,1/8/2016 17:21,,11,3875,"

Trying to learn Clojure, and you can't help but be told continually how Clojure is all about immutable data. But you can easily redefine a variable by using def, right? I get that Clojure developers avoid this, but you could avoid changing variables in any language just the same. Can someone explain to me how this is any different, because I think I'm missing that from the tutorials and books I'm reading.

+ +

To give an example how is

+ +
a = 1
+a = 2
+
+ +

in Ruby (or blub, if you prefer) different from

+ +
(def a 1)
+(def a 2)
+
+ +

in Clojure?

+",54638,,54638,,42377.75833,42516.68125,If you can use def to redefine variables how is that considered immutable?,,3,0,3,,,CC BY-SA 3.0,, +306890,1,,,1/8/2016 17:22,,62,37301,"

Is it bad practice for a controller to call a repository instead of a service?

+ +

To explain more:

+ +

I figured out that in a good design, controllers call services and services use repositories.

+ +

But sometimes in a controller I don't have/need any logic and just need to fetch from the db and pass it to the view,

+ +

and I can do that by just calling the repository - no need to call a service. Is that bad practice?

+",150418,,,,,43905.72222,Is it bad practice for a controller to call a repository instead of a service?,,4,6,17,,,CC BY-SA 3.0,,

I write Python code for scientific computation. As it is research I face among other two problems:

+ +
    +
  1. the demands are quickly changing
  +
  2. results need to stay reproducible
  +
+ +

Imagine you have a package A that is needed in an old project and then again in a new one. The new project however requires some changes (This would not be a problem if you could keep backwards compatibility. However, I experienced that this is practically impossible in small very specialized modules.). System wide installations would disallow to rerun the old code. So separate installations are required.

+ +

Currently, I solve this with git submodules: +Package A is developed in a git repository and included as a submodule in both projects. This way, the code of the projects can always be run and updates of the package can be propagated easily.

+ +

However, I ran in a practical problem with python imports. +The folder structure of a package A would look like this:

+ +
A - A - mod.py
+ |   |- __init__.py 
+ |- B
+ |- C
+
+ +

Where B and C contain packages that are needed by A, and the folders are added to the Python path in __init__.py of A. However, this led to a case where a package was used twice and with different versions:

+ +
A - A - mod.py
+ |   |- __init__.py 
+ |- B - B - mod2.py
+ |   |   |- __init__.py
+ |   |- C
+ |- C
+
+ +

I.e. both A and B brought their ""own"" version of package C. When B adds C to the Python path, the folder is ignored, because the folder that was added by A is simply used instead. This can be fixed by specifying the correct path when importing, but it gets a bit ugly (see https://stackoverflow.com/questions/34682638/is-there-a-convenient-way-to-translate-a-from-a-import-b-as-c-to-an-python-imp)

+ +

Due to this I started to ask myself if this is the right way to reduce code duplication while assuring a valid code base in a quickly changing environment. Do you see alternatives/better ways?

+",210322,,-1,,42878.48125,42378.07361,How do I manage quickly changing python modules,,1,2,1,,,CC BY-SA 3.0,, +306903,1,,,1/8/2016 19:35,,1,403,"

I've found a paper describing a particular algorithm I'd like to use, and published with the paper is GPL-licensed source code for the algorithm. I am wondering if I can re-implement the algorithm from the paper (obviously without using any of the original code), distribute it closed source, and not be in violation of the original GPL license agreement.

+ +

As a corollary, what if I make some (minor) modifications to the algorithm along the way?

+",210341,,,,,42377.84653,Can I make a closed-source implementation from scratch of a GPL-licensed library?,,2,28,,42385.46806,,CC BY-SA 3.0,, +306906,1,,,1/8/2016 20:01,,2,212,"

I've been doing a bit of research on the subject. I know unordered_sets are hash tables, where the key and value are one and the same. What I'd like to know is how the compiler figures out where in the table each object belongs.
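As a rough illustration of the usual scheme (and a small correction: it is the standard library's hash table at runtime, not the compiler, that does this): the element is passed through the hash function, and the hash is reduced modulo the bucket count to pick a slot; elements that collide share a bucket. A toy Python sketch of separate chaining, not libstdc++'s actual layout:

```python
class TinyHashSet:
    # Toy separate-chaining hash set, mimicking how an unordered_set
    # places elements into buckets.
    def __init__(self, bucket_count=8):
        self.buckets = [[] for _ in range(bucket_count)]

    def add(self, value):
        # hash(value) % bucket_count chooses the bucket for this element.
        bucket = self.buckets[hash(value) % len(self.buckets)]
        if value not in bucket:  # set semantics: no duplicates
            bucket.append(value)

    def __contains__(self, value):
        # Lookup re-hashes and searches only the one matching bucket.
        return value in self.buckets[hash(value) % len(self.buckets)]

s = TinyHashSet()
s.add("pen"); s.add("pen"); s.add("apple")
```

A real unordered_set also grows the bucket array and re-hashes when the load factor gets too high.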

+",164475,,164475,,42377.85347,42378.20625,How are objects stored in unordered_sets?,,1,6,,,,CC BY-SA 3.0,, +306907,1,306908,,1/8/2016 20:36,,0,986,"

(TLDR) To build a parser for a recursive grammar by composition of individual parsers (e.g. with a parser combinator framework), there are often circular dependencies between some of the individual parsers. While circular dependencies are generally a sign of bad design, is this a valid case where the circular dependency is inevitable? If so, which solution would be preferable to deal with the circular dependency? Or are parser combinators just a bad idea altogether? (/TLDR)

+ +
+ +

There are other questions asking about dependency injection with circular dependencies. Typically, the answer is to change the design to avoid the circularity.

+ + + +

I have come across a typical case where I encounter circular dependencies: If I want to have different services to inspect a recursive structure.

+ +

I have been thinking about other examples, but so far the best I come up with is a parser for a recursive grammar. Let's use json as an example, because it is simple and well-known.

+ +
    +
  • A json ""value"" can be a string (""..""), an object ({..}) or an array ([..]).
  • +
  • A string is simple to parse, and has no further children.
  • +
  • An object is made up of keys and values, and the surrounding object syntax.
  • +
  • An array is made up of values, and the surrounding array syntax.
  • +
  • A ""key"" within an object is basically the same as a string.
  • +
+ +

Now I am going to create a number of parser objects:

+ +
    +
  • A value parser, which depends on the other 3 parsers.
  • +
  • An object parser, which depends on string parser and value parser.
  • +
  • An array parser, which depends on value parser.
  • +
  • A string parser.
  • +
+ +

I want to manage the 4 parsers with a dependency injection container. +Or, even if I don't want a container, we still have to figure out in which order to create the different parsers. There is a chicken-and-egg problem.

+ +
+ +

There are known solutions to this, which can be observed in existing parser libraries. So far I have mostly seen the ""stub"" solution.

+ +
    +
  1. Avoid the circular dependency..
  2. +
+ +

.. by passing the value parser as an argument to the object and array parsers' parse() method.

+ +

This works, but it taints the signature of the parse() method. Imagine we want this to be something like a parser combinator, which can be reused for other grammars. We would want a parser interface that is generic and independent of the specific grammar, so we can't have it require a specific parser to be passed around.

+ +
    +
  2. Use a stub.
  2. +
+ +

Instead of requiring each dependency in the constructor, we could use a set() or add() method on one of the parsers. E.g. we first create an empty value parser (""stub""), and then add the object, array and string parsers to it via the add() method.
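To make the stub solution concrete, here is a toy Python sketch (my own simplified grammar with single-quoted strings and arrays only, and no error handling): the value parser is created empty, and its alternatives, including the cycle back to itself, are wired up afterwards.

```python
class ValueParser:
    # Stub-style parser: created empty, children added afterwards,
    # which breaks the constructor-time cycle.
    def __init__(self):
        self.alternatives = []

    def add(self, parser):
        self.alternatives.append(parser)

    def parse(self, text, pos=0):
        for parser in self.alternatives:
            result = parser.parse(text, pos)
            if result is not None:
                return result  # (parsed value, new position)
        return None

class StringParser:
    # Toy strings delimited by single quotes, e.g. 'abc'
    def parse(self, text, pos=0):
        if pos < len(text) and text[pos] == "'":
            end = text.index("'", pos + 1)
            return text[pos + 1:end], end + 1
        return None

class ArrayParser:
    def __init__(self, value_parser):
        self.value_parser = value_parser  # the cyclic dependency

    def parse(self, text, pos=0):
        if pos >= len(text) or text[pos] != '[':
            return None
        items, pos = [], pos + 1
        while text[pos] != ']':
            value, pos = self.value_parser.parse(text, pos)
            items.append(value)
            if text[pos] == ',':
                pos += 1
        return items, pos + 1

value = ValueParser()          # 1. create the empty stub
value.add(StringParser())      # 2. wire up the children afterwards...
value.add(ArrayParser(value))  # ...including the cycle back to `value`
```

A container can do the same wiring: construct the stub first, then call add() once the dependent parsers exist.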

+ +
    +
  3. Use a proxy.
  2. +
+ +

Instead of creating the actual value parser, we create a proxy object, with a reference to the container. Only when the parse() method is called the first time on the proxy value parser, the real value parser is created.

+ +
+ +

Now this is all fine, and I suppose it is just a matter of taste, which solution I prefer.

+ +

But how does this fit with the typical high-horse response that circular dependencies are a sign of bad design? The example seems totally valid, and there is an entire class of problems like this.

+",113434,,-1,,42837.31319,42378.37014,Circular dependencies: Recursive grammar parser (e.g. json),,4,3,1,,,CC BY-SA 3.0,, +306919,1,306923,,1/8/2016 22:51,,1,410,"

I'm currently a first-semester IT student, and I'm wondering if it's better to write my own method to find something, e.g. in a C# List, or to use the built-in methods, such as LINQ, .Find, or .OrderBy with lambdas? +I heard from a senior that .NET optimizes my code according to the contents and amount of data in my list, and finds the best way of searching in it, or sorting it. Is that true?

+ +

The question basically applies to any built-in method vs. an algorithm learned as pseudocode. The C# is only an example of what I meant with the question. I know that learning the pseudocode will solve problems when translated into any language; still, I'm curious about its efficiency.

+",175368,,,,,43082.29375,"Should I be using any algorithm to sort/find items, or use a language's built-in ways?",,4,2,,,,CC BY-SA 3.0,, +306921,1,306951,,1/8/2016 23:01,,2,23597,"

How does construction work with dependency injection?

+ +

With the following code:

+ +
public class AdvancedSearchController : Controller
+{
+    private EmployeeController _employeeController;
+
+    public AdvancedSearchController(EmployeeController employeeController)
+    {
+        _employeeController = employeeController;
+    }
+
+ +

When and how does the employeeController in the code above get set or passed into the constructor? How does this magic happen?
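Typically a DI container registered with the framework inspects the constructor's parameters via reflection and recursively builds each dependency before invoking the constructor, and MVC asks that container for the controller instead of calling new itself. A tiny Python sketch of that idea, using type annotations where .NET would use reflection metadata; the class names mirror the question, but this is illustrative, not ASP.NET's actual resolver:

```python
import inspect

class Container:
    # Minimal constructor-injection resolver: read the constructor's
    # parameter annotations and recursively build each dependency.
    def resolve(self, cls):
        sig = inspect.signature(cls.__init__)
        args = []
        for name, param in sig.parameters.items():
            if name == "self":
                continue
            args.append(self.resolve(param.annotation))
        return cls(*args)

class EmployeeRepository:
    def __init__(self):
        pass

class EmployeeController:
    def __init__(self, repository: EmployeeRepository):
        self.repository = repository

class AdvancedSearchController:
    def __init__(self, employee_controller: EmployeeController):
        self.employee_controller = employee_controller

controller = Container().resolve(AdvancedSearchController)
```

A real container adds lifetimes (singleton, per-request) and interface-to-implementation registrations on top of this core loop.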

+",66460,,,user53019,43613.50764,43613.50764,How do constructor parameters of a MVC Controller get set?,<.net>,2,4,2,,,CC BY-SA 4.0,, +306931,1,,,1/9/2016 0:52,,1,111,"

The goal is to develop something similar to a quiz application: there is a server to which players connect. At some point, a game is started. The game consists of asking the players a few multiple-answer questions and determining the winner based on some criteria.

+ +

The server spawns a new thread for each client (I know it might be better and easier to use select, but the point is to understand/learn multithreading). The users are ""pending"" until a sufficient number of users are available such that the game can start.

+ +

The problem is what happens when the game starts? My current best approach is to use a lot of shared data that is constantly polled by all threads and spawn a game controller thread that controls the game.

+ +

More specifically, I would have a data structure Player that has some flag inGame. When sufficient players are available, the player thread that notices this starts the game control thread and passes the available players to it. The game control thread marks all available players as being ""inGame"" and assigns a question to them. After some time, it checks for an answer. In the meantime, the players constantly check for updates from the game control thread.

+ +

I feel that this shared data approach is not the best. Do you have any suggestion on how could I improve the architecture of my application?

+",112693,,,,,42378.13056,Coordinating threads in a multithreaded server,,1,0,,,,CC BY-SA 3.0,, +306936,1,306944,,1/9/2016 2:35,,4,252,"

Promises are a fairly new pattern for me, so I'm still building my intuition for them.

+ +

I recently came across a case where an adapter-like bit of code was once synchronous and then became asynchronous, so promises were introduced. +Consider:

+ +
function runCalculation() {
+  var params = this.getParams();
+  this.callLibraryCalculation(params);
+}
+
+ +

Here, getParams collects values from various places and puts them in one object that can be used by callLibraryCalculation, which wraps, as you would expect, a call from an external source. All this is obviously simplified for the purpose of the question.

+ +

So this all worked, until a new use case required getParams to retrieve some of its values from an external API, introducing asynchronous behavior. So that function was altered to handle async, using and returning a promise. Which means consuming it has changed.

+ +
function runCalculation() {
+  return this.getParams().then(function(params) {
+    this.callLibraryCalculation(params);
+  }.bind(this)); // bind, so that `this` still refers to the object inside the callback
+}
+
+ +

I chose to return the promise here because the functions that call runCalculation have dependent logic that needs to be .then()-ed. But my question is about the callers of runCalculation. They are now producing their own promises as a result of consuming the one returned here. I can currently let most of those promises escape into the ether, because the callers represent the end of execution. But I notice that in the case where it is not the end of execution, this passing of promises begins to invade a great deal of code, bubbling from caller to caller.

+ +

My inclination is to always return a promise from a function that must use one to order its logic.

+ +

Is that inclination good intuition? Should I be worried that Promises, once introduced, start to take over my code?

+ +

Or to ask in a more answerable question: Are there considerations I may have missed that speak for or against such a practice?

+",200885,,1204,,43146.68194,43146.68194,Should I be returning promises from any function that uses them?,,1,0,,,,CC BY-SA 3.0,, +306940,1,307010,,1/9/2016 3:14,,0,75,"

I'm fairly new to Java and GUIs and am trying to work on a side project. I'm finding that as I continue to make more and more GUIs within eachother, I am continuously needing to access information that originated in the first GUI, from within deeper GUIs.

+ +

For example, I have a variable in the first GUI, after a user clicks on four or five sequential buttons (thus 4 or 5 GUIs later), I need to access said information.

+ +

So far I've just been passing this information into the initialization of the GUIs via their constructors, thus accessing it from there. Is this the ""proper"" way of accessing said information? Or is there a convention/more efficient way of doing so?

+ +

Example:

+ +

Within the first (main) GUI, I have a variable holding a username. After the user clicks on ""Login"" - then ""Account Settings"" then ""Profile"" - from within the ""Profile"" GUI, I need to access the variable ""userName"" originating in the main GUI. Currently I just pass this information into the constructors leading to Profile - is there a better way to do this? Perhaps a way to centralize all this information?

+",210378,,,,,42379.08889,Accessing information between GUIs in Java,,1,1,,,,CC BY-SA 3.0,, +306941,1,,,1/9/2016 3:16,,1,925,"

Note: In my ignorance of the difference between Programmers vs StackOverflow sites (which I know now), I had posted this question on StackOverflow earlier. What I'm looking for is some elaboration, for example, on the comment by Gene.

+ +
+ +

Once I am able to build an abstract syntax tree (AST) for an input, then:

+ +
    +
  1. regardless of the type of the grammar used for building the AST (LR, LL*, or even no formal grammar as with Perl 5);

  +
  2. regardless of the parser-generator used for building the AST (bison, antlr, or my own hand-written code); and

  +
  3. regardless of the number of passes I do over my input for building the AST;

  +
+ +

... is it true that I can implement any feature of any language ever created just by visiting the AST N number of times?

+ +

I'm not worried about the complexity of the resulting code, or its performance, I just need to know whether an AST is sufficient to allow the building of a translator (a compiler or an interpreter) regardless of the feature I am trying to build.

+ +

I am not looking for an exhaustive list of what cannot be done with an AST, just 1 example should suffice. If an AST is a sufficiently fundamental (and thus versatile/powerful) structure to allow the building of just about any translator, then a I'd like to see a confirmation of this fact. Getting the source of the book or a URL (if one exists) would be an additional bonus.

+ +

Just as an AST, being a tree, would be more powerful data structure than (and, thus, can also emulate) a flat or linear intermediate representation (IR) such as the Three-Address Code as covered in the Dragon book, so also an abstract syntax graph (ASG, if you will), being a graph, would be more powerful than an AST. Thus, elaborating further on my original post: Is there any translator feature known to mankind as of this writing that cannot be implemented by an AST and requires the use of an ASG?

+",66547,,-1,,42878.52778,42379.19722,Is an AST enough to build any translator?,,1,32,0,42378.57986,,CC BY-SA 3.0,, +306955,1,,,1/9/2016 7:15,,4,2825,"

I've been programming for years with primarily-imperative languages (C++, C#, javascript, python), but have recently experimented with some functional langauges (Lisp, Haskell) and was excited to try applying some of the declarative-style programming ideas in C++. I have a custom range-based STL replacement library I wrote a while back that made a lot of that possible in a fairly clean way.

+ +

Here's an example - a function to see if a target substring exists inside of a source string, ignoring case. First the plain-ol' imperative way:

+ +
bool StringContains(const string& source, const string& target) {
+
+    // Figure out search area, exit if target is too big to exist in source
+    if (target.size() > source.size()) {
+        return false;
+    }
+    size_t endIndex = source.size() - target.size();
+
+    // For each potential position...
+    for (size_t i = 0; i <= endIndex; i++) {
+
+        // Check if target is here
+        size_t strPos = i;
+        bool foundHere = true;
+        for (char targetChar : target) {
+            char strChar = tolower(source[strPos]);
+            targetChar = tolower(targetChar);
+            if (strChar != targetChar) {
+                foundHere = false;
+                break;
+            }
+            strPos++;
+        }
+
+        // If found here, return true
+        if (foundHere) {
+            return true;
+        }
+    }
+
+    // If not found by now, return false
+    return false;
+}
+
+ +

And here it is using my declarative library (which use some C++11 magic):

+ +
bool StringContainsDec(const string& source, const string& target) {
+
+    // Figure out search area, exit if target is too big to exist in source
+    if (target.size() > source.size()) {
+        return false;
+    }
+    size_t endIndex = source.size() - target.size();
+
+    // For each potential position...
+    auto targetRange = All(target) | Transformed(tolower);
+    for (size_t i = 0; i <= endIndex; i++) {
+
+        // If found here, return true
+        auto sourceRange = All(source) | Sliced(i, i + target.size()) | 
+            Transformed(tolower);
+        if (RangesMatch(sourceRange, targetRange)) {
+            return true;
+        }
+    }
+
+    // If not found by now, return false
+    return false;
+}
+
+ +

A little more compact and perhaps English-like and readable, which is nice. The ""|"" is analogous to a shell script pipe, routing values thru to the next operation. So:

+ +
All(source) | Sliced(i, i + target.size()) | Transformed(tolower)
+
+ +

means, set up a range that, when iterated, will take each character of 'source', sliced between index i and i + target.size(), and pass each character through tolower().

+ +

RangesMatch() iterates each of the two ranges and returns true if each element matches.

+ +

So, that's all fine and good, and it works correctly. But over time I've found, experimenting with this approach in practical situations:

+ +
    +
  • The declarative code is harder to debug. With the imperative, you can just step through in the debugger, line by line, and see what's going on. With the declarative, it's not that much more complex, but you need to jump through some different library functions of constructing the range, calling the internal iterator functions (Front(), PopFront(), etc.) etc. So it jumps you around from place to place, making it more confusing to track the logic. I imagine this is easier in e.g. a Lisp debugger.
  • +
  • The declarative code is a bit slower. On my system it's about half the speed of the imperative code. The ranges are lazily constructed and very efficient, and only allocate locals on the stack, but it still involves a little more under the hood, like tracking start/end pointers, which adds up in nested loops etc. With declarative it seems like you can easily lose touch with what your code is actually doing. If you have a huge chain of operations you'll miss opportunities to simplify, save useful intermediate values so they don't need to be recalculated later, etc.
  • +
  • The declarative code is harder to modify over time, I find. If I want to do some extra operation on each character, I need to add another transform function, or lambda etc. In imperative programming I just add a line of plain ol' code inside the loop, or 100 lines if needed, and it's fairly easy to follow.
  • +
  • I find the imperative style more intuitive as I'm writing. It better reflects the order that things happen, lets me proceed step by step without having to juggle the whole thing in my head up front, etc.
  • +
+ +

Now all this stuff might be particular to my implementation or my preferences, but I imagine some of it is inherent to the style too? This string function is just one example but I've found it with all kinds of things when I implement both side by side - that 80% of the time imperative style wins for me, just do it with plain old loops and if statements rather than messing around with higher-order functions, map/reduce, etc. They may add some code brevity and a little less typing if your text editor sucks, but in complicated real-world situations they become confusing and harder to maintain.

+ +

So is declarative overrated? Has anyone had broad experience with both approaches, especially with complex real-world projects in functional languages? Curious to hear what other people think.

+",210382,,,,,42378.37222,Is declarative programming overrated?,,2,2,2,42378.60347,,CC BY-SA 3.0,, +306959,1,306960,,1/9/2016 9:32,,3,6041,"

Trying to solve BOGO (buy one, get one) coupon logic with a proper design pattern, but having trouble identifying one.
+Use case: ""Buy iPad get SmartCase for free""

+ +

Suppose we have the following objects:

+ +
Product:
+ - getPrice
+
+CartItem(Product p, quantity):
+ - getPrice
+
+Cart:
+ - getItems()
+ - addItem(CartItem ci)
+
+Coupon(code):
+ - getCode
+
+CouponBuyOneGetOneFree(code) extends Coupon: (not sure about inheritance here)
+
+ +

1) What design pattern fits here?
+2) What if I needed to set up the same logic without a coupon, but in the product settings itself?

+ +

I have implemented Decorator pattern for CartItem when discount is applied to the same CartItem by this example and it works great, but still can not achieve the result of this use case.

+ +

Somehow I need to check that the iPad and SmartCase are both in the cart and apply the discount to the SmartCase only. Also, if I add another iPad, I should get another SmartCase for free.
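Whatever pattern name ends up attached to it, the counting logic itself can be small: scan the cart, count trigger products, and discount up to that many target products. A hypothetical Python sketch; the class name and the simplification of cart items to plain tuples are my own assumptions:

```python
class BuyXGetYFreeCoupon:
    # Hypothetical sketch: each `trigger` item in the cart makes one
    # `target` item free.
    def __init__(self, trigger, target):
        self.trigger = trigger
        self.target = target

    def discount(self, cart):
        # cart: list of (product name, quantity, unit price) tuples
        triggers = sum(q for name, q, _ in cart if name == self.trigger)
        saved = 0.0
        for name, quantity, price in cart:
            if name == self.target:
                free = min(quantity, triggers)  # one free case per iPad
                saved += free * price
                triggers -= free
        return saved

cart = [("iPad", 2, 500.0), ("SmartCase", 1, 40.0)]
coupon = BuyXGetYFreeCoupon("iPad", "SmartCase")
total = sum(q * p for _, q, p in cart) - coupon.discount(cart)  # 1000.0
```

Because the free count follows the trigger count, adding a second SmartCase to this cart would make it free too, matching the "another iPad, another free case" requirement.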

+",210258,,-1,,42837.31319,42379.07917,"Implementing ""buy one get one for free"" coupon logic for shopping cart",,2,2,0,,,CC BY-SA 3.0,, +306962,1,306964,,1/9/2016 10:32,,2,455,"

This is an academic design-pattern exercise. +We have a Resource class and a pure fabrication class ResourcesManager to manage objects of type Resource.

+ +

The question is +Given the following two definitions:

+ +
public class Resource {
+
+}
+
+public class ResourcesManager {
+
+ public static Resource loadFromDb(int resID) {
+   ...
+ }
+ public static Resource createNew(String name) {
+    ...
+ }
+}
+
+ +

Apply the Singleton and Factory patterns.

+ +

My solution would be to apply both Singleton and Factory to the ResourcesManager class, +since I need only one instance of the manager, and it's the one in charge of creating new objects.

+ +

Some other students, on the other hand, want to apply the Singleton to the Resource class...

+ +

But If I think at the following execution on a possible client:

+ +

Resource r1 = ResourcesManager.loadFromDb(1); +Resource r2 = ResourcesManager.loadFromDb(2);

+ +

I would get the unexpected behaviour that r1 == r2, because both would refer to the same Resource singleton instance...

+ +

This is the solution I got in mind:

+ +
public class Resource {
+    private String name;
+    private int id;
+
+    public Resource() {}
+
+    public void setName(String n) {
+        name = n;
+    }
+
+    public void setID(int n) {
+        id = n;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public int getID() {
+        return id;
+    }
+}
+
+
+
+public class ResourcesManager {
+  private static final ResourcesManager theInstance = new ResourcesManager();
+
+  private ResourcesManager() {}
+
+  public static ResourcesManager getInstance() {
+    return theInstance;
+  }
+
+  public static Resource loadFromDb(int resID) {
+    // exception handling elided for brevity
+    Class.forName(""Driver"");
+    Connection con = DriverManager.getConnection(...);
+    Statement sta = con.createStatement();
+    ResultSet rs = sta.executeQuery(""select name from resources where id="" + resID);
+    String name = null;
+    if (rs.next()) {
+      name = rs.getString(""name"");
+    }
+
+    Resource s = new Resource();
+    s.setName(name);
+    s.setID(resID);
+    return s;
+  }
+
+  public static Resource createNew(String name) {
+    Resource s = new Resource();
+    s.setName(name);
+    return s;
+  }
+}
+
+",210015,,210015,,42378.44653,42378.50139,How to implement Singleton on a Resource / ResourcesManager case?,,1,1,,,,CC BY-SA 3.0,, +306963,1,306965,,1/9/2016 11:09,,6,2171,"

Generally, when writing automated unit tests (e.g. JUnit, Karma), I aim to:

+ +
    +
  • cover all the boundary conditions
  • +
  • get a high level of coverage.
  • +
+ +

I heard someone say:

+ +
+

coverage and boundary conditions aren't enough for a unit test, you need to write them so they will break if the code changes.

+
+ +

This sounds good to me in theory - but I'm not sure how to apply it.

+ +

My question is: Should I write automated unit tests that fail when the code changes? If so, how?
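To illustrate what I think the quote means (everything below is invented for illustration): a test that pins down observable behaviour will break when that behaviour changes, but survive a pure refactoring of the implementation:

```javascript
// A toy function plus a test that pins its observable behaviour.
function slugify(s) {
  return s.trim().toLowerCase().replace(/\s+/g, '-');
}

function testSlugify() {
  const cases = [
    ['Hello World', 'hello-world'],
    ['  A  B ', 'a-b'],
  ];
  for (const [input, expected] of cases) {
    const got = slugify(input);
    if (got !== expected) {
      throw new Error(input + ' -> ' + got + ', expected ' + expected);
    }
  }
}

testSlugify();
```

Rewriting slugify with a manual loop leaves this green; changing what it returns makes it fail. That, as far as I can tell, is the sense in which tests should ""break if the code changes"".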

+",13382,,13382,,42381.10833,42384.32083,Should I write automated unit tests that fail when the code changes?,,2,23,3,,,CC BY-SA 3.0,, +306970,1,,,1/9/2016 13:27,,-1,156,"

While I like the benefits of a strong typing system, there is one thing that worries me the most. I think of a strong type system as a means of enforcing design choices. If a team builds a system unaware of its poor design, the types will only reflect those bad design choices.

+ +

For example, a programmer often finds he needs to access (hopefully only read) a variable not declared in the type of the function. This function may be used in many places where this constraint was implicit and fine. A signature change triggers a snowball effect: the change escalates until it hits the main method, or close to it. On top of that, we create a compatibility problem, and even merging becomes an issue.

+ +

In an impure (or object-oriented) context, a programmer will find a way to capture this variable. And I agree none of those ways will be pretty. But he can make progress without breaking things (most of the time). It's a form of technical debt, but people will (often rightfully) argue that it's beneficial to the project.

+ +

Has anyone experienced this kind of issue?

+",210415,,,,,42378.57153,Does pure functional programming become agility impediment?,,1,5,0,42379.50694,,CC BY-SA 3.0,, +306974,1,306976,,1/9/2016 13:56,,3,219,"

It's like this: I want to call .moveToBefore(Node) on a Node object and have the node relocate to just before the node passed in.

+ +

The problem arises if the node passed in is the head node. The List object will still refer to the old head, whereas the old head will actually follow the new head further down the chain.

+ +

I guess this could be solved easily if the nodes held a reference to the List object. So I want to know whether there are any disadvantages to the node objects in a linked list implementation holding a reference to their parent ""List"" object.
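To picture the trade-off, here is a rough sketch (invented names, singly linked) where each node carries a reference to its owning list, so moveToBefore can repair the head pointer when the target is the head:

```javascript
// Singly linked list where each node remembers its owning list, so that
// moveToBefore can fix list.head when the target is the head node.
class List {
  constructor() { this.head = null; }
  push(value) {
    const node = { value, next: null, list: this };
    if (!this.head) { this.head = node; return node; }
    let cur = this.head;
    while (cur.next) cur = cur.next;
    cur.next = node;
    return node;
  }
}

function moveToBefore(node, target) {
  const list = target.list;
  // 1. Detach node from its current position.
  if (list.head === node) {
    list.head = node.next;
  } else {
    let cur = list.head;
    while (cur && cur.next !== node) cur = cur.next;
    if (cur) cur.next = node.next;
  }
  // 2. Re-insert it just before target, updating the head if needed.
  if (list.head === target) {
    node.next = target;
    list.head = node;
  } else {
    let cur = list.head;
    while (cur.next !== target) cur = cur.next;
    cur.next = node;
    node.next = target;
  }
}

const list = new List();
const a = list.push('a'), b = list.push('b'), c = list.push('c');
moveToBefore(c, a); // list is now c -> a -> b, and list.head === c
```

The cost is one extra reference per node, plus the invariant that each node's list field must be kept in sync if nodes can ever move between lists.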

+",210420,,,,,42378.92847,Is it odd if Nodes in a LinkedList held references to the List object?,,3,3,,,,CC BY-SA 3.0,, +306984,1,306993,,1/9/2016 16:51,,-1,691,"

Brief: +I have several very large math expressions written in MATLAB/Octave syntax. I +want to evaluate them within a C++ function by copying and pasting them in; however, the problem is they contain ""^"" operators, which cannot be overloaded to work with plain doubles.

+ +

Details: +The expressions consist of scalar variables with nested trigonometric functions, addition and multiplication operators and power operators ""^"". There are so many expressions that it is not practical to manually edit them to be C++ compatible. The main problem arises with the power operator ""^"" which is reserved in C++ for the bitwise exclusive OR operation. I am developing this code to run on Ubuntu 14.04 LTS with gcc version 4.9.2.

+ +

I am asking for advice on how to proceed/comments on the following three ideas:

+ +

1) Write a new class that consists of ""double-like"" variables and an overloaded ""^"" operator, as well as ""+,*,-"" operators and trig functions.

+ +

2) Write some code to parse the MATLAB expressions, find each ""^"" character, identify the base term ""a"" and the exponent term ""b"", and replace them with pow(a,b).

+ +

3) Look for some package online that has already done #1.

+ +

Backstory: +I am porting a large optimization code that originally used Mathematica to evaluate gradients, jacobians, etc. These Mathematica expressions were converted to Matlab using the ToMatlab[] package: +http://library.wolfram.com/infocenter/MathSource/577/ +I now have a need to re-write the whole thing in C++.

+ +

There are hundreds of expressions similar to:

+ +
 grad_out(1,1) = k1^(-1)*k2^(-1)*(k1*k2*(r+(-1)*R)*cos(b1*k1)+(-1)* 
+        k1^2*R*cos((b1+g1)*k1)+k1*k2*R*cos((b1+g1)*k1)+(-1)* 
+        k1^2*r*cos((b1+g1)*k1+g1*k2)+k1^2*R*cos((b1+g1)*k1+g1* 
+        k2)+k1^2*r*cos((b1+g1)*k1+(b2+g1)*k2)+(-1)*k1^2*k2*L* 
+        sin((b1+g1)*k1));
+
+",210426,,,,,42378.99722,"Translate MATLAB expressions containing ""^"" power operator to C++ syntax",,2,4,,42387.86458,,CC BY-SA 3.0,, +306990,1,,,1/9/2016 18:54,,4,2745,"

Given the following code I have to draw the corresponding class diagram:

+ +
public class Shop
+{
+    List<Client> clients;
+    Storage store;
+    User chief;
+    Set<Invoice> invoices;
+}
+
+public class Invoice
+{
+    Map<Product, Row> rows;
+    Client client;
+}
+
+public class Client{}
+public class Product{}
+
+public class Storage
+{
+    Map<Integer, Product> products;
+}
+
+public class Row 
+{
+    Product p;
+    double qty;
+}
+
+ +

On draw.io I produced the following design:

+ +

+ +

The focus of my question is on:

• Relation between Invoice and Row:
+Is it correct to draw the qualified association like so? On draw.io I found no option to depict this particular case. I built it by dragging the Product class to be adjacent to Invoice.

+ +
    +
  • Relation between Storage and Product. +Is it correct? Or should I represent it also like a qualified association using class Integer as qualifier?

  • +
  • Do you see any error on relation arrows or cardinality?

  • +
+",210015,,161917,,42379.50903,42441.59028,How to correctly draw a UML class diagram with fully qualified association?,,1,0,,,,CC BY-SA 3.0,, +307003,1,,,1/10/2016 0:17,,1,844,"

If we look at the definition of ""dynamically-typed programming languages"" in Wikipedia, it says:

+ +
+

Programming languages which include dynamic type-checking but not static type-checking are often called ""dynamically-typed programming languages"".

+
+ +

and

+ +
+

Dynamic type-checking is the process of verifying the type safety of a program at runtime. Implementations of dynamically type-checked languages generally associate each runtime object with a ""type tag"" (i.e., a reference to a type) containing its type information. This runtime type information (RTTI) can also be used to implement dynamic dispatch, late binding, downcasting, reflection, and similar features.

+
+ +

But the thing is, when using Ruby and JavaScript, I never see the type being ""checked"".

+ +

In a video I watched before, the author said that statically typed just means a variable's type is defined/declared at compile time and cannot change, while dynamically typed means a variable's type can change at any time while the program is running, and I found that a clear and simple description of static vs. dynamic typing.

+ +

Actually, according to the GoF, a type is simply a set of interfaces, so how can you ""check"" a type other than by whether the object responds to a particular message? That is, when using dynamically typed languages, I don't really see anything being ""checked"" as in ""dynamic type-checking"". Does the checking happen at all, and if so, when?

+",5487,,5487,,42379.04653,42379.1625,"Do dynamically typed languages, such as Ruby and JavaScript, do any dynamic type checking?",,3,2,,,,CC BY-SA 3.0,, +307013,1,307695,,1/10/2016 3:28,,4,736,"

I have a very basic RESTful service written using the MEAN stack (MongoDb, Express.js, Angular.js, Node.js) and utilizing the Mongoose ODM.

+ +

Product schema

+ +
var productSchema = new mongoose.Schema({
+  name: {type: String, required: true, maxlength: 250},
+  _prices: [{
+    amount: {type: Number,required: true,min: 0.0},
+    date: {type: Date, required: true, default: Date.now}
+  }]
+});
+
+productSchema
+.virtual('price')
+.get(function () {
+  var price = this._prices[this._prices.length - 1];
+  return price !== undefined ? price.amount : undefined;
+});
+
+productSchema.methods.setPrice = function (amount) {
+  this._prices.push({amount: amount});  
+};
+
+ +

The purpose of the _prices array and the price virtual property is that I want to save an audit trail of the price history.

+ +

I have a RESTful interface for /products supporting the usual GET, POST, PUT and DELETE.

+ +

In my system, administrators are allowed to POST, PUT, and DELETE, which is obvious, however the issue arises for anonymous access. Anonymous users are allowed to GET only, but the GET response should not include the _prices field, as only administrators can see the history.

+ +

As anonymous

+ +
{
+  name: ""Widget"", 
+  price: ""5.99""
+}
+
+ +

As administrator

+ +
{
+  name: ""Widget"", 
+  price: ""5.99"", 
+  _prices: [
+    {price: ""7.25"", date: ""2015-02-28""}, 
+    {price: ""5.99"", date: ""2016-01-09""}
+  ]
+}
+
+ +

The majority of tutorials I read regarding the MEAN stack always show returning the mongoose model directly.

+ +
exports.getAll = function(request, response) {
+  Product.find({}, '-_prices', function (err, products) {
+    res.json(200, products);
+  });
+};
+
+ +

This is very simple to get around from the products controller, however, I find this to be very misplaced as this filtering logic is in the controller, and brittle in that if I add new fields to my schema, I must go back to the controller to ensure I am not exposing fields in the future.

+ +

I also have questions about going the other way, doing request validation. It seems most tutorials have you accept user input as-is and let the mongoose validation fail.

+ +
exports.post = function(request, response) {
+  var product = new Product(request.body)
+
+  product.save(function(error, product) {
+    if (error) {
+      return response.json(422, error);
+    }
+
+    response.send(201);
+  });  
+};
+
+ +

Having been used to ASP.NET Web API, my views on this are slightly skewed. I would probably have created separate view models for the request, and different view models for the response (depending on whether I'm authenticated or not), done the input validation on the request view model, and kept only invariant validation within my entity. However, I want to ensure I am doing things the MEAN-stack way.

+ +

My questions are as follows:

+ +
    +
  1. Should I rely solely on mongoose's validation for input validation, or should I write my own before it even gets to mongoose?

  2. +
  3. For output filtering, should I create a security / filtering module which accepts a Product and returns a view-model depending on the user's context, dependent on whether they're an administrator or not?

  4. +
  5. Should I override the toJSON behavior (which is extensible in mongoose, by the way) to do the filtering?

  6. +
  7. Should I create a layer of abstraction between my controllers and mongoose?

  8. +
+ +

My initial thought is that mongoose itself has too much responsibility, but I don't want to avoid using it if it's the best approach.

+",61302,,61302,,42379.15278,42387.49583,Filtering request and responses in RESTful MEAN stack,,1,3,,,,CC BY-SA 3.0,, +307033,1,,,1/10/2016 17:07,,3,678,"

The short question is: what can qualify for potential tail call optimization (TCO) or tail recursion elimination (TRE), if the compiler or interpreter supports it?

+ +

The summary of this question is in the section ""Or is it true that, one simplest way to think about it"" below.

+ +

I saw the Stack Overflow question ""What is tail-recursion elimination?"" as well, and read different descriptions of how a tail call recursion can happen.

+ +

First one:

+ +
+

a tail call is a subroutine call performed as the final action of a procedure (Wikipedia as of Jan 10, 2016)

+
+ +

Second one:

+ +
+

when the value returned by the recursive call is itself immediately returned (Programming Interviews Exposed, Wiley, 2013)

+
+ +

Third one:

+ +
+

when a function calls itself, it cannot do any operation with this value that involves a local variable, and just return it. This call happens at the end of function, and so any local variables or ""state"" inside the stack (inside the current scope) can be thrown away

+
+ +

I think the second one is very close to the third one, except I didn't see exactly what it mean by ""immediately"". The first one made me think if I return n * fact(n-1) then it qualifies, although later on in Wikipedia, it says n * fact(n - 1) is not tail recursive, because even though it calls itself on the last line, it uses n, so it doesn't qualify for TCO.

+ +

Interestingly, maybe the book Programming Interviews Exposed (Wiley, 2013) needs an erratum, because it says

+ +
int factorial( int n ) {
+    if (n > 1) { 
+        /* Recursive case */
+        return factorial(n-1) * n; 
+    } else { 
+        /* Base case */
+        return 1; 
+    }
+}
+
+ +
+

when the value returned by the recursive call is itself immediately returned, as in the preceding definition for factorial, the function is tail-recursive. Some compilers can perform tail call elimination on tail-recursive functions, an optimization that reuses the same stack frame for each recursive call

+
+ +

but I don't think it is tail recursive, is it?

+ +

Is calling on the last line important? On the page Tail Call Optimization in Ruby it says the following code can qualify for TCO:

+ +
def fact(n, acc=1)             # just assuming we pass in non-negative n
+  return acc if n <= 1
+  return fact(n-1, n*acc)
+end
+
+ +

But isn't it true that even the following qualify for TCO?

+ +
def fact(n, acc=1)
+  return fact(n-1, n*acc) if n > 1
+  return acc if n <= 1
+end
+
+ +

Or is it not, because it has ""unfinished business"" -- you need the stack to remember where the code has reached and continue later on, so you cannot wipe out the stack?

+ +

What if you do math operation, but it is like return 1 + fact(n-1)? That is, you are adding a constant, 1 instead of touching any local variable, so there shouldn't be stack info that needs to be kept each time you recurs. But if you view it as needing to remember 1 + (1 + (1 + (1 + ..., then in fact you still need more and more stack, unless if typical TCO can actually optimize it to 4 + fact(...) without needing new stacks.

+ +

Or is it true that, one simplest way to think about it is:

+ +

If you can wipe out the whole stack frame, because none of its info needs to be kept in the current stack when you make the recursive call, then you can just use a ""GO TO"" instead of pushing a new stack frame, and the call can qualify for TCO? Then, at least for this recursive call alone, there will be no stack overflow.

+ +

So if you can wipe out the whole stack frame, does that mean that no local variables are needed?

+ +

What if it is a recursive call to traverse a binary tree or rename all files under a directory and all its subdirectories, from the form ""Programming_Ruby.pdf"" to ""Programming Ruby.pdf"", and your recursion is not on the last line, because the last line has a print ""this folder done""... then, can you wipe the whole stack frame out? That is, will the position of the next line of code to run also depend on the stack frame?

+ +

If the next line to run depends on the stack, then maybe it can boil down to these 2 rules?

+ +
    +
  1. the recursive call must be the last operation of the function, so after this recursive call, the current function will end and there is no ""next line of code to run"" to remember.

  2. +
  3. there must be no operation involved with this recursive call. No math operations, whatsoever, especially if it involves a local variable. It must simply return f(...) and return it alone. (or call this procedure alone). The ... inside can involve calculations, even if it involves local variables. Outside of (...) you cannot do any calculation at all.

  4. +
+",5487,,-1,,42878.52778,42379.93819,What can qualify for potential tail call recursion (TCO) optimization or tail recursion elimination (TRE),,3,1,1,,,CC BY-SA 3.0,, +307035,1,307043,,1/10/2016 17:56,,0,1446,"

I have a programming problem, that I don't know how to solve. And while I have provided a sample of my code, I am interested in a conceptual answer on how to resolve this problem.

+ +

On a tradeOffers event, I call a function to get a list of offers and process them one by one using async.forEachOfSeries from the Async.js module. However, if a new offer comes in while pending offers are still processing, the same offer can be processed multiple times and can also return an ""offer unavailable"" error because it was already accepted. This is a problem because the new offer is effectively not evaluated.

+ +

Is it possible to solve this with a simple callback or a true/false flag, or would I need to implement a queue for the offers? If I need a queue, I would like to avoid using SQL to store the offers. Or do I need a more complex control mechanism to handle new tradeOffers events while processing is already in progress?

+ +

I am currently using steam-tradeoffers and steam-user module.

+ +

To better explain my current situation, here is the code:

+ +
    steam.on('tradeOffers', function(count) {
+        if (count > 1 && bot_working == true) {
+            processoffers();
+        }
+        });
+
+    function processoffers() {
+        if (bot_working) {
+            offers.getOffers({
+                get_received_offers: 1,
+                active_only: 1,
+                get_sent_offers: 0,
+                get_descriptions: 1,
+                time_historical_cutoff: Math.round(Date.now() / 1000),
+                language: ""en_us""
+            }, function(error, body) {
+                if(error) return;
+                if(body.response.trade_offers_received){
+                async.forEachOfSeries(body.response.trade_offers_received, function(offer, key, cboffer) {
+                        if (offer.trade_offer_state == 2){
+                            //process offer
+                        }
+                        }, function (err) {
+                if (err) console.log(err.message);
+                  console.log('Completed all incoming offers.');
+                }
+            ); //function foreach
+        } //if trade tradeoffers
+    }); //if getoffers
+};
+};
+
+",210513,,,user53019,42380.62292,42380.62292,Handling multiple asynchronous events - Wait for pending offers to process on new offer?,,1,4,,,,CC BY-SA 3.0,, +307039,1,,,1/10/2016 18:22,,8,5061,"

I need to extend a third party class I cannot modify. The class's dependencies are for the most part injected through the constructor, making them easy to mock. However the class also uses methods from a trait that call global services directly, destroying test isolation. Here's a code sample that illustrates the problem:

+ +
trait SomeTrait {
+  public function irrelevantTraitMethod() {
+    throw new \Exception('Some undesirable behavior.');
+  }
+}
+
+class SomeSystemUnderTest {
+  use SomeTrait;
+  public function methodToTest($input) {
+    $this->irrelevantTraitMethod();
+    return $input;
+  }
+}
+
+class MockWithTraitTest extends \PHPUnit_Framework_TestCase {
+  public function testMethodToTest() {
+    $sut = new SomeSystemUnderTest();
+    $this->assertSame(123, $sut->methodToTest(123)); // Unexpected exception!
+  }
+}
+
+ +

How does one deal with a situation like this in PHPUnit? Can traits be mocked and injected somehow? The only solution I've found is to mock the SUT itself and just nullify the troublesome methods from the trait, leaving the actual SUT methods intact. But that doesn't feel right.

+",90915,,4344,,42662.06736,42902.27153,Is it possible to mock and inject traits in PHPUnit?,,2,1,2,,,CC BY-SA 3.0,, +307045,1,307055,,1/10/2016 20:59,,2,1003,"

I had this idea of different write models being used in different bounded contexts, but both being the same aggregate instance (the same events).

+ +

For example, consider a User aggregate that is used in the Administration and User contexts (that is, the admin and user-interface parts of the application). The domain model says that the user (in the User context) cannot perform any action on itself (change email, change password, change address, whatever) unless he is logged in (logging in being the only action he is permitted to perform while not logged in).

+ +

In administration context, however, the action does not need that user to be logged in (basically, an admin can change user's details on his behalf).

+ +

Can this be modeled with Event Sourcing as a single aggregate instance (same ID), but depending on the context a different object model would be built from the events? i.e. in admin context, the model does not verify the user is logged in, but in user context, the model does. This way the events would be shared between the contexts.

+ +

Is this a good idea? I can smell some issues but I am not completely sure.

+",81940,,,,,42381.70764,Event Sourcing and cross-context aggregate,,3,0,,,,CC BY-SA 3.0,, +307053,1,,,1/10/2016 23:43,,3,270,"

Let's take this for example...

+ +
entryText.addTextChangedListener(new TextWatcher() {
+
+    TextView wordCount = (TextView) findViewById(R.id.wordCount);
+    TextView charCount = (TextView) findViewById(R.id.charCount);
+
+    @Override
+    public synchronized void afterTextChanged(Editable s) {
+        wordCount.setText(""W: "" + String.valueOf(ChosenFile.countWords(entryText.getText().toString())));
+        charCount.setText(""C: "" + Integer.toString(entryText.getText().length()));
+    }
+
+    @Override
+    public void beforeTextChanged(CharSequence s, int start, int count, int after) {
+        // TODO Auto-generated method stub
+    }
+
+    @Override
+    public void onTextChanged(CharSequence s, int start, int before, int count) {
+        // TODO Auto-generated method stub
+    }
+});
+
+ +

Are new instances of the TextViews wordCount and charCount created each time the listener is invoked? Would it be better to make them global?

+ +

How is memory handled? Let's say new instances are created; how does that affect memory?

+",208501,,,user22815,42380.05417,42380.11042,How are objects treated in an anonymous inner class?,,1,5,1,,,CC BY-SA 3.0,, +307060,1,307067,,1/11/2016 3:32,,2,929,"

I have a question about storing encrypted passwords in your database to help secure them. I have a class that encrypts the password passed in and returns a string. This string has three parts to it, all separated by colons. The first part is the iteration count, the second is the salt, and the third is the encrypted password. Ex: ""1000:{salt}:{encrypted password}"".

+ +

My question is: how am I supposed to store this? As of right now, I am storing the entire string in my database under the password column. Then, when I go to check the password for a login, I retrieve that record, and a method in my class parses it and verifies that the two passwords match. Is this a big 'no-no'? I've read that some people store the salt in its own column and then pull that down when verifying a login. However, the class I am using returns a string like the one shown above, and it comes with a method to which I can pass that entire string, and it will parse it and verify the password for a login.

+ +

I guess what I'm asking is, do I need to re-construct my class to separate the string (""{iteration}:{salt}:{encrypted password}""), or is storing that entire string in the database OK?

+",186568,,5099,,42380.34653,42465.53889,ASP.NET storing encrypted & salted Password,,2,8,,,,CC BY-SA 3.0,, +307061,1,,,1/11/2016 3:33,,3,129,"

I've got a conceptual question (which is probably better posted here than on StackOverflow?).

+ +

I want to develop a client application that maintains a persistent connection to a server, and exchanges text data back and forth. An IRC client is a great example. So, these are givens:

+ +
    +
  1. There is a fairly steady stream of incoming text that needs to be interpreted and dealt with in some way by the client.
  2. +
  3. The client will send commands and it expects specifically formatted responses back from the server.
  4. +
+ +

There are probably quite a few approaches, but I'm looking for (obviously) the most efficient. Should I just manage all incoming text through the ""on data received"" event?

+ +

(pseudo-code)

+ +
on(data)
+  if data matches /###DAT action1/
+    handleAction1
+  else if data matches /###DAT action2/
+    handleAction2
+  else if data matches /###DAT PRIVMSG/
+    handlePrivateMessage
+  else if data matches /###DAT LOGIN_OK/
+    handleConfirmedLogin
+
+ +

So every possible bit of text that I'd want to deal with is scanned here, whether it's a private message from someone or a response to a login command I just sent.

+ +

Is there a better way to deal with this? If it matters, I'm using JavaScript.
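One alternative worth mentioning for this kind of protocol handling is a dispatch table keyed on the message keyword, instead of a growing if/else chain. A minimal sketch in JavaScript (the prefixes and handler names are invented):

```javascript
// Map each protocol keyword to its handler; dispatch parses once and routes.
const calls = [];

const handlers = {
  PRIVMSG: payload => calls.push('privmsg:' + payload),
  LOGIN_OK: () => calls.push('login-confirmed'),
};

function dispatch(line) {
  const m = /^###DAT (\S+)(?: (.*))?$/.exec(line);
  if (m && handlers[m[1]]) {
    handlers[m[1]](m[2]);
  }
  // Unknown or malformed lines fall through silently here;
  // a real client would probably log them instead.
}

dispatch('###DAT PRIVMSG hello there');
dispatch('###DAT LOGIN_OK');
// calls is now ['privmsg:hello there', 'login-confirmed']
```

Adding a new message type then means adding one entry to the table rather than another else-if branch.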

+",210564,,210564,,42380.36458,42410.56944,How to handle a client app that connects to a TCP server and sends/receives text data bidirectionally?,,1,0,,,,CC BY-SA 3.0,, +307063,1,307070,,1/11/2016 6:22,,74,12179,"

Consider the following situation:

+ +
    +
  • You have a clone of a git repository
  • +
  • You have some local commits (commits that have not yet been pushed anywhere)
  • +
  • The remote repository has new commits that you have not yet reconciled
  • +
+ +

So something like this:

+ +

+ +

If you execute git pull with the default settings, you'll get something like this:

+ +

+ +

This is because git performed a merge.

+ +

There's an alternative, though. You can tell pull to do a rebase instead:

+ +
git pull --rebase
+
+ +

and you'll get this:

+ +

+ +

In my opinion, the rebased version has numerous advantages that mostly center around keeping both your code and the history clean, so I'm a little struck by the fact that git does the merge by default. Yes, the hashes of your local commits will get changed, but this seems like a small price to pay for the simpler history you get in return.

+ +

By no means am I suggesting that this is somehow a bad or a wrong default, though. I am just having trouble thinking of reasons why the merge might be preferred for the default. Do we have any insight into why it was chosen? Are there benefits that make it more suitable as a default?

+ +

The primary motivation for this question is that my company is trying to establish some baseline standards (hopefully, more like guidelines) for how we organize and manage our repositories to make it easier for developers to approach a repository they haven't worked with before. I am interested in making a case that we should usually rebase in this type of situation (and probably for recommending developers set their global config to rebase by default), but if I were opposed to that, I would certainly be asking why rebase isn't the default if it's so great. So I'm wondering if there is something I'm missing.

+ +

It has been suggested that this question is a duplicate of Why do so many websites prefer “git rebase” over “git merge”?; however, that question is somewhat the reverse of this one. It discusses the merits of rebase over merge, while this question asks about the benefits of merge over rebase. The answers there reflect this, focusing on problems with merge and benefits of rebase.

+",92517,,-1,,42837.31319,42381.2875,Why does git pull perform a merge instead of a rebase by default?,,4,8,19,,,CC BY-SA 3.0,, +307076,1,,,1/11/2016 11:22,,5,838,"

In a project we have a rather formal code review, which is currently done manually and documented in Excel sheets and Word docs.

+ +

We would like to improve the current code review process by integrating it into a Git workflow (utilizing tools like Bitbucket Server, which we already have, Gerrit, etc.).

+ +

The current idea is that each developer implements features and bugfixes and creates a pull request. This pull request is reviewed by other developers and then merged into our main development branch or not.

+ +

We would like to export all pull requests (which are now the code reviews) to formally document them in an offline document. This code review document is a delivery item for our customer.

+ +

Is this a feasible approach at all?

+",10158,,4,,42380.51944,42619.96528,Documentation of code review,,2,5,0,,,CC BY-SA 3.0,, +307077,1,307091,,1/11/2016 11:24,,0,4330,"

The name ""database host"", as the name indicates (""the server that hosts the database""), should be the IP of the server that hosts that database. I don't see what relation it has to the scripts that try to access it. But the explanation I mostly see is:

+ +

""As long as the program (PHP/.NET scripts) are on the same server as the database, which is the case with our servers, you will want to use 'localhost' as the database hostname. If you need to test an external connection, you can use the domain name (if pointed to our nameservers) or the server IP address.""
+Does that mean the scripts' location decides the ""database host""?

+ +

So, is this terminology confusing? Also, why does something called ""database host"" need to be configured at all? What is its importance?

+",50008,,,,,42380.64375,What is the significance of database host name while configuring database?,,1,2,1,,,CC BY-SA 3.0,, +307079,1,307132,,1/11/2016 12:08,,9,1215,"

From Agile Software Development, Principles, Patterns, and Practices: Pearson New International Edition:

+ +
+

Sometimes, the methods invoked by different groups of clients will overlap. If the overlap is small, then the + interfaces for the groups should remain separate. The common functions should be declared in all the overlapping + interfaces. The server class will inherit the common functions from each of those interfaces, but it will implement + them only once.

+
+ +

Uncle Bob, talks about the case when there is minor overlap.

+ +

What should we do if there is significant overlap?

+ +

Say we have

+ +
class UiInterface1;
+class UiInterface2;
+class UiInterface3;
+
+class UiInterface : public UiInterface1, public UiInterface2, public UiInterface3 {};
+
+ +

What should we do if there is significant overlap between UiInterface1 and UiInterface2?

+",204027,,115355,,42380.86875,42381.19375,Interface Segregation Principle: What to do if interfaces have significant overlap?,,2,2,1,,,CC BY-SA 3.0,, +307083,1,307320,,1/11/2016 13:47,,2,379,"

PROBLEM: There is a coding imperative (Steve McConnell, Code Complete) that one should program into a language rather than in it, i.e. follow good style even when the language lacks direct support for it. JavaScript is a scripting language at heart, but it is very often written in an object-oriented style. On that basis, I want my JavaScript and Java coding styles to agree more closely.

+ +

QUESTION ONE: In JavaScript methods I split code up for simplicity and use closures for encapsulation: one large method turns into several functions inside its scope. Is that bad practice?

+ +

QUESTION TWO: In a JavaScript method I declare variables first, then declare a constructor function with the same name as its parent method, then the other helper functions, much like the layout of a Java class. No code floats directly in the body, except for the one-line constructor call at the end of the parent method (void or return). When you open such a closure, its main operations are at the top. Any reasons why I shouldn't do that?

+ +
function parentFunction() {
+  var myVariable;
+  var scope = this; // Keep scope stable inside enclosed methods.
+
+  function parentFunction() {
+    // A pointless action to demonstrate an approach:
+    return (
+      sonFunction() ||
+      daughterFunction());
+  }
+
+  function sonFunction() {
+    return scope.name;
+  }
+
+  function daughterFunction() {
+    return scope.name;
+  }
+
+  return parentFunction();    
+}
+
+",146870,,146870,,42381.18611,42382.77917,Javascript Closure Style Similar to Java Class Structure,,1,4,,,,CC BY-SA 3.0,, +307101,1,307123,,1/11/2016 18:01,,53,17942,"

I've been learning about NoSQL Databases for a week now.

+ +

I really understand the advantages of NoSQL Databases and the many use cases they are great for.

+ +

But often people write their articles as if NoSQL could replace Relational Databases. And there is the point I can't get my head around:

+ +
+

NoSQL Databases are (often) key-value stores.

+
+ +

Of course it is possible to store everything in a key-value store (by encoding the data as JSON, XML, whatever), but the problem I see is that in many use cases you need to retrieve some amount of data that matches a specific criterion. In a NoSQL database there is only one criterion you can search on efficiently - the key. Relational databases are optimized to search efficiently on any value in a data row.

+ +

So NoSQL databases are not really a choice for persisting data that needs to be searched by its content. Or have I misunderstood something?

+ +

An example:

+ +

You need to store user data for a webshop.

+ +

In a relational database you store every user as a row in the users table, with an ID, name, country, etc.

+ +

In a NoSQL Database you would store each user with his ID as key and all his data (encoded in JSON, etc.) as value.

+ +

So if you need to get all users from a specific country (for some reason the marketing guys need to know something about them), it's easy to do in the relational database, but not very efficient in the NoSQL database, because you have to fetch every user, parse all the data and filter.

+ +

I'm not saying it's impossible, but it gets a lot trickier, and I suspect far less efficient, when you want to search within the data of NoSQL entries.

+ +

You could create a key for each country that stores the keys of every user who lives in that country, and get the users of a specific country by fetching all the keys deposited under the key for that country. But I think this technique makes a complex dataset even more complex - it's harder to implement and not as efficient as querying an SQL database. So I don't think it's a way you would use in production. Or is it?

+ +
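The index-key technique described above can be sketched with plain dictionaries standing in for a key-value store (the key names and data are purely illustrative, not tied to any particular NoSQL product):

```python
import json

store = {}  # a stand-in for the key-value store

def add_user(user_id, user):
    store[f"user:{user_id}"] = json.dumps(user)
    # Maintain a secondary "index" key per country by hand.
    store.setdefault(f"country:{user['country']}", set()).add(user_id)

def users_in_country(country):
    ids = store.get(f"country:{country}", set())
    return [json.loads(store[f"user:{uid}"]) for uid in ids]

add_user(1, {"name": "Alice", "country": "DE"})
add_user(2, {"name": "Bob", "country": "US"})
add_user(3, {"name": "Carol", "country": "DE"})

print(sorted(u["name"] for u in users_in_country("DE")))  # ['Alice', 'Carol']
```

Note how the application has to maintain this index on every write (and keep it consistent on updates and deletes), which is exactly the bookkeeping a relational database does for you.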

I'm not really sure if I misunderstood something or overlooked some concepts or best practices to handle such use cases. Maybe you could correct my statements and answer my questions.

+",198145,,1130,,42439.87569,42441.41875,Is the use of NoSQL Databases impractical for large datasets where you need to search by content?,,8,13,13,,,CC BY-SA 3.0,, +307108,1,307112,,1/11/2016 20:50,,3,244,"

I know that the data is passed through HTTP, but I'm not sure if I should be passing data through HTTP headers, or HTTP bodies. Which one is the convention for APIs?

+ +

Also, theoretically using PHP, which would be easier to process? Based on what I know, using headers seems to be the easiest method.

+ +

POST body

+ + + +
POST /api/v1/test_params.json HTTP/1.1
+Host: rollrbla.de
+Content-Length: 59
+X-Target-URI: http://rollrbla.de
+Content-Type: application/x-www-form-urlencoded; charset=UTF-8
+Connection: Keep-Alive
+
+Test-Parameter-1=Test-value-1&Test-Parameter-2=Test-Value-2
+
+ +

POST header

+ +
POST /api/v1/test_params.json HTTP/1.1
+Test-Parameter-2: Test-Value-2
+Host: rollrbla.de
+Test-Parameter-1: Test-Value-1
+Content-Length: 0
+X-Target-URI: http://rollrbla.de
+Connection: Keep-Alive
+
+ +

Overall, what would be the best way to process this data, and what is the convention?
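The convention is to put the payload in the body; custom headers are normally reserved for metadata such as auth tokens and content negotiation. Parsing the first request's body is a one-liner in most languages. A sketch in Python rather than PHP (in PHP a form-encoded body simply arrives pre-parsed in $_POST):

```python
from urllib.parse import parse_qs

# The raw body from the first request above.
body = "Test-Parameter-1=Test-value-1&Test-Parameter-2=Test-Value-2"

# parse_qs returns lists per key (keys can repeat); take the first value.
params = {k: v[0] for k, v in parse_qs(body).items()}
print(params["Test-Parameter-1"])  # Test-value-1
```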

+",92848,,,user40980,42380.88472,42380.89931,How are POST/PUT/DELETE data passed to APIs?,,2,0,,,,CC BY-SA 3.0,, +307111,1,,,1/11/2016 21:23,,3,126,"

I am in the final stages of development of a simple embedded system. The device performs PID coefficient estimation and then instantiates a PID controller with the estimated coefficients.

+ +

The architecture of the program is cooperative multitasking. The file main.c defines a series of void task_...(void) functions and calls them one after the other upon each timer tick.

+ +

I would like to perform the operation void task_pid_tune(void) for as long as it takes. Then, I want to perform the operation pid_configure(coefficients) once. From then on, I want to perform the operation pid_run() instead of the previous two.

+ +

Currently all 4 tasks (pid_tune() is not implemented yet) run in an extremely simple way

+ +
while( wait_for_timer_tick() )
+{
+    task1();
+    task2();
+    task3();
+    task4();
+ }
+
+ +

I would prefer to keep it that way, instead of writing logic inside this function. I am writing in C99. I am fine with static and global variables, as well as public getters.

+ +

Please advise how I should implement communication between void task_pid_tune(void) and void task_pid_run(void). Please keep in mind that the latter will be called much more often than the former, so its performance matters more.

+ +
+ +

Clarification.

+ +

No task is allowed to be blocking. In other words, all scheduled tasks must return before the next timer tick.

+ +

Consequently, the required execution order is:

+ +
    task1();
+    task2();
+    task3();
+    task4();
+    task_pid_tune();
+    // this loops until pid tuning is ready
+    // that is, pid_tune() samples some signal once per system tick
+    // and at some point, it gets happy and provides estimated pid coefficients
+
+    task1();
+    task2();
+    task3();
+    task4();
+    task_pid_configure();  // this runs only once
+
+    task1();
+    task2();
+    task3();
+    task4();
+    task_pid_run();
+    // This loops forever.
+
+",54268,,54268,,42381.27986,42381.33819,"How to communicate between cooperative tasks ""first me, then you""?",,1,2,,,,CC BY-SA 3.0,, +307113,1,,,1/11/2016 22:02,,6,863,"

I'm maintaining a system providing a typical synchronous web service REST API. It routes each request to one of several possible backend services. Now there is a new backend I want to use, but there is a problem - it has an asynchronous API - I will need to do a request, and then wait for a callback request.

+ +

I'm thinking that it's possible to hide this asynchronicity behind a synchronizing facade. This facade will call the async API and then put the incoming web requests ""on hold"". When the callback endpoint is called a few seconds later, it will resume the correct web request and respond to the original caller. If the callback does not arrive within, let's say, 10 seconds, I will stop waiting and respond with an error.

+ +

A complicating factor is that I have several web nodes behind a load balancer, but I'm thinking I can use a distributed storage solution like Redis to publish incoming callbacks, so that the correct web node can pick it up.

+ +

The question is: Is this feasible? Or do you believe it will be really difficult to get it to work properly? Any experience you can share or links to relevant resources would be much appreciated. Is there any other approach to using an asynchronous web service from another service which needs to be synchronous?

+ +

Some pseudo code of what I'm thinking:

+ +
define rest-endpoint ""/foobar"" {
+    call async backend
+    until 10 seconds has passed {
+        sleep 1 second
+        result = redis.get(some_correlation_id)
+        if (result)
+            return new Response(result)
+    }
+    return new Response(""failed"")
+}
+
+define rest-endpoint ""/callback"" {
+    redis.push(some_correlation_id)
+}
+
+ +

Note that if I time out and respond ""failed"" I don't (in my particular case) have to create any kind of compensating transaction towards the backend, and the client is also free to retry the same request later.

+ +

For the record: The system is implemented on the .NET stack, but I'm interested in any technology which would make implementing something like this simpler.

+ +

Also note that I would probably not actually use sleep in a loop, but some kind of Task / future abstraction.
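The pattern described above is workable and commonly done. Here is a minimal single-process sketch of the park-the-request, wake-it-on-callback idea, with Python threading standing in for the web framework and the Redis layer (in .NET the natural analogue is a TaskCompletionSource stored per correlation id; all names here are invented):

```python
import threading

pending = {}  # correlation_id -> (Event, result holder); Redis pub/sub in the multi-node case
lock = threading.Lock()

def handle_foobar(correlation_id, call_backend, timeout=10.0):
    """The synchronous REST endpoint: fire the async call, then block."""
    done = threading.Event()
    holder = {}
    with lock:
        pending[correlation_id] = (done, holder)
    call_backend(correlation_id)          # fire the async backend request
    try:
        if done.wait(timeout):            # parked until the callback arrives
            return holder["result"]
        return "failed"
    finally:
        with lock:
            pending.pop(correlation_id, None)

def handle_callback(correlation_id, result):
    """The callback endpoint: wake the parked request, if it still waits."""
    with lock:
        entry = pending.get(correlation_id)
    if entry:
        event, holder = entry
        holder["result"] = result
        event.set()

# Simulate a backend that calls back 50 ms later.
def fake_backend(cid):
    threading.Timer(0.05, handle_callback, args=(cid, "ok")).start()

print(handle_foobar("abc-123", fake_backend, timeout=2.0))  # ok
```

Blocking on an event rather than polling in a sleep loop avoids the up-to-one-second extra latency of the pseudo code; the main real-world cost is that each parked request ties up a worker for its duration.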

+",1049,,,,,42381.39861,Synchronous facade hiding asynchronous web service,,2,1,,,,CC BY-SA 3.0,, +307121,1,307131,,1/12/2016 0:35,,-2,162,"

I am attempting to build an attendance program that can tell whether someone is sitting in a seat. The seats are fixed, so locating each seat in the image is easy, but I don't really know where to start in determining whether a seat is occupied. I have tried to do some research, but it is all quite complex. I thought that starting with some basic code to highlight contrast would help, such as:

+ +
#!/usr/bin/env python
+
+from PIL import Image
+from pylab import *
+
+im = array(Image.open('images/14.jpg'))
+
+figure('ORIG')
+
+imshow(im)
+
+figure('MOD')
+
+c = copy(im)
+c = c[:,:,0]
+c = 255 - floor(4 * c**0.5)  # 0.5 rather than 1/2, which is integer division in Python 2
+gray()
+imshow(c)
+
+show()
+
+ +

Which does help: + +

+ +

Where do I go from here? Are there any good resources for image processing? Is what I have so far a good approach or is there a superior one?
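One simple next step, before anything fancy: since the seats are at fixed positions, compare the average brightness of each seat's rectangle against a reference photo of the empty room; an occupied seat shifts that average noticeably. A sketch with plain nested lists standing in for grayscale pixel data (the threshold and region values are made up):

```python
def region_mean(pixels, x0, y0, x1, y1):
    """Average grayscale value inside a rectangle of a 2-D pixel grid."""
    total = count = 0
    for row in pixels[y0:y1]:
        for value in row[x0:x1]:
            total += value
            count += 1
    return total / count

def seat_occupied(empty, current, seat_box, threshold=30):
    """Occupied if the seat rectangle's brightness shifted past the threshold."""
    diff = abs(region_mean(current, *seat_box) - region_mean(empty, *seat_box))
    return diff > threshold

# Toy 4x4 "images": the seat box covers the top-left 2x2 corner.
empty_room = [[200] * 4 for _ in range(4)]
with_person = [row[:] for row in empty_room]
for y in range(2):
    for x in range(2):
        with_person[y][x] = 80   # a person darkens the seat area

print(seat_occupied(empty_room, with_person, (0, 0, 2, 2)))  # True
```

Background subtraction like this is fragile under lighting changes, so the reference image needs to be refreshed; for anything robust, OpenCV's background-subtraction and object-detection tooling is the usual next stop.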

+",139870,,,,,42381.14514,Detecting Persons In Seats from Image,,2,1,,42382.55694,,CC BY-SA 3.0,, +307125,1,307129,,1/12/2016 1:19,,3,109,"

I'm writing an application for which I want to use some 20-year old open source code. The original code is kind of antiquated in certain aspects, for example, it's written in K&R C rather than C99; it has to manually compute certain things that are now defined in C standard library headers; there are no tests; and so on and so forth. All that the original code says by way of licensing is ""Released in the public domain by so and so""; no BSD/MIT/GPL, just ""public domain"".

+ +

In addition to my original application, I'd like to release my own fork of the legacy library with a few changes, chiefly to modernize it and to add unit tests. I've contacted the original author about it, but have not heard back from him. Am I within my rights to release my fork with a liberal (BSD/MIT) license? I'm not trying to make any money off of either my update of the legacy library or the application built on top of it.

+",98059,,78905,,42381.07431,42381.13819,License for forked software library,,1,6,,,,CC BY-SA 3.0,, +307128,1,307133,,1/12/2016 3:16,,9,8002,"

Many low-level programs use the volatile keyword on variables used for memory-mapped I/O and the like; however, I'm confused about what it REALLY does behind the scenes. In other words, what does it mean that the compiler doesn't ""optimize away"" accesses to the memory address?

+",156808,,,user53019,42381.60278,42384.11319,What does it mean to declare a volatile variable?,,5,3,,,,CC BY-SA 3.0,, +307134,1,,,1/12/2016 3:56,,11,1253,"

I've heard that Clojure macros are easier to write but not as reliable as Racket's hygienic macros. My question has 2 parts:

+ +
    +
  1. How does gensym differ from hygienic macros?
  2. +
  3. What do Racket macros provide that Clojure's don't? (be it safety, composability, or anything)
  4. +
+",143220,,,,,42678.88194,What practical problem results from lack of hygienic macros in Clojure?,,1,2,2,,,CC BY-SA 3.0,, +307137,1,307145,,1/12/2016 4:13,,1,821,"

I have followed the tutorial from Amazon to get started with the Echo. I made a skill and setup an application server on their AWS Lambda for basic testing.

+ +

I have a few questions about the Echo, skills and AWS Lambda:

+ +

When I create a skill does it reside on the Echo itself or does the echo have to go out to the internet and retrieve it? If it goes out to the internet, is there any way I can develop a skill and store it locally on the Echo? (I don't want the Echo to use the internet).

+ +

The last part of the process for answering and processing a vocal command from the Echo is an application server, usually AWS Lambda provided by Amazon. What do I need to set up an application server on the LAN? (I don't want the Echo to use the internet.) I understand there was a piece of software called JAWS (now serverless-serve), but I feel like I'm missing something else.

+ +

I also see I'm able to export an AmazonMachineImage. Does this mean I can set it up on my own VM? Or is it for AWS only?

+",210230,,,,,42381.30208,Amazon Echo Development on LAN,,1,0,,,,CC BY-SA 3.0,, +307138,1,,,1/12/2016 5:18,,-2,108,"

Is batch processing possible in jdbi with the below condition?

+ +

If one insert query in the batch fails because it violates some constraint (primary key, foreign key, etc.), will it resume processing the next insert queries in the batch?

+",210580,,210580,,42381.22292,42381.96319,Do any Java libraries support batch sql query processing and proceeds even if any duplicate records exists in the batch?,,1,3,,,,CC BY-SA 3.0,, +307139,1,,,1/12/2016 5:27,,11,332,"

I am creating a 2d game for a website where the universe can grow extremely large (basically infinitely large). Initially, the universe is composed of 6 stars that are an equal distance from the origin (0, 0). My task is to be able to generate more stars that will have ""paths"" (edges) that connect to each other. How can I design an algorithm that meets these restrictions:

+ +
    +
  1. Stars are randomly generated outward. (e.g. (x, y) coordinates for new stars will slowly go outwards from (0, 0) in all directions, preferably in spiral format)
  2. +
  3. Edges will NOT cross.
  4. +
  5. Although there should be some variance, new stars should not be too far or too close to other stars. (E.g. there must be a minimum radius)
  6. +
  7. No star/point should have a multiplicity of more than 3.
  8. +
  9. Given that all of this will be stored in a Database, the algorithm cannot be too costly. In other words, I would love to achieve something of O(n) complexity (I don't know if this is feasible).
  10. +
+ +

Essentially, what I am going for is a spiral looking galaxy where the stars are points on the graph and travel between stars is depicted by edges between those stars.

+ +

The particular steps I need to solve are:

+ +
    +
  1. Randomly generate a point in the neighboring vicinity of other stars that do not yet have a multiplicity of 3.
  2. +
  3. Find the first star that does not yet have a multiplicity of 3 that will not generate edge conflict.
  4. +
  5. If the star is a minimum distance of x units away then create an edge between the two points.
  6. +
+ +

I tried looking for solutions, but my math skills (and knowledge on Graph Theory) need a lot of work. Also, any resources/links on this matter would be greatly appreciated.

+ +

Here is some pseudo-code I was thinking of, but I'm not sure it would even work, and I'm fairly sure it would not perform well after a few tens of thousands of stars.

+ +
newStar = randomly generated (x, y) within radius of last star from origin
+while(newStar has not been connected):
+    for (star in the known universe):
+        if(distance between newStar and star > x units):
+            if(star has < 3 multiplicity):
+                if(path from newStar to star does not intersect another path):
+                    connect the star to the other star
+                    break;
+
+    newStar = new random (x, y) coordinate
+
+ +
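For the path-does-not-intersect-another-path check in the loop above, the standard tool is the orientation test from computational geometry: test each candidate edge only against existing edges. A sketch (proper handling of collinear overlaps is omitted):

```python
def orientation(p, q, r):
    """Cross product sign: >0 counter-clockwise, <0 clockwise, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    """True if segments ab and cd properly intersect (endpoints excluded)."""
    o1 = orientation(a, b, c)
    o2 = orientation(a, b, d)
    o3 = orientation(c, d, a)
    o4 = orientation(c, d, b)
    return (o1 * o2 < 0) and (o3 * o4 < 0)

print(segments_cross((0, 0), (4, 4), (0, 4), (4, 0)))  # True  (an X shape)
print(segments_cross((0, 0), (4, 4), (5, 5), (6, 9)))  # False (disjoint)
```

To keep the whole insertion step near O(1) rather than testing against every edge in the universe, store edges in a spatial grid (or let MySQL's spatial index do it) and test only the edges whose cells the candidate segment passes through.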

Also, if anyone has any advice on how I should store this on a MySQL database, I would also appreciate that.

+ +

Finally, in case nothing makes sense above, I have included an image of what I would like to achieve below:

+",210707,,,,,42580.42986,Algorithm to generate Edges and Vertexes outwards from origin with max multiplicity of 3,,1,12,3,,,CC BY-SA 3.0,, +307143,1,,,1/12/2016 6:27,,4,449,"

I am trying to understand the generic repository implementation. I have seen this line (or similar to it) in many examples:

+ +
public interface IRepository<TEntity> : IDisposable where TEntity : IEntity
+
+ +

Can someone explain the parts of this? Also, I'm having a hard time understanding IEntity and how it plays into the interface.

+ +
public interface IEntity
+    {
+        int Id { get; set; } 
+    } 
+
+public interface IRepository<TEntity> : IDisposable where TEntity : IEntity
+    {
+        IQueryable<TEntity> GetAll();
+        void Delete(TEntity entity);
+        void Add(TEntity entity);
+    } 
+
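To unpack the declaration: IRepository&lt;TEntity&gt; takes a type parameter; ": IDisposable" means the repository interface itself extends IDisposable; and "where TEntity : IEntity" is a generic constraint, meaning only types implementing IEntity may be used as TEntity, which is what lets the repository assume every entity has an Id. A Python analogue of the constraint mechanism, purely to illustrate the idea (this is not the C# code):

```python
from typing import Generic, List, Protocol, TypeVar

class Entity(Protocol):
    id: int                      # the role played by IEntity's Id property

# "bound=Entity" is Python's spelling of C#'s "where TEntity : IEntity".
TEntity = TypeVar("TEntity", bound=Entity)

class Repository(Generic[TEntity]):
    def __init__(self) -> None:
        self._items: List[TEntity] = []

    def add(self, entity: TEntity) -> None:
        self._items.append(entity)

    def get_all(self) -> List[TEntity]:
        return list(self._items)

class User:
    def __init__(self, id: int) -> None:
        self.id = id             # satisfies the Entity protocol

repo: Repository[User] = Repository()
repo.add(User(1))
print([u.id for u in repo.get_all()])  # [1]
```

The payoff of the constraint is that one generic repository implementation can work with any entity type while still relying on the members the constraint guarantees.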
+",166897,,,,,42384.82083,Understanding Generic Repository Pattern,,1,0,1,,,CC BY-SA 3.0,, +307159,1,307161,,1/12/2016 10:38,,0,535,"

It seems as if this example implementation of the Observer pattern is drawn from the book Head First Design Patterns (O'Reilly), which I am currently reading. Here is a UML diagram from the book:

+ +

+ +

It's not very cleanly visible, but the methods, composing the Subject Interface are:

+ +
    +
  • registerObserver()
  • +
  • removeObserver()
  • +
  • notifyObservers()
  • +
+ +

What I am skeptical about is the last method. Why should clients of the interface know about the specific way in which observers get notified? IMHO this method belongs inside the concrete subject implementation, ConcreteSubject.
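The alternative just described, with only register/remove on the public contract and notification kept internal to the concrete subject, can be sketched like this (Python, with invented names; this is not the book's code):

```python
class Subject:
    """Public contract: only attach/detach, no notify."""
    def register_observer(self, obs):
        raise NotImplementedError
    def remove_observer(self, obs):
        raise NotImplementedError

class WeatherData(Subject):
    def __init__(self):
        self._observers = []
        self._temperature = None

    def register_observer(self, obs):
        self._observers.append(obs)

    def remove_observer(self, obs):
        self._observers.remove(obs)

    def set_temperature(self, value):
        self._temperature = value
        self._notify()               # private detail, not part of Subject

    def _notify(self):
        for obs in self._observers:
            obs.update(self._temperature)

class Display:
    def __init__(self):
        self.last = None
    def update(self, value):
        self.last = value

subject = WeatherData()
display = Display()
subject.register_observer(display)
subject.set_temperature(21.5)
print(display.last)  # 21.5
```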

+",54268,,,,,42531.84861,Is this example implementation of the Observer pattern well-written?,,2,3,,,,CC BY-SA 3.0,, +307162,1,307173,,1/12/2016 11:07,,0,163,"

In my node.js application, I have a queue class which has push and pop methods and a data property. I have an Event class which handles an event and pushes it on to the queue.

+ +

Thinking in object-oriented terms, is the relationship between the queue and the event better modeled as ""is a"" (inheritance) or ""has a"" (composition)?

+ +

First approach:

+ +
var util = require('util'),
+    array = [];
+
+function Q(source){
+    this.data = source;
+}
+
+Q.prototype.push = function(x){
+    this.data.push(x);
+};
+
+Q.prototype.show = function(){
+    console.log(this.data);
+};
+
+function Event(y){
+    Q.call(this, y);
+}
+
+util.inherits(Event,Q);
+
+Event.prototype.trigger = function(bb){
+ //some logic
+ this.push(bb);
+};
+
+var x = new Event(array);
+x.trigger('hi');
+x.show();
+
+ +

Second approach :

+ +
var array = [];
+function Q(source){
+    this.data = source;
+}
+
+Q.prototype.push = function(x){
+    this.data.push(x);
+};
+
+Q.prototype.show = function(){
+    console.log(this.data);
+};
+
+function Event(y){
+    this.arr = new Q(y);
+}
+
+Event.prototype.trigger = function(bb){
+    //some logic
+    this.arr.push(bb);
+};
+
+var x = new Event(array);
+x.trigger('hi');
+x.arr.show(); // Event has no show() of its own here; delegate to the inner Q
+
+",116360,,17429,,42381.58403,42381.58403,"When to use ""is a"" or ""has a""?",,1,4,0,42381.56458,,CC BY-SA 3.0,, +307168,1,307171,,1/12/2016 12:43,,17,8654,"

On our team, in addition to individual units of work (Stories), we have longer-running themes of work (Epics). Multiple stories make up an epic.

+ +

Traditionally we've had feature branches for each Story, and merged those straight to master when they pass QA. However, we'd like to start holding back on release of completed stories in an Epic until the Epic is deemed ""feature complete"". We'd only release these features to production when the entire Epic is closed. Furthermore, we have a nightly build server - we'd like all closed Stories (including those that are part of incomplete Epics) to be deployed to this nightly server automatically.

+ +

Are there any suggestions on how to manage our repo to achieve this? I've considered introducing ""epic branches"", where we'd merge closed stories into the related epic branch instead of directly to master, but my concerns are:

+ +
    +
  • I worry about the merge conflicts that may arise if epic branches are kept open for long
  • +
  • Nightly builds would require merging all epic branches into a ""nightly build"" branch. Again, merge conflicts could arise, and this is to be done automatically
  • +
+",210752,,31260,,42381.55139,42381.76667,Git branching strategy for long-running unreleased code,,3,0,6,,,CC BY-SA 3.0,, +307176,1,307178,,1/12/2016 13:55,,1,2995,"

Though it might seem trivial to some, I find it inconvenient when someone formats a string while passing it as a parameter to a method. For example:

+ +
AddMessage( string.Format(""{0} ("" + Constants.Message1 + "")"",
+    Path.GetFileNameWithoutExtension(document.FileName)),           
+    string.Format(""{0}"" + FileExtensionPdf,
+    Path.GetFileNameWithoutExtension(document.FileName)));
+
+ +

I have taken a simple example here, but it can become a lot messier. I prefer not to do this, and instead format my strings before passing them to the method; to me, formatting inline in the call reduces readability.

+ +

I would like to know if there is a standard practice which goes against the above style. Since I am doing a code review, I am not sure if I should put it as a comment.

+",58460,,58460,,42381.58403,42381.61736,The best practice for passing formatted string to methods,,3,8,,,,CC BY-SA 3.0,, +307177,1,307439,,1/12/2016 14:14,,5,4005,"

Backstory

+

While developing with Qt Signals/Slots, I came across a few segmentation faults that had me puzzled as to what was causing them. Eventually I figured out that you can actually invoke a slot without providing all of its arguments, which of course causes a segmentation fault if a missing argument is used.

+

To avoid this pitfall, I am getting into the habit of checking all my arguments to make sure they are not NULL:

+
void MyClass::setMyString(QString input)
+{
+    if (input.isNull()) {
+        qDebug() << "Error: setMyString() received NULL argument";
+        return;
+    }
+
+    m_MyString = input;
+    emit myStringChanged();
+}
+
+

I'm thinking I ought to do this with all my functions from now on. Part of the reasoning is that I figure that it can not hurt to do this, and that if I or someone else ever needs to re-factor my code, it is less likely they will omit a necessary null value check.

+

Questions:

+
    +
  1. Is this an acceptable style in all or most circumstances?
  2. +
  3. Should I be introducing a runtime error, halting the program, instead of simply returning from the function?
  4. +
  5. Does this style of coding improve security?
  6. +
  7. Does this style of coding improve stability?
  8. +
  9. Are there any performance considerations, to the extent that checking for null values may take longer depending on the type?
  10. +
  11. Beyond c++ and Qt, what other languages benefit from this style?
  12. +
  13. Are there any significant drawbacks worth considering?
  14. +
+",136084,,-1,,43998.41736,43269.37847,Is it good practice to have your C++/Qt functions always check all its arguments for null values?,,3,12,2,,,CC BY-SA 3.0,, +307180,1,307317,,1/12/2016 14:32,,12,2787,"

Is there any convention for where we should declare module.exports in JavaScript/Node.js module files?

+ +

Should it be in the beginning of the file like:

+ +
module.exports = Foo;
+
+function Foo() {
+    this.bar = 'bar';
+}
+
+Foo.prototype.getBar = function() {
+    return this.bar;
+}
+
+ +

Or should it be in the end of the file:

+ +
function Foo() {
+    this.bar = 'bar';
+}
+
+Foo.prototype.getBar = function() {
+    return this.bar;
+}
+
+module.exports = Foo;
+
+ +

I know that there is no technical difference. The first example is perfectly valid because of declaration hoisting.

+ +

So I was wondering if there are some kind of best practices.

+",91694,,,,,43287.53333,Convention to where to declare module.exports on Javascript files,,1,3,1,,,CC BY-SA 3.0,, +307181,1,,,1/12/2016 14:38,,3,116,"

The case in point is Entity Framework, but this is a design question that applies to any ORM.

+ +

In the current application we have a couple of ORM data classes which do stuff not directly related to the function of that class.

+ +

One example is the Document class, which is a database representation of a document stored on a network drive. It has a constructor which actually makes a physical copy of the document:

+ + + +
public Document(string filePath, int categoryId, User currentUser)
+{
+    FileInfo info = new FileInfo(filePath);
+    this.Name = info.Name;
+    this.Size = info.Length.ToString();
+
+    this.CategoryId = categoryId;
+    this.User = currentUser;
+    this.CreatedOn = DateTime.Now;
+
+    targetPath = ConfigurationManager.AppSettings[""CampaignFolder""];
+    FileHelper.CopyFile(filePath, targetPath + ""\"" + this.Name);
+}
+
+ +

You can see how this would be convenient, especially if you're potentially loading files from lots of places in the code and you want them stored in a central location.

+ +

Another example is the Category class, which is related to the above. The class itself has a function that has to go back to the repository to do some additional work - so we pass one in:

+ +
public List<Document> SearchDocuments(string description, IRepository rep)
+{
+    if (this.Documents.Any(dc => dc.Description == description))
+    {
+        return this.Documents.Where(dc => dc.Description == description).ToList();
+    }
+
+    using (IRepository r = rep.Renew())
+    {
+        return r.GetDocuments(d => d.CategoryId == this.CategoryId 
+                        && d.Description == description).ToList();
+    }
+}
+
+ +

Again, you can see how this is convenient - it uses properties inherent to the object, and it's a useful place to put this function if it's called from a lot of different places.

+ +

It's also worth noting that this latter function, at least, doesn't break testability.

+ +

However, I can't help the sense that this is wrong somehow. It feels like a violation of single responsibility - these things are just for carrying data and perhaps some computed properties. But on the other hand, putting functions like this inside them removes the need for an annoying proliferation of helper classes.

+ +

Is this a bad idea? Or am I worrying over nothing?

+",22742,,,user40980,42381.63889,42381.65139,Is it a bad idea to put externally dependent logic in ORM classes?,,2,0,,,,CC BY-SA 3.0,, +307187,1,307188,,1/12/2016 15:04,,1,244,"

I need to understand the proper name for an object that has cropped up in two projects now. Here is the conventional representation for the MVC pattern:

+ +

+ +

However there is another ""Model"" that is normally present in MVC, which passes the data between the controller and the view (and back):

+ +

+ +

What is the second model object called? I have heard it referred to as a ""Data Transfer Object,"" a ""Value Object,"" and a ""Model Object.""

+ +

This object is typically necessary because the data being presented in a ""View"" is almost never in the same form as the data being stored in the data store. Four or five (or more) DAOs may be pulled together in the service or controller to populate this object, and it may be used to update more than one DAO on the way back to the data store.

+ +

So, what is the ""proper"" name for this second model object?

+ +

(Image from Creative Commons, Shane Brinkman Davis)

+",105068,,,,,42381.75208,What is the correct name for this data object?,,3,9,,,,CC BY-SA 3.0,, +307189,1,307190,,1/12/2016 15:18,,5,467,"

I want to understand why we use the term ""network topology"" as opposed to ""network graph"", or another term, to talk about the structure of networks. I'm working on a network design for a project, and want to make sure I don't use any terms that I don't truly understand.

+ +

Wikipedia defines network topology as ""the arrangement of the various elements (links, nodes, etc.) of a computer network."" This strikes me as interesting, because when I hear the words link and node, I immediately think of graph theory and the objects it is concerned with.

+ +

Topology, according to Wikipedia again, is ""concerned with the properties of space that are preserved under continuous deformations..."" And when you look at the basic examples of topological objects, you see coffee cups and Möbius strips, as opposed to the discrete vertices and edges you see with graph theory.

+ +

So why do we refer to networks as having a ""topology""?

+",92609,,,,,43637.59792,"Why is it called network ""topology""? Why not network ""graph""?",,3,3,,,,CC BY-SA 3.0,, +307193,1,307194,,1/12/2016 15:46,,2,179,"

Referring primarily to here, it suggests that values which are constant in JavaScript (using the keyword const) should be named in SHOUT_CASE. I'm of the opinion though that mutability is much more important (and rare) than immutability, at least in JavaScript, and that having so many variables put in SHOUT_CASE would actually harm readability, rather than aid it, and dilute the meaningfulness of the convention itself.

+ +

Now, I understand that SHOUT_CASE for constants is useful in languages that do not have inherent support for constant values built into the runtime - for example, ES5 javascript, where you had var and nothing else. But with language-level support for const values, is there much use for this convention any more?

+ +

At runtime, any identifier created using the keyword const cannot be re-used or re-assigned to. This isn't strictly const correctness in the C/C++ sense, but for primitives it is fine. For objects, you'd have to use Object.freeze to get const-correctness. JavaScript is far from the only language to do this, of course. Fields are commonly readonly (C#) or final (Java) [citation needed].

+ +

What benefits would having things labelled in SHOUT_CASE present in a language that already has const support built into the syntax?

+",110316,,81495,,42381.66806,42381.66806,Should constant values be in SHOUT_CASE when there is language support for them?,,1,2,,,,CC BY-SA 3.0,, +307195,1,307197,,1/12/2016 15:54,,7,2537,"

For the Red-black tree insertion fixup the book distinguishes between 6 cases, 3 of which are symmetric. +The cases are (z is the node being inserted):

+ +
    +
  • Case 1: z's uncle is red
  • +
  • Case 2: z's uncle is black and z is a right child
  • +
  • Case 3: z's uncle is black and z is a left child
  • +
+ +

Case 2 is a subset of case 3, as we can transform Case 2 into 3 with a left rotation.

+ +

However in the book's pseudocode which you can see here or here they write as follows:

+ +
if uncle.color == red:
+  # Handle case
+else if z == z.p.right:
+  # Handle case 2
+  # Handle case 3
+
+ +

Shouldn't this be:

+ +
if uncle.color == red:
+  # Handle case 1
+else:
+  if z == z.p.right:
+    # Handle case 2
+  # Handle case 3
+
+ +

Am I missing something? Does the book use else if in a different way than say Python does? The C++ implementation provided here uses the second version as I expected.
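
To make the two readings concrete, here is a small Python sketch (illustrative only - not the book's pseudocode, and not a real red-black tree fixup) of the control flow in which case 2 is an optional transform that falls through into case 3:

```python
def cases_handled(uncle_is_red, z_is_right_child):
    # Illustrative control flow only: which fixup cases run for which input.
    handled = []
    if uncle_is_red:
        handled.append('case 1')
    else:
        if z_is_right_child:
            handled.append('case 2')  # transform (left-rotate), then fall through
        handled.append('case 3')      # always runs when the uncle is black
    return handled
```

The sketch only shows that the nested-if-with-fallthrough version runs both case 2 and case 3 when the uncle is black and z is a right child, which is the behaviour the second version expresses.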

+",95060,,,,,42381.67847,Is this Red-Black tree insertion pseudocode from Introduction to Algorithms (CLRS) correct?,,1,0,3,,,CC BY-SA 3.0,, +307199,1,,,1/12/2016 17:04,,13,11566,"

Here is a typical C++ code:

+ +

foo.hpp

+ +
#pragma once
+
+class Foo {
+public:
+  void f();
+  void g();
+  ...
+};
+
+ +

foo.cpp

+ +
#include ""foo.hpp""
+
+namespace {
+    const int kUpperX = 111;
+    const int kAlternativeX = 222;
+
+    bool match(int x) {
+      return x < kUpperX || x == kAlternativeX;
+    }
+} // namespace
+
+void Foo::f() {
+  ...
+  if (match(x)) return;
+  ...
+
+ +

It looks like a decent idiomatic C++ code - a class, a helper function match which is used by the methods of Foo, some constants for that helper function.

+ +

And then I want to write tests.
+It would be perfectly logical to write a separate unit test for match, because it's quite non-trivial.
+But it resides in an anonymous namespace.
+Of course, I can write a test which would call Foo::f(). However, it won't be a good test if Foo is heavy and complicated; such a test won't isolate the testee from other unrelated factors.

+ +

So I have to move match and everything else out of the anonymous namespace.

+ +

Question: what's the point of putting functions and constants into the anonymous namespace, if it makes them unusable in tests?

+",7270,,7270,,42382.53958,42384.09167,Anonymous namespaces make code untestable,,3,11,2,,,CC BY-SA 3.0,, +307206,1,307218,,1/12/2016 18:28,,3,769,"

I can't find much guidance on how to design complex JSON structures beyond the obvious tips of not trying to nest too deeply, using defined data types, etc.

+ +

For example, if I have a location that needs to have security scanning done on all of its segments and devices within the segments, there are many options of how I could do this.

+ +
{
+    ""site"": ""Site 1"",
+    ""segments"": [
+        {
+            ""name"": ""Segment 1"",
+            ""devices"": [
+                {
+                    ""name"": ""Device 1"",
+                    ""scans"": [
+                        {
+                            ""type"": ""discovery"",
+                            ""date"": ""2016-01-12"",
+                            ""phase"": ""10"",
+                            ""remediate"": ""0""
+                        },
+                        {} ...
+                    ]
+                },
+                {} ...
+            ]
+        },
+        {} ...
+    ]
+}
+
+ +

For this example, a few questions come to mind:

+ +
    +
  1. Is it okay to use the property ""name"" twice, since they are on different levels? I've read that it's better to keep the property names short for parsers. Therefore, should you use it twice? Or change them to ""seg_name"" and ""dev_name"", for example?

  2. +
  3. You can see a clear pattern for ""segments"", ""devices"", and ""scans"" where they are each an array of objects.

  4. +
+ +

I could change it to something like this:

+ +
{
+    ""site"": ""Site 1"",
+    ""segments"": {
+        ""Segment 1"": {
+            ""Device 1"": {
+                ""discovery"": {
+                    ""date"": ""2016-01-12"",
+                    ""phase"": ""10"",
+                    ""remediate"": ""0""
+                },
+                ""exploit"": {} ...
+            },
+            ""Device 2"": {} ...
+        },
+        ""Segment 2"": {} ...
+    }
+}
+
+ +

The issue I can see popping up with this format is that if you wanted to have a property for all of the segments, you would have to put it at the root level, instead of inside the ""segments"" property, since the property name could possibly conflict with a segment name. However, it is less nested, which is a plus.

+ +
+ +
+

I'm wondering if there are some guidelines of which situations are best suited for a certain format?

+
+ +
+ +

If it's really dependent on what language you are using it for, I would be sending the data between JavaScript and PHP.

+",175782,,29020,,42381.85069,42381.88819,Suggestions for structuring complex json structures?,,1,0,,,,CC BY-SA 3.0,, +307208,1,,,1/12/2016 16:42,,2,3435,"

This is from Effective C++ (Meyers):

+ +
+

Classes not designed to be base classes or not designed to be used polymorphically should not declare virtual destructors

+
+ +

I don't understand why non-polymorphic classes should not declare virtual destructors.

+ +

Assuming I have a parent class and a child class, with no virtual functions, and I have a parent-class-pointer to a child object: if I call delete on the parent-class-pointer, it will only call the parent destructor, even though I also want to call the child destructor.

+",,programmer,149904,,42382.3625,42382.46181,C++ Virtual destructors used only when there are virtual functions,,3,3,1,,,CC BY-SA 3.0,, +307211,1,307526,,1/12/2016 19:01,,4,94,"

We're working on a project wherein the business users operate on a set of data that is periodically published. We've labeled the publishing milestones as Versions, and, due to some business constraints, end up duplicating the data set for each Version that gets published.

+ +

During the work cycle, it often becomes prudent for one of the work groups to spin off a side project (which we've aptly named Project), work on that set of data for some time, then blend their changes back into the parent Version, possibly waiting until two or more Versions have been published before their changes get blended back in.

+ +

I can see the lights coming on from here. ""Ah ha!"" you say, ""That's the same thing that happens in [Git, SVN, TFS, etc.]; code branching and merging. Just do that.""

+ +

Right, that's a perfect lead-in to my question: how does one implement branching and merging for business data that is spread across seven or so primary objects and their representative children? I understand clearly what I want to achieve, I just have no idea how to write the rules so that the developers can take it forward and implement successfully (I'm the solution architect for the project and a middling code writer). I tried researching it, but am only finding articles on which Git (or Svn or whatever) commands to run to get the desired results.

+ +

By the way, we're developing on .NET 4.5 and implementing Workflow Foundation and Entity Framework if that helps matters.

+ +

Update

+ +

Per the question below, the ""primary objects"" are things like Model, Option, Version, Rule, and Note. Each of those have child objects (a Rule has a set of Conditions), joining objects (RuleVersion), and of course properties (Option.Description, etc.). One of the system's primary function is to track cost and retail price, so Price and Cost objects are hanging on the edges, each with their own set of properties (Price.Retail, Price.Calculated, etc.).

+",170302,,170302,,42381.87917,42384.72222,Branching and Merging Business Data,<.net>,2,4,,,,CC BY-SA 3.0,, +307212,1,307265,,1/12/2016 19:26,,3,1707,"

For service methods that call repository methods to interact with a database, how can I unit test those service methods?

+ +

For example,

+ + + +
public function updateSlideshow($data){
+    // do some logic and method calls (I can test those methods,
+    // but for this method that call repository ...)
+
+    $r = $this->blockRepo->update($data, $config, $slides);
+}
+
+ +

How can I check that this method works correctly, or at least sends the right data to the update method?

+ +

And what about a scenario where the method first fetches data from the repository and then performs logic on it?

+",150418,,,user40980,42382.59167,42382.59167,How unit test service method that use repository method,,2,2,,,,CC BY-SA 3.0,, +307214,1,,,1/12/2016 19:48,,1,2936,"

I'm designing a price list table for my database.

+ +

It will include Customer, Model, Start_date, End_date, Price, Currency, RRP

+ +

When I update a new price list, which is sent every now and then (maybe every 1~3 months), I need to update the prices but I would like to keep the records of what has already been raised.

+ +

Currently in system:

+ +
Customer  - Model - Start Date - End Date   - Price - Currency - RRP
+A         - Z     - 2015-10-20 - 2015-12-19 - 120   - GBP      - 220
+A         - Z     - 2015-12-20 - 2999-12-31 - 100   - GBP      - 200
+
+ +

After updating new price:

+ +
Customer  - Model - Start Date - End Date   - Price - Currency - RRP
+A         - Z     - 2015-10-20 - 2015-12-19 - 120   - GBP      - 220
+A         - Z     - 2015-12-20 - 2016-02-20 - 100   - GBP      - 200
+A         - Z     - 2016-02-21 - 2999-12-31 -  90   - GBP      - 180
+
+ +

What is the best way to update the price?

+ +

I tried to Google, which has solved all my problems until now, but most of the search results bring up SQL pricing plans :(

+ +

I have learned MySQL 9 years ago in college for a few months so I know how to interpret the SQL scripts but I am totally lost when I'm trying to create anything from scratch.

+",210803,,61852,,42835.05208,43045.28403,Designing pricing table (RDBMS agnostic),,5,7,0,,,CC BY-SA 3.0,, +307219,1,,,1/12/2016 20:30,,2,115,"

Considering a small embedded C project, how to decide if certain constants belong to

+
    +
  • a global configuration file

    +
  • +
  • the header of the "module == compilation unit"

    +
  • +
  • on top of the the actual C file

    +
  • +
  • or inside the code?

    +

    I am talking about constants that can often change, and need to be conveniently located for the end user, as well as for the developer. Several examples include ENABLE_GLOBAL_ERROR_CHECKING, USART_BAUD_RATE and TEMPERATURE_SETPOINT.

    +
  • +
+",54268,,-1,,43998.41736,42381.96319,Where to put configuration constants?,,2,2,,,,CC BY-SA 3.0,, +307221,1,307337,,1/12/2016 20:39,,3,923,"

In this very simplistic but realistic scenario, I have 2 combo web/database servers (A) behind a load balancer and a single IP address. They also permit access only from one IP address -- the client (B). On each database server, we need to store read-only secret identity information (like email address, username, etc.).

+ +

I need a solution that prevents a hacker who may have compromised server (A) from being able to do anything with the data unless they also gain access to server (B).

+ +

I need a strategy where client server (B) uses a public key from a private/public API key pair to find a match for this identity information (like an email address) on server (A), and then, without server (A) evaluating that result, return the result back to (B) for evaluation using its private key. The theory being that if the hacker gets root level access to server (A), they can't do anything with it to decode the data, not even with brute-force attacks on md5(), sha1(), sha256(), or against any hashing algorithm.

+ +

Here's a strategy that won't work, as an example. Imagine you have PHP's libsodium (a common public/private key system). You use it to encrypt the data and store it on (A). Then, when (B) wants to do a lookup, they send in the public key, it gets mated with the private key on (A), a hash is generated, and that hash is checked against a hash in the database. (So, you find if hashed email request matches a hashed email address in the database.) The problem with this strategy is that if server (A) is compromised for root-level access, all the attacker has to do is intercept the web traffic to find the public key from (B), combine it with the private key from (A) in the PHP server scripts, and then use that information to run a powerful brute-force script to break the hash so that the entire database can be decrypted. This brute-force attack has been used to break md5() and sha1(), but can be used to break even someone's sha256 or other hash API.

+ +

Right, so that won't work. It would be better if there was some way that the result of the match can't be interpreted on server (A) and must be sent back to server (B) for final evaluation, which then requires server (B)'s private key to decrypt the result.

+ +

Using PHP, can you explain a programmer API key strategy where a compromised data store will yield no useful information to a hacker unless he has the private key of that programmer's public/private key pair?

+",25887,,25887,,42382.31667,42382.91319,"Strategy for Public / Private API, Encrypted (or Hashed) Data, and Server Compromise",,1,4,2,,,CC BY-SA 3.0,, +307223,1,,,1/12/2016 21:17,,13,3820,"

When programming in C I have found it invaluable to pack structs using GCC's __attribute__((__packed__)) attribute so I can easily convert a structured chunk of volatile memory to an array of bytes to be transmitted over a bus, saved to storage or applied to a block of registers. Packed structs guarantee that, when treated as an array of bytes, they will not contain any padding, which is wasteful, a possible security risk, and possibly incompatible when interfacing with hardware.

+ +

Is there no standard for packing structs that works in all C compilers? If not, am I an outlier in thinking this is a critical feature for systems programming? Did early users of the C language not find a need for packing structs, or is there some kind of alternative?

+",26933,,,,,43089.68472,Is there a standard way or standard alternative to packing a struct in c?,,4,1,1,,,CC BY-SA 3.0,, +307228,1,307232,,1/12/2016 22:53,,6,424,"

Say I have an array of strings, like this:

+ +
var folders = new[]
+{
+    ""Foo"",
+    ""Bar"",
+    @""Foo\Bar"",
+    @""Foo\Bar\Baz""
+};
+
+ +

And that I have an object that represents a folder - something like this:

+ +
class Folder
+{
+    private readonly string _name;
+    private readonly IEnumerable<Folder> _folders;
+
+    public Folder(string name, IEnumerable<Folder> folders)
+    {
+        _name = name; 
+        _folders = folders;
+    }
+
+    public string Name { get { return _name; } }
+    public IEnumerable<Folder> Folders { get { return _folders; } }
+}
+
+ +

What would be a good way to end up with an object structure like this?

+ +
- Folder {Name:Foo}
+  - Folder {Name:Bar}
+    - Folder {Name:Baz}
+- Folder {Name:Bar}
+
+ +

I'm thinking this in terms of splitting the strings on the delimiter and then grouping... and I'm thinking this wrong, I simply don't have an approach to get there, it's not going anywhere. I get the gut-feeling that I need to involve recursion somehow, but I don't see where to fit that in, I'm stuck.

+ +

The example code above is C#, but I don't need actual code, just some pseudo-code, or a line of thought, a little clue.
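
One possible line of thought - sketched here in Python rather than C#, with illustrative names - is to split each path, insert the segments into a nested dictionary acting as a simple trie, and then walk that dictionary recursively to build the folder objects:

```python
def build_tree(paths, sep='\\'):
    # Insert every path segment into a nested dict (a simple trie).
    root = {}
    for path in paths:
        node = root
        for part in path.split(sep):
            node = node.setdefault(part, {})
    return root

def to_folders(tree):
    # Recursively convert the nested dict into (name, children) pairs;
    # in C# this step would construct Folder(name, children) instead.
    return [(name, to_folders(children)) for name, children in tree.items()]

folders = ['Foo', 'Bar', 'Foo\\Bar', 'Foo\\Bar\\Baz']
tree = to_folders(build_tree(folders))
```

The recursion lives only in the final conversion step; the insertion itself is a plain loop over path segments.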

+ +

...I hope it's in-scope?

+",68834,,,user40980,42381.96319,42382.00903,What would be a good approach to generate a tree of folders?,,3,0,,,,CC BY-SA 3.0,, +307242,1,307257,,1/13/2016 2:21,,1,250,"
+

However, there are some classes that should have only one instance.

+ +

Sometimes they are factories, which you can use to create the other objects in the system.

+ +

If more than one factory exist, clerical control over the created objects may be compromised.

+
+ +

From : Agile software development : PPP

+ +

What is the clerical control that Uncle Bob talks about, which would be compromised?

+",204027,,,,,42382.50347,Making more than one instance of factory that is supposed to be singleton,,1,3,,,,CC BY-SA 3.0,, +307253,1,307291,,1/13/2016 8:16,,0,1775,"

If Eclipse is written in Java and Java is platform independent, why does Eclipse offer different versions for different platforms?

+ +

I assume it should be write once, run anywhere code.

+ +

+",89049,,89049,,42382.40347,42382.61806,Why is Eclipse platform dependent?,,1,14,,,,CC BY-SA 3.0,, +307258,1,307285,,1/13/2016 9:29,,1,91,"

I'm developing a MVC Web application with a REST interface.

+ +

The REST controller performs actions on persisted items through a service class, which translates exceptions coming from the persistence layer.

+ +

When a request involves an item that does not exist in the database, I would like to return a 404 code, but this is not possible due to the exception translation performed by the service, which prevents the controller from knowing the reason for the problem.

+ +

Should the service class throw a specific exception just for this case? Alternatively, should the controller check that the item exists before performing any action?

+",210873,,210873,,42382.57431,42382.58611,Should a client check for persisted item existence before modifying it?,,2,4,,,,CC BY-SA 3.0,, +307263,1,307290,,1/13/2016 10:36,,2,4648,"

I have a tree that has n-levels. For example here I have four levels:

+ +

+ +

Each node has two children (except for the nodes in the last row); however, every node except the first and last of each row has two parents. I'm trying to figure out a scalable way to collect all paths into a List of Lists, so that for this example I will have a List of Lists of chars:

+ +
A,B,D,G
+
+A,B,D,H
+
+A,B,E,H
+
+ +

etc.

+ +

Can anyone help steer me in the right direction toward an algorithm for this, regardless of how many levels there are?
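
A Python sketch of one approach, assuming the triangular layout implied by the example paths (rows of nodes where node i in a row has children i and i+1 in the next row): a depth-first walk that collects every root-to-leaf path.

```python
def all_paths(rows):
    # rows[r][i] has children rows[r+1][i] and rows[r+1][i+1].
    paths = []
    def walk(r, i, prefix):
        prefix = prefix + [rows[r][i]]
        if r == len(rows) - 1:
            paths.append(prefix)       # reached a leaf: record the path
        else:
            walk(r + 1, i, prefix)     # left child
            walk(r + 1, i + 1, prefix) # right child
    walk(0, 0, [])
    return paths

rows = [['A'], ['B', 'C'], ['D', 'E', 'F'], ['G', 'H', 'I', 'J']]
```

The same depth-first idea works for any number of levels; for n levels below the root it produces 2^(n-1) paths, since each step chooses one of two children.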

+",150538,,150538,,42382.44653,42382.60486,Algorithm to get all paths in a tree,,1,3,1,,,CC BY-SA 3.0,, +307269,1,,,1/13/2016 11:29,,-1,803,"

I have been developing and deploying many ASP.Net MVC web applications under Windows Server 2008 R2 & Windows Server 2012 R2, and I used IIS 7 & IIS 8. At the same time I know that I can deploy my ASP.Net MVC under Linux and Unix, without the need to change my code.

+ +

Today I created a console application which is being called from my ASP.Net MVC web application. This console application represents a long running process which calls 3rd party systems and generate a report. It can be started from my ASP.Net MVC web application and it runs on timely basis as a task inside Windows tasks scheduler.

+ +

Does that mean that my web application can not run on any operating system other than Windows?

+",179709,,25936,,42383.57153,42383.57153,Creating an ASP.Net MVC web application that can work on different operating systems,<.net>,2,1,0,,,CC BY-SA 3.0,, +307276,1,307280,,1/13/2016 12:46,,4,2386,"

I'm creating an API, and I want to overload a function for strip:

+ +
QString MyClass::strip();
+QString MyClass::strip(QRegularExpression open);
+QString MyClass::strip(QRegularExpression close);
+QString MyClass::strip(QRegularExpression open, QRegularExpression close);
+
+ +

Obviously the second and third conflict.

+ +

What is the recommended style for C++ and Qt programmers to restructure this?

+",136084,,,,,42384.55139,I want to overload a function with the same type parameter, what should I do?,,3,4,1,,,CC BY-SA 3.0, +307283,1,307319,,1/13/2016 13:37,,4,121,"

On a mobile device, a set of operations has been saved in the local DB with a wrong date (because the system date was set in the future). Then the device regularly doing synchronisation with a server DB, the erroneous operations have been propagated to the server.

+ +

Later the wrong date has been detected by the mobile user, and the system date set back correctly, and the user resumed his normal flow of work, pushing even more data to the server.

+ +

What should be the best strategy to clean the data from this incorrect state. As suggested in Accountants Don't Use Erasers, a wrong entry (invoice for example) should be compensated by a negative corresponding entry (credit note).

+ +

Then what if the problem is with the date of the entry? How does one compensate for a wrong date?

+ +

More generally, what is the best strategy to roll back a database long after data has been committed? And how does one roll back when the database is synchronized over multiple systems?

+ +

NB : I posted first in stackoverflow.com but I believe here is a more appropriate place.

+",210929,,,,,42382.76875,How can we rollback a database synchronized over multiple systems?,,1,4,,,,CC BY-SA 3.0,, +307292,1,,,1/13/2016 15:14,,110,11394,"

I'm an experienced developer, but have not done many code reviews. I'm being asked to review code written in Python but I do not know Python.

+ +

Does it make any sense at all to review code in a language I don't know?

+",210942,,150269,,42383.76736,42385.63889,Is it effective to review code in language I don't know?,,11,16,11,,,CC BY-SA 3.0,, +307293,1,307305,,1/13/2016 15:16,,0,287,"

As I was beginning to tool around a bit with node, I was told that I needed to undergo a little bit of a paradigm shift since I was coming from a PHP background. I would ask questions like, ""I have my script working in terminal, now how can I upload it to the server and access it through an AJAX request?"". I was told that ""it doesn't work like that"" and that ""Node scripts aren't files that just magically work like PHP when you upload them to a server. Node IS a server"".

+ +

So I thought to myself, ""Okay. Let's just put this on the shelf for a while."" I basically only use node for scripts that I run locally anyway, so if I have to type node server.js in terminal to use my script, it wasn't that inconvenient, and it allowed me to happily learn it without having to worry about shifting any of my paradigms.

+ +

Then, last week I made a small front end that worked with some node scripts I was writing and I realized that I needed members of another team to use it. I guessed that they weren't going to have Node installed or be comfortable enough with a CLI to type node server.js. I found out about Heroku. And I uploaded all my scripts to it, and you know what: it magically works! Paradigm shift avoided AGAIN.

+ +

I feel like continually having avoided this paradigm shift is hurting me. So I need to figure it out. How node can be a server? Yet also node files can be uploaded to something like Heroku (/Docker? Another ""container"" I've heard discussed) and work in the same way as I'm used to server-side PHP running? Yet, it seems, at the same time not: I can't upload them to an Apache server I don't think and expect them to work. Also, since I've mentioned Docker, once I understand exactly what Heroku is, is Docker basically the same thing?

+ +

(I felt this question is correct for this SE site, but please let me know if you think it would be a better fit on SO or somewhere else)

+",83426,,,,,42382.84236,"Confusion over Node as a ""script"" vs Node as a ""server""",,1,3,,,,CC BY-SA 3.0,, +307315,1,307459,,1/13/2016 18:01,,4,242,"

I am attempting to write a program for a competition (we're allowed to consult Stack Exchange, as long as I'm not given physical code) that takes in a list of 5000 people's names (distributed fairly evenly from A-Z, but are real names - not random character strings, and each name is unique) and sorts this list of people into ""rooms"" of exactly 4 people each, with each person being placed in exactly one room only (cannot reuse names, cannot omit names).

+ +

Each ""solution"" consists of 1250 such rooms (I store it in a 2d array and am using Java), and the solution's ""score"" is defined to be the sum of each room's score, where the room score is the number of common letters held in each of the 4 people's names. A given letter is only counted more than once if it appears exactly that many times in each person's name.

+ +

I began with algorithms that simply placed the names into random rooms just to test the waters and see what the low-end of scores might look like (around 650). I then tried doing a simple alphabetical sort of the list containing the names (using Collections.sort()), which considerably improved the score (around 3800), but still had quite a bit of room for improvement.

+ +

My latest algorithm has been a slightly modified version of simulated annealing, where I begin with the alphabetically sorted list and continuously switch two people from two rooms (by picking two random room numbers and two random indices in the rooms). Inside each outer level iteration, I update the annealing temperature and conduct an inner loop search of 1000 iterations (at the temperature of the outer loop) where I get 1000 random swaps, pick the best one (no temperature-probability selection in the inner loop), and use that in the outer loop, accepting it as the new solution if its score is higher and accepting it with some probability if it is lower.

+ +

This SA approach has gotten me a score in the low 6000's, and runs quite quickly, but it seems to have asymptoted. Are there any other algorithms or optimizations I should consider? Should I move away from a stochastic approach and look for construction approaches that make use of the alphabetical nature of the words themselves? Thanks!

+",210975,,210975,,42383.22917,42384.26458,Name-Based Sorting Algorithms For Maximized Common Letters,,1,2,,,,CC BY-SA 3.0,, +307322,1,,,1/13/2016 19:00,,-1,83,"

Is the definition for ""Secondary Index"" anything more specific than just ""Any index that is not the primary index""?

+ +

EDIT: Here is some research I have done:

+ +
    +
  1. Search Google. I evaluated the first 20 results. These included:

    + +

a) links to websites for proprietary DB products such as DynamoDB, Oracle, MySQL, etc. (and therefore not valid answers IMO because I'm asking about definitions independent of DB product)

    + +

    b) the opinion of someone named Devang Savaliya[1] who identifies as a Footballista and whose expertise I'm less than 100% confident in.

    + +

    c) Course notes from universities such as Simon Fraser University[2] and UC Davis[3], which don't seem to be specifically about database indices

    + +

    d) A defintion from TutorialsPoint.com[4]

  2. +
  3. Refer to the Wikipedia entry for database indexes[5] which does not contain the word ""secondary""

  4. +
+ +

[1] https://www.quora.com/What-is-difference-between-primary-index-and-secondary-index-exactly-And-whats-advantage-of-one-over-another/answer/Devang-Savaliya +[2] http://www.cs.sfu.ca/CourseCentral/354/zaiane/material/notes/Chapter11/node8.html +[3] http://web.cs.ucdavis.edu/~green/courses/ecs165a-w11/7-indexes.pdf +[4] http://www.tutorialspoint.com/dbms/dbms_indexing.htm +[5] https://en.wikipedia.org/wiki/Database_index

+",32000,,32000,,42382.91389,42382.91389,"Is there a generally accepted definition of ""Secondary Index"" independent of DB product?",,1,2,,42388.975,,CC BY-SA 3.0,, +307323,1,307327,,1/13/2016 19:04,,1,239,"

At a recent interview I was asked to write an algorithm which:

+ +

Given a string text, and string subtext, finds the starting character positions of each subtext found within the text.

+ +
""hey hi how are you hi"" = 5, 20
+
+ +

I was forbidden to use any System.String functions (Substring, IndexOf etc).

+ +

I wrote this in C#:

+ +
IEnumerable<int> CalculateSubtextPositions(string text, string subtext)
+{
+    var charIndexToMatch = 0;
+    for (int i = 0; i < text.Length; i++)
+    {
+        if (text[i] != subtext[charIndexToMatch])
+            charIndexToMatch = 0;
+
+        if (text[i] == subtext[charIndexToMatch])
+        {
+            charIndexToMatch++;
+            if (charIndexToMatch == subtext.Length)
+            {
+                yield return i - charIndexToMatch + 2;
+                charIndexToMatch = 0;
+            }
+        }
+    }
+}
+
+ +

Which was described as 'disappointing'. It may well be I've missed something, but I can't see what's hugely wrong with the above. It works, it's efficient, and reasonably easy to read compared to many approaches (at least in my opinion, please correct me if I'm wrong).

+ +

Would people be able to suggest where I might have messed up? It may have been because I was asked to make the code 'reusable', but I'm not sure to what extent that can be applied when you're only asked to write an algorithm.

+ +

Please do criticise! :)
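
For comparison, a common interview concern with restart-on-mismatch scanners is overlapping prefixes (e.g. finding ""aab"" within ""aaab""). Here is a straightforward alternative - a Python sketch with illustrative names, not the answer the interviewer necessarily wanted - that checks the whole subtext at every starting position, yielding 1-based positions as in the 5, 20 example:

```python
def subtext_positions(text, subtext):
    # Naive O(len(text) * len(subtext)) scan: every start position is
    # checked independently, so overlapping prefixes cannot be missed.
    for start in range(len(text) - len(subtext) + 1):
        if all(text[start + k] == subtext[k] for k in range(len(subtext))):
            yield start + 1  # 1-based, matching the example in the question
```

A follow-up optimisation in this direction would be a linear-time scanner such as Knuth-Morris-Pratt, which keeps the single-pass style but backs up the pattern index correctly on a mismatch.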

+",45370,,81495,,42382.91389,42382.91389,What is wrong with this substring-matching algorithm?,,1,7,,42391.89861,,CC BY-SA 3.0,, +307329,1,,,1/13/2016 19:48,,2,138,"

This is a crazy idea that I just came up with, and I'm interested in knowing if it would be workable, or if someone already wrote about or implemented it.

+ +

Imagine you are on a platform (a game console, iOS, ...) where you cannot implement a JIT compiler due to technical reasons1 - you cannot make writable memory executable. You can write an interpreter, but you'd like to make it faster. Now, memory is fairly cheap and code is relatively small compared to other assets, so you can always add more pre-compiled code.

+ +

What if you just add lots of (ahead-of-time) compiled code pieces to your binary - one for every sequence of instructions you're likely to need? You can make the pieces configurable by passing arguments in through registers or memory. One trivial example is replacing a simple loop (pseudocode)

+ +
for i in range(100000):
+    array[i] = 0;
+
+ +

with memset(&array, 0, 100000). But you can do a lot better. Compile some typical programs, take the 1000 top N-grams of instructions, and put them in your binary. Now string them together - either using computed jumps (I don't know if they would be available in a typical locked-down system) - or by wrapping the larger ones in functions, or by using some return-based-programming trickery.

+ +

There are a few trade-offs here:

+ +
    +
  • One is that there is much overhead since you have to compile in a lot more code than you actually will use. However, it might be that performance-critical code (for a given platform and use case) has a lot of common pieces. Think graphics code for example.

  • +
  • Another one is that, while executing the compiled code bits is faster than interpreting them, you have some overhead due to jumping around between the code bits. I also have a hunch that the lack of cache locality between far-apart code pieces might be bad. Both these should be especially true on modern processors.

  • +
+ +

So, I'm wondering if someone smarter than me already thought about this, and can tell me about these trade-offs, and how well this would work in reality.

+ +
+ +

1) Note I'm not asking about the legal aspects, which is beyond the scope of the site anyway. Someone might forbid you from writing a JIT compiler, and then you invent something that is technically not a JIT, but the same thing in spirit, and you've just created a lot of work for lawyers. This question is about technical aspects - say you want something JIT-like on a Harvard architecture computer.

+",62069,,,,,42382.94306,JIT based on precompiled code templates,,2,2,,,,CC BY-SA 3.0,, +307331,1,,,1/13/2016 20:21,,4,151,"

The Problem

+

Suppose we are given a variable-length list of objects containing a fixed-length list of positive decimal numbers as attributes.

+

JSON Example

+
[
+    {a: 0.1, b: 0.6, c: 0.0},
+    {a: 1.0, b: 1.3, c: 0.2},
+    {a: 1.2, b: 0.1, c: 0.3},
+    {a: 0.2, b: 0.2, c: 0.5},
+    {a: 0.8, b: 0.2, c: 0.6}
+]
+
+

Create a quantity distribution to multiply each object's attributes by, so that the sum product of all objects multiplied by the distribution equals 1. I want to use as few objects as possible to create the total, rather than using small amounts of every object.

+

If the distribution was [1, 1, 1, 1, 1], the result would be {a: 3.3, b: 2.4, c: 1.6} because we would simply add up each of the attributes as they are all multiplied by 1.

+

A Solution

+

A perfect distribution in this case would be [0, 0.5, 0, 1.5, 0.25] yielding {a: 1, b: 1, c: 1}.

+

Creating an Algorithm

+

I want to create an algorithm that solves this problem for much larger data sets with more attributes.

+

First Attempt

+

My first attempt was to start with an arbitrary value and add enough to bring just one attribute to 1. Then find the closest match for the missing attributes. At this point I would subtract from quantities to allow for the new quantity to be added without going over 1 in any category. Picking the right quantities to lower was difficult.

+

Second Attempt (removing the limit)

+

I then thought of removing the limit of 1 and making the algorithm stop once all sum products were relatively equal. That way, increasing the quantity of one decreases the relative quantity of all others. This will hopefully weed out bad values by making them have little significance on the final result. I would probably need a cutoff value to make sure I don't end up with a little of everything.

+

In our example, I found the distribution [0, 2, 0, 6, 1] resulted in {a: 4, b: 4, c: 4}. By dividing the distribution by 4, I got a perfect distribution of [0, 0.5, 0, 1.5, 0.25].

+
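To make the arithmetic concrete, here is a small sketch (sumProduct is just a helper name I made up) that checks a candidate distribution and then normalizes it by the common sum, reproducing the example above:

```javascript
// Multiply each object's attributes by its distribution weight and sum them
// per attribute, giving the overall {a, b, c} totals.
function sumProduct(objects, dist) {
  return objects.reduce(function (totals, obj, i) {
    Object.keys(obj).forEach(function (key) {
      totals[key] = (totals[key] || 0) + obj[key] * dist[i];
    });
    return totals;
  }, {});
}

var objects = [
  { a: 0.1, b: 0.6, c: 0.0 },
  { a: 1.0, b: 1.3, c: 0.2 },
  { a: 1.2, b: 0.1, c: 0.3 },
  { a: 0.2, b: 0.2, c: 0.5 },
  { a: 0.8, b: 0.2, c: 0.6 }
];

var rough = [0, 2, 0, 6, 1];              // sums to ~{a: 4, b: 4, c: 4}
var scale = sumProduct(objects, rough).a; // ~4
var dist = rough.map(function (q) { return q / scale; });
// dist is now ~[0, 0.5, 0, 1.5, 0.25], the perfect distribution above
```

The normalization step is what turns any distribution with equal sum products into one whose sum products are all 1.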

Other Solutions & Improvements

+

I'm looking for suggestions for improvement, concerns, or alternative solutions to this problem. Are there any related problems so that I can do more research on potential solutions?

+",210819,,-1,,43998.41736,42764.29653,Generate quantity distribution of objects to reach goal for each attribute,,1,2,,,,CC BY-SA 3.0,, +307338,1,,,1/13/2016 21:57,,4,6672,"

How do I unit test a constant that defines an implementation detail?
+And should I?

+ +

For instance, let's say I have the following class:

+ +
class A 
+{
+   private Cache _cache;
+   const int timeToLiveInCache = 30; // 30 seconds of TTL
+
+   A()
+   {
+      _cache = new Cache(timeToLiveInCache);
+   }
+}
+
+ +

In my unit test, how can I test that my TTL is indeed 30 seconds?

+ +

The Cache class has been tested and I know that whatever TTL I input will work as intended, but I want a unit test to make sure that the class A actually uses a TTL of 30 seconds. Is it even a good idea to unit test that?

+",44000,,173647,user40980,43891.82153,43892.67222,How to unit test a constant that defines an implementation detail?,,3,8,2,,,CC BY-SA 4.0,, +307346,1,307347,,1/14/2016 0:31,,16,4561,"

I'm relatively new to programming (July 2015), and I've always wondered why it's good programming practice to hide variables as much as possible.

+ +

I ran into this question recently when I looked into events and delegates in C#. I searched around as to why I should use events rather than just a delegate, since they seem to do the same thing. I read that it's better programming practice to hide the delegate fields and use an event.

+ +

I decided it was time to learn why it was good programming practice, but I couldn't really find anything other than ""Because it's good programming practice"".

+ +

If you could provide some basic examples and maybe some pseudo-code that would be helpful.

+",211016,,41461,,42383.39653,42383.39653,Why is it good programming practice to limit scope?,,2,5,0,42383.5625,,CC BY-SA 3.0,, +307349,1,,,1/14/2016 1:39,,1,320,"

This question was answered in ""Understanding Abstract Data Types (ADTs)"", and the top-voted answer (by Frank Shearar) currently reads as follows:

+ +
+

Objects are not ADTs (*) [editor's note: Links to ""On Understanding Data Abstraction, Revisited"" - Cook]. (So in C# String and Int32 are not ADTs.)

+ +

With that out of the way, an abstract data type ""has a public name, a hidden representation, and operations to create, combine and observe values of the abstraction"". (Quoting from the linked paper.)

+ +

(*) Briefly, Cook explains that:

+ +
    +
  • Objects cannot inspect the hidden representation of other objects, unlike members of an ADT. That implies that values of an ADT may be
    + implemented efficiently, even for operations that require inspection
    + of multiple abstract values.

  • +
  • Objects behave like a characteristic function over the values of a type, rather than as an algebra. Objects use procedural abstraction rather than type abstraction.

  • +
  • ADTs usually have a unique implementation in a program. When one's language has modules, it's possible to have multiple implementations of an ADT, but they can't usually interoperate.
  • +
+
+ +

My problem is with the second bullet point. I think that, in the paper linked, it just happens to be the case that for the specific example of a set used in Cook's paper an object can be viewed as a characteristic function. I don't think that objects in general can be viewed as characteristic functions.

+ +

Also, in his paper, Aldrich (The Power of Interoperability: Why Objects Are Inevitable) suggests

+ +
+

Cook’s definition essentially identifies dynamic dispatch as the most important characteristic of objects

+
+ +

agreeing with this and with Alan Kay when he said

+ +
+

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.

+
+ +

However, this set of lecture slides suggests that Java classes are ADTs while Java interfaces are objects - and indeed, using interfaces, ""objects"" can interoperate (one of the key features of OOP as given in one of the bullet points above).

+ +

My questions are

+ +
    +
  1. Am I correct to say that characteristic functions are not a key feature of objects, and that Frank Shearar is mistaken?

  2. +
  3. Are data that talk to each other through Java interfaces examples of objects even though they don't use dynamic dispatch? Why? (my understanding is that dynamic dispatch is more flexible, and that interfaces are a step towards objective-C/smalltalk/erlang style messaging)

  4. +
  5. Is the idea of dependency inversion principle related to the distinction between ADTs and objects? (see the wikipedia page or google ""The Talking Objects: A Tale About Message-Oriented Programming"". Although I'm new to the concept, I understand that it involves adding interfaces between ""layers"" of a program (see wikipedia page diagram)

  6. +
  7. Please provide any other examples/clarifications of the distinction between objects and ADTs, if you want.

  8. +
+",211013,,-1,,42837.31319,42546.87569,What is the difference between ADTs and objects?,,1,2,,42383.51181,,CC BY-SA 3.0,, +307352,1,,,1/14/2016 2:59,,16,962,"

I am frequently tasked with debugging an application at my job. It is a BI application that we deploy to businesses; each deployment includes a test environment and a production environment. I am wondering if there are any apps/tools/methods that people can suggest, based on these constraints:

+ +
    +
  1. Debugger cannot be used on client site or locally, because the software depends on custom third party applications that we don't have test environments for. (EDIT: to be fair, it is possible to debug locally in some cases. If we use nothing but the core code. Much of the problematic code resides in a dll that encapsulates third party specific communication: sockets, process pipes, soap calls, custom logic that changes the behavior of core code. Typically during an implementation or enhancement for a client, we would be writing new code to this area.)

  2. +
  3. There is virtually no logging done in our apps. There are no unit tests.

  4. +
  5. Version control only has 1 version of the full solution (using source safe 2005). So it's not possible to get a previous version of the entire solution, only individual files. (Unless someone knows ways around this).

  6. +
  7. Cannot reproduce locally, often times cannot reproduce on test environment (High chance that test and production are not the same version).

  8. +
  9. There is a high chance that the version the client is using is different from the one in Source Safe. This is because individual files are updated, which have embedded custom logic for that specific client. Often what happens is that an update is made to a binary, which requires changes to several other binaries, but when a commit is done, no one has any record or knowledge of this. A somewhat common error I see is 'Function/Method not found' or 'Method call has too many/too few parameters specified' on a client's environment.

  10. +
  11. This is a .net VB solution

  12. +
  13. Cannot install any software on client sites, but can locally

  14. +
  15. Our application is extremely customizable, but unfortunately the customization logic is spread out across all the classes and files, from the front end all the way to the data layer, including custom changes made to the database on a per client basis.

  16. +
  17. There are virtually no comments in the code. There is no documentation about the architecture. No documentation about the API. The only thing we have are hundreds upon hundreds of email chains that somewhat explain what's going on. The only people that know the code are the ones that originally wrote it, but they're no longer developers per se, so they don't get involved that much.

  18. +
+ +

And before you say it... yes I know; I want to shoot myself as well. It doesn't help that there's spaghetti code, hundreds of compiler warnings, and broken polymorphism that REALLY should be fixed, but I don't have a say in it.

+ +

Most common sort of errors I run into are null reference errors, invalid casts, and missing functions/function signature mismatches. Sometimes I am lucky and event viewer will log the class, method, and exception message. It's not the most helpful, but it's still something. The worst are the errors that have no trace, no repro steps besides a screenshot, and are generic error messages like the ones mentioned above. Sometimes it's not possible to find out why they occurred, only to pray that the environment is not properly configured, and that it will go away later.

+ +

I know this comes off as a bit of a rant, and to some extent it is. But I'm desperate for options. Are there other methods / tools I can use?

+",74674,,32523,,42384.96944,42394.42222,Methods of debugging code (Nightmare situation),<.net>,6,6,3,,,CC BY-SA 3.0,, +307353,1,307356,,1/14/2016 3:08,,6,651,"

I am about to start a new project which involves taking an Excel file, parsing the data (php-excel-reader), and then using the parsed values in an HTML email.

+ +

My question is pretty simple. Is it better practice to store the parsed data in a database first and then use the data however I wish?

+ +

For me it makes more sense as then I don't need to re-parse if errors occur when sending the email for example.

+",211026,,,,,42383.49514,Should I always store parsed data in database before manipulating?,,6,3,1,,,CC BY-SA 3.0,, +307357,1,,,1/14/2016 3:38,,1,47,"

Problem Description

+ +
+ +

I am working on an enterprise data discovery project that is designed to scan databases for sensitive information. The basic search unit is called a classifier and covers things like 'Social Security Number', 'LastName', 'Driver's License', 'Credit Card Number', etc.

+ +

Currently, each classifier is an independent item with its own regex pattern, so a search for 'Driver's License' will yield matches for any pattern that matches the format ^[A-Z]\d{3}-\d{4}-\d{4}$.

+ +

We want to reduce false positives from this approach by leveraging data from related classifiers. For instance, if both my last name and driver's license number appear in the same record, I should be able to verify that the first letter of my last name matches the first character of the driver's license number instead of only relying on the regex pattern.

+ +

Here's another example: suppose I am searching for 'Zip Codes' on a database and a scan marks the following records as matches:

+ +
FirstName LastName 123 Fake Street City, Illinois 61234
+012345678 01234567 012345678 987654321 1234556 61234
+
+ +

I'd like to assign a higher confidence rating to the first match, because it is located near related classifiers like 'Street Address' and 'State' while the second match is probably a false positive from a stream of unrelated digits.

+ +
+ +
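To illustrate what I mean, here is a sketch of the proximity heuristic (names, positions and weights are invented for illustration): a match gets a confidence boost for each match of a *related* classifier that occurs within some character window of it in the same record.

```javascript
// Boost a match's confidence when related classifier matches occur nearby.
// 'position' is the match's character offset within the record.
function confidence(match, otherMatches, windowChars) {
  var nearby = otherMatches.filter(function (m) {
    return m.classifier !== match.classifier &&
           Math.abs(m.position - match.position) <= windowChars;
  });
  // base confidence from the regex alone, plus a bonus per nearby related hit
  return Math.min(1, 0.5 + 0.2 * nearby.length);
}

var zip = { classifier: 'zip', position: 50 };
var others = [
  { classifier: 'state', position: 40 },  // close: counts toward the boost
  { classifier: 'street', position: 10 }  // too far away: ignored
];
```

With a window of 30 characters, the zip match above has one nearby related hit, so its confidence rises above the base value; a zip match inside a stream of unrelated digits would stay at the base.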

What kind of problem domain is involved here and what existing algorithms apply? I've looked through articles on record linkage, semantic matching, information extraction, and other things, but I can't seem to find research on the exact idea I'm explaining.

+",211024,,161917,,42383.6,42383.6,Correlating Search Classifiers in a Database Scan for Sensitive Information,,0,1,,,,CC BY-SA 3.0,, +307360,1,344521,,1/14/2016 4:58,,18,31730,"

In the model commonly referred to as Git-flow, hotfixes go in their specific hotfix-* branch and small integration fixes right before release go in the release-* branch. General bugfixes from the previous version do not seem to have a place.

+ +

Where should they appear? Should they be in their own bug-* branch branching off of develop (just like feature branches)?

+",22402,,,,,44015.15903,Where do bugfixes go in the git-flow model?,,3,4,4,,,CC BY-SA 3.0,, +307361,1,307369,,1/14/2016 5:02,,4,129,"

From this blog post describing the git-flow model, it seems that feature branches have to remain local to the user working on them and are actually deleted right before pushing to origin.

+ +

This kind of makes sense, but what if other people want to check out my feature branch, to help me out for example? Should I push my branch only then? Or should I not? If so, why?

+",22402,,22402,,42383.37361,42383.37361,Do feature fixes need to be local only?,,1,1,,,,CC BY-SA 3.0,, +307362,1,307373,,1/14/2016 5:11,,2,248,"

PROBLEM: this points to different instances at different stages of execution (as it should). But it is not always obvious which instance that is. On the other hand, we could minimize the use of this, fixing its state with meaningful-name variables.

+ +

EXAMPLE:

+ +
// Variant A:
+this.someFunction();
+
+// Variant B:
+var combobox = this;
+combobox.someFunction();
+
+ +
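To make the trade-off concrete, here is a sketch (Combobox and labels are made-up names) of the one case where variant B is more than style: inside a nested callback, this no longer points at the instance that started the call, but a captured variable still does.

```javascript
function Combobox(name) { this.name = name; }

Combobox.prototype.labels = function (items) {
  var combobox = this; // fix the instance under a meaningful name
  return items.map(function (item) {
    // inside this callback, `this` is no longer the combobox;
    // the captured `combobox` variable still is
    return combobox.name + ': ' + item;
  });
};
```

Outside of callbacks, both variants behave identically, so the choice there is purely about readability.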

QUESTION: Is it a good or a bad style to replace this pointer with variables and why?

+",146870,,,,,42383.63681,Avoid This Pointer,,2,12,,,,CC BY-SA 3.0,, +307363,1,,,1/14/2016 5:32,,4,161,"

Reading the gitflow model, it seems to me that the master branch is there only to provide a branch in which to store stable versions. What does that add over just tagging develop with the stable versions?

+",22402,,,,,42383.37431,Isn't the master branch just a surrogate of tagging in the gitflow model?,,1,5,,,,CC BY-SA 3.0,, +307368,1,307381,,1/14/2016 6:47,,3,2181,"

I've been going through the RxJS tutorials http://reactivex.io/learnrx/. +Almost all of the exercises involve moving from a hierarchical structure to a flat structure so I thought I'd try to do the opposite.

+ +

I want to convert from a flat array to a tree structure based on a property of each array item, using the same functional constructs from the tutorial.

+ +

I.e. Go from this:

+ +
var videos = [
+    {
+        ""id"": 70111470,
+        ""title"": ""Die Hard"",
+        ""category"": ""Action""
+    },
+    {
+        ""id"": 654356453,
+        ""title"": ""Bad Boys"",
+        ""category"": ""Action""
+    },
+    {
+        ""id"": 65432445,
+        ""title"": ""Anchorman"",
+        ""category"": ""Comedy""
+    },
+    {
+        ""id"": 675465,
+        ""title"": ""Everest"",
+        ""category"": ""New Release""
+    }
+];
+
+ +

To this (based on each video's category):

+ +
result === [
+    {
+        ""category"": ""Action"",
+        ""videos"": [
+            {
+                ""id"": 70111470,
+                ""title"": ""Die Hard""
+            },
+            {
+                ""id"": 654356453,
+                ""title"": ""Bad Boys""
+            }
+        ]
+    },
+    {
+        ""category"": ""Comedy"",
+        ""videos"": [
+            {
+                ""id"": 65432445,
+                ""title"": ""Anchorman""
+            }
+        ]
+    },
+    {
+        ""category"": ""New Release"",
+        ""videos"": [
+            {
+                ""id"": 675465,
+                ""title"": ""Everest""
+            }
+        ]
+    }
+];
+
+ +

I've come up with the following code that works, but I feel like I'm missing an easier way (perhaps solving in a single reduce) which would in turn be more performant (rather than filtering videos many times):

+ +
var result = 
+    videos.reduce(function(catArray, video) {
+       var catName = video.category;
+       if (catArray.indexOf(catName) === -1) {
+          catArray.push(catName);
+       }
+       return catArray;
+    }, [])
+    .map(function(categoryName) {
+        return {
+            category: categoryName,
+            videos: videos.filter(function(video) {
+                        return video.category === categoryName;
+                    }).map(function(video) {
+                        return {
+                            id: video.id,
+                            title: video.title
+                        };
+                    })
+        };
+    });
+
+ +

Is there a better way using functional methods?
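For reference, here is a sketch of the 'single reduce' idea mentioned above (the accumulator shape is my own choice): an index object gives O(1) access to each category's bucket, while a list preserves first-seen category order, so the whole grouping is one pass.

```javascript
// The videos array from the question, repeated here so the sketch is
// self-contained.
var videos = [
  { id: 70111470, title: 'Die Hard', category: 'Action' },
  { id: 654356453, title: 'Bad Boys', category: 'Action' },
  { id: 65432445, title: 'Anchorman', category: 'Comedy' },
  { id: 675465, title: 'Everest', category: 'New Release' }
];

// Group in a single reduce: look up (or create) the category bucket,
// then push the stripped-down video into it.
var result = videos.reduce(function (acc, video) {
  var bucket = acc.byName[video.category];
  if (!bucket) {
    bucket = { category: video.category, videos: [] };
    acc.byName[video.category] = bucket;
    acc.list.push(bucket);
  }
  bucket.videos.push({ id: video.id, title: video.title });
  return acc;
}, { byName: {}, list: [] }).list;
```

This avoids re-filtering the videos array once per category.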

+",211042,,169832,,42383.29306,43152.75208,JavaScript functional conversion from flat list to tree,,1,0,1,,,CC BY-SA 3.0,, +307376,1,307415,,1/14/2016 9:10,,5,3134,"

So I am building a web-app. The app will be hosted on heroku and I using a MEAN Stack for development. The main purpose of the app is to allow users to search through data and be able to find a document they are looking for.

+ +

This is for my company's internal documents, as most of our employees are all over the world and they need a way to find data easily.

+ +

The Idea:

+ +

The idea I have come up with is to create a web-app which provides them with an interface that allows them to search and filter data.

+ +

The filtering options provided on the web-app will be similar to those of eBay (see below)

+ +

+ +

The Data:

+ +

Initially the data set will be small. But with time it will grow quite big, and I want it to be scalable so it can be used for a long time and not break or slow down as data increases.

+ +

Just a note that the data will mostly only be text. Any files such as PDF, Excel or other formats will be saved on external resources like a central Dropbox account, and then the links for those files will be added to the web app.

+ +

The Question:

+ +

To provide the user the option to filter the data, what's the best way? When the user fills out the filter form (as in the image attached above), should the filtering be done on the server side with the results sent to the client, or should it be performed client side?

+ +

In my opinion server side is the best way to go because I can keep the whole logic of the system on the server and keep the client side code clean.

+ +

Also note that initially this will be a web-app, but in the future we will create an iPhone app for this as well.

+ +

Thanks in advance.

+",186048,,,,,42383.67917,Client Side Filtering or Server Side Filtering,,1,0,1,,,CC BY-SA 3.0,, +307382,1,,,1/14/2016 11:37,,1,230,"

I was just confused about the following declaration in C:

+ +
char **p[5]
+
+ +

I understand the char *p[] as an array of character pointers, but this one is puzzling me. Based on the precedence of [] over *, how can I interpret this?

+",90320,,17429,,42383.54028,42384.48472,Declaration confusion in pointers,,2,5,1,42391.15208,,CC BY-SA 3.0,, +307384,1,307391,,1/14/2016 11:39,,1,696,"

I am building a community website with a NodeJS Express backend and a MySQL database. Now I am up to the point where I want to store profile pictures of users and pictures related to specific questions. I googled a bit and found many ways to handle image upload, but I still have problems figuring out how and what to store regarding the location of the image.

+ +

I was thinking about creating a key-value store or a database table with a generated UUID as the key and the location of the image on the server (e.g. /images/user/123/profilepic/bla.jpg) as the value. This way I could store the UUID in the database (for example in the table user as profilePicId) and handle all image requests the same way, just by asking for the image with a certain UUID.

+ +

Using this method, the process would be like the following when client wants to get user info + profile pic of a user:

+ +
    +
  1. Do a GET request to get a user object (firstname, lastname, picUUID).
  2. +
  3. Do a GET request on the API /file/{picUUID}.
  4. +
  5. Server: check in the key-value store which location is assigned to the UUID and return the image.
  6. +
  7. Put the apiUrl/file/{user.picUUID} address in an image tag's src and done :)
  8. +
+ +
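A minimal sketch of the lookup step described above (all names are hypothetical; in the real app locationByUuid would be the key-value store or database table, and the resolved path would be handed to something like res.sendFile in the Express route):

```javascript
// Stand-in for the UUID -> location store described above.
var locationByUuid = {
  '3f2a-uuid': '/images/user/123/profilepic/bla.jpg'
};

// Resolve the picUUID from a /file/{picUUID} request to a path on disk,
// or null so the route handler can answer with a 404.
function resolveImagePath(picUUID) {
  return locationByUuid[picUUID] || null;
}
```

The route handler then only ever deals in UUIDs, so the on-disk layout can change without touching any stored references.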

Am I on the right track here, or what would be your take on this? I guess that this problem has been tackled many times. I am therefore also pretty surprised that there is not much information about this topic.

+",160232,,160232,,42383.49236,42383.55208,How to handle file uploads express (UUID / location),,1,0,,,,CC BY-SA 3.0,, +307386,1,,,1/14/2016 12:22,,6,788,"

I'm defining a DB structure, and I have a strong feeling I'm not doing it right. I have hotels that can be configured to offer some optional services (airport pickup, massage), that can in turn be booked with a room. So, each hotel picks which services it offers and the price. Currently I have the following tables (simplified):

+ +
Guests (id (PK), name)
+Hotels (id (PK), name)
+HotelServices (name (PK))
+Hotels_HotelServices (id (PK), hotel_id (FK), service (FK), price_value, price_currency)
+Rooms (id (PK), hotel_id (FK), number, capacity, price_value, price_currency)
+Bookings (id (PK), guest_id (FK), room_id (FK), date_from, date_to)
+Bookings_Hotels_HotelServices (id (PK), hotels_hotelServices_id (FK))
+
+ +

It is this last table that disturbs me the most. I don't like having a junction table pointing to another junction table, but I can't think of another way of representing which services were booked with the room.

+ +

Is there a better, common approach to modelling such a situation?

+",207355,,25936,,42383.52778,42383.64792,Junction table related to another junction table,,2,3,,,,CC BY-SA 3.0,, +307389,1,,,1/14/2016 12:25,,4,795,"

I'm learning Java multithreaded programming from the book ""Java Concurrency In Practice"". In chapter 9.4.2, Split Data Model, I read this:

+ +
+

From the perspective of the GUI, the Swing table model classes like TableModel and TreeModel are the official repository for data to be displayed. However, these model objects are often themselves ""views"" of other objects managed by the application. A program that has both a presentation‐domain and an application-domain data model is said to have a split‐model design (Fowler, 2005).

+ +

In a split‐model design, the presentation model is confined to the event thread and the other model, the shared model, is thread‐safe and may be accessed by both the event thread and application threads. The presentation model registers listeners with the shared model so it can be notified of updates. The presentation model can then be updated from the shared model by embedding a snapshot of the relevant state in the update message or by having the presentation model retrieve the data directly from the shared model when it receives an update event.

+ +

The snapshot approach is simple, but has limitations. It works well when the data model is small, updates are not too frequent, and the structure of the two models is similar. If the data model is large or updates are very frequent, or if one or both sides of the split contain information that is not visible to the other side, it can be more efficient to send incremental updates instead of entire snapshots. This approach has the effect of serializing updates on the shared model and recreating them in the event thread against the presentation model. Another advantage of incremental updates is that finer‐grained information about what changed can improve the perceived quality of the display: if only one vehicle moves, we don't have to repaint the entire display, just the affected regions.

+
+ +

What is the meaning of ""Split Data Model""? Can you show me an example to help me understand it?

+",211091,FireSun,25936,,42383.57778,42384.04167,"What is ""Split Data Model"", mentioned in the book ""Java Concurrency In Practice""?",,3,0,0,,,CC BY-SA 3.0,, +307396,1,,,1/14/2016 14:17,,1,41,"

I've inherited a problem from a programmer that isn't with our group anymore - a piece of our application suffers from performance issues under a particular set of circumstances.

+ +

I can replicate these circumstances - however, I don't actually know what is causing this performance issue. I suspect it is a large Java object taking up too much memory at once and causing our front-end to lag, but I can't determine which object it might be.

+ +

Since I know what functionality is causing the issue, but not the exact location of the performance issue, is there any way to test this performance and narrow my search down by class and function?

+",123958,,31260,,42383.59653,42383.59653,How can I test for performance issues in a specific piece of code?,,0,5,,42383.60278,,CC BY-SA 3.0,, +307399,1,307421,,1/14/2016 14:37,,5,10955,"

The original question is as follows:

+ +
+

Consider a 12-bit two's complement fixed point representation, with 8 bits for the integer part and 4 bits for the fraction.
+ What is the binary encoding of the decimal value 45.625?

+
+ +

I found the binary representation for 45₁₀ in 8 bits to be 00101101₂

+ +

I then split the .625₁₀ into a .5₁₀ and a .125₁₀

+ +

.5₁₀ in binary is .1₂

+ +

.125₁₀ in binary is .001₂

+ +

Therefore, I came up with the final answer of 00101101.1010₂

+ +

However, this was counted incorrect. Where did I go wrong?
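To show my working, here is how I would check the scaling mechanically (a sketch; toFixedPoint12 is a name I made up). With 4 fraction bits, encoding a value means multiplying by 2^4 = 16 and writing the result as a 12-bit two's complement integer; the binary point then sits implicitly between the 8 integer bits and the 4 fraction bits.

```javascript
// Encode a value in 12-bit two's complement fixed point (8 integer bits,
// 4 fraction bits): scale by 2^4, wrap negatives into 12 bits, pad to 12.
function toFixedPoint12(value) {
  var scaled = Math.round(value * 16); // shift the binary point 4 places
  if (scaled < 0) scaled += 4096;      // two's complement wrap (2^12)
  var bits = scaled.toString(2);
  while (bits.length < 12) bits = '0' + bits;
  return bits;
}
```

For 45.625 this gives 001011011010, i.e. the same bits as my answer with the point removed.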

+",211103,,64132,,42383.69236,42383.79375,Representing a number in 12 Bit 2's Complement Fixed Point,,1,5,,,,CC BY-SA 3.0,, +307401,1,,,1/14/2016 14:38,,3,1135,"

Now, it doesn't necessarily have to be Word — for ease of comparison, let's use ODT, which is based on XML — which is pretty similar to HTML. That would, to my mind, make rendering an ODT document almost like rendering an HTML website.

+ +

With ODT and HTML+CSS basically being two ways of describing a page's layout, what are the differences in rendering them?
+Is it simply that HTML+CSS is more flexible and thus requires more complex rendering? A complicated website can have countless nested elements, all with relative positioning, custom styling etc. Compared to that, an ODT has a far simpler/more predictable structure, which I think should be easier to render.

+",211105,,158187,,42383.96111,42384.64028,How is rendering a Word document different from rendering a website?,,3,7,0,42383.85556,,CC BY-SA 3.0,, +307409,1,307416,,1/14/2016 15:35,,2,125,"

I am having a bit of trouble designing a new feature at the moment. It is part of a resource management system. I was wondering if anyone has experience doing anything similar.

+ +

I'll try to explain:

+ +

Resource: a Person, Place, or Thing

+ +

Availability Template (AT): allows you to define a standard availability pattern for a type of resource. e.g.: Mon-Fri 9-5.

+ +

Resource Additional Availability (AA): allows you to define one-off availability for a resource. e.g.: Bob's overtime 10-3 on Saturdays.

+ +

Resource Availability Exclusions (AE): allows you to define when a resource is not available. e.g: Room 4 is cleaned between 4-5 on Fridays.

+ +

It's easy enough to check whether or not a resource is available at a point in time: Availability = (AT(rID, time) || AA(rID, time)) && !AE(rID, time).

+ +

But I need to be able to query a resource's availability over a time period. i.e.: ""Is Room 4 available at 9am for 2 hours on Thursday?"".

+ +

So far, I have created an algorithm that ""samples"" the availability at a specific interval over the time period. However, this means that the returned availability lags the actual availability and some information might be missed.

+ +

E.g.: if I sample the availability every 15 minutes between 9 and 11, it would miss an exclusion between 9:05 and 9:10.

+ +
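To make the description concrete, here is a sketch of the sampling approach (all names are mine; times are minutes since midnight, and at/aa/ae are predicates answering 'does this rule apply at time t?'), including a case that reproduces the missed-exclusion problem:

```javascript
// Availability at a single point in time, per the formula above.
function isAvailable(resource, t) {
  return (resource.at(t) || resource.aa(t)) && !resource.ae(t);
}

// Sample every stepMin minutes over [startMin, startMin + durationMin).
function availableForPeriod(resource, startMin, durationMin, stepMin) {
  for (var m = startMin; m < startMin + durationMin; m += stepMin) {
    if (!isAvailable(resource, m)) return false;
  }
  return true;
}

// A room available 9:00-17:00 but cleaned 9:05-9:10: a 15-minute sampling
// step never lands inside the exclusion, so the check wrongly passes.
var room = {
  at: function (t) { return t >= 540 && t < 1020; },
  aa: function () { return false; },
  ae: function (t) { return t >= 545 && t < 550; }
};
```

With a 1-minute step the exclusion is caught, which is exactly the accuracy/performance trade-off I'm unhappy with.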

I could use a very small sampling period (eg: 1 minute) but that might not be performant and overall it is still quite an ugly brute-force approach.

+ +

Are there any standard patterns or algorithms for this type of problem?

+",79635,,,,,42384.35903,Improving sampling algorithm,,1,4,,,,CC BY-SA 3.0,, +307417,1,307525,,1/14/2016 16:58,,2,307,"

The usage of quotes in Go's import statement strikes me as unnecessary. Typical Go import statements look like:

+ +
import ""foo/bar""
+import other_name ""foo/bar""
+import (
+    ""foo/bar""
+    x ""foo/baz/bar""
+)
+
+ +

According to this Quora post, it's for unusual paths containing weird characters.

+ +

As far as I can tell:

+ + + +

It strikes me as an unusual concession to make - how often does one see import/includes using paths with spaces or other weird characters? What was the original reason behind requiring quoted paths? Is it documented anywhere?

+",142522,,-1,,42878.52778,42384.70764,What was the reason behind using quotes in Go's import statements?,,1,1,,,,CC BY-SA 3.0,, +307425,1,307426,,1/14/2016 20:48,,1,1142,"

I got a question about why we need equals if we have hashcode.

+ +

My first attempt at an answer was: because of collisions. But we corrected the starting point with the assumption that we have so few objects that there are no collisions at all.

+ +

My second attempt at an answer was: because of speed. But I also got the reply that there is some conceptual difference between hashcode and equals.

+ +

So I read a lot of posts and the Java docs, and cannot find the answer. Am I missing something?

+",45935,,,user40980,42383.90625,42383.92847,Why would I need `equals` if I have already `hashcode`?,,3,11,,,,CC BY-SA 3.0,, +307432,1,307447,,1/14/2016 22:41,,5,5630,"

I had this idea that I would achieve some good automation and separation of concerns as follows:

+ +
    +
  1. Define an interface, IDataProvider, in a class in a DataMuncher project that needs to both consume and output data.
  2. +
  3. In separate projects, have different implementations of that interface, e.g., in LunarData.csproj would be LunarDataProvider : IDataProvider, and in SolarData.csproj would be SolarDataProvider : IDataProvider.
  4. +
  5. In DataMuncher, reflect over the assembly and linked assemblies and find all classes implementing IDataProvider.
  6. +
  7. Create instances of the IDataProvider-implementing classes that were found, ask them what data they can provide, and if their data matches up with what was similarly learned from IDataConsumer classes, request the appropriate data, and send it onward to the consumers.
  8. +
+ +

The idea is that if I need to add a new data provider, I can do so with only the barest touch on any other projects. I just create a new project, add a class that implements IDataProvider, and whether I'm manufacturing the data (as might occur in a unit test), reading from a text file, connecting to a database, or calling a web API, or any other means of getting the data, as long as I return it in the form required by the interface, it can be consumed. Thus, the quirks of the data are not exposed to the rest of the system and no proper nouns are propagated to other parts when they shouldn't be.

+ +

However, I've run into a wee bit of a problem where the provider classes need to depend on the DataMuncher, but the DataMuncher needs to depend on the provider classes (in order to do what ultimately amounts to self dependency-injection).

+ +

So for this scenario to work I need at least 3 projects: one where the interface is defined, one where it's implemented (dependent), and one that is dependent on both of them that does what amounts to dependency injection, call this the DataHub project. But that seems like a lot of complexity.

+ +

One suggestion was made to just put the interface definition and the data source implementations all in the same project. This would enable the implementing project to have no outside dependencies, and would avoid the need for another coordinating project to which many projects must refer. It also reduces the number of projects in the solution, which is also a possible benefit. (E.g., DataProviders.cs would have all three: IDataProvider plus LunarDataProvider and SolarDataProvider.

+ +

What would you do in this scenario? I saw value in having one project per implementation, but I have to admit that I don't have a plethora of experience with various project structures that would have given me an idea of what works best. I also saw value in the division between projects forcing the separation of implementation internals from other projects, so that no one implementation would ever, even by mistake, use code or classes from another implementation. With one project each, a review of the project reference dependency graph could easily establish that the separate implementations are properly isolated from each other. This makes sure the code is truly modular and properly encapsulated.

+ +

Another benefit of separate projects is that if the data source doesn't need to access, say, a database, there are no references to database libraries. Perhaps it's file-based, and doesn't need any dependencies. Then, the reference graph can be examined for correctness and anything strange will stick out much better: rather than ""hmmm, I guess one of the 10 implementations uses that library"", ""why is X using Y!?!""

+ +

I also considered, still with three projects, defining the interface in the DataHub, but this would prevent it, due to circular reference problems, from doing the reflection to find data sources—the recipient of the benefit of the interface, DataMuncher, would have to perform that task, and instead of LunarData having a reference to DataMuncher, it would be the other way around. This is now more references, and the DataHub class now also has to have a lot of projects referencing it (again, instead of the other way around). All these extra references suggest that perhaps the single project with interface and implementations is better.

+ +

Some guidance or ideas would be appreciated.

+ +

Hmmmm ... it just occurred to me, 20 minutes after posting, that perhaps I should simply use a dependency injection library. That would use a pattern that is well established, and would remove some of the questions by reducing the number of reasonable ways to structure things. Still thinking on this ...

+ +

For Reference

+ +

First we go fetch all the types available in all referenced assemblies that implement IDataProvider:

+ +
var types = GetType()
+   .Assembly
+   .GetReferencedAssemblies()
+   .SelectMany(assemblyName => Assembly.Load(assemblyName).GetTypes())
+   .Where(type => (typeof (IDataProvider)).IsAssignableFrom(type))
+   .ToList();
+
+ +

Then we can create instances of these:

+ +
var instantiatedTypes = types
+   .Select(Activator.CreateInstance)
+   .ToList()
+   .AsReadOnly();
+
+ +

This list of instantiated types then need to be passed to the core of the system, as a parameter to the DataMuncher class (or perhaps a method).

+ +
public sealed class DataMuncher {
+   public DataMuncher(
+      IEnumerable<IDataProvider> dataProviders,
+      IEnumerable<IDataConsumer> dataConsumers
+   ) {
+      var consumersAndData = dataConsumers
+         .Select(consumer => new { consumer, dataspec = consumer.GetDesiredDataSpecification() })
+         .Select(cas => new {
+            cas.consumer,
+            cas.dataspec,
+            data = dataProviders
+               .Select(provider => provider.RetrieveData(cas.dataspec))
+         })
+         .ToList();
+      consumersAndData.ForEach(cas => cas.consumer.ReceiveData(cas.data));
+   }
+}
+
+ +

This is very rough, just to show the idea. Don't bother code reviewing this particular code, it is only here to show the basic idea and get assistance in how to structure the interfaces.

+ +

Note: I have been reading .Net solution structure of an enterprise application and it is possibly helping, but I don't have a conclusion yet.

+",3941,,3941,,42762.84167,42762.84167,Structuring projects in a solution for interfaces,,1,3,,,,CC BY-SA 3.0,, +307433,1,,,1/14/2016 22:52,,1,126,"

Imagine a client-server app that lets a user upload images/documents etc. to a server and then lets users who have access retrieve and view them later on their respective mobile devices.

+ +

So the flow is something like this:

+ +
    +
  • Creator (C) creates a message with n number of attachments.
  • +
  • C uploads the files and message to server via REST API.
  • +
  • Recipients R1, R2... Rn get notified about new content
  • +
  • R1, R2, and other go in on their own times and fetch the message and attachments.
  • +
  • Once downloaded, the mobile app doesn't hit the server again the next time any of the recipients views the attachments; it uses locally cached data.
  • +
+ +

Now the creator C already has the files (since it created them) and need not download them again. So when it gets a response from server like:

+ +
{
+  messageId: 123,
+  attachments : [ { id: a1, ...}, {id: a2, ...}, {id: a3, ...} ]
+}
+
+ +

However, it doesn't know which attachment is what (i.e. which one is a1 and which one is a2 and so on).

+ +

It just knows that message #123 has 3 attachments.

+ +

What would be a good way to figure this out? There are 2 approaches that come to my mind:

+ +

Approach #1: use file hash

+ +
    +
  • Calculate file hash (MD5 or SHA) and use that as a key on the mobile app.
  • +
  • Server does the same and returns the hash as identifier instead of a generated id.
  • +
+ +

Pros:

+ +
    +
  • No extra metadata is needed for sync
  • +
+ +

Cons:

+ +
    +
  • Probably error prone - hash may not match due to OS or hash implementation differences
  • +
  • Hashing will be slow for big files
  • +
+ +

Approach #2: use a key

+ +
    +
  • App generates an UUID and sends it to server as an identifier.
  • +
  • Server stores it and returns the key with the response for syncing.
  • +
+ +

Pros

+ +
    +
  • Hash related errors are eliminated
  • +
  • Hashing speed is not a concern since UUID generation is fast regardless of file size.
  • +
+ +

Cons

+ +
    +
  • An extra key is passed between app/server which has no other purpose.
  • +
+ +

Which one of these is preferable? Or maybe there is a 3rd option?
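One mitigating note on approach #1: standard algorithms like SHA-256 are fully specified, so different implementations agree byte-for-byte as long as both sides hash exactly the same bytes (the usual pitfall is text-mode newline translation, not the hash library). A sketch in Python (names are illustrative, and chunked reading keeps memory flat for big attachments):

```python
import hashlib

def file_fingerprint(path, chunk_size=64 * 1024):
    """Hash a file in fixed-size chunks so large attachments never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:  # binary mode: no newline translation, so all platforms agree
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The app and the server would each run the equivalent of this over the raw bytes and compare hex digests.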

+",28013,,,user40980,42383.95903,42383.95903,Relying on file hash for data synchronisation across mobile and server,,0,3,,,,CC BY-SA 3.0,, +307438,1,,,1/15/2016 0:31,,1,986,"

Implementing the repository pattern is fairly easy for a standalone class. Unfortunately, we are unable to use an ORM to manage our data access, so I'm trying to recreate some of its functionality manually (ugh).

+ +

For example, say I have a class User that implements a generic IRepository. Additionally, I may have a class called Task that also implements IRepository. If there is a one-to-many relationship from User to Task, I would plan to have some code like this:

+ +
public class User : IRepository<User>  // IRepository<T> where T : IEntity
+{
+    public string Name { get; set; }
+
+    public List<Task> ....(This is where I start getting fuzzy)
+}
+
+ +

I would like to be able to chain my calls for example:

+ +
User.Tasks; 
+
+ +

Would return a list of all of that user's tasks. How should the data access for Task look to handle this? Also, how should the Task collection be defined in the User class? And lastly, what about eager/lazy loading the tasks on a user? How can this be accomplished?
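For the lazy-loading half of the question, one common hand-rolled approach is to give the entity a loader callback and only hit the database on first access, caching the result afterwards. A sketch in Python rather than C# (the `Lazy<T>`-style idea translates directly; all names here are made up):

```python
class User:
    def __init__(self, name, task_loader):
        self.name = name
        self._task_loader = task_loader  # e.g. lambda: task_dao.tasks_for_user(user_id)
        self._tasks = None               # sentinel: not loaded yet

    @property
    def tasks(self):
        # Lazy: the query runs only on first access, then the list is cached.
        if self._tasks is None:
            self._tasks = self._task_loader()
        return self._tasks
```

Eager loading is then just calling the loader up front (or reading `user.tasks` immediately after construction) so the database round trip happens at a time you control.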

+",166897,,,,,42384.4375,Repository Pattern Class Collection examples with Lazy/Eager loading,,1,1,,,,CC BY-SA 3.0,, +307446,1,307518,,1/15/2016 2:20,,3,3279,"

I'm trying to learn about REST and having problems with the concept of HATEOAS (Hypermedia As The Engine Of Application State). What is it for?

+ +

It seems to me the majority of commenters on the web think that HATEOAS should be used by a client to discover how to use a RESTful web service. And most seem to conclude that HATEOAS is not worth including in RESTful web services for one of two reasons:

+ +

1) HATEOAS only gives URLs but these are useless by themselves without knowing what methods can be used with each URL (eg HTTP GET, POST, PUT). Since the additional information must be passed to the client out of band (eg via documentation) then there is no point in using HATEOAS;

+ +

2) Similar to (1) but takes it a step further: The client can figure out what methods are applicable to a given URL by calling the HTTP OPTIONS method. However, the client still needs out of band information, to describe the format of the data it must pass for, say, POST or PUT methods. So we end up in the same place as (1) - HATEOAS isn't sufficient for a client to discover everything it needs to know, so why bother with it.

+ +

These arguments seem valid if HATEOAS is supposed to be used by a client to discover how to use a RESTful web service. However, is that what HATEOAS is for? From the few examples I've seen, it seems to me to be perfect for navigating through a web service - I've performed a certain action, now what are the valid actions I can perform next? That seems to me to gel with the ""Application State"" part of HATEOAS but very few articles I've read talk of it in terms of navigation, almost all are about discovery.

+ +

So, is HATEOAS about discovering how to use a RESTful web service or is it really about navigation?

+",60568,,,,,42386.02778,In REST is HATEOAS really about self-discovery or about navigation?,,2,8,,42386.03611,,CC BY-SA 3.0,, +307453,1,307468,,1/15/2016 4:12,,1,235,"

Example taken from : Agile software development : principles, patterns and practices

+ +

A new employee is added by the receipt of an AddEmp transaction. This transaction contains the employee's name, address, and assigned employee number. The transaction has three forms:

+ +
AddEmp <EmpID> ""<name>"" ""<address>"" H <hourly-rate>
+AddEmp <EmpID> ""<name>"" ""<address>"" S <monthly-salary>
+AddEmp <EmpID> ""<name>"" ""<address>"" C <monthly-salary> <commission-rate>
+
+ +

(Figures: proposed Employee class; AddEmployeeTransaction class hierarchy)

+ +

My question: would it be ""better"" if, instead of subclassing AddEmployeeTransaction, we had a factory for salary classification and pay schedule?

+ +

If ""better"" is too vague a term: what are the disadvantages of my design compared to the one proposed by Uncle Bob?

+ +
+ +

Say inside EmployeeTransaction we have SalaryClassificationFactory and we call SalaryClassificationFactory.get(SalaryDetails d)

+ +

where,

+ +
enum SalaryId{H,S,C};
+
+
+struct hourly
+{
+    int hourlyRate;
+};
+
+struct monthly
+{
+    int monthlySalary;
+};
+
+struct commissioned
+{
+    int salary;
+    int commissionRate;
+};
+struct SalaryDetails
+{
+    SalaryId kind;
+    union SalaryUnion
+    {
+        hourly h;
+        monthly s;
+       commissioned c;
+    } u;
+};
+
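The factory idea can be sketched without the tagged union by dispatching on the transaction's pay-type code. Python stands in for the C++ here, and all names are illustrative of the SalaryClassificationFactory.get idea, not taken from the book:

```python
class Hourly:
    def __init__(self, hourly_rate):
        self.hourly_rate = hourly_rate

class Monthly:
    def __init__(self, monthly_salary):
        self.monthly_salary = monthly_salary

class Commissioned:
    def __init__(self, salary, commission_rate):
        self.salary = salary
        self.commission_rate = commission_rate

# One registry keyed by the H/S/C code replaces both the union and the
# per-type subclasses of AddEmployeeTransaction.
CLASSIFICATIONS = {"H": Hourly, "S": Monthly, "C": Commissioned}

def make_classification(kind, *args):
    return CLASSIFICATIONS[kind](*args)
```

Adding a new pay schedule then means registering one more entry rather than deriving another transaction class.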
+",204027,,204027,,42384.28681,42384.71944,Parameterization vs subclassing,,2,1,,,,CC BY-SA 3.0,, +307454,1,307456,,1/15/2016 4:14,,23,2459,"

Why does the documentation on some languages say "equivalent to" rather than "is"?

+

For example, the Python Docs say

+
+

itertools.chain(*iterables)

+

...

+

Equivalent to:

+
def chain(*iterables):
+    # chain('ABC', 'DEF') --> A B C D E F
+    for it in iterables:
+        for element in it:
+            yield element
+
+
+

Or this C++ reference on find_if:

+
+

The behavior of this function template is equivalent to:

+
template<class InputIterator, class UnaryPredicate>
+  InputIterator find_if (InputIterator first, InputIterator last, UnaryPredicate pred)
+{
+  while (first!=last) {
+    if (pred(*first)) return first;
+    ++first;
+  }
+  return last;
+}
+
+
+

If that's not the actual code, can't they post it? And if it is the actual code, why do they have to say it's "Equivalent" rather than simply "is"?

+",206617,,-1,,43998.41736,42384.21111,"Why does the documentation on some languages say ""equivalent to"" rather than ""is""?",,1,4,2,,,CC BY-SA 3.0,, +307462,1,,,1/15/2016 6:46,,1,67,"

I'm considering a few different possible flows for a restful api. For minimum viable product, I'm assuming a web browser as the front end.

+ +

Current use case: the task of the human operator is to create an ordered collection of entities. Each entity will have some values assigned by the user. The assumption is that the user will be creating entries in ascending order, but that some back tracking will be required to correct data errors.

+ +

Insertion, removal, re-ordering and normalization will be supported by different interfaces. For values that can be predicted with high confidence, the initial presentation of the data entry resources should assume that those values are immutable, but provide links to resources that offer the human operator greater control. Dynamic operator assistance (think typeahead) is out of scope for the MVP.

+ +

Manual manipulation of the URLs is not supported. The host is expected to classify externally generated URLs as hostile (but not for the MVP, of course).

+ +

Question #1: is there any prior art describing the best practices for building such a thing? It would be nice to review, for instance, a list of use cases I haven't considered yet before I get deeply invested in the wrong design.

+ +

As I'm going to be using a browser as the front end, all of the requests that I dispatch are going to be using GET or POST.

+ +

As the entries, and the list itself, are entities, they will each have a uuid assigned to them. This would be one of the high confidence values that I expect to excuse the human operator from entering.

+ +

To first order, the operator's next task is always going to be entering data in a form, so the design should get the operator to the most likely next form as simply as possible.

+ +

The home page is published as the entry point to the API. As use cases for reads are more common than use cases for writes, the operator will need to follow a GET relation to access the data entry states.

+ +

My initial implementation coupled the resources very tightly to the entities.

+ +
    +
  • Follow (GET) a link to the form that creates the collection entity
  • +
  • User POSTs data to the endpoint
  • +
  • Server generates id for the new collection
  • +
  • Server creates the collection
  • +
  • Server redirects the user to a representation of the created collection
  • +
+ +

In this design, the form itself persists at the given location, which limits how much transient data I can include.

+ +

It occurred to me this evening that I don't need to lock myself in quite so tightly. A slightly different ordering

+ +
    +
  • Follow (GET) a link to the form that creates the collection entity
  • +
  • Server generates id for the new collection
  • +
  • Server redirects to a representation of the form with the new ids specified
  • +
  • User POSTs data to the endpoint
  • +
  • Server creates the collection
  • +
  • Server generates ids for the next anticipated step
  • +
  • Server redirects the user to a representation of the created collection, with a form supporting the next task, with pre-generated ids, ready to go.
  • +
  • turtles all the way down
  • +
+ +

Given that ""extra"" round trips that don't require action by the human operator are acceptable, are there reasons to avoid persuing the second design?

+",199871,,,,,42384.28194,Restful flows for data entry,,0,0,,,,CC BY-SA 3.0,, +307466,1,307474,,1/15/2016 8:18,,8,209,"

In files that contain multiple languages (i.e. template files), are there any best practices regarding indentation?

+ +

I mostly use this:

+ +
<div>
+    IF FOO
+        <div>
+            <p>contents</p>
+        </div>
+    END FOO
+</div>
+
+ +

Indent for every new block, regardless of language. This has a few disadvantages, though. In more complex files it can break the indentation of either language:

+ +
<div>
+    IF FOO
+        <div someattribute>
+    ELSE
+        <div otherattribute>
+    END FOO
+        <p>content</p>
+    </div>
+</div>
+
+ +

I've also seen this used:

+ +
<div>
+    IF FOO
+    <div>
+       <p>contents</p>
+    </div>
+    END FOO
+</div>
+
+ +

I.e., indent only one language. This has the advantage of always being consistent, but in more complex files it can almost completely hide some implementation details, like a block being conditional.

+ +

The goal here is obviously maximizing readability.

+",63047,,31260,,42388.57708,42388.57708,Indentation in a multi-language file,,3,9,2,42388.57917,,CC BY-SA 3.0,, +307467,1,307472,,1/15/2016 8:25,,201,38827,"

While developing the application I started to wonder - How should I design command line arguments?

+ +

A lot of programs use a formula like -argument value or /argument value. The solution which came to my mind was argument:value. I thought it was good because, with no white space, there is no way that values and arguments can be mixed up. Also, it is easy to split a string in two on the first : character from the left.

+ +

My questions are:

+ +
    +
  1. Is the popular -argument value formula better than argument:value (more readable, easier to write, less bug-prone, easier for expert developers to understand)?
  2. +
  3. Are there some commonly known rules which I should follow while designing command line arguments (other than if it works it is OK)?
  4. +
+ +
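For what it's worth, most argument-parsing libraries bake in the common `-a value` / `--argument value` / `--argument=value` conventions, which is itself an argument for following them rather than inventing `argument:value`. A minimal sketch using Python's argparse (the flag names are made up for illustration):

```python
import argparse

parser = argparse.ArgumentParser(description="demo of conventional flag styles")
parser.add_argument("-n", "--name", required=True, help="short and long forms")
parser.add_argument("--retries", type=int, default=3, help="value after a space or '='")

# All three spellings parse identically, for free:
for argv in (["-n", "kiosk"], ["--name", "kiosk"], ["--name=kiosk"]):
    args = parser.parse_args(argv)
    print(args.name)
```

Users who already know the convention get short forms, long forms, `=`-joined values, and auto-generated `--help` without reading your documentation.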
+ +

Asked for some more details I will provide it. However I think they should not affect the answers. The question is about a good habits in general. I think they are all the same for all kinds of applications.

+ +

We are working on an application which will be used in public places (touch totems, tables). The applications are written using Qt Quick 5 (C++, QML, JS). The devices will have Windows 8.1/10 installed. We will provide a front-end interface to manage the devices. However, some advanced administrators may want to configure the application on their own. It is not very important from the business side, but as I agree with what Kilian Foth said, I do not want my application to be a pain for a user. Not finding what I wanted on the Internet, I asked here.

+ +
+ +

To more advanced Stack Exchange users: I wanted this question to be general. Maybe it qualifies to the community wiki (I do not know if existing question can be converted with answers). As I want this question to be operating system and programming language independent the answers appearing here can be a valuable lesson for other developers.

+",210140,,-1,,42837.31319,42905.53056,What are good habits for designing command line arguments?,,5,21,93,,,CC BY-SA 3.0,, +307471,1,307502,,1/15/2016 9:36,,5,1784,"

If I extend software under the GPL licence, I understand that my source code must be freely available. If my source code is freely available, can I legally charge a fee for using the version that I publish?

+ +

My situation: I'm using data from a program in my program and have an In-App purchase for access to all the data. The source code is publicly available and is mentioned in the program. Is this allowed?

+ +

Obviously my software isn't extremely popular and I'm banking on the fact that lots of people have no interest in downloading and compiling it themselves.

+ +

I have read Can I use GPL software in a commercial application which discusses in detail, whether or not my software would have to be under the GPL, but it does not discuss money!

+",211218,,-1,,42837.31319,42384.90278,Charging a fee for GPL software,,4,8,1,,,CC BY-SA 3.0,, +307475,1,307476,,1/15/2016 9:51,,-2,314,"
void function(int x){
+    if(x<=0)
+        return;
+    function(x - 1);  // note: function(x--) would pass x unchanged and recurse forever
+}
+
+ +

This is a recursion function which is called with the value of x = 20.

+ +

The Recursive call will take place in this way

+ +
function(20)...function(19).......function(0)
+
+ +

Each function call will use some stack memory, and if the recursion is deep enough it will overflow the stack (a StackOverflowError in Java; undefined behavior, typically a crash, in C).

+ +

So what I want to know is:
+Is there any way by which we can call a function and remove it from the call stack so that its memory can be reused (e.g. after function(20) calls function(19), the frame for function(20) should be de-allocated), and at the end of the recursive chain (here function(0)) control should return directly to where function(20) was originally called from?

+ +

Can this be done in Java and C?
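What is being described is tail-call elimination. Neither standard Java nor standard C guarantees it (some C compilers apply it as an optimization), so the portable fix is to rewrite the tail call as a loop by hand; the transformation is mechanical. A sketch in Python, which also lacks tail-call optimization (function names are illustrative):

```python
def countdown_recursive(x):
    if x <= 0:
        return
    countdown_recursive(x - 1)  # tail call, but each call still adds a stack frame

def countdown_loop(x):
    # Same logic with the tail call replaced by reassigning the parameter:
    # constant stack space regardless of x.
    while x > 0:
        x = x - 1

countdown_loop(10**6)  # fine; the recursive form would blow the stack long before this
```

The loop version is exactly what a tail-call-optimizing compiler would generate for you.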

+",208758,,9113,,43279.53194,43279.53194,Problem on recursion,,1,5,0,,,CC BY-SA 3.0,, +307478,1,,,1/15/2016 10:40,,1,49,"

In our application users can enter custom expressions to calculate certain things. For instance they can specify an invoice and define a number of lines for cost calculation.

+ +

Example for a course with a $400 price, a $10 transaction fee, and a number of free participants (say, because of some credit):

+ +
    +
  • (?numberOfParticipants? - ?creditedTickets?) * 400 + 10
  • +
+ +

We currently have custom code that parses and executes this. But we recently found a bug in it and need to spend some time on this.

+ +

We could improve the current hacky code, or we could build a proper parser, but I feel this is a very generic problem that lots of people have, so there should be an off-the-shelf solution. I can't seem to find one, though.

+ +

Does anybody recognize this problem and how did you handle it?

+",210938,,,,,42384.53889,How to handle user created expressions in application,,1,4,,,,CC BY-SA 3.0,, +307483,1,307492,,1/15/2016 12:01,,2,147,"

Way back in the hazy origins of our platform it was decided that we were going to need some hierarchical data structures stored in the RDBMS. The relationships between nodes were stored via a 'parent_id' column that referenced another row in the same table. Although at face value this may seem slightly sensible, the reality has turned out very differently.

+ +

I have now been tasked with implementing some functionality that requires traversing the hierarchy. Specifically, I need to build a list of all of the descendants of a given node. This would be trivial if the relationships were stored as 'parent -> children', but as the relationships are stored as 'child -> (unique) parent' I can't figure out a performant way of doing this (the naive approach is O(n^2), I believe).

+ +

Having wrestled with this for a few hours, the current thinking is that we should refactor the database but if anyone has experienced this before it'd be good to hear about your solution. If not, let this be a warning to anyone trying to save a tree in this fashion if you ever plan on having to traverse it!

+",211229,,,,,43090.64444,Retrieving an incorrectly stored tree data structure,,3,4,,,,CC BY-SA 3.0,, +307495,1,307497,,1/15/2016 13:41,,6,885,"

Imagine a humongous web aplication built using Single Page Application framework such as AngularJS. With its each route it downloads a couple of HTML template files. Each of these template files contain JS files unique to it. So, now when each of these template files download it will also pull in and download the JS files as well.

+ +

This is just a hypothetical situation to understand how browsers would work under such a situation.

+ +

What will happen in terms of memory and performance when a user quickly hops across all the pages (of which there can be 100s or 1000s)? Will all the JS variables consume memory and unnecessarily clog it, or will the JS garbage collector come to the rescue?

+",203074,,31260,,42384.58264,42384.58611,In SPA what happens in terms of memory and performance when user hops across all the pages?,,1,0,,,,CC BY-SA 3.0,, +307498,1,,,1/15/2016 14:03,,0,471,"

I have an optimization problem and I was wondering where to start from to be able resolve it. I think it can be solved with an NP-complete algorithm but I am not sure where to start from. The problem is the following:

+ +
    +
  • There are N types of different colored same-size rectangles that have to be printed.
  • +
  • For each type there is a minimum count that has to be printed.
  • +
  • Arranging them on a printing plaque costs X amount of money, each piece of paper printed costs Y amount of money.
  • +
  • They are all the same size so the maximum amount of rectangles on piece of paper is given as an input, of course you can arrange less than the maximum amount.
  • +
  • There could be more pieces printed than the minimum amount required.
  • +
+ +

What is the best way to combine the rectangles having the given input of types and count and prices for arranging and printing so the amount of X + Y to be the lowest?

+ +

Where should I start with this one, it really sounds like an NP problem where I try all the combinations of pieces of paper and record the best outcome.

+",154653,,154653,,42384.60139,42384.70833,"Optimization problem where to start, what algorithms to use?",,2,12,,42387.97083,,CC BY-SA 3.0,, +307503,1,307513,,1/15/2016 14:42,,1,137,"

I want to model an interaction between classes. For example (I came up with this analogy so it is easier to understand), there is a general class Hero which can have some items. Some of them are e.g. potions and can modify the hero's hp, regen, strength etc., and others can be equipped (run a method like hero.setArmor(item)).

+ +

I want to know how to maintain loose coupling so the hero only runs methods like hero.use(item), while in items I can implement various actions that affect the hero's private state and use the hero's methods. Items must know who called them (because they hold the information about what to do to the hero).

+ +

I already tried another approach, using items inside the hero: e.g. the hero has a list of items he can use on himself. That would look like item.use(this), where this is an instance of Hero. OK, now items must implement their use action on various classes and have modification rights on heroes. I can declare a friend class in Hero for this, but items have derived classes which are not friends of Hero and have no access to its private section. Inheriting friendship would solve the case, but I know it is not allowed in C++. I'm looking for a general solution.

+ +

Is making the hero's setters public a good approach in that situation?

+",211236,,,,,42384.68542,Modeling specific objects interaction,,1,0,,,,CC BY-SA 3.0,, +307509,1,307521,,1/15/2016 15:57,,6,2583,"

Me and a couple of colleagues are having a discussion regarding the following case:

+ +

In an OrderStatus table we're keeping track of all the statuses an order goes through in time, including ""Pending"", ""Available"", ""Returned"", etc. There's a timestamp field in the table to keep track of when each status was added. However, there are two statuses that may be added exactly at the same time, one after the other, in which case both may have exactly the same timestamp (including milliseconds). In another part of the code, we need to check which is the last status an order was set to, and since we're ordering by the timestamp we can easily get the wrong record from the database.

+ +

The table has an IDENTITY(1,1) primary key, and my colleagues think we should order using that column, since that will always necessarily give you the latest record added to the table.

+ +

I feel like an IDENTITY column should be devoid of any meaning and thus should not be used for any sorting, and I would rather do a Thread.Sleep(1) to make sure the timestamps are different so we can keep sorting on that field.

+ +

What do you think is the best practice in this case?

+",192448,,,,,42552.42361,Best practice for getting last record inserted in DB,,4,11,,,,CC BY-SA 3.0,, +307512,1,307520,,1/15/2016 16:06,,6,79,"

I was having a conversation with a friend about the C# StringBuilder class and what its behavior was. I'll paraphrase, but my side of the conversation went something like this (I oversimplified because exactly how StringBuilder works isn't important for my question):

+ +
+

StringBuilder is more efficient to use for extensive string concatenation than to simply use +. The reason is that StringBuilder doesn't dynamically create a new string for each concatenation operation. It waits until all of your desired concatenations are ""built"" and only then does it dynamically allocate the space and give you back your string.

+
+ +
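The same contrast exists in other languages, which makes it easy to demonstrate; in Python, repeated `+=` on strings can degrade quadratically in the worst case (each step may copy everything accumulated so far), while `"".join` sizes the result once and copies once. An illustrative sketch:

```python
def concat_naive(parts):
    out = ""
    for p in parts:
        out += p  # may copy the whole accumulated string each iteration: O(n^2) worst case
    return out

def concat_builder(parts):
    return "".join(parts)  # allocates once, copies each piece once: O(n)
```

Both produce identical output; only the runtime cost differs, which is exactly the distinction at issue in the question below.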

My friend said something along the lines of

+ +
+

That may be true today, but if you are reliant on such optimizations, then you should make your own StringBuilder. In future versions, there's no reason that StringBuilder couldn't use simple string concatenation (str1 = str2 + str3). Libraries only guarantee functional equivalence, not runtime equivalence.

+
+ +

Is this true for libraries?

+ +
    +
  • If I use a library's Sort() function that has runtime O(n*log(n)), is it possible that a future version would change the runtime to O(n^2)?
  • +
+ +

Is the same true for executable tools?

+ +
    +
  • Could (for example) grep's runtime fundamentally change in the future?
  • +
+ +

That aside, wouldn't it be good practice for a library/API/tool developer to keep runtimes for the same calls similar over time?

+",187263,,,,,42385.73125,Library/API Runtime Between Versions,,2,5,,,,CC BY-SA 3.0,, +307517,1,307522,,1/15/2016 16:38,,11,7829,"

I'm currently working on a webapp where we often need to condition some server logic based on the page that is going to be returned to the user.

+ +

Each page is given a 4-letter page code, and these page codes are currently listed in a class as static Strings:

+ +
public class PageCodes {
+    public static final String FOFP = ""FOFP"";
+    public static final String FOMS = ""FOMS"";
+    public static final String BKGD = ""BKGD"";
+    public static final String ITCO = ""ITCO"";
+    public static final String PURF = ""PURF"";
+    // etc..
+}
+
+ +

And often in the code we see code like this (1st form):

+ +
if (PageCodes.PURF.equals(destinationPageCode) || PageCodes.ITCO.equals(destinationPageCode)) {
+    // some code with no obvious intent
+} 
+if (PageCodes.FOFP.equals(destinationPageCode) || PageCodes.FOMS.equals(destinationPageCode)) {
+    // some other code with no obvious intent either
+} 
+
+ +

Which is kind of horrible to read because it doesn't show what common property of these pages led the author of the code to put them together here. We have to read the code in the if branch to understand.

+ +

Current solution

+ +

These ifs have partly been simplified using lists of pages which are declared by different people in different classes. This makes the code look like (2nd form):

+ +
private static final List<String> pagesWithShoppingCart = Collections.unmodifiableList(Arrays.asList(PageCodes.ITCO, PageCodes.PURF));
+private static final List<String> flightAvailabilityPages = Collections.unmodifiableList(Arrays.asList(PageCodes.FOMS, PageCodes.FOFP));
+
+// later in the same class
+if (pagesWithShoppingCart.contains(destinationPageCode)) {
+    // some code with no obvious intent
+} 
+if (flightAvailabilityPages.contains(destinationPageCode)) {
+    // some other code with no obvious intent either
+} 
+
+ +

...which expresses the intent much better. But...

+ +

Current Problem

+ +

The problem here is that if we add a page, we would in theory need to go through all the code base to find if we need to add our page to an if() or to a list like that.

+ +

Even if we moved all those lists to the PageCodes class as static constants, it still needs discipline from developers to check if their new page fits in any of those lists and add it accordingly.

+ +

New solution

+ +

My solution was to create an enum (because there is a finite well-known list of page codes) where each page contain some properties that we need to set:

+ +
public enum Page {
+    FOFP(true, false),
+    FOMS(true, false),
+    BKGD(false, false),
+    PURF(false, true),
+    ITCO(false, true);
+    // ... and so on
+
+    private final boolean isAvailabilityPage;
+    private final boolean hasShoppingCart;
+
+    Page(boolean isAvailabilityPage, boolean hasShoppingCart) {
+        // field initialization
+    }
+
+    // getters
+}
+
+ +

Then the conditional code now looks like this (3rd form):

+ +
if (destinationPage.hasShoppingCart()) {
+    // add some shopping-cart-related data to the response
+}
+if (destinationPage.isAvailabilityPage()) {
+    // add some info related to flight availability
+}
+
+ +

Which is very readable. In addition, if someone needs to add a page, he/she is forced to think about each boolean and whether this is true or false for his new page.

+ +

New Problem

+ +

One problem I see is that there will be maybe 10 booleans like this, which makes the constructor really big, and it might be hard to get the declaration right when you add a page.
+Does anyone have a better solution?
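One alternative (a sketch with hypothetical names, not a drop-in replacement) is to model the properties as an enum of traits and give each page an EnumSet of them. Each constant then lists only the traits it actually has, so adding an 11th property does not grow every constructor call:

```java
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;

// Each property becomes a trait instead of a positional boolean.
enum PageTrait { AVAILABILITY, SHOPPING_CART }

enum Page {
    FOFP(PageTrait.AVAILABILITY),
    FOMS(PageTrait.AVAILABILITY),
    BKGD(),                            // a page with no special traits
    PURF(PageTrait.SHOPPING_CART),
    ITCO(PageTrait.SHOPPING_CART);

    private final Set<PageTrait> traits;

    Page(PageTrait... traits) {
        Set<PageTrait> s = EnumSet.noneOf(PageTrait.class);
        Collections.addAll(s, traits);
        this.traits = Collections.unmodifiableSet(s);
    }

    boolean has(PageTrait trait) {
        return traits.contains(trait);
    }
}
```

A has(...) check keeps call sites as readable as the boolean getters, and a trait that is not listed simply defaults to false.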

+",106595,,106595,,42384.77708,42384.77708,Enum with a lot of boolean properties,,1,0,2,,,CC BY-SA 3.0,, +307527,1,307535,,1/15/2016 18:15,,3,543,"

I'm redesigning one of my programs which performs certain actions on processes of interest (known as ""Monitored Processes"" in my program).

+ +

Some actions I always need to do on those processes are:

+ +
    +
  • Open process handle
  • +
  • Get process name
  • +
  • Close process handle
  • +
+ +

And some further actions I may need to do on some or all of them are:

+ +
    +
  • Suspend process
  • +
  • Resume process
  • +
  • Set process priority
  • +
  • Set process affinity
  • +
+ +

Previously I simply had a ""handler"" class that opened a handle to the process, checked its name, and performed the requested actions. All of that is done via p/invoke; I don't use .NET's Process class at all.

+ +

However I am trying to redesign it from a SOLID perspective, so I created a ""MonitoredProcess"" object and have added a number of methods to it to perform the actions.

+ +

Here it is in its current state:

+ +
public class MonitoredProcess : IMonitoredProcess
+{
+    private readonly IMonitoredProcessConfig config;
+    private readonly int processId;
+    private IntPtr handle = IntPtr.Zero;
+    private uint suspendResumeResult;
+    private string processName;
+
+    public IMonitoredProcessConfig Config { get { return config; } }
+    public int ProcessId { get { return processId; } }
+    public uint SuspendResumeResult { get { return suspendResumeResult; } }
+    public string ProcessName
+    {
+        get
+        {
+            if (processName == null) PopulateProcessName();
+
+            return processName;
+        }
+    }
+
+    public MonitoredProcess(IMonitoredProcessConfig config, int processId)
+    {
+        if (config == null) throw new ArgumentNullException(""config"");
+
+        this.config = config;
+        this.processId = processId;
+    }
+
+
+    public bool OpenProcess()
+    {
+        var alreadyOpen = handle != IntPtr.Zero;
+        if (alreadyOpen) return true;
+
+        handle = Kernel32.OpenProcess(config.RequiredRights, false, (uint)processId);
+        return handle != IntPtr.Zero;
+    }
+
+    public bool CloseProcess()
+    {
+        var closed = Kernel32.CloseHandle(handle);
+        if (closed) handle = IntPtr.Zero;
+        return closed;
+    }
+
+    public uint Resume()
+    {
+        suspendResumeResult = Ntdll.NtResumeProcess(handle);
+        return suspendResumeResult;
+    }
+
+    public uint Suspend()
+    {
+        suspendResumeResult = Ntdll.NtSuspendProcess(handle);
+        return suspendResumeResult;
+    }
+
+    private void PopulateProcessName()
+    {
+        var buffer = new StringBuilder((int)Kernel32.GeneralConstants.MAX_PATH);
+        var success = 0 != Psapi.GetModuleFileNameEx(
+            handle,
+            IntPtr.Zero,
+            buffer,
+            (uint)buffer.Capacity
+            );
+
+        processName = success ? Path.GetFileName(buffer.ToString()) : string.Empty;
+    }
+
+}
+
+ +

I haven't yet added methods to set the process's priority and affinity. The interface it implements is not yet really defined; it will become whatever the class eventually exposes.

+ +

Question:

+ +

Is this class doing too much? Should a ""process"" know how to suspend/resume itself, and set its own priority and CPU affinity? Or should I have other classes that take an IMonitoredProcess and perform these actions on them (which is how I did it before)?

+ +
+ +

In the way I did it before, the classes performing the actions implemented them through p/invoke, so they ""knew"" they could throw and catch Win32Exceptions to get the error message. If this MonitoredProcess class implements everything via p/invoke internally, the class calling those methods technically shouldn't know how they're implemented, so has no reason to either throw or catch Win32Exceptions.

+ +

And I wouldn't want to simply throw them from this class (even wrapped in a domain-specific exception) because in half the places I call the methods I'm not interested in the outcome, but I'd still have to try/catch it every time - and it creates objects unnecessarily.

+ +

Which again makes me wonder if this class should really be doing all these actions, ostensibly on itself, but it uses external functions to perform all of them.

+",207009,,,,,42385.36667,Single Responsibility - is this class doing too much?,,3,2,,,,CC BY-SA 3.0,, +307528,1,307541,,1/15/2016 18:17,,5,959,"

I'm working on a project that uses RxJS to perform data transformations on varying sources of data, and I'm in the process of writing some documentation for it.

+ +

I want to find an effective way to document the following:

+ +
    +
  1. An abstract way to describe the cardinality and relationships of the data.
  2. +
  3. An abstract description of the data transformations.
  4. +
+ +

Here are two examples of how I'm describing a data transformation. Table headers are the destination fields, the second row is the source data or a transformation done on the source data to get the desired data.

+ +

+ +

+ +

I can see that the Github Markdown format is very limited for this purpose, which is why I'm asking for help on this.

+ +

I also have a few ERD diagrams that look like this:

+ +

+ +

I'm not sure of a clean way to document how the transformations relate to the schema, and what assumptions about cardinality are made within those transformations (getStudentTestScoreDcid in particular)

+",161910,,5099,,42384.82847,43124.15347,How can I better document these data relationships/transformations?,,1,2,0,,,CC BY-SA 3.0,, +307543,1,307563,,1/16/2016 0:08,,3,430,"

I have a very large selenium test framework I use to test a web application. It is built around a page-object pattern.

+ +

This week I fixed a bug in a low level piece of the framework, but it broke many other pieces that got built on top of this bug. I didn't have a good way to find these spots without running the full test suite of several thousand UI tests.

+ +

Since this isn't ideal, what strategy can I use to unit test new functionality and page-object modifications/additions to the framework?
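One strategy, sketched below with hypothetical names, is to put a thin seam interface between the page objects and WebDriver. The page-object logic can then be unit-tested quickly against a stub, so a change to a low-level piece of the framework is caught by fast tests instead of the full UI suite:

```java
// A page object depends on a minimal seam interface rather than on the
// WebDriver API directly, so it can be exercised without a browser.
interface ElementSource {
    boolean isPresent(int elementId);   // stands in for driver.findElement(...)
}

class LoginPage {
    static final int SUBMIT_BUTTON = 1;

    private final ElementSource elements;

    LoginPage(ElementSource elements) {
        this.elements = elements;
    }

    // Page-object logic under test: depends only on the seam.
    boolean canSubmit() {
        return elements.isPresent(SUBMIT_BUTTON);
    }
}
```

In a real framework the seam would wrap the actual WebDriver calls; here it is reduced to a boolean lookup to keep the sketch self-contained.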

+",211321,,,,,42386.16042,How do you test a selenium framework?,,3,3,,,,CC BY-SA 3.0,, +307544,1,,,1/16/2016 2:00,,0,113,"

I'm working on a piece of functionality that simply allows a guest user to perform an action a certain number of times before requiring them to log in or create an account. In this instance, they can vote on photos in a gallery 5 times before the app asks them to create an account.

+ +

My solution is roughly as follows.

+ +
    +
  1. On initial load, set cookie with vote_count=1;
  2. +
  3. retrieve and increment vote_count each time a user votes;
  4. +
  5. When vote_count=5, redirect user to page asking them to login or sign up. (of course the user can delete their cookies or use another browser etc, but I'm not concerned with that right now).
  6. +
+ +

My problem is where to implement this code.

+ +
    +
  1. Even though I will only call this feature on one route, I feel the controller is the wrong place because it adds quite a bit of code, and if I change the number of times a user can vote, I'm changing code in the controller (maybe that's not a bad thing?)
  2. +
  3. I don't believe this should live in a helpers.php file because it's not needed anywhere else in the app.
  4. +
  5. I guess I could create its own class and it could live as a static method, but I feel static methods are best left for functionality in which they will always return the same value and have the same result regardless of where they are called.
  6. +
+ +

To put this into a good question: what is a good OOP way of implementing this that fits in line with Laravel?

+",183584,,,,,42387.46875,"static method, helper function or in the controller, where does this go?",,1,3,,,,CC BY-SA 3.0,, +307555,1,307581,,1/16/2016 5:15,,0,103,"
+

In choosing the classes to group together into packages, we must consider the opposing forces involved in reusability and + developability. Balancing these forces with the needs of the + application is nontrivial. Moreover, the balance is almost always + dynamic. That is, the partitioning that is appropriate today might + not be appropriate next year. Thus, the composition of the packages + will likely jitter and evolve with time as the focus of the project + changes from developability to reusability.

+
+ +

From: Agile Software Development: PPP

+ +

What are these opposing forces of reusability and developability?

+",204027,,204027,,42385.80278,42385.80278,Packages : opposing forces of reusability and developability,,1,1,,,,CC BY-SA 3.0,, +307561,1,307574,,1/16/2016 8:46,,0,409,"

I'm developing a program for the Razer Deathadder and Firefly, I am going to create new light effects for the Firefly. I'm doing this in C# with the Colore library.

+ +

Now these light effects will be created using Threads with while(true) statements but how can I check and destroy threads and then reactivate them again when the user has pressed a button/radiobutton?

+ +

Right now the threads are either going over each other or getting deactivated and won't ""turn on"" again. Is there a simpler way to do this?

+ +
    +
  • I haven't tried anything other than creating functions to kill the threads but then they will not activate again.
  • +
+ +

This is how I have done it:

+ +
private void button1_Click(object sender, EventArgs e) {
+    Thread effect = new Thread(new ThreadStart(Effect));
+    effect.Start();
+}
+
+private void Effect() {
+    while(true) {
+       // lightning effect will go here
+    }
+}
+
+",181160,,31260,,42385.40278,42385.59653,Create/destroy multiple threads for animations/light effects,,1,0,0,,,CC BY-SA 3.0,, +307565,1,307570,,1/16/2016 11:27,,6,1680,"

Not sure if it's an appropriate question, but here it goes.

+ +

I know Haskell's do notation pretty well. And I realized that Scala's ""for comprehension"" really is mostly the same as do notation in Haskell. Something I don't quite understand is why the designers chose this name, though. The words ""for"" and ""yield"" seem likely to confuse programmers who are used to, for example, Java's for loop and Ruby's yield.

+ +

Yet I don't think monadic composition has much in common with those things at all. Is there a particular reason to name this syntactic sugar with such words? Or is it just mostly because a lot of keywords have already been occupied in the language, so they had to use these keywords which are relatively free?

+",106675,,106675,,42385.48056,42385.64444,"Why does Scala name monadic composition as ""for comprehension""?",,2,3,,,,CC BY-SA 3.0,, +307571,1,307572,,1/16/2016 13:14,,0,826,"

I've seen blog posts explaining how to do Git bisect, but how do you fix the bug, commit again, and maintain the rest of the commit history?

+ +

An actual problem I faced in my project was like this (the image is just an approximation):

+ +

+ +

I had marked the top-most commit as bad and the bottom-most commit as good. Some of the commits below Buggy commit A were good commits, because somewhere in-between, I had commented out the code that made Buggy commit B a bad commit. So naturally, git bisect led me first to Buggy commit B.

+ +

I corrected the bug and when I tried to commit, I wanted the changed lines of code to be applied to the same Buggy commit B commit. But I found out that:

+ +
    +
  1. Git would always create a new commit for this
  2. +
  3. I got the detached head error:

    + +
HEAD detached at b855e36
+Changes not staged for commit:
+  modified:   main.cpp
    +
  4. +
+ +

Now what do I do?

+ +

Create a new branch (let's name it fixedit) and commit this corrected code? If I do, then what happens to all the other commits above it? Should I do a rebase (never done it before) of the commit above Buggy commit B and put the entire line of commits onto the new fixedit branch?

+ +

I would prefer that I'd just be allowed to modify the existing commit and the entire tree would remain as it is, but then I'd have to go to all the commits above Buggy commit B and fix the buggy line in all those commits. That doesn't make sense. So how does one fix bugs and yet retain commit history properly?

+ +

UPDATE:

+ +

So I do a git bisect reset and fix the code on the topmost commit (because I know which line of Buggy commit B introduced the bug)?

+ +

Like I mentioned earlier, the code that caused Buggy commit B was commented out in one of the commits just before Buggy commit A.

+ +
    +
  1. So when in my topmost commit I see that the line is commented out, I realize that the bug is somewhere else.
  2. +
  3. During the first bisect since one of the commits between A and B was marked good, I'd choose that as the good commit for my second Git bisect attempt, and the topmost newest commit as the bad commit and then continue with Git bisect. That'd lead me to Buggy commit A.
  4. +
  5. Then I'd do a git bisect reset and correct the lines of code in the topmost code and then commit.
  6. +
+ +

Ok; that makes sense. Thanks :-)

+",8413,,164151,,42385.67361,42385.67361,Git Bisect found the buggy commit. Now what?,,1,4,,,,CC BY-SA 3.0,, +307579,1,,,1/16/2016 16:22,,6,427,"

I have two lists (sizes m and n) containing high-dimensional bit vectors. All vectors have the same number of dimensions, and I use Hamming distance as the measure of distance.

+ +

For each element in the first list I want to find the closest elements in the second list. Such a closest element may differ by several thousand bits from the element I'm searching for.

+ +

The naive approach would be computing the Hamming distance for each pair of vectors, but that has runtime O(m*n), making it infeasible. So I'm looking for an algorithm that's significantly faster.

+ +

Let's say I have d=10000, m=1 billion and n=100 billion, and I want the algorithm to terminate in a couple of CPU days.

+ +
+ +

The elements in the first list are created by taking a random element from the second list and flipping each bit with the same probability p < 0.5. I want to support values of p that are as close as possible to 0.5. I'm fine with probabilistic algorithms that find matches with high probability.
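A common family of approaches here is locality-sensitive hashing for Hamming distance (bit sampling): hash each vector by a small fixed subset of bit positions, repeat with several independent subsets, and compute exact distances only for vectors that share a bucket. A minimal sketch of the building blocks, with hypothetical names and vectors packed into 64-bit words:

```java
// Bit-sampling LSH building blocks for Hamming distance over packed vectors.
class HammingSketch {
    // Exact Hamming distance between two packed bit vectors of equal length.
    static int distance(long[] a, long[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) {
            d += Long.bitCount(a[i] ^ b[i]);
        }
        return d;
    }

    // Bit at position pos (little-endian within each 64-bit word).
    static int bitAt(long[] v, int pos) {
        return (int) ((v[pos >>> 6] >>> (pos & 63)) & 1L);
    }

    // Bucket key: the concatenation of the sampled bits. Vectors differing
    // only in unsampled positions land in the same bucket.
    static int bucket(long[] v, int[] sampledPositions) {
        int key = 0;
        for (int pos : sampledPositions) {
            key = (key << 1) | bitAt(v, pos);
        }
        return key;
    }
}
```

Note that as p approaches 0.5 the collision probabilities of near and far pairs converge, so the number of sampled positions per hash and the number of independent hashes must grow accordingly; this sketch only illustrates the mechanics.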

+",8669,,8669,,42386.40278,42459.34792,Finding similar high dimensional vectors,,1,7,,,,CC BY-SA 3.0,, +307583,1,307683,,1/16/2016 18:20,,1,86,"

Let's say that I want to embed a scripting engine inside some other program, to allow users to create custom behaviours for objects.

+ +

For example, when a particular event happens, the server would launch an embedded scripting engine and run the response-to-event script supplied by each actor. Conceptually you can imagine it like a movie script. A new character enters the scene. Each of the actors in the scene must now respond to the new arrival, and that behaviour is governed by their user-supplied scripts.

+ +

Now I can launch an embedded interpreter which will run the user-supplied scripts, but I need somehow for the main application to ""timeslice"" or schedule those sub-processes.

+ +

For example, there might be a rule that says each actor can perform one action per second. So after an actor performs that one action, I would like to suspend their interpreter until their next chance to act.

+ +

Another way I thought of implementing it was for the actors to voluntarily release control. E.g. their script might go ""say hello; sleep 1000 millis; wave arms; sleep 1000 millis;"" If I was going to do that, I might as well give them a ""crash me"" button to push as well.

+ +

If I had to destroy or re-enter the interpreters, I would also need some way of storing the state/program counter so it could resume.

+ +

I'm also considering running them as separate threads or processes, but I'm not sure exactly how much control I will have over them, to be able to suspend and resume a thread or subprocess in this manner. Edit: I would also be dependent on the clients' behaviour to synchronise everything, just as if I was depending on them to sleep.

+ +

I am told this is a common game design pattern, but I can't really find any useful information on how to implement it. I've looked at embedded interpreters, for example Nashorn JS in Java or Lua in C++, and they don't appear to support suspending the interpreter, which would be the ""easy way"" to do this.

+ +

Can anyone give guidelines here? At the moment I think thread or process control might be the best solution?

+ +

Edit: I don't really want to have to write my own interpreter ... although if I did, I could obviously suspend it at will. It would be a lot of work even for a basic stack machine.

+ +

Edit: I'm beginning to think that I need to have a custom interpreter that I tell to ""run 100 opcodes (etc), then sleep"".
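That last idea can be sketched without writing a full custom interpreter by inverting control: each script exposes a step() that executes one operation, and the host grants every live script a fixed opcode budget per tick. Control always returns to the host, so no script can monopolise the engine, and state between ticks lives inside each Script object. All names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// One opcode at a time; return false when the script has finished.
interface Script {
    boolean step();
}

class CooperativeScheduler {
    private final List<Script> scripts = new ArrayList<>();

    void add(Script script) {
        scripts.add(script);
    }

    // One tick: each remaining script gets up to `budget` opcodes.
    void tick(int budget) {
        Iterator<Script> it = scripts.iterator();
        while (it.hasNext()) {
            Script script = it.next();
            for (int i = 0; i < budget; i++) {
                if (!script.step()) {
                    it.remove();   // finished scripts drop out
                    break;
                }
            }
        }
    }

    int liveScripts() {
        return scripts.size();
    }
}
```

An embedded interpreter that can be driven instruction-by-instruction (or Lua coroutines, which can yield and be resumed) would slot behind the Script interface.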

+",,user170146,,user170146,42386.64653,43905.95764,Scheduling/suspending embedded interpreters,,2,3,,,,CC BY-SA 3.0,, +307588,1,307594,,1/16/2016 19:26,,3,1037,"

Based on the context, C# can generate the expression tree for a LambdaExpression from lambda expression syntax:

+ +
Expression<Func<string, int>> expr1 = s => s.Length;
+
+ +

as can VB.NET:

+ +
Dim expr1 As Expression(Of Func(Of String,Integer)) = Function(s) s.Length
+
+ +

Why can't either language compiler generate an expression tree from other expression types?

+ +

C#:

+ +
Expression<DateTime> expr2 = DateTime.Now;
+
+ +

VB.NET:

+ +
Dim expr2 As Expression(Of DateTime) = DateTime.Now
+
+ +

I am assuming this behavior is either by design; or there are technical reasons that make this infeasible; or it is unnecessary for the requirements that made Expressions necessary in the first place -- LINQ queries. I would like further details on the subject.

+",100120,,100120,,42386.71111,42386.77917,C# / VB.NET build expression trees only from lambda expressions -- why?,<.net>,2,0,,,,CC BY-SA 3.0,, +307590,1,308873,,1/16/2016 20:17,,9,444,"

One thing that always intuitively struck me as a positive feature of C (well, actually of its implementations like gcc, clang, ...) is the fact that it does not store any hidden information next to your own variables at runtime. By this I mean that if you for example wanted a variable ""x"" of the type ""uint16_t"", you could be sure that ""x"" will only occupy 2 bytes of space (and won't carry any hidden information like its type etc.). Similarly, if you wanted an array of 100 integers, you could be sure it is as big as 100 integers.

+ +

However, the more I am trying to come up with concrete use cases for this feature the more I am wondering if it actually has any practical advantages at all. The only thing I could come up with so far is that it obviously needs less RAM. For limited environments, like AVR chips etc., this is definitely a huge plus, but for everyday desktop / server use cases, it seems to be rather irrelevant. Another possibility I am thinking of is that it might be helpful / crucial for accessing hardware, or maybe mapping memory regions (for example for VGA output and the like) ... ?

+ +

My question: Are there any concrete domains that either can't be implemented, or can only be implemented very cumbersomely, without this feature?

+ +

P.S. Please tell me if you have a better name for it! ;)

+",211395,,29020,,42386.3875,42401.89236,"How useful is C's ""true"" sizing of variables?",,3,18,,,,CC BY-SA 3.0,, +307609,1,307789,,1/17/2016 3:43,,5,614,"

I'm trying to understand the concepts of HATEOAS (Hypermedia As The Engine Of Application State) in REST. The following have been very useful:

+ +

What does HATEOAS offer for discoverability and decoupling besides ability to change your URL structure more or less freely?

+ +

REST APIs must be hypertext-driven (from Roy Fielding himself)

+ +

From them I understand that HATEOAS isn't just about providing links to tell a client what the next valid actions are, it's also about providing information about the valid methods that can be applied to a resource, and the format of the data being passed in both directions between the client and the server.

+ +

It's the data format that I'm having trouble with. Roy Fielding mentions defining new media types to describe the data formats. If new media types have to be created for every API anyone creates there would be hundreds of thousands of single-use media types defined.

+ +

To get around this issue, someone suggested using the HAL (Hypertext Application Language) media type for RESTful APIs. HAL includes the concept of CURIEs, prefixes for relation names that expand into URLs that point to documentation.

+ +

For example,

+ +
""_links"": {
+  ""curies"": [
+    {
+      ""name"": ""doc"",
+      ""href"": ""http://haltalk.herokuapp.com/docs/{rel}"",
+      ""templated"": true
+    }
+  ],
+
+  ""doc:latest-posts"": {
+    ""href"": ""/posts/latest""
+  }
+}
+
+ +

Here the relation name is doc:latest-posts where doc is a CURIE that expands to http://haltalk.herokuapp.com/docs/, and http://haltalk.herokuapp.com/docs/latest-posts will point to documentation about the latest-posts resource (note that the URL to the latest-posts resource itself, as opposed to the documentation about it, is /posts/latest)

+ +

This seems a good way for client-side developers to learn about each resource provided by the API. But does having an automatic link to human readable documentation satisfy the discoverability requirements of HATEOAS? Or must the information describing the data formats required for each resource be machine readable (eg a custom media type for the API)?

+",60568,,-1,,42837.31319,42388.64792,"Does discoverability in HATEOAS require the information must be machine readable, or can it just be human readable?",,1,2,1,,,CC BY-SA 3.0,, +307610,1,307626,,1/17/2016 3:44,,3,598,"

I want to license my software with the Apache License 2.0. The software is only a single file of source code. I do not want to include the entire license verbatim in the code. The license appendix describes how I may apply the license to my work by reference. This is done by including text in my software, referring to the license, which I am willing to do.

+ +

Recipients of my software then clearly will be subject to the terms of the license, one of which is a redistribution restriction (section 4(a)). That restriction requires that recipients who want to redistribute the work ""must give any other recipients of the Work or Derivative Works a copy of this License"".

+ +

Does this mean that recipients who want to redistribute the work must include 2 files, instead of merely 1 - the single file of source code, and a copy of the license?

+",20608,,,,,42416.49931,Using the Apache License 2.0 By Reference Only,,2,1,,,,CC BY-SA 3.0,, +307616,1,307631,,1/17/2016 8:22,,9,34311,"

My R&D team and I maintain a large codebase. We've divided our business logic into multiple packages, some of which have classes with identical names.

+ +

As you can guess, the names conflict when both classes are referenced in the same Java file.

+ +
+ +

For example:

+ +
com.myapp.model (package)
+ - Device (class)
+ - ...
+
+com.myapp.data (package)
+ - Device (class)
+ - ...
+
+ +
+ +

We had a debate on what's the best practice to treat these cases and the following options came up:

+ +

1st Option

+ +
    +
  • Renaming the class, adding a prefix

    + +
    ModelDevice
    +DataDevice
    +
  • +
+ +

2nd Option

+ +
    +
  • Using the full package+class name when both are referenced

    + +
    com.myapp.model.Device
    +com.myapp.data.Device
    +
  • +
+ +
+ +

What's more correct in terms of code management and scalability?

+ +

We are currently mixing both approaches and starting to see inconsistency.
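Note that the two styles in Option 2 can be mixed within one file without renaming anything: import the class used most in that file and fully qualify the other. The standard library has exactly this collision, e.g. java.util.Date vs java.sql.Date:

```java
import java.util.Date;

// Hypothetical converter class: the imported java.util.Date is used by its
// simple name, while the colliding java.sql.Date is fully qualified.
class DeviceConverter {
    static java.sql.Date toSql(Date utilDate) {
        return new java.sql.Date(utilDate.getTime());
    }
}
```

Only the files that reference both classes pay the fully-qualified-name cost; everywhere else the short name still works.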

+",206305,,206305,,43392.56042,43392.62292,How to deal with Classes having the same name (different packages),,3,3,4,,,CC BY-SA 4.0,, +307620,1,,,1/17/2016 9:19,,5,796,"

Does the command pattern use OCP?

+ +

In a command pattern, the invoker is only extensible by actually extending the class. If we want to add custom methods to it, we can make our own sub-class, or we could modify the base class's constructor, which violates the open/closed principle.

+ +

Which leads me to the question: is the command pattern using OCP or not?
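For a concrete frame of reference, here is a minimal sketch (hypothetical names) of the sense in which the command pattern is usually said to follow OCP: the invoker below is closed for modification, and new behaviour arrives as new Command implementations registered from outside:

```java
import java.util.HashMap;
import java.util.Map;

// The extension point: new behaviour means a new Command implementation.
interface Command {
    int execute(int input);
}

// The invoker never changes when commands are added.
class Invoker {
    private final Map<Character, Command> commands = new HashMap<>();

    void register(char key, Command command) {
        commands.put(key, command);
    }

    int run(char key, int input) {
        return commands.get(key).execute(input);
    }
}
```

Whether extending the invoker itself (rather than the command set) breaks OCP is exactly the open part of the question; OCP only promises closure against the kind of change the design anticipated.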

+",211454,,9113,,43093.40556,43093.40556,Command pattern and open-closed-principle,,3,0,2,,,CC BY-SA 3.0,, +307629,1,,,1/17/2016 11:52,,2,759,"

So I'm designing a database for a project which is going to enable beauty businesses to create a profile, clients to create a profile, clients to be able to book appointments, and businesses to manage appointments and their clients.

+ +

I have the following tables and fields so far (I've kept it as simple as possible on here, but there is more to the tables on paper):

+ +
BUSINESS_TABLE
+id (PK)
+name
+address
+contact
+about
+website
+
+TREATMENT_TABLE
+id (PK)
+name
+description
+duration
+price
+
+CLIENT_TABLE
+id (PK)
+name
+address
+contact
+about
+current_medical
+medical_history
+notes
+
+EMPLOYEE_TABLE
+id (PK)
+name
+address
+business_id (FK)
+business_name (JOIN)
+availability_days
+availability_hours
+
+APPOINTMENT_TABLE
+id (PK)
+client_id (FK)
+client_name (JOIN)
+business_id (FK)
+business_name (JOIN)
+employee_id (FK)
+employee_name (JOIN)
+treatment_id (FK)
+treatment_name (JOIN)
+treatment_duration (JOIN)
+treatment_price (JOIN)
+appointment_day
+appointment_time
+
+ +

So I think I've normalised it correctly:

+ +

Business can have many clients, appointments, treatments and employees
+Clients can have many appointments
+Treatments don't have anything
+Employees are not linked back to business to keep things simple for now
+Appointments take treatments, businesses, employees and clients

+ +

So APPOINTMENT_TABLE is the linking table and all the rest feed into it.

+ +

Apart from checking to see if my logic is right, I was also asking whether I should split up the CLIENT_TABLE into more, smaller tables as there are 11 fields under the current_medical category and 20 fields in the medical_history category and both of these can be added to at any time. I think that it'll be worth putting these two in separate tables and JOINing them to the CLIENT_TABLE for organisation and being able to amend them more easily (though a little more complex in programming).

+ +

I would be grateful to hear any opinions.

+ +

Thanks,

+ +

Michelle

+",211464,,,,,42386.55208,"Database Design - lots of small tables or fewer, bigger tables?",,2,6,,,,CC BY-SA 3.0,, +307636,1,314417,,1/17/2016 14:03,,2,1093,"

I'm utilising the OpenCalais API to tag the articles from multiple news sources.

+ +

I know which category each article belongs to (e.g. crime, politics etc). Also each article has three social tags available.

+ +

How can I find out which topic is the most talked about by multiple news sources?

+ +

I thought I first gather all the saved articles in the past 24 hours.

+ +
    +
  • Grab the first article and add its category (e.g. crime) as a key to a dictionary. The value for that key will be a list of articles.

  • +
  • Hence within a loop, I would add any article from that category to the list above.

  • +
  • With this approach I have a dictionary where the keys are the category and value represents the articles belonging to that category.

  • +
+ +

e.g.

+ +
{ 
+   ""Crime"" : [""article1"", ""article4"", ""article6"", ""article7"",
+   ""Politics"" : [""article2"", ""article3""] 
+}
+
+ +

The challenge is to find out if the articles in Crime category are talking about the same crime or not.

+ +
e.g. article1 has these three social tags:  
+   [""Crime in London"", ""Holborn"", ""Subterranean London""]
+article4:
+   [""Hatton Garden"", ""Holborn"", ""Subterranean London""]
+article6:
+   [""Clerkenwell crime syndicate"", ""Crime in London"", ""Holborn""]
+But article7 seems to be about a different kind of crime than Hatton Garden heist:
+   [""Subterranean London"", ""Tube"", ""Assault""]
+
+ +

I suppose I need to use some kind of mathematical intersection to find out for each article how many social tags match each other.

+ +

So that I could say article1 and article4 have two tags that match each other, and hence have a higher probability that they are covering the same news.

+ +

Article6 is similar, as it matches two tags with article1, but not article4. However because article1 and 4 match, we conclude that article1, 4 and 6 are covering the same news. (I don't know how to achieve this in code)

+ +

Article7, meanwhile, matches only one social tag of article1 and article4 respectively, and hence is less likely to be talking about the same kind of crime. (Unsure how to achieve this.)

+ +

Does what I'm trying to achieve make sense? Thanks for any advice.
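The pairwise part of the idea can be sketched as a plain set intersection with a threshold; tags are reduced to integer ids and the names are hypothetical. Grouping articles whose pairs pass the threshold is then a connected-components problem, which is how article1, 4 and 6 would end up in one cluster even though 4 and 6 overlap only via 1:

```java
import java.util.HashSet;
import java.util.Set;

class TagOverlap {
    // Number of tags two articles share.
    static int sharedTags(Set<Integer> a, Set<Integer> b) {
        Set<Integer> common = new HashSet<>(a);
        common.retainAll(b);           // set intersection
        return common.size();
    }

    // Simple decision rule: two or more shared tags means the same story.
    static boolean sameStory(Set<Integer> a, Set<Integer> b) {
        return sharedTags(a, b) >= 2;
    }
}
```

The threshold of 2 is an arbitrary starting point; tuning it (or weighting rarer tags more heavily) would be the next refinement.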

+",165592,,165592,,42386.90278,42461.00764,How to find related articles among a set of articles?,,1,1,,,,CC BY-SA 3.0,, +307637,1,307645,,1/17/2016 14:04,,3,1852,"

I have been working on a few web applications/REST web services recently (Spring IoC/MVC/Data JPA etc) and they usually follow the same pattern: Controller classes --> Service classes (which have a number of ""utility""/business logic classes autowired) --> Spring Data Repositories.

+ +

Pretty much all of the classes above are Spring singletons, which I sometimes feel makes the code and some functions within a class dirtier (for example, because I can't have state in a class, I need to pass quite a lot of parameters between methods, and I don't really like having more than 1-2 parameters, although I know it's sometimes necessary).

+ +

I was wondering how this problem is overcome in big (e.g. enterprise) applications.

+ +
    +
  • Is it a common practice to use non-Spring managed classes in the Spring application? If so, how do you pass dependencies into it (the ones that would normally be autowired)? If I use constructor injection for example, then I need to autowire all necessary dependencies into the class that creates the object, and I wanted to avoid that. Also I don't really want to be messing with load-time weaving etc. to autowire beans into non-Spring objects.
  • +
  • Is using prototype scoped beans a good solution? The only thing is that I need to use AOP's scoped proxies (or method injection etc.) to make sure that I get a new instance for any bean that's autowired into a singleton in the first place. Is that a ""clean"" and reliable option - i.e. is it certain that there will be no concurrency-type issues, and can I still autowire any singletons into those classes with no issues?
  • +
+ +

If anyone who has worked on a large system and actually managed to keep the structure clean and not ""bloated"" has any recommendations, I'd love to hear them. Maybe there are some patterns I am not aware of that I could use?

+ +

Any help appreciated.
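On the parameter-passing pain specifically, one low-tech option that avoids both prototype scope and non-managed service classes: keep the singletons stateless, but bundle the per-operation state into one small plain object created with new and passed explicitly. A sketch with hypothetical names:

```java
// Plain holder for per-operation state; created with `new`, never
// Spring-managed, so there is nothing to autowire into it.
class OrderContext {
    int itemCount;
    int totalCents;
}

// Would be a stateless Spring singleton; its autowired dependencies are
// omitted here. State travels in the context parameter instead of fields.
class PricingService {
    void addItem(OrderContext ctx, int priceCents) {
        ctx.itemCount++;
        ctx.totalCents += priceCents;
    }
}
```

This collapses the many loose parameters into one argument while keeping the singletons thread-safe, at the cost of threading the context through the call chain.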

+",211471,,,,,42386.71597,Using prototype/non-Spring managed beans in Spring Web application,,1,3,1,,,CC BY-SA 3.0,, +307638,1,,,1/17/2016 15:15,,5,708,"

So I'm a relatively new programmer, attempting to create a web application (ASP.net) to display the system information (E.G. Status of windows services, disk & resource usage and errors in event logs) of a number of remote servers for monitoring purposes (initially - I'd also like to keep the design open to the addition, in the future, of some basic remote control over these servers too).

+ +

I've started by creating a small prototype, but I've become a little confused regarding the design of the solution.

+ +

My current solution involves the creation of a windows service that runs on each of the client servers (those being monitored), which periodically collects the necessary data from the local machine and then sends it as JSON via a POST request to a Web API hosted in IIS on the Monitoring server, which receives the data and stores it in an SQL Database. I now intend to create a separate ASP.net MVC project to host as a separate application in IIS on the Monitoring server and use this as my front end - to read from the DB and display the stats of the machines.

+ +

Is there anything blindingly wrong with the design specified above?

+ +

I'm also unsure whether this design limits how far I could take this app, for example how I could add features to allow users to perform actions on the client servers via the web interface (e.g. manually requesting real-time system information, remotely controlling Windows services, clearing event logs, etc.). Would it be a bad design to also host a Web API within the client Windows services so that the server could talk to/make requests to all the clients too?

+ +

Sorry in advance if this post is too lengthy/ambiguous. Any advice and/or design suggestions would be greatly appreciated!

+",211472,,211472,,42386.81042,42424.99375,Architecture for Web application to monitor remote servers,,1,0,,,,CC BY-SA 3.0,, +307639,1,307643,,1/17/2016 15:17,,11,21957,"

I often hear things like:

+ +
    +
  • Mapping the classes
  • +
  • Mapping the objects from the database
  • +
  • Mapping the objects
  • +
  • Mapping the elements of a list
  • +
  • A mapper
  • +
+ +

What do a mapper and the act of mapping something actually mean?
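For concreteness, here is a small illustration of my own (not part of the original question) of two of those senses sketched in JavaScript:

```javascript
// Sense: mapping the elements of a list - apply a function to each
// element, producing a new list of transformed elements.
const prices = [1.5, 2.25, 3.0];
const labels = prices.map(p => '$' + p.toFixed(2));
console.log(labels); // [ '$1.50', '$2.25', '$3.00' ]

// Sense: a mapper translating objects from the database - a function
// turning one representation (here, a hypothetical DB row) into another
// (a domain object).
const toUser = row => ({ id: row.user_id, name: row.user_name });
console.log(toUser({ user_id: 7, user_name: 'Ada' })); // { id: 7, name: 'Ada' }
```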

+",184058,,258164,,42734.76736,42734.775,What does mapping mean in programming?,,4,1,10,,,CC BY-SA 3.0,, +307658,1,,,1/17/2016 23:11,,9,1810,"

I'm presently reading Building Maintainable Software by Joost Visser, and some of the maintenance guidelines they recommend include: A) each unit/method should be short (fewer than 15 lines per method) and B) methods should have a low cyclomatic complexity. It suggests that both these guidelines help with testing.

+ +

The example below is from the book explaining how they would refactor a complex method to reduce the per method cyclomatic complexity.

+ +

Before:

+ +
public static int calculateDepth(BinaryTreeNode<Integer> t, int n) {
+    int depth = 0;
+    if (t.getValue() == n) {
+        return depth;
+    } else {
+        if (n < t.getValue()) {
+            BinaryTreeNode<Integer> left = t.getLeft();
+            if (left == null) {
+                throw new TreeException(""Value not found in tree!"");
+            } else {
+                return 1 + calculateDepth(left, n);
+            }
+        } else {
+             BinaryTreeNode<Integer> right = t.getRight();
+             if (right == null) {
+                throw new TreeException(""Value not found in tree!"");
+             } else {
+                return 1 + calculateDepth(right, n);
+             }
+        }
+    }
+}
+
+ +

After:

+ +
public static int calculateDepth(BinaryTreeNode<Integer> t, int n) {
+     int depth = 0;
+     if (t.getValue() == n)
+        return depth;
+     else
+        return traverseByValue(t, n);
+}
+private static int traverseByValue(BinaryTreeNode<Integer> t, int n) {
+     BinaryTreeNode<Integer> childNode = getChildNode(t, n);
+     if (childNode == null) {
+        throw new TreeException(""Value not found in tree!"");
+     } else {
+        return 1 + calculateDepth(childNode, n);
+     }
+}
+private static BinaryTreeNode<Integer> getChildNode(
+     BinaryTreeNode<Integer> t, int n) {
+     if (n < t.getValue()) {
+        return t.getLeft();
+     } else {
+        return t.getRight();
+     }
+}
+
+ +

In their justification they state (emphasis mine):

+ +

Argument:

+ +
+

“Replacing one method with McCabe 15 by three methods with McCabe 5 each means that overall McCabe is still 15 (and therefore, there are 15 control flow branches overall). So nothing is gained.”

+
+ +

Counter Argument:

+ +
+

Of course, you will not decrease the overall McCabe complexity of a system by refactoring a method into several new methods. But from a maintainability perspective, there is an advantage to doing so: it will become easier to test and understand the code that was written. So, as we already mentioned, newly written unit tests allow you to more easily identify the root cause of your failing tests.

+
+ +

Question: How does it become easier to test?

+ +

According to the answers to this question, this question, this question, this question and this blog, we should not be testing private methods directly. That means we need to test them via the public methods that use them. So going back to the example in the book: if we are testing the private methods via the public method, then how do the unit tests improve, or change at all for that matter?

+",189585,,-1,,42878.52778,43819.86736,How does breaking up a big method into smaller methods improve unit testability when the methods are all private?,,6,4,4,,,CC BY-SA 3.0,, +307659,1,,,1/17/2016 23:34,,1,376,"

Where I work we have around 10 VS projects in a solution which are identical in functionality (with some different rules in a few methods) and share many identical methods. They share the same namespace, and the classes are called individually with if/elses.

+ +

I believe it would be better and more succinct to have an interface and an abstract class with protected methods which are common among all the children.

+ +

The methods which take one of these classes can then take an interface, rather than having 10 different methods which each take one class and run the same method on it.

+ +

The public methods all have the same signature and return the same thing.

+ +

Future implementations of the class will have a clear guideline on how to implement it (other than copy-pasting and editing a previous version), and it is more maintainable. The reason I am considering this is that I am adding an extra version and am unhappy with the idea of copying and pasting a lot of mess and changing all the bits I need to change (changes in just 2-3 long methods out of ~20).

+ +

I am having difficulty expressing my reasons for why I should be allowed to make these changes (it would probably not take more than a day, and it changes no logic, so minimal testing is required). I know intuitively it is better, but that is not good enough!

+ +

How do I explain to the lead dev why DRY is a good thing? Or is he right that we should just keep copying and pasting things because we ""inherited the code this way""?

+",97745,,97745,,42387.27639,42387.27639,How can I sell DRY?,,3,3,,42387.91528,,CC BY-SA 3.0,, +307660,1,307661,,1/17/2016 23:50,,2,1790,"

What is the difference between a 3rd party lib and a plugin?

+ +

How do I choose what should be pushed to these repos?

+ +
plugins-release-local = Your and 3rd party plugins (releases)
+plugins-snapshot-local =  Your and 3rd party plugins (snapshots)
+
+ext-release-local = Manually deployed 3rd party libs (releases)
+ext-snapshot-local = Manually deployed 3rd party libs (snapshots)
+
+",211515,,31260,,42387.2125,42387.2125,What is the difference between a 3rd party lib and a plugin,,1,0,1,,,CC BY-SA 3.0,, +307668,1,,,1/18/2016 2:37,,-2,509,"

I just wonder if there exists a compiled language that can modify its own machine code.

+ +

I know that in most common operating systems the executable code is protected during execution time, but maybe if you're using some specific, permissive operating system, you can use a language that allows you to use reflexive features over its own code.

+ +

I found that some assembly programmers used reflexivity, so maybe someone invented some high-level language that allows you to compile reflexive machine code.

+ +

UPDATE (the real case): We are building a high-performance scientific program. We are using C as a portable assembly. We usually check the generated assembly code to make our code faster. Looking inside, we figured that a good way to improve the code would be for C to have some machine-code reflexivity facilities.

+ +

Example: imagine this code:

+ +
void func (int b) {
+int i;
+for ( i = 0 ; i < 10 ; i ++) {
+  if ( i + b == 2 ) do_some();
+  do_other();
+}
+}
+
+ +

It would be more performant if you could take a reflexive approach:

+ +
void util(int i, int b) {
+  if ( i + b == 2 ) {
+    do_some();
+    delete_util(); // this will delete the util function and just continue where it is called.
+  }
+}
+
+void func(int b) {
+int i;
+for ( i = 0 ; i < 10 ; i ++) {
+  util(i, b);
+  do_other();
+}
+}
+
+ +

I know there are a lot of reflexive languages that are interpreted or compile to VMs, but because of that they are less performant, and thus not what I need.

+",179197,,179197,,42387.50625,43977.43125,Does exist a reflexive compiled language?,,2,10,1,42387.63056,,CC BY-SA 3.0,, +307672,1,307675,,1/18/2016 3:45,,6,321,"

So here is what happened

+ +

The customer gave me a list of things he wanted to have. It is a funding website. So to start a funding project he wanted to have certain fields in the ""create/edit funding project"" part. He gave me a document on what should get implemented. I estimated around 25 hours for it and made a price tag for that milestone, but am now at 66 hours, and according to him, it is not done yet.

+ +

An example: we changed an existing WordPress theme. He wanted a funding project initiator name for each project. So I estimated 0.5 hours for adding a field 'project initiator name'. For me, that meant adding the field and adding it to the ""create/edit funding project"" page; the document did not mention adding it anywhere else.

+ +

Now he added a ""bug"" that this new field does not change in the ""My account"" the display of the ""my profile"". He added other things that were not in the feature list as well.

+ +

How do you create contracts for milestones that don't give the customer a wrong understanding of what is in scope, and spare me the need to explain what is out of scope?

+",29473,,,,,43090.79375,How to define fixed priced software projects and how to argue what is inside scope and what is outside scope?,,2,1,1,42387.56806,,CC BY-SA 3.0,, +307674,1,308117,,1/18/2016 4:56,,1,497,"

I have a golang library that abstracts a network service (think IRC-alike). The network server produces events which users of my library should consume. I'm using blocking network calls internally. I want to design my API to minimise friction for users of my library, so I'm trying to decide between

+ +
    +
  • having the user supply func callbacks to mylib.doStuff(), which would enter my library's blocking network loop; it would be up to the caller to background this into a goroutine if they wished (and then perform their own synchronisation as necessary); or

  • +
  • having the user call mylib.startDoingStuff(), which would spawn a background goroutine to handle the blocking network calls, returning multiple channels of events for the caller to select over. It has the advantage of isolating the goroutine and the blocking from the calling code.

  • +
+ +

What's more idiomatic? What would you expect to see in a network library?

+",211538,,,,,42392.04028,Exposing blocking API in golang?,,1,0,,,,CC BY-SA 3.0,, +307686,1,307693,,1/18/2016 10:18,,7,980,"

I am currently unit testing some C code and I am faced with a problem:

+ +

Within the code there are calls to functions that contain inline assembler code for the SPARC 8 architecture.

+ +

Since I am doing the unit tests on an x86 architecture, it obviously can't be compiled. Unfortunately, testing it on the target platform is not possible for me.

+ +

What do you think would be a proper approach for this problem? Should I modify the original code by enveloping the inline assembler code with #ifdef UNIT_TEST #endif, or is there an alternative solution?

+",211572,,211572,,42387.46875,42387.48542,Unit Testing: Assembler code and different architectures,,1,4,1,,,CC BY-SA 3.0,, +307690,1,309299,,1/18/2016 11:03,,4,3026,"

I'd like to design a REST API supporting:

+ +
    +
  • Login
  • +
  • Temporary token generation
  • +
+ +

The reason being there are a number of client-side REST libraries that would speed up development if used, as they take care of serialization, connection, etc. If not used, we would have to code these parts separately. I don't need REST for performance; there will be no load balancing or caching involved on the server side.

+ +

Login and token generation are quite common. And yet I'm having a hard time trying to figure out how to do them in REST. For instance:

+ +

Login

+ +

I've read a lot about login in REST, and apparently there's no right answer here. In the end many people end up using OAuth just because it is mainstream. Actually I just need to check if the user exists and the password is correct. Any other operation in the API will be passed the user name and password for authorization purposes, so for now we are still on the stateless side.

+ +

The problem here is that I've envisioned login as a query against the users collection:

+ +
GET https://api.example.com/users?usr=username&psw=1234
+
+ +

The server will respond with a filtered list of a single user.

+ +

But I don't like passing the password in the query string. It does not look OK to me. The connection will be made using HTTPS, but there's an additional step of having to URL-encode all odd characters and decode them in the server, which we won't have in a POST. And also URLs are more frequently logged than payloads.

+ +

I could also get the password in the server response:

+ +
GET https://api.example.com/users?usr=username
+
+ +

The returned JSON object would contain all the fields for the user (id, password, etc), so I could check the password in the client.

+ +

Which one is better? Any alternative?
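(As an aside, a sketch of my own, not part of the original question, assuming a hypothetical /sessions endpoint: credentials can travel in a POST body instead of the query string, which avoids both the URL-encoding step and the URL-logging concern.)

```javascript
// Sketch only: /sessions is a hypothetical endpoint, not part of the real API.
// Building the request as a plain object keeps the password out of the URL.
function buildLoginRequest(username, password) {
  return {
    url: 'https://api.example.com/sessions',
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // Credentials travel in the body; over HTTPS the payload is encrypted.
      body: JSON.stringify({ usr: username, psw: password })
    }
  };
}

// Usage: fetch(req.url, req.options) in a browser or a recent Node.
const req = buildLoginRequest('alice', 's3cret');
console.log(req.url.includes('s3cret')); // false - the URL carries no secret
```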

+ +

Token generation

+ +

A registered user is able to generate temporary tokens. I'm struggling trying to force tokens into being a resource. They are generated on the fly, and the operation is not idempotent, as each subsequent token request will return a different token. To make things worse, this operation is stateful: there will be a temporary table in the backend where tokens will be stored for a period of time. So what would be the REST version? It could be a PUT if the client were the one generating it, but it is the server.

+ +

TL;DR: This is so difficult. If I succeed in creating a REST version of this API, the client code will be shorter and clearer, as we will be using well-known libraries that have been extensively tested. But honestly it looks almost impossible to force the stateful into the stateless. Maybe I should just give up and provide a straightforward stateful API? How could I explain the extra time it would take to management? They will probably argue that nowadays everything is REST.

+",29765,,29765,,42398.89375,42405.01944,API design dilemma: to REST or not to REST,,5,7,8,,,CC BY-SA 3.0,, +307692,1,308509,,1/18/2016 11:30,,0,1221,"

I started working on a project where I need to read and parse an XML file from a URL. This URL needs credentials to successfully read the data from it, so I'm thinking that while reading the file there will be a modal window that accepts a username and password to authenticate against the source.

+ +

This URL is from an outside source and needs credentials to authenticate the connection so that we can successfully connect and read the XML file. Also, it is only available privately on request.

+ +

My concern is: how could I prevent a user from seeing the credentials being passed in the modal form and then processed in the background? Instead of sending the actual password, I'm thinking of something like hashing it, but by doing that the authentication would fail. Any suggestion for a better approach?

+",146464,,146464,,42387.49583,42408.78611,Secure way in authenticating credentials when Reading an xml file from the outside source,,1,3,,,,CC BY-SA 3.0,, +307698,1,,,1/18/2016 12:39,,0,5658,"

I'm studying functional programming and I'm having some question concerning array population.

+ +

Actually, I'm trying to rebuild the Array.prototype.map function, and here's what I've got:

+ +
Array.prototype.map = function(callback) {
+  let newArray = [];
+  for (let item of this) {
+    newArray.push(callback(item));
+  }
+  return newArray;
+};
+
+ +

It's working, but I'm having some trouble with the fact that I'm allocating memory on each iteration, meaning that I'm breaking functional principles (no mutating references, only creating new things).

+ +

Is that true when dealing with arrays? I mean, is it correct to loop and push into a new array?
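(For comparison, a sketch of my own, not part of the original question: the same map can be built on Array.prototype.reduce with concat, so that no visible array is ever mutated, at the cost of extra allocations per step.)

```javascript
// map built on reduce; concat returns a fresh array on every step,
// so no existing array is ever mutated.
const map = (callback, array) =>
  array.reduce((acc, item) => acc.concat([callback(item)]), []);

console.log(map(x => x * 2, [1, 2, 3])); // [ 2, 4, 6 ]
```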

+",199375,,31260,,42387.53194,42389.47778,"Functional programming, and pushing item to array",,1,3,1,42395.95764,,CC BY-SA 3.0,, +307702,1,307703,,1/18/2016 13:48,,7,8829,"

I know there are a lot of Q/As in legalese or plain English. I am wondering if anyone could provide a real life example of the text that needs to be included with the distribution of software using code with Apache License Version 2.0.

+ +

For example, if an app uses Apache Commons Math, a Java library with Apache License Version 2.0, what is the text regarding this license needed to be included in its distribution?

+",171357,,,,,42387.58333,Looking for an example of using code with Apache License Version 2.0,,1,1,3,,,CC BY-SA 3.0,, +307704,1,308459,,1/18/2016 14:00,,2,1093,"

I am working on a rather big Node.JS project with several thousand lines of code. It's not a homepage, but acts more like a configurable general purpose application server. As such it brings some parts which are useful in most projects I do.

+ +

The problem is that I easily lose overview in the core modules. So I did a bit of research and came up with an interesting structure based on the C++ header/source file structure. I want to know if this structure is good in the long run (maintainability, testability, extensibility), how it can be improved, and if there is already a (better) ""standard"" way of structuring that I did not find.

+ +

The structure has three kinds of files, where xxx is the module name and yyy is the method name.

+ +
    +
  • xxx.h.js: The ""header"" file, which contains the class and method declarations
  • +
  • xxx.yyy.c.js: The ""code"" files, which contain one method each (and possibly local helper functions)
  • +
  • index-xxx.js: The glue and main file for the module
  • +
+ +

I would like to structure all my sub-modules like this and then use a loading-mechanism to load all modules, namespace them and finally use them globally.

+ +

Here's an example:

+ +

package.json

+ +
{
+    ""name"": ""Foo"",
+    ""version"": ""1.0.0"",
+    ""description"": ""Does something in the core system"",
+    ""author"": ""Marco Alka"",
+    ""main"": ""index-foo.js""
+}
+
+ +


+ +
// index-foo.js
+'use strict';
+
+// return class
+module.exports = require('./foo.h.js');
+
+// overwrite definitions with declarations
+// this part can and most probably will be done generically by my module loader. I write it down for easier overview.
+require('./src/foo.bar.c.js');
+require('./src/foo.baz.c.js');
+
+ +


+ +
// foo.h.js
+'use strict';
+
+/**
+ * Easy-to-read interface/class
+ * No obstructive code in my way
+ */
+module.exports = class Foo {
+
+  constructor() {}
+
+  /**
+   * Put comments here
+   */
+  bar() { throw 'Not Implemented'; }
+
+  /**
+   * More comments for methods
+   * See how the method declarations+documentations nicely dominate the file?
+   */
+  baz() { throw 'Not Implemented'; }
+}
+
+ +


+ +
// src/foo.bar.c.js
+'use strict';
+
+// include header
+var h = require('../foo.h.js');
+
+// implement method Foo::bar
+h.prototype.bar = function () {
+
+  console.log('Foo.bar');
+}
+
+ +


+ +
// src/foo.baz.c.js
+'use strict';
+
+// include header
+var h = require('../foo.h.js');
+
+// implement method Foo::baz
+h.prototype.baz = function () {
+
+  console.log('Foo.baz');
+}
+
+ +


+Example on how to use the whole thing from within the root folder

+ +
'use strict';
+
+// later on a module loader will be used which loads the folder as module instead of the index file directly
+var F = require('./index-foo.js');
+
+// make object
+var foo = new F();
+
+// call method
+foo.bar();
+
+ +

Output in console: Foo.bar\n

+",211604,,1204,,42389.65347,42396.00833,How to structure big Node.JS modules,,1,13,0,,,CC BY-SA 3.0,, +307705,1,307710,,1/18/2016 14:02,,2,495,"

My dilemma is about the GIT flow.

+ +

Let's assume that I have an application that can be deployed differently (staging release, QA release, production release).

+ +

Since the application has profiles to be executed as the staging version or as the production version with the same binary (it doesn't depend on which branch the executable comes from), I think it's really useless to create three different deployable branches, but some colleagues think it is correct because we need to clarify the binary's purpose.

+ +

What seems wrong to me is that once you have checked out the staging branch, you still have to force the deploy as staging (it is still possible to select the production version). This problem can't be avoided, so I think that the three deployable branches aren't useful.

+ +

This is the state they are thinking about:

+ +
    +
  • branch master : binary + all profiles
  • +
  • branch staging : same master sources
  • +
  • branch qa : same master sources
  • +
+ +

They just want to keep the ideas ordered (I disagree).

+ +

Are both right?

+",211609,,,user40980,42387.61389,42387.64306,More than one deployable git branch,,1,7,,,,CC BY-SA 3.0,, +307711,1,,,1/18/2016 15:56,,12,3316,"

There are quite a few similar questions out there 1, 2, 3, 4, but non seems exactly the case in this question, nor do the solutions seem optimal.

+ +

This is a general OOP question, assuming polymorphism, generics, and mixins are available. The actual language to be used is OOP Javascript (Typescript), but it's the same problem in Java or C++.

+ +

I have parallel class hierarchies, that sometimes share the same behaviour (interface and implementation), but sometimes each has its own 'protected' behaviour. Illustrated like so:

+ +

+ +

This is for illustration purposes only; it isn't the actual class diagram. To read it:

+ +
    +
  • Anything in the common hierarchy (centre) is shared between both the Canvas (left) and SVG (right) hierarchies. By share I mean both interface and implementation.
  • +
  • Anything only on the left or right columns means a behaviour (methods and members) specific to that hierarchy. For example: + +
      +
    • Both the left and right hierarchies use exactly the same validation mechanisms, shown as a single method (Viewee.validate()) on the common hierarchy.
    • +
    • Only the canvas hierarchy has a method paint(). This method calls the paint method on all children.
    • +
    • The SVG hierarchy needs to override the addChild() method of Composite, but such is not the case with the canvas hierarchy.
    • +
  • +
  • The constructs from the two side hierarchies cannot be mixed. A factory ensures that.
  • +
+ +

Solution I - Tease Apart Inheritance

+ +

Fowler's Tease Apart Inheritance doesn't seem to do the job here, because there is some discrepancy between the two parallel hierarchies.

+ +

Solution II - Mixins

+ +

This is the only one I can currently think of. The two hierarchies are developed separately, but at each level the classes mix in the common classes, which are not part of a class hierarchy. Omitting the structural fork, it will look like this:

+ +

+ +

Note that each column will be in its own namespace, so class names won't conflict.

+ +

The question

+ +

Can anyone see faults with this approach? Can anyone think of a better solution?

+ +
+ +

Addendum

+ +

Here is some sample code how this is to be used. The namespace svg may be replaced with canvas:

+ +
var iView        = document.getElementById( 'view' ),
+    iKandinsky   = new svg.Kandinsky(),
+    iEpigone     = new svg.Epigone(),
+    iTonyBlair   = new svg.TonyBlair( iView, iKandinsky ),
+    iLayer       = new svg.Layer(),
+    iZoomer      = new svg.Zoomer(),
+    iFace        = new svg.Rectangle( new Rect( 20, 20, 100, 60) ),
+    iEyeL        = new svg.Rectangle( new Rect( 20, 20, 20, 20) ),
+    iEyeR        = new svg.Rectangle( new Rect( 60, 20, 20, 20) );
+
+iKandinsky.setContext( iTonyBlair.canvas.getContext( '2d' ) );
+iEpigone.setContext( iTonyBlair.canvas.getContext( '2d' ) );
+
+iFace.addChildren( iEyeL, iEyeR );
+iZoomer.setZoom( new Point( 2, 2 ) );
+iZoomer.addChild( iFace );
+iLayer.addChild( iZoomer );
+iTonyBlair.setContent( iLayer );
+
+ +

Essentially, at run time clients compose a hierarchy of Viewee subclass instances, like so:

+ +

+ +

Say all these viewees are from the canvas hierarchy; they are rendered by traversing the hierarchy and calling paint() on each viewee. If they are from the svg hierarchy, viewees know how to add themselves to the DOM, and there is no paint() traversal.

+",130377,,-1,,42878.48125,42559.90208,"Parallel hierarchies - partly same, partly different",,4,9,,,,CC BY-SA 3.0,, +307721,1,307723,,1/18/2016 17:55,,2,2464,"

In C# or Java we find that objects are stored on the heap and their reference variables are stored on the stack. But at run time, where is the class definition stored, to be used as a template for creating objects?

+",110246,,155513,,42387.77153,42387.87292,Where is the class itself stored at runtime to be used as a reference,,2,11,,,,CC BY-SA 3.0,, +307724,1,307725,,1/18/2016 18:29,,1,281,"

So I have this super sweet library I'm working on. Let's say I release it into the wild under GPL. In tandem I'm working on a commercial piece of software, and I'd like to use this library in it.

+ +

Since I made the library in the first place (even though I made it GPL), is it safe to assume I can use it in the commerical app?

+ +

Now that it's released as GPL, let's say some people make some awesome updates to it. Their work is also under GPL. Therefore I cannot include their changes in my commercial app, unless I wanted to make my commercial app GPL as well (which I don't). In that scenario, the safest thing to do would be to use my library as it was originally released, when it was 100% my own code.

+ +

Is the above correct? If so, would releasing my library under LGPL be a better idea so I can re-incorporate changes to it into my commercial app since I'll just be linking to the DLL?

+ +

Edit after Thomas' answer:

+ +

To add a bit about the dual licensing option, it does open up the world of being able to commercially sell my library. On the other hand, I really want to bring in other developers to improve it (if they want), which kind of mucks up the commercial licensing aspect. What is more important? For me, I'd rather have a better library than make a couple bucks off of it (it's not THAT impressive), so Thomas' point about going with LGPL is perfect. That allows me both to solicit assistance and to take advantage of other contributors: I can simply link to the DLL instead of having the original code baked into my solution file. The important bit is that I can use this improved code in my main commercial app, and I won't worry about commercializing a subset of it (this library).

+",151236,,151236,,42388.19653,42388.44236,I released my project under GPL. Can I still use it in my commercial app?,,2,3,,,,CC BY-SA 3.0,, +307729,1,,,1/18/2016 21:51,,6,235,"

Here is a concrete minimal example to formulate my question :

+ +

In a small ball game, you have a physics engine that moves the objects regularly:

+ +
void move(set< PhysicalObject* > objects, Duration t)
+
+ +

And that engine can use user-defined behaviour when a collision happens (using a strategy class). For instance, the default strategy is to update the objects' directions, but you can provide it with a custom strategy class that kills objects according to some specific rules.

+ +
void update_after_collision(DerivedObject* a, DerivedObject* b)
+
+ +

In order to implement new behaviour during a collision, I have some specific derived classes (they have additional attributes/methods corresponding to their lifetime, etc.)

+ +

The problem is: the engine calls the update_after_collision strategy, hence I can't use my derived classes in this strategy without using a visitor pattern or a type cast. Note that the engine only knows about the base class, not the derived ones.

+ +

Is there a way to avoid it? What is the standard approach to look at?

+ +

Is another programming paradigm better for this particular case?

+ +

EDIT : Code is written in C++

+",164491,,164491,,42387.94514,42388.07361,Software design: recommend approach to avoid slicing/type erasure here,,1,8,,,,CC BY-SA 3.0,, +307735,1,,,1/19/2016 0:00,,4,641,"

I see this in code, and sometimes hear people talk about it, for JavaScript like the following:

+ +
(function() {
+    var something;
+
+    function someFunction() { 
+        // some code here
+    }
+
+    // do something
+
+}());
+
+ +

That's an ""Immediately Invoked Function Expression"", or IIFE. I often hear people say, ""yeah, do it in a closure"", or ""do this in a closure"" in the code comment -- as if a closure is ""protecting the leak to the global space.""

+ +

But is that the correct concept? I think it is a local scope, or an anonymous local scope, that shields any local variables from leaking into the global scope. It really has nothing to do with a closure, which is a function with a scope chain. Sure, the anonymous function used for the IIFE is a closure, but that is not relevant here. If you say you want a closure, it is because you want access to the current scope (and all scopes in the scope chain). To say we are shielding local variables from the global scope ""by using a closure"" is not a correct concept, is it?

+ +

Update: In any language that doesn't have closures, such as C, you can still do the exact same thing of shielding local variables from leaking into the global space. So it is not ""closure"" that is doing the job.
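(To illustrate the point, my own sketch, not from the original post: in modern JavaScript an ES6 block achieves the same shielding with no function, and hence no closure, involved at all.)

```javascript
// IIFE: `secret` is confined to the function's local scope.
(function () {
  var secret = 42;
}());

// ES6 block scoping shields just as well, with no function in sight:
{
  let alsoSecret = 43;
}

// Neither name exists out here:
console.log(typeof secret);     // "undefined"
console.log(typeof alsoSecret); // "undefined"
```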

+",5487,,5487,,42388.01944,42388.01944,A JavaScript IIFE prevents leaks to the global space as a closure? Is that the correct concept?,,1,1,,,,CC BY-SA 3.0,, +307739,1,307742,,1/19/2016 2:12,,1,250,"

I'm re-creating a web app (Google Apps Script) that I made a couple months ago, and decided that I wanted to make it fully configurable this time around, so that I don't have to go in and change any code or settings to adapt to new changes in the data it manipulates, queries, and displays. I've also worked on some optimizations that have increased the responsiveness of the app by nearly an order of magnitude (from 2-10 seconds down to 1 second or less) through caching.

+ +

I'm almost done, and after taking a step back I've noticed my codebase is nearly 3x the size and feels much more complex than it did earlier. I no longer hard-code any values or settings, and I can swap, add, or remove columns from spreadsheets or databases without breaking the program's behavior.

+ +

However, I am not sure if it's ""worth"" it. I may not have to go in and make changes to avoid breakage, but if I need to jump back in a few months from now, I doubt I'll understand how my code works right off the bat.

+",171304,,9113,,42388.66111,42388.66319,Is increasing codebase size and complexity worth it to make fully configurable?,,3,3,,,,CC BY-SA 3.0,, +307745,1,307867,,1/19/2016 3:52,,0,114,"

I want to understand how the theoretical foundations of computation relate to real-world computers. As far as my knowledge goes, Turing machines, recursive functions, finite state machines, lambda calculus, combinatoric logic, and other models are all theoretically equivalent in how they represent computation. To me, these models seem to underlie what computation really is, at the most basic level.

+ +

Thus, my primary question is, how do these purely theoretical models of computation relate to real-world computer architecture? How do we go from these theoretical models to a real-world tool, which makes use of electrical engineering and physics, aspects which are not taken into account in the models of computation? For example, is the real-world computer an advanced physical embodiment of the Turing machine, or is this completely off the mark?

+ +

EDIT: My question is different than simply asking how computers work. I am specifically asking about the relationship between practical, real-life computers and models of computation.

+",179129,,179129,,42388.26944,42389.40972,"Foundations of computation, and relation to modern computers",,1,2,,42395.81111,,CC BY-SA 3.0,, +307750,1,307764,,1/19/2016 5:51,,3,393,"

In the book The Little Lisper, you implement a minimal Scheme in 10 Chapters that is capable of interpreting any chapter in the book.

+ +

It seems to me that you could do the same for a 'minimal subset of a typed language', making it self-bootstrapping. I'm trying to work out what is feasible.

+ +

To me the options are:

+ +
    +
  • Simply Typed Lambda Calculus
  • +
  • System F
  • +
  • Miranda
  • +
  • Haskell subset
  • +
+ +

Assumptions:

+ +
    +
  • Note that I'm ignoring GHC runtime elements like the Spineless Tagless G-machine
  • +
+ +

My question is: What is the equivalent of the Little Lisper project in Haskell?

+",13382,,,,,42388.68264,What is the equivalent of The Little Lisper project in Haskell?,,2,1,1,,,CC BY-SA 3.0,, +307753,1,,,1/19/2016 6:48,,3,677,"

Why is virtual inheritance not used as the default? Default as in chosen by the programmer, not imposed by the language.

+ +

If not, why? What are the cases where it may fail? Is there some run time overhead? Is it significant?

+ +
+ +

The motivation for this question comes from the Stairway to Heaven pattern. Its usage is not completely transparent because of the use of virtual inheritance. If virtual inheritance were the default behaviour, it could have been used transparently. I am not saying it should be virtual by default because of this pattern.
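To make the trade-off concrete, here is a small hedged sketch (hypothetical class names) of the diamond case: with virtual inheritance there is one shared base subobject, without it there are two, and the sharing is paid for with extra indirection per object (typically a vtable pointer or virtual-base offset), which is one reason the language does not make it the default:

```cpp
#include <cassert>

struct Base { int x = 0; };

// Virtual inheritance: Left and Right share a single Base subobject.
struct Left  : virtual Base {};
struct Right : virtual Base {};
struct Both  : Left, Right {};

// Plain inheritance: NBoth ends up with two separate Base subobjects.
struct NLeft  : Base {};
struct NRight : Base {};
struct NBoth  : NLeft, NRight {};

void demo() {
    Both b;
    b.x = 7;  // unambiguous: only one Base exists
    assert(b.Left::x == 7 && b.Right::x == 7);

    NBoth n;
    n.NLeft::x = 1;   // must qualify the path; the two copies can diverge
    n.NRight::x = 2;
    assert(n.NLeft::x == 1 && n.NRight::x == 2);
}
```

The shared base also constrains construction (the most-derived class must initialize the virtual base), which is part of why it is opt-in rather than the default.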

+",204027,,204027,,42389.35417,42389.35417,Is it a good practice to use virtual inheritance as default?,,0,11,1,42391.15139,,CC BY-SA 3.0,, +307763,1,,,1/19/2016 8:51,,4,18391,"

I partially found a solution to my question, but I'm not really satisfied with the result.

+ +

My application consists of ASP.NET MVC + MS SQL Server.

+ +

The case is as follows:

+ +
    +
  1. An external application saves data periodically to the SQL Database. In one tenth of a second, 20 to 100 records can be saved.
  2. +
  3. SQL should be polled from the back-end, or there should be push notifications after each save, in order to update (for example) a running total of saved items.
  4. +
  5. The back-end will push notifications to the UI (this will be done using SignalR)
  6. +
+ +

I followed this tutorial http://venkatbaggu.com/signalr-database-update-notifications-asp-net-mvc-usiing-sql-dependency/ to set it up.

+ +

SqlDependency with SQL Server Service Broker is used to get push notifications from SQL Server by firing the OnChange event. The problem is that this solution is very slow. Even with only one record saved every second in a small database, the events fire with pretty big delays.

+ +

Is there another technology to get data changes from SQL? Or maybe the solution I'm using requires some kind of optimization?

+",154994,,283761,,43882.37292,43882.37292,Best way to get push notifications to server from MS SQL Server,,2,3,1,,,CC BY-SA 4.0,, +307768,1,,,1/19/2016 9:51,,3,6028,"

From what I understand, isn't XML used for layouts and to set up how an activity looks?

+ +

My book says that XML files are converted into Java code, but then why not just write everything in Java?

+",211729,,31260,,42438.37014,43193.59653,Why use XML in Android?,,3,6,2,,,CC BY-SA 3.0,, +307771,1,,,1/19/2016 10:29,,0,51,"

I have a web app that has different types of objects: Car, Client, Lead. Each record has an owning user, and a user can, according to their permissions, act only on their own records. The database is MySQL.

+ +

I need to add a feature where a user can share a specific record with another user (or a group of users), probably with an access level (read, write, full). How can I store these permissions so that filtering stays efficient when records are queried? For example, at the moment the only condition is something like owner_id = user->id.

+ +

One way would be to have a table with: record_id, record_type, share_user_id, access_level. For a record, this data would be recalculated whenever the ""share settings"" are changed.

+ +

My fear is that, with a lot of objects (over 10K objects, over 100 users), this table would grow large. Is there any other approach?

+",195776,,,,,42388.43681,Storing user access on objects,,0,2,,,,CC BY-SA 3.0,, +307773,1,,,1/19/2016 10:44,,26,11459,"

Consider the following method:

+ +
public List<Guid> ReturnEmployeeIds(bool includeManagement = false)
+{
+
+}
+
+ +

And the following call:

+ +
var ids = ReturnEmployeeIds(true);
+
+ +

For a developer new to the system, it'd be pretty difficult to guess what true does. The first thing you'd do is hover over the method name or go to the definition (neither of which is a big task in the slightest). But, for readability's sake, does it make sense to write:

+ +
var ids = ReturnEmployeeIds(includeManagement: true);
+
+ +

Is there anywhere that, formally, discusses whether or not to explicitly specify optional parameters when the compiler doesn't need you to?

+ +

The following article discusses certain coding conventions: https://msdn.microsoft.com/en-gb/library/ff926074.aspx

+ +

Something similar to the article above would be great.

+",53014,,53014,,42388.45625,42389.69792,Specify optional parameter names even though not required?,,7,12,2,,,CC BY-SA 3.0,, +307790,1,,,1/19/2016 14:35,,11,1824,"

I wonder if it's possible to combine Visual Studio Team Services and/or TFS with a GitHub repository. We think both products have their own advantages and would like to work on one repo within our company.

+ +

The reason to use VSTS / TFS is the integration in Visual Studio for Work Items.

+",179148,,300493,,43194.60417,44159.5875,Combining GitHub and TFS / Visual Studio Team Services,,1,6,,,,CC BY-SA 3.0,, +307794,1,307796,,1/19/2016 15:37,,0,8565,"

I am using the protobuf-net library for serialization/deserialization of messages.

+ +

Due to the distributed nature of the application, some applications will have an older version of the object that is being used to serialize messages. The cases I'm concerned with are:

+ +
    +
  • adding new fields to the message object that some consumers(deserializers) won't have

  • +
  • removing fields in the message object that some consumers(deserializers) will still have.

  • +
+ +

Example an old app has this definition of Foo

+ +
class Foo
+{
+   public int Field1 {get; set;}
+}
+
+ +

A newer app has this definition of Foo

+ +
class Foo
+{
+   public int Field1 {get; set;}
+   public int Field2 {get; set;}
+}
+
+ +

I want to have a test to verify that the library can deserialize an object with missing fields and fields that it isn't aware of in case I ever need to change my serialization library.

+ +

Is there an easy way to unit test this? C# won't let the user change the fields available in an object.

+ +

Currently I have to build my receiver app, then change the object that is transported to add a new field, and then build my sending app to test this.

+ +

Is there a better way?
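One way to get such a test without rebuilding two apps is to define both versions of the contract side by side inside the test itself and round-trip raw bytes between them. The sketch below is a language-agnostic illustration of the idea using JSON and the question's hypothetical field names; with protobuf-net you could similarly keep two test-local types that share a contract but declare different members:

```python
import json

V1_FIELDS = {"Field1"}            # the old app's view of Foo
V2_FIELDS = {"Field1", "Field2"}  # the new app's view of Foo

def serialize(obj):
    return json.dumps(obj).encode()

def deserialize(raw, known_fields):
    data = json.loads(raw)
    # Tolerant reader: ignore unknown fields, default missing ones to None.
    return {f: data.get(f) for f in known_fields}

def old_reader_handles_new_payload():
    raw = serialize({"Field1": 1, "Field2": 2})  # written by the new app
    return deserialize(raw, V1_FIELDS)           # read by the old app

def new_reader_handles_old_payload():
    raw = serialize({"Field1": 1})               # written by the old app
    return deserialize(raw, V2_FIELDS)           # read by the new app
```

The point of the pattern is that both schema versions live in one test suite, so version skew is exercised on every build instead of by hand-editing two applications.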

+",69243,,69243,,42388.66111,43026.45486,How can you easily unit test deserialization to different versions of an object?,,3,6,1,,,CC BY-SA 3.0,, +307799,1,,,1/19/2016 16:09,,1,275,"

We're trying to put together a workflow for WordPress development, which turns out to be pretty complicated with lots of elements. Is there a way to streamline this or fill in the gaps?

+ +

We currently have 2 developers in the team - but essentially this can be however many, all working in the same way.

+ +

The problems

+ +

WP Stagecoach seems to be the only way to allow us to merge a database rather than just copying one over. Does anyone know any other methods?

+ +

We work on a lot of sites that need the live version to continue to run (and the data to keep being updated) whilst we develop new features (therefore both versions are adding to the posts table in the database). This brings up the problem of losing data, copying over some of it, etc.

+ +

The problem with WP Stagecoach is that although we can create a staging site, make the changes there, then merge back with the live site (that part works great), we want to be able to work locally rather than on a staging site (which HAS to be on the Stagecoach servers).

+ +

So the real question is: how can we work on a site locally and get it live without copying over live data (i.e. merging with it)? Handling the files is straightforward; it's the database that causes the problem.

+",23425,,23425,,42390.39167,42390.39167,Development workflow for a team: local to live (with GIT),,1,0,,,,CC BY-SA 3.0,, +307800,1,,,1/19/2016 16:15,,8,5560,"

I've been following some tutorials on how to design REST APIs, but I still have some big question marks. All these tutorials show resources with relatively simple hierarchies, and I would like to know how the principles used in those apply to a more complex one. Furthermore, they stay at a very high/architectural level. They barely show any relevant code, let alone the persistence layer. I'm especially concerned about database load/performance, as Gavin King said:

+ +
+

you will save yourself effort if you pay attention to the database at + all stages of development

+
+ +

Let's say my application will provide training for Companies. Companies have Departments and Offices. Departments have Employees. Employees have Skills and Courses, and a certain Level of certain Skills is required to be able to sign up for some Courses. The hierarchy is as follows:

+ +
-Companies
+  -Departments
+    -Employees
+      -PersonalInformation
+        -Address
+      -Skills (quasi-static data)
+        -Levels (quasi-static data)
+      -Courses
+        -Address
+  -Offices
+    -Address
+
+ +

Paths would be something as:

+ +
companies/1/departments/1/employees/1/courses/1
+companies/1/offices/1/employees/1/courses/1
+
+ +

Fetching a resource

+ +

So OK, when returning a company, I obviously don't return the whole hierarchy companies/1/departments/1/employees/1/courses/1 + companies/1/offices/../. I might return a list of links to the departments or the expanded departments, and have to take the same decision at this level: do I return a list of links to the department's employees or the expanded employees? That will depend on the number of departments, employees, etc.

+ +

Question 1: Is my thinking correct; is ""where to cut the hierarchy"" a typical engineering decision I need to make?

+ +

Now let's say that when asked GET companies/id, I decide to return a list of links to the department collection, and the expanded office information. My companies don't have many offices, so joining with the tables Offices and Addresses shouldn't be a big deal. Example of response:

+ +
GET /companies/1
+
+200 OK
+{
+  ""_links"":{
+    ""self"" : {
+      ""href"":""http://trainingprovider.com:8080/companies/1""
+      },
+      ""offices"": [
+            { ""href"": ""http://trainingprovider.com:8080/companies/1/offices/1""},
+            { ""href"": ""http://trainingprovider.com:8080/companies/1/offices/2""},
+            { ""href"": ""http://trainingprovider.com:8080/companies/1/offices/3""}
+      ],
+      ""departments"": [
+            { ""href"": ""http://trainingprovider.com:8080/companies/1/departments/1""},
+            { ""href"": ""http://trainingprovider.com:8080/companies/1/departments/2""},
+            { ""href"": ""http://trainingprovider.com:8080/companies/1/departments/3""}
+      ]
+  },
+  ""name"":""Acme"",
+  ""industry"":""Manufacturing"",
+  ""description"":""Some text here"",
+  ""offices"": {
+    ""_meta"":{
+      ""href"":""http://trainingprovider.com:8080/companies/1/offices""
+      // expanded offices information here
+    }
+  }
+}
+
+ +

At the code level, this implies that (using Hibernate, I'm not sure how it is with other providers, but I guess that's pretty much the same) I won't put a collection of Department as a field in my Company class, because:

+ +
    +
  • As said, I'm not loading it with Company, so I don't want to load it eagerly
  • +
  • And if I don't load it eagerly, I might as well remove it, because the persistence context will close after I load a Company and there is no point in trying to load it afterwards (LazyInitializationException).
  • +
+ +

Then, I'll put an Integer companyId in the Department class, so that I can add a department to a company.

+ +

Also, I need to get the ids of all the departments. Another hit to the DB but not a heavy one, so should be ok. The code could look like:

+ +
@Service
+@Path(""/companies"")
+public class CompanyResource {
+
+    @Autowired
+    private CompanyService companyService;
+
+    @Autowired
+    private CompanyParser companyParser;
+
+    @Path(""/{id}"")
+    @GET
+    @Consumes(MediaType.APPLICATION_JSON)
+    @Produces(MediaType.APPLICATION_JSON)
+    public Response findById(@PathParam(""id"") Integer id) {
+        Optional<Company> company = companyService.findById(id);
+        if (!company.isPresent()) {
+            throw new CompanyNotFoundException();
+        }
+        CompanyResponse companyResponse = companyParser.parse(company.get());
+        // Creates a DTO with a similar structure to Company, and recursivelly builds
+        // sub-resource DTOs such as OfficeDTO
+        Set<Integer> departmentIds = companyService.getDepartmentIds(id);
+        // ""SELECT id FROM departments WHERE companyId = id""
+        // add list of links to the response
+        return Response.ok(companyResponse).build();
+    }
+}
+
+ + + +
@Entity
+@Table(name = ""companies"")
+public class Company {
+
+    @Id
+    @GeneratedValue(strategy = GenerationType.IDENTITY)
+    private Integer id;
+
+    private String name;
+
+    private String industry;
+
+    @OneToMany(fetch = EAGER, cascade = {ALL}, orphanRemoval = true)
+    @JoinColumn(name = ""companyId_fk"", referencedColumnName = ""id"", nullable = false)
+    private Set<Office> offices = new HashSet<>();
+
+    // getters and setters
+}
+
+ + + +
@Entity
+@Table(name = ""departments"")
+public class Department {
+
+    @Id
+    @GeneratedValue(strategy = GenerationType.IDENTITY)
+    private Integer id;
+
+    private String name;
+
+    private Integer companyId;
+
+    @OneToMany(fetch = EAGER, cascade = {ALL}, orphanRemoval = true)
+    @JoinColumn(name = ""departmentId"", referencedColumnName = ""id"", nullable = false)
+    private Set<Employee> employees = new HashSet<>();
+
+    // getters and setters
+}
+
+ +

Updating a resource

+ +

For the update operation, I can expose an endpoint with PUT or POST. Since PUT replaces the whole resource representation (and I want it to stay idempotent), I can't allow partial updates with it. But then, if I want to modify the company's description field, I need to send the whole resource representation. That seems too bloated. The same goes for updating an employee's PersonalInformation: I don't think it makes sense to have to send all the Skills + Courses along with that.

+ +

Question 2: Is PUT just used for fine-grained resources?
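A common alternative to full-representation PUT is PATCH with a merge-patch style body: a small document where an absent key means 'leave unchanged' and null means 'delete'. A minimal sketch of that merge rule, in the spirit of RFC 7396:

```python
def json_merge_patch(target, patch):
    """RFC 7396-style merge: a null value deletes the key, nested objects
    merge recursively, and any other value simply replaces the old one."""
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result
```

With this, updating only the description is a PATCH to /companies/1 with body {'description': 'new text'}, and the operation stays idempotent even though it is partial.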

+ +

I've seen in the logs that, when merging an entity, Hibernate executes a bunch of SELECT queries. I guess that's just to check whether anything has changed and update whatever information is needed. The higher the entity sits in the hierarchy, the heavier and more complex the queries. But some sources advise using coarse-grained resources. So again, I'll need to check how many tables are too many, and find a compromise between resource granularity and DB query complexity.

+ +

Question 3: Is this just another ""know where to cut"" engineering decision, or am I missing something?

+ +

Question 4: Is this the right ""thinking process"" when designing a REST service and searching for a compromise between resource granularity, query complexity and network chattiness, and if not, what is?

+",183242,,,user40980,42388.68264,42395.74444,How to design a complex REST API considering DB performance?,,5,9,2,,,CC BY-SA 3.0,, +307805,1,,,1/19/2016 16:35,,1,23769,"

Our program does these kinds of operations hundreds of times for many different variables and lists, then uses them throughout the program:

+ +
variable = values[5]
+
+ +

The list values are coming from the command line, often as words, paragraphs, rows of a table or other (it's not important). Our issue is that our program, which is designed to run in a continuous loop, stops when there is an index out of range error. Since this comes from the command line, we can expect these fairly often, but cannot accept our program stopping.

+ +

Thus, the question is: how can I catch all index out of range errors in Python so that my program does not stop? Do I have to use try/except statements everywhere I make the above type of statement (which is a lot of extra work for 40k lines of code) or can I do a catch-all somehow? How the error is handled is important (I expect 99% of the time we can set it to NULL, ERROR or 0), but keeping the loop running is even more important.
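One low-churn option is to route every indexed access through a single helper instead of scattering try/except blocks through the codebase. A minimal sketch (the helper name is hypothetical):

```python
def safe_get(values, index, default=None):
    """Return values[index], or `default` when the index is out of range."""
    try:
        return values[index]
    except IndexError:
        return default
```

Then variable = values[5] becomes variable = safe_get(values, 5, 0), and only this one place decides the fallback (None, 'ERROR', 0, ...). A broad try/except around the main loop body would also keep the loop alive, at the cost of skipping the rest of that iteration when anything fails.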

+",202412,,,user22815,42388.70833,42625.67708,How can you catch all index out of range errors in python?,,2,6,1,,,CC BY-SA 3.0,, +307807,1,,,1/19/2016 16:42,,3,632,"

Recently, I've tried my hand at building a notification system, but I quickly found that notifications are tricky things, especially in the context of building a general model. The diversity of what triggers them (from clicking a button to something timing out; ""actions"" as I will call them), who is to be notified, and even what kind of message to send mandates an extremely flexible system to define and manage them.

+ +

I've seen many solutions advocating for an ""active"" approach, where the events that spawn notifications themselves create them. However, in my mind, this is the equivalent of a faulty toaster telling emergency services that it started a fire.

+ +

The metaphor applies both in terms of active intervention on the part of the actor, which decentralizes notifications (wait, which parts of the code create notifications again, and when do these happen?), as well as muddying the responsibilities of parts of the overall system, giving a lot of power to the actions which should focus on, well, acting.

+ +

On the other hand, it is completely intractable to analyse every side effect of every action and decide what notification to spawn. Furthermore, the relationship isn't necessarily one-to-one: actions could conceivably spawn multiple notifications of varying types, destined to multiple users.

+ +

However, persistent state modification — in most cases, a database, and in mine, MySQL — is the crux of many of these actions, and I've tried to leverage this to create a notification system that can be both passive and general. The gist is as follows:

+ +
    +
  1. All actions inherit from an Action class that serves solely to package the ""acting"" part and trigger the processing of notifications.
  2. +
  3. During the execution of the action, all database queries are recorded.
  4. +
  5. After the execution has ended, the notifications, each of which list their dependencies in the database (what values changed could cause them to spawn), are first filtered with a boolean expression specific to the notification on the list of tables/columns that have changed.
  6. +
  7. The set of notifications coming out of initial filtering then query the database and each determine if they should be issued.
  8. +
+ +

In case the definition of ""action"" extends beyond what is conveniently packageable, step 3 could execute at the end of the entire program execution, and all queries could be logged — the feasibility of which is determinable on a case-per-case basis.

+ +

The system is in the process of being fully implemented for testing in my project. However, I am concerned that relying solely on parsed queries is a rather Bad Idea. Notably, the initial filtering assumes to some extent that the queries are predictable up to the specificity of the notifications' dependencies (even if these are only specific up to tables changed, triggers pose a problem). 100% accuracy is a top priority, but at the same time, the efficiency of the system is paramount as well, especially if there is a prohibitive number of notification definitions.

+ +

Is my concern well-founded? Does there exist a [more] reliable way to achieve what I have outlined?

+",184318,,,,,42388.69583,Building a generalized notification system: passive vs. active,,0,7,1,,,CC BY-SA 3.0,, +307823,1,,,1/19/2016 19:51,,2,282,"

I just started working on an Android app that should display posts from my site and then a three-column list of some products. The three columns on the list contain the product names, a short description, and then some pricing information. The pricing info should always be up to date.

+ +

I am not experienced, but I am thinking of two approaches:

+ +
    +
  • Allowing the app to interact with an online database in order to get the most current data. However, this approach means I will also need another web app to manage the database, which seems complicated to me.
  • +
  • Hosting an up-to-date CSV or XML file at a certain URL. Then, each time the app synchronizes, it fetches the file and processes it. However, this seems to be a waste of bandwidth.
  • +
+ +

My questions are:

+ +
    +
  1. Of the two options, which one is viable/appropriate? And why.
  2. +
  3. Is there any other efficient way to handle this?
  4. +
+",209416,,31260,,42388.83472,42389.27917,A way to update a list an Android application displays,,1,0,,,,CC BY-SA 3.0,, +307826,1,307829,,1/19/2016 20:35,,-1,468,"

My coworkers and I are discussing this code and need a third party to settle our discussion :)

+ +
Random randNum = new Random();
+int[] A = Enumerable.Repeat(0, 1000).Select(i => randNum.Next(0, 100)).ToArray();
+int k = randNum.Next(0, A.Length);
+int[] B = new int[A.Length - k];
+
+for (int i = 0; i < B.Length; i++)
+{
+    int min = A[i];
+
+    for (int j = i + 1; j <= i + k; j++)
+    {
+        min = Math.Min(A[j], min);
+    }
+
+    B[i] = min;
+}
+
+ +

What would this be considered in big O notation?
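A quick way to settle it empirically: count the inner-loop steps. The outer loop runs n - k times and the inner loop k times, so the work is (n - k) * k, i.e. O(n*k); since k can be on the order of n/2, the worst case is O(n^2). A Python transcription of the loop above with a step counter (function name is hypothetical):

```python
def sliding_min(a, k):
    """For each i, b[i] = min(a[i], ..., a[i + k]). Returns (b, steps),
    where steps counts inner-loop iterations: exactly (len(a) - k) * k."""
    n = len(a)
    b = []
    steps = 0
    for i in range(n - k):
        m = a[i]
        for j in range(i + 1, i + k + 1):
            m = min(m, a[j])
            steps += 1
        b.append(m)
    return b, steps
```

Maximizing (n - k) * k over k gives k = n/2 and roughly n^2/4 steps, which is where the O(n^2) worst case comes from.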

+",120830,,,,,42388.87153,Big O of loop of 0...n with nested loop doing k iterations,,1,5,0,42389.29028,,CC BY-SA 3.0,, +307839,1,307845,,1/20/2016 1:24,,2,233,"

I'm writing a client application that receives a JSON response from a server. In the past I've run into situations where a developer on the server side changes the JSON response in a way that causes the client application to crash. An example of this is when the client expects that a JSON field or subobject will always be present, but a change on the server side causes the JSON to deviate from what is expected possibly returning a null value when null should never be a possible response.

+ +

It seems like the server side could always have unit tests that ensure that the JSON response fulfills the contract, but that's susceptible to human error if a developer decides to rewrite a test or simply makes a mistake in testing or misunderstands a requirement. The client side can check that the JSON response is valid, but this would need to occur at runtime, and if the server is writing proper tests, the double-checking of the server response by the client would be unnecessary.

+ +

Is there a recommended process to ensure that the contract (JSON response format) between the client and server doesn't get broken?
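A middle ground is to express the contract once and run the same check both in the server's unit tests and (cheaply) in the client as a defensive guard, so a broken contract surfaces as a clear error instead of a crash. A minimal, hypothetical schema checker:

```python
def contract_violations(payload, required):
    """`required` maps field name -> expected type. Returns a list of
    human-readable violations; an empty list means the contract holds."""
    errors = []
    for field, expected_type in required.items():
        if field not in payload:
            errors.append("missing field: " + field)
        elif not isinstance(payload[field], expected_type):
            errors.append("wrong type for field: " + field)
    return errors
```

Because isinstance(None, str) is False, an unexpected null also shows up as a type violation, which is exactly the failure mode described above.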

+",211841,,,,,42389.65833,Should a client ever test server response at runtime?,,2,3,,,,CC BY-SA 3.0,, +307841,1,,,1/20/2016 2:29,,6,702,"

I'm in the planning/design stages of a new project, and I'm having trouble coming up with a good way to handle one of the requirements:

+ +
+

Users must be able to create a new record and save it as ""Incomplete"" without filling out all required fields.

+
+ +

This is a requirement that will apply to almost all user-editable record types in the application.

+ +

Records have a number of potential states, and in any state other than ""Incomplete"" they must have all required fields filled out.

+ +

It's straightforward enough, but I'm having trouble figuring out how to store it in a way that won't sacrifice the ability to enforce data correctness on ""completed"" records.

+ +

The simplest way to handle this requirement would be to make all the fields of the business object and its database table nullable, and then have strict validation checks run against records any time they are moved from the Incomplete state. However, I dislike the idea of enforcing most of the data correctness requirements purely in application-level code rather than in the design of the database, so I'm trying to come up with some other option.

+ +

Maintaining a separate schema for incomplete records is an idea that's been discussed, but there are concerns that it will add too much overhead, since we'd then have a separate schema to extend every time something changes and we'll need to manage moving records from one to the other as they're completed.

+ +

Is there a best practice for this sort of problem? Should we just bite the bullet and make all the fields nullable?

+ +

(The technology stack we're using is ASP.NET WebAPI on top of SQL Server 2014, but it seems pretty applicable to any stack backed by a relational database)
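One way to keep correctness enforced in the database while still using nullable columns is a state-conditional CHECK constraint: fields may be NULL only while the record is Incomplete. A sketch using SQLite for brevity (the same CHECK idea exists in SQL Server; table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE records (
        id     INTEGER PRIMARY KEY,
        status TEXT NOT NULL,
        name   TEXT,  -- 'required', but only outside the Incomplete state
        CHECK (status = 'Incomplete' OR name IS NOT NULL)
    )
""")

# Incomplete records may omit required fields...
conn.execute("INSERT INTO records (status, name) VALUES ('Incomplete', NULL)")

# ...but any other state without them is rejected by the database itself.
def try_insert_complete_without_name(connection):
    try:
        connection.execute(
            "INSERT INTO records (status, name) VALUES ('Complete', NULL)")
        return False
    except sqlite3.IntegrityError:
        return True
```

This keeps the validation rule close to the data, at the cost of longer CHECK expressions as the number of required fields grows.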

+",211824,,,,,42389.41319,How can I store incomplete records but enforce data correctness?,,3,2,1,,,CC BY-SA 3.0,, +307861,1,,,1/20/2016 7:33,,29,8869,"

I've been reviewing C programming and there are just a couple of things bothering me.

+ +

Let's take this code for example:

+ +
#include <stdio.h>
+
+int main(void)
+{
+    int myArray[5] = {1, 2, 2147483648, 4, 5}; /* the literal overflows int */
+    int* ptr = myArray;
+    int i;
+    for(i=0; i<5; i++, ptr++)
+        printf(""\n Element %d holds %d at address %p"", i, myArray[i], (void*)ptr);
+    return 0;
+}
+
+ +

I know that an int can hold a maximum value of positive 2,147,483,647. So by going one over that, does it ""spill over"" to the next memory address which causes element 2 to appear as ""-2147483648"" at that address? But then that doesn't really make sense because in the output it still says that the next address holds the value 4, then 5. If the number had spilled over to the next address then wouldn't that change the value stored at that address?

+ +

I vaguely remember, from programming in MIPS assembly and stepping through programs watching addresses change values, that values assigned to those addresses would change.

+ +

Unless I am remembering incorrectly, here is another question: if the number assigned to a specific address is bigger than the type can hold (like in myArray[2]), does it not affect the values stored at the subsequent address?

+ +

Example: We have int myNum = 4 billion at address 0x10010000. Of course myNum can't store 4 billion so it appears as some negative number at that address. Despite not being able to store this large number, it has no effect on the value stored at the subsequent address of 0x10010004. Correct?

+ +

Each memory location just has enough space to hold a certain size of number/character, and if the value goes over the limit it will be represented differently (like trying to store 4 billion into an int, which will appear as a negative number), but it has no effect on the numbers/characters stored at the next address.

+ +

Sorry if I went overboard. I've been having a major brain fart all day from this.
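The intuition can be checked directly: an out-of-range value is truncated or wrapped within its own element's storage, and the neighbouring addresses are untouched. The sketch below uses unsigned int because unsigned wraparound is well-defined; signed overflow (as in int plus 4 billion) is undefined behavior in C, but it likewise does not write into the next memory location:

```c
#include <assert.h>
#include <limits.h>

/* Demonstrates that overflow wraps *within* an element's own storage and
   never touches the neighbouring array slots. */
void overflow_stays_in_place(void) {
    unsigned int a[3] = {1u, UINT_MAX, 3u};
    a[1] += 1u;                        /* wraps to 0 inside a[1]'s own bytes */
    assert(a[1] == 0u);
    assert(a[0] == 1u && a[2] == 3u);  /* neighbours untouched */
}
```

The arithmetic happens in a register sized to the type; only the low bits that fit are stored back, so there is nothing left over to 'spill' into the adjacent address.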

+",211871,,,,,42391.06458,If a number is too big does it spill over to the next memory location?,,5,11,10,,,CC BY-SA 3.0,, +307871,1,,,1/20/2016 10:37,,3,228,"

I am a noob at JVM internals.

+ +

Can someone explain what happens at the Java interpreter level when IncompatibleClassChangeError is thrown?

+ +

I am facing an issue similar to the one described here: https://bugs.openjdk.java.net/browse/JDK-4171827 but I am having a hard time understanding this comment (quoted from the same page):

+ +
+

This is actually a bug in the interpreter's handling of IncompatibleClassChangeError. The bug is that the logic that checks for invoking a method on a null object, which results in a NullPointerException, is executed before the logic that checks for IncompatibleClassChangeError. In the case of a non-static method becoming static this will cause the interpreter to read a value of the stack which isn't guaranteed to be valid. In this test case it reads a value one above the top of stack which may or may not have a valid value in it. If you modify the test slightly to push a couple nulls and then pop them off before doing the invocation, the test will fail on every vm JavaSoft has shipped. Here's the modified test...

+
+",211896,,31260,,42389.44444,42466.37569,what happens at Java interpreter level when IncompatibleClassChangeError is thrown?,,1,4,1,,,CC BY-SA 3.0,, +307872,1,307875,,1/20/2016 10:51,,-1,99,"

PROBLEM: In JavaScript I use closures to encapsulate methods inside a class. This helps build a hierarchy inside the class and minimizes the number of methods at the root, but the methods become lengthy.

+ +

RESEARCH: Applied to library classes or GUI frameworks classes (prototype-based inside).

+ +

QUESTION: Should I consider refactoring each root method into a separate class or keep them all thematically composed? Is the same applicable to other languages (Java)?

+",146870,,,,,42389.48611,Javascript Closures to Separate Class,,1,0,,42391.62222,,CC BY-SA 3.0,, +307881,1,,,1/20/2016 14:28,,-4,21258,"

Wikipedia's definition of a ""computer system"" is:

+ +
+

Computer system is define as the combination of hardware software user and data with referring to communication and procedure involved in between them.

+
+ +

Which doesn't make much sense to me. Whereas the definition of an operating system is:

+ +
+

An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.

+
+ +

Both those definitions seem to refer to the same thing: a system that manages hardware and software for controlling programs.

+ +

A list of topics doesn't help me much, either. For example, MIT's computer systems course lists the topics as:

+ +
    +
  1. virtual memory
  2. +
  3. threads
  4. +
  5. networks
  6. +
  7. atomicity
  8. +
  9. coordination of parallel activities
  10. +
  11. recovery and reliability
  12. +
  13. privacy, security, and encryption
  14. +
+ +

While MIT's operating systems course lists the topics as:

+ +
    +
  1. virtual memory
  2. +
  3. threads
  4. +
  5. context switches
  6. +
  7. kernels
  8. +
  9. interprocess communication
  10. +
  11. interrupts
  12. +
  13. system calls
  14. +
  15. coordination
  16. +
  17. interaction between software and hardware*
  18. +
+ +

*According to the only answer to this question, this is a computer systems topic?

+ +

It seems like OS is slightly more related to software and to individual computers, whereas computer systems might involve multiple computers?

+",115607,,115607,,42389.76944,43537.48819,What are the differences between operating systems and computer systems?,,2,0,1,,,CC BY-SA 3.0,, +307883,1,,,1/20/2016 14:30,,1,500,"

I'm trying to design the data schema and the restful api for a specific case.

+ +

In my system I have several users and several books. Every book can have several users as authors, and at the same time every user can be an author of several books.

+ +

Now the hard part: users want to have a list of their books and to let the system know they are the authors, but it's important that they can also keep this association private. So a user can be the author of a book but not want to make this information public.

+ +

A first solution could be to replicate the book information for every author and mark it as private or public, but this is not what I want. My idea was to use two associations in the user entity (books, public_books), where the second one is a subset of the first one.

+ +

But another problem comes with the restful api. I'm not sure what is a good way to expose this kind of data.

+ +

Something like this maybe?

+ +
/users/:id/books
+
+ +

that returns a list of books, each with an additional boolean property ispublic?

+ +

Is there a better idea to handle this situation in your opinion?

+ +

EDIT: The REST API needs to return all the books linked to the user and offer a way to distinguish between public and private books. A user can obviously see all of his own books, but only the public books of other users.

+ +

Another necessary thing is the possibility for a user to change the status of one of his books. A private book can become public and vice versa.
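A minimal sketch of that shape, in Python (the field name 'is_public' and the filtering rule are illustrative assumptions, not part of any existing API):

```python
# Hedged sketch: one way /users/:id/books could behave. The owner sees
# every book with its visibility flag; other viewers get only the public subset.

def books_for_viewer(owner_books, viewer_is_owner):
    if viewer_is_owner:
        return owner_books
    return [b for b in owner_books if b['is_public']]

books = [
    {'id': 1, 'title': 'Public book', 'is_public': True},
    {'id': 2, 'title': 'Private book', 'is_public': False},
]

print(books_for_viewer(books, viewer_is_owner=False))  # only the public book
```

Changing a book's status could then be a plain update of that flag (e.g. a PATCH on the single book resource), rather than moving the book between two collections.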

+",93229,,93229,,42389.61667,42389.625,Difficult design with multiple associations between two entities,,2,0,,,,CC BY-SA 3.0,, +307886,1,,,1/20/2016 14:51,,3,104,"

I've run into a problem trying to calculate prorating of a service. Hoping to get some help/hint from the community.

+ +

I apologize if this isn't the right place to ask this question - will appreciate help in finding the right StackExchange service.

+ +

Given:

+ +
  1. An online service that issues ""licenses"" that can be purchased for the time periods in multiples of 30 days/1 month (for simplicity, let's assume that all months have 30 days)
  2. ""Base"" subscription costs $15/month
  3. On top of base subscription there are also 20 ""feature add-ons"". Their price varies between $2 and $5 per month
  4. Customer is allowed to purchase ""base"" subscription without any add-ons
  5. Having paid the ""base"" $15, customer is allowed to add any ""feature add-on"" any time he desires without paying extra for it.
+ +

Details:

+ +

a. Item #5 above needs a bit of explanation. The idea there is to allow customers who are paying the base amount to ""try out"" any ""feature add-on"" without collecting more money from them.

+ +

Rationale:

+ +

""Customer X has already paid me $15 for 30days of service. Now, I will let him add a new ""feature add-on"" worth of $5/mo without sending him to shopping cart. However, I'll recalculate the expiry of the license based on $15+$5/mo rate. Thus the license would have fewer days until it expires"".

+ +

Difficulty:

+ +

a. The newly added $5 add-on should itself be prorated. It is also possible that the customer selects enough add-ons that the service would end up owing the customer.

+ +

b. There is a ""cut-off"" date that's determined by the amount the new add-ons cost after which moving license expiry date ""to the left"" (i.e. closer to current moment in time) would essentially make it expire immediately.

+ +

Questions:

+ +
  1. How do I calculate the updated expiry of a license based on the rationale above?
  2. How do I calculate the highest value of add-ons that can be selected without making the license expire?
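For question 1, a hedged arithmetic sketch (Python, under the 30-day-month simplification above): the unused prepaid value at the old rate is re-spent at the new rate, and the 30s cancel out of the formula.

```python
# Sketch only: linear proration under the 30-day-month assumption.
# unused value = days_left * old_rate / 30, new daily rate = new_rate / 30,
# so new days left = days_left * old_rate / new_rate.

def new_days_left(days_left, old_monthly_rate, new_monthly_rate):
    return days_left * old_monthly_rate / new_monthly_rate

# $15/mo base with 20 days left, then a $5/mo add-on is enabled:
print(new_days_left(20, 15, 20))  # 15.0 days remain at the $20/mo rate
```

Inverting the same formula gives a sketch of question 2: the license survives at least one more day as long as new_monthly_rate <= days_left * old_monthly_rate; anything above that cut-off makes it expire immediately.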
+",158235,,,user53019,42389.65278,42389.65278,Prorating a license (its expiry) dependent on feature add-ons,,1,1,,,,CC BY-SA 3.0,, +307892,1,,,1/20/2016 15:41,,1,168,"

I am using an embedded system with a web server. I really have no way to dig deep into the webserver and see how it works, so I am asking a more general question:

+ +

How do servers or back-end scripts (like PHP) handle HTML FormData objects? For example, it is easy to send form data to any web server/script processor like this

+ +

name=bob&job=accountant&age=30

+ +

With FormData I am not actually sure what a ""server"" receives. Is it just a string of text like above? Or is it a more complex object that the backend must parse?
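What actually arrives for a FormData POST is still plain text, but as a multipart/form-data body: one delimited part per field, separated by a boundary string declared in the Content-Type header, which the back end must parse (PHP does this for you and fills $_POST/$_FILES). A hedged Python sketch of the shape — the boundary token here is made up, in reality the browser generates it:

```python
# Illustrative multipart/form-data body for the two fields name=bob, job=accountant.
raw_body = (
    '--boundary123\r\n'
    'Content-Disposition: form-data; name=""name""\r\n'
    '\r\n'
    'bob\r\n'
    '--boundary123\r\n'
    'Content-Disposition: form-data; name=""job""\r\n'
    '\r\n'
    'accountant\r\n'
    '--boundary123--\r\n'
)

# A server-side parser splits on the declared boundary to recover the fields:
parts = [p for p in raw_body.split('--boundary123') if p.strip('-\r\n')]
print(len(parts))  # 2 fields
```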

+",186497,,,,,42389.65347,How is FormData actually handled by a server?,,0,5,,,,CC BY-SA 3.0,, +307898,1,,,1/20/2016 16:37,,6,274,"

I am an experienced C++ developer but new to JavaScript. I want to write an ES6 JavaScript class that maintains state.

+ +

How do I tell when state has changed?

+ +

I can think of two ways to do this. One way is to inspect an instance of the class to see if it is ""dirty"" since the last time it was marked ""clean"". I.e. mark an instance object as clean and changing any data member of the class marks it as dirty. Or be able to compare two instances of the same class. If an incoming state does not equal a known state then state has changed. I know this is not built into JavaScript.

+ +

What is the best way to do this in JavaScript? I am working in TypeScript, if it makes a difference.

+",211939,,158187,,42389.81597,42389.84792,Idiomatic way to write JavaScript class that maintains state and tells you when that state has changed,,2,2,1,,,CC BY-SA 3.0,, +307905,1,,,1/20/2016 18:45,,2,2004,"

I have a class Foo that creates instances of other classes A, B, and C in its constructor. A, B, and C share the same constructor parameter and are used in other parts of the codebase, not just in Foo. A, B, and C access an outside resource configured by the passed parameter. In unit tests, A, B, and C are mocked out with unittest.mock.patch.

+ +
class Foo(object):
+    def __init__(self, param):
+        self.a = A(param)
+        self.b = B(param)
+        self.c = C(param)
+
+foo = Foo('param')
+
+ +

Would the following be an improvement or just unnecessary code? Is passing in unittest.mock.Mock objects in tests preferred over using unittest.mock.patch? In practice, A, B, and C should always share the same parameter.

+ +
class Foo(object):
+    def __init__(self, a, b, c):
+        self.a = a
+        self.b = b
+        self.c = c
+
+    @classmethod
+    def from_param(cls, param):
+        return cls(A(param), B(param), C(param))
+
+foo = Foo.from_param('param')
+
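With the injected version, a test can pass unittest.mock.Mock objects straight into the constructor, with no patch() of module-level names. A small self-contained sketch (do_work is a made-up method name; Mock records any call):

```python
from unittest.mock import Mock

class Foo:
    # Same shape as the injected __init__ above.
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

a, b, c = Mock(), Mock(), Mock()
foo = Foo(a, b, c)            # no real A/B/C built, no external resource touched
foo.a.do_work('payload')      # stand-in for whatever Foo's methods would call
a.do_work.assert_called_once_with('payload')
```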
+",211952,,,,,42389.85486,Passing in objects to __init__ or passing a common param to and constructing objects inside __init__?,,1,2,,,,CC BY-SA 3.0,, +307906,1,307908,,1/20/2016 19:13,,2,431,"

So far I've been building applications abstracting data entity operations behind the use of Repositories. Each of them would encapsulate the domain that corresponds to its defined entity. For instance: UserRepository would encapsulate domain logic regarding the User entity, such as changePassword(old, new), createAccount(email, password), retrieveAllFriendsForUser(id) and such.

+ +

But there are cases where you need multiple resources to interact, and abstracting that logic in the repository of one entity feels smelly. For example: buying an Item would require you to create a Quotation, create a Payment, reduce the ItemQuantity from the Inventory and such; handling that operation in the ItemRepository would mean its implementation would depend on QuotationRepository, PaymentRepository, etc...

+ +

I've read before about the Business Facade and the Unit of Work patterns, but having just read them again they seem not to be aimed at this specific multiple resource interaction issue, but rather aimed at simplified access to chunks of code and handling atomic operations, respectively.

+ +

What's a design pattern for handling multiple-resource interaction? If possible, a quick code example that complements it would help.
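One common answer is an application-service layer (sometimes called a use case or interactor) sitting above the repositories: the service depends on every repository the operation needs, so no repository has to depend on another. The names below are an illustrative Python sketch, not from any particular framework:

```python
class PurchaseService:
    # Coordinates several repositories for one business operation,
    # keeping ItemRepository and friends free of cross-dependencies.
    def __init__(self, quotations, payments, inventory):
        self.quotations = quotations
        self.payments = payments
        self.inventory = inventory

    def buy_item(self, item_id, quantity, buyer):
        quotation = self.quotations.create(item_id, quantity)
        self.payments.create(buyer, quotation)
        self.inventory.decrease(item_id, quantity)
        return quotation
```

This is where Unit of Work comes back in: the service defines the multi-repository operation, and wrapping buy_item in a unit of work makes it atomic.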

+",136188,,,,,42389.84236,Design pattern for abstracting interaction between multiple resources,,2,3,,,,CC BY-SA 3.0,, +307924,1,307925,,1/21/2016 0:45,,22,647,"

I'm trying to explain to someone that the way they've written the code makes it hard to understand, and if you refactor it then it's easier to read. The style of code I'm driving at is commonly called 'idiomatic' code.

+ +

But the phrase idiomatic code brings with it baggage of moral correctness, which is not a great motivator to get people to change their coding style. A more positive way of saying this is code that follows the common style - but to a critical thinker that comes across as herd mentality reasoning.

+ +

The way I've come up with explaining this idea in a way that motivates people to change their code is:

+ +
    +
  • writing the code in such a way that it reduces the cognitive overhead of the reader (eg I can't remember if this is the first kind of vector - or the 5th kind of vector)
  • +
  • code that makes it easier to understand the intent (eg what is this vector for?)
  • +
+ +

(As an aside, I'm aware the book The Joy of Clojure, prior to its first publishing, had the draft title Idiomatic Clojure. So it would seem a reason for making code 'idiomatic', to 'bring joy' to the reader).

+ +

My question is: Is the purpose behind code being 'idiomatic' to reduce cognitive overhead?

+",13382,,40857,,42390.65833,42390.65833,Is the purpose behind code being 'idiomatic' to reduce cognitive overhead?,,3,1,2,,,CC BY-SA 3.0,, +307928,1,,,1/21/2016 1:27,,7,354,"

If I am assigned a bug, I sometimes check version control to see when it was introduced. Should I notify the developer who introduced the bug, even if I already fixed it? The advantage is that it could help them learn, but the disadvantage is that it could seem like criticism.

+",4526,,,,,42390.64375,Should I notify my colleagues when I find a bug in their code?,,5,6,0,42390.72361,,CC BY-SA 3.0,, +307934,1,307943,,1/21/2016 3:58,,4,160,"

I have this project that has several license notes (all GNU GPLv3) on the top of the source files.

+ +

They are all following the ""rule"" that the year in the copyright notice should be the year on which the file was last modified.

+ +

I wonder if I can substitute

+ +

Copyright (C) 201X Bernardo Sulzbach

+ +

by

+ +

Copyright (C) Bernardo Sulzbach

+ +

on all the licenses to alleviate the maintenance burden.

+",149436,,,,,42390.625,Using a license without a year value,,2,3,1,,,CC BY-SA 3.0,, +307936,1,,,1/21/2016 5:07,,1,91,"

PROBLEM: GUIs built on top of database entities usually consist of a creator window (pane, etc.), an editor window, and maybe a grid or other view window. Representing an entity to the user usually requires building several predefined windows, and the same for each entity. Entities and their fields are not related in similar enough ways to treat them generically.

+ +

RESEARCH: There are GUI approaches like Pivot Table that allow developing one module to represent any underlying data.

+ +

QUESTION: Is there a project design approach that helps build one module to dynamically represent any data (to the user), or is it impossible? Are there example applications, and why is this used so rarely?

+",146870,,,,,42390.21319,Universal Modules GUI Design,,0,0,,,,CC BY-SA 3.0,, +307937,1,307940,,1/21/2016 5:32,,-1,474,"

A PySide project I've been working on for a while now has started to grow to a point where it's becoming large enough that I've had to step back and re-think the overall design.

+ +

I've spent some time with both Flask and Django, so I'm familiar with the MVC design pattern conceptually, but had the advantage (or maybe disadvantage!) of so many great examples of how to structure and layout Flask/Django projects that I didn't really have to give it a lot of thought. Now that I’m trying to slot my own project into this type of design pattern for the first time, I’m finding it tough to envision how everything should be laid out. Spending some time reading about the MVC pattern I was a little pleased to see that I had partly gone down that path without consciously trying, but had made things unnecessarily complicated and coupled.

+ +

There is nothing particularly special about my program – a user interacts with a UI which retrieves data from a database for either viewing or processing.

+ +

Trying to map this to a MVC structure, I came up with:

+ +

Controller: Most of the actual application functionality – the user interacts with a widget on the UI, the controller takes this input and performs the appropriate operations – retrieve data from the Model and send it to the View to display on the UI, run operations on it and send it back to the Model to update in the database, etc. In Qt speak, widget signals are connected to Controller methods, which usually involve pulling data from the Model for processing, or for display via the View.

+ +

Model: Insert/updates/selects data from the database as directed by the Controller.

+ +

View: The actual UI work is done here – updating widget stylesheets (core function of the program) and also formatting data passed to the View by the Controller (from the Model) as specified by the Controller (taken from various options the user has selected on the UI) to then display to the user via display widgets. This is where most my individual object classes would come into play.

+ +

The main difference between what I’ve listed above and how I’ve currently been structuring my program is that the Model/View components were more or less overlapping, with the Model retrieving data but also doing most of the formatting of that data.

+ +

So, am I on the right track here?
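A framework-free sketch of that split (hypothetical names; in the real app the Controller method would be connected to a Qt signal):

```python
class Model:
    # Data access only; returns raw rows, no formatting.
    def fetch_rows(self):
        return [('widget', 3), ('gadget', 7)]   # stand-in for a DB query

class View:
    # Owns all formatting and display decisions.
    def render(self, rows):
        return '\n'.join(f'{name}: {count}' for name, count in rows)

class Controller:
    # The only piece that knows both Model and View.
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_refresh_clicked(self):           # would be hooked to a widget signal
        return self.view.render(self.model.fetch_rows())

print(Controller(Model(), View()).on_refresh_clicked())
```

The point of the sketch is the direction of the arrows: the Model never imports the View, so moving formatting out of the Model (as described above) keeps each piece independently testable.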

+",212003,,31260,,42390.23333,42390.29861,(Re)structuring a Qt Project,,1,0,,42398.85069,,CC BY-SA 3.0,, +307939,1,307941,,1/21/2016 6:20,,3,2202,"

I have been learning C++ recently, and upon reading up on pointers I had a moment of thought.

+ +

I'm still attempting to grasp the very idea of pointers, so excuse me if this doesn't make much sense.

+ +

Is it possible for a set of pointers to recursively point into one another's memory locations, so that trying to print out the value of what they point at never ends?

+ +

This should be done without ""faking"" it by creating them in a never-ending loop.

+ +

It should indefinitely point into infinity when trying to access a pointer's pointed at value. Is this a feasible ""thing"" to create?
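For what it's worth, self-reference itself is easy: in C, void *p = &p; is a pointer holding its own address. The ""infinite chain"" effect is clearer with a container, sketched here in Python for illustration, where a list can contain itself and following the reference recurses forever (Python's repr detects the cycle and prints an ellipsis instead of looping):

```python
l = []
l.append(l)        # the list now contains itself

# Every level of indexing yields the same object again, indefinitely:
assert l[0] is l and l[0][0][0] is l

print(l)           # [[...]]  -- repr() notices the cycle
```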

+",212010,,7422,,42390.31111,42390.39861,Is it possible to have pointers recursively point into themselves?,,2,2,1,,,CC BY-SA 3.0,, +307944,1,307950,,1/21/2016 7:42,,3,1473,"

A major limitation of node.js is its single-threaded execution and the fact that JS is slow at computations.

+ +

What are the advantages/drawbacks of using C++ to do the application's heavy lifting while using node as ""glue code""? Obviously there are security risks using C++, but let's assume for the sake of argument that the C++ code is well-tested and perfect.

+",212019,,,,,42390.37292,node.js C++ addons to do all major computations,,2,4,0,42399.85556,,CC BY-SA 3.0,, +307948,1,,,1/21/2016 8:54,,1,341,"

I'm writing a real-time operating system for microcontrollers in C++11 - distortos. Currently I'm thinking about a C++ framework for various peripherals. The most basic peripheral which I would like to have there are input/output pins. Because such input/output pins can both be built into the chip or be external (I2C expanders, shift registers, ...) I plan to implement interfaces for them with pure abstract classes.

+ +

These are some of mine requirements:

+ +
  • some pins can only be inputs and it shouldn't be possible to (easily) change their direction - for example if you change the direction of pin connected to push-button to ""output"" you can irreversibly damage the chip, the board or the device that is connected to the other side of the wire;
  • some pins can only be outputs and it shouldn't be possible to (easily) change their direction - if you are using a HC595 shift register then this can only be ""output"" and nothing more;
  • some pins can change their direction freely - this is required if you want to implement software I2C or software 1-wire;
+ +

Initially I have thought about doing it like this:

+ +
  1. The most basic interface class would be InputPin - it would only support ""reading"" the state of the pin.
  2. The second interface class would be OutputPin - it would inherit from InputPin (as you can read the value of a pin configured as output) and also add the ability to ""write"" the value.
  3. The third interface class - InputOutputPin - would inherit from OutputPin and also provide a function to change direction.
+ +

Such an approach has a very nice feature: I can use any derived class in a function that requires one of the bases. But actually there is one substitution that is not always correct - using InputOutputPin in a function that requires OutputPin can be a problem when the instance of InputOutputPin is currently configured to be an input...

+ +

I've tried googling around for some inspiration and it seems that there is not much to use. ARM's mbed framework has 3 completely unrelated classes (there is no inheritance there) - one for ""in"", one for ""out"" and one for ""in/out"". Most of the code that I found has only one class that is for ""in/out"". I've actually found an article about exactly the same issue that I'm facing - http://www.embeddedrelated.com/showarticle/108.php . The solution presented there doesn't suit my needs - in a multi-threaded environment the operation of changing the direction of the pin is not so simple and usually requires exclusive access to configuration registers (the change is a read-modify-write operation).

+ +

I wouldn't like to implement any solution where the functions could fail (for example return error codes when the operation is not possible) - this would really complicate the usage.

+ +

Implementing a type-conversion operator - so that conversion from InputOutputPin to OutputPin would be done in a function which also makes sure the direction is correct - is error prone, because the reference to OutputPin can be stored for later use and at that time the direction may have changed again.

+ +

Maybe I should just simplify the hierarchy - OutputPin and InputOutputPin would both inherit from InputPin (this would always be correct), so OutputPin wouldn't be a base for InputOutputPin. This loses some of the flexibility, but I'm not sure that such substitution would be required anyway.
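That simplified hierarchy can be sketched (in Python here only for brevity; the real interfaces would stay C++ pure-abstract classes) to show what the flattening buys: an InputOutputPin can no longer be substituted where a guaranteed output is required.

```python
from abc import ABC, abstractmethod

class InputPin(ABC):
    @abstractmethod
    def read(self): ...

class OutputPin(InputPin):              # always safe where InputPin is expected
    @abstractmethod
    def write(self, state): ...

class InputOutputPin(InputPin):         # note: NOT derived from OutputPin
    @abstractmethod
    def write(self, state): ...
    @abstractmethod
    def set_direction(self, output): ...

print(issubclass(InputOutputPin, OutputPin))  # False -- the unsafe substitution is gone
```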

+",158845,,31260,,42390.41319,42537.36597,"Pure abstract classes for input-, output- and bidirectional-pin of microcontroller",,1,2,,,,CC BY-SA 3.0,, +307949,1,307964,,1/21/2016 8:54,,33,25300,"

I am currently working on a system where there are Users, and each user has one or multiple roles. Is it good practice to use a List of Enum values on User? I can't think of anything better, but this doesn't feel right.

+ +
enum Role{
+  Admin = 1,
+  User = 2,
+}
+
+class User{
+   ...
+   List<Role> Roles {get;set;}
+}
+
+",212028,,,,,42391.92431,Is it a good practice to use List of Enums?,,8,14,9,,,CC BY-SA 3.0,, +307966,1,307977,,1/21/2016 13:28,,20,6292,"

I've generally been working with PHP warnings and notices off, since I work on a lot of projects where it's already in live production. Now, if I turn on the warnings and notices on these live production websites, they'll be overloaded with them.

+ +

For the projects I work on at home, locally, I usually try to eliminate ALL warnings and notices. Sometimes there's no way to avoid a notice, so I just have to live with seeing it until I decide to turn them off altogether.

+ +

In the end, I don't know whether I'm wasting my time trying to get rid of all warnings and notices, or that I'm actually doing this for the greater good.

+ +

Hence my question, is it good practice to avoid warnings and notices altogether, or does it really not matter?

+",210884,,319880,,43483.62986,43483.62986,Is it good practice to avoid warnings and notices?,,3,9,3,,,CC BY-SA 3.0,, +307981,1,,,1/21/2016 18:07,,0,1068,"

The primary use for Node.JS is of course as a full server stack, and I've used it in that manner to great success.

+ +

However, a number of useful, interesting NPM packages deal with things like transpiling a styling language, adding typing information to typeless JavaScript, running JavaScript unit tests, even having a ""piped"" build system like Gulp.

+ +

Currently, I work on a project that uses Tomcat, and is primarily written in Java. Java and Ant utilities have felt somewhat limiting in terms of interacting with our JavaScript files when building/testing, so I'm looking into the possibility of adding a dependency on NodeJS, and setting up build dependencies.

+ +

Why would I do this?

+ +

I distinctly want to avoid adding dependencies ""because they're cool"". I only want to add Node packages for scenarios where it is infeasible, or even deprecated, to solve particular problems using Java-based programs (eg, Ant).

+ +

One example: Our JavaScript widget library, Dojo, has stated they will not support doing Dojo builds via Java for much longer, having largely moved to Node builds. Additionally, some CSS compression toolkits that run on Java have stopped being maintained in favor of ones like LESS. There are also developer tools, like JavaScript unit tests or TypeScript types, that we'd like to consider to make development more reliable. Using a dependency manager might also help us engineer a solution where we don't have Dojo's entire source code committed to our repository.

+ +

The question: Proper project layout and potential pitfalls

+ +

What would be some reliable practices to follow if I want to use Node as a build/test/development dependency, but not require it for the final, packaged app (And, pros/cons of different approaches)? In this scenario, does it actually make sense to include a package.json in the root project hierarchy? Should Node-related build tasks be invoked via Ant, or does it make sense to require separate commandline actions to invoke them? Under what scenarios should developers/build machines run ""npm install"", and should it always be automated?

+ +

If, based on experience, people find that the answers to those questions don't matter, that is also a helpful answer - but I'm asking for experience to hopefully avoid some design pitfalls.

+",90889,,,,,44192.37569,Best practices for adding Node.JS build features to a non-Node project,,2,4,1,,,CC BY-SA 3.0,, +307982,1,307986,,1/21/2016 18:29,,3,2025,"

I'm currently trying to learn C# and want to enhance my understanding of Object Oriented Programming (OOP). I'm hoping to accomplish this by experimenting with a small program that keeps track of my school assignments/information. I've constructed classes relating to my Institutions, Semesters (Terms), Courses, and Assignments. My question is whether or not I have created and implemented my classes correctly with regard to the information they represent. My thinking suggests that my classes should not inherit from one another in a parent/child fashion because they are physically unrelated. However, accessing objects through multilevel lists seems impractical (I think). Is there a better way of doing this that doesn't force me to access objects by iterating through collections, but still follows OOP best practices?

+ +
+ +

Program Source Code

+ +

Entry Point / Main

+ +
static void Main(string[] args)
+{
+    List<Institution> Institutions = new List<Institution>();
+    Institutions.Add(new Institution(""My college""));
+    Institutions[0].AddNewTerm(""2016"", ""Spring"");
+    Institutions[0].Terms[0].AddNewCourse(""Math 210"");
+    Institutions[0].Terms[0].Courses[0].AddNewAssignment(""Chapter 1"");
+    MessageBox.Show(Institutions[0].Terms[0].Courses[0].Assignments[0].Name);
+}
+
+ +

Class List

+ +

Institution

+ +
class Institution
+{
+    public string Name { get; set; }
+    public List<Term> Terms { get; set; }
+
+    public Institution(string UP_Name)
+    {
+        this.Name = UP_Name;
+        this.Terms = new List<Term>();
+    }
+
+    public void AddNewTerm(string NewTermYear, string NewTermSeason)
+    {
+        Terms.Add(new Term(NewTermSeason, NewTermYear)); // match the Term(season, year) parameter order
+    }
+}
+
+ +

Term

+ +
class Term
+{
+    public string Name { get; set; }
+    public string Year { get; set; }
+    public string Season { get; set; }
+    public List<Course> Courses { get; set; }
+
+    public Term(string NewSeason, string NewYear)
+    {
+        this.Season = NewSeason;
+        this.Year = NewYear;
+        this.Courses = new List<Course>();
+        this.Name = (this.Season + "" "" + this.Year);
+
+    }
+
+    public void AddNewCourse(string NewCourseName)
+    {
+        this.Courses.Add(new Course(NewCourseName));
+    }
+}
+
+ +

Course

+ +
class Course
+{
+    public string Name { get; set; }
+
+    public List<Assignment> Assignments { get; set; }
+
+    public Course(string UP_Name)
+    {
+        this.Name = UP_Name;
+        this.Assignments = new List<Assignment>();
+    }
+
+    public void AddNewAssignment(string NewAssignmentName)
+    {
+        Assignments.Add(new Assignment(NewAssignmentName));
+    }
+
+}
+
+ +

Assignment

+ +
class Assignment
+{
+    public string Name { get; set; }
+
+    public Assignment(string UP_Name)
+    {
+        this.Name = UP_Name;
+    }
+}
+
+",185811,,,user53019,42390.77292,42392.82639,Handling Multiple Collections in C#,,4,7,,,,CC BY-SA 3.0,, +307984,1,,,1/21/2016 18:43,,1,118,"

My application will have at least two types of users: clients and companies; these are the types of profiles. Because each type of profile requires different additional information, a table will be required for each type of profile.

+ +

How should I create my relationships so a user has zero or one client profile, or zero or one company profile, but in a way that he must have one and only one of the two types of profiles?

+ +

E.g. John is a client, so he has a row in the Users table and in UsersProfiles. He can't have a row in CompaniesProfiles at the same time.

+",212121,,,,,42480.86042,User account model with two or three possible profile,,1,3,,,,CC BY-SA 3.0,, +307988,1,308031,,1/21/2016 19:00,,0,94,"
  • I will use C# here as an example, but my question is about any language.
  • My question is from a framework-to-compiler perspective (i.e. the solution can be implemented inside the compiler).
+ +

Consider such code:

+ +
if (sequence.Any()) ...
+
+ +

and let's say the sequence ""holds"" 1 million items. This condition will nevertheless be executed pretty fast (with a single check -- either the iterator moves or not). Now, a slight change:

+ +
if (sequence.Count() > 1) ...
+
+ +

now the sequence will be iterated over all 1 million items, and only after that will the result be ""oh yes, we have at least 1 item"".

+ +

Question: how could this be optimized so that the excessive iterations are not made? On the other hand, I would like to avoid polluting the framework with a plethora of ""optimized"" methods -- CountAtMost, CountAtLeast -- and so on.

+ +

Of course Count is just an example; other aggregation queries come to mind -- consider the expression (wrong example, I am keeping it for historical reasons) collection.Sum() > 1000.

+ +

I am not asking about C#-specific optimizations, the question is completely general -- iterable sequences are present in a lot of languages. Iterable sequences can come from generators as well, so the question is how to optimize aggregation query with external (comparing to query) arguments.

+",66354,,66354,,42390.84861,42391.40486,How to optimize iterable queries with external arguments,,3,2,,,,CC BY-SA 3.0,, +307993,1,307996,,1/21/2016 20:18,,6,23342,"

We learned in Intro to Programming that if you divide two integers, you always get an integer. To fix the problem, make at least one of those integers a float.

+ +

Why doesn't the compiler understand I want the result to be a decimal number?
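The short answer is that the compiler follows the declared operand types rather than guessing intent: int divided by int is defined to produce an int. Python 3 makes the two behaviours easy to see because it gives each its own operator (in C-family languages, / between two ints behaves like // below):

```python
print(7 // 2)          # 3   -- integer (truncating) division
print(7 / 2)           # 3.5 -- true division
print(7.0 / 2)         # 3.5 -- the classic ""make one operand a float"" fix
```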

+",164475,,,,,42435.8625,Why does integer division result in an integer?,,4,12,2,,,CC BY-SA 3.0,, +307994,1,,,1/21/2016 20:21,,2,4942,"

Let's say we have the famous Joshua Bloch Nutrition Builder and we want to change it so it becomes a bit like a dynamic builder which restricts the visibility of setters and probably uses generics:

+ +
 public class NutritionFacts {
+private final int servingSize;
+private final int servings;
+private final int calories;
+private final int fat;
+private final int sodium;
+private final int carbohydrate;
+
+public static class Builder {
+    // Required parameters
+    private final int servingSize;
+    private final int servings;
+
+    // Optional parameters - initialized to default values
+    private int calories      = 0;
+    private int fat           = 0;
+    private int carbohydrate  = 0;
+    private int sodium        = 0;
+
+    public Builder(int servingSize, int servings) {
+        this.servingSize = servingSize;
+        this.servings    = servings;
+    }
+
+    public Builder calories(int val)
+        { calories = val;      return this; }
+    public Builder fat(int val)
+        { fat = val;           return this; }
+    public Builder carbohydrate(int val)
+        { carbohydrate = val;  return this; }
+    public Builder sodium(int val)
+        { sodium = val;        return this; }
+
+    public NutritionFacts build() {
+        return new NutritionFacts(this);
+    }
+}
+
+private NutritionFacts(Builder builder) {
+    servingSize  = builder.servingSize;
+    servings     = builder.servings;
+    calories     = builder.calories;
+    fat          = builder.fat;
+    sodium       = builder.sodium;
+    carbohydrate = builder.carbohydrate;
+}
+}
+

+ +

Now I would like this builder to build two types of NutritionFacts. So let's say, instead of:

+ +
NutritionFacts cocaCola = new NutritionFacts.Builder(240, 8).
+  calories(100).sodium(35).carbohydrate(27).build();
+
+ +

I would like to have something like that:

+ +
NutritionFactsA a = new NutritionFacts.Builder<NutritionFactsA>...(240, 8).
+  calories(100).build(); //.sodium(35).carbohydrate(27) - these are not possible to be set, when I write NutritionFactsA I cannot see these parameters
+
+NutritionFactsB b = new NutritionFacts.Builder<NutritionFactsB>(240, 8).
+  .sodium(35).build(); //here I have set only sodium value, I cannot set anything else - I dont even see other setters...
+
+ +

I don't know if I expressed myself clearly and if I wrote the generics in the proper places...

+ +

Please advise, show some details; I am only beginning with generics and love it!

+",186681,,,,,42393.55694,Intelligent builder pattern - different parameters depending on type - generics?,,1,0,2,,,CC BY-SA 3.0,, +307998,1,308001,,1/21/2016 20:47,,4,310,"

I had a discussion with my team recently and heard a suggestion to deploy multiple releases, such as a newest version and an old version. But I am wondering what reason a customer would have to use the older version. If a bug was introduced in the older version, we would have to provide additional support for it.

+ +

I see there may be benefits to using an older version. Maybe I am wrong, but what would be the reason people want the older releases? And if the trade-off isn't worth it (support vs. demand), why would we plan to support multiple releases?

+",212132,,,user22815,42390.97778,42391.85,Is there a reason for support multiple different releases?,,4,3,,,,CC BY-SA 3.0,, +307999,1,308003,,1/21/2016 20:50,,1,2016,"

I have a container class similar to the one below (with much of the logic omitted):

+ +
class Container<T>
+{
    Dictionary<T, TWrapped> contains = new Dictionary<T, TWrapped>();
+
+    public void Add(T item)
+    {
+        TWrapped wrappedItem = new TWrapped(item);
+        contains[item] = wrappedItem;
+
+        // logic involving wrappedItem...
+    }
+
+    public bool Contains(T item)
+    {
+        return contains.ContainsKey(item);
+    }
+
+    // Item has been updated, so Sort called on container
+    public void Sort(T item)
+    {
+        TWrapped wrappedItem;
+        if(contains.TryGetValue(item, out wrappedItem))
+        {
+            // sort the wrappedItem within the container...
+        }
+    }
+
+}
+
+ +

Is there some way of either telling users of Container that T will be used as a dictionary key or, equivalently, of forcing users to have overridden GetHashCode() and Equals() ?

+ +

Further Info

+ +

My actual container class implements a heap implicitly with an array. To do so, items are wrapped with a class that contains index information for finding parents and children within the array. This wrapper class is private within the heap to avoid unintended tampering with the index values.

+ +

I now wish to support re-sorting of specific items in the heap; however, to map from the user-supplied item to its wrapped representation in the array, I need a dictionary to store such a mapping... hence the GetHashCode()/Equals() woes.

+ +

Any criticisms of the design would be welcome!
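For what it's worth, the usual C# way to surface this requirement is a where T : IEquatable&lt;T&gt; constraint on Container&lt;T&gt;; it forces a strongly-typed Equals, although it notably cannot force a matching GetHashCode override, so accepting an IEqualityComparer&lt;T&gt; in the constructor (as Dictionary itself does) is the other common option. The underlying contract is the one every hash-based container imposes; a sketch in Python for illustration (the Point classes here are hypothetical, not from the question):

```python
# Without value-based __eq__/__hash__, dict lookups fall back to object
# identity: exactly the silent failure a Container<T> user would hit.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

d = {Point(1, 2): 'wrapped'}
print(Point(1, 2) in d)  # False: equal-looking key, different identity

class EqPoint(Point):
    def __eq__(self, other):
        return isinstance(other, EqPoint) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Must be consistent with __eq__ for dict/set lookups to work.
        return hash((self.x, self.y))

d2 = {EqPoint(1, 2): 'wrapped'}
print(EqPoint(1, 2) in d2)  # True: value-based equality and hashing
```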

+",171526,,,,,42390.88472,Can I enforce the overriding of GetHashCode() and Equals() methods for users of a generic container class?,,1,5,,,,CC BY-SA 3.0,, +308000,1,,,1/21/2016 20:54,,7,1995,"

I have my main project set up very simply in TFS. I've recently begun making a few of my project components generic enough that I can sell them to other organizations using the software for which I made my custom components.

+ +

I want to pull these components out into their own branch so I can make minor tweaks and so that I can easily just zip up the relevant files and send them to people who purchase them, but I don't want to have all the surrounding files in that branch.

+ +

My project is at $/Project/Main/. The files I need to merge into their own branch are spread out.

+ +
$/Project/Main/web/less/modules/myCustomModule.less
+$/Project/Main/web/css/myCustomModule.css
+$/Project/Main/web/scripts/myCustomModule.js
+$/Project/Main/web/UserControls/custom/myCustomModule.ascx
+$/Project/Main/web/UserControls/custom/myCustomModule.ascx.cs
+
+ +

I need to copy the directory structure after .../web/ into $/Project/NinjaModules/ but I only want those five files.

+ +
tf merge $/Project/Main/web/less/modules/myCustomModule.less $/Project/NinjaModules/less/modules/myCustomModule.less
+...
+
+ +

This works fine until I want to merge my changes from $/Project/Main to $/Project/NinjaModules, since I then have to re-run all my commands individually.

+ +

Is there a correct way to structure this?

+ +

Note: I could tf merge all the directories under $/Project/Main and then tf delete all the files I don't want, but that seems like a rather heavy method to handle it, and any time I add a new file to $/Project/Main and then do my merge, I'd have to delete the new files in $/Project/NinjaModules.

+",169692,,,,,42505.65833,How can I organize my TFS structure such that a one-off project that pulls files from another project can be easily managed?,,1,2,0,,,CC BY-SA 3.0,, +308025,1,308026,,1/22/2016 2:49,,1,479,"

I am building an API that will connect to multiple databases. I am using Emberjs on the front end and the default rest adapter seems to prefer shallow routes. Say, I have the following routes

+ +
api/databases
+api/databases/1/employees
+
+ +

At this point, my routes are already two levels deep. Any further nesting makes them hard to deal with:

+ +
api/databases/1/employees/2/tasks
+
+ +

In the above URL, I'd like to be able to eliminate the 'databases/1/' portion, which would make the rest of the resources easy to access within one or two levels, but I am not sure how else to pass in the information about the database. I thought about using query parameters, but that seems like bad design:

+ +
api/employees/2/tasks?database=1
+
+ +

Any feedback would be appreciated.

+",,user212161,,,,42391.16389,Design - shallow routes for an API,,1,5,,,,CC BY-SA 3.0,, +308027,1,,,1/22/2016 5:06,,2,163,"

Let's imagine I have the following data from a database.

+ +
======================================
+id    |  Title     |   parentId
+======================================
+100      Asia          NULL
+--------------------------------------
+101      India         100
+--------------------------------------
+102      Tamil Nadu    101
+--------------------------------------
+103      Chennai       102
+--------------------------------------
+104      Karnataka     101
+--------------------------------------
+105      Bengalaru     104
+--------------------------------------
+
+ +

This can be rendered as the JSON below. Let's call this the first format.

+ +
{
+  ""children"": [{
+    ""id"": 100,
+    ""title"": ""Asia"",
+    ""parentId"": """"
+  }, {
+    ""id"": 101,
+    ""title"": ""India"",
+    ""parentId"": 100
+  }, {
+    ""id"": 102,
+    ""title"": ""Tamil Nadu"",
+    ""parentId"": 101
+  }, {
+    ""id"": 104,
+    ""title"": ""Karnataka"",
+    ""parentId"": 101
+  }, {
+    ""id"": 103,
+    ""title"": ""Chennai"",
+    ""parentId"": 102
+  }, {
+    ""id"": 105,
+    ""title"": ""Bengalaru"",
+    ""parentId"": 104
+  }]
+}
+
+ +

I would like to have the JSON format below, which is good for a recursive drill-down menu. Let's call this the second format.

+ +
{
+  ""children"": [{
+    ""id"": 100,
+    ""title"": ""Asia"",
+    ""children"": [{
+      ""title"": ""India"",
+      ""children"": [{
+        ""title"": ""Tamil Nadu"",
+        ""children"":[{
+          ""title"": ""Chennai""
+        }]
+      }, {
+        ""title"": ""Karnataka"",
+        ""children"":[{
+          ""title"": ""Bengaluru""
+        }]
+      }]
+    }, {
+      ""title"": ""China""
+    }]
+  }]
+}
+
+ +

My question: which of these is the REST standard / best practice? Should the API return the first format, which the client then transforms in its JavaScript model, or should it return the second format directly?
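Whichever way you decide, converting the first (flat) format into the second on the client is only a few lines. A sketch, using Python for illustration (an SPA would do this in its JavaScript model, but the algorithm is identical; rows with an empty or missing parentId are treated as roots):

```python
def build_tree(rows):
    # One node per row, each starting with an empty children list.
    nodes = {r['id']: {'title': r['title'], 'children': []} for r in rows}
    root = {'children': []}
    for r in rows:
        # An unknown or empty parentId means the node hangs off the root.
        parent = nodes.get(r.get('parentId'), root)
        parent['children'].append(nodes[r['id']])
    return root

flat = [
    {'id': 100, 'title': 'Asia', 'parentId': ''},
    {'id': 101, 'title': 'India', 'parentId': 100},
    {'id': 102, 'title': 'Tamil Nadu', 'parentId': 101},
    {'id': 103, 'title': 'Chennai', 'parentId': 102},
]
tree = build_tree(flat)
print(tree['children'][0]['title'])  # Asia
```

Because all nodes are indexed before linking, the input rows do not need to be in parent-before-child order.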

+",109292,,,user40980,42394.70556,42394.70556,REST Standard for changing SPA Model after fetched from REST API,,1,0,,,,CC BY-SA 3.0,, +308035,1,,,1/22/2016 7:57,,3,76,"

We currently have a .Net MVC web app with a SQL server back-end database and two web services that perform periodically some computing tasks in the database.

+ +

I am thinking of packaging this app for the cloud, where a client would ""instantiate"" it as a whole package. This process would automatically allocate the pieces needed to run the whole thing, i.e. the site, the database, and the other two services.

+ +

I've been reading quite a bit about Azure, and I am having a very hard time getting my head around the whole Azure development, testing, and deployment process, and then upgrading and applying changes to existing deployments. I am used to having control of all the pieces. First, I like to have everything local on my workstation. We use Git, and we have our own self-managed repository. I find it weird to have a dev environment on a remote cloud computer. I think source code is one of the most valuable assets of a company, along with the data (the data more so).

+ +

In the current environment we build the apps manually (I know it's far from being ideal, we will automate this process and I am not worried about it for now) and we deploy by simply copying the new version of top of the old one. We also do backups, recompile database stored procedures and apply scripts that massage data, change tables structures or add indexes and so on. There is some downtime allocated while the upgrades are done. Before we deploy to production we deploy to the integration and test environments.

+ +

How does all this translate to an Azure architecture?

+ +

Here is my mental model:

+ +
    +
  • Development and integration are done locally.
  • +
  • The software is packaged locally as a SaaS offering and uploaded to the cloud. I assume the cloud will take a manifest that includes all the resources the app needs and will allocate them when the app is created the first time in the cloud. This process will have to be tested in the cloud by us.
  • +
  • As for upgrades, they would have to be applied by the client to existing deployments. I guess another option would be to create a new instance and migrate the data from the old one. I see the upgrades as packages that can be dropped onto existing deployments and would perform certain tasks, much like setup software that performs an upgrade on an older version. The upgrades would have to be tested as well in the cloud.
  • +
+ +

I kind of assumed that the app can have some minor downtime. I realize that other apps might not have this luxury, but I don't want to go there now.

+ +

Anyway, is this doable in the Azure world? Should anything be done differently? And how would one charge in this model?

+ +

Thanks

+ +

An update: I was thinking more about it and I guess another model would be to build a monolithic app that would allow users to sign up for the service (much like an email app) and it would allocate resources accordingly. I guess in this case the charging model is easier. The downside is that the app would have to be designed to be multi-tenant and it would have its own set of challenges. I am still fuzzy about how the whole development->test->production cycle would work in this case.

+",134875,,134875,,42391.34792,42391.55417,Questions about developing for azure,,1,0,,,,CC BY-SA 3.0,, +308036,1,308038,,1/22/2016 9:54,,26,3873,"

Is it a good practice to replace constants used outside of classes by getters?

+ +

As an example, is it better to use if User.getRole().getCode() == Role.CODE_ADMIN or if User.getRole().isCodeAdmin()?

+ +

That would lead to this class:

+ +
class Role {
+    constant CODE_ADMIN = ""admin""
+    constant CODE_USER = ""user""
+
+    private code
+
+    getCode() {
+       return this.code
+    }
+
+    isCodeAdmin () {
+       return this.code == Role.CODE_ADMIN
+    }
+
+    isCodeUser () {
+       return this.code == Role.CODE_USER
+    }
+}
+
+",174167,,591,,42391.76667,42399.44167,Is it a good practice to avoid constants by using getters?,,5,4,3,,,CC BY-SA 3.0,, +308040,1,308042,,1/22/2016 11:11,,1,177,"

I have the following code:

+ +
bool foo(int index)
+{
+    // Check A.
+    // Critical: Check A must precede Check B below.
+    if (index == 1)
+    {
+        return true;
+    }
+
+    // Check B.
+    if (index - 2 < 0)
+    {
+        return false;
+    }
+
+    return true;
+}
+
+ +

The code is a simplified representation of a real-life scenario in which I am checking the validity of punctuation marks in a string.

+ +

My question: Is there a construct in any language that would guarantee that the order of the two if statements is maintained as-is? (Without having to place a comment in the code, as I have done, and hope that it is heeded.)

+ +

For the case where index is 1, if Check B is moved before Check A, Check A will never be reached and foo(1) will return false, which is wrong.

+ +

Again, my concern is for the maintenance of the code by future programmers. There are about 10 if statements in the code, one after another, and their order is important.

+ +

EDIT 1:

+ +

I am an experienced developer, and I am really asking whether there are any new developments in languages that would allow for what I am asking above. I am sorry that I did not make this clear.

+ +

EDIT 2:

+ +

In response to comments suggesting index < 2 instead of index - 2 < 0: I don't agree. index - 2 indicates that I am interested if there is an item two locations before the current index, while index < 2 does not convey the same information. (Of course, this is my opinion!)
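One language-agnostic option is to move the order out of the control flow and into data: express each check as a function that either decides or abstains, and run them from an explicitly ordered list. Reordering then means editing one clearly ordered structure rather than shuffling statements. A sketch in Python (the check names are hypothetical):

```python
def check_a(index):
    # Decide True for index 1, otherwise abstain (None = no decision).
    return True if index == 1 else None

def check_b(index):
    # Decide False when there is no item two positions back.
    return False if index - 2 < 0 else None

# The required ordering is now explicit, reviewable data.
CHECKS = [check_a, check_b]

def foo(index):
    for check in CHECKS:
        result = check(index)
        if result is not None:
            return result
    return True

print(foo(1), foo(0), foo(2))  # True False True
```

This preserves the original semantics while making the ordering constraint visible in one place.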

+",88438,,88438,,42391.51667,42391.57847,Enforcing order for two consecutive statements,,3,13,1,,,CC BY-SA 3.0,, +308052,1,308130,,1/22/2016 12:55,,10,19378,"

I am currently planing an application that will be used in a company. It is required to build a Desktop Application. At the moment they are not sure if the application should be available on mobile or browser in the near future.

+ +

I have two possibilities:

+ +
    +
  1. Access the database directly from the Desktop Application

  2. +
  3. Create a REST API and connect to this

  4. +
+ +

Can I use a REST API if the application stays just a Desktop Application within the company? I know that it's possible, but is it ""the right"" way? (Best practices)

+ +
+ +

There are some (possible) advantages and disadvantages for creating directly a REST API:

+ +

Disadvantages:

+ +
    +
  • Takes longer to develop
  • +
  • More complex
  • +
  • The server does more work
  • +
  • Security issues? (struck out; security is now listed under the advantages)
  • +
  • Slower? (The server and the Desktop Application are on the same Network)
  • +
+ +

Advantages:

+ +
    +
  • Migrating to other platforms is easier
  • +
  • The business logic is also needed when calling the database directly, so it won't take much longer to develop
  • +
  • Same goes for complexity
  • +
  • Security (as mentioned by tkausl in the comments)
  • +
  • Maintainability (as mentioned by WindRaven in the comments)
  • +
+",212220,,211059,,42901.58264,42901.58264,REST API vs directly DB calls in Desktop Application,,3,9,8,,,CC BY-SA 3.0,, +308056,1,,,1/22/2016 14:11,,0,704,"

I've been discussing whether ""logging in"" is a use case, and I've been wondering: is there some ""official"" standard in which one can look up the definition of a use case, or what the point of a use case specification is?

+",22402,,,,,42391.80764,Is there a standard for use case specifications?,,1,7,,,,CC BY-SA 3.0,, +308061,1,,,1/22/2016 14:52,,1,55,"

The general version of the question is above. In a little more detail: I am using Django REST Framework, but I am happy for answers to deal with the problem in the abstract.

+ +

So, I have data with ~200 countries, ~10,000 regions (the countries subdivided) and ~600 extra small districts in the UK.

+ +

My plan is currently to have the client send a request, with its current zoomlevel and position, then on the server:

+ +
    +
  1. filter objects to correct zoomlevel
  2. +
  3. filter objects to ones that overlap with viewing rect
  4. +
  5. insert ""custom data"" into the features before returning
  6. +
+ +

Alternative plan for 3. is to pass back the ""custom data"" in a separate request.

+ +

A little more detail on what the custom data is: votes. There will be a large number of voteables, and even more votes. My plan is to precalculate/cache the vote totals for each question and area there is a vote in, to make returning the data as fast as possible.

+ +

Still, I'm not too sure of a sensible way to minimise the number of DB Queries - perhaps get the geo areas that match the view, and do one query to get all pre-calculated ""custom data"" that matches.

+ +

Any thoughts on a good way to deal with this kind of situation are very welcome. There may well be bits I have not thought about much (like avoiding re-sending data more often than needed if the user keeps scrolling around).

+",211095,,211095,,42391.64097,42391.64097,"How do I minimize the number of database queries in a GeoJson API (of countries, and smaller areas) with custom data?",,0,7,,,,CC BY-SA 3.0,, +308062,1,308068,,1/22/2016 15:22,,2,727,"

In languages that support currying, I can't think of many cases where using a tuple as function input parameters would be better than breaking the tuple apart into multiple parameters, which then allows you to enjoy the full power of currying. In which situations is keeping a tuple as function parameter better than breaking it apart? (Apart from the situation where the input originally was already in tuple form) Does such situation occur a lot in production code?

+",106675,,106675,,42391.65,42391.66597,Why use tuples as function parameters in languages that support currying?,,1,5,1,,,CC BY-SA 3.0,, +308071,1,308096,,1/22/2016 16:14,,4,1909,"

I am developing an app where I need to divide the entire world map into fixed-size square blocks. For simplicity, think of it as the problem of dividing Google Maps into fixed-size blocks. I can choose any arbitrary point on the map, and my app needs to create a block of fixed size around that point.

+ +

The primary concern is that when I choose a point and define a block around it, there should never be an overlap between this block and any previously defined blocks. In other words, any point can belong to one and only one block.

+ +

What is the best algorithm I can use to do this? I am guessing this is a problem that has been solved before and doesn't require me to reinvent the wheel.

+ +

Right now, when the user selects a point, I get its latitude and longitude and define a square block around it. While doing so, I check whether the new block would overlap with any other blocks around it.

+ +

Any help would be greatly appreciated.

+",212253,,,,,42422.00694,Divide the world map into fixed sized blocks,,1,7,,,,CC BY-SA 3.0,, +308077,1,,,1/22/2016 17:18,,9,472,"

When comparing floating point values for equality, there are two different approaches:

+
    +
  • NaN not being equal to itself, which matches the IEEE 754 specification.
  • +
  • NaN being equal to itself, which provides the mathematical property of Reflexivity which is essential to the definition of an Equivalence relation
  • +
+

The built in IEEE floating point types in C# (float and double) follow IEEE semantics for == and != (and the relational operators like <) but ensure reflexivity for object.Equals, IEquatable<T>.Equals (and CompareTo).

+

Now consider a library that provides vector structs on top of float/double. Such a vector type would overload ==/!= and override object.Equals/IEquatable<T>.Equals.

+

What everybody agrees on is that ==/!= should follow IEEE semantics. The question is: should such a library implement the Equals method (which is separate from the equality operators) in a way that is reflexive, or in a way that matches the IEEE semantics?

+

Arguments for using IEEE semantics for Equals:

+
    +
  • It follows IEEE 754

    +
  • +
  • It's (possibly much) faster because it can take advantage of SIMD instructions

    +

    I've asked a separate question on stackoverflow about how you'd express reflexive equality using SIMD instructions and their performance impact: SIMD instructions for floating point equality comparison

    +

    Update: It seems like it's possible to implement reflexive equality efficiently using three SIMD instructions.

    +
  • +
  • The documentation for Equals doesn't require reflexivity when involving floating point:

    +
    +

    The following statements must be true for all implementations of the Equals(Object) method. In the list, x, y, and z represent object references that are not null.

    +

    x.Equals(x) returns true, except in cases that involve floating-point types. See ISO/IEC/IEEE 60559:2011, Information technology -- Microprocessor Systems -- Floating-Point arithmetic.

    +
    +
  • +
  • If you're using floats as dictionary keys you're living in a state of sin and should not expect sane behaviour.

    +
  • +
+

Arguments for being reflexive:

+
    +
  • It's consistent with existing types, including Single, Double, Tuple and System.Numerics.Complex.

    +

    I don't know any precedent in the BCL where Equals follows IEEE instead of being reflexive. Counter examples include Single, Double, Tuple and System.Numerics.Complex.

    +
  • +
  • Equals is mostly used by containers and search algorithms, which rely on reflexivity. For these algorithms a performance gain is irrelevant if it prevents them from working. Don't sacrifice correctness for performance.

    +
  • +
  • It breaks all hash based sets and dictionaries, Contains, Find, IndexOf on various collections/LINQ, set based LINQ operations (Union, Except, etc.) if the data contains NaN values.

    +
  • +
  • Code that does actual computations where IEEE semantics are acceptable usually works on concrete types and uses == / != (or, more likely, epsilon comparisons).

    +

    You currently can't write high performance computations using generics since you need arithmetic operations for that, but these aren't available through interfaces/virtual methods.

    +

    So a slower Equals method wouldn't affect most high performance code.

    +
  • +
  • It's possible to provide an IeeeEquals method or an IeeeEqualityComparer<T> for the cases where you either need the IEEE semantics or need the performance advantage.

    +
  • +
+
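For what it's worth, Python shows the same compromise in a shipping language: == follows IEEE 754, but containers test identity before equality, precisely so that membership and dictionary lookup remain usable with NaN (CPython behavior):

```python
nan = float('nan')

print(nan == nan)    # False: == follows IEEE 754
print(nan in [nan])  # True: membership checks identity before equality
d = {nan: 'value'}
print(d[nan])        # value: dict lookup also short-circuits on identity
```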

In my opinion these arguments strongly favour a reflexive implementation.

+

Microsoft's CoreFX team plans to introduce such a vector type in .NET. Unlike me they prefer the IEEE solution, mainly due to the performance advantages. Since such a decision certainly won't be changed after a final release, I want to get feedback from the community, on what I believe to be a big mistake.

+",8669,,-1,,43998.41736,42392.84931,Should `Vector.Equals` be reflexive or should it follow IEEE 754 semantics?,<.net>,2,3,2,,,CC BY-SA 3.0,, +308083,1,,,1/22/2016 18:46,,-1,738,"

Edit: this differs from a similar question because I'm interested specifically in how Django works with the front-end. I.e. what is considered best practice when developing using the Django framework.

+ +

I am learning Python at university and need to produce an implementation project. I'd like to write a web application using Python with the Django web framework.

+ +

I've been learning as much HTML, CSS and Javascript as possible so I know what's going on at the front-end.

+ +

I've found some nice designs on Codepen which would be useful in what I'm designing, but can I use them and design a Django app around it? Or do I need to get the back-end together first and then design the front-end to fit.

+",212277,,212277,,42392.83194,42392.83194,Developing an app using Django. Do I design front-end after app? Or develop Django to fit UI?,,1,1,,42395.79653,,CC BY-SA 3.0,, +308084,1,,,1/22/2016 19:56,,7,1784,"

I was wondering whether it is good practice for commit messages to contain the number of the ticket they are a part of. It would look like:

+ +
    2568 Fix heating issue
+
+    Summary of the issue with a bunch of detailed comments
+
+ +

It would be a bit of a pain to get the ticket or issue # into each commit, and I am wondering whether people think it would be worth it in the long run. Also, if my branch name already contains the ticket number, is this redundant? The main issue is that I use SourceTree, and in the log / history view it can get a little complicated to see which commit corresponds to which ticket. I would just like a better way to see how everything fits together.

+",212278,,,,,42392.12361,Is there any downside to commit messages containing the ticket number,,4,2,1,,,CC BY-SA 3.0,, +308091,1,308094,,1/22/2016 21:11,,0,246,"

Today while programming I stumbled upon the following question: are there any compilers that optimize based on mathematical assumptions?

+ +

For instance in cases like

+ +
unsigned int i, b;  /* i and b are not compile-time constants */
+if (sqrt(i) == b)
+...
+
+ +

In this case it would be a lot more effective to use

+ +
unsigned int i, b;  /* i and b are not compile-time constants */
+if (i == b * b)
+...
+
+ +

Assuming a sqrt() function that handles unsigned integers and rounds sensibly.

+ +

Since I was not able to find useful information (probably because I did not know what to search for specifically), can someone please tell me or point me to a relevant source?

+ +

Are there compilers (for imperative languages) that optimize such things using some kind of heuristic? More specifically, what about GCC, Microsoft Visual C++, and MATLAB?
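One reason a compiler generally cannot perform this rewrite on its own: with a truncating integer square root, sqrt(i) == b holds for every i in [b*b, (b+1)*(b+1) - 1], whereas i == b*b holds only for the perfect square itself (and b*b can overflow). A quick illustration, with Python's math.isqrt standing in for the truncating sqrt:

```python
import math

b = 3
# All i whose truncated integer square root equals b...
matches_sqrt = [i for i in range(100) if math.isqrt(i) == b]
# ...versus the single perfect square b*b.
matches_square = [i for i in range(100) if i == b * b]

print(matches_sqrt)    # [9, 10, 11, 12, 13, 14, 15]
print(matches_square)  # [9]
```

So the transformation is only valid when the surrounding code guarantees i is a perfect square, or when the comparison is rewritten as b*b <= i < (b+1)*(b+1); that is exactly the kind of knowledge a compiler usually lacks.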

+",211424,,31260,,42395.63542,42395.63542,Are there compilers which optimize the use of mathematical functions?,,1,12,,,,CC BY-SA 3.0,, +308102,1,,,1/22/2016 23:26,,3,185,"

Long question context, skip to tl;dr for the meat of it.

+ +

I am designing an integration between two web applications, and have come to the conclusion that a message pattern would be an appropriate solution to the problem. The gist of the requirements is that data in System A should be synced to System B, through an interface in System A. All requests are http requests. The data in System A is an object database, the data in System B is a relational database. System B accepts and returns JSON for every call, while System A uses domain-specific formats. In addition, all business logic has to be in System A. System B is a black box, and only allows access to the data layer.

+ +

I experimented with a Transfer Object Assembler Pattern, but found that this was not quite right for the integration. Each combination of object and operation on System B ultimately has to be done with a separate request. If I need to update a user and an organization, these have to be in separate calls to separate resources. Using the DTO pattern here would require a separate data transfer object for each call, which kind of defeats the purpose (as far as I understand).

+ +

Long story short, the message pattern seems to be the way to go. Bonus points since it's a short jump to using an actual message bus for integrations in the future. So now, the idea is to create a message for each operation type. CreateMessage, UpdateMessage, etc. Each message would have an instance of a parameter class that specifies the type of object, and the JSON body of the message. The only missing piece is translating the System A domain data into JSON, and JSON into System A data.

+ +

tl;dr

+ +

I want to write this correctly so that the next guy doesn't have to maintain really poorly written/designed code.

+ +

I'm thinking of using the Message Mapper pattern to transform an object from System A to a Create/Update/Delete/Read message with a JSON center, so that it can be executed in System B. System A is OO, but doesn't support generics. Would it be more practical to write a collection of classes that are 1:1 with the domain as we need them, or a monolithic class that does all the mapping? I really don't like the idea of any very large class, but since generics aren't an option it's difficult to tell which is more appropriate.

+",212301,,4,,43956.97222,43956.97222,Message Mapper Design Considerations,,1,2,,,,CC BY-SA 4.0,, +308107,1,,,1/22/2016 23:56,,1,102,"

Sometimes when I resume work on a project after a long break, I forget where in the many files and lines I stopped working. I can usually find it after a minute, but I'm wondering if any popular IDEs have a built-in way to indicate where you stopped working.

+ +

I currently use XCode/C++, but I'm curious if other IDEs have this feature.

+ +

Obviously I could make an additional file, and I'll do that if there isn't a more elegant way.

+",181729,,,,,42392.03889,Resuming a project where you left off,,1,3,0,42396.64514,,CC BY-SA 3.0,, +308108,1,308110,,1/23/2016 0:05,,46,12106,"

We are designing coding standards and are having disagreements as to whether it is ever appropriate to break code out into separate functions within a class when those functions will only ever be called once.

+ +

For instance:

+ +
f1()
+{
+   f2();  
+   f4();
+}
+
+
+f2()
+{
+    f3()
+    // Logic here
+}
+
+
+f3()
+{
+   // Logic here
+}
+
+f4()
+{
+   // Logic here
+}
+
+ +

versus:

+ +
f1()
+{
+   // Logic here
+   // Logic here
+   // Logic here
+}
+
+ +

Some argue that it is simpler to read when you break up a large function using separate single-use sub functions. However, when reading code for the first time, I find it tedious to follow the logic chains and optimize the system as a whole. Are there any rules typically applied to this sort of function layout?

+ +

Please note that unlike other questions, I am asking for the best set of conditions to differentiate allowable and non-allowable uses of single call functions, not just if they are allowed.

+",172875,,172875,,42394.71875,43692.80833,When is it appropriate to make a separate function when there will only ever be a single call to said function?,,8,5,18,42394.43472,,CC BY-SA 3.0,, +308118,1,308119,,1/23/2016 1:58,,6,201,"

Question Background:
+I have a rather large project that I started before I learned about a beautiful thing called version control. Now, I have a ton of files labeled with the convention ""ProjectName_Date.""

+ +

Question:
+I'm thinking of deleting the old files and starting a new repository with the latest version. However, is that a good practice? Would it be better to try to maintain that history by figuring out what changes were made between files and then store everything in a repository? Or is it better to start from scratch and not risk setting up a questionable repository?

+ +

What I've done:
+I know the question is subjective, but I've read different articles (like the one here) about best practices and can't find anything helpful regarding this type of issue. It seems like everything assumes that you're starting from scratch.

+ +

Any advice in the form of experience or an article is appreciated. +Thanks!

+",212311,,,,,42392.45,Project Cleanup and Version Control Best Practices,,3,1,,,,CC BY-SA 3.0,, +308136,1,,,1/23/2016 12:12,,2,904,"

How does copy and paste currently work on a computer? For example, you can copy some formatted text together with an image; when it is pasted into a plain text editor, only the ASCII or UTF-8 text is intelligently pasted, while pasting into some web apps, Microsoft Word, or Pages on a Mac inserts the text together with the formatting information as well as the image. (Is this actually one of the GoF design patterns, or a well-defined design pattern that has a name?)

+",5487,,5487,,42392.78819,42393.10556,How does the copy and paste mechanism work and is it a standard design pattern as in GoF?,,1,6,,,,CC BY-SA 3.0,, +308143,1,308145,,1/23/2016 13:46,,0,395,"

I am a second year Computer Science student currently on a placement and I am currently developing a Java EE application that collects meta data from several sources and then visualises the data. This is the first major programming project I have done. (Nothing like university with 200 lines of code for an assignment)

+ +

I am about half of the way through the development of this application and have learnt an incredible amount. One of those things is TDD. From learning this I have gained an understanding of how important testing is and how it can help speed up development.

+ +

The reason for the back story is to cover for my ignorance. My university hasn't taught it to me and it is non-existent at my work place. I am incredibly keen in making sure I follow best practice in everything I do and I would like to include it in my application as I feel it would bring many benefits.

+ +

The problems I have though, are:

+ +
    +
  • My application (like many, I am sure) will contact several external sources. Take my message consumer, for example. It will contact up to 3 remote RESTful interfaces and a database. The URLs of these APIs are dynamic and are only accessible once the application has loaded the properties file (which has to be modified by the application user).
  • +
  • I feel it would be difficult to retro-fit the tests on a system that hasn't been built in a TDD manner.
  • +
  • I practically don't have time to retro-fit/ learn how to do advanced TDD with java EE applications.
  • +
  • As it isn't practiced at all in my workplace, I wouldn't be able to ask questions or be corrected on whether what I was doing is best practice.
  • +
+ +

Should I just accept defeat on this project, as I am too far in, and going forward enforce TDD on everything I can (where it is practical, of course)?

+ +

I'm just concerned that potential future employers might look at my work from this placement and think less of me because of a lack of good practice. (I'm not worried what the current company thinks, as testing isn't a thing there and they are very happy with my project.)

+",212357,,,,,42392.60417,TDD with a half baked Java EE application,,1,1,,,,CC BY-SA 3.0,, +308149,1,308151,,1/23/2016 15:43,,0,498,"

I'm not in the field, so I don't have any professional experience with projects following TDD. I am trying to adopt this practice, but I'm confused as to when to start actually writing the tests. In my example, a personal blogging app, I'm starting from complete scratch using ASP.NET 5, MVC 6. I've built the main functionality, such as displaying the homepage, allowing login using Identity, allowing the posting of a blog post, and retrieval of those blog posts through an MVC controller as well as an API.

+ +

This is all done using the Repository pattern, and MVVM.

+ +

At what point in that process should I really begin testing?

+ +

Do I test that the repository does what it should do? That smells weird to me, since repositories are injectable.

+ +

Do I test that the ViewModel has certain properties?

+ +

Or is it acceptable to simply start with testing whether a valid ViewModel allows creation of a new BlogPost, and an invalid does not?

+",172798,,,,,42392.84097,At what point in a projects life-cycle do you begin writing tests?,,3,1,,42398.94167,,CC BY-SA 3.0,, +308152,1,,,1/23/2016 16:23,,1,1214,"

I was thinking about it and was curious how one would code an efficient repeating alarm clock in C. Would you set an alarm time and then offset the time by the millisecond equivalent of a day (or two days, or a week, depending on how often it needed to repeat)? And then poll periodically to see if the times are equal? That sounds like a very inefficient solution, to me at least. I am interested in how an alarm clock would work programmatically.

+",212379,,,,,42394.12431,Efficient Repeating Alarm Clock in Low Level Language like C,,2,1,,,,CC BY-SA 3.0,, +308158,1,308159,,1/23/2016 19:16,,6,2258,"

In C++, it is possible to override a base class's method even if the visibility declarations of the two don't match. +What design considerations could underlie the decision not to take visibility into account in the overriding rules?

+ +

Consider this piece of code as an example:

+ +
class A{
+    public: virtual void f() { cout << ""A::f"" << endl; }
+};
+
+class B : public A {
+    private: void f() { cout << ""B::f"" << endl; }
+};
+
+int main() {
+    A* a = new B;
+    a->f();
+}
+
+ +

The above compiles in clang, and running it prints B::f, showing that it is possible to call a private function of B from outside the class, thus breaking encapsulation.

+ +

I don't really see why this type of behavior should be allowed. It is clearly not for performance/efficiency reasons, since checking statically that two visibility declarations match is trivial. Does anybody have an idea or hypothesis about what could possibly be the design decision behind this?

+",181877,,,,,42393.53611,Why does the overriding rule of C++ not care about visibility changes?,,3,2,,,,CC BY-SA 3.0,, +308160,1,308540,,1/23/2016 19:50,,17,10171,"

In TDD there is Arrange Act Assert (AAA) syntax:

+ +
[Test]
+public void Test_ReturnItemForRefund_ReturnsStockOfBlackSweatersAsTwo_WhenOneInStockAndOneIsReturned()
+{
+    //Arrange
+    ShopStock shopStock = new ShopStock();
+    Item blackSweater = new Item(""ID: 25"");
+    shopStock.AddStock(blackSweater);
+    int expectedResult = 2;
+    Item blackSweaterToReturn = new Item(""ID: 25"");
+
+    //Act
+    shopStock.ReturnItemForRefund(blackSweaterToReturn);
+    int actualResult = shopStock.GetStock(""ID: 25"");
+
+    //Assert
+    Assert.AreEqual(expectedResult, actualResult);
+}
+
+ +

In BDD writing tests uses a similar structure but with Given When Then (GWT) syntax:

+ +
    [Given(@""a customer previously bought a black sweater from me"")]
+    public void GivenACustomerPreviouslyBoughtABlackSweaterFromMe()
+    { /* Code goes here */   }
+
+    [Given(@""I currently have three black sweaters left in stock"")]
+    public void GivenICurrentlyHaveThreeBlackSweatersLeftInStock()
+    { /* Code goes here */   }
+
+    [When(@""he returns the sweater for a refund"")]
+    public void WhenHeReturnsTheSweaterForARefund()
+    { /* Code goes here */   }
+
+    [Then(@""I should have four black sweaters in stock"")]
+    public void ThenIShouldHaveFourBlackSweatersInStock()
+    { /* Code goes here */   }
+
+ +

Although they are often considered the same, there are differences. A few key ones are:

+ +
    +
  1. GWT can be mapped directly to the specification of a feature file in BDD frameworks

  2. +
  3. GWT is easier for non-developers to understand by encouraging use of plain English, and having a short description of what each part is doing

  4. +
  5. Given When and Then are keywords in various BDD frameworks such as SpecFlow, and Cucumber

  6. +
+ +

My question is: are there any other differences (besides the names) between AAA and GWT? And is there any reason, besides the ones specified above, that one should be preferred over the other?

+",212398,,,,,43188.55069,Differences between Given When Then (GWT) and Arrange Act Assert (AAA)?,,3,3,9,,,CC BY-SA 3.0,, +308168,1,,,1/23/2016 20:27,,2,660,"

Based on the following snippet

+ +

HTML

+ +
<table id=""example"" class=""display"" cellspacing=""0"" width=""100%"">
+<thead>
+<tr>
+  <th>Name</th>
+  <th>Position</th>
+  <th>Office</th>
+  <th>Age</th>
+  <th>Start date</th>
+  <th>Salary</th>
+</tr>
+
+ +

+

+ +

Javascript

+ +
var tableNames = { ""example"": ""#example"" };
+
+function main() {
+    $(tableNames[""example""]).DataTable();
+}
+
+$(document).ready(main); // pass the function itself, not the result of calling it
+
+ +

I just want to know what are some consistent ways to keep track of variable names in both HTML and JS.

+ +

Let's say that tomorrow I rename example to foobar. Then the DataTables instance is no longer going to work as it does now. How can I manage the coupling between the identifier in the HTML code and the one in the JavaScript code?

+ +

I thought about the tableNames object, but I'm wondering if a more structured approach exists

+ +

Thanks

+",138825,,138825,,42392.88194,42937.63819,Javascript and HTML decoupling,,4,3,,42943.64028,,CC BY-SA 3.0,, +308170,1,308289,,1/23/2016 20:50,,3,188,"

How should I structure a piece of code that executes an operation, but may have slightly different behavior depending on, let's say, user roles?

+ +

Example: +My app has 'manager' and 'employee' roles. There are many managers in my app, and each of them has many employees. I have a dashboard where managers and their employees can add/edit/delete products.

+ +

Both can edit the products they have created themselves, but managers can also edit products belonging to their employees. If a manager edits a product created by an employee, the employee will get an email notification about the edit.

+ +

The issue I have is that I'm creating if/else or switch/case statements for checking the user roles, and I feel that once I add new roles my code will need to check for them too and will become harder to read.

+ +

For example:

+ +
// What I currently have:
+public function updateProductForUser(Product $product, UserInterface $user)
+{
+    $userWhoCreatedProduct = $product->getCreatedByUser();
+
+    if ($userWhoCreatedProduct === $user) {
+        // If the user who created the product is the same one trying to update it,
+        // then go ahead and execute the update.
+        $this->_doUpdateProductForUser($product, $user);
+        return;
+
+    } elseif ($user instanceof Manager) {
+        $allManagerEmployees = $this->someService->findAllEmployeesOfManager($user);
+
+        if (in_array($userWhoCreatedProduct, $allManagerEmployees)) {
+            // If the user trying to update the product is a manager, and the product was
+            // created by an employee of the manager, then execute the update but also
+            // send a notification to the employee that the product he created got updated
+            // by his manager.
+            $this->_doUpdateProductForUser($product, $user);
+            $this->notifyUserThatProductGotUpdated($product);
+        }
+    }
+}
+
+ +

How can I improve this? Is there a better way? What are the pitfalls of the implementation? Any advice is very welcomed. Thanks!

+",186012,,186012,,42393.94792,42401.11319,Architecture: API with slightly different behavior depending on the logged-in user roles,,1,3,,,,CC BY-SA 3.0,, +308174,1,308202,,1/23/2016 21:16,,3,2083,"

I am working on a new project in which we are currently deciding which technologies and frameworks we will be using. The application will eventually be cross-platform. Therefore, for the server side, we will be using a REST API written in Java with Spring.

+ +

We are now deciding on which technology we will be using for the front end web application.

+ +

The options are as following:

+ +
    +
  1. Using a front-end JavaScript framework (we'd probably use Ember.js). Pages would be rendered purely in JavaScript; all REST requests would be sent from within Ember

  2. +
  3. Using a PHP server framework (Laravel). All REST calls would be made from within Laravel, with server-side page rendering

  4. +
+ +

My question: which approach is the best one, and why? If we were to go for option 2 (using Laravel to make the requests), wouldn't it be overkill to use Laravel, as we would only be using it to generate views and make the calls to the REST APIs? (All logic comes from the REST API.) What bothers me in option 1 is that the rendering will be done client side, which will impact the initial loading time. The application will be widely used, also by users with low-end hardware.

+ +

Any input is welcome! If you have other suggestions or better options, please let me know!

+",212392,,,,,43862.30903,Consuming REST services: client or server,,2,2,2,,,CC BY-SA 3.0,, +308176,1,308179,,1/23/2016 22:48,,1,316,"

I have a group of methods that is going to be very large, and I need to be able to call methods from that group systematically, in two different ways. +Each method creates a new item object with variables specific to the type of item created; for instance, the rubber() method makes an item with

+ +
itemId = 12, weight = 1, name = ""Rubber"", ...
+
+ +

for 20+ more variables. +I am currently able to call these by name

+ +
object.rubber();
+
+ +

I want to be able to call them by itemId

+ +
object.getById(12);
+
+ +

I need to be able to process them by itemId for situations like this:

+ +
/* itemId:
+1: ruined copper
+2: weak copper
+3: refined copper
+4: ruined iron
+5: weak iron
+6: refined iron
+7: ruined brass
+-(More metals in this pattern)-
+36: sturdy aluminum
+*/
+
+
+//within furnace class
+
+public void smelt(item input, inventory furnace){
+//input is the item to smelt, furnace is the furnace's crucible.
+
+    if(input.getId() < 36 && input.getId()%3 == 1){
+    //if the input's ID is less than 36, it's a metal, and if ID % 3 == 1, it's a ruined metal, meaning it is smeltable.
+
+        furnace.addItem(handler.list.addById(input.getId() + 1));
+        //add an item to the furnace with an ID one higher than that of the input, which will be a 'weak' metal of the same type
+
+        furnace.removeItem(input);
+        //delete the input item
+    }
+}
+
+ +

The metals are all regularly ordered, so if I can call them by ID it will be possible to manipulate them with very little code, whereas otherwise the furnace will need an if statement for each individual metal to determine what it smelts into. Many other processes will need to be able to work similarly.

+ +

I've found two possible solutions so far:

+ +

One would be to use reflection to call the methods, but I've been warned that it's notoriously unstable and difficult to figure out.

+ +

The other would be to overhaul my current system and store the item properties in a text file, something like

+ +
Ruined_Copper false 4 256 ...
+Weak_Copper false 4 128 ...
+etc.
+
+ +

and then have a FileReader open the file and count lines until the line number equals the target itemId. It would then split the line by whitespace and assign the values to variables, i.e.:

+ +
newItem.setName(lineOutput(1));
+newItem.setIsContainer(lineOutput(2));
+...
+
+ +

This would output an item with

+ +
name=Ruined_Copper, isContainer = false, etc.
+
+ +

I'd prefer to figure this out early in development so I don't have to change thousands of items to a new format in the future. +Which one of these will go faster? Which one will be more stable? Is there a better way altogether?

+ +

EDIT: To clarify, I realize that it would be a bad idea to have metals 0-36, woods 37-56, etc. My goal is to allow systematic or player-created items to be added by player actions in game, so the finished product will have space between categories, i.e. 0-60 are metals, 61-999 are blank, 1000-1085 are wood types, 1086-1999 are blank, etc.
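To make the second option concrete, here is a rough sketch (my own; the field order and the two-line sample data are made up for illustration) of the line-counting reader described above, where line N of the file holds the whitespace-separated fields for itemId N:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Sketch of the "count lines until the target itemId" idea described above.
public class ItemFileReader {

    static String[] fieldsById(BufferedReader reader, int itemId) {
        try {
            String line = null;
            for (int i = 0; i <= itemId; i++) {   // itemId 0 is the first line
                line = reader.readLine();
                if (line == null) {
                    throw new IllegalArgumentException("no item with id " + itemId);
                }
            }
            return line.split("\\s+");            // whitespace-separated fields
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String data = "Ruined_Copper false 4 256\nWeak_Copper false 4 128\n";
        String[] fields = fieldsById(new BufferedReader(new StringReader(data)), 1);
        System.out.println(fields[0] + " isContainer=" + fields[1]);
    }
}
```

For a real item set it would probably be worth reading the file once into a list or map at startup rather than rescanning it on every lookup.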

+",212409,,212409,,42393.83472,42395.36319,Java need to call many methods systematically,,3,0,1,,,CC BY-SA 3.0,, +308178,1,308184,,1/24/2016 0:08,,54,72220,"

I am trying to understand these classifications and why they exist. Is my understanding right? If not, what?

+ +
    +
  1. P is polynomial complexity, or O(n^k) for some non-negative real number k, such as O(1), O(n^(1/2)), O(n^2), O(n^3), etc. If a problem belongs to P, then there exists at least one algorithm that can solve it from scratch in polynomial time. For example I can always figure out if some integer n is prime by looping over 2 <= k <= sqrt(n) and checking at each step if k divides n.

  2. +
  3. NP is non-deterministic polynomial complexity. I don't really know what it means for it to be non-deterministic. I think it means it is easy to verify in polynomial time, but may or may not be polynomial time to solve from scratch if we didn't already know the answer. Since it may be solvable in polynomial time, all P problems are also NP problems. Integer factorization gets quoted as an example of NP, but I don't understand why it's not P, personally, since trial factorization takes O(sqrt(n)) time.

  4. +
  5. NP-Complete I don't understand at all, but the Traveling Salesman Problem is quoted as an example of this. But in my opinion the TSP problem might just be NP, because it takes something like O(2^n n^2) time to solve, but O(n) to verify if you are given the path up front.

  6. +
  7. NP-Hard I assume is just full of unknowns. Hard to verify, hard to solve.

  8. +
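To make the verify-vs-solve distinction concrete, here is a rough sketch (my own toy example, using subset sum rather than TSP only because it is shorter to code): verifying a proposed certificate takes polynomial time, while the obvious solver enumerates all 2^n subsets.

```python
from itertools import combinations

def verify(certificate, numbers, target):
    # Polynomial-time check: the certificate must be drawn from `numbers`
    # (respecting multiplicity) and must sum to `target`.
    remaining = list(numbers)
    for x in certificate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(certificate) == target

def solve(numbers, target):
    # Brute force: tries all 2^n subsets - exponential in the input size.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # [4, 5]
print(verify([4, 5], nums, 9))  # True
```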
+",212406,,,user40980,42393.15833,43894.83264,Trying to understand P vs NP vs NP Complete vs NP Hard,,4,6,38,,,CC BY-SA 3.0,, +308195,1,,,1/24/2016 7:32,,1,524,"

I have an interface whose job is to communicate with a repository (which implements some interface). It doesn't seem to make sense to implement this interface without receiving a repository, so I'd like to inject it.

+ +

I then ask: where should I receive the repository? Since it is required in almost every method call, it seems reasonable to inject the dependency in the constructor, and not in (practically) every method of the interface, as that leads to a violation of the DRY (don't repeat yourself) principle.

+ +

On the other hand, there is no way of enforcing that every implementor would have this dependency in its constructor (moreover - how should an implementor even know this?)

+ +

I also thought about having an abstract class instead of an interface, but I soon ruled out that option so as not to limit implementors' ability to derive from another class.

+ +

What should I do in this situation, where should I receive the dependency?

+",51990,,,,,42393.38056,Where to inject dependency required by all implementors of an interface?,,2,2,,,,CC BY-SA 3.0,, +308196,1,,,1/24/2016 7:43,,1,137,"

I'm currently on the process of creating a website/webapp.

+ +

My application is based on Node JS with the express framework.

+ +

My core backend concept consists of

+ +
    +
  • routers: handle http request. Like [POST] /api/team
  • +
  • controllers: business logic behind routers
  • +
  • model: interface between database and controller
  • +
+ +

I'm not sure if this follows any best-practice patterns such as MVC. It started as a very small personal project with business logic in the routers. At some point I had to refactor my app (especially considering DRY), so I added controllers.

+ +

So let's say one can create a team with his account. The router will validate the request (user is authenticated, has permissions, ...) and forward the task to the controller. The controller will validate the form data, create a new database entry and return the new entry. The router will render the template and respond to the request.

+ +

I think so far everything should be pretty straightforward.

+ +

Now the team is created and the user can invite other users to his team. Pretty much the same procedure as before.

+ +

Now we come to my actual question.
+The invited user gets a notification. Currently I create notifications in the same place where I call the model to create a new entry for the invite.

+ +

I don't like that notifications are created in the same place. Notifications are a side effect and just decrease the readability of the actual task.

+ +

So my idea was to use events. The team controller should emit an event when someone got invited.

+ +

The event handler will create the actual notification. But in my current concept there is no component which would fit the role of handling the event in my opinion.
+The notification controller could listen for events on the team controller, but this could end in circular dependencies (the current example is not the best one for this situation).

+ +

So my question is: WHO should listen for events and complete side tasks?

+",146996,,,,,42453.46458,"""Who"" should handle side tasks for events?",,1,0,,,,CC BY-SA 3.0,, +308207,1,308209,,1/24/2016 13:09,,1,235,"

I'm playing with Google's voice recognition for a personal project and I have a fun little Q&A program written in Python using it. The problem, as it were, is that it means I have to be connected to the internet to access the API or the program doesn't work.

+ +

I'm wondering if there is a way I can download the entire API so I can use it offline rather than being wifi dependent.

+",212476,,61852,,42393.56528,42393.9875,Is there a way to download Google's voice recognition API so it can be used offline?,,1,0,,42397.89028,,CC BY-SA 3.0,, +308211,1,,,1/24/2016 13:48,,6,4664,"

Have you ever implemented the Option<T> functional type? It is discussed here: +https://app.pluralsight.com/library/courses/tactical-design-patterns-dot-net-control-flow/table-of-contents

+ +

Basically it is about using an IEnumerable<T> with zero or one elements instead of a potentially nullable object reference in C#. We can leverage LINQ to streamline processing and reduce cyclomatic complexity, because there are no ""if (null)"" conditions anymore.

+ +

My adaptation of the idea looks like this:

+ +
public struct Optional<T> : IEnumerable<T>
+    where T : class
+{
+    public static implicit operator Optional<T>(T value)
+    {
+        return new Optional<T>(value);
+    }
+
+    public static implicit operator T(Optional<T> optional)
+    {
+        return optional.Value;
+    }
+
+    Optional(T value)
+        : this()
+    {
+        Value = value;
+    }
+
+    public IEnumerator<T> GetEnumerator()
+    {
+        if (HasValue)
+            yield return Value;
+    }
+
+    T Value { get; }
+    bool HasValue => Value != null;
+    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
+    public override string ToString() => Value?.ToString();
+}
+
+ +

It is not so bad; we can use it as an argument/return type with automatic conversion to/from T when necessary. A consumption example might look like this:

+ +
     var p = new Product(""Milk"");
+     var basket = ShoppingBasket.Empty
+            .Place(p)
+            .Place(p)
+            .Place(null);
+
+ +

It creates a basket with a single basket entry for two milk boxes, if the following is defined:

+ +
public static class ShoppingBasket
+{
+    public static readonly IEnumerable<BasketItem> Empty =
+        Enumerable.Empty<BasketItem>();
+
+    public static IEnumerable<BasketItem> Place(
+        this IEnumerable<BasketItem> basket, 
+        Optional<Product> product) =>
+            basket
+                .SetOrAdd(
+                    i => product.Contains(i.Product), 
+                    i => i.OneMore(), 
+                    product.Select(p => new BasketItem(p, 1)))
+                .ToArray();  
+}
+
+ +

Where:

+ +
public class BasketItem
+{
+    public BasketItem(Product product, int quantity)
+    {
+        Product = product;
+        Quantity = quantity;
+    }
+
+    public Product Product { get; }
+    public int Quantity { get; }
+    public BasketItem OneMore() => 
+        new BasketItem(Product, Quantity + 1);
+}
+
+public class Product
+{
+    public Product(string name)
+    {
+        Name = name;
+    }
+
+    public string Name { get; }
+}
+
+ +

I use this general IEnumerable<T> extension:

+ +
public static class EnumerableHelper
+{
+    public static IEnumerable<T> SetOrAdd<T>(
+        this IEnumerable<T> source,
+        Func<T, bool> predicate,
+        Func<T, T> set,
+        params T[] add)
+    {
+        return source
+            .SetOrAdd(predicate, set, add as IEnumerable<T>);
+    } 
+
+    public static IEnumerable<T> SetOrAdd<T>(
+        this IEnumerable<T> source,
+        Func<T, bool> predicate,
+        Func<T, T> set,
+        IEnumerable<T> add)
+    {
+        var empty = Enumerable.Empty<T>();
+        foreach (var item in source)
+            if (predicate(item))
+            {
+                yield return set(item);
+                add = empty;
+            }
+            else
+                yield return item;
+
+        foreach (var item in add)
+            yield return item;
+    }
+}
+
+ +

Well, it is pretty functional as far as I can see. But the question is: does it make any sense? Cyclomatic complexity is very low in the ShoppingBasket class, but at the same time we still have three ""virtual paths"" of execution to test - ""add"" and ""replace"" in the Place() method, plus we also need to test for a null product argument.

+ +

The problem is that they are almost invisible. What do you say? Would you personally prefer to maintain such kind of code in C#?

+",126611,,126611,,42393.60694,43927.725,Option functional type implementation and scenarios,,1,3,1,,,CC BY-SA 3.0,, +308214,1,308225,,1/24/2016 15:01,,3,288,"

I am reading ""The Go Programming Language"" right now, and I have read the package-initialization chapter, which says (unless I read it wrong) that Go uses eager initialization.

+ +

So over time we saw, say, C++ with eager initialization, then Java with on-demand initialization, C# also on demand, and now Go back with eager initialization (those languages are not strictly related; I am talking about chronology).

+ +

And I am curious about the reasons (from a historical perspective) why the on-demand model was chosen for Java and C#.
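To make "on demand" concrete, here is a small Java sketch (my own, just to illustrate the behavior I mean): a class's static initializer runs at the first active use of the class, not at program start.

```java
public class LazyInitDemo {

    static class Holder {
        static { System.out.println("Holder initialized"); }
        // Deliberately not a compile-time constant, so the access
        // in main really triggers class initialization.
        static int value = 42;
    }

    public static void main(String[] args) {
        System.out.println("before first use");
        System.out.println(Holder.value);   // initializer runs here, on demand
    }
}
```

Running this prints "before first use" before "Holder initialized", showing that Holder's static initializer did not run at program start.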

+",66354,,,,,42393.73403,For what reasons Java and C# initialize static data on demand?,,2,2,1,,,CC BY-SA 3.0,, +308221,1,,,1/24/2016 16:05,,-3,222,"

Recently I came across something illogical while reading the latest ANSI C paper. It was talking about linkage, but it never mentioned a way to declare identifiers with internal linkage inside block scope (or at least not in a useful way). Imagine something like this:

+ +
int main()
+{
+
+    //Here how to declare the identifier 'b'
+    //with internal linkage   
+
+    b = 2;
+
+    printf(""%d"", b);
+}
+
+static int b;
+
+ +

If you try something like this:

+ +
{
+    extern int b;
+    //...
+}
+
+ +

Then this would be UB, because the identifier b is declared with both internal and external linkage in the same TU (according to the current standard).

+ +

However, reading the K&R white paper (""The C Programming Language"", 1st edition) I came across this (on page 137):

+ +
+

Imagine a fragment of a compiler that manipulates a symbol table. Each + identifier in a program has certain information associated with it, + for example, whether or not it is a keyword, whether or not it is + external and/or static, and so on. The most compact way to encode such + information is a set of one-bit flags in a single char or int.

+
+ +

The important part is that this example (which shows how a compiler could be written) considers the possibility of an identifier having both the static and extern storage specifiers (remember that in the current ANSI C standard we can only have a single storage-class specifier). That actually makes a lot of sense: this way we could specify in a declaration whether we are referring to the identifier with internal or external linkage, which would be useful in certain situations. Imagine we have an internal and an external variable with the same identifier. This way we could distinguish between the two:

+ +
int main()
+{
+    extern static int b; //here internal variable should be declared (!)
+
+    b = 2; //here internal variable should be modified
+
+    {
+        extern int b; //here external variable should be declared (!)
+
+
+        b = 2; //here external variable should be modified
+    }
+}
+
+static int b; //internal variable named 'b' defined here
+
+ +

In the current ANSI C standard this is not supported, and there is also a somewhat artificial rule that if an identifier is declared with the extern specifier and a prior declaration of that identifier with some linkage is visible, that linkage is inherited by the identifier currently being declared.

+ +

This lead to code like this:

+ +
static int b;
+
+int main()
+{
+    extern int b; //re-declaration of identifier 'b' with internal linkage
+
+    b = 2;
+}
+
+ +

This is rather confusing, to say the least. Writing such code, you would suppose that an identifier with external linkage is being modified.

+ +

Note that just putting static in a block-scope declaration only alters the storage duration of the variable.

+ +

Here are the original ANSI C statements about this:

+ +

$6.2.2.4:

+ +
+

For an identifier declared with the storage-class specifier extern in + a scope in which a prior declaration of that identifier is visible,31) + if the prior declaration specifies internal or external linkage, the + linkage of the identifier at the later declaration is the same as the + linkage specified at the prior declaration. If no prior declaration is + visible, or if the prior declaration specifies no linkage, then the + identifier has external linkage.

+
+ +

And at $6.2.2.7:

+ +
+

If, within a translation unit, the same identifier appears with both + internal and external linkage, the behavior is undefined.

+
+ +

About storage-specifiers at $6.7.1.2:

+ +
+

At most, one storage-class specifier may be given in the declaration + specifiers in a declaration, except that _Thread_local may appear with + static or extern)

+
+",175567,,175567,,42394.33403,42394.33403,Could ANSI C standardized linkage syntax from early C wrong?,,1,4,,42395.79514,,CC BY-SA 3.0,, +308226,1,308227,,1/24/2016 17:48,,2,1079,"

I'm developing a C++14 application and would like to take advantage of the new multithreading features, in particular std::async. I have seen a number of applications which allow the user to specify the maximum number of software threads that can be used for the duration of the program run. However, the recommended usage of std::async is the default launch policy, which implies no control over the number of software threads that are actually created.

+ +

I assume the idea of explicit thread limits is to try to control the number of cores used by the program, but I believe this to be misleading, as the number of cores used will ultimately be determined by the OS. Is there any good reason to allow the program user to explicitly limit the number of software threads created by a program, or is it always preferable to let the implementation and OS handle threading?

+ +

Rather than allow the user to specify a thread limit, my intended solution was to just have a flag to enable threading. Does this kind of user control even offer any advantage?

+",165513,,,,,42499.6,Does it make sense to have a user specified thread limit?,,2,4,,,,CC BY-SA 3.0,, +308233,1,308235,,1/24/2016 21:31,,1,778,"

I was wondering if there is a timeline for when I have to distribute the source code for a binary that has been released under the GPLv2 license. To my understanding, I would need to release the source code alongside it (i.e. immediately).

+ +

I'm asking this question because I've heard that Apple does not update its open source page as soon as they make changes to the kernel and publish a patch. Would they thus be violating the license agreement?

+",212527,,,,,42393.90347,When do I have to distribute the source code of GPLv2 licensed software?,,1,8,,,,CC BY-SA 3.0,, +308238,1,,,1/25/2016 0:54,,2,692,"

According to Wikipedia:

+ +
+

In object-oriented programming, the command pattern is a behavioral design pattern in which an object is used to encapsulate all information needed to perform an action or trigger an event at a later time. This information includes the method name, the object that owns the method and values for the method parameters.

+
+ +

And according to professor Schmidt's text, a command has:

+ +
    +
  • Time-independent execution of application logic. The encapsulation of +application logic allows to queue it and execute it at a different point in +time.
  • +
  • Context-independent execution of the application logic. The separation +between application logic and context allows to execute the application +in separate contexts, such as in a different thread or using a different state.
  • +
  • Exchangeability of application logic. The separation between application +logic and context allows to easier exchange the application logic.
  • +
+ +

If you encapsulate all the information into an Intent, then onHandleIntent works as the abstract method of the command executor, just as described in the command processor pattern text.

+ +

Thus, instead of explicitly implementing the executor in order to invoke a command, you simply delegate command execution to the operating system.

+ +

So the questions are:

+ +
    +
  1. Is IntentService the framework's implementation of the command pattern?
  2. +
  3. If so, why do some Android MVP implementations explicitly implement their own, using a ThreadExecutor as the consumer and Runnables as commands, instead of using the one provided by the framework?
  4. +
+",212528,,212528,,42398.08333,42398.08333,Is IntentService an implementation of Command Pattern?,,1,0,2,,,CC BY-SA 3.0,, +308243,1,,,1/25/2016 2:53,,3,188,"

Let's say a user has a set of data. This data is stored in a user_data table, which is referenced by other users in some way. If the user wants to delete a row from the user_data table, referential integrity dictates that it cannot be done as long as there exists a reference to that data.

+ +

Let me explain further by giving an example of the type of data that a user can store and that is referenced by other users - a simplified video streaming service. Some data to be stored:

+ +

user_videos

+ +

id
+video_name
+video_url
+owner_id

+ +

video_playlist

+ +

id
+name
+owner_id

+ +

playlist_items

+ +

playlist_id
+video_id

+ +

So from the above example, a user can have videos that can be referenced in playlists created by other users. If referential integrity is enforced, then the user won't be able to delete a video unless there are no further references to it (on delete cascade).

+ +

How should I implement the database to handle the deletion?

+ +

Option 1: Do not enforce referential integrity. Playlists may refer to videos that do not exist.

+ +

Option 2: Hide user video data by introducing a ""deleted"" field.

+ +

Option 3: Assign video to a special user that takes ownership of all orphaned videos.

+ +

Option 4: On delete cascade and delete all playlist that references the video.

+ +

I'm leaning towards option 1, but I've always been told to enforce a foreign key constraint. But in the case of interlinking user data, as per my example above, could a foreign key be counterintuitive?

+",154116,,,,,42394.22986,How to delete user data that is referenced by another user?,,1,3,,,,CC BY-SA 3.0,, +308245,1,,,1/25/2016 2:57,,112,19329,"

Suppose I am limited to using C++ by the project's environment. Is it good to prevent the use of some language features that C++ has but Java doesn't (e.g. multiple inheritance, operator overloading)?

+ +

I think the reasons are:

+ +
    +
  1. As Java is newer than C++, if Java doesn't provide a feature that C++ has, it means that the feature is not good, so we should avoid using it.
  2. +
  3. C++ code with C++ specific features (e.g.: friend functions, multiple inheritance) can only be maintained or reviewed by C++ programmers, but if we just write C++ like Java (without C++ language specific feature), the code can be maintained or reviewed by both C++ and Java programmers.
  4. +
  5. You may be asked to convert the code to Java some day
  6. +
  7. Code without C++ specific features is usually more maintainable
  8. +
  9. Every C++ language specific feature (e.g.: multiple inheritance) should have alternatives to be implemented in Java. If it doesn't, that means the design pattern or code architecture is problematic.
  10. +
+ +

Is that true?

+",196142,,196142,,42872.06111,42872.06111,Should we avoid language features that C++ has but Java doesn't?,,13,15,16,,,CC BY-SA 3.0,, +308250,1,,,1/25/2016 4:07,,29,7956,"

Suppose I have a segment of code that connects to the internet and shows connection results, like this:

+ +
HttpRequest* httpRequest=new HttpRequest();
+httpRequest->setUrl(""(some domain .com)"");
+httpRequest->setRequestType(HttpRequest::Type::POST);
+httpRequest->setRequestData(""(something like name=?&age=30&...)"");
+httpRequest->setResponseCallback([=](HttpClient* client, HttpResponse* response){
+    string responseString=response->getResponseDataString();
+        if(response->getErrorCode()!=200){
+            if(response->getErrorCode()==404){
+                Alert* alert=new Alert();
+                alert->setFontSize(30);
+                alert->setFontColor(255,255,255);
+                alert->setPosition(Screen.MIDDLE);
+                alert->show(""Connection Error"",""Not Found"");
+            }else if((some other different cases)){
+                (some other alert)
+            }else{
+                Alert* alert=new Alert();
+                alert->setFontSize(30);
+                alert->setPosition(Screen.MIDDLE);
+                alert->setFontColor(255,255,255);
+                alert->show(""Connection Error"",""unknown error"");
+            }
+        }else{
+            (other handle methods depend on different URL)
+        }
+});
+
+ +

The code is long and commonly used, but it does not require anything extra such as a custom function or class (HttpRequest and Alert are both provided by the framework by default). Although the segment is long, it is straightforward rather than complex; it is long only because of the bundles of settings (url, font size, ...), and it varies only slightly between classes (e.g. url, request data, error-code handling cases, normal handling cases, ...).

+ +

My question is: is it acceptable to copy and paste long but straightforward code instead of wrapping it in a function, in order to reduce dependencies in the code?
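For comparison, here is a Python-flavoured sketch of the opposite choice: folding the repeated alert setup into one small helper. The Alert class here is only a stand-in for the framework's:

```python
# Hypothetical stand-in for the framework's Alert, just to show the idea.
class Alert:
    def __init__(self):
        self.font_size = None
        self.font_color = None
        self.position = None
        self.shown = None

    def show(self, title, message):
        self.shown = (title, message)


def show_connection_error(message):
    # One helper replaces the copy-pasted alert setup for every error case.
    alert = Alert()
    alert.font_size = 30
    alert.font_color = (255, 255, 255)
    alert.position = 'MIDDLE'
    alert.show('Connection Error', message)
    return alert


# Each error branch now shrinks to a single, still-readable call:
not_found = show_connection_error('Not Found')
unknown = show_connection_error('unknown error')
```

The helper only removes the repeated setup lines; the per-error branching stays exactly where it was.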

+",196142,,,,,42394.825,Is it acceptable to copy and paste long but straightforward code instead of wrapping them into a class or function?,,8,14,5,,,CC BY-SA 3.0,, +308252,1,,,1/25/2016 4:24,,2,1681,"

Since I started programming, I've always been taught to leave a trailing blank line at the end of my files, the reason usually being something relating to how it makes concatenated files easier to read when using cat.

+ +

While I can't find an example right now, GitHub indicates missing blank lines at the end of a file using a red symbol, or at least, used to - so clearly it's frowned upon by a considerable chunk of the community.

+ +

Working with Go lately, I noticed that gofmt doesn't like blank lines at the end of a file, and my Vim plugin removes them automatically.

+ +

Why are blank lines at the end of a file discouraged rather than enforced in Go?

+",71709,,71709,,42409.62708,43528.58681,Why does gofmt discourage blank lines at the end of files?,,2,6,,,,CC BY-SA 3.0,, +308267,1,308635,,1/25/2016 9:37,,7,982,"

In Java or Scala, if I have an argument configuration: Conf, I can look at the Conf class/trait/case class and see its constructor, so I know which arguments to pass.

+ +

Recently I started dealing with JavaScript, I see function like this:

+ +
function init(conf) {
+  some body // as external developer to init I should not mess 
+            // with the internals here I just use init, proper design.
+ }
+
+ +

What I'm curious to know is: how can I tell exactly what to send in conf? Documentation? Examples? Sniffing around the init implementation? These all look like bad alternatives to me, since they are informal and depend on whether the developer bothered to write documentation or provide examples. Isn't there a more formal, strict way for me to know what conf means?

+ +

I must say that in my day-to-day work with statically typed languages I need almost no documentation; I just look at the types that functions receive, and in most cases I know what I need to pass.
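For contrast, even a dynamic language can make the expected shape of conf explicit; a Python sketch using a dataclass as a self-documenting configuration object (all names here are invented):

```python
from dataclasses import dataclass


# The configuration schema becomes a type you can read,
# instead of an undocumented bag of keys.
@dataclass
class Conf:
    host: str
    port: int = 8080
    debug: bool = False


def init(conf: Conf) -> str:
    # A caller can see from Conf exactly which fields exist and their defaults.
    return f'{conf.host}:{conf.port}'


endpoint = init(Conf(host='example.com'))
print(endpoint)  # example.com:8080
```

The same idea exists in the JavaScript world via JSDoc annotations or TypeScript interfaces, which document the shape of the object in a machine-checkable way.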

+",153865,,153865,,42398.32917,43420.46806,in dynamic language like javascript how do you know what the argument is?,,4,4,,,,CC BY-SA 3.0,, +308272,1,,,1/25/2016 9:53,,3,569,"

I heard that semantics and type systems are very important for all programmers! But why are they so important? I don't understand. Maybe they are important only for theoreticians and compiler developers?

+ +

In my own practice I never think about semantics or type theory. For me it is enough to know that semantics defines the meaning of syntax: when I write + I just know that it is the addition operation, and that is enough for me.

+ +

Next, I want to describe my thoughts about type systems. Similarly, when I write int i;, I know that int is an integer, and that is enough for me.

+ +

Another case: it is important to know the difference between static and dynamic, and between weak and strong, type systems.

+ +

Please tell me: are there any practical benefits to knowing about semantics and type systems?
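One practical benefit shows up in when errors surface; here is a small Python sketch where the meaning of an operator silently changes with the operand types, which is exactly what a type system is meant to catch:

```python
def total_price(quantity, unit_price):
    return quantity * unit_price


# With numbers, '*' means arithmetic, as the semantics of the operator suggest:
ok = total_price(3, 2.5)          # 7.5

# With a string quantity (say, straight from user input) the same code
# silently does something else: '*' now means sequence repetition.
surprising = total_price('3', 2)  # '33'
```

A static type system would reject the second call before the program runs; a dynamic one lets the wrong meaning of * slip through until the bad value is noticed.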

+",212598,,,user40980,42398.91042,42399.43611,Why are semantics and type systems are so important?,,2,7,1,,,CC BY-SA 3.0,, +308279,1,308280,,1/25/2016 10:33,,51,11164,"

I am trying to explain segmentation faults to someone, and I was thinking about the level 256 kill-screen in Pacman, and how it's triggered by integer overflow, and how similar the behavior is to the ""unknown state"" oft-described in a segmentation fault.

+ +

I want to say this is a good example of what I call an ""unhandled segfault"", but I would rather get a second opinion before I potentially spread misinformation.

+ +

I tried looking it up, but all I'm getting are documents on the bug itself, as well as that collab between Hipster Whale and Namco.

+ +

So, would you consider the behavior in level 256 of Pacman to be an example of unhandled segmentation violation?
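For what it's worth, the arithmetic behind the kill screen is plain byte wraparound; a simplified Python sketch (the real fruit-drawing corruption on the arcade hardware is more involved than this):

```python
# The level counter lives in a single byte, so it can never exceed 255.
BYTE_MASK = 0xFF


def next_level(counter):
    # Incrementing the byte wraps around instead of overflowing into memory.
    return (counter + 1) & BYTE_MASK


level = next_level(254)     # zero-based counter reaches 255 (on-screen level 256)
wrapped = next_level(level) # one more increment wraps back to 0
print(level, wrapped)       # 255 0
```

The wraparound itself is well-defined behaviour; the garbage on screen comes from routines that were never written to handle the wrapped value.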

+",198539,,,,,42473.83819,"Can the ""level 256 bug"" in the game of Pacman be considered an unhandled segfault?",,5,5,6,,,CC BY-SA 3.0,, +308286,1,,,1/25/2016 12:40,,1,159,"

Vernon and Millett describe several patterns of getting data for ui (reporting in general) needs.

+ +

Though some pros and cons of each approach are discussed, I could not get a robust understanding of which to use when.

+ +

Consider ad hoc queries. Vernon calls them Use Case Optimal Queries, but does not say too much. Millett just warns about the performance vs. DRY-violation tradeoff. Some blogs discuss it as a step towards CQRS.

+ +

All this seems to be more a matter of personal preference than a well-grounded choice. And as soon as subjectivity appears while discussing architectural decisions, conflicts occur.

+ +


+Update I'd add more details, I suppose...
+

+ +

Preface

+ +

First, we do not really do DDD. Absolutely. In no way.
+We use (sad but true) Transaction Script with anemic models.
+Yes, we do have a complex domain where DDD would not be overkill.
+Now I hear you say: ""DDD is about Strategic Patterns!"". I know, they are also not with us. =)
+Nonetheless, all patterns are useful if applied reasonably.
+Nobody prohibits making converging steps from both sides. I have already stressed in comments that the Tactical Patterns worry the team the most, and they can bring dividends relatively fast.

+ +

The case +In our business rules we extensively use DTOs which contain all the data needed for command processing. However, we use the same DTOs for assembling data for UI/reporting, which introduces very unpleasant coupling. Practice shows that these DTOs are often bent towards UI needs. But it is the UI that changes frequently over several iterations; users realize they'd like more data, structured and organized differently.
+Contrary to the UI, commands are much more stable, and when the structure of a command needs to change, the UI almost inevitably changes as well.
+Thus the 'breathing' UI 'shakes' the layers used by the business logic that processes commands. We often find ourselves aligning interfaces and fixing broken tests.
+With all that said, I found it reasonable to TRY to decouple queries from command processing in brand-new functionality.

+ +

IBlaBlaQuery is the exact implementation of a Use Case Optimal Repository Query. The client receives specific view data plus data for building commands. It is covered with unit and integration tests. The code works, and some teammates like the solution. +Yet there exist those who
+- a) insist that the DRY principle is violated,
+- b) complain about the new, unaccustomed design.

+ +

You probably ask here: ""Why didn't you discuss this design prior putting it into life?"" And that is the place for my question.

+ +

They keep defending DRY; you ask them to face the facts and agree on the fragility of the design; they declare they don't see it, or that it is not as big a problem as a radically new approach. Other developers prefer to stay neutral and do not express their opinions explicitly.

+ +
+

You cannot give us unequivocal, solid arguments? Then we will not accept, or even consider, a trial of your design.

+
+ +

After several such iterations you just recall the sayings ""Practice is the criterion of truth"" and ""It's better to ask for forgiveness than permission"", and voluntarily give it a go, instantly becoming a jackass. (I can admit I really am one, but simply do not notice it.)

+ +

That's why I try to avoid subjectivity and ask for formal objective criteria.

+",71962,,-1,,42837.31319,42394.80069,Are there more or less straightforward guidelines for adopting one or another approach to reporting in DDD?,,1,12,,42395.82014,,CC BY-SA 3.0,, +308292,1,,,1/25/2016 13:58,,1,233,"

Not sure if this is the right platform for this, but here goes :) Throughout my career I have been generating reports for users, and most of the times it always started from a Word template (.dot/.dotm) handed to us by a user, containing all the required/wanted layout/logos/texts, which we then edited to insert correct bookmarks in the right places, and filled up using a combination of XML and VBA :)

+ +

Once this work was done, the user could then edit the .dot-files herself (e.g. replace the logo, edit the fixed address).

+ +

Now I am yet again facing this same problem, and I am wondering what the current state or alternatives are. I generally do not like to work with Word/VBA (yet another dependency -- I create websites using ruby on rails).

+ +

Generally/currently it is pretty easy to convert html to pdf/word or create pdf/word documents directly, but I am wondering if there is a way for a layman to describe ""templates""/layouts and to use that template to generate reports (programmatically) (preferably without the use of Word Automation).

+ +

So, the requirement is to generate printable reports, where a user can specify/supply her own template, without having to program, and can manipulate text/content/images.

+ +

Things I can think of (but not sure if they exist)

+ +
    +
  • if such a thing exists: define PDF Templates and then programmatically fill them with data?
  • +
  • a way to edit HTML pages/snippets, so a user could interactively edit a report in HTML (e.g. add logos/footer/header) and then save the HTML as a ""template"" for the report
  • +
+ +

Any ideas/suggestions?

+",8175,,,,,42394.58194,User-editable templates for reporting: alternatives to word?,,0,3,,42395.55903,,CC BY-SA 3.0,, +308295,1,,,1/25/2016 14:54,,1,366,"

I need to forecast budgeted hours for 2 departments to help them schedule staff, as currently the best they can do is wing it. When we receive a job proposal, I'll have the person submitting the form enter that job's budgeted hours for each department as a float field.

+ +

Jobs can have hours budgeted for Dept. A, Dept. B, or both. Jobs have start and end dates. It's been decided by my supervisors that simply dividing the job hours evenly across all days will be a useful gauge. What I want to do is sum [all jobs] daily totals for Dept. A and Dept. B. Here's where it gets complex...

+ +
    +
  • Dept. A works 7 days a week and Dept. B works regular business days.
  • +
  • Both departments need the flexibility to mark a day as closed and take the hours allocated to that day and push them to the remaining days.
  • +
  • But I also want to be able to enter holidays, or events where we know far in advance we won't be open, so that instead of pushing the hours forward it distributes them both backward and forward. For example, Dept. A will never work on the day of a Superbowl.
  • +
+ +

I am looking for recommendations about how to approach this problem, specifically the closed days. I was thinking I might have a table in the database that simply tracks closed days?

+ +

I'm not looking for help writing the code but rather help understanding how to design this process more generally.
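For concreteness only, the even-spread idea with a closed-days set (as the closed-days table would supply) can be sketched like this; the dates are invented:

```python
from datetime import date, timedelta


def daily_hours(total_hours, start, end, closed_days=frozenset()):
    # Spread a job's budgeted hours evenly over the open days in [start, end].
    days = [start + timedelta(n) for n in range((end - start).days + 1)]
    open_days = [d for d in days if d not in closed_days]
    share = total_hours / len(open_days)
    return {d: share for d in open_days}


closed = {date(2016, 2, 7)}  # e.g. Dept. A is closed on Superbowl Sunday
plan = daily_hours(40.0, date(2016, 2, 5), date(2016, 2, 8), closed)
# 40 hours spread over the 3 open days; the closed day gets nothing.
```

Summing the per-day dicts of all jobs, separately for each department, gives the daily totals; removing a day from the open set automatically redistributes its share both backward and forward.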

+",212636,,80833,,42424.67708,42574.95208,How to approach hours forecasting,,3,1,1,,,CC BY-SA 3.0,, +308307,1,,,1/25/2016 16:49,,8,2416,"

I was going through several blog posts at Stack Overflow and Programmers, and I am still a bit confused. You can install NLog (or some other logging library) and start logging quite quickly, and then you can install Application Insights and an adapter to actually gather the data into Azure Application Insights.

+ +

Is it really necessary to use two approaches (NLog combined with AppInsights)? What if we simply use AppInsights' TelemetryClient to log and trace? If I want to go fancy, I can wrap it with an interface, so I can easily change it later on. I can even wrap it in Common.Logging to be even fancier. (I would be really fancy there.)

+ +

Are there any benefits of using both NLog and Application Insights? Do you think logging into a ""local"" database and into Azure is superfluous? Are there any drawbacks of using only Application Insights for both logging and tracing (latency, too expensive to call either time wise or money wise)?

+ +

AppInsight's example - Imagine you have a DI container somewhere:

+ +

ILog.cs

+ +
public interface ILog
+{
+    void Exception(Exception ex);
+}
+
+ +

TelemetryClientLog.cs

+ +
public class TelemetryClientLog : ILog
+{
+    private readonly TelemetryClient telemetryClient = null;
+
+    public TelemetryClientLog(TelemetryClient telemetryClient)
+    {
+        this.telemetryClient = telemetryClient;
+    }
+
+    public void Exception(Exception ex)
+    {
+        this.telemetryClient.TrackException(ex);
+    }
+}
+
+ +

StarWars.cs

+ +
// somewhere in a code far far away with an injected logger
+try
+{
+    this.CallJedi();
+}
+catch(LightSaberException lse)
+{
+   // log implements ILog interface, look above
+   this.log.Exception(lse);
+}
+
+",121802,,121802,,43096.42153,43096.42153,Log and trace with Azure Application Insights only (instead of combining NLog and AppInsights) in Asp MVC,,1,5,1,43019.55764,,CC BY-SA 3.0,, +308321,1,,,1/25/2016 18:18,,4,249,"

Following up on my ambiguous question, here's a question that is probably more focused.

+ +

Consider the following code snippet from a Haskell program:

+ +
data NightWatchCommand = InvalidCommand | DownloadCommand { url :: String } | PauseCommand { gid :: String } | UnpauseCommand { gid :: String } | StatusCommand { gid :: String } deriving (Show, Eq)
+data AuthNightwatchCommand = AuthNightwatchCommand {
+  command :: NightWatchCommand,
+  user :: User
+}
+
+ +

Now, the business constraint I want to enforce via the type system is this: it should not be possible to instantiate an unauthenticated NightWatchCommand. And the only way to instantiate an AuthNightwatchCommand should be via a special function, say:

+ +
fromIncomingMsg :: String -> AuthNightwatchCommand
+
+ +

Just to provide greater context, the string argument to this function could possibly be:

+ +
status <some-id> <auth-token>
+
+ +

Now, to complicate matters further, fromIncomingMsg needs to validate the <auth-token> from the DB. Which means, it will do IO. A more appropriate function signature would be:

+ +
fromIncomingMsg :: String -> IO (AuthNightwatchCommand)
+
+ +

Apart from shoving this into a module and hiding the data constructors, is there any other way to do this?

+",208258,,-1,,42837.31319,42428.97639,How to use a strong type system to model business constraints?,,3,3,,,,CC BY-SA 3.0,, +308322,1,308368,,1/25/2016 18:18,,1,690,"

This is a discussion about following the SRP (Single Responsibility Principle) very strictly vs. being more flexible when you write simple code, like setting or getting property values, even out of a direct context.

+ +

My first doubt is about where the SRP should apply. I have always read that the SRP should be applied to classes. But what about methods, properties, or traits (as in PHP)?

+ +

If the answer is no, maybe we can discard all the text below.

+ +
+ +

To illustrate, let's define an example case. To make it easier to understand, we'll write a singleton structure for a class called Example. Disregard IoC for this case.

+ +

It should have two methods, basically:

+ +
    +
  • getInstance(): returns a unique instance of the class; and
  • +
  • rebuildInstance(): destroys the current instance and recreates it;
  • +
+ +

Basically, for it to work, getInstance() should check whether the internal instance was set, create it if not, then return it. And rebuildInstance() should destroy the current instance if it was defined and create a new one, returning nothing.

+ +

Let's make it more visual (in some pseudo-language):

+ + + +
class Example 
+{
+    static private Example instance;
+
+    static public Example getInstance() 
+    {
+        if (!this.instance) 
+        {
+            this.instance = new Example;
+        }
+
+        return this.instance;
+    }
+
+    static public void rebuildInstance() 
+    {
+        if (this.instance)
+        {
+            destroy(this.instance);
+        }
+
+        this.instance = new Example;
+    }
+}
+
+ +

So note that:

+ +
    +
  • getInstance() has two responsibilities: 1. check and instantiate; 2. return.
  • +
  • rebuildInstance() has two responsibilities: 1. check and destroy; 2. instantiate;
  • +
+ +

You will see that both must instantiate, so let's consider that a duplicated effect. To avoid it, I should decouple my methods so that each one has a single responsibility, which means creating two other methods:

+ +
    +
  • buildInstanceWhenNeed();
  • +
  • destroyInstanceWhenNeed();
  • +
+ +

It'll become clearer what each method does, both by name and by structure. So our code will look like:

+ +
class Example 
+{
+    static private Example instance;
+
+    static private void buildInstanceWhenNeed() 
+    {
+        if (!this.instance) 
+        {
+            this.instance = new Example;
+        }
+    }
+
+    static public Example getInstance() 
+    {
+        self.buildInstanceWhenNeed();
+
+        return this.instance;
+    }
+
+    static private void destroyInstanceWhenNeed()
+    {
+        if (this.instance)
+        {
+            destroy(this.instance);
+        }
+    }
+
+    static public void rebuildInstance() 
+    {
+        self.destroyInstanceWhenNeed();
+        self.buildInstanceWhenNeed();
+    }
+}
+
+ +

Okay.

+ +

It seems that now I'm respecting the SRP completely on methods, very strictly. Each method has a single responsibility without stepping outside the class's responsibility. Except for the fact that it has a singleton pattern inside it, but I said to disregard IoC usage at the start of the topic.

+ +

So now I have additional questions:

+ +
    +
  1. Should I apply the SRP to methods, decoupling them, to make sure each method has a single responsibility?
  2. +
  3. If yes: should I do it even when the method doesn't duplicate any effect? For instance: if rebuildInstance() didn't exist here, I wouldn't need to decouple getInstance(), because it would be the only place that actually creates a new instance of the class. In that case, do I still need to create buildInstanceWhenNeed() or not?
  4. +
  5. If yes, but with lower priority: what about performance? I mean, when it really affects the runtime, considerably decreasing the application's speed because of ultra-decoupled methods (which perhaps only affects interpreted languages), even though each one is highly specific.
  6. +
+",140013,,1204,,42394.94097,43026.89931,When decoupling methods is not a good idea?,,2,6,,,,CC BY-SA 3.0,, +308326,1,308351,,1/25/2016 18:38,,0,171,"

I am forking a BSD 2-clause licensed project, adding some AGPL-licensed files, +and re-distributing as AGPL. The existing project comes with a LICENSE file, +and the files themselves have various copyright holders.

+ +

How do I modify the LICENSE file to make it clear that this project is distributed under the AGPL?

+ +

Thanks!

+",111829,,,,,42397.32083,Forking BSD project and distributing as AGPL,,1,1,,,,CC BY-SA 3.0,, +308335,1,,,1/25/2016 20:56,,4,1192,"

I am setting up a PostgreSQL database with a very small number of tables for a simple website. I am looking at ways to make it as solid and auditable as possible. Is there any particular reason why I should or should not create a PostgreSQL role for each user account on the website? The idea is that I could then use row-level permissions on tables containing personal or private information, and queries would look something like this:

+ +
-- Service logs in with role mywebsite and runs:
+BEGIN;
+SET LOCAL ROLE charlie; -- A user on the website
+INSERT ...              -- DO STUFF
+COMMIT;
+
+ +

Logging code would then know both the system account (mywebsite in the above example) used to log in to the database and also the user on whose behalf changes are being made (charlie). The two are available in PostgreSQL as session_user (uid) and current_user (euid).

+ +

If you know of any precedents or any reasons why this would be a great or terrible idea I'd love to know.

+",210442,,,user40980,42395.58264,42395.58264,Postgres roles for website users,,1,3,2,,,CC BY-SA 3.0,, +308338,1,308342,,1/25/2016 20:59,,2,3341,"

From what I understand the deployment process consists of these steps.

+ +

Compiling, linking/packaging, deploying.

+ +

What does the packaging refer to? Is it just a reference to packaging the object files together?

+",201022,,,,,42394.89306,"What does ""packaging"" refer to in the software Deployment process?",,2,2,,,,CC BY-SA 3.0,, +308349,1,,,1/26/2016 0:13,,8,9962,"

Suppose there is a primary resource ""/accounts"" which has Profile (name, national id, DOB), addresses, and contacts (email, phones). I am considering these as sub-resources because they cannot exist without an account. To update them I am considering two options:

+ +

Option 1

+ +
    +
  • PUT /accounts/{accountid}/address
  • +
  • PUT /accounts/{accountid}/contacts
  • +
  • PUT /accounts/{accountid}/profile
  • +
+ +

Option 2

+ +
    +
  • PUT /accounts/{accountid}
    +(Depending on the presence/absence of address/phone/profile decide which updates to perform)
  • +
+ +

I am tempted to use option 1 because, from an implementation perspective, each of the updates has its own logic and process flow. A separate URI may keep the implementation cleaner and more manageable.

+ +
    +
  1. Would it be incorrect to consider profile, address, and contacts as sub-resources? If so, what would be an appropriate way of representing them?
  2. +
  3. If they can be considered sub-resources, which of the above is an appropriate option, or is there a completely different option to be considered?
  4. +
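To make option 1 concrete: it maps naturally onto one handler per sub-resource. A framework-free Python sketch of the dispatch (the handler bodies are placeholders):

```python
import re


def update_address(account_id, body):   # separate logic per sub-resource
    return ('address', account_id)


def update_contacts(account_id, body):
    return ('contacts', account_id)


def update_profile(account_id, body):
    return ('profile', account_id)


# Route table for PUT requests; each URI keeps its own process flow.
ROUTES = [
    (r'^/accounts/(?P<id>\w+)/address$', update_address),
    (r'^/accounts/(?P<id>\w+)/contacts$', update_contacts),
    (r'^/accounts/(?P<id>\w+)/profile$', update_profile),
]


def put(path, body=None):
    for pattern, handler in ROUTES:
        match = re.match(pattern, path)
        if match:
            return handler(match.group('id'), body)
    return None  # would be a 404


result = put('/accounts/42/address')
```

Option 2 would collapse the three handlers into one that inspects which parts of the payload are present, which is exactly the branching option 1 avoids.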
+",212698,,,,,42396.41736,REST API - Handling subresources,,2,2,2,,,CC BY-SA 3.0,, +308353,1,308366,,1/26/2016 1:17,,-1,1121,"

I am trying to animate a texture in OpenGL. I feel like it should be easy, as I know how to animate in SDL and other libraries, but I am having trouble. I have an x,y,w,h that holds the texture coordinates for the frame I want. My vertex data has an x,y for the vertex and an x,y for the texture coordinates. I know how to change these, but the problem is the width and height. How do I change these in a spritesheet? All I see is 2D vectors with no width or height. Am I overlooking something? Should I be doing this in the VBO, or in the vertex shader? I looked through several other related questions, but they were all asking what the best strategy is, or what is most optimized, or were unrelated to this animation question.

+ +

Edit: I realized I was looking at this the wrong way. I can show a portion of the texture, then decide which portion to look at, in the vertex shader very easily.
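Since the per-frame rectangle is just arithmetic on the sheet layout, it can be computed on the CPU before upload or in the shader; here is a Python sketch of the normalized UV maths, assuming a left-to-right, top-to-bottom grid of equally sized frames:

```python
def frame_uv(index, frame_w, frame_h, sheet_w, sheet_h):
    # Normalized (u, v, w, h) of frame `index` in a grid-layout spritesheet.
    cols = sheet_w // frame_w
    col, row = index % cols, index // cols
    return (col * frame_w / sheet_w,
            row * frame_h / sheet_h,
            frame_w / sheet_w,
            frame_h / sheet_h)


# 4x2 sheet of 64x64 frames on a 256x128 texture: frame 5 sits at column 1, row 1.
u, v, w, h = frame_uv(5, 64, 64, 256, 128)
print(u, v, w, h)  # 0.25 0.5 0.25 0.5
```

The quad's texture coordinates then span (u, v) to (u + w, v + h); advancing the animation only changes the index.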

+",192637,,192637,,42395.06944,42395.30764,2D Animating in OpenGL,,1,0,0,42397.83681,,CC BY-SA 3.0,, +308354,1,,,1/26/2016 2:16,,4,154,"

I have a web API that accepts Authorization headers to allow access. It responds with the requested data in addition to setting a session cookie. Subsequent requests can be made with no auth headers as the cookie is used in its stead.

+ +

Originally, no auth was handled on the node server. API requests were all made directly from the browser. This made auth simple, as the client-entered user/pass was sent directly in headers, and a session cookie was received in return.

+ +

Now, however, we would like to move towards doing initial payload rendering on the server. This will require both the server and the client to be ""authorized"" with the web API. What are some common approaches to doing this?

+ +

A couple of possibilities that occurred to me:

+ +
    +
  • Client auths with API as before, API sets cookie for future browser requests, client copies cookie to node server (which stores in session) for use in future server requests.
  • +
  • Client sends user/pass to node server, node server auths and stores cookie in session, all subsequent API calls run through node server.
  • +
  • Client sends user/pass to node server, client auths normally, node server auths normally.
  • +
+ +

First option seems ideal (if possible). This is how I would imagine one would use an ""access token"". Is it possible to use cookies like this? Are there cross-origin restrictions that pertain to cookies that I am overlooking? For instance, I know that this does not work the other way around: I cannot auth on the server, put the API cookie in the response to the browser, and have the client use that cookie for a different path (the web API instead of the node server).

+ +

Second option I really want to avoid, as it seems like a lot of unnecessary traffic on the node server, a lot of boilerplate routes, and our web API is already very simple to use directly from the browser. Performance-wise, this also adds one additional stop. Ideally, the node server should be serving page requests and that's it.

+ +

Third option just seems kind of silly. It results in the same user being authed in multiple places, which the web API may or may not like (I didn't design it, nor do I know its inner workings). We are also sending the username/password over the wire twice (which logically seems less secure than once).

+ +

Any other suggestions are welcome!

+",212718,,212718,,42395.12222,42395.12222,"Client Browser, Node Server, Web API auth structure",,0,0,2,,,CC BY-SA 3.0,, +308357,1,308365,,1/26/2016 4:04,,4,166,"

I currently want to improve an application which is licensed under GPLv3. I want to improve its design and interface and use icons from Google, which are licensed under the Creative Commons Attribution 4.0 International Public License. +On the GNU website it seems like it's compatible, as I understand it. But because of this line it should not be used for software. So I wanted to clarify if I can use them together.

+",79032,,,,,42395.30069,using CC-BY and GPL,,1,0,,,,CC BY-SA 3.0,, +308372,1,,,1/26/2016 8:19,,1,359,"

Consider software which runs on a dedicated system (basically a Linux box) and controls some machinery. The system has all the required hardware interfaces for the task. The software also has a GUI for controlling said machinery.

+ +

However, this system should later be able to be used from external devices, so other software, using the same or a similar GUI, will be running on various PCs, smartphones, etc. The central software will contain a TCP server through which the external devices will connect. The central device with its software is to be developed first, and the controllability from external devices will be a later add-on.

+ +

My proposal for the ""central"" software would be to split it into two separate programs. One would be a server without any GUI at all, which will handle all the controlling and hardware interfacing, and a separate program would fulfill solely the role of the GUI.

+ +

Of course, a good MVC separation is possible even within a single program, but the main advantage I see for splitting the program is that the GUI part will be much more easily portable to other systems. Actually, in case of a PC as an external device, the same software can be re-used 100%. For other systems, I guess it would be much easier to port a program 1 to 1, than to port only the GUI part of a more complex program.

+ +

The main disadvantage of this solution is that the initial system will be more complicated, as the GUI needs to communicate with the controlling part via TCP, instead of directly inside the same program.

+ +

Are there other major advantages and disadvantages of splitting the application in two separate programs communicating over TCP?

+ +

The scope of the project is up to 2 years for a small, 2-3 person team.
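To gauge the added complexity, the GUI-to-controller channel can stay very small; a minimal Python sketch of such a command connection using only the standard library (the START command and the one-line protocol are invented):

```python
import socket
import socketserver
import threading


class ControlHandler(socketserver.StreamRequestHandler):
    # Controller side: executes one line-delimited command, reports the result.
    def handle(self):
        command = self.rfile.readline().strip().decode()
        reply = 'OK' if command == 'START' else 'UNKNOWN'
        self.wfile.write((reply + '\n').encode())


server = socketserver.TCPServer(('127.0.0.1', 0), ControlHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# GUI side: any local or remote front-end speaks the same tiny protocol.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b'START\n')
    response = sock.makefile().readline().strip()

server.shutdown()
print(response)  # OK
```

The local GUI and a future smartphone client would use the identical channel, which is the portability argument for the split.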

+",47197,,47197,,42395.37014,42398.40069,Is a TCP client/server a good solution for a system which can be controlled by a GUI running on multiple platforms?,,3,5,,,,CC BY-SA 3.0,, +308377,1,308425,,1/26/2016 10:12,,5,142,"

In a product we are rebranding a feature. +For example, we have tweets and we are rebranding them to news. On the code side we have tables and structures like tweet, user_tweets, favourite_tweets, and so on.

+ +

Does simple rebranding justify renames and migrations in code and Data base (structure names, file names, fields, table names)?

+",45024,,45024,,42395.43472,42396.39167,When should feature rebranding be followed by model and code renaming?,,1,4,,42396.00833,,CC BY-SA 3.0,, +308378,1,,,1/26/2016 10:18,,3,547,"

For my angularJS application I'm using asp.net web api backend.

+ +

For B2C I normally have one scalable backend application, and each customer can have an account to use the services. There is one database and each customer has access to it, like Facebook or Foursquare.

+ +

But how would I do that in a B2B SaaS application? Should I host a separate backend application with its own database for each company? Or is it OK to host one backend application for all companies? If so, how would I separate the data belonging to each company?

+ +

For example:

+ +

Separate hosted apps for each company:

+ +
app --> companyA --> dbA
+
+app --> companyB --> dbB
+
+ +

One hosted app for all companies with one database for all:

+ +
       --> companyA -- 
+app --|               |--> db
+       --> companyB -- 
+
+ +

One hosted app for all companies with a different database for each company:

+ +
       --> companyA --> dbA
+app --|
+       --> companyB --> dbB
+
+ +

If I have just one app, how would I make sure that each company only has access to its own data, or to the database belonging to that company?

+ +

Please can you point me in the right direction? What are the best practices to do that? For example, when I am using Azure as cloud provider and Azure table storage as database, would each company get their own storage account?

+",206706,,,,,42395.42917,B2B SaaS application architecture,,0,0,,,,CC BY-SA 3.0,, +308382,1,308384,,1/26/2016 11:06,,0,147,"

If I have a method to look for a valid email address in an array, it could either:

+ +
    +
  1. Return the email address
  2. +
  3. Return false if an email wasn't found.
  4. +
  5. Throw an exception if an email was found, but it wasn't valid.
  6. +
+ +

Should the method comment be written as though it would always work, or should it indicate that it might fail? For example:

+ +
/**
+ * Attempts to return the email address from $data.
+ * @param Array $data
+ * @return Boolean
+ * @throws My_Custom_Exception_InvalidEmailAddress
+ */
+
+ +

+vs

+ +
/**
+ * Returns the email address in the $data.
+ * @param Array $data
+ * @return Boolean
+ * @throws My_Custom_Exception_InvalidEmailAddress 
+ *      Throw if an email key is found in $data, but the value is empty
+ *      or if the value doesn't pass the Zend_Validate_Email check.
+ */
+
+ +

Generally, I go by the rule that if I've written some ""throw new..."" in the method, I'll write it more like the first example, because I know there's a good chance the method won't do what the comment says. But you can tell that from the @throws annotation, so it seems a bit unnecessary.

+ +

Is either comment style ""preferred"", or is it more just a case of being consistent throughout the project?

+",139852,,139852,,42395.47083,42395.47083,Should method comments be written as if everything will work?,,3,3,,,,CC BY-SA 3.0,, +308387,1,,,1/26/2016 11:28,,3,923,"

I am reading Martin Fowler's 'Refactoring: Improving the Design of Existing Code'. I have not understood a section of the second chapter where Kent Beck describes the pros of indirection.

+ +

One of the pros listed is Encoding Conditional Logic and it is described as:

+ +
+

Objects have a fabulous mechanism, polymorphic messages, to flexibly + but clearly express conditional logic. By changing explicit + conditionals to messages, you can often reduce duplication, add + clarity, and increase flexibility all at the same time.

+
+ +

What are polymorphic messages? What does he mean by 'change explicit conditionals to messages'?
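To make the question concrete, here is my own rough guess (a Python sketch, not taken from the book) at what replacing an explicit conditional with polymorphic messages might look like:

```python
class Bird:
    def speed(self):
        raise NotImplementedError


class European(Bird):
    def speed(self):
        return 35


class African(Bird):
    def speed(self):
        return 40


# Instead of: if kind == "european": ... elif kind == "african": ...
# the caller just sends the message speed() and the object decides.
def report(bird):
    return bird.speed()
```

Is this dispatch-by-subclass what Beck means by sending a message instead of writing the conditional?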

+",212770,,,,,43748.48958,What are polymorphic messages?,,2,4,0,42416.31111,,CC BY-SA 3.0,, +308395,1,308623,,1/26/2016 13:39,,-4,334,"

Suppose that you describe programs which contain a lot of AssignmentStatement(target, /*value*/Expression) nodes. There are other statements, like if-statements and for-statements, and all of them may have an optional label. So our assignment must be coded like AssignmentStatement(name, target, /*value*/Expression). Now, whenever an instance is created, it will allocate 8 more bytes to store the null reference for the name. You see, the problem is the waste of memory: the name is null for 95% of statements, but it still consumes 8 bytes. Advanced programmers may advise Option[Name], which consumes even more. But let's focus on plain String names for the sake of the argument.

+ +

Because 95% of instances will carry null in their first field, we can separate them into a separate class. In order to save memory, I feel it is better to split my classes into those which have a name and those which don't:

+ +
class NamedAssignmentStatement(name, target, /*value*/Expression)
+class AnonymousAssignmentStatement(target, /*value*/Expression)
+
+ +

This eliminates duplication (many instances carrying the same attributes) and saves memory. The only problem is the difficulty of handling the duplicated classes.

+ +

Actually, they proliferate exponentially. The value expression may also have one optional mode argument which, like the name, is almost never used. To save more memory, it makes sense to replace AssignmentStatement(target, mode, /*value*/Expression) with two classes: one which has a mode and another which does not.

+ +

Now I have 4 classes instead of a single AssignmentStatement(name, target, mode, /*value*/Expression). But that is not the end of the story. The values can be either simple expressions, evaluated to a value, or waveforms, evaluated to a list of (value, duration) pairs. To distinguish between the value types, I can wrap the value expression into SimpleExpression(expression) or Waveform(expression), to be used like AssignmentStatement(target, Waveform(Expression)). The problem is that instantiating an extra wrapper for every assignment will consume circa 20 extra bytes, as far as I know, on the JVM. Instead of using the wrapper class, I can save the memory by signaling the value type with the assignment type itself, which means that I need to split the assignment class once more:

+ +
SimpleAssignmentStatement(target, /*simple_*/expression)
+WaveformAssignmentStatement(target, /*waveform_*/expression)
+
+ +

You see, I have got 8 classes instead of the original 1. It would be hell to manage such a system. Are there languages that make handling of such multiple case classes easy? To ask more generally, is the trade-off that I address here (static typing with a large number of classes versus dynamic memory allocation and its waste) a known issue in programming?

+ +
+ +

I see that programmers are pretty careless regarding memory usage. They will answer that 8 bytes for a null is a meaningless price, worth ignoring. But,

+ +
+

watch the pennies, and the pounds take care of themselves

+
+ +

The highest mountains are made of the tiniest, invisible sub-micron particles, which, in the case of a computational process, take the form of (hundreds of) millions of objects in memory that accumulate into huge structures. The problem is not only the cost of the memory (which you say is dirt-cheap and should not be considered) but the PC performance bottleneck, which is the question of how well your data fits the cache size. When your data fits the cache, it operates thousands of times faster, and I cannot feel at ease watching how a simple assignment, which needs only a couple of bytes to encode the statement, target and value reference, escalates enormously to tens of bytes.

+ +

A disclaimer: I do not ask about the importance of micro-optimization. You can add your considerations regarding it to your answer, but I ask how to reclaim that memory, not whether it is worth it. Please understand the difference: I asked how, not whether it is worth it.

+ +
+ +

Here is some more food for thought that may help to answer. Suppose that expressions can be binary operations, like Add(a,b), Multiply(a,b) or Concatenate(a,b). You could encode them using one general class BinOp(opCode,arg1,arg2). On the other hand, one could make BinOp an interface, make the opcode a method rather than a class field, and use 3 classes to implement this interface. This again would save memory, because all instances of the Add class would share the opcode at the class level instead of each carrying a ""+"" String in their opcode field, causing huge duplication. Such subclassing would probably also improve performance, because instead of two-level dispatch (first determining that we are dealing with a BinOp and then taking action depending on the opcode) we can take action directly, since every subclassed binary operation now knows what action is expected of it. The action can be an evaluate operation, which depends on the binary operation subclass. Probably there are known techniques to deal with such multiple subclasses, which can contain the exponential explosion of classes that I address in my question.
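To illustrate the hint above, here is a minimal sketch (in Python rather than Java, and purely my own illustration; the Add/Multiply bodies are assumptions) of moving the opcode from an instance field to the class level:

```python
class BinOp:
    op = None  # shared at class level; no per-instance storage

    def __init__(self, a, b):
        self.a = a
        self.b = b


class Add(BinOp):
    op = "+"

    def evaluate(self):
        return self.a + self.b


class Multiply(BinOp):
    op = "*"

    def evaluate(self):
        return self.a * self.b
```

Each instance stores only a and b; the opcode lives once per class, and evaluate() dispatches directly on the subclass without a second-level switch on an opcode string.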

+ +

Hint 2: Scala makes you less productive claims, in its case-classes section, that splitting one class into many case subclasses actually makes you more productive. Probably, if you can understand that idea, you can tell whether it already answers the question. He says that a class explosion won't occur, but I do not quite understand why.

+",202598,,202598,,42404.56181,42404.56181,Splitting one class into subclasses to save memory,,4,15,,,,CC BY-SA 3.0,, +308397,1,,,1/26/2016 13:53,,1,272,"

I have been wondering about this question recently. Why do PE images have to be executable in MS-DOS? I've read on the topic, and most of the answers I've seen simply explain it with compatibility. But what does this compatibility actually mean and target?

+ +

If it was intended to allow a single executable to run both on DOS and Windows, what was the problem with including two executables, one for each OS, instead?

+ +

Some of the answers I've read that explain it with compatibility state that if "".exe"" files were purely formatted in the COFF format with the PE signature, this would lead to a crash in the system when executed on DOS. But I was wondering if that's actually true. Wasn't MS-DOS checking for the ""MZ"" starting signature, and if not, would it have been so expensive to include such a check, and perhaps another to print a message similar to the DOS stub if the image started with ""PE"" (and so was a PE executable instead)?

+",175567,,,,,42395.70694,What's the historical reason of including DOS-stub in PE images?,,1,2,,,,CC BY-SA 3.0,, +308399,1,308441,,1/26/2016 14:04,,1,1547,"

So I'm building an app and I'm trying to implement the structure of the app so that it's robust and scalable in the future.

+ +

My app is mostly divided into JavaScript Modules (revealing pattern):

+ +
// filter.js file
+//My javascript module structure
+var filter = (function(){
+
+var _privateMethod = function(){
+}
+
+var publicMethod = function(){
+    _privateMethod();
+}
+
+return{
+   publicMethod: publicMethod
+}
+});
+
+ +

Now the main issue is that I want to use AngularJS within my app too and AngularJs has its own angular modules like so:

+ +
// app.js file
+var myApp =  angular.module('myApp',[]);
+myApp.controller('myController', function($scope){
+
+});
+
+ +

The way I have planned my app developed is:

+ +
    +
  1. All HTTP calls to the server will be done within my JavaScript module or to be more clear within my ""filter.js"" file (see above)
  2. +
  3. Returned data from the HTTP calls to server will then be sent back to angularjs controller or in my case the ""app.js"" file (see above) and that controller/app.js file will be responsible for updating my view/html file.
  4. +
+ +

The reason for doing this is so that all my functions that connect to the server and handle data are within a private scope (the filter.js module), and no one has access to those functions, as they are kept out of the global scope.

+ +

But to pass data from my filter.js file to my Angular app.js controller I had to use callbacks.

+ +

Below is the complete code of my above described scenario:

+ +

filter.js

+ +
var filter = (function(){
+
+// public method
+var testMethod = function(callback){
+  loadDoc(function(dta){
+    alert(dta);
+    callback(dta);
+ })
+};
+
+// private Method
+function loadDoc(callback) {
+  // Create the XHR object to do GET to /data resource  
+  var xhr = new XMLHttpRequest();
+  var dta;
+  xhr.open(""GET"",""url"",true);
+
+  // register the event handler
+  xhr.addEventListener('load',function(){
+  if(xhr.status === 200){
+       //alert(""We got data: "" + xhr.response);
+        dta = xhr.response;
+        callback(dta);
+   }
+  },false) 
+
+  // perform the work
+  xhr.send();
+  }
+
+ return {
+     testMethod: testMethod   
+  };
+
+})();
+
+ +

app.js:

+ +
var myApp = angular.module('myApp',[]);
+
+myApp.controller('myController', function($scope){
+
+
+$scope.myFunc = function(){
+
+     filter.testMethod(function(dta){
+        alert(dta);
+        $scope.test = dta;
+        console.log($scope.test);
+      });
+   };
+});
+
+ +

You can see from my above code structure that I am using callbacks to send data back to my controllers.

+ +

My Question:

+ +
    +
  1. My question is whether the above described scenario of how I see my code implementation and structure is the right approach.

  2. +
  3. Does the implementation adhere to good, clean and correct JavaScript code?

  4. +
  5. Is there something else I can do to improve my app's code structure?

  6. +
+",186048,,,,,42396.02917,JavaScript & AngularJs Modules Implementation technique and structure,,1,0,1,,,CC BY-SA 3.0,, +308405,1,,,1/26/2016 14:46,,3,6643,"

The latest version of Python is 3.5.1. However, the latest 3.4.x release is 3.4.4. Why is 3.4.x still being developed? Are there breaking changes in 3.5 with respect to 3.4?

+ +

https://www.python.org/downloads/

+",212798,,31260,,42395.67708,42395.67708,Why is Python 3.4 still developed after the release of 3.5?,,2,18,1,42395.64444,,CC BY-SA 3.0,, +308406,1,,,1/26/2016 14:51,,2,1165,"

Let's say I have an API method which can be used to calculate the sum of all orders made by a specific customer:

+ +
Amount CalculateOrderSum(int customerId)
+{
+    // Perform authentication to make sure caller has access to customerId
+
+    // Retrieve customer with id customerId
+
+    // Retrieve all orders related to the customer
+
+    // Retrieve details for different orders (not always, depending on state)
+}
+
+ +

At any point in this function, some system administrator may purge old items from the system. This means that one of the below can happen while the method above is running:

+ +
    +
  • The authentication will fail, because the customer no longer exists
  • +
  • The customer can't be loaded, since it was deleted
  • +
  • The customer can be loaded, but a millisecond later the orders are deleted and cannot be retrieved.
  • +
  • Retrieving order details works for some orders, but fails for others since they have been deleted mid-processing.
  • +
+ +

I want the application to return a friendly error message when any of this happens, rather than returning a NullReferenceException or similar.

+ +

As I see it, there's some different approaches to add error-handling for this logic:

+ +
    +
  1. I could introduce a lot of null-checks throughout the code, for example: if (customer == null) throw OrderRetrievalFailedException(""customer is a goner.""). Since all the data in the database can be purged at any time, this would lead to quite a lot of ifs spread throughout the code (which seems to get messy)
  2. +
  3. I could change the purging functionality to mark customers or orders as deleted (rather than actually removing the database rows). This way the function could still do its work because the data will still be there. The issue here is that we actually want to purge old data for different reasons (less attack surface and performance considerations for example).
  4. +
  5. I could change all methods to throw if an object can't be loaded (so GetOrders(customerId) could throw CustomerNotFoundException if the customer cannot be loaded), which would be caught in the CalculateOrderSum function and an error given to the user. So basically the code would have to be littered with if (something == null) throw new SomeException.
  6. +
  7. I could introduce some global locking mechanism, so that a customer can't be deleted while anyone is reading any of its data. The issue here is that our system is distributed, so we would need to implement a central locking mechanism. Also, I have bad experience with locking database rows in use cases like this in high-traffic databases.
  8. +
+ +

All of these approaches feel quite convoluted and tricky to get right to me, and they mean that the ""main success flow"" of the code will be littered with handling of exception scenarios. I'm leaning towards alternative 3, but I would like to hear if there's some other ""standard and robust"" way of handling this.

+ +

(I'm using C#, but I assume that the same issue would apply to users of for example Java or C++)
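To show what I mean by alternative 3, here is a rough sketch, written in Python for brevity even though my code is C#; the names CustomerNotFoundError and load_customer are mine:

```python
class CustomerNotFoundError(Exception):
    """Raised by a loader when the row was purged mid-request."""


def load_customer(db, customer_id):
    customer = db.get(customer_id)
    if customer is None:
        raise CustomerNotFoundError(customer_id)
    return customer


def calculate_order_sum(db, customer_id):
    try:
        customer = load_customer(db, customer_id)
        return sum(order["amount"] for order in customer["orders"])
    except CustomerNotFoundError:
        # A single catch at the top maps any mid-request purge
        # to one friendly message instead of a null reference.
        return "This customer no longer exists."
```

The main flow stays clean: every loader throws on missing data, and one handler at the API boundary turns that into a friendly error.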

+",212801,,,,,42399.28819,Handling null-references in C# logic,,5,1,,,,CC BY-SA 3.0,, +308412,1,377081,,1/26/2016 15:27,,1,272,"

We support IE9, IE10, IE11, Chrome and Firefox, and we support them on Win7, Win8, Win8.1, and Win10.

+ +

If we tested every possible combination of browser and OS that would be 20 times through our script and we're wondering if that is necessary.

+ +

I'm fairly confident that the results we get for Chrome or Firefox on one OS will hold for all OS's, but we're not sure if there might be difference in the way certain versions of IE behave on the different OS's.

+ +
    +
  1. Am I correct in assuming that Chrome and Firefox will be consistent across OS's?
  2. +
  3. Will each version of IE be consistent across OS's or do we need to test each possible OS/Browser combo?
  4. +
  5. Are any differences likely to be functionality differences (a piece of JavaScript fails) or display differences (CSS issues)?
  6. +
+",89877,,,,,43330.70417,Testing Browser/OS combinations,,1,3,0,,,CC BY-SA 3.0,, +308413,1,308573,,1/26/2016 15:33,,2,601,"

In UML deployment diagrams, the node element is used to represent a ""computational resource"" (in other words, something that can run software).

+ +

I know that nodes may have other nodes placed inside them (to imply nesting), and that they may have artifacts placed inside them (to imply the deployment relationship, i. e. the artifact being deployed to the node).

+ +

However, I've also found a couple of illustrations that had component elements placed inside a node, which I'm not sure how to interpret.

+ +

Here's what I'd like to know:

+ +
    +
  • Is it legal to place a component inside a node?
  • +
  • If so, what exactly does it imply - the component being deployed to the node (which I'm not sure is allowed), or the node consisting of the component?
  • +
  • If not, does the specification explicitly say so at any point?
  • +
+",181419,,,,,42396.86181,"In UML, can a component be placed inside a node?",,1,1,,,,CC BY-SA 3.0,, +308414,1,308422,,1/26/2016 15:33,,1,4953,"

I've been working on a small Java project by myself for a department that does not do a lot of software development but mostly database stuff.

+ +

I showed my boss the code I've been writing and he saw that in some classes there were some static variables. He then told me to put the static variables in XML instead to make it ""easier.""

+ +

He left and I did not get to ask him how it makes them manageable. He's going to be gone for a couple of weeks.

+ +

I know this might be a pretty vague question but what could possibly be the reasons why he would tell me to put static variables into XML? And how do I go about doing them?

+ +

First, I had something like this:

+ +
private final static String[] ACCESS_EXTENSIONS = {""accdb"", ""mdb""};
+
+ +

After he told me about putting static variables in an XML file, I wrote an XML code:

+ +
<input_filetypes>
+
+    <filetype id = ""access"">
+        <title>Access Database</title>
+        <extension>accdb</extension>
+        <extension>mdb</extension>
+    </filetype>
+
+    <filetype id = ....>
+        ....
+    </filetype>
+
+    ......
+
+</input_filetypes>
+
+ +

But then this resulted in me creating a static String variable that contains the location of my XML file:

+ +
private final static String xmlFile = ""location_of_my_xml_file"";
+
+ +

I also learned about how to use XPath to access data in my XML, but I ended up writing another static code which is:

+ +
private final static String GET_ACCESS_EXTNS_EXP =
+    ""input_filetypes/filetype[@id='access']/extension"";
+
+ +

Now I think I made things worse. I know I should wait for him to get back to clarify, but that is a long time from now, and I think it's best if I start reading up while waiting. I just need ideas on where to start.
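For example, the lookup I ended up with behaves roughly like this sketch, written with Python's ElementTree for brevity even though my actual code is Java; the limited XPath subset it supports is enough for the attribute predicate:

```python
import xml.etree.ElementTree as ET

XML = """\
<input_filetypes>
    <filetype id="access">
        <title>Access Database</title>
        <extension>accdb</extension>
        <extension>mdb</extension>
    </filetype>
</input_filetypes>"""


def extensions_for(xml_text, filetype_id):
    # fromstring() returns the <input_filetypes> root element,
    # so the search path is relative to it.
    root = ET.fromstring(xml_text)
    query = "filetype[@id='%s']/extension" % filetype_id
    return [e.text for e in root.findall(query)]
```

So the extensions now live in data instead of a compiled-in array, at the cost of two new pieces of static configuration (the file location and the query).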

+",209882,,,user40980,42395.66736,42395.66736,Moving Java static variables into XML,,2,3,,,,CC BY-SA 3.0,, +308416,1,308420,,1/26/2016 15:35,,12,526,"

I am writing an application and I got to this point:

+ + + +
private void SomeMethod()
+{
+    if (Settings.GiveApples)
+    {
+        GiveApples();
+    }
+
+    if (Settings.GiveBananas)
+    {
+        GiveBananas();
+    }
+}
+
+private void GiveApples()
+{
+    ...
+}
+
+private void GiveBananas()
+{
+    ...
+}
+
+ +

This looks pretty straightforward. There are some conditions, and if they are true the methods are called. However, I was thinking: would it be better to do it like this:

+ +
private void SomeMethod()
+{
+    GiveApples();
+    GiveBananas();
+}
+
+private void GiveApples()
+{
+    if (!Settings.GiveApples)
+    {
+        return;
+    }
+
+    ...
+}
+
+private void GiveBananas()
+{
+    if (!Settings.GiveBananas)
+    {
+        return;
+    }
+
+    ...
+}
+
+ +

In the second case, each of the methods guards itself, so even if GiveApples or GiveBananas is called from outside SomeMethod, it will only be executed if it has the correct flag in Settings.

+ +

Is this something that I should actually consider as a problem?

+ +

In my current context, it is very unlikely that those two methods will be called from outside this method, but no one can ever guarantee that.

+",91471,,,user40980,42395.86806,42396.24722,Is it better to guard the method call or the method itself?,,3,2,1,,,CC BY-SA 3.0,, +308427,1,308432,,1/26/2016 16:13,,-1,1296,"

I am not very experienced with client-server applications, and I could not find an exact answer to my question anywhere on Google.

+ +

I am developing part of an application on the server side, and my colleague, who develops the frontend, suggests that I should manipulate his payload rather than act as a service.

+ +

I will explain it in example:

+ +

The frontend receives some data in a tree structure (it is a JSON message):

+ +
Node1
+  NodeB
+  NodeC
+  NodeD
+Node2
+  NodeE
+  NodeF
+  NodeG
+
+ +

User wants to move NodeG from Node2 to Node1 (drag and drop).

+ +

So should the front-end application send me the whole payload and some instruction like ""move NodeG to Node1"", then wait until I process the data, update the database, and send a response with the new structure to render to the user?

+ +

or

+ +

should the front-end asynchronously send just instructions to the backend, like ""move NodeG to Node1"", and take care of the displaying by itself?

+ +

as ""Instruction"" i mean some HTTP POST request.

+",212809,,,,,42395.70833,"What is a correct way to exchange information between ""frontend"" and ""backend""?",,3,1,,42398.87917,,CC BY-SA 3.0,, +308439,1,308603,,1/26/2016 19:09,,6,913,"

Disclaimer: I come from a PHP background.

+ +

In PHP, I could have thousands of files which are never loaded if not needed, thanks to the autoloader feature (if some code is needed, it will be loaded).

+ +

How do .NET's assemblies work? Do they load all the program code into the memory on application startup?

+ +

For example, I can have some areas with multiple Controllers:

+ +
    +
  • Area 1 + +
      +
    • Controller 1
    • +
  • +
  • Area 2 + +
      +
    • Controller 1
    • +
    • ...
    • +
    • Controller 42
    • +
  • +
+ +

If Area 2 is optional by configuration, would it be a better idea to extract this area into a separate assembly and load it only when needed?

+",122683,,189038,,42397.32639,42397.49861,Does unused code affects the assembly after startup (Memory for example)?,,1,7,1,,,CC BY-SA 3.0,, +308446,1,,,1/26/2016 21:08,,1,833,"

What do you consider the relationship between a DDD aggregate and an architectural component to be? I think it is quite reasonable that the services related to a specific aggregate define a component structure.

+ +

When I'm modeling the domain model of my software, I find several domain aggregates (with aggregate roots) and, as you know, in DDD practice each aggregate must be handled as a single unit. Right?

+ +

Now, simplified, the next stage is higher-level component structure modeling, and I have to find out which kinds of components exist. So could I consider that each aggregate defines a new component, and that this component provides specific ports or interfaces for each business process which the encapsulated aggregate implements?

+",193934,,,user22815,42398.87986,42398.87986,DDD aggregate and component structure,,1,4,1,42404.74236,,CC BY-SA 3.0,, +308452,1,308467,,1/26/2016 21:54,,1,95,"

Let's say I have a set of objects,

+ +
foo f;
+bar br;
+baz bz;
+
+ +

I also have a string of JSON data,

+ +
string JSONstring;
+
+ +

Depending on the object type of the JSON string, I need to transform it into either foo, bar, or baz. Ok, cool, I'll have a method for that.

+ +
public object parseJSONToFooBarBaz(string jsonString);
+
+ +

What I want to avoid is writing something like:

+ +
map<string, object> topLevelJSON = deserialized json string;
+
+if(map[foo] != null) return new foo(jsonString);
+else if(map[bar] != null) return new bar(jsonString);
+...
+// And the list balloons up and is difficult to maintain
+
+ +

I feel like this is either a good candidate or almost a good candidate for a factory pattern, but something doesn't feel quite right. Is there a simple solution that I'm overlooking, or is a set of conditionals or a switch/case really an OK way to solve this?
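For reference, the registry-style factory I am considering would look roughly like this (a Python sketch with made-up class bodies):

```python
import json


class Foo:
    def __init__(self, payload):
        self.payload = payload


class Bar:
    def __init__(self, payload):
        self.payload = payload


class Baz:
    def __init__(self, payload):
        self.payload = payload


# One table instead of a growing if/else chain; supporting a new
# type means adding one entry here.
REGISTRY = {"foo": Foo, "bar": Bar, "baz": Baz}


def parse(json_string):
    data = json.loads(json_string)
    for key, cls in REGISTRY.items():
        if key in data:
            return cls(data[key])
    raise ValueError("unknown payload type")
```

The conditional still exists, but it is data-driven and no longer balloons as types are added.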

+",212301,,,,,42396.12014,Rewriting conditionals in OOP without generics,,1,2,,,,CC BY-SA 3.0,, +308471,1,,,1/27/2016 6:18,,4,2264,"

I'm using meanjs for a project. It includes a yeoman generator with some express tests (model.test.js & routes.test.js)

+ +

The tests do exactly what they advertise. My question is though, should I be writing tests for my express controllers as well?

+ +

Right now the routes tests make sure the server responds with the appropriate data as a whole, but should I be writing more granular tests for each part of my controller as well? I don't see many examples of this online....

+",206227,,,,,42456.52778,Unit test express controllers?,,1,0,2,,,CC BY-SA 3.0,, +308478,1,308481,,1/27/2016 8:42,,1,269,"

Maybe it looks like a weird question, but what term should be used inside code for files that are not folders, to differentiate them from folders?

+ +

If I need to write two functions, isFolder() and isFile(), the second one has a misleading name because folders are files too.

+",212913,,105684,,42396.38125,42402.3875,How are non-folder files called?,,2,4,,42399.96597,,CC BY-SA 3.0,, +308479,1,308491,,1/27/2016 8:45,,9,1162,"

This question is about applying rules of my application that confuse me.

+ +

My controller uses a service and the service uses a repository.

+ +
public class CommentController: ApiController{
+
+    [HttpPost]
+    public bool EditComment(Comment comment){
+        commentService.Update(comment);
+    }
+}
+
+public class CommentService{
+    ICommentRepository repository;
+    ....
+    ....
+    public void Update(Comment comment){
+        repository.Update(comment);
+    }
+}
+
+ +

If the user is authenticated, he can update a comment.

+ +
    +
  • But a user should only edit his own comments.

  • +
  • But an admin can edit all comments.

  • +
  • But a comment cannot be edited after a specified date.

  • +
  • Edit by a department

  • +
+ +

And I have something like these rules.

+ +

If I apply ""user editing own comment"" rule in service layer, I will change Update methot and pass parameter of controller User.Identity.Name,

+ +
public class CommentService{
+    ICommentRepository repository;
+    ....
+    ....
+    public void Update(string updatedByThisUser, Comment comment){
+        // if updatedByThisUser is owner of comment
+        repository.Update(comment);
+    }
+}
+
+ +

But is it right to change the service operations for each rule?

+ +

I am a bit confused about where I should apply the rules: in the controller, the service, or the repository.

+ +

Is there any standard way to do this, like a design pattern?
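The closest thing I could come up with myself is expressing each rule as a small predicate that the service checks before calling the repository; this is a rough Python sketch with rule names of my own invention:

```python
def is_owner(user, comment):
    return comment["owner"] == user["name"]


def is_admin(user, comment):
    return user.get("is_admin", False)


def not_expired(deadline):
    # Returns a rule closed over the cut-off date.
    def rule(user, comment):
        return comment["created"] <= deadline
    return rule


def can_edit(user, comment, rules):
    # An admin overrides the other rules; otherwise all must pass.
    return is_admin(user, comment) or all(r(user, comment) for r in rules)
```

The service would call can_edit before the repository update, keeping the authorization decision out of both the controller and the repository.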

+",160523,,,,,42696.81319,Where does apply authorization rules for my layered application?,,2,2,1,,,CC BY-SA 3.0,, +308490,1,,,1/27/2016 10:17,,0,1362,"

I have a class with multiple setters and want to make atomic updates to multiple properties/variables. As far as I can see there are three methods that could work:

+ +

Call all setters in synchronized block.

+ +
synchronized (obj) {
+  obj.setA(a);
+  obj.setB(b);
+  // ...
+}
+
+ +

Use explicit locks.

+ +
obj.lock();
+obj.setA(a);
+obj.setB(b);
+// ...
+obj.unlock();
+
+ +

Use an update object.

+ +
update = new UpdateObject();
+update.setA(a);
+update.setB(b);
+// ...
+obj.update(update);
+
+ +

Update objects are often used by the Windows API to make atomic changes to an object without explicit locking in a single system call.

+ +

Is there any method that I missed?

+ +

Concurrency control is a cross-cutting concern. What are the architectural implications of each method? What is the most idiomatic method in Java?
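For comparison, here is how I picture the update-object variant (method 3) combined with an internal lock, sketched in Python; the Update and Obj names are illustrative only:

```python
import threading
from dataclasses import dataclass


@dataclass
class Update:
    a: int
    b: int


class Obj:
    def __init__(self):
        self._lock = threading.Lock()
        self.a = 0
        self.b = 0

    def apply(self, update):
        # Both fields change under one lock acquisition, so a reader
        # holding the same lock never observes a half-applied update.
        with self._lock:
            self.a = update.a
            self.b = update.b

    def snapshot(self):
        with self._lock:
            return (self.a, self.b)
```

The update object keeps locking encapsulated inside the class, unlike the first two methods where callers must remember to synchronize.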

+",147933,,,,,42396.47431,Concurrent and atomic updates to multiple properties/variable of an object,,1,2,,,,CC BY-SA 3.0,, +308495,1,308500,,1/27/2016 11:10,,2,703,"

As a corporate developer who works alone I find myself creating and writing a lot of websites that consist of screens that are basically wrappers for a DB table. So for instance on a screen that updates companies I would have a controller like this:

+ +
 public class CompaniesController
+{
+    private readonly ICompaniesManager manager;
+
+    public CompaniesController(ICompaniesManager manager)
+    {
+        this.manager = manager;
+    }
+
+    [HttpPost]
+    public JsonResult Update(Company company)
+    {
+            // validation goes here
+            this.manager.Update(company);
+            // handle update result
+    }
+}
+
+ +

This controller talks to a middle-tier ""Manager"" class:

+ +
 public class CompaniesManager : ICompaniesManager
+{
+    private readonly ICompaniesRepository repository;
+
+    public CompaniesManager(ICompaniesRepository repository) 
+    {
+        this.repository = repository;
+    }
+
+    public void Update(Company company)
+    {   
+        // business logic & caching code    
+        this.repository.Update(company);
+    }
+}
+
+ +

As you can see this is talking to a repository class that actually handles updating the database.

+ +

So I always seem to end up with the pattern xxxController -> xxxManager -> xxxRepository (where xxx is the object/table). I keep finding myself wondering if I am missing something fundamental about how a 3 tier app should work or how middle tier classes should be named.

+ +

Would this be considered a correct 3-tier architecture or am I missing something?

+",184971,,,,,42614.51319,ASP.NET MVC Middle tier object naming,,1,0,,,,CC BY-SA 3.0,, +308499,1,308505,,1/27/2016 11:41,,4,268,"

Background: The company I work for uses different systems to hold their customer-related insurance data. They want to have an app for their customers where they can find their insurance-related things. Some of these systems provide web services to get the necessary information, so proper user authentication and so on is possible. However, some provide direct database access without anything that comes even close to user authentication, because they were not designed for that.

+ +

To solve this issue we are building an adapter service which can connect to all the different systems to get the necessary data. However, user authentication is still an issue for the systems which don't provide it.

+ +

Another software engineer suggested (we are the only engineers) that we build a general authentication mechanism which provides authentication to all the systems to get the necessary customer information. We both have the same background; we are young, and we don't have that much experience in designing such a system or understanding all the security issues involved.

+ +

Question: With the background as stated above, I am concerned about all the security issues which may come up if we did that. For this kind of serious information (insurance data), we may lack the experience to set up a system like that and to ensure the best possible security, not limited by our knowledge (or Google).

+ +
    +
  • His opinion on this matter is that a general authentication mechanism should be built for all systems. With one login on our side, customers could get all their data from the different systems where it is available. If we follow standards such as OAuth, we don't have to fear the security issues that much, and if we research these things on the internet, it will be OK.
  • +
  • My opinion is that a general authentication mechanism is a good fit for the systems which don't provide the necessary user authentication themselves (there it is possibly the only solution). However, we should not override systems which provide their own authentication for accessing a customer's data. Perhaps information can be exchanged between the systems to ensure the same login works everywhere (if a user exists both in a system which provides authentication and in one which uses the general mechanism), for example when a password is changed. This would ensure that if our general authentication mechanism is somehow breached, not all the systems are exposed to the attacker, with who knows what consequences.
  • +
+ +

Should we go through with the idea of one general authentication mechanism, or limit it to the systems that need it and possibly exchange login details where a user exists in both kinds of system (those that support authentication and those that don't)? What are good arguments for or against these ideas?

+ +

Edit: The general public would be using it; each system would provide the contract(s) with its data (if the customer exists in that system). Example: for customer '1', system A (database) and system B (web service) each contain a different contract of his. For customer '2', only system A (database) contains a contract. Depending on the kind of contract(s) a customer has, they are stored in different systems. All of these systems are third party.

+",212937,,212937,,42396.56875,42396.68056,Having a discussion about security concerns with another software engineer,,2,2,,42402.59167,,CC BY-SA 3.0,, 

When I work on exception handling, I notice that I often have to deal with exceptions I had no idea about. It is especially noticeable when I write a method that grabs data from the web. An error may occur, for example, due to connection loss; I can handle it. But then another error occurs, a different error with the same cause: connection loss. Okay, added it. Sometimes yet another occurs. So, the problem is, I am never sure whether I've handled all the possible errors that may occur due to a certain cause.

+ +

At first, I thought about going wildcard and do something like (example in Python):

+ +
try:
+    #do stuff
+except:
+    #handle error
+
+ +

But it soon proved to be the wrong approach, because if I need KeyboardInterrupt (which is raised when a user terminates the program) to be handled by a specific outer scope, it gets swallowed by this wildcard instead, which is not supposed to have anything to do with it.

+ +

So how do I handle exceptions I don't know of but that may possibly occur (or not occur)? Some kind of exceptEverythingBut KeyboardInterrupt:? I doubt many languages have that in their syntax.

+ +

EDIT1: a really simplified example:

+ +
#!/usr/bin/python3 -u
+# -*- coding: utf-8 -*-
+
+try:
+    while True:
+        try:
+            print(1)
+        except:
+            print(2)
+except KeyboardInterrupt:
+    print('end')
+
+ +

When I press Ctrl+C, I want it to print 'end' and finish. But instead it prints 2 and continues execution.

+ +

If I try this:

+ +
#!/usr/bin/python3 -u
+# -*- coding: utf-8 -*-
+
+try:
+    while True:
+        try:
+            print(1)
+        except KeyboardInterrupt:
+            break
+        except:
+            print(2)
+except KeyboardInterrupt:
+    print('end')
+
+ +

it finishes, but it skips the outer except and doesn't print 'end'. And that's not what I want. So, the only way I see is to prevent the inner scope from handling KeyboardInterrupt altogether. But that is not possible if there is an except: or except KeyboardInterrupt: in there. So, I need to specify exactly which errors I want to handle in the inner except. But, as I mentioned in the beginning, I don't always know what they can be.

+ +

I'm asking this question because my usual way to handle it is to just let the program fail unexpectedly several times, read the logs, and add handling for the errors I didn't know about in new versions; however, this could just be a naïve approach, so I want to know how it is done by experienced people.
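(A sketch that is not from the original post; the names are illustrative.) One relevant detail is Python's exception hierarchy: KeyboardInterrupt and SystemExit inherit from BaseException but not from Exception, so except Exception: behaves like "catch everything except the interpreter-control exceptions":

```python
# Sketch (not from the original post): `except Exception:` catches ordinary
# errors but lets KeyboardInterrupt/SystemExit propagate, because those
# derive from BaseException, not Exception.
def run_guarded(action):
    try:
        return action()
    except Exception:  # does NOT catch KeyboardInterrupt
        return 'recovered'

def unknown_error():
    raise ValueError('something I did not anticipate')

def user_interrupt():
    raise KeyboardInterrupt

print(run_guarded(unknown_error))  # prints 'recovered'
try:
    run_guarded(user_interrupt)
except KeyboardInterrupt:
    print('end')  # the outer scope gets to handle it
```

This keeps the inner handler broad without stealing Ctrl+C from the outer scope.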

+",212946,,163990,,42453.6,42453.6,Handling exceptions I don't know about,,4,4,1,,,CC BY-SA 3.0,, +308506,1,,,1/27/2016 12:23,,1,552,"

I need to send out a batch of reminder emails, never more than 100 emails per batch, using authenticated SMTP. This is happening in a Windows service (no GUI).

+ +

The SMTP library we're using raises a Sent event. In the Sent event listener, we grab some info from the event's args and write some data to a SQL Server database via the SqlClient library, to note the fact that the particular email has been sent to the recipient, and when it went out.

+ +

Is there any concern or issue with the Sent event listener when the emails are being sent out on multiple threads managed by the ThreadPool?

+ +

pseudocode:

+ +
  for each record in batch
+     {
+       var email = buildEmail(record);
+       ThreadPool.QueueUserWorkItem( a =>
+            {
+               if (!SmtpClient.Connected) SmtpClient.Connect();
+               if (!SmtpClient.Authenticated) SmtpClient.Authenticate();
+               if (SmtpClient.Authenticated) SmtpClient.Send(email);
+            }); 
+     }
+
+
+   SmtpClient.Sent += (sender, args) =>
+  {
+          //get info from args
+          // populate SqlCommand parameters
+
+        ThreadPool.QueueUserWorkItem(b =>
+           {
+               UpdateDatabase(info);
+           });
+  };
+
+",149085,,1204,,42396.66597,42697.06458,ThreadPool.QueueUserWorkItem is this example a valid use case?,<.net>,1,4,,,,CC BY-SA 3.0,, +308507,1,308524,,1/27/2016 12:26,,0,84,"

I'm working on a huge legacy codebase that uses the Bing Maps API as a service provider, and I have been given the task of scrapping Bing, which is the foundation of the software and is referenced throughout the source code.

+ +

There are no unit tests at all and very few pages of documentation. It's also a JavaScript codebase, built on top of ExtJS 3.4 with a badly designed custom wrapper around objects to mock classic inheritance and class hierarchies, and Bing is used all over the code and referenced through functions everywhere.

+ +

My question is: how should I document and count the references to Bing throughout the code? I'm hoping to use this to make a better time estimate and, later, to ease the refactoring process.

+ +

I have already started browsing the codebase, but I'm not sure what the best way of documenting references is.

+",67077,,158187,,42396.74236,42396.74236,Preparing to remove a tightly coupled service provider out of source code,,2,2,,42427.60417,,CC BY-SA 3.0,, +308511,1,,,1/27/2016 13:07,,1,136,"

I am in the process of writing my first true API. In the process, I am defining an interface for mapping complex data structures onto other complex data structures.

+ +

At the moment, the interface contains a set method for the input data structure, a run method to kick off the mapping process, and several methods which basically have the same signature but different names and different documentation. The latter all return java.lang.Object, something like the following.

+ +
public interface DataCompiler {
+
+  public void setInputDataStructure(IDS ids);
+
+  public void run();
+
+  /**
+   * Maps a ""this"" structure to a target ""this"" structure and
+   * returns the resulting target ""this"" structure.
+   */
+  public Object mapThisStructure();
+
+  /**
+   * Maps a ""that"" structure to a target ""that"" structure and
+   * returns the resulting target ""that"" structure.
+   */
+  public Object mapThatStructure();
+
+  /**
+   * Maps an ""other"" structure to a target ""other"" structure and
+   * returns the resulting target ""other"" structure.
+   */
+  public Object mapOtherStructure();
+
+}
+
+ +

So, an interface is meant to define a contract that must be fulfilled by implementations of the interface.

+ +

However, with my interface there is no safety catch in the method signatures themselves to prevent misuse of any of the last three methods. E.g., mapThisStructure could actually be implemented in exactly the way that mapThatStructure is meant to be implemented. Or someone could put all the mapping work into one of the three methods (which would of course breach the principle of one method doing one and only one thing) and simply let the other two return null.

+ +

Thus the actual contract is defined in the Javadoc. So, does the setup of such an interface make sense?
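For comparison only (this is a sketch, not part of the question; the Mapper name and types are hypothetical): Java generics can push part of such a contract into the signature itself, so the compiler rejects a mapping that returns the wrong type instead of everything collapsing to Object:

```java
// Hypothetical sketch: a type-parameterized mapper whose return type is
// enforced by the compiler rather than documented in Javadoc.
public class MapperSketch {
    interface Mapper<S, T> {
        T map(S source);
    }

    // Each mapping gets its own concrete source/target types.
    static final Mapper<String, Integer> lengthMapper = String::length;

    public static void main(String[] args) {
        // lengthMapper.map("hello") can only ever be an Integer here.
        System.out.println(lengthMapper.map("hello"));
    }
}
```

With one such interface per mapping, an implementation can no longer silently return the wrong structure type.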

+",70086,,,,,42396.6875,Does an interface including several methods that return instances of Object make sense?,,2,2,,,,CC BY-SA 3.0,, +308515,1,308517,,1/27/2016 13:21,,101,18876,"

Why would you run unit tests on a CI server?

+ +

Surely, by the time something gets committed to master, a developer has already run all the unit tests and fixed any errors caused by their new code. Isn't that the point of unit tests? Otherwise they've just committed broken code.

+",175261,,3329,,42397.65833,42403.55139,What's the point of running unit tests on a CI server?,,9,12,28,,,CC BY-SA 3.0,, +308523,1,,,1/27/2016 13:42,,7,1447,"

Recently I asked this question. As commented in the answer, by someone who sounds like a NumPy developer, this behavior is clearly not desired. The issue I posted was closed, stating that this is expected behavior and not a bug. Additionally, there are plans to fix it.

+ +

What is the difference between a bug and behavior which is expected but not desired? Do those terms have precise and consistent definitions?

+ +

As a silly extreme, if a program always returns 3 when asked to compute 1+1, returning 3 is certainly expected behavior by the program since it has always worked that way, but it certainly isn't desired behavior, so is it a bug or not?

+ +

Recently I've been studying philosophy, and this question has piqued my interest.

+",,user2133814,30975,,44021.83681,44021.83681,What is the difference between 'expected but not desired behavior' and a software bug,,5,6,,,,CC BY-SA 4.0,, +308525,1,308529,,1/27/2016 14:11,,1,1268,"

I am trying to understand MVP using WinForms.

+ +

I found this example. Why do UserModel and UserView need to implement the IUserModel and IUserView interfaces?

+",202783,,1204,,42396.59167,42396.61042,Why are interfaces necessary in MVP design pattern?,<.net>,1,8,1,,,CC BY-SA 3.0,, +308532,1,308538,,1/27/2016 14:57,,4,8247,"

I maintain an old Java app that deploys to Tomcat and which uses SSL (and hence a keystore). It is important to note that this app will not even start up if the SSL cert is bad/expired/invalid!

+ +

Every year the SSL cert expires, and so someone has to replace the old/expiring cert stored in the JKS with a new one (provided to us by IT). I am now starting that process/fun for this year.

+ +

I started by running the keytool command that prints the contents of the JKS:

+ +
keytool -list -v -keystore myapp.jks
+
+ +

Unless I'm reading its output wrong, I only see a cert that expired in 2015! If that's the case, how in the heck has this app been running for the last year?!? The one thing I do see is a cert in the ""chain"" that expires in 2034. I guess I don't understand SSL chains as well as I should, but my theory is that (somehow) the cert that expires in 2034 is somehow keeping my main cert alive/valid - is that possible? Here's a censored/summary of the output of that list command from above:

+ +
Keystore type: JKS
+Keystore provider: SUN
+
+Your keystore contains 1 entry
+
+Alias name: blah
+Creation date: May 1, 2014
+Entry type: PrivateKeyEntry
+Certificate chain length: 3
+Certificate[1]:
+Owner: blah
+Issuer: blah
+Serial number: blah
+Valid from: Thu Feb 16 15:49:19 EST 2012 until: Mon Feb 16 15:49:19 EST 2015
+Certificate fingerprints:
+    MD5:  blah
+    SHA1: blah
+    SHA256: blah
+    Signature algorithm name: SHA1withRSA
+    Version: 3
+
+Extensions: 
+
+#1: ObjectId: blah Criticality=false
+AuthorityInfoAccess [
+
+...a lot of stuff omitted for brevity
+
+Certificate[3]:
+Owner: OU=Go Daddy Class 2 Certification Authority, O=""The Go Daddy Group, Inc."", C=US
+Issuer: OU=Go Daddy Class 2 Certification Authority, O=""The Go Daddy Group, Inc."", C=US
+Serial number: 0
+Valid from: Tue Jun 29 13:06:20 EDT 2004 until: Thu Jun 29 13:06:20 EDT 2034
+Certificate fingerprints:
+    MD5:  blah
+    SHA1: blah
+    SHA256: blah
+    Signature algorithm name: SHA1withRSA
+    Version: 3
+
+...a lot more stuff, again omitted for brevity
+
+ +

Can anybody see any reason why this cert would still be valid?!?

+",154753,,,,,42396.67014,How is this Java Keystore cert still valid?,,1,2,1,,,CC BY-SA 3.0,, +308541,1,308564,,1/27/2016 16:14,,15,3050,"

Achieving Zero Downtime Deployment touched on the same issue but I need some advice on a strategy that I am considering.

+

Context

+

A web-based application with Apache/PHP for server-side processing and MySQL DB/filesystem for persistence.

+

We are currently building the infrastructure. All networking hardware will have redundancy and all main network cables will be used in bonded pairs for fault-tolerance. Servers are being configured as high-availability pairs for hardware fault-tolerance and will be load-balanced for both virtual-machine fault-tolerance and general performance.

+

It is my intent that we are able to apply updates to the application without any downtime. I have taken great pains when designing the infrastructure to ensure that I can provide 100% uptime; it would be extremely disappointing to then have 10-15 minutes of downtime every time an update was applied. This is particularly significant as we intend to have a very rapid release cycle (sometimes reaching one or more releases per day).

+

Network Topology

+

This is a summary of the network:

+
                      Load Balancer
+             |----------------------------|
+              /       /         \       \  
+             /       /           \       \ 
+ | Web Server |  DB Server | Web Server |  DB Server |
+ |-------------------------|-------------------------|
+ |   Host-1   |   Host-2   |   Host-1   |   Host-2   |
+ |-------------------------|-------------------------|
+            Node A        \ /        Node B
+              |            /            |
+              |           / \           |
+   |---------------------|   |---------------------|
+           Switch 1                  Switch 2
+    
+   And onward to VRRP enabled routers and the internet
+
+

Note: DB servers use master-master replication

+

Suggested Strategy

+

To achieve this, I am currently thinking of breaking the DB schema upgrade scripts into two parts. The upgrade would look like this:

+
    +
  1. Web-Server on node A is taken off-line; traffic continues to be processed by web-server on node B.
  2. +
  3. Transitional Schema changes are applied to DB servers
  4. +
  5. Web-Server A code-base is updated, caches are cleared, and any other upgrade actions are taken.
  6. +
  7. Web-Server A is brought online and web-server B is taken offline.
  8. +
  9. Web-server B code-base is updated, caches are cleared, and any other upgrade actions are taken.
  10. +
  11. Web-server B is brought online.
  12. +
  13. Final Schema changes are applied to DB
  14. +
+

'Transitional Schema' would be designed to establish a cross-version compatible DB. This would mostly make use of table views that simulate the old version schema whilst the table itself would be altered to the new schema. This allows the old version to interact with the DB as normal. The table names would include schema version numbers to ensure that there won't be any confusion about which table to write to.

+

'Final Schema' would remove the backwards compatibility and tidy the schema.

+

Question

+

In short, will this work?

+

more specifically:

+
    +
  1. Will there be problems due to the potential for concurrent writes at the specific point of the transitional schema change? Is there a way to make sure that the group of queries that modify the table and create the backwards-compatible view are executed consecutively? i.e. with any other queries being held in buffer until the schema changes are completed, which will generally only be milliseconds.

    +
  2. +
  3. Are there simpler methods that provide this degree of stability whilst also allowing updates without down-time? It is also preferred to avoid the 'evolutionary' schema strategy as I do not wish to become locked into backwards schema compatibility.

    +
  4. +
+",174739,,-1,,43998.41736,42398.42153,Zero Downtime Deployment - Transitional Db Schema,,2,0,2,,,CC BY-SA 3.0,, +308542,1,308600,,1/27/2016 16:18,,6,953,"

return this (or a similar construct) allows method chaining. The lack of it is painful, because you have to write code like this (C#):

+ +
var list = new List<string>();
+list.Add(""hello"");
+list.Add(""world"");
+
+ +

instead of

+ +
list.Add(""hello"").Add(""world"");
+
+ +

Elixir solves this nicely for function chaining: instead of relying on the callee, it relies on the caller (forgive me my mistakes, I don't know Elixir):

+ +
list |> add(""hello"") |> add(""world"");
+
+ +

But now I have just read this sentence at wikipedia:

+ +
+

Returning an object of built-in type from a function usually carries + little to no overhead, since the object typically fits in a CPU + register.

+
+ +

On one hand the callee does not know if the result will be used or not; on the other hand the caller cannot stop the callee from setting the result value. So I am skeptical about this ""little"", but ""no overhead""?

+ +

Thus MY QUESTION for this very particular pattern (i.e. return this with method chaining) -- can it be optimized to have no overhead? How?

+ +

Question by example -- say I write a framework and sprinkle every possible method with return this just to enable method chaining. The question arises -- will a user who does not use method chaining pay the price of lowered performance? How could the compiler optimize the code so that this feature has zero cost?

+ +

Update after the first 2 comments -- ""cheap"" != ""free"", so here is another perspective on my question: why the difference between ""little"" and ""no cost""? If it could be guaranteed to be at no cost, we would write ""no cost"", period. So I assume it cannot be guaranteed, thus ""little"".

+ +

Clarification: I am not asking how to make Elixir-like syntax in another language. I am asking how it is possible for a compiler to optimize the callee-caller interaction for return this + method chaining (or the lack of it, when not used).

+",66354,,66354,,42396.80764,42398.59861,"Can ""return this"" pattern be optimized to no cost performance?",,8,5,1,,,CC BY-SA 3.0,, +308544,1,,,1/27/2016 16:26,,7,1053,"

Right now at my current company I must ""parse"" a CSV and extract some data out of it (in the sense of data mining).

+ +

I surprised myself by defining the following data structure (Java 8):

+ +
static List<Map<String, Map<String, LocalDate[]>>> intervalStructure
+
+ +

Is this considered bad practice? Should I use a self-defined class instead to store the data?

+",120058,,31260,,42396.68542,42397.01667,Is using this complex data structure bad practice?,,2,1,,,,CC BY-SA 3.0,, +308551,1,308565,,1/27/2016 17:18,,0,5764,"

Having lots of interfaces that need to be passed into a constructor looks messy; is there a neater way of doing it?

+ +

Code snippet:

+ +
public class Foo
+{
+    private readonly IRepository1 _repository1;
+    private readonly IRepository2 _repository2;
+    private readonly IRepository3 _repository3;
+    private readonly IRepository4 _repository4;
+
+    public Foo(IRepository1 repository1, IRepository2 repository2, 
+               IRepository3 repository3, IRepository4  repository4)
+    {
+        _repository1 = repository1;
+        _repository2 = repository2;
+        _repository3 = repository3;
+        _repository4 = repository4;
+    }
+}
+
+",146464,,,,,42396.78472,Neat way on passing interface parameter to a constructor,<.net>,1,1,1,,,CC BY-SA 3.0,, +308561,1,,,1/27/2016 18:22,,2,113,"

In writing PHPSPEC tests for a Zend Framework 2 application, I'm left wondering how far to 'dig'. Consider this very simple case:

+ +

A DomainService (Domain in the URL sense of the word) should be able to list all Foo objects; this works.

+ +

Making heads or tails of the test is not simple. This DomainService is fed via DI with a DomainMapper, a Doctrine mapper that transforms MySQL rows into Doctrine entities. In theory, would I not want to cover that my code can:

+ +
    +
  1. instantiate the DomainService (but this is done through ZF2 Factories -- I'm completely bypassing the Factory process in using phpspec's let). Does this tailspin into a separate test for the Factory, that needs a functional service locator object?
  2. +
  3. test that it can find all Foo objects, but this means invoking Doctrine and running an SQL query. The test shouldn't be database dependent I assume, we wouldn't want external data to modify the test. But what then should I test, that it successfully returns an empty array? Semantic query errors could create empty array conditions that are failures. Do we simply accept that tests don't cover this?
  4. +
+ +

Where do you draw the line? Testing DI'ed services that interact with databases can't automatically be off-limits?

+",213011,,209774,,43345.55,43345.55,"How far do I, or can I take TDD tests with Service Objects?",,1,0,1,,,CC BY-SA 3.0,, +308575,1,326006,,1/27/2016 20:50,,1,1480,"

I have an issue with a client requirement to import a string of HTML text within a CSV document.

+ +

For example, a sanitized version of one import line:

+ +
""IDNumber,TextIdentifierNumber,<p><strong>Hello, this text is **>** this text. 32 < 64</strong></p>""
+
+ +

The issue here is not importing this text, but the angle brackets. These are a part of their everyday business practice and are needed to indicate a less-than or greater-than denomination.

+ +

Background: +At this time our client is using a .NET web application and a batch-load (console) application, both written in Visual Basic .NET 4.0. Our web application uses a WYSIWYG editor for entering such text, and we handle such angle brackets via their named entities and encoding.

+ +

Our issue is discerning a bare angle bracket within an HTML-rich input string.

+ +

What we have done to date:

+ +

We employ HTMLAgilityPack to strictly parse the HTML and weed out tags we don't allow. Unfortunately, HTMLAgilityPack strips out this angle bracket and any text that could follow a potential closing tag. This mangles the HTML string badly and causes issues in our reports.

+ +

We have kicked around a few options, such as text replacement (the customer sends in [LESSTHAN] and our code converts it to the proper angle bracket). Unfortunately, this most definitely will not work because their source data comes from another system.

+",213032,,31260,,42397.02083,44041.60972,Concept to differentiate between html tags and angle brackets,,4,6,,,,CC BY-SA 3.0,, +308578,1,308690,,1/27/2016 21:58,,1,75,"

Suppose you're writing an interface that's supposed to be your public API; let's call it Session. Its implementation is called SessionImpl and is going to be injected by your DI framework.

+ +

I'm stuck thinking about the class access for this SessionImpl class: whether it should be restricted to its package (since the public API is the interface), making it untestable from a different test project, or just made public and tested.

+ +

The third option would be testing it through its interface, by making the DI use a factory and calling the factory from the test.

+ +

What's the correct approach to keep the code simple and maintainable?

+",136188,,,,,42638.15556,Class access modifiers on tested implementations,,1,1,,,,CC BY-SA 3.0,, +308581,1,308583,,1/27/2016 22:30,,5,1932,"

I am currently reading Design Patterns - Elements of Reusable Object-Oriented Software. I am in chapter 1, at page 16, in the section Class versus Interface Inheritance. There, in the last line of the page, it says ""An object can have many types, and objects of different classes can have the same type.""

+ +

I think I understand the second part of the sentence, how objects of different classes can have the same type. It is possible through inheritance, if the base class and the derived class have the same interface.

+ +

My question is how an object can have many types. Is it trying to say a derived object can have two types, base and derived?

+",101222,,,,,42396.95556,How can a object have many types?,,1,5,,,,CC BY-SA 3.0,, +308595,1,308598,,1/28/2016 3:08,,0,73,"

If I'm building an application like a social network and I implement a like/favorite button with AJAX, is it preferable to use one URL for like/dislike, or two different URLs?

+ +

If I use one, for example like.php, then at that URL I'd check whether the post ID was already liked or not, and then do the opposite action.

+ +

If I use two, for example like.php and dislike.php, then at those URLs I'd check whether the post ID was already liked or not, and return an error if the action was already done.

+ +

I'd say the first option is easier, because it requires less backend/frontend code, but many big websites use the second method. Is there a reason behind that?

+",202023,,,,,42397.49861,Do and Undo action in the same / different page - Is there a difference?,,3,1,,,,CC BY-SA 3.0,, +308608,1,308610,,1/28/2016 7:03,,2,1238,"

Usually, real-time web apps are built with WebSockets, right?

+ +

Well, let me be radical here - what if I used Ajax?

+ +

Okay, okay, I know it has its limitations. You can't build Agar.io, sending data packets up to fifty times a second, with latency in the milliseconds.

+ +

But what about an app with real-time updates that are a bit less frequent? Maybe receiving notifications within a few seconds, or a turn-based game?

+ +

Is it acceptable to use Ajax.post() every few seconds to receive updates from, say, a PHP script getting information from an SQL database? As opposed to either RTS games, or the Stack Exchange model, where information is loaded when the page is loaded rather than while the page is open.

+ +

So would there be any issues with using this model? Would it lag the client's computer or cause some related problem?

+",213093,,1204,,42397.29792,42397.30486,Building a real-time web-app with Ajax .post(),,1,2,,,,CC BY-SA 3.0,, +308615,1,308636,,1/28/2016 8:56,,1,228,"

I am fairly new to Java EE, so I am still missing some concepts. +I am learning Docker to use it in our DEV / CI build environments. I could make it work on my machine, but for it to work on the CI server, my current approach would be to store the docker-compose.yml and Dockerfiles in Git, then, on the CI server, download them, build the images and start them.

+ +

To set up the Docker image for the web container (WildFly) I had to add:

+ +
    +
  • DB Drivers (.jar files)
  • +
  • Standalone.xml (.xml file)
  • +
  • Modules we use (mix of .xml and .jar files)
  • +
+ +

But these files are not present on the CI server. +I could download the DB drivers when building the image, but the modules and standalone.xml are not available online.

+ +

Is this approach reasonable? If so, where would one store these files so that they get updated when needed and the CI server is able to download them to build the image?

+",7764,,31260,,42397.42361,42397.66042,Where to store standalone.xml and other files so it is acessible in my CI server?,,2,0,,,,CC BY-SA 3.0,, +308621,1,308637,,1/28/2016 10:46,,2,142,"

I am studying user stories and use cases, and I'm curious about ways to combine these two techniques. Basically, it seems to me that there is a 1:1 relationship between user stories and use cases, meaning that for each user story there is a related use case, and vice versa.

+ +

I'm not interested in any specific process, nor even in the type of process (i.e. agile versus waterfall). What I want to know is whether it is reasonable to assume this relationship holds, or at least to adopt this assumption as a design practice, even if it is not an absolute law.

+",97410,,,,,42397.57708,Is it reasonable to assume a 1:1 relationship between user stories and use cases?,,1,3,,,,CC BY-SA 3.0,, +308625,1,,,1/28/2016 11:34,,8,3898,"

I am trying to understand the three-way handshake in the TCP connection setup. +My book states that the client first contacts the server; say we want an HTTP connection, so it sends a SYN to port 80. (1)

+ +

The server then replies a SYN ACK package. (Here is my question) (2)

+ +

And now the client sends a final ACK. (3)

+ +

In the book, the graphic shows that (2) goes from the server socket back to the initial client socket. Then the graphic shows that (3) goes from the client socket to a ""Welcoming Socket"". +The welcoming socket is not the same as the connection socket from (2).

+ +

I have downloaded the http.cap from the Wireshark wiki and am taking a look at the initial 3 packets. Here we have the SYN with port 3372 -> 80, +then a SYN ACK 80 -> 3372, +and finally an ACK 3372 -> 80 (with potential data already).

+ +

What confuses me is that the final ACK also goes to port 80 on the server. I thought that we had created a new welcoming socket with a new port, so that the connection socket with port 80 could continue listening for new connections.

+",213137,,,,,42397.54861,"TCP - Three Way Handshake, which Port?",,2,1,3,,,CC BY-SA 3.0,, +308640,1,308644,,1/28/2016 14:08,,15,3920,"

I always seem to write C code that is mostly object oriented; say I had a source file, I would create a struct and then pass a pointer to this struct to functions (methods) owned by this structure:

+ +
struct foo {
+    int x;
+};
+
+struct foo* createFoo(); // mallocs foo
+
+void destroyFoo(struct foo* foo); // frees foo and its things
+
+ +

Is this bad practice? How do I learn to write C the ""proper way""?
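For reference, a minimal implementation of those two declarations might look like this (my own sketch; error handling kept deliberately simple):

```c
#include <stdlib.h>

struct foo {
    int x;
};

/* Constructor: allocate and initialize a foo "object". */
struct foo *createFoo(void) {
    struct foo *f = malloc(sizeof *f);
    if (f != NULL)
        f->x = 0;
    return f;
}

/* Destructor: free foo and the things it owns (none here). */
void destroyFoo(struct foo *f) {
    free(f);  /* free(NULL) is a well-defined no-op */
}
```

Callers then pair every createFoo() with exactly one destroyFoo(), much like new/delete in an OO language.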

+",205999,,,,,42398.71389,Is it bad to write object oriented C?,,5,14,3,42398.87431,,CC BY-SA 3.0,, +308651,1,308683,,1/28/2016 15:30,,0,426,"

I'm currently developing a project where I'm going to use a lot of ComboBoxes. In order to avoid repeating a lot of code, I'm planning on building a user control containing a ComboBox that retrieves the data I need from my tables.

+ +

What is the best approach for this, since I'm going to use a lot of different ComboBoxes with different tables? Different user controls for different tables, or just one user control to which I pass the table I need?

+ +

Example:

+ +
Combo1
+  Spring/Summer 2015/2016  
+  Spring/Summer 2016/2017  
+  Fall/Winter 2015/2016  
+  Fall/Winter 2016/2017  
+
+Combo2
+  Collection 1  
+  Collection 2  
+  Collection 3  
+  Collection 4  
+  Collection 5  
+
+ +

These two combos will be repeated many times in the project. My question is: should I create two user controls (combo1UserControl, combo2UserControl) or just one comboUserControl with parameters such as the name of the table?

+",107109,,107109,,42397.65903,42397.91458,Replace use of ComboBox with user controls,,1,1,,,,CC BY-SA 3.0,, +308652,1,308654,,1/28/2016 15:31,,5,532,"

Those of you who have worked with a framework implementing the MVC architectural pattern most likely know how these frameworks are usually implemented.

+ +

They contain a base Controller class, which you extend; some part of the name of the extending controller indicates the beginning of a route, and the methods which you as a developer add to your newly created controller represent the rest.

+ +

I.e. the route http://hostname.com/users/filter/name/John Wick requires a UsersController class (extending a main Controller) with a filter method accepting one parameter, the name.

+ +

By introducing new public methods, which the extended Controller class's public interface has no idea about, the frameworks break the LSP, which says:

+ +
+

What is wanted here is something like the following substitution + property: If for each object o1 of type S there is an object o2 of + type T such that for all programs P defined in terms of T, the + behavior of P is unchanged when o1 is substituted for o2, then S is a + subtype of T. (See also 2, 17 for other work in this area.)

+
+ +

Link to the official document by Barbara Liskov.

+ +

Where T is the base Controller class and P is the user-defined UsersController. MVC frameworks must be implemented in terms of the base Controller, because the frameworks do not know what the routes and implementations will be, and based on the routes they perform some kind of a magical lookup to find the controller and method which suit the needs, otherwise redirecting to an error page.

+ +

Update in response to the answers, including enderland's answer

+ +

To make my intentions more clear, I am going to provide a code example, explaining my thought process.

+ +

Imagine two classes. For demonstration purpose I am going to use the common duck/toy duck example.

+ +

This concrete piece of code will be in C#, but I believe it is so simple you should not have problems with understanding it as long as you have some OOP background.

+ +
class Duck
+{
+    public string Name { get; protected set; }
+
+    public Duck(string name)
+    {
+        this.Name = name;
+    }
+
+    public void Quack()
+    {
+        Console.WriteLine(""Duck quacked."");
+    }
+}
+
+class ToyDuck : Duck
+{
+    protected bool _batteriesAreCharged;
+
+    public ToyDuck(string name, bool batteriesAreCharged = false)
+        : base(name)
+    {
+        this._batteriesAreCharged = batteriesAreCharged;
+    }
+
+    public void ChargeBatteries()
+    {
+        this._batteriesAreCharged = true;
+    }
+
+    public new void Quack()
+    {
+        if (this._batteriesAreCharged == true) {
+            Console.WriteLine(""Toy duck quacked."");
+        }
+
+        this._batteriesAreCharged = false;
+    }
+}
+
+ +

Citing Barbara Liskov and also enderland's answer:

+ +
+

The intuitive idea of a subtype is one whose objects provide all the + behavior of objects of another type (the supertype) plus something + extra.

+
+ +

Pretty much covers what I have just done. I took a supertype, extended it and added new functionality altering the public API by adding new public methods. The intuitive approach to code reuse. But is it the correct one?

+ +

Be aware that I have changed neither the pre- nor the post-condition of the Quack method itself, only the implementation, thus that rule is not broken.

+ +

Now imagine you would have a method, perhaps a facade, which uses the ToyDuck class:

+ +
void makeToyDuckQuack(ToyDuck duck)
+{
+    duck.ChargeBatteries();
+    duck.Quack();
+}
+
+ +

Fairly straightforward, you pass a ToyDuck object to this method and it makes sure the ToyDuck will always quack, by calling the ChargeBatteries method before quacking.

+ +

Which brings us back to the first paragraph I cited by Ms. Liskov.

+ +
+

What is wanted here is something like the following substitution + property: If for each object o1 of type S there is an object o2 of + type T such that for all programs P defined in terms of T, the + behavior of P is unchanged when o1 is substituted for o2, then S is a + subtype of T. (See also 2, 17 for other work in this area.)

+
+ +

Based on this, programs should be written in terms of the supertype T, here the base Duck class. But the makeToyDuckQuack method I presented uses the subtype P instead. So to fulfill Ms. Liskov's terms, let's swap the tighter bond determined by the ToyDuck class for a more generic approach.

+ +
void makeDuckQuack(Duck duck)
+{
+    duck.ChargeBatteries(); // <-- EEeeeeh? What is this method?
+    duck.Quack();
+}
+
+ +

And the code will not compile, because the Duck class does not know the ChargeBatteries method exists. I just broke the code.

+ +

The constraint of using the supertype still remains, so to fix the code, I am forced to remove the call to ChargeBatteries method.

+ +
void makeDuckQuack(Duck duck)
+{
+    duck.Quack();
+}
+
+ +

But now there's a problem. The ToyDuck will never ever quack more than once. No matter how hard I try, it simply will not. So after one call to the Quack method on a ToyDuck object, I can throw the object away and create it again to make it quack once more.

+ +

So by extending a class and extending the public API, you are in fact introducing methods that will never be called (unless some kind of magical runtime lookup is performed), and why would you even introduce methods that are useless? YAGNI is all against that.

+ +

How is this connected to the controllers?

+ +

As I said before, the MVC frameworks cannot know how you will extend the base Controller class, so the core of such frameworks must be programmed in terms of the base class.

+ +

There might be methods like:

+ +
void processRoute(Controller controller)
+{
+   // ...
+}
+
+ +

But by using the base class, you would never have access to the new public methods of its children, by definition. Now, I know MVC frameworks do not care about this, because the methods are looked up during runtime based on the request, that's how MVC architecture works.

+ +
+ +

Because you are unable to know what the routes are, since they are not defined up front, would this be considered a violation of the LSP?

+ +

I personally think it might be, because I believe an extending class should never introduce new public methods which the base class does not know about.

+ +

If it is a violation, there's another question. Is there a MVC framework which does not break it, perhaps by using composition over inheritance? I tried looking for some, but without luck.

+ +

To clarify:

+ +

I know violating the LSP is not completely wrong, and LSP is merely a means to make writing code easier when using an IDE to help you with code completion, and to avoid having to downcast variables to access child methods.

+ +

The internals of MVC frameworks probably do not care about this at all, considering the route lookup is dynamic, as pointed out before.

+",193669,,31260,,42471.88958,42471.88958,Do common MVC frameworks violate the LSP and is there a MVC framework which does not?,,2,15,,,,CC BY-SA 3.0,, +308658,1,,,1/28/2016 16:15,,7,1269,"

I'm creating an FSM in python (it's a step sequencer and sample pad based on a Raspberry Pi 2).

+ +

Right now there are two states and the third is the Menu. These are managed by a System class which holds all the states. The Menu state has to edit the attributes of the other states, so I passed the other states to its constructor.

+ + + +
    class State(object):
+        def buttonPress(self, numberButton):
+            raise NotImplementedError
+
+    class StepSequencer(State):
+        def buttonPress(self, numberButton):
+            ...
+
+    class SamplePad(State):
+        def buttonPress(self, numberButton):
+            ...
+
+    class Menu(State):
+        def __init__(self,stepSequencer,samplePad):
+            self.stepsequencer = stepSequencer
+            self.samplepad = samplePad
+        def buttonPress(self, numberButton):
+            ...
+        def setMenuItem(self, currentMenuItem):
+            self.currentMenuItem = currentMenuItem
+
+    class MenuItem(object):
+        def __init__(self, text):
+            self.text = text
+
+    class System(object):
+        def __init__(self):
+            self.stepsequencer = StepSequencer()
+            self.samplepad = SamplePad()
+            self.menu = Menu(self.stepsequencer, self.samplepad)
+        def setState(self,state):
+            self.state = state
+        def buttonPress(self, numberButton):
+            self.state.buttonPress(numberButton)
+
+ +

I can't figure out how to create the structure for the menu. I thought of creating a class MenuItem for every menu item so the Menu state has all these objects and can change the current MenuItem, however, there are some things that I cannot overcome:

+ +
    +
  • how can I declare all the menu items and pass them to the Menu state to dynamically create the menu structure?

  • +
  • having this menu:

    + +
    Step Sequencer settings:
    +    Set samples
    +    Set volume
    +Sample Pad setting:
    +    Set samples
    +    Set volume
    +
    + +

    for example, if I want to have a slider that sets the volume, do I have to create another state to handle this? Can this state be another state of the System, or must it be a substate of the Menu state?

  • +
  • can I execute the code of each state using this logic? :

    + +
    while(system.state = system.stepsequencer):
    +    ...
    +while(system.state = system.menu):
    +    ...
    +
    + +

    The state is changed by a listener in another thread. This seems a very unorthodox way of handling states but it seems to work. If this is effective, how can I handle the Menu substates?

  • +
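To make the first bullet more concrete, this is roughly the declarative shape I have in mind for the menu; all names here are hypothetical:

```python
# Hypothetical sketch: the menu declared as a tree of MenuItem nodes;
# the Menu state would only need the root node to walk and display it.
class MenuItem(object):
    def __init__(self, text, children=None, action=None):
        self.text = text
        self.children = children or []
        self.action = action          # e.g. a volume-slider callback

menu_root = MenuItem("root", [
    MenuItem("Step Sequencer settings", [
        MenuItem("Set samples"),
        MenuItem("Set volume"),
    ]),
    MenuItem("Sample Pad settings", [
        MenuItem("Set samples"),
        MenuItem("Set volume"),
    ]),
])

def titles(item):
    return [child.text for child in item.children]

print(titles(menu_root))  # → ['Step Sequencer settings', 'Sample Pad settings']
```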
+",208531,,271897,user40980,42872.41181,43149.08264,Finite state machine menu design,,1,1,3,,,CC BY-SA 3.0,, +308661,1,371911,,1/28/2016 16:47,,6,3397,"

In my Python application, I can distinguish between entry points (scripts) and what I think of as library code.

+ +

My instinct is to put the library code inside a package, and the scripts elsewhere importing from the package.

+ +

It is possible using setup.py to reference methods inside a package to use as entry-points, thereby having all Python code inside packages.

+ +
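For illustration, the entry-point wiring would look roughly like this (the package and function names are made up); the dict below is what one would pass as the entry_points argument of setuptools.setup():

```python
# Hypothetical entry-point wiring: each string is
# "<command> = <package>.<module>:<function>", so the script body
# itself lives inside the package.
entry_points = {
    "console_scripts": [
        "myapp-run = myapp.cli:main",
    ],
}

cmd, target = entry_points["console_scripts"][0].split(" = ")
print(cmd, "->", target)  # → myapp-run -> myapp.cli:main
```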

Which is preferable?

+ +

Note: this link discusses both options but doesn't really offer an opinion.

+ +

Edit: to give a more concrete example, I am reviewing some code which contains one package. There are forty modules inside it:

+ +
    +
  • __init__.py
  • +
  • 11 'scripts'
  • +
  • 10 'library modules' used by those scripts
  • +
  • 18 test modules
  • +
+ +

This doesn't feel like it is using the capability of packages very well, but I can't put my finger on what exactly is wrong.

+ +

I appreciate that having tests in the same package was not in my original question.

+",175506,,175506,,42398.34167,43252.33264,Should I include scripts inside a Python package?,,3,2,2,,,CC BY-SA 3.0,, +308666,1,308688,,1/28/2016 17:36,,10,8087,"

I'm coding tests in C# and I settled on this structure:

+ +
try
+{
+    // ==========
+    // ARRANGE
+    // ==========
+
+    // Insert into the database all test data I'll need during the test
+
+    // ==========
+    // ACT
+    // ==========
+
+    // Do what needs to be tested
+
+    // ==========
+    // ASSERT
+    // ==========
+
+    // Check for correct behavior
+}
+finally
+{
+    // ==========
+    // CLEANUP
+    // ==========
+
+    // Inverse of ARRANGE, delete the test data inserted during this test
+}
+
+ +

The concept was ""every test cleans up the mess it makes"". However, some tests are leaving the database dirty and failing the tests that come after.

+ +

What's the right way to do this? (minimize errors, minimize run time)

+ +
    +
  • Deletes everything » Insert defaults » Insert test data » Run test?
  • +
  • Insert defaults » Insert test data » Run test » Delete everything?

  • +
  • Currently:

    + +
      +
    • (per session) Deletes everything » Insert defaults
    • +
    • (per test) Insert test data » Run test » Delete test data
    • +
  • +
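A further variant I have been weighing, sketched here in Python with sqlite3 for brevity (the real code is C#): run each test inside a transaction and always roll it back, so the cleanup happens even when the test fails.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payload (id INTEGER PRIMARY KEY, body TEXT)")
db.commit()

def run_test(test_body):
    try:
        test_body(db)                 # arrange + act + assert in one transaction
    finally:
        db.rollback()                 # cleanup happens no matter what

def failing_test(conn):
    conn.execute("INSERT INTO payload (body) VALUES ('test data')")
    raise AssertionError("the assert stage failed")

try:
    run_test(failing_test)
except AssertionError:
    pass                              # the test failed...

leftover = db.execute("SELECT COUNT(*) FROM payload").fetchone()[0]
print(leftover)  # → 0: ...but the database is clean anyway
```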
+",55576,,19584,,42412.65208,42412.65208,Cleanup & Arrange practices during integration testing to avoid dirty databases,,4,0,3,,,CC BY-SA 3.0,, +308671,1,,,1/28/2016 18:52,,5,297,"

Supposing you work in a very large global company with a lot of code at all levels (e.g. from embedded to mobile platform application code), and projects can last anything between 6 months and 5 years...

+ +

What would be considered the best practice in terms of keeping static analysis information, assuming traceability between defects and static warnings is important for the organization (e.g. aerospace, automotive, government, etc.)?

+",10869,,,,,42475.58403,How much static analysis history should a company keep?,,2,6,,,,CC BY-SA 3.0,, +308677,1,308684,,1/28/2016 20:29,,5,160,"

I was tutoring a student who came up with this assignment. It basically requires a data structure with the following characteristics:

+ +
    +
  • it holds a set of integers in {1, 2, ..., n}
  • +
  • n is power of 2
  • +
  • O(log(n)) insertion, deletion and maximum
  • +
  • O(1) for determining whether an element is in the set
  • +
  • it uses only O(n) bits of storage
  • +
+ +

Does this data structure even exist?

+",22402,,22402,,42397.86667,42401.72014,Is it possible to have these characteristics in a data structure?,,1,10,1,,,CC BY-SA 3.0,, +308686,1,,,1/28/2016 22:27,,5,299,"

Automated testing has been pretty hyped up in recent years, with particular emphasis on TDD at the ""unit"" level. The touted advantages include things like:

+ +
    +
  • Stabilizing existing code: breaking changes are identified before deployment
  • +
  • Coercing code, to some extent, into smaller, more testable structures
  • +
  • A codebase that self-documents requirements
  • +
  • A codebase that self-documents the usage of its units and modules
  • +
+ +

The perceived disadvantages include things like:

+ +
    +
  • Increased initial development time
  • +
  • Increased lines of code per requirement (test code + implementation)
  • +
  • The patterns required for certain features to be testable are, in some cases, said to drastically increase complexity / lines of actual implementation code required per feature
  • +
  • Increased developer skills requirement
  • +
  • Increased tooling
  • +
+ +

There are also some simple limitations and impediments:

+ +
    +
  • Tests cannot ensure code is bug-free; only the specific, test-encoded requirements
  • +
  • Tests can only validate testable code
  • +
  • Some interfaces and/or hardware may be very/prohibitively time-consuming to mock or fake
  • +
  • Some requirements cannot [yet] be tested automatically (e.g., ""it looks good"")
  • +
+ +

And so, for a given project, either new or ongoing, how should an unbiased, professional programmer weigh these things against each other to determine whether an automated testing suite is a net gain?

+ +

Or, if knowing ahead of time is too fuzzy of a science, having implemented an automated testing suite, can we measure and/or demonstrate its net impact?

+ +
+ +

For example:

+ +

We might conclude ""intuitively"" that a large, widely used financial system benefits from a thorough automated test suite. Without crunching any numbers, the cost of adding many meaningful tests ""feels"" lower than the liability of financial transactions that ""forget"" which accounts the money is really in.

+ +

On the other end, a test suite of any size around a PHP implementation of the echo command is probably going to inflate our development time by 10 or 100 times with no observable gain.

+ +

But, what about the things between the extremes?

+",94768,,-1,,42837.31319,43111.66389,How do you evaluate automated testing for a particular product/project?,,2,13,2,,,CC BY-SA 3.0,, +308692,1,308696,,1/28/2016 22:52,,1,13216,"

I'm having a hard time understanding exactly what is happening in the code here and how this script is changing other functions.

+ +

This is taken from eloquentjavascript.net chapter 5 on higher order functions.

+ +
function noisy(f) {
+  return function(arg) {
+    console.log(""calling with"", arg);
+    var val = f(arg);
+    console.log(""called with"", arg, ""- got"", val);
+    return val;
+  };
+}
+noisy(Boolean)(0);
+
+// → calling with 0
+// → called with 0 - got false
+
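For comparison, here is my own transcription of the same wrapper pattern to Python, which is how I have been trying to reason about the two call steps:

```python
# noisy(f) builds and returns a new function; calling that function logs,
# delegates to the captured f, and passes f's result back out.
def noisy(f):
    def wrapper(arg):
        print("calling with", arg)
        val = f(arg)
        print("called with", arg, "- got", val)
        return val
    return wrapper

result = noisy(bool)(0)   # noisy(bool) returns wrapper; (0) then calls it
print(result)             # → False
```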
+",213229,,,user40980,42397.95625,42397.98403,How does this function returning a function work?,,4,1,1,,,CC BY-SA 3.0,, +308700,1,,,1/29/2016 1:11,,3,1055,"

The language I am using is C#, but I am looking for help with the algorithm more than I am concerned with the language. I have been trying to develop this algorithm for a while now and I can't seem to get it down.

+ +

I have a tree like structure which has many parent/child relationships. So leaf #1 can have children, and the children can have children, and so on and so on. This forms a tree. My problem is that I want to be able to list out every possible path down the tree to the bottom.

+ +

My data is in a C# list of objects and each object has 2 fields, parent and child.

+ +

So my data could look something like this:

+ +
Parent       Child
+
+1              2
+
+1              3
+
+2              4
+
+2              5
+
+3              6
+
+3              7
+
+3              8
+
+4              9
+
+4              10
+
+ +

My real data is more complex than this as I have thousands of rows. In the sample above, here are the results I want to achieve:

+ +
1  2  4  9
+
+1  2  4  10
+
+1  2  4
+
+1  2  5
+
+1  3  6
+
+1  3  7
+
+ +

I did see some other similar solutions on this website, but they were a little different than what I need. A leaf in my tree can have more than 2 children as shown above.

+",213237,,198652,,42472.53611,42472.53611,Find all paths in a tree type of structure,,0,5,,,,CC BY-SA 3.0,, +308705,1,,,1/29/2016 3:48,,-3,263,"

I've been following The Complete Android Developer Course on Udemy. I followed the exact steps, downloading the same files, and I've already set JDK_HOME in the environment variables with the correct path.

+ +

I'm currently using Windows 10 and am not sure whether the files are supported. Any help would be appreciated!

+ +

+ +

+",213260,,,,,42398.17361,Android Studios: Unable to create a New File,,1,3,2,,,CC BY-SA 3.0,, +308715,1,,,1/29/2016 6:39,,0,1610,"

I was thinking of storing all file operations in an array, along with the reverse operations, which are used for undos.

+ +

Example:

+ +
[
+
+  [
+     'op' => 'move',
+     'parameters' => [$path_from, $path_to],
+     'undo' => [
+        'op' => 'move',
+        'parameters' => [$path_to, $path_from],
+     ]
+  ],
+
+
+
+  [
+     'op' => 'create_folder',
+     'parameters' => [$path],
+     'undo' => [
+        'op' => 'remove',
+        'parameters' => [$path],
+     ]
+  ],    
+
+  .........
+
+
+]
+
+ +

This is a list of file operations that were made by the user. The 'op' value corresponds to a function and 'parameters' are the arguments passed to the function.

+ +

When the user requests to undo everything up to the first op, the ""undos"" are executed.

+ +

Will this work? Are there better options? Are there some PHP support classes that I can use to write less code?
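To make the intent concrete, the array above is meant to behave like the following command stack, sketched in Python for brevity (the real code will be PHP):

```python
class OpLog(object):
    # Records each operation together with its inverse.
    def __init__(self):
        self.undos = []

    def record(self, do, undo):
        do()                          # perform the file operation...
        self.undos.append(undo)       # ...and remember how to reverse it

    def undo_all(self):
        while self.undos:
            self.undos.pop()()        # undo in reverse order

# Stand-in "filesystem": a list of created entries.
fs = []
log = OpLog()
log.record(lambda: fs.append("folder"), lambda: fs.remove("folder"))
log.record(lambda: fs.append("file"),   lambda: fs.remove("file"))
print(fs)        # → ['folder', 'file']
log.undo_all()
print(fs)        # → []
```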

+",212913,,,,,42398.44167,Undo/redo implementation for file changes in PHP,,2,0,,,,CC BY-SA 3.0,, +308720,1,,,1/29/2016 8:26,,-1,952,"

The company I'm working for sells HRIS management software. It was built using ASP and MySQL, but one of our clients demands that it be based on MS SQL Server. Our team is made up mostly of junior programmers / fresh grads, and we don't have much experience working with SQL Server or migrating from one DB to another.

+ +

Can MySQL stored procedures be migrated to SQL Server using 'SQL Server Migration Assistant'?

+ +

If so, how?

+ +

If no, what other options do we have?

+",211526,,31260,,42436.50139,42436.50139,Migrating MYSQL functions and stored procedures to SQL Server,,1,7,2,,,CC BY-SA 3.0,, +308725,1,308739,,1/29/2016 8:47,,1,570,"

I'm having a hard time planning how to implement the architecture. The problem:

+ +
>  A user can save a number of profiles:
+>     Name
+>     URL
+>     Time Interval
+
+Name       | URL      |Time Interval
+Sample1    |s.com     |5 mins
+Sample2    |x.com     |2 mins
+Sample3    |xxx.com   |7 mins
+
+ +

The main purpose of the app is to download images at the time intervals set by the user. The problem is how to check every profile on its own interval; for example, the user might create 20 profiles, each with a different time interval.

+ +

We're going to be using WPF and WCF on this one.
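One scheduling approach I am considering, sketched in Python (the real app will be C#/WPF): keep a priority queue of (next due time, profile) entries and always service the earliest one. Simulating the three sample profiles for 10 minutes:

```python
import heapq

# (name, interval in minutes), per the sample profiles above.
profiles = [("Sample1", 5), ("Sample2", 2), ("Sample3", 7)]

heap = [(interval, name, interval) for name, interval in profiles]
heapq.heapify(heap)

fired = []
while heap[0][0] <= 10:               # simulate the first 10 minutes
    due, name, interval = heapq.heappop(heap)
    fired.append((due, name))         # the image download would happen here
    heapq.heappush(heap, (due + interval, name, interval))

for due, name in fired:
    print("t=%d min: download for %s" % (due, name))
```

A single timer (or scheduler thread) sleeping until the earliest due time avoids spawning one timer per profile.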

+",213292,,213292,,42398.38403,42398.51389,How to execute multiple timed intervals?,,2,2,,,,CC BY-SA 3.0,, +308732,1,308747,,1/29/2016 10:33,,2,2693,"

Recently, I came across an issue with the design of a scope guard. A scope guard invokes a supplied function object (usually performing cleanup procedures) upon exiting the enclosing scope. The current design is as follows:

+ +
#define HHXX_ON_SCOPE_EXIT_F(...) ...
+#define HHXX_ON_SCOPE_EXIT(...) HHXX_ON_SCOPE_EXIT_F([&]() { __VA_ARGS__ })
+
+ +

HHXX_ON_SCOPE_EXIT_F(...) executes the function object as defined by __VA_ARGS__ upon exiting the enclosing scope. For example,

+ +
int i = 0;
+{
+  HHXX_ON_SCOPE_EXIT_F([&]() {
+    assert(++i == 4);
+  });
+  HHXX_ON_SCOPE_EXIT_F([&]() {
+    assert(++i == 3);
+  });
+  HHXX_ON_SCOPE_EXIT(
+    assert(++i == 2);
+  );
+  HHXX_ON_SCOPE_EXIT(
+    assert(++i == 1);
+  );
+}
+assert(i == 4);
+
+ +

The implementation is also simple:

+ +
#define HHXX_ON_SCOPE_EXIT_F(...)                                              \
+  auto HHXX_UNIQUE_NAME(hhxx_scope_guard) =                                    \
+    make_scope_guard(__VA_ARGS__)
+
+#define HHXX_ON_SCOPE_EXIT(...) HHXX_ON_SCOPE_EXIT_F([&]() { __VA_ARGS__ })
+
+template <typename F>
+class scope_guard {
+public:
+  explicit scope_guard(F f)
+      : f_(std::move(f)) {
+    // nop
+  }
+  ~scope_guard() {
+    f_();
+  }
+
+private:
+  F f_;
+};
+
+template <typename F>
+auto make_scope_guard(F f) {
+  return scope_guard<F>(std::move(f));
+}
+
+ +

As you can see, it doesn't provide built-in support for abandoning the invocation at scope exit. Providing such a feature, however, would compromise the conciseness and simplicity of the current design. It also introduces some performance overhead. Basically, I'm in favor of maintaining the current design and rejecting the feature proposal. I want to hear opinions from you, so as to be sure I'm making the right decision.
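For reference, Python's standard library faced the same trade-off and chose to include the feature: contextlib.ExitStack can dismiss pending cleanups via pop_all(), which is essentially the API the proposal asks for.

```python
from contextlib import ExitStack

# Guard that runs: the callback fires at scope (with-block) exit.
log = []
with ExitStack() as stack:
    stack.callback(log.append, "cleanup ran")
ran = list(log)

# Dismissed guard: pop_all() detaches the callbacks, so nothing fires here.
log = []
with ExitStack() as stack:
    stack.callback(log.append, "cleanup ran")
    stack.pop_all()
dismissed = list(log)

print(ran, dismissed)  # → ['cleanup ran'] []
```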

+",200711,,200711,,42398.47014,42398.56806,Preferable design of scope guard in C++,,2,11,1,,,CC BY-SA 3.0,, +308736,1,,,1/29/2016 10:48,,1,95,"

I have a question about best practices for designing a database which has to hold the following data:

+ +

There is a page with a questionnaire which has default questions like Firstname, Lastname, Street... (50 more fields). Now people can register on the page and add another custom field to the questionnaire. Here comes the tricky part: this field is only visible on forms the person created. Other people should only see their own custom fields on their own questionnaires.

+ +

What is the best practice for designing something like that in a database? I can imagine two things:

+ +
    +
  1. Create a table for the questionnaire answers where every field is a column. Add another column for a JSON string (nvarchar(max)) which defines the custom fields and contains the answer of a client. A second table holds questionnaire IDs of the user and, in a second column, a JSON string which defines the custom fields.
  2. +
  3. Create a table for the default questionnaire answers where every default field is a column. A second table holds questionnaire IDs of the user. Create a third table which contains the custom fields for every user: Col1 UserID, Col2 QuestionnaireID, Col3 QuestionName, Col4 QuestionType. In a fourth table the answers of the clients are linked to table 3.
  4. +
+ +

Perhaps there is another, better way?

+",213309,,213309,,42398.45625,42398.60417,DB Design for non static front-end,,2,5,,,,CC BY-SA 3.0,, +308741,1,308750,,1/29/2016 12:13,,1,417,"

I'm programming a small lisp/scheme interpreter and I came across the following situation:

+ +

When a quoted list contains lambdas, they are not parsed as lambdas. Here is some sample code (live on repl.it):

+ +
(define list1 '(
+                   (lambda (x) (+ x 1))
+                   (lambda (y) (+ y 2))
+                )
+)
+
+(define add1 (car list1))
+(print (add1 1))
+
+ +

and the result is :

+ +
Error: ('lambda ('x) ('+ 'x 1)) is not a function [add1]
+
+ +

Is this normal behaviour? I thought lambdas were special forms that should always be bound.

+ +

If it is indeed the expected behaviour: when my parser parses the lambda and wraps it in an object (say, of type LambdaWrap), and my interpreter returns that object unevaluated, then I guess this is wrong behaviour, because it's supposed to return some unbound symbols instead. Is that right?

+",164491,,7422,,42398.51319,42398.72847,Scheme : Lambda inside quoted list is unbound,,1,2,,,,CC BY-SA 3.0,, +308743,1,308751,,1/29/2016 12:26,,1,1457,"

Is there a formula, or generally accepted approach?

+ +

Should I be measuring database updates and averaging their frequency then using that to determine how often clients should poll the server?

+ +

I would like all clients to have up to date data, but there's no point in polling the server every second if the data only changes once a week.

+ +
+ +

It is a browser-based app, with MySQL on a server.

+ +

I thought that I would send a CRC of the data from the client with each poll. If it matches, then I don't need to re-send the data.

+ +

I am still unsure how often to poll, though.
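One heuristic I am considering (the bounds here are made up): adaptive polling with exponential backoff, resetting to the fast interval whenever a poll actually finds changed data.

```python
MIN_INTERVAL, MAX_INTERVAL = 5, 300      # seconds; hypothetical bounds

def next_interval(current, data_changed):
    if data_changed:
        return MIN_INTERVAL                # data is hot: poll fast again
    return min(current * 2, MAX_INTERVAL)  # quiet: back off, capped

interval = MIN_INTERVAL
history = []
for changed in [False, False, False, True, False]:
    interval = next_interval(interval, changed)
    history.append(interval)

print(history)  # → [10, 20, 40, 5, 10]
```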

+",979,,1204,,42398.64236,42398.64236,How to calculate how often to poll the server for new data?,,1,4,,42407.86111,,CC BY-SA 3.0,, +308745,1,,,1/29/2016 12:53,,3,81,"

Is there any way to find every github project which:

+ +
    +
  • is GPL licensed
  • +
  • has over one million lines of C++ code
  • +
+ +

I am imagining that the Github API was designed to automate such requests, but I am not sure at all.

+ +

In other words, is the GitHub API related to searching within all the existing github projects?

+ +

Another request might be to find every project in C or C++ having a single line containing fork (); with only spaces before and after that. This would approximate (ignoring those few who would code pid_t p = on one line and fork(); on the next) the quest for badly written programs which do not keep the result of fork (as they should).

+ +

PS. Google Code Search was somewhat useful for such things in the past. It has vanished.

+",40065,,-1,,42878.48125,42398.54375,mining (& searching in) github projects,,0,2,2,,,CC BY-SA 3.0,, +308749,1,,,1/29/2016 13:44,,8,2819,"

I am developing an e-commerce product; I have been able to implement all functionality and am left with allowing users to create additional attributes for a product. Right now I have two options.

+ +

EAV

+ +

EAV is largely frowned upon, but seems to work for Magento. However, after researching all the headaches it causes, I am a bit reluctant to use it.

+ +

Use JSON Columns in MySql 5.7

+ +

This is rather new and I have not seen it being implemented anywhere else, and I am dreading full table scans as a result of querying the JSON attributes. But after reading this MySql 5.7 JSON they seem to recommend using JSON, and it would be less tiresome than implementing something like this Practical MySql schema advice.

+ +

My question is: although I am biased towards using the JSON column way of storing attributes, since NoSQL is not an option for me, are there any drawbacks that are more severe than using EAV tables?

+",213325,,-1,,42837.31319,42754.75764,Using MySql 5.7 JSON columns for EAV,,1,1,,,,CC BY-SA 3.0,, +308758,1,,,1/29/2016 14:38,,2,1599,"

Is it true that ngBind and ngModel are very similar, in that both bind the model (the data) to either a static text element or a changeable element, such as a text input box, select element, or textarea?

+ +

In that case, wouldn't it make sense just to have one name, say ngModel, and let it work for all elements (so we don't need ngBind)?

+ +

This almost feels like polymorphism, with which ngModel would have one behavior for changeable elements and another behavior for unchangeable elements.

+",5487,,,,,42953.64583,"if ngModel is for changeable element and ngBind is for unchangeable element, couldn't they just both have the same name?",,3,1,,,,CC BY-SA 3.0,, +308760,1,,,1/29/2016 14:50,,1,89,"

I am trying to do a nightly package integration with one of our projects that consumes 6 other internal teams’ -PreRelease packages.

+ +

Background Info

+ +
    +
  • We have our own internal ProGet Server that hosts only our company packages.
  • +
  • Package IDs are in the format of: MyCompany.Project.Component
  • +
  • We have VS2015 Enterprise ($$), so our build system only has the VS 2015 Build tools installed.
  • +
  • We use CruiseControl.NET ( will be going to TFS later this year )
  • +
+ +

My integration script ( Powershell ) is something like:

+ +
#Restores all the packages 1st
+nuget restore ( Get-ChildItem $Dir -filter *.sln -Recurse ).FullName 
+
+#Gets all the packages.configs
+foreach ( $PackageFile in (Get-ChildItem $RepoDir -filter packages.config -Recurse).FullName )) 
+{ 
+    #Gets all the packages
+    nuget list $PackageFile  
+
+    #Filters my companies packages
+    | Select-String -Pattern 'MyCompany.*' -AllMatches 
+
+    #Removes package version and only gets the package ID
+    | % { [regex]::Split($_, "" "")[0] } 
+
+    #Updates the packages
+    | Start-Process nuget -ArgumentList ""update $PackageFile -ID $_ -Source MyCompnySrc’s -Prerelease -Verbosity detailed"" -Wait -NoNewWindow
+}
+
+ +

Then build the solution, execute unit/integration tests then either check in the new package.configs or report there are some breaking changes.

+ +

I thought about splitting MyCompany’s packages into a separate package config, but thought it would be wiser to get advice before restructuring all the teams' repos and implementing this on other projects.

+ +

I am new to Release Engineering and have that feeling I am making this too complicated.

+",213339,,31260,,42398.69306,42398.69306,Continous integration of only MyCompanies nuget packages,,0,5,,,,CC BY-SA 3.0,, +308761,1,308783,,1/29/2016 15:07,,0,1522,"

I am learning Particle Swarm Optimization.

+ +

The problem is to find the pair of numbers whose sum is lowest.

+ +

Lets say we have some numbers z1,z2,z3,z4

+ +
Number  Value
+ z1     -2
+ z2     -3
+ z3      3
+ z4     -5
+
+ +

The goal is to find the pair of numbers whose sum is minimal: (z2, z4).

+ +

Initially we have 3 available pairs to choose from: (z1,z2), (z2,z3), (z3,z4).

+ +

Questions

+ +
    +
  1. How can I formulate this as an optimization problem? I want to be able to include different combinations as well such as the combinations (x1,x2,x3,x4),(x4,x5,x6,x7) etc.

  2. +
  3. Can PSO work with this situation?

  4. +
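As a sanity check for whatever PSO encoding I end up with, the tiny example above can be brute-forced (Python); any correct formulation should converge to the same pair:

```python
from itertools import combinations

values = {"z1": -2, "z2": -3, "z3": 3, "z4": -5}

# Candidate solutions are unordered pairs; the fitness is the pair's sum,
# and PSO (or anything else) should converge to the minimum found here.
best = min(combinations(values, 2), key=lambda pair: sum(values[p] for p in pair))
print(sorted(best), sum(values[p] for p in best))  # → ['z2', 'z4'] -8
```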
+",213333,,52929,,42398.85347,42398.85694,"Approaching Particle Swarm Optimization with optimization problem, how to use PSO with it?",,1,0,1,,,CC BY-SA 3.0,, +308762,1,308770,,1/29/2016 16:04,,1,410,"

My professor gave this example in a lecture:

+ +
+

Example: Given an integer N, print out the values 1…N.

+ +

for (int i=1; i<=N; i=i+1) { System.out.print(i); }

+
+ +

The professor said that the loop was O(n) because it printed the values 1 to N. However, I thought that Big O notation referred to the number of items in the input data; therefore it would be O(1) (which is technically equivalent to O(n), as the input size is 1) due to it only accessing a single data item once.

+ +

Am I right in thinking this?
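To make the disagreement concrete, here is a Python stand-in for the Java loop that simply counts its basic operations:

```python
def count_prints(n):
    # Mirrors: for (int i=1; i<=N; i=i+1) { System.out.print(i); }
    steps = 0
    for i in range(1, n + 1):
        steps += 1        # one print per iteration
    return steps

print(count_prints(10), count_prints(1000))  # → 10 1000
```

The work grows linearly with the value of N, which is what the professor's O(n) refers to, even though the input is a single integer.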

+",202968,,,,,43647.35833,Big O Notation of an example,,3,0,1,,,CC BY-SA 3.0,, +308766,1,308772,,1/29/2016 16:29,,6,2395,"

I'm asking in terms of a loop, obviously break is important in switch statements. Whether or not switch statements themselves are code smells is a separate issue.

+ +

So consider the following use cases for iterating a data structure:

+ +
    +
  1. You want to do something to the entire structure (no break needed)
  2. +
  3. You want to do something to part of a data structure.
  4. +
  5. You want to find something(s) in the data structure (which may or may not involve iterating the entire structure)
  6. +
+ +

The above list seems more-or-less exhaustive to me, maybe I'm missing something there.

+ +

Case 1 can be thrown right out, we can use map/forEach. Case 2 sounds like filter or reduce would work. For case 3, needing to iterate the data structure to find something seems plain wrong, either the data structure itself should provide a relevant method or you are likely using the wrong data structure.

+ +

While not every javascript data structure (or implementation) has those methods, it's trivially simple to write the relevant functions for pretty much any javascript data structure.
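
For case 3 in particular, a lazy ""find"" gives the early exit without an explicit break (Array.prototype.find in JavaScript); a sketch of the idea in Python, just to keep it short:

```python
def find_first(pred, items):
    # lazy search: the generator stops being consumed at the first
    # match, giving the same early exit a `break` would provide
    return next((x for x in items if pred(x)), None)

seen = []

def noisy_is_even(x):
    seen.append(x)  # record which elements were actually inspected
    return x % 2 == 0

result = find_first(noisy_is_even, [1, 3, 4, 5, 6])
```

The point is that laziness, not break, stops the iteration at the first match: only 1, 3 and 4 are ever inspected.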

+ +

I saw this when researching but it is explicitly in the context of C/C++. I could understand how it would be more-or-less a necessity in C, but my understanding is that C++ has lambdas and many data structures are generally objects (e.g. std::vector, std::map), so I'm not sure I understand why the answers are all so universally in favor of break there, but I don't feel I know C++ well enough to even begin to comment.

+ +

I also realize that for some corner-case exceedingly large data structure the cost of iterating the entire structure may be unacceptably high, but I doubt those are very common when working even in node.js. Certainly it's the kind of thing you'd want to profile.

+ +

So I just don't see a use-case for break in today's javascript. Am I missing something here?

+",176474,,-1,,42878.52778,42403.89722,Is break a code smell?,,3,13,1,,,CC BY-SA 3.0,, +308767,1,,,1/29/2016 16:33,,7,265,"

I'm revisiting old code, and I noticed that the main logic is in one method, which is longer than I'd like (~60 lines). So I split it, and there's a natural seam on which to do so: the first half gathers parameters from UI objects, validates them, confirms a file overwrite if necessary, and logs that the procedure start. The second half is the actual meat: it opens a file, connects to a service, and records data. This works well because the second method is now interface-agnostic, it just accepts parameters.

+ +

However, I'm not clear on how to name these two halves. The best I've come up with is GatherParameters() and RecordData(). I feel like there should be a common name for this: one method which collects and vets inputs and one method which accepts (and trusts) those parameters and just gets on with the doing.

+ +

To be clear, I'm not asking about what this type of refactoring is called (""extraction,"" I guess), but rather for the resulting code pattern of ""Prep/Test"" method and ""Do"" method.

+ +

Incidentally, the vague name for RecordData() applies in this case, because the methods retrieve data from a user-selected table in Salesforce. It's not known at compile time what type of data will be recorded.

+",,user25946,,user25946,42398.70278,42398.78958,"Is there a term for splitting a function into ""prep"" and ""do"" halves?",,1,4,2,,,CC BY-SA 3.0,, +308768,1,,,1/29/2016 16:44,,1,4075,"

I have an Entity Framework class that was derived from the database layout. I was wondering if there are any problems with extending the class by creating another .cs file and using the same public partial class to add additional properties?

+ +

Much like you would do when creating a file for data annotations. I have tested it, and it works for my MVC application. It allows me to pass some dynamic properties to the view model while still using the original model from EF 'and the additional file I created'.

+ +

Does this pose any problems, or is there a better solution?

+",213343,,31260,,42398.69792,42398.89097,C# extending an Entity Framework class,,1,0,,,,CC BY-SA 3.0,, +308773,1,308774,,1/29/2016 17:06,,0,111,"

I found a bug. Yay for me.

+ +

The bug is such a mix of technologies I am not sure who I should submit the bug to.

+ +

The bug is a mix of Adobe Experience Manager and Angular Materials. Neither is really at fault. It has to do with some very clever javascript inside of AEM and the use of Internet Explorer Media Query hacks that are used inside of Angular Materials.

+ +

It may even be the browsers at fault.

+ +

For those not aware of the Internet Explorer media query hack, it involves writing a media query like this:

+ +

@media screen\0

+ +

The real problem is that this creates an invalid character in the DOM's CSS rule property. You can see it on this CodePen if you're really interested; just look at your browser's console.

+ +

http://codepen.io/TylersDesk/pen/gPzZwJ

+ +

Proprietary javascript inside of AEM can be hacked to work and check for the invalid character and it fixes the issue, but then I am modifying code I shouldn't and breaking upgrade paths.

+ +

Angular Materials is using this Media Query Hack in their production codebase, but it is a pretty mainstream hack from what I can tell.

+ +

When two libraries collide in such a strange way, with nobody truly at fault, who do I submit my bug to?

+",198001,,,,,42398.73611,Who should I submit a bug to when two vendor libraries collide?,,1,1,,,,CC BY-SA 3.0,, +308776,1,,,1/29/2016 17:47,,3,546,"

I have a remote machine where multiple developers work on the same repository. We use gitolite and SSH keys to manage git access.

+ +

As for now we code on the local machines, commit, push, pull remotely and test. It works, but is time consuming and annoying.

+ +

If we edit code on the remote repo, we can commit and push; the forwarding agent gives the correct access rights to the repository, but the committer name and email are still the ones configured on the server.

+ +

Tried setting up environment variables:

+ +
GIT_COMMITTER_NAME=name
+GIT_COMMITTER_EMAIL=mail@example.com
+
+ +

But it still uses the data from git config (note that the author fields come from GIT_AUTHOR_NAME / GIT_AUTHOR_EMAIL, not from the committer variables). If I unset those, git complains during the commit.

+ +

Is there a way to configure the email and name independently for each user, e.g. through the SSH keys passed by the forwarding agent?

+ +

A nasty workaround is to mount the remote directory locally and then commit on the local machine. Nasty, and it takes ages.

+",213356,,325277,,43817.26806,44207.37569,Multiple git committers on same repo and machine with forwarding agent,,1,7,1,,,CC BY-SA 3.0,, +308796,1,308799,,1/29/2016 23:55,,7,4917,"

How would I write a unit test, say JUnit, for validating an XML file?

+ +

I have an application that creates a document as an XML file. In order to validate this XML structure, do I need to create an XML mock object and compare each of its elements against the ones in the 'actual' XML file?

+ +

IF SO: Then what is the best practice for doing so? e.g. would I use the XML data from the 'actual' XML file to create the mock object? etc.

+ +

IF NOT: What is a valid unit test?
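
One option besides mocks is to compare the produced document against a small expected document structurally (in Java, XMLUnit does exactly this; validating against an XSD schema is the other common route). A rough sketch of the structural-comparison idea, shown here with Python's standard library for brevity:

```python
import xml.etree.ElementTree as ET

def xml_equal(a, b):
    # structural comparison: tag, attributes, trimmed text and children;
    # insignificant whitespace between elements is ignored
    if a.tag != b.tag or a.attrib != b.attrib:
        return False
    if (a.text or '').strip() != (b.text or '').strip():
        return False
    if len(a) != len(b):
        return False
    return all(xml_equal(x, y) for x, y in zip(a, b))

expected = ET.fromstring('<doc><title>Report</title></doc>')
actual = ET.fromstring('<doc>\n  <title>Report</title>\n</doc>')
```

The test then asserts xml_equal(expected, actual) instead of comparing raw strings, so formatting differences don't cause false failures.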

+",203352,,31260,,42465.51806,42465.51806,Writing a valid test case for validating XMLs,,1,1,,,,CC BY-SA 3.0,, +308797,1,308811,,1/30/2016 0:08,,0,131,"

I'm developing a simple application that crawls web pages to obtain some information. For this I used and tested several libraries, like crawler4j, jsoup, Jaunt and HtmlUnit. +I have switched from one API to another several times because I perceived that one served a certain feature better than the one I was using. I may have to do it again, and every time I do so I have to go through the code making various refactorings.

+ +

So I decided to separate the calls to these APIs into encapsulating classes that hold all the operations I have to perform with each API.

+ +

Is there a design pattern to mitigate this problem?

+ +

Below is a simple example where I use the ""Handler"" suffix:

+ +

EDIT the final version:

+ +
public interface CrawlerApiHandler {
+    String visit (String url);
+}
+
+public class JsoupCrawlerApiHandler implements CrawlerApiHandler {
+    public static final String USER_AGENT = ""Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36"";
+
+    @Override
+    public String visit(String url) {
+        try {
+            return Jsoup.connect(url).timeout(20000)
+                    .userAgent(USER_AGENT).get().toString();
+        } catch (IOException e) {
+            //LOG
+            return """";
+        }
+    }
+}
+
+public class JauntApiHandlerImpl implements CrawlerApiHandler {
+    UserAgent userAgent;
+
+    public JauntApiHandlerImpl(UserAgent userAgent) {
+        this.userAgent = userAgent;
+    }
+
+
+    @Override
+    public String visit(String url) {
+        try {
+            return userAgent.visit(url).toString();
+        } catch (ResponseException e) {
+            return """";
+        }
+    }
+}
+
+",175424,,175424,,42399.09097,42399.27986,Design pattern to holds API exchanges?,,2,3,,42421.81667,,CC BY-SA 3.0,, +308807,1,308808,,1/30/2016 5:12,,5,3044,"

Lets assume I have following base class:

+ +
public class Base 
+{
+    public int Id {get; set;}
+    public string SomeText {get; set;}
+    public string SomeOtherText {get; set;}
+
+    public static Base BuildFromOtherAssembly(ClassFromAssemblyA source)
+    {
+        // a static method cannot assign instance members; build and
+        // return a new instance instead
+        return new Base
+        {
+            Id = source.Id,
+            SomeText = source.SomeText,
+            SomeOtherText = source.SomeOtherText
+        };
+    }
+}
+
+public class Derived : Base 
+{
+    public DataFromAssemblyB DataFromAssemblyB {get; set;}
+
+    public Derived(DataFromAssemblyB dataFromAssemblyB)
+    {
+        DataFromAssemblyB = dataFromAssemblyB;
+    }
+}
+
+ +

The base class gets built and returned from an assembly that references AssemblyA to AssemblyB, which does not know anything about AssemblyA.

+ +

AssemblyB now wants to add its own data to the Base class, and wants to make sure the return type indicates that the additional data is present.

+ +

Basically, I want to make sure on a compiler level that DataFromAssemblyB is present.

+ +

I know that composition would work well in this scenario, but the problem is that logically DataFromAssemblyB belongs to Base.

+ +

To give a more real world example, imagine this scenario:

+ +
public class ShoppingCart 
+{
+    public int Id {get; set;}
+    public string OwnerName {get; set;}
+    public decimal Value {get; set;}
+
+    public static ShoppingCart BuildFromDataModel(ShoppingCartDatabaseModel source)
+    {
+        // return a new instance (and return ShoppingCart, not Base)
+        return new ShoppingCart
+        {
+            Id = source.Id,
+            OwnerName = source.OwnerName,
+            Value = source.Value
+        };
+    }
+}
+
+public class ShoppingCartWithContents : ShoppingCart 
+{
+    public ShoppingCartContent Content {get; set;}
+
+    public ShoppingCartWithContents (ShoppingCartContent content)
+    {
+        Content = content;
+    }
+}
+
+ +

I hope this explains it. The reason I have this problem is because ShoppingCart comes from the database layer, while ShoppingCartContent comes from the caching layer, and an intermediary layer is joining the data together.

+ +

I want strong typing so it's always clear to the developer whether they are dealing with a ShoppingCart that has contents loaded, or a ShoppingCart that does not.

+ +

I want to avoid mapping all properties around by hand, as that could introduce bugs should a developer decide to add a property to ShoppingCart, but forget to map that property in ShoppingCartWithContents.

+ +

Please let me know what the best practice in such a situation is.

+ +

Update:

+ +

I created a few different variations I came up with on Github, would love to get some feedback and thoughts.

+ +

Composition? Inheritance? Gist

+ +

The premise is the same - we create a shopping cart object from a mock database model, and then try to add content data to it.

+ +

To test various capabilities, I thought it would be nice if the ShoppingCart has a .AddContents(content) method that allows to cleanly create a ShoppingCartWithContents.

+ +

As an additional functionality test, and as a stand-in for business logic, I added a GetFriendlyString method which should still be accessible once items have been added.
+In some samples I overrode GetFriendlyString to append an extra ""With Items"" to show that the base functionality can be extended.

+ +

I tried to summarize my feelings on the various approaches at the top.

+ +

Update 2:

+ +

I wanted to give a bit more information on the circumstances, as that may influence what the best way is.

+ +

This is a web API project, with EF as the ORM. My business objects like ShoppingCart are hand-mapped from EF's DB models to pick only the properties relevant for the business task. Thus, my business logic is not constrained by the ORM or data layer (in fact, the assembly that contains ShoppingCart does not even reference EF).

+ +

Updates and creates are handled through their own business models; for example, if a shopping cart needs to be updated, there would be a ShoppingCartUpdate class that has validation rules and only knows how to map its changes to the database model.

+ +

Some data is cached in Redis, and some situations call for creating business objects from both EF and Redis. In the shopping cart example for instance, some business logic may only need to know if a user has a shopping cart or not, but other business logic that has to sum up the price relies on that extra information being available.

+ +

I want to avoid using bools like HasDataLoaded, and I also want to avoid nullable types, as that inevitably leads to repeated null checks and if statements all over the codebase.

+ +

Worse yet, if a dev forgets an if / null check it could potentially try to ship out an empty order or something stupid like that.

+ +

That's why I thought it would be a better approach to enforce the presence or absence of data using unique classes - a checkout method would only take a ShoppingCartWithContents parameter, thus ensuring that the content has been loaded.

+ +

I hope that makes sense.

+",139649,,139649,,42418.05347,42418.94375,Cleanest way to expand a base class without explicitly mapping properties in C#,,2,1,,,,CC BY-SA 3.0,, +308809,1,,,1/30/2016 6:06,,1,145,"

I'm designing a fairly simplistic stack-based programming language, and implementing it in Python. (no link, because it's not fully implemented yet.)

+ +

The language itself is essentially intended to be the actually usable lovechild of Forth and Joy, but JIT-compiled (mainly so I can learn to write a compiler).

+ +

Being Python, there is no ""Virtual Machine"" to speak of in the normal sense, but the runtime is a communication between a stack machine (a module's worth of classes implementing various features) and the lexerless ""parser"" which really just turns bytes into function calls at runtime.

+ +

Real native code does not have an idea of exceptions (obviously, being physical instructions, they don't really care either way), but the kernel can tell when things get out of hand, and so it and the memmapper are in a way the handler for errors in platform-native code.

+ +

By the language's nature, a large portion of Python's control flow is implemented through exceptions and handling them. However, the stack machine is meant to closely resemble a real stack machine in that giving it bad instructions will get you a slap in the face. In my opinion, while code littered with try/except/else may well be Pythonic, sometimes it gets a bit much, especially when they're nested.

+ +

Do the exception handlers go nearest the raw data and functions, making them possibly easier to debug, or do they go alongside the (minimal) type-checking done by the runner, leaving the stack machine to be a Stupid Machine that returns None sometimes?

+",201219,,102438,,42399.67708,42399.79861,Does exception handling belong at the lowest level of the runtime?,,1,11,,,,CC BY-SA 3.0,, +308817,1,308828,,1/30/2016 8:59,,29,7892,"

I was once advised that a C++ program should ultimately catch all exceptions. The reasoning given at the time was essentially that programs which allow exceptions to bubble up outside of main() enter a weird zombie state. I was told this several years ago and in retrospect I believe the observed phenomenon was due to lengthy generation of exceptionally large core dumps from the project in question.

+ +

At the time this seemed bizarre but convincing. It was totally nonsensical that C++ should ""punish"" programmers for not catching all exceptions but the evidence before me did seem to back this up. For the project in question, programs that threw uncaught exceptions did seem to enter a weird zombie state -- or as I suspect the cause was now, a process in the midst of an unwanted core dump is unusually hard to stop.

+ +

(For anyone wondering why this wasn't more obvious at the time: The project generated a large amount of output in multiple files from multiple processes which effectively obscured any sort of aborted (core dumped) message and in this particular case, post-mortem examination of core dumps wasn't an important debugging technique so core dumps weren't given much thought. Issues with a program usually didn't depend on state accumulated from many events over time by a long lived program but rather the initial inputs to a short lived program (<1 hour) so it was more practical to just rerun a program with the same inputs from a debug build or in a debugger to get more info.)

+ +

Currently, I'm unsure of whether there is any major advantage or disadvantage of catching exceptions solely for the purpose of preventing exceptions from leaving main().

+ +

The small advantage I can think of for allowing exceptions to bubble up past main() is that it causes the result of std::exception::what() to be printed to the terminal (at least with gcc compiled programs on Linux). On the other hand, this is trivial to achieve by instead catching all exceptions derived from std::exception and printing the result of std::exception::what() and if it's desirable to print a message from an exception that doesn't derive from std::exception then it must be caught before leaving main() in order to print the message.

+ +

The modest disadvantage I can think of for allowing exceptions to bubble up past main() is that unwanted core dumps may be generated. For a process using a large amount of memory this can be quite a nuisance and controlling core dumping behavior from a program requires OS-specific function calls. On the other hand, if a core dump and exit is desired then this could instead be achieved at any time by calling std::abort() and an exit without core dump can be achieved at any time by calling std::exit().

+ +

Anecdotally, I don't think I've ever seen the default what(): ... message printed by a widely distributed program upon crashing.

+ +

What, if any, are the strong arguments for or against allowing C++ exceptions to bubble up past main()?

+ +

Edit: There are a lot of general exception handling questions on this site. My question is specifically about C++ exceptions that cannot be handled and have made it all the way to main() -- maybe an error message can be printed but it's an immediately show stopping error.

+",128967,,128967,,42399.38958,42405.59236,Should a C++ program catch all exceptions and prevent exceptions from bubbling up past main()?,,7,6,3,,,CC BY-SA 3.0,, +308821,1,308831,,1/30/2016 10:29,,-1,63,"

Normally new rows end up at the end of a SQL table. I want to query a single row only.

+ +

So the MySQL command is:

+ +
SELECT id FROM users WHERE country='india' LIMIT 1
+
+ +

This will give me the first entry with the country 'india'. But I want to get the last person, so I used ORDER BY with DESC:

+ +
SELECT id FROM users WHERE country='india' ORDER BY id DESC LIMIT 1
+
+ +

This gives me the desired result, but I wanted to know if it's the proper way to achieve it.

+ +

What does ORDER BY actually do? +Does it fetch the results and then order them descending, or does it query from the end of the table?
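
Conceptually the engine filters (WHERE), then sorts the filtered rows (ORDER BY), then keeps one (LIMIT); it does not scan ""from the end of the table"", since tables have no inherent order. With a suitable index, though, the planner can walk the index backwards and stop after one row instead of sorting everything. A small sqlite3 demonstration (the table and data are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, country TEXT)')
conn.executemany('INSERT INTO users (id, country) VALUES (?, ?)',
                 [(1, 'india'), (2, 'usa'), (3, 'india')])
# WHERE filters, ORDER BY sorts the filtered rows, LIMIT keeps one;
# the highest id matching the filter comes back
last_id = conn.execute(
    'SELECT id FROM users WHERE country = ? ORDER BY id DESC LIMIT 1',
    ('india',)).fetchone()[0]
```

Whether the engine materializes and sorts, or reads an index in descending order, is a planner decision; the query result is the same either way.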

+",213432,,,,,42400.67083,Querying Results from End of the table,,1,2,0,,,CC BY-SA 3.0,, +308826,1,,,1/30/2016 12:23,,2,291,"

I would like to create a library consisting of two layers, let's call them A and B. There should be a class ""Sample"" in layer A. Layer B also knows about class ""Sample"" and extends it with some methods.

+ +

Then I would like to have another project referencing this library, so I can have the following scenarios for an instance of ""Sample"", let's call it sample_instance:

+ +
1) using A;
+
+ +

Typing ""sample_instance."", Intellisense will show me all the methods that are defined in layer A.

+ +
2) using B;
+
+ +

Typing ""sample_instance."", Intellisense will show me all the methods that are defined in layer B.

+ +
3) using A;
+   using B;
+
+ +

Typing ""sample_instance."", Intellisense will show me all the methods that are defined in layer A and B.

+ +

I've been thinking about this for many days now, but I'm not able to find a proper solution. I've been experimenting with extension methods in layer B, which gets scenarios 1) and 3) to work. But I can't realise 2) this way, because the project doesn't know the class ""Sample"" if I only do ""using B"".

+ +

Does someone have a good idea of how to design this?

+",213424,,,,,42648.57917,"What's the best way to create a two-level library, containing a class with different levels of abilities?",,2,4,,,,CC BY-SA 3.0,, +308829,1,308835,,1/30/2016 13:35,,20,1629,"

I have often heard developers mention that Java can't ""do Real Time"", meaning a Java app running on Linux cannot meet the requirements of a deterministic real-time system, such as something running on RIOT-OS, etc.

+ +

I am trying to understand why. My SWAG tells me that this is probably largely due to Java's Garbage Collector, which can run at any time and totally pause the system. And although there are so-called ""pauseless GCs"" out there, I don't necessarily believe their advertising, and also don't have $80K-per-JVM-instance to fork over for a hobby project!

+ +

I was also reading this article about running drone software on Linux. In that article, the author describes a scenario where Linux almost caused his drone to crash into his car:

+ +
+

I learnt a hard lesson after choosing to do the low level control loop (PIDs) on the Pi - trying to be clever I decided to put a log write in the middle of the loop for debugging - the quad initially flied fine but then Linux decided to take 2seconds to write one log entry and the quad almost crashed into my car!

+
+ +

Now although that author wrote his drone software in C++, I would imagine a Java app running on Linux could very well suffer the same fate.

+ +

According to Wikipedia:

+ +
+

A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed.

+
+ +

So to me, this means ""You don't have real-time if total correctness requires logical correctness and timeliness.""

+ +

Let's pretend I've written a Java app to be super performant, and that I've ""squeezed the lemon"" so to speak, and it couldn't reasonably be written (in Java) to be any faster.

+ +

All in all, my question is: I'm looking for someone to explain to me all/most of the reasons why a Java app running on Linux would fail to be a ""real time app"". Meaning, what are all the categories of things in a Java/Linux stack that prevent it from ""being timely"", and therefore from being ""totally correct""? As mentioned, it looks like GC and Linux log-flushing can pause execution, but I'm sure there are more things outside the Java app itself that would cause bad timing/performance and cause it to miss hard deadline constraints. What are they?

+",154753,,,,,42425.43264,"What are the reasons for why a Java/Linux stack fails to be ""real time""?",,3,3,2,,,CC BY-SA 3.0,, +308833,1,308840,,1/30/2016 13:57,,4,34288,"

I have got a text file called ""vholders.txt"".

+ +

I am creating multiple threads, as you can see below; those threads work with their own given data and finally write their own output to vholders.txt. But I get an IOException because the file is being used by another thread. So how can I write to the vholders.txt file without colliding with the other threads? The sequence in which the threads write doesn't matter.

+ +

this is my code:

+ +
public void execute()
+    {
+        for(int x=0;x<entered_length;x++)
+        {
+            ThreadPool.QueueUserWorkItem(new WaitCallback(PooledProc),x);
+        }
+
+    }
+    private void PooledProc(object x_)
+    {
+        string output = string.Empty;
+        //does the processing...and assign output its value...
+        /*this is where I get error*/
+        StreamWriter sw = File.AppendText(""vholders.txt""); //error, file is being used by another process
+        sw.WriteLine(output);
+        sw.Close();
+        /*Now how can I write the output value to the text file vholders.txt without getting IO Exception*/
+    }
+
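
The usual fix is to serialize the writes with a lock; a sketch of the idea in Python (the shape carries over directly to C#):

```python
import os
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), 'vholders.txt')
write_lock = threading.Lock()

def pooled_proc(x):
    output = 'line %d' % x
    # only one thread at a time may open and append to the file
    with write_lock:
        with open(path, 'a') as f:
            f.write(output + '\n')

threads = [threading.Thread(target=pooled_proc, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

with open(path) as f:
    lines = f.read().splitlines()
```

In C# the equivalent is a lock statement (on a shared object) around the File.AppendText block, or, often cleaner, a single dedicated writer thread consuming lines from a ConcurrentQueue&lt;string&gt; filled by the workers.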
+",196377,,,,,42399.63889,How to let multiple threads write on the same file,,1,5,2,42402.43611,,CC BY-SA 3.0,, +308836,1,,,1/30/2016 14:22,,1,106,"

I have read the AGPL3 (http://www.gnu.org/licenses/agpl-3.0.html) many times, but I have a question; maybe somebody with deeper knowledge can clarify something for me.

+ +

I would like to use some source code which is under the AGPL3, and I'd like to add a layout over it (a new design; I mean just the design: CSS, markup). I'm not referring to new backend features, just the visuals.

+ +

In this case, do I have to share all that visual work within the source code?

+",213450,,31260,,42399.59931,42399.97014,AGPL affects layout/design?,,1,2,,,,CC BY-SA 3.0,, +308842,1,308857,,1/30/2016 17:23,,32,4169,"

Until now, I always believed that you should learn programming languages that make you do low-level stuff (e.g. C) to understand what's really happening under the hood and how the computer really works. This question, this question and an answer from this question reinforced that belief:

+ +
+The more I program in the abstracted languages, the more I miss what got me into computers in the first place: poking around the computer and seeing what twitches. Assembler and C are very much suited for poking :) +
+ +

Eventually, I thought you will become a better programmer knowing this, because you'll know what's happening rather than assuming that everything is magic. And knowing/writing low-level stuff is much more interesting than writing business programs, I think.

+ +

But a month ago, I came across this book called Structure and Interpretation of Computer Programs. Everything on the web suggests that this is one of the best computer science books, and that you will get better as a programmer by reading it.

+ +

I'm really enjoying the concepts a lot. But I find that the book makes it seem that abstraction is the best concept in computer science while only spending one chapter on the low-level part.

+ +

My goal is to become a better programmer and to understand computer science more, and this got me really confused. Shouldn't we mainly avoid all abstractions and observe what is really happening at the very low level? I know why abstraction is great, but doesn't it prevent you from learning how computers work?

+",144461,,-1,,42837.31319,42558.66667,"Help in understanding computer science, programming and abstraction",,13,30,15,42400.87778,,CC BY-SA 3.0,, +308846,1,308847,,1/30/2016 18:18,,1,126,"

I am developing a commercial closed-source application using Qt, which is under the LGPLv3.

+ +

May I prevent my clients from giving copies to other people?

+",213473,,213473,,42399.78889,42399.78889,Prevent my clients from giving or selling my software using a LGPLv3 library to others,,1,0,,,,CC BY-SA 3.0,, +308848,1,,,1/30/2016 18:35,,3,128,"

I have been trying to implement an optimization for 2D sprite rendering to fight the problem of limited fillrate on mobile devices. The idea is to render textured polygons instead of quads; the polygons map to the visible portions of an image with transparency, discarding the invisible parts from processing.

+ +

Thus far I have implemented at least two approaches that let me find the set of points (outline) of an image with transparency data: Marching Squares (exact outline), and a more flexible one that also allows me to introduce much-needed padding (Euclidean Distance Transform).

+ +

I have also prepared a triangulation with Ear Clipping, what I am left with is simplification of the set of points generated by first two algorithms mentioned.

+ +

Because of the nature of the problem, I need an algorithm that can eliminate points, and possibly move or introduce new points, resulting in an area-optimized version of the original shape, but that also guarantees that the new shape encloses the previous one (as I don't want to cut off visible parts of the image I am about to render).

+ +

I googled and checked my options, but so far I haven't found anything useful. I was looking at the Ramer–Douglas–Peucker line simplification algorithm, but it doesn't seem to provide the guarantees I need.

+ +

Is the problem I am facing common? Does it have a name? Any suggestions on how I should approach it?

+ +

So far the most obvious simplification was to remove the points that do not add to the curvature (the edges adjacent to the vertex form a 180-degree angle), but that's not enough.

+",213474,,,,,42399.77431,Polygon simplification that encloses original set of points,,0,2,,,,CC BY-SA 3.0,, +308861,1,,,1/30/2016 21:50,,0,173,"

I would like to have a class that creates code from given options and outputs it in a chosen way.

+ +

I have written a class for generating simple JavaScript code for cases where JavaScript needs to be prepared in PHP.

+ +

That class has a bug that causes problems with assembling the output code into larger blocks, so I decided to rewrite it without that bug (I have an idea of how to solve it).

+ +

All code is generated once (and once per change of a section's text).

+ +

Problem

+ +

How should I set (and keep track of) the following:

+ +
    +
  • what pieces of code may be generated
  • +
  • what limitations each generated piece of code has
  • +
+",213483,,213483,,42430.63056,42430.63056,Best practice for settings for PHP class for code generation,,1,0,,42409.20903,,CC BY-SA 3.0,, +308864,1,,,1/30/2016 22:31,,1,116,"

This is a very simplified version of the problem:

+ +

You have an integer saved on a server called ""B"", and you have a script saved on a server called ""A"" that is exposed to the internet. The script pulls the integer from server B, adds one to it, and then updates the remote integer. None of these operations is atomic; the pull, the add and the update all have random latency.

+ +

If there is only one user calling the script in sequence, there aren't going to be any problems, and the int will be incremented consistently. But since the script is exposed to the internet, it can be called in parallel by several users at the same time, so while one instance of the script is incrementing the value locally before updating it remotely, another one will pull the current value from the remote server and end up incrementing a stale version of it.

+ +

My solution is to use a queue (or at least to try); so instead of calling the script, the users will call another script that writes the execution request to a DB (a message: ""please call script.php""). Since writing to the DB is not atomic, I'm also writing the server's microtime so that later I can order the execution requests by it. A third script, usually called a worker, will query the DB for these messages and process the requests.

+ +

The problem I have is that most of the existing web queue solutions just call the worker every time they need it, because they don't care about order; they just want to make a synchronous job asynchronous so the website can be faster. I cannot call the worker script every time a new request arrives, since the problem will persist: two or more workers can be instantiated at the same time, messing up the calculations. I can put the worker on a cron job launched every 60 seconds, but that will be too slow for my needs. I can also just put a while loop inside the worker that queries the DB indefinitely, but that seems inefficient to me. Or maybe this is actually how queues work: just a while loop running until the end of time, maybe with a sleep in it... What do you think?
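
For what it's worth, a long-lived loop is essentially how queue consumers do work; the usual refinement is a blocking pop (e.g. Redis BLPOP) or a short sleep between empty polls instead of busy-waiting. A sketch with an in-memory list standing in for the DB table (all names are illustrative); a real worker would also sleep briefly whenever the table is empty:

```python
import time

pending = []  # stands in for the DB table of queued execution requests

def enqueue(msg):
    # each request is stored with a timestamp so it can be ordered later
    pending.append((time.monotonic(), msg))

def worker_drain():
    # a single long-lived worker consumes requests in timestamp order,
    # so the pull/increment/update sequence is never interleaved
    processed = []
    while pending:
        pending.sort(key=lambda r: r[0])
        _, msg = pending.pop(0)
        processed.append(msg)
    return processed

for m in ('a', 'b', 'c'):
    enqueue(m)
order = worker_drain()
```

Running exactly one worker is what restores the ordering guarantee; in production the `while` would loop forever with a `time.sleep` on empty polls, and the enqueue/dequeue would hit the real DB or a queue server.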

+",213495,,174739,,42400.125,42490.54861,How to guarantee web script execution sequence/order or not parallels queues,,2,3,,,,CC BY-SA 3.0,, +308874,1,308901,,1/31/2016 1:45,,2,294,"

I am working on a new version of the following system:

+ +
    +
  1. A 'main control' service that runs on Windows Server (C#).
  2. +
  3. Clients on the following systems, that communicate with the above service via Web Sockets:

    + +

    2a. Windows 7 / Windows 10 (WPF C#)

    + +

    2b. Android phones (Java)

    + +

    2c. iOS phones (Objective C / possibly Swift)

    + +

    2d. Embedded PC running Linux (C++, no GUI).

  4. +
+ +

What I've noticed in the past is that there is a lot of duplicated logic whenever we add a new feature. It has to be implemented across all the platforms, and it just doesn't seem right. What is the right way of dealing with that situation?

+ +

I am not interested in solutions like Xamarin, since we'd like to maintain native look/functionality for each platform. So clearly the GUI layer will have to be coded separately on each platform. I am more concerned with the business/model logic that will inadvertently end up being duplicated on each system.

+ +

Of course we can push some of it right to the Server machine, making the clients as thin as possible, but there are still cases where I would like to have a library that I just write/test once, and then simply compile/deploy to each client platform.

+ +

Is something like that even possible? Am I missing a better approach?

+",148949,,31260,,42400.30486,42402.78958,Best approach to avoid duplication of code meant to run on windows/iOS/Android devices?,,1,2,,,,CC BY-SA 3.0,, +308876,1,,,1/31/2016 3:11,,3,3508,"

I'm working on a side project, and I turned on all rules for code analysis in Visual Studio, and I got the warning notice:

+ +
+

Warning CA1006 Consider a design where 'Vote<T>.CalculateWinner(List<Vote<T>>)' doesn't nest generic type 'List<Vote<T>>'

+
+ +

And it really does point out a sore spot in my design. I have an abstract base class Vote<T> that contains a list of the options for running an election on type T.

+ +

Here is what I want to exist:

+ +

There are three types of objects, PolicyCategory, Policy and Funding. A PolicyCategory contains a list of votes on the policies within it, and a vote on a Policy must also contain a list of votes on a Funding object. The election procedure (happens to be STV) is then run on the votes in the category, and then on the votes on the winning policy. The procedure is the same for both types of votes.

+ +

What I have now is PolicyVote : Vote<Policy> and FundingVote : Vote<Funding>, the core election logic is in Vote<T>, and I have methods that cast List<PolicyVote> to List<Vote<Policy>>, and then call Vote<T>.CalculateWinner().

+ +

This is clearly pretty horrible, as I'm hacking around the invariance of List<T> in C#.

+ +

How can this be made better?

+",43057,,213,,42400.68958,43924.41736,How to avoid having nested generic in class,,1,3,,,,CC BY-SA 3.0,, +308879,1,,,1/31/2016 4:15,,1,281,"

I've looked through the different threads, and there is a lot of conflicting information out there. The most useful article I found was this: Java theory and practice: Urban performance legends, revisited, but it's from 2005.

+ +

So I was wondering what is the state of the Java standard regarding heap allocation. In the release notes for Java 7 it says:

+ +
+

(The server compiler) does not replace a heap allocation with a stack allocation for non-globally escaping objects.

+
+ +

I got the impression from the other article that this is indeed possible to do in the compiler, but it seems to not be required by the standard.

+ +

A related question which doesn't seem to answer mine is Stack and Heap memory in Java. The accepted answer to that question says that only primitives may be put on the stack, and there does not seem to be any consensus on that. Secondly, I'm obviously asking a different question.

+ +

I understand that in Java everything is a reference. That said, would a compiler allocate everything on the heap even if it is only reachable from a stack reference?

+",213516,,-1,user40980,42837.31319,42401.75,Is stack allocation in Java implementation dependent?,,1,3,,,,CC BY-SA 3.0,, +308880,1,,,1/31/2016 4:45,,1,69,"

I'm writing a fairly simple authentication/authorisation api for an intranet application we are developing.

+ +

It's my first role-based authorisation system, and it's a good opportunity for a first-timer because it is located on a Windows VPS that is isolated to the internal corporate network with no internet access at all, so any mistakes I make won't have huge consequences.

+ +

That said, I still want to do it right.

+ +

So I wanted to know if my method below is on the right track and welcome any suggestions for improvement you guys might have.

+ +

Notes:

+ +
    +
  • I'm using the MEAN stack as it fits the business case nicely.
  • +
  • I have bolded the points that I am concerned about or not so sure they are the best way of doing things.
  • +
  • In case I am using the wrong terminology for some things I have included my definitions below so you know what I'm trying to say.
  • +
  • Definitions: + +
      +
    • Site-level permissions: The specific states/pages a user has access to
    • +
    • Domain: in this context is the specific data collections the user has access to. + +
        +
      • This access is persistent across pages so there's nothing like User has access to Domains X and Y on Page A and Domains W and Z on Page B. If a User has access to Domains X and Y, then this is used for all site-level states.
      • +
    • +
  • +
+ +

Here's what I'm thinking:

+ +

Account Creation

+ +
    +
  1. User account is created
  2. +
  3. Using Active Directory (AD), we determine site-level permissions and domain-specific permissions + +
      +
    • Store User Identifier and site-level permissions in DB
    • +
  4. +
+ +

Login

+ +
    +
  1. The authentication API verifies identity and returns a JSON Web Token (JWT)
  2. +
  3. The identity token is then used to get another token specifically for authorisation. The payload of the JWT contains the role specific site-level and domain-level permissions for that user.
  4. +
  5. Auth Token is returned to the Client and checked in $routeChangeStart event of ui-router
  6. +
  7. Angular then sets the appropriate state for the requested view based on the permissions.
  8. +
+ +
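To make the authorisation token in step 2 concrete, here is a hedged Python sketch of an HMAC-signed payload carrying the permissions. This is a hand-rolled stand-in for a real JWT library (use one in practice); the secret and claims are invented for illustration:

```python
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # invented; load from configuration in real code

def sign(payload: dict) -> str:
    # Encode the claims and append an HMAC so the client cannot alter them.
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token: str) -> dict:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"user": "jdoe", "pages": ["reports"], "domains": ["X", "Y"]})
print(verify(token)["domains"])  # ['X', 'Y']
```

Because the signature covers the payload, the client can read its permissions for UI decisions but cannot change them without the server noticing.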
+ +

Specific Questions for the programmers.stackexchange gurus:

+ +
    +
  • Does this look like an ok way to implement an Authorisation system?
  • +
  • Given that I have access to Active Directory am I overcomplicating this by storing permissions in a Roles collection in Mongo?
  • +
  • Are there any rookie flaws in this design that I should correct/ do more reading on?
  • +
+ +

Thanks for taking the time to read this and I really appreciate any feedback you might have.

+",105739,,,,,42400.19792,Feedback on simple authorisation system design,,0,0,,,,CC BY-SA 3.0,, +308885,1,,,1/31/2016 7:21,,1,60,"

Suppose we have an abstract class EntityBase which is the base class for all our entities, e.g.

+ +
public abstract class EntityBase {
+  public Guid Id {get;set;}
+}
+public class Customer : EntityBase {
+  public string Name {get;set;}
+}
+public class Orders : EntityBase {
+  public string OrderNumber {get;set;}
+}
+
+ +

Where the intent is to then be able to create extension method on EntityBase so it can be projected to Domain classes e.g.

+ +
public static class ProjectionExtensions {
+  public static TProjection ProjectAs<TProjection>(this EntityBase item) where TProjection : class, new() {
+    return Mapper.Map<object, TProjection>(item);
+  }
+}
+
+ +

Does this violate LSP? Is it better to implement an interface IEntity rather than an abstract base class EntityBase?

+ +

e.g.

+ +
public interface IEntity {
+  Guid Id {get;set;}
+}
+public class Customer : IEntity {
+  public Guid Id {get;set;}
+  public string Name {get;set;}
+}
+public class Orders : IEntity {
+  public Guid Id {get;set;}
+  public string OrderNumber {get;set;}
+}
+
+public static class ProjectionExtensions {
+  public static TProjection ProjectAs<TProjection>(this IEntity item) where TProjection : class, new() {
+    return Mapper.Map<object, TProjection>(item);
+  }
+}
+
+",206707,,206707,,42400.46458,42400.46458,Does having a EntityBase or DomainBase class violate LSP?,,1,1,,,,CC BY-SA 3.0,, +308888,1,,,1/31/2016 9:26,,-1,3306,"

I would like to have some help.

+ +

I must find a way to make the output be:

+ +
A    B    C
+90   60   80
+50   100  70
+100  20   100
+10   50   75
+
+ +

The current output is:

+ +
A 90 50 100 10
+B 60 100 20 50
+C 80 70 100 75
+
+ +

It confuses me how to get from this to the desired output.

+ +

This is the code that I have:

+ +
 char n[3]={'A','B','C'};
+ int s[3][4]={90,50,100,10,60,100,20,50,80,70,100,75};
+ float average=0;
+ int x, y, max=0, min=0, total=0;
+
+ for(x=0;x<3;x++)
+ {
+      printf(""%c\t"",n[x]);
+      for(y=0;y<4;y++)
+      { 
+            printf(""%d\t"",s[x][y]);
+      }
+      printf(""\n"");
+ }
+
+ +

It may be simple for you, but I find it hard. Thanks in advance for all of your help. (P.S. I'm a student, so please try to use easy-to-understand words; also, I'm not good at English.)
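The core idea is to swap the roles of the two loops: the outer loop walks the four score positions (your y) and the inner loop walks the three subjects (your x), with the names printed once as a header row. Here is the same data in a short Python sketch to show the index swap; the C version only needs the loops exchanged in the same way:

```python
names = ['A', 'B', 'C']
scores = [[90, 50, 100, 10],
          [60, 100, 20, 50],
          [80, 70, 100, 75]]

print('\t'.join(names))              # header row: A  B  C

# outer loop over score positions, inner loop over subjects
rows = [[scores[x][y] for x in range(3)] for y in range(4)]
for row in rows:
    print('\t'.join(str(v) for v in row))
```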

+",204795,,204795,,42400.47014,42400.73194,(Multidimensional array in C) How to make my output in a vertical position rather than in horizontal when the size of the array is n[3][4],,1,4,,42401.17292,,CC BY-SA 3.0,, +308894,1,,,1/31/2016 11:38,,6,659,"

I have a sort of whiteboard style question about efficiently querying The Movie DB API to find a list of characters in movies.

+ +

My overall aim is to:

+ +
    +
  1. Search for a movie by title (1 request)
  2. +
  3. Look up all the characters played in that movie (1 request)
  4. +
  5. Look up which movies that character has been in, played by the same actor (1 request/movie)
  6. +
  7. Repeat with that list of movies via step 2, and filter down to only characters in the original movie, until no unexplored characters remain.
  8. +
+ +

My issue is that the DB call for step 3 only brings back the films those actors are in, as opposed to characters (in fact you can only search by actor id, not character id). This means that, for instance, running step 3 for the ~23 actors returned in step 2 for The Lord of the Rings yields 844 movies, each one needing a request to find the characters in that movie. The problem is that most of these are going to be false positives.

+ +

Is there a more efficient way to query the API? I've considered combining pairs of actor IDs to narrow it down, but I don't think that would narrow down the movies returned by enough.

+ +

How can I efficiently make this query?
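One thing that helps regardless of the endpoint shape is never requesting the same movie twice: steps 2-4 are a graph traversal, so a seen-set over movie ids collapses the heavily overlapping filmographies before any credit requests go out. A Python sketch of just that dedup step (the titles are invented):

```python
def unique_movies(filmographies):
    # Collapse per-actor movie lists so each movie is fetched only once.
    seen = set()
    ordered = []
    for movies in filmographies:
        for movie in movies:
            if movie not in seen:
                seen.add(movie)
                ordered.append(movie)
    return ordered

# Three actors whose filmographies overlap heavily:
fanout = unique_movies([["LotR", "Hobbit"],
                        ["LotR", "Matrix"],
                        ["Hobbit", "Matrix"]])
print(fanout)  # ['LotR', 'Hobbit', 'Matrix']
```

The false-positive movies still cost one credits request each the first time, but caching responses by movie id means repeat visits, and later traversal rounds, are free.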

+",42685,,209665,,42872.40833,42872.40833,How to efficiently query movies with characters via The Movie DB API?,,0,0,,,,CC BY-SA 3.0,, +308899,1,308911,,1/31/2016 14:01,,7,317,"

I'm new to MVC and the different layers architecture and this is what I have currently:

+ +

2 models: User, Company

+ +

2 service layers: UserService, CompanyService

+ +

2 interfaces which abstract the database layer from the service layer: IUser, ICompany

+ +

2 database layers: UserDB, CompanyDB

+ +

Here's the scenario: a piece of functionality that will update both the User and Company tables in the database. If updating the company succeeds and updating the user fails, I will roll back the company change. Hence I think it makes sense to do this in one method.

+ +

If so, where do I put this method that handles 2 models? Is it normal to create UserCompanyDB in the database layer and UserCompanyService in the service layer just for this purpose? And even IUserCompany as well.

+ +

Since I'm new to everything, chances are, you may think I'm having some misconceptions on the layers based on my current way of thinking? If so I would like to take this opportunity to rectify it too.

+ +

Oh and if it helps, I'm mainly trying to follow the example in this link: +https://www.playframework.com/documentation/2.5.x/JavaTest#Unit-testing-models

+",144272,,144272,,42400.725,42400.7625,MVC - Do I create 'joint' layers when dealing with multiple models?,,1,0,,,,CC BY-SA 3.0,, +308904,1,308907,,1/31/2016 15:07,,1,143,"

I'm trying to write a compiler for a self-designed CPU with accompanying instruction set. The CPU has 3 registers, 2 input registers (B and C) and one output register (D). When for example an ADD instruction is executed the sum of B and C is calculated and stored in D.

+ +

I'm trying to write the compiler with the visitor design pattern: I have a bunch of language tree classes like ""IfStatement"", ""Addition"", ""Integer"" and a visitor ""Compiler"". The visitor would look at each node of the tree and append bytecode to the end of the bytecode list. I can't figure out how to cleanly handle register overrides: when evaluating the expression

+ +
2*(7+3)
+
+ +

the generated bytecode is

+ +
PUTb 2
+PUTb 7
+PUTc 3
+ADD
+MOVE D C
+MUL
+
+ +

As you can see, the 2 is overridden by the 7. I want the compiler to realize it can reverse the order to

+ +
(7+3)*2
+
+ +

or that it can store a temporary result in RAM using some other instructions; this will certainly be necessary for more complex expressions like

+ +
(7-5)*(8+3)
+
+ +

Is there a clean/object-oriented way to handle this? Is the Visitor pattern not appropriate here? Do I need to look at some advanced techniques like register coloring? The compiler will be written in Java, but I don't think that really matters.
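One well-known answer that fits a visitor is Sethi-Ullman numbering: a first pass computes, per subtree, how many registers its evaluation needs, and code generation then emits the needier operand first, spilling to RAM only when both operands need the full register set. A hedged Python sketch of the numbering on tuple-encoded trees (the tree encoding is invented for brevity; applying it to your Java node classes is mechanical):

```python
def need(node):
    # Registers needed to evaluate a subtree without spilling.
    # Leaves are ints; interior nodes are (op, left, right) tuples.
    if isinstance(node, int):
        return 1
    _, left, right = node
    l, r = need(left), need(right)
    return max(l, r) if l != r else l + 1

def eval_order(node):
    # Evaluate the needier operand first: its result then occupies a
    # single register while the cheaper side is computed.
    _, left, right = node
    return (left, right) if need(left) >= need(right) else (right, left)

expr = ('*', 2, ('+', 7, 3))          # 2 * (7 + 3)
first, second = eval_order(expr)
print(first)   # ('+', 7, 3): compute the sum before loading the 2
```

Only the evaluation order is swapped; for non-commutative operators the operands must still be fed to the instruction in their original roles. When need() exceeds the registers available (your two-input machine hits this quickly, e.g. on (7-5)*(8+3)), that is exactly the point to emit a store/load pair to RAM.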

+",213554,,213554,,42400.67778,42400.71875,Compiler design prevent register override,,1,8,,,,CC BY-SA 3.0,, +308909,1,,,1/31/2016 16:54,,9,2707,"

We have the following BSD license in the LICENSE file:

+ +
Copyright (c) 2006-2016 SymPy Development Team
+
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+  a. Redistributions of source code must retain the above copyright notice,
+     this list of conditions and the following disclaimer.
+  b. Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in the
+     documentation and/or other materials provided with the distribution.
+  c. Neither the name of SymPy nor the names of its contributors
+     may be used to endorse or promote products derived from this software
+     without specific prior written permission.
+
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ""AS IS""
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGE.
+
+ +

We manage the source repository by git (https://github.com/sympy/sympy), and thus each author owns the patches that he or she created. We then have an AUTHORS file where we list all the people who contributed patches (currently about ~450 or so). Typically authors fork the repository on github and add patches as git commits.

+ +

One author forked the repository, but added his name into the LICENSE file itself as a copyright notice as follows (I changed the name):

+ +
Copyright (c) 2006-2015 SymPy Development Team,
+              2015-2016 John Doe
+
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+  a. Redistributions of source code must retain the above copyright notice,
+     this list of conditions and the following disclaimer.
+  b. Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in the
+     documentation and/or other materials provided with the distribution.
+  c. Neither the name of SymPy nor the names of its contributors
+     may be used to endorse or promote products derived from this software
+     without specific prior written permission.
+
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ""AS IS""
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGE.
+
+ +

The author developed a patch, that fixes a bug. The fix consists of touching just one file and removing 19 characters from one line, and adding 18 characters on another line in the same file. It also adds a 5 line test for this bug into a test file. That's it.

+ +

Under what conditions are we legally allowed to apply his patch (by cherry-picking his commits, e.g. preserving the date and author's name + email in the git meta data)?

+ +

a) Do we need to modify our LICENSE file to add his copyright notice?

+ +

b) Or are we still complying with the BSD license if we keep an up to date AUTHORS file and keep the git repository which specifically tracks which commits were contributed by which authors.

+ +

What I don't like about option a) is that if all 450 or so contributors required this, then we would need to keep essentially the contents of the AUTHORS file in the LICENSE file, together with the word ""Copyright"" and the years. Git is much better at keeping the years (and even days and minutes), as well as which lines were modified by each author and how. That way we have a simple LICENSE file that doesn't change, and we keep the list of authors in AUTHORS (and we have a script that keeps it synchronized with the list of authors from git).
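A synchronisation script of the kind mentioned above only needs the deduplication step to be careful; the rest is a git invocation such as git log --format='%aN <%aE>'. A hedged Python sketch of that step, with invented sample lines standing in for the git output:

```python
def unique_authors(log_lines):
    # One AUTHORS entry per author, in first-commit order.
    seen = set()
    authors = []
    for line in log_lines:
        line = line.strip()
        if line and line not in seen:
            seen.add(line)
            authors.append(line)
    return authors

log = [
    "Ada Lovelace <ada@example.org>",
    "John Doe <john@example.org>",
    "Ada Lovelace <ada@example.org>",   # repeat commits collapse away
]
print(unique_authors(log))
```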

+",152863,,165812,,42400.90486,42401.0875,How to manage copyright notices from contributors to a BSD licensed project,,2,7,5,,,CC BY-SA 3.0,, +308928,1,309155,,2/1/2016 2:09,,4,288,"

Is there a design pattern that would allow a class from a hierarchy to 'subscribe to' concrete methods?

+ +

For example, say you have an abstract base class that requires the implementation of a method.

+ +
Public MustInherit Class Rule
+    ' Common properties shared between sub classes
+    Public MustOverride Function Validate() As Boolean
+End Class
+
+ +

But let's say that the subclasses only want to register for a set of concrete methods for validating themselves.

+ +

Let's say a subclass of Rule, RuleItemA, requires the following:

+ +
    +
  • The entity must exist
  • +
  • The entity must be a number
  • +
  • The number must be a prime
  • +
+ +

Whereas a subclass of Rule, RuleItemB, requires the following:

+ +
    +
  • The entity must exist
  • +
  • The entity must be a string
  • +
  • The string must be a valid URL
  • +
+ +

For the sake of argument, let's say there are about 20 different concrete validations in place. I do not want to write all possible permutations of the 20 validations. What I would rather do is the following:

+ +
Class RuleItemA Inherits Rule, SubscribesTo(EntityExists,EntityIsNumber,NumberIsPrime)
+    Public Overrides Function Validate() As Boolean
+         ' Invoke all concrete validations here
+    End Function
+End Class
+
+ +

I feel like this is similar to traits, or a multi-strategy pattern, but I'm not sure how to implement it.

+ +

Is this a known pattern? Is there a way to implement it in .Net?
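The closest common name for this is composing strategies: each concrete validation is a free-standing predicate, and a subclass "subscribes" by listing the predicates it wants. In .NET that list could be a collection of Func(Of T, Boolean) delegates populated per subclass. Here is a hedged Python sketch of the shape (the validator names mirror the question's examples):

```python
def exists(value):
    return value is not None

def is_number(value):
    return isinstance(value, (int, float))

def is_prime(value):
    return (isinstance(value, int) and value > 1
            and all(value % d for d in range(2, int(value ** 0.5) + 1)))

class Rule:
    checks = ()                        # subclasses "subscribe" here

    def validate(self, value):
        return all(check(value) for check in self.checks)

class RuleItemA(Rule):
    checks = (exists, is_number, is_prime)

print(RuleItemA().validate(7))   # True
print(RuleItemA().validate(8))   # False
```

There is no permutation explosion: 20 validators stay 20 functions, and each rule is one line naming the subset it uses.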

+",74674,,1204,,42403.63889,42403.90764,Does such a design pattern exist? (Multi-Strategy/Multi-Traits),<.net>,2,1,,,,CC BY-SA 3.0,, +308930,1,,,2/1/2016 4:00,,-2,239,"

Why are the constants in The Software Equation so complicated?

+ +

Here is the equation.

+ +

E = [ LOC × B^(1/3) / P ]^3 × ( 1 / t^4 )

+ +

where

+ +

E = Project effort measured in person-months or person-years

+ +

LOC =Lines of Code estimate for the project

+ +

t= Length of project measured in months or years

+ +

B= ""special skills factor""

+ +

P= ""Productivity Parameter""

+ +

Values of B and P are given for particular cases in two tables in the wikipedia page.

+ +

Since B and P are constant for particular cases we could take the constants out of the cube and define new constants that do not have complicated exponents. That would really make the equation simple.
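To spell the suggestion out (assuming the B^(1/3)-inside-the-cube form of the equation shown on the Wikipedia page), pulling the constants out of the cube gives:

```latex
E \;=\; \left[\frac{LOC \cdot B^{1/3}}{P}\right]^{3} \cdot \frac{1}{t^{4}}
  \;=\; K \cdot \frac{LOC^{3}}{t^{4}},
\qquad K \;=\; \frac{B}{P^{3}}
```

so a single tabulated constant K per case would indeed make the equation look simple; the fractional exponents only matter if B and P are meant to be interpreted separately.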

+ +

It seems to me that these constants are vague and do not correspond to reality as much as LOC or the length of the project does.

+ +

If they do not have some physical meaning, then why not use constants that make the equation look simple?

+",213608,,213608,,42401.57222,42401.57222,"Complicated constant in ""The Software Equation""?",,1,1,,,,CC BY-SA 3.0,, +308935,1,308957,,2/1/2016 7:29,,12,2685,"

I have seen programs using this strategy, and I have also seen posts considering it bad practice. However, the posts considering it bad practice were written about C# or some other programming language with error handling built in; this question is about C++. Further, the errors I am addressing are errors which are fatal enough to force a program to shut down, not some general ""maybe-I-have-missed-something"" error. By ""using exceptions at the highest level"" I mean this in a slightly sloppy sense. It can be either something like

+ +
int main(){
+    try {
+        //run
+    } catch (fatalException){
+        //handle error and shutdown
+    }
+}
+
+ +

or it can be something like this in for example a graphical application.

+ +
void runApplication(){
+    try {
+        //run
+    } catch (fatalException){
+        //handle error and shutdown
+    }
+}
+
+ +

The alternative to this would be to handle the error where it happens and return from functions hierarchically, one by one, until the program is terminated. The reason I can see for terminating on fatal errors via a try-catch at the top level is that the causes of this kind of error can be varied and are probably fairly unusual (corrupted databases, some out-of-memory errors, corrupted configuration files, etc.). Handling exceptions locally and returning from functions one by one would make the code less clear and require much effort to handle problems which are unlikely to occur.

+ +

However, I am not sure whether it is good practice to start the program with a try. Personally, I think it is ugly, but somehow this ugliness reflects the ugliness of the problem itself, so this may not be a reason to not use it for this case.

+ +

EDIT: I may have misunderstood the dupe post, but I do not think it solves the problem. The question is not about not catching some exceptions. The point is that in most cases there are other parts of the code which could handle the exception closer to the throw in a fairly satisfactory way, but at the cost of having to step through the hierarchy to the top and terminate. However, since the program would still need to terminate, is there in general any reason to catch earlier than at the top level? One could argue that catching at the top implies that the lower levels are unable to decide how to handle this, but in this case I believe it is unclear; I mean, ""what error would have the authority to determine whether a program needs to terminate?""
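Purely to illustrate the shape under discussion (sketched in Python rather than C++ for brevity), the second variant boils down to one application-defined fatal error type and one handler at the entry point:

```python
class FatalError(Exception):
    """Raised anywhere in the program when recovery is impossible."""

def run():
    # Stand-in for the real application work.
    raise FatalError("corrupted configuration file")

def run_application():
    try:
        run()
        return 0
    except FatalError as err:
        print(f"shutting down: {err}")   # log, release resources...
        return 1                         # ...and terminate

exit_code = run_application()
print(exit_code)  # 1
```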

+",144639,,19681,,42405.30417,42405.30417,Is using Exceptions at the highest level of a program considered bad practice?,,2,5,8,42407.93889,,CC BY-SA 3.0,, +308946,1,308969,,2/1/2016 10:37,,11,7956,"

I am a relative DDD newbie, but I am reading anything and everything I can get my hands on to boil out and distill my knowledge.

+ +

I came across this DDD question, and one of the answers has me intrigued.

+ +

DDD Bounded Contexts & Domains?

+ +

In one of the answers the poster gives the example of an ecommerce system with products being in at least 2 domains:

+ +

1) Product Catalog
2) Inventory Management

+ +

OK, that all makes sense, i.e. in your ecommerce front end you are interested in displaying the product information, and not interested in inventory management.

+ +

BUT. You may want to display the inventory level on the web page, or you may want to display the edition number of the inventory in stock (imagine your inventory is books, magazines etc). This information comes from the Inventory domain.

+ +

So, how would you handle this? Would you

+ +

a) Load both the Product domain and the Inventory domain aggregates?
b) Hold some properties on your Product domain entity for the number in stock and the edition in stock, and then use Domain Events to update these when the Inventory entity is updated?

+ +

One final question. I know we are meant to forget/ignore the persistence of the domain and just think about the domain. But just to think this through: in the example above we would end up with potentially two DB tables, for product catalog and product inventory. Now, do we use the same identifier in both, as it's the same product? Or could we use one table and one table row for the data and simply map the relevant data onto the aggregate properties?
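For option b), the machinery can be as small as an in-process publish/subscribe hook: the Inventory context publishes a StockChanged event, and a handler copies the figure onto the catalog's read side. A hedged Python sketch (all names are invented):

```python
handlers = {}

def subscribe(event, fn):
    handlers.setdefault(event, []).append(fn)

def publish(event, **payload):
    for fn in handlers.get(event, []):
        fn(**payload)

catalog = {"sku-1": {"title": "Some Book", "in_stock": 0}}

def on_stock_changed(sku, quantity):
    catalog[sku]["in_stock"] = quantity       # keep the read side current

subscribe("StockChanged", on_stock_changed)
publish("StockChanged", sku="sku-1", quantity=42)   # raised by Inventory
print(catalog["sku-1"]["in_stock"])  # 42
```

The Product Catalog context then only ever reads its own denormalised stock figure, and the two contexts stay decoupled.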

+",213649,,-1,,42837.31319,42401.67847,Domain Driven Design and Cross Domain interaction,,4,0,2,,,CC BY-SA 3.0,, +308948,1,,,2/1/2016 11:01,,8,1331,"

I've been pondering a really basic question about how far to take enforcing a class's invariant. Maybe that's worded badly, so as an example, let's say that I want to write a class which stores a limited palette of colours. The class's constructor should take the size of the palette - the idea being that you can add as many colours to the palette as you want, but it restricts the size on a FIFO policy. So we'll say that I want to arbitrarily restrict this size to the range 1 to 32 (""nobody will ever need more than 32 colours"").

+ +

Let's get started...

+ +
class palette
+{
+public:
+   palette(unsigned int max_size) : m_max_size{max_size} { }
+private:
+   unsigned int m_max_size;
+};
+
+ +

So far, so good. Except I'm not doing anything to confirm that the max size is within my restricted range, i.e. the invariant can be broken right from construction.

+ +

(Just to clarify: this class isn't intended for use in some public library - it's simply an internal class to a single application)

+ +

Four options spring to mind beyond ""do nothing"":

+ +
    +
  1. Put a comment on the class to ask callers to use the class correctly.

    + +
    // Please only call with 1 <= max_size <= 32
    +
  2. +
  3. Assert that it's within range.

    + +
    assert(max_size >= 1 && max_size <= 32);
    +
  4. +
  5. Throw an exception if it's outside the range

    + +
    if (max_size < 1 || max_size > 32)
    +    throw std::invalid_argument();
    +
  6. +
  7. Get the compiler to check by converting to a template

    + +
    template <unsigned int MAX>
    +class palette
    +{
    +public:
    +    palette() { // no need for the argument any more
    +        static_assert(MAX >= 1 && MAX <= 32, ""oops"");
    +    }
    +};
    +
  8. +
+ +

Option #1 seems a little bit like wishful thinking, but maybe it's good enough for 'just' an internal class? Options #2 to #4 are increasingly fussy about the size, to the point where #4 changes the syntax so that the compiler will report an error if the class isn't being used correctly.

+ +

I'd welcome any thoughts on this. I understand that the answer will probably begin with ""Well it all depends..."", but I'm just fishing for some general guidelines or some alternate suggestions.
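As a point of comparison, option #3 is the only runtime check that survives both release builds and caller mistakes, because the invariant is established once at the boundary. The same guard sketched in Python (the range limits are taken from the question):

```python
class Palette:
    MIN_SIZE, MAX_SIZE = 1, 32

    def __init__(self, max_size):
        if not (self.MIN_SIZE <= max_size <= self.MAX_SIZE):
            raise ValueError(
                f"max_size must be in [{self.MIN_SIZE}, {self.MAX_SIZE}]")
        self._max_size = max_size  # invariant holds for the object's lifetime

Palette(16)                        # fine
try:
    Palette(0)                     # rejected at construction, not later
except ValueError as err:
    print(err)
```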

+",75631,,75631,,42401.525,43254.68125,C++ class design with invariant,,2,1,,,,CC BY-SA 3.0,, +308955,1,,,2/1/2016 11:56,,0,214,"

I am integrating an LGPL application to my iOS application. As I am modifying the LGPL application, I am planning to open the modifications I make to the LGPL library.

+ +

I am modifying the LGPL library in such a way that I am integrating a new library into it. So the flow is like this:

+ +

Appl Code >> LGPL Library >> New Library

+ +

Do I need to make the new library open source? Please note that my question is specifically related to the call flow, i.e. my new library is ""used by"" the LGPL library. What are the implications of the LGPL license in that use case?

+",213666,,213666,,42401.53125,42401.90347,Calling closed source library from LGPL library,,1,2,,,,CC BY-SA 3.0,, +308958,1,308966,,2/1/2016 13:18,,1,1440,"

I have a generic helper class (1) that can be used in other projects also. Imagine something like basic handling of file and folders, something useful and DRY that always come in handy.

+ +

I have another class (2) that is project specific, but it uses the generic class (1) to accomplish things.

+ +

The generic helper class (1) is used by creating a new object of it in specific places where is needed in the project.

+ +
GenericHelperClass ghc = new GenericHelperClass();
+
+ +

Same goes for the project specific class (2), is being used by creating a new object of it in specific places where is needed in the project.

+ +
ProjectSpecificClass psc = new ProjectSpecificClass();
+
+ +

But as I said the project specific class (2) uses the generic helper class (1) to do stuff.

+ +

So here I thought that the project-specific class (2) should have its own private generic helper class (1) to do its job.

+ +

But wait, there is already going to be an instance of that generic helper class (1) in the project, so why create another one? Let's just pass that instance of (1) to the constructor of (2).

+ +

So (2) is now created like:

+ +
GenericHelperClass ghc = new GenericHelperClass();
+// code and code and code
+// .
+// .
+ProjectSpecificClass psc = new ProjectSpecificClass(ghc);
+
+ +

With this I have only one object of GenericHelperClass, and I have avoided using static.

+ +

The question is: Am I wrong here or am I just creating spaghetti-dependencies or something? Any suggestions?

+ +

(The names GenericHelperClass/ProjectSpecificClass are just for demonstration.)
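What the last snippet does has a name: constructor injection, and it is generally the preferred shape, since the dependency is explicit and easy to swap out in tests. A Python sketch of the wiring described above (the method names are invented):

```python
class GenericHelperClass:
    def read(self, path):
        return f"contents of {path}"      # stand-in for real file handling

class ProjectSpecificClass:
    def __init__(self, helper):
        self.helper = helper              # injected, not constructed here

    def load_config(self):
        return self.helper.read("config.ini")

ghc = GenericHelperClass()                # the single shared instance
psc = ProjectSpecificClass(ghc)           # dependency made explicit
print(psc.load_config())  # contents of config.ini
```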

+",213677,,213677,,42401.58264,42401.59375,"A generic helper class, a project specific class, and the rest of project",,2,5,,,,CC BY-SA 3.0,, +308964,1,,,2/1/2016 14:12,,1,871,"

I have a web application that is hosted on an embedded device. This device has its own basic web server that can serve content just like any other web server.

+ +

The web application in question is pretty data hungry. It requests about 15 XML files (usually less than 1kb) every few seconds to update a wide variety of modules on the application page. This is done with jquery / ajax. Each module has an independent timer to update itself, usually every 5-10 seconds, so often these timers align and requests are done in bursts.

+ +

In long term tests, the web application begins to fail. Within a few hours, requests begin to fail, not due to timeouts, but due to web connection resets. Using Wireshark, it is evident that it is the browser requesting to reset the connection. I am able to look directly into the web server, and there are no errors/timeouts/resets/etc from the server end. It is directly the browser deciding to reset the connection.

+ +

I am in no way asking the browser to do this. My application is basic and is simply requesting data endlessly. I can't think of any reason for the browser itself to decide to reset the connection on its own. This occurs in firefox and chrome. My only suspects are myself and jQuery's AJAX.

+ +

Any ideas are appreciated.

+",186497,,,,,42401.59167,Web browser resetting connection,,0,5,,,,CC BY-SA 3.0,, +308967,1,309095,,2/1/2016 15:13,,3,4289,"

I am working on an app which basically communicates with a third-party API; it has no back-end. The front-end will be a SPA. Here is the overall scenario:

+ +
    +
  1. The external API needs the current user's Id to respond to the requested query.
  2. +
  3. My server initially has the user information which it can use to query the API and get the desired result.
  4. +
  5. Based on the initial response the user fills in some more detail and requests the API again for the final response.
  6. +
+ +

Possible solutions in my mind:

+ +

Solution 1 (with additional request overhead):

+ +
    +
  1. On first request, serve the home page to the user, then let the front-end send a request (with the logged-in user info) to my server to get initial data from the external API.
  2. +
  3. Now, my server requests the external API, gets the response, and sends that response to the client.
  4. +
  5. Client gets the response, makes another request based on the response.
  6. +
  7. Repeat step 2 with the additional data.
  8. +
+ +

Solution 2:

+ +
    +
  1. Since the server already has the user info (using oauth2), let it request the API first, then render the landing page with the response loaded into the page as static global JSON data; let the front-end code query that static data and use it to construct the next request.
  2. +
  3. Now, we have the same scenario of step 2 mentioned above.
  4. +
+ +

PS: The API communicates via SOAP protocol and maintains a session using oauth2.

+ +

Question: Which scenario is better with respect to error handling and availability? How should I handle the case when the API is down?

+",198298,,104235,,42426.62292,43164.73819,Communicating with third party API,,1,0,,,,CC BY-SA 3.0,, +308972,1,308976,,2/1/2016 16:46,,202,257546,"

I've seen this part of PEP-8 https://www.python.org/dev/peps/pep-0008/#package-and-module-names

+ +

I'm not clear on whether this refers to the file name of a module/class/package.

+ +

If I had one example of each, should the filenames be all lower case with underscores if appropriate? Or something else?

+",206740,,168744,,43035.90486,43749.42778,Python file naming convention?,,2,1,65,,,CC BY-SA 3.0,, +308977,1,308979,,2/1/2016 17:39,,48,23397,"

Suppose I have a stream of Things and I want to "enrich" them mid stream, I can use peek() to do this, eg:

+
streamOfThings.peek(this::thingMutator).forEach(this::someConsumer);
+
+

Assume that mutating the Things at this point in the code is correct behaviour - for example, the thingMutator method may set the "lastProcessed" field to the current time.

+

However, peek() in most contexts means "look, but don't touch".

+

Is using peek() to mutate stream elements an antipattern or ill-advised?

+

Edit:

+

The alternative, more conventional, approach would be to convert the consumer:

+
private void thingMutator(Thing thing) {
+    thing.setLastProcessed(System.currentTimeMillis());
+}
+
+

to a function that returns the parameter:

+
private Thing thingMutator(Thing thing) {
+    thing.setLastProcessed(currentTimeMillis());
+    return thing;
+}
+
+

and use map() instead:

+
stream.map(this::thingMutator)...
+
+

But that introduces perfunctory code (the return) and I'm not convinced it's clearer, because you know peek() returns the same object, but with map() it's not even clear at a glance that it's the same class of object.

+

Further, with peek() you can have a lambda that mutates, but with map() you have to build a train wreck. Compare:

+
stream.peek(t -> t.setLastProcessed(currentTimeMillis())).forEach(...)
+stream.map(t -> {t.setLastProcessed(currentTimeMillis()); return t;}).forEach(...)
+
+

I think the peek() version is clearer, and the lambda is clearly mutating, so there's no "mysterious" side effect. Similarly, if a method reference is used and the name of the method clearly implied mutation, that too is clear and obvious.

+

On a personal note, I don't shy away from using peek() to mutate - I find it very convenient.

+",31101,,-1,,43998.41736,43945.42014,Is it an antipattern to use peek() to modify a stream element?,,3,7,13,,,CC BY-SA 3.0,, +308978,1,310061,,2/1/2016 17:57,,2,420,"

How can I enumerate (by expression tree size, for example) all of the primitive recursive functions that map natural numbers to natural numbers in a traditional programming language like C?

+ +

For example, in Mathematica, one can express the basic primitive recursive functions as follows:

+ +
zero = Function[0];
+succ = Function[# + 1];
+proj[n_Integer] = Function[Part[{##}, n]];
+comp[f_, gs__] = Function[Apply[f, Through[{gs}[##]]]];
+prec[f_, g_] = 
+  Function[If[#1 == 0, f[##2], g[#1 - 1, #0[#1 - 1, ##2], ##2]]];
+
+ +

Hence, for example, the primitive recursive expression trees for addition, predecessor, and monus (truncated subtraction) are:

+ +

+ +

+ +

+ +

Ideally it should be possible to actually evaluate these primitive recursive functions on the natural numbers, so that one can obtain the outputs of these functions on them.

+ +

EDIT:

+ +

For example, here are the basic primitive recursive functions implemented in Python:

+ +
def zero():
+    # Takes no arguments
+    # Returns zero
+    return 0
+
+def successor(x):
+    # Takes a natural number
+    # Returns its successor
+    return x + 1
+
+def projection(n):
+    # Takes at least n+1 arguments
+    # Returns the nth argument
+    def f(*x):
+        return x[n]
+    return f
+
+def composition(g, *h):
+    # Takes a k-ary function and k m-ary functions
+    # Returns an m-ary function
+    def f(*x):
+        return g(*map(lambda h_: h_(*x), h))
+    return f
+
+def recursion(g, h):
+    # Takes a k-ary function and a (k+2)-ary function
+    # Returns a (k+1)-ary function
+    def f(n, *x):
+        if n == 0:
+            return g(*x)
+        else:
+            return h(f(n - 1, *x), n - 1, *x)
+    return f
+
+ +

Hence we can implement addition, predecessor, and monus (truncated subtraction) as follows:

+ +
addition = recursion(projection(0), composition(successor, projection(0)))
+predecessor = recursion(zero, projection(1))
+monus = recursion(projection(0), composition(predecessor, projection(0)))
+
+print addition(12, 6)
+print predecessor(16)
+print monus(10, 19)
+
+ +

I then constructed a way to represent (and parse/evaluate) the structure of different primitive recursive functions:

+ +
Expression = collections.namedtuple('Expression', ['head', 'arguments'])
+
+def parse(expression):
+    if isinstance(expression, Expression):
+        return expression.head(*map(lambda argument: parse(argument), expression.arguments))
+    else:
+        return expression
+
+ +

For example, the predecessor function can be represented as

+ +
predecessorExpression = Expression(
+    head=recursion,
+    arguments=(
+        zero,
+        Expression(
+            head=projection,
+            arguments=(
+                Expression(
+                    head=successor,
+                    arguments=(
+                        Expression(
+                            head=zero,
+                            arguments=()
+                        ),
+                    )
+                ),
+            )
+        )
+    )
+)
+
+ +

The parser works successfully when evaluating the predecessor expression:

+ +
predecessorFunction = parse(predecessorExpression)
+print predecessorFunction(42)
+
+ +

What remains is to construct the expression trees that represent the primitive recursive functions. Does anyone know what the best way to approach this would be?

+ +

EDIT 2: Just came across this promising paper.
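In the meantime I hacked together a size-based enumerator for a restricted fragment, to at least exercise the idea. Everything here is my own simplification: it enumerates only unary functions, with unary composition and a degenerate form of primitive recursion, so it covers a strict subset of the primitive recursive functions, and it yields closures rather than Expression trees:

```python
# Simplified sketch: enumerate *unary* functions by expression tree
# size, built from zero, successor, the identity projection, unary
# composition, and a degenerate form of primitive recursion.

def zero(*_):
    # Constant zero, tolerant of any arity.
    return 0

def successor(x):
    return x + 1

def identity(x):
    # projection(0) specialised to one argument
    return x

def compose(f, g):
    return lambda x: f(g(x))

def prec(base, h):
    # f(0) = base, f(n + 1) = h(f(n))
    def f(n):
        acc = base
        for _ in range(n):
            acc = h(acc)
        return acc
    return f

def enumerate_unary(size):
    """Yield unary functions whose expression tree has `size` nodes."""
    if size == 1:
        yield zero
        yield successor
        yield identity
        return
    # Both combinators are binary, so split size - 1 over two subtrees.
    for left in range(1, size - 1):
        right = size - 1 - left
        for f in enumerate_unary(left):
            for g in enumerate_unary(right):
                yield compose(f, g)
                yield prec(f(0), g)  # base value taken from f at 0

if __name__ == "__main__":
    fns = list(enumerate_unary(3))
    print(len(fns))  # 18: 9 leaf pairs, two combinators each
```

The real task would be the same loop, but tracking arities so that composition and recursion stay well-typed, and yielding Expression trees (as defined above) instead of closures.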

+",92800,,-1,,42838.53819,42413.94028,Enumerating the primitive recursive functions,,1,9,1,,,CC BY-SA 3.0,, +308984,1,308986,,2/1/2016 19:21,,5,110,"

I know what SRP is, but I am questioning my current design of an object I call an entity. Below is a picture of the design I am referring to. If I shift the code from the GeometricInformation and TransactionInformation objects to the abstract GeometricEntityBase and TransactionalEntityBase classes respectively, would this be considered a severe breach of SRP? I'm hoping the experts here can help poke holes in this design or affirm it if appropriate.

+ +

To summarize the intent, I have a CAD API I'm working on. Each entity in the CAD realm is represented by an Entity in code. To allow Entities to carry only those attributes appropriate to their purpose, there are three levels of inheritance the Entity can use. For example, if an entity needs only the very basic features (like having an ID), it can just inherit from EntityBase. If it requires additional geometric properties to identify boundaries, insert point, transformation matrix, etc., then it would inherit from GeometricEntityBase. If it is to be a fully transacted entity (via a custom transactional framework), then it is required to carry the attributes of both plus transactional ones (such as the entity state (new, clean, dirty), document, etc.).

+ +

To ensure SRP, I used the adapter pattern and set the geometric attributes and operations to an object called GeometricInformation and likewise the transactional stuff to TransactionInformation. This is working great, but I am now beginning to realize that in order to allow changes to geometric properties, I need to validate against the entity state. In other words, the GeometricInformation object has to know about the attributes of the TransactionInformation object. Solutions seem to involve an observer pattern such that the GeometricInformation could observe the TransactionInformation (which seems forceful), or assert an XXXEntityBase property ""CanModify"" that can be changed and observed by either XXXInformation object. Unfortunately this would create a two-way dependency since each XXXInformation object would have to know its parent.

+ +

Because of the additional fact that WCF is used to communicate the entities to a lower-level data layer, there's also a translation process to convert these to DTO equivalents. In the end, I'm thinking of moving the code from the XXXInformation objects to their abstract equivalents, which have very little in them anyway, but I am concerned that breaking SRP would potentially cause downstream issues too. On the other hand, this would streamline the entities and allow the translation process to improve in terms of performance as well, so there is definitely some give-and-take.

+ +

+",213195,,1204,,42401.8375,42401.8375,How do I more effectively design this per its intended design while ensuring I'm meeting SOLID design principles?,,1,5,,,,CC BY-SA 3.0,, +308992,1,308995,,2/1/2016 22:36,,4,324,"

I know the basics of why a database uses a transaction log - fulfilling ACID properties, ability to rollback/restore, etc.

+ +

The basic algorithm that I see for a transaction is as follows:

+ +
    +
  • transaction changes data - data pages are changed in memory
  • +
  • transaction log is written to disk (must happen before commit is recorded)
  • +
  • commit is issued
  • +
  • data pages are written to disk asynchronously
  • +
+ +

In theory, this means that you save I/O operations because you don't need to wait for data pages to be written to disk before a commit. You only need to wait for the log to finish being written because the transaction can be redone from the log if a crash occurs before the data pages are written. However, if the transaction log needs to store before/after values or at least all the new data that changed in the data pages anyway, how does this save I/O? Would it not take the same amount of time to not store the data changes in the log and just wait for data pages to be written synchronously?

+ +

For example, if my transaction makes 10 MB of changes to data, wouldn't it take the same amount of time to write 10MB changes to the log as it would to just wait and write those 10MB of data pages to disk? What's the point of also storing the changed data in the log?
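To make my intuition concrete, here is a toy calculation with completely made-up numbers (the two cost constants are illustrative assumptions, not measurements). The point I keep reading is that the log write is one sequential append, while the same 10 MB of page changes can be scattered across many pages, each needing its own seek:

```python
# Illustrative (made-up) numbers: one sequential log append vs many
# scattered in-place page writes, both moving ~10 MB.
SEEK_MS = 10             # assumed cost to position the disk head once
TRANSFER_MS_PER_MB = 5   # assumed sequential transfer cost per MB

data_mb = 10
pages_touched = 500      # 10 MB of changes scattered over 500 pages

# Log: one seek, then a sequential append of the whole 10 MB.
log_ms = SEEK_MS + data_mb * TRANSFER_MS_PER_MB

# In-place pages: one seek per scattered page, plus the same transfer.
pages_ms = pages_touched * SEEK_MS + data_mb * TRANSFER_MS_PER_MB

print(log_ms, pages_ms)  # 60 vs 5050 under these assumptions
```

So even though the byte volume is identical, the access pattern is not - which, if I understand correctly, is the part of the argument my question is probing.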

+",213737,,,,,42401.95069,What's the advantage of storing database transaction changes in the log as well as data pages?,,1,0,,,,CC BY-SA 3.0,, +309012,1,309064,,2/2/2016 6:32,,5,1183,"

I've a RESTFul microservice written in Grails. This microservice has it's own DB. It also depends on two other microservices.

+ +

A typical workflow of the service is:

+ +
    +
  1. Receives a GET request from a client e.g. browser
  2. +
  3. Calls another microservice through http to get some information
  4. +
  5. Queries it's own DB to get some data
  6. +
  7. Send a response to the client
  8. +
+ +

I want to write automated tests for this service. Whenever I'll run the test, it'll do the following:

+ +
    +
  • Mock the external microservices
  • +
  • Create a database and populate it with test data
  • +
  • Run the application and configure it to use the mock services and DB
  • +
  • Run the test cases by sending http requests and matching responses
  • +
+ +

My question is: What type of test is this?

+ +

I'm not a QA so may be I'm asking a stupid question.

+",162218,,162218,,42403.17847,42403.17847,What is it called when you test a microservice by mocking the dependencies?,,2,4,1,,,CC BY-SA 3.0,, +309013,1,,,2/2/2016 8:03,,0,2266,"

We have an Angular JS app for which we use Bootstrap CSS. We are currently using pure Bootstrap with the default CSS files. However, we need to style the app in my company's corporate design sooner or later. What's the best/recommended way to do this?

+ +

We already thought of the following possibilities:

+ +

1 # Simply overwrite the Bootstrap classes:

+ +
.btn {
+    background-color: blue
+}
+
+ +

2 # Modify the Bootstrap LESS files and compile Bootstrap.

+ +

3 # Introduce own CSS classes to overwrite Bootstrap:

+ +
<button class=""btn my-btn"">
+
+.my-btn {
+   background-color: blue
+}
+
+ +

I personally would prefer #1 as it sounds easy. We have nearly no experience with LESS, so we don't prefer #2. However, my colleague prefers #3 as he doesn't want to touch Bootstrap's classes. But my opinion is that we shouldn't bloat the HTML with our own classes while we are still using Bootstrap classes.

+ +

Is there any recommendation?

+",51450,,,,,42927.06319,Overwriting Bootstrap CSS classes,,4,1,,,,CC BY-SA 3.0,, +309015,1,309033,,2/2/2016 8:13,,9,1105,"

Many coverage tools evaluate an entire project, including the unit test code itself. In VS 2013, the Analyze Code Coverage/All Tests option includes test code in its report. OpenCover does so as well, I believe. In Eclipse, for a Maven project with the typical src/main/java and src/test/java setup, EclEmma will report coverage for both main and test code.

+ +

This seems of minimal value to me, except for possibly ensuring that all tests are actually being executed. With test code included, the coverage % is often artificially high since often the tool will report close to 100% coverage for test code, which can skew the entire project above a benchmark level (say 80%) that it might not have otherwise achieved.

+ +

Are there legitimate reasons to include test code in coverage? Or should I continue filtering it out when automating our coverage reporting?

+",95212,,31260,,42402.37917,42402.4875,When does it make sense to include test code in coverage?,,1,5,1,,,CC BY-SA 3.0,, +309016,1,309017,,2/2/2016 8:39,,2,1833,"

One of the advantages of open-source programs is that they can be ported easily to any platform simply by recompiling. Qt is also a cross-platform toolkit that can work on several platforms simply by recompiling, and so applications can be ported between Windows, Mac, Linux, and other desktop OS's.

+ +

Is there any possibility of somehow recompiling a desktop app that uses Qt into an Android or iOS application, using the exact same desktop toolkit (i.e. menu bars, buttons, etc.). I realize that this is simply not how Android applications should work (different UI paradigms), but it can have some uses, particularly with Android tablets. Open-source applications can be ported easily in theory, and android also supports keyboard and mouse so there's that. Complex applications that require a desktop UI can be ported to tablets.

+ +

I haven't found any examples, so there must be a reason why it's not common considering it should be easy to do in theory. This question also applies to other portable toolkits but Qt is the first thing I can think of. Also, iOS probably doesn't give you flexibility in this regard so it might be a good idea to talk mostly about Android as an example.

+ +

Note, I am fully aware that mobile apps should not use the same UI as desktops and that they have a completely different UI paradigm. I'm just asking if it's still possible to run desktop applications on mobile OS's simply by recompiling (and maybe a few minor changes), much like how desktop apps can be ported to other desktop platforms in the same way, and if it isn't, why not.

+",124792,,,,,42552.43333,Can Qt desktop programs be recompiled for Android/iOS as-is?,,1,0,,,,CC BY-SA 3.0,, +309022,1,309025,,2/2/2016 9:31,,0,1288,"

So in PHP, there's PDO to fetch things from a database. Now I see a bunch of $stmt->rowCount() calls in the code I'm working with. Why not just fetch the result array and throw a PHP count() over the result to fetch the row count? Wouldn't this make PDO's rowCount() obsolete?

+",210884,,,,,42402.43889,Why use PDO's rowCount() if I can just use PHP's count()?,,1,3,,,,CC BY-SA 3.0,, +309023,1,309029,,2/2/2016 9:41,,1,136,"

I am learning OpenGL/C++ by following a tutorial series on in2gpu. I set up my project just like the author did. It is an Empty Visual C++ Project. Inside the project I linked glew and freeglut; the corresponding files are inside a Dependencies folder which I copied into the project folder. Everything works like a charm so far; I can build and run.

+ +

Now I tried to add the solution to a TFS-collection on my Visual Studio Team Services account. Here I encounter two problems.

+ +
    +
  1. While the author of the tutorial adds folders inside his Visual Studio solution, I only have the option to add filters.
  2. +
+ +

This seems to break my project structure. After I check in the solution and inspect it in Source Control Explorer on my TFS, it does not have its folder structure anymore. All header and cpp files are placed inside the solution without any folders or filters. This of course makes it impossible to get a working latest version of the project. I have to restructure the entire project every time I get the latest version in order to resolve compile errors. How can I get it to work?

+ +
    +
  1. The second problem I have is more of a general question about versioning of this kind of projects. Like I mentioned before I have several dependencies which are referenced via Linker settings.
  2. +
+ +

+

+ +

They are not included into the solution though. It is just a folder inside the project folder. Therefore they are not included into the Solution in the TFS, but the Linker settings seem to be saved. Would it be possible to include these dependencies into the version control too?

+",58653,,,,,42402.43472,Managing Opengl projects under Team Foundation Server,,1,0,,,,CC BY-SA 3.0,, +309024,1,,,2/2/2016 9:52,,2,904,"

I'm not sure if I am getting this right. In order to observe proper SOLID principles, am I forbidden to inherit from concrete classes? Does that mean that every concrete class I have must more or less be sealed (or at least considered to be sealed)?

+ +

This is confusing for me because I encountered this code from our repository:

+ +
class FontList : ObservableCollection<string> 
+{ 
+    public FontList() 
+    {
+        foreach (FontFamily f in Fonts.SystemFontFamilies)
+        {                
+            this.Add(f.ToString());                
+        }  
+    }   
+}
+
+ +

Which is inheriting from ObservableCollection<string>, a concrete class (correct?). However looking at ObservableCollection:

+ +
[Serializable]
+public class ObservableCollection<T> : Collection<T>, INotifyCollectionChanged, INotifyPropertyChanged 
+
+ +

It is inheriting from Collection<T>, which is also a concrete class. Can anyone explain the correct interpretation of DIP, especially with regard to concrete class inheritance?

+",150538,,,,,42492.48819,Dependency Inversion Principle - No deriving from concrete classes?,,3,0,,,,CC BY-SA 3.0,, +309030,1,,,2/2/2016 10:31,,24,22186,"

I'm creating a REST API using Spring Boot and I am using Hibernate Validation to validate request inputs.

+ +

But I also need other kinds of validation; for example, when updated data needs to be checked and the company id doesn't exist, I want to throw a custom exception.

+ +

Should this validation be located at the service layer or the controller layer?

+ +

Service Layer :

+ +
 public Company update(Company entity) {
+    if (entity.getId() == null || repository.findOne(entity.getId()) == null) {
+        throw new ResourceNotFoundException(""cannot update non-existent data with id : "" 
+            + entity.getId());
+    }
+    return repository.saveAndFlush(entity);
+}
+
+ +

Controller Layer :

+ +
public HttpEntity<CompanyResource> update(@Valid @RequestBody Company companyRequest) {
+    Company company = companyService.getById(companyRequest.getId());
+    Precondition.checkDataFound(company, 
        ""Cannot find data with id : "" + companyRequest.getId());
+
+    // TODO : extract ignore properties to constant
+
+    BeanUtils.copyProperties(companyRequest, company, ""createdBy"", ""createdDate"",
+            ""updatedBy"", ""updatedDate"", ""version"", ""markForDelete"");
+    Company updatedCompany = companyService.update(company);
+    CompanyResource companyResource = companyAssembler.toResource(updatedCompany);
+    return new ResponseEntity<CompanyResource>(companyResource, HttpStatus.OK);
+}
+
+",213462,,15455,user40980,42473.44306,44151.7375,In which layer should validation be located?,,5,0,2,,,CC BY-SA 3.0,, +309038,1,309045,,2/2/2016 12:28,,2,806,"

One of the things I love about Go is how it encourages passing errors as return values, but when it comes to logging, what is the most maintainable solution: passing the error as far back down the execution stack as possible and logging everything there, or logging it in the function where it occurred, there and then?

+ +

So the former:

+ + + +
func main() {
+    err := DoSomething()
+    if err != nil {
+        log.Print(err)
+    }
+}
+
+func DoSomething() error {
+    return DoSomethingElse()
+}
+
+func DoSomethingElse() error {
+    return ThisFunctionMayCauseAnErrorAndIsHorriblyNamed()
+}
+
+ +

Or

+ +
func main() {
+    _ = DoSomething()
+}
+
+func DoSomething() error {
+    return DoSomethingElse()
+}
+
+func DoSomethingElse() error {
+    err := ThisFunctionMayCauseAnErrorAndIsHorriblyNamed()
+    if err != nil {
+        log.Print(err)
+    }
+    return err
+}
+
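A third option I have considered (a sketch with my own placeholder function names, and it needs Go 1.13+ for the %w verb): wrap the error with context at each level, and log exactly once at the top.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// Hypothetical failing operation standing in for the badly named one.
func thisFunctionMayCauseAnError() error {
	return errors.New("disk on fire")
}

func doSomethingElse() error {
	if err := thisFunctionMayCauseAnError(); err != nil {
		// Annotate instead of logging; %w keeps the cause
		// reachable via errors.Is / errors.As.
		return fmt.Errorf("doSomethingElse: %w", err)
	}
	return nil
}

func doSomething() error {
	if err := doSomethingElse(); err != nil {
		return fmt.Errorf("doSomething: %w", err)
	}
	return nil
}

func main() {
	if err := doSomething(); err != nil {
		// Single log site, full context trail.
		log.Print(err)
	}
}
```

This keeps the log output in one place while still recording where the error travelled through.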
+",191800,,,user40980,42402.58125,42402.58194,Should I log errors in the function they occur? or pass them back and log them when control returns?,,2,0,1,,,CC BY-SA 3.0,, +309048,1,309055,,2/2/2016 14:40,,3,1028,"

My team is currently facing a problem that we don't know how to tackle.

+ +

Some technical details: we use Java 8, Hibernate, Spring, MySQL, and AngularJS for the front-end.

+ +

We need to do pagination on a combined set of local and remote data. The approach we are taking is the following:

+ +
    +
  1. Query our local database since we support pagination with Hibernate and retrieve the paginated records.
  2. +
  3. Prepare the previous records to invoke a REST service to retrieve extra details.
  4. +
  5. Merge all the details and render the data in a webpage.
  6. +
+ +

This approach works fine when we are doing a search to simply retrieve more details from the remote service into our local data.

+ +

The problem lies in the impossibility of using our local data as the pagination reference.

+ +

Example:
+Let's say that we are searching for all the SuperBrand cars in stock that use milk as fuel and have square wheels. In our local DB we have 10k cars from which 2k are SuperBrand. So we have this initial volume of 2k records of local data that Hibernate manage to paginate into 20 pages of 100 records each. However, the customer only wants the milk and square wheels models. In this case, we need to invoke a third-party that can tell us of those 2k cars which ones fulfil the search criteria. The thing is that not all cars run on milk and have square wheels which may disrupt the pagination. Plus, the third-party doesn't support pagination.

+ +

For this reason, we've dismissed the possibility of sending the data to the third party page by page, i.e., send the 100 records of page 1 and check with the third party, and when the customer selects page 2, send the corresponding 100 records and re-check with the third party. This is not ideal because, of the 100 records of page 1, maybe only 10 fulfil the criteria. As a consequence, we would only display 10 records on page 1. Perhaps the whole 100 records of page 2 fulfilled the criteria and could actually be displayed. Nevertheless, pagination is already broken by page 1.

+ +

The other possibility is to send the whole 2k records, invoke the third-party, reduce the data volume to the relevant number of records fulfilling the search criteria and create a pagination mechanism in the server to handle and hold the data. Even if conceptually this would work I'm still worried about performance for huge data volumes.

+ +

Questions regarding the topic:

+ +
    +
  • Is the previous option a valid approach?
  • +
  • Is there any possibility to stream the data even with a remote service involved, instead of doing pagination (bear in mind that the searches are triggered in AngularJS through REST calls to our servers)?
  • +
  • Would anyone recommend another approach?
  • +
+",213830,,31260,,42402.61875,42402.65972,How to paginate local and remote data?,,1,2,,,,CC BY-SA 3.0,, +309051,1,309057,,2/2/2016 15:14,,4,1514,"

For an interface that can be used symmetrically, for example:

+ +
interface **ipc**
+   send()
+   receive() 
+
+ +

Both components receive and send. How do I represent this in UML?

+ +

Currently I am doing this:

+ +

+ +

Is this how it's done in a component diagram? If not, what's a better approach? (Please ignore the ports, the real components contain internal details).

+",7502,,,,,42402.6625,How do I represent in UML two-way interface used by two connected components?,,1,1,,,,CC BY-SA 3.0,, +309052,1,,,2/2/2016 15:32,,1,113,"

I have written some projects. I try to annotate or comment everything possible (though it is hard to say whether the explanations are actually clear).

+ +

I would like to write documentation for them, but I have a problem with examples.

+ +

It is no problem to write, in the annotation above each function (where it is really needed), examples of all possible uses of that function - but it would be good to describe each of those examples.

+ +

For this, it would be better to use an additional file where the examples can be described as well as possible. But what format and what file type should I use for it?

+ +

A secondary question (partly related to the main one) is that I am not sure whether the examples need to be runnable, because in the case of functions stored in traits it is probably very hard to make those examples runnable.

+",213483,,,,,43450.58611,The best practice for writing of examples,,3,2,,,,CC BY-SA 3.0,, +309065,1,309067,,2/2/2016 16:55,,-1,342,"

If Strings are immutable in Java, why is the output of this code 2GB, instead of 1GB?

+ +
class Laptop {
+   String memory = ""1GB"";
+}
+
+class Workshop {
+    public static void main(String args[]) {
+       Laptop life = new Laptop();
+       repair(life);
+       System.out.println(life.memory);
+    }
+
+    public static void repair(Laptop laptop) {
+        laptop.memory = ""2GB"";
+    }
+}
+
+",205488,,205488,,42402.99792,42402.99792,In which cases and examples String in Java is not immutable?,,1,5,,42407.85833,,CC BY-SA 3.0,, +309066,1,309128,,2/2/2016 16:59,,5,1067,"

I'm writing a simple whitebox unit testing suite for a project I'm working on in C. The project is broken into modules (each has a .c file with an associated .h file), and all modules are compiled together into one .so file.

+ +

Obviously, the .h file for each module contains function prototypes for any function that is meant to be visible to the other modules, but does not contain prototypes for internal or helper functions.

+ +

My test suite is a collection of .c files in a separate directory. Each .c file is responsible for testing a specific module, plus a master file which executes all the tests. All of these files are compiled into one executable. Currently this binary is linked to the .so binary of the project, and I'm #include-ing the project's .h files in the individual module testing files as appropriate. This allows me to test all of the ""public"" functions.

+ +

I would also like to directly test some of the internal and helper functions. I can think of a few ways to go about this.

+ +
    +
  1. Go ahead and declare all of a module's functions in that module's header file. I don't want to do this because it seems like bad practice.
  2. +
  3. Create a second collection of .h files which contain the prototypes for all the internal/helper functions, and include these in the appropriate testing files. I'm hesitant to do this because it seems like a lot of work to maintain.
  4. +
  5. #include the module's .c file directly. This would be easy to maintain, but it would make me feel so dirty inside.
  6. +
+ +

Which (if any) of these techniques are the best way to go about this?

+",181707,,181707,,42402.91528,42404.5375,Unit Testing Module-Internal Functions,,3,3,,,,CC BY-SA 3.0,, +309068,1,309074,,2/2/2016 17:06,,48,18208,"

Having chaining implemented on beans is very handy: there is no need for overloaded constructors, mega constructors, or factories, and it gives you increased readability. I can't think of any downsides, unless you want your object to be immutable, in which case it would not have any setters anyway. So is there a reason why this isn't an OOP convention?

+ +
public class DTO {
+
+    private String foo;
+    private String bar;
+
+    public String getFoo() {
+         return foo;
+    }
+
+    public String getBar() {
+        return bar;
+    }
+
+    public DTO setFoo(String foo) {
+        this.foo = foo;
+        return this;
+    }
+
+    public DTO setBar(String bar) {
+        this.bar = bar;
+        return this;
+    }
+
+}
+
+//...//
+
+DTO dto = new DTO().setFoo(""foo"").setBar(""bar"");
+
+",139342,,142522,,42404.31111,42404.43542,Why is chaining setters unconventional?,,7,17,8,,,CC BY-SA 3.0,, +309071,1,309076,,2/2/2016 17:23,,1,184,"

Consider this bit of code :

+ +
private Norf foo(Baz baz) {
+    // ...
+    // Logic on baz
+    // ...
+
+    if (baz.color == Baz.BLUE) {
+        // Do this thing
+    }
+
+    // ...
+    // More logic
+    // ...
+
+    return norf;
+}
+
+ +

Assume that the content of the if statement is at the wrong level of abstraction for foo and therefore ripe for refactoring. I often walk into situations like this, but I never feel like my refactoring is good.

+ +

One way to refactor would be to extract a doThisThingIfBazIsBlue function. A function with an ""if"" in its name smells bad to me though, so that's not good.

+ +

Another approach is to keep the conditional in foo and extract doThisThing. This solution feels better, but sometimes it's not really foo's business whether the ""thing"" is done or not. In this case foo is too short for this to matter, but in longer functions it can end up being the bulk of the complexity.

+ +

Do you know a naming pattern that works for this kind of situation? How do you refactor complex if statements and bodies?
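For reference, here is the second approach applied to a toy version of foo (the Baz behaviour and method names are invented just to make it runnable): the condition stays in foo, but both the predicate and the action get names, so foo reads at one level of abstraction.

```java
// Sketch only: Baz/foo are reduced to toy behaviour to show the naming.
class Baz {
    static final int BLUE = 1;
    int color = BLUE;

    boolean isBlue() {
        return color == BLUE;
    }
}

class Refactored {
    static String foo(Baz baz) {
        String norf = "norf";
        // foo still decides *whether*; the extracted method owns *what*.
        if (baz.isBlue()) {
            norf = applyBlueHandling(norf);
        }
        return norf;
    }

    static String applyBlueHandling(String s) {
        return s + "-blue";
    }

    public static void main(String[] args) {
        System.out.println(foo(new Baz())); // prints norf-blue
    }
}
```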

+",128407,,,,,42403.69722,How to name functions that use conditionals in refactoring,,4,0,,42407.41597,,CC BY-SA 3.0,, +309075,1,,,2/2/2016 17:51,,4,944,"

Background

+ +

I'm a technical lead on a small team of three developers who work at a community college. Because of the nature of our environment, our projects are typically related (since the core set of data we use is relatively small), but all of our work is done in separate, independent C# solutions with ASP.NET front-ends.

+ +

I'm responsible for maintaining the core set of shared code, our college's framework, but I also have a full plate of projects. Because I wrote most of the framework code, neither of the other members of my team wants to modify or expand it (for reasons I can explain in greater detail).

+ +

Problems

+ +

Our entire team (the three of us and our manager) attended Scrum training last year, and are all busily trying to engage our internal stakeholders in the Scrum process. However, when we have to switch from one project to another (and we have about 25 separate projects), we incur a pretty high transition penalty.

+ +

Older code doesn't get maintained or brought up to new standards as easily because we simply add a reference to a specific version of the framework, which is then long-gone by the time we re-open older projects. We don't really take new code that we could reuse and integrate it into the framework.

+ +

Although our manager is supportive of us revising older code, our projects are always scheduled back to back - even now that we're using Scrum, we have internal pressure pushing new items to the team instead of us having room to pull new work when we're ready.

+ +

Solutions(?)

+ +

My team and I pair program, team program, or perform code reviews (if we can't pair) regularly. I have emphasized that I do not own our shared framework and regularly solicit feedback from my team for new features or bugs.

+ +

I proposed to the team that we try to find a new way to integrate our code together, which is through one large web-based project (I can hear you all inhaling sharply as I type this). I'm at my wits' end trying to build my team's skillset and confidence to the point that we can share code and we feel comfortable updating each other's code; however, progress is extremely slow while the amount of code we're maintaining grows much faster.

+ +

When I suggested this option to my team, everyone was really excited by the thought of being able to work together and improve our code reuse. We all feel like we could better support each other under this scenario. However, we're also concerned about approaching a project of this size, and we also don't know about logistical issues like the time it would take to run unit tests or even simple project builds. I recognize my own level of ignorance here, but at the same time, I'm really constrained by resources and personnel, so in spite of sitting with the idea for several weeks, I don't have any better options.

+",4869,,,,,42854.60417,How do you integrate separate projects into a single solution?,,2,6,,,,CC BY-SA 3.0,, +309081,1,309334,,2/2/2016 18:46,,16,775,"

I've been thinking about creating custom types for identifiers like this:

+ +
public enum CustomerId : int { /* intentionally empty */ }
+public enum OrderId : int { }
+public enum ProductId : int { }
+
+ +

My primary motivation for this is to prevent the kind of bug where you accidentally pass an orderItemId to a function that was expecting an orderItemDetailId.

+ +

It seems that enums work seamlessly with everything I would want to use in a typical .NET web application:

+ +
    +
  • MVC routing works fine
  • +
  • JSON serialization works fine
  • +
  • Every ORM I can think of works fine
  • +
+ +

So now I am wondering, ""why shouldn't I do this?"" These are the only drawbacks I can think of:

+ +
    +
  • It may confuse other developers
  • +
  • It introduces inconsistency in your system if you have any non-integral identifiers.
  • +
  • It may require extra casting, like (CustomerId)42. But I don't think this will be an issue, since the ORM and MVC routing will typically be handing you values of the enum type directly.
  • +
+ +

So my question is, what am I missing? This is probably a bad idea, but why?

+",39776,,,,,42405.55625,What are the drawbacks to mapping integral identifiers to enums?,,4,3,2,,,CC BY-SA 3.0,, +309085,1,,,2/2/2016 19:41,,1,856,"

Actual architecture

+ +

I have an app where a model is attached to every activity. The model asks a repository for information that can come from the server or the local DB, depending on many factors. When it asks the server for information, it creates a request encapsulated in an object that goes to a web service manager that processes the request. The web service manager returns an errorEvent if something goes wrong and also handles the errors; when there is information, it returns a genericEvent that comes with all the information retrieved from the server.

+ +

Problem

+ +

The server is moving to the next REST API version, with some improvements and features. The server is an engine configurable from the app, since every client has its own server setup. Some clients will not update their servers to the new version, and will keep using the app version from the Play Store. So I need to be able to handle different REST API versions from the app, since some endpoints will be missing on older server versions.

+ +

My alternative for solutions

+ +

I've changed each repository interface into an abstract class that implements the interface, in order to create common calls that can be overridden for each version. Every new version will inherit from the previous one, with the necessary changes. This way I'll be able to handle the differences.

+ +
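A rough Java sketch of that idea, where a base repository implements the common calls and each API version subclass overrides only what changed (all names here are hypothetical):

```java
public class VersionedRepositories {

    interface UserRepository {
        String userEndpoint();
        boolean supportsAvatars();
    }

    // Common behaviour shared by every API version.
    static abstract class BaseUserRepository implements UserRepository {
        public String userEndpoint() { return "/api/v1/users"; }
        public boolean supportsAvatars() { return false; }
    }

    // v1 uses the defaults unchanged.
    static class UserRepositoryV1 extends BaseUserRepository { }

    // v2 inherits from v1 and overrides only the endpoints that changed.
    static class UserRepositoryV2 extends UserRepositoryV1 {
        @Override public String userEndpoint() { return "/api/v2/users"; }
        @Override public boolean supportsAvatars() { return true; }
    }

    // The stable configuration endpoint tells us which version to build.
    static UserRepository forVersion(int apiVersion) {
        return apiVersion >= 2 ? new UserRepositoryV2() : new UserRepositoryV1();
    }

    public static void main(String[] args) {
        System.out.println(forVersion(1).userEndpoint()); // /api/v1/users
        System.out.println(forVersion(2).userEndpoint()); // /api/v2/users
    }
}
```

The factory method would be driven by the version reported by the stable configuration endpoint mentioned in the note below.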

NOTE: it is guaranteed that the endpoint that retrieves the server configuration and version won't change over time, or at least not for a few years.

+ +

Any ideas about it?

+",213860,,31260,,42402.82222,42402.82222,Handling multiple rest api versions within Android Java client,,0,2,,,,CC BY-SA 3.0,, +309086,1,309088,,2/2/2016 19:47,,1,380,"

I have a Java project whose architecture is quite component-oriented, and I am wondering

+ +
    +
  • if this is a common way to organize code
  • +
  • which rules/patterns are used,
  • +
  • if there is a name for this coding style.
  • +
+ +

Component packages

+ +
src/namespace/component1
+src/namespace/component1/error
+src/namespace/component1/impl
+src/namespace/component1/data
+
+src/namespace/component2
+...
+
+ +

Each component is a bit like a service. Some launch a Thread, some open some sockets for connections, etc

+ +

Structure of component

+ +
// ""Root"" of the package : Interfaces + InterfaceCoordinator
+src/namespace/componentx/ComponentXInterfaceCoordinator.java
+src/namespace/componentx/IStartup.java
+src/namespace/componentx/IinterfaceFoo.java
+
+// Implementation of the component
+src/namespace/componentx/impl/Startup.java
+src/namespace/componentx/impl/Foo.java
+...
+
+// Exceptions this component can throw
+src/namespace/componentx/error/SomeError.java
+...
+
+// Data (not really sure exactly what we can call data or not)
+src/namespace/componentx/data/SomeConfiguration.java
+src/namespace/componentx/data/SomePersistedEntity.java
+src/namespace/componentx/data/SomeNonPersistedEntity.java
+
+ +

Application Main

+ +
public static void main(String[] args) {
+  ...
+  logger.info(""Initialize ComponentX Component"");
+  ComponentXInterfaceCoordinator.getStartup().init();
+
+  logger.info(""Initialize ComponentY Component"");
+  ComponentYInterfaceCoordinator.getStartup().init();
+  ...
+
+ +

ComponentX Interface Coordinator

+ +

public class ComponentXInterfaceCoordinator {

+ +
public static IStartup getStartup(){
+    return Startup.getInstance();
+}
+
+ +

Startup Interface/impl

+ +
public interface IStartup {
+
+    public void init(); 
+}
+
+public class Startup implements IStartup{
+
+    private static Startup instance = null;
+    private Logger logger = Logger.getLogger(""ComponentX"");
+
+
+    public static IStartup getInstance() {
+        if(instance==null)
+            instance = new Startup();
+        return instance;
+    }
+
+    @Override
+    public void init() {
+
+        logger.debug(""Initialising subcomponents: xxx"");
+        SubComponentXy.getStartup().init();
+
+        logger.debug(""Initializing Connection Manager Component"");
+        ComponentX.getInstance().addProtocol(new MobileSystemListeningProtocol());
+        ComponentX.getInstance().startProtocols();
+    }
+}
+
+ +

In addition to the questions I asked at the beginning:

+ +

Most of the main component classes also have this getInstance() method, which returns the singleton instance of the class.

+ +

Is getInstance() following some kind of pattern? Or an anti-pattern? +Is it actually just a way to get around the fact that static methods cannot be declared in interfaces (in Java <= 1.7)? +If I start a new project in Java 1.8, should I go about it a different way?
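For context on the Java 8 part of the question: since Java 8 an interface can declare static methods directly, so the factory can live on the interface itself rather than in a separate coordinator class. A minimal sketch (hypothetical names, using the initialization-on-demand holder idiom for a thread-safe lazy singleton):

```java
public class StaticInterfaceDemo {

    interface IStartup {
        void init();

        // Java 8+: the interface itself can expose the factory,
        // removing the need for a separate *InterfaceCoordinator class.
        static IStartup getInstance() {
            return Holder.INSTANCE;
        }
    }

    // Holder idiom: the JVM guarantees lazy, thread-safe initialization
    // of the static field without any explicit locking.
    static class Holder {
        static final IStartup INSTANCE = new Startup();
    }

    static class Startup implements IStartup {
        boolean initialized;
        @Override public void init() { initialized = true; }
    }

    public static void main(String[] args) {
        // Same instance on every call.
        System.out.println(IStartup.getInstance() == IStartup.getInstance());
    }
}
```

This is only a sketch of the language feature, not a recommendation for or against the coordinator design itself.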

+",213849,,213849,,42403.09444,42403.09444,Service/component based application in Java,,1,1,1,,,CC BY-SA 3.0,, +309087,1,309091,,2/2/2016 20:01,,3,81,"

Usually you put your license into a single file called COPYING or LICENSE. However there may be reasons you do not want to do this - let's not discuss them - and therefore you look for alternative ways.

+ +

So what about putting the license into the issue tracker? An advantage may be that you can clearly see who did this (the author).

+ +

So is this okay from a legal perspective? Is it possibly even superior to putting the license into a file? +And should this be done? (You may list reasons other than legal ones here)

+ +
+ +

This question arose out of a discussion about the LICENSE file on GitHub. You may have a look there to get some arguments; however, please answer this question in an objective way, as you should on Stack Overflow. +If you want to participate in the discussion, please comment on GitHub instead.

+",209137,,,,,42402.85417,Is putting the license into the issue tracker okay or is it even better than putting it into a single file?,,2,1,,,,CC BY-SA 3.0,, +309089,1,,,2/2/2016 20:13,,-1,226,"

If the testing phase is over 5 months long do the developers help the testers execute scripts? I don't know how it's done in a waterfall lifecycle where the phases are usually quite long.

+",201022,,31260,,42402.84375,42402.87639,What do developers do during testing phase of a waterfall life cycle?,,1,7,,42402.88958,,CC BY-SA 3.0,, +309097,1,,,2/2/2016 20:49,,0,629,"

My company has a series of WinForms applications that pretty crudely provides authentication by checking usernames and passwords directly against the database (with a little hashing).

+ +

I have a fair amount of experience with ASP.NET WebAPI projects, and the single-sign-on experience you get out of the box. By that I mean I can register, log in, use [Authorize] attributes on Controllers and Endpoints, as well as link Facebook, Google, and other providers with relative ease.

+ +

What I want to do is create another ""provider"", or another way to integrate my on-premises authentication scheme with the awesome SSO you get from the ASP.NET WebAPI template projects. Looking through examples, I get a bit lost with OAuth, claims/challenges, and all that stuff. I assume that when the rubber hits the road, I have to check a username/password against my database; I'm just looking for the most straightforward way to turn this into an OWIN app.Use<MyOnPremisesProvider>(), then have access to WebAPI, and also use the token to figure out which on-premises user is linked to the SSO user (kind of like a Facebook user gets an SSO account).

+ +

Any guidance is greatly appreciated. Thanks!

+",213867,,,,,42452.12778,What's the most straightforward way to integrate my company's custom authentication with ASP.NET SSO?,<.net>,1,0,0,,,CC BY-SA 3.0,, +309108,1,,,2/2/2016 22:50,,2,218,"

When creating a variable foo, Python lets you just write foo = bar. However, many languages, like C# or JavaScript, require additional syntax like var foo = bar or foo := bar to signal the same thing. +Is there a reason why C#/JS require explicitly differentiating a declaration from assignment?

+ +

I can imagine that a parser, upon reading foo = bar, has to decide whether the = means declaration or assignment. But that's easy - just check if foo was declared earlier. Am I missing something here?
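One concrete reason is scoping: without an explicit declaration keyword, the parser cannot tell "create a new local variable" apart from "assign to an enclosing one". A small JavaScript illustration of the distinction the keyword makes possible:

```javascript
let counter = 0;

function withDeclaration() {
  // `let` unambiguously creates a NEW local variable that
  // shadows the outer one.
  let counter = 100;
  counter += 1;
  return counter;
}

function withoutDeclaration() {
  // A bare assignment can only mean "assign to the enclosing
  // binding" -- there is no way to express a fresh local here.
  counter = 50;
  return counter;
}

console.log(withDeclaration());    // 101, outer counter untouched
console.log(counter);              // still 0
console.log(withoutDeclaration()); // 50, outer counter mutated
console.log(counter);              // 50
```

In a language where plain assignment always declared, the second form would be inexpressible (which is exactly why Python needed the global/nonlocal keywords).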

+",211105,,211105,,42402.99028,42403.9125,Is the var token necessary to signal variable declaration?,,4,2,,,,CC BY-SA 3.0,, +309110,1,,,2/2/2016 22:57,,1,350,"

I should probably open this by saying I do mostly Web applications at work, which obviously have some major differences from typical desktop stuff.

+ +

I had a small Windows Forms program I'd made for myself. As I went to update it and add some things, I found that the program and presentation logic were too tightly coupled, and it was unpleasant. First I broke things up by using the MVP pattern and having separate presenters that only interact with the forms through an interface. That was a start, but I still had the issue that when a button press opened a different form, my form had to know which form it was going to call and what dependencies it or its presenter had.

+ +

After pondering it a bit, I came up with and implemented this design: a singleton has a different event for all the different kinds of windows you might want to open along with custom event arguments containing any contextual information these might need. A different class is responsible for registering listeners to these events and using a dependency injection container to retrieve any dependencies, get the presenter, get the window, connect them, and show the window. When I want some action to cause a window to open, instead of constructing a new window directly, I can simply raise the event in question -- meaning I could even go and swap out, for instance, a WPF form and not have to change any of the code that might cause it to launch.

+ +

In a way, this brings things a little bit closer to the ""stateless"" model I'm more familiar with. It seems to work well and I find the code easier to work with. However, after doing some searching I can't seem to find anyone else using this sort of model, which gives me a bit of pause. Are there some drawbacks to this model, or is this a good way to encourage more loose coupling between different forms in a GUI application?

+",114133,,,,,42402.95625,Using events and event subscribers to create windows in a desktop application,,0,7,,,,CC BY-SA 3.0,, +309123,1,309153,,2/3/2016 8:20,,5,2478,"

I've recently had an argument with a colleague about using getters (without setters) in view-model classes used by XAML.

+ +

Example:

+ +
public string FullName
+{
+   get
+   {
+      return $""{FirstName} {LastName}"";
+   }
+}
+//call OnPropertyChanged(""FullName"") when setting FirstName or LastName
+
+ +

His argument was that you should only ever create get/set pairs which use a private field (with the setter raising the PropertyChanged event if the value has indeed been changed).

+ +

Example:

+ +
private string _fullName;
+public string FullName
+{
+   get
+   {
+      return _fullName;
+   }
+   set
+   {
+      if (value != _fullName)
+      {
+         _fullName = value;
+         OnPropertyChanged(""FullName"");
+      }
+   }
+}
+//Set FullName when setting FirstName or LastName
+
+ +

I, on the other hand, think it's fine to use getters (which calculate the output on the fly), provided they aren't updated all the time via too many PropertyChanged calls.

+ +

I don't think my colleague's approach is bad - I just think it's way more busy work which might simply not be needed. However he was sure my approach is bad and should be avoided at all cost.

+ +

As a counter example I pointed out that the MVVM base class from DevExpress has RaisePropertyChanged and RaisePropertiesChanged (for refreshing multiple properties) methods ready for use - both of which wouldn't be needed if all the properties were get/set pairs (the DevExpress base class also exposes a SetProperty method for use in setters, which also includes a PropertyChanged call). His argument was that ""well, people are stubborn, keep doing it wrong, so DevExpress simply added those methods to make sure people are happy"" which I found... odd, to say the least.

+ +

So what's the take here? Is it bad design to use getters in view-models designed to be used by XAML?

+ +

EDIT: +I forgot to mention... my colleague also made the claim that using get/set pairs will always be faster. That does seem to be true as far as updating the UI is concerned, but he also used this as an argument that getters shouldn't ever be used. The whole argument started when I had a bit of a performance issue when populating a data grid with many (i.e. 12k) rows, where my VM used quite a few getters...

+",69801,,69801,,42403.5625,43686.86042,Is using getters in XAML view-models a bad thing?,,3,4,,,,CC BY-SA 3.0,, +309124,1,309126,,2/3/2016 8:21,,4,15049,"
#include <iostream>
+
+class Base {
+
+private:
+    int b_value;
+public:
+    void my_func() {std::cout << ""This is Base's non-virtual my_func()"" << std::endl; }
+
+    virtual void my_Vfunc() {std::cout << ""This is Base's virtual my_Vfunc()"" << std::endl;}
+};
+
+  //----------------------------//
+
+
+class Derived: public Base {
+
+private:
+    int d_value;
+public:
+    void my_func() {std::cout << ""This is Derived's non-virtual my_func()"" << std::endl; }
+
+    virtual void my_Vfunc() {std::cout << ""This is Derived's virtual my_Vfunc()"" << std::endl;}
+};
+
+
+int main(){
+
+Base * base = new Derived;
+base->my_func();
+base->my_Vfunc();
+
+return 0; 
+}
+
+ +

+ +

I was trying to understand the internals of virtual functions. So far I understand that after upcasting the derived class to the base class, calling base->my_Vfunc() still invokes the derived override because of the vtable pointer (Derived::vfptr).

+ +

My question is: how does Base's my_func() get called here? My main confusion is how, during the upcast, the derived class object provides information about the base class's non-virtual function, since it only seems to carry the base class data such as Base::b_value.

+",101222,,101222,,42403.36667,42403.36806,How does the base class non-virtual function get called when derived class object is assigned to base class?,,1,4,2,,,CC BY-SA 3.0,, +309130,1,309132,,2/3/2016 9:56,,3,1917,"

I'm about to build a SQL table where I want to store currency orders. That means that I need to store how much I paid for a certain quantity, and the ratio between both quantities. So for example:

+ +
+-------------+--------------+-------+
+| Quantity US | Quantity EUR | Ratio |
++-------------+--------------+-------+
+|         250 |          200 | 1.25  |
++-------------+--------------+-------+
+
+ +

My question is: does it make sense to have a column storing a value that's always going to be the division between two columns, or is it a better practice to not store that in the DB and instead do the calculations every time I need the ratio?
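Either way, the ratio is cheap to obtain at query time. A small sketch using Python's built-in sqlite3 module (table and column names are hypothetical), computing the ratio on the fly instead of storing it:

```python
import sqlite3

# In-memory DB standing in for the real table.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE orders (quantity_us REAL, quantity_eur REAL)')
conn.execute('INSERT INTO orders VALUES (250, 200)')

# The ratio is derived on the fly, so it can never drift out of
# sync with the two quantities it is computed from.
row = conn.execute(
    'SELECT quantity_us, quantity_eur, quantity_us / quantity_eur AS ratio '
    'FROM orders'
).fetchone()

print(row)  # (250.0, 200.0, 1.25)
```

Storing the ratio as a third physical column would duplicate information that the two quantity columns already determine.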

+",192448,,,,,42403.67292,Keeping a ratio column that's the division between two other columns,,4,4,,,,CC BY-SA 3.0,, +309134,1,309137,,2/3/2016 11:08,,27,18540,"

Take the two code examples:

+ +
if(optional.isPresent()) {
+    //do your thing
+}
+
+if(variable != null) {
+    //do your thing
+}
+
+ +

As far as I can tell the most obvious difference is that the Optional requires creating an additional object.

+ +

However, many people have started rapidly adopting Optionals. What is the advantage of using optionals versus a null check?
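Part of the usual motivation is that Optional is not meant to be isPresent()-checked at all; it lets the absent case be handled declaratively, and the type signature itself advertises that a value may be missing. A small comparison sketch (hypothetical method names):

```java
import java.util.Optional;

public class OptionalDemo {

    // Null-check style: the branch is explicit at every call site,
    // and nothing in the signature warns that s may be null.
    static int lengthOrZero(String s) {
        if (s != null) {
            return s.length();
        }
        return 0;
    }

    // Optional style: the "absent" case is handled declaratively,
    // and the parameter type documents the possible absence.
    static int lengthOrZero(Optional<String> s) {
        return s.map(String::length).orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(lengthOrZero("hello"));          // 5
        System.out.println(lengthOrZero((String) null));    // 0
        System.out.println(lengthOrZero(Optional.empty())); // 0
    }
}
```

Used this way, there is no extra branch to forget; the comparison in the question only looks even because both snippets stop at the check.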

+",213932,,,,,42405.11111,Why is using an optional preferential to null-checking the variable?,,9,4,4,,,CC BY-SA 3.0,, +309138,1,,,2/3/2016 11:24,,2,5405,"

I have ~30 resources, each having ~10 attributes. I want to store some information about each attribute, e.g. its multiplicity, whether it is RW (Read/Write) or RO (Read Only), its longName, and its shortName.

+ +

So I was thinking of storing it in a Enum like:

+ +
public enum Attributes {
+
+    U_RESOURCETYPE(true, ""RW"", ""resourceType"", ""rt""),
+    U_RESOURCEID(false, ""RO"", ""resourceID"",""ri""),
+    //...
+}
+
+ +

But this leads to 300 constants (30 resources * 10 attributes).

+ +

I could also use a config file or a Singleton Enum with a Map as member.

+ +
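The map alternative mentioned above could look roughly like this (names are hypothetical): attribute metadata is keyed by resource and attribute name instead of being spelled out as 300 enum constants, and the map contents could equally be loaded from a config file at startup.

```java
import java.util.HashMap;
import java.util.Map;

public class AttributeRegistry {

    // One value object instead of one enum constant per attribute.
    static final class AttributeInfo {
        final boolean multiple;
        final String access;     // "RW" or "RO"
        final String longName;
        final String shortName;

        AttributeInfo(boolean multiple, String access,
                      String longName, String shortName) {
            this.multiple = multiple;
            this.access = access;
            this.longName = longName;
            this.shortName = shortName;
        }
    }

    // Keyed by "resource.attribute"; these entries mirror the two
    // example enum constants above.
    static final Map<String, AttributeInfo> ATTRIBUTES = new HashMap<>();
    static {
        ATTRIBUTES.put("u.resourceType",
                new AttributeInfo(true, "RW", "resourceType", "rt"));
        ATTRIBUTES.put("u.resourceID",
                new AttributeInfo(false, "RO", "resourceID", "ri"));
    }

    public static void main(String[] args) {
        System.out.println(ATTRIBUTES.get("u.resourceType").shortName); // rt
    }
}
```

The trade-off versus the enum is compile-time safety: map lookups can fail at runtime where enum references fail at compile time.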

What is the best possible way to achieve this?

+",173849,,,user22815,42409.24375,42409.25625,Whether to use enum vs map vs config file?,,2,1,,,,CC BY-SA 3.0,, +309140,1,309146,,2/3/2016 11:29,,-3,291,"

Gradle is such an interesting build tool that it prompted me to look at Spock and JUnit -- which I've never done before. What is the basic workflow with TDD?

+ +

My approach has been to do frequent builds, slightly less frequent clean builds, and to run the application as much as possible. Certainly, there are entire books on TDD, but my most basic question is about the workflow.

+ +

So instead of working in src/main/java, is most of the coding done in the test directory? This, to me, intuitively seems wrong. With version control, why have the duplicate directory structure? It can only lead to discrepancies between src/main and src/test which must be resolved manually.

+ +

Why not just work in one branch, then, when complete, create a branch without the tests?

+ +

What do you do when you want to actually run the application?

+",102335,,,,,42403.53333,What is the most elemental workflow for TDD?,,3,4,,,,CC BY-SA 3.0,, +309143,1,309157,,2/3/2016 11:40,,3,93,"

I am working on a Qt/QML/MySQL app and I want to achieve maximum customization of the software; therefore I've packed the app's settings into an SQLite3 database file. I have several SQLite3 scripts hardcoded into the app. My question is: is it better to have these scripts embedded in the code itself, or should I have physical SQLite3 files located in some settings directory, from which the app loads them at first run?

+",155467,,,,,42403.90139,App db creation at first run,,2,0,,,,CC BY-SA 3.0,, +309147,1,,,2/3/2016 12:43,,7,3176,"

I'm developing an API that performs bulk update of a large number of items in a single call. This code will consist of a REST endpoint and the internal library code that it calls.

+ +

There are a few reasons why the update of a specific item may fail (e.g., the item has been deleted). The question I'm debating is how to return the result to the caller.

+ +

I'm leaning towards returning a collection that contains only the items with failures, but I could see scenarios where it would be helpful to return the result for all of them.

+ +

In either case, the response would contain a collection of objects like this:

+ +
class UpdateResponse
+{
+    int ItemId;
+    ResultType Result;
+}
+
+ +

Where ResultType is an enumeration that describes the result of the update.

+ +
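One way to get the best of both strategies is a response that always carries the totals but itemizes only the failures, so callers get the full picture without a large payload. A rough sketch (written here in Java; names and the failure rule are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BulkUpdateDemo {

    enum ResultType { OK, NOT_FOUND, CONFLICT }

    // Totals give callers the full picture; the detail map stays small
    // because it carries only the failed items.
    static class BulkUpdateResponse {
        final int requested;
        final int succeeded;
        final Map<Integer, ResultType> failures;

        BulkUpdateResponse(int requested, int succeeded,
                           Map<Integer, ResultType> failures) {
            this.requested = requested;
            this.succeeded = succeeded;
            this.failures = failures;
        }
    }

    static BulkUpdateResponse update(List<Integer> ids) {
        Map<Integer, ResultType> failures = new LinkedHashMap<>();
        for (int id : ids) {
            if (id < 0) {                 // stand-in for "item was deleted"
                failures.put(id, ResultType.NOT_FOUND);
            }
        }
        return new BulkUpdateResponse(ids.size(),
                ids.size() - failures.size(), failures);
    }

    public static void main(String[] args) {
        BulkUpdateResponse r = update(List.of(1, -2, 3));
        System.out.println(r.succeeded);   // 2
        System.out.println(r.failures);    // {-2=NOT_FOUND}
    }
}
```

A caller that only cares about failures reads the map; a caller reconciling everything can still infer success for any requested id absent from it.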

Is there any compelling reason to choose one strategy over the other (return all vs. return failures)?

+",213944,,,,,42405.50417,Bulk update: return all results or only failures,,5,2,2,,,CC BY-SA 3.0,, +309151,1,309154,,2/3/2016 13:52,,4,7477,"

My application is a generic enterprise application which can be deployed on any application server running on any OS.

+ +

I don't know how/where to configure my application, except for the database information, which is stored in the application server as a datasource.

+ +

While it's easy to do, I could use the database as a configuration container, but... the database shouldn't contain the configuration elements: I want to be able to use any database in any environment (dev, test, acceptance, production).

+ +

Does Java EE offer any kind of configuration management similar to what is offered to the datasources? If yes, what is it? If not, what is the best practice on this?

+ +

I often read the following advice: put these items in a properties file on the server. Fine, but in that case the location to the properties file becomes a configuration item in itself, so where/how do I define the location of that properties file and transmit that info to my application?

+",176064,,,user40980,42403.65278,42403.65278,How to access environment-specific configuration in an enterprise application?,,1,0,2,,,CC BY-SA 3.0,, +309156,1,309158,,2/3/2016 14:27,,1,498,"

I'm a junior programmer nearing the end of my 6-month probation, following an initial career-changing 3-month assignment in which I added TDD tests, and I'm wondering whether I should add tests to my current work. The senior programmer says I can if I want, but he hasn't; he does, however, want me to document the system whilst I'm learning it.

+ +

The system is a large MVC web service of 90 pages which utilizes a number of technologies, young and old. For instance, the project has been migrated to MVC 5, uses a DAL generator written in-house in VB.net, and has code following the Unit of Work design pattern, amongst others. It has been worked on by many programmers over a number of years, and providing my code works I can do what I want.

+ +

My question really is: should I add TDD, yes or no?

+",212936,,212936,,42403.63611,42403.73403,Should a Junior programmer add TDD tests to mvc project,,2,8,2,,,CC BY-SA 3.0,, +309160,1,,,2/2/2016 12:41,,1,130,"

Another developer and I have been discussing a potential solution to a problem on an unrelated board, regarding the typical getter/setter hate, i.e. the claim that using getters/setters leads to procedural programming.

+ +

I presented the attached solution as a representation of encapsulation and the TellDontAsk principles, which he said was not OO enough, because it presumably:

+ +
    +
  • has methods returning a value of an attribute
  • +
  • has methods mutating attributes
  • +
+ +

This is the code:

+ +
interface AnimalWithCollar
+{
+    /**
+     * Adds new collar to the animal.
+     *
+     * @param Collar $collar
+     *
+     * @throws InvalidArgumentException When the collar you are trying to add already has an owner differentiating from
+     *                                  from the animal you are trying to add the collar to.
+     */
+    public function addCollar(Collar $collar);
+
+    /**
+     * Checks, whether the animal has a collar.
+     *
+     * @return bool
+     */
+    public function hasCollar();
+}
+
+interface ItemForFetching
+{
+    /**
+     * Checks, whether the animal passed as the first parameter can carry the item.
+     *
+     * @param AnimalThatCanFetch $animal
+     *
+     * @return bool
+     */
+    public function animalCanCarryIt(AnimalThatCanFetch $animal);
+}
+
+class Stick implements ItemForFetching
+{
+    /** @var int */
+    protected $_weight;
+
+    /**
+     * @param int $weightOfTheItem
+     */
+    public function __construct($weightOfTheItem)
+    {
+        $this->_weight = $weightOfTheItem;
+    }
+
+    /**
+     * @inheritdoc
+     * @see ItemForFetching::animalCanCarryIt
+     */
+    public function animalCanCarryIt(AnimalThatCanFetch $animal)
+    {
+        return $animal->canCarryAnItemWeighting($this->_weight);
+    }
+}
+
+interface AnimalThatCanFetch extends AnimalWithCollar
+{
+    /**
+     * Fetches the item passed as the first parameter.
+     *
+     * @param ItemForFetching $item
+     *
+     * @throws InvalidArgumentException When the animal is unable to fetch the item, because it is too heavy.
+     */
+    public function fetch(ItemForFetching $item);
+
+    /**
+     * Checks, whether the animal can carry an item of said weight.
+     *
+     * @param int $weight
+     *
+     * @return bool
+     */
+    public function canCarryAnItemWeighting($weight);
+}
+
+class Dog implements AnimalThatCanFetch
+{
+    /** @var Collar[] */
+    protected $_collars = array();
+
+    /**
+     * @inheritdoc
+     * @see AnimalWithCollar::addCollar
+     */
+    public function addCollar(Collar $collar)
+    {
+        // Guard against infinite mutual recursion with Collar::assignOwner().
+        if (in_array($collar, (array) $this->_collars, true))
+        {
+            return;
+        }
+
+        $this->_collars[] = $collar;
+        $collar->assignOwner($this);
+    }
+
+    /**
+     * @inheritdoc
+     * @see AnimalWithCollar::hasCollar
+     */
+    public function hasCollar()
+    {
+        return (count($this->_collars) > 0);
+    }
+
+    /**
+     * @inheritdoc
+     * @see AnimalThatCanFetch::fetch
+     */
+    public function fetch(ItemForFetching $item)
+    {
+        if ($item->animalCanCarryIt($this) == false)
+        {
+            throw new InvalidArgumentException('The item is too heavy. The animal cannot fetch it.');
+        }
+
+        // process the fetching logic
+    }
+
+    /**
+     * @inheritdoc
+     * @see AnimalThatCanFetch::canCarryAnItemWeighting
+     */
+    public function canCarryAnItemWeighting($weight)
+    {
+        return $weight <= 5; // indicating it may carry an item up to 5 kg
+    }
+}
+
+class Collar
+{
+    /** @var int */
+    protected $_diameter;
+
+    /** @var string */
+    protected $_color;
+
+    /** @var AnimalWithCollar */
+    protected $_owner;
+
+    /**
+     * @param int $diameter
+     * @param string $color
+     */
+    public function __construct($diameter, $color)
+    {
+        $this->_diameter = $diameter;
+        $this->_color = $color;
+        $this->_owner = null;
+    }
+
+    /**
+     * @param AnimalWithCollar $newOwner
+     *
+     * @throws InvalidArgumentException When the collar is already owned by a different animal than the one passed.
+     */
+    public function assignOwner(AnimalWithCollar $newOwner)
+    {
+        if ($this->hasOwner() && $this->belongsTo($newOwner) == false)
+        {
+            throw new InvalidArgumentException('The collar may have only one owner.');
+        }
+
+        $this->_owner = $newOwner;
+        $newOwner->addCollar($this);
+    }
+
+    /**
+     * @param AnimalWithCollar $animal
+     * @return bool
+     */
+    public function belongsTo(AnimalWithCollar $animal)
+    {
+        return $this->_owner == $animal;
+    }
+
+    /**
+     * @return bool
+     */
+    public function hasOwner()
+    {
+        return $this->_owner != null;
+    }
+}
+
+ +

The number of interfaces and naming conventions aside, is there a way this design could be even more object-oriented than it already is?

+",,Ondřej Šimon,,,,42403.64861,Does my code still break encapsulation and uses getters/setters instead of the TellDontAsk principle?,,2,2,,,,CC BY-SA 3.0,, +309173,1,,,2/3/2016 16:47,,8,330,"

I am looking for pseudocode logic that would find n equally sized areas in a given polygon. No space should be left between or outside the matched areas. The first valid match of areas should be returned.

+ +

Assuming following polygon [2,2, 3,1, 5,1, 5,4, 4,5, 2,3] as an input:

+ +

+ +

...and 3 as a parameter a valid output could be [ [2,2, 3,2, 3,3, 4,3, 4,5, 2,3], [2,2, 3,1, 5,1, 4,2, 4,3, 3,3, 3,2], [4,5, 4,2, 5,1, 5,4] ]:

+ +

+ +

Another valid output with parameter 3 is [ [3,4, 3,3, 4,3, 4,2, 3,2, 3,1, 2,2, 2,3], [4,3, 4,2, 3,2, 3,1, 5,1, 5,3], [3,4, 3,3, 5,3, 5,4, 4,5] ]:

+ +

+ +

Please note that the areas don't have to share the same center point. One or more areas may happen to fall right between other areas inside the polygon.

+ +

Here is another example of sample input/output.

+ +

Assuming following polygon [1,3, 1,1, 7,1, 7,2, 8,2, 8,3, 5,6, 4,6] as an input:

+ +

+ +

..and 5 as a parameter a valid output could be [ [1,3, 1,1, 3,1, 3,2, 4,3, 3,4, 3,3], [3,2, 3,1, 7,1, 7,2, 6,2, 6,3, 5,3, 5,2], [6,2, 8,2, 8,3, 6,5, 5,5, 5,4, 6,4], [1,3, 3,3, 3,4, 5,5, 6,4, 6,5, 7,5, 6,6, 5,6], [3,4, 4,3, 3,2, 5,2, 5,3, 6,3, 6,4, 5,4, 4,5] ]:

+ +

+ +

Following assumptions are made:

+ +
    +
  • direction of all borders is divisible by 45

  • +
  • integer coordinates are used for all polygons

  • +
  • integer area of input polygon is always divisible by n

  • +
  • all polygons may be either convex or concave ones

  • +
  • solvable, meaning n areas can fit properly into the given polygon

  • +
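The divisibility assumption can be checked up front with the shoelace formula; here is a small Python helper applied to the first example polygon:

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon given as (x, y) pairs."""
    total = 0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

# First example polygon: [2,2, 3,1, 5,1, 5,4, 4,5, 2,3]
poly = [(2, 2), (3, 1), (5, 1), (5, 4), (4, 5), (2, 3)]
area = polygon_area(poly)
print(area)           # 9.0
print(area % 3 == 0)  # True: splittable into 3 equal areas of 3.0 each
```

Any splitting algorithm can use area / n as the exact target size for each piece before searching for cut lines.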
+",155450,,155450,,42404.71944,42405.33264,Generate equally sized areas in polygon,,3,13,2,,,CC BY-SA 3.0,, +309175,1,,,2/3/2016 17:27,,2,40,"

I am launching an iPhone app. Do I need two separate domain names for the app and the landing-page website, or can I use the same name for both? The website will be used solely to advertise the app.

+",213991,,,,,42403.72708,Launching iPhone App - Do I need separate names for the App and the Landing Page/website or can I use the same name?,,0,1,0,42429.95486,,CC BY-SA 3.0,, +309176,1,309194,,2/3/2016 17:31,,0,144,"

I started questioning what I know, or thought that I know, after this question:

+ +

Array.fill differs from literal 2D definition on assignment

+ +

A JavaScript question about defining an array with a predefined length, filling it with values, and then changing one of the values within the second dimension.

+ +

Mutability, in this applied case: the first new Array(9) will be my object, an empty array ([]) with a predefined length of 9, and the first .fill() that follows will be the mutating method call, turning the referenced array into whatever it is filled with while retaining its length. If I try this on a single-dimension array, everything works fine: I reference the main object itself and update its values by index.

+ +

If I take this into the second dimension (the part I mainly don't get), I expect to be filling the initial 9-length empty array with a new single 9-length array filled with 9 zeros, for each index until the last, but apparently that is not what happens.

+ +

Instead, the fill value does not seem to be treated as a new object each time, but just as a single reference, exactly as in this:

+ +
var arr1 = new Array(9).fill(0);
+var arr2 = new Array(9).fill(arr1);
+
+ +

So instead, I'm filling the outer array with nine references to the same inner array. Whether I update its 3rd or 4th index doesn't matter: they all point to the same reference and update arr1's value at the specified index.
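The shared-reference behaviour described above can be demonstrated directly (a minimal sketch):

```javascript
// fill() stores its argument as-is, so every slot of the outer array
// ends up holding the very same inner array object.
const row = new Array(9).fill(0);
const grid = new Array(9).fill(row);

grid[3][0] = 7;   // mutates the single shared inner array
// grid[4][0] is now also 7, and grid[0] === grid[8] is true:
// every index of grid points at the same object.
```

(For what it's worth, Array.from({ length: 9 }, () => new Array(9).fill(0)) runs the callback once per slot, so it produces distinct inner arrays.)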

+ +

What am I missing here that causes references to be passed instead of new objects, as I thought would happen?

+",80559,,-1,,42878.52778,42403.80347,"Mutability, and pass-by-reference, ""new"" object, what am I having left out?",,1,2,,,,CC BY-SA 3.0,, +309178,1,,,2/3/2016 17:53,,4,1437,"

We're developing an application whose data domain (or at least 90% of it) can be modeled effectively using a relational database. We've been using PostgreSQL since the beginning and have had no problems whatsoever. However, now the need arises to store relations (friendships) between users, much like Facebook or Snapchat, and we begin to wonder which of the following two paths is preferable:

+ +
    +
  • Begin by storing friendships in a traditional relationship table in PostgreSQL and be done with it until scalability problems arise (namely the growth in the number of friendships and the infamous ""friend of friend""-type queries).
  • +
  • Start upfront with a graph database (TitanDB + Cassandra) just to be ready for when the need to scale arises, but face a slower startup on development (which includes learning about TitanDB and Cassandra).
  • +
+ +

Our target is ~75M users. We don't really have an idea on what queries we will need to perform on this ""graph""—for now, our only need is to store this information. Could PostgreSQL effectively scale to such numbers? Is it preferable to follow the graph approach upfront?

+",213994,,226181,,42579.54444,43236.71944,Relational vs Graph Database for (initially) moderately-sized network,,3,1,1,,,CC BY-SA 3.0,, +309181,1,,,1/21/2016 15:52,,1,187,"

Most OO guides say not to store things in instance variables if they can be easily calculated, because the state might become inconsistent, and there is more code to maintain. I am trying to come up with a general guideline for deciding this issue.

+ +

For example, if I have a simple object like Rectangle, I could store only the side lengths, or I could also store some easily calculated values like Area and Perimeter, which would be updated any time a side length was changed. The area and perimeter properties (accessors) would be read-only. As a second example, consider more complex computations, such as employee deductions based on pay. Should I use instance variables, or calculate whenever needed? (In all cases I would not store calculated values in a database, so that is not related to this question, unless you mean a data warehouse, which throws all the relational rules out the window anyhow.)

+ +

Is there an overall outlook on how this should be decided? Are there references that actually distinguish different answers, not just make a blanket recommendation? Thank you.

+ +

In case it makes a difference, I'm using C#.
+Also, I am aware of DRY (don't repeat yourself; don't duplicate code), so I am asking about a situation where the calculation appears in only one place in the code: either in the setter for the instance variable that defines it (the sides define the Area), or in a getter for Area itself, with Area not being stored. In both cases Area will not have a setter.

+",,user251748,,,,42403.76875,How to decide what instance variables to have in a class?,,3,1,,,,CC BY-SA 3.0,, +309190,1,309193,,2/3/2016 19:06,,3,247,"

I came across this open source project called fruit. The documentation contains some diagrams of the code. Here is an example from the tutorial page:

+ +

+ +

Is there a standard for this type of diagram or is it just a custom one?

+",22580,,4,,42404.52361,42404.52361,Is there a standard for this type of component diagram?,,1,0,,,,CC BY-SA 3.0,, +309191,1,309202,,2/3/2016 19:14,,3,1066,"

Let's say I have a sort method inside one of my classes, and another class with no relationship to it needs the same method. Instead of writing the method twice and breaking the don't-repeat-yourself (DRY) principle, I decide to extract it into its own class, a Sorter, as though it were an invention created to solve that problem. I've seen this done before, and such classes usually end up containing more than one of these common methods.

+ +

Is this procedural design, because the class would hold a bunch of methods for other classes to use yet not be needed in a model of my solution? Or is it still OOP, because it is an algorithm I am modeling?

+",154143,,31267,,42409.14653,42409.14653,Is turning a method into a class to use it across many classes bad practice?,,2,11,0,42404.02292,,CC BY-SA 3.0,, +309199,1,309251,,2/3/2016 20:47,,4,7355,"

Hello StackExchange community! This is my first post, and I appreciate any help anyone can offer. I'm new to Java, and I'm sure this issue is simply due to my misunderstanding of the fundamentals.

+ +

I have a JavaFX (Java 8) window with a TreeView filled with TreeItems containing a Product object. This object has many fields, one of which is model, of type String. When I add a TreeItem, I want it to not only store the Product object but also display the model String field in the TreeView list; however, I get this instead: com.[company name].Products.Product@[number]. This is the package path to the Product class. The [number] is a random string of what appears to be 8 hexadecimal characters that is unique to each instance.

+ +

TreeItems are added via a class method, and I suspect that's where my problem lies. Here's a list of the proper flow:

+ +
    +
  1. Click the ""Add Product"" button.
  2. +
  3. Open pop up window.
  4. +
  5. Select the appropriate model.
  6. +
  7. Click OK.
  8. +
  9. Product object is added to the TreeView via the makeTreeItem method (see below).
  10. +
  11. New TreeItem should show the model which is a String.
  12. +
+ +

Minimum Working Example:

+ +
import javafx.application.Application;
+import javafx.scene.Scene;
+import javafx.scene.control.TreeItem;
+import javafx.scene.control.TreeView;
+import javafx.scene.layout.BorderPane;
+import javafx.stage.Stage;
+
+public class TreeItemTest extends Application {
+
+    public static void main(String[] args) {
+        launch(args);
+    }
+
+    @Override public void start(Stage primaryStage) {
+        TreeItem<Product> rootTree = new TreeItem<>(new Product(""root""));
+        TreeView<Product> treeView = new TreeView<>(rootTree);
+        treeView.setShowRoot(false);
+
+        BorderPane rootPane = new BorderPane();
+        rootPane.setCenter(treeView);
+
+        makeTreeItem(""New Item 1"", rootTree);
+        makeTreeItem(""New Item 2"", rootTree);
+
+        Scene scene = new Scene(rootPane);
+
+        primaryStage.setScene(scene);
+        primaryStage.setTitle(""Tree Item Test"");
+        primaryStage.show();
+    }
+
+    private TreeItem<Product> makeTreeItem(String title, TreeItem<Product> parent) {
+        TreeItem<Product> newItem = new TreeItem<>(new Product(title));
+        newItem.setExpanded(true);
+        parent.getChildren().add(newItem);
+        return newItem;
+    }
+}
+
+ +

Both ""New Items"" simply show up as Product@######## (recall the hash symbols are hex characters). The Product class:

+ +
public class Product {
+    String model;
+    public Product(String model) {
+        this.model = model;
+    }
+}
+
+ +

I suspect that what I'm looking for is a CellFactory of some sort, similar to the TableView's CellFactory. No matter how hard I look though I can't for the life of me find anything very promising. Help! What am I missing?!

+",214009,,,,,42404.58403,TreeItem containing non-String object displaying strange text,,1,2,1,,,CC BY-SA 3.0,, +309204,1,,,2/3/2016 21:26,,5,261,"

I am a sole developer working for an organisation that has no testing strategy.

+ +

I am trying to integrate a crime system developed by a vendor with an external application I developed. The external application allows an end user to send an instruction to the crime system to delete the crime.

+ +

The crime system was developed by an external organisation and they have written stored procedures to actually execute the deletion, which I can call from my application.

+ +

The crime system is complex i.e. it has a custody element, a crime element, an intelligence element etc.

+ +

Whenever there is an upgrade to the crime system the stored procedures have to be retested and there are always many issues. My approach is as follows:

+ +
    +
  1. Test custody
  2. +
  3. Report custody issue 1
  4. +
  5. New release by vendor
  6. +
  7. Report custody issue n
  8. +
  9. New release by vendor. All tests pass.
  10. +
  11. Test crime
  12. +
  13. Report crime issue n
  14. +
  15. New release by vendor. All tests pass.
  16. +
  17. Report intelligence issue n
  18. +
  19. New release by vendor. All tests pass.
  20. +
  21. Testing complete. Goes live.
  22. +
+ +

The vendor has asked me to do all testing at once in future. In my experience this does not work very well with this vendor, because new issues are often introduced that affect tests still to come, e.g. a custody change affects the crime or intelligence element.

+ +

Should I be testing everything and then submitting all of my findings?

+",65549,,9113,,42404.53194,42405.92986,"Report test failures ""all at once"", or report them one by one",,2,7,,,,CC BY-SA 3.0,, +309214,1,,,2/4/2016 0:48,,1,196,"

This is an iOS app but I will try to make it as general as possible because I think a wide audience could have good feedback.

+ +

I have an application I am making for iOS. It has 4 main tabs in a tab bar controller. There are three main model objects; let's say they are apple, car, and clock. The first tab needs access to all the apples from an API, the second needs all the cars, and the third needs all the clocks. The fourth tab needs access to all three lists of objects.

+ +

I have already implemented the first 3 tabs: when the user goes to the first tab it loads in all the apples, and the same goes for the other 2 tabs. But now, as I am starting the fourth tab, I don't want to load all the objects in again (I could, but it seems wrong).

+ +

I'll go ahead and give what I think might be proper and then hopefully I can get feedback if this is correct or if there is a better solution.

+ +

Proposed solution: a class that contains a list of apples, a list of cars, and a list of clocks. Each tab then gets the list it needs from this class. If a list hasn't been retrieved yet, the class containing the lists calls the API to retrieve it. So I guess an object with 3 singleton lists; is this proper code design?

+ +

So as far as code structure, I would have:

Model:
- Car
- Apple
- Clock
- ListHolder (is there a better way to name this?)

Controllers:
- Network Controller (to call the API)

View Controllers:
- the 4 view controllers

+",143642,,,,,42408.51042,"Is this an appropriate code structure, or is there a better one?",,2,1,0,,,CC BY-SA 3.0,, +309216,1,,,2/4/2016 2:43,,2,148,"

We are a small team working on a LoB system that needs to connect to varied systems such as ERPs and CRMs to extract business-process data like invoices, customer info, production orders and the like, for internal use of the application, and then return some specific result to the ERP. This is a real-time operation, not a one-off data extraction. All the business logic is properly shielded from the data layer.

+ +

The thing is, the data we extract from those systems is always the same, but the systems change a lot depending on the customer, and so do the sources/tables/fields from which the data is extracted. We have seen dozens of ERPs to date, from SAP/Dynamics to a lot of small ones, and every time we do a new installation there is code to be written so that our system knows where and how to extract the data it requires. As I said, most are small or in-house ERP systems, so it ends up being a one-off library that we can't reuse on the next customer.

+ +

We want to improve this, ideally to a setup where, instead of data-access code, there is configuration mapping the ERP structures to our structures for each installation; but just thinking about it, it seems like a lot of work.

+ +

We thought of ETL, but it isn't real-time. REST is also not an option, since it would still require programming for each installation.

+ +

Is there an existing universal database/datasource mapper/translator to solve this kind of problem? Or a standard pattern to use for developing this kind of thing?

+",214035,,1130,,42439.85069,42440.83264,Standard System that connects to any datasource without much/any source-specific code?,,1,5,,,,CC BY-SA 3.0,, +309220,1,,,2/4/2016 4:57,,3,105,"

I want to store an app-specific secret on the phone; it should be available only to the app itself and to no other app.

+ +

I was looking at DPAPI; however, if the user does not have a logon password, then it seems like the master key will be accessible to everyone. Also, I am not clear on what else to pass as the entropy parameter.

+",69796,,,,,42406.36181,Windows Phone 8.1 & above - is there some equivalent for Apple's keychain,,0,0,,,,CC BY-SA 3.0,, +309222,1,309240,,2/4/2016 5:22,,0,1652,"

I have created two lists of objects: one holds records from an XML file and the other holds records from the database.

+ +

The rule is: if a record from the XML already exists in the database, exclude it.

+ +

I have thought of two options:

+ +

The first is to loop over the XML records and, for each record, check whether its id exists in the database.

+ +

The second is to create a list of objects from the XML and a list of objects from the database, then compare the two lists and get the result.

+ +

Which one is more efficient? I'm leaning towards the second option: instead of looping and querying the database for each record to see whether it exists, why not put both sets into lists and compare them using LINQ or an equality comparer?
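A language-agnostic sketch of the second option (shown here in JavaScript, with an assumed { id: ... } record shape): build a lookup of the database ids once, then filter the XML records against it, so the comparison costs O(n + m) instead of one database query per record.

```javascript
// Returns the records present in the XML list but not in the database list,
// matched by id. The dbIds set is built once and reused for every lookup.
function recordsNotInDb(xmlRecords, dbRecords) {
  const dbIds = new Set(dbRecords.map(r => r.id));
  return xmlRecords.filter(r => !dbIds.has(r.id));
}
```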

+",146464,,31260,,42404.38611,42404.49861,Efficient way in comparing two lists,,1,4,,,,CC BY-SA 3.0,, +309225,1,,,2/4/2016 5:44,,3,974,"

I'm currently building a dashboard to view some analytics about the data generated by my company's product. We use MySQL as our database. The SQL queries that generate the analytics from the raw live data can be complicated and take a long time to run, so I put in place batches that run every day or every hour, query the live data, generate the analytics, and store the results in special tables that are queried only by the dashboard. It works well, but the drawback is that the analytics are not real-time.

+ +

So I would like to know what the best practice is for my requirements. I don't need strict real time, just near real time, within one or a few minutes.

+ +

I would like to know if replicating the live data from MySQL to something like hadoop or Elasticsearch would be a good solution.

+",25554,,,,,43570.26736,Best practices for dashboard of near real-time analytics,,1,3,,,,CC BY-SA 3.0,, +309235,1,,,2/4/2016 9:45,,1,244,"

I just developed an algorithm, and in addition to the usual unit tests I wrote a profiling ""test"" that I used to measure and optimize its performance. It is structured like a test (arrange: set up a sizable chunk of data to process; act: run the algorithm) but without the assert stage (it doesn't test anything).

+ +

Now that the algorithm is sufficiently optimized, I'd like to keep the profiling code for future reference; but since it takes several seconds to run without actually testing anything, I don't want it to run every time I run my unit tests.

+ +

Of course I could turn it into a test by asserting the running time is below a certain threshold but that feels artificial.

+ +
    +
  • Should I turn it into a proper test by asserting a certain running time?
  • +
  • Should I check it in as is?
  • +
  • Should I check it in as is but disable it so it won't run automatically?
  • +
  • Should it be in a separate file/assembly or together with the unit tests?
  • +
  • Should I do something else?
  • +
+ +

I hope there are some best practices so this won't be opinion based.

+",136712,,,,,43812.70417,Where do you put your profiling code?,,3,5,2,,,CC BY-SA 3.0,, +309243,1,309244,,2/4/2016 12:32,,0,104,"

What if I have a class X that does the following:

+ +
    +
  • Read a file (within its own class).
  • +
  • Parses the file by calling a Parse class
  • +
  • Processing the parsed file by calling a Process class
  • +
  • Outputting by calling an Output class
  • +
+ +

I would assume the single responsibility is reading the file, but the other functions (parsing, processing, outputting) are also performed by this class (by calling other classes).

+ +

Or what if the reading part were done in a separate class too, and class X only called these 4 classes without doing anything itself (like a manager)? What would its responsibility be then? Is 'managing the reading, parsing, processing, and output of a file' itself a single responsibility?
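The 'manager' variant could be sketched like this (hypothetical names, shown in JavaScript): the coordinator holds no reading, parsing, processing, or output logic of its own; its single responsibility is orchestrating the four collaborators.

```javascript
class FileWorkflow {
  constructor(reader, parser, processor, output) {
    this.reader = reader;
    this.parser = parser;
    this.processor = processor;
    this.output = output;
  }

  // Pure orchestration: every step is delegated to an injected collaborator.
  run(path) {
    const raw = this.reader.read(path);
    const parsed = this.parser.parse(raw);
    const result = this.processor.process(parsed);
    return this.output.write(result);
  }
}
```

Seen this way, a change to the file format touches only the parser, while a change to the sequence of steps touches only the coordinator.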

+",47065,,7422,,42404.52292,42404.52569,What is the responsibility of a class 'calling' other classes as workflow?,,1,0,1,,,CC BY-SA 3.0,, +309247,1,,,2/4/2016 12:59,,2,1729,"

I have a program which includes lots of header files, but it does not use all of them. I removed some of the unused ones and the program still works fine; I did not notice any change in performance. Will this affect code size or anything else? Can I include as many header files as I want without affecting code size and performance?
I mean, if I include all the headers in one separate header file and then include only that header file in all my programs, will it work normally?

+",214102,,,,,42404.58194,Code size overhead by including unnecessarily extra header files,,2,6,,,,CC BY-SA 3.0,, +309253,1,,,2/4/2016 14:02,,4,2148,"

I'm just getting used to unit testing and on my new project I decided to adopt a TDD approach.

+ +

My code below is to test the UserServices class which is responsible for creating a user, deleting a user, checking if a user exists etc.

+ +

Am I on the right lines? I ask because everywhere I look, mocking is discussed, and I can't help but think I should be mocking Predis (a package used for interacting with Redis in PHP). But mocking it would be difficult, as my class expects an instance of the Predis client.

+ +

In the setup I create a dummy user to perform actions on. I may need to create more than one, but at this stage I just use the one. Although, there are methods such as createUser that would need to create an entirely new user.

+ +
<?php
+
+namespace VoteMySnap\Tests;
+
+class UserServicesTest extends \PHPUnit_Framework_TestCase
+{
+    const HOST = '127.0.0.1';
+    const PORT = 6379;
+    const AUTH = NULL;
+
+    public $predis;
+
+    public function setUp()
+    {
+        $this->predis = new \Predis\Client([
+            'host' => self::HOST,
+            'port' => self::PORT
+        ]);
+
+        /*
+         * Create a dummy user
+         */
+        $dummyUser = [
+            'user:1' => [
+                'snapchat_username' => 'james16',
+                'password' => 'somesecret',
+                'email' => 'test@example.com'
+            ]
+        ];
+
+       /*
+        * Insert user into Redis
+        */ 
+        $this->predis->hmSet(key($dummyUser), $dummyUser[key($dummyUser)]);
+    }
+
+    public function tearDown()
+    {
+        /*
+         * Remove the dummy user
+         */
+        $this->predis->del('user:1'); // $dummyUser from setUp() is not in scope here
+    }  
+}
+
+ +

Am I on the right track here?

+",201181,,,,,42410.54306,Unit Testing and 3rd party packages. Do I mock or not?,,3,4,,,,CC BY-SA 3.0,, +309254,1,,,2/4/2016 14:19,,-1,194,"

Does passing the entire object as an argument, rather than just a property of it, affect performance in JavaScript? For example:

+ +
<input type=""button"" onclick=""getDetails(this)""/>
+
+ +

vs

+ +
<input type=""button"" onclick=""getDetails(this.sourceIndex)""/>
+
+ +

Even if I pass the sourceIndex property of the this object, I would still retrieve the entire element from it in my getDetails() function. I need advice on what exactly the effect will be in this particular case.

+",214113,,214113,,42404.60972,42404.6375,Javascript: Effect of passing entire object vs a property of the object as argument on performance,,1,4,,,,CC BY-SA 3.0,, +309260,1,,,2/4/2016 15:28,,4,139,"

I've been trying to find a good way of solving the following problem, but I'm not sure how to even frame it. I think there might be relatively well-known solutions I'm not familiar with, since I don't have much knowledge of algorithms. How could I approach this?

+

Example problem:

+

There's a known begin-state A and a known end-state B. There is a known sequence of (a large number of) points that leads from A to B. I would like to find a minimum/small number of points that will describe the path from A to B. For any sequence of points I can test whether it reaches B, but if it doesn't, I don't have an estimate of how close it came to B. I'm basically trying to find the points that are essential for getting to B, but I won't know whether they are sufficient until the path is complete.

+

What I found so far:

+

The problem appears a bit similar to polyline simplification, and one option would be the Ramer-Douglas-Peucker algorithm. I don't think it will work well for my problem, though, because I don't necessarily want to follow the non-essential points on the path (which may be outliers or unnecessary detours).

+

The solution I came up with myself sounds a bit like greedy & binary search:

+
    +
  1. Pick middle point C between A and B and discard all points between A and C.
  2. +
  3. Check whether this path reaches B.
  4. +
  5. If B cannot be reached, there is an essential point between A and C that we missed, so pick the middle point D between A and C and test the path with the points between A and D discarded.
    +If B can be reached, we might be able to discard point C and some subsequent points, so pick D between C and B and discard the points up to D.
  6. +
  7. Do this until you have identified the point D furthest from A that still leads to B when all the points between A and D are discarded.
  8. +
  9. Start this search over, now starting from D instead of A until all essential points have been identified
  10. +
+
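The elimination loop sketched in the steps above might look like this (a hypothetical sketch in JavaScript; reaches is the assumed black-box predicate that tests whether a candidate point sequence still leads from A to B). The inner loop shrinks the skip linearly; a true binary search is only safe if reachability is monotonic in the skip length, which the problem statement does not guarantee.

```javascript
// Greedy elimination: repeatedly find the furthest point D such that the
// path [points kept so far] + [D, ...rest] still reaches B, then mark D
// essential and continue the search from D.
function simplifyPath(points, reaches) {
  const kept = [points[0]]; // the start point A is always essential
  let i = 0;                // index of the last point confirmed essential
  while (i < points.length - 1) {
    let j = points.length - 1; // try the largest possible skip first
    while (j > i + 1 && !reaches(kept.concat(points.slice(j)))) {
      j--; // shrink the skip until the remaining path works again
    }
    kept.push(points[j]);
    i = j;
  }
  return kept;
}
```

On a toy model where a path is valid when consecutive steps differ by at most 2, the sequence 0..6 reduces to the four essential points 0, 2, 4, 6.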

Other thoughts:

+

I might be able to give a "similarity" estimate comparing an end-state B1, reached from some point C, with the targeted end-state B. Would that open up a wider variety of applicable algorithms?

+",101152,,-1,user40980,43998.41736,42519.86944,Find minimum number of steps to a goal without estimator for how close intermediate steps are to the goal,,2,1,,,,CC BY-SA 3.0,, +309269,1,,,2/4/2016 17:02,,4,624,"

Our web app generates a large amount of logs. These logs include both events regarding background operations in the app (data arrives from the server, ajax failures, inter-component communication, etc.); and also user initiated actions (user clicked a button, user wrote text, etc.).

+ +

We built our own logging library with different adapters (print to console, send to server, etc.); and currently send all the logs to our server for persistence. These logs are used to analyze the behaviour and flow of the app, monitor errors, client-side exceptions, etc.

+ +

We now have a new requirement of tracking user behaviour in the app, and we consider 2 approaches:

+ +
    +
  1. Enrich our current in-code logs (which are sent to the server) and log every user action we need to track, then use ETL jobs to collect and analyze the data using some third-party service (Omniture, Kibana, etc.).
  2. +
  3. Integrate a third party service with its own JS library (Omniture, Google Analytics) and adapt our code to use that service (manually sending events from JS, HTML tagging, etc.).
  4. +
+ +

The first approach keeps our code base cleaner and with less duplication (only one logging mechanism).

+ +

The second approach involves modifying the app's code to send all the events we want to track to the analytics service, in addition to logging them with our own logging service. But it allows the analytics service to gather additional data which we don't need to implement ourselves (geotracking, browser and OS versions, etc.).

+ +

What approach should I take so the code can meet both logging requirements without unnecessary code duplication and complexity?

+",58206,,,user22815,42404.72778,42405.06111,Should we reuse web app logs for user behaviour analytics?,,3,0,0,,,CC BY-SA 3.0,, +309270,1,309282,,2/4/2016 17:07,,-1,947,"

I'm looking at a new project to be developed in .Net, and I'd like to do it the right way.

+ +

I'd like to create a solution with 3 parts: a front office and a back office, both using the third part as the model.

+ +

In my mind, all DB-connection-related information should be stored in the front office and back office, but then how should I handle DB requests in my model library? I can't bring myself to pass a SqlConnection parameter to every function requiring DB access.

+ +

So, does my whole solution approach need correcting? What's the best way to manage DB connections for my model classes?

+",214134,,165795,,42404.7375,42404.84931,.Net Project architecture and DB connection,,3,2,,,,CC BY-SA 3.0,, +309272,1,309445,,2/4/2016 17:36,,4,363,"

My question:

+ +

When you've got a complex converter that takes chunks of large result sets out of a database and in the end converts them into a line-by-line file/resource, should one design it as a std::basic_streambuf, as a std::basic_istream, or as something different, in C++?

+ +

To better describe it look at the diagram below or use this text data flow:

+ +
A(ODB) -[maybe many rows]-> COSTLY CONVERSION  -[many text lines]->  ????  --> std::istream
+                                                                     ????  --> std::ofstream
+                                                                     ????  --> std::ostream
+
+ +

Note: the data volume may be large, from MB up to GB, so the data could be converted lazily in chunks from the database, which will lead to chunks of text lines. My assumed approach for ???? at the moment was std::basic_streambuf.

+ +

My assumption:

+ +

AFAIK the intended way to move data from a std::istream source into a std::ostream sink, like std::ofstream, is to just use the << operator with the source's std::streambuf argument, like

+ +

sink << source.rdbuf();

+ +

Background

+ +

I'm designing a C++ converter module that takes custom data (i.e. measurement data), in this example out of a database mddb via the C++ library odb, and converts it into another format, called df here. Now comes the important part: it makes that converted data available to at least a std::istream and also a std::fstream. This is because the std::istream can be used for an HTTP download via Wt's Wt::WStreamResource interface, and the file stream, obviously, to just store the data as a file.

+ +

+",192462,,192462,,42405.66319,42406.65208,Which C++ IO interfaces for a complex data source i.e. converter,,1,7,0,,,CC BY-SA 3.0,, +309277,1,311592,,2/4/2016 18:12,,2,355,"

I'm doing some research for a project in which I will need to create a service which can handle millions of requests per minute. Clearly I want to use an asynchronous programming model to make the best use of threads and resources. But memory usage and speed are the top priorities.

+ +

Looking at the original APM (Asynchronous Programming Model), I'm wondering if it might be more efficient or faster than the newer TAP (Task-based Asynchronous Pattern).

+ +

My thinking here is that maybe it's better to register a callback function than to ""spin up"" a ton of Task<T> instances. A task clearly has to carry a little bit of overhead, right?

+",1566,,1566,,42404.76528,42431.62917,Would the APM be faster or more efficient than TAP?,<.net>,1,4,,,,CC BY-SA 3.0,, +309290,1,,,2/4/2016 20:45,,3,2664,"

I find myself right now banging my head with the following issue (in PHP):

+ +

I have an abstract base class, which has a non-abstract method, inherited and unchanged all over the inheritance chain (which is 3-tiered right now, only the first tier is abstract).

+ +

This non-abstract method contains logic to calculate a disk path, which is done by taking into account the class' name. Both children and grandchildren adopt this method/algorithm with no modifications.

+ +

However, I desire to implement functionality so that if a Grandchild does not have its ""own"" specific path on the disk, it will attempt to use its parent's path. Only the parent can calculate this path (only the parent knows its own name) - I don't want to use reflection to store the parent's name into a variable, and I would also like this to work homogeneously - if the inheritance chain is longer, it should try to go up the inheritance chain to ask any of its ancestors for a ""place"" (the aforementioned path).

+ +

A parent::Method() call in PHP (as in other OOP languages) will try to call the parent of the current class, not the parent of the class of the object at run-time.

+ +

In PHP, one can do this through Reflection, but I am more or less persuaded that this breaks some principle of OOP which I am not aware of.

+ +
//Abstract class BaseSomething...
+if(!$this->Calculation()) {
+  $parentClass = (new \ReflectionObject($this))->getParentClass();
+  if(!$parentClass->isAbstract()) {
+    $method = $parentClass->getMethod('Calculation');
+    $method->setAccessible(true);
+    $this->result= $method->invoke($this, $arg ...);
+  }
+}
+
+ +

This works, does the job. It allows for a pseudo-recursion of sorts, stopping at the first ancestor who can provide us with what we need, but something must be more or less wrong along the way.

+ +

A possible solution I am thinking of is a constructor chain, in which each class adds its name to an array. This would require forcing each child to call its parent's constructor and also add its own name to the array, which is not pretty either.

+ +

Maybe the issue lies in what I am trying to do - using inheritance in a way more encompassing than what it was ever intended to do.

+ +

I would like to hear some thoughts on how this can be refactored in such a way as to not break rules, or perhaps some would consider than given my use case, using such foundation-redefining code is ""permissible"".

+ +

Edit: I apologize for letting this question drift away for more than a month and shirking my responsibility of adding examples and answering questions. It was, partly, just base slothfulness.

+ +

Continuing in the same vein as the original post, this is roughly how the three tiers look (simplified as much as possible, almost identical to above):

+ +
abstract class BaseWidget extends \Illuminate\Routing\Controller {
+    private function getPath($file) {
+         $name = get_called_class(); // resolves to the run-time (most derived) class name
+         $path = $some_base_path . '/' . $name . '/' . $file;
+         if(file_exists($path)) {
+             return $path;
+         } else {
+             $refl = new \ReflectionObject($this);
+             $parentRefl = $refl->getParentClass();
+             $methodRefl = $parentRefl->getMethod('getPath');
+             $methodRefl->setAccessible(true);
+             $path = $methodRefl->invoke($this, $file);
+             return $path;
+         }
+    }   
+}
+
+class Widget extends BaseWidget {
+     //This guy resides in App/Widgets/Widget/Widget.php,
+     //and has a view in Widget/Views/View.blade.php
+     //When $widget->getPath('Views/View.blade.php') is called,
+     //it will know how to 'find itself' and where to look for its own
+     //view         
+}
+
+class SpecialisedWidget extends Widget {
+     //This guy resides in ...Widgets\SpecialisedWidget.php
+     //Its 'views' folder is empty, so when it has to find it,
+     //it will instead borrow it from its parent. If this class
+     //also had a child with no view, it would go up to Widget on the
+     //inheritance chain.
+}
+
+ +

I recognise that I am using inheritance to do something which inheritance is not supposed to do. However, the functionality which I had to implement - A Widget borrowing views from its parent, which are relative to the parent's location on disk - whereof the child is unaware - necessitated that I cheat just a little bit.
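For what it's worth, the "ask each ancestor in turn" idea can also be written as a plain loop over the inheritance chain instead of a reflective call into the parent's method; in PHP the chain is available via class_parents(). Here is the same idea sketched in Python, since the mechanism is language-agnostic (the class and folder names are invented for illustration):

```python
import os

class BaseWidget:
    """Sketch: each concrete subclass keeps its views in a folder named
    after the class, under a common base path."""
    base_path = "widgets"  # assumed root folder; adjust to taste

    def get_path(self, file):
        # Walk the run-time class and then its ancestors (Python's MRO),
        # returning the first candidate path that actually exists on disk.
        for cls in type(self).__mro__:
            if cls in (BaseWidget, object):
                break  # the abstract root has no views of its own
            candidate = os.path.join(self.base_path, cls.__name__, file)
            if os.path.exists(candidate):
                return candidate
        return None  # no ancestor provides this file

class Widget(BaseWidget):
    pass

class SpecialisedWidget(Widget):
    # has no views folder of its own, so lookups fall back to Widget's
    pass
```

Because the loop inspects the run-time type rather than calling parent::, it stops at the first ancestor that has the file and terminates cleanly when none does, without any risk of re-entering the same method body.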

+",214155,,214155,,42461.64028,42751.18056,Child class accessing its parent's method from Ancestor method,,3,5,,,,CC BY-SA 3.0,, +309295,1,,,2/4/2016 22:28,,2,754,"

Back in the days of 8-bit machines there existed an educational word game. I have no idea what it was actually called but for the sake of this question (since it involves pyramids) let's call it Stones of Giza.

+ +

The game was simple. Starting at the top level, a letter was added to a single letter word and another word was formed. A letter was then added to that word and so on down, each time making a word. The levels went from 3 to 7.

+ +

An example of level 3 might be.

+ +
  A
+ A T
+B A T
+
+Letters: AAABTT
+
+ +

Like the 80s game mastermind, you scored for guessing a letter at a level correctly but more for guessing the position correctly.

+ +

It isn't hard to see that the only candidates for the top stone are A, I and O - since these are the only single letter words in the English language.

+ +

My question is around algorithms for sourcing game boards. My brute force algorithm currently searches for length N words with one of the 3 letters above and then checks the N-1 length word on the left/right hand side containing the pinnacle letter (A, I or O).

+ +

This strikes me as slightly inefficient since there are going to be many false searches.

+ +

How can I improve on this raw algorithm? Higher levels are taking quite a long time on a good sized dictionary.

+ +

I do of course realise that the 8-bit game probably wasn't parsing the data each time and had a set number of boards but I'm just trying to optimise creating the source data out of curiosity.

+ +

EDIT #1:

+ +

I have already filtered out words in the source dictionary that don't contain A, I or O.
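One way to avoid the false searches is to work bottom-up with dynamic programming: index words by length, represent each word by its sorted letter multiset, and extend only chains that are already known to be reachable from A, I or O. A rough Python sketch, assuming letters may be rearranged between levels, as the A/AT/BAT example suggests (if positions had to be preserved apart from one insertion, the drop-one-letter check would become a substring-with-one-gap check):

```python
from collections import defaultdict

def build_pyramids(words, seeds=("a", "i", "o"), max_len=3):
    """Return every chain seed -> ... -> word of length max_len where each
    step adds exactly one letter to the previous word's letters."""
    by_len = defaultdict(set)
    for w in words:
        by_len[len(w)].add(w.lower())

    # chains maps a sorted-letter multiset to all chains ending in it
    chains = {(s,): [[s]] for s in seeds if s in by_len[1]}
    for n in range(2, max_len + 1):
        nxt = defaultdict(list)
        for w in by_len[n]:
            key = tuple(sorted(w))
            tried = set()
            for i in range(n):
                sub = key[:i] + key[i + 1:]  # drop one letter
                if sub in tried:
                    continue  # repeated letters give duplicate subsets
                tried.add(sub)
                for chain in chains.get(sub, []):
                    nxt[key].append(chain + [w])
        chains = nxt  # only reachable multisets survive to the next level
    return [c for cs in chains.values() for c in cs]
```

Each dictionary word is touched once per level, and unreachable branches are pruned immediately, so no time is spent probing N-letter words whose (N-1)-letter reductions were never valid.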

+",73508,,73508,,42405.49861,42525.58958,Word game algorithm,,1,12,1,,,CC BY-SA 3.0,, +309301,1,310001,,2/5/2016 1:17,,3,1319,"

I have a collection of cooperative classes whose behaviors are interdependent upon one another. But I wish to keep them loosely coupled, so I've created appropriate interfaces.

+ +

I want to determine an appropriate pattern to instantiate specific implementations of these objects.

+ +

Here's an outline of their interdependencies:

+ +
    +
  • IService : IDisposable: listens for messages; exposes a Listen method which: + +
      +
    • calls IMessageClient.GetNextMessage iteratively
    • +
    • invokes a (delegate which creates a?) new IMessageHandler instance in a new thread for each message
    • +
    • up to NumberOfThreads concurrent threads
    • +
  • +
  • IServiceMonitor<IService>: monitors the service: + +
      +
    • exposes Start method which invokes the IService.Listen()
    • +
    • exposes Stop method which disposes IService
    • +
    • exposes Pause and Resume methods which respectively zero or reset the IService.NumberOfThreads
    • +
    • calls CreateRemoteConfigClient() to get a client every 90 seconds, then IRemoteConfigClient.GetConfig
    • +
    • notifies any configuration changes to IMessageClient, IService, and any subsequent IMessageHandler
    • +
  • +
  • IMessageClient : IDisposable; exposes GetNextMessage which: + +
      +
    • long polls a message queue for the next request
    • +
  • +
  • IMessageHandler : IDisposable; exposes HandleMessage which: + +
      +
    • does something with the message, requesting on the way further IXyzClients from the IFactory to access other services
    • +
  • +
  • IRemoteConfigClient : IDisposable; exposes GetConfig which: + +
      +
    • retrieves any remote overrides of the current configuration state
    • +
  • +
+ +

This has led me to create:

+ +
    +
  • IFactory; with the following members: + +
      +
    • CreateMonitor: returns a new IServiceMonitor<IService>
    • +
    • GetService: returns the IService created to accompany the most recent IServiceMonitor, or a new IService + +
        +
      • NB: a Service should be able to be obtained without a Monitor having been created
      • +
    • +
    • CreateMessageClient: returns a new IMessageClient
    • +
    • Either: + +
        +
      • CreateMessageHandler: returns a new IMessageHandler
      • +
      • MessageHandlerDelegate: creates a new IMessageHandler and invokes HandleMessage
      • +
    • +
    • CreateRemoteConfigClient: returns a new IRemoteConfigClient
    • +
  • +
+ +

Implementations of the core interfaces accept the IFactory in their constructors. +This is so that:

+ +
    +
  • IService can call CreateMessageClient() to get a single IMessageClient which it will Dispose when it's done
  • +
  • IServiceMonitor can call GetService() to allow it to coordinate and monitor the IService
  • +
  • IMessageHandler can report its progress back via IMessageClient
  • +
+ +

IFactory, of course, started out ostensibly as an implementation of the Factory pattern, then it began to lean more towards a Builder pattern, but in reality none of those feel right. +I'm Create-ing some objects, Get-ting others, and certain things, like the fact that a subsequent call to CreateMonitor will modify the result of GetService, just feel wrong.

+ +

What's the right naming convention for a class which co-ordinates all these others, and IS there an actual pattern that can be followed, am I over-engineering, or am I over-analyzing?!

+",214183,,,,,42579.42847,Pattern to use (if any) to co-ordinate loosely coupled classes with strong interdependencies,,3,3,1,,,CC BY-SA 3.0,, +309319,1,309662,,2/5/2016 8:28,,3,540,"

I'm migrating my monolithic web application to a microservice based one. I'm going to use Spring cloud and I've got a discovery service where all the rest of the services are registered. A simplified schema of the architecture is drawn here:

+ +

+ +

Both Equipment and Task services have their own RESTful HTTP API. They take care of which operations a user can perform based on roles, which are stored in an OAuth2 token delivered by a proper OAuth2 Authorization server.

+ +

Now let's talk about the clients. For the mobile native app, it will grab a token from the Authorization Server for the user. Then, core services are accessed directly with that token. If the end user tries to perform an operation he has not been granted, it will be rejected, as security is implemented at the API method level.

+ +

For browser access, we could have gone with AngularJS and accessed the core REST API directly, but we're a bit inexperienced with it and we've chosen to implement a third service which houses the JSF framework. The views will be rendered by JSF, so the UI service has to talk to the core services in JSON and parse it to Java (that's a breeze using the Jackson library).

+ +

The decision to make the UI service monolithic is that if we want to store some data in the server session, we won't need session clustering to share it amongst multiple UI services. If we want to do horizontal scaling, we can always stick to the session-in-one-instance pattern.

+ +

I've even been told not to implement the UI service as a separate service at all, but to integrate it into the core services. As everything is implemented in Java, that way I would not have to take care of JSON parsing, but it might couple the UI and the core tightly.

+ +

Any suggestions about it?

+",97167,,,,,42409.49444,"Configuring a microservice landscape, should the view be monolithic or be attached to core services?",,2,0,,,,CC BY-SA 3.0,, +309322,1,309332,,2/5/2016 9:36,,2,1328,"

I found this code example explaining Open / Closed principle.

+ +

Code before application of principle:

+ +
public class Logger
+{
+    public void Log(string message, LogType logType)
+    {
+        switch (logType)
+        {
+            case LogType.Console:
+                Console.WriteLine(message);
+                break;
+
+            case LogType.File:
+                // Code to send message to printer
+                break;
+        }
+    }
+}
+
+public enum LogType
+{
+    Console,
+    File
+}
+
+ +

And refactored code:

+ +
public class Logger
+{
+    IMessageLogger _messageLogger;
+
+    public Logger(IMessageLogger messageLogger)
+    {
+        _messageLogger = messageLogger;
+    }
+
+    public void Log(string message)
+    {
+        _messageLogger.Log(message);
+    }
+}
+
+public interface IMessageLogger
+{
+    void Log(string message);
+}    
+
+public class ConsoleLogger : IMessageLogger
+{
+    public void Log(string message)
+    {
+        Console.WriteLine(message);
+    }
+}
+
+public class PrinterLogger : IMessageLogger
+{
+    public void Log(string message)
+    {
+        // Code to send message to printer
+    }
+}
+
+ +

Can you explain me the reason to still keep Logger class with private IMessageLogger instance? I would simply avoid it by:

+ +
public interface ILogger
+{
+    void Log(string message);
+}
+
+public class ConsoleLogger : ILogger
+{
+    public void Log(string message)
+    {
+        Console.WriteLine(message);
+    }
+}    
+
+public class PrinterLogger : ILogger
+{
+    public void Log(string message)
+    {
+        // Code to send message to printer
+    }
+}
+
+ +

The only reason I can think of is that in the suggested solution with the Logger class, we could still refer to this class in client code; but even then we would need to modify all Log(msg) calls to remove the LogType arguments.

+",193282,,213,,42405.88889,42405.88889,Open / Closed Principle,,1,7,,,,CC BY-SA 3.0,, +309323,1,,,2/5/2016 9:37,,0,80,"

Let's say I have a very long method which basically creates a responsive and resizable layout for a user interface by using a few fixed values and a few variable ones taken from an element's coordinates and width/height sizes. To obtain the desired layout, my code needs to perform basic but repetitive math operations (additions, subtractions, multiplications and divisions) to get every element's coordinates and size.

+ +

My code is written in Objective-C.

+ +

Let's also say that I want my code to be extremely fast, because it will run on low-performance devices, such as mobile devices.

+ +

Since I want to get high performance, does making a very precise declaration for the variable have a performance impact? For example, the compiler will know if the value is a const or unsigned, and optimize the code right along.

+ +

For example, which of these cases is faster than the others?

+ +
    +
  1. const unsigned int kInteger = 20;
  2. +
  3. const int kInteger = 20;
  4. +
  5. int integer = 20;
  6. +
  7. unsigned int integer = 20;
  8. +
+ +

In other words, does helping the compiler really have a performance impact? Or is this just a waste of time since it's all just micro-optimization?

+",199340,,97259,,42445.95972,42445.95972,Does variable type specification lead to any performance difference?,,1,1,0,,,CC BY-SA 3.0,, +309324,1,,,2/5/2016 9:38,,0,119,"

Scenario:
+the application will search almost all columns across about 8 tables/views in the database for a defined string and return all rows containing it (as a collection of objects or JSON). At the beginning the database (DB2 v6) will contain about 100k records.

+ +

Which approach is better:

+ +
    +
  • Loading these tables and mapping them to objects in application memory, searching using these objects.
  • +
  • Mapping tables from database to embed database in application(JAVA DB in this case), searching using embed database.
  • +
+ +

The first one is fast, but the downside is that the application needs memory (a first prototype went up to 0.5 GB); the second one is slower and is also heavy on memory.

+ +

Has anyone tried something similar, or can anyone point to a better solution? Please share it.

+",129630,,129630,,42405.91944,42435.95208,Database searching for string across different tables,,1,3,,,,CC BY-SA 3.0,, +309326,1,309817,,2/5/2016 9:53,,1,82,"

One of our assignments is to write a website which should use a database. I would like to have some help organizing it. Here are characteristics of our work.

+
    +
  • The assignment is for a group of 5 people.
  • +
  • We have access to a server, where each person of the lecture has an account and a database (MySQL). (Some haven't seen mysql until a week ago)
  • +
  • There is no git installed on that server. (We have little to no experience with git)
  • +
  • We can't access that server from our university (I could ask if this could be changed but I am afraid it won't be on time)
  • +
+

What are we doing:

+
    +
  • We have divided the task between the members:

    +

One does the login and data input, another a user profile, another a different type of user profile; one has designed the database and another the program that uses the users' information

    +
  • +
  • We have set up a github repository

    +

    We try to synchronize the work using the repository, and then from there to the server (in my user folder)

    +
  • +
+

Problems we face:

+
    +
  • Recently we have discovered that it is possible to edit other users' files directly. So we could make changes directly on the server (if we don't work from our university)
  • +
  • There is little cohesion between us, so there is a lack of understanding of what other members are doing or what each one should do.
  • +
  • Now that we are trying a beta of the website, we have found that the parts are not well coordinated with each other.
  • +
  • We need to learn along the way: some hadn't seen MySQL until a week ago, and we are learning PHP and HTML from tutorials with little support
  • +
+

Before doing any change to the organization or the way we work, I would like to know how we could improve our work system.

+",204570,,-1,,43998.41736,42537.44792,Organizing effectively the project,,1,1,,,,CC BY-SA 3.0,, +309331,1,309375,,2/5/2016 11:14,,2,880,"

Edit: I have found a closely related question: StackOverflow

+ +

This question is not about the differences between functional and acceptance tests! Almost all the info I could find on the web just explains the difference between them. I know that functional tests (FT's) address fringe conditions and bug scenarios, whereas Acceptance Tests (AT's) address business requirements. I achieve both of these with SpecFlow.

+ +

I am having some trouble wrapping my head around the separation of the two, from a project structure/hierarchy perspective. Currently, I have one unit-test project with an AcceptanceTests folder, and a FunctionalTests folder. All my step definitions are jumbled together in a StepDefinitions folder.

+ +

I find I'm having to repeat myself a lot and that the MsTest pane just mixes everything together when I group on Traits. I want to establish what the industry standard is out there, so I have five questions:

+ +
    +
  • Do I repeat the ""In order to... as a... I want to..."" story in a separate feature-file for AT's and FT's?

  • +
  • Do I repeat all the AT's scenarios in the FT's as well, or only fringe condition scenarios?

  • +
  • Should I keep the AT's and the FT's in their own namespaces and/or their own projects?

  • +
  • Should I try to call the FT's scenario step methods from the AT's step definitions, seeing as the grunt-work is being done by the FT's anyway?

  • +
  • Any advice about my current setup (e.g. is it overkill to do both?) is welcome.

  • +
+",75122,,-1,,42878.48125,42486.65139,Side-by-side Functional and Acceptance Testing (SpecFlow),,1,0,1,,,CC BY-SA 3.0,, +309337,1,,,2/5/2016 13:18,,2,238,"

Abstract Syntax Tree Metamodel (ASTM) is an OMG standard to represent ASTs.

+ +

In my very partial and limited understanding (I only spent an hour or two glancing at that spec), it notably defines an XML representation for some kind of ASTs, and claims to be some kind of universal representation.

+ +

However, after having glanced into the ASTM spec (which I feel is implicitly focused on Java, and perhaps C, and maybe C++), I don't understand how it can be used for other languages like Scheme, Ocaml, Haskell, Scala, Clojure...?

+ +

Could anyone give me several examples of ASTM in XML?

+ +
    +
  • what is a possible XML representation of the ASTM for a very minimal hello-world program in Java or in C? I would be delighted with a concrete example.... (both the tiny C or Java source file, and the corresponding ASTM XML file)

  • +
  • what would be a possible XML representation of the ASTM for some small program in Scheme or Ocaml? My feeling is that it often would be impossible (e.g. because some syntactic constructs like let-bindings or pattern clauses are not even mentioned in ASTM standard)

  • +
  • it seems that C++11 lambda-s and probably Java8 lambda-s cannot be represented in ASTM.... If that is possible, how?

  • +
+ +

I am very probably misunderstanding the whole point of ASTM.

+ +

addenda:

+ +

Javier Luis Cánovas Izquierdo mentioned me some XML examples of ASTM:

+ +

https://github.com/jlcanovas/gra2mol/tree/master/examples/Grammar2Model.examples.Java2ASTMModel
+https://github.com/jlcanovas/gra2mol/tree/master/examples/Grammar2Model.examples.PLSQL2ASTMModel

+",40065,,40065,,42408.62222,43501.87292,"OMG ASTM and weird languages (like Scheme, Ocaml, Haskell)",,1,2,,,,CC BY-SA 3.0,, +309339,1,309388,,2/5/2016 13:35,,6,2627,"

The article on the Single-Writer Principle of the Mechanical Sympathy blog explains how bad queues are (performance-wise), because they need to be able to receive messages from multiple producers, and how instead we should be using only single Consumer-Producer pairs in our systems, like their ""Disruptor"" does.

+ +

But I fail to understand how to exactly implement that.

+ +

For example: Assume some service (e.g. Facebook or Twitter) that tracks data objects (e.g. people or messages). Now assume you need to insert a new data object that somehow affects other data objects (e.g. a new user signs up and other users need to be asked if he's a friend of theirs, or a new message is published and subscribers need to be notified of it).

+ +

How does one implement that without a queue of some sort, considering that new data objects are coming in from all directions across all sorts of clients (i.e. producers)? You can't exactly run the signup service using just one thread on one server, and expect clients to keep retrying till the signup succeeds, right?

+ +

One user on the comments of the mentioned article asks exactly that, and the response is to have the producers just publish their results, and then one additional process that collects them from those individual producers, aggregates them, and then republishes them, so that they are now being published by only ""one"" producer.

+ +

Isn't that just a queue in disguise too? Walking all those producers is going to take time and effort too, right? Why would this implementation be preferable to having the producers synchronize on trying to write into some proper queue in the first place?
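As I understand the article's comments, the point of the aggregation step is that each per-producer buffer has exactly one writer and exactly one reader, so no two threads ever contend on the same write cursor; only the aggregator writes the shared output. A toy Python sketch of that fan-in shape (names are invented; real implementations such as the Disruptor use pre-allocated ring buffers and memory barriers rather than Python containers):

```python
from collections import deque

class SingleWriterAggregator:
    """Each producer owns a private buffer (one writer, one reader), and a
    single aggregator is the only writer to the shared log."""
    def __init__(self, n_producers):
        self.buffers = [deque() for _ in range(n_producers)]
        self.log = []  # written by exactly one thread: the aggregator

    def produce(self, producer_id, item):
        # producer_id's thread is the sole writer to its own buffer
        self.buffers[producer_id].append(item)

    def drain_once(self):
        # sole reader of every buffer, sole writer of the shared log;
        # in practice this would run in the aggregator thread's loop
        for buf in self.buffers:
            while buf:
                self.log.append(buf.popleft())
```

Walking the buffers does cost something, but each buffer is uncontended, so the cost is a predictable sequential scan instead of the cache-line ping-pong of many writers CAS-ing on one queue's tail.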

+",201095,,,,,42405.88542,Understanding the Single-Writer Principle,,2,4,1,,,CC BY-SA 3.0,, +309345,1,309398,,2/5/2016 14:10,,1,359,"

I got something here that bogs my mind a bit.

+ +

Let's say I write me this API (in TS), check out some of these properties:

+ +
export class MyAPI{
+        propertyThatShouldContainSuffix:Array<string>; // like .jpg or .mp3 
+        somethingElses:Array<SomethingElse>; //instances of some class
+        enumProperty:SomeEnum; // enum SomeEnum{a,b,c,d}
+        constructor(object){
+           /*
+             this object is input by the API consumer,
+             and its properties will be assigned to the new fields of
+             the new instance
+                              */
+        }
+     }
+
+ +

Valid usage example:

+ +
var myApi = new MyAPI({
+   propertyThatShouldContainSuffix : [""img.jpg"",""video.mp4"" ...],
+   somethingElses : [new SomethingElse(/*yada yada*/),new SomethingElse(/* whateverrr*/) ...],
+   enumProperty:2
+});
+
+ +

Input that may cause problems:

+ +
var myApi = new MyAPI({
+       propertyThatShouldContainSuffix : [""img"",""video"",5 ...],
+       somethingElses : [new SomethingElse(/*yada yada*/),new SomethingTotallyElse(/* whateverrr*/) ...],
+       enumProperty:6
+    });
+
+ +

As you can see, the first property is an array of strings that need to have a suffix, like an image, that should be .jpg or .png or whatever. There is an array of objects that should contain some fields, and finally an enum field, let's say that it ranges from 0 to 3.

+ +

Now, it all works fine and stuff when you input the expected values into it (e.g all strings in first array has the right suffix and so on).

+ +

But then I thought that I should handle bad input, like a user that will send all his image names without any suffix, or will give me a ""9"" as input for the enum, send objects instead of arrays, and so on.

+ +

BUT! And here's the problem: how far should I go with this? Should I check that every property is correct (e.g. that what is supposed to be an array really is an array, that everything ""supposed to be suffixed"" is suffixed, that all ""somethingelses"" contain all the correct fields)?

+ +

Because if I do, this is a whole mess of overhead on every creation of an instance of MyAPI object.

+ +

Or should I only do something really basic, like checking that he didn't misspell some field in the object (thereby exposing helpless users to the perils of ""but why isn't this working? stupid stupid API!"")?

+ +

Or anything in between?
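One possible middle ground is to validate only the cheap, high-payoff invariants up front (types, suffixes, enum range) and report every problem at once instead of failing on the first. A hypothetical sketch, in Python for brevity, with invented field names standing in for the TypeScript properties:

```python
VALID_SUFFIXES = (".jpg", ".png", ".mp4")  # illustrative whitelist

def validate_config(cfg, enum_max=3):
    """Collect all problems with the consumer-supplied object.
    An empty list means the input looks sane; the constructor can raise
    with the whole list, giving the user actionable messages."""
    problems = []
    files = cfg.get("files")
    if not isinstance(files, list):
        problems.append("'files' must be a list")
    else:
        for f in files:
            if not isinstance(f, str) or not f.endswith(VALID_SUFFIXES):
                problems.append(f"bad file entry: {f!r}")
    mode = cfg.get("mode")
    if not isinstance(mode, int) or not 0 <= mode <= enum_max:
        problems.append(f"'mode' must be an int in 0..{enum_max}")
    return problems
```

These checks are O(n) over the input and run once at construction, so the overhead is negligible next to whatever the API does afterwards; deep semantic checks can still be deferred to the point of use.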

+",212794,,31260,,42405.67222,42405.99097,How far should I validate user input in my own created API?,,3,4,,,,CC BY-SA 3.0,, +309350,1,,,2/5/2016 15:39,,6,14534,"

Source code analyzers (like SonarQube) complain about float (or double) equality comparisons because equality is a tricky thing with floats; the values compared can be the results of computations which often have minute rounding effects, so that 0.3 - 0.2 == 0.1 often returns false while mathematically it should always return true (as tested with Python 2.7). So this complaint makes perfect sense to warn about potentially dangerous code.

+ +

A typical approach for such situations is to check for a margin, an epsilon, which should compensate all rounding effects, e. g.

+ +
if abs(a - b) < epsilon then …
+
+ +

On the other hand one can often see code which avoids a division-by-zero problem by checking the divisor for equality with zero before the division takes place:

+ +
if divisor == 0.0 then
+    // do some special handling like skipping the list element,
+    // return 0.0 or whatever seems appropriate, depending on context
+else
+    result = dividend / divisor
+endif
+
+ +

This seems to handle the div-by-zero issue but is not compliant with the source code analyzer who still complains about the spot divisor == 0.0. On first sight it looks like a problem with the analyzer. It seems like a false positive. Float-equality checks for 0.0 should be allowed, shouldn't they?

+ +

After some consideration I thought about the case that the divisor was the result of a computation which should have resulted in 0.0 (like 0.3 - 0.2 - 0.1) and which now was something in the range of 1e-17 or 0.00000000000000001.

+ +

There are two approaches for this now:

+ +
    +
  1. The value is not exactly 0.0, hence the division can take place, the resulting value will be a ""normal"" floating point number (probably; consider 1e200 / 1e-200 which is inf). Let it happen, the caller has to take care of the results.
  2. +
+ +

or

+ +
    +
  1. The value should have been 0.0, it logically is in this case, the computer just doesn't notice it, so whatever special handling of the zero case was intended should take place here as well.
  2. +
+ +

If we vote for the second option, we could use the epsilon approach and be fine. But that would treat true non-zero values which are just very small like zero-ish values. We have no way of distinguishing the two cases.

+ +

This leads to the next consideration whether such a true non-zero value which is very close to 0.0 nevertheless should be divided by or whether it should be handled like the zero case (i. e. receive the special handling). After all, dividing by such a small value will result in very large values which will often be problematic (in graphs or similar). This is surely up to the context and cannot be answered in general.

+ +

I also considered whether the existence of zero(-ish) values in the input was maybe not the root of the problem but just an effect in itself, i. e. maybe the root of the trouble lay deeper: Maybe an algorithm which expects a float and which is supposed to divide by it should never receive values which can become zero(-ish) in the first place.

+ +

I can think of use cases with integers where one may need to check for them being zero before dividing (e. g. an index whose difference to a reference index is used as divisor, when both become the same in some iteration, the difference is 0), but I couldn't think of a good example where a float value could become zero-ish. Maybe if such a thing occurred, it was just a logical error?

+ +

So, now my questions are:

+ +
    +
  1. Is there a theory about the topic of float-zero-checks to avoid division-by-zero problems addressing my considerations? I found nothing on the Internet about it yet.
  2. +
  3. Can someone provide a reasonable example of a context and an algorithm therein which is supposed to expect float values which can become zero and by which it should divide? And depending on that context which solution (epsilon, pure == 0.0-check, maybe a different approach) would you prefer there?
  4. +
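To make the two options concrete, here is a small Python sketch of the epsilon approach, where the tolerance is an explicit, context-dependent parameter and the "zero-ish" case is handed back to the caller (the function name and default tolerance are only illustrative):

```python
import math

def safe_divide(dividend, divisor, abs_tol=1e-12):
    """Treat divisors that are zero 'for all practical purposes' as zero.
    abs_tol is context-dependent: it should exceed the accumulated
    rounding error of whatever computation produced the divisor.
    Note rel_tol is useless against 0.0, so abs_tol does the work here."""
    if math.isclose(divisor, 0.0, abs_tol=abs_tol):
        return None  # caller decides: skip the element, substitute, raise...
    return dividend / divisor
```

With this, the motivating case behaves as intended: 0.3 - 0.2 - 0.1 evaluates to roughly -2.8e-17, which a plain == 0.0 check would miss but the tolerance catches, while genuinely small-but-meaningful divisors can be admitted by shrinking abs_tol for that context.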
+",54677,,54677,,42405.65625,42405.96389,Avoiding Division by Zero Using Float Comparison,,4,12,1,,,CC BY-SA 3.0,, +309361,1,309369,,2/5/2016 16:43,,8,2455,"

Some articles (JavaScript Module Pattern In Depth, Mastering The Module Pattern) describe defining modules in JavaScript like in the snippet below (from Addy Osmani's ""Learning JavaScript Design Patterns""):

+ +
var testModule = (function () {
+    var counter = 0;
+    return {
+        incrementCounter: function () {
+            return counter++;
+        },
+        resetCounter: function () {
+           console.log( ""counter value prior to reset: "" + counter );
+           counter = 0;
+        }
+    };
+})();
+
+ +

Module usage:

+ +
testModule.incrementCounter();
+testModule.resetCounter(); 
+
+ +

In this case we are using a single instance in the entire code, and it means that this module implementation is useful only if we want to create a singleton.

+ +

Is it true, or are there other use cases where this Module pattern variation can be used?
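The distinction hinges on the immediate invocation: kept as a plain factory function instead of an IIFE, the very same closure trick yields independent instances rather than one shared singleton. The idea is language-agnostic; here it is sketched in Python, with the counter example translated (names are illustrative):

```python
def make_counter_module():
    # private state captured by the closure, like `var counter` in the IIFE
    state = {"count": 0}

    def increment():
        state["count"] += 1
        return state["count"]

    def reset():
        state["count"] = 0

    # the returned dict is the module's public interface;
    # `state` itself stays inaccessible from outside
    return {"increment": increment, "reset": reset}

# Invoking once at load time reproduces the singleton from the question
test_module = make_counter_module()
```

Each call to make_counter_module() creates a fresh private state, so exposing the factory (rather than its single invocation) is the non-singleton use of the pattern.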

+",154659,,89095,,42405.70347,42405.74931,Is Module Pattern in JavaScript useful only for singleton creation?,,2,1,1,,,CC BY-SA 3.0,, +309377,1,309381,,2/5/2016 19:30,,29,2976,"

To simplify the interface, is it better to just not have the getBalance() method? Passing 0 to charge(float c) will give the same result:

+ +
public class Client {
+    private float bal;
+    float getBalance() { return bal; }
+    float charge(float c) {
+        bal -= c;
+        return bal;
+    }
+}
+
+ +

Maybe make a note in javadoc? Or, just leave it to the class user to figure out how to get the balance?

+",207788,,31260,,42405.86181,42407.54444,"Better to have 2 methods with clear meaning, or just 1 dual use method?",,6,3,6,,,CC BY-SA 3.0,, +309383,1,,,2/5/2016 19:57,,2,591,"

We use TFS and Visual Studio 2015 at work but don't get full benefit from the ALM features as we have code in non-TFS Git repos.

+ +

We would like to integrate these products in with all the TFS goodies like checkin-to-work item linking, in-IDE pull requests, etc.

+ +

Is there any way to access these features in a non-TFS-hosted Git repo?

+",24628,,24628,,42405.83958,42415.81181,Pull requests and work item linking in Visual Studio 2015 with non-TFS remote Git repo,,2,0,,,,CC BY-SA 3.0,, +309384,1,,,2/5/2016 20:19,,1,94,"

I would like to ask a follow-up to a question I just asked: +Better to have 2 methods with clear meaning, or just 1 dual use method? I now understand why it is best to separate charge(float c); and getBalance();.

+ +

But, every method has the opportunity to return a value. So, maybe use that opportunity and just return a boolean that means that there was no trouble with executing the method?

+ +
public class Client {
+    private float bal;
+    float getBalance() { return bal; }
+
+    boolean charge(float c) {
+        if(bal < 0) return false; // some weird, illegal, state
+        bal -=c;
+        return true;
+    }
+}
+
+ +

Returning booleans for success / failure enables an (optional) extra layer for exception handling.

+ +

I mean, if you are not interested in the returned boolean then just ignore it. I don't know, I just think why not just return ""something""?

+",207788,,-1,,42837.31319,42407.66111,Use the chance to return booleans after method calls for an optional layer of exception handling?,,1,3,,42414.81944,,CC BY-SA 3.0,, +309389,1,,,2/5/2016 21:10,,31,1978,"

The C preprocessor is attached to C, but it has a completely different syntax from the main language:

+ +
    +
  • syntactically significant whitespace (end of line terminates a statement, gap after the macro determines the start of the replacement list)

  • +
  • keyword-based blocks instead of braced blocks, elif instead of else if

  • +
  • keyword-led definitions instead of declaration-reflects-use, no = for value definition

  • +
  • hints of an alternative string syntax (#include <> vs #include """")

  • +
  • lazy evaluation (of C, obviously; but 6.10.3.1 can be read as implying a specific order of macro expansion as well, in the few places that matters)

  • +
+ +

It really doesn't look like C at all! Technically it is its own language, but it's always been used as a nearly-integral part of C and it seems very strange that it wouldn't integrate with it syntactically.

+ +

Wikipedia doesn't talk about its history; the Portland Pattern Repository gives it a passing mention, but doesn't go into detail beyond the fact that it was designed by other people than the rest of C. Dennis Ritchie's website with the history of C probably had the answer, but is unfortunately no longer available.

+ +

As a macro engine, it obviously has very different semantics from the runtime language, which would explain some differences, but not the visual design aspects (it's also unclear to modern eyes whether it was originally intended as being capable of the kind of fun that its replacement system allows, or whether it was ""just"" an expedient way to inline functions in a time before powerful optimizers). It feels like something closer to what eventually became C++ templates would have been a more logical evolution towards macros, if C-like semantics had actually been the starting point, but there's less concrete evidence of this than there is for the syntax.

+ +

Do we have any record of why it was designed this way, or what the creators' influences were?

+",98794,,-1,,42838.52639,43693.08056,What is the origin of the C Preprocessor?,,2,3,6,,,CC BY-SA 3.0,, +309396,1,,,2/5/2016 22:40,,1,263,"

I am developing a suite of applications that will be hosted in Azure. Some apps will be for customer use (not public though, i.e. invited/paid-up customers) and some for internal employees.

+ +

For authentication I was initially going to use something like the excellent ThinkTecture IdentityServer. This would allow me to define claims to partition different user types (i.e. a 'HasAccessToAppX' claim or 'IsCustomer' claim). However, to save having to develop/maintain and deploy another application, I was considering using Azure Active Directory (NOT the Azure AD B2C which seems to be targeted at public facing consumers/social logins etc). The problem I have is that I want to have single sign-on and claims-based authentication, but I cannot figure out if it's possible to assign custom claims for user sets.

+ +

It appears there are group management facilities, but they seem to cost a fortune!

+ +

Am I looking at the right technology here to achieve this? Or is my approach fundamentally flawed?

+",98869,,98869,,42405.95833,42405.95833,Azure Active Directory,,0,5,,,,CC BY-SA 3.0,, +309402,1,,,2/5/2016 23:39,,0,1580,"

Say you have a document like this:

+ +
<names>
+  <first>Joe</first>
+  <last>Smoe</last>
+  <phonenumbers>
+     <phonenumber type=""home"">123-567-9876</phonenumber>
+     <phonenumber type=""cell"">345-678-1234</phonenumber>
+  </phonenumbers>
+  <emails>
+     <email>abc@dfg.com</email>
+     <email>xyz@lkj.com</email>
+  </emails>
+</names>
+
+ +

I am sort of stumped on the best table format when flattening this into a table. In my case, it is a table for a Hive database.

+ +

I don't want 4 records for ""Joe Smoe"". So, I could have fields like ""phone1"", ""phone2"", ""email1"", ""email2"".

+ +

But this table actually is a Hive table, so we do have ""arrays"" as data types. So, I could have this:

+ +
+last|first|phones                     |emails
+----+-----+---------------------------+-------------------------
+Smoe|Joe  |[123-567-9876,345-678-1234]|[abc@dfg.com,xyz@lkj.com]
+
+ +

But then how do I save the type of each phone in the table? Maybe with a ""map""?

+ +
+last|first|phones                                     |emails
+----+-----+-------------------------------------------+-------------
+Joe |Smoe |[{""number"":""123-567-9876"", ""type""=""home""}] |
+
+ +

How would you flatten this type of XML (e.g. collections in collections) into a flat table?
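For what it's worth, here is a rough sketch of the one-row-per-person shape, using Python's xml.etree as a stand-in for whatever actually feeds the Hive table (single-quoted XML attributes are used only to keep the snippet simple):

```python
import xml.etree.ElementTree as ET

doc = '''
<names>
  <first>Joe</first>
  <last>Smoe</last>
  <phonenumbers>
    <phonenumber type='home'>123-567-9876</phonenumber>
    <phonenumber type='cell'>345-678-1234</phonenumber>
  </phonenumbers>
  <emails>
    <email>abc@dfg.com</email>
    <email>xyz@lkj.com</email>
  </emails>
</names>
'''

root = ET.fromstring(doc)
row = {
    'first': root.findtext('first'),
    'last': root.findtext('last'),
    # map column: phone type -> number
    'phones': {p.get('type'): p.text for p in root.iter('phonenumber')},
    # array column
    'emails': [e.text for e in root.iter('email')],
}
print(row['phones']['home'])  # 123-567-9876
```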

+",127616,,127616,,42405.99583,42436.04097,Best way to flatten an XML document,,1,3,1,,,CC BY-SA 3.0,, +309405,1,309408,,2/6/2016 0:05,,5,1067,"

What is the point of hidden files? In Microsoft Windows they exist, in Mac OS X they exist and in Linux they exist. It seems to me that it just makes detecting malware more difficult. The only upside I see is protection of necessary OS files.

+",214308,,,,,42406.05972,What's the point of hidden files?,,3,2,4,,,CC BY-SA 3.0,, +309417,1,,,2/6/2016 6:06,,1,181,"

I want to prevent installing apps from third-party app stores, except for one app store. This specific app store is under my control. I can change its code to send a message to the mobile device that this store is special.

+ +

I can disable the Unknown sources setting in the security settings using the Android API, according to this Stack Overflow question. But if I do that, all the third-party app stores will be blocked, including my app store.

+ +

Any idea on how to achieve this requirement using the Android API would be much appreciated.

+",82757,,-1,,42878.52778,42406.28056,How to turn off third party app store and side loading except my app store,,0,2,,,,CC BY-SA 3.0,, +309419,1,,,2/6/2016 6:34,,3,576,"

We know that Charles Babbage designed the first Turing-complete mechanical machine - the Analytical Engine - in the 1800s, but it was never actually built (not yet anyway).

+ +

In recent history, at least one mechanical Turing machine has been built. (This example is in fact a Universal Turing Machine, albeit with finite storage space.) But was this the first one, or are there earlier examples?

+ +

What was the first mechanical Turing-complete machine constructed, who built it and when?

+ +
+ +

Edit: By mechanical, I mean no electronics are used.

+",76435,,76435,,42409.45,42409.45,What was the first mechanical Turing-complete machine ever constructed?,,0,3,,,,CC BY-SA 3.0,, +309426,1,,,2/6/2016 11:01,,1,430,"

We have a simple REST webapp which is dependent on multiple external services, mostly Kafka messages. An attempt was made to isolate external dependencies by encapsulating all external interactions in a separate webapp, and making the core app communicate with the external interfaces app only through an internal kafka topic.

+ +
    ----------                                   ---------------
+   | core app |  <---Internal kafka topic --->  | external      | <--> external kafka topics 
+   |          |                                 | interfaces app| 
+    ----------                                   ---------------
+
+ +

Now we are slowly coming across more and more requirements where we would need to make synchronous calls to external systems, some REST, some SOAP. Would adding these kinds of requests through the external interfaces app and reading the results back via an internal kafka topic scale well? +What are other strategies we can use here to decouple external dependencies?

+",97214,,317903,,43591.42917,43591.42917,Decoupling external dependencies,,2,0,,,,CC BY-SA 3.0,, +309428,1,309491,,2/6/2016 11:29,,3,655,"

I see a lot of NodeJS articles recommending the Bluebird library for promisifying your code and avoiding callback spaghetti.

+ +

Is there any value in using such a library when using Node 4.2.4+ given that ES6 has native Promises? What can I do with Bluebird that I can't do with ES6 promises?

+ +

The Bluebird documentation is sparse and only really helps if you already ""get"" promises. Do I really need another library which might confuse the issue and introduce bugs?

+",12678,,12678,,42406.48056,42407.23472,Is there any value in using a Promises library versus ES6 Promises?,,1,3,,,,CC BY-SA 3.0,, +309435,1,309444,,2/6/2016 13:34,,2,343,"

ISO 12207 contains interesting points for design verification:

+ +
+

a) The design is correct and consistent with and traceable to + requirements.

+ +

c) Selected design can be derived from requirements.

+
+ +

EDIT +Also under Core verification, it reads:

+ +
+

c) Selected code can be derived from design or requirements.

+
+ +

What is the difference between those?

+",197285,,197285,,42406.65694,43844.32014,"""Selected design can be derived from requirements"" - meaning and difference against traceability?",,1,1,1,,,CC BY-SA 3.0,, +309438,1,309471,,2/6/2016 14:08,,45,11836,"

I have been having this idea of using encryption to prevent users from figuring out content in my program outside of the program itself. Like users may find textures never used in the game meant to be part of some kind of Easter egg while going though the games data. This may e.g. ruin it for everybody if posted online.

+ +

Imagine a secret room where the player has to press the correct numbers on a security door in the game which, if correct, generate the correct decryption key, decrypting that part of the level and opening the door. This makes the Easter egg otherwise inaccessible even when looking through the game data, since the key isn't actually stored; it's generated based on user input.

+ +

Here is another example of what I was imagining. I have a puzzle game with, let's say, 20 levels, each one encrypted using a different key. Instead of storing the decryption key with the program directly (which would allow someone to decompile the program and find it), I generate the encryption/decryption key based on the solution of the previous puzzle. This way the player would have to actually figure out the puzzle before getting any information about the next level, even when looking through the game-data.

+ +

The player, if knowledgeable, could possibly brute-force it ""easily"", given that the number of puzzle solutions is probably smaller than the number of decryption keys. It is really a matter of the complexity of the puzzle and is not very important here. Though I did post an answer regarding it over here

+ +

Are there programs/games today that have done something like this? Storing encrypted content in their games? And if not, why not? Are there a lot of rules and regulations about it, either at the store or country level? Does anyone see any obvious pitfalls that I'm missing? Ignoring things like user experience, the idea seems sound to me, and it makes me curious why I haven't seen this before.

+ +

Edit: It may not be clear exactly what I'm saying, so here is a more concrete example.

+ +

Let's say I have a function that takes in a string of 20 characters and generates a symmetric key which I can use to encrypt/decrypt some content in the game. The only way the user could get to that content is to know those 20 characters and generate the same key. This key is never directly stored and is generated on the fly based on user input. These characters would be hidden in the game in what could be books, dialog with NPCs, maybe even outside of the game, on the back of the box.

+ +

So with 2*10^28 possible combinations to try, it would probably be more likely that people find the content in the way intended rather than by looking through the game data.
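A minimal sketch of the scheme in Python; the XOR cipher and the hypothetical 20-character solution are for illustration only, and a real game would use an authenticated cipher such as AES-GCM:

```python
import hashlib

def key_from_solution(solution):
    # derive a 32-byte key from the player-entered characters;
    # only the correct input reproduces the key
    return hashlib.sha256(solution.encode()).digest()

def xor_stream(data, key):
    # toy reversible cipher, for illustration only
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_level = b'secret room geometry...'
key = key_from_solution('twenty-char-solution')   # hypothetical solution
blob = xor_stream(secret_level, key)              # what ships on disk

# at runtime, the player's input regenerates the key
assert xor_stream(blob, key_from_solution('twenty-char-solution')) == secret_level
assert xor_stream(blob, key_from_solution('wrong-guess')) != secret_level
```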

+ +

Edit 2: +The content in question would be encrypted with an arbitrary and secret key before being shipped to the consumer. This key will obviously not be shipped with the game. The player would have to somehow piece the key back together from a series of clues based on the key that are hidden throughout the game or somewhere else. This system would, however, be transparent to the user, as you wouldn't know the content was encrypted unless you actually looked through the game data.

+ +

As many have mentioned, this has one obvious downside in that its use case is limited. Once a single person has figured it out, he/she may share it with everybody else, if not the key/solution then the content itself. However, if your intention is to keep something so secret that a single person shouldn't be able to solve it and people have to work together to solve it, or you are afraid that your easter egg is so well hidden (by design) that it is more likely someone will find it in the code rather than through game play, then I think this could work great.

+ +

I would personally recommend using it maybe once per game and only for things that do not affect core game-play, e.g. easter eggs or a secret ending. Any puzzle would have to be complicated or well hidden enough to slow people down to make encrypting the content worth it, and if such a puzzle stood in the way of people progressing, then nobody would probably be having any fun.

+",205514,,-1,,42838.5125,43399.22639,Encrypted content in games,,13,18,13,,,CC BY-SA 3.0,, +309442,1,,,2/6/2016 14:37,,4,2487,"

Can a single SQL Server connection be shared among tasks executing in parallel? Each task would dequeue an item from a ParallelQueue, instantiate a SqlClient command, and insert a row into the table. Or does each command-instantiating-task require its own dedicated connection to the database? Can the connection(s) be instantiated and opened at the top of the method and then closed when the parallel tasks have been completed, or should each task instantiate, open, and close its connection?

+",149085,,,,,42407.99306,can a single SQL Server connection be shared among tasks executed in parallel,<.net>,2,0,,,,CC BY-SA 3.0,, +309446,1,309461,,2/6/2016 16:00,,1,92,"

I ask my question on programmers because:

+
    +
  • no coding is really involved
  • +
  • this question is very conceptual concerning an exchange protocol.
  • +
+

Bit of context:

+

When you implement Bluetooth, like I do, you have to choose between multiple protocols, such as L2CAP and RFCOMM.

+

RFCOMM relies on L2CAP protocol.

+

Question:

+

Are there devices/applications out there that can be RFCOMM only (and only this, not supporting L2CAP exchanges)?

+",214363,,-1,,43998.41736,42406.94236,Bluetooth - is rfcomm totally bypassable as a protocol?,,1,4,1,,,CC BY-SA 3.0,, +309449,1,,,2/6/2016 17:22,,3,1870,"

I'm getting beyond a simple MVVM program now, and I'd just like to sense-check my current architecture and make sure I'm going down the right path here.

+ +

Everything is structured around screens, and these screens have a kind of ""tree"" breakdown. On the top level there's a home screen, and you can drill down through related screens.

+ +

I've made a simplified example, an application that tracks employees for companies. It has three ""screens"".

+ +
    +
  • CompanyOverviewScreen, lets you see all the companies you have. Click to go to the below:
  • +
  • EmployeeTrackerScreen, lets you see all the employees in this company and their location. Click to go to the below:
  • +
EmployeeScreen, lets you see information about an employee.
  • +
+ +

So the screens open up like a tree:

+ +

+ +

Question 1: Where should these screen view models be created and stored? Should they be created and stored in one place above everything, or should the EmployeeTrackerScreen ViewModel create the EmployeeScreen ViewModel and store it as a collection property?

+ +

I'm also using shared ViewModels that don't have views of their own for the data and UI properties in these screens where a collection of items is displayed. For example, the EmployeeTrackerScreenView has an ItemsControl that binds directly to an ObservableCollection stored as a property in the EmployeeTrackerScreenViewModel.

+ +

Here's a quick sketch of how it's structured:

+ +

+So the EmployeeTrackerScreenView looks like this:

+ +
<UserControl x:Class=""Project.EmployeeTrackerScreenView"">
+    <DockPanel>
+        <StackPanel DockPanel.Dock=""Top"">
+            <Label Content=""{Binding CorporationViewModel.Name}"" />
+            <Label Content="" has "" />
+            <Label Content=""{Binding CorporationViewModel.NumberOfEmployees}"" />
+        </StackPanel>
+        <ScrollViewer>
+            <ItemsControl ItemsSource=""{Binding ObsvCollectionOfEmployeeViewModels}"" >
+                <ItemsControl.ItemTemplate>
+                    <DataTemplate>
+                        <StackPanel Orientation=""Horizontal"">
+                            <Label Content=""{Binding EmployeeModel.Name}""/>
+                            <Label Content=""is""/>
+                            <Label Content=""{Binding EmployeeModel.Location}""/>
+                        </StackPanel>
+                    </DataTemplate>
+                </ItemsControl.ItemTemplate>
+            </ItemsControl>
+        </ScrollViewer>
+    </DockPanel>
+</UserControl>
+
+ +

Question 2: Is this in line with MVVM patterns? Am I going about it the right way?

+ +

I'd love some comments on how I'm structuring things, perhaps some advice on whether I'm going about it in a sensible way. I hope the question provides enough information and is appropriate. I'm new to this kind of software design, I've always made smaller apps in the past, so I'm inexperienced in proper architecture.

+ +

I've been using Caliburn Micro MVVM framework, in WPF if that's useful to know.

+",214367,,214367,,42406.73125,42865.04028,"Multi-screen MVVM architecture/design - Should my ""screen"" ViewModels contain sub-ViewModels?",,4,0,2,,,CC BY-SA 3.0,, +309452,1,309458,,2/6/2016 17:54,,24,9819,"

The Redux framework favors immutable state/pure function paradigm, which promotes creation of new state from the previous state in terms of the current action. The applicability of this paradigm is indubitable.

+ +

A major concern of mine is that, as the Redux reducers eagerly return fresh new states from previous states for each and every action invoked, massive memory drain (not to be confused with memory leaks) would become a common occurrence in many real-world applications. Considering that JavaScript applications normally run in a browser on an average user's device, which could also be running several other device-specific applications and several more browser tabs and windows, the necessity to conserve memory becomes ever more evident.

+ +

Has anyone actually compared the memory consumption of a Redux application to the traditional Flux architecture? If so, could they share their findings?

+",212116,,212116,,42409.67431,42409.67431,Redux memory consumption,,1,3,12,42416.44931,,CC BY-SA 3.0,, +309465,1,309467,,2/6/2016 20:43,,-2,646,"

What are some reasons you may choose an algorithm with a worse runtime? And what are some advantages of a linked list vs. a hash table/array, where access is constant time?

+",211836,,211836,,42406.91319,42406.92222,Considerations when making data structure and algorithm choices,,1,4,1,42407.45069,,CC BY-SA 3.0,, +309468,1,,,2/6/2016 22:12,,0,18,"

Today I had the idea to write a program that compares the prices between different web-stores. The key question I am asking myself is how to find (with a high probability) the same article on another web-store again. For example:

+ +

Three different websites to search on are hard-coded into the code: A, B and C. Every article on each website has a picture, headline and a description.

+ +
    +
  1. The program accesses a random article on website A.
  2. +
  3. Now the task is to go to web-stores B and C and try to find the same article there, using the picture, headline and description from website A.
  4. +
+ +

What I am asking now is what possibilities do I have to identify two articles as the same when I only have two different texts and a picture? What I have thought of so far is:

+ +
    +
  1. Obviously, comparing the strings from the headline and looking for similarities.
  2. +
  3. Taking out important keywords and especially looking for them (e.g. manufacturer name, year of creation, etc.)
  4. +
  5. Analyzing the pictures
  6. +
+ +

Maybe someone already has experience with this kind of ""pattern matching"". The result should have a high probability of correctness. Of course I am open to new comparison ideas that are not in my list.
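As a rough sketch of idea 1, the standard-library difflib gives a cheap similarity score between two headlines (the sample product names below are made up):

```python
from difflib import SequenceMatcher

def headline_similarity(a, b):
    # 0.0 .. 1.0, where 1.0 means identical after normalisation
    def norm(s):
        return ' '.join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

same = 'Canon EOS 2000D DSLR Camera (2018)'
close = 'Canon  EOS 2000D  camera, DSLR, 2018'
far = 'Bosch cordless drill GSR 12V'

print(headline_similarity(same, close) > headline_similarity(same, far))  # True
```

In practice you would combine this with the keyword extraction from idea 2 and only accept a match above some tuned threshold.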

+",209033,,209774,,43072.77778,43072.77778,Comparing articles from various web-stores,,1,3,0,,,CC BY-SA 3.0,, +309472,1,,,2/6/2016 23:06,,1,6555,"

I have received an XSD, generated from a Java-based system, from a 3rd-party supplier; it is to be used to create a SOAP endpoint for us to receive data transfers. Their XSD does not make any use of the nillable attribute defined within the W3C/XSD namespace, for example:

+ +
<xs:complexType name=""Customer"">
+    <xs:sequence>
+        <xs:element minOccurs=""1"" maxOccurs=""1"" name=""ID"" type=""xs:integer""/>
+        <xs:element minOccurs=""0"" maxOccurs=""1"" name=""TitleID"" type=""xs:integer""/>
+        <xs:element minOccurs=""0"" maxOccurs=""1"" name=""FirstName"" type=""xs:string""/>
+        <xs:element minOccurs=""0"" maxOccurs=""1"" name=""FamilyName"" type=""xs:string""/>
+        <xs:element minOccurs=""0"" maxOccurs=""1"" name=""PreviousName"" type=""xs:string""/>
+        <xs:element minOccurs=""0"" maxOccurs=""1"" name=""DateOfBirth"" type=""xs:date""/>
+    </xs:sequence>
+</xs:complexType>
+
+ +

When speaking to the supplier it was mentioned ""where minOccurs=""0"" and there is no value, elements will be omitted"".

+ +

When we use the XSD Schema Definition Tool to create classes, the classes are generated correctly with 2 exceptions: the DateOfBirth and TitleID properties are not nullable.

+ +

Unless nillable=""true"" is declared within the XSD for simple types, value types are not created as Nullable, despite the minOccurs=""0"" configuration. After some research I feel this interpretation is correct, given information in the following articles:

+ + + +

I have concluded from the above that minOccurs=""0"" states that the element within the XML is optional, but carries no null semantics. It is clear that within a SOAP message, to communicate no value for a particular element, xsi:nil=""true"" should be used on an empty element.
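To illustrate the distinction on the consuming side, here is a small Python sketch showing how an omitted element differs from an explicitly nilled one (namespace handling via the standard xml.etree module; single-quoted XML attributes keep the snippet simple):

```python
import xml.etree.ElementTree as ET

XSI = '{http://www.w3.org/2001/XMLSchema-instance}'

omitted = '<Customer><ID>1</ID></Customer>'
nilled = ('<Customer xmlns:xsi=\'http://www.w3.org/2001/XMLSchema-instance\'>'
          '<ID>1</ID><DateOfBirth xsi:nil=\'true\'/></Customer>')

def date_of_birth(xml_text):
    elem = ET.fromstring(xml_text).find('DateOfBirth')
    if elem is None:
        return 'omitted'        # absent element: null is only implied
    if elem.get(XSI + 'nil') == 'true':
        return 'explicit-nil'   # xsi:nil: null is explicitly stated
    return elem.text

print(date_of_birth(omitted), date_of_birth(nilled))  # omitted explicit-nil
```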

+ +

When presenting this to our supplier, I argued that the missing nillable attributes leave the XSD ambiguous and lacking intent when referenced against the standards, given that omitting elements does not state null; it is only implied. The W3C states that if a message needs to indicate an element is null, it should be explicitly stated. Their representative disagreed and argued that if there is no DateOfBirth (for example), they will remove that element from the SOAP message. They believe this is acceptable and conforms to the standards. From what I have read here, minOccurs/nillable have the same behavior in Java (maybe why they feel they don't have to use the nillable attribute).

+ +

I know I can work around this, but ultimately it (potentially) means a significant amount of additional code for the sake of our supplier not wanting to use the xsi:nil/nillable attributes in the XSD and SOAP messages. I am building our SOAP API in WCF; upon deserialization I will have to check all the generated Specified properties to detect missing elements. This is because value types that are not referenced in the XML message will be assigned their default values. Alternatively, I could go through the XSD and add the attributes myself, which I feel is a bad idea: I would be changing their definitions, and any new release would require the same effort, which is currently every 3-6 months.

+ +

My questions are:

+ +
    +
  • Do you agree with the arguments made by me or by the supplier? If so, why?
  • +
  • Am I correct in my suggestion Java treats these configurations the same?
  • +
  • If I am wrong, is there a way I can tell .Net to create nullable types for minOccurs=""0""?
  • +
+",172951,,42906,,43334.26875,44174.58542,"use of minOccurs=""0"" without nillable=""true"" in SOAP",<.net>,1,3,1,,,CC BY-SA 4.0,, +309473,1,,,2/6/2016 23:15,,-2,1207,"

I understand that it depends heavily on the particular case, but I really would like to know at least roughly how long a specific task can take an average programmer to complete. Are there any examples of pairs like ""project - completion time""?

+ +

I'm absolutely OK with my own time-estimation abilities. So is my employer. But my problem is that I have never worked in a team, and I cannot even roughly estimate how long something will take another programmer.

+ +

For example, my last task was to integrate our sales-business software (which is my own product, js+php+mysql) with a local delivery service API: syncing the address database, preparing package information and creating orders with the delivery service, printing labels for packages, sending customers their package info, and a web UI for doing all that. This task took me about 150-170 work-hours to complete.

+ +

I always considered myself a slow programmer. But now I wonder how much speed the development could gain by hiring new programmers. Are there any examples to compare?

+",214406,,,,,42407.22292,How to know if I'm slow or fast at programming?,,2,1,3,,,CC BY-SA 3.0,, +309492,1,309504,,2/7/2016 5:43,,-3,110,"

Please advise me and correct my understanding if I am in error. In my opinion, any programmer wanting to develop an accounting system should choose one of the two approaches mentioned below:

+ +
    +
  1. Make all clients connect to the SQL server directly by setting each client's connection string to refer to the SQL server.

  2. +
  3. Make all client PCs contact the server using streams, TCP or UDP ports, etc.; the server then makes a connection to the database and sends the result to its clients through streams and I/O methods. In other words, the clients do not contact the SQL server themselves directly; instead they just contact the server PC (which may or may not host the database).

  4. +
+ +

Are the above ideas correct and which one is common?

+",214426,,,,,42407.49514,Professional Systems Design,,1,4,0,42407.56042,,CC BY-SA 3.0,, +309497,1,,,2/7/2016 8:53,,2,129,"

I've dabbled in a few MVC frameworks (like Rails and its ilk) and I've noticed that the file that defines RESTful routes often lives separately from the controllers that hold the actions executed through those routes.

+ +

For example, let's say that there is a route

+ +
/users
+
+ +

that goes to the UserController.create action.

+ +

I would expect it to be defined as part of the controller but it is often defined in some

+ +
config/routes.rb|js file
+
+ +

Why is this?

+",214431,,,,,42407.37014,Why separate routes and controllers in MVC backend applications?,,0,2,,,,CC BY-SA 3.0,, +309501,1,309503,,2/7/2016 11:02,,0,109,"

I'm about to add some user account and password functionality to my software, so I'm reading about password hashing.

+ +

So my understanding is that hashing is a one-way function, and it works like this:

+ +
    +
  1. User Bob types in the password box: “MyPassword”
  2. +
  3. A request is sent to the server for Bob’s salt and hash from the database: +Hash: 8912ep98as89uasdp +Salt: 89asdjklasdo11jdsa
  4. +
  5. “MyPassword” + Salt is hashed using the same function and compared to the retrieved hash
  6. +
  7. If they match, access is granted.
  8. +
  9. The hash is stored in application memory for future comparisons.
  10. +
+ +
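Steps 1-4 can be sketched like this (Python; PBKDF2 is an assumed choice of slow hash, and the constant-time compare stands in for whatever the server does):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    # steps 1-3: combine the password with a per-user random salt
    # and run it through a deliberately slow one-way function
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, rounds)
    return salt, digest

def verify(password, salt, stored_digest):
    # step 4: re-hash the attempt with the stored salt and compare
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password('MyPassword')
print(verify('MyPassword', salt, stored))     # True
print(verify('WrongPassword', salt, stored))  # False
```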

Is that accurate? So the above has granted the user the ability to click around the application, but say my application is retrieving data from a server. I want things to be secure, so with each request to the service I pass along the authentication details (the stored hash from step 5), and the service checks it against the database hash before returning a set of data.

+ +

Isn't knowing the hash kind of just as bad as knowing the password? If you found out the hash, you could use the service for nefarious means because all the password is used for is generating the hash, and the hash is used for all further authentication.

+ +

Edit, for clarification: this isn't a web app. It's going to be a WPF application, probably using a WCF service to talk to a server.

+",214367,,214367,,42407.54097,42407.54097,Question on password hashing security,,1,5,,,,CC BY-SA 3.0,, +309510,1,309854,,2/7/2016 14:04,,2,182,"

Bloch's Effective Java item 55, Optimize judiciously, extends Jackson's rules on optimization:

+
+

Rule no. 1: Don't optimise!

+

Rule no. 2 (for experts): Don't optimise yet!

+

Extra Bloch rule 1: Don't try it until development is finished.

+

Extra Bloch rule 2: Measure performance before and after implementing an optimisation (you'll be surprised!)

+
+

And this is essentially how I approach optimisation. However, I now have a situation where I am using a complicated but accurate OO data structure reflecting the business domain.

+

I find myself considering implementing a significant amount of denormalisation.

+

Should I, if everything else is equal?

+

Is denormalisation in the business layer just another method of optimisation, to which these rules apply?

+",160323,,-1,,43998.41736,42414.83819,"Is denormalising a data structure essentially a code optimisation, to which the normal rules apply?",,3,1,,,,CC BY-SA 3.0,, +309511,1,,,2/7/2016 14:06,,1,474,"

ES6 native promises do not allow you to synchronously check whether they're resolved/pending/rejected or to extract their value. I sometimes need this functionality and thus have to code it manually.

+ +

Is this considered an anti-pattern? Is this why this feature was left out of the standard?

+ +

This is a situation where I think this sort of feature comes in handy:

+ +
// the render method of a React component
+// it will get triggered by other events other than the promise state change
+render() {
+  if (promise.resolved) {
+    // render the result
+  } else if (promise.rejected) {
+    // render an error/reason
+  } else {
+    // render a loading message
+  }
+}
+
+",73423,,73423,,42407.60278,42407.64861,Is synchronously inspecting a promise an anti-pattern?,,3,3,1,,,CC BY-SA 3.0,, +309512,1,309515,,2/7/2016 14:25,,2,7129,"

Is providing a user login a non-functional requirement?

+ +

Since this is concerned with security, which is a non-functional requirement, I feel that providing a user login is also non-functional, but then again it also seems functional to me.

+",214451,,31260,,42407.60139,44116.88194,Is providing a user login a functional requirement of a system?,,4,1,,,,CC BY-SA 3.0,, +309517,1,,,2/7/2016 15:04,,3,289,"

I have been asked the following question in an interview: ""What is the need of an interface when you can have an abstract method within an abstract class?"" +I did not know the answer to it.

+ +

Could you provide an example of why and when it may be useful?

+",214453,,9113,,42628.34028,42628.34028,When and why would you put a new method into an interface instead of an abstract base class?,,3,5,2,,,CC BY-SA 3.0,, +309527,1,,,2/7/2016 16:00,,3,66,"

I have a class that I use to render GUI elements on the screen; this class has a tree structure (with children, parent, and siblings). I created an additional class which allows me to interpolate some values of a GUI element (alpha and rotation for now); each instance of the class is linked to one GUI element (which it updates), like this:

+ +
// Creating a simple empty cube
+GUIElem img(0, 0, 40, 40);
+// Creating an interpolation for img, to rotate it to 90°
+RotationInterpolation inter(&img, 90);
+
+// Inside the loop
+// This calculates the interpolations from 0° to 90° and updates the image
+inter.update(delta_time);
+// This draws the image
+img.draw();
+
+ +

This is how it looks now, but I want to implement interpolations inside GUIElem methods (setAngle(), setAlpha()), and I was wondering what the best way to do that is. I came up with two solutions:

+ +
    +
  • Create a list of interpolations for each GUIElem and manage them (update, delete) inside GUIElem's update()
  • +
  • Create a global list of interpolations and manage it in my loop before rendering
  • +
+ +
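A rough sketch of the first option in Python (the class and method names mirror the question's C++-style API and are assumptions):

```python
class RotationInterpolation:
    def __init__(self, target_angle, speed=90.0):
        self.target = target_angle
        self.speed = speed          # degrees per second
        self.done = False

    def update(self, elem, dt):
        step = self.speed * dt
        if abs(self.target - elem.angle) <= step:
            elem.angle = self.target
            self.done = True
        else:
            elem.angle += step if elem.angle < self.target else -step

class GUIElem:
    def __init__(self):
        self.angle = 0.0
        self.interpolations = []    # option 1: list owned by the element

    def update(self, dt):
        for inter in self.interpolations:
            inter.update(self, dt)
        # drop finished interpolations so the list stays short
        self.interpolations = [i for i in self.interpolations if not i.done]

img = GUIElem()
img.interpolations.append(RotationInterpolation(90.0))
for _ in range(10):                 # simulate ten 0.2 s frames
    img.update(0.2)
print(img.angle)  # 90.0
```

Because finished interpolations are dropped inside update(), elements with empty lists cost only a cheap loop over nothing each frame.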

Since pretty much all of my interpolations would be added and deleted continuously, I'm not sure which of those methods is more efficient in terms of memory and performance (I could have a lot of GUIElems with no interpolations running that would still need to be checked).

+",214459,,31260,,42407.67083,42407.67083,Visual interpolations: independent or linked to object?,,0,3,,,,CC BY-SA 3.0,, +309546,1,309547,,2/8/2016 0:56,,15,58427,"

From my understanding, ARP translates an IP address into a MAC address, and then the computer uses the MAC address to establish a direct connection.

+ +

If I already know the MAC address of the computer I want to connect to, is it possible to directly connect to it (without a router)? Is there an example of this?

+",214504,,214504,,42408.05833,43781.84861,Connecting directly to another computer knowing only the MAC address?,,3,1,1,,,CC BY-SA 3.0,, +309550,1,,,2/8/2016 2:06,,1,649,"

I have a bunch of plain-text like this:

+ +
1 MILE, PACE, PURSE $1,100.
+FILLIES & MARES N/W $541 L5 STARTS AE N/A $301 L5 & N/A $60 PS
+IN 2015-16 DRAW INSIDE
+                                                                                         Last
+Horse                       HV PP    1/4     1/2     3/4     Stretch  Finish     Time    1/4  Driver           Odds   Trainer
+7   Im A Debutant               7    7/9H    7/5T    5/2T    5/2H     1/3        2:03    31   C Macpherson     7.45   R Gass
+3   M D Caseys Charm            3    2@/1H   3/1T    3/1H    3/1Q     2/3        2:03.3  32   Ma Campbell      3.20   S Ford
+5   Lucksgottachange            5    1/1H    1/T     1/1Q    1/1      3/3        2:03.3  32.1 J Hughes         1.55*  J Hughes
+2   Gascoigne Dickie            2    4/4T    5/3Q    4@@/1T  2/1      4/3H       2:03.3  31.4 K Sorrie        30.10   K Sorrie
+8   Avid Yankee                 8    8/12    8/8     8/5     8/4Q     5/5        2:04    31.3 K Murphy         5.25   A Ramsay
+1   Honor Roll                  1    3/3     2@/T    2@/1Q   4/2Q     6/6        2:04.1  32.3 B Andrew         9.90   B Andrew
+4   Julep Hanover               4    5/6Q    6@/4H   7@@/3T  6/3T     7/6        2:04.1  32   W Myers         19.05   W Myers
+6   Putnams Snap                6    6/7T    4@/3    6@/3H   7/4      8/10       2:05    33   M Mcguigan       2.75   G Dunn
+Time: 29.2, 1:00.3, 1:31.2, 2:03 (Temperature: -2, Condition: GD, Variant: 1)
+
+ +

taken from http://www.standardbredcanada.ca/racing/results/data/r0130chrtnn.dat

+ +

But in my case it's human-written and may contain extra/missing spaces, dots, etc.

+ +

And I need to parse it into a data structure. +How would different programming languages approach this? Are there good libraries?

+ +

I'm mostly a python programmer, but looking towards learning new languages.

+ +

Also, I'd really love to see how strongly-typed languages deal with this.
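In Python, one common tactic is a tolerant regex per line type, with `\s*` gaps soaking up the human noise. A sketch for the summary line only — the pattern and the field names are my own guesses at the format, not a ready-made parser:

```python
import re

# One line from the summary section; human-written input may have stray
# spaces, which the \s* gaps are meant to absorb. Field names are my own.
line = "Time: 29.2, 1:00.3, 1:31.2, 2:03 (Temperature: -2, Condition: GD, Variant: 1)"

pattern = re.compile(
    r"Time:\s*(?P<splits>[\d:.,\s]+?)\s*"
    r"\(\s*Temperature:\s*(?P<temp>-?\d+)\s*,"
    r"\s*Condition:\s*(?P<cond>\w+)\s*,"
    r"\s*Variant:\s*(?P<variant>\d+)\s*\)"
)

m = pattern.search(line)
record = {
    "splits": [s.strip() for s in m.group("splits").split(",")],
    "temperature": int(m.group("temp")),
    "condition": m.group("cond"),
    "variant": int(m.group("variant")),
}
print(record)
```

For the column-aligned horse rows, slicing fixed column ranges is often more robust than splitting on whitespace; parser-combinator libraries (e.g. pyparsing) are another common route.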

+",161056,,,user40980,42408.09375,42468.72292,"""Fuzzy"" parsing in different languages",,2,3,0,,,CC BY-SA 3.0,, +309553,1,,,2/8/2016 3:57,,-1,137,"

By consulting various scattered tutorials and books, I've been able to learn that the 64-bit Linux ""exit"" system call is 60, and the status value is moved to edi. Similarly, ""write"" has call number 1, and the file descriptor must be passed to edi, etc.

+ +

However, it seems very hard to find information about the other system calls. For example, this table shows the relevant registers for each call, but in e.g. sys_chmod, a value of type mode_t must be moved to rsi. Where does one find the different integer values that can be passed as arguments? man 2 chmod doesn't appear to have them, and grepping through header files in /usr/include hasn't been helpful.

+ +

In general, how does one go about finding a consistent reference for low-level details like these? Are they considered useless, since the same functionality can generally be performed with C rather than assembly? I'm reading the AMD64 developer manuals, which are helpful for conceptual understanding, but those won't help me with Linux system calls.

+",214514,,,,,42408.26667,Finding register parameters for system calls,,1,2,,,,CC BY-SA 3.0,, +309555,1,309558,,2/8/2016 5:17,,1,2083,"

I'm more or less teaching myself big O notation, so please forgive me if this is a duplicate of a question which applied to my question without me having the wisdom to realise it. For my own amusement/personal development, I'm trying to express the complexity of a function which works on 2 different datasets which relate to each other: I need to iterate over a list, and for each item in a list, iterate over another list checking if the items are related.

+ +

Pseudocode:

+ +
for things in list_a {
+  for things in list_b {
+    if (list_a.thing relates to list_b.thing) go ping
+  }
+}
+
+ +

What I have on paper currently is O(n) * O(m), but I'm wondering if it should be expressed as O(n*m) instead, where n = size of set b and m = size of set a? Or is this something else entirely? Again, apologies if this is a stupid/duplicate question, but I couldn't find anyone specifically discussing a nested loop over two different datasets. This answer would suggest that it's O(n^2), but that feels wrong to me, since the sizes of the two datasets themselves are different and independent.
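To make the counting concrete, a throwaway sketch that tallies how many times the inner body runs:

```python
list_a = range(4)   # n = 4 items
list_b = range(7)   # m = 7 items

operations = 0
for a in list_a:          # outer loop runs n times
    for b in list_b:      # inner loop runs m times per outer pass
        operations += 1   # the "relates to" check would sit here

# The inner body runs exactly n * m times, hence O(n*m).
print(operations)  # 28
```

When n and m vary independently, O(n*m) is the standard way to write it (O(n) iterations each doing O(m) work); O(n^2) is just the special case where both sets have the same size.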

+",153734,,-1,,42837.31319,42408.26528,How should I express the complexity of two nested loops over different datasets in Big O notation?,,1,0,,,,CC BY-SA 3.0,, +309556,1,,,2/8/2016 5:24,,3,90,"

I am trying to create a simple sentence generator that uses templates and a database of words. It will be a website where essentially a user could click a button to generate a sentence.

+ +

For example: The {animal} is on top of the {large_obj}. +would choose an item from animals and then choose another item from the list that it points to. And it would also choose a large_obj. +A possible outcome would be: The chicken is on top of the house.

+ +

+ +

I am looking for suggestions for how to store the data. +I was thinking either a database with tables for each collection or a large JSON object. But I don't have enough experience to feel good about either decision.

+ +

It is currently planned to use Node.js with a Postgres database. +I'm hoping to have 1000-2000 words total.
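Either storage choice ends up feeding a lookup shaped like this — a minimal Python sketch of the substitution step, with invented word lists:

```python
import random
import re

# Word lists keyed by placeholder name; in the real app these could be
# Postgres tables or one JSON document. Contents are illustrative.
words = {
    "animal": ["chicken", "cow", "goat"],
    "large_obj": ["house", "barn", "truck"],
}

def generate(template):
    # Swap each {placeholder} for a random word from its list.
    return re.sub(r"\{(\w+)\}",
                  lambda m: random.choice(words[m.group(1)]),
                  template)

sentence = generate("The {animal} is on top of the {large_obj}.")
print(sentence)  # e.g. "The chicken is on top of the house."
```

At 1000-2000 words total, either a JSON document or a couple of Postgres tables will comfortably fit in memory; the shape of the keyed lookup matters more than where it is persisted.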

+",214522,,31260,,42408.27014,42408.27014,Suggestions for Storing large collection of related words,,0,11,2,,,CC BY-SA 3.0,, +309562,1,,,2/8/2016 7:13,,3,1263,"

I'm searching for strings matching the pattern [A-Z]\W*[0-9]+, so that in

+ + + +
V-2345
+35A235
+Q252
+
+ +

the V-2345 and Q252 would match. In another list, I want to find equivalent items that fit the same regex pattern, so that potential matches would include:

+ +
V ++ 2345
+Q 252
+
+ +

but not

+ +
V-252
+Q//2345
+
+ +

Basically, if an item matches the pattern in the first list, I want to search for that letter prefix and number suffix in the second list. Is there a term for using regex to do this kind of search? I know that I can just write my own search by using string manipulation to get the letter and number, then use them to compose a separate regex pattern for each search of the second list, but I'm wondering if there's something built into typical regex flavors (I'm using C# in this case) that serves this purpose so that I can just use the original pattern.
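I am not aware of a single built-in regex feature for this; composing the second pattern from the first match's captured groups is the usual manual route. A sketch of that idea in Python rather than C#, purely to illustrate:

```python
import re

first_list = ["V-2345", "35A235", "Q252"]
second_list = ["V + 2345", "Q 252", "V-252", "Q//2345"]

# Stage 1: capture the letter prefix and number suffix.
# fullmatch (anchored) rejects "35A235", matching the intent above.
item_pattern = re.compile(r"([A-Z])\W*([0-9]+)")

matches = []
for item in first_list:
    m = item_pattern.fullmatch(item)
    if not m:
        continue
    letter, number = m.groups()
    # Stage 2: build a pattern pinned to this exact letter/number pair.
    pair_pattern = re.compile(re.escape(letter) + r"\W*" + re.escape(number))
    for candidate in second_list:
        if pair_pattern.fullmatch(candidate):
            matches.append((item, candidate))

print(matches)  # [('V-2345', 'V + 2345'), ('Q252', 'Q 252')]
```

Backreferences only operate within a single match attempt on a single input string, so they cannot carry a result from one search over to a search of a different string — hence the manual composition.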

+",65646,,,user40980,42408.58681,42408.66042,Is it possible to write a regex that does one search then uses its results to do another search?,,1,3,1,,,CC BY-SA 3.0,, +309564,1,309574,,2/8/2016 8:10,,3,204,"

Background:

+ +

We are using agile project management and follow sprints to do development. We are using a single repository in bitbucket.org that uses mercurial. Repository is hosting 3-4 products. There are many projects with 4 solutions for each product. There are some product specific projects and there are some shared projects.

+ +

We cannot change the folder structure since we already have a TeamCity server doing automated builds, which works well. But I am open to it if it's really critical.

+ +

Current Process
+Our current process can be described as follows:

+ +
    +
  • during development, we use hgFlow start and finish features (develop branch)
  • +
  • for releases, we start releases, make them stable, finish the releases (new release branch from develop)
  • +
  • for hotfix we start hotfix (new hotfix from release branch), and finish it (merges to develop and default)
  • +
+ +

Problems
+Here are some of the problems with this process:

+ +
    +
when we start a release, code from the 3 other products that is not ready to release is also present in the release, and when the release is finished it goes to default, which is unwanted.
  • +
when we change shared projects in development, it affects open releases and we have to patch it there
  • +
  • when we change shared projects in release, it affects previously released products
  • +
+ +

There are so many other technical issues that we discovered that we immediately stopped using hgFlow; right now we are manually doing folder diffs in order to start/finish releases and hotfixes.

+ +

Some of the options that I thought of

+ +
    +
  • Create single repository to host each product's source code except shared product
  • +
  • Create single repository for each of the shared project
  • +
  • Separately maintain HgFlow and Tags etc.
  • +
+ +

That is too much work. For example, when I want to start a release, I have to go to the product and to each affected shared project to start a new release. The same is true during regular development: for a single feature that changes the product and a shared project, I have to create the feature in all of the projects.

+ +

How to better handle it?

+ +

Update - 9 Feb 2016 +Our products are based on .NET; we are using Bitbucket for hosting, and Mercurial and SourceTree with HgFlow for development operations.

+",39819,,39819,,42409.24236,42409.24236,How to improve our source control process,,1,2,1,,,CC BY-SA 3.0,, +309565,1,309569,,2/8/2016 8:19,,4,2408,"

I am working on a C# programming project in Visual Studio. I have created various VS library projects inside the VS solution containing the various components of the solution. Without giving it too much thought, I have put the factories for these components into their corresponding VS projects.

+ +

I have now needed to reuse one of these components inside another VS solution and realized that having the factory inside the component's VS project may not be the correct way, since in the new project I have different requirements for the factory to construct the object (e.g. parameters, etc.). This would suggest that the factory should not form part of the component's VS project.

+ +

On the other hand factories appear to be very closely bound to the components/objects they construct. For me this raises the question, where I should put them:

+ +

Should factories be part of the solution using the components library or should it be part of the library?

+",214534,,,,,42408.45,Project structure: Where to put object factories,,1,7,,,,CC BY-SA 3.0,, +309570,1,309572,,2/8/2016 11:17,,7,1108,"

I am looking to get some input from some more experienced testers than what I am. :)

+ +

I am trying to make my node modules testable, allowing for dependency spying/stubbing/mocking without the need to use a magic library such as rewire or mockery to intercept my module import/require statements.

+ +

I was thinking of using an approach whereby I would create a factory function within each module. The factory function accepts the required dependencies for the module to export. The default export would call the factory function using the standard import statements from the module. I would also export the factory function itself allowing for my tests to inject whatever dependencies they like.

+ +

Here is an example (which is completely contrived and simplified to highlight the design approach):

+ +
import { foo } from './utils/foo';
+
+// We export our factory, exposing it to consumers.
+export const factory = (dependencies = {}) => {
+  const {
+    $foo = foo // defaults to the standard import if none provided.
+  } = dependencies;  
+
+  return function bar() {
+    return $foo();
+  }
+}
+
+// The default implementation, which would end up using default deps.
+export default factory();
+
+ +

Thoughts? Is this too verbose? Is there a better mechanism of achieving the same? I am slightly concerned about the boiler plate additions that I will be adding to my project, but perhaps the trade off is worth it.
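For comparison, the same default-dependencies factory idea in Python, with default arguments standing in for the ES module imports (names are illustrative):

```python
def foo():
    return "real foo"

def make_bar(foo_dep=foo):
    # Dependencies default to the real implementation; tests can pass
    # stand-ins without any module-loader interception (no rewire/mockery).
    def bar():
        return foo_dep()
    return bar

bar = make_bar()                              # production wiring
fake = make_bar(foo_dep=lambda: "stub foo")   # test wiring
print(bar(), "/", fake())
```

The trade-off is the same as in the JS version: a few lines of boilerplate per module in exchange for tests that never have to touch the module loader.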

+",214384,,214384,,42408.50625,42426.73194,Respectable design pattern for making node modules flexible/testable?,,2,0,1,,,CC BY-SA 3.0,, +309576,1,309623,,2/8/2016 12:30,,5,642,"

My current coding style is to use single-quoted strings as a default, and use backticked template literals whenever I need to concatenate a value into a string.

+ +

But I'm now wondering what's the point in having two kinds of string at all? Maybe it would be better to stick to template literals for everything, for simplicity. Even if you're not actually interpolating a dynamic value into the template literal, there's still the other advantage that with templates you don't have to bother escaping literal quote marks (which occur way more often than literal backticks).

+ +

So I'm considering configuring my linter to warn me when I use plain quoted strings (single or double). Aesthetics aside, is there any reason why this would cause problems?

+",30385,,,,,42409.22708,Any reason to continue using plain strings in ES2015?,,1,6,,,,CC BY-SA 3.0,, +309577,1,309580,,2/8/2016 12:43,,7,184,"

Here's a silly pseudo-object to illustrate the point:

+ +
public class Underlying
+{
+    public int Property { get; set; }
+}
+
+public class Testing
+{
+    public Underlying MyUnderlyingObject = new Underlying { Property = 44 };
+
+    public int MyPropertyWrapper
+    {
+        get
+        {
+            return MyUnderlyingObject.Property;
+        }
+        set
+        {
+            MyUnderlyingObject.Property = value;
+        }
+    }
+}
+
+ +

When faced with situations such as this, I like to write tests that assert existing state, in order to illustrate that the value under test has actually changed. For example:

+ +
[TestMethod]
+public void TestMyProperty()
+{
+    Testing tester = new Testing();                          // arrange
+    Assert.AreEqual(44, tester.MyUnderlyingObject.Property); // assert existing value
+    tester.MyPropertyWrapper = 55;                           // act
+    Assert.AreEqual(55, tester.MyUnderlyingObject.Property); // assert value has changed
+}
+
+ +

Is this just a waste of code and processing time?
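For what it's worth, the same arrange / guard-assert / act / assert shape in Python's unittest — the classes here are stand-ins for the C# example, not real production code:

```python
import unittest

class Underlying:
    def __init__(self):
        self.prop = 44

class Testing:
    def __init__(self):
        self.underlying = Underlying()

    @property
    def wrapper(self):
        return self.underlying.prop

    @wrapper.setter
    def wrapper(self, value):
        self.underlying.prop = value

class WrapperTest(unittest.TestCase):
    def test_wrapper_writes_through(self):
        tester = Testing()                            # arrange
        self.assertEqual(44, tester.underlying.prop)  # guard: known start state
        tester.wrapper = 55                           # act
        self.assertEqual(55, tester.underlying.prop)  # assert
```

The first assertion is often called a guard assertion: it documents the precondition, and if construction ever stops yielding 44 the test fails at the guard and points at setup rather than at the property under test.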

+",22742,,,,,42414.88194,Is there any value in asserting values prior to acting in unit tests?,,2,3,,,,CC BY-SA 3.0,, +309581,1,310000,,2/8/2016 13:44,,5,2349,"

I like implementing service classes as stateless. However, I also like to decompose my logic into more, simple methods or functions. In some scenarios it seems like the two are somewhat against each other. Consider the following two examples.

+ +

I have an entity object called House, implemented something like this.

+ +
public class House {
+  public string Address { get; set; }
+  public List<Room> Rooms { get; set; }
+
+  ...
+
+  public IEnumerable<Furniture> GetAllFurnitures() {
+    return Rooms.SelectMany(e => e.Furnitures);
+  }
+
+  ...
+}
+
+ +

1. Stateful implementation

+ +

Pros: cleaner code, no parameter hell

+ +

Cons: one service per one House, makes Dependency injection harder because of constructor parameter

+ +
public class HouseCleaner {
+
+  private readonly House _house;
+
+  public HouseCleaner(House house) {
+    _house = house;
+  }
+
+  public void Clean() {
+    vacuumClean();
+    cleanBathroom();
+    cleanToilettes();
+    cleanKitchen();
+    wipeFloor();
+  }
+}
+
+ +

2. Stateless implementation

+ +

Pros: the service instance can be shared, makes Dependency injection easier

+ +

Cons: need to pass probably many parameters to all simple methods and functions to share the current state

+ +
public class HouseCleaner {
+
+  public void Clean(House house) {
+    vacuumClean(house);
+    cleanBathroom(house);
+    cleanToilettes(house);
+    cleanKitchen(house);
+    wipeFloor(house);
+  }
+}
+
+ +

Which one do you feel is more appropriate, considering that this is a simplified example? In reality there could be several service dependencies, and other parameters too.

+",214581,,,,,42883.92222,Stateless service classes and method decomposition,,3,5,0,,,CC BY-SA 3.0,, +309585,1,309586,,2/8/2016 14:54,,23,701,"

When working on a fix or a feature, I sometimes stumble over other tiny issues that can be improved on the fly in a matter of seconds. When I do them immediately and then commit the finished feature/fix, the commit includes more than one thing. For example ""add feature X and code clean up"" or ""fix bug X and improved logging"". It would be better to split this into two commits. In case the two changes happened in the same file, I cannot simply add one file, commit, add the other and then commit again. So I see the following three options:

+ +
    +
  1. Deliberately overlook non-related things while working on something.

  2. +
  3. Copy the file with two changes, revert it, include one change, commit, include the other change, commit again.

  4. +
  5. Do not change small unrelated things but add them to a todo list and do them later.

  6. +
+ +

I do not really like all of the three options, because of the following reasons:

+ +
    +
  1. Code quality can suffer if one does not fix small problems. And I feel bad if I consciously miss a chance to improve something without much effort.

  2. +
  3. This increases manual work and is error prone.

  4. +
This is fine for not-so-tiny todos, but adding a tiny item to a todo list and revisiting it later often takes much longer than just fixing it immediately.

  6. +
+ +

How do you handle such situations?

+",104636,,,,,42441.86458,Splitting coding progress into meaningful commits without too much overhead,,6,6,3,,,CC BY-SA 3.0,, +309590,1,,,2/8/2016 16:30,,1,468,"

I have these two questions:

+ +

Can deployment diagrams have components in UML?

+ +

Can component diagrams have nodes in UML?

+ +

I've seen many diagrams mixing diagram elements, and it is making me confused.

+",51641,,31260,,42408.77083,43633.65833,Can deployment diagrams have components in UML?,,1,5,0,,,CC BY-SA 3.0,, +309593,1,309598,,2/8/2016 18:30,,0,246,"

I am about to submit my new app to the iOS App Store.

+ +

It's very simple. The app generates a random motivational quote and displays it onscreen. I have a database of 500 quotes now, stored in a plist file.

+ +

I will add more quotes bi-monthly, essentially just adding text to the existing plist file.

+ +

How should I approach this? Do I simply upload an amended plist file to my app within the 'backend' of the App Store? Would I have to submit a whole new build and version every time I amend the plist?

+ +

Or would I store this plist file on an external server and update that file? Am I able to link this updated plist file (on my external server) to the plist file within my existing app on the App Store?

+ +

What's my best approach?

+ +

I am really new to server side architecture.

+",214610,,97259,,42429.98681,42429.98681,Do I require an external server for my new mobile app? Only minimal text content to change within my app every couple of weeks,,1,0,,,,CC BY-SA 3.0,, +309603,1,,,2/8/2016 20:05,,0,921,"

I've been studying databases and rest APIs lately and I have a question about the relationship between the two.

+ +

Imagine I have a database with three tables, STUDENTS, ENROLLED, and CLASSES.

+ +

STUDENTS and CLASSES denote the entities students and classes, whereas the ENROLLED table denotes their relationship.

+ +

If I were to map this dataset in a REST API, would I just have 3 different CRUD routes for the three tables, e.g. ('/students', '/classes', '/enrolled')?

+ +

And that question goes for REST APIs in general -- when you write/make a REST API, are you just constructing a 1:1 mapping of your database?

+ +

Just trying to really nail down my conceptual understanding of the relationship between the two.

+",214625,,31260,,42408.8375,42409.24514,Relationship between REST APIs and Databases,,1,4,1,42442.67431,,CC BY-SA 3.0,, +309607,1,309857,,2/8/2016 20:49,,9,1144,"

Are there any programming languages that are available and extendable in more than one natural language?

+ +

For instance, an English version with a do..while loop, a Spanish version with a hacer..mientras loop, a French version with a faire..pendant and a Dutch version with a doe..terwijl.

+ +

The only 'programming language' I can think of that sort of implements this is Microsoft VBA.

+ +

Bonus question: Why are there so few programming languages that come in multiple languages?

+",158028,,31267,,42412.67222,42665.85139,Why aren't there more multi natural language programming languages?,,7,16,6,,,CC BY-SA 3.0,, +309609,1,,,2/8/2016 20:54,,4,9859,"

I'm aware that linked lists, sets and arrays can be used to create stacks by themselves. The theory behind it is this

+ +

linked-list: In some languages, a linked-list is substitutable for an array. Stacks are Last In First Out (LIFO) structures.

+ +

array: The push and pop methods invoke a stack like behavior.

+ +

set: A set is exactly the same as an array, except it does not feature duplicate elements.

+ +

I'm struggling to find a way that a tree can be used to create a stack
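For contrast with the tree idea, here is how the linked-list theory above looks in practice — a minimal Python sketch (pushing and popping at the head keeps both operations O(1)):

```python
class Node:
    def __init__(self, value, below):
        self.value = value
        self.below = below

class LinkedStack:
    """Stack backed by a singly linked list: all activity at the head."""

    def __init__(self):
        self.top = None

    def push(self, value):
        self.top = Node(value, self.top)

    def pop(self):
        node = self.top
        self.top = node.below
        return node.value

s = LinkedStack()
for v in [1, 2, 3]:
    s.push(v)
popped = [s.pop(), s.pop(), s.pop()]
print(popped)  # [3, 2, 1] -- last in, first out
```

A tree degenerates into exactly this when every node has at most one child, which is one way to see why a general tree adds structure a stack never uses.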

+",214629,,214629,,42408.87361,42409.15208,Can a tree be used to create a stack?,,2,10,2,,,CC BY-SA 3.0,, +309610,1,,,2/8/2016 21:25,,1,181,"

If an organization develops its own software internally (with no license agreement) but uses other open source frameworks that we modified so those frameworks work for us internally, and then we decide to share this source code with other organizations, what does this organization need to do before sharing the source code?

+ +

The other frameworks that are open source (that we modified) are using various software license agreements.

+ +

BSD, MIT, Apache, GPL v3

+",54104,,54104,,42408.89375,42409.35208,using source code for individual organization vs sharing source code outside organization,,1,7,,,,CC BY-SA 3.0,, +309613,1,,,2/8/2016 22:15,,3,38,"

I am working on a Spring Boot and Angular application which has a requirement to search based on any number of the available filters being applied to a list. For example, a user searches on 'Title' and 'Author,' but not 'Edition.'

+ +

I am comfortable with how I'll accomplish this at all layers except for the ORM. In my head the process would be to simply include the criteria for which the user has submitted values. However, with the tools available in JPA, I do not see a clear path to success. I believe my options are JPA's Criteria API or Query DSL. In either case, it seems like I'd have to write a bunch of conditional logic to append .and or the equivalent on to a query. Is there a way to pass argument lists to JPA, similar to ActiveRecord? Am I asking the right questions?

+",112021,,,,,42408.92708,Approach for querying an arbitrary set of user submitted fields and values in Spring application? Like a shopping site sidebar search?,,0,0,,,,CC BY-SA 3.0,, +309614,1,,,2/8/2016 22:26,,4,137,"

We have a sprint planning every sprint which takes 2 hours. In this little time we discuss 10 user stories, write them down and estimate them.

+ +

Very often after the estimation (during the sprint) I realize that the estimation was bad, because the technical solution takes more time as we did not really and exhaustively talk about the problem. This always puts lots of pressure on me.

+ +

In this sprint planning, I must also say, we talk too much. It could be more focused.

+ +

In Scrum, is it really enough to plan the tasks for a user story in the sprint planning, or should there be more or other meetings for this?

+",57734,,,,,42413.06319,Where in scrum methodology do you define the approach/concept for a certain task,,3,0,,,,CC BY-SA 3.0,, +309628,1,309637,,2/9/2016 3:46,,4,288,"

Say I had a software algorithm (for example, an FFT), and I need it to process (n) amounts of data in (t) milliseconds. This is a real-time task written in C. There are a lot of CPUs out there, and you could just select the fastest one, but we also want something just right for the job while reducing cost.

+ +

FFTs are O(n log n) as far as I know, so maybe one can say that it would take k * (n log n) to perform an FFT on n units of data. Even if the constant was known, how would I translate that to actual CPU cycles, in order to determine which CPU is suitable?

+ +

A colleague from work posed this question to me and I couldn't answer it, as this is in computer engineering territory which I'm not familiar with.

+ +

Assume that this software program runs on its own, with no OS or other overhead involved.
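A rough way to turn k * (n log n) into a clock-rate requirement is plain arithmetic — the constant k below is invented for illustration and would have to be measured on a reference machine:

```python
import math

n = 4096          # FFT size in samples
t_budget = 5e-3   # real-time deadline: 5 ms per FFT
k = 8.0           # cycles per (n * log2 n) step -- INVENTED; measure this

cycles_needed = k * n * math.log2(n)     # estimated cycles per FFT
min_clock_hz = cycles_needed / t_budget  # slowest clock meeting the deadline

print(f"~{cycles_needed:.0f} cycles per FFT -> need at least {min_clock_hz / 1e6:.1f} MHz")
```

In practice k hides cache behaviour, SIMD width, and pipeline effects, so it differs per CPU family; the usual route is to benchmark the FFT on one or two candidate parts and scale the estimate from there, with headroom.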

+",124792,,,,,42409.71458,Selecting a CPU or microcontroller needed for a given software application?,,4,1,,,,CC BY-SA 3.0,, +309644,1,309679,,2/9/2016 8:32,,4,19352,"

I don't know how to reduce the size of a jar file. +When we code in Java Swing, a jar file is created; is there any way to reduce its size? I can't remove the images and other stuff I have included, as they're necessary.

+ +

I tried to unzip the jar file and remove the stuff that is not needed. I want to reduce the jar file size because I have to upload it to a portal for my college project, and the max upload size is 1 MB.

+",212928,,-1,,42410.33819,43542.77222,How to reduce size of jar file?,,3,6,,,,CC BY-SA 3.0,, +309645,1,309647,,2/9/2016 8:36,,4,94,"

My team uses a Kanban style approach to development where feature and bug fix stories filter through to Production as and when they are ready. We currently are using SVN as our VCS and take a feature branching approach to developing each story. A very simplified version of our workflow would be this:

+ +
    +
  1. Branch, develop story
  2. +
  3. User acceptance demo of story from branch
  4. +
  5. Reintegration to trunk, TeamCity build of package which is released to Test
  6. +
  7. Into Production if all is well
  8. +
+ +

However, one issue that has come up recently is that if we have a number of feature stories for the same application going through at the same time, this can cause problems if one of those stories has a bug. Say story 1 was reintegrated and had a bug, but stories 2 and 3 were reintegrated before that bug was discovered. We now have 3 releases containing that bug, which means we end up doing one of two things, neither of which is great:

+ +
    +
  1. Create a new patch release to follow story 3, which means a large Production deployment as stories 1, 2 & 3 + patch have to be deployed as one.
  2. +
  3. Rollback the trunk to the point of the bug being introduced in story 1, fix the bug, then reintegrate stories 2 and 3 and rerelease, which is a lot of work and open to manual error.
  4. +
+ +

So my main questions would be:

+ +
    +
  • is it more optimal to conduct System testing on branches prior to their reintegration to head off this scenario? How about the risk that testing is not taking place on the true trunk code?
  • +
  • or are there other possibilities given by the use of a more modern DVCS that we are restricted from trying by our use of SVN?
  • +
+",214615,,,,,42409.36667,Best practices for releasing patches that affect multiple development stories,,1,0,,,,CC BY-SA 3.0,, +309656,1,,,2/9/2016 9:49,,1,1985,"

In many cases, there are different kinds of forms in an application's user interface, and these forms are used to collect all the data that is needed to update (or of course create) a domain object (e.g. a Person entity in a person register application).

+ +

When the entire Person entity's data is transferred as a DTO to the backend software, how do I save the changes in a DDD way? I have read several blog posts which advise that every single way to change or fix data should be handled as its own business operation - such as changeAddress() or amplifyDescription(). But now I have all the changes in one DTO object, so what's next? Can I treat the DTO as a set of business operations, so that I only have to transform those ""commands"" into real business operation calls? The fact is that it's quite usual for many operations to be put together in UIs, and in some way I have to handle that in the backend.

+ +

I think that mapping DTOs to domain objects, using some kind of automapper or by hand, is a wrong and anemic way to get things working, and it would consist of a lot of boilerplate and meaningless code.
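One possible reading of "transform the DTO into business operation calls" is to diff the DTO against the current entity and dispatch a domain operation per changed field — a rough Python sketch with invented names:

```python
class Person:
    def __init__(self, address, description):
        self.address = address
        self.description = description

    # Explicit business operations instead of blind field mapping.
    def change_address(self, new_address):
        self.address = new_address

    def amplify_description(self, text):
        self.description = text

# Which domain operation handles which DTO field (mapping is mine).
HANDLERS = {
    "address": Person.change_address,
    "description": Person.amplify_description,
}

def apply_dto(person, dto):
    for field, handler in HANDLERS.items():
        new_value = dto.get(field)
        if new_value is not None and new_value != getattr(person, field):
            handler(person, new_value)   # fire the operation only on change

p = Person("Old St 1", "quiet")
apply_dto(p, {"address": "New St 9", "description": "quiet"})
print(p.address)      # New St 9 -- changed via change_address
print(p.description)  # quiet    -- unchanged, no operation fired
```

This keeps the entity's intent-revealing operations while accepting one flat DTO from the UI; a richer variant has the UI send explicit command objects instead of a snapshot.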

+",193934,,,,,42409.55,DDD: Saving changes from UI to domain object,,3,1,2,,,CC BY-SA 3.0,, +309659,1,,,2/9/2016 10:39,,0,796,"

I'm writing an application in JavaScript where, given a word, I need to get all the possible versions of the word, with the suffix being the difference between each form. For example:

+ +

""sponsor"" should return ""sponsorship"", ""sponsoring"", ""sponsors"", etc

+ +

""spy"" should return ""spies"", ""spying"", ""spied"", etc.

+ +

I thought of having a list of common suffixes and then attaching each suffix to the given word and checking in the dictionary if the resulting word exists or not. But the problem with that is that sometimes the last one or two letters of the initial word need to be changed before the suffix can be added. Like for ""spy"", the ""y"" needs to be replaced with ""i"" before adding ""es"" to get ""spies"". I googled a lot and didn't find much help. All I could find was this post with a Python program attached at the end that's not making any sense to me.

+ +

If someone could suggest an algorithm for this or explain the logic of the Python program, that'd be really helpful.
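A first cut of the suffix idea, in Python for brevity — the spelling rules and the dictionary below are tiny stand-ins, and real English morphology needs many more rules (stemmers/lemmatizers in libraries such as NLTK encode them):

```python
# Tiny stand-in dictionary; a real app would load a full word list.
dictionary = {"spies", "spying", "spied",
              "sponsors", "sponsoring", "sponsorship"}

SUFFIXES = ["s", "es", "ing", "ed", "ship"]

def stems(word):
    """Spelling variants to try before attaching a suffix."""
    yield word                  # sponsor -> sponsor + ing
    if word.endswith("y"):
        yield word[:-1] + "i"   # spy -> spi + es / spi + ed

def forms(word):
    found = set()
    for stem in stems(word):
        for suffix in SUFFIXES:
            candidate = stem + suffix
            if candidate in dictionary:
                found.add(candidate)
    return found

print(forms("spy"))      # {'spied', 'spies', 'spying'} (set order varies)
print(forms("sponsor"))  # {'sponsors', 'sponsoring', 'sponsorship'}
```

The dictionary check filters out bad candidates like "spys", so the rule list only has to be generous, not exact; each extra rule (consonant doubling, "e"-dropping, etc.) is just another `yield`.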

+",214695,,,,,42410.41458,Algorithm to get all possible forms of a word with varying suffixes,,1,6,,42423.04583,,CC BY-SA 3.0,, +309664,1,309667,,2/9/2016 11:59,,1,99,"

I was reading the UML specification (http://www.omg.org/spec/UML/2.5/PDF) and I found here an there some ""code"" of this type:

+ +
body: if alias->notEmpty() then
+ alias
+else
+ importedElement.name
+endif
+
+ +

for example at page 87. Where I can find the specification for this language?

+",214707,,31260,,42409.51458,42409.51528,"UML specification, understanding code",,1,0,0,42416.55625,,CC BY-SA 3.0,, +309680,1,,,2/9/2016 15:07,,11,3480,"

I have a web page in a wizard format. The submission button to the API will be on the 4th step of the wizard. However, I want the entered data to be stored in the database before moving to the next step of the wizard. I also want the REST API to work for pages having a single tab.

+ +

So I designed the API to take a query parameter action = draft or submit. If action is draft, only certain fields are mandatory. If action is submit, all fields are mandatory. Validations in the service layer of the REST API will be done based on the query parameter. It looks like I'll have to explicitly specify the if/else behaviour in the documentation. Is this an acceptable form of RESTful design? What would be the best design with these requirements?

+",171835,,,,,42426.00903,REST API design for web pages with wizards,,3,3,5,,,CC BY-SA 3.0,, +309685,1,309687,,2/9/2016 15:55,,8,16421,"

I generally like to organize classes I make into modules by using namespaces, and I also don't go more than 2 namespaces deep but it's still painstakingly hard to fully qualify everything.

+ +

I've thought of using using directives but I don't want some headers polluting other headers. For example:

+ +
+

MyHeader1.hpp

+
+ +
namespace MyLibrary {
+    namespace MyModule1 {
+        class MyClass1 {
+            // stuff
+        };
+    } // namespace MyModule1
+} // namespace MyLibrary
+
+ +
+

MyHeader2.hpp

+
+ +
namespace MyLibrary {
+    namespace MyModule2 {
+        // I can import stuff
+        // using namespace MyLibrary::MyModule1;
+        // using MyLibrary::MyModule1::MyClass1;
+
+        class MyClass2 {
+        public:
+            void DoSomething(MyLibrary::MyModule1::MyClass1 parameter); // I could do this
+            void DoSomething(MyClass1 parameter); // or this (easier)
+        };
+    } // namespace MyModule2
+} // namespace MyLibrary
+
+ +
+

MyHeader3.hpp

+
+ +
#include <MyModule2/MyHeader2.hpp>
+
+namespace MyLibrary {
+    namespace MyModule2 {
+        // I understand that MyClass3 may use MyClass1 from here (the using directive would be beneficial), but what if it doesn't; it's left hanging in here import-ed
+        // I can see MyLibrary::MyModule1::MyClass1 from here!
+
+        class MyClass3 {
+            MyClass2 some_var;
+        };
+    } // namespace MyModule2
+} // namespace MyLibrary
+
+ +

The ""problem"" here is that I can see MyClass1 in MyHeader3.hpp inside the MyModule2 namespace if I import it inside MyHeader2.hpp. I can see that this would not be a problem if using directives were allowed at class scope.

+ +

The question is: is there a better way of doing this? Should I just man up and fully qualify everything, or should I avoid namespaces altogether?

+",214734,,,,,42409.68125,Importing namespaces inside another namespace,,1,2,,,,CC BY-SA 3.0,, +309690,1,,,2/9/2016 16:41,,1,149,"

I see that ring buffers are very useful because of their speed, and if you have a known maximum buffer length they make a lot of sense. Consider, for example, the scenario where you have streaming data, but the playing stream is paused and caching continues in the background.

+ +

What do you do with a ring buffer when the data that needs to be cached exceeds the size of the ring buffer? Or is there a better buffering method for this?
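To make the question concrete, here is a minimal Python sketch of one option I can think of: a ring buffer that doubles its capacity instead of overwriting unread data (a real streaming cache would also need thread safety and an upper bound):

```python
class GrowableRingBuffer:
    """Ring buffer that doubles its capacity instead of overwriting
    unread data when the producer outruns the consumer."""

    def __init__(self, capacity=4):
        self._buf = [None] * capacity
        self._head = 0   # next slot to read
        self._size = 0   # number of unread items

    def push(self, item):
        if self._size == len(self._buf):
            self._grow()
        tail = (self._head + self._size) % len(self._buf)
        self._buf[tail] = item
        self._size += 1

    def pop(self):
        if self._size == 0:
            raise IndexError("buffer is empty")
        item = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return item

    def _grow(self):
        # Copy the unread items into a buffer twice as large, starting at 0.
        old, n = self._buf, len(self._buf)
        self._buf = [old[(self._head + i) % n] for i in range(self._size)]
        self._buf += [None] * n
        self._head = 0
```

The fast fixed-size path is untouched until the cache actually overflows, at which point a single O(n) copy happens.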

+",179519,,,,,42409.72778,Ring buffers for data with unknown maximum size,,1,0,,,,CC BY-SA 3.0,, +309692,1,313303,,2/9/2016 16:54,,1,59,"

I'm trying to categorise products based on various fields of data. I've had some success just matching search terms in the product names, but this naive approach doesn't work when it comes to larger bodies of text such as descriptions since a description tends to contain a lot of additional info that isn't relevant to the category.

+

My thoughts to solve this problem were to extract the entities and predicates from the text, then use a process of elimination to work out which ones have to be the subject. If there's a better approach to this though, please let me know.

+

So as an example, take the following product description:

+
+

A classic sweatshirt with the dolman sleeves providing a modern twist, +it is super-versatile and perfect for every day. Wear with our harem +trousers or with jeans or a fitted skirt to balance the relaxed shape. +Wide neck with ribbed dolman sleeves, rib neck and hem; and V insert +at front.

+
+

I won't go through the whole thing, but here are some examples of what I would expect to extract from it:

+

E1. a classic sweatshirt

+

P1. ... has dolman sleeves

+

P2. ... is super versatile

+

P3. ... wear with ... (is an instruction a predicate?)

+

E2. harem trousers

+

...etc

+

So using the above I'd guess you can work out the main entity the paragraph is focusing on is "a classic sweatshirt" since the rest of the sentences start with predicates, and some weighting could be applied to it since it's in the first sentence. After that I could go back to my original approach of matching the extracted text against an index of terms and synonyms.
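To illustrate the position-weighting idea, here is a rough Python sketch (the candidate noun phrases are assumed to have been extracted already by an upstream step; the weights are made up):

```python
def main_entity(text, candidates):
    """Pick the candidate term most likely to be the paragraph's subject:
    earlier mentions (especially in the first sentence) score higher.
    Candidate extraction itself (NP chunking) is assumed to happen upstream."""
    sentences = [s.strip().lower() for s in text.split(".") if s.strip()]
    scores = {}
    for cand in candidates:
        score = 0.0
        for idx, sentence in enumerate(sentences):
            if cand.lower() in sentence:
                # first-sentence boost, then a decaying positional weight
                score += 2.0 if idx == 0 else 1.0 / (idx + 1)
        scores[cand] = score
    return max(scores, key=scores.get)
```

The winner would then be matched against the index of category terms and synonyms as before.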

+

Is there a formal approach/algorithm that solves this problem? Or do you think the approach I've outlined is doomed to fail and I should try something else? ;)

+

Related

+

What technology/algorithm should be used to abstract meaning or keywords from a passage of text?

+",125565,,-1,,43998.41736,42449.62986,Extract the main entity from a body of text for text categorisation?,,1,0,,,,CC BY-SA 3.0,, +309697,1,,,2/9/2016 17:37,,0,863,"

I am planning an app which allows users to complete surveys/forms. I'm having trouble planning the data schema for the forms.

+ +

I really want to have a data structure which can be mostly automatically serialized and deserialized into classes.

+ +

I'm not sure where/how I should store answers either. I was thinking they could be some sort of dictionary or embedded in the question classes.

+ +

Functional Requirements

+ +
    +
  • Questions can be added/edited/moved/removed (order is important)
  • +
  • Existing answers are preserved when questions are added or edited
  • +
  • Questions can be grouped (and even sub-grouped)
  • +
  • Groups can be repeat groups that can be answered a variable number of times
  • +
  • Responses can be PUT to the server as an incomplete update
  • +
  • Responses can be changed/updated/deleted
  • +
+ +

So I imagine I need some sort of data structure that allows me to do the following

+ +
    +
  • address and get a specific element by some unique identifier
  • +
  • preserve order
  • +
  • insert a new element at a particular location
  • +
  • maintain a unique identifier so that answers recorded for previous versions of a form can be copied into an updated version
  • +
  • uniquely identify multiple sets of answers for a single repeat group
  • +
+ +
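To make the requirements above concrete, here is a minimal Python sketch of the mechanics I have in mind: uuid-keyed elements in an ordered list, with answers keyed by element id plus a repeat-group index (all names are illustrative only):

```python
import uuid

class FormElement:
    def __init__(self, name, label):
        self.id = str(uuid.uuid4())   # stable identity across form versions
        self.name = name
        self.label = label

class Form:
    """Ordered element container addressable by a stable unique id."""

    def __init__(self):
        self.elements = []            # order is significant

    def find(self, element_id):
        return next(e for e in self.elements if e.id == element_id)

    def insert_at(self, index, element):
        self.elements.insert(index, element)

# Answers live outside the question tree, keyed by element id plus a
# repeat-group instance index, so editing the form preserves old answers.
answers = {}

def record(element_id, value, repeat_index=0):
    answers[(element_id, repeat_index)] = value
```

Because answers reference the element's uuid rather than its position, reordering or inserting questions never orphans existing responses.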

Example

+ +

Here is some c# classes I've made to play with these concepts but they don't fulfill all the requirements

+ +
public class Form
+{
+    public Guid Id { get; set; }
+    public string Title { get; set; }
+    public List<FormElement> Elements { get; set; }
+}
+
+public abstract class FormElement
+{
+    public string Name { get; set; }
+    public string Label { get; set; }
+}
+
+public class QuestionElement : FormElement
+{
+    public string Type { get; set; }
+    public bool ReadOnly { get; set; } = false;
+}
+
+public class GroupElement : FormElement
+{
+    public List<FormElement> Elements { get; set; }
+}
+
+public class RepeatGroupElement : GroupElement
+{
+
+}
+
+ +

Environment

+ +
    +
  • Asp.Net 5 Server (MVC 6) and Entity Framework 7
  • +
  • Java client app + +
      +
    • I usually use Gson for serializing.
    • +
    • I'm also open to using any of the Guava collection classes but am not sure what the analogs in C# might be.
    • +
  • +
+",151769,,151769,,42409.73819,42469.80903,Data Structure for Form or Survey,,1,0,0,42476.8625,,CC BY-SA 3.0,, +309710,1,309713,,2/9/2016 20:55,,-3,1468,"

Why is it more difficult to perform a heap-based buffer overflow than a stack-based one (on the x86 architecture)?

+ +

I thought it might be due to the fact that heap memory is allocated dynamically. But is there more to it than that?

+",214785,,214785,,42409.88056,42409.91597,stack based vs heap based overflow,,1,11,,42414.82014,,CC BY-SA 3.0,, +309717,1,309732,,2/10/2016 0:36,,2,340,"

I have a few scripts (two in Python, one in Java) that use Selenium to drive a browser and download files from a website.

+ +

I need to do some major refactoring before I do a major expansion, so I want to start by writing tests, but I don't know how to go about doing it. All my search results lead to articles about using Selenium to test websites, and I want to test the Selenium script itself.

+ +

In particular, I don't want running my tests to create a significant load on the target website.

+ +

The offline parts of it I can test pretty easy, but I need to know how to test the actual browser driving itself.

+ +

Any ideas?
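One approach I'm considering, sketched here in Python: hide the driver behind a tiny interface so the script's logic can be tested against a fake, with zero load on the target site (all method names below are made up, not Selenium's API):

```python
class DownloadBot:
    """The script's logic, written against a minimal driver interface
    (get / find_links) rather than against Selenium directly."""

    def __init__(self, driver):
        self.driver = driver

    def collect_pdf_links(self, url):
        self.driver.get(url)
        return [h for h in self.driver.find_links() if h.endswith(".pdf")]

class FakeDriver:
    """Test double: serves canned link lists and hits no real website."""

    def __init__(self, pages):
        self.pages = pages
        self.visited = []

    def get(self, url):
        self.visited.append(url)
        self.current = self.pages[url]

    def find_links(self):
        return self.current

# In production a thin adapter would map get/find_links onto a real
# selenium webdriver instance; only that adapter needs a live browser.
```

The thin adapter is the only part that still needs an occasional end-to-end run against the real site.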

+",186833,,,,,42410.61111,How to write tests for browser automation script,,1,0,,,,CC BY-SA 3.0,, +309718,1,309722,,2/10/2016 1:20,,3,76,"

I am attempting to determine the best way to handle actions that must occur in passes. Many of these actions use objects that were created in a previous pass. The solution I have come up with is to implement a computation builder.

+ +
let actions =
+    passHandler {
+        let myObject = pass1Action()
+
+        do! waitForPass2
+
+        pass2Action myObject
+    }
+
+ +

In this code waitForPass2 would cause the continuation to be placed into a priority queue which will cause it to be executed in the second pass. However, this solution seems a bit like a hack to me. I would use a more functional approach, but the library I am using is not functional and would likely be too much effort to wrap. Is there a better solution to this?

+ +

Note: I attempted to keep the question generic, but here are some specifics in case it helps. This is for the emit phase of a compiler written using Mono.Cecil. Different phases are needed to separate the creation and usage of TypeDefinition and MethodDefinition objects.

+",164436,,164436,,42410.05903,42410.18403,Handling continuations within a priority queue,,1,0,,,,CC BY-SA 3.0,, +309720,1,,,2/10/2016 2:45,,4,1958,"

I am working with a Laravel project and I am looking for a way to solve the issue of bloated models and cross-referencing between them.

+ +

I had started extracting higher level methods to a repository but this doesn't solve the issue of one method needing to know about another method.

+ +

For example, a task lookup method needs to look up a slug in another table first. I don't believe I should be placing this code into either the model or the repository, but I'd like a single method to achieve this lookup.

+ +
/Models/Slug
+/Models/Task
+/Repositories/SlugRepository
+/Repositories/TaskRepository
+
+ +

I have now started experimenting with adding a gateway/service layer with a higher level methods which can access both of the underlying repositories to complete the task.

+ +

The task service would depend on the two repositories above.

+ +
/Service/Task
+findBySlug()
+
+ +

I think this will work but I am not sure if I should now let the controllers still access the repository directly or force everything through the service/gateway layer.

+ +

Or perhaps do away with the repositories entirely and let the services access the models directly, (Laravel abstracts db access anyway).

+ +

And on top of it all I want to keep this as simple as possible!

+ +

Can anyone confirm this method as a good choice or not or perhaps suggest an alternative?
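To make the idea concrete, here is a rough language-agnostic sketch (written in Python, but the shape is the same in PHP) of the service composing the two repositories; all names are illustrative:

```python
class SlugRepository:
    def __init__(self, slugs):            # slug -> task id
        self._slugs = slugs

    def id_for(self, slug):
        return self._slugs[slug]

class TaskRepository:
    def __init__(self, tasks):            # task id -> task record
        self._tasks = tasks

    def find(self, task_id):
        return self._tasks[task_id]

class TaskService:
    """Higher-level operation composing the two repositories, so that
    neither repository needs to know about the other."""

    def __init__(self, slug_repo, task_repo):
        self.slug_repo = slug_repo
        self.task_repo = task_repo

    def find_by_slug(self, slug):
        return self.task_repo.find(self.slug_repo.id_for(slug))
```

The controller would depend only on TaskService; whether the repositories stay as a layer or collapse into the models is the part I'm unsure about.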

+",214777,,,,,43347.79444,"Repositories, Gateways, Models and Architecture Questions",,2,1,1,,,CC BY-SA 3.0,, +309721,1,,,2/10/2016 2:48,,0,89,"

An app I am working on (iOS) is being distributed to different businesses and each business needs a few customizations to said app before it is distributed to users on the app store.

+ +

Most files will stay the same but quite a few will change based on each business's needs. Should the app be built in a way that booleans trigger these changes? Or should we separate the app into separate branches, one branch per business? The problem with the second solution is that these branches will never be merged back into master. Also, how would I make changes to the overall app that should affect every business's app and then merge these back into each business's app? BTW, these will all start with the same code.

+",214806,,,,,42410.40417,Managing a project on github in different repos or one?,,1,4,,42410.84236,,CC BY-SA 3.0,, +309724,1,,,2/10/2016 5:59,,0,180,"

I want to have (or make) a program that runs on a Raspberry Pi computer cluster, with one Pi executing only video content while another handles only music, etc., under a main program like an AI. Am I headed in the right direction? Isn't this parallel computing?

+",172623,,,,,42440.29444,Raspberry pi computer cluster question?,,1,2,0,,,CC BY-SA 3.0,, +309726,1,309731,,2/10/2016 6:36,,4,552,"

When it comes to OOP where an external database is being read from and written to, is it necessary to have attributes/properties within the objects to store the data? Or is it enough to simply read from the database, display it immediately, and later on directly insert any posted data back into it?

+ +

If it is necessary to have fields within the objects to store data, how/when would you read from or write to the database and update these fields?

+ +

I understand the concept of OOP and how an object should represent an entity (a thing, such as a user), but when it comes to web development, databases, and OOP I'm not sure I entirely understand how it's meant to work together.

+ +

In the past, the way I've managed this is by having objects with CRUD methods, accepting data from the view, and passing data to the controller to be interpreted for display.

+ +

I've looked at posts such as the following, but it doesn't seem to clear anything up for me: +Do objects in OOP have to represent an entity?

+ +

I've also tried looking for 'OOP when working with a database' tutorials but only get tutorials showing how to create a database object and nothing that helps me understand better.

+ +

Edit:

+ +

I have also seen ORM mentioned a lot as a solution of mapping a database to an object, but I still have trouble understanding exactly why you need to store the data in an object as opposed to simply displaying or storing it immediately.
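To illustrate the pattern I keep running into, here is a minimal Python sketch of an entity holding the mapped values plus a repository doing the mapping (a plain dict stands in for the database; all names are made up):

```python
class User:
    """Entity whose attributes mirror a users-table row; holding the
    values lets domain behavior run without extra database round trips."""

    def __init__(self, user_id, name, login_count):
        self.user_id = user_id
        self.name = name
        self.login_count = login_count

    def register_login(self):              # behavior operates on held state
        self.login_count += 1

class UserRepository:
    """Maps rows to objects on read and objects back to rows on write."""

    def __init__(self, rows):
        self._rows = rows                  # stands in for the real table

    def load(self, user_id):
        row = self._rows[user_id]
        return User(user_id, row["name"], row["login_count"])

    def save(self, user):
        self._rows[user.user_id] = {"name": user.name,
                                    "login_count": user.login_count}
```

The point of the attributes, as I understand it, is the middle step: behavior like register_login() runs on the in-memory state between load and save.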

+",148412,,-1,,42837.31319,42412.32292,"When mapping from a database, do OOP objects (entities) need to store the database values in attributes?",,1,4,1,,,CC BY-SA 3.0,, +309734,1,,,2/10/2016 8:15,,1,107,"

Earlier, when images were to be added to staging/production, they were committed to the git repository and our deploy script used to take care of uploading the images to CDN (S3 in our case).

+ +

But as we improved our design, the number and size of images began increasing. It was then decided that we will not be uploading retina images to the repo and they will be uploaded directly to S3.

+ +

This is a good enough workflow on paper, but it has its own fair share of issues. Especially when certain fixes/changes need to be deployed urgently and the images are forgotten.

+ +

How do web services handle image uploads for their websites?

+ +

The reason we are not uploading to the repo is because it causes it to bloat a lot. Is that an issue we should overlook for ease of workflow?

+",127863,,,,,42935.43264,How to manage image uploads?,,1,0,,,,CC BY-SA 3.0,, +309744,1,309745,,2/10/2016 10:26,,1,140,"

Often, to simplify testing, I add UI controls that are visible and enabled in debug builds only, or I prepopulate mandatory input fields in debug builds. Is this a bad practice? Assume the release build is tested too.

+ +

The main reason is to simplify fixing bugs - when the problem is in the depths of your program, it's quite a pain to go through dozens of steps repetitively.

+ +

The way I did it so far is add ""debug"" button on the main form/page, leading to a form/page full of shortcuts to various places in the program.

+ +

This seems like a borderline code smell to me, so I was wondering if there are better ways.
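For reference, here is a minimal sketch (Python, but the idea is the same in any language - in a compiled one it would be an #if DEBUG block) of what I mean by gating the shortcuts behind a single flag:

```python
import os

def debug_tools_enabled(env=os.environ):
    # Release builds simply never set the variable, so the shortcuts
    # cannot be reached outside of development.
    return env.get("APP_DEBUG_TOOLS") == "1"

def build_menu(env=os.environ):
    menu = ["Home", "Reports", "Settings"]
    if debug_tools_enabled(env):
        menu.append("Debug shortcuts")    # only visible to developers
    return menu
```

Everything debug-only hangs off that one flag, so nothing leaks into a release build by accident.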

+",214853,,214853,,42410.44792,42410.47847,Are debug-build-only UI controls a bad practice?,,3,0,1,,,CC BY-SA 3.0,, +309747,1,338629,,2/10/2016 10:40,,0,111,"

In order to run complex simulations, I need to do preprocessing of data from various data sources. This is done by a bunch of PostgreSQL scripts. However, having only these is unsatisfactory, because they are not very intelligible, maintainable, etc. I want to express the data preprocessing using BPMN/BPEL, which would allow me to graphically visualise

+ +
    +
  • the different data sources and sinks
  • +
  • the flow of data between them
  • +
  • filtering of data according to some conditions
  • +
  • merging of data
  • +
  • transformations on the data (which are usually simple computations)
  • +
+ +

Is there a possibility of attaching Postgresql snippets to my BPMN or BPEL model, such that I just can run the BPMN/BPEL in order to execute the data processing?

+",214856,,209774,,43564.85625,43564.85625,Automated data processing in BPMN/BPEL?,,1,2,,,,CC BY-SA 3.0,, +309751,1,309754,,2/10/2016 11:05,,3,201,"

I'm currently developing the website of my company, which is located in France.

+ +

I always pay attention to giving explicit, generic English names to my variables, for obvious reasons and, of course, in order to be able to ship the same code to other countries one day.

+ +

For instance I called the French ""Régions"" and ""Départements"" (which are subdivisions of the country) ""Area"" and ""Subarea"". I called a company's ""SIRET"" number ""company_identifier"".

+ +

However, sometimes there are things very specific to French legislation. For instance, I need to store the fact that someone benefits from something called ""CPF"" (Compte Personnel de Formation) in France, which probably has no equivalent in other countries.

+ +

I wonder how I should store this variable, how I should name it. And how to handle this when deploying the website in other countries.

+ +

My question is very general (hopefully not too much), the ""CPF"" example is just... an example.

+ +

Another good example is when you want to store the civility of a person. In France, we have ""Monsieur"", ""Madame"" and we used to have ""Mademoiselle"". AFAIK, in UK, there is ""Mr"", ""Mrs"", Ms"" and ""Miss"". And other countries have different ways to greet a person according to his gender and social status. Currently, I just store the gender and link it to ugettext(""Monsieur"") or ugettext(""Madame"") when I want to greet a person according to his gender. But I have no idea how I will handle this when I will have to ship my website in another country where this behavior is not accurate (and might offend).
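To make that last example concrete, here is a minimal Python sketch of what I mean by storing only the neutral fact (gender) and rendering the country-specific civility at display time (the title tables below are illustrative and deliberately incomplete):

```python
# Hypothetical per-locale title tables; each locale decides how gender
# (and, where relevant, marital status) maps to a salutation.
TITLES = {
    "fr": {"m": "Monsieur", "f": "Madame"},
    "en": {"m": "Mr",       "f": "Ms"},
    "de": {"m": "Herr",     "f": "Frau"},
}

def salutation(locale, gender):
    """Look up the display-time civility; the database never stores
    the title itself, only the neutral gender value."""
    return TITLES[locale][gender]
```

A locale that distinguishes more than gender (marital status, academic titles...) would swap in a richer per-locale rule, which is exactly the part I'm unsure how to structure.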

+",214859,,214859,,42410.47292,42411.25486,How do you handle country specific behaviors?,,2,2,,,,CC BY-SA 3.0,, +309755,1,,,2/10/2016 11:40,,2,904,"

So I have a Signal class and a client class which manages a list of Signal objects. The Signal class provides a bunch of interfaces, one of which is an update() function.

+ +

The update() code is completely different for each signal. Thus, one would assume that's a job for derivation.

+ +

However, the client manages tens, maybe even a hundred different Signal objects. There is no way I want that many child classes.

+ +

I'm thinking of introducing a member updateFunction, which is a function pointer type. Now, the client could set this function (maybe via the constructor) during the Signal's creation process. Then the client can call update() for each Signal object, and update() would call the function pointer internally.

+ +

What would you suggest? Is there a pattern for this?
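To illustrate the idea, here is a minimal sketch in Python, where a plain callable plays the role the function pointer (or an std::function member) would play in C++:

```python
class Signal:
    """Each instance carries its own update behavior as a callable,
    avoiding one subclass per signal (std::function<void(Signal&)>
    would play the same role in C++)."""

    def __init__(self, name, update_fn):
        self.name = name
        self.value = 0
        self._update_fn = update_fn

    def update(self):
        self._update_fn(self)

def increment(sig):              # one of many per-signal behaviors
    sig.value += 1

def double_or_seed(sig):         # another behavior, injected the same way
    sig.value = sig.value * 2 or 1

signals = [Signal("counter", increment), Signal("doubler", double_or_seed)]
```

This is essentially the Strategy pattern with the strategy reduced to a single function, which seems to match the one-method-varies situation.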

+",207591,,7422,,42410.53264,42410.62847,Function pointers vs. Derived classes,,1,2,,,,CC BY-SA 3.0,, +309766,1,309782,,2/10/2016 14:29,,3,351,"

A little background first. I have been tasked with rewriting our gun software into VB.Net (converting the code from BBJ). This software incorporates many different inventory management functions, such as performing stock moves, stock inquiries, changing stock statuses, picking goods to put into cartons, and a multitude of other operations. I'm also relatively new to designing something of this scale from the ground up (more or less).

+ +

I have already developed a menu system, using a TreeView that is filled in dynamically based on the user's profile setup in our system (so that they may only access options available to their position).

+ +

My question comes into play when actually designing the software, and keeping the code for all the different menu items separate. Is it considered ""good practice"" to create a separate project for each menu item in a scenario like this? Most of the functionality is separate from each other (IE: the code for picking would not need to access the code for stock moves, and likewise the stock moves code would not need to access the directed putaway logic).

+ +

I'm trying to keep the solution as clean as possible, so that we don't run into things like one menu option having functions or resources named similarly to another, etc. And also be able to ""plug in"" new functionality as needed. The idea is that myself and one other programmer will mainly be working on this code, and I figured if we separate the code into projects, we can just include a project as a new function is completed, adding resources as necessary. Any recommendations?

+",199148,,31260,,42410.67569,42410.70208,“Best Practices” when designing large applications with multiple functions,,1,2,1,,,CC BY-SA 3.0,, +309767,1,309770,,2/10/2016 14:36,,98,18475,"

Why did old BASICs (and maybe other languages) use line numbers as part of the source code?

+ +

I mean, what problems did it (try to) solve?

+",12450,,169490,,42410.71528,42607.08958,Why did BASIC use line numbers?,,9,11,13,,,CC BY-SA 3.0,, +309768,1,,,2/10/2016 14:46,,4,837,"

I am using Swift and am wondering about the concepts of Extension and Delegation. Recently I found a case where either one of these two concepts could be applied. This reminded me of the ""composition over inheritance"" principle, which teaches that a ""has-a"" relationship may be preferred to an ""is-a"" relationship.

+ +

I encountered a case where it is always an object of some specific type that depends on an object conforming to a certain protocol, therefore allowing both Extension and Delegation to be used.

+ +

For instance, I have a UIViewController that depends on a subclass of UIView. Say we take the following protocol:

+ +
protocol ViewInteraction {
+  func interact()
+}
+
+ +

This protocol will be conformed to by the UIView subclass, say InteractedView. And any UIViewController may need to interact with this InteractedView.

+ +

In these types of situations, what's a good default - the Delegation pattern, or the Extension pattern? That is, should I use the Delegation pattern - letting the InteractedView conform to it, and letting the UIViewController subclass manipulate a ViewInteraction property - or should I follow the Extension pattern and extend UIViewController to conform to the ViewInteraction protocol (and call the InteractedView)?

+",164077,,4091,,42410.70694,42430.53056,Extension versus Delegation,,1,2,1,,,CC BY-SA 3.0,, +309773,1,309775,,2/10/2016 15:13,,4,133,"

I'm working on a product that requires a large number of interactions with different external data providers. These data providers use a number of standards (primarily XML-based) which are all governed and agreed upon. Some of these standards aren't that great and can be open to interpretation.

+ +

Now, once in a while, some data source decides to differ just a little bit from these standards, which means that my application shows an error when the user tries to connect to them - however, a lot of competing products seem to be less concerned about correctness and simply go into quirks mode to get something working. From my customers' perspective, my product sucks since it can't do X but the competing product can, and when I try to contact the data source to have them fix their issues, I either don't receive an answer, I get to talk with some non-technical person who has no clue what I'm talking about, or the time-to-deploy on the fix is simply really long.

+ +

The way I see it, I got 2 options:

+ +
    +
  1. Stand firm and risk losing customers, or
  2. +
  3. Do ad-hoc quirks and hacks for semi-compatible sources (which would then need to be supported)
  4. +
+ +

I've been looking for historical precedent for how others have handled these problems (for example, the Firefox vs. IE battle, where some sites simply didn't work in FF because the site contained errors or non-standard constructs), but without much luck finding anything of substance.

+ +

Regarding option #1 I think it's clear that if that is the route to go, I somehow have to make the user understand that the problem is on the side of the source, without getting too technical, which can get tricky.

+ +

Regarding option #2, I also have to consider the huge upkeep cost it's going to be when I have 50+ differently hacked and quirked sources, not to mention the overhead of the weird bug reports that are bound to come. Also, by embracing quirks I would myself be part of the problem of everyone not following the standards.

+ +

Does anyone have any experience with a situation like this? What are the major things to consider here? Have I overlooked something?

+ +

[Update] After reading the comments here I've decided to give some more context. I'm working for a small team (3 devs, ~10 people total) managing a product in a large coorporation (1k+ employees). Our product is a spatial suite (think google maps) with a lot of domain-specific workflows build in. My main issues are with WFS/WMS servers using the OGC standards (http://www.opengeospatial.org/standards/wfs and http://www.opengeospatial.org/standards/wms respectively). These standards are a maze of optional parameters, open-for-abuse, and open-for-interpretation definitions specifically regarding default values (well, that's my opinion anyway). Now, if you look past the standards, in the world of spatial data you also have a lot of other issues like wrong projections, self-intersecting geometry, and oh so much more :)

+ +

As I stated, we are a rather small team, but since our corporate name is quite large, our customers expect us to handle everything in quite a professional manner - well, some customers anyway.
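To illustrate option #2 in its least-bad form, here is a minimal Python sketch of keeping each hack isolated in a per-source quirks registry rather than scattered through the strict parser (the source id and the particular deviation below are invented examples):

```python
QUIRKS = {}

def quirk(source_id):
    """Register a per-source fixup so every hack is isolated, named,
    and listable, instead of being buried inside the strict parser."""
    def register(fn):
        QUIRKS[source_id] = fn
        return fn
    return register

@quirk("acme-wfs")
def fix_acme(doc):
    # Hypothetical deviation: this server omits the default CRS field.
    doc.setdefault("crs", "EPSG:4326")
    return doc

def parse(source_id, doc):
    doc = QUIRKS.get(source_id, lambda d: d)(doc)
    if "crs" not in doc:                      # strict validation stays strict
        raise ValueError("missing crs")
    return doc
```

At least this way the 50+ quirks would be enumerable and individually removable when a source eventually fixes itself, though the upkeep concern still stands.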

+",108545,,108545,,42410.69722,42410.73403,How do you deal with external sources that do not conform to service standards and protocols?,,2,4,,,,CC BY-SA 3.0,, +309788,1,309790,,2/10/2016 17:43,,0,115,"

I just hit a very basic problem in OOP and I cannot see any working solution except postponing the appropriate check until run time.

+ +

There is a pretty clear notion of an action ""do something with a value of the same type"", no matter what means a given language provides. Consider Equals in C#, which in its simplest form is ""is this object equal to that object"". Or Comparable<T>, also in C#. No matter where you start such a hierarchy, the following is valid:

+ +
Comparable<T> --> Animal --> Cat
+                       +---> Dog
+
+ +

And the problem:

+ +
Animal cat = new Cat();
+Animal dog = new Dog();
+cat.Compare(dog); // ERR
+
+ +

How would one design a language from scratch in such a way that the above line gives an error at compile time?

+ +

So far I have introduced a type Self, which gives a clear meaning to what you would like to do within a type:

+ +
type Animal ... compare(cmp Self) ... 
+type Cat ... compare(cmp Self) ...
+type Dog ... compare(cmp Self) ...
+
+ +

but it is not enough to avoid the problem of cross-passing objects.

+ +

Can it be done? Maybe it was already done? How?

+ +

I already considered such extreme measures as two kinds of inherited methods -- with the virtual mechanism and without -- and calling them appropriately, but this in turn excludes generic types (unless generic types are implemented as in C++ -- as templates). With only 2 levels of inheritance it could be done by forbidding use of the top-level type except in generic constraints -- of course that is too limiting.
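For comparison, the closest existing approximation I know of is F-bounded quantification (Java's Animal<T extends Animal<T>>). A Python sketch of the same shape, using the typing module; a static checker such as mypy rejects the cross-typed call, although the loophole through a common supertype -- exactly the problem above -- remains:

```python
from typing import Generic, TypeVar

T = TypeVar("T", bound="Comparable")

class Comparable(Generic[T]):
    """Each subclass names itself as the type parameter, so compare()
    only accepts arguments of that same concrete type."""

    def compare(self, other: T) -> int:
        raise NotImplementedError

class Cat(Comparable["Cat"]):
    def __init__(self, weight: int) -> None:
        self.weight = weight

    def compare(self, other: "Cat") -> int:
        return self.weight - other.weight

class Dog(Comparable["Dog"]):
    def __init__(self, weight: int) -> None:
        self.weight = weight

    def compare(self, other: "Dog") -> int:
        return self.weight - other.weight

felix, tom = Cat(4), Cat(5)
# mypy rejects felix.compare(Dog(3)) -- but only while the static type
# is Cat; through a common Animal supertype the hole reopens.
```

So F-bounds push the check to type-check time for concrete static types, but they do not solve the base-typed-variables case, which is why a first-class Self type is interesting.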

+",66354,,,,,42410.75764,Is it possible to detect misuse of passing self type argument in compile time?,,1,6,,,,CC BY-SA 3.0,, +309803,1,,,2/10/2016 21:45,,1,2737,"

This is not about whether or not getters/setters are wrong. I understand their impact on encapsulation, and that question has been raised here and on SO several times already. I also do not want to just ask for an example of how to change object state, as all the examples I've seen are so naive that they're only practical for simple objects. I do want to ask for an example in more specific terms so that I can make the rubber meet the road in a very practical way concerning this concept.

+ +

The Question

+ +

How can an object's state be changed when one of its data objects cannot be initialized upon construction of the parent object to begin with?

+ +

More Specifically

+ +

Even more specifically, I have this situation where I am defining a domain entity for an API that is representative of a data entity in a third-party system whose data is stored in files. (For this question I'm using domain/data entity to distinguish the difference between the two). This is a requirement to ensure loose coupling between layers and the fact that though the entities are representative of each other, they are not one-to-one representations, more representative in principle. The client code will be able to create these domain entities and commit an internal transaction designed to coordinate the write to the files. One of the domain entity's properties is a Handle object that contains the third-party data entity's handle ID so that any later changes to the data entity in the file can be easily and quickly coordinated. This handle ID can't be set to the domain entity until it captures it in the data access layer on the first write.

+ +

In a nutshell

+ +

So in a scenario where the object cannot obtain all of its state on construction, but gets the rest at a different point in the program (and maybe even in a totally different application layer), how should this look without breaking the principles of encapsulation and Tell-Don't-Ask?

+ +

Btw, I'm using C#, so any comparable example along these lines would help me better relate.

+",213195,,213195,,42411.52361,42411.52361,Change object state in different point in program,,0,13,,,,CC BY-SA 3.0,, +309804,1,,,2/10/2016 22:15,,0,642,"

I am developing an application with Spring Boot + REST + Angular. While working on the REST API, I am trying to design it as RESTfully as possible. I'm encountering certain pages, like a User Approval page, where the Angular UI must send multiple requests to the service just to display the page. I'm worried this would slow down the application or create complexity in the Angular code base. Should I just have an endpoint /orderapproval that does everything related to the order approval functionality, or a bunch of APIs that need to be called, like /order, /inventory, /user, etc.? It feels awful to consider /orderapproval a REST API, but is it acceptable as an API for applications that have both the back end and the UI running on the same web server?

+",171835,,5525,,42575.42222,42635.46875,Is pure RESTful required for Angular JS / Backend running on same web server?,,1,3,,,,CC BY-SA 3.0,, +309807,1,,,2/10/2016 23:44,,3,643,"

We have a web application which currently operates like this on a typical view/page:

+ +
    +
  • the front has to display 100+ ""previews"" (in the form of base64 images)
  • +
  • each of these previews is built on-demand by the backend when the front requests it
  • +
  • a pool of 8 standard ajax requests runs in a queue until all previews are loaded
  • +
+ +

Let me illustrate this with a picture:

+ +

+ +

Sometimes the user does something that modifies some of the previews and those (and only those) have to be requested again by the front. In this case it goes like this:

+ +

+ +

At this moment, waiting for 100+ previews to load initially can take a long time (between 30 and 50 seconds), which is very annoying.

+ +

Before coming to the actual question, let me state a few points:

+ +
    +
  • each preview is totally independent from the others
  • +
  • each preview can be built almost instantly by the back-end
  • +
  • each preview's base64 payload weighs virtually nothing (a few kb)
  • +
  • establishing an http connection actually takes most of the time
  • +
  • of course, no more than around 8 ajax requests can work concurrently
  • +
  • if the user leaves for another view while 8 requests are being handled, the browser will wait to have some room in its ajax queue before loading elements from the new view, which is very annoying (the new view can remain empty for many seconds before something happens)
  • +
  • we can't bundle all the previews into one big request because sometimes (and it's not predictable) a particular preview takes a lot of time (several seconds) to build, and we don't want to penalize the other previews which could be loaded much faster
  • +
+ +

So, my questions are:

+ +
    +
  • could using a socket dramatically improve connection time so that the app is much more responsive at initial loading and in case of modifications in previews?

  • +
  • could messages exchanged in this socket be asynchronous so that if we ask for 100+ previews at the same time, they will all be loaded very quickly? And if so, is there a maximum number of concurrent messages in the socket?

  • +
  • can all the concurrent messages the front is currently waiting for be instantly dropped if the user exits this view to visit another one? or does it even matter if there is virtually no concurrent message limit?

  • +
+ +

We're using EmberJS for the front with EmberData and Flask for the back.

+",116950,,116950,,42411.01042,42411.01042,Asynchronous socket.io for lots of concurrent messages,,1,0,,,,CC BY-SA 3.0,, +309809,1,,,2/11/2016 0:31,,0,74,"

I have been thinking about this a few days already, doing diagrams, started programming, reverted, started again, still can't decide which is the best approach.

+ +

The problem is about sharing groups of items with other users, where they can get/add/edit/remove, in real time.

+ +

My items specifically are ""list items"" which reference ""products"" which reference ""product categories"". All of these entities use uuids. So a list item has a uuid, a product has a uuid, a product category has a uuid. In the database, foreign keys reference these dependencies via the uuids.

+ +

So to share a list, which is a group of list items, with other users, I narrowed down the possible solutions to these 2:

+ +
    +
  1. Replicate everything for each user. This way each user has their own list items, products, and product categories. The list items get a n:n relationship with users, and I of course filter the list items for the user requesting them. Add/update/delete has to be done always for the copies of all the users, in a transaction. So for example user A has ""bread"" with uuid ""1"". Shares list with user B, now user B has also ""bread"" but with uuid ""2"". If I queried the list without filtering by user, it would contain 2 items: ""1""-""bread"" and ""2""-""bread"". +Problems here: 1. Redundancy/space in the database. 2. Need to rely heavily on semantic uniqueness to identify the items that belong together (a group of copies).

  2. +
  3. Make users reference the same list item and the same dependencies. This means that when a list is shared I have to check if the other users have a product or category with the same name (they must not end with multiple products with the same name after someone shares lists with them), replace it with the product or category of the sharing user, and make all possible list items and other types of items they have that could be referencing this product or category to reference now the product of the sharing user. When a user stops sharing the list with another user, then I have to go through all the elements from the removed user and create their own copies and again make also that everything pointing to them points now to the new copies. Bread example: user A has ""1""-""bread"", user B has ""2""-""bread"", A shares list with B, this makes B to remove ""2""-""bread"". All the items which B may have that point to ""2""-""bread"" now have to be updated to point to ""1""-""bread"". If A removed B from the list, ""bread"" is copied for B with a random new uuid and all of B's references to ""1""-""bread"" have to be updated to point to this new copy.

  4. +
+ +

Additionally to both points it comes, that I'm using an app with its own database as a client which needs to be synced with the server (it basically mirrors the structure of the server, same uuids etc, of course the app holds the state of only 1 user).

+ +

I'm omitting many little details, like question of what should happen when a user updates a product that is not in a list (which is possible) etc. but I think this is not the core of the issue and want to keep the question simple.

+ +

Right now I'm favouring 1. Option 2 has a cleaner structure in the database, and it makes sense semantically as it represents what is happening in reality - multiple users are sharing the items (and their dependencies).

+ +

But 2 makes me cringe a bit with all that replacing-and-updating-references, specially since I have to do this again in the client, and this is not done only when a list is shared or a user removed from it but also each time when a user adds a list item to the list (have to check - do the other users have product(etc) with same name? if yes, remove it, update references), and since the app is usable also offline, and when it comes online after a while it needs to do a ""big sync"" in which case maybe other users replaced some references in the server in the meantime and and and...

+ +

1 feels safer. I still need to write a lot of code and transactions to ensure that everything is updated consistently but I also have to do that with 2. If something unexpected happens I think it is less likely to mess up things than with 2.

+ +

Any ideas about this? It's the first time I'm writing this kind of logic and any input is welcome.
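A minimal sketch of Option 1 (Python, purely illustrative: the field names and the ""group_id"" that ties a group of copies together are my additions, made to show where the semantic-uniqueness bookkeeping lives):

```python
import uuid

def share_list(db, list_items, to_user):
    """Option 1: copy every shared item for the new user, linking the
    copies via a shared group_id so later edits can be propagated to
    all copies in one transaction."""
    for item in list_items:
        # tag the original with a group id the first time it is shared
        group = item.setdefault("group_id", str(uuid.uuid4()))
        db.append({
            "uuid": str(uuid.uuid4()),   # the copy gets its own uuid
            "name": item["name"],
            "owner": to_user,
            "group_id": group,           # ties all copies of "bread" together
        })

def items_for(db, user):
    """Every query is filtered by the requesting user."""
    return [i for i in db if i["owner"] == user]
```

The group id replaces matching-by-name, so an edit to ""bread"" can be applied to every copy in the group without guessing which rows belong together.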

+",75425,,75425,,42411.05278,42411.58333,Generic approach to sharing of items,,1,4,,,,CC BY-SA 3.0,, +309812,1,,,2/11/2016 2:10,,4,1117,"

I am new to NOSQL and Cassandra. I am not sure if I should store usernames and password in Cassandra. If I should, what is the best way to do that? I am getting lots of conflicting ideas from research. I want to set it up to accommodate a large number of users and be able to quickly access it. Think of it as for hosting thousands of profile pages and have it open to add as much as needed in the future.

+ +

I know Cassandra offers tunable eventual consistency. I also know passwords and usernames need to be very consistent.

+ +

The rest of the application/database doesn't need anywhere the level of consistency as the username and login system needs.

+ +

Should I do two different Cassandra databases? Can I set up all of it in one Cassandra database? Should I use a different type of database for login and then have Cassandra serve everything else.

+ +

I've been doing research for days, but nothing definitive. If someone with experience can guide me in the right direction for this and give me something like a diagram of how I should set this up, I would be really grateful.

+",214949,,78905,,42411.1625,42645.47847,How should I store usernames and passwords for user login using php/cassandra?,,1,2,1,,,CC BY-SA 3.0,, +309815,1,309818,,2/11/2016 3:39,,19,1044,"

Static typing in a programming language can be helpful for enforcing certain guarantees at compile time- but are types the only tool for this job? Are there other ways of specifying invariants?

+ +

For example, a language or environment could help enforce a guarantee regarding array length, or regarding the relationships between the inputs to a function. I just haven't heard of anything like this outside of a type system.

+ +

A related thing I was wondering about is if there are any non-declarative ways to do static analysis (types are declarative, for the most part).

+",201215,,-1,,42878.52778,42412.88542,Are there alternatives to types for static analysis?,,3,0,6,,,CC BY-SA 3.0,, +309816,1,,,2/11/2016 4:10,,0,90,"

I'm making an activity diagram and I'm using activities instead of actions because I'm not going into much detail in the steps being described. I want to separate the activities into partitions, but everything I've been reading about partitions says that in partitions I can place actions. It doesn't say anything about placing activities there. Look at this pic explaining it:

+ +

+ +

The image was taken from this page.

+",51641,,173647,,42904.00833,42904.00833,Can I use partitions of activity diagram to place activity instead of actions?,,1,1,,42415.12639,,CC BY-SA 3.0,, +309822,1,309913,,2/11/2016 7:05,,7,201,"

Most Github repositories I'm aware of use the following workflow for pull requests: one or several users with ""contributor"" privilege review the suggested changes and then one of them merges the pull request which most of the time requires just two clicks - ""merge"" and then ""confirm merge"". Here's an example in CoreFX repo - one of contributors merged the changes.

+ +

However there's OpenCV repository with pull requests like this where one of the contributors reviews the changes and then approves them by posting a ""thumbs up!"" comment which causes a dedicated user opencv-pushbot to merge the changes. It looks like no contributor ever merges the changes himself in that repository - they are all merged by the ""push bot"" user.

+ +

What's the use of that? It takes the same amount of labor to post a ""thumbs up"" comment as it takes to merge the pull request. It also makes it a bit harder to track who approved which merge because now all changes are merged by the same user.

+ +

What advantages does the ""push bot"" approach have?

+",587,,,,,42411.98403,"Why use a dedicated ""pushbot"" user which automatically merges approved pull requests?",,1,0,,,,CC BY-SA 3.0,, +309826,1,,,2/11/2016 8:03,,8,1094,"

This is the common sequence of two distributed components in our Java application:

+ +
1  A sends request to B
+2      B starts some job J in parallel thread
+3      B returns response to A
+4  A accepts response
+5      Job finishes after some time
+6      Job sends information to A
+7  A receives response from a Job and updates
+
+ +

This is the ideal scenario, assuming everything works. Of course, real life is full of failures. For example, one of the worst cases may be if #6 fails simply because of the network: the job has been executed correctly, but A does not know anything about it.

+ +

I am looking for a lightweight approach on how to manage errors in this system. Note that we have a lot of components, so clustering them all just because of error handling does not make sense. Next, I ditched the usage of any distributed memory/repo that would again be installed on each component for the same reason.

+ +

My thoughts are going in the direction of having one absolute state on B and never having a persisted state on A. This means the following:

+ +
    +
  • before #1 we mark on A that the work unit i.e. the change is about to start
  • +
  • only B may un-mark this state.
  • +
  • A may fetch info about the B any time, to update the state.
  • +
  • no new change on the same unit can be invoked on A.
  • +
+ +

What do you think? Is there any lightweight way to tame the errors in a system of this kind?
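A minimal in-memory sketch of the idea (all names such as ComponentA/ComponentB, submit and reconcile are hypothetical; real components would talk over the network): B owns the authoritative state, A keeps only a volatile pending mark, and a periodic reconcile repairs a lost callback (the failure at step #6):

```python
class ComponentB:
    """Owns the authoritative state of every work unit."""
    def __init__(self):
        self.state = {}                      # unit_id -> "running" | "done"

    def start_job(self, unit_id):
        self.state[unit_id] = "running"

    def finish_job(self, unit_id):
        self.state[unit_id] = "done"

    def query(self, unit_id):
        return self.state.get(unit_id, "unknown")


class ComponentA:
    """Never persists state; only a pending mark that B alone can resolve."""
    def __init__(self, b):
        self.b = b
        self.pending = set()

    def submit(self, unit_id):
        if unit_id in self.pending:
            raise RuntimeError("change already in flight: " + unit_id)
        self.pending.add(unit_id)            # mark BEFORE the remote call
        self.b.start_job(unit_id)

    def reconcile(self, unit_id):
        # Runs on B's callback OR on a periodic poll, so a lost
        # callback is eventually repaired from B's absolute state.
        if self.b.query(unit_id) == "done":
            self.pending.discard(unit_id)
```

Because reconcile only reads B's state, it is safe to call it as often as needed; the poll is what makes the scheme survive the ""job finished but A never heard"" failure.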

+",79048,,31260,,42411.35139,43208.75694,Error handling in distributed system,,2,1,4,,,CC BY-SA 3.0,, +309828,1,309849,,2/11/2016 8:35,,-2,277,"

Rewrite of the Question: Is there a technical reason why we are not using static methods instead of instance methods?

+ +

Technical reasons are for example: Performance or added type-safety.

+ +

I reason that with static methods and only pure data classes you will get better type safety and more maintainable code.

+ +
+ +

Why do we add methods to classes?

+ +

When I look at what the compiler does we can infer that an instance method on a class gets transformed into:

+ +
public static class Foo {
+    public static int Something(object this) { ... }
+}
+
+ +

This shows us that even ""under the hood"" we just pass the data as a parameter to our methods, or more pointedly, functions.

+ +

What is the reasoning behind having instance methods?

+ +

My research has found three key reasons for adding instance methods:

+ +
    +
  • Convention, we've been taught to do it that way
  • +
  • Ignorance, we do not know of another way
  • +
  • Convenience, easier to find functionality, basically a way to group functionality.
  • +
+ +

I find all three reasons shaky at best. I think we can all agree that the first two reasons are not really valid reasons, this leaves us with the third reason, Convenience, but when you look at the calling code of an instance method you see something strange happen:

+ +
var o = new MyClass(..some parameters...);
+o.Something(...some parameters...);
+
+ +

We have split our function signature over two lines. We have basically lost our chance of really type-checking changes made to our system. And comparing this to something like currying is not a valid comparison. Would using classes to express data and using functions (static classes with static methods) not be a better and even more convenient way to group data?

+ +

We could rewrite the previous calling code to:

+ +
StaticLib.Something(...some parameters...);
+
+ +

And just be done with it. Using the code would be easier to test and to implement and grouping the code is easy as well. Just use basic segmentation criteria on your collection of functions. You can even automate this grouping when you take types and signatures into account...
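The ""under the hood"" claim above can be made concrete in Python, where the receiver parameter is explicit (this is only an illustration of the equivalence, not an argument for either style):

```python
class Foo:
    def __init__(self, value):
        self.value = value

    def double(self):            # instance method: implicit receiver
        return self.value * 2


def double(foo):                 # "static" style: the receiver is just data
    return foo.value * 2
```

Both spellings compute the same thing, and calling the method through the class (`Foo.double(f)`) shows that an instance method really is a plain function whose first parameter is the data it operates on.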

+ +
+ +

EDIT: +As stated by @Daniel T. I seem to be coming down hard on convention. This is not what I want to convey. Convention is great when working in a big project or working with colleagues. Convention is also a really great way to bottle change, please read this next edit not as an attack on convention in projects but as an attack on convention as an argument for reasoning.

+ +
+ +

EDIT: +I am not talking about OOP vs FP, we should all agree that polyglot solutions are best. I am not interested in answers like:

+ +
    +
  • Most programmers expect their code to look like....
  • +
  • It is convention
  • +
  • I've been taught to do it like that...
  • +
+ +

Those are answers based on convention and dogma and will never result in better ways of thinking or coding.

+ +

I am interested in proofs and solid reasoning. The accepted answer in the so called duplicate reduces the question to two things:

+ +
    +
  • It does not feel OO enough
  • +
  • I don't like it
  • +
+ +

This is not a proof and is not an answer to the question. Furthermore I do not ask the same question so it's doubtful it's a duplicate.

+ +
+ +

EDIT: This is not a duplicate. I'm not interested in inheritance or other OOP aspects. Just about the quality and readability of the code.

+",214992,,-1,,42837.31319,42411.55486,Why do we add instance methods to classes?,,4,9,0,42411.52639,,CC BY-SA 3.0,, +309831,1,,,2/11/2016 9:13,,1,138,"

I read about JWT and how they can provide secure authentication for calling api routes over http. I naively implemented it and here is how it goes :

+ +
    +
  • a client posts username and password to a login route
  • +
  • the server checks if the credentials are ok, creates the token and sends it back
  • +
  • the client keeps the token and adds it to any protected api calls
  • +
+ +

Now say I can see all client http in/out calls. As the password is clear in the first call I would just have to see if the response looks like a legit JWT and I would be pretty sure that the credentials are correct. If the password is encrypted on the client any attacker can decrypt it (just look at the source on the client).

+ +

What am I missing here?
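For reference, here is roughly what the server-side token step looks like, sketched with only the standard library (the payload shape and secret are assumptions). Note that the signature only proves the server issued the token; it does nothing to hide the password in the first request, which is what TLS is for:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"   # hypothetical key, never sent to the client

def _b64(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_token(payload: dict) -> str:
    # body carries the claims; the HMAC binds them to the server's secret
    body = _b64(json.dumps(payload, sort_keys=True).encode())
    sig = _b64(hmac.new(SECRET, body, hashlib.sha256).digest())
    return (body + b"." + sig).decode()

def verify_token(token: str) -> bool:
    body, sig = token.encode().split(b".")
    expected = _b64(hmac.new(SECRET, body, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

An eavesdropper on plain HTTP sees both the password and the token, so the scheme above is only meaningful over HTTPS.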

+",57387,,,,,42412.42361,Secure authentication using JWT,,2,0,1,,,CC BY-SA 3.0,, +309834,1,,,2/11/2016 9:48,,3,2161,"

I have a very simple app that lets you log in and then displays a dashboard. The design process with React and Redux is very simple :

+ +
    +
  • I have two React components : Login and Dashboard
  • +
  • I store a boolean isAuthenticated in my state tree
  • +
+ +

Now some remarks :

+ +
    +
  • The Login component can only be seen if isAuthenticated is false
  • +
  • The Dashboard component can only be seen if isAuthenticated is true
  • +
  • A button in the Login component sets isAuthentication to true
  • +
  • A button in the Dashboard component sets isAuthentication to false
  • +
+ +

How should I implement this in React ? There is at least two possibilities :

+ +
    +
  • using routing : / redirects to /login if authenticated or /dashboard if not. Also /login redirects to /dashboard if authenticated and /dashboard redirects to /login if not authenticated
  • +
  • introduce an App component that renders either Login or Dashboard depending on isAuthenticated
  • +
+ +

Here is the implementation of the second proposal :

+ +
const Login = (props) => (
+  <div className=""login"">
+    <p>Login page</p>
+    <input type=""button"" value=""Log in"" onClick={props.toggleAuthentication}/>
+  </div>
+)
+const Dashboard = (props) => (
+  <div className=""dashboard"">
+    <p>Dashboard page</p>
+    <input type=""button"" value=""Log out"" onClick={props.toggleAuthentication}/>
+  </div>
+)
+class App extends React.Component {
+  constructor(props){
+    super(props)
+    this.state = {isAuthenticated: false}
+  }
+  toggleAuthentication(){
+    this.setState({isAuthenticated: !this.state.isAuthenticated})
+  }
+  render() {
+    if(!this.state.isAuthenticated){
+      return <Login toggleAuthentication={()=>this.toggleAuthentication()}/>
+    } else {
+    return <Dashboard toggleAuthentication={()=>this.toggleAuthentication()}/>
+    }
+  }
+}
+ReactDOM.render(
+  <App />,
+  document.getElementById('root')
+);
+
+",57387,,57387,,42411.46806,42905.57778,Simple React application design,,2,0,1,,,CC BY-SA 3.0,, +309838,1,,,2/11/2016 10:17,,3,92,"

I'm working on an application that's supposed to have a web based GUI where you can insert some data. That data will be saved in persistent storage and a hex number generated and stored alongside. This number also needs to be stored on a smart card accessed through a reader on the user's device, using user's credentials to determine which device.

+ +

The user's device will need to have a desktop app running in the background (in the tray). This app is supposed to:

+ +
    +
  1. Listen to events on the smart card reader
  2. +
  3. Respond to card being placed with a tray notification cloud, which when clicked on...
  4. +
  5. Displays a GUI with some data that can be modified
  6. +
  7. Transfer data or query data from a web server through a Web API
  8. +
  9. Listen for messages incoming from the server where the web app resides
  10. +
+ +

Let's say I know how to build the web app with the API, or at least know where to look for info.

+ +

How do I design the desktop part of the app? Do I need a service? MSDN says:

+ +
+

Windows Service applications run in a different window station than + the interactive station of the logged-on user. A window station is a + secure object that contains a Clipboard, a set of global atoms, and a + group of desktop objects. Because the station of the Windows service + is not an interactive station, dialog boxes raised from within a + Windows service application will not be seen and may cause your + program to stop responding. Similarly, error messages should be logged + in the Windows event log rather than raised in the user interface.

+
+ +

Does this mean that I can't have a GUI if I create a service?

+",214705,,31260,,42411.44306,42411.44306,Designing a web based smart card publisher,<.net>,0,1,,,,CC BY-SA 3.0,, +309839,1,,,2/8/2016 16:50,,7,1205,"

I have two classes (A and B) that are both complex to construct, with multiple properties that must be validated at construction time. I want to use the Builder pattern to construct these objects, but among the constraints on construction of these objects are:

+ +
    +
  • An instance of A must contain multiple instances of B
  • +
  • Each instance of B has a child relationship to a single instance of A
  • +
+ +

In addition, I need to be able to add new instances of B to A after A has been constructed.

+ +

It seems like this must be a pretty common scenario. Is there some pattern other than Builder or to augment Builder that handles this situation?

+ +

Option 1 - With a ""placeholder"" class

+ +

The best solution that I have come up with so far uses an impotent version of the Builder class, with the Build() method removed (I call it a Placeholder) to specify the parameters for the child class.

+ +
class ClassA {
+    get propertyC
+    ...
+    get list<A> children
+
+    restricted constructor() {}
+
+    BBuilder AddChild() {}
+}
+
+class ClassB {
+    get propertyD
+    ...
+    get ClassA parent
+
+    restricted constructor(ClassA parent)
+}
+
+class ABuilder {
+    get / set propertyC
+    ...
+    BPlaceholder AddChild() {}
+
+    ClassA Build() {
+        VALIDATE properties
+        BUILD ClassA
+        foreach child { BUILD child }
+    }
+}
+
+class BPlaceholder {
+    get / set propertyD
+    ...
+
+    restricted constructor(ABuilder parent) {}
+}
+
+class BBuilder : BPlaceholder {
+    restricted constructor(ClassA parent) {}
+
+    ClassB Build() {
+        VALIDATE properties
+        BUILD child
+    }
+}
+
+ +

Notice that, of the 5 classes in this example, the only one that can be simply constructed is ABuilder. The other classes are all instantiated, either directly or indirectly, through this class.

+ +

Option 2 - Similar to Abstract Factory

+ +

Another possibility I have considered is to remove the Build() method from both Builder classes (in which case they are no longer Builders) and pass both classes to a third Factory class to perform the construction, similar to Abstract Factory. This doesn't seem as nice to me for two reasons, though:

+ +
    +
  • The interface doesn't intuitively guide you to the proper use
  • +
  • Construction of B after A has already been instantiated is substantially different from construction of B before A has been instantiated
  • +
+ +

Rationale for avoidance of simple construction

+ +

One possibility is to just use simple construction, something like this:

+ +
objA = new ClassA()
+objA.Add(new ClassB(objA))
+
+ +

This is obviously a possibility, but it is missing a key element found in the Builder pattern. Builder allows you to ensure that there is no way to construct an ""incomplete"" instance of a class. My constraints state that a valid instance of ClassA must have at least one child of type ClassB. In the example above, objA exists (for a short time) without any children, and is hence invalid.
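Option 1 can be sketched like this (in Python, with names loosely matching the pseudocode above; the single-property constructors are simplifications): the builder collects child specifications and refuses to build until at least one exists, so no ClassA instance ever exists without children.

```python
class ClassB:
    def __init__(self, parent, prop_d):
        self.parent = parent
        self.prop_d = prop_d


class ClassA:
    def __init__(self, prop_c, child_specs):
        self.prop_c = prop_c
        # children are built inside the constructor, so a ClassA is
        # never observable in an "incomplete" (childless) state
        self.children = [ClassB(self, spec) for spec in child_specs]


class ABuilder:
    def __init__(self, prop_c=None):
        self.prop_c = prop_c
        self._child_specs = []

    def add_child(self, prop_d):
        # plays the role of the BPlaceholder: records intent only
        self._child_specs.append(prop_d)
        return self

    def build(self):
        if not self._child_specs:
            raise ValueError("a ClassA needs at least one ClassB child")
        return ClassA(self.prop_c, self._child_specs)
```

Adding a B to an already-built A would go through a separate `ClassA.add_child`-style method (the BBuilder role), but the invariant at construction time is enforced entirely by `build()`.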

+",,danBhentschel,,,,42892.00972,Is there a design pattern to handle construction of complex objects with a [1 → 1..*] multiplicity relationship?,,1,6,0,,,CC BY-SA 3.0,, +309856,1,309861,,2/11/2016 13:11,,1,108,"

I'm writing some code that I've decoupled off into a module of its own, and even though I'm most likely the only person who will use it, I'm trying to think as if I might not be. The functions in this module perform operations on objects in an array that's passed in, and right now, I'm doing a bunch of safety checks to avoid exceptions caused by getting bad data – specifically, accounting for the possibility of an undefined/null value for that array, or one of the objects in the array, or one of the properties on one of the objects. In other words, quite a lot of checking. However, in the application where this module is used, I've already checked for all of these things before sending the data to be processed by this module – because null values in this data will cause problems elsewhere, as well.

+ +

Now that everything is working smoothly, I'm tempted to remove some of the safety checks in my module for the sake of efficiency, since I'm duplicating work. However, if this module were to be used by someone else, I would think that it'd be better to make it as bulletproof as possible – so if bad data is passed in from their end, my module will be able to handle it without being responsible for any crashes. But then I thought that a hypothetical person using my module might be in the same position I'm in – confident the data they're passing in is good, because they've already checked it.

+ +

To me, the obvious solution is to have two versions of certain key functions in the module: a 'careful' version if you don't trust the data, and a 'dangerous' version if you do trust it, and just want to maximise performance. Is this something that's done? Assuming good documentation, would it be a good idea? And if it were a good idea, would it be better to differentiate between the two on a per-function basis, e.g. processSafely(data) and liveDangerously(data) – or via separate namespaces within the module – myModule.safe.process(data) and myModule.reckless.process(data)?

+ +

EDIT: The answers so far have been valuable, but I thought I should add (without veering too far into Stack Overflow territory) that the specific module I'm talking about is designed to accept a (potentially large) array of blog articles as objects (as they would come from a parser of a fairly standard format), extract tags out of said articles and count them, and return an array containing each individual tag with its count. In other words, as with all tasks involving parsing of files/objects supposedly conforming to a given format, there's sort of a lot that can go wrong, but I (or the user) will probably have to account for that elsewhere anyway (e.g. before rendering the articles to a view or whatever). Also, all of the checks I'm talking about are O(n) – not just one or two preliminary if statements or type coercions.
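One common shape for this (a Python sketch using the article/tag example; the `trusted` flag and function names are hypothetical) is a single public entry point with an opt-out for the O(n) validation, rather than two parallel functions:

```python
def _count_tags(articles):
    """Fast path: assumes every article is a dict with a list under 'tags'."""
    counts = {}
    for article in articles:
        for tag in article["tags"]:
            counts[tag] = counts.get(tag, 0) + 1
    return counts


def count_tags(articles, *, trusted=False):
    """Safe by default; pass trusted=True to skip the O(n) validation."""
    if not trusted:
        if articles is None:
            return {}
        # drop malformed entries instead of crashing mid-scan
        articles = [a for a in articles
                    if isinstance(a, dict) and isinstance(a.get("tags"), list)]
    return _count_tags(articles)
```

The caller who has already validated upstream writes `count_tags(data, trusted=True)`, which documents the assumption at the call site instead of hiding it in a second function name.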

+",153734,,153734,,42411.60972,42411.76667,"Including both ""careful"" and ""dangerous"" versions of a function in a library/module",,3,4,,,,CC BY-SA 3.0,, +309858,1,309881,,2/11/2016 13:21,,1,75,"

I often see myself trying to follow my own code to find out where the exception came from.

+ +

The typical example is when some parsing fails, and I catch the exception. Then I spend a ridiculous amount of time using breakpoints and step-in / step-out / step-over to see the exact parsing that failed. Because many methods and many different calls use that specific parsing, and I can't know for sure which one caused it.

+ +

Right now, if it's production code that is ""solid"" and there is one bug coming every once in a while, I really really struggle finding it just by reproducing and jumping every line one by one. And for what it's worth, even if it always works out, it's just so tedious.

+ +

Is there something I'm doing wrong? Maybe my architecture is wrong? I don't really have any code to show, but if anyone else ever felt the same or knows by experience this happens, I would be glad to learn something!

+",151303,,9113,,42411.58056,42411.68889,How to find the cause of an exception in async code,,1,6,,,,CC BY-SA 3.0,, +309864,1,,,2/11/2016 14:00,,0,117,"

I have been developing a GPL GUI application for many years by myself in my free time. I have been approached by a company who want to use my code as part of a proprietary product. This would require re-licensing the project under a more permissive license (e.g. LGPL).

+ +

In return they will help contribute to its development and have talked about a financial arrangement for the change of license.

+ +

How can I estimate what a reasonable and fair financial amount would be this kind of arrangement?

+ +

I'm keen to get some additional contributors, but I would lose power in the future to make any other financial arrangements and control over how the project is used.

+",215027,,,,,42411.62986,Relicensing GPL to more permissive license,,1,6,,42416.8375,,CC BY-SA 3.0,, +309867,1,309871,,2/11/2016 14:28,,5,2253,"

In, for example, the Bash scripting language, the following creates a string called $VAR which begins at the first "" quote and continues until the next unescaped "" quote.

+ +
VAR=""
+    hello
+world!
+
+this string preserves all
+    whitespace""
+
+ +

This makes it very easy to write multiline strings without concatenation or a million annoying \ns everywhere, and it makes the parser very easy to write (speaking from experience) because you can just gobble everything between unescaped quotes with a regex like ""([^""\\]*(?:\\.[^""\\]*)*)"" or so.

+ +

Bash is (hopefully!) not a mission-critical or systems-programming language, but it is a systems-scripting language intended for *nx boxes on which everything is text, so perhaps it's apt.

+ +

Recall that Bash is written in C, and so this string is (probably) stored as \n\thello\nworld\n etc, but the point is the source written by the programmer (and the above is far more readable).

+ +

Many (I daresay C-influenced) ""proper"" Languages Used For Real Purposes find some unknown problem with allowing strings to contain literal newlines, and thus require one or more of the following:

+ +
    +
  • escape sequences\n (which get compiled into \r\n on Windows)

  • +
  • special syntax ("""""" multiline string """""" in Py, `multiline string` in Go, or R"" raw string literal "" in C++11, etc)

  • +
  • special functions to write newlines (Forth's CR, for example, although Forth gets a pass because it knows squat about strings)

  • +
+ +

I do not understand why more languages don't allow strings to be ""implicitly"" multiline.

+ +

Pros:

+ +
    +
  • ease of use & practicality, clearer code, etc

  • +
  • simpler, more straightforward and thus more maintainable parser (at least, for hand-written ones)

  • +
+ +

Cons:

+ +
    +
  • may make some code less readable, if abused

  • +
  • ?

  • +
+ +

Is there an explicit reason this is the case, or has it just been blindly(?) adopted from C like so many other things? Moreover, if I'm writing a parser or designing a language, is there a compelling argument as to why I should restrict string literals to a single line?
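To illustrate how little the parser has to do, the exact ""gobble everything between unescaped quotes"" regex from above can be exercised directly (shown here in Python; the sample source is made up):

```python
import re

# The regex from the question, unchanged: capture everything between
# unescaped double quotes, allowing backslash escapes. Negated character
# classes match newlines, so multiline literals fall out for free.
STRING_RE = re.compile(r'"([^"\\]*(?:\\.[^"\\]*)*)"', re.S)

source = 'x = "hello\nworld" and y = "with \\" escape"'

literals = STRING_RE.findall(source)
# literals[0] spans a literal newline; literals[1] keeps the escaped quote
```

A single-line-only grammar, by contrast, needs an extra rule just to reject the newline inside the first literal.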

+",201219,,201219,,42411.75486,42412.72569,Why do so many languages restrict string literals to a single source line?,,4,16,1,42411.75208,,CC BY-SA 3.0,, +309869,1,310011,,2/11/2016 14:49,,1,183,"

I recently started using UtilSnips, a Vim plugin allowing for a certain level of automation while coding by using template-like code snippets for common code fragments (class and function definitions, for example).

+ +

When using a popular snippet pack with Python, I noticed that snippets used when writing a class use a pattern I've never seen before in __init__ when using more arguments than just self:

+ +
class MyClass(object):
+
+    """"""Docstring for MyClass. """"""
+
+    def __init__(self, user, password):
+        """"""TODO: to be defined.
+
+        :user: TODO
+        :password: TODO
+
+        """"""
+        self._user = user  # this is the pattern I'm unfamiliar with
+        self._password = password
+
+ +

Is it common (or good) practice to use underscore when assigning instance variables in a constructor for a class in Python, or is this likely just the snippet pack author's preference? I haven't seen this type of naming convention in any other Python code I've interacted with.
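For context, the leading underscore is the PEP 8 convention for ""internal use""; it most often appears together with a read-only property, as in this small sketch (the attribute names mirror the snippet above):

```python
class MyClass:
    def __init__(self, user, password):
        self._user = user          # underscore: internal, exposed via property
        self._password = password  # underscore: internal, no public accessor

    @property
    def user(self):
        """Read-only public view of the internal attribute."""
        return self._user
```

Applying the prefix to *every* constructor argument, as the snippet pack does, goes beyond what most Python code does; plain `self.user = user` is at least as common when no encapsulation is intended.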

+",71709,,,,,42413.09583,Is it a common pattern in Python to have instance variables assigned in a constructor start with an underscore?,,1,1,,,,CC BY-SA 3.0,, +309879,1,309880,,2/11/2016 15:26,,-3,412,"

I have bounding boxes the key type.

+ +
Box {
+  double mins[2];
+  double maxs[2];
+}
+
+ +

And I want to have Box as the key type in the D programming language, so I have to implement:

+ +
size_t toHash() const @safe pure nothrow {
+   size_t hash;
+   for(size_t k=0; k < 2; k++) {
+       // do something here
+   }
+   return hash;
+}
+
+ +

Should I have a linked list on the output of the associative array and search through the list if there's a collision? Should I find some reasonable bound of double that is in my application and then come up with some formula by shifting?

+ +

Not sure what to do here.
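One common approach, shown here as a Python sketch of what the D toHash could do (the boost-style combining constant is an assumption, not anything from D's runtime): reinterpret each double's 8 bytes as an integer and fold them into one 64-bit value. Collisions are then the hash table's problem, not yours, so no extra linked list is needed on your side.

```python
import struct

MASK = 0xFFFFFFFFFFFFFFFF          # emulate size_t / 64-bit wraparound

def hash_combine(seed, value):
    # boost::hash_combine-style mixing, masked to 64 bits
    return (seed ^ (value + 0x9E3779B97F4A7C15
                    + ((seed << 6) & MASK)
                    + (seed >> 2))) & MASK

def hash_box(mins, maxs):
    h = 0
    for d in (*mins, *maxs):
        # reinterpret the double's bit pattern as an unsigned integer
        bits = struct.unpack("<Q", struct.pack("<d", d))[0]
        h = hash_combine(h, bits)
    return h
```

Caveats worth noting: +0.0 and -0.0 have different bit patterns (normalize first if they should compare equal), and NaN keys are a bad idea in any hash table.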

+",212122,,9113,,42411.69931,42411.69931,How do you hash 4 doubles into a size_t?,,2,4,,,,CC BY-SA 3.0,, +309882,1,,,2/11/2016 16:33,,1,370,"

I'm currently working with some code that IMO has abused ruby's mixin features. Given that I'm new to ruby, I wonder if MO in IMO is correct or not.

+

My primary question is what makes more sense (and if there are any major technical differences between them):

+

E.g.

+

1

+

to create Helper singleton classes (or Modules with class methods) to keep all my "helper" functions.

+
module Helper
+  def foo(); p 'foo'; end  # instance method
+end
+
+class User
+  include Helper     # foo() is now a method of User
+
+  def initialize()
+    foo()            # included, invoked on User class object
+  end
+
+end
+
+

------- OR -------

+

2

+

create mixin modules that contain these functions and include them everywhere I need them.

+
module Helper
+  def self.foo(); p 'foo'; end  # class method
+end
+
+class User
+  
+  def initialize()
+    Helper.foo()            # Invoked on Helper module object
+  end
+
+end
+
+
+

An example of the way it's abused in my case:

+
module ComplexMathPerformer
+  def perform_complex_math(); p 'perform_complex_math()'; end
+end
+
+module Renderer
+  def render_3d_object()
+    perform_complex_math() # from ComplexMathPerformer, which is NOT included here.
+  end
+end
+
+module Game
+  include ComplexMathPerformer
+  include Renderer
+
+  def play()
+    render_3d_object()
+  end
+
+end
+
+

This code works because Game includes both Renderer and ComplexMathPerformer. But it is a nightmare for someone who's looking at Renderer and trying to figure out where on earth perform_complex_math() is defined and how it is accessible inside Renderer.

+

Logically it makes more sense for Renderer to include ComplexMathPerformer itself, but that's skipped because Game already includes ComplexMathPerformer (for its own purposes), making it a mess.

+",202805,,-1,,43998.41736,42471.82639,Ruby: Abusing mixin,,1,5,,,,CC BY-SA 3.0,, +309888,1,309928,,2/11/2016 17:34,,7,3551,"

This is sort of the complementary question to How to best protect from 0 passed to std::string parameters?. Basically, I'm trying to figure out whether there is a way to have the compiler warn me if a code path would unconditionally try to call std::string's char* constructor using NULL.

+ +

Run time checks are all well and good, but for a case like:

+ +
std::string get_first(const std::string& foo) {
+    if (foo.empty()) return 0; // Or NULL, or nullptr
+    return foo.substr(0, 1);
+}
+
+ +

it's annoying that, even though the code is guaranteed to fail if that code path is exercised, and the system header is usually annotated with the precondition saying that the pointer must not be null, this still passes compilation under gcc, clang, etc., even with -std=c++11 -Wall -Wextra -pedantic -Werror. I can block the specific case of 0 on gcc with -Werror=zero-as-null-pointer-constant, but that doesn't help with NULL/nullptr and it's sort of tackling the related but dissimilar problem. The major issue is that a programmer can make this mistake with 0, NULL or nullptr and not notice it if the code path isn't exercised.

+ +

Is it possible to force this check to be compile time, covering an entire code base, without nonsense like replacing std::string with a special subclass throughout the code?

+",215049,,-1,,42837.31389,42412.7875,Compile-time checking for NULL initialized std::string,,2,3,0,,,CC BY-SA 3.0,, +309890,1,309958,,2/11/2016 18:06,,0,160,"

We had a bug caused by ruby workers (4 processes, 1 thread each) batching records using a legacy MongoDB as the store. The race condition was around whether the batch was full or not, with one to three extra records sneaking into a batch. This was solved by correct locking (although it did slow things down a bit).

+ +

For some of our services I've debated switching to a language with concurrency baked in, but how do you design around third-party dependencies that aren't concurrency-aware, or that approach concurrency differently (e.g. via locking)?

+ +

Can you build a lovely concurrent app and then ruin it all by sharing state elsewhere? Would this problem have been avoided if we used Erlang/Elixir and Riak?

+",34513,,,,,42412.49375,Concurrent languages and non-concurrent dependencies,,1,1,,42424.17569,,CC BY-SA 3.0,, +309893,1,309895,,2/11/2016 18:58,,1,200,"

So, I had to create a class that had a member of its own type. I looked into why this was possible, which is fascinating. Reading the answer, I find that a lot of work had to go into making this work, and so it probably didn't exist in older languages. So my question is, just as the title says: what was the first language that allowed circular references in classes?

+",148019,,-1,,42878.52778,42412.1,What was the first language that allowed a class to contain a member of its own type,,3,4,,,,CC BY-SA 3.0,, 

A colleague was asking me to explain a system information flow, as they are having problems with the wrong information being presented. It seems that what they have created is a custom-built website, with its own database, which gets its data from SAP via a JSON-based dataset.

+ +

SAP is the live information, the custom built solution enables other information to be captured, thereby enriching the SAP data. This is stored locally in the custom site's local database, to produce monthly reports.

+ +

I'm trying to wrap my head round why a JSON dataset is being used, rather than a direct (SQL?) query type approach. If the goal is to get live information from SAP on a monthly basis, why not just query SAP, rather than construct a static JSON payload and have to parse it? Now it seems the argument is whether the JSON was wrongly created or wrongly interpreted... it just seems so much more complicated.

+ +

Since the JSON file is done every night too, it's not obvious to me why this particular design has been chosen.

+",215072,,31260,,42411.87431,42412.37847,Why use JSON dataset rather than SQL Query?,,2,3,2,,,CC BY-SA 3.0,, +309901,1,309923,,2/11/2016 20:52,,0,183,"

In my app I want the ability to continually check for Internet, Bluetooth and GPS status (if one is off/on). I need each of these to be on at all times for my application to work effectively.

+ +
    
  1. For the Internet, I use a Handler which checks if internet is off/on every x seconds, and I create a Settings.ACTION_SETTINGS intent which the user can select to turn internet on.
    
  2. For the Bluetooth I have a BroadcastReceiver which tells me if bluetooth is on/off and I handle it accordingly.
    
  3. For the GPS I use a Handler too.
    
+ +

The problem I have is that my MainActivity is a Google Maps Activity and my way of handling bluetooth/internet/gps is not very user friendly, an alert dialog is brought up for each separate one (when off).

+ +

I'm thinking of having one AlertDialog that controls all of bt/internet/gps. However, I'm not sure if this is even possible (I also need to connect to pair to a bluetooth device).

+ +

What would you recommend is the best way to do this?

+",119758,,31260,,42411.87153,42412.11389,"Android continually checking Internet, Bluetooth and GPS status?",,1,1,,,,CC BY-SA 3.0,, +309903,1,309952,,2/11/2016 21:04,,1,188,"

Currently, there are only 3 possible publishers. I might want to add some more in the future:

+ +
interface NewsArticle {
+    enum Publisher { NYPost, ChiTribune, LATimes }
+
+    Publisher getPublisher();
+}
+
+ +

I like the rigidity of using enum, but when might I get tripped-up if I treat the publisher as an enum instead of a String?
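
+ +

The same trade-off exists outside Java. As a quick illustration in Python (the enum below mirrors the one above), an enum rejects unknown publishers at the boundary, which is exactly the rigidity being weighed against a plain string:

```python
from enum import Enum

class Publisher(Enum):
    NY_POST = "NYPost"
    CHI_TRIBUNE = "ChiTribune"
    LA_TIMES = "LATimes"

# Value lookup succeeds only for known publishers...
known = Publisher("NYPost")

# ...and fails loudly for anything else, unlike a bare string would.
try:
    Publisher("Gazette")
    rejected = False
except ValueError:
    rejected = True
```

The flip side is the trip-up the question asks about: every new publisher requires a code change and redeployment, whereas a string would accept it silently.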

+",215076,,,,,42412.46111,"Will returning an enum, instead of String, be too restrictive here:",,4,6,,,,CC BY-SA 3.0,, +309908,1,,,2/11/2016 22:02,,1,473,"

I'm using a lot of implicit any in TypeScript. I don't quite understand how to decipher a TSD file in order to get the correct type when building off a library like express or angular. Is there a way to learn what the type of something is at runtime, so that I can go back and fill in the type later? Or is there another way of figuring out types based on existing code?

+",215086,,292892,,43452.73056,44202.96111,Finding types in TypeScript,,2,3,0,,,CC BY-SA 3.0,, +309909,1,,,2/11/2016 22:12,,4,911,"

I am building a service for pricing the options of a product, and for pricing the product itself when the product has various options.

+ +

Example

+ +

The user selects X options out of Y available, where X <= Y. The service then computes the price from the selected options using the cost of each option.

+ +

Using Pseudocode, it goes something like this:

+ +
$totalPrice = 0;
+
+//number of modules comes from user input OR can be derived from it
+//price per module comes from database
+$productPrice = [PRICE FOR A MODULE] * [NUMBER OF MODULES];
+
+$totalPrice += $ProductPrice;
+
+//option 1 selection comes from user input
+if ([OPTION ONE IS SELECTED])
+     $totalPrice += [PRICE FOR OPTION ONE]
+
+ +

Question:

+ +

How can I decide who deals with options and who deals with pricing?

+ +

For example, I can inject selected options into a Pricing class but that complicates Pricing, since now business logic inside Pricing class needs to be aware of selected options and how to use them to compute the price.

+ +

Perhaps I can have a class for just options, and another class for just pricing, and then some kind of a third ""Combiner"" class that knows how to combine options and pricing to come up with final totals. But won't that be too complicated?

+ +

Is there a model for this sort of thing?

+ +

General Problem Statement

+ +

A more general problem is --

+ +

Given

+ +
    
  • A) User Input (i.e. Add-on Options)
  • B) Product Specifications (i.e. number of product sub-parts), which can be derived elsewhere from user input and its own service
  • C) Pricing for sub-parts, Pricing for Options
    
+ +

Compute total Product price.

+ +

How can I do this in OO?
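
+ +

One possible shape, sketched in Python with names of my own choosing: keep the product spec and the options as plain data, and put the arithmetic in a small calculator object, so neither the options nor the product need to know how pricing works.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    name: str
    price: int  # price in cents, to avoid float rounding issues

@dataclass(frozen=True)
class Product:
    module_price: int
    module_count: int

class PriceCalculator:
    """The 'combiner': folds a product spec and the selected options
    into a total, keeping the pricing rules in one place."""
    def total(self, product, selected_options):
        base = product.module_price * product.module_count
        return base + sum(opt.price for opt in selected_options)

product = Product(module_price=1000, module_count=3)
selected = [Option("gift wrap", 250), Option("express", 500)]
total = PriceCalculator().total(product, selected)  # 3000 + 750
```

This is essentially the ""Combiner"" idea from the question; it stays simple as long as pricing is additive, and only needs to grow if options start interacting with each other.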

+",119333,,119333,,42412.77153,43074.28125,How to compute Pricing of Product & Options based on User Option Selection and Pricing per Option,,2,4,1,,,CC BY-SA 3.0,, +309911,1,309937,,2/11/2016 22:58,,4,7800,"

Given the following scenario:

+ +
    
  1. Teacher entity
    
    {  
    ""id"": ""1234"",  
    ""name"": ""Mr. Didactic"",  
    ""Subject"": ""History"",  
    ""Classroom"": ""1A""  
    }
    
  2. REST API endpoint:
    
    /teacher/id/1234
    
+ +

Let's say I submit a POST (update) request to the endpoint with this request body:

+ +
{  
+""id"": ""1234"",  
+""name"": ""Mr. Didactic"",  
+""Subject"": ""History""
+}
+
+ +

How should that be handled/interpreted? Is Classroom being requested to be set to null/empty?

+ +

Or is it untouched, i.e. don't do anything to Classroom, it's not part of the request?

+ +

Or is there some other way of interpreting this? What's expected, or are the best practices here?
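
+ +

For comparison, the JSON Merge Patch convention (RFC 7386) answers exactly this question: a key that is absent from the request means ""leave it alone"", while an explicit null means ""delete it"". A minimal Python sketch of that merge rule:

```python
def merge_patch(target, patch):
    """Apply an RFC 7386-style merge patch: absent keys are left
    untouched, explicit nulls (None) delete, everything else overwrites."""
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_patch(result[key], value)
        else:
            result[key] = value
    return result

teacher = {"id": "1234", "name": "Mr. Didactic",
           "Subject": "History", "Classroom": "1A"}
patch = {"id": "1234", "name": "Mr. Didactic", "Subject": "History"}

# "Classroom" is absent from the patch, so it survives unchanged:
updated = merge_patch(teacher, patch)
```

Under that convention, the request above would leave Classroom untouched; a client wanting to clear it would send ""Classroom"": null explicitly.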

+",14969,,,user40980,42411.97292,42412.76389,REST API - omitted properties in POST request: How should they be handled?,,2,10,2,,,CC BY-SA 3.0,, +309919,1,,,2/12/2016 1:55,,7,3696,"

I am actively working on a two person project with very few longstanding feature branches (longest existing branch is 3 weeks).

+ +

I have spent the afternoon trying to understand merge vs rebase and what the advantages and disadvantages are. I've made some progress and plan on incorporating rebase to clean up my local commits before pushing to GitHub for a code review. I like to commit often and I have automated testing, so the main argument against rebase, that it could introduce bugs into the rebased commits, seems less relevant. While I've seen some arguments against this type of rebasing, it seems less controversial. My confusion is about rebasing vs merging a feature branch into master after the code review is complete. I know that no one else is relying on it and the code review process generated many small meaningless syntax commits. Both of these seem to suggest rebasing.

+ +

My understanding is that for my project rebase has the advantage of keeping the history clean which lets me see better summaries in git blame of why a change was introduced. It also helps keep the history linear which makes it easier to use git bisect (is it about the history being linear or about having every commit be complete and not intentionally-breaking). It also allows fixes to my current example where I forgot to rename and edit a file in separate commits before submitting a pull request, leading to a large unnecessary diff. For this project none of these seem like game-changing disadvantages to using the git-merge workflow (the codebase might just not be big enough yet to experience the real disadvantages) but they would be nice to have if there aren't large cons.

+ +

The disadvantages I see are that I'm no git expert and it seems much easier to really break things with rebase than with merge. Second, we do a lot of active commenting on pull requests in github and I'm hesitant to break or lose those comments by rebasing and changing the SHA (on second thought I don't think I'll lose the comments but they'll no longer refer to the correct commits so I would have to hunt down the commits referred to by other means). Third, it gives the illusion of each commit being a logical complete and working commit when in reality I probably will still have commits that are partially broken.

+ +

In this article about merge vs rebase they give the following advice about this situation:

+ +
+

Review is done and ready to be integrated into the target branch. + Congratulations! You‘re about to delete your feature branch. Given + that other developers won’t be fetch-merging in these changes from + this point on, this is your chance to sanitize history. At this point + you can rewrite history and fold the original commits and those pesky + ‘pr rework’ and ‘merge’ commits into a small set of focussed commits. + Creating an explicit merge for these commits is optional, but has + value. It records when the feature graduated to master.

+
+ +

I'm very interested in knowing whether my above analysis is on the right track. Right now I'm leaning towards cautiously using rebase as described in the advice above. But I'm confused about how to actually do what they suggest. How do I create an explicit merge? Do I make a local branch that's a copy of feature, rebase -i on that branch to make the history look like I want, and then merge with master? Or do I rebase feature directly onto master and then use rebase -i to squash commits as appropriate? If this approach is what they are suggesting, how do I then create an explicit merge at the end?

+",60878,,,,,42413.94444,merge vs rebase pull requests on github,,2,3,,,,CC BY-SA 3.0,, +309925,1,309942,,2/12/2016 3:05,,5,351,"

Let's say I create a Collection in an instance method. I do not assign that reference to any instance variables. Rather, I just return it to the invoker. Then, I exit.

+ +

Now, the only thing with a reference to that Collection is the invoker.

+ +

So, as the method writer, why do I care what the invoker does with the returned Collection? Whether the invoker wants to treat it as Unmodifiable / Immutable is none of my business. In fact, I'd be breaking my method scope by hand-cuffing the invoker like that.

+ +

Right now, I'm writing a method that tokenizes a String by using regular expressions. I was ready to go with returning an ImmutableList, but I'm about to change my mind:

+ +
public interface SomeInterface {
+    List<String> tokenize(String text);
+    com.google.common.collect.ImmutableList<String> tokenizeImmutable(String text);
+}
+
+ +

Once I exit and die, whether the invoker mutates my Collection, the one that I created, is nothing that I should have any influence over. I mean, I could lock it down, and conceptually the tokenization should be locked down, but that is out of my scope.

+",215076,,215076,,42412.18264,42412.66597,Doesn't returning a Collection as Unmodifiable / Immutable unnecessarily break method scope?,,3,3,,,,CC BY-SA 3.0,, +309944,1,,,2/12/2016 9:15,,4,116,"

There is a problem using ON DELETE CASCADE on foreign keys in a SQL database if there are multiple paths from the root foreign key to the leaf. The way around this seems to be to replace the ON DELETE CASCADEs with INSTEAD OF DELETE triggers.

+ +

However, several Stack Overflow posts (e.g. here, here, and here) suggest using triggers only as ""a last resort"". Why?

+",50096,,-1,,42878.52778,42412.59931,Why are triggers seen as a last resort to solve multiple paths in cascading deletes?,,2,0,1,,,CC BY-SA 3.0,, +309955,1,309966,,2/12/2016 11:06,,10,2988,"

I've been digging into covariance and contravariance in C# and there is one thing I haven't managed to understand. C# (AFAIK, as of version 4) allows one to declare covariant or contravariant interfaces and delegates using the out and in keywords. However, declaring an abstract class with such keywords is not possible. Could anyone explain why it was designed this way? I understand the general issues and problems with covariance, contravariance, LSP and so on. However, I was not able to come up with a reason why an abstract class is not semantically suitable for being covariant or contravariant, whilst an interface is.

+ +

Here's a sample of the abstract class just for the sake of completeness. It won't compile.

+ +
public abstract class SomeList<out T> where T : class
+{
+
+}
+
+",213138,,,,,42412.55972,Why covariance is forbidden for abstract class in C#,,1,2,,,,CC BY-SA 3.0,, +309956,1,,,2/12/2016 11:08,,2,175,"

Suppose you develop an interpreter or file system. There are objects like variables, procedures and files in some environment. They have a name and content (a variable has a current value, a procedure has a body of code and a file has some data in its body). You can request vbl1.value, proc2.execute and file3.content to get their ""values"", and you can also always request the name of the current object via (vbl|proc|file).name. Suppose that you add alias objects for a given object, alias(name, target). An alias has its own name but somehow shares its ""value"" with the target. How would you design it?

+ +

The alias implementation would be straightforward if I had objects separate from their values. All aliases could then share one body, and the primary object could be implemented as alias+body as well.

+ +

On the other hand, primary objects are in 1-to-1 correspondence with their values, and it is not wise to ""prefer containment over inheritance"" for them. Additionally, most of the objects won't have aliases, and such a design is a waste of memory (two objects mean twice the memory overhead on the JVM). But if we do not split our objects into name+body, how do we implement the aliases? As proxies? That looks like a duplication of code: you first write the variable, proc, ... objects and then extend these classes with appropriate proxies. It is even threefold, since the objects must now be interfaces which admit both a first-class and a proxy implementation. Moreover, such splitting makes the values/bodies ""headless"": you cannot get the name given a body, like proc1.name above. That is probably right, since the body under consideration may now be accessed through any of its aliases.

+ +

I am hesitating. I admit that there may be a better approach. Which one is advised? Aliases seem to be in common use, so I am sure general guidelines must exist, but I cannot find anything.
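
+ +

To make the name+body split concrete, here is a minimal Python sketch (class names are mine): every named thing is a thin header pointing at a shared body, and an alias is simply a second header over the same body.

```python
class Body:
    """The shared state; it has no name of its own."""
    def __init__(self, value):
        self.value = value

class Named:
    """A header: a name plus a reference to a body. A primary object
    and its aliases differ only in name and share one body."""
    def __init__(self, name, body):
        self.name = name
        self.body = body

def alias(name, target):
    """An alias is just another header over the target's body."""
    return Named(name, target.body)

vbl1 = Named("vbl1", Body(42))
vbl2 = alias("vbl2", vbl1)
vbl1.body.value = 99   # the change is visible through every alias
```

The memory cost the question worries about is the extra header object per name; the payoff is that aliasing needs no proxies and no code duplication.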

+",202598,,202598,,42412.47222,42563.16806,Design of object alias,,3,2,,,,CC BY-SA 3.0,, +309967,1,,,2/12/2016 13:32,,0,871,"

And is there any danger to making my own INotifyCollectionChanged implementation that doesn't?

+ +

I'm trying to make a class library that contains a Log class, which in turn contains some sort of observable collection. But if I try to actually add anything to the log from anywhere but the UI thread, the app crashes, saying I'm not allowed to do that. Is there any reason .NET is designed that way? Can I work around it without causing horrible things to happen?

+",67076,,,,,42412.56597,Why does ObservableCollection require all changes to be made on the UI thread?,<.net>,1,0,,,,CC BY-SA 3.0,, +309971,1,309990,,2/12/2016 15:11,,0,294,"

We have a UI automation test framework based on Selenium WebDriver. We are in the early stages of building out load tests and are wondering whether it is possible or recommended to use a browser-based UI automation framework for load testing, versus the more traditional approach of simulating client requests at the HTTP layer.

+ +

We are hoping to avoid having two different ""frameworks"" for UI testing and Load testing if possible.

+ +

I have found little information on the web regarding using automation UI tests for load testing. What I have found states that although it is possible using some tool sets, it's impractical to reliably scale out.

+ +

Should we attempt to build out a load test suite using UI automation tests or should we use a more traditional tool set built specifically for load testing?

+",215153,,31260,,42412.63542,42412.77917,Is it feasible to scale UI automation tests for load testing a web application?,,1,2,,,,CC BY-SA 3.0,, +309983,1,,,2/12/2016 17:30,,1,314,"

I'm writing a path class which currently has a method called Combine which works like the .NET Path.Combine. If an argument is an absolute path (roughly because it begins with \), then it replaces anything to the left, so:

+ +
Combine(""c:\folder"", ""two""); // produces ""c:\folder\two""
+Combine(""c:\folder"", ""\two""); // produces ""c:\two""
+
+ +

I'm worried this will catch people out who expect the second to also produce `c:\folder\two`, and am wondering how to protect against this.

+ +

An alternative design would be to have an enum param to select behaviour:

+ +
enum CombineMethod { respectAbsolute, ignoreAbsolute };
+
+Path Combine(CombineMethod combineMethod, Path a, Path b);
+
+ +

That seems a bit wordy, so another alternative I'm considering is having two methods:

+ +
Path Join( Path a, Path b );
+Path Merge( Path a, Path b);
+
+Merge(""c:\folder"", ""two""); // produces ""c:\folder\two""
+Merge(""c:\folder"", ""\two""); // produces ""c:\two""
+
+Join(""c:\folder"", ""two""); // produces ""c:\folder\two""
+Join(""c:\folder"", ""\two""); // produces ""c:\folder\two""
+
+ +

Do any of these alternatives seem at all intuitive? Do you think it would be easy to accidentally call the wrong function? Or do you have better design suggestions so that users can get the behaviour they want without accidentally picking the wrong behaviour?
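
+ +

For what it's worth, Python's standard path joining follows the same ""an absolute component resets the result"" convention as .NET's Path.Combine, which suggests the behaviour, surprising or not, is at least widely established:

```python
import posixpath

# A relative component is appended to what came before...
joined = posixpath.join("/folder", "two")   # "/folder/two"

# ...but an absolute component discards everything to its left.
reset = posixpath.join("/folder", "/two")   # "/two"
```

Given that precedent, keeping Combine's current behaviour and offering a separately named method for the always-append case seems defensible.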

+",4684,,,,,42415.58958,Do methods Merge and Join make sense for a path class?,,3,1,,,,CC BY-SA 3.0,, +310015,1,,,2/13/2016 6:12,,8,1616,"

In the Zen of Python, I can understand most parts of it, except:

+ +
Now is better than never.  
+Although never is often better than *right* now
+
+ +

So I think doing it now or getting results now is better than never. But why ""never is often better than *right* now""? Or what does it mean?

+ +

To see the above 2 lines in context, this is the whole Zen of Python:

+ +
The Zen of Python, by Tim Peters
+
+Beautiful is better than ugly.
+Explicit is better than implicit.
+Simple is better than complex.
+Complex is better than complicated.
+Flat is better than nested.
+Sparse is better than dense.
+Readability counts.
+Special cases aren't special enough to break the rules.
+Although practicality beats purity.
+Errors should never pass silently.
+Unless explicitly silenced.
+In the face of ambiguity, refuse the temptation to guess.
+There should be one-- and preferably only one --obvious way to do it.
+Although that way may not be obvious at first unless you're Dutch.
+Now is better than never.
+Although never is often better than *right* now.
+If the implementation is hard to explain, it's a bad idea.
+If the implementation is easy to explain, it may be a good idea.
+Namespaces are one honking great idea -- let's do more of those!
+
+",5487,,5487,,42413.26667,42415.71875,"For Python programming and being Pythonic, why ""never is often better than *right* now""?",,4,1,1,42416.42917,,CC BY-SA 3.0,, +310019,1,,,2/13/2016 7:19,,2,1871,"

I wrote a Slack bot which must connect to Slack teams through websocket connections. Since the bot might be used by thousands of teams, I will eventually need to distribute the teams across multiple servers. New teams are added through an HTTP server which handles the initial OAuth authentication.

+ +

I'm looking for a solution that will help me achieve the following:

+ +
    
  • When a server goes down or reboots, all the teams it was assigned to must be re-assigned to the remaining servers. It's ok if the connection to the Slack team temporarily goes down, as long as the team is quickly picked up by another server.
    
  • When a team is added, it gets assigned to the least ""busy"" server. Busy could simply be defined by the number of teams it currently handles.
    
  • I'd like to do all of this with minimal custom code.
    
+ +

So far, I've considered the following solutions:

+ +

1) Work queue with RabbitMQ. Bot servers compete to receive teams. That's an OK solution, though I need a reliable way to put teams back in the queue when a server goes down.

+ +

2) Write a custom ""orchestration"" service. The orchestration service would receive teams from the http server and dispatch them to a cluster of servers. It would need to keep track of when servers go down and which teams need to be re-assigned. I'm not really sure how to write such a service reliably and this would become a single point of failure.

+ +

3) Your suggestions!

+",2100,,,,,42413.38681,Scaling websocket client connections (not server) to multiple servers,,1,0,1,,,CC BY-SA 3.0,, +310030,1,,,2/13/2016 11:54,,0,92,"

How do companies deal with limitations and constraints of Twitter Bootstrap after choosing to use it in a particular project?

+ +

Web designers are no longer free to design web pages; they can only choose from Bootstrap components and decide how to customise them. If we give web designers full freedom instead, going from design prototypes to real HTML/CSS pages will cost a lot of work to adapt Bootstrap components to the look that was designed.

+",82090,,190130,,42413.525,42413.525,PSD to HTML/CSS limitations when using Twitter Bootstrap,,2,1,,,,CC BY-SA 3.0,, +310036,1,,,2/13/2016 13:54,,1,38,"

Due to cost I am currently thinking of using AWS for scalable hosting of the application's node.js backend, since it would be nice to have monitoring, load balancing and all the goodies of the AWS architecture.

+ +

The database system (currently OrientDB, PostgreSQL and Redis) needs lots of memory and SSD disks, and I have found several hosting providers with pricing at a quarter of AWS's.

+ +

Does anyone know of any metrics for weighing database latency against network request overhead, or is that a non-issue on today's fast networks?

+",154794,,31260,,42413.59514,42413.59514,Splitting the application into two different hosting providers,,0,0,,,,CC BY-SA 3.0,, +310043,1,310046,,2/13/2016 16:13,,2,255,"

If there's a bug that triggers undefined behavior in a piece of code, is the undefined behavior consistent each time you run it, and does it change each time you compile it?

+ +

For example, if you had some C code that does some string manipulation: you compile it, run it 3 times, and the output consistently contains weird characters like ABCE*D-*+ĚĚĚĚĚĚĚĚĚĚĚ. You compile it again, and the next time you run it 3 times it just crashes.

+ +

I'm sorry if the description is a little ambiguous, as I'm trying to figure this out myself.

+",156808,,31260,,42413.67708,43385.65972,Consistency of Undefined behavior,,2,1,,,,CC BY-SA 3.0,, +310052,1,,,2/13/2016 18:59,,1,78,"

Our customer wants to unify his ordering system, so that all of his end-customers can use one single website to place orders. The planned site will use web services to place the order in one of several production systems.

+ +

The production systems have a lot of different orderable services that need different information when ordered. Some need the end-user's name; others will need a date by which the service has to be provided; some need the location where something will be delivered; others need a pick-up location.

+ +

The new site will be built so that all necessary information will always be delivered. The site is built by another department, while I'm tasked with the creation of a web service that can be used by a number of different production systems.

+ +

The production systems need specific information (e-mail addresses to send notifications, locations to send people to, dates and times at which something is available). Let's imagine a system to help the customer service of a washing machine company: send a notification to the given user's e-mail address that the repair will be performed in his house on a given date, which is automatically assigned by the production system. Another e-mail will be sent to the actual worker, with the location given in the address and the date on which he has to perform the work.

+ +

Another system might only need the location of the customer, as it will only send out a package with a replacement part.

+ +

There are a lot of different service types, and work on the systems is ongoing. Fields are added to the process every couple of months to support some kind of new business case.

+ +

What would be the best way to model the different types of fields? Should there be a different class of order for each of the different types of service (DeliveryOrder, PickupOrder etc.)? Or should we use an ontology of some kind (Order with an array of fields, each order needing different fields in the array, which are identified by a FieldTypeID)?
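
+ +

A small sketch of the second option in Python (all names are mine): each order carries typed field records, and each service type declares which field types it requires, so adding a field next month is a data change rather than a new class.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Field:
    field_type: str   # e.g. "email", "location", "date"
    value: str

@dataclass
class Order:
    service_type: str
    fields: list

# Which field types each service requires (in practice this would
# live in configuration or the database, not in code).
REQUIRED = {
    "repair": {"email", "location", "date"},
    "replacement_part": {"location"},
}

def missing_fields(order):
    """Validate an order against its service type's requirements."""
    present = {f.field_type for f in order.fields}
    return REQUIRED[order.service_type] - present

order = Order("replacement_part", [Field("location", "Main St 1")])
```

The trade-off is the usual one: the generic model absorbs new field types without code changes, but gives up the compile-time safety that distinct DeliveryOrder/PickupOrder classes would provide.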

+",215255,,215255,,42413.85417,42413.85417,Design API for placing orders with differing parameters,,0,2,,,,CC BY-SA 3.0,, +310053,1,310261,,2/13/2016 19:18,,-2,464,"

I wish to supply a software product with a GUI etc. for operating it. The software is for research, so users will want to tweak it to some extent and implement their own preferred algorithms instead of the ones we're shipping. Therefore, we want to make the code partly modifiable but not visible.

+ +

A possible solution would be to provide a library with certain overridable functions, along with documentation defining their input/output.

+ +

Is this the best solution to the problem, or are there better, well-known alternatives?

+ +

The reason I want to achieve this is because the original code would lose value if exposed totally.

+ +

So basically I want to avoid revealing my code in the best possible way. I don't want to obfuscate it, as that may not be good practice for research-based software.

+",215314,,,user53019,42416.84097,42416.85486,How can a software code be made modifiable without being visible to the end user?,,2,6,1,,,CC BY-SA 3.0,, +310057,1,,,2/13/2016 21:21,,0,962,"

Is there an advantage or risk to either method? I'm looking to upload JPG or PNG files only, with a size limit of ~3 MB, with size reduction done before insertion.

+",215323,,,,,42413.93194,Saving an image in MYSQL vs saving to a folder,,1,3,1,42415.49375,,CC BY-SA 3.0,, +310060,1,310273,,2/13/2016 22:32,,1,1407,"

I'm creating a system using some DDD principles and I'm stuck with a problem.

+ +

To give a bit more of context on what I'm trying to do, let me first describe what the system is about:

+ +

The idea is to allow a user of a specific kind of service to integrate with and test many of these services without integrating his system with all of them.

+ +

To do that the user would integrate with my system, which acts as a gateway, integrating with all those services.

+ +

Now let me explain the problem:

+ +

For every integrated service there is a very specific API implementation (some use JSON, others XML), and for that reason these implementations are kept as separate dependencies (projects).

+ +

In the domain layer, there is an entity named ExternalService, this entity represents a service integration that is supported.

+ +

Let's say the kind of service we are talking about here is an SMS system that sends messages to a phone number.

+ +

So we have an api implementation that integrates with the external service:

+ +
class XYZSMSProviderApi {
+    public void sendMessage(String phone) {...}
+}
+
+ +

And we have the entity ExternalService with the XYZ service data (this data is persisted in a database):

+ +
class ExternalService {
+    public String name = ""XYZ"";
+}
+
+ +

The problem is, who should be responsible for calling the sendMessage method of the XYZSMSProviderApi class?

+ +

Should I inject the API object in the entity and then have a method to do the actual message sending in the entity?
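
+ +

One possible arrangement, sketched in Python with names of my own: keep ExternalService as pure data and let an application service map it to the right API adapter, so the entity never holds a reference to infrastructure code.

```python
class ExternalService:
    """Domain entity: just persisted data, no API references."""
    def __init__(self, name):
        self.name = name

class XYZSMSProviderApi:
    """Infrastructure adapter for one concrete provider."""
    def send_message(self, phone):
        return f"XYZ sent to {phone}"

class SmsSender:
    """Application service: picks the adapter matching the entity."""
    def __init__(self, adapters):
        self.adapters = adapters  # provider name -> adapter instance

    def send(self, service, phone):
        return self.adapters[service.name].send_message(phone)

sender = SmsSender({"XYZ": XYZSMSProviderApi()})
result = sender.send(ExternalService("XYZ"), "555-0100")
```

That keeps the domain entity persistence-friendly and testable, while each provider-specific implementation stays in its own dependency, as described above.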

+",215331,,,,,42417.15833,How should an entity that abstracts an external service do its operations?,,2,0,,,,CC BY-SA 3.0,, +310066,1,310069,,2/14/2016 0:21,,0,79,"

I'm currently modifying code released under GPL v2.0, and I'm wondering if the GPL allows release under a different version, say, GPL v3.0 or if it has to remain under the same version as the original.

+",99117,,,,,42414.02292,Does the GNU General Public License (GPL) allow release under a different version of the GPL?,,2,1,,,,CC BY-SA 3.0,, +310071,1,310163,,2/14/2016 4:57,,34,2798,"

I'm curious to know how programmer teams typically managed their software development back in the 80s and early 90s. Was all the source code simply stored on one machine which everyone worked on, or was the source passed around and copied manually via floppy and merged manually, or did they actually use revision control systems over a network (CVS for example) like how we do now? Or perhaps something like an offline CVS was being used?

+ +

Nowadays everyone is dependent on source control; it's a no-brainer. But in the 80s, computer networks weren't that easy to set up, and things like best practices were still being figured out...

+ +

I do know that in the 70s and 60s programming was pretty different so revision control was not necessary. But it's in the 80s and 90s that people started using computers for writing code, and applications started increasing in size and scope, so I am wondering how people managed all that back then.

+ +

Also, how does this differ between platforms? Say Apple vs Commodore 64 vs Amiga vs MS-DOS vs Windows vs Atari

+ +

Note: I'm mostly talking about programming on microcomputers of the day, not big UNIX machines.

+",124792,,31260,,42414.41875,43976.75347,How did version control work on microcomputers of the day in the 80s and 90s?,,8,9,5,,,CC BY-SA 3.0,, +310072,1,310077,,2/14/2016 5:34,,3,2377,"

We have deployed a RESTful web service on an application server (Apache). The volume is getting higher and we want to scale it. We will deploy two more Apache instances on two more machines.

+ +

How do I implement a layer which sends requests to each of these based on a round-robin strategy?

+ +

Currently say I have www.mywebservice.com/employees?id=1 and so on.

+ +

Now how do I redirect these to different instances? I am confused about where this layer would come in, as I would need some type of facade.
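
For illustration only (a minimal sketch, not production routing): the round-robin selection itself is trivial to express, e.g. with itertools.cycle. The real question is where it lives — in practice this job is usually delegated to a reverse proxy such as nginx or HAProxy, both of which distribute requests round-robin across a configured pool of backends by default.

```python
from itertools import cycle

# Hypothetical pool of identical backend instances.
backends = cycle([
    "http://app1.internal:8080",
    "http://app2.internal:8080",
    "http://app3.internal:8080",
])

def route(path):
    """Pick the next backend in rotation and build the target URL."""
    return next(backends) + path

first = route("/employees?id=1")   # app1 this call, app2 the next, ...
```

The facade the question asks about is exactly this `route` step, placed in front of the instances so clients keep using the single public hostname.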

+",205196,,31260,,42414.40694,42415.41944,Scaling a Restful Web Service Hosted in Server,,2,3,,,,CC BY-SA 3.0,, +310085,1,310105,,2/14/2016 10:27,,1,221,"

While using FLUX in a CRUD application, according to what I understood, initially the data is returned from an AJAX call and then stored in the STORE. So, all the data that is currently viewed is only from the STORE.

+ +

So my question is, if I'm seeing a listing page of all the data and some other user changes that data on the server, wouldn't I have to use an AJAX call again to list all the data and then store it in the STORE again? What would be the point of using the STORE if I have to call again?

+ +

I saw a tutorial where initially on page load the data gets rendered, and after that everything is done with the data in the STORE. But with this, if the server data is changed by some other user, wouldn't I have to fully reload the page again? I seem to have a lot of confusion regarding this issue.

+ +

What is the use of FLUX and how is it implemented? Also, what is the correct way of implementing it in a CRUD application?

+",215369,,31260,,42414.43819,42444.77986,What is the approach for implementing FLUX in a CRUD application that pulls JSON data from the server?,,1,0,,,,CC BY-SA 3.0,, +310093,1,313195,,2/14/2016 14:23,,2,2639,"

I am going to develop a server application that provides the functionality of bookkeeping software on a Tomcat server. I can think of two ways to achieve this.

+ +
    +
1. Creating a single web application - Let's say we have to provide invoice management, user management and report management. All this logic is bound to this single web application. Functionality is separated by package structure.
  2. +
3. Creating multiple web applications - Create a separate web app for each function, e.g. one web application for invoice management, another for user management, and so on.
  4. +
+ +

One advantage I can think of in creating a separate web app per feature is loose coupling; e.g. a developer doesn't need to think about invoice-related code while working on the user management module. Another advantage is that if one customer wants only a subset of all the available features, we can easily remove the other web apps and deliver a solution quickly. I wanted to know the pros and cons of these two approaches before going to implementation.

+",82757,,,,,43497.02917,Is it better to have one web app or multiple web apps,,4,5,2,,,CC BY-SA 3.0,, +310094,1,,,2/14/2016 14:39,,12,2991,"

I'm aware of a few good options:

+ +
    +
1. Big integers (e.g., int64_t, mpz_t, any bignum lib) to represent cents or 10^-n cents—say, an integer represents 1/100 of a penny ($1.05 == 10500). This is called a scaled integer.

  2. +
  3. High level library for arbitrary precision decimal arithmetic such as BigDecimal in Java, Decimal in Python, decimal.js in Javascript, boost::multiprecision in C++

  4. +
  5. Strings.

  6. +
  7. Packed BCDs (binary coded decimals) is a more esoteric method that seemed popular in old software. Read more about it.

  8. +
+ +

In production code for banks (or credit cards, ATMs, POS systems), what data type is actually used the most? I'm especially asking those who worked for banks.

+ +

EDIT: Super useful links for those with the same problem domain (needing to implement a ""money"" data structure that doesn't break).

+ + + +

EDIT for the fellow who said this is a duplicate question: This is a practical not a theoretical question of ""what's the best"". Read the unedited title of my question. I'm asking what people have seen first-hand in banks' codebases.

+ +

I know BigDecimal is ""best"" obviously, but nice APIs like that aren't available everywhere, believe it or not, and decimal libraries are expensive as opposed to ints.
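
To make option 1 concrete (a sketch, not any bank's actual code): the reason scaled integers survive in production is that integer arithmetic on fixed sub-cent units is exact, while binary floating point drifts even on sums this small.

```python
# Binary floating point cannot represent 1.05 exactly, so sums drift:
price_f = 1.05
total_f = price_f + price_f + price_f      # not exactly 3.15

# Option 1, scaled integers: $1.05 as 10500 units of 1/100 of a penny.
price_c = 10500
total_c = price_c + price_c + price_c      # exactly 31500
```

Dividing by the scale factor only at display time keeps every intermediate result exact.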

+",212019,,212019,,42415.29514,42415.29514,What do banks actually use as a data type for money?,,1,8,5,42419.58194,,CC BY-SA 3.0,, +310097,1,,,2/14/2016 15:02,,6,1297,"

I'm trying to identify an object (a cube) in a set of photos. Using Canny/Sobel/Hough I've managed to get the photo down to a set of lines that are pretty accurate; however if I plot these lines on my image there are a lot of duplicates where the angle/distance varies only a tiny amount. Here's a cut down sample:

+ +

+ +

I thought I could reduce these by simply looking for lines that have a rho/theta values within a certain tolerance. However, the problem I discovered is that the more vertical a line is (the further the intersection at x=0) the greater the difference in the rho value for the same angle:

+ +

+ +

This doesn't seem like a problem maths can't fix; but it seems like there should be a tried-and-tested method for doing this, yet I'm failing to find any good info online about the best way to do it (either maths to allow for this, or a generally better way of merging similar lines within the image space from the output of the Hough Transform).

+ +

I did wonder if converting them to x/y pairs for the edges of my image (eg. the points to use to render a line) and then comparing them might be a better idea. I'm going to give this a go; but if it's not the ""normal"" way to do it, I'd like to know!
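
That endpoint-comparison idea can be sketched like this (hypothetical helper names; the θ wrap-around at 0/π is ignored for brevity): sample each (ρ, θ) line at two parameter values spanning the image, and treat two lines as duplicates when both sample points agree within a pixel tolerance. Comparing in image space this way sidesteps the ρ-sensitivity of near-vertical lines entirely.

```python
import math

def sample(rho, theta, t):
    """Point on the line at parameter t, measured from the foot of the
    perpendicular from the origin (the standard Hough parametrisation)."""
    fx, fy = rho * math.cos(theta), rho * math.sin(theta)
    return (fx - t * math.sin(theta), fy + t * math.cos(theta))

def dedupe(lines, span, tol):
    """Greedy merge: keep a line only if no already-kept line matches it
    at both sample points (t = 0 and t = span, e.g. the image diagonal)."""
    kept = []
    for rho, theta in lines:
        a = (sample(rho, theta, 0), sample(rho, theta, span))
        if not any(
            all(math.dist(p, q) < tol
                for p, q in zip(a, (sample(r, t, 0), sample(r, t, span))))
            for r, t in kept
        ):
            kept.append((rho, theta))
    return kept

# Two near-identical verticals collapse into one; the third survives.
lines = [(100.0, 1.570), (100.5, 1.571), (50.0, 0.1)]
merged = dedupe(lines, span=500, tol=5.0)
```

Note the two verticals differ noticeably in ρ/θ terms but their sampled points are sub-pixel apart, which is exactly the property the polar comparison lacks.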

+",8107,,,,,42804.95972,"How to ""de-dupe"" similar lines (detected using the Hough Transform as rho/theta pairs)",,1,2,,,,CC BY-SA 3.0,, +310107,1,310110,,2/14/2016 21:10,,5,488,"

Every class of a model in my application has a create method. The framework gives me the default implementation of create, which is “create in the DB”. Sometimes I need to perform some extra actions during create. There are numerous ways to achieve this: I can override create (and call super thereafter), subscribe to signals (if possible), or override hook methods (such as before_create); it has never been a problem. The problem is, I don't actually want to perform these actions every time, i.e. these actions are not actually part of the create procedure; they are part of some other high-level (compared to the original create) procedure. They call it smart_create in one of the biggest projects I've ever worked with.

+ +

Before proceeding any further, I want to give you an idea of what these smart_create methods usually are. So, here is the example. When you create some entity for a user, you may want to notify him with an email. But you may not want notifications during tests (it may slow them down), stand-alone scripts (when you repair something you just broke) or when an entity is created via admin panel (for whatever reason). The temptation is there to name this method create_and_notify, but this won't work when you need some other actions: create_and_notify_and_add_bonus_credits is not an appropriate name.

+ +

So, what is the question? I don't like to name my methods smart_create, but I can't really tell what name is better. Here is what I tried and why I still don't like it:

+ +
    +
• smart_create. What if I have two nontrivial ways of creating the entity? What is the name for the second one? wise_create? More than this, how do other developers know which method to call? Do they need to “go smart”?
  • +
• EntityCreator class. Actually the same but with a whole class instead of just one method. Nothing really changes: it's still unclear what this “creator” does and why it's better than bare “create”.
  • +
• No name at all, because the controller knows what to do. That's just dirty. The problem with this approach is clear: I will not be able to reuse the same logic again.
  • +
• create_by_user. Here, this is what the method really does, but I don't actually want to mention users or clients in my models: they are not supposed to know about them at all.
  • +
+ +

I believe a lot of developers encountered such situations, and I want to know what is your usual approach to this problem.
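
One way out of the naming trap (a sketch in Python with invented names — the same shape works in any framework) is to stop qualifying create at all: keep the model's create free of side effects, and give each non-trivial creation path a use-case-level name that says what the business action is, with the side effects injected so tests and repair scripts can simply omit them.

```python
class Question:
    """Hypothetical model: create does persistence and nothing else."""
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

    @classmethod
    def create(cls, **attrs):
        return cls(**attrs)        # imagine: the INSERT happens here


def ask_question(author_id, text, notify=None):
    """Use-case function named after the business action, not the
    mechanism. Side effects are explicit and optional."""
    q = Question.create(author=author_id, text=text)
    if notify is not None:         # tests / stand-alone scripts pass nothing
        notify(author_id, q)
    return q
```

A second non-trivial path then gets its own verb (import_question, migrate_question, ...) instead of a "smarter" adjective.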

+",180065,,180065,,42416.50486,42416.50486,“Smart create” method naming,,3,3,2,,,CC BY-SA 3.0,, +310114,1,310121,,2/15/2016 0:38,,3,884,"

My understanding is that part of the point of the JVM was that the code could ""run anywhere"", but CLR code was designed to run only on Windows: so why have a virtual machine?

+ +

I know that the CLR runtime takes care of garbage collection, but there are garbage-collected languages that still compile to native code.

+ +

The only thing I can think of is that Microsoft may have wanted leverage in order to keep the price of Intel chips low.

+",201215,,,,,42415.46944,What is the point of the Common Language Runtime (CLR)?,,2,4,,,,CC BY-SA 3.0,, +310115,1,310123,,2/15/2016 0:42,,2,309,"

I currently have two parties set up:

+ +
    +
  • typical HTTP web server
  • +
  • native app distributed to consumers (presumably behind typical home router/firewall configuration)
  • +
+ +

The app is designed to work on certain small problems, but I want apps configured to work on similar problems to work together. Right now, the only solution I can come up with is passing all the data through the web server. However I was wondering if there is any protocol that will allow my native app to use the web server just to facilitate the initial TCP handshake and then continue without the web server relaying the information in the middle?

+ +

My desired flow would be like so:

+ +
    +
  1. Web server waits around for connections
  2. +
  3. Native app instance A connects to server and notifies it that A is working on problem (i)
  4. +
  5. Native app instance B connects to server and notifies it that B is working on problem (i)
  6. +
  7. Server matches A and B as working on similar problems and notifies them both to initiate a connection
  8. +
  9. Both A and B send TCP SYN, SYN-ACK, ACK messages as appropriate to the web server, but then instead of using the traded TCP protocol parameters against the web server they continue to communicate directly with each other.
  10. +
+ +

So,

+ +

A. is such a flow possible?

+ +

B. is there a protocol that attempts to establish this?

+ +

If the answer to both of the above is ""no"" is there something else I can try to cut out the web server from middle-manning the communications?

+",216424,,,,,42535.19375,Web server facilitating a TCP handshake between two native apps behind typical home firewalls,,1,3,,,,CC BY-SA 3.0,, +310118,1,310120,,2/15/2016 1:01,,13,2204,"

When I learned C++ long ago, it was strongly emphasized to me that part of the point of C++ is that just like loops have ""loop-invariants"", classes also have invariants associated with the lifetime of the object -- things that should be true for as long as the object is alive. Things that should be established by the constructors, and preserved by the methods. Encapsulation / access control is there to help you enforce the invariants. RAII is one thing that you can do with this idea.

+ +

Since C++11 we now have move semantics. For a class that supports moving, moving from an object does not formally end its life-time -- the move is supposed to leave it in some ""valid"" state.

+ +

In designing a class, is it bad practice if you design it so that the invariants of the class are only preserved up to the point that it is moved from? Or is that okay if it will allow you to make it go faster?

+ +

To make it concrete, suppose I have a non-copyable but moveable resource type like so:

+ +
class opaque {
+  opaque(const opaque &) = delete;
+
+public:
+  opaque(opaque &&);
+
+  ...
+
+  void mysterious();
+  void mysterious(int);
+  void mysterious(std::vector<std::string>);
+};
+
+ +

And for whatever reason, I need to make a copyable wrapper for this object, so that it can be used, perhaps in some existing dispatch system.

+ +
class copyable_opaque {
+  std::shared_ptr<opaque> o_;
+
+  copyable_opaque() = delete;
+public:
+  explicit copyable_opaque(opaque _o)
+    : o_(std::make_shared<opaque>(std::move(_o)))
+  {}
+
+  void operator()() { o_->mysterious(); }
+  void operator()(int i) { o_->mysterious(i); }
+  void operator()(std::vector<std::string> v) { o_->mysterious(v); }
+};
+
+ +

In this copyable_opaque object, an invariant of the class established at construction is that the member o_ always points to a valid object, since there is no default ctor, and the only ctor that isn't a copy ctor guarantees this. All of the operator() methods assume that this invariant holds, and preserve it afterwards.

+ +

However, if the object is moved from, then o_ will then point to nothing. And after that point, calling any of the methods operator() will cause UB / a crash.

+ +

If the object is never moved from, then the invariant will be preserved right up to the dtor call.

+ +

Let's suppose that hypothetically, I wrote this class, and months later, my imaginary coworker experienced UB because, in some complicated function where lots of these objects were being shuffled around for some reason, he moved from one of these things and later called one of its methods. Clearly it's his fault at the end of the day, but is this class ""poorly designed?""

+ +

Thoughts:

+ +
    +
  1. It's usually bad form in C++ to create zombie objects that explode if you touch them.
If you can't construct some object and can't establish the invariants, then throw an exception from the ctor. If you can't preserve the invariants in some method, then signal an error somehow and roll back. Should this be different for moved-from objects?

  2. +
  3. Is it enough to just document ""after this object has been moved from, it is illegal (UB) to do anything with it other than destroy it"" in the header?

  4. +
  5. Is it better to continually assert that it is valid in each method call?

  6. +
+ +

Like so:

+ +
class copyable_opaque {
+  std::shared_ptr<opaque> o_;
+
+  copyable_opaque() = delete;
+public:
+  explicit copyable_opaque(opaque _o)
+    : o_(std::make_shared<opaque>(std::move(_o)))
+  {}
+
+  void operator()() { assert(o_); o_->mysterious(); }
+  void operator()(int i) { assert(o_); o_->mysterious(i); }
+  void operator()(std::vector<std::string> v) { assert(o_); o_->mysterious(v); }
+};
+
+ +

The assertions don't substantially improve the behavior, and they cause a slow-down. If your project does use the ""release build / debug build"" scheme, rather than just always running with assertions, I guess this is more attractive, since you don't pay for the checks in the release build. If you don't actually have debug builds, this seems quite unattractive though.

+ +
    +
  1. Is it better to make the class copyable, but not movable?
This also seems bad and causes a performance hit, but it solves the ""invariant"" issue in a straightforward way.
  2. +
+ +

What would you consider to be the relevant ""best practices"" here?

+",192361,,31260,,42415.52986,42415.52986,Object lifetime invariants vs. move semantics,,1,2,3,,,CC BY-SA 3.0,, +310126,1,310189,,2/15/2016 4:57,,1,104,"

I currently have a requirement to conduct polling in a network where new systems may log in at any time.

+ +

The idea is that whoever comes online first goes into mode1, the second goes into mode2, and the third onwards go into mode3. Simple enough, right? But the problem is deciding who came first.

+ +

Assume no third party online to make the decision for you [I am taking the worst case: Every other service explodes] and no time-sync. Also assume that the delay in transmission of data between the nodes is inconsistent and highly variable causing up-time to also not be reliable.

+ +

Since comments have a word limit, I'll reply here I guess:

+ +

gnat, I really don't understand the sharing-my-research part. Most resources on the net rely on a server to see who logged in [no service is always running, hence I can't rely on some external source to tell who logged in first], which isn't possible here since, like I mentioned, you don't log into any network; you just log into your node. I thought of using a time-stamp, but due to the possibility of bad time-sync (past experience) it was shot down by my lead. Also, there is always the chance of 2 devices logging in at the same time. I am afraid of what that could result in. Having an additional check afterwards is doable, but not recommended [limited bandwidth].

+",216432,,216432,,42415.43333,42415.81875,Polling without timestamp,,1,7,,42424.17292,,CC BY-SA 3.0,, +310132,1,,,2/15/2016 7:21,,3,168,"

I'm working on a drawing application and I want to provide a grid with snap-to-grid functionality. I'm trying to find the right way to do that (ideally some design pattern), but I'm quite confused about which class should be responsible for it.

+ +

I have an editor class which has an active tool object. The editor listens to all user actions (mouse click, mouse move and so on) and passes them to the active tool object. The active tool object then handles those events and renders a shape based on its state (the first click activates the tool, then it listens to mouse move and draws a preview of the shape, and the second click creates the shape in my model class).

+ +

I'm thinking about using the Intercepting Filter pattern - every event will go through a filter first and the result will then be passed to the active tool. I guess that could work for object creation, but what to do with move and resize actions? Those depend on the actual object which is being moved/resized.

+",215405,,,,,42424.225,Snap to grid functionality in drawing application,,1,2,2,,,CC BY-SA 3.0,, +310135,1,310138,,2/15/2016 7:56,,0,217,"

I am never able to follow NetBeans' default coding conventions, like the following:

+ +

Function should be N lines only

+ +
+

Method Length is N Lines (M allowed)

+
+ +

Warning: Do not Access Superglobal $_POST Array Directly

+ +
// Using this
+$username = filter_input(INPUT_POST, 'username');
+
+// Instead of this
+$_POST['username'];
+
+ +

And many more.

+ +

It seems really hard to follow all of the coding conventions. So, what do I gain if I follow them 100%?

+",144875,,-1,,42878.52778,42415.40417,Is it worth it to follow code conventions of Netbeans?,,1,3,,42417.03819,,CC BY-SA 3.0,, +310142,1,,,2/15/2016 11:34,,0,395,"

I am on a project where I call an API and I want to compute statistics from the data returned. It returns a big JSON object. As it is not practical to flatten, and I am not interested in all the data returned either, I want to parse only certain keys. I have thought of parsing it and then creating a well-structured JSON object myself, and from there working with it in pandas (I am coding this in Python).

+ +

Would this be a good approach? Is it even necessary to create a new .json structure to hold the parsed data?
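
As a minimal stdlib sketch of the "parse only certain keys" step (the payload shape below is invented — substitute the real API's key paths): there is usually no need to serialise a second JSON document at all; a flat list of dicts can be handed straight to pandas.DataFrame.

```python
import json

# Hypothetical response shape -- replace the key paths with the real ones.
raw = ('{"items": ['
       ' {"id": 1, "stats": {"views": 10}, "noise": "..."},'
       ' {"id": 2, "stats": {"views": 3}, "noise": "..."}]}')

def extract(payload):
    """Keep only the keys of interest, flattened one level deep."""
    return [
        {"id": item["id"], "views": item["stats"]["views"]}
        for item in json.loads(payload)["items"]
    ]

rows = extract(raw)
# pandas.DataFrame(rows) now gives a flat table ready for statistics.
```

So the intermediate structure is just an in-memory list of flat dicts, not a new .json file.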

+",216467,,31260,,42715.88542,42715.88542,"Parsing JSON and creating analytics out of the data, what is the best way to do it?",,1,2,,,,CC BY-SA 3.0,, +310149,1,310150,,2/15/2016 13:38,,1,569,"

Take the following class hierarchy:

+ +
    +
  • Client + +
      +
    • FacebookClient
    • +
    • PinterestClient
    • +
    • TwitterClient
    • +
  • +
+ +

Each client must define a value for an enum property named ClientType (string property Url in the original version of this question; hence the accepted answer). ClientFactory should then be able to instantiate by client type.

+ +
ClientFactory.Create(ClientType.Facebook);
+
+ +

ClientType could be a static property for each subtype, and then subtypes could be selected by its value in the factory:

+ +
// Create array of possible client subtypes
+Type[] clients = new[] { typeof(FacebookClient), typeof(PinterestClient), typeof(TwitterClient) };
+
+// Select subtype whose ClientType matches the one passed to the factory
+Type client = clients.SingleOrDefault(c => clientType == (ClientType)c.GetProperty(""ClientType"").GetValue(null));
+return (Client)Activator.CreateInstance(client, parameters);
+
+ +

However, since static members can't be abstract, I'm not sure of a way to ensure that ClientType is set, and there are dozens of subtypes.

+ +

Is there a way to force ClientType to be set? Would another way of doing this (e.g., a switch statement) be a better option?

+",212829,,212829,,42417.4875,42417.4875,What is the DRY-est factory pattern to instantiate subtypes based on one of their properties' values?,,1,4,1,,,CC BY-SA 3.0,, +310155,1,310161,,2/15/2016 14:29,,2,3284,"

I have a scenario where I draw multiple graphs in the same application, and each graph is an object of a graph class.

+ +

I want to delete a graph using event handling (a button click event). The event handling occurs in the graph class, where I can simply call the destructor to delete the object.

+ +

Is there a way to tell the main class, from which I am creating these objects, about this event so that all references to the object can be deleted?

+",216484,,,user40980,42435.60069,42435.60069,Destructing an object of a class correctly,,1,1,,,,CC BY-SA 3.0,, +310158,1,,,2/15/2016 15:16,,10,1321,"

My team is planning a migration from subversion to git. We support 3 applications... 1 is primarily for internal users, 1 is for corporate ""partners"" and 1 is for end users. These applications share some code and also communicate with each other using web services and message queuing. They have different release schedules.

+ +

Currently they each have separate subversion repos (so 3 repos). We use the subversion ""externals"" feature to share the code, including the ""contracts"" for message queuing.

+ +

How should we go about deciding whether to keep the 3 repos when we move to git, using submodules or some other repo sharing technique, or whether we should adopt a monorepo?

+",4526,,,,,42995.92569,Should we use a monorepo?,,2,4,1,,,CC BY-SA 3.0,, +310159,1,310164,,2/15/2016 15:17,,66,10893,"

I'm diving into the world of functional programming and I keep reading everywhere that functional languages are better for multithreading/multicore programs. I understand how functional languages do a lot of things differently, such as recursion, random numbers etc but I can't seem to figure out if multithreading is faster in a functional language because it's compiled differently or because I write it differently.

+ +

For example, I have written a program in Java which implements a certain protocol. In this protocol the two parties send and receive to each other thousands of messages, they encrypt those messages and resend them (and receive them) again and again. As expected, multithreading is key when you deal in the scale of thousands. In this program there's no locking involved.

+ +

If I write the same program in Scala (which uses the JVM), will this implementation be faster? If yes, why? Is it because of the writing style? If it is because of the writing style, now that Java includes lambda expressions, couldn't I achieve the same results using Java with lambda? Or is it faster because Scala will compile things differently?

+",46143,,-1,,42837.31319,42481.83889,Is functional programming faster in multithreading because I write things differently or because things are compiled differently?,,4,15,13,,,CC BY-SA 3.0,, +310170,1,310174,,2/15/2016 16:56,,2,147,"

I have a dataset full of rows that I must initialize into myclass and then process.

+ +

I am currently looping through each row in the dataset, initializing a new instance of myclass, then adding that instance to a list with type myclass.

+ +

I then loop through the list and conduct a process on each instance (for purpose of the example, let's say I must send each instance as a SOAP message).

+ +

My question is, do I keep the structure as is (with a list of myclass instances) or do I ditch the list and just process my instance (i.e. send the SOAP message) within the same loop in which I initialize the object? I would basically leverage a new method within myclass to send this SOAP message.

+ +

I care about performance and memory usage. It is a batch process where I must process 20,0000 rows at a time. I have limited resources on the host server.

+ +

Additional info: if an initialization fails or a SOAP send fails, I want to continue processing the remaining rows...
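
As a sketch of the "all in one loop" shape (hypothetical names; the error types are placeholders): a generator keeps only one instance alive at a time, and per-row try/except gives exactly the "keep going on failure" behaviour, which matters when the host has limited memory and 20,000+ rows per batch.

```python
class MyClass:
    """Stand-in for the real row-backed class."""
    def __init__(self, row):
        if row is None:                    # placeholder validation
            raise ValueError("bad row")
        self.row = row


def instances(rows):
    """Yield one instance at a time; skip rows that fail to initialise."""
    for row in rows:
        try:
            yield MyClass(row)
        except ValueError:
            continue                       # log and move on


def process_all(rows, send):
    """Initialise and send in one pass; one failed send doesn't stop the batch."""
    sent = failed = 0
    for obj in instances(rows):
        try:
            send(obj)                      # e.g. the SOAP call
            sent += 1
        except RuntimeError:
            failed += 1                    # log and continue with the next row
    return sent, failed
```

Buffering everything into a list first only pays off if you need the whole set at once (e.g. batching the SOAP calls); otherwise streaming caps peak memory at one instance.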

+",140178,,140178,,42415.70625,42416.74653,Load to list in one loop and then process list in another or do it all at once,,2,2,,,,CC BY-SA 3.0,, +310173,1,,,2/15/2016 17:16,,-1,303,"

Numerical analysis textbooks talk about the absorption phenomenon (Introduction to Numerical Analysis and Scientific Computing; Nabil Nassif; Dolly Khoueiri Fayyad; CRC Press; 2014) that occurs when adding floating point numbers with a big difference in magnitude between them. There is also the concept of Unit in the Last Place, or ulp. I understand that the ulp of a given floating point number tells us the gap between the floating point number and its successor.

+ +

Is there any relationship between the absorption phenomenon and the ulp? What I'm trying to determine is: given X and Y with X >> Y so that X + Y = X, how many times do I have to add Y before X stops absorbing the added Y (X + Y + ... + Y)?
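
The connection can be shown directly (Python 3.9+ exposes math.ulp). A Y strictly smaller than ulp(X)/2 is absorbed on every single addition, so X + Y + ... + Y never moves no matter how many times Y is added one at a time; at exactly half an ulp (as below) round-ties-to-even also absorbs it whenever X's last significand bit is even. Accumulation only happens if the Ys are summed first, or if Y exceeds half an ulp of X.

```python
import math

x = 1e16                 # here math.ulp(x) == 2.0
y = 1.0                  # exactly ulp(x) / 2: every addition rounds back

s = x
for _ in range(1000):
    s += y               # absorbed each time: s never changes

accumulated = x + 1000 * y   # summing the Ys first does move x
```

So the answer to "how many times" is: under one-at-a-time addition, no finite count ever suffices once |Y| ≤ ulp(X)/2 rounds back to X — the fix is to accumulate the small terms separately before adding them to X.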

+",212325,,212325,,42415.72986,42415.80208,Floating point absorption phenomena and ULP,,1,4,,,,CC BY-SA 3.0,, +310176,1,310323,,2/15/2016 17:50,,11,10146,"

I am designing a system that uses Event Sourcing, CQRS and microservices. I am led to understand this isn't an uncommon pattern. A key feature of the service needs to be the ability to rehydrate/restore from a system of record. Microservices will produce commands and queries on an MQ (Kafka). Other microservices will respond (events). Commands and queries will be persisted in S3 for the purposes of auditing and restoring.

+ +

The current thought process was that, for the purposes of restoring the system, we could extract the event log from S3 and simply feed it back into Kafka.

+ +

However, this fails to acknowledge changes in both producers and consumers over time. Versioning at the command/query level seems to go some way toward solving the problem but I can't wrap my head around versioning consumers such that I could enforce that when a command, during a restore, is received and processed, it's the exact same [version of the] code that's performing the processing as it was the first time the command was received.

+ +

Are there any patterns I can use to solve this? Is anyone aware of other systems that advertise this feature?

+ +

EDIT: Adding an example.

+ +

A 'buyer' sends a 'question' to a 'seller' on my auction site. The flow looks as follows:

UI -> Web App:           POST /question {:text text :to seller-id :from user-id}
Web App -> MQ:           SEND {:command send-question :args [text seller-id user-id]}
MQ -< Audit:             <command + args appended to log in S3>
MQ -< Questions service: - Record question in DB
                         - Email seller 'You have a question'

+ +

Now, as a result of a new business requirement, I adjust the 'Questions service' consumer, to persist a count of all unread questions. The DB schema is changed. We have had no notion of whether or not a question was read by the seller, until now. The last line becomes:

+ +

MQ -< Questions service: - Record question in DB
                         - Email seller 'You have a question'
                         - Increment 'unread questions count'

+ +

Two commands are issued, one before the change, one after the change. The 'unread questions count' equals 1.

+ +

The system crashes. We restored by replaying the commands through the new code. At the end of the restore, our 'unread questions count' equals 2. Even though, in this contrived example, the result is not a catastrophe, the state that has been restored is not what it previously was.
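
One pattern that addresses this is command/event upcasting (sketched below with invented names): persist a schema version with every logged command, and on replay migrate old versions to the current shape before they reach the consumer, so current code never has to impersonate its older self. For this example, a v1 send-question would be upcast with an explicit "does not count as unread" marker:

```python
def upcast_v1(cmd):
    """v1 predates the unread counter; replaying it must not bump it."""
    return dict(cmd, version=2, counts_as_unread=False)

UPCASTERS = {1: upcast_v1}       # version -> migration to version + 1

def to_current(cmd):
    """Chain migrations until the command is in the current schema."""
    while cmd["version"] in UPCASTERS:
        cmd = UPCASTERS[cmd["version"]](cmd)
    return cmd

# Replaying the audit log: one pre-change and one post-change command.
log = [
    {"version": 1, "command": "send-question"},
    {"version": 2, "command": "send-question", "counts_as_unread": True},
]
unread = sum(1 for c in map(to_current, log) if c["counts_as_unread"])
```

This keeps a single (current) consumer implementation while still replaying history faithfully — the alternative of running the exact old code per version is far harder to operate.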

+",6030,,6030,,42416.60764,43987.57014,"Event sourcing, replaying and versioning",,2,3,8,,,CC BY-SA 3.0,, +310177,1,,,2/15/2016 18:09,,1,1171,"

I'm in the process of experimenting with ways of breaking up a large (and growing) project. Currently, we're working with a big Angular application with what will soon be a large number of components.

+ +

One idea I'd like to try is breaking each component into a completely separate website/project, as a collection of plugins to the main website. Essentially, instead of pulling in an ng-view, I'd just be sticking an iframe in and displaying the content in there.

+ +

There are a few cons that I've noted myself already:

+ +
    +
  1. iframes can be horrible

  2. +
  3. deployment is more complicated

  4. +
  5. communicating between the plugins is less straightforward

  6. +
+ +

I'm hoping that some of the pros might turn out to be:

+ +
    +
  1. don't have to redeploy solution to add new components to site

  2. +
  3. teams working on products can do so however they like. They can use any technology and approach, and aren't limited to Angular

  4. +
+ +

Does anyone have any experience with this sort of approach? Any pros and cons to add? It has divided opinion at work a little, so it would be nice to have more input from the Internet.

+",216509,,5525,,42575.42222,42575.42222,Pros and Cons of odd iframe 'architecture',,1,1,,,,CC BY-SA 3.0,, +310184,1,310196,,2/15/2016 19:04,,3,716,"

Using JavaScript, or in a native iOS app, it is possible to read and modify cookies.

+ +

The server generally sets these, but if both the server and the client modify the values, then they become global shared mutable state across systems.

+ +

This feels bad.

+ +

The RFC does not mention user agents modifying cookies other than by predefined rules (e.g. honoring cookie expiry). It also says in the summary about cookies:

+ +
+

… These header fields can be used by HTTP servers to store state at HTTP user agents

+
+ +

However it does not actually prohibit or explicitly advise against clients modifying cookies.

+ +
    +
  • Is it acceptable to modify cookies on a user agent?
  • +
  • Are there any references or guidelines on this matter?
  • +
+",99462,,,,,42415.91597,Is it acceptable for user agents to modify cookies other than by instruction from the server?,,1,5,,,,CC BY-SA 3.0,, +310190,1,310191,,2/15/2016 20:10,,3,646,"

I'm commenting some code right now, and I'm looking for a word meaning the thing being iterated over. An iterator is an object performing an iteration, but what's the word for the object that the iterator yields: the result of one iteration?

+",166152,,166152,,42415.96042,43574.38056,What's the noun for the result of an iteration?,,2,20,1,,,CC BY-SA 3.0,, +310200,1,310216,,2/15/2016 22:06,,0,2806,"

I'm currently trying to turn some R code (interpreted language) into C code (a compiled language), and I need to explain what is being done and/or why.

+ +

Does anybody know of a term/verb for this sort of practice?

+",216524,,,,,43193.68264,Is there a term for rewriting code from an interpreted language to a compiled one,,1,10,1,,,CC BY-SA 3.0,, +310202,1,,,2/15/2016 22:44,,10,2620,"

Consider a situation where a class implements the same basic behavior, methods, et cetera, but multiple different versions of that class could exist for different uses. In my particular case, I have a vector (a geometric vector, not a list) and that vector could apply to any N-dimensional Euclidean space (1 dimensional, 2 dimensional, ...). How can this class / type be defined?

+ +

This would be easy in C++ where class templates can have actual values as parameters, but we don't have that luxury in Java.

+ +

The two approaches I can think of that could be taken to solve this problem are:

+ +
    +
  1. Having an implementation of each possible case at compile time.

    + +
    public interface Vector {
    +    public double magnitude();
    +}
    +
    +public class Vector1 implements Vector {
    +    public final double x;
    +    public Vector1(double x) {
    +        this.x = x;
    +    }
    +    @Override
    +    public double magnitude() {
    +        return x;
    +    }
    +    public double getX() {
    +        return x;
    +    }
    +}
    +
    +public class Vector2 implements Vector {
    +    public final double x, y;
    +    public Vector2(double x, double y) {
    +        this.x = x;
    +        this.y = y;
    +    }
    +    @Override
    +    public double magnitude() {
    +        return Math.sqrt(x * x + y * y);
    +    }
    +    public double getX() {
    +        return x;
    +    }
    +    public double getY() {
    +        return y;
    +    }
    +}
    +
    + +

    This solution is obviously very time-consuming and extremely tedious to code. In this example it doesn't seem too bad, but in my actual code I'm dealing with vectors that have multiple implementations each, with up to four dimensions (x, y, z, and w). I currently have over 2,000 lines of code, even though each vector only really needs 500.

  2. +
  3. Specifying parameters at runtime.

    + +
    public class Vector {
    +    private final double[] components;
    +    public Vector(double[] components) {
    +        this.components = components;
    +    }
    +    public int dimensions() {
    +        return components.length;
    +    }
    +    public double magnitude() {
    +        double sum = 0;
    +        for (double component : components) {
    +            sum += component * component;
    +        }
    +        return Math.sqrt(sum);
    +    }
    +    public double getComponent(int index) {
    +        return components[index];
    +    }
    +}
    +
    + +

    Unfortunately this solution hurts code performance, results in messier code than the former solution, and is not as safe at compile-time (it can't be guaranteed at compile-time that the vector you're dealing with actually is 2-dimensional, for example).

  4. +
+ +

I am currently actually developing in Xtend, so if any Xtend solutions are available, they would also be acceptable.

+",193717,,95212,,42415.99722,43149.78681,Generating Java Classes with Compile-time Value Parameters,,5,7,2,,,CC BY-SA 3.0,, +310205,1,,,2/16/2016 0:29,,5,317,"

Given a fairly traditional node class (below), what's the best way to implement equality on a given graph?

+ +

If our node looks like this

+ +
public abstract class Node{
+
+    private final Set<Node> predecessors = new HashSet<>();
+    private final Set<Node> successors = new HashSet<>();    
+
+    //we do have a visitor scheme
+    public void accept(GraphVisitor visitor){ ... }
+}
+
+ +

with several specializations:

+ +
public class NodeTypeOne extends Node{
+    public int importantInteger = 42;
+}
+public class NodeTypeTwo extends Node{
+    public double importantValue = 34;
+}
+//... and so on
+
+ +

And given two nodes (assumed to be roots), how do I determine if the two graphs corresponding to those roots are logically equal?

+ +

I'd like to avoid overriding equals if possible, but I recognize that it might be a necessary evil.

+ +

Currently, the only solution I've thought of is to traverse the graph and manually look-up your counterpart, and check its relatives:

+ +
public class EqualityVisitor implements GraphVisitor{
+
+    private boolean result = true;
+    private final Node otherRoot;
+
+    public void visitEnter(Node node){
+
+        Optional<Node> counterpartNode = findInGraph(otherRoot, node);
+        result &= counterpartNode.isPresent();
+
+        counterpartNode.ifPresent(counterpart -> {
+            result &= setEquals(node.successors, counterpart.successors, this::customEquality);
+            result &= setEquals(node.predecessors, counterpart.predecessors, this::customEquality);
+        });
+    }
+
+    //annoyingly the customEquality method would have to do its own type-switch
+    //(when the whole purpose of a visitor interface is to avoid such a type-switch)
+}
+
+ +

this is cumbersome because:

+ +
    +
  • requires that manual type switch on the node's type to determine equality, though that can be mitigated if I'm willing to simply use the default equals and let each node override its equals method
  • +
  • it is far from performant, being at least quadratic or even cubic if my graph approaches the complete graph
  • +
+ +

I believe a reasonably elegant and intuitive solution exists using a work-list and a fixed-point algorithm; I'm just not sure how to code it. Alternatively, I could use a multi-map to build an adjacency matrix and then assert that each row has exactly one corresponding set-equal row in the other table, but this again feels really heavy-handed.
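One direction I've been toying with, which avoids both the type switch and the quadratic look-ups, is a memoised bottom-up "canonical signature". This is only my own sketch, in Python rather than Java (the idea should translate), and it assumes the DAG is acyclic and that payloads are mutually comparable so child signatures can be sorted:

```python
# Sketch: two rooted DAGs compare equal when their signatures match.
# The memo is keyed on object identity, so each shared sub-graph is
# hashed once -- roughly linear in edges rather than quadratic.
# Successors are treated as an unordered collection, like the HashSet.

class Node:
    def __init__(self, payload, successors=()):
        self.payload = payload
        self.successors = list(successors)

def signature(node, memo=None):
    if memo is None:
        memo = {}
    key = id(node)
    if key not in memo:
        # Sorting makes the signature independent of child order;
        # assumes payloads are comparable with one another.
        child_sigs = tuple(sorted(signature(c, memo) for c in node.successors))
        memo[key] = (type(node).__name__, node.payload, child_sigs)
    return memo[key]

def dags_equal(root_a, root_b):
    return signature(root_a) == signature(root_b)
```

Note this decides equality of the successor structure only; whether that is the right notion of "logically equal" for nodes with predecessors too is exactly what I'm unsure about.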

+",76881,,103708,,42872.43958,44159.30069,Determine equality of a DAG,,2,4,,,,CC BY-SA 3.0,, +310213,1,310282,,2/16/2016 3:23,,-1,1685,"

I cannot understand the definition. Here's some code:

+ +
Integer iW;
+for (iW = 1; iW < 4; iW++)
+   System.out.println(iW);
+int i = iW;
+iW += 6;
+System.out.println(String.format(""iW = %d, i = %d"", iW, i));
+
+ +

Outputs this:

+ +
1
+2
+3
+iW = 10, i = 4
+
+ +

What is going on, exactly? Integer object iW sure looks like it's changing to me. In other uses, it actually seems to behave a bit more like a local object (not reference variable/pointer type), if I can bend my old C++ mind to this. But, it sure looks mutable.

+ +

P.S. Thanks for all the down-votes with no explanation why! I have, actually, read about every post on this topic before asking this question. Integer is an immutable class, and the hugely upvoted responses suggest (to my eye, at least) that this means you cannot mutate, i.e., change, an instance of the class. Yet, the above code sample appears to show that the object iW, or the object it references, does mutate. I still want to know what immutable class means in Java, especially from a coding standpoint, since it does not prevent the apparent mutation of immutable class instances.
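For what it's worth, the same distinction can be watched directly in Python, where ints are likewise immutable. This little experiment of mine suggests what I suspect is going on in the Java snippet above: += builds a new object and rebinds the name, rather than mutating the old object:

```python
# Immutability experiment: the object never changes; += rebinds the name.
i_w = 4
i = i_w                      # both names refer to the same object
before = id(i_w)             # identity of the object currently named i_w
i_w += 6                     # creates a NEW object (10) and rebinds i_w
assert id(i_w) != before     # i_w now names a different object
assert i == 4                # the original object was never mutated
assert i_w == 10
```

If the same applies to Java's Integer (autoboxing producing fresh objects), that would explain why iW "looks" mutable while the class itself is immutable.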

+",216544,,216544,,42416.72431,42417.03056,Java Integer - Immutable,,2,4,1,42421.71528,,CC BY-SA 3.0,, +310220,1,310224,,2/16/2016 7:43,,1,57,"

I need to send a photo from one mobile phone to another. Currently, I'm splitting the image up into bytes before sending and then reassembling the image on the second mobile. This works fine, but if the photo is large (and they keep getting larger due to the better cameras on phones), then it takes some time for the photo to be received by the second mobile.

+ +

Would trying to stream the photo offer a better experience? It seems that streaming would be faster because, theoretically, multiple frames would be received per second. Would streaming be faster than sending the photo in chunks of bytes? If so, why?

+",104647,,,,,42416.35764,Sending Photo from Mobile to Mobile via Chunking Bytes versus Streaming,,2,0,,,,CC BY-SA 3.0,, +310227,1,,,2/16/2016 9:33,,3,652,"

Domain-Driven Design (DDD) has an abstract repository pattern to handle saving and fetching/finding entities in storage (a database, an external service, it doesn't matter). My question: if the Repository pattern only brings objects into the system and puts them away, and it is basically backed by, for example, database drivers like Mongo or an ORM like JPA, then is there any point in implementing any validation there?

+ +

My hunch tells me that I have to implement validations as constraints in Factories or Aggregates, so that I don't have to use the implementation's mechanisms. However, there are plenty of ORMs that require constraints for each field/property, which might cause code duplication.

+",175635,,252416,,43535.71875,43685.75278,How to implement Repository that supports ORM's validation?,,2,8,,,,CC BY-SA 4.0,, +310239,1,310245,,2/16/2016 16:27,,1,285,"

The task:
+I have a database with 4 tables with 200 rows, 800 rows, 50 rows and 30 rows respectively.
+Just to simplify it, let's assume the tables are these sets:
+A = [Ar1, Ar2, Ar3], B = [Br1, Br2], C = [Cr1, Cr2, Cr3], D = [Dr1, Dr2, Dr3, Dr4], where Ar1 means row1 of table A.

+ +

There is also a 5th table ""E"" with 250 rows which contains some information which is relevant to tables A, B, C and D.

+ +

For every combination of AB, ABC and ABCD, I'm required to check all rows of E to see if there is some information relevant to the combination, and store a count of the relevant info. The count will be eventually written into an SQL table.

+ +

Eg: The combinations of AB would be:
+{Ar1, Br1}, {Ar1, Br2}, {Ar2, Br1}, {Ar2, Br2}, {Ar3, Br1}, {Ar3, Br2}
+So I have to check

+ +
forAllRowsOfE
+{    
+if (row of E == content of Ar1 and row of E <= content of Br1) then { Ar1Br1++; }
+}
+
+ +

and run the above for loop for all other combinations of A and B. Then also run it for combinations of ABC (for which it'd be {Ar1, Br1, Cr1}, {Ar2, Br1, Cr1}...and so on... and for combinations of ABCD).

+ +

The size:
+The total number of combinations for the A, B, C and D tables itself comes up to 200*800*50*30 = 240 million.

+ +

The problem:
+Running 240 million * 5 queries, even at 0.01 s per query, will take about 138 days to execute. The tables are small now. I'm expecting them to grow much, much larger.

+ +

I've been advised to load these tables into the memory of a Java program and to do the computation in Java, because many of the count combinations of AB will be repeated in the combinations of ABC, so that much brute-force counting can be avoided. The other reason is that all this data might actually fit into 6GB RAM, and when the size increases, we could search for other techniques like temporarily writing to a database table etc.

+ +

The questions:

+ +
    +
  • But the main question here is: is it really more viable/faster to perform such operations in Java memory?
  • +
  • Is the usage of nested loops really the best way to tackle this, or are there other techniques/queries?
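The kind of in-memory counting I have in mind looks roughly like the sketch below (Python for brevity, with a toy matching rule standing in for the real comparison, since the actual schema is more involved). The point is to scan E once and credit every matching combination, instead of scanning E once per combination:

```python
from bisect import bisect_left
from collections import Counter

def count_ab(rows_a, rows_b, rows_e):
    """Toy rule from the question: count e where e == a and e <= b.

    One pass over E; for each e, all b with e <= b are found via a
    sorted list, so repeated scans of E per (a, b) pair are avoided.
    """
    b_sorted = sorted(rows_b)
    counts = Counter()
    for e in rows_e:
        matching_bs = b_sorted[bisect_left(b_sorted, e):]   # all b with e <= b
        for a in rows_a:
            if a == e:                                      # toy "relevant" test
                for b in matching_bs:
                    counts[(a, b)] += 1
    return counts
```

Whether this, plus reusing AB counts when computing ABC, really beats 240 million queries is essentially what I'm asking.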
  • +
+",8413,,8413,,42416.71944,42417.49653,"Is it viable to copy contents of a database into a program's memory, if multiple queries take time?",,2,9,,,,CC BY-SA 3.0,, +310242,1,,,2/16/2016 16:37,,-1,1010,"

I have a string that contains numbers in sequence. There are no delimiters between the numbers. I have to find the missing number in that sequence. For example:

+ +
176517661768 is missing the number: 1767
+8632456863245786324598632460 is missing the number: 8632458
+
+ +

I have no idea how to even start. As you can see, I don't know the number length either. On top of that, I am mostly a C programmer, so I get little help from built-in functions. Nevertheless, I am looking for a good algorithm that I can implement myself; code/pseudo-code is highly appreciated too.
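The only idea I could sketch so far is to brute-force the width of the first number and then walk the string, tolerating exactly one skipped value. Sketched in Python for brevity (the logic ports to C directly); it assumes the sequence increases by one and the missing value lies strictly inside it:

```python
def find_missing(s):
    # Try every plausible width for the first number; then greedily
    # consume the string expecting n, n+1, n+2, ..., allowing exactly
    # one gap of one -- the gap value is the answer.
    for width in range(1, len(s) // 2 + 1):
        n = int(s[:width])
        i, missing = width, None
        while i < len(s):
            if s.startswith(str(n + 1), i):
                n += 1
            elif missing is None and s.startswith(str(n + 2), i):
                missing = n + 1                 # remember the skipped value
                n += 2
            else:
                break                           # this width doesn't work
            i += len(str(n))                    # handles 99 -> 100 width growth
        if i == len(s) and missing is not None:
            return missing
    return None
```

I haven't convinced myself this covers every edge case (a missing first or last element defeats it), but it handles both examples.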

+",216639,,80833,,42418.64375,42418.64375,Find missing number in sequence in string,,1,10,1,42418.23194,,CC BY-SA 3.0,, +310255,1,310295,,2/16/2016 19:33,,10,417,"

I have two base classes, Operation and Trigger. Each has a number of subclasses which specialise in certain types of operations or triggers. A Trigger can trigger a specific Operation, whilst an Operation can be triggered by a specific Trigger.

+ +

I need to write the code that maps a given Operation to a given Trigger (or vice versa), but I'm not sure where to put it.

+ +

In this case the code doesn't clearly belong to one class or the other class. So in terms of a single-responsibility principle I'm not sure where the code should belong.

+ +

I can see three options which would all work. Whilst 1 & 2 appear to just be a choice of semantics, 3 represents a different approach entirely.

+ +
    +
  1. On the trigger, e.g. bool Triggers(Operation o).
  2. +
  3. On the operation, e.g. bool TriggeredBy(Trigger t).
  4. +
  5. In an entirely new class which manages the mapping, e.g. bool MappingExists(Trigger t, Operation o).
  6. +
+ +

How should I decide where to place the shared mapping code in respect of a single responsibility principle?

+ +

How to manage single responsibility when the responsibility is shared?

+ +
+ +

Edit 1.

+ +

So the actual code looks like this. All the properties are either a string, Guid, collection<string>, or enum. They basically just represent small pieces of data.

+ +

+ +

Edit 2.

+ +

The reason for the return type of bool. Another class is going to consume a collection of Trigger and a collection of Operation. It needs to know where a mapping exists between a Trigger and an Operation. It will use that information to create a report.
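To make option 3 concrete, the kind of thing I'm picturing is a registry that owns the pairs, so neither class knows about the other. This is only my own sketch in Python; all the names (TriggerOperationMap and the placeholder subclasses) are made up:

```python
# Option 3 sketched: a separate class owns the mapping and answers
# MappingExists-style queries for the report code.
class TriggerOperationMap:
    def __init__(self):
        self._pairs = set()

    def register(self, trigger_type, operation_type):
        self._pairs.add((trigger_type, operation_type))

    def mapping_exists(self, trigger, operation):
        # A mapping exists when this (trigger class, operation class)
        # pair was registered.
        return (type(trigger), type(operation)) in self._pairs

class FileTrigger: pass            # placeholder subclasses for illustration
class BackupOperation: pass
class EmailOperation: pass
```

My worry is whether pulling the knowledge out of both classes like this is good separation or just a third place for the responsibility to hide.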

+",157442,,157442,,42416.85625,42417.50764,How to manage single responsibility when the responsibility is shared?,,3,7,0,,,CC BY-SA 3.0,, +310256,1,,,2/16/2016 19:38,,1,638,"

I want to design an approval process for my workflow, but I wonder if there is an architectural pattern or design pattern for this.

+ +

For example, an engineer will create a piece of work, and a program manager will approve it. Then a general manager will approve it. There may be more steps. Afterwards, all users will see the work, or users from another department will see the approved work.
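What I have in mind is something data-driven like the sketch below (Python of my own; the step names are invented), where adding another approver is a configuration change rather than a code change. I just don't know if this corresponds to a named pattern:

```python
# Ordered approval chain: each step must be approved by the right role,
# in order; the item is visible once every step has passed.
class ApprovalWorkflow:
    def __init__(self, steps):
        self.steps = list(steps)   # e.g. ["program_manager", "general_manager"]
        self.position = 0          # index of the next step awaiting approval

    @property
    def fully_approved(self):
        return self.position >= len(self.steps)

    def approve(self, role):
        if self.fully_approved or role != self.steps[self.position]:
            raise PermissionError(f"{role} may not approve at this stage")
        self.position += 1
```

Is there an established pattern (State? Chain of Responsibility? a workflow engine?) that this is reinventing?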

+",160523,,,,,42416.90833,Architecture of approval process,,1,1,,,,CC BY-SA 3.0,, +310260,1,,,2/16/2016 20:23,,0,557,"

This is pseudo-code from ""Artificial Intelligence: A Modern Approach"":

+ +
function Table-Driven-Agent(percept) returns action
+  static: percepts, a sequence, initially empty
+          table, a table, indexed by the percept sequences, initially fully specified
+
+  append percept to the end of percepts
+  action <-- Lookup(percepts, table)
+  return action
+
+ + + +

In this code and many others, the keyword static appears before some declarations. I want to use this keyword in my own algorithm pseudo-code, but I am not sure what its purpose is.

+",151256,,,user40980,42416.85347,42417.68681,"What is the meaning of ""static"" in pseudo-code",,1,6,0,,,CC BY-SA 3.0,, +310264,1,338551,,2/16/2016 20:37,,0,325,"

When you sign a SOAP message with a private key, how does the server know your public key in order to verify it?

+ +

I am connecting to a DataPower instance that sits in front of our actual Java web service. I keep getting ""signature not valid"" errors from the server, so it leads me to believe it has something to do with keys not being in sync.

+",13030,,,,,42754.88403,Signing Soap/Xml Messages,,2,1,,,,CC BY-SA 3.0,, +310265,1,,,2/16/2016 20:37,,12,1001,"

Here is a simple programming problem from SPOJ: http://www.spoj.com/problems/PROBTRES/.

+ +

Basically, you are asked to output the biggest Collatz cycle for numbers between i and j. (Collatz cycle of a number $n$ is the number of steps to eventually get from $n$ to 1.)

+ +

I have been looking for a Haskell way to solve the problem with performance comparable to that of Java or C++ (so as to fit within the allowed run-time limit). Although a simple Java solution that memoizes the cycle length of any already computed cycles will work, I haven't been successful at applying the idea to obtain a Haskell solution.

+ +

I have tried Data.Function.Memoize, as well as a home-brewed log-time memoization technique using the idea from this post: https://stackoverflow.com/questions/3208258/memoization-in-haskell. Unfortunately, memoization actually makes the computation of cycle(n) even slower. I believe the slowdown comes from the overhead of the Haskell way. (I tried running the compiled binary, instead of interpreting.)

+ +

I also suspect that simply iterating numbers from i to j can be costly ($i,j\le10^6$). So I even tried precomputing everything for the range query, using the idea from http://blog.openendings.net/2013/10/range-trees-and-profiling-in-haskell.html. However, this still gives a ""Time Limit Exceeded"" error.

+ +

Can you help with a neat, competitive Haskell program for this?
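For reference, the memoisation that works for me imperatively looks like the following when sketched in Python (my own sketch, not the Java code itself; the expected maxima are the classic 3n+1 sample answers). It is exactly this behaviour, a mutable cache shared across all trajectories, that I cannot reproduce efficiently in Haskell:

```python
# Memoised Collatz cycle lengths: every value seen on any trajectory is
# cached, so shared suffixes across trajectories are computed only once.
_cache = {1: 1}

def cycle_length(n):
    path = []
    while n not in _cache:          # walk until a known value is hit
        path.append(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    steps = _cache[n]
    for m in reversed(path):        # back-fill the cache along the path
        steps += 1
        _cache[m] = steps
    return steps

def max_cycle(i, j):
    return max(cycle_length(n) for n in range(i, j + 1))
```

In Haskell every pure memo structure I tried costs more than it saves, which is the crux of my question.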

+",168368,,-1,,42878.52778,42571.10139,Haskell ways to the 3n+1 problem,,2,1,3,,,CC BY-SA 3.0,, +310272,1,,,2/16/2016 22:46,,0,53,"

I'm working on a project which has different checklists (questions and answers) associated with an entity (Protocol). There is a business requirement that these questions can be altered in the future, and when a new entity is created it should be associated with the current checklist.

+ +

Example:

+ +

Let's say there is a checklist that has 21 questions (the actual questions are nested, with questions having sub-questions, but I believe this is out of the scope of this question). This would be version 1.0. Something changes and now there are 22 questions, and the version would be bumped up to 1.1.

+ +

When a new Protocol is created, it needs to have a Checklist associated with it - the current Checklist.

+ +

Simplified classes:

+ +
class Checklist {
+    String version
+    List<ChecklistQuestion> checklistQuestions
+}
+
+class ChecklistAnswerSet {
+    Checklist checklist
+    List<ChecklistAnswer> checklistAnswer
+}
+
+class Protocol {
+    ChecklistAnswerSet checklistAnswerSet
+    ...
+}
+
+ +

New Protocols are created within the ProtocolService; the child checklistAnswerSet is created here as well, but needs to refer to the current Checklist instance.

+ +

We are working with a grails backend and it's extremely easy getting references to instances by their fields:

+ +
Checklist checklist = Checklist.findByVersion('1.1')
+
+ +

I could drop this in my ProtocolService to get the current instance but I know this isn't a good idea. Any changes to this version would require code changes to the Service and although I could avoid a redeploy (grails magic), this feels completely wrong.

+ +

Where do I store this 1.1? In a configuration file? In the database? Or am I completely wrong and my design needs a complete rework?

+ +

Initially, I was storing this 1.1 data within a generic key/value table we have in the database called System_Property, but it just felt wrong. My gut reaction is to use a configuration file (there are other Checklist's and therefore other current versions that would also go here), but a coworker is saying only environmental settings go in config files.

+",124481,,,,,42417.01181,How to store the current version of an instance? Store reference to specific instance?,,1,0,,,,CC BY-SA 3.0,, +310291,1,310296,,2/17/2016 3:01,,2,1086,"

We want to extend the session duration for our users. I guess it doesn't matter for users who are not authenticated. We can create a PHPSESSID cookie for them and if it expires when they close their browser then fine. But for users who are authenticated we want to extend the session duration so they don't have to sign in repeatedly. We could just extend the session cookie duration for all users (authenticated, and non-authenticated) but as we handle millions of requests a month, this would mean millions of active sessions right? However, the number of users we will have to authenticate is only in the thousands. Can we change the duration of some session cookies? Is this the right approach? Or is there a better method to handle long term session cookies?

+",142254,,,,,42417.19375,"Set authenticated users session cookie's to long term, but others to short term?",,1,0,,,,CC BY-SA 3.0,, +310292,1,310302,,2/17/2016 3:37,,7,2703,"

Ignoring (with difficulty) Occam's Razor which would seem to put this quickly to rest, what advantage would this have:

+ +
typedef struct s_header {
+    struct s_header *next;
+    //...
+} Header;
+
+ +

over this:

+ +
typedef struct header {
+    struct header *next;
+    //...
+} header;
+
+ +

?

+ +

This question is based on an argument I was having in the comments to this codereview answer. My point there was diluted by also pointing out an error (which I believe would be easier to avoid by using the same name everywhere).

+",32828,,-1,,42838.52778,42418.20347,Why would you want different identifiers for a typedef and its associated struct tag?,,1,3,2,,,CC BY-SA 3.0,, +310300,1,,,2/17/2016 7:27,,0,1264,"

Let's consider following scenario:

+ +
    +
  1. Person A creates a piece of software and releases the source code under a permissive license, let's say MIT
  2. +
  3. Person B downloads that piece of software and uses it in accordance with the MIT license.
  4. +
  5. As the owner and holder of all the rights, Person A decides to change the license from MIT to another license, in particular a less permissive one like the GPL. For simplicity, let's assume that there were no contributions to the software by any other people in the meantime.
  6. +
  7. Person A releases a new version of the software
  8. +
+ +

In that scenario what happens to the rights given to the person B?

+ +
    +
  • Is person B allowed to still use the last version released before the change of the license under the old license (MIT), and all the versions released after the change under the new license (GPL)?
  • +
  • Is the entire piece of software licensed MIT or GPL after the change?
  • +
  • Basically, can rights given by a license be revoked by the owner?
  • +
+",216714,,,,,44012.45903,What happens if owner changes license of a software after it was published,,1,8,,42425.79028,,CC BY-SA 3.0,, +310304,1,310316,,2/17/2016 8:37,,1,139,"

So I'm using the MEAN stack to build my web app. Like all other apps, it requires login & registration.

+ +

My Approach so far has been:

+ +
    +
  1. Every major function of my app has its own AngularJS controller
  2. +
  3. I have a data filter controller, which helps to filter data within the app. But my filter controller DOES NOT connect to the server itself. I created an AngularJS service that handles all actions with the server.
  4. +
  5. And I have a ""login controller""
  6. +
+ +

The login controller is responsible for as the name suggests handling logins.

+ +

My Question:

+ +

My question, then: is carrying out client-side form validation (i.e. checking that the fields are not empty, the email is in the correct format, etc.) within an AngularJS controller a good design approach?

+ +

Should I create a service that handles all the validation? Should the validation functions be private (i.e. in JavaScript modules)? What's the best design approach for this?

+ +

Below is my login controller code:

+ +
login.controller('login', function($scope){
+
+$scope.checkValidEmail = function(){
+   var elem = document.getElementById(""email"");
+   var err = document.getElementById(""emailErrorMessage"");
+   var emailLabel = document.getElementById(""emailLabel"");
+   var success = document.getElementById(""emailSuccessMessage"");
+   var email = $scope.userEmail.toLowerCase();
+
+   var atpos = email.indexOf(""@"");
+   var dotpos = email.lastIndexOf(""."");
+
+   if (atpos < 1 || dotpos < atpos + 2 || dotpos + 2 >= email.length) {
+
+       success.style.display = ""none"";
+       err.style.display = ""-webkit-inline-box"";
+       elem.style.borderColor = ""#ef4d23"";
+       elem.style.backgroundImage = ""url(./img/error_sign.png)"";
+       elem.style.backgroundRepeat = ""no-repeat"";
+       elem.style.backgroundPosition = ""325px"";
+       elem.style.backgroundSize = ""16px 15px"";
+       emailLabel.style.display = ""none"";
+       err.style.color = ""#ef4d23"";
+       err.innerHTML = ""Email address is incorrect!"";
+       $scope.valid = true;
+
+       return false;
+   } else {
+       success.style.display = ""-webkit-inline-box"";
+       success.style.color = ""#27ae60"";
+       success.innerHTML = ""Email looks great!"";
+       err.style.display = ""none"";
+       elem.style.borderColor = ""#27ae60"";
+       elem.style.backgroundImage = ""url(./img/correct_sign.png)"";
+       elem.style.backgroundRepeat = ""no-repeat"";
+       elem.style.backgroundPosition = ""325px"";
+       elem.style.backgroundSize = ""12px 16px"";
+       $scope.valid = false;
+   }
+  };
+});
+
+",186048,,31260,,42417.35972,42419.32014,Software design for Client side form validation,,2,0,,,,CC BY-SA 3.0,, +310308,1,,,2/17/2016 9:47,,10,530,"

I have a large codebase with a lot of ""anti-pattern"" singletons, utility classes with static methods and classes creating their own dependencies using new keyword. It makes a code very difficult to test.

+ +

I want to gradually migrate code to dependency injection container (in my case it's Guice, because it is a GWT project). From my understanding of dependency injection, it's all or nothing. Either all classes are managed by Spring/Guice or none. Since the codebase is large I cannot transform the code over night. So I need a way to do it gradually.

+ +

The problem is that when I start with a class that needs to be injected into other classes, I cannot use a simple @Inject in those classes, because those classes are not managed by container yet. So this creates a long chain up to the ""top"" classes that are not injected anywhere.

+ +

The only way I see is to make an Injector / application context globally available through a singleton for the time being, so that other classes can get managed beans from it. But that contradicts the important idea of not revealing the composition root to the application.

+ +

Another approach would be bottom-up: to start with ""high-level"" classes, include them in the dependency injection container and slowly move down to ""smaller"" classes. But then I have to wait a long time, since I still can't test those smaller classes that depend on globals/statics.

+ +

What would be the way to achieve such gradual migration?

+ +

P.S. The question Gradual approaches to dependency injection is similar in title, but it doesn't answer my question.
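The interim workaround I keep circling back to looks like this, sketched in Python rather than with Guice's real API, and with all names invented: a deliberately temporary static gateway that only not-yet-migrated classes are allowed to call, and that gets deleted once the migration is done:

```python
# Temporary "pull" access to the container during migration.
# New code uses constructor injection; legacy code calls
# LegacyInjector.get() until it is migrated, then this class is deleted.
class LegacyInjector:
    _container = None

    @classmethod
    def install(cls, container):
        # Called once from the composition root at startup.
        cls._container = container

    @classmethod
    def get(cls, key):
        if cls._container is None:
            raise RuntimeError("container not installed yet")
        return cls._container[key]
```

Is a shim like this an accepted stepping stone, or does it just entrench the service-locator habit I'm trying to remove?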

+",147650,,-1,,42837.31319,42464.62986,Gradually move codebase to dependency injection container,,2,7,,,,CC BY-SA 3.0,, +310312,1,310404,,2/17/2016 11:08,,0,529,"

My company has an old GUI application on VxWorks. Now I have been asked to port this application to Windows XP on a new hardware platform. The original application calls WindML & Zinc (Tornado's GUI libraries). To port this application successfully, I can see these approaches:

+ +
    +
  1. Rewrite the GUI functions using VC++ on Windows: this could be very time-consuming, as the original designer didn't anticipate this porting. Even if he had, the effort would still be too heavy.
  2. +
  3. Develop WindML/Zinc-compatible libraries on Windows using VC++: that is, replace the original VxWorks GUI libraries with compatible Windows GUI libraries. This could be more systematic, but the effort is still very heavy.
  4. +
  5. Configure WindML/Zinc for a Windows version: that is, VxWorks's IDE, Tornado, can be configured to build an image for Windows. This approach is the most efficient, but unfortunately, for some reason, it is not allowed in my company.
  6. +
  7. Use ""OS Changer"" of MapuSoft: MapuSoft claims that their product, OS Changer, can serve this job. But the issue is that OS Changer is still very strange to me. I don't have confidence for it. I don't know how much it can serve it.
  8. +
+ +

Further information about my application:

+ +
    +
  1. My VxWorks-based GUI application has about 140 K lines.
  2. +
  3. There are more than 3000 lines containing keywords belonging to Zinc and more than 2000 lines containing keywords belonging to WindML.
  4. +
+ +

Is there any other approach for porting a GUI between different OS platforms? I know the porting project contains not only the GUI portion, but also other new hardware-dependent portions. For now, though, my focus is only on the GUI portion.

+",184656,,184656,,42418.28542,42418.52847,Port GUI program from VxWorks to Windows,,2,10,1,42425.76667,,CC BY-SA 3.0,, +310318,1,310340,,2/17/2016 13:08,,2,9672,"

I find this behaviour in Python quite peculiar, and I believe it can lead to many bugs, especially if you have a function/method that takes in a list and returns another list after carrying out some operations on the elements in the list.

+ +
x = []
+
+val = (row for row in x) # create a generator
+print(next(val)) # will raise StopIteration exception
+
+for i in x:
+    if not i:
+        raise ValueError('The sequence is empty') # Exception is not raised
+    print(i) # does nothing and does not raise an exception
+
+ +

If you try to iterate over a generator with next and it's empty or has reached the end, a StopIteration exception is raised, but this is not the same when you use a for loop to iterate over a list or any iterable in Python.

+ +

I know I could just write if not x: to check if the list is empty, but I believe an empty iterable should raise an exception when you try to iterate over it, as there is no benefit in looping over it.

+ +

I want to know if there's any special reason for this behavior in the language design, or am I not considering another case where it might be useful? I think the same goes for Java too.
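The closest I can get to the behaviour I want is wrapping the iterable myself (my own sketch, not anything built in):

```python
# A wrapper that raises on the first attempted iteration of an empty
# iterable, instead of silently running the loop body zero times.
def require_nonempty(iterable):
    it = iter(iterable)
    try:
        first = next(it)
    except StopIteration:
        # Convert the protocol signal into a loud, visible error.
        raise ValueError("the sequence is empty") from None
    yield first
    yield from it
```

But since this is a generator, it only raises once something actually pulls from it, which brings me back to the question of why iteration over nothing is silently fine by design.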

+",150031,,31260,,42417.55139,42417.69722,Why don't empty iterables in python raise Exceptions when you try to iterate over them,,3,2,,42424,,CC BY-SA 3.0,, +310319,1,310321,,2/17/2016 13:12,,0,221,"

Command Query Separation is a useful principle, though it's not always ideal. Sometimes you need to run a process, which will result in useful data you need to return. My specific case is uploading a file to a server and getting the ID of the file (which is assigned by another system I have no control over).

+ +

In simple terms, the function is something like this:

+ +
def upload_file(filepath):
+    # Set up filedata with values
+    result = upload_to_server(filedata)
+    return result['id']
+
+ +

I'm not debating whether or not to disregard CQS, but I am wondering how to make this especially clear to a user. For instance, upload_file doesn't make the return value explicit, but get_upload_file is also ambiguous. A docstring is an obvious must to make the function explicit for anyone who reads it.

+ +

But I was wondering: are there any other helpful patterns that can make it explicit that a function performs actions and returns a value? Or alternatively, should I just focus on making one of these aspects clear and not worry about fully informing the user?
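One option I've sketched (the names and the fake server call here are my own invention, not my real code) is to put the answer both in the name and in an explicit result type, so the action and the returned value are visible at the call site:

```python
from typing import NamedTuple

class UploadResult(NamedTuple):
    file_id: str               # the server-assigned ID

def _upload_to_server(filepath):
    # Stand-in for the real upload call from the question.
    return {"id": "abc123"}

def upload_file_returning_id(filepath: str) -> UploadResult:
    """Upload *filepath* and return the server-assigned file ID."""
    result = _upload_to_server(filepath)
    return UploadResult(file_id=result["id"])
```

Is naming the side effect and the answer together like this a recognised convention, or just noise?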

+",181126,,,,,42417.55486,How to make it clear I'm violating Command Query Separation,,1,0,,,,CC BY-SA 3.0,, +310322,1,,,2/17/2016 13:27,,1,46,"

I have a program that processes measurements of different types, using different units. For example, kilometers, miles and meters. Or pounds, kilograms and ounces. Each value has to be associated with a unit identifier.

+ +

I'm thinking about using URNs, instead of plain strings, to identify the unit. For example:

+ +
urn:mycompany.com:unit:distance:mile
+urn:mycompany.com:unit:duration:second
+urn:mycompany.com:unit:speed:knot
+
+ +

But what about using multipliers?

+ +
urn:mycompany.com:unit:distance:meter
+urn:mycompany.com:unit:distance:kilometer
+urn:mycompany.com:unit:distance:centimeter
+
+ +

I don't like minting each of them as a separate URN, because they are basically the same unit, just using a different multiplication factor. And besides, it means I would have to create 13 different versions of each unit that way.

+ +

And what about derived units, such as mph?

+ +
urn:mycompany.com:unit:speed:mph
+urn:mycompany.com:unit:speed:miles_per_hour
+urn:mycompany.com:unit:speed:(unit:distance:mile/unit:duration:hour)
+
+ +

Somehow I like the latter one the best, but is that even a valid urn? And if not, how can I change it so that it is?

+ +

And this is a simple example. Let's take the scientific definition of Joules as kg*m^2/s^2.

+ +
urn:mycompany.com:unit:energy:(unit:mass:kilogram*unit:distance:meter*unit:distance:meter/(unit:duration:second*unit:duration:second))
+
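For illustration only (the helper names and the ?prefix= suffix are my own invention; whether such a suffix yields a valid URN depends on which URN specification you target), one could keep a single URN per base unit and carry the multiplier as a separate component instead of minting 13 prefixed variants:

```python
# SI prefix -> multiplication factor; "" means the unprefixed base unit.
SI_PREFIXES = {"": 1.0, "kilo": 1e3, "centi": 1e-2, "milli": 1e-3}

def unit_urn(dimension, base, prefix=""):
    # One URN per base unit; the multiplier rides along as a suffix.
    urn = f"urn:mycompany.com:unit:{dimension}:{base}"
    return urn + (f"?prefix={prefix}" if prefix else "")

def factor(urn):
    """Extract the multiplication factor encoded in the URN suffix."""
    prefix = urn.split("?prefix=")[1] if "?prefix=" in urn else ""
    return SI_PREFIXES[prefix]
```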
+",216756,,,,,42417.56042,Expressing units as urn's,,0,3,,,,CC BY-SA 3.0,, +310329,1,,,2/17/2016 14:19,,1,74,"

I have following task:

+ +

I need to send several instructions to the hardware, one after another. The next instruction is never sent before either the answer to the previous instruction arrives (valid answer or error) or a timeout occurs.

+ +

Currently, each of those instructions has its own listener with onReceived(answer) and onError(ErrorCode), and the next instruction is called from the onReceived of the previous one. As you can imagine, this code is not optimal: with every new instruction added to the sequence, the class gets even more cluttered and harder to maintain or debug.

+ +

My goal would be to have some kind of simple class that holds a list of instructions in the correct order, where each instruction has a way to process its own received answer and store it in a common place. This simple manager class would just call execute() or next(), the correct instruction would be sent, and the answer would be stored if correct, or the command repeated a certain number of times in case of error.

+ +

The Command pattern seems to be the way to go, but I am not sure if I am on the right track, or how to react to responses in a correct way.
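A minimal sketch of the Command-style sequencing described above (all class and method names are illustrative; the real transport, timeout handling, and error codes would replace the send callable and IOError here):

```python
class Instruction:
    """One command: knows its payload and how many times to retry on error."""

    def __init__(self, payload, retries=3):
        self.payload = payload
        self.retries = retries

    def execute(self, send):
        """Send the payload; return the parsed answer or re-raise the last error."""
        last_error = None
        for _ in range(self.retries):
            try:
                return send(self.payload)
            except IOError as exc:  # stand-in for a timeout or device error
                last_error = exc
        raise last_error

class InstructionSequence:
    """Runs instructions strictly one after another, collecting answers."""

    def __init__(self, instructions, send):
        self.instructions = list(instructions)
        self.send = send
        self.answers = []  # the common place for processed answers

    def run(self):
        for instruction in self.instructions:
            # Next instruction is only reached once this one succeeded
            # (or exhausted its retries and raised).
            self.answers.append(instruction.execute(self.send))
        return self.answers
```

Adding a new instruction then means appending one more object to the list, rather than chaining another listener.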

+",44386,,31260,,42417.60417,42417.60417,is Command appropriate software pattern for this?,,0,3,,,,CC BY-SA 3.0,, +310330,1,,,2/17/2016 14:41,,8,1959,"

Item 23 of Effective C++ (3rd edition) by Scott Meyers is titled: ""Prefer non-member non-friend functions to member functions"". The reason Scott suggests is the increase of encapsulation. So, only the functions that need to access the private members are made member functions, the other functions are made free-standing.

+ +

Now, the user does not care about my considerations. All he wants is a clean uniform interface that is as simple to learn as possible. Having some functions as members and some as free-standing seems to break this uniformity. Furthermore, the choice of which function is member and which is free-standing may seem random to the user, who is unaware of the implementation details (i.e. the private members) of my class.

+ +

One answer even cites a language proposal related to this issue.

+ +

My question is: is it a good idea to always provide a free-standing wrapper function for each member function in order to present the user with a uniform interface consisting of free-standing functions only?

+",216771,,-1,,42837.31319,42417.69236,Member vs. free-standing functions with respect to interface uniformity,,1,8,,,,CC BY-SA 3.0,, +310334,1,,,2/17/2016 15:26,,0,1100,"

We are developing a system for a customer that does not want to allow installation of packages from outside repositories. The project is in Python and defines its dependencies via setuptools; most of these dependencies are found on PyPI, and others are found on our company's repository. Some of them require system libraries to be present (e.g. libevent for gevent). None of them can be installed (as a direct download from the repository) in the customer's servers.

+ +

Right now, we are packaging the project, its dependencies, and recursively all dependencies of its dependencies, into RPMs, which we bundle into a single distribution tarball. This is time-consuming and error-prone. Furthermore, we do not really need versioning, since the project is a service and client code does not get to choose which version of the service it talks to. We would just need to ship the latest version once we know it is stable.

+ +

The main alternative I have been considering is buildout: build the project in a staging machine with the same OS and interpreter as the production machine, then tar the whole directory and copy to the production machine. But I am not sure whether this would really be an improvement over the current distribution method.

+ +

What other options are there? Which one has been used successfully? Is there some kind of community best practice here?

+",119891,,31260,,42417.64583,43579.17014,How to distribute a project with all its dependencies?,,1,6,,,,CC BY-SA 3.0,, +310337,1,310352,,2/17/2016 16:05,,3,1295,"

I had previously deployed application storage in HTML5 to great success in the deployment stage of a few client projects. I used the app cache feature mostly as a way to let my clients install kiosk software in their web browsers remotely, by setting up a temporary ""install url"" that they would visit to load the app into their browsers. Later, if I needed to patch or update the software, I would simply re-establish that same install url, and the app cache would notice an updated manifest was available and reload the application's assets. It worked great.

+ +

I just read that Mozilla has announced that the app cache features in their browsers are now deprecated, and they are advocating a transition to service workers to duplicate the same functionality. I'm happy to learn the new technology, but I was wondering why this simple, useful, and relatively new technology is being canned. And how safe would it be to continue using it in 2016, judging from the canning of similar web technologies in years past?

+",103795,,103795,,42417.68403,42417.76736,The future of application cache in HTML 5,,1,4,,42417.79444,,CC BY-SA 3.0,, +310338,1,310341,,2/17/2016 16:31,,2,164,"

I'm looking for a name or design pattern on an ACL Based permission system (at least, I think it has some features of ACL).

+ +

The case
+An admin is allowed to create Managers. These managers will be granted several permissions based on what the admin submits when creating them (note that the permissions per manager can differ). If allowed by the admin, a manager is able to create their own employees. However, the permissions of the employees can never exceed the permissions of their manager, and the permissions of the managers can never exceed those of the admin.

+ +

What is this kind of permission system called? Are there any standard definitions or examples of this? Or is this a novel approach to user security?
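To make the constraint concrete, here is a minimal sketch of the "a grant may never exceed the creator's grant" rule described above (class and method names are my own):

```python
class Account:
    """An account whose permissions are constrained by its creator's."""

    def __init__(self, permissions, parent=None):
        self.parent = parent
        self.permissions = set(permissions)

    def create_subordinate(self, permissions):
        requested = set(permissions)
        # A subordinate (manager under admin, employee under manager)
        # may never exceed its creator's permission set.
        if not requested <= self.permissions:
            raise PermissionError("grant exceeds creator's permissions")
        return Account(requested, parent=self)
```

By induction, every account in the tree then holds a subset of its ancestors' permissions.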

+",216785,,35276,,42417.74097,42417.74097,What kind of security enforces a tree of permissions on it's users?,,1,0,1,,,CC BY-SA 3.0,, +310347,1,,,2/17/2016 17:27,,4,290,"

Item 23 of Effective C++ (3rd edition) by Scott Meyers is titled: ""Prefer non-member non-friend functions to member functions"". I understood that the intention of Scott was that, whenever a function can be implemented in terms of the existing minimal set of member functions, it should be made free-standing.

+ +

However, I saw an opinion today that having a free-standing function call a state-changing member function is bad practice. This opinion basically states that all free-standing functions should be pure. Is this a universal understanding, or is my initial understanding justifiable as well?

+",216771,,,,,42417.79097,Free-standing function calling a state-changing member function -- a bad practice?,,2,1,,,,CC BY-SA 3.0,, +310357,1,310358,,2/17/2016 20:43,,2,247,"

Purely out of curiosity: I was posed a question about a hypothetical game corp. that has an online game for which players must pay a monthly fee. The specifics were that its service agreement stated that the user only has rights to the client as long as their subscription is active.

+ +

However, said hypothetical SA poses a case where two players, let's call them John and Susan. John owns his own personal PC with the client installed. If John's subscription expired, by the SA, he would have to uninstall it essentially immediately. However, he does what any normal user would do and does not uninstall it. He is not licensed to run the client. However, Susan, assuming she has an active subscription, could run the client that resides on John's PC.

+ +

What if this client was under the GPL license? Would their monthly subscription expiring immediately deny their access to the source code of the application? (In terms of licensing, I'm not trying to somehow delete the code from their PC when their subscription expires)

+ +

Or, is this situation impossible due to it violating the license? (I can't remember where, but I heard somewhere that subscription software is incompatible with the GPL; however, I haven't found any source for that.)

+",213775,,,,,42418.84375,GPL source code access in subscription model?,,2,2,,,,CC BY-SA 3.0,, +310359,1,,,2/17/2016 21:00,,1,260,"

To me, the word ""default"" means what happens if I do nothing. So I feel that the ""default constructor"" should refer only to the one that the compiler provides if I do not write any. That makes it clear that ""no argument constructor"" means one that I have written, just like any method could have no arguments.

+ +

I don't see that this is an agreed-on convention though, because many people use ""default constructor"" to mean the same thing as ""one with no arguments"". But that is not the case for other types of methods. All my Accessors are not referred to as ""default accessors"", and calculation methods that take no arguments are not referred to as ""default calculations"". So why is this confused for Constructors? Can we clear it up?

+",,user186205,,,,42418.02292,"Should ""default constructor"" mean the compiler-generated one and ""no-argument constructor"" mean one you create?",,2,2,,,,CC BY-SA 3.0,, +310360,1,,,2/17/2016 21:06,,0,356,"

At my job I'm currently refactoring a very old PHP CMS. Until now, ""code handling"" was done by simply copying the whole thing and modifying it to fit whatever was needed for the job at hand (actually the CMS implementation itself is more an example site than a framework for building one).

+ +

Luckily, it is pretty progressive for when it was written and already uses classes. So the first thing I did was split the CMS up into several composer packages replacing the current classes. Since there are so many copied versions, being able to drop-in replace some classes with backwards compatibility is quite important to allow an easier transition to the new system.

+ +

By now I have a package for logging and one wrapping the database class (now mysqli) with a dependency on the logging package. Currently the logging component isn't doing much more than providing a factory method to create logger instances with the usual handlers1 already set up. It should however be able to log to an external database if the database component is installed, but it shouldn't depend on it. To make matters worse, the database component has to be initialized after the logging component to alert us in case of a database failure, but errors happening before the database component is up have to be stored, too.

+ +

I thought of several concepts, but none of them are convincing:

+ +
    +
  1. Putting it all in one package: This would pack two rather unrelated packages together and really hurt the single responsibility principle.
  2. Circular reference: Not allowed by composer. Also a bad idea for several reasons, and not too different from packaging them together.
  3. Use own/native database access: Easy to do, but it violates DRY and is really the exact thing I'm trying to avoid by using composer.
  4. Factory: Sounds good, but it would really just create a super-package having both as a dependency. Also, this would take quite a long time to get up and running, which I don't have (refactoring instead of rewriting). Also, it's hard to integrate into existing code, and giving a part of the code base yet another flavor seems counterproductive.
+ +

My current solution is to have a MySQLHandler class in the logging package, which is always registered as a handler. The class checks for the existence of a stager class (located in the database package) and forwards all errors to it if it exists and logging to the database is enabled. The stager itself is a singleton class which stores all messages until an activation method is called. This method then finally connects to the database and flushes the backlog (any further messages are auto-flushed).

+ +

This works for now, but it feels really dirty to check for class existence and to have to call an extra activation function after the logging setup. What is the correct way to handle this dependency situation?

+ +

1 Monolog works by having several instances of loggers which provide context and each of them has handlers to write log files, send mails etc.
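The stager idea described above can be sketched language-agnostically; here it is in Python for brevity (the class and method names are my own, and the db object is a stand-in for the real database component):

```python
class BufferedDatabaseHandler:
    """Buffer log records until a database connection is activated,
    then flush the backlog and write directly from that point on."""

    def __init__(self):
        self._buffer = []
        self._db = None

    def handle(self, record):
        if self._db is None:
            self._buffer.append(record)  # DB not up yet: stage the record
        else:
            self._db.insert(record)      # DB active: write through

    def activate(self, db):
        """Called once the database component has been initialized."""
        self._db = db
        for record in self._buffer:      # flush everything staged so far
            db.insert(record)
        self._buffer.clear()
```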

+",132297,,132297,,42417.89028,44179.00347,How to handle optional dependencies in php?,,1,0,1,,,CC BY-SA 3.0,, +310362,1,310371,,2/17/2016 21:29,,3,576,"

I am very new to programming and have this humble OOP-related question:

+ +

Can we have a module, in a program written in an OOP manner, that exhibits two or more OOP features? Say, both encapsulation and polymorphism, or both inheritance and modularity, etc.

+",,user216836,,,,42418,"Shall I call this ""Multitasking"" in OOP?",,3,0,1,,,CC BY-SA 3.0,, +310366,1,310374,,2/17/2016 22:17,,0,1501,"

There's an application which accepts a user id and password to log in. Validation of the id and password is processed by another system. After password validation, the application generates a One Time Passcode (OTP) and sends it as an SMS or email to the user.

+ +

This OTP expires in 15-30 minutes. Only when the OTP is validated successfully does the user's login complete.

+ +

If the user has two devices, and he enters his password in one device, an OTP is generated. But he doesn't use the OTP on the first device. He uses his second device to log in again, and enters his password there. Since it is within 15-30 minutes, the same OTP is generated again. Now he can log in to two devices with the same OTP.

+ +

Does this scenario appear to be good?

+ +

Continuing the scenario: meanwhile, his id is locked for some reason. He attempts to enter the OTP on either of the devices. Should we now allow the login? Or should we check the id status every time an OTP is validated?

+ +

Similarly, if the password is changed via the help desk within the 15-30 minute interval, should we reset the existing sessions that have passed password validation but are waiting for the OTP? Or can we allow the login with the older password itself?

+ +

The technical problem here is that password validation happens in one system, while OTP validation happens in another, and the two are completely separate. They don't communicate with each other.

+ +

Is this OTP validation procedure appropriate?
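For reference, a minimal sketch of the OTP lifecycle described in the question (reuse of a still-valid code for a second device, plus an invalidation hook for password changes or account locks). All names are my own, and real systems would store this server-side with rate limiting:

```python
import time
import secrets

class OtpStore:
    TTL = 15 * 60  # seconds an issued code stays valid

    def __init__(self):
        self._entries = {}  # user_id -> (otp, issued_at)

    def issue(self, user_id):
        # Reuse the outstanding code while it is still valid, so a second
        # device within the window sees the same OTP (as in the scenario).
        otp, issued = self._entries.get(user_id, (None, 0.0))
        if otp is None or time.time() - issued > self.TTL:
            otp = f"{secrets.randbelow(10**6):06d}"
            self._entries[user_id] = (otp, time.time())
        return self._entries[user_id][0]

    def invalidate(self, user_id):
        # Call this on password change or account lock, so the other
        # system's state change also kills the pending OTP.
        self._entries.pop(user_id, None)

    def verify(self, user_id, otp):
        stored, issued = self._entries.get(user_id, (None, 0.0))
        return stored == otp and time.time() - issued <= self.TTL
```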

+",216838,,97259,,42417.94931,42418.78889,Handling delay in entry of OTP for log in validation,,2,12,,,,CC BY-SA 3.0,, +310369,1,,,2/17/2016 22:49,,3,50,"

I am having a hard time trying to figure out the best way to report on and calculate layered time data.

+ +

The problem:

+ +

A person can sign in multiple times, and during each sign-in there are multiple statuses the person can be in. These statuses can differ per sign-in and per person. I am trying to figure out the best way to calculate the time spent in each status across all sign-ins, flattened together as one whole session.

+ +

+ +

Let L1 be login 1 for person 1, let L2 be login 2. Essentially I am trying to calculate B or Billable status.

+ +

Assuming they were only ever in a Billable status on both logins then the bottom line would be the time spent in Billable.

+ +

Current solution

+ +

We store a session in our Postgres database and every 30 minutes go back and flatten the sessions into one based on start/end time and figure out time in each status. This works great for reporting that is based at least 30 minutes back, but not for real time reporting.

+ +

My tools:

+ +

I am trying to save each flattened second in each status per person to a Cassandra table. What would such a table look like? Could doing these calculations be a purely database operation?

+ +

Anyone got any ideas on how to flatten this data in C* so we only get the seconds each person has spent in each status regardless of how many logins were reporting that status?
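The flattening itself is a pure interval-merge, which could run either in application code or in the database. A minimal sketch in Python (function name is my own; intervals would be the (start, end) epoch-second spans of one status, e.g. Billable, across all of a person's logins):

```python
def merged_seconds(intervals):
    """Total seconds covered by a set of (start, end) intervals,
    counting overlapping logins only once."""
    total = 0
    current_start = current_end = None
    for start, end in sorted(intervals):
        if current_end is None or start > current_end:
            # Gap before this interval: close out the previous run.
            if current_end is not None:
                total += current_end - current_start
            current_start, current_end = start, end
        else:
            # Overlaps or touches the current run: extend it.
            current_end = max(current_end, end)
    if current_end is not None:
        total += current_end - current_start
    return total
```

Running this per person per status over the raw session rows would give the flattened totals without waiting for the 30-minute batch job.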

+",165689,,81495,,42418.05903,42418.05903,Realtime layered time data storage and calculations,