diff --git "a/stack_exchange/SE/SE 2016.csv" "b/stack_exchange/SE/SE 2016.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2016.csv" @@ -0,0 +1,144146 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense,, +306303,1,306306,,1/1/2016 5:13,,-7,1000,"
I would like to build a piece of software that relies heavily on client-side development (JavaScript), and I need it to communicate with the server. Is there something similar to SignalR, but on the client side, where I can keep communicating with the server to get new updates, if any?
+ +Something similar to LinkedIn, where when there is a new update, it will show you that there are 1 or 2 updates available, so you can click to see them.
+ +I know I can use setTimeout and keep polling the server, but is that the right way?
+",208578,,,,,42370.30764,Technology like SignalR but client side,I'm going to save some string payload in the database. I have two global configurations:
+ +These can be enabled or disabled using the configuration in such a way that either only one of them is enabled, both are enabled, or both are disabled.
+ +My current implementation is this:
+ +if (encryptionEnable && !compressEnable) {
+ encrypt(data);
+} else if (!encryptionEnable && compressEnable) {
+ compress(data);
+} else if (encryptionEnable && compressEnable) {
+ encrypt(compress(data));
+} else {
+ data;
+}
+
+
+I'm thinking about the Decorator pattern. Is it the right choice, or is there perhaps a better alternative?
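For what it's worth, the Decorator idea can be reduced to composing the enabled steps into a pipeline once, then applying it unconditionally; a minimal Python sketch (the encrypt/compress functions are stand-ins, not real implementations):

```python
def compress(data):
    # stand-in for a real compression step
    return "compressed(" + data + ")"

def encrypt(data):
    # stand-in for a real encryption step
    return "encrypted(" + data + ")"

def build_pipeline(encryption_enabled, compress_enabled):
    # Decide the step list once, from the configuration flags; order
    # matters: compress first, then encrypt, matching
    # encrypt(compress(data)) in the question.
    steps = []
    if compress_enabled:
        steps.append(compress)
    if encryption_enabled:
        steps.append(encrypt)

    def process(data):
        for step in steps:
            data = step(data)
        return data
    return process

process = build_pipeline(encryption_enabled=True, compress_enabled=True)
print(process("payload"))  # encrypted(compressed(payload))
```

The flag checks happen exactly once, at construction time; the call sites never branch.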
+",208350,,134647,,42379.90972,42379.90972,Is there a design pattern to remove the need to check for flags?,I am reading GoF, and the intent of builder is mentioned as to separate creation of complex object from its representation.
+ +I couldn't understand what representation means in this context. +What does it mean?
+ +The text-parsing example provided in the motivation section doesn't seem to separate construction from representation; rather, it separates the algorithm for interpreting the textual format from the creation and representation of the converted format.
+ + +",204027,,,,,42370.75903,Builder Pattern : Seperation of construction from representation,I'm making a framework in Java and have a method in my abstract class, BasicPhase, called processAction (snippet below). It takes two abstract classes as parameters since the user should implement them in child classes. However, it results in the user being required to constantly downcast every time they override the method. How can this be avoided?
+ +BasicPhase.java
public abstract class BasicPhase {
+ ...
+ public abstract BasicGameState processAction(BasicAction action, BasicGameState state);
+ ...
+}
+
+
+Example User Class
+ +MyPhase.java
+ +public class MyPhase extends BasicPhase {
+ @Override
+ public BasicGameState processAction(BasicAction basicAction, BasicGameState basicState) {
+ MyGameState state = (MyGameState) basicState; //undesired
+ MyAction action = (MyAction) basicAction; //undesired
+
+ //Game Logic Here
+ }
+}
+
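In Java this is typically solved with generics, e.g. BasicPhase&lt;A extends BasicAction, S extends BasicGameState&gt;, so that each subclass fixes the concrete types and no downcast is needed. The same idea, sketched in Python's typing module purely for illustration (the class names mirror the question and are otherwise hypothetical):

```python
from typing import Generic, TypeVar

class BasicAction: ...
class BasicGameState: ...

A = TypeVar("A", bound=BasicAction)
S = TypeVar("S", bound=BasicGameState)

class BasicPhase(Generic[A, S]):
    # The framework method is declared against the type parameters,
    # so subclasses override it with their own concrete types.
    def process_action(self, action: A, state: S) -> S:
        raise NotImplementedError

class MyAction(BasicAction):
    def __init__(self, delta):
        self.delta = delta

class MyGameState(BasicGameState):
    def __init__(self, score=0):
        self.score = score

class MyPhase(BasicPhase[MyAction, MyGameState]):
    def process_action(self, action: MyAction, state: MyGameState) -> MyGameState:
        # No downcast needed: the subclass fixed the concrete types.
        return MyGameState(state.score + action.delta)
```

In Java the override would likewise receive MyAction/MyGameState directly once the base class is parameterized.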
+",185305,,,,,42370.86667,Automatic Downcasting,I watched Raymond Hettinger's Pycon talk "Super Considered Super" and learned a little bit about Python's MRO (Method Resolution Order), which linearises a class's "parent" classes in a deterministic way. We can use this to our advantage, as in the code below, to do dependency injection. So now, naturally, I want to use super for everything!
In the example below, the User class declares its dependencies by inheriting from both LoggingService and UserService. This isn't particularly special. The interesting part is that we can use the Method Resolution Order to also mock out dependencies during unit testing. The code below creates a MockUserService which inherits from UserService and provides an implementation of the methods we want to mock; in this example, we provide an implementation of validate_credentials. In order to have MockUserService handle any calls to validate_credentials, we need to position it before UserService in the MRO. This is done by creating a wrapper class around User called MockUser and having it inherit from User and MockUserService.
Now, when we call MockUser.authenticate and it, in turn, calls super().validate_credentials(), MockUserService comes before UserService in the Method Resolution Order and, since it offers a concrete implementation of validate_credentials, that implementation will be used. Yay - we've successfully mocked out UserService in our unit tests. Consider that UserService might do some expensive network or database calls - we've just removed that latency. There is also no risk of UserService touching live/prod data.
class LoggingService(object):
+    """
+    Just a contrived logging class for demonstration purposes
+    """
+    def log_error(self, error):
+        pass
+
+
+class UserService(object):
+    """
+    Provide a method to authenticate the user by performing some expensive DB or network operation.
+    """
+    def validate_credentials(self, username, password):
+        print('> UserService::validate_credentials')
+        return username == 'iainjames88' and password == 'secret'
+
+
+class User(LoggingService, UserService):
+    """
+    A User model class for demonstration purposes. In production, this code authenticates user credentials by calling
+    super().validate_credentials and having the MRO resolve which class should handle this call.
+    """
+    def __init__(self, username, password):
+        self.username = username
+        self.password = password
+
+    def authenticate(self):
+        if super().validate_credentials(self.username, self.password):
+            return True
+        super().log_error('Incorrect username/password combination')
+        return False
+
+
+class MockUserService(UserService):
+    """
+    Provide an implementation for the validate_credentials() method. Now, calls from super() stop here when it is part of the MRO.
+    """
+    def validate_credentials(self, username, password):
+        print('> MockUserService::validate_credentials')
+        return True
+
+
+class MockUser(User, MockUserService):
+    """
+    A wrapper class around User to change its MRO so that MockUserService is injected before UserService.
+    """
+    pass
+
+
+if __name__ == '__main__':
+    # Normal usage of the User class, which uses UserService to resolve super().validate_credentials() calls.
+    user = User('iainjames88', 'secret')
+    print(user.authenticate())
+
+    # Use the wrapper class MockUser, which positions MockUserService before UserService in the MRO. Since
+    # MockUserService provides an implementation for validate_credentials(), calls to super().validate_credentials()
+    # from the MockUser class will be resolved by MockUserService and not passed on to the next in line.
+    mock_user = MockUser('iainjames88', 'secret')
+    print(mock_user.authenticate())
+
+
+This feels quite clever, but is it a good and valid use of Python's multiple inheritance and Method Resolution Order? When I think about inheritance the way I learned OOP in Java, this feels completely wrong, because we can't say User is a UserService or User is a LoggingService. Thought of that way, using inheritance as the above code does doesn't make much sense. Or does it? If we use inheritance purely to provide code reuse, rather than to express parent-child relationships, then this doesn't seem so bad.
Am I doing it wrong?
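For comparison, the more conventional route in Python is explicit constructor injection plus unittest.mock, which swaps the implementation without touching the inheritance graph; a minimal sketch (these trimmed-down classes are stand-ins for the ones above):

```python
from unittest import mock

class UserService:
    def validate_credentials(self, username, password):
        # imagine an expensive DB or network call here
        return False

class User:
    def __init__(self, service, username, password):
        self.service = service            # dependency passed in explicitly
        self.username = username
        self.password = password

    def authenticate(self):
        return self.service.validate_credentials(self.username, self.password)

user = User(UserService(), "iainjames88", "secret")
with mock.patch.object(UserService, "validate_credentials", return_value=True):
    assert user.authenticate() is True    # mocked: no expensive call happens
assert user.authenticate() is False       # the real implementation is back
```

With injection, the collaborator is visible at the call site, which sidesteps the "who handles this super() call?" question entirely.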
+",74679,,,,,43113.13542,Using Python's Method Resolution Order for Dependency Injection - is this bad?,Languages like Java support true method overloading:
+ +class Overload {
+    void demo(int a) {
+        System.out.println("a: " + a);
+    }
+    void demo(int a, int b) {
+        System.out.println("a and b: " + a + "," + b);
+    }
+}
+
+
+Languages like PHP, on the other hand, don't. So when you face the problem of processing something that depends on context, which approaches come to mind, and which do you usually choose?
+ +A typical problem could be an abstract payment system, where the outside user should not think about the concrete payment provider's details (which can differ) and should just call Provider.pay(PayDetails). In the PaymentProvider interface we must fix that it works with PayDetails. But a concrete implementation might work, for example, only with its own ConcretePayDetails.
Is it worth polyfilling features such as ad-hoc polymorphism in languages that don't support them?
+ +How can context-dependent method dispatching best be solved in such languages?
+ +Example of possible implementation available at https://github.com/garex/php-ad-hoc-polymorphism-polyfill/
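Python, which also lacks method overloading, illustrates one common polyfill: dispatch on the runtime type of the argument, e.g. with functools.singledispatch. A minimal sketch (the payment classes are hypothetical stand-ins):

```python
from functools import singledispatch

class PayDetails:
    amount = 10

class ConcretePayDetails(PayDetails):
    provider_token = "abc"

@singledispatch
def pay(details):
    # fallback used for plain PayDetails
    return f"generic payment of {details.amount}"

@pay.register
def _(details: ConcretePayDetails):
    # chosen automatically when the argument is a ConcretePayDetails
    return f"provider payment of {details.amount} with {details.provider_token}"

print(pay(PayDetails()))          # generic payment of 10
print(pay(ConcretePayDetails()))  # provider payment of 10 with abc
```

In PHP the equivalent is usually hand-rolled: an instanceof check, or a dispatch table keyed by class name, hidden behind the public method.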
+",29028,,,,,42372.52292,How to simulate method overloading in langs without such feature?,Given that Python allows for multiple inheritance, what does idiomatic inheritance in Python look like?
+ +In languages with single inheritance, like Java, inheritance is used when you can say that one object "is-a" kind of another object and you want to share code between the objects (from the parent object to the child object). For example, you could say that a Dog is an Animal:
public class Animal {...}
+public class Dog extends Animal {...}
+
+
+But since Python supports multiple inheritance we can create an object by composing many other objects together. Consider the example below:
+ +class UserService(object):
+    def validate_credentials(self, username, password):
+        # validate the user credentials are correct
+        pass
+
+
+class LoggingService(object):
+    def log_error(self, error):
+        # log an error
+        pass
+
+
+class User(UserService, LoggingService):
+    def __init__(self, username, password):
+        self.username = username
+        self.password = password
+
+    def authenticate(self):
+        if not super().validate_credentials(self.username, self.password):
+            super().log_error('Invalid credentials supplied')
+            return False
+        return True
+
+
Is this an acceptable or good use of multiple inheritance in Python? Instead of saying inheritance means one object "is-a" kind of another object, we create a User model composed of UserService and LoggingService.
All the logic for database or network operations can be kept separate from the User model by putting it in the UserService object, and all the logic for logging can be kept in the LoggingService.
Some problems I see with this approach are:
If User inherits from, or is composed of, UserService and LoggingService, is it really following the principle of single responsibility?

To call UserService.validate_credentials we have to use super. This makes it a little bit more difficult to see which object is going to handle the method, and isn't as clear as, say, instantiating UserService and doing something like self.user_service.validate_credentials.

What would be the Pythonic way to implement the above code?
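A minimal sketch of the compositional alternative, with the services held as attributes rather than base classes (the default arguments stand in for a real injection mechanism):

```python
class UserService:
    def validate_credentials(self, username, password):
        # stand-in for the real DB/network check
        return username == "alice" and password == "secret"

class LoggingService:
    def __init__(self):
        self.errors = []

    def log_error(self, error):
        self.errors.append(error)

class User:
    def __init__(self, username, password,
                 user_service=None, logging_service=None):
        self.username = username
        self.password = password
        # "has-a" instead of "is-a": the services are plain attributes.
        self.user_service = user_service or UserService()
        self.logging_service = logging_service or LoggingService()

    def authenticate(self):
        if not self.user_service.validate_credentials(self.username, self.password):
            self.logging_service.log_error("Invalid credentials supplied")
            return False
        return True
```

Each call site now names its collaborator (self.user_service, self.logging_service), which addresses the super() ambiguity directly.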
+",74679,,,,,42513.88125,"Is Python's inheritance an ""is-a"" style of inheritance or a compositional style?",I am reading ""Less is exponentially more"" and there is a list of advantages of Go, first of them, quote ""regular syntax (don't need a symbol table to parse)"".
+ +What does "regular syntax" mean? What properties determine whether "language X does (not) have a regular syntax"?
+",66354,,33920,,42372.32292,42372.32292,Regular syntax -- what does it mean?,Suppose I were to modify a said piece of code and put it back on github, would I have to update the license header.
+ +This is the code(Licensed under the Apache License, Version 2.0): https://github.com/MasterAwesome/android_device_oneplus_onyx/commits/master/liblight/lights.c
+ +The commit ""Use liblights-caf"" is from Google's repository. Now the next few commits I have made is to modify the source code for my device. Should I add my copyright notice in the header too?
+",209547,,,user53019,42371.64167,42371.64444,Do I update the copyright header when i modify the source?,So, I'm still working on my small GUI ""library"". (C++ with boost) +The goal is to provide a simple solution for small SPI displays, using a Raspberry or some embedded board.
+ +Thus, I ended up having a Widget class from which concrete widgets are derived. +Examples would be a textbox or an image frame. +Widgets can contain other widgets to allow groups. +This is achieved via aggregation using a STL container. +In the end, the complete GUI dataset can be seen as some kind of tree, with some nodes having child nodes.
+ +Each widget shall have a draw-method which looks like
+ + void drawMe(brush *myBrush);
+
+
+A brush provides primitive drawing methods like ""setPixel"" or ""drawRectangle"" +This allows the widget to draw itself, using the methods provided by the brush. +A widget first draws itself, then it invokes the drawMe-method of each child widget, if any.
+ +Now lets talk about my problem. +Each widget has its coordinates, relative to the parent. +This is not enough, the brush needs the absolute coordinate of the widget, because a brush does not know anything about children or parents. A brush just provides methods like setPixel(int x, int y) and draws into a pixel buffer.
+ +But I somehow have a bad feeling letting a widgets know its absolute coordinates. This feels wrong. +On the other hand, at the moment a child does not know it is owned by a parent, thus, there's no way a widget can compute its absolute coordinates.
+ +I can implement workaround in the brush, so that the brush supports multiple layers instead of just one canvas (the pixel buffer). A parent then has to set up the layers for its child. +Sounds like a dirty hack.
+ +Also, is it a good idea to et the parent know its child coordinates?
+ +You see, I'm somewhere lost with this issues. +There are GUI frameworks out there, like QT and stuff. +So, my problem should be known and solved already. +Do you have any suggestions? +How would you implement the drawing process?
+ +The whole project is about learning new stuff, mainly OOP paradigms. +I like to hear your opinion on that.
+",207591,,,,,43134.18958,Managing widgets in a simple GUI framework,I'm new to REST API, so I decided to get familiar with it by designing a small web service API. I have its design written down and would like you to review it. I feel like I have made some mistakes in designing it and understanding REST concepts, which I try to address with questions on my design at the very end. I'm mostly unsure about my use of URLs for the API.
+The web service I try to design provides a way for a user to store key=value pairs, i.e. retrieve value knowing its key, update the value of an existing key and delete an existing key. The whole CRUD set of actions!
+Here is what I have came up with.
+Since my API might evolve, I want a notion of API versions, so I have
+GET api.example.com/supported-versions
+
+Which returns a JSON list of integers denoting the API versions supported.
+The API will be available at api.example.com/{VERSION}/ endpoint, e.g. api.example.com/1/ for the first version.
GET api.exmaple.com/1/keys/{KEY}
+
+Allows a user to get a value associated with a key KEY. The server responds with a status code (200, 404, etc.) and a text value on success, both encoded in JSON.
POST and PUT api.exmaple.com/1/keys/{KEY}?value={VALUE}
+
+Allows a user to create/update the value associated with the key KEY. The server returns a status code (200, 404, etc.). VALUE is a text string.
DELETE api.exmaple.com/1/keys/{KEY}
+
+Allows a user to delete a key=value pair with key KEY. The server Which will delete the key=value pair and return a status code (200, 404, etc.).
Let's imagine that there is an OAuth2 authentication set up, which is used for the up above POST/PUT/DELETE methods (but not GET anyone can GET, you don't need authentication for that), so there is a way to uniquely identify a user and keep track which key=value pairs belong to which user.
+Now, I want an authenticated user to be able to get a list of all keys they have, so that they don't have to store this information on client-side in order to send DELETE or PUT request later.
+GET api.exmaple.com/1/keys?page={PAGE}
+
+This allows a user to get a list of all existing keys they have created. PAGE is an optional parameter. User can increase page count until the user gets an error status code. It returns a paged list of all existing keys a user have created and also a status code (200, 404, etc.).
Is there anything wrong with this API?
+Should the first GET/POST/PUT/DELETE from Key=value storage section be on /keys/ or /values/?
Is it fine that the last "GET", the one which returns all keys a user has, is also /keys/?
In response to Aaronaught's response to the question at:
+ +Can't I just use all static methods?
+ +Isn't less memory used for a static method? I am under the impression that each object instance carries around its own executable version of a non-static member function.
+ +Regardless of how much overhead is involved in calling a static method, regardless of poor OO design, and possible headaches down the road, doesn't it use less memory at runtime?
+ +Here is an example:
+ +I make a vector of zero-initialized objects. Each object contains one piece of data (a triangle consisting of nine double's). Each object is populated in sequence from data read from a .stl file. Only one static method is needed. Proper OO design dictates that a method dealing with the the data directly should be distributed to each object. Here is the standard OO solution:
+ +foreach(obj in vec) {
+ obj.readFromFile(fileName);
+}
+
+
+Each obj carries readFromFile's compiled code alongside the data!
Memory is more of a concern than performance in this case, and there is a LOT of data on a constrained system.
+ +Solutions:
+ +Keep readFromFile as a private method and call it with super.callPrivateMethod(), which calls readFromFile. Messy, and still some memory overhead in each object.
+
+Move readFromFile outside obj's scope, so into vec's class or into the calling class. This, in my opinion, breaks data encapsulation.
+",209593,,-1,,42837.31319,43281.29306,Does making a method static save memory on a class you'll have many instances of?,I am working on a legacy code base and I need to find a way to write unit tests for this project.
+ +The project has a three layer architecture (UI-Biz-DAL as we call them) and DAL is totally implemented using ADO.Net and Typed-Datasets and it is full of SQL Scripts.
Our Biz classes have methods that are responsible for doing business logic stuffs and they are dependent on other helper classes and DAL classes.
I know I can use DI to inject these classes to my Biz classes but I think that I should change a lot of code. Here's a solution that I can think of :
There's this TestContext class that acts as a container and can contain mock objects for tests but it does not have anything when it comes to run the actual code so that real objects can be used instead , here is an example :
var dal=TestContext.Current.Resolve<IMyDAL>(@default:new MyDal());
+
+
+as you can see Resolve method accepts an argument of type IMyDAL that will be used in case of not running our tests.
First I would like to know what do you think about this solution
+ +Second I am still thinking about a way to test SQL scripts that are hardcoded in our code base . How can I test them ?
+",49472,,49472,,42372.29861,42372.48542,Having a TestContext to test methods instead of Dependency Injection,Short version of the question: What is a proper way to implement object cloning with deep copy, using generally accepted OOP principles?
+ +I ran into this while looking into the Prototype Design Pattern in the GoF Design Patterns book, but I think it applies to general object cloning.
+ +Wouldn't it be, each class has to properly implement its own instance method of deep_copy, because each class has its own way to ""going through"" all elements, such as left and right for a binary tree, and sometimes, an object A having 2 other references to 2 other objects: B and C, may mean A own B and C, and therefore B and C should also be cloned, while in some cases, such as a node object in a graph, A having a reference to B and C just means it is pointing to B and C and DO NOT own B, C (other nodes in the graph may also point to B and C).
There is a way to clone, which is serialize it and unserialize it (which should be same as data marshalling?) but it doesn't handle the case when the object doesn't own another object, or in the case of a node in a graph, can you serialize and unserialize, and get back a cloned node that points to the proper nodes in the graph as the original node object does?
+ +Another complication may arise, if object A has an instance variable foo, and it has a data structure that reference object B twice, so we really should not clone B twice. Or, if foo reference it once in its data structure, but another instance variable bar also reference B, then also we should not clone B twice but once. And if A doesn't own B, then we should not clone B at all.
But let's say we ignore the complication above:
+ +Then roughly speak, all classes in your application should implement its own method of deep_copy, and it roughly is this:
# Pseudo code:
+
+class SomeClass
+ def deep_copy
+ new_object = self.clone() # to have all the instance variables and
+ # methods cloned, but just a shallow copy, and
+ # also, all the inheritance, access to
+ # class variables, methods, and inheritance
+ # hierarchy should be properly set up
+
+ for all objects that is referenced by my instance variables
+ if I own the object (by the design of my class), then
+ # rely on polymorphism to make a proper deep_copy of this object
+ new_object.this_instance_variable = self.this_instance_variable.deep_copy()
+ end
+ end
+
+ return new_object
+ end
+end
+
+
+and depending on whether the primitive types are object or not, it may just say: if I own the object, but it is primitive, then don't clone it. Or in case the primitive types (like Fixnum, 1, 2, 3) are objects too, as in Ruby, then just let it clone it (because you don't want to do type checking to see whether it is clone-able), but in the self.clone line, it will raise an exception to say that this type is not for cloning, and in that case, just catch the exception and return the same object without cloning it (which is the base case of the recursion).
But the key point is, using generally accepted OOP principles, every class in your app has to have the deep_copy implemented, and its contract (the interface contract) is that it will indeed return a clone of ""myself"" together with deep copies of objects that I own (recursively). And it may be difficult because a lot of times, we define a class, and we don't really implement a deep_copy. If our app has 12 classes, and we need a clone with deep copy, then we actually have to implement such clone with deep copy for all 12 classes (or for all classes that may need to participate in the deep copy). Is the above correct, or are there some corrections according to OOP principles?
I came from a highly functional and procedural background in programming, and never knew that a type is the same as an interface.
+ +As in the Design Patterns book by GoF, it says:
+ +++ +A type is a name used to denote a particular interface. We speak of an object as having the type ""Window"" if it accepts all requests for the operations defined in the interface named ""Window."" An object may have many types, and widely different objects can share a type. (p. 13)
+
The surprising thing is, I thought of type as char (like a character, or 1 byte), or int (a word, or 4 bytes or 8 bytes), or a pointer to character (a string in C language) before. Maybe even a struct with x and y as coordinate of a point, or an array, as a type, but I never thought of a type being ""an interface"".
+ +So it looks like a Car object can be of the type Moveable and Soundable, and a Dog object can be of the type Moveable and Soundable, while a Circle object may be of the type Moveable only, until we decide that a Shape object also need to give out sound, when a user clicks on it, and we let the Shape class implement the Soundable interface and now a Circle object is also of the type Soundable?
+ +I wonder when and how it happened? Is it actually said to be so by the GoF book for the first time in 1994 when the book got published? Or is it actually existing idea that came from long time ago?
+ +It actually sound exactly the same as Duck Typing, but Duck Typing seems like a new concept that began about in 2003 in the Python and Ruby community, not like an idea that was in 1994 or earlier.
+",5487,,5487,,42374.50556,42374.50556,"How and when did it happen that, a type is an interface?",Imagine that you first write a compiler for your language where you necessarily report errors to the user. Compiler also collects location information for backend tools. They must know where the program elements are located. Later, when you are done with your compiler, you decide to provide IDE support as well. The editor is actually one more back-end tool. Having correct locations for program components helps a lot to syntax highlighting and error reporting. At this moment, you suddenly realize that locations reported by compiler are questionable.
+ +It seems like EOL definitions is more or less specified in the language so that you can report lines correctly -- there is always a agreement between compiler and editor. But what about the column? If compiler reports that there is a blunder for an identifier located at line:col, editor may wonder, highlighting something different, depending the Tab settings. It seems impossible to have exact line:col location, no matter how useful it is, if tab width in the editor-specific. Nevertheless, I see that JavaCC provides getLine altogether with getBeginColumn method. I wonder how is it implemented, how is it possible in principle to track the offset? How does lexer match your Editor's width?
I'm planning a project that consists of the following parts:
+ +These separate products are going to be running on the same server.
+ +I'd like to have a development environment that mimics the production environment as closely as possible, that's why I'm going with Vagrant in combination with a single Ansible playbook that provisions both environments.
+ +What's the best way to organise such a project? Should I keep everything in one repository, or should I split up the parts into multiple repositories?
+ +What about the development environment and provisioning scripts? Should I put these in a separate repository? How would I go about referencing the separate parts of the project in the development environment?
+ +I'd like to be able to clone a single repository and spin up a development environment with as little work as possible.
+ +Any recommendations?
+ +Edit: My question differs from this question because I'm especially struggling with things related to production and development environments.
+",168722,,60357,,43024.28403,43024.28403,Should I use a single repo when multiple parts of the same project are running on the same server?,I thought a class is supposed to define, or give a blueprint, of attributes and methods for an object. And then, an interface is to provide a set of methods, as a contract for its clients. (and so a class also gives an interface as well, because a class also define a set of methods).
+ +But say, if we have a class called Shape. Now we add an interface to it, called Moveable. Now we may have to add attributes to the class, such as velocity. But interface is only about methods, then how would we think about adding attributes to the class due to interface?
I've seen it's very common for a compiler to be made in the language it's compiling. What is the benefit of this? Seems like it makes the process for outsiders (and the developers for a while) more difficult.
+ +Take for instance:
+ +It seems like it just makes things more tricky for the developers (at least in early stages) and for users of the compiler...
+",,user160274,,user160274,42373.02083,42373.02083,What is benefit that a compiler is implemented in the same language it compiles?,My friend asked me for help with his task: ""Redeem a lattice parser with any programming language"" - the problem is that he can't clearly explain me how the lattice parser should work. I've tried to google a bit, but all I've found are some academic documents which doesn't help.
+ +All I know at the moment that it is related/used very often with speech recognition - but in our case it will be only text-based for sure.
+ +I'm looking forward for a fine explanation how this parser works (I don't ask for a solution in any programming language - I want to do it with my friend by ourselves).
+ +I know that this lattice parser is somehow related to Earley parser (link to wiki and something called an academic parser - still doesn't know how it should help me to understand this.)
+",136427,,1204,,42373.16528,42534.90069,What is a lattice parser?,Context: +I am developing a visual studio plugin that generates layer diagrams. +I want the tool to be able to produce an intermediate output, which is the data representation of what is being rendered in the diagram. The idea is that this serialized data can then be checked in together with the source code which would allow developers to more quickly see how architecture changes between commits. This would either be done through general diff tools (like git-diff) or a dedicated diff tool. However i want the file format to be optimized for the first, because of their accessibility. The data would be represented as a tree structure of nodes, where each node would have a name and a list of dependencies (other nodes represented by their path + name).
+ +Question: +When designing a file format, are there things regarding what data interchange-format or syntax is used and the formatting of this syntax (use of line breaks for example) that would make different versions of the file easier or harder to compare using tools like git-diff. For example, ensuring that the tool sees a ""rename"" as a ""rename"" and not an ""delete"" followed by an unrelated ""add"".
+",170809,,170809,,42372.97569,43582.89167,Optimizing a file type for compare tools,From previous experience, I had always thought that, if you are going to use variables inside of a for loop, it was much better to declare them outside of the loop vs. inside the loop itself. I recently had a code review done and had several younger developers claim that this was not true and that I should be putting my variables declarations inside the {} of the for loop.
+ +Maybe compilers have just become more efficient, but it seems this would cause quite a number of memory releases/garbage collects since each iteration would be declaring a new instance of each variable, especially given that the majority of them are strings which are immutable.
+",209668,,213,,42373.02083,42373.12569,Best way to handle variables used in a for loop?,I’ve released a piece of code into the public domain (henceforth “PD”), via The Unlicense. Recently, I’ve found someone forked that code and made some modifications without any acknowledgment/attribution. That is absolutely fine (it’s to allow that kind of freedom that I’ve released it into the PD in the first place). Fine, that is, until I see that no code was actually changed.
+ +The only changes he did were replacing my name and website for his, and changing the icon.
+ +So what this person did was the equivalent of taking Alice’s Adventures in Wonderland, stripping every reference to Lewis Carroll and replacing them with the equivalent references to himself, changing the cover, and republishing.
+ +No license was attached to it by the person (and the PD one was removed), but this still feels like a dickish thing to do. I’m not exactly bothered by it and am not thinking about pursuing it (it was my decision to put it into the PD, after all) but I’m wondering about not only the ethics but the legality of this. If I understand correctly, PD still does not allow you to take credit for something you haven’t done, and that is still plagiarism. Is that correct, or is it jurisdiction-specific?
+",94821,,94821,,42373.72431,42373.97569,Taking credit for the public domain work of others,Simply put I'm new to the company, should I rather write advanced techniques with things like templates, std techniques..etc to make a first good impression and have my colleagues trust/be impressed at my work or should I be more concerned in writing more solid/based standard code for each current problem to be solved?
On comp.lang.c++.moderated@googlegroups.com, Greg Herlihy posted the following extern ""C"" function:
+ +extern ""C""
+{
+ int func()
+ {
+ wchar_t memoryName[256];
+ wchar_t mutexName[256];
+ wchar_t eventName[256];
+ mbstowcs(memoryName, ""MemoryName"", 256);
+ mbstowcs(mutexName, ""MutexName"", 256);
+ mbstowcs(eventName, ""EventName"", 256);
+ std::wstring memoryString(memoryName);
+ std::wstring mutexString(mutexName);
+ std::wstring eventString(eventName);
+ CDataTransferServer *srv = new CDataTransferServer();
+ srv->Initialize(1, CC_SAMPLETYPE_MPEG4,128,256,64);
+ printf(""Inside entry point tester 1\n"");
+ srv->AddUser(5, memoryString, mutexString, eventString);
+ printf(""Inside entry point tester 2\n"");
+ delete srv;
+ printf(""Exiting entry point tester 3\n"");
+ }
+}
+
+
+which is a g++ entry point different than main(int argc, char*argv[]).
Greg Herlihy then wrote:
+ +++ +Calling _exit() (which presumably should be ""_Exit()"") - does not ""prevent"" + C++ global objects from being destroyed. The destruction of a C++ program's + global objects is not inevitable. Instead a C++ program is responsible for + destroying its own global objects - and can do so in one of two ways: the + program returns from main() or it calls exit(). So a C++ program that fails + to exit main() and neglects to call exit() before it terminates - will not + have destroyed its global objects by the time it ended.
+
I don't believe that follows from what is written in the standard.
+ +As far as I can see, when I call exit(), the spec guarantees that:
Destructors for objects with automatic storage duration +will not be called.
Destructors for objects with with static storage duration +will be called, in reverse order of construction.
++ +++ +The worrying thing is that the standard doesn't say anything about + exit() or _exit(), so I'm relying on implementation-dependent behavior.
+Not so. The C++ Standard specifies that calling exit() destroys global + objects[3.6.3/1] And _Exit() is part of the C99 Standard (and will + presumably be incorporated into the next C++ Standard by reference).
+
Right, the C++ standard says what exit() does, but it doesn't say what
+_exit() or _Exit() do. And the C standard certainly doesn't say
+anything about C++ destructors.
++ +I don't see any implementation-defined behavior here. Calling _Exit() is no + more likely to destroy a C++ program's global objects than calling printf() + - or calling any other function that is not exit().
+
_exit() and _Exit() are specified not to call any functions registered
+atexit() or on_exit(). However, that's not useful for C++ because the
+C++ spec doesn't say by what means the runtime takes care of calling
+destructors of objects with static storage duration. A compliant
+implementation could use a mechanism other than atexit() or on_exit()
+to invoke static destructors.
In other words, the C++ spec does not say anything all about the
+behavior of _exit() or _Exit(). Therefore, I cannot make any assumptions
+about whether or not calling either function will cause destructors
+of static objects to run or not.
Any comments are welcome.
+",199407,,161917,,42373.91736,42375.05972,Is it true that calling _exit() instead of exit() won't prevent static destructors from being called?,Came across this problem -- +You are given a grid with numbers that represent the elevation at a particular point. From each box in the grid you can go north, south, east, west - but only if the elevation of the area you are going into is less than the one you are in, i.e. you can only descend. You can start anywhere on the map and you are looking for a starting point with the longest possible path down as measured by the number of boxes you visit. And if there are several paths down of the same length, you want to take the one with the steepest vertical drop, i.e. the largest difference between your starting elevation and your ending elevation. +Ex Grid:
+ +4 8 7 3
+2 5 9 3
+6 3 2 5
+4 4 1 6
+
+
+On this particular map the longest path down is of length=5 and it is: 9-5-3-2-1.
+ +There is another path that is also length five: 8-5-3-2-1. dropping from 9 to 1(8) is a steeper drop than 8 to 1(7). +WAP to find the longest (and then steepest) path.
+ +I've worked through Floyd Warshalls, DAG + Topological sorting, tried to think of DFS, though about DP, various things but I am not able to find a way to think through. Any pointers + simple pseudo code for approach would be helpful.
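For what it is worth, here is the direction I was attempting in Java: plain DFS with memoization, tracking for every cell both the longest descending path length and, among equally long paths, the smallest reachable ending elevation, so the drop tie-break falls out at the end. Does this look like a reasonable line of attack?

```java
public class SteepestDescent {
    static int[][] grid;
    static int[][] len;     // longest descending path length starting at each cell
    static int[][] endVal;  // smallest ending elevation among those longest paths
    static int rows, cols;

    // Memoized DFS: each cell is fully solved once, so this is O(rows * cols).
    static int dfs(int r, int c) {
        if (len[r][c] != 0) return len[r][c];
        int bestLen = 1;            // the path consisting of the cell itself
        int bestEnd = grid[r][c];
        int[] dr = {-1, 1, 0, 0};
        int[] dc = {0, 0, -1, 1};
        for (int k = 0; k < 4; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                    && grid[nr][nc] < grid[r][c]) {   // strictly descending only
                int l = 1 + dfs(nr, nc);
                int e = endVal[nr][nc];
                if (l > bestLen || (l == bestLen && e < bestEnd)) {
                    bestLen = l;
                    bestEnd = e;
                }
            }
        }
        len[r][c] = bestLen;
        endVal[r][c] = bestEnd;
        return bestLen;
    }

    // Returns {length, drop}: the longest path, ties broken by the largest drop.
    static int[] solve(int[][] g) {
        grid = g;
        rows = g.length;
        cols = g[0].length;
        len = new int[rows][cols];
        endVal = new int[rows][cols];
        int bestLen = 0, bestDrop = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int l = dfs(r, c);
                int drop = g[r][c] - endVal[r][c];
                if (l > bestLen || (l == bestLen && drop > bestDrop)) {
                    bestLen = l;
                    bestDrop = drop;
                }
            }
        }
        return new int[]{bestLen, bestDrop};
    }
}
```

On the example grid this reports length 5 with drop 8, matching the 9-5-3-2-1 path.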
+",184305,,,,,44044.57292,Find path of steepest descent along with path length in a matrix,I'm storing some data in Cassandra, then after analyzing it puts in several table, I have its aggregation as daily, weekly, monthly, yearly basis. But after some time if some user reads the content I'm changing it in to read and unread status based on user activity.
+ +But as per my current design either I need to update in all tables at-a-time (more than 5 tables and it may increase) or need create a single table for read unread but want to join the tables, Which is not recommending with nosql concept.
+ +Any existing good architecture for it? I checked with lambda architecture but didn't get a good solution.
+",209702,,31260,,42373.41875,42373.45417,How to architect frequent updates in No-sql db (Cassandra) - architecture,I have been reading every now and then on the virtual machines of programming languages like Java, Python and Lua. They all have a notion of bytecode, into which the source code is translated and that is excutable on a virtual machine (register or stack based).
+ +Now on an x86 architecture, all code resides in RAM which is addressable using the CPU's address space. An instruction pointer register points to the position in memory that is currently executed. Jumps modify this instruction pointer register, but basically software in memory is linearized into one piece of RAM.
+ +With Virtual Machines, I am not so certain. Before executing the virtual machine, is all bytecode copied/linked together into a contiguous array with all instructions? Or does the VM keep various modules in different bytecode pages that are swapped/exchanged as needed?
+",80034,,,,,42373.57153,Do Virtual Machines (for execution of PL) operate on one contiguous array for their bytecode?,Let's say we have an app where the users gain points and can exchange them for rewards. The exchange request, in pseudo-code, could look like this:
+ +function exchangePointsForReward(userId, rewardId){
+ user = getUser(userId)
+ reward = getReward(rewardId)
+ if (user.points >= reward.requiredPoints){
+ giveRewardToUser(userId, rewardId)
+ reduceUserPoints(userId, reward.requiredPoints)
+ }
+}
+
+
+But if we have a malicious user what stops them from crafting a request in their favorite programming language and sending it 20 times at the same time? Before the first request reaches reduceUserPoints() ten other might've already got as far as addNewReward(). Sure, the user's points at the end of the day might go deep into negative, but what stops the user from quickly grabbing the rewards and using them up? How can I ensure that only one operation can be executed for a user at the same time?
One solution I can think of is the operation tries to acquire a lock at the beginning of the operation and only a single lockable operation can run for a user at any moment:
+ +function aquireLock(userId){
+ lockId = getRandomLockId()
+ database.query(""UPDATE user SET lock={lockId} WHERE user={userId} AND lock IS NULL"");
+ return database.query(""SELECT lock WHERE user = {userId}"").first === lockId;
+}
+
+function exchangePointsForReward(userId, rewardId){
+ if (!aquireLock(userId)){
+ throw new Error(""Failed to acquire lock"");
+ }
+ user = getUser(userId)
+ reward = getReward(rewardId)
+ if (user.points >= reward.requiredPoints){
+ giveRewardToUser(userId, rewardId)
+ reduceUserPoints(userId, reward.requiredPoints)
+ }
+ releaseLock(userId);
+}
+
+
+But is there any better strategy here? The question is database-agnostic.
+",81816,,81816,,42373.58403,42373.90625,How to prevent user from requesting API method multiple times in parallel?,We have a class in which mainly data processing(XML nodes) is done by mainly 3 methods. Now code in itself strictly follows DRY principle. For e.g.
+ +To give you overview ,Say we reach node A, then we call ProcessChildren() and if any children is Choose then we would call ProcessChoose(). Then we would call recursively ProcessChildren() result and so worth.
+Though this code is easy to read and overall bugs are removed but debugging is very difficult since one functions to another and so on.
+Is there any way we could remove this debugging hurdle so that debugging is easy?
I have three classes that are circular dependant to each other:
+ +TestExecuter execute requests of TestScenario and save a report file using ReportGenerator class. +So:
+ +Can't figure out how to remove thoses dependencies.
+ + + +public class TestExecuter {
+
+ ReportGenerator reportGenerator;
+
+ public void getReportGenerator() {
+ reportGenerator = ReportGenerator.getInstance();
+ reportGenerator.setParams(this.params);
+ /* this.params several parameters from TestExecuter class example this.owner */
+ }
+
+ public void setTestScenario (TestScenario ts) {
+ reportGenerator.setTestScenario(ts);
+ }
+
+ public void saveReport() {
+ reportGenerator.saveReport();
+ }
+
+ public void executeRequest() {
+ /* do things */
+ }
+}
+
+
+
+
+public class ReportGenerator{
+ public static ReportGenerator getInstance(){}
+ public void setParams(String params){}
+ public void setTestScenario (TestScenario ts){}
+ public void saveReport(){}
+}
+
+
+
+
+public class TestScenario {
+
+ TestExecuter testExecuter;
+
+ public TestScenario(TestExecuter te) {
+ this.testExecuter=te;
+ }
+
+ public void execute() {
+ testExecuter.executeRequest();
+ }
+}
+
+
+
+
+public class Main {
+ public static void main(String [] args) {
+ TestExecuter te = new TestExecuter();
+ TestScenario ts = new TestScenario(te);
+
+ ts.execute();
+ te.getReportGenerator();
+ te.setTestScenario(ts);
+ te.saveReport()
+ }
+}
+
+
+EDIT: in response to an answer, more details about my TestScenario class:
+ +public class TestScenario {
+ private LinkedList<Test> testList;
+ TestExecuter testExecuter;
+
+ public TestScenario(TestExecuter te) {
+ this.testExecuter=te;
+ }
+
+ public void execute() {
+ for (Test test: testList) {
+ testExecuter.executeRequest(test);
+ }
+ }
+}
+
+public class Test {
+ private String testName;
+ private String testResult;
+}
+
+public class ReportData {
+/*shall have all information of the TestScenario including the list of Test */
+ }
+
+
+An example of the xml file to be generated in case of a scenario containing two tests:
+ +<testScenario name=""scenario1"">
+ <test name=""test1"">
+ <result>false</result>
+ </test>
+ <test name=""test1"">
+ <result>true</result>
+ </test>
+</testScenario >
+
+",202067,,4,,42699.79236,42942.55486,How to solve circular dependency?,I'm freshly coming to the Python world after years of Java and PHP. While the language itself is pretty much straightforward, I'm struggling with some 'minor' issues that I can't wrap my head around — and to which I couldn't find answers in the numerous documents and tutorials I've read this far.
+ +To the experienced Python practitioner, this question might seem silly, but I really want an answer to it so I can go further with the language:
+ +In Java and PHP (although not strictly required), you are expected to write each class on its own file, with file's name is that of the class as a best practice.
But in Python, or at least in the tutorials I've checked, it is ok to have multiple classes in the same file.
+ +Does this rule hold in production, deployment-ready code or it's done just for the sake of brevity in educative-only code?
+",209730,,102438,,43027.70764,43027.70903,Is it ok to have multiple classes in the same file in Python?,I'm designing an analytics system that feeds all events to Elasticsearch. The event lifecycle is as follows:
+ +I know Logstash is made to gather and process events. Why should I consider it over my own custom solution?
+ +I have no log files in the process. Events travel from one component to another via HTTP. I don't care transforming the data to events on my own either.
+ +I'm especially concerned about not having the flexibility of dealing with the events and Elasticsearch directly from my own choice of programming language.
+",9059,,9059,,42373.66944,42373.67708,"Feeding events to Elasticsearch, do I really need Logstash?",I am rewriting a website that has a back-end database for containing meta-data that is displayed to the user when they enter a text code.
+ +Background:
+ +The old/existing system stored this data in XML files with a master index XML file that pointed the correct data file based on a GUID. The way this is currently handled for publishing to production is all changes are done in a ""staging"" environment with the files being stored on Cloudfront under a ""staging"" folder. When they want to push changes to production, they simply copy the file from ""staging"" to ""production"".
+ +I have updated the system to use a RDB (Postgres locally and it will be Aurora on AWS for production). I am using Java for the API with JPA annotations and Spring Boot. I have a separate Java Admin UI App that manages the meta-data and a separate user-facing application that is the website in Angular. The product owner still wants to be able to test changes in lower environments before publishing them to production.
+ +Question:
+ +I was trying to find a solution for publishing and was thinking that I may just have a staging env and a production env. A coworker here said that maybe have the Staging Admin App normally point to the Staging API, but when publishing, have it point to the Production API instead and not have a production Admin App. The problem I am running into is primary keys. You cannot specify the primary key in JPA AND have it auto-generated if it doesn't exist. I tried several different ways to do that and it breaks if I try to insert 1, 3, 2.
+ +Does anybody have any recommendations on how to handle this situation? Transfer data from one RDB to another RDB in a different env while keeping all relationships intact.
+",209754,,,,,42374.14167,Handle publishing data across environments,This Stack Overflow question is about a child having reference to its parent, through a pointer.
+ +Comments were pretty critical initially of the design being a horrible idea.
+ +I understand this is probably not the best idea in general. From a general rule of thumb it seems fair to say, ""don't do this!""
+ +However, I am wondering what sorts of conditions would exist where you would need to do something like this. This question here and associated answers/commentary suggests even for graphs to not do something like this.
+",52929,,-1,,42878.52778,43134.14028,When is a circular reference to a parent pointer acceptable?,Just a quick question about a design pattern for creating custom exceptions. The question is more about the order of parameters. If you can specify more data in the exception, should the parameter for it included in the constructor, come before or after the overload parameters?
+ +Before:
+ +public FooException : Exception
+{
+ public string Bar { get; private set; }
+
+ public FooException(string bar) : base()
+ {
+ this.Bar = bar;
+ }
+
+ public FooException(string bar, string message) : base(message)
+ {
+ this.Bar = bar;
+ }
+
+ public FooException(string bar, string message, Exception inner) : base(message, inner)
+ {
+ this.Bar = bar;
+ }
+}
+
+
+After:
+ +public FooException : Exception
+{
+ public string Bar { get; private set; }
+
+ public FooException(string bar) : base()
+ {
+ this.Bar = bar;
+ }
+
+ public FooException(string message, string bar) : base(message)
+ {
+ this.Bar = bar;
+ }
+
+ public FooException(string message, Exception inner, string bar) : base(message, inner)
+ {
+ this.Bar = bar;
+ }
+}
+
+",161992,,,,,42374.06319,Convention for exception argument order,I find myself unsure about what exactly it means to have different representations of a RESTful resource. The canonical example is for an API to provide an endpoint - say /v1/users/:id - and allow the client to select the best representation of the resource between JSON, XML, HTML or PDF depending on the media range value of the ACCEPT headers.
I was under the impression that this definition of representation could be extended to encompass more than just content-types, but actually response schemas. Say for example a client wants extended information about the user they could get it by specifying a supported header.
+ +So for instance, my application could supply different schemas for the same resource i.e.
+ +# get the default user representation
+GET /v1/users/1234
+ACCEPT: application/vnd.myapp.v1+json
+
+# server responds with
+{""id"": 1234, ""name"": ""Jeffery Lebowski""}
+
+# get the extended user representation
+GET /v1/users/1234
+ACCEPT: application/vnd.myapp.v1.extended+json
+
+# server responds with
+{""id"": 1234, ""name"": ""Jeffery Lebowski"", ""sport"": ""Bowling""}
+
+
+Am I correctly understanding the concept of representations in REST? Or is the concept of resource representations only applicable to content types and content negotiation?
+ +If so, how can I document the various representations that my API can return? It doesn't seem like either of the big documentation tools (swagger.io or RAML) support documenting multiple schema representations of a single resource.
+",23288,,5099,,42374.30278,42374.30278,Resource representations and REST API documentation tools,We are using Java as a backend development language.
+ +One year back, we wrote a method which uses switch cases based on Enums values. Since we are continuously adding enum members and according adding cases in the method, the method has grown to very large extent. Currently, we have around 100 enum fields and corresponding number of switch cases.e.g.
+ + class AClass{
+
+ enum option{ o1, o2, o3...on}
+
+ Value method(Option o){
+
+ switch(o){
+ case o1:
+ value = deriveValue(p1,p2,p3);
+ case o2:
+ value = deriveValue(p2,p3,p4);
+ .
+ .
+ .
+ case on:
+ value = deriveValue(p1,p2,p3);
+ }
+ }
+ }
+
+
+Thus each time a business requirement comes, we add enum and corresponding switch case. Now the method has become too long and looks unmanageable, if we keep on adding the same logic in future.
+ +To clean, we thought of replacing switch case with polymorphism by creating classes for the same, but again, we will end up creating n number of classes.
+ +We are looking for small, simple and manageable solution.
+ +----------------- Update ---------------------
+ +As suggested, to elaborate more, Enum values are fields names for which a client needs a value. Thus if a client need the value of a new field, we add the field in enum, and the corresponding definition of how to fetch value for the newly added field, by adding the corresponding switch case.
+ +In the switch case, we have a common reusable method e.g. deriveValue() (Please refer to the example given) to which we pass parameters required for deriving the value for the newly added field.
+",194161,,194161,,42375.25486,42375.25486,Refactoring a long method which is based on large number of switch cases,I've only just become interested in this domain, so sorry if I'm not using the correct terminologies.
+ +What I want is the following: Say I have a set of rules (or constraints), I want to derive some implications of those rules.
+ +For example, in Conway's Game of Life, there are 4 basic rules. From these rules, we can see a few patterns emerge. I want a system in which I can input the rules (in some formal language), and it would output at least some of these patterns. Also, if I make a change in any rule, or add a new rule, it should show me the implication of this change (or I should be able to derive it myself from comparing the two outputs).
+ +This should ideally apply to any game that has a set of rules. For example in chess, it should say that the knight can move two squares in front of it by performing two L moves. In checkers it could be that having a piece behind another prevents the other playing from taking that piece.
+ +Has anything like this ever be made? Is it even feasible? Can you recommend any courses or books where I could start my (re)search?
+ +All I've found so far are Automated Theorem Provers, but from what I can tell so far, they are way too generic and mathematically oriented (they aim to solve any theory in maths, which has a lot of rules, I just want it for simple games with a small number of rules).
+",72045,,72045,,42374.43542,42374.54167,"From a set of rules, derive the implications?",