diff --git "a/stack_exchange/SE/SE 2018.csv" "b/stack_exchange/SE/SE 2018.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2018.csv" @@ -0,0 +1,110924 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense,, +363220,1,,,1/1/2018 3:45,,2,796,"

Are many levels of multilevel inheritance considered bad?

+ +

This is my design

+ +
BaseFileProcessor
+RefundFileProcessor inherits BaseFileProcessor (has n child classes)
+FailedRefundFileProcessor inherits RefundFileProcessor (has n child classes)
+FailedRefundFileForCardsPayment inherits FailedRefundFileProcessor (has n child classes)
+
+",264027,,177980,,43101.36597,43101.39236,Multilevel inheritance with more than three levels,,2,1,,,,CC BY-SA 3.0,, +363222,1,,,1/1/2018 8:23,,4,316,"

I have a need for a distributed data store where existing solutions may not work, as the computers it will be running on will be extremely resource-limited, for instance 64-128 MB of RAM. Plus, it makes for a fun exercise.

+ +

I'm looking at writing a simple implementation of the Raft algorithm. However, the data store, which will simply be a collection of key/value pairs, may have clients updating any of the nodes at any time, and must keep consistency across the entire cluster.

+ +

I was thinking of having a node, every time it is updated by a client, calculate a hash, then get consensus/confirmation from the other members of the cluster on the old and new hashes, and commit once that is obtained from a majority. Would that ensure orderly updates when two members are updated simultaneously? Any thoughts on implementing this?
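A rough sketch of that compare-hashes-then-commit idea (Python, with in-process objects standing in for networked nodes; all names are mine). Note it deliberately ignores message loss and racing proposals, which is exactly what Raft's leader election and log replication exist to handle:

```python
import hashlib
import json

def store_hash(store):
    '''Deterministic hash of a key/value store's full contents.'''
    blob = json.dumps(store, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

class Node:
    def __init__(self):
        self.store = {}

    def vote(self, old_hash):
        # Confirm only if our state matches the proposer's pre-update state.
        return store_hash(self.store) == old_hash

    def commit(self, key, value):
        self.store[key] = value

def propose_update(proposer, peers, key, value):
    '''Proposer side: gather votes on the old hash, commit on a majority.'''
    old_hash = store_hash(proposer.store)
    votes = 1 + sum(1 for p in peers if p.vote(old_hash))  # proposer votes for itself
    if votes > (len(peers) + 1) // 2:                      # strict majority
        for n in [proposer] + peers:
            n.commit(key, value)
        return True
    return False  # in the minority: re-sync state and retry

nodes = [Node() for _ in range(3)]
assert propose_update(nodes[0], nodes[1:], 'k', 'v1')
assert all(n.store == {'k': 'v1'} for n in nodes)
```

If two clients propose against the same old hash concurrently, both majorities can be reached in a real network, which is why Raft serializes writes through a single leader's log instead.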

+",292185,,9113,,43101.37847,43433.10278,Distributed database algorithm,,3,2,1,,,CC BY-SA 3.0,, +363224,1,,,1/1/2018 9:21,,3,251,"

I have only theoretical knowledge about ESB.

+ +

Use case:-

+ +
  1. I have an ecommerce-like application where I can receive orders from multiple sources in different formats.
  2. Once the application is submitted, I need to send emails to a couple of systems.
  3. Once the application is submitted, I need to submit one output file to a third-party system. Today it's XML only, but we may need to support more formats down the line.
  4. Similarly, I need to perform analytics and report preparation.
+ +

I see that both an ESB and microservices could fit here, theoretically. Both make the application loosely coupled and scalable instead of a single monolithic application. I am not sure what criteria/attributes I should consider when selecting an off-the-shelf ESB product (paid or open source) over breaking the app into microservices.

+ +

My Understanding :- An ESB may be a better fit here, as it makes the application more loosely coupled: one component/application simply puts the message on a channel (which is nothing but a Java object in Java-based ESB apps, on which other components listen). Other components like routers/transformers/adapters/endpoints then come into the picture and take further action. The sending component just needs to know the destination address, which will be a single channel address.

+ +

In the case of microservices this is also true, but there the consumer needs to know the URL for each specific operation.

+ +

But nowadays I have observed that most folks prefer microservices over an ESB. I think the reason may be that an ESB has its own learning curve / DSL / paid products, whereas microservices are nothing but the service divided into smaller, maintainable RESTful components: no learning curve and no paid products.

+",124597,,124597,,43101.40069,43101.42014,Disintegrating monolithic application with Micro services vs ESB approach?,,1,2,1,,,CC BY-SA 3.0,, +363225,1,363228,,1/1/2018 9:35,,0,65,"

I'm reading Elliotte Rusty Harold's Java Network Programming. In Chapter 2, I read the text about the markSupported() method of the InputStream class. The author explains that this method is not object-oriented, because the ability to check whether mark() is available is not provided by a separate type. What would a more object-oriented design look like while still supporting the mark() and reset() methods?

+",210905,,,,,43101.43542,Object oriented design for InputStream.markSupported method,,1,0,,,,CC BY-SA 3.0,, +363231,1,,,1/1/2018 11:55,,1,65,"

There is a web App_1 which is on .NET, while another web App_2 is on Java. App_1 needs to interact with App_2.

+ +

App_1 can be an inter- or intra-company application.

+ +

My question is: do prominent ESB products in general provide a way for the .NET application to send a .NET-specific object/message on a channel, and then for a transformer to transform the .NET object into a Java object and put it on channel_2, like the image below?
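For illustration, the usual trick is that the transformer does not convert a .NET object into a Java object directly; both sides agree on a canonical, platform-neutral payload (JSON or XML) that each runtime maps to its own types. A toy sketch of that idea (Python queues standing in for ESB channels; names are mine):

```python
import json
from queue import Queue

channel_1, channel_2 = Queue(), Queue()   # stand-ins for ESB channels

def transformer():
    '''Bridge between channels: reads the canonical wire format (JSON here)
    produced by the sending side and hands it to the receiving channel,
    where the other runtime rebuilds its own native objects.'''
    payload = json.loads(channel_1.get())
    channel_2.put(payload)

channel_1.put(json.dumps({'orderId': 42, 'amount': 9.99}))
transformer()
assert channel_2.get() == {'orderId': 42, 'amount': 9.99}
```

The design point: neither side ever sees the other's in-memory object format, so the ESB only needs per-platform serializers, not an n-to-n object converter.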

+ +

+",124597,,124597,,43101.65764,43101.65764,Dot net and java application connected through ESB?,<.net>,1,0,,,,CC BY-SA 3.0,, +363234,1,,,1/1/2018 12:53,,1,1273,"

As mentioned in wiki,

+ +

A service-oriented architecture (SOA) is a style of software design where services are provided to the other components by application components, through a communication protocol over a network. The basic principles of service-oriented architecture are independent of vendors, products and technologies. A service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online.

+ +

Each SOA building block can play any of the three roles:

+ +
  • Service provider
  • Service broker
  • Service requester/consumer
+ +
+ +

Implementation approach

+ +

Service-oriented architecture can be implemented with Web services. Ex: Web services based on WSDL and SOAP

+ +

One web service implementation approach exposes a POJO with the @WebService annotation, and another exposes those POJOs via services.xml.

+ +

The client will consume those exposed services over SOAP or REST.

+ +

————————

+ +

1) Are web services (like Axis2) the only option for implementing SOA building blocks?

+ +

If yes,

+ +

2) What are the challenges of conventional middleware (RMI/CORBA) that web services addressed?

+",131582,,131582,,43103.36806,43103.36806,RMI/CORBA vs Web service,,1,3,2,,,CC BY-SA 3.0,, +363239,1,363253,,1/1/2018 15:34,,2,1235,"

I am building a web/mobile application with Django Rest Framework (DRF) that enables authenticated users to post snippets and/or vote for other users' snippets. All users (authenticated or not) can also get the list of recent snippets (paginated, e.g. 5 per page). Snippets, users and votes are stored in a backend database.

+ +

I'm totally new to serverless architecture, so I'm asking: is this application a good fit for this kind of architecture? Obviously, my DRF application is built around web REST APIs, which at first glance seems to be a good fit, but the user authentication part and the paginated list of snippets make me think it might not be the case.

+ +

Can someone enlighten me?

+",292160,,,,,43657.73403,Is serverless architecture a good fit for Django Rest Framework app?,,2,1,2,,,CC BY-SA 3.0,, +363246,1,,,1/1/2018 19:06,,4,280,"

The GoF book ""Design Patterns"" describes the Memento pattern as an object encapsulating its state in a separate object. However, the book specifically describes the memento for use with a specific caretaker; in their example, an undo/redo provider.

+ +

However, this pattern, especially in C# when the Memento is made a struct (in both the technical and conceptual senses), can provide many benefits when made a more general part of a class:

+ +
  • Allow easy cloning, which makes caching, etc. easier and less error-prone
  • Allow persistence logic to be easily and gracefully shared by different versions of a model which spans multiple domains
  • Display properties in a UI which would otherwise be encapsulated
  • Deal with CRUD operations for otherwise encapsulated properties
  • Deal with scope envy when it is unavoidable, for example a class which governs an operation between two other classes (does a customer book a flight, or does a flight add a customer? Let's avoid this question by putting flight booking in a separate place)
  • Use the convenient object initializer syntax without making our properties editable, while also avoiding constructors which are bloated with arguments
  • And of course, persistence of otherwise encapsulated properties
+ +

We can also use the memento as a kind of implicitly safe to serialize version of the class throughout the application.

+ +

However, you could argue, many of the above are just abusing the pattern for the purpose of circumventing language shortcomings.

+ +
+ +

For example:

+ +
public class Computer
+{
+     public Computer(ComputerMemento state)
+     {
+         _state = state;
+     }
+
+     private ComputerMemento _state;
+     public ComputerMemento State
+     {
+         get
+         {
+              var copy = _state;
+              return copy;
+         }
+     }
+
+     public void DoSomeComputerThing() { }
+     public void DoComputerStuff() { }
+     public int GetSomeComputerCalculation() { }
+}
+
+public struct ComputerMemento
+{
+    public string MachineName;
+    public float ProcessorSpeed;
+    public long AmountOfRam;
+    public int CPUCores;
+}
+
+ +

The above Computer class can now expose an editable representation of its states while still being able to certify (albeit in an easily circumventable way) that changes to its state (via its methods) are valid and supported. Basically, the consumer can select its desired level of ignorance.

+ +

Is this expanded use of what seems to be the memento pattern still the memento pattern? If not, does it have a name?

+ +
+ +

EDIT: In the above code, since structs are passed by value, the state is immutable by outside classes.

+",173910,,173910,,43102.80417,43102.80417,A Memento by any other name?,,1,6,1,,,CC BY-SA 3.0,, +363247,1,363249,,1/1/2018 19:53,,-2,37,"

I am creating an ASP.NET MVC website in which I require a number of different sections, such as admin, current deliveries, an inventory of products, etc. Would it be more beneficial to have a subdomain for each different section, for example:

+ +
admin.mywebsite.com
+
+ +

or using this sort of routing:

+ +
mywebsite.com/admin
+
+ +

Which would be the better design?

+",288322,,288322,,43101.84097,43101.86944,Would it be more beneficial to make a subdomain or route in a website?,,1,1,,,,CC BY-SA 3.0,, +363250,1,,,1/1/2018 20:59,,5,1636,"

I'm attempting to figure out whether an algorithm already exists for what I'm trying to accomplish.

+ +

I have a series of time slots over the course of a week where I wish to assign a roughly equal number of people to each time slot during the week. Unlike this question, the time slots are provided as just ranges of hours, and members of the population only need to be pigeonholed in a roughly equally distributed way.

+ +

Most of the population has provided a 1st, 2nd and 3rd choice for their desired time slot. People who list the same time slot for multiple preferences are considered to have filled the form out improperly: that time slot will only be considered once, and their remaining preferences will be treated as no preference.

+ +

Additionally some of the population may not provide an answer or may say they have no preference. They will still need to be assigned a time slot, but can be assigned whatever time slot is necessary to satisfy the algorithm.

+ +

However, the time slots themselves have no preference over who goes where, other than that they want people to be roughly evenly distributed, meaning this is not just a case of the stable-marriage/hospital-resident problem. It further differs from the stable marriage problem in that members of the population do not have a preference for every time slot, which seems to be required for that algorithm to operate.

+ +

The objective of the algorithm is as follows (in order of importance):

+ +
  1. Ensure that everyone is assigned a time slot.
  2. Ensure that everyone who provides a separate 1st, 2nd and 3rd choice is assigned to at least one of them.
  3. Distribute the population so that they are roughly equal among time slots.
  4. If it would make the population more evenly distributed, eliminate time slots by moving people out of them.
  5. Maximize the number of people who get their 1st choice.
  6. Maximize the number of people who get their 2nd choice.
  7. Maximize the number of people who get their 3rd choice.
  8. Minimize the resources and run-time required for the algorithm.
+ +

In my research, I've also found that the stable-marriage problem can have different outcomes depending on which side goes first in their proposals. I hope that the starting state would not affect the outcome of the algorithm, but if necessary I can simply run it many times and take the best result. I would also like to avoid assigning arbitrary constants to preferences unless absolutely necessary.

+ +

This is a fairly complex problem so I'm not expecting to get a complete algorithm from here unless one already exists for solving this exact problem. My question is mostly regarding whether there are similar algorithms or areas of study that I should start from. Can anyone help point me in the right direction?

+ +

Additionally, am I dismissing the SMP as a starting point incorrectly?
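As a starting point, a naive greedy pass over the population already satisfies the first objectives (everyone placed, listed choices honoured, rough balance); the later rank-maximisation objectives are where assignment-problem or min-cost-flow formulations come in. A sketch of the greedy pass (Python; all names are mine):

```python
def assign_slots(slots, preferences):
    '''Greedy sketch: each person goes to the least-loaded slot among their
    valid choices (ties broken by preference rank); people with no valid
    preference fill the emptiest slots afterwards.'''
    load = {s: 0 for s in slots}
    assignment = {}
    no_pref = []
    for person, prefs in preferences.items():
        seen, valid = set(), []
        for s in prefs:
            if s in load and s not in seen:   # a duplicated choice counts once
                seen.add(s)
                valid.append(s)
        if valid:
            choice = min(valid, key=lambda s: (load[s], valid.index(s)))
            assignment[person] = choice
            load[choice] += 1
        else:
            no_pref.append(person)
    for person in no_pref:                    # everyone still gets a slot
        choice = min(load, key=load.get)
        assignment[person] = choice
        load[choice] += 1
    return assignment

prefs = {'ann': ['mon'], 'bob': ['mon', 'tue'], 'cat': []}
result = assign_slots(['mon', 'tue'], prefs)
assert result == {'ann': 'mon', 'bob': 'tue', 'cat': 'mon'}
```

This deliberately privileges balance over rank when they conflict; to optimise the ranked objectives exactly, model it as min-cost bipartite matching with per-slot capacities and rank-based edge costs.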

+",117330,,117330,,43101.92361,43252.20486,Algorithm for assigning people to time slots based on preference,,1,12,,,,CC BY-SA 3.0,, +363256,1,,,1/1/2018 22:50,,2,16041,"

I am struggling to fully understand the usage of constructors in Java.

+ +

What I have learned so far about constructors is the following:

+ +
  • same name as the class
  • abbreviated as ctor
  • can be overloaded
  • has no return type
  • creates an object of a class
  • a default constructor is provided by the compiler if a class declares no constructor
+ +

When, for example, a string has to be produced by a class, a method should be created; a constructor will not be sufficient, as it does not return anything.

+ +

Attempt to answer the question

+ +

In order to explain what I mean, I have created a class with three constructors and two methods that print a string.

+ +
public class HelloWorldConstructor {
+    public HelloWorldConstructor() { }
+
+    public HelloWorldConstructor(String a) {
+        saySomething(a);
+    }
+
+    public HelloWorldConstructor(String a, String b) {
+        saySomething(a, b);
+    }
+
+    void saySomething(String a) {
+        System.out.println(a);
+    }
+
+    void saySomething(String a, String b) {
+        System.out.println(a + "", "" + b);
+    }
+}
+
+ +

Option 1

+ +

It is possible to print a string by calling the methods that reside in the class.

+ +
public class CallConstructor {
+    public static void main(String[] args) {
+        HelloWorldConstructor hwc = new HelloWorldConstructor();
+
+        hwc.saySomething(""allo"");
+        hwc.saySomething(""allo"", ""allo"");
+    }
+}
+
+ +

prints:

+ +
allo
+allo, allo
+
+ +

Option 2

+ +

It is also possible to print a string by calling a constructor directly.

+ +
public class CallConstructor2 {
+    public static void main(String[] args) {
+        new HelloWorldConstructor(""allo"");
+        new HelloWorldConstructor(""allo"", ""allo"");
+    }
+}
+
+ +

prints the same as option 1.

+ +

Discussion

+ +

When option 2 is chosen, two objects have to be created instead of the one depicted in option 1. But when should option 2 be chosen, and when option 1? In this case I think it is better to choose option 1, as only one object will be created, but option 2 could be suitable under other circumstances.

+ +

Using Constructors in Java

+ +
+

The constructor in the example just gives an initial value to class members.

+
+ +

https://stackoverflow.com/a/19941847/2777965

+ +
+

Constructors are used to initialize the instances of your classes. You use a constructor to create new objects, often with parameters specifying the initial state or other important information about the object

+
+ +

After reading the theory and Q&As about constructors, I am still struggling to fully understand them. I know how to call a constructor and how to call a method, but I cannot rationalize when to use which.

+ +
  • A constructor has to be called directly when ...
  • Constructor overloading will be done when ...
  • Methods will be called directly by calling the default constructor when ...
+",218283,,218283,,43102.8375,43102.8375,When to call the constructor and when to call the method in Java?,,4,7,,,,CC BY-SA 3.0,, +363257,1,363260,,1/1/2018 23:37,,2,280,"

Let’s say I have an object, called “Tile”, which has both a name and a list of directions. We’ll assume that each tile has a 3x3 grid of pixels, which can either be on or off. The top center pixel (designated with an “N” in the diagram) of the tile turns on when the North direction is “True”. The center left pixel (“W” in the diagram) turns on when the West direction is “True”. The center right pixel (“E”) turns on when the East direction is “True”. The bottom center pixel (“S”) turns on when the South direction is “True”. The middle pixel (“C”) is always on.

+ +

Diagram:

+ +
 N 
+WCE
+ S
+
+ +

Each tile can only have one start and end, so there are 6 possible tiles. (Straights East-West and North-South, and all 4 turning pieces which form 90 degree angles)

+ +

Here’s an example tile, complete with attributes:

+ +
Tile1:
+-North: True
+-East: False
+-South: False
+-West: True
+
+ +

Here’s what that tile looks like, if dashes are “off” pixels and percent signs are on pixels:

+ +
-%-
+%%-
+---
+
+ +

Let’s say I have a list of these tiles, where each tile connects to the one in the list before it, and I am procedurally generating them.

+ +

The idea is to be able to generate paths of arbitrary lengths, which do not need to exist inside the bounds of an array.

+ +

Of course, I want my map of tiles to be logically sensible, so it cannot have more than 3 lefts or rights in a row. My current “solution” is to have a counter where I subtract one if it turns to the right of the current tile, and add one if it turns to the left. I could then force it to choose a left-turning piece if the counter is at -3, and force it to choose a right turning piece if the counter is at 3. (This algorithm will work with my current generation methods, that is not the problem.)

+ +

What is the most efficient way, given the past tiles and their respective directions, to determine whether to increment or decrement the counter?

+ +

Sorry if this is a little verbose, I couldn’t think of a better way to phrase my question.

+ +

Edit: To clarify, the goal of the algorithm is to determine whether a given tile turns left, right, or goes straight (which isn’t really a turn, but IDC) based on the previous tiles, and return 1, -1, or 0, respectively. That’s it.
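For what it’s worth, the increment/decrement decision can be computed locally from the heading entering a tile and the tile’s two open sides, via the sign of a 2D cross product; no look-back over previous tiles is needed beyond the current heading. A sketch (Python; tile representation simplified to a set of open sides):

```python
OPPOSITE = {'N': 'S', 'S': 'N', 'E': 'W', 'W': 'E'}
DIRS = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}  # x right, y up

def turn(prev_heading, tile_sides):
    '''Return 1 for a left turn, -1 for a right turn, 0 for straight.
    prev_heading: direction of travel entering this tile (N/E/S/W).
    tile_sides: the tile's two open sides, e.g. {'N', 'W'}.'''
    entry = OPPOSITE[prev_heading]        # the side facing the previous tile
    (exit_side,) = tile_sides - {entry}   # the other open side
    (px, py), (ex, ey) = DIRS[prev_heading], DIRS[exit_side]
    return px * ey - py * ex              # sign of the 2D cross product

assert turn('E', {'N', 'W'}) == 1   # heading east, exiting north: left turn
assert turn('E', {'E', 'W'}) == 0   # east-west straight
assert turn('N', {'S', 'E'}) == -1  # heading north, exiting east: right turn
```

The exit side of each tile becomes the heading into the next, so the counter update is just `counter += turn(heading, sides)` as tiles are generated.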

+",292228,,292228,,43101.99444,43102.02222,Algorithm to determine direction based on previous tiles,,1,4,,,,CC BY-SA 3.0,, +363268,1,363289,,1/2/2018 9:31,,5,642,"

As described here, a function can be said to be a query when it returns a value, and a command when it modifies a value. It also states that a function should not be both.

+ +

That a query should not also be a command is popularly enforced by functional programming, where such functions are pure.

+ +

Does a language exist where, if and only if a function is non-void, it cannot change state?
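For reference, the separation itself is easy to state as a convention even in a language that does not enforce it; a minimal illustration (Python, names are mine) of the discipline the question asks about having a compiler enforce:

```python
class Counter:
    '''CQS as a convention: queries return a value and leave state untouched;
    commands mutate state and return None.'''
    def __init__(self):
        self._n = 0

    def increment(self):   # command: changes state, returns nothing
        self._n += 1

    def value(self):       # query: returns a value, changes nothing
        return self._n

c = Counter()
assert c.increment() is None   # commands deliberately return nothing
c.increment()
assert c.value() == 2          # querying twice observes the same state
assert c.value() == 2
```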

+",212623,,113256,,43104.78056,43320.80208,Has any language enforced Command–query separation?,,2,5,,,,CC BY-SA 3.0,, +363269,1,,,1/2/2018 9:57,,2,316,"

I have a website built on PHP, and now I want users to be able to log in. I do not want to take care of user management myself, so I'm thinking that social sign in would be convenient both for myself and for the users. The website is a traditional ""server rendered"" site, i.e. I have a database on my server, and my PHP-scripts need some form of user authentication/user id in order to insert and retrieve stuff from the database.

+ +

I have used Firebase auth and FirebaseUI for ""client side"" websites before, which is really nice since they take care of everything, including the UI and login flow.

+ +

But when it comes to using it for a traditional ""server-side"" application, I'm confused. I guess I can use FirebaseUI in the browser to get the user to sign in and retrieve a JWT token, but then what? How would I pass it to the server scripts on each request? Or should I just pass it once and start a session based on the information in the JWT token? Or should I use something entirely different?

+",120703,,,,,43945.2125,How to use social sign in with my PHP website?,,1,0,,,,CC BY-SA 3.0,, +363270,1,,,1/2/2018 10:38,,1,180,"

I'm specifying an API for a REST service. The REST service will be primarily accessed by web browsers using XHR requests running code hosted from remote origins. Therefore, I'll need to add CORS support, such as the Access-Control-Allow-Origin header, to allow these cross-origin requests.

+ +

Should I add CORS headers to non-200 responses, such as 400, 401 and 5xx?

+",148189,,,,,43102.44306,CORS headers for non-200 HTTP responses,,0,3,0,,,CC BY-SA 3.0,, +363272,1,363275,,1/2/2018 11:17,,4,712,"

If I have a custom type (or maybe an enum), for example a Range:

+ +
Public Class Range
+
+    Sub New(minimum As Single, maximum As Single)
+        Me.Minimum = minimum
+        Me.Maximum = maximum
+    End Sub
+
+    Public Property Minimum As Single 
+    Public Property Maximum As Single 
+    Public ReadOnly Property Delta As Single 
+        Get
+            Return Maximum - Minimum
+        End Get
+    End Property
+End Class
+'-----------
+'Some methods...
+
+ +

This type should be used in the domain model (implementing DDD), in the business logic when doing stuff, and also in the data layer, where it will be stored as a complex type.

+ +

So :

+ +
  • Should I define such classes in a project App.Common and reference that assembly everywhere? (My choice for now, but I don't know whether having such references in the domain model project is an anti-pattern.)
  • Should I define it in my domain model and reference the domain model everywhere it is needed? (Maybe better, since the domain model is the core in DDD, but the type is not a domain object or value object...)
  • Should I create different classes for each layer? (That doesn't make sense to me, since I want to handle it like any other type and not as Object.)
+ +

EDIT: In my case I use Range as a MeasuringRange for measuring devices. For example, a thermometer that can measure from 0°F-250°F:

+ +
public class Thermometer
+{
+    public Thermometer(Range measureRange)
+    {
+        this.MeasureRange=measureRange;
+    }
+
+    public Range MeasureRange {get;set;}
+}
+
+ +

At the moment it's also used in the DataLayer (using EF Code First) to store the device range as a complex type (EF creates fields MeasureRange_Max and MeasureRange_Min).

+ +

It's also used in services in the BLL when doing some measurements with the devices.

+",267308,,267308,,43102.60694,43102.60694,Which layer for custom type (DDD),,2,3,,,,CC BY-SA 3.0,, +363273,1,363295,,1/2/2018 11:31,,-2,267,"

The Steam Client application allows one to download games purchased through the associated Steam account. A typical modern game uses quite a bit of disk space (in the order of 50-70 GB), so it is not unusual for users to download the files in several Steam sessions.

+ +

However, when resuming a download that has been stopped or paused, it resumes almost instantaneously from where it was stopped before. This indicates that no partial hashing is performed on the local machine (which would actually take a lot of time in order to hash tens of GB). How can Steam ensure the integrity of the files if it does not compute a partial hash when resuming the data transfer? Note that even when the download is interrupted due to a system failure, the process resumes almost immediately when the Steam Client is started.

+ +

Even if Steam performs a partial hash ""by blocks"", i.e., by hashing the last block instead of the entire file, how can it make sure that the files already on disk are indeed there, or not corrupted?
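One common design that would explain this behaviour (an assumption on my part, not knowledge of Steam's internals) is a manifest of per-chunk hashes: resuming only requires checking the trailing chunk(s) or trusting the recorded byte count, while a separate, explicit verify-integrity pass re-hashes every chunk. A toy sketch:

```python
import hashlib

CHUNK = 4  # tiny chunk size for the demo; real manifests use chunks of about 1 MB

def manifest(data):
    '''Per-chunk hashes, computed once on the server side.'''
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def resume_offset(partial, chunk_hashes):
    '''Count the complete on-disk chunks that match the manifest; the
    download resumes right after the last verified chunk.'''
    ok = 0
    for i, h in enumerate(chunk_hashes):
        block = partial[i * CHUNK:(i + 1) * CHUNK]
        if len(block) < CHUNK or hashlib.sha256(block).hexdigest() != h:
            break
        ok += 1
    return ok * CHUNK

data = b'abcdefghijklmnop'
m = manifest(data)
assert resume_offset(data[:10], m) == 8           # two verified chunks, resume at byte 8
assert resume_offset(data[:4] + b'XXXX', m) == 4  # corruption caught, fall back to byte 4
```

Under this scheme instant resume is cheap because it verifies at most a few chunks (or none, trusting file sizes), and silent corruption is only guaranteed to be caught by the on-demand full verification pass, not on resume. The sketch assumes the file length is a multiple of the chunk size; a trailing partial chunk would simply be re-downloaded.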

+",150590,,,,,43102.86389,How does Steam resume downloads without computing a partial hash?,,1,11,2,,,CC BY-SA 3.0,, +363278,1,,,1/2/2018 13:28,,2,80,"

I noticed a pattern in my code. It applies to situations where things can be selected. It revolves around classes which I call (and in code often suffix with) Registry, Slot and Updater.

+ +

Registry

+ +

This class is pretty much a combination of what is known as the registry pattern: (""A well-known object that other objects can use to find common objects and services."") and some filter methods.
+These are supposed to be the list of all [Type]s. They always use a collection internally, but the methods they have is determined by what other classes want from them. That might be simple wrappers for the collection class or domain specific, named filter methods. Events - for example when an item is removed - are added pragmatically when needed.

+ +
public class ProductRegistry
+{
+    private readonly List<Product> _products = new List<Product>();
+
+    public void Add(Product product)
+    {
+        _products.Add(product);
+    }
+
+    public List<Product> GetAllProductsOnSale()
+    {
+        // domain specific filtering...
+    }
+}
+
+ +

Slot

+ +

Named ""Slot"" because it can hold an object. In most cases, an object being held here means it is the currently selected object. Contrary to Registry and Updaters, the code for this is always the same except for the type of what is being selected. I could turn this into a generic class, but I only came to think of that while writing this question - and it's not like that would change how this pattern works.

+ +
public class ProductSlot
+{
+    public event Action WhenProductHasBeenChanged;
+
+    public Product Product{get; private set;}
+
+    public void Set(Product product)
+    {
+        Product = product;
+        if(WhenProductHasBeenChanged!=null) WhenProductHasBeenChanged();
+    }
+}
+
+ +

Updaters

+ +

These update the registry and slot. How they achieve that varies wildly, so example code wouldn't make sense here.
However, to give an example of what they do, a ProductUpdater might supervise the process of downloading a string from a web address, parsing it into Products, and then adding them to the ProductRegistry.
+One thing I did repeatedly was registering an Updater to the change event of a slot for a different thing. For example, a ContractSlotUpdater listens to changes in a CustomerSlot, so it can update the ContractSlot with the Contract which corresponds to the new Customer.

+ +

Additional notes

+ +

I added the C# tag because that is the language - and thus the language features - that I am currently using. I see no problem transferring this concept to other languages as long as they have the necessary features.

+ +

The question

+ +

If this is not a good thing to do, please tell me why.
+Otherwise, I would like to know what to call it.

+",226041,,,,,43102.56111,Is there a name for this Registry-Slot-Updaters pattern?,,0,9,,,,CC BY-SA 3.0,, +363281,1,363290,,1/2/2018 14:52,,4,1368,"

I have Aggregate Roots that use the event sourcing technique of being built from a series of events in a Repository. This is all great when I just need to manage changes of state etc., but when I come to using the Specification pattern to apply my app-specific business rules, I am hitting a wall on what entities to use, how to instantiate them, etc.

+ +

I would want to run specific methods on my Repository, for example getProductByProductCode so I can check my current AR against it for uniqueness (simple example), but I am unsure how to do this, as my Repository is set up to get ARs by their ID from the event store.

+ +

I used to have a database backing my Repositories; now I only have an event store, as I have no read models yet.

+ +
  • Has anyone done this before, and how did you do it?
  • Do I need a read model first?
  • Should I query the read model, then use a resulting ID to get the AR from the event store?
  • What if I need multiple AR results to be returned?
+ +

I am so confused as I thought I was doing it all the right way now as we need event sourcing, but I can't seem to marry up my old-school thinking with how to do it now it's event sourced :(
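For what it's worth, one way this is commonly resolved (sketched here in Python for brevity, with invented event and class names) is to project events into a read model and have the specification query the projection rather than the event store:

```python
class ProductReadModel:
    '''Projection kept up to date by replaying (or subscribing to) events.'''
    def __init__(self):
        self.by_code = {}   # product code -> aggregate id

    def apply(self, event):
        if event['type'] == 'ProductRegistered':
            self.by_code[event['code']] = event['id']

class UniqueProductCodeSpec:
    '''Specification that queries the read model, not the event store.'''
    def __init__(self, read_model):
        self._rm = read_model

    def is_satisfied_by(self, code):
        return code not in self._rm.by_code

events = [
    {'type': 'ProductRegistered', 'id': 'a1', 'code': 'SKU-1'},
    {'type': 'ProductRegistered', 'id': 'a2', 'code': 'SKU-2'},
]
rm = ProductReadModel()
for e in events:
    rm.apply(e)

spec = UniqueProductCodeSpec(rm)
assert not spec.is_satisfied_by('SKU-1')   # code already taken
assert spec.is_satisfied_by('SKU-3')       # code is free
```

The event store stays append-only and ID-keyed; queries like getProductByProductCode live on projections, which can be rebuilt from the event log at any time. Note a projection updated asynchronously is only eventually consistent, so strict uniqueness may additionally need a reservation step or a synchronous projection.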

+",208446,,,,,43102.75208,Using Event Sourced Aggregate Roots with the Specification Pattern,,1,0,2,,,CC BY-SA 3.0,, +363282,1,363322,,1/2/2018 15:01,,9,670," + +

I've been studying DDD and I'm currently struggling to find a way to apply the concepts in actual code. I have about 10 years of experience with N-tier, so it's very likely that the reason I'm struggling is that my mental model is too coupled to that design.

+ +

I've created an Asp.NET Web Application and I'm starting with a simple domain: a web monitoring application. Requirements:

+ +
  • The user must be able to register a new Web App to monitor. The web app has a friendly name and points to a URL;
  • The web app will periodically be polled for a status (online/offline);
  • The web app will periodically be polled for its current version (the web app is expected to have a ""/version.html"", which is a file declaring its system version in a specific markup).
+ +

My doubts concern mainly the division of responsibilities, finding the proper place for each thing (validation, business rules, etc.). Below, I've written some code and added comments with questions and considerations.

+ +

Please criticize and advise. Thanks in advance!

+ +
+ +

DOMAIN MODEL

+ +

Modeled to encapsulate all business rules.

+ +
// Encapsulates logic for creating and validating Url's.
+// Based on ""Unbreakable Domain Models"", YouTube talk from Mathias Verraes
+// See https://youtu.be/ZJ63ltuwMaE
+public class Url: ValueObject
+{
+    private System.Uri _uri;
+
+    public string Value => _uri.ToString(); // a member cannot share the name of its enclosing type
+
+    public Url(string url)
+    {
+        _uri = new Uri(url, UriKind.Absolute); // Fails for a malformed URL.
+    }
+}
+
+// Base class for all Aggregates (root or not).
+public abstract class Aggregate
+{
+    public Guid Id { get; protected set; } = Guid.NewGuid();
+    public DateTime CreatedAt { get; protected set; } = DateTime.UtcNow;
+}
+
+public class WebApp: Aggregate
+{
+    public string Name { get; private set; }
+    public Url Url { get; private set; }
+    public string Version { get; private set; }
+    public DateTime? VersionLatestCheck { get; private set; }
+    public bool IsAlive { get; private set; }
+    public DateTime? IsAliveLatestCheck { get; private set; }
+
+    public WebApp(Guid id, string name, Url url)
+    {
+        if (/* some business validation fails */)
+            throw new InvalidWebAppException(); // Custom exception.
+
+        Id = id;
+        Name = name;
+        Url = url;
+    }
+
+    public void UpdateVersion()
+    {
+        // Delegates the plumbing of HTTP requests and markup-parsing to infrastructure.
+        var versionChecker = Container.Get<IVersionChecker>();
+        var version = versionChecker.GetCurrentVersion(this.Url);
+
+        if (version != this.Version)
+        {
+            var evt = new WebAppVersionUpdated(
+                this.Id, 
+                this.Name, 
+                this.Version /* old version */, 
+                version /* new version */);
+            this.Version = version;
+            this.VersionLatestCheck = DateTime.UtcNow;
+
+            // Now this seems very, very wrong!
+            var repository = Container.Get<IWebAppRepository>();
+            var updateResult = repository.Update(this);
+            if (!updateResult.OK) throw new Exception(updateResult.Errors.ToString());
+
+            Container.Get<IEventDispatcher>().Publish(evt);
+        }
+
+        /*
+         * I feel that the aggregate should be responsible for checking and updating its
+         * version, but it seems very wrong to access a Global Container and create the
+         * necessary instances this way. Dependency injection should occur via the
+         * constructor, and making the aggregate depend on infrastructure also seems wrong.
+         * 
+         * But if I move such methods to WebAppService, I'm making the aggregate
+         * anaemic; It will become just a simple bag of getters and setters.
+         *
+         * Please advise.
+         */
+    }
+
+    public void UpdateIsAlive()
+    {
+        // Code very similar to UpdateVersion().
+    }
+}
+
+ +

And a DomainService class to handle Creates and Deletes, which I believe are not the concern of the Aggregate itself.

+ +
public class WebAppService
+{
+    private readonly IWebAppRepository _repository;
+    private readonly IUnitOfWork _unitOfWork;
+    private readonly IEventDispatcher _eventDispatcher;
+
+    public WebAppService(
+        IWebAppRepository repository, 
+        IUnitOfWork unitOfWork, 
+        IEventDispatcher eventDispatcher
+    ) {
+        _repository = repository;
+        _unitOfWork = unitOfWork;
+        _eventDispatcher = eventDispatcher;
+    }
+
+    public OperationResult RegisterWebApp(NewWebAppDto newWebApp)
+    {
+        var webApp = new WebApp(newWebApp.Id, newWebApp.Name, new Url(newWebApp.Url)); // matches the WebApp(Guid, string, Url) constructor
+
+        var addResult = _repository.Add(webApp);
+        if (!addResult.OK) return addResult.Errors;
+
+        var commitResult = _unitOfWork.Commit();
+        if (!commitResult.OK) return commitResult.Errors;
+
+        _eventDispatcher.Publish(new WebAppRegistered(webApp.Id, webApp.Name, webApp.Url));
+        return OperationResult.Success;
+    }
+
+    public OperationResult RemoveWebApp(Guid webAppId)
+    {
+        var removeResult = _repository.Remove(webAppId);
+        if (!removeResult.OK) return removeResult.Errors;
+
+        _eventDispatcher.Publish(new WebAppRemoved(webAppId));
+        return OperationResult.Success;
+    }
+}
+
+ +
+ +

APPLICATION LAYER

+ +

The class below provides an interface for the WebMonitoring domain to the outside world (web interfaces, REST APIs, etc.). It's just a shell at the moment, redirecting calls to the appropriate services, but it would grow in the future to orchestrate more logic (always accomplished via domain models).

+ +
public class WebMonitoringAppService
+{
+    private readonly IWebAppQueries _webAppQueries;
+    private readonly WebAppService _webAppService;
+
+    /*
+     * I'm not exactly reaching for CQRS here, but I like the idea of having a
+     * separate class for handling queries right from the beginning, since it will
+     * help me fine-tune them as needed, and always keep a clean separation between
+     * crud-like queries (needed for domain business rules) and the ones for serving
+     * the outside-world.
+     */
+
+    public WebMonitoringAppService(
+        IWebAppQueries webAppQueries, 
+        WebAppService webAppService
+    ) {
+        _webAppQueries = webAppQueries;
+        _webAppService = webAppService;
+    }
+
+    public WebAppDetailsDto GetDetails(Guid webAppId)
+    {
+        return _webAppQueries.GetDetails(webAppId);
+    }
+
+    public List<WebAppDetailsDto> ListWebApps()
+    {
+        return _webAppQueries.ListWebApps();
+    }
+
+    public OperationResult RegisterWebApp(NewWebAppDto newWebApp)
+    {
+        return _webAppService.RegisterWebApp(newWebApp);
+    }
+
+    public OperationResult RemoveWebApp(Guid webAppId)
+    {
+        return _webAppService.RemoveWebApp(webAppId);
+    }
+}
+
+ +
+ +

Closing the Matters

+ +

After gathering answers here and in this other question, which I opened for a different reason but which ultimately got to the same point as this one, I came up with this cleaner and better solution:

+ +

Solution proposition in a GitHub Gist

+",20115,,20115,,43104.45903,43104.45903,How to apply some concepts of DDD to actual code? Specific questions inside,,1,11,5,,,CC BY-SA 3.0,, +363286,1,363294,,1/2/2018 16:45,,10,7372,"

I see that the size of boolean is not defined. Below are two statements I saw at java primitive data size

+ +
+

not precisely defined

+
+ +

Further explanation says

+ +
+

boolean represents one bit of information, but its ""size"" isn't + something that's precisely defined.

+
+ +

The question that came to my mind was: why can't boolean in Java be represented with 1 bit (or 1 byte, if a byte is the minimum representation)?

+ +

But I see it has already been answered at https://stackoverflow.com/questions/1907318/why-is-javas-boolean-primitive-size-not-defined where it says

+ +
+

the JVM uses a 32-bit stack cell, used to hold local variables, method + arguments, and expression values. Primitives that are smaller than 1 + cell are padded out, primitives larger than 32 bits (long and double) + take 2 cells

+
+ +

Does that mean the byte/char/short primitive data types also take 32 bits, even though their sizes are defined as 8/16/16 bits?

+ +

Also, can we say a boolean's size will be 32 bits on a 32-bit CPU and 64 bits on a 64-bit CPU?

+",124597,,124597,,43102.70208,43103.04931,boolean size not defined in java: why?,,2,7,5,,,CC BY-SA 3.0,, +363292,1,,,1/2/2018 19:47,,3,1528,"

I'm learning about OOD and good practices in OOP and find myself struggling with some key concepts. As a practice I'm rewriting my custom PDO database abstraction layer which used to be a single file class with >2000 lines of code.

+

I learned that one should use inheritance if classes are in an "is a" relationship and composition if they have a "has a" relationship. Composition can be implemented like this, given that I would avoid PHP's traits (example from here):

+
<?php
+
+class Head {
+}
+
+class Human {
+    private $head;
+    public function __construct(Head $head) {
+       $this->head = $head;
+    }
+}
+
+$bob = new Human(new Head);
+
+

Good. However, in my case I want to compose a class B into A, while there can be multiple instances of B. Precisely, the main database class (A) has one or multiple table classes (B). Injecting a table object similar to the head object in the above example might not be what I want. Later, there might also be a select class or an insert class. I do this just for practice and to learn how I can keep my classes small in file size. Should I inject all dependencies during construction and recycle them? Or should I instantiate them within the main database class and inject the connection into the subclasses? The main database class holds the PDO object in '$_connection'.

+

Q1: What is the best way to compose the database and Table classes?

+

I can think of these strategies.

+

Strategy #1

+
<?php
+
+class db extends PDO{
+
+  private $_connection;
+
+  public function __construct($dsn){
+
+    parent::__construct($dsn); // ""new parent::__construct()"" is invalid PHP
+    $this->_connection = $this; // db extends PDO, so the instance itself is the connection
+
+  }
+
+  public function createTable($def){
+    
+     $table = new Table($this->_connection, $def);
+
+  }
+
+}
+
+

Cons:

+
    +
  • I have the new operator in a method which I assume is generally not ideal. Better, I should inject all instances.
  • +
  • I have to declare a createTable method in the base class. This spams my base class. If functionality increases the base class will be bigger and bigger, which is what I wanted to circumvent in the first place. I would rather like to be able to call create on the table object as in Table->create().
  • +
  • I'm not sure about the the injection of the connection to the table class. Is that good practice?
  • +
+

Strategy #2

+
<?php
+
+class db extends PDO{
+
+  private $_connection;
+  public  $table;
+
+  public function __construct($dsn, $table){
+
+    parent::__construct($dsn); // ""new parent::__construct()"" is invalid PHP
+    $this->_connection = $this;
+    $this->table = $table;
+
+  }    
+
+}
+
+$db = new db($dsn, new Table());
+$db->table->create($def);
+
+

Cons:

+
    +
  • I don't have the connection available in the Table class as it is neither a child nor is the connection manually injected.
  • +
+

I don't think the db and Table classes are in an "is a" relationship, and thus one should not inherit from the other. But currently I'm lacking a good composition implementation.

+

Disclaimer

+

I tried to work out a solution myself but need help on what the best practice for this could be. Composition, as posted with the example (human, head), just doesn't feel right here in the case of database and table. I hope I'll receive helpful answers; links or buzzwords are also welcome, as I'm just learning and seem to have a hard time getting to the next level.

+",292274,,-1,,43998.41736,43102.97014,php class composition: how to implement “has a” relationship in the case of a DAL,,1,9,1,,,CC BY-SA 3.0,, +363298,1,363553,,1/2/2018 22:23,,1,361,"

I have a search box I'm going to use on different pages (I use the term 'page' loosely here).

+ +
    +
  • The search box puts its value (the search string) into the Redux state.
  • +
  • The results are populated from an external API
  • +
  • Another component displays the results of the search as a list.
  • +
+ +

Example state:

+ +
{
+    searchString: 'Cheeeese',
+    results: [
+        'I like Cheeeese!',
+        'There is no Cheeeese!!',
+    ],
+}
+
+ +

What's baffling me is this - if there are multiple search boxes (on different 'pages') where do I put the responsibility for requesting the results from the API?

+ +
    +
  • There's more than one place the search box is used, so neither the search box nor a wrapper of the search box can be responsible without making duplicate calls.
  • +
  • A Redux reducer must be pure and free of side effects, so the reducer can't be responsible.
  • +
  • Each page shouldn't be responsible, since this would lead to code duplication.
  • +
+ +

The nearest thing to sanity I can find is a non-rendering component whose sole purpose is to watch the searchString state entry, and fire off a request to the API (updating results on response).

+ +

Is this a reasonable approach, or is my component structure itself the problem?

+",3526,,,,,43106.35417,How should I structure React Redux components when requesting Data From an API?,,1,5,1,,,CC BY-SA 3.0,, +363300,1,363304,,1/2/2018 22:38,,3,1284,"

Sometimes my Aggregate will be very simple; some scenarios are simply not complex enough to require deep trees of objects and relations.

+ +

Consider a Website Monitoring application, which periodically pings a URL to check if it is alive.

+ +

The Web App will have:

+ +
    +
  • Id
  • +
  • FriendlyName
  • +
  • ‎URL
  • +
  • ‎IsAlive
  • +
+ +

It doesn't have much data, doesn't have child objects (except maybe for URL being a Value Object) and will certainly not have many invariants - if any - to enforce either, at least not at this time.

+ +

Now some say that because it is not modelling a more complex model, with relationships and internals and whatnot, it is not an Aggregate, it is only an Entity.

+ +

The thing is, I don't think that complexity or size should dictate whether it's an Aggregate, Entity or Value Object, but rather its MEANING.

+ +

For the Web Application Monitoring domain, that Web App ""entity"" is the root model; it's what is going to be returned from a Repository. If the domain expert brings new requirements, they will be related to that Web App model.

+ +

So, for me, I believe it makes it a WebAppAggregate, rather than a WebAppEntity.

+ +
+ +

Question: is my line of thinking correct, or did I get it all wrong? Thanks in advance.

+",20115,,,,,43103.59583,"DDD: must all Aggregates model relationships, or they can be ""shallow""?",,2,0,2,,,CC BY-SA 3.0,, +363307,1,363310,,1/3/2018 1:52,,73,18348,"

Whenever I need division, for example for condition checking, I like to refactor the expression from division into multiplication, for example:

+ +

Original version:

+ +
if(newValue / oldValue >= SOME_CONSTANT)
+
+ +

New version:

+ +
if(newValue >= oldValue * SOME_CONSTANT)
+
+ +

Because I think it can avoid:

+ +
    +
  1. Division by zero

  2. +
  3. Overflow when oldValue is very small

  4. +
+ +

Is that right? Is there a problem with this habit?
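To illustrate where the two forms can diverge, here is a small sketch (Python, with made-up numbers): when the denominator may be negative, multiplying both sides flips the inequality, so the rewrite is only safe when oldValue is known to be positive.

```python
SOME_CONSTANT = 2

def check_div(new_value, old_value):
    # Original form: divide, then compare.
    return new_value / old_value >= SOME_CONSTANT

def check_mul(new_value, old_value):
    # Rewritten form: compare against the product instead.
    return new_value >= old_value * SOME_CONSTANT

# With a positive denominator the two forms agree.
print(check_div(10, 4), check_mul(10, 4))    # True True
# With a negative denominator they disagree, because dividing by a
# negative number flips the direction of the inequality.
print(check_div(10, -4), check_mul(10, -4))  # False True
```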

+",248528,,591,,43105.64861,43647.32639,Is it good practice to replace division with multiplication when possible?,,10,13,15,,,CC BY-SA 3.0,, +363309,1,,,1/3/2018 2:29,,2,89,"

I know someone has probably already thought of this, but here is a way to replace DAOs and Domain objects: For example

+ +
public class Bike {
+
+  private int id;
+
+  public Bike(String model) {
+    //Create bike object in database.
+    id = getAutoIncrementKey();
+  }
+
+  private Bike(int id) {
+    this.id = id;
+  }
+
+  public static Bike of(int id) {
+    return new Bike(id);
+  }
+
+  public String getModel()  {
+    //fetch from database
+    return model;
+  }
+
+  public void setModel(String model) {
+    //update in database
+  }
+
+  public void delete() {
+    //Delete in database
+    id = -1;
+  }
+
+}
+
+ +

Basically, the constructor creates the object, a static factory method is used to get an existing object, getters and setters connect to the database, and a delete method is used for deletion.

+ +

I am curious about what is wrong with this design. Can anyone tell me any problems with this and why it is not used, or tell me if this is the new Serverless?

+",242685,,242685,,43103.62153,43103.62153,Idea on replacing DAO/Domain obj pattern,,0,2,3,,,CC BY-SA 3.0,, +363312,1,,,1/3/2018 5:18,,4,2047,"

If I understand correctly, the Git HEAD pointer generally/usually points to the latest/last (end) commit of a branch.

+ +

I know HEAD can point to other objects, such as:

+ +
    +
  1. A specific non-end commit of the branch.
  2. +
  3. A specific tag of a group of commits.
  4. +
+ +

But my question references only the general or usual usage of that term, hence I ask:

+ +

Does the Git HEAD pointer generally/usually point to the latest/last (end) commit of a branch?

+",287171,,287171,,43103.73472,43103.76319,Does Git head pointer generally/usually points to the latest/last (end) commit of a branch?,,2,2,1,,,CC BY-SA 3.0,, +363316,1,,,1/3/2018 6:37,,2,684,"

This older question tells us that in functional programming ""true"" randomness cannot be achieved, since in FP functions are pure/idempotent and return the same value irrespective of the number of invocations, without any side effects.

+ +

But if that is true, how is FP applicable to problems like randomly picking a CAPTCHA or some puzzle to present to the user before entering the system?

+ +

I considered taking the system time as a seed inside the function. But that depends on external state.

+ +

Could anyone please demonstrate it with a code snippet in Haskell/Clojure etc?
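Not Haskell or Clojure, but here is a rough Python sketch of the usual functional answer: make the generator a pure function from a seed to a (value, next seed) pair, and push the impure act of obtaining the initial seed (system time, OS entropy) to the edge of the program. The LCG constants below are only illustrative.

```python
def next_rand(seed):
    # Pure linear congruential step: the same seed always yields the same pair.
    new_seed = (1103515245 * seed + 12345) % (2 ** 31)
    return new_seed % 100, new_seed

v1, s1 = next_rand(42)   # deterministic and referentially transparent
v2, s2 = next_rand(s1)   # the "randomness" comes from threading the seed along
print(v1, v2)
```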

+",292326,,292326,,43103.50833,43104.39653,Can functional programming used for solving problems which require randomness?,,3,16,,,,CC BY-SA 3.0,, +363318,1,,,1/3/2018 7:24,,36,7472,"

Let's say I have the logic below. How would I write it in functional programming?

+ +
    public int doSomeCalc(int[] array)
+    {
+        int answer = 0;
+        if(array!=null)
+        {
+            for(int e: array)
+            {
+                answer += e;
+                if(answer == 10) break;
+                if(answer == 150) answer += 100;
+            }
+        }
+        return answer;
+    }
+
+ +

The examples I see in most blogs and articles just explain the simple case of one straightforward math function, say 'sum'. But I have logic similar to the above written in Java and would like to migrate it to functional code in Clojure. If we can't do the above in FP, then the promotions of FP don't state this limitation explicitly.

+ +

I know that the above code is totally imperative. It was not written with the forethought of migrating it to FP in future.
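Not Clojure, but to sketch the usual functional shape (here in Python): express the loop as a fold whose accumulator carries both the running answer and a 'done' flag, so the remaining elements are ignored once the break condition has been hit.

```python
from functools import reduce

def do_some_calc(array):
    if array is None:
        return 0

    def step(acc, e):
        answer, done = acc
        if done:                 # emulates "break": ignore the rest
            return acc
        answer += e
        if answer == 10:
            return answer, True
        if answer == 150:
            answer += 100
        return answer, False

    return reduce(step, array, (0, False))[0]

print(do_some_calc([4, 6, 99]))  # 10 -- stops accumulating once 10 is reached
print(do_some_calc([150, 1]))    # 251
```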

+",292326,,2366,,43103.60278,43554.15694,What are the functional equivalents of imperative break statements and other loop checks?,,7,5,11,,,CC BY-SA 3.0,, +363323,1,363325,,1/3/2018 9:01,,-1,544,"

I came across a tutorial which says this code is Exception Ducking

+ +
public class SomeClass {
+
+void doTask() {
+
+    try {
+        //..Some Exception prone code
+    }
+    catch(Exception e) { }
+    }
+}
+
+ +

However, I think ducking means just letting the exception propagate up the call stack by not handling it, whereas this code seems to swallow the exception.

+ +

Does this also come under ducking?

+ +

Please provide some clarification on the issue.

+ +

Thanks

+",198632,,,,,43103.39306,"Difference between Exception ""Ducking"" and ""Swallowing""",,1,0,0,,,CC BY-SA 3.0,, +363324,1,,,1/3/2018 9:09,,3,1030,"

Please see the link here: https://lostechies.com/derekgreer/2010/04/19/double-dispatch-is-a-code-smell/, which describes Double Dispatch as a code smell. Now see this link: https://lostechies.com/jimmybogard/2010/03/30/strengthening-your-domain-the-double-dispatch-pattern/, which talks about double dispatch strengthening the domain. The same author of the second link (who believes in Double Dispatch) wrote the code here: https://github.com/jbogard/presentations/blob/master/WickedDomainModels/After/Model/Member.cs and specifically this (which I believe is the Double Dispatch pattern):

+ +
public Offer AssignOffer(OfferType offerType, IOfferValueCalculator valueCalculator) 
+{ 
+    DateTime dateExpiring = offerType.CalculateExpirationDate(); 
+    int value = valueCalculator.CalculateValue(this, offerType); 
+    var offer = new Offer(this, offerType, dateExpiring, value); 
+    _assignedOffers.Add(offer); 
+    NumberOfActiveOffers++; 
+    return offer; 
+} 
+
+ +

From what I can understand, the first author is saying that double dispatch is an anti-pattern because it violates SOLID principles. However, I believe the SOLID principles are violated in his code because of the way he has designed the classes, not specifically because of double dispatch. I also notice that the two articles were written within one month of each other in 2010.

+ +

Is Double Dispatch an anti-pattern?
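For readers unfamiliar with the pattern itself, here is a minimal sketch (Python, with invented names rather than the linked C# code): the result depends on both the receiving object and the collaborator it passes itself to.

```python
class Member:
    def __init__(self, months_active):
        self.months_active = months_active

    def assign_offer_value(self, calculator):
        # First dispatch (on Member) hands control to the calculator,
        # which dispatches again on its own concrete type.
        return calculator.calculate_value(self)

class StandardCalculator:
    def calculate_value(self, member):
        return 10

class LoyaltyCalculator:
    def calculate_value(self, member):
        return 10 + member.months_active

m = Member(months_active=12)
print(m.assign_offer_value(StandardCalculator()))  # 10
print(m.assign_offer_value(LoyaltyCalculator()))   # 22
```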

+",65549,,,,,43103.40694,Is Double Dispatch an anti pattern?,,1,0,2,,,CC BY-SA 3.0,, +363328,1,,,1/3/2018 9:30,,-1,85,"

I have code for a game that I want to change, to make a different game but with the same mechanics and design.

+ +

There are some features that I don't want to have in the game.

+ +

I have 2 options:

+ +

1) Remove feature

+ +

Deleting the code will definitely remove the feature, but may take time and cause some bugs.

+ +

Pros:

+ +
    +
  • Less complex code
  • +
  • Better for maintenance
  • +
+ +

Cons:

+ +
    +
  • Harder to do

  • +
  • May cause bugs

  • +
+ +

2) Make it ineffective

+ +

For example, if I want the player not to receive damage, I can set the damage multiplier to 0.

+ +

Pros:

+ +
    +
  • Easy

  • +
  • Features can be reused later

  • +
+ +

Cons:

+ +
    +
  • Higher complexity

  • +
  • Harder maintenance

  • +
  • Slowing down application (by a tiny bit)

  • +
+ +

Am I missing something? What is the best practice?

+",153946,,,,,43103.40139,Should I remove feature or make it ineffective?,,1,3,,,,CC BY-SA 3.0,, +363329,1,,,1/3/2018 9:35,,1,642,"

Using a functional language, how can 2 different parties achieve the result of increment/decrement operations concurrently?

+ +

For the below scenario, let's say I have 2 quantities in stock and 2 users of an e-commerce site who block 1 quantity each concurrently. How can I mark the remaining stock as '0' in FP, given that the state is shared and immutable? I assume that each flow gets a copy of the initial stock '2' and operates upon it. If so, how will it reach '0' unless serialized?

+ +

Kindly narrate this with some sample working code.

+ +

How is this different from CAS operations on AtomicInteger (in Java), where the client is expected to retry on failure? (Both in terms of approach and efficiency.)
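To make the comparison concrete, here is a rough Python sketch of the approach Clojure takes with atoms and swap!: the stock value itself stays an immutable number, and the only mutable thing is one reference updated by compare-and-set with retry; this is essentially the same optimistic strategy as AtomicInteger's CAS, expressed as a retried pure function.

```python
import threading

class Atom:
    """A tiny stand-in for Clojure's atom: one mutable reference to a value."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def deref(self):
        return self._value

    def compare_and_set(self, old, new):
        with self._lock:
            if self._value == old:
                self._value = new
                return True
            return False

    def swap(self, fn):
        while True:              # retry on contention, just like CAS
            old = self.deref()
            new = fn(old)        # fn is a pure function of the old value
            if self.compare_and_set(old, new):
                return new

stock = Atom(2)
threads = [threading.Thread(target=lambda: stock.swap(lambda n: n - 1))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(stock.deref())  # 0 -- neither decrement is lost
```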

+",292326,,292326,,43103.40347,43103.53403,How Functional Programming addresses concurrent increment/decrement operations invoked by different users?,,2,10,0,,,CC BY-SA 3.0,, +363335,1,,,1/3/2018 9:59,,0,1598,"

I have read about OAuth2 and its statelessness when using JWT as the token. A token expires based on its expiry time, so how do I control a token, e.g. blacklist it and block its access immediately, without being stateful?

+ +

As far as I can find, the solution is that you need a database that stores a token blacklist. But then what is the difference from a stateful approach?
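One difference, sketched below in Python (field names invented): the blacklist only needs to hold the IDs of explicitly revoked tokens until their original expiry, so the stateful part stays tiny compared to a full server-side session store; the vast majority of tokens are still validated statelessly by signature and expiry alone.

```python
import time

revoked = set()  # revoked token IDs (jti); in practice e.g. Redis keys with a TTL

def revoke(jti):
    revoked.add(jti)

def is_valid(claims, now=None):
    now = time.time() if now is None else now
    if claims["exp"] <= now:        # expired tokens need no blacklist entry
        return False
    return claims["jti"] not in revoked

token = {"jti": "abc", "exp": time.time() + 3600}
print(is_valid(token))   # True
revoke(token["jti"])
print(is_valid(token))   # False -- blocked immediately, before expiry
```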

+",292340,,,,,43103.43333,How Immadiately Blacklist and Block Access of Access Token using JWT?,,1,1,,,,CC BY-SA 3.0,, +363340,1,,,1/3/2018 10:42,,3,251,"

Does any programming language have a concept of checking the type and value of a given parameter without adding an explicit if (myParam < 0) { .. } within the function?

+ +

A pseudocode example:

+ +
function myFunction(int myParam >= 0) {
+    return myParam + 2;
+}
+
+// myFunction(1)     would return 3
+// myFunction(-1)    would raise an error
+// myFunction(""abc"") would raise an error
+// myFunction(2.1)   would raise an error
+// myFunction()      would raise an error
+
+ +

or maybe:

+ +
function myFunction(int myParam 0) {
+    return myParam + 2;
+}
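Some languages do have this built in (e.g. preconditions in Eiffel, contract aspects in Ada 2012, refinement types in LiquidHaskell); in others it can be approximated. A rough Python sketch with a hypothetical requires decorator:

```python
def requires(check):
    # Hypothetical precondition decorator: type- and value-checks the argument.
    def deco(fn):
        def wrapper(my_param):
            if not isinstance(my_param, int) or isinstance(my_param, bool):
                raise TypeError("expected int, got %r" % (my_param,))
            if not check(my_param):
                raise ValueError("precondition failed for %r" % (my_param,))
            return fn(my_param)
        return wrapper
    return deco

@requires(lambda p: p >= 0)
def my_function(my_param):
    return my_param + 2

print(my_function(1))  # 3
# my_function(-1), my_function("abc") and my_function(2.1) all raise errors
```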
+
+",55875,,,,,43103.93472,Built-in type AND value checking of parameters?,,1,3,1,,,CC BY-SA 3.0,, +363356,1,,,1/3/2018 15:02,,3,1971,"

I am working with a small team that is developing a CQRS/ES ""semi-microservice"" architecture. We are pretty far along, but running into some interesting challenges with our projections, and further challenges as we start to move our projections out into a reporting database to handle cross-domain concerns. I realize these are complex problems and there's no one-size-fits-all solution. It's my first time using a heavily event-based architecture, so please forgive me if I am using some of the wrong terms. Perhaps this is why I am having a hard time finding further information to tackle these challenges. I am not expecting anyone to solve these problems for me, but I would be very grateful for help with terminology, and if you could point me to any resources that may be helpful with the problems I will outline below. Thanks in advance!

+ +

Alright, so my team and I are building software which has several services. Each service uses a CQRS architecture with eventing, and some entities are event sourced. Entities that are event sourced are so because there are dependencies on specific versions of those entities. Our domain event architecture is heavily inspired by Vaughn Vernon's ""Implementing Domain-Driven Design"". Each service has its own relational database, or at least its own schema which is treated as a separate database. Each database has a domain event table with a constraint on the aggregate and version number to ensure a transaction fails if two events come in at the same time for the same entity. This is a heavily collaborative application, so this is very important to us.

+ +

Problem #1:

+ +

We are currently publishing our domain events to subscribers. The subscribers are currently limited to the service itself and are usually projections. We tried publishing the event after a transaction is processed successfully, so the events could be processed asynchronously without holding up the user, but this led to events being processed out of order. We now process most events inside the transaction. This works for now because our projection handling logic happens very quickly, but this may not be the case for very long. The projections don't have to be updated in real-time, but they do need to be updated in near real time. We can probably allow for delays up to 2 or 3 seconds. How is event order typically guaranteed in this scenario?

+ +

Problem #2:

+ +

We are beginning to require complex sorting and filtering on views that combine data from several services which seems to necessitate that we move our projection logic out into a separate reporting service. We've looked at a few different models such as push based mechanisms or pull based mechanisms inspired by Kafka, but we're having a hard time determining how we get the events out of each service and then how we can process them in order, by aggregate, in our reporting service (especially if we are running multiple instances of our reporting service). We do recognize that based on our current setup that we can only guarantee order within services and not across services, but this is acceptable as we expect these operations to be commutative in that (the aggregate of 4 events from service 1).aggregatedWith(the aggregate of 5 events from service 2) == (the aggregate of 5 events from service 2).aggregatedWith(the aggregate of 4 events from service 1). The same 2-3 second delay is also acceptable here. Any resources or search terms on this type of problem (or any alternative suggestions) would be much appreciated!
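One search term that may help for the ordering part is 'resequencer' (from Enterprise Integration Patterns): each event carries its aggregate's version, and a consumer buffers out-of-order arrivals until the expected version shows up. A rough Python sketch:

```python
from collections import defaultdict

class Resequencer:
    """Delivers events in version order, independently per aggregate."""
    def __init__(self):
        self.expected = defaultdict(lambda: 1)   # next version per aggregate
        self.buffered = defaultdict(dict)        # held-back events
        self.delivered = []

    def receive(self, aggregate_id, version, event):
        self.buffered[aggregate_id][version] = event
        # Release as many consecutive versions as we now have.
        while self.expected[aggregate_id] in self.buffered[aggregate_id]:
            v = self.expected[aggregate_id]
            self.delivered.append(self.buffered[aggregate_id].pop(v))
            self.expected[aggregate_id] = v + 1

r = Resequencer()
r.receive("web-app-1", 2, "second")  # arrives early, held back
r.receive("web-app-1", 1, "first")
print(r.delivered)  # ['first', 'second']
```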

+",282103,,,,,43103.70972,"CQRS, Event Sourcing and (near) Real Time Reporting",,2,0,1,,,CC BY-SA 3.0,, +363358,1,363394,,1/3/2018 15:25,,1,1478,"

This is my first post here. I want to do the following with the help of embedding in Go:

+ +
    +
  1. extend the html.Tokenizer with new methods of my own
  2. +
  • while still being able to access all existing html.Tokenizer methods in the meantime
  4. +
  5. define a function WalkBody() (or an interface method if possible)
  6. +
  • in which an interface method VisitToken() is used, which will behave differently for different types
  8. +
+ +

After an extended discussion on the [go-nuts] mailing list, I changed the sample code from https://github.com/suntong/lang/blob/master/lang/Go/src/xml/htmlParserTokens.go to https://github.com/suntong/lang/blob/master/lang/Go/src/xml/htmlParserTokens2.go according to all the suggestions there.

+ +

However, all suggestions so far addressed only one focus, not all four goals together. E.g., the first two goals can be achieved by ""type MyTokenizer struct"", but as soon as I change that to an interface type to act as the base for the two different extended types (for goals #3 and #4), goal #2 breaks. I.e.,

+ +

this works,

+ +
type MyTokenizer struct {
+    *html.Tokenizer
+}
+
+func NewMyTokenizer(i io.Reader) *MyTokenizer {
+ z := html.NewTokenizer(i)
+ return &MyTokenizer{z}
+}
+
+ +

As soon as I tried the suggested following, everything started to break down.

+ +
type TokenVisitor interface {
+    VisitToken()
+}
+
+func WalkBody(of TokenVisitor) {
+    // here you call of.VisitToken()
+}
+
+type MyTokenizer1 struct {
+    *html.Tokenizer
+}
+
+func (the MyTokenizer1) VisitToken() {
+
+}
+
+type MyTokenizer2 struct {
+    *html.Tokenizer
+}
+
+func (the MyTokenizer2) VisitToken() {
+
+}
+
+ +

It seems to me that some compromise has to be made; what is the least compromise to make?

+ +

The actual purpose of my asking this question is that in OO I have a systematic way of thinking about how to solve this kind of inherit-and-enhance problem, and there is a practical implementation in place for me: virtual functions. But when it comes to Go, I still need help with how to think, and what to do.

+ +

Can anybody help, please? Is there any systematic way to deal with situations like this?

+",292371,,292371,,43103.65903,43104.0625,Extending an existing type in Go,,1,0,,,,CC BY-SA 3.0,, +363360,1,363384,,1/3/2018 15:40,,2,118,"

I have a database that stores client data. All data is in one table. For each client, there is one row per day. Clients have different kinds of contracts; their data format remains the same, but the underlying dynamics are different.

+ +

I am creating an application to simulate those dynamics and I am struggling finding a good object structure to implement this. I came up with two approaches:

+ +

Approach 1

+ +

I would start with a superclass for all types of contract. It would handle the access to the database and fundamental properties, such as the contract number, as well as fundamental methods (find first date, total value, etc)

+ +

Then I would subclass this for the specific types to add specific methods (e.g. calculate optimal index value, the algorithm of which would be depending on contract type).

+ +

However, the type of contract is determined by a value in the database. Therefore, if the contract object loads its own data from the database, it would then need to change its own class depending on the type. I don't think that is good design.

+ +

Approach 2

+ +

Alternatively, I could create a contract_manager class to hold a list of all contracts, retrieve the data from the database, and generate and fill contract objects.

+ +

In that scenario, however, I fear that the interface to remember would be more complicated, and database access could not be hidden inside the object. This would be desirable, since these classes will be part of an analytics and prototyping library for a team of data analysts.

+ +

What would be the correct way of phrasing this problem? What would be the best approach to handle this?
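For approach 1, one common way around the 'change its own class' problem is a factory on the superclass: the type value from the database row selects the subclass at construction time, so no instance ever has to switch class. A rough Python sketch (column names invented):

```python
class Contract:
    registry = {}

    def __init_subclass__(cls, contract_type=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if contract_type is not None:
            Contract.registry[contract_type] = cls

    def __init__(self, row):
        self.number = row["number"]

    @classmethod
    def from_row(cls, row):
        # The type column decides which subclass to instantiate.
        return cls.registry[row["type"]](row)

    def optimal_index(self):
        raise NotImplementedError

class FixedContract(Contract, contract_type="fixed"):
    def optimal_index(self):
        return 1.0

class FlexContract(Contract, contract_type="flex"):
    def optimal_index(self):
        return 2.0

c = Contract.from_row({"type": "flex", "number": 7})
print(type(c).__name__, c.optimal_index())  # FlexContract 2.0
```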

+",75099,,209774,,43103.92083,43104.40139,Object that can set its own subclass/amend its methods with subclass?,,2,0,,,,CC BY-SA 3.0,, +363363,1,,,1/3/2018 15:57,,-4,11812,"

I have a dynamic array that can often be empty, and I need to iterate over all its elements.

+ +

So far I have such code:

+ +
array.forEach(function(item, index) {
+    //stuff here
+});
+
+ +

It works fine of course, but I wonder if there is any overhead for the forEach method when the array is empty; in that case maybe it is better to check first, e.g.

+ +
if (array.length > 0) {
+    array.forEach(function(item, index) {
+        //stuff here
+    });
+}
+
+ +

Which is better practice?

+",30187,,,,,43103.69028,Using forEach on empty array,,1,5,,43103.69306,,CC BY-SA 3.0,, +363370,1,,,1/3/2018 17:56,,8,4000,"

I've been reading about CPUs and how they are implemented, and some big complex architectures (looking at you x86) have instructions that load from memory during one clock cycle. Since one address points to a single byte, how is it possible that I can write:

+ +
mov eax, DWORD PTR ds:[esi]
+
+ +

where I'm loading a double word (4 bytes!) from memory and chucking it into eax. How does this work with only one clock cycle? Wouldn't it have to access 4 addresses? The DWORD starts at ds:[esi] and ends at ds:[esi + 3], meaning it has to compute 4 effective addresses, but it does it in one cycle.

+ +

How?

+ +

Thanks

+",291510,,291510,,43103.74792,43103.84514,How does a CPU load multiple bytes at once if memory is byte addressed?,,2,7,2,,,CC BY-SA 3.0,, +363374,1,,,1/3/2018 11:52,,1,85,"

Using a functional language, how can 2 different parties achieve the result of increment/decrement operations concurrently?

+ +

For the below scenario, let's say I have 2 quantities in stock and 2 users of an e-commerce site who block 1 quantity each concurrently. How can I mark the remaining stock as '0' in FP, given that the state is shared and immutable? I assume that each flow gets a copy of the initial stock '2' and operates upon it. If so, how will it reach '0' unless serialized?

+ +

Kindly narrate this with some sample working code.

+ +

How is this different from CAS operations on AtomicInteger (in Java), where the client is expected to retry on failure? (Both in terms of approach and efficiency.)

+",292326,Vicky,,,,43103.79236,How Functional Programming addresses concurrent increment/decrement operations invoked by different users?,,0,0,1,43103.79653,,CC BY-SA 3.0,, +363376,1,363382,,1/3/2018 19:10,,1,117,"

I'm rewatching Rich Hickey's great talk "Simple Made Easy". Around min 35:40, when talking about state, he mentions that state complects value and time, but I'm not sure I'm understanding this well.

+

Is it because immutable data is always the same, independent of time in the sense that it doesn't change over time?
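One way to illustrate it (a Python sketch, not from the talk): with mutable state, the answer to 'what is the value?' depends on when you look; an immutable value carries no such time dependency.

```python
history = []
balance = {"amount": 100}      # mutable state
history.append(balance)
balance["amount"] = 50
print(history[0]["amount"])    # 50 -- the earlier observation changed under us

snapshots = []
amount = 100                   # immutable value
snapshots.append(amount)
amount = 50
print(snapshots[0])            # 100 -- the value observed earlier is unchanged
```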

+

I'd appreciate if someone with more understanding of this talk could clarify it.

+

Thank you.
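One way to see the distinction, sketched in Python: asking a mutable reference for its value gives a different answer depending on when you ask, while an immutable snapshot is just a value that time can no longer affect.

```python
# A mutable reference complects value and time: the answer to
# "what is x?" depends on when you ask.
account = {'balance': 100}
snapshot = account            # same object, not a value
account['balance'] = 50
print(snapshot['balance'])    # 50; the "value" changed under us

# An immutable snapshot is just a value; later mutations cannot touch it.
frozen = dict(account)        # copy taken at a point in time
account['balance'] = 10
print(frozen['balance'])      # still 50
```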

+",184811,,-1,,43998.41736,43103.86181,"What does it mean ""state complects value and time""?",,1,1,,,,CC BY-SA 3.0,, +363381,1,,,1/3/2018 20:36,,3,166,"

I have a Surgeon class that is constantly changing

+ +
class Surgeon
+{
+   string name, discipline;
+ public:
+    Surgeon(string _name, string _discp) : name{_name}, discipline{_discp}{}
+    void writeDir(string _dir);
+    void readDir(string _dir);
+};
+
+ +

Now I need to add recording settings to Surgeon. Keeping SOLID principles in mind, this forces me to go back into the Surgeon class, add recording settings, and modify everything in it. I'm also aware that the current implementation of the software already adds value to the company, and a refactor is not strictly necessary; but in order to do this task right, I need to refactor the whole class. If I just add the feature so it works with the current design, then I'm simply perpetuating bad design.

+ +

Where does the software engineer provide the most value? Should I produce code that shows results quickly but is prone to errors in the future, or should I delay quick results to produce good code that is more robust? What would be the happy medium, if there is any?

+",169597,,,,,43103.89931,How do you add feature to a class that was originally designed wrong in the first place?,,1,1,,43104.67986,,CC BY-SA 3.0,, +363385,1,363387,,1/3/2018 21:40,,4,316,"

For example, for Firefox the cookies are kept as an SQLite DB in the user's folder. Any program can read these cookies. So, for example, can't an .exe program read the contents of a cookie, pretend to that cookie's web site that it is the logged-in user, and start sending requests on the user's behalf?

+",292403,,,,,43104.51875,Isn't it unsafe that any program can access cookies of a browser?,,2,3,,,,CC BY-SA 3.0,, +363395,1,,,1/4/2018 2:08,,0,277,"

I am curious about your approaches/heuristics to the exploration of domain (subdomain, bounded context) during DDD modeling session.
+As everyone knows, most programmers tend to be perfectionists (especially in places where it is definitely not needed). Our industry has learned that it is extremely important to produce solutions that are good enough rather than perfect, otherwise the cost of development is not balanced by the benefit. +Though the term good enough is ambiguous and context-dependent, we seem to have established guidelines which lead us in a good direction (in most cases): the simplest code which fulfills the acceptance criteria, using design patterns, test-driven development, to name a few.
+With this in mind, what is your approach to exploring a domain? When do you know you should stop exploring and jump into the next area or start coding? Of course, with each domain session you will know your domain better and have explored it more deeply; my question is about a single session. Is it based on the domain expert's opinion, the length of the modeling session, or maybe some arbitrary goals set by the customer? Or maybe something totally different?

+",260715,,,,,43104.66597,"How deep should we explore domain (subdomain, bounded context) in Domain Driven Design?",,1,4,0,,,CC BY-SA 3.0,, +363397,1,363399,,1/4/2018 4:22,,128,27113,"

During one of my lectures today about Unity, we discussed updating our player position by checking every frame whether the user has a button pushed down. Someone said this was inefficient and we should use an event listener instead.

+ +

My question is, regardless of the programming language, or situation that it is applied in, how does an event listener work?

+ +

My intuition would assume that the event listener constantly checks if the event has been fired, meaning, in my scenario, it would be no different than checking every frame if the event has been fired.

+ +

Based on the discussion in class, it seems that event listener works in a different way.

+ +

How does an event listener work?
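A minimal sketch of the callback-registry idea (names hypothetical): nothing polls for the event; the code that produces the event invokes the registered listeners directly, so no work happens on frames where nothing was pressed.

```python
# Minimal sketch: listeners are stored in a list; nothing "checks" for
# the event each frame. The code that causes the event calls them.
listeners = []

def add_listener(fn):
    listeners.append(fn)

def fire_key_down(key):
    # The input driver/framework calls this once, when the key event
    # actually arrives; only then do the callbacks run.
    for fn in listeners:
        fn(key)

pressed = []
add_listener(lambda key: pressed.append(key))
fire_key_down('W')
print(pressed)  # ['W']; no per-frame polling happened
```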

+",286828,,,,,43107.5625,How does an event listener work?,,11,17,35,,,CC BY-SA 3.0,, +363410,1,363428,,1/4/2018 10:45,,9,9788,"

After doing some researches I can not seem to find a simple example resolving a problem I encounter often.

+ +

Let's say I want to create a little application where I can create Squares, Circles, and other shapes, display them on a screen, modify their properties after selecting them, and then compute all of their perimeters.

+ +

I would do the model class like this:

+ +
class AbstractShape
+{
+public :
+    typedef enum{
+        SQUARE = 0,
+        CIRCLE,
+    } SHAPE_TYPE;
+
+    AbstractShape(SHAPE_TYPE type):m_type(type){}
+    virtual ~AbstractShape();
+
+    virtual float computePerimeter() const = 0;
+
+    SHAPE_TYPE getType() const{return m_type;}
+protected :
+    const SHAPE_TYPE  m_type;
+};
+
+class Square : public AbstractShape
+{
+public:
+    Square():AbstractShape(SQUARE){}
+    ~Square();
+
+    void setWidth(float w){m_width = w;}
+    float getWidth() const{return m_width;}
+
+    float computePerimeter() const{
+        return m_width*4;
+    }
+
+private :
+    float m_width;
+};
+
+class Circle : public AbstractShape
+{
+public:
+    Circle():AbstractShape(CIRCLE){}
+    ~Circle();
+
+    void setRadius(float w){m_radius = w;}
+    float getRadius() const{return m_radius;}
+
+    float computePerimeter() const{
+        return 2*M_PI*m_radius;
+    }
+
+private :
+    float m_radius;
+};
+
+ +

(Imagine I have more shape classes: triangles, hexagons, each with their own variables and associated getters and setters. The actual problem I faced had 8 subclasses, but for the sake of the example I stopped at 2.)

+ +

I now have a ShapeManager, instantiating and storing all the shapes in an array :

+ +
class ShapeManager
+{
+public:
+    ShapeManager();
+    ~ShapeManager();
+
+    void addShape(AbstractShape* shape){
+        m_shapes.push_back(shape);
+    }
+
+    float computeShapePerimeter(int shapeIndex){
+        return m_shapes[shapeIndex]->computePerimeter();
+    }
+
+
+private :
+    std::vector<AbstractShape*> m_shapes;
+};
+
+ +

Finally, I have a view with spinboxes to change each parameter for each type of shape. For example, when I select a square on the screen, the parameter widget only displays Square-related parameters (thanks to AbstractShape::getType()) and offers to change the width of the square. +To do that I need a function allowing me to modify the width in ShapeManager, and this is how I do it:

+ +
void ShapeManager::changeSquareWidth(int shapeIndex, float width){
+   Square* square = dynamic_cast<Square*>(m_shapes[shapeIndex]);
+   assert(square);
+   square->setWidth(width);
+}
+
+ +

Is there a better design that avoids using dynamic_cast and implementing a getter/setter pair in ShapeManager for each subclass variable I may have? I already tried to use templates but failed.

+ +
+ +

The problem I'm facing is not really with Shapes but with different Jobs for a 3D printer (ex: PrintPatternInZoneJob, TakePhotoOfZone, etc.) with AbstractJob as their base class. The virtual method is execute() and not getPerimeter(). The only time I need the concrete type is to fill in the specific information a job needs:

+ +
    +
  • PrintPatternInZone needs the list of points to print, the position of the zone, some printing parameters like the temperature

  • +
  • TakePhotoOfZone needs which zone to photograph, the path where the photo will be saved, the dimensions, etc...

  • +
+ +

When I will then call execute(), the Jobs will use the specific information they have to realise the action they are supposed to do.

+ +

The only time I need to use the concrete type of a Job is when I fill in or display this information (if a TakePhotoOfZone Job is selected, a widget displaying and modifying the zone, path, and dimension parameters will be shown).

+ +

The Jobs are then put into a list of Jobs; the list takes the first job, executes it (by calling AbstractJob::execute()), then goes to the next, and so on until the end of the list. (This is why I use inheritance.)

+ +

To store the different types of parameters I use a JsonObject:

+ +
    +
  • advantages : same structure for any job, no dynamic_cast when setting or reading parameters

  • +
  • problem : can't store pointers (to Pattern or Zone)

  • +
+ +

Do you think there is a better way of storing this data?

+ +

Then how would you store the concrete type of the Job to use it when I have to modify the specific parameters of that type? JobManager only has a list of AbstractJob*.
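One idea from the paragraphs above, sketched in Python with hypothetical names: if job-specific settings live in a generic key/value store on the base class, the manager can fill and display them through the shared interface without ever downcasting.

```python
class AbstractJob:
    # Hypothetical sketch: job-specific settings live in a generic dict,
    # so a manager can read/write them without knowing the subclass.
    def __init__(self, **params):
        self.params = dict(params)

    def execute(self):
        raise NotImplementedError

class TakePhotoOfZone(AbstractJob):
    def execute(self):
        # Uses its own keys; other subclasses never see them.
        return 'photo of %s -> %s' % (self.params['zone'], self.params['path'])

jobs = [TakePhotoOfZone(zone='A1', path='/tmp/a1.png')]
# No downcast: the manager edits parameters through the shared interface.
jobs[0].params['zone'] = 'B2'
print(jobs[0].execute())
```

The trade-off is the same one the question notes for JsonObject: you lose static typing on the parameters in exchange for a uniform interface.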

+",225136,,1204,,43152.89514,43152.89514,Proper design to avoid the use of dynamic_cast?,,2,7,3,,,CC BY-SA 3.0,, +363420,1,,,1/4/2018 12:19,,4,805,"

At university, one of the lecturers insisted on a piece of advice I found odd.

+ +

This lecturer insisted that his students not care too much about decisions like the choice of programming language, target platform, or other design choices that are not strictly necessary to make a working prototype. When people protested that creating something that is known to be broken is a waste of time and work, the lecturer argued that:

+ +
+

What programmers often do not realize is that code is being constantly rewritten. Look at most successful companies, Google for example, and products, World of Warcraft for example: they don't maintain their code, they rewrite it. I already lost count of how many times the engine of WoW was rewritten and replaced by a new version. Rewriting code, even in another programming language and under changed requirements, is not hard once you have a working prototype; what is hard is making this working prototype. You can carefully choose your programming language to meet the requirements of your target platform and to achieve the necessary performance, and you can worry about the quality of your first iteration of code; but then writing your first iteration will be much more difficult and time-consuming, and after you're finally done you will realize you have to rewrite your beautiful code anyway, because its quality is nevertheless unsatisfactory: it is impossible to determine how the code should look without writing it first, not to mention changed requirements. Instead, focus only on making a working prototype, without caring for anything else; then, once you have it, make an informed decision about how to fix the code and how to adjust it for your particular requirements, and rewrite your code accordingly, this time caring for its quality; then possibly rewrite it once again before releasing it. If your product is successful enough to enter the maintenance phase, you will also be periodically rewriting code whenever a need for a relatively major change arises.

+
+ +

In particular, this means that we should not care too much about the quality of the prototype's code, and should write it in Python if we like Python, even if we know that Python is unavailable on our target platform.

+ +

(I tried to summarize the lecturer's opinions above, hoping that I understand them well and that I didn't misrepresent them).

+ +

This is a direct opposite of the usual recommendations to always put the utmost care into the quality of one's code, as well as a direct opposite of this popular essay. What can be said about this piece of advice?

+",212639,,136413,,43104.96667,43105.45833,Is code being constantly rewritten and is it therefore pointless to worry about the quality of the early iterations of rewriting code?,,8,14,,,,CC BY-SA 3.0,, +363427,1,,,1/4/2018 13:26,,0,62,"

I am using PHP with Symfony and Doctrine but my question should be independent from any used language or framework.

+ +

Suppose you have an entity Product with a One-To-Many relationship to another entity Price. Price has (among others) the properties validFrom and validUntil. Now I want to know the price of a product on a specific day.

+ +

As far as I know this can be accomplished in two ways:

+ +
    +
  1. Create a custom getter on the Product entity e.g. getPriceOnDate(date). It would cycle through all associated prices until it finds the right one.
  2. +
  3. Create a repository function that fetches the correct price directly from the database.
  4. +
+ +

Which of the two approaches is in line with MVC best practices?

+",292461,,,,,43104.72986,MVC: Better Use Custom Getter or Repository Function,,1,2,,,,CC BY-SA 3.0,, +363433,1,,,1/4/2018 15:34,,1,100,"

I have a project that needs some UI tests and unit tests. Should I do it like in the code below? What problems can arise with such an approach? Or is it better to use small, isolated UI tests and small, isolated unit tests? Maybe somebody has already faced this approach, or maybe it is a great paradigm. I think that this approach will be tightly coupled and more complex than the project itself.

+ +
namespace TestProject
+{
+    class Program
+    {
+        static void Main(string[] args)
+        {
+            string commandLineArgs = string.Empty;
+
+            switch (commandLineArgs)
+            {
+                case ""UI"":
+                    // Run library with UI on WPF
+                    break;
+                case ""SeriaApi"":
+                    // Run library without UI
+                    break;
+            }
+        }
+    }
+
+    public interface ITest
+    {
+        void ConnectDevices(string controller1, string controller2);
+    }
+
+    public class ImplementationForUI : ITest
+    {
+        public void ConnectDevices(string controller1, string controller2)
+        {
+            // UI test implementation; does the same as ImplementationForSerialApi.ConnectDevices, but with a WPF wrapper
+        }
+    }
+
+    public class ImplementationForSerialApi : ITest
+    {
+        public void ConnectDevices(string controller1, string controller2)
+        {
+            // SerialApi test implementation; does the same as ImplementationForUI.ConnectDevices, but in the console
+        }
+    }
+
+    public class Test
+    {
+        private ITest ITest;
+        public Test(ITest instance)
+        {
+            ITest = instance;
+        }
+        public void RunTestCase1()
+        {
+            ITest.ConnectDevices(null, null);
+        }
+    }
+}
+
+",292479,,292479,,43104.65347,43105.00417,Program testing approach,,2,2,,,,CC BY-SA 3.0,, +363437,1,,,1/4/2018 16:24,,2,149,"

Maybe I have misunderstood this concept, but is it common, when developing the backend of an app (mobile or web), to first write it in a high-level programming language such as PHP, Python, or JavaScript to quickly develop a working prototype with fewer lines of code, +and then, if performance becomes a priority, rewrite the app in a low-level language such as C or C++?

+",187780,,,,,43104.70972,"In app development is it common to first write your app in a high level language, then rewrite it in a low level language?",,2,1,,,,CC BY-SA 3.0,, +363442,1,363459,,1/4/2018 18:07,,7,519,"

How can I tell whether an application has parts that would benefit from using the C# Task library (I believe that is the parallel processing library)?

+ +

Given that other optimizations have been done, how do I know, other than just trying it? In C#, can I use Visual Studio for this analysis?

+",8802,,,,,43115.64722,How do I know if a set of code is a good candidate for parallelization?,,3,1,2,,,CC BY-SA 3.0,, +363446,1,,,1/4/2018 18:35,,3,470,"

I was hoping to get some advice on a particular task I'm trying to implement.

+ +

I have a table that stores secure data and returns an ID as a representation of that data. No problems there. So, for example, if a social security number is stored, the code generates a representational ID and stores the social security number in an encrypted fashion in the table. The encryption is done using envelope encryption.

+ +

Here's my issue. Every time a new value comes in, I don't want to create a new ID if the data already exists. I need to check whether the value already exists and, if so, return the existing ID. The problem I have is that the encrypted value is different each time, and I certainly can't decrypt every value in the database to check for a duplicate. I could create a one-way hash and store that as well but, if I do, I would need to salt it for security purposes, and then the hash will be different every time.
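A Python sketch of the dilemma, plus one common workaround (a deterministic keyed hash, sometimes called a blind index; the key name here is hypothetical and must be kept out of the database):

```python
import hashlib
import hmac
import os

ssn = b'123-45-6789'

# With a random salt, equal inputs hash differently, so the digest is
# useless as a lookup key.
h1 = hashlib.sha256(os.urandom(16) + ssn).hexdigest()
h2 = hashlib.sha256(os.urandom(16) + ssn).hexdigest()
print(h1 == h2)  # False

# A deterministic HMAC under a secret key gives equal digests for equal
# plaintexts, so it can be stored beside the ciphertext for duplicate checks.
index_key = b'server-side-secret'  # hypothetical key, stored outside the DB
b1 = hmac.new(index_key, ssn, hashlib.sha256).hexdigest()
b2 = hmac.new(index_key, ssn, hashlib.sha256).hexdigest()
print(b1 == b2)  # True
```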

+ +

So I'm hoping to get advice/recommendations on how to achieve this? How to check for duplicates when the value is stored in an encrypted fashion.

+ +

Thank you!

+",292500,,,,,43111.83056,Checking for duplicate with encrypted values,,1,4,1,,,CC BY-SA 3.0,, +363449,1,,,1/4/2018 18:44,,1,1467,"

If I fork someone else's repository (say, under my brand), for instance when I need the code for another, bigger project of my own consisting of a number of such repositories, would it be preferred to send a message to the owner about this intention? I mean, should I inform him or her that it will now be under ""my brand"", possibly with a few changes?

+ +

Let's assume the repository's code is under the MIT licence.

+",292502,,292502,,43104.78403,43104.84653,How much am I allowed to use someone's else code as my fork?,,1,4,1,,,CC BY-SA 3.0,, +363450,1,363478,,1/4/2018 18:50,,1,2748,"

Most of the resources I've seen about the Decorator pattern look like the following:

+ +
interface Tea
+{
+    public double cost();
+}
+class BasicTea implements Tea
+{
+    public double cost() { return 1.99; }
+}
+abstract class TeaDecorator implements Tea
+{
+    private Tea base;
+    public TeaDecorator(Tea tea) { this.base = tea; }
+    public double cost() { return this.base.cost(); }
+}
+class TeaWithMilk extends TeaDecorator
+{
+    public TeaWithMilk(Tea tea) { super(tea); }
+    public double cost() { return super.cost() + 0.30; }
+}
+class TeaWithSugar extends TeaDecorator
+{
+    public TeaWithSugar(Tea tea) { super(tea); }
+    public double cost() { return super.cost() + 0.10; }
+}
+
+decoratedTea = new TeaWithSugar(new TeaWithMilk(new BasicTea()));
+
+ +

However, I noticed the following approach also works - decoratedTea.cost() returns the same value as above.

+ +
class Tea
+{
+    public double cost() { return 1.99; }
+}
+abstract class TeaDecorator extends Tea
+{
+    private Tea base;
+    public TeaDecorator(Tea tea) { this.base = tea; }
+    public double cost() { return this.base.cost(); }
+}
+class TeaWithMilk extends TeaDecorator
+{
+    public TeaWithMilk(Tea tea) { super(tea); }
+    public double cost() { return super.cost() + 0.30; }
+}
+class TeaWithSugar extends TeaDecorator
+{
+    public TeaWithSugar(Tea tea) { super(tea); }
+    public double cost() { return super.cost() + 0.10; }
+}
+
+decoratedTea = new TeaWithSugar(new TeaWithMilk(new Tea()));
+
+ +

Is there any problem with this method? If I would not otherwise have an interface for Tea, is it necessary to add one just to implement the Decorator pattern?
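For comparison, here is the second variant sketched in Python, where no separate interface exists at all; the decorators extend the concrete class, and the composed cost comes out the same:

```python
# Sketch of the interface-less variant: the decorator extends the
# concrete class instead of an interface; composition works the same way.
class Tea:
    def cost(self):
        return 1.99

class TeaDecorator(Tea):
    def __init__(self, base):
        self.base = base  # the wrapped component

    def cost(self):
        return self.base.cost()

class TeaWithMilk(TeaDecorator):
    def cost(self):
        return self.base.cost() + 0.30

class TeaWithSugar(TeaDecorator):
    def cost(self):
        return self.base.cost() + 0.10

decorated = TeaWithSugar(TeaWithMilk(Tea()))
print(round(decorated.cost(), 2))  # 2.39, same as with an interface
```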

+",456,,456,,43105.88264,43105.88264,Can the Decorator pattern be used without an interface?,,2,6,1,,,CC BY-SA 3.0,, +363451,1,363453,,1/4/2018 18:58,,2,147,"

When passing around functions the term ""predicate"" is often used for a [short] function that returns a boolean. Is there a term for a function that selects a single field on an object?

+ +

For example let's say we build a custom sort function, but this sort is generic so it takes an argument for the user to specify which field to sort on. Such a function might look something like:

+ +

sort(itemsToSort, functionThatGetsFieldToSortOn)

+ +

The items might be an array of objects with a filename property.

+ +

[{ + filename: 'foo' +}]

+ +

If I were to sort based on filename with the function I might invoke a call like:

+ +

sort(items, a => a.filename)

+ +

This second argument...is there a special name for this type of function that selects a property?

+ +

I'm trying to figure out a good way to name this second argument.
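For what it's worth, Python's built-in sorted() calls this kind of argument a key function (LINQ names the same idea a keySelector); a sketch mirroring the call above:

```python
# The same idea as sort(items, a => a.filename): the second argument is
# a "key function" that selects the value to sort on.
items = [{'filename': 'foo'}, {'filename': 'bar'}]

result = sorted(items, key=lambda a: a['filename'])
print([x['filename'] for x in result])  # ['bar', 'foo']
```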

+",4427,,,,,43104.84444,Is there a name for a function that grabs a field from an object?,,2,0,,,,CC BY-SA 3.0,, +363455,1,,,1/4/2018 19:25,,1,110,"

Our team is tasked with querying an external 3rd Party database to pull information. We don't own the database, so it could change without notice and our queries would stop working. We are currently storing the DB queries in SQL files in our C# project as embedded resources, and running them as SqlCommands.

+ +

This works when it works, but are there any better solutions out there? It was suggested to store the SQL in our DB, so if we ever need to change it, it can be done without a release, but that doesn't seem like the best solution either.

+",4594,,,,,43105.67014,Architecture - External 3rd Party DB Queries,,3,4,,,,CC BY-SA 3.0,, +363456,1,,,1/4/2018 19:35,,2,132,"

I'm starting to use ES6 arrow functions more, but haven't found a coding style that I like, especially when chaining them together. e.g., Eric Elliott gives this code:

+ +

mix = (...fns) => x => fns.reduce((acc, fn) => fn(acc), x)

+ +

He touts how ES6 lets you write it all on ""one line of code"", but that's partly due to formatting standards, which would require the function(x) {...} alternative version to have the { and } on separate lines, etc.

+ +

Is there a ""coding style standard"" (or more likely several) for how that line should be formatted, particularly with newlines?

+ +

In my limited research, the Google JavaScript Style Guide and Mozilla Guide are silent.

+",90992,,,,,43104.81597,Is there an accepted Coding Style for multiple ES6 Arrow Functions?,,0,7,,,,CC BY-SA 3.0,, +363468,1,,,1/4/2018 21:44,,3,97,"

I'd like to be able to have an (Eloquent) Model instantiate a class based upon a property of the model once it is created. I think this is the Strategy Pattern, but since I'm doing it from inside a Model and not passing the dependency, it's not a perfect fit. I'm trying to avoid using switch or if-else blocks to create the CompanyXXX APIs, because there are currently 4 CompanyXXX classes, it's already unwieldy, and we could have up to 20 in a few years.

+ +

Example:

+ +
interface API
+{
+    public function getThings();
+}
+
+class CompanyOne implements API
+{
+    public function getThings()
+    {
+        // Company One API specific stuff to get Things
+    }
+}
+
+
+class Property extends Model
+{
+    // This is an Eloquent Model, this valid is populated from the DB
+    public $company; // 'CompanyOne'
+
+    // This is the class implementing the API to use based upon $company
+    private $api;
+
+    public function __construct()
+    {
+        parent::__construct();
+
+        $this->api = new $this->company;
+        // Thus, this is an object CompanyOne
+    }
+
+    public function getThings()
+    {
+        $this->api->getThings();
+        // CompanyXXX API specific stuff to getThings
+    }
+}
+
+ +

Thus, I would be able to do things like this in a Controller so it knows which API to use (based on $api) and just does it™. I don't need to know or care what API this Property is using, I just need to getThings.

+ +
$property = Property::find(1); // get from DB, has 'company' attribute set
+
+$things = $property->getThings();
+
+ +

Which is wildly more simple compared to current methods that use switch and are now difficult to maintain (we now have 46 switch statements in the code base):

+ +
$property = Property::find(1);
+
+switch ($property->company) {
+    case 'CompanyOne':
+        $api = new CompanyOne;
+        break;
+    case 'CompanyTwo':
+        $api = new CompanyTwo;
+        break;
+    // repeat
+}
+
+$things = $api->getThings();
+
+ +

So what's the question?

+ +
    +
  1. Is this the Strategy Pattern (modified)? Is this another pattern?
  2. +
  3. Is there a better way (especially in Laravel) to have multiple API classes (as in my example) that are determined from a model's parameter that is 'self-aware' to implement the correct class/system to reduce the use of switch and if-else statements?
  4. +
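The `new $this->company` idea translates to other languages as a name-to-class registry; a Python sketch with hypothetical class names:

```python
class CompanyOne:
    def get_things(self):
        return 'things from CompanyOne'

class CompanyTwo:
    def get_things(self):
        return 'things from CompanyTwo'

# One registry replaces every switch statement; adding a company is one
# new class plus one entry here.
APIS = {'CompanyOne': CompanyOne, 'CompanyTwo': CompanyTwo}

class Property:
    def __init__(self, company):
        self.api = APIS[company]()  # resolved once, from the stored name

    def get_things(self):
        # Delegation: callers never care which concrete API this is.
        return self.api.get_things()

print(Property('CompanyTwo').get_things())
```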
+",181129,,181129,,43104.97083,43104.98889,Self aware Eloquent Model implements Strategy Pattern,,1,0,,,,CC BY-SA 3.0,, +363471,1,363483,,1/4/2018 21:55,,2,346,"

I apologize in advance for the vague title. I didn't want to make it overly verbose, so allow me to explain more in-depth below:

+ +

I've currently been developing a strong, statically typed language that compiles down to C++(11) code. Using the answers from How does garbage collection work in languages which are natively compiled? as a guide, I've developed a simple runtime system (written in C++ obviously) for my language that essentially consist of a garbage collector.

+ +

The way I designed the GC system is that I first created a base object that all other objects representing specific data types in my language (which currently only include integers and booleans) would inherit from. The GC itself uses a basic reference-counting algorithm, accessing the field, provided by the base class mentioned above, that holds the number of references each object has.

+ +

The development of the runtime system has come along fine. However, I've just realized somewhat of a problem. Because I have to wrap all data types from my language in their respective objects, this makes the C++ code I generate extremely verbose and clunky for anything bigger than simple expressions. For example, say I had the expression 1 + 2 * (4 - 5) - 6 / (7 + 8). This would roughly be transpiled to the following C++ code by my compiler:

+ +
*new Integer(1) + *new Integer(2) * (*new Integer(4) - *new Integer(5)) - *new Integer(6) / (*new Integer(7) + *new Integer(8));
+
+ +

As can clearly be seen, the C++ code that was generated is extremely verbose compared to the original expression written in my language. You could imagine how much worse this would look for even more complex expressions.

+ +

My question is: How should a problem like this be dealt with? Is this simply a problem created by my inexperience in creating a runtime system, or is this something that normally occurs when compiling? Obviously this isn't a ""problem"" in the sense that it's inhibiting me from continuing to develop my compiler, but since I do want my compiler to generate readable C++ code, this should be something that is solved.

+ +

One solution I've thought to this problem is to use a method like three address code to break large expressions into manageable parts, but before I implement it I'd like to understand whether this problem has a better solution.
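A sketch of what three-address lowering would do to the example expression: each generated line performs one operation and binds one temporary, so no single line explodes in size (the operand names here are hypothetical placeholders for the wrapped integers):

```python
# Sketch: lowering 1 + 2 * (4 - 5) - 6 / (7 + 8) into three-address
# form; each generated C++ line holds one operation and one temporary.
def lower(expr_steps):
    lines = []
    for i, (op, a, b) in enumerate(expr_steps):
        lines.append('auto t%d = %s %s %s;' % (i, a, op, b))
    return lines

steps = [('-', 'i4', 'i5'),   # t0 = 4 - 5
         ('*', 'i2', 't0'),   # t1 = 2 * t0
         ('+', 'i1', 't1'),   # t2 = 1 + t1
         ('+', 'i7', 'i8'),   # t3 = 7 + 8
         ('/', 'i6', 't3'),   # t4 = 6 / t3
         ('-', 't2', 't4')]   # t5 = t2 - t4
for line in lower(steps):
    print(line)
```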

+",242544,,200203,,43105.69375,43105.69375,How to avoid generating verbose code when compiling from a higher level language to a lower one?,,2,8,1,,,CC BY-SA 3.0,, +363473,1,,,1/4/2018 22:03,,1,65,"

I have a set of operations that need to be executed as part of large process and subclasses may slightly differ part of the step. This problem seem to be solved by using the Template Method design pattern so I have something like below.

+ +
class BaseTemplate(object):
+    def perform(self):
+        self.disable()
+        self.update()
+        self.enable()
+
+    def disable(self):
+        # Generic behavior
+
+    def update(self):
+        # Generic behavior
+
+    def enable(self):
+        # Generic behavior
+
+class FooTemplate(BaseTemplate):
+    def enable(self):
+       # Foo-specific behavior
+
+    def update(self):
+       # Foo-specific behavior
+
+class BarTemplate(BaseTemplate):
+    def update(self):
+       # Bar-specific behavior
+
+ +

However, the problem with update() is that the concrete templates just need to do a few extra steps on top of what the base class does.

+ +

For example,

+ +

BaseTemplate

+ +
def update():
+    # Get X
+    # Set field A of X
+    # Set field B of X
+
+ +

FooTemplate

+ +
def update():
+    # Get Y (Y is a subclass of X)
+    # Set field A of Y
+    # Set field B of Y
+    # Set field C of Y (Field C is specific to Y which is a subclass of X)
+
+ +

The problem is that setting the field of A and B are duplicated as shown above, so I could do something like this in FooTemplate

+ +
def update():
+    super(FooTemplate, self).update() # Handles fields A and B
+    # Set field C
+
+ +

But calling the superclass's method from an overriding method is the Call Super anti-pattern, since every subclass's update() must remember to call super.update(), otherwise it breaks.

+ +

So, a different approach would be to have a no-op method in the base class that sets field C, and then have template subclasses implement it only when needed. However, this approach seems to violate OOP: updating field C is only required by FooTemplate, and other subclasses of BaseTemplate should not need to know about it.
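That no-op-method idea is the classic hook method; a Python sketch (names hypothetical) showing how it avoids the call-super requirement:

```python
class BaseTemplate:
    def update(self):
        self.set_common_fields()  # fields A and B, always
        self.update_extra()       # hook: no-op by default

    def set_common_fields(self):
        self.fields = ['A', 'B']

    def update_extra(self):
        # Extension point; subclasses may override, and never need
        # to call super().update() themselves.
        pass

class FooTemplate(BaseTemplate):
    def update_extra(self):
        self.fields.append('C')  # Foo-specific field only

foo = FooTemplate()
foo.update()
print(foo.fields)  # ['A', 'B', 'C']
```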

+ +

Questions:

+ +
    +
  1. Is it correct to use the template method pattern for my use case?
  2. +
  3. How can I handle, in an OOP way, an update() that adds extra steps as the templates become more concrete?
  4. +
  5. Any suggestions in general?
  6. +
+",99087,,99087,,43105.04306,43105.04306,How to handle extra steps that are part of concrete class generically in template method design pattern?,,1,5,1,,,CC BY-SA 3.0,, +363480,1,363481,,1/5/2018 0:21,,5,781,"

I was reading a little about clusters in Node.js, and in all the examples it was trivial to clusterize the application. In fact, it was so easy that I began to wonder: are there any cases in which I shouldn't use clustering?

+",286744,,,,,43105.7875,Nodejs cluster: are there any downsides?,,1,0,1,,,CC BY-SA 3.0,, +363482,1,,,1/5/2018 2:36,,2,208,"

My web app (under development) need to login via http://devWebServer/ExtApp/login.aspx

+ +

After login via address above, it will pass cookie to http://devWebServer/myApp/Login/Default.aspx and continue from here.

+ +

I find it troublesome that, because of the login, I need to deploy my code to http://devWebServer/myApp every time and do my testing from there.

+ +

What are the industry standard to debug under this circumstance?

+ +

Is setting project properties -> Web -> Servers -> External Host to http://devWebServer/myApp going to solve my issue? I cannot test this now because of security policy and the firewall.

+ +

I am reluctant to write if loginViaExtApp (do this) else (do that) or something similar in my web app.

+",176465,,176465,,43105.14097,43255.50694,How to debug in visual studio 2015 when your web app. need to login via another web app?,,2,2,,,,CC BY-SA 3.0,, +363493,1,,,1/5/2018 10:18,,4,329,"

I'm currently developing a new device, which produces (text-)logfiles. There will be many devices of this type and I want to analyse the logfiles (for error-detection and statistics).

+ +

Until now, log analysis was a side-product; the logfiles of other devices were primarily for people trying to debug, so the information was human-readable (""fulltext""). With my automatic log-analysis tool, I analysed these logfiles by searching for multiple keywords that indicated interesting lines. This analysis was very expensive, because I had to check each line for multiple keywords, and a line may still turn out to be irrelevant to my analysis in the end. When a line is relevant, I sometimes need to parse it at considerable cost by cutting out information at different positions in the line. +Altogether, the analysis of these logfiles is very slow.

+ +

With the new device, I'd like to implement more machine-readable logfiles (but still human-readable), which enables faster and easier analysis, also during implementation/extension of my log-analysis tool. I'm just wondering, what's the best practice? +My first idea is to use a ""trigger char"" in a log message. We are in Germany, so I chose the dollar sign, because we don't use it regularly. When I find this trigger char, I know that I have to analyse this line (and I avoid scanning for multiple keywords). In addition, I thought that the most common case is a key-value pair, which can contain the most important information of a log message. +In case I have multiple values (e.g. a list), I also need a machine-readable version of this. So my next idea was to combine the trigger char with a JSON object. In the standard case, this only includes one key-value pair, and for performance reasons the log-analysis tool will parse it with simple string operations rather than a JSON parser library. But in the case that there is a list, I'll create a log message with a JSON array and parse it with a JSON lib (in my case json-simple). On events, I'll insert a boolean (the ""true"" is not necessary, just to stay JSON-conformant).

+ +

So at the moment, a line looks like this:

+ +
05.01.2018 11:11:23: No new APN needed. ${""currentAPN"":""m2m-net.sa.t-mobile""}
+05.01.2018 11:11:51: can't open gpio 969. ${""openGPIO969failed"":true}
+
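To check that this stays cheap to parse, here is a minimal sketch (JDK-only; the class and method names are invented, and the real tool would hand the JSON-array case to json-simple instead) of the fast path that extracts a single key-value pair with plain string operations:

```java
// Minimal sketch: fast-path extraction of a single key/value pair from a
// "$"-triggered log line, without invoking a full JSON parser.
// Class and method names are invented for illustration.
public class TriggerLineParser {

    /** Returns "key=value" for a single-pair payload, or null if no trigger char is found. */
    public static String extractPair(String line) {
        int trigger = line.indexOf("${");
        if (trigger < 0) {
            return null; // not a machine-readable line, skip it cheaply
        }
        String json = line.substring(trigger + 1); // e.g. {"currentAPN":"..."} or {"flag":true}
        int colon = json.indexOf(':');
        String key = json.substring(2, colon - 1);                   // strip leading {" and trailing "
        String value = json.substring(colon + 1, json.length() - 1); // strip trailing }
        if (value.startsWith("\"")) {
            value = value.substring(1, value.length() - 1);          // strip surrounding quotes
        }
        return key + "=" + value;
    }
}
```

A real implementation would first check whether the payload starts with `[` or contains more than one pair and route those lines to the JSON library instead of this fast path.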
+ +

Can I improve my concept in any way?

+ +

Side-note: I'm using a self-written Java parser to analyse the logfiles of the older devices, which I would extend for the new project. Because of the very special log-message ""formats"" (which grew over time), something like Logstash didn't work. Nevertheless, with ElasticSearch and Kibana I use the rest of the ELK stack for my log analysis and visualization.

+ +

Side-note 2: The new device is a kiosk device running Android, so only one app is running. We have full control over the system (device-owner) and also most parts of the hardware.

+ +

/edit: extended by event-message as mentioned in first comment +/edit 2: new side-note

+",202482,,202482,,43105.46181,43105.49236,Create machine-readable logfiles,,2,8,1,,,CC BY-SA 3.0,, +363494,1,363496,,1/5/2018 10:21,,0,562,"

I am working on a Python 3 program that will, among other things, write data to text files in a number of formats. I have one function per format, generating the data to be written. I see two possible ways of writing these functions - one using yield and the other using stream.write.

+ +

Consider this silly example, just writing an integer per line:

+ +
def numbers(max):
+    for i in range(max):
+        yield str(i) + ""\n""
+
+with open(file, ""w"") as f:
+    f.writelines(numbers(10))
+
+ + + +
def numbers(max, stream):
+    for i in range(max):
+        stream.write(str(i) + ""\n"")
+
+with open(file, ""w"") as f:
+    numbers(10, f)
+
+ +

I find the first version to make for a nicer function (e.g. one parameter less), but perhaps the code consuming it is somewhat more convoluted. But I also find it easier to test, since I don't need to construct a dummy stream class that captures the output.

+ +

Is either of these better than the other, or am I just getting stuck on an unimportant decision? My priorities are to produce readable and pythonic code.

+",224262,,,,,43105.43403,Should I produce output with yield or stream.write in Python 3?,,1,0,,,,CC BY-SA 3.0,, +363499,1,,,1/5/2018 11:20,,0,1085,"

I have a typical use case where I am consuming messages from a message broker. The messages are <^> delimited strings. I parse each message, create POJOs, and then apply different validations to understand whether the message is useful for further processing. We call these validations filters.

+ +

These filters are created dynamically by the users of the application, and they need to be applied dynamically to the incoming messages.

+ +

As of now I am validating the messages using nested if-else blocks. But I would like to check whether there are any design patterns that would make this solution more elegant.

+ +

A typical filter has a FilterCriteria, which describes the conditions that need to be checked against a message.

+ +

Code:

+ +
// load filters for a given party and apply on the rawSyslogMessage.
+    private boolean applyFilter(RawSyslogMessage message) throws EntityNotFoundException {
+
+        boolean isDropped = false;
+
+        logger.info(""---applyFilter()::rawSyslog :"" + message);
+
+        String partyID = message.getPartyID();
+
+        // load all filters for the given party
+        List<Filter> filters = filterService.getAll(partyID);
+
+        if (filters != null) {
+            for (Filter filter : filters) {
+
+                FilterCriteria filterCriteria = filter.getFilterCriteria();
+                String field = filterCriteria.getField();
+                String condition = filterCriteria.getCondition();
+                String action = filterCriteria.getAction();
+
+                // FILTER. Consider applying all filters on a message.
+                if (filter.getName().toUpperCase().equals(""PRIORITY"") && action.toUpperCase().equals(""ALLOW"")) {
+                    if (condition.toUpperCase().equals(""GREATER"")) {
+                        if (Long.toString(message.getSeverity()).equals(field)) {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        } else if (message.getSeverity() < Long.parseLong(field)) {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        } else {
+                            logger.info(""The message is sent for correlation"");
+                        }
+                    } else if (condition.toUpperCase().equals(""LESSER"")) {
+                        if (Long.toString(message.getSeverity()).equals(field)) {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        } else if (message.getSeverity() < Long.parseLong(field)) {
+                            logger.info(""The message is sent for correlation"");
+                        } else {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        }
+                    } else if (condition.toUpperCase().equals(""EQUALS"")) {
+                        if (Long.toString(message.getSeverity()).equals(field)) {
+                            logger.info(""The message is sent for correlation"");
+                        } else if (message.getSeverity() < Long.parseLong(field)) {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        } else {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        }
+                    } else if (condition.toUpperCase().equals(""BETWEEN"")) {
+                        String[] range = field.split(""TO"");
+                        String _left = range[0];
+                        String _right = range[1];
+                        if (message.getSeverity() >= Integer.parseInt(_left)
+                                && message.getSeverity() <= Integer.parseInt(_right)) {
+                            logger.info(""The message is sent for correlation"");
+                        } else {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        }
+                    }
+                } else if (filter.getName().toUpperCase().equals(""PRIORITY"")
+                        && action.toUpperCase().equals(""DISCARD"")) {
+                    if (condition.toUpperCase().equals(""GREATER"")) {
+                        if (Long.toString(message.getSeverity()).equals(field)) {
+                            logger.info(""The message is sent for correlation"");
+                        } else if (message.getSeverity() < Long.parseLong(field)) {
+                            logger.info(""The message is sent for correlation"");
+                        } else {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        }
+                    } else if (condition.toUpperCase().equals(""LESSER"")) {
+                        if (Long.toString(message.getSeverity()).equals(field)) {
+                            logger.info(""The message is sent for correlation"");
+                        } else if (message.getSeverity() < Long.parseLong(field)) {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        } else {
+                            logger.info(""The message is sent for correlation"");
+                        }
+                    } else if (condition.toUpperCase().equals(""EQUALS"")) {
+                        if (Long.toString(message.getSeverity()).equals(field)) {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        } else if (message.getSeverity() < Long.parseLong(field)) {
+                            logger.info(""The message is sent for correlation"");
+                        } else {
+                            logger.info(""The message is sent for correlation"");
+                        }
+                    } else if (condition.toUpperCase().equals(""BETWEEN"")) {
+                        String[] range = field.split(""TO"");
+                        String _left = range[0];
+                        String _right = range[1];
+                        if (message.getSeverity() >= Integer.parseInt(_left)
+                                && message.getSeverity() <= Integer.parseInt(_right)) {
+                            logger.info(""The message is dropped"");
+                            isDropped = true;
+                        } else {
+                            logger.info(""The message is sent for correlation"");
+                        }
+                    }
+                }
+            }
+        }
+
+        return isDropped;
+    }
+
+ +

I have read through the following:

+ +

https://en.wikipedia.org/wiki/Specification_pattern (as suggested in Style for control flow with validation checks)

+ +

https://en.wikipedia.org/wiki/Strategy_pattern

+ +

https://en.wikipedia.org/wiki/Command_pattern

+ +

But none of these seems to fit my requirements. Please help.
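For reference, this is roughly the Specification-style shape I have been imagining, where each FilterCriteria is compiled once into a predicate (a sketch with simplified, invented names rather than my real Filter/RawSyslogMessage API), though I'm not sure it is the right fit:

```java
import java.util.List;
import java.util.function.LongPredicate;

// Sketch of the Specification pattern applied to the severity filters.
// All names are simplified placeholders, not the real Filter/RawSyslogMessage API.
public class SeveritySpecification {

    /** Compiles one (condition, field) pair into a reusable predicate on severity. */
    public static LongPredicate compile(String condition, String field) {
        switch (condition.toUpperCase()) {
            case "GREATER": return severity -> severity > Long.parseLong(field);
            case "LESSER":  return severity -> severity < Long.parseLong(field);
            case "EQUALS":  return severity -> severity == Long.parseLong(field);
            case "BETWEEN": {
                String[] range = field.split("TO");
                long lo = Long.parseLong(range[0].trim());
                long hi = Long.parseLong(range[1].trim());
                return severity -> severity >= lo && severity <= hi;
            }
            default: throw new IllegalArgumentException("Unknown condition: " + condition);
        }
    }

    /** A message is dropped if any ALLOW spec fails or any DISCARD spec matches. */
    public static boolean isDropped(long severity, List<CompiledFilter> filters) {
        for (CompiledFilter f : filters) {
            boolean matches = f.spec.test(severity);
            if (f.allow ? !matches : matches) {
                return true;
            }
        }
        return false;
    }

    public static final class CompiledFilter {
        final LongPredicate spec;
        final boolean allow; // true = ALLOW action, false = DISCARD action

        public CompiledFilter(LongPredicate spec, boolean allow) {
            this.spec = spec;
            this.allow = allow;
        }
    }
}
```

The idea is that the condition strings are parsed once when the filters are loaded, so the per-message work is just a loop over predicates.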

+",218838,,9113,,43105.56458,43105.60347,What are the relevant design patterns to perform validations on an object?,,4,7,,,,CC BY-SA 3.0,, +363502,1,,,1/5/2018 12:08,,0,239,"

I had a conversation once with a senior development manager. I said that I aimed to produce ""good"" code, meaning code that is high quality. He said that good code is functional, performant and secure, and that ""quality"" is not even an issue.

+ +

His justifications were that what makes good quality code is subjective, and that in their team people tend to specialise in certain areas of the software, so that it is nearly always developer X who maintains class Y. He added that if that developer leaves or changes responsibilities then the new developer might totally rewrite class Y anyway, so there is no point in maintaining quality standards across the team.

+",105020,,105020,,43105.63958,43105.63958,How to answer someone who says that quality standards in code are irrelevant?,,1,6,,43105.66042,,CC BY-SA 3.0,, +363506,1,,,1/5/2018 13:45,,1,2135,"

I have no doubt regarding what each of these technologies does. I have tried all of them as well. What I am in doubt about is:

+ +

In case we use GraphQL, won't it act as a single point of failure if the GraphQL server goes down? Can it be clustered, and if yes, is there any architecture available for that?

+ +

I know why Kafka is used in the microservices world. If we are using GraphQL, does Kafka become irrelevant, or is there any place where Kafka still makes sense alongside GraphQL?

+ +

If we build microservices, we can containerize every microservice using Docker, with Kubernetes for orchestration and Kafka for the streaming layer. But if we use GraphQL, is it built as a monolith, with the entire application containerized? Or do we still modularize the code, containerize the different modules on servers, and query them via GraphQL?

+ +

Where and how should my database layer exist when implementing via GraphQL? For microservices, we use one DB instance for every microservice.

+ +

For context, I am thinking of an architecture to build scalable, containerizable and manageable APIs with GraphQL.

+",292577,,292577,,43105.68333,43106.85347,"How to build a scalable, containerizable and manageable API with GraphQL, Kafka, Docker, Kubernetes and MongoDB?",,2,0,1,,,CC BY-SA 3.0,, +363508,1,363518,,1/5/2018 13:50,,2,114,"

I'm a Java developer working on a new module for my app that connects to, inserts, and updates DynamoDB information. For a lot of our projects, I used an MVC design pattern with a service layer to abstract business logic and bridge the gap between my controllers and model/data access layer.

+ +

We normally use PostgreSQL with an ORM to interact with data.

+ +

I have all of my credentials and dependencies to connect to my DynamoDB database. I'm just not sure what the best practice is in terms of where to put this. I read that the AWS SDK suggests using the builder methods to connect, which I have done, but I'm not familiar with the builder design pattern (that may not even be relevant).

+ +

My first inclination is to call this new DynamoDB connection class DynamoDBClient, but upon doing some searching I don't see anything about adding a client layer to a software tier. I could make a new package (com.carella.anthony.clients), but is that common practice?

+ +

Where should I put this? Or rather, how should I construct this into my application?
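To make the question concrete, here is the kind of layering I have in mind (a sketch; all names are invented, and in the real module the fake client would wrap the client produced by the SDK's builder):

```java
// Sketch of the layering: the client wiring lives in its own class,
// the DAO depends only on an interface, and the service layer stays unchanged.
// All names are invented; substitute the real AWS SDK client for FakeDynamoClient.
public class Wiring {

    public interface DynamoClient {
        String getItem(String key);
    }

    // In the real app this class would hold the builder-created SDK client.
    public static final class FakeDynamoClient implements DynamoClient {
        public String getItem(String key) {
            return "value-for-" + key;
        }
    }

    public static final class UserDao {
        private final DynamoClient client;

        public UserDao(DynamoClient client) {
            this.client = client;
        }

        public String findUser(String id) {
            return client.getItem("user#" + id);
        }
    }
}
```

With this shape, the question reduces to which package the client-wiring class should live in, since nothing else references the SDK directly.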

+",235289,,,,,43105.68611,Where should my method(s) that connect to DynamoDB go?,,1,0,1,,,CC BY-SA 3.0,, +363514,1,,,1/5/2018 16:09,,3,606,"

I am using XML Documentation comments to document a new project. All of my public API methods are documented using the three slashes required for XML documentation.

+ +

My internal methods (protected and private) are documented using double slashes (//) and slash star (/* */). This is because I do not believe these comments should appear in the public API documentation. For example, I developed a group of classes today that use the Chain of Responsibility pattern (http://www.dofactory.com/net/chain-of-responsibility-design-pattern), however I do not believe the users of these classes need to know that the Chain of Responsibility pattern is used internally - only others developers of these classes need to know this.

+ +

Have I understood this correctly, i.e. is XML documentation used to document the external API only? Should I even be saying that these classes use the Chain of Responsibility pattern (or whatever pattern I use)?

+ +

The reason I ask is that other developers will be looking at this code soon, and I am trying to promote the principle of least astonishment.

+",65549,,,,,43106.54236,is XML Documentation used to document the external API only?,,2,6,,,,CC BY-SA 3.0,, +363516,1,363519,,1/5/2018 16:16,,3,5883,"

For people who will use an API, is it easier to see:

+ +
/createUser
+
+/getUser/id
+
+/editUser/id
+
+ +

The standard is to use nouns in URIs, e.g.:

+ +
/user/ POST (Create a user)
+/user/ GET (Get list of users)
+
+ +

A developer is insisting on using verbs because it is easier for others. But I think there must be some real technical debt to pay later on, other than just a case of bad ""grammar"" and people laughing at us?

+",200195,,,,,44041.58056,What are the consequences of using verbs instead of nouns in REST API URI?,,4,3,,,,CC BY-SA 3.0,, +363517,1,,,1/5/2018 16:22,,66,8195,"

Compared to about 10 years ago I have noted a shift towards frameworks using the style of routing that decouples the URL path from the filesystem. This is typically accomplished with the help of a front-controller pattern.

+ +

Namely, when before, URL path was mapped directly to the file system and therefore reflected exact files and folders on disk, nowadays, the actual URL paths are programmed to be directed to specific classes via configuration, and as such, no longer reflect the file system folder and file structure.

+ +

Question

+ +

How and why did this become commonplace? How and why was it decided that it's ""better"", to the point where the once-commonplace direct-to-file approach was effectively abandoned?

+ +

Other Answers

+ +

There is a similar answer here that goes a bit into the concept of route and some benefits and drawbacks: With PHP frameworks, why is the "route" concept used?

+ +

But it does not address historical change aspects, or how or why this change gradually happened, to the point where any new projects nowadays pretty much use this new routing-style pattern, and direct-to-file is outdated or abandoned.

+ +

Also, most of the benefits and drawbacks mentioned do not appear to be significant enough to warrant such a global change. The only benefits that I can see driving this change are hiding the file/folder system from the end user, and the lack of ?param=value&param2=value, which makes URLs look a tad cleaner. But were those the sole reasons for the change? And if yes, why were those the reasons behind it?

+ +

Examples:

+ +

I am most familiar with PHP frameworks, and many popular modern frameworks use this decoupled routing approach. To make it work, you set up URL rewriting in Apache or a similar web server, so that web application functionality is typically no longer triggered via a direct-to-file URL path.

+ +
+

Zend Expressive

+ +

https://docs.zendframework.com/zend-expressive/features/router/aura/
+ https://docs.zendframework.com/zend-expressive/features/router/fast-route/
+ https://docs.zendframework.com/zend-expressive/features/router/zf2/

+ +

Zend Framework

+ +

https://docs.zendframework.com/zend-mvc/routing/

+ +

Laravel

+ +

https://laravel.com/docs/5.5/routing

+ +

CakePHP

+ +

https://book.cakephp.org/3.0/en/development/routing.html

+
+",119333,,591,,43107.93819,43308.65903,How and why did modern web application frameworks evolve to decouple URL routes from the file system?,,10,13,14,,,CC BY-SA 3.0,, +363530,1,367426,,1/5/2018 18:26,,3,166,"

I've got a microservice called ExamResults, with a very simple component structure:

+ +
  • ExamResults
      • (offers: IExamResults)
      • (requires: IExamResultsDAO)
  • ExamResultsDAO
      • (offers: IExamResultsDAO)
+ +

This service gets passed exam results (student ids, exam ids, given answers) in JSON format, and the component ExamResults converts them to local domain objects, does some validation, and saves them in the database using the ExamResultsDAO. +(nomenclature suggestions are not discouraged)

+ +

Now, this was all well and fine, until we started implementing it. We gave ExamResults the classes for validation and (de)serialisation, but also the domain classes (ExamResult, GivenAnswer) - and that's where we started scratching our heads: why are they there, exactly? The domain classes get used by the DAO all the same.

+ +

First thought was making them another component, but we learned (we're students) that a component must always offer an interface. And the domain classes have no meaningful methods: just getters and setters and (de)serialisation keywords. +Is it proper conduct to put the domain in a separate package and mark it down as a component in the component diagram? Or mark it as something else? Or not include it at all? Or is it more proper to leave the domain classes with the component ExamResults, since that creates them and uses them the most?

+ +

What's the preferred solution here?

+",292603,,,,,43169.53889,Where to put domain classes in a component structure and diagram?,,2,12,4,,,CC BY-SA 3.0,, +363532,1,363538,,1/5/2018 18:48,,1,343,"

What are the thoughts about calling a class function that doesn't return anything but instead sets internal property values that you then read?

+ +
var myClass = new MyClass();
+myClass.DoFunction(specialDate, ""A"");
+
+// after calling DoFunction() the Name and Type property are now set
+var name = myClass.Name;
+var type = myClass.Type;
+
+ +

Off the top of my head this seems like an awkward way of doing things vs having DoFunction() return an object that has Name and Type properties in it. Thoughts?
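For comparison, here is the alternative I was thinking of, sketched in Java rather than C# (all names are invented): the method returns a small immutable result object instead of mutating the instance:

```java
// Sketch: return a small immutable result object instead of setting fields,
// so the caller can never observe a half-initialized state. Names are illustrative.
public class Lookup {

    public static final class NameAndType {
        public final String name;
        public final String type;

        public NameAndType(String name, String type) {
            this.name = name;
            this.type = type;
        }
    }

    public static NameAndType doFunction(String date, String code) {
        // ...whatever computation DoFunction performed would go here...
        return new NameAndType("name-for-" + date, "type-" + code);
    }
}
```

With this shape, the ordering dependency between the call and the property reads disappears entirely.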

+",126663,,126663,,43105.79792,43106.35903,Class functions that populate class properties,,6,6,1,,,CC BY-SA 3.0,, +363546,1,,,1/5/2018 23:50,,0,73,"

In the paper On the correlation between size and metric validity, Gil and Lalouche conclude that all popular software metrics are valid only insofar as they are correlated with code size.

+ +

They use several definitions of size including lines of code, number of tokens, and size of gzipped code.

+ +

Are there any actionable conclusions we can draw from this other than write less code if possible?

+ +

Do I understand correctly that ""instability"", ""bugginess"", and ""change complexity"" scale linearly with code size? Does that mean even splitting one big project into several smaller projects is doomed to have the same absolute number of bugs and no other objective metric is better than code size for this purpose?

+",292629,,,,,43106.81042,"Are there any actionable conclusions from the paper ""On the correlation between size and metric validity""?",,1,3,,,,CC BY-SA 3.0,, +363547,1,,,1/6/2018 0:10,,1,109,"

I started reading a machine learning book a few days ago, and I've learned how it can be used for classification/regression/etc. However, I am unsure whether it will be able to handle the task I want to accomplish.

+ +

My goal is to build a machine learning algorithm that predicts the return type of a method based on its name and other information. For example, if someone is writing

+ +
GetUsers().<cursor>
+
+ +

in an IDE, and GetUsers() is not yet defined, I might want to predict that its return type is List<User> and offer autocompletion based on that. I plan to pre-train my model on lots of code on the web, where it will try to guess the type, and then compare it against the actual type (it would be a supervised training model).

+ +

My problem is: I'm not sure how to model this as a classification/regression task. A library may have many types, plus you can nest generic types like so: Task<List<User>> to build an infinite number of types.

+ +

How should I approach this problem?

+",161912,,,,,43106.00694,Can machine learning be used for non-numerical prediction?,,0,4,,,,CC BY-SA 3.0,, +363550,1,,,1/6/2018 4:26,,0,87,"

I often face this problem when new requirements come along, but have never seen it discussed anywhere. In this case, I have an existing list of items (staves in a musical score). The requirement is that a user can add and remove new items - but they are not allowed to remove the original items. So what do I name the new stored state?

+ +

An enum property with values Original and CreatedByUser reflects the original setter context, and is potentially useful in other actions like rename, copy, etc. I would usually also add a read-only isDeletable property that refers to the enum.

+ +

A bool isDeletable property reflects the new requirement, and potentially allows other contexts to set the value depending on yet more requirements.

+ +

Is one better than the other? Are there any links that describe or discuss this duality of state variables better?
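For illustration, the first option could look like this (a sketch with invented names), with the bool derived from the enum so the two can never disagree:

```java
// Sketch: origin stored as an enum reflecting the setter context;
// deletability is derived from it rather than stored twice.
// All names are invented for illustration.
public class Stave {

    public enum Origin { ORIGINAL, CREATED_BY_USER }

    private final Origin origin;

    public Stave(Origin origin) {
        this.origin = origin;
    }

    /** Read-only view used by the delete action. */
    public boolean isDeletable() {
        return origin == Origin.CREATED_BY_USER;
    }
}
```

If a future requirement makes deletability depend on more than origin, only this one method changes.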

+",292634,,,,,43136.43819,Should the name of a new state variable reflect the original setter context or the intended usage?,,1,3,,,,CC BY-SA 3.0,, +363552,1,363554,,1/6/2018 5:36,,0,207,"

We are creating a REST API which will be consumed by web and mobile users.

+ +

My question is: do we need to consider the user interface before designing the JSON structure?

+ +

For example, we have one resource, Company. A Company has multiple customers, each with their experiences. We created the API company/{company_id}/customers to fetch the list of customers for a particular company. In response, the API sends a list of customers with basic details like (id, name, address, email, phone).

+ +

Is it a best practice to send customer experiences along with customer details?

+ +

Currently, while fetching customers, we are sending only the basic details of customers without their experiences. When we look at the user interface for mobile and web, it shows the customer's experience together with the customer details on the customer listing screen.

+ +

So is it a best practice to make multiple small REST calls to get the other information?

+",292589,,222996,,43108.36875,43858.48403,Best practices for sending various details about REST resource in a reply from service,,2,5,,,,CC BY-SA 3.0,, +363559,1,,,1/6/2018 10:55,,2,435,"

Background

+ +

As mentioned in this article,

+ +
+

Inversion of Control can be achieved through various mechanisms such as: Strategy design pattern, Service Locator pattern(SLP), Factory pattern, and Dependency Injection (DI).

+
+ +

I am missing clarity in the above statement, because below is my understanding.

+ +

1) Creation of container

+ +

Creating a dependency container or IOC container does not require any of these design patterns (mentioned above). We need these design patterns to get access to an implementation from that container (which is already created). Here is the C code where init_handlers() creates a container (imagehandlers in config.c) of implementations that are configured in config.txt

+ +

2) Access impl from the container

+ +

To get access to an implementation from the container, for example,

+ +

One can rely on an injection mechanism that is implemented using the DI pattern.

+ +

or

+ +

Rely on a service location mechanism that is implemented using the Service Locator pattern. Here is the C code where displayMenu() locates the service from the imagehandlers container, based on the given input (scanf(""%s"",filename);)

+ +
+ +

So, Dependency Injection or Service locator pattern has nothing to do with creation of IOC container but to get access to an implementation from that container.

+ +

For example, In Spring,

+ +
ApplicationContext appContext = new ClassPathXmlApplicationContext(""Springbeans.xml"")
+
+ +

creates the IOC container with singleton instances of all beans configured in Springbeans.xml, assuming the beans are not prototype-scoped, and

+ +
MessageBean mBean = (MessageBean)appContext.getBean(""messagebean""); 
+
+ +

locates the messageBean service from the appContext container using the Service Locator pattern.

+ +

1) For this line of code, ApplicationContext appContext = new ClassPathXmlApplicationContext(""Springbeans.xml""), which creates a dependency container: is it right to say that the creation of the container has nothing to do with a design pattern (like DI or SLP)?

+ +

2) Why is appContext not called a dependency container? Instead, why is appContext called an IOC container?
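To make my understanding concrete, here is a toy sketch (no Spring; all names are invented) where building the container itself is plain data-structure work, and the two access mechanisms differ only in who drives the lookup:

```java
import java.util.HashMap;
import java.util.Map;

// Toy container: creating the container is just filling a map; the design
// patterns only come in when implementations are handed out to clients.
// All names are invented for illustration.
public class ToyContainer {
    private final Map<String, Object> beans = new HashMap<>();

    public void register(String name, Object impl) {
        beans.put(name, impl);
    }

    // Service Locator style: the client pulls the dependency by name.
    public Object locate(String name) {
        return beans.get(name);
    }

    // Dependency Injection style: the container pushes the dependency in.
    public void inject(Client client, String name) {
        client.setDependency(beans.get(name));
    }

    public interface Client {
        void setDependency(Object dependency);
    }
}
```

In both cases the container existed before either pattern was exercised, which is the distinction I am asking about.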

+",188888,,188888,,43106.61111,43106.77083,IOC container & accessing Implementation from the container,,3,0,2,,,CC BY-SA 3.0,, +363564,1,,,1/6/2018 13:38,,4,153,"

What is the right way to call methods of parts in a composition?

+ +

I have a class

+ +
class Body{
+   Arm arm;
+}
+
+class Arm{
+   public void waveArm(){}
+   Finger finger;
+}
+
+class Finger{
+   public void moveFinger(){}
+}
+
+ +

And I want to invoke moveFinger(). What is the correct way to do so? Is it

+ +
Body b;
+
+b.arm.finger.moveFinger()
+
+ +

Or should I create wrappers, e.g.

+ +
Body::moveFinger(){ arm.moveFinger();}
+Arm::moveFinger(){ finger.moveFinger();}
+b.moveFinger()
+
+ +

Using the first way, we can get really long chaining.

+ +

However, using the second method, for every new method I would have to create a lot of wrappers (up to n, where n is the depth of the parts tree - or whatever it is called).

+ +

What is the correct way, and how do I choose between those two?
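A third option I have seen mentioned is to expose only intent-revealing operations at each level instead of forwarding every primitive, which keeps the wrapper count bounded. A sketch (all names invented for illustration):

```java
// Sketch: each level exposes an intent-revealing operation instead of
// forwarding every primitive of its parts. Names are illustrative.
public class BodyComposition {

    public static class Finger {
        public boolean bent = false;
        public void moveFinger() { bent = !bent; }
    }

    public static class Arm {
        public final Finger finger = new Finger();
        // Intent at arm level, expressed in terms of its part:
        public void point() { finger.moveFinger(); }
    }

    public static class Body {
        public final Arm arm = new Arm();
        // Intent at body level; callers never need to know about Finger:
        public void pointAtTarget() { arm.point(); }
    }
}
```

The wrappers that survive are the ones that mean something at their level, rather than one forwarder per low-level method.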

+",279669,,173647,,43408.81806,43408.92569,Composition and calling part methods,,2,1,,,,CC BY-SA 4.0,, +363569,1,,,1/6/2018 15:18,,1,578,"

I am writing my own 3D graphics engine for education and have some difficulties with its architecture. I wrote classes like OpenGLTexture, OpenGLMaterial, OpenGLGpuProgram, etc. I also wrote a class ResourceManager for loading textures, shaders and materials.

+ +

Now I want to separate the OpenGL graphics component of my engine in order to add DirectX support too. I have created interfaces for the graphics classes (ITexture, IMaterial). But my resource manager uses OpenGL functions to load and initialize resources (for example, shaders), and I don't know how to add DirectX support to this architecture correctly. I think I can write a factory class to create objects for each graphics API, but how can I choose which class should be used?

+ +

How can I solve this problem properly?
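What I have sketched so far is an abstract factory chosen once at startup, so that the ResourceManager depends only on the factory interface (shown in Java for brevity; the idea transfers directly, and all names are invented):

```java
// Sketch: the ResourceManager never names OpenGL or DirectX directly;
// it only talks to a GraphicsFactory picked once at startup.
// All names are invented for illustration.
public class GraphicsBootstrap {

    public interface Texture {
        String backend();
    }

    public interface GraphicsFactory {
        Texture createTexture(String path);
    }

    public static final class OpenGLFactory implements GraphicsFactory {
        public Texture createTexture(String path) {
            return () -> "OpenGL"; // path ignored in this toy sketch
        }
    }

    public static final class DirectXFactory implements GraphicsFactory {
        public Texture createTexture(String path) {
            return () -> "DirectX"; // path ignored in this toy sketch
        }
    }

    /** Chosen once, e.g. from a config file or a command-line flag. */
    public static GraphicsFactory choose(String api) {
        return api.equals("opengl") ? new OpenGLFactory() : new DirectXFactory();
    }
}
```

The "how do I choose" question then collapses into a single decision point at engine startup.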

+",277882,,,,,43196.80625,How to organize the management of resources for OpenGL?,,1,0,1,,,CC BY-SA 3.0,, +363577,1,,,1/6/2018 18:32,,0,698,"

I want to make a reward system in my application, where

+ +
  1. Each user will get some points for each task in the app, e.g. solving a puzzle, answering a question, etc.
  2. I want to show the point earners on a leaderboard (like the Stack Overflow leaderboard).
  3. The leaderboard should have the option to show the daily, weekly, monthly & all-time top point earners.
+ +

Currently, I am thinking of a very crude approach:

+ +
  1. In the User table, create a new column totalpoints which will store the all-time points of a user.

     Each time a user earns points, we will increment the value in it.

  2. In the User table, create a new column dailyPoints which will store the daily points earned by a user.

     In my app I want to give rewards to the top 3 users with the highest points earned that day. It will store points only between 12:15 AM and 11:45 PM. The buffer time will be used to find the top 3 winners; after finding the top 3 winners, announce the winners at 12:00, reset the counter of each user to 0, and start adding points to the table again after 12:15 AM.

  3. Similarly, for the weekly leaderboard, create a new column Weekly_Points in the User table & use the same approach as in point 2, i.e. it will store points only from 12:15 AM Monday to 11:45 PM Sunday.

  4. Similarly, for the monthly leaderboard, create a new column Monthly_Points in the User table.
+ +

But the above approach has a major drawback, i.e. users will not be able to see the rankings of previous days, previous weeks, or previous months.

+ +

My questions are:

+ +
  1. Is there any better approach which I can use?
  2. Is this data a better fit for relational or NoSQL databases?
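Regarding question 1, one alternative I have read about (an assumption, not something I have validated) is to store each point award as an immutable event with a timestamp and derive every leaderboard window by aggregation, which would also preserve history. A toy in-memory illustration:

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration: derive any leaderboard window from immutable point events
// instead of mutable per-user counters, so past days remain queryable.
// All names are invented for illustration.
public class Leaderboard {

    public static final class PointEvent {
        public final String userId;
        public final LocalDate day;
        public final int points;

        public PointEvent(String userId, LocalDate day, int points) {
            this.userId = userId;
            this.day = day;
            this.points = points;
        }
    }

    /** Total points per user within [from, to], inclusive. */
    public static Map<String, Integer> totals(List<PointEvent> events,
                                              LocalDate from, LocalDate to) {
        Map<String, Integer> totals = new HashMap<>();
        for (PointEvent e : events) {
            if (!e.day.isBefore(from) && !e.day.isAfter(to)) {
                totals.merge(e.userId, e.points, Integer::sum);
            }
        }
        return totals;
    }
}
```

In a real database this aggregation would be a GROUP BY over an indexed timestamp column rather than an in-memory loop.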
+",292671,,60357,,43106.77847,43106.79028,What is a good approach for Rewards Points Leader Board (Daily/Weekly/Monthly)?,,1,0,0,,,CC BY-SA 3.0,, +363583,1,363607,,1/6/2018 20:49,,2,118,"

I've been trying to design a backend architecture for a SPA that satisfies the following:

+ +
  • Users constantly answer yes/no questions (e.g. Tinder swipe cards)
  • The application calculates each user's similarity to one another solely based upon their answers
  • Similarity between users should update as they answer more questions
  • The application periodically prompts two active users with sufficient similarity to enter private chat
  • The application will have up to 10 000 questions
+ +

Question:

+ +

Which architectural component should calculate/store/handle user similarity, given that there are many concurrently active users constantly/frequently updating their answer sets?

+ +

Possible Approaches I've considered:

+ +
  1. Create K-many tables (db). Periodically, the users are clustered (K-Means) into exactly one of the K tables. All users within a given table have sufficient similarity. The client requests two active users from a given table. Sticking points: where/when is this calculation performed in the architecture, and can only 1 user be re-clustered based upon their updates?
  2. Calculate on-the-go (client and/or server worker threads) + caching. A client background process requests a list of random users and calculates this user's similarity to each. Caches matches. Sticking points: if a user is answering at a high frequency, how often is this calculation performed? Furthermore, does it scale?
  3. Bitfield/bit representation of answers in the user model (server/db): periodically the client requests a matched user. The server responds with a user by querying based upon the bitfield. Sticking points: how does this scale to answering all 10 000 questions (i.e. can a database store and query on such a large bit representation)?
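As a quick feasibility check on approach 3: 10 000 yes/no answers fit in a 10 000-bit set (roughly 1.25 KB per user), and pairwise similarity over the commonly answered questions is cheap to compute. A rough illustration (not a statement about where in the architecture this should run):

```java
import java.util.BitSet;

// Sketch: similarity between two users from their yes/no answers.
// 'answered' marks which questions a user has answered at all;
// 'yes' marks the questions answered with "yes".
public class AnswerSimilarity {

    /** Fraction of commonly answered questions on which both users agree (0..1). */
    public static double similarity(BitSet answeredA, BitSet yesA,
                                    BitSet answeredB, BitSet yesB) {
        BitSet common = (BitSet) answeredA.clone();
        common.and(answeredB);                 // questions both users answered
        long n = common.cardinality();
        if (n == 0) {
            return 0.0;                        // nothing to compare yet
        }
        BitSet disagree = (BitSet) yesA.clone();
        disagree.xor(yesB);                    // bits set where the answers differ
        disagree.and(common);                  // ...restricted to common questions
        return 1.0 - (double) disagree.cardinality() / n;
    }
}
```

The XOR/AND/popcount operations here are the same ones a database-side bit type or an application cache would end up performing.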
+ +

Technology currently being used: Node/Express, PostgreSQL + ORM, Websockets.

+ +

Any insight with architectural patterns or database design or better approaches for this scenario would be greatly appreciated.

+ +

Research/Related:

+ + +",289612,,,,,43108.8,Architectural considerations for frequently matching similar users,,1,0,1,,,CC BY-SA 3.0,, +363586,1,,,1/7/2018 0:30,,0,146,"

What algorithm should I use to simulate a continual stream of N increments each second — not by writing a loop, but by timed-interval events, with no more than M events per second?

+ +

I am implementing an incremental game, in which the rate of increase for some resource Scrog varies widely. The system I'm using has an event timer, and that's the mechanism I want to use to generate the resource over time.

+ +

Design constraints

+ +

The inputs to this algorithm are: the effective rate of Scrog increase (e.g. “14 per second”, “578 per second”), and the lower bound of the period for each timer (e.g. “no smaller than 10 per second”, “no smaller than 100 per second”).

+ +

The outputs of this algorithm are: a small set of tuples (timer-interval, scrog-quantity), often just one tuple and typically no more than a handful, that when taken together will effectively produce the specified rate of Scrog increase per second.

+ +
    +
  • I want to simulate anything from 1 increment every few seconds, all the way to trillions per second.

  • +
  • Incrementing the Scrog resource is to be done by constant integer amounts. I want to pre-calculate the amounts and not deal with fractions of the resource.

  • +
  • The events should be fired by repeating-interval timers.

  • +
  • The generated timer intervals should be as large as possible, to run the function as infrequently as we can. I don't want a polling function that does useless “is it time yet?” checks; the rate of increase is known beforehand, so this algorithm needs to set up timers that avoid polling.

  • +
  • The generated timer intervals should be larger than a specified minimum bound (“M per second”), while small enough to in aggregate simulate the steady rate “N per second”.

  • +
  • No state can be kept in a loop; instead, the algorithm must pre-compute a collection of timers that will each fire an “increment Scrog by n” event, periodically. The period of each timer, and the integer amount of Scrog produced by each timer, are then constant.

  • +
  • The events should fire steadily, simulating a continual flow; but not indefinitely often, so that the event handler is not overloaded.

  • +
  • It's acceptable if the algorithm only approximates the specified rate, within the tolerance of M-per-second.

  • +
+ +

So I am looking for a generic algorithm that will simulate a continual flow of Scrog at whatever rate (N per second) is specified, by setting up repeating events at a small number of fixed intervals, each interval no more frequent than M per second.

+ +

Example: up to 10 events per second

+ +

If I limit the actual rate of events to no more than one every 0.1 seconds (10 times per second), that would mean:

+ +
    +
  • When the rate is “1 Scrog per 5 seconds”, the algorithm may produce the set { (5.0 seconds, 1 Scrog) }.
  • +
  • When the rate is “1 Scrog per second”, the algorithm may produce the set { (1.0 seconds, 1 Scrog) }.
  • +
  • When the rate is “7 Scrog per second”, the algorithm may produce the set { (0.143 seconds, 1 Scrog) }.
  • +
  • When the rate is “10 Scrog per second”, the algorithm may produce the set { (0.1 seconds, 1 Scrog) }.
  • +
  • When the rate is “500 Scrog per second”, the algorithm may produce the set { (0.1 seconds, 50 Scrog) }.
  • +
+ +

But I'm confused about how the algorithm should handle rates faster than 10 per second but slower than hundreds per second.

+ +
    +
  • When the rate is “14 Scrog per second”, the algorithm may produce the set { (0.1 seconds, 1 Scrog), (0.25 seconds, 1 Scrog) }, because events triggered with those intervals will result in 14 Scrog per second.
  • +
+ +

A simple arithmetic problem?

+ +

Stripped of the context of events, timers, etc., this boils down to me trying to take some numbers-with-units and produce other numbers-with-units.

+ +

So this is apparently a fairly simple (?) problem: Design a general algorithm which, given these inputs, and the above constraints, will produce those outputs.

+ +

But my abstract arithmetic isn't powerful enough. How should the algorithm be written so that it produces all these results, given only the constraints and the current effective Scrog rate?
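One way to formalize the arithmetic in the examples above is to greedily peel off as many full-payload minimum-interval timers as the rate allows, then recurse on the remainder. Here is a sketch of that idea; it is my own construction from the stated constraints, not a named algorithm:

```python
def schedule(rate, max_hz):
    # Decompose 'rate' Scrog per second into (interval_seconds, amount)
    # timer tuples, none firing more often than 'max_hz' times per second.
    timers = []
    while rate > 0:
        if rate <= max_hz:
            # Slow enough for a 1-Scrog timer at the exact interval.
            timers.append((1.0 / rate, 1))
            break
        full, rate = divmod(rate, max_hz)
        # One timer at the minimum interval carrying a bigger payload.
        timers.append((1.0 / max_hz, int(full)))
    return timers
```

With max_hz = 10, schedule(14, 10) yields [(0.1, 1), (0.25, 1)] and schedule(500, 10) yields [(0.1, 50)], matching the worked examples.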

+",23421,,23421,,43108.16736,43108.16736,"Trigger no more than M events per second, simulate N increments per second",,2,8,,,,CC BY-SA 3.0,, +363588,1,363593,,1/7/2018 0:55,,2,1290,"

I am going to try to build an implementation of CQRS and ES for a large-scale (1M users) authentication API. Below is my initial architecture draft (ignore the Azure-related stuff). I am taking this approach because I see the following advantages:

+ +
    +
  1. Even if the event store is down, the login system can still function (vice versa).
  2. +
  3. Since we’re going to use materialized views: + +
      +
    • Lock contention is lower
    • +
    • Read / Write operations have higher throughput
    • +
  4. +
+ +

However, I am still undecided on whether to use a SQL database or NoSQL for my materialized views. Please let me know your thoughts; I would appreciate it very much.

+ +
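For what it's worth, the projection code that maintains the materialized view looks much the same against either kind of store: each event handler upserts one denormalized row keyed by the aggregate id. A hypothetical sketch (the event shapes and field names are mine, not from the draft):

```python
# In-memory stand-in for the materialized view; swapping it for an SQL
# upsert or a NoSQL document write does not change the handler's shape.
login_view = {}

def apply_event(event):
    if event['type'] == 'UserRegistered':
        login_view[event['user_id']] = {'email': event['email'],
                                        'failed_logins': 0}
    elif event['type'] == 'LoginFailed':
        login_view[event['user_id']]['failed_logins'] += 1
```

Since the projection is store-agnostic, the SQL-vs-NoSQL choice can be driven by the read patterns and operational needs rather than by the CQRS machinery itself.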

+",181381,,,,,43107.12847,Should I use NoSQL or RDBMS for materialized view in this CQRS / ES implementation?,,1,2,2,,,CC BY-SA 3.0,, +363590,1,363616,,1/7/2018 1:58,,5,299,"

I think it has happened to everyone: having side projects, trying to do something new and big in their spare time, or maybe having a little startup where everyone has a clear idea of what the software will do and what design goals to address.

+ +

I was wondering what kind of documentation has to be produced under these circumstances. Since the client and the analyst are the same person, many of the communication artifacts are useless, at least in most contexts. But it's still useful to track requirements and changes, especially if the software is big and complex.

+ +

What's the documentation that must be provided for self-produced software? +Are there examples of documents drawn up for open source software?

+",291004,,95212,,43107.23403,43107.99931,Do I need requirement analysis if there's no client?,,4,3,1,,,CC BY-SA 3.0,, +363596,1,363612,,1/7/2018 5:36,,1,114,"

The need for the client user is to perform tasks while they aren't currently logged in (so we can assume it is time-based). For example: A user logs in, performs tasks, logs out and gets a 'reward' in 6 hours or so. How would time be kept in this situation as the client isn't logged in?

+ +

I have the option of a server and a (persistent) database. A possible solution I thought of was saving the previous user login time; then, the next time they log in, it would check whether 6 hours had passed and update the database from there.

+ +

But the issue of tasks, users, and hardware arises when I need to update every user's information differently every minute.

+ +
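One way around the per-minute batch update is to not update anything while the user is away: store only the timestamp of the last claim, and compute the accrued rewards lazily at login. A sketch of that idea (the names and the 6-hour interval are taken from the example above; everything else is my own assumption):

```python
from datetime import datetime, timedelta

REWARD_INTERVAL = timedelta(hours=6)

def pending_rewards(last_claimed_at, now=None):
    # Number of whole 6-hour reward periods elapsed since the last claim,
    # computed on demand instead of by a scheduled per-user job.
    now = now or datetime.utcnow()
    return max(0, int((now - last_claimed_at) / REWARD_INTERVAL))
```

This turns the design from "update every row every minute" into "compute on read", which scales with the number of active logins instead of the total number of users.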

Is there a smarter way to design this?

+",266383,,,,,43107.65903,"As a high-level overview, how do I structure my server and client code with a persistent database?",,2,0,,,,CC BY-SA 3.0,, +363597,1,363599,,1/7/2018 6:27,,2,123,"

Even though I'm programming in PHP, I'm open to reviewing language-agnostic suggestions, as they might point me to valuable directions.

+ +

To remove any possible confusion I feel some comments appear to allude to: the Pages in this question do not refer to web pages, but to in-memory pages for an API akin to word processing and/or a spreadsheet.

+ +

I have a Page class whose instances can be part of a Pages collection. These Page objects must be uniquely named if they're part of a Pages collection, as they must be uniquely identifiable by their name.

+ +

A Page object can never belong to multiple Pages collections at the same time: they will either be moved, or be copied (cloned).

+ +

Consumers will also be able to alter the name of individual Page objects, for instance with1:

+ +
$page = new Page( 'Optional name' );
+// or ($pages is a Pages collection instance)
+$page = $pages->getByName( 'Page 1' );
+
+// and then
+$page->setName( 'My Page' );
+
+ +

If a Page is added to a Pages collection, the collection will ensure the Page is renamed, if its name conflicts with another Page in the collection, like so:

+ +
class Pages
+{
+  private $objectIndex;
+  private $nameIndex;
+
+  public function __construct() {
+    $this->objectIndex = new SplObjectStorage;
+    $this->nameIndex = [];
+  }
+
+  public function add( Page $page ) {
+    if( !$this->objectIndex->contains( $page ) ) {
+      $name = $page->getName();
+      if( isset( $this->nameIndex[ $name ] ) ) {
+        // make name unique to this collection
+        $name = $this->someLogicToMakeNameUnique( $name );
+        // alter name
+        $page->setName( $name );
+      }
+      $this->nameIndex[ $name ] = $page;
+      $this->objectIndex->attach( $page, $name );
+    }
+  }
+
+  public function getByName( $name ) {
+    if( isset( $this->nameIndex[ $name ] ) ) {
+      $this->nameIndex[ $name ];
+    }
+
+    return null;
+  }
+}
+
+ +

I'm trying to come up with a strategy to make sure that, if a Page is part of a collection and its name is altered, the name automatically gets adjusted to a unique name (i.e. if ""Page 1"" already exists, rename it to ""Page 2"") and that the Pages::$nameIndex keys get properly updated as well (as this allows for faster retrieval of a Page by name than looping through its contained Pages until a name matches).

+ +

Strategies I've come up with so far are:

+ +
    +
  1. Pass Pages collection to Page::setParent( Pages $container ) and call some verification/sanitation mechanism on $this->container in Page::setName( $name ), before altering.

  2. +
  3. Emit a rename event from Page::setName( $name ) and have Pages do something like $event->preventDefault() if the name conflicts and then have Pages alter the name to something unique instead.

  4. +
+ +

Option 1 seems the easiest/laziest/least process intensive, but puts the responsibility of verifying uniqueness in an object where it does not belong.

+ +

Option 2 appeals the most to me so far, but it has a downside as well: It will emit multiple rename events, if the Pages collection needs to alter the name itself again.

+ +
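For comparison, here is a language-agnostic sketch (in Python rather than PHP, purely to keep it short) of a third variant: the page delegates name reservation to its owning collection, so both the uniqueness check and the index upkeep stay inside the collection, and no extra events fire. All names and the uniquifying scheme are illustrative:

```python
class Page:
    def __init__(self, name):
        self.name = name
        self.container = None            # set when added to a collection

    def set_name(self, name):
        if self.container is not None:
            # The collection may adjust the name before it is committed.
            name = self.container.reserve_name(self, name)
        self.name = name


class Pages:
    def __init__(self):
        self.by_name = {}                # name -> Page index

    def add(self, page):
        page.container = self
        page.set_name(page.name)         # triggers reservation/renaming

    def reserve_name(self, page, name):
        # Drop the page's stale index entry, then uniquify if needed.
        self.by_name = {n: p for n, p in self.by_name.items() if p is not page}
        base, n = name, 1
        while name in self.by_name:
            n += 1
            name = f'{base} ({n})'
        self.by_name[name] = page
        return name
```

This keeps Pages responsible for uniqueness and needs only one call per rename, at the cost of the back-reference from Page to its collection that option 1 also requires.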

NB: Even though both the Page::setParent( Pages $container ) (as a Page can only ever be owned by one collection) and event dispatching capabilities (primarily meant for consumer purposes) are already implemented, neither are utilized for my unique naming conundrum yet.

+ +
+ +

Do you have any other suggestions that satisfy the constraints that Pages should be responsible for verifying uniqueness and that not too much back-and-forth communication is necessary?

+ +
+ +

1) As a reference: my objective is akin to Excel VBA's Workbook.Sheets(""Sheet 1"").Name = ""Sheet"" type of behavior, which I believe behaves similar to what I am after.

+",21386,,21386,,43107.49722,43107.5125,Looking for a strategy to ensure name of child node stays unique in collection,,1,7,1,,,CC BY-SA 3.0,, +363608,1,363611,,1/7/2018 14:23,,19,4643,"

Let's say I have a function which sorts a database in O(n^2) time. I want to go about refactoring it so it runs in O(n log(n)) time, and in doing so I will change the fundamental way the operation runs, while keeping the return values and inputs equivalent.

+ +

What do I call this refactoring activity?

+ +

""Speeding-up-ifying"" doesn't seem quite right, since you can make an algorithm go faster without changing the big-O class in which it executes.

+ +

""Simplifying"" also doesn't seem right.

+ +

What do I call this activity?

+ +

Update

+ +

The best answer I could find is reducing the asymptotic time complexity.

+",163825,,163825,,43109.48056,43109.48056,What do you call it when you change the Big O execution time of a function,,6,10,3,43108.78125,,CC BY-SA 3.0,, +363623,1,,,1/7/2018 22:15,,2,657,"

Last years I made myself familiar with Python and Haskell. I am surprised and impressed about the short and readable code you can write in these 2 languages, especially in comparison to languages like Java, C++ and C#. Of course this is very motivating to pursue my journey in Haskell and Python.

+ +

However with languages like Java, C++ and C# most of the time you deal with code ""from the outside"". That is: you see the class together with its properties and methods, but how exactly the methods and properties were written you do not care about, as long as they do what they promise to do.

+ +

So I was wondering whether it truly matters that code is short and readable, and whether my enthusiasm for Python and Haskell is justified, given that there are lots of situations (in professional software development) where you see code only from the outside.

+ +

Now, I know a lot of developers do not comment their code. In those cases you do benefit from short, readable code. However, as long as there is the opportunity to document code, I am not convinced.

+",292674,,,,,43108.83264,What is the benefit of short readable code if you only see functions and classes on the outside?,,4,9,2,,,CC BY-SA 3.0,, +363624,1,,,1/7/2018 22:56,,2,250,"

Many of us have had the experience of releasing a product that we know has bugs found during development, the circumstances being rushed deadlines, the low severity of the bug...

+ +

What do you call them?

+ +

We could just call them ""known bugs"". But the particular term I'm looking for puts emphasis on bugs that were found before the release of the product/version, when time was too short to investigate or solve them.

+",292746,,292746,,43107.96042,43107.96319,What's the name for the bugs known on the release,,1,3,,,,CC BY-SA 3.0,, +363629,1,,,1/7/2018 23:29,,8,728,"

In Haskell, lazy evaluation can often be used to perform efficient calculations of expressions that are written in a clear and concise manner. However, it seems that the language itself does not provide enough details to determine, in general, the time and space needs of a given piece of code. The situation seems to be mitigated, to some degree, by common use of ghc, which I gather gives some more specific guarantees related to weak-head-normal-form. But if I'm not mistaken, the actual performance of code can still be quite difficult to understand.

+ +

For example, we also use polymorphism to express functions in a generic fashion, again without sacrificing clarity. However, when combined with lazily-evaluated structures, the two language features seem to interact in ways that are (to me) surprising. Consider:

+ +
import Debug.Trace (trace)
+tracePlus a b = trace (show a ++ ""+"" ++ show b) (a+b)
+    -- This lets us try small integers to see how things get evaluated.
+    -- Those tests can thereby reveal the asymptotic behavior of the code, without 
+    -- needing to actually try bigger values.
+
+class Sum a where
+    one :: a
+    add :: a -> a -> a
+
+instance Sum Integer where
+    one = 1
+    add = tracePlus
+
+fibSums_list :: (Sum a) => [a]
+fibSums_list = one : one : zipWith add fibSums_list (tail fibSums_list)
+
+fibS :: Int -> Integer
+fibS = (fibSums_list !!)
+
+ +

I should note that this works fine if I compile it with ghc -O2. However, when run under ghci, evaluating fibS takes exponential time. Yet using a list of Fibonacci numbers of plain type [Integer] works fine as well.

+ +

So, one specific question I have is: is there a way to rewrite fibSums_list and/or fibS, such that it retains the use of the Sum type class, and is still clearly a generalization of the Fibonacci sequence, but that also evaluates efficiently in ghci? Where do I even start?

+ +

And I wonder if similar pitfalls await even in code compiled via ghc -O2. And if so, how do authors of Haskell code deal with those?

+ +

Another related question is When is it a good time to reason about performance in Haskell?. I think my question is an even more fundamental one; I don't even understand how to go about the task of such reasoning. There is a reasonable answer there, but it doesn't have enough specific information for me to actually go about writing a fibSums_list that works in ghci, let alone one that has any sort of guaranteed time complexity.

+",292734,,131624,,43108.24236,43418.61042,How does one reason about algorithmic complexity in Haskell?,,1,2,1,,,CC BY-SA 3.0,, +363632,1,,,1/8/2018 0:10,,0,263,"

I spent a lot of time trying to figure out what MSSQL provides to update an MSSQL schema without losing sync, but I find it hard to find a solution. +Has any of you faced the same problem? +I found two solutions, and both are not an option:

+ +
    +
  1. move to NoSQL DB

  2. +
  3. halt calls to DB until update finish.

  4. +
+ +

The simplest scenario is that you have either 1 DB or 2 DBs in sync.

+",104961,,222996,,43111.76181,43201.89653,How to update MSSQL DB schema during deployment in HA enviroment without any downtime,,2,6,,,,CC BY-SA 3.0,, +363641,1,363652,,1/8/2018 10:45,,1,83,"

I am working with 4 AWS EC2 instances (servers). Each instance has an Instance ID and an Instance Name; the Instance ID is unique. Each of them hosts multiple application servers; for example, one of them is running a PUMA server and another one is running NGINX, and so on. I want to store the running status of the instances and their application servers in a database.

+ +

For example, whether Instance A is running or not I can determine by continuously hitting the exposed port and storing the result in the database; I am scanning it each minute. There are 3 columns I can think of right now:

+ +
Instance ID    Running Status   Time
+
+Instance A        Running        10:10:04
+Instance A        Running        10:11:04
+Instance A        Running        10:12:04
+
+ +

Next, I want to store the status if Application Servers within Instance A running or not. There will be 3 columns I can think of right now,

+ +

Let's say PUMA and NGINX are running in Instance A

+ +

Instance A :-

+ +
Application Name   Running Status   Time
+
+PUMA                  Running       10:10:20
+NGINX                 Running       10:10:30
+PUMA                  Running       10:11:21
+NGINX                 Running       10:11:30
+
+ +

I am using PostgreSQL. What would be the recommended schema design? Should I create a table for each instance, and how would I map the applications to that particular instance? If I take Instance ID as a primary key, then duplicate values in that column are not possible.
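One conventional answer to the one-table-per-instance question is a single pair of status tables keyed by instance id: the id stays unique in the instance table, while the log tables allow repeated rows, and applications map to an instance through the foreign key. A sketch of that idea (SQLite here purely so the snippet is self-contained; the DDL translates directly to PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE instance (
        instance_id TEXT PRIMARY KEY,   -- the unique AWS instance id
        name        TEXT NOT NULL
    );
    CREATE TABLE instance_status (      -- one row per probe per minute
        instance_id TEXT NOT NULL REFERENCES instance(instance_id),
        is_running  INTEGER NOT NULL,
        checked_at  TEXT NOT NULL
    );
    CREATE TABLE app_status (           -- PUMA, NGINX, ... per instance
        instance_id TEXT NOT NULL REFERENCES instance(instance_id),
        app_name    TEXT NOT NULL,
        is_running  INTEGER NOT NULL,
        checked_at  TEXT NOT NULL
    );
''')
conn.execute('INSERT INTO instance VALUES (?, ?)', ('i-0abc', 'Instance A'))
conn.execute('INSERT INTO app_status VALUES (?, ?, ?, ?)',
             ('i-0abc', 'PUMA', 1, '10:10:20'))
```

All four instances then share the same tables, and per-instance history is just a WHERE instance_id = ? filter.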

+",292789,,292789,,43108.46181,43108.55764,Database Schema design of server running status log,,1,0,,,,CC BY-SA 3.0,, +363642,1,363650,,1/8/2018 10:47,,3,871,"

Given this code from the Symfony framework:

+ +
use Symfony\Component\HttpFoundation\Request;
+
+public function indexAction(Request $request)
+{
+    $request->isXmlHttpRequest(); // is it an Ajax request?
+
+    $request->getPreferredLanguage(array('en', 'fr'));
+
+    // retrieve GET and POST variables respectively
+    $request->query->get('page');
+    $request->request->get('page');
+
+    // retrieve SERVER variables
+    $request->server->get('HTTP_HOST');
+
+    // retrieves an instance of UploadedFile identified by foo
+    $request->files->get('foo');
+
+    // retrieve a COOKIE value
+    $request->cookies->get('PHPSESSID');
+
+    // retrieve an HTTP request header, with normalized, lowercase keys
+    $request->headers->get('host');
+    $request->headers->get('content_type');
+}
+
+ +

I think this way of accessing, for example, the GET and POST variables is nice. You call the get() method on the query object, which is part of the request object. I think the concept of method chaining is short and nice. However, I know the drawbacks of this tight coupling. Here, my controller claims too much knowledge of the methods of the query object. That is, if the query object changes its methods, I would need to change all these scripts. These drawbacks are manifested in the Law of Demeter.

+ +

So what is the question? My question is: when there is so much written about ""good practice"", how come such popular frameworks as Symfony decide against some of these rules? Or do I misinterpret the Law of Demeter? I get the impression that good-practice considerations sometimes depend, to a degree, on personal preference. Am I wrong?

+",292274,,1204,,43108.94583,43108.94583,Demeter's law vs method chaining: when to use which?,,2,7,,,,CC BY-SA 3.0,, +363645,1,,,1/8/2018 12:19,,0,80,"

I have multiple models (models in MVC). These models are injected into repositories, and the repositories are injected into controllers. I need to create an API for several endpoints. The responses for these endpoints will use repository services. A response will be similar to this:

+ +
{
+""user"": {
+    ""userId"": 1,
+    ""id"": 1,
+    ""name"": ""John"",
+    ""surname"": ""Doe""
+},
+""posts"": [{
+        ""id"": ""1"",
+        ""title"": ""Lorem"",
+        ""body"": ""test-2""
+    },
+],
+""products"": [{
+        ""id"": ""1"",
+        ""name"": ""productA""
+    },
+    {
+        ""id"": ""2"",
+        ""name"": ""productB""
+    }
+],
+""services"":[{
+        ""id"": ""1"",
+        ""name"": ""a""
+    },
+    {
+        ""id"": ""2"",
+        ""name"": ""b""
+    }
+],
+""settings"":[{
+        ""id"": ""1"",
+        ""name"": ""settingA"",
+        ""value"" ""valueA""
+    },
+    {
+        ""id"": ""2"",
+        ""name"": ""settingsB"",
+        ""value"" ""valueB""
+    }
+],
+}
+
+ +

Some of these keys (models + service responses) don't have a direct relation to each other. When I get models from the DB, I want to remove unnecessary columns (I can't hide them while querying because I need them while generating a response), I want to modify some column values, and I want to send some column values to some services, get responses from those services, and use them in the API response.

+ +

The question is: where should I do all of this (calling all these services and repositories, doing the mappings and some calculations, and generating the API response)?

+ +

If I do that in the controller, I will have a fat controller. +If I do that in the repository, then it will not be a repository anymore.

+ +

Should I create another service, inject all required services there, generate the response, and inject this new service into the controller?
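A common middle ground is exactly that: a thin application service (sometimes called an assembler or response builder) injected into the controller. It alone knows which repositories to call and how to map their output into the response shape. A hypothetical sketch in Python rather than PHP, with made-up repository names:

```python
class UserOverviewAssembler:
    def __init__(self, user_repo, post_repo, product_repo):
        self.user_repo = user_repo
        self.post_repo = post_repo
        self.product_repo = product_repo

    def assemble(self, user_id):
        user = self.user_repo.find(user_id)
        return {
            # Expose only the columns the API promises; internals stay hidden.
            'user': {'id': user['id'], 'name': user['name']},
            'posts': self.post_repo.for_user(user_id),
            'products': self.product_repo.for_user(user_id),
        }
```

The controller then shrinks to calling assemble() and serializing the result, and the repositories stay pure data access.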

+",270775,,,,,43108.62431,Where should I generate response for api?,,1,0,,,,CC BY-SA 3.0,, +363646,1,,,1/8/2018 12:23,,2,4076,"

I am looking for advice regarding best practices for building an HTTP PATCH request that can edit multiple entities in one request. I am asking because REST APIs still feel quite vague to me, and I don't have much experience building them.

+ +

My approach would be to create a request with this data:

+ +
[
+    {
+        id:<entity id to update>, 
+        patch: 
+        {
+            op: <patch operation>,
+            path: <field to update>,
+            value: <new value>
+        }
+   },
+   ...
+]
+
+ +
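Whatever the wire format ends up being, server-side handling stays simple if each entry carries the entity id plus a list of JSON-Patch-style operations (note that patch is arguably better as an array, since one entity may need several operations). Here is a sketch of applying such a batch; it supports only the replace op, and all names are invented for illustration:

```python
def apply_batch(entities, batch):
    # entities: {entity_id: dict}; batch: list of {'id': ..., 'patch': [...]}
    for item in batch:
        target = entities[item['id']]
        for op in item['patch']:
            if op['op'] != 'replace':
                raise ValueError('only the replace op is sketched here')
            # Strip the leading '/' from the JSON-Pointer-style path.
            target[op['path'].lstrip('/')] = op['value']
    return entities
```

A real implementation would also decide atomicity: whether one failing entry rolls back the whole batch or each entity is patched independently.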

What do you think and what would you recommend based on your experience?

+ +

thank you very much

+",227600,,,,,43108.74861,how to define a HTTP PATCH to edit multiple entities in one request,,2,0,1,,,CC BY-SA 3.0,, +363647,1,405764,,1/8/2018 12:23,,1,1327,"

I have a service that needs to make a callback. Basically, it is an event that is expected to be handled in exactly one place, and that is too important to be optional.

+ +

The obvious approach seems to be to inject an Action. In the context of dependency injection, is it considered good (or acceptable) practice to do so?

+ +

I'm also eager to hear why (not), or what alternatives you would consider.

+ +

One particular problem that comes to mind is in the following scenario:

+ +
    +
  • Parent's constructor takes an IChild.
  • +
  • Child's constructor takes an Action, the callback.
  • +
  • Parent has the method that is to receive the callback.
  • +
  • To instantiate the Child, we need the Parent's method, and thus the Parent instance. But to create that, we first need the Child instance. Problem.
  • +
+ +

One solution I can think of is to inject an IChildFactory instead. Parent's constructor can then use that factory to create the Child instance. At this point, Parent exists, and thus it can pass its callback method to the factory.

+ +

This solution seems to get the job done, but I'm curious about alternatives.
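The factory workaround can be shown compactly; in Python a class is itself callable, so it can stand in for the injected factory, and a plain callable stands in for the Action. The names here are illustrative:

```python
class Child:
    def __init__(self, callback):
        self.callback = callback     # the injected Action equivalent

    def work(self):
        self.callback('done')


class Parent:
    def __init__(self, child_factory):
        self.received = None
        # Parent already exists at this point, so its bound method can be
        # handed to the factory; the circular constructor dependency is gone.
        self.child = child_factory(self.on_callback)

    def on_callback(self, value):
        self.received = value
```

Constructing Parent(Child) and calling parent.child.work() then routes the callback back into the parent, which is the behavior the factory indirection buys.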

+",213637,,213637,,43109.39028,43887.39653,Callback injection,,3,9,,,,CC BY-SA 3.0,, +363651,1,363669,,1/8/2018 13:01,,-2,74,"

The broad question is: how are the licensing terms for a classifier that is trained with open annotated data (or manually annotated) data?

+ +
    +
  1. I am trying to train a dependency parser for German text with annotated data which is licensed under creative commons license (Attribution CC BY). To train the classifier I want to use machine learning tool which is licensed under the Apache license.

    + +

    Is it legally permissible to license the resulting classifier (my code and the model file) under a commercial license?

  2. +
  3. Suppose I scrape a text from the web, or alternatively I download a corpus collection that is licensed under Attribution CC BY, and I use an annotation tool which is open source under Apache license, and I train a classifier with an Apache machine learning software.
  4. +
+ +

Will it legally be permissible to license the resulting classifier under commercial terms?

+",292797,,,,,43108.71528,Licensing of classifier tool trained on open data,,1,0,,43108.74444,,CC BY-SA 3.0,, +363655,1,363678,,1/8/2018 13:52,,28,9659,"

During the development phase, there are certain values which need to be fixed within a single run, but which may need to be modified over time. For example, a boolean to signal debug mode, so we do things in the program we normally wouldn't.

+ +

Is it bad style to keep these values in a constant, i.e. final static int CONSTANT = 0 in Java? I know that a constant stays the same during run time, but is it also supposed to stay the same during the whole development, except for unplanned changes, of course?

+ +

I searched for similar questions, but did not find anything that matched mine exactly.

+",283636,,283636,,43110.34306,43110.46181,May a value of a constant be changed over time?,,7,14,2,,,CC BY-SA 3.0,, +363671,1,,,1/8/2018 17:30,,2,131,"

Default display of objects in the Windows PowerShell console

+ +

In PowerShell you have format.ps1xml documents. These are used to simplify what and how object data is shown to the end user. A simple default example would be Get-Process. If no changes are requested it will by default show the following property set..

+ +
Get-Process | Select -first 1
+
+Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id  SI ProcessName                  
+-------  ------    -----      ----- -----   ------     --  -- -----------                  
+    172      12   197960     208444   309 1,464.52 701036   1 7zG    
+
+ +

However, a System.Diagnostics.Process has many more properties associated with it (Get-Process | gm -MemberType Property). Also, the example above has NPM(K), which is not a real property. You can see how it is derived by looking at the following format file: DotNetTypes.format.ps1xml (a common location to find this would be: C:\Windows\System32\WindowsPowerShell\v1.0)

+ +
<View>
+    <Name>process</Name>
+    <ViewSelectedBy>
+        <TypeName>System.Diagnostics.Process</TypeName>
+    </ViewSelectedBy>
+    <TableControl>
+        <TableHeaders>
+            ...
+            <TableColumnHeader>
+                <Label>NPM(K)</Label>
+                <Width>7</Width>
+                <Alignment>right</Alignment>
+            </TableColumnHeader>
+            ...
+        </TableHeaders>
+        <TableRowEntries>
+            <TableRowEntry>
+                <TableColumnItems>
+                    ....
+                    <TableColumnItem>
+                        <ScriptBlock>[long]($_.NPM / 1024)</ScriptBlock>
+                    </TableColumnItem>
+                    ....
+        </TableRowEntries>
+    </TableControl>
+</View>
+
+ +

So that column is actually a calculated property based on NPM. Most of the common cmdlets do things like this as a favour to the end user (citation needed). The raw data is still there if you know how to find it but the default output formatting is supposed to just be a quick glance of the important / familiar information.

+ +

I could see some users getting confused trying to request a property that does not exist based on the table they got initially. (Get-Process).""NPM(K)"" would show nothing.

+ +

Custom format.ps1xml

+ +

I have a module that I am making that is based on scraping a local buy and sell site. A search listing will have multiple properties like information about the number of search results, associated URL of the search and the actual posts or listing associated with the search.

+ +

Trying my best to get to the heart of the problem: I have a method called .hasMorePages(). If there are more listings to view than are on this search page, this method returns true. This is not normally displayed in output, as it is a method and not a property. I can use a custom format.ps1xml file to make it show up. The following is just a snippet of the larger file.

+ +
.....
+<ListItem>
+    <Label>Has More Pages</Label>
+    <ScriptBlock>if($_.hasMorePages()){""Yes""}else{""No""}</ScriptBlock>
+</ListItem>
+....
+
+ +

So you would see something like this in the standard output for the search variable:

+ +
Requested URL              : .....obfuscated
+FirstListingResultIndex    : 1
+LastListingResultIndex     : 20
+TotalNumberOfSearchResults : 84
+Has More Pages             : Yes
+Listings                   : {Toys/Games for Sale, RARE HiltonHeadopoly MONOPOLY Game with 
+                             Hilton Head Landmarks, Clue and Monopoly (Halifax version), 
+                             Assorted Monopoly Games-Various Prices...}
+
+ +

So Has More Pages : Yes shows exactly how I defined it. I am worried that this violates some sort of ""Principle of least surprise"". If you tried to isolate that variable it would not work: $searchListing.""Has More Pages"" would be null, as that is not the property. So it looks nice in output, but that is not the real property. This is just an example I made for the purpose of writing this question. It could be mitigated by making an alias property of the same name pointing to hasMorePages(), but that would show a boolean and not the yes/no.

+ +

In the case of Get-Process I expect this to be true as I am familiar with how PowerShell does this and works. I would not expect the same for custom cmdlets.

+ +

As cool as custom formatting directives are, is there some sort of mantra I should try to follow so that I can keep things simple initially while not hiding or confusing users about where the real properties and data are?

+",292819,,,,,43329.63958,Custom object formatting caveats as to not confuse end users,,1,2,2,,,CC BY-SA 3.0,, +363674,1,363675,,1/8/2018 18:40,,12,374,"

I am creating a desktop application that persists the data in the cloud. One concern I have is a beginning to edit an item in the application and leaving it for a while causing the data to become stale. This can obviously also happen if 2 people try to edit the same item at the same time. When they finish up their editing and want to save the data I would either need to overwrite what currently exists in the database or check that they started editing after the last change and either force them to discard their changes or perhaps give them the option to risk overwriting someone else's changes.

+ +

I thought about adding the fields is_locked and lock_timestamp to the DB table. When a user begins editing the item, the row would have is_locked set to true and the lock timestamp set to the current time. The lock would then be held for some amount of time (e.g. 5 minutes). If anyone else tries to edit the item, they would receive a message saying the item is locked and when the lock automatically expires. If the user walks away while editing, the lock would automatically expire after a relatively short period of time, and once it does the user would be warned that the lock has expired and be forced to restart the edit after the data is refreshed.

+ +

Would this be a good method for preventing overwriting stale data? Is it overkill? (I don't expect the application to be used by more than a few people concurrently on a single account.)

+ +

(Another concern I have is 2 people getting a lock for the same item, however I believe that is a race condition I am comfortable with.)
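To make the idea concrete, here is a minimal sketch of the expiring-lock check described above (Python, with a dict standing in for the DB row; all names are made up for illustration). In a real database the check-and-set would need to happen atomically, e.g. inside a transaction or a conditional UPDATE, or the race condition mentioned above becomes more likely:

```python
from datetime import datetime, timedelta

LOCK_TTL = timedelta(minutes=5)  # how long an edit lock stays valid

def try_acquire_lock(row, now):
    """Try to take the edit lock on `row` (a dict standing in for the DB record).

    Returns True if the lock was taken, False if someone else holds a live lock.
    """
    if row["is_locked"] and now - row["lock_timestamp"] < LOCK_TTL:
        return False  # a live lock is held by someone else
    # lock is free, or the previous lock has expired
    row["is_locked"] = True
    row["lock_timestamp"] = now
    return True
```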

+",169308,,,,,43116.29583,Should I lock rows in my cloud DB while they're being edited by a user,,2,3,1,,,CC BY-SA 3.0,, +363679,1,,,1/8/2018 22:04,,2,172,"

We are using Git and want to add a server-side post-receive hook that posts events to our bug tracking system.

+ +

In our company, the Git server is managed by IT, and the server of the bug tracking system is managed by us, the developers. Because developers don't have access to IT's servers, if we make updates to the Git hook, we have to ask IT to upload the new revision. Someone suggested that, in order to avoid bothering IT, the hook, instead of running complex logic, should simply pass raw parameters to a middle layer managed by us; the middle layer would do the heavy lifting and complex logic, then post events to the API endpoint of the bug tracking system, as illustrated by the red flow in the figure.

+ +

+ +

Do you agree that this is a good strategy? I feel the middle layer adds maintenance cost; there should be some workaround for our lack of control over IT's server. How would you deal with this situation?

+ +

I don't know what proper tags are for this question. Feel free to add.

+",78245,,,,,43108.97778,Extra layer because of lack of control,,2,3,0,,,CC BY-SA 3.0,, +363682,1,,,1/8/2018 23:46,,-1,149,"

What algorithm can I use to describe a specified gradient (N/M) approximately as the sum of a set of rational fractions { (n1/m1) + (n2/m2) … } ?

+ +

Design constraints:

+ +
    +
  • The algorithm takes as input (N, M), describing the true gradient N/M.

    + +
      +
    • N and M are integers.
    • +
    • If it matters: M is typically around 100–1000.
    • +
    • If it matters: N ranges widely, from low (1, shallow gradient) all the way to arbitrarily large (quintillions, extremely steep gradient approaching vertical).
    • +
  • +
  • The algorithm produces as output some small set of tuples, { (n1, m1), (n2, m2), …}.

    + +
      +
    • The combination of tuples (n, m) will, when combined as fractions, closely approximate the gradient N/M.
    • +
    • The number of tuples should be small (I would expect fewer than 3).
    • +
    • Every n and m is an integer.
    • +
    • Every m is as small as can be, but no smaller than the minimum for M (e.g. 100).
    • +
  • +
+ +

Example

+ +
    +
  • Given the input (50001, 1000)

    + +
      +
    • the algorithm may generate the set { (5000, 100), (1, 1000) }
    • +
    • because (50001 / 1000) == ((5000/100) + (1/1000)).
    • +
    • The output is good because it's a small set, and the denominators are low while still being above the minimum.
    • +
  • +
  • Given the input (14, 1000)

    + +
      +
    • the algorithm may generate the set { (1, 100), (1, 250) }
    • +
    • because (14/1000) == (1/100) + (1/250).
    • +
    • The output is good because it's a small set, and the denominators are low while still being above the minimum.
    • +
  • +
  • Given the input (5.07e+30, 1000)

    + +
      +
    • the algorithm may generate the set { (5.07e+29, 100) }
    • +
    • because (5.07e+30/1000) == (5.07e+29/100).
    • +
    • The output is good because it's a small set, and the denominators are low while still being above the minimum.
    • +
  • +
+ +

I don't know for sure those are the best outputs; but they would satisfy the criteria.

+ +

Math formulae appreciated but I am not math-literate

+ +

My algebra is not strong enough to describe this generally. Likewise, I am not able to look at a description in mathematical language and know what algorithm it describes; nor am I able to tell whether it actually answers this question.

+ +

Thank you for references like

+ + + +

etc., but I can't translate that into pseudo-code for an algorithm. Please suggest some pseudo-code in an answer, so I can figure out whether it's doing what I described.
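Since pseudo-code was explicitly requested, here is one possible greedy sketch (Python). This is just one approach that happens to reproduce the three example outputs above, not necessarily the optimal decomposition. It only considers denominators that divide M, smallest first, so the sum stays exact:

```python
from fractions import Fraction

def decompose(N, M, m_min=100):
    """Greedily split N/M into a small list of (n, m) pairs with m >= m_min.

    Only divisors of M are used as denominators, so the running remainder
    always stays exactly representable and the loop terminates at m == M.
    """
    remaining = Fraction(N, M)
    parts = []
    denominators = [d for d in range(m_min, M + 1) if M % d == 0]
    for m in denominators:  # smallest allowed denominator first
        n = (remaining.numerator * m) // remaining.denominator  # floor(remaining * m)
        if n > 0:
            parts.append((n, m))
            remaining -= Fraction(n, m)
        if remaining == 0:
            break
    return parts
```

Running this on the three examples from the question gives exactly the sets listed there, e.g. decompose(50001, 1000) yields [(5000, 100), (1, 1000)]. A provably minimal set is a harder problem, related to Egyptian fraction decompositions.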

+",23421,,23421,,43110.99514,43142.70556,"Describe a slope (N/M), approximately as small number of fractions (n/m)",,2,16,0,,,CC BY-SA 3.0,, +363685,1,363691,,1/9/2018 3:31,,2,182,"

I am building a website with a single-page-app frontend that connects to a REST API on the backend. I also want others to be able to write programs for this website.

+ +

It seems like providing public access to the same API my frontend is built on would be useful. There would only be one API to maintain, and I would know every single feature of the website is available to 3rd parties.

+ +

Are there any cons to this approach?

+ +

One thing that comes to mind is that letting 3rd parties use the same API as my own frontend might hold back development, as new features and changes can't be released right away. Another is that I'd probably want a system that allows users to give permissions to a 3rd-party tool, such as allowing making posts but disallowing changing user settings. I'm not sure how well this would work while sharing the same API.

+",132856,,,,,43109.6625,Does it make sense for my website frontend to share the API as 3rd parties,,2,0,1,,,CC BY-SA 3.0,, +363693,1,363699,,1/9/2018 9:22,,2,602,"

I'm currently working with a window cleaning company that uses its own set of heuristics for scheduling its small set of cleaners for jobs: basically a huge spreadsheet with dates and human-assigned area codes. I've stupidly said that it can be done better.

+ +

Conditions:

+ +
    +
  • Jobs have a 'last cleaned' date and a regularity with which they are cleaned (monthly, every two months, ..., every six months). This gives a window of time in which they must next be cleaned (their scheduled date).
  • +
  • Jobs must be completed within 3 days +/- of their scheduled date.
  • +
  • Jobs have an associated cost of doing the work, which roughly translates to the time it takes to complete the job.
  • +
+ +

Overall the task is to minimise the time taken between jobs for multiple cleaners, whilst making sure that all jobs are eventually cleaned within the conditions above.

+ +

The company is already in operation, with customers scheduled to be cleaned. I would like to be able to use a method that slowly coaxes the scheduling of cleans towards optimal (i.e. taking jobs early/late so as to better fit an optimal schedule) as enforcing it from the start would disrupt their operation.

+ +

I've been looking at Google's Optimisation Tools for hints. It's similar to a Vehicle Routing Problem, but it's difficult to know which jobs should be chosen to be part of the route on a particular day. If the jobs are sorted by scheduled date and then popped from the queue the system will never get any better than it was before.

+ +

Any pointers towards further reading or proposed methods would be greatly appreciated!

+",292856,,,,,43109.56111,Finding a solution to a real world assignment/routing problem,,2,0,1,,,CC BY-SA 3.0,, +363696,1,363697,,1/9/2018 9:45,,10,3367,"

I have many core classes that require an ISessionContext for the database, an ILogManager for logging, and an IService used to communicate with other services. I want to use dependency injection for these dependencies, which are used by all core classes.

+ +

I have two possible implementations: the core class can accept an AmbientContext that wraps all three dependencies, or each class can have the three dependencies injected individually.

+ +
public interface ISessionContext 
+{
+    ...
+}
+
+public class MySessionContext: ISessionContext 
+{
+    ...
+}
+
+public interface ILogManager 
+{
+
+}
+
+public class MyLogManager: ILogManager 
+{
+    ...
+}
+
+public interface IService 
+{
+    ...
+}
+
+public class MyService: IService
+{
+    ...
+}
+
+ +

First solution:

+ +
public class AmbientContext
+{
+    private ISessionContext sessionContext;
+    private ILogManager logManager;
+    private IService service;
+
+    public AmbientContext(ISessionContext sessionContext, ILogManager logManager, IService service)
+    {
+        this.sessionContext = sessionContext;
+        this.logManager = logManager;
+        this.service = service;
+    }
+}
+
+
+public class MyCoreClass(AmbientContext ambientContext)
+{
+    ...
+}
+
+ +

Second solution (without AmbientContext):

+ +
public MyCoreClass(ISessionContext sessionContext, ILogManager logManager, IService service)
+{
+    ...
+}
+
+ +

Which is the best solution in this case?

+",251545,,,,,43115.82847,ambient context vs constructor injection,,4,1,2,,,CC BY-SA 3.0,, +363700,1,363725,,1/9/2018 10:42,,3,243,"

I am developing an application that can be simplified as follows:

+ +

The application is basically a mailing list.

+ +

Users browse to http://mysite/subscribe. It's on the intranet, and the site is using Windows Auth, so I can retrieve their email address from Active Directory. This email address is stored in an SQLite DB on the webserver.

+ +

Every hour, on the same server, a scheduled task runs a .exe console app. It reads the email addresses from the DB and sends a report by mail.

+ +

My question is:

+ +

Where should I save the DB file, in a way that is easily configured in both apps?

+ +

In the wwwroot/mysite folder? In the console app's folder? In some other folder dedicated to this that I don't know about?

+",280279,,280279,,43109.49653,43109.82639,Where should I store the SQLLite file when it is shared between a web and a desktop app?,,2,2,,,,CC BY-SA 3.0,, +363701,1,,,1/9/2018 11:08,,1,508,"

I was arguing with my colleague about how to implement a dashboard feature in a web site. Suppose a user can create a dashboard, which contains multiple gadgets. We plan to add more gadgets later on, and to allow users to customize each gadget further (such as the formula used to calculate the pie chart data) if the dashboard feature proves successful.

+ +

My question is, which approach is better in my situation in your view and why (besides the reasons I already provided)?

+ +

My colleague's approach

+ +

My colleague proposes to store data in the following schema.

+ +
CREATE TABLE dashboards (
+    id INT NOT NULL PRIMARY KEY,
+    name VARCHAR(50) NOT NULL
+);
+
+
+CREATE TABLE gadgets (
+    id INT NOT NULL PRIMARY KEY,
+    parent_id INT NULL,
+    dashboard_id INT NULL,
+    category VARCHAR(10) NOT NULL,
+    title VARCHAR(100) NULL,
+    db_view_name VARCHAR(50) NULL,
+    -- More columns omitted...
+);
+
+ +

When the website has 3 types of gadgets, and users have already created 2 dashboards with 2 gadgets in each dashboard, the database will store the following information.

+ +
-------------------------
+| Table dashboards      |
+-------------------------
+| id | name             |
+-------------------------
+|  1 | Dashboard Gender |
+|  2 | Dashboard Score  |
+-------------------------
+
+------------------------------------------------------------------------------------------
+| Table gadgets                                                                          |
+------------------------------------------------------------------------------------------
+| id | dashboard_id | parent_id | title             | category | db_view_name      | ... |
+------------------------------------------------------------------------------------------
+|  1 |       (null) |    (null) | Pie Chart Gadget  | CHART    | vw_student_gender | ... |
+|  2 |       (null) |    (null) | Line Chart Gadget | CHART    | vw_student_score  | ... |
+|  3 |       (null) |    (null) | Welcome Gadget    | HTML     | (null)            | ... |
+|  4 |            1 |         1 | My Pie Chart      | CHART    | vw_student_gender | ... |
+|  5 |            1 |         3 | My Welcome Text   | HTML     | (null)            | ... |
+|  6 |            2 |         3 | My Welcome Text   | HTML     | (null)            | ... |
+|  7 |            2 |         2 | My Line Chart     | CHART    | vw_student_score  | ... |
+------------------------------------------------------------------------------------------
+
+ +

My colleague thinks that with this approach:

+ +
    +
  1. When adding a new CHART category gadget, the developer only needs to insert a record in the database and create a table view; everything is generated automatically without any code change or web server restart, only a database-side change. The list of gadgets displayed to the user for selection is as simple as a SQL query with WHERE parent_id IS NULL.
  2. +
  3. Changing the default gadget title, or any other parameter such as the SQL used to retrieve data from the table view, only requires updating a database field.
  4. +
  5. It allows users to customize in great detail in the future, because almost everything related to a gadget is stored in the database; we don't need any change in the database schema to support customization.
  6. +
+ +

My approach

+ +

For me, I think the schema should like this:

+ +
CREATE TABLE dashboards (
+    id INT NOT NULL PRIMARY KEY,
+    name VARCHAR(50) NOT NULL
+);
+
+CREATE TABLE gadgets (
+    id INT NOT NULL PRIMARY KEY,
+    dashboard_id INT NOT NULL,
+    title VARCHAR(100) NULL,
+    -- More columns omitted...
+);
+
+ +

When the website has 3 types of gadgets, and users have already created 2 dashboards with 2 gadgets in each dashboard, the database will store the following information.

+ +
-------------------------
+| Table dashboards      |
+-------------------------
+| id | name             |
+-------------------------
+|  1 | Dashboard Gender |
+|  2 | Dashboard Score  |
+-------------------------
+
+---------------------------------------------
+| Table gadgets                             |
+---------------------------------------------
+| id | dashboard_id | title           | ... |
+---------------------------------------------
+|  4 |            1 | My Pie Chart    | ... |
+|  5 |            1 | My Welcome Text | ... |
+|  6 |            2 | My Welcome Text | ... |
+|  7 |            2 | My Line Chart   | ... |
+---------------------------------------------
+
+ +

I think that information such as the default gadget title, the table view and the SQL used to retrieve the source data should be put in source code, probably in several classes with inheritance, such as class Gadget, class Chart, class PieChart. In doing so:

+ +
    +
  1. We can avoid a lot of NULL values, inconsistent data and duplicated data in the database.
  2. +
  3. Tracking changes is easier because the information is stored in source code under version control.
  4. +
  5. The features of the gadgets can be more flexible, because they are not bound by the predefined categories and database fields used in my colleague's approach.
  6. +
+",237294,,,,,43109.55486,Source code related information should be stored in database or file?,,2,0,,,,CC BY-SA 3.0,, +363704,1,363706,,1/9/2018 13:01,,19,9885,"

So I have to do a project lasting about 10 days. About the work, let's just say I'm going to develop a website with a front end and a few interfaces between internal services. Now I have to use a project method, and I'm thinking of Scrum. But since I'm only one person, I'm asking whether it is possible to apply Scrum to this project.

+ +

My idea is that I take on the roles of Product Owner, Development Team and Scrum Master, and on that basis ""do"" the project.

+ +

So to list my question(s):

+ +
    +
  • Is this still considered ""Scrum""?
  • +
  • Is there any other project method I could use for this?
  • +
  • (Or) Should I build an ""own"" project method based on Scrum/Agile methodology?
  • +
+",292880,,,,,43110.81319,Can the Scrum method be used with only one person and only one 10 day Sprint?,,4,8,8,43110.63611,,CC BY-SA 3.0,, +363717,1,363744,,1/9/2018 17:45,,0,141,"

I use ""reference"" term here like in C++ world, not like in C# (for example). I use non-C++ syntax on purpose -- this is general question, not about this particular implementation.

+ +

Starting something like C++ afresh, I would like to define rules and validation that prevent a reference from outliving its source. For example, this looks like valid usage:

+ +
def foo(x ref int) ref int
+    return x;
+end
+
+ +

But this is wrong:

+ +
def bar() ref int
+    x int = 5;
+    return x;
+end
+
+ +

because in the bar example x is put on the stack, and by the time the reference to it is returned, that stack frame is already gone.

+ +

So far I haven't found a description of the analysis algorithm, which is why I am asking: what should be allowed (for example, defining parameters as references), and how do I check when usage is abused, creating dangling references?
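As a starting point, the simplest rule that rejects the bar example while allowing foo is purely syntactic: a returned reference may not name a stack local. A tiny Python sketch of just that rule (this is an illustration of the minimal check, not a full escape analysis; production compilers, e.g. Rust's borrow checker, go much further):

```python
def returned_ref_is_dangling(returned_name, ref_params, local_vars):
    """A returned reference dangles if it names a stack local rather than a
    reference parameter (whose storage outlives the current call frame)."""
    return returned_name in local_vars and returned_name not in ref_params
```

Under this rule, foo (returning its reference parameter x) passes, while bar (returning the local x) is rejected.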

+",66354,,,,,43110.55417,How to validate reference use?,,2,11,,,,CC BY-SA 3.0,, +363729,1,,,1/9/2018 23:24,,2,108,"

Motivation: (Skip to ""The Problem"" if you don't need motivation for it)

+ +

As a project for myself, I'm writing an expression parser for certain kinds of mathematical expressions, and I'm using the interpreter pattern (as it often is used) for my Expression classes.

+ +

At the same time, I'm also using the template method pattern for certain operations I wanna do on my Expressions. My use of the template method pattern is where the template method is ""public"", but the primitive operations called by the template method should not be ""public"" to the user. (This is how I learnt it, and this feels like it makes the most sense for my program.)

+ +

The specific example I was dealing with was trying to simplify an expression; there are several different simplification functions (let us call them simplify1, simplify2, and so on) I used, and each are called inside the Expression class (for example, one of my simplification functions evaluates all constant expressions, e.g. 1+2^(2*3) simplifies to 65, another simplification function simplifies repeated operations, eg. x+(3+x^2) simplifies to x+3+x^2). The way this works in each of the Expression subclasses which are non-terminal is that there are recursive calls to the contained expressions, until a terminal ""Constant"" or ""Variable"" is reached.

+ +

Now, the problem is that these different simplification functions should be hidden from the user; the interface should only allow a call to the template method ""simplify"", which calls all the different simplification functions as needed. Initially, I thought that making those simplify functions ""protected"" would solve the problem. That is, in my child classes, I would have ""simplify1"", ""simplify2"" and so on overridden, and then in the parent expression class, ""simplify"" would call ""simplify1"", then ""simplify2"", and so on, and ""simplify"" itself would be public. However, the problem is that in non-terminal classes (an expression that itself contains expressions), having a protected modifier on simplify1, simplify2, and so on, does not allow it to call simplify1, simplify2, etc. on the contained expressions.

+ +

The Problem:

+ +

Abstracting away the motivation (""Expression"" becomes ""Parent"", the different derived expression classes are the ""Child"" classes (""ChildC"" specifically is a non-terminal expression), ""simplify"" becomes ""publicFoo"", and ""simplify1"" and ""simplify2"" become ""foo1"" and ""foo2"" respectively), I want code that behaves like this:

+ +
class Parent { //abstract expression
+protected:
+    virtual void foo1() = 0; //primitive operation, don't wanna give access to public
+    virtual void foo2() = 0; //primitive operation, don't wanna give access to public
+
+public:
+    void publicFoo(){ //template method
+        this -> foo1();
+        this -> foo2();
+        //maybe does other things too
+    }; // this is what the user will call
+
+};
+
+class ChildA: public Parent { //terminal expression
+protected:
+    void foo1() override; //whatever implementation
+    void foo2() override; //whatever implementation
+
+};
+
+class ChildB: public Parent{ //terminal expression 
+protected:
+    void foo1() override; //whatever implementation
+    void foo2() override; //whatever implementation
+
+};
+
+class ChildC: public Parent { //non terminal expression
+   std::vector<Parent*> parents;
+protected:
+    void foo1() override {
+        for ( auto p: parents){
+            p->foo1(); //calls correct foo1 depending on what p truly is (this does not compile)
+        }
+    }
+    void foo2() override {
+        for ( auto p: parents){
+            p->foo2(); //calls correct foo2 depending on what p truly is (this does not compile)
+        }
+    }
+};
+
+ +

So, the problem here is that ""protected"" actually doesn't allow calling foo1 or foo2 on arbitrary Parent type objects, only on ChildC objects, and the compiler gives me errors saying ""'foo1' is a protected member of 'Parent'"" and ""'foo2' is a protected member of 'Parent'"". It feels like what I want is a modifier that's something like ""semi-protected"", which allows access for a derived class to call semi-protected functions of any kind of parent object.

+ +

Temporary Solution:

+ +

My current solution to this is to simply make the ChildC (in the example above) a friend class to Parent, and then make foo1 and foo2 private (because ChildC, as a friend, can access those anyway). However, in my time using C++, I've learnt that friend classes are usually discouraged and are considered bad style (or something of the sort). And in general, the solution doesn't feel as nice as it could be, because I don't need ChildC to have that much access to Parent's members.

+ +

Question:

+ +

Thus, my question is, is there a nicer way to combine these patterns while ensuring that the primitive operations with the template method pattern remain hidden to the user?

+",292816,,292816,,43110.01944,43110.01944,Access modifiers in combination of interpreter pattern with template method pattern,,0,5,,,,CC BY-SA 3.0,, +363730,1,363731,,1/9/2018 23:35,,2,411,"

Imagine I have a set of houses I want to sell and I want to present them on a website. The user should be able to filter for the house they want by price, city, number of floors, area, etc. However, I don't want it to be like this:

+ +

First select price only, then you can select city and only after that can you select the number of floors etc.

+ +

I want the user to be able to pick the order of attributes he wants.

+ +

For each iteration (attribute selection), the set of remaining attributes will have a limited range of values depending on the previous selections, and so on.

+ +

I have seen this implemented on some sales sites, but I don't know how it is done, specifically with regard to the data structure.

+ +

It doesn't look easy to add new houses for example. It feels like there is a complicated combination of binary trees and linked lists but there is probably a far better way that I haven't figured out.

+",175281,,73508,,43110.39722,43110.40069,What data structure is this?,,1,9,,,,CC BY-SA 3.0,, +363739,1,363746,,1/10/2018 9:07,,39,10940,"

According to Is it wrong to use a boolean parameter to determine behavior?, I know the importance of avoiding boolean parameters that determine behaviour, e.g.:

+ +

original version

+ +
public void setState(boolean flag){
+    if(flag){
+        a();
+    }else{
+        b();
+    }
+    c();
+}
+
+ +

new version:

+ +
public void setStateTrue(){
+    a();
+    c();
+}
+
+public void setStateFalse(){
+    b();
+    c();
+}
+
+ +

But what about the case where the boolean parameter is used to determine values instead of behaviours? E.g.:

+ +
public void setHint(boolean isHintOn){
+    this.layer1.visible=isHintOn;
+    this.layer2.visible=!isHintOn;
+    this.layer3.visible=isHintOn;
+}
+
+ +

I'm trying to eliminate the isHintOn flag and create 2 separate functions:

+ +
public void setHintOn(){
+    this.layer1.visible=true;
+    this.layer2.visible=false;
+    this.layer3.visible=true;
+}
+
+public void setHintOff(){
+    this.layer1.visible=false;
+    this.layer2.visible=true;
+    this.layer3.visible=false;
+}
+
+ +

but the modified version seems less maintainable because:

+ +
    +
  1. it has more code than the original version

  2. +
  3. it cannot clearly show that the visibility of layer2 is opposite to the hint option

  4. +
  5. when a new layer (eg:layer4) is added, I need to add

    + +
    this.layer4.visible=false;
    +
    + +

    and

    + +
    this.layer4.visible=true;  
    +
    + +

    into setHintOn() and setHintOff() separately

  6. +
+ +

So my question is: if the boolean parameter is used to determine values only, not behaviours (e.g. no if-else on that parameter), is it still recommended to eliminate that boolean parameter?
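One middle ground worth noting: keep the single parameter, but move the value mapping into one table-like place, so the ""layer2 is the opposite"" relationship stays explicit and a new layer is added in exactly one spot. A Python sketch of the idea (layer names taken from the question; this is an illustration, not a prescription):

```python
def hint_visibility(hint_on):
    """Visibility of each layer as a function of the hint state.

    Adding layer4 means adding one entry here, not editing two methods.
    """
    return {
        "layer1": hint_on,
        "layer2": not hint_on,  # the inverse relationship is visible at a glance
        "layer3": hint_on,
    }
```

The caller would then iterate the returned mapping and assign each layer's visible flag, keeping a single setHint entry point.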

+",248528,,248528,,43110.39097,43115.51667,Is it wrong to use a boolean parameter to determine values?,,13,13,8,,,CC BY-SA 3.0,, +363743,1,363766,,1/10/2018 10:22,,0,274,"

I have found many posts about how to organize work on the develop branch and up to the release branch, or even how to work without the develop branch: The trend of the "develop" branch going away; GitFlow branching strategy, but without the develop branch; To branch or not to branch?... But I have a problem with how to organize work before the develop branch.

+ +

If we create a new branch for every task (ticket) and push it into the common develop branch, it works OK for smaller projects.

+ +

But I have seen larger projects where a more complicated scheme was used: for connected tasks, some medium-level branches were created, and ticket branches were later pushed not to the develop branch, but to the appropriate medium-level branch. And I understand why: if the project is so complicated that more than one person works on the same theme, these developers start to have problems with changes made by others on the develop branch while they wait for reactions and approvals of their pull requests, and while working on the resulting fixes they have to resolve newly appearing conflicts again and again. I thought that the medium-level branch could be temporarily locked, so that all participants could push changes into it in turn, which would practically prevent the conflicts. But I have too little experience with organizing large repositories, and I am not sure at all.

+ +

In the description of the GitFlow strategy, http://datasift.github.io/gitflow/IntroducingGitFlow.html, the drawings show this very two-storey scheme, but this part and its practical use are not explained there at all.

+ +

The question is: is the scheme of two levels of task branches a necessary and sufficient solution for the problem? And how should it be used to be that solution?

+ +

Edit: I am not talking about long-lived branches created for particular departments. I understand that those are ineffective. Imagine that we both have to implement functionality that touches the same several classes. Do we need to resolve our code conflicts on the develop branch? And if we do it on a common task branch, then we have exactly what I am talking about: separate local branches and a common thematic branch in the repository.

+",44104,,44104,,43110.4875,43110.69028,The two-storey branching scheme **before** the develop branch,,2,0,,,,CC BY-SA 3.0,, +363759,1,,,1/10/2018 14:41,,3,824,"

I'm having a hard time decoupling two classes.

+ +

I have my code-behind (I will call it ""class A"") that I use to tweak the interface (defined in XAML).

+ +

Next I have a class B that is only logic. But while the logic is executing I have to update the UI.

+ +

My problem is that I can't ""return"" from class B back to A to update the UI, because B has not finished working. And I can't hand the view itself to B to modify, because that would couple A and B.

+ +

I suppose I have to use some interface-based approach, but I don't know how.

+ +

For example :

+ +
class A
+{
+     private void OnClickEvent()
+     {
+         var b = new B();
+         b.work();
+     }
+
+     private void UpdateUI()
+     {
+        ...
+     }
+}
+
+
+class B
+{
+    public void work()
+    {
+        while (...)
+        {
+             ...
+             //Here, how to call A.UpdateUI() ?
+             ...
+        }
+    }
+}
+
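A common way out is to invert the dependency: B never knows A, it only knows a callback (or interface/event) that A supplies. A language-neutral sketch of the shape in Python (in C#/WPF this would typically be an event, a delegate passed into B, or IProgress&lt;T&gt;; the names here are illustrative):

```python
class Worker:                      # plays the role of class B: pure logic
    def __init__(self, on_progress):
        # on_progress is any callable; B has no idea it updates a UI
        self._on_progress = on_progress

    def work(self):
        for step in range(3):      # stand-in for the while loop
            self._on_progress(step)

class View:                        # plays the role of class A (the code-behind)
    def __init__(self):
        self.updates = []

    def on_click(self):
        Worker(self.update_ui).work()   # A hands B a reference to its own method

    def update_ui(self, step):
        self.updates.append(step)       # stand-in for real UI work
```

B stays UI-free and testable; A decides what ""update the UI"" means.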
+ +

Thanks !

+",291483,,,,,43115.68958,How to decouple code behind with business logic?,,3,12,,,,CC BY-SA 3.0,, +363763,1,363835,,1/10/2018 15:50,,0,264,"

I'm trying to get into TDD, and a lot of examples sugests that we should use stubs to make our code more flexible. If I'm using javascript (for example) then why should I use stubs, since methods and even whole objects can be easily replaced for a mock?

+ +

Should I favor this:

+ +
const fetcher = require('some-async-lib');
+
+function getData(url){
+    return fetcher(url)
+}
+
+ +

Or this

+ +
function getData(url, fetcher){
+    return fetcher(url)
+}
+
+ +

Imagine that I have some methods that use this fetcher, am I not polluting the code passing the fetcher everywhere?

+",286744,,,,,43111.71667,When to use stubs,,1,3,,,,CC BY-SA 3.0,, +363769,1,363783,,1/10/2018 17:48,,2,425,"

Assuming I have a paginated API endpoint:

+ +
GET /api/meetings
+
+ +

My client consumes this API endpoint and displays the results in an infinite scroll that, underneath, fetches page by page as the user scrolls down.

+ +

Now I want to add real-time capabilities to the API. New meetings can come in, and existing meetings can change (for example, a meeting gets canceled).

+ +

Ideally, I want the UI for anything I pulled so far to change if something in the meeting has changed.

+ +

Is this possible to pull off in a performant way? What changes (high level, of course) do you think need to be made to handle this?

+ +

Thanks.

+",84777,,84777,,43110.80903,43111.41389,How to detect and handle changes in a realtime api,,2,5,,,,CC BY-SA 3.0,, +363776,1,,,1/10/2018 19:40,,4,3408,"

Say I have a Business Object like this:

+ +
public class CustomerBusiness
+{
+  Guid ID;
+  decimal Salary;
+  DateTime DateOfBirth;
+}
+
+ +

and a data object like this:

+ +
public class CustomerData
+    {
+      Guid ID;
+      decimal Salary;
+      DateTime DateOfBirth;
+      string Name;
+    }
+
+ +

Say I know the Name and Address when I map the Business Object to the Data Object.

+ +
CustomerBusiness customerBusiness = new CustomerBusiness();
+customerBusiness.Id = Guid.NewGuid();
+customerBusiness.Salary = 30000M;
+customerBusiness.DateOfBirth = new DateTime(1960,01,01);
+
+string name = ""Bert"";
+
+ +

I believe I have three options:

+ +

1) Change the CustomerBusiness object to include the name and address members.

+ +

2) Do this in the application:

+ +
CustomerData customerData = AutoMapper.Mapper.Map<CustomerData>(customerBusiness);
+customerData.Name = AutoMapper.Mapper.Map<string>(name);
+
+ +

3) Do this:

+ +
string name = ""Bert"";
+CustomerData customerData = AutoMapper.Mapper.Map<CustomerData>(customerBusiness);
+customerData.Name = name;
+
+ +

Which option is more appropriate? Is there another option I have not considered?

+ +

I realise this may sound a bit pedantic as all options work, however I am trying to follow the principle of least astonishment.

+",65549,,65549,,43110.83056,43118.15972,Can I map types where the source has fewer fields than the destination?,,1,5,1,,,CC BY-SA 3.0,, +363778,1,,,1/10/2018 19:51,,7,24681,"

I have an application that receives a number of values that need to be applied to various properties of an object (sealed class). Originally I just set the value without checking anything and updated the object, but of course sometimes the new values wouldn't be valid, and other times they would be identical to existing values, so running an update was a waste of system resources, and frankly a rather slow process.

+ +

To get around this, I've created a private bool that is set to true if/when a new value is applied, which is determined with a method that checks if the value is different (and valid):

+ +
private bool updated = false;
+private bool updateValue(string property, string value, bool allowNulls = false)
+{
+    if ((allowNulls || !string.IsNullOrEmpty(value)) && !property.Equals(value)) { updated = true;  return true; }
+    return false;
+}
+
+ +

Then, whenever I need to set the value of a property to something new, I have a single If Statement:

+ +
if (updateValue(obj.property, newValue))
+{ obj.property = newValue; }
+
+ +

And then of course, when all properties have been assigned their new values:

+ +
if (updated)
+{ obj.commitChanges(); }
+
+ +

So, 1) Is there a better way of doing this, and 2) is there a more concise way to run my If statement? Single-line If clauses like this annoy me; it seems like I should be able to say ""Set A to B when C"".
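One way to collapse the single-line if-statements is to fold the check, the assignment, and the dirty-flagging into one call, so the call site reads as a single statement. A minimal Python sketch of the idea (names hypothetical):

```python
# Sketch: one helper that assigns only when the value is valid and
# different, and records whether anything changed at all.
class Tracked:
    def __init__(self, **values):
        self.__dict__['_values'] = dict(values)
        self.__dict__['dirty'] = False

    def set(self, name, value, allow_none=False):
        # Assign only when the value is valid and actually different.
        if (allow_none or value is not None) and self._values.get(name) != value:
            self._values[name] = value
            self.__dict__['dirty'] = True

    def __getattr__(self, name):
        return self._values[name]

obj = Tracked(city='Oslo')
obj.set('city', 'Oslo')    # unchanged: stays clean
obj.set('city', 'Bergen')  # changed: marks dirty
```

At commit time a single `if obj.dirty:` replaces the per-property checks.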

+",293030,,293030,,43110.85,43112.85486,What's the best way to handle updates only when a value has changed?,,4,11,2,,,CC BY-SA 3.0,, +363781,1,363782,,1/10/2018 21:05,,2,117,"

When you are using frameworks like Angular, Angular2+, and React, the way you put data in the UI is by binding a property to an attribute of a UI element.

+ +

On the other hand, when you're doing standard iOS dev or traditional JS/JQuery dev, you extract a UI element reference, and do what you need with that reference.

+ +

My question is, why did all these component-based JS MVC frameworks go along with the data binding approach? Is there a compatibility issue between the component architecture and the UI-element-reference approach?

+",89944,,,,,43111.72153,Why JS MVC frameworks prefer data binding to UI element reference?,,1,0,,,,CC BY-SA 3.0,, +363787,1,363797,,1/10/2018 22:58,,1,1783,"

I am looking into Event Sourcing (ES) and having a play around with some code Greg Young put together (Greg Young Git Repo).

+ +

I like what ES offers in terms of functionality, but I am trying to apply that to my knowledge of a specific problem domain. Most applications I have worked on are all based around storing all data as state. Typically CRUD applications and a lot of DDD of late - pretty much in a SQL Server backend.

+ +

I see a lot of examples around Purchase Orders and Order Items for ES, which are pretty simplistic examples - although in my Purchase Order world we have far more complex domains - where there are all kinds of rules. Whilst these are a great starting point, I would like to broaden my horizon on slightly more complex domain scenarios.

+ +

So to test out ES in my hypothetical problem domain, suppose I have the following simplified scenario where:

+ +

Aggregate 1 - Business Configuration

+ +

Aggregate 1 is configured as a Group/Item relationship, for example:

+ +
Group 1 (Aggregate1Group)
+    Item 1 (Aggregate1Item)
+    Item 2 (Aggregate1Item)
+Group 2 (Aggregate1Group)
+    Item 3 (Aggregate1Item)
+    Item 4 (Aggregate1Item)
+
+ +

This is typically a one-off task the user would run through, but nothing ever stays that way!

+ +

Aggregate 2 - Business Rule Set

+ +

Aggregate2 is pretty much a standalone set of rules: user-configured business rule ranges which are used within Aggregate3, but require a ""reference"" to an Aggregate1 Group.

+ +
BusinessRule1 (BusinessRule)
+    Min: 0
+    Max: 100
+    Aggregate1Group: Group 1
+
+BusinessRule2 (BusinessRule)
+    Min: 200
+    Max: 300
+    Aggregate1Group: Group 2
+
+ +

Aggregate 3 - Enforcing Rules

+ +

Aggregate3 requires the user to pick an item and select a value, which would be implemented with the following method:

+ +
void Aggregate3.BookThisIn(int value, Aggregate1Item item, string someIrrevantInfo)
+{ 
+    bool allowed = BusinessRules.Any(b => value >= b.Min && value <= b.Max && b.Aggregate1Group.Contains(item));
+
+    if (allowed)
+    {
+        IrrelevantInfo.Add(someIrrevantInfo);
+
+        // raise events
+        Events.Add(new IrrelevantInfoAddedEvent(...));
+    }
+}
+
+ +

So basically, if the value is between the Business Rule Min and Max, and the specified item is within that Business Rule group, then we are allowed to log the irrelevant info.

+ +

Hypothetical scenarios and questions

+ +

So lets suppose the following events have happened:

+ +
    +
  1. User created Groups and Items as per Aggregate 1 - Business Configuration
  2. User created Business Rules as per Aggregate 2 - Business Rule Set
  3. User called BookThisIn with a value of 50, an Aggregate1Item of Item 1, and someIrrevantInfo of ""Some Test""
+ +

All domain data is in a valid state so far... Or is it?

+ +

Now the user decides they got the configuration wrong. Aggregate1Item ""Item 1"" actually belongs to Aggregate1Group ""Group 2"". Given that, the data entered in step 3 is now in fact incorrect.

+ +

Potentially, the data could be determined to be historically correct in other cases, where all current data is deemed correct at the point in time events occur.

+ +

So in the case of a state based SQL database, you could look at any entities stored in the IrrelevantInfo list (that would map onto a table - say IrrelevantInfos) and you could make the user resolve the issues within a UI.

+ +

But in an event-based system, how would you play out this invalidation scenario, and how could you provide the information to the user so they can resolve it? What patterns does ES offer to overcome these hurdles?
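A common ES answer here is that the reconfiguration is itself just another event (history is never edited), and a projection replays the stream against the corrected configuration to flag past bookings the user must resolve. A minimal sketch, with hypothetical event names:

```python
# Sketch: replay events, then re-validate past bookings against the
# corrected configuration to surface the items a user must resolve.
def project(events):
    groups, bookings = {}, []
    for e in events:
        if e['type'] == 'ItemAssignedToGroup':   # includes later corrections
            groups[e['item']] = e['group']
        elif e['type'] == 'BookedIn':
            bookings.append(e)
    return groups, bookings

def invalid_bookings(events, rules):
    # rules maps group -> (min, max); flag bookings that no longer pass.
    groups, bookings = project(events)
    bad = []
    for b in bookings:
        lo, hi = rules[groups[b['item']]]
        if not (lo <= b['value'] <= hi):
            bad.append(b)
    return bad

events = [
    {'type': 'ItemAssignedToGroup', 'item': 'Item 1', 'group': 'Group 1'},
    {'type': 'BookedIn', 'item': 'Item 1', 'value': 50},
    {'type': 'ItemAssignedToGroup', 'item': 'Item 1', 'group': 'Group 2'},  # the correction
]
rules = {'Group 1': (0, 100), 'Group 2': (200, 300)}
flagged = invalid_bookings(events, rules)
```

The flagged bookings would feed a read model that the UI shows as a to-resolve list; resolving each one appends further compensating events rather than mutating stored state.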

+",153096,,,,,43111.29097,Event Sourcing and cross Aggregate validation,,1,0,1,,,CC BY-SA 3.0,, +363788,1,,,1/10/2018 23:34,,4,124,"

At work I'm dealing with a situation where we have a large amount of time series data and need to display sections of it to the user at a time. The data essentially has an infinite number of records, and so it's not possible for the client to load the entire dataset at once. However, the API call to request sections of data is slow/expensive, so I want to cache already-loaded data clientside and not have to re-request it.

+ +

An analogy would be when you watch a video online and skip forwards and backwards. The player downloads fragments of the video based on what the user is currently trying to watch and stores them in case the user watches that segment again.

+ +

There are a few differences between my use case and the video example though:

+ +
    +
  • My data set is sparse. There may be regions of several weeks with no data points in them. I need to differentiate between ""no data"" and ""not loaded"".
  • My data set doesn't have discrete segments. In an HLS or DASH video the stream is split into segments (usually 10 seconds long) which provide discrete intervals where the loading should take place. My data can be loaded between any two points in time, and as the user can zoom in or out of the data the distance between these points may not even be the same.
  • My data set is unbounded. In a video, there's a clear start and end to the video. In my data set, you can go forwards and backwards theoretically infinitely. (Although in practice the length is limited by what dates can be stored in our backend, it's still a very large amount of time)
+ +

I'm self taught in programming and I don't know the name of this concept, but I feel there must be one. I'm capable of implementing what I need myself, but I'm hoping to avoid reinventing the wheel.
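This is often described as interval caching: tracking which ranges of the timeline are resident, as distinct from ranges that are genuinely empty. A minimal sketch of the bookkeeping, assuming half-open [start, end) ranges:

```python
# Sketch of an interval cache: remember which [start, end) ranges are
# already loaded and report only the gaps that still need an API call.
class IntervalCache:
    def __init__(self):
        self.loaded = []  # sorted, non-overlapping (start, end) pairs

    def mark_loaded(self, start, end):
        intervals = sorted(self.loaded + [(start, end)])
        merged = [intervals[0]]
        for s, e in intervals[1:]:
            last_s, last_e = merged[-1]
            if s <= last_e:                     # touching/overlapping: merge
                merged[-1] = (last_s, max(last_e, e))
            else:
                merged.append((s, e))
        self.loaded = merged

    def missing(self, start, end):
        gaps, cursor = [], start
        for s, e in self.loaded:
            if e <= cursor or s >= end:
                continue
            if s > cursor:
                gaps.append((cursor, s))
            cursor = max(cursor, e)
        if cursor < end:
            gaps.append((cursor, end))
        return gaps

cache = IntervalCache()
cache.mark_loaded(10, 20)
cache.mark_loaded(30, 40)
gaps = cache.missing(0, 50)   # the ranges still to fetch
```

This maps directly onto the video analogy: call mark_loaded after every fetch, and missing tells you exactly which API calls are still needed for the current viewport. Because only loaded ranges are recorded, sparseness is free: an empty-but-loaded range is simply a loaded range with no points in it.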

+",82736,,,,,43111.35417,Is there a name for a data structure/concept where certain regions of a data set are loaded?,,2,1,0,,,CC BY-SA 3.0,, +363793,1,,,1/11/2018 1:35,,11,651,"

I was reading this paper about the differences between software development in general and game development and the authors made some good points regarding software testing, pointing out, for instance, that

+ +
+

...game developers are hesitant to use automated testing because of + these tests' rapid obsolescence in the face of shifting creative + desires of game designers.

+
+ +

So, this reading made me think: what other aspects of software testing should we consider as different or particular when we are dealing with/testing a game? Does anyone have experience with this, or has anyone heard anything else about it?

+",293045,,110531,,43111.53472,43131.20556,Is software testing different when we are dealing with game development?,,2,6,1,,,CC BY-SA 3.0,, +363799,1,363804,,1/11/2018 7:42,,4,584,"

In this article Mark Seemann explains how Onion/Hexagonal Architecture (or Ports and Adapters) are somewhat similar to a layered architecture when the Dependency Inversion Principle (DIP) is applied. Especially if you consider the claims made in this article to hold water, I think it's all quite clear and straight-forward.

+ +

Anyway there is one quote about Ports and Adapters that made me think about the way that I structured my classes in the past

+ +
+

The components in the inner hexagon have few or no dependencies on each other, while components in the outer hexagon act as Adapters between the inner components, and the application boundaries: its ports.

+
+ +

Given I'd like to implement some app business logic, called App hereinafter (not a very meaningful name, anyway), which would allow us to display a list of filtered employees. Displaying a list of employees would be provided by a port

+ +
public interface IEmployeeListProvider
+{
+    EmployeeList GetEmployees();
+}
+
+ +

and persistence would be another port

+ +
public interface IEmployeeRepository
+{
+    IEnumerable<Employee> GetAllEmployees();
+    void AddEmployee(Employee employeeToAdd);
+    void UpdateEmployee(Employee employeeToUpdate);
+    // further method signatures
+}
+
+ +

Now I would implement my business logic

+ +
class App : IEmployeeListProvider
+{
+    // most likely the filters or filter conditions would be injected.
+    // and the IEmployeeRepository anyway
+
+    public EmployeeList GetEmployees()
+    {
+        var employees = employeeRepository.GetAllEmployees();
+        var filteredEmployees = FilterEmployees(employees);
+        return EmployeeList.FromEnumerable(filteredEmployees);
+    }
+
+    private IEnumerable<Employee> FilterEmployees(IEnumerable<Employee> employees)
+    {
+        // elided
+    }
+}
+
+ +

Basically this is how I understood Ports and Adapters as proposed by Alistair Cockburn. Anyway, this implementation somehow contradicts the Mark Seemann quote (see above), since App depends both on the IEmployeeRepository and the IEmployeeListProvider ports. Of course it would be possible to restructure the design to use a filter port

+ +
public interface IEmployeeFilter
+{
+    IEnumerable<Employee> FilterEmployees(IEnumerable<Employee> employees);
+}
+
+ +

and do something like this from the UI

+ +
IEmployeeFilter filter = ...; // however this is constructed
+IEmployeeRepository repository = ...; 
+
+// ...
+
+var employees = filter.FilterEmployees(repository.GetAllEmployees());
+
+ +

but this feels wrong to me for several reasons:

+ +
    +
  • The UI would depend on a DAL port
  • We are potentially shifting logic to the UI code
  • It's quite likely that UI will become a ""dependency hog""
+ +

Did I get the whole quote of Mark Seemann wrong, or is there any other part that I got fundamentally wrong?

+",143358,,,,,43143.91667,Dependencies within the inner hexagon of Ports and Adapters,,2,0,2,,,CC BY-SA 3.0,, +363805,1,363818,,1/11/2018 9:52,,2,298,"

For example, let's assume we have a class that is imported three times from other classes. This would lead to a reusability of three. However, as there might be functions that are called only once, it would make more sense to count the called functions.

+ +

One general suggestion for measuring code reusability is to answer the following question: ""How many functions are called from more than one place?""

+ +

What would be the best way to realise the solution, e.g. counting the functions that are called in other parts of the code with Python?
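One concrete starting point is the standard-library ast module: parse the source, count call sites per function name, and treat functions called from more than one place as reused. This sketch resolves calls by bare name only (no imports, methods, or aliasing), which is a deliberate simplification:

```python
# Sketch: count call sites per function name with the stdlib ast module.
# Calls are matched by bare name only -- a simplification that ignores
# methods, imports, and aliasing.
import ast
from collections import Counter

def call_counts(source):
    tree = ast.parse(source)
    counts = Counter()
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            counts[node.func.id] += 1
    return counts

code = '''
def helper(): pass

def main():
    helper()
    helper()
'''
counts = call_counts(code)
reused = [name for name, n in counts.items() if n > 1]
```

Running this over every module in a project (and keying on module-qualified names instead of bare names) would give the per-function reuse figure the question asks about, rather than the coarser per-class import count.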

+",293072,,110531,,43111.64375,43111.72986,How can the reusability of Python code be measured and quantified?,,1,6,,,,CC BY-SA 3.0,, +363806,1,,,1/11/2018 10:35,,-2,77,"

Currently working with a client and their lawyers to launch a web application.

+ +

The website will be hosted in AWS on Debian Linux instances.

+ +

Debian contains software released under (among others) the GNU General Public License, which in my understanding is a copyleft license.

+ +

We are being asked by lawyers, given this is a Copyleft license and we are hosting code running on these servers, does this mean our code must also use Copyleft and therefore be open-sourced?

+ +

I am not looking for strict legal advice but opinions or places where I can read further into this, so far haven't been able to find any good resources.

+ +

Thank you in advance

+",293077,,,,,43111.44375,GNU license and hosting source code,,1,3,,,,CC BY-SA 3.0,, +363807,1,363810,,1/11/2018 10:36,,14,3435,"

I'm designing an application which will in an early stage collect data A, B, and C from clients, but later on will instead collect data A, B, and D.

+ +

A, B, C, and D are very related and right now exist as columns of a single database PostgreSQL table T.

+ +

Once C is no longer needed, I want to remove its references from my application (I use the Django ORM), but I want to keep the data that was already entered. What is the best way to do so?

+ +

I've thought of creating a new table for ABD, but that might cause issues with any rows referencing table T.

+ +

I could just leave column C alone and remove references to it in the code, allowing the existing data to survive.

+ +

Is there a better option I'm not seeing?

+ +

Some extra details:

+ +

The number of rows will not be big, most likely 1-2 per user. This is a mass market application, but by the time I switch from C to D, the userbase will not be very large yet. C and D will likely not be collected at the same time, although that is a possibility. C and D likely represent multiple columns each, not just one each.
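A middle ground between the two options is to archive column C into a side table keyed by the primary key and then drop it from T: the historical values survive, and rows referencing T are untouched because the key values never change. A sketch with an in-memory SQLite database (table and column names hypothetical; in Django this would be a hand-written migration):

```python
# Sketch: preserve a retired column's data in a side table keyed by the
# original primary key, then rebuild the main table without the column.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, b TEXT, c TEXT)')
conn.execute('INSERT INTO t (id, a, b, c) VALUES (?, ?, ?, ?)',
             (1, 'x', 'y', 'legacy value'))

# 1. Copy column C into an archive table.
conn.execute('CREATE TABLE t_c_archive (id INTEGER PRIMARY KEY, c TEXT)')
conn.execute('INSERT INTO t_c_archive SELECT id, c FROM t')

# 2. Rebuild t without C (the portable way to drop a column in SQLite).
#    Do this inside one migration so nothing observes the half-done state.
conn.execute('CREATE TABLE t_new (id INTEGER PRIMARY KEY, a TEXT, b TEXT)')
conn.execute('INSERT INTO t_new SELECT id, a, b FROM t')
conn.execute('DROP TABLE t')
conn.execute('ALTER TABLE t_new RENAME TO t')

archived = conn.execute('SELECT c FROM t_c_archive WHERE id = ?', (1,)).fetchone()[0]
```

The ORM model then simply stops declaring C, and any one-off reporting need can still join against the archive table.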

+",282909,,591,,43111.77431,43112.09097,What are the best practices around retiring obsolete database columns?,,6,5,,43112.16181,,CC BY-SA 3.0,, +363811,1,363815,,1/11/2018 11:00,,3,232,"

As time passed I learned that not strictly following the rules of an architectural pattern like MVC kind of counteracts the actual purpose of having maintainable software. Usually I end up with fat monster controllers or a model that does too much. This happens because I didn't modularize my code enough, just to avoid having too many classes.

+ +

In my very last project I tried a different approach: I analyzed what I wanted to program and tried to achieve high atomicity by grouping the system into smaller components. Each of these components then has a model, a controller, and a view, everything in a strict top-down hierarchical manner where each component depends only on its children and never on its parent.

+ +

For example, for a highly atomic architecture:

+ +
HumanModel, HumanController, HumanView
+BrainModel, BrainController, BrainView
+FrontalLobeModel, FrontalLobeController, FrontalLobeView,
+NerveModel, NerveController, NerveView,
+NerveNucleusModel, NerveNucleusController, NerveNucleusView
+GolgiAparatusModel, GolgiAparatusController, GolgiAparatusView
+etc
+
+ +

I could go on but you can probably see what I mean by now. While the amount of files or classes increases, I believe it makes the application more ""future proof"". If I were to extend the functionality of the human then it would be easier and I don't have to rewrite anything. Since I started using this approach I have never had any problems. Even if some classes happen to be very minimalistic, in the future I might have to extend the functionality of the brain.

+ +

Why do people say its overkill?

+",255676,,,,,43111.48472,Is a high atomicity in mvc overkill?,,1,3,1,,,CC BY-SA 3.0,, +363823,1,,,1/11/2018 12:57,,1,77,"

Let's assume that I have such a simple database scheme:

+ +
CREATE TABLE tbl(key INTEGER PRIMARY KEY, cap TEXT NOT NULL);
+
+ +

I want to show at least three independent GUI windows. One window (window A) with a list of items; if I choose one item it should be possible to open window B, which allows deleting or updating the item. It should also be possible to open window B to create a new item. An ordinary task for many applications, I suppose. But I want window B to be a non-modal dialog, and I want to allow several window-B-type dialogs to be open at once.

+ +

How should I make coherent content of these windows?

+ +

For example if I create:

+ +
struct Record {
+  int64_t id;
+  std::string text;
+};
+
+ +

and a class Db that encapsulates simple update, select, insert and delete SQL statements, it would not be enough.

+ +

Because, for example, I can open window B with the last item in tbl (the one with the maximum value of the key column), open the same item again in another window B, delete it there, and after that create a new item - that new item will get the same key value as the deleted one. So I have two window B instances with the same key, but different cap.

+ +

Should I create a cache of std::shared_ptr<Record> in the Db class, and add the possibility to subscribe to std::shared_ptr<Record> changes?

+ +

Or maybe there is another standard way to handle a situation like this?
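The shared_ptr-cache-plus-subscriptions idea is essentially an identity map with an observer on top: the Db hands out one shared record per key and notifies every open window B when that record changes or is deleted, so stale key reuse can never leave two windows disagreeing. A language-agnostic Python sketch of that shape (names hypothetical):

```python
# Sketch: identity map + observer. Every window holding record 7 sees
# the same object and is called back when it changes.
class RecordCache:
    def __init__(self):
        self._records = {}      # key -> record dict (one shared instance)
        self._listeners = {}    # key -> list of callbacks

    def get(self, key):
        return self._records.setdefault(key, {'id': key, 'cap': ''})

    def subscribe(self, key, callback):
        self._listeners.setdefault(key, []).append(callback)

    def update(self, key, cap):
        self.get(key)['cap'] = cap
        for cb in self._listeners.get(key, []):
            cb(self._records[key])

cache = RecordCache()
seen = []
cache.subscribe(7, seen.append)   # window B subscribes to record 7
cache.update(7, 'new caption')    # another window edits the same record
```

In the C++ version the record dict becomes the std::shared_ptr<Record>, and a delete notification (not shown) would let still-open editors switch to a read-only ""record removed"" state instead of silently editing a reused key.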

+",225555,,63202,,43111.61111,43111.71806,architecture of gui app with several indepedent windows and sqlite backend,,1,3,,,,CC BY-SA 3.0,, +363827,1,363828,,1/11/2018 14:59,,5,1756,"

On our project we have this data format that we use to process and record data on. As of late our application changed so that many of the data formats parameters have become obsolete. Currently we receive these data ""packets"" over internet (UDP or TCP) and from saved binary files.

+ +

We want to create a new more space efficient format, removing things we don't need. Each format is divided into a header and the payload, where things like time-stamp information and some description of the payload is in the header.

+ +

To ensure that we can support multiple versions of a format, we decided that it made sense to put some sort of format version ID at the top of the format for every format we make. Unfortunately the previous format (created by people who are no longer on our team) does not follow the convention, and at some point the decision was made to put the format version ID in the middle of the format, in between where all the now useless junk data was.

+ +

Reading this older format is an issue because we currently have gigabytes of that format's data that we use as test data for our application, stuff that was collected in the field.

+ +

How do we ensure that both the old format (which doesn't follow the ""format version ID first, everything else after"" convention) and any future format versions we create can still be read by our application?

+ +

We've considered the following:

+ +
    +
  • Just moving on to the next format, ignoring old data. Irresponsible, and prohibitively expensive.
  • Having the user somehow specify which format is which (formats which can be identified from the header immediately vs old format types). Annoying, and hard on people who are not devs on this project but also contribute (of which there are many).
  • Having new format versions follow the old version up to the version ID portion. Mitigates many of the benefits of moving to the new version, and requires careful planning of where to place header bytes to ensure the version ID is still in the same location (harder on developers).
  • Converting old format files to version-ID-first header versions. Requires new tooling and maintenance of a version converter, and requires everyone else's files to be updated as well; these recorded files are with people who are not devs and aren't using version control either, so it will be difficult to make sure already recorded data can be correctly used by everyone.
+ +

Here is an example of what the current header looks like:

+ +

* = marked for removal

+ +
size: 8 bytes
+payload metadata: 8 bytes
+payload metadata: 8 bytes
+* non-standard timeformat: 8 bytes 
+* non-standard timeformat: 8 bytes
+* legacy undocumented data: 8 bytes
+version number: 8 bytes
+* source metadata: 8 bytes // may not want this all the time
+sequence number: 8 bytes
+short range time: 8 bytes
+payload metadata: 8 bytes
+* size data?: 8 bytes
+* spare data: 8 bytes
+payload: N bytes
+
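One way to get the benefits of the second option without burdening users is to give the new format a magic number at offset 0 that can never legally occur there in the old format (offset 0 held a size field bounded well below it), and sniff. A Python sketch against the 8-byte-field layout above; the magic value is made up:

```python
# Sketch: dispatch on a magic number at offset 0. The old format's first
# field is a size (far below the magic), so the two cannot collide.
import struct

NEW_MAGIC = 0xFEEDFACECAFEBEEF      # hypothetical sentinel value
OLD_VERSION_OFFSET = 6 * 8          # version number is the 7th 8-byte field

def detect_version(header: bytes) -> int:
    (first,) = struct.unpack_from('<Q', header, 0)
    if first == NEW_MAGIC:
        # New convention: magic first, version second.
        (version,) = struct.unpack_from('<Q', header, 8)
        return version
    # Legacy convention: the version sits mid-header.
    (version,) = struct.unpack_from('<Q', header, OLD_VERSION_OFFSET)
    return version

# A fake legacy header (size=100, version=1 in the 7th slot) and a new one.
old_header = struct.pack('<13Q', 100, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0)
new_header = struct.pack('<2Q', NEW_MAGIC, 2)
```

The sniffing stays in one small reader function, so old field data never has to be migrated, users never choose a format by hand, and every future version only needs magic-plus-version at the top.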
+",267185,,267185,,43111.6375,43113.125,"Binary data formats, how to make ensure you can read different format versions?",,4,8,2,,,CC BY-SA 3.0,, +363831,1,,,1/11/2018 15:21,,2,641,"

I've heard that changing a function's behaviour based on argument type is called ad hoc polymorphism:

+ +
+
program Adhoc;
+
+function Add(x, y : Integer) : Integer;
+begin
+    Add := x + y
+end;
+
+function Add(s, t : String) : String;
+begin
+    Add := Concat(s, t)
+end;
+
+begin
+    Writeln(Add(1, 2));                   (* Prints ""3""             *)
+    Writeln(Add('Hello, ', 'World!'));    (* Prints ""Hello, World!"" *)
+end.
+
+
+ +

Is there a specific term for changing a function's behaviour based on the number of arguments passed to it?

+ +

E.g. the popular JavaScript library jQuery has many functions that do different things based on how many arguments they are passed:

+ +
$(element).attr(attribute, value)  // sets element's attribute to value
+$(element).attr(attribute)  // returns element's attribute
+
+ +

Is this also polymorphism?

+ +

And does it have a more specific name?

+ +

(I want to use this word to talk about JS)
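The jQuery pattern is commonly called arity-based dispatch (sometimes overloading on arity): a relative of ad hoc polymorphism where the argument count, rather than the argument types, selects the behaviour. JavaScript implements it by inspecting the arguments object; the same shape sketched in Python:

```python
# Sketch: one method name whose behaviour depends on how many arguments
# were supplied -- a getter with one argument, a setter with two.
class Element:
    def __init__(self):
        self._attrs = {}

    def attr(self, name, *value):
        if value:                        # attr('title', 'hi') -> setter
            self._attrs[name] = value[0]
            return self                  # return self to allow chaining
        return self._attrs.get(name)     # attr('title') -> getter

el = Element()
el.attr('title', 'hi')
title = el.attr('title')
```

Since the two branches return different things (the element vs the attribute value), some style guides consider this a design smell and prefer separate get_attr/set_attr names, but the term for the jQuery behaviour itself is arity-based dispatch.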

+",224123,,224123,,43112.32986,43112.32986,Polymorphism based on number of arguments?,,2,15,0,,,CC BY-SA 3.0,, +363842,1,,,1/11/2018 17:57,,-1,84,"

I understand Role Based Security. +I have read about Policy Based. +I have read what others call Activity Based.

+ +

What I want to know is: if I do Policy-Based, am I avoiding what Role-Based requires and what Activity-Based seems to avoid - hardcoding security, refactoring because of some new requirement, hardcoding some more, and so on?

+ +

I see the difference between Role and Activity based. And my goal is to not have groups hard-coded in a method like Admin, SuperAdmin, SuperSuperAdmin, etc.

+ +

I think, but am not sure, that Policy-Based does what I want, but I cannot understand how. How are these three different?

+ +

If I use the Under21 example for Policy Based I have seen many times on various blogs, I have:

+ +
[Authorize(Policy = ""AtLeast21"")]
+
+ +

How is this different from:

+ +
[Authorize(Roles = ""SuperAdministrator, ChannelAdministrator"")]
+public class ChannelAdministrationController: Controller
+{
+}
+
+ +

And Activity-Based (is this Context-Based?) is different but appears more ""semantically"" accurate. You have an UpdateUser Activity, and it is not likely you'd have a SuperUpdateUser Activity. But with policies, it seems you could have the same problem as with multiple roles. I know there can be many Claims used in authentication and authorization, but all I can think is that there might be a better name than Over21 for my policy, and that I should create a role that my Claim needs. But somewhere in the back, I have to create those activities and roles and users. Even Role-Based has policies I can use in .NET Core middleware, so I am confused.

+ +

For example, I could just as easily require an additional policy,

+ +
[Authorize(Policy = ""AtLeast21"", ""ExceptInTenneseeWithParent"", ""ExceptInChurch"")]
+
+ +

Is this my answer (from the MS page)?

+ +
services.AddAuthorization(options =>
+{
+    options.AddPolicy(""BadgeEntry"", policy =>
+        policy.RequireAssertion(context =>
+            context.User.HasClaim(c =>
+                (c.Type == ClaimTypes.BadgeId ||
+                 c.Type == ClaimTypes.TemporaryBadgeId) &&
+                 c.Issuer == ""https://microsoftsecurity"")));
+});
+
+ +

I am back to hardcoding requirements again.

+ +

My question is how are these different?

+ +

Additionally, does policy-based security keep me from having to add a new role or policy every time something new comes along (see my examples above)? Activity-based does (it seems).

+ +
+

""Why do you believe that Role-Based Security is tightly coupled, while Policy-Based Security is not?""

+
+ +

I am not sure it isn't. That is why I am asking: something isn't adding up for me. While this isn't good,

+ +
[Authorize(Roles=""Administrator, SuperAdministrator, 
+CertainBosses, SomeGuyinHR, SuperDuperAdministrator"")]
+
+ +

I am not convinced policy is better,

+ +

because I have to change my handler with conditionals. The only thing I can think of is that in the Policy-Based approach you can have a more self-documenting set of code that doesn't change based on a new security role.

+ +
[Authorize(Policy=""UpdateUser"")]
+
+ +

and in your policy you either,

+ +
...administrator || some_user || something else
+
+ +

or some new policy stacked

+ +
[Authorize(Policy=""UpdateUser"")]
+[Authorize(Policy=""UpdateUserIfYouAreX"")]
+
+ +

or in the database

+ +
...is the user in the database that has my roles
+
+ +

and change all my role information in the database.

+ +

Activity-Based seems most similar to the last policy design:

+ +

(from lostechies)

+ +
[HandleError]
+ public class HomeController : Controller
+ {
+     [Authorize(Activity = ""Administrators Only"")]
+     public ActionResult AdministratorsOnly()
+     {
+         return View();
+     }
+ }
+
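The property the question is circling (no refactor when SuperDuperAdministrator appears) comes from making the activity-to-roles mapping data rather than code, so the single authorization check never changes; this is the core of the activity-based approach regardless of framework. A language-agnostic sketch (names hypothetical; in ASP.NET Core the lookup would live inside one reusable policy handler):

```python
# Sketch: activity-based authorization with the mapping kept as data.
# Granting a new role is a data change, not a code change.
GRANTS = {
    'UpdateUser': {'Administrator', 'SomeGuyinHR'},
}

def is_authorized(user_roles, activity):
    # One generic check for every activity: does any of the user's roles
    # appear in the stored grant set for this activity?
    return bool(GRANTS.get(activity, set()) & set(user_roles))

ok = is_authorized(['Administrator'], 'UpdateUser')
denied = is_authorized(['Intern'], 'UpdateUser')
```

Handlers then only name the activity (the equivalent of a single Authorize attribute per action), and the GRANTS table moves to the database with an admin UI, which is exactly what removes the recurring hardcode-refactor cycle.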
+",8802,,1204,,43111.82083,43111.86181,Does Policy-Based authentication remove refactoring security?,,1,4,,,,CC BY-SA 3.0,, +363847,1,363857,,1/11/2018 19:37,,0,1439,"

I have a website that allows users to be ""tracked"" and track their time/work. This involves the users being able to update their account with my site at most every minute, but typically users do it only a few times per day. However, there is nothing stopping other people from updating your account.

+ +

Each update creates a datapoint, a snapshot if you will, of their account status and the amount of work they have done.

+ +

Users can view the amount of work they have done in the past day or past week. This is done by changing the range of the datapoints you want to see.

+ +

My question now is: there is a very likely chance of having 300k+ users. There are also some user accounts which people follow more closely than others, say a CEO or a manager, and so those accounts are going to be updated more often. This leads to a possible 100k+ data points for a single user, within the timespan of only a year or so.

+ +

Currently I was thinking of just storing these by the following mapping:

+ +
unixtime -> account snapshot
+
+ +

This seems like it would be easiest to store as a large json array for a user, for example (with other details I would store):

+ +
jsonArray = {
+    ""alice"": {
+        ""totalHours"": 31.6,
+        ""updates"": {
+            1515653260 : { work: 95%, hours: 8 },
+            1515691399 : { work: 93%, hours: 10 },
+            1515695125 : { work: 91%, hours: 7.6 },
+            1515698694 : { work: 56%, hours: 6 },
+         }
+    }
+     ""bob"": {
+        ""totalHours"": 7.32,
+        ""updates"": {
+            1515654356 : { work: 95%, hours: 1 },
+            1515690342 : { work: 93%, hours: 6.32 },
+         }
+    }
+}
+
+ +

Is this an effective solution? I can't imagine a json string taking up so much space that MySQL wouldn't be able to hold it but I've never dealt with things that are likely to grow this large.

+ +

Are there other data structures I could use, or that would be more efficient in grabbing/storing data?

+",293110,,1204,,43111.88611,43111.8875,Design - JSON Strings storing large amounts of data for large amount of users,,2,0,,,,CC BY-SA 3.0,, +363852,1,,,1/11/2018 20:37,,7,3228,"

What is a good naming convention for date variables or properties in an Object-based, strongly typed language like C# (and by extension, for date database columns)? Do you use the word ""date""?

+ +

I'm going to avoid an example with the canonical ""created"" or ""updated"" date/time properties, and instead pick another common example: assuming there is no technical, domain-specific, or user reason to avoid any of these names, what would you name a property that contains the date that an interval (a period of calendar time) started or will start?

+ +
    +
  • StartDate
  • StartedDate
  • DateStarted
  • DateStart
  • DateOfStart
  • Started
  • Starts
  • Start
+ +

(This question could also be asked for date-time properties, presumably using ""Time"" instead of ""Date"".)

+",78808,,78808,,43111.86319,43112.21389,What is a good naming convention for date properties?,,4,6,,43112.54653,,CC BY-SA 3.0,, +363861,1,,,1/11/2018 22:15,,1,273,"

I'm writing a WebExtensions browser plugin that will modify the DOM of the current page when the user presses a button in the toolbar. More specifically, I need to insert HTML tags into the DOM. Even more specifically, every word on the page needs to be wrapped in a span tag.

+ +

In principle there's no way to predict how this might interfere with page scripts. For example, if a page has a script that relies on the number of span tags in the page being 25, and then my script adds 800 new ones, I'll break the page script.

+ +

How can I get around this issue? I've thought of a few alternatives:

+ +
    +
  1. ""Freeze"" the page. When the user pushes a button to activate the extension, the page essentially becomes static HTML. My extension won't allow any DOM changes from page scripts after it runs.

  2. +
  3. Clone the DOM and overlay the cloned DOM over the old page using z-levels. Insert my HTML into the clone and let page scripts act on the original copy. Monitor the original page for DOM changes and mimic them in my clone.

  4. +
  5. Screw it. The extension won't work on pages with scripts that rely on the DOM not having the extra span elements in it. Do testing to make sure that doesn't happen on popular websites.

  6. +
+ +

Modifying the page DOM isn't exactly obscure behavior for an extension. What are the general strategies for avoiding disastrous conflicts?

+",95736,,,,,44199.37569,My browser extension modifies the DOM. How do I stop this from interfering with page scripts?,,1,8,1,,,CC BY-SA 3.0,, +363862,1,364305,,1/11/2018 22:15,,0,676,"

If you want to implement a layered API design, for example, you may have one API layer that represents the application layer, and I assume that the application layer is represented by an endpoint and URI. Let's say we implement a Spring Boot MVC controller; within that controller we may make an HttpClient REST call or use a Spring RestTemplate invocation to another layer, say the infrastructure API layer.

+ +

My question centers around chaining calls across different APIs: is Spring's RestTemplate or HttpClient used directly, or something more sophisticated? I ask because it seems cumbersome to replicate a lot of domain objects. Can you share model/domain objects between different layers?

+",75803,,,,,43119.84236,"In implementing layered API architecture with Spring MVC Boot, how to connect to various endpoints",,1,0,,,,CC BY-SA 3.0,, +363864,1,364057,,1/11/2018 22:35,,0,354,"

I have a bunch of keys and values that I want to send to our messaging queue by packing them into one byte array. I will make one byte array of all the keys and values, which should always be less than 50K, and then send it to our messaging queue.

+ +

Packet class:

+ +
public final class Packet implements Closeable {
+  private static final int MAX_SIZE = 50000;
+  private static final int HEADER_SIZE = 36;
+
+  private final byte dataCenter;
+  private final byte recordVersion;
+  private final long address;
+  private final long addressFrom;
+  private final long addressOrigin;
+  private final byte recordsPartition;
+  private final byte replicated;
+  private final ByteBuffer itemBuffer = ByteBuffer.allocate(MAX_SIZE);
+  private int pendingItems = 0;
+
+  public Packet(final RecordPartition recordPartition) {
+    this.recordsPartition = (byte) recordPartition.getPartition();
+    this.dataCenter = Utils.LOCATION.get().datacenter();
+    this.recordVersion = 1;
+    this.replicated = 0;
+    final long packedAddress = new Data().packAddress();
+    this.address = packedAddress;
+    this.addressFrom = 0L;
+    this.addressOrigin = packedAddress;
+  }
+
+  private void addHeader(final ByteBuffer buffer, final int items) {
+    buffer.put(dataCenter).put(recordVersion).putInt(items).putInt(buffer.capacity())
+        .putLong(address).putLong(addressFrom).putLong(addressOrigin).put(recordsPartition)
+        .put(replicated);
+  }
+
+  private void sendData() {
+    if (itemBuffer.position() == 0) {
+      // no data to be sent
+      return;
+    }
+    final ByteBuffer buffer = ByteBuffer.allocate(MAX_SIZE);
+    addHeader(buffer, pendingItems);
+    buffer.put(itemBuffer);
+    SendRecord.getInstance().sendToQueueAsync(address, buffer.array());
+    // SendRecord.getInstance().sendToQueueAsync(address, buffer.array());
+    // SendRecord.getInstance().sendToQueueSync(address, buffer.array());
+    // SendRecord.getInstance().sendToQueueSync(address, buffer.array(), socket);
+    itemBuffer.clear();
+    pendingItems = 0;
+  }
+
+  public void addAndSendJunked(final byte[] key, final byte[] data) {
+    if (key.length > 255) {
+      return;
+    }
+    final byte keyLength = (byte) key.length;
+    final byte dataLength = (byte) data.length;
+
+    final int additionalSize = dataLength + keyLength + 1 + 1 + 8 + 2;
+    final int newSize = itemBuffer.position() + additionalSize;
+    if (newSize >= (MAX_SIZE - HEADER_SIZE)) {
+      sendData();
+    }
+    if (additionalSize > (MAX_SIZE - HEADER_SIZE)) {
+      throw new AppConfigurationException(""Size of single item exceeds maximum size"");
+    }
+
+    final ByteBuffer dataBuffer = ByteBuffer.wrap(data);
+    final long timestamp = dataLength > 10 ? dataBuffer.getLong(2) : System.currentTimeMillis();
+    // data layout
+    itemBuffer.put((byte) 0).put(keyLength).put(key).putLong(timestamp).putShort(dataLength)
+        .put(data);
+    pendingItems++;
+  }
+
+  @Override
+  public void close() {
+    if (pendingItems > 0) {
+      sendData();
+    }
+  }
+}
+
+ +

Below is the way I am sending data. As of now my design only permits sending data asynchronously, by calling the sender.sendToQueueAsync method inside the sendData() method.

+ +
  private void validateAndSend(final RecordPartition partition) {
+    final ConcurrentLinkedQueue<DataHolder> dataHolders = dataHoldersByPartition.get(partition);
+
+    final Packet packet = new Packet(partition);
+
+    DataHolder dataHolder;
+    while ((dataHolder = dataHolders.poll()) != null) {
+      packet.addAndSendJunked(dataHolder.getClientKey().getBytes(StandardCharsets.UTF_8),
+          dataHolder.getProcessBytes());
+    }
+    packet.close();
+  }
+
+ +

Now I need to extend my design so that I can send data in three different ways. It is up to the user to decide which way to send the data: either ""sync"" or ""async"".

+ +
    +
  • I need to send data asynchronously by calling sender.sendToQueueAsync method.
  • +
  • or I need to send data synchronously by calling sender.sendToQueueSync method.
  • +
  • or I need to send data synchronously but on a particular socket by calling sender.sendToQueueSync method. In this case I need to pass socket variable somehow so that sendData knows about this variable.
  • +
+ +

SendRecord class:

+ +
public class SendRecord {
+  private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(2);
+  private final Cache<Long, PendingMessage> cache = CacheBuilder.newBuilder().maximumSize(1000000)
+      .concurrencyLevel(100).build();
+
+  private static class Holder {
+    private static final SendRecord INSTANCE = new SendRecord();
+  }
+
+  public static SendRecord getInstance() {
+    return Holder.INSTANCE;
+  }
+
+  private SendRecord() {
+    executorService.scheduleAtFixedRate(new Runnable() {
+      @Override
+      public void run() {
+        handleRetry();
+      }
+    }, 0, 1, TimeUnit.SECONDS);
+  }
+
+  private void handleRetry() {
+    List<PendingMessage> messages = new ArrayList<>(cache.asMap().values());
+    for (PendingMessage message : messages) {
+      if (message.hasExpired()) {
+        if (message.shouldRetry()) {
+          message.markResent();
+          doSendAsync(message);
+        } else {
+          cache.invalidate(message.getAddress());
+        }
+      }
+    }
+  }
+
+  // called by multiple threads concurrently
+  public boolean sendToQueueAsync(final long address, final byte[] encodedRecords) {
+    PendingMessage m = new PendingMessage(address, encodedRecords, true);
+    cache.put(address, m);
+    return doSendAsync(m);
+  }
+
+  // called by above method and also by handleRetry method
+  private boolean doSendAsync(final PendingMessage pendingMessage) {
+    Optional<SocketHolder> liveSocket = SocketManager.getInstance().getNextSocket();
+    ZMsg msg = new ZMsg();
+    msg.add(pendingMessage.getEncodedRecords());
+    try {
+      // this returns instantly
+      return msg.send(liveSocket.get().getSocket());
+    } finally {
+      msg.destroy();
+    }
+  }
+
+  // called by send method below
+  private boolean doSendAsync(final PendingMessage pendingMessage, final Socket socket) {
+    ZMsg msg = new ZMsg();
+    msg.add(pendingMessage.getEncodedRecords());
+    try {
+      // this returns instantly
+      return msg.send(socket);
+    } finally {
+      msg.destroy();
+    }
+  }
+
+  // called by multiple threads to send data synchronously without passing socket
+  public boolean sendToQueueSync(final long address, final byte[] encodedRecords) {
+    PendingMessage m = new PendingMessage(address, encodedRecords, false);
+    cache.put(address, m);
+    try {
+      if (doSendAsync(m)) {
+        return m.waitForAck();
+      }
+      return false;
+    } finally {
+      cache.invalidate(address);
+    }
+  }
+
+  // called by a threads to send data synchronously but with socket as the parameter
+  public boolean sendToQueueSync(final long address, final byte[] encodedRecords, final Socket socket) {
+    PendingMessage m = new PendingMessage(address, encodedRecords, false);
+    cache.put(address, m);
+    try {
+      if (doSendAsync(m, socket)) {
+        return m.waitForAck();
+      }
+      return false;
+    } finally {
+      cache.invalidate(address);
+    }
+  }
+
+  public void handleAckReceived(final long address) {
+    PendingMessage record = cache.getIfPresent(address);
+    if (record != null) {
+      record.ackReceived();
+      cache.invalidate(address);
+    }
+  }
+}
+
+ +

Callers will only call one of the three methods below:

+ +
    +
  • sendToQueueAsync by passing two parameters
  • +
  • sendToQueueSync by passing two parameters
  • +
  • sendToQueueSync by passing three parameters
  • +
+ +

How should I design my Packet and SendRecord classes so that I can tell the Packet class that the data needs to be sent to my messaging queue in one of the above three ways? It is up to the user to decide which way to send data to the messaging queue. As of now, the way my Packet class is structured, it can send data in only one way.
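One direction worth considering for the question above: instead of Packet hard-coding one SendRecord call inside sendData(), the caller could hand it a small sending strategy chosen from the three variants. This is only a sketch under assumptions; the interface and class names below are invented, and the real ZeroMQ/SendRecord plumbing is elided:

```java
// Hypothetical strategy: Packet no longer decides sync/async/socket itself;
// the caller picks one of three implementations when constructing the Packet.
interface SendStrategy {
    boolean send(long address, byte[] encodedRecords);
}

// A Packet-like class reduced to the part that matters for the design question.
class PacketSketch {
    private final SendStrategy strategy;

    PacketSketch(SendStrategy strategy) {
        this.strategy = strategy;
    }

    // sendData never branches on the mode; it just delegates.
    boolean sendData(long address, byte[] payload) {
        return strategy.send(address, payload);
    }
}
```

The three public entry points then become three strategy instances, e.g. a lambda delegating to sendToQueueAsync, one delegating to the two-argument sendToQueueSync, and one that captures the socket and calls the three-argument sync variant, so the socket parameter never has to leak into Packet's own API.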

+",212146,,212146,,43115.81181,43118.31111,Send records using async or sync way,,3,7,,,,CC BY-SA 3.0,, +363872,1,363909,,1/12/2018 8:04,,10,624,"

I just read https://techfindings.one/archives/2652 about functional programming and came across this:

+ +
+

anonymous functions can often not be JIT compiled and will never be optimized

+
+ +

Can someone explain to me why this is the case?

+",151207,,,,,43112.74583,Why most anonymous functions can't be JIT compiled and will never be optimized?,,1,5,2,,,CC BY-SA 3.0,, +363874,1,,,1/12/2018 8:29,,45,14736,"

I'm trying to follow Uncle Bob's clean code suggestions and specifically to keep methods short.

+ +

I find myself unable to shorten this logic though:

+ +
if (checkCondition()) {addAlert(1);}
+else if (checkCondition2()) {addAlert(2);}
+else if (checkCondition3()) {addAlert(3);}
+else if (checkCondition4()) {addAlert(4);}
+
+ +

I cannot remove the elses and thus separate the whole thing into smaller bits, because the ""else"" in the ""else if"" helps performance: evaluating those conditions is expensive, and if I can avoid evaluating the later conditions because one of the first ones is true, I want to avoid them.

+ +

Even semantically speaking, evaluating the next condition if the previous was met does not make sense from the business point of view.
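For what it's worth, the short-circuiting described above survives without `else` when the chain is extracted into its own method that returns early; a self-contained sketch with stubbed (and instrumented) conditions:

```java
class AlertChain {
    // Stubbed conditions so the sketch is self-contained; in the real code
    // these are the expensive checkCondition() calls.
    static int evaluated = 0;
    static boolean condition1() { evaluated++; return false; }
    static boolean condition2() { evaluated++; return true; }
    static boolean condition3() { evaluated++; return false; }

    static int lastAlert = 0;
    static void addAlert(int n) { lastAlert = n; }

    // Each early return stops evaluation exactly like the else-if chain did.
    static void addFirstMatchingAlert() {
        if (condition1()) { addAlert(1); return; }
        if (condition2()) { addAlert(2); return; }
        if (condition3()) { addAlert(3); return; }
    }
}
```

Whether this is actually cleaner than the original else-if chain is debatable; the point is only that early returns preserve the same evaluation order and cost.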

+ +
+ +

edit: This question was identified as a possible duplicate of Elegant ways to handle if(if else) else.

+ +

I believe this is a different question (you can see that also by comparing answers of those questions).

+ + +",96955,,96955,,43114.66181,43116.26944,How do I edit a chain of if-else if statements to adhere to Uncle Bob's Clean Code principles?,,13,23,13,,,CC BY-SA 3.0,, +363877,1,363884,,1/12/2018 9:53,,1,645,"

I have to build a Windows service which requires OAuth2 authentication. The service is intended (like all services should be) to run unattended (no user needs to be logged in to Windows). The problem is that some user interaction is required to complete the OAuth2 authentication.

+ +

The service uses a third party API (I have no control over it) to do some checks, and sends out an email if a check comes back negative. The API requires authentication through a website. The service will run on an ""always on"" server.

+ +

Once authentication is validated, I would have a refresh token to periodically refresh authentication. So no more user interaction should be required after the initial authentication.

+ +

I think the best solution is to create a WinForms/WPF application which prompts the user to log in to the website, installs the service and starts the service. As mentioned, once authenticated, the service can run unattended, since authentication can be periodically refreshed (tokens are saved in an encrypted file). If for some reason authentication is lost, the service can send out an email requesting a user to log in and re-authenticate.

+ +

I would imagine the following flow:

+ +

+ +

Is this an advisable solution, or is there something better? Have I forgotten/missed something?

+",213115,,213115,,43539.7375,43539.7375,"Windows Service with authentication, some UI required",,1,2,0,,,CC BY-SA 4.0,, +363880,1,363890,,1/12/2018 11:50,,0,92,"

I have a function in a plugin that is called whenever the WebAPI's ""Plugin"" endpoint is called in the main project, and that has to process the HTTP request. The request holds more information about what to do in the HTTP method and query string, and for now I have copy&pasted the code that chooses methods based on these:

+ +
public object OnCustomControllerCalled(Toolkit tk, HttpRequestMessage req)
+{
+    if (req.Method == HttpMethod.Get)
+    {
+        if (req.RequestUri.Query.Contains(""data=Customers""))
+        {
+            return MyPlugin.GetCustomers(tk);
+        }
+        else if(req.RequestUri.Query.Contains(""data=UserManagement""))
+        {
+            return MyPlugin.GetUserManagement(tk);
+        }
+        ...
+    }
+    else if (req.Method == HttpMethod.Post)
+    {
+        if (req.RequestUri.Query.Contains(""data=Customers""))
+        {
+            return MyPlugin.PostCustomers(tk, req);
+        }
+        else if(req.RequestUri.Query.Contains(""data=UserManagement""))
+        {
+            return MyPlugin.PostUserManagement(tk, req);
+        }
+        ...
+    }
+    else if (req.Method == HttpMethod.Put)
+    {
+        if (req.RequestUri.Query.Contains(""data=Customers""))
+        {
+            return MyPlugin.PutCustomers(tk, req);
+        }
+        ...
+    }
+    else if (req.Method == HttpMethod.Delete)
+    {
+        ...
+
+ +

I know that copy&paste is not a good approach to this problem. How am I supposed to make this easily extensible and less error-prone?

+ +

Throw reflection at it to get to the DRY principle?

+ +

Or would it make sense to have a single function MyPlugin.Customers instead of one for each method, and let that single method contain the whole behaviour regarding the ""customer"" objects, and keep the differentiation between get, post, put and delete in that function?

+",124136,,31260,,43112.51389,43112.56458,Multi-dimensional if-elseif-else block to call different functions,,1,1,,,,CC BY-SA 3.0,, +363883,1,,,1/12/2018 12:28,,3,958,"

I am working on design of a financial xchange system and especially the order-matching part.

+ +

I couldn't find a clear/complete design article especially for scalable/high available order-maching system. There lots of implementations which gives an idea about the system but none of the solutions are scalable and high available.

+ +

I want to give details about the problem and a possible solution. I want to ask whether such a solution work, applicable and you have any suggestions.

+ +

Let me start with the algoritm definition.

+ +

Order-matching algorithm

+ +

Although the algorithm has variations and will possibly be more complex in the real world, it can be summarized simply as below. You have buyers and sellers in the system. Buyers create buying orders while sellers create selling orders in real time. The order-matching algorithm's duty is to match suitable
+selling orders with buying orders (and vice versa), again in real time.
+ +

Assume that some traders enter the orders shown below. (Below 2 tables and explanation is taken from https://www.apress.com/gp/book/9781590595640 which is a great book. )

+ +
| Time  | Trader   | Buy/Sell | Quantity | Price                      |
+|-------|----------|----------|----------|----------------------------|
+| 10:01 | Anthony  | Buy      | 300      | $20                        |
+| 10:05 | Anu      | Sell     | 300      | $20.10                     |
+| 10:08 | Nicola   | Buy      | 200      | $20                        |
+| 10:09 | Jason    | Sell     | 500      | $19.80                     |
+| 10:10 | Jeff     | Sell     | 400      | $20.20                     |
+| 10:15 | Nicholas | Buy      | 500      | Market Price Order ( MPO ) |
+| 10:18 | Kumar    | Buy      | 300      | $20.10                     |
+| 10:20 | Doe      | Sell     | 600      | $20                        |
+| 10:29 | Sally    | Buy      | 700      | $19.80                     |
+
+ +

The exchange will send an order acknowledgment to the traders’ trading terminals and fill the order book as below

+ +
|---------------------------------------------|---|-------------------------------------------|
+|                 Buyer                       |   |                  Seller                   |
+|---------------------------------------------|---|-------------------------------------------|
+| Timestamp | Name     | Quantity | Buy Price |   | Sell Price | Quantity | Name  | Timestamp |
+|-----------|----------|----------|-----------|---|------------|----------|-------|-----------|
+| 10:15     | Nicholas | 500      | Market    |   | $19.80     | 500      | Jason | 10:09     |
+| 10:18     | Kumar    | 300      | $20.10    |   | $20        | 600      | Doe   | 10:20     |
+| 10:01     | Anthony  | 300      | $20       |   | $20.10     | 300      | Anu   | 10:05     |
+| 10:08     | Nicola   | 200      | $20       |   | $20.20     | 400      | Jeff  | 10:10     |
+| 10:29     | Sally    | 700      | $19.80    |   |            |          |       |           |
+|-----------|----------|----------|-----------|---|------------|----------|-------|-----------|
+
+ +

If the traders in the previous table submit their orders, the market will match the orders as follows:

+ +
    +
  • Nicholas’s buy order at market price will match Jason’s sell order. This will result in the first trade, and both orders will be removed from the order book.
  • +
  • Kumar’s order of 300 buy will get matched to Doe’s order of 600 sell. Interestingly, this order will get matched at $20.10 even though Doe wanted to sell at $20. Exchange systems are designed to protect the interests of both buyers and sellers. Since there was a passive order from Kumar willing to buy at $20.10 and Doe’s order comes in later asking only for $20, she will still get $20.10. Since Kumar’s order is completely filled, it will be removed completely from the order book. However, Doe’s order of 600 is only half-filled. So, 300 shares of Doe will remain in the order book.
  • +
  • In the next step, Anthony’s buy order of 300 shares will get fully filled by Doe’s balance of 300 at $20, and both orders will be removed from the order book.
  • +
  • Now Nicola wants to buy 200 at $20, but Jeff will sell only at $20.20. There is no agreement in price; hence, there will be no further matching, and the matching system will wait either for one of the parties to adjust the price or for a new order at a price where either a buy or a sell can be matched.
  • +
+ +
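The four matching steps above can be reproduced with a toy single-threaded price-time-priority book. This is purely an illustrative sketch: trade-price rules, market orders and all the real concerns from the question (persistence, HA, latency) are ignored, and a market buy can only be crudely modelled as an extreme limit price:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Minimal order representation for the sketch.
class Order {
    final boolean buy;
    int qty;
    final double price;
    final long time;
    Order(boolean buy, int qty, double price, long time) {
        this.buy = buy; this.qty = qty; this.price = price; this.time = time;
    }
}

class ToyBook {
    // Best bid: highest price first, then earliest time; best ask: lowest price first.
    private final PriorityQueue<Order> bids = new PriorityQueue<>(
        Comparator.<Order>comparingDouble(o -> -o.price).thenComparingLong(o -> o.time));
    private final PriorityQueue<Order> asks = new PriorityQueue<>(
        Comparator.<Order>comparingDouble(o -> o.price).thenComparingLong(o -> o.time));

    int tradedQty = 0; // total matched quantity, exposed for inspection

    void submit(Order o) {
        PriorityQueue<Order> opposite = o.buy ? asks : bids;
        while (o.qty > 0 && !opposite.isEmpty()) {
            Order best = opposite.peek();
            boolean crosses = o.buy ? o.price >= best.price : o.price <= best.price;
            if (!crosses) break;                  // no price agreement: stop matching
            int fill = Math.min(o.qty, best.qty); // partial fills, as in the Doe example
            tradedQty += fill;
            o.qty -= fill;
            best.qty -= fill;
            if (best.qty == 0) opposite.poll();   // fully filled passive order leaves the book
        }
        if (o.qty > 0) (o.buy ? bids : asks).add(o); // remainder rests on the book
    }
}
```

Note how the whole engine is naturally a single sequential loop over one mutable book, which is exactly why the distribution questions below are hard: the book itself resists partitioning, while everything around it (order intake, persistence, market data) can be scaled out.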

Questions, questions and questions

+ +

Trading platforms are living systems, which means you have selling orders and buying orders coming into the system all the time, and prices change rapidly. The matching algorithm should have very low latency, because any latency may result in money loss for users.

+ +

(As I understand it, but I am not sure) it is very hard (if not impossible) to implement the matching algorithm in a distributed manner, where you have something like Google Pub/Sub and consumers listening for orders and doing their matching magic. Because (again) latency is a very hard requirement, putting extra queues in the path may break it. Also, the sequential ordering of orders is very important, because the whole algorithm is based on it; distributed or concurrent matching logic will possibly break that ordering.

+ +
    +
  • If a single-threaded solution is implemented how can these systems scale?
  • +
  • If a single-threaded and not distributed solution is implemented how can we ensure high availability?
  • +
  • Since latency is one of the most critical requirements, when should we persist orders or trades to the DB? Should all these operations be async?
  • +
  • It seems the LMAX Disruptor is a solution to the latency problem within a single machine, pushing the limits of performance. If so, how can we make it highly available?
  • +
+ +

Possible Architecture

+ +

I am trying to put together a solution that at least works under some assumptions. Please comment on the architecture and flow as well.

+ +
 Seller/Buyer
+
+          +
+          |
+          | 1 - place order
+          |
+          |
+          |
+    2     |
+  +-------v-------+
+  |               |      3      +--------------------+           6
+  | Order Service +------------->   orders queue     +----------------------+
+  |               |             +---------+----------+                      |
+  +---------------+                       |                                 |
+          |4                              |5                                |
+          |                               |                                 |
+     +----v-----+                         |                                 |
+     |          |               +---------v----------+            +---------v----------+
+     | Database |          7    |                    |   8        |                    |
+     |          <---------------+  Trading Service   +------------>  Trading Service   |
+     +----------+               |  (order+matching)  |            |  (order+matching)  |
+                                |                    |            |   cold instance    |
+                                |                    |            |                    |
+                                +--------------------+            +--------------------+
+
+ +
    +
  • Seller or buyer places order
  • +
  • Order service gets this order ( REST )
  • +
  • Order is put into queue for further processing
  • +
  • Order is saved into DB async
  • +
  • The order is fetched by the trading service. The trading service fills its own in-memory structure for order matching, which works continuously on this data set.
  • +
  • Cold instance also fetches the same data from queue.
  • +
  • When the trading service finds some matching, it sends the match to the database asynchronously.
  • +
  • In order to keep the cold instance up to date, matched orders are also sent to the cold instance asynchronously.
  • +
+ +

More and more questions

+ +
    +
  • Is there any way to eliminate the latency originating from the order queue?
  • +
  • Maintaining hot and cold Trading Service instances is a very hard job. A distributed cache like Hazelcast can be used for distributed memory management, but I think it would be slower than the LMAX Disruptor. Is that right?
  • +
  • Is there any way to distribute LMAX Disruptors across distributed machines?
  • +
+",293153,,293153,,43112.72014,43142.74722,What can be the design of a distributed high available trade exchange?,,1,7,1,,,CC BY-SA 3.0,, +363885,1,,,1/12/2018 12:55,,1,1764,"

We are a fintech startup trying to rebuild a monolithic PHP app into microservices. As in a typical web app, we manage master data in an admin page. How do I distribute this master data through the microservices? We built two API management services: one for the front-end user entry point, the other for the admin/back-end user entry point. We created a Loan service for front-end users and an administration service for back-end users. The Loan service will use loan promo, city and country data, which are also manageable from the admin service. Should we duplicate the model and database of loan promo, city and country from the admin service to the loan service? Or should we put that master data in the loan service? I also read about the saga pattern; maybe we can use it, so that any update from the admin service triggers an update on the loan service.

+ +

We are still designing and figuring out how best to implement this. I would really appreciate any advice or input.

+",293161,,293161,,43112.54236,44146.62778,Should I duplicate model and database of master data across services in microservice,,3,2,,,,CC BY-SA 3.0,, +363887,1,,,1/12/2018 13:08,,-2,101,"

I'm doing an assignment for my school site and I'm trying to find a ""best practice"" way to go about solving my problem.

+ +

So, whenever a certain method is run I have to re-upload five different files to five different FTP servers. Each of these FTP servers requires different credentials, so for each upload I have to create a new client with the corresponding set of credentials.

+ +

Right now, my code looks something like this, which I feel is very dirty (but I'm not sure of a better way!):

+ +
public static string url;
+public static string username;
+public static string password;
+
+public static void UploadFiles()
+{
+    for (var i = 0; i < 5; i++)
+    {
+        GetCredentials(i);
+
+        using (var client = new FtpClient(url, username, password))
+        {
+            // connect to client
+            // and upload the file
+            // using the parameters set in GetCredentials()
+        }
+    }
+}
+
+private static void GetCredentials(int id)
+{
+    switch (id)
+    {
+        case 0:
+            username = ""user0"";
+            password = ""pass0"";
+            break;
+        case 1:
+            username = ""user1"";
+            password = ""pass1"";
+            break;
+        case 2:
+            username = ""user2"";
+            password = ""pass2"";
+            break;
+        case 3:
+            username = ""user3"";
+            password = ""pass3"";
+            break;
+        case 4:
+            username = ""user4"";
+            password = ""pass4"";
+            break;
+    }
+}
+
+ +

This works all well and good and I'm about to set this into production, but I'd really like to learn something from this, instead of just using the first solution that comes to my mind. Any advice is appreciated!

+",293163,,,,,43112.71389,Iterating over multiple sets of credentials,,1,7,,,,CC BY-SA 3.0,, +363893,1,,,1/12/2018 14:57,,1,60,"

I'm having difficulties assessing a design decision regarding entity relations in a JavaEE persistence data model.

+ +

Let's say I want to design a simple data model that's supposed to represent a chamber orchestra comprised of different types of musicians.

+ +

For now I know that there will be a flute player, a piano player, a violin player and a contrabass player, but it is expected that over the lifetime of the system, other—unforeseeable—types of musicians will be added. Therefore my supervisor told me not to use a separate entity for every type of musician, like FlutePlayer, PianoPlayer etc., but instead have a Musician entity with an attribute of enumeration type Instrument, so that new types of musicians can later be added to the system easily by adding a value to that enumeration.

+ +

On the other hand, I know for sure that every orchestra will always have exactly one piano player and exactly one contrabass player and I'd like to be able to access them conveniently. That's why I'm thinking it would be good to have a pianoPlayer and a contrabassPlayer attribute instead of just a collection of musicians in ChamberOrchestra. Think of the orchestra as a performing orchestra, so a musician cannot be part of more than one orchestra at a time.

+ +

But when implementing this, the cardinalities for the associations seem a bit odd to me because they'll establish an asymmetric relationship between ChamberOrchestra and Musician: one-to-one in one direction and many-to-one in the other direction.

+ +

I already experimented with this approach a little bit and so far it does what I expect, but the mentioned asymmetry troubles me a bit and because I'm not experienced in designing data models I'm a bit worried that such a design might have negative consequences further down the line.

+ +

So, to have a concrete question, is it okay or would it be considered bad practice to have an asymmetric relationship between entities in the sense sketched out in the code below?

+ +
@Entity
+public class ChamberOrchestra {
+
+  @OneToOne
+  @JoinColumn(name = ""PIANO_PLAYER_ID"")
+  protected Musician pianoPlayer;
+
+  @OneToOne
+  @JoinColumn(name = ""CONTRABASS_PLAYER"")
+  protected Musician contrabassPlayer;
+
+  public Musician getPianoPlayer() {
+    return pianoPlayer;
+  }
+
+  public Musician getContrabassPlayer() {
+    return contrabassPlayer;
+  }
+
+}
+
+@Entity
+public class Musician {
+
+  @ManyToOne
+  @JoinColumn(name = ""CHAMBER_ORCHESTRA_ID"")
+  protected ChamberOrchestra chamberOrchestra;
+
+  @Column
+  protected Instrument instrument;
+
+  public ChamberOrchestra getChamberOrchestra() {
+    return chamberOrchestra;
+  }
+
+  public Instrument getInstrument() {
+    return instrument;
+  }
+}
+
+",185784,,185784,,43113.37847,43113.37847,Is it okay to have an asymmetric relationship between entities in a JavaEE data model?,,0,6,,,,CC BY-SA 3.0,, +363894,1,363953,,1/12/2018 15:07,,2,188,"

If we have different objects,

+ +
[A1, A2, A3, B1, B2, B3, B4, B5]
+
+ +

Some calculations will be performed to find compatible objects. For example, let's assume the following 3 sets were formed, where every set contains compatible objects:

+ +
    +
  1. {A1, B2}
  2. +
  3. {A3, B2}
  4. +
  5. {A1, A3, B4, B5}
  6. +
+ +

Now we need to perform filtering. Each object can participate in only one set. For example, if B2 has made a pair with A1, then B2 cannot participate in any other set. This means that if we select set 1, then set 2 will be deleted because B2 has already participated, and set 3 will be deleted because A1 has already participated.

+ +

Now we need to do the filtering such that we maximize the number of objects being utilized. If we select set 1, then we are only utilizing A1 and B2.
+Thus, the optimal way would be to select set 3, which utilizes 4 objects.

+ +

Right now, I have a complex function which goes through the list of sets recursively and keeps adjusting until no changes are made to the existing sets. This is not only inefficient, but it might not work in cases where changing sets can cause a ripple effect.

+ +

I am not looking for a coded solution, just guidance: is there a graph algorithm I should study?
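A useful name for the search: choosing pairwise-disjoint sets to maximize the number of covered objects is a form of the set packing problem, which is NP-hard in general, so exact answers usually come from exhaustive search or integer programming, and larger inputs use heuristics. A brute-force sketch over all subsets of the candidate sets (objects are encoded as plain ints here for brevity):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class SetPacking {
    // Exhaustive search: try every subset of candidate sets, keep only
    // those whose members are pairwise disjoint, maximize covered objects.
    // Exponential in the number of sets, so only viable for small inputs.
    static int bestCoverage(List<Set<Integer>> sets) {
        int best = 0;
        for (int mask = 0; mask < (1 << sets.size()); mask++) {
            Set<Integer> used = new HashSet<>();
            boolean disjoint = true;
            for (int i = 0; i < sets.size() && disjoint; i++) {
                if ((mask & (1 << i)) == 0) continue;
                for (int e : sets.get(i)) {
                    if (!used.add(e)) { disjoint = false; break; } // object reused: invalid combo
                }
            }
            if (disjoint) best = Math.max(best, used.size());
        }
        return best;
    }
}
```

With the example above encoded as A1=1, A3=3, B2=12, B4=14, B5=15, the answer is 4 (the third set alone), matching the reasoning in the question.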

+",293178,,293178,,43112.65833,43115.69931,Algorithm to select sets of objects while maximizing number of objects covered,,2,4,1,,,CC BY-SA 3.0,, +363901,1,,,1/12/2018 16:39,,3,444,"

Consider the following c# example:

+ +
public class MyParentClass {
+   public int MyInt { get; set; }
+}
+
+public class MyChildClass : MyParentClass
+{
+}
+
+public class AnotherClass
+{
+   public MyChildClass GetChildClassFromParentClass(MyParentClass parent)
+   {
+       return new MyChildClass() { MyInt = parent.MyInt };
+   }
+}
+
+ +

I'm wondering why it's not possible to directly clone the parent class into it's child without the manual step of copying all its values, since MyParentClass and MyChildClass share this 'relationship'.

+ +

Maybe such a feature would not be able to guarantee the consistency of MyChildClass because of missing injected dependencies, but the this language feature could at least permit this cloning if there is a parameterless constructor defined for the child.

+ +

Let me be clear I know this feature is not a good idea and there definitely are scenarios which make this a language feature that isn't feasible or safe. But I'd like to know what those scenarios are.

+ +

Minor note: I believe this is not an opinion-based question, since no mainstream OOP language seems to offer this feature, so there must be objective arguments against it.*

+",53014,,9113,,43114.8,43114.80278,Why don't OOP languages offer a feature to clone a parent into a child class?,,5,15,0,,,CC BY-SA 3.0,, +363906,1,363911,,1/12/2018 17:35,,3,1092,"

Everyone seems talking easy about it but I don't get it.

+ +

.NET Standard is a subset of the functionality of every .NET framework that you have to target if you want to make your library .NET Standard compliant, and thus compatible with all the platforms .NET can target.

+ +

Then why do I have to specify ""netstandard20"", ""net461"" and so on manually in the target framework?

+ +

Shouldn't it be compatible with everyone?

+ +

What's even the sense of a library targeting ""netstandard20"" itself?

+",94642,,1204,,43112.7625,43112.94792,Why do I have to specify the target framework in Visual Studio?,<.net>,3,0,,,,CC BY-SA 3.0,, +363916,1,363927,,1/12/2018 21:16,,7,2585,"

We recently upgrade to PMD 6.0.0 and are getting several classes flagged as ""Data Classes""? It argues that it breaks encapsulation and makes for brittle design (I understand that this site has a different opinion).

+ +

Let's say that our team decides to go ahead and refactor the data class, instead of ignoring the PMD rule. Unfortunately, PMD's documentation for the rule is unclear to me.

+ +
+

Refactoring a Data Class should focus on restoring a good data-behaviour proximity. In most cases, that means moving the operations defined on the data back into the class. In some other cases it may make sense to remove entirely the class and move the data into the former client classes.

+
+ +

I don't understand what ""good data-behaviour proximity"" is, or what ""the former client classes"" are. For instance, we have something along the lines of this:

+ +
public class Person {
+    private String name;
+    private List<String> formerNames;
+    private List<Food> favoriteFoods;
+
+    //getters and setters
+}
+
+ +

How would I go about refactoring this Data Class, as PMD recommends?

+ +

I'd like to focus on what sort of refactoring would look like, rather than discussing whether the rule is a good one or not. At this time, we don't know if we want to allow this rule or exclude it.
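Reading the PMD text against the Person example: the ""former client classes"" are the callers that currently fetch Person's lists through getters and compute over them, and ""restoring data-behaviour proximity"" means moving those computations into Person. A hypothetical sketch of what that could look like (the name fields are elided; the real class presumably has more behavior worth pulling in):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the real Food type, reduced to one attribute for the sketch.
class Food {
    final int calories;
    Food(int calories) { this.calories = calories; }
}

class Person {
    private final List<Food> favoriteFoods = new ArrayList<>();

    void addFavoriteFood(Food f) { favoriteFoods.add(f); }

    // Behavior moved ""back into the class"": a former client presumably
    // called getFavoriteFoods() and looped over the result to compute this.
    int favoriteCalorieTotal() {
        int total = 0;
        for (Food f : favoriteFoods) total += f.calories;
        return total;
    }
}
```

If it turns out Person has no such behavior anywhere in the codebase, the other refactoring PMD mentions applies instead: dissolve the class and move its fields into the classes that actually use them.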

+",81973,,,,,43433.73333,How do I refactor a Data Class to not be one?,,1,0,,,,CC BY-SA 3.0,, +363920,1,,,1/12/2018 22:31,,3,405,"

This has been on my mind as I developed several applications with this feature.

+ +

Suppose that an application is required to process incoming requests in an asynchronous manner. Take the example of a notification system (other agents will submit a request for the notification to notify people via email or text, etc.) In this case this application requires to call other external systems (smtp server in our example). These external systems might be down temporarily so a retry mechanism is required (up to a certain number of retries).

+ +

There are libraries that offer a way to retry, such as Polly. The idea is that the application will retry X times with a delay of D. But the problem with this is that the request processing is held in memory throughout the retry process, making it resource-inefficient.

+ +

What would be a plausible pattern for this sort of problem? What are some considerations or platforms I should look into? What did you do when you faced a similar problem?

+ +

Every time I faced this problem, I solved it with a table that contains the tasks that need to be processed. I process them in batches and update their statuses (NEW, IN_PROGRESS, ERROR). This mechanism works well with one instance, but once I have multiple instances, locking the table becomes necessary so that no two instances process the same request. It seems that there should be a better solution for this problem.
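To make the locking step concrete, here is a minimal Python sketch of what I mean (the class and method names are my own invention; an in-memory dict stands in for the task table, and the lock stands in for row-level locking such as SELECT ... FOR UPDATE):

```python
import threading

class TaskTable:
    """In-memory stand-in for the task table described above."""
    def __init__(self, task_ids):
        self._lock = threading.Lock()
        self._status = {task_id: "NEW" for task_id in task_ids}

    def claim_next(self):
        # Atomically flip one NEW task to IN_PROGRESS, the way an
        # UPDATE ... WHERE status = 'NEW' would against a real table.
        with self._lock:
            for task_id, status in self._status.items():
                if status == "NEW":
                    self._status[task_id] = "IN_PROGRESS"
                    return task_id
            return None

table = TaskTable(["t1", "t2", "t3"])
print([table.claim_next() for _ in range(4)])  # ['t1', 't2', 't3', None]
```

Two instances claiming concurrently can never receive the same task, which is the guarantee the lock buys; my question is whether there is a pattern that avoids needing this polling-and-locking at all.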

+",197459,,,,,43113.8,Retry Architecture,,1,10,,,,CC BY-SA 3.0,, +363926,1,,,1/12/2018 23:33,,0,123,"

According to the Single Responsibility Principle (SRP), a method or class should have one responsibility. I have read a couple of sources and watched some videos, and I would like to understand it by writing some code samples.

+ +

As I understand it, the following:

+ +
def addAndSubtract(a,b,c)
+  a + b - c
+end
+
+ +

conflicts with the SRP, as this method has two responsibilities, i.e. addition and subtraction.

+ +

So how do I align this with the SRP? One responsibility per method, right? Is the following correct?

+ +
def addAndSubtractSrp(a,b,c)
+  subtract(add(a,b),c)
+end
+
+def add(a,b)
+  a + b
+end
+
+def subtract(a,b)
+  a - b
+end
+
+ +

Discussion

+ +

As I understand it, this is aligned with the SRP, as there are now two methods that each have one responsibility. When I want to add multiplication and division, I then need to create separate methods, right?

+",218283,,,,,43112.98125,Understanding Single Responsibility Pattern (SRP),,0,5,,43113.02569,,CC BY-SA 3.0,, +363928,1,,,1/13/2018 3:33,,1,3927,"

I'm thinking of making a realtime chat app that would allow me to store user messages into a MySQL database. So far these are the two ideas I have.

+ +

1) Create an API which the message is sent to and then saved in the database. After the message is saved into the database, a push notification is sent to the person who is to receive the message, and then a script is run in the background to download the message from the database.

+ +

2) Use WebSockets or XMPP to allow messages to be sent and received by the server, and just save the message to the MySQL database when it reaches the server.

+ +

Which one of these methods would be best to implement and scale for a realtime chat application? Method one seems pretty good, but I'm not sure whether constant SQL transactions are good for a server; the load might be too much.

+ +

EDIT!

+ +

So after doing some more digging around, I see that I can use XMPP either with WebSockets or with HTTP to create my realtime chat app. My question is: what would be a simple but effective way to save these messages to a MySQL database?

+",293225,,293225,,43113.95208,43117.95694,Build a realtime chat app which stores messages in a MySQL database,,2,5,1,,,CC BY-SA 3.0,, +363929,1,,,1/13/2018 4:22,,-2,76,"

What I am going to discuss is a typical situation.

+ +

Most of the developers I know work on functionality first and then on design. BUT when clients check the builds, they usually check the design first, without bothering much about functionality.

+ +

And since we are creating for clients, shouldn't we work on what clients are going to check first?

+ +

What risks are involved if we work on UI first?

+ +

Thanks!

+",23099,,,,,43113.24097,"When working on a new software (web, app, etc), should I first work on design or functionality?",,1,3,,43124.48542,,CC BY-SA 3.0,, +363933,1,,,1/13/2018 6:01,,5,1415,"

I am designing a new application and I don't want to suffer later from the performance of the part that manages discussion threads (posts + replies, very similar to Facebook or StackOverflow posts).

+ +

I wonder which kind of data store / data format I should choose to persist the threads. I looked for an answer to my question, but all I actually found was ""How to tune an RDBMS design to handle this requirement"".

+ +

But is an RDBMS really the best fit for this? Most answers I could find were somewhat outdated or aimed at tuning some legacy systems, and they did not consider No-SQL DBs.

+ +

I think that handling a big amount of requests using all the proposed answers (like here and here, for example) will hurt performance when the data scales, because of the need for ORDER BY clauses.

+ +

I thought about storing the entire thread as one JSON document for the sake of fast read performance. But I also think it will create problems for updates, maintenance and traffic, especially because I need to apply security rules to thread components (some users can see some replies, others not).

+ +

Actually I am not much into No-SQL DBs; I have only worked slightly with HBase and Solr, and most of my experience is with RDBMSs. I think that document databases are well suited for blog posts, but I have no hands-on experience with that.

+ +

Any recommendations about which kind of database technology would best fit such needs?

+ +

Important note: I am not asking for recommendations of specific products or resources, but for arguments for the choice of technology (RDBMS vs No-SQL DB).

+ +

EDIT: Thanks to the answers below, I revisited the requirements in more detail; they are as follows:

+ +

1- The data is a nested set of ""Issues"" and ""Actions"", and each one can have any number of comments (i.e. issues have actions and actions have issues, and each of the actions and issues can have comments)

+ +

2- A conversation can't include more than five users (a conversation is a set of ""issues"" and ""actions"" and their related comments)

+ +

3- only one conversation is active at a time per set of users

+ +

4- a subset of the conversation can include users other than the rest of the conversation (but not more than five)

+ +

5- System will be distributed (in the future)

+ +

6- It would be good to use a new technology other than RDBMS (unless it hurts)

+ +

7- Frontend is mobile app

+ +

I think from the above that choosing a document DB will be better, especially for points 5 and 6, and also because the data, as described, is not relational, and modeling hierarchical data and enforcing joins will not work well when the data scales.

+ +

Again, many thanks to all who helped. I am still open to any recommendations, including changing the technology.

+",247422,,247422,,43121.56736,43121.56736,Which database technology to choose for storing (post + replies) threads,,2,3,4,,,CC BY-SA 3.0,, +363934,1,363947,,1/13/2018 7:21,,2,83,"

I have a Python project where I have a few classes that model the various things I'm considering (e.g., a car class, a driver class and a street class), and then I have a single helper function that does a simple, low-level operation, which I use in some classes.

+ +

Is it considered good practice to put this function in a separate module? It seems unsatisfying to create a module that contains just one function, yet leaving it in the main file is also unsatisfying, since it clutters the code.
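For concreteness, this is the kind of thing I mean; the helper below is only a made-up example:

```python
# helpers.py -- a module holding just the one low-level helper;
# it can grow later if more helpers appear
def clamp(value, low, high):
    """Keep value within [low, high]."""
    return max(low, min(value, high))

# A class module (say car.py) would then just do:
#     from helpers import clamp
print(clamp(150, 0, 120))  # 120
```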

+ +

Please note that I'm more of a beginner, with <1 year of experience.

+",293237,,293237,,43113.43542,43143.71736,"Organizing helper operations in your SW project, if there are very few helper functions",,1,2,,,,CC BY-SA 3.0,, +363937,1,,,1/13/2018 9:13,,2,526,"

There's a large shared object (a kind of OpenGL context) in my project. It is accessed from multiple threads. To ensure that only one thread at a time is using SHARED_OBJECT, a mutex (a kind of NSRecursiveLock) is used.

+ +

The problem is that sometimes MAIN_THREAD is waiting a lot while some BG_THREAD is doing something with SHARED_OBJECT.

+ +

I am not supposed to change the architecture (i.e. to rewrite this mutex scheme).
The current problem is the following: there is my_backround_func (which takes a lot of time), which is called before main_thread_func1 (which takes a few milliseconds) and main_thread_func2.

+ +

These functions are spread all over the project. To fix the problem, I created a singleton class Dispatcher, which lets main_thread_func* be registered, and lets my_backround_func ask whether all main_thread_funcs have already been executed.

+ +

It uses Objective C mechanisms like NSStringFromSelector and ""_cmd"".

+ +

At the start of the program I register functions:

+ +
[[MyBackgroundFunctionDispatcher dispatcher] setCanCallBgFunc:NO forSelector:NSStringFromSelector(@selector(main_thread_func1:))];
+
+[[MyBackgroundFunctionDispatcher dispatcher] setCanCallBgFunc:NO forSelector:NSStringFromSelector(@selector(main_thread_func2:))];
+
+ +

And at the end of main_thread_func1 and main_thread_func2 I call

+ +
[[MyBackgroundFunctionDispatcher dispatcher] setCanCallBgFunc:YES forSelector:NSStringFromSelector(_cmd)]; 
+
+ +

All this time the background thread is waiting:

+ +
while ( ![[MyBackgroundFunctionDispatcher dispatcher] setCanCallBgFunc] )
+    sleep_ms(10);
+
+ +

So, I have ensured that the background function will be called AFTER the registered main thread functions. MyBackgroundFunctionDispatcher can be extended to hold registered functions not only for my_backround_func, but also for my_backround_func1, my_backround_func2.

+ +

Still, the solution looks ugly, and I suspect I am missing some good pattern.

+ +

Could you give me any advice, please?

+",124719,,,,,43113.38403,How to dispatch these functions in Objective C to not lock Main thread?,,0,1,,,,CC BY-SA 3.0,, +363939,1,363941,,1/13/2018 10:04,,1,1031,"

I have a class library that calls dll_A.

+ +

dll_A has dependencies on WPF.

+ +

How do I wrap or isolate my class library so that the code calling my library does not need to have a dependency on WPF?

+ +

(My class library obviously has to depend on WPF, but the only output is Byte[]).

+ +

In other words, I want my library to act as an entirely different process, as a sort of 'buffer', and the calling code does not care about how it works, only that it outputs Byte[].

+",30444,,1204,,43125.66667,43125.66667,How to hide the dependencies of a dll from the code calling it?,<.net>,2,2,2,,,CC BY-SA 3.0,, +363940,1,,,1/13/2018 10:30,,2,182,"

I have to compute small mathematical problems for my work. My main work is to develop and solve these problems analytically. I then have to compute these problems in Mathematica, but this is not my main task.

+ +

I am not unwilling to do this, but I think I could save time by outsourcing it to someone with more technical experience in e.g. Mathematica.

+ +

For example, here is a typical mathematical problem that I would want to outsource. I'd like to write the problem in Latex, outsource it to a freelance coder, and get back the result.

+ +

However, my first attempts to do this on freelancer.com have derailed mainly due to communication. Those freelancers were not native English speakers.

+ +
+ +

So I am wondering if you have some tips on how to outsource this kind of small project to a freelancer online, i.e. on how to communicate most effectively so that the process is as smooth as possible and requires as little unnecessary effort and as few problems as possible.

+ +

(I am thinking mostly of $5 to $50 projects. This seems to be normal on fiverr.com/freelancer.com, but I'm not sure it can provide quality. It might, though, since this type of assignment is quite small.)

+",293251,,1204,,43113.71597,43121.49722,Getting started with outsourcing small scientific computation tasks to online freelancers,,3,2,,,,CC BY-SA 3.0,, +363944,1,,,1/13/2018 12:58,,1,153,"

I have a couple of inactive Pinax sites at different versions (0.9.x and 0.7.x). These both started with two common features:

+ +
    +
  1. While the version of Pinax was the most recent to have a (nonempty) social starter project, they were both relatively old; and:

  2. +
  3. Installation hinged on procuring some extremely rare and difficult dependencies.

  4. +
+ +

In looking at this, there seems to be a theme of increasing fragility. Dependencies seem to constitute single points of failure, and the trend is to have more and more of them.

+ +

Any suggestions about how to cope with this, beyond creating a virtualenv while you still can? What options should I consider if I want to minimize adding (possibly ephemeral) dependencies? Are there ways to estimate how ephemeral a given dependency is?

+ +

Possibly with enough duct tape, I might for instance have a gallery of virtualenvs, and ensure that every single version of every single dependency is available in source and installed format.

+ +

I'm looking at creating an entirely self-contained project with its own ""roll your own"" apps, not because I think this is desirable or Pythonic in itself, but to quarantine most or all single points of failure in my own code, which ideally should be working and deployable after downloading Python, Django (if needed) and my project alone.

+",65767,,110531,,43114.79375,43118.84306,What are the most effective ways to manage rot in your platform's dependencies?,,1,1,1,,,CC BY-SA 3.0,, +363952,1,,,1/13/2018 19:19,,0,9460,"

I’m seeking a term and possibly the code behind what would help me implement that term in Python.

+ +

I have been working on a text-based Python journaling application. When I want to review my journal from the command-line shell, it prints out a series of logs like this:

+ +

Log_1”...”
+Log_2”...”
+Log_3”...”

+ +

The problem is, there are a lot of logs under a lot of different dates, so the whole journal looks dense, cluttered, and messy.

+ +

I’m not a writer or a language arts expert(I barely use MS-Word), but what I want is to create a one line space between the print of each log:

+ +

Log_1”...”

+ +

Log_2”...”

+ +

Log_3”...”

+ +

I don’t know what that space in between each log would be called, which made it impractical to just do some Google research.

+ +

What would the space be called? And is there specific code that could be passed to print() which would create the output of that space? Thank you.
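For reference, here is a minimal sketch of the spacing I'm after (print() normally ends each line with one newline; printing an extra one leaves a blank line):

```python
logs = ['Log_1"..."', 'Log_2"..."', 'Log_3"..."']

# end="\n\n" finishes each log with two newlines: one ends the
# line, the other leaves a blank line before the next log
for log in logs:
    print(log, end="\n\n")

# equivalent one-liner:
# print("\n\n".join(logs))
```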

+",289432,,,,,43388.36389,Name and code to space between lines/paragraphs,,2,6,,,,CC BY-SA 3.0,, +363957,1,364004,,1/14/2018 0:21,,4,1485,"

I'm a first timer here so let me know if I should post this question in a different forum!

+ +

I have a Python program that takes in user input but is only useful when you're offline. I wanted to make the UI better, so I thought using HTML, CSS, and JavaScript would be perfect. So my question is: is it possible for Python and JavaScript to communicate variables with each other when offline (JS front end, Python back end)?

+ +

Another way to think about it is like how you can choose a script to run on a form submission (but as far as I know this only works if the script is on a server).

+ +

Any help would be much appreciated!

+",293284,,,,,43115.54653,Python and JavaScript integration for offline use,,1,1,,,,CC BY-SA 3.0,, +363960,1,363995,,1/14/2018 4:10,,2,1072,"

I am new to AWS, and I have learnt and developed code in Spark/Scala.

+ +

My application basically merges two files in Spark and creates the final output.

+ +

I read both files (MAIN files and INCR files) in Spark from an S3 bucket.

+ +

Everything is working fine and I am getting the correct output. But I don't know how to automate the whole process to put it in production.

+ +

Here are the steps I am doing in order to get the output.

+ +

STEP 1: Loading MAIN files (5K text files). I am reading files from FTP on EC2 and then uploading them to the S3 bucket.

+ +

STEP 2: Loading INCR (incremental) files the same way as I am loading the MAIN files.

+ +

STEP 3: Creating an EMR cluster manually from the UI.

+ +

STEP 4: Opening a Zeppelin notebook, copy-pasting the Spark/Scala script and running it.

+ +

STEP 5: Again creating an EC2 instance to read the S3 bucket and send the output files from S3 to the FTP client.

+ +

I am using EC2 because in my case I don't have a direct connection from S3 to FTP. We are in the process of getting Direct Connect from AWS.

+ +

Please advise me on how I can best automate this.

+",280187,,,,,43116.38194,How to automate my AWS spark script,,1,0,,,,CC BY-SA 3.0,, +363962,1,,,1/14/2018 5:16,,4,126,"

My question is in the context of serverless architecture (e.g. AWS Lambda) and how one interacts with the databases in such a system.

+ +

Typically in a 3-tier architecture, we have a web service which interacts with the database. The idea here is to ensure that one database table is owned by one component, so a change there does not require changes in multiple places, and there is also a clear sense of ownership, so scaling and security are easier to manage.

+ +

However, moving to a serverless architecture, this ownership is no longer clear, and exposing a web service to access a database and having a Lambda use this web service does not make sense to me.

+ +

I would like to know a bit about the common patterns and practices around this.

+",192100,,281325,,43657.73472,43657.73472,Serverless Architecture - Integrating with Data Layer,,1,0,3,,,CC BY-SA 4.0,, +363963,1,,,1/14/2018 5:20,,2,194,"

I have written a basic web app in PHP, using MongoDB as the database engine. The app basically inserts records into the database as they become available, queries the database, and displays the data on a web page when the user visits the page.

+ +

When the database is queried and a cursor is returned, some processing is done on each record, like checking the age of an individual and deciding which age category (like under-age, over-aged, just-the-right-age ;) ) he falls into, and then the data is displayed on the web page.

+ +

My Question: Previously I developed this system for 500 database records. It worked fine. BUT now I have to develop this system for 1 million or more database records.

+ +

So what changes should I make to the existing system to make it ready for that much data?

+",293291,,9113,,43114.90278,43114.90278,"How to make a basic database application, previously developed for 500 records, ready for 1 million records?",,1,6,,,,CC BY-SA 3.0,, +363967,1,,,1/14/2018 8:08,,1,60,"

I read about Payara (GlassFish) server clusters and found out that clustering with Payara only replicates sessions across multiple servers.

+ +

But I use JWT for my project, so I don't use sessions at all.

+ +

I decided not to cluster the servers, and to just use multiple servers without connecting them to each other, with a load balancer in front of them. Am I losing something?

+",150418,,,,,43114.64583,should I use server cluster when my application does not work with session,,1,0,,,,CC BY-SA 3.0,, +363968,1,363969,,1/14/2018 8:32,,3,82,"

I'm not even sure ""relevancy"" is the most accurate word, so I'll just describe the problem:

+ +

I'm building an app that needs to somehow parse product descriptions from a popular website (let's just say it's Amazon) and figure out which certifications the product has based on the text in the description alone. The descriptions for these products are not always written the same way (because they're written by different companies), but do always contain certain keywords that I'm looking for -- and the keywords have to be ""close together"" in the description in order to be considered for the resultset.

+ +

For example, given the following CSV data:

+ +
ProductName,ProductDescription
+Product1,Product1 is a really cool product that is certified for Certification1 on Region1
+Product2,Product2 has Region2 which has Certification3 and Region3 with Certification4. It also has Certification5
+
+ +

I'd want to generate the following output:

+ +
{  
+   ""Product1"":{  
+      ""Region1"":""Certification1"",
+      ""UnknownRegions"": []
+   },
+   ""Product2"":{  
+      ""Region2"":""Certification3"",
+      ""Region3"":""Certification4"",
+      ""UnknownRegions"":[  
+         ""Certification5""
+      ]
+   }
+}
+
+ +

I have almost no idea how to solve this problem, other than one thought: can some NLP algorithm help me to achieve the desired output above? If so, which one? I've heard of a technique called Named Entity Extraction but I don't know if it applies here or not.
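The only thing I've sketched so far is a naive word-window check; this is my own ad-hoc idea, not a known NLP technique, and it clearly won't survive all phrasings:

```python
def pair_by_proximity(description, max_gap=4):
    """Pair each Certification token with the nearest Region token
    within max_gap words; otherwise report it as unknown.
    (The Region*/Certification* names come from the toy data above.)"""
    tokens = [t.strip(".,") for t in description.split()]
    regions = [(i, t) for i, t in enumerate(tokens) if t.startswith("Region")]
    paired, unknown = {}, []
    for i, t in enumerate(tokens):
        if not t.startswith("Certification"):
            continue
        candidates = [(abs(ri - i), r) for ri, r in regions if abs(ri - i) <= max_gap]
        if candidates:
            paired[min(candidates)[1]] = t  # nearest region wins
        else:
            unknown.append(t)
    return {**paired, "UnknownRegions": unknown}

desc = ("Product1 is a really cool product that is certified "
        "for Certification1 on Region1")
print(pair_by_proximity(desc))  # {'Region1': 'Certification1', 'UnknownRegions': []}
```

It gets Product1 right, but on Product2 the nearest region is not always the intended one, which is exactly why I suspect something smarter (NLP?) is needed.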

+ +

Any advice is much appreciated here. Thank you in advance!

+",246069,,,,,43114.97292,"What approaches can I take to figure out the ""relevancy"" of certain terms in a string?",,1,1,,,,CC BY-SA 3.0,, +363973,1,,,1/14/2018 11:19,,2,53,"

I have read a couple of articles on Google but am not sure how a DB client (for example, a Java application) connects to a clustered DB. All the articles I read say the client will write to the master node but will use a slave for reads.

+ +

My question is: how will the client know which one is the master (and its location) and which one is a slave (and its location)? Does Oracle use some router server (sitting separately on an existing or a different node), so that the Java client connects to this router and it is the responsibility of the router to send the calls to the master or a slave based on the request type (DDL or DML)?

+",260829,,,,,43115.25417,How does client connect to replication server in oracle?,,1,0,,,,CC BY-SA 3.0,, +363985,1,363986,,1/14/2018 19:52,,5,480,"

There exist a number of articles/blogs explaining the Dependency Inversion Principle (DIP) using Swift; to name a few (top Google hits):

+ + + +

Now, Swift is widely described (even by Apple) as a protocol-oriented programming language, rather that an OOP language; so naturally these articles realize the abstractions in DIP using protocols rather than inheritance, however specifically heterogeneous protocols to allow the policy layer to use polymorphism to invoke lower layers without needing to know any implementation details. E.g.:

+ +
// Example A
+protocol Service {
+    func work()
+}
+
+final class Policy {
+    // Use heterogeneous protocol polymorphically.
+    private let service: Service
+
+    init(service: Service) { self.service = service }
+
+    func doWork() { service.work() }
+}
+
+final class SpecificService: Service {
+    func work() { /* ... */ print(""Specific work ..."") }
+}
+
+let policy = Policy(service: SpecificService())
+
+// Resolves to doWork() of SpecificService at runtime
+policy.doWork() // Specific work ...
+
+ +

I realize polymorphism is one of the key concepts in DIP (unless I'm mistaken), but from a Swift implementation perspective, I'd rather see DIP applied using protocol-constrained generics rather than runtime polymorphism. E.g.:

+ +
// Example B
+protocol Service {
+    func work()
+}
+
+final class Policy<PolicyService: Service> {
+    private let service: PolicyService
+
+    init(service: PolicyService) { self.service = service }
+
+    func doWork() { service.work() }
+}
+
+final class SpecificService: Service {
+    func work() { /* ... */ print(""Specific work ..."") }
+}
+
+let policy = Policy(service: SpecificService())
+
+// policy.service specialized as SpecificService instance at compile time
+policy.doWork() // Specific work ...
+
+ +

I haven't seen anyone use generics in the context of DIP and Swift, so I'm probably the one in the dark here, hence this question.

+ +

As a design principle, I believe B above achieves the same goal as A, w.r.t. DIP; and when applied in a statically typed protocol-oriented programming language that generally prefers composition over inheritance, specifically (afaik) protocols and generics over protocols and polymorphism, I would prefer using B. This is naturally under the constraint that we only ever use a single specialized Policy at a time, and fall back on DIP to ease changes in the low-level details by decoupling/dependency inversion.

+ +
+ +

Question: Would Example B above be considered a valid application of DIP, even if a specialized Policy ""knows"" about the concrete Service at compile time (due to generics; still de-coupled by abstractions applied as constraints to the generic placeholder)?

+",222289,,222289,,43114.88194,43115.01181,Dependency Inversion Principle (Swift) - Applicable also without polymorphism? (Abstraction: constrained generics),,1,0,2,,,CC BY-SA 3.0,, +363988,1,,,1/15/2018 0:40,,1,219,"

I really love the way open source projects use RFCs as a tool to get feedback and ideas from the wider community. I've been especially noticing this over the years with the way Ember has been doing its development.

+ +

I'm thinking about RFCs in the context of 'closed source' projects. The kind a consultancy or a software house would do for their clients. Are there elements that we can take from this concept and use it in projects that don't have such a big community to draw from? Has anyone done something like this before? Is it feasible?

+ +

Thoughts?

+ +

Update

+ +

Do you think there's an opportunity for developers to create something similar to an RFC process for features they are going to implement and how they are going to implement it? Kind of an early feedback loop. Or do you think it will be too much overhead?

+",100839,,100839,,43115.98056,43116.10764,How can we use RFCs in closed source projects?,,2,7,,,,CC BY-SA 3.0,, +363989,1,363996,,1/15/2018 2:36,,1,635,"

I have the following pure function (f2 and f3 are pure too):

+ +
class X {    
+  def f1(arg: Type1): Type2 = {
+    val x = f2(arg)
+    val y = f3(x)
+    y
+  }
+  def f2...
+  def f3...
+}
+
+ +

Now, I would like to move f2 and f3 out, and I can think of 2 options to do so. I am wondering if one is more functional than the other.

+ +

Option 1

+ +

Move f2 and f3 to a new trait and mix the new trait into X like so:

+ +
trait T {
+  def f2... // with implementation
+  def f3...
+}
+
+class X extends T {
+  def f1... // same implementation as earlier
+}
+
+ +

Option 2

+ +

Move f2 & f3 to a new trait (but with the implementation in a class) and dependency inject the trait into X like so:

+ +
@ImplementedBy(classOf[T1Impl])
+trait T {
+  def f2... // No implementation
+  def f3... 
+}
+
+@Singleton
+class TImpl extends T {
+  override def f2... // Implementation here
+  override def f3...  
+}
+
+class X @Inject() (t: T) {
+  def f1 ... = {
+    val x = t.f2(arg)
+    val y = t.f3(x)
+    y
+  }
+}
+
+ +

One half of me thinks option 1 is more functional (involving no OO baggage). The other half (given the OOP/Java history I come from) screams that using inheritance for code sharing is a bad idea. In particular, if X and T are unrelated, then X extends T (X is a T) makes the code look contrived.

+ +

Using composition makes the code look more natural (X has a T) but is it less functional?

+",293311,,,,,43115.35,Does dependency injection fly in the face of functional programming?,,1,5,1,,,CC BY-SA 3.0,, +364002,1,364030,,1/15/2018 12:31,,4,529,"

I am struggling to define methods in OOP. Currently I am practicing with this scenario: ""A hospital has started the development of a new system to keep records of analysis done by patients and the doctor who asked them.""

+ +

I defined four classes: Patient, Doctor, Analysis and Hospital.

+ +
    +
  • Patient: represents the person who goes to the hospital. May know which analyses they have done.
  • +
  • Doctor: represents a professional of medicine. May know which analyses they have requested.
  • +
  • Analysis: represents a study requested by a specific doctor for a specific patient.
  • +
  • Hospital: keeps track of doctors, patients and analysis.
  • +
+ +

This is the UML class diagram:

+ +

Now I am not comfortable with this diagram because:

+ +
    +
  1. If any method has access to a patient, then it could execute joe.addAnalysis(a), but analysis ""a"" is never added to the Hospital. This would create an inconsistency.
  2. +
  3. Same as above but with doctor instead of patient.
  4. +
+ +

So in my second attempt, I removed methods addAnalysis() and getAllAnalysis() from Patient & Doctor. Now Hospital has

+ +
    +
  • getPatientAnalysis(p: Patient): Analysis[0..*]
  • +
  • getDoctorAnalysis(d: Doctor): Analysis[0..*]
  • +
+ +

But in this case I wouldn't be able to ask a patient for their analysis list directly. The same goes for doctors.
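To make the trade-off concrete, here is a rough sketch (written in Python for brevity; the names are my own) of a middle ground where Hospital stays the single owner of the analysis list and Patient merely delegates to it:

```python
class Analysis:
    def __init__(self, patient, doctor):
        self.patient, self.doctor = patient, doctor

class Hospital:
    def __init__(self):
        self._analyses = []  # single source of truth

    def add_analysis(self, analysis):
        self._analyses.append(analysis)

    def analyses_for(self, patient):
        return [a for a in self._analyses if a.patient is patient]

class Patient:
    def __init__(self, name, hospital):
        self.name = name
        self._hospital = hospital

    def analyses(self):
        # Delegates to Hospital, so no duplicated (inconsistent) state
        return self._hospital.analyses_for(self)

hospital = Hospital()
joe = Patient("Joe", hospital)
hospital.add_analysis(Analysis(patient=joe, doctor=None))
print(len(joe.analyses()))  # 1
```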

+ +

So the question is: is my second attempt better? If so, then what happens with ""patients know which studies they've done""?

+",293359,,6967,,43115.53403,43116.54653,Identify methods on OOP,,2,5,3,,,CC BY-SA 3.0,, +364003,1,,,1/15/2018 12:59,,3,59,"

I have an iOS app that currently has over 2500 images. These are generated into an asset catalog to make use of thinning (2500 x 5 variants = lots of files). The files are largely accessed at random.

+ +

Batches of files will be downloaded via in-app purchase. Eventually, I'll end up with 10K images, so possibly 50K files.

+ +

Is this the best way to store these files?

+ +

Under windows, I could batch them into separate DLLs. Does iOS have a similar concept?

+",10208,,10208,,43115.83333,43115.83333,What's the best way to efficiently build an iOS App with thousands of small resource files?,,1,0,,,,CC BY-SA 3.0,, +364005,1,364006,,1/15/2018 13:09,,1,383,"

Here is a piece of cool code to add the two integers, a and b:

+ +
NameService nameService = NameService.getSingletonInstance();
+OperationService operationService = nameService.resolve(OperationService.class);
+ValueFactory factory = OperationServiceFactory.newInstance();
+AbstractValue va = factory.newIntegerValue(a);
+AbstractValue vb = factory.newIntegerValue(b);
+Operator operator = operationService.resolve(AdditonService.class);
+AbstractValue vc = operator.performOperation(va, vb);
+int c = nameService.resolve(IntegerDecorator.class).getValue(vc);
+
+ +

Unfortunately I cannot paste the actual production code, which does a little bit more (though not much), but the problem should be obvious: there is much more code than minimally required, and despite the ""design patterns applied"" it is very difficult to read or refactor. It could be written much more simply and briefly, but the author of the code claims you are being unprofessional for saying so. What is the problem with this code? It surely can't be that there is none.

+",81278,,7422,,43115.55139,43115.55208,Does this anti-pattern have a name?,,1,9,,,,CC BY-SA 3.0,, +364007,1,,,1/15/2018 13:18,,2,109,"

We have a very large search service (written in Go, if it matters) that runs on top of Elasticsearch, receives requests, builds the corresponding query, and returns the results (with some post-processing). The service handles thousands of requests per second and should be able to serve requests quickly and efficiently.

+ +

Since the service just READS from the database (writes are handled outside of the service, asynchronously) and builds very complex queries that vary between users, requests, and AB tests being run in production, most of the code in the service is code in charge of building the query.

+ +

Currently, the Elasticsearch query is built mixed together with business logic (AB tests running in production, classification of the user, etc.). In a classic OO application with a relational database, I'd probably use a DAL that receives a request, knows how to translate it to SQL, and then converts the returned value into something that the BL knows how to handle.

+ +

Building such a mechanism (an abstraction of our query, an intermediate language between our domain query language and the ES query language) feels like overhead. Since our whole service is really coupled to Elasticsearch and takes lots of ES features into consideration in order to increase performance, it feels like we'd just be building our own objects that replicate ES objects, and then doing the mapping.

+ +

On the other hand, we are considering the possibility that we'll want to replace the package querying ES, or upgrade the ES version through breaking changes, and then we'd have to change a lot of code scattered through the whole service rather than centralized in one place.

+ +

Any advice or help would be appreciated. Thanks.

+",293385,,,,,43118.61875,Decouple service complex query building from datastore,,1,1,,,,CC BY-SA 3.0,, +364011,1,364035,,1/15/2018 14:01,,3,82,"

If I have an app that will authenticate users using Active Directory, authorize them, and link them with data stored in a SQL DB.

+ +

Which one is best practice ?

+ +
    +
  1. To import data repeatedly from AD and store it in the DB as (Id, UserName), and each time I authenticate, connect to AD to make sure the password is OK and then check the role from the DB?

  2. +
  3. Keep authenticating against AD, but get the GUID of the user and set it as a foreign key in the DB tables?

  4. +
  5. Neither of the previous approaches is correct and there's another, better solution?

  6. +
+ +

And what if the system should have the ability to have both registered users from AD and registered users in a users table in the DB?

+",186303,,,,,43115.99792,What's the best practice to build an app that can authenticate using AD and DB users together?,,2,0,,,,CC BY-SA 3.0,, +364016,1,,,1/15/2018 15:09,,-3,82,"

I work on a Java web application. +In this application, users can generate reports (xls/pdf) and create packs of multiple reports (using PDFBox, iText).

+ +

More and more data has been added to the application over the past few years.

+ +

Report generation happens on the same server as the application, which causes slowdowns or, worse, GC overhead when multiple users generate reports at the same time or there is too much data to convert, so the application goes down...

+ +

What is the best way to generate PDFs in a web application?

+ +

Export all the generation to another server?

+",293399,,64132,,43115.68264,43115.69028,Web-Application with file generation pattern,,1,0,,,,CC BY-SA 3.0,, +364051,1,364052,,1/16/2018 8:58,,118,17117,"

First off, I am aware that many questions have been asked about VCS as a solo developer, but they are often too broad. This concerns only branching, and still it has been marked as a duplicate... the supposed duplicate is, again, marked as a duplicate of another question that is too broad and doesn't concern branching specifically. That's what makes my question unique.

+ +

What are the advantages, if any, of using branching as a solo developer? I've often seen it recommended even in a solo-dev context, but as far as I can see, beyond using a 'master' trunk for development, and branching off for working, release-ready code, I don't see how I could harness the power of branching (for example, to compartmentalize new features) without over-complicating the whole development process.

+",293466,,,user22815,43116.74792,43117.08403,What are the advantages of using branching as a solo developer?,,5,10,29,,,CC BY-SA 3.0,, +364058,1,364060,,1/16/2018 10:41,,4,350,"

Note - This is about software engineering.

+ +

A friend at work said:

+ +
+

Did you know that Facebook and Atlassian have abolished project managers for their software dev teams?

+
+ +

This sounded familiar, but I couldn't find a reference confirming this.

+ +

I'm asking how it is possible for large companies to maintain software delivery accountability when Project Managers are removed.

+ +

It seems you turn a single responsibility into a group responsibility, which is less effective.

+ +

My question is: How is it possible for large companies to run software development teams without Project Managers?

+",13382,,,,,43116.60972,How is it possible for large companies to run software development teams without Project Managers?,,1,2,,,,CC BY-SA 3.0,, +364059,1,,,1/16/2018 10:50,,-3,166,"

I'm posting a copy of this post here because this place is more relevant for my question, and I haven't removed the original question from Stack Overflow because I got comments there and started conversations with users.

+ +

I often come across, in my code or in code from colleagues, a bunch of init methods in the onCreate() method of an Activity, looking like this sample:

+ +
onCreate() {
+    initUI();
+}
+
+private void initUI() {
+    initToolbar();
+    initPriceView();
+    initDistanceView();
+    initSectorsView();
+    initRouteList();
+    initSwipeLayout();
+    initArrivalTimeContainer();
+    initConditionsView();
+  }
+
+ +

That's what bothers me here:

+ +
    +
  • Sometimes the order of the method calls is important, and changing it here leads to crashes.
  • +
  • Doing all the UI work in one method leads to sheets of code with bad readability.
  • +
  • The methods look scattered, chaotic, and unrelated in the code. We can accidentally skip one method call in the initUI() method and we will get a bug.
  • +
+ +

What do you do about this problem?

+",293480,,,,,43120.08403,How to refactor bunch of init methods in onCreate() method?,,3,2,,,,CC BY-SA 3.0,, +364064,1,364068,,1/16/2018 11:43,,7,4718,"

We would like to use Driessen's git branching model, but we also have a QA side. I think I understand how this git flow works, but I'm still not sure about testing. For example, I have five new features, each of them in its own branch, and I want to hand these features over to testing. Now I'm confused about where to merge them. Into the develop branch? If so, what happens when QA rejects only two of them and I have already merged all five features? I would like to keep the features separated in their own branches, but I would also like to use a pull request to see the diffs and write comments, and again there is the problem of which branch to merge into.

+ +

In other words, according to the diagram all new features (i.e. 5) should be merged into the develop branch, from the develop branch into the release branch, and then from the release branch into master (I don't care about hotfixes right now - they are pretty clear). But where is the place for QA, and what about the possibility that they reject some of the features? What would I then need to change in the diagram? I apologize for my English; I do what I can.

+",293486,,,,,43116.53611,Git branch model with QA and branches,,1,0,2,,,CC BY-SA 3.0,, +364067,1,364078,,1/16/2018 12:27,,1,3375,"

I have observed in a lot of C# examples that the following pattern is being followed, but I am not sure how it will help us in the long run.

+ +

The typical approach I have seen is:

+ +
    +
  1. create a interface
  2. +
  3. implement interface
  4. +
  5. create a manager
  6. +
  7. call manager
  8. +
+ +

It would be really nice if anyone could tell me how this approach helps in a real-world scenario.

+ +

interface IRestService for various web requests

+ +
public interface IRestService
+    {
+        Task<List<TodoItem>> RefreshDataAsync ();
+
+        Task SaveTodoItemAsync (TodoItem item, bool isNewItem);
+
+        Task DeleteTodoItemAsync (string id);
+    }
+
+ +

Code that implements IRestService interface

+ +
public class RestService : IRestService
+{
+    HttpClient client;
+
+    public List<TodoItem> Items { get; private set; }
+
+    public RestService ()
+    {
+        var authData = string.Format(""{0}:{1}"", Constants.Username, Constants.Password);
+        var authHeaderValue = Convert.ToBase64String(Encoding.UTF8.GetBytes(authData));
+
+        client = new HttpClient ();
+        client.MaxResponseContentBufferSize = 256000;
+        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(""Basic"", authHeaderValue);
+    }
+
+    public async Task<List<TodoItem>> RefreshDataAsync ()
+    {
+
+
+    }
+
+    public async Task SaveTodoItemAsync (TodoItem item, bool isNewItem = false)
+    {
+
+    }
+
+    public async Task DeleteTodoItemAsync (string id)
+    {
+
+    }
+}
+
+ +

Manager that calls the implemented methods

+ +
public class TodoItemManager
+{
+    IRestService restService;
+
+    public TodoItemManager (IRestService service)
+    {
+        restService = service;
+    }
+
+    public Task<List<TodoItem>> GetTasksAsync ()
+    {
+        return restService.RefreshDataAsync (); 
+    }
+
+    public Task SaveTaskAsync (TodoItem item, bool isNewItem = false)
+    {
+        return restService.SaveTodoItemAsync (item, isNewItem);
+    }
+
+    public Task DeleteTaskAsync (TodoItem item)
+    {
+        return restService.DeleteTodoItemAsync (item.ID);
+    }
+}
+
+ +

to execute request

+ +
TodoManager = new TodoItemManager (new RestService ()); 
+TodoManager.GetTasksAsync ();
+
+ +

A few questions are running through my mind:

+ +
    +
  1. Why do we need a manager? Why can't we just use RestService directly?
  2. +
  3. If some day I need to develop a module to fetch contact-related data from the server, do I then need to add addContact(), deleteContact(), and getContact() methods to IRestService?
  4. +
+",26024,,26024,,43116.52361,43116.72222,use of request manager design pattern,,3,1,1,,,CC BY-SA 3.0,, +364070,1,364072,,1/16/2018 13:31,,0,227,"

I've been using LINQ for a while, without knowing that it uses a Reactive pattern that I'm quite fond of.

+ +

I'd like to roll my own function in that pattern but wasn't sure if it's appropriate; I also wanted to throw out ideas about how I would do it.

+ +

My goal is to make a video player service that is injected into classes. The service contains a function which plays a video, but the user can do other things, something like:

+ +
videoPlayer.Play(videoClip).StartAt(3).EndAt(8).OnUpdate((event)=>{}).Finish((event)=>{})
+
+ +

Does this make sense for reactive programming, and how would I go about implementing it in C#? My thought was that videoPlayer.Play returns a class with all the functions such as StartAt and EndAt, which in turn return the same class object.
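Here is a quick prototype of that chaining idea (in Python for brevity; the real code would be C#, and all the names are mine). Each configuration method returns self so the calls can be chained:

```python
class Playback:
    def __init__(self, clip):
        self.clip, self.start, self.end = clip, 0, None
        self._on_finish = None

    def start_at(self, seconds):
        self.start = seconds
        return self          # returning self enables chaining

    def end_at(self, seconds):
        self.end = seconds
        return self

    def on_finish(self, callback):
        self._on_finish = callback
        return self

    def run(self):
        # Stand-in for actually playing clip[start:end].
        if self._on_finish:
            self._on_finish(self.clip)
        return (self.start, self.end)

events = []
played = Playback('intro.mp4').start_at(3).end_at(8).on_finish(events.append).run()
```

In C# the equivalent would be methods returning the enclosing class (or an interface over it) instead of void.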

+ +

Would that be the way to do it?

+",41652,,,,,43116.56944,Reactive Programming in C# - how to roll my own?,,1,3,,,,CC BY-SA 3.0,, +364071,1,364077,,1/16/2018 13:35,,5,360,"

TLDR; I have one repo interface and multiple data sources, each with a different data identifier - how can I maintain having only one method in my interface?

+ +

I have a need to read a domain object OrderData from two different repositories: one is an outside service and the other one is a local database. +Let's call them OutsideServiceOrderRepo and LocalDbOrderRepo. +They both implement an interface called IOrderRepoRead:

+ +
Interface IOrderRepoRead
+  +GetOrder(OrderEntityData) : Order
+
+ +

I'll never use both implementations in the same use case, it's an either/or situation. I have sorted out how to inject one or the other repo at the composition root, depending on the use case. +The problem I have is that each of these data stores has a different identifier for the data I'm after:

+ +
    +
  • Local store has a simple OrderId that I have within my system.
  • +
  • External service requires me to query using a person's Tax_number+Name+what_not etc...
  • +
+ +

The ideas I came up with so far are as follows:

+ +
    +
  1. I have a type called OrderEntityData with properties TaxNumber, Name, OrderId. Both implementations can use this type, each implementation works with the properties it requires: OutsideServiceOrderRepo uses TaxNumber, Name, whereas LocalDbOrderRepo uses OrderId.
  2. +
  3. I drop the OrderEntityData and have two different methods in my IOrderRepoRead.
  4. +
+ +

.

+ +
Interface IOrderRepoRead
+  +GetOrder(taxNumber, name, whatNot) : Order
+  +GetOrder(orderId) : Order
+
+ +

I'm inclined to go with approach #1, but it still feels like there is a certain amount of coupling between the repo and the client code.
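To make approach #1 concrete, here is a rough sketch (in Python for brevity; all type and member names are illustrative): one query object carries every possible identifier, and each repository implementation reads only the fields it understands.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrderEntityData:
    order_id: Optional[int] = None
    tax_number: Optional[str] = None
    name: Optional[str] = None

class LocalDbOrderRepo:
    def get_order(self, q):
        # Uses only OrderId.
        return {'source': 'db', 'key': q.order_id}

class OutsideServiceOrderRepo:
    def get_order(self, q):
        # Uses only the external service's composite identifier.
        return {'source': 'api', 'key': (q.tax_number, q.name)}

def load_order(repo, q):
    # Client code is identical whichever repo is injected.
    return repo.get_order(q)
```

The coupling concern is visible here: the caller must know which fields to populate for the repo it was composed with.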

+ +

How can I efficiently alternate between different identifiers?

+",69757,,,,,43116.77639,"Repository pattern, different identifiers",,3,8,2,,,CC BY-SA 3.0,, +364079,1,364149,,1/16/2018 17:15,,2,733,"

It is generally agreed that overloading a method should not change its behavior, but how much of a method's behavior should be kept consistent?

+ +

Take for example a REST API client which is responsible for interacting with two different endpoints. Both consume a POST request, share the same base URL, and accept the same headers. The only difference is the payload (and of course the endpoints themselves). If one were to strictly follow the aforementioned rule of encapsulating a single behavior under the same method name, a client similar to the one below may be created:

+ +
public class RestClient {
+
+    private final String API_A = ""http://api.somecompany.com/a"";
+    private final String API_B = ""http://api.somecompany.com/b"";
+
+    RestTemplate restTemplate;
+
+    public ResponseA postA(RequestA request) {
+        HttpHeaders headers = getHeaders();
+        HttpEntity<RequestA> entity = new HttpEntity<>(request, headers);
+        return restTemplate.postForObject(API_A, entity, ResponseA.class);
+    }
+
+    public ResponseB postB(RequestB request) {
+        HttpHeaders headers = getHeaders();
+        HttpEntity<RequestB> entity = new HttpEntity<>(request, headers);
+        return restTemplate.postForObject(API_B, entity, ResponseB.class);
+    }
+
+    private HttpHeaders getHeaders() {
+        HttpHeaders headers = new HttpHeaders();
+        headers.setContentType(MediaType.APPLICATION_JSON);
+        headers.setAccept(Collections.singletonList(MediaType.APPLICATION_JSON));
+        return headers;
+    }
+}
+
+ +

This seems alright but also a little silly because of how similar the methods are. Why not use the same method name and just change the request and response type? Would this not imply the endpoint being called? Should there be a separate client for each endpoint?
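For comparison, here is what the collapsed, parameterized version I'm contemplating might look like (a Python sketch; the endpoint map and the injectable sender are inventions for illustration, not the real API):

```python
ENDPOINTS = {'a': 'http://api.somecompany.com/a',
             'b': 'http://api.somecompany.com/b'}

JSON_HEADERS = {'Content-Type': 'application/json',
                'Accept': 'application/json'}

def post(kind, payload, send=None):
    # One method; only the endpoint and the payload vary.
    url = ENDPOINTS[kind]
    if send is None:
        # Default stub so the sketch runs without a network.
        send = lambda u, h, p: {'url': u, 'sent': p}
    return send(url, JSON_HEADERS, payload)

resp = post('a', {'orderId': 1})
```

In statically typed Java the same move would mean one generic method taking the endpoint plus the response Class token, which is exactly what makes me wonder whether the call site still reads clearly.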

+",285980,,285980,,43117.88125,43117.88125,REST client behavior determined by parameter type,,1,4,,,,CC BY-SA 3.0,, +364082,1,,,1/16/2018 18:04,,8,5621,"

Our company runs applications on a Micro Service architecture that includes thousands of services. I am working on a backend application ""X"" that talks to 50+ services. Frontend services call my service ""X"" to execute requests on other services.

+ +

Problem:

+ +

Front end wants to show user friendly messages when something fails on other services.

+ +
    +
  1. Other services do not return user-friendly messages. It is not possible for me to request changes from other teams, as there are several.
  2. +
  3. There are no agreed error codes as such. Other services return a string error message. Currently, it is passed back to the UI. Sometimes the error messages are pointer references (bad code :/)
  4. +
+ +

Possible Solution:

+ +

Check for the error message string and have a mapping in my service to a user-friendly message. But things can break if the callee service changes its error message. Fall back to a default error message when a custom error mapping is not found.
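Sketched out, the mapping-with-fallback idea would be something like this (Python; the error strings here are invented):

```python
FRIENDLY = {
    'ERR_TIMEOUT': 'The service is busy, please retry in a moment.',
    'ERR_NO_FUNDS': 'There are insufficient funds for this operation.',
}
DEFAULT_MESSAGE = 'Something went wrong. Please try again later.'

def to_user_message(raw_error):
    # Exact match first; anything unknown (including a pointer dump or
    # a reworded upstream message) degrades to the safe default.
    return FRIENDLY.get(raw_error.strip(), DEFAULT_MESSAGE)
```

The weak point is exactly the one mentioned above: the mapping silently falls back to the default whenever an upstream team rewords a message.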

+ +

Any more ideas on scalable and sustainable solution? Thanks!

+",171835,,81495,,43116.75903,43143.83194,Handling error messages from others services in Micro Service Architecture,,2,5,1,,,CC BY-SA 3.0,, +364086,1,364110,,1/16/2018 18:27,,20,8808,"

Edit added 2+ years later

+

I "checked" the @dandavis answer because it answers my original question, giving reasons to prefer const foo. However, I am completely convinced by the @Wayne Bloss answer that function foo() is generally superior.

+

Original Question here

+

For example, in this Redux video, the instructor always uses syntax like

+
const counter = (state=0, action) => {
+   ... function body here
+}
+
+

where I would just use the "traditional"

+
function counter(state=0, action) {
+   ... function body here
+}
+
+

Which is actually shorter and, IMO, clearer. It's easier to scan the fairly even and structured left edge of the page for the word "function" than scan the raggedy right edge for a small "=>".

+

Other than this, and trying to be objective, not opinion, is there some useful difference or advantage to the newfangled syntax?

+",90992,,90992,,44214.76389,44214.76389,Why use `const foo = () => {}` instead of `function foo() {}`,,2,5,4,,,CC BY-SA 4.0,, +364087,1,364089,,1/16/2018 18:31,,1,101,"

See the comment inside ChildEntity ::__construct():

+ +
class ChildEntity extends ParentEntity
+{
+    /** @var int */
+    protected $classParameter;
+
+    function __construct(int $classParameter)
+    {
+        /**
+         * Question
+         *
+         * Below are the two ways of initializing the variable of ChildEntity
+         *
+         * Are they both initializing the same child(?) variable?
+         * Are they initializing the parent(?) variable?
+         * Can the child and parent have different values at the same time,
+         * perhaps in different contexts?
+         */
+        $this->classParameter = $classParameter; // init local(?) variable?
+        parent::__construct($classParameter); // init parent(?) variable?
+    }
+}
+
+class ParentEntity
+{
+    /** @var int */
+    protected $classParameter;
+
+    function __construct(int $classParameter)
+    {
+        $this->classParameter = $classParameter;
+    }
+}
+
+$childEntity = new ChildEntity(100);
+
+ +

Why does the below work (slightly different code: the property declaration removed from the parent, and the child uses only the parent constructor to initialize)? It looks like the parent class manipulates a variable found in the child class, without that variable being present in the parent. It is as if the variables in ChildEntity and ParentEntity become one, and the child and parent instances serve as a single instance for all intents and purposes. Is that what actually happens behind the scenes?

+ +
class ChildEntity extends ParentEntity
+{
+    /** @var int */
+    protected $classParameter;
+
+    function __construct(int $classParameter)
+    {
+        parent::__construct($classParameter); 
+        print $this->classParameter;
+    }
+}
+
+class ParentEntity
+{
+    function __construct(int $classParameter)
+    {
+        $this->classParameter = $classParameter;
+    }
+}
+
+$childEntity = new ChildEntity(5);
+
+",119333,,119333,,43116.77708,43116.80833,Can you explain the behavior of PHP in cases when a parent class variable is masked by the child variable of the same name?,,1,0,,43136.03403,,CC BY-SA 3.0,, +364090,1,,,1/16/2018 19:54,,15,6744,"

In the past year, I created a new system using Dependency Injection and an IOC container. This taught me a lot about DI!

+ +

However, even after learning the concepts and proper patterns, I consider it a challenge to decouple code and introduce an IOC container into a legacy application. The application is large enough that a true implementation would be overwhelming, even if the value were understood and the time was granted. Who's granted time for something like this??

+ +

The goal of course is to bring unit tests to the business logic!
+Business logic that is intertwined with test-preventing database calls.

+ +

I've read the articles and I understand the dangers of Poor Man's Dependency Injection as described in this Los Techies article. I understand it does not truly decouple anything.
+I understand that it can involve much system wide refactoring as implementations require new dependencies. I would not consider using it on a new project with any amount of size.

+ +

Question: Is it okay to use Poor Man's DI to introduce testability to a legacy application and start the ball rolling?

+ +

In addition, is using Poor Man's DI as a grass roots approach to true Dependency Injection a valuable way to educate on the need and benefits of the principle?

+ +

Can you refactor a method that has a database call dependency and abstract that call behind an interface? Simply having that abstraction would make that method testable, since a mock implementation could be passed in via a constructor overload.
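That is, something along these lines (a Python sketch; the real code would use a C#-style interface and constructor overloads, and every name here is made up):

```python
class OrderRepository:
    # The abstraction the business code depends on.
    def find_amounts(self, order_id):
        raise NotImplementedError

def fetch_rows_from_db(order_id):
    # Pretend legacy data-access call that used to be inlined.
    return [10, 20]

class SqlOrderRepository(OrderRepository):
    def find_amounts(self, order_id):
        return fetch_rows_from_db(order_id)

class OrderService:
    # Poor Man's DI: default to the real dependency, but let a test
    # pass a fake through the constructor.
    def __init__(self, repo=None):
        self.repo = repo or SqlOrderRepository()

    def total(self, order_id):
        return sum(self.repo.find_amounts(order_id))
```

A test then constructs OrderService with a fake repository, while production code keeps calling the no-argument constructor unchanged.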

+ +

Down the road, once the effort gains supporters, the project could be updated to implement an IOC container, and the constructors that take in the abstractions would already be in place.

+",53055,,53055,,43357.63889,43357.63889,Is Poor Man's Dependency Injection a good way to introduce testability to a legacy application?,,3,3,3,,,CC BY-SA 4.0,, +364093,1,364094,,1/16/2018 20:21,,23,11682,"

I'm using an internal library that was designed to mimic a proposed C++ library, and sometime in the past few years I see its interface changed from using std::string to string_view.

+ +

So I dutifully change my code, to conform to the new interface. Unfortunately, what I have to pass in is a std::string parameter, and something that is a std::string return value. So my code changed from something like this:

+ +
void one_time_setup(const std::string & p1, int p2) {
+   api_class api;
+   api.setup (p1, special_number_to_string(p2));
+}
+
+ +

to

+ +
void one_time_setup(const std::string & p1, int p2) {
+   api_class api;
+   const std::string p2_storage(special_number_to_string(p2));
+   api.setup (string_view(&p1[0], p1.size()), string_view(&p2_storage[0], p2_storage.size()));
+}
+
+ +

I really don't see what this change bought me as the API client, other than more code (to possibly screw up). The API call is less safe (due to the API no longer owning the storage for its parameters), probably saved my program 0 work (due to move optimizations compilers can do now), and even if it did save work, that would only be a couple of allocations that will not and would never be done after startup or in a big loop somewhere. Not for this API.

+ +

However, this approach seems to follow advice I see elsewhere, for example this answer:

+ +
+

As an aside, since C++17 you should avoid passing a const std::string& + in favor of a std::string_view:

+
+ +

I find that advice surprising, as it seems to be advocating universally replacing a relatively safe object with a less safe one (basically a glorified pointer and length), primarily for purposes of optimization.

+ +

So when should string_view be used, and when should it not?

+",127619,,,,,43362.30347,When should I use string_view in an interface?,,3,7,5,,,CC BY-SA 3.0,, +364098,1,,,1/16/2018 22:04,,0,713,"

I'm currently trying to work out a good algorithm that will reliably find the shortest path. The labyrinth takes X and Y dimensions as input; however, obstacles are randomly generated within those dimensions. There is one entrance and at least one exit. The exit and shortest path are found through an ordering of movements: first check whether going up is available, then left, then right, and finally down (movement priority: up->left->right->down). Movements must be horizontal or vertical, so diagonal moves are, unfortunately, illegal.

+ +

My thought was to build a backtracking algorithm to solve this problem; but there is a catch. The labyrinth can be up to a billion squares, meaning X and Y can be very large. Thus, a naive program will time out or have to allocate gigabytes of memory to execute. To reduce the memory footprint, maybe use bitfields and bitwise manipulation (which I have never used as a data structure).

+ +

My weak point is building a strong foundation for approaching a problem like this, and knowing what to take care of. I was wondering what kind of approach I need to consider. I'm happy to discuss further and am curious about your approaches; I like to ask a lot of questions just so I understand efficiently.
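To anchor the discussion, this is the kind of straightforward BFS I would start from (Python sketch). It respects the up->left->right->down priority via the neighbor order and finds a shortest path in an unweighted grid, but it keeps the whole grid plus a visited map in memory, which is exactly what worries me at ~10^9 cells:

```python
from collections import deque

def shortest_path(grid, start, goal):
    # grid[y][x] is truthy for an obstacle; start/goal are (x, y).
    height, width = len(grid), len(grid[0])
    moves = [(0, -1), (-1, 0), (1, 0), (0, 1)]  # up, left, right, down
    prev = {start: None}       # visited set + parent pointers
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            # Walk the parent pointers back to reconstruct the path.
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height \
                    and not grid[ny][nx] and (nx, ny) not in prev:
                prev[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    return None  # no exit reachable
```

The bitfield idea would replace grid and prev with packed bit arrays to shrink the per-cell cost.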

+",293465,,,,,43119.79306,Computing the shortest path in a labyrinth,,1,7,,,,CC BY-SA 3.0,, +364109,1,364114,,1/17/2018 5:21,,-3,211,"

I am trying to test a function that creates a message-exchange graph from IRC chat logs. I am having trouble mocking the dependencies. The function makes use of a util module. The functions inside utils receive arguments and return an output after some processing. The problem is that these util functions are called multiple times inside for-loops and if-conditionals. How do I specify multiple return values based on the input arguments? I would like to know how to unit-test functions similar to these.

+ +

I have been using python standard unittest and mock libraries.

+ +

Source of the code: IRCLogParser

+ +
def message_number_graph(log_dict, nicks, nick_same_list,DAY_BY_DAY_ANALYSIS=False):
+    """""" 
+    Creates a directed graph
+    with each node representing an IRC user
+    and each directed edge has a weight which 
+    mentions the number messages sent and recieved by that user 
+    in the selected time frame.
+
+Args:
+    log_dict (dict): with key as dateTime.date object and value as {""data"":datalist,""channel_name"":channels name}
+    nicks(list): list of all the nicks
+    nick_same_list(list): list of lists mentioning nicks which belong to same users
+Returns:
+   message_number_graph (nx graph object)
+""""""
+message_number_day_list = []
+conversations=[[0] for i in range(config.MAX_EXPECTED_DIFF_NICKS)]
+aggregate_message_number_graph = nx.DiGraph()  #graph with multiple directed edges between clients used
+
+G = util.to_graph(nick_same_list)
+conn_comp_list = list(connected_components(G))
+
+util.create_connected_nick_list(conn_comp_list)
+
+def msg_no_analysis_helper(rec_list, corrected_nick, nick, conn_comp_list,conversations,today_conversation):
+    for receiver in rec_list:
+        if(receiver == nick):
+            if(corrected_nick != nick):                                 
+                nick_receiver = ''
+                nick_receiver = util.get_nick_sen_rec(config.MAX_EXPECTED_DIFF_NICKS, nick, conn_comp_list, nick_receiver)    
+
+                if DAY_BY_DAY_ANALYSIS:
+                    today_conversation = util.extend_conversation_list(nick_sender, nick_receiver, today_conversation)
+                else:
+                    conversations = util.extend_conversation_list(nick_sender, nick_receiver, conversations)
+
+def message_no_add_egde(message_graph, conversation):
+    for index in xrange(config.MAX_EXPECTED_DIFF_NICKS):
+        if(len(conversation[index]) == 3 and conversation[index][0] >= config.THRESHOLD_MESSAGE_NUMBER_GRAPH):
+            if len(conversation[index][1]) >= config.MINIMUM_NICK_LENGTH and len(conversation[index][2]) >= config.MINIMUM_NICK_LENGTH:
+                message_graph.add_edge(util.get_nick_representative(nicks, nick_same_list, conversation[index][1]), util.get_nick_representative(nicks, nick_same_list, conversation[index][2]), weight=conversation[index][0])
+    return message_graph
+
+
+for day_content_all_channels in log_dict.values():
+    for day_content in day_content_all_channels:
+        day_log = day_content[""log_data""]
+        today_conversation = [[0] for i in range(config.MAX_EXPECTED_DIFF_NICKS)]
+        for line in day_log:
+            flag_comma = 0
+
+            if(util.check_if_msg_line (line)):
+                parsed_nick = re.search(r""\<(.*?)\>"", line)
+                corrected_nick = util.correctLastCharCR(parsed_nick.group(0)[1:-1])
+                nick_sender = """"
+                nick_receiver = """"                    
+                nick_sender = util.get_nick_sen_rec(config.MAX_EXPECTED_DIFF_NICKS, corrected_nick, conn_comp_list, nick_sender)        
+
+                for nick in nicks:
+                    rec_list = [e.strip() for e in line.split(':')]
+                    util.rec_list_splice(rec_list)
+                    if not rec_list[1]:
+                        break                        
+                    rec_list = util.correct_last_char_list(rec_list)       
+                    msg_no_analysis_helper(rec_list, corrected_nick, nick, conn_comp_list, conversations,today_conversation)
+
+                    if "","" in rec_list[1]:
+                        flag_comma = 1
+                        rec_list_2=[e.strip() for e in rec_list[1].split(',')]
+                        for i in xrange(0,len(rec_list_2)):
+                            if(rec_list_2[i]):
+                                rec_list_2[i] = util.correctLastCharCR(rec_list_2[i])                            
+                        msg_no_analysis_helper(rec_list_2, corrected_nick, nick, conn_comp_list, conversations, today_conversation)                
+
+                    if(flag_comma == 0):
+                        rec = line[line.find("">"")+1:line.find("", "")]
+                        rec = rec[1:]
+                        rec = util.correctLastCharCR(rec)
+                        if(rec == nick):
+                            if(corrected_nick != nick):                                   
+                                nick_receiver = nick_receiver_from_conn_comp(nick, conn_comp_list)        
+
+        if DAY_BY_DAY_ANALYSIS:
+            today_message_number_graph = nx.DiGraph()
+            today_message_number_graph = message_no_add_egde(today_message_number_graph, today_conversation)                
+            year, month, day = util.get_year_month_day(day_content)
+            message_number_day_list.append([today_message_number_graph, year+'-'+month+'-'+day])
+
+print ""\nBuilding graph object with EDGE WEIGHT THRESHOLD:"", config.THRESHOLD_MESSAGE_NUMBER_GRAPH
+
+if not DAY_BY_DAY_ANALYSIS:
+    aggregate_message_number_graph = message_no_add_egde(aggregate_message_number_graph, conversations)
+
+
+if config.DEBUGGER:
+    print ""========> 30 on "" + str(len(conversations)) + "" conversations""
+    print conversations[:30]
+
+if DAY_BY_DAY_ANALYSIS:
+    return message_number_day_list
+else:
+    return aggregate_message_number_graph
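For the core difficulty, returning different values depending on the arguments a mock is called with, mock's side_effect accepts a callable. A minimal standalone illustration (the function and values here are stand-ins, not the real util API):

```python
from unittest import mock

# Pretend this is one of the util functions called repeatedly inside
# the loops with varying arguments (e.g. correctLastCharCR).
fake_correct_nick = mock.Mock()
fake_correct_nick.side_effect = (
    lambda nick: nick[:-1] if nick.endswith('\r') else nick
)
```

When the mock is patched in place of the util function, every call goes through the lambda, so one mock can cover all the calls inside the loops; call_args_list can then be asserted afterwards.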
+
+",293560,,293560,,43117.40625,43117.40625,Unit-testing function with multiple-dependency,,1,1,0,43131.72569,,CC BY-SA 3.0,, +364117,1,364122,,1/17/2018 8:28,,-3,225,"

Most of the time, I have seen developers declare strings in the fashion below.

+ +

Approach 1:-

+ +
public void method1(){
+String str1 =""Test"";
+
+}
+
+ +

Approach 2:-

+ +

Per my understanding, the better approach would be:

+ +
public void method2(){
+String str2 = new String(""Test"");
+
+}
+
+ +

Again, based on my understanding, the second approach is better than the first, because string literals are interned and stored in permgen, so the literal will not be GC'ed +even when the thread comes out of method 1; but in the second approach str2 will be GC'ed (str2 will not be interned) once the thread comes out of method 2 and GC runs, as it is stored in the heap, not permgen.

+ +

Is my understanding correct ?

+ +

Per my understanding, I should use a literal if the same string is going to be created again and again, as that is good from a performance point of view; +otherwise go for new String() so that it can be GC'ed once it is no longer used?

+ +

Linked related string-literal-String-Object

+",260830,,260830,,43117.52986,43117.52986,Creating String with equal operator vs new operator?,,1,5,,43117.57361,,CC BY-SA 3.0,, +364124,1,364125,,1/17/2018 11:40,,1,338,"

I am developing a mobile app in Android in which I use Telugu (an Indian language) texts. On my mobile, the Telugu alphabet is available, therefore I am not facing any problem testing my app. These characters are available in Android Studio also, so if I give the Unicode code points ranging from U+0C00 to U+0C7F as escape sequences, the Telugu text is displayed on the screen.

+ +

Now my question is - are all Unicode characters, say for example the Telugu (Indian) language alphabet, available on all Android devices worldwide nowadays, i.e., as of January 2018, on devices which run Android 5.0 or 6.0?

+",293588,,209774,,43117.66875,43117.66875,Unicode Telugu language characters,,1,0,,,,CC BY-SA 3.0,, +364129,1,364133,,1/17/2018 15:09,,1,79,"

Is the following class definition a good design?

+ +
class Myclass:
+    def __init__(self,num1,num2):
+        self.complicated_tree = __class__.tree_creator(num1,num2)
+
+    @classmethod
+    def tree_creator(cls,num1,num2):
+        return num1+num2 #in practice, this functions would be really long
+                         #and would return a whole tree of numbers
+
+    #my tree doesn't need all the standard traversing etc. methods, just very few
+    #special ones
+    def specialized_method1(self): 
+        pass
+
+    def specialized_method2(self): 
+        pass
+
+ +

I'm a beginner in Python, and so far every class's __init__ method arguments were identical to the object attributes. In this case that is no longer true, because this class shall contain objects that first need to be constructed in a complicated way: a special type of tree that I first need to generate in 20 lines of code using num1 and num2.

+ +

Is defining such a class this way good design/practice? Or should I generate the whole tree outside of the class, so that the tree's __init__ method is

+ +
    def __init__(self,tree):
+        self.tree = tree
+
+ +

and the tree_creator function is a separate function outside the class?
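A common middle ground keeps __init__ trivial and moves the complicated construction into an alternate constructor (a @classmethod factory). This is only an illustrative sketch; the class name, from_numbers, and the one-line stand-in for the real tree building are invented here:

```python
class Tree:
    """__init__ only stores an already-built tree."""
    def __init__(self, tree):
        self.tree = tree

    @classmethod
    def from_numbers(cls, num1, num2):
        # stand-in for the ~20 lines that really build the tree
        tree = num1 + num2
        return cls(tree)

    def specialized_method1(self):
        return self.tree
```

Callers that already have a tree use Tree(tree); callers with raw numbers use Tree.from_numbers(num1, num2), so both construction paths stay simple and testable.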

+",293237,,,,,43117.64097,__init__ arguments differ from object attributes,,2,0,,,,CC BY-SA 3.0,, +364132,1,,,1/17/2018 15:22,,1,405,"

My team and I are trying to set up some integration tests of a distributed system that is highly dependant on the flow of time and events coming from external sources. Just to give you an idea about what we're working on:

+ +
+

when event A is received within 8:00 AM and 9:00 AM, and afterward, + within 5 minutes, a second event B is received, then write a record + into the DB with the timestamp at which event A has been received.

+
+ +

Now... writing an integration test against the real system seems complex.

+ +

How to set the date of the system under test ?

+ +

The system under test should ideally work with a time defined by the integration tests. Maybe the integration test should set the server time at ""9:00"" when the test starts ? Any other advice ?

+ +

How to send events at a known rate ?

+ +

Events should be sent with a reproducible and known rhythm, i.e. fire event A at 9:05 and event B at 9:06. Maybe a plain programmable scheduler can be used here ?

+ +

How to cope with tests requiring large timeouts ?

+ +

Some test scenarios require that a large amount of time passes between two events. Is it possible to artificially speed up the flow of time, compressing one hour into just a few seconds in the test? I thought about speeding up the clock of the system (not in the sense of overclocking, just in the sense of having the seconds tick faster). It seems to me that this cannot be done on an actual piece of hardware, but maybe it is possible in a virtual machine. Any advice on this topic?

+ +

Constraints

+ +
    +
  • Not all modules that are part of the integration tests are under our direct control.
  • +
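One technique that addresses both the "set the date" and the "large timeout" problems at once is injecting a clock that the test controls, so "an hour passing" is a single method call instead of a real wait. This is a hedged sketch: FakeClock, Correlator and the 08:00-09:00 / 5-minute rule are invented names standing in for the real system:

```python
import datetime

class FakeClock:
    """Test double: time moves only when the test advances it."""
    def __init__(self, start):
        self.now = start

    def advance(self, **delta):
        self.now += datetime.timedelta(**delta)

class Correlator:
    """Records A's timestamp when B follows A within 5 minutes
    and A arrived between 08:00 and 09:00."""
    def __init__(self, clock):
        self.clock = clock
        self.a_time = None
        self.records = []

    def on_event_a(self):
        self.a_time = self.clock.now

    def on_event_b(self):
        a = self.a_time
        if a is None:
            return
        in_window = 8 <= a.hour < 9
        within_5 = self.clock.now - a <= datetime.timedelta(minutes=5)
        if in_window and within_5:
            self.records.append(a)

# the test sets the time explicitly and compresses the waiting
clock = FakeClock(datetime.datetime(2018, 1, 17, 8, 30))
correlator = Correlator(clock)
correlator.on_event_a()
clock.advance(minutes=3)      # no real waiting needed
correlator.on_event_b()
records_written = len(correlator.records)
```

For the modules not under your direct control this does not apply directly, but faking time at the system boundary (NTP, virtualized clocks) follows the same principle: the test, not the wall clock, decides what time it is.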
+",121123,,7422,,43117.66736,43117.66736,How to write integration tests for a distributed system highly dependant on current date and flow of time?,,1,5,,,,CC BY-SA 3.0,, +364139,1,364156,,1/17/2018 16:13,,7,3596,"

In my board game I want to decouple my Player and Board class since I've changed piece moving system several times now and it's been a chore each time. I think I could use some interface for taking a player request to alter the board state but I can't decide if a Command or a Mediator interface is the appropriate solution (or even Observer?)

+ +

My understanding is that a Command executes something that a client wishes to do to a receiver, without the client knowing how it is done. But does a Mediator not do mostly the same thing? You specify a request object and the mediator executes the request on the client's behalf? Does the Mediator facilitate two-way communication, e.g. can a player request to move a piece and, if successful, can the board in turn notify the player of the new board state?

+ +

Basically, I'm having trouble knowing which way to decouple my classes between 3 seemingly related patterns. I've read the GoF discussion on this and I'm still confused since they don't use concrete examples of when they'd be more useful than one another.
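A minimal sketch may make the difference concrete (all names below are invented for illustration): a Command reifies one request as an object the invoker can execute, queue or undo, while a Mediator is a hub that colleagues talk through, and it can push notifications back the other way:

```python
class Board:
    def __init__(self):
        self.squares = {}

    def place(self, piece, dest):
        self.squares[dest] = piece
        return True

# Command: one action as an object; the invoker just calls
# execute() without knowing how the board applies the move.
class MoveCommand:
    def __init__(self, board, piece, dest):
        self.board, self.piece, self.dest = board, piece, dest

    def execute(self):
        return self.board.place(self.piece, self.dest)

# Mediator: players and board only know the hub; the hub also
# carries the answer back (two-way communication).
class GameMediator:
    def __init__(self, board):
        self.board = board
        self.players = []

    def register(self, player):
        self.players.append(player)

    def request_move(self, piece, dest):
        if self.board.place(piece, dest):
            for p in self.players:
                p.board_changed(self.board.squares)

class Player:
    def __init__(self):
        self.last_seen = None

    def board_changed(self, squares):
        self.last_seen = dict(squares)

command_ok = MoveCommand(Board(), "pawn", "e4").execute()
mediator = GameMediator(Board())
player = Player()
mediator.register(player)
mediator.request_move("rook", "a1")
```

Note the two are orthogonal rather than competing: a mediator's request_move could itself accept MoveCommand objects, which is part of why the patterns feel so similar.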

+",293614,,,,,43117.93472,What is the difference between the Command and Mediator patterns?,,1,8,3,,,CC BY-SA 3.0,, +364143,1,,,1/17/2018 17:16,,1,265,"

Here's a scenario I've encountered variations of on many occasions.

+ +

Imagine an XML feed which displays data about three different types of event: concerts, plays and movies. Each has a different set of parameters. We want to scrape this information and store it for our own purposes. So we create a database which has application data and three tables for this data which we'll call event_concert, event_play and event_movie. We stick an ORM over the top.

+ +

Every hour we poll the feed, looking for new data, and saving it in our database. Because our ORM creates typed objects, there are essentially two ways we can do this.

+ +

First, write three separate functions, one for each object type, which create and populate the necessary object and put it in the database. This is clean and easily understandable. But it's more code to write and it's not necessarily the best for maintenance, since new functions must be added to deal with new object types and there's a risk of duplicated code.

+ +

Second, write a single loop which uses reflection and various functions from the ORM context to iterate over the object types (using a fixed prefix like event_ to recognize them), and then iterates over the properties to populate them before saving to the DB. This is messy and hard to understand, but done right it should be a fire-and-forget option that will semi-automatically pick up and deal with new object types with a minimum of fuss.

+ +

In this example it's easy to go with the first option since there are only three objects to save. But what if there's five? Or fifty? Or more? Suddenly handling all those different objects with their own custom functions doesn't seem so attractive any more.

+ +

Is there any other approach or design pattern which achieves a similar outcome and is both clean and capable of dealing with lots of object types?

+ +
+ +

Here's an example of what I mean in pseudocode.

+ +

If we treat the objects as different types:

+ +
GetAndInsertConcerts();
+GetAndInsertMovies():
+GetAndInsertPlays();
+// if we need a fourth object type, we'll need to add a new function
+
+public void GetAndInsertConcerts()
+{
+    var xmlConcerts = GetObjectsFromXML(""Concerts"");
+
+    foreach(var xmlConcert in xmlConcerts)
+    {
+        Event_Concert c = new Event_Concert(xmlConcert);
+        repository.InsertAndSave(c);
+    }
+ }
+ // the functions for plays and movies will repeat this structure
+
+ +

And the second approach:

+ +
string[] eventTypes = repository.GetTablesStartingWithEvent();
+
+foreach(string eventType in eventTypes)
+{
+    string xml = GetXmlForEventType(eventType);
+    // BuildObjectFromXml function is reflection based, likely hard to code & to follow
+    var obj = BuildObjectFromXml(xml);  
+    repository.InsertAndSave(obj);
+}
+
+ +

Worth repeating - the boilerplate in the first approach is fine if you're dealing with a handful of object types you don't expect to change. But writing repeated ""get and save"" functions for a lot of different object types is going to become tiresome.
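As an illustration of a middle ground between the two approaches (a Python sketch with made-up names, not real ORM code): keep one generic loop, but drive it with an explicit type registry instead of reflection, so adding a new event type is a one-line change and there is no reflective magic to debug:

```python
class Repository:
    def __init__(self):
        self.saved = []

    def insert_and_save(self, obj):
        self.saved.append(obj)

class EventConcert:
    def __init__(self, xml):
        self.xml = xml

class EventMovie:
    def __init__(self, xml):
        self.xml = xml

# one line per supported type; no reflection needed
EVENT_TYPES = {
    "Concerts": EventConcert,
    "Movies": EventMovie,
}

def get_xml_for(event_type):
    # stand-in for polling the real feed
    return ["<%s/>" % event_type]

def import_all(repository):
    for name, cls in sorted(EVENT_TYPES.items()):
        for xml in get_xml_for(name):
            repository.insert_and_save(cls(xml))

repo = Repository()
import_all(repo)
```

The registry makes the supported types explicit and greppable, which is usually the main complaint against the reflection variant, while keeping the single shared loop.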

+",22742,,1204,,43117.96597,43118.48403,Is there a better design than reflection to deal with large numbers of ORM objects?,,1,8,,,,CC BY-SA 3.0,, +364145,1,364161,,1/17/2018 18:41,,23,12828,"

The default behavior of assert in C++ is to do nothing in release builds. +I presume this is done for performance reasons and maybe to prevent users from seeing nasty error messages.

+ +

However, I'd argue that those situations where an assert would have fired but was disabled are even more troublesome because the application will then probably crash in an even worse way down the line because some invariant was broken.

+ +

Additionally, the performance argument for me only counts when it is a measurable problem. Most asserts in my code aren't much more complex than

+ +
assert(ptr != nullptr);
+
+ +

which will have a small impact on most code.
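The same tradeoff exists in Python, for comparison: assert statements are stripped when running under python -O. A common pattern for checks that must survive release builds is a small helper that is never compiled out (a sketch with invented names, not a standard library facility):

```python
def require(condition, message="invariant violated"):
    """Like assert, but never stripped by the optimizer."""
    if not condition:
        raise AssertionError(message)

def divide(a, b):
    require(b != 0, "divisor must be non-zero")
    return a / b

# the check fires even in optimized builds
try:
    divide(1, 0)
    check_fired = False
except AssertionError:
    check_fired = True
```

The C++ analogue is defining your own RELEASE_ASSERT-style macro that does not depend on NDEBUG, which many codebases do for exactly the reason described above: failing early beats corrupting state later.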

+ +

This leads me to the question: +Should assertions (meaning the concept, not the specific implementation) be active in release builds? Why (not)?

+ +

Please note that this question is not about how to enable asserts in release builds (like #undef _NDEBUG or using a self defined assert implementation). Furthermore, it is not about enabling asserts in third party/standard library code but in code controlled by me.

+",112442,,112442,,43118.36667,43477.32361,Should there be assertions in release builds,,8,3,2,,,CC BY-SA 3.0,, +364153,1,,,1/17/2018 21:13,,2,97,"

I was talking with some coworkers about the application build and we have some divergences about what each one consider a good practice or something to worry about.

+ +

I learned that a good application build is one that is self-sufficient: no matter how many dependencies and tests it has, the build will take care of them and run smoothly.

+ +

Per example, if I have an application that uses Maven, I expect that a mvn clean install will work with the minimal requirements: Java, working Internet and maven installed (if you use mvnw in the project, even maven installed is optional).

+ +

In this scenario, no matter whether your build runs unit or integration tests with MongoDB, OAuth2, RabbitMQ... it's the programmer's job to mock/stub these dependencies so that the tests run and the build is independent from external services/databases/queues/etc. running.

+ +

To be more clear, when I'm talking about integration tests, I'm talking about the ""narrow"" ones: https://martinfowler.com/bliki/IntegrationTest.html, which typically run with the Maven Failsafe plugin in the Java world.

+ +

But some coworkers don't see any problem if an application build depends on a started MongoDB, OAuth2 server, RabbitMQ, etc. running on the developer's machine just for the build. Even with Docker to help, for me this is the wrong approach, because:

+ +
    +
  • The build is slower.
  • +
  • Add complexity in the build process.
  • +
  • You need a wiki/guide just to explain how to get the build to work.
  • +
  • Consumes more memory, because the external dependencies are not mocked.
  • +
  • You need to ""reset"" the external dependency each build.
  • +
+ +

When we are talking about the system tests (or the ""broad"" integration tests), yes, we need all these external dependencies up and running, but these kinds of tests do not occur during the build.

+ +

I think the resistance to making a self-sufficient build comes from the complexity of dealing with these dependencies in integration tests. Even using Spring, it's not easy to bypass RabbitMQ or OAuth2 authentication, for example.

+ +

Although it is clear to me which is the right approach, I can't find any discussions on the Internet about this subject. What do you think?

+",172464,,172464,,43117.93056,43118.03403,"Application build needs external ""resources"" to work: is this bad or... normal?",,1,5,,,,CC BY-SA 3.0,, +364154,1,364366,,1/17/2018 21:30,,0,246,"

I couldn't find a specific answer to my question but how would I develop an AggregateRoot class for the following scenario.

+ +
public class Root{
+
+    public int Id {get;set;}
+    public IList<Child> Children {get;set;}
+
+}
+
+public class Child{
+    public int Id{get;set; }
+    public IList<SubChild> SubChildren{get;set; }
+}
+
+public class SubChild{
+    public int Id {get;set;}
+}
+
+ +

How can the root class update the SubChild class following DDD principles?
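One common shape (sketched in Python with invented method names; the C# version would be analogous) is to expose a behavioral method on the root that locates and mutates the SubChild internally, so outside code never reaches into the inner collections and the root remains the single place where invariants are enforced:

```python
class SubChild:
    def __init__(self, id):
        self.id = id
        self.name = None

class Child:
    def __init__(self, id, sub_children):
        self.id = id
        self.sub_children = sub_children

class Root:
    """Aggregate root: every change to inner entities goes through it."""
    def __init__(self, id, children):
        self.id = id
        self._children = children

    def rename_sub_child(self, child_id, sub_child_id, new_name):
        for child in self._children:
            if child.id != child_id:
                continue
            for sub in child.sub_children:
                if sub.id == sub_child_id:
                    sub.name = new_name   # invariant checks belong here
                    return
        raise LookupError("no such sub-child in this aggregate")

root = Root(1, [Child(10, [SubChild(100)])])
root.rename_sub_child(10, 100, "renamed")
```

Repositories then load and save the whole Root; callers never hold a SubChild reference obtained outside the aggregate.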

+",62460,,62460,,43117.94306,43120.82778,AggregateRoot Class DDD Multiple entities,,1,1,,,,CC BY-SA 3.0,, +364157,1,,,1/17/2018 22:08,,1,396,"

I am a member of a 6-person team of software engineers within a 400-person department at a fairly well known company. We are responsible for the business applications our department uses, both third-party tools and custom ASP.NET tools we write. The group has been around since about 2006 and I joined in 2011. I have been trying to introduce best practices (SoC, SOLID, DRY, unit testing, etc.), but I get from about half of them that they simply don't care. Since as a group we should all follow the same process, should I just give up promoting this stuff and go with the flow? Every meeting we have discussing this can get a bit heated. I am getting to my wits' end.

+",13816,,,,,43118.48889,Coworkers don't seem to care about best practices,,4,8,,43119.28472,,CC BY-SA 3.0,, +364162,1,,,1/17/2018 22:40,,0,672,"

I understand an event-driven architecture leads to decoupling. However, can using an event-driven architecture lead to minimal ""getter/accessor"" use? It seems that if you had some type of event handler such as BalanceChanged that passes the new Balance around then you wouldn't need a getter to access the Balance of an Account object. Am I thinking about this incorrectly?

+ +

EXAMPLE:

+ +
public class Account
+{
+  public void MakeContribution(Contribution contribution)
+  {
+    contributions.Add(contribution);
+    OnBalanceChanged(EventArgs.Empty);
+  }
+
+  private double calculateBalance()
+  {
+    double balance = 0;
+
+    foreach (Contribution contribution in contributions)
+      balance += contribution.Amount;
+
+    return balance;
+  }
+
+  public event EventHandler BalanceChanged;
+
+  protected virtual void OnBalanceChanged(EventArgs e)
+  {
+    // Create new AccountEventArgs with the calculated balance
+    // Pass AccountEventArgs instead of EventArgs.Empty
+    BalanceChanged?.Invoke(this, EventArgs.Empty);
+  }
+
+  private IList<Contribution> contributions = new List<Contribution>();
+}
+
+ +

And then inheriting EventArgs to add a double Balance attribute. Or is passing data through events (messages) not a good idea?
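A language-neutral sketch (in Python, with invented names) of the idea in the comments above: the event carries the computed balance as its payload, so subscribers never need a getter on the account object:

```python
class Account:
    def __init__(self):
        self._contributions = []
        self._balance_listeners = []

    def on_balance_changed(self, listener):
        self._balance_listeners.append(listener)

    def make_contribution(self, amount):
        self._contributions.append(amount)
        balance = sum(self._contributions)   # derived, never exposed
        for listener in self._balance_listeners:
            listener(balance)                # payload replaces a getter

received = []
account = Account()
account.on_balance_changed(received.append)
account.make_contribution(100.0)
account.make_contribution(50.0)
```

The tradeoff is that subscribers only know the balance at the moments it changes; anything that needs the value on demand (a new subscriber, a report) still needs either a getter or a replayable event history.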

+",274856,,9113,,43118.80764,43118.80764,Event-driven architecture avoid getters,,0,16,,,,CC BY-SA 3.0,, +364167,1,364176,,1/18/2018 0:22,,-3,77,"

I have three attributes for a user: connectionId (e.g. ""sd4-h4sdg-u4j-a2rr3""), userName, and placeInLine.

+ +

I can't think of a single data structure that will handle my scenario of only allowing a single user to work on a given webpage resource and advancing the line of people as each active user leaves their connection. I am using C# / SignalR with jQuery.

+ +

When a user (Adam) hits the page, if this proposed data structure is empty, they get to interact with the page. The next user who comes along (Bob) while Adam is working gets put in line. So we store his userName, connectionId, and place in line as 1. Carl comes along and gets put in line as well, getting placeInLine=2. David is 3, etc.

+ +

Now, in a perfect world, everyone would wait patiently for their turn, and get prompted when the user in front of them disconnects so they can have their turn on the page. In this scenario, a Queue would work. However: If Carl leaves, we cannot simply remove him from the Queue since he is in the middle [B, C, D]. I want this code to scale so rebuilding a queue with each change is not an option.

+ +

If I were using a Dictionary, I could have the userName as the key, with a value of the placeInLine. But in this scenario, I cannot get the next user, since Get() happens via Key. If the placeInLine int is the key, I cannot remove a user based on username since Remove() is also done by key.

+ +

Am I naive for assuming this should be possible with a single data structure?
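It should be possible with a single ordered-map structure. The sketch below is Python (an OrderedDict); the analogous C# idea would be a linked list plus a dictionary of nodes, and the point is the same: insertion order *is* the place in line, and removal by key from the middle is O(1), so placeInLine never has to be stored or rebuilt:

```python
from collections import OrderedDict

class WaitingLine:
    """FIFO line with O(1) removal of any user by name."""
    def __init__(self):
        self._users = OrderedDict()          # userName -> connectionId

    def join(self, user_name, connection_id):
        self._users[user_name] = connection_id

    def leave(self, user_name):
        self._users.pop(user_name, None)     # works even mid-line

    def next_user(self):
        return next(iter(self._users), None) # front of the line

    def place_of(self, user_name):
        # O(n), but only needed when displaying positions
        for i, name in enumerate(self._users):
            if name == user_name:
                return i
        return None

line = WaitingLine()
line.join("Bob", "c1")
line.join("Carl", "c2")
line.join("David", "c3")
line.leave("Carl")                            # left from the middle
```

Everyone behind the departed user implicitly moves up; only the display of positions needs a linear walk.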

+",274088,,,,,43118.16042,Best Data Structure for ordered list of connected users,,1,9,,,,CC BY-SA 3.0,, +364171,1,,,1/18/2018 1:45,,1,675,"

I'm trying to implement this right-to-left evaluation algorithm of a postfix expression but I can't seem to get it to work.

+ +
for each token in the reversed postfix expression:
+  if token is an operator:
+    push token onto the operator stack
+    pending_operand ← False
+  else if token is an operand:
+    operand ← token
+    if pending_operand is True:
+      while the operand stack is not empty:
+        operand_1 ← pop from the operand stack
+        operator ← pop from the operator stack
+        operand ← evaluate operator with operand_1 and operand
+    push operand onto the operand stack
+    pending_operand ← True
+result ← pop from the operand stack
+
+ +

From wikipedia.

+ +

This is how the steps are illustrated:

+ +
15 7 1 1 + − ÷ 3 × 2 1 1 + + − =
+15 7 1 1 + − ÷ 3 × 2     2 + − =
+15 7 1 1 + − ÷ 3 ×         4 − =
+15 7     2 − ÷ 3 ×         4 − =
+15         5 ÷ 3 ×         4 − =
+             3 3 ×         4 − =
+                 9         4 − =
+                             5
+
+ +

I don't really get how this follows from the algorithm. I keep getting the wrong answer trying to evaluate the expression 15 7 1 1 + − ÷ 3 × 2 1 1 + + − (it should be 5). I've spent hours trying to get it working in assembly and I tried manually going through it, but I keep getting the wrong answer. I thought part of it might lie in the argument order of operate(operand_1, operand), but I've ruled that out. Anyway, I threw together this piece of JavaScript to show my interpretation of the algorithm, since it's way clearer than assembly.

+ +
const   rpn = [15, 7, 1, 1, '+', '-', '/', 3, '*', 2, 1, 1, '+', '+', '-'];
+const   operator_Stack = [];
+const   operand_Stack = [];
+let     pending = false;
+
+for (i = rpn.length - 1; i >= 0; i--) {
+    const   token = rpn[i];
+    if (typeof token === ""string"") {
+        operator_Stack.push(token);
+        pending = false;
+    } else {
+        let operand = token;
+        if (pending) {
+            while (operand_Stack.length > 0) {
+                let operand_1 = operand_Stack.pop();
+                let operator = operator_Stack.pop();
+                let expr = operand + "" "" + operator + "" "" + operand_1;
+                console.log(expr);
+                operand = eval(expr);
+            }       
+        }
+        operand_Stack.push(operand);
+        pending = true;
+    }
+}
+console.log(""The expression evaluates to: "" + operand_Stack.pop());
+
+ +

This evaluates the following expression in the following order:

+ +
""1 + 1""
+""2 + 2""
+""1 + 1""
+""2 - 3""
+""-1 / 4""
+""7 * -0.25""
+""15 - -1.75""
+
+ +

The first three evaluations appear to be correct. Then things start to go wrong.

+ +

As a binary tree 15 7 1 1 + − ÷ 3 × 2 1 1 + + − would look like this

+ +
            [-]
+           /   \
+         [*]    [+]
+        / \    /   \
+      [/] [3] [2]  [+]
+      / \          /  \
+   [15] [-]      [1]  [1]
+        / \
+     [7]  [+]
+         /   \
+       [1]   [1]            
+
+ +

The correct order of evaluation should be:

+ +
1 + 1
+2 + (1 + 1)
+1 + 1
+7 - (1 + 1)
+15 / (7 - 2)
+3 * (15 / 5)
+9 - 4
+
+ +

To me, my JavaScript code implements the algorithm as it's stated. Yet obviously it's not correct. As I see there are two possibilities, the algorithm is wrong or, more likely, my interpretation is. Problem is, I can't figure out which of the two it is.

+ +

What is it that I'm missing?
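For comparison, here is a sketch (my own, not the quoted pseudocode) that does produce 5. The suspicious part of the quoted algorithm is `while the operand stack is not empty`: it keeps reducing even when the operand on the stack is not the sibling of the current one (e.g. it combines 2 with the 3 that belongs to ×). With a single mixed stack, the stopping condition becomes "reduce only while the top of the stack is an operand", and the cascade stops by itself as soon as an operator is on top:

```python
def eval_postfix_rtl(tokens):
    """Evaluate a postfix expression scanning right-to-left."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []                      # holds operators AND operands
    for token in reversed(tokens):
        if token in ops:
            stack.append(token)
        else:
            operand = token
            # reduce only while a sibling operand is on top
            while stack and not isinstance(stack[-1], str):
                right = stack.pop()
                operator = stack.pop()
                # scanning right-to-left, the newer operand is the LEFT one
                operand = ops[operator](operand, right)
            stack.append(operand)
    return stack.pop()

result = eval_postfix_rtl(
    [15, 7, 1, 1, "+", "-", "/", 3, "*", 2, 1, 1, "+", "+", "-"])
```

Tracing this by hand reproduces exactly the reduction order shown in the binary tree above: 1+1, 2+2, 1+1, 7-2, 15/5, 3*3, 9-4 = 5.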

+",293658,,293658,,43118.60417,43119.44861,How do I interpret this postfix right-to-left algorithm?,,1,6,,,,CC BY-SA 3.0,, +364178,1,,,1/18/2018 4:15,,-2,105,"

My program (a command-line utility) will load configuration from a file, using defaults if the file is not found, and I'd like to do this in a cross-platform manner that people will expect.

+ +

Is there a de-facto standard or other common algorithm for searching directories to find the config file?

+ +

I'm imagining something like:

+ +
    +
  • If Windows: + +
      +
    • Try %appdata%\myprogram\config.cfg
    • +
    • Try same directory as binary
    • +
  • +
  • Otherwise: + +
      +
    • Try $(HOME)/.myprogram/config.cfg
    • +
    • Try /etc/myprogram/config.cfg
    • +
    • Try same directory as binary
    • +
  • +
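There is no single formal standard, but the lookup order sketched above can be coded as a plain candidate list, most specific location first (illustrative Python; on Linux, the XDG Base Directory specification additionally suggests $XDG_CONFIG_HOME, which defaults to ~/.config):

```python
import os
import sys

def candidate_config_paths(app="myprogram", name="config.cfg"):
    """Ordered candidate locations, per-user before system-wide."""
    paths = []
    if sys.platform.startswith("win"):
        appdata = os.environ.get("APPDATA")
        if appdata:
            paths.append(os.path.join(appdata, app, name))
    else:
        home = os.environ.get("HOME")
        if home:
            paths.append(os.path.join(home, "." + app, name))
        paths.append(os.path.join("/etc", app, name))
    # same directory as the binary, as a last resort
    exe_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
    paths.append(os.path.join(exe_dir, name))
    return paths

def find_config():
    for path in candidate_config_paths():
        if os.path.isfile(path):
            return path
    return None   # caller falls back to built-in defaults
```

Returning None (rather than raising) makes the "use defaults if not found" behavior explicit at the call site.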
+",123271,,,,,43118.28264,Standard for config file location?,,1,3,,43118.87569,,CC BY-SA 3.0,, +364189,1,364195,,1/18/2018 9:38,,7,2917,"

I am a bit confused as for what it really means. In the related questions (Is this a violation of the Liskov Substitution Principle?), it was said that the example clearly violates LSP.

+ +

But I wonder, if there is no new exception thrown, would it still be violation? Isn't it simply polymorphism then? +I.e:

+ +
public class Task
+{
+     public Status Status { get; set; }
+
+     public virtual void Close()
+     {
+         Status = Status.Closed;
+     }
+}
+
+public class ProjectTask : Task
+{
+     public override void Close()
+     {
+          if (Status == Status.Started) 
+          {
+              base.Close(); 
+          }
+     }
+}
+
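A quick way to see why the strengthened precondition still violates LSP even without an exception (a Python sketch mirroring the C# above): a client written against Task's contract observes different behavior when handed the subclass, because ProjectTask silently refuses to close:

```python
class Task:
    def __init__(self, status):
        self.status = status          # e.g. "new", "started", "closed"

    def close(self):
        self.status = "closed"

class ProjectTask(Task):
    def close(self):
        # strengthened precondition: silently refuses unless started
        if self.status == "started":
            super().close()

def close_all(tasks):
    """Client relying on Task's postcondition: closed after close()."""
    for task in tasks:
        task.close()
    return all(task.status == "closed" for task in tasks)

base_ok = close_all([Task("new")])          # postcondition holds
sub_ok = close_all([ProjectTask("new")])    # postcondition broken
```

No exception is thrown, yet the substitution changes the observable outcome, which is exactly what LSP forbids: the subclass weakened the base class's postcondition by strengthening the precondition.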
+",60327,,40857,,43118.81181,43118.81181,Liskov Substitution principle - strengthening preconditions,,6,2,2,,,CC BY-SA 3.0,, +364190,1,364192,,1/18/2018 9:45,,1,133,"

I believe there is a person who named this pattern or practice and would like to know the correct nomenclature to avoid confusion.

+ +
object.propertyA.propertyB
+
+ +

becomes

+ +
object.propertyB
+
+",14662,,14662,,43130.38056,43130.38056,What is the name given to the practice/pattern of exposing properties of properties as immediate properties?,,1,2,,43118.58333,,CC BY-SA 3.0,, +364199,1,364205,,1/18/2018 11:42,,3,1011,"

We are discussing a proposal to split a big C++ program into multiple separate executables that would communicate using shared memory. The shared data structures are large, so we do not want to use the loopback network or any other approach that would just copy them.

+ +

The arguments for splitting are that every part can be developed separately, potentially replacing it with an alternative implementation, even in another language. It would naturally prevent accessing private data and code, and the processes would obviously run concurrently.

+ +

The arguments against would be that C++ has built-in means to structure even a large and complex project, hiding data and functions as designed. It is possible to use C++ multithreading to employ all cores of the CPU. In this case the data can be passed by reference from module to module without tricks.

+ +

Is there a known widely accepted view about dividing a C++ program into multiple binaries running in parallel on the same host? Do any widely known programs work this way?

+ +

Suggestions to implement in another language are outside the scope of this question.

+",81278,,,,,43118.58333,Should I divide C++ program into multiple interacting applications?,,3,2,1,,,CC BY-SA 3.0,, +364211,1,,,1/18/2018 14:34,,126,79314,"

We have recently moved to Java 8. Now, I see applications flooded with Optional objects.

+ +

Before Java 8 (Style 1)

+ +
Employee employee = employeeServive.getEmployee();
+
+if(employee!=null){
+    System.out.println(employee.getId());
+}
+
+ +

After Java 8 (Style 2)

+ +
Optional<Employee> employeeOptional = Optional.ofNullable(employeeService.getEmployee());
+if(employeeOptional.isPresent()){
+    Employee employee = employeeOptional.get();
+    System.out.println(employee.getId());
+}
+
+ +

I see no added value in Optional<Employee> employeeOptional = employeeService.getEmployee(); when the service itself returns an Optional.

+ +

Coming from a Java 6 background, I see Style 1 as more clear and with fewer lines of code. Is there any real advantage I am missing here?

+ +

Consolidated from understanding from all answers and further research at blog

+",260829,,260829,,43164.13056,43896.47431,Why use Optional in Java 8+ instead of traditional null pointer checks?,,11,15,41,,,CC BY-SA 3.0,, +364215,1,364217,,1/18/2018 15:34,,2,90,"

I was just wondering about this while programming in Python. If I have a function foo which takes a collection (list or tuple, perhaps set, etc.) and manipulates its elements somehow, e.g. filters, trims, or increases them, should such a function return the same type? For instance, if it accepts any collection and filters things out, what should it return as an output object? Is it important at all?

+ +

There are also other containers, like numpy arrays. Say I have a numpy object a. There might be operations I cannot do on a using methods from numpy, and therefore I have to convert it to a list. Should I check the type and return the same type?

+ +

If yes, is there a general technique to check type and return the same one?
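One general technique, with caveats, is to rebuild the result through the input's own type, since list, tuple, set and frozenset all accept an iterable in their constructor. This is only a sketch and does not cover every container; numpy arrays, for instance, need their own handling:

```python
def keep_even(collection):
    """Filter, returning the same concrete type that was passed in."""
    return type(collection)(x for x in collection if x % 2 == 0)

evens_list = keep_even([1, 2, 3, 4])
evens_tuple = keep_even((1, 2, 3, 4))
evens_set = keep_even({1, 2, 3, 4})
```

The caveat is that not every type's constructor accepts a plain iterable (dict and str behave differently, and numpy.array would work but lose dtype control), so this trick is best reserved for the handful of builtin sequence/set types you actually expect.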

+",189364,,,,,43118.83194,Function that manipulates collection and return the same type,,2,1,,,,CC BY-SA 3.0,, +364227,1,364237,,1/18/2018 19:20,,2,318,"

About the app: This is an android app. The app has two components/ two different users, scorers who score the game and fans who view commentary of the game.

+ +
    +
  1. Scoring User : A scorer watches a live baseball game and puts the +score after each pitch/event of the game. These events are pushed to the server and also stored locally in the scorer's phone.
  2. +
  3. Fans User : Fans who want to follow the game can view play-by-play/pitch-by-pitch commentary of the game on the fans app sent by the server based on the events sent by the scoring app.
  4. +
+ +

Each game will have set of data associated with the game. +Fixed data which wont change through out the game, like -:

+ +
    +
  1. Teams Name
  2. +
  3. Players List
  4. +
  5. Grounds Name +.. ETC
  6. +
+ +

Each match will also have a list/array of events. Each event will contain a number of information.

+ +
    +
  1. Batter Name
  2. +
  3. Pitcher Name
  4. +
  5. Runs scored
  6. +
  7. If wicket taken, how was the wicket taken. +.. ETC
  8. +
+ +

These lists of events generated in each game are stored on the server. Over a period of time these events per match generate interesting insights about a pitcher. Example: how a pitcher fares against left-handed batters, how a batsman has a better average when the team wins, etc.

+ +

Problems/Suggestions required :

+ +
    +
  1. The scorer might have made a mistake in scoring an event and needs to undo the last event. Which design pattern to use in android to support this?
  2. +
  3. Should the events be stored as JSON or POJOs on the android app? After each event on the scoring app the event needs to be stored on the local db and on the server. The data is sent to the server a JSON and dumped in a mongoDb.
  4. +
  5. The app needs to store details about each event to display in on the commentary app, also to generate insights. The scorer UI also shows the current batsman, runs scored by current batsman, total, current pitcher info etc. This keeps changing after every event. +All these data can be derived and updated on the UI by two ways: + +
      +
    1. Events list is updated after new event and all other derivable data is calculated again by looping the events. So in case of UNDO last event is removed and looping over again would recalculate actual values. This could keep the data accurate and less prone to errors but needs to traverse a long list each time.
    2. +
    3. Along with events list there is another data structure that holds values for other info to be displayed on the UI. Events list is updated after new event and all other derivable data is also updated on the other data structure. This could cause mismatch in data cause data is duplicated, but you don't need to traverse the long list of events each time.
    4. +
  6. +
+ +

I have spent some time on design patterns but could not decide what to proceed with. An expert opinion/direction, or a word from someone who has done something along similar lines, would be very helpful.
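On points 1 and 3, an event-sourcing shape is worth considering: keep the event list as the single source of truth, recompute derived figures from it (your option 3.1), and undo becomes a simple pop with no extra bookkeeping to keep in sync. The sketch below is Python with invented names, just to show the mechanics:

```python
class Scorecard:
    """Event list is the source of truth; derived stats are recomputed."""
    def __init__(self):
        self.events = []                     # dicts, e.g. {"runs": 2}

    def record(self, event):
        self.events.append(event)

    def undo(self):
        """Point 1: undo the last event without special bookkeeping."""
        if self.events:
            self.events.pop()

    def total_runs(self):
        """Point 3, option 1: derive by looping over the events."""
        return sum(e.get("runs", 0) for e in self.events)

card = Scorecard()
card.record({"batter": "A", "runs": 2})
card.record({"batter": "B", "runs": 4})
card.undo()                                  # scorer made a mistake
```

At a few hundred events per game, the linear recompute is cheap; if profiling ever shows otherwise, you can cache the derived figures and invalidate the cache on record/undo, getting option 3.2 without its consistency risk.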

+",293725,,,,,43148.90625,Architecture for a scoring and commentary apps,,1,0,,,,CC BY-SA 3.0,, +364230,1,364235,,1/18/2018 19:51,,7,2860,"

As our devs are writing internal applications, often times we need to share code. There are various ways to do this, but it usually comes down to creating a nuget package on a shared server or hosting a web api internally. We don't have a definitive way to decide between the 2 and currently have a mixture of both. Any suggestions/guidance of when to do one over the other?

+ +

To get a little more specific... Say I have a few general use methods to FormatXXX(), CalculateXXX(), SendXXX(), etc... that many apps could benefit from. Should this be an internal web api, or a nuget package?

+ +

We are a .Net shop, and use VS 2017, TFS (with git). We mostly build web apps, web api's, wcf services, and command line apps. We interface with 3rd party api's all the time.

+",78273,,78273,,43118.83542,43119.69097,When to create a nuget package vs creating a web api,,3,6,2,,,CC BY-SA 3.0,, +364231,1,364271,,1/18/2018 19:53,,-2,158,"

On my current project it is a common practice to generate Hibernate Entities from database tables using NetBeans functionality. I'm normally used to first write the Entity-class and define the mapping in an XML-file or with annotations.

+ +

Is it considered bad practice to generate the entity classes through NetBeans (or another IDE) and are there any drawbacks to this approach?

+ +

We're only using Hibernate 4.3 and no JPA.

+",293315,,,,,43119.53264,Is generating Hibernate Entities from database tables considered a bad practices?,,1,1,,,,CC BY-SA 3.0,, +364236,1,,,1/18/2018 21:11,,1,155,"

I have a question around which is preferred and seen as more right approach.

+ +

Our company's messaging solution of choice is Kafka. We have a task to build a service that provides events in a cloud environment via HTTP (no port opening, widely used, easy to get clients on board). Events originate from the company's own datacenters.

+ +

Now the performance requirement is small as we will be delivering around 5000 events a day.

+ +

To implement that there are few options:

+ +

1) Define the simple REST API we think is best, abstracting the underlying messaging mechanism so that other teams can integrate without depending on our implementation details. Then we can start, for example, with a relational DB in the cloud to store events.

+ +

Pros:

+ +
    +
  • Other services do not know about our implementation
  • +
  • We can choose how we implement it and change later
  • +
  • Simple API - just enough to provide what is needed
  • +
  • By not reusing the already-built Kafka cluster, we avoid a single point of failure within the team: if the Kafka cluster that was built for the other project fails for some reason, we are not affected.
  • +
+ +

Cons:

+ +
    +
  • Could be (we not 100% sure) Slightly longer than Option 2
  • +
  • We are reimplementing messaging
  • +
+ +

2) Reuse the Kafka cluster already built by another subteam in the cloud, put the Kafka REST proxy on top, and expose events. That results in our consumers speaking Kafka's language.

+ +

Pros:

+ +
    +
  • Possibly it will be delivered faster
  • +
  • We have already Kafka cluster
  • +
+ +

Cons:

+ +
    +
  • Learning of Kafka
  • +
  • Coupling clients to our internal mechanisms
  • +
  • Kafka as a single point of failure
  • +
+ +

Now the question is - what do you think is the right thing to do?

+",79572,,,,,43118.88264,Distributed systems design and coupling,,0,4,,,,CC BY-SA 3.0,, +364240,1,364241,,1/18/2018 22:58,,10,526,"

I've read several papers, articles, and section 4.1.4, chapter 4 of Compilers: Principles, Techniques, and Tools (2nd Edition) (a.k.a ""The Dragon Book"") which all discuss the topic of syntactical compiler error recovery. However, after experimenting with several modern compilers, I've seen that they also recover from semantic errors, as well as syntactic errors.

+ +

I understand fairly well the algorithms and techniques behind compilers recovering from syntactically related errors; however, I don't exactly understand how a compiler can recover from a semantic error.

+ +

I'm currently using a slight variation of the visitor pattern to generate code from my abstract syntax tree. Consider my compiler compiling the following expressions:

+ +
1 / (2 * (3 + ""4""))
+
+ +

The compiler would generate the following abstract syntax tree:

+ +
      op(/)
+        |
+     -------
+    /       \ 
+ int(1)    op(*)
+             |
+          -------
+         /       \
+       int(2)   op(+)
+                  |
+               -------
+              /       \
+           int(3)   str(4)
+
+ +

The code-generation phase would then use the visitor pattern to recursively traverse the abstract syntax tree and perform type checking. The abstract syntax tree would be traversed until the compiler came to the innermost part of the expression; (3 + ""4""). The compiler then checks each side of the expression and sees that their types are not compatible. The compiler raises a type error. Here is where the problem lies: what should the compiler do now?

+ +

For the compiler to recover from this error and continue type checking the outer parts of the expressions, it would have to return some type (int or str) from evaluating the innermost part of the expression, to the next innermost part of the expression. But it simply doesn't have a type to return. Since a type error occurred, no type was deduced.

+ +

One possible solution I've postulated, is that if a type error does occur, an error should be raised, and a special value that signifies that a type error occurred, should be returned to previous abstract syntax tree traversal calls. If previous traversal calls encounter this value, they know that a type error occurred deeper in the abstract syntax tree, and should avoid trying to deduce a type. While this method does seem to work, it seems to be very inefficient. If the innermost part of an expression is deep in the abstract syntax tree, then the compiler will have to make many recursive calls only to realize that no real work can be done, and simply return from each one.

+ +

Is the method I described above actually used (I doubt it)? If so, is it inefficient? If not, what exactly are the methods compilers use to recover from semantic errors?
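
The 'special value' idea described above is essentially the classic 'poison' or error type used by type checkers. A minimal sketch in Python (the node encoding and type names here are made up for illustration, not taken from any particular compiler):

```python
# Minimal type-checking visitor with a poison 'error' type.
ERROR = 'error'

def check(node, errors):
    kind = node[0]
    if kind == 'int':
        return 'int'
    if kind == 'str':
        return 'str'
    # binary operator node: ('op', operator, left, right)
    _, op, left, right = node
    left_type = check(left, errors)
    right_type = check(right, errors)
    if ERROR in (left_type, right_type):
        return ERROR  # poison propagates silently: no cascading reports
    if left_type != right_type:
        errors.append('type mismatch: %s %s %s' % (left_type, op, right_type))
        return ERROR  # report once, then poison
    return left_type

# 1 / (2 * (3 + '4'))
ast = ('op', '/', ('int', 1),
       ('op', '*', ('int', 2),
        ('op', '+', ('int', 3), ('str', '4'))))
errors = []
result = check(ast, errors)
```

Note that propagation is cheap: each enclosing node does one comparison and returns immediately, so the extra cost is bounded by the depth already being traversed anyway; there are no additional recursive calls.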

+",242544,,242544,,43193.61667,43194.69375,How exactly does a compiler recover from a type error?,,4,1,1,,,CC BY-SA 3.0,, +364246,1,364252,,1/19/2018 1:55,,2,906,"

For my personal project I need to store 2D shapes in a Postgres database, for example Circle, Pentagon, Rectangle and so on. At first I did it like this: all shapes inherit from an abstract class called Shape, which has some methods that I need each object to perform on itself, for example (I'm using Java with Spring Data):

+ +
for(Shape shape : shapes){
+    shape.getArea();
+}
+
+ +

This is good in code, but I don't know a good way to store this in a database. Every shape has some parameters that are different from others (radius for circle, height and length for rectangle etc.) so it looks like I need many tables for each geometry type. But how to reference each shape from another table then?

+ +

Right now I am trying to solve this with a single class called Geometry. It has a type field and a set of parameters linked to it. A Circle, for example, will have one record in the Geometry table and one record in the Parameters table. This class also has all the methods needed for each available shape, like this:

+ +
getArea(){
+    switch(type){
+        case circle: ....
+        case pentagon: ...
+    }
+}
+
+ +

But I'm wondering if there are more elegant ways to solve this?
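
One common mapping (a sketch of one option, not the only one) is a single table with a type discriminator plus a parameters column (Postgres json/jsonb works well for this), reconstructing the right subclass from a registry on load, so no switch statement is needed:

```python
import json
import math

class Shape:
    registry = {}  # type name -> subclass, filled automatically

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Shape.registry[cls.__name__] = cls

    def to_row(self):
        # one row shape for every subclass: (type, params-as-json)
        return {'type': type(self).__name__, 'params': json.dumps(vars(self))}

    @staticmethod
    def from_row(row):
        # polymorphic load: pick the subclass from the discriminator
        cls = Shape.registry[row['type']]
        obj = cls.__new__(cls)
        obj.__dict__.update(json.loads(row['params']))
        return obj

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def get_area(self):
        return math.pi * self.radius ** 2

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width, self.height = width, height
    def get_area(self):
        return self.width * self.height

row = Circle(2).to_row()        # store this dict as one database row
restored = Shape.from_row(row)  # get_area() works without any switch
```

The main alternative is joined-table inheritance: one base table holding the shared id and type, plus one table per subclass for its specific columns, with other tables referencing only the base table's key.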

+",292257,,209774,,43119.33611,43119.33611,"Ways to store objects, inherited from one superclass in a database",,2,1,,,,CC BY-SA 3.0,, +364248,1,364250,,1/19/2018 2:37,,6,11908,"

Is it OK to have a class something like this

+ +
public class Weapon {
+    private String name;
+    private int might;
+    // etc...
+
+    private Weapon(String name, int might) {
+        this.name = name;
+        this.might = might;
+    }
+
+    public static final Weapon Alondite = new Weapon(""Alondite"", 16);
+    // 100++ weapon
+}
+
+ +

Then, when the weapon is accessed anywhere in the project as Weapon.Alondite, will this create a new object every time the static member is accessed?

+ +

Or should I do it like this, to ensure the object is only created once:

+ +
public class Weapon {
+    private String name;
+    private int might;
+    // etc...
+
+    private Weapon(String name, int might) {
+        this.name = name;
+        this.might = might;
+    }
+
+    private static Weapon mAlondite;
+    public static Weapon Alondite() {
+        //if (mAlondite == null) {
+        //    mAlondite = new Weapon(""Alondite"", 16);
+        //    return mAlondite;
+        //} else {
+        //    return mAlondite;
+        //}
+
+        // EDIT: as suggested by everyone
+        if (mAlondite == null) {
+            mAlondite = new Weapon(""Alondite"", 16);
+        }
+        return mAlondite;
+    }
+}
+
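
For what it's worth: in Java a static final field is initialized exactly once, when the class is initialized, so reading Weapon.Alondite never constructs a new object and the first version is already safe. The lazy second version only changes when that single construction happens (and, unlike the eager version, needs care around thread safety). A Python analogue of the two patterns, just to illustrate that both yield a single instance:

```python
class Weapon:
    def __init__(self, name, might):
        self.name = name
        self.might = might

# Eager: created once, when this module is first imported
# (the analogue of a static final field).
ALONDITE = Weapon('Alondite', 16)

_cached_alondite = None

def alondite():
    # Lazy: created once, on first call (note: in a multithreaded
    # program this check-then-set would need a lock).
    global _cached_alondite
    if _cached_alondite is None:
        _cached_alondite = Weapon('Alondite', 16)
    return _cached_alondite

first, second = alondite(), alondite()
```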
+",232465,,232465,,43122.16806,43122.16806,Is it safe to return a new object from static method?,,3,5,5,,,CC BY-SA 3.0,, +364253,1,364264,,1/19/2018 6:14,,2,173,"

I'm trying to figure out what my API route and HTTP method should be for supporting a dual list box (e.g. http://geodan.github.io/duallistbox/sample-100.html). Say I have a collection, GET /employees, which looks like:

+ +
[
+  {""id"": 1, ""name"": ""Foo"", ""selected"": true},
+  {""id"": 2, ""name"": ""Bar"", ""selected"": false}
+]
+
+ +

where Foo would be in the list on the right and Bar would be on the left. If the action of moving the items from left to right (or vice versa) in the dual list box only changes the value of 'selected', should I be using:

+ +
    +
  • JSON PATCH (items that moved) to /employees
  • +
  • POST (items in the right list) to /employees/selected (GET /employees/selected doesn't exist)
  • +
  • PATCH (items in the right list) to /employees/selected
  • +
  • Some other method and route?
  • +
+ +

Edit: Note, I'm interested in the bulk-update use case. I don't want to trigger an HTTP request after every user action.
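
For the bulk case, a single PATCH to /employees carrying a list of partial updates (whether formal JSON Patch per RFC 6902 or a simpler ad-hoc list, as sketched here) keeps it to one request per batch. A sketch of applying such a payload server-side (the route and payload shape are assumptions for illustration, not a standard):

```python
# In-memory stand-in for the employees collection.
employees = {1: {'id': 1, 'name': 'Foo', 'selected': True},
             2: {'id': 2, 'name': 'Bar', 'selected': False}}

def apply_bulk_patch(payload):
    # payload: body of PATCH /employees, a list of partial updates,
    # each identifying its target by id and carrying only changed fields.
    for change in payload:
        target = employees[change['id']]
        target.update({k: v for k, v in change.items() if k != 'id'})

# One request after the user finishes rearranging the dual list box:
apply_bulk_patch([{'id': 1, 'selected': False},
                  {'id': 2, 'selected': True}])
```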

+",293750,,293750,,43120.27431,43120.34167,Rest API for supporting dual listbox,,1,0,,,,CC BY-SA 3.0,, +364255,1,,,1/19/2018 7:48,,2,395,"

I suppose I have the following use case:

+ +
    +
  • I want a synchronous service if possible (in case of a machine crash I want the work finished in the background by some other node)
  • +
  • I want to save something in my database and publish an event to an event stream like kafka or kinesis (which means 2 transactions db + evt stream)
  • +
+ +

My services are stateless Docker containers, so the machine that recovers a transaction may not be the one that died previously. So the question is: how do I make sure such a transaction finishes?

+ +

I was wondering about an asynchronous transaction, but then my service cannot be synchronous; or I could create a transaction with a state and poll it while it gets handled asynchronously. What are the common solutions? I just need to achieve eventual consistency.
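
The usual answer to 'save to my database and publish to Kafka/Kinesis without a distributed transaction' is the transactional outbox pattern: write the domain row and the event in one local database transaction, then have a relay (run by any node, so statelessness is fine) publish pending events and mark them sent. This gives at-least-once delivery and eventual consistency; consumers must be idempotent. A sketch with sqlite standing in for the real database:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)')
db.execute('CREATE TABLE outbox (id INTEGER PRIMARY KEY, '
           'payload TEXT, sent INTEGER DEFAULT 0)')

def place_order(item):
    # Domain write and event write in ONE local transaction:
    # either both are durable or neither is.
    with db:
        db.execute('INSERT INTO orders (item) VALUES (?)', (item,))
        db.execute('INSERT INTO outbox (payload) VALUES (?)',
                   ('order-created:' + item,))

published = []

def relay_once(publish):
    # Any node may run this; a crash before the UPDATE just means
    # the event is published again later (at-least-once delivery).
    rows = db.execute('SELECT id, payload FROM outbox WHERE sent = 0')
    for event_id, payload in rows.fetchall():
        publish(payload)  # e.g. produce to Kafka/Kinesis here
        db.execute('UPDATE outbox SET sent = 1 WHERE id = ?', (event_id,))
    db.commit()

place_order('book')
relay_once(published.append)
```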

+",293767,,293767,,43119.39722,43119.39722,Distributed transactions and recovery with stateless microservices,,0,2,1,,,CC BY-SA 3.0,, +364257,1,,,1/19/2018 8:11,,3,496,"

There are endless resources on creating CRUD for REST resources but I can't find much on doing the same for Messaging.

+ +

Given two services A and B, A receives incoming requests that initiate the creation of the resource R. When A receives a request it does some processing, creates R, and then wants to store R in B using JMS. After creation, the Read/Update/Delete operations should be available on R.

+ +

I can see a couple of different approaches here:

+ +
    +
  1. Use a generic entity that contains an operation (crud) and some generic object that can be cast to R if updating, and to id(R) if reading or deleting. The entity is published on a common queue Q.

  2. +
  3. Use strictly typed entities for the different operations where each operation is published on its own queue Qc, Qr, Qu, Qd.

  4. +
  5. Use strictly typed entities for the different operations where each operation is published on the same queue Q but with different type.

  6. +
+ +

From my understanding of JMS it is usually recommended to use separate queues for different types of messages, to avoid congestion issues where one type of message can block all others.

+ +

Does this mean the CRUD operations should have separate queues or should they be considered to handle the same type of resource and thus share a queue?
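
To make the trade-off concrete, a sketch of option 1 (a generic envelope with an operation field on one queue); the consumer becomes a dispatcher, which is convenient but means one slow operation type can back up all the others behind it on the shared queue. Names and the envelope shape here are illustrative assumptions:

```python
# In-memory stand-in for service B's store of R entities.
store = {}

def handle(message):
    # message envelope: {'op': 'create'|'read'|'update'|'delete',
    #                    'id': ..., 'body': ...}
    op = message['op']
    if op in ('create', 'update'):
        store[message['id']] = message['body']
        return store[message['id']]
    if op == 'read':
        return store.get(message['id'])
    if op == 'delete':
        return store.pop(message['id'], None)
    raise ValueError('unknown op: %s' % op)

handle({'op': 'create', 'id': 7, 'body': {'name': 'R'}})
```

With option 2 or 3 the dispatch above disappears: each queue (or message type) gets its own strictly typed consumer, at the cost of more queues and more message classes to maintain.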

+",63077,,63077,,43119.34514,43119.34514,Designing CRUD Messaging communication,,0,3,1,,,CC BY-SA 3.0,, +364259,1,364273,,1/19/2018 9:17,,13,2806,"

Occasionally I see major, relatively new, open source C projects targeting very old C standards, typically C89. An example is systemd. These projects have intelligent people at the helm so they probably have a good rationale behind this decision that I don't know about. That benefit of the doubt aside, it almost seems like the rationale is ""older and standardized is always more portable and better"" which is ridiculous because the logical conclusion would be that FORTRAN is better than C and COBOL is even better than FORTRAN.

+ +

When and why is it justified for new C projects to target very old C standards?

+ +

I can't imagine a scenario where a user's system absolutely must not update its C compiler but is otherwise free to install new software. The LTS version of Debian, for example, has a gcc 4.6 package which supports C99 and some of C11. I guess that strange scenario must exist though and programs like systemd are targeting those users.

+ +

The most reasonable use case I can imagine is where users are anticipated to have exotic architectures on which there is only a C89 compiler available but they are fully willing to install new software. Given the decline in diversity of instruction set architectures, that seems like an excessively hypothetical scenario, but I'm not sure.

+",128967,,128967,,43119.39514,44140.825,"When should new C projects target very old C standards (>20 years old, i.e. C89)?",,4,6,3,,,CC BY-SA 3.0,, +364262,1,365305,,1/19/2018 10:44,,2,79,"

I have the following entities in my database (it's simplified):

+ +
    +
  • Student (id)
  • +
  • StudentCourse (id, student_id, course_id)
  • +
  • Course (id)
  • +
  • Lesson (id)
  • +
  • LessonCourse (id, lesson_id, course_id)
  • +
  • StudentCourseLesson (id, student_id, course_id, lesson_id, completed)
  • +
+ +

So:

+ +
    +
  1. Student can sign up for a course (StudentCourse entity is being created).

  2. +
  3. Every course is made of lessons, and one lesson can be a part of multiple courses.

  4. +
  5. Student is able to mark a specific lesson in a specific course as ""completed"".

  6. +
+ +

The problem:

+ +

Should the StudentCourseLesson entities be pre-inserted into the database with default values the moment the user signs up for the course (hundreds of rows), or should they be created on the fly, the moment the user starts interacting with a specific lesson of a specific course (like marking one as completed)?

+ +

What I can think of:

+ +
    +
  • The first option makes my server code cleaner (no need to check whether the entity exists in the database) and keeps everything in the database in a ""well defined state"", but it causes a massive surge of inserts with every sign-up, and even if that may not be a problem in this case, I can see how it could quickly become one as a design pattern in more complex relations (bad scalability?).
  • +
  • The second option distributes the database work over time more, but seems very ugly. It's basically ""lazy creation"" of the entities. What do I do when I need to display the list of lessons with their ""completed"" status? I might as well create them all at that point.
  • +
+ +

I personally lean towards the first option in my use case, but I wonder if I have missed anything when considering this problem. How should this kind of relation be handled?
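
Much of the ugliness of lazy creation disappears if the 'does it exist yet' check is pushed into the database: an upsert creates the row on first interaction, and a LEFT JOIN with a COALESCE default answers 'list all lessons with their completed status' without any rows having to exist. A sketch using sqlite's INSERT OR IGNORE (Postgres would use INSERT ... ON CONFLICT DO NOTHING):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript('''
CREATE TABLE lesson_course (lesson_id INTEGER, course_id INTEGER);
CREATE TABLE student_course_lesson (
    student_id INTEGER, course_id INTEGER, lesson_id INTEGER,
    completed INTEGER DEFAULT 0,
    PRIMARY KEY (student_id, course_id, lesson_id));
INSERT INTO lesson_course VALUES (1, 1), (2, 1), (3, 1);
''')

def mark_completed(student, course, lesson):
    # Lazy creation: the row appears only on first interaction.
    db.execute('INSERT OR IGNORE INTO student_course_lesson '
               '(student_id, course_id, lesson_id) VALUES (?, ?, ?)',
               (student, course, lesson))
    db.execute('UPDATE student_course_lesson SET completed = 1 '
               'WHERE student_id = ? AND course_id = ? AND lesson_id = ?',
               (student, course, lesson))

mark_completed(42, 1, 2)

# Listing: missing rows simply read as completed = 0.
rows = db.execute('''
    SELECT lc.lesson_id, COALESCE(scl.completed, 0)
    FROM lesson_course lc
    LEFT JOIN student_course_lesson scl
      ON scl.lesson_id = lc.lesson_id
     AND scl.course_id = lc.course_id
     AND scl.student_id = 42
    WHERE lc.course_id = 1
    ORDER BY lc.lesson_id''').fetchall()
```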

+",68826,,68826,,43119.48056,43135.78333,"Sql: Pre inserting relational entities vs the ""lazy creation"" of the entities",,2,2,,,,CC BY-SA 3.0,, +364267,1,,,1/19/2018 12:20,,-1,77,"

I have a requirement where a user can book a space (office, room, meeting room, etc.) for rent for a particular time period. There is also a recurring requirement: a user may need to book a space on the 5th of every month, or need a meeting room every Friday.

+ +

I have no idea where to start with storing the data, or how to design the classes or tables for this requirement.

+ +

I have tried for table design like.

+ +
public partial class OrderListing 
+{
+        public int ID { get; set; }
+        public DateTimeOffset StartDate { get; set; }
+        public TimeSpan StartTime { get; set; }
+        public TimeSpan EndTime { get; set; }
+        public int Quantity { get; set; }
+        public double ListingPrice { get; set; }
+        public bool IsRepeatEveryWeek { get; set; }
+        public DateTimeOffset? RecurringEndDate { get; set; }
+        public bool IsCountPricePerDay { get; set; }
+        public int OrderID { get; set; }
+}
+
+public class OrderHourlyListingRecurring : Repository.Pattern.Ef6.Entity
+{
+        public int ID { get; set; }
+        public DateTimeOffset StartDate { get; set; }
+        public int OrderHourlyListingID { get; set; }
+        public virtual OrderListing OrderHourlyListing { get; set; }
+}
+
+ +

Is this correct, or does it require some changes?
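
Whatever table shape is chosen, the core operation is expanding a stored rule into concrete occurrences (for availability checks, conflict detection and billing). A sketch with just the standard library; real systems often instead store an iCalendar RRULE string in the order row and expand it with a library such as dateutil:

```python
from datetime import date, timedelta

def weekly_occurrences(start, end, weekday):
    # Every week on the given weekday (0=Monday .. 4=Friday), inclusive.
    first = start + timedelta(days=(weekday - start.weekday()) % 7)
    day = first
    while day <= end:
        yield day
        day += timedelta(days=7)

def monthly_occurrences(start, end, day_of_month):
    # The same day number each month, e.g. the 5th.
    # (Assumes day_of_month exists in every month; days 29-31 in short
    # months would need a policy of their own.)
    year, month = start.year, start.month
    while True:
        occurrence = date(year, month, day_of_month)
        if occurrence > end:
            return
        if occurrence >= start:
            yield occurrence
        month += 1
        if month == 13:
            year, month = year + 1, 1

fridays = list(weekly_occurrences(date(2018, 1, 1), date(2018, 1, 31), 4))
fifths = list(monthly_occurrences(date(2018, 1, 1), date(2018, 3, 31), 5))
```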

+",256174,,31260,,43119.59444,43119.61806,How to design classes and db table in right way for recurring Order?,,1,1,,,,CC BY-SA 3.0,, +364269,1,,,1/19/2018 12:34,,5,169,"

When I'm writing a computer-vision program (using OpenCV and Python) I need to print/show a lot of intermediate results as images (using cv2.imshow(..)) for debugging purposes, to find out what is happening. After finishing the program I have no idea what to do with this ""debugging code"": it is useless in production, but it might be very useful for bug fixing in the future, and it would make it much easier for someone else to understand the logic of the code.

+ +

In the case of plain-text logging, people usually use a logging library (in Python it is the logging library) that allows printing the extra information only while debugging.

+ +

I did some research and found almost no discussion of this topic, and in the case of Python just one unmaintained library that does this kind of logging. So I am wondering: is my idea of image logging for computer-vision programs wrong, and should I instead think about dividing my program into a processing part and a visualization part?
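
The standard logging library can gate image display the same way it gates text: wrap imshow in a helper that checks the logger's effective level, so the debug windows vanish in production without deleting the calls. A sketch (the display function is injectable here so the idea is testable without OpenCV; in the real program it would default to cv2.imshow):

```python
import logging

log = logging.getLogger('vision')
shown = []  # stand-in for cv2.imshow in this sketch

def show_debug(name, image, display=shown.append):
    # Only display intermediate images when debug logging is on;
    # in production (INFO and above) this call is a cheap no-op.
    if log.isEnabledFor(logging.DEBUG):
        display(name)
        log.debug('showed intermediate image: %s', name)

log.setLevel(logging.INFO)
show_debug('threshold', object())   # production level: nothing shown

log.setLevel(logging.DEBUG)
show_debug('threshold', object())   # debugging: the window appears
```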

+",293782,,,,,43150.24653,Logging/Debugging of computer vision apps,,2,0,2,,,CC BY-SA 3.0,, +364270,1,364293,,1/19/2018 12:44,,5,1104,"

I am trying to encourage working practices that are more agile. I am trying to understand the difference between a Use Case and a User Story. I have read a lot of articles and questions like What's the difference between "use case", "User Story" and "Usage Scenario"? and Is it reasonable to assume a 1:1 relationship between user stories and use cases?

+ +

The questions focus on how the approaches differ from a technical perspective. I am trying to specifically understand how to choose one over the over. One article suggests:

+ +
    +
  1. Use User Stories to define Product Backlog items

  2. +
  3. Use Use Case diagrams to define Sprint Backlog Items (the way I understand a Sprint Backlog is that it is the Product Backlog broken down into steps that should be completed over the Sprint).

  4. +
+ +

I like the thought of this as it means Use Case techniques (like UML) can be used during the Sprint.

+ +

Is this a normal approach? If it is not then when do you use a Use Case and when do you use a User Story?

+",65549,,209774,,43119.9625,43508.88194,Is it feasible to use User Stories and Use Cases in the same team?,,5,8,2,,,CC BY-SA 3.0,, +364272,1,,,1/19/2018 13:11,,3,325,"

I'm building an MVC application with microservices that retrieve information from the database.

+ +

I have a question related to the microservices. On one page of the application I want an entity from the database with some of its properties, so I've built the service method that retrieves that information.

+ +

On another page, I want the same entity but with additional properties.

+ +

I have two possible way:

+ +
    +
  1. implement another method that return a new dto
  2. +
  3. Add the properties to the method that already exist.
  4. +
+ +

In the first case I have a new method and a new DTO to maintain, but any change is possible and under control; in the second, I have a single DTO but with properties not used by the first client's pages.

+ +

Which is the best ""scholastic"" microservices solution?

+",251545,,251545,,43119.58681,43209.71389,microservices methods granularity,,1,4,,,,CC BY-SA 3.0,, +364274,1,,,1/19/2018 13:29,,1,121,"

To use the Strategy pattern in Objective-C, I think it is mainly done via selectors.

+ +

To omit if...else, use the Objective-C runtime to turn string matching into selector (strategy) selection.

+ +

Is my understanding right?

+ +

Here is demo: Strategy Pattern in ResponderChain Communication Pattern.

+ +

Create a router category to use responder-chain communication:

+ +
#import ""UIResponder+Router.h""
+
+@implementation UIResponder (Router)
+
+- (void)routerEventWithName:(NSString *)eventName userInfo:(NSDictionary *)userInfo
+{
+    [[self nextResponder] routerEventWithName:eventName userInfo:userInfo];
+}
+
+@end
+
+ +

event sender:

+ +
[self routerEventWithName:kBLGoodsDetailBottomBarEventTappedBuyButton userInfo:nil];
+
+ +

event receiver:

+ +
#pragma mark - event response
+- (void)routerEventWithName:(NSString *)eventName userInfo:(NSDictionary *)userInfo
+{
+
+    /*
+        do things you want
+    */
+    // call the upper ,by ResponderChain
+    // [super routerEventWithName:eventName userInfo:userInfo];
+}
+
+ +

Here is the Strategy part:

+ +

When there are many event sources, use a strategy dictionary to select the concrete handler.

+ +
#pragma mark - event response
+- (void)routerEventWithName:(NSString *)eventName userInfo:(NSDictionary *)userInfo
+{
+
+    NSInvocation *invocation = self.eventStrategy[eventName];
+    [invocation setArgument:&userInfo atIndex:2];
+    [invocation invoke];
+
+    // call the upper ,by ResponderChain
+    // [super routerEventWithName:eventName userInfo:userInfo];
+}
+
+- (NSDictionary <NSString *, NSInvocation *> *)eventStrategy
+{
+    if (_eventStrategy == nil) {
+        _eventStrategy = @{
+                               kBLGoodsDetailTicketEvent:[self createInvocationWithSelector:@selector(ticketEvent:)],
+                               kBLGoodsDetailPromotionEvent:[self createInvocationWithSelector:@selector(promotionEvent:)],
+                               kBLGoodsDetailScoreEvent:[self createInvocationWithSelector:@selector(scoreEvent:)],
+                               kBLGoodsDetailTargetAddressEvent:[self createInvocationWithSelector:@selector(targetAddressEvent:)],
+                               kBLGoodsDetailServiceEvent:[self createInvocationWithSelector:@selector(serviceEvent:)],
+                               kBLGoodsDetailSKUSelectionEvent:[self createInvocationWithSelector:@selector(skuSelectionEvent:)],
+                               };
+    }
+    return _eventStrategy;
+}
+
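
Yes, that is essentially the Strategy pattern with the strategy selected by name at runtime: the dictionary maps an event name to a pre-built NSInvocation, replacing an if/else chain over event names. The same shape in Python, for comparison (bound methods standing in for selectors/invocations; class and event names are illustrative):

```python
class GoodsDetailViewController:
    def __init__(self):
        # event name -> strategy (a bound method chosen at runtime),
        # the analogue of the eventStrategy NSInvocation dictionary.
        self.event_strategy = {
            'ticket': self.ticket_event,
            'promotion': self.promotion_event,
        }
        self.handled = []

    def router_event(self, name, user_info):
        # No if/else chain: look the strategy up and invoke it.
        self.event_strategy[name](user_info)

    def ticket_event(self, user_info):
        self.handled.append(('ticket', user_info))

    def promotion_event(self, user_info):
        self.handled.append(('promotion', user_info))

vc = GoodsDetailViewController()
vc.router_event('promotion', {'sku': 1})
```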
+",279558,,,,,43119.80208,"Objective-C: Strategy Pattern, is mainly by selector?",,1,5,,,,CC BY-SA 3.0,, +364278,1,364280,,1/19/2018 14:13,,4,172,"

I am trying to create a web UI for image processing, with some operations similar to what a site like fotor.com offers. However, I have problems achieving similar performance. For example, let's say I uploaded an image of around 3+ MB to fotor.com and performed a basic operation of setting the image brightness to ""full"". The preview image (shown on the canvas) is rendered immediately, with almost no time lag.

+ +

I tried to do the same operation using the popular plugin ""commonjs"", but it took too long to process the same image, and in some cases it ""hangs"" the browser.

+ +

And I have also tried server-side image processing, using http://imageprocessor.org, as I am working with ASP.NET, but after processing the image on the server it takes too long to load back into the browser.

+ +

So my question is: can someone suggest how I can achieve previews of the processed image with minimal time lag, like fotor.com does?
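
One common way to get that kind of responsiveness (an educated guess at how such sites do it, not confirmed) is to never run interactive edits on the full 3 MB image: downscale once to roughly the canvas size, apply each adjustment to the small copy for the instant preview, and re-apply the recorded operations to the full-resolution image only on export. A sketch of the idea, with plain nested lists standing in for pixel data:

```python
def downscale(pixels, factor):
    # Naive nearest-neighbour downscale: keep every Nth row and column.
    return [row[::factor] for row in pixels[::factor]]

def brightness(pixels, delta):
    # Add delta to every pixel, clamped to the usual 0..255 range.
    return [[min(255, max(0, p + delta)) for p in row] for row in pixels]

full = [[100] * 800 for _ in range(600)]  # stand-in for the uploaded image
preview = downscale(full, 8)              # done once, at load time

edited_preview = brightness(preview, 200)  # interactive: 64x fewer pixels
# Only on export: brightness(full, 200)
```

The same split applies server-side: generate and cache a preview-sized copy at upload time, and only touch the original when the user saves.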

+",293790,,9113,,43119.63889,43120.24028,How to process large image with a minimum time lag,<.net>,3,1,,,,CC BY-SA 3.0,, +364287,1,,,1/19/2018 17:21,,3,3765,"

I am a freelance developer. I have multiple clients, each client has multiple projects, and each project has multiple distinct pieces of software. I recently migrated all my source control from my old subversion service to a business github account.

+ +

My question is, what is a workable strategy for organizing clients, projects, and applications, where the tools at my disposal are GitHub’s interface plus directory structures in my repos?

+ +

Currently I just have one repo per application, and I name the repos Client_Project_Application. It’s kind of ugly because I have this huge unmanageable list of repos and also if I want to grant somebody permission to access an entire client or project, it’s tedious to do it for every applicable repo individually. It’s also difficult to find specific repos quickly on the github site and desktop app.

+ +

What I want to be able to do is:

+ +
    +
  • Keep all clients separate, ideally with the ability to grant a user permission to contribute to an entire client’s collection or an entire project.
  • +
  • Keep each application separate, because when I clone the repo on a dev machine I don’t always want everything (practical example: if I have a desktop application with 5gb of data in it and an associated raspberry pi application, there’s no reason to grab the massive desktop app on a pi dev machine — sometimes it’s not even possible).
  • +
  • Easily find/browse repos for a specific client and project on github.com and on github desktop, that is, some way to keep associated things close together instead of in the giant recently-used-first flat list of everything.
  • +
  • I prefer to work with repos visually as much as possible, that is through github.com and github desktop, especially when it comes to cloning repos because I have a hard time remembering precise repo names.
  • +
  • I’d really prefer to not have to add any extra steps into the normal change -> commit -> push workflow.
  • +
+ +

How is this type of organization usually done? I’m open to a complete change of my current “strategy”, I’ve always been poorly organized and I need to make some big changes here.

+",121251,,,,,43119.95139,Organizing multiple projects on GitHub,,1,5,,,,CC BY-SA 3.0,, +364288,1,364338,,1/19/2018 17:54,,2,196,"

Can anyone point me to patterns for, or material on decentralised federated system design / federated data/service architectures?

+ +

To give some more background on the problem I'm looking to solve, here's the full context. I've had an idea floating around for a while to create a ""reverse address book"". i.e. Rather than me having a list of all my friends, their phone numbers, addresses, birthdays, etc, I just take responsibility for managing my own address book entry. I can then connect to my friends to see their entries (or whatever parts of their entries they're happy to share with me). That way, if I move house or get a new phone number, I update my record and everyone I've given permission to see that information sees my record automatically updated in their address book.

+ +

The above is simple if we're talking about a single system... But this idea's big (i.e. it solves a lot of GDPR issues, it potentially means that I no longer have to contact 50 different companies when I move house to tell them I've moved, companies can save a fortune wasted trying to track down those who've not given them this information, etc.).

+ +

As such, it seems unreasonable to have one system which holds & controls all of this information. Security concerned users may want to run their own instance which holds their information; but still enable friends using the ""status-quo implementation"" to access their data. Alternatively, other companies may want to host their own flavours of such a system, so it's not a monopoly; but again we'd want users of any implementation of this solution to be able to see other users even if they're on a different implementation (in the same way you can use any email provider and still talk to friends on other mail providers).

+ +

Apologies that this is quite a loose question; I've tried searching but don't know enough terminology in this area to know the correct search terms, as all systems I've worked on to date have been centralised, or at most had a master database with multiple slaves, rather than being fully decentralised / federated.

+ +

I've come up with a few ideas on this for this:

+ +
    +
  • Having a centralised register, but leaving everything else decentralised. So any system providing this functionality adds their URI to the register after which all systems can see it... However I don't like this solution as if possible I don't want any centralised dependency.

  • +
  • Have each implementation holding a list of approved providers (i.e. for any big companies hosting public solutions), and allowing users to add ""ad-hoc providers"" to their personal ""trusted provider"" lists to cater for special cases (e.g. friends running their own personal instances / companies running in house instances). Essentially a similar pattern to how SSL certificates are currently handled in browsers; i.e. the recognised CAs are provided by your browser's vendor (some of these lists being published for general usage), but you can always add your own CAs to your browser as the need arises.

  • +
+",69247,,69247,,43119.75069,44167.71806,Patterns for Decentralised / Federated Data,,2,9,,,,CC BY-SA 3.0,, +364295,1,364300,,1/19/2018 18:56,,15,2801,"

When to use else in conditions?

+ +

1)

+ +

a)

+ +
long int multiplyNumbers(int n)
+{
+    if (n >= 1) {
+        return n*multiplyNumbers(n-1);
+    } else {
+        return 1;
+    }
+}
+
+ +

or

+ +

b)

+ +
long int multiplyNumbers(int n)
+{
+    if (n >= 1) {
+        return n*multiplyNumbers(n-1);
+    } 
+
+    return 1;
+}
+
+ +

2)

+ +

a)

+ +
int max(int num1, int num2) {
+   int result;
+
+   if (num1 > num2) {
+      result = num1;
+   } else {
+      result = num2;
+   }
+
+   return result; 
+}
+
+ +

or

+ +

b)

+ +
int max(int num1, int num2) {
+
+   if (num1 > num2) {
+      return num1;
+   } else {
+      return num2;
+   }
+}
+
+ +

or

+ +

c)

+ +
int max(int num1, int num2) {
+   if (num1 > num2) {
+      return num1;
+   } 
+
+   return num2;
+}
+
+ +

Is there a rule about when to use else?

+ +

Do if statements with else take more memory? On the one hand they can be more readable, but on the other hand too much nesting reads poorly.

+ +

If I throw exceptions then it is better not to use else, but what about ordinary conditional operations like in my examples?

+",293813,,285669,,43120.56042,43120.91806,When to use else in conditions?,,6,8,3,43120.99722,,CC BY-SA 3.0,, +364301,1,364306,,1/19/2018 19:42,,3,842,"

E_d is an environment where one can develop, build and test a .NET web application.

+ +

E_p is the production environment, where only running the application is possible. That is, build is not possible.

+ +

Repo_src is the source code repository.

+ +

Now I would like to implement a poor man's application update mechanism (for binaries, configuration and other assets). How does the following sound to you?

+ +

Create Repo_bin, a Git repository which is accessible from both E_d and E_p. In order to release a new version, build the application and other assets on E_d and git push the changes to Repo_bin. Add a git tag for the revision as well. On E_p, do a git pull. The new binaries and configuration replace the old ones.

+ +

Advantages: quite easy to git push & git pull as opposed to manually creating a software patch and an upgrade script for each release.

+",279833,,,,,43119.85972,Managing releases (binaries and othert build artifacts) using Git,,1,0,,,,CC BY-SA 3.0,, +364310,1,364318,,1/19/2018 21:09,,17,3706,"

My office uses Git and SourceTree for our version control. This came about because when I joined there was zero version control and SourceTree was the only system I had ever used. I am not an expert by any means, but I am the most experienced out of my coworkers so I am the de facto expert responsible for teaching everyone to use Git properly and fix any mistakes they are making.

+ +

I am making a tutorial document that goes through Git and SourceTree and explains every step of the process. In the Pull process, the SourceTree dialogue allows you to select the option ""Commit merged changes immediately"". I understand what this does and why it is useful. What I don't understand is why anyone would not want to use this feature.

+ +

Could someone explain why you would ever not want to have your merged changes committed automatically? I am trying to understand the reasoning so I can explain the feature's usefulness better and get an idea of what pitfalls to look out for in the future.

+ +

Edit: I do not believe my question is a duplicate of the linked question. The linked question is broadly asking how often to commit. I am asking about why one would choose not to use a specific feature related to commiting merges in SourceTree.

+",199550,,199550,,43122.54514,43122.78194,Why would you not commit merged changes immediately?,,3,5,2,,,CC BY-SA 3.0,, +364312,1,,,1/19/2018 21:31,,1,206,"

I'm creating a game (well, a plugin) where each player has a list of skills, each of which has a unique type object, each of which has a list of actions that need to be run when a player executes his skills. I drew a diagram to show my classes: https://imgur.com/a/XF41c

+ +

And here's a quick mockup in pseudo-python:

+ +
class Player:
+
+    def __init__(self):
+        self.skills = []
+
+    def execute_actions(self, **event_args):
+        for skill in self.skills:
+            skill.execute_actions(player=self, **event_args)
+
+
+class Skill:
+
+    def __init__(self, type_object, level):
+        self.type_object = type_object
+        self.level = level
+
+    @property
+    def name(self):
+        return self.type_object.name
+
+    def execute_actions(self, **event_args):
+        if self.level > 0:
+            self.type_object.execute_actions(skill=self, **event_args)
+
+    # more type_object properties
+
+
+class SkillType:
+
+    def __init__(self, name, max_level):
+        self.name = name
+        self.max_level = max_level
+        self.actions = []
+
+    def execute_actions(self, **event_args):
+        for action in self.actions:
+            if action.event == event_args['event_name']:
+                action.callback(**event_args)  # Action is not callable; invoke its callback
+
+
+class Action:
+
+    def __init__(self, event, callback, group=None, cooldown=None):
+        self.event = event
+        self.callback = callback
+        self.group = group
+        self._cooldown = cooldown
+
+    @property
+    def cooldown(self):
+        if self._cooldown is not None:
+            return self._cooldown
+        if self.group is not None:
+            return self.group.get('cooldown', None)
+        return None
+
+# Usage
+
+player = Player()
+
+fireball = SkillType('Fireball', 5)
+
+def fireball_deal_damage(skill, player, target, **event_args):
+    player.damage(target, skill.level)
+
+fireball.actions.append(
+    Action('player_attack', fireball_deal_damage, cooldown=5)
+)
+
+player.skills.append(Skill(fireball, 1))
+
+ +

So, these skill types and actions are actually parsed from JSON:

+ +
// skills.json
+[
+  {
+    ""name"": ""Fireball"",
+    ""max_level"": 5,
+    ""actions"": [
+      {
+        ""event"": ""player_attack"",
+        ""action"": ""deal_damage"",
+        ""data"": {""amount_base"": 3, ""amount_per_level"": 1},
+        ""cooldown"": 5
+      }
+    ]
+  }
+]
+
+ +

The action above has a cooldown of five, which is easy enough to implement, just give each action a dictionary of type {Skill: time_when_last_used} so it follows each Skill instance's cooldown for that particular action.

+ +

The problem is, sometimes I need to have multiple actions that all use the same cooldown (maybe the fireball spell needs to deal damage and ignite the enemy, so that's two separate actions), so I came up with the idea of ""groups"":

+ +
// skills.json
+[
+  {
+    ""name"": ""Fireball"",
+    ""max_level"": 5,
+    ""actions"": [
+      {
+        ""event"": ""player_attack"",
+        ""action"": ""deal_damage"",
+        ""data"": {""amount_base"": 3, ""amount_per_level"": 1},
+        ""group"": 0
+      },
+      {
+        ""event"": ""player_attack"",
+        ""action"": ""ignite"",
+        ""data"": {""duration_base"": 0, ""duration_per_level"": 1},
+        ""group"": 0
+      }
+    ],
+    ""groups"": [
+      {  // Group 0
+        ""cooldown"": 5
+      }
+  }
+]
+
+ +

Now some actions will use this group cooldown, so it's no longer as simple as giving each action instance a dictionary. Yet some actions will still use an individual cooldown rather than a ""group"" cooldown. How would I go about implementing this?

+",184890,,,,,43240.61736,"Actions that can have individual cooldowns or a ""group"" cooldown",,2,0,,,,CC BY-SA 3.0,, +364319,1,,,1/20/2018 0:23,,2,1672,"

I'm asking this question coming from a background of no experience in web development, so please be patient as I attempt to explain what I'm doing (if I use the wrong terminology, or if this has already been asked - I tried searching, but I don't exactly know how to concisely ask this question).

+ +

I have a server I've developed that handles requests from some embedded devices that my company builds. Currently, this server just acts as a file server, but it'll be a lot more in the future. Let's say for sake of example here that I want to build a webpage that gets the available files accessible to these devices, and displays them to the page.

+ +

I figure this is a job for a REST API, but I'm a bit confused over implementation details here. I see it like so: [devices] --> server <-- REST API <-- Webpage (arrows indicate which way requests flow).

+ +

So, here is where I get confused. Should I leave my server as it is and build a connector class for the REST API, or is there a way to integrate a REST API into my existing project (or vice versa, implement the server in the REST API project)? The way I've seen REST APIs built in all the tutorials I've followed, they typically connect to a database with their model classes, from the controllers. I don't know how this plays out when the model isn't coming from a database (and, as a result, there's no Entity Framework). I just figured I'd write a connector class that knows how to query my server over a socket to make a request for specific data, which would be returned to the model, then to the controller, and out the door as an HttpResponse.

+ +

I'd appreciate some input on this design here. My company is transitioning from doing desktop software into SaaS, and as such, no one in the company knows how to do this stuff yet and me and a coworker are pioneering the way (we are currently working through options for getting some professional development so we can learn best practices and such).

+",138741,,,,,43180.64306,Adding a REST API to an existing C# Project,,1,2,,,,CC BY-SA 3.0,, +364321,1,,,1/20/2018 0:50,,1,345,"

I have built simple auth for REST servers in academic projects. I always used the headers to pass the credentials. This is probably just because every tutorial I've ever seen did it this way. Initially, I'd have header fields for user and password, but I eventually switched to basic auth. Anyway, whatever it was, I always had it passed in the headers. My impression was that this was the single correct way.

+ +

I am told by the platform team at my work, that we are required to log all headers of each request received, no exceptions, and so, I should not be using headers to accept any sensitive information, such as a password. I was shown an example of a REST API built by another team. For POST and PUT methods, the auth string was accepted in the message body. For GET and DELETE methods, it was in the URL query. This seems wrong to me, but I can't exactly explain why.

+ +

I am trying to figure out if I should fight against this policy. I'd like to find out if my position is correct, and, if so, how to support it when I speak to leadership.

+ +
    +
  • Am I correct in my thinking that HTTP headers is the single correct way to pass auth credentials in a stateless REST API?
  • +
  • Am I correct that using the URL query params is dangerously insecure, or otherwise inadvisable?
  • +
  • If I am correct, is there some authoritative source that I can reference when I make the case to my leadership? (other than RFC 7235, I've read it. It is too technical for the people I need to convince.)
  • +
+",293829,,,,,43120.64514,Is there a valid alternative to HTTP headers for sending auth credentials,,2,2,1,,,CC BY-SA 3.0,, +364324,1,364334,,1/20/2018 1:20,,1,509,"

Imagine I have an abstract class Node which has several methods and attributes. (Join a network, send a message, broadcast ...).

+ +

I want to be able to add/remove functionality to/from that Node class (Routing functionality, mining functionality, ...)

+ +

I was thinking about using a Decorator pattern since that lets me change the behaviour of that class dynamically at runtime.

+ +

return new RoutingNode(new BaseNode(name));

+ +

But now, I'm starting to think that this is not the right choice since I'm using an abstract base class instead of an interface.

+ +

Basically I want to know if it's possible to add functionality to an existing object without subclassing the base class. For example, I want to add routing functionality to let a node know it's able to route incoming requests, or mining functionality to let the node perform mining tasks. But this should be interoperable, meaning I can add or remove functionality at runtime.
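The decorator idea can be reduced to a runnable sketch (Python here purely for brevity; the class and method names are made up for illustration):

```python
class BaseNode:
    # Minimal node with the behaviour every node shares.
    def __init__(self, name):
        self.name = name

    def handle(self, message):
        return [self.name + ' received ' + message]


class NodeDecorator:
    # Wraps any node and forwards calls; subclasses add one capability each.
    def __init__(self, inner):
        self.inner = inner

    def handle(self, message):
        return self.inner.handle(message)


class RoutingNode(NodeDecorator):
    def handle(self, message):
        return super().handle(message) + ['routed ' + message]
```

Capabilities then compose at runtime (e.g. RoutingNode(MiningNode(BaseNode(...)))), and removing one is just unwrapping to .inner.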

+ +

What would be the most elegant and best way to handle my use case scenario.

+",293833,,293833,,43120.45694,43120.45694,Design Pattern - Add new functionality to an abstract class,,2,7,,,,CC BY-SA 3.0,, +364336,1,364340,,1/20/2018 10:38,,4,339,"

I am learning a lot about this principle (also thanks to two answers I received here) and would like to elaborate on another point that somebody mentioned.

+ +

1) Is the following a violation of LSP?

+ +
class Base
+{
+   public virtual void UpdateUI()
+   { 
+     //documented: immediately redraws UI completely
+   }  
+}
+
+class Component: Base
+{
+  public override void UpdateUI()
+  {
+    if (Time.Seconds % 10 ==0)  //updates only every ten seconds
+    {
+      //drawing logic
+    }
+  }
+}
+
+ +

In my understanding, the code description in the base class represents the contract, expected behavior, that is violated in the subtype.

+ +

2) Why breaking of behavior does not matter for weakening precondition?

+ +
class Human
+{
+  public virtual void DoSomething(int age)
+  {
+     //precondition - age < 100
+  }
+
+}
+
+class Cyborg : Human
+{
+  public override void DoSomething(int age)
+  {
+     //precondition - age < 200
+  }
+}
+
+ +

The Cyborg class weakened the precondition, which is allowed. For valid arguments, substitution works well. Whenever I have a Human object, a Cyborg object can be used. But what if I have a test like this: Human(110) must fail, because the argument needs to be < 100.

+ +

When I substitute with Cyborg, the test will pass. I.e. the behaviour changed. Why is that allowed?
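To make the asymmetry concrete, here is the same pair of classes as a runnable sketch (Python stand-ins for the C# above, with the preconditions turned into explicit checks):

```python
class Human:
    def do_something(self, age):
        if age >= 100:  # base-class precondition: age < 100
            raise ValueError('age must be < 100')
        return 'done'


class Cyborg(Human):
    def do_something(self, age):
        if age >= 200:  # weakened precondition: age < 200
            raise ValueError('age must be < 200')
        return 'done'
```

A test expecting the Human(110)-style call to fail passes the call when a Cyborg is substituted, which is exactly the behaviour change being asked about.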

+",60327,,209774,,43122.52153,44069.59444,Liskov substitution for voids and weakened preconditions,,3,4,,,,CC BY-SA 3.0,, +364342,1,,,1/20/2018 13:35,,3,443,"

I'm trying to understand DDD, and one of the key concepts in DDD is the Domain Object. As I understand, they're supposed to 'hide' their internal state and allow modification of it only through methods (behaviors?), and only in a way that keeps the (internal) state always valid.

+ +

Would it be an over simplification to say that (at least from technical point of view) Domain Objects are nothing more than Finite State Machines with business logic inside, strict validation and names meaningful for the business?

+",216714,,,,,43120.60556,Are Domain Objects in DDD just a fancy name for Finite State Machines with validation?,,1,2,,,,CC BY-SA 3.0,, +364343,1,364349,,1/20/2018 13:48,,2,349,"

I want to calculate the runtime of the following function

+ +

T(n) = (1+2+3+4+5+...+n)/n

+ +

At first this didn't seem hard to me because it can be solved easily by transforming the formula

+ +

T(n) = (n(n+1)/2)/n = (n+1)/2, which leads to O(n).
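As a quick sanity check that the algebraic simplification preserves the value (a throwaway script, using the standard closed form n(n+1)/2 for the sum 1 + 2 + ... + n):

```python
def t_direct(n):
    # T(n) computed literally from the definition: (1 + 2 + ... + n) / n.
    return sum(range(1, n + 1)) / n


def t_simplified(n):
    # After cancelling the n: (n(n+1)/2) / n = (n + 1) / 2.
    return (n + 1) / 2
```

Both forms agree for every n, which is what cancelling the common factor guarantees.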

+ +

Thinking about this function more, I struggled. I am not sure if I am allowed to cancel the n, because I don't know the code behind that function.

+ +

For example it could be something like

+ +
method foo()
+{
+          methodWhichTakesNSquaredAmountOfTime(); //Build sum, O(n^2)
+          methodWhichTakesNAmountOfTimeAndCantBeSimplified(); //O(n)
+}
+
+ +

For this method I would get n squared as the runtime.

+ +
+

O(n^2) + O(n) = O(n^2)

+
+ +

I know that this method doesn't cover the original term, but I hope you get what I meant: the divided-by-n could be a completely different function (which accidentally has a complexity of n), and therefore I cannot cancel the other n's with it.

+ +

So I am confused. Am I allowed to transform terms normally while calculating the Big O, or do some math rules not apply here?

+ +

Thanks.

+",293862,,,,,43120.74444,Am I allowed to simplify terms while calculating Big O?,,1,1,,,,CC BY-SA 3.0,, +364347,1,,,1/20/2018 14:32,,4,328,"

I looked through some of my older code and found that I was using the using namespace directive. From what I read in a lot of Google results, it seems that it is never a good idea to use this. Is there actually a valid use case for this construct, or was it just a misguided attempt to make people's lives easier that failed?

+",291476,,155513,,43120.61111,43120.96458,Is there a valid use case for the using namespace directive?,,2,1,,,,CC BY-SA 3.0,, +364353,1,,,1/20/2018 15:30,,1,90,"

Suppose I'd like to implement the templating pattern, but the only real differences between the subclasses are their choices of some invariant dependencies.

+ +

Is there a drawback to preferring this style:

+ +
public abstract class AbstractClass {
+    private final DependencyA dependencyA;
+    private final DependencyB dependencyB;
+
+    public AbstractClass(final DependencyA dependencyA, final DependencyB dependencyB) {
+        this.dependencyA = dependencyA;
+        this.dependencyB = dependencyB;
+    }
+
+    public void doStuffWithDeps() {
+        //Business logic using the dependency fields
+    }
+}
+
+ +

To this style?

+ +
public abstract class AbstractClass {
+    protected abstract DependencyA getDependencyA();
+    protected abstract DependencyB getDependencyB();
+
+    public void doStuffWithDeps() {
+        //Business logic using the dependency getters
+    }
+}
+
+ +

I don't have much experience using the first of the two, but I'd argue that in cases where the dependencies don't change over time, the first is preferred as there is no reason to keep asking for the dependencies for each call to the doStuffWithDeps() method. However, whenever I've seen this kind of problem it has always been solved with the second implementation, which makes me wonder if I've missed something.

+ +

I realise inheritance is not really a good solution to these kinds of problems in the first place, but suppose these are my two options, which one should I prefer and why?

+",161001,,,,,43122.59028,Constructors vs getters for implementing the templating method with invariant dependencies?,,3,4,,,,CC BY-SA 3.0,, +364357,1,,,1/20/2018 16:52,,1,174,"

This project will serve many duplicate requests with location-specific answers. I.e. 10,000 people in New York will all get the same server response (a list of businesses in New York), but one person in Atlanta will get completely different data (businesses in Atlanta).

+ +

Planned architecture:

+1. In memory cache which stores the most recent answer for each location. Answers expire every few minutes and an updated answer from the database is loaded.

+2. Relational Database stores all the actual business data to make answers.

+ +


+How can I group cache results by location? +

+1. Each database row could know which city the business is in, and users would need to pick their city from a list. Then I would have a separate result cache for each city. +

2. I could just take the user's lat/long, store the lat/long of every business in its database row, and do a location query. I do not know how much caching I could do with that approach. Maybe see if I've answered a request within 20 miles of the query location and re-use that response? This avoids having to group things ahead of time but is less accurate.
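Approach 1 (one cached answer per city, expiring every few minutes) might be sketched like this; the class and parameter names are invented for illustration:

```python
import time


class CityCache:
    # One cached answer per city, expiring ttl seconds after it is stored.
    def __init__(self, ttl, loader, clock=time.monotonic):
        self.ttl = ttl
        self.loader = loader  # e.g. the database query that builds a city's answer
        self.clock = clock
        self.entries = {}     # {city: (expires_at, answer)}

    def get(self, city):
        entry = self.entries.get(city)
        if entry is not None and entry[0] > self.clock():
            return entry[1]
        # Expired or missing: rebuild from the database and cache it.
        answer = self.loader(city)
        self.entries[city] = (self.clock() + self.ttl, answer)
        return answer
```

With this shape, 10,000 New York requests in one TTL window hit the loader once.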

+ +


This seems like a common problem. Is one of these approaches preferred (or something different altogether)? The table in question will have ~100k records worst case, so is this caching overkill anyway? Planning to use AWS, if that changes anything or if they have tools for this I should be aware of.

+",293875,,,,,43120.70278,Location specific caching for read-heavy data (server architecture),,0,5,,,,CC BY-SA 3.0,, +364358,1,364362,,1/20/2018 17:25,,4,340,"

In the comments to an answer of another question, I proposed the following Java code as a better way of writing a more procedural-style variant of the same operation:

+ +
employeeService.getEmployee()
+    .map(Employee::getId)
+    .ifPresent(System.out::println);
+
+ +

(where getEmployee() returns an Optional<Employee>)

+ +

An upvoted response to this comment suggests that this ""drastically reduces code readability"" and that the commentor would reject it in code review.

+ +

I don't understand this position: to me the code seems as readable, or even more readable, than the alternative procedural style code it was intended to replace. And yet at least two viewers of that comment believe it to be hard to read. Why is that? What makes this code difficult to read (at least for some readers)?

+",66037,,,,,43120.94861,Why are functional-style chained map operations considered hard to read?,,4,2,,,,CC BY-SA 3.0,, +364364,1,,,1/20/2018 19:29,,2,325,"

I'm writing an app which tracks device location and, based on some factors (the user gets an assignment), needs to change the location tracking settings (for example, the frequency).

+ +

I have an issue with how to design this.

+ +
// I left out methods to start/stop tracking
+interface ILocationTracker
+{
+    event EventHandler<Position> LocationChanged;
+
+    void UpdateTrackingSettings(TrackingSettings settings);
+}
+
+interface IUser
+{
+    event EventHandler<Assignment> AssignmentAdded;
+}
+
+ +

I don't want to couple the IUser to the ILocationTracker, so I added this (I'm leaving fields out so it looks simpler to read):

+ +
class UpdateLocationTrackingSettings
+{ 
+     public UpdateLocationTrackingSettings(ILocationTracker tracker, IUser user)
+     {
+     }
+
+     // I need to somehow start/stop it listening to the AssignmentAdded event
+     public void Start() {
+          // Subscribe to IUser:AssignmentAdded
+     }
+
+     public void Stop() {
+          // Unsubscribe from IUser:AssignmentAdded
+     }
+
+      void OnAssignmentAdded(Assignment a)
+      {
+          TrackingSettings settings = GetSettingsByAssignment(a); 
+          _tracker.UpdateTrackingSettings(settings);
+      }
+}
+
+ +

The questions I have:

+ +
    +
  • How to 'start' the UpdateLocationTrackingSettings? Should I have a façade for the location tracking which contains ILocationTracker and UpdateLocationTrackingSettings and calls Start() on each?

  • +
  • I feel like for GetSettingsByAssignment I should use the strategy pattern. I am thinking of implementing something like ITrackingSettingProvider, which computes and returns the tracking settings based on the assignment.

  • +
  • Is this overthinking\over engineering?

  • +
  • Any suggestions of a different or better design?

  • +
+",271316,,271316,,43120.83056,43122.71111,Mediator pattern or facade or ...?,,1,2,,,,CC BY-SA 3.0,, +364370,1,364429,,1/20/2018 21:27,,-1,454,"

I am learning Python, and when I learned that we can build custom classes for exceptions, I got confused as to why.

+ +

Example 1:

+ +
class MyException(Exception):
+    def __init__(self, error):
+        self.error = error
+
+    def __str__(self):
+        # DO THE WORK TO BE DONE FOR THE EXCEPTION
+        return ""Here is my custom made exception: "" + self.error
+
+ +

Example 2:

+ +
try:
+  # SOMETHING
+except Exception:
+  # DO THE WORK TO BE DONE FOR THE EXCEPTION
+  raise Exception(""Here is my custom made exception - Whats the reason ?"")
+
+ +

If example 2 does the same work as example 1, why do we need a custom exception? Is there a scenario where I'd need a custom exception when I could just do everything I needed inside the except block?
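For context, the scenario that usually motivates a custom exception class (general Python practice, not something taken from the snippets above) is selective handling: callers can catch exactly this failure and let everything else propagate:

```python
class ValidationError(Exception):
    # Hypothetical custom exception carrying extra context for the caller.
    def __init__(self, field, message):
        super().__init__(message)
        self.field = field


def save(record):
    if 'name' not in record:
        raise ValidationError('name', 'name is required')
    return True


def try_save(record):
    try:
        return save(record)
    except ValidationError as exc:
        # Only this specific failure is handled here; a KeyError,
        # OSError, etc. would still propagate to the caller.
        return exc.field
```

With a plain raise Exception(...) as in example 2, a caller would have to catch every exception and inspect the message string to tell failures apart.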

+",293892,,293892,,43120.90417,43122.47778,Reasoning behind custom class Exception,,3,1,,,,CC BY-SA 3.0,, +364371,1,,,1/20/2018 21:44,,5,1037,"

Let's say we wanted a typeful, pure functional programming language, like Haskell or Idris, that is aimed at systems programming without garbage collection and has no runtime (or at least not more than the C and Rust ""runtimes""). Something that can run, more or less, on bare metal.

+ +

What are some of the options for static memory safety that don't require manual memory management or runtime garbage collection, and how might the problem be solved using the type system of a pure functional language similar to Haskell or Idris?

+",94662,,94662,,43122.06042,43122.06042,Type-based memory safety without manual memory manage or runtime garbage collection?,,2,13,2,43122.46181,,CC BY-SA 3.0,, +364385,1,,,1/21/2018 8:21,,-3,254,"

How should the process of versioning be organized in a development team? Suppose that we have a CI system, and we want to start versioning our software.

+ +
+ +

If we speak about semver, then the major and minor versions must be changed manually, right? What to do with the patch version? I think that's also manual, but I'm not sure. The build number may come from the CI system. In that case developers will be responsible for the changes they made, and changes will also be controlled with pull requests and code review.

+ +
+ +

Taking into account the above, suppose we develop a project on the .NET platform. Is it normal if developers change AssemblyInfo after their changes? Respectively, the team lead must tag releases in the git repository at the end of the sprint, or something like that.

+ +
+ +

Summary:

+ +
    +
  • 1) Make changes in the project code;
  • +
  • 2) Manually change AssemblyVersion - major, minor or patch depending on the changes;
  • +
  • 3) Create a pull request to merge into the develop branch;
  • +
  • 4) The team lead does code review and approves or rejects the changes;
  • +
  • n) At release, the team lead merges develop into master and creates a release tag.
  • +
+ +
+ +

Or do common tools exist to automate this process? If possible, describe your point of view.

+ +

UPD: This question is in the context of developing a sport-events monitoring system. The system consists of a client/admin web application and a few Windows services. Development uses N internal libraries that need versioning.

+ +

QA will use the versions to specify the version of the component in which a bug was found. Developers will too, because different clients may use different versions of the application; i.e. we need a mechanism to detect the application's state and resolve dependencies during development.

+",,user293916,,user293916,43121.5,43121.55903,What is structure of versioning process?,<.net>,1,4,,,,CC BY-SA 3.0,, +364387,1,,,1/21/2018 10:13,,-1,92,"

Using a static site generator has the advantages of themes, a blog and hosting (via Netlify). How can this be combined with a login form, showing that a user is logged in, and dynamic pages (Flask backend), while still keeping the same styling and being perceived as a single app?

+",293921,,,,,43121.49028,How can a static site generator link dynamic properties?,,1,2,,43131.63958,,CC BY-SA 3.0,, +364395,1,364399,,1/21/2018 15:48,,5,1317,"

I have open source C# .NET project at GitHub with Appveyor CI + code coverage.

+ +

There are configurations like Release and Debug. There are also platforms like x86, x64 and Any CPU.

+ +

This results to following:

+ +
Configuration | Platform | Tests
+--------------+----------+--------
+Release       | x86      | 100/100
+Release       | x64      | 100/100
+Release       | Any CPU  | 100/100
+Debug         | x86      | 100/100
+Debug         | x64      | 100/100
+Debug         | Any CPU  | 100/100
+
+ +

Should unit tests always run on all of the (six) combinations of these two dimensions when an application is in the development stage? I.e. mainly only developers are involved in the process and lots of changes are taking place.

+ +

My intuition says that only Debug configuration and one platform should be tested.

+ +

My view is that the Debug configuration gives more information, and with C# being quite a high-level language running in a VM (the CLR), there's no need to test x64 and x86 separately.

+ +

There's also third dimension with .NET framework version. And fourth with OS.

+ +

Is there a best-practices view in the field on what is or isn't necessary while an application is being developed?

+ +

When shipping, all tests are of course run. Same with possible nightly/weekly/etc. builds.

+",81198,,81198,,43121.86597,43122.57431,Should all configurations and platforms run unit tests when application is in development stage?,<.net>,3,2,,,,CC BY-SA 3.0,, +364404,1,364405,,1/21/2018 18:51,,3,1143,"

I'm just starting out with Qt and I really want to try to keep my application as separated from Qt as possible in case I decide to use a different toolset later, but at the same time I don't want to make any decisions that will really cripple my application right from the beginning.

+ +

When writing a Qt Application is it considered good practice to always prefer Qt implementations when they are available? Or in some situations is it best to stick with standard C++ even when Qt has an alternative?

+ +

Consider the following...

+ +

Function implementations:

+ +

Should I always prefer to use a Qt function implementation when it is available? Take the math functions for example...

+ +
    +
  • pow vs qPow
  • +
  • log vs qLn
  • +
  • etc...
  • +
+ +

Data types:

+ +

Should I always prefer Qt data types over the defaults?

+ +
    +
  • int32_t vs qint32
  • +
  • double vs qreal
  • +
  • etc...
  • +
+ +

Objects:

+ +

Should I always prefer Qt objects over their STL equivalents?

+ +
    +
  • std::string vs QString
  • +
  • std::vector vs QVector
  • +
+ +
+ +

In each of these situations what are the advantages/disadvantages. What will I gain or lose?

+ +

The more Qt based functions/objects I use the more difficult it would be to switch to something other than Qt later.

+",250584,,155513,,43121.80764,43121.80764,"When writing a Qt application is good practice to ALWAYS prefer Qt function implementations, data types, and classes when they are available?",,1,3,,,,CC BY-SA 3.0,, +364408,1,,,1/21/2018 20:24,,1,128,"

I am of the opinion that each method in a service should only do one small step of a larger task, delegate its result to the successive step/method and terminate. No matter whether that next method lives in the same service or not, it is always triggered via inter-service routing (message queue, service discovery + REST call, whatever ...) and thus perhaps handled by a different instance. The method should never be called directly by default, unless selectively enabled.

+ +

I know, network communication is slower by orders of magnitude and complex tasks with a lot of steps will run milliseconds slower, but this is the only way I see that ensures that

+ +
    +
  • each step is implemented stateless so that it can - if needed - be replaced in a different technology with no overhead
  • +
  • failure handling is least expensive, as waiting for a response or doing a successive step in the same process increases the chance of losing intermediate step results if that process dies for whatever reason
  • +
  • definite failure rate is reduced as it is easier and more predictable to implement a standardized failure handling centrally than on method level
  • +
+ +

Do you think I'm overlooking something, and what is your opinion?

+",293948,,,,,43121.86736,Microservice Communication: Enforce RPC?,,1,1,,,,CC BY-SA 3.0,, +364410,1,,,1/21/2018 20:55,,-3,400,"

I have been asked to develop a multitenant application where companies/users can log in and a user can belong to a company. I have been asked specifically that they don't want to be inviting users to their company manually, because they think it's a lot of work, but they also want to avoid exposing other companies when a user creates an account.

+ +

For example:

+ +
    +
  • Company 1

  • +
  • Company 2

  • +
+ +

If I create a user that belongs to ""Company 1"" there is no way I should be able to know Company 2 even exists.

+ +

How can I do this? I can't use a dropdown/search/string field, because that would expose Company 2.

+ +

I don't know if you guys understood what I meant before. What I want to know is whether there's a way that I can know, at the time of registering an account, that a user belongs to Company 1 without showing Company 2.

+",293954,,1204,,43121.96597,43121.96597,Best way of authenticating users using multi-tenant application,,1,6,,,,CC BY-SA 3.0,, +364411,1,,,1/21/2018 21:23,,1,52,"

So I'm working on a project and I'm running into conceptual problems in creating my user interface. It's for a DirectX11 multi-monitor game I'm writing.

+ +

I've got a prototype working, with entity menus popping up and more entities inside those menus. All is well and good, except it's a coding fuster-cluck. I'm doing it wrong, so I've been spending the past couple of weeks learning and deciding how to refactor it to be better.

+ +

So far, I've decided to separate it into three parts, similar to MVC: one is the Event Listener, or the Controller, which will contain the bounding boxes, the base class, and the GUID for the entity, plus a bunch of virtual methods that each act on a single type of input. I'm planning on arranging the memory structure of the entities in a quad-tree, with objects that span multiple quads allocated to each, and with quads broken down until no more than 8 items are in a quad, whereupon a linear search will be made, from highest GUID item to lowest. Input is run through the Event Listener, and it runs the virtual functions on an entity.

+ +

The next part is the Entity Class, or Model, which holds the type for all entities: units, menus, etc. The base Entity class has virtual functions for all the click-types, as well as scrolling, hovering, and keyboard input. By running final virtual functions in derived classes, I update my Gamestate.

+ +

Lastly, in my Gamestate or View, I'm recording what Entities are active on the screen, as well as the state of all other entities. Here is also where I register/unregister things from the event listener. It also contains methods that direct what is shown to the user.

+ +

Now, my problem is like this. For stationary entities, or ones that move rarely, like menu items and scroll-bars, the cost for recalculating the quad-tree for them is fairly low cost. Menu objects don't always need to be added or removed every frame. The problem comes in when I'm dealing with the viewing of multiple unit entities (there could be up to several hundred on screen at a time). They are moving on a 2d plane, and can be moving between bounding boxes fairly rapidly. Assembling the quad tree is something like nlog^2n in the worst case scenario, where everything is clustered or large enough to span multiple bounding boxes. So I don't think that rebuilding it each frame is the best solution.

+ +

Unregistering items also looks to be a pain, as I'll have to traverse the entire tree to find the particular GUID, unless I make second tree of GUID entities and use it to point to the particular node on a tree which will contain pointers to the node(s) on the other tree that contain that entities GUID. Or I could just rebuild the entire tree each gamestate update. Which, again, would be expensive.

+ +

Is there a better design pattern to set up my user interface, or at least the Event Listener part of it, and if so, what is it?

+",293953,,,,,43121.89097,How should I set up the listener service for a dynamic user interface?,,0,2,,,,CC BY-SA 3.0,, +364416,1,,,1/22/2018 6:08,,5,726,"

Test-first unit testing requires writing tests before code; on the other hand, in F# (and most functional languages) some code is extremely short, as follows:

+ +
let f = getNames
+>> Observable.flatmap ObservableJson.jsonArrayToObservableObjects<string>
+
+ +

or :

+ +
let jsonArrayToObservableObjects<'t> =
+    JsonConvert.DeserializeObject<'t[]> 
+    >> Observable.ToObservable
+
+ +

And the simplest property-based test I ended up for the latter function is :

+ +
 testList ""ObservableJson"" [
+        testProperty ""Should convert an Observable of `json` array to Observable of single F# objects"" <| fun _ -> 
+            //--Arrange--
+            let (sArray, jsonArray) = createAJsonArrayOfString stringArray
+
+            //--Act--
+            let actual = jsonArray
+                         |> ObservableJson.jsonArrayToObservableObjects<string> 
+                         |> Observable.ToArray 
+                         |> Observable.Wait
+
+
+            //--Assert--
+            Expect.sequenceEqual actual sArray
+    ]
+
+ +

Regardless of the arrange part, the test is longer than the function under test, so it's harder to read than the function itself!

+ +
    +
  • What would be the value of testing when it's harder to read than the production code?
  • +
+ +

On the other hand:

+ +
    +
  • The composition of two functions is another function by itself which can be considered as a unit.
  • +
  • I wonder whether functions which are a composition of multiple functions are safe not to test?
  • +
  • Should they be tested at integration and acceptance level instead?
  • +
  • And what if they are short but do complex operations?
  • +
+",28921,,28921,,43122.39236,43122.775,Should function composition and piping be tested?,,1,0,1,,,CC BY-SA 3.0,, +364418,1,,,1/22/2018 7:21,,30,11653,"

I have a piece of code where I iterate a map until a certain condition is true and then later on use that condition to do some more stuff.

+ +

Example:

+ +
Map<BigInteger, List<String>> map = handler.getMap();
+
+if(map != null && !map.isEmpty())
+{
+    for (Map.Entry<BigInteger, List<String>> entry : map.entrySet())
+    {
+        fillUpList();
+
+        if(list.size() > limit)
+        {
+            limitFlag = true;
+            break;
+        }
+    }
+}
+else
+{
+    logger.info(""\n>>>>> \n\t 6.1 NO entries to iterate over (for given FC and target) \n"");
+}
+
+if(!limitFlag) // Continue only if limitFlag is not set
+{
+    // Do something
+}
+
+ +

I feel setting a flag and then using that to do more stuff is a code smell.

+ +

Am I right? How could I remove this?
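One common way to remove such a flag, sketched here in Python rather than the question's Java (the function names are invented for illustration), is to extract the loop into a function whose return value replaces the flag:

```python
def exceeds_limit(entries, limit):
    # The early return replaces the mutable limitFlag from the original loop.
    filled = []
    for entry in entries:
        filled.append(entry)
        if len(filled) > limit:
            return True
    return False


def process(entries, limit):
    # The caller branches on the return value instead of a flag
    # that was mutated somewhere inside the loop.
    if exceeds_limit(entries, limit):
        return 'limit reached'
    return 'processed'
```

The flag's lifetime shrinks to a single expression, and the loop body no longer needs break-plus-flag bookkeeping.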

+",173849,,591,,43123.68819,43127.20833,Is it a code smell to set a flag in a loop to use it later?,,7,24,7,,,CC BY-SA 3.0,, +364424,1,,,1/22/2018 10:06,,4,2189,"

I am a layman without any programming-like education but I spent the better part of my free time to get into programming bots for some games in first AutoIt and then C++. I was introduced to programming in a very procedural style only using free functions. A couple of weeks ago, my focus shifted from actually making these bots to understanding how professional programmers are structuring their programs and inevitably I came across OOP, Design Patterns, SOLID and most recently Clean Architecture.

+ +

I want to say beforehand that I know full-well that these techniques are most likely counter-productive for the tiny programs I write; however, I want to get to know them out of the interest of learning.

+ +

I am currently trying to split the program into smaller parts to stick to the SRP and I’m sort of trying to implement the idea(s) behind the Clean Architecture. Since it’s an incredibly smaller program than what Clean is targeted at and since I’m most likely the only one who will be working on it, I am not too strict with the separation into Controllers, Presenters, ViewModels, Views and so on – also I’m not planning to separate the program into different DLLs in order to have the GUI in a completely different place. However, I am fond of the idea to have GUI absolutely dependent on the (business-) logic whilst the logic doesn’t know about the GUI at all, thus I will treat the GUI-Module as if it was in a different project. However, the “crossing of the boundaries” is something I cannot, for the life of me, understand how to implement, especially if it was from an external DLL to the application.

+ +

Correct me if I’m wrong: There is a Controller which sends a request to the Interactor-Object which will deal with several Entities and eventually return a result to the Presenter which takes care of the changes in the GUI. Basically, my input calls a function that returns some value after working with the app. This function-call is put into the Interactor-Object in the style of the Command-Pattern. It implements some Boundary-Interface* which is made public. That’s necessary to build the Controller in the first place since it needs to know the signatures it can call. Therefore, that interface becomes the access-point for the external DLL to send requests to the application.

+ +

However, wouldn’t the Interactor need to get references to the Business-Entities and the Presenter/s on construction so it can interact with them? The Controller cannot pass them into the Interactor as it doesn’t know anything about the Entities, which means the Controller would have to call an already built concretion of the Interactor – how is that being done? In my mind, the Application would have to pass the concrete Interactor to the Controller which is forbidden as the “inner circles” are not supposed to know about the outer ones.

+ +

This brings me to the second part, crossing the boundary outwards: How does the Interactor pass the response to an outer circle without knowing about it? I could imagine this is some sort of Observer-Pattern as the application does provide the interface the Presenter/s will have to implement, so the interactor does know the signature of the Presenters. However, this would require the Presenters to register themselves to a concrete Interactor-object which ends up in the same question as above.

+ +

Thank you in advance – any input would be appreciated.

+ +

TL;DR: Where do you instantiate UseCases/Interactors and how do you access these concretions from outside?
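To sketch one common answer (all names below are invented for illustration): the composition root - typically main - is the only place that knows the concrete types. The Controller holds only the input-boundary interface, and the Interactor holds only the output-boundary interface, so no inner circle ever references an outer one:

```java
// Boundary interfaces owned by the application (inner) circle.
interface InputBoundary { void handle(String request); }
interface OutputBoundary { void present(String response); }

// The Interactor depends only on the abstract output boundary.
class Interactor implements InputBoundary {
    private final OutputBoundary presenter;
    Interactor(OutputBoundary presenter) { this.presenter = presenter; }
    public void handle(String request) { presenter.present("result for " + request); }
}

// The Controller depends only on the abstract input boundary.
class Controller {
    private final InputBoundary useCase;
    Controller(InputBoundary useCase) { this.useCase = useCase; }
    void onUserInput(String input) { useCase.handle(input); }
}

// A trivial Presenter implementing the output boundary.
class ConsolePresenter implements OutputBoundary {
    String lastResponse;
    public void present(String response) { lastResponse = response; }
}

// The composition root is the only place that knows the concretions.
class CompositionRoot {
    static Controller wire(ConsolePresenter presenter) {
        return new Controller(new Interactor(presenter));
    }
}
```

The wiring itself lives in the outermost layer, which is allowed to know everything; the dependency rule only forbids the inner circles from naming the outer types.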

+",293989,,293989,,43122.45903,43385.15486,"""Crossing Boundaries"" in Clean Architecture",,2,4,1,,,CC BY-SA 3.0,, +364431,1,364432,,1/22/2018 11:46,,1,2067,"

Please see the question here: https://stackoverflow.com/questions/2352654/should-domain-entities-be-exposed-as-interfaces-or-as-plain-objects

+ +

The general consensus is that Domain Models should not contain interfaces. Now see here: https://github.com/jbogard/presentations/blob/master/WickedDomainModels/After/Model/Member.cs and specifically this code:

+ +
public Offer AssignOffer(OfferType offerType, IOfferValueCalculator valueCalculator) 
+        { 
+            DateTime dateExpiring = offerType.CalculateExpirationDate(); 
+            int value = valueCalculator.CalculateValue(this, offerType); 
+
+
+            var offer = new Offer(this, offerType, dateExpiring, value); 
+
+
+            _assignedOffers.Add(offer); 
+
+
+            NumberOfActiveOffers++; 
+
+
+            return offer; 
+        } 
+
+ +

The author of this blog writes a lot about DDD.

+ +

Notice that an interface is injected into the method here instead of a concrete type, i.e. a concrete OfferValueCalculator would implement IOfferValueCalculator.

+ +

I am talking from the perspective of a DDD purist. Is it acceptable for domain entities to implement interfaces? If the answer is yes, then in what circumstances would it be acceptable to inject an IOfferValueCalculator into a Domain Entity such as Member?

+ +

Once again I realise that the DDD approach is not a one size fits all and that an Anemic Domain model is suitable in a lot of cases. I am just trying to improve my knowledge of this specific area.

+",65549,,,,,43122.49514,Is it acceptable for a Domain Entity to implement an interface?,,1,0,,,,CC BY-SA 3.0,, +364433,1,,,1/22/2018 12:03,,2,480,"

I am trying to implement a preemptive scheduler in C, but I have trouble understanding a few things:

+ +

When the scheduler is called by an interrupt, a context switch may occur. The context switch can only be programmed in assembler. In my C program a task is a function. If the scheduler is called by an interrupt and a context switch occurs, I cannot start a new task inside the interrupt. In the interrupt I store the context of the current task (I'm not sure about this). But after the interrupt, the program will return to the interrupted function. So where do I start a new task? Should I return to the main loop in assembler after saving the context?

+",293986,,172693,,43122.66042,43122.66042,Some questions about implementing a preemptive scheduler in C: Context switching and execution time of the dispatcher,,1,3,,,,CC BY-SA 3.0,, +364434,1,,,1/22/2018 12:10,,-2,34,"
Given an array A[N] of N booleans, return a, b such that a >= 0, b > 0 and
+A[a] = true
+A[a+b] = true
+A[a+2b] = true
+or -1 if they don't exist.
+
+ +

The best algorithm I could find was brute forcing the entire search space, which is O(n^2), and I wanted to know if there is a better algorithm.
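For reference, the brute force mentioned above can be written as follows (Java, illustrative); it is O(n^2) in the worst case:

```java
class Progression {
    // Brute force: for each true start cell a and each spacing b,
    // test the remaining two cells A[a+b] and A[a+2b].
    static int[] find(boolean[] a) {
        int n = a.length;
        for (int start = 0; start < n; start++) {
            if (!a[start]) continue;
            for (int b = 1; start + 2 * b < n; b++) {
                if (a[start + b] && a[start + 2 * b]) {
                    return new int[] { start, b };
                }
            }
        }
        return null; // plays the role of the question's -1
    }
}
```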

+",294005,,,,,43122.52292,Finding 3 equally spaced boolean cells,,1,0,,,,CC BY-SA 3.0,, +364438,1,364507,,1/22/2018 12:50,,3,11926,"

It's very common to see a generic DAO implementation like this:

+ +
public List<E> getResultList(String namedQuery, Map<String, Object> parameters) {
+    Query query = entityManager.createNamedQuery(namedQuery);
+    parameters.entrySet().forEach(e -> query.setParameter(e.getKey(), e.getValue()));
+    return query.getResultList();
+}
+
+ +

I have some problems with this approach:

+ +
    +
  1. Using a complex structure to just pass a list of key & value data.
  2. +
  3. Map creation is very verbose.
  4. +
+ +

Example:

+ +
public List<TransacaoTEF> getActiveTransactions1(TipoMensagem tipoMensagem, LocalDate date) {
+    Map<String, Object> parameters = new HashMap<>();
+    parameters.put(""type"", tipoMensagem);
+    parameters.put(""date"", date);
+    return getResultList(""namedQueryXTPO"", parameters);
+}
+
+ +

To avoid this I thought about creating a simple Parameter class:

+ +
public List<E> getResultList(String namedQuery, Parameter... parameters) {
+    Query query = entityManager.createNamedQuery(namedQuery);
+    Arrays.stream(parameters).forEach(p -> query.setParameter(p.getName(), p.getValue()));
+    return query.getResultList();
+}
+
+public List<E> getResultList(String namedQuery, List<Parameter> parameters) {
+    Query query = entityManager.createNamedQuery(namedQuery);
+    parameters.forEach(p -> query.setParameter(p.getName(), p.getValue()));
+    return query.getResultList();
+}   
+
+ +

Using:

+ +
public List<TransacaoTEF> getActiveTransactions2(TipoMensagem tipoMensagem, LocalDate date) {
+    return getResultList(""namedQueryXTPO"", 
+            new Parameter(""type"", tipoMensagem), new Parameter(""date"", date));
+}
+
+public List<TransacaoTEF> getActiveTransactions3(TipoMensagem tipoMensagem, LocalDate date) {
+    List<Parameter> parameters = Arrays.asList(
+            new Parameter(""type"", tipoMensagem), 
+            new Parameter(""date"", date));
+    return getResultList(""namedQueryXTPO"", parameters);
+}   
+
+ +
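For completeness, a minimal Parameter value class - an assumption, since the code above uses it but never shows it - could be:

```java
// A small immutable name/value pair for query parameters.
final class Parameter {
    private final String name;
    private final Object value;

    Parameter(String name, Object value) {
        this.name = name;
        this.value = value;
    }

    String getName() { return name; }
    Object getValue() { return value; }
}
```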

Is this over-engineering, or just paranoia? ;p

+",169422,,,,,43123.68958,Using Map to pass query parameters in DAO,,2,0,,,,CC BY-SA 3.0,, +364440,1,364453,,1/22/2018 13:36,,0,95,"

Using SQLite in a WPF application that schedules events for people. The application will be used by 2-4 users concurrently. I'm wondering how to ensure each user has the most current database information.

+ +

What I came up with is:

+ +
    +
  1. Start a separate thread responsible for checking for database updates.
  2. +
  3. Use time interval setting (e.g. 30s, 1 min, 2 min, 5 min, etc.) to SELECT the data.
  4. +
  5. Pull all data and create a new in-memory collection for the domain.
  6. +
+ +

Once I have that working, I can use something like a LastModifiedDateAndTime column to keep track of when the database was modified, so I'm not re-querying the same data. It seems awfully inefficient to recreate the collection every 30s, 1 min, 5 min, etc. Am I going about this the correct way, using a separate thread to SELECT the schedule data?

+",274856,,,,,43123.09236,Database checking for updates,,1,2,,,,CC BY-SA 3.0,, +364441,1,364506,,1/22/2018 13:51,,1,241,"

Say I have a namespace as follows:

+ +

CompanyName.TechnologyName[.Feature][.Design]

+ +

Like this: https://github.com/vkhorikov/DddInAction/tree/master/DddInPractice.Logic/Atms

+ +

.Design is the AggregateRoot (e.g. BuyingAggregate). This namespace contains all of my Entities and Value Objects.

+ +

Say I also introduce a Domain Service, which uses entities and value objects from the BuyingAggregate only. Should this domain service be part of the BuyingAggregate namespace or should it be placed in another namespace?

+ +

I believe it should be put in the same namespace; however, I remember reading a question where the consensus was different, and I cannot find it now - hence this question.

+",65549,,222996,,43123.55,43123.55,Should I have a separate namespace for Domain Services?,,2,0,,,,CC BY-SA 3.0,, +364443,1,,,1/22/2018 14:10,,4,84,"


+ +

I have the following scenario:

+ +
    +
  • We want to create nuget packages on VSTS
  • +
  • We want the packages to be available for an external party (preferably no login, tokens...?)
  • +
  • For our developers we want to have the symbols for that package coming from VSTS
  • +
  • The Debug packages should have the output + pdb files. (no source)
  • +
+ +

If I understand correctly, we can package using -Symbols. This creates 2 packages: 1 with the Release build and 1 with the Debug build plus symbols and source. How am I supposed to distribute these to achieve the above requirements?

+ +

Note: the packages are considered private so we can't simply upload to nuget.org.

+",107107,,107107,,43123.35972,43123.35972,NuGet on VSTS strategy,,1,1,,,,CC BY-SA 3.0,, +364448,1,,,1/22/2018 15:19,,10,2059,"

When parsing user input, it is generally recommended not to throw and catch exceptions but rather to use validation methods. In the .NET BCL, this would be the difference between, for example, int.Parse (throws an exception on invalid data) and int.TryParse (returns false on invalid data).

+ +

I am designing my own

+ +
Foo.TryParse(string s, out Foo result)
+
+ +

method and I'm unsure about the return value. I could use bool like .NET's own TryParse method, but that would give no indication about the type of error, about the exact reason why s could not be parsed into a Foo. (For example, s could have unmatched parenthesis, or the wrong number of characters, or a Bar without a corresponding Baz, etc.)

+ +

As a user of APIs, I strongly dislike methods which just return a success/failure Boolean without telling me why the operation failed. This makes debugging a guessing game, and I don't want to impose that on my library's clients either.

+ +

I can think of a lot of workarounds to this issue (return status codes, return an error string, add an error string as an out parameter), but they all have their respective downsides, and I also want to stay consistent with the conventions of the .NET Framework.

+ +
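For comparison, one shape such an API can take is a small result object carrying the failure reason instead of a bare boolean; the sketch below is in Java for neutrality (a C# version would be structurally the same), and the error categories are invented placeholders:

```java
// A result object that carries either a value or a failure reason.
final class ParseResult<T> {
    enum Error { NONE, EMPTY_INPUT, BAD_CHARACTER }

    final T value;     // null on failure
    final Error error; // NONE on success

    private ParseResult(T value, Error error) { this.value = value; this.error = error; }
    static <T> ParseResult<T> ok(T v) { return new ParseResult<>(v, Error.NONE); }
    static <T> ParseResult<T> fail(Error e) { return new ParseResult<>(null, e); }
    boolean isSuccess() { return error == Error.NONE; }
}

class IntParser {
    static ParseResult<Integer> tryParse(String s) {
        if (s == null || s.isEmpty()) return ParseResult.fail(ParseResult.Error.EMPTY_INPUT);
        try {
            return ParseResult.ok(Integer.parseInt(s));
        } catch (NumberFormatException e) { // caught internally, never escapes to the caller
            return ParseResult.fail(ParseResult.Error.BAD_CHARACTER);
        }
    }
}
```

From the caller's point of view, no exception ever crosses the API boundary, yet the failure category is still available for diagnostics.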

Thus, my question is as follows:

+ +

Are there methods in the .NET Framework which (a) parse input without throwing exceptions and (b) still return more detailed error information than a simple true/false Boolean?

+",33843,,33843,,43122.64236,43605.82917,How would I design a TryParse method which provides detailed information in case of a parsing error?,<.net>,3,1,1,,,CC BY-SA 3.0,, +364449,1,364460,,1/22/2018 15:31,,0,43,"

Sorry if the title is not clear; suggestions for a better title are welcome.

+ +

For the purpose of [self-]education I am writing a toy scripting language that would compile to bytecode and be executed on a toy VM.

+ +

This is not going to be a Turing-complete language; it will only contain simple flow control structures such as if...then...else and overall be just a straight sequence of instructions.

+ +

I have pretty much everything working except for one part -- I would like my bytecode to have a read-only data section (much like .rodata in native binaries). However, I am stuck on how to reference this in opcodes. I can give the address of the beginning of a data block, but how do I provide the length of the data?

+ +

For example - I can have an opcode 0x01 to compare an immediate value 0x0005 with data in data section at an address 0xf002 (ignore endianness for now):

+ +
0000 0100050002
+...
+f002 0005000000
+
+ +

One possible solution is to prepend the value with the length of the data block (so 0005000000 becomes 0200050000), but that leads to a trade-off: either the data block size is limited (i.e. with 1 byte as in this example, it is obviously capped at 255 bytes, which some may say is enough for everyone), or, if the size field is made big enough (e.g. 8 bytes), it may well be bigger than the actual data in some cases, which is not desirable.

+ +
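To make the trade-off concrete, here is a small sketch (Java, purely illustrative) of the length-prefixed layout described above, using a 2-byte length field as a middle ground:

```java
import java.nio.ByteBuffer;

class DataSection {
    // Encode an entry as [2-byte length][payload]: a middle ground
    // between a 1-byte field (255-byte cap) and an 8-byte field.
    static byte[] encode(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(2 + payload.length);
        buf.putShort((short) payload.length); // caps entries at 65535 bytes
        buf.put(payload);
        return buf.array();
    }

    // Decode: read the length prefix at offset, then that many payload bytes.
    static byte[] decode(byte[] data, int offset) {
        ByteBuffer buf = ByteBuffer.wrap(data, offset, data.length - offset);
        int len = Short.toUnsignedInt(buf.getShort());
        byte[] payload = new byte[len];
        buf.get(payload);
        return payload;
    }
}
```

A variable-width encoding of the length field (one continuation bit per byte, as many bytecode formats use) would avoid committing to any fixed width at all.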

What would be a better approach?

+",115436,,,,,43122.69028,How can I access attached data section in custom script language?,,1,3,,,,CC BY-SA 3.0,, +364455,1,364472,,1/22/2018 16:07,,3,78,"

In DDD, is it okay if creation of one AR forces creation of another AR? Take for example:

+ +
class User
+{
+  // User related functins
+}
+
+class Schedule
+{
+  // Schedule related functions
+}
+
+class Scheduler
+{
+  private ScheduleRepository repo; // injected
+
+  public void Schedule(User user, Event evt) // 'event' is a reserved word in C#
+  {
+    Schedule schedule = repo.FindByUser(user);
+    schedule.Add(evt);
+    repo.Save(schedule);
+  }
+}
+
+ +

There is a 1:1 mapping between user and schedule. When a user is created they need to have a schedule. I cannot see why a schedule would exist without a user. Additionally, when a user is removed from the roster/database, I still want their old schedule to persist in the scheduling database.

+ +

The User object will not have a reference to their actual schedule. You would have to use the Scheduler object to actually schedule the event. User would have a private ID and numerous Identity properties such as 'Name', 'Email', etc.

+ +

Am I overcomplicating this? Additionally, when a user is created, I would assume I would need to query the UserRepository to get the ID (PK?) to associate with the Schedule object.

+",274856,,,,,43123.33125,Creation of one AR causes creation of another AR,,2,2,,,,CC BY-SA 3.0,, +364456,1,364458,,1/22/2018 16:09,,2,379,"

In my company, we are planning to build more than one web application on a single database. The proposed design of each of these apps is: ASP.NET MVC as the presentation layer, a RESTful API as the service layer (which also acts as the domain/business layer), and Entity Framework (EF) as the data access layer.

+ +

But many of my colleagues are asking if we could use EF directly in the presentation layer.

+ +

I can explain separation of concerns and maintainability, but is there any other innate flaw in consuming EF directly when multiple apps consume the same DB?

+",294022,,294022,,43122.69375,43122.69653,Disadvantageous of using entity framework layer directly into presentation layer (ASP.NET MVC) by skipping Service layer,,2,0,1,,,CC BY-SA 3.0,, +364462,1,,,1/22/2018 16:46,,3,329,"

Let's say there is an extensible system in place, which uses an algorithm that requires that the elements it operates on are unique.

+ +

The signature to call this algorithm looks like this: function algorithm([Collection of Elements])

+ +

Now there are multiple modules that use this algorithm, which call the algorithm like this:

+ +
[Collection of Elements] elements = [...]
+[make sure the Collection only has unique elements]
+algorithm(elements)
+
+ +

Say, there is a new module added which looks like this:

+ +
[Collection of Elements] elements = [...]
+algorithm(elements)
+
+ +

(i.e. it doesn't make sure that elements are unique).

+ +

Now, in a majority of cases all elements are unique. But once in a while an Exception is raised by the algorithm complaining about the duplicate elements.

+ +

The module should have given only unique elements to the algorithm, and as such the exception at that point is reported as a bug and should be fixed.

+ +

Now, would the bug be ""fixed"" by simply adding [make sure the Collection only has unique elements] to the offending module [Fix #1]? Or would the bug be fixed by changing the signature of the algorithm to function algorithm([Set of Elements]), so that the bug cannot recur in any future modules [Fix #2]?

+ +
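For illustration, here is Fix #2 in code - the element type and names below are placeholders, not taken from the system described:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class Fix2 {
    // Fix #2: the parameter type itself guarantees uniqueness,
    // so no future module can reintroduce the bug.
    static int algorithm(Set<String> elements) {
        return elements.size(); // stand-in for the real algorithm
    }

    // A caller must now deduplicate explicitly when converting its data.
    static int callModule(List<String> raw) {
        return algorithm(new LinkedHashSet<>(raw));
    }
}
```

The precondition becomes unrepresentable in the API rather than merely checked at runtime.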

I'm specifically asking about this coming up in a review - should the bug fix be ""done"" if you review Fix #1? Should the changes of Fix #2 be added as a new refactor task? Should Fix #1 be rejected until it is implemented like Fix #2?

+",101264,,101264,,43123.49444,43123.67917,Can you consider a bug fixed when the underlying cause was not fixed?,,3,3,,,,CC BY-SA 3.0,, +364467,1,364515,,1/22/2018 17:52,,1,528,"

While reading a Node stream, I want to be able to receive a stream of text and trigger the continuation of my stream.

+ +

The following code serves my purposes, but I recently read that we use Subject too much when we may not need to. I'll use this code to read files from disk or from S3.

+ +

Is it possible to replace my Subjects with plain Observables, i.e. to do this without using Subject at all?

+ +

Best practice is to use Subject sparingly, for several reasons; the most important one to me is that it is not reusable: if something fails during processing, I can't kick-start it again.

+ +

I think I can do my continueStream using an event emitter as a data source and digest it via an Observable.from. Is it worthwhile to go down this path?

+ + + +
import { split, mapSync } from 'event-stream'
+
+import { Subject } from 'rxjs/Subject'
+import 'rxjs/add/operator/finally'
+import 'rxjs/add/operator/catch'
+
+// This will receive your Node Stream
+// It will return two observable, one for getting the lines
+// and another to continue pulling from the stream
+function streamProcessing (stream: NodeJS.ReadableStream) {
+  const source = new Subject()
+  const continueStream = new Subject()
+  const s = stream
+    .pipe(split())
+    .on('data', (line) => {
+      s.pause()
+      source.next({
+        line: line
+      })
+    })
+    .on('error', (err) => {
+      source.error(err)
+    })
+    .on('end', () => {
+      source.complete()
+    })
+  continueStream.subscribe(() => {
+    s.resume()
+  })
+  return {
+    text$: source,
+    continue$: continueStream
+  }
+}
+
+// this is my control function, basically what I'll
+// use on my unit test or on my main function
+export function processTxt (asciiStream: NodeJS.ReadableStream) {
+  return new Promise((res, rej) => {
+    const { text$, continue$ } = streamProcessing(asciiStream)
+    text$
+      .finally(() => {
+        console.log('ended')
+        res('ended')
+      })
+      .subscribe((lineOfText) => {
+        console.log(lineOfText)
+        continue$.next()
+      })
+  })
+}
+
+",100672,,100672,,43126.89306,43126.89306,Handling Pausable Streams with RxJS,,1,1,,,,CC BY-SA 3.0,, +364473,1,,,1/22/2018 18:50,,0,200,"

Background

+ +

I've been thinking about documenting design patterns in our code by setting up interfaces for the common design patterns so that when people read my code it would be clear that I am using a design pattern.

+ +

I would do this by creating a project in our solution called Design Patterns, so it is clear that it is meant to be common terminology, as opposed to business/organization specific. I might even make it into an open-source package to further distinguish it from the business logic in our project.

+ +

The project would consist solely of interfaces for all the classes that are used in different design patterns, which would be extended/implemented when you are implementing a pattern.

+ +
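As a concrete example of the idea, a purely documentary marker interface (all names below are invented for illustration) might look like:

```java
// A marker interface with no members: purely documentary, as proposed above.
interface ObserverPattern {}

// Implementing it changes no behaviour; it only signals intent to the reader.
class PriceWatcher implements ObserverPattern {
    int lastPrice;
    void onPriceChanged(int price) { lastPrice = price; }
}
```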

Questions

+ +

Are there any functional issues in bending interfaces into a tool to document something that is inherently not functional?

+ +

In other words, what are the consequences of using an interface as a documentation tool, where the interface is strictly superficial?

+ +

Barring any issues raised by the first two questions, would you use this ""tool"" with common design patterns?

+ +

Can you think of any reasons why you would not do this?

+",257380,,257380,,43122.875,43122.89167,Documenting my code using generic Design Patterns interfaces,,3,7,,,,CC BY-SA 3.0,, +364485,1,364486,,1/23/2018 4:03,,6,2237,"

In C# we have Extension methods.

+ +
+

Extension methods enable you to add methods to existing types without + creating a new derived type, recompiling, or otherwise modifying the + original type.

+ +

An extension method is a special kind of static method, but they are + called as if they were instance methods on the extended type.

+
+ +

However, in one of our lesser design moments, we started replacing concrete classes containing business logic with extension methods.

+ +

The advantages seemed neat at first, e.g. we could replace something like this (very simplified example):

+ +
var AddressUtils = new AddressUtils();
+AddressUtils.CleanAddress(order);
+
+var ValidationUtils = new ValidationUtils();
+ValidationUtils.ValidateCustomer(Order);
+
+ +

with

+ +
Order.CleanAddress();
+Order.ValidateCustomer();
+
+ +

I.e. we were replacing a lot of stateless instantiable classes with shorthand static methods (syntactic sugar) that made the code more readable. This has never really sat well with me.

+ +

We now have a bunch of static classes with miscellaneous methods in them; the vast majority aren't even generic or interface-driven, i.e. they are single-use.

+ +

Adding to this, there is the common notion that static classes are not a good fit for test-driven development.

+ +

So the alternatives really are

+ +
    +
  • Go back to concrete util style helper classes
  • +
  • And/Or Encapsulate the logic in some sort of DI service
  • +
  • Or add the logic to the Model it self.
  • +
  • Or having reusable logic sprinkled out over the various services that need to use them.
  • +
+ +

Regarding the second-to-last option, Order (like most of the models) is actually a data entity, and once again it doesn't seem right (or even feasible) to push the business logic down to the domain models.

+ +

So I'm left with (unless there are better patterns) helper classes, extension methods, DI services, some sort of entity-based logic, or an unpredictable spaghetti factory.

+",,user140075,,user140075,43123.18889,43123.48958,Should Extensions methods be used for Business logic,,2,0,,,,CC BY-SA 3.0,, +364491,1,364508,,1/23/2018 8:19,,2,2041,"

I've found out that there are 5 use case levels:

+ +
    +
  • Level 0 Cloud
  • +
  • Level 1 Kite
  • +
  • Level 2 Sea
  • +
  • Level 3 Fish
  • +
  • Level 4 Clam
  • +
+ +

Cloud level lists only high-level user goals such as ""Manage files"".

+ +

Kite level mentions the actor and some more specific cases.

+ +

For what I've understood, Sea level use cases should document the following things: Use Case ID, Use Case, Actor, Trigger, Precondition, Postcondition, Main Flow, Alternative Flows, Exceptions.

+ +

But what exactly is a level 3 (fish level) use case? What is it for and how is it structured? Is a fish level use case a subfunction that I can refer to in my level 2 use case?

+ +

I would appreciate some insights.

+",255676,,209774,,43123.74792,43123.74792,Understanding use case levels,,1,1,,,,CC BY-SA 3.0,, +364494,1,,,1/23/2018 8:41,,0,1158,"

I have a current project which consist out of two independently developed projects:

+ +
    +
  • Spring REST as back end
  • +
  • Angular as front end
  • +
+ +

I have a Jenkins instance available for building my projects, and I would like to ""marry"" both projects in a CD job into a single deployable file (.jar).

+ +

Is it Jenkins's job to copy all the needed files together? Or is this usually solved with a Maven plugin? I couldn't find much information about this, although it seems like a very common step to me.

+",264141,,,,,43579.425,Deploy a NodeJs (FrontEnd) and a Spring (BackEnd) project as one artefact,,2,0,1,,,CC BY-SA 3.0,, +364496,1,364556,,1/23/2018 9:02,,6,18222,"

The question is pretty straightforward, I'll try to explain why I want some explanations.

+ +

(All of this is the opinion of a junior Java developer with 1½ years of experience, which may well be incomplete - which is why I am asking the question here.)

+ +
    +
  • It increases the coupling between classes, as some classes need other classes, and so on. Having static beans would mean you call them whenever you want, without them actually interfering in your attributes: it would be like calling utility methods, but they are written elsewhere.
  • +
• It is sometimes a nightmare to manage, especially when you have an XML-oriented configuration on a legacy project.
  • +
• If you need more dependencies in your class (I have a cute one with something like 26 attributes that I try to cut down to pieces), you would want to cut it down into smaller classes, but those classes may still need a lot of beans to function, so you cut down again. It brings a lot of complexity to the scene.
  • +
+ +

I am working on a 70k-ish LOC Java project, and it is my very first experience in the field. I am trying to improve the design of this web-application, and I think I missed the key points of Java beans with Spring.

+ +

So the question is: is using static classes for beans a bad idea / a poor design? What would be a good one?

+ +

Bonus question: any advice on how to elegantly decompose a Spring bean application?

+ +

Thank you

+ +
+ +

PS: I have already read this related question, which is more about the state of classes/beans and how to avoid it than about whether static beans are a poor or good design.

+ +
+ +

EDIT

+ +

(Sorry for the delay, couldn't post code) Here is an example of what can be found, and what made me think about this question:

+ +
// Bean managed in a spring-context.xml
+public class HugeMapper  {
+
+    @Autowired
+    // Used here only
+    private MapperBean    mapperBean;
+
+    @Autowired
+    // Used here only
+    private ServiceBean   serviceBean;
+
+    @Autowired
+    // Used in a couple of classes
+    private UtilitaryBean utilitaryBean;
+
+    @Autowired
+    // Used in a couple of beans
+    private UsefulBean    usefulBean;
+
+    @Autowired
+    // Used in nearly half the components
+    private OverusedBean  overUsedBean;
+
+    // Code treatment ...
+}
+
+ +

The question is about the beans that are used in a lot of different places. Making them static would remove the dependency, but I'm not sure what the consequences would be, or whether it is good or bad design.
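For comparison, the alternative most often recommended over both static access and field injection is constructor injection; the dependency count stays visible in the constructor, and the class can be built in a plain unit test without any Spring context. A sketch with stand-in bean classes (names are placeholders):

```java
// Stand-ins for two of the beans above.
class UtilitaryBean { String help() { return "help"; } }
class OverusedBean { String work() { return "work"; } }

// Constructor injection: dependencies are explicit and final.
class SmallMapper {
    private final UtilitaryBean utilitaryBean;
    private final OverusedBean overusedBean;

    SmallMapper(UtilitaryBean utilitaryBean, OverusedBean overusedBean) {
        this.utilitaryBean = utilitaryBean;
        this.overusedBean = overusedBean;
    }

    String doWork() { return utilitaryBean.help() + "/" + overusedBean.work(); }
}
```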

+",224662,,13156,,43634.81667,43807.90347,Are Spring beans declared as static a poor design choice?,,1,4,1,,,CC BY-SA 4.0,, +364500,1,364521,,1/23/2018 10:15,,0,100,"

I have a LogFormatter class which looks like below

+ +
@Slf4j
+class LogFormatter {
+    public static String format(String taskType, String taskId, String message) {
+        return String.format(""TaskType: %s, TaskId: %s, Message: %s"",
+                taskType, taskId, message);
+    }
+}
+
+ +

There are mostly 4 types of tasks, but this can grow in the future. Most classes are dedicated to one of the four task types.

+ +

Now, every time I need to get a log line out, I have to call this LogFormatter, and a typical log line looks like below -

+ +
log.info(LogFormatter.format(
+                    ""StackOverflowTask"",
+                    ""StackOverflowId"",
+                    ""A long long message exceeding 50 characters""));
+
+ +

Each such log line hurts code readability for the reader. At the least, I would like to keep each log statement short enough to fit on a single line.

+ +

One way is to somehow make the LogFormatter call aware of the task type, to save repetition on every line.

+ +

I have two paths to follow -

+ +

One is to create separate classes for each task type like

+ +
class StackOverflowTaskLogFormatter extends LogFormatter
+
+ +

OR

+ +

create separate methods like

+ +
public void formatStackOverflowTaskLog()
+
+ +

The first solution looks better to me because I can introduce more such classes without modifying existing ones (Open-Closed Principle), and the signature of the format method will remain the same. But I am still not content to write so many classes, and then many more @Autowired fields in each class, creating class bloat. Also, I can clearly see that this kind of specialisation increases class/method length and defeats the original purpose behind their creation.

+ +
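A third option worth considering (an assumption on my part, not taken from the code above) is a single formatter class whose instances are bound to a task type; this avoids both the repeated first argument and a subclass per task type:

```java
// One formatter class; each instance carries its task type,
// so call sites only pass the varying parts.
class TaskLogFormatter {
    private final String taskType;

    TaskLogFormatter(String taskType) { this.taskType = taskType; }

    String format(String taskId, String message) {
        return String.format("TaskType: %s, TaskId: %s, Message: %s",
                taskType, taskId, message);
    }
}
```

A class dedicated to one task type would then hold a single `TaskLogFormatter` field instead of inheriting or autowiring a formatter per type.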

I am keen to understand if there are better ways to maintain code readability in the above situation.

+",126411,,126411,,43123.43333,43123.61944,Logging parametrized logs while maintaining readability,,1,2,0,,,CC BY-SA 3.0,, +364501,1,,,1/23/2018 10:17,,4,117,"

I'm building a website that lets a user pay for a service that automatically does some video encoding for them.

+ +

Encoding takes several minutes. A naive solution would run each encoding job immediately; this could lead to several instances running at once and cause all of them to finish very slowly. The processing would probably also affect serving requests.

+ +

As you can tell, I don't do much web stuff. I'm sure there's some tool/approach that I'm supposed to be using in this situation, but I don't really know what it is.
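The usual approach is a job queue drained by a bounded worker pool, so encoding never competes unboundedly with request serving. A minimal in-process sketch (Java, illustrative; larger systems often move the queue to a dedicated message broker):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class EncodeQueue {
    // One worker thread: jobs queue up and run one at a time, so encoding
    // can never saturate the machine that also serves web requests.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    Future<String> submit(String videoId) {
        return worker.submit(() -> "encoded:" + videoId); // placeholder for real encoding
    }

    void shutdown() { worker.shutdown(); }
}
```

The Future lets the web layer report job status without blocking on the encoding itself.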

+",136824,,,,,43123.72083,Running expensive computation on single server,,2,2,,,,CC BY-SA 3.0,, +364503,1,,,1/23/2018 10:40,,3,1215,"

I'm asking this question for a colleague since he doesn't have enough reputation to post images in a question

+ +

During our normal development we found a deficit in our REST API. We display entities in our UI like this, whereby the buttons at the top are so-called actions.

+ +

+ +

In the image you can see buttons which act as actions, and a Kendo Grid/Table component. The entry ReportExecutionJob is selected in the table, and the actions can now either be used on that selection or not.

+ +

In depth details

+ +

An action itself is generic: it knows little about the object or its metadata and, moreover, just performs a task on any given object. For example, you can add an action called delete and it will try to delete whatever entity you give it.

+ +

We have a user permission system and a separate general permission system which tells which actions are allowed on a given entity.

+ +

Example: there can be a permission that the delete action may be performed on entity X, but at the same time the user might not have the permission to invoke the action at all.

+ +

Problem

+ +

When, for example, the ReportExecutionJob is selected, we perform a check in the front end to see whether the action (Pause, for example) is generally allowed for the selected entity. The back end then holds the business logic to check whether the user has the permission to invoke that action on the selected entry.

+ +

Resulting in two places handling one topic/problem.

+ +
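One common REST convention (HATEOAS-style) for exactly this problem is to let the back end embed the currently permitted actions in each entity representation, so the front end only renders what the response advertises and the logic lives in one place. A purely hypothetical response shape:

```json
{
  "id": 42,
  "name": "ReportExecutionJob",
  "actions": {
    "pause":  { "allowedOnEntity": true, "allowedForUser": true },
    "delete": { "allowedOnEntity": true, "allowedForUser": false }
  }
}
```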

Questions

+ +

There are some questions on how to do this the most efficient and safest way.

+ +

Would you suggest returning this information (allowed on entity, allowed for user) as part of the entity in the response?

+ +

Should it be loaded from the action itself (so that when something is selected, a background request returns whether the action is enabled or disabled on the entity and whether the user is allowed)?

+ +

Is there a best practice method or recommendation?

+ +

My colleague also wonders when all this should be done: when loading the page, when selecting the entry in the table, or when trying to perform the action?

+",294111,,294111,,43123.51389,43124.38264,REST Api - Check if Action is allowed for entity,,3,3,,,,CC BY-SA 3.0,, +364504,1,364517,,1/23/2018 11:08,,2,266,"

With all the talk of Microservices with Domain Driven Design, I have been looking at two architectures, the Database-Centric Architecture and the Domain-Centric Architecture (Not to be confused with Domain Driven Design). Now in the Database-Centric Architecture, the database is the focal object we depend on. Without it, the application would not function. Using the Spring Framework as an example, we can now easily implement a whole Microservice ecosystem with all the magic Spring provides us. However, looking at a lot of Microservices diagrams it still looks a lot more Data-Centric to me as the database seems to be an important component that if it stopped working, the system would not function at all.

+ +

+ +

Looking at the diagram above, we easily break monoliths into independent services and we do manage to apply Domain Driven Design within our Microservices. However, it still seems to me that we are within the Database-Centric Architecture, as our microservices would not function as expected without our databases. Also, it seems that using a framework like Spring generally ties you into the MVC pattern (@Controller, @Service, @Repository), which to me is more Database-Centric than Domain-Centric.

+",153232,,,,,43123.6875,Does a framework like the Spring Framework fall under the Data Centric Architecture?,,2,0,,,,CC BY-SA 3.0,, +364505,1,364510,,1/23/2018 11:37,,0,86,"

I have read plenty of questions on here about overriding .equals and .hashCode for testing purposes only.

+ +

My Domain classes have implemented .equals and .hashCode. Should I be

+ +

1) Duplicating these classes in my test project

+ +

or

+ +

2) Using the Domain Class .equals in my test project

+ +

The only justification I can see for point one is if the .equals and .hashCode implementations differ in the test project; however, in my case they do not. Therefore I believe that point two is the answer.

+ +

The reason I ask is that I am trying to follow the principle of least astonishment ready for when someone else looks at my code in future.

+",65549,,,,,43123.58194,Should I be duplicating Equality methods in the test project?,,1,3,,,,CC BY-SA 3.0,, +364511,1,364527,,1/23/2018 12:04,,6,107,"

I'm working on a file-synchronization client that currently produces a stream of changes to the underlying filesystem. That is, a stream of create/update/move/delete events is produced for each synchronization "target".

+ +

Each event includes a sequence-ID, which provides information about the ordering of events, and then information such as:

+ +
    +
  • source path (& destination path for move events)
  • +
  • md5 (for files)
  • +
  • mtime timestamp
  • +
  • event type : {create, move, update, delete}
  • +
+ +

This works reasonably well, but there are often redundancies. For example, a move from path X to path Y and then back to path X will be reported as two events:

+ +
    +
  1. X -> Y
  2. Y -> X
+ +

This is clearly redundant, and can be removed altogether from the stream.

+ +

Are there any well-documented techniques for detecting and removing such redundancies?

+ +

Similarly, is there a well-understood and efficient way to detect redundancies stemming from deletions? For example:

+ +
    +
  1. Update A
  2. A -> B
  3. B -> C
  4. Delete C
+ +

Here, changes 1 - 4 could be reduced to Delete A.

+ +
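I don't know a standard name for this, but the move-cancellation case at least can be sketched as a chain compression keyed by each file's current path. This is only an illustration under simplifying assumptions: it handles pure move chains and ignores interleaved creates/updates and path reuse, which a real stream would have:

```python
def collapse_moves(events):
    """Collapse consecutive move chains per file identity.

    Events are ("move", src, dst) tuples in sequence order; a chain
    X -> Y, Y -> Z reduces to X -> Z, and X -> ... -> X disappears
    entirely. (A sketch only: real streams interleave other event
    types and may reuse freed paths.)
    """
    # origin maps current path -> the original path at the start of the chain
    origin = {}
    for _, src, dst in events:
        start = origin.pop(src, src)
        origin[dst] = start
    return [("move", start, end) for end, start in origin.items() if start != end]
```

The same idea extends to the delete case: if the chain's terminal event is a delete, everything before it in the chain folds into a single delete of the chain's origin.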

Informal approaches, along with academic papers would be most welcome, bearing in mind that it's not uncommon to encounter streams of tens of thousands of events in this case.

+ +

Many thanks!

+ +

EDIT

+ +

Does this problem have a name? The reason I'm here is because google has failed me, and I'm guessing this is because I'm missing some vocabulary!

+",61277,,61277,,43123.51319,43123.77986,Are there any techniques for detecting redundancies a stream of changes to a filesystem?,,2,6,,,,CC BY-SA 3.0,, +364518,1,364519,,1/23/2018 14:42,,2,316,"

Is the M (model) in MVVM equal to business logic + data? Or is it just supposed to be data/state for the view?

+ +

Background: in a project we use "model" as the name for an object that holds presentation data. I suspect that is wrong. In my opinion a class/structure that just holds presentation data or state should be called e.g. "SomeData" or "SomeState" instead of "SomeModel".

+",288750,,,,,43123.61667,What's the M in MVVM?,,1,0,,,,CC BY-SA 3.0,, +364520,1,,,1/23/2018 14:51,,1,588,"

I was talking with a friend the other day about OOP in small projects. In most of the projects that he and I have worked on, SOA was the rule.

+ +

For example, imagine an Order in an SOA application. The scenario of this application could be:

+ +
    +
  • A lot of services (UpdateOrderService, CreateOrderService, etc) calling each other.
  • +
  • The data of the Order is all open (lots of getters and setters) to be manipulated by any Service.
  • +
  • The business rules are distributed in many Services.
  • +
+ +

As Vaughn Vernon said in one of his books, this kind of strategy will not work well for bigger projects with more complex business rules. Many of us know that too.

+ +

By the way, SOA has a lot of different meanings, and I'm taking the simple one described by Vaughn Vernon: service classes calling each other.

+ +

The most obvious alternative is Domain Driven Design, right? But boy, this answer to a simple problem reminds me of the phrase "that escalated quickly". When you compare simple SOA vs DDD, you are introducing a LOT of new patterns and complexity:

+ +
    +
  • Unit of Work
  • +
  • CQRS
  • +
  • Aggregation
  • +
  • Domain
  • +
  • Subdomain
  • +
  • Mappers
  • +
  • Events
  • +
  • Command
  • +
  • Value Object
  • +
+ +

Etc. I have already worked on a big C# project using DDD. It was an amazing experience and opportunity to learn, but it is not practical to introduce all these concepts in a smaller project.

+ +

There is an approach called DDD-lite, but I can't find good or more detailed examples about it.

+ +

In DDD-lite territory, one of these examples does not address one of the main problems that appears in some projects: using the database entity as a domain object. For me this is a mistake, because it is not possible to keep the entity updated with the constant changes of the model, and sooner or later it will be mixed with other abstractions (like the use of VOs to represent some models). I see the entity only as a place to save/update/delete and search information.

+ +

And, for me, this translation between database and domain is one of the major challenges in creating an OOP project. With all the object associations and operations (create, update and delete), I couldn't find a simple way to introduce this in a project.

+ +

So, my question is: is there a middle ground between SOA and DDD for introducing OOP concepts in the application without keeping the business rules in the Services?

+",172464,,172464,,43123.63611,43123.86111,"Use OOP approach for organize the business rules instead of SOA in a small project. Excluding DDD, is there some strategy to do this?",,4,2,1,,,CC BY-SA 3.0,, +364523,1,,,1/23/2018 16:25,,1,164,"

We had a debate on our team about how clean the Master branch should be.

+ +

The application is written and maintained by two people: me (a developer) and a GUI/UX designer. The GUI/UX designer does a lot of prototyping (or "sandbox"-type experimentation) to test or fix various layout issues in JS & CSS. This preliminary or tentative work introduces some "dirty" code such as inline CSS, scattered JS, poor formatting, etc. She would like to check that directly into Master as soon as her functional goals are achieved, and I stop her.

+ +

My own checkins into Master are very clean and I always do a diff to make sure they're final, formatted, and modularized. I clean up the preliminary or tentative changes I had to make to ensure that Master is "official." Should Master be allowed to get dirty?

+",286252,,,,,43123.71111,Should checkins into the Master Branch be clean?,,2,4,,,,CC BY-SA 3.0,, +364524,1,364528,,1/23/2018 16:26,,1,594,"

Assume that you have the following 3 entities:

+ +
    +
  • Manual
  • +
  • Version
  • +
  • Document
  • +
+ +

One Manual has multiple Versions and one Version has multiple Documents.

+ +

I want to build a Web API that allows customers to insert their manuals, versions and documents, but what is the best practice?

+ +

Do I accept the posted data object in a nested XML/Json structure:

+ +
<Manual>
+   <Versions>
+      <Documents/>
+   </Versions>
+</Manual> 
+
+ +

or do I want the customers to add the data entity by entity?

+ +

Additionally, how does this work with primary keys and foreign keys between the 2 systems? My database creates a PK per entity and so does my customer's database. Should I store his reference numbers or should he store mine for future updates, deletes and related inserts?

+",264836,,264836,,43123.70486,43123.88958,Web API best practices inserting data: nested or per entity,,2,4,,,,CC BY-SA 3.0,, +364533,1,,,1/23/2018 15:09,,4,191,"

In a company of about ~100 developers, and ~2 pen-testers, a product is being developed. There's a good amount of diverse code coming in via github PR requests every day.

+ +

The goal is to review as much new code as possible before it gets merged.

+ +

At one extreme, it'd be something like 'review every PR and don't allow merge before a pen-tester vets it'. Clearly this is very resource-consuming. The other extreme would be 'let everything pass through and do per-module pen-testing on releases'.

+ +

Are there any meaningful PR coverage strategies lying between these two extremes?

+ +

Note: not asking about integrating automated code analysis into the build pipeline, the goal is to have the manual audit for as much code as possible in a meaningful way. Also, not asking about code review methods (such as methodology to identify certain classes of bugs).

+",294292,bgd223,,,,43123.89931,What's a reasonable in-house code audit policy?,,1,5,,,,CC BY-SA 3.0,, +364537,1,364540,,1/23/2018 18:17,,2,684,"

Please see the code below:

+ +
public virtual bool Equals(Entity other)
+        {
+            return Equals((Object)other);
+        }
+
+public override bool Equals(object obj)
+        {
+            var compareTo = obj as Entity;
+            if (ReferenceEquals(compareTo, null))
+                return false;
+            if (ReferenceEquals(this, compareTo))
+                return true;
+            if (GetType() != compareTo.GetType())
+                return false;
+            if (!IsTransient() && !compareTo.IsTransient() && Id == compareTo.Id)
+                return true;
+            return false;
+        }
+
+ +

The IEquatable.Equals implementation and the Object.Equals override should return the same result. Therefore, when IEquatable.Equals is called, it simply delegates to the Object.Equals override.

+ +

Is this a standard approach? I am trying to follow the principle of least astonishment. Alternatively I could just duplicate the code in both methods.

+",65549,,290155,,43124.37292,43124.37292,What is the best way to ensure that the IEquatable.Equals implementation and the Object.Equals override return the same result?,,1,10,,,,CC BY-SA 3.0,, +364539,1,364545,,1/23/2018 18:27,,8,625,"

I think one of the biggest pain points in working with microservices is making sure that the APIs are well-documented and that APIs do not change their behavior without affecting downstream applications. This problem becomes amplified when you have many services that are interdependent on each other. Perhaps at that point you're doing microservices wrong, but I digress.

+ +

Let's say we have inherited 20 microservices that are owned by different teams, and there is no clear documentation about which application uses which other application's API endpoints. Is there a prescribed way of documenting this? At first I thought of analyzing each application's endpoints and adding them to a database table, then creating an FK relationship between each application and an application's route on a many-to-many table (almost all of these are Rails apps). But I am not sure if this is a good way to handle this, or am I reinventing the wheel here.

+ +

In retrospect, this might be a not so bad way to document application interaction if you are starting with microservices from scratch. This would just enforce that a single source of truth is maintained via the use of a database and any changes to the endpoints would be performed in the application in conjunction with the change in database. Thoughts?

+",150965,,,,,43123.85208,Maintaining and documenting API endpoints of many applications in a microservice architecture,,2,0,0,,,CC BY-SA 3.0,, +364548,1,364555,,1/23/2018 21:03,,4,238,"

I was reading on this page about setting a flag in a loop and using it later. Most of the answers agreed that it's a code smell.

+ +

One of the answers suggested refactoring the code by putting the loop into a method and the boolean and break become a return instead.

+ +
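That refactoring can be sketched like this (a generic example for illustration, not code from the linked page): the loop moves into a function, and the early return replaces both the flag and the break.

```python
def first_negative_index_flag(numbers):
    # Original style: a flag set inside the loop, plus a break.
    found = -1
    for i, n in enumerate(numbers):
        if n < 0:
            found = i
            break
    return found

def first_negative_index(numbers):
    # Extracted-method style: the early return replaces both the flag
    # and the break, and the loop's purpose is named by the function.
    for i, n in enumerate(numbers):
        if n < 0:
            return i
    return -1
```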

The second answer suggests that using the break means your loop has two possible exit points, and if your code becomes more complex, bugs might be harder to detect and easier to introduce.

+ +

Question:

+ +

By the logic of the second answer isn't using a return statement also a code smell? Using a return you introduce a second way to exit the loop.

+",,user294163,,,,43123.97292,Using a Break or Return instead of setting a Flag,,4,0,,43124.30347,,CC BY-SA 3.0,, +364558,1,,,1/23/2018 23:03,,-1,104,"

The implementation is MVC. The View is isolated to the browser layer. The Model is isolated to the persistence layer. The Controller is split: UI controls are in the browser, mostly so input is syntactically correct; authentication and authorization controls are in the listener layer (Tomcat and remote LDAP); and data integrity controls are in the persistence layer (an RDBMS with stored procedures).

+ +

The question is: where should the business logic control go? It is defined by a data-driven model in the persistence layer, so the code/logic could go either in the persistence layer itself via stored procedures, or in the listener layer via Java classes. If it is to be in the listener, additional work will be needed to bring the data across layers.

+",2078,,,,,43214.98958,"Should we implement the state machine logic near the data in the ""persistence"" layer, or bring the data out and implement it in the ""listener"" layer?",,1,1,,,,CC BY-SA 3.0,, +364566,1,,,1/24/2018 2:12,,1,155,"

Say I have a function that returns a weighted selection from a set of resources, according to a desired distribution. For argument's sake let that resource be string colors.

+ +
const distribution = {
+  red: .1666,    // We want 1/6th of colors in the world to be 'red'  
+  yellow: .3333, // ... 1/3 to be 'yellow'
+  blue: .5       // ... and 1/2 to be 'blue'
+}
+
+// returns ~1/6 'red', ~1/3 'yellow', ~1/2 'blue'
+function getWeightedColor() {...}
+
+ +

If I wanted to further weight the return value based on existing data with the purpose of guiding the data toward the desired distribution more quickly, how would I achieve that?

+ +
// Accepts a counts dict in the format `{<color>: count, ...}` and based on
+// the distribution of that dict, further weights the selection such that
+// the return value adjusts the dict toward the desired distribution.
+function getWeightedColor(colorCounts) {...}
+
+// Examples:
+
+getWeightedColor({red: 100, yellow: 200, blue: 300}); 
+// given distribution already normal, so we'd use the unadjusted weights
+
+getWeightedColor({red: 100, yellow: 250, blue: 10});
+// given distribution has far too few blues and somewhat too many yellows, 
+// so the weights would be adjusted to compensate. The odds of 'blue'
+// would be greatly increased, red somewhat decreased and yellow moreso.
+
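One possible scheme (a sketch only, and certainly not the only one — written in Python for testability, using exact fractions for the targets): weight each color by its target share plus its current deficit, so under-represented colors are boosted and over-represented ones suppressed.

```python
import random

# Exact target shares (the question's .1666/.3333/.5, written as fractions).
DISTRIBUTION = {"red": 1 / 6, "yellow": 1 / 3, "blue": 1 / 2}

def adjusted_weights(counts, target=DISTRIBUTION, strength=1.0):
    """Weight each color by how far it lags its target share.

    strength=0 reproduces the static target weights; larger values push
    harder toward the target. (One possible scheme among many.)
    """
    total = sum(counts.get(c, 0) for c in target)
    if total == 0:
        return dict(target)  # no data yet: fall back to the plain targets
    weights = {}
    for color, share in target.items():
        current = counts.get(color, 0) / total
        deficit = share - current  # > 0 when under-represented
        weights[color] = max(share + strength * deficit, 0.0)
    norm = sum(weights.values()) or 1.0
    return {c: w / norm for c, w in weights.items()}

def get_weighted_color(counts):
    w = adjusted_weights(counts)
    return random.choices(list(w), weights=list(w.values()))[0]
```

With already-normal counts like `{red: 100, yellow: 200, blue: 300}` the deficits are zero and the plain target weights come back; with `{red: 100, yellow: 250, blue: 10}` almost all the probability mass shifts to blue and the over-represented yellow drops to zero.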
+",84885,,84885,,43124.68611,43124.83611,How to dynamically weight a random generator to guide a result set toward a desired distribution?,,2,18,,43132.32639,,CC BY-SA 3.0,, +364567,1,,,1/24/2018 2:19,,24,9542,"

When it comes to tests, I can think of two options:

+ +
    +
  1. Put both test and application in one image.
  2. Include only application code in the image. Create a test-specific container that builds after the main image and adds some layers to it (test code, dependencies etc.).
+ +

With the first option, I can test the container and ship it exactly as tested. An obvious downside is unnecessary code (and potentially test data) will be included in the image.

+ +

With the second option, the image that is shipped is not quite the same as the one that is tested.

+ +

Both look like bad strategies. Is there a third, better strategy?

+",294190,,,,,43819.47917,Should I include tests in Docker image?,,3,2,3,,,CC BY-SA 3.0,, +364571,1,,,1/24/2018 4:23,,5,110,"

For example, I have a simple C program whose main function just returns 0. What registers should the loader (the Linux exec loader, I guess) set up before starting the program? I didn't find any information about this except for the stack pointer register.

+",294196,,31260,,43124.39097,43124.39097,Which registers should executable loader install before start a program?,,1,4,,,,CC BY-SA 3.0,, +364572,1,364618,,1/24/2018 5:04,,5,667,"

Looking at a range of cross-platform languages, libraries and GUI toolkits, I often notice a conspicuous absence of support for asynchronous file I/O. This seems like too much of a common factor to be a coincidental oversight in all of them. However I don't know enough about individual OSs or these languages'/libraries' development to understand why they don't feature it.

+ +

Here are some examples.

+ +
    +
  • Python's asyncio (started in 3.4) contains ways to handle network and subprocess activity asynchronously, but nothing for reading files from disk.
  • +
  • The Twisted library for event and protocol driven programming in Python seems to contain nothing for async file I/O.
  • +
  • Qt 5's QFile specifically does not emit the readyRead or bytesWritten signals, in contrast to other QIODevice implementations for networking.
  • +
  • All of wxWidget's file related classes are completely synchronous.
  • +
+ +

These are just the last four I looked up; it's possible that I picked four in a row without async file IO by coincidence, but they're all highly popular and stable. I know they're not the only popular libraries I've used over the last ten years in which I've missed it.

+ +

Maybe there's less demand for this than for reading data from a socket or subprocess, but is it so little as to be considered unwanted? A seemingly local file could be on the other side of a network connection (eg. a SMB share or NSF mount) making it not at all necessarily true that local disk access will be faster than a user would notice. It's not even necessarily true for a local, brand new SSD for that matter.

+ +

Common advice to roll one's own async file IO with threads seems counter to prevailing wisdom, when so much of what motivates toolkit usage is not rolling your own, especially when it comes to anything involving threading. I know it's possible, and maybe not that hard, but neither are many other things that are commonly available in these libraries and languages.

+ +

Let's take Qt as an example (extracting this from a comment on an answer). In Qt, if I want to do X without pausing my entire program, I can use Y:

+ +
    +
  • If X = redraw a canvas; Y = use signals and slots
  • +
  • If X = read data via HTTP; Y = use signals and slots
  • +
  • If X = get data from a subprocess; Y = use signals and slots
  • +
  • But if X = get data from the hard drive; Y = implement something with a thread, or maybe two threads, two semaphores, special shared memory pointers, and maybe a bunch of other stuff.
  • +
+ +
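For illustration, here is roughly what "implement something with a thread" amounts to in Python's asyncio: the blocking read is pushed onto the default thread pool with run_in_executor, which is exactly the boilerplate these toolkits leave to the user (third-party libraries such as aiofiles wrap the same idea).

```python
import asyncio
import os
import tempfile

async def read_file_async(path):
    # asyncio ships no native file I/O, so delegate the blocking read
    # to the event loop's default ThreadPoolExecutor.
    loop = asyncio.get_running_loop()

    def _blocking_read():
        with open(path, "rb") as f:
            return f.read()

    return await loop.run_in_executor(None, _blocking_read)

async def main():
    # Write a small temp file, then read it back "asynchronously".
    fd, path = tempfile.mkstemp()
    os.write(fd, b"hello")
    os.close(fd)
    try:
        return await read_file_async(path)
    finally:
        os.unlink(path)

result = asyncio.run(main())
```

This works (Python 3.7+), but it is precisely the per-application plumbing — executors, callbacks, lifetime management — that signals-and-slots or asyncio streams already absorb for sockets and subprocesses.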

Whether or not it's simple for me to implement is beside the point. The point is that file IO is an operation that can block, but it's consistently the odd one out in cross-platform toolkits and libraries by not having high-level handling.

+ +

Basically it seems that this omission demands extra complexity from the library user for a not-uncommon task, when the goal of these libraries is to absorb that kind of low-level implementation complexity. It seems odd that putting a local file on the other side of a local webserver makes it simpler to access in an event-driven program.

+ +

To be clear, and to make my question clear, this isn't a complaint about missing functionality. I want to understand cross-platform libraries and OS differences better, and so I'm genuinely curious about why this situation arose and whether there's some technical limitation or other constraint at the root of it.

+",12878,,12878,,43124.43681,43124.93681,What is constraining cross-platform asynchronous file I/O?,,2,0,1,,,CC BY-SA 3.0,, +364573,1,364578,,1/24/2018 5:09,,7,1559,"

Here we have a TV class and a DVD class as an example:

+ +
class TV
+{
+public:
+    TV();
+    virtual ~TV();
+
+    void on() const;
+    void off() const;
+};
+
+class DVDPlayer
+{
+public:
+    DVDPlayer();
+    ~DVDPlayer();
+
+    void SetCD() const;
+    void StartPlay() const;
+    void Eject() const;
+    void PowerOff() const;
+};
+
+ +

We create an Adapter which helps the DVD "pretend" to be a TV:

+ +
class TVStyleAdapter :
+    public TV
+{
+public:
+    TVStyleAdapter(DVDPlayer* AdapteeDVD);
+    ~TVStyleAdapter();
+
+    void on() const;
+    void off() const;
+
+private:
+    DVDPlayer* DVD;
+};
+
+// override base class funcs:
+void TVStyleAdapter::on() const
+{
+    DVD->SetCD();
+    DVD->StartPlay();
+}
+
+void TVStyleAdapter::off() const
+{
+    DVD->Eject();
+    DVD->PowerOff();
+}
+
+ +

After that I can add "virtual" in the TV (base) class and override the on()/off() functions in the Adapter class, and it will work correctly.

+ +

BUT the questions are:

+ +
    +
  1. Can we somehow create an Adapter without ANY changes (e.g. adding "virtual") in a base ('TV') class?
  2. Is it possible in this situation not to violate the Open Closed principle?
  3. If I expect to use an adapter in my program in the future, should I make the relevant methods in my classes virtual in advance?
+",294197,,209774,,43124.33542,43124.67014,How to use 'Adapter' without any changes in the existing code in c++,,3,1,3,,,CC BY-SA 3.0,, +364579,1,,,1/24/2018 7:08,,3,514,"

We have a microservice that has a Domain Model, and an analytical service for the domain which has its own Query Model. The domain model and the query model are stored in separate persistence stores.

+ +

Currently our Query Model uses a subset of the attributes from the Domain Model. However, going forward we have a requirement to add additional attributes from the Domain Model to the Query Model. In this case, are there any recommendations regarding the best approach to populate the 'delta' part of the Query Model with what is available in the Domain Model?

+ +

As an aside, this also seems to be a weakness of CQRS: any enhancement to the Query Model would require some sort of reload of the data to populate the enhanced part of the Query Model. Or are we doing something wrong here?

+",145286,,,,,44175.87569,"CQRS, microservices and delta replication",,1,1,1,,,CC BY-SA 3.0,, +364581,1,,,1/24/2018 7:46,,1,21,"

Working on an application that uses DynamoDB for data storage, which is new-ish for me. There are two tables, both use a simple guid for their key. I need to divide data per client. Someone can sign up for their organisation, which means I need to separate data between clients. I was thinking of using a range key for clients, like

+ +
[id]            [client]
+guid1234        org1
+guid2345        org2
+guid3456        org2
+guid4567        org2
+
+ +

.. which means that I can use the current user ID to fetch their organisation and limit by range key. Is this the way to go? What's common here?

+",21973,,,,,43124.32361,What's the preferred way to section a dynamodb table by client?,,0,3,,,,CC BY-SA 3.0,, +364584,1,,,1/24/2018 9:19,,2,146,"

I have quite some experience with TDD in Java and Kotlin and am currently trying to learn testing with JavaScript.

+ +

I am not sure if this is really a question about weak vs. strong typing or about general design.

+ +

I was always under the impression that mocking/stubbing code you don't own is a bad idea. In Kotlin I would create an interface for the library and implement that interface with a wrapper.

+ +

Then inject a mock of my interface into the tests.

+ +
+ +

In one of the books I am reading, the suggestion for testing the routes of an Express app is to stub the express.Router() class:

+ +
const { expect } = require('chai');
+const express = require('express');
+const sinon = require('sinon');
+
+describe('user routes', () => {
+  var sandbox;
+  var router;
+
+  beforeEach(() => {
+    sandbox = sinon.sandbox.create();
+    sandbox.stub(express, 'Router').returns({
+      get: sandbox.spy()
+    });
+
+    router = require('../src/routes/user');
+  });
+
+  afterEach(() => {
+    sandbox.restore();
+  });
+
+  it('should register GET / route', () => {
+    expect(router.get.calledWith('/', sandbox.match.any)).to.be.true;
+  });
+});
+
+ +

The SUT is:

+ +
const express = require('express');
+
+const router = express.Router();
+
+router.get('/', (req, res) => {
+  res.send("");
+});
+
+module.exports = router;
+
+ +

Is this ok, or is there a better way of doing this?

+",157159,,,,,43124.75556,Is it OK to mock or stub libraries in weakly typed languages?,,1,0,,,,CC BY-SA 3.0,, +364586,1,364598,,1/24/2018 9:35,,7,406,"

I have used RabbitMQ but I haven't used Apache Kafka. Do these products solve a similar problem, or is there no connection?

+",12893,,,,,43124.60694,Is it appropriate to say that RabbitMQ and Apache Kafka solve similar problems?,,1,0,,,,CC BY-SA 3.0,, +364590,1,364593,,1/24/2018 10:36,,10,6235,"

I have the following endpoint:

+ +
a/{id}/b
+
+ +

and want to create a b by sending a POST request to it. If the a with the given {id} is not found, should I respond with 404 NOT_FOUND or maybe with 409 CONFLICT?

+ +
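A framework-agnostic sketch of the 404 option (store and handler names are hypothetical): when the parent segment of the URI doesn't resolve, the request URI as a whole identifies nothing, which is the usual reading of 404.

```python
PARENTS = {"1": {"name": "first-a"}}  # stand-in for the data store

def post_b(parent_id, payload):
    """Handle POST /a/{parent_id}/b; returns (status, body)."""
    if parent_id not in PARENTS:
        # Parent missing: the URI identifies no resource -> 404.
        return 404, {"error": f"a/{parent_id} not found"}
    # ...persist the new b under the parent...
    return 201, {"created": payload}
```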

It is easy to handle plain a/{id}; the trick is that here a subresource is used.

+",288138,,288138,,43124.52222,43130.66042,What is a proper response status code to POST when parent resource is not found?,,2,1,2,,,CC BY-SA 3.0,, +364591,1,,,1/24/2018 10:36,,0,67,"

Let's say we have following SQL table structure

+ +
    +
  • Entities (5-15k)
  • +
  • Keywords (15-20k)
  • +
  • EntityKeywords
  • +
  • ExcludedKeywords (keywords which should be excluded from common matching)
  • +
+ +

We need to find related entities, that is, entities which have the most keywords in common, ordered descending by that count.

+ +

Now obviously, querying this on each load would be too slow because each query requires ordering by count. One idea is to aggregate the keywords into a single column for each entity and use full-text search over it. Is this a good approach?

+ +

Is this too much for SQL Server and does it require another technology stack, or are there better ways to deal with this problem?

+",198708,,,,,43124.44167,Efficient way/query to find related entities,,0,5,,,,CC BY-SA 3.0,, +364594,1,364600,,1/24/2018 13:42,,1,207,"

This scenario seems pretty ordinary, and yet, strangely, messaging systems (like Google Cloud PubSub and Task Queues and ActiveMQ) do not seem to support it -- they assume that topics/queues are long-lived.

+ +
    +
  • A frontend webapp server sends a request to a backend server.
  • +
  • The backend server replies to this request with "expect to receive the messages on channel X". (The terminology in the async messaging system might be "queue" or "topic" rather than "channel".)
  • +
  • The backend server pumps out results onto X every few seconds for about 2 minutes.
  • +
  • The frontend server polls channel X to get these results.
  • +
+ +

So, the channel needs to exist for just the 2 minutes.

+ +

Am I misunderstanding how to do this? What is the right design approach?

+",14493,,14493,,43131.49653,43131.49653,How do I set up short-lived queues?,,1,4,,,,CC BY-SA 3.0,, +364597,1,364599,,1/24/2018 14:31,,1,2374,"

The goal of the RegEx is to match exactly 6 characters, but in addition it should match empty strings or white space (e.g: ^$|\s|^(\w){6}$). Is it good practice to check for empty strings/white space in a RegEx expression, or to perform this check in higher-level code, such as String.IsNullOrWhiteSpace? There seems to be a code smell to doing it in the RegEx, but I may be imagining things.

+ +
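A quick way to compare the two approaches (Python used here purely for illustration). One caveat worth noting: under search semantics an unanchored \s branch matches any string that merely contains a whitespace character, which is probably broader than intended, so the sketch anchors each alternative explicitly.

```python
import re

# The alternation with each branch anchored:
#   ^$       -> empty string
#   ^\s+$    -> whitespace-only
#   ^\w{6}$  -> exactly six word characters
PATTERN = re.compile(r"^$|^\s+$|^\w{6}$")

def is_valid_regex_only(s):
    # Everything, including the blank/whitespace cases, lives in the regex.
    return PATTERN.match(s) is not None

def is_valid_split(s):
    # Alternative: handle the blank/whitespace case in code (the analogue
    # of String.IsNullOrWhiteSpace) and keep the regex focused on the rule.
    if s is None or s.strip() == "":
        return True
    return re.fullmatch(r"\w{6}", s) is not None
```

The split version also handles null input for free, which the regex-only version cannot.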

Thanks.

+",181617,,,,,43124.61181,Is it a good practice to include empty/white space checks in RegEx?,,1,0,,,,CC BY-SA 3.0,, +364601,1,364604,,1/24/2018 14:53,,21,3426,"

As a C++ developer I'm quite used to C++ header files, and find it beneficial to have some kind of forced ""documentation"" inside the code. I usually have a bad time when I have to read some C# code because of that: I don't have that sort of mental map of the class I'm working with.

+ +

Let's assume that as a software engineer I'm designing a program's framework. Would it be too crazy to define every class as an abstract unimplemented class, similarly to what we would do with C++ headers, and let developers implement it?

+ +

I'm guessing there may be some reasons why someone could find this to be a terrible solution but I'm not sure why. What would one have to consider for a solution like this?

+",294243,,103359,,43132.65625,44213.59167,Use abstract class in C# as definition,,4,19,3,,,CC BY-SA 3.0,, +364608,1,364611,,1/24/2018 16:11,,2,128,"

How thick should a viewmodel be? For example, should my viewmodel or model handle the actual filtering?

+ +

For example, let's say I have a Roster object holding a collection of Users which are assigned a type (i.e. full-time, part-time, etc.). Instead of having one large viewmodel to handle the filtering of full-time or part-time (because that's what the client selected), it seems it would be better for the roster model to handle this by saying roster.GetUsersByType(type).

+ +
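A minimal sketch of that split (Python for brevity; class and method names mirror the question, and are otherwise made up): the model owns the filtering, and the view model just exposes the filtered result for binding, so a replacement view model would not duplicate the logic.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    employment_type: str  # e.g. "full-time", "part-time"

class Roster:
    """The model owns the filtering, so any view model can reuse it (DRY)."""

    def __init__(self, users):
        self._users = list(users)

    def get_users_by_type(self, employment_type):
        return [u for u in self._users if u.employment_type == employment_type]

class RosterViewModel:
    """A thin view model: it only exposes what the view binds to."""

    def __init__(self, roster):
        self._roster = roster
        self.displayed_users = []

    def select_type(self, employment_type):
        # UI gesture -> delegate the actual filtering to the model.
        self.displayed_users = self._roster.get_users_by_type(employment_type)
```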

I have a similar issue with my Schedule object (that holds a list of events). To me it seems that since I have a master Schedule of all user's schedules, that it's better for the model to filter this and give me something like Schedule userSchedule = masterSchedule.GetScheduleOfUser(user) or Schedule selectedSchedule = masterSchedule.GetScheduleByDateInterval(dateFrom, dateTo).

+ +

Additionally, if I wanted to swap out Views, say to test a new field (therefore needing a new viewmodel), I would then need to add this filtering logic to the new viewmodel (violating DRY).

+ +

However, I'm reading a lot that the viewmodel should be thick and should handle the filtering and UI logic. Am I understanding this incorrectly? It seems that most people implement anemic domain models when doing MVVM (because that's what most of the tutorials show... e.g. for me this would be ObservableCollection in the ViewModel with a User object and Event object).

+ +

But it to seems more natural to want to push logic down into the individual components as far as I can.

+",274856,,274856,,43124.67917,43124.72431,Thick viewmodel results in thin model,,1,0,,,,CC BY-SA 3.0,, +364609,1,364632,,1/24/2018 16:12,,3,457,"

I'm new to forking and open source. I'm porting a Rust library to Swift, but I wasn't sure whether I should fork the original repo and replace its contents with my new files, or just upload my own repo and mention in the README that it's a Swift port of an existing library.

+",293614,,209774,,43125.31875,43125.31875,Are you supposed to fork a repo if you're porting it to another language?,,2,7,,,,CC BY-SA 3.0,, +364612,1,364616,,1/24/2018 16:23,,7,436,"

I am not yet working, just studying, and have recently been dealing with the SOLID principles. I have read quite a lot about the open/closed principle but unfortunately most of the books and articles share the same examples. I do not understand the following: if the class should not be changed after it has been
+",294256,,294256,,43126.31667,43126.31667,How to be OCP compliant and change algoritms?,,3,2,1,,,CC BY-SA 3.0,, +364625,1,364630,,1/24/2018 21:24,,2,591,"

I've been mulling this question over for a few days in my head and I can't come to a solid answer.
+We understand that client side validation for forms is not enough, because you can easily turn JavaScript off. But what about for a form that is submitted purely through AJAX (I have a register form that I want to be submitted through AJAX). If you turn JavaScript off, you won't be able to submit the form anyways, so wouldn't it be alright to do all of your validation on the frontend?

+",281809,,,,,43409.47986,JavaScript only validation on AJAX form submit,,4,1,2,,,CC BY-SA 3.0,, +364631,1,364641,,1/24/2018 22:49,,1,1084,"

I am trying to decide whether to introduce mocks in my isolated Domain Model tests. I have a class method similar to this:

+ +
public Offer AssignOffer(OfferType offerType, IOfferValueCalculator valueCalculator) 
+        { 
+            DateTime dateExpiring = offerType.CalculateExpirationDate(); 
+            int value = valueCalculator.CalculateValue(this, offerType); 
+            var offer = new Offer(this, offerType, dateExpiring, value); 
+            _assignedOffers.Add(offer); 
+            NumberOfActiveOffers++; 
+            return offer; 
+        } 
+
+ +

which I took from here: https://github.com/jbogard/presentations/blob/master/WickedDomainModels/After/Model/Member.cs

+ +

I have now read this article: http://enterprisecraftsmanship.com/2016/06/15/pragmatic-unit-testing/ and this article: http://www.taimila.com/blog/ddd-and-testing-strategy/. They both seem to suggest that I should not mock OfferType (as it is a Value Object). However my question is: should I be mocking IOfferValueCalculator (a Domain Service)? IOfferValueCalculator does not sit in the innermost layer of the Onion, however it does sit in the Domain Model (second most inner layer of the Onion).

+ +

The reason I ask is because all these articles specifically reference Entities and Value Objects (advising against mocking them), however they do not reference Domain Services.
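One option, instead of a mocking framework, is a hand-rolled stub for the domain service. A minimal Python sketch of the idea (the names only loosely mirror the C# sample above and are not the article's actual API):

```python
class FixedValueCalculator:
    """Hand-rolled test double standing in for IOfferValueCalculator."""
    def __init__(self, value):
        self._value = value

    def calculate_value(self, member, offer_type):
        return self._value  # deterministic, so the test controls the value

class Member:
    def __init__(self):
        self.assigned_offers = []

    def assign_offer(self, offer_type, value_calculator):
        # The domain service is passed in, so a stub slots in trivially.
        offer = (offer_type, value_calculator.calculate_value(self, offer_type))
        self.assigned_offers.append(offer)
        return offer

member = Member()
offer = member.assign_offer("standard", FixedValueCalculator(42))
```

Because the service arrives as a method argument, the test can substitute it without any framework at all; whether that counts as "mocking" is mostly terminology.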

+",65549,,65549,,43124.95556,43126.34583,Should I mock a Domain Service?,,2,13,,,,CC BY-SA 3.0,, +364637,1,,,1/25/2018 3:00,,-2,2582,"

I am trying to understand async and await. Now I want to apply the async and await keywords in my current project. My process structure is:

+ +
//DataAccess 
+private List<Users> GetAllUsers()
+{
+   // ... load the users from the data store (elided) ...
+   return users;
+}
+
+//UI
+List<Users> UserList=new List<Users>();
+private async void Ok_ClickAsync(object sender, RoutedEventArgs e)
+{
+    //I want to select all users and assign them to UserList, but currently it is not used.
+    UserList=await BindUser();    
+}
+
+private async Task<List<Users>> BindUser()
+{
+  List<Users> model=await Task.Run(()=>GetAllUsers());
+  return model;
+}
+private void btnSave_Click(object sender, RoutedEventArgs e)
+{
+   //I want to use UserList here and want to validate process   
+}
+
+ +

The problem is that if I click the Save button quickly, the UserList count is 0 (actually UserList should hold over 100,000 records). So I want to check whether the BindUser() process has finished before doing the validation in btnSave_Click(). Please suggest the best solution and help me understand the async and await keywords. Thanks.
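The usual fix is to keep a handle to the loading task and await it before validating, instead of firing and forgetting. A minimal Python asyncio sketch of that idea (C# Tasks behave analogously; all names here are invented):

```python
import asyncio

class UserScreen:
    def __init__(self):
        self.user_list = []
        self._load_task = None

    async def _load_users(self):
        await asyncio.sleep(0.01)  # stands in for the slow database query
        return ["user%d" % i for i in range(3)]

    def on_ok_clicked(self):
        # Keep a handle to the task instead of fire-and-forget.
        self._load_task = asyncio.create_task(self._load_users())

    async def on_save_clicked(self):
        if self._load_task is not None:
            # Waits only if the load has not completed yet.
            self.user_list = await self._load_task
        # ... validation can now safely use self.user_list ...
        return len(self.user_list)

async def main():
    screen = UserScreen()
    screen.on_ok_clicked()          # user clicks OK
    return await screen.on_save_clicked()  # user clicks Save immediately

count = asyncio.run(main())
```

In C# the equivalent is storing the `Task<List<Users>>` in a field and `await`ing it (or checking `IsCompleted`) at the start of the save handler.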

+",294290,,,,,43125.71389,Check is finish async and await behavior,,1,2,,,,CC BY-SA 3.0,, +364638,1,,,1/25/2018 4:22,,3,2532,"

The Setup

+ +

So I'm working on a project in which there exists a MainViewModel class. This MainViewModel contains a list of Soldiers through an observable collection. I have a button in the MainView which displays a new AddSoldierView. The AddSoldierView binds to elements from the AddSoldierViewModel. The AddSoldierView is basically a form where the user inputs all of the data for that soldier.

+ +

The Problem

+ +

Now that I have the Soldier's information on the AddSoldierViewModel, I want to be able to add that back into the ObservableCollection of the MainViewModel. I have bound a command to the button (Add Soldier) on the AddSoldierView, but I'm not sure how to get that information back into the MainViewModel.

+ +

What I've tried

+ +

I've already set up an event handler on the AddSoldierViewModel in which the SoldierModel is passed as an EventArg. But I can't get the event itself to trigger.

+ +

Any suggestions? I've been trying to stay true to the MVVM spirit, but there are still some kinks I'm trying to sort out. Let me know if you want to see some code snippets, UML diagrams, or whatever.

+ +

AddSoldierViewModel.cs

+ +
public class AddSoldierViewModel : ViewModelBase
+{
+    public event EventHandler<AddSoldierEventArgs> AddSoldierClicked;
+
+    public ICommand AddSoldierCommand;
+
+    private Soldier _soldier;
+
+    public Soldier Soldier
+    {
+        get => _soldier;
+        set
+        {
+            _soldier = value;
+            RaisePropertyChanged(nameof(Soldier));
+        }
+    }
+
+    public AddSoldierViewModel() 
+    {
+        AddSoldierCommand = new RelayCommand(AddSoldier);
+    }
+
+    private void AddSoldier()
+    {
+        OnAddSoldierClicked(new AddSoldierEventArgs()
+        {
+           Soldier = Soldier
+        });
+    }
+
+    protected virtual void OnAddSoldierClicked(AddSoldierEventArgs e)
+    {
+        var handler = AddSoldierClicked;
+        handler?.Invoke(this, e);
+    }
+}
+
+ +

MainViewModel.cs

+ +
public class MainViewModel : ViewModelBase
+{
+    #region - Private Properties -
+    private Team _selectedTeam;
+    private Soldier _selectedSoldier;
+    #endregion // - Private Properties -
+
+    #region // - Public Properties -
+    public ObservableCollection<Soldier> Soldiers { get; set; }
+    public ObservableCollection<Team> Teams { get; }
+    public Team SelectedTeam
+    {
+        get => _selectedTeam;
+        set
+        {
+            _selectedTeam = value;
+            RaisePropertyChanged(nameof(SelectedTeam));
+        }
+    }
+    public Soldier SelectedSoldier
+    {
+        get => _selectedSoldier;
+        set
+        {
+            _selectedSoldier = value;
+            RaisePropertyChanged(nameof(SelectedSoldier));
+        }
+    }
+    #endregion // - Public Properties -
+
+    #region // - Commands -
+    public ICommand DeleteTeamCommand { get; private set; }
+    public ICommand AddSoldierDialogCommand { get; private set; }
+    #endregion // - Commands -
+
+    #region  - Services -
+    public IDialogService AddSoldierDialogService { get; private set; }
+    #endregion // - Services -
+
+    #region - Constructors -
+    public MainViewModel()
+    {
+        Soldiers = new ObservableCollection<Soldier>();
+        Teams = new ObservableCollection<Team>();
+
+        Soldiers.CollectionChanged += Soldiers_CollectionChanged;
+        Teams.CollectionChanged += Teams_CollectionChanged;
+
+        DeleteTeamCommand = new RelayCommand(DeleteTeam);
+        AddSoldierDialogCommand = new RelayCommand(AddSoldierDialog);
+
+        AddSoldierDialogService = new AddSoldierDialogService();
+    }
+
+    #endregion // - Constructors -
+
+    #region - Methods -
+    private void AddSoldierDialog()
+    {
+        AddSoldierViewModel addSoldierViewModel = new AddSoldierViewModel();
+        addSoldierViewModel.AddSoldierClicked += AddSoldierViewModel_AddSoldierClicked;
+        AddSoldierDialogService.ShowDialog(addSoldierViewModel);
+    }
+
+    private void AddSoldierViewModel_AddSoldierClicked(object sender, AddSoldierEventArgs e)
+    {
+        Soldiers.Add(new Soldier(e.Soldier));
+    }
+
+    private void Teams_CollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
+    {
+        foreach (var item in e.NewItems)
+        {
+        }
+        foreach (var item in e.OldItems)
+        {
+        }
+        RaisePropertyChanged(nameof(Teams));
+    }
+
+    private void Soldiers_CollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
+    {
+        foreach (var item in e.NewItems)
+        {
+        }
+        foreach (var item in e.OldItems)
+        {
+        }
+        RaisePropertyChanged(nameof(Soldiers));
+    }
+    #endregion // - Methods -
+}
+
+",294297,,1204,,43125.75208,43816.20833,Adding model into another view model's collection,,2,4,,,,CC BY-SA 3.0,, +364643,1,364936,,1/25/2018 7:53,,14,796,"

I am just getting started with RxJava, Java's implementation of ReactiveX (also known as Rx and Reactive Extensions). Something that really struck me was the massive size of RxJava's Flowable class: it has 460 methods!

+ +

To be fair:

+ +
    +
  • There are a lot of methods that are overloaded, which bumps the total number of methods significantly.
  • Perhaps this class should be broken up, but my knowledge and understanding of RxJava is very limited. The folks who created RxJava are surely very smart, and they can presumably offer valid arguments for choosing to create Flowable with so many methods.
+ +

On the other hand:

+ +
    +
  • RxJava is the Java implementation of Microsoft's Reactive Extensions, and that does not even have a Flowable class, so this is not a case of blindly porting an existing class and implementing it in Java.
  • [Update: The previous point in italics is factually incorrect: Microsoft's Observable class, which has over 400 methods, was used as the basis for RxJava's Observable class, and Flowable is similar to Observable but handles backpressure for large volumes of data. So the RxJava team were porting an existing class. This post should have been challenging the original design of the Observable class by Microsoft rather than RxJava's Flowable class.]
  • RxJava is only a little over 3 years old, so this is not an example of code being mis-designed due to a lack of knowledge about good (SOLID) class design principles (as was the case with early releases of Java).
+ +

For a class as big as Flowable its design seems inherently wrong, but maybe not; one answer to this SE question What is the limit to the number of a class methods? suggested that the answer is ""Have as many methods as you need"".

+ +

Clearly there are some classes that legitimately need a fair number of methods to support them regardless of language, because they don't readily break down into anything smaller and they have a fair number of characteristics and attributes. For example: strings, colors, spreadsheet cells, database result sets and HTTP requests. Having perhaps a few dozen methods for classes to represent those things doesn't seem unreasonable.

+ +

But does Flowable truly need 460 methods, or is it so huge that it is necessarily an example of bad class design?

+ +

[To be clear: this question specifically relates to RxJava's Flowable class rather than God objects in general.]

+",108656,,108656,,43127.35833,43130.42778,Can the RxJava class Flowable legitimately have 460 methods?,,3,5,2,,,CC BY-SA 3.0,, +364649,1,364651,,1/25/2018 8:56,,20,3007,"

Typical advice before any production deployment is to back up the DB first. This way, if the new update has some issue that can lead to potential data loss or logical data corruption, then you still have a backup to compare against and correct old records.

+ +

However, this works well only until the DB size reaches a few GBs. Once the DB is huge, backups take a long time to complete. What are some best practices to follow in such situations to avoid logical data corruption caused by logical issues in a code deployment?

+",23888,,,,,43125.91944,What are the practices you follow to avoid wrong data updates in big databases?,,4,6,7,,,CC BY-SA 3.0,, +364654,1,364656,,1/25/2018 11:44,,2,98,"

Say I have an interface Interface, which only contains getters for various fields. This interface has multiple implementations (say Foo and Bar), each of which adds various fields. All these implementations are immutable.

+ +

Assume I have an instance interface of Interface, and I want to create a copy of this instance which changes one of the fields exposed by Interface. How can I achieve this without casting interface to a subtype? I cannot use a Builder class for Interface because this is an abstract interface.

+ +

I believe this problem is a fairly generic OOP problem but here is what the Java code for it would look like:

+ +
public interface Interface {
+     public int getVersion();
+}
+
+public class Foo implements Interface {
+     private final int version;
+     private final String owner;
+
+     public Foo(int version, String owner) {
+          this.version = version;
+          this.owner = owner;
+     }
+
+     public int getVersion() {
+          return version;
+     }
+
+     public String getOwner() {
+          return owner;
+     }
+
+     public String toString() {
+          StringBuilder builder = new StringBuilder(owner);
+          builder.append("" v"");
+          builder.append(version);
+          return builder.toString();
+     }
+}
+
+public class Bar implements Interface {
+     // similar to Foo
+}
+
+ +

and the situation where I want to copy an existing instance:

+ +
Interface instance = new Foo(1, ""some value"");
+assertEquals(""some value v1"", instance.toString());
+Interface copy = ...; // create the copy based on instance but updating the version to 2
+assertEquals(""some value v2"", copy.toString());
+
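One common language-agnostic approach is to declare a polymorphic copy method on the interface itself, so each implementation returns a modified copy of its own concrete type and no cast is ever needed. A minimal Python sketch of the idea (names adapted loosely from the Java sample; whether this fits the design is an assumption):

```python
from abc import ABC, abstractmethod

class Versioned(ABC):
    @abstractmethod
    def get_version(self) -> int: ...

    @abstractmethod
    def with_version(self, version: int) -> "Versioned":
        """Each subtype returns a modified copy of its own concrete type."""

class Foo(Versioned):
    def __init__(self, version, owner):
        self._version = version
        self._owner = owner

    def get_version(self):
        return self._version

    def with_version(self, version):
        # The subtype knows its own extra fields, so it can copy them.
        return Foo(version, self._owner)

    def __str__(self):
        return "%s v%d" % (self._owner, self._version)

instance = Foo(1, "some value")
copy = instance.with_version(2)  # no cast needed; works through the interface
```

In Java this would be a `withVersion(int)` method declared on the interface, with a covariant return type in each implementation.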
+",294326,,294326,,43125.51528,43185.35625,Creating a modified copy of an instance of an abstract interface,,2,0,,,,CC BY-SA 3.0,, +364661,1,364667,,1/25/2018 13:46,,-2,61,"

I have a collection of people. Each person has a rank (A, B, C, D, where A is highest and D is lowest) and a set of skills (SkillA, SkillB, SkillC, SkillD) defined.
+I also have a set of requirements - for example:
+- 1 person with rank A
+- 1 person with rank B and SkillA
+- 3 persons with rank C and SkillA and SkillC
+etc.

+ +

I'm looking for the best approach to check whether there is any combination of persons in the collection that will fulfill the set of requirements. One person can't be used for two requirements.

+ +

I'm using .NET, and at the moment I'm trying to solve this with LINQ queries, but in the meantime I'm interested in whether there is any other way to approach it.
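For small collections, a brute-force sketch of the check is to expand each requirement into individual slots and try every assignment of distinct people to slots. This is exponential and only meant to illustrate the problem shape (a real solution for larger inputs would use bipartite matching); all names here are invented:

```python
from itertools import permutations

def satisfies(person, slot):
    rank, skills = slot
    return person["rank"] == rank and skills <= person["skills"]

def can_fulfil(people, requirements):
    """Expand (count, rank, skills) requirements into single slots,
    then try every assignment of distinct people to slots."""
    slots = []
    for count, rank, skills in requirements:
        slots.extend([(rank, set(skills))] * count)
    if len(slots) > len(people):
        return False
    # permutations guarantees each person is used at most once
    for combo in permutations(people, len(slots)):
        if all(satisfies(p, s) for p, s in zip(combo, slots)):
            return True
    return False

team = [
    {"rank": "A", "skills": set()},
    {"rank": "B", "skills": {"SkillA"}},
]
reqs = [(1, "A", []), (1, "B", ["SkillA"])]
```

The same expansion into slots also works as the starting point for a proper matching algorithm (people on one side, slots on the other, edges where `satisfies` holds).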

+",294316,,,,,43125.62292,What is the best approach to find elements in collection that fullfill scenario,<.net>,1,2,,,,CC BY-SA 3.0,, +364666,1,364673,,1/25/2018 14:56,,-8,357,"

I want to know whether storage is an issue, with respect to the current hardware and software market, when we talk about the large size of C++ programs. In certain conditions we have to choose between some types of programming, e.g. between OOP and procedural programming. I know that object-oriented programs are larger than procedural programs. But is storage even a serious issue when choosing the best method to solve our problem?

+",294175,,1204,,43125.75625,43126.83681,Importance of storage in c++,,1,16,,43125.75556,,CC BY-SA 3.0,, +364671,1,,,1/25/2018 15:44,,6,3194,"

I have written a data cleansing application which, for the most part, works well. It is not designed to handle large volumes of data: nothing more than about half a million rows. So early on in the design process, a decision was made to try and do as much of the work in-memory as possible. The thought was that writing to database or disk would slow things down.

+ +

For most of the various cleaning operations the application offers, this has proved true. When it comes to deduplication, however it is absurdly slow. Running on a fairly powerful server, it takes about a full 24 hours to dedupe half a million rows of data.

+ +

My algorithm runs along these steps in pseudocode:

+ +
List<FileRow> originalData;
+List<FileRow> copiedData = originalData.Copy;
+
+foreach(FileRow original in originalData)
+{
+    foreach(FileRow copy in copiedData)
+    {
+        //don't compare rows against themselves
+        if(original.Id != copy.Id) 
+        {
+            // if it's a perfect match, don't waste time with slow fuzzy algorithm
+            if(original.NameData == copy.NameData)
+            {
+                original.IsDupe = true;
+                break;
+            }
+
+            // if it's not a perfect match, try Jaro-Winkler
+            if(_fuzzyMatcher.DataMatch(original.NameData, copy.NameData))
+            {
+                original.IsDupe = true;
+                break;
+            }
+        }
+    }
+}
+
+ +

Looking at this, it's obvious why it's so slow: where other operations can make a single pass through the rows, this has to go through the whole file again for each row, so the processing time grows quadratically.

+ +

I have also used threading elsewhere to speed things up, but my attempts to thread this have failed. In the real-world code we don't just mark duplicates as ""true"" but group them, so that all instances of a given match get a unique Id. But the procedure has no way of knowing whether another thread has found and marked a duplicate, so threading leads to errors in the grouping Id assignment.

+ +

To try and improve matters we added a db-based cache of common Jaro-Winkler matches to try and eliminate the need for that relatively slow method. It didn't make a significant difference.

+ +

Is there another approach I can try, or improvements I can make to this algorithm to make it faster? Or am I better off giving up trying to do this in memory and writing it to a database to do the job there?
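A common way to avoid the full n^2 pairwise comparison is blocking: group rows by a cheap key first and run the fuzzy comparison only within each block. A minimal Python sketch of the idea, using difflib.SequenceMatcher as a stand-in for Jaro-Winkler (the blocking key here is deliberately crude; real deduplication would use something stronger, such as sorted name tokens or phonetic codes):

```python
from collections import defaultdict
from difflib import SequenceMatcher  # stands in for Jaro-Winkler here

def normalize(name):
    return " ".join(name.lower().split())

def is_fuzzy_match(a, b, threshold=0.9):
    return SequenceMatcher(None, a, b).ratio() >= threshold

def dedupe(rows):
    """Block on a cheap key so fuzzy comparison only runs within
    small candidate groups instead of across all n^2 pairs."""
    groups = defaultdict(list)
    for row in rows:
        key = normalize(row)[:1]  # crude blocking key: first letter
        groups[key].append(row)

    dupes = set()
    for block in groups.values():
        for i, a in enumerate(block):
            for b in block[i + 1:]:
                na, nb = normalize(a), normalize(b)
                if na == nb or is_fuzzy_match(na, nb):
                    dupes.add(b)  # keep the first occurrence as canonical
    return dupes

rows = ["John Smith", "john  smith", "Jon Smith", "Alice Jones"]
dupes = dedupe(rows)
```

Because each pair within a block is compared exactly once (i before j), one pass also makes grouping deterministic, which sidesteps the threading problem of two threads marking the same pair.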

+",22742,,,,,43129.71319,Effecient algorithm for data deduplication in procedural code,,3,10,2,,,CC BY-SA 3.0,, +364674,1,364698,,1/25/2018 16:05,,5,1413,"

I have read questions like this: +Are unit tests really used as documentation?

+ +

With regards to code comments; my research so far is telling me:

+ +

1) Some developers do not like any code comments and prefer to read unit test code method names to understand the code.

+ +

2) Some developers prefer code comments and unit test method names to understand the code

+ +

I have used the Sandcastle documentation tool to document my project API. I am debating whether to use Sandcastle to generate .HTML files for my Unit Tests project so someone else who reads them understands them more quickly. Is this necessary or a complete overkill? My gut is telling me - document the API with code comments, but do not document the actual test project.

+ +

The reason I ask is because I read a question on here yesterday where a user was talking about documenting the test project by explaining what each test does in more detail using Sandcastle.

+",65549,,,,,43125.93681,Should I document a simple Unit Test project?,,4,5,,,,CC BY-SA 3.0,, +364684,1,364689,,1/25/2018 18:28,,11,392,"

Background: I've recently inherited a set of projects at my company and I'm trying to sort out some fundamental issues with how they've been handled. Namely, the previous developers (who are no longer with the company) were not using any form of source control, made little documentation, and didn't really have any good development processes in place.

+ +

So now I've got three servers worth of projects (development, staging, production) which consist of mostly websites and applications and tools built for third-party applications and APIs we use, down to stores of SQL scripts and other things. My first thought was to get all of this into Git before changes and fixes are made, but I'm having a difficult time figuring out the best way to do it.

+ +

A lot of previous development was done directly on the production servers, which has created a divide between each server's code base. It's not immediately clear where all the differences lie - I'm seeing bug fixes on the production side that aren't carried over on development/staging, as well as new features on the development that haven't been moved up towards staging/production.

+ +

Question: What would be the best way for me to organize and move these into Git? How would I structure my repos/branches to accommodate the differences in the code?

+ +

I've considered continuing development from clones of the production server code and keeping the development/staging code bases as historical reference. Would this potentially be a point to start with, considering I don't know anything about the dev/staging code anyway? I could simply create repos of the production servers for each website, tool, script set, etc., create branches for the existing dev/staging code, and any new development would branch from the production server's code base. Does this make sense?

+",294362,,294362,,43125.78819,43126.56181,How do I start using Git for differing code bases from different servers?,,3,7,2,,,CC BY-SA 3.0,, +364692,1,,,1/25/2018 20:18,,1,75,"

We are working on a large (legacy) distributed system with various applications and messaging (SOAP services, database dblinks, REST services)... We lack documentation and a big-picture view of this complex system, and we are searching for a way to know whether a certain update to a shared datatype could impact our system, and if so, how. For now I have started to do some work in Excel, but I feel that this is a no-go...

+ +
    +
  1. First, this documentation will not be synchronized with the code.
  2. Some data is mutated during its life in the system (i.e. concatenation of various data to make a unique id, etc.).
  3. I cannot automatically filter my Excel file and find all the applications using a certain datatype and how.
+",285517,,58415,,43126.35278,43156.35486,what tool or methodology to use to do impact analisys in a distributed system?,,2,4,1,,,CC BY-SA 3.0,, +364696,1,,,1/25/2018 21:21,,7,1074,"

I'm trying to get my head around the development workflow for working with microservices and Docker multicontainer applications.

+ +

The thing that I'm particularly trying to solve is getting a good 'live reload' workflow going for development.

+ +

For example:

+ +

For my frontend I can use webpack-dev-server, which automatically reloads the page every time I save changes. This makes it easy to write front end code quickly - with no waiting for deployments.

+ +

However - if the frontend is displaying some data retrieved from a REST API, I'm likely going to be running either a development version of that API (that itself is live reloading with nodemon, or similar) or a mocked API. This is easy enough to achieve with webpack using the proxy configurations for development environment.

+ +

For my REST API - I similarly might want to be mocking other microservices. For example if my REST API had a POST endpoint for saving an image - and I'm going to save that to an AWS S3 bucket, via the aws-sdk, I'm likely to want to be mocking that functionality.

+ +

Essentially, it looks like for every microservice (or most of them), I'm also going to want to create a mock version of it.

+ +

What I imagine my workflow would look like, is something like this:

+ +
    +
  1. On the front end I create a button that submits an image to the backend. I check that the POST request is being made properly.
  2. On my REST API, I create an endpoint that receives the POST request. I click the front end button and check that the backend is receiving it.
  3. I create a backend microservice to make AWS SDK calls. I wire it to point to a real development AWS bucket. I wire the POST endpoint to submit the image to this microservice. I click the frontend button and check that it ends up in the S3 bucket.
  4. But submitting this image is too slow, so I create a mock version of the AWS SDK service. I switch the REST API to use that one instead.
  5. On the frontend I now write some functionality to display an 'image saved!' confirmation message.
+ +

My question is:

+ +
    +
  1. Is this a standard way of doing things, or am I going way off base?
  2. Is there a way in Docker to quickly switch between whether I'm using the real microservice or the mocked one?
+",109776,,109776,,43125.90486,43211.97639,Is mocking microservices a thing?,,1,8,1,,,CC BY-SA 3.0,, +364700,1,,,1/25/2018 22:45,,6,439,"

Looking for continuous deployment strategies regarding SQL server database projects targeting Azure SQL using VSTS. My scenario...

+ +
    +
  • Using VSTS for CI/CD
  • Using a SQL Server Database Project to define my database schema
  • Using Dapper for my ORM (so no migrations through EF)
  • Using an Azure SQL Database
  • Some of the tables have seed data, that will most likely be added to over time (assume no seed data will be deleted for the moment)
  • Using integration tests that target a separate database, as the tests will wipe each table's data
  • Using Git flow
+ +

I see three scenarios that I need to handle

+ +
    +
  1. Non-destructive database changes are made to the development branch, that may or may not include data changes
  2. Destructive database changes are made to the development branch
  3. Multiple commits with a combination of scenarios 1 and 2 need to be merged into the master branch
+ +

The first scenario can easily be managed using VSTS's built in Azure SQL Deployment task and DACPACs. Seed data will be added using pre/post deployment scripts

+ +

The second scenario is a little tougher, however it should also follow the same approach as scenario one with the pre-deployment script being much more important as it would be responsible for removing constraints, deleting data, etc...

+ +

The third approach I'm at a complete loss on. What will happen is that I have a variety of commits that need to be merged into a single branch, all with pre/post deployment scripts that need to be executed in a specific order. I'm not aware of a strategy or tool that can handle this, and I'm looking for suggestions here.

+ +

Finally, and this isn't extremely important, but I also have integration tests that target a test database that I'd like to run during the build (CI part) as opposed to the release (CD) part. Is anyone aware of a guide or how-to on implementing integration tests against a test database as part of a CI process?

+ +

-Tim

+",186648,,,,,44165.20972,Continuous Deployment Database Project VSTS,,1,0,,,,CC BY-SA 3.0,, +364702,1,364703,,1/26/2018 0:41,,-1,749,"

For example, my input is this dictionary: +s=[""noon"", ""n"", ""o"", ""noo"", ""Good"", ""Goodnoon"", ""marry"", ""me"", ""marryme"", ""air"", ""r"", ""airbag""]. The output should be a list of compound words, like Goodnoon and airbag.
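A standard way to detect compound words is a word-break check: a word is compound if it can be split into two or more other dictionary words. A minimal Python sketch follows. Note that with this exact dictionary 'airbag' has no valid split (since 'bag' is not present), while 'noon' and 'noo' do split into the single letters 'n' and 'o':

```python
def find_compounds(words):
    """Return words that can be split into two or more dictionary words."""
    word_set = set(words)

    def can_split(target, start, parts):
        if start == len(target):
            return parts >= 2
        for end in range(start + 1, len(target) + 1):
            piece = target[start:end]
            # don't let a word "explain" itself as one single piece
            if piece in word_set and not (parts == 0 and end == len(target)):
                if can_split(target, end, parts + 1):
                    return True
        return False

    return [w for w in words if can_split(w, 0, 0)]

s = ["noon", "n", "o", "noo", "Good", "Goodnoon", "marry", "me",
     "marryme", "air", "r", "airbag"]
compounds = find_compounds(s)
```

For large dictionaries the recursive check would be memoised (the classic word-break dynamic program), but the structure stays the same.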

+",294388,,16247,,43126.86944,43126.86944,How to list the compound words from a dictionary,,2,8,,43126.86736,,CC BY-SA 3.0,, +364709,1,364710,,1/26/2018 6:23,,3,133,"

I wanted to know if it's ok to try and dependency inject into a class that is dependency injected, something like:

+ +
class RootDependency{
+}
+
+class AnotherDependency{
+      [InjectDecoratorForWhateverLibraryYouAreUsing]
+      RootDependency injectedRoot;
+}
+
+// Inject AnotherDependency into a class...
+class RandomClass{
+      [InjectDecoratorForWhateverLibraryYouAreUsing]
+      AnotherDependency itDepends;
+}
+
+",41652,,131624,,43126.33819,43126.43264,DI into an object that is DI'd?,,1,0,,,,CC BY-SA 3.0,, +364713,1,364715,,1/26/2018 7:53,,3,493,"

Learning about the Liskov substitution principle, I understand that preconditions can be weakened and postconditions can be strengthened in subtypes. However, I do not understand how invariants can be stronger in the subtype. + If the base class says, e.g., the Speed must be lower than 100 at all times, and then a subtype says Speed < 50, can't replacing the base with the subtype cause some issues? I mean, if somewhere the code works with the base class and uses a speed of 90, introducing the subtype means it will be invalid.

+ +

Code and some text can be found on slides 15-16:

+ +

https://www.cs.cmu.edu/~aldrich/214/slides/formal-analysis-part2.pdf
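A minimal Python sketch of the Speed example from the question, showing why a client written against the base-class contract breaks when the subtype narrows the allowed range (all names are invented):

```python
class Vehicle:
    """Invariant: 0 <= speed < 100."""
    MAX_SPEED = 100

    def __init__(self):
        self._speed = 0

    def set_speed(self, speed):
        if not (0 <= speed < self.MAX_SPEED):
            raise ValueError("invariant violated")
        self._speed = speed

class SlowVehicle(Vehicle):
    """Strengthened invariant: 0 <= speed < 50."""
    MAX_SPEED = 50

def client_code(vehicle: Vehicle):
    # Written against the base-class contract, so 90 is legal input.
    vehicle.set_speed(90)
    return True
```

Passing a SlowVehicle to client_code raises, which is exactly the substitution failure the question describes: narrowing an externally observable range behaves like a strengthened precondition, which LSP forbids.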

+",294256,,294256,,43126.36181,43126.40069,Liskov principle: subclasses can have stronger invariants. How could it work?,,2,8,2,,,CC BY-SA 3.0,, +364717,1,,,1/26/2018 10:09,,11,1482,"

I'm struggling with a very simple question:

+ +

I'm now working on a server application, and I need to invent a hierarchy for the exceptions (some exceptions already exist, but a general framework is needed). How do I even start doing this?

+ +

I'm thinking of following this strategy:

+ +

1) What is going wrong?

+ +
    +
  • Something is asked, which is not allowed.
  • Something is asked, it is allowed, but it does not work, due to wrong parameters.
  • Something is asked, it is allowed, but it does not work, because of internal errors.
+ +

2) Who is launching the request?

+ +
    +
  • The client application
  • Another server application
+ +

3) Message handling: as we are dealing with a server application, it's all about receiving and sending messages. So what if the sending of a message goes wrong?

+ +

As such, we might get following exception types:

+ +
    +
  • ServerNotAllowedException
  • ClientNotAllowedException
  • ServerParameterException
  • ClientParameterException
  • InternalException (in case the server does not know where the request is coming from)
      • ServerInternalException
      • ClientInternalException
  • MessageHandlingException
+ +
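A minimal Python sketch of the proposed hierarchy, with intermediate base classes so handlers can catch by category (the root name AppException is an assumption; the leaf names mirror the list above):

```python
class AppException(Exception):
    """Root: lets callers catch everything from this server in one clause."""

class NotAllowedException(AppException): pass
class ServerNotAllowedException(NotAllowedException): pass
class ClientNotAllowedException(NotAllowedException): pass

class ParameterException(AppException): pass
class ServerParameterException(ParameterException): pass
class ClientParameterException(ParameterException): pass

class InternalException(AppException): pass
class ServerInternalException(InternalException): pass
class ClientInternalException(InternalException): pass

class MessageHandlingException(AppException): pass
```

The intermediate bases (NotAllowed, Parameter, Internal) encode the "what went wrong" axis, while the Server/Client leaves encode "who asked"; a handler can then catch at whichever level it cares about.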

This is a very general approach to defining an exception hierarchy, but I'm afraid I might be missing some obvious cases. Do you have ideas on which areas I'm not covering? Are you aware of any drawbacks of this method, or is there a more general approach to this kind of question (in the latter case, where can I find it)?

+ +

Thanks in advance

+",250257,,250257,,43126.43958,43131.58125,How to design exceptions,,5,8,1,,,CC BY-SA 3.0,, +364725,1,364727,,1/26/2018 13:29,,9,2648,"

The SRP states that a class (module) should have only one reason to change. The ""duties"" of an Interactor in Bob Martin's clean architecture are, per use case: receive requests/inputs from a controller; orchestrate domain entities to fulfil the requests; and prepare the output data.

+ +

Does this imply three reasons to change? (ie whenever inputs change or domain functionality is expanded or extra output fields are added.) +If necessary, what would be a good strategy to resolve this? (eg, CQRS?)

+ +

My current approach is to make a use-case Interactor module with three classes, one per concern, and a fourth Facade/Mediator class for orchestration and client interfacing. However, doesn't this push the SRP violation up to the module level?

+ +
+ +

As pointed out by @Robert Harvey, the term ""duties"" was used rather sloppily. The actual design issue has been the large changes to the interactor needed both when the domain changed and when the OutputData fields/formats changed (less so with input). Aren't these two distinct reasons for change?

+ +

As I realised from @Filip Milovanović and @guillaume31, SRP is not violated, esp. with three separate classes in the interactor module. Also, at the module level, the ""Common Closure Principle"" is perhaps more appropriate than the SRP. The CCP (""Gather into components ... classes that change for the same reasons and at the same times."") might suggest to separate the interactor classes. (But then the classes corresponding to the same use case would be spread out between locations.) Thanks to the answers and comments, these trade-offs have become much clearer to me.

+",294434,,4,,43158.59861,43812.66667,"Do Interactors in ""clean architecture"" violate the Single Responsibility Principle?",,3,5,5,,,CC BY-SA 3.0,, +364729,1,364738,,1/26/2018 15:57,,1,1974,"

In my domain, where I am applying CQRS, there are some external service calls made in order to do some validation. I am a bit puzzled about where to put these calls. I am considering putting them in my process manager; on the other hand, AFAIK the process manager should be a simple state machine that reacts to events and dispatches commands to other aggregates. I can think of two solutions:

+ +

1) One solution is to make these calls and, depending on the result, transition to another state by self-publishing an event. Though I don't like the idea of the process manager publishing events.

+ +

2) I can wrap my service calls behind another interface, and that service itself can raise the event. Though I don't like this idea either, since an event should be persisted before being published.

+ +

How should I tackle this problem?
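A minimal sketch of the second option, with entirely hypothetical names: the external call is hidden behind an adapter that persists the resulting event before it is handed to the process manager, so the process manager itself stays a pure state machine (events in, commands out, no I/O).

```javascript
// Sketch of option 2; all class and event names are hypothetical.
class ValidationAdapter {
  constructor(externalService, eventStore) {
    this.externalService = externalService;
    this.eventStore = eventStore;
  }
  validate(orderId) {
    const ok = this.externalService.check(orderId); // the external call
    const event = ok
      ? { type: 'OrderValidated', orderId }
      : { type: 'OrderValidationFailed', orderId };
    this.eventStore.append(event); // persist before it is published/handled
    return event;
  }
}

class RefundProcessManager {
  constructor() {
    this.state = 'AwaitingValidation';
    this.commands = [];
  }
  // Pure state machine: events in, commands out; no I/O here.
  handle(event) {
    if (event.type === 'OrderValidated') {
      this.state = 'Validated';
      this.commands.push({ type: 'IssueRefund', orderId: event.orderId });
    } else if (event.type === 'OrderValidationFailed') {
      this.state = 'Rejected';
    }
  }
}
```

With this split, the "persist before publishing" concern lives in one adapter, and the process manager never raises events itself.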

+",255152,,,,,43126.72222,"In CQRS, is it okay to call external services from Sagas/Process Manager?",,3,2,,,,CC BY-SA 3.0,, +364733,1,,,1/26/2018 16:41,,1,769,"

So imagine, I have an endpoint where subscription payment to a magazine can be made.

+ +

There could be different flavors of subscription, e.g. weekly, fortnightly, yearly, 5 years, etc.

+ +

For each subscription type, the JSON request to the subscription endpoint varies slightly. For example, with the 5-year subscription you might want to ask for the residential address, but for the weekly subscription perhaps you do not care. Thus validation (and also business logic) differs slightly based on the subscription type.

+ +

The question is, which is the preferred way to model the endpoints:

+ +
    +
  1. Have one single endpoint: /subscription. In the JSON that is sent, you have a discriminator property. That is: + +{ +name: ""Joe"", +Age: 66, +subscription_type: ""weekly | monthly | fortnightly | yearly | etc"" +} + +So that in the implementation of /subscription you have code that performs different validation on the JSON request and executes different business logic depending on the value of subscription_type.
  2. +
  3. Have separate endpoints for the different subscription types. For example: /subscription/weekly, /subscription/monthly, etc., and have the implementation of each of these endpoints only care about the validation and business logic specific to its subscription type.
  4. +
+ +

Is any other option possible apart from the two I mentioned? Is there any best practice for dealing with this kind of scenario?
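The discriminator approach from option 1 can be sketched as a validation table keyed by the discriminator, so a single handler stays small. The field rules below (e.g. that a 5-year subscription requires a residential address) are assumptions taken from the example above, not an actual specification:

```javascript
// Sketch of option 1 with hypothetical per-type field rules: a single
// /subscription handler picks its validation from a map keyed by the
// discriminator property.
const requiredFields = {
  weekly:   ['name', 'age'],
  monthly:  ['name', 'age'],
  yearly:   ['name', 'age'],
  '5years': ['name', 'age', 'residential_address'], // assumed rule
};

function validateSubscription(payload) {
  const rules = requiredFields[payload.subscription_type];
  if (!rules) return { ok: false, errors: ['unknown subscription_type'] };
  const errors = rules
    .filter(field => payload[field] === undefined)
    .map(field => `missing field: ${field}`);
  return { ok: errors.length === 0, errors };
}
```

The same table-driven idea extends to dispatching per-type business logic, which keeps the single endpoint from degenerating into a long if/else chain.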

+",41581,,222996,,43126.72986,43130.65972,"In REST, Is Including discriminator inside JSON payload preferred to having separate endpoints",,4,2,1,,,CC BY-SA 3.0,, +364736,1,364757,,1/26/2018 16:52,,3,249,"

Please see the code below:

+ +
public class Customer
+{
+    private readonly IList<Order> _orders = new List<Order>();
+
+    public FirstName FirstName { get; set; }
+    public LastName LastName { get; set; }
+    public Province Province { get; set; }
+    public IEnumerable<Order> Orders 
+    {
+        get { foreach (var order in _orders) yield return order; }
+    }
+
+    internal void AddOrder(Order order)
+    {
+        _orders.Add(order);
+    }
+}  
+
+ +

Notice that I have removed Primitive Obsession for FirstName, LastName and Province. Notice that I also have a list of Orders, which is returned via an IEnumerable. I asked this question last month: What is the benefit of encapsulating a collection inside a class?

+ +

I decided not to encapsulate the list inside an object in the end. However, does this still have the Primitive Obsession smell? I am trying to avoid Primitive Obsession consistently.

+ +

Should I be doing:

+ +
private readonly OrderList _orders = new OrderList();
+
+ +

Instead of:

+ +
private readonly IList<Order> _orders = new List<Order>();
+
+",65549,,131624,,43126.95764,43127.50069,Should a collection be encapsulated inside a class if I am avoiding Primitive Obsession?,,2,13,1,,,CC BY-SA 3.0,, +364739,1,,,1/26/2018 17:21,,4,844,"

Where portability is defined generally as the percentage of platforms a language or technology can run on, C/C++ are often cited as being more portable than Java, because a Java application depends on a JVM being present.

+ +

But, what prevents a Java application from shipping within a JVM wrapper? Or being transpiled to C/C++ with a supporting framework? (Essentially a JVM, but as a supporting library instead of a wrapper/container.)

+ +

Is there a technical issue? A licensing issue? Or, simply that no one has decided to do it!?

+ +
+ +

Taking a concrete example, consider the 2nd bullet of the first answer to ""Why isn't Java more widely used for game development?"", which states that:

+ +
+

Most consoles (e.g., 360, PS3) do not have a JVM, so you cannot reuse code from the PC version. It is much easier to compile C++ code to support various devices.

+
+ +

If this claim is correct (with regards to ""most"" consoles), is the lack of a JVM for these platforms due to technical limitations? Legal? Political? Etc.

+ +
+ +

When it's stated that Java ""can't"" be run on game consoles (or iPhones), do we really mean can't!? Or, do we mean, ""no one's bothered to build the necessary plumbing""?

+",94768,,94768,,43126.79861,43127.42639,What prevents Java from achieving C-level portability?,,5,2,3,,,CC BY-SA 3.0,, +364740,1,,,1/26/2018 17:35,,3,2530,"

While reading an interface control document, I had difficulty understanding this concept. There is one parameter, having an LSB of 0.0625 and an MSB of 2048, that should be transmitted from one piece of equipment to another. Its range is 0 to 2400.

+ +

This should be transmitted using only one word, that is, 2 bytes.

+ +

Now the problem: I understood this to mean that the parameter is measured by a measurement system with a resolution of 0.0625. Since it is a decimal number, how can we transmit this continuous-range parameter using 2 bytes, which hold purely integers (the unsigned range for 2 bytes is 0 to 65535)?

+ +

That parameter is speed, which is being measured by an INS-GPS avionics system and is being transmitted to the CPU using only 2 bytes.

+ +

How should I understand this?

+ +

How can we represent a continuous decimal parameter as a discrete integer parameter?
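The usual answer is fixed-point scaling: the 16-bit word does not carry the physical value directly, it counts LSB units of 0.0625, and the receiver multiplies back. Note that bit 15 then has weight 2^15 × 0.0625 = 2048, which matches the stated MSB. A small sketch (the function names are mine, not from the document):

```javascript
// Fixed-point sketch: the 16-bit word counts LSB units of 0.0625.
// Bit 15 then has weight 2^15 * 0.0625 = 2048, matching the stated MSB,
// and the full range 0..2400 needs only 2400 / 0.0625 = 38400 counts,
// which fits comfortably in an unsigned 16-bit word (0..65535).
const LSB = 0.0625;

function encodeSpeed(speed) {        // physical value -> 16-bit word
  return Math.round(speed / LSB) & 0xFFFF;
}

function decodeSpeed(word) {         // 16-bit word -> physical value
  return word * LSB;
}
```

Because 0.0625 is exactly 2^-4, this particular scaling is lossless in binary: encoding is just a 4-bit left shift of the value's binary representation.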

+",294452,,274996,,43130.61458,43130.61458,Understanding LSB and MSB,,2,8,,,,CC BY-SA 3.0,, +364744,1,,,1/26/2018 17:48,,0,567,"

I am building an SPA. It will use WebGL, Canvas and SVG for certain components, and HTML for rendering views. I have a messaging system set up to define messages coming from a server; handler classes apply the logic to the domain model. I'd like the domain model to be as simple as possible, without any ceremony to cater to other layers (e.g. the UI). So, for example:

+ +
class Customer {
+    get orders() { return []; } // return just an array of orders
+    get name() { return this._id; } // return just a simple string
+    set name(value) { this._id = value; } // needed so handlers can assign
+}
+
+ +

This allows me to write simple handlers:

+ +
class SetCustomerNameHandler {
+    execute(customer) {
+        customer.name = this.name;
+    }
+}
+
+ +

Vue.js, React.js and other frameworks provide techniques to listen to these classes without needing to change anything. Vue.js, for example, redefines getters/setters under the hood to listen for changes. So for HTML views I'm settled (provided I adhere to the limitations of the listening technique).

+ +

However, I need quite some view logic in custom Canvas/WebGL/SVG renderers. They need to respond to changes in the domain model. UI frameworks do not really support these use cases natively, yet they do provide some helpers.

+ +

I have a few options to apply. All seem to have disadvantages which I don't like, and I'm wondering if there is a cleaner way of dealing with the problem.

+ +
    +
  • Proxies. I can instantiate a proxy on an object in the renderer class to track changes of a domain object. This works, but handlers do not operate on proxies (they operate directly on the domain objects) and as such it breaks down. It only works if I access proxied objects on the renderers instead (e.g. CustomerRenderer.customer.name = newName). + +
      +
    • RxJs. Powerful, but requires changes in the domain model - domain objects must make themselves observable.
    • +
    • Vue.js. Simple yet as RxJs requires domain objects to be ""Vue-ed"".
    • +
  • +
+ +

Is there a way in javascript (es5/es6/es7) to keep the domain model clean and still respond to changes in the model?
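One way around the first objection to Proxies is to create the proxy once, at the point where domain objects are handed out (e.g. a repository), so that handlers and renderers both receive the same proxied instance. A sketch, with a hypothetical ChangeTracker and a deliberately plain domain class:

```javascript
// Sketch: create the Proxy where domain objects are handed out, so both
// handlers and renderers operate on the same tracked instance. The domain
// class itself stays plain; ChangeTracker is a hypothetical name.
class ChangeTracker {
  constructor() { this.listeners = []; }
  track(target) {
    const listeners = this.listeners;
    return new Proxy(target, {
      set(obj, prop, value) {
        obj[prop] = value;                      // write through to the target
        listeners.forEach(fn => fn(prop, value)); // notify renderers
        return true;
      },
    });
  }
  onChange(fn) { this.listeners.push(fn); }
}

class Customer { constructor(name) { this.name = name; } } // plain domain object

const tracker = new ChangeTracker();
const customer = tracker.track(new Customer('old'));
const changes = [];
tracker.onChange((prop, value) => changes.push([prop, value]));
customer.name = 'new'; // a handler writing through the proxy is observed
```

The trade-off is that nothing may hold a reference to the raw, un-proxied object; the repository boundary is what enforces that.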

+",15986,,,,,43126.74167,What is the cleanest way to model my domain in JavaScript?,,0,4,,,,CC BY-SA 3.0,, +364745,1,,,1/26/2018 18:03,,1,253,"

I am all in favour of progressive enhancement and using server-side rendering when fetching a URL. The age-old discussion cites several advantages, such as improved load time, SEO crawling, and possibly an improved level of ""correctness"", etc.

+ +

However I am getting my doubts as to why I am also setting up my server-side to handle form submits (i.e. a native HTML form being submitted to the server).

+ +

While my question is generic, the technology stack I am using is a Universal React + Redux application connecting to a third-party API. Therefore when JavaScript is disabled the server-side connects to the API to retrieve or POST data, while when JavaScript is enabled the user's browser connects to the API directly.

+ +

What are the advantages in handling server-side POST or when should it be a priority?

+",271580,,271580,,43127.68681,43349.08333,Why bother with server-side form submissions in a SPA?,,1,10,0,,,CC BY-SA 3.0,, +364749,1,364761,,1/26/2018 19:08,,2,3583,"

We have a highly efficient library written in a low-level programming language. We would like to allow third parties to implement a GUI for it.

+ +

The approach we would like to take is to write a REST server. The GUI (written in whatever language) needs to start the server and is then able to use the library.

+ +

As said, the goal is to create a local desktop application, so the server should only listen on localhost and only accept the GUI as a client (the latter may be solved via authentication).

+ +

Is there a reason such an approach is not used more often (I could hardly find anything)? The only place it is mentioned seems to be The Modern Application Stack – Part 3: Building a REST API Using Express.js + as ""... MERN (MongoDB, Express, React, Node.js) Stacks, why you might want to use them, and how to combine them to build your web application (or your native mobile or desktop app).""

+ +

Are there tutorials or special architectural patterns?

+ +

I found the following resources:

+ + +",294455,,1204,,43126.83264,43537.59306,Rest-based desktop application,,3,10,1,,,CC BY-SA 3.0,, +364755,1,364766,,1/26/2018 21:02,,3,30,"

So I have a function in our application's Business Edits assembly (which references the Data Access assembly) which I've found a need to use in the Data Access assembly. I like the idea of keeping the business rules together, but if I want to use the function I obviously have to move it. So what I've decided to do is move the function to our bottom-level Helpers assembly and have the current Business Edits function call the Helpers function. That way the function is still discoverable in the Business Edits assembly but usable in the DataAccess assembly.

+ +

This solution feels wrong, but is this actually a terrible thing to do? If so, why, and what's the best alternative?

+",65425,,,,,43127.00069,Identical functions in two different assembllies - alternatives?,,1,0,,,,CC BY-SA 3.0,, +364762,1,,,1/26/2018 23:01,,0,392,"

I’ve been reading about older processors (the 8080, 8086 and the like) and I've seen that those older 8-bit processors had some 16-bit instructions through the use of register pairs. For example, on the 8080, the XCHG instruction exchanges the values of the HL pair and the DE pair. If these registers are 8 bits wide, and the internal bus is 8 bits wide, how did the processor exchange the values with one instruction?

+ +

Thanks

+",291510,,,,,43127.20694,How does x86 deal with register pairs?,,2,1,,,,CC BY-SA 3.0,, +364765,1,364771,,1/26/2018 23:47,,0,358,"

What I want to get clarified is: with a polymorphism factor of 100%, does the code become hard to maintain, and does a high polymorphism factor introduce a high level of coupling, even though inheritance is used to reduce coupling?

+ +

The polymorphism factor (PF) is a metric proposed by Abreu & Melo to measure how much derived types override methods from their base classes. It's calculated as the ratio of the number of overriding methods to the maximum number of possible method overrides.

+",225165,,225165,,43127.01875,43127.35694,Polymorphism factor and code maintainability,,1,0,,,,CC BY-SA 3.0,, +364769,1,,,1/27/2018 0:51,,1,21,"

Imagine a game that needs to score words. A word may need to be scored immediately, as part of a list of words, or even as lists of words from a list of players.

+ +

I've created a scoring module which has the following public methods:

+ +
    +
  • score_single_word(word) -> int
  • +
  • score_list_of_words(list) -> int
  • +
  • score_group_of_players(players) -> [int]
  • +
+ +

These methods obviously cascade: score_group_of_players does some calculation and calls score_list_of_words, which eventually calls score_single_word.

+ +

Unfortunately, every method requires cleanup, such that the words are lowercase, stripped of whitespace and unique; otherwise its task would fail. I didn't see another way than doing that cleanup multiple times, since no method can be sure its assumptions have already been fulfilled by one of the higher methods.

+ +

What can I do to remove those unnecessary cleanups while making sure that the assumptions are still correct?
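One common remedy is to normalize exactly once at each public entry point and delegate to private ""raw"" variants whose contract is that input is already clean. A sketch of the three methods in that shape; the scoring rule (word length) and the function names are placeholders, not the real game logic:

```javascript
// Sketch: normalize once at the public boundary, then delegate to private
// "raw" variants that assume clean, unique, lowercase words.
function normalize(words) {
  return [...new Set(words.map(w => w.trim().toLowerCase()))];
}

function scoreSingleWordRaw(word) { return word.length; } // toy scoring rule
function scoreListRaw(words) {
  return words.reduce((sum, w) => sum + scoreSingleWordRaw(w), 0);
}

// Public API: cleanup happens exactly once per call chain.
function scoreSingleWord(word) { return scoreListRaw(normalize([word])); }
function scoreListOfWords(words) { return scoreListRaw(normalize(words)); }
function scoreGroupOfPlayers(players) {
  return players.map(wordList => scoreListRaw(normalize(wordList)));
}
```

The cascade then runs through the raw functions only, so the normalization invariant is established at the boundary and simply assumed inside.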

+",6339,,,,,43127.08194,Dealing with assumptions in modules that result in duplicate work,,2,0,,,,CC BY-SA 3.0,, +364773,1,364782,,1/27/2018 2:19,,3,412,"

I am trying to write some sort of ""generic"" data access library to access the data of my company's ERP software, which is our main/core application where all our related data is managed.

+ +

I am a student worker, and my boss & coworkers always come to me and ask me to write small apps for all kinds of tasks, from ongoing analysis of specific customers to automatic device measurement.

+ +

For all those apps, which as said can have very different domains, I need to pull the data from the same DB, our ERP DB.

+ +

All of the data will be read only; any change is persisted back to the ERP DB by the ERP application itself.

+ +

So I thought of making a library where I just create a model that mirrors the DB and exposes some repositories returning the model classes, or an interface exposing the same properties.

+ +

So my questions are:

+ +
    +
  • Is this a good practice or should I create a data access layer for every app?

  • +
  • Are there maybe some patterns for this use case? I searched a lot but didn't find anything about read-only scenarios, apart from using AsNoTracking() with EF.

  • +
  • This way the repos will return more information than required for the apps 99% of the time, but it will save me writing duplicate code.

    + +
      +
    • So instead of making optimized queries tailored to the specific app and returning a custom class with only the required data, I am returning a lot of info, doing some logic and mapping it to whatever I need. I understand that for huge datasets, heavily accessed apps or very tight responsiveness requirements this won't be a good idea, but what about normal cases?
    • +
  • +
+",267308,,267308,,43127.10069,43510.27431,Read Only Generic data access layer Best practice,<3rd-party>,1,4,1,,,CC BY-SA 3.0,, +364775,1,364874,,1/27/2018 4:53,,0,97,"

I have previously implemented client-side data encryption using Azure Key Vault with the following approach:

+ +

Encryption Approach:

+ +
    +
  1. Every record that needs to be encrypted gets a Content Encryption Key (CEK)
  2. +
  3. The content is encrypted symmetrically using this CEK.
  4. +
  5. The CEK is then encrypted asymmetrically using a Master Key which is stored in Azure Key Vault.
  6. +
  7. The encrypted CEK, the Master Key identifier and the encrypted data are persisted to the Mongo DB.
  8. +
+ +

Decryption Approach: +1. When a user needs decrypted information, the encrypted CEK is first decrypted using the Master Key. +2. The decrypted CEK is then used to symmetrically decrypt the encrypted data.

+ +

This approach works great if you want to decrypt records one record at a time.

+ +

This approach has a performance overhead when you consider decrypting 100s of records at the same time.

+ +

For each record, you need to first decrypt the CEK. This is an expensive call over the network using Azure Key Vault Rest API. Imagine someone wanting to export 1000s of decrypted records (as is the case in my scenario now). This would be not just a time consuming operation, but also an expensive one.

+ +

Does anyone have any suggestions on how to accomplish this? +I have dabbled with the following approaches:

+ +
    +
  1. Establish a daily (or monthly) key and use it as the CEK for all of that day's/month's orders, so that the number of CEKs to decrypt is reduced when exporting/pulling reports to view en masse. Use in-memory caching strategies to cache decrypted CEKs and only make a network call when a decrypted CEK is not found in the cache.

  2. +
  3. Establish a one time CEK for the Client and use it for all records.

  4. +
  5. Don't use a CEK at all and simply encrypt all data using a master key (suicide in my opinion as asymmetric encryption of large data is not recommended at all) and will increase overheads.

  6. +
+ +

Wondering if people have any suggestions!
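The caching half of approach 1 can be sketched independently of Key Vault. Here unwrapWithKeyVault is a stand-in for the real (network) unwrap call, and the call counter just demonstrates the amortization; none of the names come from the Azure SDK:

```javascript
// Sketch of the caching part of approach 1: memoize unwrapped CEKs so the
// expensive Key Vault round trip happens once per distinct encrypted CEK.
// unwrapWithKeyVault is a stand-in for the real network call.
let vaultCalls = 0;
function unwrapWithKeyVault(encryptedCek) {
  vaultCalls += 1;
  return 'plain:' + encryptedCek; // pretend this is the decrypted CEK
}

const cekCache = new Map();
function getDecryptedCek(encryptedCek) {
  if (!cekCache.has(encryptedCek)) {
    cekCache.set(encryptedCek, unwrapWithKeyVault(encryptedCek));
  }
  return cekCache.get(encryptedCek);
}

// 1000 records sharing one daily CEK -> a single vault call.
const records = Array.from({ length: 1000 }, () => ({ cek: 'day-2018-01-27' }));
records.forEach(r => getDecryptedCek(r.cek));
```

Combined with a daily key, a 1000-record export costs one unwrap instead of 1000; the security trade-off is that a compromised cached CEK now exposes a whole day's records rather than one.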

+",282882,,,,,43129.60764,Thoughts on Data Encryption Strategy for Large Number of Records MongoDB and Azure Key Vault,,1,0,,,,CC BY-SA 3.0,, +364784,1,,,1/27/2018 9:40,,3,162,"

Amongst other things in my life, I'm writing a framework in PHP to manage a slew of common problems I come up against in every project I tackle. The framework is currently very data-centric, with the largest component (named Data) being part-ORM and part-DDD in its approach.

+ +

The ORM-ness of this component heavily abstracts away queries to the point that the developer doesn't need to know any SQL or even know SQL terminology. The main drive behind this comes from my experience as a DBA/BI dev, where I groaned over many of the poorly constructed queries by SQL amateurs that convoluted things rather than took the simplest, most direct approach. As an example, the following code would run SELECT ... FOR UPDATE against the user table where username is 'jennifer':

+ +
$userFilter = $userRepository->createBlank();
+$userFilter->username()->identifyValue('jennifer');
+$userFilter->lockForUpdate();
+$user = $userRepository->retrieve($userFilter);
+
+ +

I test my framework against several applications that I'm involved with, meaning I can mock live-fire scenarios to an extent. In recent tests, I came up against the following error:

+ +
+

SQLSTATE[0A000]: Feature not supported: 7 ERROR: FOR UPDATE is not + allowed with window functions Failed query is: SELECT * FROM ""myView"" + WHERE ""ID""=:fi_ID__ix0 FOR UPDATE;

+
+ +

It turns out that the view myView contains a column that is generated by an OLAP/window function (i.e. ROW_NUMBER() OVER(PARTITION BY...)), which means that the view cannot be selected for update. The error was generated by PostgreSQL, is entirely legitimate, and is naturally in response to this invalid operation.

+ +

What I'm wondering is, what are the common/accepted ways that frameworks handle third-party problems like this? Throw the error message verbatim and let the developer figure it out, or try to value-add to the message by suggesting common problems or avenues of investigation?

+",244272,,,,,43128.46944,What are common/best practices for frameworks handling standard third-party exceptions?,,2,0,,,,CC BY-SA 3.0,, +364786,1,364788,,1/27/2018 11:50,,1,319,"

I am reading specific parts of Martin Fowler's refactoring book again (the areas I was not clear about the first time round). I am looking at the Extract Method chapter at the moment. I can understand why Extract Method is beneficial; for example:

+ +

1) Inheritance and overriding

+ +

2) Clarity for the user of the class

+ +

Say I have some code like the below (this is a DDD domain service). Is this a candidate for Extract Method?:

+ +
    public IEnumerable<KeyValuePair<int, int>> CalculateDenominationsFor(int cost) 
+        {
+            var target = cost;
+            foreach (var denomination in currency.AvailableDenominations.OrderByDescending(a => a))
+            {
+               var numberRequired = target / denomination;
+               if (numberRequired > 0)
+               {
+                   yield return new KeyValuePair<int, int>(denomination, numberRequired);
+               }
+               target = target - (numberRequired * denomination); 
+            }
+        }   
+
+ +

I guess I could extract the following lines of code to methods:

+ +
target = target - (numberRequired * denomination); 
+
+ +

and:

+ +
yield return new KeyValuePair<int, int>(denomination, numberRequired);
+
+ +

The concerns I have about my two ideas above are:

+ +

1) They would be private methods so no benefit to the caller.

+ +

2) The class is currently sealed so no Inheritance benefits.

+ +

Is there any guidance available stating when to use Extract Method? Am I overthinking this? I am trying to apply the principle of least astonishment and find myself overthinking a lot recently.

+",65549,,65549,,43127.52708,43127.52708,Using the Extract Method pattern to refactor a simple method with a for loop,,1,7,,,,CC BY-SA 3.0,, +364796,1,364798,,1/27/2018 15:56,,-2,73,"

I am writing a paper about the use of a message broker for inter-process communication where I state that all computer communications can be broken down into two categories:

+ +
    +
  • Function calls
  • +
  • Events
  • +
+ +

My reasoning for this is that code either needs other code to run to achieve its goal (a function call) or merely triggers other code that it does not depend on (an event).

+ +

I am fairly confident in this claim, though I was not able to find a source that either supports or denies this statement.

+ +
+ +

What are your thoughts about this and can anybody cite a source that either supports or denies my statement?

+ +


+Thank you

+",294501,,,,,43127.70208,Proof that computer communication only exists of function calls and events,,1,12,,,,CC BY-SA 3.0,, +364799,1,,,1/27/2018 17:20,,0,221,"

I have a Java/Spring-based web application with a front end in JSP/HTML/JS/jQuery.

+ +

We already have spring based i18n support.

+ +
    +
  1. In JSP, labels come from property files.
  2. +
  3. The HTML/browser downloads the locale-specific JS file, which contains validation messages.
  4. +
+ +

The problem for both points above: half of the labels/messages are hardcoded in JSP/JS files instead of being picked up from the resource bundle. Now we need to replace those hardcoded labels/messages with resource bundle/JS file lookups.

+ +

I can think of only a manual solution, where a developer goes through each JSP/JS file and checks whether any hardcoded message exists; if yes, pick it from the resource bundle. Is there any better strategy to automate this task, where I can get a list of all hardcoded messages in the JSP and JS files with some utility/third-party plugin, +etc.?
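A crude helper can at least narrow the manual search: scan source text for string literals that do not appear among the bundle's values. This is a rough sketch of that idea (a real tool would use a proper parser; the heuristic of skipping strings without three consecutive letters is an arbitrary assumption):

```javascript
// Rough sketch: flag string literals in JS/JSP source that are not among the
// known resource-bundle values. A regex pass like this only narrows the
// manual search; it is not a substitute for a real parser.
function findHardcodedLiterals(source, bundleValues) {
  const literals = [...source.matchAll(/(["'])((?:\\.|(?!\1).)*?)\1/g)]
    .map(m => m[2])
    .filter(s => /[A-Za-z]{3,}/.test(s)); // skip short/technical strings
  return literals.filter(s => !bundleValues.includes(s));
}
```

Run over each file, this produces a candidate list a developer can review, which is usually much faster than reading every file line by line.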

+",260829,,,,,43128.77014,Support i18n in existing application?,,1,2,,,,CC BY-SA 3.0,, +364801,1,,,1/27/2018 18:03,,0,58,"

Scenario:

+ +

Images are uploaded to the server once in a while. Users send an API request to download all of the images that were uploaded (the images reside on the server itself). Instead of the user downloading every single file, the server collects all the images, archives them and then sends the zip file to the user.

+ +

Problem:

+ +

The problem with this approach is that, even for a single user, the CPU usage is very high during the archive process, yet the server needs to support more than 100 concurrent download requests. Also, images could be uploaded at any moment, so pre-archived files are also not a solution.

+ +

Possible Solution

+ +

One possible solution for my use case: +Since the images are the same for all users, the server would archive all the images on the first request; on following requests it would check whether any new image has been added (maybe store the last JSON manifest in memory and diff it with the current one). If no new images were added, send the previously archived zip file; otherwise archive again. +But this would still have the same problem if different users have different files to download.
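The manifest-diff idea above can be sketched in a few lines. Here buildArchive stands in for the real (CPU-heavy) zip step, and the manifest is just the sorted file list serialized to JSON:

```javascript
// Sketch of the manifest-diff idea: rebuild the archive only when the image
// list changes. buildArchive stands in for the real, CPU-heavy zip step.
let archiveBuilds = 0;
function buildArchive(images) {
  archiveBuilds += 1;
  return Buffer.from(images.join(',')); // placeholder for real zip bytes
}

let lastManifest = null;
let cachedZip = null;
function getArchive(images) {
  const manifest = JSON.stringify([...images].sort());
  if (manifest !== lastManifest) {
    cachedZip = buildArchive(images);
    lastManifest = manifest;
  }
  return cachedZip;
}
```

For per-user file sets, the same scheme generalizes to a cache keyed by the manifest string instead of a single slot, so identical selections from different users still share one archive build.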

+ +

What could be done in this scenario? Thoughts? Solutions?

+ +

Thank you.

+",215369,,,,,43127.85833,Performant way for archiving image files in NodeJS on each user requests,,1,2,,,,CC BY-SA 3.0,, +364802,1,,,1/27/2018 18:59,,0,46,"

Underlying problem

+ +

In the case of a peer-to-peer discovery protocol, suppose we need to advertise peers in such a manner that the network graph grows while avoiding both too-dense areas (e.g. cliques) and too-weakly-connected areas (e.g. bridge nodes). A node can also fail, disconnect and reconnect, and it updates the list of peers it preserves at certain intervals.

+ +

An approach to solving this would be to advertise peers with a probability inversely proportional to the number of times they have already been advertised.

+ +

I have derived the requirements below for a data structure to serve this purpose:

+ +
    +
  1. Keep nodes sorted by a score related to the number of times already advertised, preferably with self-adjusting sorting on insertion of new nodes
  2. +
  3. Fast selection of first k elements based on score
  4. +
  5. Easy update of score and position of nodes in each advertise action
  6. +
  7. Easy check of duplicate nodes upon insertion, e.g caused by a node failure and reconnect.
  8. +
  9. Synchronization for concurrent access, modification
  10. +
+ +

The language used for implementation is java.

+ +

Questions

+ +
    +
  1. Am I correctly addressing the problem by the use of such a score and a data structure with the aforementioned requirements, or is there a better approach?
  2. +
  3. If the answer to 1 is yes, what would be a good choice of data structure? B-trees seem to fulfil most if not all of the requirements, but I am not quite convinced, so I need guidance and justified solutions on the matter.
  4. +
+ +

Thanks in advance.

+",292255,,,,,43127.79097,Correct way-data structure for advertising peers in a p2p network,,0,2,,,,CC BY-SA 3.0,, +364804,1,,,1/27/2018 19:08,,0,739,"

From my book ""Data Structures & Algorithms in Java: Sixth Edition"" the definition of Big Oh is the following:

+ +
+

Let f(n) and g(n) be functions mapping positive integers to + positive real numbers. We say that f(n) is O(g(n)) if there is a + real constant c > 0 and an integer constant n0 >= 1 such that f(n) <= c * g(n) for n >= n0

+
+ +

They then show that the function 8n + 5 is O(n) and use the following justification:

+ +
+

By the big-Oh definition, we need to find a real constant c > 0 and + integer constant n0 >= 1 such that 8n+5 <= c * n for every integer + n >= n0. It is easy to see that a possible choice is c = 9 and n0 = 5. Indeed, this is one of infinitely many choices available because there is a trade-off between c and n0. For example, we could rely + on constant c = 13 and n0 = 1

+
+ +

In my bachelor's studies, I learned that big O is just the fastest-growing term of a function f(n), so this description is new to me. I can answer the questions by finding the biggest term, but cannot justify the answer. It would help me if I knew:

+ +
    +
  • What is meant by ""a real constant c > 0"" and ""an integer constant n0 >= 1""? What do these mean?

  • +
  • What trade-off is being talked about when they say there is a tradeoff between c and n0?

  • +
  • Why does the choice of c and n0 matter? It feels strange picking arbitrary values like c = 9999999999 and n0 = 1 and then concluding that indeed f(n) is O(g(n)) just because 8*1 + 5 <= 9999999999 * 1

  • +
+ +

I can't imagine a case where a function f(n) would be bigger than c*n if you're free to choose c.
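The book's two witness pairs can be checked numerically: with c = 9 and n0 = 5, 8n + 5 <= 9n is equivalent to 5 <= n, so it holds for every n >= 5 (and c = 13 works already from n0 = 1). Note that a large c does not always save you: for f(n) = n^2, any fixed c is eventually exceeded, since n^2 > c*n as soon as n > c, which is why n^2 is not O(n). A small sketch:

```javascript
// Numeric check of the book's claim: with c = 9 and n0 = 5,
// f(n) = 8n + 5 <= 9n holds for every n >= 5 (since 8n + 5 <= 9n iff 5 <= n).
function f(n) { return 8 * n + 5; }

function witnessHolds(c, n0, upTo) {
  for (let n = n0; n <= upTo; n++) {
    if (f(n) > c * n) return false;
  }
  return true;
}
```

A finite check like this is of course not a proof, but it makes the role of the witnesses concrete: (c, n0) = (9, 5) works, while (9, 4) fails at n = 4 because 8*4 + 5 = 37 > 36 = 9*4.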

+",210153,,210153,,43127.88889,43129.59792,What is meant with finding real and integer constants in Big Oh notation?,,2,1,1,,,CC BY-SA 3.0,, +364816,1,,,1/28/2018 7:04,,7,586,"

I am reading this article about CQRS, and when it comes to deciding where to use it, the following didn't really sink in for me:

+ +
+

Collaborative domains where multiple operations are performed in + parallel on the same data. CQRS allows you to define commands with + enough granularity to minimize merge conflicts at the domain level + (any conflicts that do arise can be merged by the command), even when + updating what appears to be the same type of data.

+
+ +

Can someone elaborate on it, possibly with an example? I think what it means is that you can issue more granular write commands to update smaller parts of a table, row, etc., minimizing overlap and therefore locking smaller areas, so better performance. But how is that not possible with everyday CRUD operations?
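A hypothetical illustration of the quoted point: with granular commands, each write carries only the intent it is about, so two concurrent edits to the same record can merge, whereas a coarse CRUD ""save the whole record"" would have the second save overwrite the first user's change with its stale copy. The command names below are invented:

```javascript
// Sketch: granular commands touch only the fields they mean to change, so two
// concurrent edits to the same customer merge instead of overwriting.
function apply(customer, command) {
  switch (command.type) {
    case 'ChangeEmail':   return { ...customer, email: command.email };
    case 'ChangeAddress': return { ...customer, address: command.address };
    default: throw new Error('unknown command');
  }
}

let customer = { id: 1, email: 'old@x.com', address: 'Old St 1' };
// Two users submit commands "at the same time"; each carries only its intent.
customer = apply(customer, { type: 'ChangeEmail', email: 'new@x.com' });
customer = apply(customer, { type: 'ChangeAddress', address: 'New St 2' });
// Contrast: a coarse CRUD "save" would have sent the whole stale record,
// and the second save would silently revert the first user's email change.
```

CRUD can approximate this with per-column UPDATE statements, but the typical CRUD UI round-trips the entire record, which is exactly where the last-writer-wins conflicts come from.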

+",3613,,,,,43128.65764,CQRS in Collaborative Domains,,2,2,,,,CC BY-SA 3.0,, +364819,1,364821,,1/28/2018 11:45,,3,1697,"

I'm writing a NuGet package mainly for data access and utility functions that is going to be consumed by multiple web applications, but there will be one main web application that is going to drive the development of the NuGet package. As such I'm going to want to be developing that web application alongside the NuGet package on the same machine so I can easily add the package functionality needed and then test it out in the web application immediately.

+ +

However, there is still some annoying overhead to this because the NuGet package has to be published and the latest version pulled down each time I add some functionality and want to test it out. If it doesn't work properly I'll need to fix it and go through this procedure again.

+ +

Is there a best practice for being able to easily develop a NuGet package alongside a consumer of the package? I could have the package solution output its DLLs to a common directory and add a direct reference to the DLLs in the consumer but that would be a temporary reference that shouldn't be checked in, so it seems like a bad way to do things. Would it be appropriate to have the NuGet package project in the same solution as this main web application which is going to consume it, but use a project reference rather than a NuGet package reference?

+",125671,,125671,,43128.49514,43128.76875,Best practices for developing NuGet package alongside consumer?,,2,0,,,,CC BY-SA 3.0,, +364822,1,,,1/28/2018 12:16,,3,103,"

Recently I had to refactor some legacy code. As in most cases, I had to split big parts of the code into smaller, cleaner, more readable functions. I ended up with many functions that had multiple, awkward parameters. Let me show an example:

+ +
public void SendOrder(Order order, XmlOrder xmlOrder, Insider insider)
+{
+    string customerCode = CustomerServices
+                 .Get(insider.CustomerId)
+                 .Code;
+
+    OutputData outputData = CreateOutputData(order, xmlOrder, customerCode);
+    CreateReservations(order, customerCode, outputData);
+    PlaceOrder(order, xmlOrder, outputData, customerCode);
+}
+...
+private void CreateReservations(Order order, string customerCode, OutputData outputData)
+{
+    ...
+    try
+    {
+        ReservationServices.AddReservation(reservation);
+    }
+    catch (BusinessException ex)
+    {
+        Logger.Log(ex);
+        outputData.Status = Statuses.BusinessError;
+        throw;
+    }
+}
+
+ +

(That is just demonstration code, not real one)

+ +

The problem is I had to pass, for example, outputData to other functions just to change its status if an exception happens, or pass customerCode to multiple functions. The class is responsible for sending multiple, unrelated messages to a WebService, so when I was creating it, I didn't plan to keep variables connected to a given order as class state. Is it good practice to remove such variables from the function signatures and make them class members? Are there any guidelines for such situations? What are your practices and solutions for such problems?
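A third option between ""widen every signature"" and ""promote to class state"" is a parameter object scoped to one SendOrder call. Sketched here in JavaScript with hypothetical names, since the idea is language-neutral:

```javascript
// Sketch: bundle the values that travel together into one context object,
// scoped to a single SendOrder call, instead of widening every signature
// or turning them into shared class state. All names are hypothetical.
class OrderContext {
  constructor(order, customerCode, outputData) {
    this.order = order;
    this.customerCode = customerCode;
    this.outputData = outputData;
  }
  markBusinessError() { this.outputData.status = 'BusinessError'; }
}

function createReservations(ctx) {
  try {
    // ... call the reservation service here ...
    throw new Error('reservation rejected'); // simulate a business failure
  } catch (e) {
    ctx.markBusinessError(); // one place knows how to flag the failure
    // a real implementation would rethrow; swallowed so the sketch completes
  }
}

const ctx = new OrderContext({ id: 7 }, 'CUST-1', { status: 'Pending' });
createReservations(ctx);
```

Because the context lives only for the duration of one call chain, it avoids the concern about unrelated messages sharing class state, while each helper takes a single parameter.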

+",281479,,,,,43128.79792,Should one create shareable private class member or keep variable in method scope to pass it as a second method argument?,,2,2,2,,,CC BY-SA 3.0,, +364823,1,,,1/28/2018 12:28,,7,680,"

I'm writing a library in C++ which needs to be as fast as reasonably possible. However, I'd also like to be able to provide logging in case a user (or me) needs to debug possible problems.

+ +

This library needs to be built, but also contains some header-only templated classes.

+ +

As I understand it, logging in libraries is usually done by declaring, but not defining, a specific logging function that is used to log messages inside the library. This function is then defined inside the user's program, which logs with whatever mechanism the user wants (or doesn't log at all).

+ +

As some logs are in very performance-sensitive places, I'm wondering whether I should be concerned that this approach would impact the library performances even when no logging is desired by the user at all. The library is usually linked statically.

+ +

An alternative idea I had would be to define logging as a macro, and define it both during the library build step and during the final program build step as either an empty macro or the desired logging function. This would ensure that in case of no logging, nothing would remain in the final program. However, it would be more cumbersome for the user, and possibly more complicated, as one would need to modify a library header file before building the library in order to correctly define the macro.

+ +

What are the best practices for this type of problem?

+",164304,,,,,43129.66389,Handle Optional Logging in High-Performance Library,,1,2,,,,CC BY-SA 3.0,, +364832,1,364846,,1/28/2018 18:15,,1,393,"

I have several questions about UML diagrams as I am not finding them very clear (they are new to me).

+ +

1) When making a diagram with generalizations/specialization: if I do not have all of the possible child classes on my diagram and only have the main children relevant to me, do I still use a blank triangle for the connection?

+ +

2) If I have such a diagram and an entity (a person in this case) can belong to multiple child categories, how is this shown? Should the fact that an entity (person) can belong to multiple child categories be shown on a UML domain class model?

+ +

3) If two of my child classes have a relationship between them (here the two children are Patient and Physician), how is the relationship shown in UML? Is it just a line with a verb and multiplicity notation (e.g. 0..1)?

+ +

Any advice would be greatly appreciated!

+",294568,,,,,43128.94514,UML - Design Class Model diagrams,,1,4,1,,,CC BY-SA 3.0,, +364838,1,364840,,1/28/2018 18:52,,0,547,"

Say I have a Domain Object, which represents a Customer with a list of Offers.

+ +

Say I want to add a collection of Offers. I believe I have two options:

+ +

Option 1

+ +

Have the Domain Object calculate the offers and then add them:

+ +
 public IEnumerable<IProduct> GetEligibleOffers(IOfferCalculator offerCalculator, IList<IProduct> products)
+    {
+        return offerCalculator.CalculateEligibility(Gender, Expenditure, products);
+    }
+
+    //Refactored this method.  
+    public void AssignOffers(IList<IProduct> eligibleProducts)
+    {
+        foreach (var product in eligibleProducts)
+        {
+            _assignedProducts.Add(product);
+        }
+    }
+
+ +

Option 2

+ +

Have the Application Service calculate the offers and pass them to the Domain Object one by one. Therefore the domain object will look like this:

+ +
public IEnumerable<IProduct> GetEligibleOffers(IOfferCalculator offerCalculator, IList<IProduct> products)
+        {
+            return offerCalculator.CalculateEligibility(Gender, Expenditure, products);
+        }
+
+        //Refactored this method to only accept one product
+        public void AddOffer(IProduct eligibleProduct)
+        {
+            _assignedProducts.Add(eligibleProduct);
+        }
+
+ +

In this case, the application service gets the eligible offers (using CalculateEligibility) and then passes them one by one to the domain object (Customer.AddOffer).

+ +

I am trying to decide, which approach to use (I realise both approaches will work). Option one seems like the most appropriate. However, every example I can find online seems to use option 2 e.g. this one and this one. Therefore I wonder if option two is frowned upon for some reason.

+",65549,,293672,,43521.62917,43521.62917,Should a Domain Object receive a list or elements of a list?,,2,0,,,,CC BY-SA 4.0,, +364842,1,,,1/28/2018 21:33,,1,557,"

I would separate the BLL from the DAL as a best practice, and have the BLL interact with the DAL via an interface.
+Example:

+ +
public interface IProductRepository
+{
+    void Add(Product myProduct);
+    Product Get(string name);
+    Product GetById(int id);
+}
+
+ +

where business object Product is:

+ +
public class Product
+{
+     public int Id { get; set; }
+     public string Name { get; set; }
+     public decimal Price { get; set; }
+}
+
+ +

The BLL class is:

+ +
public class ProductManager
+{
+     private readonly IProductRepository productRepository;
+
+     public ProductManager(IProductRepository productRepository)
+     {
+          this.productRepository = productRepository ?? throw new Exception(""message"");
+     }
+
+    public void AddProduct(Product myProduct)
+    {
+        try
+        {                  
+            // Validation code here, etc.
+
+            // Add product to database
+            productRepository.Add(myProduct);
+        }
+        catch(Exception e)
+        {
+            // Handle exception  
+        }
+    } 
+
+    public Product GetProduct(string name)
+    {
+        try
+        {
+            // Validation code here, etc.
+
+            // Get product from database
+            var product = productRepository.Get(name);
+
+            return product;
+        }
+        catch(Exception e)
+        {
+            // Handle exception  
+        }
+    }
+     // etc.
+}
+
+ +

where the DAL (I would use Entity Framework) is:

+ +
public class ProductRepository : IProductRepository
+{
+    public void Add(Product myProduct)
+    {
+        using(var dbContext = new MyDbContext())
+        {
+            var dbProduct = new PRODUCTS
+            {
+                NAME = myProduct.Name,
+                PRICE = myProduct.Price
+            }
+
+            dbContext.PRODUCTS.Add(dbProduct);
+
+            dbContext.SaveChanges();
+        }
+    }
+
+    // ecc ecc
+}
+
+ +

Now I have some questions:
+- Is this the correct implementation?
+- If I want to insert a product but first want to check whether a product with the same name is already in the db, do I call the Get method in the BLL and then call the Add method (the db context is opened and closed each time; is that too much overhead?), or can I put the logic in the DAL, like:

+ +
var dbProduct = dbContext.PRODUCTS.FirstOrDefault(p => p.NAME == name);
+
+if(dbProduct == null) .... // insert else throw exception 
+
+ +

In the latter case, however, if I change the DAL, the BLL logic would no longer work.
+- Is it right to use Entity Framework in this way, or do I lose all the benefits of LINQ?
+Sorry, but I'm very confused.

+ +

Thank you.

+",294577,,,,,43128.94236,Questions about business logic layer and data access layer in a project,<.net>,1,0,,,,CC BY-SA 3.0,, +364849,1,,,1/28/2018 23:05,,0,79,"

Given 2 objects / classes:

+ +
    +
  • A class for a cargo ship. It has a small set of attributes, say current location, size, number of crew and some other things. Most important, it has an attribute for the id which is unique and the class is hashable.
  • +
  • A class for a harbor. Again a small list of attributes like location and the unique identifier. This class is hashable as well.
  • +
+ +

Now you could ask interesting questions like:

+ +
    +
  1. For each cargo ship what is the nearest harbor together with its distance.
  2. +
  3. For each harbor give me a list of all cargo ships within a 100 km radius.
  4. +
  5. Give me for each cargo ship a list of all harbors visited last year (you probably need some extra info for this question, but this detail we omit for now as it is about the results).
  6. +
+ +

Next let's assume classes are immutable. All classes are in computer memory too.

+ +

Now my question is how to store the results of the given questions in memory. Should I, for each question, keep only the relations between the ids (which each cargo ship and harbor has), or can I keep the entire objects, which should work because they are hashable?

+ +

A small example for question 2:

+ +

When I only keep id's it could look like this:

+ +

{5235: [735235, 25245, 954646],

+ +

3232: [112, 34345, 65354, 45454]}

+ +

Or should it be like this:

+ +

{(an entire harbor class): [(an entire ship class), (an entire ship class) , (an entire ship class)],

+ +

(an entire harbor class): [(an entire ship class), (an entire ship class), (an entire ship class), (an entire ship class)]}

+ +

(While writing this I came up with another question: should the unique identifier be an attribute of the class? I mean, every question you ask has strictly nothing to do with the identifier.)

+",292674,,,,,43129.84236,Hashable classes: Should I only keep the id's for results or entire class,,2,4,,,,CC BY-SA 3.0,, +364855,1,,,1/29/2018 6:58,,0,264,"

I am trying to use a dispatch table to select a data type to cast something as and cannot figure out the syntax for this.

+ +
g_cast_table[] =
+{
+    {'h', short int}
+          or
+    {'h', ""short int""}
+}
+
+outcome: (short int)q
+
+ +

Right now my table is set to have a char name and char* type. This is causing a large number of errors as I try to compile. Is there an actual way to do this or should I rethink my process?

+",294601,,,,,43129.32222,Type selection from a dispatcher table,,1,0,,,,CC BY-SA 3.0,, +364857,1,,,1/29/2018 8:32,,7,9221,"

I'm trying to read about DDD, and I'm struggling a bit trying to identify aggregate roots. I wrote a really simple app to divide players into different teams inside a game.

+ +

So my entities are something like this:

+ +

Game Entity:

+ +
public class Game : DomainEntityBase, IDomainEntity
+{
+    private List<Team> teams = new List<Team>();
+
+    private List<Player> players = new List<Player>();
+
+    private int teamSize;
+
+    public Game(
+        string gameName,
+        int teamSize,
+        IEnumerable<Player> players) : base(Guid.NewGuid())
+    {
+        GameName = gameName;
+
+        this.teamSize = teamSize;
+
+        this.players = players.ToList();
+    }
+
+    public string GameName { get; private set; }
+
+    public ReadOnlyCollection<Team> Teams => teams.AsReadOnly();
+
+    public void SplitPlayersToTeams()
+    {
+        if (players.Count() % 2 != 0)
+        {
+            throw new NotSupportedException(""Only equally dividable teams are supported"");
+        }
+
+        var teamCount = players.Count / teamSize;
+
+        var playersPerTeam = players.Count / teamCount;
+
+        SetPlayersToTeam(teamCount, playersPerTeam);
+    }
+
+    private void SetPlayersToTeam(int teamCount, int playersPerTeam)
+    {
+        var rnd = new Random();
+
+        for (var i = 0; i < teamCount; i++)
+        {
+            var team = new Team(i.ToString());
+
+            while (team.Players.Count != playersPerTeam)
+            {
+                var randomIndex = rnd.Next(players.Count);
+
+                var player = players[randomIndex];
+
+                if (!team.Players.Contains(player))
+                {
+                    player.SetTeam(team);
+                    team.AddPlayer(player);
+                }
+            }
+
+            teams.Add(team);
+        }
+    }
+}
+
+ +

Team Entity:

+ +
public class Team : DomainEntityBase, IDomainEntity
+{
+    private List<Player> players = new List<Player>();
+
+    public Team(
+        string teamIdentifier) : base(Guid.NewGuid())
+    {
+        TeamIdentifier = teamIdentifier;
+    }
+
+    public string TeamIdentifier { get; }
+
+    public ReadOnlyCollection<Player> Players => players.AsReadOnly();
+
+    public void AddPlayer(Player player)
+    {
+        players.Add(player);
+    }
+}
+
+ +

Player entity:

+ +
public class Player : DomainEntityBase, IDomainEntity
+{
+    public Player(
+        string nickName) : base(Guid.NewGuid())
+    {
+        Nickname = nickName;
+    }
+
+    public string Nickname { get; private set; }
+
+    public Team Team { get; private set; }
+
+    public void SetTeam(Team team)
+    {
+        Team = team;
+    }
+}
+
+ +

Now at first I was thinking that the game would be an aggregate root. It would make sense in a way. But then I started thinking: what if you want to persist players separately, so that you don't have to add new players for every game? What if you want to persist teams separately, if you have teams that can be re-used later? The game itself would be an aggregate root, because I would persist games, for example to load a history of games from persistence.

+ +

So the question is: is every object I listed above an aggregate root, each having its own repository, since every aggregate root should have its own repository?

+ +

Thanks in advance.

+",253449,,,,,43215.51944,DDD: Identifying aggregate root in a simple example application domain,,4,4,1,,,CC BY-SA 3.0,, +364862,1,,,1/29/2018 10:39,,11,6086,"

I have a Unit Test, which looks like this:

+ +
[Test]
+public void Should_create_person()
+{
+     Assert.DoesNotThrow(() => new Person(Guid.NewGuid(), new DateTime(1972, 01, 01)));
+}
+
+ +

I am asserting that a Person object is created here i.e. that validation does not fail. For example, if the Guid is null or the date of birth is earlier than 01/01/1900, then the validation will fail and an exception will be thrown (meaning the test fails).

+ +

The constructor looks like this:

+ +
public Person(Id id, DateTime dateOfBirth) :
+        base(id)
+    {
+        if (dateOfBirth == null)
+            throw new ArgumentNullException(""Date of Birth"");
+        else if (dateOfBirth < new DateTime(1900, 01, 01))
+            throw new ArgumentException(""Date of Birth"");
+        DateOfBirth = dateOfBirth;
+    }
+
+ +

Is this a good idea for a test?

+ +

Note: I am following a Classicist approach to Unit Testing the Domain Model if that holds any bearing.

+",65549,,8669,,43129.70417,43130.61528,Unit Test to test the creation of a Domain Object,,2,18,1,,,CC BY-SA 3.0,, +364868,1,364926,,1/29/2018 12:33,,7,1426,"

I have an external dependency that provides a client library that is subject to change frequently. I'm writing a wrapper class on top of the client library so that all changes with respect to that library are contained there.

+ +

This library provides some enums that are needed to send requests to the client. I don't want to introduce a dependency on these enums in my other code. Is there a way to do this without writing wrapper enums as well?

+",197234,,,,,43130.30139,Wrapping enums from a library,,2,4,2,,,CC BY-SA 3.0,, +364870,1,364871,,1/29/2018 13:03,,1,359,"

I was reading this article here: http://enterprisecraftsmanship.com/2014/11/08/domain-object-base-class/. The article talks about creating a base Entity class, which handles four of the nine ways to do comparison in C# i.e. it handles:

+ +
==
+!=
+object.Equals
+IEquatable<T>.Equals<T>
+
+ +

Then I read this article here: https://ericlippert.com/2013/10/07/math-from-scratch-part-six-comparisons/

+ +

The first author is talking specifically about Domain Driven Design. Therefore I am inclined to handle the four comparisons he handles. Also it does not sound natural for a Customer to be less than or equal to another Customer and it does not sound natural for a Product to be less than or equal to another Product (as it stands). The only data structures I have used so far is Lists and HashSets.

+ +

Should I handle four comparisons or nine?

+",65549,,,,,43129.67778,Should I handle all nine comparisons?,,1,8,,,,CC BY-SA 3.0,, +364877,1,381909,,1/29/2018 15:19,,0,483,"

Trying to practice LSP, the following is not really clear to me:

+ +
+

Liskov requirements (some)
 - There must be contravariance of the method arguments in the subtype.
 - There must be covariance of the return types in the subtype.

+
+ +

Also, the method in a subclass could be declared with a parameter type that is more generic than in the base class; is that right?
+But as far as I know, that does not work, as I tried in C#:

+ +
class A
+{
+    public virtual void Test(Cat a)
+    { }
+
+}
+
+class B : A
+{
+    public override void Test(Animal a)  //shouldn't this work to be Liskov compliant?
+    {
+
+    }
+}
+
+class Animal
+{ }
+
+class Cat : Animal
+{ }
+
+ +

As, to my knowledge, CLR does not support covariance except for delegates and generics, how can we implement truly LSP compliant code if this cannot be met?

+",294256,,63202,,43129.68542,43427.66944,Achieving Liskov-compliant contravariance in method arguments in C#,,2,12,,,,CC BY-SA 3.0,, +364878,1,364879,,1/29/2018 15:20,,1,874,"

I've written a function that tests two floating point numbers for approximate equality (see this Code Review question). I'd like to unit test my function, but I'm not positive of the best way to do this. Obviously I could pick some arbitrary numbers that should be equal within the threshold, but it seems a lot more useful to me to test that actual computations that should be equal but fail a naive equality test (due to rounding errors) are considered equal by my function.

+ +

Is that valid, or should I just pick my magic numbers and move along? Are there standard test cases/examples that people have historically used? I tried to find something, but all I found was a bunch of references explaining why I shouldn't use exact equality in floating point unit tests, which I already know.

+ +

As an example, I could write a test like this (using gtest):

+ +
template <typename FP>
+class FloatEquality : public ::testing::Test {
+  protected:
+    FP left, right, diff;
+    std::size_t ulps;
+
+    virtual void SetUp()
+    {
+        left = 2.0;
+        right = 2.1;
+        diff = .2;
+    }
+};
+
+TYPED_TEST_CASE_P(FloatEquality);
+
+TYPED_TEST_P(FloatEquality, MagicNumbers)
+{
+    EXPECT_TRUE(nearlyEqual(this->left, this->right, this->diff, this->ulps));
+}
+
+REGISTER_TYPED_TEST_CASE_P(FloatEquality, MagicNumbers);
using FloatingPointTypes = ::testing::Types<float, double>;
+INSTANTIATE_TYPED_TEST_CASE_P(FloatingPoint, FloatEquality, FloatingPointTypes);
+
+ +

These numbers are obviously not a great choice, but they exemplify the types of magic numbers I could choose here that would be able to check all of my boxes and give me good code coverage, but don't seem that meaningful.

+ +

I did end up finding one example of someone unit testing this, but that is the magic number approach. The numbers appear to be reasonably well chosen, but it still feels like we aren't quite testing the right thing.

+",143011,,,,,43135.72569,Unit test a generic floating point equality function,,4,0,,,,CC BY-SA 3.0,, +364882,1,364952,,1/29/2018 16:30,,3,198,"

We are in the process of modernizing an existing legacy application and as part of that we are replacing a proprietary off-the-shelf product that is deeply integrated with the application - with a new off-the-shelf product.

+ +

There are two approaches we are looking for co-existence.

+ +
    +
  1. Update the Original application (say App1) to work with both off-the-shelf vendor products (calling them VP1 and VP2). This would mean modifying the existing codebase and updating all integration points so that they work for both vendor products. We are planning on achieving this via an application level switch - so depending on a condition the process flow will use implementation for VP1 or VP2 to process requests. This would mean a single application which has both implementations (abstracted via interfaces) can be used to process requests.

  2. +
  3. Second option is to have two systems running in parallel during the co-existence period. The way this is being proposed is to create a copy of existing application (App1), remove all implementations for the existing vendor product VP1 and re-implement them with the new vendor product VP2. This new copy of the application will then be hosted as a separate instance (lets call it NewApp1) - and the users will have to switch between the Original (App1 and NewApp1) to perform business functions during co-existence. This is with a view to leave the existing application as-is and not breaking the current functionality. It is also to minimize effort involved in re-testing the entire application (App1) if it is modified.

  4. +
+ +

Which of the two approaches is more suitable in case of an application that essentially is replacing the underlying vendor product?

+ +

Edit (30/01)

+ +

Adding a little more context and rationale in support of Option #1 (at least for the use case that I am dealing with). This is over and beyond what has already been added in the comments below.

+ +
    +
  1. It is important to note that the application in question is a monolith - which has been in production for many years. It is fair to assume that multiple updates were made to the application as bug fixes, minor updates which are not documented anywhere but in the codebase.

  2. +
  3. A downside of creating a copy and removing the implementation related to the existing vendor product, and replacing it with the new vendor product, would have been that we may have lost those tacit business rules.

  4. +
  5. As has been pointed out in the responses below, going ahead with Option #2 would have resulted in a cost associated with business change activities due to the introduction of a manual process to select which application to use. This in turn would have resulted in re-training of multiple teams.

  6. +
  7. From an infrastructure point of view - there was a possibility of framework/version incompatibility - when deploying the copy (from Option#2) to our latest standard infrastructure stack. If going ahead with Option #1 it would have been a matter of re-deploying to the existing (albeit non-standard) stack as opposed to re-platforming the entire application to the standard stack (in case of option #2). An alternative would have been to spin up the non-standard stack for option #2 but that would have meant maintenance overhead.

  8. +
+",92987,,92987,,43130.71528,43130.71528,Legacy modernisation - Parallel systems vs Extending Original application?,,2,12,1,,,CC BY-SA 3.0,, +364883,1,,,1/29/2018 16:38,,5,734,"

I am currently building a piece of code that creates side-effects based on input parameters. +It has around ten input parameters and about six available side-effects. +Based on the input parameters, the chosen side-effects - one or multiple - differ.

+ +

I have started developing this code test first with a context/specification framework, because every input parameter is basically a context.

+ +

The resulting code so far is a deeply nested if-else structure.

+ +

I have implemented about 30% so far and the code - and the tests even more - are getting very complex, unwieldy and hard to understand. So much that I am doubting this is the correct approach.
+The main problem is actually the tests, because I keep repeating the conditions for some of the deeper-nested input parameters as well as for the resulting side-effects.

+ +

Are there design patterns for building and testing this kind of ""decision graph""?

+ +

Please note that the input parameters are not fixed values. A lot of the logic is relative, e.g. whether input parameter 1 is less than input parameter 2.

+ +

The current output from my tests can be seen here: https://cloud.fire-development.com/f/12bcba0439/?raw=1
+You can see that there is a lot of repetition, making it hard to reason about what it actually does.

+",18049,,18049,,43129.69722,43130.10556,Design Patterns to model complex decision graph,,1,13,1,,,CC BY-SA 3.0,, +364886,1,,,1/29/2018 17:02,,2,740,"

In system design using use case diagrams, are all use cases initiated/performed by the external user? E.g. add line item, print receipt.

+ +

Or can use cases be entirely internal/automatic: e.g. calculate running total, or even present running total

+ +

Regarding the second example, would a cashier system have a use case of presenting the running total? Is automatically presenting information on a screen a use case?

+",294568,,,,,43129.90833,Use Case Diagrams - Are all use cases conducted by an outside user?,,2,0,,,,CC BY-SA 3.0,, +364887,1,364903,,1/29/2018 17:06,,1,135,"

I want to enable versioning in my REST service via URLs. I need some suggestions/feedback on which is the better mechanism.

+ +
<myapp_context>/v1/<sub_context>
+
+ +

or

+ +
v1/<myapp_context>/<sub_context>
+
+ +

where v1 is the version. I am asking this because a lot of articles mention it as v1/<myapp_context>/<sub_context>, but in my view, if more than one service is deployed on a machine then it would not look good, as:

+ +
v1/app1/get
+v1/app2/get
+v1/app3/get
+
+ +

Is there any advantage of providing version in the beginning?

+",169125,,,,,43129.88889,Suggestion REST service versioning,,4,0,,,,CC BY-SA 3.0,, +364893,1,364992,,1/29/2018 19:04,,0,1940,"

I need to measure a web application's average bit rate consumption to see what bandwidth is recommended for the end-user to connect and use the web application without performance issues from the server.

+ +

My thinking is to use an end-user machine and run a stress test against the web application, loading it with the heavy web requests that a normal user might make in a day. Then I can look at the CPU/RAM utilization (considering the network factor as well) on the server side at the time of the requests and decide whether the response from the server will be served with acceptable performance at the end-user.

+ +

I am still not sure how I can measure this, because many factors come into play. For example, I know the server will be serving many users at any time, so it will not be a fair calculation across all the connected users to arrive at a fixed acceptable performance figure.

+ +

Any ideas?

+",293849,,,,,43131.8875,Calculate Web-application Acceptable use Bandwidth,,3,7,,,,CC BY-SA 3.0,, +364895,1,,,1/29/2018 19:55,,0,447,"

I was talking to a Business Analyst about our Domain Model a few weeks ago. I used a class diagram to facilitate communication. She understands UML at a high level and this worked out quite well. She only asked two questions: what is an Entity and what is a Value Object? I explained.

+ +

I have two base classes i.e. Entity (http://enterprisecraftsmanship.com/2014/11/08/domain-object-base-class/) and Value Object (https://lostechies.com/jimmybogard/2007/06/25/generic-value-object-equality/). This seems to be working very well today.

+ +

However, a couple of people criticized this approach earlier on my other question here: Should I handle all nine comparisons?. They argued that these bases classes:

+ +

1) They make the Domain Model anaemic, as another class is responsible for comparisons.
+2) Entity and Value Object are not part of the Ubiquitous Language.

+ +

Point two has some bearing for me because of the conversation I had with the Business Analyst. Therefore I have looked at examples of DDD apps on GitHub with this in mind and I can't find any real life examples that use this approach. However, some tutorial type apps use it like this one: https://github.com/vkhorikov/DddInAction/tree/master/DddInPractice.Logic/Common

+ +

Therefore I have to ask if this is a valid approach for a real life application or whether it is just a thought exercise?

+ +

It does not really matter whether or not I use these base types at the moment. However, I don't want to introduce problems now that only become apparent when the application scales more in future.

+",65549,,,,,43130.60764,Entity and Value Object are not part of the Ubiquitous language. Should this stop me from using them?,,2,6,,,,CC BY-SA 3.0,, +364902,1,,,1/29/2018 21:09,,1,155,"

As we know, some processes will complete.

+ +
void func (var a) { Console.WriteLine(a); }
+
+ +

And some will not,

+ +
void func (var a) { while (true) { Console.WriteLine(a); } }
+
+ +

What do we call the function that completes?

+ +

What do we call the function that never completes?

+",163825,,,,,43130.38125,What do you call a process that will not complete?,,1,8,1,43130.97222,,CC BY-SA 3.0,, +364908,1,364942,,1/29/2018 22:57,,1,1384,"

My apologies if this is the wrong stack exchange site for this question. Please, let me know if I should ask this in a different place.

+ +

I recently went to an interview. The interview was for the position of senior full stack engineer. At that interview, I was asked the question:

+ +
+

How would you design a scalable web application

+
+ +

While the question focused on scalability, it was meant in the sense of a modern, responsive application.

+ +

I will try to reproduce the answer I gave as best as possible.

+ +

I was told that my answer, while it had some good points, was not exactly what they would expect, and that it was closer to the answer they would expect from someone applying for a mid-level position rather than a senior one.

+ +

I am curious how my answer can be improved.

+ +

The answer:

+ +
+

Well, there are three things to consider: the frontend, the backend and the database. Since I am mainly experienced with React, Django and MongoDB, I will assume these technologies.

+ +

Beginning with the frontend, I would be serving an html file with a javascript bundle. Since we want to minimise the size of the bundle, to get it across as fast as possible, I would see if there are libraries we can do without or where we can import only partially. For example, lodash allows partial imports, to avoid bundling the entire library. I would also be cautious about how many requests I make to the backend and how much information I retrieve at one time. If we had to display a very long list of items, I would go for an infinite scroller strategy rather than trying to fetch all the data at once. Another thing I would look at is using something like reselect in order to minimise the number of computations we do when the DOM tree re-renders.

+ +

Moving to the backend, we would have a RESTful architecture. Our Django views would be mapped to different methods and endpoints, and their role would be to communicate with the database. I would use mongoengine rather than the lower-level pymongo. We couldn't have only one server, so I would deploy this behind a load balancer. Another thing I would ensure is that we serve everything over https. I would redirect all http traffic to https, in order to improve security.

+ +

Coming to the database, we would need a cluster, in order to be able to scale horizontally. To optimise performance I would index the database. Additionally, I would ensure we had different machines used for reading and writing, in order to distribute the load more evenly.

+
+ +

What are the most important things to mention when it comes to scaling regarding each of the three components of the stack?

+ +

Other than scaling, what are the other features of a modern web application?

+ +

What is a complete and thorough answer to this question?

+",294673,,,,,43130.5,Scalable design for a web application,,1,3,1,43136.62014,,CC BY-SA 3.0,, +364910,1,364914,,1/30/2018 0:04,,9,3351,"

I am modeling an aggregate root, which has several actions that perform operations against other entities, as you'd expect. The aggregate, however, has a state, and several of these operations can only be performed when the aggregate is in a particular state.

+ +

I created an implementation of the state pattern, so that the aggregate would simply delegate the action to the concrete state object. However, now that I have implemented it, I found myself with the following concerns:

+ +
    +
  • There are operations that can be invoked in more than one state, thus I ended up repeating implementations.
  • +
  • There are operations that generate domain events, so I had to pass the root's event collection so they can add the events properly.
  • +
  • Some operations require access to the private members of the aggregate root, so I ended up either declaring them as internal (C#) or creating internal methods that modify the private members.
  • +
+ +

So now I'm wondering whether the implementation was worth it, or if the state object should only have CanPerformOperation1-style properties, and let the aggregate root check these properties and, if false, throw an InvalidOperationException.

+ +

The following code is a summary of what I'm trying to attempt.

+ +
interface IState {
+    void Register(DomainName domain, CustomerCode code);
+    void Activate(ActivationManifest manifest);
+    void Lock();
+    void Unlock();
+    void EnsureConsistency();
+}
+class NewState : IState {
+    // can only call Register method, transitions to RegisteredState
+}
+class RegisteredState : IState {
+    // can only call Activate method, transitions to ActiveState
+}
+class ActiveState : IState {
+    // can call Lock or EnsureConsistency
+    // Lock transitions to locked state
+    // EnsureConsistency can transition to RestrictedState or ActiveState
+}
+class LockedState : IState {
+    // can only call Unlock, transitions to ActiveState
+}
+class RestrictedState : IState {
+    // can only call EnsureConsistency, which can transition
+    // to ActiveState or RestrictedState
+}
+
+class Tenant {
+    private IState _state; // set to new NewState(this) in the constructor ('this' can't be used in a field initializer)
+    private readonly UserAccountCollection _accounts;
+    private readonly LicenseCollection _licenses;
+    private readonly ApplicationCollection _applications;
+
+    // had to make these internal accessors to be used by
+    // EnsureConsistency in ActiveState and RestrictedState
+    internal UserAccountCollection Accounts => _accounts;
+
+    internal Application RegisterApplication(AppKey key, UserAccount admin){
+        // this method is called by the RegisteredState.Activate method
+        // so what's the point of delegating?
+    }
+    internal License RegisterLicense(LicenseKey key) {
+        // this method is also called by the RegisteredState.Activate
+        // method, just like the one above.
+    }
+    // etc
+}
+
+ +

Now, this will only increase in complexity, as the customer requires me to add more methods that depend on state. So I was just wondering whether I should just add properties like CanRegisterApplication, CanRegisterLicense, etc., and then the states will only be acting as a flag switch.

+ +

What would be a proper way to implement what I'm trying to achieve? Or maybe I'm getting the state pattern wrong?
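To make the trade-off concrete, the flag/guard alternative I am weighing can be sketched like this (Python used only to keep the sketch short; the state names and operations come from the code above, everything else is placeholder):

```python
# Transition table: state name -> operations permitted in that state.
ALLOWED = {
    "New":        {"Register"},
    "Registered": {"Activate"},
    "Active":     {"Lock", "EnsureConsistency"},
    "Locked":     {"Unlock"},
    "Restricted": {"EnsureConsistency"},
}

class Tenant:
    def __init__(self):
        self.state = "New"

    def _require(self, op):
        if op not in ALLOWED[self.state]:
            raise RuntimeError(f"{op} is not valid in state {self.state}")

    def register(self, domain, code):
        self._require("Register")
        # ...the real registration logic stays here, with full access to the
        # aggregate's private members and its domain-event collection...
        self.state = "Registered"

t = Tenant()
t.register("example.com", "C-1")
print(t.state)  # Registered
```

Here the behaviour stays on the aggregate, with full access to its private members, and the state only gates which operations are legal; adding a state means adding a row to the table rather than a new class.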

+",228482,,,,,43132.32431,How to implement state machine pattern on aggregate root,,4,2,8,,,CC BY-SA 3.0,, +364917,1,365244,,1/30/2018 2:45,,4,461,"

This question extends from these questions:

+ + + +

Well, a JVM running on a server is well optimised, I know. Programs like a Minecraft server run pretty smoothly, with a short pause every few seconds (still well over 2-5 ms, which is quite a lot for some applications). If I were writing apps like that, I'd have no trouble sleeping at night over whether I should implement object pooling. But this time, I'm making an Android app.

+ +

My app will create a considerable number of POJOs every game tick (at 40 TPS), specifically vector/quaternion objects that have 3/4 double primitives as their members (I might fall back to float if necessary). Their purpose is algorithms like line-plane intersection, acceleration/velocity calculation, and so on. One tick could take O(N*N) time, given that optimisation will be done after the release of the app.

+ +

Should I be worried about this? Are Dalvik VMs considered the VMs with ""moden GCs""? My app's code will get bloated after this point of development and I need to decide if I should at least wrap creation of Vectors around factory method so I could implement something like thread-local object pool on it later on.

+ +

Edit

+ +
    +
  • The target API level is 21. So, the title is misleading. It's a question about Android VMs.
  • +
+",292746,,,,,43134.72361,Should I implement object pooling for Dalvik VMs?,,1,2,,,,CC BY-SA 3.0,, +364918,1,364925,,1/30/2018 2:54,,39,5349,"

A common pattern for locating a bug follows this script:

+ +
    +
  1. Observe weirdness, for example, no output or a hanging program.
  2. +
  3. Locate relevant message in log or program output, for example, ""Could not find Foo"". (The following is only relevant if this is the path taken to locate the bug. If a stack trace or other debugging information is readily available that’s another story.)
  4. +
  5. Locate code where the message is printed.
  6. +
  7. Debug the code between the first place Foo enters (or should enter) the picture and where the message is printed.
  8. +
+ +

That third step is where the debugging process often grinds to a halt because there are many places in the code where ""Could not find Foo"" (or a templated string Could not find {name}) is printed. In fact, several times a spelling mistake helped me find the actual location much faster than I otherwise would - it made the message unique across the entire system and often across the world, resulting in a relevant search engine hit immediately.

+ +

The obvious conclusion from this is that we should use globally unique message IDs in the code, hard-coding them as part of the message string, and possibly verifying that there’s only one occurrence of each ID in the code base. In terms of maintainability, what does this community think are the most important pros and cons of this approach, and how would you implement this or otherwise ensure that implementing it never becomes necessary (assuming that the software will always have bugs)?
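As a sketch of the 'verify uniqueness' part (Python; the E-number ID format is only an assumption for illustration), a test could scan the code base's source texts for duplicated IDs:

```python
import re
from collections import Counter

# Assumed ID format for illustration: E followed by four digits, e.g.
# log.error("E1042: could not find %s", name)
MSG_ID = re.compile(r"\bE\d{4}\b")

def duplicate_ids(source_texts):
    """Given an iterable of source-file contents, return the IDs used more than once."""
    counts = Counter(m for text in source_texts for m in MSG_ID.findall(text))
    return sorted(i for i, c in counts.items() if c > 1)

files = [
    'log.error("E1042: could not find %s", name)',
    'log.error("E1042: could not open %s", path)\nlog.warning("E2001: retrying")',
]
print(duplicate_ids(files))  # ['E1042'] -- the reused ID is flagged
```

Run as part of CI, such a check keeps every ID pointing at exactly one call site, which is the property that makes the log-to-code search work.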

+",13162,,,,,43130.81181,Making code findable by using globally unique message IDs,,6,7,12,,,CC BY-SA 3.0,, +364924,1,,,1/30/2018 6:39,,-2,109,"

I'm quite a noob in the world of testing. I've read all the materials related to Android testing:

+ +
    +
  • Unit test
  • +
  • Instrumentation test
  • +
  • JUnit
  • +
  • Robolectric
  • +
  • Espresso
  • +
  • etc.
  • +
+ +

The thing is that all the examples provided only cover very simple use cases, such as verifying whether a text is on the screen or whether a function's calculation produces the correct result.

+ +

Let's say I have an app which is closer to a real-world app: an app that has a login, fetches a list of items via Retrofit, and displays it in a RecyclerView. In this case:

+ +
    +
  • What are the things that experienced programmers will test?
  • +
  • In the case of Retrofit and RecyclerView, how do you actually test it? Check whether the JSON result matches what is displayed in the RecyclerView?
  • +
+ +

Hope some experts in testing will shed some light on this.

+",294609,,6509,,43130.64236,43130.64236,"Android testing, what to test?",,1,4,,43131.72292,,CC BY-SA 3.0,, +364938,1,364939,,1/30/2018 11:05,,9,17683,"

I'm writing a postcode validation library, so that I can call a helper method

+ +
var result = Postcode.IsValid(postcode, country)
+
+ +

To that end I need to have 'classes' that represent supported countries and know how to validate each. At present I have an interface thus:

+ +
public interface IPostcode {
+    bool IsValid(string postcode);
+}
+
+ +

and I have to have classes for each country, e.g.

+ +
public class USA : IPostcode {
+  public bool IsValid(string postcode) {
+     // ... validate here
+  }
+}
+
+ +

The helper method selects the relevant IPostcode based on the country code.

+ +

The issue is that it feels wrong to have to instantiate classes that have no state or properties, just methods, which would be far better if they were static. But of course static classes can't have interfaces. Is there a better pattern for this?
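For comparison, one alternative to instantiating stateless classes is a lookup table of plain validator functions; a Python stand-in of the idea (the regexes here are illustrative, not complete postcode rules):

```python
import re

# Validators are plain functions keyed by country code -- no instances needed.
VALIDATORS = {
    "US": lambda p: re.fullmatch(r"\d{5}(-\d{4})?", p) is not None,
    "CA": lambda p: re.fullmatch(r"[A-Z]\d[A-Z] ?\d[A-Z]\d", p) is not None,
}

def is_valid(postcode, country):
    try:
        return VALIDATORS[country](postcode)
    except KeyError:
        raise ValueError(f"Unsupported country: {country}") from None

print(is_valid("90210", "US"))    # True
print(is_valid("K1A 0B1", "CA"))  # True
```

In C# the equivalent would be a dictionary of `Func<string, bool>` delegates, which sidesteps the static-classes-can't-implement-interfaces problem entirely.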

+",45496,,,,,43130.51944,Writing 'interfaces' for static classes,,3,1,2,,,CC BY-SA 3.0,, +364941,1,,,1/30/2018 11:58,,2,127,"

I'm maintaining a couple of software repositories (C, C++ essentially) which I want to also run - or let's start with at least build - seamlessly on Windows.

+ +

Now, my desktop machine does not have Windows installed, nor does my laptop; and I don't have a spare computer right now, nor do I want to have my desktop or laptop run Windows (instead of GNU/Linux). I realize dual-booting might be an option, but I don't want to have to reboot back and forth either.

+ +

I've been considering setting up a Windows VM, but I'm wondering:

+ +
    +
  • Is there a simpler/easier alternative to do relatively-simple, essentially-non-obtrusive testing and platform-specific debugging work in a Windows environment other than via a VM?
  • +
  • Is there some standard turn-key way to set up such a VM for my kind of work?
  • +
+ +

I know VMs are very much in vogue these days, with the cloud and everything, but I'm inexperienced with them, hence my question.

+ +

Notes:

+ +
    +
  • It's FOSS software.
  • +
  • I already have the option of just having the build run (and fail), using appveyor's GitHub integration. I want a machine I could use to get it to work.
  • +
+",63497,,63497,,43130.7375,43130.7375,Easy way to debug platform-specific issues of non-GUI software on Windows?,,1,14,,,,CC BY-SA 3.0,, +364945,1,364973,,1/30/2018 12:47,,4,752,"

I have read a few questions on SO and elsewhere and still do not understand where this ""widening"" of a parameter type can be helpful, i.e. complying with the Liskov substitution principle. I took the following code from an answer on SO explaining contravariance:

+ + + +
//Contravariance of parameter types: OK
+class Food
+class FastFood extends Food
+
+class Person { eat(FastFood food) }
+class FatPerson extends Person { eat(Food food) }
+
+ +

So I understand that the overridden method accepts a more generic parameter than the method in its ancestor. But in practice, how does this help? I mean, if the original method works with certain properties of the derived type, none of those will be available in the override that uses the supertype of the parameter. Therefore, I might have issues fulfilling the contract postconditions, if those relate to the subtype somehow. Like:

+ +
class Animal {}
+class Cat : Animal { void Meow() void CatSpecificThing() }
+...
+
+class A
+{
+   List<Cat> ListOfCats;
+   void X(Cat c)
+   {
+      c.Meow()
+      c.CatSpecificThing()
+      ListOfCats.Add(c)
+
+   }
+}
+class B : A
+{
+   void X(Animal a)
+   {
+       //how is this now useful? I cannot do anything that needed Cat
+   }
+}
+
+ +

Let's say the postcondition of the X method is to update ListOfCats. But in the overridden method in the derived class, I would not be able to do that if I only had the supertype, would I?

+ +

I would be extremely happy for a simple example that demonstrates how this is useful.
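To make concrete the kind of usefulness I am asking about, here is the usual function-type argument, sketched in Python (names illustrative): a handler over the wider type can stand in wherever a handler over the narrower type is expected, because it safely accepts every value it can be given.

```python
from typing import Callable

class Animal:
    pass

class Cat(Animal):
    def meow(self):
        return "meow"

def notify_cat_owners(handler: Callable[[Cat], None], cats):
    for cat in cats:
        handler(cat)          # the handler is only ever given Cats

seen = []

def log_any_animal(a: Animal):
    seen.append(type(a).__name__)   # needs nothing Cat-specific

# Contravariance: a function accepting the *wider* type Animal is a valid
# Callable[[Cat], None] -- it can safely handle every Cat it receives.
notify_cat_owners(log_any_animal, [Cat(), Cat()])
print(seen)  # ['Cat', 'Cat']
```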

+",60327,,200203,,43131.55556,43131.55556,"Argument contravariance, real world purpose and usage?",,2,1,3,,,CC BY-SA 3.0,, +364946,1,364958,,1/30/2018 13:12,,0,217,"

Last week our teacher gave us a question about the Huffman encoding algorithm, described below.

+ +

HUFFMAN ENCODING ALGORITHM:

+ +
    +
  1. Consider all pairs of ⟨symbol, frequency⟩.
  2. +
  3. Choose the two lowest frequencies, and make them brothers, with the root having the combined frequency.
  4. +
  5. Iterate.
  6. +
+ +

The question was to find the Final Binary Tree and Variable Length Codes for the given Alphabets using the above defined Huffman Encoding Algorithm:

+ +

A | 10

+ +

B | 20

+ +

C | 30

+ +

D | 40

+ +

E | 50

+ +

F | 60

+ +

I solved the question but my teacher said that I made the wrong tree. Kindly check my answer below and tell me where I went wrong.

+ +

MY ANSWER:

+ +

Final Binary Tree:

+ +

+ +

Variable Length Code:

+ +

+ +

Kindly tell me where I am wrong. What did I do wrong while making the above binary tree?
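For cross-checking the tree, the code lengths the algorithm must produce for these frequencies can be computed mechanically (Python sketch; it tracks only symbol depths, not a particular tree shape):

```python
import heapq
from itertools import count

def huffman_code_lengths(freqs):
    """Return {symbol: code length} for a Huffman code over freqs."""
    tie = count()  # tie-breaker so the heap never has to compare dicts
    heap = [(f, next(tie), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)   # the two lowest frequencies...
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(tie), merged))  # ...become brothers
    return heap[0][2]

lengths = huffman_code_lengths({'A': 10, 'B': 20, 'C': 30, 'D': 40, 'E': 50, 'F': 60})
print(lengths)  # A and B get 4-bit codes, C gets 3 bits, D/E/F get 2 bits
```

The weighted total is 10·4 + 20·4 + 30·3 + 40·2 + 50·2 + 60·2 = 510 bits, so any correct tree gives A and B 4-bit codes, C a 3-bit code, and D, E, F 2-bit codes.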

+",259355,,,,,43130.64097,Binary Tree and Variable Length Codes for given Alphabets using Huffman Encoding (Confusion),,1,0,,,,CC BY-SA 3.0,, +364957,1,,,1/30/2018 15:06,,1,311,"

I am working on a software assignment where the design is component based. The components have ports which provide interfaces.

+ +

My professor argues that the Port class which is exposed by each component should be a Singleton, as the port must be the only way to interact with the component. As I see it, multiple instances don't interfere with the requirement that the port must be the single interaction point.

+ +

Is a singleton the right way to implement a Port for the component?

+ +

To give an example: The assignment is a model of an airplane and an airport. The components are different parts of an airplane and the airport. As it is a group assignment, different students have to implement different components. This is the UML diagram of the different components of the airport:

+ +

The interfaces are implemented in a class named Port, which is an inner class of the actual implementation. The outer class has a public field port and is a singleton. The different components interact by loading the JAR file and accessing the Port instance via reflection.

+",148469,,148469,,43130.65347,43132.72569,Can the port of a UML component only be a singleton?,,2,13,,,,CC BY-SA 3.0,, +364964,1,,,1/30/2018 16:42,,4,115,"

So I am thinking of using the actor model to solve a problem I currently have. For brevity, I have made up a scenario so we don't get too technical.

+ +

Let's say I have 3 benches, but each bench supports a different number of people:

+ +
    +
  • Park Bench 1 supports 3 people simultaneously
  • +
  • Park Bench 2 supports 5 people simultaneously
  • +
  • Park Bench 3 supports 2 people simultaneously
  • +
+ +

Upfront I know which bench to route to; however, what I don't have context of is how many people are sitting on the bench. What I want to do is block any further people from sitting on a bench if it's full, but allow them to sit when a seat becomes available.

+ +

For some reason the actor model popped into my head, but I am not sure whether this is the correct approach; perhaps there is an alternative approach to solving my problem, in which case I am keen to hear it.

+ +

I envision creating 3 actors ParkBench1Actor ParkBench2Actor ParkBench3Actor and somehow control the number of live actors. I believe concepts such as pooling can help in these scenarios http://getakka.net/articles/actors/routers.html#pools-vs-groups.
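Worth noting, in case it changes the answer: if blocking until a seat frees up is the whole requirement, a counting semaphore per bench already models it (Python sketch; an actor per bench would add messaging and supervision on top of this):

```python
import threading

class Bench:
    """Capacity-limited bench: a counting semaphore tracks the free seats."""
    def __init__(self, seats):
        self._free = threading.Semaphore(seats)

    def sit(self, block=True):
        # Blocks while the bench is full; returns False instead if block=False.
        return self._free.acquire(blocking=block)

    def leave(self):
        self._free.release()   # a seat frees up; one blocked sitter proceeds

bench = Bench(2)
assert bench.sit(block=False)
assert bench.sit(block=False)
assert not bench.sit(block=False)   # bench full
bench.leave()
assert bench.sit(block=False)       # a seat freed up
```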

+ +

I would appreciate people's thoughts on this; tell me whether I have lost the plot a little :)

+",94188,,,,,43145.68542,Actor Model is it ideal for Controlling Concurrency,,1,1,,,,CC BY-SA 3.0,, +364965,1,,,1/30/2018 16:43,,0,182,"

I have a base class with a method called Update:

+ +
Start Update
+
+    Code block 1 (An If statement)
+
+    Code block 2 (Setting a variable based on the If result)
+
+    Code block 3 (A switch which is setting something)
+
+End Update
+
+ +

The order in which the code is executed is important, so the blocks can't be shifted around.

+ +

I also have a derived class which needs a little bit more code. This code, however, has to go between Code Blocks 1 and 2. If I were to rewrite the entire thing, it would be something like this:

+ +
Start Update
+
+    Code block 1 (An If statement)
+
+    Code block 4 (An extra calculation based on the If result)
+
+    Code block 2 (Setting a variable based on the If result)
+
+    Code block 3 (A switch which is setting something)
+
+End Update
+
+ +

I'm looking for ways to reuse Code Blocks 1 through 3 and also put Code Block 4 in there.

+ +

The best thing I came up with is having a method called Extra in my base class and having the Update method look like this:

+ +
Start Update
+
+    Code block 1 (An If statement)
+
+    Call Extra
+
+    Code block 2 (Setting a variable based on the If result)
+
+    Code block 3 (A switch which is setting something)
+
+End Update
+
+ +

In the base class the Extra method would be empty since it has no use here. The derived class would also have the Extra method, but in that method the following would be called:

+ +
Start Extra
+
+    Code block 4 (An extra calculation based on the If result)
+
+End Extra
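This hook arrangement is the Template Method pattern; a runnable sketch of it (Python for brevity; the block bodies are placeholders that just record the execution order):

```python
class Base:
    """Template method: update() fixes the execution order; extra() is a hook."""
    def __init__(self):
        self.trace = []           # records the execution order

    def update(self):
        self.block1()
        self.extra()              # empty in the base class; subclasses may override
        self.block2()
        self.block3()

    def block1(self): self.trace.append("block 1")  # the if statement
    def block2(self): self.trace.append("block 2")  # set variable from the if result
    def block3(self): self.trace.append("block 3")  # the switch

    def extra(self):
        pass                      # deliberately empty hook

class Derived(Base):
    def extra(self):
        self.trace.append("block 4")  # extra calculation, between blocks 1 and 2

d = Derived()
d.update()
print(d.trace)  # ['block 1', 'block 4', 'block 2', 'block 3']
```

The base class keeps full control of the ordering, and the derived class only fills in the hook.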
+
+",196214,,,,,43133.6,How to insert code in a method in a derived class,,4,2,,43133.61875,,CC BY-SA 3.0,, +364966,1,365038,,1/30/2018 16:51,,2,1920,"

I am building a small web API using .NET Core. I want to practice TDD and this is my first attempt at TDD. These are a few use cases of the API:

+ +
    +
  • Users in administrator role can create/edit/delete common(shared) lesson records.
  • +
  • Users in teacher role can create lesson record.
  • +
  • Users in teacher role can edit or delete lesson records that are created by them.
  • +
  • Lessons must have a name (cannot be empty).
  • +
+ +

So I now have something like this

+ +
using FluentValidation;
+using MediatR;
+using Microsoft.AspNetCore.Authorization;
+using Microsoft.AspNetCore.Mvc;
+using System.Threading.Tasks;
+
+[Route(""/lessons"")]
+[Authorize(Roles = ""teacher, admin"")]
+public class LessonsController : ApiControllerBase
+{
+    private readonly IMediator _mediator;
+
+    public LessonsController(IMediator mediator)
+    {
+        _mediator = mediator;
+    }
+
+    [HttpPost]
+    [Authorize(Roles = ""admin"")]
+    [Route(""create-common-lesson"")]
+    public async Task<IActionResult> CreateCommonLesson([FromBody]CreateLessonCommand command)
+    {
+        command.UserId = null;
+        return await ExecuteRequest(_mediator, command);
+    }
+
+    [HttpPost]
+    [Route(""create-lesson"")]
+    public async Task<IActionResult> CreateLesson([FromBody]CreateLessonCommand command)
+    {
+        command.UserId = User.FindFirst(""sub"")?.Value;
+        return await ExecuteRequest(_mediator, command);
+    }
+
+    [HttpGet]
+    [Route(""common-lessons"")]
+    public async Task<IActionResult> GetCommonLessons()
+    {
+        var query = new LessonQuery();
+
+        return Ok(await _mediator.Send(query));
+    }
+}
+
+ +

and here are some of my tests:

+ +
using Common;
+using FluentAssertions;
+using Newtonsoft.Json;
+using System.Collections.Generic;
+using System.Net;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+using Xunit;
+
+[Collection(""Api Tests"")]
+public class LessonApiTests : IntegrationTestBase, IClassFixture<WebApiTestFixture>
+{
+    public LessonApiTests(WebApiTestFixture fixture)
+        : base(fixture)
+    {
+    }
+
+    [Fact]
+    public async Task LessonApi_Get_Should_ReturnUnauthorized_When_TokenNotProvided()
+    {
+        Client.DefaultRequestHeaders.Authorization = null;
+        var response = await Client.GetAsync(""/lessons/common-lessons"");
+        response.StatusCode.Should().Be(HttpStatusCode.Unauthorized);
+    }
+
+    [Fact]
+    public async Task LessonApi_GetCommonLessons_ReturnsCommonLesson_WhenTeacherRequests()
+    {
+        var response = await TeacherGetAsync(""/lessons/common-lessons"");
+        response.StatusCode.Should().Be(HttpStatusCode.OK);
+        var lessons = await response.Content.ReadAsAsync<IEnumerable<Lesson>>();
+
+        lessons.Should().NotBeNullOrEmpty();
+    }
+
+    [Fact]
+    public async Task LessonApi_CreateCommonLesson_ReturnsForbidden_When_UserIsATeacher()
+    {
+        var model = new CreateLessonCommand()
+        {
+            Name = ""Some-Random-Name""
+        };
+        var content = new StringContent(
+            JsonConvert.SerializeObject(model),
+            Encoding.UTF8,
+            ""application/json"");
+
+        var response = await TeacherPostAsync(""/lessons/create-common-lesson"", content);
+
+        response.StatusCode.Should().Be(HttpStatusCode.Forbidden);
+    }
+
+    [Fact]
+    public async Task LessonApi_CreateCommonLesson_Creates_When_UserIsAnAdminAndValidModelPosted()
+    {
+        var model = new CreateLessonCommand()
+        {
+            Name = ""Some-Random-Name""
+        };
+        var content = new StringContent(
+            JsonConvert.SerializeObject(model),
+            Encoding.UTF8,
+            ""application/json"");
+
+        var response = await AdminPostAsync(""/lessons/create-common-lesson"", content);
+
+        response.StatusCode.Should().Be(HttpStatusCode.OK);
+        response = await TeacherGetAsync(""/lessons/common-lessons"");
+        var lessons = await response.Content.ReadAsAsync<IEnumerable<Lesson>>();
+        lessons.Should().Contain(l => l.Name == ""Some-Random-Name"");
+    }
+
+    [Fact]
+    public async Task LessonApi_CreateCommonLesson_ReturnsBadRequest_WhenEmptyNamePosted()
+    {
+        var model = new CreateLessonCommand();
+        var content = new StringContent(
+            JsonConvert.SerializeObject(model),
+            Encoding.UTF8,
+            ""application/json"");
+
+        var response = await AdminPostAsync(""/lessons/create-common-lesson"", content);
+
+        response.StatusCode.Should().Be(HttpStatusCode.BadRequest);
+        response.Content.ReadAsStringAsync().Result.Should().Contain(Constants.ErrorCodes.Lesson.NameCannotBeEmpty);
+    }
+
+    [Fact]
+    public async Task LessonApi_CreateLesson_Creates_WhenValidModelProvided()
+    {
+    }
+}
+
+ +

I am trying to write tests for every use case scenario. However, I feel like I am off track here.

+ +

My questions are:

+ +

Is writing tests for every use case scenario good practice or bad practice?

+ +

Should the name of the integration test explain the use case or should it explain how the API will behave? For example, should it be something like this:

+ +
LessonApi_CreateCommonLesson_Returns_BadRequest_When_EmptyNamePosted
+
+ +

or like this:

+ +
Admin_CannotCreateCommonLesson_When_NameIsEmpty
+
+",265537,,267852,,43130.99653,43131.62153,What are TDD and integration test naming conventions?,<.net>,1,1,1,,,CC BY-SA 3.0,, +364969,1,365048,,1/30/2018 17:25,,1,5865,"

I've developed an application that reads a file, maps it, and stores info in the database. For some columns we need the primary key of an object in the database, and if the record does not exist we need to create it. For that purpose we have a class called ReferenceSolver, which is abstract and has many implementations that check whether the object exists and create it if necessary.

+ +

Since I want the operation to be atomic and create the main object and the referenced objects in one transaction, I'm using a class called TransactionBuilder, to which I pass all the queries that I want to run. The problem I found is that I don't know how to pass the TransactionBuilder reference to every child of ReferenceSolver. I worked around it by using a singleton, but that is raising plenty of red flags in my head.

+ +

The logic in the relevant method of ReferenceSolver looks like this:

+ +
private string GetReferencedObjectKey(
+        string referenceValue, 
+        Dictionary<string, string> record
+) {
+    SQLConnector connector = new SQLConnector( LogImporter.ConnectionString );
+
+    using (
+        var reader = connector.ExecuteQuery(
+            string.Format(
+                ""SELECT W6Key FROM {0} WHERE {1} = '{2}'"", 
+                TableName, 
+                ReferenceField, 
+                referenceValue
+            )
+        )
+    ) {
+        if ( reader.Read() ) { 
+            return reader[""W6Key""].ToString(); 
+        } else {
+            int key = KeyHelper.GetNextFreeKey(TableName);
+            transactionBuilder.ExecuteQuery( GetInsertQuery() );
+
+            return key.ToString();
+        }
+    }
+}
+
+ +

Is there any way that I can pass the same instance of the object without using a singleton that I haven't thought of? I also thought of raising an event to request the creation of the object, but I'm not sure whether that is a good solution either.
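For reference, the constructor-injection route would look roughly like this (Python stand-in, all names hypothetical): the shared TransactionBuilder is created once per import run and handed to every solver.

```python
class TransactionBuilder:
    """Collects queries to be committed later in a single transaction."""
    def __init__(self):
        self.queries = []

    def execute_query(self, sql):
        self.queries.append(sql)

class ReferenceSolver:
    """Abstract base: the shared TransactionBuilder is constructor-injected,
    so every child works against the same pending transaction."""
    def __init__(self, transaction_builder):
        self._tx = transaction_builder

class CustomerSolver(ReferenceSolver):
    def get_key(self, value):
        # Placeholder for the lookup-or-insert logic from the question.
        self._tx.execute_query(f"INSERT INTO Customers VALUES ('{value}')")
        return value

tx = TransactionBuilder()                      # one instance per import run
solvers = [CustomerSolver(tx), CustomerSolver(tx)]
for s in solvers:
    s.get_key("x")
print(len(tx.queries))  # 2 -- both children queued into the same transaction
```

This keeps the lifetime of the transaction explicit (one builder per file import) instead of hiding it in a process-wide singleton.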

+",225184,,225184,,43131.39792,43131.91597,How to apply DI on abstract class with many children,,1,7,,,,CC BY-SA 3.0,, +364971,1,,,1/30/2018 18:22,,0,102,"

This is a very conceptual question and looking for advice or examples you may know about. I'm just getting back into development after a very long hiatus in a sales career, so please excuse me if my thoughts or questions seem simplistic or outdated.

+ +

The data for a very large number of implementations is moving off-site from an SQL server to be accessed by a very few specific RESTful APIs. Rather than spending the man-hours redeveloping each implementation's data access into REST GET requests, I wonder if there's a better way. My idea is to develop (or find) an SQL layer that (invisibly to the client) adapts the SQL queries into REST requests and returns the data in SQL format. All the client would then need to do is switch its SQL source to the new ""SQL service"" and automatically retrieve the data from the REST API, formatted to suit the application.

+ +

It may not be possible to create a universal REST API translator, because the data schemas of JSON/XML and SQL are generally so different, but since the APIs I am accessing use the same data schema, perhaps something could be developed for my specific implementation. Perhaps I would need to tweak this adapter layer for each client's nuances in their SQL queries, but maybe this would save development time for each client not wanting to rush into a total redevelopment of their platform.

+ +

I wonder if anything like this has been accomplished before, is even possible, and what issues lie ahead. Is this too much of a band-aid that will break down with data changes? Could this be lightweight enough to work without significant slowdowns? Any advice or thoughts are greatly appreciated.

+",294762,,,,,43130.78264,Developing SQL <-> REST Adapter,,1,3,,,,CC BY-SA 3.0,, +364978,1,364982,,1/30/2018 20:13,,0,662,"

This is not a problem I am having in my problem domain. It is just a thought exercise.

+ +

Say I have a simple calculator like this:

+ +
public class Calculator
+{
+    public IEnumerable<KeyValuePair<int, int>> CalculateDenominationsFor(int cost)
+    {
+        var target = cost;
+        foreach (var denomination in currency.AvailableDenominations.OrderByDescending(a => a))
+        {
+            var numberRequired = target / denomination;
+            if (numberRequired > 0)
+            {
+                yield return new KeyValuePair<int, int>(denomination, numberRequired);
+            }
+            target = target - (numberRequired * denomination);
+        }
+    }
+}
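For reference, the greedy computation this service performs, as a runnable sketch (Python; the denomination set is illustrative):

```python
def denominations_for(cost, available):
    """Greedy change-making: largest denominations first."""
    result = {}
    remaining = cost
    for d in sorted(available, reverse=True):
        n, remaining = divmod(remaining, d)
        if n:
            result[d] = n
    return result

print(denominations_for(187, [1, 2, 5, 10, 20, 50]))
# {50: 3, 20: 1, 10: 1, 5: 1, 2: 1}
```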
+
+ +

As it stands there is no Entity and no Aggregate root.

+ +

I believe I have two options:

+ +
    +
  1. No aggregate root: Have the application service call the domain service directly i.e. supply a cost and receive the denominations.

  2. +
  3. Introduce an Aggregate Root: Create a class called ChangeRequest like the following:

    + +
    public class ChangeRequest
    +{
    +    public decimal Cost {get; set;}
    +    public List<KeyValuePair<int, int>> Denominations {get; set;}
    +
    +   public IEnumerable<KeyValuePair<int, int>> AddDenominations(Calculator calculator)
    +   {
    +     //Add denominations to list here.
    +   }
    +}
    +
  4. +
+ +

Is it normal to have an aggregate without an aggregate root?

+",65549,,1204,,43130.92917,43130.95556,An aggregate without an aggregate root?,,2,6,,,,CC BY-SA 3.0,, +364980,1,364985,,1/30/2018 21:33,,0,525,"

I am trying to design an app based on a microservice architecture. The backend is written in Java EE (MicroProfile, not Spring Boot), while for the frontend I would use Angular 5.

+ +

Now I am wondering how I could implement authentication that requires the user to present a valid X.509 certificate (.p12 file) signed with my self-signed CA. After the user presents his certificate, I would ask him for his password (stored in the database), while I would read his e-mail from the certificate and log him in based on those two inputs. If the user is correctly authenticated, I would send him back a JWT, with which he would access data on my Angular pages.

+ +

I don't know how to begin with such an implementation: is any of it done on the client side, or is everything handled on the server side, and if so, how, since my backend is only a REST API?

+ +

I am looking for a general idea of how to implement this and, if possible, some literature addressing this type of requirement.

+ +

I have various microservices which perform the logic of my application, and I would like to support different types of users: regular users, moderators, and admins. Each microservice can have different access rules (e.g. the option to retrieve all data is available to everyone, the option to retrieve one particular item is only for registered users, while the option to create new data is available only to mods and admins). I want regular users to log in using only their email and password (both stored in the DB); I have no problems with that. However, I want mods and admins to have to present an X.509 certificate in order to log in. In both cases I would then generate a JWT and send it back to the user (in the token I would write their role, which determines whether they can perform an action or not).

+ +

I am not as interested in the implementation as I am in the overall workflow my app should support in order to implement such a thing. I can figure out each detail later; however, I am not sure how I would proceed in a REST/Angular way. I have an idea of how to do it with a server-rendered app, but now I want to migrate to client-side rendering.

+",274265,,1204,,43131.66736,43131.66736,X.509 authentication for microservices with JavaEE,,1,2,,43137.51458,,CC BY-SA 3.0,, +364981,1,,,1/30/2018 21:48,,0,61,"

Trying to work out where certain responsibilities lie with the following example. We have a Project object, and a Project can have Time entries booked against it. Each Time entry will have a number of Hours and a Rate. We then want to create a report that shows a list of projects, the number of hours booked, and the value of that time.

+ +

So my question is: where should the Time entries come from for the calculations?

+ +
public class Project{
+   ......
+   public IList<Time> TimeEntries {get;set;}
+   ......
+
+   public double HoursBooked(){
+       return this.TimeEntries.Sum(x => x.Hours);
+   }
+}
+
+ +

OR

+ +
public class Project{
+
+   ......
+
+   public double HoursBooked(IList<Time> timeEntries){
+       return timeEntries.Sum(x => x.Hours);
+   }
+}
+
+ +

Along similar lines, we have some Time calculations for a Project that will need to be converted to another Currency. Should we be passing the Currency in as a parameter, or should we have another class purely responsible for Project calculations that has properties for all the collections that may be needed in order to get the final result?

+",43458,,,,,43131.07639,Object Responsibility and Calculations,<.net>,1,0,,,,CC BY-SA 3.0,, +364991,1,364994,,1/31/2018 0:52,,3,258,"

For example

+ +
class A 
+{
+    public int data1 {get; set;}
+    public int data2 {get; set;}
+}
+class B 
+{
+    public A objectA;
+}
+class C
+{
+    public B objectB;
+}
+class D
+{
+    public C objectC
+}
+class E
+{
+    public D objectD;
+}
+
+class Caller
+{
+    public void foo(E input)
+    {
+        var bar = input.objectD.objectC.objectB.objectA;
+
+        // code that looks nice
+        if (bar.data1 == 1)
+        {
+            // do something
+        }
+
+        // code that looks nice again
+        if (bar.data2 == 2)
+        {
+            // do something again
+        }
+
+        // but why?
+        if (input.objectD.objectC.objectB.objectA.data1 == 0)
+        {
+        }
+    }
+}
+
+ +

Just for the sake of getting my point across, here is an example call to a property that is nested deep within inner classes. Surely a class that requires such a deep call to get at data needs refactoring, but as an example of what I'm trying to say, consider class E, which has a property of class D, which references down through the other classes to class A, which holds the data we need.

+ +
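As a side note, the memory cost is easy to demonstrate. This sketch is in JavaScript rather than C# (the object and property names mirror the example above, but the snippet itself is only illustrative); for reference types, the local variable holds just a reference to the same object, not a copy:

```javascript
// Hypothetical shape mirroring the classes above. For reference types
// the semantics are the same in C#: assignment copies a reference.
const objectA = { data1: 1, data2: 2 };
const input = { objectD: { objectC: { objectB: { objectA: objectA } } } };

// 'bar' is just another reference to the same object, not a copy,
// so the cost is a single pointer-sized local slot.
const bar = input.objectD.objectC.objectB.objectA;
const sameObject = (bar === input.objectD.objectC.objectB.objectA); // true

// Mutating through the alias is visible through the long path too.
bar.data1 = 99;
console.log(sameObject, input.objectD.objectC.objectB.objectA.data1); // true 99
```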

To avoid typing a long-winded call to objectA from class E, I store objectA in a variable. If the object is only used within the scope of a few lines, would it be a waste of memory? Aside from making the code easier to type, more readable, and perhaps more maintainable, what other merits are there?

+",294787,,294787,,43131.40764,43131.79028,Is storing a reference of an inner class in a variable that is only going to be used locally (e.g. within a function) wasteful of memory?,,3,4,1,,,CC BY-SA 3.0,, +364997,1,,,1/31/2018 4:38,,2,172,"

I am writing a utility in C++ with Qt which communicates with an embedded device. The program was originally going to be used to just plot data from the device, but a new requirement has been added (not by me; no control over it). So now it has two modes, and thus two classes:

+ +
    +
  1. DevicePlotWindow
  2. +
  3. DeviceDebugWindow
  4. +
+ +

Both modes do exactly the same thing except for the output -- i.e. plot data and write a log message to the debug console, respectively.

+ +

Here's where it gets a bit messy. The second mode was already an existing standalone tool, which is going to be deprecated, and merged into this program. The request from the users is to keep the same look and feel as the older tool.

+ +

This means that in my utility, I have two modes that are very similar in functionality but look completely different.

+ +

I have already implemented this by copy-pasting the UI elements and their functionality in terms of communication with the device. But I hate breaking DRY.

+ +

What is the best, cleanest way to ""share"" the UI elements and their corresponding member function handlers with both modes (classes)? I am aware of inheritance, but I'd like to hear what others have in mind and the best way to go about this.

+ +

Example:

+ +
class DevicePlotWindow : public QWidget {
+    private:
+        ...
+        Plot plot_;
+        QSpinBox address_;
+        QSpinBox data_;
+
+    public:
+        ...
+        void plot();
+        void on_address_value_changed(int val);
+        void on_data_value_changed(int val);
+};
+
+class DeviceDebugWindow : public QWidget {
+    private:
+        ...
+        QPlainTextEdit console_;
+        QSpinBox address_;
+        QSpinBox data_;
+
+    public:
+        ...
+        void write_console(QString const &msg);
+        void on_address_value_changed(int val);
+        void on_data_value_changed(int val);
+};
+
+",294802,,,,,43131.63333,Reusing UI elements between two different QWidgets,,1,0,,,,CC BY-SA 3.0,, +364999,1,,,1/31/2018 5:11,,1,66,"

I am hoping you may be able to give me some advice on class design and what would be best for the program I'm currently writing. This is being done in Java, and the goal is to generate letter data fields for the mainframe to read and run through a program for further action. Imagine there are 50 different letters; 40 of them use the same fields and are considered standard, while 10 of them may have special fields.

+ +

The guidelines for passing the file to the mainframe are as follows: they expect fixed-width positions for the letter fields, with all fields passed on each line. If a field is not being used, it should be padded with either spaces or zeros. So, since I'm always supposed to write out the same number of fields, I have a bit of a quandary.

+ +

The way I've structured my calculation functions and properties, once I have the values I set them on the appropriate property, and that's all that is needed.

+ +

-One person on my team suggested I have one class for the common fields and then other classes for specific letters. There are several setup fields that are common no matter what, such as the SEND TO address. The idea was to create a common letter, then create a specific letter and copy the existing property values, since inheriting does not copy the values from a parent object.

+ +

-The more I think about it, the more I feel it may be easier to have one Letter class. The fields in this would be the common ones in addition to the specific ones. This would be a one-to-one match with what is set in the constructor by default, and also a match when I write the fields out to a file. This would certainly make the whole design and structure of the program simpler, with fewer classes.

+",224325,,,,,43131.21597,Program and Class Design,,0,1,,,,CC BY-SA 3.0,, +365001,1,,,1/31/2018 5:50,,3,2003,"

I am developing an app which uses Entity Framework for data access. The architecture of the app is roughly as shown below:

+ +

+ +

As depicted in the drawing, the business service can be consumed from a web app, a CLI app, or a Windows service. What I'm trying to design is that each service request should be performed in a single transaction. I'm using dependency injection to inject services into the Web API controllers. If I use a request-scoped DbContext via the DI container, it'll do the job for the Web API, but it won't work for service requests coming directly from the CLI app or Windows services.

+ +

What are best practices used to handle service level transactions with Entity Framework?

+ +

e.g.

+ +
// Services
+public class UserService
+{
+    private TaskService _taskService;
+    private UserRepository _userRepository;
+
+    public UserService(TaskService taskService, UserRepository userRepository)
+    {
+        this._taskService = taskService;
+        this._userRepository = userRepository;
+    }
+
+    public void MarkInactive(int userId)
+    {
+        this._taskService.CloseAllPendingTasks(userId);
+        this._userRepository.MarkInactive(userId);
+    }
+}
+
+public class TaskService
+{
+    public void CloseAllPendingTasks(int userId)
+    {
+        ...
+    }
+}
+
+// Consumer
+// Scenario 1:
+this._taskService.CloseAllPendingTasks(1);
+
+// Scenario 2:
+this._userService.MarkInactive(2);
+
+ +

In the above example, in Scenario 1 the task service should create a new transaction for the operation, while in Scenario 2 the user service should create a transaction and the task service should join the already-open transaction.

+",67408,,,,,43634.40833,Best practice for transaction handling using Entity Framework,,1,11,1,,,CC BY-SA 3.0,, +365004,1,365006,,1/31/2018 6:11,,0,171,"

Let's say I have written a library containing many classes in C++. Obviously, I can call this library from C++ client programs.

+ +

But, now let's say I want to use another language for my client programs. Since my C++ library contains many classes, I don't want to redevelop them in the client language.

+ +

So, I would like to be able to call my C++ library classes from other languages.

+ +

Also, I want the solution to be portable across platforms. I don't want to be tied to one platform (such as .NET where any .NET language can call any other .NET language).

+ +

In the extreme case, this inter-language call feature would be required even from ""managed"" languages such as Java.

+ +

So, my question is: is it possible to call a library function written in one language from a program written in another language? If so, how?

+",294597,,294597,,43131.56319,43131.58125,Middleware between Languages,,1,2,,,,CC BY-SA 3.0,, +365007,1,,,1/31/2018 6:38,,-3,98,"

Is there a best practice for naming class selectors for identification alone?

+ +

For example, for defining a single amount field with action button, we end up creating several div containers and div items among other elements.

+ +
<div class=""form-group debit-amount"">
+  <label class=""control-label"">Debit amount</label>
+  <div class=""input-group"">
+    <span class=""input-group-addon"">$</span>
+    <input type=""text"" class=""form-control"">
+    <span class=""input-group-btn"">
+      <button class=""btn btn-default act-convert"" type=""button"">Apply</button>
+    </span>
+  </div>
+</div>
+
+ +

Now, I want to add a css selector, which is purely to identify click on the action button.

+ +
//Using a new class only to identify 
+$myform.find('.debit-amount .act-convert').on('click', doConvert);
+
+//Using the style class itself 
+$myform.find('.debit-amount .btn-default').on('click', doConvert);
+
+ +

Is there a naming convention so that these identification classes are not confused with style classes?

+",294809,,294809,,43131.30972,43134.48889,Is there a best practice for naming class selectors for identification alone,,1,9,,,,CC BY-SA 3.0,, +365008,1,365009,,1/31/2018 6:40,,31,16361,"

For example, I want to show a list of buttons from 0, 0.5, ..., 5, stepping by 0.5. I use a for loop to do that, and give the button at STANDARD_LINE a different color:

+ +
var MAX=5.0;
+var DIFF=0.5;
+var STANDARD_LINE=1.5;
+
+for(var i=0;i<=MAX;i=i+DIFF){
+    button.text=i+'';
+    if(i==STANDARD_LINE){
+      button.color='red';
+    }
+}
+
+ +

In this case there should be no rounding errors, as each value is exactly representable in IEEE 754. But I'm struggling with whether I should change it to avoid the floating-point equality comparison:

+ +
var MAX=10;
+var STANDARD_LINE=3;
+
+for(var i=0;i<=MAX;i++){
+    button.text=i/2.0+'';
+    if(i==STANDARD_LINE/2.0){
+      button.color='red';
+    }
+}
+
+ +
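For what it's worth, the exactness claim can be checked directly. In this sketch (plain JavaScript, mirroring the constants above), steps of 0.5 accumulate without error because 0.5 is a power of two, while steps of 0.1 drift:

```javascript
// Steps of 0.5 stay exact: 0.5 = 2^-1, so each addition is
// exactly representable and i really does hit 1.5.
var exactHit = false;
for (var i = 0; i <= 5.0; i = i + 0.5) {
    if (i === 1.5) { exactHit = true; }
}

// Steps of 0.1 drift: 0.1 has no finite binary representation.
var drifted = (0.1 + 0.2 !== 0.3);

console.log(exactHit); // true
console.log(drifted);  // true
```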

On one hand, the original code is simpler and more straightforward to me. But there is one thing I'm considering: does i==STANDARD_LINE mislead junior teammates? Does it hide the fact that floating-point numbers may have rounding errors? After reading comments on this post:

+ +

https://stackoverflow.com/questions/33646148/is-hardcode-float-precise-if-it-can-be-represented-by-binary-format-in-ieee-754

+ +

it seems there are many developers who don't know that some float values are exact. Should I avoid float equality comparisons even when they are valid, as in my case? Or am I overthinking this?

+",248528,,7422,,43131.29167,43132.47153,Does comparing equality of float numbers mislead junior developers even if no rounding error occurs in my case?,,8,15,5,,,CC BY-SA 3.0,, +365015,1,,,1/31/2018 8:56,,0,111,"

A common bug in JavaScript is to forget the await keyword when calling an async function.

+ +
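To illustrate the bug (with a made-up function name, fetchCount): forgetting await hands the caller a Promise object instead of the resolved value, and nothing fails loudly:

```javascript
// Hypothetical async function for illustration only.
async function fetchCount() {
    return 42;
}

// The bug: without await the caller gets a Promise, not a number.
const forgot = fetchCount();
const isPromise = forgot instanceof Promise;   // true

// The comparison below is silently wrong; no error is thrown,
// because the Promise coerces to a string and then to NaN.
const looksBroken = (forgot > 40);             // false

// The fix: await (or .then) to get the resolved value.
forgot.then(value => {
    console.log(value); // 42, only available once awaited/then-ed
});

console.log(isPromise, looksBroken);
```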

Of course you don't always want to await; sometimes you really do want the promise itself. And of course you can't statically determine all target functions.

+ +

But there's still a vast majority of trivial cases that we'd like to check in order to avoid silly bugs.

+ +

Excluding naming schemes (which are cumbersome), what convenient strategies are there to deal with this problem? Any linter? Checker? Colorization plugin?

+",52680,,,,,43131.37778,How to statically check you didn't forget to await for an async function,,1,0,,,,CC BY-SA 3.0,, +365017,1,,,1/31/2018 10:14,,25,7617,"

I have read plenty of articles recently that describe primitive obsession as a code smell.

+ +

There are two benefits of avoiding primitive obsession:

+ +
    +
  1. It makes the domain model more explicit. For example, I can talk to a business analyst about a Post Code instead of a string that contains a post code.

  2. +
  3. All the validation is in one place instead of across the application.

  4. +
+ +

There are plenty of articles out there that describe when it is a code smell. For example, I can see the benefit of removing primitive obsession for a post code like this:

+ +
public class Address
+{
+    public ZipCode ZipCode { get; set; }
+}
+
+ +

Here is the constructor of the ZipCode:

+ +
public ZipCode(string value)
+    {
+        // Perform regex matching to verify XXXXX or XXXXX-XXXX format
+        _value = value;
+    }
+
+ +

You would be breaking the DRY principle by putting that validation logic everywhere a zip code is used.

+ +

However, what about the following objects:

+ +
    +
  1. Date of Birth: Check that it is greater than the minimum date and less than today's date.

  2. +
  3. Salary: Check that it is greater than or equal to zero.

  4. +
+ +

Would you create a DateOfBirth object and a Salary object? The benefit is that you can talk about them when describing the domain model. However, is this a case of overengineering, as there is not a lot of validation? Is there a rule that describes when and when not to remove primitive obsession, or should you always do it if possible?

+ +

I guess I could create a type alias instead of a class, which would help with point one above.

+",65549,,591,,43490.59583,44008.56736,When is primitive obsession not a code smell?,,9,15,8,,,CC BY-SA 4.0,, +365020,1,,,1/31/2018 10:43,,2,83,"

Currently I have a design problem, which I am not sure how exactly to solve and what would be the best approach.

+ +

So what I have is an ASP.NET Core Web API project, which is actually a clustered solution. It has several layers: representation (RESTful API), services, validation, repositories, and data. The solution uses a SQL database.

+ +

My current problem is in the validation layer. I have an HttpPost action which receives a parameter X that should be stored in the SQL database. This parameter should be unique. What I am doing in my validation layer is checking whether it already exists in the database; if it exists, I return the proper HTTP response, and if it doesn't, I update the database.

+ +

The problem comes when the API is deployed, since it's actually a clustered solution. If we have two requests at the same time, we might have a case where both requests pass the validation layer (because nothing has been written to the database yet); then one of the requests will successfully write to the database and update the record data, but the second request will receive an exception from the data entities.

+ +

What can be done in that case is to treat the relevant exception in the data layer differently and, again, return the proper response to the customer, but that will only work if we add an extra constraint to our SQL database saying that column X must have a unique value.

+ +

If we imagine that we don't have access to the database and cannot add this constraint (so column X can actually contain duplicate values), then we depend only on the API to validate.

+ +

I am not sure what the solution to this problem is. Once again: if I try to add the same values that should be unique, but SQL doesn't validate them (maybe we don't have the permissions for that) and we depend only on our Web API, which is async, then the validation layer might say ""Ok"" to both requests because the DB is not yet updated. What is the solution then?

+ +

Thanks in advance

+",294826,,174505,,43131.55625,43131.56806,Validation layer of clustered WebApi solution,,1,1,,,,CC BY-SA 3.0,, +365033,1,365045,,1/31/2018 14:21,,5,318,"

A pure function is assumed to produce the same outputs given the same inputs. Suppose an (otherwise) side-effect-free function computes with floating-point numbers. Due to numerical error, these outputs may differ (depending on the system, parallelism, compiler optimisations, ...). For instance: https://stackoverflow.com/questions/2342396/why-does-this-floating-point-calculation-give-different-results-on-different-mac. So technically the function is not referentially transparent.

+ +

A different but related issue is when we define an addition monoid over floating-point numbers. The key underlying assumption, in FP, is of addition associativity, which is violated under finite precision: cf. http://www.walkingrandomly.com/?p=5380 for a basic example.

+ +

Is there any practical relevance of such violations to functional programming? If not, should we simply assume referential transparency? Or should we strive for maximal RT, e.g., by choosing a language with better reproducibility of numerical results at the expense of their precision?

+",294434,,1204,,43131.67569,43146.90625,Can referential transparency be assumed when dealing with floating-point arithmetic?,,3,16,,,,CC BY-SA 3.0,, +365036,1,365037,,1/31/2018 10:39,,2,445,"

I am retrieving GeoPoints from a database and organizing them for use in the frontend. The GeoPoints are aggregated into segments, the segments into layers, the layers are mapped to drive IDs, and that map is mapped to car IDs. The frontend displays all of that nicely, with tooltips and such.

+ +

My REST GetMapping method looks like this.

+ +
public ResponseEntity<Map<Integer, Map<Integer, List<List<List<double[]>>>>>>
+
+ +

For new colleagues, this is insanely hard to understand. But I can't think of any other way to do it. Should I create dummy child classes of Map and List just so I can write something like

+ +
public ResponseEntity<CarMap<Integer, DriveMap<Integer, Layers<Segments<GeoPoints<double[]>>>>>>
+
+ +

How is something like this handled typically?

+",206293,Blauhirn,,,,43131.62083,How to design an API that returns nested lists?,,1,8,,,,CC BY-SA 3.0,, +365039,1,365040,,1/31/2018 14:59,,1,174,"

Imagine this use case:

+ +

I have a class with 50 attributes, of which 10 are relationships, plus 100 methods which perform calculations and return a value.

+ +

I need to save that data (including relationships and methods' return values) into a database for later retrieval. The idea is to have a snapshot of the current state of the object instance.

+ +

Currently I only need the data from 20 of those attributes and 25 method return values, but in the future I will need to retrieve more. I will probably never use some of the saved data.

+ +

I could easily save and retrieve data from a JSON.

+ +

Now imagine that after 3 months, having generated and saved 1,000,000 instances of this class, I eventually need to start changing things, for instance:

+ +
    +
  • Change the name of 2 attributes and 3 methods
  • +
  • Drop one relationship
  • +
  • Split the value of an attribute into 2 parts (for instance object.name into object.first_name and object.last_name)
  • +
  • Add 5 new methods and attributes
  • +
+ +

Meanwhile, I must not break data retrieval for historical data (that is, I need to update records or process them accordingly on fetching).

+ +

Having considered that use-case:

+ +

What's the best approach for data retention, manipulation and retrieval? (speaking in terms of databases, design patterns and other technical means)

+",230014,,230014,,43131.88611,43131.88611,Save not-so dynamic data into a database and retrieve with changes in the future,,2,4,,,,CC BY-SA 3.0,, +365050,1,,,1/31/2018 18:23,,3,272,"

I am building a fleet unit GPS system and currently need to figure out how to link my objects.

+ +

Here is the scenario:

+ +
    +
  1. Each fleet unit may have attached one or more gps devices.
  2. +
  3. Each fleet unit may have one or more engines.
  4. +
  5. Each fleet unit's engine may have one or two fuel flow meters connected to one or two gps devices. Depending on the engine type the fuel flow meter can handle both the forward fuel and backward fuel, but sometimes there needs to be two different fuel flow meters, one for the forward fuel and one for the backward fuel. What makes it even worse is that some gps device models have inputs for the both fuel flow meters but others don't, so when two fuel flow meters are needed there needs to be mounted two different gps devices.
  6. +
+ +

So if I have the following objects:

+ +

FleetUnit, GpsDevice, Engine, FuelFlowMeter

+ +

I want to find a way to link them without having a circular dependency.

+ +

If the FleetUnit object have a list of GpsDevice objects and list of Engine objects and GpsDevice object has a list of FuelFlowMeter and FuelFlowMeter has reference to a GpsDevice object and Engine object, I think there are too many circular dependency and the design is not clean.

+ +

Can you suggest me how to handle such scenario where few objects are behaving like a graph?

+",127436,,,,,43131.80694,Avoiding circular dependency,,2,3,,,,CC BY-SA 3.0,, 

I have a producer/worker queue (1 producer, many workers).
+I'm currently using Redis, but I don't mind switching to RabbitMQ or anything else.

+ +

A worker takes a task from the queue, runs a long-running job, and confirms that it has finished.

+ +

Now I have a new type of worker which allows cheaper queue processing, but it may not always be available, or there may not be many of them (AWS spot instances).
+I need to provide tasks from the queue only to the new worker type, but if a task fails to be processed within some time (e.g. 30 seconds), it should be handed to the other workers instead, to meet the SLA.

+ +

How should I implement this?

+ +

In short, I need to deliver a message to one queue, and if it stays there for more than 10 seconds, move it to another queue.
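One way to make that timing rule concrete is an in-memory sketch (JavaScript, with invented names): each task gets a deadline when it enters the cheap queue, and a periodic sweep moves overdue tasks to the fallback queue. (RabbitMQ's message TTL combined with a dead-letter exchange can implement this kind of move natively; the snippet below is only an illustration of the rule itself.)

```javascript
// In-memory sketch of the expiry rule. Names are hypothetical.
const cheapQueue = [];     // preferred (spot-instance) workers pull from here
const fallbackQueue = [];  // regular workers pull from here

function enqueue(task, now, ttlMs) {
    cheapQueue.push({ task, deadline: now + ttlMs });
}

// Move every overdue task to the fallback queue.
function sweep(now) {
    for (let i = cheapQueue.length - 1; i >= 0; i--) {
        if (cheapQueue[i].deadline <= now) {
            fallbackQueue.push(cheapQueue.splice(i, 1)[0].task);
        }
    }
}

enqueue('job-1', 0, 10000);    // expires at t = 10s
enqueue('job-2', 5000, 10000); // expires at t = 15s
sweep(12000);                  // at t = 12s, only job-1 is overdue

console.log(fallbackQueue);      // ['job-1']
console.log(cheapQueue.length);  // 1
```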

+",294870,,,,,43137.34097,Queue - move message from one queue to another after expiration,,1,0,,,,CC BY-SA 3.0,, +365060,1,365069,,1/31/2018 20:04,,3,1245,"

I'm currently in the process of architecting a small RPG-style dungeon crawl (in Unity), and am a little stuck on how to update various objects when things change, while not updating unrelated objects. For this question, I will be using my current project as an example, but the question itself can be applied to any situation where multiple different kinds of Observers each watch a single Subject.

+ +

The design I'm currently leaning towards for object interactions in my game implements the Observer Design Pattern, where I have a single Subject (let's call it GameEventManager), and each interactive object is an Observer of this Subject (this includes things like the player, enemies, interactive items on the ground, etc.).

+ +

When an event occurs (let's say a User hits Spacebar to shoot an arrow at a selected enemy), the current plan I have for the action's lifetime can be broken into steps:

+ +
    +
  1. The Input system (not an Observer, but one that holds a reference to the Subject) sends an Event to the GameEventManager for broadcasting to the right place. I haven't designed the Event class yet, to keep the problem simpler and in case I need to redesign things.
  2. +
  3. The GameEventManager broadcasts the Event to the Player with the relevant information.
  4. +
  5. The Player, realizing an Event came in saying to shoot an arrow, does some internal math (like subtracting from the total ammo it has, maybe performing some sort of animation visually), and emits a new Event to the GameEventManager telling it that the Player shot an arrow at ""enemy2"".
  6. +
  7. The GameEventManager broadcasts the Event to ""enemy2"".
  8. +
  9. ""Enemy2"" receives the Event, realizes that it got shot, and dies in some fashion.
  10. +
+ +

So the question is, how does GameEventManager know where to pass around all these different types of events (i.e. the movement event should only go to the player, and the arrow should only go to ""enemy2""). Also, once the event gets to its designated target(s), how do these Observers know what to do with the event (i.e. how does the player know to shoot an arrow, and not take damage?).

+ +
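One possible answer to the routing question, sketched in plain JavaScript rather than Unity C# (the class and event names are invented): key the subscriptions by target id and event type, so an arrow event addressed to ""enemy2"" never reaches ""enemy1"", and each Observer only registers handlers for the event types it cares about:

```javascript
// Minimal sketch: subscriptions keyed by target id and event type.
class GameEventManager {
    constructor() {
        this.handlers = {}; // { targetId: { eventType: [fn, ...] } }
    }
    subscribe(targetId, eventType, fn) {
        this.handlers[targetId] = this.handlers[targetId] || {};
        (this.handlers[targetId][eventType] =
            this.handlers[targetId][eventType] || []).push(fn);
    }
    emit(targetId, eventType, payload) {
        const byType = this.handlers[targetId] || {};
        (byType[eventType] || []).forEach(fn => fn(payload));
    }
}

// Usage: only enemy2 reacts to the arrow event addressed to it.
const bus = new GameEventManager();
const log = [];
bus.subscribe('enemy1', 'hitByArrow', () => log.push('enemy1 dies'));
bus.subscribe('enemy2', 'hitByArrow', () => log.push('enemy2 dies'));
bus.emit('enemy2', 'hitByArrow', { damage: 10 });

console.log(log); // ['enemy2 dies']
```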

I've done some research on Observer examples, but most are either obscenely complex and abstract, or are just simple ""Subject emits event to ALL observers, and ALL observers interact with EVERY event they get"", which is not what I want at all (an arrow shot at ""enemy2"" should not impact ""enemy1"" in any way).

+ +

Sorry if this is a little abstract, I like to have a plan before I start to actually build the code.

+ +

Also, as an aside, I am fully aware that in Unity I can have one GameObject directly interact with another, but the purpose of this entire project is to make everything as polymorphic and re-usable as possible, and most Unity tutorials simply don't touch on extensibility at all.

+",101233,,,,,43651.49097,Using Observer Pattern to selectively act on events,,5,2,,,,CC BY-SA 3.0,, +365063,1,386249,,1/31/2018 21:07,,3,254,"

In switching from a procedural background to ""FP in the small, OO in the large"", I'm grappling with the following problem. Suppose there are modules, each containing only numerical math functions without side effects. Some functions need the results from functions in another module. +Here's a toy example in pseudo-Scala:

+ +
//Maths Domain: 
+case class A(v: Decimal), B(..), ..., FF(..) //basic numeric outputs 
+case class EE(b: B, c: C, d: FF) //compound results
+
+object Primary{
+  def alpha(v: A): B  = ...  // maths
+  def beta(v: A): C = ...
+  ... }
+
+ object Secondary{
+  import Primary.{alpha, beta}
+  def one(a: A, d: DD): EE = EE(alpha(a), beta(a), two(d))
+  def two(d: DD): FF = ...  
+}
+
+//Core Domain:
+case class Foo(..)
+case class Baz(foo: Foo, e: EE)
+
+object Service {
+  import Secondary.one
+  def makeFoo(): Baz = {
+    val e = one(A(..), DD(..))
+    Baz(Foo(..), e) }
+}
+
+ +

The hardcoded dependencies seem like a Big Ball of Mud in the making. +So how should such dependencies be accommodated cleanly? (There have been relevant questions on SE, for example, Are there any alternatives to dependency injection for stateless classes?; Dependency Injection vs Static Methods; Is Functional Programming a viable alternative to dependency injection patterns?; Is Functional Programming a viable alternative to dependency injection patterns? and Is static universally "evil" for unit testing and if so why does resharper recommend it?. However, they seem to address other aspects of the problem, mainly for languages other than Scala.)

+ +

Specifically, I'm interested

+ +
    +
  • Whether the dependency problem emerges in the Secondary module, or, if the Primary functions +are independent and won't change, only in the +Service module
  • +
  • Is the use of import here a design smell (and very different, eg, from importing java.math)?
  • +
+ +

Would some of these work, in large applications, or be an overkill?

+ +
    +
  1. Make Service call the methods in Primary and supply the results to Secondary. Then Secondary is independent, but Service is exposed to lower-level details
  2. +
  3. Currying and supplying functions as arguments. This increases the number of method parameters and possibly exposes implementation details
  4. +
  5. Turn both maths modules into traits and define object MathService extends Primary with Secondary. This is injected either in makeFoo() or into Service, which would become a class. However, this could expose Service to unnecessary methods, violating ""Interface Segregation Principle""
  6. +
  7. Full-on composition: object Program extends MathService with Service. This is in line with the algebraic approach in ""Functional and Reactive Domain Modeling"" by D. Ghosh. My reservation is about the cohesion of composing modules across bounded contexts
  8. +
  9. Cake patterns
  10. +
  11. Reader Monad
  12. +
  13. Standard DI containers, but for functional modules
  14. +
+ +

I'd appreciate guidance on these or another solution, both generic and Scala-specific.

+",294434,,294434,,43132.75208,43494.17847,Dependencies between functions-only modules: hardcoding vs alternatives,,1,1,,,,CC BY-SA 3.0,, +365064,1,,,1/31/2018 21:09,,3,1851,"

I've been tasked with refactoring a console application, which is constantly running on a server and receiving messages from a service bus.

+ +

Right now, it just parses the incoming message, and based on a property, will use a switch statement to call one of many different functions (about 70 at the moment, always growing). One problem is that if a function fails, it's not retried. Not to mention just the ugliness of one giant switch statement.

+ +

I'm leaning towards using the Command Pattern to rectify this (https://scottlilly.com/c-design-patterns-the-command-pattern/), but have also considered a pub/sub pattern to handle these functions.

+ +
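As a language-agnostic sketch (shown here in JavaScript with made-up handler names, rather than the C# of the linked article): the giant switch becomes a lookup table of handlers, and the retry logic is applied once in the dispatcher instead of per case:

```javascript
// Registry replacing the switch: message type -> handler function.
// New message types are added by registering here, not by editing dispatch.
const handlers = {
    createOrder: msg => 'created ' + msg.id,
    cancelOrder: msg => 'cancelled ' + msg.id,
    // ...dozens more
};

// Uniform retry wrapper applied once, not duplicated per case.
function dispatch(msg, maxAttempts = 3) {
    const handler = handlers[msg.type];
    if (!handler) throw new Error('no handler for ' + msg.type);
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return handler(msg);
        } catch (err) {
            if (attempt === maxAttempts) throw err; // give up after N tries
        }
    }
}

const result = dispatch({ type: 'createOrder', id: 7 });
console.log(result); // 'created 7'
```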

Anybody know what the best architecture pattern might be for this situation?

+",294885,,,,,43195.45625,How to handle large switch statement running several different commands?,,3,8,0,,,CC BY-SA 3.0,, +365067,1,,,1/31/2018 22:00,,-2,451,"

In my company I proposed NodeJS for developing a microservice that acts as a bridge between the front-end and a third-party chat API. That microservice will receive large numbers of messages (about 1000 or more) at once, which will have to be queued using Amazon Simple Queue Service.

+ +

I have already developed a proof of concept that does something similar with NodeJS; however, the company is not willing to use NodeJS, stating that the rest of the technical team doesn't know the technology.

+ +

Is NodeJS a better option for developing such a service? +What potential drawbacks could PHP have if we use it in this case?

+",294883,,,,,43132.10625,PHP or NodeJS for a chat app with message queue,,1,9,,,,CC BY-SA 3.0,, +365068,1,,,1/31/2018 22:03,,3,1029,"

This is my first post on here. I am wondering if there is any disadvantage to using WebSockets as a communication method for a non-web-based client application to connect to a server.

+ +

I am looking at designing a turn-based game, using a client-server approach. I would like to design the game server so that different types of clients can connect to it. Some might be web-based (in which case a websocket seems ideal); however, others might not be browser-based. If I could use websockets for all of them, I would think that might simplify the server-side implementation.

+ +

However, there must be some downsides to using websockets, otherwise every client-server application would be using them, right?

+ +

(btw, I am planning to implement the server in Java)

+",294872,,,,,43440.41528,Any disadvantage to using websockets for non-web client apps?,,1,3,,,,CC BY-SA 3.0,, +365075,1,366963,,2/1/2018 0:28,,1,680,"

I've implemented the model and am currently working on the UI for a scheduling application. This is just an in-house application for work, used by ~5 people who may occasionally have the application running at the same time.

+ +

It will be run from the network and use a SQLite database (I understand the concurrency issues and feel that it won't be an issue for our use). It will look something like this:

+ +
+------------------------------------------------------------+
+|       |1/1/2018 | 1/2/2018 | 1/3/2018 | 1/4/2018 | 1/5/2018|
++------------------------------------------------------------+
+|Jack   |Available|Leave     |Leave     |Available |Leave    |
+|Jim    |Available|Available |Available |Available |Available|
+|John   |Leave    |Leave     |Available |Available |Available|
++------------------------------------------------------------+
+
+ +

The model works correctly with a console app. I'm currently implementing the views & viewmodels. One issue that I've thought about for a few days is what to do about stale data.

+ +

Eventually I will implement a background thread that will check the database at user-set intervals for a refresh. This would work fine for reads, but as I discuss below, doesn't this negate an in-memory collection in my model?

+ +

However, I'm not sure what to do about writes. When a user makes a change to someone's availability, I assume I want to persist that data immediately so other users can be informed. But doesn't this negate the collections in the model (and the model really), if I just persist directly from the viewmodel to the database?

+ +

Additionally, what do I do if two users are making changes and I get into a race condition where the 2nd one makes a write using stale data because he never received the update from the first user?

+ +
-> User clicks drop down for Jack on 1/1/2018 
+-> Drop down displays list of  availability options (e.g. leave, available, unavailable, etc.)
+-> user selects different availability option
+-> updates viewmodel property with new selected availability
+-> updates Jack's schedule in the model 
+-> ??? Persist to database immediately and/or do I even need a collection of schedules?
+
+ +

I'd rather do this myself and handle these issues, as I get to learn a lot by doing it, so I'd rather not use an ORM framework such as Entity Framework.

+",274856,,274856,,43132.07431,43162.83681,Dealing with stale data persistence,,2,0,,,,CC BY-SA 3.0,, +365076,1,,,2/1/2018 1:03,,1,29,"

I'm creating a database schema for an internet store and I wonder how best to create the tables ""images"" and ""products"". The simpler way would be one-to-many, that is:

+ +
 images(....., product_id references products(id)) 
+
+ +

The more flexible way is to create it as many_to_many. However, I can't think of any scenario where I'll really need this products <-> many_to_many <-> images relationship, that is, where I'll reuse images for more than one product.

+ +

What would you recommend? Is there anything I'm missing? When will I need many_to_many in my case?
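For what it's worth, the one-to-many variant is also easy to migrate to many-to-many later (introduce a join table and backfill it from product_id), so starting simple costs little. A sketch of the one-to-many schema using Python's built-in sqlite3 module (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('''
    CREATE TABLE products (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )''')
conn.execute('''
    CREATE TABLE images (
        id         INTEGER PRIMARY KEY,
        url        TEXT NOT NULL,
        product_id INTEGER NOT NULL REFERENCES products(id)
    )''')

conn.execute('INSERT INTO products (id, name) VALUES (?, ?)', (1, 'Snowboard'))
conn.executemany(
    'INSERT INTO images (url, product_id) VALUES (?, ?)',
    [('/img/board-front.jpg', 1), ('/img/board-back.jpg', 1)],
)

# All images for one product: the common query in a store front end.
urls = [row[0] for row in conn.execute(
    'SELECT url FROM images WHERE product_id = ? ORDER BY id', (1,))]
```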

+",294898,,294898,,43132.05903,43132.06528,Images for a product in Internet Store -- one to many or many to many?,,1,1,,,,CC BY-SA 3.0,,

I would like to ask what the preferred way is when you have code like this (this is a distributed transaction, since I cannot delete the item and publishToKafka atomically), and I don't want to have event sourcing right now:

+ +
if(exists(Item))
+   DeleteItem(Item)
+   PublishEventKafka(ItemDeleted)
+
+ +

Now obviously, if a crash happens between DeleteItem and PublishEventKafka, then on restart PublishEventKafka is never called, because if(exists) returns false.

+ +

I was also thinking about:

+ +
if(exists(Item))
+   PublishEventKafka(ItemDeleted)
+   DeleteItem(Item)
+
+ +

This, however, is a lie because the item is still there, so my conclusion was:

+ +
if(exists(Item) || DeleteInProgress(Item))
+   DeleteItem(Item, InProgress)
+   PublishEventKafka(ItemDeleted)
+   DeleteItem(Item, Finish)
+
+ +

I.e., I would change the state of the object to ""deleting"", and after the event was sent I would delete the actual item. Others would also consider it deleted if they tried to read Item...

+ +

Anyway, I think there is a more generic way to solve this kind of problem or isn't there?
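The mark-publish-finish idea above is essentially a tiny outbox: because the in-progress state survives a crash, a restart can resume publishing and then complete the delete, and running the whole routine twice is harmless. A minimal Python sketch (the dicts stand in for the store and for Kafka; real Kafka gives at-least-once delivery, so consumers should still deduplicate):

```python
# store: item_id -> state ('live' or 'deleting'); published: ids sent to Kafka
store = {'item-1': 'live'}
published = []

def publish_event_kafka(item_id):
    # Stand-in for the real producer; the check models consumer-side dedup.
    if item_id not in published:
        published.append(item_id)

def delete_item(store, item_id):
    # Phase 1: mark as deleting (readers now treat the item as gone).
    if store.get(item_id) == 'live':
        store[item_id] = 'deleting'
    # Phase 2: publish the event, then finish the delete.
    if store.get(item_id) == 'deleting':
        publish_event_kafka(item_id)
        del store[item_id]

# Simulate a crash after phase 1: only the mark happened for item-2.
store['item-2'] = 'deleting'

delete_item(store, 'item-1')   # normal path
delete_item(store, 'item-2')   # restart path resumes and still publishes
delete_item(store, 'item-1')   # re-running is a no-op (idempotent)
```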

+",293767,,293767,,43132.47361,43132.47361,idempotency and store with notify,,0,6,,,,CC BY-SA 3.0,, +365087,1,,,2/1/2018 6:58,,3,331,"

I am reading about SOLID principles and have just read that Dependency Inversion (DI, to be distinguished here from dependency injection, which is one way of achieving the inversion) is an extension of the Open-Closed Principle (OCP). What exactly is meant by that, given that OCP is basically about making a class extensible without touching its original code?
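One way to see the connection, sketched in Python (all names are made up): once the high-level class depends only on an abstraction (DIP), new behaviour is added by writing a new implementation, never by editing the consumer, which is exactly what OCP asks for.

```python
class Notifier:
    # The abstraction both sides depend on.
    def send(self, message):
        raise NotImplementedError

class EmailNotifier(Notifier):
    def send(self, message):
        return 'email: ' + message

class OrderService:
    # High-level policy: depends on the abstraction, not a concrete class.
    def __init__(self, notifier):
        self.notifier = notifier

    def place_order(self, item):
        return self.notifier.send('ordered ' + item)

# Extension without modification: add a new implementation...
class SmsNotifier(Notifier):
    def send(self, message):
        return 'sms: ' + message

# ...and OrderService is reused unchanged (closed for modification).
by_email = OrderService(EmailNotifier()).place_order('book')
by_sms = OrderService(SmsNotifier()).place_order('book')
```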

+",294256,,294256,,43132.36181,43133.8125,How Dependency inversion is an extension of OCP?,,1,5,,,,CC BY-SA 3.0,, +365089,1,365107,,2/1/2018 7:43,,5,245,"

I'm working on a Scala project and Wartremover shows an error about Option#get usage in my code:

+ +
Option#get is disabled - use Option#fold instead
+
+ +

While I do understand how get should often be avoided, I think there are cases where it's reasonable to use it, like my current one: First, I create an answer item (in the database), then I want to return the freshly created answer by reading it from there.

+ +
def create(answer: Answer): Future[Answer] = {
+  writeToDb(answer) // returns Future[Long]
+    .flatMap(
+      readFromDb(_) // returns Future[Option[Answer]]
+        .map(_.get) // Wartremover complains here
+    )
+}
+
+ +

My understanding is that get should generally be avoided because it breaks control flow in case of None, as an exception is thrown. +However, I expect my Option here to always contain an answer, as I just created it. +If it's not there, there's likely a bug in my code or an issue with the database. +In such a case it would be unreasonable to fall back to a default value or null. I'd much rather throw an exception or directly map to a failing future.

+ +

An alternative to get is explicitly matching against the option:

+ +
def create(answer: Answer): Future[Answer] = {
+  writeToDb(answer) // returns Future[Long]
+    .flatMap(
+      readFromDb(_) // returns Future[Option[Answer]]
+        .flatMap {
+          case Some(answer) => Future.successful(answer)
+          case None => Future.failed(
+            new IllegalStateException(""Failed to load answer after creation""))
+        }
+    )
+}
+
+ +

But this is way more verbose while achieving almost the same as a simple _.get.

+ +

Am I missing something here? Or is this just a false positive from Wartremover?

+",131840,,,,,43132.55069,Is using Option#get really a bad idea here?,,1,12,,,,CC BY-SA 3.0,, +365092,1,365102,,2/1/2018 9:23,,3,2410,"

While trying to solve an issue, explained on the StackOverflow forum, somebody advised me to use dependency injection. For personal reasons, the moment a person mentions to me the usage of a design pattern, I always start thinking of very difficult constructions. After some thinking and investigating, I've invented following construction on my own (pseudo-code):

+ +

General header file

+ +
interface ILogger {
+public:
+  void writeMsg(std::string);
+};
+
+class Application_Logger : ILogger {
+public:
+  void writeMsg(std::string output) {
+    std::printf(stringutils::format(""Application : %s"", output));
+  }
+};
+
+class Test_Logger : ILogger {
+public:
+  void writeMsg(std::string output) {
+    std::printf(stringutils::format(""Test : %s"", output));
+  }
+};
+
+ +

Application.h header file:

+ +
ILogger logger = nullptr;
+
+ +

Unit_test.h header file:

+ +
ILogger logger = nullptr;
+
+ +

Application_startup.cpp:

+ +
if (logger == nullptr)
+  ILogger logger = Application_Logger();
+
+ +

Unit_testing_startup.cpp:

+ +
if (logger == nullptr)
+  ILogger logger = Test_Logger();
+
+ +

Common_used.cpp:

+ +
logger.writeMsg(""<information>"");
+logger.writeMsg(""<more information>"");
+
+ +

Application output

+ +
Application : <information>
+Application : <more information>
+
+ +

Unit test output

+ +
Test : <information>
+Test : <more information>
+
+ +

I have no idea whether or not this works, but I believe it does (assuming that it is possible to run a piece of startup code that fills in the interface pointer).
In my opinion, this is not a special construction but basic usage of interfaces, not worthy of being called a design pattern. Am I correct, and if not, what needs to be added/modified in order to turn this into a dependency injection pattern?

+ +

After some initial comments, I'm starting to understand why this is not dependency injection, so here is another example, attempting a very simple constructor injection (just being able to switch between two ILogger implementations):

+ +

General header file

+ +
interface ILogger {
+public:
+  void writeMsg(std::string, int);
+};
+
+class Simple_Logger : ILogger {
+public:
+  void writeMsg(std::string output, int severity) {
+    std::printf(stringutils::format(""[%d] : %s"", severity, output));
+  }
+};
+
+class Detailed_Logger : ILogger {
+public:
+  void writeMsg(std::string output, int severity) {
+    if (severity == 0) {
+    std::printf(stringutils::format(""Very important : %s"", output));
+    } else {
+    std::printf(stringutils::format(""[%d] : %s"", severity, output));
+    }
+  }
+};
+
+ +

Application.h header file:

+ +
ILogger logger = nullptr;
+
+ +

Application_startup.cpp (based on args):

+ +
if (logger == nullptr) {
+  if (args == ""Simple"")
+    logger = Simple_Logger();
+  else
+    logger = Detailed_Logger();
+}
+
+ +

If this is correct, it means that with DI your application's processing is based on interfaces, and that you let outer data (arguments, configuration file content, interactive input, ...) decide which implementation is chosen for those interfaces. Am I right this time? (On the Wikipedia page, it's not clear where the mentioned service comes from.)
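For contrast, the defining trait of constructor injection is that the consumer receives its dependency as a parameter instead of reading a global or choosing an implementation itself; only the startup code (the composition root) does the choosing. Sketched in Python for brevity (the classes loosely mirror the loggers above):

```python
class SimpleLogger:
    def write_msg(self, output, severity):
        return '[%d] : %s' % (severity, output)

class DetailedLogger:
    def write_msg(self, output, severity):
        if severity == 0:
            return 'Very important : %s' % output
        return '[%d] : %s' % (severity, output)

class Application:
    # The dependency is injected; Application never picks an implementation.
    def __init__(self, logger):
        self.logger = logger

    def run(self):
        return self.logger.write_msg('started', 0)

# The composition root (e.g. main(), driven by args/config) does the choosing.
def build_app(args):
    logger = SimpleLogger() if args == 'Simple' else DetailedLogger()
    return Application(logger)

simple_out = build_app('Simple').run()
detailed_out = build_app('Detailed').run()
```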

+",250257,,250257,,43132.45,43132.51111,How is dependency injection different from simple interface usage?,,3,2,,,,CC BY-SA 3.0,, +365103,1,365118,,2/1/2018 12:21,,1,1258,"

Please see the code here and specifically

+ +
using System.Collections.Generic; 
+
+ namespace DddInPractice.Logic.Common 
+ { 
+     public abstract class AggregateRoot
+     { 
+         private readonly List<IDomainEvent> _domainEvents = new List<IDomainEvent>(); 
+         public virtual IReadOnlyList<IDomainEvent> DomainEvents => _domainEvents; 
+
+
+         protected virtual void AddDomainEvent(IDomainEvent newEvent) 
+         { 
+             _domainEvents.Add(newEvent); 
+         } 
+
+
+         public virtual void ClearEvents() 
+         { 
+             _domainEvents.Clear(); 
+         } 
+     } 
+ } 
+
+ +

I am debating whether to use this class in my project. I like the idea because it encapsulates Domain Events. If I use it then all Aggregate Root classes will derive from it. Alternatively, I could use a marker interface like this (which all Aggregate Roots will implement):

+ +
public interface IAggregateRoot
+{
+}
+
+ +

Should I:

+ +
  1. Create the base class, or
  2. Create the interface, or
  3. Do neither?
+ +

I like the idea of marking my Aggregate Roots.

+",65549,,340831,,43658.525,43658.525,Should I have an interface or class for my aggregate root?,,1,5,,,,CC BY-SA 4.0,, +365105,1,,,2/1/2018 12:41,,-2,56,"

I'm currently working on a REST service written in C++/Qt, and I'm also thinking about the future web UI that will use this service.

+ +

I admit I'm not an expert in HTML/JS, since I've always worked on the back-end side, and I'm not sure what modern UI patterns look like today. As a first question I ask: is an HTML/JS-only UI (accessing the REST service directly) a better or worse choice than an HTML/JS UI with a server-side PHP layer that acts as an intermediary for consuming the REST service (JS asks PHP, which asks the REST service)?

+ +

Thanks in advance. +Ivan.

+",294939,,294939,,43132.52986,43134.36181,Frontend design for rest service,,1,1,,,,CC BY-SA 3.0,, +365106,1,,,2/1/2018 12:46,,4,206,"

First of all, please excuse my English mistakes; it's not my native language. Second, I couldn't find a better title to summarize my inquiry, so let me explain it below:

+ +

Let's say we have a software that invoices some products. The invoicing process highly depends on the current legislation. For example, the current law may require that when you create a bill for a client, it is mandatory to have the client's address filled. But the law is expected to change, such that after the 01-01-2018, when invoicing a product you must fill other client's information. This change could be valid until, let's say, 05-05-2018. After this date, the law will change again. However the software must be backward compatible, meaning that after 05-05-2018 you should still be able to create invoices based on the law requirement from 2017, and so on.

+ +

Obviously, I cannot add a pile of if-elseif-else statements based on the current date because that won't respect the ""Open-Closed Principle"". What could be a proper design such that the application will be open to be extended, but closed for modifications?
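One open-for-extension approach is to model each legal regime as its own validation strategy tagged with the date range in which it applies; supporting a new law then means registering one new strategy rather than editing existing code. A rough Python sketch (the dates and rules are purely illustrative):

```python
from datetime import date

class Regime2017:
    # Law in force through 2017: the client's address is mandatory.
    start, end = date(2017, 1, 1), date(2017, 12, 31)
    def validate(self, invoice):
        return 'address' in invoice

class Regime2018:
    # Hypothetical law from 01-01-2018 to 04-05-2018: tax id is mandatory.
    start, end = date(2018, 1, 1), date(2018, 5, 4)
    def validate(self, invoice):
        return 'tax_id' in invoice

# Open for extension: new regimes are appended, old code stays untouched.
regimes = [Regime2017(), Regime2018()]

def validate_invoice(invoice, invoice_date):
    for regime in regimes:
        if regime.start <= invoice_date <= regime.end:
            return regime.validate(invoice)
    raise ValueError('no regime covers %s' % invoice_date)

ok_2017 = validate_invoice({'address': 'Main St 1'}, date(2017, 6, 1))
ok_2018 = validate_invoice({'address': 'Main St 1'}, date(2018, 2, 1))
```

Because invoices carry their own date, this also keeps the backward compatibility described above: a 2017-dated invoice is always validated by the 2017 strategy, no matter when it is created.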

+",294940,,1204,,43132.72014,43133.61806,Software design strategy for a functionality that depends on a temporal situation,,4,6,0,,,CC BY-SA 3.0,, +365109,1,365137,,2/1/2018 13:34,,0,496,"

I have a small question. Which approach is correct in the context of SOLID principles? 1 or 2 ?

+ +

In the first case, the ""CreateTask"" method does not return the Task object, but places it on the list that it accepts as the method argument.

+ +

In the second case, the ""CreateTask"" method does not accept arguments, but returns a task object that must be placed in the Tasks list

+ +

1.

+ +
var tasks = new List<Task>();
+CreateTask1(tasks);
+CreateTask2(tasks);
+CreateTask3(tasks);
+
+private void CreateTask1(List<Task> tasks)
+{
+   // some other logic about new task object       
+   tasks.Add(new Task());
+}
+
+private void CreateTask2(List<Task> tasks)
+{
+   // some other logic about new task object       
+   tasks.Add(new Task());
+}
+
+private void CreateTask3(List<Task> tasks)
+{
+   // some other logic about new task object       
+   tasks.Add(new Task());
+}
+
+ +

2.

+ +
var tasks = new List<Task>();
+tasks.Add(CreateTask1());
+tasks.Add(CreateTask2());
+tasks.Add(CreateTask3());
+
+private Task CreateTask1()
+{
+   // some other logic about new task object
+   return new Task();
+}
+
+private Task CreateTask2()
+{
+   // some other logic about new task object
+   return new Task();
+}
+
+private Task CreateTask3()
+{
+   // some other logic about new task object
+   return new Task();
+}
+
+",294945,,,,,43132.78403,S.O.L.I.D. principles,,3,3,,,,CC BY-SA 3.0,, +365112,1,,,2/1/2018 13:51,,2,216,"

So an Instanced API is one that behaves like an object. So for example:

+ +
foo* GetInstancedAPI();
+void MemFuncSetter(foo* fooThis, const int arg);
+int MemFuncGetter(const foo* fooThis);
+
+ +

This is as opposed to a non-instanced API which would depend upon look-ups:

+ +
int GetInstancedAPI();
+void MemFuncSetter(const int index, const int arg);
+int MemFuncGetter(const int index);
+
+ +

A little background about the situation, this is a C API, that's being used to wrap a C++ implementation. So internally to the implementation I am working with objects. So I've tried to think through the ramifications of each, the biggest issues I can think of are:

+ +
  • How would I handle callbacks?
  • Is there a way to minimize the lookup cost?
+ +

Edit:
There have been a lot of requests for clarification: foo* is really a void pointer in the C interface which will be reinterpret_cast into a pointer to the actual C++ object, thus it must be passed in.
The functions that take an int are intended to index into a vector of objects in the wrapped C++.

+",98845,,98845,,43133.59097,43133.59097,Are Instanced APIs a Problem in a C Interface?,,3,6,,,,CC BY-SA 3.0,, +365117,1,365449,,2/1/2018 14:45,,-4,3309,"

Is there some reliable way to detect device/browser/OS of web page visitors except using the user agent string?

+ +

This is not for rendering/functionality of the web page/application, but only for statistics (what percentage of visitors use iPad, iPhone, PC, Mac, Chrome, Edge, Firefox, etc.).

+ +

This can be done either client side in JavaScript or server side in .NET.

+",292082,,,,,43138.37708,Reliable device/browser/OS detection,<.net>,2,2,,43301.49583,,CC BY-SA 3.0,, +365119,1,365121,,2/1/2018 15:23,,40,13545,"

I have read that using ""new"" in a constructor (for objects other than simple value objects) is bad practice, as it makes unit testing impossible (those collaborators then need to be created too and cannot be mocked). As I am not really experienced in unit testing, I am trying to gather some rules that I will learn first. Also, is this a rule that is generally valid, regardless of the language used?
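A minimal illustration of the trade-off, sketched in Python (the classes are made up): the first class hard-wires its collaborator by constructing it in the constructor, while the second receives it, so a test can pass in a fake.

```python
class RealGateway:
    def charge(self, amount):
        # Imagine a slow network call here.
        return 'charged %d' % amount

class CheckoutHardwired:
    def __init__(self):
        # 'new' inside the constructor: tests cannot swap this out.
        self.gateway = RealGateway()

class Checkout:
    def __init__(self, gateway):
        # Injected collaborator: tests can pass anything with charge().
        self.gateway = gateway

    def pay(self, amount):
        return self.gateway.charge(amount)

class FakeGateway:
    # Test double recording calls instead of hitting the network.
    def __init__(self):
        self.calls = []
    def charge(self, amount):
        self.calls.append(amount)
        return 'fake'

fake = FakeGateway()
result = Checkout(fake).pay(42)
```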

+",294256,,,,,44164.97708,"Is using ""new"" in the constructor always bad?",,7,12,11,,,CC BY-SA 3.0,, +365127,1,365129,,2/1/2018 17:24,,1,1079,"

Problem

+ +

I have a Python library which sends XML requests to an external API. If an issue occurs, the API responds with an error containing an error code and description with error details.

+ +

These errors can be caused by anything from malformed XML data, an invalid username/password, editing a read-only attribute, or trying to load data that doesn't exist.

+ +
<Response Status=""Failure"" Action=""LoadByName"">
+    <Error Message=""WFP-00235 The job name does not exist in the database."" ErrorCode=""17"" AtIndex=""0""/>
+</Response>
+
+ +

Current approach

+ +

Because I can't control the response I'll receive, my library currently handles any exceptions by raising a RuntimeError with the message received from the API.

+ +
import xml.etree.ElementTree as ElementTree
+
+def parse_response(xml):
+    # Check if XML response contains any errors
+    if xml.find("".//Error"") is not None:
+        raise RuntimeError(xml.find("".//Error"").get(""Message""))
+    ...
+
+ +
+

RuntimeError: WFP-00235 The job name does not exist in the database.

+
+ +

One major flaw with this approach is that the ErrorCode isn't preserved when an exception is raised. This makes it impossible to catch specific API errors, as each error is essentially a generic exception.

+ +

Question

+ +

How should I handle XML errors that come from an external API? Can I raise/catch errors based on their numeric ErrorCode?
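One option (a sketch, not the only approach) is a custom exception type that carries the numeric code, so callers can catch ApiError and branch on error_code, or later subclass it per code family:

```python
import xml.etree.ElementTree as ElementTree

class ApiError(RuntimeError):
    # Carries the numeric ErrorCode alongside the message.
    def __init__(self, message, error_code):
        super().__init__(message)
        self.error_code = error_code

def parse_response(xml):
    error = xml.find('.//Error')
    if error is not None:
        raise ApiError(error.get('Message'), int(error.get('ErrorCode')))

# Build the sample failure response from the question programmatically.
response = ElementTree.Element('Response')
response.set('Status', 'Failure')
error = ElementTree.SubElement(response, 'Error')
error.set('Message', 'WFP-00235 The job name does not exist in the database.')
error.set('ErrorCode', '17')

try:
    parse_response(response)
    caught = None
except ApiError as exc:
    caught = exc
```

Because ApiError subclasses RuntimeError, existing callers that catch RuntimeError keep working, while new code can inspect caught.error_code (17 here) to react to specific API failures.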

+",168744,,168744,,43214.85417,43214.85417,How should I handle error codes from an external XML API?,,2,2,,,,CC BY-SA 3.0,, +365128,1,365135,,2/1/2018 17:41,,2,230,"

With a dictionary of thousands of words and a small list of letters, I am trying to find the smallest number of words that together use all the given letters, assuming my dictionary covers all of them.

+ +

The first step is obviously to remove all words with none of the letters. Then I approached this by calculating the relative rareness of each letter for the remaining words, and sort words by combined relative rarity of the letters they contain relative to their length. So this would put words with relatively rare letters first, assuming it will be increasingly easy to satisfy the remaining, more common letters. After picking the best word with the most ""coverage"" of letters, I remove those from the list of letters, then loop to recalculate the words rareness for the remaining letters until all letters are satisfied.

+ +

This certainly works, but sometimes I get odd results that show how this is not ideal. For example a 5 letter requirement would still find a single best match covering all letters, but adding a 6th letter, all of a sudden I get 4 words returned, because of the sequence of returned words and rareness shifts in an unfavorable way.

+ +

What strategy could I use to improve this algorithm? Also I currently iterate over all words several times per loop, counting rareness etc, which gets increasingly expensive with a vast dictionary. Any suggestions welcome :)
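For reference, this is the set cover problem, which is NP-hard, so no known efficient algorithm is guaranteed optimal. The standard greedy heuristic (repeatedly pick the word that covers the most still-missing letters) carries a logarithmic approximation guarantee and avoids recomputing rarity scores on every pass. A small Python sketch:

```python
def fewest_words_covering(letters, words):
    # Greedy set cover: repeatedly take the word that covers the most
    # letters not covered yet. Not guaranteed optimal, but a solid baseline.
    missing = set(letters)
    chosen = []
    # Drop words sharing no letters with the target up front.
    candidates = [w for w in words if missing & set(w)]
    while missing and candidates:
        best = max(candidates, key=lambda w: len(missing & set(w)))
        if not missing & set(best):
            break   # dictionary cannot cover the remaining letters
        chosen.append(best)
        missing -= set(best)
    return chosen, missing

chosen, uncovered = fewest_words_covering(
    'abcdef', ['fab', 'decaf', 'bed', 'cab'])
```

Precomputing each word as a frozenset of letters once (instead of rescoring every word each loop) removes most of the repeated-iteration cost mentioned above.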

+",60437,,,,,43132.87292,Find the least words that will use all given letters,,3,7,,,,CC BY-SA 3.0,, +365131,1,,,2/1/2018 18:05,,3,87,"

Let's say I have thousands of pdfs that are each about 30k words written in conversational English. In each of the pdfs there is a name / names of a person/people who snowboard. There are also many other names. I need to extract the name(s) of the snowboarder(s) from any future pdfs. What are some tools / methods you could approach this problem with?

+ +

I just started learning about Natural Language Processing and Machine Learning a couple weeks ago. I have been using Python's NLTK to filter my data and have used scikit-learn for my classification and multilabel classification solutions pertaining to other questions I want to answer on the same data set, but this snowboarder example is not classification. I know I can strictly use an NLP solution but I want to try to have a ML model recognize the patterns in the text because all the documents are formatted similarly (and I have a lot of documents to train with and I am willing to manually label).

+ +

I was able to get some success training a word2vec neural net on each individual document. I then checked the model similarity (model.wv.similarity(HUMAN_NAME, 'snowboard')) between each name in a list of human names and the word 'snowboard', and the most similar has been my answer so far. I know there has to be a more eloquent solution. I know Sequence to Sequence models and topic modeling might be my next steps. Can someone point me in the right direction if they have a better idea?

+",294970,,294970,,43132.75903,43132.75903,Software design strategy for a machine learning tool that outputs a subset of the text input (Information Extraction)?,,0,1,,,,CC BY-SA 3.0,, +365133,1,,,2/1/2018 18:16,,9,1400,"

First off, I want to say that I am used to doing Procedural Programming as my hobby - I'm trying to learn OOP in a couple languages, and understand the theory, just not the practice.

+ +

I have a pet-project I wanted to build, specifically in PHP with a database backend (didn't care which one). My main reason was for the app to be used on any device, so a WebApp seemed like the logical choice.

+ +

I understand that to build maintainable PHP WebApps, they should use OOP, classes, frameworks, libraries, etc. This sounds logical, so I decided to try some of the popular ones. However, after an entire weekend of just trying them and trying to get through the tutorials, I'm left both confused and frustrated trying to adapt the tutorials to my small project.

+ +

I decided, mainly for a proof-of-concept, to build the app in another program (Microsoft Access), and accomplished my main goals in only a couple hours - except the Web part.

+ +

My question is, should I follow the path of what I know, then try to implement correct coding practices, or should I start with the good coding practices, and try to fudge my way through? For this project, I would like it to be Open Sourced on GitHub, so I would be open to other people using and changing my code, but I also know that if code is written poorly, it would be hard to gather coders to help.

+",40410,,1204,,43132.76944,43134.46111,"Follow the path of what I know, then try to implement correct coding practices, or start with good coding practices and try to fudge my way through?",,5,10,3,,,CC BY-SA 3.0,, +365140,1,365148,,2/1/2018 20:34,,1,74,"

I'm working on a project where I have two programs which need to invoke methods on some of each other's objects.

+ +

I do this by sending JSON objects over a TCP connection. These objects have a receiverID, methodName, and then a list of parameters. Each program has a HashMap from receiverIDs to actual object instances, and then I use reflection to invoke the method with the supplied methodName on the correct object instance.

+ +

This works, but it forces the methods that get called remotely to have an array of strings as their parameter. Then I have to look at the order that the parameters are packed into the array and read through it on the other end to parse each parameter into what the actual parameters of the method are.

+ +

This definitely doesn't feel like a very clean way to accomplish my goal, but I'm not sure how else to do it. I suppose I could mark each parameter with a type and then cast them based on the type, but I would have to add code for each new type of parameter that I want to pass.
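One way around the positional string array is to send the parameters as a JSON object of name/value pairs and let reflection bind them by name; JSON's own types (number, string, boolean) then survive the trip. In Java this would map to looking up the Method and its parameter types before invoking; here is the idea in Python, where keyword arguments make the binding trivial (all names are illustrative):

```python
import json

class ScheduleService:
    # One of the receiver objects registered in the id -> instance map.
    def set_availability(self, person, day, available):
        return '%s is %s on day %d' % (
            person, 'free' if available else 'busy', day)

receivers = {7: ScheduleService()}

def handle_message(raw):
    # The wire format carries named, typed parameters instead of a
    # positional array of strings.
    msg = json.loads(raw)
    target = receivers[msg['receiverId']]
    method = getattr(target, msg['methodName'])
    return method(**msg['params'])

request = json.dumps({
    'receiverId': 7,
    'methodName': 'set_availability',
    'params': {'person': 'Jack', 'day': 3, 'available': True},
})
reply = handle_message(request)
```

The receiving side no longer depends on parameter order, and adding a parameter to a method only changes that method's signature, not any generic unpacking code.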

+",294985,,,,,43132.93889,Is there a simple way to call a method on an object in another program with a given id over TCP?,,1,4,,,,CC BY-SA 3.0,, +365143,1,,,2/1/2018 21:00,,1,32,"

I am working on a Java server which has a bunch of seemingly distinct ""services"", a lot of which are effectively just classes. From a software architecture point of view, I wonder what the opinion is on such a setup. Is there any advantage to be gained by splitting the related services off into their own servers? Or is the pain involved in maintaining multiple processes not worth it? I realize the answer might very well be ""it depends""; however, I was hoping to establish some general guidelines/principles.

+",175938,,,,,43132.875,Running Multiple distinct Services on a single server process,,0,1,,,,CC BY-SA 3.0,, +365151,1,365201,,2/1/2018 23:14,,1,227,"

I have an API server and its SPA client (Node + React, but that's unrelated). All the API endpoints are under the /api namespace, and the SPA just makes REST calls to perform operations.

+ +

We're working on a feature which requires our users to link their Foo Service third-party account through OAuth 2. OAuth 2 requires the user to be redirected to the Foo Service auth server, which then redirects back, as usual with OAuth 2.

+ +

This is where the design question arises:

+ +
I'm making only REST calls through fetch() in the SPA:
+ +

Should the client make another REST call, with the server returning the usual 302 redirect, and the client interpreting the response and changing the browser location to the OAuth 2 website?

+ +

Or should the client know that said request to the server is not to be made through REST but via a browser redirect, with the server just returning a 302 heading to Foo Service? If this one is the correct approach, this endpoint should not be under the /api path, right?

+",136188,,,,,43135.53264,Design for a API server with 3rd party OAuth flow,,2,0,,,,CC BY-SA 3.0,, +365153,1,365174,,2/1/2018 23:20,,0,382,"

Context

+ +

I am making a service class that essentially functions as a business datetime calculator. That is, it can perform various calculations such as what date is X business days from a given date, when is the start of the next business date, is a given date in business hours, etc.

+ +

The catch is this class has to take in various parameters that make it quite a bit more complex. In particular, user-defined work hours and holidays are passed in when the class is initialized; any methods that perform these business datetime calculations need to take these work hours and holidays into account. Work hours are defined by a dictionary that maps days of the week to the work hours of each of those days. This means this class has to deal with ""non-standard"" work schedules, such as Tuesday-Thursday 14:00-01:00, Friday-Saturday 16:00-03:00. (I am defining what a ""standard"" and ""non-standard"" work schedule is based on some metric I've come up with -- it's not important for the sake of this question how they're defined.)

+ +

Standard work schedules are easy to deal with, whereas non-standard work schedules make things more complex. I've been thinking about how to deal with this, and I've come to the conclusion that the implementation of these calculations should differ between standard and non-standard work schedules. That is, for each method, if we're dealing with a standard work schedule, do X; if we're dealing with a non-standard schedule, do Y.

+ +

Question

+ +

What is the best practice for setting up this class? Here are the options I can think of:

+ +
  1. Each method should just have an if/else statement:

     if (standardWorkSchedule)
     {
         // do standard work schedule stuff
     }
     else
     {
         // do non-standard work schedule stuff
     }

  2. Create an interface, IBusinessDateTime, with all the methods for business datetime calculations. Then, create two distinct service classes, StandardBusinessDateTime : IBusinessDateTime and NonStandardBusinessDateTime : IBusinessDateTime, that implement this interface. StandardBusinessDateTime will implement standard work schedule logic for its methods, whereas NonStandardBusinessDateTime will implement non-standard work schedule logic. Finally, create an additional, intermediary class, BusinessDateTime : IBusinessDateTime, that takes in the work schedule and determines whether it's standard or non-standard. Everywhere that uses this service class can just call the methods of this intermediary class, and it offloads the actual work to the other two classes. If it's standard, then use StandardBusinessDateTime; if it's non-standard, then use NonStandardBusinessDateTime.
+ +
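Option 2 is essentially the strategy pattern: the intermediary classifies the schedule once, and every later call is branch-free. Sketched in Python rather than C# (the classification rule and the method are placeholders):

```python
class StandardBusinessDateTime:
    def next_business_hour_label(self):
        return 'standard rule applied'

class NonStandardBusinessDateTime:
    def next_business_hour_label(self):
        return 'non-standard rule applied'

class BusinessDateTime:
    # Intermediary: classifies the schedule once, then delegates everything.
    def __init__(self, work_schedule, is_standard):
        # is_standard stands in for whatever classification metric you use.
        self._impl = (StandardBusinessDateTime() if is_standard(work_schedule)
                      else NonStandardBusinessDateTime())

    def next_business_hour_label(self):
        return self._impl.next_business_hour_label()

# Placeholder metric: a schedule is standard if no shift crosses midnight.
def no_overnight_shifts(schedule):
    return all(start < end for start, end in schedule.values())

weekday_9_to_5 = {'Mon': (9, 17), 'Tue': (9, 17)}
late_bar_hours = {'Fri': (16, 3)}   # 16:00-03:00 crosses midnight

a = BusinessDateTime(weekday_9_to_5, no_overnight_shifts).next_business_hour_label()
b = BusinessDateTime(late_bar_hours, no_overnight_shifts).next_business_hour_label()
```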

I'm not too familiar with design patterns, so it's possible that I'm describing a fairly common and basic problem, and the solutions I've proposed are commonly-used design patterns. I feel that option 2 is the better solution, but I just want to make sure. Also, I'm sure there are other and possibly better solutions -- I appreciate any help I can get with this.

+ +

Finally, I want to emphasize that I am not asking any of the following things:

+ +
  • How to implement a business datetime calculator
  • How to define standard and non-standard work schedules
  • If I should even be dealing with non-standard work schedules
+",294990,,294990,,43132.97569,43133.39028,How to design a business datetime service class that performs two types of calculations based on an argument passed in,,2,2,,,,CC BY-SA 3.0,, +365154,1,365184,,2/1/2018 23:35,,0,692,"

I am trying to make some version control locally on my Android project. It is just to have control about my changes, but I do not want to upload them to any external repository. Neither I want to create infernal folders with dates of updates...

+ +

I would want to use Git, in which I am a bit familiar. I am also familiar with GitLab and GitKraken.

+ +

I have seen that there are some private repositories on GitLab but I guess you have to upload the code to an external repository.

+ +

My doubts are here, what are the differences in safety, when storing the code on my computer (with a version control system), compared to uploading to an external private repository? Does it have any legal effects to store my code in an external repository? It will be a public app in the future, but I do not want to share code (at least, by the moment). I want to keep all rights about it.

+ +

Also, if I use GitKraken I do not know if I can make commits to local repository without connecting it with a external (origin) repository because all examples I have seen uses one external repository as origin. Is it possible to make commits just having a local repository? Could I upload all my commits to an external private repository (if needed) in the future? I mean, all those commits that I have done in local repository.

+ +

To end, is it worth what I am trying to do? I mean, make all commits in local repository instead of a private exteral repository and, maybe upload them to an external private repository in the future (by the moment I am the only one who is going to develop the app but I do not know if in the future will be more people developing it).

+ +

I am starting with version control systems and never done it locally so any help would be appreciate.

+",294993,,9113,,43134.45139,43134.98472,Version control locally on Android,,2,2,,,,CC BY-SA 3.0,, +365155,1,,,2/1/2018 23:59,,2,112,"

I'm still currently learning OOP. I need a method that takes in a List<JsonNode>. Now there are multiple components that need some resources from the List. Component1 needs all of the keys. Component2 only needs the keys that have a numerical value. As we understand, these two both need keys, thus requiring a function to iterate through the given List and parse the keys from the given List.

+ +

What we are planning is that all of it happens in a single function/class (or rather as a single responsibility). This responsible function would take in a List instance, iterate through the List and one by one get the keys and create a resource for Component1; while it does so, it also looks for the keys with numerical values and creates a resource for Component2. The name of the responsibility would be Extractor.

+ +

Would this be in accordance with OOP principles? Should we just give it a better definition(name)? Or should we define them as two different responsibilities?

+ +

Edit:

+ +

So JsonNode has keys and they're all different from each other. I may have

+ +
[{""name"":""john"", ""age"": 10, ""status"":""single""}]
+
+ +

So I need to get

+ +
Component1 -> [name, age, status]
+Component2 -> [age]
+
+",294998,,1204,,43133.09514,43133.71597,Think of a better method name or create separate functions for a possible single responsibility?,,1,10,,,,CC BY-SA 3.0,, +365157,1,376203,,2/2/2018 1:30,,4,113,"

I've been programming professionally for years now at several different companies, and I consider myself a fairly competent programmer. However, everywhere I've worked there are usually dozens of different software engineers and programmers with dozens of different coding styles and patterns. I've read tons of literature on the subject of well-designed applications, but I honestly don't think I've ever seen or fully implemented one myself. I'm curious whether there are any real hands-on examples of different design principles, especially applications implementing a proper service architecture, to really get a feel for everything.

+ +

For example, our current application began with many different conflicting design ideas, but most of the original engineers have been let go. Now, I pretty much have free rein with a whole swath of very novice developers, whom I help train and educate as they begin their careers. I'm trying to set a positive example by implementing easy-to-follow but still robust designs.

+ +

As our application stands today we have somewhat of a legacy area and the new area. The legacy area consists of:

+ +
    +
  • A repository layer (built with ADO.Net), but it's pretty tightly coupled and near impossible to test or write tests for.
  • +
  • A domain layer - Relies on the repositories, but intermingles logic between many different classes and areas
  • +
  • A model project - loosely reflects the database models
  • +
  • A viewmodel project
  • +
  • A web logic layer - one developer went through and (somewhat) detangled all web/business logic and placed it into its own project.
  • +
  • The web layer - contains controllers, views, and web apis.
  • +
+ +

The new stuff is built similarly, but takes a different approach

+ +
    +
  • Entity layer - This project solely contains our entity framework entities and contexts. It also has a factory method that returns the proper context and gets the connection string information from our configuration files for each edmx.
  • +
  • EF Repositories - Each repository handles a minor subset of functionality, pretty much only 1 entity type is interacted with in this layer unless certain joins are necessary.
  • +
  • Domain layer - This layer builds out units of work for each and every piece of functionality going forward. It doesn't have much/any crossover functionality and is very specialized.
  • +
  • Unit test project - The unit test project creates mock repositories and checks all the logic implemented in the domain layer.
  • +
  • Web layer - responsible for wiring up dependency injection and serving up views, controllers, and API methods.
  • +
+ +

In the new stuff, every single repository implements an interface (almost to a fault), and everything works via dependency injection. I find it rather straightforward, but I'm afraid I'm either doing too much or too little.

+ +

For example, let's say I have a context called MainContext. In my repository layer, I have a repository called GenericRepository. GenericRepository implements an interface called IGenericRepository that lays out methods such as Get, Save, etc. These methods are the same for most repositories, but can be overridden if necessary. It might look something like:

+ +
public class GenericRepository<TEntity> : IGenericRepository<TEntity> where TEntity : class
+{
+    internal MainContext context;
+    internal DbSet<TEntity> dbSet;
+
+    public GenericRepository(MainContext context)
+    {
+        this.context = context;
+        this.dbSet = context.Set<TEntity>();
+    }
+
+    public virtual void Insert(TEntity entity)
+    {
+        dbSet.Add(entity);
+    }
+
+    public virtual void Update(TEntity entityToUpdate)
+    {
+        dbSet.Attach(entityToUpdate);
+        context.Entry(entityToUpdate).State = EntityState.Modified;
+    }
+    // -- more methods
+}
+
+ +

All other repositories inherit from GenericRepository, but they all also implement their own interface, which in turn extends IGenericRepository.

+ +

At this point we are 2 or 3 interfaces deep, and I'm beginning to feel like it's a bit overkill.

+ +

Am I going overboard with the design? Are there any good, complete examples that illustrate the best path forward with a similar design?

+",204600,,,,,43312.90833,Simple examples of properly designed Asp.Net applications illustrating different service layers?,,2,0,2,,,CC BY-SA 3.0,, +365158,1,,,2/2/2018 1:48,,3,54,"

I am developing a BSD 3-clause library. In the root of the repository, I have the full license text, and at the top of each file something like

+ +
/* This file is part of <project name>, developed by <me>.
+ * 
+ * All rights reserved. Use of this source code is governed
+ * by a BSD-style license that can be found in the LICENSE.txt file:
+ *
+ *     <url to LICENSE.txt>
+ */
+
+ +

I am satisfied with this for files that are truly of my own origin. However, let's consider file foo.cpp, which has code adapted from two separate libraries.

+ +
    +
  1. The Point Cloud Library {BSD v3 as well}.
  2. +
  3. OpenCV {BSD v3 as well}.
  4. +
+ +

So I started doing

+ +
/****************************************
+ *
+ * full PCL license
+ *
+ ****************************************
+ *
+ * full OpenCV license
+ *
+ ****************************************
+ *
+ * my blurb
+ *
+ ***************************************/
+
+ +

Is including all licenses at the top the most appropriate?

+ +

Is there a clearer way of citing the proper source? I've done my best to make sure that in both the code and documentation it is clear where the original source came from, but ""license dumping"" at the top of the file makes it less clear. For example, one of the files also brings in an Apache v2 license (also compatible), which makes the licensing of the file significantly longer than the actual code in the file. That is not a problem, it's just getting to the point where I feel like I'm doing it wrong.

+ +

The final question: is it acceptable to update the license years to what is shown in the main LICENSE of the repository? For example, one file I transcribed code from had a 2012 license listed at the top. The main repo has an identical license, but includes up to 2018 (which is why I prefer the ""blurb"" approach for my own files). This part of the question may not be on-topic though, if so I will delete this part.

+",237392,,,,,43133.075,proper presentation of multiple (compatible) licenses that inspired a single source file,,0,0,1,,,CC BY-SA 3.0,, +365163,1,365168,,2/2/2018 4:29,,1,97,"

We have a system that generates lots of logs, and we have to somehow maintain a logging workflow in the project.

+ +

The 'strict' (let's assume it's strict) requirement is that there should be a document describing the format of each message, and each message should comply with that format.

+ +

(By format I mean BNF-like things like 'some text {one|two} [three]', etc.)

+ +

It's also desirable to see log messages themselves in the code, not some IDs referring to them in some 'message list'.

+ +

Currently the state is that there is a large file with 1000+ messages with IDs, severities and user actions described for each one, and in the main code we are just struggling to keep log messages exactly in the format we specified (and, of course, failing at that).

+ +

It obviously isn't a viable option to parse the source code on build and try to ensure the message format that way (messages may not be written as literals, etc.).

+ +

So, how to implement such a format-complying logging system?

+ +

I know it sounds a bit like an 'advise me some tool'-kind-of-question, but I still think it's more about conceptual approaches to the problem (yet I'd still be really glad to see examples of such systems! :) language doesn't matter).
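To make the question more concrete, here is one conceptual approach I can imagine, sketched in Python (the catalogue contents and all names are made up): keep the message catalogue as the single source of truth and force every log call to go through it, so a message can only ever be emitted in its documented format.

```python
import logging

# Hypothetical catalogue: message ID -> (severity, format string).
# This table doubles as the document that describes every message format.
CATALOG = {
    'DISK_FULL': (logging.ERROR, 'disk %(mount)s is full, %(free)d bytes free'),
    'USER_LOGIN': (logging.INFO, 'user %(user)s logged in from %(host)s'),
}

log = logging.getLogger('app')

def emit(msg_id, **fields):
    # Unknown IDs fail immediately, keeping code and catalogue in sync.
    severity, fmt = CATALOG[msg_id]
    # Formatting raises KeyError if a required field is missing, so a
    # non-complying call fails fast instead of emitting a rogue message.
    text = fmt % fields
    log.log(severity, text)
    return text
```

The downside is that the literal message text moves out of the call site, which conflicts with the desire to see messages in the code; a compromise would be generating the reference document from the catalogue instead of maintaining it by hand.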

+",,user143982,222996,,43133.30417,43133.32153,How to implement format-complying logging system?,,1,3,,,,CC BY-SA 3.0,, +365165,1,,,2/2/2018 6:47,,3,216,"

I’ve been wondering about this for a little bit, but I couldn’t find much info on Google.

+ +

What do big websites like YouTube do to manage all the traffic they get without their website slowing down? How are their servers different than a home web server? Do their servers run off of one network from an ISP or something else?

+ +

Here’s an example: HQ Trivia. How do they get enough bandwidth to broadcast video to over a million people at once and not have too many issues?

+ +

Sorry if these are stupid questions-I’m pretty new to this.

+",295021,,,,,43136.40694,How do websites/apps deal with so many people online at once?,,1,2,,43133.65347,,CC BY-SA 3.0,, +365169,1,365171,,2/2/2018 7:51,,1,794,"

I have been looking around for some practices to write testable code and gathered the following:

+ +

• Public methods virtual if not using interfaces -- makes mocking easier
+• Dependency injection -- makes mocking easier
+• Smaller, more targeted, cohesive methods -- tests are more focused, easier to write
+• Avoidance of static classes
+• Avoid singletons, except where necessary
+• Avoid sealed classes

+ +

I am not sure I get the point of the first one, making public methods virtual. Should I really do that?

+",294256,,,,,43133.61319,Making public methods virtual to ease testability,,2,5,,,,CC BY-SA 3.0,, +365175,1,365181,,2/2/2018 9:22,,3,185,"

I have a node app with a queue for processing jobs in the background. I have a file that exports a function which when run, creates a job in my queue. In that file, I also have the handler for this type of job. It looks like this:

+ +
const actuallyDoStuff = async (...) => {
+    // code that takes time
+};
+
+queue.process('do_stuff', async (job, done) => {
+  try {
+    await actuallyDoStuff(job.data);
+    done();
+  } catch (err) {
+    done(err);
+  }
+});
+
+const doStuff = async () => {
+  const job = queue.create('do_stuff', { ... });
+  job.save();
+};
+
+module.exports = doStuff;
+
+ +

The thing is, they could be called the same if the other didn't exist. What I'm wondering is if you guys had some good naming practices for cases like this one. Some of the options I considered:

+ +
    +
  • Adding a prefix to the function that actually does the work, like doDoStuff
  • +
  • Adding a suffix to the other function to specify that it's creating a background job, like doStuffBG
  • +
+ +

But I don't really like those options. What are your good practices for cases like this one?

+",274064,,,,,43133.42986,How to name two functions that could be named the same?,,1,5,,,,CC BY-SA 3.0,, +365176,1,365219,,2/2/2018 9:25,,12,1394,"

I have a quite large private codebase which has evolved for about ten years now. I'm not using phpDocumentor but since using docblock sections has become quite the standard in open source projects I have adopted writing docblocks for all public methods in my repository as well. Most blocks just contain a small description and typehints for all parameters and the return type.

+ +

With the arrival of static analysis, these typehints have helped me a lot in finding inconsistencies and possible bugs. Lately I've converted the entire codebase (now running on PHP 7.2) to have all parameters and return values type-hinted where possible, using PHP's native typehints. And now I am wondering... aren't these docblock typehints redundant? It takes quite a bit of work to keep all docblocks in sync with the ever-changing code, and since they don't add any new information, I am wondering whether it is better to completely remove them or not.

+ +

On the one hand, removing documentation feels bad, even when it is redundant. On the other, I really feel like I am breaking the Don't-Repeat-Yourself principle every day, type-hinting stuff that is already type-hinted.

+",173291,,173291,,43134.39861,43140.41458,Are docblock typehints redundant when using strict typing,,3,2,1,,,CC BY-SA 3.0,, +365186,1,365208,,2/2/2018 12:31,,7,6185,"

I admit this was asked to me in interview a long time ago, but I never bothered to check it.

+ +

The question was simple: how does Python make Queue thread-safe?

+ +

My answer was that, because of the Global Interpreter Lock (GIL), at any time only one thread can be making a call to get an element from the queue while the others are sleeping/waiting. I was/am still not sure if it is a valid answer.

+ +

The interviewer seemed dissatisfied and asked whether Queues are thread-safe in the Java or .NET implementations of Python, which don't have a GIL, and if so, how they implement thread safety for that data structure.

+ +

I tried looking for it, but I always seem to stumble upon how to use thread-safe queues.

+ +

So how can a Queue or a simple list be made thread-safe and avoid race conditions?

+ +

Alternatively, what algorithms or techniques are used by the GEvent implementation of thread-safe Queues?
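For the general case without a GIL, my understanding is that the usual answer is a mutex plus a condition variable around the underlying buffer; CPython's own queue.Queue is essentially built this way internally. A stripped-down sketch of the idea:

```python
import threading
from collections import deque

class SimpleQueue:
    # A minimal thread-safe FIFO: every access to the shared deque is
    # guarded by one lock, and a condition lets get() sleep until data arrives.
    def __init__(self):
        self._items = deque()
        self._not_empty = threading.Condition(threading.Lock())

    def put(self, item):
        with self._not_empty:
            self._items.append(item)
            self._not_empty.notify()  # wake one waiting consumer

    def get(self):
        with self._not_empty:
            while not self._items:    # loop guards against spurious wakeups
                self._not_empty.wait()
            return self._items.popleft()
```

Java's ArrayBlockingQueue is built on the same lock-and-condition idea, while lock-free structures such as Java's ConcurrentLinkedQueue rely on atomic compare-and-swap loops instead of a mutex.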

+",111852,,111852,,43133.53611,43133.89028,How are queues made thread-safe in Python,,1,4,3,,,CC BY-SA 3.0,, +365192,1,365202,,2/2/2018 15:04,,3,14278,"

My team is in a dilemma: we have an existing Maven Spring Boot Java 8 project with the following onion architecture.

+ +
controller --> service --> dao --> jpa repositories
+                            |  --> filesystem
+                            |  --> jdbc template
+
+ +

The DAO layer was earlier using both the JDBC template and JPA, so it had more than one strategy for accessing data. But as we are moving from the old JDBC template to the new JPA repositories, we are planning to remove the DAO layer and inject the repositories directly into the services.

+ +

So new architecture might look something like this:-

+ +
controller --> service --> jpa repositories
+
+ +

I have a few concerns now:-

+ +
    +
  1. How good is this idea of removing an abstraction layer of dao?

  2. +
  3. Does this architecture have anything to do with Domain Driven Design?

  4. +
  5. We have around 20 DAO classes and more than 50 JpaRepositories and entities, so I want to factor human-hours into this refactoring. Will it be huge or just nominal?

  6. +
  7. Will we turn our service classes into God classes containing both business logic and data-access operations?

  8. +
+",164697,,,,,43133.84514,Project Structure of Domain Driven Design in maven Java Spring-Boot,,1,1,4,43139.78264,,CC BY-SA 3.0,, +365195,1,,,2/2/2018 17:14,,1,309,"

I've been thinking about this over the past few weeks, and I've come up with no good arguments. My perspective is from Java, but if anyone has any language-specific cases outside of this language, I'd love to hear them.

+ +

It seems to me that the benefit of using a List over a Deque comes from the fact that one can access the elements within directly via index numbers. While I can see the use in something like a UI (e.g. having a drag-and-drop sortable list), when talking about pure code interaction I see three cases for this:

+ +
    +
  1. Iteration. Iterating with a for loop using get(i) and size() is expensive in linked implementations, and can usually be better done with iterator(), which is present in every Iterable collection.

  2. +
  3. Lookup by index. This usually requires a table of indices and a fixed length list, in which case one would get better performance out of an array.

  4. +
  5. Operating on the front or back of a List. This is what Deque was designed for, and it doesn't require any calls to size().

  6. +
+ +

Collections support is nice, but the only thing I could find offhand that was implemented in Collections but not Arrays was a shuffle() function, which is fairly simple for an experienced programmer to implement (or delegate to Collections, since the overhead for non-primitive arrays isn't too bad IMO).

+ +

I feel that everything one would need a List for can be better served by either a Deque or an array. I've done some searching for comparisons, but the only info I've found either doesn't really discuss Deques or is written as a ""Welcome to Programming"" thing and doesn't offer a direct comparison of use cases. I've looked over my code for the past few years and haven't found any Lists outside of UI elements; I usually use a Set or a BlockingQueue for storing variable-length data.
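To make point 3 concrete (sketched in Python rather than Java, but the asymptotics are the same): a deque offers constant-time operations at both ends, while the equivalent front operations on an array-backed list shift every remaining element.

```python
from collections import deque

# Deque: constant-time operations at both ends.
d = deque()
d.appendleft('front')
d.append('back')
assert list(d) == ['front', 'back']
assert d.popleft() == 'front'
assert d.pop() == 'back'

# Array-backed list: the same operations exist, but insert(0, x) and
# pop(0) shift every remaining element, i.e. they cost O(n) per call.
lst = []
lst.insert(0, 'front')   # O(n) shift
lst.append('back')       # O(1) amortized
assert lst.pop(0) == 'front'
```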

+",129776,,,,,43133.91389,Are there any use cases for List when Deques and Arrays are available?,,2,12,,,,CC BY-SA 3.0,, +365199,1,365210,,2/2/2018 20:03,,2,143,"

I'm building a Sudoku generator. I have a board class with a number of methods:

+ +
public class Board {
+    public Board() { /* Creates an empty board */ }
+    public bool ValidateRow(int row) { /* Checks for errors in row */ }
+    public bool ValidateColumn(int column) { /* Checks for errors in column */ }
+    ...
+}
+
+ +

I'm following TDD, and as such have a full suite of tests for all these methods. I would like to add two new methods to this class:

+ +
public static Board GenerateFilled() { /* Creates a solved board */ }
+public bool ValidateBoard() { /* Checks for any error on the board */ }
+
+ +

I'm struggling with how to write my tests for these methods. My first thought was:

+ +
[TestMethod]
+public void GenerateFilled_GeneratesAValidSolvedBoard() {
+    var board = Board.GenerateFilled();
+    Assert.IsTrue(board.ValidateBoard());
+}
+
+ +

but I realized I wrote the same test for ValidateBoard:

+ +
[TestMethod]
+public void ValidateBoard_NoErrors_ReturnsTrue() {
+    var board = Board.GenerateFilled();
+    Assert.IsTrue(board.ValidateBoard());
+}
+
+ +

This test relies on both GenerateFilled and ValidateBoard working correctly, though the method under test changes. I've come up with the following ways to avoid this problem:

+ +
    +
  1. Duplicate the logic of the method not under test into the test. Use that logic to validate my method under test instead of calling the other method.
  2. +
  3. Leave the GenerateFilled test as is and use hardcoded sample data to test ValidateBoard instead of calling GenerateFilled.
  4. +
+ +

I'm not a fan of option 1 because it will make keeping the tests accurate tedious whenever the logic in the duplicated methods changes, even slightly.

+ +

I don't really like option 2 either, though, because it relies on my sample data being representative of all cases, which is less likely to be true the larger the dataset is.

+ +

I suppose in the worst case this just results in any error in either method causing both tests to fail, but it is a bit of a smell. Has anyone come across a similar scenario and found a better solution than the two above?
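One middle ground I have considered is option 2 with several small, hand-checked fixtures: one known-valid board and boards that each break exactly one rule, so the validator is pinned down independently of the generator. A rough sketch (in Python for brevity, with a simplified rows-only validator standing in for ValidateBoard):

```python
# Hypothetical, simplified stand-in for Board.ValidateBoard: a board is a
# list of 9 rows, and only the row rule is checked to keep the sketch short.
def validate_rows(board):
    return all(sorted(row) == list(range(1, 10)) for row in board)

# One hand-checked valid fixture (each row is a rotation of 1..9)...
VALID = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]

# ...and a fixture that breaks exactly one rule (a duplicate in row 0).
INVALID = [row[:] for row in VALID]
INVALID[0][1] = INVALID[0][0]

assert validate_rows(VALID)
assert not validate_rows(INVALID)
```

With fixtures like these, neither test has to call the method that is not under test.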

+",112819,,,,,43135.575,Using methods that are not under test within a unit test for a different method?,,3,3,,,,CC BY-SA 3.0,, +365218,1,365220,,2/3/2018 9:51,,-4,87,"

AFAIU, SaaS software is any software which runs in a browser and performs a certain service (aside from giving a way to contact the owner of that software) and is also aimed at reuse, whether it's gratis or not.

+ +

Is phpMyAdmin, the database management tool, actually SaaS?

+",295121,,295121,,43134.42986,43134.42986,"Is PHPmyadmin ""SAAS""?",,1,5,,43137.51111,,CC BY-SA 3.0,, +365222,1,365228,,2/3/2018 11:29,,2,254,"

Usually, getters simply return the value of a variable. I learned from my literature that access to fields is controlled by getters and setters. But when I had my code reviewed by programmers, I was suddenly told that getters and setters violate object-oriented thinking.

+ +

So far, I have always placed the output statements in the methods of the class that makes changes to the object. However, if I want to change the way I produce output (for example graphically or in a terminal), I would have to rewrite all classes. Therefore, I have considered doing the following: the output is no longer realized in the class itself; instead, the class provides methods that supply the values to be displayed. Another class is then created for the display, which uses special getters to query the state of the object and present the data as it pleases. The getters live in the class that owns the data.

+ +

For example, I could have a ""Human"" object. The Human has, amongst other things, the variables lifeCurrent and lifeMax. These fields represent the health, if you set these values in relation. Another class called ""Display"" is now responsible for displaying the current health of a Human. So I would create a getter ""getHealth"" in the Human class that returns a list with lifeCurrent and lifeMax; the Display class would have a function called health(Human). This function calls the getter on the Human and displays the values for the user of the program. Is it an accepted style to use getters like this?
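A minimal sketch of this idea (in Python rather than Java; the names follow the example above):

```python
class Human:
    def __init__(self, life_current, life_max):
        self._life_current = life_current
        self._life_max = life_max

    def get_health(self):
        # Expose state only; no rendering decisions in the domain class.
        return self._life_current, self._life_max

class Display:
    def health(self, human):
        current, maximum = human.get_health()
        # The rendering policy lives here; a GUI Display could do it differently.
        return 'HP: %d/%d (%d%%)' % (current, maximum, 100 * current // maximum)

assert Display().health(Human(10, 40)) == 'HP: 10/40 (25%)'
```

Swapping the terminal output for a GUI then only means writing a different Display; Human itself never changes.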

+",287426,,,,,43134.78403,OOP in Java - What can getters be used for?,,2,0,1,,,CC BY-SA 3.0,, +365227,1,,,2/3/2018 12:01,,1,936,"

I have a set of services that are used to read/write to the database. These get injected into my controllers:

+ +
ISystemSettingService _systemSettingService = null;
+IStatusTypeService _statusTypeservice = null;
+
+public MyController(
+    ISystemSettingService systemSettingService,
+    IStatusTypeService statusTypeservice)
+{
+    _systemSettingService = systemSettingService;
+    _statusTypeservice = statusTypeservice;
+}
+
+ +

So when I need something from _systemSettingService I can easily get it. Now I also have some static helper classes/functions which I call from MyController. In these functions I often need access to the services so I can access the DB. e.g. I have a Validate(string userData, ISystemSettingService systemSettingService) function that accepts some data passed in from the user.

+ +

So far I have been passing _systemSettingService into the Validate function so I can use it there, but I'm not sure this is correct.

+ +

My question: is my approach correct? Or, in the Validate function, should I be creating a Unity container to Resolve an instance of my ISystemSettingService? Or, as I have been reading, should my helper class NOT be static, so that I can inject ISystemSettingService into its constructor, which will apparently make unit testing easier?

+ +

I'm a bit confused! Thanks.

+",295126,,,,,43134.60347,Unity Dependency Injection and Helper functions,,2,0,,,,CC BY-SA 3.0,, +365234,1,365240,,2/3/2018 15:25,,0,866,"

I recently asked this question: https://stackoverflow.com/questions/48582699/equality-for-a-dateofbirth-value-object. I am going to avoid a DateOfBirth value object. Instead I am planning to use a type alias. I have a few options with regards to my constructor:

+ +

Option 1

+ +
using DateOfBirth=System.DateTime;
+DateOfBirth DateOfBirth;    
+
+public Person (DateOfBirth dateOfBirth)
+{
+     if (dateOfBirth.TimeOfDay.TotalSeconds > 0)
+         throw new ArgumentException(""Date of birth cannot contain a time."")
+     DateOfBirth = dateOfBirth;
+}
+
+ +

Option 2

+ +
DateOfBirth DateOfBirth;   
+public Person(int day, int month, int year)
+{
+   //Validation to make sure day, month and year are valid.
+   DateOfBirth = new DateOfBirth(year,month,day);
+}
+
+ +

I am trying to decide which option to choose. The validation for option two could be quite complex because months have different numbers of days. Therefore I am hoping that option one is suitable.
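One mitigating point for option two: the day/month/year validation may not need to be hand-written, since most date types already reject impossible component combinations in their constructors (System.DateTime's (year, month, day) constructor throws for them, and Python's datetime.date, used in this sketch, behaves the same way):

```python
from datetime import date

# Component constructors typically validate month lengths and leap years,
# so option 2 can delegate instead of re-implementing those rules.
dob = date(2000, 2, 29)          # 2000 is a leap year: accepted
assert (dob.year, dob.month, dob.day) == (2000, 2, 29)

try:
    date(1999, 2, 29)            # not a leap year: rejected
    raised = False
except ValueError:
    raised = True
assert raised
```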

+ +

Also, should I be using type aliases in my unit tests, or just refer to the primitive type, i.e. DateTime?

+",65549,,65549,,43134.97986,43134.97986,Initialise a Date of Birth with a DateTime or three integers?,,2,0,1,,,CC BY-SA 3.0,, +365236,1,365242,,2/3/2018 15:39,,3,1320,"

If I write an application and decide to develop only a single graphical interface for it, and never intend to develop multiple GUIs, then by today's standards is it okay not to use the MVC pattern or MVP pattern, or should you generally prefer the MVC pattern for object-oriented software with output? The advantage of the MVC pattern is precisely being able to program many different GUIs. However, if you plan to develop only a single GUI from the start, this advantage is eliminated.

+",287426,,1218,,43135.30347,43135.30347,"Should I always use the MVC pattern (or similar) for big, graphical and professional applications?",,3,1,,,,CC BY-SA 3.0,, +365243,1,,,2/3/2018 17:16,,3,1296,"

I want to do something like Facebook does: at any time, we can see where we're logged in, e.g.:

+ +
+

Windows PC · New Delhi, India Edge · Active now

+ +

Windows PC · New Delhi, India Chrome · 8 minutes ago

+ +

Xiaomi Redmi 4A · New Delhi, India Facebook app · 10 hours ago

+
+ +

Now, I have created a table in the database that mainly stores the user ID, a unique ID and the user agent.

+ +

Whoever logs in is saved in the table.

+ +

If a user logs in from 3 different devices, I see 3 entries for the same user in the table.

+ +

But the problem I'm facing is: if I log in again (when cookies are cleared or something like that), I see another entry in the table for the same device. This is what I don't want, and Facebook does handle it somehow: it shows only one entry per user and device.

+ +

What I thought of is to remove the old entry if the same user logs in again and again with the same browser/app (without logging out: maybe after clearing cookies or when the session expired) and maintain a single entry.

+ +

But again, I don't know which row to delete. There may be a user logged in with two practically identical devices in the same browser/app. Now how can I remove the entry of just one of them?

+ +

Here's what the rows look like:

+ +
userId UniqueId UserAgent/Device
+2          342     Samsung - Chrome
+2          341     Windows PC - Chrome
+2          345     Nokia Lumia 630 - Internet Explorer (actually logged in)
+2          346     Nokia Lumia 630 - Internet Explorer (actually logged in)
+2          360     Nokia Lumia 630 - Internet Explorer 
+                   (Duplicate Entry because of re-login (Cookies cleared or session expired))
+2          363     Nokia Lumia 630 - Internet Explorer
+                   (Duplicate Entry because of re-login (Cookies cleared or session expired))
+2          389     Nokia Lumia 630 - Internet Explorer
+                   (Duplicate Entry because of re-login (Cookies cleared or session expired))
+2          367     Nokia Lumia 630 - Internet Explorer
+3          378     something
+4          379     something
+
+ +

(User id 2 has 8 sessions)

+ +

Here's what I want:

+ +
2          342     Samsung - Chrome
+2          341     Windows PC - Chrome
+2          345     Nokia Lumia 630 - Internet Explorer
+2          346     Nokia Lumia 630 - Internet Explorer
+3          378     something
+4          379     something
+
+ +

(User id 2 has 4 sessions)

+ +

Is what I'm trying to implement the right approach, or should I do something different?
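One way to guarantee a single row per user and device at the storage level is a unique constraint on (userId, device) plus an upsert, so a re-login updates the existing row instead of inserting a duplicate. A sketch using SQLite (column names follow the table above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''CREATE TABLE sessions (
    userId   INTEGER,
    uniqueId INTEGER,
    device   TEXT,
    UNIQUE (userId, device))''')

def record_login(user_id, unique_id, device):
    # INSERT OR REPLACE keeps at most one row per (userId, device);
    # a re-login after cleared cookies just refreshes that row.
    conn.execute('INSERT OR REPLACE INTO sessions VALUES (?, ?, ?)',
                 (user_id, unique_id, device))

record_login(2, 345, 'Nokia Lumia 630 - Internet Explorer')
record_login(2, 360, 'Nokia Lumia 630 - Internet Explorer')  # re-login
record_login(2, 342, 'Samsung - Chrome')

count = conn.execute('SELECT COUNT(*) FROM sessions WHERE userId = 2').fetchone()[0]
assert count == 2   # one row per device, not one per login
```

Note that this deliberately collapses two physically identical devices into one row; telling those apart would require a client-side identifier (for example a long-lived device cookie) rather than the user agent alone.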

+ +

EDIT: Fixed counts.

+",295143,,,,,43134.71944,How to store unique sessions for logged in user(s) with different devices?,,0,10,2,,,CC BY-SA 3.0,, +365247,1,,,2/3/2018 18:47,,3,885,"

I am looking at a scenario for creating an aggregate instance from a trigger on a different aggregate.

+ +

I've incorporated some logic into my DDD and Event Sourcing with Onion architecture learning scenario, around user registration and the creation of a user account.

+ +

Currently I have a simple Registration class instance which has a number of attributes around Email Address and Name. This currently has a single method named Activate.

+ +

When the Registration is activated, an Account should be created.

+ +

Internally, I am recording events on the various actions.

+ +
public partial class NewRegistration : EventSourcedAggregate
+{
+    internal NewRegistration(
+        string organisationName, 
+        AccountContact contact)
+    {
+        OrganisationName = organisationName ?? throw new ArgumentNullException(nameof(organisationName));
+        Contact = contact ?? throw new ArgumentNullException(nameof(contact));
+        RequestedDate = SystemClock.Current.GetCurrentUtcDateTime();
+
+        Apply(new RegistrationCreated(
+            OrganisationName, 
+            Contact.EmailAddress.ToString(), 
+            Contact.Name.GivenName,
+            Contact.Name.FamilyName,
+            RequestedDate));
+    }
+
+     NewRegistration(Guid id, int version)
+        : base(id, version)
+    {
+
+    }
+
+    public string OrganisationName { get; private set; }
+
+    public AccountContact Contact { get; private set; }
+
+    public string ActivationCode { get; private set; }
+
+    public DateTime RequestedDate { get; private set; }
+
+    public DateTime? ActivatedDate { get; private set; }
+
+    public void ChangeActivationCode(string activationCode)
+    {
+        if (ActivatedDate.HasValue) throw new InvalidOperationException(""The registration has already been activated"");
+
+        if (activationCode != ActivationCode)
+        {
+            Apply(new ActivationCodeChanged(activationCode));
+        }
+    }
+
+    public void Activate()
+    {
+        Apply(new RegistrationActivatedEvent(SystemClock.Current.GetCurrentUtcDateTime()));
+    }
+
+    protected sealed override void Apply(DomainEvent changes)
+    {
+        When((dynamic)changes);
+    }
+
+    private void When(ActivationCodeChanged activationCodeChanged)
+    {
+        ActivationCode = activationCodeChanged.ActivationCode;
+    }
+
+    private void When(RegistrationActivatedEvent registrationActivatedEvent)
+    {
+        ActivatedDate = registrationActivatedEvent.ActivationDate;
+    }
+}
+
+ +

Although the above is stored in a SQL table, it does not fully realise Event Sourcing - as it will only ever have a single record that will only get updated should the user reset the activation code.

+ +

So when the Registration is activated, I need to create a new instance of Account. I currently see Registration and Account as part of the Customers bounded context. I feel Registration and Accounts are two separate domains/sub-domains within the context - I might be wrong! I was also feeling the Account could be an event-sourced aggregate, although after reading Udi Dahan's blog post on when not to use CQRS I feel a little disappointed with what I was planning.

+ +

Now there are a number of ways to implement this. My first thought was to simply handle this in an Application Service. Typically I would put that logic here as they coordinate things. This would require the INewRegistrationRepository and IAccountRepository injected - to copy the information from one to the other.

+ +

We've been using a lot of application services at work, and I have become bogged down in the number we have due to the way our business logic is structured (static managers, helpers, and infrastructure mixed in with an anaemic domain model).

+ +

But I have been reading a little on Process Managers and Sagas, which has confused me somewhat and halted my thoughts for the day.

+ +

I am all for keeping things simple, and this is a simple part of a domain implementation I am looking at. I feel a bit overwhelmed with application services: is a domain Process Manager or Saga a better fit for this scenario?

+ +

How do DDD practitioners on here look at implementing such a scenario?

+",153096,,153096,,43135.54514,43141.81667,DDD creating an aggregate in response of an event on another,,2,0,2,,,CC BY-SA 3.0,, +365257,1,365288,,2/3/2018 22:01,,0,112,"

Apologies if this is a repeat of an older question but I did attempt to find a similar question and didn't have any luck. I'm going through the tutorials at www.w3schools.com for writing XSDs and XMLs. I have a basic understanding of namespaces but my question -- why do many XSDs that I've seen contain a reference to http://www.w3.org/2001/XMLSchema?

+ +

For example:

+ +

<xs:schema xmlns:xs=""http://www.w3.org/2001/XMLSchema""
+targetNamespace=""https://www.w3schools.com""
+xmlns=""https://www.w3schools.com""
+elementFormDefault=""qualified"">

+ +

Why do many people include xmlns:xs=""http://www.w3.org/2001/XMLSchema""? Is it because it allows them to use predefined elements, attributes and types such as xs:string, xs:complexType, xs:simpleType, xs:attribute?

+ +

Does referencing the w3.org namespace allow you to use built-in datatypes?

+ +

Thanks in advance

+",295157,,,,,43135.55069,XSD Namespaces with W3.org,,2,0,,,,CC BY-SA 3.0,, +365259,1,365262,,2/3/2018 22:36,,4,2273,"

Since I'm learning the MVC pattern, this might be a very naive question.

+ +

I know that when something happens on the view (e.g. the user clicks a button), the view calls the controller, which in turn updates the model (e.g. sets a flag to true). However, I can imagine some scenarios in which the model changes without user interaction: say a timer that, after N minutes, triggers a change somewhere in the data. The model has now changed: how do I update the view accordingly?
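From what I have read, the usual answer is the observer pattern: the model exposes a way for views to subscribe, and it notifies them on every change, no matter whether the change came from the controller or from a timer. A minimal sketch (all names made up):

```python
class Model:
    def __init__(self):
        self._flag = False
        self._listeners = []

    def subscribe(self, callback):
        self._listeners.append(callback)

    def set_flag(self, value):
        self._flag = value
        # Notify every registered view, no matter who triggered the change
        # (controller, timer, network callback, ...).
        for listener in self._listeners:
            listener(self._flag)

# A view registers a callback and re-renders whenever it is told to.
rendered = []
model = Model()
model.subscribe(lambda flag: rendered.append('flag is %s' % flag))

model.set_flag(True)   # e.g. fired by a timer, with no user interaction
assert rendered == ['flag is True']
```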

+",261929,,,,,43135.31319,How to update the view when the model changes?,,1,0,1,,,CC BY-SA 3.0,, +365265,1,,,2/4/2018 1:13,,1,163,"

Our company is in the early stages of starting to use microservices. One question that came up the other day was: 'is scheduling/calendar a microservice?'

+ +

We have so many monolithic apps that have built their own calendar/schedule - some allowing appointments to be scheduled only in the morning or afternoon, and some more granular where, like Outlook, you can pick half-hour slots. So we have many monolithic web apps with an Outlook-style UI, where in some you select a slot and in others you drag a region to create an appointment.

+ +

That is a lot of duplication all doing roughly the same thing at the back end.

+ +

So should we build one single scheduling service and use it across the company as a microservice?

+",34116,,,,,43135.68681,Is a outlook type calendar/scheduling service a microservice?,,1,0,,,,CC BY-SA 3.0,, +365267,1,365273,,2/4/2018 4:26,,1,2341,"

The background

+ +

In MVVM, we tend ""not"" want to couple windows to view models for various reasons. Though from the very second screen on your application you start hitting this conceptual grey zone.

+ +

The problem

+ +

You see, view models tend to want to open windows/views (it's just a fact of life). Windows/dialog APIs tend to like an Owner, because that's just the nature of the Windows operating system. As can be seen here: Window.Owner

+ +
+

When a child window is opened by a parent window by calling + ShowDialog, an implicit relationship is established between both + parent and child window. This relationship enforces certain behaviors, + including with respect to minimizing, maximizing, and restoring.

+
+ +

This all creates a bit of a dilemma: view models are natively disconnected from their view/window, hence you have to use some sort of decoupled messaging/event aggregation, or some other logic, to couple the dialog relationship.

+ +

At this point people use frameworks or other regimes which basically consist of a weird and contrived world of DI/IoC, view constructors, locators, messages or event aggregation to do all the coupling, spin up the view models and so forth, keeping the MVVM police happy.

+ +

The current solve

+ +

In the current projects I'm working on, the principal developer has solved this problem using typed decoupled messages (pub/sub). These get picked up in a view constructor by an application-wide view-mapping regime that maps the messages to views/windows, converts and passes objects to the newly created view models through more messages in what can only be called more spurious plumbing, and at the end of the dialog lifecycle passes out-parameters back into the original typed message, which can then be probed by the original caller.

+ +

This is all (IMO) very messy: overly complicated, over-engineered and inadequately designed.

+ +

Dialog Service

+ +

I guess this is where the notion of a dialog service comes in: a simple fluent library that can use view maps or implicitly coupled commands to call and spin up windows.

+ +
// Modal Dialog no results
+_dialogService.Dialog<Window1Vm>(this)
+              .OnInit(vm => vm.SomeProperty = false)
+              .ShowDialog();
+
+// Dialog with some results  
+var someResult = _dialogService.Dialog<Window1Vm>(this)
+                               .OnInit(vm => vm.SomeProperty = false)
+                               .ShowDialog();
+
+// Dialog with some input params
+var someResult = _dialogService.Dialog<Window1Vm>(this)
+                               .WithParams(new SomeParamsToPassIn())
+                               .ShowDialog();
+
+// And the one the ""decoupled police"" will be wanting to arrest me for
+// Dialog with explicit notation to map a view to a view model
+// (I know i should be shot and fed to the wolves)
+var dialog1 = _dialogService.Dialog<Window1Vm, Window1>(this)
+                            .OnInit(vm => vm.SomeProperty = false)
+                            .ShowDialog();
+
+ +

My question is now: should this be an injectable service, or a static class?

+ +

The advantage of an injectable service is that view models and code which need access to this can just ask for it. It's easy to see who is implementing this functionality via the constructor.

+ +

The disadvantage is that if you have lists of lists of VMs and something in a child list needs to open a window, you have to inject it into all the child view models, or once again send a message back.

+ +

The advantage of a static class is that you can call it everywhere, kind of like a true UI cross-cutting concern.

+ +
+ +

Personally I think the DI service is the way to go; it has no state logic, i.e. it can be a singleton. However, I have had several comments about this approach (mainly from the principal developer): that messages are just better and can be called from anywhere, and also that static is bad, mmkay.

+ +

However, I just want to get opinions on whether logic like this sits better as an injectable service or a static library, or whether other people's experience suggests that a totally decoupled system like the one we have is actually the better approach in large systems and over the long term.

+",,user140075,1204,,43135.70556,43135.70556,"WPF, Dialog Service as injectable Service or a Static class",,1,2,,,,CC BY-SA 3.0,, +365268,1,365283,,2/4/2018 4:52,,2,3911,"

I am developing a REST API which accepts JSON using Spring Boot. I use Spring Security for authentication. I have a use case where I have two services, one to test connection to a 3rd party system and other one to fetch data from the 3rd party system.

+ +
    +
  • api/system/connection
  • +
  • api/system/customData
  • +
+ +

When I call the first service, I check if I am able to connect successfully to the 3rd party system. If yes, I return a ""Success"" or a ""Failure"" message. When I try to make the second call to get customData, I expect the request to contain the login information for the 3rd party system. I again create a new connection, fetch the custom data and return the data. The problem with this approach is that I create connection objects every time, which consumes a lot of time for each request.

+ +

In order to avoid this overhead, I could change the second request to this:

+ +

api/system/connection/?/customData

+ +

But for doing this, what should I return in the first service that tests the connection?

+ +
    +
  1. Should I return the connection itself? Is it possible to convert the connection object to a JSON and deserialize it when it comes in the second request? Is it also a secure thing to do?
  2. +
  3. Should I cache this connection to 3rd party system at the server side and return a unique id for each connection? But does this not break the RESTful nature of the API?
  4. +
+ +

Or is there a different approach to how I can tackle this problem.

+ +

TIA.

+",295168,,,,,43136.85208,Maintaining stateful information in REST API,,2,3,,,,CC BY-SA 3.0,, +365269,1,365384,,2/4/2018 6:02,,2,1619,"

As you know Facebook saves login sessions and we can see them in settings where we've logged in.

+ +

Now, they show the IP address, the phone name (if logged in from a phone), the app name on the phone, and the browser name in case we use a browser.

+ +

I want to store this info for my site using PHP and JavaScript. One way is to use built-in features of PHP on the server side to save this info to my DB when the user logs in successfully.

+ +

But there's a problem with it. The feature gives a string, and I find it hard to extract the browser name, OS name and device name from it. Further, it doesn't show me the app name in case someone logged in to my service with the app.

+ +

So is there any better, recommended way to do it? Wouldn't it be better if I handled this client-side, i.e. the client that makes the request to the server itself sends the device name, OS name and other info?

+ +

The 2nd option seems better to me, but I'm not sure if it actually is.

+",295143,,,,,43197.44722,What is the right approach to save user agent info during login to web / app?,,2,3,,,,CC BY-SA 3.0,, +365270,1,365271,,2/4/2018 6:22,,0,328,"

I am currently making a console program about buying property in a city. Properties can be bought from the city by someone, transferred to another person, or returned to the city if the person who has ownership dies.

+ +

I have created 3 classes: Person, City, and Property. My Person and City classes both have vectors of unique_ptrs of Property. My main.cpp has a City unique_ptr and a vector of Person unique_ptrs and does basic initialization.

+ +
#include <memory>
+#include <string>
+#include <vector>
+using namespace std;
+
+class Person {
+    string name;
+    vector<unique_ptr<Property>> ownedProperties;
+    public:
+        string getName();
+        vector<unique_ptr<Property>> const& getOwnedProperties();
+};
+
+class City {
+    string name;
+    vector<unique_ptr<Property>> ownedProperties;
+    public:
+        string getName();
+        vector<unique_ptr<Property>> const& getOwnedProperties();
+};
+
+class Property {
+    string name;
+    int id;
+    public:
+        int getID();
+        string getName();
+};
+
+int main() {
+    int numProperties;
+    vector<unique_ptr<Person>> people;
+    unique_ptr<City> city;
+    //  initializing everything
+    printAllProperties();
+}
+
+ +

I decided to use unique_ptr because I am thinking of making subclasses of Property later and because I want to make sure that a Property only has one owner. I am using move() to transfer ownership between the Property vectors of City and Person. I am also printing out information like this:

+ +
City: San Johncisco
+ID | Property Name | Owner
+0  | A             | John
+1  | B             | San Johncisco
+2  | C             | Johnny
+3  | D             | Johnathan
+4  | E             | San Johncisco
+5  | F             | Johnalina
+6  | G             | Johnatha
+
+ +

The problem is that because I want to print the properties in order, I end up having something like this:

+ +
for (int currentID = 0; currentID < numProperties; currentID++) {
+    for (const auto& property : city->getOwnedProperties()) {
+        if (property->getID() == currentID) {
+            // print info
+        }
+    }
+    for (const auto& person : people) {
+        for (const auto& property : person->getOwnedProperties()) {
+            if (property->getID() == currentID) {
+                // print info
+            }
+        }
+    }
+}
+
+ +

This looks really ugly and I feel as though I could've designed the program / method better. Any advice?

+",295169,,,,,43135.29583,Beginner C++ question about program design with unique_ptr and vectors,,1,1,0,43143.61597,,CC BY-SA 3.0,, +365274,1,,,2/4/2018 8:19,,3,830,"

I want to create a mobile app(iOS and Android) for business need, ex: the business is Health and Beauty. So I need a mobile app for sale some goods and service. However, I need push notification to who is subscribe my topic like, health tips, magazine and new product etc.

+ +

So I plan to create:

+ +
    +
  1. A web application back-end with Laravel to manage CRUD for that data; then I will implement a RESTful API on the web application to serve data to the mobile app.

  2. +
  3. Create a mobile app to fetch data from web application via RESTful API.

  4. +
+ +

My concern is about notifications, because the mobile app must poll the server every x amount of time, meaning the server must respond to a lot of requests.

+ +

So do we have another way?

+",266882,,222996,,43135.48472,43480.48681,Push notification from web application to my mobile app,,1,2,,,,CC BY-SA 3.0,, +365279,1,,,2/4/2018 11:50,,1,27,"

I am new to programming, I have been trying to get my head around this.

+ +

Context

+ +

My project is about defining investment packages for various construction projects within a country based around an economy model. There are 5 regional models, and I am writing about a company working on the interface that will allow integration of the 5 regional models with the national economy model that will allow financial decision making.

+ +

Question

+ +

How would I go about developing an interface that would allow integration of 6 other models? Would someone be able to provide a basic overview of the interface development method?

+",295188,,222996,,43135.51667,43135.51667,How would I go about developing an interface that would allow integration of 6 other models?,,0,3,,,,CC BY-SA 3.0,, +365280,1,365281,,2/4/2018 12:26,,-2,312,"

I'm confused. I used to build mobile apps a couple of years ago, and I know that Android development is in Java. I've read about frameworks like Ionic (an HTML/JavaScript framework) that can export to a mobile app. I will have Android projects soon, so I'll be going back to Java for mobile apps; at the moment I'm doing web development. I saw that the Ionic framework is not Java, and I'm very curious about this because I love JavaScript.

+",287216,,287216,,43135.53125,43135.53125,Can we build mobile app using JavaScript (NOT Java)?,,1,0,,43136.69792,,CC BY-SA 3.0,, +365289,1,,,2/4/2018 13:47,,1,263,"

I am working on a web app which uses Google oAuth flow and Drive API to list user's Google Drive files. With the access_type set to online, my app doesn't receive a refresh token.

+ +

My question is: once I receive the user's data, which is just meant to be shown (so storing it in the DB is not an option), should I store that huge data object in a session variable or a cookie? That way, when the user refreshes the page, for a short amount of time it would still be there and the user wouldn't have to repeat the Google consent screen step.

+ +

What would be the best way to deal with data that isn't meant to be stored permanently?

+",294201,,294201,,43164.49097,43164.49097,Storing temporary data received through an API call,,0,2,,,,CC BY-SA 3.0,, +365291,1,,,2/4/2018 14:00,,5,558,"

I am researching cohesion topic and found out that some claim TCC metric should only include public methods, some other sources claim all methods. Would it be wrong to use either approach? Why private methods should be excluded, as some suggest?

+",60327,,,,,43135.69861,"Tight class cohesion metric, all or just public methods?",,1,0,,,,CC BY-SA 3.0,, +365294,1,,,2/4/2018 15:41,,1,52,"

This turned out to be a rather interesting problem contrary to my expectations.

+ +

Imagine a simple chat app, a user registers then can add other users to their contacts list and start conversations.

+ +

I want to be able to show in real time each user's online status (for simplicity just online/offline). The problem is how to do that efficiently.

+ +

The app utilizes WebSockets, so there is a list of connections on the server, which eases the task. The most obvious solution is, whenever a user connects or disconnects, to simply broadcast that to all other users. This will obviously not be very efficient, because every user that does not have the triggering one in their contact list simply won't care, and if the number of users is large, so is the wasted bandwidth.

+ +

A more optimized solution would be to have a connecting user ask for the statuses of everyone in their contact list, then create a list for each user on the server storing the ids of everyone who asked for their status. This way, whenever a user connects/disconnects, I will have a list of users who care about this and I can broadcast the event knowing that nothing goes to waste. However, what I don't like about this is that, as you can see, it adds quite some complexity to the scheme, and keeping such lists on the server will increase memory use proportionally to the number of users (which is always the case, but it seems like too much for such a task).
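To illustrate the second approach, the per-user lists I have in mind would look roughly like this (Python sketch; the class and method names are made up):

```python
class PresenceRegistry:
    """Tracks, for each user, which connected users asked about their status."""
    def __init__(self):
        self._watchers = {}  # user_id -> set of user_ids to notify

    def subscribe(self, watcher_id, contact_ids):
        # Called when `watcher_id` connects and asks for its contacts' statuses.
        for contact in contact_ids:
            self._watchers.setdefault(contact, set()).add(watcher_id)

    def unsubscribe(self, watcher_id):
        # Called when `watcher_id` disconnects.
        for watchers in self._watchers.values():
            watchers.discard(watcher_id)

    def audience_for(self, user_id):
        # Everyone who should receive this user's online/offline events.
        return self._watchers.get(user_id, set())

registry = PresenceRegistry()
registry.subscribe("alice", ["bob", "carol"])
registry.subscribe("dave", ["bob"])
```

So when "bob" goes online or offline, only `audience_for("bob")` gets a message instead of every connected user.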

+ +

Since chat apps are not something new I'd like to ask what are the methods used for such functionality in existing apps or if there is a silver bullet solution that is well known.

+",143301,,,,,43135.69028,Efficiently broadcast user status,,1,0,,,,CC BY-SA 3.0,, +365295,1,365300,,2/4/2018 16:16,,5,523,"

I'm still a beginner when it comes to domain driven design, and I am trying to model something like an RPG's battle system as a bounded context. I am trying to model a bounded context in which a Combatant has a list of abilities, and each Ability has a list of effects that it would apply. For example, a common Effect would be to deal damage to the target Combatant.

+ +

As far as I can tell, the two aggregate roots would be Combatant and Ability, with Effect being a value object.

+ +

Now, I am not certain where I should be putting my logic for the interaction between Combatants. In particular, I will need logic that handles the execution of an ability taking into account the source Combatant and the target Combatant. Likewise, I will need similar logic for the individual effects of an ability. Since the action and effects are being caused by one instance of a Combatant and are targeting a second instance of Combatant, I don't think the logic should be done within the Combatant aggregate itself.

+ +

The problem is that I just don't know where the logic should go. I initially thought a domain service would be the right spot, but after researching them, I think I may have been wrong. Does anyone have any insight as to where I should be putting this logic?

+",280618,,,,,43135.71806,Where does business logic go that involves multiple aggregates?,,1,0,2,,,CC BY-SA 3.0,, +365302,1,,,2/4/2018 17:54,,2,2093,"

I'm currently trying to create a system for a specific tournament but I can't figure out how to generate the matches table.

+ +

I have a tournament with <T> teams. Each team needs to play against each other team exactly once.

+ +

The tournament location has <M> fields which can be played on at the same time (Therefore limiting the amount of matches per round to <M>).

+ +

One of the solutions I know of is a round robin tournament.

+ +

The two solutions on Wikipedia, however, only show schedules with <T> / 2 matches/fields, which is not what I want.

+ +

I want the number of matches played at the same time to be variable, based on the input (tournament location).

+ +

Obviously there can be rounds where specific teams are not playing. But the algorithm should always return the smallest number of rounds possible.

+ +

What's the best way to solve this problem or are there any efficient algorithms for that?

+ +

To clarify: Each match/round takes the same amount of time and each match/round starts at the same time.

+ +
+ +

Edit: Here's an example:

+ +

I have 6 teams:

+ +

When we apply the round robin algorithm it'd generate a table like this:

+ +
         Field/Match 1 Field/Match 2 Field/Match 3
+Round 1: 1 vs 6     2 vs 5     3 vs 4
+Round 2: 1 vs 5     6 vs 4     2 vs 3
+Round 3: 1 vs 4     5 vs 3     6 vs 2
+Round 4: 1 vs 3     4 vs 2     5 vs 6
+Round 5: 1 vs 2     3 vs 6     4 vs 5
+
+ +

Now I'm at a tournament location with only 2 fields available, therefore only 2 matches can be played at the same time.

+ +

The easiest way to alter this table would be to move the matches from fields which don't exist to the end of the tournament:

+ +
         Match 1    Match 2
+Round 1: 1 vs 6     2 vs 5
+Round 2: 1 vs 5     6 vs 4
+Round 3: 1 vs 4     5 vs 3
+Round 4: 1 vs 3     4 vs 2
+Round 5: 1 vs 2     3 vs 6
+Round 6: 3 vs 4
+Round 7: 2 vs 3
+Round 8: 6 vs 2
+Round 9: 5 vs 6
+Round 10: 4 vs 5
+
+ +

This, however, wouldn't be optimal, since field 2 wouldn't be used at all in rounds 6 to 10.

+ +

I need an algorithm which generates a good solution that optimally uses all available fields.

+ +

Example of an table with optimally used fields:

+ +
         Match 1    Match 2
+Round 1: 1 vs 6     2 vs 5
+Round 2: 1 vs 5     6 vs 4
+Round 3: 1 vs 4     5 vs 3
+Round 4: 1 vs 3     4 vs 2
+Round 5: 1 vs 2     3 vs 6
+Round 6: 3 vs 4     6 vs 2
+Round 7: 2 vs 3     5 vs 6
+Round 8: 4 vs 5
+
+ +

And I don't know how I could convert this to an algorithm which works with any amount of matches/fields per round.
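The closest I have come is a greedy packing of all round-robin pairings into rounds of at most <M> matches (Python sketch; function and variable names are my own). For 6 teams and 2 fields it does produce 8 rounds, matching the optimal count above, but I don't know whether greedy packing is optimal in general:

```python
from itertools import combinations

def schedule(teams, fields):
    """Greedily pack all pairings into rounds with at most `fields`
    simultaneous matches and no team playing twice in the same round."""
    pending = list(combinations(teams, 2))  # every pairing exactly once
    rounds = []
    while pending:
        busy, current, leftover = set(), [], []
        for a, b in pending:
            if len(current) < fields and a not in busy and b not in busy:
                current.append((a, b))
                busy.update((a, b))
            else:
                leftover.append((a, b))
        rounds.append(current)
        pending = leftover
    return rounds

rounds = schedule([1, 2, 3, 4, 5, 6], 2)
```

Is this guaranteed to always return the smallest number of rounds, or can greedy packing get stuck in a suboptimal layout?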

+",295208,,283091,,43143.10486,43203.15278,Algorithm for a round robin based tournament with a specific amount of matches per round,,1,5,1,,,CC BY-SA 3.0,, +365310,1,,,2/4/2018 21:56,,11,3340,"

I stumble across this case somewhat often, and I'm surprised about finding so few similar discussions around the web. This question is very related, but my problem is that I want a method that does the more general ""do X if Y"" rather than ""do X if needed"". The answer in that link is to use the prefix Ensure, but that word does not fit if ensuring X is not the method's intention.

+ +

The scenario I have in mind is this:

+ +
void mayPerformAction() {
+    // Do some preparatory calculations
+    // ...
+    if (shouldPerform) {
+        // Perform action
+        // ...
+    }
+}
+
+ +

The reason I am not using two separate methods (shouldPerformAction() and performAction()) is because both the condition and the action depend on some preparatory calculations, which would have to be repeated otherwise. Now, my question is: what is the most logical and readable name for the method mayPerformAction()?

+ +

To clarify, it is important to the caller that the action may sometimes not be executed, otherwise it seems logical to me to use performAction().

+ +

I admit that this is kind of an XY-problem, and there are multiple solutions posted, each of which have good arguments for and against them. To summarize:

+ +
    +
  • Abstract away the doubt and give it a less detailed name, e.g. just performAction().
  • +
  • Prefer clarity and do the calculations twice; the performance difference will be negligible in many cases anyway: if (shouldPerform()) performAction().
  • +
  • Same as above, but store the shared result of the calculations in a global variable or return it (I prefer the latter) so no resources are wasted.
  • +
+ +

I feel like the best approach depends on how 'serious' the condition is and how expensive the preparatory calculations are; as such I'm leaving the question unanswered for now.

+",295228,,1204,,43140.70139,44038.93472,How to name a method which may or may not perform an action depending on a condition?,,6,7,4,,,CC BY-SA 3.0,, +365311,1,365335,,2/4/2018 22:22,,-5,146,"

I ask because I found no answer to this, maybe because it is too obvious (though I think there is no such thing as ""legally obvious"").

+ +

Imagine that I make a small library (DLL) or a styling template (CSS) or whatever, and I want to make it public and open source (yay!).

+ +

Until now, I used to license these kind of things with GPLv3 if it is a whole project, or LGPLv3 if it is a small part like I said before.

+ +

My question is: as the owner of my code and binaries, am I bound in some way to use them under the same terms as the license I released them under?

+ +

For example, using my GPLv3 library in a proprietary program (closed source) with EULA and everything...

+ +

Thanks in advance.

+",295230,,,,,43136.33611,Can I use my own software with a different license from that one used to make it public?,,1,5,,43136.69861,,CC BY-SA 3.0,, +365314,1,365330,,2/5/2018 0:01,,-2,3444,"

I've been reading a lot of comparisons between NoSQL and SQL. I feel like NoSQL might be the way to go but I'm still uncertain. So here's my example:

+ +

I want to create an inventory system. There will be user accounts with the usual details (email, username, etc.). Users will be able to create stores and lists. Stores can be linked to a list, and multiple stores can be linked to the same list.

+ +

I think there may be more reads than writes, which leads me towards NoSQL. However, I feel like I may need joins and such, which leads me towards PostgreSQL (I'd also like arrays).

+ +

Given the example is NoSQL the way to go?

+ +

For NoSQL I'm looking at Amazon's DynamoDB.

+",286007,,1204,,43136.14792,43136.27986,NoSQL vs SQL for creating an inventory database,,1,2,1,43136.32778,,CC BY-SA 3.0,, +365321,1,365324,,2/5/2018 4:31,,10,2176,"

Writing a User object in Swift, though my question relates to any strongly typed language. A User can have a bunch of links (FacebookProfile, InstagramProfile, etc). A few questions around this.

+ +
    +
  1. Is it good practice to wrap links in their own object?
  2. +
+ +
+    struct User {
+       var firstName: string
+       var lastName: string
+       var email: string
+       var links: Links
+    }
+
+ +
+    struct Links {
+       var facebook: string
+       var instagram: string
+       var twitter: string 
+    }
+
+ +

Or should they be loose? I know technically both ways are fine, but wondering if there is a recommended approach, in general--especially for readability.

+ +
struct User { 
+   var firstName: string
+   var lastName: string
+   var email: string
+   var facebookLink: string
+   var twitterLink: string
+   var instagramLink: string
+}
+
+ +
    +
  1. In a scenario like this, should links be a collection/list? I figured it should not be a list because there is a fixed number of link options available, and not a growing number. Is my thinking right?

  2. +
  3. Is it good practice to place my networking methods inside the User object, like getUsers, getUser, updateUser?

  4. +
+ +

I know these could be subjective, but I am trying to understand what the best practice around similar situations is. Would appreciate any pointers.

+",146330,,173910,,43136.84861,43137.27569,Is it good practice to wrap a related set of properties into its own struct/class?,,3,0,3,,,CC BY-SA 3.0,, +365333,1,365359,,2/5/2018 7:58,,-4,59,"

I am designing a state diagram for my software. I want to represent the following scenario

+ +

The application is in state X
+---> (state X)

+ +

It can have two events/actions, eg. action A and action B

+ +

If action A is performed, it goes into the same state X, and again action A and action B are available.

+ +

But, if action B is performed, it again goes into the same state X, but now action A and action B are not available; instead, two new actions, action C and action D, are available.

+ +

action C and action D both will again lead back to the same state X

+ +

How can I represent this?

+",247192,,,,,43137.11319,Represent same state with different actions based on an event/action,,2,2,,,,CC BY-SA 3.0,, +365339,1,365344,,2/5/2018 9:29,,188,44281,"

As an experienced software developer, I have learned to avoid magic strings.

+ +

My problem is that it is such a long time since I have used them, I've forgotten most of the reasons why. As a result, I'm having trouble explaining why they're a problem to my less experienced colleagues.

+ +

What objective reasons are there for avoiding them? What problems do they cause?

+",1928,,1928,,43136.49653,43440.46528,What is wrong with magic strings?,,7,22,40,,,CC BY-SA 3.0,, +365340,1,,,2/5/2018 9:29,,1,172,"

We get problems on production systems every now and then, most of which can't be replicated on dev/systest/uat for the following reasons:

+ +
    +
  • We don't have enough data on dev/systest/uat, e.g. production has millions of rows... but our other environments only have a few thousand

  • +
  • We don't have the right data structures, depth of data, etc.

  • +
+ +

As a developer, I have been told that in ISO 27001-accredited organisations developers can't touch production, even just for investigation, e.g. reading data and not writing.

+ +

I personally don't have a problem with this... but it makes investigating issues a nightmare; we have to jump around infrastructure and support all day. Something we could test within hours takes days.

+ +

Is this how it is meant to be?

+",196874,,29899,,43136.74167,43136.74167,ISO 27001 and investigating production issues,,2,4,,,,CC BY-SA 3.0,, +365346,1,365734,,2/5/2018 10:35,,6,1314,"

Subj. Atm I'm using Selenium and Python, but the same applies to any other scraping solution.

+ +

I'm wondering:

+ +
    +
  1. which of the options outlined below are optimal/recommended/best practices
  2. +
  3. if there are existing solutions/helper libraries, which keywords I should look them up by.
  4. +
+ +

To stay objective, ""optimal/recommended/best practices"" means ""widely used and/or promoted/endorsed by high-profile projects in the niche.""

+ +

I couldn't find any Selenium-related or general-purpose material on this topic having spent about a day of net time searching around which probably means I'm lacking some critical piece(s) of information.

+ +
+ +

The basic operations when scraping are:

+ +
    +
  • searching for element (by CSS selector/XPath and/or by hand for things that those aren't capable of)
  • +
  • interacting with an element (input text, click)
  • +
  • read element data
  • +
+ +

And the call chain goes like this:

+ +
+

(Test code ->) User code -> Framework (selenium) -> Browser (web driver) -> Site

+
+ +
+ +

So, there are 3 hops here that I could mock. Each one poses challenges:

+ +
    +
  • Mock the site: launch a local HTTP server and direct the browser there + +
      +
    • Have to reimplement the scraped site's interface, in web technologies
    • +
  • +
  • Mock the browser (e.g. populate HtmlUnit (an in-process browser engine) with predefined HTML at appropriate moments) + +
      +
    • much simpler but still need to emulate state transitions/action reactions somehow
    • +
  • +
  • Mock the framework calls + +
      +
    • The truest to the unit testing philosophy, the least work
    • +
    • I'm however worried that it's too restrictive. E.g. I can find the same element by various means. A mock object can only accept a very specific course of action as it lacks the sophistication to e.g. check if some other selector would produce the same result.
    • +
  • +
+ +

There are also two options for what content to provide -- either

+ +
    +
  • provide the site's original content that it produced for a test query, compiling it into some sort or self-contained package + +
      +
    • labor-intensive and error-prone, or
    • +
  • +
  • provide the bare minimun to satisfy the tested algorithm + +
      +
    • much simpler but would fail for other possible algorithms that would succeed with the real site
    • +
  • +
+ +

One last concern is the fact that a site is effectively a state machine. I'm not sure which will be more useful:

+ +
    +
  • implement the complete state machine, probably as some kind of specification, and set/check its states in the tests + +
      +
    • very labor-intensive without some kind of library that reduces the work to writing a formal specification; or
    • +
  • +
  • simply validate the action sequences + +
      +
    • which doesn't seem to actually test the code against anything -- it merely reiterates what the code does
    • +
  • +
+ +

Update to address an expressed concern:

+ +

I'm scraping a 3rd-party site -- which can and will change without notice one day. So, I'm fine with testing against ""the site's interface as it was at the time of writing"" -- to quickly check if a code change broke the scraper's internal logic.

+",108337,,108337,,43137.05,43142.12361,Preferred approach to mock a site to test a scraper,,3,0,1,,,CC BY-SA 3.0,, +365349,1,,,2/5/2018 11:47,,3,1118,"

We are building a new product in the real estate space, and the end users of this product are not very tech-savvy. To give them a better user experience, we want our users to find relevant things quickly and easily. Apart from a simple UI, a universal search bar seems to add value.

+ +

The search bar with auto-complete will allow users to find information such as - their billing history (past payments, invoices..), help content, support content from helpdesk tickets, data from chat history and such.

+ +

We are going with a microservices architecture for our services, such as user management, a help ticket system, chat, a CMS for help content, and more. The question is how to build a central search service that can index and store all the user content from all these services, which the user will be able to search.

+ +

Would each service dump data into Elasticsearch to be indexed and made available for search, which a search service then queries? Say, when a support ticket is opened or a chat conversation is closed, would the relevant microservice copy this data into a service like Elasticsearch? Or would it be better for the microservices to push such data to a queue which is then consumed into Elasticsearch?
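What I'm picturing for the queue option is roughly this (Python sketch; the in-memory queue and inverted index are stand-ins for a real message broker and Elasticsearch, and all names are invented):

```python
from collections import defaultdict, deque

queue = deque()  # stand-in for a real message broker

def publish(service, doc_id, text):
    # Each microservice pushes an event when content is created or closed.
    queue.append({"service": service, "id": doc_id, "text": text})

index = defaultdict(set)  # word -> set of (service, doc_id); stand-in for ES

def consume():
    # A consumer drains the queue and indexes each document's words.
    while queue:
        event = queue.popleft()
        for word in event["text"].lower().split():
            index[word].add((event["service"], event["id"]))

def search(term):
    return index.get(term.lower(), set())

publish("helpdesk", 1, "Refund request for invoice 42")
publish("chat", 7, "customer asked about refund policy")
consume()
```

The point being that each service only knows about the queue, and the search service owns the index.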

+ +

I'm open to any ideas and thoughts on how search is architected and the best practices thereof. Happy to provide more information on our service if it helps with a better answer.

+ +

UPDATED:

+ +
    +
  • There is one DB per microservice
  • +
  • Search need not be real time
  • +
  • Not too worried about the load and we will go with hosted search - AWS cloudsearch or elasticsearch
  • +
+",295266,,295266,,43137.15417,43465.23611,Architecting a universal search for a product with microservices,,3,1,3,,,CC BY-SA 3.0,, +365356,1,365361,,2/5/2018 14:08,,0,126,"
@Component
+public class RepositoryContainer {
+    @Autowired
+    public CommentRepository commentRepository;
+    @Autowired
+    public ItemRepository itemRepository;
+    @Autowired
+    public UserRepository userRepository;
+    @Autowired
+    public PostRepository postRepository;
+}
+
+ +

In this way, I just need to inject a single god-object RepositoryContainer and don't need to inject every repository, which would pollute my controller.

+",295289,,,,,43136.66181,Should I create a Repository Container to get my repositories?,,2,1,,,,CC BY-SA 3.0,, +365357,1,365358,,2/5/2018 14:16,,0,209,"

Please see the code below:

+ +
public class Person
+{
+   public DateTime DateOfBirth { get; }
+   public List<Offer> Offers = new List<Offer>();
+
+   public Person(DateTime dateOfBirth)
+   {
+       int age = DateTime.Now.Year - dateOfBirth.Year;
+       if (DateTime.Now < dateOfBirth.AddYears(age)) age--;
+       if (age < 21 || age > 99)
+           throw new ArgumentException(""Age must be at least 21."", nameof(dateOfBirth));
+       DateOfBirth = dateOfBirth;
+   }
+
+   public List<Offer> GetOffers(IOfferCalculator offerCalculator)
+   {
+       return offerCalculator.GetOffers(this);
+   }
+
+   public void AssignOffers(List<Offer> offers)
+   {
+      //Assign offers to Person.Offers here.
+   }
+}
+
+ +

Notice that I pass the entire Person object to Person.GetOffers(). The reason I do this is that the person has to be over 21 for all of the offers, and this is checked (validated) in the Person constructor. Is this a good reason to pass the Person as an object to offerCalculator.GetOffers? The reason I ask is that t3chb0t described this as a code smell in my question here: https://codereview.stackexchange.com/questions/186124/testing-the-process-of-assigning-offers-to-a-customer

+ +

Alternatively I could just pass the DateOfBirth like this: return offerCalculator.GetOffers(DateOfBirth). However, this would mean that I would have to validate the DateOfBirth in every offer.

+ +

I am trying to follow the principle of least astonishment these days and find myself overthinking a lot. My two questions are:

+ +

1) Is it a good approach to pass the Person object to offerCalculator.GetOffers, as Person has already validated the date of birth?

+ +

2) Should I be checking the date of birth against every Offer, even though every Offer requires the person to be over 21? I don't think this will ever change, i.e. the person will always have to be over 21.
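A minimal sketch of the idea behind question 1 (Python here for brevity; the 21–99 rule comes from the question text, everything else is invented): validating once in the constructor establishes an invariant that every consumer of a Person — including every offer — can then rely on without re-checking.

```python
from datetime import date

MIN_AGE = 21  # the rule from the question: every offer requires 21+

class Person:
    """Validates once, in the constructor, so consumers can trust it."""

    def __init__(self, date_of_birth, today=None):
        today = today or date.today()
        age = today.year - date_of_birth.year
        if (today.month, today.day) < (date_of_birth.month, date_of_birth.day):
            age -= 1
        if not MIN_AGE <= age <= 99:
            raise ValueError("age must be between 21 and 99")
        self.date_of_birth = date_of_birth

# A calculator receiving a Person can rely on the invariant and need
# not re-check the date of birth inside every single Offer.
p = Person(date(1990, 6, 1), today=date(2018, 2, 6))
```

An invalid date of birth cannot produce a Person at all, which is exactly what makes passing the whole object safer than passing the raw DateOfBirth.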

+",65549,,,,,43136.64722,Passing an object instead of parameters,,1,1,,,,CC BY-SA 3.0,, +365360,1,365593,,2/5/2018 14:49,,4,1424,"

Ok, I'm working on DynamoDB-based caching. We have an API where each call costs us real money and the information retrieved is ""fresh"" for 2-3 weeks, so it makes sense to cache it. + There's no problem with calling dynamoDbClient.get(apiUrl) and retrieving the stored JSON response. DynamoDB supports batches - 25 batch items for PUT and 100 batch items for GET. So if we need the JSON response for 100 items, we do one call instead of 100.

+ +

The question I have is how to organize this in the best way for threading.

+ +

Here's general idea and I'm open to suggestions.

+ +

Assuming we have a batch call with 100 items, we can use a BlockingQueue<String> to store the keys which we can use later. We would use the blockingQueue.put() method to make other threads wait if, say, 150 threads arrive at the method when only 100 batch slots are available.

+ +

So a thread that enters the method gets its place in the batch queue and needs some ""locking"" mechanism to wait until the response arrives and be ""woken up"" when its response is available.

+ +
pseudoCodeMethod(String resourceUrl) {
+  blockingQueue.put(resourceUrl); // sleep if no place
+  LockableResponse lockableResponse = getLockableResponse();
+  lockableResponse.setKey(resourceUrl);
+  lockableResponse.getSemaphore().acquire();
+
+  String jsonApiResponse = lockableResponse.getJsonApiResponse();
+  lockableResponse.clean();
+
+  return jsonApiResponse;
+}
+
+LockableResponse{
+  String resourceUrl;
+  String jsonApiResponse;
+  Semaphore semaphore = new Semaphore(0);
+
+  public void clean(){
+     resourceUrl = null;
+     jsonApiResponse = null;         
+  } 
+}
+
+ +

My thought is that we would associate the resourceUrl for each DynamoDB response with a data structure that has a semaphore to await the API response, which arrives for ~100 items in one batch call.

+ +

A separate thread performs the call, then iterates through the response, assigns each jsonApiResponse to the proper resourceUrl [a ConcurrentMap for that??], and then releases the semaphore to wake up the waiting thread, so the thread takes its jsonApiResponse and exits the method.

+ +

The clean() method could be added to allow reuse of the same structure. I'm thinking about an array of lockable resources sized to the batch capacity, so they could be cleared and reused for other batch GET calls.

+ +

Thoughts? Suggestions?
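As one alternative to hand-rolled semaphores, the same coalescing idea can be sketched with futures (Python here for brevity; Java's CompletableFuture works the same way — the fake_batch_get stand-in and all names are invented):

```python
from concurrent.futures import Future

BATCH_LIMIT = 100  # DynamoDB allows up to 100 items per batch GET

def fake_batch_get(keys):
    # Stand-in for dynamodb batch_get_item: one call serves many keys.
    return {k: f"json-for-{k}" for k in keys}

class BatchingCache:
    """Coalesce individual get() calls into one batch request.

    Callers receive a Future instead of blocking on a semaphore; the
    worker thread completes every pending future after a single call.
    """

    def __init__(self):
        self._pending = {}  # resourceUrl -> Future

    def get(self, resource_url):
        fut = self._pending.get(resource_url)
        if fut is None:
            fut = Future()
            self._pending[resource_url] = fut
        return fut

    def flush(self):
        # Called by the worker when the batch is full or a timeout elapses.
        batch = dict(list(self._pending.items())[:BATCH_LIMIT])
        for url in batch:
            del self._pending[url]
        responses = fake_batch_get(batch.keys())
        for url, fut in batch.items():
            fut.set_result(responses[url])

cache = BatchingCache()
f1 = cache.get("item/1")
f2 = cache.get("item/2")
cache.flush()
```

Each caller blocks on its own future (f.result()) instead of a shared semaphore, and duplicate requests for the same key automatically share one future, so there is no manual wake-up bookkeeping or clean()/reuse logic to maintain.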

+",295291,,209331,,43136.73681,43139.61806,Threaded batching algorithm,,2,2,,,,CC BY-SA 3.0,, +365371,1,365372,,2/5/2018 16:26,,1,510,"

I am developing a (Java) library providing an API to read a file in a specific format into an object. The format is basically a map, and specifies valid values for some of the keys, and valid types for values for others.

+ +

E.g., the value for colour may only be one of red, green or blue, while the key date must be provided in YYYY-MM-DD format.

+ +

Also, the file in this format must have a specific name.

+ +

Obviously, the API could be used with invalid data, e.g., a file of the wrong name, or a file containing invalid values, e.g., colour: orange or date: last year.

+ +

Additionally, the API will have to deal with scenarios such as non-existing files, files in completely different formats, etc.

+ +

Are there best practices for this kind of scenario? E.g., should I throw runtime exceptions for the latter kind of issues (other format, file not found, I/O exception) that I catch during the read, and custom exceptions for the other issues (invalid file name/values)?

+ +

Or should I return some sort of result object wrapping, e.g., the data object when it is valid and has been successfully read, or a list of error messages collected during the read when something went wrong? (Should the respective other fields then be null or contain an empty value?)
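A minimal sketch of the result-object option (Python; the colour/date rules come from the examples above, while the ReadResult shape and read_map helper are invented):

```python
from dataclasses import dataclass, field

VALID_COLOURS = {"red", "green", "blue"}

@dataclass
class ReadResult:
    """Either the parsed data or a list of collected error messages."""
    data: dict = field(default_factory=dict)
    errors: list = field(default_factory=list)

    @property
    def ok(self):
        return not self.errors

def read_map(lines):
    """Collect every validation problem instead of failing on the first."""
    result = ReadResult()
    for line in lines:
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "colour" and value not in VALID_COLOURS:
            result.errors.append(f"invalid colour: {value!r}")
            continue
        result.data[key] = value
    return result

good = read_map(["colour: red", "date: 2018-02-05"])
bad = read_map(["colour: orange"])
```

This treats "the file is invalid" as an expected outcome carrying all collected messages at once, while genuinely exceptional conditions (file not found, I/O errors) can still surface as exceptions.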

+",70086,,,,,43136.84792,Should a file reader library API throw exceptions?,,3,0,,,,CC BY-SA 3.0,, +365377,1,384802,,2/5/2018 18:30,,1,819,"

We had a discussion over this topic at the work not long ago.

+ +

We have the following situation:

+ +
    +
  • A legacy database which is currently the base for the main operational application in our company.
  • +
  • A new content management system that should receive master data information from the mentioned database. That master data will always be owned by the legacy database, but it is needed in the CMS as a basis to construct its own data. You could think of it as categories or grouping data.
  • +
  • A trigger system in the legacy database that watches over certain tables. When an insert/update/delete occurs, it puts a notification into a database queue.
  • +
  • This database queue is watched by a (so called) loader service, which gets the data and puts it into a Kafka topic (Kafka is used for a lot of microservices in our systems).
  • +
  • A (again so called) sync service which is subscribed to the Kafka topic. Whenever something is published in the topic the service consumes it and pushes it to the CMS.
  • +
+ +

Here you can see a diagram that shows the architecture at high level:

+ +

+ +

For me this is (sort of) an event-driven architecture. An event happens in the legacy DB and other services get to react to it.

+ +

However, a colleague from me insisted in that it could not be considered as an event-driven architecture because the owner of the master data that provoked the event was the legacy database. Basically, my colleague's argument was that each service should own its own data and send event messages to other services (via a mediator), and that the other services should react to these events and build their data based on their own needs.

+ +

Now, I agree that each service will have to use the data from the event as it sees fit. But I do not agree that this argument means the architecture I explained is not event-driven. I have searched the Internet and did not find a clear statement indicating this data ownership as being a prerequisite for an architecture to be considered event-driven.

+ +

Is it really necessary in an event-driven architecture that each actor owns the data he works on? I.e, do I really need to not use the master data of the legacy database in the CMS? And/or should the CMS instead (ideally) create its own data and somehow map it to the legacy database data?

+ +

Some clarifications:

+ +
    +
  • I know the presented architecture may not be the best you can have, but working with legacy systems sometimes forces you to tweak things.
  • +
  • I also know this architecture means that the services are coupled at the data level, but it is what we have for now, again due to the legacy system.
  • +
+",81820,,,,,43467.18819,Does an event-driven architecture require that each actor owns his data?,,2,6,1,,,CC BY-SA 3.0,, +365379,1,365448,,2/5/2018 19:21,,1,687,"

What is the cleanest way to obtain progress from a class that will also be used in a no-gui environment for the purpose of displaying said status in a JavaFX Gui?

+ +

Example:

+ +
import java.util.ArrayList;
+import java.util.List;
+
+public class ExampleFetcher {
+    public List<Entity> fetch() {
+        final List<Entity> results = new ArrayList<>();
+
+        for (int i = 0; i < N; i++) {
+            final Entity current = //Do some heavy work
+
+            results.add(current);
+        }
+
+        return results;
+    }
+}
+
+ +
+ +

There are several possibilities, but to me each of them has some drawbacks

+ +

1. ""The JavaFX way"": Use an observable property from javafx.beans

+ +
import javafx.beans.property.ReadOnlyDoubleProperty;
+import javafx.beans.property.ReadOnlyDoubleWrapper;
+
+import java.util.ArrayList;
+import java.util.List;
+
+public class ExampleFetcher {
+    private final ReadOnlyDoubleWrapper progress = new ReadOnlyDoubleWrapper();
+
+    public List<Entity> fetch() {
+
+        final List<Entity> results = new ArrayList<>();
+
+        for (int i = 0; i < N; i++) {
+            final Entity current = //Do some heavy work
+
+            progress.set((double) i / N);
+
+            results.add(current);
+        }
+
+        return results;
+    }
+
+    public double getProgress() {
+        return progress.get();
+    }
+
+    public ReadOnlyDoubleProperty progressProperty() {
+        return progress.getReadOnlyProperty();
+    }
+}
+
+ +

Now you can simply observe the progressProperty or add a listener, which is quite nice, but it clutters the API a little. Also, is it even ok for a class to have a dependency on javafx if it's not going to be used in some cases?

+ +

2. The naive way: Add consumers to the called method

+ +
import java.util.ArrayList;
+import java.util.List;
+import java.util.function.Consumer;
+
+public class ExampleFetcher {
+    public List<Entity> fetch(final Consumer<Double> progressConsumer) {
+        final List<Entity> results = new ArrayList<>();
+
+        for (int i = 0; i < N; i++) {
+            final Entity current = //Do some heavy work
+
+            progressConsumer.accept((double) i / N);
+
+            results.add(current);
+        }
+
+        return results;
+    }
+}
+
+ +

You can pass a consumer that will update your progress bar or whatever you're using in the GUI. This avoids the dependency on javafx but adds a lot of clutter to the API, especially if you're not only monitoring progress but some other kind of status.

+ +
+ +

Now, is there another way I'm not thinking about? Is one of the ways provided actually decent? Or should I maybe look into another way of executing my tasks?

+",116527,,,,,43137.66736,Get progress from class that is also used in a no-gui environment,,2,0,,,,CC BY-SA 3.0,, +365388,1,,,2/5/2018 21:11,,-3,125,"

For the simplification, I am talking here only about the time, invested into the project starting from the task being defined for developers and functionality being passed to testers.

+ +

AFAIK, the mostly used tool for support of the project management is JIRA now. Using it, we can find how much time was used for different functionalities. Or, maybe, subtasks connected to functionalities.

+ +

But in real life, a great deal of a developer's work is not coding, but various additional activities: installing and configuring platforms, libraries, and old and new parts of the project; refactoring; functional and unit testing; and others. If some activity is a big, well-known time-eater, everybody knows about the problem and we can hope it will be solved sooner or later. But time-eaters can expose themselves only in connection with some specific activities or parts of the product, and different developers waste their time again and again without even knowing that it is a common problem.

+ +

The obvious solution is to register time put not only into different functional tasks, but into different organizational stages, necessary for their solutions. Such as reinstallation of plugin A or writing API tests.

+ +

If we had such information, we could find the problems and after solution of them greatly increase the effectiveness of the common work. Something as:

+ +
When creating a functional test, 
+  we spend 40%+-15% of time for 
+  the automatization of server launching with the necessary data. 
+
+=> 
+
+We should make some data loader for that server.
+
+ +

Tools such as MS Project can show us the structure of the time invested, but cannot measure it. They can be used only for post-factum analysis, if somebody has collected and entered the data.

+ +

But HOW can we register and/or measure the time put into different activities connected to the solutions of the concrete tasks? Also, I would like not to pass the measurements into the analysis process manually. JIRA, IMHO, is absolutely unusable for any of these tasks.

+ +

I am not so interested in the tool (any table calculator can serve) as in organization of the whole process among tools and people.

+",44104,,,,,43136.925,How can we measure time spend on different activities on the project? (not tasks!),,1,12,,,,CC BY-SA 3.0,, +365392,1,,,2/5/2018 21:33,,11,864,"

Suppose there is a Page class, which represents a set of instructions to a page renderer. And Suppose there is a Renderer class that knows how to render a page on screen. It is possible to structure code in two different ways:

+ +
/*
+ * 1) Page Uses Renderer internally,
+ * or receives it explicitly
+ */
+$page->renderMe(); 
+$page->renderMe($renderer); 
+
+/*
+ * 2) Page is passed to Renderer
+ */
+$renderer->renderPage($page);
+
+ +

What are the pros and cons of each approach? When will one be better? When will the other be better?

+ +
+ +

BACKGROUND

+ +

To add a little bit more background - I am finding myself using both approaches in the same code. I am using a 3rd party PDF library called TCPDF. Somewhere in my code I have to have the following for PDF rendering to work:

+ +
$pdf = new TCPDF();
+$html = ""some text"";
+$pdf->writeHTML($html);
+
+ +

Say I wish to create a representation of the page. I could create a template that holds instructions to render a PDF page snippet like so:

+ +
/*
+ * A representation of the PDF page snippet:
+ * a template directing how to render a specific PDF page snippet
+ */
+class PageSnippet
+{    
+    function runTemplate(TCPDF $pdf, array $data = null): void
+    {
+        $pdf->writeHTML($data['html']);
+    }
+}
+
+/* To be used like so */
+$pdf = new TCPDF();
+$data['html'] = ""some text"";
+$snippet = new PageSnippet();
+$snippet->runTemplate($pdf, $data);
+
+ +

1) Notice here that $snippet runs itself, as in my first code example. It also needs to know and be familiar with the $pdf, and with any $data for it to work.

+ +

But, I can create a PdfRenderer class like so:

+ +
class PdfRenderer
+{
+    /**@var TCPDF */
+    protected $pdf;
+
+    function __construct(TCPDF $pdf)
+    {
+        $this->pdf = $pdf;
+    }
+
+    function runTemplate(PageSnippet $template, array $data = null): void
+    {
+        $template->runTemplate($this->pdf, $data);
+    }
+}
+
+ +

and then my code turns to this:

+ +
$renderer = new PdfRenderer(new TCPDF());
+$renderer->runTemplate(new PageSnippet(), array('html' => 'some text'));
+
+ +

2) Here the $renderer receives the PageSnippet and any $data required for it to work. This is similar to my second code example.

+ +

So, even though the renderer receives the page snippet, inside the renderer, the snippet still runs itself. That is to say that both approaches are in play. I am not sure if you can restrict your OO usage to only one or only the other. Both might be required, even if you mask one by the other.

+",119333,,119333,,43138.77222,43139.36042,"In object oriented languages, when should objects do operations on themselves and when should operations be done on objects?",,7,8,3,,,CC BY-SA 3.0,, +365395,1,365475,,2/5/2018 21:57,,0,1718,"

I'm part of a small team that currently uses an Access database for scheduling a larger team's availability. This has presented some issues with corruption of the Access database. Additionally, I want to implement additional functionality over time.

+ +

I've set out to create an application for the 4-5 of us to use that will solve the concurrent database issue, as well as give the team more functionality.

+ +

Since this is a shared network drive, I won't have access to SQL Server (from my guess). I thought maybe a web service would be the way to go, but I don't really want to front the bill for this. Additionally, when I eventually leave the team I don't want to maintain this.

+ +

Some ideas I've come up with is an application written in C# that acts as the front-end with SQLite embedded as the back-end. However, I've spent days trying to get the Entity Framework to work with SQLite and am at the point of giving up.

+ +

I'm trying to decide what else I can do to solve this issue. Is there another technology I can use?

+",274856,,,,,43144.09167,Concurrent database access on shared network drive,,4,5,,,,CC BY-SA 3.0,, +365396,1,,,2/5/2018 21:57,,-2,162,"

I have a Report class that is basically an interface for creating HTML reports generated via a Python template engine. The Report class currently generates the report but also has a few methods dealing with retrieving/uploading data from S3. These S3 methods are mostly agnostic to what I would consider the core part of the Report class. The only reason I have them in this class is that they generate partitioning/naming based on some things like the report name and account. With my current setup, the flow would look like this for generating and uploading a report:

+ +
billing_report = Report(<account_name>,<type>,...)
+billing_report.generate()        # returns HTML report for render or processing
+billing_report.upload()          # uploads generated report to S3
+
+ +

The raw data needed to generate the report comes from S3 and then the final report can be put into S3 as well.

+ +

According to Single Responsibility principle, it seems like this Report class should be split up since it's both generating the report as well as managing things related S3 retrieval/upload. But, if I split them into say Report and S3 classes, it seems like they would be tightly coupled. Also, the S3 class would be just a lightweight wrapper for a few S3 API calls and I would have to create an S3 object which doesn't make logical sense as an object.

+ +

So what is the best way to approach this kind of situation?
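One way to split it while keeping the coupling one-directional is sketched below (Python; ReportStore, its key scheme, and the in-memory dict standing in for S3 are all invented — a real version would wrap boto3's put_object/get_object with the same key scheme):

```python
class ReportStore:
    """Owns naming/partitioning and storage access.

    The dict is an in-memory stand-in for S3; a real version would wrap
    boto3's put_object/get_object using the same key scheme.
    """

    def __init__(self):
        self._objects = {}

    def key_for(self, account, name):
        return f"reports/{account}/{name}.html"

    def upload(self, account, name, body):
        self._objects[self.key_for(account, name)] = body

class Report:
    """Knows only how to render; storage is injected, not owned."""

    def __init__(self, account, name):
        self.account, self.name = account, name

    def generate(self):
        return f"<h1>{self.name} report for {self.account}</h1>"

store = ReportStore()
report = Report("acme", "billing")
store.upload(report.account, report.name, report.generate())
```

Report stays testable without S3 at all, and ReportStore is the single owner of partitioning/naming, so the two classes collaborate without either knowing the other's internals — lightweight, but each with one responsibility.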

+",293614,,,,,43136.96042,Splitting up a class,,1,0,,,,CC BY-SA 3.0,, +365404,1,,,2/6/2018 3:38,,-3,355,"

How can we safeguard a REST API so it is accessed only from trusted clients? Let me explain the scenario: let's say there is an API which will be accessed from a mobile application MA and a web application WA. Besides these two applications, this API should not (and must not) be accessed by any other client.

+ +

Key Points:

+ +
    +
  • I cannot use token-based authentication here, as the user is not required to log in to the application to access (or just read) the information.
  • +
  • Embedding any secret information inside the application, to be sent along with the API request, is not secure, as that secret can be leaked (though using SSL) to potential user using reverse engineering.
  • +
+ +

In this scenario, what is the best way to secure REST API?

+",237932,,,,,43137.26528,How to make REST API to be accessed only from trusted (my) application?,,3,1,,43137.44583,,CC BY-SA 3.0,, +365408,1,365411,,2/6/2018 7:00,,2,5343,"

I have a function/method which throws an exception when the database is called. +I was writing JUnit tests and was told to aim for good coverage. So, should I write a JUnit test which exercises the catch block? +I am not sure it makes sense, since I would have to mock the database to do so.

+ +

So whole scenario will be like this:

+ +
public <returntype> method(<arguments>) {
+    //statements
+    try {
+        // database call
+        database.insert(<argument>);
+    }
+    catch (Exception e) {
+        logger.error(e);
+        return error;
+    }
+}
+
+ +

It makes sense only when there is something good in the catch statement. But if it's only logging, then there is no point having a JUnit test for this.

+ +

So, should I write these tests?

+ +
public void test_catch_Exception() {
+    //setup: mock the database so that insert() throws
+    Database database = mock(Database.class);
+    when(database.insert(any())).thenThrow(new RuntimeException());
+
+    //execute the method and assert that the error path was taken
+    <returntype> result = method(<arguments>);
+    assertEquals(error, result);
+}
+
+",245712,,,,,43137.31528,Writing Junit tests for catching exception?,,1,1,,,,CC BY-SA 3.0,, +365412,1,,,2/6/2018 7:51,,56,10229,"

Is there an operator equivalent of nor? For example, my favorite color is neither green nor blue.

+ +

And the code would be equivalent to:

+ +
// example one
+if (color!=""green"" && color!=""blue"") { 
+
+}
+
+// example two
+if (x nor y) {
+    // x is false and y is false
+}
+
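Most mainstream languages have no nor operator as such, but by De Morgan's laws x nor y is simply not (x or y), which equals (not x) and (not y) — illustrated here in Python:

```python
# De Morgan's laws: "neither x nor y" == not (x or y) == (not x) and (not y)
def nor(x, y):
    return not (x or y)

color = "red"
# "my favorite color is neither green nor blue"
neither = nor(color == "green", color == "blue")
```

So example one and example two in the question express the same condition, just with the negation pushed inward or kept outward.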
+",48061,,104645,,43137.98056,43138.78194,"Is there a keyword or operator for ""nor""?",,7,15,10,,,CC BY-SA 3.0,, +365416,1,,,2/6/2018 8:42,,-3,221,"

We have several ongoing projects and need to figure out the way to manage the repos.

+ +

The current situation: Project A is in production, in SVN Repo A. Project B is in development for another client and is actually Project A with some add-on features, in SVN Repo B. So every time we have a fix on A, we need to wait a while for the fix to be verified. Meanwhile, B continues to be under development, so after a while the code bases diverge. Then, after the fix is verified, I have to manually track down the commit in A and copy/paste the code into B (because they are in different repositories). The whole process wastes my time. And there are going to be more clients later, so I don't want a situation where I have to copy/paste code across multiple projects (repos).

+ +

We are discussing whether we should move to Git or keep using SVN. If using Git, we could create branches from the base code and match each client with one branch; however, the usual master/develop/feature/release branching model might become unwieldy. I know branches exist in SVN as well.

+ +

What is a clean way to organise this situation?

+",149283,,149283,,43137.66806,43137.66806,Repo management using Git or SVN,,2,12,,,,CC BY-SA 3.0,, +365427,1,365461,,2/6/2018 10:43,,27,15752,"

I can see several posts where the importance of handling exceptions at a central location or at a process boundary has been emphasized as a good practice, rather than littering every code block with try/catch. I strongly believe that most of us understand the importance of it; however, I see people still ending up with the catch-log-rethrow anti-pattern, mainly because, to ease troubleshooting during any exception, they want to log more context-specific information (example: method parameters passed), and the way to do that is to wrap the method in try/catch/log/rethrow.

+ +
public static bool DoOperation(int num1, int num2)
+{
+    try
+    {
+        /* do some work with num1 and num2 */
+    }
+    catch (Exception ex)
+    {
+        logger.log($""error occurred while number 1 = {num1} and number 2 = {num2}""); 
+        throw;
+    }
+}
+
+ +

Is there a right way to achieve this while still maintaining good exception-handling practice? I have heard of AOP frameworks like PostSharp for this, but would like to know if there is any downside or major performance cost associated with these AOP frameworks.

+ +

Thanks!

+",293148,,293148,,43808.36042,43808.36042,Try/Catch/Log/Rethrow - Is Anti Pattern?,,6,12,13,,,CC BY-SA 3.0,, +365429,1,365435,,2/6/2018 11:00,,-3,72,"

I am wondering how do big development teams do it, when they all work on one project.

+ +

Options are:

+ +
    +
  • One virtual machine in the cloud that they SSH into and push updates to via FTP?
  • +
  • Each developer has their own local virtual machine, with updates controlled by Git and merged together in a master branch in the cloud?
  • +
  • any other ideas?
  • +
+ +

Basically I wanted to know what the structure and workflow are.

+ +

Workflow for developing a web app for example. Lets assume no testing needed.

+ +

I wanted to know the basic steps: +one virtual machine that everybody logs into and commits changes on, +or virtual machines on each developer's local drive?

+ +

And how do they commit changes based on above?

+ +

I am not asking for a solution to a problem, only the general knowledge of how things are being done properly in web-dev environments. And how to avoid merging traps for example, etc.

+",295388,,295388,,43137.47917,43137.49167,Sharing virtual box / environment?,,1,1,,,,CC BY-SA 3.0,, +365446,1,365452,,2/6/2018 15:27,,1,158,"

I'm creating an application which, so far, has an identity service(using identityserver4), a front end, and a calendar service.

+ +

The user logs in via third party(say, google) and grants permissions to the application to manage their calendar.

+ +

When the user logs in, identityserver stores the access and refresh tokens in AspNetUserTokens.

+ +

The question:

+ +
+

The calendar service needs access to do background tasks while the end + user is offline. Should the access and refresh tokens be stored on + the calendar service as well as the identity service?

+
+",294984,,,,,43137.67708,"Microservices - External access tokens stored in identity service, calendar service, or both?",,1,0,1,,,CC BY-SA 3.0,, +365447,1,365484,,2/6/2018 15:36,,0,440,"

Is there an automated approach to generate a grammar (which could be used later in a compiler tool such as ANTLR or similar) from given examples of a language?

+ +

With more detail: +assume a technical language such as Java, C or (in my case) MQSC, and some source files in that language. Is there an automation to derive tokens from the existing sources, as well as, in a second step, identify variables etc., and later on the grammar in some form? +The target would be a grammar description for that language, for example for ANTLR.

+ +

Or is the only way for that by doing it yourself?

+ +

I think this is a general question on all (programming) languages; however, my personal case deals with reading and parsing a very large and complex MQ configuration from IBM's MQSC.

+",251840,,,,,43138.11042,Grammar by example,,1,10,2,,,CC BY-SA 3.0,, +365451,1,,,2/6/2018 16:07,,3,3745,"

I'm building a website where I plan to support multiple languages. Not only via UI, but via the content too.

+ +

I have several tables where I have text columns such as ""title"", ""name"", ""description"", ""body"" and so on. What's the best way to do so? Will I have to create an additional table for each one where I have text data I want to translate? For instance:

+ +
articles(id)
+articles_content(article_id, title, description, body, language_id)
+
+
+comments(id)
+comments_content(comment_id, body, language_id)
+
+ +

And so on for each table I want to translate.

+ +

Any downsides of this solution?

+ +

Is there a better and yet simpler way?

+",295413,,,,,43230.6125,What's a recommended way to design a db schema for multi-language website?,,2,6,,,,CC BY-SA 3.0,, +365455,1,365457,,2/6/2018 16:36,,1,595,"

Quick question: I've been digging into CQRS and Event Sourcing and there's one thing that I have not been able to find info on: what happens when your Write Service crashes and you need to start it back up? I understand how you can recover your Read Service by replaying the events; that makes sense to me. What I can't find information on is rebuilding the state of the Write Service.

+",295417,,,,,43137.91667,Recovering the state of the Write Service in an Event Sourced system,,2,0,2,,,CC BY-SA 3.0,, +365460,1,365652,,2/6/2018 18:19,,11,5103,"

In my (primarily C++) development, I have long adhered to using out-of-source builds. That is, my source usually sits in a /project/src directory and the builds live in a /project/build/bin/release, /project/build/bin/debug directories. I have done this because it keeps my source directories clean from intermediate files, I have one location for all of my binaries, packaging is easier, cleaning is easier, and version control is easier. (Did I miss anything?)

+ +

I am inheriting a (large) project now that uses in-source builds. What is the motivation for this type of structure and what are its advantages? (I am most concerned with engineering-level reasons vs. personal preference types of reasons.)

+ +

I was hoping Lakos' ""Large-Scale C++ Software Design"" would have weighed in on it, but I missed it if it did.

+",253263,,,,,43140.59514,In-Source Build vs. Out-Of-Source Build,,1,11,7,,,CC BY-SA 3.0,, +365465,1,,,2/6/2018 19:49,,0,329,"

We're starting to work on microservices to rebuild our legacy application, piece by piece. We started by building a register module which is responsible for the whole register process.

+ +

To simplify, let's say the RegisterModule will use a CustomerModule to persist a customer, and then a PhoneModule that needs a customer_id to persist a phone in the database.

+ +

CustomerModule would look like this in pseudocode :

+ +
<?php
+
+class CustomerModule 
+{
+    public function createCustomer(array $inputs)
+    {
+        $this->checkInputs($inputs);
+        $customer = $this->repository->create($inputs);
+        $this->phoneModule->createPhone($customer->getId(), $inputs);
+
+        return json_encode(['data' => '1']);
+    }
+}
+
+ +

checkInputs would throw an exception handled by the upper layers in the module which would return the errors to the RegisterModule with the right HTTP code. Nothing persisted, data is consistent.

+ +

Now with the PhoneModule pseudocode :

+ +
<?php
+
+class PhoneModule
+{
+    public function createPhone(int $customerId, array $inputs)
+    {
+        $this->checkInputs($inputs);
+        $phone = $this->repository->create($customerId, $inputs);
+
+        return json_encode(['data' => '1']);
+    }
+}
+
+ +

Again, when checkInputs encounters a validation error, nothing will be persisted for that microservice. But the problem here is that the CustomerModule has already persisted a customer in its database. But this customer has no phone associated which is not what we want.

+ +

I have read a lot about distributed transactions. My CTO does not want to hear about eventual consistency (and in this case I'm 100% on his side). Most solutions I have seen look very hard to implement, and I'm not even sure those are the tools we need to solve this problem.

+ +

A solution could be to do all the checking beforehand, and then do the persisting actions. But that would either mean duplicating the checks in both the persisting and the checking routes, or it would mean we could persist without any check if we ""forget"" to do the checks beforehand.
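One way to avoid that duplication is to orchestrate both phases explicitly from the register module: run every module's validation first, and only persist once everything has passed. A minimal sketch (in Python rather than PHP for brevity; the `check_inputs`/`persist` hooks are hypothetical stand-ins for CustomerModule and PhoneModule):

```python
class ValidationError(Exception):
    pass

def register(modules, inputs):
    """Two-phase orchestration: validate everything, then persist everything.

    `modules` is a list of objects exposing check_inputs(inputs) and
    persist(inputs) -- hypothetical hooks standing in for the real
    customer/phone modules.
    """
    # Phase 1: run every validation up front; nothing is written yet.
    for module in modules:
        module.check_inputs(inputs)  # raises ValidationError on bad input

    # Phase 2: all checks passed, so persist in dependency order.
    return [module.persist(inputs) for module in modules]
```

If any check fails, no module has written anything, so consistency holds without a distributed transaction; the remaining risk is a persist failing mid-way, which still needs a compensation strategy.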

+ +

How do you handle this in the simplest way?

+",230369,,,,,43139.6875,"Microservice depending on another, gracefully handling failure of one of them",,1,4,,43151.45,,CC BY-SA 3.0,, +365471,1,365474,,2/6/2018 22:07,,1,252,"

In Clean Architecture, Robert Martin talks about the necessity to ""decouple the structure of the tests from the structure of the application"". He notes that a test suite that has a test class for every production class/a set of test methods for every production method is deeply coupled, and thus fragile. He thus advocates for a testing API, which could decouple this structural dependency.

+ +

I was wondering what such a testing API and the test suite would look like with respect to the structuring and naming of the tests. Also, such a decoupling seems to be at odds with the often-encountered naming conventions of tests in the form of

+ +
+

[UnitOfWork__StateUnderTest__ExpectedBehavior]/[NameOfTheClassUnderTestTests]

+
+ +

as suggested in this highly upvoted answer on Stackoverflow, doesn't it?

+",173053,,278015,,43137.9875,43137.9875,Structural Coupling of Test,,1,0,1,,,CC BY-SA 3.0,, +365479,1,,,2/7/2018 1:18,,0,146,"

I'm currently building an app that includes sales promotions. There is an option in the promotion that allows the manager to set a start and end date. I do not want the manager to have to manually select the winner; I want my software to detect that the end date has been reached and auto-close the promotion.

+ +

This sounds like a job for cron.

+ +

These are my options:

+ +
    +
  1. At night, run a single script that pulls all promotions across my +entire database and updates promotions that have completed.
  2. +
  3. At the creation of the promotion record, queue up a cron to run at +the custom end date.
  4. +
  5. Force the user to manually update promotion.
  6. +
+ +
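For context, option 1 above is usually the simplest to reason about: a single nightly job scans for promotions whose end date has passed and closes them in bulk. A rough sketch of that sweep (the `Promotion` fields and the closing side effects are hypothetical):

```python
from datetime import datetime, timezone

def close_expired_promotions(promotions, now=None):
    """Nightly sweep: close every promotion whose end date has passed.

    `promotions` is any iterable of objects with an `end_date` (aware
    datetime) and an `open` flag -- hypothetical fields standing in
    for the real promotion records.
    """
    now = now or datetime.now(timezone.utc)
    closed = []
    for promo in promotions:
        if promo.open and promo.end_date <= now:
            promo.open = False  # in a real app: persist + pick the winner here
            closed.append(promo)
    return closed
```

The trade-off versus option 2 is latency (a promotion may close up to a day late) against not having to manage one scheduled job per promotion.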

Any advice or previous experience would be greatly appreciated!

+",34483,,,,,43139.71736,Sales Promotions concept and design with cron jobs,,1,4,,,,CC BY-SA 3.0,, +365485,1,365490,,2/7/2018 4:03,,3,128,"

suppose I have code like this

+ +
someFunction:function(userId){
+  var url=SomeClass.SomeNetworkConnector.SOME_URL;
+  if(url !== undefined && userId !== undefined){
+    this.openURL(url+""?userid=""+userId);
+  }
+}
+
+ +

Initially I thought the name of the actual constant SomeClass.SomeNetworkConnector.SOME_URL was too long, so I used a variable with a shorter name to hold it and then used it later.

+ +

But I'm not sure whether I should do this. My reason against it is: SomeClass.SomeNetworkConnector.SOME_URL is already a well-defined name for the constant, so is it misleading to alias it as another variable? Should I always use SomeClass.SomeNetworkConnector.SOME_URL instead of creating a shorter variable name?

+",248528,,,,,43138.29583,Should I avoid creating a variable with shorter name for a constant?,,1,5,,,,CC BY-SA 3.0,, +365486,1,365488,,2/7/2018 4:52,,3,146,"

I'm a CS student and, in order to practice my coding skills, I'm trying to implement an e-book reader; I would like some advice from more experienced programmers. I'm using C++/QML but I'll try to keep my problem non-specific to any technology.

+ +

Small preface

+ +

Once the book is opened I keep it in memory. I also generate an array of pages. +Initially I keep in memory:

+ +
    +
  • the current page;
  • +
  • 3 pages to left;
  • +
  • 3 pages to right.
  • +
+ +

The last two options are needed in order to allow fast scrolling. When the user scrolls one page backward/forward, my program calculates a few additional pages in that direction.
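The sliding window of cached pages described above (the current page plus three on each side) can be sketched like this; the page-rendering call is a hypothetical placeholder for the expensive layout work:

```python
def cache_window(current, total, radius=3):
    """Return the page indices that should be kept in memory:
    the current page plus `radius` pages on each side, clamped
    to the valid range [0, total)."""
    lo = max(0, current - radius)
    hi = min(total, current + radius + 1)
    return list(range(lo, hi))

def update_cache(cache, render_page, current, total, radius=3):
    """Evict pages outside the window and render the missing ones.
    `render_page(i)` is a hypothetical (expensive) layout call."""
    wanted = set(cache_window(current, total, radius))
    for i in list(cache):
        if i not in wanted:
            del cache[i]
    for i in sorted(wanted):
        if i not in cache:
            cache[i] = render_page(i)
    return cache
```

Scrolling by one page then only renders one new page per step, since the other six are already cached.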

+ +

I also allow the user to change the window size which affects the page size. And here are problems.

+ +

The problem

+ +

Every time the user changes the window size, the page size changes => the program has to recalculate the state of the currently displayed page (its size and content) and also the nearby pages. This takes some time.

+ +

So whenever the user is just playing around with the app window, my program has a hard time with these calculations. They take some time and slow down the whole app.

+ +

What I did in order to solve it

+ +
    +
  1. I thought about forbidding the user from manually changing the window size (and giving them a few allowed page sizes), but I'm not sure how user-friendly that is :) In this case I could calculate pages of the predefined sizes at the beginning, so it wouldn't slow down the app as much as it does now.
  2. +
  3. I thought multi-threading would save me. As a true newbie, at first I did everything in one thread. Once my app started to actually work (slowly of course, because my GUI sometimes froze due to the calculations), I introduced multi-threading, and now my GUI and worker thread are separate. It's a little bit faster now, but I can still sometimes see how slow it is, especially on a small laptop when you scroll pages faster than a turtle.
  4. +
+ +

So, my question is: how appropriate is my approach? Is it sane at all, and what are the ways to improve the performance?

+",260537,,,,,43138.58264,How to implement e-book reader page scrolling with variable page size?,,3,1,,,,CC BY-SA 3.0,, +365494,1,365499,,2/7/2018 8:32,,1,366,"

I need to call a method that is available in a third party library, but it is a private method. There is no direct or indirect way to obtain the same functionality via the available public methods.

+ +

I could do this in at least two ways:

+ +
    +
  • Copy paste the class into my source code tree (the license permits) and change the method declaration to accessible.
  • +
  • Use Java reflection to call the private method.
  • +
  • OK, I could also open a pull request to change the visibility on the third party library's side (I did), but they may or may not accept it, it's unclear how long that would take, and I need the functionality provided by this method.
  • +
+ +

Both approaches do not look very nice. Which one is less ugly?

+",81278,,85461,,43138.55069,43138.55069,Need to call private method in third party library. Copy all class or use reflection?,,1,2,,,,CC BY-SA 3.0,, +365495,1,365505,,2/7/2018 8:45,,7,11089,"

Imagine a program which exposes a REST service/gRPC service/whatever service, which uses several 3rd party libraries. These libraries can of course throw exceptions if something goes wrong, for example if the user tried to access something which he isn't allowed to.

+ +

The problem here is that it isn't possible to check beforehand if the request is correct or if the user has enough rights. So we send the request to the 3rd party library and get an exception.

+ +

Is it good practice to catch these exceptions at the top level and map them to status codes like this?

+ +
var statusCode = HttpStatusCode.InternalServerError;
+if (ex is ArgumentException || ex is ResourceDoesntExistException)
+    statusCode = HttpStatusCode.BadRequest;
+else if (ex is UnauthorizedAccessException)
+    statusCode = HttpStatusCode.Forbidden;
+else if (ex is CustomerNotFoundException)
+    statusCode = HttpStatusCode.NotFound;
+
+ +

and then return the status code with an error object:

+ +
return new ErrorResponse
+{
+    Error = new ErrorDescription
+    {
+        Message = ex.Message,
+        Type = ex.GetType().Name
+    }
+};
+
+ +

Advantages I see by using this approach:

+ +
    +
  • In the program, we don't have to care if we expose a REST service or a SOAP service or whatever. Simply throw an exception, it will be handled correctly later.
  • +
  • The caller gets enough and correct information if something goes wrong (as long as the exceptions have meaningful names and information).
  • +
  • Logging can also be centralised. All unhandled exceptions will be logged right where we convert them into error responses.
  • +
+ +

Disadvantages:

+ +
    +
  • It feels a little ""hacky"".
  • +
  • ?
  • +
+ +
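A table-driven variant of the mapping keeps the exception-to-status knowledge in one data structure instead of an if/else chain, which is easy to extend and to unit test on its own. Sketched here in Python for brevity (the exception and status names mirror the C# ones above, but the table itself is just an illustration):

```python
# Exception type name -> HTTP status code; anything unlisted maps to 500.
EXCEPTION_STATUS = {
    "ArgumentException": 400,
    "ResourceDoesntExistException": 400,
    "UnauthorizedAccessException": 403,
    "CustomerNotFoundException": 404,
}

def status_for(exc):
    """Map an exception to an HTTP status code, defaulting to 500
    (internal server error) for anything unrecognised."""
    return EXCEPTION_STATUS.get(type(exc).__name__, 500)

def error_response(exc):
    """Build the error body returned alongside the status code."""
    return {
        "status": status_for(exc),
        "error": {"message": str(exc), "type": type(exc).__name__},
    }
```

The same dictionary can back both the REST and the gRPC mapping, so the two code paths in the question stay in sync.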

What is the correct way to do this?

+ +
+ +

Edit: Since it was requested, here is what I do for gRPC services, where the error mapping is quite similar:

+ +
private RpcException GenerateRpcException(Exception ex)
+{
+    var statusCode = StatusCode.Unknown;
+
+    if (ex is ArgumentException || ex is ResourceDoesntExistException)
+        statusCode = StatusCode.InvalidArgument;
+    else if (ex is UnauthorizedAccessException)
+        statusCode = StatusCode.PermissionDenied;
+    else if (ex is CustomerNotFoundException)
+        statusCode = StatusCode.NotFound;
+
+    var status = new Status(statusCode, ex.Message);
+    var exceptionName = ex.GetType().Name;
+
+    var rpcMetadata = new Metadata
+    {
+        { ""exception_name"", exceptionName }
+    };
+    return new RpcException(status, rpcMetadata);
+}
+
+",115403,,115403,,43139.375,43139.375,Mapping exceptions to error response,,2,5,,,,CC BY-SA 3.0,, +365496,1,,,2/7/2018 8:58,,1,57,"

I have a couple of classes that are doing the same thing: filling a series of object using data sources passed as a parameter. As I need two distinct of them, my interface holds the following contract:

+ +
// Arbitrary names, didn't find a good name for the application field yet
+public interface DataMapper<O>    {
+    /*
+     * Those two sources cannot be fused together, they come from
+     * different places from another API
+     */
+    public O mapData(DataSource first, DataSource second);
+}
+
+ +

However, I have a problem. One legacy class from my program only uses the first out of two sources, and doesn't need the second. As it is the only one that needs only one out of two, I considered using the contract like this:

+ +
Output output = LegacyMapper.mapData(firstDataSource, null);
+
+ +

But this feels dirty to me: anyone other than me going through the code would have to first ask me Why is the second data source null, while the interface specifies that you need both?

+ +
+ +

So what are my options to create a proper design that won't feel too lame?

+ +

1) Creating a second contract mapData(DataSource onlyData);, then setting the unused one as return null considering the implementation ?

+ +

2) Creating another interface specifically for the legacy mapper ?

+ +

3) Letting the code as-is and commenting it ?

+ +

Any help would be appreciated.

+ +

Thanks !

+",224662,,,,,43138.40556,Setting a parameter to null with two-parametered interface method,,1,2,,,,CC BY-SA 3.0,, +365497,1,,,2/7/2018 9:24,,4,314,"

I am trying to figure out how to measure the relative success level of a given (but unsolved!) Rubik's Cube state. My first idea was to calculate the number of overlapping ""cells"" in the destination (solved) state and the given (unsolved) state, and present this as the following:

+ +
number of actual correct cell positions / number of all positions (6x9)
+
+ +
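The ratio described above can be computed directly. Here is a minimal sketch where a cube state is a flat sequence of 54 sticker colours in a fixed cell order (the representation is an assumption; any encoding with a fixed order works the same way):

```python
def match_ratio(state, solved):
    """Fraction of stickers already in their solved position.

    `state` and `solved` are equal-length sequences of sticker
    colours in a fixed face/cell order (6 x 9 = 54 entries for
    a standard 3x3 cube).
    """
    if len(state) != len(solved):
        raise ValueError("states must have the same length")
    hits = sum(1 for a, b in zip(state, solved) if a == b)
    return hits / len(solved)
```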

But I have the feeling that this ratio doesn't necessarily correlate with the actual success level (so you're not necessarily closer to the full solution with a relatively high level of correct cell positions). Is there a more elegant (and calculable) way to measure this?

+",295481,,155433,,43138.66806,43826.87639,How to measure the solution level of an unsolved Rubik's Cube?,,5,1,1,,,CC BY-SA 3.0,, +365501,1,,,2/7/2018 9:35,,2,211,"

Say that I have the following one-to-many relationship:

+ +

+ +

If I don't want to store any information about the relationship, I can simply put the primary key of School as foreign key in the Student table.

+ +

But let's say that I want to store the date on which a student has registered in a school. In this case, can I still put the primary key of School as foreign key in the Student table alongside the date_of_registration attribute? or should I create a new table that contains school_id and student_id as foreign keys alongside the date_of_registration attribute?

+ +

If I can use either approach, what are the advantages and disadvantages of each?

+",249663,,,,,43138.725,Where to put the attributes of a relationship in a non many-to-many relationship?,,3,1,0,,,CC BY-SA 3.0,, +365510,1,365512,,2/7/2018 11:50,,1,104,"

What is the recommended way to reference other classes within a container class while keeping everything decoupled?

+ +

In the example below, I'd like the constructed Path to be automatically added to the SceneGraph.

+ +
class SceneGraph {
+  constructor() {
+    this.children = []
+  }
+
+  addChild(child) {
+    this.children.push(child)
+  }
+}
+
+class Path {
+  constructor(opts) {
+    this.color = opts.color
+
+    // @NOTE: Coupling here
+    sceneGraph.addChild(this)
+  }
+}
+
+// User space
+
+const sceneGraph = new SceneGraph()
+const path = new Path({ color: 'red' })
+
+ +

""Automatically"" here meaning that I'd like to avoid doing something like this:

+ +
const path = new Path({ color: 'red' })
+sceneGraph.addChild(path)
+
+ +

... which looks intuitive enough, but it soon starts getting clunky as more and more methods are added to the Path that should also affect the SceneGraph - i.e path.remove().

+ +

The example above looks problematic to me, since the SceneGraph is inherently coupled with the Path.

+ +

Is there a pattern available that solves this problem?

+",108346,,108346,,43138.49792,43141.99722,Recommended pattern to affect other Classes on instantiation/method calls,,3,0,,,,CC BY-SA 3.0,, +365511,1,365516,,2/7/2018 11:54,,1,1383,"

I'm building an Android application, and trying to follow the Clean Architecture pattern.

+

I am a little confused about what my entities should look like regarding some of my use cases.

+

Example

+

I have a list of channels, and each channel has programs. +I want to display a list of the current program for each channel, and when an element is clicked, I display a channel page with the channel's detailed info and the current program info.

+

Entities

+

I can see here that I should have a Channel and a Program entity. But should Channel have a field List<Program> programs (as there are multiple programs during the day)? And another Program currentProgram? +Or should Channel be separated

+

Use case

+

Should they return entities, or some dto object? +Currently, my use cases return entities, but are mapped to some dto objects in the presenters.

+

TL;DR

+

Which of the proposition below is the way to go?

+

Pseudo-Code propositions

+

Case A

+

Entities

+
Channel
+{
+  id()
+  name()
+  logo()
+}
+
+Program
+{
+  id()
+  name()
+  thumbnail()
+  start_date()
+  end_date()
+}
+
+

Use Cases
+List<Channel> GetChannelsUseCase()
+Program GetCurrentProgramForChannel(channelId)

+

Presenter

+
fetchChannels()
+{
+    // Builds a List<CurrentProgramViewModel> based on
+    // GetChannelsUseCase composed with GetCurrentProgramForChannel for each channel
+}
+
+

Case B

+

Entities

+
Channel
+{
+  id()
+  name()
+  logo()
+  programs()
+}
+
+Program
+{
+  id()
+  name()
+  thumbnail()
+  start_date()
+  end_date()
+}
+
+

Use Cases
+List<Channel> GetChannelsUseCase()

+

Presenter

+
fetchChannels()
+{
+    // Builds a List<CurrentProgramViewModel> based on
+    // GetChannelsUseCase and filtering the programs()
+    // to extract the 'current' program
+}
+
+

Case C

+

Entities

+
Channel
+{
+  id()
+  name()
+  logo()
+}
+
+Program
+{
+  id()
+  name()
+  thumbnail()
+  start_date()
+  end_date()
+}
+
+

Repository
+List<Channel> GetChannelsUseCase()
+Program GetCurrentProgramForChannel(channelId)

+

Use Cases
+List<ProgramDto> GetCurrentProgramUseCase()

+
{
+    // The Use Case requests the repository to get the entities
+    // and builds a ProgramDto object
+}
+
+

Presenter

+
fetchChannels()
+{
+    // Maps a List<CurrentProgramViewModel> based on the
+    // List<ProgramDto> returned by GetCurrentProgramUseCase
+}
+
+

ps: I'll ask about the datasource and repository later / in another topic if necessary

+",264823,,-1,,43998.41736,43195.73333,Uncle Bob's clean architecture - Composed entities,,2,0,,,,CC BY-SA 3.0,, +365513,1,,,2/7/2018 12:10,,4,566,"

I have some doubts with Agreggate Roots and working with Entities.

+ +

Imagine we have a Box object, which contains Potatoes. In this case and in a DDD point of view, the Aggregate Root would be the Box object, so in case we want to add Potatoes to the Box, we have to do it through the box object.

+ +

Now, imagine we have to change one specific Potato to another Box. This is, we have Box A (with Potato A) and Box B. We want move the Potato A to Box B.

+ +

My first idea is to create a method inside the Box object that changes the 'Potato' root (because we can't do it from the Potato itself). +All the logic is in the root object, so it seems OK to me.

+ +

Is it a good approach to do it this way? Or, in case two aggregate roots (of the same type) are involved, do we have to create a Service instead?

+ +

Some code

+ +
public class Box {
+
+    private List<Potato> potatoes;
+
+    public void changeBox(Potato potato) {
+        potato.getBox().potatoes.remove(potato);
+        this.potatoes.add(potato); // also add the potato to this box's list, not just remove it from the old one
+        potato.setBox(this);
+    }
+    ...
+}
+
+
+public class Potato {
+
+    private Box box;
+
+    protected void setBox(Box box) {
+        this.box = box;
+    }
+
+    public Box getBox(){
+        return box;
+    }
+    ...
+}
+
+
+// This should do the change of box
+Potato potato = oldBox.getTopPotato();
+newBox.changeBox(potato);
+
+ +

Thank you!!

+",289791,,289791,,43138.53889,43138.74028,Can an Aggregate Root change the 'root' of an Entity?,,1,8,1,,,CC BY-SA 3.0,, +365514,1,,,2/7/2018 12:13,,3,247,"

I have several Oracle database where my in-house applications are running. Those applications use both dba_jobs and dba_scheduler_jobs.

+ +

I want to write a monitoring function, check_my_jobs, which will be called periodically by Nagios to check if everything is OK with my jobs. (Are they running? Are they broken? Is next_run_date delayed? And so on.)

+ +

Solutions:

+ +

Due to the fact that I have to monitor jobs on different databases, there are two ways of implementing the solution:

+ +
    +
  1. Create monitoring function and configuration tables only on one database which will check jobs on every database using the Database link.

    + +

    pros: Centralized functionality, easy to maintain.
    +cons: I have to make a check using the Database link.

  2. +
  3. Create monitoring function and configuration tables in every database where I want to check jobs.

    + +

    pros: I don't have to use DB link
    +cons: Duplicated monitoring code on every database

  4. +
+ +

Which solution is better?

+",267914,,94675,,43163.10417,44183.46181,Designing database job monitoring,,1,2,,,,CC BY-SA 3.0,, +365518,1,,,2/7/2018 13:30,,1,316,"

Background: there's a gazillion types of virtual machines in Microsoft Azure, each having different performance and price. Such virtual machines are paid for per hour. The goal is to decide how to get ""the most for the money spent"". This can be also reworded in terms of electricity spent for a certain amount of computation.

+ +

So I have a piece of code as here which is highly CPU-intensive - uses little memory and no disk. My goal is comparing processor speed only, and I assume that the code I use mostly uses the same CPU instruction sequences as the production load I plan for, so it makes sense to use this code in the first place. I select the parameter (the number of the value to compute) for that code and it's unchanged across runs - let it be ""five million"" - and so all runs compute exactly the same.

+ +

So I run this code multiple times on VM of type T1 (imaginary, no such VM type in Azure now) and I see that it runs for 5 minutes on average. T1 machines cost 10 cents per hour.

+ +

Then I run this code multiple times on VM of type T2 and I see that it runs for 7.5 minutes on average. T2 machines cost 8 cents per hour.

+ +

I then must somehow compare which is better. My logic is the following: we're interested in minimizing both the time and the cost. So we can multiply the time to run the code by the cost per hour, which gives us a ""money-by-time"" (cents-by-minutes in this specific comparison) measure. The VM with the smallest ""money-by-time"" value is the most efficient. In this case the T1 machine has ""5 minutes by 10 cents"" (50) and the T2 machine has ""7.5 minutes by 8 cents"" (60). The T1 machine is then 17% more efficient.
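The comparison above can be written down as a tiny cost function - run time multiplied by hourly price - and the relative difference computed from it; the 17% figure comes out as (60 - 50) / 60 ~ 16.7%:

```python
def money_by_time(minutes, cents_per_hour):
    """Comparative metric: run time multiplied by hourly price.
    Lower is better; the fixed unit factor cancels out when
    comparing VM types against each other."""
    return minutes * cents_per_hour

def relative_advantage(better, worse):
    """How much cheaper per unit of work the better option is,
    as a fraction of the worse option's metric."""
    return (worse - better) / worse
```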

+ +

Does this way of comparing VMs in terms of performance for a given price make sense? Is there perhaps an error in my logic?

+",587,,587,,43138.57569,43893.65278,"Is comparing ""dollar-hours"" for running a specific piece of code practical as an estimate of rented system performance?",,2,7,,,,CC BY-SA 3.0,, +365524,1,365538,,2/7/2018 15:47,,5,1714,"

After reading the book; I understood the following:

+ +

1) Entities should implement equality and compare by ID. +2) Value Objects should implement equality and compare by all properties in the class.

+ +
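The two equality rules can be illustrated side by side. A minimal Python sketch (the class names are illustrative, not from the book):

```python
class Entity:
    """Identity-based equality: two entities are the same domain object
    iff their IDs match, whatever their other attributes say."""
    def __init__(self, id_):
        self.id = id_
    def __eq__(self, other):
        return isinstance(other, type(self)) and self.id == other.id
    def __hash__(self):
        return hash((type(self), self.id))

class Money:
    """Value-based equality: compared field by field, no identity."""
    def __init__(self, amount, currency):
        self.amount, self.currency = amount, currency
    def __eq__(self, other):
        return (isinstance(other, Money)
                and (self.amount, self.currency)
                    == (other.amount, other.currency))
    def __hash__(self):
        return hash((self.amount, self.currency))
```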

I still believe that my understanding of point two is correct. However, I am confused about point one because of the following:

+ +

1) This blogger talks about creating an entity base class, which all entities inherit from: http://enterprisecraftsmanship.com/2014/11/08/domain-object-base-class/. Therefore all entities implement equality by ID because the equality comparisons are defined in the base class.

+ +

2) Most of the entity classes here have IDs: https://github.com/nhibernate/nhibernate-core/tree/master/src/NHibernate.DomainModel/Northwind/Entities

+ +

3) This question seems to suggest having an ID attribute: ID properties on Domain objects in DDD

+ +

4) This question points to a YouTube video (point one) where it is argued that entities should not have IDs.

+ +

Points 1-4 above seem to suggest either: 1) use a database ID; 2) use another ID that is not the database ID; 3) Use no ID.

+ +

I am trying to decide whether:

+ +

1) Have an entity superclass.

+ +

or +2) Have an ID (from database) in every entity.

+ +

or +3) Introduce an identifier (not database id) that identifies domain objects.

+ +

I am trying to completely isolate the domain model (I also have a data model).

+",65549,,209774,,43138.80972,43138.80972,Is it normal for a Domain Model not to have an ID?,,1,4,3,,,CC BY-SA 3.0,, +365526,1,,,2/7/2018 16:15,,4,664,"

We have a microservice that reads in large data files, each row containing information about Accounts.

+ +

For each record found, it will then go off and do other ""stuff"" and that stuff is distributed across other microservices. As such, we need to look at using correlation IDs to tie distributed transactions together from a logging perspective.

+ +

We're struggling understanding the boundaries of those correlation IDs though.

+ +

I'm aware that the general principle is ""use a correlation ID you're given or create one"", but it's the singularity of that correlation ID that's giving us the issue.

+ +

For example:

+ +
    +
  • data file contains 1,000 Account records - it would be useful to pull back the logs of all the activities against that 1 data file across the 1,000 Accounts; but
  • +
  • it would also be useful to just pull back the resultant logs for a single Account in the data file.
  • +
+ +

So it feels like two separate Correlation IDs are needed - one for the file, one for the Account - in a stack, almost.
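One way to realise that 'stack' is a hierarchical correlation ID: the file gets a root ID, and each account record gets a child ID that embeds the parent, so logs can be filtered at either granularity with a simple prefix match. A minimal sketch (the ID format here is an assumption, not a standard):

```python
import uuid

def new_correlation_id(parent=None):
    """Create a correlation ID; children embed their parent so that
    a prefix search over the logs finds the whole subtree."""
    own = uuid.uuid4().hex[:8]
    return f"{parent}/{own}" if parent else own

def matches(log_correlation_id, query_id):
    """True if a log line belongs to `query_id` itself or to any
    of its descendants."""
    return (log_correlation_id == query_id
            or log_correlation_id.startswith(query_id + "/"))
```

Querying by the file's root ID returns all 1,000 accounts' activity; querying by one account's child ID narrows to just that record.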

+ +

However nothing I've ever seen on the interwebs ever approaches correlation in this way, so I'm guessing my thinking / design pattern is wrong.

+ +

Any tips? How have others dealt with this?

+",295521,,,,,43381.31736,Scoping of Correlation IDs in microservices,,4,1,1,,,CC BY-SA 3.0,, +365529,1,,,2/7/2018 16:53,,4,232,"

I am looking to perform a large overhaul on a complex simulation system that simulates several instances of several vehicle models in a classroom training environment. For example, 24 students may be running simulations on three different vehicles for maintenance and operation. Instructors will be required to have tablets that can connect to any of the 24 active simulations to control the training scenario.

+ +

The primary system will be running on Linux but there are no other requirements for OS and machine specs can be built as needed. Performance of each simulated pass must be able to consistently run at ~10ms intervals with a +-2ms tolerance.

+ +

A primary goal is to make this system very modular so that it can be extended and reused by other training facilities with unique vehicles and needs.

+ +

My thought was to use a layered architecture (system, business, UI). The definition of each vehicle model can be stored in a database and therefore edited independently by a superuser (modularity/extensibility of the vehicles). Each layer would likely have to read this database to dynamically allocate the resources that particular layer will require.

+ +

Originally I planned to use shared memory for the system layer, setting up permissions and authentication for any business layer to attempt to login. The primary simulation business logic would then continually update the vehicle details according to the active data. The instructor interface would have a business layer server that connects to all 24 clients and also logs into the system layer to modify simulation parameters. All inputs from the student and visual outputs would each have a business layer that can login to the shared memory system layer as well. All of these acting as separate applications so that they can be removed/added/extended as needed.

+ +

The problem then came when I realized that classes do not work well with shared memory. I would need to serialize every get/set of the shared memory into a flat memory structure. Having not worked with this architecture before, I am unsure if this plan will create a large performance hit. Technically I can dedicate some cores to the primary business layer logic that performs the simulation.
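To illustrate the serialization concern: a fixed-layout record can be packed into and out of a flat buffer cheaply when the struct layout is known up front. A minimal sketch in Python (the vehicle-state fields are hypothetical; the same idea applies to C++ POD structs laid out over POSIX shared memory):

```python
import struct

# Hypothetical vehicle-state record: id (uint32), speed and heading (float32),
# little-endian, no padding.
LAYOUT = struct.Struct("<Iff")

def write_state(buf, offset, vehicle_id, speed, heading):
    """Serialize one record into a flat buffer at a fixed offset."""
    LAYOUT.pack_into(buf, offset, vehicle_id, speed, heading)

def read_state(buf, offset):
    """Deserialize one record back out of the flat buffer."""
    return LAYOUT.unpack_from(buf, offset)
```

Because every record lives at `offset = index * LAYOUT.size`, reads and writes are O(1) with no per-access allocation, which is the property that makes flat layouts attractive for a 10 ms simulation loop.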

+ +

Would using shared memory with a suite of applications be an appropriate way to resolve this system? Would another cross-process communication type such as pipes be more advisable than shared memory? Would it be better to maintain the system and business logic into a single application and simply use mutexes and cross-threading to ensure performance? Am I going about this all wrong?

+",295522,,,,,43139.00556,Massive Simulator Software Architecture,,1,5,,43143.59028,,CC BY-SA 3.0,, +365530,1,365542,,2/7/2018 17:23,,36,8695,"

While reviewing some code, I noticed the opportunity to change it to use generics. The (obfuscated) code looks like:

+ +
public void DoAllTheThings(Type typeOfTarget, object[] possibleTargets)
+{
+    var someProperty = typeOfTarget.GetProperty(possibleTargets[0]);
+    ...
+}
+
+ +

This code could be replaced by generics, like so:

+ +
public void DoAllTheThings<T>(object[] possibleTargets)
+{
+    var someProperty = typeof(T).GetProperty(possibleTargets[0]);
+    ...
+}
+
+ +

In researching the benefits and shortcomings of this approach I found a term called generic abuse. See:

+ + + +

My question comes in two parts:

+ +
    +
  1. Are there any benefits to moving to generics like this? (Performance? Readability?)
  2. +
  3. What is Generics Abuse? And is using a generic every time there is a type parameter an abuse?
  4. +
+",104909,,104909,,43158.84653,43160.86528,What is generics abuse?,,4,21,7,,,CC BY-SA 3.0,, +365535,1,365550,,2/7/2018 18:06,,0,1358,"

Hopefully this is the correct channel for such a question and my quest for further knowledge.

+ +

I've currently built a significant platform which allows clients to communicate with our API, which in turn allows ""items"" to be pushed to a print queue.

+ +

Most third party developers and app users have no problem consuming the API, and I can only assume this is because they authenticate over basic auth outside a proxy environment.

+ +

We have one client, that appears to be locked behind a proxy of sorts, which prevents the consumption of a particular endpoint. I'm regrettably faced with the 407 - Proxy Authentication Required.

+ +

I'm very keen to understand is this something the end user (client) should resolve by adjusting proxy security settings (if needed) or a developer problem?

+ +

I'm also keen to understand what is actually happening. The end user behind the proxy calls the API, tries to authenticate via basic auth, and is then asked to authenticate via Proxy-Authenticate?

+ +

Ideally I'm not looking for a fix but for more knowledge about what is happening, so I can fix it myself.

+",206854,,,,,43138.86458,Understanding a proxy server and connecting to a web api via basic auth,,1,3,,,,CC BY-SA 3.0,, +365544,1,,,2/7/2018 19:51,,9,5848,"

I'm torn between using DI and static utility classes on my code.

+ +

I currently created a DateUtil static class that is only meant to be accessed statically. This class is responsible for just creating and manipulating dates using a variety of libraries with absolutely no external 3rd party dependency.

+ +

Everything looks fine, but unit testing becomes messier since we now have to use PowerMockito to properly stub the behavior of the static class.

+ +

On the other hand, if I removed the static-ness of DateUtil and converted it into a Spring @Component, unit tests would be a tad easier; however, we would then have all these dependencies everywhere, and I'm not sure that this is the purpose of Spring-managed beans.

+ +

How should I proceed with this? Is it okay to use Spring beans everywhere and just forget about static code entirely? Is it okay to abuse Spring's DI capabilities? When is it ""too much DI""?

+",151231,,,,,43140.925,Java: Is it okay to abuse Spring beans (@Component) rather than use static final utility classes?,,2,0,1,,,CC BY-SA 3.0,, +365553,1,,,2/7/2018 22:31,,1,18,"

I inherited a Visual Studio solution with a Main branch, a QA branch off of Main, and a DEV branch off of QA. It seems these are the only branches and they've existed for the lifespan of the solution. The solution has 5 projects, and in our new version I want to create a different behavior for one of the projects, but then have it go back to the old behavior for subsequent versions. So my plan is to branch that project and then not merge it back.

+ +

If I branch the project in DEV, will that new branch also be reflected in QA and Main?

+ +

When I tried researching this question, I came across this article: https://docs.microsoft.com/en-us/vsts/tfvc/branching-strategies-with-tfvc

+ +

It talks about development isolation and feature isolation but doesn't explain how to do both at once.

+ +

I apologize if this seems cryptic but my reason for having different behavior in this one version is very complicated and could easily triple the length of this question and is largely irrelevant. So given that I do need a one-off modification to one of my projects, what's the best way to do what I'm trying to do?

+",267711,,,,,43138.93819,How can I make sure a branch gets duplicated in my development and main branches in TFVC?,,0,0,,,,CC BY-SA 3.0,, +365557,1,365563,,2/8/2018 2:05,,-2,229,"

I hosted a hobby NodeJS Server from my Linux and all is fine by accessing my direct IP.

+ +

Because I find this ugly and also less secure, I decided to route my domain, which is managed by Cloudflare, to my IP. Then when I accessed the URL, I was prompted for authentication that I had never set up.

+ +

The question is, why proxy-ing through Cloudflare causes this? And how to fix this?

+ +

EDIT: I seem to have found the reason. It seems that my server prompts for authentication whenever the port is not specified, meaning Cloudflare is trying to redirect to my IP without the port. But I would still like to know the best practice in this scenario and how to solve it.

+ +

EDIT2: I solved this by moving my server to port 80, since that is the default. But I would still very much like to know whether it is possible to get other ports to work.

+",295367,,295367,,43139.09236,43139.22083,Linux Server Hosting through Cloudflare,,1,1,,43143.61597,,CC BY-SA 3.0,, +365566,1,365571,,2/8/2018 6:47,,11,7954,"

I have a python package that I wrote and I want to use it within multiple docker builds.

+ +

However, I can't just install my local package, because it sits outside of the Dockerfile's folder (the Docker build context). And I don't want to copy the package into multiple projects.

+ +

So how do I keep my architecture DRY?
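
One direction I've been considering (just a sketch - the package name, version and paths are all made up): build the package once as a wheel, copy the wheel into each build context, and install it there:

```dockerfile
# once, from the shared package's own directory:
#   python setup.py bdist_wheel
#   cp dist/mypkg-0.1-py3-none-any.whl path/to/each/project/

# then in each project's Dockerfile:
FROM python:3
COPY mypkg-0.1-py3-none-any.whl /tmp/
RUN pip install /tmp/mypkg-0.1-py3-none-any.whl
```

It still means copying an artifact around, though, which is exactly the duplication I'd like to avoid.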

+",183248,,,,,43601.1,best way to install local package into docker image,,2,0,1,,,CC BY-SA 3.0,, +365569,1,365615,,2/8/2018 9:13,,0,1027,"

Is it a good idea to have individual/unique data sets for each integration test, or should all tests reuse the same data? My idea behind individual data sets is to gain more control over each test, making it easier to update old tests and to introduce new data: instead of making sure the data for a new test does not interfere with the old data/tests, I can just add a new data set that will only be used by my new test. To me this makes sense, but when reading up on integration/service testing it seems like most (all?) articles use the same data for all tests.

+ +

My biggest reason for wanting unique data sets per test is that I am writing tests for a microservice architecture, and making sure you get unique IDs that are the same throughout all DBs turned out to be a bit messy. With individual data sets I could follow AAA:

+ +
// Assign
+Mock up database, could be CSV files loaded into an in memory-db.
+
+// Act
+Make a call to the endpoint I want to test, could be done through for example MSTest, Webtest or POSTMAN
+
+// Assert
+Make sure that the response contains the data I wanted.
+
+ +

In the case above, the CSV files would then be the data sets. Each test would get individual CSV files that would be used to seed the DB prior to running the test.
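
To make the idea concrete, here is roughly what I mean by individual data sets, sketched in Python with an in-memory SQLite DB just to keep it short (in my real setup this would be the .NET in-memory DB, and all the names are made up):

```python
import csv
import io
import sqlite3

def seed_from_csv(conn, table, csv_text):
    # load one test's private data set (a CSV with a header row) into the in-memory DB
    rows = list(csv.reader(io.StringIO(csv_text.strip())))
    header, data = rows[0], rows[1:]
    conn.execute('CREATE TABLE %s (%s)' % (table, ', '.join(header)))
    placeholders = ', '.join('?' for _ in header)
    conn.executemany('INSERT INTO %s VALUES (%s)' % (table, placeholders), data)

# Arrange: this data set belongs to this test and no other
conn = sqlite3.connect(':memory:')
seed_from_csv(conn, 'users', 'id,name\n1,alice\n2,bob')

# Act: stand-in for the endpoint call under test
count = conn.execute('SELECT COUNT(*) FROM users').fetchone()[0]

# Assert
assert count == 2
```

Since no other test touches this data set, I can change it freely without re-checking every other test.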

+",295519,,295519,,43139.61181,43139.86875,Is it a good idea to have individual data per integration test?,,2,7,0,,,CC BY-SA 3.0,, +365570,1,365578,,2/8/2018 9:20,,4,518,"

I was reading this question recently: ID properties on Domain objects in DDD

+ +

The question talks about having a surrogate key in the infrastructure layer, which is a database ID. I use a Guid for this:

+ +
Guid id = Guid.NewGuid();
+
+ +

The answers then talk about having a natural key in the domain layer that identifies the entities. A great example of this in my mind is a bank account i.e. the Guid identifies the bank account in the database (and Data Model) and the account number and sort code identify the bank account in the domain layer i.e. there is no database id in the domain layer.

+ +

Say I have a product entity and I want to generate a product code in the domain layer. How would I do this? The question I have linked to talks about using algorithms. What algorithms are there?
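
To illustrate the kind of algorithm I have in mind (just a rough sketch in Python; the prefix, zero-padding and check-digit scheme are all made up):

```python
def product_code(category, sequence):
    # derive a natural product code: category prefix + zero-padded
    # sequence number + a weighted mod-10 check digit over the digits
    body = '%s-%06d' % (category, sequence)
    digits = [int(c) for c in body if c.isdigit()]
    check = sum(d * w for d, w in zip(digits, range(1, len(digits) + 1))) % 10
    return '%s-%d' % (body, check)

code = product_code('PRD', 42)  # deterministic and re-derivable for validation
```

The appeal over a Guid would be that the code is meaningful to the business and can be re-derived and validated, but I don't know whether this is the kind of algorithm the linked answers meant.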

+",65549,,,,,43774.46042,How to generate a Natural key for a Domain entity?,,2,4,,,,CC BY-SA 3.0,, +365572,1,365576,,2/8/2018 9:28,,3,1347,"

I have a RESTful API; one of its endpoints receives search criteria that contain a property ""Title"".
+Should I allow consumers to send either null (or omit the property) or an empty string in this property, and treat both as meaning the same thing in this context?
+I mean, to my mind a search with null shouldn't be performed at all, and in this context a search with an empty string will never return any results.

+ +

Or is this up to me, and I only need to keep it documented and consistent across my whole API?

+",186303,,186303,,43139.40139,43139.63681,"As API author, should I treat Empty and Null the same in search criteria?",,3,2,1,,,CC BY-SA 3.0,, +365575,1,,,2/8/2018 9:43,,3,1481,"

I have a unit test similar to the code snippet below; it should check that the AddUser method only allows unique emails.

+ +

My question is about the Arrange part of this unit test: I use existing system code (the UserLogic class) to set up the first user, so that I have a user in context for the next parts of the test (Act and Assert).

+ +
[Fact]
+public void CheckUniqueEmail()
+{
+    var context = new DbContext(); //EF Core in memory db
+
+    //Arrange
+    UserLogic userlogic = new UserLogic(context);
+    User user = new User(""test@test.com"");
+
+    userlogic.AddUser(user);
+
+    //Act
+    UserLogic userlogicNew = new UserLogic(context);
+    User userNew = new User(""test@test.com"");
+
+    bool result = userlogicNew.AddUser(userNew); //result should be false since this email has already been used
+
+    //Assert
+    Assert.False(result);
+}
+
+ +

However, I have seen this done in two ways: The first is as I have done above. The second would be to insert data directly into context, as in the next example

+ +
[Fact]
+public void CheckUniqueEmail()
+{
+    var context = new DbContext(); //EF Core in memory db
+
+    //Arrange
+    context.Users.Add(new User { Email = ""test@test.com"" });
+    context.SaveChanges();
+
+    //Act
+    UserLogic userlogicNew = new UserLogic(context);
+    User userNew = new User(""test@test.com"");
+
+    bool result = userlogicNew.AddUser(userNew); //result should be false since this email has already been used
+
+    //Assert
+    Assert.False(result);
+}
+
+ +

Based on the foregoing, does it matter how I arrange the data for the unit tests? Which of the two approaches do you think is appropriate for unit tests?

+",221219,,222996,,43139.9125,43140.53542,Does it matter how I setup test data when creating unit tests?,,2,13,,,,CC BY-SA 3.0,, +365579,1,365583,,2/8/2018 11:07,,14,5008,"

Note

+ +

I realise this could be subjective question to answer, but I don't expect an answer in the form of ""Do this and everything will work out"". I am aware there will probably be some decisions to be taken and that different approaches are better for different scenarios. In any case, I would like a review of the pros and cons of whatever strategies professionals are actually using.

+ +
+ +

Assume that I am writing a big library, called BigProject. This library can be divided into several smaller projects, some of which are also useful outside of BigProject. What is the best way to organise the folder structure and installation to have a smooth workflow?

+ +

Consider that I also use git and that multiple developers should be able to work on all aspects of the project.

+ +

I have some options:

+ +

Option 1

+ +
BigProject 
+  __init__.py
+  main_files.py
+  subLibrary1
+  subLibrary2
+
+ +

The problem with this is that subLibrary1 and subLibrary2 must be accessed through BigProject, which is not ideal. Also it complicates the workflow with git if I want subLibrary1 and subLibrary2 to have their own git repository.

+ +

Option 2

+ +
BigProject
+  __init__.py
+Library1
+Library2
+
+ +

The problem with this is that BigProject will not see Library1 and Library2, unless I add them manually to sys.path. This seems like an ugly solution. It does make everything work nicely with git, but it also pollutes the namespace a bit.

+ +

Ideally, I should also be able to modify Library1 and Library2 without this affecting BigProject - and only deciding to update to a certain version of Library1 and Library2 when I feel like it is a good moment. Basically I want to make BigProject use a certain checkout of Library1 and 2 until I decide to ""update it"" and bring it to the current checkout.
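
One concrete arrangement I've been considering for the ""use a certain checkout until I update"" part (a sketch; the URLs and commit hashes are made up) is to give each library its own repo and pin BigProject to exact commits via pip's VCS requirement syntax:

```text
# BigProject/requirements.txt
# bump a pinned ref only when BigProject is ready to take the newer version
git+https://example.com/team/Library1.git@a1b2c3d#egg=Library1
git+https://example.com/team/Library2.git@d4e5f6a#egg=Library2

# during day-to-day development, install them editable instead:
#   pip install -e ../Library1 -e ../Library2
```

But I don't know if this is what people actually do in practice, hence the question.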

+",295596,,,,,43882.92986,How do I deal with projects within projects in python?,,1,2,6,,,CC BY-SA 3.0,, +365580,1,,,2/8/2018 11:12,,1,373,"

I have a class Tiles that looks something like this:

+ +
#include <vector>

class Tiles {
+ public:
+  void AddTile(int x) { tiles_.push_back(x); }
+  std::vector<int> tiles_;
+};
+
+ +

Now I want to create a class Tiles with data, i.e.,

+ +
class TilesWithData : public Tiles {
+ public:
+  void AddTile(int x, double data) { 
+    tiles_.push_back(x);
+    data_.push_back(data);  
+  }
+  std::vector<double> data_;
+};
+
+ +

In this scenario I want AddTile(int x) to be removed (or hidden) whenever a TilesWithData is created, so that only the AddTile(int x, double data) version is available. Is that possible? What is the best design pattern for this case? Thanks!

+",167320,,260537,,43140.31389,43140.31389,design pattern for class with data attached,,3,1,0,,,CC BY-SA 3.0,, +365590,1,,,2/8/2018 13:42,,2,1250,"

I have an address as part of my domain. The address keeps information about country, city, zipcode, street and housenumber. It is used in multiple places - a company can have an office address, invoicing address and/or correspondence address; transports have addresses at start, middle and end points; couple of other places, too.

+ +

I'm wondering how to correctly handle this DDD-way. Should address be a value object, an entity or an aggregate root? I've seen similar questions but not surprisingly none of those corresponded well with my domain.

+ +

An address doesn't know how to validate itself - company's addresses can be initially almost empty and filled later, but transport's addresses should be always complete. So generally it depends on the object containing an address, not on the address itself.

+ +

Next, the address of one company is completely independent from the addresses of other companies - even if they are the same in terms of location. Also, changing a company's address doesn't mean replacing it with another - it is just an update done to the current object. It also doesn't affect any other address. Does that alone qualify the address as an entity?

+ +

As an address can't function on its own, it shouldn't be an aggregate root, right?

+ +

Maybe address should be just an interface with CompanyAddress, TransportAddress etc. implementing it? They are following different rules after all...

+ +

I'd really appreciate your opinion about this. I feel like I'm missing something here, but can't quite figure out what exactly.

+",295320,,270801,,43139.68958,43143.24236,Domain driven design - Address entity / value,,3,0,1,,,CC BY-SA 3.0,, +365592,1,,,2/8/2018 14:07,,-2,218,"

We have

+ +
int i;
+
+ +

which occupies 4 bytes; let's say it lives at addresses 1000 through 1003. If we declare a pointer

+ +
int* p;
+p = &i;
+
+ +

now, does the pointer hold only the base address 1000, or the whole range from 1000 to 1003? +If a pointer just holds an address, why do we need to declare a datatype for it at all? +When a pointer is declared with a datatype, can it be typecast to another pointer type? +And why can't we always use void* p, which can hold the address of any datatype?

+",295624,,,,,43139.69028,Is pointer holds a base address or it holds whole limits?,,1,4,,43139.95278,,CC BY-SA 3.0,, +365599,1,365600,,2/8/2018 16:53,,0,2268,"

In my project I have some constants that I reference almost everywhere:

+ +
public sealed class Constants{
+    public static int MAX_QUAL { get; } = 1080;
+    public static bool CC { get; set; } = false;
+}
+
+ +

Is it good practice to store these values in the application code itself, similar to the snippet above, or in configuration files (App.config/Web.config/project.json/Manifest.xml etc.),

+ +
<appSettings>
+  <add key=""MAX_QUAL"" value=""1080"" />
+  <add key=""CC"" value=""false"" />
+</appsettings>
+
+ +

and then read them through these files on the runtime?

+ +
public static int MAX_QUAL { get; } = Convert.ToInt32(
+    System.Configuration.ConfigurationManager.AppSettings[""MAX_QUAL""]);
+
+ +

I want to structure the application in a way that lets non-developers change these values when necessary, without having to change the source code itself. One scenario is when the provider changes the SDK key while the developer is out on vacation, and someone needs to update these values.

+",295642,,,,,43139.71111,Is it a good practice to store constants in configuration files,,1,1,,43139.72083,,CC BY-SA 3.0,, +365602,1,,,2/8/2018 17:39,,0,790,"

I was really glad that ES6 introduced the let keyword for defining variables. var scoping rules can lead to all kinds of issues, especially when working with loops and event handlers. Even when programmers used var in the past, they still almost always expected let scoping rules.

+ +

Variables declared with var are available in the entire function. Variables defined with let are only available inside the block where they are declared.

+ +

However, var's scoping rules are also useful sometimes, though very rarely. Here is an example: (imagine all of these examples are inside a function, I don't intend to define globals here)

+ +
try {
+    var result = mightThrowAnException();
+    //do various things
+}
+finally {
+    if (result) cleanup(result);
+}
+//do other things
+callSomeFunction(result); //reference result
+
+return result;
+
+ +

In the above example, replacing var with let would cause an error. In order to use let, one would have to write a function like this:

+ +
let result; //declare the variable outside the braces
+try {
+    result = mightThrowAnException();
+    //do various things
+}
+finally {
+    if (result) cleanup(result);
+}
+//do other things
+callSomeFunction(result); //reference result
+
+return result;
+
+ +

I honestly think writing it this way isn't any better. It feels worse when you have to predefine more variables like these, just because they're declared in a try block.

+ +

In TypeScript, there is the additional annoyance that you have to explicitly declare the type of result if you want to predefine it in this way. If you initialize it, the type will be deduced automatically.

+ +

Is it a good idea to use var instead of let to take advantage of these scoping rules, when in almost every other part of your code you use let?

+",57922,,297359,,43199.75833,43199.75833,"Is it okay to use var on purpose in ES6, as opposed to let?",,1,2,,,,CC BY-SA 3.0,, +365604,1,365611,,2/8/2018 17:41,,1,95,"

I'm currently developing an application using reactive programming. Every entity creation or modification in the system publishes an event and no two entities can be created/modified within the same use case.

+ +

In this situation, there are entities that have to be created based on specific states of other entities. For example, there is an entity Slot that, when created with a state IN_AUCTION, should trigger the creation of an Auction entity. This is optional, as the Slot could be created with a state AVAILABLE, in which case, no Auction should be created.

+ +

Given this is the case, my question is about the type of event to publish when the Slot is created. These are the options I've thought about:

+ +
    +
  1. Publish a generic SlotCreatedEvent. In this case, the listener would need to verify the state of the Slot, either from state carried on the event or by querying the Slot to check its state.
  2. +
  3. Publish a SlotInAuctionEvent or a SlotAvailableEvent. In this case, there would be a specific listener that would create the Auction without checking the Slot state, but if I follow this approach I could end up with an explosion of event types.
  4. +
+ +

So, given this example, what's the expected granularity when publishing events? Should there be a specific event for each state/entity modification or just a generic one with all the information of the entity modified?
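
To make the two options concrete, a rough sketch (Python used only for brevity; all the names are invented):

```python
from dataclasses import dataclass

auctions = []  # stand-in for "create an Auction entity"

# Option 1: one generic event; the listener must inspect the state
@dataclass
class SlotCreatedEvent:
    slot_id: int
    state: str  # 'IN_AUCTION' or 'AVAILABLE'

def on_slot_created(event):
    if event.state == 'IN_AUCTION':  # this listener has to know about Slot states
        auctions.append(event.slot_id)

# Option 2: one event type per state; the listener needs no check
@dataclass
class SlotInAuctionEvent:
    slot_id: int

def on_slot_in_auction(event):
    auctions.append(event.slot_id)

on_slot_created(SlotCreatedEvent(1, 'AVAILABLE'))   # ignored by the listener
on_slot_created(SlotCreatedEvent(2, 'IN_AUCTION'))  # creates an auction
on_slot_in_auction(SlotInAuctionEvent(3))           # creates an auction
```

Both get the Auction created; the difference is whether the state knowledge lives in the listener or in the event type.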

+",295652,,,,,43139.85069,What kind of granularity is recommended in reactive programming when publishing state change events?,,1,13,,,,CC BY-SA 3.0,, +365606,1,,,2/8/2018 18:27,,1,139,"

I need to design a class hierarchy and be able to implement it in multiple ways.
+I want to create a logical representation of a query language, and then be able to parse it to/from different formats (XML, JSON, YAML, etc), and switch between them.

+ +

I started with the classes below (Logical Hierarchy) to represent the language (I know there are already tools for SQL queries, but this is just the basis; it will expand). Then I created the interface QueryComponentParser<T extends QueryNode, U>, followed by parser & deparser class hierarchies for every format I need (the JSON parser hierarchy is shown below as an example).

+ +

I ended up duplicating the logical hierarchy twice for every format I needed (once to parse, once to deparse), resulting in 7 class hierarchies just for 3 formats (JSON, XML and YAML), which are basically the same and very tightly coupled to the main logical hierarchy. I considered inheriting from the classes instead of creating parsers, but I still had the structural duplication, and it made interchangeability between formats (parse from JSON and deparse to XML) harder.

+ +

What is the best design to achieve my goals?

+ +

EDIT: Clarification
+My organization created its own query language, something between SQL and GraphQL. By now it is too late to change this standard, so I'm trying to create the best implementation for the situation. The query language's actions are the base of the project, so IMO the best way is a separation into 3 layers:
+Query format -> logical query -> query implementation
+The query format may change from XML to JSON, or may even be both. The query implementation may change according to the DB. The logical query should not be affected by changes to other parts of the system.

+ +

Logical Hierarchy

+ +
public abstract class QueryNode {
+}
+
+public class QueryRoot extends QueryNode {
+    protected final Select select;
+    protected final Optional<Where> where;
+}
+
+public class Select extends QueryNode {
+    protected final Set<EntityProjection> entities;
+}
+
+public class EntityProjection extends QueryNode {
+    protected final String entityName;
+    protected final Set<String> fields;
+}
+
+public class Where extends QueryNode {
+    protected final List<Expression> expressions;
+}
+
+public abstract class Expression extends QueryNode {
+}
+
+public class BinaryExpression extends Expression {
+    protected final String field;
+    protected final Object value;
+    protected final Action action;
+
+    public enum Action {
+        EQUALS_TO,
+        GREATER_THAN,
+        LOWER_THAN
+    }
+}
+
+public class MultiExpression extends Expression {
+    protected final List<Expression> expressions;
+    protected final Action action;
+
+    public enum Action {
+        AND,
+        OR
+    }
+}   
+
+ +

Json Parser Hierarchy

+ +
public class JsonQueryRootParser implements QueryComponentParser<QueryRoot, JsonElement> {
+    @Override
+    public QueryRoot parse(JsonElement obj) throws ParseException {
+        // Using JsonWhereParser & JsonSelectParser...
+    }
+}
+
+public class JsonWhereParser implements QueryComponentParser<Where, JsonElement> {
+    @Override
+    public Where parse(JsonElement obj) throws ParseException {
+        // Using JsonExpressionParser...
+    }
+}
+
+public class JsonSelectParser implements QueryComponentParser<Select, JsonElement> {
+    @Override
+    public Select parse(JsonElement obj) throws ParseException {
+        // Using JsonEntityProjectionParser
+    }
+}
+
+public class JsonEntityProjectionParser implements QueryComponentParser<EntityProjection, JsonElement> {
+
+    @Override
+    public EntityProjection parse(JsonElement obj) throws ParseException {
+        // Parse...
+    }
+}
+
+public class JsonExpressionParser implements QueryComponentParser<Expression, JsonElement> {
+
+    @Override
+    public Expression parse(JsonElement entry) throws ParseException {
+        // Parse...
+    }
+}
+
+",295654,,295654,,43142.41667,43202.69444,Abstract composition hierarchy with multiple implementations,,2,2,1,,,CC BY-SA 3.0,, +365608,1,365610,,2/8/2018 18:46,,1,48,"

I'm trying to figure out the best option for marking a database entity that can be closed manually or automatically. This entity already has a status that accepts OPEN or CANCELED.

+ +

Now I would like to ""close"" it, but the close action can happen manually or automatically, and I will need this information in the future to know how the entity was closed.

+ +

So, I think I had two options:

+ +
    +
  1. Create two different status: CLOSE_MANUALLY and CLOSE_AUTOMATICALLY.
  2. +
  3. Create one status CLOSE and create a boolean flag close_automatically
  4. +
+ +

Is there an obvious choice between these two? Or is there another option?
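
For clarity, this is how I picture the two options (a Python sketch; the names are made up):

```python
from enum import Enum

# Option 1: the close reason is folded into the status itself
class StatusV1(Enum):
    OPEN = 'OPEN'
    CANCELED = 'CANCELED'
    CLOSED_MANUALLY = 'CLOSED_MANUALLY'
    CLOSED_AUTOMATICALLY = 'CLOSED_AUTOMATICALLY'

def is_closed_v1(status):
    # every "is it closed?" check now has to mention both values
    return status in (StatusV1.CLOSED_MANUALLY, StatusV1.CLOSED_AUTOMATICALLY)

# Option 2: a single CLOSED status plus a separate flag
class StatusV2(Enum):
    OPEN = 'OPEN'
    CANCELED = 'CANCELED'
    CLOSED = 'CLOSED'

entity = {'status': StatusV2.CLOSED, 'closed_automatically': True}

def is_closed_v2(entity):
    # the status check stays simple; the reason lives in the flag
    return entity['status'] is StatusV2.CLOSED
```

So the trade-off I see is a fatter status enum versus an extra column that is only meaningful when the status is CLOSED.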

+",172464,,,,,43139.81458,Simple question about status: CLOSE_MANUAL/CLOSE_AUTO vs CLOSE with boolean flag auto,,2,1,,,,CC BY-SA 3.0,, +365617,1,365626,,2/8/2018 21:39,,3,460,"

I'm writing a really simple interpreted object oriented programming language with a C-like syntax. I've been looking into type inference and I've found a few implementations of the 'Hindley-Milner Type Inference' system.

+ +

Most are in functional languages, which I can't read, but I found this one written in Python that I can make sense of.

+ +

One question though is I'm not sure how to represent more complicated types like a Structure or a Tuple.

+ +

This implementation represents primitive types like booleans with the TypeOperator (pseudo-ish code):

+ +
bool = new TypeOperator(""bool"", [])
+int = new TypeOperator(""int"", [])
+
+type_environment.add TypeOperator(""true"", [bool])
+type_environment.add TypeOperator(""false"", [bool])
+
+ +

So could I use this type operator to represent a structure?

+ +
struct Person {
+    int age;
+    bool is_dead;
+}
+
+ +

Like so:

+ +
type_environment.add TypeOperator(""Person"", [int, bool]);
+
+",295670,,295670,,43139.90625,43140.68403,Representing a structure/tuple type with Hindley Milner Type Inference,,2,4,,,,CC BY-SA 3.0,, +365622,1,365628,,2/9/2018 1:18,,12,598,"

As an example, say you are writing an app in Java.

+ +

Your app communicates with an API server written in Python.

+ +

The Python server communicates with an SQL database.

+ +

You also have a website for your app written in JavaScript.

+ +

With 4 different languages, it's easy to end up repeating essentially the same data structures 4 different times.

+ +

For example, a User type might look like this (pseudocode):

+ +
type User {
+  integer id;
+  string name;
+  timestamp birthday;
+}
+
+ +

Every part of the project would need some kind of representation for User. The Java and Python parts would need two different class declarations. The database would need a User table declaration. And the front end site would need to represent a User too.

+ +

Repeating this type 4 different times really breaks the Don't-Repeat-Yourself principle. Also there is the problem that if the User type is altered then these changes need to be repeated in every different part of the project.

+ +

I know that Google's protobuf library offers a kind of solution to this problem wherein you write a datastructure using a special syntax, and then the library generates a structure declaration for you in multiple different programming languages. But this still doesn't deal with the issue of having to repeat validation logic for your types.
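
For reference, the protobuf approach I mean looks roughly like this: a single definition from which the Java, Python and JS declarations are generated (my own sketch; the field numbers are arbitrary):

```protobuf
syntax = ""proto3"";

import ""google/protobuf/timestamp.proto"";

// one source of truth; protoc emits a class per target language
message User {
  int64 id = 1;
  string name = 2;
  google.protobuf.Timestamp birthday = 3;
}
```

That removes the structural duplication, but the validation logic still has to be written once per language.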

+ +

Does anyone have any suggestions or links to books/blog posts about this?

+",295681,,,,,43140.31111,How to avoid duplication of data structures when parts of an app are written in different languages?,,2,4,1,,,CC BY-SA 3.0,, +365623,1,,,2/9/2018 1:20,,3,308,"

In a nutshell, what I need is a simple good practice for multiple (loosely or tightly coupled) git repositories. +I would love to hear that there is a good framework for this, but I have read a dozen other Stack Exchange threads and most simply give a compromise solution; only one suggested ""have a good practice"" ... just as an idea.

+ +

A longer explanation would be :

+ +

We deliver distributed environments, i.e. many on-premises integrated software solutions.

+ +

Let's assume these are my ""code parts"" :

+ +
+
+Services :
+Service A
+Service B
+Service C
+
+Shared libs:
+db-lib
+logger-lib
+communication-lib
+
+FrontEnd:
+FrontEnd A
+FrontEnd B
+Android
+ios
+
+ +

On an SVN monolith repo it would probably look like this:

+ +
+--libs
+    --db
+    --logger
+    --com
+--service A
+    --assets
+    --tests
+    --
+--service B
+    --assets
+    --tests
+    --
+--service C
+    --assets
+    --tests
+    --
+--FrontEnd A - html
+    --assets
+--FrontEnd B - android app
+    --assets
+--FrontEnd C - iphone app
+    --assets
+--FrontEnd D - also html
+    --assets
+
+ +

The connection between services can range from interactive microservices to a solution suite (such as an internal organization portal, a CMS, and a 3rd-party API).

+ +

Now, let's assume that I have just run a complete set of tests and deployed this environment to client A's on-premises site (their internal business portal).

+ +

After a success with this client we want to onboard client B. +This client has some requests, so we are now developing a new version, testing it and pushing it to site B. +The fixes and features touch many of the above components, including the libs.

+ +

Now client A has a bug and a minor change request. +We need either the complete set of the above code at the exact versions deployed for client A, +OR +to push client A to the latest version (which is not always possible and/or cost effective).

+ +

After reading many threads on Stack Exchange: dependency-management mechanisms such as git submodule, git subtree and git subrepo are not intuitive, are not easily configured, do not update automatically, and thus create extra overhead and are error-prone.

+ +

Build/dependency tools are usually per coding language, and a mixed stack like the above will not get one solution to rule them all.

+ +

I know there are external tools, and I have also researched some suggested hacks; most of those ""admit"" to being a partial solution to a deep challenge.

+ +

So, regardless of the ""tool"" (or no tool),
+What I would really love to learn about is a good and simple dependency-management workflow that allows the developers to easily go back to ""a point in time"": pull the right version of the code from each of the repos into specific hotfix/feature branch(es), fix some code (2 lines of the db lib, 3 lines of service B), push the code, build, test and deploy, +and easily repeat this cycle with as little pain as possible.

+ +

e.g., something like this:
+http://nvie.com/posts/a-successful-git-branching-model/
+but for a dependent multi-repo flow.

+",295669,,245095,,43160.62431,43475.71806,Multiple Git repos practices for distributed environment,,1,0,3,,,CC BY-SA 3.0,, +365625,1,365639,,2/9/2018 2:07,,2,2092,"

I am storing information about Widgets in my database, and each Widget has one (non-unique) transformation function associated with it. My problem is how to associate Widgets with their transformation function.

+ +

My first instinct was to use an enum for the transformation function. I am using sqlalchemy for my database ORM, which includes an enum type that makes it really easy to use an enum column. It takes care of validation, and I can simply have an enum in my code for the possible transformation functions, an enum column on Widgets, and then a map in my code from enum values to the actual functions.

+ +

Then I read why enums are evil. It's pretty convincing, but if I use a reference table, then I can't use a python enum object, and my map will have to be from strings to functions - which means that if I make a typo it might happen that I am mapping from a transformation that doesn't exist in the database, which is not good, and I have no way of knowing that my map is exhaustive and correct.

+ +

What's the best way to do this?
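
For what it's worth, part of what attracts me to the enum approach is that the enum-to-function map can be checked for exhaustiveness at import time, roughly like this (my own sketch; the transformations are placeholders):

```python
from enum import Enum

class Transform(Enum):
    IDENTITY = 'identity'
    DOUBLE = 'double'

# map enum members (not strings) to the actual functions,
# so a typo becomes an AttributeError instead of a silent mismatch
TRANSFORMS = {
    Transform.IDENTITY: lambda x: x,
    Transform.DOUBLE: lambda x: 2 * x,
}

# cheap exhaustiveness check at import time
missing = set(Transform) - set(TRANSFORMS)
assert not missing, 'unmapped transformations: %s' % missing

result = TRANSFORMS[Transform.DOUBLE](21)
```

With a reference table I lose that static check, which is the part I can't see how to replace.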

+",295682,,295682,,43140.19653,43140.37778,Enum or reference table when dealing with maps,,1,2,1,,,CC BY-SA 3.0,, +365632,1,,,2/9/2018 7:20,,5,617,"

Is there a way to implement the travelling salesman or travelling purchaser algorithm with ordering constraints between locations? For example, I have to grab item X before item B, C before D, and F, G, H in any order.
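
To show what I mean by the constraint, here is a naive brute-force sketch (Python; only workable for a handful of locations, and the distances are made up). The real question is whether there is a proper algorithm for this rather than filtering permutations:

```python
from itertools import permutations

def tour_length(order, dist):
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

def respects(order, before_pairs):
    # every (a, b) pair means "a must be visited before b"
    pos = {loc: i for i, loc in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in before_pairs)

def best_tour(locations, dist, before_pairs):
    # enumerate every visiting order, drop the ones violating the
    # precedence constraints, and keep the shortest of the rest
    feasible = (p for p in permutations(locations) if respects(p, before_pairs))
    return min(feasible, key=lambda p: tour_length(p, dist))

# toy instance: X must be picked up before B, C is unconstrained
dist = {
    'X': {'X': 0, 'B': 1, 'C': 5},
    'B': {'X': 1, 'B': 0, 'C': 1},
    'C': {'X': 5, 'B': 1, 'C': 0},
}
order = best_tour(['X', 'B', 'C'], dist, [('X', 'B')])
```

This obviously explodes factorially, which is why I'm asking how the constraint is normally incorporated.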

+",252178,,252178,,43142.66528,43580.86944,How to implement the travelling salesman algorithm with dependecies between locations,,3,2,1,,,CC BY-SA 3.0,, +365637,1,365667,,2/9/2018 8:57,,4,1421,"

I'm in the process of designing a server responsible for serving files that are between 10MB and 50MB in size.
+Initially we will run two instances of the server (let's call them fs1 and fs2), with future plans to switch to a micro-service architecture, where the number of server instances will grow or shrink depending on the load.

+ +

These two instances need to interact with a third server running a scheduler and a file management application, as well as a database (on another server) where some metadata will be saved for clients to use.

+ +

My initial thought was to use RabbitMQ to allow fs1 and fs2 to communicate with each other and with the management app. The process would work as follows:

+ +
    +
  1. The management app uploads to fs1 server (could be either fs1 or fs2)
  2. +
  3. fs1 notifies fs2 and the management app when upload is complete
  4. +
  5. fs2 contacts fs1 and stores a copy of the file
  6. +
  7. fs2 notifies the management app when upload is complete
  8. +
  9. The management app saves metadata to the external database
  10. +
  11. both fs1 and fs2 can now server the files when requested
  12. +
+ +

This seems OK, if there are only two instances, but once you start adding more it doesn't work.
+Our ops department are very much against the idea of using the database to store files. They are worried that it will slow down the system too much. I agree it might, which is why I want a separate database for the specific purpose of storing the files and metadata. +I want to build something like the following:
+

+ +

My thinking is that the upload service can manage uploading of files and saving of metadata to the database. +When the scheduler schedules a new job, the upload service (badly named, I know, but I'm not making that image again :-) ) can notify the file server instances that they need to cache the required file(s) from the database, which they can access directly.
+The file servers won't need to cache more than 5 or 6 files each at a time.
+Also, in the diagram I missed that the file management service will receive download progress messages from both file servers.

+ +

So to my questions:

+ +
    +
  1. Is this a reasonable way to store files of this size for serving?
  2. +
  3. Is this the right way to be thinking when considering the move to microservices in the future?
  4. +
  5. Are there advantages to storing the files on the file system of each fs instance instead of just caching?
  6. +
  7. How can I convince our ops team that storing 50MB files in a database is the way to go? what are the pros and cons?
  8. +
  9. Any other thoughts or comments appreciated.
  10. +
+",13417,,13417,,43140.62014,43143.66181,Is storing files of up to 50MB in size in a database for use by multiple servers a reasonable idea? Example inside,,4,18,1,,,CC BY-SA 3.0,, +365640,1,365645,,2/9/2018 9:12,,5,611,"

We all know that Scrum works very well for a traditional team where team members come into work each week/sprint, agree goals and meet commitments.

+ +

However open source projects are often very different:

+ +
    +
  • People tend not to work on Open Source projects full time
  • +
  • While some projects have a group of core contributors many pull requests are ad-hoc with people submitting pull requests which most likely won't contribute to any agreed sprint goal
  • +
  • Communities are often run via messaging systems, blogs, and forums rather than formal meetings like plannings sessions, retrospectives, and sprint reviews
  • +
  • The direction of projects is often more of a democracy (or whoever contributes the most) rather than being focused by a Product Owner
  • +
+ +

Do these differences mean the scrum methodology cannot be applied to an open source project or do adjustments need to be made?

+",29899,,,,,43140.575,Can scrum work for a traditional open source project?,,3,5,1,,,CC BY-SA 3.0,, +365641,1,365718,,2/9/2018 9:32,,2,82,"

For example, say I have a game which has some components, such as Player and Enemy, as well as the parent container, Game, e.g.:

+ +

Version 1:

+ +
public class Player{
+    private int hp;
+    private Label hpLabel;
+    //some other properties
+}
+
+public class Enemy{
+    private int hp;
+    private Label hpLabel;
+    //some other properties
+}
+
+//some other component classes
+
+public class Game{
+    Player player;
+    Enemy enemy;
+    //some other components
+    public void changeState(int state){
+        //game loop
+    }
+}
+
+ +

and another style which groups data fields (as well as views) into a single class:

+ +

Version 2

+ +
public class GameData{
+    public int playerHp;
+    public int enemyHp;
+    //some other data field
+}
+
+public class GameViews{
+    public Label playerHpLabel;
+    public Label enemyHpLabel;
+    //some other views
+}
+
+public class Game{
+    public GameData gameData;
+    public GameViews gameViews;
+    public void changeState(int state){
+        //game loop
+    }
+}
+
+ +

I know Version 1 is the standard way to do this, but I have also found that Version 2 has some advantages, e.g. it gives me a big picture of which data or state the game has, as well as which UI components are available on the screen. Which version should I use?

+",248528,,248528,,43532.42847,43532.42847,"Should I group related data fields and views together, or group data fields into a DataClass (as well as group views into a ViewsClass)?",,1,1,,,,CC BY-SA 4.0,, +365643,1,365649,,2/9/2018 9:54,,2,1762,"

Say I have a data model, which looks like this:

+ +
public class DataCustomer
+{ 
+    public virtual System.DateTime CreatedTime { get; set; }
+    public virtual Guid Id { get; set; }
+    public virtual string FirstName { get; set; }
+    public virtual string Surname { get; set; }
+    public virtual string FaxNumber{ get; set; }
+    public virtual System.DateTime DateOfBirth { get; set; }
+    public virtual String Gender { get; set; }
+    public virtual string UserID { get; set; }
+    public virtual List<Offers> Offers { get; set; }
+}
+
+ +

This class is mapped to NHibernate. Now say I have a Domain Model like this:

+ +
public class DomainCustomer
+{ 
+    private Guid Id { get; set; }
+    private String Gender { get; set; }
+    private DateTime DateOfBirth { get; set; }
+    public virtual List<Offers> Offers { get; set; }
+
+    public DomainCustomer(Guid id, string gender, DateTime dateOfBirth)
+    {
+          Id=id;
+          Gender=gender;
+          DateOfBirth=dateOfBirth;
+    }
+
+    public void AssignOffers(IOfferCalculator offerCalculator, IList<Offers> offers)
+    {
+          //Assign the offers here
+    }
+
+}
+
+ +

Notice that the Data Model looks different from the Domain Model. I believe this is normal practice; however, every example I look at online seems to show the Data Model being the same as the Domain Model.

+ +

Option 1

+ +

The advantage of them being identical is that mapping between the Domain Model and Data Model is very simple i.e. you can do this with AutoMapper:

+ +
DataCustomer dataCustomer = AutoMapper.Mapper.Map<DataCustomer>(domainCustomer);
+
+ +

Option 2

+ +

The advantage of them being different is that less data is passed between the data model and the domain model and vice versa. Also, I think it makes things slightly clearer, i.e. a user/reader of my class knows which fields are needed by the domain model. With this approach, I would only map the members that have changed, i.e. Offers:

+ +
customerData.Offers = AutoMapper.Mapper.Map<List<Offers>>(customerDomain.Offers);
+
+ +

Decision

+ +

Both options use a factory when mapping between CustomerData and CustomerDomain. The question is about the mapping between CustomerDomain and CustomerData using AutoMapper.

+ +

Which option is more expected if I am following the principle of least astonishment?

+",65549,,65549,,43140.425,43151.72083,Should the Data Model be identical to the domain model for mapping purposes?,,4,3,2,,,CC BY-SA 3.0,, +365648,1,365685,,2/9/2018 11:52,,1,293,"

While working on a web (JEE-based) application, I saw different ways people instantiate loggers in their classes. The first is the classic way:

+ +
private static final Logger logger = LoggerFactory.getLogger(AbstractPersistenceObjectDAO.class);
+
+ +

but as JEE applications are CDI-enabled, in some managed classes it was injected like

+ +
@Inject
+private Logger logger;
+
+ +

Is there any advantage to using a logger with CDI, in terms of performance (time or memory)? Are there any downsides to either of these approaches? Given that @Inject can only be used in managed environments, does it offer any advantage over the other approach?

+",169125,,169125,,43140.57083,43665.89306,logger initialization in JEE based web application,,1,4,,,,CC BY-SA 3.0,, +365653,1,,,2/9/2018 14:37,,11,2864,"

My understanding of story estimation has been that one should estimate the size of a story as it would be for an imaginary, average developer — a bit like the ""reasonable bystander"" concept in law. That is, you should not estimate the story's size assuming you have to do it.

+ +

To give an example: in my previous job I was part of a team where I was far and away the most confident Ruby developer. My teammates would routinely estimate Ruby-related stories far bigger than I would, with arguments like, ""Well I don't know how X works in Ruby, so this would take me ages to do.""

+ +

My argument against this comes from the fact that sprint planning is where the team's capacity comes into play. That is the correct forum to say, ""Our capacity this sprint will be slightly lower than usual because the majority of the tasks are Ruby-based, and we only have one strong Ruby developer."" Factoring this in during estimation would double up this aspect.

+ +

I'd appreciate any authoritative references in answers, but simple opinions would be great too.

+",93651,,245095,,43178.53403,43452.73056,Should individual ability be considered in story points?,,5,0,0,,,CC BY-SA 3.0,, +365654,1,365709,,2/9/2018 14:54,,0,96,"

The Eclipse Scala IDE (and Intellij Idea, too) has, together with a standard REPL CLI, an artifact named Worksheet, that works like a persistent REPL log: the whole file is compiled and executed at save time, and the results are in-lined as comments. It is a great tool for API exploration and design, and initial testing.

+ +

Is there any guideline on whether such a file belongs in the SCM?

+ +

I usually don't share those files, because they are often rough, limited in scope and not commented, being a way to test and try different ideas. Sometimes, however, a worksheet gains enough value and informative content to become valuable.

+ +

Should it be converted to a unit test or committed as is? Do you find it an effective way to document and share the design or some example usage of a piece of code?

+",295738,,106965,,43141.6375,43187.94861,Do Scala worksheet (or REPL logs) belong to the SCM?,,2,0,,,,CC BY-SA 3.0,, +365658,1,365662,,2/9/2018 15:49,,22,7173,"

I'm currently working for a company that uses VSTS for managing git code. Microsoft's ""recommended"" way of merging a branch is to do a ""squash merge"", meaning that all commits for that branch get squashed into one new commit incorporating all of the changes.

+ +

The trouble is, what if I make some changes in one branch for one backlog item, then immediately want to start making changes in another branch for another backlog item, and those changes depend on the first branch's set of changes?

+ +

I can create a branch for that backlog item and base it on the first branch. So far, so good. However, when it comes time to create a pull request for my second branch, the first branch has already been merged into master, and because that was done as a squash merge, git flags up a bunch of conflicts. This is because git doesn't see the original commits that the second branch was based off of; it just sees the one big squash merge, and so in order to merge the second branch into master it tries to replay all the first branch's commits on top of the squash merge, causing lots of conflicts.

+ +

So my question is, is there any way to get around this (other than just never basing one feature branch off another, which limits my workflow) or does squash merging just break git's merging algorithm?

+",125671,,,,,43951.92917,Does squashing pull requests break git's merging algorithm?,,3,0,5,,,CC BY-SA 3.0,, +365660,1,365695,,2/9/2018 15:59,,1,137,"

Assuming that the entire product team has agreed that implementing some automated end-to-end tests* is worthwhile in the first place... By what criteria should the workload of implementing automated end-to-end tests be distributed between developers and dedicated QA (automation) engineers (or some other role)?

+ +

*I'd prefer not to define end-to-end test too precisely; any very common definition is fair game. I tend to mean UI tests or public API tests, but feel free to vary from that definition (answers could include or reference a brief definition). If this proves problematic I may update this question to be more narrow.

+ +

I am specifically not asking about deciding what to test, which may have a separate answer. I am asking about implementing automated tests, the requirements (essentially test plans) for which may be decided by or may receive input from an entirely different set of people (dedicated QA, product management, the customer, some external agency...). Although I acknowledge that perhaps the question of how test plans are designed might influence the answer.

+ +

Allocation of implementation workload might be, for example, one of:

+ +
    +
  • Dedicated QA (automation) engineers could implement all the automated end-to-end tests.
  • +
  • Developers could implement all the automated end-to-end tests. (Indeed the team may have no dedicated automation or QA engineers.)
  • +
  • The workload be distributed between both groups (or some other role[s]).
  • +
+ +

What criteria should govern who implements what?

+ +

Perhaps some specific types of end-to-end tests are more suitable for one group or the other? For example:

+ +
    +
  • release-blocking vs. non-release-blocking tests
  • +
  • pre-merge vs. post-merge tests
  • +
  • pre-production vs. production tests
  • +
  • functional vs. performance tests (indeed performance tests are often designed and even implemented by yet another role, dedicated performance engineer)
  • +
  • tests that require significant new tool development or acquisition vs. tests that do not
  • +
  • tests that run outside the organization like at a customer or partner site vs. tests that only run within the organization...
  • +
+ +

Perhaps particulars of the product team and its environment (management, market, on-premise vs. SaaS...) may impact the decision?

+ +

I'm especially interested in the SaaS context but on-premise is fair game.

+ +

Perhaps it is important to consider whether a developer would implement automated end-to-end tests for their own feature vs. another developer's feature. And even though I am interested in test implementation not test design, perhaps the test design question should nonetheless influence the answer.

+ +

Existing questions

+ +

The following excellent questions are related but a bit more focused on whether it's okay to have no tester role:

+ + + +

By contrast, I concede that some teams may have a dedicated tester role and some may not (both of those alternatives are fair game to this question). Also by contrast, this question focuses specifically on implementing automated end-to-end tests, not the larger question of a QA or testing role, or even test planning per se. These existing questions and answers are important but focus more on testing and test planning, and are too broad to address implementing automated end-to-end tests. These tests are right at a traditional boundary where developer and QA roles often meet; on one side automated unit tests are generally the realm of developers, and on the other side manual end-to-end tests are often the realm of QA; with automated end-to-end tests, the roles may blur and assignment is less obvious. Thus this topic seems ripe for careful analysis here.

+ +
+ +

Update

+ +

Previously I asked about ""writing"" automated end-to-end tests, but I am really specifically interested in the question of implementing (programming) automation, not designing a test plan. I have updated the question to reflect that, and to avoid a discussion of whether developers should be testers, or such.

+ +

All this begs the question of whether it is actually okay to have one person design a test plan and another person implement automated versions of those tests. I'd like to remain agnostic on that here in my question, but answers should be free to come down on one side of that or the other if desired.

+",50639,,50639,,43140.86042,43141.33194,By what criteria should automated end-to-end test implementation work be allocated?,,1,11,,,,CC BY-SA 3.0,, +365674,1,,,2/9/2018 19:49,,3,1498,"

When planning our database, we ended up with a setup like this:

+ +

We have Company, Product and Person.

+ +

There is a many-to-many relationship between Company and Product, through a junction table Company_Product, because a given company may produce more than one product (such as ""car"" and ""bicycle""), but also a given product, such as ""car"", can be produced by multiple companies. In the junction table Company_Product there is an extra field ""price"", which is the price at which the given company sells the given product.

+ +

There is another many-to-many relationship between Company_Product and Person, through a junction table Company_Product_Person. Yes, it is a many-to-many relationship involving one entity that is already a junction table. This is because a Person can own multiple products, such as a car from company1 and a bicycle from company2, and in turn the same company_product can be owned by more than one person, since for example both person1 and person2 could have bought a car from company1. In the junction table Company_Product_Person there is an extra field ""thoughts"" which contains the thoughts of the person at the moment they purchased the company_product.

+ +

Is this commonly acceptable, or is it a sign that something is wrong? If it's weird, what would be a better solution?

+ +

I've been running into a few problems with this (such as this one), so I was wondering, perhaps my database setup is wrong to begin with.

+",211763,,211763,,43141.04583,43147.54306,Is it OK to have a many-to-many relationship where one of the tables involved is already a junction table?,,3,8,,,,CC BY-SA 3.0,, +365681,1,365688,,2/9/2018 22:17,,2,150,"

I am trying to figure out the best way to keep objects in memory without having them scattered everywhere within the code.

+ +

For example: I have a PyQT menu system which interacts with objects. Currently, these menus are able to create and modify an object by hitting code outside of their menu-related files, but afterwards the objects are being held inside the menu objects themselves.

+ +

e.g.

+ +
from foo_obj import Foo
+
+class Menu:
+    foo = Foo()
+    ...
+
+ +

Some objects that need to be accessed from multiple menus sit in their own file where they are saved to a variable and imported into the ""highest-level"" file (main.py) so they aren't destroyed till the program terminates.

+ +

I'm sure this is not the best way to go about this. I have heard of MVC design and I have tried to follow something like that where almost all controller logic such as modifying the objects, conditionals, etc. are in their own ""controller-like"" files, and menu files are almost entirely views. Unfortunately, the issue still remains that I cannot find a good way to store these objects without creating some sort of abstract concept like object_container.py.

+ +

What is a good approach for this type of issue?

+",295769,,9113,,43141.225,43141.225,Maintainable way for keeping objects in memory,,1,7,,,,CC BY-SA 3.0,, +365691,1,365696,,2/10/2018 7:07,,2,5348,"

I was wondering about client-side vs. server-side calculations for websites that provide some convenience-type service. For example, an online website where you input a certain date and the website tells you how many days there are between today and that date, or a website that tells you the steps to solve an inputted math equation. Do these types of websites warrant server-side calculations of any kind?

+",295796,,,,,43141.33333,Client Side or Server Side Calculations?,,1,1,,,,CC BY-SA 3.0,, +365699,1,,,2/10/2018 11:54,,-1,112,"

I'm dealing with a design problem. Actually I have the following architecture:

+ +

A Processor interface:

+ +
public interface Processor {
+
+    void applyNewConfiguration(final Configuration conf);
+
+    Result process();
+}
+
+ +

A processor MyProcessor which implements this interface:

+ +
public class MyProcessor implements Processor {
+
+    @Override
+    public void applyNewConfiguration(Configuration conf) {
+        // Manage configuration which can take a long time
+        // Scan dir, fetch BIG files, etc.      
+    }
+
+    @Override
+    public Result process() {
+        // Process regarding the configuration
+    }
+}
+
+ +

And a Manager which triggers the process method of each Processor when needed:

+ +
public class ProcessManager {
+
+    public void execute() {
+
+        //  Eventually apply new configuration to a processor
+
+        // Call process method of Processor
+    }
+}
+
+ +

Let me explain: on application startup, all processors are loaded and a configuration is applied. Loading the configuration can take very long, and the configuration does not change very often, so I did not want to recreate a Processor each time I want to make it process.

+ +

So I'm just creating a single instance of each processor; when needed, I apply a new configuration and then call the process method as usual.

+ +

To me, it's a bad design, as I'm changing the behaviour of my processors while they should be stateless, but I do not know how to deal with that. If I called the process method while the configuration is being applied, the behaviour would be disastrous.

+ +

I could of course have a constructor in each processor with the configuration as parameter, and re-create a processor each time, but the configuration management is a very long process so I do not want to do that.

+ +

I was thinking about having a separate class (maybe ConfigurationManagement) which can handle each Configuration object, process it and associate it with a Pojo (for example in a Map<Configuration, Pojo>), but as soon as a new Configuration is applied, the previous one will not be garbage collected.

+ +

Do you have any idea how to solve this problem? It seems quite simple but I can't find any good solution.

+",260111,,,,,43141.92569,Design of Stateless class with partial modification,,2,0,,,,CC BY-SA 3.0,, +365700,1,365704,,2/10/2018 11:58,,0,1026,"

So memory segmentation can be done with or without paging. I always hear people talking about stacks and heaps when discussing something memory-related in C++. However, what I do not get is this: if the program's memory (its virtual address space) is segmented using pages, it won't get divided into a stack, heap, etc., right? Rather, it is divided into pages without names - in that case (with the assumption that my understanding of segmentation is correct), can you still talk about the stack and heap?

+ +

Also, if the virtual address space is divided into pages, where do the stack, heap, data, etc. segments come from?

+",295868,,295868,,43141.50208,43141.53889,"Memory segmentation - stack, heap, etc.?",,3,1,,,,CC BY-SA 3.0,, +365705,1,,,2/10/2018 14:40,,14,2701,"

I have recently learned C and want to start a project to solidify my knowledge. I've settled on making a very simple text editor, something like vim. The problem I face is that I genuinely have no clue how a text editor even works, and I don't know what to google for to learn about it.

+ +

Googling about it led to vim's GitHub repo, which is useless to me because the codebase is huge and the code is confusing me. I also found tutorials for making a text editor in C that functions kind of like vim.

+ +

Although I thought about following the tutorials, it feels like cheating. How did the vim developers figure out how to code vim without specific tutorials? Or did they start from simpler text editors? How did they figure that out just from knowledge of languages and their documentation?

+ +

What is it exactly that I need in order to start writing this text editor without directly following a tutorial? Another example I like to think of is: how did Dennis Ritchie and Ken Thompson code up Unix? I have an idea of how OS's function, but I have no idea how to put it into code. What is it that I'm missing? How do I transfer this knowledge of the language into actual, practical use?

+",295819,,97259,,43143.47153,43143.47153,How do you code something when you have no idea how it actually works?,,4,13,2,43141.87222,,CC BY-SA 3.0,, +365708,1,365713,,2/10/2018 16:26,,10,2780,"

I am currently reviewing some of the code of junior developers who just joined my team, I am wondering how should I deliver the output of this review:

+ +
    +
  1. Should I fix the code myself?

  2. +
  3. Should I give them feedback on the review process and let them do the fixes according to my instructions? And if so, how do I give this feedback? Do I fill in a certain template document and send it to them, or is there some software that will help me mark problems inside the code files where they can later check for them? (I am using Visual Studio).

  4. +
+ +

After I have finished reviewing the code and the fixes are done, some time passes and some parts of the code I reviewed in the past have changed. How do I handle the re-review process? Should I recheck all the code over again, or just the parts that have changed? And if so, how do I keep track of the parts that have changed so that I avoid double-reviewing code?

+",101940,,83021,,43141.81806,43148.15972,How to give feedback after code review process,,3,1,,,,CC BY-SA 3.0,, +365723,1,,,2/10/2018 21:29,,6,303,"

Is it considered bad practice to call, for example, tshark or ffmpeg from my code, assuming I couldn't find a good enough library to use?

+",259441,,,,,43141.91736,Is it considered bad practice to use external programs?,,2,1,1,,,CC BY-SA 3.0,, +365730,1,,,2/11/2018 1:08,,0,632,"

I’m writing an asynchronous, Promise-returning function. In the processing I test some conditions, and if the conditions pass, the promise should be fulfilled (resolved), but if one fails, then the promise should be rejected.

+ +

I want to know which is the preferred coding style, or which is more readable, when writing promises:

+ +
function earlyReturns(data) {
+  return new Promise(function (resolve, reject) {
+    if (/* something fails */) return reject(new Error('error type 1'))
+    /* do some processing 1 */
+    /* do some processing 2 */
+    if (/* something else fails */) return reject(new Error('error type 2'))
+    /* do some processing 3 */
+    return resolve(processing_result)
+  })
+}
+
+ + + +
function conditionals(data) {
+  return new Promise(function (resolve, reject) {
+    if (/* something fails */) reject(new Error('error type 1'))
+    else {
+      /* do some processing 1 */
+      /* do some processing 2 */
+      if (/* something else fails */) reject(new Error('error type 2'))
+      else {
+        /* do some processing 3 */
+        resolve(processing_result)
+      }
+    }
+    return;
+  })
+}
+
+ +

On one hand, the earlyReturns function is more succinct, but on the other hand, conditionals has only one exit point. (Also, return reject() and return resolve() are misleading because the return value is really undefined.)

+",42643,,,,,43142.61458,"Which is more readable: early returns, or conditionals?",,3,8,,43142.63194,,CC BY-SA 3.0,, +365733,1,,,2/11/2018 2:26,,3,2118,"

While I do see the benefits of IoC containers (as I've used them a fair bit), I don't see how they can be incorporated within TDD unit tests (note, I'm only interested in unit tests here, not integration tests).

+ +

When I TDD, I refactor constructors to use IoC, so that I can inject fake dependencies wherever I might need to. Implementing a container implies that I'd be deviating from the red-green-refactor-repeat loop and adding code that wouldn't be covered by my tests.

+ +

Now let's say that you somehow (with great design prowess) managed to hook a container into your TDD life-cycle. You certainly aren't meant to create instances in your unit test by resolving dependencies, as strictly speaking, that turns it into an integration test (bringing in multiple production components).

+ +

So my questions are:

+ +

1) In what scenario might you need a container while unit testing within TDD?

+ +

2) Assuming a valid scenario for (1) exists, how would you go about incorporating a container without breaking away from red-green-refactor-repeat?

+ +

I should clarify that by 'need', I'm talking about a stage you'd get to where manually managing DI gets tedious because you have a massive object graph.

+ +

IMPORTANT: I'm not asking about containers for your test. I'm asking strictly about production containers. I have a feeling that an IoC container cannot be incorporated into a TDD life-cycle without breaking away from red-green-refactor-repeat (rgrr), and that managing the container would have to be done in a sort of parallel way, if that makes sense.
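
+ +

To illustrate the distinction I'm drawing (all names here are illustrative, not from any real container or framework): with plain constructor injection, a unit test never touches a container at all - it builds the system under test directly with fakes, and only the production composition root would ever resolve anything from a container.

```java
// Sketch (illustrative names): unit tests bypass the container entirely
// and wire fakes by hand via plain constructor injection.
interface Mailer {
    String send(String msg);
}

// A hand-written fake, as a unit test would use.
class FakeMailer implements Mailer {
    public String send(String msg) { return "faked:" + msg; }
}

// The system under test: depends only on the Mailer abstraction.
class Notifier {
    private final Mailer mailer;

    Notifier(Mailer mailer) { this.mailer = mailer; }

    String notifyUser(String msg) { return mailer.send(msg); }
}

public class TddWithoutContainer {
    public static void main(String[] args) {
        // What a unit test does: no container involved at any point.
        Notifier sut = new Notifier(new FakeMailer());
        System.out.println(sut.notifyUser("hi")); // faked:hi
    }
}
```

Under that assumption, the production container only ever appears in the composition root - which is exactly the code that the rgrr loop never seems to drive out.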

+",272476,,272476,,43143.26319,43143.26319,Is there a need for an IoC container in TDD unit tests?,,3,8,2,,,CC BY-SA 3.0,, +365736,1,,,2/11/2018 4:00,,1,53,"

Project 1: a Java/Maven project

+ +

Project 2: a Scala/sbt project

+ +

Thing: generally, an immutable object instantiated from a 3rd-party Java library. For example, ThingBuilder.foo(""bar"").build()

+ +

Question: How can I define Thing in only one place, and have it used both in Project 1 and 2? (So the parameters in ThingBuilder are guaranteed to be the same in both projects.)

+",23880,,23880,,43142.17569,43179.04514,Java/Maven and Scala/sbt projects share immutable thing,,1,4,,,,CC BY-SA 3.0,, +365737,1,365740,,2/11/2018 7:18,,0,613,"

Is a 'call site' some auto-generated code produced by the compiler? I come across this term frequently, and it sounds like the place where a method is called is simply referred to as the 'call site' - which literally sounds fine, but I believe it has some deeper intricacies. And if a 'call site' is compiler-generated code, under what circumstances is it required?

+ +

Though I am talking in the context of C#/.NET, this looks to be standard terminology. I would appreciate any clear/precise explanation with some examples.

+",293148,,293148,,43142.30833,43142.4625,Is 'call site' compiler generated auto code?,,2,3,,,,CC BY-SA 3.0,, +365755,1,,,2/11/2018 17:18,,4,4551,"

I have read this article, but I still really do not know what a programming model is. I saw it being used in the following context:

+ +
+

Any given instruction set can be implemented in a variety of ways. All + ways of implementing a particular instruction set provide the same + programming model, and all implementations of that instruction set are + able to run the same executables.

+
+ +

Could someone please try to help me understand what a programming model is?

+",295868,,,,,43142.83958,What is a programming model?,,1,19,3,,,CC BY-SA 3.0,, +365758,1,,,2/11/2018 19:55,,1,209,"

Let’s say we have the following

+ +
abstract class HungryAnimal {
+
+    private MovementStrategy movementStrategy;
+
+    public HungryAnimal(MovementStrategy movementStrategy) {
+        this.movementStrategy = movementStrategy;
+    }
+
+    public void searchFood() {
+        while (/* no food found */) {
+            ...
+            movementStrategy.move();
+            ...
+        }
+    }
+}
+
+ +

and the movement strategies

+ +
interface MovementStrategy {
+    void move();
+}
+
+class WalkStrategy implements MovementStrategy {
+    public void move() {
+        // Walking
+    }
+}
+
+class SwimStrategy implements MovementStrategy {
+    public void move() {
+        // Swimming
+    }
+}
+
+ +

Since each hungry animal has one specific way to move, we directly instantiate the corresponding movement strategies in the concrete animals’ constructors:

+ +
class Wolf extends HungryAnimal {
+    public Wolf() {
+        super(new WalkStrategy());
+    }
+}
+
+class Shark extends HungryAnimal {
+    public Shark() {
+        super(new SwimStrategy());
+    }
+}
+
+ +

While in this concrete example this solution seems to be okay, it might become a problem for testing, because we cannot easily mock our movement strategies.

+ +

If instead our concrete animal classes would look like this

+ +
class Wolf extends HungryAnimal {
+    public Wolf(MovementStrategy movementStrategy) {
+        super(movementStrategy);
+    }
+}
+
+class Shark extends HungryAnimal {
+    public Shark(MovementStrategy movementStrategy) {
+        super(movementStrategy);
+    }
+}
+
+ +

we could use constructor injection to inject the proper dependencies while maintaining testability.

+ +

QUESTION

+ +

Since nobody stops the injector from injecting the SwimStrategy into the wolf or the WalkStrategy into the shark, who is actually responsible for making sure that the “right” dependencies are injected? Is this some kind of contract you would document in a comment at the top of the class definition? Or would it be right not to have MovementStrategy as a dependency for Wolf and Shark, but instead turn SwimStrategy and WalkStrategy into interfaces that extend MovementStrategy and use these as dependencies for the concrete animals?

+ +

I’m struggling to find a reasonable argument regarding how strict one should be when defining the types of dependencies. I really appreciate some insights about best practices or helpful metrics to narrow down this question!
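
+ +

One option I've been considering is to keep the constructor injectable (for mocks) but make a factory the single authority on valid pairings - something like this sketch (the names are illustrative, and I've given move() a String return just so the wiring is observable):

```java
// Sketch (illustrative names): a factory is the single place that knows
// which strategy is valid for which animal.
interface MovementStrategy {
    String move();
}

class WalkStrategy implements MovementStrategy {
    public String move() { return "walking"; }
}

class SwimStrategy implements MovementStrategy {
    public String move() { return "swimming"; }
}

class Wolf {
    private final MovementStrategy movementStrategy;

    // Still injectable, so tests can pass a mock directly.
    Wolf(MovementStrategy movementStrategy) { this.movementStrategy = movementStrategy; }

    String searchFood() { return movementStrategy.move(); }
}

public class AnimalFactory {
    // Production code goes through here, so the "right" pairing is enforced
    // in exactly one place instead of by convention or comments.
    public static Wolf createWolf() { return new Wolf(new WalkStrategy()); }

    public static void main(String[] args) {
        System.out.println(AnimalFactory.createWolf().searchFood()); // walking
    }
}
```

That way the "contract" lives in one piece of code rather than in a comment, while tests can still call the constructor directly with a mock - but I don't know if this is the accepted practice.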

+",282310,,282310,,43142.91806,43144.59306,How to make sure to inject valid dependencies?,,3,3,,,,CC BY-SA 3.0,, +365762,1,365764,,2/11/2018 21:05,,2,3046,"

I imagine it is simply, as it is named, the existence of data throughout layers of a software application. I ask because I have not been able to find a clear definition that states something of the sort: Data Persistence is the existence of data throughout layers of a software application. If that exists please share the link.

+ +

I did find this link but it seems to be, at least partially, incorrect. I'm assuming that data persistence in software allows change and is accessed frequently; I'm just assuming though.

+ +

If I am correct, are there other qualities to this ""data persistence"" that I am leaving out?

+ +

I'm sure there are best practices and anti-patterns to data persistence. I just want to know the definition of data persistence.

+",270184,,270184,,43312.875,43312.875,What is data persistence in the context of software engineering?,,2,17,,,,CC BY-SA 4.0,, +365763,1,365767,,2/11/2018 22:04,,0,139,"

The definition of production seems to contradict what web developers consider an application in production to be. Why isn't the term 'in release' or 'published' used instead? I have been in professional web development for ~4 years, and we've always used the terminology (development, QA, and production) when referring to the different states of an application. It seems it would be more correct to actually refer to development as 'production' and production as 'release' or 'published'.

+",270184,,36726,,43142.94514,43142.94514,"Why are web applications said to be in ""production"" when in reality they are in ""release"" or ""published""?",,2,2,,43143.12847,,CC BY-SA 3.0,, +365768,1,365769,,2/11/2018 23:00,,3,1023,"

I know that the Arithmetic Logic Unit (ALU) of a processor performs arithmetic (and bitwise) operations and the result is stored as the ALU's output - but what component, device or software actually accesses/uses the output of an ALU? And is it valid to say that the ALU returns the result of an operation, or would that be wrong?

+",295868,,1204,,43143.00694,43143.00694,What receives the output of the ALU?,,1,1,,,,CC BY-SA 3.0,, +365772,1,365777,,2/12/2018 3:04,,37,10330,"

For example, consider I have a class for other classes to extend:

+ +
public class LoginPage {
+    public String userId;
+    public String session;
+    public boolean checkSessionValid() {
+    }
+}
+
+ +

and some subclasses:

+ +
public class HomePage extends LoginPage {
+
+}
+
+public class EditInfoPage extends LoginPage {
+
+}
+
+ +

In fact, the subclasses have no methods to override, and I would not access HomePage in a generic way, i.e., I would not do something like:

+ +
for (int i = 0; i < loginPages.length; i++) {
+    loginPages[i].doSomething();
+}
+
+ +

I just want to reuse the login page. But according to https://stackoverflow.com/a/53354, I should prefer composition here because I don't need the interface LoginPage, so I don't use inheritance here:

+ +
public class HomePage {
+    public LoginPage loginPage;
+}
+
+public class EditInfoPage {
+    public LoginPage loginPage;
+}
+
+ +

but the problem comes here, at the new version, the code:

+ +
public LoginPage loginPage;
+
+ +

duplicates when a new class is added. And if LoginPage needs a setter and getter, more code needs to be copied:

+ +
public LoginPage loginPage;
+
+private LoginPage getLoginPage() {
+    return this.loginPage;
+}
+private void setLoginPage(LoginPage loginPage) {
+    this.loginPage = loginPage;
+}
+
+ +

So my question is: is ""composition over inheritance"" violating the ""DRY principle""?

+",248528,,202103,,43145.20764,43158.4,"Is ""composition over inheritance"" violating ""dry principle""?",,3,8,10,,,CC BY-SA 3.0,, +365788,1,,,2/12/2018 11:47,,0,183,"

I am designing a role-playing game. In such games, a character has a level. I found out that the level itself is an object. It has values like experience and the knowledge of how much experience is needed for a level-up, and methods like improve() or gainExperience(). So the character should have a level object. A game object could have a fight method. When the character wins a fight, he gains experience. The game object has access to the player object. But the problem now is: all fields should be private. So I would have to write a getter just to reach the level object and increase the experience, which looks like unnecessary code to me. Is it accepted to make the level field of the character object public and access it like this:

+ +
player.level.gainExperience(value);
+
+ +

or is there a better way to access methods of fields that are also objects?

+ +

Edit +My question is different from this one. +While the quoted question is very general, mine refers to a concrete example, which limits the possible answers and makes an answer simpler and clearer.

+",287426,,287426,,43143.53819,43143.56667,Java - Access to methods of objects fields,,2,3,,,,CC BY-SA 3.0,, +365791,1,365803,,2/12/2018 13:04,,0,76,"

So an addressing mode is the way that an operand is specified (from what I understand) and there are different modes provides by different architectures, such as Immediate mode, register indirect mode, etc.

+ +

In that context, I frequently encounter a specification of an offset, such as in direct mode, where you specify the operand's offset directly in the instruction, like ADD AL,[0301] - but an offset from what? There must be some base address or something that the offset is based on. Is there some starting place, like the address of an instruction?

+",295868,,,,,43143.90139,Regarding addressing modes - where does an offset start?,,1,2,,,,CC BY-SA 3.0,, +365799,1,365801,,2/12/2018 14:47,,2,161,"

I am facing an issue where I have a data stream that sends unordered data. I'm trying to find a way to receive the data in random order, but send it in order.

+ +

As an example, I'll receive object4 and then object3 and then object1. I'll need my system to store object4 and object3 when they arrive and immediately send object1. In the future, when object2 arrives, I'll need the system to immediately send object2 and then recheck the array to send object3 and object4 and so on.

+ +

Some more info:

+ +
    +
  • The data is sure to be received fully, so there's no missing data.
  • +
  • The data is numbered (e.g: object1, object20).
  • +
+ +

My current solution is:

+ +
    +
  • When receiving a new object... + +
      +
    • if the new object is in order, send it immediately.
    • +
    • If the new object is not in order + +
        +
      • Store it in a list
      • +
      • Check the list if it contains the next object to send
      • +
    • +
  • +
  • After sending an object... + +
      +
    • Check the list if it contains the next object to send
    • +
  • +
+ +

So this system is rechecking the list for items to send on two events:

+ +
    +
  1. When a new not-in-order object is added.
  2. +
  3. After a successful send
  4. +
+ +

As for sending

+ +

After a successful send, the sent object will be removed from the list

+ +

As for concurrency

+ +

For the sake of argument, assume it's a producer-consumer relationship where the list is concurrently accessed by both parties:

+ +
    +
  • The producer thread is pushing new data to the list.
  • +
  • The consumer thread is checking the list, sending and deleting the sent data.
  • +
+ +
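
To make the mechanism above concrete, here is a minimal single-threaded sketch of what I have in mind (the class and method names are made up, and the real system would still need the concurrency handling described above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Buffers out-of-order arrivals and releases them in sequence.
class Resequencer {
    private final PriorityQueue<Integer> pending = new PriorityQueue<>();
    private final List<Integer> sent = new ArrayList<>();
    private int nextToSend = 1; // objects are numbered 1, 2, 3, ...

    void receive(int seq) {
        if (seq != nextToSend) {
            pending.add(seq); // not in order: store it for later
            return;
        }
        send(seq); // in order: send immediately
        // after a successful send, re-check the buffer for the next object
        while (!pending.isEmpty() && pending.peek() == nextToSend) {
            send(pending.poll());
        }
    }

    private void send(int seq) {
        sent.add(seq); // stands in for the real send (and delete-after-send)
        nextToSend++;
    }

    List<Integer> sentSoFar() {
        return sent;
    }
}
```

So receiving 4, 3 and then 1 sends only object1, and receiving 2 afterwards flushes 2, 3 and 4 in order. A PriorityQueue keeps the next candidate at the head, which avoids rescanning a plain list on every event.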

My question is: is this a good mechanism? Is there a better data structure to help me with this issue?

+",283720,,283720,,43143.62222,43144.04583,"Receive data in random order, send in order",,2,3,3,,,CC BY-SA 3.0,, +365805,1,,,2/12/2018 16:27,,1,1424,"

I'm writing this in node.js

+ +

I have some data that needs validating before anything can be done with it. The data is validated in two different ways. I can use JSONSchema to validate the structure of the data, e.g. the data must contain a name property of minLength 1 and maxLength 100. I then use another validator for business rules, e.g. if (x and !y) { throw error }.

+ +

With both the structural validator and the business logic validator, I'm looking at somewhere around 40-odd rules to validate against. My current thinking was to use the Visitor design pattern (as shown in this question Design Pattern for Data Validation); however, I might be overthinking this. I was wondering if having 2 validation classes (structural and logical) with over 20 rule validations (in each class) and various other helper functions is too large... part of me is wondering if each rule (and its helpers) should really be broken out into its own class... and then use the pipeline pattern? Or something else entirely.

+",44746,,,,,43143.68542,design pattern/oop for large validation rule set,,0,3,1,,,CC BY-SA 3.0,, +365806,1,,,2/12/2018 17:02,,0,404,"

EDIT

+ +

I am working on a project to update a legacy ETL infrastructure that supports a number of clients, each with a slightly different setup.

+ +

Constraints that cannot be changed:

+ +
    +
  1. Source data can come from flat files over sftp and then to S3, from MSSQL databases, or events published through webhooks.
  2. +
  3. Source data ranges in size from ~60GB to ~1TB in size. Though we are grabbing diffs that are usually >5GB in total size. And the final databases for our clients are sized roughly the same.
  4. +
  5. The current destination for all of our data is two-fold: a MySQL database (one for each client, may or may not be on a shared server) and second, a filtered set of data stored in a sql file on S3.
  6. +
+ +

The existing system is designed to only load data from files, so even when we have the data in an MSSQL RDS instance, we first extract the data to a series of files to be processed. Data is then loaded in through the MySQL LOAD DATA INFILE command, transforms are performed to get the raw data from the source system linked to the data in our system, and finally the data is moved from staging tables to the production tables. We are not doing any aggregations, so all of the data stays very close to its raw form.

+ +

One large problem we have with these existing systems is that as the data grows we are forced to increase the amount of RAM, specifically because query performance degrades in MySQL once the data no longer fits in memory.

+ +

In researching possible solutions, I've come to believe that parallelizing the ETL tasks using micro-batches would be a better approach as it should allow us to process increasingly large datasets without a linear increase in cost and/or processing time.

+ +

/EDIT

+ +

Our devops team is pretty invested in the AWS ecosystem so I've been concentrating on using AWS Lambda for parallel processing. I've come across some posts talking about using the MapReduce technique with AWS Lambda to perform the data processing.

+ +

What I'm wondering is whether Lambda with a MapReduce-like solution is really the correct approach when I'm just moving, filtering, and transforming data and not building any sort of aggregate indexes?

+",53,,53,,43143.84375,43143.84375,"Is MapReduce a correct framework for Extract, Transform, Load of data?",,0,6,,,,CC BY-SA 3.0,, +365808,1,365810,,2/12/2018 18:16,,1,469,"

I have a neat domain model that makes it easy to communicate with an external web service. New requirements have made the external web service's interface messy and now I have to gather data from multiple places in my model. Is a DTO appropriate to hold all the necessary data?

+ +
+ +

I have an interface that deals with searching for some domain models. The concrete implementation behind it goes to an external web service for results.

+ +
interface IConsentSearch    // only one method
+ +Search(Customer) : IList<Consent>
+
+class Customer
+ +various properties
+
+ +

My domain model is, of course, much more complex, but this is the gist of it.

+ +

Now, I got some new requirements that would allow us to search by order id in addition to the Customer's properties. +In my internal domain model, orderId has its own little place somewhere (not part of Customer). The problem is that the external web service interface has a completely different idea of where orderId is.

+ +

Note: it is not an either/or situation: everything Customer has to offer is mandatory for the search to take place - order id is just an additional criterion.

+ +

I am going to have to change Search() signature and thought of creating a new DTO in my application service layer called ConsentSearchDTO. New DTO would serve one purpose: to combine data and models that are now scattered throughout my domain model and necessary for querying the external web service:

+ +
interface IConsentSearch    // still only one method
+ +Search(ConsentSearchDTO) : IList<Consent>
+
+class Customer      // still the same
+ +various properties
+
+class ConsentSearchDTO
+ +Customer : Customer
+ +OrderId : int
+
+ +

My questions:

+ +
    +
  1. Does the above ConsentSearchDTO make sense?
  2. +
  3. Is it ok to create a DTO that references domain model classes (Customer) or should I create a whole new structure that mimics the original domain model? This seems like a lot of work and doesn't bring me any additional value.
  4. +
+",69757,,1204,,43143.82708,43143.82708,DTO to hold disparate data from domain model?,,1,4,,,,CC BY-SA 3.0,, +365809,1,365827,,2/12/2018 19:07,,8,1495,"

We have three environments: Dev, UAT and Prod. We use TFS to schedule releases of our master branch into Dev for our internal verification, then UAT for business verification, and of course finally to Prod once approved.

+ +

We've recently adopted a new lightweight Git branching strategy as follows:

+ +

master is always prod-ready. At any point master should be able to be deployed to production.

+ +

All new development is done in a separate feature (topic) branch as follows:

+ +
    +
  1. Create a new feature branch off master, call it FeatureA
  2. +
  3. Develop FeatureA until completion
  4. +
  5. Once FeatureA is finished, release it to Dev, and then UAT
  6. +
7. Once the business signs off on FeatureA in UAT, it's considered prod-ready. Merge FeatureA into master, then deploy the new master branch to Dev then UAT. Along the way, ""smoke test"" the branch in UAT to ensure the resulting merge into master didn't cause any unforeseen side-effects. Once smoke-tested, release to Prod.
  8. +
+ +

The problem we're coming across right now is that we may have multiple features being developed in parallel, all of which could potentially need to be deployed to the test environment for verification at the same time. The approach we've taken to solving this problem is:

+ +

If FeatureA and FeatureB need to be in UAT at the same time, then:

+ +
    +
  1. Create a new branch, FeatureAandB, which will encompass both features
  2. +
  3. Merge FeatureA into FeatureAandB
  4. +
  5. Merge FeatureB into FeatureAandB
  6. +
  7. Release FeatureAandB to Dev, then UAT
  8. +
+ +

The downside to this is that it's unlikely both FeatureA and FeatureB will be UAT verified at the same time. If FeatureA is verified and FeatureB is not, we need to release FeatureA to prod without FeatureB. What we've discussed in this scenario is to:

+ +
    +
  1. Merge FeatureA (not the joint branch, but just FeatureA) into master
  2. +
  3. Release master to Dev, then UAT for a quick smoke-test, and finally Prod
  4. +
  5. Once in prod, re-release just FeatureB to Dev then UAT so testing can continue.
  6. +
+ +

The downside to this is that it directly impacts any testing for FeatureB, and potentially unwinds any work the testers have accomplished with FeatureB.

+ +

How do you manage multiple features living simultaneously in each environment and being released potentially independently of one another? We can mitigate the issue a little if we have multiple environments, or turn around UAT testing much more quickly, but at the end of the day the same problem can exist.

+ +

I'm not opposed to hearing alternative branching strategies, either.

+",54765,,,,,43144.84583,Release strategy for multiple Git feature branches being tested simultaneously,,2,2,1,,,CC BY-SA 3.0,, +365811,1,,,2/12/2018 19:54,,2,396,"

I'm building a REST service in java that does basic CRUD operations on Customers. The easy way would be for me to create one Customer model, and annotate it with JPA annotations so my persistence layer knows how to map it to the DB and add some jackson annotations so that my web layer knows how to deserialize it from http requests and serialize it into responses.

+ +

If I'm doing things the correct DDD way, should I have 3 versions of Customer?

+ +
    +
  1. A Customer Domain object not polluted by annotations
  2. +
  3. A CustomerDTO for the web layer with the jackson annotations
  4. +
  5. A CustomerPersistence object with the JPA annotations
  6. +
+ +

This seems like a lot of versions of the same thing - does anyone do things this way?

+",295991,,,,,43144.50417,Should a REST CRUD Service that accesses a DB have 3 versions of its model,,2,3,,,,CC BY-SA 3.0,, +365818,1,365823,,2/12/2018 21:47,,0,426,"

Say I have an MVC .NET Core website where 100% of the controllers/methods are behind [Authorize] attributes (complete with policies and all). +Would it be taboo to carve out a set of unauthorized/anonymous controllers/methods to handle user requests for access to the site? It seems perfectly reasonable to me, and outside of developer error, I can't see it introducing a new security risk.

+",291003,,,,,43143.97361,Anonymous Controller/Action within Authorized Site,,1,0,,,,CC BY-SA 3.0,, +365828,1,365842,,2/13/2018 4:56,,0,1444,"

In a multi-tenant deployment of a Web application, how can the ASP.NET Core Web API services be designed to work with different authorization services? The Web applications use OAuth and JWT Bearer authentication and pass an access token to the Web API services.

+ +

+ +

One approach that I could think of is to get the Authority from the Request to identify the identity source and redirect to the respective authorization service.

+ +

==>

+ +

To elaborate further, When the Web Applications are deployed the Authorization service details (Audience, Authority and others) are shared with the API service as shown in the diagram below. This should not be a problem since we deploy the Web application and integrate it with the client's identity management system.

+ +

+ +

In the requests to the API service, the Web app must include the app_id along with the access_token in the request headers. This directs the API service to validate the token with the corresponding authority.

+ +

In order to make this work, we have to implement the complete JWT bearer validation middleware. Is this a proper and achievable approach? Are there any other solutions used in these situations?

+",284405,,284405,,43144.69028,43144.69028,Integrate Web API Services with multiple authentication services,,2,2,1,,,CC BY-SA 3.0,, +365829,1,365852,,2/13/2018 5:56,,45,12041,"

OK, so the title is a little clickbaity but seriously I've been on a tell, don't ask kick for a while. I like how it encourages methods to be used as messages in true object-oriented fashion. But this has a nagging problem that has been rattling about in my head.

+

I have come to suspect that well-written code can follow OO principles and functional principles at the same time. I'm trying to reconcile these ideas and the big sticking point that I've landed on is return.

+

A pure function has two qualities:

+
    +
  1. Calling it repeatedly with the same inputs always gives the same result. This implies that it is immutable. Its state is set only once.

    +
  2. +
  3. It produces no side effects. The only change caused by calling it is producing the result.

    +
  4. +
+
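
For reference, a tiny pair of examples - one function with both qualities and one without (hypothetical code, just to pin down the terms):

```java
// Hypothetical examples, only to illustrate the two qualities above.
class Purity {
    // Pure: same inputs always give the same result, and calling it
    // changes nothing outside the function.
    static int add(int a, int b) {
        return a + b;
    }

    private static int counter = 0;

    // Impure: the result depends on mutable state outside the arguments,
    // and every call changes that state (a side effect).
    static int next() {
        return ++counter;
    }
}
```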

So, how does one go about being purely functional if you've sworn off using return as your way of communicating results?

+

The tell, don't ask idea works by using what some would consider a side effect. When I deal with an object I don't ask it about its internal state. I tell it what I need to be done and it uses its internal state to figure out what to do with what I've told it to do. Once I tell it, I don't ask what it did. I just expect it to have done something about what it was told to do.

+

I think of Tell, Don't Ask as more than just a different name for encapsulation. When I use return I have no idea what called me. I can't speak its protocol; I have to force it to deal with my protocol, which in many cases gets expressed as the internal state. Even if what is exposed isn't exactly state, it's usually just some calculation performed on state and input args. Having an interface to respond through affords the chance to massage the results into something more meaningful than internal state or calculations. That is message passing. See this example.

+

Way back in the day, when disk drives actually had disks in them and a thumb drive was what you did in the car when the wheel was too cold to touch with your fingers, I was taught how annoying people consider functions that have out parameters. void swap(int *first, int *second) seemed so handy but we were encouraged to write functions that returned the results. So I took this to heart on faith and started following it.

+

But now I see people building architectures where objects let how they were constructed control where they send their results. Here's an example implementation. Injecting the output port object seems a bit like the out parameter idea all over again. But that's how tell-don't-ask objects tell other objects what they've done.
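
As a concrete sketch of that output-port idea (all names here are made up for illustration):

```java
// The protocol the collaborator speaks; the result is pushed into it.
interface ResultPort {
    void accept(int result);
}

class Adder {
    private final ResultPort out; // injected at construction time

    Adder(ResultPort out) {
        this.out = out;
    }

    // Tell, don't ask: no return value; how the object was constructed
    // controls where it sends its result.
    void add(int a, int b) {
        out.accept(a + b);
    }
}
```

The caller never asks the Adder for its state; it handed over, up front, the thing that will receive the outcome.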

+

When I first learned about side effects I thought of it like the output parameter. We were being told not to surprise people by having some of the work happen in a surprising way, that is, by not following the return result convention. Now sure, I know there's a pile of parallel asynchronous threading issues that side effects muck about with but return is really just a convention that has you leave the result pushed on the stack so whatever called you can pop it off later. That's all it really is.

+

What I'm really trying to ask:

+

Is return the only way to avoid all that side-effect misery and get thread safety without locks, etc.? Or can I follow tell, don't ask in a purely functional way?

+",131624,,131624,,44019.18542,44019.18542,Return considered harmful? Can code be functional without it?,,9,26,15,,,CC BY-SA 4.0,, +365837,1,,,2/13/2018 8:18,,2,99,"

Let's say my app depends on lib A that depends on B that depends on C (we are owners of them all). Now, you bump the version of C to 1.0.1 (a bug fix).

+ +

How would you propagate the change of C to the app? Would you bump versions of B and A, too?

+ +

What if there are frequent changes to C needed across the teams? Do you release snapshots on every change? But then again, someone needs to update everything that depends on C.

+ +

In our environment, we have more components in the game, and it is getting hard just to update one component that is far in the dependency chain. For that reason, some propose to depend only on master branch, so everyone is building dependencies locally, which I do not like.

+",79048,,,,,43204.82569,Efficient dependency management between components,,1,3,,,,CC BY-SA 3.0,, +365838,1,365867,,2/13/2018 8:54,,1,1591,"

We're completely remodelling a system at the company for which I am currently working. We're applying DDD and for the very first time I have actually got someone on my team who has some prior experience with DDD as well (yay!)

+ +

This new system is extremely user-centric, i.e. pretty much every operation within the system comes from the end-user. Some of these operations may be:

+ +
    +
  1. Create an Account. A non-paying user is limited to one active account, pro-members can have as many accounts as they want.
  2. +
  3. Add a Transaction to an Account, but only if a User has access to said Account.
  4. +
  5. Reorder Divisions in an Account, but only if a User has access to said Account.
  6. +
+ +

There are a lot of these rules, all of which point to a single user, and this is where my colleague and I got into an argument.

+ +

My opinion is that aggregates should be as small as necessary and as cohesive as possible; this makes them easier to test and understand. E.g. if I model the first rule, I would create a simple aggregate, which could look like this:

+ +
class UserWithActiveAccounts {
+
+    private UserId id;
+    private NonNegativeNumber countOfActiveAccounts;
+    private MembershipType membershipType;
+
+    public Account createNewAccount(AccountId accountId, NonEmptyString accountName) {
+        if (
+            MembershipType.FREE.equals(membershipType) &&
+            countOfActiveAccounts.GT(NonNegativeNumber.fromValue(0))
+        ) {
+            throw AccountLimitExceeded.forFreeUserTooManyAccounts(id);
+        }
+
+        return Account.withOwner(accountId, accountName, id);
+    }
+}
+
+ +

This aggregate contains only count of currently active accounts and user's membership type, because that's what the business rule when creating a new account cares about.

+ +

Then, if I had to model the other 2 use cases, I would need another aggregate, such as UserWithAccessibleAccounts, which would internally contain a List of AccountId values that I could iterate over when adding a Transaction with a specific AccountId, to check whether the operation is allowed or not.

+ +
class UserWithAccessibleAccounts {
+
+    private UserId id;
+    private List<AccountId> accessibleActiveAccounts;
+
+    public Transaction addTransaction(
+        TransactionId transactionId,
+        AccountId toAccountId,
+        Number value
+    ) {
+        if (!hasAccessToAccount(toAccountId)) {
+            throw CannotAddTransaction.noAccessToAccount(id, toAccountId);
+        }
+
+        return Transaction.byUserInAccount(transactionId, value, id, toAccountId);
+    }
+
+    private bool hasAccessToAccount(AccountId accountId) {
+        return accessibleActiveAccounts.contains(accountId);
+    }
+}
+
+ +

Obviously, with my approach you will have a lot of small aggregates each responsible for a tiny piece of business rule.

+ +

What my colleague would like is to have one large class, such as User, which would contain all operations - which I am against. I believe it would lead to a messy god class where it would be much more difficult to track bugs. Also, as this class grows, each time you want to run any of the operations on the user, you would need to load a lot of properties which are unrelated to the operation that the user is currently trying to do (such as loading the List of AccountId values when a simple count of them is all that is necessary, in this very simplified example).

+ +

We're basically trying to solve the problem of:

+ +
    +
  • either having to come up with a lot of class names for all those small aggregates,
  • +
  • deal with scalability issue and a humongous class due to pulling all the data for each operation, no matter how small.
  • +
+ +

I am inclined to the small aggregates approach, but perhaps I am not looking at it from the correct angle and there's something about the approach my colleague is suggesting I am simply not seeing.

+",193669,,,,,43144.81944,"Applying DDD, is having multiple aggregates representing the same concept from a different view a good idea?",,3,4,1,,,CC BY-SA 3.0,, +365839,1,365848,,2/13/2018 8:58,,0,79,"

Using PHP, I created a chatbox, and I used an unorthodox way of storing chat messages in MySQL.

+ +

The tbl_chat_messages has the following columns:

+ +
id | sender_id | receiver_id | message | date
+
+ +

The actual messages are json_encoded and then stored in the message column, which has the datatype Longtext. They are structured like this:

+ +
{
+    ""sender_id"":""3"",
+    ""message"":""This is a test message"",
+    ""date"":""2017-09-17 04:47:40""
+},
+{
+    ""sender_id"":""11"",
+    ""message"":""This is another test message"",
+    ""date"":""2017-09-17 04:47:42""
+}
+
+ +
+

Now, the ""conventional"" way to store chat messages in a MySQL database is to store 1 message per row in the MySQL table.

+
+ +

But in my case, I am storing 1 conversation in 1 row.

+ +

What led me into using this technique is the idea that, if you are only looking up and retrieving 1 row out of 1,000,000 rows in the MySQL database table, it would be faster than looking up and retrieving 10 rows out of 1,000,000.

+ +

And it also led me to believe that this reduces the load of the request and response made to and by the server, because it is only retrieving 1 row from the table.

+ +

What I did not realize back then was that the messages from a single conversation may grow large - up to, let's say, at least 2 gigabytes of data. Meaning that the ""messages"" column may contain 2GB of text data.

+ +

My questions are:

+ +
    +
  1. Was I correct to think that the technique that I used reduces the load of the requests and response for the server?

  2. +
3. Does using the ""conventional"" way of storing chat messages give more advantages than disadvantages compared to the way that I did it?

  4. +
+ +

Additional Notes:

+ +

I do not have any problems in using any of the two, right now, I just can't decide which one is better.

+ +

I also have no problem with scalability, as I find both ways easy and fun to develop even in the long run.

+",286589,,200203,,43144.52431,43144.52431,Server load/capacity based on data structure,,1,0,,,,CC BY-SA 3.0,, +365840,1,365851,,2/13/2018 9:29,,0,95,"

I have recently come across a design question and will explain it using an imaginary scenario:

+ +

Assume you need to retrieve the days on which there are no bookings within a given date range. You could filter the results and manually identify which dates are not included, but you'd be getting the booking information unnecessarily if all you're after is a list of dates without bookings.

+ +
/bookings?startDate=x&endDate=y
+
+ +

Returns

+ +
{
+  ""id"": 1,
+  ""title"": ""mybooking"",
+  ""bookingdate"": ""2018-01-15T16:18:44.258843Z""
+}
+
+ +

Would such a scenario require a new resource as below?

+ +
/freedays
+
+ +

Returns:

+ +
[
+  ""2018-01-15T16:18:44.258843Z"",
+  ""2019-01-15T16:18:44.258843Z""
+]
+
+ +

As the response schema is different, compared to a booking response, I think a filter on the bookings resource would not be applicable for this scenario.
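
For concreteness, the computation behind such a resource would be something like this (a sketch only - the FreeDays class and its method name are made up here):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class FreeDays {
    // Every date in [start, end] that has no booking on it.
    static List<LocalDate> between(LocalDate start, LocalDate end,
                                   Set<LocalDate> booked) {
        List<LocalDate> free = new ArrayList<>();
        for (LocalDate d = start; !d.isAfter(end); d = d.plusDays(1)) {
            if (!booked.contains(d)) {
                free.add(d);
            }
        }
        return free;
    }
}
```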

+ +

What would be a good rest design for the above use case?

+ +

Thanks

+",296041,,200203,,43144.46111,43144.48819,REST API Design for getting days without resource,,1,0,,,,CC BY-SA 3.0,, +365843,1,365948,,2/13/2018 10:03,,7,1617,"

I have been working for quite a lot of time on a research project at University focused on Access Control. More specifically, I am studying how to protect unauthorized access to personal data in a distributed system and in general in the Internet.

+ +

In this context, I stumbled on the XACML (wiki, official spec) specification, which seemed quite interesting. After some time spent digging into it, though, it seemed more and more that no company would actually spend a lot of effort (time, money) in realizing the described architeture (to my understanding, it needs at least three different entities to store policies, evaluate them and enforce the decision).

+ +

I am still studying, so I don't know a lot about common industrial procedures: is something like XACML really implemented for data management and protection? If not, is there any alternative (possibly effective) technology which is employed?

+ +

It seems to me that most companies do not care so much about personal data protection (mostly because of the profit and insight gained from data analytics and ads), so I doubt that personal data protection is thoroughly employed.

+",250766,,,,,43147.92639,Is XACML actually used and implemented?,,2,2,1,,,CC BY-SA 3.0,, +365846,1,365847,,2/13/2018 10:34,,4,99,"

In the company I work for, I keep seeing IManager interfaces being converted into their real types and lots of ""instanceof / TypeOf"" if statement checks.

+ +

For example:

+ +

IManager manager // passed to method as a parameter

+ +
if (manager.GetType() == typeof(CustomerManager)) {
+    var customerManager = (CustomerManager) manager;
+    customers = customerManager.GetCustomers();
+    groupId = customerManager.GetCustomerGroupID(); 
+    // etc...
+}
+
+ +

I've been reading about the Liskov Substitution Principle, and it states that ""Subtypes must be substitutable for their base type"". This code seems to violate that.

+ +

The sub-types have their own very specific functions, and I'm not sure how to solve this. This is a problem I see happen a lot when I am developing myself. I find it hard to avoid, and I think I need to change how I treat interfaces.

+ +

Does anyone have any SE advice on avoiding common problems like this?

+",296055,,,,,43144.44444,Converting Interfaces to Sub types. Is this bad SE?,,1,0,,,,CC BY-SA 3.0,, +365853,1,365869,,2/13/2018 11:51,,4,227,"

I am writing a simulation engine consisting of a number of components, each of which operates on a fixed set of shared buffers.

+ +

In practice, the simulation will run entirely on the GPU. When developing a component however, it is easier to copy the buffers from the GPU, execute the component on the CPU, and write the updated buffers back, continuing on the GPU. When the component has been fully debugged, it is ported to a GPU kernel.

+ +

I want to clean up my code and write an interface to the 'main system' (the bit that maintains the buffers) that the components will use, and this raises the question of how to present the buffers.

+ +

I could write something like:

+ +
interface ISystem
+{
+    Array x;
+    Array y;
+    ComputeBuffer gpu_x;
+    ComputeBuffer gpu_y;
+    int numElements;
+}
+
+ +

But this is not very neat.

+ +

I could do something like:

+ +
interface ISystem
+{
+    IBuffer x;
+    IBuffer y;
+    int numElements;
+}
+
+ +

Where IBuffer is an interface suitable for use by code that wants CPU access to the buffer, but also that which binds the buffer to its GPU kernels.

+ +

My question is, how far should I push this abstraction?

+ +

I could make a truly polymorphic object like:

+ +
class BufferHelper<T>
+{
+    static implicit operator ComputeBuffer(BufferHelper helper);
+    static implicit operator T[](BufferHelper helper);
+}
+
+ +

Reading back to the CPU has significant performance implications. However, anyone who will write components will know this, and it will be obvious that this occurs, and where, from the profiler.

+ +

From a performance point of view, then, there is no benefit to self-documenting the code with explicit interfaces for the two uses of the buffer, but my instincts still say that something is not right with this design.

+ +

In the traditional example of polymorphism you'd, say, add two integers or concatenate two strings; operations which are analogous if not identical.

+ +

In the above case, the operations (binding a buffer vs. reading it back and modifying it) are completely different.

+ +

To put it another way, does this use of polymorphism go so far as to decrease readability and maintainability of the code by hiding things the developer should see?

+",48380,,,,,43144.58889,Is wrapping a hardware resource using polymorphism going too far?,,2,0,,,,CC BY-SA 3.0,, +365858,1,365861,,2/13/2018 12:27,,2,119,"

Recently I have been trying to avoid down-casting object types from an interface type to their concrete types, and 'if' statements that check for an object's concrete type at run-time. This has made me ask myself a question: why add public methods to concrete classes that also implement an interface?

+ +

I ended up making a distinction between 2 types of classes: a service and a data item, i.e. something you use versus something you create and push out (either to persistent storage or to another process). Usually a server-like class would have no constructor parameters unless it uses dependency injection or a factory (however, you could argue that a service being constructed inside a factory, before it starts being used like a service, matches the data item definition).

+ +

You would need to use public methods if you were creating an instance of a class for the first time and it was locally scoped to the method or class. However, when returning the instance from a method, if it implements an interface, you usually return it in its generic form (the interface). This makes the public methods not declared as part of the interface untouchable unless you cast down to the concrete type, which, by common software-engineering principles, is a bad thing. Often, if you need to do this, the concrete class should not have been hidden behind an interface to begin with.

+ +

What is the convention for dealing with concrete public methods belonging to a class that implements an interface? Are they only used when the instance of a concrete class is in the same local scope?

+",296055,,296055,,43144.52222,43144.53819,When should concrete public methods be used if implementing an interface?,,1,1,,,,CC BY-SA 3.0,, +365859,1,365866,,2/13/2018 12:37,,1,419,"

We have a bunch of appSettings in our ASP.NET web application, not to mention connection strings, that change on a per-environment basis. Not just for different environments to which the web application is deployed, but across different developer machines too. Therefore, changing these with a Web.config transform isn't good enough because these transforms aren't carried out when developers debug the application.

+ +

At the moment the settings are hard-coded into Web.config but if a developer wants to change them during development, this obviously means the change is flagged up in source control which is undesirable. These changes shouldn't be checked into source control.

+ +

How can we move these settings out so that they can be changed without affecting source control? In the past I have worked on projects where we used configSource to point to an external config file which was then excluded from source control, and a .config.example file was put in its place which had to be copied to the .config file. The trouble with this is that by default, it gives a broken build because this config file is missing until it's created. We want a working build that can be pushed to VSTS for deployment. How can we do this while keeping the per-environment settings separate from source control?

+",125671,,,,,43144.58333,Dealing with settings that will change on a per-environment basis?,,4,0,,,,CC BY-SA 3.0,, +365864,1,365872,,2/13/2018 13:12,,1,2705,"

What is an empty method, and how is it used?

+ +

I was reading a document about the Builder pattern and got curious about how it is implemented in C++. The author states that in C++ it is possible to implement empty methods as defaults in the builder pattern, in order to let clients override only the operations they're interested in. Could someone explain that sentence in more detail, with an example?
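Here is my current guess at what the author means, sketched in Python rather than C++ (all class and method names are invented for illustration):

```python
class Builder:
    # Empty default implementations: clients override only the steps
    # they are interested in, which I believe is what the book describes.
    def build_part_a(self):
        pass

    def build_part_b(self):
        pass

class OnlyABuilder(Builder):
    def __init__(self):
        self.parts = []

    def build_part_a(self):
        self.parts.append('A')
    # build_part_b is inherited and deliberately does nothing.

def construct(builder):
    # The director calls every step; unused steps are harmless no-ops.
    builder.build_part_a()
    builder.build_part_b()

b = OnlyABuilder()
construct(b)
```

Is this the intended reading, i.e. the empty bodies exist so that the director can call every step unconditionally?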

+ +

Thank you

+",296067,,296067,,43144.57083,43144.61458,what is an empty method and how are they used?,,2,1,0,,,CC BY-SA 3.0,, +365865,1,365871,,2/13/2018 13:16,,1,1483,"

I don't know if the title is reflecting the question correctly, but I can explain more than write a good title.

+ +

I have a database that has entities (employees, departments, orders, etc.), and many ways to call the endpoints of this system, where each caller requests more or less data from the same entity (e.g. the employee entity). Let's say I'm building the employees endpoints:

+ +
    +
  • An endpoint to show all employees for managing purposes.
  • +
  • An endpoint to be used in dropdowns in the UI to choose one employee (for some reason), which should only show IsActive = true employees.
  • +
  • An endpoint needed because, on one page, a user in a specific role should only be able to see his own employees, not all of them (while, on the other hand, another role is allowed to see all employees).
  • +
+ +

This kind of requirement is normal and repeated in a lot of systems, but I'm lost and need to know where to put the filters. Is it OK to use one endpoint and provide many filters for it (like one returning only { id, name } but with a filter to get either only IsActive = true employees or all of them), or should I separate the endpoints? And what exactly does that depend on?
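To make the single-endpoint option concrete, it would look roughly like this plain-Python sketch, where a web framework would map query-string parameters onto the arguments (the field names are just examples, not my real schema):

```python
employees = [
    {'id': 1, 'name': 'Ann', 'is_active': True,  'manager_id': 9},
    {'id': 2, 'name': 'Bob', 'is_active': False, 'manager_id': 9},
    {'id': 3, 'name': 'Cy',  'is_active': True,  'manager_id': 7},
]

def list_employees(active_only=False, manager_id=None, fields=None):
    rows = employees
    if active_only:
        rows = [e for e in rows if e['is_active']]
    if manager_id is not None:
        rows = [e for e in rows if e['manager_id'] == manager_id]
    if fields:  # e.g. a dropdown only needs id and name
        rows = [{k: e[k] for k in fields} for e in rows]
    return rows
```

One endpoint, with the three use cases above expressed as filter combinations.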

+ +

I know that I can use GraphQL or OData, and they would ease things for me and for consumers of my API; are they the only solutions?

+",186303,,78541,,43144.7125,43144.7125,What is the best practice for API endpoints for an entity?,,1,2,1,,,CC BY-SA 3.0,, +365882,1,365884,,2/13/2018 16:47,,1,132,"

In my project I have different types of entities.
+I get the data for these entities in text files from a 3rd party.
+I've written a class to read and parse these text files, using the strategy pattern.
+The method in this class must return different entity types so I've made the entire class generic - something like this:

+ +
public class Parser<T>
+{
+    public IStrategy<T> strategy { get;set;}
+
+    public IEnumerable<T> LoadFromFile()
+    { 
+        // implementation details, not interesting
+    }
+}
+
+ +

But the problem is that now I need a new instance of Parser for every entity I load, so I changed my initial implementation into this:

+ +
public class Parser
+{
+    public IEnumerable<T> LoadFromFile(IStrategy<T> strategy)
+    {
+        // implementation details, not interesting
+    }
+}
+
+ +

So, is this still considered an implementation of the strategy pattern?

+ +

If not, is there any way I could return IEnumerable<T> without having to specify what type T actually is anywhere but in the IStrategy<T>?
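To illustrate the second version, here is the same shape in Python (duck-typed, so the IStrategy&lt;T&gt; interface becomes implicit; all names are invented):

```python
class CsvStrategy:
    def parse(self, text):
        return [line.split(',') for line in text.splitlines()]

class Parser:
    # The strategy arrives per call, so a single Parser instance
    # can load every entity type.
    def load(self, text, strategy):
        return strategy.parse(text)

rows = Parser().load('a,b\nc,d', CsvStrategy())
```

The algorithm still varies by the object passed in, which is why I suspect it is still Strategy, just parameterised per call instead of per instance.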

+",212012,,212012,,43144.70278,43144.75417,Is this still considered an implementation of the strategy pattern?,,1,18,,,,CC BY-SA 3.0,, +365891,1,,,2/13/2018 19:31,,2,2165,"

I work on a system with the following architecture:

+ +
    +
  • There are two types of components, let’s call them CLIENT and SERVER.
  • +
  • There is only one instance of the CLIENT component and there might be multiple instances of the SERVER component.
  • +
  • CLIENT uses HTTP interfaces of those SERVERs (they are the same interfaces).
  • +
+ +

What diagram should I use to represent those concepts, and what could it look like?

+ +

I have tried with Component Diagram:

+ +

+ +

but this diagram does not say that there are multiple servers and single client.

+",296097,,296097,,43293.50556,43293.50556,How express in UML that there are multiple instances of given component type,,1,4,,,,CC BY-SA 4.0,, +365893,1,365894,,2/13/2018 20:15,,0,7070,"

I found an interesting quote in my book with which I learn Java:

+ +
+

Manche Methoden der in diesem Kapitel beschriebenen Schnittstellen sind in der Dokumentation als optional gekennzeichnet. Bei manchen Collections-Klassen führt die Nutzung solcher Methoden zu einer UnsupportedOperationException. Sie dürfen sich also nicht darauf verlassen, dass jede Collections-Klasse alle Methoden der genutzten Schnittstellen tatsächlich implementiert.

+
+ +

From ""Java Einführung"" by Michael Kofler

+ +

I will try to translate it:

+ +
+

Some methods of the interfaces described in this chapter are marked as optional in the documentation. With some collection classes, using such methods leads to an UnsupportedOperationException. So you must not rely on every collection class actually implementing all methods of the interfaces it uses.

+
+ +

I learned that not implementing a method of an abstract class or interface leads to a compilation error. So how is this possible?

+",287426,,,,,43144.84653,Not implemented Methods of Interfaces,,2,1,,43147.78194,,CC BY-SA 3.0,, +365896,1,,,2/13/2018 20:24,,3,94,"

Backstory

+

I am writing a library that accesses the kernel module uinput, which allows you to create or take control of devices in /dev/input/event# and insert events into them.

+

A simple use case would allow someone to write a script that moves the mouse cursor to the center of your display and performs a left click.

+

As such, it needs root privileges in order to function, and I am not sure what tangible risks I am taking here, or exactly what precautions I need to employ.

+
+

I asked about this in ##kernel, and one response I got was:

+

+

Questions

+
    +
  1. Is what he says true, that my library will be fine to the extent that my system is not "pwned"?
  2. +
  3. To that, how do I know my library will not become a vector for my system to be "pwned"? I am not a hacker so I would not know what exposes my library to such a thing.
  4. +
  5. I want my code to be presentable to potential employers, so even if it is true that I need not worry, what security practices should I employ just to demonstrate that I am astute and conscious of potential security risks?
  6. +
  7. Would a best practice in this case involve me creating a user with a special permission group that limits his exposure to the system?
  8. +
+

Thanks.

+",136084,,-1,,43998.41736,43144.87361,What security practices do I employ when building a library that requires low level root access to certain devices and files?,,1,6,1,,,CC BY-SA 3.0,, +365904,1,365946,,2/13/2018 23:35,,0,482,"

When designing a RESTful API, providing a spec for updating an entity will force the designer to make some decisions on how the update will behave (an update mode or type). Here are some of the modes I can think of:

+ +
    +
  1. If the update body contains null values, ignore them and only update the values with non-null values. (This is the most common behavior, and I think this is called a Delta.)
  2. +
  3. If the update body contains null values, replace existing values with these null values. (Less common in my experience; I think at one place I worked they called this an Overlay... is that typical?)
  4. +
  5. Update only values that are currently null, and non-null in the update body. (I'm not sure I've ever seen this form, but theoretically there might be a use for it.)
  6. +
+ +
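To make the three behaviours concrete, here is a small Python sketch of each on plain dictionaries, where None plays the role of a null in the request body (the function names are just my own labels, not established terms):

```python
def merge_ignore_nulls(current, patch):
    # Mode 1: nulls in the patch are ignored.
    return {**current, **{k: v for k, v in patch.items() if v is not None}}

def merge_overwrite(current, patch):
    # Mode 2: nulls in the patch overwrite existing values.
    return {**current, **patch}

def merge_fill_nulls_only(current, patch):
    # Mode 3: only values that are currently null get filled in.
    return {**current,
            **{k: v for k, v in patch.items()
               if current.get(k) is None and v is not None}}

current = {'name': 'Ann', 'phone': None}
patch = {'name': None, 'phone': '555'}
```

Same entity, same patch body, three different results depending on which mode the API spec chooses.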

Basically, my question is: Is there generally accepted terminology used in REST (or even software design in general) for these concepts?

+ +

What are different behaviors of updating typically called? (Type, Mode, something else?)

+ +

Do these different types/modes have names typically used?

+ +

Are there types/modes that I didn't list?

+",84987,,84987,,43145.05903,43145.88264,RESTful APIs: Terminology for Update Modes/Types,,3,3,1,,,CC BY-SA 3.0,, +365905,1,,,2/13/2018 23:55,,0,71,"

I'm creating a piece of software that's supposed to accept a certain form of data and do something with it. We decided to create a small dependency beforehand which will be used to make development of the software smoother. We're planning to accept support for JSON, CSV, YAML, and SQL files/URLs (consuming REST) and convert them to lists (ArrayList). For now, we have support for JSON. We've created a parent DataReader which contains two getData method contracts, one for file uploads and one for URL links - each returns the input converted to a list. This is implemented by JsonDataReader. It's working and perfectly usable. However, I have this as the code for the software:

+ +
list = JsonDataReader().getData(new File(""file""));
+
+ +

I see this as a problem. If my software has this as its code, it means it will only strictly accept JSON. What if we create the YamlDataReader, CsvDataReader, and more? Is there a better way to design the multiple converters so that they are usable during the software's development?

+ +

Process/Flow of software

+ +
    +
  1. Upload file from upload page
  2. +
  3. File can now be found in server directory
  4. +
  5. In data processing page, the file is taken in and converted into the list as the code above shows.
  6. +
+ +

Would it be better if I just created a component that accepts any data, detects the file type, converts it, and returns it as a list?

+ +

My colleague suggested this.

+ +
+

What you could do is have something that stores all of your data readers in a map, where the key is the file extension; it then gets the appropriate DataReader from the map based on the file type.

+
+ +

Is this possible? To store DataReaders as values? DataReaders are the converters.
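To check my understanding of the suggestion, a Python sketch of that map would be something like this (the reader classes are stubbed, since only the dispatch is in question):

```python
class JsonDataReader:
    def get_data(self, path):
        return ['parsed-json']   # stand-in for real JSON parsing

class CsvDataReader:
    def get_data(self, path):
        return ['parsed-csv']    # stand-in for real CSV parsing

# One entry per supported file type; adding YAML later is one line.
READERS = {'.json': JsonDataReader(), '.csv': CsvDataReader()}

def read_any(path):
    ext = path[path.rfind('.'):]
    reader = READERS.get(ext)
    if reader is None:
        raise ValueError('unsupported file type: ' + ext)
    return reader.get_data(path)
```

The caller only ever touches read_any, and each value in the map is a DataReader instance.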

+ +

My colleague also pointed out that if I go with a DataReader that accepts anything and returns a list, it would be a case of multiple behaviors and not polymorphism - indicating a bad design.

+",294998,,1204,,43145.7875,43145.78819,How to create multiple converter,,2,0,,,,CC BY-SA 3.0,, +365906,1,365908,,2/14/2018 2:57,,5,554,"

First of all, yes I'm aware of PEP8 and the alphabetical method.

+ +

I do sort my imports first by the PEP8 recommendation:

+ +
+
    +
  1. standard library imports
  2. +
  3. related third party imports
  4. +
  5. local application/library specific imports
  6. +
+
+ +

But then, within each category, I like to sort by the length of the module name. Yes, it makes the list look pretty, but I also find it arguably makes it easier to find a module than the alphabetical way does.

+ +

Also, the shorter names tend to be the more basic, common modules (like os and sys), so there's an additional organization layer in that way. You know the length of the name you're looking for, and the visual shape of the import list tells you very quickly where to look for names that long.

+ +

An example:

+ +
import os
+import sys
+import time
+import logging
+import argparse
+import subprocess
+
+import django
+import requests
+import webencodings
+
+import mymodule
+import localthing
+import supercustomstuff
+
+ +

Right now I mostly write for myself, and I don't work with a team. But I intend to in the future. So what I'm wondering is, does anyone else do this, does anyone else see the sense in it, or will everyone else think I'm nuts?
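For what it's worth, the ordering is mechanical - within each group, sort by (length, name) - so it's easy to automate:

```python
imports = ['import requests', 'import django', 'import webencodings']

def by_length(lines):
    # Sort by line length first, then alphabetically to break ties.
    return sorted(lines, key=lambda line: (len(line), line))
```

So the scheme is at least reproducible by a tool, even if it isn't the PEP8 default.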

+ +

Edit: Separate subgroups in example, as suggested by Mark Ransom, in order not to distract from the core issue about the ordering.

+",68685,,68685,,43145.175,43146.57292,I order my Python imports by name length. Does this make any sense?,,4,9,,,,CC BY-SA 3.0,, +365916,1,,,2/14/2018 6:54,,-2,480,"

We want to pull application and business metrics from the source web application to track billing, usage and performance of the application. These metrics are to be stored in a different database (Oracle) for further processing and analytics. We would be building analytics dashboards over these metrics that would be presented to different stakeholders including clients. Below are the points that should be noted

+ +
    +
  1. Metrics collection should have a very low performance overhead (cpu, memory, storage) on the source web application server (java ee based)
  2. +
  3. It should not introduce new components in the system infrastructure, given that we would be collecting 100-200 metrics from the source application. It may not be worth the maintenance effort (deployment/operations/overhead).
  4. +
  5. Some of the metrics are event based. For e.g. web service request size, user uploaded file size, user login and logout timestamp etc. Also not all metrics are of data type numeric. For e.g. ip-address, timestamps.
  6. +
  7. Metric collection is to be done from around 50-100 deployments (multi-tenant & single-tenant) of the application.
  8. +
+ +

I would like to understand the different architecture approaches that we can take into consideration for collecting these metrics into a different database. Please provide enough details to get an idea of how the implementation would look like.

+",78819,,78819,,43146.39722,43146.42431,Architecture Design Approaches for Metric Collection,,1,4,,,,CC BY-SA 3.0,, +365918,1,,,2/14/2018 7:16,,2,611,"

We are developing visual controls for WinForms in .NET, and one of the customers using our products still compiles all his apps for the .NET 4 Client Profile. He has to do this because his customers, some larger companies, would not allow a full install of the .NET Framework, since it contained security risks that were unacceptable to them. I wonder, are there any benefits to using the .NET Framework 4.0 Client Profile nowadays - especially given that Microsoft abandoned client profiles with the release of .NET Framework 4.5?

+ +

And a related question. To support our customers writing apps for .NET Client Profiles, we provide our controls in two DLLs. One DLL contains the core functionality redistributed with end-user apps, and the other implements only the design-time functionality. If we do not consider this separation in the context of .NET client profiles, are there any advantages to it?

+",155152,,,,,43145.49444,Are there any benefits of using .NET Framework 4.0 Client Profile nowadays?,<.net>,3,6,,,,CC BY-SA 3.0,, +365928,1,,,2/14/2018 13:10,,7,12853,"

I'm new to this repository pattern and have seen a lot of repository pattern + UoW implementations all over the internet, and I'm not able to reach a conclusion as to which of them is correct. After going through many links I've been able to implement one.

+ +

Considering the following points in mind

+ +
    +
  • It should satisfy SOLID principles
  • +
  • Be testable
  • +
  • Be independent of framework
  • +
  • Be independent of DB
  • +
+ +

Here is the code of the implementation

+ +

Generic IRepository

+ +
 public interface IRepository<T> where T : class
+    {
+
+        void Add(T entity);
+
+        void Update(T entity);
+
+        void Delete(T entity);
+
+        T GetByKey(object id);
+
+    }
+
+ +

RepositoryBase

+ +
public abstract class RepositoryBase<D, T> : IRepository<T> where T : class where D : BaseDbContext
+{
+
+
+    private D dataContext;
+    private readonly IDbSet<T> dbSet;
+
+    protected IDbFactory<D> DbFactory
+    {
+        get;
+        private set;
+    }
+
+    protected D DbContext
+    {
+        get { return dataContext ?? (dataContext = DbFactory.Init()); }
+    }
+
+
+    protected RepositoryBase(IDbFactory<D> dbFactory)
+    {
+
+        DbFactory = dbFactory;
+        dbSet = DbContext.Set<T>();
+
+    }
+
+
+    #region Implementation
+    public virtual void Add(T entity)
+    {
+        dbSet.Add(entity);
+    }
+
+    public virtual void Update(T entity)
+    {
+        dbSet.Attach(entity);
+        DbContext.Entry(entity).State = EntityState.Modified;
+    }
+
+    public virtual void Delete(T entity)
+    {
+        dbSet.Remove(entity);
+    }
+
+
+    public T GetByKey(object id)
+    {
+        return dbSet.Find(id);
+    }
+
+
+    #endregion
+
+}
+
+ +

IUnitofWork , UnitOfWork

+ +
public interface IUnitOfWork<D> where D : BaseDbContext
+    {
+        void Commit();
+    }
+
+
+
+public class UnitOfWork<D> : IUnitOfWork<D> where D : BaseDbContext, new()
+{
+    private readonly IDbFactory<D> dbFactory;
+    private D dbContext;
+
+    public UnitOfWork(IDbFactory<D> dbFactory)
+    {
+        this.dbFactory = dbFactory;
+    }
+
+    public D DbContext
+    {
+        get { return dbContext ?? (dbContext = dbFactory.Init()); }
+    }
+
+    public void Commit()
+    {
+        DbContext.SaveChanges();
+    }
+}
+
+ +

IDBFactory ,DBFactory

+ +
public interface IDbFactory<D> where D : BaseDbContext
+{
+    D Init();
+}
+
+
+
+
+public class DbFactory<D> : Disposable, IDbFactory<D> where D : BaseDbContext, new()
+        {
+            D dbContext;
+            public D Init()
+            {
+                return dbContext ?? (dbContext = new D());
+            }
+            protected override void DisposeCore()
+            {
+                if (dbContext != null)
+                    dbContext.Dispose();
+            }
+        }
+
+ +

BaseDbContext

+ +
public abstract class BaseDbContext : DbContext
+{
+public BaseDbContext(string nameOrConnectionString) : base(nameOrConnectionString)
+        {
+
+        }
+
+}
+
+ +

ProjectDbContext

+ +
 public partial class ProjectDbContext : BaseDbContext
+    {
+        public ProjectDbContext()
+            : base(""name=ProjectDbContext"")
+        {
+            Database.SetInitializer<ProjectDbContext>(null);
+        }
+
+
+    }
+
+ +

EXAMPLE USAGE

+ +

Controller

+ +
 public class StudentsController : BaseController
+    {
+
+        private IStudentBusiness objstudentbusiness;
+        public StudentsController(IStudentBusiness rstudentbusiness)
+        {
+            objstudentbusiness = rstudentbusiness;
+        }
+
+
+        public JsonResult LoadStudents()
+        {
+
+                var data = objstudentbusiness.ListStudents();
+                var jsonResult = Json(data, JsonRequestBehavior.AllowGet);
+                return jsonResult;
+
+        }
+
+    }
+
+ +

IStudentBAL,StudentBAL

+ +
 public interface IStudentBAL
+    {
+        void SaveStudent(StudentDto student);
+        List<StudentDto> ListStudents();
+    }
+
+
+public class StudentBAL : BusinessBase, IStudentBAL
+{
+
+    private readonly IStudentRepository objStudentRepository;
+    private readonly IUnitOfWork<ProjectDbContext> objIUnitOfWork;
+
+    public StudentBAL(IStudentRepository rIStudentRepository, IUnitOfWork<ProjectDbContext> rIUnitOfWork)
+    {
+        try
+        {
+            objStudentRepository = rIStudentRepository;
+            objIUnitOfWork = rIUnitOfWork;
+        }
+        catch (Exception ex)
+        {
+
+            Log.Error(ex);
+        }
+
+    }
+
+    public List<StudentDto> ListStudents()
+    {
+        try
+        {
+            var tusrs = objStudentRepository.ListStudents() ?? new List<StudentDto>();
+            return tusrs;
+        }
+        catch (Exception ex)
+        {
+            Log.Error(ex);
+
+        }
+        return new List<StudentDto>();
+    }
+}
+
+ +

IStudentRepository,StudentRepository

+ +
 public interface IStudentRepository
+    {
+        void SaveStudent(Student Student);
+        StudentDto GetStudentByName(StudentDto Studentname);
+        Student GetStudentByID(int Studentid);
+        List<StudentDto> ListStudents();
+    }
+
+public class StudentRepository : RepositoryBase<ProjectDbContext, Student>, IStudentRepository
+{
+    public StudentRepository(IDbFactory<ProjectDbContext> dbFactory) : base(dbFactory)
+    {
+    }
+    public List<StudentDto> ListStudents()
+    {
+
+            var students = (from t in DbContext.Students
+
+                        select new StudentDto
+                        {
+                           // all the required properties
+                        }).ToList();
+
+
+            return students;
+
+    }
+}
+
+ +
    +
  • Dependency Injection is done using AutoFac
  • +
  • Code for logging is omitted
  • +
+ +

I am asking if this seems like a good implementation or am I missing anything?

+ +

I would appreciate any feedback on my implementation you can offer regarding correctness, efficiency, and any suggestions. So here are my questions

+ +
    +
  • Is this loosely coupled?
  • +
  • Does it have any leaky abstractions, and if so, why?
  • +
  • What needs to be done to switch from EF to MySQL db and how much effort would it require to implement the changes?
  • +
  • Is this pattern breaking any of the SOLID principles, the Law of Demeter, or any other object-oriented laws?
  • +
  • Does this pattern have any code redundancies which are not required ?
  • +
  • How would this architecture using EF scale with a project containing more than 100 domain entities, each entity having at least 10 fields? Is it going to be a maintenance nightmare later on?
  • +
+ +
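To keep the discussion concrete, here is the minimal shape I believe I am implementing, reduced to a language-agnostic Python sketch over an in-memory store (EF specifics removed; class names mirror my C# code but are otherwise invented):

```python
class InMemoryContext:
    # Stands in for the EF DbContext.
    def __init__(self):
        self.students = []
        self.saved = False

    def save_changes(self):
        self.saved = True

class StudentRepository:
    # Stages changes against the shared context; never commits itself.
    def __init__(self, context):
        self._context = context

    def add(self, student):
        self._context.students.append(student)

    def list_students(self):
        return list(self._context.students)

class UnitOfWork:
    # Only the unit of work decides when staged changes are committed.
    def __init__(self, context):
        self._context = context

    def commit(self):
        self._context.save_changes()

ctx = InMemoryContext()
repo = StudentRepository(ctx)
uow = UnitOfWork(ctx)
repo.add({'name': 'Ann'})
uow.commit()
```

If this reduced shape is itself wrong, I would like to know that before worrying about the C# details.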

-All criticisms greatly appreciated !!

+",296154,,,,,43145.96389,Generic repository pattern +EF and unit of work,,1,5,3,,,CC BY-SA 3.0,, +365930,1,365939,,2/14/2018 13:30,,1,1418,"

I am developing my own header-only library that I want to use as a framework in other software. I am using CMake for setting up demo targets, tests, and so on.

+ +

However, I am unsure how to deal with dependencies of my library. Currently, I am using the following approach:

+ +
FIND_PACKAGE( foo )
+
+ +

I have a configuration config/foo.hh.in file for foo that checks whether the package exists:

+ +
#ifndef FOO_HH__
+#define FOO_HH__
+  #cmakedefine HAVE_FOO
+#endif
+
+ +

In the main CMakeLists.txt, I do:

+ +
CONFIGURE_FILE( ${CMAKE_SOURCE_DIR}/include/config/foo.hh.in ${CMAKE_SOURCE_DIR}/include/config/foo.hh )
+
+ +

In the code, I can thus do:

+ +
#include ""config/foo.hh""
+#ifdef HAVE_FOO
+  // Code that is specific to having the `foo` library available
+#endif
+
+ +

The problem with this approach is that it only works once, namely when the library is installed for the first time and all the dependencies are being checked. Since I am not shipping the optional dependencies with my library, I am wondering whether this is a good approach. What if the user decides to install an optional dependency after installing my library? In this case, I will not be able to detect the presence of the library.

+ +

How do you usually solve this issue? What am I doing wrong with this approach?

+",296167,,,,,43145.73264,How to control optional dependencies of a header-only library in `CMake`?,,1,2,1,,,CC BY-SA 3.0,, +365931,1,365934,,2/14/2018 14:19,,3,422,"

What is the best way to inform users that there has been a bug found in some software they used?

+ +

For example, let's say a user finds a bug in some software and reports it to the development team, who decide that the fix will be delivered in an upcoming release (rather than a one-off patch) because the bug is low risk (it happens very rarely and has minimal, though some, negative impact).

+ +

What is the best way to inform other users who may be affected by the same bug that we know of a bug and offer them a workaround?

+ +

We would like to be pro-active in notifying users in order to minimise the occurance of the bug.

+ +

A colleague suggested having a bug log on the landing page of the software although I feel like this clutters the homepage with information the users are unlikely to read, my argument also was that when I log onto Amazon / Ebay etc, none of them force a bug list on me.

+ +

Perhaps a happy middle ground is to have a notification of a new bug when the user opens the software which the user then acknowledges and once acknowledged, the notification will stop appearing?

+ +

Thoughts would be appreciated

+",296169,,,,,43146.32431,Informing Users of Outstanding Bugs,,1,6,,,,CC BY-SA 3.0,, +365932,1,,,2/14/2018 14:41,,0,763,"

I have a list of 75000 websites that need to be monitored for uptime. Monitoring the websites involves making an HTTP request to each website every minute, and a website is said to be ""up"" or ""down"" based on whether we could successfully make an HTTP request and receive an HTTP response. This kind of ""polling"" is the only thing I can do, as I do not have control of the hosts running the websites.

+ +

Initially, I thought of having a couple of nodes, each of which would be tasked with monitoring a subset of the websites. Each node would run a program designed like so:

+ +
isServerUp(httpClient, url) {
+    time = Time.now()
+
+    try {
+        httpClient.get(url)
+        status = true
+    }
+    catch (e) {
+        status = false
+    }
+
+    // do some other stuff, like saving the
+    // status to a database.
+}
+
+websiteChecker(url) {
+    httpClient = HttpClient()
+    t = Thread(isServerUp, httpClient, url)
+
+    while (true) {
+        t.run()
+        sleep(60)
+    }
+}
+
+main() {
+    for (website in websiteList) {
+        t = Thread(websiteChecker, url)
+        t.run()
+    }
+}
+
+ +

Basically, the program creates ""websiteChecker"" threads for each website to be checked. Each of these ""websiteChecker"" threads spawn a new ""isServerUp"" thread, that checks if the website is up.

+ +

However, such an architecture would barely work, as the huge number of ""websiteChecker"" threads would cause high memory consumption and extremely high resource contention.
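One alternative I am considering is a fixed worker pool draining a shared queue, so the thread count no longer grows with the number of websites. A rough Python sketch (the HTTP check is stubbed out; a real version would issue the request and re-enqueue the URL a minute later):

```python
import queue
import threading

def worker(q, results):
    while True:
        url = q.get()
        if url is None:          # poison pill: shut this worker down
            return
        results[url] = True      # stand-in for 'HTTP GET succeeded'
        q.task_done()            # a real version would re-enqueue url after 60s

results = {}
q = queue.Queue()
threads = [threading.Thread(target=worker, args=(q, results)) for _ in range(4)]
for t in threads:
    t.start()
for url in ['http://a', 'http://b', 'http://c']:
    q.put(url)
q.join()                         # wait until every URL has been checked once
for _ in threads:
    q.put(None)
for t in threads:
    t.join()
```

With 75000 sites this would need tuning (pool size, timeouts, per-node sharding), but the thread count stays fixed.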

+ +

How can I design an architecture that would work well in this scenario, and be scalable as well?

+",201263,,,,,43145.63958,What would be the architecture for a polling-based monitoring service?,,1,3,,,,CC BY-SA 3.0,, +365937,1,,,2/14/2018 17:03,,-1,182,"

I am about to start coding for a school project that requires me to demonstrate good OO practices in a web app. Now, I have built a few web apps and have never felt the need to define my own classes and utilize objects (outside of classes provided by frameworks and modules) so I was looking for help in making some design decisions. It might seem like I am forcing some complexity into a project that is too simple to warrant it but it is for demonstration purposes.

+ +

The web app is dead simple. The ""database"" is just a local JSON file with 2 collections which will be the ""tables"": users and polls. Those collections are lists of entry collections.

+ +

My first inclination is to define a ""database"" connection class. This class would create a file object of the JSON file in read/write mode on instantiation and contain some methods for inserting, updating, deleting, etc...

+ +

But, that is the only part of the project that I can think of that would reasonably warrant a class. Should I define user and poll classes and create objects of those every time one needs to be inserted into the database? The data would be coming into the app from an html form and I would normally just use a function to massage it into the right format and then call database function to insert it. Instantiating a class somewhere in that process seems completely unnecessary.

+ +

Users can create Polls and vote on other Polls and view the results of past Polls. The Polls will contain a start date and end date and once the end date has passed results will be counted and stored in a ""results"" key in that Poll's entry collection.

+ +

There are only 2 entities in this app: Users and Polls. So, more specifically, I am asking: should I create a database connection class with insert, update, etc. methods, OR classes for Users and Polls that have methods like vote() and update() that then interact with the JSON file?

+ +

Where would classes best be implemented here?

+",295153,,295153,,43145.74028,43175.80694,OOP Design Choices in a Web App,,2,18,,,,CC BY-SA 3.0,, +365940,1,365943,,2/14/2018 17:27,,9,798,"

Once in a while, I leave comments like

+ +
# We only need to use the following for V4 of the computation.
+# See APIPROJ-14 for details.
+
+ +

or

+ +
# We only need to use the following for V4 of the computation.
+# See https://theboringcompany.atlassian.net/browse/DIGIT-827 for details.
+
+ +

My main concern with doing so is that it increases our dependence on JIRA, so those comments would be entirely moot if we were to migrate into another project management system. While I don't foresee that happening in the near future, I remain wary of the increased coupling of organizational components (in this case: code, code repositories and a project management system).

+ +

However, I do see the benefit of having references to documented design decisions and feature inspiration throughout the code base. As far as I can tell, the benefits are

+ +
    +
  1. a clear path to design decisions, which helps with debugging and ramping up on particular segments of unfamiliar code,
  2. +
  3. fewer multi-line comments, which makes code seem cleaner/less intimidating to new contributors,
  4. +
  5. a clear path to (potentially) current technical and non-technical stakeholders, and
  6. +
  7. a decrease in the number of ""why is this here"" questions because of the aforementioned.
  8. +
+",36526,,177980,,43145.72847,43145.83542,Is it generally helpful to include JIRA issues in code comments?,,5,3,1,,,CC BY-SA 3.0,, +365944,1,,,2/14/2018 18:12,,0,38,"

I have data that is structured like the following:

+ +
users:
+
+id | name  | parent_id
+1  | Bob   | NULL
+2  | Jan   | 1
+3  | Mat   | 2
+4  | Irene | 2
+5  | Ellie | 2
+6  | Laura | 5
+7  | Uma   | 6
+
+user_sales:
+
+user_id | sales_period | total_volume | total_revenue | ....
+    1   |  Jan-2017    |  1000        |   56000
+    1   |  Feb-2017    |  1500        |   65000
+    2   |  Jan-2017    |  650         |   45500
+    5   |  Jan-2017    |  800         |   49005
+    6   |  Jan-2017    |  1000        |   56000
+
+add a bunch more tables that use the core users tree structure...
+
+ +

We have client databases ranging in size from ~60GB to ~1TB, and infinitely scaling database servers to support large ETL operations isn't an option. In researching solutions, it looks like our best bet would be to find a way to employ parallel processing, but a fundamental question we keep coming back to is whether you can use parallel processing at all when everything requires traversing a tree structure like the one we have.

+ +

Can anyone answer whether we can process a rooted tree data structure in parallel and if so, do you have suggestions on how it should be done?
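To make the idea concrete, here is a rough Python sketch of what I mean (all names are made up): each subtree under a distinct child of the root touches a disjoint set of nodes, so subtrees can be aggregated independently in parallel workers.

```python
from concurrent.futures import ThreadPoolExecutor

def build_children(rows):
    """rows: iterable of (id, parent_id) pairs; returns {parent_id: [child ids]}."""
    children = {}
    for node_id, parent_id in rows:
        children.setdefault(parent_id, []).append(node_id)
    return children

def subtree_total(root, children, sales):
    """Sequentially sum sales over one subtree (depth-first)."""
    total = sales.get(root, 0)
    for child in children.get(root, []):
        total += subtree_total(child, children, sales)
    return total

def parallel_totals(root, children, sales):
    """Aggregate each top-level subtree in its own worker; subtrees under
    different children of the root never share nodes, so no coordination
    is needed between workers."""
    tops = children.get(root, [])
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda t: subtree_total(t, children, sales), tops)
    return dict(zip(tops, results))
```

This only parallelizes across independent subtrees; whether that gains anything depends on how balanced the tree is.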

+",53,,,,,43145.75833,Can data in a rooted tree be processed in parallel?,,0,2,,,,CC BY-SA 3.0,, +365949,1,365960,,2/14/2018 19:13,,2,173,"

I am writing some code to parse some files (which I call ""assets""), and I'm planning to structure this as three classes: AssetParser, NamespacesParser, and TransformersParser. AssetParser will use objects of the other two classes to parse some parts of the assets.

+ +

I want to write this as three classes rather than one big class to split the functionality into manageable fragments (that is, I use something like structured programming, where methods of AssetParser call methods of the other two classes).

+ +

My question: Should I use dependency injection to create objects of NamespacesParser and TransformersParser classes?

+ +

The subparsers are tightly coupled to the main parser. In my opinion, it is highly unlikely that AssetParser would ever use other subparsers in place of NamespacesParser and TransformersParser. So it looks like dependency injection is not necessary here. (I can just use new to create objects of NamespacesParser and TransformersParser.)

+ +

One reason to use dependency injection is that it makes it easy to turn NamespacesParser and TransformersParser into singletons (if I decide to use singletons). Are there other reasons?

+",45576,,45576,,43146.30903,43146.30903,Dependency injection for tightly coupled components,,1,6,,,,CC BY-SA 3.0,, +365951,1,366014,,2/14/2018 19:27,,-1,135,"

https://www.toyota.de/automobile/corolla/index.json

+ +

I understand how JSON works, but how is it possible to use that instead of, say, .php or .html? This is the first time I've seen anything like that.

+",224451,,,,,43146.62153,How are some websites utilizing .JSON protocols as their web address,,1,3,0,,,CC BY-SA 3.0,, +365954,1,,,2/14/2018 20:24,,-2,206,"

We currently have a Users table in the database; it has quite a few columns. Around 50% of the columns are barely used in the system (only on one or two pages). We've been discussing amongst ourselves whether to split this table into two tables, e.g. Users and UserSettings. Our thinking is also that we could then split the C# objects up as well to follow suit.

+ +

We are trying to think about DB index size and things like caching, as the user objects are cached in Redis. +I know in some cases we'd need a couple of DB queries to get the data instead of a single one, but as this would be the exception, the saving everywhere else would be a bigger benefit.

+",43458,,,,,43902.20278,One Single Table or Two Smaller Tables,,5,6,,,,CC BY-SA 3.0,, +365957,1,,,2/14/2018 21:35,,2,460,"

Sometimes it is pretty easy to find a very natural concept to represent a relationship (for example, Person and Team can be connected by Membership), but occasionally it is not so easy and the concept might seem artificial. So is there any strategy for finding or naming these kinds of relationships?

+",193934,,,,,43146.47361,Naming relationship aggregates,,2,1,,,,CC BY-SA 3.0,, +365959,1,,,2/14/2018 22:16,,0,712,"

ISAs define things like instruction lengths and the instructions themselves and there are some things that I do not understand.

+ +
    +
  • Does the instruction length (the number of bits) affect the number of instructions that can be performed in a clock cycle?
  • +
  • I have tried to do some research on what really determines the number of instructions that can be performed in a cycle (which led to the question above), but without luck. Is the CPI always variable, or is there a specific number of cycles it takes for certain instructions to be performed, like maybe ""a plain load instruction always takes one cycle""?
  • +
+",295868,,,,,43146.43889,Does instruction length affect cycles per instruction?,,4,4,,,,CC BY-SA 3.0,, +365969,1,,,2/15/2018 0:16,,1,66,"

So I have a design question. I was reading about Dependency Injection on fsharpforfunandforprofit.com, and the article said that hidden dependencies on local methods are a problem. That got me thinking: ""Just how decoupled does a function need to be from the domain?"" I made a contrived example here where the function lotsOfChickens depends on create. Is this bad design?

+ +
type Chicken = {
+        Name : string
+        Size : float
+    }
+module Chicken =
+    let create a s =
+        {
+            Name = a
+            Size = s
+        }
+
+    let lotsOfChickens a s i =
+        [for _ in [1..i] -> create a s]
+
+",295417,,,,,43146.01111,Dependency on local function,,0,6,,,,CC BY-SA 3.0,, +365970,1,365975,,2/15/2018 3:38,,1,164,"

I'm trying to figure out how to create a Node.js/Express.js application that is a framework for hosting and running third-party code in my application, and what the appropriate JavaScript/Node/Express terms for these concepts are if they're different from other languages I've used in the past (C++/C#/Java).

+ +

In developing the back-end for an Android/iOS app using some or all MEAN technologies--at least Express.js and Node.js--I'm getting requirements to set it up as a framework so third-parties can code to a published API to develop plugins/modules for my back-end that I might have running on some server somewhere.

+ +

This portion might get outsourced to someone else, in which case I would be responsible for ensuring that I communicate the requirements accurately to the assignee.

+ +

I'm relatively new to Node and Express, and I'm having difficulty finding information on how to go about writing code for this if I were to do it myself. When I try to search for information on how to create what I described above, I end up getting hits for Express itself or Node itself. Searching for ""Express"" and ""frameworks"", I basically get information on Express itself since Express.js is a framework; similarly searching for ""Node"" and ""modules"" basically just gives me tons of NPM information since NPM hosts Node modules.

+ +

Coming from a C++/C#/Java background, I envision this application would use the equivalent of something like ""dlopen"" to manually open the equivalent of a .so, .dll, or .jar file, find a class that implements an expected interface, and then call methods on that interface.

+ +

If a third-party developer creates a separate plug-in module to run in my application, my understanding is there is no concept of a DLL or JAR in JavaScript, so all that's available would be some special .js file.

+ +

If I have ten of these or a hundred of these from numerous third-party developers, as the owner of the back-end, would I be responsible for hooking each of these into my application so that my application runs with all of these--with the end user potentially enabling/disabling/configuring zero or more of these for his/her own account via his/her own app?

+ +

I'm wondering if I should account for three separate cases for the outside code:

+ +
    +
  1. Fully-trusted code developed in-house, so no protection needed. If nothing else, this would help for debugging if ever necessary as a layer of complexity can be removed just to get into the code to get it to run with no frills.

  2. +
  3. Untrusted code from an arbitrary third party, so it might be beneficial to run it in its own dedicated process at the very least. Regarding the badly designed code, we might want to run our in-house modules in this environment in production anyway just for stability so if a module goes down, it doesn't take the entire Express application down with it.

  4. +
  5. Remotely-hosted code. It might be the case that a third party might not want to provide their JavaScript code to us to run, or they might want to develop their module in something other than JavaScript. If something like a REST API is available to them and they expose a fixed REST API, they could host their own code themselves and hook-into my back-end with web service calls.

  6. +
+ +

I'm now wondering if #3 would be the way to go for all cases of #1 and #2 as well. Even for trusted code, simply provide the ability to put in web hooks and stand up separate, compartmentalized services perhaps in isolated Docker containers on the same host for one standardized interface, and don't even bother with ""modules"" and ""plugins"".

+",171263,,1204,,43236.67083,43236.67083,How to create a framework in Express.js,,1,4,,,,CC BY-SA 4.0,, +365972,1,,,2/15/2018 5:07,,1,55,"

I'm currently working on getting a good workflow going on, from development to a kubernetes deployment on cloud platform.

+ +

I'm pretty comfortable with various docker commands, but rewriting long commands each time is getting painful.

+ +

For example, for deploying docker images to google cloud I need to tag and push them to google gcloud.

+ +
docker tag my-image gcr.io/my-project/my-image:test
+gcloud docker -- push gcr.io/my-project/my-image 
+
+ +

This would obviously be quite painful to write out every day, so for convenience I could put these into package.json and run them with something like npm run docker-image-release.
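Concretely, I mean something like this in package.json (the script name and image/project names are just examples):

```json
{
  "scripts": {
    "docker-image-release": "docker tag my-image gcr.io/my-project/my-image:test && gcloud docker -- push gcr.io/my-project/my-image"
  }
}
```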

+ +

The question I have is: is mixing package.json and docker/deployment stuff frowned upon? Or is this a perfectly reasonable project structure?

+ +

If it is a bad idea - what's the best way to conveniently remember commands I'm using a lot?

+",109776,,109776,,43146.35278,43146.35278,Should I put docker commands in my package.json?,,1,3,,,,CC BY-SA 3.0,, +365973,1,366023,,2/15/2018 5:51,,0,282,"

I would like to write a data structure implementation in Java that uses caches as a core part of its functionality, and I would like the user to be able to provide their own cache implementations that implement a particular cache interface so they can test performance using various strategies (like LRU, LFU, MRU, etc.).

+ +

What is the best way to allow a user to swap in their own cache in an instance of one of the data structures without giving them access to my codebase? Is there a way I can pass in a class that implements the cache interface as a parameter?

+ +

This structure will contain perhaps several caches arranged in different ways, so I would need more than one instance of a cache, and I would like to be able to create and destroy caches at runtime. Would passing the cache's constructor as a lambda function be a good solution?
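To illustrate what I mean by passing a constructor in, here is a sketch in Python rather than Java, for brevity (all names are hypothetical; in Java this would be something like a Supplier<Cache>):

```python
class TwoLevelStructure:
    """Hypothetical structure that owns several internal caches.
    `cache_factory` is any zero-argument callable (a class works)
    returning an object with get/put, so the user can supply their
    own LRU/LFU/MRU implementation without access to this code."""
    def __init__(self, cache_factory):
        self._make_cache = cache_factory
        self.fast = cache_factory()    # caches can be created...
        self.slow = cache_factory()    # ...independently of each other

    def insert(self, key, value):
        self.fast.put(key, value)

    def lookup(self, key):
        return self.fast.get(key)

    def reset_fast_cache(self):
        # destroy and re-create a cache at runtime
        self.fast = self._make_cache()
```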

+",296218,,1204,,43146.82083,43146.91597,Classes as parameters,,2,3,,,,CC BY-SA 3.0,, +365976,1,,,2/15/2018 7:04,,3,55,"

I work in a company that uses microservices. We have one particular service (let's call it Cart Service for example purposes) that manages some logic (storing carts, allowing other services to add and remove items from a given user's cart, etc). My team owns this service and multiple other critical paths (like Item Search) talk to it routinely at 1000+ QPS.

+ +

Now, we need to add a web frontend that allows customer support people to remove/add items to the cart manually. This frontend basically needs to do all the stuff our service exposes currently, plus some more specific things (changing the owner of a cart, say). We have two choices:

+ +
    +
  1. Make Cart Service serve the frontend. The frontend can talk to Cart Service to change the owners of the cart.
  2. +
  3. Make a new Cart Admin Service that only serves the frontend and which talks to the Cart Service to perform the cart actions. This service wouldn't store any new data (probably) but would have to talk to other services (like customer support auth, etc).
  4. +
+ +

I think that we should go with option 2 to avoid bloat of the Cart Service (which does a lot of stuff already) and that way we can scale the new service independently. It also separates the customer support concern from the core business concerns - Cart Admin Service and Cart Service have different uptime requirements, and we might bring down Cart Service inadvertently if we introduce a bug trying to change the frontend.

+ +

However, my coworker thinks that we should go with option 1 because writing the new APIs that the Cart Service would need to expose would be time better spent doing other things, and since the load of Cart Admin would be so insignificant compared to the current load of the Cart Service, load concerns don't really matter.

+ +

What are the pros and cons of each method beyond what I've described? Is there a reason one method is clearly better than the other?

+",296223,,,,,43146.29444,I need a web frontend for managing some data. Should I add it to an existing microservice or create a new one?,,0,2,,,,CC BY-SA 3.0,, +365978,1,,,2/15/2018 7:37,,-1,130,"

I understand how service providers in Laravel work and how to use them. But I'm not sure how to keep the code clean, specifically with regard to the number of method parameters. For example, I have this route:

+ +
Route::post('/user/user-rooms', '\Modules\Administration\User\Services\UserRooms@rooms')
+
+ +

Where class looks like this:

+ +
public function rooms(Request $request, UserRulesInRoomsRepository $userRulesInRoomsRepository, RoomRepository $roomRepository)
+{
+    $user_rooms = $userRulesInRoomsRepository->findBy();
+    $available_rooms = $this->availableRooms($userRulesInRoomsRepository, $roomRepository);
+}
+
+private function availableRooms(UserRulesInRoomsRepository $userRulesInRoomsRepository, RoomRepository $roomRepository)
+{
+    $userRules = $userRulesInRoomsRepository->getFilter();
+    $rooms = $roomRepository->all();
+}
+
+ +

I'm using the repository RoomRepository only in the method availableRooms, but I need this repository already in the method rooms because I want to keep using dependency injection.

+ +

I could use the resolver App::make(RoomRepository::class) instead of the parameter in the availableRooms method, but I think this goes against the DI pattern.

+ +

Another option would be to save this repository to a class property and then call it from the method, but there is still the problem that I have this parameter in the rooms method even though I'm using the repository only in one other method.

+ +

Hopefully I wrote it clear. What is best practice in this case?

+",293486,,,,,43176.42431,Laravel Service Providers count and DI,,1,2,,,,CC BY-SA 3.0,, +365982,1,365984,,2/15/2018 8:34,,3,471,"

Suppose that I have two services, Person Service and Company Service, and I want to maintain links between them; for example, a Person is linked to a Company because he works there, or he owns the company, etc. So I will go ahead and create a database table like

+ +
PersonId, CompanyId, RelationType
+
+ +

Now the business logic that clients will call to link them can be written in either of the services. But what if I have a requirement to link multiple people to a single company and multiple companies to a single person? I need two methods: one that takes a single PersonId and a list of CompanyIds, and one that takes a single CompanyId and multiple PersonIds. So I have written two different methods, one in each service. The following method is in Person Service:

+ +
void LinkPersonToCompanies(long personId,
+                           IEnumerable<long> companyIds,
+                           RelationType type)
+
+ +

and below method is in Company Service

+ +
void LinkCompanyToPersons(long companyId,
+                          IEnumerable<long> personIds,
+                          RelationType type)
+
+ +

I know that these are two different methods, but they are doing the same task, i.e. their logic repeats itself. It can become tough to maintain because a single change to the linking mechanism now has to be made in both methods. Is it against the DRY principle? What would be the right design to solve this problem efficiently?
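To make the duplication concrete, here is a rough sketch of the idea I have in mind (Python pseudocode, not our actual C# code; the `store` API is made up): both public methods reduce their arguments to a list of (person, company) pairs and delegate to one shared routine, so the linking rules live in a single place.

```python
def link_person_to_companies(store, person_id, company_ids, relation):
    # fan out: one person, many companies
    _link(store, [(person_id, c) for c in company_ids], relation)

def link_company_to_persons(store, company_id, person_ids, relation):
    # fan out: one company, many people
    _link(store, [(p, company_id) for p in person_ids], relation)

def _link(store, pairs, relation):
    """The single place that owns the linking logic (hypothetical store)."""
    for person_id, company_id in pairs:
        store.append((person_id, company_id, relation))
```

Whether that shared routine lives in one of the two services or in a third shared component is part of my question.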

+",296229,,200203,,43146.37153,43146.48194,Does my code violate DRY principle?,,5,6,1,,,CC BY-SA 3.0,, +365988,1,371398,,2/15/2018 9:30,,0,542,"

I have a system that sometimes needs to use a pretrained machine learning model. +That model is about 10 GB on disk, +and when loaded uses about 10 GB of RAM.

+ +

Loading it from disk takes a nontrivial amount of time, +so in general I wish to not do it too often. +Certainly not every function call against it.

+ +

Right now, I am using a Lazy Loading-(ish) pattern, +where the first time a function call is made against it +it is loaded then stored in a global variable.

+ +

This is nice, because on some runs of my system it will never be needed. +So loading it lazily saves a couple of minutes on those runs.

+ +

However, other times my system is running as a long-running process (exposed via a web API). +In these cases, I don't want to be using up 10GB of RAM all the time, +it might be days (or weeks) between people using the API methods that rely on that model, and then it might be used 1000 times over 1 hour, and then be unused for days.

+ +

There are other programs (and other users) on this system, +so I don't want to be hogging all the resources to this one program, when they are not being used.

+ +

So my idea is that after a certain amount of time, +if no API calls have used the model I will trigger some code to unload +the model (garbage collecting the memory), + leaving it to be lazy-loaded again the next time it is needed.
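A rough sketch of what I have in mind (Python; the names and the eviction policy are made up, and the real model object would be much heavier):

```python
import threading
import time

class IdleUnloadingLoader:
    """Lazily loads an expensive resource and drops the reference after
    `ttl` seconds without use, so it can be garbage-collected and then
    lazily re-loaded on the next call."""
    def __init__(self, load_fn, ttl):
        self._load_fn = load_fn
        self._ttl = ttl
        self._resource = None
        self._last_used = 0.0
        self._lock = threading.Lock()

    def get(self):
        with self._lock:
            if self._resource is None:
                self._resource = self._load_fn()   # the expensive load
            self._last_used = time.monotonic()
            return self._resource

    def evict_if_idle(self):
        """Call periodically, e.g. from a background timer thread."""
        with self._lock:
            idle = time.monotonic() - self._last_used
            if self._resource is not None and idle > self._ttl:
                self._resource = None   # let the GC reclaim the memory
```

A background timer would call evict_if_idle() every few minutes; API calls only ever call get().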

+ +
    +
  • Is this a sensible plan?
  • +
  • Is it a well-known pattern?
  • +
  • Maybe it is not required and I should just trust my OS to SWAP that out to disk.
  • +
+ +

This is related to +Is there a name for the counterpart of the lazy loading pattern? +However, that question seems unclear as to if it is actually just asking about memory management patterns in general.

+",25849,,25849,,43147.66944,43243.40972,"Extending the concept of Lazy Loading, to also unloading",,3,5,,,,CC BY-SA 3.0,, +365990,1,,,2/15/2018 10:10,,0,46,"

We have a database that stores outlets / supermarkets (mainly in Germany, but other countries are also possible). We store some information about these outlets / supermarkets, like name, street, postal code, city, and geocodes (longitude +, latitude).

+ +

It is very important that each outlet appears only once in the database.

+ +

From our clients we get lists of outlets / supermarkets to import into our database. When an outlet / supermarket already exists in our database, we ignore it in the import list; when it does not exist in our database, we create a new entry.

+ +

And here comes the problem: the quality of the import lists from our clients is very often extremely bad. For example, street and postal code don't match, there are spelling mistakes in street names, spelling mistakes in outlet names, etc. The import lists are not standardized.

+ +

We need a largely automatic process. In some cases, an import list has more than 10,000 entries, and it would be far too costly to have an employee carry out this process.

+ +

Our previous approaches:

+ +
    +
  1. Strictly check for equality (Name == ImportName && Street == ImportStreet). +It is obvious that this cannot work; the automatic process would find only a few matches.

  2. +
  3. Geocoding and Levenshtein algorithm +(https://en.wikipedia.org/wiki/Levenshtein_distance)

  4. +
+ +

First we geocode the address from the import list. Then we look within a radius of 500 meters to see whether there are other outlets already in the database. If not, we create a new entry in the database. But if there is already an existing outlet nearby, we check whether it is the same one with the help of the Levenshtein algorithm.
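For reference, the name-matching step we currently do looks roughly like this (a Python sketch; the normalization rules and the 0.25 threshold are just examples, and our real code differs):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalize(s):
    """Cheap canonicalisation before comparing names/streets."""
    return " ".join(s.lower().replace("str.", "strasse").split())

def is_probable_match(name_a, name_b, max_ratio=0.25):
    """Treat two outlet names as the same if the edit distance is a
    small fraction of the longer name (the threshold is a guess)."""
    a, b = normalize(name_a), normalize(name_b)
    return levenshtein(a, b) <= max_ratio * max(len(a), len(b))
```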

+ +

There are several problems with this solution: geocoding often fails because of the many misspellings and variant spellings of the addresses (we tried to change our geocoding provider to Google Maps because its geocoding is much better, but this would be very expensive), and the Levenshtein check can be wrong, so we get wrong mappings to existing outlets. Depending on which limit you choose for the Levenshtein distance, either many outlets have to be imported manually or wrong assignments are created.

+ +

We are looking for new approaches and ideas for this problem. Are there suitable algorithms or do you have other ideas? Many Thanks.

+",296236,,296236,,43146.42778,43146.43819,Algorithm: Identify same Outlets / Supermarkets,,1,0,,,,CC BY-SA 3.0,, +365993,1,366024,,2/15/2018 10:17,,2,115,"

I got an architectural problem here.

+ +

Let's say there is an IShell. It is mainly responsible for mapping the user's commands (represented as Linux-like strings) to the appropriate IExecutables.

+ +

Those executables are quite varied. Hence, there are different modules/assemblies/DLLs (or whatever you want to call them): Module1, Module2, Module3, all providing different implementations of the abovementioned IExecutable.

+ +

Now there is a requirement to be able to substitute any of those modules on the fly, at run time, without stopping the IShell instance. Obviously, there is going to be some kind of API, like /api/update/module1, and... it might return raw code to be compiled locally (what are the advantages and disadvantages?) or it might respond with a precompiled *.dll.

+ +

Either way, at some point IShell will eventually receive the newest version of a particular module. And here emerges the biggest problem. Assuming I can ""pause"" what's happening inside, what's the cleanest and most robust way to substitute the currently loaded module with a new one?

+ +

Since most of the code is written in Haskell, I am definitely interested in a Haskell-related solution. Or, at least, I'd like to have it confirmed that there is none.

+ +

P.S. If someone of you, guys, is aware of concept might work, but not Haskell's realization of that one, still let me know. Thanks in advance.

+",296164,,,,,43146.89653,Dynamic *dll substitution?,,1,2,,,,CC BY-SA 3.0,, +366000,1,366004,,2/15/2018 11:04,,0,291,"

Please see the code below:

+ +
    public sealed class UKCurrency : ICurrency
+    {
+        private static readonly int _decimalPlaces=2; 
+            private static readonly decimal[] _denominations = new decimal[] {
+                50.00M, 20.00M, 10.00M,
+                5.00M,  2.00M,  1.00M,
+                0.50M,  0.20M,  0.10M,
+                0.05M,  0.02M,  0.01M,
+            };
+
+            public int DecimalPlaces
+            {
+                get { return _decimalPlaces; }
+            }
+
+            public IEnumerable<decimal> AvailableDenominations
+            {
+                get { foreach (var denomination in _denominations) yield return denomination; }
+            }
+    }
+
+    public sealed class DenominationCounter
+    {
+        private readonly decimal _cost;
+        private readonly ICurrency _currency;
+
+        public decimal Cost
+        {
+            get { return _cost; }
+        }
+
+        public ICurrency Currency
+        {
+            get { return _currency; }
+        }
+
+            public DenominationCounter(decimal cost, ICurrency currency)
+            {
+                if (currency == null)
+                    throw new ArgumentNullException(""currency"", ""Currency cannot be null"");
+                if (cost < 0)
+                    throw new ArgumentException(""Cost cannot be negative"", ""Cost"");
+                if (decimal.Round(cost, currency.DecimalPlaces) != cost)
+                    throw new ArgumentException(string.Concat(""Cost has too many decimal places.  It should only have: "", currency.DecimalPlaces), ""Cost"");
+                _cost = cost;
+                _currency = currency;
+            }
+
+public IEnumerable<System.Collections.Generic.KeyValuePair<decimal, int>> CalculateDenominations()
+        {
+            var target = _cost;
+            foreach (var denomination in _currency.AvailableDenominations)
+            {
+                var numberRequired = target / denomination;
+                if (numberRequired >= 1)
+                {
+                    int quantity = (int)Math.Floor(numberRequired);
+                    yield return new KeyValuePair<decimal, int>(denomination, quantity);
+                    target = target - (quantity * denomination);
+                }
+            }
+        }
+    }
+
+ +

The DenominationCounter constructor throws an exception if the cost has the wrong number of decimal places.

+ +

Notice that the UKCurrency class is used to validate the DenominationCounter as shown below:

+ +
if (decimal.Round(cost, currency.DecimalPlaces) != cost)
+
+ +

Is it normal to approach validation like this:

+ +

1) A Value Object's member is used to validate an entity

+ +

2) A Value Object's member is used to validate another value object

+ +

I am asking this because I have never seen validation approached like this before and I am trying to follow the principle of least astonishment these days.

+",65549,,65549,,43146.58125,43146.62431,Value Objects member used to validate another value object,,3,15,,,,CC BY-SA 3.0,, +366005,1,,,2/15/2018 12:26,,1,1096,"

I have dozens of Delphi (version 10.2.2 Tokyo) functions that I would like to be accessible from a C# MVC web project. These functions are mainly report queries that take a bunch of input parameters and output an array (or list) of records (or objects) as a result.

+ +

I have full control over both code bases, Delphi and C#. What would be the best architecture in this scenario to share the code base? A DLL would be an option, but I don't like the low-level details that come with it. What are better options?

+ +

Note: the functions must be available to be called from both Delphi and C# code.

+",47566,,47566,,43146.52847,43147.56458,Calling Delphi code from C# program,,2,4,,,,CC BY-SA 3.0,, +366010,1,366016,,2/15/2018 14:29,,3,1232,"

Note: BLL = Business Logic Layer (can also mean your domain)

+ +

I'm trying to understand the onion architecture. It seems to me that it's actually the same thing as the layered architecture, only with the dependency inversion principle (DIP) applied. For example, this is the typical layered architecture (arrows represent dependencies):

+ +

UI > BLL > DAL

+ +

Note: That's simplified, and should not imply that DIP can't/isn't used with it. DIP simply means that we should depend on abstractions.

+ +

This is the typical onion flow (also simplified):

+ +

UI > BLL < DAL

+ +

Notice the last arrow is reversed. The BLL has the abstractions, so they're at the center of the onion, and the other layers reference it. +Onion article: http://jeffreypalermo.com/blog/the-onion-architecture-part-1/

+ +

Since I'm used to the layered architecture, I wanted to see what the flow would look like if I combined that with DIP. Here it is:

+ +

UI > [interface] < BLL > [interface] < DAL

+ +

Excellent diagram of DIP, which mirrors this. +https://en.wikipedia.org/wiki/Dependency_inversion_principle

+ +

Here is an article explaining the difference between layered and onion. It made me have the questions I presented above. +http://blog.ploeh.dk/2013/12/03/layers-onions-ports-adapters-its-all-the-same/

+ +

So my question is, what is the difference between the onion architecture, and the layered architecture with DIP? Is there one?

+ +

My question has been flagged as a possible duplicate of this one: +Onion architecture vs 3 layered architecture +Mine is different because I want to know if there really is such a thing as the onion architecture. If you claim the onion architecture is layered + DIP, then does the onion really exist? Or would that mean that the onion really is just a version of layered, like layered with bad practices?

+",29526,,29526,,43146.72153,43146.72153,Is there really such a thing as the onion architecture?,,1,7,1,,,CC BY-SA 3.0,, +366011,1,366015,,2/15/2018 14:48,,3,669,"

I am working with EWS. A class is made to query a mailbox and read emails.

+ +
public class MailReader
+{
+
+    private readonly ExchangeService _service;
+    private readonly PropertySet _propertySet;
+
+    private readonly IFolderSearch _folder;
+    private readonly IMailAttachmentAdapter _attachment;
+    private const int MaxEmail = 1000; 
+
+    public MailReader(
+        ExchangeService service,
+        PropertySet propertySet,
+        IFolderSearch folder,
+        IMailAttachmentAdapter attachment)
+    {
+        _service = service;
+        _folder = folder;
+        _propertySet = propertySet;
+        _attachment = attachment;
+    }
+
+ +

What is the best way to inject ExchangeService and PropertySet? These classes are sealed and have no interface. I am planning to use Autofac for dependency management.

+ +

Any pointers would be helpful.

+",293940,,,,,43146.75556,Handle third party library class depedency which do not have interface,,1,3,1,,,CC BY-SA 3.0,, +366021,1,366094,,2/15/2018 17:44,,3,1285,"

We currently have a work stream where we are splitting out a Monolith into microservices and there has been a debate as to how we will access and persist data.

+ +

To give you a good idea of how most of our apps are set up, we have an app built strictly for the UI, a Web API it talks to, and then another Web API, which that API talks to, that manages access to the database.

+ +

While it definitely seems appealing to have a Web API that we use to manage access to the database, we still have to build out endpoints specific to each app, and, for the most part, the Web API our UI app talks to just acts as a proxy.

+ +

Also, there is no real design pattern set forth: do we just use our Web API as a proxy, or do we set up multiple endpoints in the data layer API and consume all of those to create our own DTOs?

+ +

This is a C# project and we typically use Entity Framework, so to me, it seems like using a Database First approach to generate our Entities in our Web API makes a lot more sense. That way we aren't managing migrations in multiple projects.

+ +

It seems too late in the game to separate our database into multiple databases, which is something I would prefer.

+",256005,,256005,,43162.72847,43162.72847,Managing Database Access for MicroServices,,1,13,,,,CC BY-SA 3.0,, +366025,1,,,2/15/2018 21:34,,-3,73,"

I have a set of objects with different attributes; e.g. a set of cars with colors, brand etc. I choose subsets by virtue of a specific set of attributes.

+ +

Now I want to identify pairs of such subsets that differ only by one element.

+ +

Eg. say I have 4 cars:

+ +

A: Red, Ford

+ +

B: Red, Ford

+ +

C: Blue, Ford

+ +

D: Green, Volvo

+ +

Then the set of “Ford” cars is [A,B,C] and the set of “Red” cars is [A,B], i.e. these two sets only differ by one element and would be identified. However the set of “Volvo” cars [D] differs by more than one element from all other possible subsets (looking only at subsets according to a specific set of attributes), and would not be identified as part of a pair.

+ +

Is there a generalized algorithm for this specific set problem/logic (identifying the existence of such sets), and/or solutions for identifying such sets that is smarter than generating and testing all possible permutations of possible sets against each other?
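For comparison, the brute-force baseline (test every pair of attribute-defined subsets) can be sketched in Python; all names here are illustrative, not from any library:

```python
from itertools import combinations

def attribute_subsets(items):
    """Group item ids by each (attribute, value) pair.

    `items` maps an id to its attribute dict, e.g.
    {"A": {"color": "Red", "brand": "Ford"}, ...}
    """
    subsets = {}
    for item_id, attrs in items.items():
        for key in attrs.items():          # key is e.g. ("color", "Red")
            subsets.setdefault(key, set()).add(item_id)
    return subsets

def pairs_differing_by_one(subsets):
    # Keep the pairs whose symmetric difference has exactly one element.
    return [
        (a, b)
        for a, b in combinations(subsets, 2)
        if len(subsets[a] ^ subsets[b]) == 1
    ]

cars = {
    "A": {"color": "Red", "brand": "Ford"},
    "B": {"color": "Red", "brand": "Ford"},
    "C": {"color": "Blue", "brand": "Ford"},
    "D": {"color": "Green", "brand": "Volvo"},
}
result = pairs_differing_by_one(attribute_subsets(cars))
# pairs the "Red" subset {A, B} with the "Ford" subset {A, B, C}
```

This is quadratic in the number of attribute values; a smarter approach would have to beat that, e.g. by bucketing subsets by size first, since two sets whose symmetric difference is one element must differ in size by exactly one.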

+",284429,,9113,,43147.28681,43152.69653,Identify sets differing by one,,1,5,,,,CC BY-SA 3.0,, +366026,1,366086,,2/15/2018 22:26,,-2,433,"

Should I keep the else clause in a case like this, where the compiler marks the else as redundant code but the else makes the logic more explicit and therefore more readable?

+ +
if (aCondition){
+    doStuff();
+    return someMethodResult();
+}
+else {
+    doOtherStuff();
+    return someOtherMethodResult();
+}
+
+ +

or

+ +
if (aCondition){
+    doStuff();
+    return someMethodResult();
+}
+doOtherStuff();
+return someOtherMethodResult();
+
+",296297,,296297,,43146.95,43147.81667,Should i use redundant code e.g. else to make it better readable,,2,3,,43147.83194,,CC BY-SA 3.0,, +366027,1,,,2/15/2018 23:22,,1,60,"

I've got a view that renders a menu. These menu items are dynamic, in that they only appear based on some conditions (authorization, for example).

+ +

I have two options:

+ +
    +
  1. Hard code the menu in the view with all the necessary conditionals
  2. +
  3. Structure and filter data outside of the view so the view is only responsible for displaying an array of actions.
  4. +
+ +

Option 2 seems to be more elegant to me, but this seems to come at the cost of increasing the size of the controller and doesn't feel like it should be the responsibility of the controller.

+ +

So, my questions are:

+ +
    +
  1. Is structuring data for the view standard practice?
  2. +
  3. Where should the structuring of this data live? I've considered using a helper method (imported from module), a service object, or just a private method within the controller, it's just not clear to me which would be better.
  4. +
+ +

Beyond just answering the above questions, I would appreciate any insights in how to approach these types of conceptual issues.

+ +

For what it's worth, the MVC framework I'm using is rails.

+",125752,,,,,43148.47917,Should I structure/process data for a view in an MVC architecture?,,2,0,,,,CC BY-SA 3.0,, +366029,1,366033,,2/15/2018 23:51,,4,217,"

I'm working on a display library which in most cases works on a 2D plane but it has some components which display 2D projections of a 3D space.

+ +

This domain has the notion of Position, Size, Position3D and Size3D.

+ +

Position and Size are straightforward:

+ +
data class Position(val x: Int, val y: Int)
+
+data class Size(val width: Int, val height: Int)
+
+ +

but when I introduce the 3D variants height is no longer self-explanatory:

+ +
data class Position3D(val x: Int, val y: Int, val z: Int)
+
+data class Size3D(val width: Int, // this is OK
+                  val height: Int, // is this the Y or the Z axis?
+                  val depth: Int) // is depth universally recognized?
+
+ +

Is there a best practice for this problem?

+ +

I can work around it by doing something like this:

+ +
data class Size3D(val xAmount: Int,
+                  val yAmount: Int,
+                  val zAmount: Int)
+
+ +

but it feels a bit hacky.

+ +

Edit: How should I name 2D vs 3D sizes? Should I keep using height in 2D (y axis) even if in 3D height is for z axis?

+ +

How can I solve this?

+",18074,,18074,,43147.29167,43147.51806,How should I name the fields in a class representing 2D and 3D sizes?,,2,3,,,,CC BY-SA 3.0,, +366031,1,366037,,2/16/2018 0:36,,2,317,"

I often see the term stale used to refer to data that is out of date, and fresh to describe data that is up to date. Both of those make sense to me.

+ +

But what do you call data that is not yet fresh, and has never been fresh?

+ +

The situation that made me ask this question is I'm working on an application that uses an ORM, and there's an operation that fails if performed on an entity that's never been flushed to the database. I wasn't sure what to call the entity in that state.

+",168891,,,,,43148.86181,What to call data that's not ready yet?,,4,7,,,,CC BY-SA 3.0,, +366039,1,366041,,2/16/2018 7:03,,11,1348,"

We have a dependency on a third-party service which exposes a gigantic interface, of which we only need about 3 methods. Additionally, the interface changes frequently...

+ +

I've decided to wrap the interface in a class in our project and only expose the methods that we need.

+ +

But I'm unsure how I should handle the return values... +The interface returns an object of type Storage. We internally have a type StorageModel which is our internal representation of a Storage.

+ +

What would you return in the mapper: Storage or StorageModel? +We have a DataService StorageService which gets the wrapper injected as a dependency.

+ +

Currently I'm doing it basically like this:

+ +
public class StorageService 
+{
+    private readonly IExternalStorageWrapper externalStorageWrapper;
+
+    public StorageService(IExternalStorageWrapper externalStorageWrapper)
+    {
+        this.externalStorageWrapper = externalStorageWrapper;
+    }
+
+    public StorageModel GetStorage(int storageId)
+    {
+        return this.externalStorageWrapper.GetStorage(storageId).ConvertToStorageModel();
+    }
+}
+
+public class ExternalStorageWrapper : IExternalStorageWrapper
+{
+    public Storage GetStorage(int storageId)
+    {
+        using(var ext = new ExternalStorage())
+        {
+            return ext.GetStorage(storageId);
+        }
+    }
+}
+
+ +

What would you say:

+ +
    +
  • Is it good like above, that the wrapper returns the external Storage object and the internal StorageService returns the internal StorageModel?
  • +
  • Or would you return a StorageModel in the wrapper already?
  • +
+",150691,,,user22815,43147.66597,43150.43264,How do I wrap a service so it is simpler,,4,3,1,,,CC BY-SA 3.0,, +366044,1,366047,,2/16/2018 9:27,,4,636,"

Let's say my user requests my ES/CQRS system to open a support ticket:

+ +
    +
  • The controller sends an ask-support command; this command checks if the user has enough credit, then emits an asked-support event.

  • +
  • Somewhere, a listener responsible for side-effects gets this event. It calls an external support API to open a ticket and it retrieves a token from this call.

  • +
  • It sends an open-ticket command containing this token and this results in an opened-ticket event.

  • +
  • Now my controller should return the precious token to the client, but how?

  • +
+ +

With this publish/subscribe logic, my listener doesn't know about the controller and can't tell it ""hey, your ticket is created, here is your token"".

+ +

I could have a read projection that results in a list of support ticket tokens, and the controller could poll it until the token appears (but that isn't great). Or I could somehow temporarily subscribe to the projection's changes (but that's complex).

+ +

What strategy would you recommend for this case? Is my original design flawed?
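One common strategy is correlation: the controller registers interest in its own operation id, and the event listener completes that registration when it observes the opened-ticket event carrying the token. A minimal in-process sketch in Python (all names are illustrative, not from any framework):

```python
import threading

class PendingReplies:
    """Registry that lets a request handler wait for the event that
    completes its own operation, keyed by operation id."""

    def __init__(self):
        self._events = {}
        self._results = {}
        self._lock = threading.Lock()

    def register(self, operation_id):
        # Called by the controller before dispatching the command.
        with self._lock:
            self._events[operation_id] = threading.Event()

    def complete(self, operation_id, token):
        # Called by the event listener when opened-ticket arrives.
        with self._lock:
            done = self._events.get(operation_id)
            self._results[operation_id] = token
        if done:
            done.set()

    def wait(self, operation_id, timeout=5.0):
        # Called by the controller to block until the token exists.
        done = self._events[operation_id]
        if not done.wait(timeout):
            raise TimeoutError(operation_id)
        return self._results[operation_id]
```

In a distributed setup the same idea applies, but the registry would live behind the message bus rather than in process memory.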

+ +

Thanks

+",296322,,,,,43147.46736,"With event sourcing, how to get the response of a call that is a side-effect?",,3,4,,,,CC BY-SA 3.0,, +366046,1,,,2/16/2018 9:48,,4,585,"

where I currently work, there is a team that uses code-generation to generate slight variations of a program. I find this a little bit awkward. I can imagine using code generators that produce complex code for stuff that is usable in different projects, but using a code generator just for variations of an existing program seems overkill to me.

+ +

I believe that code generation is not a good solution for this, because you create many versions of a certain program, which in my eyes is hard to maintain. Instead, you should use object-oriented programming (or another paradigm) to make one program flexible enough to handle all the required variations. That way you only have one version to maintain.

+ +

I was wondering what you think about it? When is using a code-generator really useful? Always? Or only for complex things?

+ +

I hope to hear some great thoughts about this, because to me it seems such an overkill.

+ +

Thanks!

+",296323,,,,,43147.5875,When to use code generation over flexible programming,,3,5,,,,CC BY-SA 3.0,, +366051,1,,,2/16/2018 10:31,,0,860,"

I've been struggling with this issue for a while. Let's say you have a business application that has a rich domain and in one of the front-end views you show a list of open Orders. In this view you show things like that orderDate, customerName, totalAmount,... but the client also wants to see how many items are in the order (basically the count of OrderItems entity). What is the best way to handle this?

+ +

Assumptions

+ +
    +
  • The Order class has a List<OrderItem> which is fetched lazily
  • +
  • Each ResponseDTO object only contains its core properties and any properties from @ManyToOne associations (so Customer would be included when fetching an Order
  • +
+ +

Solution 1

+ +

You can do a REST call for the Order resource by calling /api/orders; you'll get a response with the core properties and the customer data. Then, for each order, you'll have to do a separate fetch to /api/orders/<id>/items. This is basically an N+1 problem with many HTTP calls.

+ +

Solution 2

+ +

I change my initial assumption and eagerly fetch collections and immediately include the OrderItems in the ResponseDTO. The question then becomes, am I going to eagerly fetch each collection of each entity? This seems like a serious performance issue. In this solution the front end would only need to make 1 call.

+ +

Solution 3

+ +

Initial assumptions are in play, so I lazily fetch collections but I move the N+1 problem from the front-end to the back-end (so no extra HTTP calls), not really a solution but at least the expensive HTTP calls are weeded out.

+ +

Solution 4

+ +

In my rich domain model I add an attribute to the Order entity called numberOfItems. Whenever the method addOrderItem(OrderItem item) is invoked on an Order object, I increment the numberOfItems attribute. It is stored in the database as well. This way I can have all my initial assumptions and use the ResponseDTO with only core properties and I only make 1 HTTP call. The issue I have with this solution is that many people say that you shouldn't store aggregate data like that in the database.
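Solution 4 can be sketched as follows (Python used for brevity; the count is updated inside the aggregate, alongside the collection, so the two cannot drift apart):

```python
class Order:
    """Sketch of Solution 4: the Order aggregate maintains its own
    denormalised item count, persisted with the order row, so list
    views never need to touch the OrderItem collection at all."""

    def __init__(self):
        self.items = []
        self.number_of_items = 0

    def add_order_item(self, item):
        self.items.append(item)
        self.number_of_items += 1   # kept in sync inside the aggregate

    def remove_order_item(self, item):
        self.items.remove(item)
        self.number_of_items -= 1
```

Because the count only changes through the aggregate's own methods, the usual objection to stored aggregate data (that it can go stale) is contained to a single class.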

+ +

I realize there's no perfect solution and it always depends, but for the life of me I can't seem to find good discussions about this issue. When you google an issue like this you often find information about HATEOAS but that would fit in my Solution 1, which I don't think is really a solution.

+ +

If anyone can give an alternative solution or improve on any of the solutions I gave, it would be greatly appreciated.

+",52596,,52596,,43147.47569,43173.81736,Including aggregate data in web service response in DDD,,2,5,,,,CC BY-SA 3.0,, +366053,1,366055,,2/16/2018 10:47,,1,833,"

I've lately been involved with WPF and looked into MVVM. I understand that the view model shouldn't be aware of the view.

+ +

However, sometimes I come across a situation where my initial instinct is to write a property in the view model, something like public bool ShowDialog { get; private set; } or public bool ShowPopUp { get; private set; }

+ +

Does it suggest that the view's responsibility is creeping into the view model and that I should approach this differently?

+",260913,,,,,43147.46806,MVVM. Is it a code smell when view model has properties with names show/hide/display that semantically belong to view?,,2,0,1,,,CC BY-SA 3.0,, +366058,1,366062,,2/16/2018 12:19,,33,7219,"

Context

+ +

In Clean Code, page 35, it says

+ +
+

This implies that the blocks within if statements, else statements, + while statements, and so on should be one line long. Probably that + line should be a function call. Not only does this keep the enclosing + function small, but it also adds documentary value because the + function called within the block can have a nicely descriptive name.

+
+ +

I completely concur, that makes a lot of sense.

+ +

Later on, on page 40, it says about function arguments

+ +
+

The ideal number of arguments for a function is zero (niladic). Next + comes one (monadic), followed closely by two (dyadic). Three arguments + (triadic) should be avoided where possible. More than three (polyadic) + requires very special justification—and then shouldn’t be used anyway. + Arguments are hard. They take a lot of conceptual power.

+
+ +

I completely concur, that makes a lot of sense.

+ +

Issue

+ +

However, rather often I find myself creating a list from another list and I will have to live with one of two evils.

+ +

Either I use two lines in the block, one for creating the thing, one for adding it to the result:

+ +
    public List<Flurp> CreateFlurps(List<BadaBoom> badaBooms)
+    {
+        List<Flurp> flurps = new List<Flurp>();
+        foreach (BadaBoom badaBoom in badaBooms)
+        {
+            Flurp flurp = CreateFlurp(badaBoom);
+            flurps.Add(flurp);
+        }
+        return flurps;
+    }
+
+ +

Or I add an argument to the function for the list where the thing will be added to, making it ""one argument worse"".

+ +
    public List<Flurp> CreateFlurps(List<BadaBoom> badaBooms)
+    {
+        List<Flurp> flurps = new List<Flurp>();
+        foreach (BadaBoom badaBoom in badaBooms)
+        {
+            CreateFlurpInList(badaBoom, flurps);
+        }
+        return flurps;
+    }
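A third option avoids both evils by collapsing the create-and-collect loop into a single mapping expression; in C# that would be badaBooms.Select(CreateFlurp).ToList(). A Python sketch of the same idiom, with create_flurp as a stand-in for the real conversion:

```python
def create_flurp(bada_boom):
    # Stand-in for the real per-item conversion.
    return {"source": bada_boom}

def create_flurps(bada_booms):
    # The whole loop body becomes one expression: no temporary
    # variable inside the block, and no extra list parameter.
    return [create_flurp(b) for b in bada_booms]
```

The block is one line long and the function still takes a single argument, so both of the cited Clean Code guidelines are satisfied at once.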
+
+ +

Question

+ +

Are there (dis-)advantages I am not seeing, which make one of them preferable in general? Or are there such advantages in certain situations; in that case, what should I look for when making a decision?

+",226041,,,,,43147.67847,Additional line in block vs additional parameter in Clean Code,,6,16,12,,,CC BY-SA 3.0,, +366069,1,366118,,2/16/2018 14:21,,3,2517,"

I've encountered APIs that claim to be ""restful"" but then expose resources with verbs in the path, instead of reserving those verbs for the HTTP method. +Here are some examples (paths shortened so only the relevant method and path parts are shown):

+ +
    +
  1. POST /things/84/lock
  2. +
  3. POST /things/84/unlock
  4. +
  5. POST /things/84/edit
  6. +
  7. POST /things/prepend (Adding to the beginning of the ordered collection)
  8. +
+ +

Why doesn't it make sense to do instead:

+ +
    +
  1. LOCK /things/84
  2. +
  3. UNLOCK /things/84
  4. +
  5. PATCH /things/84 or EDIT /things/84 (I prefer the first one)
  6. +
  7. PREFIX /things or PREPEND /things
  8. +
+ +

I've only been told again and again that this is not restful because it must only use GET, POST, PUT, DELETE and PATCH to remain restful.

+ +

What logical explanation exists for having verbs in the path section of the URL for restful API?

+ +

Notes:

+ +
    +
  1. I can't show you real life examples so I won't be pointing fingers +at anybody.
  2. +
  3. As for the proxy-limitations excuse I keep hearing: it was valid while no workaround existed. Nowadays, most modern frameworks have mechanisms to allow method override using query variables or headers (with de facto standards, even).
  4. +
+",75111,,75111,,43148.39167,43148.58958,Is it restful to have verbs on the HTTP path instead of the HTTP method?,,3,11,,,,CC BY-SA 3.0,, +366071,1,,,2/16/2018 14:56,,3,536,"

I do a lot of coding in Python and have a lot of if conditions without an else statement, i.e. partial branches.

+ +

E.g.:

+ +
# if a certain kwarg was passed to a function call
+if kwargs.get('a_option'):
+    # overwrite an entry in an already existing dict:
+    config['a_option'] = kwargs['a_option']
+
+ +

This always shows up in my test coverage as a partial hit unless I mark it with a pragma: no branch.

+ +

So my question is: is it considered bad practice to use partial branches? And if it is, are there any popular ways to avoid them? (Not something like ""just add an else without any statements"", like Python's else: pass.)
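One popular way to avoid the partial branch is to make the assignment unconditional, e.g. as below. Note this keys on the kwarg being present rather than truthy, which may or may not match the original intent:

```python
def merge_option(config, kwargs, name):
    # Always executes: takes the kwarg when supplied,
    # otherwise keeps the existing config entry (None if neither).
    config[name] = kwargs.get(name, config.get(name))
    return config
```

Since every line runs on every call, coverage tools report the function as fully covered without any pragma.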

+ +

Note that this is refering to branching in general and not only python. I just used that as example but I'm interested in general how such things are handled in no matter what language.

+",296357,,,,,43147.72083,Best practice to avoid partial branches,,3,4,,,,CC BY-SA 3.0,, +366077,1,366216,,2/16/2018 16:23,,6,764,"

I've created about a dozen services for an intranet. Now I've gotten to the point that these services are more coupled to each other than I'm comfortable with, and I've been having problems where one service failing can cause several other services to degrade.

+ +

I've been working on a EDA design so that I can make these services decoupled from each other but as I'm the only programmer here I need some feedback.

+ +

The design aims to solve these scenarios:

+ +

Scenario 1: Client requests data from Service A, to fulfill that request Service A needs data from Service B but Service B has failed and is unresponsive

+ +

Scenario 2: Client updates an entity in Service B, the reference to this entity needs to be updated in Service A and C

+ +

I've split the design into two parts, Operations and Events

+ +

An operation is sent by a service or client to change an entity in a service. This always results in an event being fired. Each operation contains a user id, operation id and a few timestamps.

+ +

An event is a reaction to an operation; it notifies anyone who's listening that a change has occurred. Each event contains the id of the operation that started it, the id of the user that caused it, and some timestamps. Each event also has a hash of the complete entity so that listeners can compare it against their own versions.
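The event envelope described above might look like this (a Python sketch; field names are illustrative, nothing here is prescribed by RabbitMQ):

```python
from dataclasses import dataclass, field
import hashlib
import json
import time

@dataclass
class Event:
    """Illustrative envelope for the events described above."""
    operation_id: str
    user_id: str
    entity: dict
    created_at: float = field(default_factory=time.time)

    @property
    def entity_hash(self) -> str:
        # Stable hash of the complete entity, independent of key
        # order, so listeners can compare cached versions.
        payload = json.dumps(self.entity, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Serialising with sorted keys matters: two logically identical entities must hash identically, or listeners would re-fetch needlessly.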

+ +

I'm using RabbitMQ as a message bus and each service has persistent durable queues to store any pending events or operations.

+ +

In order to decouple service from each other, each service caches any entities it depends on that belong to another services.

+ +

When a service requires a entity from another service it fetches it and stores it in a table in its database. It then listens for any changes made to any entities in its cache and updates them when an event related to it occurs.

+ +

When storing or fetching an entity from the cache fails, it will re-fetch it from the appropriate service.

+ +

For auditing, I've got a service that listens to events from all services and stores them. It can then recreate any entity at any given point in time.

+ +

I'd love comments from anyone more experienced than I am and whether there are any glaring issues with the design.

+",270627,,90352,,43151.87917,43151.87917,Designing event-driven architecture for multiple services,,2,0,3,,,CC BY-SA 3.0,, +366079,1,366081,,2/16/2018 16:58,,7,1354,"

I'm currently teaching myself event sourcing and have got the concept enough to start developing a dummy app in C# and EventStore. My app is an easy-to-understand bank account system.

+ +

If we model a bank account as a series of withdrawal and deposit events then we can easily calculate the account's current balance by adding up all the deposits and subtracting all the withdrawals, and EventStore gives us a way to implement this in the data store using projections.

+ +

However, if we have a business rule that dictates that accounts can never go overdrawn (i.e. we must reject requests to withdraw from an account if the current balance is insufficient to cover the requested funds), then our BankAccount entity also needs to know its current balance.

+ +

Again, this is easy to calculate by running over the series of bank account events but I have a deep unease with the fact that I have had to implement my projection of ""balance"" twice: once in the data store and once in the domain.
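One way out of the duplication is to write the projection once, as a pure function over the event stream, which both the domain entity and the read side can call. A sketch in Python, assuming events are simple (type, amount) pairs:

```python
def balance(events):
    """Fold a bank account's event stream into its current balance.

    The same function serves as the read-side projection and as the
    domain's view of state, so the logic exists exactly once.
    """
    total = 0
    for kind, amount in events:
        if kind == "deposited":
            total += amount
        elif kind == "withdrawn":
            total -= amount
    return total

def can_withdraw(events, amount):
    # The "never overdrawn" invariant, checked against replayed state.
    return balance(events) >= amount
```

The EventStore projection would still exist as a query optimisation, but the authoritative definition of ""balance"" lives in one shared function.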

+ +

Is this expected from Event Sourcing, and I shouldn't worry about it? Or am I going down the wrong path, and in fact the balance should not be calculated in the domain code and should instead be retrieved from the data store every time it is required? This means that a round trip to the data store would be required every time an entity is modified.

+ +

I've seen in Greg Young's talks that it's supposed to be a natural fit for DDD so there shouldn't be an incompatibility there.

+ +

If I've misunderstood any part of this set up then any help would be greatly appreciated.

+",126283,,,,,43148.32431,How do I avoid implementing my Event Sourced projections twice?,,3,3,2,,,CC BY-SA 3.0,, +366085,1,366471,,2/16/2018 19:06,,2,166,"

I'm facing quite a challenge. We have old, organically grown software, mostly Delphi applications, on top of an ISAM database server (ADS)1, used with a lot of free tables and manually programmed referential integrity (if any at all).

+ +

Our deadline to move that mess to a modern RDBMS is 2020.

+ +

Some of the preferable DB Server systems would be

+ +
    +
  • MariaDB
  • +
  • Postgres
  • +
+ +

Anyway, which RDBMS is finally chosen isn't really the problem I'm asking about. Doing this within 2 years is quite ambitious though; let me explain why:

+ +

We have ~17000 customer installations with quite different (and customized for their workflow processes) DB structures in the field.

+ +

ADS allows to (mis-)use directory structures and those free tables, where the applications often create tables like

+ +
<db_root>/application_dir/DATA/<KeyX>/<KeyY>/<Year>/TableXYZ.adt
+
+ +

or

+ +
<db_root>/application_dir/DATA/<Year>/<KeyX>/<KeyY>/TableXYZ.adt
+
+ +

While this technique allows fast access to the data using a Delphi TTable or SQL query based TDataSet component, the overall referential integrity of that data might contain pitfalls, which nobody in the company is aware anymore.

+ +

I'm a bit in a doubt what would be the key points/methodologies to approach that problem.

+ +

I believe that

+ +
    +
  • Collecting usage statistics from customer installations2
  • +
  • Analyzing the present structures from particularly selected more complex customer installations
  • +
+ +

could be helpful to support more founded decisions what could be the ""least painful"" ways to go for migration.

+ +

Writing the tools to do so would cost some developer efforts a priori though.

+ +
+ +

Also I am (and I am not the only one among my colleagues) not very confident in Embarcadero's promoted FireDAC technology.
+TBH, if it has the same modest, flaw-ridden quality I see in the Delphi system and anything WinAPI-related, I'm not so sure we should follow that path (there are even drivers for MySQL, which might work for MariaDB).

+ +

ODBC won't be really an option, it would be apparently too slow, too generic and produce too much network ping-pong, with the current structure and behavior of our applications.

+ +

So writing the mentioned TTable, TDataSet components based on a native driver ourselves, also comes into consideration.

+ +

What are your recommended methodologies to approach that gordian knot, and to even specify the essential requirements?

+ +
+ +

1)Worth mentioning that the system already was migrated from PARADOX to ADS, as a replacement for an ISAM file based DB Server, a ""few"" years ago.

+ +

2)We already have an app analytics service that could be used for that maybe.

+",83178,,83178,,43147.89583,43154.94236,Recommended methodologies for refactoring a large ISAM based DB structure to a RDBMS?,,1,19,,,,CC BY-SA 3.0,, +366090,1,366095,,2/16/2018 22:00,,1,433,"

For many languages there are various tools which measure code coverage. But how exactly does this work?

+ +

I have some ideas, how this could work:

+ +

Do coverage tools just run the code in the debugger and step through it? Is some kind of static analysis used? Or do some tools just inject some markers into the source code, which are invoked when the code is exercised? I also imagine that in some cases the platform provides tools to measure coverage on a lower level.
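All of the guesses above exist in practice; in Python, for example, coverage tools rely on the interpreter's trace hook (sys.settrace): the interpreter calls a registered function for every executed line, and the tool records which lines it saw. A minimal sketch:

```python
import sys

def measure_coverage(func, *args):
    """Run `func` and record which of its line numbers executed.

    This mimics what coverage tools do: register a trace callback,
    run the code, and collect the line events the interpreter fires.
    """
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer   # keep tracing inside the called frame

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def example(x):
    if x > 0:
        return "positive"       # only covered when x > 0
    return "non-positive"       # only covered when x <= 0

covered = measure_coverage(example, 1)
```

Compiled languages instead instrument at build time (e.g. gcc's gcov inserts counters into the generated code), but the principle is the same: record which parts of the program actually ran.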

+",180748,,,,,43147.94028,How is code coverage measured?,,2,2,1,,,CC BY-SA 3.0,, +366092,1,366239,,2/16/2018 22:14,,1,145,"

I'm reading Applying UML and Patterns and trying to match the OOA&D principles in there to a project I've worked on. It is kind of a retroactive learning exercise.

+ +

The basic question I'm trying to answer is do I have the correct understanding of how to find the actors in this domain? Have I found them correctly?

+ +

The project's basic idea is to connect security alarms of various brands having generally similar behavior (although with significant differences) and distinct proprietary TCP protocols to a server or servers and control them via a mobile app (and also via a web application if you're not an end user).

+ +

So the user will be able to arm/disarm an alarm of any brand from her phone and perform similar commands, as well as be notified when the alarm goes off and other events. Additionally, events will be sent to monitoring centers (third-party). Users are clients of the monitoring centers.

+ +

An arm command sent by the phone will call a REST API which in turn will send the command to the server to be executed by interacting with the alarm through its protocol.

+ +

The server then exposes at least two ports, one for receiving commands and one for alarms of a given model to connect to it.

+ +

I see the following Systems under Discussion (SuD's): the mobile app, the web application containing the REST API, the model X alarms, the model Y alarms, the TCP server (there may be more than one).

+ +

The book talks about Primary actors. Those have user goals fulfilled through using services of the SuD.

+ +

I see the following primary actors:

+ +
    +
  • The user: her goal is to interact with the alarms through the mobile app.

  • +
  • The mobile app: its goal is to interact with the alarms through the REST API.

  • +
  • The web application: its goal is to command the panels through the TCP server and to register and manage users and alarms.

  • +
  • The monitoring center: its goal is to register users and alarms.

  • +
  • The administrator: its goal is to register monitoring centers and manage users and alarms.

  • +
+ +

The book talks about supporting actors. Those provide a service (for example, information) to the SuD.

+ +

I see the following supporting actors:

+ +
    +
  • The TCP server: accomplishes commands from the web application's REST API.

  • +
  • The alarms: obey the TCP server.

  • +
+ +

Offstage actors: have an interest in the behavior of the use case, but are not primary or supporting. I see none.

+ +

Have I found the SuD's and actors correctly?

+",93338,,93338,,43269.87153,43269.87153,OOA&D: finding the systems and actors when multiple processes are involved,,1,20,1,,,CC BY-SA 4.0,, +366103,1,,,2/17/2018 4:12,,4,421,"

I need to make a web app which lets users install/uninstall plugins. Think of something like the Eclipse IDE. The only difference is that you cannot restart it like Eclipse to apply changes. I guess it would be similar to WordPress.

+ +

I thought of implementing this using OSGi as follows: whenever a new plugin is installed, a new HTTP servlet will be registered with the HTTP Whiteboard. These exposed servlet endpoints will be used by other bundles as well as by third-party apps outside the system. Each plugin will be developed separately as an OSGi bundle. I can use the Apache Felix web console to allow the user to install/uninstall the bundle of his choice.

+ +

The problem with OSGi is that there is very little support available.

+ +
    +
  1. Is there any better architecture and framework to do this?
  2. +
  3. Anyone knows about Wordpress plugin ecosystem architecture?
  4. +
+ +

Note: I have already looked at https://stackoverflow.com/questions/323202/how-to-design-extensible-software-plugin-architecture. It was asked 10 years ago. My question is more specific and I hope that lot must have happened in this area in last 10 years.

+",230249,,154049,,43747.78958,43747.78958,Plugin framework for extensible software,,1,10,,,,CC BY-SA 4.0,, +366104,1,366124,,2/17/2018 4:24,,-1,108,"

This article states

+ +
+

However, this design does run into problems with some of the methods of the MutableSequence ABC. Most notably, insert and __delitem__ can't operate on position 0, because you can't change the head of the list when the only reference you have is to the head of the list.

+
+ +

If we have a reference to the head node, why can't we just create a new head node that contains the old head node as its tail?
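That works as soon as the list is represented by an object that owns the head reference (rather than the head node itself being the list). A sketch of the approach the question describes:

```python
class Node:
    def __init__(self, value, tail=None):
        self.value = value
        self.tail = tail

class LinkedList:
    """Wrapper owning the head reference; inserting at position 0
    just rebinds `self.head` to a new node whose tail is the old head."""

    def __init__(self):
        self.head = None

    def insert(self, index, value):
        if index == 0:
            self.head = Node(value, self.head)
            return
        node = self.head
        for _ in range(index - 1):
            node = node.tail
        node.tail = Node(value, node.tail)

    def to_list(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.tail
        return out
```

The article's problem only arises when a bare node is handed out as ""the list"": then callers hold a reference to the node object itself, and no rebinding inside a method can change what they point at.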

+",296417,,,,,43148.74722,Inserting at head for a recursively defined linked list,,1,0,,,,CC BY-SA 3.0,, +366108,1,366125,,2/17/2018 8:12,,2,57,"

I've been looking at ways to let my app's users write plugins for it. However, to give them more options, I decided to implement a polyglot plugin system.

+ +

From the engineering perspective there are multiple ways to do this.

+ +
    +
  • Transpiling: Not many libraries that can reliably do this.
  • +
  • Bindings: Types can be a pain.
  • +
  • Using APIs: Seems like the easiest route, but plugins end up being complex.
  • +
+ +

One method I thought of was to use Jupyter kernels, sending messages (via IPC) between event loops that already run in the respective interpreters. I only want to support interpreted/dynamic languages (at least 5), so it will be easier to load at run time.

+ +

Are there any common methods that I have missed, and how do I go about building an application like this? What do I have to watch out for to prevent the most common pitfalls?

+ +

All this has to work while still supporting some sort of event loop. However, I don't need threading since python doesn't have good support for it anyways.

+",275963,,,,,43148.77083,How do I approach polyglot plugins for asynchrous python applications?,,1,2,,,,CC BY-SA 3.0,, +366121,1,,,2/17/2018 15:59,,0,163,"

Let's say I have a mobile client for answering questions and then purchasing a widget. For example, I might have 10 screens from my UI with about 8 questions per screen. Imagine there is a screen for first name, last name data, 'create personal information' is basically the operation/service call.

+ +

With REST, JSON over HTTP microservices, the thought is that you might have a service for that particular 'create person' request and then we would save to some database. That request may take in 10 properties for that particular call.

+ +

This is a simple example and use case without a lot of detail, but here is the general question. Let's say there are two APIs embedded in this application: one for collecting questions about a person, and the other for making a purchase. When you are working with native mobile clients, should the client talk directly to the microservice, which in turn may flow to the database in one synchronous operation? I was also looking at an event-driven model for microservices; is an asynchronous approach preferred?

+",75803,,94675,,43164.40486,43164.48194,From a mobile application client to API/microservices,,2,0,,,,CC BY-SA 3.0,, +366129,1,,,2/17/2018 19:35,,1,638,"

I have seen this pattern that allows web pages to interact with local system resources through a HTTP interface and I have a couple questions about it:

+ +
    +
  1. What is this pattern called?
  2. +
  3. What recommendations exist for implementing this pattern?
  4. +
  5. What are the security implications?
  6. +
+ +

Basically, the requirement is to access a local resource on the users machine, such as a USB device. The pattern is as follows:

+ +
    +
  1. The user is prompted to download an executable.
  2. +
  3. The executable exposes a service on http://localhost:port.
  4. +
  5. The controlling web page handles the UI/UX and communicates with the service through http.
  6. +
+ +

The Bose update service is one example. Navigate to https://btu.bose.com and you are prompted to download and install the Bose updater.

+ +

+ +

The page begins polling localhost and receiving a timeout error. After installation, the connection succeeds and the page changes:

+ +

+ +

Here is one of the URLs and the response:

+ +
http://localhost:49312/updater/getUpdaterVersion?callback=BoseUpdater.remoteCallback&token=T187369b21bf6
+
+BoseUpdater.remoteCallback(""T187369b21bf6"",{""version"" : ""3.0.1.1891""},0);
+
+",186233,,,,,43333.06042,Web page accessing local system via localhost HTTP API,,1,2,,,,CC BY-SA 3.0,, +366134,1,366135,,2/17/2018 22:53,,3,4247,"

So in networking class they taught us the OSI model, TCP, CDMA, congestion, frames, etc.

+ +

What I would like to know is whether standards that have been around for 20-30 years, like TCP, are burnt into today's hardware, or whether someone could come up with, say, TCPX, write the code, deploy it to two machines on the network, and use that protocol for communication. Since this doesn't affect how to find someone on the network, I assume it's not an issue on generic hardware.

+ +

But say a custom IP protocol was developed. Would incompatible network hardware drop those packets since it can't understand where it needs to be sent?

+ +

What I'm trying to figure out is how low (or high? never understood the model orientation) can someone go on the stack with custom implementations before the network between them can no longer transmit data.
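To make the layering concrete, here is a hedged Python sketch of what framing for such a custom transport protocol could look like, carried above IP (e.g. inside raw IP or UDP packets). Everything here, including the "TCPX"-style field layout, is an invented illustration, not a real protocol: routers only inspect the IP header and forward the rest as opaque payload, which is why two consenting hosts can speak a private transport, while replacing IP itself would break forwarding.

```python
import struct

# Hypothetical "TCPX"-style transport header: src port, dst port,
# sequence number, payload length. Network byte order, no padding.
HEADER = struct.Struct("!HHIH")

def pack_frame(src_port, dst_port, seq, payload):
    # Header plus opaque payload; to intermediate routers this whole
    # frame is just the data portion of an IP packet.
    return HEADER.pack(src_port, dst_port, seq, len(payload)) + payload

def unpack_frame(frame):
    src, dst, seq, length = HEADER.unpack_from(frame)
    return src, dst, seq, frame[HEADER.size:HEADER.size + length]
```

Both endpoints would agree on this layout out of band; nothing in between needs to understand it.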

+",150758,,,,,43148.96111,Can someone implement a custom network protocol?,,1,0,3,,,CC BY-SA 3.0,, +366138,1,366144,,2/18/2018 6:27,,-1,1294,"

Who writes the functional specification in a software company, given that a software company typically comprises software engineers, engineering managers, product managers and directors?

+",243503,,,,,43149.33611,Who writes a functional specification in a software company?,,1,4,,43149.51875,,CC BY-SA 3.0,, +366143,1,,,2/18/2018 7:08,,1,1844,"

I want to implement the ""shopping cart"" feature on my website. Both anonymous/unauthenticated and authenticated can have it.

+ +

While it's clear how to implement it for an authenticated user, it's not completely clear for the anonymous/unauthenticated case. I believe that I'll have to create a long id, such as a GUID or the like, in a database and set a cookie with that long id/GUID, right? Not just a 32/64-bit integer id, because an integer id would be easy to guess or brute-force, correct?
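A minimal Python sketch of that idea, using a cryptographically random id and (optionally) an HMAC signature so a tampered cookie is detectable. The key and helper names are placeholders for illustration:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"keep-this-server-side"  # placeholder, not a real key

def new_cart_id():
    # 128 bits of randomness: infeasible to guess or brute-force,
    # unlike a small sequential integer id
    return secrets.token_urlsafe(16)

def cookie_value(cart_id):
    # optionally sign the id so a tampered cookie is rejected
    mac = hmac.new(SECRET_KEY, cart_id.encode(), hashlib.sha256).hexdigest()
    return cart_id + "." + mac

def cart_id_from_cookie(value):
    cart_id, _, mac = value.rpartition(".")
    expected = hmac.new(SECRET_KEY, cart_id.encode(), hashlib.sha256).hexdigest()
    return cart_id if hmac.compare_digest(mac, expected) else None
```

The random id alone already removes the brute-force concern; the signature is an extra layer, not a requirement.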

+ +

On the other hand, the threat of someone guessing an integer id of a shopping cart isn't high: it's not a big issue if I can guess the shopping cart id of some anonymous/unauthenticated user, I think. Right?

+ +

Your advice?

+",296474,,,,,43150.54792,Storing an id of a shopping cart in a cookie for unauthenticated user,,1,2,,,,CC BY-SA 3.0,, +366145,1,,,2/18/2018 8:17,,1,215,"

I'm writing a little toy operating system using a mix of C and Assembly (It's not meant to be good/fast, just meant to learn from). I know that I can issue an interrupt (I think it's INT 0x15) to check the size of installed memory, which the BIOS does for you. My question is, how does it do this? There obviously needs to be a catch-all function that it performs, because it can't guarantee two systems have the same amount of memory.

+",291510,,167185,,43151.35625,43151.35625,How does the BIOS detect the size of installed memory?,,0,4,,43151.47014,,CC BY-SA 3.0,, +366146,1,,,2/18/2018 9:20,,-2,2023,"
+

Speaking of PHP. It doesn't really have its own runtime, compared to Python, Ruby or Go. PHP is just a text preprocessor.

+
+ +

What exactly does it mean that PHP doesn't have its own runtime?

+",280855,,,,,43149.51111,"What does it mean, ""Php doesn't have its own runtime""?",,2,4,2,,,CC BY-SA 3.0,, +366147,1,,,2/18/2018 10:00,,-1,496,"

The big image is a catalog or ad block that comes from the catalog table, and the small images are the actual product blocks that come from the product table.

+ +

There may be more products and more catalogs; the attached image is just a layout sample.

+ +

Please refer to the image link below for the layout.

+ +

https://i.stack.imgur.com/f4KxN.png

+",296481,,,,,43150.17014,How do i create a logic for dynamic product list page like below using php and bootstrap?,,2,1,,,,CC BY-SA 3.0,, +366155,1,366249,,2/18/2018 13:51,,1,157,"

I have a set of types that can be customized by formatters. A formatter is a function object defined as Func<string, string, string>. A client of the library may then choose to implement FormattedLoggerBase with one of the formatters provided, or write a new one.

+ +
public abstract class FormattedLoggerBase : LoggerBase
+    {
+        public Func<string, string, string> Formatter { get; set; }    
+
+        /.../
+    }
+
+ +

Where is the best place to put all the function objects?

+ +
    +
  • One static class?
  • +
  • Wrapper classes implementing IFormatter?
  • +
  • Factory?
  • +
+ +

A static class has the least overhead, but wrapper classes would send a clear message to users that the formatter can be injected. On the other hand, they add a lot of code overhead, as does a factory. Which one would you choose, and why? Or maybe you have a better solution?
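For comparison, here is the "one static class" option translated into a language-neutral Python sketch: a single namespace of plain function objects (the static-class analogue), with the logger exposing the injectable formatter slot. All names are invented for illustration:

```python
# The "one static class" analogue: one namespace of plain function
# objects, no per-formatter wrapper classes or factory.
def bracketed(level, message):
    return "[" + level + "] " + message

def shouting(level, message):
    return level.upper() + ": " + message

class FormattedLogger:
    def __init__(self, formatter=bracketed):
        # the injectable Func<string, string, string> slot
        self.formatter = formatter

    def format(self, level, message):
        return self.formatter(level, message)
```

Because the formatters are first-class values, injection is visible at the constructor without any wrapper-class ceremony.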

+",203709,,203709,,43149.58056,43151.58958,Where is appropriate place for Function objects?,,2,1,,,,CC BY-SA 3.0,, +366158,1,,,2/18/2018 15:11,,0,717,"

Studying inter-process communication in operating systems, I've learned that asynchronous communication can be built on top of synchronous communication, but it's not clear to me how this can be done. Can you explain? :)
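A sketch of the standard construction: the sender hands the message to a helper (a thread here, standing in for the helper process the textbooks use), and only that helper performs the blocking synchronous send, so the caller returns immediately:

```python
import queue
import threading

def make_async_send(sync_send):
    """Turn a blocking send into a fire-and-forget one: messages are
    queued, and a dedicated worker performs the blocking call."""
    outbox = queue.Queue()

    def worker():
        while True:
            msg = outbox.get()
            sync_send(msg)      # only the worker ever blocks here
            outbox.task_done()

    threading.Thread(target=worker, daemon=True).start()
    return outbox               # caller does outbox.put(msg) and moves on
```

Asynchronous receive is built the same way, with the helper blocking on the synchronous receive and buffering arrivals.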

+",296499,,,,,43149.67708,Asynchronous communication from synchronous communication,,1,1,,,,CC BY-SA 3.0,, +366161,1,366183,,2/18/2018 17:01,,-3,167,"

In one of my projects, I had to name some variables and functions that belong to the person using my app. I named them userCard, getUserInventory() and so on. I named them like that because there were variables and functions that belonged to other persons, like opponentCard, getAllyInventory() etc.

+ +

Now, I have a model class named User, which contains account information for a specific user. Like other variables and functions, this model can belong to the person using my app, as well as to his/her opponent and ally. If I follow the previous conventions, I have to name them userUser, getUserUser(), opponentUser, getAllyUser() etc., which I don't like, especially the userUser and getUserUser() ones.

+ +

What strategy should I follow, assuming I can't change the names of the models, but only the prefixes (user-, opponent-, ally-)?

+",227030,,,,,43150.42569,"Ambiguous use of ""User""",,3,4,,,,CC BY-SA 3.0,, +366170,1,,,2/18/2018 21:50,,3,152,"

Let's say you have a permission system with which you can specify things like: user U is a member of group A, which is a subgroup of group B, which is a subgroup of group C, and all members of group C have access to object O. Then, you can query the system like: ""does user U have access to object O""?

+ +

As far as I can tell, this is a reachability problem, where the vertices are users, groups, and objects, and an edge implies direct access. However, I don't think that most large-scale permission systems frame this as a reachability problem because the only fast reachability algorithms are restricted to planar graphs.

+ +

How are these systems usually designed?
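One clarifying note: a single "does U reach O?" query needs no planarity assumption at all, since plain BFS/DFS answers it in time linear in the reachable subgraph; the restricted graph classes only matter for constant-time *preprocessed* reachability oracles. A sketch over a hypothetical edge list matching the example above:

```python
from collections import deque

# direct-access edges (invented example data):
EDGES = {
    "U": ["A"],   # user U is a member of group A
    "A": ["B"],   # A is a subgroup of B
    "B": ["C"],   # B is a subgroup of C
    "C": ["O"],   # members of C have access to object O
}

def has_access(start, target):
    # breadth-first search: one linear-time pass per query
    seen = {start}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == target:
            return True
        for nxt in EDGES.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

Large systems typically cache or denormalize the expanded memberships rather than compute a fancy oracle, but the underlying question is exactly this traversal.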

+",191583,,,,,43149.90972,How do large scale permission systems work with membership expansion?,,0,3,1,,,CC BY-SA 3.0,, +366175,1,366180,,2/19/2018 1:40,,3,90,"

Recently my team has inherited a code-base with an interesting problem.

+ +

The system keeps historical records by having multiple listeners on the same event. These listeners execute serially, so if we have listeners A, B, and C, then B waits for A to finish, C waits for B, and so on.

+ +

Each listener updates a separate table.

+ +

However, if the system goes down while we're attempting to write B, then the tables in A, B, and C will come out of sync.

+ +

Is there a good solution for getting a transactional write for these listeners without coupling the listeners together?

+",273967,,,,,43151.60069,Guaranteeing transactions with multiple Listeners,,2,2,,,,CC BY-SA 3.0,, +366176,1,,,2/19/2018 3:32,,6,635,"

Currently we are debating how to secure our multiple micro-services. The major concern is that the JWT provided to us will expire before the call is finished (this is in the synchronous design). Here are three proposals:

+ +
    +
  1. Client App has an 'ensure(int minutes)' method before lengthy calls, calling token provider if necessary. Let JWT expire if it hits security filter.
  2. +
  3. Client App sends both JWT and Refresh Token. If JWT expires, use refresh token to get new one and place on response headers via token provider.
  4. +
  5. Create ""login"" service. Login caches refresh info and returns JWT. Send old JWT to get a refreshed JWT via token provider.
  6. +
+ +

Thoughts? Note: my vote is for #1; the rest seem insecure, though convenient.
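A minimal sketch of what proposal 1's ensure(minutes) could look like, with the token modelled as a dict carrying a standard exp claim (seconds since epoch) and refresh standing in for the call to the token provider; both names are placeholders, not a real library API:

```python
import time

def ensure(token, minutes, refresh):
    """Before a lengthy call: if the JWT would expire within `minutes`,
    proactively fetch a fresh one; otherwise keep using the current one."""
    remaining = token["exp"] - time.time()
    if remaining < minutes * 60:
        return refresh()   # token provider issues a new JWT
    return token           # current token will outlive the call
```

The lengthy call then runs with a token guaranteed to outlive it, and the security filter can still reject any JWT that expires anyway.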

+",296534,,296534,,43150.16319,43294.68472,How do you handle JWT expiration for long running calls?,,2,4,3,,,CC BY-SA 3.0,, +366181,1,366187,,2/19/2018 9:30,,8,359,"

I'm trying to figure out whether there is a common name for an object's interface when our intent is to show that this group of objects has a date of creation and a tracked date of last modification. It is an entity in the DB.

+ +

I have thought about IHasModificationDate and ITrackable, but I'm sure that programmers outside the solution where I want to apply this interface would not figure out what it means just by reading the name, so I'm looking for something widespread and commonly known.

+ +

Is there anything like this for such entities? Maybe something from development patterns?

+",296557,,,,,43152.43542,Common name for an interface that has Created and Modified fields,,2,7,1,,,CC BY-SA 3.0,, +366182,1,366207,,2/19/2018 9:32,,0,232,"

My team needs to design an API which sends objects to a queue in the cloud and retrieves objects from it. The data is inserted into the queue as byte[].

+ +

So far we have two ideas, and I would love to hear your notes/ideas about them:

+ +

First version:

+ +
    +
  • public void push(String object)
  • +
  • public String retrieve()
  • +
+ +

Notes: To support this API, any consumer must implement logic to convert his objects to Strings and vice versa.

+ +

Advantages:

+ +
    +
  1. Easy to implement.
  2. +
  3. Easy to maintain.
  4. +
+ +

Disadvantages:

+ +
    +
  1. The consumer of our API must design object-to-String conversion logic that is not related to his business logic.
  2. +
  3. The consumer of our API must call a conversion method himself, even though it is not related to his logic.
  4. +
+ +

Second version:

+ +
    +
  • public void <T> push(T object)
  • +
  • public T <T> retrieve(Class<T> type)
  • +
+ +

Advantages:

+ +
    +
  1. Easy to implement. (In Java, we can use ObjectMapper)
  2. +
  3. Easy to use this API.
  4. +
  5. The consumer of our API does not need to manually convert his objects to anything else.
  6. +
+ +

Disadvantages:

+ +
    +
  1. The types used by the consumer of our API must support JSON conversion, recursively: for a class with object properties, all of the property types must also support JSON conversion.
  2. +
  3. If the consumer does not read the documentation carefully, something won't work or there will be a silent bug.
  4. +
+ +

General notes:

+ +
    +
  1. The API needs to be supported for at least 4 years.
  2. +
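To ground the comparison, here is a Python sketch of the second (generic) design with the byte[] conversion owned by the API via JSON. In Python the Class&lt;T&gt; parameter disappears, but the same constraint from the disadvantages list remains: payloads must be JSON-serializable, and this sketch makes that fail loudly at push time rather than as a silent bug downstream. The queue is an in-memory stand-in for the cloud queue:

```python
import json

class CloudQueue:
    """In-memory stand-in for the real queue; push/retrieve own the
    object <-> byte[] conversion so consumers never do it themselves."""
    def __init__(self):
        self._items = []

    def push(self, obj):
        # raises here, at the API boundary, if obj cannot be
        # represented as JSON
        self._items.append(json.dumps(obj).encode("utf-8"))

    def retrieve(self):
        return json.loads(self._items.pop(0).decode("utf-8"))
```

The first (String-based) design is this same code with json.dumps/json.loads moved into every consumer.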
+",197581,,161805,,43150.41875,43151.40486,Design public API - String or generic type?,,2,2,,,,CC BY-SA 3.0,, +366188,1,366193,,2/19/2018 11:53,,7,6144,"

I'm struggling a little bit with Domain Driven Design because there are so many names and concepts to grasp.

+ +

Today it struck me that I don't know what exactly the difference is between an 'application service' and a 'use case'. Are they the same thing?

+",91694,,,,,43150.53264,DDD - Are 'use cases' and 'application services' different names for the same thing?,,1,2,2,,,CC BY-SA 3.0,, +366189,1,366248,,2/19/2018 12:05,,0,884,"

I'm working on an MVVM project and trying to preserve separation of concerns. Our current architecture has an Entity Framework model and MVVM Light view and viewmodel projects. I'm working ViewModel-first, so I'm passing viewmodels around, with a DataTemplate in the view project to get the proper view.

+ +

Having to type-check every entity to get the correct viewmodel just seems wrong. Is there a better way?

+ +

To set this up suppose I have a model with classes:

+ +
namespace FoodData
+{
+    public class Hamburger:EntityObject
+    {
+            ..... //Add properties
+    }
+
+    public class Taco:EntityObject
+    {
+           .... //Add properties
+    }
+
+}
+
+ +

Now let's say that in my ViewModel namespace I have two different viewmodels, for Taco and Hamburger:

+ +
namespace FoodViewModel
+{
+  public class HamburgerViewModel : ViewModelBase
+  {
+  }
+
+  public class TacoViewModel : ViewModelBase
+  {
+  }
+
+  public class FoodViewModel : ViewModelBase
+  {
+
+    private ViewModelBase _displayViewModel;
+
+    public ViewModelBase DisplayViewModel
+    {
+       get {return _displayViewModel;}
+       set 
+        {
+            _displayViewModel = value;
+            RaisePropertyChanged(nameof(DisplayViewModel));
+        }
+     }
+
+     private EntityObject _selectedFood;
+
+     public EntityObject SelectedFood
+     {
+        get{return _selectedFood;}
+        set
+        {
+          _selectedFood = value;
+          DisplayViewModel = _selectedFood?.GetViewModel();
+          RaisePropertyChanged(nameof(SelectedFood));
+        }
+     }
+
+  }
+}
+
+ +

The current solution I have is using extension methods in the FoodViewModel namespace

+ +
namespace FoodViewModel
+{
+   public static class FoodViewModelExtensions
+   {
+       public static ViewModelBase GetViewModel(this EntityObject obj)
+       {
+            if(obj is Taco)
+            {
+              return new TacoViewModel();
+            }
+            if(obj is Hamburger)
+            {
+              return new HamburgerViewModel();
+            }
+
+            return new EmptyViewModel(""Object not recognized"");
+       }
+    }
+
+}
+
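One common way to get rid of the growing if/else chain in the extension method is a type-to-factory lookup table registered once at startup; the C# analogue would be a Dictionary&lt;Type, Func&lt;ViewModelBase&gt;&gt; (populated by hand or by a DI container). A Python sketch of the shape, with stand-in classes mirroring the example:

```python
class Taco: pass
class Hamburger: pass
class TacoViewModel: pass
class HamburgerViewModel: pass

class EmptyViewModel:
    def __init__(self, message):
        self.message = message

# one registration point instead of a branch per entity type
VIEWMODEL_FACTORIES = {
    Taco: TacoViewModel,
    Hamburger: HamburgerViewModel,
}

def get_view_model(entity):
    factory = VIEWMODEL_FACTORIES.get(type(entity))
    return factory() if factory else EmptyViewModel("Object not recognized")
```

Adding a new food then means adding one dictionary entry, not editing the dispatch logic.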
+",206934,,206934,,43150.88333,43151.48542,Avoid type checking but preserve separation of concerns,,4,11,,,,CC BY-SA 3.0,, +366192,1,366215,,2/19/2018 12:29,,3,157,"

The application I'm developing requires that some data is obtained through different channels. As this is a network process, I have decided to use async programming.

+ +

I have designed the following interface to represent all the possible channels:

+ +
public interface IMyDataChannel
+{
+    Task<MyData> GetMyDataAsync(/* parameters not relevant to the question */);
+}
+
+ +

I need to obtain this data with different parameters, and sometimes from different channels, so ideally I should issue all the requests in parallel for the best performance.

+ +

The problem is that one of the channels is very unstable, and if I throw several requests at it in parallel, or even sequentially, it overloads and returns unexpected results. I need to make sure that this channel never receives parallel requests and that some time passes after each request, to avoid overloading it.

+ +

I think this logic belongs in the channel implementation, because callers should not need to know whether the underlying channel is stable or not. Instead, they should fetch the data from the channels in a unified way.

+ +

At first, I thought I could use a static lock in the implementation of the unstable channel, and make sure that the method sleeps for some time before releasing the lock. The implementation would be like this:

+ +
public class UnstableMyDataChannel : IMyDataChannel
+{
+    private const int stabilityDelay = 1000;
+    private static readonly object stabilityLock = new object();
+
+    public async Task<MyData> GetMyDataAsync(/* parameters not relevant to the question */)
+    {
+        lock(stabilityLock)
+        {
+            //Throws a compiler error because you can't ""await"" inside a ""lock""
+            MyData data = await PrivateMethodsForRetrievingDataAsync(/* parameters not relevant to the question */);
+
+            //Throws a compiler error because you can't ""await"" inside a ""lock""
+            await Task.Delay(stabilityDelay);
+            return data;
+        }
+    }
+}
+
+ +

However, you can't await anything inside a lock statement. Even if you could, this design doesn't completely convince me. Can anyone provide a better (and possible) solution?
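The same serialization is expressible with an async-aware lock, which (unlike the language-level lock statement) may be held across awaits; in C# the usual tool for this is SemaphoreSlim with WaitAsync. An asyncio sketch of the intended behaviour (delay shortened, fetch standing in for the real retrieval call):

```python
import asyncio

class UnstableChannel:
    STABILITY_DELAY = 0.01   # 1 second in the original design
    _gate = None             # shared async lock, created lazily

    @classmethod
    def _lock(cls):
        if cls._gate is None:
            cls._gate = asyncio.Lock()
        return cls._gate

    async def get_data(self, fetch):
        # an async lock may be held across awaits, so the fetch and
        # the cool-down delay are both inside the critical section
        async with self._lock():
            data = await fetch()
            await asyncio.sleep(self.STABILITY_DELAY)
            return data
```

Callers still just await get_data; the throttling stays hidden inside the unstable channel, as intended.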

+",286860,,286860,,43150.66181,43150.67639,Design for avoiding concurrent calls to an interface implementation,,1,6,1,,,CC BY-SA 3.0,, +366196,1,366199,,2/19/2018 13:13,,15,838,"

Developers create scripts to help in their work. For example, to run Maven with certain parameters, to kill unneeded background tasks that crop up in development, or to connect to a certain server. The scripts are not core build scripts nor are they used in our Continuous Integration server.

+ +

What is the best way to manage them? Put them into a directory (maybe /scripts) and check them into Git? Maintain them separately on some file server?

+ +

The argument for treating them as source code is that they are source and can change. The argument for not doing it is that they are just auxiliary tools and that not all developers need any given script (e.g. Linux-specific scripts where some developers work on Windows).

+",14493,,222996,,43152.89861,43158.14583,What is the right way to manage developer scripts?,,3,3,2,,,CC BY-SA 3.0,, +366201,1,366246,,2/19/2018 13:44,,2,243,"

Please see the class below:

+ +
public class Customer
+    {
+        private readonly IList<Order> _orders = new List<Order>();
+
+        public string FirstName { get; set; }
+        public string LastName { get; set; }
+        public string Province { get; set; }
+        public IEnumerable<Order> Orders { get { return _orders; } } //This line
+
+        internal void AddOrder(Order order)
+        {
+            _orders.Add(order);
+        }
+}
+
+ +

As it stands, a person can only have one order assigned to them (there is a good reason for this: if the same person creates another order, then another order is created). However, this could change in the future.

+ +

I was recently asked why I am using a list in the domain model to represent orders. There are two options:

+ +

1) Use a list on the basis that in the future the plan is to have one person with multiple orders.

+ +

2) Use an object for now and change it to a list in future if needed.

+ +

I realise this is a bit pedantic. However, I am working for an organisation that likes to apply the principle of least astonishment. Which option should I go with, with regard to the above? Is there a principle at stake here?

+",65549,,,,,43151.63819,Should I use a collection property if I only require an object at the moment?,,6,4,,,,CC BY-SA 3.0,, +366203,1,366205,,2/19/2018 13:53,,4,466,"

I currently have a website, made of about 20 micro-services and a (still too large) main monolithic web application. The monolith and multiple micro-services each run queries to the same database, each time having to deal with login to the DB and formatting/running the queries.

+ +

I may switch our databases from SQL to NoSQL in the near future; to help this transition, as well as every future modification involving my database, I thought about creating a micro-service that will act as an access point to the database. It will handle the connection, and run the queries passed to it.

+ +
    +
  • What would be the drawbacks of such an architecture?
  • +
  • Is passing the queries as strings a reasonable idea, or should I create a wrapper function inside the micro-service for each queries I have to run (like getThatThing() or updateThisItem())?
  • +
  • Should the micro-service be aware of the type of results for the queries, in order to return an object built with ORM?
  • +
+",296584,,,,,43158.53611,Routing all SQL queries through a single micro-service,,1,3,,,,CC BY-SA 3.0,, +366204,1,366211,,2/19/2018 13:58,,14,4093,"

When you use the concept of polymorphism, you create a class hierarchy, and through a parent reference you call the interface functions without knowing which specific type the object has. This is great. Example:

+ +

You have a collection of animals and you call eat on every animal, not caring whether it is a dog or a cat that is eating. But in the same class hierarchy you have animals with additional functions, beyond those inherited and implemented from class Animal, e.g. makeEggs, getBackFromTheFreezedState and so on. So in some cases your function might need to know the specific type in order to call these additional behaviours.

+ +

For example, suppose it is morning time: if it is just an animal, you call eat; but if it is a human, you first call washHands and getDressed and only then call eat. How do you handle these cases? Polymorphism dies: you need to find out the type of the object, which sounds like a code smell. Is there a common approach to handling such cases?
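One common remedy is to keep the type check out of the caller and move the varying behaviour into the hierarchy itself, e.g. a template/hook method. The method name morning_routine below is invented for illustration: the base class does the plain thing, Human overrides it, and callers never ask for the concrete type:

```python
class Animal:
    def eat(self):
        return "eat"

    def morning_routine(self):
        # callers always call this; they never ask "is this a human?"
        return [self.eat()]

class Dog(Animal):
    pass

class Human(Animal):
    def wash_hands(self):
        return "washHands"

    def get_dressed(self):
        return "getDressed"

    def morning_routine(self):
        # the human-specific steps live with Human, not in the caller
        return [self.wash_hands(), self.get_dressed(), self.eat()]
```

The caller loops over a heterogeneous collection and calls one method; the variation point stays inside the classes.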

+",46961,,46961,,43151.46806,43151.73542,How to handle the methods that have been added for subtypes in the context of polymorphism?,,4,11,3,,,CC BY-SA 3.0,, +366214,1,366322,,2/19/2018 15:47,,2,1753,"

We are developing an application where providers can offer their products and consumers can buy them (a sort of marketplace). We try to apply DDD concepts to our model design, and the implementation follows a microservices style. This implies that the data belongs to a Bounded Context (BC) and only the microservices within that BC can access it. Outside that BC, specific information can only be obtained by querying a public interface of the BC or by subscribing to events published by that BC.

+

My question is about the design of the Orders. Orders are placed by consumers and accepted and fulfilled by providers. They can also be manipulated by customer service. An order right now contains only products from a single provider, but I might be asked in the future to support buying from multiple providers at once.

+

All implementations I've seen of similar systems contain a single Order model, which tends to be really bloated with information about the products, the provider, the consumer, invoicing, deliveries, payments, etc. I am trying to avoid that, but I am facing the question of "Who owns the order"?

+

I can think of the following answers:

+
    +
  1. There is an Orders bounded context which is accessed by both the consumer and the provider. This means that the consumer API has a Place Order operation that talks to the Orders BC and creates an order and the Providers API has an operation like Accept Order which talks to the same Orders BC and changes the status of that same order model.
  2. +
  3. There are 2 Orders BCs: Consumer Orders and Provider Orders. The Consumer API places an order in the Consumer BC. This creates the order and publishes a ConsumerOrderCreatedEvent. The ProviderOrders BC listens to that event an creates a local Order (ProviderOrder) which references the ConsumerOrder. Through the Provider API, the order can be accepted, which will publish a ProviderOrderAcceptedEvent, which will allow the ConsumerOrders to mark the order as accepted and notify the consumer about it.
  4. +
+

My personal preferred approach is option 2 as I can see several benefits (see below), but I'm not sure if they are worth the added complexity.

+

I can't formulate a specific question, but as this problem must have been solved thousands of times, I'd like to know if there is one preferred approach, well-known solution or reference design that can help me.

+

Benefits of separate ProviderOrders and ConsumerOrders bounded contexts:

+
    +
  1. A single ConsumerOrder can generate multiple ProviderOrders (if the order contains products from multiple providers).
  2. +
  3. The workflow of a ProviderOrder might be different/more complex than the workflow of a ConsumerOrder.
  4. +
  5. Both the consumer and the provider need to see their order history, which I envision as a denormalized table for fast reads, but the two order histories contain different data (i.e. consumer orders contain provider information and provider orders contain consumer information) and are queried differently (by consumer and by provider). This can obviously be implemented in a single table, but it seems cleaner if there are two tables, each dedicated to a single purpose.
  6. +
  7. Data isolation/partitioning. Consumer orders are always accessed by consumer Id, Provider Orders are always accessed by ProviderId.
  8. +
+

I'm having a very interesting conversation about this topic on a separate forum, so I thought I should link it here in case someone wants to read more thoughts on it: Same question on NServiceBus discussion board.

+

Note: This is implemented in .NET, by multiple teams, from multiple repositories and Visual Studio Solutions, hosted in a Service Fabric cluster and using NServiceBus for messaging.

+",238707,,-1,,43998.41736,43158.60556,Who owns the Orders in a consumer-provider marketplace like platform?,,3,9,2,,,CC BY-SA 3.0,, +366217,1,,,2/19/2018 16:52,,1,117,"

In the Wikipedia article for “tracing garbage collection”, the following claim is made:

+ +
+

...most modern tracing garbage collectors implement some variant of the tri-color marking abstraction

+
+ +

In this abstraction, objects are grouped into three sets: white, black, and grey. The basic tracing algorithm enumerates all the elements of the grey set, marking each grey object as black until all the unreachable objects are white and can be deleted.

+ +

What the article doesn’t go into is how the grey set is implemented, which has clear implications for the overall performance of the GC algorithm. The most direct and obvious solution is to model the grey set as a data structure of pointers to objects (hash set, stack, queue, whatever), but (keep in mind I have no data on this) this seems quite expensive in terms of space: one pointer per object reference reachable from the call stack.

+ +

How is the “grey list” implemented in modern garbage collectors to maximize efficiency in terms of time and space and what kind of overhead do these solutions incur?
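As a baseline for comparison, the textbook version really is just a work-list: grey is whatever sits in a mark stack/queue of pending objects, black is the already-scanned set, and white is everything never reached. A Python sketch over a toy heap (real collectors replace the deque with a mark stack, often with per-thread buffers and prefetching, but the shape is the same):

```python
from collections import deque

def mark(roots, children):
    """Tri-color marking with the grey set as an explicit work-list.
    `children(obj)` returns the objects that obj references."""
    black = set()
    grey = deque(roots)              # grey: discovered, not yet scanned
    while grey:
        obj = grey.popleft()
        if obj in black:
            continue
        black.add(obj)               # scanning obj turns it black
        for ref in children(obj):
            if ref not in black:
                grey.append(ref)     # white -> grey
    return black                     # the complement stays white: garbage
```

Note the grey structure only holds objects awaiting a scan, so its peak size tracks the frontier of the traversal, not the whole heap.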

+",73217,,,,,43150.75764,How is the “grey list” implemented in modern garbage collectors?,,1,3,,,,CC BY-SA 3.0,, +366218,1,,,2/19/2018 16:53,,6,3901,"

Consider the following command-line program manage-id. It does these things:

+ +
manage-id list                       (list all usernames and user-ids)
+manage-id show  <username>           (shows username's id)
+manage-id clear <username>           (erases username's id) 
+manage-id set   <username> <user-id> (sets usernames id)
+manage-id find  <string>             (list usernames whose id contains <string>)
+
+ +

The above is one way to design the user interface. Here is another:

+ +
manage-id --action list
+manage-id --action show  --username <username>
+manage-id --action clear --username <username>
+manage-id --action set   --username <username> --id <user-id>
+manage-id --action find  --search <string>
+
+ +

The first is a ""positional argument design"" and the second a ""command-line option design"".

+ +

I tend to prefer the ""command-line option design"" for a few reasons:

+ +
    +
  • the arguments can be presented in any order
  • +
  • the option names are self-documenting
  • +
  • removes ambiguity about role of argument (e.g., in the two commands manage-id show johndoe and manage-id find john, the second argument plays different roles).
  • +
+ +

On the other hand, the ""command-line option design"" uses ""options"" that are not really optional.

+ +

My question is this: Is there a recommended (and widely-followed) style choice that prefers one of these two styles over the other for Linux command-line programs?
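For what it's worth, Python's argparse expresses the first style directly as subcommands, and each subcommand's positional arguments still get self-documenting names in the generated --help, which softens the self-documentation argument for the option style. A sketch of the manage-id interface:

```python
import argparse

def build_parser():
    # subcommand-per-action: the positional style from the question
    parser = argparse.ArgumentParser(prog="manage-id")
    sub = parser.add_subparsers(dest="action", required=True)
    sub.add_parser("list")
    sub.add_parser("show").add_argument("username")
    sub.add_parser("clear").add_argument("username")
    set_cmd = sub.add_parser("set")
    set_cmd.add_argument("username")
    set_cmd.add_argument("user_id")
    sub.add_parser("find").add_argument("string")
    return parser
```

Each subcommand can also accept named options of its own, so the two styles can be mixed per action.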

+",25257,,,,,44030.87292,Positional arguments vs options in a command-line interface,,6,1,1,,,CC BY-SA 3.0,, +366233,1,366236,,2/19/2018 23:45,,8,958,"

I'm looking for information on the best approach to message validation in asynchronous-messaging-based services (i.e. services that pull messages from some sort of message queue or broker, rather than providing an HTTP-based API or something similarly request-response oriented).

+ +

In an HTTP API, if the message is invalid (wrong fields, invalid JSON/XML/Whatever) you can return a 400 error. The receiving service can go on as before, and the sender has to deal with the error. (Many HTTP libraries raise an exception when a non-200 response is returned, making the source of the problem very clear.)

+ +

With async messaging processes, I can think of a few approaches:

+ +
    +
  1. The receiving process deals with the error somehow.
  2. +
  3. The receiving service just ignores messages that are not valid.
  4. +
  5. The receiving service logs the invalid message and then moves on to the next message. If things don't seem to be working correctly, hopefully, we can locate the problem in the logs.
  6. +
  7. The receiving service sends an error notification back on some sort of a response channel.
  8. +
+ +

None of these options seem particularly good.

+ +

Option (1) is really just a ""hand-wave"" in the direction of some as-yet-unknown solution.

+ +

Options (2)-(3) seem to run the risk of problems in the sender's code being masked and made difficult to identify (compared to the request-response situation, where you can immediately identify a 400 response, which occurs in a ""location"" very closely linked to the source of the error).

+ +

Option (4) seems to be attempting to recreate a request/response system, negating any reasons for opting for an async-messaging based system. If you're going to request/response, why would you be using a message-broker based system in the first place?

+ +

I would appreciate any information on best practices, and on how the tension described above might be resolved.
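One widely used refinement of options (2)/(3) is a dead-letter queue: the consumer neither crashes nor silently drops the message; it parks the invalid message, together with the reason, somewhere inspectable and replayable, so the sender's bug stays visible without blocking the stream. A minimal sketch (the schema check is a placeholder for real validation):

```python
import json

def consume(raw, handle, dead_letters):
    """Validate, then either process the message or park it with its error."""
    try:
        msg = json.loads(raw)
        if "type" not in msg:       # stand-in for real schema validation
            raise ValueError("missing required field: type")
    except ValueError as err:       # JSONDecodeError subclasses ValueError
        dead_letters.append({"raw": raw, "error": str(err)})
        return False
    handle(msg)
    return True
```

Alerting on dead-letter growth then gives the fast feedback a 400 response would have provided in the request/response world.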

+",69004,,94675,,43156.59306,43706.18472,Message validation in async messaging-based services,,5,4,2,,,CC BY-SA 3.0,, +366235,1,366237,,2/20/2018 0:29,,45,17816,"

My scenario is as follows.

+ +

I am designing a system to receive data from various types of sensors, and to convert and then persist it for later use by various front-end and analytics services.

+ +

I'm trying to design every service to be as independent as possible, but I'm having some trouble. The team has decided on a DTO we would like to use. The outward-facing services (sensor data recipients) will receive the data in their own unique way, then convert it to a JSON object (the DTO) and send it off to the Message Broker. Consumers of the messages will then know exactly how to read the sensor data messages.

+ +

The problem is that I'm using the same DTO in a few different services. An update has to be implemented in multiple locations. Obviously, we've designed it in such a way that a few extra or missing fields in the DTO here and there are not much of an issue until the services have been updated, but it's still bugging me and makes me feel like I'm making a mistake. It could easily turn into a headache.

+ +
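One way to soften the shared-DTO coupling is the tolerant-reader pattern: each consumer reads only the fields it needs, ignores unknown ones, and defaults missing optional ones, so producers can add fields without lock-step upgrades. A minimal sketch (field names here are hypothetical, not from our actual DTO):

```python
import json

def parse_sensor_message(raw):
    data = json.loads(raw)
    # Tolerant reader: ignore unknown fields, default missing optional ones,
    # so producers and consumers can evolve the DTO independently.
    return {
        "sensor_id": data["sensor_id"],      # required
        "timestamp": data.get("timestamp"),  # optional
        "value": data.get("value", 0.0),     # optional with default
    }

# An "extra" field from a newer producer is simply ignored:
msg = parse_sensor_message('{"sensor_id": "s1", "value": 1.5, "extra": true}')
assert msg == {"sensor_id": "s1", "timestamp": None, "value": 1.5}
```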

Am I going about architecting the system wrong? If not, what are some ways around this, or at least to ease my worries?

+",296624,,296624,,43657.82222,43657.82222,Ways to share DTO across microservices?,,3,2,19,,,CC BY-SA 4.0,, +366251,1,,,2/20/2018 11:28,,2,948,"

I am writing gui application. I want to implement following structure:

+ +
    +
  1. project tree with nodes of different types and behaviors (i.e. when right-clicking or selecting, there can be different menu options)

  2. +
  3. Editor window, which can be dynamically split vertically and horizontally to add more edit areas. Each edit area corresponds to one node of the project tree.

  4. +
+ +

I am using C++/Qt, but the problem is in design rather than in programming languages and libraries.

+ +

Currently, to implement the project tree I created an abstract tree node interface which contains a link to the parent and to the children. For each node of a specific type I make a new class. Because I am using Qt, I have a model, which acts as an intermediate object between the view and the actual tree. It seems correct that information from the visual representation can't leak into this tree.

+ +

I have the following problems with implementation:

+ +
    +
  1. Is it right to use this tree as the holder of all my data? Can I use it to hold information about the objects I'm editing, or should I use an external holder for all data but keep a link to it from my nodes?

    + +
      +
    1. If the nodes hold the object data themselves, information about the visual representation has already been passed to the objects (they know the structure of the tree).
    2. +
    3. If I link to an external holder, I can't see how to create such a link and still support dynamic creation/deletion of nodes.
    4. +
  2. +
  3. When I click on a view item, I want the data inside the node to somehow be opened for editing in the selected edit area.

    + +
      +
    1. I can't figure out how to do it without dynamic_cast'ing to the specific node type and passing that internal information to the currently selected edit area.
    2. +
    3. Another ugly-looking approach that came to my mind is to introduce a virtual function in the node, but the node can't implement the logic of this function, because it relates to the visual representation.
    4. +
  4. +
+ +

Example of tree:

+ +

+ +

Models are entities which can be edited by their own vector graphics editor; Materials are table-like key-value properties, where each key can be attached to some primitives in the graphics editor.
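For point 2, one way to avoid both dynamic_cast and visual knowledge inside the nodes is double dispatch (the visitor pattern): the node only knows that it accepts a visitor, and the GUI-side visitor decides which editor to open. A language-neutral sketch in Python with hypothetical names (in C++/Qt the visitor would be an abstract class with one overload per node type):

```python
# The tree side: nodes know nothing about widgets, only that visitors exist.
class Node:
    def accept(self, visitor):
        raise NotImplementedError

class ModelNode(Node):
    def accept(self, visitor):
        return visitor.visit_model(self)

class MaterialNode(Node):
    def accept(self, visitor):
        return visitor.visit_material(self)

# The GUI side: one place that maps node types to editors.
class EditorOpener:
    """Lives in the view layer; knows which editor fits which node type."""
    def visit_model(self, node):
        return "vector-graphics-editor"

    def visit_material(self, node):
        return "key-value-table-editor"

opener = EditorOpener()
assert ModelNode().accept(opener) == "vector-graphics-editor"
assert MaterialNode().accept(opener) == "key-value-table-editor"
```

The node classes stay free of any view information, and adding a new node type forces you (at compile time, in C++) to decide what the view does with it.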

+",288130,,288130,,43151.55833,43152.39722,Architecture of gui application,,3,8,,,,CC BY-SA 3.0,, +366253,1,,,2/20/2018 12:11,,3,1436,"

What kind of NoSql data modelling is best suitable for the following requirement?

+ +

This can be visualised (as a NoSQL document) as a Collection of Records where each Record contains nested Documents. The nested document entities at different levels are either flat or tree-structured. A Record is structurally a single entity, i.e., deleting a record deletes all of the record and its module data.

+ +

+ +

The nested-documents model does not provide flexibility in querying the sub-document items, but it keeps the structure of the record straightforward. On the other hand, if the Modules are represented as different Collections, querying may improve, but the reference and join operations have to be handled in the back end.

+ +

The typical use cases of the system are: the model entities can grow up to 100,000 items per module; entities are queried more frequently than they are updated; and entities are heavily cross-referenced (for example, an entity deep inside the tree structure in module B referring to an entity in module A).

+ +

If the Modules are implemented as separate documents, then maintaining the relationship between the corresponding record and the module entities becomes cumbersome. This becomes a more serious problem as the number of records increases: the respective module collection will grow without bounds, whereas embedding the module inside the record bounds its growth to 100,000 items.

+ +

One proposal for the above issue: create separate documents with Record+Module information, which can then be treated as one logically grouped entity. For example, Record1 as one document, ModuleA of Record1 as a Record1_ModuleA document, and so on. This way the documents with the record-id + module-name combination can be grouped to form one entity.

+ +
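For illustration, the Record1_ModuleA idea amounts to grouping by a composite-key prefix, which any key-value or document store can emulate even without native "document group" support. A hypothetical sketch (the key format and names are illustrative only):

```python
# Composite keys: "<record_id>" for the record itself,
# "<record_id>#<module_name>" for each of its module documents.
docs = {
    "record1": {"name": "Record 1"},
    "record1#moduleA": {"entities": ["a1", "a2"]},
    "record1#moduleB": {"entities": ["b1"]},
    "record2": {"name": "Record 2"},
}

def load_record_group(store, record_id):
    """Reassemble a record and its modules from composite-key documents."""
    prefix = record_id + "#"
    group = {"record": store[record_id], "modules": {}}
    for key, value in store.items():
        if key.startswith(prefix):
            group["modules"][key[len(prefix):]] = value
    return group

g = load_record_group(docs, "record1")
assert set(g["modules"]) == {"moduleA", "moduleB"}
```

A real store would use a range/prefix scan rather than a full iteration, and deleting a record means deleting the whole key prefix.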

Does any NoSQL DB support grouping of related documents? In this case, the record document could be grouped with its modules (stored as separate documents) and the group could be logically identified as a single record entity.

+ +

Right now, I'm thinking in terms of Document Databases. Could this be efficiently handled with any other data modeling techniques?

+",284405,,284405,,43153.16042,43153.16042,NoSQL data modelling for multi level nested documents,,0,7,1,,,CC BY-SA 3.0,, +366257,1,366271,,2/20/2018 14:45,,2,92,"

I'm writing a reusable library, and I'm looking for a way to handle the exceptions that might occur during the processing. For example, I have the following class:

+ +
public interface IObjectFetcher
+{
+    IObject GetObject(int objectId); // interface members may not have access modifiers (pre-C# 8)
+}
+
+public class WidgetFetcher : IObjectFetcher
+{
+    public IObject GetObject(int objectId)
+    {
+        var fileName = DetermineWidgetFileName(objectId);
+        var widget = GetWidgetFromFile(fileName);
+        widget.Frob = GetFrobForWidget(objectId);
+        return widget;
+    }
+
+    private string DetermineWidgetFileName(int widgetId)
+    {
+        return someExternalDbService.GetFileNameForWidget(widgetId); //throws DbException
+    }
+
+    private Widget GetWidgetFromFile(string fileName)
+    {
+        return someExternalFileService.GetWidgetFromFile(fileName); //throws FileException
+    }
+
+    private Frob GetFrobForWidget(int widgetId)
+    {
+        return someExternalDbService.GetFrobForWidget(widgetId); //throws DbException
+    }
+}
+
+ +

How to handle the DbException and FileException so that:

+ +
    +
  • the exception contract of IObjectFetcher.GetObject doesn't get too complex and can allow different implementations
  • +
  • there's a way to access the underlying cause of the exception, like a failed database connection or a missing file
  • +
  • there's a way to differentiate between the DbException caused by looking up a file name and the DbException caused by retrieving a Frob
  • +
  • it's possible to attach the exception metadata like the object ID and the file name in a structured way, eg. to extract exception message templates into a resource file
  • +
+ +

It seems to me that I should create custom exceptions for each step of the process (with the InnerException being either the DbException or the FileException), and then I have two ways of approaching the problem - either have those custom exceptions inherit from a single base class, or have them wrapped by a single exception class. Which approach is correct? Or should I simplify it further?
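For illustration, the "single base class" option could look like the following sketch (in Python for brevity; in C# the `cause` would be the `InnerException` and the metadata would be properties on the exception):

```python
class ObjectFetchError(Exception):
    """Base type named in the IObjectFetcher contract."""
    def __init__(self, message, object_id=None, file_name=None, cause=None):
        super().__init__(message)
        self.object_id = object_id   # structured metadata, not baked into the text
        self.file_name = file_name
        self.__cause__ = cause       # the underlying Db/File exception

# One subclass per step, so callers can tell the steps apart if they care:
class FileNameLookupError(ObjectFetchError): pass   # DbException during lookup
class WidgetFileError(ObjectFetchError): pass       # FileException while reading
class FrobLookupError(ObjectFetchError): pass       # DbException fetching the Frob

def fetch(object_id):
    try:
        raise IOError("db connection failed")       # simulated low-level failure
    except IOError as e:
        raise FileNameLookupError("file name lookup failed",
                                  object_id=object_id, cause=e)

try:
    fetch(42)
except ObjectFetchError as e:   # callers depend only on the base type
    caught = e

assert caught.object_id == 42
assert isinstance(caught.__cause__, IOError)
assert isinstance(caught, FileNameLookupError)  # step is still distinguishable
```

The interface contract stays small (it only promises the base type), the root cause survives in the cause/InnerException chain, and the metadata is available as fields rather than only inside a formatted message string.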

+",106724,,,,,43151.77083,How to handle a hierarchy of exceptions - by wrapping or by inheritance?,,1,0,,,,CC BY-SA 3.0,, +366258,1,366261,,2/20/2018 14:51,,2,139,"

Note: I use the term ""resource"" in opposition to memory here; a resource is a file, a socket, etc.

+ +

Is there a language implementation that uses a mark-and-sweep GC (but not necessarily only that kind ¹) and that is able to automatically free resources (like file handles)? If yes, how is it done?

+ +

By automatically I mean that the user can mark the resources in some way (though not each usage of them), but does not have to remember every time to mark the actual act of freeing them. For example, I consider C# with its using, F# with its use, and OCaml with the protect/finally pattern to be manually freeing resources.

+ +

In other words, resources should be handled 100% deterministically: once they are no longer used, they should be freed right away.

+ +

So far I haven't come across a language implementation with a mark-and-sweep GC capable of doing this.

+ +

¹ I made that remark because maybe some hybrid approach could provide the best of both worlds: RC for resources, M&S for the rest.
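The hybrid idea in the footnote is roughly what CPython does: reference counting reclaims most objects deterministically (so resources tied to them are freed immediately), with a tracing cycle collector for the rest. A sketch of the observable difference (CPython-specific behavior; `Resource` is a stand-in for a file handle):

```python
import gc

closed = []

class Resource:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        closed.append(self.name)  # stand-in for closing a file handle

r = Resource("r")
del r
assert closed == ["r"]        # reclaimed the instant the last reference died

c = Resource("c")
c.self_ref = c                # a reference cycle defeats pure refcounting
del c
assert "c" not in closed      # still alive: RC alone cannot see the cycle
gc.collect()                  # the tracing (cycle) collector reclaims it
assert closed == ["r", "c"]
```

So even in this hybrid, resources caught in cycles are not freed deterministically; only the acyclic (vast) majority are.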

+",66354,,66354,,43151.83264,43151.83264,Mark&sweep GC with automatic freeing resources,,1,12,,,,CC BY-SA 3.0,, +366262,1,,,2/20/2018 16:12,,1,48,"

I am writing a module where I am creating rates for a list of products, and the rates for the products are assigned to different Stores.

+ +

The rates can be created for a group of Stores, for multiple Stores, or for an individual Store.

+ +

As shown in the image Below.

+ +

+ +

a. Group of Store

+ +

A Store group is selected from the list of groups. The list of products is displayed, where we set the rates for the products; the rates will be tagged to the Stores that are tagged in the group.

+ +

b. Multiple Store

+ +

Multiple stores can be selected from the list, and the rates are added to those stores.

+ +

c. Individual Store

+ +

The rates for the products are assigned to an individual branch.

+ +

Entity for Product Rate

+ +
public class ProductRate
+{
+
+    public decimal Price { get; set; }
+
+    public decimal MRP { get; set; }
+
+    public int ProductId { get; set; }
+
+    public int StoreId { get; set; }
+
+    [ForeignKey(""StoreId"")]
+    public virtual Store Store { get; set; }
+
+    [ForeignKey(""ProductId"")]
+    public virtual Product Product { get; set; }
+
+    public bool IsApproved { get; set; }
+
+}
+
+ +

Currently my implementation looks as following:

+ +
 public void CreateRateForIndividualStore(RateIndividualStoreViewModel model)
+ {
+     //Saving the rate of product
+     // Maintaining history of the product rate in ProductRateHistory table
+ }
+
+ public List<string> RateForMultipleStore(RateMultipleStoreViewModel model)
+ {
+      //Saving the rate of product
+     // Maintaining history of the product rate in ProductRateHistory table
+ }
+
+ public List<string> CreateRateByGroup(RateStoreGroupViewModel model)
+ {
+
+ }
+
+ +

Currently I am using a separate implementation for each of the three types.

+ +

The ProductRateHistory table is an exact duplicate of the ProductRate entity.

+ +

Is there a more generic way of doing this task? Any pre-defined design patterns or methods?
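One common way to collapse the three code paths is to normalize every selection type into a plain list of store IDs first, and keep a single rate-creation (and history-mirroring) path behind it. A hypothetical sketch in Python for brevity (the names and shapes are illustrative, not from the code above):

```python
def resolve_store_ids(selection, store_groups):
    """Normalize group / multiple / individual selections to store id lists."""
    kind = selection["kind"]
    if kind == "group":
        return list(store_groups[selection["group_id"]])
    if kind == "multiple":
        return list(selection["store_ids"])
    if kind == "individual":
        return [selection["store_id"]]
    raise ValueError("unknown selection kind: " + kind)

def create_rates(selection, product_rates, store_groups, save, save_history):
    """Single creation path: one rate row per (store, product) pair."""
    for store_id in resolve_store_ids(selection, store_groups):
        for rate in product_rates:
            record = dict(rate, store_id=store_id)
            save(record)          # write ProductRate
            save_history(record)  # mirror into ProductRateHistory

saved, history = [], []
groups = {"g1": [1, 2]}
create_rates({"kind": "group", "group_id": "g1"},
             [{"product_id": 7, "price": 9.5}],
             groups, saved.append, history.append)
assert len(saved) == 2 and len(history) == 2
assert {r["store_id"] for r in saved} == {1, 2}
```

In C# the same shape would be one `CreateRates(IEnumerable<int> storeIds, ...)` method plus a small resolver per selection view model, so the history logic exists in exactly one place.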

+",296478,,,,,43151.675,Architecture for Product rate assigning to multiple store group. C#,,0,5,,,,CC BY-SA 3.0,, +366263,1,366264,,2/20/2018 16:21,,-4,1638,"

In theory (please correct me if I am wrong), an int like the number 9 should take at most 4 bits.

+ +

I haven't coded in low-level languages, but I assume that the int datatype in C and Python shouldn't be different.

+ +

I want to know the reason that in Python an int takes 24 bytes.

+ +

Also, could someone explain how the bits/bytes are calculated for a character?

+ +

Thanks

+ +
>>> var = 1
+>>> import sys
+>>> print sys.getsizeof(var)
+24
+>>> var = 'Raheel'
+>>> print sys.getsizeof(var)
+43
+>>> var = 2
+>>> print sys.getsizeof(var)
+24
+>>> var = 'R'
+>>> print sys.getsizeof(var)
+38
+>>>
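The short answer is that `sys.getsizeof` reports the whole Python object, not just the numeric payload. A small demonstration (CPython-specific; exact numbers vary by version and platform, e.g. 24 bytes for a small int on the 64-bit Python 2 build above, typically 28 on 64-bit Python 3):

```python
import sys

# A Python int is a full object: a header (reference count + type pointer)
# plus an array of fixed-size "digits" holding the value.
small = sys.getsizeof(1)
big = sys.getsizeof(2 ** 100)
assert big > small            # larger values need more digits, so more bytes

# Strings likewise carry a header, length, cached hash, etc. on top of the
# character data, which is why a one-character string is far more than 1 byte,
one_char = sys.getsizeof("R")
assert one_char > 1
# while each additional character adds comparatively little.
assert sys.getsizeof("Raheel") - one_char < one_char
```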
+
+",,user212699,,,,43151.79931,Python variable in bytes,,1,3,,,,CC BY-SA 3.0,, +366275,1,366324,,2/20/2018 19:23,,0,215,"

The most obvious criterion for me is component re-usability, as well as internal state management, but aside from those, are there any other good reasons?

+ +

For instance, I could have a component called ArticleFeed.js. Within ArticleFeed.js I could have a child component known as ArticleFeedItem.js, OR I could just have an HTML markup object in ArticleFeed.js that isn't a React component but is instead just the markup for the 'article feed item'.

+",263240,,,,,43692.30347,When does it make sense to make an individual React component?,,3,0,,,,CC BY-SA 3.0,, +366276,1,366277,,2/20/2018 19:48,,7,1383,"

I don't have a particular coding problem on hand this is just an excercise to improve my thought process.

+ +

A few months back I started learning about functional programming (mostly in R) and I fell in love with it. +Once in a while I try to think of problems that might be (correction: that I might find) difficult to solve in FP. Recently I thought of a situation where I might prefer coding in an imperative style.

+ +

It seems to me that all higher-order functions like map or reduce will iterate over the whole list provided to them, which makes sense. +How would you avoid iterating over the whole list in functional programming, for whatever reason: e.g. the list is too long, the list is actually an infinite series, evaluating each item is very expensive, etc.?

+ +

So to make this problem more specific, let's say I have an array and I want to return every member from the zeroth element to the first element whose value is greater than 10, but I want to stop searching through the list once I find that element greater than 10.

+ +

How would you solve this?

+ +

Example array: 1 2 3 1 2 3 11 1 2 3 1 2 3

+ +

I'm not looking for an answer specifically in R; I've seen enough Haskell & Scala to usually make sense of the code.

+ +

EDIT:

+ +

I forgot to mention why I would rather use imperative programming here. +I find it easier to stop iterating through the array above with while/until loops, because once I reach my terminating condition the interpreter exits the loop; map does not retain information about previous elements, and reduce solves that but continues to iterate to the end of the list.
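A lazy sequence is the usual functional answer: produce elements on demand and simply stop demanding them after the hit, so nothing past the terminating element is ever evaluated. In Haskell this is e.g. `let (a, b) = break (> 10) xs in a ++ take 1 b`; the same idea as a Python generator sketch (assuming the terminating element should be included in the result):

```python
from itertools import count

def up_to_first(pred, xs):
    """Lazily yield items from xs, up to and including the first one
    satisfying pred; no further elements are ever evaluated."""
    for x in xs:
        yield x
        if pred(x):
            return

xs = [1, 2, 3, 1, 2, 3, 11, 1, 2, 3, 1, 2, 3]
result = list(up_to_first(lambda x: x > 10, xs))
assert result == [1, 2, 3, 1, 2, 3, 11]

# Laziness also handles infinite series: nothing past the hit is touched.
infinite = count(1)  # 1, 2, 3, ...
assert list(up_to_first(lambda x: x > 10, infinite)) == list(range(1, 12))
```

The generator body looks imperative, but from the caller's point of view it composes like any other lazy sequence, which is exactly how takeWhile/span work in lazy languages.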

+",171831,,29020,,43152.51736,43173.35903,Functional Programming best practices,,2,2,2,,,CC BY-SA 3.0,, +366281,1,,,2/20/2018 23:58,,3,3084,"

Under REST architecture principles, a RESTful application should be stateless. Each time I invoke an ASP.NET 4 REST service (with the GET verb) that pulls tens of thousands of records, the service paginates them in chunks of 10 (with OData v4), which keeps the UI lightweight because it only loads 10 records at a time. However, each time the user requests the next chunk of 10 records, the ASP.NET controller calls the read method on the data access layer (the Dapper micro-ORM), which in turn pulls the same thousands of records over and over again. Even though the controller only returns 10 records each time thanks to the OData pagination engine, the data access layer (Dapper) queries for the same thousands of records every time, which is expensive and slow. I know I could modify the query that Dapper uses so the pagination filter is pushed down to the query level, but I find that too much of a burden, since the filter OData sends can be quite complex and I don't have the luxury of building a semantic tree to generate the WHERE-clause filters; besides, isn't that OData's job in the first place? Isn't it possible to simply cache the thousands of records somewhere, to avoid calling the database each time the same filter is requested over and over again?

+ +

Oh, and yes: Entity Framework is an absolute no-go; Dapper is mandatory instead.
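To make the caching idea concrete, here is a minimal sketch (in Python for brevity; names are hypothetical) of caching the full result set per filter string with a TTL and slicing pages out of the cached list. In ASP.NET the equivalent would be something like `MemoryCache` keyed by the normalized OData filter, with the Dapper query as the loader:

```python
import hashlib
import time

class PagedQueryCache:
    """Cache one full result set per filter; serve pages from the cache."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (timestamp, full result list)

    def _key(self, filter_text):
        return hashlib.sha256(filter_text.encode("utf-8")).hexdigest()

    def get_page(self, filter_text, fetch_all, skip, take):
        key = self._key(filter_text)
        entry = self._entries.get(key)
        now = time.time()
        if entry is None or now - entry[0] > self.ttl:
            entry = (now, fetch_all())  # one expensive DB query per TTL window
            self._entries[key] = entry
        return entry[1][skip:skip + take]

calls = []
def fetch_all():
    calls.append(1)          # stands in for the expensive Dapper query
    return list(range(100))

cache = PagedQueryCache()
assert cache.get_page("price gt 10", fetch_all, 0, 10) == list(range(10))
assert cache.get_page("price gt 10", fetch_all, 10, 10) == list(range(10, 20))
assert len(calls) == 1       # second page was served from the cache
```

The obvious trade-offs remain: memory for the cached result sets, staleness until the TTL expires, and eviction/invalidation when the underlying rows change.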

+",274862,,,,,43152.70764,Is it possible to cache data on a REST service that returns paginated data?,,4,1,3,,,CC BY-SA 3.0,, +366289,1,,,2/21/2018 7:33,,1,1395,"

This question has been asked quite a few times on Stack Overflow, and I would like to understand the concepts and process rather than directly solve a problem. I found the following post where the explanation was nicely detailed, but I have some follow-up questions which I hope someone can elaborate on.

+ +

video streaming

+ +

The following extract is what I am referring my questions to:

+ +
+

This is however a lot of additional code in your servlet which requires a well understanding of the HTTP Range specification. Usually the servletcontainer's (Tomcat, JBoss AS, Glassfish, etc) own default servlet already supports this out the box. So if there's a way to publish the media folder into the web by standard means, so that you don't need to homegrow a servlet for this, then I'd go on this route.

+
+ +

Here the author explains that a lot of servlet containers (servlets/web servers, I assume) handle the Range request specification by returning the specific requested range of a video stream response. The software I am working on delivers a static video file stream to a servlet that plays the video back in a Chrome pop-up window. I read the video into an input stream and deliver it as the HTTP response to the request, adding the relevant headers (Content-Range, Content-Length, etc.), which is all done on the web server or data server end. The response is returned to a servlet. +Now I am still not sure whether I understand this correctly, or whether I explained the setup above so that it makes sense, but it does work right now: I can see the video playing, I am able to seek back and forward, and I can see in the browser debugger where requests are made for the next range of bytes.

+ +
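For reference, the single-range parsing that servlet containers implement for you boils down to something like this simplified sketch (suffix ranges such as `bytes=-500` and multi-range requests are deliberately omitted here):

```python
def parse_range(header, total_length):
    """Parse a simple 'bytes=start-end' or 'bytes=start-' Range header.
    Returns (start, end) inclusive, or None if the range is unsupported
    (in which case a server would just serve the whole file with 200)."""
    unit, _, spec = header.partition("=")
    if unit != "bytes" or "," in spec:
        return None                       # multi-range not handled here
    start_s, _, end_s = spec.partition("-")
    if not start_s:
        return None                       # suffix form not handled here
    start = int(start_s)
    end = int(end_s) if end_s else total_length - 1
    end = min(end, total_length - 1)
    if start > end:
        return None                       # 416 Range Not Satisfiable in practice
    return start, end

def content_range(start, end, total_length):
    """Value for the Content-Range header of a 206 Partial Content reply."""
    return "bytes %d-%d/%d" % (start, end, total_length)

assert parse_range("bytes=0-499", 1000) == (0, 499)
assert parse_range("bytes=500-", 1000) == (500, 999)
assert content_range(0, 499, 1000) == "bytes 0-499/1000"
```

A container that supports this also returns status 206, sets Content-Length to `end - start + 1`, and advertises `Accept-Ranges: bytes`, which is what lets the browser seek.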

Am I correct, then, that the servlet container does all the byte-range work and I don't have to explicitly write code to send only certain byte ranges in the response? I ask this in hopes of a serious response, because my supervisor insists I need to send the specific byte ranges. My argument was that if this works fine in Firefox, why would I need to rewrite the code completely to render it in Chrome? It only fails in Chrome with large video sizes, where after playing most of the video it gives an EofException (which, as I understand it, is related to Jetty settings) and another error related to a mismatch in the expected length of the audio, which I send as well.

+ +

But back to my question on my interpretation of the way this works: my supervisor is someone with 20+ years of experience, so apparently I have no clue what I am talking about, and I need to verify whether my understanding is correct.

+",210149,,200203,,43152.34028,43328.54028,understanding video streaming on a servlet or webserver,