diff --git "a/stack_exchange/SE/SE 2020.csv" "b/stack_exchange/SE/SE 2020.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2020.csv" @@ -0,0 +1,80951 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense, +403167,1,403168,,1/1/2020 17:03,,-2,71,"

I'm not sure if this question belongs on Stack Overflow or somewhere else. Sorry if it doesn't belong here.

+ +

My question is: in an IDE, does the linter check the whole source code every time (whenever a user updates the code, or periodically), or does it check only the code that has been added or removed?

+",354077,,354077,,43833.75069,43833.75069,Working of linters in an IDE,,1,2,,43831.74931,,CC BY-SA 4.0, +403174,1,403175,,1/1/2020 21:09,,3,219,"

I maintain a small (tiny!) .NET library.

+ +

It has a few ill-defined edge cases, which I call out explicitly in the docs. +These are places where the ""correct"" behaviour is not self-evidently well-defined, so I explicitly said:

+ +
+

For invocations like this, the method may return any of the viable outputs.

+ +

Note that in all of these 'may return any' cases, I suspect that it will always return the top-most value, but that will be dependent on .NET's implementation of various methods and is not in any way guaranteed by this library! Frankly, any of these would be a misuse of the library and could arguably throw instead.

+
+ +


+I want to make a change, to fix a bug, which will still satisfy the first sentence above but which will happen to switch to returning the bottom-most value (probably).

+ +

I think that I could release this as a patch, and still be honouring SemVer. Because even though the in-practice functionality has changed, it is functionality which the docs explicitly instructed users not to rely upon.

+ +

Have I correctly interpreted SemVer?

+",277850,Reinstate Monica --Brondahl--,,,,43832.56667,"In SemVer, am I allowed to change the in-practice behaviour of undefined behaviour usages",,3,5,,,,CC BY-SA 4.0, +403178,1,,,1/2/2020 2:56,,-2,84,"

I am making a website that requires a secure area for the website owner to easily upload new content to. Because this is a relatively simple website there is no account system. I just have a URL for them to go to and enter a password (in a form with a ReCaptcha) to enter the secure area. My issue is, how do I allow access only after they are verified. Here are the options I've considered:

+ +
  1. Set a session before I redirect the user from the verification .php to the file where they are sent by the form. Then check for the session every time that page loads (see the sketch after this list).
  2. Name the secure area file something completely random so that it cannot be guessed. This would NOT be very secure, so I would not want to do this.
  3. Use the verification .php file as the secure area (not ideal, but it's by far the most secure way I can think of).
+ +
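A sketch of option 1, assuming plain PHP sessions (the file names are just placeholders):

// verify.php (run only after the password and ReCaptcha checks succeed)
+session_start();
+$_SESSION['authenticated'] = true;
+header('Location: secure-area.php');
+exit;
+
+// secure-area.php (first thing on every protected page)
+session_start();
+if (empty($_SESSION['authenticated'])) {
+    http_response_code(403);
+    exit('Access denied');
+}

+ +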

Are there any options that I am missing? I couldn't find anything related to this at all online. This site doesn't have to be super secure, but I really don't want the wrong person getting in. I'm also asking so I know the best practice for later projects.

+",354092,,1204,,43832.69583,43832.69583,Securely allow access to a secure area of a website with PHP,,2,4,,,,CC BY-SA 4.0, +403184,1,403185,,1/2/2020 9:36,,1,105,"

I'm doing a project in PHP and I'm implementing Aggregates and Event Sourcing. In order to avoid coding up all the logic related with ES myself I've decided to use a third party library called EventSauce.

+ +

I want to avoid coupling my code, especially my domain, with an external library. However, from what I have seen in the docs I need to use classes that implement the EventSauce interfaces. I would like to use my own interfaces in order to have control over possible changes made to the library, or in case I would like to change to another library in the future. I will illustrate this with an example.

+ +

I have a repository with the following interface:

+ +
interface ItemRepository
+{
+    public function itemOfSku(Sku $sku): Item;
+
+    public function nextSku(): Sku;
+
+    public function save(Item $item): void;
+}
+
+ +

And the library needs an interface, which is kind of equivalent, like:

+ +
interface MessageRepository
+{
+    public function persist(Message ... $messages);
+    public function retrieveAll(AggregateRootId $id): Generator;
+}
+
+ +

Other parts of the library depend on this MessageRepository, so I am forced to implement this interface. If this restriction didn't exist, it would be much easier.

+ +

I would like to know if there is any design pattern that would allow me to decouple from the third party interface and not pollute my domain. For example, I have been wondering whether wrapping the library behind my own interface with an adapter, roughly as sketched below, is the right idea (toMessages() is a made-up mapping helper):
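
final class EventSauceItemRepository implements ItemRepository
+{
+    /** @var MessageRepository */
+    private $messages;
+
+    public function __construct(MessageRepository $messages)
+    {
+        $this->messages = $messages;
+    }
+
+    public function save(Item $item): void
+    {
+        // map my own domain objects/events to the library's Message objects here
+        $this->messages->persist(...$this->toMessages($item));
+    }
+
+    // itemOfSku(), nextSku() and the toMessages() mapping are omitted for brevity
+}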

+",354102,,,,,43832.41597,Decoupling from third party library,,1,3,0,,,CC BY-SA 4.0, +403197,1,,,1/2/2020 15:20,,2,134,"

I have a Foo service that, among other things, retrieves data from a table called Widget. This table has about 50 columns. This service is called by a request from a browser.

+ +

Now, I have another service, Bar, that wants to call Foo. This is a Windows Service that we can trust because there is no end user. Bar also needs data from the Widget table. However, Bar only needs 3 columns of data. Bar needs this before calling Foo.

+ +

My concern is the inefficiency of Bar getting Widget, only for Foo to also do the same thing. I can think of three options:

+ +
  1. Bar makes the same repo call to get all the Widget data.
  2. Same as #1, but Bar passes Widget data to Foo.
  3. Bar makes a different repo call to only get the three columns it needs.
+ +

Number 1 is inefficient because Bar and Foo will get the same data.

+ +

Number 2 works, but Foo has to trust the caller. Foo can't trust the caller in one case, because it's from a web request. But Foo can trust the service call.

+ +

Number 3 also works, but it's slightly inefficient because it's getting Widget data, although it's a much slimmer version. And we would need a DTO just for this Bar process, and that DTO would have just the three fields, as sketched below.
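
For illustration only (written in PHP here; the real field names would be whatever three columns Bar needs):

final class WidgetSummaryDto
+{
+    public $id;      // hypothetical column 1
+    public $name;    // hypothetical column 2
+    public $status;  // hypothetical column 3
+}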

+ +

I'm leaning towards #3, but would love some input, especially if there is another approach that I'm not considering.

+",29526,,,,,43832.66597,Is it ok to have a DTO for each process/app/service in a system?,,3,15,,,,CC BY-SA 4.0, +403199,1,403203,,1/2/2020 15:48,,2,109,"

In a PHP project that I maintain I have a database structure without migrations, hence no way to reproduce it or create a test database on the fly.

+ +

And the queries performed on it are rather complex. The pattern used in code with them is (I try to show a generalized preview of the problem):

+ +
public function someMethod(PDO $connection, string $param) {
+
+  $sql = ""^some complex sql query^"";
+  $stmt = $connection->prepare($sql);
+  $stmt->bindParam(':param', $param);
+  $stmt->execute();
+
+  return $stmt->fetchAll();
+}
+
+
+ +

In order to test this function I created a duplicate database and deleted all the production data from it, keeping only a minimal set. I use the following testing approach when I write PHPUnit tests (a sketch follows the list):

+ +
  1. Populate the database with test data.
  2. Run the method.
  3. Do my assertions.
  4. Delete the test data manually using DELETE SQL statements.
+ +
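In code, a test following these steps looks roughly like this (a sketch; SomeClass, the table and the DSN are made up):

use PHPUnit\Framework\TestCase;
+
+final class SomeMethodTest extends TestCase
+{
+    public function testSomeMethod(): void
+    {
+        // connection to the duplicate test database (placeholder DSN)
+        $pdo = new PDO('pgsql:host=localhost;dbname=app_test', 'user', 'secret');
+
+        // 1. populate the database with test data (hypothetical table and columns)
+        $pdo->exec('INSERT INTO some_table (id, value) VALUES (1, 42)');
+
+        // 2. run the method
+        $rows = (new SomeClass())->someMethod($pdo, 'some-param');
+
+        // 3. do my assertions
+        $this->assertCount(1, $rows);
+
+        // 4. delete the test data manually
+        $pdo->exec('DELETE FROM some_table');
+    }
+}

+ +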

Does this approach create a unit test for my application or an integration one? Keep in mind that the whole logic is implemented in an SQL query instead of on the PHP layer.

+ +

For example, a query like this:

+ +
SELECT
+    value,
+    CASE
+        WHEN ""time""::time > '00:00' AND ""time""::time < '12:00' THEN true
+        ELSE false
+    END AS observed_in_morning
+FROM
+    observations
+ +

In the example above, the logic that indicates whether an observation happened in the morning or not is written in the SQL statement; this is clearly business logic.

+ +
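For comparison, if that rule lived on the PHP layer instead of in the SQL, it would be something like this (sketch; it works because zero-padded 'HH:MM' strings compare lexicographically):

function observedInMorning(string $time): bool
+{
+    return $time > '00:00' && $time < '12:00';
+}

+ +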

So is a test checking whether an observation happened in the morning a unit test or an integration one?

+",249660,,249660,,43833.37778,43833.37778,Testing Queries Themselves with Test data is a Unit test or an Integration test?,,1,1,,,,CC BY-SA 4.0, +403205,1,403206,,1/2/2020 16:39,,2,608,"

Consider something like....

+ +

(Pseudocode, JavaScript-like)

+ +
var capitalizeWord = function(word){
+    word.toAllCaps(); 
+}
+
+ +

What if I now need to handle multiple words, but still sometimes only one? Is it better to do...

+ +
var capitalizeWords = function(words){
+    _.forEach(words, function(word){
+        capitalizeWord(word);
+    });
+}
+
+var capitalizeWord = function(word){
+    word.toAllCaps(); 
+}
+
+ +

OR just have one function

+ +
var capitalizeWords = function(words){
+    _.forEach(words, function(word){
+        word.toAllCaps();
+    });
+}
+
+ +

And when I need to capitalize just one word, wrap it in an array before calling capitalizeWords?

+ +
capitalizeWords([word]);
+
+",354132,,,,,43833.65903,"When a function can take a single item or multiple, is it better to have more than one function or just one function that takes arrays?",,4,6,1,,,CC BY-SA 4.0, +403209,1,,,1/2/2020 18:29,,-2,346,"

I'm a great fan of refactoring but I've been wondering about the issues raised by refactoring.
+Fowler advises refactoring

+ +
  • to make code readable to all users
  • to make the code structure more sensible, e.g. methods in the class most affected by them
  • to make the overall project more OO - and hence more easily maintained and extended
+ +

To this end he dislikes temps and long lists of parameters in methods. He sees a lot of temps or parameters as symptoms of more fundamental problems with the code structure. But getting rid of temps can itself raise other issues. Obviously if a temp is brought in to hold the output of one method before it's input to another method, you lose code readability if you just pipeline the two methods on a single statement and eliminate that temp. So we can't eliminate all temps if we want to maintain code readability.

+ +

But we should try to remove as many as possible. The crudest way to eliminate some of these temps is to use global variables. But with this you are making mere utility objects into class-wide fields (attributes). This would lead to a lot of fields, with few of them having a claim to be germane to the definition, or working essence, of the class per se. This is hardly in the spirit of good OO design.

+ +

Often we see the input parameters to a class made into that class' fields. +However, if the input parameters are simply containers (e.g. input files, collections or aggregates) for the data vital to a class, then it makes no sense to make such things into class fields. It would make more sense for the constructor to extract the essential data from these inputs and make those objects/collections the class fields. Likewise if a class is a data processor of some kind - think of a text processor which has file inputs for raw text and 'noise' words and which produces data structures for 'noise' and non-noise words - then I think it's arguable to have the output data structures of the process also defined as class fields. This is especially so when the output objects will be used as inputs to, say, some stat analysis or graphic display classes. Here we clearly need some methods to getOutputCollection1(), getOutputCollection2(), etc. Having those output collections stored as class fields with their predefined getters facilitates this retrieval without re-running the processing to produce them. Other types of class (model classes for real-world objects, e.g. HotelRoom) may not require such judicious consideration of field data as the attributes would be more obvious.

+ +

I have seen very little focus on selection or use of class fields. It's hard enough to find anything but a short discussion on this matter. There's no mention at all of their potential for eliminating temps in Fowler's or other books on refactoring.

+ +

If any other SO members have views on this question, I would like to hear them.

+ +

I attach a code example that is small enough and big enough, I think. +This program takes a load of pre-formatted messages from a file holding a large block of messages from a free email service (""gPost"") and takes another file of noise words. It imports into suitable data structures both the message text and the noise words. It then creates a LinkedList of all non-noise words in the messages from each mailbox user and outputs all users' top 10 most frequently used non-noise words.

+ +
public class GPostProcessor
+{                                                                                               
+    private List<GPost> gPosts = new ArrayList<>();                                                                         // GPost ArrayList holding ALL contents of input file
+    private Set<String> noiseWords;                                                                                         // HashSet buffer for noise words
+    private Set<MyString> mailboxHandles = new TreeSet<>();                                                                 // TreeSet for holding mailbox handles alphabetically
+    private Map<MyString, List<WordCount>> wordUsages = new HashMap<>();                                                    // HashMap collection for vocab list of all mailboxes
+
+
+                                                                                                                            // CONSTRUCTORS ...
+    /** Parameterless constructor.<br>
+     *  The system cannot operate without input file parameters.<br>
+     *  So the parameterless constructor essentially displays an error message <br>
+     *  before conducting an orderly shut down of the program.
+    */
+    GPostProcessor()
+    {
+        System.out.println(""ERROR at GPostProcessor - no parameters in constructor !"" 
+                            + ""\nThe system will now shut down in 5 seconds."");
+        try
+        {
+            TimeUnit.SECONDS.sleep(5);
+        }
+        catch(Exception e)
+        {
+            e.printStackTrace();
+        }
+        System.exit(0);
+        System.out.println();
+        throw new IllegalArgumentException(""No parameters for GPostProcessor instance ! ""
+                                            + ""\nSystem shutdown follows."");
+    }
+
+    /** Parametered constructor.<br>
+     *  Input messages and noise-words files are applied to input parameters.<br>
+     *  @param messagesFile A String holding the name of the messages file.
+     *  @param noiseWordsFile A String holding the name of the noise words file.
+    */
+    GPostProcessor(String messagesFile, String noiseWordsFile)
+    {
+        if (messagesFile == null)
+        {
+            System.out.println(""ERROR at GPostProcessor parametered constructor !""
+                                + ""\nNull parameter for messages file.""
+                                + ""\nThe system will now shut down in 5 seconds . . ."");
+            try
+            {
+                TimeUnit.SECONDS.sleep(5);
+            }
+            catch(Exception e)
+            {
+                e.printStackTrace();
+            }
+            System.exit(0);
+        }
+        else if (noiseWordsFile == null)
+        {
+            System.out.println(""ERROR at GPostProcessor parametered constructor !""
+                                + ""\nNull parameter for noise-words file.\n""
+                                + ""\nThe system will now shut down in 5 seconds . . ."");
+            try
+            {
+                TimeUnit.SECONDS.sleep(5);
+            }
+            catch(Exception e)
+            {
+                e.printStackTrace();
+            }
+            System.exit(0);
+        }
+        else                                                                                                                // Fill in primary data structures ...
+        {
+            this.gPosts = parseToGPosts(loadGPostMessages(messagesFile));                                                   // Extract all gPosts from input file to one big string
+            this.noiseWords = loadNoiseWords(noiseWordsFile);                                                               // Load noise words from their file into a HashSet
+        }
+    }
+
+                                                                                                                            // GETTERS & SETTERS - mainly used for the JUnit tests.
+
+
+
+    /******************************************************************
+     ***** This is the MAIN class for the gPostProcessor package. *****
+     *
+     * @param args A String array into which arguments may be put.
+     *****************************************************************/
+    public static void main(String[] args)
+    {
+        final Instant startTime = Instant.now();
+        new GPostProcessor(""messages3.txt"", ""noiseWords.txt"").processGPosts();
+        final Instant endTime = Instant.now();
+        final long duration = Duration.between(startTime, endTime).toNanos();
+        System.out.printf(""\n%-10s%12s%1s%2s"", ""Duration: "", duration, "" "", ""ns"");
+    }
+
+
+                                                                                                                            // DATA MANIPULATION METHODS ...
+    /** Processes the file of gPosts, <i>messagesFile</i>, excluding words in the
+     *  noise words file, <i>noiseWordsFile</i>.
+     *  Each gPost message is drawn from an ArrayList&lt;String&gt; buffer generated
+     *  by method <i>loadGPostMessages(.)</i>, converted into GPost objects and analysed
+     *  before adding its mailbox handle and WordCount list to a HashMap collection, <i>wordUsages</i>.
+     *  Finally, a report is generated showing the 10 most used words for each gPost
+     *  user whose emails are listed in <i>messagesFile</i>.
+    */
+    void processGPosts()
+    {
+        GPost gPost;                                                                                                        // Temp GPost object
+        MyString mHandle;                                                                                                   // Temp gPost address handle
+        List<WordCount> messageList;                                                                                        // Temp WordCount list got from a GPost
+
+        for(int p = 0; p < gPosts.size(); p++)                                                                              // Analyse each gPost in the input file
+        {
+            System.out.println(""Processing gPost #"" + (p + 1) + "" ..."");                                                    // Display email # being processed
+            gPost = gPosts.get(p);
+            mHandle = getMailboxHandle(gPost);                                                                              // Extract the user's mailbox handle
+            mailboxHandles.add(mHandle);                                                                                    // Add mailbox handle to TreeSet of handles
+            messageList = getMessageList(gPost);                                                                            // Extract the wordCountList from the gPost
+            mergeInUserVocab(mHandle, messageList);                                                                         // Merge wordCountList with existing vocabList
+        }
+        Map<MyString, List<WordCount>> wordUsagesTop10 = prepareDataForReport();                                            // Prepare data for report ...
+        generateReport(wordUsagesTop10, ""report.txt"");                                                                      // ... and generate report for mailbox user
+    }
+
+                                                                                                                            // FILE BUFFERING METHODS ...
+
+    /** Reads in gPost items line by line from the text file, <i>messagesFile</i> and puts 
+     *  its contents immediately into a single big String object.<br>
+     *  This ensures minimal delay at the slowest stage of the system, i.e. reading input. 
+     *  Parsing and analysing of gPost items from the input file is done by other non-I/O methods.
+     *  @param messagesFile A String holding the name of a file holding all gPosts.
+     *  @return A String object holding all the input file contents..
+    */
+    String loadGPostMessages(String messagesFile)
+    {
+        final Instant startTime = Instant.now();
+        StringBuilder contents = new StringBuilder(),
+                      textLine = new StringBuilder();                                                                       // Output gPosts as a single StringBuilder
+
+        if (messagesFile == null)   
+        {                                                                                                                   // Check for null filename
+            contents = null;
+            System.out.println(""ERROR ! No or null filename for messages file.""
+                + ""\nPlease check messages file and ensure valid name entered."");
+        }
+        else
+        {
+            try
+            {
+                BufferedReader buffReader = new BufferedReader(new FileReader(messagesFile)); 
+                while ( !textLine.append(buffReader.readLine() ).toString().equals(""null"") )                                // While file has content in next line ... 
+                { 
+                    contents = contents.append(textLine);                                                                   // ... add line to file contents StringBuilder
+                    textLine.setLength(0);                                                                                  // ... then clear textLine for next read.           
+                }
+                buffReader.close();      
+            }
+            catch(IOException re)
+            {
+                contents = null;
+                System.out.println(""\nReading of "" + messagesFile + "" failed."");
+                re.printStackTrace();
+            }
+        }
+        final Instant endTime = Instant.now();
+        final long duration = Duration.between(startTime, endTime).toNanos();
+        System.out.printf(""\n%-10s%12s%1s%2s"", ""Load Duration: "", duration, "" "", ""ns"");
+        return (contents == null) ? null : contents.toString();
+    }
+
+
+    /** Reads in noise words from the text file, <i>noiseWordsFile</i>, and puts
+     *  them into a HashSet&lt;String&gt;. <br>
+     *  Noise words are assumed to be on separate lines of the text file to
+     *  facilitate rapid reading of the input.
+     *
+     *  @param noiseWordsFile A String object giving the name of the file
+     *  that holds all the noise words.
+     *  @return A HashSet&lt;String&gt; holding all the noise words.
+    */
+    Set<String> loadNoiseWords(String noiseWordsFile)
+    {
+        Set<String> noiseWords = new HashSet<>();
+        String textLine;
+
+        try                                                                                                                 // Read in noise words from textfile, ""noiseWordsFile""
+        {
+            BufferedReader buffReader = new BufferedReader(new FileReader(noiseWordsFile));
+            textLine = buffReader.readLine();
+            while (textLine != null)
+            {
+                noiseWords.add(textLine.toLowerCase().trim());
+                textLine = buffReader.readLine();
+            }
+            buffReader.close();
+        }
+        catch(IOException re)
+        {
+            System.out.println(""Read of "" + noiseWordsFile + "" failed.\n"");
+            re.printStackTrace();
+        }
+        return noiseWords;
+    }
+
+                                                                                                                            // PROCESSING METHODS ...
+                                                                                                                            // ==================
+    /** Check each block of the gPost string to see if it is in valid gPost format.
+     *  @return A String value that holds the error message if the gPostsText is not validly 
+     *  formatted or a null string otherwise.
+     *  */
+    String validGPost(StringBuilder gPostsText)
+    {
+        int mHandle = gPostsText.substring(0, 254).indexOf(""@gPost.com""),                                                   // Find index of first email handle
+            start = gPostsText.indexOf(""gPostBegin""),                                                                       // ... next gPostBegin ...
+            end = gPostsText.indexOf(""gPostEnd"");                                                                           // ... and next gPostEnd
+        if (mHandle < 0)        
+        {
+            return ""ERROR in input text: missing sender address."";
+        }
+        else if (start < 0)
+        {
+            return ""ERROR in input text: missing message preamble."";
+        }
+        else if (end < 0)
+        {
+            return ""ERROR in input text: missing message epilogue."";
+        }
+        else if (start < mHandle)
+        {
+            return ""ERROR in input file: message preamble before sender."";
+        }
+        else if (end < mHandle)
+        {
+            return ""ERROR in input file: message epilogue before sender."";
+        }
+        else if (end < start)
+        {
+            return ""ERROR in input file: message epilogue before preamble."";
+        }
+        else
+        {
+            return null;
+        }
+    }
+
+    /** Converts a large string containing many gPosts into a ArrayList<GPost> collection.
+     *
+     *  @param gPostsString A String object that carries the important contents of a gPosts' file.
+     *  @return An ArrayList<GPost> collection holding sender and message text for the input gPosts.
+    */
+    ArrayList<GPost> parseToGPosts(String gPostsString)
+    {   
+        ArrayList<GPost> result = new ArrayList<>();                                                                        // Copy input file String to a StringBuilder
+        StringBuilder sender = new StringBuilder(""""),
+                      message = new StringBuilder("""");
+        String error;
+        int postNum = 0,                                                                                                    // gPost index
+            iHandle,                                                                                                        // Email handle index
+            start,                                                                                                          // Start index of message
+            end = 0;                                                                                                        // End index of message
+
+        if (gPostsString != null)
+        {
+            StringBuilder gPostsText = new StringBuilder(gPostsString.replaceAll(""\\r\\n|\\r|\\n"", "" ""));                   // Turn line breaks to spaces
+            while (gPostsText.length() != 0)
+            {   
+                if ((error = validGPost(gPostsText)) != null)                                                               // Invalid gPost formatting ?
+                {
+                    System.out.println(""\n Post #"" + postNum + "" - "" + error);                                              // => Output error 
+                }
+                else                                                                                                        // Valid gPost format ...
+                {
+                    iHandle = gPostsText.indexOf(""@gPost.com"");                                                             // Find index of handle
+                    sender.append(gPostsText.substring(0, iHandle + 10).trim());                                            // Store sender handle 
+                    start = gPostsText.indexOf(""gPostBegin"") + 10;                                                          // Start of gPost message is after preamble
+                    end = gPostsText.indexOf(""gPostEnd"");                                                                   // End of gPost message is before epilogue
+                    message.append(gPostsText.substring(start, end).trim());                                                // Strip out gPost message
+                    if (message.length() == 0)
+                    {
+                        System.out.println(""\n Post #"" + postNum + "": Empty message."");                                     // Empty message warning
+                        break;
+                    }
+                    else
+                    {
+                        result.add(postNum, new GPost(sender.toString(), message.toString()));                              // Add to gPost ArrayList
+                        start = end + 8;                                                                                    // Reset start index for remainder of gPostsString
+                        sender.setLength(0);                                                                                // Clear sender ...
+                        message.setLength(0);                                                                               // ... and message StringBuilders
+                    }
+                }
+                gPostsText = gPostsText.delete(0, end + 8);
+                postNum++;
+            }
+        }
+        else
+        {
+            result = null;
+            System.out.println(""No gPosts produced. \nPlease check input file"");
+        }
+        return result;      
+    }
+
+
+    /** Returns the mailbox handle for the sender of a gPost message.<br>
+     *  The mailbox handle for a sender with an email address of <i>john.west@gPost.com</i>
+     *  is simply the email with the <i>{@literal @}</i> character and the domain name removed,
+     *  i.e. <i>john.west</i>. <br>
+     *
+     *  @param gPost A GPost object holding the contents of a gPost message.
+     *  @return Returns the handle of the email address sending a GPost message.
+    */
+    MyString getMailboxHandle(GPost gPost)
+    {
+        return new MyString(gPost.getSender().substring(0, gPost.getSender().indexOf(""@"")));
+    }
+
+
+    /** Analyses the input gPost to generate the list of non-noise words in it.
+     *  Noise words are drawn from the class attribute, <i>noiseWords</i>.<br>
+     *  @param gPost A GPost object holding the contents of a gPost message. .
+     *  @return Returns the unsorted WordCount list extracted from a GPost message.
+    */
+    List<WordCount> getMessageList(GPost gPost)
+    {
+        List<WordCount> result = new LinkedList<>();                                                                        // Returned List of non-noise words
+        List<String> wordList;                                                                                              // List for words extracted from gPost message
+        WordCount newWordCount;                                                                                             // A WordCount object for additions to messageList
+        String word;                                                                                                        // Any word in wordList
+        int i = 0,                                                                                                          // Iteration indices
+            j = 0;
+
+        String message = gPost.getMessageText().toLowerCase();                                                              // Extract message text from gPost & lowercase it
+        message = message.replaceAll(""[\\p{P}&&[^\u0027]]"", "" "");                                                           // Replace punctuations bar apostrophes by space
+        wordList = new LinkedList<>(Arrays.asList(message.split("" +"")));                                                    // Split message into words then put these into a LinkedList
+        while (i < wordList.size())                                                                                         // Remove noise & non-alphabetic words
+        {
+            word = wordList.get(i).trim();
+            if ( (noiseWords.contains(word)) || (!word.matches(""[a-z]+$"")) )                                                // Remove noisewords & non-alphabetic char words ...
+            {
+                wordList.remove(i);                                                                                         // ... this includes words with apostrophes
+            }
+            else                                                                                                            // CARE! Don't increment iterator index after removal !
+            {
+                for(j = 0; j < result.size() && !result.get(j).getWord().equals(word); j++);                               // Find index of word in result, if present
+                if (j < result.size())                                                                                     // Already on messageList ?
+                    result.get(j).setCount(result.get(j).getCount() + 1);                                                  // => increment its count
+                else                                                                                                       // Not on messageList ...
+                {
+                    newWordCount = new WordCount(word, 1);                                                                 // => create WordCount object for that word with count = 1 ...
+                    result.add(newWordCount);                                                                              // ... and add it to the result list
+                }
+                i++;                                                                                                       // Increment wordList iterator
+            }
+        }
+        return result;
+    }
+
+
+    /** Merges a WordCount list (<i>messageList</i>) got from a gPost message with the
+     *  WordCount list for the entire message vocabulary (<i> vocabList</i>) already held
+     *  for that same gPost user.
+     *  To make subsequent mergings as efficient as possible, we exploit the fact that the
+     *  word use frequency by a person follows a Zipf distribution - the discrete variable
+     *  version of the Pareto or 80/20 distribution. Basically, the vast majority of the words
+     *  a person uses comes from a small minority of their vocabulary. So by keeping <i>vocabList</i>
+     *  sorted in order of frequency and the most frequent word at the head of the list, we
+     *  can sharply reduce the average search time for words in subsequent messages.
+     *
+     *  @param messageList A LinkedList&lt;WordCount&gt; object holding WordCount data from
+     *  a single message.
+     *  @return A LinkedList&lt;WordCount&gt; collection showing a given user's WordCount data.
+    */
+    void mergeInUserVocab(MyString mHandle, List<WordCount> messageList)
+    {
+        int j,                                                                                                              // Indices for list items
+            mark;
+        WordCount wordCount;
+        List<WordCount> vocabList = wordUsages.get(mHandle);                                                                // Temp for mailbox WordCount list
+
+        if (vocabList == null)                                                                                              // When existing vocabList for that handle is empty ...
+        {
+            messageList.sort(WordCount::compareByCount);                                                                    // ... sort the messageList by COUNT and assign it to vocabList
+            wordUsages.put(mHandle, messageList);
+        }
+        else                                                                                                                // When the existing vocabList for that handle is NOT empty ...
+        {                                                                                                                   // ... check if each word in messageList exists in vocabList
+            for(int i = 0; i < messageList.size(); i++)
+            {
+                wordCount = messageList.get(i);
+                for(j = 0; j < vocabList.size(); j++)                                                                       // Check vocabList for each word in messageList
+                {
+                    if(wordCount.getWord().equals(vocabList.get(j).getWord()))                                              // When a word match is found ...
+                    {
+                        vocabList.get(j).setCount(vocabList.get(j).getCount() + wordCount.getCount());                      // .. update vocabList count accordingly                    
+                        break;                                                                                              // ... and stop searching vocabList
+                    }
+                }                                                                                                           // NOTE: j is closing index of vocabList search loop            
+                mark = j;                                                                                                   // Set insertion index at either vocabList match index or list tail 
+                while (mark > 0 && vocabList.get(mark - 1).getCount() < wordCount.getCount())                               // Find correct count location for updated or new word
+                {
+                    mark--;
+                }
+                if (j == vocabList.size())                                                                                  // When the messageList item is not in the vocabList ...
+                {
+                    vocabList.add(mark, wordCount);                                                                         // .. insert the new item at the correct index ...
+                }               
+            }
+        }
+        wordUsages.put(mHandle, vocabList);                                                                                 // wordUsages now updated & re-sorted by count.
+    }
+
+
+    /** Prepares the processed WordCount lists for each user so that the 10 most frequently used words 
+     *  are stored against email address for each mailbox owner. <br/>
+     *  Since wordUsages contains lists of words that are SORTED BY FREQUENCY ALONE and have not yet 
+     *  been sorted alphabetically, we have to look at words on each list after the tenth word in case 
+     *  some of these have the same count as the latter.
+     *  After all of these are identified and added to the top 10 words, the resulting list can
+     *  be sorted by firstly count and then alphabetically for words on equal count.
+     *  @return A HashMap collection of the most frequently used words in all gPost users hashed by 
+     *  the user's email handle.
+     * */
+    Map<MyString, List<WordCount>> prepareDataForReport()
+    {
+        Map<MyString, List<WordCount>> result = new HashMap<>();                                                            // Temp to hold top 10 words for each user
+        List<WordCount> wordUsage = new LinkedList<>();                                                                     // Temp for user vocab
+
+        System.out.println(""Preparing data for report ..."");                                                                // Log system status
+        for(MyString mHandle : mailboxHandles)                                                                              // Find each user's mailbox address
+        {                                                                                                                   // Locate associated user's vocab list
+            wordUsage = wordUsages.get(mHandle);                                                                            // Locate associated user's vocab list (sorted by count)
+            int i = Math.min(10, wordUsage.size());                                                                         // The first 10 words are the provisional top 10
+            while (i < wordUsage.size() && wordUsage.get(i).getCount() == wordUsage.get(9).getCount())                      // ... include any later word tied with the 10th ...
+            {
+                i++;
+            }
+            wordUsage.subList(i, wordUsage.size()).clear();                                                                 // Keep only the top 10 plus ties
+            wordUsage.sort(WordCount::compareByCountThenWord);                                                              // Finally re-sort by count and then word ...
+            if (wordUsage.size() > 10)
+                wordUsage.subList(10, wordUsage.size()).clear();                                                            // Trim list to first 10 elements
+            result.put(mHandle, wordUsage);                                                                                 // ... and update 
+        }   
+        return result;",,,,,,,,,,,,,,,
+403211,1,,,1/2/2020 19:47,,5,4172,"

I wanted to know what conventions fellow developers follow while designing REST API error responses.

+ +

For all the not-found GET resources, do you also send some sort of body indicating that the resource was not found (or a more detailed error message), or only the status code with a null or blank response? For example, is a body like the one sketched below worth returning?
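
A sample of what I mean (the shape is invented purely for illustration):

HTTP/1.1 404 Not Found
+Content-Type: application/json
+
+{
+    ""error"": ""not_found"",
+    ""message"": ""No resource with id 42 exists""
+}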

+",254574,,,,,44207.89028,Should 404 response include body,,4,7,2,,,CC BY-SA 4.0, +403213,1,403276,,1/2/2020 20:03,,1,90,"

I have an external data model framework with frequent updates to the names of fields. Say for one iteration I implement against:

+ +
- EnterpriseModelObject
+-- EnterpriseDomainContentList
+--- EnterpriseDomainContent
+-- ContentDeliveryResult
+--- ErrorList
+---- Error
+--- WarningList
+---- Warning
+
+ +

In another iteration this will become:

+ +
- EnterpriseModelObject
+-- EnterpriseContentList
+--- EnterpriseContent
+-- DeliveryResult
+--- EnterpriseErrorList
+---- EnterpriseError
+--- EnterpriseWarningList
+---- EnterpriseWarning
+
+ +

I have a number of microservices that require converting a variety of bean types to this model and back. For this I have a library that does the common work but I am still struggling with updates when my bean model changes.

+ +

The microservices are implemented to directly use the object names in the model and the library simply provides common conversion methods. So when I start updating the conversion library this breaks the unit tests in the service APIs. Ideally I want to be able to localize all object name changes to my library and the services will return the correct fields without requiring micromanagement.

+ +

What kind of approach should I take with the services so that they automatically use the updates without compilation or unit test issues?

+",,user313675,,user313675,43832.85694,43833.90139,API design for data model with frequent field name updates,,1,7,,,,CC BY-SA 4.0, +403219,1,,,1/3/2020 0:09,,-1,57,"

I'm curious how containers are portable across development/testing/cloud environments with no worry about the underlying infrastructure. Does the Docker Engine essentially standardize operating systems to the point where the engine is able to launch the same container from different operating systems? Or does a cloud deployment environment for the container(s) need to be running Linux if the development environment was also running Linux? Thanks!

+",354158,,,,,43833.07569,Containers Across Operating systems,,1,0,,,,CC BY-SA 4.0, +403226,1,,,1/3/2020 2:34,,0,93,"

I'm starting a project to build an instant messaging application for mobile devices. +Although this is not the complete list of components, these are the main data flows I'm concerned about:

+ +
  1. Mobile app connected to WebSocket, for message analysis (abusive language, etc.) and persistence.
  2. Mobile app connected to MQTT broker, for message publishing and message receiving.
  3. Mobile app connected to RESTful API, for business handling (balance check, token expiring, purchases, friend lists, user profile); the different APIs can be distributed.
+ +

Let me explain my concern: +I want to save as much bandwidth as possible. Obviously I would prefer to have the fewest channels, but at the same time I can't just have an open connection to the message broker; I need business logic in place for each message.

+ +

Because I don't want to sacrifice delivery speed, I decided to have separate channels for analysis and for message publishing (WebSocket and MQTT over TCP) instead of sending everything through the WebSocket, and another one for business concerns like blocking a user, purchases, etc. (the RESTful API). Based on the responses from the REST API, every so often I would apply some logic for sending messages.

+ +

What do you think of this approach? Do you have any suggestions?

+",354160,,,,,43833.28681,Architecture question about Instant Messaging Platform with MQTT,,1,0,,,,CC BY-SA 4.0, +403227,1,403249,,1/3/2020 3:16,,3,398,"

Gang of Four’s Flyweight design pattern introduces the concept of intrinsic and extrinsic states:

+ +
+

The key concept here is the distinction between intrinsic and extrinsic state. Intrinsic state is stored in the flyweight; it consists of information that’s independent of the flyweight’s context, thereby making it sharable. Extrinsic state depends on and varies with the flyweight’s context and therefore can’t be shared. Client objects are responsible for passing extrinsic state to the flyweight when it needs it.

+
+ +

In other words, the state of an object can be decomposed with respect to a group of objects as an intrinsic state and an extrinsic state, where the intrinsic state is the intersection of the states of all objects of the group and the extrinsic state is the difference of the state of the object and the intrinsic state. Since the intrinsic state is duplicated in each object of the group, space can be saved by replacing the group of objects by a single flyweight object storing a single intrinsic state. The flyweight object cannot however store the multiple extrinsic states of the objects of the group, so the extrinsic states are stored outside and passed to the flyweight object in each request from client objects. Such an optimised communication protocol is often called a stateless protocol since the flyweight object does not store extrinsic state. Examples of stateless protocols include IP and HTTP (and more generally any REST protocols, where intrinsic state is called resource state and extrinsic state is called application state).

+ +

For instance, let’s take three objects with their respective clients:

+ +
+

o1 ← c1
+ o2 ← c2
+ o3 ← c3

+
+ +

We can decompose the state of each object with respect to the three objects:

+ +
+

state 1 = intrinsic stateextrinsic state 1
+ state 2 = intrinsic stateextrinsic state 2
+ state 3 = intrinsic stateextrinsic state 3

+ +

where:

+ +

intrinsic state = state 1state 2state 3
+ extrinsic state 1 = state 1 \ intrinsic state
+ extrinsic state 2 = state 2 \ intrinsic state
+ extrinsic state 3 = state 3 \ intrinsic state

+
+ +

Here the intrinsic state is duplicated. So storing it in a single flyweight object (and moving the extrinsic states into clients) saves space:

+ +
+

o ← c1, c2, c3

+
+ +

So far so good. Now I have been wondering if it is possible to classify attributes as intrinsic state or extrinsic state from the definition of an unshared class, in order to derive a flyweight class.

+ +

Is the following syntactic characterisation of intrinsic and extrinsic state for unshared classes correct?

+ +
  • intrinsic state = immutable instance variables ∪ class variables;
  • extrinsic state = mutable instance variables.
+ +

For instance, let’s define the following unshared class:

+ +
class Unshared:
+    __z = 0
+
+    def __init__(self):
+        self.__x = 0
+        self.__y = 0
+
+    def f(self):
+        return self.__x + self.__y
+
+    def g(self):
+        self.__y += 1
+
+    @classmethod
+    def h(cls):
+        cls.__z += 1
+
+ +

The proposed characterisation would yield:

+ +
  • intrinsic state = {__x, __z};
  • extrinsic state = {__y}.
+ +

which seems correct. That unshared class can then be transformed into a flyweight class for saving space:

+ +
class Flyweight:
+    __z = 0
+
+    def __init__(self):
+        self.__x = 0
+
+    def f(self, y):
+        return self.__x + y
+
+    @classmethod
+    def h(cls):
+        cls.__z += 1
+
+ +

Edit

+ +

My characterisation was incomplete, as noted by @DocBrown, because it prevented mutable flyweight objects, by excluding mutable instance variables that are instance or class specific from the intrinsic state. And it also incorrectly included immutable instance variables that are instance specific in the intrinsic state.

+ +

Let’s define the following terms:

+ +
  • A group is a set of unshared objects from which a single flyweight object is derived.
  • An instance-scope instance variable is an instance variable that is specific to an individual object.
  • A lesser-group-scope instance variable is an instance variable that is common to a smaller group of unshared objects than the group under consideration.
  • A group-scope instance variable is an instance variable that is common to the group of unshared objects under consideration.
  • A greater-group-scope instance variable is an instance variable that is common to a larger group of unshared objects than the group under consideration.
  • A class-scope instance variable is an instance variable that is common to all objects.
+ +

An instance-scope instance variable and a lesser-group-scope instance variable are not sharable by the group of unshared objects under consideration, while a group-scope instance variable, greater-group-scope instance variable, a class-scope instance variable and a class variable are sharable by the group of unshared objects under consideration.

+ +

We can conclude that the complete characterisation of intrinsic and extrinsic state for unshared classes is the following:

+ +
  • intrinsic state = group-scope instance variables ∪ greater-group-scope instance variables ∪ class-scope instance variables ∪ class variables;
  • extrinsic state = instance-scope instance variables ∪ lesser-group-scope instance variables.
+ +

However, as @DocBrown rightly pointed out, this characterisation is not purely syntactic. Class variables, class-scope instance variables and some mutable instance-scope instance variables can be identified syntactically just by looking at the unshared class definition. But greater-group-scope instance variables, group-scope instance variables, lesser-group-scope instance variables, immutable instance-scope instance variables and the remaining mutable instance-scope instance variables can only be identified semantically.

+ +

When deriving a flyweight class from an unshared class, its instance-scope instance variables and lesser-group-scope instance variables are moved outside of the flyweight class, its group-scope instance variables become instance-scope instance variables of the flyweight class, its greater-group-scope instance variables are moved outside of the flyweight class, its class-scope instance variables remain class-scope instance variables of the flyweight class, and its class variables remain class variables of the flyweight class.

+ +

For instance, let’s define the following unshared class:

+ +
import collections
+
+class Unshared:
+    __L = 0  # immutable class variable
+    __instances = collections.defaultdict(list)  # mutable class variable
+
+    def __init__(self, GROUP, I, i, j, k):
+        self.__instances[GROUP].append(self)
+        self.__GROUP = GROUP  # immutable group-scope instance variable
+        self.__I = I  # immutable instance-scope instance variable
+        self.__i = i  # mutable instance-scope instance variable
+        for instance in self.__instances[GROUP]:
+            instance.__j = j  # mutable group-scope instance variable
+        self.__K = 0  # immutable class-scope instance variable
+        for instances in self.__instances.values():
+            for instance in instances:
+                instance.__k = k  # mutable class-scope instance variable
+
+    def query(self):
+        return {
+            ""instance"": [self.__I, self.__i],
+            ""group"": [self.__GROUP, self.__j],
+            ""class"": [self.__K, self.__k, self.__L, tuple(self.__instances)]
+        }
+
+    def manipulate(self):
+        # Update the mutable instance-scope variables.
+        self.__i = None
+        # Update the mutable group-scope variables.
+        for instance in self.__instances[self.__GROUP]:
+            instance.__j = None
+        # Update the mutable class-scope variables.
+        for instances in self.__instances.values():
+            for instance in instances:
+                instance.__k = None
+        self.__instances[None] = []
+
+ +

The new characterisation yields:

+ +
  • intrinsic state = {__GROUP, __j, __K, __k, __L, __instances};
  • extrinsic state = {__I, __i}.
+ +

That unshared class can then be transformed into a flyweight class for saving space:

+ +
class Flyweight:
+    __L = 0
+    __instances = {}
+
+    def __init__(self, GROUP, j, k):
+        self.__instances[GROUP] = self
+        self.__GROUP = GROUP
+        self.__j = j
+        self.__K = 0
+        for instance in self.__instances.values():
+            instance.__k = k
+
+    def query(self, I, i):
+        return {
+            ""instance"": [I, i],
+            ""group"": [self.__GROUP, self.__j],
+            ""class"": [self.__K, self.__k, self.__L, tuple(self.__instances)]
+        }
+
+    def manipulate(self):
+        # Update the mutable group-scope variables.
+        self.__j = None
+        # Update the mutable class-scope variables.
+        for instance in self.__instances.values():
+            instance.__k = None
+        self.__instances[None] = []
+
+ +

Unshared class usage:

+ +
ra1 = Unshared(""a"", 1, 1, 1, 0)  # unshared instance of group ""a""
+ra2 = Unshared(""a"", 2, 2, 1, 0)  # unshared instance of group ""a""
+rb1 = Unshared(""b"", 1, 1, 2, 0)  # unshared instance of group ""b""
+rb2 = Unshared(""b"", 2, 2, 2, 0)  # unshared instance of group ""b""
+
+print(ra1.query())  # {""instance"": [1, 1], ""group"": [""a"", 1], ""class"": [0, 0, 0, (""a"", ""b"")]}
+print(ra2.query())  # {""instance"": [2, 2], ""group"": [""a"", 1], ""class"": [0, 0, 0, (""a"", ""b"")]}
+print(rb1.query())  # {""instance"": [1, 1], ""group"": [""b"", 2], ""class"": [0, 0, 0, (""a"", ""b"")]}
+print(rb2.query())  # {""instance"": [2, 2], ""group"": [""b"", 2], ""class"": [0, 0, 0, (""a"", ""b"")]}
+ra1.manipulate()
+print(ra1.query())  # {""instance"": [1, None], ""group"": [""a"", None], ""class"": [0, None, 0, (""a"", ""b"", None)]}
+print(ra2.query())  # {""instance"": [2, 2], ""group"": [""a"", None], ""class"": [0, None, 0, (""a"", ""b"", None)]}
+print(rb1.query())  # {""instance"": [1, 1], ""group"": [""b"", 2], ""class"": [0, None, 0, (""a"", ""b"", None)]}
+print(rb2.query())  # {""instance"": [2, 2], ""group"": [""b"", 2], ""class"": [0, None, 0, (""a"", ""b"", None)]}
+
+ +

Flyweight class usage:

+ +
fa = Flyweight(""a"", 1, 0)  # shared instance of group ""a""
+fb = Flyweight(""b"", 2, 0)  # shared instance of group ""b""
+
+print(fa.query(1, 1))  # {""instance"": [1, 1], ""group"": [""a"", 1], ""class"": [0, 0, 0, (""a"", ""b"")]}
+print(fa.query(2, 2))  # {""instance"": [2, 2], ""group"": [""a"", 1], ""class"": [0, 0, 0, (""a"", ""b"")]}
+print(fb.query(1, 1))  # {""instance"": [1, 1], ""group"": [""b"", 2], ""class"": [0, 0, 0, (""a"", ""b"")]}
+print(fb.query(2, 2))  # {""instance"": [2, 2], ""group"": [""b"", 2], ""class"": [0, 0, 0, (""a"", ""b"")]}
+fa.manipulate()
+print(fa.query(1, None))  # {""instance"": [1, None], ""group"": [""a"", None], ""class"": [0, None, 0, (""a"", ""b"", None)]}
+print(fa.query(2, 2))     # {""instance"": [2, 2], ""group"": [""a"", None], ""class"": [0, None, 0, (""a"", ""b"", None)]}
+print(fb.query(1, 1))     # {""instance"": [1, 1], ""group"": [""b"", 2], ""class"": [0, None, 0, (""a"", ""b"", None)]}
+print(fb.query(2, 2))     # {""instance"": [2, 2], ""group"": [""b"", 2], ""class"": [0, None, 0, (""a"", ""b"", None)]}
+
+",184279,,184279,,43954.49792,43954.49792,Syntactic characterisation of intrinsic and extrinsic states,,2,6,0,,,CC BY-SA 4.0, +403228,1,,,1/3/2020 4:03,,21,7918,"

I am making a full-stack web application for a professor. At his request, the passwords and usernames are generated programmatically, and they cannot be changed or reset by the students. (If you forget your password, you ask the professor, who can look it up.) Does this tightly controlled system eliminate the need to follow the usual best practices for storing passwords in a database?

+ +

In case it's relevant, the app does not contain any association or identifiers between the student's identifying information (name, gender, etc.) and their username and password.

+ +
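
For reference, the practice in question is to store only a salted hash of each password. A minimal sketch, assuming Node.js and the bcrypt package (the question doesn't specify a stack):

+ +
const bcrypt = require('bcrypt');
+
+async function storePassword(generatedPassword) {
+  // Persist only the hash; the plaintext is never written to the database.
+  return await bcrypt.hash(generatedPassword, 12);
+}
+
+async function checkPassword(submittedPassword, storedHash) {
+  return await bcrypt.compare(submittedPassword, storedHash);
+}
+
+ +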
+ +

EDIT 1: Thank you for the many responses. This is very helpful to me as I didn't take the traditional route to this career and have some holes in my knowledge that probably seem fundamental to you. Here are a few points of clarification:

+ +
    +
  • I am a freelancer on my first-ever freelancing project, and the client/customer is a professor. I am not his student, and this is not an assignment.
  • +
  • My task is to replace an existing application that is very old and for which the source code is lost.
  • +
  • The application is used in a class taught by several professors in different schools in the US. Much of the content is just static, like a textbook. However, you can also take some questionnaires/instruments developed by the professors to get insight into the topic of the course relative to a real-world example of your choosing (i.e., the information you supply is not about yourself or other people).
  • +
  • My original goal in having usernames/passwords was to identify users so that I could enforce expiration of access to the content, and also control permissions. Permissions matter because in addition to students, there is the concept of administrator users (who have a dashboard where they can view lists of usernames/passwords they have created, and create more) and a super-admin (who, in addition to what admins can do, can create other admin users).
  • +
  • The primary reason I started down the path of usernames and logins is that that is what the old app had. But the old app stored (lots of!) student information. The professor in charge now does not want to run afoul of FERPA laws, so he has changed the requirements there.
  • +
  • A professor can create another username/password set and add it to the current class if needed. But they aren't forced to currently. (They can just give out the original password.) This was the professor's decision when I asked him if he wanted a ""reset password"" button on the list.
  • +
+",241967,,241967,,43836.85,43836.85,Is this scenario an exception to the rule of never storing passwords in plaintext?,,9,10,4,,,CC BY-SA 4.0, +403234,1,,,1/3/2020 7:35,,-1,90,"

I have been googling for days, but I can't find a clear, intuitive document about this embedding technique.
+In their paper they say: We present StarSpace, a general-purpose neural embedding model that can solve a wide variety of problems.
+But they don't explain how. Thanks in advance.

+",354178,,,,,43833.37292,Can somone explain facebook starspace?,,1,0,,,,CC BY-SA 4.0, +403238,1,,,1/3/2020 9:55,,6,476,"

Tennis is played as singles or doubles. I considered making my tennis scoring model logic refer to ""teams"" throughout its naming since ""player"" wouldn't take into account doubles. However, seeing teamOne and teamTwo throughout the code to accommodate both modes seems awkward since a team of one person doesn't make sense and the sport is usually played as singles.

+ +

What would be a good way to reconcile this? I try to adhere to the Swift API Design Guidelines since it's the language I'm using. Maybe I'm bikeshedding and I should just go with teamOne and teamTwo even if it's imprecise?

+",267972,,267972,,44089.02222,44095.82361,What is a good approach to naming when modeling a sport that can be between either individuals or teams?,,5,9,,,,CC BY-SA 4.0, +403239,1,,,1/3/2020 10:19,,0,55,"

I'm working with a project that uses ASP.NET Core 2.2. The main solution contains different projects, including APIs for a mobile application, APIs to integrate the system with third parties, a web application, etc.

+ +
    +
  1. Mobile API - For internal application
  2. +
  3. Public API - To integrate our system with others
  4. +
  5. Web - Website
  6. +
+ +

We're using token-based authentication for the mobile API project. However, to expose our public APIs to third parties, we don't want to use token-based authentication; instead, we decided to use a key/secret pair.

+ +

But the issue is that another project (an integration that uses our system internally) has its own registered users, and they don't want to create accounts in our system. I suspect that, in this case, the key/secret approach wouldn't work. We currently use an API key for every project (tenant), and they have to keep the API key for their users. However, this could create security issues if we expose API endpoints publicly.

+ +

Along with this, we'd like to track requests per user too.

+ +

What type of authentication solves this problem? and how can we handle such a scenario?

+",241751,,241751,,43833.43611,43833.43611,Which authentication should be used for external users (not registered with the system),,0,11,,,,CC BY-SA 4.0, +403248,1,403290,,1/3/2020 14:56,,0,560,"

I have a queue of MoveableItems in a class BackgroundTaskQueue (based on the example from Microsoft here). Clients call BackgroundTaskQueue.QueueBackgroundWorkItem to add a MoveableItem to the queue.

+ +

An instance of BackgroundTaskQueue is passed to a class QueuedHostedService (which is also based on this). QueuedHostedService processes the queue on a separate thread from the enqueue action.

+ +

The BackgroundTaskQueue.DequeueAsync method blocks asynchronously on a SemaphoreSlim. When a client calls the BackgroundTaskQueue.QueueBackgroundWorkItem method, it queues the MoveableItem and signals (releases) the semaphore. QueuedHostedService then processes the MoveableItem.

+ +

Clients would like to know all MoveableItems that are incomplete (defined as all items remaining in the queue plus the currently processing item).

+ +

What approaches exist to ensure clients get an accurate audit of the incomplete items, given that clients could be concurrently enqueuing items and asking for the incomplete items while QueuedHostedService is concurrently dequeuing?

+ +

The problem I found is that if I use another SemaphoreSlim in QueuedHostedService as a mutex to ensure thread-safety of the incomplete-items list, I get a deadlock. When QueuedHostedService is in its processing loop, its thread claims the mutex semaphore protecting the incomplete items; it then calls BackgroundTaskQueue.DequeueAsync and blocks on the signalling semaphore while waiting for a new item. At this point QueuedHostedService has claimed the mutex and is blocked waiting for the enqueue signal. If a client thread asks for the list of incomplete items, it attempts to claim the mutex semaphore and can't, because it is held by the queue-processing thread.

+ +
public interface IBackgroundTaskQueue
+{
+    IList<MoveableItem> Items { get; }
+
+    void QueueBackgroundWorkItem(MoveableItem moveableItem);
+
+    Task<MoveableItem> DequeueAsync(
+        CancellationToken cancellationToken);
+}
+
+public class BackgroundTaskQueue : IBackgroundTaskQueue
+{
+    private ConcurrentQueue<MoveableItem> _workItems =
+        new ConcurrentQueue<MoveableItem>();
+    private SemaphoreSlim _signal = new SemaphoreSlim(0);
+    private readonly ILogger _log;
+
+    public BackgroundTaskQueue(ILogger<BackgroundTaskQueue> logger)
+    {
+        _log = logger;
+    }
+
+    public void QueueBackgroundWorkItem(MoveableItem moveableItem)
+    {
+        if (moveableItem == null)
+        {
+            throw new ArgumentNullException(nameof(moveableItem));
+        }
+
+        _workItems.Enqueue(moveableItem);
+        _signal.Release();
+        _log.LogDebug($""{nameof(_signal)} release {nameof(QueueBackgroundWorkItem)}"");
+    }
+
+    public async Task<MoveableItem> DequeueAsync(
+        CancellationToken cancellationToken)
+    {
+        _log.LogDebug($""{nameof(_signal)} waiting async {nameof(QueueBackgroundWorkItem)}"");
+
+        await _signal.WaitAsync(cancellationToken);
+
+        _log.LogDebug($""{nameof(_signal)} running {nameof(QueueBackgroundWorkItem)}"");
+
+        _workItems.TryDequeue(out var workItem);
+
+        return workItem;
+    }
+
+    public bool IsEmpty()
+    {
+        return _workItems.IsEmpty;
+    }
+
+    public IList<MoveableItem> Items {  get => _workItems.ToList(); }
+}
+
+
+public class QueuedHostedService : BackgroundService
+{
+    private readonly ILogger _logger;
+    private volatile MoveableItem _inProgressItem;
+    private volatile IList<MoveableItem> _incompleteItems = new List<MoveableItem>();
+    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);
+    private readonly IBackgroundTaskQueue _taskQueue;
+
+    public QueuedHostedService(IBackgroundTaskQueue taskQueue,
+        ILogger<QueuedHostedService> logger)
+    {
+        _taskQueue = taskQueue;
+        _logger = logger;
+    }
+
+    public override void Dispose()
+    {
+        _semaphore?.Dispose();
+        base.Dispose();
+    }
+
+    public MoveableItem InProgressItem { get => _inProgressItem; private set => _inProgressItem = value; }
+
+    public IList<MoveableItem> GetItems()
+    {
+        _logger.LogDebug($""{nameof(_semaphore)} waiting GetItems"");
+        _semaphore.Wait();
+        _logger.LogDebug($""{nameof(_semaphore)} running GetItems"");
+
+        try
+        {
+            return _incompleteItems;
+        }
+        finally
+        {
+            _semaphore.Release(1);
+            _logger.LogDebug($""{nameof(_semaphore)} release GetItems"");
+        }
+
+    }
+
+    protected async override Task ExecuteAsync(
+        CancellationToken cancellationToken)
+    {
+        _logger.LogInformation(""Queued Hosted Service is starting."");
+
+        while (!cancellationToken.IsCancellationRequested)
+        {
+            MoveableItem moveableItem;
+
+            moveableItem = await _taskQueue.DequeueAsync(cancellationToken);
+            InProgressItem = moveableItem;
+
+            _logger.LogDebug($""{nameof(_semaphore)} waiting ExecuteAsync StoreList"");
+            _semaphore.Wait();
+            _logger.LogDebug($""{nameof(_semaphore)} running ExecuteAsync StoreList"");
+
+            try
+            {
+                var items = new List<MoveableItem>();
+                if (InProgressItem != null)
+                {
+                    items.Add(InProgressItem);
+                }
+                items.AddRange(_taskQueue.Items);
+                _incompleteItems = items;
+
+            }
+            finally
+            {
+                _semaphore.Release();
+                _logger.LogDebug($""{nameof(_semaphore)} release ExecuteAsync StoreList"");
+            }
+
+            try
+            {
+                // Do Stuff
+            }
+            catch (Exception ex)
+            {
+                _logger.LogError(ex,
+                   $""Error occurred moving {moveableItem}."");
+            }
+            finally
+            {
+                _logger.LogDebug($""{nameof(_semaphore)} waiting ExecuteAsync null"");
+                _semaphore.Wait();
+                _logger.LogDebug($""{nameof(_semaphore)} running ExecuteAsync null"");
+
+                try
+                {
+                    InProgressItem = null;
+                }
+                finally
+                {
+                    _semaphore.Release();
+                    _logger.LogDebug($""{nameof(_semaphore)} release ExecuteAsync null"");
+                }
+            }
+        }
+
+
+        _logger.LogInformation(""Queued Hosted Service is stopping."");
+    }
+}
+
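+ +

A hedged sketch of one way out (assuming a plain object field _stateLock, which does not exist in the code above): never hold the mutex while awaiting the queue, and keep every protected section short and non-blocking, so a plain lock suffices.

+ +
// No lock is held across this await, so enqueuers and readers never wait on it.
+var item = await _taskQueue.DequeueAsync(cancellationToken);
+
+lock (_stateLock) // short, non-blocking critical section
+{
+    _inProgressItem = item;
+    // Snapshot: the in-progress item plus whatever is still queued.
+    _incompleteItems = new[] { item }.Concat(_taskQueue.Items).ToList();
+}
+
+ +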
+",297242,,297242,,43834.15833,43834.17986,How to avoid deadlock,,2,4,,,,CC BY-SA 4.0, +403250,1,403254,,1/3/2020 15:21,,1,574,"
public class B { }
+public class C { }
+public class D { }
+public class E { }
+
+public class A :
+    IRetrievable<B, C>,
+    IRetrievable<D, E>
+{
+    public TValue Retrieve<TKey, TValue>(TKey input)
+    {
+        throw new NotImplementedException();
+    }
+}
+
+ +

What is the reason I can't do this? Visual Studio is telling me that the interfaces aren't being implemented, even though they could be, by substituting B, C and D, E for TKey, TValue respectively.

+",354214,,,,,43833.66458,Why can't I use a generic method to implement multiple typed interfaces in C#?,,2,4,1,,,CC BY-SA 4.0, +403251,1,403252,,1/3/2020 15:23,,8,8091,"

I'm building an API endpoint for a UI grid to search, filter, and display a list of domain objects, let's call them ""widgets."" In the past, I would have built this with a list of named query string parameters, like this:

+ +
GET /api/v1/widgets?type=2&name=what&from=2019-12-31&to=2020-01-03&pagesize=25&page=2&sort=name,-createdate
+
+ +

This would result in SQL something like

+ +
SELECT <selectlist>
+FROM widget
+WHERE type = 2
+    AND name LIKE 'what%'
+    AND createdate >= '2019-12-31'
+    AND createdate <= '2020-01-03'
+ORDER BY name ASC, createdate DESC
+LIMIT 25, 25;
+
+ +

I've had a co-worker propose that instead of a long list of parameters, we pass a couple of JSON objects on the query string, like this:

+ +
""filters"": {
+    { ""column"": ""type"", ""operator"": ""="", ""value"": 2 },
+    { ""column"": ""name"", ""operator"": ""like"", ""value"": ""what"" },
+    { ""column"": ""createdate"", ""operator"": "">="", ""value"": ""2019-12-31"" },
+    { ""column"": ""createdate"", ""operator"": ""<="", ""value"": ""2020-01-03"" }
+],
+""pagination"": {
+    ""page"": ""2"",
+    ""pagesize"": ""25"",
+    ""sort"": [ ""name"", ""createdate"" ],
+    ""sortdirection"": [ ""asc"", ""desc"" ]
+}
+
+ +

Which, after encoding, looks something like this as a URL (sorry if I messed up the encoding, I made this up as an example):

+ +
GET /api/v1/widgets?filters=[%7B%22column%22:%22type%22,%22operator%22:%22%3D%22,%22value%22:2%7D,%7B%22column%22:%22name%22,%22operator%22:%22like%22,%22value%22:%22what%22%7D,%7B%22column%22:%22createdate%22,%22operator%22:%22%3E%3D%22,%22value%22:%222019-12-31%22%7D,%7B%22column%22:%22createdate%22,%22operator%22:%22%3C%3D%22,%22value%22:%222020-01-03%22%7D]&pagination=%7B%22page%22:%222%22,%22pagesize%22:%2225%22,%22sort%22:[%22name%22,%22createdate%22],%22sortdirection%22:[%22asc%22,%22desc%22]%7D
+
+ +

We are in JavaScript on both client and server, so parsing the JSON object is not a difficult task. And I realize that the structure of the JSON object for querying would have to be altered, as it does not account for some things. Disregarding that, the basic question is:

+ +
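
For illustration, a hedged sketch of the server side (assuming Express; names illustrative). Note that JSON.parse throws on malformed input, so the parameters need validation:

+ +
app.get('/api/v1/widgets', (req, res) => {
+  let filters, pagination;
+  try {
+    filters = JSON.parse(req.query.filters || '[]');
+    pagination = JSON.parse(req.query.pagination || '{}');
+  } catch (err) {
+    return res.status(400).json({ error: 'Malformed filters/pagination' });
+  }
+  // ... translate to a parameterized query, whitelisting column names ...
+});
+
+ +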

Is this a bad/good idea? I see advantages and disadvantages, but I'm trying to look past my personal bias.

+",3266,,3266,,43833.72708,43836.14236,"Is it a bad idea to pass JSON objects on the query string for an API ""search"" operation?",,4,6,1,,,CC BY-SA 4.0, +403258,1,403266,,1/3/2020 17:04,,3,177,"

I’ve been trying to get a firm understanding of the MVC design pattern so that I can write my own framework for implementing the back-end of a forum web application using Slim 3. In particular, after reading this, and asking this and this question myself I’m now at a point where I think I understand how the model should be a layer composed of data mappers, services and domain objects. However I have some more questions regarding domain objects.

+ +

At the moment, with a structure very similar to one from an answer given to this question I asked, I have a basic model layer for registering a user in my forum application. However, it seems to me that my User ‘domain object’ isn't really doing anything at the moment; it only has getters and setters, and I could probably do without it.

+ +

User domain object:

+ +
class User
+{
+    private $id;
+    private $email;
+    private $username;
+    private $password;
+
+    public function __construct(Email $email, Username $username, Password $password)
+    {
+        $this->email = $email;
+        $this->username = $username;
+        $this->password = $password;
+    }
+
+    public function getId()
+    {
+        return $this->id;
+    }
+
+    public function setId(int $id)
+    {
+        $this->id = $id;
+    }
+
+    public function getEmail()
+    {
+        return $this->email;
+    }
+
+    public function setEmail(Email $email)
+    {
+        $this->email = $email;
+    }
+
+    public function getUsername()
+    {
+        return $this->username;
+    }
+
+    public function setUsername(Username $username)
+    {
+        $this->username = $username;
+    }
+
+    public function getPassword()
+    {
+        return $this->password;
+    }
+
+    public function setPassword(Password $password)
+    {
+        $this->password = $password;
+    }
+}
+
+ +

This user domain object directly mirrors my ‘user’ SQL table, which is populated when a user registers. In fact, this created user domain object is passed to my data mapper, and then the fields of the passed-in user object are extracted and inserted into the database.

+ +

Inserting a user object into my ‘user’ table:

+ +
 private function insertUser(User $user) 
+ {
+      $sql = 'INSERT INTO users (email, username, password) 
+              VALUES (:email, :username, :password)';
+
+      $statement = $this->connection->prepare($sql);
+      $statement->execute([':email' => $user->getEmail(),
+                           ':username' => $user->getUsername(),
+                            ':password' => $user->getPassword()]);
+
+      $user->setId($this->connection->lastInsertId());
+
+      return $user;
+ }
+
+ +

Could I not just directly pass in the strings that the user typed into the form (after validation and sanitation) instead of having to create this user domain object? It just seems unnecessary at the moment.

+ +

This is where I’m confused as to what exactly a domain object should be. I understand they should encompass business logic, but identifying what business logic I would need for a forum application is proving difficult. For example, I would like each user to have their own profile page that is publicly visible, which they can edit and update only when logged in.

+ +
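
To make business logic concrete, a hedged sketch of what behaviour on the domain object might look like (the names are illustrative):

+ +
class User
+{
+    // ...
+
+    // Behaviour lives on the domain object instead of in the caller.
+    public function canEditProfile(Profile $profile): bool
+    {
+        return $profile->getOwnerId() === $this->getId();
+    }
+}
+
+ +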

Would I need to create a ‘profile’ domain object together with a new profile table that links to my original user table and user domain object? Or could I simply extend my existing user table and user domain object to have properties and methods that can allow a user to display and update their profile?

+ +

I understand this is a matter of design and can be subjective, but some advice on what should be in a domain object (especially for the previous example with a profile), and whether they need to marry up to a table or not, would be helpful.

+ +

A lot of the tutorials I see, such as this one, simply have a single model object with data-mapper methods in it, which doesn't seem right. A lot of tutorials also use frameworks that rely on an ORM, which is something I'm trying to avoid as I want to write everything from scratch, although I'm fine with using Slim 3/4 PHP.

+",354228,,354228,,43833.72569,43833.80694,What domain objects might I need to represent a user and users profile in a forum web application,,1,0,,,,CC BY-SA 4.0, +403264,1,403270,,1/3/2020 18:53,,8,217,"

I am trying to draw diagrams that show the difference between system virtual machines and Java virtual machines.

+ +

The first two images look correct to me. But I don't know how to draw the third.

+ +

Image 1:

+ +

+ +

Image 2:

+ +

+ +

Image 3:

+ +

+ +

As you can see, both the red blocks and the gray frames have the same caption: ""JVM"". I don't think that is correct. I'm sure the captions should be different: ""JVM"" for the gray frames and something different for the red blocks.

+ +

How should the third diagram be fixed?

+",215211,,215211,,43833.79583,43834.6375,To show the difference between system VMs and JVMs,,1,2,,,,CC BY-SA 4.0, +403271,1,403272,,1/3/2020 20:03,,1,368,"

My questions are regarding the use of mysqli::rollback.

+ +

If rollback is not used in a declared transaction but is used in a try...catch, will it do anything at all (with autocommit on)?

+ +

If it does work without a transaction being declared, is rollback necessary for single-query statements? See the example below...

+ +
try{
+    $sql = ""DELETE FROM users WHERE user_id=?"";
+    $row = run($mysqli, $sql, [$user_id]);
+}catch(exception $e){
+    $mysqli->rollback();
+    error($mysqli);
+}
+
+ +

If I have a transaction declared with multiple queries and all queries in the transaction fail, will rollback affect an older transaction? What is the scope of rollback?
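
For comparison, rollback has a well-defined scope inside an explicit transaction. A hedged sketch (assuming the same $mysqli connection and run() helper; the orders table is purely illustrative):

+ +
$mysqli->begin_transaction();
+try {
+    run($mysqli, 'DELETE FROM orders WHERE user_id=?', [$user_id]);
+    run($mysqli, 'DELETE FROM users WHERE user_id=?', [$user_id]);
+    $mysqli->commit();
+} catch (Exception $e) {
+    $mysqli->rollback();  // undoes both statements above
+    error($mysqli);
+}
+
+ +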

+",352360,,1204,,43833.84792,43857.42431,If not used in a declared transaction but used in a try...catch will rollback do anything at all?,,2,0,,,,CC BY-SA 4.0, +403273,1,,,1/3/2020 20:43,,1,63,"

A site I'm working on tracks users before they sign up or log in to determine things like which pages drive the most users to sign up for the service, etc.

+ +

Currently we make use of a browser fingerprinting library. We fingerprint the browser, save the result in a cookie, and use this cookie to uniquely identify users. However, many browsers end up computing the same fingerprint as other browsers; e.g., we see a lot of the same fingerprint from Safari on an iPad, even though they are distinct iPads.

+ +

We only use the fingerprint on our own first-party site, so I'm thinking that perhaps fingerprinting wasn't even the correct approach to begin with. I'm thinking of switching to a system where the server hands out unique tokens to each client. The client would store the received token as a cookie in place of the old fingerprint, and if it already has one of these server-side created tokens, then it ignores the new one, or maybe doesn't request it in the first place.

+ +
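
A hedged sketch of the token hand-out (assuming Express with cookie-parser; the names are illustrative):

+ +
const crypto = require('crypto');
+
+app.use((req, res, next) => {
+  // Hand out a server-generated ID only if the client doesn't have one yet.
+  if (!req.cookies.visitorId) {
+    res.cookie('visitorId', crypto.randomUUID(), {
+      httpOnly: true,
+      maxAge: 365 * 24 * 60 * 60 * 1000, // one year
+    });
+  }
+  next();
+});
+
+ +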

Does this plan make sense? Am I overlooking anything obvious?

+ +

I'm aware that inclined users can read though our client side code and change the values of their cookies to mess with us. I'm not worried about that small percentage of users, this is to get a broad picture of what's going on with the site.

+",39699,,,,,43833.925,Tracking non-logged-in users,,2,1,,,,CC BY-SA 4.0, +403274,1,,,1/3/2020 20:49,,0,584,"

We currently have a requirement to build a multi-tenant application that has cross-tenant users. That is, business data is separated by tenant, but users can belong to multiple tenants and should have access to all of the data for each of their tenants. My question is: does this even make sense? And if so, how can we handle login for this scenario using OpenID Connect? When I've done multi-tenancy with OpenID Connect in the past, the tenantId has been part of the login. The given requirement complicates things significantly. Should I push back? The reason for the requirement is primarily to prevent users belonging to multiple tenants from having to have multiple logins. Thoughts?

+",307498,,209774,,43833.95556,43834.02708,Multitenancy with Cross-Tenant Users,,2,22,,,,CC BY-SA 4.0, +403280,1,403284,,1/3/2020 22:30,,-5,40,"

How can I discover and master the technical system of an ERP and/or a framework published by the company that has just hired me?

+ +

For example: is it useful to draw UML diagrams on paper (and if so, which ones)? What notes should I take on paper?

+",310712,,,,,43833.97014,How can I discover and master the technical system of an ERP and/or a framework published by the company that has just hired me?,,1,3,,43834.41806,,CC BY-SA 4.0, +403283,1,403286,,1/3/2020 23:07,,1,47,"

Consider the following Activity Diagram :

+ +

+ +

First of all, the activity a22 is confusing me in this configuration. We need to list all the possible activity sequences that can take place in this activity diagram. My answer is as follows:

+ +

Sequence one: a11; parallel combination of a22, a33 and a44; a77.

+ +

Sequence two: a11; parallel combination of a22, a33 and a44; parallel combination of a55 and a66.

+ +

Is this the right answer?

+",331812,,331812,,43833.96944,43834.025,Fork and Join in Activity Diagram,,1,0,0,,,CC BY-SA 4.0, +403285,1,,,1/3/2020 23:26,,-1,113,"

I usually do JavaScript and I like putting console.assert liberally in my application. It throws an error if the first argument is falsy. E.g.:

+ +

console.assert(price > 0, 'Price isn\'t above 0')

+ +

It's easy to automatically remove this for production builds. When developing, I often accidentally break an assertion. I think this is better than unit tests, at least during the early stages of development, because:

+ +
    +
  1. development application state is more realistic than test states

  2. +
  3. assertions are easier to write, so developers will write more of them

  4. +
+ +
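
On stripping these for production, one hedged sketch is to route assertions through a tiny wrapper that a bundler can eliminate via dead-code removal (this assumes a build that inlines process.env.NODE_ENV):

+ +
export function assert(condition, message) {
+  // Bundlers that inline NODE_ENV can drop this whole branch in production.
+  if (process.env.NODE_ENV !== 'production') {
+    console.assert(condition, message);
+  }
+}
+
+ +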

Also, for a big enough application, I think it'll be good to keep the assertions in production for 1% of users. It's better to have it fail for 1% of users than to have silent errors for everyone.

+ +

However, I've never seen any tech companies do this. Why is that?

+",315965,,,,,43834.21875,Why don't people usually use asserts throughout their application?,,2,3,,,,CC BY-SA 4.0, +403289,1,403302,,1/4/2020 4:16,,0,199,"

Probably the answer is you can't. However, I would like to have a work-around to solve my problem.

+ +

Objective

+ +

I am trying to create a program in which I try to avoid nulls as much as possible.

+ +

Problem

+ +

I am using the empty object pattern in places where null could have been returned. Like this:

+ +
public User GetByEmail(string email){
+   // Search for the user in the database by its unique email.
+   if(!WasFound())
+      return User.Empty;
+   return foundUser; // hypothetical: the user located by the search above
+}
+
+ +

and here is the User Domain model.

+ +
//User has more fields but for simplicity I omitted them
+public class User {
+   public User(string username,string email){
+      this.Username = username;
+      this.Email = email;
+   }
+
+   //...
+
+   // Probably not the best default values, but I trivially chose them for explanation.
+   private static readonly User _empty = new User(""*"",""na@empty.com"");
+   public static User Empty{
+      get{return _empty;}
+   }
+}
+
+ +

So far so good. However, I am planning for the static Empty function to be implemented by the majority of my domain models. Also, someone else might create a domain model, and I want to force them to implement the Empty function. So I created a DataModel class that will be the base of all domain models.

+ +
public abstract class DataModel<TModel>: BaseEntity, IEmptiable<TModel> where TModel : new()
+    {
+        private bool _isEmpty = false;
+
+        public virtual static TModel Empty {
+            get {
+                _isEmpty = true;
+                return CreateEmpty();
+            }
+        }
+
+        public virtual bool IsEmpty() => _isEmpty;
+
+        protected abstract TModel CreateEmpty();
+    }
+
+ +

And my new User class would look like this.

+ +
public class User: DataModel<User> {
+   public User(string username,string email){
+      this.Username = username;
+      this.Email = email;
+   }
+
+   //...
+
+   private static readonly User _empty = new User(""*"", ""na@empty.com"");
+   protected override User CreateEmpty(){
+      return _empty;
+   }
+}
+
+ +

In that way I am forcing the developer to implement a CreateEmpty() method, and the Empty function will be present in the domain model if you extend the DataModel abstract class. My problem is that the Empty function is static, and that is giving me many problems. Is there a work-around? My vision is to be able to execute something like:

+ +
Account.Empty
+//and to check if is empty with
+if(Account.IsEmpty())
+  //Plan B
+
+ +

Keep in mind that this is pseudocode. (I know that a static function can't use non-static variables declared outside of the function.)

+ +
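
One hedged sketch of a workaround (illustrative only): since C# statics cannot be virtual or inherited, keep the flag on the instance and create empties through a generic factory on the base class. This assumes each model gets a parameterless constructor, which the User above does not currently have.

+ +
public abstract class DataModel
+{
+    public bool IsEmpty { get; private set; }
+
+    // The private setter is accessible here because this is the declaring class.
+    public static TModel CreateEmpty<TModel>() where TModel : DataModel, new()
+    {
+        return new TModel { IsEmpty = true };
+    }
+}
+
+// Usage:
+// var nobody = DataModel.CreateEmpty<User>();
+// if (nobody.IsEmpty) { /* plan B */ }
+
+ +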

Any help will be much appreciated.

+",353845,,,,,43851.90139,How to inherit a static function in a class?,,3,7,,,,CC BY-SA 4.0, +403292,1,,,1/4/2020 4:50,,0,257,"

I'm trying to create a sequence diagram for login, but I don't know whether it's correct. Maybe the community can help me check, because I see many ways to create sequence diagrams and they're not all the same.

+ +

Thank you for the help and explanations

+",354264,,,,,43864.45694,Is this right way to create sequence diagram?,,3,3,1,,,CC BY-SA 4.0, +403294,1,,,1/4/2020 5:38,,0,97,"

Linux Onlyoffice Docker scripts are available for download. Are Docker containers tied to a particular CPU architecture (IA64 vs ARM)? I would like to run Onlyoffice on an ARM platform (Raspberry Pi 4 or similar SBC computer).

+",251599,,,,,43834.42431,Are Docker images tied to CPU architectures?,,1,3,,,,CC BY-SA 4.0, +403296,1,,,1/4/2020 5:47,,-1,100,"

What do you think about using interfaces just for the sake of enforcing certain naming and patterns across your team? Other than that, it doesn't hold any practical value programmatically.

+ +

I'm on the fence on this one because I tend to minimize abstraction whenever possible, but this seems a bit unnecessary. However, it does have some value when it comes to creating variations of certain classes which aren't necessarily related.

+",204609,,110531,,43835.57847,43835.57847,Is it encouraged or discouraged to use interfaces simply to enforce consistency?,,1,2,,,,CC BY-SA 4.0, +403297,1,403950,,1/4/2020 8:24,,0,785,"

We have a set of microservices that are functionally dependent and have intertwined validation logic. E.g., consider a 'Credit-Card' service and a 'Loan' service: when a user has a credit card, a loan shouldn't be given, and vice versa. Hence, when processing a request, we validate across these services using API calls before approving the request. How do we handle scenarios where both are initiated simultaneously in a distributed system?

+ +

How do we prevent a customer from getting both a loan and a credit card at the same time? I am considering using distributed locks to implement a solution. I would like to know if there are other possibilities apart from locks.
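
Since the question mentions distributed locks, a hedged sketch of that option (assuming Java with the Jedis Redis client; the helper at the end is hypothetical):

+ +
String lockKey = ""credit-products:"" + customerId;
+String token = UUID.randomUUID().toString();
+
+// Atomically take a short-lived lock: set only if absent, with an expiry.
+String ok = jedis.set(lockKey, token, SetParams.setParams().nx().px(30_000));
+if (""OK"".equals(ok)) {
+    try {
+        approveIfStillEligible(customerId); // re-check both services under the lock
+    } finally {
+        releaseIfTokenMatches(jedis, lockKey, token); // hypothetical helper
+    }
+}
+
+ +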

+",262713,,262713,,43847.58819,43847.74167,Race condition in Distributed Systems,,2,2,1,,,CC BY-SA 4.0, +403307,1,403314,,1/4/2020 11:54,,-1,77,"

https://docs.microsoft.com/en-us/dotnet/core/tutorials/publishing-with-visual-studio

+ +
+

In Create a Hello World application with .NET Core in Visual Studio, you built a Hello World console application. In + Debug your Hello World application with Visual Studio, you tested it using the Visual Studio debugger. Now that + you're sure that it works as expected, you can publish it so that other users can run it. Publishing creates the set of + files that are needed to run your application. To deploy the files, copy them to the target machine.

+
+ +

What do publishing and deployment mean?

+ +
    +
  • Is it correct that deployment means that, after building/compiling programs on development machines and packaging them, you copy the package to the production machine?

  • +
  • Is it correct that deployment does not include building and packaging?

  • +
  • Does publishing mean building/compiling programs and packaging them on development machines? (I was once asked to ""publish"" (or ""deploy"", I forget which word) a program created by others. It was actually to click a few buttons (I forget which) in the Visual Studio IDE, maybe related to the web server, but all this was done on a development machine. It seemed to me to build and package the program.)

  • +
  • Does publishing not include deployment?

  • +
  • Do publishing and deployment mean the same?

    + +

    The above quote seems to say no.

    + +

    But Publish .NET Core apps with the CLI being a subsection of .NET Core application deployment seems to say yes.

    + +

    Literally, doesn't publishing mean moving something to a publicly accessible place? Isn't that the same as deployment?

  • +
+ +

Are the two words standard (or generally used) in software engineering, including outside .NET, e.g. in Java, C++, Python, ...? What other standard or popular words are used instead, if any?

+ +

Thanks.

+",699,,699,,43834.54028,43834.70972,Do publishing and deployment mean the same?,,1,12,,,,CC BY-SA 4.0, +403308,1,403311,,1/4/2020 13:20,,-1,135,"

I am creating a small project that implements an interface in Java. I am not allowed to modify the interface, which means I can't change the functions in my class that implements the interface.

+ +

However, one of the methods in the class (from the interface) requires different behaviour in some cases. How do I implement this in a way that allows me to adapt the behaviour and make changes to the method without changing the interface?

+",354290,,326536,,43834.74097,43834.74097,How to implement a different behaviour for a method without changing the interface?,,2,7,0,,,CC BY-SA 4.0, +403310,1,,,1/4/2020 13:34,,2,63,"

Assume the flow graph of a function is below (sorry I have forgotten the code of the function):

+ +

And when I try to calculate the cyclomatic complexity:

+ +

When I use the edge and node to calculate it, V(G)= 9 - 8 + 2 = 3; when I use the branch node to calculate it, V(G) = 3 + 1 = 4. However, when I try to find the independent paths, I think I can find 4 paths:

+ +
Path 1: 1 -> 2 -> 6 -> 7
+Path 2: 1 -> 2 -> 6 -> 8
+Path 3: 1 -> 2 -> 3 -> 5 -> 2 -> ...
+Path 4: 1 -> 2 -> 3 -> 4 -> 5 -> 2 -> ...
+
+ +

I wonder which one is correct and what my mistake is. Thanks a lot.

+",354291,,,,,43834.56528,A question about cyclomatic complexity,,0,2,,,,CC BY-SA 4.0, +403315,1,403321,,1/4/2020 17:08,,2,3863,"

We know that the primary actor is the one that initiates a use case, and a secondary actor is one that helps complete the use case through its specific support. The primary actor is usually placed to the left of the boundary of the system, and the secondary actor is placed to the right of it. But let's consider a library system where we have two actors, the librarian and the reader. Let's consider some specific use cases here:

+ +

1) A use case where the librarian can add books to the library system. In this case the librarian is a primary actor.

+ +

2) A use case where the reader borrows a book. More specifically, the reader gives the book that he needs to borrow to the librarian, and the librarian scans the book using its barcode and enters all the needed info into the system (reader ID, loan time, etc.). In this case the librarian is a secondary actor.

+ +

So my question is: where do I put the librarian in the use case diagram? To the right or left of the boundary of the system? Because in some use cases he is a primary actor and in others a secondary actor.

+",331812,,,,,43837.05903,Primary and secondary actors in use case,,3,7,,,,CC BY-SA 4.0, +403316,1,,,1/4/2020 17:16,,2,389,"

I have a Customer which can have several addresses. Several customers can possibly live at the same address. So in my relational database this is a classic many-to-many relationship.

+ +

As :

+ +
    +
  • An address cannot change (a customer can change address);
  • +
  • It does not involve any logic with side effects;
  • +
  • And the equality between two addresses is determined only by the values in them.
  • +
+ +

I argue address is a good candidate to be a value object.

+ +

However, I need (or maybe not) to persist my customers' addresses, and to do so I need to add an Id.

+ +

I am a little bit confused and struggling: pure domain logic and common sense tell me it is a value object, but the technology constraint means that an address has an Id.

+ +
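
A hedged sketch of one common compromise (C# for illustration; nothing here is prescribed by the question): keep value equality in the domain and treat the Id purely as a persistence detail.

+ +
public sealed class Address
+{
+    internal int Id { get; set; } // surrogate key for the ORM only; never used in equality
+
+    public string Street { get; }
+    public string City { get; }
+
+    public Address(string street, string city) { Street = street; City = city; }
+
+    public override bool Equals(object obj) =>
+        obj is Address a && a.Street == Street && a.City == City;
+
+    public override int GetHashCode() => (Street, City).GetHashCode();
+}
+
+ +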

Is it fine to store a value object this way?

+",41032,,,,,43834.87014,Is putting an Id to a value object a bad id?,,2,0,,,,CC BY-SA 4.0, +403318,1,,,1/4/2020 17:46,,105,19541,"

I am confused because in quite a few places I've already read that the so-called 'boneheaded' exceptions (ones that result from bugs in code) are not supposed to be caught. Instead, they must be allowed to crash the application:

+ + + +

At least two of the three above people are established authorities.

+ +

I am surprised. Especially for some (important!) use cases, like server-side code, I simply can't see why catching such an exception is suboptimal and why the application must be allowed to crash.

+ +

As far as I'm aware, the typical solution in such a case is to catch the exception, return HTTP 500 to the client, have an automatic system that sends an emergency e-mail to the development team so that they can fix the problem ASAP - but do not crash the application (one request must fail, there's nothing we can do here, but why take the whole service down and make everyone else unable to use our website? Downtime is costly!). Am I incorrect?

+ +
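
For concreteness, a hedged sketch of that typical solution in ASP.NET Core (the alerting hook is illustrative):

+ +
app.UseExceptionHandler(errorApp =>
+{
+    errorApp.Run(async context =>
+    {
+        // One request fails; the process keeps serving everyone else.
+        context.Response.StatusCode = 500;
+        await context.Response.WriteAsync(""Something went wrong."");
+        // NotifyDevelopers(...); // hypothetical emergency-email hook
+    });
+});
+
+ +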

Why am I asking: I'm perpetually trying to finish a hobby project, which is a browser-based game in .NET Core. As far as I'm aware, in many cases the framework does for me, out of the box, the precise thing Eric Lippert and Stephen Cleary are recommending against! That is, if handling a request throws, the framework automatically catches the exception and prevents the server from crashing. In a few places, however, the framework does not do this. In such places, I am wrapping my own code with try {...} catch {...} to catch all possible 'boneheaded' exceptions.

+ +

One such place, AFAIK, is background tasks. For example, I am now implementing a background ban-clearing service that is supposed to clear all expired temporary bans every few minutes. Here, I'm even using a few layers of all-catching try blocks:

+ +
try // prevent server from crashing if boneheaded exception occurs here
+{
+    var expiredBans = GetExpiredBans();
+    foreach(var ban in expiredBans)
+    {
+        try // If removing one ban fails, eg because of a boneheaded problem, 
+        {   // still try to remove other bans
+            RemoveBan(ban);
+        }
+        catch
+        {
+
+        }
+    }
+}
+catch
+{
+
+}
+
+ +

(Yes, my catch blocks are empty right now - I am aware that ignoring these exceptions is unacceptable, adding some logging is perpetually on my TODO list)

+ +

Having read the articles I linked to above, I can no longer continue doing this without some serious doubt... Am I not shooting myself in the foot? Why / Why not?

+ +

Should boneheaded exceptions never be caught, and if so, why?

+",212639,,,,,44197.07778,"Why should 'boneheaded' exceptions not be caught, especially in server code?",,17,30,44,,,CC BY-SA 4.0, +403324,1,,,1/4/2020 20:41,,-1,63,"

Here is part of an ASP.NET MVC program, from https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/controller-methods-views?view=aspnetcore-3.1 :

+ +

A model class Movie:

+ +
public class Movie
+{
+    public int Id { get; set; }
+    public string Title { get; set; }
+    [Display(Name = ""Release Date"")]
+    [DataType(DataType.Date)]
+    public DateTime ReleaseDate { get; set; }
+    public string Genre { get; set; }
+    [Column(TypeName = ""decimal(18, 2)"")]
+    public decimal Price { get; set; }
+}
+
+ +

and a method from a controller class MoviesController is:

+ +
// GET: Movies/Edit/5
+public async Task<IActionResult> Edit(int? id)
+{
+    if (id == null)
+    {
+        return NotFound();
+    }
+    var movie = await _context.Movie.FindAsync(id);
+    if (movie == null)
+    {
+        return NotFound();
+    }
+    return View(movie);
+}
+
+ +

If I want to write an ASP.NET MVC program into three layers: presentation, business logic and data access layers,

+ +
    +
  • Since the presentation layer is responsible for providing output to users and interacting with them, should both the view and the controller belong to the presentation layer?

  • +
  • Should the business layer be implemented by the model Movie?

    + +

    ""The Model in an MVC application represents the state of the application and any business logic or operations that should be performed by it"" seems to say so.

    + +

    Does the business layer need to be implemented as methods, whereas the model doesn't have any methods? (I guess the model class is used as an entity class by Entity Framework. Does EF require that every entity class have no methods?)

  • +
  • In the controller method, _context.Movie.FindAsync(id) uses Entity Framework (an ORM) to make a query. So does the controller also implement the data access layer, besides implementing part of the presentation layer?

  • +
  • How shall I separate presentation, business logic and data access layers?

  • +
+ +
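
For reference, a hedged sketch of how such a split often looks (the names are illustrative, not prescribed):

+ +
// Data access layer: hides EF behind an interface.
+public interface IMovieRepository
+{
+    Task<Movie> FindAsync(int id);
+}
+
+// Business logic layer: rules live here, not in the controller.
+public class MovieService
+{
+    private readonly IMovieRepository _repo;
+    public MovieService(IMovieRepository repo) => _repo = repo;
+    public Task<Movie> GetMovieAsync(int id) => _repo.FindAsync(id);
+}
+
+// Presentation layer: the controller only translates HTTP to service calls.
+
+ +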

Thanks.

+",699,,699,,43834.93611,43834.95694,"How shall I implement a MVC program into presentation, business logic and data access layers?",,1,0,1,,,CC BY-SA 4.0, +403326,1,,,1/4/2020 20:55,,0,110,"

I am quite confused about the responsibility-driven design concept. Mainly because of ever so slightly changing definitions depending on the source.

+

Quoting BlueJ (the book I am learning that teaches Java):

+
+

Responsibility-driven design expresses the idea that each class should be responsible for handling its own data. Often, when we need to add some new functionality to an application, we need to ask ourselves in which class we should add a method to implement this new function. Which class should be responsible for the task? The answer is that the class that is responsible for storing some data should also be responsible for manipulating it.

+
+

Later, in a "concept box" in the BlueJ book:

+
+

Reponsibility-driven design is the process of designing classes by assigning well-defined responsibilites to each class. This process can be used to determine which class should implement which part of an application function.

+
+

This second definition confuses me, as I don't see how that correlates to the first "definition"; the one saying that "it expresses the idea that each class should be responsible for handling its own data".

+

Will someone please shed some light on the concept of responsibility-driven design?

+",258273,,-1,,43998.41736,43835.50417,Responsibility-driven design,,1,1,,,,CC BY-SA 4.0, +403328,1,,,1/4/2020 21:19,,-4,163,"

List is implemented in C# essentially the same way as Stack; see:

+ +

Given that:

+

Why should developers have to choose between List and Stack when in fact they could have ""2 in 1""? (Implementing Pop for List would make it 2-in-1!)

+

Coming from more flexible languages, I don't want to commit to a limiting collection type. If I start with Stack, maybe later, as the problem changes, I'll need to access an [index]; alternatively, with List, maybe later I'll often need Pop or List1.Add(List2.Pop()).

+
    +
  1. Why doesn't RemoveAt() return a value? What damage would happen if it returned a value (as actually done in Java ArrayList)?

    +
  2. +
  3. Why no Pop()?

    +
  4. +
+

Looking at Python, Perl, Ruby and 'e', one does wonder why a C# List doesn't have Pop(). I want the ""Swiss army knife"" of arrays, i.e. List! It's convenient to use just one language construct and avoid conversions, etc. As List is implemented using an array and supports Add in amortized O(1) (Add is just a different name for Push), why wasn't Pop added as well?

+
    +
  1. Why do we need a C# Stack when List could easily do all Stack does, and more?

    +

    An easy way to pop would make me always use List, hence avoiding the need for conversions/casting between List and Stack.

    +
  2. +
+
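
For what it's worth, a hedged sketch of the kind of extension method that gives List stack behaviour without a second type (illustrative, not a library API):

+ +
public static class ListExtensions
+{
+    public static T Pop<T>(this List<T> list)
+    {
+        // Remove and return the last element, mirroring Stack<T>.Pop().
+        T item = list[list.Count - 1];
+        list.RemoveAt(list.Count - 1);
+        return item;
+    }
+}
+
+ +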

Am I the only C# developer who wants a (slightly) more powerful List, instead of having two weaker concepts and having to convert between them? (Other than that, C# is really good.)

+

Update: Wow!! Thanks to this thread, Microsoft has decided to add Pop to the C# List! https://github.com/dotnet/runtime/issues/31828. Many thanks to the community, especially to user @Theraot.

+",354317,,354317,,44112.61875,44112.61875,C# Why should i limit myself to List or Stack ? ( instead of having both),,4,10,1,,,CC BY-SA 4.0, +403329,1,,,1/4/2020 21:33,,-1,387,"

To summarize my current situation: I am building a backend based on microservices using Spring Boot. These are aggregated behind an API gateway. My intention is to consume these APIs from both a mobile app and a web app.

+ +

However, I'm constantly wondering what kind of technology I should use to build my webapp frontend?

+ +

This is a project I'm doing on my own and I currently don't have any knowledge of javascript frameworks like react, angular or vue.

+ +

Basically, I cannot find a valid reason for investing the time in learning a JavaScript framework to build an SPA frontend. I am still convinced that server-side should be the first option, client-side the second. I would also rather see an MPA as a valid option for my app instead of an SPA.

+ +

I have knowledge, however, of Thymeleaf and Spring MVC. Basically, I would like to keep all my logic on the server side as much as possible. I was looking into using the non-blocking, reactive WebClient feature of Spring to call my REST APIs server-side and send the data on to the client.

+ +
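
A hedged sketch of such a server-side call with Spring's WebClient (the gateway URL and User type are illustrative):

+ +
WebClient client = WebClient.create(""http://api-gateway"");
+
+// Non-blocking call to a downstream microservice from the MVC layer.
+Mono<User> user = client.get()
+        .uri(""/api/users/{id}"", id)
+        .retrieve()
+        .bodyToMono(User.class);
+
+ +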

Does anyone have experience with this kind of setup? Any reason why you shouldn't do this? As I understand it, the WebClient feature is async and non-blocking, which is already a step up from the classic RestTemplate. The difference from e.g. jQuery or any other client-side JavaScript approach is that the API is called server-side instead of client-side, so this means full reloads of the page (including the header, footer, menu, ...) instead of a partial reload as when using JavaScript?

+ +

A huge advantage I'm seeing, however, is that even if a user does not have JavaScript enabled, the web app will still just work with this setup.

+ +

I don't know if this is a valid option for building a frontend in the context of microservices. Is anyone here also using this setup, with more insight into its drawbacks/issues?

+ +

Thanks in advance for any feedback!

+",354318,,,,,43834.99444,building a frontend for a microservice backend: architectures? (spring boot stack),,1,0,,,,CC BY-SA 4.0, +403332,1,,,1/4/2020 22:45,,1,923,"

In MVC, does the view component deliver a new view to the user directly or indirectly via the controller component? says

+ +
+

The first thing to realize is that Server-side Web MVC (e.g. ASP.NET MVC & similar where controllers handle requests and views render to HTML) is not the same as client-side/desktop MVC/MVP UI pattern. In the UI pattern, generally the View component is the view (it doesn't create one). Also modern view widgets have the capability to detect user input, back when MVC was first created, widgets had no such capability (they were just pictures on the screen), so every widget had it's own MVC, where C handled the input - in modern MVC, C implements the nontrivial behavior of a larger view.

+
+ +

I was wondering why and how ""Server-side Web MVC (e.g. ASP.NET MVC & similar where controllers handle requests and views render to HTML) is not the same as client-side/desktop MVC/MVP UI pattern""?

+ +

Are they different variants of the MVC pattern?

+ +

Thanks.

+",699,,,,,43835.78333,Why and how are Server-side Web MVC and client-side/desktop MVC not the same?,,2,0,,,,CC BY-SA 4.0, +403334,1,403335,,1/4/2020 22:59,,0,145,"

I am letting users change their password after a reset, by following a link containing hashes of the password and user name (e-mail address). The link can look like this:

+ +
+

www.example.domain/login?h1=8tGecrXvKJBOhtzvyDmJNjpLF5RF3Ed+QSkimxlJaFo=&h2=Bv2WmO4uzgrDIRj9scKtMz0Ek0KpyJ3M00wJrMU7oeA=

+
+ +

Note the presence of a ""+"" in one of the hashes. This will be translated to "" "", so in the login-method below, I'm replacing "" "" with ""+"":

+ +
public async Task<IActionResult> Login(string h1, string h2, Uri returnUrl = null)
+{
+    // h1 = password
+    // h2 = e-mail address
+    if (h1 != null && h2 != null)
+    {
+        h1 = h1.Replace("" "", ""+"");
+        h2 = h2.Replace("" "", ""+"");
+        AdminUser au = await db.AdminUsers
+            .Include(p => p.Person)
+            .Where(u => 
+                u.PasswordHash == h1 &&
+                HashPassword(u.Person.Email1, Convert.FromBase64String(u.PasswordSalt)) == h2)
+            .FirstOrDefaultAsync().ConfigureAwait(false);
+        if (au != null)
+        {
+            return RedirectToAction(""ChangePassword"", ""Admin"", new { id = au.PersonId, userType = ""au"" });
+        }
+        // Some unfinished business here (if (au == null))
+    }
+    LoginFormViewModel vm = new LoginFormViewModel
+    {
+        ReturnUrl = returnUrl
+    };
+    return View(vm);
+}
+
+ +

This has worked so far in my tests, but I don't know if I might run into problems because of the .Replace() in some scenario. If a hash contains ""/"" or ""="", there is no problem.
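
A hedged alternative that sidesteps the issue entirely: emit URL-safe Base64 so the hashes never contain '+', '/' or '='. A sketch (the alphabet swap follows RFC 4648 base64url):

+ +
static string ToUrlSafeBase64(byte[] bytes) =>
+    Convert.ToBase64String(bytes)
+        .Replace('+', '-')
+        .Replace('/', '_')
+        .TrimEnd('=');
+
+ +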

+",320898,,,,,43835.54583,"Is it ok to use .Replace("" "", ""+"") when reading hashes in a querystring?",,2,0,,,,CC BY-SA 4.0, +403346,1,,,1/5/2020 5:49,,5,1399,"

What's the difference between reactive programming and event driven architecture?

+ +

Is reactive programming a way of implementing event driven programming?

+",150418,,,,,43891.76736,What's the difference between reactive programming and event driven architecture?,,2,0,1,,,CC BY-SA 4.0, +403349,1,403353,,1/5/2020 10:06,,1,69,"

You have a video game in which, upon killing the final boss, you get coins that get distributed based on whether you are:

+ +
    +
  1. Person(individual)
  2. +
  3. Group(consists of individuals or groups)
  4. +
+ +

If the reward for killing the monster is 100, and the fighters that killed the monster together are:

+ +
var joe = new Person(""Joe"");     // Person
+var jake = new Person(""jake"");   // Person
+var emily = new Person(""emily""); // Person
+
+var oldBob = new Person(""oldBob""); // Person that belongs to a group
+var newBob = new Person(""newBob""); // Person that belongs to a group
+var familyGroup = new Group(""familyGroup"", new List<IFighter>() { newBob, oldBob });
+
+ +

The output will be:

+ +

OUTPUT:

+ +
You have killed the Giant IE6 Monster and gained 100 gold!
+MegaParty has 100 gold coins.
+familyGroup has 25 gold coins.
+--newBob has 13 gold coins.
+--oldBob has 12 gold coins.
+--Joe has 25 gold coins.
+--jake has 25 gold coins.
+--emily has 25 gold coins.
+
+ +

with the below code:

+ +

Interface:

+ +
public interface IFighter
+{
+    string Name { get; }
+    int Gold { get; set; }      
+    void Stats();
+}
+
+ +

Client:

+ +
class Program
+{
+    static void Main(string[] args)
+    {
+        int goldForKill = 100;
+        Console.WriteLine(""You have killed the Giant IE6 Monster and gained {0} gold!"", goldForKill);
+        var joe = new Person(""Joe"");
+        var jake = new Person(""jake"");
+        var emily = new Person(""emily"");
+        var oldBob = new Person(""oldBob"");
+        var newBob = new Person(""newBob"");
+        var familyGroup = new Group(""familyGroup"", new List<IFighter>() { newBob, oldBob });
+        var parties = new Group(""MegaParty"", new List<IFighter> { familyGroup, joe, jake, emily });
+        parties.Gold += goldForKill;
+        parties.Stats();
+        Console.ReadKey();
+    }
+}
+
+ +

Person:

+ +
public class Person : IFighter
+{
+    public string Name { get; }
+    public int Gold { get; set; }
+
+    public Person(string name)
+    {
+        this.Name = name;            
+    }
+
+    public void Stats()
+    {
+        Console.WriteLine(""--{0} has {1} gold coins."", Name, Gold);
+    }
+}
+
+ +

Group:

+ +
public class Group : IFighter
+{
+    public string Name { get; set; }
+    public List<IFighter> Members { get; set; }
+    public int Gold
+    {
+        get
+        {
+            int totalGold = 0;
+            foreach (var member in Members)
+            {
+                totalGold += member.Gold;
+            }
+            return totalGold;
+        }
+        set
+        {
+            var eachSplit = value / this.Members.Count;
+            var leftOver = value % this.Members.Count;
+            foreach (var member in Members)
+            {
+                // The first member absorbs any remainder from integer division.
+                member.Gold = eachSplit + leftOver;
+                leftOver = 0;
+            }
+        }
+    }
+
+    public Group(string name, List<IFighter> members)
+    {
+        Name = name;
+        Members = members;
+    }
+
+    public void Stats()
+    {
+        Console.WriteLine(""{0} has {1} gold coins."", this.Name, Gold);
+        foreach (var member in Members)
+        {
+            member.Stats();
+        }
+    }
+}
+
+ +

But what if we were to give 10% extra weightage to those Persons who belong to a group, or to a certain group? I could create classes such as:

+ +

Female:IFighter, SeniorCitizen:IFighter to whom I want to give extra weightage.

+ +

How do you go about it?
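
One hedged sketch (illustrative, not the only design): put a Weight on IFighter and split proportionally in the Group setter.

+ +
public interface IFighter
+{
+    string Name { get; }
+    int Gold { get; set; }
+    double Weight { get; }   // e.g. 1.0 normally, 1.1 for the favoured fighters
+    void Stats();
+}
+
+// Inside Group's Gold setter:
+// double totalWeight = Members.Sum(m => m.Weight);
+// foreach (var member in Members)
+//     member.Gold = (int)(value * member.Weight / totalWeight);
+
+ +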

+",96641,,,,,43835.77986,Composite Design Pattern with unequal weightage,,2,0,,,,CC BY-SA 4.0, +403357,1,,,1/5/2020 13:18,,2,227,"

I am currently developing a wrapper API for a translation service that should provide multiple methods for translating strings,

+ +
Task<string> ITranslator.TranslateAsync(string phrase, Language from, Language to)
+
+ +

an ICollection<string>,

+ +
Task<ICollection<string>> ITranslator.TranslateAsync(ICollection<string> phrases, Language from, Language to)
+
+ +

and entire documents:

+ +
Task<Stream> ITranslator.TranslateDocumentAsync(string file, Language from, Language to)
+
+ +

The Language class looks as follows

+ +
public sealed class Language
+{
+    [JsonProperty(""language"")]
+    public string TwoLetterISOLanguageName { get; internal set; }
+
+    [JsonProperty(""name"")]
+    public string NativeName { get; internal set; }
+
+    internal Language()
+    {
+    }
+
+    /* Provided in case users want to display these in a ComboBox or similar. */
+    public override string ToString() => NativeName;
+}
+
+ +

and instances of the class can only be acquired by a call to ITranslator.GetSupportedLanguagesAsync.

+ +

However, I don't want to force users of the API to query the supported languages, and I additionally want to provide a less verbose way of making translation requests, with methods that take from and to parameters of type string instead of Language, where the provided string would be a language code such as ""de"".

+ +

My issue with that solution is that it would lead to the doubling of Translate* methods and thus unnecessarily clutter the API and duplicate most of the documentation regarding these methods.

+ +

I finally thought about introducing an implicit operator on the Language class, which would convert instances to their TwoLetterISOLanguageName and remove the need for method overloads, but I am unsure whether it would be bad design, as the Language class already has a ToString method yielding NativeName instead.

+ +
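
Concretely, the conversion being considered would look something like this sketch:

+ +
public sealed class Language
+{
+    // ...
+
+    // Lets a Language be passed wherever a language-code string is expected.
+    public static implicit operator string(Language language) =>
+        language.TwoLetterISOLanguageName;
+}
+
+ +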

Would this addition of the implicit operator be bad design?

+",,user354327,,user354327,43835.64028,43838.92361,Is having different implementations for ToString and the implicit operator bad design?,,4,2,,,,CC BY-SA 4.0, +403359,1,403362,,1/5/2020 15:13,,-1,317,"

As the title implies, I am trying to understand the difference between the boundary, controller and logical class types, which are used in the MVC pattern.

+ +

I will do so using an example. Please consider the following Class Diagram : +

+ +

I have to add the needed analysis classes of type boundary and controller to account for the two use cases: borrow a book and reserve a book. The following is my answer (sorry for my handwriting):

+ +

So is this the right answer? Also, are the logical classes in this case book, student, copy (the classes that do the business-process analysis)?

+",331812,,209774,,43835.72569,43836.33542,"Needs to understand the difference between boundary, controller, logical class types",,2,0,,,,CC BY-SA 4.0, +403361,1,403363,,1/5/2020 17:04,,-4,266,"

Why are FLASH memories considered to be EEPROMs? EEPROM has ""Read Only"" in its name, so by definition you cannot rewrite it, yet FLASH memories are used for USB drives and many other technologies which are constantly rewritten.

+ +

I understand that EEPROMs are ""Erasable"", which means I can modify their contents by erasing and re-writing entire blocks, but how come a USB drive works exactly like an HDD when I read and write things?

+ +

Also, what is the point in calling it Read-Only and at the same time Erasable? Isn't that like saying ""this is unbreakable metal that can be broken""?

+",354335,,326536,,43835.76528,43835.84931,"Why is FLASH memory considered ""Read Only"" by definition?",,2,0,,,,CC BY-SA 4.0, +403364,1,,,1/5/2020 18:32,,0,201,"

The usual workflow for my team looks something like this: features are planned by product management and then collectively broken down into business requirements. Then the developers start working separately through those requirements one by one until the next feature comes around.

+ +

One issue I have noticed is that, since the stories are implemented incrementally, the associated code tends to be slapped onto the existing code. Initially it is easy to make sure the new code somehow fits with the old code, but over time portions of the codebase begin to diverge and the whole thing turns into patchwork.

+ +

I have been thinking that an initial design story (putting down method and test stubs, discussing the development strategy to follow for the rest of the stories, etc.) could help with this, although I am not sure how exactly it would work.

+ +

Alternatively, or alongside this, I want to propose switching our code review system from a single peer review by a tech lead to a consensus system. However, I am not sure what the alternative would look like: what everyone would review for, how much effort would go into it, what kind of standards we would follow, or how I would go about proving it would bring value.

+",,user313675,,user313675,43835.77639,43836.53542,Preventing the codebase from turning into patchwork as more stories are implemented,,4,4,,,,CC BY-SA 4.0, +403370,1,,,1/5/2020 20:35,,3,1315,"

How does one service get data from another service? I will give a few examples:

+ +

1) Let's say we have two services which need to share data: User Service and Order Service. Order Service will often want data that User Service's database has, like the address or bonus points to cut the price, but all Order Service receives about a user is its userId; it doesn't have direct access to the User DB.

+ +

2) An app has a user balance system with some internal coins used as the base currency. The app provides different kinds of services that users can spend those internal coins on. The service responsible for fulfilling a user's request should somehow be able to tell User Service or Balance Service (if it was decided to split balance from users into a different service) to reduce the user's balance depending on how much the request cost, or to check the balance first and return an error if it's insufficient.

+ +

3) The app from situation #2 no longer has predetermined prices; tasks got so complicated that we now need to run an Analyze Service. A task will first go to that service to determine its price and which service should handle it, and only then will the service that is able to fulfill the user's request do so.

+ +

So, I will give my thoughts about these after reading numerous articles about microservice architecture:

+ +

1) I see a few ways to do it:

+ +
  • a direct synchronous HTTP request from one service to another to get the necessary data (slow; most importantly, the services are now coupled and I lose the benefits of microservice architecture)
  • the client app has all the necessary data in its global store (if it's a SPA, for example) and just sends it in the request (but we still need a means to check whether the data we got is valid...)
  • saving the necessary data in a JWT (weird..?)
+ +

2) I guess this is a good example of where message brokers come in handy. I could send a message to User Service or Balance Service saying the task is done, including its price and the userId, and a callback would reduce the user's balance.

+ +
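
To illustrate what I mean (just a sketch; IMessageBus and TaskCompleted are hypothetical names, not any specific library):

+ +
using System.Threading.Tasks;
+
+// Hypothetical event published by the service that fulfilled the task.
+public class TaskCompleted
+{
+    public string UserId { get; set; }
+    public decimal Price { get; set; }
+}
+
+public class TaskService
+{
+    private readonly IMessageBus _bus; // hypothetical broker abstraction
+
+    public TaskService(IMessageBus bus) => _bus = bus;
+
+    public async Task CompleteAsync(string userId, decimal price)
+    {
+        // ... perform the actual work ...
+        // Balance Service subscribes to this message and reduces the balance.
+        await _bus.PublishAsync(new TaskCompleted { UserId = userId, Price = price });
+    }
+}
+
+ +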

3) A message broker again. Analyze Service sends a message once the analysis is finished, and the task-performing service bound to that queue starts doing the task; which task service does it depends on the queue Analyze Service sends the message to.

+ +

I'm not sure about any of the solutions here, so here is my question: what are the common ways to deal with these kinds of problems?

+",342857,,,,,43838.71736,Data from one microservice to another,,2,1,,,,CC BY-SA 4.0, +403374,1,,,1/5/2020 21:39,,0,69,"

GitHub and NuGet

+ +

Do you use connection strings to instantiate your adapter services, or do you always configure them through IoC? I think connection strings could provide way more flexibility, especially when combined with an IoC container. Here are my two helper classes, which I defined to handle a URI-based connection string syntax:

+ +
clr://assembly-name/full-class-name?param1=arg1&param2=arg2 
+ioc://assembly-name/full-class-name?param1=arg1&param2=arg2 
+
+ +

How would you evaluate this approach from an architectural standpoint?

+ +

I have this to support clr://:

+ +
using System.Linq;
+using System.Reflection;
+
+namespace System
+{
+    public static class Activator<T>
+    {
+        public static T CreateInstance(string uri) =>
+            CreateInstance(new Uri(uri));
+
+        public static T CreateInstance(Uri uri) =>
+            uri.Scheme != ""clr""
+            ? throw new NotSupportedException()
+            : (T)Activator.CreateInstance(GetReturnType(uri), GetArguments(uri));
+
+        static object[] GetArguments(Uri uri)
+        {
+            var arguments = uri.Query
+                .TrimStart('?')
+                .Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries)
+                .Select(p => p.Split('='))
+                .ToDictionary(nv => nv[0], nv => Uri.UnescapeDataString(nv[1]));
+
+            return GetParameters(uri)
+                .Select(p => Convert.ChangeType(arguments[p.Name], p.ParameterType))
+                .ToArray();
+        }
+
+        static ParameterInfo[] GetParameters(Uri uri) =>
+            GetReturnType(uri)
+                .GetConstructors()
+                .First()
+                .GetParameters();
+
+        static Type GetReturnType(Uri uri) =>
+            Type.GetType($""{uri.Segments[1]}, {uri.Host}"");
+    }
+}
+
+ +

So:

+ +
using (var reader = Activator<TextReader>.CreateInstance(
+    ""clr://mscorlib/System.IO.StringReader?s=Hello%20World""))
+    Assert.AreEqual(""Hello World"", reader.ReadToEnd());
+
+ +

And the following is for ioc:// (works perfectly with AutofacServiceProvider, etc.):

+ +
using System.Collections.Generic;
+using System.Linq;
+using System.Linq.Expressions;
+using System.Reflection;
+
+namespace System
+{
+    public static class ServiceLocator
+    {
+        public static T GetService<T>(this IServiceProvider provider, string uri) =>
+            provider.GetService<T>(new Uri(uri));
+
+        public static object GetService(this IServiceProvider provider, string uri) =>
+            provider.GetService(new Uri(uri));
+
+        public static T GetService<T>(this IServiceProvider provider, Uri uri) =>
+            (T)provider.GetService(uri);
+
+        public static object GetService(this IServiceProvider provider, Uri uri)
+        {
+            if (uri.Scheme != ""ioc"")
+                throw new NotSupportedException(""Schema not supported."");
+
+            var factory = (Delegate)provider.GetService(GetFactory(uri));
+            return factory.DynamicInvoke(GetArguments(uri));
+        }
+
+        static Type GetFactory(Uri uri)
+        {
+            Func<Type[], Type> getType = Expression.GetFuncType;
+            var types = GetParameters(uri)
+                .Select(p => p.ParameterType)
+                .Append(GetReturnType(uri));
+
+            return getType(types.ToArray());
+        }
+
+        static object[] GetArguments(Uri uri)
+        {
+            var query = GetQuery(uri);
+            return GetParameters(uri)
+                .Select(p => Convert.ChangeType(query[p.Name], p.ParameterType))
+                .ToArray();
+        }
+
+        static Dictionary<string, string> GetQuery(Uri uri) => 
+            uri.Query
+                .TrimStart('?')
+                .Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries)
+                .Select(p => p.Split('='))
+                .ToDictionary(nv => nv[0], nv => Uri.UnescapeDataString(nv[1]));
+
+        static ParameterInfo[] GetParameters(Uri uri)
+        {
+            var query = GetQuery(uri);
+            return GetReturnType(uri)
+                .GetConstructors()
+                .First()
+                .GetParameters()
+                .Where(p => query.ContainsKey(p.Name))
+                .ToArray();
+        }
+
+        static Type GetReturnType(Uri uri) =>
+            Type.GetType($""{uri.Segments[1]}, {uri.Host}"");
+    }
+}
+
+ +

Here, missing parameters are supposed to be provided by the IoC container.

+",126611,,126611,,43836.02986,43836.02986,Instantiating adapters in onion architecture,,0,3,,,,CC BY-SA 4.0, +403377,1,403379,,1/5/2020 22:38,,-2,142,"

Recently I have been doing a lot of programming in Python, and I have noticed that my programs can be somewhat hard to read. I usually have one main class which does everything, sort of like this:

+ +
class main:
+    def __init__(self, *args, **kwargs):
+        # Make baseline variables here
+        self.foo = ""bar""
+        self.hello = 12345
+        # Get starting input.
+        self.inputted = self.get_input()
+
+    def get_input(self):
+        inputted = input(""Hello, world!"")
+        return inputted
+
+    def do_something(self):
+        # Do data processing, make stuff, call other functions within the class.
+        pass
+
+main()
+
+ +

I have been doing this to make variable access easy. Considering how it hurts code readability, should I be using another approach? If so, what?

+",349829,,,,,43835.99444,Proper program structuring in Python,,1,4,1,,,CC BY-SA 4.0, +403382,1,,,1/6/2020 2:27,,1,170,"

I've got two near-identical functions (Node.js). One of them queries the DB for shop information by its phone number, while the other queries by its id. I'd like to know if there's a clean way to merge these two together.

+ +
async getShopInfo (phone) {
+    try {
+      let shopInfo = await shopConfigModel.getShopInfo(phone)
+      if (shopInfo && shopInfo.length > 0) {
+        return shopInfo
+      }
+    } catch (error) {
+      logger.error('error on getShopInfo %s', error.message)
+    }
+
+    return null
+}
+
+async getShopInfoById (shopId) {
+    try {
+      let shopInfo = await shopConfigModel.getShopInfoById(shopId)
+      if (shopInfo && shopInfo.length > 0) {
+        return shopInfo
+      }
+    } catch (error) {
+      logger.error('error on getShopInfoById %s', error.message)
+    }
+
+    return null
+}
+
+",212887,,,,,43877.25694,How to refactor duplicated functions with only one difference in parameter list,,2,2,,,,CC BY-SA 4.0, +403383,1,,,1/6/2020 2:42,,2,138,"

I have an eCommerce-like system which produces 5000 user events per second (of different kinds, like product search, product view, profile view).

+ +

Now, for reporting, business users would like to view different dimensions, for example:

+ +
 1. Find user sessions (say, of 30 mins) in any given x days.
+ 2. Find the number of searches/product views/profile views that happened in any given x days.
+
+ +

There are two parts involved in the above use case:

+ +
    1. Computation/Aggregation of events data
+    2. How to store the data efficiently.
+
+ +

First, some thoughts and a question on the storage part, as this will decide how to compute/aggregate the data:

+ +
  1. I believe I should store the data per day, as this is the most granular unit the data can be asked for. It can be stored in Elastic so that it is searchable and can be aggregated over x days.
  2. Should I store each dimension (session/searches/product views) separately, or should the session be the top-level object which internally (nested) contains the other related data (like searches, product views, etc.) for that session? Then, if a product-views query is asked, it can be served from here directly.
+ +

Second, how should the aggregation/computation part be designed? Here are my thoughts:

+ +
  1. A collector (say, Java based) will put events into a scalable messaging system like a partitioned Kafka queue.
  2. Multiple Spark consumers will process the events from the queue so that the computation can be done in parallel and near real time.
  3. The Spark consumers aggregate the events per user per 30 minutes and store the result in Elastic, which can be searched through a Kibana dashboard.
  4. The aggregation can be computed like this:

    + +

    4a. Get the event from the queue, get the user_id, and build an in-memory map until no further event arrives for that user for 30 minutes, where the key is the user_id and the value is a session object which internally contains the searches/product views/profile views for that session.

    + +

    4b. Once the next event for a user arrives more than 30 minutes after the previous one, push that user's session object from the map to Elastic.
+ +

Is my design for storage and aggregation on the right path?

+",85226,,85226,,43836.42569,44106.50486,Aggregation and storage system design for user event processing?,,2,0,,,,CC BY-SA 4.0, +403387,1,403392,,1/6/2020 8:01,,2,172,"

We are using Entity Framework with a SQL Server database. The business program needs many computed members which are not in the database, due to storage costs, high querying costs, etc. Currently, the team is copying the whole DB entity layer (from scaffolding) and creating a whole second layer, adding computed members to the new entities; the EF layer is then mapped onto the new layer with AutoMapper. I am not sure whether this is optimal; however, the architect wants it this way.

+ +

Is it general practice in the software industry to copy the database layer into another copy layer with computed members? I'm aware of DDD (Domain-Driven Design); however, we are not creating aggregate roots, value objects, clusters, etc.

+ +

What is an alternate solution if this is not good practice?

+ +

*I started programming two years ago out of college and am curious whether this is good industry practice or whether an alternative exists. I searched all over Google and Stack and did not find this strategy described anywhere.

+ +

So basically

+ +

SQL Database ---> EF Layer ---> Another Copy Layer (with EF and computed members in the class) ---> Application Service ---> DTO ---> Controller APIs

+ +

I agree with every layer except this EF copy layer with computed members. Couldn't we just utilize partial classes?

+ +

Example: the new class layer would contain all existing members, plus computed ones like these:

+ +
FullName => FirstName + LastName
+
+AccountValue => Quantity * StockPrice
+
+ +
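
With partial classes, a minimal sketch could look like this (the entity and property names are just illustrative, based on the example above):

+ +
// Scaffolded by EF (generated file), regenerated on each scaffold.
+public partial class Account
+{
+    public string FirstName { get; set; }
+    public string LastName { get; set; }
+    public int Quantity { get; set; }
+    public decimal StockPrice { get; set; }
+}
+
+// Hand-written file: computed members only, nothing is copied or mapped.
+public partial class Account
+{
+    public string FullName => FirstName + "" "" + LastName;
+    public decimal AccountValue => Quantity * StockPrice;
+}
+
+ +

Both parts compile into the same class, so the computed members travel with the entity without a second layer.

+ +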

Update:

+ +

I would also like to hear from people who think differently; I appreciate the answers below.

+",,user354368,,user354368,43836.69236,43836.69236,Copy Database Entities into Another Layer in Software Architecture?,<.net>,2,0,,,,CC BY-SA 4.0, +403404,1,,,1/6/2020 12:56,,2,174,"

We have a set of microservices and would like to expose endpoints from a subset of these for third parties to use. To this end, we will build an API Gateway that acts as the access control mechanism for all our services.

+ +

In terms of how access is controlled, I've thought about the following but I'm unable to decide on the appropriate solution among them:

+ +

A. API keys, i.e. the standard way this is usually done. The disadvantage is that API keys can be stolen and used without us ever knowing.

+ +

B. OAuth2. I don't think this works for our use case, as OAuth is the answer to accessing user data on behalf of the user. Our integration will require the third party to send us periodic updates relating to an item the customer has ordered.

+ +

C. Have a DB for third-party user accounts, with signed JWTs that enable access. I see this as an implementation of the RBAC (role-based access control) model. RBAC is traditionally a solution for cases with a large number of users, so with a single third-party user this is potentially overkill.

+ +
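
For what it's worth, the token check in option C could be fairly small. A sketch (using System.IdentityModel.Tokens.Jwt; the issuer/audience values are illustrative):

+ +
using System.IdentityModel.Tokens.Jwt;
+using System.Security.Claims;
+using Microsoft.IdentityModel.Tokens;
+
+public static class PartnerTokens
+{
+    // Validates the signed JWT presented by the third party.
+    public static ClaimsPrincipal Validate(string token, SecurityKey signingKey)
+    {
+        var parameters = new TokenValidationParameters
+        {
+            ValidIssuer = ""our-gateway"",    // illustrative
+            ValidAudience = ""partner-api"",  // illustrative
+            IssuerSigningKey = signingKey
+        };
+        return new JwtSecurityTokenHandler().ValidateToken(token, parameters, out _);
+    }
+}
+
+ +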

D. AWS's API Gateway. I don't have personal experience with it, so I can't judge how good an option it would be. Also, most of our software is hosted internally, so the gateway would need to route traffic to an internal server, which is probably going to increase latency - if Amazon even supports this option.

+ +

--

+ +

Which option do you think works best? It doesn't have to be A-D. If there's further reading you can suggest, please feel free to do so.

+",345294,,,,,43836.93611,Securing API for third party use,,3,2,,,,CC BY-SA 4.0, +403405,1,,,1/6/2020 13:14,,1,68,"

I'm making an app with a Django backend and a React frontend (the latter being developed by someone else). The current plan is to fully decouple the two and have them communicate over an API. However, I would like to make use of Django's authentication features. My understanding is that Django authentication makes use of Django sessions, which in turn abstract the sending and receiving of cookies. My suspicion, therefore, is that if the setup looks like this:

+ +
User request <-> React app <-> API request <-> Django app
+
+ +

it's not going to work: the cookie likely won't make it all the way to and from the user, and instead the React app itself is going to end up being the thing that gets authenticated.

+ +

Is there a (sane, relatively straightforward) way to 'push' cookies through the React app to and from the user, so that Django can authenticate users in this setup? Or do I have to serve the client directly from the Django app?

+",339042,,205897,,43845.59514,44115.62778,Using Django Sessions when views are accessed by API,,1,1,,,,CC BY-SA 4.0, +403407,1,,,1/6/2020 14:00,,1,86,"

My model classes (business layer/library of an ASP.NET Core solution) use a number of services (IOrmService, IEmailService, IFileService, IHtmlToPdfConverter, etc.). Different models require different combinations of the services for CRUD operations. This prevents me from using a base class to the maximum extent and forces me to repeat code/methods that are very similar. It could be solved by passing factory methods not a combination of services, but a container with all of the services. Which solution should I choose:

+ +

- inject IServiceProvider (looks like a bit too tight coupling with ASP.NET);
- create and inject my own service container (e.g. IBusinessServiceProvider);
- leave everything as is.
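
+ +

To show what I mean by my own container, a minimal sketch (the interface and member names are just illustrative):

+ +
public interface IBusinessServiceProvider
+{
+    IOrmService Orm { get; }
+    IEmailService Email { get; }
+    IFileService Files { get; }
+    IHtmlToPdfConverter HtmlToPdf { get; }
+}
+
+ +

Factory methods and models would then take a single IBusinessServiceProvider dependency instead of varying combinations of the individual services.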

+",353988,,,,,43836.66806,Custom services container for ASP .NET core business layer,,2,4,,,,CC BY-SA 4.0, +403410,1,,,1/6/2020 14:47,,-4,144,"

I want to create a program for the Linux OS in Haskell with multiple .hs files. I don't need help with the code or compiling it, what I don't know how to do is create an application that can be called from the terminal (like how python or ghci is). So:

+ +
  • Where should the files go?
  • What do I need to do to allow it to be called from the command line?
+ +

I'd appreciate any help/tutorials on this.

+ +

Edit: I need to turn my program into whatever it is that you install on your computer. Like Python! How does Python pull this off?

+",354397,,348453,,43974.38472,43974.38472,How to Create an Application on Linux?,,1,3,,43836.86944,,CC BY-SA 4.0, +403413,1,,,1/6/2020 15:53,,3,444,"

In my limited understanding of microservices they seem to focus on quite limited pools of information, leaving it up to the application to bring all the data together in ways which were perhaps not anticipated.

+ +

I am working through a simple project in an attempt to get up-to-speed with a number of new concepts for me. The application is for a library consisting of books. A book has an Id, a title and a number of 'can-haves'. For example, 0 or more authors, 0 or more editors, 0 or more translators etc.

+ +

The books are in one table, the people in another table and the relationships in a third table.

+ +
@Entity  // assuming JPA, as implied by @Id/@IdClass
+@IdClass(BookAuthorId.class)
+public class BookAuthor {
+    @Id
+    private char type;  // A=Author, E=Editor, T=Translator
+
+    @Id
+    @JsonProperty(""BOOK_ID"")
+    private Long bookId;
+
+    @Id
+    @JsonProperty(""AUTH_ID"")
+    private Long authorId;
+}
+
+ +

I could put these relationships directly into the book model but I'm thinking of a different approach where the authors are picked up in a separate call.

+ +
http://host/api/book/{bookid}
+
+http://host/api/book/{bookid}/author
+http://host/api/book/{bookid}/editor
+http://host/api/book/{bookid}/translator
+
+ +

It seems to me that the data returned is kept to a minimum, the queries are kept ultra simple, and the data contract is probably easier to maintain.

+ +

On the downside, the number of calls increases but this might be mitigated by the use of a Backend-for-Frontend layer which would marshal all the information on the server side and not in the client.

+ +

For my simple project I'm sure either way will work fine but I'm wondering how these 2 approaches would pan out in the real world.

+ +
  • Is one more scalable?
  • Is one more difficult to maintain?
  • Should one be avoided at all costs?
+",598,,353068,,43837.83542,43837.83542,Is it better for a microservice to access database tables individually or to work with joined data?,,4,0,,,,CC BY-SA 4.0, +403418,1,403432,,1/6/2020 16:51,,-2,186,"

I have very little experience in this area, so sorry if I use any incorrect terminology or if this is a stupid/simplistic question in general.

+ +

But from what I understand, when a developer wants to push out an update for an app, it has to be approved by Apple/Google, which can take up to 2 days. And apparently it used to take much longer with Apple. So what happens if, for example, a massively popular game on an app store is found to have a huge exploit which could lead to players seeing other players' credit card details, as an extreme example? Is there a way to bypass these wait times to push out a fix?

+ +

And I've heard of server-side hotfixes in games before, so I assume these can be done to bypass these approval systems. And if so, doesn't that make the approval system pointless? I understand that hotfixes don't require the user to manually download a new update, but couldn't hotfixes still be used to do malicious things, which I assume these approval systems are in place to prevent?

+ +

I guess my questions boil down to: why are these approval systems in place, and what can be done if there's a major exploit within a game that cannot be fixed by a hotfix?

+",354404,,,,,43836.91667,What is done when an urgent fix for an app is needed but updates have to be approved by Apple/Google etc?,,2,7,,,,CC BY-SA 4.0, +403419,1,403435,,1/6/2020 16:51,,-4,87,"

I have a design question and am hoping for some validation. I want to create a 'serverless' web app which will parse a csv file and return an XML file.

+ +

I will have a single-page React app hosted in AWS S3, where a user can drag and drop a file; it will be parsed and the contents of the CSV file will be sent to a Lambda, which will then convert it to an XML file and return this to the user.

+ +

I am hoping not to have to use S3 to store the CSV and resulting XML files, but I can if that is needed.

+ +

Is this possible without a server or am I way off?

+",302041,,,,,43836.92778,Use React to parse a file (serverless),,1,4,,,,CC BY-SA 4.0, +403424,1,,,1/6/2020 17:55,,1,353,"

Android uses its own runtime, known as the Android Runtime (ART), to execute Android apps.

+ +

How do games (made with Unity 3D) and apps (Xamarin) written in C# (which require .NET's Mono framework) run on Android?

+ +

The official website of mono says

+ +
+

The Mono runtime has been ported to the Android OS.

+
+ +

What exactly does that mean?

+ +

Is Mono Bundled with Android OS?

+ +

Is Mono Bundled with each App and Game?

+ +

According to this answer on the Unity 3D forum, Mono is similar to Java in terms of enabling cross-platform software development.

+ +

This leads back to my initial question,

+ +

Does Android have two runtimes (Mono and ART)?

+",351666,,351666,,43836.82639,43836.88403,"How does .Net's Mono Framework, Work on Android",,1,0,,,,CC BY-SA 4.0, +403427,1,,,1/6/2020 19:10,,-2,165,"

An example of my API is

+ +
mydomain.com/v1/update-profile
+
+ +

So the user is authenticated and wants to update some settings in their profile. Should we design the API to expect a POST or a PUT HTTP method?

+ +

From my internet search it seems PUT is idempotent, but at the same time it's usually used for updating a particular resource in a URL with an id at the end. Example:

+ +
mydomain.com/v1/questions/{id}
+
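+ +

If I went with PUT, I understand the profile itself would be addressed as the resource, e.g. (an illustrative route):

+ +
PUT mydomain.com/v1/users/{id}/profile
+
+ +

Repeating that request with the same body would leave the profile in the same state, which is what makes it idempotent.
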
+",354422,,110531,,43837.34514,43837.37083,Should you use PUT or POST when an authenticated user is trying to update their own settings?,,2,3,,,,CC BY-SA 4.0, +403438,1,403439,,1/6/2020 23:39,,1,104,"

I'm ingesting 150 objects that each require a user-capabilities check; the function isUserAdmin tells me whether or not a user is an admin. Inside this function there are a lot of deeper checks and so on. It'd be really, really nice if I could cache this so I don't have to run the check for all 150 objects.

+ +
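
To make the caching idea concrete, this is roughly what I have in mind (a sketch in C# for concreteness; the expensive check is passed in):

+ +
using System;
+using System.Collections.Generic;
+
+// Memoizes the expensive capability check for the lifetime of one request.
+public class CapabilityCache
+{
+    private readonly Dictionary<string, bool> _isAdmin = new Dictionary<string, bool>();
+    private readonly Func<string, bool> _expensiveCheck;
+
+    public CapabilityCache(Func<string, bool> expensiveCheck) => _expensiveCheck = expensiveCheck;
+
+    public bool IsUserAdmin(string userId)
+    {
+        if (!_isAdmin.TryGetValue(userId, out var result))
+            _isAdmin[userId] = result = _expensiveCheck(userId);
+        return result;
+    }
+}
+
+ +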

I'd like to determine, before anything loads, who the user is, and set up things that in practice don't change -- or do they? Suppose a super admin changes the privileges of an admin user mid-request, but I've already set that admin's privilege information. Say this admin has the capability to delete other users and is running a request to do exactly that; during that time the super admin says ""nope, can't do that anymore"", but because I already set that user's permissions on that request, the change will not take effect and the admin will still be able to delete users.

+ +

What's the fix here? Setting information pre-emptively is basically dangerous in an async-by-proxy environment (which everything is) but not doing this makes me perform the same check 150 times in a row.

+ +

I feel like this is a chicken-and-egg problem, and every time I'm faced with such an issue I feel I'm not seeing some vital angle. What am I missing?

+",353781,,,,,43841.30139,How can I store an user's capabilities to boost performance while allowing real-time updates of said capabilities?,,4,3,,,,CC BY-SA 4.0, +403441,1,403479,,1/7/2020 1:29,,-3,127,"

I am trying to create a season schedule for our Knowledge Bowl league (middle school trivia). I am NOT very mathematical nor a ""techie"". Here are my constraints:

+ +
  1. There are seven schools participating; each school has three teams. Example: Tenino Black, Tenino Red and Tenino White. This means there will be 21 teams in total.
  2. Teams from the same school cannot play each other, so each team will play the other 18 teams.
  3. There will be seven meets.
  4. Each individual meet is made up of three matches in which three teams compete against each other. Also, each team will compete in 21 matches in total.
  5. It does not matter if a school is hosting the meet -- ""Home Advantage"" is not a factor.
+ +

I know there must be a simple mathematical way to set this up, but I just can't figure it out. I've tried to just do this randomly, but am quickly approaching insanity. Please help! Thank you in advance.

+",354435,,319783,,43837.27569,43837.83542,Help Developing a Season Schedule?,,2,3,,,,CC BY-SA 4.0, +403442,1,403459,,1/7/2020 2:35,,2,267,"

I am designing an application where there will be one column that stores a large amount of text. I'm debating whether the text should be stored in the database itself, or whether the database should store a reference to an object store such as Google Cloud Storage or Amazon S3, and have the application use that reference to pull from those services. My thinking is that the object store offers a significantly cheaper storage cost per GB than the database; but I am uncertain what I am trading off in doing this.

+ +
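
To clarify the reference approach I mean, written in C# for concreteness (a sketch with the AWS SDK for .NET; bucket and key naming are illustrative):

+ +
using System;
+using System.Threading.Tasks;
+using Amazon.S3;
+using Amazon.S3.Model;
+
+public static class TextStore
+{
+    // Uploads the large text to S3 and returns the key, which is what
+    // gets stored in the database column instead of the text itself.
+    public static async Task<string> StoreTextAsync(IAmazonS3 s3, string text)
+    {
+        var key = $""documents/{Guid.NewGuid()}.txt"";
+        await s3.PutObjectAsync(new PutObjectRequest
+        {
+            BucketName = ""my-app-documents"", // illustrative
+            Key = key,
+            ContentBody = text
+        });
+        return key;
+    }
+}
+
+ +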

So what tradeoffs are there to storing large binary data in my database as opposed to an object store?

+",283155,,,,,43837.66736,When would I use a database over an object-store for large blobs?,,3,4,,,,CC BY-SA 4.0, +403446,1,403448,,1/7/2020 7:03,,2,136,"

I have different types of posts a user can create:

+ +
  • TextPost
  • ImagePost
  • VideoPost
+ +

The frontend client needs to retrieve the last 10 posts from a user. I am wondering how to model this structure correctly in a relational database, and what data structure to use for sending the data to the frontend.

+ +

My proposed solution:

+ +

A post table with columns:

+ +
  • user_name
  • create_date
  • text_post_id
  • video_post_id
  • image_post_id
+ +

A row has only one of the foreign key _id columns set. To get the last 10 posts, the backend will do a select on the post table. Next it will do another select query for each of the post types. Example:

+ +
select * from text_post where text_post_id in (?1)
+
+ +

The data is then mapped into a list of post objects, a post object having the fields textPost, videoPost and imagePost; again, only one of them is set at a time. The list is then ordered by each post's create date. Finally, we send the response to the frontend, which will iterate over the list and display each post type accordingly:

+ +
[
+  {
+    ""videoPost"": null,
+    ""textPost"" : {},
+    ""imagePost"": null
+  },
+  {
+    ""videoPost"": {},
+    ""textPost"" : null,
+    ""imagePost"": null
+  }
+]
+
+ +

Is this a good solution that will scale and allow for the possibility of adding more post types in the future?

+",354445,,,,,43837.86597,"Modeling different types of ""Posts""",,3,0,,,,CC BY-SA 4.0, +403449,1,,,1/7/2020 8:18,,1,67,"

Imagine that we have a storage of objects O (it can be SQL, NoSQL, it doesn't matter) where each object contains properties P_1 to P_n. Some of these properties are stored in storage S1, some in storage S2, etc. In general, imagine that the full definition of a single object can only be assembled by invoking some kind of lookup or join across many storages. You can also think of each object as vertically partitioned across different logical databases, where each database is oblivious of the others.

+ +

Now, we want to define a way to filter and sort these objects. But the functions to filter and sort the objects depend on properties that span multiple databases. At a glance, that means that if we were to perform any kind of sorting or filtering, we would have to pull the whole database and join the properties on the caller side, which can quickly get expensive for a non-trivially sized database.

+ +

How can we implement this efficiently?

+ +

Some options come to mind:

+ +
  1. Store columns that are used for filtering/sorting together in one storage. In other words, make sure that properties that are used for filtering/sorting stick together, so that we can perform the filtering/sorting in the database server. But this is inflexible, because whenever we need to change the filtering/sorting algorithm we potentially have to change the storage schema. Admittedly this is not too painful in a NoSQL database, but it is still a change to the implicit schema.
  2. Limit any filtering/sorting algorithm to properties that already live together in one database. This is effectively a variant of no. 1, but puts the burden of inflexibility on the caller side.
  3. Have a storage specifically for filtered and sorted lists of objects. This feels a bit dumb, and totally space-inefficient, especially if the filtering/sorting algorithm is very generic.
+ +

While thinking about this problem, it also occurred to me that it is a similar problem to how Facebook gives a customized News Feed to every user. How does it perform the sorting of the News Feed efficiently when the relevancy score of each News Feed item is specific to the user?

+",147945,,147945,,43837.39792,43837.40278,System design for filtering/sorting objects whose properties are stored in distributed storage,,2,0,,,,CC BY-SA 4.0, +403454,1,,,1/7/2020 11:54,,1,65,"

Here's a simple scenario: a REST API of which you launch multiple, load-balanced replicas of the same service with Gunicorn. Unit and integration tests are run in single-instance cases, but how can you run integration tests that include all instances so you check that they have a single source of truth?

+ +

I can think of several ways, but I don't know if there's a better one in Python (or in other languages); maybe I just lack the vocabulary to search for it. Here's what I'm thinking:

+ +
  • Launch the several instances of the server from some external program, and use API requests from the tests to check the behaviour.

  • Use Gunicorn as a module, write a script that works in the same way as the command you used, and run integration tests on that one.
+ +

Is there any better way to do this?

+",186925,,186925,,43837.57153,43837.62014,How can you do integration tests when launching multiple REST API instances of the same server?,,1,5,,,,CC BY-SA 4.0, +403455,1,403470,,1/7/2020 13:29,,3,741,"

For modelling software implemented with the imperative or procedural programming paradigm we have Flowcharts, process diagrams, etc.

+ +

For object oriented we have UML class diagrams, object diagrams, state diagrams, etc.

+ +

Is there a UML diagram suitable for designing software implemented with functional programming?

+",297840,,209774,,43837.94931,43837.94931,Is there a UML diagram for functional programming?,,1,6,2,,,CC BY-SA 4.0, +403466,1,,,1/7/2020 16:02,,0,62,"

I have a complex PostgreSQL database structure that consists of views, materialized views and foreign data wrappers.

+ +

The database schema has been updated manually without any sort of migration script, hence I want to introduce a database migration scheme.

+ +

One approach is to generate an initial SINGLE migration that will run only under these conditions:

+ +
  1. The environment is not a production or staging one.
  2. There are no tables in the database.
+ +

This single migration will generate the existing database schema from a database dump. After that, each change to the database will be placed in a new migration script. The framework that will execute the migration scripts is Laravel, and the database layer is PostgreSQL.

+ +

The reason I am trying this approach is that I want to avoid corrupting the existing databases on production and staging, while still being able to reproduce the database in a local development environment.

+ +

AFAIK Laravel also keeps track of the executed migrations in the database, so I want to avoid the migrations getting out of sync.

+ +

Also, approaches such as this one https://github.com/Xethron/migrations-generator fail to handle foreign database tables; each foreign table is generated as a normal one.

+ +

Would you recommend this approach in my case?

+",249660,,326536,,43837.78403,43837.85278,Should I skip some migrations on production in case of migrating a legacy database that has not been generated via migration script?,,1,2,,,,CC BY-SA 4.0, +403472,1,403497,,1/7/2020 18:15,,-3,153,"

I know that a RESTful service has a unified API and treats everything as a resource (a noun, e.g. a book, a product, ...) on which CRUD operations can be applied using HTTP verbs (GET, PUT, POST, PATCH, DELETE, ...).

+ +

I am aware of the distinction between nouns and verbs.

+ +

Now, I am building a Web API that manages books and custom analytics of the books, like:

+ +
  1. Which books or categories are searched frequently?
  2. Demand vs availability
+ +

ASP.Net Web API

+ +

BooksController - GET(Odata), PUT, POST, PATCH, DELETE

+ +

PagesController - GET(Odata), PUT, POST, PATCH, DELETE

+ +

BookAnalyticsController - GetFrequentBooks, GetFrequentCategories,...

+ +

PageAnalyticsController - GetFrequentPages, GetBookmarkedPages,...

+ +

I used OData to query the list by properties and to select properties as well. This saves duplicating counter-REST methods like GetBooksByCategory, GetBooksByYear, GetBooksByAuthor, ...

+ +
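
For example, instead of dedicated methods, consumers can express such queries directly (illustrative OData URLs):

+ +
GET https://localhost/api/book?$filter=Category eq 'Science'
+GET https://localhost/api/book?$filter=Year eq 2019&$select=Title,Author
+
+ +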

Now, as you can see, BookAnalyticsController and PageAnalyticsController are going to have multiple HTTP GET methods based on use cases.

+ +

For Books or Pages controller, I can elegantly browse like

+ +

GET https://localhost/api/book

+ +

GET https://localhost/api/book/{id}

+ +

POST https://localhost/api/book BODY

+ +

PUT https://localhost/api/book/{id} BODY

+ +

PATCH https://localhost/api/book/{id} PARTIALBODY

+ +

For BookAnalytics,

+ +

GET https://localhost/api/bookanalytics/GetFrequentBooks

+ +

GET https://localhost/api/bookanalytics/GetFrequentCategories

+ +

IMO, this is starting to look ugly. How would you make it a pure RESTful API for analytics? Please suggest.

+",318937,,,,,43838.42847,A True RESTful API | Help needed,,1,8,,,,CC BY-SA 4.0, +403478,1,,,1/7/2020 19:11,,1,138,"

I am trying to review our current architecture. These are the current layers; I am trying to analyze them and see whether the following three ideas are good practice.

+ +

DatabaseSQLStorage ---> Entity Framework Class Layer ----> Generic Repository ---> Entity Framework Copy Model Layer ----> Application Service ---> DTO Layer ----> API Controller

+ +

1) One thing I am concerned about: the application layer has tangible method references to two layers, so if the entity layer breaks, two layers actually break (the entity layer and the copy model layer). The repository should only return the model layer, and we should only be working with that at that point, not holding two references.

+ +

When discussing this with the architect, he mentioned 'well, we have different functional areas with our 5 application services', so I'm not sure. I was trying to comprehend this.

+ +
public async Task<ProductDto> GetProductById(int id)
+{
+    var productData = await productRepository.GetAsync(id);    // injected IProductRepository
+    var productModel = mapper.Map<ProductModel>(productData);  // EF entity -> copy model
+    var productDto = mapper.Map<ProductModel, ProductDto>(productModel);
+    return productDto;
+}
+
+ +

2) Another thing I am questioning: we have both a model layer AND a DTO layer. The model layer contains computed members; see the linked question below. We are not doing Domain-Driven Design (no aggregate roots, value objects, clusters, etc.).

+ +
ProductFullName => ProductType + LastName + ProductDescription // Concatenation
+AccountValue => Quantity * ProductCost
+
+ +

Copy Database Entities into Another Layer in Software Architecture?

+ +

I am fine with the DTO layer; for example, the business data contains private account numbers, Social Security numbers, birthdates, etc., that we remove before sending over the API.

+ +

I am just not sure whether a DTO layer is needed in addition to the other copy layer.

+ +

3) Also, I am not a fan of the generic repository, since the EF DbContext is a repository and unit of work by itself, but that's another point.

+",,user354368,,user354368,43839.26042,43839.26042,"Application Service Layer referring to two Layers, and Database Entity Copy Layer with DTO Layer?",<.net>,0,7,1,,,CC BY-SA 4.0, +403481,1,403487,,1/7/2020 21:36,,5,144,"

I have a class called D2Array which represents a fixed-size 2D array. It's meant to be generic and it comes with quite its lot of methods: getting an element, setting an element, extract a whole row, column, etc.

+ +

Now I want to write MyClass, which contains a D2Array, and in order to use MyClass properly I'll need to manipulate the state of the inner array. A straightforward solution would be having a getter to the array, manipulating it outside MyClass, and then putting it back into it through a setter.

+ +

(Instead of the usual Template<Type> notation, I used Template-Type to describe template parameters, which is a bit of a contraption, but StarUML wouldn't let me do otherwise.)

+ +

+ +

This way however, any D2Array could be fed to the setter, which would then need some sort of validation to prevent invalid states from happening. This can quickly become expensive in my case and I find it horribly counter-intuitive that the setter may throw, so I think this is clearly not a suitable solution.

+ +
+ +

Another straightforward solution which also has the benefit of strongly enforcing encapsulation: in MyClass, write public methods to manipulate the inner array.

+ +

+ +

This is actually fine when you need to allow manipulation of the array only in a way that is not trivial. However, when you just want to expose part or whole of the existing methods on the array, it translates to literally writing one-liners which simply forward arguments to the appropriate methods. It already feels dull having to do it for one class, let alone doing it for several classes which all use an inner array in a similar yet different fashion.

+ +

Retrospectively, I think this is the essence of my problem: I want to expose only a subset of the interface of my inner array, exactly as-is, having to write the least amount of forwarding methods to do so, and the contents of this subset also depends on the needs of the class which uses an inner array.

+ +
+ +

I came up with the following solution, which seems to fit my requirements nicely, but I'm wondering if it's not kind of an anti-pattern:

+ +
    +
  • Write a new class D2ArrayWrapper (dumb name for the sake of the example), which contains the inner array as a protected member, and publicly declares and defines forwarding methods to all of the inner D2Array methods.
  • +
  • Have MyClass inherit from D2ArrayWrapper. This way, it contains an inner 2D array and already has all of the forwarding methods implemented, saving me the pain of writing them myself. This gain is doubled if I want to write MyClass2 which also manages a D2Array: just have it inherit from D2ArrayWrapper!
  • +
  • If I want any array method not to be accessible from outside MyClass, I just need to re-declare it from MyClass, as a protected or private member.
  • +
+ +

+ +

Reasons I like it:

+ +
    +
  • It kind of achieves what I wanted.
  • +
+ +

Reasons I dislike it:

+ +
    +
  • Instead of re-declaring and re-defining the array methods that I want to be accessible from MyClass, I re-declare the array methods that I don't want to be accessible from MyClass. My DRY problem is not gone, simply reversed. (Also in C++, a simple using statement does the trick, but in Java, a whole re-definition is needed).
  • +
  • It only works in languages which handle multiple inheritance. It would work in languages which only support single inheritance too, but then it simply wastes inheritance possibilities for that class, which I find to be a huge unwanted side-effect.
  • +
  • It feels overkill when compared against the simplicity of the original goal.
  • +
+ +
+ +

I'm kind of at loss right now as to which solution I should go with.
+Solution 1 is a no-go because of the complexity a validation would incur.
+Solution 2 is fine, but feels dumb and somewhat inelegant.
+Solution 3 feels like an overcomplication of things that got slightly out of hand, and after having given it some thought, I'm pretty sure that's not something I want to go with.

+ +

As a result I'm considering going with solution 2, but is there not another way?
+Thanks.

+",153704,,153704,,43838.34931,43838.34931,Selectively exposing interface of inner members,,1,0,,,,CC BY-SA 4.0, +403483,1,403485,,1/7/2020 21:51,,82,17449,"

Currently I am working on a school project written in C#. Some teammates just started in C# and some are already familiar with C#. Today I had a discussion on whether to use syntactic sugar like this in our code:

+ +
private SomeClass _someClass;
+public SomeClass SomeClass
+{
+    set => _someClass = value;
+}
+
+// Equivalent to:
+
+private SomeClass _someClass;
+public SomeClass SomeClass
+{
+    set 
+    {
+        _someClass = value;
+    }
+}
+
+ +

Or this:

+ +
random ??= new Random();
+
+// Equivalent to:
+
+if (random == null)
+    random = new Random();
+
+ +

Reasons we discussed for not using syntactic sugar like this were:

+ +
  • It is hard to read in general.
  • For someone coming from another language, e.g. Java, it is harder to see what is going on in the code.
+ +

Are these valid reasons? Does this happen in other languages? Are there some measures to decide what is the ""cleaner"" way of coding?

+",264451,,-1,,43853.45347,43853.45347,When to use / not use syntactic sugar,,9,29,14,,,CC BY-SA 4.0, +403490,1,403492,,1/8/2020 7:43,,8,1166,"

What exactly is meant when software engineers talk about ""behaviour"" in contrast to ""state"" (see: definition of ""state"")?

+",354523,,,,,44201.58681,"Definition of ""Behaviour""?",,4,2,,,,CC BY-SA 4.0, +403500,1,403502,,1/8/2020 12:05,,2,197,"

I'm developing an application that has a backend (Java, Spring) and a frontend (TypeScript, Angular). The backend application provides an OpenAPI-compatible API and performs certain operations with different third-party services (a Kubernetes cluster, a GitLab server and others).

+ +

The backend service has controllers, which call services, which call repositories, which call clients. So, for example, when a frontend application needs to delete some object (for example, a pod) from a Kubernetes cluster, it sends the following request to the server: DELETE /api/v1/workload/pod/name/blah-blah-blah. The controller has a separate method that processes this request, and it calls the corresponding service that is responsible for pod-related operations. The service calls the repository that is responsible for pods. The repository calls an internal Kubernetes client (actually a wrapper). The client calls a third-party library (the original Kubernetes client), which performs the actual communication with the Kubernetes cluster.

+ +

What do you think: where should I catch the exception from the original Kubernetes client if it is raised, for example when there is no object with such a name in the cluster? Which layer should handle it - a controller, a service, a repository or a client? Obviously, the application needs to catch this exception somewhere so it can reply with a 404 code instead of a 500.

+ +

Thanks in advance for any suggestions and sorry if my question is not fully clear.

+",327861,,118878,,43838.58264,43839.52639,Where should I catch exceptions,,2,4,1,,,CC BY-SA 4.0, +403501,1,403511,,1/8/2020 12:11,,0,127,"

I am still quite new here, so I hope I am posting in the right forum.

+ +

I am currently writing a small library where I realized I could use some kind of design pattern which lets one pass constructor arguments to initialization or allocation functions, and these can be further specialized in inherited classes.

+ +

Something like this:

+ +
class MyArgs{
+  // basically just a wrapper/container
+  // just a constructor and variables
+  // everything public or friend of MyClassBase
+};
+
+class MyClassBase{
+public:
+  MyClassBase(MyArgs a){
+    // note: virtual calls inside a constructor do not dispatch to derived overrides in C++
+    doAllocation(a);
+    doInitialization(a);
+  }
+  //... defining default doAllocation() and doInitialization() ...
+};
+
+class MyClassDerived : public MyClassBase{
+public:
+  MyClassDerived(MyArgs a) : MyClassBase(a){}
+  //... override doAllocation() and doInitialization() ...
+};
+
+ +

Does there exist some kind of design pattern which makes this easier?

+",220866,,,,,43838.8625,Design pattern for embedding constructor arguments into classes/structs,,4,2,,,,CC BY-SA 4.0, +403503,1,403621,,1/8/2020 12:30,,1,62,"

I am trying to decide between a NoSQL (MongoDB) or a SQL (Oracle) database for my project. This database will be used to store output files from a set of parameters.

+ +

I can model this as a MongoDB document, so that the parameters and their values are stored as a key:value pair. Afterwards, I would query those documents in Java by sending an example HashMap.

+ +

I can also model this as a SQL table. In this situation I would create a hash index based on the parameters and their values. I would query those entries of the table in Java by rebuilding the hash and sending it within a SQL Select.

+ +

I figured that the NoSQL modeling makes more sense, since the parameters and their values would be stored in a cleaner manner. Also, querying it would make more sense from a maintenance point of view: rebuilding a hash and making sure the hash is consistent across the board would be a maintenance overhead with SQL.

+ +

On the other hand, I have done some testing with both alternatives, and the NoSQL one seems to be slower at returning the queries (double the time of a SQL query via the hash index, and it makes sense for it to be this way).

+ +

This seems to me like a common problem someone had before me. Still, I could not find any comparison of this type of modeling on NoSQL and SQL on an article so that I could read from someone else's perspective.

+ +

Is this a common problem in database modeling? Did I go about testing it the right way? Do my conclusions make sense? Is there anywhere I can find an article about a similar problem? I have searched a lot, and all I could find were superficial comparisons between NoSQL and SQL; maybe I have not used the proper keywords.

+",352971,,,,,43840.33194,Query by NoSQL keys versus SQL hash,,1,5,,,,CC BY-SA 4.0, +403506,1,403553,,1/8/2020 13:10,,0,135,"

I am investigating a good maintainable architecture for GraphQL. In particular we want to migrate a REST app to GraphQL. Specifically I am using .NET.

+ +

I am following the tutorial here: https://fullstackmark.com/post/17/building-a-graphql-api-with-aspnet-core-2-and-entity-framework-core
+ which is very similar to most tutorials

+ +

It has the following Mutator file:

+ +
public class NHLStatsMutation : ObjectGraphType
+{
+   public NHLStatsMutation(IPlayerRepository playerRepository)
+   {
+     Name = ""Mutation"";
+
+     Field<PlayerType>(
+           ""createPlayer"",
+           arguments: new QueryArguments(
+           new QueryArgument<NonNullGraphType<PlayerInputType>> { Name = ""player"" }
+     ),
+     resolve: context =>
+     {
+        var player = context.GetArgument<Player>(""player"");
+        return playerRepository.Add(player);
+     });
+   }
+}
+
+ +

This gets assigned in the schema:

+ +
public class NHLStatsSchema : Schema
+{
+  public NHLStatsSchema(IDependencyResolver resolver): base(resolver)
+  {
+     Query = resolver.Resolve<NHLStatsQuery>();
+     Mutation = resolver.Resolve<NHLStatsMutation>();
+  }
+}
+
+ +

Finally there is a single GraphQLController which handles the API requests and has an instance of the schema:

+ +
    [Route(""[controller]"")] 
+    public class GraphQLController : Controller
+    {
+        private readonly IDocumentExecuter _documentExecuter;
+        private readonly ISchema _schema;
+
+        public GraphQLController(ISchema schema, IDocumentExecuter documentExecuter)
+        {
+            _schema = schema;
+            _documentExecuter = documentExecuter;
+        }
+
+        [HttpPost]
+        public async Task<IActionResult> Post([FromBody] GraphQLQuery query)
+        {
+            if (query == null) { throw new ArgumentNullException(nameof(query)); }
+            var inputs = query.Variables.ToInputs();
+            var executionOptions = new ExecutionOptions
+            {
+                Schema = _schema,
+                Query = query.Query,
+                Inputs = inputs
+            };
+
+            var result = await _documentExecuter.ExecuteAsync(executionOptions).ConfigureAwait(false);
+
+            if (result.Errors?.Count > 0)
+            {
+                return BadRequest(result);
+            }
+
+            return Ok(result);
+        }
+    }
+}
+
+ +

Having a single schema and API endpoint seems to be ""How GraphQL is done"" according to a few parts of the internet.

+ +

However, this seems to result in a very, very large query and mutator file, with many, many dependencies injected (repositories, services, etc.) and many, many methods in the file.

+ +
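
One idea I have considered is keeping the single root type but moving each domain's field registrations into extension methods, roughly like this (a sketch using only the graphql-dotnet API shown above; the names are illustrative):

+ +
public static class PlayerMutations
+{
+    // Registers all player-related mutations on the root mutation type.
+    public static void AddPlayerFields(this ObjectGraphType mutation, IPlayerRepository playerRepository)
+    {
+        mutation.Field<PlayerType>(
+            ""createPlayer"",
+            arguments: new QueryArguments(
+                new QueryArgument<NonNullGraphType<PlayerInputType>> { Name = ""player"" }),
+            resolve: context => playerRepository.Add(context.GetArgument<Player>(""player"")));
+    }
+}
+
+public class NHLStatsMutation : ObjectGraphType
+{
+    public NHLStatsMutation(IPlayerRepository playerRepository /*, other repositories */)
+    {
+        Name = ""Mutation"";
+        this.AddPlayerFields(playerRepository); // one line per domain area
+    }
+}
+
+ +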

This is an enterprise application with 300+ database tables and lots of complex backend business rules. The application is a monolith with a layered architecture: a UI layer, a services layer and a data layer.

+ +

How can I architect this to make this more maintainable?

+ +

I have read up on schema stitching and federation (https://www.apollographql.com/docs/graphql-tools/schema-stitching/). However, I'm not sure that's the right approach, as the examples seem to use it to address microservices or API endpoints from different apps (https://blog.apollographql.com/graphql-schema-stitching-8af23354ac37).

+",212678,,212678,,43838.55417,43839.29375,GraphQL results in a very large Query and Mutator file for Enterprise Monolith,<.net>,1,2,1,,,CC BY-SA 4.0, +403510,1,,,1/8/2020 14:59,,0,123,"

So, I was wondering about how Haskell's lists are implemented. I looked it up, and found this:

+ +
data [] a = [] | a : [a]
+
+ +

So I get that you can write lists like this if you want to:

+ +
a:b:c:[] -- instead of [a, b, c]
+
+ +

But my question is: How is the list syntax that is usually used (the [a, b, c] syntax) implemented?

+ +

Edit: I want to know the implementation, so if anyone could point me to the right standard library file, that would be much appreciated.

+",354397,,354397,,43838.67847,43838.70278,How do Haskell Lists Desugar?,,1,2,,,,CC BY-SA 4.0, +403513,1,403596,,1/8/2020 15:16,,0,108,"

Clarification question on saving entities best practice.

+ +

What is the purpose of returning an entity that you have just saved in a repository? I see the benefit of returning its Id once it's been created, but not the whole entity. Can someone please explain the purpose or benefit of returning the entity?

+",229167,,,,,43839.89792,Create or Update method returning created entity,,1,10,,,,CC BY-SA 4.0, +403514,1,403529,,1/8/2020 15:33,,2,226,"

I understand ""the works"" of event driven systems and I've built simple ones. However I find that I am struggling a bit in efficiently designing one.

+ +

For example, formulating all the events upfront is not easy; I find that later down the road I end up removing or renaming some of the events I initially determined.

+ +

Another thing is determining upfront the services that are supposed to care about those events. E.g., an ""orderPrepared"" event? Ah, services x, y and z need that. Catch my drift?

+ +

I guess my question is, after determining the business requirements, what's your methodology in designing an event driven system? How does one actually ""lay it all out"" in an effective manner, minimizing changes later on?

+",319317,,,,,43839.86111,Effective methodology in designing event driven systems,,5,1,1,,,CC BY-SA 4.0, +403516,1,,,1/8/2020 16:26,,3,920,"

We're implementing a service that exposes data related to a particular part of a business. It will pull data in from different sources, do some ETL, and store it in Redis. It will expose this data via REST endpoints (and possibly GraphQL).

+ +

For scalability, Redis will be replicated from London to US. The service that resides in the US will not need to do any ETL as it'll just use the existing data that's in Redis.

+ +

Because this service has two 'modes' (read and write in London, and just read in the US), we're debating whether we should split it into two services; one that writes the data, and one that reads the data.

+ +

Option one:

+ +

Have two services, potentially in different repos, where one does the ETL and writing of data, and one that just reads that data.

+ +

The problems that we can see with this are that we'll have two lots of everything (repos, TeamCity projects, Octopus deploys etc.). We could also end up with versioning issues as they have the potential to evolve at different speeds.

+ +

Option two:

+ +

One service that can either be started as 'read-write' or 'readonly'; in London, we'll start it in 'read-write' mode, and in the US, we'll start it in 'readonly' mode.

+ +
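
To make the 'mode' idea concrete, startup could conditionally register the ETL components. A sketch in ASP.NET Core Startup terms (EtlWorker and the ""ServiceMode"" key are illustrative):

+ +
public void ConfigureServices(IServiceCollection services)
+{
+    // The read endpoints are registered unconditionally.
+    services.AddControllers();
+
+    // The ETL/writing components only exist in 'read-write' mode (London).
+    if (Configuration[""ServiceMode""] == ""ReadWrite"")
+        services.AddHostedService<EtlWorker>();
+}
+
+ +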

This solves the issue of multiple repos, multiple TeamCity projects etc.

+ +

The problems we can see with this approach are the added complexity of starting it in a particular 'mode', and that we'll be deploying code (the ETL components) that is essentially 'switched off' in readonly mode.

+ +

We've been reading various articles online. Some say that databases should never be shared between 'Microservices', and some say that it's absolutely fine.

+ +

Up to this point, I've used apostrophes around the word Microservices. I've done this because so far in this question, I've been using the word to describe 'technical functionality', rather than the more well defined:

+ +
+

The microservice approach to division is different, splitting up into services organized around business capability

+
+ +

I like that description of a Microservice and I think it gives weight to my preferred option, which is option two.

+ +

But I'm no architecture expert, so I thought I'd ask around to see what other people's opinions are.

+ +

What option would you choose, or would you do something entirely different?

+",30906,,,,,43840.72014,Should we share a database between two 'Microservices'?,,4,1,,,,CC BY-SA 4.0, +403519,1,,,1/8/2020 17:05,,-2,286,"

I have an issue that keeps coming up as a developer using .NET (C# mostly) and SQL. I personally feel it is bad practice to build SQL statements in code. There are too many case scenarios that may appear that you may not be aware of. If you've ever done a SQL audit, it becomes infuriating creating unit tests for every case, and it is time consuming, especially as the SQL grows. Also, as you build the SQL and other developers tag on, it becomes less and less efficient. Most usually start with 'WHERE 1=1' so you can just append a bunch of ANDs to the built structure.

+ +

My previous thought was to use .sql files: load them in, and you can just create a file with a naming convention for each case you want. It's simple enough to choose what you need with a switch statement and very easy to read. However, the issue with this is making sure the loading is thread safe.

+ +
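
For reference, loading a .sql file embedded as an assembly resource is a one-off, and because strings are immutable, caching the results in a read-only dictionary at startup sidesteps most of the thread-safety concern. A minimal C# sketch (the resource name is illustrative):

+ +

using System.IO;
+using System.Reflection;
+
+// resource names follow ""<DefaultNamespace>.<Folder>.<File>""
+var assembly = Assembly.GetExecutingAssembly();
+using var stream = assembly.GetManifestResourceStream(""MyApp.Sql.GetOrders.sql"");
+using var reader = new StreamReader(stream);
+string sql = reader.ReadToEnd();
+
+ +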

At some point I changed to adding them to resource files so that they couldn't /easily/ be edited. I even encrypted them eventually because the SO at one company felt they should be. At this point I keep thinking about storing SQL in a database (encrypted) instead of files. The only real query you'd have to make thread safe is the one that retrieves the other SQL statements.

+ +

My question is: What other options have people come up with? What is best practice? I did research and came up with no uniform way of storing sql statements for code.

+",281502,,316049,,43839.81736,43951.21667,Best Practices for Managing SQL Code,,3,5,,,,CC BY-SA 4.0, +403533,1,,,1/8/2020 22:17,,1,45,"

I'm writing an education app and my current Firebase structure looks like:

+ +
Course
+   ∟ Lesson
+       ∟ Modules (Includes video, audio, text and quiz modules in one collection)
+
+ +

Where ""Course"", ""Lessons"" and ""Modules"" are each collections. I also have a type field inside the module document that tells me what I should deserialize the document to (video object, audio object, etc. -- the objects only differ by a few fields).

+ +
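
For illustration, this is roughly what the single-collection type switch looks like, sketched in C# with made-up field and class names (plain dictionary mapping, not a Firebase SDK call):

+ +

using System;
+using System.Collections.Generic;
+
+abstract class Module { public string Title; }
+class VideoModule : Module { public string VideoUrl; }
+class AudioModule : Module { public string AudioUrl; }
+
+static Module Deserialize(IDictionary<string, object> doc) =>
+    (string)doc[""type""] switch
+    {
+        ""video"" => new VideoModule { Title = (string)doc[""title""], VideoUrl = (string)doc[""url""] },
+        ""audio"" => new AudioModule { Title = (string)doc[""title""], AudioUrl = (string)doc[""url""] },
+        _ => throw new NotSupportedException((string)doc[""type""])
+    };
+
+ +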

Something tells me that I should instead separate the types of modules into their own collections, however:

+ +
Course
+   ∟ Lesson
+        ⊢ Video Modules
+        ⊢ Audio Modules
+        ⊢ Text Modules
+        ∟ Quiz Modules
+
+ +

I've thought a lot about the two options and I can't seem to decide. Is there anything specific to Firebase or NoSQL databases about not mixing types inside collections that would help me decide?

+",354610,,,,,43870.83472,Firebase prefer separate collections over mixed collection?,,1,2,,,,CC BY-SA 4.0, +403545,1,,,1/9/2020 2:51,,-2,162,"

I've read some articles about CI/CD and find that a typical CI pipeline often includes Build, Test and Deploy. We also add a Code Quality Scan stage to our pipeline. The question is: what is the proper order for these stages? Should Test go before Build or after? Is there any best practice?

+",354628,,,,,43840.61944,What's the proper order of stages in a CI build?,,3,2,,,,CC BY-SA 4.0, +403546,1,,,1/9/2020 3:21,,1,213,"

In one of my projects, I have this following use case -

+ +

I have a variable that I need to pass around to many methods. Business logic and object creation in those methods depend on that variable. It can be the case that I have object creation inside a constructor of another class. For those, I am also propagating this variable so that nested object creation can use it.

+ +

This way the method contract is very clear: anyone can look at the method and readily know what information it works with. But this leads to a messy propagation of the variable in many places across the codebase. Depending on the variation of the use case, it sometimes has to be stored as a field in many classes.

+ +

Another alternative is to use ThreadLocal, where this variable's state can be kept. It's clean and easier to implement. The application processing unit is single-threaded, so thread safety is not an issue.

+ +
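
To make the trade-off concrete, here is a minimal sketch (written in C#, whose ThreadLocal<T> is analogous to Java's ThreadLocal); note how the second method's signature no longer reveals the dependency:

+ +

using System.Threading;
+
+static readonly ThreadLocal<string> RequestContext = new ThreadLocal<string>();
+
+void Handle()
+{
+    RequestContext.Value = ""ctx-123"";   // set once at the entry point
+    DoWork();
+}
+
+void DoWork()
+{
+    var ctx = RequestContext.Value;     // the dependency is invisible in the signature
+}
+
+ +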

But the problem is, it affects the clarity of the methods that will be using it.

+ +

What is the recommended approach in this kind of scenario?

+",354629,,354629,,43840.11667,43840.34583,Using ThreadLocal in Java,,1,5,,,,CC BY-SA 4.0, +403551,1,,,1/9/2020 6:23,,0,128,"

I have got into an argument with my colleagues about this. IMO it's common practice that you don't need to check whether an id exists before rendering it.

+ +

So here's an example. This is what my colleagues are implicitly saying it's proper to do.

+ +
let hotels;
+getHotel().then(res=>{
+    hotels = res.hotels.filter(hotel=> hotel.id)
+})
+
+ +

Then you render the ""hotels"" in the view.

+ +

One of my colleague's arguments is:

+ + + +
hotels = hotels.select { |h| h.id }
+
+ +

This is selecting for any hotels that have a hotel ID. This isn’t stupid if you’re expecting there to be no ID in some cases. (He's referring to Cassandra, where a primary key can have no value.)

+ +

My argument is that the frontend doesn't care which database backend is in use. As a best practice, it should be implemented without knowing that, assuming the id is a primary key and always expecting it to have a value.

+",354631,,118878,,43839.57986,43839.58681,Should we check if the primary key exists if rendering a collection fetched from database?,,2,4,1,,,CC BY-SA 4.0, +403559,1,,,1/9/2020 9:22,,0,78,"

Let's say I have a .NET Core library that I will deliver to the customer.

+ +

The library defines an interface like IGetData and also a default GetData class.

+ +

The library also defines an ICacher interface with a default Cacher class.

+ +

It's the client's responsibility to register the interfaces and the default classes in their DI container. They can also define and register their own Cacher.

+ +

My question is :

+ +

1/ Is there something wrong with this design ?

+ +

2/ Normally how a dependency that is optional should be handled ?

+ +

3/ In the case that the optional dependency is handled correctly, my GetData class would contains something like

+ +
if (Cacher != null)
+    return Cacher.GetData();
+
+ +

Is this also normal?
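
+ +

A null check like that is common, but for comparison, here is a sketch of the null-object alternative, which keeps a single code path and no branching. The TryGetData shape is my own assumption, not part of the library described above:

+ +

interface ICacher
+{
+    bool TryGetData(out string data);
+}
+
+// registered by default when the client supplies no real cacher
+class NoOpCacher : ICacher
+{
+    public bool TryGetData(out string data) { data = null; return false; }   // always a miss
+}
+
+class GetData
+{
+    private readonly ICacher cacher;
+    public GetData(ICacher cacher) => this.cacher = cacher;
+
+    public string Fetch() =>
+        cacher.TryGetData(out var cached) ? cached : LoadFromSource();
+
+    private string LoadFromSource() => ""fresh data"";
+}
+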

+",276473,,,,,43839.39028,How to design for optional dependency / optional functionality,,0,3,,,,CC BY-SA 4.0, +403567,1,403576,,1/9/2020 13:52,,-1,100,"

Are use case classes (application services in DDD, a.k.a. Facades) stable? Should controllers and listeners be coupled to them through interfaces at all?

+",338080,,,,,43839.69583,Is a good practice to create Interfaces for use case objects?,,1,4,1,,,CC BY-SA 4.0, +403569,1,,,1/9/2020 14:59,,2,314,"

I started creating an application which uses strings as keys, in order to have readable, non-guessable API values:

+ +
GET https://myApi.com/docs/obuxn6xhzg
+GET https://myApi.com/docs/qxfj1g40xf
+PUT https://myApi.com/docs/jgtw2vsqqh (--> to update the item, for example)
+
+ +

as opposed to having https://myApi.com/docs/1 .. https://myApi.com/docs/99

+ +

However, I'm now struggling to find a descent API endpoint to pass in actions. For example an archiveActive action could be represented by POST https://myApi.com/docs/archiveActive.

+ +

With this URL, however, I'm starting to think there might be an ambiguity, where the application could consider ""archiveActive"" to be a document id.

+ +

So basically, I guess I'm asking whether this is something I really should avoid doing, and if so, whether there is an alternative way of approaching this when using string ids.
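
+ +

For illustration, one layout sometimes seen in the wild (purely a sketch) reserves a dedicated path segment so that ids and actions can never collide:

+ +

GET  https://myApi.com/docs/{id}                      (ids live here)
+POST https://myApi.com/docs/actions/archiveActive     (actions are namespaced away)
+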

+",354691,,5099,,43840.37153,43840.38542,Rest API design - ids as string,,3,6,,,,CC BY-SA 4.0, +403575,1,403653,,1/9/2020 16:39,,3,78,"

I am attempting to partition a decision node based on environment attributes. The process I am modeling is slightly different depending if a variable scope is public or private.

+ +

I am trying to associate the related Map Variables action in the Pipeline partition but cannot determine the best way to model this. Should I use a fork node only for this specific action or instead add the action for both decision flows into the existing Pipeline partition?

+ +

+",348428,,,,,43840.87778,UML Partitioning Decision Nodes,,1,6,,,,CC BY-SA 4.0, +403577,1,403578,,1/9/2020 16:48,,1,307,"

I have been playing around making amortization schedules in PHP. My php.ini currently has the precision set to 14. I understand going into this that there will be rounding errors however I am hoping to minimize these errors.

+ +

My current way of doing this is only rounding my payment amounts and then leaving all other values at their php default precision value. Calculations below:

+ +
$Term_Rate = $Annual_Rate / $Year_Terms;// annual_rate / terms per year
+
+$Payment = round(( $Term_Rate * $Principle ) / ( 1 - (( 1 + $Term_Rate )**( -$Total_Terms ))), 2 );
+
+$leap = 0;
+if($y % 4 === 0) // simplistic leap-year check; $y holds the year being accrued over
+    $leap = 1;
+
+$interval = $date_2->diff($date_1); 
+$days = $interval->days;
+
+$interest = $days * (( $Annual_Rate * $Principle ) / ( 365 + $leap ));
+
+if($Payment>($Principle+$interest))
+    $Payment = round( $Principle + $interest, 2);
+
+$balance = ( $Principle + $interest ) - $Payment;
+
+$Principle=$balance;
+
+ +
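
For reference, the $Payment line implements the standard annuity formula, with r the per-period rate, PV the principal and n the total number of payments:

+ +

$$\text{Payment} = \frac{r \cdot PV}{1 - (1 + r)^{-n}}$$
+
+ +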

Then for any values that are displayed back I do my rounding to 2 decimal places. For most of the inputs I have tried the calculator performs well, meaning the loan is paid of on the final payment of the term. However, some input values can cause my loan to be paid off early. Inputs such as:

+ +
$Pay_Type = ""Monthly"";
+$Principle = ""25000000.00"";
+$Annual_Rate = ""0.2"";
+$Term_Length = ""360""; 
+$Term_Unit = ""Months""; 
+$Start_Date = ""2020-01-31""; 
+
+ +

Not all of the inputs will be realistic, but I'm hoping to be able to handle even some of the more absurd inputs, such as a 90% interest rate on $25000000.

+ +

Please inform me if more of my code is required or if any calculations are handled improperly.

+",352360,,209774,,43840.80903,43840.80903,Loan Amortization Schedule - Precision and rounding,,1,0,,,,CC BY-SA 4.0, +403579,1,403710,,1/9/2020 17:23,,4,955,"

So I'm currently trying to write a project using Clean Architecture. It's a Unity Engine project, which doesn't make the task any easier.

+ +

The issue I'm running into, however, is much more basic: all the examples of Clean Architecture use cases I have seen appear to be rather simple, mostly CRUD, and without multiple steps.

+ +

Now the (high-level Cockburn-definition) Use Cases I want to implement involve a couple of steps:

+ +
    +
  1. User creates an element
  2. +
  3. User puts information about the element
  4. +
  5. User creates additional elements and links them to the first one
  6. +
+ +

Now, in the traditional use case sense as described by Clean Architecture, with an input port and an output port, I do not know how to model this workflow as a single use case, as it spans multiple UI elements with their own presenters, and the workflow itself is triggered from deep within a presenter hierarchy that has nothing at all to do with the execution of this workflow aside from starting it.

+ +

I do want to keep the application as loosely coupled as possible so I do not wish to introduce tight-coupling via some god-object that holds references to all needed presenters and implements the output port.

+ +

I have a solution that is rather hands-off, by simply having most of the UI listen to state changes in the repositories. However, as discussed today with colleagues, this leads to a bit of confusion about how the flow of data works, and we have people opting to have use-cases drive UI state changes instead, as in:

+ +
outputPort.ChangeToInputDetailsState()
+
+ +

which I think completely breaks the principle that use cases should not care about how they are called etc.

+ +

If anyone has any example of use-cases in Clean Architecture requesting further information, or in general any source of information on using Clean Architecture with more complex use-cases in a UI-driven environment, I would be quite happy.

+ +

Edit: More details as requested

+ +

To elaborate a bit on what I mean by 'spans multiple UI elements', since it was asked in the comments: I can't give details on the specific project, but I can try to make an analogy.

+ +

Use case (Cockburn definition) is something like:

+ +

Name: Record and Upload Video

+ +

Primary Actor: User

+ +

Steps:

+ +
    +
  1. Record Video
  2. +
  3. Annotate and Cut Video
  4. +
  5. Upload Video to Server
  6. +
+ +

Now these span multiple UI elements in the sense that steps 1, 2 and 3 require different parts of the UI, with their own sublogic, to complete, and those parts are themselves used from multiple points. I'm not sure that this use-case definition can be mapped 1:1 to a Clean Architecture use-case. If it can be, then the Input/Output port would have to be able to orchestrate all these different UI parts, which might themselves be used by different use-cases.

+ +

Elaborating on it like that I think that my issue also contains a variant of the issue of reusing UI components in a Clean Architecture case where there might be more than one use-case associated with a single controller/presenter.

+ +
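
To make the discussion concrete, here is a minimal sketch (all names are mine) of an output port that only reports domain-level outcomes, leaving every UI decision to whichever presenter implements it:

+ +

public interface IUploadVideoOutputPort
+{
+    void VideoUploaded(string videoId);
+    void UploadFailed(string reason);
+}
+
+public class UploadVideoUseCase
+{
+    private readonly IUploadVideoOutputPort output;
+
+    public UploadVideoUseCase(IUploadVideoOutputPort output) => this.output = output;
+
+    public void Execute(byte[] annotatedVideo)
+    {
+        // ... upload to the server ...
+        output.VideoUploaded(""generated-id"");   // an outcome, not a UI instruction
+    }
+}
+
+ +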

Edit 2: Elaborating on just using functions on the output port that return after getting the necessary info.

+ +

The issue I see here is that, in Clean Architecture, Output Ports/Presenters have always looked dumb to me (in the sense of having barely any logic), and orchestrating all the business logic falls into the realm of use cases.

+ +

If, however, I were to only call another use-case to get my additional information instead of passing control back into the presentation layer, I would not be able to open another user interface, as the use cases don't even know about it.

+",354565,,149436,,43850.58542,43850.58542,Clean Architecture: Use case spanning multiple UI elements,,3,4,1,,,CC BY-SA 4.0, +403583,1,,,1/9/2020 18:19,,5,382,"

How can I determine the monetary cost of a user story point in a given team?

+ +

I was asked this question recently since the business is interested in determining how much a given project could cost them.

+ +

Now I know a user story point is a subjective measure of estimation, e.g. The team assigns T-shirt sizes to the stories: small (1 point), medium (2-3 points), large (5 points) and extra-large (8 points), etc. This estimation of the size is very subjective and if the team changes in any way, what they subjectively consider small, medium or large could change as well.

+ +

I know that after trying planning poker for a while, the team calibrates their estimation subjectivity and eventually this becomes evident in the fact that the team delivers (on average) the same amount of subjective story points per iteration: our team velocity.

+ +
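
Once a velocity exists, the arithmetic itself is simple - with invented numbers purely for illustration:

+ +

$$\text{cost per point} = \frac{\text{team cost per iteration}}{\text{velocity}} = \frac{\$60{,}000}{30\ \text{points}} = \$2{,}000 \text{ per point}$$
+
+ +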

However, the problem is that my team has changed several of its members recently, some of the new members don't know the domain yet and/or have less professional experience. We won't know our velocity for a while.

+ +

So, the bottom line is that, thinking in agile terms, I have no clue how one could determine the cost of a story point, or, following any other strategy, how much a given project would cost.

+ +

Any ideas on what's the right way to do this, using a formula that makes sense from an agile perspective?

+",23837,,23837,,43839.85694,43847.73472,How to determine the cost of a story point?,,3,4,,,,CC BY-SA 4.0, +403584,1,403592,,1/9/2020 18:26,,1,143,"

From what I understand, the visitor pattern is supposed to solve the expression problem (described here), where a program needs to support performing multiple operations on multiple types, ideally allowing adding new operations and new types without touching existing code.

+ +
    +
  • OOP languages can define a method for each operation on each type of object; this makes it easy to add new objects without modifying existing code, but adding a new operation requires modifying all existing objects.
  • +
  • FP languages with pattern matching have the opposite issue; adding a new operation is self-contained, but adding a new data type requires modifying all existing functions to support the new type.
  • +
+ +

The visitor pattern, as I understand it, just changes the OOP style to the FP style; adding a new operation just means adding a new type of visitor, but adding a new data type means adding a method to all existing visitors. Is my understanding correct? If so, what's the benefit of the visitor pattern, if it doesn't fully solve the expression problem?
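
+ +

For reference, a minimal C# visitor (illustrative) that makes the trade-off visible - a new operation is just a new visitor class, while a new shape touches every existing visitor:

+ +

interface IShapeVisitor { void Visit(Circle c); void Visit(Square s); }
+interface IShape { void Accept(IShapeVisitor v); }
+
+class Circle : IShape { public void Accept(IShapeVisitor v) => v.Visit(this); }   // double dispatch
+class Square : IShape { public void Accept(IShapeVisitor v) => v.Visit(this); }
+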

+",212963,,,,,43839.825,Does the visitor pattern prevent the need to modify existing code when adding new data types?,,1,1,,,,CC BY-SA 4.0, +403586,1,403649,,1/9/2020 18:51,,7,350,"

Considering that:

+ +
    +
  • when using the repository pattern you deal with entities - those are the atomic units you persist (regardless of Hibernate or ""manually"")
  • +
  • when changing an entity you save it as a whole (there's no set or increment field)
  • +
  • multiple application instances might be running
  • +
  • the same entity might be fetched/changed/saved concurrently by different application instances
  • +
+ +

Won't this lead to bad data in the end? Consider the use case of ""let me save the number of password retries in the user entity"". Won't that be problematic if an attacker launches many concurrent client requests to log in? Instance A fetches the entity, instance B fetches the same entity, instance A changes/saves it, and instance B changes/saves the original entity, unaware of A's change/save.
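
+ +

For illustration, the usual mitigation is optimistic locking via a version column, sketched here with EF Core's concurrency token (the question itself is ORM-agnostic; Hibernate's @Version works the same way):

+ +

public class User
+{
+    public int Id { get; set; }
+    public int PasswordRetries { get; set; }
+
+    [Timestamp]   // row version compared by the generated UPDATE's WHERE clause
+    public byte[] RowVersion { get; set; }
+}
+
+try
+{
+    context.SaveChanges();   // throws if another instance saved this row first
+}
+catch (DbUpdateConcurrencyException)
+{
+    // reload the entity, reapply the change, retry
+}
+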

+",354722,,209774,,43840.79931,43840.79931,How to prevent concurrency problems when using the repository pattern?,,1,6,,,,CC BY-SA 4.0, +403598,1,403603,,1/9/2020 21:40,,-2,79,"

This is for Unity.

+ +

I tried another way of stopping the player after he dies, and it worked fine, but now it does not work with the sounds.

+ +

That was my way; the only thing that I changed is the state.

+ +

Ben's was

+ +
if (state == State.Alive) { Update() }
+
+ +

Mine was

+ +
if (state == State.Dying) { return; }
+
+ +

They look the same, but the problem is the Success Sound will not play.

+ +

Ben's code is working fine with me but I want to understand how it works

+ +

is mine the same?

+ +
void Update()
+{
+   if (state == State.Dying) 
+   { 
+       return; 
+   }
+   Rotate();
+   Thrust();
+}
+
+ +

Ben's way

+ +
void Update()
+{
+   if (state == State.Alive)
+   { 
+       Rotate();
+       Thrust();
+   }
+}
+
+ +

For People Asking For The Enum Definitions:

+ +
enum State { Alive, Dying, Transcending }
+State state = State.Alive;
+
+",354745,,45814,,43839.92778,43839.92986,Are These Both The Same Or Different If Statements,,1,6,,43839.92778,,CC BY-SA 4.0, +403599,1,,,1/9/2020 21:44,,6,1448,"

I am trying to follow the Onion Architecture to design my application where I have the following layers -

+ +
    +
  • Domain Layer: is the inner-most layer and defines repository interfaces
  • +
  • Infrastructure Layer: forms the outer-most layer peered with the Web/UI layer and implements the repository interfaces
  • +
  • Service Layer: is the middle layer where business logic resides and repository interfaces are injected
  • +
  • Web/UI Layer: forms the outer-most layer peered with the Infrastructure layer and handles DI configurations
  • +
+ +

Things are working. No issues so far.

+ +

I've come across several Stack Overflow answers and online articles that recommend not to implement the Repository pattern when using Entity Framework and Entity Framework Core, because the DbContext itself is implemented to provide the functionality of the Unit of Work pattern, while each IDbSet<T> acts as a repository. I do understand the point, and that's very good, because it means less code to write. So I thought, let's see if I can find a way to structure my application avoiding the repositories and using the DbContext only.

+ +

Some recommendations I've found suggest using the DbContext directly in my Controllers, which I didn't like. I don't want my Controller code to get all messy with queries. I prefer them slim, delegating the business processing and database operations to somebody else, like I currently do with the Service classes.

+ +

I've read the article Ditch the Repository Pattern Already, and in the comment section, in response to one comment, the writer said he uses Onion Architecture and -

+ +
+

If an Application Layer exists as a discrete layer then it is injected with the DbContext and possibly other dependencies. For a Web application, a UI layer sets up all the DI, translates requests into calls to the Application layer, and maps responses onto view models to be sent back to the browser. Other infrastructure layers, like the persistence layer I define the DbContext in, sit at the outside layer with the UI layer as ""peer"" layers.

+
+ +

Since, according to Onion Architecture, dependencies should only go inward, what I really don't understand is how you can get a reference to the DbContext (defined in the Infrastructure Layer) in the Application Layer through DI.

+ +

For those who prefer not to implement Repository and Unit of Work patterns with EF/EF Core could you suggest how can I do the same and still structure my application using Onion Architecture?

+ +

EDIT - Feb 03 2020

+ +

Thanks to Flater for his answer. I've gone through each and every link everyone provided. Thanks to all.

+ +

But it seems I failed to properly express my problem. I'll try once again. My application can be represented with the following diagram of Onion Architecture -

+ +

+ +

Each layer is implemented as a separate project for convenience. Since Onion Architecture dictates that dependencies go only inward -

+ +

Domain - has no dependency on any other project. Defines models and interfaces.

+ +

Service - has dependency only on Domain. Contains the Service classes which implements the Service interfaces. Service classes are injected with Repository interfaces.

+ +

Infrastructure - has dependency on Domain. It defines the DbContext. It contains the Repository classes which implements the Repository interfaces. Repository classes are injected with the DbContext.

+ +

Web/UI - has dependency on Service and Domain. It contains the Controller classes which are injected with Service interfaces and/or Repository interfaces. (This project also has a dependency on Infrastructure, but that is solely for DI configuration purpose).

+ +

Onion Architecture puts persistence operations at the outermost layer as part of Infrastructure and uses Dependency Inversion to access them. This results in a loosely coupled design where the Application Core (comprised of the Application + Services + Domain Layers) doesn't have any dependency on data access layers/technologies.

+ +

When I said -

+ +
+

what I really don't understand is how can you get a reference of + DbContext (defined in Infrastructure Layer) in Application Layer + through DI.

+
+ +

what I meant is, if I'm to discard the Repositories and use DbContext only, then I must use the DbContext from my Service classes, but DbContext is defined in Infrastructure and Service doesn't have a reference to Infrastructure.

+ +

To use the DbContext, the Service Layer needs a direct reference to Infrastructure. Adding this reference violates the most fundamental concept required by Onion Architecture - dependencies should always go inward. This requirement forces us to use DIP to resolve outward dependencies and thus achieve the loose coupling.

+ +
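
One shape that gets discussed for this - a sketch, not an endorsement, and note that it still forces the core to reference the EF Core package for DbSet<T> - is a core-owned interface that the Infrastructure DbContext implements:

+ +

// in the core: the abstraction the Service Layer depends on
+public class Order { public int Id { get; set; } }
+
+public interface IAppDbContext
+{
+    DbSet<Order> Orders { get; }
+    int SaveChanges();
+}
+
+// in Infrastructure: EF Core's DbContext fulfils the core-owned contract
+public class AppDbContext : DbContext, IAppDbContext
+{
+    public DbSet<Order> Orders => Set<Order>();
+}
+
+ +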

So, if the Service Layer has a reference to Infrastructure, the Application Core is directly dependent on the Data Access Layer, and I don't think we can call it an Onion anymore.

+ +

That's where my question comes in - how can I avoid the Repositories and use the DbContext only, but still adhere to Onion Architecture?

+ +

I'm just trying to implement something that I can use with smaller applications, because the Repo/UoW approach is clearly overkill for some scenarios, not to mention more code to write.

+",77018,,77018,,43864.12292,44081.64861,Avoiding Repository pattern - implementing Onion Architecture with DbContext only,,2,5,2,,,CC BY-SA 4.0, +403601,1,403610,,1/9/2020 22:03,,3,149,"

My team is building a solution where a mobile app communicates with a backend. I need to describe functionality where the communication between the app and the backend is optimized according to some rules that work for both the client (mobile app) and the server (backend). This optimization will be mostly developed on the client side as it's the one collecting data and then sending it to the server and thus responsible for optimizing the amount of data sent to the server.

+ +

Now the optimization is important for both the mobile app and the backend. The reason is performance and scalability, however there are different aspects for both.

+ +

For example, for the users of the mobile app, the main need to reduce the sent data comes from potentially slow networks and data cost. For the operators/owners of the backend, the main reasons are scalability and economics: the amount of data sent by multiple clients needs to be processed, and in the end the backend doesn't need all the raw data gathered by the client to do its job.

+ +

Thus, I wonder how best to express all this; should it be one story or two stories where each story has a different role and value?

+ +

I also wonder if this is a story at all; ultimately the need for the backend owner/operator is to run with as little hardware as possible, to have a low enough TCO. For the user of the mobile app, the main value is to reduce the bill and to have timely feedback from the server. However, I don't even have concrete customer requirements for the end-users, though I do know what rules the transmission should obey and I know that it could be optimized. From this point of view, this is more a design concern than a concrete story. I wonder if this should go as a task in the backlog and not as one or more stories.

+",354743,,165490,,43840.76181,43840.76181,User stories about one and the same functionality for two different roles,,5,2,,,,CC BY-SA 4.0, +403602,1,403668,,1/9/2020 22:07,,-1,400,"

I have a marketplace application and I store data in PostgreSQL. I have a performance problem with product search. I know I can improve search performance if I use Elasticsearch instead of PostgreSQL, but I'm unsure whether Elasticsearch is as strong as PostgreSQL (an RDBMS) for create, update and delete operations. Which makes more sense: using Elasticsearch for searches only and PostgreSQL for everything else (such as fetching by id), or using Elasticsearch for all operations?

+",354735,,354735,,43839.94653,43841.39097,Elasticsearch and PostgreSQL combination,,1,8,1,,,CC BY-SA 4.0, +403605,1,403607,,1/9/2020 22:37,,-2,109,"

I have to send a POST request with some data to a RESTful API. Right now I have a C program that creates a socket, connects with the host and successfully sends the POST request.

+ +

After some C magic, the request is formed as follows:

+ +
POST http://remotemanager.digi.com/ws/sci HTTP/1.1
+Host: remotemanager.digi.com
+Content-Type: text/xml
+Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
+Content-Length: 123
+
+<sci_request version=""1.0""><ping><targets><device id=""00000000-00000000-001234FF-FF56789A""/></targets></ping></sci_request>
+
+ +

Then I send it with send from socket.h. Everything works perfectly fine. After sending it, I receive:

+ +
HTTP/1.1 200 OK
+Server: Apache-Coyote/1.1
+Set-Cookie: JSESSIONID=485A5984C8BB21B11BFB305145FAF3B9; Path=/ws/;         HttpOnly;Secure
+Cache-Control: no-store
+Pragma: no-cache
+Expires: Thu, 01 Jan 1970 00:00:00 GMT
+Content-Type: application/xml;charset=ISO-8859-1
+Content-Length: 195
+Date: Thu, 09 Jan 2020 21:12:17 GMT
+
+<sci_reply version=""1.0""><send_message><device id=""00000000-00000000-001234FF-FF56789A""><rci_reply version=""1.1""><do_command target=""RPC_request""/></rci_reply></device></send_message></sci_reply>
+
+ +

The problem is when I try to do this with HTML and JavaScript. This is what I'm doing:

+ +
<!DOCTYPE html>
+<html lang=""en-us"">
+    <head>
+        <meta charset=""utf-8"">
+        <title>Ping Gateway</title>
+        <script>
+            function send_request() {
+                var xml =
+                '<sci_request version=""1.0"">\n' +
+                    '<ping>\n' +
+                        '<targets>\n' +
+                            '<device id=""00000000-00000000-001234FF-FF56789A""/>\n' +
+                        '</targets>\n' +
+                    '</ping>\n' +
+                '</sci_request>\n';
+
+                var request = new XMLHttpRequest();
+
+                request.onreadystatechange = function() {
+                    if (this.readyState == XMLHttpRequest.DONE && this.status == 200) {
+
+                    }
+                    else if (this.status == 401) {
+                        document.getElementById(""response"").innerHTML = ""Unauthorized!"";
+                    }
+                }
+
+                request.open(""POST"", ""https://remotemanager.digi.com/ws/sci"", true, ""username"", ""password"");
+                request.setRequestHeader(""Content-Type"", ""text/xml"");
+
+                request.send(xml);
+            }
+        </script>
+    </head>
+
+    <body>
+        <form onsubmit=""send_request(); return false;"" method=""POST"">
+            <input type=""submit"" value=""Ping"">
+        </form>
+        <br>
+        <br>
+        <div id=""response""></div>
+    </body>
+</html>
+
+ +

But I get:

+ +
+

OPTIONS https://remotemanager.digi.com/ws/sci 401 (Unauthorized)

+ +

test.html:1 Access to XMLHttpRequest at 'https://username:password@remotemanager.digi.com/ws/sci' from origin 'null' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

+
+ +

So, if I'm performing the request from exactly the same computer, even the same folder, why does the C code work while the HTML/JS version triggers this CORS issue?
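
+ +

For reference, the error means the browser's preflight OPTIONS request must be answered with CORS headers roughly like the following (illustrative values; the server would have to send them). The C program never triggers a preflight, which is why it is unaffected:

+ +

HTTP/1.1 200 OK
+Access-Control-Allow-Origin: *
+Access-Control-Allow-Methods: POST, OPTIONS
+Access-Control-Allow-Headers: Content-Type, Authorization
+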

+",121723,,,,,43839.97639,Why does an POST request from HTML/JavaScript generates 401(unauthorized) but it doesn't from a C program?,,2,0,,,,CC BY-SA 4.0, +403608,1,403636,,1/9/2020 23:15,,-2,91,"

Scenario:

+ +
    +
  1. In case I have microservice ""MS-1""
  2. +
  3. Orchestration detects high volume
  4. +
  5. Orchestration creates an additional node with cloned MS ""MS-2""
  6. +
  7. ""MS-2"" gets a request and updates its DB
  8. +
+ +

Question:

+ +

How MS ""MS-1"" will be updated with the new record ""MS-2"" insert.

+",354749,,177980,,43840.49167,43840.53403,How microservices db are updated and sync when scale?,,1,4,,43847.82569,,CC BY-SA 4.0, +403622,1,,,1/10/2020 8:18,,1,71,"

What's the best practice for maintaining app configuration for multiple test environments, (and production)?

+ +

At the moment I am keeping all the config in the code repo (a Bitbucket server). Configs are text files in /config folders with subdirectories for each test environment (e.g. config/SIT, config/UAT, config/PROD).

+ +

The problem is this seems very repetitive (non-DRY) and prone to human error.

+ +

Would maintaining the same config files using different git branches make more sense? Or are there any other tools or methods I should be looking at?

+",354778,,110531,,43840.41389,43840.48889,How do I maintain configuration in the case of multiple test environments and application instances,,2,1,,,,CC BY-SA 4.0, +403627,1,,,1/10/2020 10:11,,0,124,"

I was wondering: how does a company make sure that a commercial license was purchased for a framework it provides?

+ +

For example, Qt is a framework for C++ and it can be downloaded and used freely. If someone is developing commercial software and uses the free license, how is it possible that they get fined for it?

+ +

I can imagine that bigger companies can easily be audited, but what about small development teams of 1 to 5 people?

+",354779,,,,,43840.48472,How do companies ensure that a commercial license for their Framework is used?,,1,5,,,,CC BY-SA 4.0, +403651,1,403891,,1/10/2020 20:07,,-2,224,"

We have several Git repos in Azure DevOps with .NET Core web applications that are related to each other through submodules (we previously used auto-created NuGet packages in a private NuGet feed, but this was hard to debug and maintain).

+ +

And we also have a master branch with CI/CD pipeline to a test environment and a Stable branch with CI/CD to a production environment.

+ +

Without submodules, the workflow is clear to me:

+ +
    +
  1. We work on a feature in branches made from the Master branch and commit to the Master branch until the feature is complete and the Master branch is free of bugs
  2. +
  3. We do a merge from Master to Stable branch and the CI/CD releases to production
  4. +
+ +

But with submodules this gets trickier.

+ +

Here is a sketch of our setup:

+ +
    +
  • Web App - Master branch

    + +
      +
    • Submodule 1 with custom Libraries - Master branch
        +
      • Submodule 2 with custom Libraries - Master branch
      • +
    • +
  • +
+ +

In order to prevent unfinished features from a master branch of one of the submodules to get into the Stable branch of the Web App I think the setup for the Stable branch should look like this:

+ +
    +
  • Web App - Stable branch

    + +
      +
    • Submodule 1 with custom Libraries - Stable branch
        +
      • Submodule 2 with custom Libraries - Stable branch
      • +
    • +
  • +
+ +

But to make this work, we would need to change the submodule references each time we merge Master to Stable.

+ +
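
For reference, Git can record a tracking branch per submodule in .gitmodules, which at least automates pointing the Stable superproject at Stable submodules - a sketch:

+ +

# record the branch once (writes it into .gitmodules)
+git submodule set-branch --branch Stable path/to/submodule1
+
+# later, move each submodule to the tip of its recorded branch
+git submodule update --remote
+
+ +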

Is there a better workflow for working with submodules in different git branches? Or is there a good method to change the submodules to the Stable branches of those repositories?

+ +

Edit:

+ +

If you are downvoting, please let me know why in a comment so I can correct my mistake. Anonymous downvoting is, in my opinion, 'not done'; I always (on Stack Overflow) comment as to why I downvote.

+",153740,,153740,,43841.36111,43846.55972,What is the best GIT workflow with CI/CD with submodules in a master to a test environment and a stable branch to a production environment?,,3,4,,,,CC BY-SA 4.0, +403658,1,,,1/11/2020 0:25,,1,62,"

There are different kinds of ACLs I've generally implemented:

+ +
    +
  1. Does a user have access to a resourcetype (API)?
  2. +
  3. Does a user have access to a resource (Object)?
  4. +
  5. Does a resource have access to another resource?
  6. +
  7. Is a resourcetype supported for a resource?
  8. +
  9. Restrict the outcome of one metaresourcetype (Search) based on restrictions on the resourcetype.
  10. +
+ +

Except for 1, everything has generally been modeled at the application layer, while (1) is answered by traditional authn/authz mechanisms. This grows immensely complex as the types of resources and resources themselves increase. Since these questions are quite similar, is there a good way to model a general purpose solution to the above class of problems? If so, what are some well known implementations/architectures?

+",285652,,,,,43841.01736,How to model ACLs?,,0,4,,,,CC BY-SA 4.0, +403659,1,,,1/11/2020 3:48,,1,72,"

I'm just starting to put software design principles and agile into practice. I'm taking baby steps, and so far I've started implementing the following for my software projects...

+ +
    +
  • A github project board with github issues acting as my User Stories.
  • +
+ +

My understanding is that a huge SRS isn't necessary, but I'd like to know whether it's good practice to just attach any diagrams I feel are necessary for a particular story to its issue and NOT create a Software Requirements Specification document.

+ +

Also, if it's practical to have an SRS document in agile, what important artifacts might I include?

+",163895,,209774,,43841.73333,43841.76597,"Should I be attaching diagrams, wire frames and other design artifacts to User Stories in an agile approach?",,2,0,,,,CC BY-SA 4.0, +403661,1,,,1/11/2020 4:06,,1,171,"

I have an AWS Lambda function that a user makes a GET request to, and it returns a presigned URL. The user then uploads an image by making a PUT request to that URL. AWS S3 limits presigned PUT URLs to one object upload per URL.

+ +

How would I send multiple images? Am I supposed to use a loop on the current number of images e.g:

+ +

Say the user wants to upload 5 images.

+ +
for(...5 images...){
+   Make GET request to generate URL
+   In the callback method, 
+   Make a PUT request to the generated URL to upload image
+} 
+
+ +

I am not sure if the service is intended to be used this way. This also causes another issue: what if I want all the images to be uploaded and saved into my DB, or none at all? Since the PUT request's post-process Lambda function saves each S3 image URL into my database, how do I ensure all the images are uploaded, and not only 3 out of 5 of them, if the network fails?

+",293058,,,,,43841.17083,AWS S3 Presigned URL limited to one object upload per URL - How to upload multiple images,,0,2,,,,CC BY-SA 4.0, +403662,1,403674,,1/11/2020 4:21,,-1,159,"

I am thinking of splitting a service serving multiple endpoints into microservices that each serve a set of endpoints, but the problem is that the two services have certain logic in common. Can the two services use libraries to address this? Is it even a good idea to split it up?

+ +

To give a hypothetical analogy, let's say the given service has two endpoints:

+ +
    +
  1. GET /animals, which returns a list,

    + +
    [""dog"", ""cat""]
    +
  2. +
  3. POST /walk, that accepts the animal

    + +
    // request body
    +{
    +    animal: ""dog""
    +}
    +
    +// handler
+    animalFood = {""dog"": biscuits, ""cat"": milk}   // food lookup keyed by animal
    +    animalFood{{""dog"": biscuits}, {""cat"" : milk}}
    +    goForAWalk({ 
    +                   animal: request.Animal,
    +                   food: animalFood[request.Animal]
    +              })
    +}
    +
  4. +
+ +

Is it a good idea to split the two APIs and keep the processing logic in libraries? It feels like the services are still coupled and belong to the same domain.

+",108598,,108598,,43842.40486,43842.40486,Can a service be split into two microservices using common libraries?,,2,4,,,,CC BY-SA 4.0, +403664,1,403676,,1/11/2020 5:08,,0,193,"

I have been working with Perforce for 10 years. Our repository has around 500,000 submits, and our submits are either C/C++ source code or (binary) dependencies.

+ +

The lack of editor support for Perforce was always a downer. So recently I finally started evaluating Git, and I enjoy the concept. I can already see it's not all a shiny new world, but it's good enough to keep exploring.

+ +

So my biggest concern is dependencies. Over the entire depot lifetime we have around 200 GB of binary dependencies (new revisions, versions, etc.). The first solution is Git LFS. I ran a few tests, and instead of being placed in .git/objects the files ended up in .git/lfs as simple copies. My question now is: why do people still complain about Git LFS? Is that solution not good? What is the problem with Git LFS, such that it doesn't solve the problem Git is accused of?
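
+ +

For context, tracking binaries with LFS is a one-liner; the pattern below is illustrative:

+ +

git lfs track ""*.dll""   # writes the pattern into .gitattributes
+git add .gitattributes
+git commit -m ""Track binary dependencies with LFS""
+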

+",345143,,,,,43841.63681,Why is Git LFS not the solution to the problem?,,1,3,,43841.72569,,CC BY-SA 4.0, +403672,1,,,1/11/2020 12:39,,-1,49,"

There is a Java code base built on the Spring Boot framework. As an exercise, I want to navigate the code path of every API method to check for the checked exceptions that are thrown at different points in the code path.

+ +

Now I am aware that the exceptions thrown in the code path have to be either caught and handled or thrown to the caller, but the problem here is that I have a base class for the exceptions thrown by the application, and that base class is what the API methods catch or throw. As a result, I don't know the particular exceptions that can be expected in the method call chain. Please find below a simple code example to complement my problem statement:

+ +
class Controller { 
+    public void apiMethod() throws CustomException {
+        Service service = new Service();
+        service.method1();//throws CustomException1
+        service.method2();//throws CustomException2
+    }
+}
+
+class Service {
+    public void method1() throws CustomException1 {
+        throw new CustomException1();   
+    }
+
+    public void method2() throws CustomException2 {
+        throw new CustomException2();
+
+    }
+}
+
+class CustomException extends Exception {
+
+}
+
+class CustomException1 extends CustomException {
+
+}
+
+class CustomException2 extends CustomException {
+
+}
+
+ +

As is evident from the code, the API method is aware that exceptions of type CustomException can be thrown by the call chain, but is unaware of the exact exceptions. Now I want an automated way to find all the different types of exceptions thrown by the method call chain, not just the parent exception.

+ +

Please let me know if more information or clarity is required.

+",283012,,283012,,43841.57083,43855.33472,Automated code navigation for finding all types of exceptions thrown?,,1,8,,,,CC BY-SA 4.0, +403673,1,403680,,1/11/2020 13:32,,11,2060,"

I sometimes end up with services encapsulating the responsibility of doing some sort of business process for which there are several possible outputs. Typically one of those outputs is success and the others represent the possible failures of the process itself.

+ +

To fix the idea consider the following interfaces and classes:

+ +
  interface IOperationResult 
+  {
+  }
+
+  class Success : IOperationResult 
+  {
+    public int Result { get; }
+    public Success(int result) => Result = result;
+  }
+
+  class ApiFailure : IOperationResult 
+  {
+    public HttpStatusCode StatusCode { get; }
+    public ApiFailure(HttpStatusCode statusCode) => StatusCode = statusCode;
+  }
+
+  class ValidationFailure : IOperationResult 
+  {
+    public ReadOnlyCollection<string> Errors { get; }
+
+    public ValidationFailure(IEnumerable<string> errors)
+    {
+      if (errors == null)
+        throw new ArgumentNullException(nameof(errors));
+
+      this.Errors = new List<string>(errors).AsReadOnly();
+    }
+  }
+
+  interface IService 
+  {
+    IOperationResult DoWork(string someFancyParam);
+  }
+
+ +

The classes consuming the IService abstraction are required to process the returned IOperationResult instance. The straightforward way to do so is writing a plain old switch statement and deciding what to do in each case:

+ +
      switch (result) 
+      {
+        case Success success:
+          Console.WriteLine($""Success with result {success.Result}"");
+          break;
+
+        case ApiFailure apiFailure:
+          Console.WriteLine($""Api failure with status code {apiFailure.StatusCode}"");
+          break;
+
+        case ValidationFailure validationFailure:
+          Console.WriteLine(
+            $""Validation failure with the following errors: {string.Join("", "", validationFailure.Errors)}""
+          );
+          break;
+
+        default:
+          throw new NotSupportedException($""Unknown type of operation result {result.GetType().Name}"");
+      }
+
+ +

Writing this type of code in different points of the codebase quickly generates a mess, because this basically violates the open closed principle.

+ +

Each time the implementation of IService gets modified by introducing a new implementation of IOperationResult there are several switch statements that must be modified too. The developer implementing the new feature must be aware of their existence, unless there are well written tests which can automatically detect the missing modifications in the points where the code switches over IOperationResult instances.

+ +

Maybe the switch statement can be avoided at all.

+ +

This is easy to do when IService is used for one specific purpose. As an example, when I write ASP.NET core MVC controllers in order to keep action methods simple and lean I inject a service in the controller and delegate to it all the processing logic. This way the action method only cares about handling the HTTP request, validating the parameters and returning an HTTP response to the caller. In this scenario the switch statement can be avoided from the beginning by using polymorphism. The trick is modifying IOperationResult this way:

+ +
  interface IOperationResult 
+  {
+    IActionResult ToActionResult();
+  }
+
+ +

The action method simply calls ToActionResult on the IOperationResult instance and returns the result.

+ +

In some cases the IService abstraction must be used by different callers, and we need to leave them the freedom to decide what to do with the operation result.

+ +

One possible solution is defining one higher order function, let's call it processor for simplicity, having the responsibility of processing a given instance of IOperationResult. It's something like this:

+ +
  static class Processors 
+  {
+    static T Process<T>(
+      IOperationResult operationResult,
+      Func<Success, T> successProcessor,
+      Func<ApiFailure, T> apiFailureProcessor,
+      Func<ValidationFailure, T> validationFailureProcessor) => 
+        operationResult switch
+        {
+          Success success => successProcessor(success),
+          ApiFailure apiFailure => apiFailureProcessor(apiFailure),
+          ValidationFailure validationFailure => validationFailureProcessor(validationFailure),
+          _ => throw new ArgumentException($""Unknown type of operation result: {operationResult.GetType().Name}"")
+        };
+  }
+
+ +

The advantages here are the following:

+ +
    +
  • there is only one point where the switch statement is done
  • +
  • each time a new implementation of IOperationResult is defined there is only one point that needs to be modified. In doing so, the signature of the Process function gets modified too.
  • +
  • the modification done at the previous point produces several compile-time errors where the Process function gets called. These errors must be fixed, but we can trust the compiler to find all the points to be modified
  • +
+ +

A more object oriented alternative is modifying the definition of IOperationResult by adding one method per each intended usage of the operation result, so that the switch statement can be avoided once more and the only thing to do is actually writing a new implementation of the interface.

+ +

This is an example assuming that there are two different consumers of IService:

+ +
  interface IOperationResult 
+  {
+    string ToEmailMessage(); // used by the email sender service
+    ICommand ToCommand(); // used by the command sender service
+  }
+
+ +

Any thoughts? Are there other or better alternatives?

+",310179,,310179,,43841.57153,44005.25347,Looking for an effective pattern to cope with switch statements in C#,,5,5,4,,,CC BY-SA 4.0, +403684,1,,,1/11/2020 19:38,,1,369,"

I am working on converting a milestone project into microservices. All these services are in separate Docker containers. I am using an event-driven pattern where I have used RabbitMQ as the message broker, with Celery being the async task queue.

+ +

Celery is storing task information in Redis, which is again in a separate container.

+ +

So basically we have these services in different containers:

+ +
    +
  1. access-management
  2. +
  3. RabbitMQ
  4. +
  5. Redis
  6. +
  7. gen
  8. +
  9. apply
  10. +
  11. cm
  12. +
  13. notification
  14. +
+ +

I want to use the same Redis container as a data source for all my other services.

+ +

Here's my doubt: microservices manage their own data, and the pattern suggests having a separate database per service. If I use the same database (Redis container) for all the services, will that be an acceptable design without deviating from microservice principles?

+",354878,,1204,,43841.84861,43843.76944,"If I use the same database (Redis container) for all microservices, will that be an acceptable design without deviating from microservice principles?",,1,0,1,,,CC BY-SA 4.0, +403686,1,403689,,1/11/2020 20:29,,-1,74,"

The application I'm developing has:

+ +
    +
  • controllers (they are responsible for processing RESTful API calls),
  • +
  • services (their methods are being called by controllers, they are responsible for operations with various objects),
  • +
  • repositories (their methods are called by controllers when they are manipulating various objects)
  • +
  • and so on.
  • +
+ +

Some operations are asynchronous, so a controller runs such an operation by calling a separate service; this separate service starts a background job and returns the job ID to the caller. It looks like this:

+ +
    @DeleteMapping(""/hahaha/{id}"")
+    public Job deleteHahaha(@PathVariable(""id"") String id) {
+        return jobService.runJob(
+                () -> {
+                    hahahaService.delete(id);
+                }
+        );
+    }
+
+ +

Everything's working fine, but I'm not sure that everything's optimal.

+ +

Shouldn't I make HahahaService.delete() call JobService.runJob() itself and return a Job? Wouldn't it be bad practice for services to be coupled and interdependent? At the same time, JobService is a service of another kind than HahahaService, HohohoService and KekekeService, because it's an auxiliary service, so maybe it's OK to use it from another service.

+ +

What do you think?

+ +

Thanks!

+",327861,,,,,43841.87917,Services calling services,,1,0,,,,CC BY-SA 4.0, +403690,1,,,1/11/2020 23:33,,5,270,"

I'm not quite sure how to best name this, or even precisely what it is that I'm asking here because it's kinda vague and intuitive, but... I hope my explanation will make sense.

+ +

Over the years of my programming career I've had the honor of working with many different frameworks. Some were homebrew, others were in the public domain or even a paid product. And I've noticed a pattern: most frameworks deal excellently with the naive, basic use cases. The kind of things you write in simple examples (and, completely coincidentally I'm sure, also the documentation of said frameworks). They also deal moderately well with average real life use cases. But when you hit on something that lies off the beaten path, something that the core devs hadn't anticipated, all hell breaks loose. Suddenly you spend days if not weeks trying to figure out how to do what you need to do; delve through undocumented and/or reverse-engineered bowels, only to end up with an ugly hack that you hope nobody ever finds out about and which may or may not stop working with the next major release of the framework. It feels like a wrestling match, and I've been on this rodeo far more times than I would like.

+ +

As a result I've gained a dislike for large, all-encompassing frameworks that dictate Their Way in every little aspect of your program. Instead I prefer to put my programs together myself, using many unrelated external libraries which each focus narrowly on their own task, but do not put any limitations on how you use them - or if you use them at all. This gives me a lot more degrees of freedom, and ugly hacks are rarely needed.

+ +

However, recently I've heard from several colleagues in unrelated conversations that they actually feel exactly the opposite. They prefer the megaframeworks precisely because they enforce their structure and order upon your program, rather than allowing you to go whichever way you want. And I also do see the point in that. If you're a new developer to the project but you already know the framework, you'll have an easier time getting your bearings. Similarly, there will be an implicit agreement among all developers of a project about what goes where and how things are done. In contrast, the frameworkless programs do tend to get somewhat more messy when there are multiple people working on them, because each person does things their own way.

+ +

This discussion came up again recently about a fairly new project we're working on. I've started it in the ""frameworkless"" way, but a colleague feels that it would be better to move over to a certain large framework instead.

+ +

I'm quite confused and don't really know which would be objectively better. Should we rewrite the program in the large framework and stick to it, warts and all? And then say ""no"" when a requirement comes in that cannot be reasonably accomplished due to the limitations of said framework? Or should we stick with the hodgepodge of unrelated libraries and perhaps start writing our own ""style and structure guide"" to ensure code uniformity?

+",7279,,7279,,43841.99236,43843.65208,Is it better to use frameworks with strict structural requirements?,,4,10,1,,,CC BY-SA 4.0, +403693,1,403701,,1/12/2020 1:25,,1,659,"

I posted this question originally in Code Review, but then thought that I could possibly get more feedback about the design here.

+ +

I just finished writing a simple Snake clone in C++ with the goal of having an actual design this time and to adhere to the SOLID principles of OOP. So I made a class diagram of the game and wrote all the code, and it works fine, but there are some areas where I feel like I could have done better. For example, there are some hard coded elements which I wasn't able to elegantly remove, and there's one place where I use inheritance, but then I also have an enum to figure out which child a message is coming from when I would prefer not to have to care at all what type the child is.

+ +

Anyway, here's the class diagram:

+ +

+ +

Engine is a simple engine I wrote to display 2D graphics easily. I only use it for rendering and for some helper classes like v2di, which is a 2D integer vector for storing positional information.

+ +
+ +

Here's the game class which is responsible for starting and running the game.

+ +

OnCreate() is called once on startup,

+ +

OnUpdate() and OnRender() are called once per frame and should contain the game loop,

+ +

OnDestroy() is called when the inner game loop is exiting and the program is about to quit:

+ +
///////////////// .h
+
+class SnakeGame : public rge::DXGraphicsEngine {
+public:
+    SnakeGame();
+    bool OnCreate();
+    bool OnUpdate();
+    bool OnRender();
+    void OnDestroy();
+
+protected:
+    Snake snake;
+    FieldGrid field;
+    int score;
+    bool gameOver;
+    int updateFreq;
+    int updateCounter;
+};
+
+//////////////////////// .cpp
+
+SnakeGame::SnakeGame() : DXGraphicsEngine(), field(10, 10), snake(3, rge::v2di(3, 1), Direction::RIGHT), score(0), gameOver(false), updateFreq(10), updateCounter(0){
+
+}
+
+bool SnakeGame::OnCreate() {
+    field.AddSnake(snake.GetBody());
+    field.GenerateFood();
+    return true;
+}
+
+bool SnakeGame::OnUpdate() {
+
+    //check user input
+    if(GetKey(rge::W).pressed) {
+        snake.SetDirection(Direction::UP);
+    }
+    if(GetKey(rge::S).pressed) {
+        snake.SetDirection(Direction::DOWN);
+    }
+    if(GetKey(rge::A).pressed) {
+        snake.SetDirection(Direction::LEFT);
+    }
+    if(GetKey(rge::D).pressed) {
+        snake.SetDirection(Direction::RIGHT);
+    }
+
+    updateCounter++;
+    if(!gameOver && updateCounter >= updateFreq) {
+        updateCounter = 0;
+        //clear snake body from field
+        field.ClearSnake(snake.GetBody());
+        //move
+        snake.MoveSnake();
+        //add snake body to field
+        field.AddSnake(snake.GetBody());
+        //testcollision
+        CollisionMessage cm = field.CheckCollision(snake.GetHead());
+        gameOver = cm.gameOver;
+        score += cm.scoreChange ? snake.GetLength() * 10 : 0;
+        if(cm.tileType == TileType::Food) {
+            field.GenerateFood();
+            snake.ExtendSnake();
+        }
+    }
+    return true;
+}
+
+bool SnakeGame::OnRender() {
+    std::cout << score << std::endl;
+    field.Draw(&m_colorBuffer, 100, 20, 10);
+    snake.DrawHead(&m_colorBuffer, 100, 20, 10);
+    return true;
+}
+
+ +

Next up is the Snake class that moves and extends the snake. There's also an enum for the Direction the snake can move in:

+ +
///////////// .h
+
+enum class Direction {
+    UP, DOWN, LEFT, RIGHT
+};
+
+class Snake {
+public:
+    Snake();
+    Snake(int length, rge::v2di position, Direction direction);
+    rge::v2di GetHead() { return head; }
+    std::vector<rge::v2di> GetBody() { return body; }
+    void MoveSnake();
+    void ExtendSnake();
+    Direction GetDirection() { return direction; }
+    void SetDirection(Direction direction);
+    int GetLength() { return body.size() + 1; }
+    void DrawHead(rge::Buffer* buffer, int x, int y, int size);
+
+protected:
+    std::vector<rge::v2di> body;
+    rge::v2di head;
+    Direction direction;
+    Direction oldDirection;
+};
+
+////////////// .cpp
+
+// note: members initialize in declaration order: body, head, direction, oldDirection
+Snake::Snake(): body(), head(rge::v2di(0, 0)), direction(Direction::UP), oldDirection(Direction::UP){
+    body.push_back(rge::v2di(head.x, head.y + 1));
+}
+
+Snake::Snake(int length, rge::v2di position, Direction direction) : body(), head(position), direction(direction), oldDirection(direction) {
+    for(int i = 0; i < length-1; ++i) {
+        rge::v2di bodyTile;
+        switch(direction) {
+        case Direction::UP:{
+            bodyTile.x = head.x;
+            bodyTile.y = head.y + (i + 1);
+            break;
+        }
+        case Direction::DOWN:{
+            bodyTile.x = head.x;
+            bodyTile.y = head.y - (i + 1);
+            break;
+        }
+        case Direction::LEFT: {
+            bodyTile.y = head.y;
+            bodyTile.x = head.x + (i + 1);
+            break;
+        }
+        case Direction::RIGHT: {
+            bodyTile.y = head.y;
+            bodyTile.x = head.x - (i + 1);
+            break;
+        }
+        }
+        body.push_back(bodyTile);
+    }
+}
+
+void Snake::MoveSnake() {
+    oldDirection = direction;
+    for(int i = body.size()-1; i > 0; --i) {
+        body[i] = body[i - 1];
+    }
+    body[0] = head;
+
+    switch(direction) {
+    case Direction::UP: {
+        head.y--;
+        break;
+    }
+    case Direction::DOWN: {
+        head.y++;
+        break;
+    }
+    case Direction::LEFT: {
+        head.x--;
+        break;
+    }
+    case Direction::RIGHT: {
+        head.x++;
+        break;
+    }
+    }
+}
+
+void Snake::ExtendSnake() {
+    body.push_back(body[body.size() - 1]);
+}
+
+void Snake::SetDirection(Direction direction) {
+    switch(this->oldDirection) {
+    case Direction::UP:
+    case Direction::DOWN: {
+        if(direction != Direction::UP && direction != Direction::DOWN) {
+            this->direction = direction;
+        }
+        break;
+    }
+    case Direction::LEFT:
+    case Direction::RIGHT: {
+        if(direction != Direction::LEFT && direction != Direction::RIGHT) {
+            this->direction = direction;
+        }
+        break;
+    }
+    }
+}
+
+void Snake::DrawHead(rge::Buffer* buffer, int x, int y, int size) {
+    rge::Color c(100, 100, 200);
+    buffer->DrawRegion(x + head.x * size, y + head.y * size, x + head.x * size + size, y + head.y * size + size, c.GetHex());
+}
+
+ +

Then there's the FieldGrid class responsible for collision detection, food generation and storing the state of the map:

+ +
//////////// .h
+
+class FieldGrid {
+public:
+    FieldGrid();
+    FieldGrid(int width, int height);
+    ~FieldGrid();
+    void GenerateFood();
+    CollisionMessage CheckCollision(rge::v2di head);
+    void ClearSnake(std::vector<rge::v2di> body);
+    void AddSnake(std::vector<rge::v2di> body);
+    void Draw(rge::Buffer* buffer, int x, int y, int size);
+protected:
+    std::vector<std::vector<Tile*>> field;
+    int width;
+    int height;
+};
+
+//////////// .cpp
+
+FieldGrid::FieldGrid() : width(10), height(10), field(std::vector<std::vector<Tile*>>()) {
+    for(int i = 0; i < width; ++i) {
+        field.push_back(std::vector<Tile*>());
+        for(int j = 0; j < height; ++j) {
+            field[i].push_back(new EmptyTile());
+        }
+    }
+}
+
+FieldGrid::FieldGrid(int width, int height): width(width), height(height), field(std::vector<std::vector<Tile*>>()) {
+    for(int i = 0; i < width; ++i) {
+        field.push_back(std::vector<Tile*>());
+        for(int j = 0; j < height; ++j) {
+            field[i].push_back(new EmptyTile());
+        }
+    }
+}
+
+// note: FieldGrid owns raw Tile pointers; copying it would double-delete (rule of three)
+FieldGrid::~FieldGrid() {
+    for(int i = 0; i < field.size(); ++i) {
+        for(int j = 0; j < field[i].size(); ++j) {
+            delete field[i][j];
+        }
+        field[i].clear();
+    }
+    field.clear();
+}
+
+void FieldGrid::GenerateFood() {
+    int x = rand() % width;
+    int y = rand() % height;
+    // note: assumes at least one free tile exists, otherwise this loops forever
+    while(!field[x][y]->IsFree()) {
+        x = rand() % width;
+        y = rand() % height;
+    }
+    delete field[x][y];
+    field[x][y] = new FoodTile();
+}
+
+CollisionMessage FieldGrid::CheckCollision(rge::v2di head) {
+    if(head.x < 0 || head.x >= width || head.y < 0 || head.y >= height) {
+        CollisionMessage cm;
+        cm.scoreChange = false;
+        cm.gameOver = true;
+        return cm;
+    }
+    return field[head.x][head.y]->OnCollide();
+}
+
+void FieldGrid::ClearSnake(std::vector<rge::v2di> body) {
+    for(int i = 0; i < body.size(); ++i) {
+        delete field[body[i].x][body[i].y];
+        field[body[i].x][body[i].y] = new EmptyTile();
+    }
+}
+
+void FieldGrid::AddSnake(std::vector<rge::v2di> body) {
+    for(int i = 0; i < body.size(); ++i) {
+        delete field[body[i].x][body[i].y];
+        field[body[i].x][body[i].y] = new SnakeTile();
+    }
+}
+
+void FieldGrid::Draw(rge::Buffer* buffer, int x, int y, int size) {
+    for(int xi = 0; xi < width; ++xi) {
+        for(int yi = 0; yi < height; ++yi) {
+            int xp = x + xi * size;
+            int yp = y + yi * size;
+            field[xi][yi]->Draw(buffer, xp, yp, size);
+        }
+    }
+}
+
+ +

Tile class used in FieldGrid:

+ +
class Tile {
+public:
+    virtual CollisionMessage OnCollide() = 0;
+    virtual bool IsFree() = 0;
+    void Draw(rge::Buffer* buffer, int x, int y, int size) {
+        buffer->DrawRegion(x, y, x + size, y + size, color.GetHex());
+    }
+
+protected:
+    rge::Color color;
+};
+
+class EmptyTile : public Tile {
+public:
+    EmptyTile() {
+        this->color = rge::Color(50, 50, 50);
+    }
+
+    CollisionMessage OnCollide() {
+        CollisionMessage cm;
+        cm.scoreChange = false;
+        cm.gameOver = false;
+        cm.tileType = TileType::Empty;
+        return cm;
+    }
+
+    bool IsFree() { return true; }
+};
+
+class FoodTile : public Tile {
+public:
+    FoodTile() {
+        this->color = rge::Color(50, 200, 70);
+    }
+    CollisionMessage OnCollide() {
+        CollisionMessage cm;
+        cm.scoreChange = true;
+        cm.gameOver = false;
+        cm.tileType = TileType::Food;
+        return cm;
+    }
+
+    bool IsFree() { return false; }
+};
+
+class SnakeTile : public Tile {
+public:
+    SnakeTile() {
+        this->color = rge::Color(120, 130, 250);
+    }
+
+    CollisionMessage OnCollide() {
+        CollisionMessage cm;
+        cm.scoreChange = false;
+        cm.gameOver = true;
+        cm.tileType = TileType::Snake;
+        return cm;
+    }
+
+    bool IsFree() { return false; }
+};
+
+ +

Finally here's the CollisionMessage class used to send messages to the game when the snake head collides with any Tile:

+ +
enum class TileType {
+    Empty,
+    Snake,
+    Food
+};
+
+class CollisionMessage {
+public:
+    bool scoreChange;
+    bool gameOver;
+    TileType tileType;
+};
+
+ +

I omitted all the includes and the main method, as they aren't relevant to the design and would just take up extra space.

+ +
+ +

I appreciate the time you take to read through all my code (or just look at the class diagram) and would really like to hear what you think about the overall design I chose.

+",354889,,,,,43842.52778,Simple Snake Game in C++,,2,5,,,,CC BY-SA 4.0, +403695,1,403711,,1/12/2020 7:44,,0,210,"

My manager asked me how much it costs (in money, currency, $) to make a REST API request from our client application to one of our services. We do not use cloud, we have on-prem servers.

+ +

The payload size averages around 5kb, and the client app makes around 750 million requests a week.

+ +
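
As a rough sense of scale (a back-of-envelope check using only the numbers above): 750 million requests × 5 kB of payload is roughly 3.75 TB of traffic per week, before HTTP headers, TLS and retries. The money question then becomes what share of the servers' amortised hardware, power, bandwidth and staffing costs those requests consume.

+ +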

I don't even know where to start with calculating this. Where do I look? What do I search for? How do I prove the number to him?

+",353366,,9113,,43842.61806,43863.74028,How to calculate the costs for an on-prem API request in terms of money?,,3,6,,,,CC BY-SA 4.0, +403700,1,,,1/12/2020 10:24,,1,312,"

I want to create a system of user reviews, and replies to the reviews, on a website. There can be replies to replies. I'm using a MongoDB database, which I think is an important detail.

+ +

The review document looks roughly like this:

+ +
{
+  review: {
+    reviewid: String,
+    userid: String,
+    reviewRating: Int,
+    reviewContent: String,
+    replies: [Reply]
+  }
+}
+
+ +

+ +

The reply object looks like this:

+ +
{
+  reply: {
+    userid: String,
+    reviewid: String,
+    replyid: String,
+    replyContent: String
+  }
+}
+
+ +

I'm not sure which route to take with how to store user replies. On the one hand it will be super convenient to store an array of reply objects in the review document (1). On the other hand I feel it will be more stable and organized to save replies as separate documents in the db (2).

+ +

In case 1 I get the benefit of mongodb flexibility and I will not need to make db lookups.

+ +

In case 2 I thought to add previous: ObjectId and next: ObjectId fields to each reply, so the review is only linked to the first reply in the chain. In case 2 there will be more lookups, since I will have to follow next all the time; each hop is cheap, though, because MongoDB creates an index on _id fields by default. Another disadvantage is that in order to count the number of replies per review I would need as many lookups as there are replies, unless I keep the count as a field on the review and make sure to increment/decrement it on every change (more complexity).

+ +
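
For illustration, a reply document in case 2 might look like this (the previous and next fields are my assumption of how the chain would be linked):

+ +
{
+  reply: {
+    userid: String,
+    reviewid: String,
+    replyid: String,
+    replyContent: String,
+    previous: ObjectId,
+    next: ObjectId
+  }
+}
+
+ +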

So on the face of it the first route is better in terms of performance. The main disadvantage is that the reply order is a bit volatile, in the sense that it's just an array. I also instinctively feel the first route is ""flimsy"" because the replies are not actual documents in the db with their own ids.

+ +

I'm looking for suggestions on what the best practice would be here.

+",282823,,,,,43844.26736,Is database linked list a good architecture for replies and reviews system when using mongodb?,,2,1,,,,CC BY-SA 4.0, +403702,1,,,1/12/2020 12:30,,0,59,"

Having an application that defines a plugin API, I was wondering how (if possible at all) to achieve both stability and performance (see below for what that means) at the same time.

+ +

A plugin in my application is just a processing task, which takes a data structure whose format is defined in the plugin API and transforms it basically. As a rough idea, it could look like this:

+ +
struct DataObject {
+   std::vector<ProcessingResult> processingResults;
+   const uint8_t *rawData;
+};
+
+ +

And a very simplified version of a plugin could look like this:

+ +
bool pluginMainFunction(DataObject& data)
+{
+    ProcessingResult newResult(""myplugin.name"");
+    // add to results
+    data.processingResults.push_back(newResult);
+    return true;
+}
+
+ +

The first approach that comes to mind is providing the plugin as a .so file and just calling pluginMainFunction from the application directly. This brings one huge benefit - direct access to the data object. This is especially handy when multiple plugins run sequentially on the same data. No copies at all are needed. And this is what I mean by performance. Our benchmarks have shown that IO in general is the first bottleneck we hit when we serialize DataObject to disk and load it again for the plugin.

+ +
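
For reference, a minimal sketch of the in-process variant on a POSIX system, assuming the plugin is built as a .so that exports pluginMainFunction with C linkage (error handling kept to a bare minimum):

+ +
#include <dlfcn.h>
+
+using PluginFn = bool (*)(DataObject&);
+
+bool runPlugin(const char* path, DataObject& data)
+{
+    void* handle = dlopen(path, RTLD_NOW);
+    if (!handle) return false;  // library failed to load
+    auto fn = reinterpret_cast<PluginFn>(dlsym(handle, ""pluginMainFunction""));
+    bool ok = fn ? fn(data) : false;
+    dlclose(handle);
+    return ok;
+}
+
+ +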

That said, we actually came up with a second approach as well: writing very simple wrappers around each plugin makes them standalone executables which we can control via scripts. The big advantage here is stability. With a little bit of extra effort we can handle plugin crashes quite well, whereas in the first approach a SEGFAULT in a plugin crashes the whole application.

+ +

We are not so much concerned about malicious plugins, but bugs happen all the time and should not lead to a total crash. The plugins themselves can be quite complex, some of them being full-blown applications.

+ +

Are there any other approaches that come to mind (maybe keeping the DataObjects in a shared memory segment)? I would be very interested in reading about your experiences, and the pros and cons you encountered whenever you developed and ran similar systems.

+",321010,,,,,43842.52083,Different approaches to plugin system,,0,4,,,,CC BY-SA 4.0, +403716,1,,,1/12/2020 19:18,,2,73,"

We have a fairly large database with some fairly large tables (100s of millions of rows). Some of those tables are reported on.

+ +

We have indexes in place to make this as fast as possible, but we still hit limits on what we can achieve. As such we limit any reporting to one month's worth of data per user.

+ +

I am trying to find out how one can achieve reporting on a larger scale, such as over a year, or comparing one year to the previous.

+ +

I figured the only way to achieve this is to roll up the data somehow into aggregations at a different scale. So instead of 5 data points a day, you could store 1 aggregated data point per day, week, month etc.

+ +
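
For illustration, a minimal sketch of an hourly roll-up in UTC (table and column names are made up); keeping the grain at one hour rather than one day leaves the option of re-bucketing into most users' local days later:

+ +
-- hypothetical source table dbo.Readings(ReadingTimeUtc datetime2, Value decimal)
+INSERT INTO dbo.HourlyReadings (HourStartUtc, ReadingCount, TotalValue)
+SELECT DATEADD(hour, DATEDIFF(hour, 0, ReadingTimeUtc), 0),  -- truncate to the hour
+       COUNT(*),
+       SUM(Value)
+FROM dbo.Readings
+GROUP BY DATEADD(hour, DATEDIFF(hour, 0, ReadingTimeUtc), 0);
+
+ +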

This option seems great until you consider timezones; all our data is stored in UTC, but a user may be in a different timezone from their colleague. I had thought we could aggregate once per timezone that is in use, but then the data volume creeps up again.

+ +

I'm sure there are some best practices for this kind of work, but I have been unable to unearth them; if someone could point me in the right direction I'd appreciate it.

+ +

Some additional info:

+ +
    +
  • Server: MSSQL
  • +
  • Coding stack .NET
  • +
  • We do not use SSIS
  • +
+",189212,,,,,43842.91528,Techniques for handling large reporting periods in MSSQL,,1,5,,,,CC BY-SA 4.0, +403722,1,403725,,1/13/2020 11:39,,0,94,"

I've heard about vector clocks and how to test if a message was sent before another message.

+ +

E.g. message A was sent before message B if every element of the vector of message A is smaller than or equal to the corresponding element of the vector of message B.

+ +

Also, there needs to be at least one element in the vector of message A that is strictly smaller than (and not equal to) the corresponding element of message B.

+ +
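
Written out (a minimal Python sketch, assuming both clocks are equal-length lists of counters), that test is:

+ +
def happened_before(a, b):
+    # A happened before B iff A is <= in every component and < in at least one
+    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
+
+ +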

As I see it, the second part just tests whether there are elements that differ.

+ +

Is there a possible case where all elements of the vector of message A are the same as the corresponding elements of the vector of message B, or is the second check unnecessary?

+ +

I mean, each element can only be incremented by its own process. If e.g. process X sends a message to process Y and a message comes back, the element for process Y is changed by process Y only, and the element for process X is changed by process X only, so in any case there is an update to both the own element and the foreign element. This means there could not possibly be a case where the vector of message A and the vector of message B are completely the same, right?

+ +

Could the performance of the vector clock be increased if the test for equality is removed?

+",354968,,354968,,43843.49792,43843.50972,Need to test for equality on vector clocks,,1,6,,,,CC BY-SA 4.0, +403723,1,,,1/13/2020 11:59,,-1,48,"

If consumers of my API have their software on AWS, their IP address is subject to change if they are scaling their services horizontally (adding more machines).

+ +

This means that I can't whitelist a single IP address to ensure only the trusted consumer is able to access my API, as it will be constantly changing, if I understand AWS correctly.

+ +

How can this be managed?

+",345294,,,,,43843.66944,IP Whitelisting AWS consumers,,1,5,,,,CC BY-SA 4.0, +403726,1,403728,,1/13/2020 12:17,,0,122,"

I wrote a lot of software in C# and Python. I tried to make the overall architecture testable by using the ""Clean Architecture"" and Dependency Injection. This works well for C# (and python).

+ +

Now I am moving to a new company where I will program in C (it will be a new codebase). Some years ago I wrote a lot of C, but without a strong focus on testing. This time I would like to create a clean and testable architecture which allows me to test everything.

+ +

Does a best practice like ""Clean Architecture"", ""Onion Architecture"" or some other architectural pattern exist that leads to well-testable C code (given, of course, that the code is well written)?

+",280170,,248595,,43843.5625,43843.5625,Architectural pattern for testable C code,,1,2,,43843.80694,,CC BY-SA 4.0, +403730,1,403732,,1/13/2020 14:14,,-2,77,"

I am trying to create different versions of a software manual from a single source, which is in MS Word format. Is it possible to do this algorithmically, or must it be done manually for every version?

+ +

Specifically, our software has three different types of licenses and each one corresponds to a specific subset of the built-in functionality. Details of the omitted functionality should not be present in the manual.

+",354960,,,,,43843.73889,How to produce different versions of a software manual from the same MS Word document?,,1,0,,43843.67708,,CC BY-SA 4.0, +403735,1,403742,,1/13/2020 16:22,,-1,214,"

I know 100% that there already is a solution out there for what I am asking but with all the research I have done, I can't seem to find it.

+ +

Problem: +I'm currently a CS student doing independent study to stay ahead. Recently, I have been wondering how different languages/frameworks implement event handling, and I have found some information on it, such as that there is typically an event loop running in step with the main thread.

+ +

My question is: how is such a thing implemented when you want to check for multiple events, such as game events? Would you want to create a thread for each event and have those threads loop alongside the main thread, or would you want a single thread? I would naively assume you open a new thread for each event you want to handle, but there has to be a better solution.
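
+ +

For concreteness, this is the kind of single-threaded loop I mean (a minimal Python sketch; handle() here is just a placeholder for whatever dispatch a real framework does):

+ +
import queue
+
+events = queue.Queue()
+
+def handle(event):
+    print(event)               # placeholder for real dispatch
+
+def event_loop():
+    while True:
+        event = events.get()   # blocks until something is posted
+        if event is None:      # sentinel to shut the loop down
+            break
+        handle(event)
+
+ +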

+",286260,,,,,43843.96111,Is opening a new thread for each event you want to handle the best way to handle events?,,4,1,,,,CC BY-SA 4.0, +403739,1,,,1/13/2020 16:49,,2,134,"

Backstory: I am unable to use RDS, as I need to install cartridges in my PostgreSQL instances.

+ +

I have been trying to pin down an architecture for PostgreSQL running on EC2 instances for a few days. Most information I could find online uses separate software tools like HAProxy or PGPool, or more fully-fledged DB clustering products like ClusterControl.

+ +

Question: Can't a highly available PostgreSQL DB be set up on AWS without any software load balancers, cluster management tools, etc?

+ +

So far I have come up with the following architecture, which I think would work. There is some custom coding required in the Lambda layer to update the PostgreSQL replication slots whenever EC2s are terminated / started - there is no escaping this, since PostgreSQL does not come with a built-in failover mechanism.

+ +

+ +

Notes:

+ + +",354986,,,,,43843.70069,Running a high availability PostgreSQL cluster on native AWS services only,,0,0,,,,CC BY-SA 4.0, +403741,1,,,1/13/2020 17:41,,1,112,"

I am creating a booking system that will allow users to make a reservations for whole days. When a user wants to initially make a reservation, they select the day(s) and then will have 10 minutes to fill out the rest of their information.

+ +

How I achieved this was having a booked field and a lockedUntil field so that other reservations could not be made on the same days as long as there was a booked reservation or one with lockedUntil in the future.

+ +

Now as I am using mongodb, in order to enforce uniqueness I had been checking for any conflicting reservations before inserting. I just recently realized the race condition if 2 reservations were to come back with no conflicts, and then both get inserted. Obviously this is a big no-no.

+ +

My question boils down to, what is the optimal way to enforce uniqueness of dates, with maintainability in mind?

+ +

What I had attempted to implement was a unique index on the dates, so that in case of a race condition it would at least be blocked at the database level. This seemed fine until I realized that I can't handle reservations that aren't yet booked but are just locked with lockedUntil. Now I could set a TTL index to delete these stale reservations, but TTL deletion only runs periodically, leaving a window of up to 60 seconds where the unique constraint would still reject bookings against already-expired locks. Also, this would prevent us from being able to see reservations that weren't completed, which could be valuable data.

+ +

Perhaps there's a better way to achieve what I want. I had thought of using transactions, but my concern is if it is controlled at the application level, one reservation might slip by and end up with a double booking in the database.

+ +

Is there another approach I might not be thinking about? Could there be a better way to control the locking of reservations?
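
+ +

For context, the atomic variant I have been considering looks roughly like this (a pymongo sketch; collection and field names are made up). The unique index makes the second of two racing inserts fail instead of double booking:

+ +
from datetime import datetime, timedelta
+
+from pymongo import ASCENDING, MongoClient
+from pymongo.errors import DuplicateKeyError
+
+col = MongoClient().booking.reservations
+col.create_index([('day', ASCENDING)], unique=True)
+
+def try_lock(day):
+    try:
+        col.insert_one({
+            'day': day,
+            'booked': False,
+            'lockedUntil': datetime.utcnow() + timedelta(minutes=10),
+        })
+        return True   # this request now holds the day
+    except DuplicateKeyError:
+        return False  # someone else booked or locked it first
+
+ +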

+",346406,,,,,43878.92292,Dealing with complex uniqueness in MongoDB,,2,2,1,,,CC BY-SA 4.0, +403743,1,,,1/13/2020 17:44,,1,424,"

As an interview assignment for a software developer position, I received the following task. I have a list of children, and the task is: sort children ascending by their birth_date timestamp, breaking ties by ID.

+ +

The structure of child is:

+ +
""child"": {
+  ""id"": ""14""
+  ""name"": ""John""
+  ""birth_date"": ""1990-12-20T11:50:48Z""
+}
+
+ +

I cannot get the meaning of the phrase ""breaking ties"" in the given context. I guess it means one of the following:

+ +
    +
  • if birth_date is the same - first goes child with lower id
  • +
  • while sorting using birth_date, ignore - sign.
  • +
+",,user372012,209774,,43843.76181,43844.14931,"What means ""breaking ties"" in context of sorting",,2,3,,,,CC BY-SA 4.0, +403745,1,,,1/13/2020 18:33,,-1,210,"

I have seen numerous posts on this subject but none really answered my questions.

+ +

I have some user input that is inserted into my DB and displayed back to the user later. Before inserting it into the DB I validate the input. When I need to display it back to the user I have been using two different methods.

+ +

The first is using preg_replace(). With this I can do things like remove all non alphanumeric characters. The second method is to use filter_var() with an option like FILTER_SANITIZE_STRING, which will strip tags and encode special characters.

+ +
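
For concreteness, the two variants I mean (using a throwaway example string):

+ +
$input = '<b>Hello</b> world!';
+
+// preg_replace(): strip everything except letters, digits and spaces
+$alnum = preg_replace('/[^a-zA-Z0-9 ]/', '', $input);
+
+// filter_var(): strip tags and encode special characters
+$clean = filter_var($input, FILTER_SANITIZE_STRING);
+
+ +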

What I haven't been able to find is which one I should be using and, of course, why.

+",352360,,,,,44191.96181,PHP preg_replace() vs filter_var(),,2,3,,,,CC BY-SA 4.0, +403750,1,,,1/13/2020 20:09,,10,377,"

I have been writing tests for a lot of long if/else trees recently, and I'm finding it a little discouraging. I want to speak in concrete terms, so consider the following example (I'll write in Ruby syntax but the example is quite general):

+ +

A user can either be an admin? or not (boolean method). A file can either be public? or not. An admin can open any file, and a public file can be opened by any user. A non-admin user cannot open a non-public file. To define whether a user can open a file, we make the following checks:

+ +
if user.admin?              if file.public?
+  true                        true
+elsif file.public?          elsif user.admin?
+  true                        true
+else                        else
+  false                       false
+end                         end
+
+
+ +

I'll call the left implementation ""admin-first"", and the right one ""file-first"". Clearly, both implementations return the same values for every possibility.

+ +

I want to write a test for this block of code, which tests the behaviour and not the implementation. In particular, there are three test cases I care about:

+ +
    +
  • Given an admin user, and any file whatsoever, the block returns true.
  • +
  • Given any user whatsoever, and a public file, the block returns true.
  • +
  • Given a non-admin, non-public file, the block returns false.
  • +
+ +

My problem is that I cannot see a way to actually write tests which mean the above conditions.

+ +

One option (""maximum correctness"") is to test all four options of true/false. Of course, this is a toy example and the cases I've been looking at have upwards of 15 conditions which can (in principle!) vary. Even with only four conditions, which is relatively common, this approach is impractical. It's also very ugly, especially in the extreme case: Why do I have to write more tests than there are distinct behaviours (in some cases, many many more)?

+ +
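
(For reference, here is the ""maximum correctness"" option written as a table-driven check in plain Ruby; can_open? is a stand-in for the block above:)

+ +
def can_open?(admin, pub)  # stand-in for the block above
+  admin || pub
+end
+
+[
+  # [admin?, public?, expected]
+  [true,  true,  true],
+  [true,  false, true],
+  [false, true,  true],
+  [false, false, false],
+].each do |admin, pub, expected|
+  raise ""failed for #{admin}/#{pub}"" unless can_open?(admin, pub) == expected
+end
+
+ +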

A similar option (""randomness"") is choosing inputs randomly, but random tests are generally discouraged, and I believe the arguments against it.

+ +

A pragmatic approach when faced with the ""admin-first"" implementation is to test the following three cases instead:

+ +
    +
  • Provide an admin user (any file will do, because the tree will short-circuit).
  • +
  • Provide a non-admin user with a public file.
  • +
  • Provide a non-admin user with a non-public file.
  • +
+ +

I dislike this for several reasons. First, the behaviour case of ""any user with a public file"" is not checked. That's an aesthetic point really, but it also makes the tests confusing (""why is this property set? is it relevant?""). Second, the test will fail if I were to rearrange (preserving behaviour!) into the ""file-first"" implementation, so you're forced to actually write the tests to go along with the code, or try out every possiblity as before. As you go down the cases, the number of irrelevant bits of data you need to specify only gets bigger.

+ +

The tests can be made to pass even after reordering if we change the first check to specify that the file is non-public, rather than unspecifed. Again, I'm not satisfied:

+ +
    +
  1. Now two of the three test cases fail to test the behaviour.
  +
  2. In the case of more branches, you need to specify every option of data at every level of the tree, which is just plain awful.
  +
  3. It is ""intuitively obvious"" that the overlap case (admin and public) does not need to be tested if the other two code paths work correctly. But that's only true if you know what the code looks like! In more complex cases, this inference is less clear and more confusing for a test-reader.
  +
+ +

For the case of very many branches, you might say something like ""refactor it to hide some of those booleans inside of other methods"", but unfortunately I am writing these tests precisely because I want to refactor this code. The tests have to come first.

+ +

The two things I thought of first, namely using guard clauses or chains of && or ||, have the same problem, because of short-circuiting.

+ +

So, my question: Is there any way to test code resembling this, or to restructure code that looks like this, in such a way that we can really test behaviour and not implementation?

+",354995,,354995,,43843.84375,43843.91181,Is it possible to test if/else trees properly without coding to the implementation?,,2,6,,,,CC BY-SA 4.0, +403759,1,,,1/14/2020 1:15,,0,35,"

In PHP, you can have a collection as an array class property. This collection can have an add function that takes multiple (type-hinted) parameters, e.g. add(Markup $markup, Style $style = Null). The key here is the Null default: the system tells you ""hey, you don't have to pass this, but you can, and if you do it has to be of type Style"". The method then adds a package to that internal array:

+ +
public function add( $name, Markup $markup, Style $style = Null )
+{
+    $this->packages[$name]['markup'] = $markup;
+
+    if( $style ) {
+        $this->packages[$name]['style'] = $style;
+    }
+}
+
+ +

Which means that 100% a package has a markup object, but it can have a style as well.

+ +

Thing is - watch what happens when, by good intentions, I just wanna have things separated such that my interfaces aren't cluttered:

+ +
+public function add( $name, Markup $markup, Style $style = Null, Categorized $categories = Null, .. )
+{
+    $this->packages[$name]['markup'] = $markup;
+
+    if( $style ) {
+        $this->packages[$name]['style'] = $style;
+    }
+
+    if( $categories ) {
+        $this->packages[$name]['categories'] = $categories;
+    }
+
+    //.. and so on, gets cluttered.
+}
+
+ +

This ramps up to be Satan-level hectic to maintain.

+ +

What is a solution to this?

+",353781,,353781,,43844.50417,43844.55903,Scaling inserting related optional objects to your collection,,1,8,,,,CC BY-SA 4.0, +403765,1,403776,,1/14/2020 7:27,,4,2688,"

I get the idea of the factory pattern, but I feel that it is really not necessary to use this pattern.

+ +

For example, below is some code I saw (C#) that uses the factory method:

+ +
public interface IAnimal
+{
+   void Speak();
+}
+
+public class Dog : IAnimal
+{
+   public void Speak()
+   {
+      Console.WriteLine(""Dog says: Bow-Wow."");
+   }
+}
+
+public class Tiger : IAnimal
+{
+   public void Speak()
+   {
+      Console.WriteLine(""Tiger says: Halum."");
+   }
+}
+
+public abstract class IAnimalFactory
+{
+   public abstract IAnimal CreateAnimal();
+}
+
+public class TigerFactory : IAnimalFactory
+{
+   public override IAnimal CreateAnimal()
+   {
+      return new Tiger();
+   }
+}
+
+public class DogFactory : IAnimalFactory
+{
+   public override IAnimal CreateAnimal()
+   {
+      return new Dog();
+   }
+}
+
+ +

and client can invoke:

+ +
IAnimalFactory tigerFactory = new TigerFactory();
+IAnimal aTiger = tigerFactory.CreateAnimal();
+aTiger.Speak();  //3 lines of code, plus needing of extra factory classes
+
+ +

but Client can also do like:

+ +
IAnimal aTiger = new Tiger();
+aTiger.Speak();  //only 2 lines of code
+
+ +

We can see that only 2 lines of code are needed, and we don't need to define factory classes. +So why take extra steps to define and use factories?

+ +

Ant P replied that RandomNumberOfAnimalsGenerator needs a factory, but below is my version of the class, which still doesn't need any factory.

+ +
public class RandomNumberOfAnimalsGenerator
+{
+    private readonly IAnimal animal;
+
+    public RandomNumberOfAnimalsGenerator(IAnimal animal)
+    {
+        this.animal = animal;
+    }
+
+    public List<IAnimal> GetAnimals()
+    {
+        var animals = new List<IAnimal>();
+        var n = RandomNumber(); // assume this returns a random count
+
+        for(int i=0; i<n; i++)
+        {
+            animals.Add(animal); // note: every entry references the same instance
+        }
+
+        return animals;
+    }
+}
+
+ +

and client invokes:

+ +
var RandomNumberOfAnimalsGenerator = new RandomNumberOfAnimalsGenerator(new Tiger());
+
+ +

It still doesn't need a factory.

+",344348,,209331,,43844.69028,43845.44514,What's the benefits to use an abstract factory when using interfaces is already suffice?,,8,6,3,,,CC BY-SA 4.0, +403773,1,403775,,1/14/2020 9:29,,1,84,"

I am developing a Spring application which has a few different modules. There will be a bunch of users added in the database.

+ +

I want to add a feature which will allow me to track users' availability, set by the users themselves. It will be a simple toggle between available and unavailable.

+ +
    +
  • Each user must have a current status set in DB
  • +
  • A history of status changes should be available for each user
  • +
+ +

My goal is to have a possibility to see current status of every user + to have a historical data. I need a history to create some statistics of availability, for example to check what was John Doe's availability during last week, month etc.

+ +

My idea is to create two tables, for example user_status and user_status_history. Once a user changes their status, the entry from the user_status table is copied to the user_status_history table, and then the status and the timestamp in user_status are updated.

+ +
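
Roughly what I have in mind (a sketch; the exact column names and types are just examples):

+ +
CREATE TABLE user_status (
+    user_id    BIGINT PRIMARY KEY,
+    status     VARCHAR(16) NOT NULL,  -- 'AVAILABLE' / 'UNAVAILABLE'
+    changed_at TIMESTAMP   NOT NULL
+);
+
+CREATE TABLE user_status_history (
+    id         BIGINT PRIMARY KEY,
+    user_id    BIGINT      NOT NULL,
+    status     VARCHAR(16) NOT NULL,
+    changed_at TIMESTAMP   NOT NULL   -- when this status became active
+);
+
+ +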

Based on the timestamps I could calculate for how long user was available/unavailable. I am wondering if history data table is enough or if I should add there some more columns like timestamp_end or duration.

+ +

+ +

What will be the best solution for that problem? I am asking for best approach in regards to database and the app itself.

+",354988,,,,,43844.42708,Best approach to handle user statuses and keep their history,,1,0,,,,CC BY-SA 4.0, +403777,1,,,1/14/2020 10:34,,1,207,"

Suppose you have a client-server architecture structured with a Client class that asynchronously implements the Send() and Receive() functions. +You also have a base Message class and several other classes inherited from that class, based on the type of communication. This is the scenario:

+ +
    +
  1. When a message is received, the Receive() function creates the right instance of the class which inherits from the Message class and internally implements a Handle() function.

  +
  2. At the end of the Handle() function you need to send a message back using the Send() function of the Client.

  +
+ +

To accomplish point #2, which of these two approaches represents best practice?

+ +

Decentralized approach:
+call the Client.Send(ReplyMessage) function directly from the Handle() function in the Message. This means that all the Message classes can access and call the Send() function in the Client.

+ +

Centralized approach:
+return the ReplyMessage to the Client, which will then call the Send() function internally. In this way there is only one point where messages are sent.
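
+ +

To make the centralized approach concrete, a minimal sketch (all names invented): Handle() computes and returns the reply, and only the Client sends.

+ +
class PingMessage:
+    def handle(self):
+        return 'pong'              # reply is returned, not sent
+
+class Client:
+    def send(self, reply):
+        print('sending:', reply)   # the single sending point
+
+    def receive(self, message):
+        reply = message.handle()   # handlers never touch send()
+        if reply is not None:
+            self.send(reply)
+
+Client().receive(PingMessage())
+
+ +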

+",355033,,355033,,43844.45069,43844.71944,OOP: centralized vs decentralized approach,,2,2,,,,CC BY-SA 4.0, +403778,1,,,1/14/2020 11:11,,0,47,"

An example web application X allows its clients to upload various resources, e.g. images. These uploaded resources can also be removed by the user at a later time.

+ +

My question is this: what are the general practices regarding resource retention/removal? When, if ever, does it make sense to remove the actual files from the storage rather than just forget about their existence by removing any references/links to them? Are there more considerations beyond:

+ +
    +
  • (storage needs) x (storage cost) VS (budget);
  • +
  • adhering to privacy requirements / removing sensitive data?
  • +
+ +

For example, does it make sense to clean up unused files just to have a tidier database/storage?

+",315196,,,,,43844.46597,Should unused user-uploaded resources (e.g. images) be deleted from storage?,,0,7,,,,CC BY-SA 4.0, +403779,1,,,1/14/2020 11:27,,1,107,"

I'm developing an application where graphs need to be populated from realtime data.
+The Real-Time data comes from a Kafka Queue. +

+How should I send this real-time data to the front-end which is in Angular8? +
So far I have thought of two approaches:

+ +
    +
  1. Use a push-based queue mechanism from the server over the Kafka queue, and then transfer the data to the Front-end via Server-Sent Events (SSE) whenever new data is fetched from the queue (since this will not be duplex communication).

  +
  2. Use a push-based Kafka queue and subscribe to it from Angular itself.

  +
+ +

I know WebSockets could be an option, but I don't need duplex communication; data will only be sent from the server.

+ +

Is this the right approach, or is there a better way?
+Also, is it good practice to subscribe to a queue from the front-end?

+ +

I'm using Django for the back-end and Angular for the front-end

+",349317,,,,,43876.54236,Architecture to populate graphs from real-time data,,2,0,,,,CC BY-SA 4.0, +403781,1,403788,,1/14/2020 11:52,,0,69,"

There's the AntD library with various UX components. Right now I'm required to wrap every component I'm using in another layer, to create an abstraction. For example, if I'm using the AntForm component I'm required to wrap it as follows:

+ +
import React from 'react';
+import AntForm from 'antd/es/form';
+
+import { FormProps } from './form.types';
+
+export function Form(props: FormProps): JSX.Element {
+  return (
+    <AntForm {...props} />
+  );
+}
+
+ +

and redefine all the types internally in form.types.ts as follows:

+ +
export interface Form { ... }
+
+ +

The justification I get for this step is that if we are ever required to change the library, we will only need to make changes to this abstraction layer; the rest of the code will remain untouched.

+ +

I don't like this approach. It requires a tremendous amount of work, especially redefining all the types, the internal types, and the internal types of the internal types. Personally I would avoid such an architectural solution: by redefining all the types we become bound to the specific AntD implementation, and if there are breaking changes in AntD, this adds another layer where we will need to make changes. Another point against it: if we ever do need to change the library, we will in any case be required to change all the components which use it; 100% abstraction can't be achieved. But that's just a gut feeling based on my previous experience, and most likely biased.

+ +

Before I discuss this issue with other people, I would like to hear some unbiased opinions on the pros and cons of such an approach.

+",161072,,161072,,43844.53333,43844.54931,"Wrapping ReactJS UX component library in another layer of abstraction, pros and cons?",,1,0,,,,CC BY-SA 4.0, +403783,1,403785,,1/14/2020 11:59,,2,66,"

As a bit of a background, we create a website where a ""third party"" can organize a sport event, letting us handle enrollment and other checks.

+ +

Well, on our website, ever since we implemented a cookie ""wall"" we have been getting more and more requests and complaints. What I discovered is that most people turn ""marketing"" cookies off, yet those same people then complain that the Facebook like button is gone.

+ +

This has gone so far that people have actively stopped using our services in favour of others.

+ +

So can one show a social media wall, yet prevent the cookies from social media? Or is it just accepted to deliberately not follow the laws?

+",43635,,209774,,43844.54097,43844.69861,Cookie consent and facebook cookies?,,1,0,1,,,CC BY-SA 4.0, +403789,1,,,1/14/2020 13:21,,0,144,"

I am trying to understand the decorator pattern better. I have read an article with an example implementation in Java +and, of course, the GoF book.

+ +

Do I always need an abstract parent for the decorator?

+ +

For instance, in the example I have an abstract ChristmasTree and its implementation, and an abstract TreeDecorator inheriting from ChristmasTree. The TreeDecorator has an abstract decorate() method. Let's assume I have a totally unique override of decorate() for each and every decorator implementation: TreeTopper, Tinsel, Garland, and BubbleLights could then simply implement ChristmasTree directly. Why create an additional compilation unit under these circumstances?

+",94534,,209774,,43844.60139,43844.63819,Decorator Pattern - Necessity of an abstract parent vs Default Interface method,,1,4,,43849.60208,,CC BY-SA 4.0, +403795,1,403815,,1/14/2020 14:19,,1,48,"

I am designing my first ever database architecture, no real framework to build up on. So I am working on my own! Terrifying.

+ +

A huge question I have is about having two separate tables, one being products and one being components, and joining them to create a specific part list for a certain build. I'm still messing around with laying out a framework, but I need a way for all of the tables to communicate, while not having everything together in one place, to make the addition of new products and new components easier.

+ +

A thought I had for my architecture is having these two tables:

+ +

Products Table

+ +
product_id | product_name | part_list_id
+abc123     | product_one  | listb
+xyz789     | product_two  | lista
+
+ +

Components Table

+ +
part_id | part_name   | part_list_id
+62345   | thumb screw | listb
+10242   | ziptie      | lista
+
+ +

And to get each item would just be a simple

+ +
SELECT * FROM Components WHERE part_list_id = 'listb'
+
+ +

I wish it could be that simple. +The thing that I know will make this awful is if a part is in both lista and listb, like the addition of label below, which just adds unnecessary duplicates.

+ +
part_id | part_name   | part_list_id
+62345   | thumb screw | listb
+10242   | ziptie      | lista
+14141   | label       | lista
+14141   | label       | listb
+
+ +

I guess my question would be: is there a better way to have two different items share specific rows from a separate table, without sharing every other row as well?

+",354594,,,,,43844.90486,Structuring a db to allow multiple products to hit a select component list,,1,0,,,,CC BY-SA 4.0, +403796,1,,,1/14/2020 14:33,,1,65,"

My goal was to be able to create an object which is composed by other objects without having to know beforehand what these objects were, then do checks to see if they actually exist, then add them to my collection and so on -- avoid all that fuss.

+ +

My use case is that I have a Tooltip which I output to the markup. This object is parsed by a higher-level Generator. The Generator looks inside the Tooltip and says ""aha! you have a StyleInterface object (behavior), and so I will include a CSS file for you"" and so on. My goal is to allow people to add functionality to an object without enforcing anything - a friendly way, through code, of saying ""if you wanna add this, you can, but you don't have to.""

+ +

And so, the Composite pattern was born.

+ +

I came up with it from here: Scaling inserting related optional objects to your collection

+ +

Yet, there is a problem that's appeared in this process: because it's PHP, I cannot type-hint that an array's members must be of a certain type, ComposableInterface in my case.

+ +

My pattern has achieved ""syntactic sugar niceness"", even if I honestly believe it's over-engineered and its goals were met:

+ +
    +
  1. Signal to the outside world that it is dynamic: it is composed of a dynamic number of behaviors (objects).
  +
  2. Make the insertion easy so you no longer have to name every single component in the constructor, as well as add it manually as in my original question.
  +
+ +

Each item that can be inside of a Composite is called a Component and it must respect the following interface:

+ +
interface ComposableInterface
+{
+    public function getComposableName();
+}
+
+ +

We identify it by getComposableName. Now, who ingests all these Composables? It's, as I said, the Composite:

+ +
interface CompositeInterface
+{
+    public function getComponents();
+}
+
+ +

Let's build the Composite:

+ +
/**
+ * An object which can be composed of multipel behaviors.
+ */
+class Composite implements CompositeInterface
+{
+    /**
+     * @var array
+     */
+    private $components = [];
+
+    public function __construct( $components = [] )
+    {
+        foreach( $components as $component ) {
+            $this->components[$component->getComposableName()] = $component;
+        }
+    }
+
+    /**
+     * Retrieve the components.
+     *
+     * @return array
+     */
+    public function getComponents()
+    {
+        return $this->components;
+    }
+}
+
+ +

Let's try to use it by first creating some Composables components/behaviors that our main object can have:

+ +
interface Style extends ComposableInterface{}
+/**
+ * A component / behavior that can be part of a composite.
+ */
+class StyleOne implements Style
+{
+    public function getComposableName()
+    {
+        return 'style';
+    }
+}
+
+interface Markup extends ComposableInterface{}
+/**
+ * A component / behavior that can be part of a composite.
+ */
+class MarkupOne implements Markup
+{
+    public function getComposableName()
+    {
+        return 'markup';
+    }
+}
+
+interface Categories extends ComposableInterface{}
+/**
+ * A component / behavior that can be part of a composite.
+ */
+class CategoriesOne implements Categories
+{
+    public function getComposableName()
+    {
+        return 'categories';
+    }
+}
+
+ +

Sweet, we have 3 behaviors our Composite can have, let's try to initialize it:

+ +
$composite = new Composite([
+    new StyleOne,
+    new MarkupOne,
+    //new CategoriesOne -- not needed, but it can be added!
+]);
+
+ +

...and voila. This way, we can take away or add as many behaviors as we want.

+ +

So, what exactly did we do, and are we over-engineering? I simply want to build a system that is dynamic in the number of behaviors, which I can re-use without having to check whether a Composable exists every time I work with an entire Composite object.

+ +

With that in mind, we completely avoided having to type-hint our possible components and no longer have to manually check whether a Composable exists, then get its name, then add it to our collection.

+ +

It looks cleaner. But it still has that issue. How can I tell the system that inside my Composite, I want the array of arguments to be of type ComposableInterface without having to do a manual check on each component that's passed?

+",353781,,353781,,43844.64306,43844.64306,Composing objects: how can I enforce an interface upon each component?,,0,12,,,,CC BY-SA 4.0, +403798,1,403814,,1/14/2020 14:57,,0,109,"

I'm having trouble finding an answer for this when I search variants of my question. A use case I'm thinking of is a client makes a GET request to API server A for some data, but in order to provide that data it has to make a request to API server B. For whatever reason the endpoint at B is a POST endpoint (let's say even though it's just fetching data it needs a payload body and so they decided to make it a POST request).

+ +

Should the endpoint in API server A be converted to a POST endpoint since it makes a POST request? Or should the fact that the POST endpoint is in a separate API server not concern the design of endpoint A?

+",294990,,,,,43844.90208,Should a GET endpoint in API server A make a POST request to a separate API server B?,,1,3,,,,CC BY-SA 4.0, +403803,1,,,1/14/2020 15:52,,2,110,"

The problem was already discussed here. But there was not consensus on this topic.

+ +

I have some thoughts on how an insert operation could be implemented for some popular file systems. If the FS has an extent-based structure (e.g. ext4, NTFS, probably btrfs) we can utilize this feature to make modification of the middle parts of a file independent of its other parts. I suppose it would require handling the length of each such part independently of the others. But in some situations the advantage may be drastically big. From my experience, I have sometimes faced the problem of slow processing of one big file, so the functionality may be in demand. And I don't even mention databases here, which have always required such functionality.

+ +

A good use case would be the distribution of read/write operations across multiple disk layers. This is quite relevant for modern multi-disk (often SSD-based), multicore and multithreaded SMP (or even NUMA) systems.

+ +

I already took a look at the MPI-IO v.2 system. It has something similar (especially regarding parallel processing), but it does not provide the dynamic file resizing capability which I propose to introduce.

+ +

I need your opinions on this topic. What drawbacks/shortcomings can arise in trying to implement such a feature? One such drawback could be the odd, irregular length of contiguous file blocks, breaking memory-mapping mechanisms for example. I just want to note that someday such functionality will be implemented, because:

+ +
    +
  1. Data volumes grow rapidly, and so do files.
  +
  2. Parallel processing is already a modern reality, and there are still no other good ways to improve future technology in terms of performance.
  +
+",355008,,355008,,43845.52917,43845.52917,Is it possible to implement insert file operation in modern extent-based filesystems?,,0,13,,,,CC BY-SA 4.0, +403807,1,,,1/14/2020 18:33,,1,120,"

I built an audio processing web app using Rails. The user uploads a song to the website. The song is then decomposed into individual elements and then modified and recombined.

+ +

I am using a an open source command line tool that is being called from the rails controller.

+ +

My problem: it takes around 2 to 3 minutes to do the processing and it consumes a lot of memory. The browser is in a loading state for 2 to 3 minutes, and this is just for 1 request from 1 user. I am using an Amazon EC2 t1.large instance and it is just not enough.

+ +

I am planning to use background processing, but I don't want multiple requests to be processed at the same time.

+ +

I want the first request to take 3 mins. 2nd request to take 3 + 3 mins. 3rd request to take 3 + 3 + 3 mins. So the website doesn't go down.

+ +
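
For what it's worth, the serialized behaviour I'm after (3, then 3 + 3, then 3 + 3 + 3 minutes) falls out naturally from a job queue drained by a single worker. A minimal ActiveJob sketch, with made-up model and class names:

+ +
class ProcessAudioJob < ApplicationJob
+  queue_as :audio  # configure exactly one worker for this queue
+
+  def perform(song_id)
+    song = Song.find(song_id)  # assumes a Song model exists
+    AudioProcessor.run(song)   # hypothetical wrapper around the CLI tool
+  end
+end
+
+# in the controller: enqueue instead of processing inline
+ProcessAudioJob.perform_later(song.id)
+
+ +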

Also, each audio file is around 40 MB. Is it a good idea to use an Amazon S3 bucket, or should I just increase the hard disk space and store audio files on my server?

+ +

The reason I don't want to use an S3 bucket is that I don't want to transfer each file again from my EC2 instance into the bucket, because it will add to the latency.

+",355077,,,,,43844.90069,How to efficiently process CPU intensive tasks on the server in the background,,1,2,,,,CC BY-SA 4.0, +403809,1,,,1/14/2020 19:04,,0,76,"

I am working on converting an existing python based monolith solution to a microservice. The current flow is pretty straight forward:

+ +

Accept XLSX as input -> run some complex algorithms based on the input and a default file (XLSX) -> generate XMLs

+ +

The same application caters to multiple projects. The default files are project-specific, pre-loaded, and not modified very often. These files are put under a project-specific directory manually upon receiving them from the users via email.

+ +

When a request comes in for a particular project, the current application reads both the input and the default file, runs them through the algorithm, and finally creates the XML output. The default files are read intermittently rather than as a whole: a few sheets are read at the time of input validation and other sheets are read at a later point in time.

+ +

As a part of my microservice architecture, I have decided to expose endpoints to upload, update and fetch the default files. Since these default files are rarely changed, I have decided to parse each one as soon as it gets uploaded and store it in a Redis server, to remove the overhead of parsing it during the actual processing, as these files tend to be quite large (~2 GB).

+ +
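
A minimal sketch of the upload-time parsing I describe (Python with openpyxl and redis-py; the key scheme is made up), storing each sheet under its own key so it can be fetched independently later:

+ +
import json
+
+import redis
+from openpyxl import load_workbook
+
+r = redis.Redis()
+
+def cache_default_file(project, path):
+    wb = load_workbook(path, read_only=True)  # streaming read keeps memory low
+    for ws in wb.worksheets:
+        rows = [[cell.value for cell in row] for row in ws.iter_rows()]
+        # default=str so dates and other non-JSON types serialize
+        r.set(f'{project}:{ws.title}', json.dumps(rows, default=str))
+
+ +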

So, I am a bit perplexed about whether reading the default file with all of its sheets at once and storing it in Redis would be the better solution, or whether reading the sheets on an ""as-needed"" basis would be.

+",354878,,,,,43844.88819,Design decision of reading XLSX file at once or intermittently,,1,3,,,,CC BY-SA 4.0, +403816,1,,,1/14/2020 22:53,,2,1736,"

I'm working in a new project and I'm trying to use the Clean Architecture approach with Repository pattern. I'm using .net core 3, C#, Automapper, MediatR and Dapper.

+ +

I have these layers:

+ +

Domain (in the center, the core of all): here I have the business Entities, Aggregations and Value Objects with their validations rules and exceptions;

+ +

Application (it's around the Domain): here I'm using CQRS patterns and I have my Commands, Queries and Interfaces;

+ +

Persistence: here I have the implementations of the repositories interfaces.

+ +

I read that a repository should be responsible for all CRUD operations relating to one table in the database. Given that, I want to know how I should implement the repositories for an ENTITY that is an AGGREGATION ENTITY. Should I create an AGGREGATION REPOSITORY that extracts data from different tables? Or should I have a repository for each table and a SERVICE that creates the AGGREGATION using more than one repository?

+ +
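
To make the question concrete, here is a sketch (all names invented) of the first option: one repository per AGGREGATION root rather than one per table.

+ +
using System;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+
+public class Order                 // the aggregate root
+{
+    public Guid Id { get; set; }
+    public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
+}
+
+public class OrderLine
+{
+    public string Product { get; set; }
+}
+
+public interface IOrderRepository
+{
+    Task<Order> GetByIdAsync(Guid id);  // reads several tables, returns one aggregate
+    Task AddAsync(Order order);         // persists the whole aggregate in one transaction
+}
+
+ +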

Thanks

+",328702,,,,,43845.08472,Clean architecture and Repository pattern,,1,4,,,,CC BY-SA 4.0, +403820,1,,,1/15/2020 1:21,,0,93,"

I have a web application that allows user view our data that front-end in JavaScript fetches the data via a Restful request to back-end, then render it in the application. The size of each data can range from few KB to 500MB where most of them are around 100MB.

+ +

According to the requirement, this data is for viewing only and therefore should not be downloadable by the user.

+ +

From what I have researched, there is no ultimate way to prevent a user from getting the data, only ways to increase the difficulty. So the idea is to make it harder for a user to download the data, at least not easily via the developer console.

+ +

I'm thinking of applying symmetric encryption to the file; AES-128 seems a good choice, but that means a key needs to be hidden in the JavaScript, where it can easily be found by users.

+ +

This leads to the question: for my problem, is symmetric encryption useful? What other drawbacks or security vulnerabilities would it introduce?

+ +

In general, what is a proper way to prevent, or at least make it difficult for, users to download RESTful resources?

+ +
+ +

Edit: +reworded title

+ +

The application is for really small groups of clients, so I'm fairly sure information on how to get at the data won't spread. I know it's an optimistic belief, but if the goal is really just to increase the difficulty a bit, so that users won't directly use resources they downloaded from the developer console, what would be a possible way?

+",347957,,347957,,43845.14167,43845.23056,How to increase difficulty of intercepting resources downloaded from developer console,,2,5,1,,,CC BY-SA 4.0, +403823,1,403843,,1/15/2020 3:14,,5,250,"

I'm not an architect, but am trying to put together a diagram which represents the architecture of the application which I am maintaining.

+ +

I have one question (but welcome any comments about the diagram itself, as I have no training in this discipline).

+ +

I've essentially called the layer, where all my Application Services live and DTOs originate, the Domain.

+ +

I've called the layer which actually interacts with the database the Data Access Layer. That layer also contains the entities which get transformed into DTOs on their way through the application services.

+ +

+ +

Have I misnamed the Data Access Layer?
+Would it be more accurate to call that the Domain?
+Would it be more accurate to call the currently labelled Domain, the Application Layer, Business Layer or Services Layer (or something else)?

+",306852,,209774,,43845.72083,43845.72083,Is My Data Access Layer Really My Domain?,,1,5,,,,CC BY-SA 4.0, +403824,1,,,1/15/2020 3:53,,1,90,"

I am in the process of implementing an API for the OPTIONS request for a pre-flight check on CORS calls.

+ +

The Allowed-Origins host differs between local, test and prod, so I moved it to a dotenv file.

+ +

Now, though, when I create my unit test to validate this, there is an issue.

+ +

Lets say that local has a value of localhost:3000 but test has test.site.com and prod is site.com

+ +

So now the test that verifies the headers

+ +
assertTrue(response.getHeaders()[""Access-Control-Allow-Origin""][0]===""localhost:3000"")
+
+ +

Will only pass in local.

+ +

A couple of ideas I had, and the reasons I think they don't cover all the cases:

+ +

Hard-coding a value per environment, as above, would leave failing tests in the other environments.

+ +

Load the env file and validate against that, but this is useless because I would be confirming that the file equals the file.

+ +

Load the env file for each env and hardcode the expectation in the assert. This could work; the plan is not to have local or prod values in test.
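
+ +

For the third idea, this sketch is roughly what I mean (TypeScript-ish, reusing response and assertTrue from the snippet above; switching on NODE_ENV is an assumption about how the environment is identified):

+ +
const expectedByEnv: Record<string, string> = {
+  local: 'localhost:3000',
+  test: 'test.site.com',
+  prod: 'site.com',
+};
+
+const expected = expectedByEnv[process.env.NODE_ENV ?? 'local'];
+assertTrue(response.getHeaders()['Access-Control-Allow-Origin'][0] === expected);
+
+ +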

+",238259,,,,,43850.66458,Testing Environment Variables Strategies,,2,2,,,,CC BY-SA 4.0, +403828,1,,,1/15/2020 8:36,,0,48,"

I'm now working on an API project where I'm developing new API endpoints. The final result at the end of the life cycle is as follows: a client should receive updated data based on what it sent to the API after UI actions on a website. So there is an API and a UI: the client sends data to the API, gets a custom URL to a specific website as a response, and can then do whatever is necessary in the UI. After doing so, the client clicks one button or another (save/download) and gets a document. But I also need to send the updated document data (the changes) to the client, so I'm thinking of creating a webhook server that would send this data to a webhook listener created by the client.

+ +

Now, I know that it is quite easy to create something that would send this data to the client's webhook listener, but I'm interested in how to do it in a perfect world.

+ +

So I'm thinking that the client should set up its webhook listener URL in my API's UI, where it registered, but I'm not sure how to do it properly.

+ +

E.g. do I need to set up some verification of the URL, or whatever else is considered standard practice for webhook servers?

+ +
+ +
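
One concrete piece I'm considering (a Python sketch using requests; the header name and secret handling are my own assumptions): sign each delivery with a shared secret so the listener can verify it really came from my server.

+ +
import hashlib
+import hmac
+import json
+
+import requests
+
+def deliver(url, secret, payload):
+    # secret is a bytes object shared with the listener
+    body = json.dumps(payload).encode()
+    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
+    requests.post(url, data=body, timeout=5, headers={
+        'Content-Type': 'application/json',
+        'X-Signature': signature,  # listener recomputes and compares
+    })
+
+ +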

I'm developing with Python, PostgreSQL and Flask, and everything is deployed on Heroku. I have some experience in developing webhook listeners, but this is something totally new to me and I want to develop it as well as possible.

+",308670,,45814,,43845.41597,43845.41597,Creating a webhook server / sender,,0,3,,,,CC BY-SA 4.0, +403830,1,403832,,1/15/2020 10:02,,1,419,"

I am relatively new to programming, but I have noticed that when creating strings from variables, a lot of people do something like the below to put a variable into a string. I am curious about the difference, and the reason why this seems to be preferred by the senior developers I know; I'd also like opinions on when to use one over the other.

+ +
var x = ""world"";
+Console.WriteLine(""Hello {0}"",x);
+
+ +

Is there an advantage or specific reason as to why the above is used as opposed to this:

+ +
var x = ""world"";
+Console.WriteLine(""Hello "" + x);
+
+ +

Or this

+ +
var x = ""world"";
+Console.WriteLine($""Hello {x}"");
+
+",355115,,355115,,43878.69792,43878.69792,Is there a best way to insert variables into strings?,,2,4,,,,CC BY-SA 4.0, +403835,1,,,1/15/2020 11:51,,2,257,"

I'm looking for the correct architecture: one that takes into consideration future edge cases, bugs and pitfalls, performance, cloud pricing, ease of building and maintaining, and security.

+ +

I have a serverless app hosted on AWS. It uses Lambdas and several DynamoDB tables for most of the BE logic (managed with aws-amplify).

+ +

I want to add a feature where users can upload CSVs, see them as a table on the app, and create a simple public API to fetch one row, based on ID (no need for more complex queries). Structure of the CSV (columns) varies with each upload.

+ +

Each user will add about 0-10 CSVs; each CSV will contain 3-20 columns and around 1k-100k rows. Adding CSVs takes place about once a week or month; reading a line with the API happens 10k-100k times a day.

+ +

How should I build it? (not limited to Lambdas/DynamoDB)

+ +
+ +

The solutions I had in mind are:

+ +

  1. Create a new table (SQL/document) for each CSV upload, and save the name of the table under user.csvs[].

This way I'll have a huge number of tables. Is that a reasonable solution?

+ +

2. Add all CSV data to a document DB, e.g.:

+ +
user {
+  name: ""john"",
+  csvs: {
+    csv123: {
+      id345: {col1: 'x', col2: 'y'},
+      id678: {...},
+      ...
+    }
+  }
+}
+
+ +

What should I index in this solution for best performance?

+ +

3. Upload the file to a bucket, and create a Lambda that opens it on every request and returns the requested line (this option skips DB indexing).

+ +
+ +
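
One way to make option 2 concrete without a table per CSV is a single table with a composite key, since the only access pattern is fetching one row by ID. A rough sketch with boto3 (the table name and key names are assumptions):

+ +

import boto3
+
+table = boto3.resource(""dynamodb"").Table(""csv_rows"")  # assumed table name
+
+def put_rows(csv_id, rows):
+    # One item per CSV row; the arbitrary columns go into a map attribute.
+    with table.batch_writer() as batch:
+        for row_id, columns in rows.items():
+            batch.put_item(Item={""csv_id"": csv_id,   # partition key
+                                 ""row_id"": row_id,   # sort key
+                                 ""columns"": columns})
+
+def get_row(csv_id, row_id):
+    # The public API only ever needs this single-item lookup.
+    return table.get_item(Key={""csv_id"": csv_id, ""row_id"": row_id}).get(""Item"")
+

+ +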

Hope to learn from your experience.

+",355125,,355125,,43850.58194,43852.64167,"Allow users upload CSV files, and embed the data for a simple API",,2,8,2,,,CC BY-SA 4.0, +403839,1,,,1/15/2020 14:18,,1,163,"

In a microservices architecture where each component does one thing, how do you handle GUI logic? How do you avoid building a front end web application that has a lot of smarts built into it where it knows some of the internal workings of each microservice it calls?

+ +

Let's say we have a website for employees in Acme to use to get office supplies delivered to them. We would have a sign in component... an inventory component... and a delivery component. We build a web interface to wrap all 3 components together. Inevitably in order for us to show what's in stock, in addition to calling the inventory API (that provides basic CRUD functionality to maintain inventory), the web application would need some knowledge of inventory ""stuff"" to be able to display things properly. This means the web developer that is creating the GUI application would need to be in communication with the API devs... who may or may not be in the same time zone etc.

+ +

One way to handle this would be to have domain developers include GUI logic as a part of the REST API.... code on demand. So the main GUI guy who is building the main interface can call methods on the APIs that return HTML or javascript or whatever. But the downside of this method is that the code on demand methods would basically hide some implementation details.

+ +

Interested in hearing your opinions/thoughts. Thanks.

+",78102,,,,,43845.68819,how to build microservices that also have GUIs,,2,2,,,,CC BY-SA 4.0, +403840,1,403842,,1/15/2020 14:19,,2,184,"

I'm a beginner studying interfaces in Java through some quizzes and I came across this question:

+
+

What are Java Interfaces used for?

+
+

I can opt among one of the following three choices:

+
+

A. They're used to describe the API of various classes.

+

B. They're used to avoid having to specify the contract for methods.

+

C. They're used to let real and apparent types differ. You can obtain this difference only by using interfaces.

+
+

I think the right answer is Choice A. Choice B doesn't make sense to me, since Interfaces usually describe methods along with their contract. Choice C doesn't make sense either, but I might be wrong.

+

Which one is the right answer? Thank you!

+",355139,,-1,,43998.41736,43845.62083,What are Java Interfaces used for? (multiple choice question),,1,2,,,,CC BY-SA 4.0, +403844,1,,,1/15/2020 15:08,,5,585,"

I'm trying to deepen my understanding of this; the only thing I know for sure is that Iterator is an interface in Java.

+ +

I've been reading CS literature, for example here and here and looking for similar questions here.

+ +

All I've come up is just a bit confusing to me: I read, for example, that ""abstract datatypes simply summarize names and types of operations (in Java, this means interfaces)"" or ""each ADT corresponds to a class (or Java interface) and the operations on the ADT are the class/interface's public methods"". Some people even confuse ADTs with abstract classes.

+ +

So, may I state that an Iterator is an abstract data type?

+",355139,,155513,,43845.79861,43846.63819,"Is it true that ""A Java Iterator is an Abstract Data Type""?",,3,5,,,,CC BY-SA 4.0, +403849,1,403853,,1/15/2020 16:54,,7,675,"

I've heard of some techniques to optimize code and make it faster:

+ +
    +
  • On one side are clearly relevant optimizations: use of better algorithms, benchmarking, etc.

  • +
  • On the other side are techniques of more doubtful relevance. I don't know whether they are a myth, a reality, or just deprecated usage.

  • +
+ +

My question is about the latter ones. Some examples:

+ +
    +
  • Big function (several thousand lines) to avoid function call overhead
  • +
  • No SRP to avoid Object overhead
  • +
  • Reuse variable as much as possible instead of having scoped variables
  • +
  • ++i/i++
  • +
  • And many other practices.
  • +
+ +

Often those techniques go against a readable, understandable and maintainable code base, so I'd like to know whether they have a well-founded reason to exist.

+ +

A few things to note:

+ +
    +
  • I'm only assuming there may be rules in place, not that these are merely bad practices that took root in the code and in developers' minds.
  • +
  • I'm currently working in C++98 with a correspondingly old compiler, and hopefully in C++14 with a more up-to-date compiler in the near future. I'd like to know if the answer depends on the environment.
  • +
+ +

My question: are those practices myths or reality? In the context of an application with heavy performance criteria (like high-frequency trading software), could those practices supersede ""good development"" practices?

+",293499,,209774,,43845.76181,44102.16458,Relevance of optimization techniques,,8,3,3,,,CC BY-SA 4.0, +403855,1,,,1/15/2020 17:42,,2,182,"

Just to clarify: This is not a rant. I'm genuinely curious and it's a genuine question our of straight curiosity. I want to become a better developer and even little details like this count.

+ +

So, don't get me wrong, this is not a technical question about what a Dictionary<string, string> does. It's a question about when to use it as a developer in a team.

+ +

The reason for this is something that happened today at work. I started working on a project made by another developer. He's still in the team so I can ask for info etc. BUT:

+ +

He uses dictionaries in C# for EVERYTHING. We do payments and he gets a known response from a payment gateway API.

+ +

The fields are known, so if it was me, I'd make an entity/model whatever you want to call it and then assign the values there and then save to the database (that was the short story). Have in mind that this is data that we need saved, but just for historical reasons. We're not actually doing any operations on it.

+ +

It's a big entity (as in, lots of fields). So I tried working on the project and it was impossible of course to get intellisense on it. I needed a single key to return the value to the frontend. Of course, I couldn't find it. It was nowhere in the projects (we have a monolith).

+ +

I had to spend about half an hour going through projects, commits, history, then the database and then I ended up asking him what happened to it, because I couldn't find it.

+ +

It was something really simple like object.VariableInObject.Data[""theKeyIwantFromTheDb""] but actually looking for it was a nightmare. I got frustrated but I asked a very polite question about 'Why do you use Dictionaries so much? Especially if the data is known.'

+ +

His response was 'because I don't believe we should use objects for data we will keep for historical reasons only' (namely in this case history of a person's payments').

+ +

I don't know if that's valid. Is it? Because it was a nightmare to track down the data on a huge solution I hadn't worked on before. If it was in the project and not a dictionary I would have immediately found it.

+ +

So, the question is in the title. Development-wise, in a team where other people use your solution and need to work on it, should you use something like this, which is hard to track down?

+ +

I'm asking because I personally try to make it easier for other developers and it seems like he's experienced (senior while I'm mid) and just doesn't give a...you know. He just does what he pleases (at least that's what it looks like).

+ +

P.S. If that plays a role, the db is Nosql.

+",355160,,1204,,43845.77708,43845.77708,"Should developers do things that defeat strong-typing, kill intellisense and make it difficult for other developers to follow?",,0,15,,43845.77778,,CC BY-SA 4.0, +403862,1,,,1/15/2020 22:47,,1,87,"

I have multiple APIs doing the same thing but the code is copied for each API. The names of fields and formats of certain fields like dates are different between the various APIs, but everything is ultimately the same. I would like to remove duplication of logic and change these APIs to call the service layer.

+ +

I have started removing duplication of some code. Each API has a class which the JSON maps to. This allows for various names and formats in each API. In the controller for each API I convert that class to a unified class with the appropriate formats and names. I then pass this to a unified service layer. The service layer then returns a unified object and that gets mapped to the API specific response object. This seems to be working well.

+ +

The problem I am running into is handling errors. Instead of validating on each individual class that the JSON was mapped to I now validate the single class. When there is an error the expected error output is supposed to have something like:

+ +
{
+    ""errors"": [
+        {
+            ""error"": ""DOB cannot be in the future"",
+            ""field"": ""dob""
+        }
+    ]
+}
+
+ +

The problem is that if one of the APIs has the field dateOfBirth instead of dob, then the field in the error object is not correct.

+ +

I see two solutions for this:

+ +
    +
  1. I could have a process that remaps the error fields back to what each of the individual APIs needs.

  2. I can do the validation on each of the individual APIs instead of on the shared single class.
+ +

I don't particularly like either of these solutions. Is there a better way to handle this?

+",355177,,,,,43845.94931,Removing duplicate code with multiple APIs,,0,4,,,,CC BY-SA 4.0, +403863,1,403865,,1/15/2020 23:49,,0,164,"

I built an async multi-client TCP server for RPC usage. It's working well but I've found it difficult to unit test certain functionality:

+ +
+
    +
  1. Connect 2x clients, is client count 2
  2. +
  3. Connect 1x client, disconnect client, is client count zero
  4. +
+
+ +

I want to test that the server is robust with handling disconnects and multiple connections. The below test fails only due to scheduling.

+ +

Unit Test

+ +
        [TestMethod]
+        public void Start_TwoConnections_ClientsIsTwo()
+        {
+            var handler = new HandlerStub();
+            using (server = new APIServer(handler))
+            using (var client1 = new TcpClient())
+            using (var client2 = new TcpClient())
+            {
+                server.Start();
+                client1.Connect(IPAddress.Loopback, port);
+                client2.Connect(IPAddress.Loopback, port);
+                // await Task.Delay(500); <-- This will fix the problem, but is surely unreliable.
+                Assert.AreEqual(2, server.Clients);
+            }
+        }
+
+ +

Server Snippet

+ +
        public void Start()
+        {
+            // Root try-catch, for unexpected errors
+            try
+            {
+                server = new TcpListener(IPAddress.Loopback, 8352);
+                IsRunning = true;
+                do // Retry loop
+                {
+                    // Start server errors
+                    try
+                    {
+                        server.Start();
+                        var task = Task.Run(AcceptConnections);
+                    }
+                    catch (SocketException ex)
+                    {
+                        Console.WriteLine(string.Format(""Error {0}: Failed to start server."", ex.ErrorCode));
+                    }
+                }
+                while (!server.Server.IsBound && !IsDisposed);
+            }
+            catch (Exception ex)
+            {
+                IsRunning = false;
+                Console.WriteLine(string.Format(""Unexpected Error: {0}"", ex.ToString()));
+                throw; // rethrow, preserving the original stack trace
+            }
+        }
+
+        private async Task AcceptConnections()
+        {
+            try
+            {            
+                // Multi-client listener loop
+                do
+                {
+                    var connection = await AcceptConnection();
+                    connections.Add(connection);
+                }
+                while (!IsDisposed && IsRunning);
+            }
+            catch (SocketException ex)
+            {
+                Console.WriteLine(string.Format(""Error {0}: Server socket error."", ex.ErrorCode));
+                CleanupConnections();
+            }
+        }
+
+ +

How can this code be refactored to improve its testability?

+",309395,,,,,43846.05208,Unit testing async tcp server,<.net>,1,0,,,,CC BY-SA 4.0, +403866,1,403870,,1/16/2020 2:16,,1,326,"

I am trying to draw up a sequence diagram to show how my web client will interact with my backend over a websocket connection.

+ +

I am using a websocket middleware to manage the stream connections.

+ +

What is the best practice for the sequence diagram in this case? Should I include the stream creation in it? Or is that simply showing too much low-level information?

+ +

+",15257,,209774,,43846.32292,43846.32292,Sequence diagram: explicitly show websocket creation?,,1,4,,,,CC BY-SA 4.0, +403867,1,403897,,1/16/2020 6:34,,1,262,"

I have an object:

+ +
const riders = {
+    Dave: {
+        gender: 'male',
+        age: 13,
+    },
+    Nina: {
+        gender: 'female',
+        age: 16,
+    },
+    Mike: {
+        gender: 'male',
+        age: 12,
+    }
+};
+
+ +

Should I make the object name rider(s) plural or singular? I'm asking this, because I know arrays are always to be plural, but this is an object. I don't want anyone to assume it is an array simply because its name is plural. What're the best practices in this regard? I want it to be readable and understandable, in the best way possible. At the very least, it should be consistent.

+ +

I read that collections are always to be plural but, according to this answer and the sources it provides, objects are not considered collections. Would the way in which I employ it define it as a collection?

+",163998,,,,,43846.80625,Should I give an object that holds multiple objects of the same type a plural name?,,3,1,,,,CC BY-SA 4.0, +403872,1,,,1/16/2020 8:30,,3,373,"

Sorry about the vague question, please do suggest different formulations. Anyway here is the kernel of the question:

+ +

How many classes representing an entity/resource (or whatever you want to call it) should you have? Let's take the example of a User object.

+ +

Often I see a UserEntity used at the repository level to map to the DB entity (sometimes autogenerated by your ORM), which maps to a service-level UserDTO, which then maps to a User object to be returned by the API or View layer. This also gets more complicated if you plug in different APIs, where some people opt in to converting to per-API objects, so CRMUser, ShopUser and so on.

+ +

All of those have the same (or similar) fields and no functionality. For each mapping, you need a Converter class and, depending on your religion, an Interface.

+ +

So it gets quite complex in the end.

+ +

Do you have any opinions/guidelines to follow here?

+",355202,,,,,43846.59514,What is optimal number of entity abstraction levels?,,4,0,,,,CC BY-SA 4.0, +403876,1,,,1/16/2020 9:19,,1,139,"

Recently I asked this question: How do you rewrite the code which using generics and functionals in Java 8 and mixing oop and functional programming by using only object-oriented? on Stack Overflow. I couldn't get an answer, but there are some comments, and one of them says that maybe this is an Inner-Platform Effect.

+ +

After that I thought that maybe I hadn't asked the question the way I should have, so I continued to research it, but I could not find any meaningful resource.

+ +

Here again is the code that I'm asking about (the question on Stack Overflow also contains client code):

+ +
public class MyUtils<R, MSGIN, MSGOUT> {
+    public SomeClass<R> getSomething(MSGIN query,
+                                     Function<MSGIN, MSGOUT> queryFunc,
+                                     Function<MSGOUT, List<R>> getResultList) {
+        // some logic here
+        MSGOUT callResult = queryFunc.apply(query);
+        // another logic here
+        buffer = getResultList.apply(callResult);
+        // yet another logic
+
+        // return someThing;
+    }
+    //...
+}
+
+ +

What I was trying to ask is: is this some kind of object-functional programming design pattern, and if it is not, is there a term or name for this technique?

+ +

What is curious to me, and what I'm concentrating on, is that normally, when using only object-oriented programming, you have behaviour-parameterization alternatives (strategy pattern, template method pattern, visitor, to name a few) that rely on dynamic dispatch and polymorphism, while in functional programming there are pattern matching, higher-order functions and composition. In this case it looks like, by utilizing generics on the class and type inference on the higher-order function arguments, it may be implementing some kind of template method or strategy pattern.

+ +

More specifically, a client calls this method passing lambda expressions whose inputs and outputs are inferred as designed, with each parameter that is a higher-order function taking the previous one's output as its input (A, B, C type inference):

+ +
         A query,             //param 1 => A
+Function<A, B> queryFunc,     //param 2 => A -> B
+Function<B, C> getResultList  //param 3 => B -> C
+
+ +

Is this a technique? If it is, is there a name for it, and does it utilize functional programming in Java 8? How can we get the most out of this technique? Can it be applied to more parts of a program when designing in Java while also utilizing functional programming?

+ +
+ +

As suggested in the comments, here are the possible reasons behind this abstraction (if that's the right term), according to my understanding:

+ +
                               ==========================
+consume other service     <-   Consumer Component         ->   consume a REST service to get all results page by page
+                               ==========================
+                                          |
+                                          v
+                    consume SOAP service to get all results page by page
+                                ============================
+                                e.g: SOAPConsumer
+                                ----------------------------
+                                private: offset = 0;
+                                private: pageSize = 100;
+                                private: filterCriteria = new FilterCriteria();
+                                private: buffer = List.emptyList();
+                                public: SOAPConsumer();
+                                -----------------------------
+                                public: consumeAll() {
+                                  conn = connect();
+
+                                  // I think it abstracts this while part. Maybe otherwise they thought that they need to implement that part in other consumers also.
+                                  continue = true;
+                                  while(continue) {   
+                                      resultSet = sendRequest(conn, filterCriteria, offset, pageSize);
+                                      resultSetWithMyInterestedPartOfTheResult = extract(resultSet);
+                                      buffer.addAll(resultSetWithMyInterestedPartOfTheResult);
+                                      lastResult = getLastResult(resultSet); //setNext
+                                      offset = lastResult.getId();//setNext
+                                      if(offset < pageSize) then continue = false;
+                                  }
+                                  return buffer;
+                                }
+                                private: connect();
+                                private: sendRequest();
+                                private: extract();
+                                private: getLastResult()
+                                private: setNext();
+                                ---------------------------------
+
+ +

Please note that, as I wrote in the question, I'm mainly exploring whether this is a common technique when mixing FP with OOP.

+",331518,,331518,,43847.45139,43847.45139,Is designing a generic parameterized class with methods of it accepting higher order functions a functional technique that we can use in Java 8?,,1,9,,,,CC BY-SA 4.0, +403877,1,,,1/16/2020 9:39,,1,356,"

According to standards like ISO 29119 & ASPICE, the left side of the V-Model contains Requirements, Architecture Design and Detailed Design. On the test side, there are Unit/Component Test, Integration Test and Qualification Test. ISO 29119 states that tests shall be performed against WHAT is expected of the test item, and this WHAT shall be described in the test basis. For the Unit/Component Test, the test item is an atomic SW component in isolation and the test basis is the Detailed Design Document of that atomic SW component.

+ +

So, my question is: WHAT shall an atomic SW component do? Is this not the SW component's Requirements? If yes, does this mean that the atomic SW component requirements are written down in the detailed design documents, and that these are what is tested in the Unit test? When the standard says to perform requirements-based tests, is this what it means?

+ +

If we do not write atomic SW component Requirements in the Detailed Design Documents, then why is there traceability between the Unit test specification and the Detailed Design, and not between the Unit test and the Requirements?

+ +

The standards are so confusing, I am lost. Please help.

+",355209,,,,,43847.39167,"Unit Testing against Detailed Design, How to perform requirements based tests ISO 29119? Traceability to Unit Requirements?",,1,1,1,,,CC BY-SA 4.0, +403885,1,403896,,1/16/2020 11:16,,4,156,"

It seems that 5 years ago there was a proposal for an HTTP SEARCH request, but it does not appear to have been adopted and is not widely supported. Are there any factual documents or citations which shed light on the reasoning behind why SEARCH has not been adopted by working groups or standards organizations?

+ +

There is also the W3C 'QUERY' verb. Both seemed to try to address the issue of whether to use a ""POST"" or a ""GET"" for a search request, an issue that our team is trying to battle with now and which seems to be the subject of much debate all over SE and SO.

+ +

I'm curious as to why something like this was never formally adopted. We wondered if it's down to the fact that there is no single solution in regard to query params vs request body and it has to be a judgement call, but I still see no reason not to have a verb with a request body that assures the user of no changes...

+",153841,,308851,,43846.64722,43846.64722,Whatever happened to HTTP SEARCH?,,1,6,,43846.63542,,CC BY-SA 4.0, +403890,1,403900,,1/16/2020 12:44,,2,694,"

I have some entities and some value objects that need to record the moment they were made. Now I read that a value object is a collection of properties with its own set of rules, and two value objects with the same properties are indistinguishable.

+ +

Now, I'm probably going to use the standard C# DateTime data type for this, and I don't really know how it is implemented internally (might just be a Unix timestamp, which as a primitive should be a property, but if they have a solution to the year 2038 issue that involves multiple properties in some relation with each other, it would be more complicated).

+ +

I think that what makes the difference is that although a timestamp has its own set of rules (e.g. ""35th of January"" is not a valid date), I am not going to be enforcing those rules; the language itself will. The date is always created internally; I'm never parsing anything user-submitted into a date.

+ +
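
For what it's worth, the defining trait of a value object is equality by value plus self-enforced invariants, independent of language. A tiny sketch (in Python for brevity; the timezone rule is an invented example, not my actual rule):

+ +

from dataclasses import dataclass
+from datetime import datetime, timezone
+
+@dataclass(frozen=True)   # immutable, compared field by field
+class Timestamp:
+    value: datetime
+
+    def __post_init__(self):
+        # The value object enforces its own rule here; if the language
+        # or library already guarantees it, a wrapper adds little.
+        if self.value.tzinfo is None:
+            raise ValueError(""timestamp must be timezone-aware"")
+
+now = Timestamp(datetime.now(timezone.utc))
+

+ +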

So I am leaning towards a timestamp being a property. But then I started thinking: can I apply that rule to any data type which has its rules defined outside my application? For example, an image file. I am not in charge of having it be structured correctly according to the JPEG specification; the language will do that. I could have restrictions on file size, but similarly an entity could have restrictions on the length of one of its strings and I don't see that being made into a value object for just that reason, though I could be wrong on that.

+ +

And what if there is a bespoke set of rules for a data type, e.g. an invoice, that the company I work with already implemented elsewhere and I am importing as a library? That could be as complicated a data type as it gets.

+ +

So, coherently:

+ +
    +
  • Is a timestamp a value object or a property of one?
  • +
  • What makes the difference between value objects and properties? Is it a locally defined set of rules, or is it something else?
  • +
+ +

And even more concretely; I am choosing between these two models (which I understand are to correspond more or less directly with my actual code):

+ +

Version 1:

+ +

+ +

Version 2:

+ +

+",292603,,292603,,43846.61597,43846.65833,"In domain driven design, is a timestamp a property or a value object?",,3,4,,,,CC BY-SA 4.0, +403910,1,,,1/16/2020 19:19,,0,62,"

I'm building a REST API and want to make sure I follow best practices. I need to expose a method which will return a 'Document' entity by its 'Lot' number.

+ +

It is possible that for a specific 'Lot' there is no 'Document' entity found. Also, the property 'LotId' is not the key for the 'Document' entity; it has its own key called 'DocumentId'. What URL structure should I use:

+ +
http://myUrl/Lots/{lotId}/Documents
+
+ +

or

+ +
http://myUrl/Documents?lotId=123
+
+ +

As it is possible to have no 'Document' attached to a 'Lot', I don't want to return a 404 Not Found if there's no document found; I would return a 200 OK with null.

+",302776,,302776,,43846.83542,43893.61806,REST api - search url,,1,5,,,,CC BY-SA 4.0, +403911,1,403915,,1/16/2020 20:15,,40,7040,"

In my experience, and according to Wikipedia and prior answers, a scripting language is a vague category of languages which are high-level (no manual memory management) and interpreted. Popular examples are Python, Ruby, Perl and Tcl.

+ +

Some scripting languages can be ""embedded"". For example:

+ +
    +
  • Lua is frequently embedded in video game applications.
  • +
  • TCL is embedded in the fossil version control system
  • +
+ +

It is sometimes said that Lua is more easily embedded than Python or that TypeScript is difficult to embed. Similarly, Wren is ""intended for embedding in applications"".

+ +

What factors make a language embeddable? Is it solely the size and speed of the base interpreter or do other factors come into play?

+",98711,,98711,,43847.61319,43848.99792,"What makes a scripting language ""embeddable""?",,5,1,9,,,CC BY-SA 4.0, +403912,1,403930,,1/16/2020 20:24,,1,82,"

Background:

+ +

I'm writing a clinical trials simulator. The user defines future trial options, eg a trial with 100 placebo patients, 200 treatment patients, ""optimistic"" outcome scenario, etc. There can be 1-20,000 of such options. For each option 10,000 - 100,000 trial results are simulated. The simulated data are later used for analytics, for each assumed option.

+ +

Implementation:

+ +

It's an Angular/Electron desktop app (in future, there'll be a web extension). The front-end sends a REST API request with trial options. The Python back-end performs the simulations in parallel. It stores the results in a PostgreSQL database, with a row per option. The database is either on the user laptop or on external storage. There is a single user working on a simulation.

+ +

Performance:

+ +

The simulations can take a long time, and their results need a lot of memory. Therefore, once the user adds trial options, I'd only simulate for these additions rather than for every option. I also allow aborting a running simulation.

+ +

Data consistency:

+ +

What the user assumes the inputs (trial options) to be must coincide with the inputs to the latest saved simulations (and to subsequent analytics). Therefore I want to always notify the user about any difference between the front-end and the database, and analytics would be disabled while there is a difference.

+ +

Question:

+ +

How could I ensure this consistency? For example, the front-end could track three sets of trial options: (1) for the finished simulations; (2) for simulations in progress and (3) for current user inputs without started simulations. But this seems fragile. Also, I'm not comfortable with using the front-end, rather than the back-end and database as the source of truth. Does this front-end approach make sense, or should I go with the back-end? Or what else would work better?

+ +

Note: Architecting a predictive modeling software discusses a similar software, although with stable user inputs and a focus on performance. Besides this, I haven't found much relevant info on SE.

+",294434,,,,,43847.37292,How to keep user inputs consistent with assumed inputs to slow computations at the backend?,,1,0,,,,CC BY-SA 4.0, +403923,1,403925,,1/17/2020 6:23,,3,122,"

I have a general question about design patterns for an enterprise application. I have read a lot about it, but it's actually hard to find an answer, because most of what you find is rather about how to design a data warehouse (DW) or how to design pipelines, ETL and so on. I'm stuck on a more top-level question.

+ +

Setup

+ +

I have an IoT-based business model, basically like this:

+ +
  • Project
    • Location
    • Customer
    • [...]
    • DataSource
      • Device
        • ValueType
          • Data (not included in the relational database)
+ +

It's currently persisted in a relational database, where we use a lot of features like spatial types, hierarchyids, JSON types and more that are not available in a DW.

+ +

Effort taken

+ +

We have a very well-designed data pipeline to handle the ETL process. It works great and a lot of data is coming in.

+ +

For storing mass data, I have now started to design a data warehouse model (snowflake schema) to allow effective storage in a DW.

+ +

Looks currently like this:

+ +

+ +

Question

+ +

My actual question is more generally about where to put what data. I currently have the idea of keeping the relational database as it is and creating a separate DW with the schema I shared. Our business logic (API service) would then need to receive data from both storages to give valid results to, for example, a web application. From a data science perspective, you can use the DW for ML, BI and other analytics.

+ +

Question:

+ +
  1. Is it really common practice to have storages side by side like this?
  2. Do I need to also store device data somehow inside the relational database?
  3. What happens (with separate storages) to the loosely related DIM tables inside the warehouse when an entity like a location changes inside the relational database?
  4. Am I still too big a dummy and should I read more? (Then ignore 1-3.) ;)
+ +

Thanks for reading!

+",355284,,,,,43848.4125,Enterprise application warehousing and relational database,,2,0,1,,,CC BY-SA 4.0, +403931,1,403959,,1/17/2020 9:26,,9,385,"

Java 7+ allows the use of underscores in numeric literals; they do not affect the value of the literal, yet are useful for grouping. Examples from the Java 7 documentation, entitled ""Underscores in Numeric Literals"":

+ +
long creditCardNumber = 1234_5678_9012_3456L;
+long socialSecurityNumber = 999_99_9999L;
+float pi =  3.14_15F;
+long hexBytes = 0xFF_EC_DE_5E;
+long hexWords = 0xCAFE_BABE;
+long maxLong = 0x7fff_ffff_ffff_ffffL;
+byte nybbles = 0b0010_0101;
+long bytes = 0b11010010_01101001_10010100_10010010;
+
+ +
    +
  • here is the doc for Java 8; I haven't found anything like this for later versions;
  • +
  • here is the relevant section in the Java Language Specification v. 13.
  • +
+ +

These days, this feature can also be found in Python and others.
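
+ +

For instance, Python has supported this since version 3.6 (PEP 515):

+ +

# Python 3.6+ (PEP 515): underscores in numeric literals are ignored
+credit_card_number = 1234_5678_9012_3456
+nybbles = 0b_0010_0101
+assert 1_000_000 == 1000000
+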

+ +

Which (if any) languages prior to Java 7 (released July 2011) had this particular feature?

+",109091,,209774,,43847.98194,43849.63472,Which was the first language to allow underscore in numeric literals?,,1,17,1,,,CC BY-SA 4.0, +403932,1,403961,,1/17/2020 10:08,,0,95,"

I am making a game to help children learn Urdu (Urdu Boli = Urdu Language).

+ +

+ +

This is the high-level context diagram of the game. Considering it is an RPG, can any additions be made to the diagram?

+",355290,,177980,,43847.59236,43848.00069,Refining DFD of an RPG,,1,5,,,,CC BY-SA 4.0, +403933,1,,,1/17/2020 10:49,,1,95,"

Say I have a C++ function

+ +
/**
+ * @param path If empty, the system default is used
+ */
+void foo(const std::string& path);
+
+ +

And in my implementation I have a default handling for empty paths

+ +
void foo(const std::string& path) {
+    if (path.empty()) {
+        // Use some default
+    } else {
+        // Use the given path
+    }
+}
+
+ +

However, the if-else blocks are largely the same, so I would either write

+ +

A

+ +
void foo(const std::string& path_) {
+    const std::string& path = path_.empty() ? ""some default"" : path_;
+    // Use the given path
+}
+
+ +

B

+ +

or change the function definition altogether:

+ +
void foo(std::string path) {
+    if (path.empty()) {
+        path = ""some default"";
+    }
+    // Use the given path
+}
+
+ +

Now I am wondering what is advised in terms of style in such cases:

+ +
    +
  • A seems a little odd at first because I explicitly made the parameter const and now start to somehow ""rewrite"" it. Would this code look like overkill?
  • +
  • B seems a little odd because then, this function would have a different signature just because of some implementation details. It would stand out from the other functions which still use const&
  • +
+",313504,,,,,43847.53125,const function parameters and default behavior,,2,2,1,,,CC BY-SA 4.0, +403942,1,403947,,1/17/2020 14:19,,6,294,"

Consider this snippet:

+ +
class Foo
+{
+    int m_fileDescriptor;
+public:
+    Bar transformIntoBar()
+    {
+        Bar bar(m_fileDescriptor);
+        m_fileDescriptor = -1;
+        return bar;
+    }
+};
+
+ +

As you can see, once Foo ""transforms"" into Bar, its m_fileDescriptor becomes -1, which renders the entire object invalid.

+ +

The problem is that users of the class have no way of telling that this is going to happen.

+ +

One solution would be to declare a friend make_bar() function

+ +
friend Bar make_bar(Foo f);
+
+ +

and make Foo move-only (which is supposed to happen anyway; I skipped it in the example for brevity). This way, the user has to be aware of the fact that their Foo object is going to be moved from and will therefore become useless.

+ +

There's one serious thing speaking against this solution, however. If there are going to be more methods like this one (e.g. make_baz(Foo f), etc.), I'd have to create an entire API of friend helper functions, which is clearly not desirable.

+ +

My question is - is there anything better I could do to scream out loud to the users of my class that once you call it, the object becomes useless?

+",341661,,,,,43847.73819,How to explicitly inform users of the class that calling a method will invalidate the object it was called upon?,,4,7,,,,CC BY-SA 4.0, +403946,1,,,1/17/2020 15:57,,4,158,"

I am working with/writing a good amount of code in python using pandas dataframes. One thing I'm really struggling with is how to enforce a ""schema"" of sorts or make it apparent what data fields are inside the dataframe. For example

+ +

say I have a dataframe df with the following columns

+ +
customer_id | order_id | order_amount | order_date | order_time |
+
+ +

Now I have some function, get_average_order_amount_per_customer, which will just take the average of the order_amount column per customer, like a group-by:

+ +
def get_average_order_amount_per_customer(df):
+    df = df.groupby(['customer_id']).mean()
+    return pd.DataFrame(df['order_amount'])
+
+ +

Now when I look at this function a few weeks from now, I have no idea what is inside this dataframe other than customer_id and order_amount. I would need to go look at the preprocessing steps that use that DF and hope to find other functions that use order_id, order_date, order_time. This sometimes requires me to trace the processing/usage all the way back to the file/database schemas where it originated. What I would really love is if the dataframe were strongly typed or had some schema that was visible in the code without printing it out and checking logs, so I could see what columns it has, rename them if needed, or add a field with a default value like I would with a class.

+ +

Like in a class I could just make an Order object and put the fields that I want in there, and then I can just check the Order class file and see what fields are available for use.

+ +

I can't get rid of dataframes altogether, because some of the code relies on dataframes as inputs (e.g. some machine learning libraries like scikit-learn) and for doing some visualization with the dataframes.

+ +

Using the Python typing library, I don't think I can name a schema that is inside a dataframe.

+ +
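
One lightweight pattern is to make the expected columns an explicit, checkable constant next to the functions that need them, for instance via a small decorator; this is a sketch of the idea, not a full solution:

+ +

import functools
+import pandas as pd
+
+ORDER_COLUMNS = [""customer_id"", ""order_id"", ""order_amount"", ""order_date"", ""order_time""]
+
+def expects_columns(columns):
+    # Documents the implicit ""schema"" and fails fast when it is violated.
+    def decorator(func):
+        @functools.wraps(func)
+        def wrapper(df, *args, **kwargs):
+            missing = set(columns) - set(df.columns)
+            if missing:
+                raise ValueError(f""missing columns: {missing}"")
+            return func(df, *args, **kwargs)
+        return wrapper
+    return decorator
+
+@expects_columns(ORDER_COLUMNS)
+def get_average_order_amount_per_customer(df):  # body as in the question
+    df = df.groupby(['customer_id']).mean()
+    return pd.DataFrame(df['order_amount'])
+

Libraries such as pandera also exist for exactly this kind of dataframe schema validation, if a hand-rolled check is not enough.

+ +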

So is there any type of design pattern or technique I could follow which would allow me to overcome this hurdle of ambiguity in the dataframe contents?

+",324504,,324504,,43847.66875,43859.37569,Designing interpretable/maintainable python code that use pandas DataFrames?,,3,0,,,,CC BY-SA 4.0, +403948,1,,,1/17/2020 16:21,,2,65,"

I have run into a situation in React where I want to run a function F in a useEffect hook whenever a value V1 from the store changes. F uses another value from the store V2. I only want to run F when V1 changes, but to have access to the latest value of V2. This means I wouldn't want to include V2 in the list of dependencies for the useEffect.

+ +

Below is my current solution, but I wonder if there is a more accepted way of achieving this.

+ +
const v1 = useSelector(state => state.v1);
+const v2 = useSelector(state => state.v2);
+
+const v2_ref = useRef(v2);
+useEffect(() => {v2_ref.current = v2}, [v2]);
+
+useEffect(() => {
+  v1 ? doSomething(v2_ref.current) : doSomethingElse();
+}, [v1]);
+
+",340281,,,,,43950.41806,React Hooks: using state in useEffect without depending on it,,0,2,,,,CC BY-SA 4.0, +403949,1,403951,,1/17/2020 17:12,,0,152,"

I've noticed that on web and mobile apps, when scrolling down to the bottom of a list with thousands of entries, it reaches the bottom instantly but appears to be scrolling through every entry.

+ +

How is this visual effect achieved? What is the common way to implement this simulation of fast-scroll?

+",173899,,209774,,43848.43125,43848.43125,How is fast-scroll simulated?,,2,8,,,,CC BY-SA 4.0, +403953,1,403954,,1/17/2020 19:05,,-1,208,"

I'm developing a SaaS application where I'm required to keep track of and publish every change in a changelog. I've started to follow a Semantic Versioning approach and I'm also using Continuous Delivery.

+ +

Because I use nvie's branching model, let's say:

+ +
    +
  1. I have version 1.0 in master.
  2. +
  3. develop and master are the same.
  4. +
  5. from develop, I fix a bug
  6. +
  7. merge it to master and publish to production.
  8. +
+ +

Now I have version 1.0.1 (according to SemVer) in master. The question is, should I tag it?

+ +

If I do, what's happening is that I have 1.0.1, 1.0.2, 1.0.3,... and every single merge in master tagged. Sometimes more than one a day.

+ +

It seems like SemVer and CD are not compatible, are they? Or is it OK to have (soon) hundreds of tags, one for every PATCH release?

+",169600,,,,,43860.90556,Git Tagging for SaaS application with CD and SemVer,,2,2,,,,CC BY-SA 4.0, +403955,1,,,1/17/2020 20:23,,3,527,"

I have a particular problem addressing the PostgreSQL advisory locking functions using the bigint variants.

+ +

Basically I want to create a 64 bit bigint value from a text type obtained with the PostgreSQL md5() function from an arbitrary text_input_context input.

+ +

The idea is to emulate/simulate table row lock behavior1, where the text context is built from the full table name (schema + table name) and the table's key field values for a particularly selected row. It might be acceptable in our application to lock even more rows due to collisions of these hash values, as long as at least that row is protected from updates by other sessions. The lock/unlock would always be straightforward and sequential, to protect concurrent accesses of a specific table by design.

+ +

Originally the md5() function returns a 128 bit value represented as a hexadecimal text value of 32 bytes.

+ +

I want to convert this to a bigint value, which I can use with the PostgreSQL advisory lock functions.

+ +

When doing some research I found a way to just take the first 16 hex characters (64 bits) of the md5() function's text output and convert them to a bigint value that can be passed to the PostgreSQL advisory lock functions2:

+ +
SELECT pg_catalog.pg_advisory_lock(
+    ('x' || pg_catalog.left(pg_catalog.md5(text_input_context), 16))::bit(64)::bigint);
+
+ +

I am well aware that using the md5() algorithm already loses information about the text_input_context, though I want to keep that loss to a minimum, and even more is lost when using only the first half of the md5 hash value.

+ +

My question is: is that just too naive an approach, or are there better ways to compute a 64 bit value from a 128 bit hash value?

+ +

When researching, I found the idea of additionally XOR-ing the lower half of the hash with the upper half. But wouldn't that be even worse in terms of distribution?

+ +
+ +
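
To make the two candidates concrete, here is a small sketch in Python of what each mapping computes (hashlib only; the signed-range wrap mirrors how a PostgreSQL bigint would represent the value):

+ +

import hashlib
+
+def to_signed64(value):
+    # PostgreSQL bigint is a signed 64-bit integer
+    return value - (1 << 64) if value >= (1 << 63) else value
+
+def md5_truncated(text):
+    # Candidate 1: keep only the first 8 bytes (16 hex chars) of the digest
+    digest = hashlib.md5(text.encode(""utf-8"")).digest()
+    return to_signed64(int.from_bytes(digest[:8], ""big""))
+
+def md5_folded(text):
+    # Candidate 2: XOR the two 8-byte halves of the 128-bit digest
+    digest = hashlib.md5(text.encode(""utf-8"")).digest()
+    lo = int.from_bytes(digest[:8], ""big"")
+    hi = int.from_bytes(digest[8:], ""big"")
+    return to_signed64(lo ^ hi)
+

This also makes it easy to empirically compare collision rates of the two mappings over sample key sets.

+ +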

Regarding some comments advising to use SELECT ... FOR UPDATE, SELECT FOR UPDATE NOWAIT or SELECT FOR UPDATE SKIP LOCKED, please note that this won't help to solve my specific problem.

+ +
    +
  1. The SELECT commands are issued internally by a DB access component, and I have little to no chance of changing how the component issues these statements against the database.

  2. What I need is to indicate in the GUI that a particular table row is locked, but still show all the information in the form.
+ +
+ +

P.S.: Maybe this question belongs on SE Database Administrators instead, but in the end it's more about the algorithmic approach, IMO.

+ +

PPS.: In case that matters anyhow, the applications are written in Delphi, using the Devart UniDAC components.

+ +
+ +

1) The problem occurs in the context where we are trying to replace an ISAM (flat file) database system (ADS) with PostgreSQL. The row locking in the original system is implicit: an application process (session) holds a cursor to a table row for updating.

+ +

2) Of course, the text context value would be passed as a prepared query parameter, so I am aware of the dangers of SQL injection and other unexpected syntax issues.

+",83178,,83178,,43849.43889,44121.25417,Using PostgreSQL MD5 hash to calculate a 64 bit hash value for advisory lock functions?,,1,12,,,,CC BY-SA 4.0, +403962,1,403965,,1/18/2020 1:23,,2,360,"

I am trying to identify the pros and cons of two approaches to create an object to return from my generic API. I am thinking the first approach I am sketching out has the advantage of being easier to understand by offshore developers, while the second approach lends itself better to concurrency or more complicated logic on the client side.

+ +

Example object model:

+ +
class TopLevelResponse {
+    String field1;
+    String field2;
+    MidLevelResponse field3;
+    ResponseMisc[] field4;
+}
+
+class MidLevelResponse {
+    String field1;
+    BottomLevelResponse field2;
+}
+
+class BottomLevelResponse {
+    String field1;
+    String[] field2;
+}
+
+class ResponseMisc {
+    String field1;
+    String field2;
+}
+
+ +

First top-down case where each branching object is created whenever a child field is initialized:

+ +
class MyResponseInitializer {
+    TopLevelResponse rsp;
+    TopLevelResponse getTopLevelResponse() {
+        if (Objects.isNull(rsp)) rsp = new TopLevelResponse();
+        return rsp;
+    }
+    MidLevelResponse getMidLevelResponse() {
+       if (Objects.isNull(getTopLevelResponse().getMidLevelResponse()))
+           getTopLevelResponse().setMidLevelResponse(new MidLevelResponse());
+       return getTopLevelResponse().getMidLevelResponse();
+    }
+    ...
+    void setField1(String val) {
+        getTopLevelResponse().setField1(val);
+    }
+    void setField2(String val) ...
+    void setMidLevelField1(String val) ...
+    ...
+} 
+
+ +

And on client side:

+ +
...
+MyResponseInitializer rspInit = new MyResponseInitializer();
+rspInit.setField1(""foo"");
+rspInit.setField2(""bar"");
+rspInit.setMidLevelField1(""baz"");
+...
+TopLevelResponse rsp = rspInit.getTopLevelResponse();
+
+ +

Compare to a bottom-up approach like this, where initialization is performed by a series of builders:

+ +
static class BottomLevelResponseBuilder {
+     ...
+     BottomLevelResponse build()...
+}
+static class MidLevelResponseBuilder { ... }
+static class TopLevelResponseBuilder { ... }
+
+ +

Which on client side looks like this:

+ +
TopLevelResponse rsp = TopLevelResponseBuilder.newInstance()
+                           .field1(""foo"")
+                           .field2(""bar"")                                             
+                           .field3(MidLevelResponseBuilder.newInstance()
+                                .field1(""baz"")...
+                       ...
+                       .build();
+
+ +

What are the pros vs cons of either approach here? Am I right to think that the first approach is better from a defensive programming perspective, since it reduces the risk of rogue implementations by offshore devs? Or that using concurrency to create each branch on a separate thread can pay off especially with more nesting involved in the data model if I use approach 2?

+",,user313675,,,,43848.96458,Bottom-up vs top-down object building in API,,2,0,,,,CC BY-SA 4.0, +403964,1,,,1/18/2020 2:24,,1,377,"

I am working on converting an existing Python-based monolith solution to microservices. The current flow is pretty straightforward:

+ +

Accept XLSX as input -> Run some complex algorithms based on input -> generate XMLs

+ +

I have created two services using Flask Restplus:

+ +
    +
  • storage - this deals with storing/downloading/deleting any input or output files. When a user calls /upload/ with an xlsx file, this service will store the given file and return the download_url to the user as a response.
  • +
  • gen - the user calls this service by passing the download_url in a request; the service downloads the file, processes it, generates the output file, and uploads the output file to storage by calling a storage service endpoint.
  • +
+ +

However, at times, we have seen that the input files we receive are quite large (~2 GB) and take time to upload. Now we are worried that if multiple users upload huge files concurrently, our system will go for a toss. We have made the gen service async using Celery + RabbitMQ, but I am not sure what needs to be done for the file upload part.
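
+ +

For context, a minimal sketch of the kind of upload endpoint involved (route, paths and field names are placeholders, not the actual code):

+ +

from flask import Flask, request, jsonify
+from werkzeug.utils import secure_filename
+
+app = Flask(__name__)
+
+@app.route(""/upload/"", methods=[""POST""])
+def upload():
+    f = request.files[""file""]
+    # The worker handling this request is tied up for the whole transfer,
+    # which is what hurts when several ~2 GB uploads arrive concurrently.
+    name = secure_filename(f.filename)
+    f.save(""/data/"" + name)
+    return jsonify({""download_url"": ""/download/"" + name})
+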

+",354878,,,,,43848.52361,Flask restful file upload asynchronously?,,1,2,,,,CC BY-SA 4.0, +403969,1,,,1/18/2020 11:32,,1,58,"

I have created a real-time, web-based GPS tracking system using Java Servlets as the backend, while the front end uses JavaScript with AJAX requests and WebSockets. (Both the front end and backend operate together as one web application.)

+ +

Basically, the system is a web interface that shows vehicles on Google Maps in real time, and users can also build reports of past events based on input data like start timestamp, end timestamp, vehicle ID and a bit more calibration stuff.

+ +

So currently everything is on the same backend - the realtime logic and the logic of the reports (currently there are 50+ reports the user can choose from)

+ +

Due to the complexity of the reports and realtime data serialization, most of the database data associated with users and vehicles are loaded in memory.

+ +

However, from time to time (twice a day) improvements need to be made or bugs patched (for example, some report logic needs to be changed a bit, the front-end UI needs to be fixed, or the way emails are sent needs to be modified for some clients), so I have to kill the users' sessions and deploy the fix, and all this introduces downtime.

+ +

That's why I am considering dividing the system into multiple services, each of them on an independent server:

+ +
One independent service/server for the real-time data fetching and serialization.
+One independent service/server for reverse geocoding.
+One independent service/server for calculating and caching whether the current latitude/longitude is inside an urban area or on a road with a particular speed limit.
+One independent service/server for daily reports.
+One independent service/server for event-based reports
+One independent service/server for email and other notifications.
+One independent service/server for tachograph data downloading and uploading to FTP.
+
+ +

and so on.

+ +

This way if some of the services need to be patched, fixed or upgraded I won't need to stop everything and kill the users' session.

+ +

(Please bear in mind that there are daily requirements to change or introduce logic in some of the services, due to thousands of private clients who pull the strings and dictate what should be done.)

+ +

However, I have some objects like CustomDateTime, Vehicle, MarkedArea, DynamicPOI, ClosedGeoCurve, FuelFlowMeter, Canbus, GPSDeviceFirmware, UserSerializationPermissions, DistanceCalculationMethod and so on that are heavily used across the system. So if I divide the system into services, each hosted on an independent physical server machine, I will need to have these Java objects on each of them, and each time I introduce a new field, method and/or business logic to one of the classes, I will have to deploy it again to each of the servers.

+ +

That's why I am still investigating what would be the best approach to handle this division, and I will be more than happy if some of you can show me the right direction using a JEE architecture.

+ +

Thank you in advance.

+",127436,,,,,43855.97708,JEE Gps Tracking System Design,,1,1,1,,,CC BY-SA 4.0, +403971,1,,,1/18/2020 12:25,,2,40,"

I'm working on a project that we will deploy to Google Cloud Platform. The services we want to use include cloud run, SQL, and cloud storage.

+ +

We will use several deployments for different phases of development; namely development, staging, and production.

+ +

I'm worried about security mostly regarding who can access what. Obviously, we want our developers to have some access to the development environment so that they may tweak things, create temporary databases, and perform other tasks that may be necessary to help in development. However, the production environment is naturally more protected and not everyone should have access to its settings.

+ +

One solution I thought of was creating a separate project for each environment. However, I believe this might not be the correct course of action. What is your suggestion for this? Basically what we would like is separate environments with their own firewall and access controls.

+ +

Thanks!

+",191555,,,,,43848.55903,Separating different deployment stages into different projects on Google Cloud Platform,,1,0,,,,CC BY-SA 4.0, +403974,1,,,1/18/2020 14:54,,2,155,"

I'm developing a client library. I'd like to provide both Sync and Async interfaces to endpoints on a server. They would be rather easy to implement as completely separate entities, but I would like to follow DRY principles and other best practices. How could one implement such clients, and are there any established patterns of achieving this behavior? Endpoints need to do three things: pre-process arguments to request attributes, send the request and post-process request contents.

+ +

An example of both implementations:

+ +
def get_resource(id, parameter):
+    """"""
+    Docstring with explanations and parameter descriptions.
+    """"""
+    params = pre_process(id, parameter)
+    response = sync_request(params)
+    return post_process(response)
+
+async def get_resource(id, parameter):
+    """"""
+    Docstring.
+    """"""
+    params = pre_process(id, parameter)
+    response = await async_request(params)
+    return post_process(response)
+
+ +

This is of course a simplification. Pre-processing often involves logic which I wouldn't want to duplicate, along with the call signature and docstring. So there could be a number of solutions:

+ +

Duplicate everything

+ +

I don't consider this a good option. It would require maintaining two pieces of the same logic.

+ +

Refactor logic into functions

+ +

This would mean that pre_process and post_process would quite literally be functions, but the overall call would be implemented twice in two classes or modules.

+ +

Refactor into one function

+ +

Suppose we had a way of knowing whether we are in async or sync mode. Synchronous functions can then return ""early"" to provide async functionality through a truly asynchronous function that duplicates the latter part of the call.

+ +
def get_resource(id, parameter):
+    """"""
+    Docstring with explanations and parameter descriptions.
+    """"""
+    params = pre_process(id, parameter)
+
+    if async_mode:
+        return async_get_resource(params)
+    else:
+        response = sync_request(params)
+        return post_process(response)
+
+async def async_get_resource(params):
+    response = await async_request(params)
+    return post_process(response)
+
+ +

I don't know if it is an advantage that the same function can now be both sync and async. It could even be confusing to use. Depends on the mechanism to provide async_mode I guess.

+ +

Refactor into one function with a decorator

+ +

This is taking it a bit far, but could work. We can decorate a synchronous function that returns the request parameters. In that decorator we can then decide whether async should be used. If so, return an awaitable, if not, use synchronous calls. A callable function for post-processing the request would be passed into the decorator.

+ +
import functools
+from typing import Callable
+
+def decorate_call(post_process: Callable):
+    def decorator(function):
+        async def async_send(params):
+            response = await async_request(params)
+            return post_process(response)
+
+        # functools.wraps keeps the wrapped function's name and docstring
+        @functools.wraps(function)
+        def wrapper(*args, **kwargs):
+            params = function(*args, **kwargs)
+            if async_mode:
+                return async_send(params)
+            else:
+                response = sync_request(params)
+                return post_process(response)
+
+        return wrapper
+    return decorator
+
+@decorate_call(post_process)
+def get_resource(id, parameter):
+    """"""
+    Docstring with explanations and parameter descriptions.
+    """"""
+    return pre_process(id, parameter)
+
+ +
+ +
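
For illustration, this is how the single decorated function might be used, assuming async_mode is a module-level flag (just a sketch):

+ +
# async_mode = False: the call behaves synchronously
+resource = get_resource(42, 'foo')
+
+# async_mode = True: the call returns an awaitable (inside a coroutine)
+resource = await get_resource(42, 'foo')
+
+ +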

Which of these would be the best solution? Or are there some other, more appropriate methods?

+",301321,,301321,,43848.92569,44119.00486,Implementing both Sync and Async clients with DRY,,1,11,,,,CC BY-SA 4.0, +403977,1,403978,,1/18/2020 17:34,,1,68,"

In Scheme, the general form of a procedure definition is:

+ +
+

(define (<name> <parameters>) <body>)

+
+ +

where <body> accepts a sequence of expressions, allowing procedure definitions like this:

+ +
> (define (f) 1 2 3)
+> (f)
+3
+
+ +

Likewise, the general form of a conditional expression is:

+ +
+

(cond (<predicate> <consequent>) (<predicate> <consequent>) … (<predicate> <consequent>))

+
+ +

where <consequent> in each clause accepts a sequence of expressions, allowing conditional expressions like this:

+ +
> (cond (#t 1 2 3))
+3
+
+ +

But why can’t I use define in a clause’s consequent of a conditional expression, as I can in the body of a procedure definition?

+ +

Compare:

+ +
> (define (f) (define x 1) (define y 1) (define z 1) (+ x y z))
+> (f)
+3
+
+ +

with:

+ +
> (cond (#t (define x 1) (define y 1) (define z 1) (+ x y z)))
+ERROR on line 1: unexpected define: (define x 1)
+
+ +
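
For contrast, wrapping the consequent in a let form, which introduces a new body, is accepted:

+ +
> (cond (#t (let () (define x 1) (define y 1) (define z 1) (+ x y z))))
+3
+
+ +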

Note. — I am using the Chibi-Scheme 0.8.0 implementation on MacOS 10.15.2.

+",184279,,184279,,43854.32431,43854.32431,Using define in a conditional expression in Scheme,,1,0,,,,CC BY-SA 4.0, +403982,1,,,1/18/2020 22:03,,-3,90,"

I am making a Java and Spring web app that scrapes data from a website and then publishes it through an API. Some of the raw scraped data is in the form of Set<SomeObject>, which I then convert to a Set<ConvertedObject>. The thing is, there is a field with a repeating value across many SomeObject instances, and I want to avoid converting it more than once; it isn't an expensive operation, but still.

+ +

So far I have come up with two solutions:

+ +
    +
  • Extract that field from SomeObject into a Map<String, Set<SomeObject>>, which is easy to build from the website I am scraping (about 5 lines of code).
  • +
  • Keep the field inside SomeObject and make an internal service with a cache to convert that field, which is also easy to do with Spring (see the sketch below).
  • +
+ +

The possible values are not that many; 7 values cover 95%+ of the entities, but sometimes there are other values besides those, which I am also able to parse.

+ +

I like the second solution in that the code becomes cleaner (no complicated Map-of-Set structure), but it is also a more sophisticated one that, in my opinion, isn't really necessary.

+ +
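
To clarify the second option, a minimal sketch using Spring's @Cacheable (the FieldConverter class, the cache name and doExpensiveConversion are placeholders):

+ +
import org.springframework.cache.annotation.Cacheable;
+import org.springframework.stereotype.Service;
+
+@Service
+public class FieldConverter {
+
+    // Repeated raw values are converted only once thanks to the cache
+    @Cacheable(""convertedFields"")
+    public String convert(String rawValue) {
+        return doExpensiveConversion(rawValue);
+    }
+
+    private String doExpensiveConversion(String rawValue) {
+        // placeholder for the real conversion logic
+        return rawValue.trim();
+    }
+}
+
+ +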

Which one do you think is better?

+ +

To better clarify: I am most concerned with clean code rather than efficiency. Plus, I am also interested in a general answer regarding Spring web application design. Should I use components everywhere I can, or are vanilla solutions sometimes better?

+",353810,,353810,,43849.16944,43850.84444,Should vanilla solutions be avoided in Spring web applications?,,1,5,,,,CC BY-SA 4.0, +403985,1,403989,,1/19/2020 0:19,,6,206,"

In my program I've included the header for an external library (GLFW) in my main source file. Everything I need that library for can be handled there, with the exception of two sneaky little functions that I need in a separate class.

+ +
#include <GLFW/glfw3.h>
+#include ""graphics/Display.hpp""
+
+int main()
+{
+    glfwInit();
+    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
+    glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);
+
+    GLFWwindow* window = glfwCreateWindow(800, 600, ""Adventum"", nullptr, nullptr);
+
+    uint32_t extCount;
+    const char** extensions = glfwGetRequiredInstanceExtensions(&extCount);
+
+    Display* display = new Display();
+
+    //@formatter:off
+    auto terminate = [&](){glfwSetWindowShouldClose(window, true);};
+    auto surfaceCreation = [&](VkSurfaceKHR* surface){return (glfwCreateWindowSurface(display->instance, window, nullptr, surface));};
+    //@formatter:on
+
+    display->setTerminateFunction(terminate);
+    display->setSurfaceCreationFunction(surfaceCreation);
+    display->create(extensions, extCount);
+
+    while (!glfwWindowShouldClose(window))
+    {
+        glfwPollEvents();
+    }
+
+    display->destroy();
+    delete display;
+
+    glfwDestroyWindow(window);
+    glfwTerminate();
+}
+
+ +

This is my main function. The two functions I need are glfwSetWindowShouldClose and glfwCreateWindowSurface; both require references to variables living in main, and including the GLFW header in both classes would be an additional hurdle. As you can see, I solved this by creating two lambdas (terminate and surfaceCreation) that contain the external function calls.

+ +
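
For context, a simplified sketch of how Display stores these callbacks (member names are illustrative):

+ +
#include <functional>
+#include <utility>
+#include <vulkan/vulkan.h>
+
+class Display
+{
+public:
+    VkInstance instance;
+
+    void setTerminateFunction(std::function<void()> f) { terminate = std::move(f); }
+
+    void setSurfaceCreationFunction(std::function<VkResult(VkSurfaceKHR*)> f)
+    {
+        surfaceCreation = std::move(f);
+    }
+
+private:
+    // The lambdas from main are stored here, so Display never sees GLFW
+    std::function<void()> terminate;
+    std::function<VkResult(VkSurfaceKHR*)> surfaceCreation;
+};
+
+ +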

My question is: does this come off as an eyesore to experienced developers? Is this a crude and unnecessary workaround? (I'm trying to figure out how to ask this without it being ""opinion based"".)

+",355396,,355396,,43849.08611,43849.20069,Is it practical to pass function pointers to a separate class to avoid additional includes?,,1,2,1,,,CC BY-SA 4.0, +403993,1,403997,,1/19/2020 12:03,,0,57,"

Introduction

+

I intend to create a .NET WinForms application (this will be a toy application) which connects to a SQL Server database backend. I have designed and implemented an object-oriented model in C# with classes corresponding to DB tables, as part of the Model. After doing some research, I have decided to implement another layer of the Model, designed specifically for DB connection. There will be a third layer, the service layer. In the database, there will be three or four tables at most.

+

Dealing with loss of connection to DB

+

My question is regarding the connection to the database. My vision is to establish a System.Data.SqlClient SqlConnection when the application is run, and authenticate the user (salted hashed passwords are stored in the DB). If the authentication is successful, the connection is closed, and the main form is displayed. Whenever there is a need to update the database or retrieve data from it, another connection is established, appropriate queries executed, and then the connection closed.

+

A potential problem is losing the ability to establish a connection. What should be done to deal with this? Should the application terminate at that point? Or should data be loaded from the DB into instances of the classes when the application starts, and anything that needs to be written be persisted to the DB when it terminates normally?

+",350409,,-1,,43998.41736,43849.71944,MVC database connection .NET,<.net>,1,3,,,,CC BY-SA 4.0, +403994,1,,,1/19/2020 13:43,,0,139,"

I'm writing a ""generic"" achievement system for my MMORPG project, it needs to be friendly & efficient for my game designers (without having to write code to add new achievements). +If anyone got some suggestions about good alternatives, I'll be glad to give it a check. (Lua, C# scripting...), many things I've not yet done and I'm feeling more ""comfortable"" on that option.

+ +

For that, I thought about a solution: a basic configuration file that is linked to an enumeration of event types and provides a list of optional args (based on the event's properties).
+Here is an example:

+ +
name: ACHIEVEMENT_NAME
+event_type: ITEM_USED # enum as a string, there is a list of event types
+count: 30 # 30 item usage
+args: # every args are optional
+  item_vnum: 1127 # vnum - optional
+  map_id: 1 # on map id 1 only - optional
+
+ +
public enum AchievementEventType
+{
+    ITEM_USED,
+    MONSTER_KILLED,
+}
+
+public interface IAchievementArgument
+{
+    AchievementEventType EventType { get;}
+}
+
+public class MonsterKilledAchievementArgument : IAchievementArgument
+{
+    public AchievementEventType EventType => AchievementEventType.MONSTER_KILLED;
+
+    public long MonsterVnum { get; set; }
+    public short? MapId { get; set; }
+}
+
+public class AchievementConfiguration
+{
+    public string Name { get; set; }
+    public string EventType { get; set; }
+    public long Count { get; set; }
+    public Dictionary<string, object>? Args { get; set; }
+}
+
+public delegate bool AchievementFilter(IAchievementArgument achievementArgument);
+
+ +

I thought about generating an expression tree to build a delegate (AchievementFilter) for my ""achievement increment condition""
+(basically, the function that checks whether the player's counter for a specific achievement can be incremented or not). A rough sketch of what I mean follows below.

+ +
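
A minimal sketch of the expression-tree idea; it only compares the event type, and it assumes .NET Core's generic Enum.Parse (the FilterBuilder name is a placeholder):

+ +
using System;
+using System.Linq.Expressions;
+
+public static class FilterBuilder
+{
+    public static AchievementFilter Build(AchievementConfiguration config)
+    {
+        var arg = Expression.Parameter(typeof(IAchievementArgument), ""arg"");
+
+        // arg.EventType == <configured event type>
+        Expression body = Expression.Equal(
+            Expression.Property(arg, nameof(IAchievementArgument.EventType)),
+            Expression.Constant(Enum.Parse<AchievementEventType>(config.EventType)));
+
+        return Expression.Lambda<AchievementFilter>(body, arg).Compile();
+    }
+}
+
+ +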

There are two solutions I thought of; both have their pros and cons, but I'm looking for an external point of view. What do you think about them? (Or maybe there is another kind of solution you could suggest.)

+ +

1 expression tree per achievement

+ +

I generate 1 expression tree per achievement that will compare the IAchievementArgument given as parameter with the achievement configuration.

+ +
    +
  • Pros : + +
      +
    • faster at execution time (each achievement condition have their own delegate)
    • +
  • +
  • Cons: + +
      +
    • Memory footprint
    • +
  • +
+ +

1 expression tree per args type

+ +

I generate 1 expression tree per IAchievementArgument type that will fetch and compare achievement configuration one by one

+ +
    +
  • Cons : + +
      +
    • More execution time (needs to check all key/value equality of each achievement configuration)
    • +
  • +
  • Pros : + +
      +
    • Lighter memory footprint
    • +
  • +
+",355423,,90149,,43850.59028,44120.91806,C# - Generic Configurable Condition checker at Runtime - Achievement System,,2,0,,,,CC BY-SA 4.0, +403998,1,,,1/19/2020 17:31,,2,155,"

Currently I am cleaning up hard-to-maintain, hard-to-test if/else clutter that is based on conditions which have to be checked in isolation:

+ +

What is the basic semantic of the conditions?

+ +
+

Big entity objects have to be checked for state changes based on two entity keys, namely Trans and Right, as in the example below:

+
+ +
if (!oldTrans.getfOrder().equals(newTrans.getfOrder())) {
+    compound.setIsStateChanged(true);
+    return;
+}
+if (!oldRight.getgRight().equals(newRight.getgRight())) {
+    compound.setIsStateChanged(true);
+}
+// ... and the list goes on for 20 more such conditions
+
+ +

Currently the if else are all cluttered up at one place:

+ +
    if (!oldTrans.getfOrder().equals(newTrans.getfOrder())) {
+        compound.setIsStateChanged(true);
+        LOGGER.info(""major change detected"");
+        return compound;
+    }
+    if (!oldTrans.getgOrder().equals(newTrans.getgOrder())) {
+        compound.setIsStateChanged(true);
+        LOGGER.info(""major change detected"");
+        return compound;
+    }
+
+ +

I see 2 main issues here

+ +
    +
  1. Every if has a return statement, and with so many ifs it is hard to know at what point the method exits.

  2. +
  3. Too many if branches are error-prone, and the number of conditions is likely to grow.

  4. +
+ +

To avoid so many ifs that are all based on the same underlying semantics, I tried, from a clean-code perspective, to solve it the polymorphic way.

+ +

Extracting the conditions into enums as constants and implementing a checker interface that takes the old and new objects as params:

+ +
    public interface ICheckStateChange<T> {
+        boolean check(T oldValue, T newValue);
+    }
+
+    // implementations
+    public class TransChecker implements ICheckStateChange<Trans> {
+
+        // e.g. collected from a condition enum such as the one shown below
+        private final List<BiPredicate<Trans, Trans>> allTransConditions = TransConditions.getValues();
+
+        @Override
+        public boolean check(Trans oldTrans, Trans newTrans) {
+            // null checks for all conditions go here
+            for (BiPredicate<Trans, Trans> transCondition : allTransConditions) {
+                if (transCondition.test(oldTrans, newTrans)) {
+                    LOGGER.info(""major state change detected, taking apt action"");
+                    return true;
+                }
+            }
+            return false;
+        }
+    }
+
+    public class RightChecker implements ICheckStateChange<Right> {
+
+        private final List<BiPredicate<Right, Right>> allRightConditions = RightConditions.getValues();
+
+        @Override
+        public boolean check(Right oldRight, Right newRight) {
+            // null checks for all conditions go here
+            for (BiPredicate<Right, Right> rightCondition : allRightConditions) {
+                if (rightCondition.test(oldRight, newRight)) {
+                    LOGGER.info(""major state change detected, taking apt action"");
+                    return true;
+                }
+            }
+            return false;
+        }
+    }
+
+ +

The conditions are now centrally located as BiPredicate constants using lambdas:

+ +
public enum RightConditions {
+    FORDER_CHANGE_NULL_TO_NOT_NULL((oldOrder, newOrder)
+       -> oldOrder == null && newOrder != null),
+
+    // to be replaced by the right condition
+    GORDER_CHANGE_FROM_OPEN_TO_DONE((oldOrder, newOrder)
+       -> oldOrder == null && newOrder != null),
+
+    // to be replaced by the right condition
+    LORDER_CHANGE_FROM_OPEN_TO_REVERTED((oldOrder, newOrder)
+       -> oldOrder == null && newOrder != null);
+
+    private final BiPredicate<Order, Order> condition;
+
+    RightConditions(BiPredicate<Order, Order> condition) {
+        this.condition = condition;
+    }
+
+    public static List<BiPredicate<Order, Order>> getValues() {
+        return Arrays.stream(values()).map(c -> c.condition).collect(Collectors.toList());
+    }
+}
+
+ +

My question here is about this approach of refactoring the if/else clutter with the help of BiPredicate lambdas, from a clean-code point of view: readability, extensibility and maintainability ;)

+",104329,,104329,,43850.26528,43850.26528,Refactor approach for huge if else Clutter based on many independant conditions,,2,16,,,,CC BY-SA 4.0, +403999,1,404004,,1/19/2020 17:54,,-2,56,"

I'm trying to convert this table into an ORM schema:

+ +

The solution given by the book is this one: it connects all the parameters to the Member entity. This is my solution, which I think is less complicated, but I don't know whether it is equivalent:

+ +

What do you think about my solution? Is it right?

+",355436,,,,,43849.94653,Converting a table to an ORM schema,,1,4,,,,CC BY-SA 4.0, +404000,1,,,1/19/2020 19:15,,-3,57,"

I am writing an implementation of a binary search tree and in doing this I need a method that splits an array in two.

+ +

I am unsure where it is appropriate to place this method. By ""where to place"" I mean whether I should monkey-patch the Array class, create a class method belonging to Tree, create a global method, or do something else entirely which I have not thought of.

+ +

Here is some code to further describe the alternatives I have thought of.

+ +

Alternative 1: Monkey patch

+ +
class Array
+  def split
+    # note: plain size, not @size (Array does not set such an instance variable)
+    each_slice(size / 2).to_a
+  end
+end
+
+ +

Alternative 2: Class method

+ +
class Tree
+  def Tree.split(a)
+    a.each_slice(a.size/2).to_a
+  end
+end
+
+ +

Alternative 3: Global method

+ +
def split(a)
+  a.each_slice(a.size / 2).to_a
+end
+
+ +

Where is it appropriate to place this method?

+ +

As always thanks in advance for answering my question

+ +

Olav

+",353803,,348453,,43854.57639,43854.57639,Where is it appropriate to implement the split_array method,,1,2,,,,CC BY-SA 4.0, +404008,1,,,1/20/2020 0:01,,1,103,"

I have a general question about loading data into a data warehouse (DW).
+This is basically a follow-up to an older question of mine.
+I have a general understanding problem about how to fill the [Date] dimension.

+ +

Setup

+ +

I have an IoT-based business model that looks basically like this:

+ +
    +
  • Project + +
      +
    • Location
    • +
    • Customer
    • +
    • [...]
    • +
    • DataSource + +
        +
      • Device, each with one or more ValueTypes + +
          +
        • Data (not included in the relational database)
        • +
      • +
    • +
  • +
+ +

It's currently persisted in a relational database.

+ +

Effort taken

+ +

For storing mass data, I designed a data warehouse model (snowflake) to allow storing it effectively in a DW.

+ +

Looks currently like this:

+ +

+ +

I read a lot about designing a proper warehouse and setting up an ETL process. I identified all my dimension tables as SCDs and use the temporal tables feature of MS SQL Server to handle updates.

+ +

Question

+ +

Let's assume only my fact and [Date] dimension tables exist. I have trouble understanding how to perform proper inserts into the DW.
+If I understand correctly, a default pattern would be to use a staging table (on MS SQL Server, for example, a PolyBase table) and then perform batch operations like:

+ +
    +
  1. clean staging table
  2. +
  3. fill staging table
  4. +
  5. query for not existing dates
  6. +
  7. insert not existing dates
  8. +
  9. move staging to fact table with references to date dimension
  10. +
+ +

This could be optimized, for example, with MERGE statements, as in the sketch below.

+ +
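
A rough sketch of what I mean for steps 3 and 4, assuming a staging table [stage].[DeviceData] and a [dim].[Date] table (all names are placeholders):

+ +
MERGE [dim].[Date] AS target
+USING (SELECT DISTINCT [DeviceDate] FROM [stage].[DeviceData]) AS source
+    ON target.[Date] = source.[DeviceDate]
+WHEN NOT MATCHED BY TARGET THEN
+    INSERT ([Date]) VALUES (source.[DeviceDate]);
+
+ +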
    +
  • Did I understand that correctly?
  • +
  • Are there better best-practice strategies like that?
  • +
+ +

One last hint: because DeviceDate is IoT data, the dates will be highly unique (they are really timestamps), so prefilling all possible timestamps makes no sense to me.

+",355284,,355284,,43850.06458,43851.275,Load for Date dimension table of a warehouse,,1,0,,,,CC BY-SA 4.0, +404009,1,404015,,1/20/2020 1:23,,2,635,"

I am creating this project based on event sourcing and CQRS.
+My write and read models live in different databases (and on different machines) and are connected through an event bus (in particular, I am using MassTransit with RabbitMQ for the transport).

+ +

When an event is generated, it is stored in my write model and published on the queue, where the read model picks it up and updates its DB accordingly.

+ +

What I am still not completely clear about is how to keep the two models in sync in case of errors or problems.

+ +

For example, the database on the read side could go down, or the message queue itself could go down so that the write side cannot publish any more events.

+ +

My first approach: since every aggregate (I am using DDD) has a version number, I can use it to keep track of how many events per aggregate instance I have processed.

+ +

For example, if on the read side my object with ID 8734 has a version number of 4, and a new event arrives for that object whose metadata says we are now at version 6, the read side notices it missed an event and can use the message queue to ask for the missing one(s); a sketch of this check follows below.
+(This method actually also solves the problem of processing events in order.)

+ +
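
A rough sketch of that version check in the read-model consumer (all names here, such as readStore, bus, ReplayRequested and Apply, are illustrative):

+ +
public async Task Handle(DomainEvent evt)
+{
+    var currentVersion = await readStore.GetVersion(evt.AggregateId);
+
+    if (evt.Version > currentVersion + 1)
+    {
+        // Gap detected: ask the write side to republish the missing range
+        await bus.Publish(new ReplayRequested(evt.AggregateId, currentVersion + 1, evt.Version - 1));
+        return;
+    }
+
+    await Apply(evt);
+}
+
+ +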

The drawback is that the system does not realize that an event has not been processed until the next one arrives.
+(If event 6 for object 8734 is never generated, I'll never notice that I am missing event 5.)

+ +

A second option: I can create a background processor that polls the database and asks for new events, but this defeats the idea of pub/sub messaging and also forces the read model to process events sequentially, even when they concern different aggregates where parallel processing would be permitted (delaying the eventual consistency).

+ +

Lastly, I could make sure the write side keeps publishing an event until it receives an ack that the read model (or all the read models) have processed it correctly, but this shifts the responsibility to the write side of the application, and I do not know whether that is a good strategy or whether it is going to bite me later.

+ +

How to deal with these situations?

+",350360,,,,,43850.22014,Event sourcing and sync between write and read model,,1,0,1,,,CC BY-SA 4.0, +404010,1,404011,,1/20/2020 1:48,,3,315,"

Where I work we occasionally come across really challenging defects, which require a great deal of technical expertise, skill and patience to resolve. Getting our most talented engineers to work on these issues (especially on legacy code) can be quite challenging.

+ +

My question to the community is this: Have you successfully tried any techniques which would make such challenge support activities attractive/fun? If so, could you share what you have done?

+ +

Of course we could just pick the most talented engineers and instruct them to work on these issues, but forcing them to do this type of work on an ongoing basis can lead to disenfranchisement, which is not something we want.

+",166094,,217210,,43851.11667,43851.15972,"Making software defect correction ""fun""",,2,4,,43851.62778,,CC BY-SA 4.0, +404013,1,,,1/20/2020 2:59,,-1,54,"

I'm trying to understand how separation of duties works between various job functions as a security measure. For starters, there are computer (hardware) operations, applications programs, and systems programs.

+ +

From what I understand, computer operators are supposed to be confined to hardware and hard data. They are not supposed to have anything to do with systems programming or applications programming, because of potential conflicts of interest. I follow this so far.

+ +

Likewise, the access of applications programmers are supposed to be limited to applications programs. They are not supposed to have anything to do with the systems programs that control their programs, or with the hard data or hardware. Again, this seems to follow.

+ +

The ""symmetry"" is broken with systems programmers. They are supposed to work with systems, and understandably, are not supposed to have access to applications programs. But my understanding is that they are allowed to have access to the hard data and hardware, when the other two types of personnel aren't allowed outside their areas. Why might this be?

+ +

I'm also confused about the roles of two more agents, systems and administrators and systems analysts. Maybe the problem is that I'm getting hung up on the job titles.

+ +

Systems administrators are ""keepers of the seal"" such as providing access abd passwords to the systems. Yet, they are not supposed to have access to the systems, only to applications. Is their function actually an ""applications"" function, despite their job titles?

+ +

Ditto for systems analysts, who oversee and work with applications programmers, and in this regard, are ""applications"" people who have access to applications. But they're not supposed to have access to systems, despite their job titles.

+ +

Clarification: in answer to a comment, the question can be restated as follows (in two parts). 1) Why are systems programmers allowed to go outside of systems (to hardware), when applications programmers and hardware people are expected to stick to issues connected with their departments? 2) Why do systems administrators and systems analysts appear to have the same scope as ""applications"" programmers when the term ""systems"" is in their job titles?

+",35693,,35693,,43850.19583,43850.50069,How does separation of duties work between the kinds of workers described below?,,1,2,,,,CC BY-SA 4.0, +404017,1,404024,,1/20/2020 6:09,,0,104,"

I need some help to understand whether the code below could be refactored into something less repetitive and more aligned with an appropriate design pattern.

+ +

What I feel uncomfortable with in the code is the flow of repetitive tasks with the same pattern like:

+ +
// Get the result from some operation (API call / or any other operation);
+// Check if the result is somehow valid;
+// If it is not valid, set the response object accordingly and return early;
+// If it is valid, continue with the next step with the overall same logic but different details.
+
+ +

Does the code look like being able to get refactored to (or towards) some usefully applicable here design pattern?

+ +

Would love to hear any feedback on it.

+ +

Here is the code:

+ +
/**
+ * Check if the given email exists in the SendGrid recipients global list
+ * and its custom field 'status' has the value 'subscribed'.
+ *
+ * @param  string  $email The email to check.
+ *
+ * @return object  (object)['isfound'=>false, 'issubscribed'=>false];
+ */
+public function getSubscriberStatus(string $email): object
+{
+    $result = (object) ['isfound' => null, 'issubscribed' => null];
+
+    /**
+     * Find the email in the SendGrid global list.
+     */
+    $endpoint = ""contactdb/recipients/search?email=$email"";
+    $found = $this->callSendGrid('GET', $endpoint);
+    if ($found->status !== 200) {
+        Log::error(sprintf('[SENDGRID] Error while searching the email: %s in the SendGrid list, status: %s, message: %s', $email, $found->status, $found->message));
+        $result->isfound = false;
+        $result->issubscribed = false;
+        return $result;
+    }
+
+    if (!($found->data->recipient_count > 0)) {
+        $result->isfound = false;
+        $result->issubscribed = false;
+        return $result;
+    }
+
+    /**
+     * Find the recipient with email exactly matching the required one.
+     */
+    $recipient = collect($found->data->recipients)->first(function ($item) use ($email) {
+        return $item->email === $email;
+    });
+
+    /**
+     * No exactly matching emails.
+     */
+    if (!$recipient) {
+        $result->isfound = false;
+        $result->issubscribed = false;
+        return $result;
+    }
+
+    $result->isfound = true;
+
+    /**
+     * Get the status field of the recipient's 'custom_fields' array.
+     */
+    $status = collect($recipient->custom_fields)->first(function ($item) {
+        return $item->name === 'status';
+    });
+
+    if ($status->value !== 'subscribed') {
+        $result->issubscribed = false;
+        return $result;
+    }
+
+    $result->issubscribed = true;
+    return $result;
+}
+
+",307971,,,,,43850.79583,Refactor the method which has the sequence of the similarly looking blocks of code to (or towards) the design pattern(s),,2,4,,,,CC BY-SA 4.0, +404019,1,,,1/20/2020 8:14,,0,85,"

To give an arbitrary example, let's say that I'm storing two objects, Item and Box, in the same database.

+ +

These objects have a one-to-one relationship.

+ +

Item has the following properties:

+ +
    +
  • ID (string)
  • +
  • BoxID (string)
  • +
  • Price (int)
  • +
+ +

Box has the following properties:

+ +
    +
  • ID (string)
  • +
  • ItemID (string)
  • +
  • ItemLastUpdated (timestamp)
  • +
+ +

Whenever Item's price changes, Box's ItemLastUpdated must be updated as well:

+ +
try {
+    item.price = newPrice;
+    itemRepository.Save(item);
+} catch (Exception e) {
+    // Handle the Exception.
+}
+
+try {
+    box.itemLastUpdated = now;
+    boxRepository.Save(box);        
+} catch (Exception e) {
+    itemRepository.RollbackLastChange();
+    // Handle the Exception.
+}
+
+ +

My question concerns the scenario where a database outage causes boxRepository.Save() to fail.

+ +

Because itemRepository also uses the same database, it won't be able to roll back its last change.

+ +

item's data is now bad because it has stayed modified despite needing a rollback.

+ +

EDIT:

+ +

Our current architecture cannot use transactions because it is using interfaces to work with databases and therefore cannot assume that a given database has transactions built-in.

+ +

If we assume that whatever database we use will have transactions, how do we rewrite the above code to use them? We are currently using different repositories to work with different tables. A sketch of what I imagine is below.

+ +
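
A sketch of what I imagine a transactional version could look like, assuming some hypothetical IUnitOfWork abstraction over the database:

+ +
using (var uow = unitOfWorkFactory.Begin())
+{
+    item.price = newPrice;
+    itemRepository.Save(item, uow);
+
+    box.itemLastUpdated = now;
+    boxRepository.Save(box, uow);
+
+    // Both changes persist, or neither does
+    uow.Commit();
+}
+
+ +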

Will creating a ""master object"" to collate all changes and attempt a transaction suffice, or is there a better way?

+",322362,,322362,,43850.99931,43854.15556,Handling Failed Rollbacks,,1,8,,,,CC BY-SA 4.0, +404020,1,404030,,1/20/2020 8:28,,0,1265,"

I have kind of a unique use case:

+ +
    +
  1. Phones that are used to connect to the app might be shared
  2. +
  3. Connections are very unstable (sometimes no connection for half a day)
  4. +
  5. Data should be accessible through the interface only by an authenticated user
  6. +
  7. The data should be accessible after the first login for each user
  8. +
  9. Users are not really tech-savvy
  10. +
+ +

PWAs use JavaScript and therefore have restricted possibilities for encryption.

+ +

My current setup is

+ +
    +
  1. angular app with pwa possibilities
  2. +
  3. PouchDB with remote CouchDB sync for data
  4. +
+ +

I feel like it is not a good idea to store user credentials on the device, even encrypted, especially when using JavaScript.

+ +

Is this even possible to achieve? And what kind of flow would you recommend? My current idea: generate a unique (short, four-letter) token that the user has to remember when logging out; it is stored, encrypted, together with the username. This token, in combination with the username, can then be used to re-login as long as the application is offline. As soon as the app is online again, the user is asked to log in with their real credentials; if this succeeds, the token is deleted and a new one is created (and shown to the user) when the user logs out.
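
A sketch of the offline relogin check in TypeScript; localStore and deriveHash are hypothetical wrappers (e.g. around IndexedDB and WebCrypto's PBKDF2):

+ +
async function offlineLogin(username: string, token: string): Promise<boolean> {
+  const stored = await localStore.get(username); // { tokenHash, salt } or null
+  if (!stored) {
+    return false;
+  }
+  // Derive a hash from the short token and compare it with the stored one
+  const hash = await deriveHash(token, stored.salt);
+  return hash === stored.tokenHash;
+}
+
+ +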

+",355462,,355462,,43850.35625,43850.49653,Offline-Login Procedure in PWA,,1,0,,,,CC BY-SA 4.0, +404021,1,,,1/20/2020 9:17,,2,905,"

So I know a little bit about parsing JSON data, but not too much, so pardon me if I don't describe everything as I should. Let's use this JSON file as an example:

+ +
{  
+  ""firstname"": ""John"",  
+  ""lastname"": ""Smith"",  
+  ""age"": 25,  
+  ""address"": {  
+    ""street address"": ""21 2nd Street"",  
+    ""city"": ""New York"",  
+    ""state"": ""NY"",  
+    ""postal code"": ""10021""  
+  },  
+  ""phonenumbers"": [  
+    {  
+      ""type"": ""home"",  
+      ""number"": ""212 555-1234""  
+    },  
+    {  
+      ""type"": ""fax"",  
+      ""number"": ""646 555-4567""  
+    }  
+  ],  
+  ""sex"": {  
+    ""type"": ""male""  
+  }  
+}  
+
+ +

I know that if I want to parse this JSON data, I first have to create classes with these names in a script (I didn't write them all out, I know):

+ +
public class JsonClass
+{
+    // fields must be public (or properties) for most JSON serializers to map them
+    public string firstname;
+    public List<phonenumberlist> phonenumbers;
+}
+
+public class phonenumberlist
+{
+    public string type;
+    public string number;
+}
+
+ +

But my question is: what if I want to change the JSON file dynamically, and how can I go about setting up new classes dynamically in accordance with my changing JSON file? For example, if I add this to the JSON file under the phone numbers list:

+ +
""areaCodeLocation"": ""new york""
+
+ +

Then I wouldn't be able to parse it, since I would have to manually go into my code and update the class I had written. Or what if I wanted to add something else, like
+ ""height"" : ""6""

+ +

Basically, what I'm asking is how to build a parser that adapts to my dynamically changing JSON, if that is even possible? Thanks!

+ +
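
To illustrate the kind of flexibility I'm after: with Json.NET's JObject the data can be read without predefined classes (just a sketch; it loses the strong typing I'd like to keep):

+ +
using System;
+using Newtonsoft.Json.Linq;
+
+string json = GetRawJson(); // the JSON document shown above (helper is hypothetical)
+
+JObject person = JObject.Parse(json);
+string firstname = (string)person[""firstname""];
+
+// Unknown/new fields can be discovered at runtime
+foreach (JProperty prop in person.Properties())
+{
+    Console.WriteLine($""{prop.Name}: {prop.Value}"");
+}
+
+ +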

Edit: to further clarify what I am trying to achieve: I will host a JSON file on the internet and parse it when my program starts. Normally this is not a problem because everything is properly set up through classes in the code. But if I decide to add more information to the JSON file, like ""areaCodeLocation"" or ""height"" as shown above, it will not be parsed, because it is not present in the classes.

+ +

What I want to do, if possible, is have the JSON file parsed entirely through code, including setting up the classes, so that the program can parse any additional information I may want to add to the (online) JSON file without me going into the code and manually adding new classes. That way my users will not have to update the program every time I decide to add something new to the JSON file.

+",355467,,355467,,43851.07014,43851.07014,How to parse a dynamically changing Json file? (c#),,1,7,,,,CC BY-SA 4.0, +404026,1,404039,,1/20/2020 10:45,,4,360,"

We have multiple pages searching for users, each with different search parameters. Sometimes we have 2 parameters, sometimes 4, and most of these parameters overlap. So we have roughly (simplified) this code:

+ +
void SearchUser()
+{
+    // Get values from somewhere
+    Service.SearchActiveUser(firstname, lastname);
+}
+void SearchUserSomewhereElse()
+{
+    // Get values from somewhere
+    Service.SearchUser(lastname, id);
+}
+
+// some other layer
+IQueryable<User> SearchActiveUser(string firstname, string lastname)
+{
+    var users = from user in allusers
+                where user.IsActive == true
+                select user;
+
+    if (!String.IsNullOrEmpty(firstname))
+        users = users.Where(user => user.Firstname == firstname);
+    if (!String.IsNullOrEmpty(lastname))
+        users = users.Where(user => user.Lastname == lastname);
+
+    return users;
+}
+
+IQueryable<User> SearchUser(string lastname, int? id)
+{
+    var users = from user in allusers
+                select user;
+
+    if (!String.IsNullOrEmpty(lastname))
+        users = users.Where(user => user.Lastname == lastname);
+    if (id.HasValue)
+        users = users.Where(user => user.Id == id);
+
+    return users;
+}
+
+ +

Now, we have another search for some other data where we have one giant search value object (30+ search properties) and one method that applies all the filters. Applying this to the user situation, it'd look like this:

+ +
void SearchUser()
+{
+    // Get values from somewhere
+    var searchVO = new UserSearchVO { Firstname = firstname, Lastname = lastname, IsActive = true };
+    Service.SearchUser(searchVO);
+}
+void SearchUserSomewhereElse()
+{
+    // Get values from somewhere
+    var searchVO = new UserSearchVO { Lastname = lastname, Id = id };
+    Service.SearchUser(searchVO);
+}
+
+// some other layer
+IQueryable<User> SearchUser(UserSearchVO searchVO)
+{
+    var users = from user in allusers
+                select user;
+
+    if (searchVO.IsActive.HasValue)
+        users = users.Where(user => user.IsActive == searchVO.IsActive);
+    if (!String.IsNullOrEmpty(searchVO.Firstname))
+        users = users.Where(user => user.Firstname == searchVO.Firstname);
+    if (!String.IsNullOrEmpty(searchVO.Lastname))
+        users = users.Where(user => user.Lastname == searchVO.Lastname);
+    if (searchVO.Id.HasValue)
+        users = users.Where(user => user.Id == searchVO.Id);
+    if (!String.IsNullOrEmpty(searchVO.SomeFutureValue))
+        users = users.Where(user => user.SomeFutureValue == searchVO.SomeFutureValue);
+
+    return users;
+}
+
+ +

I understand that the first approach is the natural starting point; I wouldn't use a search object if all I needed were 2 params. But I feel like I'm violating some principles of clean code with the latter approach.

+ +

Which principles of clean code do I violate with the second approach, and is there another clean way of keeping this readable without having 5000 different methods to filter one object?

+",296502,,296502,,43850.53194,43938.01319,Grouping of methods,,4,8,,,,CC BY-SA 4.0, +404027,1,,,1/20/2020 11:19,,0,177,"

I'm currently designing a database query language, and I came to wonder what the best syntax for the comparison operator would be.

+ +

Most modern languages use ==, but among the database languages based on SQL, = is also often used.
+I acknowledge that = should have been the comparison operator, with assignment being something like <-, had languages followed a more mathematical syntax, but I guess languages like C have contributed to making = the standard for assignment.

+ +

However, I'm not looking for a debate on what the best operator for assignment or comparison should be, but to know whether using the same operator for two distinct things, as MySQL and some other languages do, is a practice to avoid.

+ +

In terms of language design, using = can be a bit annoying since it is a source of grammar ambiguities, this operator being also used for assignment. MySQL seems to resolve this ambiguity by introducing the SET keyword, but in my case I don't see the point of introducing a new keyword when I can simply use different operators for assignment and comparison to remove the ambiguity.

+ +
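
An example of the kind of ambiguity I mean, in MySQL (to the best of my knowledge):

+ +
-- The first = is an assignment, the second one is a comparison:
+UPDATE t SET a = (b = c);   -- a becomes 1 or 0
+-- Without the SET keyword marking the assignment, the grammar
+-- could not easily tell which = is which.
+
+ +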

So my question is: is there a real benefit to using = as the comparison operator, considering it is already used for assignment?

+",87947,,209774,,43850.79583,43850.79583,"Language design : use equals symbol = both for affectation and comparison, like in MySQL",,4,4,1,,,CC BY-SA 4.0, +404033,1,,,1/20/2020 13:22,,1,76,"

I'm currently looking into NestJS as the framework for a new e-commerce project. The goal is to build multiple loosely coupled services which communicate with each other using the pub/sub messaging pattern. NestJS offers some great out-of-the-box integrations with tools like Redis, NATS and RabbitMQ. I've read that Kafka is also supported, but that isn't properly documented yet.

+ +

The problem I have is about decision-making. After reading lots of material about event-based systems and how they work, I still don't have a clear vision on what tool to pick.

+ +
+ +

1) I think I need a message broker that can queue messages to ensure they get delivered (eventually) to the subscribers.

+ +

This is because I might have a CatalogueService which emits events such as product_created. Services like a CartService and OrderService should also hold a copy of the product data and these products should be up-to-date. I can't afford to have old prices in the OrderService as the price calculations would be off. Also, if a service goes down, it can be restarted and properly process any unprocessed messages in the queue.

+ +

2) I think I need a message broker that can support FIFO (first in, first out) queues.

+ +

This is because I might have two product_edited events that must be processed in the right order to prevent corrupted data. If a store manager changes a product price twice, the latter price should be considered the current one.

+ +
+ +

I used the words ""I think"" in front of the two requirements as I'm not a 100% sure if my thinking is correct. Therefor I'm curious to see what your thoughts and experiences are. Any tips, tricks or advice would be highly appreciated!

+",355477,,,,,43850.55694,Event-based communication between microservices,,0,0,1,,,CC BY-SA 4.0, +404035,1,,,1/20/2020 14:19,,2,381,"

I've just learnt about the Null Object design pattern, which recommends that the service either return a default null object or throw a null-related exception itself, so that the client need not worry about checking for null or throwing a null exception.

+ +

The thing is, as far as I understand, a client/service relationship should take place via an interface. How can the information that ""the service handles null checking, so the client doesn't have to"" be conveyed via the interface? In C# the interface provides the method signatures that can be expected, but the fact that the service does null checking seems like an extra, arbitrary piece of information that cannot be expressed through the interface. A small sketch of what I mean (the names are hypothetical):

+ +
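public class Customer
+{
+    // The null object: a safe, do-nothing Customer
+    public static readonly Customer Null = new Customer();
+}
+
+public interface ICustomerService
+{
+    Customer GetCustomer(int id);  // nothing here says ""never returns null""
+}
+
+ +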

My concern is that if the client assumes the service does null checking, even though the interface has no way to specify this, it may lead to a ""leaky abstraction"": the client would be taking a specific implementation detail of the service for granted, thus coupling itself to it.

+ +

What is the correct way for a client to be aware that the service is doing null checking so that it does not need to, without breaking the principles of encapsulation?

+",355487,,,,,43850.74514,C# - Correct way to convey Null Object design pattern via an interface for client/service?,,2,11,,,,CC BY-SA 4.0, +404038,1,404040,,1/20/2020 15:01,,2,482,"

In the git flow workflow, it is recommended to create branches for releases and, when the release-specific work is done, merge the result into the master and develop branches.

+ +

I understand why we would merge a release branch into the master branch, rather than rebase master on the tip of the release branch: We don't want people to have history conflicts with master.

+ +

But I don't understand why it's recommended to merge the release branch into develop, rather than rebase develop onto master at the commit that merges the release branch into master. That seems more ""natural"" to me, and simpler to do. In commands, the two options would be roughly:

+ +
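# git flow's recommendation (the release branch name is an example)
+git checkout develop
+git merge release/1.2
+
+# what seems more natural to me
+git checkout develop
+git rebase master
+
+ +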

What am I missing?

+",63497,,,,,43850.66736,"With the ""git flow"" approach, why are release branches merged into develop?",,1,0,,,,CC BY-SA 4.0, +404054,1,,,1/20/2020 21:54,,-2,76,"

I have included OpenCV libraries in enterprise software written in C++. I know that OpenCV is under the BSD 3-Clause license, which means most of its libraries are free to use, modify, and redistribute, and all the modules I used in the software are free ones (I did not use SIFT or SURF). What are the actual texts and files I should include in my project in order to meet the license compliance rules?

+ +

Do I have to include OpenCV license in my project?

+ +

Do I need to include a license notice in all the header/cpp files, or only in the header files that include opencv2.hpp?

+ +

Thanks in advance for the clarification!

+",355511,,,,,43851.00139,Do I include the open source copyright notice in an enterprise software (for commercial use)?,,1,0,,43851.59306,,CC BY-SA 4.0, +404060,1,,,1/21/2020 1:00,,4,1324,"

The third and fourth items of the Manifesto for Agile Software Development

+ +
+
    +
  1. Customer collaboration over contract negotiation
  2. +
  3. Responding to change over following a plan
  4. +
+
+ +

What's the difference between these two?

+ +

If the fourth means ""incremental delivery with some willingness to pivot, instead of big design up front"", then what does the third mean?

+ +

Is the third the same as the fourth ...

+ +
    +
  • Is the (deprecated) ""contract negotiation"" the same as ""following a plan"" -- meaning ""big design up front""?
  • +
  • And ""collaboration"" and ""responding"" -- are they kind of the same thing as each other too?
  • +
+ +

... or different?

+ +

I welcome your explanation, even a reference to one of the original authors explaining it.

+",19237,,,,,43853.20625,Third and fourth items of the Manifesto for Agile Software Development,,4,0,,,,CC BY-SA 4.0, +404068,1,404091,,1/21/2020 5:51,,3,336,"

So we decided to redo the UI of our web application in React. Six months down the line, we have a complete mess of components and reducers and thunks and actions and god knows what not.

+ +

We have multiple files named reducer.ts and each file is 3000-5000 lines long with thousands of reducer functions in each.

+ +

The same is the case with the actions.ts and accessor.ts files. Meanwhile, the thunk.ts files are god files with line counts approaching a million (exaggerating, but you get the idea).

+ +

Then there is an api.ts file, a few thousand lines long, that handles every possible call to the API server.

+ +

At the root level, we have a single types.ts file of more than 30,000 lines, with every TypeScript type defined in it.

+ +

I have been working with small-scale React apps for a few years now, but I have never had exposure to this level of project size and messy code.

+ +

I am sure you guessed already that there are no Unit tests at all.

+ +

The primary problem is that the very promise React makes, of independent components that can be refactored and managed individually, seems completely broken in this scenario.

+ +

Obviously there is no problem with React itself, but the way it has been used in our case is way off the mark. It takes hours to even locate the code for a very simple bug: everything is so deeply interconnected, and every piece of functionality is scattered across at least six to ten files, so it is often unclear where a change is happening.

+ +

My questions to the veteran react devs are as below:

+ +
    +
  • Will it be a cardinal sin if I remove the redux store, thunks, types and accessors, and move the logic of each component inside that component? This way we won't have to jump through six to ten files to find the code for a single action.

  • +
  • Even if we have to keep the store, reducers, accessors and api.ts, is it a religious practice to keep everything in the same files and let them become god files? Can I at least create separate reducer/accessor/API-call/thunk files for each action or feature I am handling?

  • +
  • Is there any recommended practice for code organization when building large-scale React apps?

  • +
+ +

I am from a strong unit testing background and I believe it is imperative to have clean, well separated and testable code for better maintenance.

+ +

Am I thinking in the wrong direction, or is the way the project is already structured the right way to do things in the React world?

+ +

Thanks for any clues.

+",340618,,325277,,43852.57083,43864.27847,How to manage chaotic code explosion in React application,,3,4,1,,,CC BY-SA 4.0, +404069,1,,,1/21/2020 6:40,,2,38,"

Suppose there is a microservice that has a RESTful HTTP API for CRUD operations on a database - nothing fancy, but there is a journal table in which all changes are recorded for audit purposes.
+Suppose also that there is an event bus (in my case Kafka, but it could be anything) to which data change events must be published.

+ +

The journal table has sufficient information to create all events.

+ +

The options on the table are:

+ +
    +
  1. Microservice publishes its own events
  2. +
  3. Another application publishes events based on polling the journal table
  4. +
+ +

Both options guarantee that all events will (eventually) be published, have similar performance, and have similarly simple designs; both approaches will be 100% robust.

+ +

Things to consider, which may or may not affect the answer:

+ +
    +
  • performance is similar, and largely irrelevant anyway
  • +
  • while the journal table is not expected to change, option 2 means having a foreign application depending on it thus creating an API/SLA that is tightly coupled to an implementation choice (using a table for the journal) and its schema.
  • +
  • option 2 requires DB connection credentials, network routing, security signoff, certificates, etc
  • +
  • option 1 means the microservice has the increased responsibility of publishing events
  • +
  • option 2 means there is another application to build, deploy, operate and maintain
  • +
  • both options will work
  • +
+ +

Which is (closer to) best practice?

+",31101,,,,,43851.27778,Data change event publishing by microservice vs separate application - best practice,,0,0,,,,CC BY-SA 4.0, +404073,1,,,1/21/2020 10:34,,2,158,"

I'm wondering if I'm using the correct architecture in my application.

+ +

After calling an endpoint in my API, I'm currently going through the following flow: +Api.EmployeeController.Update(Api.EmployeeUpdateDto) => Services.EmployeeService.Update(Service.EmployeeUpdateDto) => Data.EmployeeRepository.Update(Entities.Employee) => Data.EfDbContext.Employees.Update(Entities.Employee)

+ +

To explain a bit more: my API endpoint takes an Api.EmployeeUpdateDto; within the controller it is mapped to a Services.EmployeeUpdateDto and passed to Services.EmployeeService.Update(). Within Services.EmployeeService.Update() the actual DB entity is retrieved by Id and its values are updated; afterwards it is passed to EmployeeRepository.Update(), which in turn calls the underlying EF DB context.

+ +

For some reason my gut tells me this is overcomplicated, with too many layers. Am I missing something?

+",355541,,,,,43851.58403,Correct approach to DDD?,<.net>,1,8,,,,CC BY-SA 4.0, +404074,1,,,1/21/2020 10:47,,1,134,"

Let's say I have an object A with a public method bool Foo(arg). Foo is potentially a complex algorithm with nested rules (ifs). I don't want Foo to contain all the code; I want several intermediate functions to make my code more expressive, but I face a dilemma.

+ +

Here is a simple example.
+Let's say Foo could be written:

+ +
public bool Foo(arg) {
+  if (arg.attr1) {
+    if (arg.attr2) return false;
+    else if (arg.attr3) return false;
+  }
+  return true;
+}
+
+ +

However, what is tested is actually more complex; these are in fact business rules. So a more expressive way to write the code is:

+ +
bool Rule1(arg) {
+  if (arg.attr1) return false;
+  return true;
+}
+
+bool Rule2(arg) {
+  if (arg.attr2) return false;
+  return true;
+}
+
+bool Rule3(arg) {
+  if (arg.attr3) return false;
+  return true;
+}
+
+public bool Foo(arg) {
+  if (Rule1(arg)) return false;
+  if (Rule2(arg)) return false;
+  if (Rule3(arg)) return false;
+  return true;
+}
+
+ +

My dilemma is that rules 2 and 3 are only meaningful after rule 1 has been validated. Since only Foo is public, no client can misuse the individual rule functions; however, another developer could get a new requirement and use rule 2 or 3 without first checking rule 1. I could test for rule 1 inside both rule 2 and rule 3, but that seems redundant.

+ +

Can I assume certain preconditions hold in internal functions, or should I still do the necessary checks in each function?

+ +

Edit: not a duplicate of How to tackle a 'branched' arrow head anti-pattern?. I have already tackled the arrow effect with early returns. This question is more about using inner methods for expressiveness, and what to test in each method.

+ +

Edit: the two solutions may not produce equivalent results. I'm sorry about this, but don't worry about it; assume they are equivalent in real life.

+ +

The Rule# names will be more expressive in the end, with proper business names (like IsBlue, IsAntique, ...).

+ +

The if (arg.attr2) checks are overly simplified tests. Real code can compare data in a database, compare dates, compare against global const values, and so on.

+",293499,,293499,,43851.59167,44121.62917,Multiple functions assuming value or check value in each function,,3,3,,,,CC BY-SA 4.0, +404076,1,404080,,1/21/2020 11:36,,7,1737,"

In this GitHub repository, https://github.com/johnph/simple-transaction, under the Transaction.Framework project, there are entities (located at Data/Entities):

+ +
    +
  • AccountSummaryEntity.cs
  • +
  • AccountTransactionEntity
  • +
+ +

and in the domain folder, there are domain model:

+ +
    +
  • AccountSummary.cs
  • +
  • AccountTransaction.cs
  • +
  • TransactionResult.cs
  • +
+ +

From what I observed, the entities are mainly used by the repositories, while the domain models are used for almost everything else, like business logic validation. Is this what is known as domain-driven design?

+",338001,,,,,44034.59931,Domain vs Entities model? Domain-Driven-Design (DDD)?,,3,0,2,,,CC BY-SA 4.0, +404088,1,404092,,1/21/2020 16:50,,1,184,"

I’m working on a problem which I will try my best to describe:

+ +
    +
  • You have a stack of 5 blocks: labelled A, B, C, D and E.

  • +
  • You also have a set of rules giving points if certain conditions are met, for example: B is above D (1 point), D is above A (0.75 points), A is above D (0.25 points), etc.

  • +
  • The goal is to stack the blocks in such a way as to maximise the number of points from the rules. Some rules are contradictory, so not all of them can necessarily be met (a brute-force illustration follows below).

  • +
+ +
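
To make the problem concrete, a brute-force reference in Python (the rules are taken from the examples above; stacks are listed top to bottom):

+ +
from itertools import permutations
+
+# (above, below) -> points awarded when 'above' is anywhere above 'below'
+rules = {('B', 'D'): 1.0, ('D', 'A'): 0.75, ('A', 'D'): 0.25}
+
+def score(stack):
+    return sum(points for (above, below), points in rules.items()
+               if stack.index(above) < stack.index(below))
+
+best = max(permutations('ABCDE'), key=score)
+print(best, score(best))
+
+ +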

I would like to understand which general class of problems this belongs to, in order to find a general way to solve it. Is it graph traversal, bin packing, or some other class of problems?

+",355570,,209774,,43852.35069,43852.35069,Which class of problems is this?,,1,1,1,,,CC BY-SA 4.0, +404096,1,404181,,1/21/2020 19:44,,0,641,"

We made a system update to temporarily disable HTTPS on our Tomcat server.

+ +

Previous users are still accessing the system through the https:// URL and receive an error message because it's disabled.

+ +

Would like to redirect users from HTTPS to basic HTTP version of website.

+ +

Have tried multiple different Connectors in server.xml file, but sadly no success.

+ +

Could a tomcat wizard please share this precious knowledge?

+",261547,,,,,43853.72361,Tomcat redirect HTTPS to HTTP,,1,2,,,,CC BY-SA 4.0, +404097,1,,,1/21/2020 19:49,,-2,43,"

We sell a fleet management solution: it includes a mobile application, a web application and a set of APIs.

+ +

Technically, the APIs and the web application are developed in C# .NET (ASP.NET MVC and ASP.NET Web API). We are using Git as source control.

+ +

Our standard solution is implemented for a certain number of customers. The problem we are facing now is that we have recently had several large-account customers who want product customizations for their own case. For example, in the user account management module, one of our large customers requests the possibility of linking three additional fields (the employee's internal identifier number, a warehouse code and an affiliation number): these fields have no place in our standard APIs.

+ +

The question is therefore how to make changes to our base code and develop our APIs as a product while meeting the requirements of such clients.
+ We want to avoid duplicated, overly specific code that is hard to maintain, while trying to keep a stable common base.

+ +

Some of us say that for such a customer we should create a new Git repository, push our product branch to it, and then make the customer-specific changes there.
+ Others want to modify the product base code itself for each such specification, where needed, placing an extension point that gives a customer the possibility to extend it (for example, adding the 3 new specific fields to the API and the underlying database table for the user account).

+",285517,,,,,43851.89583,How to add business customer customization in our API and,,2,0,,,,CC BY-SA 4.0, +404105,1,404111,,1/22/2020 4:01,,13,5468,"

I made the following diagram to show a typical separation of concerns as typically taught -

+ +

+ +

Here ClassA indirectly uses ClassB via the ISomeInterface, of course ensuring it doesn't know ClassB exists, only the methods within the interface, which ClassB implements. All the information I can find on this separation of concerns ends here, and nowhere can I find out how classA can actually use the interface without coupling itself to ClassB.

+ +

You can't of course instantiate nothing but an interface, an interface has no implementation or functionality itself. So how does ClassA use the interface?

+ +

There are only two ways currently that come to mind -

+ +

1) ClassA does the following:

+ +
ISomeInterface obj = new ClassB();
+
+ +

Here we can make sure we're not calling any members of ClassB directly, only interface members. The problem though is that ClassB has leaked through to ClassA via the instantiation.

+ +

2) ClassA relies upon the interface only, delegating this responsibility of passing the classB object elsewhere, via having the following constructor:

+ +
class ClassA {
+        ISomeInterface obj;
+
+        ClassA(ISomeInterface obj) {
+            this.obj = obj;
+        }
+    }
+
+ +

This of course completely decouples ClassA from ClassB; however, all it does is ""pass the buck"" elsewhere, since someone, somewhere, must instantiate an implementation of ISomeInterface (such as ClassB) and pass it as an object to ClassA.

+ +

All the tutorials and explanations I can find leave out this last crucial detail. Who exactly is responsible for doing this last crucial thing? It has to happen somewhere; the only place I can think of is the application entry point, as sketched below. And is the thing that does this now coupling itself to both ClassA and ClassB?
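
+ +
// Sketch of that entry point (DoWork is a hypothetical method on ClassA)
+public static class Program
+{
+    public static void Main()
+    {
+        ISomeInterface impl = new ClassB();  // the only line that knows ClassB
+        ClassA a = new ClassA(impl);
+        a.DoWork();
+    }
+}
+
+ +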

+",355487,,209774,,43852.34444,43860.10417,How does encapsulation actually work?,,3,14,11,,,CC BY-SA 4.0, +404107,1,,,1/22/2020 5:43,,0,259,"

I'm working on a system that implements multiple microservices which communicate via a RabbitMQ messaging bus.

+ +
    +
  • These microservices are created using python with the pika library (to publish messages as well as consume a RabbitMQ queue)
  • +
  • One of these microservices (let's call it 'orders') has a connected database to store data
  • +
+ +

So far, the application components are asynchronous, relying fully on RabbitMQ exchanges/queues for communication and, when needed, implementing callback queues when one microservice needs to request data from another.

+ +

Now that I have backend microservices talking to each other, I would like to implement a RESTful API interface for this 'orders' microservice so clients (ex: web browsers, external applications) can receive and send data.

+ +

I can think of two ways to do this:

+ +
    +
  1. Create another microservice (let's call it 'orders-api') in something like Flask and have it connect to the underlying database behind the 'orders' microservice. This seems like a bad idea, since it breaks the microservice pattern of a database being owned by a single microservice (I don't want two microservices having to know about the same data model).

  2. +
  3. Create an 'api-gateway' microservice which exposes a RESTful API and, when receiving a request, requests information from the 'orders' microservice via the messaging bus. Similar to how RabbitMQ documents Remote Procedure Calls here: https://www.rabbitmq.com/tutorials/tutorial-six-python.html. This would mean that the 'api-gateway' would be synchronous, and thus, would block while waiting for a response on the messaging bus.

  4. +
+ +
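
To illustrate option 2, here is a rough sketch of the blocking RPC call the gateway would make. It is shown with the RabbitMQ Java client purely for illustration (the same pattern applies with pika), and the 'orders_rpc' queue name is an assumption:

+ +
import com.rabbitmq.client.*;
+import java.nio.charset.StandardCharsets;
+import java.util.UUID;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+
+public class OrdersRpcGateway {
+    // Blocks the calling request thread until the 'orders' service replies
+    public static String call(Channel ch, String request) throws Exception {
+        String replyQueue = ch.queueDeclare().getQueue(); // exclusive, auto-delete reply queue
+        String corrId = UUID.randomUUID().toString();
+        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
+                .correlationId(corrId).replyTo(replyQueue).build();
+        ch.basicPublish("""", ""orders_rpc"", props, request.getBytes(StandardCharsets.UTF_8));
+
+        BlockingQueue<String> response = new ArrayBlockingQueue<>(1);
+        ch.basicConsume(replyQueue, true, (consumerTag, delivery) -> {
+            if (corrId.equals(delivery.getProperties().getCorrelationId()))
+                response.offer(new String(delivery.getBody(), StandardCharsets.UTF_8));
+        }, consumerTag -> { });
+        return response.take(); // this is where the gateway blocks
+    }
+}
+
+ +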

I'm not sure if there are other ways to achieve this which I'm not familiar with. Any suggestions on how to integrate a RESTful API in this environment would be appreciated!

+",355604,,,,,44138.66944,Implementing RESTful API in front of Event based microservices,,1,3,,,,CC BY-SA 4.0, +404114,1,,,1/22/2020 11:08,,9,318,"

I'm working on breaking down a monolith application into smaller applications or microservices. Like always, sometimes it's easy and sometimes it's harder to identify domains and split those into smaller applications and databases.

+ +

I've found a number of domains (I'll show you two as an example) which all share the use of a set of entities (units of care, or so-called care units). These care units are updated once in a while by data integration tools (SSIS). There are over a million care units in the database. In the example below, you see how different domains all act on these care units.

+ +

We could duplicate those care units to every domain, but I think this would add a lot of complexity considering synchronization issues etc. At the moment we can just do aggregations between these domains using simple join queries in SQL Server. Are there other ways to achieve what I want without a lot of data duplication? If not, what are recommended ways to synchronize tables between domains?

+ +

Our application is written in C# .NET, using SQL Server as a database. So answers using these technologies are preferred, but not mandatory.

+",125194,,,,,43854.61111,How to design microservices with large number of joint entities outside of the domain border?,,2,4,3,,,CC BY-SA 4.0, +404115,1,,,1/22/2020 11:10,,2,131,"

I'm working in a microservices environment, where each service authenticates using OpenID Connect against an authentication service (local IdP), based on users I keep locally in my database.

+ +

Now, I want these services to be able to authenticate using Azure, Google, etc.

+ +

Can (and should) I modify my authentication service to allow redirection to another IdP, and replace or chain the token to my proprietary token for my services? Is there a simpler way?

+ +

How can I allow users to log in using either name/password or an external IdP?

+",188261,,,,,44125.87778,Chaining openID token,,1,0,,,,CC BY-SA 4.0, +404116,1,404121,,1/22/2020 11:17,,64,12776,"

This is kind of similar to the Two Generals' Problem, but not quite. I think there is a name for it, but I just can't remember it right now.

+ +

I am working on my website's payment flow.

+ +

Scenario

+ +
    +
  1. Alice wants to pay Bob for a service. Bob has quoted her US$10.
  2. Alice clicks pay.
  3. Whilst Alice's request is flying through the ether, Bob edits his quote. He now wants US$20.
  4. Bob's request finishes before Alice's has reached the server.
  5. Alice's request reaches the server and her payment is authorized for US$20 instead of US$10.
  6. Alice is unhappy.
  +
+ +

Whilst the chances of this are very low in practice, it is a possible scenario. Sometimes requests can hang due to network issues, etc...

+ +

Possible Mitigations

+ +

I don't think this problem is solvable. But we can do things to mitigate it.

+ +

This is not exactly an idempotency issue, so I don't think the answer is ""idempotency token"".

+ +

Option 1

+ +

Let's define:

+ +
    +
  • t_0 as the time Alice clicks pay.
  • t_edit as the time Bob's edit request succeeds.
  • t_1 as the time Alice's request reaches the server.
  +
+ +

Since we cannot know t_0 unless we send it as part of the request data, and because we cannot trust what the client sends, we will ignore t_0.

+ +

At the time Alice's request arrives in the server, we check:

+ +

if t_1 - t_edit < 1 minute: return ""409 Conflict"" (or some other code)

+ +

Would this approach work? 1 minute is an arbitrary choice, and it doesn't solve the problem entirely. If Alice's request takes 1 minute or more to reach the server, the issue persists.
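
+ +

In code, that server-side check might look like this minimal sketch (quote.lastEditedAt() and the surrounding handler are hypothetical names, just to pin down the idea):

+ +
import java.time.Duration;
+import java.time.Instant;
+
+boolean editedTooRecently(Quote quote) {
+    Instant t1 = Instant.now(); // t_1: when the payment request reaches the server
+    Duration sinceEdit = Duration.between(quote.lastEditedAt(), t1); // t_1 - t_edit
+    return sinceEdit.compareTo(Duration.ofMinutes(1)) < 0; // if true, answer 409 Conflict
+}
+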

+ +

This must be an extremely common problem to deal with, right?

+",147328,,591,,43853.35972,43854.78542,"How do I mitigate a scenario where a user goes to pay, but the price is changed mid-request?",,10,22,17,,,CC BY-SA 4.0, +404120,1,404144,,1/22/2020 12:07,,0,2703,"

Is it possible for a queue to have multiple consumers?

+ +

I am working on a system where a single queue needs to be accessed by multiple consumers for different micro-services. Is it possible for the queue to retain data until after a consumer has consumed it, or are there other possible methods for multiple consumers to consume data from a queue?

+",355638,,25476,,43852.62847,43852.86319,Rabbitmq and multiple subscribers,,3,2,,,,CC BY-SA 4.0, +404125,1,404129,,1/22/2020 13:30,,-2,176,"

I have a function that converts args given by argparse to a filename:

+ +
def args2filename(args, prefix):
+    filename = prefix
+    kwargs = vars(args)
+    for key, value in sorted(kwargs.items()):
+        filename += f'_{key}={value}'
+    return filename
+
+ +

What is the Python convention for such functions? I'd guess it would be more appropriate to use args_to_filename (or maybe argstofilename as in 2to3) instead of args2filename, but I couldn't find a reference.

+ +

I've found this answer for Python, but it addresses the case when you have a class. @lvc suggests from_foo, if Bar.__init__ already takes a different signature:

+ +
 class Bar:
+      @classmethod
+      def from_foo(cls, f):
+          '''Create a new Bar from the given Foo'''
+          ret = cls()
+          # ...
+
+ +

So, I'm looking for a Pythonic answer in these two cases, i.e., when I don't necessarily need a class, and when I have one.

+",64067,,64067,,43852.56944,43852.66111,"Python function name convention for ""convert foo to bar"", e.g., foo_to_bar, foo2bar",,2,0,,,,CC BY-SA 4.0, +404128,1,404132,,1/22/2020 15:23,,0,186,"

I need to move a single-tenant web application to a multi-tenant (about 100 tenants) web application. Tenants are going to share the same application, but each tenant is going to have its own database (database per tenant). I have already planned to move my in-process application cache to a shared distributed cache, identifying cache items by adding a prefix (the tenant-id) to the cache item keys (prepended-tenant pattern).

+ +

The application also relies on RabbitMQ to implement async processes. Actually I don't have many queues, just a dozen and a few exchanges, but I suppose the number of queues and exchanges is going to increase in the future.

+ +

Now I'm confused about the best architectural pattern for queues when moving toward a multi-tenant architecture.

+ +

Choices:

+ +

1) Multiple virtual hosts (one per tenant) with the same topology replicated per virtual host

+ +

2) Single virtual host with the same queues, exchanges, etc. shared among tenants.

+ +

The first choice seems to be more complicated to manage, as I should keep the topology synchronized for every virtual host (suppose 100 tenants means 100 vhosts). The second choice seems the easier one: I only need to pass the tenant identifier in the context of every message sent to the queues, so the consumer knows who owns the message and what to do with it.
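
+ +

As a minimal sketch of what choice 2 could look like on the publishing side (shown with the RabbitMQ Java client for illustration; the exchange name, routing key and header key are assumptions):

+ +
import com.rabbitmq.client.AMQP;
+import com.rabbitmq.client.Channel;
+import java.io.IOException;
+import java.util.Collections;
+
+public class TenantPublisher {
+    // Stamp each message with its owning tenant so shared consumers can dispatch it
+    static void publish(Channel channel, String tenantId, byte[] body) throws IOException {
+        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
+                .headers(Collections.singletonMap(""tenant-id"", tenantId))
+                .build();
+        channel.basicPublish(""orders-exchange"", ""some.routing.key"", props, body);
+    }
+}
+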

+ +

I would like some opinions, mainly with regard to the second choice, as it seems more feasible to me.

+",261565,,,,,43852.90486,Moving single tenant application with queue to multi tenants web application,,2,0,,,,CC BY-SA 4.0, +404133,1,,,1/22/2020 16:58,,-1,55,"

I completely understand, I think, how injecting a dependency of a class allows that dependency to be mocked and the class to be tested with the mocked version.

+ +

What I am not sure about is: if you use @Autowired to do this DI and then write a JUnit test, will newing up the class be okay? That is, will the annotation simply be ignored, so that the constructor can be used normally?

+ +

If so, is there ever a case when you want to use Spring in the unit test to inject the mock? It seems to me that if you are writing a unit test, you want it to be as simple as possible and so would avoid involving Spring if you could.

+",355659,,,,,43852.72361,If you use Spring dependency injection does the unit test require some Spring stuff?,,1,0,,,,CC BY-SA 4.0, +404135,1,,,1/22/2020 19:31,,2,269,"

When writing a library, designing a class or extending some existing API, we often need to express actions or relations involving noun entities:

+ +
    +
  • ""Place the ball in the bin""
  • ""Obtain the coat for the client""
  +
+ +

(I'm trying to use concrete examples without too many programming connotations here.)

+ +

At times, only the nouns are properly named, and then we write code such as: coats[client] (which could be a lookup using an index or a hash). At other times, we're writing an actual named function or method. And now we face a dilemma - which name do we go with?

+ +
cloackroom.obtain_coat(some_client);  /* vs */  cloackroom.obtain_coat_for(some_client);
+red_ball.place(the_blue_bin);         /* vs */  red_ball.place_in(the_blue_bin); 
+
+ +

and if we're writing functions, these will be:

+ +
obtain_coat(some_client);             /* vs */  obtain_coat_for(some_client); 
+place(discarded_ball, the_blue_bin);  /* vs */  place_in(discarded_ball, the_blue_bin);
+
+ +

I find myself torn between these two naming options:

+ +
    +
  • naming without a preposition vs
  • naming with a preposition as a suffix (_in, _for etc.)
  +
+ +

My dilemma is a combination of clarity/exactness-of-expression, aesthetics and succinctness. But other than succinctness which is obvious here, I can't even decide what's ""better"". Seeing just the method or function name, the suffix kind of irks me; but reading obtain_coat(some_client) is also aesthetically grating (as opposed to functions whose name is a transitive verb: refund(some_client)). On the other hand - a function is an action, so it makes sense to name it using just a verb, leaving the object-related prepositions for other syntactic elements. Some languages sorta-kinda support that, through named arguments:

+ +
ball.place(target_receptacle <- the_blue_bin)
+obtain_coat(requisitor <- some_client)
+
+ +

but let's assume that's not available to us.

+ +

My question: if you've faced this dilemma when designing (or rather, naming) some API - what were your significant considerations for and against the use of prepositions?

+ +

Note: If you have a language-specific or language-category-specific answer, that's perfectly ok; like I said, language features seem to have an impact on this choice.

+",63497,,63497,,43852.98611,43853.65208,Use prepositions in naming verb-phrase functions?,,3,6,,,,CC BY-SA 4.0, +404139,1,,,1/22/2020 20:14,,0,134,"

Suppose I have a function CalculateOutput(n) which creates an array of size n and repeatedly modifies this array by iterating through every element from 0 to n - 1 (say each pass is done in linear time). When the array reaches a particular order, the number of times CalculateOutput has walked the array is returned. The thing is that as n increases the output does not necessarily increase (e.g. CalculateOutput(4) = 5 while CalculateOutput(5) = 2). How could I determine the time complexity of this algorithm? Or what other information would I need to be able to determine the running time?

+ +

I believe that if there were some other method to determine the number of iterations over the array (call it m) for a given n, then CalculateOutput would be O(m * n). But I only know what this m is by running the algorithm previously described.

+",194271,,194271,,43852.87431,43853.56528,Time complexity of an algorithm whose output does not scale linearly with the size of the input,,1,5,,,,CC BY-SA 4.0, +404140,1,404149,,1/22/2020 20:16,,1,164,"

I recently found out that I have probably been using ball/socket notation in a wrong way all the time. Now I am confused by the different ways of drawing interface relationships in two regards (I think the answers to these are closely related):

+ +

1)

+ +

Given this small diagram:

+ +

+ +

I can create three components in a higher abstraction diagram each containing one of the pieces (and others) from the above diagram and the same relationship connectors.

+ +

However, if I use the ball/socket notation, the Foo interface is the ball/socket; in other words, it's no longer shown where this interface is defined:

+ +

+ +

How can I show the relationship of the iDefineFoo component which contains the Foo interface definition? Should it always be hidden?

+ +

2)

+ +

Up until now I misused the ball notation (provided interface) to show that a component provides the interface definition itself instead of a realization of it. So for a component A containing the public interfaces for an API, I want to show that another component B depends on this component A. Is the only feasible way to use a simple dependency connector? Or how would I show that the dependency is bound to an interface which is inside component A?

+ +

I know it's no longer a single question, but am I completely wrong - should I always model the public API and the implementation in the same component anyway?

+",272301,,,,,43853.67014,How to show relationships of the component containing the interface definition when using ball/socket notation in a UML Component Diagram?,,1,0,,,,CC BY-SA 4.0, +404142,1,404227,,1/22/2020 20:23,,0,126,"

I have the following common pattern that I need to use in an application:

+ +
    +
  1. Given a list of keys, call a function with a parameter to find values for the keys. The function may return null for a given key where no value can be found.
  2. For those keys missing a value, go to step 1 supplying only the keys missing values and using a different value for the function parameter. If no keys are missing a value or no parameter values are left to try, go to the next step.
  3. Return a map that contains all keys with their value (or null if not found) as found in the above steps.
  +
+ +

Here's a specific instance of this behavior:

+ +
public Map<String, BigDecimal> getValues(List<String> keys) {
+
+    Map<String, BigDecimal> map = restClient.get(keys, ""paramFirstTry"");
+
+    List<String> missingKeys = getKeysMissingValues(map);
+
+    if (!missingKeys.isEmpty()) {
+        Map<String, BigDecimal> mapSecondTry = restClient.get(missingKeys, ""paramSecondTry"");
+        map.putAll(mapSecondTry);
+
+        missingKeys = getKeysMissingValues(mapSecondTry);
+
+        if (!missingKeys.isEmpty()) {
+
+            Map<String, BigDecimal> mapThirdTry = restClient.get(missingKeys, ""paramThirdTry"");     
+            map.putAll(mapThirdTry);
+        }
+    }
+
+    return map;
+}
+
+private List<String> getKeysMissingValues(Map<String, BigDecimal> map) {
+    return map
+        .entrySet()
+        .stream()
+        .filter(entry -> entry.getValue() == null)
+        .map(Entry::getKey)
+        .collect(Collectors.toList());
+}
+
+ +

I'll have many instances of this behavior albeit with a different number of attempts and String parameter values. How would I make this code generic to handle the behavior described above? I'm using Java 8.

+ +

Update

+ +

By generic, I mean the restClient may be of different types and may have a different method name. The method should always take a List for its first parameter and the second parameter will always take a String. I should have seen the loop. I was looking at this similar post that is a more simple case of just trying to get the first non-null String.
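
+ +

Given those constraints, a minimal sketch of one generic shape (the BiFunction parameter abstracts over the client type and the method name; this assumes, as in the code above, that the client returns every requested key and maps the not-found ones to null):

+ +
import java.util.*;
+import java.util.function.BiFunction;
+import java.util.stream.Collectors;
+
+public static <K, V> Map<K, V> lookupWithFallbacks(List<K> keys,
+        BiFunction<List<K>, String, Map<K, V>> fetch, String... params) {
+    Map<K, V> result = new HashMap<>();
+    List<K> remaining = new ArrayList<>(keys);
+    for (String param : params) {
+        if (remaining.isEmpty()) {
+            break; // every key already has a non-null value
+        }
+        Map<K, V> attempt = fetch.apply(remaining, param);
+        result.putAll(attempt);
+        remaining = attempt.entrySet().stream()
+                .filter(entry -> entry.getValue() == null)
+                .map(Map.Entry::getKey)
+                .collect(Collectors.toList());
+    }
+    return result;
+}
+
+ +

A call site would then read something like lookupWithFallbacks(keys, restClient::get, ""paramFirstTry"", ""paramSecondTry"", ""paramThirdTry"").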

+",90340,,90340,,43853.66111,43854.83542,Populate values in a map from a series of function calls,,3,0,,,,CC BY-SA 4.0, +404155,1,,,1/23/2020 6:41,,2,325,"

Had an interesting discussion with our architect. It was related to replacing a plain DLL reference with a NuGet package. His worry was: ""If it is possible for a single NuGet package to add multiple DLL references, then NuGet package authors can decide to add a new DLL to the package. So when we update the NuGet package a new DLL is added, but our installer won't know about it, so it won't include it in the installation. And this problem will be revealed only when testers get to test the installed build of the software."" And this would be a reason not to use NuGet, as using plain binary references would make it obvious when a new DLL was added.

+ +

My stance on this problem is that the chance of this happening is too small to bother with. That NuGet package authors would consider this a breaking change and only make this change in a major release. And that mitigating this risk should not be by not using NuGet, but by creating test automation that stresses our installed software.

+ +

The question is: what is the risk of the above happening - of a NuGet package adding a new DLL reference in a non-major release?

+",56655,,56655,,43853.28681,43855.74653,Risk of NuGet package adding new reference DLL,,2,5,1,,,CC BY-SA 4.0, +404158,1,,,1/23/2020 8:16,,0,57,"

I developed a website for my client, which has been pushed to production. He paid the development charges separately and is now on an annual maintenance arrangement, for which I generate a bill every month.

+ +

Now he has come up with a new change request, which was not part of the development I had done. So I just wanted to know: should I charge him for each change request (on a per-hour basis), or should it come under the maintenance fees he has paid me?

+ +

I am new to freelancing and this is my first project, so I am not familiar with billing practices. Please let me know what would be the ideal way of handling change requests and maintenance.

+",84400,,90149,,43853.76111,43853.76111,Change Requests Billing,,0,3,,,,CC BY-SA 4.0, +404159,1,404163,,1/23/2020 8:20,,1,29,"

I was not sure how to title this question, but bear with me.

+ +

My company is building a new product and for it we will use a third-party service (let's call it ENB for short) to be responsible for many operations on our data that are not our core business. It's in .NET Core if it matters.

+ +

To illustrate, I will use the entity ApplicationUser, which is the user that logs in to our application. ApplicationUser will consist of account data as well as a Person, representing the real-life person that created this account. ENB needs the Person object and most of its data to function, so we have to create the Person object in ENB. This puts us in a situation where, if we store the Person in both our database and in ENB, we will have duplicated the data, which rings all kinds of alarm bells in my head.

+ +
public class ApplicationUser
+{
+    public int Id { get; set; }
+    < ... other fields ... >
+    public int EnbPersonId { get; set; }  // This is stored in ENB
+}
+
+ +

The above is the case for many of our entities: ENB needs their data to function, so we have to store it there. For others, some of the fields of a domain object are specific to our application and some are needed by ENB. This is why I say that the schema is split across sources.

+ +

One solution to this is to prefer storing in ENB whenever it's needed, and store the rest (as said, sometimes parts of an entity) in our database. Then, whenever a type is needed, we assemble it by querying both our own database as well as calling ENB through API (we don't have direct access to ENB's database).
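
+ +

A minimal sketch of that assembly step (localDb, enbClient and the Person setter are hypothetical collaborators, shown in Java-style syntax):

+ +
// Assemble the full entity from the two sources that each own part of it
+public ApplicationUser loadUser(int id) {
+    ApplicationUser user = localDb.findUser(id); // the fields we store ourselves
+    Person person = enbClient.getPerson(user.getEnbPersonId()); // the ENB-owned data, via its API
+    user.setPerson(person);
+    return user;
+}
+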

+ +

Another solution is to simply duplicate the data. This instinctively feels bad because we have to update two sources whenever anything is changed, and we have to make sure they stay in sync (what happens if one update fails and the other doesn't?).

+ +

What I'm looking for is if any solution is better than the other, and if there is a name for this pattern of splitting a schema across multiple sources i.e. our own and ENB.

+",352180,,352180,,43853.35903,43853.47083,Pattern for schema split across sources,,1,9,,,,CC BY-SA 4.0, +404160,1,404161,,1/23/2020 8:26,,4,334,"

I'm writing a number of Selenium test classes that use Helper classes which contain processes that are often reused (ie. accessing a particular page, entering something into a specific field, etc.)

+ +

Currently these Helper classes are instantiated in a BasicTest class which every test class extends.

+ +

I was thinking about using dependency injection to instantiate the Helperclasses, but I'm not sure if it's pertinent in this case? I know there will only ever be one definition of each Helper class, so would it still be a plus to instantiate them with dependency injection, or would it just be pointless work that complicates the project for no reason?

+",355709,,,,,43853.51319,Should dependency injection be used when there will only ever be one version of any class?,,2,4,1,,,CC BY-SA 4.0, +404166,1,,,1/23/2020 12:33,,-1,290,"

Earlier, we served this functionality via a web service, but we have decided to use DLL methods instead, because the products being served are on the same machine with a common database. So we started to create a library with .NET Core. This library will be used by separate .NET applications/projects. Should we use dependency injection? The application itself will have its own UI, but it is not a huge UI; the important part for us is serving other projects with the .dll. We don't have enough knowledge in this area, so we decided to ask here.

+",355733,,355733,,43854.52778,43854.52778,Is it logical to use dependency injection in .net core library project?,,1,1,,43853.76458,,CC BY-SA 4.0, +404167,1,404185,,1/23/2020 12:39,,1,136,"

I am studying a book about software design called: Nonfunctional Requirements in Systems Analysis and Design which talks about the Axiomatic Design Methodology.

+ +

This methodology has two axioms. One of them is the Independence axiom.

+ +

There is something however I did not manage to understand. It says about this methodology :

+ +
+

Each functional requirement should be satisfied without affecting + any other functional requirement. During the conceptualization process + the functional requirements are transformed from the functional domain + where they state what, to the physical domain where they will be met + by how. The mapping should be one design parameter (DP) to one + functional requirement (FR).

+ +

The relevance of the independence axiom has additional utility in + that individual + designs may be evaluated, not qualitatively, but quantitatively, based on the relationship to an ideal design. The ideal design is one + where the number of DPs are equal to the number of FRs, where the FRs + are kept independent of one another. All design alternatives may be + evaluated against the concept of an ideal design.

+
+ +

Question

+ +

The book suggests the ideal design maps one design parameter to one functional requirement. But what exactly is considered a design parameter within the software engineering scope and this methodology?

+",290636,,209774,,43853.78472,44054.16111,Independence axiom and ideal design explained,,1,3,,,,CC BY-SA 4.0, +404182,1,,,1/23/2020 17:36,,3,100,"

I'm designing a platform that will run robots developed by others.

+ +

Basically anyone could implement an IAutonomousAgent and register the implementation on the platform. At runtime, the platform will identify the available implementations, load the code, and instantiate and run the robots.

+ +

One thing that I have to be able to control is the very life of the robot. For instance it would be the responsibility of the platform to kill robots that have become unresponsive. And that is the catch.

+ +

The platform cannot rely on the good faith of the robot to acknowledge that the CancellationToken has been received and stop.

+ +

.Net Core does not have a Task.Kill method.

+ +

I could encapsulate the operation on a Thread but .Net Core does not implement Thread.Abort.

+ +

I considered running the robots in separate processes. I discarded it because: 1. either the user would need to implement a whole program (instead of only an interface) or I would have to dynamically compile the code into a program - all this because you can't just run an isolated piece of code; 2. the platform would provide several services in the form of events, which would have to be reengineered to work across processes; 3. the platform would have a finer grain of control over the robot's health via a healthcheck method, and this would be harder over IPC.

+ +

So, my question is: what can I do to guarantee that I will be able to communicate-to and kill a robot?

+",83026,,118878,,43853.98958,43853.98958,Technical architecture of platform on .Net Core,<.net-core>,0,10,,,,CC BY-SA 4.0, +404183,1,404184,,1/23/2020 18:15,,0,379,"

I am building an app to register and update children's information; this information is to be provided by their tutors. Every child can have multiple tutors, and a tutor can be a tutor for multiple children. This has been solved by specifying a table where I relate a student with each of their tutors by a unique registry, and characterize that relationship accordingly.

+ +

Now, we are facing the need to specify read/write permissions among some of the tutors that tutor a student. My idea is to create a new table where I relate each of the pairs formed by those tutors, and specify the read and write permissions. The problem is that those permissions are not reciprocal: a tutor can have permissions over another, while that other may not have permissions over the first one.

+ +

I don't know which is better: to have only one registry for each pair of tutors and add boolean fields for every type of possible permission, or to define a parent-child type of registry where the parent has permissions over the child. This second option would mean having two registries for each pair of tutors, one where tutor 1 is parent over tutor 2 and another where tutor 2 is parent over tutor 1.

+ +

Please, help me determine which would be better and why.

+",355750,,90149,,43853.76389,43853.80764,Double way parent child relationship in Django,,1,4,,,,CC BY-SA 4.0, +404188,1,404233,,1/23/2020 20:30,,-2,121,"

I was given a solution with many projects. Multiple projects call multiple REST APIs. These calls are scattered around the spaghetti code. Trying to figure out what calls are done in what sequence and what dependencies they have is daunting. Do you know of a way, maybe a design pattern, or a pre-existing tool, that I can use to have all the endpoints in one place and perhaps define their sequences? For example, we could have:

+ +
    +
  • One set apiCall1 -> apiCall2 -> apiCall3
  +
+ +

vs.

+ +
    +
  • Another set apiCallA -> apiCall2 -> apiCallB
  +
+ +

I don't know the right term to search for, and I don't want to introduce complexity. I'm trying to simplify it. I've searched for centralized API management, central REST APIs, but I don't want to provide the APIs, I want to call them.

+ +

Each call returns JSON, from which we extract data and copy it into DTOs.

+ +

I must use C# in Visual Studio 2017, not Core.

+",355692,,355692,,43854.9,43854.94931,Centralized REST API call management?,,1,4,,,,CC BY-SA 4.0, +404195,1,404196,,1/24/2020 1:09,,42,7927,"

I am in a predicament. I am working in a repository that makes a lot of database changes within its functions.

+ +

The function I am dealing with returns responseIds (but from a transformation, not from any database action). However, as a side effect, it adds an object containing those responseIds to the database.

+ +

So should I name it:

+ +
    +
  • getResponseIds: This highlights the return values. It is a very functional way of thinking, but obviously if I have function postToDB it makes no sense to use getStatusOfPost
  • addResponseIdToDB: This highlights the side effect, although I truly think that many of my functions just operate on the database (and tend not to return anything)
  • getAndAddResponseIdsToDB: Very informative, but very long.
  +
+ +

What are the pros and cons on the suggestions above? Or can you make a better suggestion yourself?

+",355774,,591,,43856.86875,43935.54097,Should I use AND in a function name?,,8,8,8,,,CC BY-SA 4.0, +404201,1,,,1/24/2020 5:33,,-2,100,"

I have this method:

+ +
private void ModerateTravel()
+{
+    var vm = new ModerateTravelViewModel();
+    moderate.ShowModerateTravel(vm);
+
+    Observable.FromEventPattern<string>(h => vm.Error += h, h => vm.Error -= h)
+        .Subscribe(DoSomthingOnError);
+
+    Observable.FromEventPattern<string>(h => vm.Close += h, h => vm.Close -= h)
+        .Subscribe(DoSomthingOnClose);
+}
+
+ +

This method is called every time a button is clicked, showing a child view that then waits for an error event and/or a close event.

+ +

This method does the job well. What I want to ask is: what is the impact of this code? Because it subscribes to events but never unsubscribes, will this method cause a memory leak?

+ +

Is it safe to call this method multiple times at runtime without causing a leak or any other bad effect?

+",232465,,232465,,43855.39514,43855.39514,Is this OK to call a method that subscribe to IObservable multiple times?,,1,1,,43854.58819,,CC BY-SA 4.0, +404207,1,404219,,1/24/2020 11:00,,1,577,"

Imagine that I have a system to manage sports teams. Let's not make it specific to a particular sport, but consider that each team consists of a number of players.

+ +

Therefore, I might have an operation like this to add a player to a team.

+ +
POST http://.../teams/myteam/players/
+
+ +

And something like this to update a player's details.

+ +
PUT http://.../teams/myteam/players/foobob
+
+ +

And this to get a list of players in a team.

+ +
GET http://.../teams/myteam/players
+
+ +

Now, each player has a specific position within the team - and I have a user interface where someone can change the positions of those players within the team. So I might drag player #1 to position 4, player #2 to position 1, and player #3 to position 6. The impact of this change should only take effect once the user has completed the entire operation - so I cannot update individual players on the fly.

+ +

Consequently, at the end of this process, I have some kind of object that maintains an updated mapping of player to order - Player Name -> Number. I now need to push these changes from the front-end to the back-end. However, given that I'm updating the team as a whole, what should my REST endpoint for this operation look like? I'm updating an entire collection of entities with a new ID, so it doesn't feel right to publish it to a player-specific endpoint. Hence, I think I should treat the team as the resource being updated.

+ +

I'm also wary of doing a PUT to the team, given that I'm modifying only a single field on each player, rather than replacing the 'team' entity in its entirety.

+ +

I'm leaning towards using PATCH to update the team, and then specifying the reordering as part of the body. This will then go off and update the individual players belonging to the team. Are there any obvious design issues with this?

+ +
PATCH http://.../teams/myteam
+
+[
+  {""player"": ""bob"", ""number"": 1},
+  {""player"": ""foo"", ""number"": 3},
+  {""player"": ""bazbar"", ""number"": 2}
+]
+
+",355802,,355802,,43854.52708,43855.56319,REST API Design - Reordering Collections,,5,1,1,,,CC BY-SA 4.0, +404215,1,,,1/24/2020 15:06,,1,67,"

I have a question regarding CI/CD procedure. I have 2 Jenkins jobs; the 1st one builds a binary file which is a dependency for the 2nd job to build successfully.

+ +

Should I push this binary from 1st job to the 2nd one or should I pull this binary from 1st job to the 2nd one?

+ +

I'm NOT talking about Git here...

+ +

Basically:

+ +

1) Push -> 1st pushes binary to 2nd job when it's done

+ +

2) Pull -> 2nd pull the binary from 1st job when its starts

+ +

I think CD would favour option 1), but at the same time I think the 2nd job should be in charge of getting all its dependencies and be able to build atomically.

+",142894,,326536,,43885.64306,43886.65625,Should I pull to another job workspace or should I push from it the 2nd job dependencies?,,2,3,,,,CC BY-SA 4.0, +404220,1,,,1/24/2020 16:16,,0,76,"

I'm experimenting with event sourcing for an application we haven't build yet. No, I won't implement this without any thought, I'm just experimenting.

+ +

My domain model looks somewhat like this. The whole model is created at once via a REST service call by external parties. Please note that the status history entities are not event-sourcing related. These come from some workflow engine we are using.

+ +

+ +

After that point, there are smaller changes to the model. Some details are changed. Some status history is added. Sometimes invoice items are added. Several applications change different aspects of this model, but not as invasively as the original creation of the model. Also, there are several applications that need to know details about the model, but in different forms. This is what led me to believe event-sourcing could be a good fit.

+ +

My main question: should the first creation of my model be one big event that says ""Invoice bundle created"", including all nested details needed for creation (in JSON or another form)? Or should I break up this initial event into several smaller events, somewhat like the picture below.

+ +

+ +

My second question: how should I deal with the files delivered? Should I store those in blob storage and refer to them from events, or should I include them as base64 strings in my events?

+",125194,,,,,43854.95486,Should I create one large initial event or break down in smaller events?,,2,6,,,,CC BY-SA 4.0, +404224,1,404225,,1/24/2020 19:09,,3,239,"

I'm obsessed with organization - it's probably the real reason why I enjoy coding. So I namespace everything. But I'm just curious if I'm doing it wrong by being redundant.

+ +

Consider this, which I think is correct

+ +
namespace System {
+   class File {};
+   class Monitor {};
+   class Logger {};
+}
+
+ +

Now consider this, which is what I seem to be doing

+ +
namespace System {
+   class SystemFile {};
+   class SystemMonitor {};
+   class SystemLogger {};
+}
+
+ +

Am I being redundant? It's just that, SystemFile is a file in the system. SystemMonitor monitors the system.

+ +

In use cases, which would you prefer?

+ +
class Application {
+   public:
+      System::Monitor monitor;
+      // or
+      System::SystemMonitor monitor;
+};
+
+",341140,,55614,,43854.88889,43855.96875,Am I using namespaces wrong?,,2,6,,,,CC BY-SA 4.0, +404226,1,,,1/24/2020 19:57,,1,59,"

We've got an aggregate root that has a position value so it can be ordered among the other aggregate roots; relative position is something that can change over time. We'd like to have a method that's ""Move X before Y"". Where should that method live?

+ +

I see a few places that could have that method:

+ +
    +
  1. Aggregate Root: X.MoveAfter(Y). I'm not a fan of this because it'd require one aggregate root to have access to the repository that owns the aggregate roots to potentially change the order of other things.

  2. Repository: Repository.MoveAfter(X, Y). The repository at least has access to all the data it'd need.

  3. Domain Service: Service.MoveAfter(X, Y). I lean towards this pattern, as it seems like the right place to make changes to multiple aggregate roots at the same time.

  +
+ +

Which of these is correct? Is there another option I'm not thinking of?

+",62762,,,,,43858.76736,Where should the code live to update an aggregate root's value based on another aggregate root of the same type,,2,1,,,,CC BY-SA 4.0, +404230,1,404249,,1/24/2020 22:10,,5,433,"

In any programming task, my preference is to write fail-fast code. That doesn't seem to be too controversial. However, I've also seen many developers say that constructors should do as little as possible. I'm finding that these two goals are often at odds, and conversations over fail-fast design vs. constructor simplicity sometimes devolve into statements of preference.

+ +

Consider a class that provides write access to a file. The class accepts a file path in its constructor which must be a path to an existing file. Some developers would say that a constructor shouldn't be accessing the file system, so I might just do a null check:

+ +
private readonly string file;
+
+public FileAccessProvider(string file)
+{
+    this.file = string.IsNullOrEmpty(file)
+        ? throw new ArgumentException(nameof(file)) : file;
+}
+
+ +

Now, if the provided file doesn't exist, we might not know about it until the class actually attempts to write data to the file. In this case, would it be acceptable to do a check for file existence in the constructor?

+ +
private readonly string file;
+
+public FileAccessProvider(string file)
+{
+    this.file = string.IsNullOrEmpty(file)
+        ? throw new ArgumentException(nameof(file)) : file;
+
+    if (!File.Exists(file))
+    {
+        throw new ArgumentException(""File does not exist."");
+    }
+}
+
+ +

This complicates my constructor, but better adheres to fail-fast design. We could even take this a step further and add code to the constructor that checks if the user has write permissions to the file.

+ +

What about a class that has to access a database or network resources?

+ +

Is there an accepted side to err on in situations like this?

+",355853,,,,,43857.63681,Fail-fast design vs. limiting constructor logic,,7,3,,,,CC BY-SA 4.0, +404242,1,404248,,1/25/2020 1:43,,8,1589,"

I'm a .Net and Angular developer who's been working with OO languages throughout my education and work history. Lately I've been thinking about spending some time with one of the functional programming languages, namely Haskell, to try and get a feel for what functional programming is all about.

+

My main reservations are about whether Haskell is a good language to start with and whether it is actually used anywhere in the industry. I guess the industry-use question applies more broadly to all functional languages. I have never personally met anyone who worked with a functional programming language in their day-to-day job. I'm basically trying to justify learning a functional programming language instead of another OO library or framework relevant to my field.

+",237712,,237712,,44123.19722,44123.19722,Functional programming - what to learn and who uses it,<.net>,4,4,1,,,CC BY-SA 4.0, +404250,1,404257,,1/25/2020 9:59,,40,9500,"

Over the years, I have seen quite a few questions on this site along the lines of ""can I invent my own HTTP response codes""? Generally asked by those who are developing both the server and client. The responses tend to go towards sticking with standard codes.

+ +

If I stick with standard HTTP status code numbers, is there any technical reason not to use custom text in order to differentiate between, let's say, multiple 501 responses?

+ +

To reiterate, no client except mine will ever see these values, which are returned by AJAX in response to authenticated requests.

+",979,,979,,43855.47361,43857.68889,Any technical reason not to use my own HTTP response code TEXT if I develop both server & Client?,,4,14,8,,,CC BY-SA 4.0, +404252,1,,,1/25/2020 10:23,,3,76,"

I have a FOSS library made public through GitHub. It's not a big library, it doesn't have a community around it or anything like that - but it does get a few dozen unique clones a week.

+ +

Now, I'm considering a design change in this library. It's not merely internal; it's a significant change to the API. It's ""my"" library, so I could just go ahead and do it, but I want to get users' opinions about this change.

+ +

Obviously, I don't have their details and can't actively ask them. How would you suggest I approach trying to poll their opinions?

+ +

Note: I'm interested both in ""Support/object"" binary information and in longer appraisals of the suggested change.

+",63497,,,,,43855.93819,How can I consult users of my FOSS library about a design change idea?,,2,0,,,,CC BY-SA 4.0, +404258,1,,,1/25/2020 12:49,,4,174,"

Would it be feasible to provide (or further) multi-core threading ability for programs that weren't originally designed for such?

+ +

And doing so by creating a ""virtual"" CPU core (or, for i7s with hyperthreading, virtual ""virtual cores"") which a program sees as a single core/thread, but on the other side of this virtual core is a program/tool/utility that splits the work across multiple cores/threads on its own? And for those programs already designed for multi-core support, the virtual core would enable an increase in the number of usable cores.

+ +

I feel this would be useful given the trend in recent years of increasing core counts rather than overall CPU speed increases, as CPUs run up against the Moore's Law ""ceiling"", and the seemingly slow or trailing push in software development to take advantage of this growing number of CPU cores.

+ +

I realize something like this probably wouldn't be simple or easy to accomplish, but I'm mostly wondering if it would be feasible to do.

+",48835,,48835,,43855.54306,43857.03611,Would it be possible to abstract multi-threading ability for programs not originaly designed for such?,,4,6,1,,,CC BY-SA 4.0, +404268,1,404281,,1/25/2020 21:46,,2,80,"

I've been going over some tutorials on machine learning using Python and libraries including SciKit Learn and Tensor Flow. (Basic tutorials like creating an algorithm to predict a price given input values).

+ +

I've found that these tools are extremely powerful, but they also appear to be quite slow, and require a lot of tweaking due to overfitting or underfitting the data.

+ +

This wouldn't be a problem if you were dealing with a batch of data (such as receiving a new set of data once a day that you want to calculate predictions on), but I can see it being very difficult to run something in real time. I know that there are many sites and applications that are executing machine learning algorithms in real time, so I'm trying to understand how they realistically accomplish it. A few possibilities I can think of are:

+ +
    +
  1. They use these ML tools (such as Python, SciKit Learn, Tensor Flow) in real time to predict real time values and continually learn, but require a huge amount of computing power and optimization to keep performance at acceptable levels.

  2. They use these ML tools for real time predictions, but only update the algorithm periodically in some form of offline or batch process. This still requires a lot of computing power and optimization for the real time predictions, but not nearly as much as option 1.

  3. They use these ML tools offline to figure out the algorithm, and then translate that algorithm into some other language better suited for their need (i.e. Java, C++, JavaScript, etc.). This would seem very tedious to do.

  +
+ +

Or is there a 4th option that I'm missing?

+",355909,,,,,43856.31875,Machine Learning in Real Time,,1,0,,,,CC BY-SA 4.0, +404276,1,,,1/26/2020 2:07,,1,16,"

This is probably a design/architectural question. My app uses RSA-initiated SSO using SAML for authentication, and from there on my app uses its own session to manage the request. There is a flaw in this design: if the user signed out of the RSA, my app session is still active.

+ +

How can I check that the RSA session is active on every request? Does Spring Security provide any out-of-the-box solution for this?

+",92868,,,,,43856.08819,Spring Security SAML and RSA session,,0,0,,,,CC BY-SA 4.0, +404277,1,404279,,1/26/2020 5:01,,2,106,"

I'm using MSTest. I have some async functions I'm unit testing that rely upon a lot of different methods whose results need to be awaited. Sometimes the unit tests stall in a ""running"" stage for a very long time, and I'm attempting to pinpoint which blocking operations are taking the longest.

+ +

The solutions I can think of around this are: 1) Manually stepping through the code line by line. This is not practically possible, since I'm using loops over the same blocking operations, which sometimes pass instantly and other times take much longer. The loops can contain many thousands of iterations with different variables, so doing a ""manual line by line debug"" is not feasible.

+ +

2) Setting a timeout for each blocking operation which, if reached, will go into a conditional block of code with a breakpoint, so that I can immediately see all the program state which led to the blocking operation that timed out.

+ +

Option 2 does exactly what I need; however, it would require extensive refactoring, and in actuality the functionality isn't needed for anything besides this one debug test. I was wondering if there is a built-in solution in either Visual Studio, JetBrains Rider, or some other IDE or plugin that would allow me to view, in real time, the debugger executing each line without me manually needing to click ""Step Over"" for each line segment. I understand I'd never be able to ""catch"" the debugger moving in this manner, so to speak, since it would be moving quickly - unless of course it was being blocked somewhere for a long time, at which point I'd be able to see exactly what is stalling it without needing to custom-write timeout logic for every single blocking operation.

+ +

Does something like this exist?

+",355487,,325277,,43857.68958,43857.68958,How to follow execution path of a Unit Test in real time without manually stepping through the code line-by-line?,,1,1,,,,CC BY-SA 4.0, +404280,1,404284,,1/26/2020 7:20,,3,2546,"

I want to make a sequence diagram for login. The first step is that the user accesses my website, then the system redirects to the login form. The user then enters a username and password, the system validates them, and on success it directs the user to the homepage.

+ +

My question is: how do I put UI such as the login form into the sequence diagram? I used something like an interface called Login Form, but I think that's the wrong way, because as far as I know an interface is like a class in C#/Java. Here's my sequence diagram.

+ +

+ +

I would appreciate any help. Thank you.

+",350020,,,,,43856.53542,How to write UI in sequence diagram?,,2,8,1,,,CC BY-SA 4.0, +404283,1,404347,,1/26/2020 9:27,,0,173,"

I am trying to understand the concept of dependency inversion, and I think I finally got the concept. However, now I am struggling with another issue, which is the selection of the framework when implementing it in .NET, at least.

+ +

Say I have a business logic assembly which needs a data access class. For that, I add an interface for the data model and the data access class in the business logic assembly (as shown in the drawing below). The actual implementation of the data access interface(s) is done in a different assembly (project).
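
+ +

In code form, the arrangement described above looks roughly like this sketch (Java-style syntax; the names are hypothetical, and each comment marks which assembly the type would live in):

+ +
// Business-logic assembly: owns the abstraction it depends on
+public interface ICustomerDataAccess {
+    Customer getCustomer(int id);
+}
+
+// Data-access assembly: references the business assembly, not the other way around
+public class SqlCustomerDataAccess implements ICustomerDataAccess {
+    @Override
+    public Customer getCustomer(int id) {
+        // ...actual database code would go here...
+        return null; // placeholder
+    }
+}
+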

+ +

This ensures that my business logic does not need to know anything about the implementation of the data access. However, it also means that if my business logic is implemented in .NET Core, then my data access assembly needs to be implemented in .NET Core as well.

+ +

Isn't there any way to avoid this, so I don't couple the framework of my data access assembly to that of my business logic application?

+ +

In other words, can I get the left part of my drawing to be in .NET Core and the right part to be in .NET Standard (for instance)?

+ +

+",271714,,209331,,43857.80833,43857.86597,Avoid framework lock-in with dependency inversion,<.net>,2,6,,,,CC BY-SA 4.0, +404286,1,404339,,1/26/2020 11:55,,-1,56,"

Most binaries have jump and control flow instructions that are relative to other locations in the binary. For example: if I modify an instruction around 0x12341232, and there is an instruction somewhere in the code that would do jmp 0x12341234, that 0x12341234 location will no longer refer to the same instruction.

+ +

So,

+ +
# Original
+0x12341232:  mov eax, ebx
+0x12341234:  call sym.hello
+...
+
+0x13371337:  jmp 0x12341234 # points to correct instruction
+
+--------
+
+# Modified
+0x12341232:  call 0x3232
+0x12341237:  call sym.hello
+...
+
+0x13371337:  jmp 0x12341234 # points to incorrect instruction
+
+ +

AFAIK, ARM's ISA would have a lot less trouble handling this issue since it has fixed-length instructions. x86 would have a big issue with this.

+ +

My question is: Is there a framework or research paper that tackles this issue?

+ +

Many thanks

+",283720,,,,,43857.61597,Modify a binary and account for relative jumps,,1,4,,,,CC BY-SA 4.0, +404289,1,404300,,1/26/2020 13:35,,1,418,"

I had to increment my integer twice in a loop, so I thought I would try and be clever:

+ +
for (int i = 0; !sl.isEmpty(); ++i++) { ... }
+
+ +

++i++, however, is not an assignable expression, at least in GCC. Is there a technical reason why this is the case - that is, would it logically be impossible to implement such a syntax rule?

+ +

In my mind,

+ +
int i(0);
+QTextStream(stdout) << i++; // 0
+QTextStream(stdout) << ++i; // 2
+i = 0;
+QTextStream(stdout) << ++i++; // impossible, but it should give 1
+
+ +

However, I assume that there are good reasons why this cannot be the case. Just curious.
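
+ +

For comparison, a minimal sketch in Java, where the same restriction exists for essentially the same reason - i++ evaluates to a plain value rather than to the variable itself, so there is nothing left for a second increment to apply to:

+ +
int i = 0;
+int j = i++; // i++ evaluates to the old value (0); the expression no longer denotes the variable i
+// ++(i++);  // does not compile: the increment operator needs a variable, not a value
+i += 2;      // the conventional way to advance by two in a single statement
+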

+",136084,,,,,43866.35556,"Logically, is there a reason why ++i++ can not be a valid expression?",,2,5,,,,CC BY-SA 4.0, +404290,1,,,1/26/2020 14:04,,-1,63,"

When evaluating whether to use webhooks or polling for an architectural decision I've been reading some articles. Most highlight the obvious drawbacks with using polling such as:

+ +
    +
  • wasted resources (most polls don't return an update)
  • added latency (due to polling timeout)
  +
+ +

So most recommendations boil down to ""use webhooks instead of polling"". Putting my skeptic hat on, though, I can't help but wonder if there are downsides and alternative scenarios. So my question is: what are examples of scenarios where the use of polling would be preferable to webhooks?

+",355939,,,,,43856.73889,Are there cases when using polling is preferable to Webhooks,,1,2,,,,CC BY-SA 4.0, +404292,1,404298,,1/26/2020 14:33,,-1,441,"

So I am in the process of finishing an app (Android for now), and I am wondering whether I should have papers that prove that I made the app.

+ +

It's the first time I am publishing an app, and I am concerned about how to prove that I made the app and that it belongs to me.

+ +

In other words

+ +

If I went to the Instagram app's developer and asked, ""Prove that you made this app"", what would they show me?

+ +

Thanks for your help.

+ +

EDIT

+ +

So the developers of WhatsApp, Instagram, Skype, etc. have nothing to prove that they made the app? They just publish to the store?

+",355942,,355942,,43856.62431,43856.91736,How do I prove an app is mine?,,4,19,1,43856.71736,,CC BY-SA 4.0, +404293,1,,,1/26/2020 14:51,,0,72,"

Well, I've been hammering away on this for about a week now with no practical progress because I can't find mental fluidity with the concepts I'm trying to wrangle. I'm Officially Stumped, and would humbly appreciate any insight.

+ +

Some minimal background: for several years I've used my own small PHP (CLI-only) introspection helper that provides a d() alternative to print_r() or var_dump(). It features type colorization (using xterm's 256 color palette), and argument literalization (d($a) might show $a: ""hi""), and among other nice features, a (much) more compact output layout.

+ +

My problem

+ +

My question is to do with how the layout system decides how to present nested items. If I have input like

+ +
$a = [
+  ""abcde"", ""fghij"", ""klmno"", ""pqrst""
+];
+
+d($a);
+
+ +

I'll get something back like

+ +
  (test:3) $a: [""abcde"", ""fghij"", ""klmno"", ""pqrst""]
+
+ +

but if I were to add a small array in the middle that would cause the output to be too long for one line, the output might change to

+ +
  (test:3) $a: [
+    ""abcde"",
+    ""fghij"",
+    [1, 2, 3, 4, 5],
+    ""klmno"",
+    ""pqrst""
+  ]
+
+ +

This is really nice and seriously tidies up what PHP would output intermixed with a *very* large serving of line breaks.

+ +

In the example above you might notice that the outer array expands into multiline mode because it would (rightly) be too long for the line, while the inner array stays in compact mode because it (also rightly) will fit just fine.

+ +

I'm not sure if the way I used to do this was arguably terrible or arguably viable: every single dump_* function would speculatively re-execute itself in an assumed non-multiline mode, with the magic $len_test switching my internal output-buffering system into length-measurement mode. Then the magic $len value would contain the amount of data output, and I could straightforwardly check whether it was too long.

+ +

Like this:

+ +
function dump_string(...) {
+
+  if (!self::$len_test) {     // length test?
+    self::$len_test = true;   // if not, enable
+    self::dump_string(...);   // and re-execute ourselves
+    $multiline = (self::$len > LINE_LIMIT);
+    self::$len_test = false;  // this is the outer scope; go back to
+                              // text-buffering mode now
+  } else {
+    $multiline = false;       // test length assuming no newlines etc
+                              // (this is the recursive/inner scope)
+  }
+
+  ...
+
+}
+
+ +

This method had two problems.

+ +

Firstly, not only did properties, getter functions, etc. get accessed twice, but objects were inconsistently double-instantiated due to the inner function being recursed into while references were held by the outer context. (To add some fun entropy, this would only happen up to the point the function realized it had emitted too much output and bailed out.) This caused spl_object_id() to return generally confusing and inaccurate results.

+ +

Secondly, the function recursion caused the reference recursion detection elsewhere in my code to become unmanageably hairy, because it basically had to do recursion detection across its own recursion boundaries.

+ +

I've worked out a solution for both problems that moves the ""is this too long"" logic into the output layer and allows the dump_* functions to access each value/getter/property precisely once... but it introduces a huge issue of its own - it limits the is-this-block-too-long analysis to discrete blocks in a way that does not take into account the existing contents of a given line, particularly the (sometimes) large amounts of padding that lines might have in them.

+ +

Given the following example,

+ +
  (test:3) $a: [
+    [
+      [
+        [
+          [
+            ['aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa']
+          ]
+        ]
+      ]
+    ]
+  ]
+
+ +

my new approach only considers the length of the 'aaaa...', disregarding the padding at the start of the line. That line should definitely be wrapping, but as far as the new logic is concerned it doesn't need to because it's blind to the padding.

+ +

This is what I'm Officially Stumped on.

+ +

My partial solution

+ +

You may feel like taking a crack at reasoning through the problem space before reading on.

+ +

To cut a long story short, over about 4 days of very slow mental visualization progress (I had absolutely no idea how to visualize any of this on the computer) I came up with the following.

+ +

I have an out() function that accepts text, a stack depth, and a display mode (-1 = always print, 0 = only print if not in multiline mode, 1 = only multiline). I feed this function all the text I want presented for both single- and multiline modes. This function appends to a display list like this:

+ +
[
+  [ 0, -1, ""  (test:3) $a: ["" ], # always print           | 16  0  (lengths,
+  [ 0,  0, "" ""                ], # only single-line       | -      explained
+  [ 0,  1, ""\n    0:""         ], # only multiline         | 23  0  below)
+  [ 1, -1, ""'abc'""            ], # always print (level 1) | 28  5
+  [ 0,  0, "" ""                ], # only single-line       | -
+  [ 0,  1, ""\n  ""             ], # multiline              | 31
+  [ 0, -1, ""]""                ], # always                 | 32
+]
+
+ +

If only the -1 and 0 parts of the above were printed, you'd get

+ +
  (test:3) $a: [ 'abc' ]
+
+ +

while if only the -1 and 1 parts were printed, you'd get

+ +
  (test:3) $a: [
+    'abc'
+  ]
+
+ +

Next, I iterate through the display list, keeping a running total of text length for single-line (0) and output-anyway (-1) modes, per stack depth, and letting the computed length cascade up the stack depth chain. So in the example above, the length at depth 0 is 27 but is counted as 32 because it contains depth 1, which has length 5. This allows me to straightforwardly examine the text lengths per stack depth, and set a multiline flag for the outermost stack depth that is over the limit. Going back to the example above with the nested arrays, this causes the outer array to go multiline while the inner one does not. Win.
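
+ +

In pseudocode form, that pass looks roughly like this (a Java-flavoured sketch; Entry, displayList, maxDepth and LINE_LIMIT stand in for the structures described above):

+ +
int[] lengths = new int[maxDepth + 1];
+for (Entry e : displayList) {
+    if (e.mode == 1) continue; // multiline-only text (padding etc.) is deliberately not measured
+    for (int d = 0; d <= e.depth; d++) {
+        lengths[d] += e.text.length(); // a child's length cascades up to every enclosing depth
+    }
+}
+boolean[] multiline = new boolean[maxDepth + 1];
+for (int d = 0; d <= maxDepth; d++) {
+    multiline[d] = lengths[d] > LINE_LIMIT; // any depth over the limit switches to multiline mode
+}
+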

+ +
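
To make the cascade concrete, here is a minimal sketch of that pass (in Python for brevity; the names are mine and this is not the actual implementation):

+ +
# Sketch: sum single-line lengths per depth, cascade inner totals into
+# their enclosing depths, and flag every depth over the limit as multiline.
+def multiline_depths(display_list, limit):
+    totals = {}  # depth -> accumulated text length
+    for depth, mode, text in display_list:
+        if mode in (-1, 0):  # count always-print and single-line text only
+            totals[depth] = totals.get(depth, 0) + len(text)
+    for depth in sorted(totals, reverse=True):  # cascade up the depth chain
+        if depth > 0:
+            totals[depth - 1] = totals.get(depth - 1, 0) + totals[depth]
+    return {d for d, n in totals.items() if n > limit}
+
+ +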

The final piece of this puzzle is that text is only emitted in multiline mode or if I set a force-flush flag (which I use just before my dump() routine exits). Whenever out() is called with a depth argument that has multiline mode set, that text is output immediately. This allows the display list to have a very small buffer: it only buffers a few KB of data at most, ever, which was a critical requirement.

+ +
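
And a similarly rough sketch of the emit/flush behaviour (again with hypothetical names, assuming the multiline flags computed above):

+ +
import sys

+buffer = []  # the small display-list buffer

+def out(text, depth, mode, multiline, force_flush=False):
+    buffer.append((depth, mode, text))
+    if multiline.get(depth) or force_flush:
+        for d, m, t in buffer:  # emit buffered text in the chosen mode
+            if m == -1 or m == (1 if multiline.get(d) else 0):
+                sys.stdout.write(t)
+        buffer.clear()
+
+ +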

Unfortunately, I see no viable way within this model to factor the padding into the length calculations. I also have no idea what besides this model would also work.

+ +

The way I reason about this model, it doesn't differentiate between

+ +
[[[""abcde""]]]
+
+ +
[
+  [
+    [
+      ""abcde""
+    ]
+  ]
+]
+
+
+ +

and

+ +

[ \n [ \n [ \n abcde \n ] \n ] \n ]

+ +

it just looks at the stack depths and enables some text (which happens to be padding) as a side-effect of the text within a particular scope being too long.

+ +

(Note that I add the padding with mode 1, which is deliberately excluded from the length computations; if padding were counted, it would inflate the totals at every stack depth.)

+ +

As might be discerned from the way I've written about this, this type of logic is very alien and unfamiliar to me; I don't really know what I'm doing. If there's anything I can clarify, please do let me know. And if there are any resources I can chew on to (hopefully) provoke my brain to grow/develop in ways that would be helpful to understand this kind of thing better, I would love to hear about them!

+",117377,,,,,43856.65069,Recursive speculative display list engine - computing text length across stack boundaries,,1,0,,,,CC BY-SA 4.0, +404306,1,,,1/26/2020 19:53,,-3,117,"

My app is using this kind of layered architecture:

+ +

Controller > Service > Repository > Data Mapper > Persistence

+ +

Often I notice that my service methods are just directly calling repository methods without ""doing anything else"" to them. E.g.,

+ +
class UserRepo
+{
+    // ...
+
+    public function getUser($id)
+    {
+        return $this->dataMapper->findOne($id);
+    }
+}
+
+class UserService
+{
+    // ...
+
+    public function getUser($id)
+    {
+        return $this->userRepo->getUser($id);
+    } 
+}
+
+ +

I notice this kind of scenario is prevalent all throughout the app: the service method doesn't do anything special and just calls the repository method. That said, there are still some cases where the service method actually does some business logic before it calls the repo (see the sketch below).

+ +
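
For illustration, here is the kind of minority case I mean, sketched in Python for brevity (the names and the mailer are made up):

+ +
# Hypothetical service method that adds real behaviour before delegating,
+# unlike the pass-through methods shown above.
+class UserService:
+    def __init__(self, user_repo, mailer):
+        self.user_repo = user_repo
+        self.mailer = mailer
+
+    def deactivate_user(self, user_id):
+        user = self.user_repo.get_user(user_id)
+        user.active = False                  # business rule lives here
+        self.user_repo.save(user)
+        self.mailer.send_goodbye(user)       # orchestration, not data access
+        return user
+
+ +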

Now, I thought I could save some time by ditching the repository pattern and just injecting the data mapper directly into the service classes. E.g.,

+ +
class UserService
+{
+    // ...
+
+    public function getUser($id)
+    {
+        return $this->dataMapper->findOne($id);
+    } 
+}
+
+ +

My question is, is it acceptable to ditch the repository pattern and just use the data mapper directly in the service class?

+",319317,,,,,43857.32847,Is it acceptable to ditch the Repository pattern in a layered architecture?,,1,4,,,,CC BY-SA 4.0, +404309,1,404328,,1/26/2020 23:21,,4,397,"

As far as I know, Data-Oriented Design differs a lot from OOP. It encourages reusability of data, discourages polymorphism, etc. And because SOLID leans heavily on OOP (especially Interface Segregation, for obvious reasons), how do you do it with DOD? Using the functional paradigm, maybe? But how? Or if you don't / can't use SOLID with DOD, what's the common practice for writing clean code in DOD?

+ +

Thanks in advance

+",355943,,,,,44139.36319,How do you do SOLID with Data Oriented Design?,,3,0,1,,,CC BY-SA 4.0, +404311,1,,,1/27/2020 1:14,,0,142,"

This is quite theoretical, and I hope it's the right SE site.

+ +

A couple of years ago I worked at a company using Maya 2014 (I think that was the version) with a couple of other 3D Artists.

+ +

Eventually I figured out that there's a short, reproducible sequence of operations that I can do that will lead to the crash of the program without fail - 100% rate, with a catch.

+ +

The Sequence, for the sake of completeness:

+ +
+

Open a fresh Maya, create a polygon cylinder, set the rendering style of the perspective view to shaded if it isn't already, press space twice to get to the side view in wireframe, select all faces, deselect the middle faces, delete the (now selected) caps, select one of the border rings, extrude and pull out the edge, press 3 to set the cylinder to smoothed rendering, then press space to leave the wireframe side view. At this point, the other views would fail to render, Maya would encounter a fatal error and close itself.

+
+ +

I could perform this on any PC, with any hardware, on a fresh or highly modified version of that Maya, without fail.
+The catch: only I could do it.

+ +

When I showed this bug to my colleagues and they tried to replicate it, Maya did NOT crash. (I think there was one other person to whom this also happened, maybe.)
+We could perform the sequence on the same PC on the same install back to back; it would crash for me but not for them.

+ +

We theorized it might be that I'm typing faster than the others, confusing the system. I performed the steps slowly yet it still happened. The sequence was also only the fastest way to replicate the crash. ""Tabbing out of a smoothed wireframe view after performing an operation in it"" was the only condition, and all projects would crash on me if I didn't pay attention to the conditions.

+ +

What we did NOT test was what would happen if I would perform one half of the sequence and a colleague the other, trying to narrow it down to a certain step.

+ +

So, the bug occurred reliably:

- Independent of hardware
+- Independent of it being a fresh install
+- Independent of context (big or small project)
+- Independent of operation speed
+- Apparently, dependent on the person performing it.

+ +

I jokingly called it the ""Curse of the Maya"" and we left it at that - but to this day I wonder, what could theoretically cause a bug to appear seemingly dependent on the user? From all my understanding of software that just doesn't make sense.

+",355965,,154036,,43857.35069,43857.35069,"What could cause a bug to be ""Person-Dependent""?",<3d>,1,5,,,,CC BY-SA 4.0, +404313,1,404318,,1/27/2020 1:46,,1,173,"

I am wondering about how to make lower-level code reusable when the Dependency Inversion Principle (DIP) is used.

+ +

In the book Clean Architecture by Robert C. Martin, the DIP is described such that the higher-level components define the interfaces they need, and then the lower-level components implement these interfaces, as in

+ +
+-----------------------------------+
+|                                   |
+|  Application ------> Service <I>  |
+|                         ^         |
++-------------------------|---------+
+                          |
+                     +----|--------+
+                     | ServiceImpl |
+                     +-------------+
+
+ +

That way,

+ +
  • the application code is protected from changes in the lower-level components
  • the application does not depend on the source of the ServiceImpl
  • ""the source code dependencies are inverted against the flow of control"".
+ +

Maybe the image in my head is wrong, but I always imagine that the Service in the picture above may be something like a logging service, so maybe the interface has methods like debug(), info() etc., and then the implementation could log to a file, to a database, or whatever.

+ +

I find two things a bit weird about this approach:

+ +
  1. The general-purpose logging code in ServiceImpl that doesn't know anything about the data it is logging needs to import an interface from the big valuable application module, is that right? Somewhere in my lower-level code I have a line that looks like from high_level_app import ServiceInterface, and that doesn't feel right. It feels like I am creating an artificial dependency on the higher-level module here (see the sketch below).
  2. From a more semantic viewpoint, the functionality in my lower-level logging implementation could obviously be reused across the whole organization, but since the higher-level code defines the interface and (as per the point above) I probably have a source code dependency on the higher-level code, does that limit the reuse?
+ +
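
To make point 1 concrete, here is roughly what I mean (Python, with made-up names). One way people seem to resolve it is a small adapter owned by the application, so the reusable library never imports the app:

+ +
# Reusable low-level library: knows nothing about the application.
+class FileLogger:
+    def write(self, level, message):
+        print(level, message)  # stand-in for writing to a real log file
+
+# Application-owned interface (what the DIP diagram calls Service).
+class ServiceInterface:
+    def info(self, message): ...
+
+# Application-owned adapter: the only place the two worlds meet.
+class LoggingAdapter(ServiceInterface):
+    def __init__(self, logger):
+        self.logger = logger
+    def info(self, message):
+        self.logger.write(""INFO"", message)
+
+ +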

So I would like to ask:

+ +
  • Does the Dependency Inversion Principle in general inhibit the reuse of lower-level libraries? Or is my example of the logging service flawed, because that's not what the DIP is aiming at?
  • When using the DIP, how would I go about creating reusable lower-level components, rather than only making them usable for the particular higher-level component I am dealing with right now?
+",345293,,,,,43858.78889,Dependency Inversion Principle and Lower-Level Code Reuse,,3,0,1,,,CC BY-SA 4.0, +404319,1,,,1/27/2020 4:55,,4,307,"

I'm a C# developer & never worked before on either unit tests or Selenium browser test automation.

+ +

For a current assignment, there is an existing Visual Studio solution that has a project that does test automation. This solution has no unit tests.

+ +

For example, there is a form on the live site that captures customer data. This Selenium VS project, on build, will automatically open the browser, fill the form fields with random values, click on submit, and verify the next steps, like the 'Thank you' screen, database entries, etc. (These test cases are created by developers, not by QC.)

+ +

This is not unit testing, but more like how QC would test after development is complete, right?

+ +

For someone who has never worked on tests before, one advantage I see is that each time the code is deployed to a new environment like STG, UAT, etc., the developer can run this project by changing the URL/connection string, and it will execute all the test cases, saving time and acting like regression testing too.

+ +

Is it a good approach to use this kind of testing instead of conventional unit tests? If not, what major advantages/disadvantages does one model have over the other?

+",148582,,148582,,43857.82222,43857.8625,Unit tests vs Automation testing,,2,1,,,,CC BY-SA 4.0, +404322,1,,,1/27/2020 5:50,,0,310,"

Please consider the following code:

+ +
class baseclass {
+    public $hideme;
+    public function getit() { return $this->hideme; }
+    public function setit($value) { $this->hideme = $value; }
+}
+
+class derived extends baseclass {
+    private $hideme; //can't do this
+    private function getit() { //can't do this
+        return 0;
+    }
+}
+
+function doobee(baseclass $obj) {
+    echo $obj->getit() . ""\n"";
+}
+
+ +

On the other hand, in Java you can do that with fields. Also, even in PHP you can do the reverse: a private property in the base class can be redeclared as public in the child class, because that creates a new property in the child class instead of overriding it. Why can't it do the same when redeclaring a public property as private in the child class?

+ +

This answer on SO says, The rationale being you shouldn't be able to hide members from the base class.... but we are not hiding anything from the base class! The base class's objects will still have access to those public properties and functions. It's just that the child classes' objects would have either two versions (private and public) of the same property, or just have the visibility overridden.

+ +
  1. What if I want grandchild classes inheriting from the child class (derived in the code) to not have the public properties of the grandparent class ($hideme in baseclass)?

  2. If it is a general rule of inheritance in OOP, as the SO answer suggests, why is it not valid in Java?

  3. What's up with the methods? #1 applies to methods as well.
+ +

P.S.: Please note that #1, #2 and #3 are not three different questions. Asking them as three separate questions wouldn't make sense, as they all circle around the same problem.

+",106313,,,,,43857.34722,Why does php not allow to decrease visibility of class properties and methods in the inheriting class?,,1,0,1,,,CC BY-SA 4.0, +404323,1,404325,,1/27/2020 7:26,,-3,278,"

I want to use dependency injection in my new .NET Core project, but my manager thinks its usage is an anti-pattern. I already know the benefits of dependency injection, but my manager cares about architecture more than anything. Anyway, when I searched, I read plenty of reviews with differing opinions, and I am still not sure. So I am asking here: is it an anti-pattern or not? If it is not, what is the best practice? Is it okay to reference the data and business layers in the UI layer? I have always used this structure, but I want to research best practice around it more deeply.

+",355733,,355733,,43857.35208,43857.37431,.Net Core Dependency Injection is an example of anti-pattern?,<.net>,1,7,,,,CC BY-SA 4.0, +404326,1,,,1/27/2020 9:52,,5,221,"

I've just finished my studies in Computer Science and now I'm working.
+The problem is I'm the only computer scientist in my company and I'm probably picking up a lot of bad habits. I would like to correct them in case one day I have to work in a team.

+ +

I hope this question belongs in this community: +In a C++ Git project, where do I put dependencies?

+ +

How can I be sure that everyone will have the same versions of the dependencies without uploading everything to GitHub, for example? Do I have to give the dependencies I downloaded and used to everyone, or do I just have to let people know which ones I used and let them install them however they want?

+ +

To give an example, I have a Visual Studio project to which I added a lot of dependencies, but they are in a personal repository, so I configured VS with them. When I put the project on Git, the configuration will be saved, but it will be my own configuration. How can anyone else use this project?

+",322387,,90149,,43857.59653,43862.81736,Where can I put project dependencies,,2,5,,,,CC BY-SA 4.0, +404331,1,,,1/27/2020 13:02,,2,146,"

It's my first post here; as I understand it, SO is a platform to find fixes, and this is where to ask more general design questions. Correct me if I'm wrong.

+ +

I'm working on a project in Laravel and I started wondering where to put my code.
+It feels like this would fit perfectly in my Item model, but I've always put this kind of logic on the controller side.

+ +
public function addComment($text, $isLog = false) {
+    $comment = new ItemComment;
+    $comment->item_id = $this->id;
+    $comment->user_id = auth()->id();
+    $comment->is_log = $isLog;
+    $comment->comment = $text;
+    $comment->save();
+}
+
+ +

Would it be good or bad practice to put this in Item and call it like $item->addComment(...)?
+I also have an ItemComment model, but something like $item->comments()->add(...) wouldn't work, as ->comments() returns a relationship.

+",356010,,356010,,43857.56111,43895.12778,Should I put add() method in Laravel model,,1,2,,,,CC BY-SA 4.0, +404332,1,,,1/27/2020 13:24,,3,151,"

Premise:
+- Two services A and B
+- Resource X has owner U, and is managed by service B

+ +

Now, I need to handle these auth scenarios:
+1- End-user needs to directly use service B's API to access X
+2- Service A needs access to X, because end-user made a call to A (i.e. delegation)
+3- As part of async comms (e.g. a job or an event), service A needs access to X (no end-user involved).

+ +

I've been researching a lot (a month now), but I still feel confused. For example, I've looked at OAuth/JWT integration with Kong,
+but it wasn't clear how #3 above would fit. Istio seems ideal since it can assign identities to services.

+ +

If I go with Istio:
+- what part of auth will the application (both A and B) be responsible for?
+- what will the final (HTTP) request that makes it to the application roughly look like?

+",356014,,356014,,43857.58611,43957.77778,Auth in microservices applications with service mesh,,3,0,,,,CC BY-SA 4.0, +404337,1,,,1/27/2020 14:08,,7,351,"

One of the primary job responsibilities of the Scrum Master is to remove impediments. Having worked at several different places, I have yet to grasp what kind of impediments they are supposed to be removing. Here are the typical impediments I see most developers having:

+ +
  1. Technical (I can't figure out how to write this, this isn't working as I expect, what's the best way to do this, etc.). This seems to be the most frequent and common impediment that developers run into. Scrum Masters are generally non-technical, so they can almost never address these problems. The best they can typically do is ask the question ""Who can help you with this?"" and then make sure that you are following up with them.

  2. Team-Member (my teammate isn't working well with me, my teammate is writing bad code, etc.). Since a separate management chain usually exists, most developers will often bring up these concerns to their manager instead of a scrum master.

  3. Other Teams (the other team I need something from isn't getting back to me, etc.). Like #2, since a separate management chain usually exists, a scrum master typically doesn't have any real management authority or political influence. The best they can do is typically act as an additional person to bug the other team. When working with another team is really impacted, I generally see management escalations being the most common (and most direct) way to solve the problem.
+ +

So what other impediments are they supposed to be removing?

+",355909,,355909,,43857.60486,43857.85972,What impediments does a Scrum Master remove?,,3,4,,,,CC BY-SA 4.0, +404344,1,404346,,1/27/2020 17:10,,8,1328,"

Context: I'm an embedded dev with only 2 years of solid experience. I'm the sole technical employee of a startup of 4 people. We have an MVP of our product out and are getting ready to develop the next iteration of it. The original MVP was developed by a partnered contractor team, with one older embedded dev doing everything software. I joined the company too late to have any input into the design of the MVP. The product is a gateway-type device: embedded Linux, messages come in one way, some limited intelligence happens inside, they come out on the other side.

+ +

The 'problem': Everything in the system seems to be chucked into a single SQLite database. Processed and unprocessed messages live in the same table (with one field used to indicate which they are), provisioning-related things live in another table, and even logging and debugging are done by writing to yet another table in the database. The system is written largely in Python, but the biggest part of it is a massive class full of wrappers for complex SQL, which seem to do the bulk of the data manipulation.

+ +

All of this makes me uncomfortable, especially the messages part, as it looks like a classic case of the ""database as a queue"" design anti-pattern. That being said, I find it difficult to articulate to my non-dev boss exactly why this is wrong, other than vague and nebulous mentions of maintainability, difficulty in introducing changes because of the lack of modularisation, and lack of clarity in how data flows and is processed. It doesn't help that the contractor has ""authority of years of experience"" over my judgment.

+ +

Am I justified in feeling uncomfortable about this design choice? I mean, the thing works in principle. I know that the desire to refactor can be pretty strong and irrational, and should not always be acted upon given business constraints. But then... I kinda feel like starting mostly from scratch, only ripping out the useful bits of the first MVP, would be cleaner and take less time than working with the system created so far. But is it just my younger-person hot-headedness?

+",303762,,149436,,43863.73403,43863.73403,"An older, experienced contractor used an SQLite DB for various queues - am I, a young dev, justified with feeling uncomfortable with it?",,3,8,2,43862.70069,,CC BY-SA 4.0, +404348,1,404484,,1/27/2020 19:14,,1,733,"

I recently joined a team that is trying to use the ""microservice"" pattern for their new application. They've already started to implement APIs. In the end it should be an API for both the mobile and web UI. I have some issues with their implementation.

+ +

Things that doesn't make sense to me:

+ +
  • Every ""microservice"" is inside one git repository plus one solution file (.sln; they're using dotnet).
  • Since all the microservices are intertwined, they can't be deployed independently.
  • There is a ""gateway pattern"", but the gateways are making HTTP calls to other ""microservices"".
  • There is a ""common"" project (class library) for all requests and responses (all projects have a reference to that project).
  • For every 3rd party API they implemented a ""microservice"" to consume it. (Think of it like this: mobile makes a request to a gateway, the gateway proxies that request to a microservice, and that microservice makes the ""actual"" HTTP call to the 3rd party.) The same goes for the web UI part.
+ +

I don't feel that they are doing things the right way, but I don't know where to start. I tried to trace a request, but it did not go well. There is a ton of boilerplate code in the solution. The bad thing is, there is a deadline and they don't want to take part in a big refactor.

+ +

What should be the next step?
+Are there any patterns to follow for these things?

+ +

Maybe microservices are not good for us.

+",295924,,295924,,43858.20625,43861.96736,Microservices and 3rd party APIs,,1,5,2,,,CC BY-SA 4.0, +404355,1,,,1/27/2020 21:36,,-1,79,"

I have an n-tier .NET 4.6 internal business application. It has a business logic layer class library project that references a data access layer class library project. It's designed to decouple the two so that, in theory, a different data access assembly could be swapped in without modifying the BLL.

+ +

But a new idea has just entered the picture. We want to use these projects to also support a public-facing website via a web service application. To help alleviate database server load (trying to avoid SQL Enterprise licensing), we had the idea, ""what if we do some of our large read-only queries from a replicated database, while doing the less frequent inserts and updates to the master database?""

+ +

They will be identical databases, so my DAL still works against both, but it simply connects to the single connection string in its configuration file. I never planned on it choosing a data source at runtime, so there could also be issues with other ApplicationSettings, static variables, and possibly with managing the DAL's Entity Framework DbContext lifetime... So it seems like what I really need is two parallel instances of that assembly. I could do that by creating two separate web service applications, one for each place, but could it be done within a single application? I have a feeling a total redesign with some kind of configuration DI is probably what is needed for this use case, but I am curious what the options are, if any, for running two parallel assembly instances/configurations.

+",57419,,,,,43859.89236,Use two parallel instances of the same .NET assembly,<.net>,1,3,,,,CC BY-SA 4.0, +404356,1,,,1/27/2020 22:03,,0,93,"

I am looking to better understand best practices for handling a large quantity of parameters. I am particularly interested in the types of parameters involved in machine learning code bases, and in the patterns by which they're manipulated. These machine learning parameters typically possess the following characteristics:

+ +
  1. Do not change: Once the configuration is specified, the code does not change the value (a small illustration follows this list).
  2. Name collisions: Two modules that share the same parameter (e.g. lr for learning rate).
  3. Defaults exist: Many times a specific run of the code only requires configuring a handful of parameters, the rest using defaults.
  4. Related parameters can be pervasive: For example, the minibatch_size may be used by a large quantity of functions, including data preparation, gradient computation, etc.
+ +
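
As a concrete illustration of characteristics 1 and 3 (my own sketch, independent of the three patterns below):

+ +
from dataclasses import dataclass

+@dataclass(frozen=True)           # instances cannot change after creation
+class TrainConfig:
+    lr: float = 1e-3              # defaults exist
+    minibatch_size: int = 64      # pervasive, shared by many functions
+
+cfg = TrainConfig(lr=3e-4)        # override only what differs
+# cfg.lr = 1e-2                   # would raise FrozenInstanceError
+
+ +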

I have leveraged three patterns in my experience:

+ +

Pattern A (Passing using Kwargs)

+ +
def do_b(c=3, **kwargs):
+    b = kwargs['b'] + 1
+    a = kwargs['a'] + 2 # Raises KeyError: 'a'
+    return b
+
+def do_a(a=5, c=4, **kwargs):
+    b = do_b(**kwargs)
+    a = a + 1
+    return a, b, c
+
+def main():
+    a = 1
+    b = 2
+    c = 3
+    do_a(a=1, b=2, c=3)
+
+if __name__ == ""__main__"":
+    main()
+
+ +

This pattern is the most clunky in my experience, for several reasons:
+ 1. Unpacking a variable from kwargs in the caller function makes it unavailable in the called function.
+ 2. Where are the defaults set? If the caller does not override the defaults specified in the function's parameters, you could be traversing a large code base trying to find where the default is set. Also, the default may be specified in several different functions, making it difficult to determine which default is used (e.g. c in the example).

+ +

Pattern B (Passing via Global Object)

+ +

This pattern treats the configuration parameters more or less as global variables. The Config class is instantiated only once and the object gets passed around from function to function.

+ +
class Config:
+    def __init__(self):
+        self.a = 1
+        self.b = 2
+
+def do_b(opt):
+    a = opt.a + 1
+    b = opt.b + 1
+    return b
+
+def do_a(opt):
+    a = opt.a + 1
+    b = do_b(opt)
+    return a, b
+
+def main():
+    opt = Config()
+    do_a(opt)
+
+if __name__ == ""__main__"":
+    main()
+
+ +

Pros
+1. Enables easy access to any configuration parameter as long as the configuration object is within the namespace.
+2. Enables methods to be leveraged for controlled updating of configuration (e.g. merging configurations).
+
+Cons
+Global variable concept and all of the cons that come along:
+1. Any function can update the object, which makes bugs difficult to trace down.
+2. No argument/parameter checking at compile time.

+ +

Pattern C (Closures)

+ +

I have also seen each module, with its respective configuration parameters, exposed at a high level using closures via lambda functions. See below for an example from ShangtongZhang/DeepRL:

+ +
class DDPGAgent:
+    def __init__(self, config):
+        self.network = config.network_fn()
+        self.replay = config.replay_fn()
+        # ...etc
+
+# DDPG
+def ddpg_continuous(**kwargs):
+    config = Config()
+    config.merge(kwargs)
+    ...
+    config.network_fn = lambda: DeterministicActorCriticNet(
+        config.state_dim, config.action_dim,
+        actor_body=FCBody(config.state_dim, (400, 300), gate=F.relu),
+        critic_body=TwoLayerFCBodyWithAction(
+            config.state_dim, config.action_dim, (400, 300), gate=F.relu),
+        actor_opt_fn=lambda params: torch.optim.Adam(params, lr=1e-4),
+        critic_opt_fn=lambda params: torch.optim.Adam(params, lr=1e-3))
+
+    config.replay_fn = lambda: Replay(memory_size=int(1e6), batch_size=64)
+    ...
+    run_steps(DDPGAgent(config))
+
+ +

And then the classes are instantiated by the DDPGAgent. Unfortunately, this couples DDPGAgent with the lambda functions and makes testing very difficult.

+ +

Which of the three patterns would you choose, and what are their pros and cons? Do you have any better options? Any guidance is greatly appreciated!

+",355764,,355764,,43858.62986,43858.62986,Design Patterns for Passing Large Quantity of Parameters in Machine Learning,,1,1,,,,CC BY-SA 4.0, +404360,1,404364,,1/28/2020 3:00,,-1,127,"

Similarly to a REST API, I want a server to listen for an email to an address I have created and, in response to the mail being received, run code that I have written. Is this possible already? I understand how to call an API or create a microservice, but it seems like triggering code when someone sends an email is esoteric.

+ +

In my research, it seems like I need an SMTP server and a pub/sub data structure. However, the implementations I have found on GitHub seem more tailored to sending email than to triggering code when an email is received. I would prefer not to use SendGrid or AWS SES.

+ +

If an SMTP server is what I need, do you think it is something I can complete in a semester?

+",356060,,356060,,43858.66528,43858.66528,Is there existing technology write code to be executed in response to an email being sent for a certain email?,,1,6,,43858.42083,,CC BY-SA 4.0, +404362,1,404365,,1/28/2020 4:02,,3,151,"

I'm facing some difficulty with designing a factory and/or strategy pattern for building out EmailTemplates. IMO this seems like the design pattern to go with, but I feel like the path I'm going down isn't very extensible and will be prone to spaghetti-like code in the future. The problem is this: an EmailService takes in a (hydrated) EmailTemplate to send an email. The EmailTemplate has a bunch of key/value pairs that are derived from different objects/data depending on the strategy (receipt, refund, void, etc).

+ +

Here is what I'm currently working with:

+ +
public class EmailTemplateFactory {
+
+    private static Map<TemplateType, BiFunction<EmailTemplate, EmailTemplateContext, EmailTemplate>> templates = new HashMap<>();
+
+    static {
+        templates.put(TemplateType.RECEIPT, EmailTemplateFactory::buildReceipt);
+        templates.put(TemplateType.REFUND, EmailTemplateFactory::buildRefund);
+        templates.put(TemplateType.VOID, EmailTemplateFactory::buildVoid);
+    }
+
+    public static EmailTemplate getTemplate(final EmailTemplateContext emailTemplateContext) {
+        if (emailTemplateContext == null) {
+            throw new IllegalArgumentException(""Must provide context to build email template!"");
+        }
+        return templates.get(emailTemplateContext.getTemplateType())
+                .apply(getBaseTemplate(TemplateFormat.HANDLEBARS, emailTemplateContext), emailTemplateContext);
+    }
+
+    private static EmailTemplate buildReceipt(final EmailTemplate baseTemplate, final EmailTemplateContext emailTemplateContext) {
+        // build the receipt template
+        return new EmailTemplate();
+    }
+
+    private static EmailTemplate buildVoid(final EmailTemplate baseTemplate, final EmailTemplateContext emailTemplateContext) {
+        // build the void template
+        return new EmailTemplate();
+    }
+
+    private static EmailTemplate buildRefund(final EmailTemplate baseTemplate, final EmailTemplateContext emailTemplateContext) {
+        // build the refund template
+        return new EmailTemplate();
+    }
+
+    private static EmailTemplate getBaseTemplate(final TemplateFormat templateFormat, final EmailTemplateContext emailTemplateContext) {
+        // setting up common data between templates
+        return new EmailTemplate(); // placeholder so the method compiles
+    }
+
+    public enum TemplateType {
+        RECEIPT, REFUND, VOID;
+    }
+}
+
+ +

I created a class called EmailTemplateContext, which contains the multiple different pieces of data that these different strategies use to build out the template. However, this is what feels troublesome, because it's lots of somewhat unrelated fields where only 1 or 2 (of the 5 or 6) are needed to build any given template. Is a context object being passed into the factory the right approach here?

+ +

Most of the code inside the buildReceipt, buildRefund, and buildVoid methods is simply emailTemplate.addVariable(key, value), and a fair amount of those variables are shared among strategies (which is why I created the getBaseTemplate method).

+ +

Any help would be appreciated to point me in the right direction. Thanks!

+",356064,,,,,43858.31319,Factory/Strategy Pattern for objects that require different pieces of data,,1,1,,,,CC BY-SA 4.0, +404369,1,404373,,1/28/2020 9:56,,36,8396,"

Encapsulation

+ +
+

In object-oriented programming (OOP), encapsulation refers to the + bundling of data with the methods that operate on that data, or the + restricting of direct access to some of an object's components. + Encapsulation is used to hide the values or state of a structured data + object inside a class, preventing unauthorized parties' direct access + to them. Wikipedia - Encapsulation (Computer Programming)

+
+ +

Immutability

+ +
+

In object-oriented and functional programming, an immutable object (unchangeable object) is an object whose state cannot be modified after it is created.Wikipedia - Immutable object

+
+ +

If you can guarantee immutability, do you need to think about encapsulation?

+ +
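
To make the distinction concrete before going further, here is a small illustration of my own (Python):

+ +
from dataclasses import dataclass

+@dataclass(frozen=True)
+class Point:        # immutable: state cannot be modified after creation
+    x: float
+    y: float
+
+# ...but not encapsulated: callers read x and y directly, so they can
+# come to depend on this exact representation (Cartesian vs polar, say).
+p = Point(1.0, 2.0)
+print(p.x)
+
+ +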

I have seen these concepts being used in explaining ideas in object-oriented programming (OOP) and functional programming (FP).

+ +

I tried to investigate the topics of encapsulation and immutability and their relation to one another. I couldn't find a post that explicitly asked whether encapsulation is guaranteed if you have immutability.

+ +

Please correct me if I have misunderstood anything about encapsulation or immutability; I wish to understand these concepts better. Also, feel free to direct me to any other posts on the topic that answer the question above.

+",356081,,591,,43859.63333,43861.16111,Do you need to think about encapsulation if you can ensure immutability?,,6,3,9,,,CC BY-SA 4.0, +404376,1,404378,,1/28/2020 12:04,,2,211,"

We are automating the testing of a Web ERP solution (Dynamics) through a tool (RSAT, which uses Selenium) provided by the developer of the ERP (Microsoft).

+ +

The RSAT has a list of instructions to perform actions on the pages, and it takes the values to use from an Excel file. The RSAT can be driven from the command line.

+ +

So at first we started using a PowerShell script and Azure DevOps to launch the automated tests right after the code packages have been deployed to the testing environment.

+ +

It was a few dozen lines long and it was fine.

+ +

Then we started swapping the values in the parameter Excel file with other values to cover more test values with the same test case.

+ +

It added a few hundred lines to the script.

+ +
  • Then we generated a file in which we compiled all the results of the tests,
  • We added some logs,
  • We sent the result file by mail,
  • We queried and rolled back the database right after finishing the tests,
+ +

Well, my problem is that the PowerShell script is growing a lot (we actually have multiple scripts now, with script 1 calling script 2 when it ends, chaining all the actions) and we still have many features and ideas to add.

+ +

My question is: at which point should we say

+ +
+

stop, PowerShell is not meant to do this, for the sake of + maintainability and stability we should switch to [Python/C#/...]

+
+ +

(Maybe I'm totally wrong and using multiple chained PowerShell scripts is actually good practice, especially when you use Azure DevOps.)

+",298320,,118878,,43858.51319,43859.11597,Should powershell be used to develop a whole application?,,2,3,,,,CC BY-SA 4.0, +404377,1,,,1/28/2020 12:11,,1,33,"

I have a configuration application in Node.js. It has a Component with a name and a uuid. A Component can have many Schemas. A Schema has a uuid, name, componentId, and json. A Schema can have many Configurations. A Configuration has a name, schemaId, json, and uuid. A Schema can contain references to many other Schemas in its json. Now I want to create functionality to export all the data from one instance of the application (in JSON format, in a file) and import it into another. What is the simplest way to do it? A few questions:

+ +
  1. How to tell the application what to export? For now I think there should be separate arrays for components, schemas, and configurations, like
+ +
{
+    components: ['id1', 'id2'],
+    schemas: ['s1', 's2'],
+    configuration: ['c1', 'c2'],
+}
+
+ +

This data should be sent to the application, which returns a file with all the information that will later be used for importing into another instance.

+ +
  2. The real question is: what should my export file look like, keeping in mind that dependencies are involved and can overlap? For example, a schema can have many other schemas referenced in its json field; e.g. schema1 has schema2 and schema4 as its dependencies, and there is another schema, schema5, that also requires schema2. So while importing, we have to make sure that schema2 is saved before saving schema1 and schema5. How do I represent such a file, which requires ordering as well as overlapping dependencies, making sure that schema2 is not saved twice while importing? The json of schema1 is shown below as an example.
+ +
{
+    ""$schema"": ""http://json-schema.org/draft-04/schema#"",
+    ""p1"": {
+        ""$ref"": ""link-to-schema2""
+    },
+    ""p2"": {
+        ""$ref"": ""link-to-schema4""
+    },
+
+}
+
+
+ +
  3. What step-wise pseudo-algorithm should I follow while importing? (A small ordering sketch follows below.)
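
For the ordering concern in questions 2 and 3, here is a minimal sketch of one possible direction (my own, with hypothetical shapes; it assumes no circular references):

+ +
# Emit schemas in dependency order, each exactly once (depth-first walk).
+def ordered_schemas(schemas):
+    # schemas: dict of id -> {'id': ..., 'deps': [ids], ...}
+    emitted, order = set(), []
+
+    def visit(sid):
+        if sid in emitted:
+            return                     # already exported once
+        emitted.add(sid)
+        for dep in schemas[sid]['deps']:
+            visit(dep)                 # dependencies come first
+        order.append(schemas[sid])
+
+    for sid in schemas:
+        visit(sid)
+    return order
+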
+",356093,,,,,43858.50764,How to develop an Import/Export Functionality for my node application,,0,0,,,,CC BY-SA 4.0, +404380,1,,,1/28/2020 13:36,,0,104,"

Assuming the following scenario:

+ +

A Python application receives a file and processes it, trying to determine the file format (any type of data/compression/archive/package/mount/etc.); then it opens (cuts) the file, sees which files are inside that file, and tries to apply the same logic to each of them.
+All relevant information that the script can get from the files must be stored in the database in order to present it in the UI (in the form of a tree).

+ +
  • A parent file should be able to share info with its child files.
  • After all files are extracted and the data collected, some post-process scripts run (example: to connect JS function names in an HTML file with the JS file).
+ +

How it works now:
+one worker comes up with X threads listening to a Python queue; when the first file_path comes into the queue, a thread starts work and fills that queue with child files. All related information is saved into a Python list and then sent in batches to the DB (ArangoDB).

+ +
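
Roughly, the current shape is something like this (a simplified sketch with stand-in functions, not the real code):

+ +
import queue, threading

+work = queue.Queue()
+
+def extract_children(path):
+    return []                    # stand-in for the real unpacking logic
+
+def collect_metadata(path):
+    pass                         # stand-in for batching rows to ArangoDB
+
+def worker():
+    while True:
+        path = work.get()
+        try:
+            for child in extract_children(path):
+                work.put(child)  # recurse through the queue
+            collect_metadata(path)
+        finally:
+            work.task_done()
+
+for _ in range(8):               # the X threads
+    threading.Thread(target=worker, daemon=True).start()
+
+ +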

The problems with that solution:

+ +
  • Hard to debug with multithreading, especially if a thread is stuck or needs renewing.
  • No easy way to catch errors, except in the logs.
+ +

Questions:

+ +
  • Would a task queue/job queue like Celery/RQ/Dramatiq fit here? If yes, which one, in your opinion?
  • Would the publisher-subscriber pattern fit here, or is it overcomplicating things?
+ +

Maybe you know some articles/companies/products that deal with something similar?

+",355987,,355987,,43859.57014,43859.57014,How to design a fast way to `unarchive` file using python?,,0,9,1,,,CC BY-SA 4.0, +404384,1,404394,,1/28/2020 14:11,,2,115,"

I have a project that has a DDD design and also uses dependency injection. During development, we connect to a test database containing a former snapshot of production. This works well 95% of the time. However, for some processes that we have to implement in our application, the underlying SQL statements (which I can't control; they are developed by a third party) are slow. Some of these procedures can take 30 seconds, while others might take over 5 minutes to execute.

+ +

Our UIs rely on this data and, unfortunately, our users understand that there are some tasks they have to start and just wait for the data to arrive. For now, I'm not asking for ways to speed up the final application. Simply put, there are just too many factors outside of our control, and this limitation has to be accepted as part of the final result.

+ +

However, while developing the UI I do require a responsive return of results from my domain. To do this, I've set up fake repositories within my application project. Specifically, I have an ASP.NET project that initializes all of the DI components within the AppHost. It is here that all repos are injected into the container, followed by the respective domains, which are then used by the ASP.NET APIs that are utilized by the web page UI.

+ +

So far, what I've done has been to create fake repositories within the ASP.NET project. Within the AppHost, I have an #if FAKES preprocessor directive that checks whether I've specified that I'd like to use the fake repos, and it then loads the fakes rather than the legit ones, which are used during normal testing and eventually production.

+ +

This works but feels... wrong. Maybe this is the correct approach, but I feel as if there has to be a better way of injecting fake data repos into my application for development. Is there a better approach for this purpose?

+ +

One requirement is that I do not want to always access this faked data in development. While building UIs, I often couldn't care less about what the data looks like, so long as something is there that has the right data types and string lengths. However, once it's time to test the implementation, I will always have to wait for the legitimate test data to load from our test servers.

+ +

Just to be clear, these fakes are not for unit testing. In this context, I am strictly creating fake repositories so that I'm getting data back in milliseconds, rather than 2-3 minutes. Is there a better way of implementing and loading these context-specific development fakes?

+",30988,,30988,,43858.65486,43858.88194,Is there a proper way to setup and use fake data in development for a DDD project using DI?,,1,2,,,,CC BY-SA 4.0, +404387,1,,,1/28/2020 15:18,,2,138,"

Consider a class which represents a tree element. An element can be changed and inspected, and child elements can be added to it (say, through the add() method). I also have a class which contains the root of the tree (State). For convenience's sake, I have added a static overload of add() which adds elements to the root. As a result, the element class needs to get the current application state through a singleton (whose existence is justified), and therefore the element class is bound to the State class. Is that good? Is it justified? Otherwise I would have to add an add method to State itself (but that seems like an SRP violation). A minimal sketch of the setup follows below.
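
For clarity, here is the setup as described, sketched in Python with hypothetical names:

+ +
class Element:
+    def __init__(self):
+        self.children = []
+
+    def add(self, child=None):            # add under this element
+        self.children.append(child or Element())
+
+    @staticmethod
+    def add_to_root():                    # convenience: couples Element to State
+        State.instance.root.add()
+
+class State:                              # holds the root of the tree
+    instance = None                       # stands in for the justified singleton
+    def __init__(self):
+        self.root = Element()
+
+State.instance = State()
+Element.add_to_root()                     # works, but Element now knows State
+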

+",356114,,,,,43858.83333,Is this SRP violation?,,1,12,,,,CC BY-SA 4.0, +404390,1,,,1/28/2020 15:48,,3,252,"

I'm working with an API that has many asynchronous calls and handlers. I'd like to extend these with a RESTful interface and endpoints in Spring. I'm imagining the usual Controller and Service layers, where the Service wraps these async handlers. I've read several articles about CompletableFuture, @Async, and @EnableAsync, and eventually got this working, but I'm not sure I'm doing it 'right' or in a well-designed way. This post does a good job of merging CompletableFuture with async handlers; at least, the handlers there are similar to mine.

+ +

so I have something like this interface:

+ +
interface AccountHandler {
+   void onSuccess(Portfolio result);
+   void onError(Throwable error);
+}
+
+ +

and now this CompletableFuture merged with it:

+ +
class MyPortfolioCallbackHandler extends CompletableFuture<Portfolio> implements AccountHandler {
+   @Override
+   public void onSuccess(Portfolio result) {
+      super.complete(result);
+   }
+   @Override
+   public void onError(Throwable error) {
+      super.completeExceptionally(error);
+   }
+}
+
+ +

The Service looks like this:

+ +
@Service
+public class PortfolioService {
+
+    @Async
+    public CompletableFuture<Portfolio> getPortfolio() throws Exception {
+        MyPortfolioCallbackHandler portfolioHandler = new MyPortfolioCallbackHandler();
+        portfolioHandler.getPortfolio();
+
+        return portfolioHandler;
+    }
+}
+
+ +

And the Controller is this:

+ +
@RestController
+@RequestMapping(""portfolio"")
+public class PortfolioController {
+
+    @Autowired
+    PortfolioService portfolioService;
+
+    @GetMapping(produces = ""application/json"")
+    public CompletableFuture<Portfolio> getPortfolio() throws Exception {
+        return portfolioService.getPortfolio();
+    }
+}
+
+ +

This actually works, to my surprise, after trying a few things and being confused by the CompletableFuture. But is it a good design, or done 'right'?

+",325905,,325905,,43858.70625,43858.70625,Is this a good design for wrapping asynchronous API calls into a RESTful interface?,,1,7,,,,CC BY-SA 4.0, +404402,1,,,1/28/2020 21:19,,0,73,"

I am trying to create a thumbnail from a video that I'll be uploading from my web app (React). I'll then upload the video to AWS S3, and the thumbnail (image) to S3 as well.

+ +

I want to know where I should create the thumbnail from a video frame: client-side or in a Lambda service?

+ +

I asked someone, and they said to create the thumbnail from the video in a Lambda function, as it'll be a complex task and will need processing that shouldn't happen on the client side. But I don't think so, as the code to create a thumbnail is quite simple: just draw a video frame to a canvas and convert it to an image.

+",356139,,356139,,43858.89236,43858.89236,Should I create Thumbnail of video in Web App (client-side) or Lambda/Server-side?,,0,4,,,,CC BY-SA 4.0, +404403,1,404422,,1/28/2020 21:47,,0,746,"

In UML 2.0, there are two ways of representing an association between classes which I can't seem to distinguish between.

+ +

First of all, there is the qualified association, represented as such: [diagram of a qualified association omitted]

+ +

You also have an association class, represented as such: [diagram of an association class omitted]

+ +

I would say that in the case of a qualified association, it is assumed that class 2 holds a reference to an indexed collection of class 1 objects, so it can access a reference to an object of class 1 by its qualifier in the collection.

+ +

In the case of an association class, it is usually said that the association class is the association. I would assume in practical terms that at least one of the associated classes has a reference to the association class, which in turn has a reference to both the classes.

+ +
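
To make my interpretation concrete, here is a rough sketch (Python, names hypothetical):

+ +
# Qualified association: Class2 keys Class1 objects by a qualifier.
+class Class2:
+    def __init__(self):
+        self.items = {}              # qualifier -> Class1 instance
+    def lookup(self, qualifier):
+        return self.items[qualifier]
+
+# Association class: the link itself is an object carrying data
+# plus references to both ends.
+class Link:
+    def __init__(self, end1, end2, attribute):
+        self.end1 = end1
+        self.end2 = end2
+        self.attribute = attribute   # data that belongs to the association
+
+ +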

I'd love to hear a more educated and insightful perspective on this.

+",352625,,,,,43859.83264,What is the difference between a qualified association and an association class and how to choose between them?,,2,0,1,,,CC BY-SA 4.0, +404406,1,,,1/28/2020 22:25,,0,179,"

Where I work, we follow a scaled agile process based on Scrum, which we've adapted to suit our culture and needs.
+One thing I feel we are missing is guilds, to allow the free flow of information between members of a community of interest.

+ +

I'd like to set up a guild for software engineers; however, most information I've found has guilds set up around narrow topics (e.g.: front end, performance, automation). Given the size of our technical workforce (around 300, with a third being software engineers), I fear that by picking such specific topics we will end up with guilds that are too small, which defeats the purpose of getting people to cross-pollinate and share widely.

+ +

So my question is: what is a typical guild size? And a corollary question: should the scope of a guild be adjusted to achieve this size?

+",166094,,209774,,43859.79236,43859.79236,What is a typical guild size?,,1,4,,,,CC BY-SA 4.0, +404412,1,,,1/29/2020 3:34,,1,92,"

I wanted to learn more about systems and so I was reading about x86 global descriptor tables for memory segments as one does, and I came across this table from here:

+ +

+ +

I think I understand why all the information is needed, but I don't understand why values are broken up in this way.

+ +

In particular, why did the designers prefer the above layout over something like:

+ +
first 32-bits: base
+next 20-bits: limit
+next 12-bits: access-byte and flags
+
+ +

?

+ +

I've come across lots of places that seem to describe the table entry layout, but it seems to be harder to find information about why things turned out this way.

+",45589,,,,,43859.15694,Why is the data for an x86 GDT entry designed this way?,,1,0,,,,CC BY-SA 4.0, +404423,1,,,1/29/2020 13:41,,1,100,"

I've been using Python for several years as a non-developer, writing scripts to help business processes. I'm in a new position (still non-developer) and would like to adopt more rigorous coding standards, especially since the projects I will be working on will be bigger. In part, this means creating distinct environments and folder structures for each project, rather than having a bunch of modules from distinct projects all floating together in the same folder, run manually at the terminal.

+ +

I was reading this guide about how to structure projects but I have some questions since I've never done this before.

+ +

1) An example project I am tasked with includes: (a) automating data import into a system, and (b) checking downstream output from the system to ensure everything was processed correctly.

+ +

Data takes at least a month to flow through; this is not an immediate process. In other words, these are distinct tasks, but higher-ups have phrased it as being one project. In the world of structuring Python projects, would it make more sense to create two projects out of this (that is, two folder structures with their own virtual environments), or to include them in one? I don't foresee code needing to be shared between them. I do imagine task (b) would need a list of case IDs created by task (a), but that would be the only functional link.

+ +

I assume two distinct projects makes sense but what thought goes into that?

+ +

2) Is a project designed to only ever explicitly run a single module (like a main.py at the top level) while the others are all referenced internally? Or can you structure a project such that you may explicitly run any number of modules depending on what task you are performing at runtime? In other words, what is the scope and intent of a project/top-level package/folder?

+ +

3) If you intend to run a main.py from the top-level directory at the command line, does that top-level directory need to be a package if it is not being imported into another program? Can it just be a regular folder as long as you never reference it in other code? (A minimal layout sketch follows below.)

+ +
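
For reference, this is the kind of layout I have in mind (a hypothetical minimal example):

+ +
myproject/                # plain repo root; run with: python main.py
+    main.py              # entry point; imports from the package below
+    mypackage/           # the actual importable package
+        __init__.py
+        tasks.py         # functions/classes live here
+
+ +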

4) To what degree are you ""supposed"" to combine classes/functions with the nitty-gritty usage of those functions (is there a term for this?) within the same module? In other words, are you supposed to leave classes/functions in their own modules and leave the importing/usage/running of them to another separate module? Or can you combine them into one? And when would you do which and why?

+ +

With my messy background, I have only ever had singular modules which contain all the functions and execution in a single file. I don't know what the proper protocol is for delimiting scope.

+ +

These may be basic questions that I've taken too long to come around to, but thank you.

+",356137,,,,,43859.84583,Basic advice on structuring python project,,2,2,,43860.48056,,CC BY-SA 4.0, +404425,1,,,1/29/2020 14:30,,8,716,"

I'm not registered with Facebook, and I've never logged in to Facebook in the browser I use. Today I entered the site facebook.com and saw my actual phone number on the sign-in page, with this message:

+ +
+

Facebook requests and receives your phone number from your mobile network

+
+ +

So how does a website, in this case Facebook, get the mobile phone number inside the browser?

+ +

+",356183,,177980,,43860.93472,43860.93472,How can a website get the phone number of the device used to call it?,,3,2,,,,CC BY-SA 4.0, +404426,1,,,1/29/2020 14:31,,2,253,"

Stripe API uses a key called Idempotency-Key for achieving idempotency: https://stripe.com/docs/api/idempotent_requests. Is this similar to a nonce?
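
To anchor the comparison, here is a toy sketch of how a server might use such a key (my own illustration, not Stripe's implementation):

+ +
seen = {}                            # idempotency key -> cached response

+def process(request):
+    return {'status': 'ok'}         # stand-in for the real side effect
+
+def handle(key, request):
+    if key in seen:
+        return seen[key]            # replay first response; no second charge
+    response = process(request)     # the side effect happens exactly once
+    seen[key] = response
+    return response
+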

+",264027,,209331,,43859.675,43859.76181,Can nonce be used to achieve Idempotency in REST API?,,2,0,,,,CC BY-SA 4.0, +404434,1,,,1/29/2020 19:59,,-3,68,"

Q: Do we need project managers in an agile organisation when doing software development, or is project governance replaced by product governance?

+ +

There are several questions asked and answered here about the role of project managers in Scrum. Those questions are of some help, but what I'm asking here is different: the organisation I am working with has replaced project managers with product managers AND product owners. They subscribe to Modern Agile. They do not subscribe to Scrum.

+ +

My question is a bit wider encompassing agile SW development using any of XP, FDD, SCRUM, Kanban, Lean SW development, SAFe.

+ +

I have worked in an organisation on an agile transformation for quite a while, and part of the transformation is replacing project governance and a maintenance organisation with product governance and a product organisation. The SW dev organisation is built up of 30 product teams, each managing the life cycle of 1 to 5 products; cross-functional teams doing DevOps. The products are SW services (with or without UI) addressing various needs of the organisation's IT services across processes delivering end services to customers.

+ +

I often come across turf wars when an organisation embarks on its agile (transformation) journey. One of these is about the necessity (or not) of project managers in the agile SW organisation.

+ +

Project managers feel threatened by agile; maybe they have not been reskilling or keeping tabs, i.e. they are not sure about the why and the how of agile. Or the agile operating model proposed seems to include new roles that somewhat look like a project manager but are not, according to ""agile people"".

+ +

In this organisation, project managers are replaced by product managers accountable in the business for the quality and delivery of services (budget, objectives, strategies, KPIs, etc.). These product managers are supported by product owners doing the daily operative work to make sure the proper features are developed and the highest-impact bugs are rectified.

+ +

So are project managers needed in this kind of agile setup for SW continuous delivery – yes or no?

+ +

If project managers are needed – why? +What are the pros and cons of project managers?

+ +

If there are to be both project governance and product governance – what is the split in accountability?

+",355955,,355955,,43859.86528,43859.86528,Do we need project managers in the agile organisation when doing software development or are the project governance replaced by product governance?,,1,1,,43859.93264,,CC BY-SA 4.0, +404435,1,,,1/29/2020 20:00,,2,138,"

I have some doubts about the practical way of violate or to not the pre and post conditions based on Liskov Substitution Principle.

+ +

In the beginning, I have create the examples where first child would respect and the second to violate. But Liskov article is very mathematical, and now, I do not have more confidence about it.

+ +

First about pre:

+ +
<?php
+
+class TermCalculator {
+    public function data(int $dias): DateTimeInterface {
+        if($dias > 0)
+            return (new DateTime())->modify(""+$dias days"");
+        throw new \InvalidArgumentException(""Term needs to be above zero "");
+    }
+}
+class TermCalculatorCLT extends TermCalculator {
+    public function data(int $dias): DateTimeInterface {
+        if ($dias >= 0)
+            return (new DateTime())->modify(""+$dias days"");
+        throw new \InvalidArgumentException(""Term needs to be above -1"");
+    }
+}
+class TermCalculatorCPC extends TermCalculator {
+    public function data(int $dias): DateTimeInterface {
+        if (in_array($dias, range(1,30)))
+            return (new DateTime())->modify(""+$dias days"");
+        throw new \InvalidArgumentException(""Term needs to be between 1 and 30 "");
+    }
+}
+
+ +

**Which child class above violates the precondition?**

+ +

And two situations about post:

+ +
<?php
+class Account {
+    protected float $balance;
+    public function __construct(float $balanceInicial){
+        $this->balance = $balanceInicial;   
+    }
+    public function withdraw(float $value) : float { 
+        if(($this->balance - $value) >= 0)
+          $this->balance -= $value;
+        return $this->balance;
+    }
+}
+class AccountVip extends Account {
+    private const TAXA = 10.00;
+    public function withdraw(float $value) : float {
+        if(($this->balance - $value) >= self::TAXA)
+            $this->balance -= $value;
+        return $this->balance;
+    }
+}
+class AccountIlimited extends Account {
+    public function withdraw(float $value) : float {
+        $this->balance -= $value;
+        return $this->balance;
+    }
+}
+
+ +

**Which child class above violates the postcondition?**

+",356206,,,,,43859.84514,No confident about pre/post conditions on LSP,,1,2,,,,CC BY-SA 4.0, +404450,1,404456,,1/30/2020 6:21,,2,162,"

I'm wondering whether it is bad practice to keep a user's ID in a JWT.

+ +

I'm planning on using the email in the sub claim, since it's already available to them, and I can use it to identify them all the same. I can have the DB index it so that it's easy to retrieve their information using the email rather than the ID. (A small illustration of the claims I have in mind follows below.)

+ +
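
Concretely, something like this (illustrative values only):

+ +
claims = {
+    'sub': 'user@example.com',   # email as the subject, instead of the DB id
+    'iat': 1580000000,           # issued at
+    'exp': 1580003600,           # expiry
+}
+
+ +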

Isn't it better to avoid giving the user any information about how the DB references them, i.e. the ID? My concern is that I don't know why everyone is fine with putting the DB-stored ID in a JWT when it can easily be avoided. Isn't there a scenario where it's a bad idea to expose it? The vaguer the information, or the more it's limited to what the user already knows about themselves, the better, right?

+",163998,,,,,43860.40694,"Why is it fine to use a user's ID in their JWT, as opposed to their email/username?",,1,1,,,,CC BY-SA 4.0, +404451,1,,,1/30/2020 6:45,,5,303,"

I like to invert dependencies whenever possible by depending mostly on abstraction and allowing the concrete implementations to be passed into the object by clients, or a factory. I've found this to be pretty conducive to testability and extensibility. Here's a simple example:

+ +
public class Feature {
+    private final IStrategy strategy; // interface
+
+    public Feature(IStrategy strategy) {
+        this.strategy = strategy;
+    }
+}
+
+ +

However, I don't like to burden clients of this class with supplying the concrete implementation of IStrategy, because usually the client doesn't care. I also don't want to provide a factory every time I do something like this, because I would end up with a LOT of factories. Here's what I usually like to do instead:

+ +
public class Feature {
+    private final IStrategy strategy; // interface
+
+    public Feature(IStrategy strategy) {
+        this.strategy = strategy;
+    }
+
+    public Feature() {
+        this(new DefaultConcreteStrategy());
+    }
+}
+
+ +

Technically, Feature has a direct dependency on DefaultConcreteStrategy because it mentions it by name. The compile-time dependency will always be there. But the runtime dependency is effectively optional because motivated clients (unit tests, usually) can inject another concretion if desired.
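
To illustrate the testing benefit I'm after (StubStrategy is a hypothetical test double):

// Production code stays terse:
Feature feature = new Feature(); // quietly uses DefaultConcreteStrategy

// A unit test swaps in a stub without any factory machinery:
Feature testee = new Feature(new StubStrategy());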

+ +

Is this sound design? Is there a name for this pattern? Does anyone have a better or alternative approach?

+",356235,,,,,43861.28056,"Is it good design to have one constructor that supplies a ""default"" concrete class to another that takes an abstraction?",,2,0,,,,CC BY-SA 4.0, +404452,1,,,1/30/2020 7:06,,-3,141,"

Edit: My main question is how you would structure and design code to accomplish the project I describe below. Please treat this as a technical interview question where you have to describe how you'd accomplish the task to someone with limited technical experience.

+ +

Hi there and thanks for your time. I'll try to make this simple and concise. I've been working on a personal project in Python (3.7) during my free time for the last few months, and as I'm getting close to done, I've realized I have literally never made my own object or class. I've tinkered with programming for a while, so I thought I understood how to use OOP and its benefits, but I'm understanding now that I must not really appreciate why OOP should be used in small projects like my own.

+ +

So here's what I've been building, as a big-picture explanation: it's a program that implements a GUI, intercepts TCP communication over a certain port in another process, translates that raw data into variables and things I understand, and then in another process uses those translations to upload neat, clean data to a Google Sheets spreadsheet.

+ +

And here's how I structured my code: each process is in a different *.py file. The main process is the GUI, implemented in tkinter and mainly consumed by the tkinter loop catching GUI events. The GUI opens a thread to catch clear-text translations from a multiprocessing Queue and stick them into the textbox widget. Then I open my second process, which performs the man-in-the-middle and translation functions. The man-in-the-middle loop calls a translate function which identifies the type of packet data; the translate function then calls the relevant function to decipher that data and returns the translation. My third and last process listens on a queue for the translated data in a loop and performs the uploads via the Google Sheets API as the data is received, while being careful to stay within the free 100 write requests every 100 seconds.
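
A bare-bones sketch of how the same processes could be expressed with classes (all names and stubbed bodies are hypothetical; the real capture/translate/upload logic is only indicated):

from multiprocessing import Process, Queue

class Interceptor:
    # stands in for the man-in-the-middle + translation process
    def __init__(self, out_q):
        self.out_q = out_q
    def run(self):
        for raw in [b'packet1', b'packet2']:   # stand-in for the TCP capture loop
            self.out_q.put(raw.decode())       # stand-in for the translate step
        self.out_q.put(None)                   # sentinel: no more data

class Uploader:
    # stands in for the rate-limited Google Sheets upload process
    def __init__(self, in_q):
        self.in_q = in_q
    def run(self):
        while True:
            row = self.in_q.get()
            if row is None:
                break
            print('upload:', row)              # stand-in for the Sheets API call

if __name__ == '__main__':
    q = Queue()
    workers = [Process(target=Interceptor(q).run), Process(target=Uploader(q).run)]
    for w in workers: w.start()
    for w in workers: w.join()                 # the real app runs the tkinter loop here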

+ +

So, thank you for reading this far, and back to my original question: how would you have structured a program to accomplish what I described? Should I be using classes/objects to encapsulate each process and have all of my thousands of lines of code in one file? Are there performance gains to be considered when utilizing OOP in Python versus separating each process's code into different files? And here's my except Exception as e: what haven't I considered while designing this?

+ +

Thank you for any and all responses! I'm here just to learn how experienced developers would structure this to be efficient, scalable, and coherent to others.

+",356236,,356236,,43860.77778,43860.77778,Python Code Design - How should I have used OOP?,,1,1,,,,CC BY-SA 4.0, +404464,1,404482,,1/30/2020 16:40,,2,531,"

Under the context of dependency injection - that is, when an interface has mostly one implementation - I got into the habit of exposing via the interface a bunch of fields which are never called by the consumer classes. These fields reflect a high-level implementation strategy; I decided to expose them via my interfaces because I feel it helps to understand how the abstraction works - or let's say what is expected from that abstraction - not in terms of detailed implementation, but in terms of high-level principles and why the object/interface was needed in the first place.

+ +

For example, I have an interface IEncrypter which encrypts strings. The idea behind this interface is to be able to choose which encryption algorithm to use. So the implementation class takes an algorithm abstraction IEncryptionAlgo in its constructor, and stores it as a ReadOnly field: ThisEncryptionAlgo As IEncryptionAlgo. Then when I call Encrypt(Message), it calls ThisEncryptionAlgo.Encrypt(Message).

+ +

Strictly speaking, the interface does not need to expose ThisEncryptionAlgo, and exposing the Byte() Encrypt(Message As String) function alone is sufficient for the consumer. However, I feel that having the interface expose ThisEncryptionAlgo (as ReadOnly) has some advantages:

+ +
  • You help developers understand the spirit behind the interface, which is useful both when implementing and when consuming.
  • You make debugging easier, as you can quickly inspect the property directly from the interface.
  • Error logging and tracing might be easier if you generate a report based on the interface properties.
+ +

I believe it is ok because the main purpose of having this as an interface rather than a concrete class is to allow dependency injection and unit testing of the consumers, not to add a true layer of abstraction. Having said that, it also defeats the principle that interfaces should disregard any implementation details.

+ +

What is your opinion? Should I remove ThisEncryptionAlgo from my interface?

+ +
+ +

Full Example

+ +
Interface IEncryptionAlgo
+
+    Function Encrypt(Input As Byte()) As Byte()
+
+    Function Decrypt(Input As Byte()) As Byte()
+
+End Interface
+
+
+Interface ICheckSumAlgo
+
+    Function GetHashSum(Input As Byte()) As Byte()
+
+    ReadOnly Property HashLength As Integer
+
+End Interface
+
+
+Interface IEncrypter
+
+    ReadOnly Property ThisEncryptionAlgo As IEncryptionAlgo
+
+    ReadOnly Property ThisCheckSumAlgo As ICheckSumAlgo
+
+    Function Encrypt(Message As String) As Byte()
+
+    Function Decrypt(Cypher As Byte()) As String
+
+    Function EncryptWithCheckSum(Message As String) As Byte()
+
+    Function DecryptWithCheckSum(SumAndCypher As Byte()) As String
+
+End Interface
+
+
+Class Encrypter
+    Implements IEncrypter
+
+
+    Private Sub New()
+    End Sub
+
+    Sub New(Algo As IEncryptionAlgo, CheckSum As ICheckSumAlgo)
+        Me.ThisEncryptionAlgo = Algo
+        Me.ThisCheckSumAlgo = CheckSum
+    End Sub
+
+
+    Public ReadOnly Property ThisEncryptionAlgo As IEncryptionAlgo Implements IEncrypter.ThisEncryptionAlgo
+
+    Public ReadOnly Property ThisCheckSumAlgo As ICheckSumAlgo Implements IEncrypter.ThisCheckSumAlgo
+
+
+    Public Function Encrypt(Message As String) As Byte() Implements IEncrypter.Encrypt
+
+        Dim MessageBytes As Byte() = Text.Encoding.Unicode.GetBytes(Message)
+
+        Dim Cypher As Byte() = Me.ThisEncryptionAlgo.Encrypt(MessageBytes)
+        Return Cypher
+
+    End Function
+
+    Public Function Decrypt(Cypher() As Byte) As String Implements IEncrypter.Decrypt
+
+        Dim MessageBytes As Byte() = Me.ThisEncryptionAlgo.Decrypt(Cypher)
+
+        Dim Message As String = Text.Encoding.Unicode.GetString(MessageBytes)
+        Return Message
+
+    End Function
+
+
+    Public Function EncryptWithCheckSum(Message As String) As Byte() Implements IEncrypter.EncryptWithCheckSum
+
+        Dim Cypher As Byte() = Encrypt(Message)
+        Dim CypherSum As Byte() = Me.ThisCheckSumAlgo.GetHashSum(Cypher)
+
+        Dim SumAndCypher As Byte() = CypherSum.Concat(Cypher).ToArray
+        Return SumAndCypher
+
+    End Function
+
+    Public Function DecryptWithCheckSum(SumAndCypher() As Byte) As String Implements IEncrypter.DecryptWithCheckSum
+
+        Dim Cypher As Byte() = SumAndCypher.Skip(Me.ThisCheckSumAlgo.HashLength).ToArray
+        Dim ExpectedCypherSum As Byte() = SumAndCypher.Take(Me.ThisCheckSumAlgo.HashLength).ToArray
+        Dim CurrentCypherSum As Byte() = Me.ThisCheckSumAlgo.GetHashSum(Cypher)
+
+        If Not CurrentCypherSum.SequenceEqual(ExpectedCypherSum) Then Throw New ArgumentException(""Check sum failed."", NameOf(SumAndCypher))
+
+        Dim Message As String = Decrypt(Cypher)
+        Return Message
+
+    End Function
+
+
+End Class
+
+Interface IServicesFactory
+
+    Function NewEncrypter() As IEncrypter
+
+End Interface
+
+
+Class ServicesFactory
+    Implements IServicesFactory
+
+    Function NewEncrypter() As IEncrypter Implements IServicesFactory.NewEncrypter
+        Return New Encrypter(My.AppSettings.GetDefaultAlgo, My.AppSettings.GetDefaultCheckSum)
+    End Function
+
+End Class
+
+
+Class ConsumerClass
+
+    Private ReadOnly Property MainFactory As IServicesFactory
+
+    Private Sub New
+    End Sub
+
+    Sub New(MainFactory as IServicesFactory)
+        Me.MainFactory=MainFactory
+    End Sub
+
+    Sub Main()
+
+        Dim MyMessage As String = InputBox(""Write something"")
+
+        Dim Encrypter As IEncrypter = Me.MainFactory.NewEncrypter
+        Dim EncryptedMessage As Byte() = Encrypter.Encrypt(MyMessage)
+
+        WebClient.SendPost(Convert.ToBase64String(EncryptedMessage))
+
+    End Sub
+
+End Class
+
+ +

As one can see, it is ""by design"" that IEncrypter holds a field which refers to the algo object to use. If one wants to use a different algo, they may implement IEncryptionAlgo and inject it via the ServicesFactory. In such a context, ThisEncryptionAlgo is not needed by the consumer class, but having it exposed via the IEncrypter interface ensures any implementation of the latter fits the overall architecture. At least, that is what I intuitively feel, but I'd like to challenge this.

+",329035,,329035,,43861.55208,43861.55208,Unnecessary (?) ReadOnly fields in Interfaces,,4,16,,,,CC BY-SA 4.0, +404466,1,,,1/30/2020 16:44,,1,36,"

We have a scenario in our project where we are provided with a set of XSDs. We converted these XSDs to Java POJOs with the help of JAXB. After this, we were supposed to update a few values in the POJO and convert it back into the corresponding XML file. The number of XSDs provided to us is large and they are similar, but the placement of XML tags differs; e.g., the same tag will be present in different elements in different XSDs. Thus, for updating any element, we had to write different methods so that each returns the specific XML. Is there any way, or design change, by which we may generalize the update part?

+",311203,,,,,43860.69722,Handle multiple similar structured XMLs in a Java project,,0,4,,,,CC BY-SA 4.0, +404472,1,404504,,1/30/2020 19:08,,-1,124,"

In HTML/CSS, inline elements, such as <span>, do not support explicitly setting their width via the width CSS property.

+ +

This is confusing to many developers, as questions like ""Setting the width of inline elements"" show.

+ +

When learning technologies, I always find it helpful to understand the ""why?"", the motivation behind design decisions. So:

+ +

Why was it decided for inline elements to not support setting width?

+ +
  • Are there technical reasons that would make it hard to implement in browsers?
  • Did the designers of CSS think that allowing it would be redundant?
  • Or did someone just think ""We'll ignore width for inline elements just to spite people.""? (Well, hopefully not.)
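
For anyone unfamiliar with the behavior in question, a minimal demonstration:

<span style=""width: 200px"">the width here has no effect</span>
<span style=""display: inline-block; width: 200px"">respected once display changes</span>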
+",12248,,12248,,43861.34028,43861.50417,"Inline HTML elements don't allow setting ""width"" - why is that?",,1,9,,,,CC BY-SA 4.0, +404473,1,,,1/30/2020 19:46,,1,365,"

Is there anything that can calculate the Big-O complexity of a function? While learning about Big-O, it is implied that it is pretty trivial to do in your head once you know the basic rules; however, if that were true, I would expect this functionality to be embedded in various IDEs. I have not seen any tool like this.

+ +

Is this something that can be easily interpreted via software, or does this require context that is only available to the developer?

+",136084,,,,,43891.45972,Are there any programs or IDEs that calculate Big-O notation on functions? Is this something that is possible to program into an IDE?,,3,12,,,,CC BY-SA 4.0, +404474,1,404479,,1/30/2020 20:04,,0,80,"

In Go we can choose to make something a pointer or not. We also have multiple return values. A common signature on functions or methods is:

+ +
type MyType struct{}
+
+// Value Return Type
+func (m MyType) FindOne(key string)(MyType, error) {...}
+
+// Pointer Return Type
+func (m MyType) FindOne(key string)(*MyType, error) {...}
+
+ +

In reviewing repos on GitHub (non-exhaustively) for signatures like the above, the pointer-style return type seems to be preferred when something can also return an error. Is that the general consensus?

+ +

A few examples (again non-exhaustive; I have definitely not seen everything there is to see):

+ +

https://github.com/etcd-io/etcd/blob/9c5426830b1b8728af14e069acfdc2a64dea768c/clientv3/kv.go#L48-L54

+ +

https://github.com/go-redis/redis/blob/d19aba07b47683ef19378c4a4d43959672b7cec8/cluster.go#L329

+ +

https://github.com/bradfitz/gomemcache/blob/master/memcache/memcache.go#L317

+ +

In thinking about it, and here's where my question lies, I would think the pointer style would be preferred on calls where a result might not exist:

+ +
service := MyType{}
+result, err := service.FindOne(""not-found-key"")
+
+ +

In this way result can be nil and only err has a value, effectively preventing anyone from operating on result. Whereas with the value-type return, if nothing is found, we would still need to return an empty MyType{}. Assuming we are good Go citizens, we would naturally always error-check things. But nothing is perfect and things slip by, so consider the case where someone were to do something like this:

+ +
service := MyType{}
+result, _ := service.FindOne(""not-found-key"")
+
+// do stuff with `result` because of course it will ALWAYS succeed
+// we don't need to error check (until we do)
+
+ +

The execution will continue even though we are now in a potentially unknown state. Whereas if we tried to do things with that pointer return value, since it's nil, it would panic and we'd know about it right away.
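
To make the contrast concrete (Name is a hypothetical field on MyType):

res, _ := service.FindOne(""not-found-key"")

// Pointer return: res is nil, so this panics right at the bug site.
// Value return: res is a zero MyType, so this ""works"" and the bad state spreads silently.
fmt.Println(res.Name)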

+ +

This has additional implications in the design of our MyType service as well. To keep things consistent, suppose we introduced a FindMany method:

+ +
type MyType struct{}
+
+func (m MyType) FindOne(key string)(*MyType, error) {...}
+func (m MyType) FindMany(key string)([]*MyType, error) {...}
+
+ +

We would probably benefit from keeping the pointer return types consistent, so callers don't have to think too much: it's always the same, either a pointer or a value of MyType.

+",209560,,,,,43861.74306,"Preferred return type, pointer or value, when the method/function can also return an error",,1,2,1,,,CC BY-SA 4.0, +404478,1,404481,,1/30/2020 22:15,,-1,81,"

Say a branch is created with many merges/commits, and then you figure out one merge was bad. Would it be better to undo it with rebase -i, with git revert, or by manually undoing some changes and adding them in a new commit? It seems like both revert and manually undoing it with new commits add an unnecessary extra commit. Is rebase -i less error-prone?
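
For context, the concrete commands I'm weighing (merge-sha and base are placeholders):

# keep history and add an inverse commit (-m 1 keeps the mainline parent's side):
git revert -m 1 <merge-sha>

# rewrite history so the bad merge never happened (needs a force-push on shared branches):
git rebase -i --rebase-merges <base>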

+",303767,,,,,43860.97639,Is rebase -i better then revert?,,1,2,,,,CC BY-SA 4.0, +404486,1,404502,,1/31/2020 2:17,,1,83,"

Background: I was working on a web-socket application integrated into a more conventional http request based website that uses REST APIs.

+ +

Task: I need to retrieve user history from the database for that application. It is a given that the application is open when we need to retrieve this history. Furthermore, triggering updates/deletes/modifications of history in the database is not relevant (and will not be in the future either).

+ +

Analysis: Based on reading several online resources, I got the general idea that REST APIs are preferred for scalability, conversion to binary format for large datasets, and caching, while websockets allow server-to-frontend communication and higher speed, since the connection need not be opened/closed every time. In my application, however, I feel like using the websocket itself might be better since:

+ +
  • It's already going to be open, while an additional HTTP request is extra load.
  • The data I am sending is essentially text, and only a small chunk of the entire history is required in one go.
  • Why go through authentication/filtering again when an authenticated websocket is already open? Also, why write serializers and define an API when the update/delete methods they provide are never going to be used? (A minimal in-socket message shape is sketched below.)
+ +
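
Concretely, the in-socket alternative would just be another message type over the already-authenticated connection (field names hypothetical):

// client -> server
{ ""type"": ""history.fetch"", ""before"": ""2020-01-30T12:00:00Z"", ""limit"": 50 }

// server -> client
{ ""type"": ""history.page"", ""items"": [""msg1"", ""msg2""], ""has_more"": true }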

Question: TL;DR - I feel like a simple function in my websocket application would do the job of history retrieval better than a REST API in my case. However, I am a newbie and don't want to budge from standard design without being sure that my analysis of the situation is reasonable. Could someone help me understand any points that I might have missed?

+",356318,,,,,43861.46181,Should a REST API be used when a websocket is already open?,,1,0,,,,CC BY-SA 4.0, +404487,1,,,1/31/2020 2:31,,1,83,"

So I've run into some strange behavior in an application I'm developing in C using the Windows API. I'm trying to implement a closed-connection server-client interface. However, for whatever reason, connect() fails consistently after between 1 and 50 successful iterations of the pseudocode below. (WSA initialization is omitted; the reasoning is explained later - all that is important to know is that a TCP connection is used.)

+ +
While(1) {
+    socket();
+    if(failed) continue;
+    connect(); //This fails repeatedly after several successful iterations
+    if(failed) {
+         closesocket();
+         continue;
+    }
+    closesocket();
+}
+
+ +
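
One thing worth adding to the failure branch while debugging: WSAGetLastError() (the standard WinSock call for this) pins down the reason. A sketch:

if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) == SOCKET_ERROR) {
    /* e.g. 10048 WSAEADDRINUSE or 10055 WSAENOBUFS would point at
       port/resource exhaustion from the rapid open/close loop */
    printf(""connect failed: %d\n"", WSAGetLastError());
    closesocket(sock);
    continue;
}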

At this point, I'm likely going to just use an open connection, as my program works when the connection is left open, and given how often data will be transferred this is probably the better option anyway. However, for soon-to-be-obvious reasons, I would like to know why the above doesn't work. The code that initializes WSA and the above code are called by a thread that my int WINAPI WinMain function creates. When the same code is executed within the normal int main() function in a previous development version (i.e. it is not being executed as a thread and not alongside WINAPI WinMain), it indefinitely connects and disconnects successfully to the same exact server (i.e. it works). Please let me know if there is a proper way to implement a closed connection. Why would the same code work in one instance and not another? Is it due to being executed within a thread? Is it due to being executed alongside WINAPI functions? I wouldn't think it is firewall-related, since it works in one instance.

+",356316,,356316,,43861.11042,44132.04375,What are the limitations of WinSock2 sockets within threads?,,1,0,,,,CC BY-SA 4.0, +404490,1,,,1/31/2020 5:23,,0,97,"

Suppose I have a method which is something like

+ +
void getCalled(Predicate<Integer> predicate, List<Integer> lst){
+   lst.stream().filter(predicate).forEach(...);
+}
+
+ +

The thing is, for this predicate I will have exactly two choices, which are negations of each other. So the predicates are:

+ +
public class Helper {
+   public static boolean doesExist(int x){
+    return ..
+   }
+
+   public static boolean doesNotExist(int x){
+     return !doesExist(x);
+   }
+}
+
+ +

Now in the caller I am unsure whether I should do this:

+ +
void caller(){
+  List<Integer> lst = ...
+  getCalled(Helper::doesExist, lst);
+  getCalled(Helper::doesNotExist, lst);
+}
+
+ +

The issue: I am not sure having extra code for Helper::doesNotExist is good design practice, since it's just calling the negation of doesExist. The alternative would be something like:

+ +
void caller(){
+  List<Integer> lst = ...
+  getCalled(Helper::doesExist, lst);
+  getCalled(x -> !Helper.doesExist(x), lst); // Ugly
+}
+
+ +

At the same time, this is the cleanest way to submit a predicate to the getCalled method, by having two different predicates, doesExist and doesNotExist.
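
For what it's worth, one middle ground I'm aware of is negating at the call site without a second helper method (Predicate.not is a real java.util.function.Predicate method, Java 11+):

import java.util.function.Predicate;

getCalled(Helper::doesExist, lst);
getCalled(Predicate.not(Helper::doesExist), lst); // same as doesNotExist, no extra method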

+ +

Any ideas?

+",356320,,,,,43861.62986,Boolean function for clean predicate?,,2,1,,,,CC BY-SA 4.0, +404491,1,,,1/31/2020 6:25,,0,39,"

The service works on a request-reply model using Kafka consumers. It communicates with external systems.

+ +
  1. Whenever an external system requests a message (via Kafka), it has to send a client_id and an idempotency key.
  2. The first time a given combination of client_id and idempotency key is seen, the service saves it in a request table.
  3. Before processing a request, the service checks whether that combination has already been processed.
  4. If a duplicate is found, the request is ignored. (A sketch of the request table follows below.)
+ +
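
A minimal sketch of the request table I have in mind (DDL illustrative; the primary key is what makes the duplicate check race-safe across consumers):

CREATE TABLE processed_requests (
    client_id       VARCHAR(64)  NOT NULL,
    idempotency_key VARCHAR(128) NOT NULL,
    processed_at    TIMESTAMP    NOT NULL,
    PRIMARY KEY (client_id, idempotency_key)
);
-- insert first, then process only if the insert succeeded;
-- a duplicate-key error means another consumer already handled it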

Any recommendations?

+",264027,,264027,,43861.33056,43861.33056,Deciding on Idempotency Approach on Distirbuted Systems (At Kafka Consumer),,0,2,,,,CC BY-SA 4.0, +404495,1,404520,,1/31/2020 8:08,,3,192,"

To some extent this is a broad question, because I do not know in which direction I should move. I am using Polymaps to show markers on a map. The markers are static, but the visualization depends on the values of the input data. For the map, I read the data like this:

+ +
    var b = read_function(""data.json"", function(a) {
+        json_object = a
+    });
+
+    map.add(b.on(""load"",load))
+
+ +

This works well when the JSON files are relatively small. When I get above 30 MB in size, the browser starts to complain (slowness, crashing, etc.). I work with Numerical Weather Prediction (NWP) model data, so I am mainly experienced with Fortran and Python, and the data I hope to be able to show is for an entire model domain of 1.3 million points - that is, 1.3 million markers. The JSON file with all this data takes up 1.1 GB.

+ +

Surely it is not wise to read this file all at once, and I do not want to show all markers at the same time anyway. The density of markers can be much lower when the zoom is low, showing more and more as you zoom in.

+ +

So my question is, how should an amateur proceed? Do I perhaps need to create an API somehow that returns smaller JSON objects depending on the state of the map?
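
Given my Python background, the kind of API I'm imagining would be something like this sketch (Flask; the thinning heuristic and data loading are only illustrative):

from flask import Flask, jsonify, request

app = Flask(__name__)
POINTS = []  # loaded once at startup, e.g. from a spatially indexed store

def query_points(w, s, e, n):
    return [p for p in POINTS if w <= p['lng'] <= e and s <= p['lat'] <= n]

@app.route('/markers')
def markers():
    # bounding box and zoom come from the map's current view
    w, s, e, n = (float(request.args[k]) for k in ('w', 's', 'e', 'n'))
    zoom = int(request.args.get('zoom', 5))
    step = max(1, 2 ** (12 - zoom))          # crude thinning: fewer points when zoomed out
    return jsonify(query_points(w, s, e, n)[::step])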

+",356330,,,,,43861.89792,Read large JSON file and return a subset for map,,3,2,,,,CC BY-SA 4.0, +404505,1,,,1/31/2020 12:11,,-1,55,"

This is part of a bigger problem, which is to find out if point XYZ exists in any of n (XYZ -> XYZ) ""boxes"".

+ +

I'm currently splitting the problem into a smaller one by focusing on one dimension first and ""filtering"" until either a range is found or it isn't:

+ +

How can I find out if my number X is in any of n ranges, given only a ""beginning"" and an ""end"" number for each?

+ +

PS: I've already found some suggestions like ""interval trees"" and ""segment trees"", but I couldn't quickly find out whether that was what I needed or not.
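
If the ranges happen to be non-overlapping, a sort-once/binary-search approach already gives O(log n) lookups (a Python sketch; interval trees only become necessary when ranges overlap):

import bisect

ranges = sorted([(10, 20), (35, 40), (100, 250)])  # (begin, end), non-overlapping
starts = [b for b, _ in ranges]

def contains(x):
    i = bisect.bisect_right(starts, x) - 1          # last range starting at or before x
    return i >= 0 and ranges[i][0] <= x <= ranges[i][1]

print(contains(37), contains(41))                   # True False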

+",356343,,,,,43861.66458,How can I efficiently find out if X is in any of N ranges of L-R numbers?,,1,0,,,,CC BY-SA 4.0, +404506,1,,,1/31/2020 12:27,,3,139,"

I have some years of OOP programming experience, and I thought I knew what abstraction is (using abstract classes and interfaces), but I am confused by a definition that appears in a book, which says:

+ +

""The process of filtering the available information and arriving at a subset that is essential for your application is called abstraction.

+ +

and

+ +

""Abstraction can be achieved using Abstract Class and Abstract Method.""

+ +

According to the first part, let's say we want to create a banking application and we are asked to collect all the information about our customer. Customer info can contain: name, gender, age, favorite food, favorite movie, etc. So we select only the information useful for the banking application from that pool and pick name, gender and age.

+ +

But why does it have to be achieved using an abstract class and abstract methods? We can just have a normal class, as in:

+ +
public class Customer
+{
+   string name;
+   string gender;
+   int age;
+}
+
+ +

So we don't even use an abstract class - how are abstract classes associated with filtering information?

+",344348,,,,,43861.68819,confused with abstraction definition?,,3,5,,,,CC BY-SA 4.0, +404509,1,,,1/31/2020 12:57,,1,46,"

I'm trying to understand whether there is a manager/broker/coordinator class in frameworks - I hope so - and what the possible ways are for a framework to run its pluggable extensions, and what the term is for that responsible part.

+ +

In the Build dynamically extensible frameworks article there are some keywords like ""coordinating class"" and ""broker class"".

+ +

Let's say you have made a framework with certain design decisions like they describe, for example in the Evolving Frameworks: A Pattern Language for Developing Object-Oriented Frameworks or Extension/Service/Plugin Mechanisms in Java articles. From my research, there are not many articles on framework design; most of them are old, and even they don't clearly explain how the core of a framework actually triggers and keeps itself rolling by handling all extensions, running the specific one, and, if there is none, running the default implementation.

+ +

I'm aware of DI, dynamic class loading, reflection, observer, and hooks; my question is not about how to design a framework, but about how these techniques can be used, and more specifically which one is the most common or basic one for triggering this.

+ +

How, and in which ways, does a framework core broker (or something like it) initiate the process and run, by getting all extensions and selecting the most specific one, falling back to the default implementation if there is none? Is there a general term for the component or class that does this task?

+ +

How are extensions triggered in the later parts of the lifecycle, when the specific implementation is selected?

+ +

As a sample, please consider this snippet from an article I mentioned. I'm interested in this kind of specific part, not in how to design extension points. Is this the most basic approach?

+ +
public boolean book( String type, Calendar date ) throws NoStorageFoundException 
+  { 
+    BookableStorage aStorage = getBookableStorage( type ); 
+    if( aStorage == null ) 
+      throw new NoStorageFoundException(); 
+    Bookable[] someBookables = aStorage.loadAll(); 
+    for( int i=0; i < someBookables.length; i++ ) 
+    { 
+      if( someBookables[i].isFree( date ) ) 
+      { 
+        someBookables[i].book( date ); 
+        try 
+        { 
+          aStorage.store( someBookables[i] ); 
+        } 
+        catch( IOException e ) 
+        { 
+          myMessageWriter.println( ""I could not store your request."" ); 
+        } 
+        return true; 
+      } 
+    } 
+    return false; 
+  } 
+
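
As one concrete data point for the ""broker"" part: in Java, java.util.ServiceLoader (a real JDK facility) is often the minimal mechanism for discovering extensions and falling back to a default. A sketch (DefaultHandler is a hypothetical built-in):

import java.util.ServiceLoader;

interface BookingHandler {                // the framework's extension point
    boolean supports(String type);
    void book(String type);
}

class HandlerRegistry {                   // the ""broker""/""coordinator"" piece
    BookingHandler resolve(String type) {
        // plugins register via META-INF/services/<fully-qualified interface name>
        for (BookingHandler h : ServiceLoader.load(BookingHandler.class)) {
            if (h.supports(type)) {
                return h;                 // first matching plugin on the classpath wins
            }
        }
        return new DefaultHandler();      // built-in fallback implementation
    }
}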
+",331518,,,,,43861.53958,"How does a framework manager, broker or coordinator class handle pluggable extensions and make them run?",,0,0,,,,CC BY-SA 4.0, +404510,1,,,1/31/2020 13:03,,-3,337,"

In the domain of system modeling (e, SystemVerilog, MATLAB, Python), lists are making arrays, stacks and queues(*) obsolete altogether. Other domains that use Python, Perl and Ruby have that same mindset as well. I am a system modeler and a list aficionado who is new to C# and Java (for high-school teaching). C# seems so great... but...

+ +

The remove-and-return approach is respected by:

+ +

C# Stack, C# Queue, Java ArrayList, Python list, Perl, Ruby, e, SystemVerilog and others.

+ +

A notable exception is ... C# List.

+ +

c# List<T>.RemoveAt(idx) doesn't return a value.

+ +

Why is that important? Say one wants to post this on a modeling web site (whose readers aren't experts in C#):

+ +
 int len = myCriticalProcessList.Count; sendMethod( myCriticalProcessList.RemoveAt(len - 1) ); // does not compile: RemoveAt returns void
+
+ +

That would work in any modern language, but to post it in C#, one has to also post several more lines and invent new names (sketched below). Also, one doesn't need to know the element type of the list! And when the type is changed, the code does not need to change at all. Also, naturally, system-modeling folks care about advanced C# about as much as C# folks care about advanced system modeling (i.e., not at all).
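
For reference, the extra lines in question would be a hypothetical extension method like:

using System.Collections.Generic;

public static class ListExtensions
{
    public static T RemoveAndReturnAt<T>(this List<T> list, int index)
    {
        T item = list[index];
        list.RemoveAt(index);
        return item;
    }
}

// usage:
sendMethod(myCriticalProcessList.RemoveAndReturnAt(myCriticalProcessList.Count - 1));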

+ +

Can there be any damage if a value is returned in that case? Is there a performance consideration here? Does Java's value return in ArrayList.remove make it slower, or less safe? Is C#'s remove-and-return approach good for Stack and Queue, yet not a good idea for List? Is there some difference in underlying implementation, or philosophy?

+ +

Side note:

+ +
  • C# Stack.Pop and C# Queue.Dequeue do return a value (a remove-and-return approach, like Java's remove, and unlike C#'s RemoveAt).
  • Python's list.pop([idx]) removes an element by position and returns a value, similar to remove in Java (yet with a default: remove and return the last element).
  • C++ (STL) remove is also void (thanks @Deduplicator). Yet decisions relevant for languages without built-in garbage collection are not always relevant for GC languages.
+ +

It seems @Deduplicator's comment points us to the answer!

+ +

C# was inspired by C++ not only in its name, but also in many other things. Additionally, the C# compiler was developed in C++ (for roughly its first 15 years). Find me one human who is not influenced by the language he is using... No one can say it better than Mads Torgersen, the program manager of C#: ""Working in C# every day makes you think differently about C#: It's the power of 'dogfooding'."" https://medium.com/microsoft-open-source-stories/how-microsoft-rewrote-its-c-compiler-in-c-and-made-it-open-source-4ebed5646f98

+ +

(*) Minor: queues are not 100% obliterated by lists. There are very rare cases in system modeling where O(n) for dequeue is not tolerated.

+",354317,,354317,,43862.91875,43862.91875,"C# is fantastic, if only List 'd respect Remove&Return",,1,25,,43861.60625,,CC BY-SA 4.0, +404511,1,,,1/31/2020 13:03,,-2,76,"

The background of this question stems from an article about someone who operates a website powered by a solar panel, and the carbon footprint that we produce in the software space. Data centers use around 2% of the world's energy. It's made me think a bit further about what I can do as an engineer to be a bit greener.

+ +

An example that comes to mind was Etsy upgrading from PHP 5 to PHP 7. With the efficiency improvements, they were able to turn off 80%+ of their app servers.

+ +

I have been a huge fan of JAMstack websites of late (where the site itself is static, using something like Vue/React, and all dynamic content is pulled via an API). I've also been building most of my APIs in Go, which is a ton more efficient than PHP/Node. I'm trying to be language-agnostic here, but that's obviously a factor.

+ +

My question, then: from an energy (CPU utilization?) perspective, is it more efficient to generate the markup on the server, or to do so client-side? I know page caching would improve this server-side, but if these views are dynamic (logged in), then caching the page will be useless.

+ +

I know servers aren't particularly efficient at generating markup and I couldn't find any concrete answers as to what is the best from an energy perspective.

+",356349,,,,,43861.63542,Server-side vs. Client-side markup - whichi is more efficient?,,1,1,1,,,CC BY-SA 4.0, +404528,1,,,1/31/2020 17:41,,0,44,"

This is likely the wrong section, but I'm curious whether there is a good way to select unique and distinct colors for plotting data on charts. My current method is to hard-code a set plot limit with predefined color codes. I'm sure there is a random generator, but the colors must be noticeably different for legibility. Are there any resources online that compile lists of codes for uses like this where I could handle this more easily, or is there a method I may not be aware of?
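
One technique I've come across (not sure whether it counts as standard practice) is spacing hues by the golden ratio so consecutive colors stay visually distinct; a Python sketch:

import colorsys

def distinct_colors(n, s=0.65, v=0.9):
    # yield n hex colors with hues spaced by the golden ratio
    h = 0.0
    for _ in range(n):
        h = (h + 0.61803398875) % 1.0
        r, g, b = colorsys.hsv_to_rgb(h, s, v)
        yield '#%02x%02x%02x' % (int(r * 255), int(g * 255), int(b * 255))

print(list(distinct_colors(5)))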

+ +

I apologize if this is in the wrong section. I am just looking for resources.

+",352360,,,,,43861.73681,Unique Colors for Chart Data,,0,5,,,,CC BY-SA 4.0, +404533,1,,,1/31/2020 20:09,,4,85,"

A long time ago - and long before I joined the project - my project was migrated from clearcase to git. This migration led to the following file layout:

+ +
.
+├── bar
+│   ├── bar.c
+│   └── bar.h
+├── foo
+│   ├── foo.c
+│   └── foo.h
+└── patches
+    ├── bar
+    │   └── bar.c
+    └── foo
+        └── foo.h
+
+ +

On the day of the ClearCase-to-git migration, patch and patched files were mostly copies, with some (mostly minor) changes that reflected ClearCase differences.

+ +

The build system makes the patches hide the patched files. The patched files have been kept as a testimony of the patches done in the ancient ClearCase times. The patched files have (normally) experienced no changes since the migration.

+ +

The developers on the team have always felt really confused that when changing bar/bar.h, the implementation file that needs to be updated is patches/bar/bar.c and not bar/bar.c, which is just a confusing testimony of the ancient times.

+ +

Now comes the time to apply these patches!

+ +

Now comes the time for bar/bar.c to become the real source file again and for patches/bar/bar.c to fade back into the ancient times. (Nonetheless, bar/bar.c shall reflect the git change history of patches/bar/bar.c.)

+ +

I see two options:

+ +

Option 1

+ +

Tricking the devs into believing there has never been a patch!

+ +
# This totally is a pseudo script. Sorry, friday night, not at work anymore!
+pfile=patches/bar/bar.c
+ofile=bar/bar.c
+for commit in $(git rev-list --reverse HEAD -- ""$pfile""); do
+    git show ""$commit:$pfile"" > ""$ofile""
+    git add ""$ofile""
+    git commit -C ""$commit""   # -C reuses the original message, author and date
+done
+
+ +

(This might be doable by some sort of rebase I haven't worked on today, feel free to improve)

+ +

Pros

+ +
  • git log -- bar/bar.c really shows the history of the file, with the application of the ClearCase patch as one of the top commits.
  • If I want to run a filter to move foo and bar to their own repos, the history will be preserved (and clean).
+ +

Cons

+ +
  • This adds a lot of commits!
  • Does not easily preserve commit coherency: if commit 01234567 modified both bar/bar.h and patches/bar/bar.c, this will now appear in two different commits.
+ +

Option 2

+ +

The commit-saving way. It can be found e.g. here:

+ +
git rm bar/bar.c
+git commit -m 'Remove unpatched clearcase file'
+git mv patches/bar/bar.c bar/bar.c
+git commit -m 'Apply clearcase and git patches'
+
+ +

Pros

+ +
  • Few commits added.
  • If I set git config log.follow true (thanks @VonC), my fellow devs will see the full history when looking at the file, missing only the initial ClearCase patch. This shall be enough.
+ +

Cons

+ +
  • Unless I put in a lot of effort, this is not resilient to future (and foreseen) subdirectory filters.
  • The initial ClearCase patch is hidden. To see the unpatched file history, I'll need some git trickery now that --follow is the default, or to set up an alias get_clearcase_patch that computes the diff.
+ +

The question

+ +

Is there a third option I haven't seen (one which preserves history - I thought of some which don't, of course ^^)? Are there any pros and cons I haven't seen? Which is the best solution?

+",325571,,325571,,43861.90833,43861.90833,Git replace file merging (or preserving) history,,1,5,,,,CC BY-SA 4.0, +404538,1,404541,,1/31/2020 22:09,,-3,55,"

Let's say I have 3 customers: A, B and C.

+ +

I have an application (whatever this application is), with a front-end (let's say in Angular 2) and a back-end (in Spring).

+ +

In the beginning I have one common version of the front-end and back-end, but after some time customer A wants something unique on the front-end (which has to be handled specially in the back-end), customer B wants something too, and similarly with C. At this point, from one back-end and one front-end, we have 3 front-ends and 3 back-ends (one for each customer). I don't think it is OK to have only one back-end for all customers.

+ +

What is the best way to customize this application for every customer? (Say that next I will have customers D, E, F... Z.)

+ +

If my question is unclear, let me know. I am not a native English speaker.

+",356392,,,,,43862.03542,Customizing module per client,,1,3,,,,CC BY-SA 4.0, +404540,1,404565,,1/31/2020 23:38,,3,262,"

In my project, we have a couple different back-end APIs/endpoints that are called by the same front-end page at different times. All of these endpoints are sort of related to the overall ""theme"" or ""purpose"" of the page, but they are called at different times, for different reasons, and return different data. The data returned by each endpoint varies a lot. Some fields require simple database queries, some require processing to generate, some require calls to external APIs, but they all serve this common ""theme"" of the page and are temporally coupled within their respective endpoints.

+ +

Here's the general architecture of ONE of these endpoints, UpgradePreviewEndpoint. This one gets called when the user selects a target version that they wish to upgrade to:

+ +

[diagram: architecture of UpgradePreviewEndpoint]

The other endpoints follow the exact same architecture and share the same abstractions, so hopefully imagination will suffice. Each endpoint has a single process method that takes a request object. The endpoint then asks an IResponseAssembler to assemble a string response. The format of that response is irrelevant to the question, but currently it's JSON by default.

+ +

Underneath the response assembly component, I have a single gateway into all the business logic used by this page, regardless of which endpoint is accessing it. This is my facade, which is called ServiceFacade in the diagram. Each concrete IResponseAssembler implementation also has its own corresponding I...Facade interface through which it interacts with the concrete ServiceFacade. The ServiceFacade implements all of these interfaces, one for each assembler.

+ +

Since there's only one concrete facade, and it's highly stable, I could have allowed all of the assemblers to depend on it directly. I could have also created a single interface for the facade and allowed all of the assemblers to depend on that one interface. However, since each assembler only needs to call a few methods on the facade, both of those approaches would violate the Interface Segregation Principle. What I've tried to do here is implement the ISP by using individual interfaces to carve out pieces of real estate inside the ServiceFacade for each assembler to depend on. That way, none of them depend on anything they don't actually need.
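
In code, the segregation amounts to something like this (a Java sketch; the role-interface and type names are hypothetical, mirroring the endpoints):

// One role interface per assembler, exposing only what that assembler calls:
interface IUpgradePreviewFacade {
    UpgradePreview assembleUpgradePreview(String targetVersion);
}
interface IStatusFacade {
    Status currentStatus();
}

// The single concrete facade satisfies all of them:
class ServiceFacade implements IUpgradePreviewFacade, IStatusFacade {
    public UpgradePreview assembleUpgradePreview(String targetVersion) { return null; /* orchestration elided */ }
    public Status currentStatus() { return null; /* orchestration elided */ }
}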

+ +

However, I have seen many descriptions of the Facade Pattern state that it should provide a single interface into a complex subsystem. Segregating the interface seems to violate that axiom. On the contrary, splitting the concrete ServiceFacade class into multiple entities seems like a really absurd idea in this case, since the only reason for having multiple endpoints is the temporal coupling of the GUI which is completely irrelevant at the service layer.

+ +

I would like the fine folks in this community to scrutinize this design and tell me if I am committing any egregious crimes, or if there is a better approach. This is not my first iteration with the design of these APIs, and I feel that I am on the right track now, but that feeling seems to get me into trouble often enough to warrant skepticism.

+",356235,,356235,,43861.99653,43863.1875,Does it make sense to apply interface segregation to a facade?,,1,4,2,,,CC BY-SA 4.0, +404542,1,,,2/1/2020 0:33,,2,627,"

Background

+

In Uncle Bob's Clean Architecture, use case interactors are responsible for the orchestration of business objects to accomplish some user goal. As an easy example, an e-commerce application might have a use case to purchase items in a shopping cart: the interactor receives a request (DTO) to make a purchase, then the interactor might query various output ports (gateways), for instance to check inventory/availability and to check whether a payment can be made, and if so, persist a change to the inventory service, ending with a response (another DTO) indicating the success/failure of the interaction (the user-interface layer then presents this information to the user).

+

In trying to apply the principles of Clean Architecture to my applications, I continually run into the question of whether the use case interactor and gateways should be implemented on the front-end, the back-end, or both. Consider the scenario of a mobile phone app that communicates with a web app - in a scenario like this, I've tended to notice that implementing use case interactors in the front-end leads them to be very anemic. That is, they only relay requests from the user-interface (typically a function call made on a view-model) to an output port (gateway interface). The gateway is good as it provides some level of abstraction about how communication with the back-end occurs (HTTP, sockets, IPC, RPC, etc.), a detail that Uncle Bob stresses should be kept open for as long as possible. But I question the value of a use case interactor that merely relays requests - after all, why shouldn't a view-model depend directly on the output port? This is especially the case when the critical business rules must be guarded by executing on an external device (e.g. web server, hardware, etc.) to prevent bad or unauthorized requests.

+

My typical attempt usually ends up as sketched below. The output port provides a separation from the specific mechanism for sending a request. The only problem is that the use case interactor seems very anemic, as mentioned, since it merely relays a request to the gateway because the interesting business logic occurs on the external device or web app.
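
In code form, the anemic relay looks roughly like this (a sketch; names hypothetical):

// Use case interactor on the front-end tier:
class PurchaseInteractor implements PurchaseInputPort {
    private final PurchaseGateway gateway; // output port, e.g. backed by HTTP

    PurchaseInteractor(PurchaseGateway gateway) { this.gateway = gateway; }

    @Override
    public PurchaseResponse purchase(PurchaseRequest request) {
        return gateway.purchase(request); // no local rules: a pure relay
    }
}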

+

Consequently, I'm wondering if I should go with the following approach instead, but I'm not sure what the advantages/disadvantages of removing the use case interactor are - it seems improper to depend from one adapter directly on another adapter (albeit via an abstraction).

+

[diagram: view-model depending directly on the output port]

Now, in this circumstance, I do not care how the external server, hardware, or web app is implemented so long as I obey its API contract. Though, supposing I was responsible for implementing the external application, is this where the use case interactor starts to be useful, as now I have critical business rules and user flows that require orchestration?

+

My current conclusion is that in a front-end/back-end system, the front-end would only be interested in use case interactors insofar as the interesting business rules occur only within that tier (i.e. the front-end). If there are interesting business rules that cannot be fully executed in the front-end, perhaps due to security/authorization requirements, then we'll have to move that use case interactor to the back-end tier instead, and can simply have a view-model depend directly on the output port, as in the second diagram.

+

Question

+

Does anyone have an expert opinion regarding tiering, MVVM, and Clean Architecture who can show me where I might be going astray? I'm hoping to understand:

+
  1. Is it useful (perhaps for maintainability reasons) to have use case interactors that only relay to a gateway (because the sophisticated business logic occurs on a different tier)?
  2. What harms, if any, are there in depending directly from a view-model (an entry-point adapter) on an exit-point adapter via some output port interface?
+",249944,,-1,,43998.41736,43892.50208,"Uncle Bob's Clean Architecture - Dealing with anemic interactors, tiering, and front-end MVVM",,2,0,2,,,CC BY-SA 4.0, +404543,1,404545,,2/1/2020 1:04,,1,255,"

I have code that looks something like this in a class:

+ +
string X() { 
+  foreach (var a in aList) {
+    // do something
+  }
+  return someString;
+}
+int Y() { 
+  foreach (var a in aList) {
+    // do something
+  }
+  return someInt;
+}
+double Z() { 
+  foreach (var a in aList) {
+    // do something
+  }
+  return someDouble;
+}
+
+ +

I sense a code smell here in the multiple loops over the same list in different methods, but I am not sure whether it is real or not. Is there any way I can refactor this code so that the loop exists in only one place?
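
One shape the refactoring could take, assuming each method is essentially folding aList into a single value (a C#-flavored sketch; the property names are hypothetical):

// Single loop, per-method behavior injected as a function:
private TResult OverList<TResult>(TResult seed, Func<TResult, A, TResult> step)
{
    var result = seed;
    foreach (var a in aList)
        result = step(result, a); // the one and only loop
    return result;
}

string X() => OverList("""", (s, a) => s + a.Name);
int    Y() => OverList(0,  (n, a) => n + a.Count);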

+",356397,,,,,43863.83056,Refactoring multiple for loops on same list in different methods,,2,5,,,,CC BY-SA 4.0, +404546,1,,,2/1/2020 4:53,,1,54,"

I've inherited a project that has a codebase in src/, but it also has precompiled binaries of dependency software in bin/. I would like to move away from having precompiled software as part of our repository, since (1) it doesn't work for all users and (2) they won't be updated when the source code of their own repo changes.

+ +

I have two options:

+ +
  1. Use git submodule to have the source code of the dependency projects present in my repository in a way that keeps them updated (the commands are sketched below). I like this idea; however, it requires users to build not only our project but the external ones as well. Additionally, I'm not sure where to put the external projects - I'm thinking external/, as suggested by the PFL.
  2. Require the users to build the dependencies themselves. This option keeps our repository as it is and may be less confusing to users (who already submit issues to our repo when dependency software is failing). This is a minor gain, since it's not really a big deal to tend to these.
+ +
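
For reference, the commands behind option 1 are straightforward (the URL is a placeholder):

git submodule add https://example.com/dep.git external/dep
git commit -m 'Track dep as a submodule under external/'

# users then clone with:
git clone --recurse-submodules <our-repo-url>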

It seems to me like option 1 is the clear winner, however another post suggests otherwise. Am I missing something? What do people usually do in practice?

+",350903,,,,,43862.20347,External standalone cpp code in my project,,0,5,,,,CC BY-SA 4.0, +404549,1,,,2/1/2020 10:18,,3,1210,"

Just a little explanation:

+ +

""I'm used to be a solo front-end developer in my company and using default folder-structure and way of coding which vue-cli provided, it is good for a solo developer and small applications, but the project is going to scale up and be an enterprise level app, which takes me more times to develop new feature or make it harder to reuse my previous codes and also the company said we want to hire new front-end developers, which makes it like a nightmare for me in this situation.

+ +

So I decided to make a change towards a better and more well-defined front-end project. First I migrated the project from JS to TypeScript to use interfaces, classes, etc., and the interfaces, classes and helpers I developed make the code much cleaner than it was, and I was happy about it.

+ +

But it seems that's not enough. I told myself: currently I know where everything lives, how to do things, where to go to develop new components, and what to add in which folders, etc., but if a new developer comes, it could be confusing for them to adapt to it. So I began to read about enterprise-level folder structures (https://github.com/vuejs/awesome-vue#scaffold), like this one introduced by Chris Fritz in Vue Awesome (https://github.com/chrisvfritz/vue-enterprise-boilerplate):

+ +

[screenshot: vue-enterprise-boilerplate folder structure]

and this one, which is also introduced by Vue Awesome (https://github.com/NarHakobyan/awesome-vue-boilerplate):

+ +

[screenshot: awesome-vue-boilerplate folder structure]

When I investigated them, I noticed that every developer comes up with a new, self-authored folder structure. In the end I used a combination of those in my project, and it seems to work very well.

+ +

But I still think I made up a new folder structure and a self-authored way of coding, which is not a standard, or maybe not a well-enough-defined solution.

+ +

So I began to investigate solutions in other domains and on Uncle Bob's pages, and finally I found something called ""Clean Architecture"" on Uncle Bob's site, which shows a diagram describing an onion model of code development

+ +

(""(https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html). +)

+ +

It seems to be popular in the Android world, and our company's Android developers seem to be very happy with it. They tell me they can look at their code and quickly find what is what and where they should go to develop new things, reuse their previous code better, and have almost no conflicts while developing.

+ +

I want to know: is this architecture good for the front-end? Do you have any experience developing a front-end with Clean Architecture? Was it really helpful?

+ +

Thank you in advance for the guidance.

+",356411,,356411,,43862.62708,43862.62708,How & Should we use Clean Architecture in Front-end?,,0,9,,,,CC BY-SA 4.0, +404552,1,,,2/1/2020 14:22,,0,30,"

I'm not very familiar with Redux, but most of the resources on the topic recommend placing CRUD API calls in the Redux store via middlewares, and using additional states like FETCH_SUCCEEDED, FETCH_FAILED, etc. to store the result.

+ +

Is that really necessary?

+ +

Isn't this against the single responsibility principle?

+ +

What are the benefits of the above boilerplate-ish approach versus simply calling your API client and then dispatching the result data to the store?
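
To make the two styles concrete (hand-rolled sketches, not from any particular Redux guide; api is a hypothetical client):

// Middleware/thunk style: the store owns the whole fetch lifecycle
const fetchUsers = () => async (dispatch) => {
  dispatch({ type: 'FETCH_USERS_STARTED' });
  try {
    const users = await api.getUsers();
    dispatch({ type: 'FETCH_USERS_SUCCEEDED', users });
  } catch (e) {
    dispatch({ type: 'FETCH_USERS_FAILED', error: e.message });
  }
};

// Component-driven style: fetch outside the store, dispatch only the result
async function loadUsers(dispatch) {
  dispatch({ type: 'USERS_LOADED', users: await api.getUsers() });
}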

+",356420,,,,,43862.59861,"Should redux store take care for fetching, updating and deleting data from an API?",,0,0,,,,CC BY-SA 4.0, +404554,1,,,2/1/2020 14:42,,2,83,"

I have to build a dynamic rules engine in say, Java, where a user can define a certain list of filters and trigger a certain event based on them. The rules will be a long chain of conditions, such as:

+ +
if 
+    condition1 AND (condition2 OR condition3) AND condition4
+then
+    trigger(event)
+
+ +

where conditionN is any operation based on the attribute for a business object. Example: a string can have an operation - endsWith or a number can have operations greaterThan.

+ +

The problem is, this list of rules will not be hard-coded and has to be fetched dynamically from a database (a separate web UI will update this database with the rules) and run against an entity and then trigger the appropriate action on the entity.

+ +

What's the best way to approach this kind of problem and write a scalable and high-performant program for this?

+ +

I am currently thinking of building expressions in Spring Expression Language (SpEL) and storing them in DB as strings and then while running the program, fetching this column and running them against my domain object.

+ +
// Fetch rule and action from DB
+rule = rs.getString(""rule""); // Rule in SpEL
+action = rs.getString(""action""); // Action method name
+
+ExpressionParser parser = new SpelExpressionParser();
+
+Expression exp = parser.parseExpression(rule);
+Boolean result = exp.getValue(obj, Boolean.class); // Evaluate the rule against domain object `obj`
+
+if (result) { /* dispatch to the method named by `action` */ }
+
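
For illustration, the kind of rule string I'd store in the DB column (attribute names hypothetical, standard SpEL syntax against the root object's properties):

numAttribute < 5.05 or stringAttribute == 'IN'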
+ +

I am currently not sure if this is the best way for doing things and if it will scale and be extensively customizable or not.

+ +

For the above approach, the rule string needs to be built by parsing user input. I was also thinking about storing rules as an XML document:

+ +
<Rule>
+    <Filter>
+        <Operation name=""or"">
+            <Operand>
+                <Operation name=""lessThan"">
+                    <Operand>${numAttribute}</Operand>
+                    <Operand unit=""percentage"">5.05</Operand>
+                </Operation>
+            </Operand>
+            <Operand>
+                <Operation name=""equals"">
+                    <Operand>${stringAttribute}</Operand>
+                    <Operand>IN</Operand>
+                </Operation>
+            </Operand>
+        </Operation>
+    </Filter>
+    <Action>
+        actionName
+    </Action>
+</Rule>
+
+ +

However, with this scenario, I am not sure how to proceed after parsing the XML and how to run the operations dynamically based on the described values.

+ +

I would appreciate any thoughts on the aforementioned ideas, or a totally new way to solve this problem. I understand it's a huge application, but I need clarity on how to start. I believe the biggest problem is how to run user-input statements as Java code. I know a little bit about reflection, but I am not sure whether it's meant to solve this problem and whether it should be used, considering its drawbacks in the area of performance.

+",356423,,,,,43871.30972,Evaluate and run Dynamic Rules,,1,9,2,,,CC BY-SA 4.0, +404561,1,,,2/1/2020 20:14,,1,227,"

We have just started our bachelor thesis, and I want to ask you how to create proper documentation.

+ +

There are two pieces of software in our project's scope: a microcontroller software (node), and a server software (control_room).

+ +

We wish to place all project documents in the software repository. This way all information can be found in one place, and development-process choices can be linked with specific code.

+ +

Currently, it looks like:

+ +
Folder                 Comment
+----------------       -------------------
+control_room/          source code for the server software
+node/                  source code for the microcontroller software
+docs/                  PROJECT DOCUMENTS (core topic of this question) 
+README.md              small overview for the new readers
+
+ +

The docs/ folder for the project documents aims to set a clear picture on how the software was made. It has the following structure:

+ +
Sub-folder             Comment
+----------------       -------------------
+process/               All choices made during the project are reasoned here.
+sprint_retrospectives/ Reflections on work done every 2 weeks.
+supervisor_meetings/   Supervisor meeting minutes
+
+ +

Is this sufficient? How should we further develop it to meet our objectives?

+",311979,,209774,,43863.70833,43867.92014,How to structure the project documentation?,,2,3,,,,CC BY-SA 4.0, +404567,1,404568,,2/2/2020 6:44,,60,9505,"

Let's say there is a member SomeMethod in an interface ISomeInterface as follows:

+ +
public interface ISomeInterface
+{
+    int SomeMethod(string a);
+}
+
+ +

For the purposes of my program, all consumers of ISomeInterface act upon the assumption that the returned int is greater than 5.

+ +

Three ways come to mind to solve this -

+ +

1) For every object that consumes ISomeInterface, they assert that the returned int > 5.

+ +

2) For every object that implements ISomeInterface, they assert that the int they're about to return is > 5.

+ +

Both the above two solutions are cumbersome since they require the developer to remember to do this on every single implementation or consumption of ISomeInterface. Furthermore this is relying upon the implementation of the interface which isn't good.

+ +

3) The only way I can think to do this practically is to have a wrapper that also implements ISomeInterface, and returns the underlying implementation as follows:

+ +
public class SomeWrapper : ISomeInterface
+{
+    private ISomeInterface obj;
+
+    SomeWrapper(ISomeInterface obj)
+    {
+        this.obj = obj;
+    }
+
+    public int SomeMethod(string a)
+    {
+        int ret = obj.SomeMethod(a); // forward the caller's argument
+        if (!(ret > 5))
+            throw new Exception(""ret <= 5"");
+        else
+            return ret;
+    }
+}
+
+ +

The problem though now is that we're again relying on an implementation detail of ISomeInterface via what the SomeWrapper class does, although with the benefit that now we've confined it to a single location.

+ +

Is this the best way to ensure an interface is implemented in the expected manner, or is there a better alternative? I understand interfaces may not be designed for this, but then what is the best practice for using an object under the assumption that it behaves in some way beyond what I can convey within the member signatures of an interface, without needing to do assertions every time it's used? An interface seems like a good concept, if only I could also specify additional things or restrictions it's supposed to implement.
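
One alternative I've seen mentioned is centralizing the check on the implementation side via a template method (SomeBase is a hypothetical abstract base; implementers would extend it instead of implementing the interface directly):

public abstract class SomeBase : ISomeInterface
{
    public int SomeMethod(string a)
    {
        int ret = SomeMethodCore(a);
        if (ret <= 5)
            throw new InvalidOperationException(""postcondition violated: ret <= 5"");
        return ret;
    }

    protected abstract int SomeMethodCore(string a); // implementers override this
}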

+",355487,,355487,,43864.65,43865.64792,How do I ensure that interface implementations are implemented in the manner I expected?,,8,10,13,,,CC BY-SA 4.0, +404570,1,404573,,2/2/2020 8:55,,0,76,"

I'm in the starting phases of designing an API. I'd like it to be Restful and comply with what's commonly considered best practices.

+ +

One of my resources need to accept several query string parameters. Here's an example:

+ +
{
+  ""pick_up"": {
+    ""datetime"": ""20200301T1400"",
+    ""location"": {
+      ""lat"": 40.68231,
+      ""lng"": -73.52935
+    }
+  },
+  ""drop_off"": {
+    ""datetime"": ""20200401T1400"",
+    ""location"": {
+      ""lat"": 40.68231,
+      ""lng"": -73.52935
+    }
+  }
+}
+
+ +

The above is how I would probably represent the parameters as a JSON. That could be converted to a query string:

+ +
pick_up%5Bdatetime%5D=20200301T1400&pick_up%5Blocation%5D%5Blat%5D=40.68231&pick_up%5Blocation%5D%5Blng%5D=-73.52935&drop_off%5Bdatetime%5D=20200401T1400&drop_off%5Blocation%5D%5Blat%5D=40.68231&drop_off%5Blocation%5D%5Blng%5D=-73.52935
+
+ +

That will work, but it'll look ugly. Maybe something like this would look prettier:

+ +
pick_up_datetime=20200301T1400&pick_up_location_lat=40.68231&pick_up_location_lng=-73.52935 ....
+
+ +
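
For what it's worth, the flat variant is trivial to produce with standard escaping (a C# sketch using System, System.Collections.Generic and System.Linq; the helper name is made up):

+ +
public static class QueryStringHelper
+{
+    // Percent-encodes each key and value and joins them with '&'.
+    public static string ToQueryString(IDictionary<string, string> parameters) =>
+        string.Join(""&"", parameters.Select(p =>
+            $""{Uri.EscapeDataString(p.Key)}={Uri.EscapeDataString(p.Value)}""));
+}
+
+// ToQueryString(new Dictionary<string, string>
+// {
+//     [""pick_up_datetime""] = ""20200301T1400"",
+//     [""pick_up_location_lat""] = ""40.68231""
+// })
+// -> ""pick_up_datetime=20200301T1400&pick_up_location_lat=40.68231""
+
+ +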

Aside from the subjective ""pretty"" factor, are there any benefits of one over the other? Perhaps in terms of generally accepted best practices, security, RESTful design, etc.?

+",93973,,,,,43863.47083,GET Parameters in Restful API,,1,7,,,,CC BY-SA 4.0, +404574,1,404579,,2/2/2020 14:10,,2,189,"

Say an API returns the following pieces of data:

+ +
    +
  • daily_price
  • +
  • number_of_days
  • +
  • total_price
  • +
+ +

Assume that the API will always have the number_of_days and at least daily_price or total_price.

+ +

In cases when the API only has the daily_price or total_price, should the API compute the missing value (e.g. daily_price * number_of_days = total_price) - or should it leave it to the consumer to calculate? And if the API did compute the missing value, should it indicate which of the two values (if any) was computed?
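
+ +

A sketch of the ""compute and flag"" option (C#; the field names are hypothetical):

+ +
public class PriceQuote
+{
+    public int NumberOfDays { get; set; }
+    public decimal? DailyPrice { get; set; }
+    public decimal? TotalPrice { get; set; }
+    public bool TotalPriceIsComputed { get; set; }
+
+    public void FillMissingPrice()
+    {
+        if (TotalPrice == null && DailyPrice != null)
+        {
+            TotalPrice = DailyPrice * NumberOfDays;   // daily_price * number_of_days
+            TotalPriceIsComputed = true;              // tells consumers the value was derived
+        }
+        else if (DailyPrice == null && TotalPrice != null)
+        {
+            DailyPrice = TotalPrice / NumberOfDays;
+        }
+    }
+}
+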

+",93973,,,,,43863.73542,Should an API return calculated values that the consumer can calculate themselves?,,2,2,,,,CC BY-SA 4.0, +404580,1,404617,,2/2/2020 17:55,,2,525,"

I have an existing production Oracle Database. However, there are performance issues for certain kind of operations, because of the volume of the data, or the complexity of queries.

+ +

That's why I regularly export/dump each Oracle table to a CSV. The CSVs are then converted to Parquet files in order to allow very high-performance queries with Spark. However, my concern is about losing strong consistency guarantees.

+ +

Suppose two tables in Oracle like :

+ +
data (id, value, fk_metadata_types_id)
+
+metadata_types (id, label)
+
+ +

As of now, I export regularly such two tables, then convert it to Parquet files (each Oracle table has its own set of Parquet files) in order to be ready for Spark queries.

+ +

The problem is about consistency. There are two batch jobs: one that dumps the data tables to CSV (then Parquet), and another that dumps the metadata tables to CSV.

+ +

So basically, it can happen that at a given time, Spark reads the data table with a fk_metadata_types_id that doesn't yet exist in the corresponding metadata Parquet files.

+ +

How can I handle such consistency issues? The idea here is to have performant queries with Spark, but also to guarantee that whenever the data is queried by Spark, it is always possible (strong consistency) to get the corresponding metadata_types (via a join, just like an Oracle join).

+ +
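
One approach I'm considering, sketched below (C#; the paths and the two Export* stubs are illustrative): write both exports into a new snapshot folder and only then swap a pointer file, so Spark always sees a matching data/metadata pair.

+ +
static class SnapshotPublisher
+{
+    public static void Publish(string root, string snapshotId)
+    {
+        string dir = Path.Combine(root, ""snapshots"", snapshotId);
+        ExportDataParquet(Path.Combine(dir, ""data""));
+        ExportMetadataParquet(Path.Combine(dir, ""metadata""));
+
+        // Write the pointer last: readers resolve _CURRENT first, so the
+        // data/metadata pair becomes visible atomically and always matches.
+        string tmp = Path.Combine(root, ""_CURRENT.tmp"");
+        File.WriteAllText(tmp, snapshotId);
+        File.Move(tmp, Path.Combine(root, ""_CURRENT""), overwrite: true);
+    }
+
+    static void ExportDataParquet(string dir) { /* stub: dump the data table */ }
+    static void ExportMetadataParquet(string dir) { /* stub: dump metadata_types */ }
+}
+
+ +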

Thanks

+",356482,,,,,43864.50486,From Oracle to Apache Parquet : how to handle eventual consistency?,,1,6,,,,CC BY-SA 4.0, +404582,1,,,2/2/2020 18:23,,1,76,"

This is an overly simplistic example of a domain model for a time keeping domain. I only started on it today as a way to practice the techniques I'm learning while reading Implementing Domain-Driven-Design by Vaughn Vernon.

+ +

I am using the Axon framework to model my Aggregate (posted below).

+ +

One big question I had in my Design was the TimeCardEntry objects. Is it appropriate for me to keep a running list of Entries? Should they contain an ID at all or even be an AggregateMember?

+ +

I am still working through the details, but I imagine eventually I will need to add a query layer that will allow viewing a timecard's data.

+ +

I will also likely eventually need the ability to update existing time card entries via an UpdateTimeCardEntry command.

+ +
@Slf4j
+@Aggregate
+public class TimeCard {
+
+  @AggregateIdentifier
+  private String employeeName;
+
+  @AggregateMember(eventForwardingMode = ForwardMatchingInstances.class)
+  private List<TimeCardEntry> timeCardEntries = new ArrayList<>();
+
+  public TimeCard() {
+    //Empty Constructor for Axon framework
+  }
+
+  @CommandHandler
+  public TimeCard(ClockInCommand cmd) {
+    apply(new ClockInEvent(cmd.getEmployeeName(),
+        GenericEventMessage.clock.instant(),
+        cmd.getTimeCardEntryId()));
+  }
+
+  @CommandHandler
+  public void handle(ClockOutCommand cmd) {
+    getEntryIfOpen(cmd.getTimeCardEntryId()).
+        ifPresentOrElse(
+            entry -> apply(
+                new ClockOutEvent(cmd.getEmployeeName(),
+                    GenericEventMessage.clock.instant(),
+                    entry.timeCardEntryId)),
+            () -> log.error(""Employee has not clocked in or is already clocked out""));
+  }
+
+  @EventSourcingHandler
+  public void on(ClockInEvent event) {
+    this.employeeName = event.getEmployeeName();
+    timeCardEntries.add(new TimeCardEntry(event.getTimeCardEntryId(), event.getTime()));
+  }
+
+  private Optional<TimeCardEntry> getEntryIfOpen(String timeCardEntryId) {
+    return timeCardEntries.stream()
+        .filter(entry -> entry.getTimeCardEntryId().equals(timeCardEntryId))
+        .filter(TimeCardEntry::isClockedIn)
+        .findFirst();
+  }
+
+  @Data
+  public class TimeCardEntry {
+
+    @EntityId
+    private final String timeCardEntryId;
+    private final Instant clockInTime;
+    private Instant clockOutTime;
+
+    @EventSourcingHandler
+    public void on(ClockOutEvent event) {
+      this.clockOutTime = event.getTime();
+    }
+
+    private boolean isClockedIn() {
+      return clockOutTime == null;
+    }
+  }
+}
+
+ +

I have the following tests -

+ +
class TimeCardTest {
+
+  AggregateTestFixture<TimeCard> testFixture;
+
+  @BeforeEach
+  void setUp() {
+    testFixture = new AggregateTestFixture<>(TimeCard.class);
+  }
+
+  @ParameterizedTest
+  @MethodSource(value = ""randomEmployeeName"")
+  void testClockInCommand(String employeeName) {
+    String randomUUID = UUID.randomUUID().toString();
+    testFixture.givenNoPriorActivity()
+        .when(new ClockInCommand(employeeName, randomUUID))
+        .expectEvents(new ClockInEvent(employeeName, testFixture.currentTime(), randomUUID));
+  }
+
+  @ParameterizedTest
+  @MethodSource(value = ""randomEmployeeName"")
+  void testClockOutCommand(String employeeName) {
+    String randomUUID = UUID.randomUUID().toString();
+    testFixture
+        .givenCommands(new ClockInCommand(employeeName, randomUUID))
+        .when(new ClockOutCommand(employeeName, randomUUID))
+        .expectEvents(new ClockOutEvent(employeeName, testFixture.currentTime(), randomUUID));
+  }
+
+  private static Stream<String> randomEmployeeName() {
+    return Stream.generate(RandomString::make).limit(10);
+  }
+}
+
+",355668,,,,,43863.76597,How to manage data in an Aggregate,,0,0,,,,CC BY-SA 4.0, +404583,1,,,2/2/2020 18:29,,0,91,"

I've come across a table that has about 200 columns. About 150 of these can be grouped into 5-10 tables that make real-world ""sense"", and seeing as most of these entries are never used, I figured splitting them out would remove a lot of NULLs and reduce the size of the database drastically.

+ +

e.g. lets say the current main table has these entries:

+ +
Id | Person |  DOB  | Address   | FaveColour | LeastFaveColour | MoreColourOpinions
+------------------------------------------------------------------------------
+1    Jim      1992    Here        null         null              null
+2    Bob      1991    There       Brown        Orange            I like purple
+3    Bill     1990    Everywhere  null         null              null
+
+ +

So here you might have guessed that I would split the columns relating to colour into a separate table.

+ +
Id | Person |  DOB  | Address 
+-----------------------------
+1    Jim      1992    Here      
+2    Bob      1991    There
+3    Bill     1990    Everywhere
+
+
+PersonId | FaveColour | LeastFaveColour | MoreColourOpinions
+------------------------------------------------------------
+2          Brown        Orange            I like purple
+
+ +

Now, I know that it's totally fine to have 1-1 tables, but my question relates to speed - what's going to be the difference between querying the original gargantuan table vs. querying all the separate tables left joined together?

+ +

Let's say the table has half a million rows and I want to query on one thing from EVERY group, e.g.

+ +
Select * from Person p
+left join ColourOpinions co on p.Id = co.PersonId
+-- add another ten+ left joins
+where co.FaveColour = 'Brown'
+-- and another filter, one for each of the ten+ joins
+
+ +

I assume that querying the original table will be faster, because there are no joins to be made; with all those joins I'm basically recreating the original table before querying it... but how much slower will it be? Is it a good idea to split this table up?

+ +

I'm thinking yes, because the speed of querying the smaller tables and joins separately, as well as the reduction in database size, should offset the occasional need to recreate the entire original table and query that.

+",352077,,104818,,43895.47708,43895.47708,Splitting 1-1 relationships across lots of different tables?,,1,2,,,,CC BY-SA 4.0, +404584,1,,,2/2/2020 18:31,,-1,124,"

Would it be fair to say that almost all unit test frameworks use assertions, or a single assertion, to determine whether or not a test has passed or failed?

+ +

In particular, are there any frameworks which instead compare textual output from a test against a known-good 'golden' version to determine whether the test has passed or failed?

+ +

The code I'm trying to test is stateful, and the methods can have a long history. My thinking is that I should be generating textual output recording this history (maybe a couple of dozen state transitions), and comparing this to the expected transitions, so I get a single pass/fail for the entire history.

+ +
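
Concretely, what I'm doing today looks roughly like this (NUnit-style assertion; StateMachine and TestInputs are placeholders for my stateful code):

+ +
[Test]
+public void History_MatchesGoldenFile()
+{
+    var sut = new StateMachine();              // placeholder for the stateful object
+    var log = new StringBuilder();
+    foreach (var input in TestInputs())        // a couple of dozen transitions
+    {
+        sut.Handle(input);
+        log.AppendLine($""{input} -> {sut.State}"");
+    }
+
+    string expected = File.ReadAllText(""golden/state_history.txt"");
+    Assert.AreEqual(expected, log.ToString()); // one pass/fail for the whole run
+}
+
+ +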

I could instead break a single test into 20 tests, and set up a new state individually for every test, and then assert that the next state is as expected for all 20 tests, but this seems to be pointless and wasteful.

+ +

edit

+ +

To clarify - I'm not looking for a tool, or a recommendation. I'm already doing what I suggested, 'manually', and I was hoping to get some insight as to whether this counted as 'unit testing'. I didn't want to actually ask that question as I thought it would result in lots of downvotes and no answers. OTOH, if any 'real' frameworks do this as well, then that actually answers the question.

+",356483,,356483,,43864.62639,43864.62639,Determination of pass/fail for unit testing?,,1,3,,,,CC BY-SA 4.0, +404590,1,404596,,2/2/2020 21:17,,1,176,"

I wanted to learn more about DNS, and I happened to have a copy of Computer Networking: A Top-Down Approach 4th Edition lying around.

+ +

Section 2.5.1 (page 132) suggests using DNS as a load balancer:

+ +
+

DNS is also used to perform load distribution among replicated servers, such as replicated Web servers. Busy sites, such as cnn.com, are replicated over multiple servers, with each server running on a different end system and each having a different IP address. For replicated Web servers, a set of IP addresses is thus associated with one canonical hostname. The DNS database contains this set of IP addresses. When clients make a DNS query for a name mapped to a set of addresses, the server responds with the entire set of IP addresses, but rotates the ordering of the addresses within each reply. Because a client typically sends its HTTP request message to the IP address that is listed first in the set, DNS rotation distributes the traffic among the replicated servers.

+
+ +

On systems I've worked on, load distribution is done by directing all traffic to a load balancer or proxy, which forwards the request to a replica. The DNS-based strategy described here seems brittle:

+ +
    +
  • You're dependent on the DNS server rotating the list of IP addresses it sends back to clients.
  • +
  • You're dependent on clients always taking the first IP in the list.
  • +
  • If you add or remove a server from your cluster, you have to propagate the change through DNS. It can take 24 - 48 hours for the majority of DNS caches to roll over and load your change (the book says so, and I have personal experience with this). If you have four servers and one crashes suddenly, a quarter of requests fail for the next 24 hours.
  • +
  • Since the load distribution mechanism lives outside your system, you don't have much to go on if one of your servers is getting overloaded.
  • +
+ +

This edition came out in 2008, and a lot has changed on the internet since then. Is this DNS-based load distribution strategy outdated? Are there still reasons to use it?

+",212168,,,,,43864.09583,Are there still reasons to use DNS for load balancing?,,1,5,,,,CC BY-SA 4.0, +404592,1,,,2/2/2020 23:32,,5,271,"

In working with Scrum, it's always seen as important to define clear start and stop points for small tasks.

+ +

However, how do you capture ""ongoing"" tasks in a Scrum system?

+ +

For example, on Friday after deployment we typically ""work on documentation and review old code to check for bad practices"". This has no end: new code keeps being found and new practices keep emerging, requiring documentation updates and flagging old code for rework. We're also too small a company to set up a separate system for this (devops and the like are still tightly coupled to development). So, to give the manager a time frame, these things are counted as part of the Scrum work.

+ +

However, how do you suggest we ""mark"" them, so that management knows what is happening and we don't end up with a backlog full of identically worded items?

+",43635,,,,,43874.65278,"How to catch ""continuous ongoing tasks"" into a scrum system?",,3,0,,,,CC BY-SA 4.0, +404597,1,,,2/3/2020 2:23,,0,137,"

Context: I'm building a popup widget. The html and css files are stored in S3. I need to get those files asynchronously and then continue with the rest of the logic.

+ +

In the code below, I'm getting the html file from S3 and setting the contents of a popup div to be equal to the contents of this file.

+ +

The issue: I have a lot more logic to handle the rest of the form. I'm currently using loadFormFunctions as the starting point for executing the remaining logic. loadFormFunctions contains many other functions that need to be executed.

+ +

Question: Is this best practice for executing the remaining code? Or is there a way to check if getFromS3 has resolved and then continue executing another function?

+ +
async function getFromS3() {

+    // Fetch the popup template from S3, inject it into the popup div,
+    // then wire up the rest of the form logic.
+    await fetch('someFileinS3.html', {
+    }).then((response) => {
+        return response.text()
+
+    }).then((text) => {
+        popup.innerHTML = text; // replaces any previous content, no need to clear first
+        loadFormFunctions();    // runs only after the markup exists
+    })
+
+}
+
+ +

I realize the way I've laid this out might be confusing. Please let me know if you need me to rephrase.

+",353734,,20328,,43865.58125,44015.71319,What is the best practice for incorporating asynchronous code in this case?,,2,1,,,,CC BY-SA 4.0, +404600,1,404602,,2/3/2020 5:00,,1,98,"

I'm building a Restful API and trying to use JSON:API format for my response. Looking at https://jsonapi.org/format/, it states:

+ +
+

A resource object MUST contain at least the following top-level + members:

+ +
    +
  • id
  • +
  • type
  • +
+ +

Exception: The id member is not required when the resource object + originates at the client and represents a new resource to be created + on the server.

+
+ +

My API will be returning a list of, say, fruits. The list of fruits won't be coming from my database -- it will be collected from an external source. That external source does not give me an id. Also, I won't be saving the fruit items in my database, so I can't even return an internally generated id for it.

+ +

Also, my database won't have a ""fruits"" table at all -- so I'm even reluctant to use the type member.

+ +

In other words, I don't think I can set id and type. And if I don't, then my API response is no longer JSON:API compliant.

+ +
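
For concreteness, a compliant resource would apparently have to look something like this sketch (C#, purely illustrative), where both members are invented rather than stored: the id as a natural key such as the fruit's name, and the type as a logical name rather than a table:

+ +
var resource = new
+{
+    type = ""fruits"",                       // logical type; no ""fruits"" table required
+    id = ""golden-delicious"",               // natural key from the external source
+    attributes = new { color = ""yellow"" }  // whatever the external source provides
+};
+
+ +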

How do I handle this?

+",93973,,,,,43864.25417,JSON:API - I don't think I have an `id` or `type` to return,,2,0,,,,CC BY-SA 4.0, +404606,1,,,2/3/2020 8:15,,1,32,"

I am trying to build a API for doing CRUD on images with their caption. So far I can think of +option1: sending the images in form-data as list of images and their corresponding captions in another list and map them according the list index. Cons: mapping according to index may lead to human error and seems somehow inconsistent.

+ +

option2: sending the images converted to base64 string and sending them in json. Cons: increases the file size to 33%.

+ +

can someone with similar experience shed some light on this. Thanks in Advance.

+",250731,,,,,43864.34375,Standard way to bulk crud image with caption in an API?,,0,0,,,,CC BY-SA 4.0, +404610,1,,,2/3/2020 10:02,,3,104,"

A manufacturer sells different flavour chocolates.

+ +

Customers can place an order for any number of each flavour chocolate, from zero upwards.

+ +

Some combinations of chocolates are frequently ordered, so the manufacture has prepackaged these combinations in his warehouse to make dispatch quicker (which saves money). There's no restriction on how many chocolates can be pre-packed in to a box +e.g

+ +

box 1 = 3 x Strawberry, 3 x Coffee, 1 x Orange;

+ +

box 2 = 1 x Strawberry, 3 x Coffee, 5 x Lime

+ +

How would we determine if the customer's order can be (fully or partially) fulfilled using a combination of prepackaged chocolates and how many of each box are required ?

+ +
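
To make the problem concrete, below is a minimal brute-force sketch (C#; all names are made up, and for real sizes this becomes an integer linear program, similar to cutting stock): it tries every feasible count of each box type and keeps the combination that covers the most chocolates without exceeding any flavour in the order.

+ +
static (int[] boxCounts, int covered) BestPacking(int[] order, int[][] boxes)
+{
+    int[] best = new int[boxes.Length];
+    int bestCovered = 0;
+
+    void Search(int boxIndex, int[] counts, int[] remaining, int covered)
+    {
+        if (boxIndex == boxes.Length)
+        {
+            if (covered > bestCovered) { bestCovered = covered; best = (int[])counts.Clone(); }
+            return;
+        }
+        // How many copies of this box still fit into the remaining order?
+        int max = int.MaxValue;
+        for (int f = 0; f < remaining.Length; f++)
+            if (boxes[boxIndex][f] > 0)
+                max = Math.Min(max, remaining[f] / boxes[boxIndex][f]);
+        if (max == int.MaxValue) max = 0; // ignore degenerate all-zero boxes
+
+        for (int n = 0; n <= max; n++)
+        {
+            var rest = (int[])remaining.Clone();
+            int added = 0;
+            for (int f = 0; f < rest.Length; f++)
+            {
+                rest[f] -= n * boxes[boxIndex][f];
+                added += n * boxes[boxIndex][f];
+            }
+            counts[boxIndex] = n;
+            Search(boxIndex + 1, counts, rest, covered + added);
+        }
+        counts[boxIndex] = 0;
+    }
+
+    Search(0, new int[boxes.Length], (int[])order.Clone(), 0);
+    return (best, bestCovered);
+}
+
+ +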

(Although this sounds like it's not a real world problem, it is the simplest analogy that I could come up with for my real world problem that doesn't require an understanding of the specific industry & conditions that the real problem relates to)

+",356531,,356531,,43865.4375,43877.57986,Algorithm for fulfilling an order using pre-packaged goods,,1,11,2,,,CC BY-SA 4.0, +404611,1,404655,,2/3/2020 10:38,,2,224,"

My question is not about dilemma between clean code vs performance, but I want to understand exact issue with declaring variables and sharing them between functions.

+ +

I have read in many threads that, from a performance point of view, it is good practice to declare basic variables like int, float, bool etc. as close to each other (and to their use) as possible. So it is better to declare a lot of variables in the scope of MyClass::method() than in the MyClass body. Is that true?

+ +

But what if I need a really large number of variables, and I need to use them in two separate functions? For example:

+ +
void MyClass::firstMethod()
+{
+    // here I use a lot of variables
+    secondMethod(); // and here I need all those variables
+}
+
+ +

So it is better (still thinking about performance) to declare all those variables in the body of MyClass? Or is it better to do something like that:

+ +
void MyClass::firstMethod()
+{
+    float var001, var002 ... var100;
+    // here I make some calculations with all variables
+
+    secondMethod(var001, var002 ... var100);
+}
+
+void MyClass::secondMethod(float &v001, float &v002 ... float &v100)
+{
+    // do some calculations
+}
+
+ +

Of course it looks stupid when a method takes 100 input parameters. In Robert C. Martin's book ""Clean Code"" I read that methods should take no more than 3 or 4 input parameters. But I wonder what performs well in such atypical algorithms.

+ +

I work on an audio processor, and in the audio process block I need to calculate a really large number of things: MID/SIDE samples, multiplication by input gain and output gain, some filtering, dynamics analysis, multiplication by a gain reduction coefficient, sending some of those variables to the graphics thread to show them on audio monitors, and much more. And I need to do all of that independently for each sample, with tens of thousands of samples every second. So is it good practice to declare as much as possible in method scope, or is the class body better?

+ +

Please note that a lot of my variables also need to be std::atomic, so it's not only the basic types I mentioned at the beginning. I am not sure how to declare all of that to get the best performance.

+ +

Also, I have more than one processing block for the different audio processes, which I can choose at run time from my application GUI. So is it better to have one big process block with a lot of if statements (or a switch) to choose the processor? Or is it better to declare a lambda to which I can assign the various processes and then only call that lambda in the main process block? The problem is that I also have various algorithms within each of my processors, so I end up calling lambdas inside other lambdas.

+ +

I wonder how to handle all of these issues to get the best performance, while also keeping the code as clean as possible.

+",356536,,90149,,43864.5875,43865.13681,How to declare and share a lot of variables to provide best performance,,2,3,,,,CC BY-SA 4.0, +404613,1,,,2/3/2020 11:04,,24,5719,"

In some C projects, function names start with a common prefix indicative of the module name, for example:

+ +
mymodule_do_this_stuff();
+
+mymodule_do_that_stuff();
+
+ +

Is there a justification for doing so when the linkage is internal, and there is no possibility of namespace collisions?

+ +

For example,

+ +
static void mymodule_do_this_stuff(void)
+{
+  ...
+}
+
+",356537,,266299,,43865.67847,43866.02639,Why are module-specific prefixes widely used for function names in C modules?,,3,1,2,,,CC BY-SA 4.0, +404620,1,,,2/3/2020 13:29,,5,146,"

I have a software project A which makes API calls to a third-party software B that relies heavily on data stored on the file system. Both pieces of software, and the file systems, are distributed across servers in different locations. I have been thinking of ways to avoid using a file system and to use a database instead, for storing BLOBs for example. Let's suppose the following scenario:

+ +
    +
  1. A calls B.
  2. +
  3. B needs a template file that is on the file system.
  4. +
  5. If the template file is accessible on the file system, B processes successfully. If not, it returns error.
  6. +
+ +

Within this scenario, I thought about two options:

+ +
    +
  1. Using a database to store BLOBs of those template files. Then, I would have A extract the files from the database and save them on the file system whenever a call to B is made (see the sketch after this list). After the call returns, A would update the template file in the database and delete it from the file system. With this scenario, I would pretty much only need to replicate/distribute the database storage.
  2. +
  3. Using a file system with replication to all locations A and B are running at. It would be easier for me to implement, but I would lose all features a database storage has such as queries, statistics...
  4. +
+ +
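
A sketch of option 1's round trip (C#; LoadBlobFromDb, CallThirdPartyB and SaveBlobToDb are placeholders):

+ +
void ProcessWithTemplate(string templateId, string workDir)
+{
+    byte[] template = LoadBlobFromDb(templateId);          // placeholder
+    string path = Path.Combine(workDir, templateId + "".tpl"");
+    File.WriteAllBytes(path, template);                    // make it visible to B
+
+    CallThirdPartyB(path);                                 // placeholder for the API call
+
+    SaveBlobToDb(templateId, File.ReadAllBytes(path));     // persist B's changes
+    File.Delete(path);                                     // remove the staging copy
+}
+
+ +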

I have been trying to find solutions for this problem, and I have come across DBFS on Oracle databases. It seems to me that it creates a file system interface for external access while, in reality, the files are stored and administered in a relational database. In theory it would solve my problems with file systems, but I am not able to test/use this product since I do not have an Oracle database. I have tried to look for similar features in other open source databases, without any success. Maybe I am not searching correctly, or maybe there is a terminology for this that I am not aware of. It seems like a simple problem that many people have probably faced at some point.

+ +
    +
  1. Am I going right about it?
  2. +
  3. What are the best practices for dealing with file-system-dependent applications?
  4. +
  5. Is there any way that I can make use of the advantages of databases while maintaining a file system accessible to this third-party software?
  6. +
+",352971,,352971,,43864.60694,43864.60972,Solving file system dependency with database storage,,1,3,,,,CC BY-SA 4.0, +404623,1,404633,,2/3/2020 14:42,,4,407,"

I am working at a medium-size semi-govermental organization managing subcontractors for software projects. One of our contractors recently turned in the ""source code"" for the project they had contracted. I strongly suspect that the code is auto-generated. This bothers me for several reasons (some of which are also mentioned in: Is source code generation an anti-pattern?). E.g.:

+ +
    +
  • I suspect that the code is a lot larger than it could have been, had it been written manually
  • +
  • maintaining this code will accordingly be a lot harder
  • +
  • since this is not really source code, there are no comments at all (or only token, useless comments) and no effort was apparently made to come up with a meaningful organization of the code base (e.g. in terms of libraries, etc) that would have made sense to a human maintainer
  • +
  • the contractor is using this as a way to circumvent their obligation to surrender their source code to us and also with a view to securing future maintenance contracts as well (or at least enjoy an advantage over other bidders who won't have access to the real sources).
  • +
+ +

The contractor has also done a clever job of injecting some artificial randomness in the generated sources so as to give the impression that this was written by hand.

+ +

I feel that my employer / the taxpayer is being cheated by an unscrupulous subcontractor willing to walk on a fuzzy red line, betting that they've done something clever that can't be conclusively proved.

+ +

Is there a way I can detect and prove that this was automatically generated by some other software?

+",356556,,4,,43864.63125,43865.30139,How can I detect autogenerated code in a project?,,4,12,,,,CC BY-SA 4.0, +404630,1,404632,,2/3/2020 16:21,,1,216,"

I'm writing a fairly large piece of logic, during which there are 6 points where things could go wrong and execution should stop after logging the error. The error is also stored in an object.

+ +

However, up to now I have been using a public static final String for this error string. As it is only used in a single place (and almost certainly won't be used anywhere else), does it make more sense to remove this global variable declaration from the class and instead hardcode the error reason?

+",356572,,,,,43864.88889,"Using ""magic"" strings for single-use error reasons?",,1,3,1,,,CC BY-SA 4.0, +404640,1,404659,,2/3/2020 19:04,,4,331,"

I have a business object which is basically a wrapper around a value of type int. There are some constraints for the value/object:

+ +
    +
  • Not every value in int's range is valid1
  • +
  • The valid values are not a predefined discrete set, therefore an enum is not an option
  • +
  • Two objects with the same value are always considered equal
  • +
  • The validity of a value should be checked in the constructor
  • +
  • 0 is not an valid value
  • +
+ +

If I just consider the first three of these constraints, I'd say this is a predestined use case for an immutable struct (which I would prefer). The problem lies with the last two:

+ +

Since I can't have a parameterless constructor in a struct, an object with 0 as its value can always be constructed. I could treat this as a special ""null"" value, but that would force me to null-check everywhere, and at that point I could just as well use a class. Are there any more reasons for using a struct in this use case?

+ +
+ +

1 To be more precise about the valid values: they have 5 or 6 digits. The first 4 can have any value between 1000 and 9999; the remaining digit(s) are either between 1 and 4 or between 1 and 12.
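
+ +

For reference, a minimal sketch of the struct option (the name PeriodCode and my reading of the footnote's digit rules are assumptions): the backing field is 0 for default(PeriodCode), so validity has to be surfaced explicitly.

+ +
public readonly struct PeriodCode : IEquatable<PeriodCode>
+{
+    private readonly int value; // 0 when default-constructed, i.e. ""no value""
+
+    public PeriodCode(int value)
+    {
+        if (!IsValid(value))
+            throw new ArgumentOutOfRangeException(nameof(value));
+        this.value = value;
+    }
+
+    public bool HasValue => value != 0;
+    public int Value => HasValue ? value : throw new InvalidOperationException();
+
+    public static bool IsValid(int v)
+    {
+        // 5 digits: 4 leading digits 1000-9999 plus one trailing digit 1-4.
+        bool fiveDigits = v / 10 >= 1000 && v / 10 <= 9999 && v % 10 >= 1 && v % 10 <= 4;
+        // 6 digits: 4 leading digits 1000-9999 plus two trailing digits 1-12.
+        bool sixDigits = v / 100 >= 1000 && v / 100 <= 9999 && v % 100 >= 1 && v % 100 <= 12;
+        return fiveDigits || sixDigits;
+    }
+
+    public bool Equals(PeriodCode other) => value == other.value;
+    public override bool Equals(object obj) => obj is PeriodCode other && Equals(other);
+    public override int GetHashCode() => value;
+}
+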

+",248595,,248595,,43864.85972,43865.43819,Struct or class for wrapping an int when 0 isn't a valid value,,3,13,,,,CC BY-SA 4.0, +404643,1,,,2/3/2020 20:27,,3,71,"

I have devices thay play video files from their local disk basing on a pre-defined playlist (every device plays exactly the same movie in the same order, every device knows in advance what videos will be played and what is their duration). There is no master device, every one is equal (there is a server though, but it's only used for downloading the video files for the first time and it can be in an entirely different network, and it doesn't do any video playback).

+ +

The devices are all exactly the same (the hardware is identical, and so is the OS, Linux). I'm using a custom video player based on the GStreamer library (in Qt).

+ +

I want to synchronize the video playback between all these devices in the same local area network (over UDP packets), and I'm trying to find the best solution for my circumstances. I tried having the devices periodically send an ""I am alive"" packet over UDP broadcast; before starting video playback they send an ""I am ready"" packet with a predefined number that the other devices periodically decrement (so if one device loses power, for example, the number goes down to 0, and when it comes back there is a re-synchronization). If all devices are ready, then they all start playing. There are some small timing differences, but they're acceptable.

+ +

Here are my main points regarding the synchronization:

+ +
    +
  1. I cannot use an NTP-synced clock, because the devices often work in a cut-off network that has no access to any NTP server. Even if they had, I think NTP is not a good way to do this (not really that synced). This also prevents me from using any sort of timestamps, since they can be totally different on every device (they have an RTC module, but in case the battery goes low there will be differences)
  2. +
  3. I can pause/seek the video as needed, but I'd rather not seek the video during playback because that would produce a weird effect.
  4. +
  5. In case any device goes behind (or gets restarted), videos need to be re-synchronized
  6. +
  7. I cannot do video streaming (I tried already, it produces many other problems such as artifacts and doesn't really produce a perfect sync)
  8. +
  9. There is no (at least there shouldn't be) any visible gap between the videos, they are played one after one in a seamless loop.
  10. +
  11. There is no audio involved, so it doesn't need to be synced.
  12. +
+ +

What are your ideas on achieving this?

+",356589,,,,,44135.00417,"Synchronizing video playback in LAN over UDP (no master, same HW & SW)",,1,4,,,,CC BY-SA 4.0, +404650,1,,,2/3/2020 22:53,,4,248,"

First a couple of examples:

+ +
var val = obj.GetValue();
+var can = obj.CanPlay;
+var has = obj.HasValue;
+var val = obj.RequiresConstantRepaint(); // From Unity
+var val = obj.PlayOnAwake; // From Unity
+class RequireComponentAttribute // From Unity
+
+ +

As you can see, methods usually use first-person verbs, but not always.

+ +

Properties usually use third-person, but also not always.

+ +

There is the class RequireComponentAttribute in my example, it uses first-person verb. (Here is another problem: classes usually don't use verbs. But for attribute it maybe normal.)

+ +

Are there any rules for choosing between a first-person and a third-person verb?

+",352915,,352915,,43869.54306,43932.59931,When to use third-person verb? (PlayOnAwake or PlaysOnAwake),,2,1,,,,CC BY-SA 4.0, +404654,1,416025,,2/4/2020 2:44,,1,49,"

I am building a Browser Extension that captures a Selection made by the user in any web page, and stores it in his account. I don't quite know how to proceed with this, however.

+ +

My initial thought was that I had to build an extension for this functionality [capturing the Selection], and a web app to receive it and deal with it -- a back end to store the user's selections in a database, and a front end to present the saved selections in his account.

+ +

So I would have to build 2 separate projects, that somehow interact?

+ +

I've been doing research online for a couple of weeks now and I couldn't find definitive answers...

+ +

In this one, How do browser extensions interact with web apps?, the answer said that a back end would be necessary, which I can deduce, but what would be its relation to the extension? There was no specific answer. Because extensions don't have a back end, right? So it would have to be a separate thing that connects to the extension?

+ +

The same answer in the post above also mentioned how PWA's can have many of the functionality extensions have. In my case, would a PWA suffice? Because I need to listen for a user selection, but in any web page. (something like Web Clipper from Evernote, but I don't want to save the entire page, just the selections).

+ +

And that is why I thought of creating an extension in the first place, because it can access all pages a user visits, whereas if I use the Selection API in a web app, it would only work within the web app itself, right? I mean, it would only be able to capture the selections made in the web app's pages?

+ +

And if I really have to build two separate things, what would be the most efficient way to go about this? I mean, folder-wise, would it have to be different projects? Or just separate folders in the same project?

+ +

Sorry for the many doubts, but could you please shed some light on this? Let me know if there is a better way of proceeding, or if I've misunderstood any of the capabilities of an extension, web app, PWA, etc...

+ +

Any help will be very much appreciated!

+ +

Thanks

+",356602,,,,,44092.60208,How do I make my Browser Extension send a Selection it captured to a Database/Web App so it can be stored in the user's account?,,1,4,,,,CC BY-SA 4.0, +404658,1,404660,,2/4/2020 5:17,,1,95,"

I was reading the Cloud Foundry docs on Pushing Apps with Sidecar Processes.

+

Under Limitations, there was a point about health checks:

+
+
    +
  • Sidecars only support PID-based health checks. HTTP health-checks for sidecars are not currently supported.
  • +
+
+

I am unable to understand the difference between PID-based and HTTP-based health-checks. As per my understanding HTTP-based health-checks would require HTTP calls against services to check their state. I have no idea how PID-based health checks would be carried out.

+",316145,,31260,,44008.52083,44008.52083,Difference between PID and HTTP-based health checks,,1,1,,,,CC BY-SA 4.0, +404661,1,404669,,2/4/2020 8:19,,22,3854,"

I have a habit I apply mechanically, without even thinking too much about it.

+ +

Whenever a constructor takes some parameters, I consider them public information that should remain available to the calling code later on, if desired.

+ +

For example:

+ +
public class FooRepository : IFooRepository
+{
+    public FooRepository(IDbConnection dbConnection)
+    {
+        DbConnection = dbConnection ?? throw new ArgumentNullException(nameof(dbConnection));
+    }
+
+    public IDbConnection DbConnection { get; }
+}
+
+ +

The calling code which instantiated a FooRepository object passed an IDbConnection object and therefore has the right to access this information later on, but can't modify it anymore (there is no set on the DbConnection property).

+ +

The dbConnection parameter could be passed explicitly or by dependency injection, it doesn't matter. The FooRepository shouldn't be aware of such details.

+ +

However, yesterday when pair programming with a coworker, he told me that any class I write should expose just the minimum useful information. He said developers shouldn't be able to inspect and mess with the internal state of an object.

+ +
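
A minimal sketch of the stricter version he is advocating (same class, dependency hidden):

+ +
public class FooRepository : IFooRepository
+{
+    private readonly IDbConnection dbConnection; // not observable from outside
+
+    public FooRepository(IDbConnection dbConnection)
+    {
+        this.dbConnection = dbConnection ?? throw new ArgumentNullException(nameof(dbConnection));
+    }
+}
+
+ +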

I don't quite agree with him. I don't want to spend several minutes on each parameter deciding whether it would be a good idea to expose it or not. In my opinion, there are use cases we simply can't foresee when we first write a new class.

+ +

Whether or not the class will eventually be included in a NuGet package doesn't really matter. I just don't want to prevent users of my class from accessing information they explicitly passed when instantiating the object, or that could easily be retrieved from the dependency injection framework.

+ +

Could someone explain to me what is considered a good practice here?

+ +

Should I really think whether each parameter makes sense to be exposed? Or is there a design pattern I can just instinctively apply without wasting too much time?

+ +

Any resource on the subject are welcome.

+",250661,,174182,,43865.875,43887.86528,Constructing an object: should I expose or hide parameters passed to the constructor?,,4,6,3,,,CC BY-SA 4.0, +404662,1,,,2/4/2020 8:46,,3,71,"

I'm working for a large company that's currently overhauling its network infrastructure, and several departments within the company have expressed interest in APM, desiring a tool for performance monitoring and error catching, customer tracking, business transaction management and the like.

+ +

However, the organisation's size, and the fact that different interest groups wish for different features, have made it quite difficult to choose between the available options. There are quite a few APM tools out there, both open source and commercial, and they seem to boast somewhat different features, each focusing on different things.

+ +

As part of my master's thesis, it's fallen to me to map out these requirements and make a recommendation based on the results. I'm somewhat familiar with requirements engineering in general from an academic perspective, but that has always been from the perspective of building a new software application, not selecting between pre-existing ones (which shoe fits the best?)

+ +

Are there any academically sound models for performing requirement elicitation and analysis under these sorts of conditions? In short, helping a customer, represented by multiple different internal interest groups, pick a specific piece of software between a plethora of available ones? Or is this something where the standard IEEE should be used as-is?

+",356615,,356615,,43865.37014,43867.43056,Requirements engineering model for picking between various software applications?,,2,4,1,,,CC BY-SA 4.0, +404664,1,,,2/4/2020 9:41,,3,100,"

I confess, the question title suggests a ""too broad"" question, but hear me out first... I am only interested in verifying my findings in that regard.

+ +

All the following situations have the following in common:

+ +
    +
  • Binaries and supporting files (no user data) of an unspecified application should be installed onto the target system.
  • +
  • Configuration, user data shall be stored on the target system.
  • +
  • Start menu shortcuts shall be created on the target system.
  • +
  • The roaming/local synchronization concept should be considered.
  • +
+ +

The scenarios are:

+ +
    +
  1. A machine-wide installation; all users on the target system can see the start menu shortcuts within their user desktop, and can use the application. Configuration and user data are shared across all users (are common, no individual configuration or user data).
  2. +
  3. A machine-wide installation; all users on the target system can see the start menu shortcuts within their user desktop, and can use the application. Configuration and user data are individual to the specific user and isolated from each other (users cannot access configuration and other data from other users).
  4. +
  5. A per-user-only installation; users need to install the application themselves. If user A installs the application, user A's installation is not accessible to user B. If user B wants to use the application, user B has to install it individually.
  6. +
+ +

From my ""research"", I have identified the following directories for the mentioned scenarios:

+ +

Scenario 1: machine-wide installation, shared configuration and common user data

+ +
    +
  • Application binaries: C:\Program Files\[Manufacturer]\[ProductName]
  • +
  • Configuration and user data: C:\ProgramData\[Manufacturer]\[ProductName]
  • +
  • Start menu shortcuts: C:\ProgramData\Microsoft\Windows\Start Menu
  • +
+ +

Scenario 2: machine-wide installation, individual configuration and individual user data

+ +
    +
  • Application binaries: C:\Program Files\[Manufacturer]\[ProductName]
  • +
  • Configuration and user data: C:\Users\[UserName]\AppData\Roaming\[Manufacturer]\[ProductName]
  • +
  • Start menu shortcuts: C:\ProgramData\Microsoft\Windows\Start Menu
  • +
+ +

Scenario 3: per-user only installation, individual configuration and individual user data

+ +
    +
  • Application binaries: C:\Users\[UserName]\AppData\Local\[Manufacturer]\[ProductName]
  • +
  • Configuration and user data: C:\Users\[UserName]\AppData\Roaming\[Manufacturer]\[ProductName]
  • +
  • Start menu shortcuts: C:\Users\[UserName]\AppData\Roaming\Microsoft\Windows\Start Menu
  • +
+ +
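
As a side note, these locations can be resolved programmatically rather than hard-coded; a .NET sketch (the manufacturer/product constants are placeholders):

+ +
static class AppPaths
+{
+    const string Manufacturer = ""MyCompany""; // placeholder
+    const string Product = ""MyProduct"";      // placeholder
+
+    // Scenario 1/2 binaries: C:\Program Files\...
+    public static string MachineWideBinaries => Path.Combine(
+        Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles), Manufacturer, Product);
+
+    // Scenario 1 shared configuration and user data: C:\ProgramData\...
+    public static string SharedData => Path.Combine(
+        Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), Manufacturer, Product);
+
+    // Scenario 2/3 per-user data: ...\AppData\Roaming\...
+    public static string UserRoamingData => Path.Combine(
+        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Manufacturer, Product);
+
+    // Scenario 3 per-user binaries: ...\AppData\Local\...
+    public static string PerUserBinaries => Path.Combine(
+        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), Manufacturer, Product);
+}
+
+ +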

Pitfall with scenario 3: the user may install the application on a computer A, which is Active Directory connected, then switch to computer B. The application is not installed on computer B, since it was installed on computer A within Local, so the data was not shared/synced through the Active Directory environment. The issue is that the user still sees the shortcuts, as they were added to Roaming; I could not find a Local start menu. On the other hand, if the application were installed within the Roaming domain, Active Directory administrators would quickly argue against installing the software, as shared/synced Roaming is intended for user data, not application binaries.

+ +

There are several (unnamed) applications available which do not use the ""common directories"". For example, new directories are created at the root level (directly under C:\, like C:\[Manufacturer]\[ProductName]) or in the (visible) user home directory (like C:\Users\[UserName]\Documents\[Manufacturer]\[ProductName]). I disfavor such solutions: an administrator would have a hard time finding application binaries or user data, and, in the case of C:\Users\[UserName]\Documents, the visible user home gets populated with data the user did not place there themselves. The chance of the user deleting mandatory configuration or user data files is high, because the user thinks ""I did not put that there"".

+ +

Edit: I failed to mention some information that clarifies my situation. I am using installation builder software that provides generic path variables, such as LocalAppDir, RemoteAppDir, ProgramsDir, ProgramDataDir, etc. I have picked some of the most commonly used targets (paths) to discuss whether the (resolved, absolute) path is the right one to use. @MSalters pointed out the localization issue and the issue of the ProgramData directory having moved; however, those issues are mitigated by using the path variables provided by the installation builder software.

+ +

Using a Windows-based system (as of Windows 7, 8.1, 10), what are the correct (intended) storage locations (folders) for an application?

+",60091,,60091,,43865.54653,43865.54653,What are the correct storage locations (folders) for an application within a Windows-based (Windows 7 - 10) environment?,,1,0,,,,CC BY-SA 4.0, +404667,1,,,2/4/2020 10:25,,2,25,"

I have a common style (common.scss) and text direction (RTL, LTR) option files (option-document-direction-rtl.scss, option-document-direction-ltr.scss). By switching which of these files is read, a CSS file with the RTL/LTR direction applied is output by a tool like gulp.js.

+ +

_common.scss

+ +
body {
+  max-width: 1120px;
+  background-color: black;
+  color: white;
+  text-align: $direction;
+}
+
+ +

_option-document-direction-rtl.scss

+ +
$direction: right;
+
+ +

_option-document-direction-ltr.scss

+ +
$direction: left;
+
+ +

style.scss

+ +
@import ""option-document-direction-rtl"";
+// switch to @import ""option-document-direction-ltr""; for the LTR build
+
+@import ""common.scss""
+
+ +

And with some build method I can get the two CSS files below at the same time (in reality these SCSS files are more complex):

+ +

rtl-style.css

+ +
body {
+  max-width: 1120px;
+  background-color: black;
+  color: white;
+  text-align: right;
+}
+
+ +

ltr-style.css

+ +
body {
+  max-width: 1120px;
+  background-color: black;
+  color: white;
+  text-align: left;
+}
+
+ +
+ +

Main subject

+ +

So, I now have to switch these styles on conditions other than text direction - for example, screen size without media queries. In this case, I think I need new option SCSS files, such as _computer.scss and _smartphone.scss. Here is where my question comes in.

+ +

The number of configuration files for new conditions may increase in the future. In short, with the current method the CSS file to be output changes depending on which of the ""configuration files that describe the settings used in each condition"" is read.

+ +

I'm afraid that my configuration files will grow and become more difficult to manage. What SCSS tools and mechanisms can be used to keep a large number of configuration files manageable and scalable?

+ +

I considered a method of dynamically changing the output file according to the argument at the time of build, but I have not been able to determine whether this is a way to improve scalability.

+",356315,,,,,43865.43403,How to increase scalability the method that output each file while switching multiple many conditions?,,0,0,,,,CC BY-SA 4.0, +404670,1,,,2/4/2020 10:35,,2,133,"

Currently, an enum in our project takes its properties through the constructor. They are practically guaranteed to be entirely different across every enum constant.

+ +

Now I'd like to add a property which is a simple boolean, and it only differs from the default in a very few specific cases.

+ +

There's 2 sensible approaches I can see here:

+ +
    +
  1. Refactor the constructor to take the boolean parameter, set it to true for the specific necessary ones. While this is the most consistent one with the current status quo, it also adds imho quite a bit of unnecessary clutter since it varies from the default so rarely.

  2. +
  3. Define an overridable getter method that returns the default, and override the method in the enum constants where it varies. I like this because it keeps the constructor ""clean"", but it starts introducing ""magic"" that might be missed or forgotten by successors in the future - especially since it hides the implementation.

  4. +
+ +

Judging purely by clarity of implementation, option #1 should win by far. But my gut is not sure whether clarity can be sacrificed in this case for a little bit of extra readability.

+",328802,,328802,,43865.46944,44019.66736,"Using Enum constructors, Overridable Getters, or both in combination?",,1,5,1,,,CC BY-SA 4.0, +404671,1,,,2/4/2020 11:00,,2,118,"

I have a city review app and will let people review the cities where they have been. Forget about users for a while.

+ +

So in the db will look like this

+ +
CITIES:             REVIEWS:
+
+| id | name     |   | id | CityId | score | 
+|----|----------|   |----|--------|-------|
+| 1  | Tokyo    |   | 1  | 1      | 3     |
+| 2  | New York |   | 2  | 1      | 1     |
+| 3  | Paris    |   | 3  | 3      | 5     |
+                    | 4  | 3      | 2     |
+                    | 5  | 3      | 1     |
+
+
+ +

My numbers: I may have 1000 different cities and 10-50 reviews for each city (but the review count will continue to grow).
+I may have 1000 different cities and 10-50 reviews for each city (but reviews count will continue to grow)

+ +

Options:
+A: Add a column averageReview to the CITIES table and update that average on every new review and simply pickup that value every time I have to retrieve a city

+ +

B: Calculate the average every time I have to retrieve a city

+ +
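
For option A, the update can be incremental, so a new review never requires rescanning REVIEWS (a sketch; the names are made up):

+ +
public class CityStats
+{
+    public int ReviewCount { get; private set; }
+    public double AverageScore { get; private set; }
+
+    public void AddReview(int score)
+    {
+        // new average = (old average * old count + new score) / (old count + 1)
+        AverageScore = (AverageScore * ReviewCount + score) / (ReviewCount + 1);
+        ReviewCount++;
+    }
+}
+
+ +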

Which is the most common way to get the average review every time I need to retrieve a city?

+",325366,,,,,43866.37014,Save average review score or calculate at every request?,,2,4,,,,CC BY-SA 4.0, +404673,1,404683,,2/4/2020 11:33,,2,196,"

I'm looking for general advice regarding the structure of applications.

+ +

In applications I've been building recently, I've started to use a class (I'll refer to it as DataManager for the rest of the question) to hold all my individual objects and all related methods. I make the DataManager accessible to all other classes and forms by making it a singleton, and fire events from DataManager that individual components can subscribe to. The components then access the DataManager and update their UI accordingly.

+ +
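
A minimal sketch of that structure (the types are illustrative):

+ +
public sealed class DataManager
+{
+    public static DataManager Instance { get; } = new DataManager();
+    private DataManager() { }
+
+    public event EventHandler DataChanged; // UI components subscribe to this
+
+    private readonly List<string> items = new List<string>(); // stand-in for real entities
+    public IReadOnlyList<string> Items => items;
+
+    public void Add(string item)
+    {
+        items.Add(item);
+        DataChanged?.Invoke(this, EventArgs.Empty); // forms re-read Items and refresh
+    }
+}
+
+ +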

I do it this way, because then there is a central class that manages the data of the application and allows all UI elements to access the data they need at any time. I find it makes it much easier to debug, and remember how the application works (when I have to revisit it down the line).

+ +

My question is, is this good practice? And if not, what is good practice? I'm trying to improve my application design, so would appreciate any advice on this.

+ +

NB. I use this structure in C# winforms projects.

+",356535,,105684,,43865.55694,43865.60556,Program Structure,,1,5,,,,CC BY-SA 4.0, +404674,1,,,2/4/2020 12:03,,0,56,"

I'm suppose to identify 3 CK metrics and the most appropriate class to refactor so as to increase encapsulation, modularity and reduce the complexity of the following project

+ +

+ +

According to my research, from the original metrics proposed by C&K the coupling of objects and lack of cohesion of methods impact all 3 factors and I also chose the weighted methods per class which impacts complexity.

+ +

After an initial analysis the Vehicle class has the highest values for almost all metrics, some of them are significantly higher (e.g LCOM=95 while the average is 5.7), so I think this is the right class to tackle.

+ +

No spec was given, just the code and the UML diagram above and my problem is that I don't see how I could restructure the code so as to lower these metrics.

+ +

Any suggestions, examples?

+",339027,,,,,43865.50208,CK Metrics - Lower Complexity by Refactoring,,0,2,,,,CC BY-SA 4.0, +404675,1,404693,,2/4/2020 12:14,,-2,141,"

I guess most of you have already met them. You get them from your data sources, see them in your logs, or in the output of your legacy systems: strings you can't really read.

+ +

To derive any useful information from them, you need to decode them first. With files, it is often possible to see the header. With text strings that have no header, you need to guess.

+ +

Many have been in that situation, and Stack Overflow has an endless supply of one-shot questions on the topic.

+ +

Some are fine with that practice... but others don't want to litter SO with overly specific questions and wait for help. In the long run, it's faster to invest some skill points into identification anyway.

+ +

Let's do that.

+ +

How do you identify that a string is in Base64, JSON, Bencode, or any other data exchange format? What resources do you use - cheat sheets, online tools, something else? Are there any techniques that can be learned, apart from ""just seeing it""?
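
+ +

To seed the discussion, the crudest first-pass heuristic I know is ""try to parse it"" (a C# sketch using System.Text.Json; the checks and the Bencode test are deliberately naive):

+ +
static string GuessFormat(string s)
+{
+    s = s.Trim();
+
+    // JSON: a strict parse either succeeds or throws.
+    try { JsonDocument.Parse(s).Dispose(); return ""JSON""; }
+    catch (JsonException) { }
+
+    // Base64: length is a multiple of 4 and the alphabet round-trips.
+    if (s.Length % 4 == 0)
+    {
+        try { Convert.FromBase64String(s); return ""possibly Base64""; }
+        catch (FormatException) { }
+    }
+
+    // Bencode: values start with 'i', 'l', 'd' or a length prefix like ""4:spam"".
+    if (s.Length > 0 && (""ild"".IndexOf(s[0]) >= 0 || char.IsDigit(s[0])))
+        return ""possibly Bencode"";
+
+    return ""unknown"";
+}
+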

+",222503,,,,,43865.74514,How to identify encoding of a text string?,,1,3,,43865.75,,CC BY-SA 4.0, +404677,1,,,2/4/2020 12:56,,2,85,"

I'm actually building an application that relies on multiple third parties (like 10-12) and I'm wondering if I should test my application with its third parties integrations.

+ +

I had a discussion with some folks and it seems that the answers vary in a way that I'm not totally sure of what could be better than the other.

+ +

My point is that I would stick with the real third parties and test my whole application including their real behaviours because:

+ +
    +
  • Mocking all of them would be a real pain
  • +
  • Interactions with the third parties are an inherent part of the system. If I mock them with some expected behaviour while they change the real behaviour, this could leave me in a weird state where my tests pass but the real app breaks.
  • +
+ +

What are your thoughts on including third parties in the E2E test suites instead of mocking them?

+",160326,,,,,43865.54861,E2E testing with third party services,,1,0,,,,CC BY-SA 4.0, +404681,1,404901,,2/4/2020 14:25,,-2,66,"

I am starting to work in a company. In this company, we have a set of products. These products are inherently related and have many shared functionalities and parts. Each of our products consists of a number of components. Types of these components are small software code, electronic boards, and mechanical pieces.

+ +

After nearly fifteen years of developing and deploying these products to many customers, the managers of this company feel frustrated with managing, updating, and improving them, and with developing new related products.

+ +

Another important problem is that, after these years, they realized that we have to build our products around software, not hardware, as they have been doing. Currently, our products are based on electronic and mechanical components, and software mainly controls the hardware devices. Now we have to develop products that provide more information and features to the user in mobile apps, and that store data about our devices on servers. In other words, we have to change the mindset from developing hardware devices with small pieces of software to software products which control related hardware devices.

+ +

Nearly all of the employees are not computer engineering professionals; they have a bachelor's or master's degree in electronics, control, or mechanical engineering. My questions are:

+ +

1- Does anyone have such an experience to set up such a development system for such a workforce?

+ +

2- Is there any type of development platform that covers version control and artifact management for electronic and mechanical components together with software process management? Are ordinary software development platforms good for these types of products?

+ +

3- Meanwhile, as a very small step toward this goal, they are looking for a tool that enables them to manage documents produced while developing these products and to link them to other documents in the system. It would be very good if this tool had version control and change management together with access control. Can I suggest using change management and version control software such as Rational ClearCase or git?

+ +

In summary:

+ +

Is there any type of development platform for systems that have both electronic and mechanical components together with software components?

+",356646,,356646,,43865.6125,43870.49722,Is there any type of development platforms for systems which have both electronic and mechanical components together with software components?,,1,17,,,,CC BY-SA 4.0, +404684,1,,,2/4/2020 14:37,,4,152,"

This is a question I constantly ask myself when designing a data-intensive application: When is it appropriate to use stream() over parallelStream()? Would it make sense to use both? How do I quantify the metrics and conditions to intelligently decide which one to use at runtime? From what I understand, parallelStream() is a great facility for processing entries in parallel, but it all comes down to execution time and overhead. Does the end justify the means?

+ +

In my particular use case, due to the nature of the application, the velocity and volume of the data I am processing will be all over the place. There will be times when the volume is so large that my application would massively benefit from parallelizing the workload. Then there are times when a single thread will accomplish the task much more efficiently. I have profiled my application a dozen times and have had mixed results.

+ +

So this brings me to my question: is there a way in Java 8 (or later) to switch between stream() and parallelStream() intelligently? At one point I considered defining boundaries on the data that would allow alternating between the two, but in the end, not every piece of equipment is designed the same. Some systems may deal with a single-threaded workload much better than others, and vice versa.

+ +

It might be relevant to mention that I am using Apache Kafka, using Kafka Streams with Spring Cloud Streams. For the most part, I feel like I have squeezed everything out of Kafka in terms of performance and want to focus internally on optimizing my own service.

+",233622,,,,,43865.94583,Alternating between Java streams and parallel streams at runtime,,2,0,,,,CC BY-SA 4.0, +404687,1,,,2/4/2020 15:05,,2,530,"

I am a little bit confused about the simple factory and the factory method. My main difficulty is the abrupt difference between the code examples on the internet, even on Wikipedia, where there are lots of them: some with interfaces, some with switch cases, and some even with registries.

+ +

I've created this simple example in PHP, but I'm not confident whether it can be considered a simple factory and/or a factory method:

+ +
<?php
+
+abstract class Log {
+    protected $nome;     
+    public function __construct(string $nome) {
+        touch($nome);
+        $this->nome = $nome;
+    }
+
+    protected function getDataHora(): string {
+        // 24-hour time with minutes: 'H:i', not 'h:I' (12-hour hour + DST flag)
+        return (new \DateTime())->format('Y-m-d H:i:s.v');
+    }
+    public abstract function insert(string $texto);
+    public abstract function read(): string;
+}
+
+class TextLog extends Log {
+
+    public function insert(string $texto) {
+        $log_entry = $this->getDataHora() . "";"" . $texto . PHP_EOL;
+        file_put_contents($this->nome, $log_entry, FILE_APPEND | LOCK_EX);
+    }
+
+    public function read(): string {
+        return file_get_contents($this->nome);
+    }
+
+}
+
+class JsonLog extends Log {
+    public function insert(string $texto) {
+        $log_entry = ['data' => $this->getDataHora(), 'texto' => $texto];
+        $json = file_get_contents($this->nome);
+        $tempArray = json_decode($json, true);
+        if(!is_array($tempArray)){
+            $tempArray = [];
+        }
+        array_push($tempArray, $log_entry);
+        $jsonData = json_encode($tempArray);
+        file_put_contents($this->nome, $jsonData);
+    }
+
+    public function read(): string {
+        $json = file_get_contents($this->nome);
+        $array = json_decode($json);
+        return implode("";"", $array);
+    }
+
+}
+
+class LogFactory {
+    public static function get(string $tipo): Log {
+        switch (strtoupper($tipo)) {
+            default:
+            case 'TXT':
+                return new TextLog(""log.txt"");
+            case 'JSON':
+                return new JsonLog(""log.json"");
+        }
+    }
+}
+$log = LogFactory::get('txt');
+var_dump($log);
+$log->insert('Testando 1,2,3...');
+
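
+ +

For contrast, a factory-method version of the same idea might look like the sketch below (LogCreator and TextLogCreator are made-up names); the point would be that a subclass, rather than a switch, decides which concrete Log gets built:

+ +
abstract class LogCreator {
+    // Factory method: each subclass decides which concrete Log to build.
+    abstract public function createLog(): Log;
+
+    public function logMessage(string $texto) {
+        $this->createLog()->insert($texto);
+    }
+}
+
+class TextLogCreator extends LogCreator {
+    public function createLog(): Log {
+        return new TextLog(""log.txt"");
+    }
+}
+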
+",356206,,,,,43865.72917,What is the difference between simple factory and factory method?,,1,4,,,,CC BY-SA 4.0, +404688,1,404962,,2/4/2020 15:21,,1,131,"

I'm not sure where to place the following checks, which currently live in the Material domain class. The issue is that I need to validate that a material exists and is valid in an external database. Only after that validation can I be sure that I can use the material in our new project and store it in our database. The Material object is part of a collection in the Order aggregate. This is a simplified example to demonstrate the issue.

+ +

Material class

+ +
  public class Material
+        {
+            public int? Id { get; private set; }
+            public int RequestId { get; private set; }
+            public int MaterialNumber { get; private set; }
+            public int Quantity { get; private set; }
+
+            // EF
+            private Material()
+            {
+            }
+
+            public Material(int requestId, int materialNumber, int quantity,
+                string materialType, string materialStatus, string materialCondition)
+            {
+                if (materialCondition != ""ZPR0"")
+                {
+                    throw new InvalidOperationException(""Material condition is not valid."");
+                }
+
+                // valid types are FERT or UNBW; with || this check was always true
+                if (materialType != ""FERT"" && materialType != ""UNBW"")
+                {
+                    throw new InvalidOperationException(""Material type is not valid."");
+                }
+
+                // likewise, the status must be one of Z2, Z3 or Z4
+                if (materialStatus != ""Z2"" && materialStatus != ""Z3"" && materialStatus != ""Z4"")
+                {
+                    throw new InvalidOperationException(""Material status is not valid."");
+                }
+
+                RequestId = requestId;   // Id is left for EF to populate
+                MaterialNumber = materialNumber;
+                Quantity = quantity;
+            }
+        }
+
+ +

Order class

+ +
public class Order : IAggregate
+    {
+        public int? Id { get; private set; }
+
+        public IList<Material> Materials { get; } = new List<Material>();
+
+        // EF
+        private Order()
+        {
+        }
+
+        public void AddMaterial(Material material)
+        {
+            Materials.Add(material);
+        }
+}
+
+ +

Should I load the materialType, materialStatus and materialCondition values from the database and pass them into the domain entity inside the application layer (API)? Or should I call some domain service (or a repository method inside the application layer), or a domain validator, exposing a method like IsMaterialValid(materialType, materialStatus, materialCondition), and only after that materialize the Material object? It's a simple issue, but I don't know how to handle it properly.

+ +
    +
  • I am not sure whether this type of validation should be part of the domain object or not. And if not, is it possible to place it inside the application layer, or is it better to incorporate some domain service somehow?

  • +
  • The solution I lean towards is to create IsMaterialValid(int number) as a database procedure and call it from a repository in the application layer; after that validation, create the Material object and pass it to the Order object. It seems like the most direct and easy way. What I don't like is that I would be outsourcing the domain validation to a database procedure (though, as a plus, I would still depend on a validation method defined in a repository interface).

  • +
+ +

Any ideas?
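
+ +

To make the domain-service option concrete, this is roughly what I have in mind (a sketch only; IMaterialValidator, MaterialInfo and MaterialFactory are made-up names):

+ +
// Implemented in infrastructure; looks the material up in the external database.
+public interface IMaterialValidator
+{
+    MaterialInfo GetMaterialInfo(int materialNumber);
+}
+
+public class MaterialInfo
+{
+    public string Type { get; set; }
+    public string Status { get; set; }
+    public string Condition { get; set; }
+}
+
+// Keeps the invariants in the domain while delegating only the lookup.
+public static class MaterialFactory
+{
+    public static Material Create(IMaterialValidator validator,
+        int requestId, int materialNumber, int quantity)
+    {
+        var info = validator.GetMaterialInfo(materialNumber);
+        return new Material(requestId, materialNumber, quantity,
+            info.Type, info.Status, info.Condition);
+    }
+}
+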

+",239406,,239406,,43866.51875,43871.81875,Where to execute domain validation which is not part of domain object?,,1,7,,,,CC BY-SA 4.0, +404690,1,404696,,2/4/2020 15:39,,3,119,"

I'm starting on a greenfield project and haven't built or architected the sign-up/sign-in and auth part of a backend in a very long time, so my knowledge of today's good practices is very limited. I've basically only ever done basic local sign-up/sign-in functionality.

+ +

The app is related to social networking, so thousands of users can be expected within a few years. I therefore want to start off with as clean an architecture as possible, while also trying to ""look into the future"" and anticipate costs.

+ +

Now, for the project I need to have sign-up/log-in options with:

+ +
    +
  • mobile phone nr.
  • +
  • email
  • +
  • Apple account
  • +
  • Facebook account
  • +
+ +

I'm looking into managed services like Auth0, AWS Cognito (particularly interested in this one), etc.

+ +

But I'm uncertain whether I understand how the process and user account management should be set up. Provided I use a relational DB, do I still need to create the user account models in my application and then:

+ +
    +
  • when a user signs up thru email - my app handles all the auth, including sign-in and access/refresh token generation and handling.
  • +
  • when a user signs up thru any other 3rd party means - I use the managed auth service and link the user record in the service to a user record in my DB. Then use the managed service for token generation and handling.
  • +
+ +

Or, do I delegate the whole user management (including the user account models & data) to the managed auth service? And thus, the only functionality related to auth and user account management in my application will be calls to the managed auth service...?

+",213565,,,,,43865.78611,Good/clean practice for building the auth part of an application,,1,0,,,,CC BY-SA 4.0, +404698,1,404762,,2/4/2020 19:43,,1,294,"

I have a design question about data. I have a class ""T"" with a single interface that handles two data types, as described below. A T instance is constructed to handle one of the two data types, and users of class T do not need to know which type a given instance handles.

+ +
    +
  • Data class ""L"" that has a known (limited) number of instances, which are constructed during runtime and they do not change. I plan to use a singleton to hold a container for all such data objects. In the class T, I can directly access the singleton and therefore avoid the need for passing the data object as a function argument.
  • +
  • Data class ""UL"" that have an unknown (unlimited) number of instances. The class T works on each instance one at a time. It seems I will have to pass this type of data in as an argument. This would be fine if T happens to handle UL. But if T handles data type ""L"", then the argument of ""UL"" will end up not being used. Is this a legitimate design? It feels strange as the UL argument can be redundant. But can it also be harmless?
  • +
  • At this point both type of objects are mutable during their lifetime. The data objects will not change in ""T"" / ""Ts"", but could change in class ""P"".
  • +
  • The data type ""L"" is needed because it has a distinct business meaning and L class have a pre-defined number of instances. the ""L"" data is like global data, while the ""UL"" data is like user data. The ""UL"" data is handled one by one.
  • +
+ +

I was also wondering if there is a way to avoid passing the ""UL"" object as a function argument. For example, I could set up a reference outside class T, and each time the UL object changes, assign the new UL to that reference. Would this allow me to hide the different instances of UL behind a fixed reference? Could I then somehow access the fixed reference in class T, in a way similar to the singleton for data type L?
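
+ +

To illustrate that ""fixed reference"" idea, something like the sketch below is what I mean (IDataSource and ULHolder are made-up names): T would hold an IDataSource, and P would assign the current UL object to the holder before each call.

+ +
interface IDataSource
+{
+    double Read();
+}
+
+class ULHolder : IDataSource
+{
+    // P assigns the current UL object here before calling into Ts/T.
+    public UL_Data Current { get; set; }
+
+    public double Read()
+    {
+        // ...read from Current instead of a method argument
+        return 0; // placeholder
+    }
+}
+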

+ +

I got stuck in this design and I feel I may have missed something big.

+ +
+ +
    +
  • A T object may work with either ""L"" type or ""UL"" type. Multiple ""T"" objects are aggregated in another class Ts. The number of T objects within a Ts object can vary.
  • +
  • A Ts object is used in a user class ""P"", which read ""UL"" objects one by one, and pass the ""UL"" object to the ""Ts"" object that it holds. The ""Ts"" object then pass the UL object to the individual ""T"" object, which may or may not use the ""UL"" object.
  • +
  • I have this design because it seems desirable to have a single interface for the class ""T"", so that they can be used in a consistent way within ""TS"" / ""P"" object.
  • +
+ +
class P
+{
+  Ts ts_;
+  public void work(UL_Data ul_data)
+  {
+     ts_.work(ul_data);
+  }
+}
+
+class Ts
+{
+   IList<T> ts_;
+   public void work(UL_Data ul_data)
+   {
+    //...
+    foreach(var t in ts_)
+       {
+          double val = t.read(ul_data); 
+          //....
+       }
+    //...
+   }
+}
+
+class T
+{
+  // set to true if this instance reads data type L
+  bool data_type_L;
+  public double read(UL_Data ul_data)
+  {
+     if (data_type_L)
+     {
+         // ...read data of type L from the singleton
+     }
+     else
+     {
+         // ...read data of type UL from the function argument ul_data
+     }
+     return 0; // placeholder
+  }
+}
+
+",277787,,277787,,43866.86458,43868.88542,Common interface to handle different data type,,2,22,1,,,CC BY-SA 4.0, +404700,1,404701,,2/4/2020 20:38,,3,102,"

I've been a relentless proponent of small files. I prefer one function export per file, functions with everything-in-one-view, and breaking up UI components as much as sensible (which is why I love React).

+ +

There are many benefits to this from a code-maintenance and clarity perspective (IMO), but I have never assessed what the impact may be on overall bundle size and compilation time (Webpack is tagged here, but other build tools are applicable). My assumption has always been that the benefit of small files is worth what is probably a minor or even negligible difference in bundle size and speed. But I would like some data to back up or refute this assumption without having to massively refactor an existing app for comparison.

+ +

So, what is the impact of many-small vs fewer-larger files on bundle size and compilation time?

+",19829,,19829,,43865.86806,43866.11042,Many small files vs fewer larger files: impact on bundle size and build time,,1,5,,,,CC BY-SA 4.0, +404704,1,,,2/5/2020 0:08,,1,25,"

I wrote a lot of code with the intention of it being usable not only in the WordPress space but everywhere, and because of that I've been very much against returning WP_Errors, resorting to returning only bool when something went wrong. Naturally, that's not enough, because a boolean doesn't give you any context as to what went wrong.

+ +

Is there any library/way to bridge the gap between WP_Errors and something the whole PHP ecosystem understands? That is to say, if I take my code that's written mostly for WordPress and plug it into a non-WP environment, is there something that will guarantee it'll work when I return an Error?
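
+ +

The closest thing I can think of is translating at the boundary, along these lines (a sketch; unwrap_result is a made-up name):

+ +
// Convert WP_Error into a plain exception at the boundary, so
+// non-WordPress callers only ever see standard PHP error handling.
+function unwrap_result($result) {
+    if (class_exists('WP_Error') && $result instanceof WP_Error) {
+        throw new RuntimeException($result->get_error_message());
+    }
+    return $result;
+}
+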

+",353781,,,,,43866.11458,How do I solve the portability of WP_Error?,,1,0,,,,CC BY-SA 4.0, +404718,1,404729,,2/5/2020 7:52,,1,1029,"

I have an application that consists of multiple sections of which each section will need to load data from various API calls.

+ +

Now I'm thinking of taking advantage of session variables (or caching) to store some data keyed by user ID, so that if a user reloads the page repeatedly, unnecessary calls are avoided (except, of course, in certain cases where the data needs real-time updates).
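
+ +

What I have in mind is roughly the sketch below, using MemoryCache from System.Runtime.Caching (DashboardData, CallExpensiveApi, the key scheme and the 5-minute lifetime are all assumptions on my part):

+ +
using System.Runtime.Caching;
+
+public DashboardData GetDashboard(int userId)
+{
+    var cache = MemoryCache.Default;
+    var key = ""dashboard:"" + userId;
+
+    // Serve from cache if present; otherwise make the expensive call once.
+    var data = cache.Get(key) as DashboardData;
+    if (data == null)
+    {
+        data = CallExpensiveApi(userId);
+        cache.Set(key, data, DateTimeOffset.UtcNow.AddMinutes(5));
+    }
+    return data;
+}
+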

+ +

This is my first time building a complex web application and I was wondering what would be a good practice to prevent the application from getting overloaded with unnecessary calls.

+ +

Thanks for any advice.

+",356708,,,,,43866.64583,ASP.NET MVC - Using Session Variables or Caching to prevent unnecessary calls. Is it a good practice in general?,,3,3,,,,CC BY-SA 4.0, +404721,1,,,2/5/2020 9:02,,-2,97,"

My employer recently learned of Laravel, and has asked us to migrate much of our development to it. None of us have issues with this, as we’ve all played around with it.

+ +

We have two main in-house products: an API, and a web app which calls said API. The discussion we're currently having is: does it make sense to put both products into the same Laravel application?

+ +

I do apologize if this is the wrong place to ask.

+",356716,,,,,43866.62708,Does it make sense to build api and web in one app?,,1,1,,,,CC BY-SA 4.0, +404724,1,,,2/5/2020 12:42,,3,171,"

This is probably a very basic question, but I have never worked on a team before and I'm not sure if there is an obvious answer or if I'm just bad at googling. Regardless, I can't find a clear answer to how this is done in practice.

+ +

Let's say I'm on a team with 9 other people working part time on a school project for the duration of a semester. Using some sort of CI with Github, we integrate code into our repo anywhere from a few times a week to multiple times per day. How does everyone on the team sync up their local machines with that Github repo? Ideally, we would want to simulate a real project in a work environment.

+ +

It's often explained in the context of a git workflow using just an arrow, but what I want to know is what that arrow looks like in practice, either through personal anecdotes or software/script recommendations (in case the process is automated).
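
+ +

For concreteness, the kind of day-to-day loop I imagine (but would like confirmed) looks like this, where the branch name is just an example:

+ +
# update local main, then branch off for new work
+git checkout main
+git pull --rebase origin main
+git checkout -b feature/my-task
+
+# ...edit, git add, git commit...
+
+# publish the branch and open a pull request from it
+git push -u origin feature/my-task
+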

+",356739,,356739,,43866.59097,43866.95833,How to keep everyone on team in sync with repo?,,3,6,,,,CC BY-SA 4.0, +404726,1,,,2/5/2020 13:21,,0,57,"

We have a monolith that reaches close to 100k IoT devices/routers and tries to gather some information about them.

+ +

As you can imagine, the process can take close to 40 hours to complete (due to the badly designed monolith plus the slow network that reaches these devices).

+ +

Now I am trying to break this IoT device syncing part out of the monolith and make it its own service (with multiple handlers to manage the load and make things faster).

+ +

But all the data/credentials required to connect to these devices are still in the Old Monolith. This data changes rarely, probably once a month.

+ +

What's the best way to sync data from the old monolith? Should it be a REST API in the old monolith that returns a giant JSON with all the data, which the new microservice can call, say, every week?

+ +

I know about things like event-based architecture, which works great for a large team, but I am a lone developer (so limited resources), so I am looking for a practical engineering solution that works and avoids turning the data-syncing part into its own project.

+",356746,,,,,44200.87708,Syncing data from a monolith to a micro-service,,1,0,,,,CC BY-SA 4.0, +404727,1,,,2/5/2020 14:00,,3,61,"

I am developing a blog website. Each post has a list of tags, just like on Stack Overflow.

+ +

There is no doubt that on the server side I will expose an API like blog/edit to the client side, with a request parameter like:

+ +
class BlogEditParam{
+    private List<String> tags;
+}
+
+ +

So I definitely need to validate the tags field on the server side, for example by checking whether each tag name exists; I have no doubt about that.

+ +

However, the tags field has ""set"" semantics: it cannot contain any duplicates. For example, a post cannot have a tag list like c++, java, c++.

+ +

What should I do about possible duplication in the input parameter? It seems to me that I have two strategies:

+ +
    +
  1. Silently ignore it. For the example shown above, I remove all duplicates on the server side and accept the request (of course, the c++ and java tags must still pass the existence validation)
  2. +
  3. report an error to the client side when finding such duplication
  4. +
+ +

What's the preferred way in this blog application scenario?

+ +
+ +

I am using Spring Boot as the server-side framework. It uses Jackson to deserialize the request body, and I know I could declare the input parameter like:

+ +
class BlogEditParam{
+    private Set<String> tags;
+}
+
+ +

But in fact that just ignores the duplication during deserialization, which amounts to the first strategy mentioned above.
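
+ +

If I go with strategy 2, I imagine keeping the List so duplicates survive deserialization, then rejecting them explicitly, roughly like this (validateNoDuplicates is a made-up name):

+ +
import java.util.HashSet;
+import java.util.List;
+
+class BlogEditParam {
+    private List<String> tags;
+
+    // Reject the request instead of silently dropping duplicates.
+    void validateNoDuplicates() {
+        if (new HashSet<>(tags).size() != tags.size()) {
+            throw new IllegalArgumentException(""duplicate tags in request"");
+        }
+    }
+}
+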

+",349362,,349362,,43866.5875,43866.59444,"What's the prefer way when validate a ""set"" semantic input JSON array parameter in a blog website, silently ignore it or report an error?",,1,0,,,,CC BY-SA 4.0, +404734,1,404743,,2/5/2020 16:20,,1,177,"

Recently, I have been told to look into containerizing my stateless web applications (in this case .NET Core 2.x and 3.1). All of my dependencies are retrieved from public and private NuGet feeds.

+ +

For a few years, I have been deploying to Azure App Services (Web apps) with no issues and only some very simple configurations on creating the App Service. Azure (and I'm guessing AWS) seems to be able to handle common configurations like mine pretty well. What am I missing out on by not using containers?

+ +

Do Docker containers really just solve a problem that I don't have?

+",104198,,104198,,43866.70625,43866.76736,Why should I use containers instead of deploying build artifacts directly to Azure App Services or AWS Elastic Beanstalk?,,2,3,,,,CC BY-SA 4.0, +404735,1,,,2/5/2020 16:35,,1,73,"

I have a piece of code (a Facade) that creates every single infrastructure component on aws.

+ +
private void synthesizeStack() {
+    Deployer<Vpc> vpcDeployer = new VpcDeployer(this);
+    Deployer<Bucket> s3Deployer = new S3Deployer(this);
+
+    Vpc vpc = vpcDeployer.deploy();
+    Bucket bucket = s3Deployer.deploy();
+
+    Deployer<SecurityGroup> bastionSecurityGroupDeployer = new BastionSecurityGroupDeployer(this, vpc);
+    SecurityGroup bastionSecurityGroup = bastionSecurityGroupDeployer.deploy();
+    Deployer<Instance> bastionDeployer = new BastionDeployer(this, vpc, bastionSecurityGroup);
+
+    Deployer<SecurityGroup> rdsSecurityGroupDeployer = new RdsSecurityGroupDeployer(this, vpc, bastionSecurityGroup);
+    SecurityGroup rdsSecurityGroup = rdsSecurityGroupDeployer.deploy();
+    Deployer<DatabaseCluster> rdsDeployer = new RdsDeployer(this, vpc, rdsSecurityGroup);
+
+    Deployer<List<Lambda>> lambdaDeployer = new LambdaDeployer(this, bucket);
+
+    rdsDeployer.deploy();
+    bastionDeployer.deploy();
+    lambdaDeployer.deploy();
+}
+
+ +

I tried to give all the deployers a common interface, but each deployer has different parameters and return types, so I created a generic interface.

+ +

How can I simplify this piece of code? Is there a better way?

+",356764,,356764,,43866.69931,43867.42986,Refactoring and simplify this infrastructure building code,,1,2,,43873.61181,,CC BY-SA 4.0, +404737,1,404738,,2/5/2020 16:57,,66,17034,"

I'm trying to clean up some of my code using some best practices, and read that comments are almost always a bad idea for future maintainability. I've managed to get rid of most of them by refactoring to methods with good names.

+ +

However, there are several comments sprinkled around that explain why a single line of code needs to exist. I'm having trouble figuring out a way to get rid of these. They often follow this kind of structure:

+ +
// Needs to start disabled to avoid artifacts the first frame. Will enable after frame complete.
+lineRenderer.enabled = false;
+
+ +

Then later on in the code I enable it.

+ +

I've thought about extracting it into a one-line method called StartLineRendererDisabledToAvoidArtifactsFirstFrame() but this doesn't seem all that clean to me.

+ +

Are there any best practices for dealing with such one-liners? Many of them explain the existence of code that on the surface looks superfluous but then actually has an important function. I want to guard against future deletion.

+ +

Sidenote: I've already run into some scenarios where refactoring/renaming has made these comments be out-of-date or in the wrong spot etc. so I definitely see why removing them would be useful.

+ +

Related but different question here:

+ + + +

EDIT BASED ON ANSWERS AND COMMENTS BELOW

+ +

Here's my takeaway from all the great discussion below.

+ +
    +
  1. Comments are fine if there's not an easy way to refactor/rename for more clarity. But they should be used sparingly.
  2. +
  3. Removing comments is generally a good thing. But some comments need to exist to help future readers understand the code without having to dig too deep.
  4. +
  5. But they should explain the WHY, not the HOW.
  6. +
  7. If the code is there for a particular reason that is very important or fixes a bug, it should probably also have a corresponding unit test anyway.
  8. +
  9. Commit messages can help track why and where and how the commented code came to be.
  10. +
+",157621,,3611,,43877.63264,43878.67778,How to avoid comments about one line of code for cleanliness,,13,23,13,,,CC BY-SA 4.0, +404747,1,404749,,2/5/2020 19:16,,4,121,"

I'm working with several Excel worksheets and workbooks that need to be consolidated or linked to each other. These workbooks will track client interactions for different team members.

+ +

I did the initial consolidation in Microsoft Excel, but realized that going forward, there's a risk of people overwriting each other or working from an older version of the master workbook.

+ +

After some research, I realized one idea could be to have each person track their own client interactions. I can regularly consolidate each workbook into the master using Excel data models.

+ +

But are there additional risks to doing this? Is an Excel data model as good as an Access database? I've shied away from Access because my team is not comfortable using SQL and generally dislikes the Access interface.

+ +

This is why I started off using Excel. But managing multiple workbooks (with changing names) has already created several version control issues. Any feedback would be helpful!

+ +

I'm limited to using Microsoft Office 365.

+ +
+ +

Use Cases

+ +

Log interactions

+ +
    +
  • Each person can keep a log of daily client interactions
  • +
  • Individual logs are regularly synced to a master list that shows interactions for the whole team
  • +
+ +

Log contact information

+ +
    +
  • Each person can add contact information for new clients
  • +
  • New contact info is synced with the master list
  • +
+ +

Add or update broad company information

+ +
    +
  • Each person can add or update general company information (ex Company A is focused on xyz for 2020). This is separate from the individual interactions
  • +
  • Updates to general company information can be added directly or synced with a master list
  • +
+ +

Filter for a list of interactions, contacts and broad information

+ +
    +
  • Search interactions based on criteria (ex company's city, interaction date, contact's place of employment)
  • +
  • Print all information (interactions, contacts and broad information) based on a specific company name
  • +
+ +

Reporting

+ +
    +
  • Send scheduled reports (weekly, quarterly) based on recent updates and other specified criteria (ex company with the most logged interactions for the month)
  • +
+",356773,,1204,,43866.82431,43866.85139,Is Excel data model as good as or better than Microsoft Access?,,2,17,,,,CC BY-SA 4.0, +404750,1,,,2/5/2020 21:17,,1,130,"

This is a difficult question that can involve many people, it is a real scenario and can have real consequences.

+ +

A big organization with about 100 developers, working on different teams. The teams work with different front-end technologies, maintaining different products for the same company.

+ +

One team has years of experience with Angular, and they are good at it. But the company chose to use React for all the other projects and wants this team to use React too.

+ +

Is this a good practice? to force a team to work with a desired stack?

+ +

What is better for a company?

  • Let developers choose the stack, and have different technologies
  • Let the ""architects"" or the company choose a stack for the company and have a single technology

+ +

I know some of you will say that developers should choose the stack because they know the right tool for the right problem, but let's not use this argument, please. Let's say ""developers use the tool they know""; in this case both tools are similar, and with either one you can satisfy the same needs. I am only referring to Angular and React here; I am not talking about the back end or other things.

+ +

There are some good reasons to make all the teams use the same tech stack: market adoption, sharing a code base, unifying technologies. This makes sense. But forcing a team that is proficient in the other stack to change... does that make sense? To throw that knowledge and work in the trash?

+",356780,,1204,,43866.92778,43869.42986,"Large organization, different teams, unique tech stack or different ones?",,4,0,,,,CC BY-SA 4.0, +404751,1,404756,,2/5/2020 21:39,,1,578,"

I have the following use case about a video rental store which has the following actors:

+ +
    +
  • Member (Gold, Ordinal)
  • +
  • Assistant
  • +
  • Supplier
  • +
  • Clerk
  • +
+ +

The system is a desktop application for renting and reserving videos, and the shop has a range of videos in stock for members

+ +
    +
  • gold members can borrow a maximum of 10 videos, while ordinal members 5
  • +
  • Assistants add members to the club once they have provided proof of identity
  • +
  • a member can come to the shop and ask the assistant in the shop to rent a video
  • +
  • the assistant checks for the availability and the limit of the member
  • +
  • Gold Members can opt to have an extended hire period
  • +
  • The member returns the video when he/she finishes with it (if return time beyond the hire period the member will have to pay a fine)
  • +
  • members can also make phone calls to make a reservation, in this case, the assistant will do the same process as for rent a video (check availability, identify and check member limit)
  • +
+ +

The video shop has gone online and now:

+ +
    +
  • members can browse the catalog online
  • +
  • the clerk makes the order for a new video and also adds them to the catalog when they are received from the supplier
  • +
+ +

I have made an illustration of the use-case.

+ +

1 - Is it correct that the user has an association with the rent a video and reserve a video use-cases along with the assistant?

+ +

2 - where can i add the supplier when the clerk makes an order

+ +

+ +

updated use-case model:

+ +
    +
  • removed make order and receive order
  • +
  • added verification use-case that is included in both (rent and reserve)
  • +
+ +

+",356779,,356779,,43867.48611,43867.48611,Use Case Diagram for video renting,,1,3,,,,CC BY-SA 4.0, +404755,1,404763,,2/5/2020 22:21,,7,603,"

In this and this stack overflow questions, the answers state that C as a compiler backend is a bad idea.

+ +

But why?

+ +

C has many compilers that can heavily optimize it. Every platform has a compiler that supports it, and it can be compiled to every architecture in existence. In addition, languages like Nim and V support generating C code.

+ +

So, I don't understand why C would be a bad idea at all. In my view, it seems a rather good choice.

+",355450,,,,,43868.55069,code generation - would C be a good compiler backend?,,3,2,,,,CC BY-SA 4.0, +404760,1,,,2/5/2020 23:41,,0,95,"

How about instead of

+ +
#define bool _Bool
+#define true 1
+#define false 0
+#define __bool_true_false_are_defined 1
+
+ +

We should have this:

+ +
#define bool _Bool
+#define true (bool)1
+#define false (bool)0
+#define __bool_true_false_are_defined (bool)1
+
+ +

so that the issue specified in this question won't happen, by casting the literals to one-byte bools instead of leaving them as four-byte integers? Where can I officially propose this?

+ +

Another change is, how about we get rid of _Bool altogether and just have bool built in?

+",,user356583,,user356583,43866.99306,43867.05347,Better stdbool.h,,1,1,,,,CC BY-SA 4.0, +404761,1,,,2/5/2020 23:49,,2,47,"

Traditionally it's assumed that we should treat create, read, update, and delete as separate concepts. However, I have noticed that the following grouping seems more usable in certain contexts: [create, update, delete], [read].

+ +

For example:

+ +

In the context where what's allowed for a new object may be virtually indistinguishable from an existing object for many properties

+ +
    +
  • Create or Update. Combining create and update into one step creates lots of reuse. This dramatically simplifies the backend code to two parts: if no ID is provided, create a new (relatively empty) object; if an ID is provided, attempt to fetch it. Then perform the update. The front end is also simplified, and from a user-interaction perspective, creates and updates (edits) are automatically consistent.

  • +
  • In some cases, ""Delete"" is actually ""Archive"" and acts more like a property update then a separate request, ie turning an ""is_archived"" flag to True. In this context it becomes a sub set of Update.

  • +
+ +

What I am already aware of

+ +
    +
  • REST does not need to be used for internal APIs
  • +
  • Most things aren't really REST
  • +
+ +

The goal is not so much to have an all in one end point as is brought up here: RESTful API versus One Do-All Endpoint

+ +

But rather to combine like concerns, and make it more composable. For example adding a property in this pattern means adding it one place.

+ +

The net effect is that instead of having 3 backend routes and 3 frontend components (6 total), there's 1 of each (2 total), with only marginal differences where required.

+ +

Read operations seem significantly different because:

  • In general the object is assumed to always exist
  • Usually you are reading multiple objects or doing other things with them
  • Often there are different permission and context assumptions

+ +

This is discussed a bit here: Is it better to have separate Create and Edit actions or combine Create and Edit into one? There was some discussion of trade-offs there.

+ +

The key distinction is that I'm not asking only about the backend API, but about how a person thinks about this in the context of both the front end and the back end, with archiving also being part of it (and in the context of modern frameworks).

+",352284,,,,,43867.08194,Is it a common pattern to put create/update/archive as one process?,,1,0,,,,CC BY-SA 4.0, +404767,1,,,2/6/2020 2:28,,3,161,"

From what I understand two components A and B should only communicate with one another via an interface. Ideally this interface should be in its own separate assembly so that the client need not be loaded with the dependencies of any particular interface implementation.

+ +

The question then is how do I pass information such as enums and custom Types that appear within the interface if the client is only dependent on the interface?

+ +

Eg:

+ +
public enum Status
+{
+    Success,
+    Failure
+}
+
+public interface SomeInterface
+{
+    Status CallAPI(string s);
+}
+
+ +

In which assembly should Status in this example reside so that both client and service can use the interface without breaking the Dependency Inversion Principle? Can it be contained within the same assembly as the stand-alone interface? What if the enum were replaced with a custom type that the interface depended on?

+",355487,,,,,43867.78403,How to maintain Depedency Inversion Principle with enums & custom types?,,1,2,,,,CC BY-SA 4.0, +404768,1,,,2/6/2020 3:05,,3,345,"

According to Is "avoid the yo-yo problem" a valid reason to allow the "primitive obsession"?, I should define ""price"" like this:

+ +
public class Main{
+    private Price price;
+}
+
+public class Price{
+    private double priceValue;
+    public static void roundToValidValue(double priceValue){
+    }
+}
+
+ +

instead of

+ +
public class Main{
+    private double price;
+}
+
+public class PriceHelper{
+    public static double roundToValidValue(double priceValue){
+    }
+}
+
+ +

because I should encapsulate the logic about Price in a class to increase maintainability. However, I find that this seems to go against the goal of ""use the most abstract type possible"" (Why define a Java object using interface (e.g. Map) rather than implementation (HashMap)):

+ +

As I understand it, one goal of ""use the most abstract type possible"" is to reduce coupling between classes, which keeps clients from seeing methods they don't call, so that clients don't depend on methods they don't use. However, back to the Price class: I found that some clients don't call ""roundToValidValue"" at all. If I follow the ""avoid primitive obsession"" rule and encapsulate ""roundToValidValue"" in a new class, the clients that never call ""roundToValidValue"" would still depend on it, which seems to go against the decoupling goal promoted by ""use the most abstract type possible"".

+ +

Also, according to Should we define types for everything?, I should avoid using primitive types to represent business model directly, ie: I should change

+ +
public class Car{
+    private double price;
+    private double weight;
+    public void methodForPrice(double price){
+    }
+}
+
+ +

into

+ +
public class Car{
+    private Price price;
+    private Weight weight;
+    public void methodForPrice(Price price){
+    }
+}
+
+ +

which avoids the situation of calling the wrong method, i.e. methodForPrice(weight). However, according to Why define a Java object using interface (e.g. Map) rather than implementation (HashMap), I should use the most abstract type possible, i.e.:

+ +
public LinkedHashMap<String,Stock> availableStocks;
+public HashMap<String,Stock> unusedStocks;
+
+public class Main{
+    public void removeUnusedStocks(Map<String,Stock> unusedStocks){
+    }
+}
+
+ +

which I find may suffer from the ""primitive obsession"" problem: I might wrongly call removeUnusedStocks(availableStocks). So I think using the more specific type:

+ +
public class Main{
+    public void removeUnusedStocks(LinkedHashMap<String,Stock> unusedStocks){
+    }
+}
+
+ +

is better even though both versions work identically, and hence I think ""use the most abstract type possible"" goes against the goal of ""avoid primitive obsession"".

+ +

As a result, I think the goals of ""avoid primitive obsession"" and ""use the most abstract type possible"" work against each other, and hence the two guidelines contradict each other. Is that true?

+",351912,,,,,43867.83611,"Do ""avoid primitive obsession"" and ""use most abstract type as possible"" contradict each other?",,4,2,,,,CC BY-SA 4.0, +404771,1,,,2/6/2020 3:56,,1,165,"

I would like to understand what happens in a request that includes a .pfx certificate to authenticate the client to the server. I know how to implement this in Python or use it in Postman, but I don't understand what happens in the background. In which part of the request (header, body) is the certificate included?

+",356795,,,,,44017.25486,Where to include a pfx certificate in a http request?,,1,0,,,,CC BY-SA 4.0, +404775,1,404777,,2/6/2020 7:17,,1,133,"

I have data spread across two tables, shipments and shipment_data. Currently I have the standard auto-increment primary key id for shipments and a manually assigned unique primary key id for shipment_data. The primary key on shipment_data is also a foreign key referencing id on shipments.
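
+ +

In other words, the layout is roughly this (MySQL-style syntax assumed):

+ +
CREATE TABLE shipments (
+    id INT AUTO_INCREMENT PRIMARY KEY
+);
+
+-- shipment_data reuses the same id: primary key and foreign key at once
+CREATE TABLE shipment_data (
+    id INT PRIMARY KEY,
+    FOREIGN KEY (id) REFERENCES shipments (id)
+);
+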

+ +

In my mind this seems more efficient than making shipment_data.id auto-increment and adding a shipment_data.shipment_id foreign key, since the two ids will, as long as everything works correctly, have the same value, because both rows are created at the same time.

+ +

But I have a creeping feeling that there might be something I'm overlooking.

+",307447,,209774,,43867.34583,43867.97778,Using the primary key as a foreign key,,1,6,,,,CC BY-SA 4.0, +404776,1,404780,,2/6/2020 7:55,,1,162,"

Currently I'm working on something with the following general structure: I want to call 4 different APIs in sequential order (the results of one are needed for the next). If one throws an exception, I undo the efforts of the previous APIs with their sister delete APIs.

+ +

Currently my structure/ control flow looks like:

+ +
boolean A = false;
+boolean B = false;
+boolean C = false;
+boolean D = false;
+String response = StringUtils.EMPTY;
+
+try {
+   API_A;   // pseudocode: call A's API
+   A = true;
+}
+catch (Exception e) {   // A's exceptions
+    response = ""fail!"";
+}
+
+if (!A) {
+   undo_if_stuff_to_undo;   // pseudocode: roll back anything already done
+   return response;
+}
+
+ +

Basically this is repeated 3 more times, and if all 4 calls were successful, my response says success. I was wondering if there is a cleaner way to approach this. I thought about making a function for each API call that returns a boolean, but the problem is that I need the results from each API to call the next one. Perhaps creating a special class for each API result that stores its success/failure and relevant attributes?
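
+ +

To sketch what I mean by that last idea: push a compensating action after each successful call and unwind on failure, roughly like this (apiA, ResultA and friends are placeholders; Deque/ArrayDeque are from java.util):

+ +
Deque<Runnable> undo = new ArrayDeque<>();
+try {
+    ResultA a = apiA.create();
+    undo.push(() -> apiA.delete(a));
+
+    ResultB b = apiB.create(a);      // needs A's result
+    undo.push(() -> apiB.delete(b));
+
+    // ...same for C and D...
+    return ""success"";
+} catch (Exception e) {
+    while (!undo.isEmpty()) {
+        undo.pop().run();            // compensate in reverse order
+    }
+    return ""fail!"";
+}
+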

+",356812,,,,,43867.39861,Alternative to mass Try Catch blocks for my logic,,2,0,,,,CC BY-SA 4.0, +404778,1,,,2/6/2020 8:53,,0,70,"

I'm the manager of a reasonably small sized development department with 13 developers, 5 testers/QA, and 2 UX-designers.

+ +

The support is split into two parts, ""Customer service"" (CS) and ""technical support"" (TS). However, today these are quite intertwined. TS and CS share the burden of answering phones and reroute questions to each other. TS answers difficult technical questions and troubleshoots issues with the customers. CS takes care of light tech questions, added sales, onboardings, etc. If TS can't solve an issue (for example due to a bug), the ticket is transferred to the dev team.

+ +
+ +

This leaves me with the opportunity to suggest absorbing the technical support team under ""my wings"", instead of customer service. I have no personal gains in this, I am considering this purely for the benefit of the company.

+ +

Perceived benefits

+ +

What I think will come out of this is mainly a better product and a more closed loop between QA and TS. At this point it's easy to move responsibilities between them. For example who reproduces an issue before sending it to the relevant dev team?

+ +

Also, today feedback is pretty much limited to major issues and bugs instead of everything between major grievances and irritations.

+ +

Drawbacks

+ +

I am the manager for the entire development department. Each team has a sort of team leader, but they basically just escalade issues to me. This is something I have to address sometime soon in any case. Incorporating a new team with mine would cause increased administrative duties, such as SLA reporting (and following) which could be pretty mentally taxing.

+ +

Changes I would do as a part of my proposal

+ +

Today, customer service and technical support is quite intertwined and I don't want any part of the sales side, so what I would suggest is the following:

+ +
    +
  1. No inbound calls to technical support. Customer service answers all calls and acts as first line. They are smart people, any technical issues they can answer quickly, they will.

  2. +
  3. Technical support is currently 3 people. One of those will remain in the CS department to help answer 1st line. So in reality only two people will join my team.

  4. +
  5. Support tickets will arrive through Jira as per usual, but TS will only look at second (and third) line.

  6. +
  7. It will also be possible to connect directly to a support technician, but at the discretion of the CS representative. You can't call directly to TS but you can be connected. This is to avoid TS having to spend a lot of time calling out to customers that won't respond.

  8. +
  9. Developers are never the fallback for technical support, or 1st line. If there are multiple sick people, vacations colliding etc, the support temporarily morphs back into customer care.

  10. +
+ +

What do you think?

+ +

TLDR: I have the opportunity to absorb the technical support team into my development team, but I'm not sure if it's a good idea or not.

+",356806,,,,,44200.91806,Combining development and support departments,,1,0,,,,CC BY-SA 4.0, +404781,1,405830,,2/6/2020 9:11,,2,81,"

I have an application which uses lambda and fargate (task) containers for compute, EventBridge for communication, S3 and DynamoDB for storage and exposes data out via an API Gateway.

+ +

From my reading I can see that I can use PrivateLink or similar to expose each of these services into a VPC. However, all the documentation seems to suggest this is useful for accessing EC2 and RDS instances which live in the VPC. Is there any advantage to routing the communications for my services (Lambda,S3,DynamoDB etc) through a VPC when none of them really live in the VPC or is it just adding an unnecessary level of complexity to the application?

+ +

Thanks

+",356817,,,,,43888.55625,Does putting a serverless application in a VPC make it more secure?,,1,0,1,,,CC BY-SA 4.0, +404784,1,404803,,2/6/2020 10:59,,-4,271,"

I try to design a client application for a messaging application. The client can send and receive messages, the client can connect/disconnect.

+ +

My problem is that I don't know how to incorporate the Server class. Where would one put it in the diagram?

+",193720,,193720,,43870.67639,43870.68611,Where to put the server in my Messaging System UML diagram?,,1,31,3,,,CC BY-SA 4.0, +404785,1,,,2/6/2020 11:38,,-2,51,"

Which is better to use: a React class-based stateful component or a React Hooks functional stateful component? I've searched a bit but couldn't find which is preferred.
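
+ +

For reference, the same counter written both ways (just a sketch to anchor the question, not code from a real project):

+ +
import React, { useState } from ""react"";
+
+// Class-based stateful component
+class CounterClass extends React.Component<{}, { count: number }> {
+  state = { count: 0 };
+  render() {
+    return (
+      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
+        {this.state.count}
+      </button>
+    );
+  }
+}
+
+// Function component with the useState hook
+function CounterHook() {
+  const [count, setCount] = useState(0);
+  return <button onClick={() => setCount(count + 1)}>{count}</button>;
+}
+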

+",356833,,,,,43867.68056,React class or function for stateful components?,,1,1,,,,CC BY-SA 4.0, +404786,1,,,2/6/2020 12:32,,0,215,"

I'm working on a certain group repository. This repo has an issue tracker, but it's not intended for all issues, and certainly not for our day-to-day development work. The point is, I need to track issues with my code (and perhaps the code of others) myself.

+ +

How would you suggest I do this ""private issue tracking""? Should I just use a text file or a spreadsheet? Or is there some kind of personal issue tracker you would recommend? Or maybe something else entirely?

+ +

I should mention the code is not FOSS, so I can't just fork the code and track issues on my own repo.

+ +

Note:

+ +
    +
  • I'm not using a physical implement like a notebook or whiteboard for several reasons: 1. Expecting to eventually convert/combine these into filings on the public tracker, maybe. 2. If push comes to shove, I use a plain text editor, not a physical notebook, typically 3. Sharing with people who are not physically next to me.
  • +
  • The simple default is to just write down the issues I see in plain text format. But if there are more than, say, 15-20 of these, that gets cumbersome; and I want to be able to have links between the issues, to tag them, and the features I would usually have on an issue tracker. I just feel that setting up one just for myself is a bit of an overkill.
  • +
+",63497,,63497,,43867.95069,43867.95069,How to privately track issues?,,2,10,,43867.95278,,CC BY-SA 4.0, +404789,1,,,2/6/2020 13:19,,1,170,"

Need your help to clarify primitive concepts:

+ +

In an embedded system, when a program runs on the processor (ARM, as an example), my understanding is that this happens because the ""code to be executed"" has been loaded into main memory.

+ +

My question is:

+ +
    +
  • Is ""the code to be executed"" called ""executable"", what does ""executable"" mean?
  • +
  • What is the difference between ""executable"" and ""the software or binary image"".
  • +
+ +

I do not come from a software engineering background, so please excuse the gaps in my knowledge.

+",356847,,6509,,43867.57986,43867.70556,Executable VS. Software image,,2,3,,,,CC BY-SA 4.0, +404790,1,,,2/6/2020 13:50,,10,882,"

Sometimes, when working with other developers using git version control, I run into a problem where the git graph suddenly spawns countless parallel lines, like in this image.

+ +

Image from here.

+ +

I have no idea what actions cause this and how to prevent this issue.

+ +

Usually this happens with just a couple of branches like master, staging and a few feature branches.

+",356850,,356850,,43867.58333,43867.61667,"What causes the ""git rainbow""?",,1,4,,,,CC BY-SA 4.0, +404792,1,404915,,2/6/2020 14:13,,2,390,"

I have seen people point to many different ""origins"" of TDD. Some will point me to Kent Beck's rediscovery in the late 1990s with XP. Others will mention the 1960s practice of annotating results on the sides of punch cards to check things faster and more accurately. And still others will point me to various books on software reliability dating from the 1960s to the 1990s.

+ +

But is there a seminal paper or book with a formalization of the process dating from before Kent Beck's XP Explained book? Could someone who has thoroughly studied the topic create a timeline?

+",344810,,209774,,43870.76181,43918.42222,What is the origin of TDD?,,1,8,,,,CC BY-SA 4.0, +404794,1,,,2/6/2020 15:57,,0,176,"

We're in the process of moving everything to K8s and one of our applications is a small .NET Core 2.2 console app that runs a Hangfire background job server. At the moment the app runs as a Windows service, but that's obviously about to change.

+ +

The problem that I ran into is that I'm not sure which approach is better for the K8s liveness probe:

+ +
    +
  1. Use an ASP.NET Core web app and use the built-in health check system
  2. +
  3. Stick to the console app approach and run some script to check the app still works
  4. +
+ +

The only things that I think we need to check are whether the process is up and whether the app can connect to the Hangfire db. Doing this with the first approach is extremely easy, but I'm not sure it's worth creating a web app just for this. The second approach would result in a lighter app, but I honestly have no idea how we would check the db connection.
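
+ +

For approach 1, what I have in mind is roughly the sketch below; AddSqlServer comes from the AspNetCore.HealthChecks.SqlServer package and assumes our Hangfire storage is SQL Server (the connection string name is made up):

+ +
public void ConfigureServices(IServiceCollection services)
+{
+    services.AddHealthChecks()
+            .AddSqlServer(Configuration.GetConnectionString(""Hangfire""));
+}
+
+public void Configure(IApplicationBuilder app)
+{
+    // Point the K8s livenessProbe at this endpoint.
+    app.UseHealthChecks(""/health"");
+}
+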

+ +

Has anyone tackled this issue before? Is there some other approach that would be more appropriate? Any suggestion is appreciated.

+",220631,,,,,43898.54306,What's a good liveness probe for a Hangfire background job server?,<.net-core>,1,0,,,,CC BY-SA 4.0, +404800,1,404801,,2/6/2020 18:08,,1,82,"

I am currently writing a Python program that retrieves the pixels from two BMP files and finds the percent difference between them (without using the Pillow library for the purpose of learning). I can think of two ways to approach this problem:

+ +

1) Read the pixels from each file in separate streams and compare the results as the files are being read.
+2) Read the pixels from one file into a multi-dimensional list, read the pixels from the other into another multi-dimensional list, and then compare the resulting lists one-by-one.
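
+ +

For approach 1, I have something like this sketch in mind (it assumes both files share identical headers and dimensions, that pixel data starts at the usual offset 54 for simple 24-bit BMPs, and it compares raw bytes rather than decoded pixels):

+ +
def percent_difference(path_a, path_b, chunk_size=4096):
+    diff = total = 0
+    with open(path_a, ""rb"") as fa, open(path_b, ""rb"") as fb:
+        fa.seek(54)  # skip the 54-byte header of a simple 24-bit BMP
+        fb.seek(54)
+        while True:
+            a = fa.read(chunk_size)
+            b = fb.read(chunk_size)
+            if not a:
+                break
+            diff += sum(x != y for x, y in zip(a, b))
+            total += len(a)
+    return 100.0 * diff / total
+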

+ +

Out of these two approaches, which would be the more efficient choice for my goal? Or is there another approach that would be better? I am currently working with 256x256 24-bit BMPs and plan on introducing images with larger resolutions.

+",356872,,356872,,43867.75625,43867.77153,How should I approach the comparison of two BMP images?,,1,2,,,,CC BY-SA 4.0, +404805,1,,,2/6/2020 19:56,,0,173,"

When the caller gives me a call, I need to evaluate n criteria, which I currently do like this:

+ +
if (a & b & c & d & e)
+ +

Day by day the conditions keep growing, and the check is getting really hard to read and understand, even for me (and I actually wrote the code). Is there a better way of doing this?
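
+ +

One direction I have considered is giving each condition a name so the top-level check reads like a sentence (a sketch; all the names are invented):

+ +
private boolean canProceed() {
+    return hasValidAccount()
+        && isWithinBusinessHours()
+        && hasSufficientBalance()
+        && passesFraudCheck()
+        && isNotBlacklisted();
+}
+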

+",198754,,,,,43867.86875,How to write the following snippet in more cleaner way?,,2,1,,,,CC BY-SA 4.0, +404810,1,404848,,2/6/2020 20:51,,6,398,"

I have created a class that implements behavior which is difficult to test without some intimate knowledge of internal state. I'd like not to clutter the class's public API with accessors for this internal state, and I'd also like to add some friction to other programmers writing code depending on this internal state. So would it be a good idea to do the following?

+ +
// SUT:
+public class ComplicatedThing
+{
+    // The important behavior
+    public void DoStuff() { ... }
+
+    // Things necessary for tests:
+    public interface ITestClient
+    {
+        int IntimateKnowledge { set; }
+    }
+    public void Accept(ITestClient client)
+    {
+        client.IntimateKnowledge = ...;
+    }
+}
+
+// Tests:
+public class ComplicatedThingClass
+{
+    class Client : ComplicatedThing.ITestClient
+    {
+        public int IntimateKnowledge { get; set; } = -1;
+    }
+
+    public class DoStuffMethodShould
+    {
+        [Fact]
+        public void DoComplicatedThings()
+        {
+            var thing = new ComplicatedThing();
+
+            thing.DoStuff(...);
+
+            var client = new Client();
+            thing.Accept(client);
+            Assert.Equal(123, client.IntimateKnowledge);
+        }
+    }
+}
+
+ +

Details

+ +

Specifically, I have a class which can asynchronously detect finalization of an object:

+ +
public class LifetimeWatcher
+{
+    private readonly ConditionalWeakTable<object, HashSet<...>> _weakTable = ConditionalWeakTable<object, HashSet<...>>();
+
+    private int ResidueLevel => _weakTable.Select(kvp => kvp.Value.Count + 1).Sum();
+
+    public async ValueTask WaitForFinalizationAsync(
+        WeakReference<object> weakTarget,
+        CancellationToken cancellationToken);
+}
+
+ +

One of the things that I'd like to test is that no ""residue"" remains after a call to WaitForFinalizationAsync completes. Specifically, any entries added to the private ConditionalWeakTable need to be removed when the function returns. This is easy to quantify with the ResidueLevel property.

+ +

But how do I make that information available to tests without cluttering the public API or making it easy for others to write code which begins depending on this implementation detail?

+ +

Not really knowing what else to do I have added the following to my class:

+ +
public class LifetimeWatcher
+{
+    // ...
+
+    public interface ITestClient
+    {
+        int ResidueLevel { set; }
+    }
+    public Accept(ITestClient client)
+    {
+        client.ResidueLevel = this.ResidueLevel;
+    }
+}
+
+ +

Normal tests are able to interact with instances of LifetimeWatcher using its normal public API. But now the few tests that need to be concerned with ""residue"" are able to acquire that implementation detail by jumping through some hoops.

+ +

Question

+ +

Is this the right thing to do?

+ +

Lots of alarms are going off in my head when I do this:

+ +
    +
  • Tests that depend on implementation details are rigid
  • +
  • I'm not writing unit tests: my tests are using this implementation detail in conjunction with calls to GC.Collect()
  • +
  • My software under test is doing extra stuff only for the benefit of tests. My LifetimeWatcher class has things called ""test"" within it, for crying out loud
  • +
  • I have to create a special class (implementing LifetimeWatcher.ITestClient) that will only live in the tests project
  • +
+ +

But at the same time:

+ +
    +
  • This class is doing some complicated things and is very sensitive to changes. I want it to be difficult to break this class in important ways, and leaving ""residue"" is an important thing to prevent
  • +
  • ""Residue"" levels are an implementation detail. So if I want to test it then I have no choice but to test an implementation detail. At least I'm not calling Thread.Sleep(...) or making HTTP requests in the tests
  • +
  • Having to implement an interface and then having no guarantee that its members will even be populated when you pass it to the Accept method certainly makes it difficult for others to depend on implementation details
  • +
  • If tests in the future need additional intimate knowledge then it's easy to add additional setters to the interface
  • +
  • I can further reduce the API footprint by making the Accept method an explicit implementation of some interface and moving the ITestClient interface out to a different file
  • +
+ +

What's the Better Way™?

+",261656,,261656,,43867.87708,43870.09514,Exposing implementation details to tests,,4,0,1,,,CC BY-SA 4.0, +404812,1,,,2/6/2020 21:18,,0,99,"

Sorry for the weird title, but I couldn't think of a better way to explain it. I saw someone do this once and didn't think it was a good idea, but I wanted to check. Basically, he wanted to produce the numbers 0-3 and do two different things with them, say use one as a key to a dictionary and the other in a value calculation, like this:

+ +
d = {0: 0, 1: 1, 2: 4, 3: 9}
+
+ +

So he did this to return the index twice:

+ +
d = {}
+for i, v in enumerate(range(4)):
+    d[i] = v**2
+
+ +

Is that better than writing d[i] = i**2? I thought it was weird to enumerate range(4). I know this might be somewhat subjective but is this bad design or just a preference thing? What's the best practice?
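
+ +

For comparison, either of these says the same thing without the double index:

+ +
d = {i: i**2 for i in range(4)}   # dict comprehension
+
+d = {}
+for i in range(4):
+    d[i] = i**2
+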

+",285315,,,,,43867.97014,Is it ever a good idea to enumerate a python range so you get the index twice?,,1,3,,,,CC BY-SA 4.0, +404819,1,,,2/6/2020 23:30,,5,128,"

I can't believe DataTable/SqlDataAdapter massively beat out System.Data.Linq.DataContext.ExecuteCommand and ExecuteNonQuery (tried with both stored procedures and command text) and straight SqlConnection/SqlCommand ExecuteCommand and ExecuteNonQuery called in a loop. I was going to finally upgrade a legacy app and thought I was being pretty clever in my new approach, but it turned out to be tortoise slow. I went through 6 phases of code and have rough timings/stats, but I was just looking to see if anyone spots something obvious or knows something I don't before I dig deeper.

+ +

From my understanding of DataTable/SqlDataAdapter (which I haven't 'coded' for 15-20 years really), DataTable is maintaining a state of each row (clean, new, edited) and when you call Update() it sends back 'batches' of SQL commands (based on your defined Insert/Update command) to update the database appropriately. I assume it is watching out for length of the SQL command and the number of parameters based on SQL Server limitations as well.
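
+ +

For context, the batching knob on the adapter side looks like this (a sketch; insertCmd and updateCmd stand for the already-configured SqlCommands, and dataTable for the filled DataTable):

+ +
var adapter = new SqlDataAdapter { InsertCommand = insertCmd, UpdateCommand = updateCmd };
+
+// Batched updates require the commands not to fetch rows back.
+insertCmd.UpdatedRowSource = UpdateRowSource.None;
+updateCmd.UpdatedRowSource = UpdateRowSource.None;
+
+adapter.UpdateBatchSize = 0;   // 0 = use the largest batch size the server allows
+adapter.Update(dataTable);
+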

+ +

One thing to note, in all my implementations, the UPDATE/INSERT commands were 'identical' in terms of fields updated and WHERE clauses. Only difference is whether they were SQL Stored Procedures or simply adhoc command text.

+ +

Let's dig in. I have an Xml document with repeating elements of xDataDef:

+ +
<xDataDefs>
+   <xDataDef id-auth='111111111'>
+      <Profile>
+         <DataElements/>
+      </Profile>
+      <HistoryData>
+         <HistoryItem hisType='Pay' hisIndex='2020'>
+            <DataElements/>
+         </HistoryItem>
+      </HistoryData>
+   </xDataDef>
+</xDataDefs>
+
+ +

File Statistics

+ +
    +
  • <DataElements/> - Those are just the properties of the 'model'. So usually 5-20 fields represented by this.
  • +
  • <xDataDef/> - My file has 2500 xDataDef elements.
  • +
  • <HistoryItem/> - Across all xDataDef elements, my file has 159K elements.
  • +
+ +

During processing, I loop through this file with an XmlReader, processing each xDataDef (and its children) as I encounter them.

+ +

Database Information

+ +

Profile table - Each xDataDef element will be placed here.

+ +
    +
  • pKey - int, primary key and has a non-clustered unique index.
  • +
  • pAuthID - holds xDataDef/@id-auth and has a non-clustered unique index.
  • +
  • pProfile - holds the xml blob.
  • +
+ +

HistoryData table - Each HistoryItem element is placed here.

+ +
    +
  • hisKey - int, primary key and has a non-clustered unique index.
  • +
  • hispKey - int, foreign key to Profile has a non-clustered index.
  • +
  • hisType / hisIndex - holds respective attributes.
  • +
  • hisData - holds the xml blob.
  • +
  • IX_HistoryData - non-clustered unique index of hispKey/hisType/hisIndex
  • +
+ +

Original/Baseline

+ +
    +
  • Time: 4 minutes, 30 seconds to load entire file - 2541 Profile rows and 159K HistoryData rows.
  • +
  • Strategy: Using DataTables / SqlDataAdapters and other code that I thought was inefficient, it took 4.5 minutes to load the entire file. Explained more in Phase 6.
  • +
+ +

Phase 1

+ +
    +
  • Time: In 1 minute, I only imported 40 Profile rows and 2,351 HistoryData rows.
  • +
  • Strategy: Build up one query to insert or update the Profile and all associated HistoryData rows using an 'UPDATE ... IF @@ROWCOUNT = 0 ... INSERT INTO' design. Then execute the command for every profile (so about 2500 calls). The query roughly looked like this:

    + +

    DECLARE @pKey int;
    +UPDATE Profile SET @pKey = pKey, pProfile = 'NewXml' WHERE pAuthID = 'AuthID'

    + +

    IF @@ROWCOUNT = 0 BEGIN
    + INSERT INTO Profile (pProfile, pAuthID) VALUES ('NewXml', 'AuthID')
    + SET @pKey = SCOPE_IDENTITY()
    +END

    + +

    UPDATE HistoryData SET hisData = 'NewXml' WHERE hispKey = @pKey AND hisType = 'Type' AND hisIndex='Index1'

    + +

    IF @@ROWCOUNT = 0 BEGIN
    + INSERT INTO HistoryData (hispKey, hisType, hisIndex, hisData) VALUES (@pKey, 'Type', 'Index1', 'NewXml')
    +END

    + +

    UPDATE HistoryData SET hisData = 'NewXml' WHERE hispKey = @pKey AND hisType = 'Type' AND hisIndex='Index2'

    + +

    IF @@ROWCOUNT = 0 BEGIN
    + INSERT INTO HistoryData (hispKey, hisType, hisIndex, hisData) VALUES (@pKey, 'Type', 'Index2', 'NewXml')
    +END

    + +

    [Repeating HistoryData blocks for every hisType/hisIndex row]

  • +
+ +

Phase 2

+ +
    +
  • Time: In 1 minute, I only imported 15 Profile rows and 941 HistoryData rows (this phase also included a 55-second query up front to prep stuff).
  • +
  • Strategy: Queried all the Profile.pKey values up front into a Dictionary<pAuthID, pKey>, and likewise HistoryData.hisKey into a Dictionary<pKey+hisType+hisIndex, hisKey>, to be able to determine whether to issue an Update or an Insert statement for each Profile/HistoryItem. While looping, I called DataContext.ExecuteCommand on each row based on existence.
  • +
+ +

Phase 3

+ +
    +
  • Time: In 1 minute, I only imported 17 Profile rows and 1,054 HistoryData rows.
  • +
  • Strategy: Still queried all the Profile.pKey values up front, but eliminated the up-front HistoryData.hisKey query. Instead I ran it right before processing the xDataDef/HistoryData/HistoryItem rows and restricted the query to the currently processing Profile, to reduce memory requirements and make the query faster. Finally, while looping, I called SqlCommand.ExecuteNonQuery on each row based on existence, using the same command text as Phase 2.
  • +
+ +

Phase 4

+ +
    +
  • Time: In 1 minute, I only imported 16 Profile rows and 995 HistoryData rows.
  • +
  • Strategy: Still queried all the Profile.pKey values up front, but completely eliminated the HistoryData.hisKey query. I could do this because I was using the exact same Stored Procedures that the original DataTable/SqlDataAdapter code used, and the Stored Procedure for the 'HistoryData Command' already had an UPDATE...IF NOT UPDATED...INSERT strategy in place. During the looping, I executed a SqlCommand for each Profile/HistoryItem row. I'm surprised at how slow this was compared to the original, given that the only difference I can see is that the original 'batched' the statements.
  • +
+ +

Phase 5

+ +
    +
  • Time: In 1 minute, I only imported 291 Profile rows and 17,263 HistoryData rows.
  • +
  • Strategy: Almost back to the 'original' implementation. I filled a DataTable with all existing data from the Profile table. In my case, I already had all 2541 rows in the database. Surprised this can be efficient. As for the HistoryData DataTable, it is filled with 'nothing' (the WHERE clause forces no rows to be returned).
  • +
+ +

During looping, I BeginEdit or NewRow for each Profile element, set the column values and call SqlDataAdapter.Update. Then, I call NewRow for every HistoryItem element for current xDataDef and set column values, finally calling SqlDataAdapter.Update after done. So the number of SqlDataAdapter.Update calls would approximately be 2541 for Profile elements and ~(17263/2541) for HistoryItem elements.

+ +

Phase 6

+ +
    +
  • Time: In 1 minute, I imported 2541 Profile rows and 34,000 HistoryData Rows (essentially back to original import time).

  • +
  • Strategy: This is almost a clone of the original. The results for this phase are skewed a bit because of the implementation. It does the same 'strategy' as Phase 5, with this one difference.

  • +
+ +

It processes all Profile rows first and calls SqlDataAdapter.Update only once, after processing all the rows. After the Profile rows, it loops through the Xml again and processes all HistoryItem elements, calling SqlDataAdapter.Update for every 2000 rows (then clearing out DataTable.Rows to release memory) and one more time after the loop to catch any remaining rows not yet applied.

+ +

This scares me. A file with only 2541 Profile elements might be ok, but we'll need to handle files as big as 60K-100K Profile Elements. I guess I could implement a similar strategy as HistoryData rows, but there is the issue that I fill the Profile DataTable at the beginning with the existing population.

+ +

Takeaways That I'll Probably Test Unless Suggested Otherwise

+ +
    +
  1. I'm curious if I need to continue to use Stored Procedures for Phases 5-6, or if it would be 'just' as performant if I used command text instead.
  2. +
  3. I'm curious if I need to have both an Update and an Insert command for Profile rows, or if I could create a new Stored Procedure/command text to do the UPDATE...INSERT as needed (like the HistoryData rows) - see the sketch after this list. I'm assuming I can, since there are more HistoryData rows in the Xml and database and it seems to perform well enough. Then I could get rid of the requirement to populate the Profile DataTable up front.
  4. +
+ +
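
As the sketch referenced in the list above, here is roughly what I'd try for getting away from per-row upserts entirely: bulk-load into a staging table, then one set-based merge. This is my own illustration, untested, and the staging table name is hypothetical:

+ +
// Stream the parsed rows into a staging table in one call.
+using (var bulk = new SqlBulkCopy(connection))
+{
+    bulk.DestinationTableName = ""ProfileStaging""; // hypothetical table
+    bulk.WriteToServer(profileTable);              // DataTable built while parsing
+}
+
+// Then one set-based upsert from staging into the real table, e.g.:
+// MERGE Profile AS t
+// USING ProfileStaging AS s ON t.pAuthID = s.pAuthID
+// WHEN MATCHED THEN UPDATE SET pProfile = s.pProfile
+// WHEN NOT MATCHED THEN INSERT (pAuthID, pProfile) VALUES (s.pAuthID, s.pProfile);
+
+ +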

So... are the results surprising? I'd like to get rid of the Sql* objects in the code base, but maybe that is an unwarranted desire. I know L2S is obsolete as well, but at least it 'kind of' translates to Entity Framework if/when we move in that direction. I'd also like to avoid Stored Procedures and use only command text, where the code is more visible/maintainable to the developers working on it. But if performance is the cost, then I'll forgo this as well.

+ +

Is there a better way to process large Xml files and update/insert data into a SQL database?

+",356894,,,,,43868.51042,Are legacy C# DataTable/SqlDataAdapters exponentially faster than SqlConnection/SqlCommand and/or LINQ to SQL DataContext.ExecuteCommand calls?,,1,6,,43870.69306,,CC BY-SA 4.0, +404823,1,,,2/7/2020 2:43,,3,202,"

After searching for some Abstract Factory examples using modern programming languages, I have some dilemmas about the conceptual UML schema of Abstract Factory in the broad sense (sensu lato), more specifically about the Client (Application) class and its relationships.

+ +

On wikipedia example, the Client code has relationships with the Factory and each (family of) Products:

+ +

+ +

On the other hand, on Refactoring Guru, the Client code has an aggregation with the Factory:

+ +

+ +

Can this last one be considered a ""GOF"" abstract factory too?

+ +

If so, is it only another way to simplify the diagram, making the use of the concrete families implicit rather than explicit?
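
+ +

To make the second diagram concrete, this is how I read it (a sketch of my own, not taken from either source) - the Client aggregates only the factory and touches products solely through their abstract types:

+ +
// Client: no concrete product or factory class appears here.
+class Application {
+    private final GUIFactory factory; // the aggregation from the diagram
+
+    Application(GUIFactory factory) { this.factory = factory; }
+
+    void render() {
+        Button button = factory.createButton(); // abstract product only
+        button.paint();
+    }
+}
+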

+",356206,,51948,,43870.14792,43870.14792,Abstract Factory: Can Client Class have an aggregation with the Factory?,,2,1,,,,CC BY-SA 4.0, +404826,1,404834,,2/7/2020 3:51,,0,111,"

Regarding the understanding of the Northbound interface and Southbound interface of a system, which one is correct?

+ +
    +
  • A
  • +
+ +

In understanding A, the traffic from/to the higher-level system is the northbound interface (blue), while the traffic from/to the lower-level system is the southbound interface (orange).

+ +

+ +
    +
  • B
  • +
+ +

In understanding B, the traffic from lower to higher is northbound (blue), while that from higher to lower is southbound (orange).

+ +

+",75065,,,,,43868.38056,"Northbound interface and Sounthbound interface, which understanding is correct?",,2,2,,,,CC BY-SA 4.0, +404833,1,,,2/7/2020 9:03,,-1,49,"

I am working on a (PHP) application where users have so-called workspaces. A workspace is a folder with a specific structure and a bunch of specific files - user information and some workspace metadata are stored in the DB. To separate the concerns, I split all classes into those that access the database and those that access the filesystem. From a technological view I think this makes perfect sense and minimizes dependencies.

+ +

Now my structure looks like this:

+ +
application
+|- classes
+|-- dao
+|--- dbAccess.class.php
+|--- dbConfig.class.php
+|--- dbInitializer.class.php [extends dbAccess]
+|--- dbAdmin.class.php [extends dbAccess]
+|--- ...more db related classes
+|-- workspace
+|--- wsController.class.php
+|--- wsInitializer.class.php [extends wsController]
+|--- wsValidator.class.php [extends wsController]
+|--- ... more fs related classes
+|-- exception
+|--- ...exception classes
+|-- ... more categories
+|- test
+...
+...
+
+
+ +

But now I wonder if my naming of the classes and class folders could be improved:

+ +
    +
  • Is DAO the right category for database-using classes, or does that imply, for example, a specific architecture for those classes?
  • +
  • Is it clever to name the other category workspace instead of filesystem? Semantically, workspace is correct - the wsValidator, for example, validates the structure of a workspace folder and its files, not the filesystem in general. On the other hand, the DAO classes contain functions to access workspace metadata (which semantically belongs to the workspace) but also other data from the DB, like user data. So one category is labeled after a concept from my application - workspace - and the other after a technology - db - which seems a little weird.
  • +

+ +

I hope my problem is somewhat understandable without knowing the classes themselves. Maybe someone has a good suggestion on how to name them properly?

+",332120,,,,,43868.49306,Naming my classes and class folders in PHP project,,1,1,,,,CC BY-SA 4.0, +404838,1,404840,,2/7/2020 11:53,,-2,402,"

I asked this question on StackOverflow recently:

+ +

Is there a Map in Java that supports looking up Keys by (non-Unique) Value?

+ +

As I suspected, the answer was ""no"", but I'm wondering why there is no data structure that captures the relationship in question:

+ +

Value put(Key, Value)

+ +

List<Key> lookup(Value)

+ +

This is a fairly common set of requirements, and it's one that is supported pretty commonly in database languages like SQL (e.g. SELECT key WHERE value = ${val}).

+ +

So why do no major languages support this relationship?

+ +
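
To be concrete, this is the kind of structure I mean - a sketch of my own in Java (not an existing library class) that keeps a forward map and a value-to-keys index in sync:

+ +
import java.util.*;
+
+class ReverseLookupMap<K, V> {
+    private final Map<K, V> forward = new HashMap<>();
+    private final Map<V, List<K>> reverse = new HashMap<>();
+
+    V put(K key, V value) {
+        V old = forward.put(key, value);
+        if (old != null) reverse.get(old).remove(key); // drop the stale index entry
+        reverse.computeIfAbsent(value, v -> new ArrayList<>()).add(key);
+        return old;
+    }
+
+    List<K> lookup(V value) { // all keys currently mapped to this value
+        return reverse.getOrDefault(value, Collections.emptyList());
+    }
+}
+
+ +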

Note that this is different to a bi-map relationship, where the 1-1 relationship is enforced in both directions.

+",366,,,,,43868.72222,"Why is ""map with reverse lookup"" not supported as a data structure?",,2,3,,,,CC BY-SA 4.0, +404843,1,,,2/7/2020 13:41,,1,23,"

I know that Primary-backup replicated distributed systems guarantee sequential consistency.
+My question is whether multi-primary systems also achieve that.

+ +

I mean, if they use consensus (e.g., the Paxos algorithm) to agree on an order for the received requests, then I suppose sequential consistency is achieved.
+But if they just use conflict-free replicated data types, would that achieve sequential consistency?

+",275662,,83178,,43868.72014,43868.72014,Do Multi-Primary replicated distributed systems guarantee sequential consistency?,,0,0,,,,CC BY-SA 4.0, +404854,1,406217,,2/7/2020 21:07,,2,110,"

I have to design a database, but I don't yet know which underlying technology it will use (not just my decision). Could be SQL, could be NoSQL, could be something else.

+ +

I do have quite a few requirements and I have enough knowledge of the business domain to create a data model.

+ +

Obviously, not the physical model. I would like to create a technology independent logical model. First, I'd like to create an E-R diagram.

+ +

But I don't want to draw it via drag and drop. I would like to describe it in some language from which I could generate a diagram.

+ +

Does this exist?

+ +

I know E-R diagrams can be reverse-engineered from existing SQL databases, for example. But that's not what I want.

+ +

I am looking for a generalized technology-agnostic language for describing a logical data model. It would support defining objects, relations between them (one to many, many to many) and perhaps attributes and their types (in a general way, thus 'text' or 'string' and not varchar2(100)).

+ +

Basically an E-R diagram written out. After I create it, I'd commit it to Github and share and maintain that way. Diagrams are quite useful, so I should be able to import it to some tool to get a diagram from it.

+ +
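
To illustrate the kind of notation I'm after - entirely hypothetical syntax, not a tool I know to exist:

+ +
entity Customer {
+  name: string
+  email: string
+}
+
+entity Order {
+  placed_at: datetime
+}
+
+relation Customer 1 -- * Order   // one customer places many orders
+
+ +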

Any ideas?

+",356970,,,,,43896.72639,"Entity-Relationship language, not diagram",,3,1,,,,CC BY-SA 4.0, +404856,1,,,2/7/2020 22:21,,3,332,"

If you're a very old programmer like me you may have written stuff like this early on:

+ +
DIM A, B, C
+LET A = 2
+LET B = 2
+GOSUB ADD
+PRINT C
+END
+ADD:
+LET C = A + B
+RETURN
+
+ +

(Actually, if you're an assembly programmer, you may be stuck writing stuff like this anyway, but let's not digress.)

+ +

Of course the modern approach would be more like this:

+ +
var c = Add(2,2);
+Print(c);
+
+int Add(int a, int b)
+{
+    return a + b;
+}
+
+ +

I understand the first pattern is obviously ""bad;"" that is not under dispute. I'm just trying to explain to another engineer the specific technical reasons why, without injecting my own opinion. A good answer will strive to be exhaustive, and stick to technical reasons, risks, and potentially cite known code smells or other authoritative sources.

+",115084,,115084,,43868.94375,43869.94861,Why is it better to use parameters instead of temporary global variables?,,3,7,1,,,CC BY-SA 4.0, +404860,1,405581,,2/7/2020 22:45,,2,255,"

I'm struggling to find good ways to split up classes without exposing private data. Most articles I read about SRP seem to ignore how the new classes that take on the separated responsibilities access the data that once used to be private to the original class.

+ +

Take, for example, a Gripper class representing a robotic gripper in a graphical computer simulation. This class handles the logic of a gripper: picking up items, rotating them, putting them down in a different position, etc. The gripper can also draw itself onto a GUI.

+ +

This breaks the SRP, because there are 2 reasons for the Gripper class to change: changes to the logic of how a gripper operates, and changes to how a gripper is drawn. However, Gripper has some private data members that are used by both the logic and the drawing portion. Simply exposing those members through some (const) getters feels like a step backwards. I'd be exposing implementation details, tying myself down to supporting this new ""interface"", and it seems downright wrong.

+ +

So I came up with this:

+ +
class Renderer
+{
+public:
+
+    /* Takes the data needed to draw a gripper and does so. */
+    void
+    DrawGripper(const Foo& foo, const Qux& qux);
+
+    /* Additional methods to draw other things. */
+};
+
+class Gripper
+{
+public:
+
+    void
+    Draw(Renderer& renderer) const
+    {
+        renderer.DrawGripper(mFoo, mQux);
+    }
+
+private:
+
+    Foo mFoo;
+    Bar mBar;
+    Qux mQux;
+};
+
+ +

Pro's:

+ +
    +
  • Better separation of responsibilities. Apart from the Draw function, consisting of 1 line of code, all the drawing code is now gone from Gripper.
  • +
  • Renderer could be an abstract interface, easily allowing different implementations.
  • +
  • Data can be passed by const reference to DrawGripper, whereas a plain member function would have total access to all members.
  • +
+ +

Con's:

+ +
    +
  • Gripper still has a Draw function and knows about Renderer.
  • +
+ +

I feel the con is manageable though. In the end, one of the reasons for Gripper to exist is to eventually be drawn onto the screen, so the fact that it still has a Draw function does not seem too bad. Perhaps this is a case of having to choose the lesser of 2 evils? The alternative of exposing private data seems far worse imho.

+ +

Am I on the right track here? Is this a good system that can be deployed in cases like this? Any problems or better ways?

+",341699,,209774,,43869.50625,43883.73819,Splitting class responsibilities without exposing private data,,5,6,0,,,CC BY-SA 4.0, +404861,1,,,2/7/2020 23:20,,1,33,"

I have a customer-facing enterprise app (like AWS). Customers specify config in the app, and the relevant infrastructure is spun up along with my software on it. My CloudOps team, like the AWS CloudOps team, would have access to an internal version of the app, which would have a lot more nuts and bolts that are not exposed to customers - i.e., the internal app is a superset of the customer-facing app.

+ +
    +
  1. Should I deploy 2 Application servers - one for customers and one for internal use?
  2. +
  3. Should I deploy 1 Application server for both and use RBAC to keep it secure?
  4. +
+ +

I like option 2, but a colleague says it is not secure and wants option 1.

+",356976,,356976,,43869.11111,43869.38958,Deploying a Customer Facing Enterprise App,,1,0,,,,CC BY-SA 4.0, +404864,1,404898,,2/8/2020 2:12,,2,126,"

Let's take a sample case where we want to create an article with some tags. Following is my pseudocode; the questions are at the bottom.

+ +

Sample Case

+ +

Request:

+ +
@Data
+public class CreateArticleRequest {
+    @NotEmpty
+    String content;
+    @NotEmpty
+    List<Integer> tagIds;
+}
+
+ +

Controller:

+ +
@RestController
+public class ArticleController {
+    // ... some autowire ...
+    public void create(@RequestBody @Valid CreateArticleRequest request) {
+        // (i) shall I do this?
+        if(!repository.doesAllTagIdsExists(request.getTagIds())) {
+            throw new RuntimeException(""bad tagIds"");
+        }
+
+        service.create(request); // or shall I put something here?
+    }
+}
+
+ +

Service:

+ +
@Service
+public class ArticleService {
+    // ... some autowire ...
+    public void create(CreateArticleRequest request) {
+        Article article = new Article();
+        article.setContent(request.getContent());
+        Article createdArticle = repository.createArticle(article);
+        // (ii)
+        repository.createArticleTagRelation(createdArticle.getId(), request.getTagIds());
+    }
+}
+
+ +

Model:

+ +
@Data class Article { int id; String content; }
+@Data class Tag { int id; ...sth else... }
+
+ +

Database:

+ +
Table ""article"" columns: id, content
+Table ""tag"" columns: id, sth_else
+Table ""article_has_tag"" columns: article_id, tag_id
+
+ +
+ +

Questions

+ +
    +
  1. Of course, we need to validate whether the tagId exists and is visible to this user, instead of trusting it and putting it directly into the database. Where shall I do it? Shall I do it at (i)? (I know we can validate things like ""not empty"" etc. using @NotEmpty, but how can I validate things like tagIds, which need the database?)
  2. +
  3. At (ii) in the Service, should the createArticle + createArticleTagRelation calls be handled there, or combined into one repository function repository.createArticleAndWithTagRelation?
  4. +
  5. Is it good to have such XXXRequest and YYYResponse objects? Is there anything wrong with my overall architecture?
  6. +
+",340897,,90149,,43872.57083,43872.57083,Convert modelId to model & validation of modelId -- in Controller or Service layer?,,1,0,,,,CC BY-SA 4.0, +404865,1,,,2/8/2020 2:52,,1,1030,"

I have been exploring the microservice architecture for the batch-based system.

+ +

Here is our current setup:

+ +

Code: +We have 5 systems that are internally connected, and they pass data from one system to another. Currently the entire logic sits in Oracle as PL/SQL, Hadoop (Hive, Impala, Spark, etc.), and shell scripts.

+ +

Communications: +These systems share data either through cross-DB table grants, or they export the data to files and send them to each other.

+ +

Triggers: +These systems send triggers through custom workflow engines, or processes repeatedly poll for certain files.

+ +

Now coming to the main question: +Is it a good idea to convert these processes into microservices (code) and use Kafka (communication and trigger) so that they can share data and we can have a more distributed, well-choreographed process flow? Just to give an example: when one system finishes processing, it can announce in Kafka that data is available (this acts as both the trigger and the producer), and all consumer systems can start using that data in parallel instead of sending data in files or hitting databases individually.

+ +

Edit based on comments: +Looking for some insight on microservice-based architecture for a batch-based system, irrespective of the current setup - or assume we are building a brand-new system.

+ +

Any suggestions - links/blogs, tools, and technologies - would be greatly appreciated.

+",356985,,356985,,43874.12292,43875.94167,Microservice architecture pattern for Batch based system,,2,7,1,,,CC BY-SA 4.0, +404867,1,,,2/8/2020 8:24,,2,46,"

Let's say you have an app like Facebook, where each Post can be tagged with a Place. Now, the whole social app backend api (basically the whole client api) is built using nodejs + postgres. But the Places autocomplete is a custom API that is built in Golang for example.

+ +

Since the Places API is essentially based on a stateless DB (using postgres), because it just stores information that is not to be manipulated by users, it makes sense to put it in its own micro-service.

+ +

So, it kinds of makes sense to have the following architecture :

+ +

Service A - the main client api / backend. In this service I can Like a post, follow a Friend, and post new Posts.

+ +

Service B - will have all the Places information and API. That means it will have tables with cities and countries, and expose an API to retrieve this information.

+ +

So if a user posts a new Post in London, Service A will handle this action and create a record in its db inside the ""Posts"" table, where one of the columns will hold the id of London (which sits in Service B's table).

+ +

Now, next time I want to Get that post, I will only have the place's id, but obviously we would want to show the information of that place (city name, country name, etc...).

+ +

It means that for an endpoint like ""getPost(id = 2)"" in Service A, we will have to join in the Places tables from Service B. And that's the problem. Microservices should ideally not have inter-communication between them that would create unwanted traffic load. Frankly, I'm not even sure how - or even whether - that's technically feasible.

+ +

The alternatives would be to have a monorepo with project Places in Go, and the Main project in nodejs, with the same DB, or with 2 databases.

+ +

I am unable to weigh the pros and cons and the likelihood of those alternatives correctly, and would like to understand what is usually done in cases like this.

+ +

** P.S - Regardless of whether it would end up being Monorepo or micro-service architecture, I intend to use Docker + Kubernetes.

+",356993,,,,,43869.66736,Microservices for app with custom made API without imposing traffic load,,1,0,,,,CC BY-SA 4.0, +404876,1,404884,,2/8/2020 15:17,,3,184,"

So, in C#, I vaguely understand the historical difference between the two; a Task is a newer concept and the highest-level concurrency abstraction native C# offers.

+ +

AsyncResult is a bit more ambiguous. For example, the docs say BeginRead relies on the underlying stream for non-blocking behavior.

+ +

I know ReadAsync creates a Task which is handled by a Scheduler on the ThreadPool and is therefore guaranteed not to block, but then I read about C# delegates and Begin/EndInvoke, which allow ""any synchronous method to be asynchronous"".

+ +

Is this true? As the documentation around BeginRead seems to indicate it still blocks if the underlying stream blocks. Is this equivalent to creating a Task, just less developer friendly?

+ +
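
For reference, this is the contrast I'm asking about (a sketch assuming some stream s and a byte[] buffer, not code lifted from the docs):

+ +
// APM (IAsyncResult) pattern:
+IAsyncResult ar = s.BeginRead(buffer, 0, buffer.Length, null, null);
+// ... do other work ...
+int read = s.EndRead(ar); // blocks here if the read hasn't completed yet
+
+// TAP (Task) pattern:
+int read2 = await s.ReadAsync(buffer, 0, buffer.Length);
+
+ +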

Is a Task always preferred to an AsyncResult? Why or why not?

+ +

I surely hope this is the correct place to ask this.

+",355847,,,,,43869.74306,Difference Between AsyncResult and Task in c#,,1,0,,,,CC BY-SA 4.0, +404879,1,,,2/8/2020 15:36,,2,76,"

For a system, I have certain requirements

+ +
    +
  1. should be soft realtime.
  2. +
  3. should be able to handle lots of operations in parallel
  4. +
  5. should have ability to add, remove, alter features
  6. +
  7. should be able to increase/decrease memory, computation based on load
  8. +
+ +

So I have implemented an app design based on microkernel architecture with a mediator bus pattern. Here is the design

+ +

+ +

Design Philosophy:

+ +
    +
  • A worker is any piece of code running on a dedicated thread.
  • +
  • Each worker has exactly one message box
  • +
  • System worker has only one copy, starts/stops with the system only
  • +
  • User worker can be created/ destroyed or multiple copies can run.
  • +
  • Workers cannot share anything
  • +
  • Workers can only communicate through immutable messages
  • +
+ +

Kernel

+ +
    +
  1. Can start or stop a process.

  2. +
  3. Allocates message boxes

  4. +
  5. can run multiple copies of user workers

  6. +
+ +

Router

+ +
    +
  1. Can only move messages from one message box to another.

  2. +
  3. Undelivered messages are delivered to dead letter collector.

  4. +
+ +

Dead Letter Collector

+ +
    +
  1. Messages which have no receiver are received and dumped

  2. +
  3. Messages undelivered because there is no space in the target message box are kept and retried at a certain interval, at least 3 times.

  4. +
+ +

Features can be built as workers and can be loaded and unloaded at run time without modifying the source code (similar to plug-ins). If there is load on a certain feature, then the kernel can load multiple copies of that feature. If a feature is not available, the system does not crash; the messages are simply considered undelivered. New features can be added, and existing ones can be removed or altered.

+ +
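
To make the worker/message-box idea concrete, here is a minimal sketch of how I picture a single worker (my own illustration in Java; the real system adds routing and supervision on top):

+ +
import java.util.concurrent.*;
+
+// One worker: a dedicated thread draining its own bounded message box.
+class Worker implements Runnable {
+    private final BlockingQueue<String> messageBox = new LinkedBlockingQueue<>(1000);
+
+    boolean offer(String msg) {            // the router calls this; false -> dead letter
+        return messageBox.offer(msg);
+    }
+
+    public void run() {
+        try {
+            while (!Thread.currentThread().isInterrupted()) {
+                handle(messageBox.take()); // blocks until a message arrives
+            }
+        } catch (InterruptedException e) {
+            Thread.currentThread().interrupt(); // the kernel asked us to stop
+        }
+    }
+
+    private void handle(String msg) { /* feature-specific work */ }
+}
+
+ +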

A feature can be an API, a piece of database functionality, a computation, etc., and a feature can depend on other features too.

+ +
+ +

Based on the requirements and the proposed design, what are your views, and what can we improve upon?

+",67385,,67385,,43872.2,43872.2,Critical view of a particular application design,,1,0,1,,,CC BY-SA 4.0, +404887,1,,,2/8/2020 19:44,,-4,357,"

Having been a mobile developer for quite some time (iOS/Android), I've learnt that a local database is very rarely needed. Mobile applications are mobile by definition; they usually serve only as clients that access some REST API and do parsing/display work. In such cases the server and its database are the true and only source of truth, and the mobile app feels more like a shallow mirror of what the data actually is. Once the data is fetched, there is no guarantee that it hasn't immediately changed and is still valid and up-to-date by the time we show it to the user.

+ +

Whether it's a news feed or a list of chat messages, we should usually consider the data ""dirty"" when we visit the same screen again, or even after some time. We just fetch the data again, and that feels not only easier than adding a database layer, but also necessary to show the user the most current data.

+ +

For a long time I thought that making network requests for whatever resources we need is totally fine, since:

+ +
    +
  1. Data becomes outdated really quickly nowadays (your news feed gets new news basically every few minutes; your chats, especially group chats, get new messages on a very random basis and in random quantities; and if you cache some goods with their prices and available quantities, show that to the user, and an item happens to be out of stock or has a wrong price at checkout, you're in trouble!)
  2. +
  3. Let's not pretend it's easy and fast to implement database layer for an app. You're getting the whole range of things (and problems!) to think about, including, but not limited to schemas, mapping from OOP to SQL world (types, foreign keys, etc), migrations, multithreading, etc. All these things need quite some time to master to the level when you're comfortable around them. And I'm talking about both high level ORMs, CoreData, Realms, Room, and low level SQLite, FMDB and others. Though I should admit, high level tools actually solve many problems.
  4. +
  5. The cost of making network requests nowadays is so low it's basically free, everyone has fast internet connection and access to wi-fi spots (I don't even want to mention that parsing json was never a difficult task for ios/android device). We also have local http caches on devices (OKHTTP's cache, NSURLCache, cache-control), gzip, etc. The only real cost I see here is battery, but I'm not qualified enough to compare battery consumption of single network request/maintaining socket connection opened vs writing/reading to database on disk
  6. +
  7. In some situation where you can't get data from your api it's better to show network error than to silently show outdated data (tolerable for news feed, don't care, but definitely intolerable for taxi app where I can't tell if taxi is lagging or my app/network is lagging)
  8. +
+ +

It feels like integrating a database that mirrors the server's is very expensive in terms of development time. It feels more like a nice-to-have feature, something to be done only when you have (a lot of) spare resources. It feels justified only if the business actually needs it for some thorough, logical reason - or for data that by definition belongs to the user/device and should/could not be accessible to the outer world (basically the server and other users): for example, recent search requests (queries, not responses), tokens, preferences, etc.

+ +

Now I'll try to cite a few resources I saw and explain my concerns about them:

+ +
    +
  • Android's Jetpack guide raises an interesting question: what to do if an app needs the same piece of data on different screens (a user in this example). If one screen updates the data, then another screen wouldn't know unless some coordination mechanism is involved. For that reason Room is used. But it feels much easier to implement just an in-memory user cache - not only because it's faster (at least for me; this tutorial looks promising though), but because the user's data could become invalid anyway if we close the app and open it a few hours later. I honestly don't see any reason to cache it; just fetch it again from the server, not a big deal, right? In other words, why would I bother implementing a ""hard"", expensive-to-implement database cache if I could just save everything in an in-memory cache (basically variables, lists, etc.), possibly using reactive programming (or whatever, really) to be notified about changes? It's really hard to come up with an example where I could get an OOM exception because of cached users. In the end, a single .png picture occupies as much memory as maybe hundreds of user objects.

  • +
  • “Use a single source of truth: the database.” — Gwendal Roué. The author advises using the database as the SSoT and not relying too much on objects that were just queried. Why would I bother to rely on the database at all if I could just rely on my api backend in the same way?

  • +
  • Should data be stored to local database in Android when heavily using REST services? Feels relatable, actually

  • +
+ +

After all that's said, it just feels that:

+ +
    +
  1. We should store only data that's immutable by definition (or changes so rarely, and/or is so unimportant) that we could store it once and forever and never worry about it being outdated.

  2. +
  3. Developers are mistakenly(?) trying to rely on database as the single source of truth even though it isn't. It is merely a cache that's always potentially outdated. The only real source of truth is your server.

  4. +
  5. I'm probably missing some really huge point here. It is unexplored territory for me, and I humbly hope to hear from people who successfully use databases and could explain some benefits that aren't obvious to me.

  6. +
+",162046,,,,,43870.87917,Why local database should be the only source of truth for mobile application?,,3,6,,,,CC BY-SA 4.0, +404892,1,,,2/9/2020 0:21,,0,139,"

For example, if I want to write a daemon program in C# that uses anonymous pipes to communicate with programs written in another language, is this both possible and feasible?

+ +

I ask because I intend to write the code in a language that can cross-compile to several targets (Haxe). I intend to write an application which compiles code on the fly, runs it, and communicates with it over a period of time, and other targets compile faster than C# (any scripted language, for example). I can easily write common platform-agnostic client code in Haxe, but is this possible and feasible in C# using the AnonymousPipeServerStream?

+ +

I see that the client handle is simply a string - what is this string for? +How is it used to initialize the AnonymousPipeClientStream, and can I create a cross-platform abstraction in client programs that similarly initializes and consumes the client handle from the .NET server application?

+ +

Most languages support subprocessing by running a shell command and returning an object with handles for the stdin, stdout, and stderr streams of the program. This is, for example, a prerequisite to initializing an AnonymousPipeServerStream; the underlying program simply does something with a pipe handle. Presumably this is more performant than using System.Diagnostics.Process stdin/stdout?

+ +

How is this different than an anonymous pipe?
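
+ +

For context, this is how far I've got on the .NET side (a sketch based on my reading of System.IO.Pipes; the child-process launch is a hypothetical helper):

+ +
using System.IO.Pipes;
+
+// Server side: create the pipe, then pass the client handle string to the
+// child process, e.g. as a command-line argument.
+using var server = new AnonymousPipeServerStream(
+    PipeDirection.Out, HandleInheritability.Inheritable);
+string clientHandle = server.GetClientHandleAsString();
+// StartChildProcess(clientHandle);         // hypothetical helper
+server.DisposeLocalCopyOfClientHandle();    // once the child has inherited it
+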

+",355847,,1204,,43870.78681,44173.87847,Can a C# AnonymousPipeServerStream create a non .NET client?,,1,4,1,,,CC BY-SA 4.0, +404893,1,,,2/9/2020 1:44,,1,73,"

This is an initial thought I'm having on logging. Clearly, I am missing something about the whole picture, because surely someone has thought of this before.

+ +

PHP runs per request. That means it's self-contained, which means every request needs to open its own connections, etc. This is a given, but when you look at how logging is done, it's somewhat infuriating: every time an error happens, open a file, append to that file, and close it. Or, as far as I understand, you can open a file stream, but that still writes to the system every time logging happens.

+ +

So...why don't we just save all errors to memory, then, when the request is finished (every framework under the sun has actions, meaning that you can hook right at the end of a request), we just write these to the file?

+ +
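
Roughly what I have in mind, as a sketch in plain PHP (no framework assumed):

+ +
<?php
+$logBuffer = [];
+
+// During the request: just remember the message in memory.
+function bufferLog(string $message): void {
+    global $logBuffer;
+    $logBuffer[] = date('c') . ' ' . $message;
+}
+
+// At the very end of the request: one single write to disk.
+register_shutdown_function(function () use (&$logBuffer) {
+    if ($logBuffer) {
+        file_put_contents('/var/log/app.log', implode(""\n"", $logBuffer) . ""\n"", FILE_APPEND);
+    }
+});
+
+ +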

Naturally, this has an edge case where the request/system critically breaks down for some reason and the write can't happen - but, assuming that isn't a concern, why not just do memory-then-file writing?

+",353781,,,,,43870.07222,Why does the PHP community always rely on file-based logging instead of a combination with in-memory logging?,,0,4,1,,,CC BY-SA 4.0, +404895,1,404904,,2/9/2020 6:59,,3,343,"

I am designing a system that contains organizational hierarchy management. There are four roles in the system which are the user, admin, manager and head of procurement:

+ +

+ +

I am trying to let the admin of each organization create his own hierarchy by creating new roles using tasks and privileges predefined in the system.

+ +

For example an admin might add a user named head of market research between user and head of procurement. The admin will assign this new user tasks like approving purchase requests which are predefined in the system.

+ +

I am trying to prepare a class diagram for this system and I don't know how to represent the new roles to be created. +Any help is appreciated. Thank you in advance.

+",353744,,209774,,43870.53819,43870.53819,How can I provide a class diagram for a system that contains creation of an object that contains a combination of property?,,1,1,,,,CC BY-SA 4.0, +404900,1,404903,,2/9/2020 9:41,,1,105,"

What were the motivations for Java to use the @FunctionalInterface annotation, instead of creating a new FunctionalInterface modifier?

+ +
@FunctionalInterface
+public interface MyFunction {
+    TypeB myFunction(TypeA objectA);
+}
+
+ +

vs.

+ +
public FunctionalInterface MyFunction {
+    TypeB myFunction(TypeA objectA);
+}
+
+ +

This question is similar to Question about Override Annotation, but one comment from Michał Kosmulski - about enum being a new entity - also fits the functional interface case.

+",357037,,,,,43870.51944,"Why does java use an @functionalInterface annotation, instead of a modifier?",,1,1,,,,CC BY-SA 4.0, +404905,1,,,2/9/2020 12:58,,-1,276,"

I have a large hash, around 6 gigabytes, that I load into memory. On the laptop I currently develop on, it really does a number on my system, causing massive amounts of lag while I try to go about other things - say, debugging or browsing Stack Overflow. This would not be an issue if I could simply store the hash exclusively in my swap space, while leaving primary memory available for more important programs.

+ +

Is this sort of thing at all possible, insofar as GCC is concerned? Are there memory managers which can do this sort of thing? And if not, what are my best options that do not require me to set up a PostgreSQL server?

+ +

I really only need this while I am debugging. Eventually the program will sit on a server where this will not be necessary.

+",136084,,,,,43870.6,"In C++ and GCC on Linux, is it possible to allocate memory to your swap space instead of your RAM?",,1,8,,,,CC BY-SA 4.0, +404907,1,404910,,2/9/2020 15:24,,1,69,"

So I've got ""Services"" in my system that handle creating, updating, etc., of the data.

+ +

For example state_service.create() would create a new state in the database. This state belongs to a group.

+ +

The problem is, the group needs to be created first, before the state can be added.

+ +

Now, I can always call group_service.create() first, then create the state after, like this:

+ +
def test_state_service_create(self):
+    self.group_service.create()
+    state_id = self.state_service.create()
+    self.assertTrue(state_id)
+
+ +

However, I am not sure if this is bad practice, since the test now relies on group_service.create() working correctly. The alternative is to create the group manually myself, such as:

+ +
def test_state_service_create(self):
+    self.db.groups.insert_one(self.mock_groups[0])
+    state_id = self.state_service.create()
+    self.assertTrue(state_id)
+
+ +

But this would just mean that I'd need to update the mock data should the schema change.

+ +

Which is the proper practice here, or is it something else entirely?
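
+ +

A middle ground I also considered is moving the arrange step into the fixture, so each test states only its own intent - a sketch:

+ +
def setUp(self):
+    # Arrange: every test in this class needs a group to exist first.
+    self.group_id = self.group_service.create()
+
+def test_state_service_create(self):
+    state_id = self.state_service.create()
+    self.assertTrue(state_id)
+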

+",344512,,,,,43870.70347,Can you call functions that are not being tested as part of the unit test?,,1,3,,,,CC BY-SA 4.0, +404908,1,,,2/9/2020 16:34,,-1,72,"

I am just curious to know what the best practice is for sharing common functionality or code among micro-services. For example, there is one micro-service which is responsible for the operations related to the user entity. Now, if micro-service X needs some data related to the user entity, how can this be achieved? Coding some user-related logic into X is not a good idea, as it will create redundancy.

+ +

What I know so far is,

+ +
    +
  1. We can use the facade design pattern, in which we pack common functionality into some library (a JAR, for instance) and share it among the different micro-services. But this constrains the technologies that can be used for the micro-services; for example, a JAR can mostly be used only with Java projects.
  2. +
  3. We can use some queuing system to send messages from one micro-service to another. But I am not sure exactly how we can achieve this, or how reliable and compatible it is.
  4. +
  5. We can directly use HTTP REST calls between micro-services for communication. But again this will add latency in response time, which is bad.
  6. +
+ +

So which is the best way for micro-service internal communication?

+",237932,,,,,43870.88681,What is the best way for sharing common functionality or reusing existing code in micro-service architecture?,,1,1,,,,CC BY-SA 4.0, +404909,1,,,2/9/2020 16:43,,2,232,"

I am new to using PHP Frameworks. I decided to try out Laravel. In my project, I needed to write a search function which will search through a few entities based on some keywords, then do a UNION and return the results. So the SQL will look something like this:

+ +
SELECT pages.id,pages.updated_at,pages.created_at,page_translations.title,page_translations.description
+FROM pages
+INNER JOIN page_translations ON page_translations.page_id = pages.id
+WHERE pages.deleted_at IS NULL
+AND page_translations.deleted_at IS NULL
+AND pages.published = 1
+AND page_translations.locale = @lang
+AND page_translations.active = 1
+AND ( page_translations.title LIKE '%@keyword%'
+    OR page_translations.description LIKE '%@keyword%'
+    OR pages.id IN (
+        SELECT blockable_id
+        FROM blocks
+        WHERE blockable_type = 'App\\\\Models\\\\Pages'
+        AND content LIKE '%@keyword%'
+    )
+)
+UNION
+SELECT articles.id,articles.updated_at,articles.created_at,article_translations.title,article_translations.description
+FROM articles
+INNER JOIN article_translations ON article_translations.article_id = articles.id
+WHERE articles.deleted_at IS NULL
+AND article_translations.deleted_at IS NULL
+AND articles.published = 1
+AND article_translations.locale = @lang
+AND article_translations.active = 1
+AND ( article_translations.title LIKE '%@keyword%'
+    OR article_translations.description LIKE '%@keyword%'
+    OR articles.id IN (
+        SELECT blockable_id
+        FROM blocks
+        WHERE blockable_type = 'App\\\\Models\\\\Articles'
+        AND content LIKE '%@keyword%'
+    )
+)
+ORDER BY updated_at DESC
+
+ +

I translated this query to use Laravel's Eloquent approach, and it looked something like this:

+ +
$pages = DB::table('pages')
+  ->select(explode(',','pages.id,pages.updated_at,pages.created_at,page_translations.title,page_translations.description'))
+  ->selectSub(function($query){
+    $query->selectRaw(""'pages'"");
+  },'content_type')
+  ->join('page_translations','page_translations.page_id','=','pages.id')
+  ->whereNull('pages.deleted_at')
+  ->whereNull('page_translations.deleted_at')
+  ->where([
+    ['pages.published','=',1],
+    ['page_translations.locale','=',$lang],
+    ['page_translations.active','=',1],
+  ])
+  ->where(function($query) use ($keywords) {
+    $query->where('page_translations.title','LIKE','%'.$keywords.'%')
+      ->orWhere('page_translations.description','LIKE','%'.$keywords.'%')
+      ->orWhereIn('pages.id',function($subquery) use ($keywords) {
+        $subquery->select('blockable_id')
+          ->from('blocks')
+          ->where('blockable_type','=','App\\\\Models\\\\Page')
+          ->where(function($blockquery) use ($keywords) {
+            $blockquery->where('content','LIKE','%'.$keywords.'%');
+          });
+      });
+  });
+$articles = DB::table('articles')
+  ->select(explode(',','articles.id,articles.updated_at,articles.created_at,article_translations.title,article_translations.description'))
+  ->selectSub(function($query){
+    $query->selectRaw(""'articles'"");
+  },'content_type')
+  ->join('article_translations','article_translations.article_id','=','articles.id')
+  ->whereNull('articles.deleted_at')
+  ->whereNull('article_translations.deleted_at')
+  ->where([
+    ['articles.published','=',1],
+    ['article_translations.locale','=',$lang],
+    ['article_translations.active','=',1],
+  ])
+  ->where(function($query) use ($keywords) {
+    $query->where('article_translations.title','LIKE','%'.$keywords.'%')
+      ->orWhere('article_translations.description','LIKE','%'.$keywords.'%')
+      ->orWhereIn('articles.id',function($subquery) use ($keywords) {
+        $subquery->select('blockable_id')
+          ->from('blocks')
+          ->where('blockable_type','=','App\\\\Models\\\\Article')
+          ->where(function($blockquery) use ($keywords) {
+            $blockquery->where('content','LIKE','%'.$keywords.'%');
+          });
+      });
+  })
+  ->union($pages)
+  ->orderBy('content_type','desc')
+  ->orderBy('updated_at','desc')
+  ->get();
+
+ +

To me, the raw SQL approach is much more legible. And if my query had a few more subqueries, the SQL approach would still remain quite legible to me.

+ +

So my question is: when developing with Laravel in a small team environment (2 other backend developers), is it best practice never to write raw SQL? Is it the Laravel convention to always use Eloquent query builder methods unless there are exceptional circumstances, such as bugs with Eloquent, performance issues, etc.?

+",46872,,,,,43870.69653,Is it bad practice to use RAW SQL when Laravel Eloquent offers query building alternatives?,,0,2,,,,CC BY-SA 4.0, +404911,1,,,2/9/2020 17:20,,1,43,"

I'm used to comments in Git and Mercurial repositories:

+ +
    +
  • Commit comments, which may involve multiple files over the whole repository - has a comment. Commits without comments are possible, but rare (in my experience).
  • +
  • Branch descriptions: These are the opposite in terms of use: Many people don't even know about them, and they're rarely used (in my experience).
  • +
+ +

I ""know"" what to write in commit comments (and branch descriptions if I use them). Various online tools even make assumptions about their content, e.g. BitBucket and GitHub which close issues for you, if you comment fixes #123 on the fixing commit.

+ +

But when working with ClearCase - which I'm new to - I find myself confused. You see, ClearCase versioning is per file; there are no all-repository commits; and there are ""views"", which are complex selections of versions for each of a repository's files.

+ +

There are also more types of comments than I'm used to:

+ +
    +
  • View comments - added to a newly-created view
  • +
  • Branch type comments - files can have revisions in any of various named branch types; and when you create a new possible branch type, that gets a comment too
  • +
  • Check-out comments - per file (or group of files) that's checked out
  • +
  • Check-in comments - per file (or group of files) that's checked in
  • +
+ +

My question: Can you give a rule of thumb for what kind of text one should place in which kind of ClearCase comments? In a typical workflow?

+",63497,,63497,,43870.83333,43870.83333,What to put in which kind of comment in ClearCase?,,0,4,1,,,CC BY-SA 4.0, +404913,1,404924,,2/9/2020 17:40,,4,136,"

As a long-time perpetrator of monolithic systems (oh, the shame!), I recently had my eyes opened to the concept of microservices. I've now read lots of online articles, and enjoyed the piecemeal plan of the Strangler Pattern to break a monolith into microservices. I can see how especially with the advent of technologies like Kubernetes to manage all these disparate parts, it makes a lot of sense to have lots of small services that can be independently scaled and maintained by different development teams.

+ +

What's getting me stuck, though, is that everything I've found on the Web talks in abstract generalities, with very little actual code explaining how to do anything beyond CRUD in a microservice. Like, let's take the scenario of an application that has customers and products. There's one piece of the system that manages the sales and billing, and another piece that manages the tech support. What's common between them is Customer and Product. The sales side also has an Order table, while tech support has a SupportTicket.

+ +

So, what's the ""correct"" microservice architecture here, if we have to design this from scratch? Should we have separate microservices for each table? Or are ""Sales"" and ""TechSupport"" separate services? If the latter, who gets responsibility for maintaining Customers and Products? If the former, what happens in the event that Tech Support gets a ticket that is billable, and they won't start working on it until they get an Order linked to their support ticket that shows it has been paid? What if business rules require that there needs to be a transaction boundary, so that the support ticket does not get created unless there is a matching, paid order?

+ +

I guess my broader question is that invariably, in real-world systems, things have dependencies on other things, and sometimes those dependencies are circular. How is it possible to have all these neat, lean microservices, that do nothing other than worry about their own bailiwick, when the reality is that it's very rare to have the luxury of not caring about anything else in the overall system?

+",1715,,,,,43870.86042,Microservices: how to draw boundaries?,,1,3,,,,CC BY-SA 4.0, +404916,1,,,2/9/2020 18:58,,4,309,"

In his book ""Concurrency in C# Cookbook"", Stephen Cleary writes:

+ +
+

If you can, try to organize your code along modern design guidelines, like Ports and Adapters (Hexagonal Architecture), which separate your business logic from side effects such as I/O. If you can get into that situation, then there's no need to expose both sync and async APIs for anything; your business logic would always be sync and the I/O would always be async.

+
+ +

However, I don't find that to be true in my design, and I am wondering how it's supposed to work so that I can have a sync domain and async infrastructure code.

+ +

For example, given the following:

+ +
public interface IContentRepository
+{
+  Content GetContent (Guid contentId);
+}
+
+public class MyDomainObject
+{
+  public void Foo (IContentRepository repo)
+  {
+    var content = repo.GetContent(someId);
+    ...
+  }
+}
+
+public class ContentRepository : IContentRepository
+{
+  // Uh oh, either...
+  // 1. implement it sync here or
+  // 2. use sync-over-async or
+  // 3. make IContentRepository return Task<Content> (and live with an async domain)
+}
+
+ +

How is that supposed to be designed such that my domain can stay sync-only and still make use of async infrastructure code? Is that even possible? Have I misunderstood Stephen?
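
+ +

The only shape I can come up with myself is to pull the I/O up into an async application-layer use case and hand plain data to the domain (a sketch with hypothetical names):

+ +
// Application layer: async, performs the I/O up front.
+public async Task FooUseCaseAsync(Guid contentId)
+{
+    Content content = await _contentRepository.GetContentAsync(contentId);
+    _myDomainObject.Foo(content); // the domain stays synchronous and pure
+}
+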

+",216923,,,,,43939.76042,How does an hexagonal architecture help with async/await async-over-sync?,,3,2,,,,CC BY-SA 4.0, +404922,1,,,2/9/2020 20:08,,1,116,"

I once saw a universal download for an app. It was said to work on Windows, macOS, and Linux. Suppose the exact same app is designed to work on three different platforms without changing any code - for example, Java can be made to run cross-platform. Are there performance benefits to creating platform-specific apps rather than creating a universal app, where the download is designed to run on all platforms?

+",348453,,348453,,43870.87292,43870.87292,Are universal apps any less efficient than platform-specific apps?,,1,0,,,,CC BY-SA 4.0, +404928,1,,,2/9/2020 22:09,,0,79,"

Let's assume we have multiple microservice Spring Boot (2.x) applications (Spring MVC with Maven build management) which are accessed by clients. All of the microservices expose REST APIs, and let's assume the stack comprises multiple layers, e.g. front-end, integration, business, and external organization services (these microservices sit in the business layer, and users are authenticated through another system to gain access to the front-end layer, which in turn sends the requests downstream to the integration layer, and so on).

+ +

Previously, these applications were in a monolithic application which was deployed to some enterprise application server, e.g. JBoss EAP or WebSphere, so keystore/truststore management was handled outside of the application layer, by these application servers.

+ +

In the case where we deploy them via the embedded Spring Boot Tomcat, if we need SSL/TLS authentication by their clients, we need to manage these keystores/truststores continuously as the number of microservices grows.

+ +

Worse, if we have multiple environments, e.g. local, dev, test, performance, staging, production, etc., then we need separate certificates (due to CNs) and keystore/truststore management for each environment (so the problem gets multiplied).

+ +

How does one go about designing one's Spring Boot application to handle keystores/truststores across various profiles?

+ +

Note: We might get away with a single client that hits all these business-layer microservices, a.k.a. an integration application - but then again, this could change.

+",334752,,,,,43870.92292,Certificate Management in Multiple Microservice Spring Boot Servers,,0,0,,,,CC BY-SA 4.0, +404930,1,404934,,2/10/2020 0:19,,5,399,"

I have done quite a bit of reading about microservices but I have so far never designed any. I am working on doing that now and I am wondering which approach is correct (see the image below).

+ +

Basically, I've read that each microservice should own its data and only expose it via an API, and that totally makes sense. However, that doesn't in itself say how specialized microservices should be. When breaking down a monolith, where does one stop? My intuition is that many online examples go too far.

+ +

In particular, breaking tightly coupled relations - both logically and functionally - like Customers/Accounts/AccountTransactions or Users/Posts/Comments into separate microservices, while possible, just feels very wrong to me from an architectural standpoint. This data is, and is meant to be, interdependent; why try to force it apart? Am I missing something?

+ +

Apart from data ownership and cohabitation, another conundrum I have is about the proper mapping between microservices and endpoints. If the way to access microservices is chosen to be via a RESTful API, does it have to be one microservice per endpoint or can one microservice have several endpoints? What's the best practice? Could I have one microservice with three endpoints and one database serving all three?

+ +

Does a microservice have a subdivision, and if so, what is it called? A method? This is related to the previous question: if one microservice can do several things, what are the things doing those things called?

+ +

+",356970,,,,,43875.84931,Which of these microservice designs is correct?,,3,1,2,,,CC BY-SA 4.0, +404931,1,,,2/10/2020 2:07,,0,21,"

I have a library that depends on a PDB being available for a file present on the running OS. This PDB is hosted officially on the Microsoft servers, and the version needed will change between OS updates (and hence will need to be downloaded again).

+ +

Currently, when the specific library object that depends on this PDB is constructed, it checks the cache directory to see if the correct version has been downloaded, otherwise it attempts to download this dynamically before continuing execution.

+ +

I recognise that this work shouldn't really be done from a constructor, however, I can't really think of another way to do this, as the library cannot run without the PDB being present.

+ +

What would be a better way to do this?

+",357080,,,,,43871.08819,What would be the best way to dynamically download an external dependency at runtime?,,0,0,,43871.28056,,CC BY-SA 4.0, +404939,1,,,2/10/2020 12:00,,4,430,"

As a practical example, imagine a Gripper class which represents a robotic gripper in a simulation. +Gripper has a TryGrip method, which checks if there's a GrippableItem in the correct position (within the gripper's rectangle). If there is, the gripper takes control of the item until it is dropped:

+ +
class GripperItemLocator  //Struggling to find a good name
+{
+public:
+
+    virtual ~GripperItemLocator() = default;
+
+    virtual std::unique_ptr<GrippableItem>
+    FindItemInRange(Rect rect) = 0;
+};
+
+class Gripper
+{
+public:
+
+    bool
+    TryGrip(GripperItemLocator& itemLocator)
+    {
+        auto item = itemLocator.FindItemInRange(CalcCurrentRect());
+        if (item)
+        {
+            TakeControlOfItem(std::move(item));
+            return true;
+        }
+        return false;
+    }
+
+private:
+
+    Rect
+    CalcCurrentRect() const;
+
+    void
+    TakeControlOfItem(std::unique_ptr<GrippableItem> item);
+};
+
+ +

In accordance with the dependency inversion principle, we have a GripperItemLocator abstraction in order to decouple the gripper from the rest of the world. If where and how GrippableItems are located in our program changes, this does not concern the gripper; only the class(es) that implement GripperItemLocator must change.

+ +

I'm wondering if we really need interface classes for this? Say we just have a regular class doing the same job:

+ +
class GripperItemLocator  
+{
+public:
+
+    std::unique_ptr<GrippableItem>
+    FindItemInRange(Rect rect)
+    {
+        //Immediate implementation
+    }   
+};
+
+ +

I realize this no longer has the ability to select different implementations at run-time, but say we're 100% sure there won't ever be a need for this, and we're only interested in the decoupling. Do we lose anything else that I'm overlooking, or is this just as good?
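
+ +

(For completeness: if swapping implementations at compile time, e.g. for tests, were ever wanted without virtual dispatch, a template keeps the same decoupling. A sketch, reusing the private members from above:)

+ +
class Gripper
+{
+public:
+
+    template <typename ItemLocator>  // any type providing FindItemInRange(Rect)
+    bool
+    TryGrip(ItemLocator& itemLocator)
+    {
+        auto item = itemLocator.FindItemInRange(CalcCurrentRect());
+        if (item)
+        {
+            TakeControlOfItem(std::move(item));
+            return true;
+        }
+        return false;
+    }
+
+    // CalcCurrentRect() / TakeControlOfItem() as before
+};
+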

+",341699,,,,,43871.94722,Do we really need interface classes for the dependency inversion principle?,,4,7,,,,CC BY-SA 4.0, +404945,1,404966,,2/10/2020 14:18,,2,203,"

When implementing GET on a resource, it makes sense to respond with 404 if the resource cannot be found. For the POST and PUT verbs it is a little more complicated.

+ +

To respond with 404 in that case I would need to do some kind of pre-flight check: first do a SELECT in the database, check if there is a matching row, and if not, respond with 404. But due to concurrent users there is no guarantee that the following UPDATE SQL command will affect any rows.

+ +
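
For what it's worth, one common alternative to the pre-flight SELECT is to just issue the UPDATE and inspect the affected-row count, turning zero rows into a 404. A sketch using the Python DB-API (the table, columns and status handling are assumptions):

+ +
def update_item(conn, item_id, new_name):
+    cur = conn.cursor()
+    cur.execute('UPDATE items SET name = %s WHERE id = %s', (new_name, item_id))
+    conn.commit()
+    if cur.rowcount == 0:
+        return 404  # nothing matched, and no separate SELECT was needed
+    return 200
+
+ +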

Is it good practice to implement a pre-flight check for resources in a REST API even if there is no guarantee that the request will be processed successfully anyway?

+",85146,,,,,43871.91597,Should POST and PUT handlers do pre flight check for 404?,,2,5,,,,CC BY-SA 4.0, +404946,1,,,2/10/2020 14:49,,1,31,"

I am planning to develop a subscription-based Video on Demand Android app.

+ +

I am planning to host the videos on a VPS with unlimited free bandwidth. It's said that nothing beats nginx for static hosting, so I plan to serve the videos through nginx with this server configuration:

+ +
- 32 vCPUs @2.4 GHz
+- 60 GB Ram
+- 1800 GB SSD
+
+ +

costing around $200 a month. The VPS will run just nginx; the database is to be hosted on Firebase! Also, could anyone tell me how many concurrent streaming user devices this configuration could handle?

+ +
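
(As a side note on the nginx part, a minimal block for serving MP4 files might look like the sketch below; the path is an assumption, and the mp4 directive requires nginx to be built with ngx_http_mp4_module:)

+ +
location /videos/ {
+    root /srv/media;   # hypothetical path
+    mp4;               # enables seeking/pseudo-streaming for MP4
+    sendfile on;
+    tcp_nopush on;
+}
+
+ +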

So my question is: is it possible to DRM encode/encrypt locally and serve through the VPS so that only the Widevine license fee is paid? That fee is $0.20/100 licenses, as listed here.

+ +

Update:

+ +

This article describes how you can use Media Services to deliver PlayReady and/or Widevine licenses but do the rest with your on-premises servers.

+",304121,,304121,,43871.63681,43871.63681,Can an Azure DRM protected video be delivered from a VPS?,,0,0,1,,,CC BY-SA 4.0, +404949,1,404957,,2/10/2020 15:37,,4,766,"

Is there a more syntactically beautiful, or simply better, way to write the following (without major abstraction)?

+ +
 if (usart_error.CRCError == true || usart_error.DMATransferError == true ||
+     usart_error.FramingError == true || usart_error.NoiseError == true ||
+     usart_error.OverrunError == true || usart_error.ParityError == true )
+ {
+    //...
+ }
+
+ +

I am using only the OOP aspects of C++ for my embedded system, if that makes any difference.

+ +

Note: usart_error is my class so I can adjust it if needed.

+ +

It has been suggested to me to use a bit mask and check whether any bit is 1, etc., but this abstracts it too far for my liking.
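
+ +

(For reference, that bit-mask suggestion usually boils down to something like the following sketch; the names are mine, not from any library:)

+ +
#include <cstdint>
+
+struct UsartError
+{
+    static constexpr std::uint8_t CRC     = 1u << 0;
+    static constexpr std::uint8_t DMA     = 1u << 1;
+    static constexpr std::uint8_t Framing = 1u << 2;
+    static constexpr std::uint8_t Noise   = 1u << 3;
+    static constexpr std::uint8_t Overrun = 1u << 4;
+    static constexpr std::uint8_t Parity  = 1u << 5;
+
+    std::uint8_t flags = 0;  // one bit per error source
+
+    bool Any() const { return flags != 0; }
+};
+
+// usage: if (usart_error.Any()) { ... }
+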

+",222040,,,,,43875.25556,How to deal with a lot of conditions in If statement in an elegant manner,,5,10,1,,,CC BY-SA 4.0, +404953,1,,,2/10/2020 16:22,,1,66,"

When using data and domain models, where does validation take place? In both, or just one of them?

+ +

For example:

+ +
class UsersDB():
+    def create(self, user_data):
+        # Create user here
+        return insert_status
+
+    def confirm_user(self, token):
+        # token_date is assumed to be looked up from the stored token
+        if token_date < today_start:
+            # Token expired, return False
+            return False
+
+        if not self.collection.exists({'token':token}):
+            # Token doesn't exist
+            return False
+
+        # Confirm user account
+        self.collection.update_one({}, {})
+
+class UserService():
+    def __init__(self):
+         self.users_db = UsersDB()
+
+    def create(self, user_data):
+        if self.users_db.exists(user_data['email']):
+              self._set_error(status=409, error='This user already exists.')
+              return False
+
+        if self.users_db.create(user_data):
+              EmailService().send_registration_email(user_data)
+              AuditService().add_event('registration', ....)
+        else:
+              self._set_error(status=500, message='Unable to create user.')
+              return False
+
+ +

Now, in UsersDB.confirm_user(), if this method fails, it's impossible to determine why the method call failed, be it due to a non-existent token or an expired token. However, this is validation that I don't necessarily want mixed in with the business logic. I'd prefer to keep validation within the UsersDB wrapper.

+ +
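
(To make that concrete, here is one hypothetical way the failure reasons could be surfaced without moving the checks out of UsersDB; the enum and names are illustrative only:)

+ +
from enum import Enum
+
+class ConfirmResult(Enum):
+    OK = 'ok'
+    TOKEN_EXPIRED = 'token_expired'
+    TOKEN_NOT_FOUND = 'token_not_found'
+
+class UsersDB():
+    def confirm_user(self, token):
+        if token_date < today_start:  # as in the sketch above
+            return ConfirmResult.TOKEN_EXPIRED
+        if not self.collection.exists({'token': token}):
+            return ConfirmResult.TOKEN_NOT_FOUND
+        self.collection.update_one({}, {})
+        return ConfirmResult.OK
+
+ +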

Similarly, if I wanted to do some more advanced validation on the user_data parameter passed through to UserService.create(), then I'd need to do it at the UserService level in order to get any meaningful feedback to the user.

+ +

Should I just separate validation out of the UsersDB object and do the validation within the UserService methods before passing over to the UsersDB object, or is there a problem with the design that needs addressing?

+ +

Ideally, I'd like to keep the validation out of UserService so that it only handles calls to other services, but I'm not sure if that's a good idea.

+",344512,,344512,,43871.70972,43872.40903,"When using data and domain models, where should validation take place? And how should errors be fed back to the user?",,2,1,,,,CC BY-SA 4.0, +404954,1,,,2/10/2020 16:29,,1,54,"

Please forgive me if I am mixing up the terminology here, I'm a bit unfamiliar with it. I wanted to find a way to support extensibility in a web application; I wanted a web application that was similar to Microsoft Azure DevOps, where in the web application out of the box is fully functional, but also extensible in a controlled manner with contribution points.

+ +

I have been exploring Angular, React, and Vue.js options as the front end framework for the web application. At first, I was going to try to simulate how Microsoft Azure DevOps handles extensions by using a mix of iframe and XDM (cross domain messaging). This was a bit too complicated, so during my exploration with Angular I came across the concept of Angular Elements.

+ +

I found a git repository (https://github.com/manfredsteyer/angular-elements-dashboard) which demonstrates how an external js file could be used to load html into your page dynamically. I then found some blogs and some repositories which demonstrated how to create a single, bundled JS file external to the main site. For example, https://github.com/kito99/micro-frontends-demo.

+ +

I combined both of these examples and had my primary Angular application run on http://localhost:4200 and I created an express app to host the external Js files on http://localhost:3000. This worked perfectly.

+ +
+ +

I have not been able to find anything similar for React or Vue.js, so perhaps I am mixing up the wrong terminology? Is this a possibility in React?

+",357138,,,,,43871.68681,Does the React (and Vue.js) frameworks support extensibility?,,0,2,,,,CC BY-SA 4.0, +404955,1,,,2/10/2020 16:35,,1,63,"

I'm designing an API for a website where users can share images and all other users can see these images. The current idea is to have a path +/user/name/images/xyz to GET a user's image and then a path images/abc to GET any public image ""abc."" I'm worried that there is a design flaw if I'm separating image retrieval across multiple endpoints. What's the most proper way to design this?

+",337654,,,,,43882.85972,REST: Seperate API endpoints for user images and all images?,,1,2,,,,CC BY-SA 4.0, +404964,1,,,2/10/2020 21:13,,4,389,"

The team I am on is using the git implementation in Azure DevOps. We have been using something close to the GitFlow model: story branches instead of feature branches, plus the long-lived develop and master branches.

+ +

The develop branch is deployed to the development environment, and the master branch is deployed to the QA (and later to production) environments.

+ +

When a developer completes coding for the story, a pull request is created into develop. Assuming all goes well with testing, the developer would then create a pull request from develop to master.

+ +

My question comes in when there are multiple developers working in the same repository. Developer 'A' completes a PR into develop, but that code is not QA-ready (not fully tested, bugs, etc). Developer 'B' then completes a different PR into develop, and would like to get that code into master. We currently have developer 'B' cherry-pick the PR into master, but that leads to some interesting history.

+ +
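
(Concretely, that cherry-pick step looks something like the sketch below; the branch names are ours, and the SHA is whatever the PR's merge commit was:)

+ +
git checkout master
+git pull origin master
+git cherry-pick -m 1 <merge-commit-sha>   # -m 1 replays the merge relative to its first parent
+git push origin master
+
+ +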

We would like to be able to merge individual stories to master as the stories are ready. Is there a different workflow that would better support this?

+ +

I've been through a number of articles today, GitLab flow seems to be the closest, but seems to want to take the latest master commit into production.

+ +

Recent Edits

+ +

(I'm adding these edits to try to cover some of the recent comments, hope that's acceptable)

+ +

Our current workflow is that developers work on story (or potentially bug) branches created from 'develop'. When that code is ready, a PR is submitted to merge the code into 'develop'. That is then deployed to the development environment for initial testing. Assuming that testing goes well, the developer performs a cherry-pick to create a PR into 'master'. The 'master' branch is deployed to the QA environment, and later to production.

+ +

This is wrong on a number of levels; 'stop cherry-picking', for one.

+ +

It is up to the individual developer to say when code is ready to go to QA. Code is deployed to QA a few times a day, and to production a few times a week.

+ +

I am hoping at some point we move away from the 'develop' branch, and just move to 'master'-based development with story branches. I'm afraid we're not there yet.

+",357162,,357162,,43872.62014,43873.82708,Git workflow for somewhat large team,,3,6,1,,,CC BY-SA 4.0, +404965,1,,,2/10/2020 21:57,,1,93,"

I need to generate the next number for each file type my Logic App is generating. My Logic App is doing some translations between a WMS and a customer's ERP. The issue is that the customer requires us to generate and maintain a sequence number to be placed in the files. The full list of requirements is:

+ +
  1. Generate a unique sequence number. No two calls, for the same file type, should get the same result.
  2. The number sequence is maintained separately by file type. There are 3 file types, so each would have its own sequence.
  3. Should be able to handle spurts of 10 calls a second for a few seconds at a time, which can be the same file type.
  4. Data should be stored in Azure.
+ +

I'm at the point where I know I'll need to write an Azure Function to do this. It will accept one parameter, ""fileType"", to determine which sequence number to return. I also know I'll need to store the current number ""somewhere"" and that the Azure Function itself, or the database/file (through locking), needs to be pessimistic in nature. From a cost perspective, I don't know which Azure storage option would work best for this scenario, nor which lock/release mechanism to use with that technology choice. To me, using a database to store three numbers feels like overkill.
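
+ +

(For what it's worth, if a small Azure SQL database were acceptable after all, the whole thing collapses to one atomic statement; a sketch, where FileSequence(FileType, CurrentValue) is a hypothetical 3-row table:)

+ +
UPDATE dbo.FileSequence
+SET CurrentValue = CurrentValue + 1
+OUTPUT inserted.CurrentValue
+WHERE FileType = @fileType;
+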

+",356787,,,,,43929.76458,Logic App next number,,2,1,,,,CC BY-SA 4.0, +404970,1,,,2/10/2020 22:40,,3,257,"

Problem Background

+ +

Recently, I joined a government agency as a software engineer/scientist/analyst. Previously, I worked in the software industry, gaining 3 years of software engineering experience at my previous job (on top of about 7 years in computational science/scientific computing). My current job is to come up with a strategy for modernizing a legacy scientific program.

+ +

The scientific program to modernize is a large legacy computational system that basically does mathematical optimization. Development started in the 1990s and has not kept up with best practices, unfortunately. It was/is written by scientists and analysts.

+ +

The main component of the system is a Fortran-based (various versions starting from 90, some newer versions incorporated, compiling with 2018 compiler) program that does the optimization. The program consists of 400K lines of Fortran code, 20K lines of shell scripts, and 60K lines of external math solver code. There is no test suite, hence the legacy label. The program can be thought of as a dozen modules that describe a particular physical component's behavior in the optimization. The general flow of the Fortran program is described in a main routine, where these dozen modules are called sequentially. The main routine does some other data orchestration and I/O as well. There is some interface to commercial products and optimization solvers, probably through a home grown Fortran wrapper. One of the biggest issues IMO is the use of global variables - both main and the modules have access to these globals, so change to the state can be made from anywhere (see my specific question).

+ +

There is a lot of home grown code for sub-systems or utilities that manage the main Fortran program, written mainly as shell scripts. These sub-systems include:

+ +
  • a queuing system that manages the executions of the main Fortran program on internal on-premises Windows servers,
  • a post-processor that converts the Fortran UNF files to CSV and Excel format,
  • a custom visualization package written in Visual Basic that plots the results of the Fortran program,
  • version control utilities as wrappers around the RCS VCS,
  • a compiler utility that wraps the Fortran compilation.
+ +

Those are the main sub-systems or utilities necessary to work with the Fortran program and its input/output, but there are loads of other Fortran programs and shell scripts that do longer-term things like server space management and license management.

+ +

My immediate team is responsible for the Fortran code execution and integration with other modules (so not all 400K lines of Fortran are in our scope, just maybe 10-20%; the rest is with the other groups responsible for the dozen modules, which introduces some organizational pains since we have no control over their code). My team consists of me and another software developer, both mid-level software developers converted from scientific computing. A junior software developer with a traditional background in software and CS is joining shortly. Our senior software developer (one of the original developers of the entire system) is retiring in 1 month, and we are in the process of trying to find a replacement.

+ +

Problem

+ +

My question is: what are the components and sequence of the modernization plan/strategy that I should consider? The modernization is basically the process of moving from a legacy setup to a more modern one, both technically (e.g., architecture, frameworks) and organizationally (e.g., agile process management for development).

+ +

Proposed strategy

+ +

Currently, at a high level, my plan is to:

+ +
  1. assess the extent of home-grown code for systems that are not part of the main Fortran program;
  2. replace each of these home-grown solutions with a best-practice open source solution, so we maintain as little code as possible;
    • the current order is modern VCS (Git/GitLab), then the queuing system, then the viz package, but the order will be determined by how much code there is per sub-system.
  3. with the remainder of the code - hopefully just the main Fortran program and not some vital sub-system that we cannot find an open source solution for - capture current behavior with characterization tests (see the sketch after this list);
  4. refactor (update Fortran, port all functionality that doesn't do number crunching from Fortran to Python, etc.), make sure the tests pass, repeat;
  5. ""futurize"" the code by updating the architecture to enable cloud compute (avoiding vendor lock-in), using Docker for containerization.
+ +
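
As an illustration of the characterization tests in step 3, they can be as blunt as running the binary on a pinned input and diffing against a recorded output. A sketch (pytest-style; every path and name here is an assumption):

+ +
import subprocess
+from pathlib import Path
+
+def test_optimizer_reproduces_golden_output():
+    # run the legacy executable on a frozen input deck (hypothetical paths)
+    result = subprocess.run(
+        ['./optimizer', 'tests/fixtures/case01.inp'],
+        capture_output=True, text=True, check=True,
+    )
+    golden = Path('tests/golden/case01.out').read_text()
+    # lock in current behavior, whatever it is
+    assert result.stdout == golden
+
+ +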

Research

+ +

I've looked at some great discussion of similar topics:

+ + + +

But notice that some of these questions and answers are almost 10 years old, so I wonder if there are better approaches available. Also, I am dealing with a procedural scientific computing environment rather than a heavy OOP business app, so perhaps the principles mentioned in the above Stack Exchange links don't carry over as well. I am also not a senior software engineer, so I'm not sure if I am even using the right terms in my searches and in formulating this question. The complication of the scripts and utilities in the system means this effort is not just about porting or refactoring Fortran, which makes this situation and problem unique.

+ +

Thanks!

+",175040,,175040,,43873.71597,43873.71597,How to modernize large legacy Fortran program?,,1,8,,43872.29097,,CC BY-SA 4.0, +404973,1,,,2/11/2020 0:39,,4,261,"

In the context of a multiuser database desktop application, the concurrency problem has to be considered.
+Many articles focus on two models: optimistic and pessimistic locking.
+In pessimistic locking you expect a concurrent access to a record and so you lock the resource to prevent others to access it while it's being updated.

+ +

In optimistic locking you consider the possibility of a concurrent update an unlikely event, and so you design the application not to lock the resource. Since this may lead to data loss from unmanaged access, you implement a mechanism based on token fields, timestamps, revision numbers or the like in order to detect the conflict and raise an error if it happens.

+ +

Designing a system for detecting concurrency conflicts in a systematic way (that is, on most or all of the tables of a database) might not be a difficult task, but it is not a trivial one either, and it is usually accomplished by the ORM.

+ +

Yet this is only half of the story, because when a conflict is detected, there should be a resolution. In many articles the idea seems to emerge that 'since it's a remote possibility', this resolution can somehow be simplified; let's say, just show the user a message that a conflict occurred and they need to reload the data. But what about a message like 'Another user has changed the data, do you want to overwrite it or reload?'? Without providing any information on which data has changed, it would be a bit ridiculous. In a full-blown implementation, differences should be highlighted: the GUI would have to host a sort of on-the-fly comparison. I haven't seen many of these implementations so far. It seems quite challenging to implement, not to mention the need for testing. Even implementing a simplified version of this comparison that considers only the most meaningful fields seems no easy task, and might be trivial only for the most basic cases. Yet most of the articles on optimistic locking gloss over this point.

+ +

Question

+ +

On the side of optimistic locking we have a 'strategy' that forces you to choose between two equally unsatisfying options. One is ""correctness at a higher cost"", a cost that is not proportionate to the likelihood of the event. The other option is a 'partial', oversimplified implementation.

+ +

But then, if the event is really so remote, doesn't it make much more sense to use pessimistic locking, at least for desktop applications? In its most basic form it has quite a simple implementation. The cost in terms of implementation would again be proportionate to the likelihood of the event. The cost in terms of inefficiency (no lock is always faster than locking) would likewise be low because of the low rate of the event.

+ +

It is very difficult for me to go on with this reasoning because it seems to go against all the more recent practices. Entity Framework and Entity Framework Core support optimistic locking out of the box, while pessimistic locking is not natively supported and requires database-specific SQL. Optimistic locking is gaining ground all over. Is there something important that I am not considering? Is there any good article on the design of a locking system, not only on how ORM optimistic locking or DB pessimistic locking works?
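
+ +

(For reference, the out-of-the-box EF Core optimistic path mentioned above looks roughly like this; the entity is illustrative:)

+ +
public class Customer
+{
+    public int Id { get; set; }
+    public string Name { get; set; }
+
+    [Timestamp]  // row version used as the concurrency token
+    public byte[] RowVersion { get; set; }
+}
+
+// ...
+try
+{
+    await context.SaveChangesAsync();
+}
+catch (DbUpdateConcurrencyException)
+{
+    // another user changed the row since it was loaded:
+    // reload, show the differences, or ask the user to retry
+}
+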

+",28667,,,,,43873.3375,optimistic locking vs pessimistic locking,,3,3,1,,,CC BY-SA 4.0, +404977,1,404996,,2/11/2020 3:00,,-2,139,"

I hope everyone is good. Well, I am at the end of my BS degree (Software Engineering), and in the third phase of my Final Year Project, named the 'Test Phase'.

+ +

My project is to build an Expert System for assessing the programming-course performance of e-learning students. We are advised to design it as a web application, as a huge number of people will be using it online.

+ +

We are advised to use a fuzzy logic algorithm from machine learning while designing this Expert System.

+ +

Well, all of you might know that there is no match for Python when it comes to machine learning, which is why I selected Python for my Final Year Project.

+ +

Now, I am confused: should I use the Django web framework to build this Expert System, or design the Expert System as a standalone desktop application and connect that to the internet? The confusion arises because, if I use Django, it will be just like a website; for example, to access the Expert System we will have to go to a link such as:

+ +

www.youruniversity.com/expertsystem

+ +

This is confusing me, because how can an Expert System be the same as a website?

+ +

Please help me with this confusion. Also, given the advice from our university, is there any other way to create the Expert System than using a web framework?

+ +

I shall be very thankful for all helpful replies.

+ +

Regards,

+",259355,,,,,43872.525,Is Python's Django WebFramework good to design Expert System as a Web App?,,1,5,,,,CC BY-SA 4.0, +404979,1,,,2/11/2020 5:54,,0,34,"

I am making a social media app. I have a use case in which I have to cluster similar types of data. For example, take Instagram. When a user likes a post, we get a notification that 'x likes your post'. The same goes for follows. If 3 more users like that post and 2 more follow the user, after some time the notifications become 'x and 3 others like your post' and 'y and 2 others followed you'. This is what I have to achieve. Post, like and follow data is stored in PostgreSQL. I am also using MongoDB and Redis, and more DBs can be added.

+ +

How should I cluster my notification data in this case? What should the flow for clustering the notifications be, so that there is no extra load on the server or DB? Should it create a new notification in the DB saying 'x and 3 others like your post', or should that just be assembled in the response for the front end?
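
+ +

(For what it's worth, one hypothetical shape for the DB-side option is an upsert that folds repeated events into a single counted notification document; every field name below is illustrative, assuming MongoDB/pymongo:)

+ +
from datetime import datetime, timezone
+from pymongo import MongoClient
+
+db = MongoClient().app  # hypothetical database name
+
+def record_like(recipient_id, post_id, actor_name):
+    # repeated likes on the same post collapse into one document;
+    # the UI renders 'last_actor and count-1 others like your post'
+    db.notifications.update_one(
+        {'recipient_id': recipient_id, 'type': 'like',
+         'post_id': post_id, 'seen': False},
+        {'$inc': {'count': 1},
+         '$set': {'last_actor': actor_name,
+                  'updated_at': datetime.now(timezone.utc)}},
+        upsert=True,
+    )
+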

+",357185,,357185,,43872.38889,43872.38889,Recommendations for data clustering in an application,,0,3,,,,CC BY-SA 4.0, +404980,1,,,2/11/2020 6:11,,2,159,"

I am converting a monolith to a microservices architecture with RESTful APIs, using C#. I have identified the various microservices that will completely represent my monolithic application.

+One important service is a configuration service, which holds important configuration information needed by the remaining microservices.

+What is the best way for the services to communicate with the Configuration service?
Can I directly invoke the configuration service URL from other services using HTTP?

+I have looked into other approaches like RabbitMQ and pub/sub, but I believe these approaches are more suited to use cases where information needs to be broadcast to multiple services.

+What would be the best approach for communication between services in the above case?
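
+ +

(To make the HTTP option concrete, a direct call could be as plain as the sketch below; the URL and the settings type are assumptions, not a recommendation over the alternatives:)

+ +
using var client = new HttpClient();
+// hypothetical configuration-service endpoint
+var json = await client.GetStringAsync(""http://config-service/api/config/orders"");
+var settings = JsonSerializer.Deserialize<OrderSettings>(json); // OrderSettings is illustrative
+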

+",357189,,357191,,43874.75694,43874.93611,Micro services communication among them,<.net>,3,2,1,,,CC BY-SA 4.0, +404984,1,404998,,2/11/2020 7:29,,1,87,"

Though this might seem less like a software engineering problem, IMHO it is actually about programming paradigms rather than a specific ""how do I fix this bug"" question...

+ +

Say we have article, author and tag. article---author is M:1, and article---tag is N:M. Tables:

+ +
TABLE article: id, author_id, text
+TABLE author: id, name
+TABLE tag: id, tagname
+TABLE article_tag_pivot: article_id, tag_id
+
+ +

I have two ways of writing down the ""models"":

+ +

Way 1: More like the Hibernate way.

+ +
@Data
+class Article {
+  long id; String text;
+  Author author;
+  List<Tag> tags;
+}
+// updated: add my original thought of implementation of these two classes
+class Author {
+  long id; String name;
+  List<Article> articles;
+}
+class Tag {
+  long id; String tagname;
+  List<Article> articles;
+}
+
+ +

Way 2: The fields in class is exactly one-to-one with the table columns.

+ +
@Data
+class Article {
+  long id; String text;
+  long authorId;
+  // no List<Tag> since not in article table
+}
+@Data
+class ArticleTagPivot {
+  long articleId;
+  long tagId;
+}
+// updated: add my original thought of implementation of these two classes
+class Author {
+  long id; String name;
+}
+class Tag {
+  long id; String tagname;
+}
+
+ +

I wonder which way is better? I am using MyBatis. I see some pros and cons for each method, so I cannot decide...

+ +
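
(If it helps the comparison: Way 1 typically pairs with a MyBatis resultMap over a JOIN; a sketch, where all column aliases are assumptions about the query:)

+ +
<resultMap id=""articleMap"" type=""Article"">
+  <id property=""id"" column=""id""/>
+  <result property=""text"" column=""text""/>
+  <association property=""author"" javaType=""Author"">
+    <id property=""id"" column=""author_id""/>
+    <result property=""name"" column=""author_name""/>
+  </association>
+  <collection property=""tags"" ofType=""Tag"">
+    <id property=""id"" column=""tag_id""/>
+    <result property=""tagname"" column=""tagname""/>
+  </collection>
+</resultMap>
+
+ +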

Thanks very much for any replies!

+",340897,,340897,,43872.40347,43872.54583,"Should database model POJO contain *entities* or *ids* when having 1:M, M:N relationships?",,2,0,,,,CC BY-SA 4.0, +404989,1,,,2/11/2020 9:31,,1,248,"

My understanding of mocking vs not mocking is that mocking too much creates brittle tests that need to be changed all the time, while on the other hand less mocking better verifies that the system works as it's supposed to all the way through.

+ +

I can see the value of not mocking too much in an untyped language, but there are some parts that I have trouble understanding when it comes to a layered architecture in a typed language. I hope it's possible to understand what I'm asking here, but if not, don't hesitate to comment!

+ +

Say I have the following controller and accompanying service classes in an MVC application. (This example is in C#, but I hope it's possible to follow even without C# knowledge.)

+ +
public class ApplicationController
+{
+    private IApplicationService _service;
+    public ApplicationController(IApplicationService service)
+    {
+        _service = service;
+    }
+    [HttpPost]
+    public ActionResult LoginUser(string id)
+    {
+        var result = _service.LoginUser(id);
+        if (!result) return Unauthorized();
+        return Ok();
+    }
+}
+
+public class ApplicationService : IApplicationService
+{
+    private DbContext _dbContext;
+    private HttpContextAccessor _httpContextAccessor;
+    public ApplicationService(DbContext dbContext, HttpContextAccessor httpContextAccessor)
+    {
+        _dbContext = dbContext;
+        _httpContextAccessor = httpContextAccessor;
+    }
+    public bool LoginUser(string id)
+    {
+        var user = _dbContext.Users.Find(id);
+        if (user == null) return false;
+        if (!user.Active) return false;
+        _httpContextAccessor.HttpContext.User.Identity.Id = user.Id;
+        return true;
+    }
+}
+
+ +

When testing the ApplicationController I can think of two simple tests:

  • If the result is false, return Unauthorized
  • If the result is true, return Ok

+ +

Not mocking anything - tight coupling?

+ +

Since my approach in this example is to not use any mocks, I need to somehow simulate ApplicationService returning false. One way is to seed the database with a user that has the property Active = false.

+ +
[Fact]
+public void LoginUser_UserNotActive_ReturnsUnauthorized()
+{
+    // real collaborators, no mocks; _dbContext/_httpContextAccessor are test fixtures
+    var sut = new ApplicationController(new ApplicationService(_dbContext, _httpContextAccessor));
+    _dbContext.Add(new User { Id = ""123"", Active = false });
+    var result = sut.LoginUser(""123"");
+    Assert.IsType<UnauthorizedResult>(result);
+}
+
+ +

This is where it gets tricky for my understanding. In order to create this test, my ApplicationController test has to ""know"" in which instances the ApplicationService will return false and set up the environment properly, which creates coupling, right? Let me illustrate what I mean.

+ +

Now consider me writing tests for the ApplicationService. A simple test would be the following.

+ +
[Fact]
+public void LoginUser_UserNotActive_ReturnsFalse()
+{
+    var sut = new ApplicationService(_dbContext, _httpContextAccessor);
+    _dbContext.Add(new User { Id = ""123"", Active = false });
+    var result = sut.LoginUser(""123"");
+    Assert.False(result);
+}
+
+ +

With this coupling, which (from my understanding) stems from not writing black-box tests, if a later requirement says that you can log in users even when they are inactive, I have to change both tests! Furthermore, if I write an ApplicationController test for UserNotActive, I have to test for UserDoesNotExist too, right? But at that point I'm duplicating tests, just in different layers.

+ +

Wouldn't it be better with mocking...?

+ +

If on the other hand I wrote the controller test using mocks, it would maybe look like this.

+ +
[Fact]
+public void LoginUser_UserNotActive_ReturnsUnauthorized()
+{
+    var serviceMock = Substitute.For<IApplicationService>();
+    serviceMock.LoginUser(Arg.Any<string>()).Returns(false);
+    var sut = new ApplicationController(serviceMock);
+    var result = sut.LoginUser(""someId"");
+    Assert.IsType<UnauthorizedResult>(result);
+}
+
+ +

With this test, if I change the requirements to be able to login with an inactive user, I would not need to change the ApplicationController test since I'm treating ApplicationService like a black box; I'm just testing return values!

+ +
+ +

So this is where my understanding breaks down. How would I go about black-box testing without mocks? It seems like I have to write white-box tests for everything, which seems to create brittle tests that have to change as soon as requirements change down the chain. At that point, why am I even separating functionality into classes and layers?

+ +

The example in this question is very simple but I hope it gets the point across; one could imagine an example with more layers where the test complexity would only grow.

+",352180,,,,,43873.00972,Black-box testing when testing without mocking?,,4,3,1,,,CC BY-SA 4.0, +405002,1,,,2/11/2020 15:18,,5,304,"

I am to write a piece of software for a friend of my uncle's, but I don't exactly know all the elements that are needed to fulfil the user's needs, so I can't begin to formulate a design yet. Since the problem is sort of solvable (on a small, non-practical scale) by spreadsheets, macros, forms and a database, I am doing exactly that, and in the process I am figuring out the elements that will have to somehow be present in my design.

+ +

I imagine there must be another way to do this: a formalized way to design your system based on the needs of the user, without the middle step of emulating the user trying to solve their own problem via inferior means. If indeed there is such a thing, what is the name of such a discipline/technique?

+",357224,,4,,43873.00625,43873.33542,How to formally figure out the best design based on the informal user's description of their needs?,,5,3,2,,,CC BY-SA 4.0, +405010,1,,,2/11/2020 18:00,,2,52,"

At my company, we are developing pretty simple React Native and sometimes mobile-only React apps. Most of them share some similar logic, such as a 'sign-in by phone' flow, some payments stuff, etc., and some similar UI components as well.
+What we want to do is take all those modules with similar logic, which is basically reducer+actions+selectors+sagas, and move them out into some place from which we could reuse them in our new projects. The same goes for the UI components.

+ +
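
(For context on the monorepo idea, it can start as small as a root package.json with workspaces, which both Yarn and npm support, with each shared module living under packages/; a sketch:)

+ +
{
+  ""private"": true,
+  ""workspaces"": [""packages/*""]
+}
+
+ +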

So, my questions are: does this even make sense? How would one do that? I'm thinking about a monorepo with one package for each module in it. Would this approach fit our needs?

+",357242,,,,,43872.75,How could I reuse common JS modules between several projects?,,0,5,,,,CC BY-SA 4.0, +405013,1,,,2/11/2020 19:06,,-1,100,"

I have an ASP.NET MVC application, and I will need to execute a task every minute.

+ +
  The task is: 
+    -> Go to database 
+    -> Check from Table 1 if a record has value = ""something""
+    -> Perform a task in my web application(submit content to a website) 
+    -> Update Record in Table 2 
+    -> Delete item from Table 1 
+    -> Repeat
+
+ +

I'm thinking of building a desktop application so I can have more control over it and use some kind of library or Windows Scheduler for executing this task every minute.

+ +

I've found out about the Quartz.NET library and about using it in web applications; it seems cool, but I'm a little concerned about using a scheduler inside a web application.

+ +
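
(For reference, on ASP.NET Core the framework itself offers hosted services for exactly this shape of loop; a sketch, where the processing method is hypothetical:)

+ +
public class SubmitPendingItemsService : BackgroundService
+{
+    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
+    {
+        while (!stoppingToken.IsCancellationRequested)
+        {
+            await ProcessPendingItemsAsync(); // hypothetical: Table 1 -> submit -> Table 2
+            await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
+        }
+    }
+}
+
+ +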

What are your opinions?

+ +

Thanks

+",356708,,356708,,43873.0375,43873.53819,Using a separate desktop application for handing scheduled tasks in an ASP.NET MVC Application,,3,1,,,,CC BY-SA 4.0, +405014,1,,,2/11/2020 19:15,,0,65,"

I am facing a dilemma about how to best design the following functionality. What design patterns and OOD principles should I use?

+ +

For simplicity's sake, the basic requirements are as follows:

+ +
  • the display type can be any of the following: PC monitor, LG TV, Samsung TV, or another manufacturer
  • displays can have different functionalities, for example: turn on, turn off, sleep, etc.
  • some metadata about the displays should be available, like:
    • is the CEC link available and working
    • is the RS232 link available and working
    • is the HDMI cable connected
    • is the display currently turned on or off
    • get/set display resolution, etc.
  • communication with displays can be done in any of the following ways:
    • RS232 serial cable commands (which are specific to the manufacturer)
    • HDMI CEC commands
    • as a fallback (and for the monitors) we use EnergyStar features, aka xset dpms force off
+ +

I'm more familiar with PHP, but I will probably implement this in Python, as it is more of a systems language and, I suppose, more appropriate for this task.

+ +

In the end it could be something as simple as the following, I suppose. It could figure out which link it has available for communication (CEC or RS232) and which command it should send along that link, depending on the manufacturer of the display.

+ +
$display = new Display();
+if ($display->isConnected() && $display->isOn()) {
+    $display->turnOff();
+}
+$forfun = $display->getVendorName();
+
+ +

I was thinking about implementing something along these lines, but I can already see some problems with this:

+ +
/** DisplayFactory classes */
+abstract class AbstractDisplayFactory {
+    abstract function makeDisplay();
+}
+
+class LGDisplayFactory extends AbstractDisplayFactory {
+    private $context = ""LG"";
+    function makeDisplay() {
+        return new LGDisplay;
+    }
+}
+
+class SamsungDisplayFactory extends AbstractDisplayFactory {
+    private $context = ""Samsung"";
+    function makeDisplay() {
+        return new SamsungDisplay;
+    }
+}
+
+/** Display classes */
+abstract class AbstractDisplay {
+
+    abstract function turnOn();
+    abstract function turnOff();
+
+    function hasCecLink() {
+        //return status;
+    }
+    function hasSerialLink() {
+        //return status;
+    }
+    function isHDMICableConnected() {
+        // return status;
+    }
+
+    // ....
+}
+
+class LGDisplay extends AbstractDisplay {
+
+    function turnOff() {
+        if($this->hasCecLink()) {
+            // turn off via HDMI CEC - vendor agnostic commands
+            // implementation of this would be better placed in a more generic class since
+            // is vendor agnostic command, and we would not need to repeat it for every display
+        }
+        else if($this->hasSerialLink()) {
+            // turn off via rs232 - vendor specific commands
+        }
+        else {
+            // shell_exec('xset dpms force off')
+            // this also would be better placed in a more generic class
+        }
+    }
+
+    // ...
+}
+
+class SamsungDisplay extends AbstractDisplay {
+    // similar as for LGDisplay class
+}
+
+ +

I would be very happy of any pointers I could get about how to best approach this.

+ +

UPDATE:

+ +

What I would like to achieve is a single entry point for display management and monitoring which would allow me to control different kinds of displays via different kinds of methods, some of which are common to all displays while others are vendor-specific.

+ +

This is an embedded PC for signage displays; therefore the user is not local to the display.

+ +

It does not need to indicate capabilities back to the end-user, but it should indicate some capabilities back to the administrator.

+ +

The system should definitely pick the preferred or best method when a control operation is requested, and fall back to one which is always available and working for all displays if the better methods don't have the prerequisites to be executed (like a disconnected serial cable).

+ +

Some of the operations/requirements of the system I identified are:

+ +
  • the system should be capable of turning the display on and off via the RS232 serial connection; if that is not available, then via HDMI CEC; if that is not available, then as a fallback execute the command xset dpms force off
  • the system should get information about the display, such as:
    • is the display on or off
    • is the display connected to the PC
    • what is the current resolution and its orientation
    • get the list of supported resolutions
  • the system should be able to set some display parameters, such as:
    • set the resolution, orientation and offset/position of the picture
    • set the brightness
+ +

Some display parameters are going to be get/set directly via vendor-specific methods (serial connection), others in a vendor-agnostic and more standardized way via the HDMI CEC protocol, and others, like resolution, via the operating system (with the help of tools like xrandr and others).

+ +

I suppose for this I should employ more than one design pattern, but I'm struggling with which ones, and with how to apply them to get a pluggable, extensible system, so that I don't have to deal with a gazillion if statements when I want to turn off a TV of one or another kind via one or another method.
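
+ +

(One way the fallback chain could be expressed, sketched in PHP with names of my own choosing; this is illustration, not a full pattern recommendation:)

+ +
interface ControlLink {
+    public function isAvailable(): bool;
+    public function turnOff(): bool;
+}
+
+class DisplayController {
+    /** @var ControlLink[] ordered from most to least preferred (e.g. RS232, CEC, DPMS) */
+    private $links;
+
+    public function __construct(array $links) { $this->links = $links; }
+
+    public function turnOff(): bool {
+        foreach ($this->links as $link) {
+            if ($link->isAvailable()) {
+                return $link->turnOff();
+            }
+        }
+        return false; // no link available
+    }
+}
+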

+",357251,,357251,,43874.84653,43874.84653,Architecture Design of Command&Control application center for displays,,0,2,,,,CC BY-SA 4.0, +405017,1,,,2/11/2020 20:51,,1,32,"

I'm working on designing a system for reporting/storing information on hardware devices (IoT-like), and trying to figure out how that would best be structured in Azure's Cosmos DB.

+ +

Every device will have a unique string identifier, and several groups of properties we need to record. I don't have the structure finalized yet, but it will be something like:

+ +
{
+    ""deviceIdentifier"": ""model-serial0123"",
+    ""summaryInformation"": {
+        ""isActive"": true,
+        ""purchaseDate"": ""2020-02-11T20:31:47.6450853Z"",
+        ...
+    },
+    ""configurationSettings"": {
+        ""setting1"": ""value1"",
+        ""setting2"": ""value2"",
+        ...
+    },
+    ""checksums"": {
+        ""fileA.xyz"": ""ABCD1234"",
+        ""fileB.xyz"": ""EF567890"",
+        ...
+    }
+}
+
+ +

Each of the various blocks can have a large number of key/value entries, and will not necessarily need to be retrieved at the same time.

+ +

In a standard relational database, I would have the deviceIdentifier as a natural key, and the other three blocks would be separate tables with a foreign key to the list of device identifiers. My instinct here is to set the deviceIdentifier field as the partition key, and to split the three sub-blocks into separate documents in the same partition. That might be my prior experience leading me astray, though.

+ +

In NoSQL (and Cosmos specifically), does it still make sense to split this up into separate documents? Or is it reasonable to have a single document per partition? Am I even choosing the right type of field to partition on?

+ +

If I do split it up, how do I do a search for something like ""devices with setting1 of value AND checksum of fileA.xyz of ABCD1234""? Or does the fact that I'll sometimes need to reassemble this information imply that I should keep it all together even if it will usually result in returning more data than necessary?

+ +
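
(For what it's worth, with the single-document model that search is one query in Cosmos DB's SQL dialect; a sketch over the shape above, noting that without a partition-key filter such a query fans out across partitions:)

+ +
SELECT c.deviceIdentifier
+FROM c
+WHERE c.configurationSettings.setting1 = 'value1'
+  AND c.checksums['fileA.xyz'] = 'ABCD1234'
+
+ +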

Basically, I'm struggling at wrapping my mind around how to best model this data in an unstructured manner, and even knowing what questions to ask myself.

+",32347,,,,,43872.88819,Storage and partitioning in Cosmos DB,,1,0,,,,CC BY-SA 4.0, +405021,1,,,2/11/2020 22:58,,1,138,"

I have a Rails app with HighSchoolTeam and ClubTeam models. I'm currently using Single Table Inheritance (STI) with Team. That means I only have a Teams table in my database, no High_School_Teams or Club_Teams tables.

+ +

What seems appropriate to me would be for HighSchoolTeam and ClubTeam to have their own tables in the database and also to inherit from a Team model. This doesn't seem like a common approach in Rails, though. So I'm wondering (a) whether that is actually the case, and (b) if so, why; and if not, what the tradeoffs are. Also, how do things work in server-side frameworks other than Rails?
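
+ +

(For readers, this is the standard Rails STI setup being described, with the single teams table and its type discriminator column:)

+ +
# db: one teams table with a `type` string column (Rails' STI discriminator)
+class Team < ApplicationRecord
+end
+
+class HighSchoolTeam < Team
+end
+
+class ClubTeam < Team
+end
+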

+",126432,,,,,43919.70903,Rails model inheritance without STI or polymorphism?,,1,0,,,,CC BY-SA 4.0, +405029,1,,,2/12/2020 6:42,,-2,64,"

I have a question about large-scale projects. What if the team decides to change a library, and therefore the code has to be changed? For example, what if the team wants to change the Picasso library to Glide, so we would have to change Picasso.get().load().into() 100 times? How should we handle these situations?
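
+ +

(One common mitigation, sketched in Kotlin with interface and class names of my own: route all call sites through a small facade so the library is named in exactly one place:)

+ +
import android.widget.ImageView
+import com.bumptech.glide.Glide
+
+// call sites depend on this interface, never on Picasso/Glide directly
+interface ImageLoader {
+    fun load(url: String, into: ImageView)
+}
+
+class GlideImageLoader : ImageLoader {
+    override fun load(url: String, into: ImageView) {
+        Glide.with(into).load(url).into(into)
+    }
+}
+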

+",357279,,,,,43873.29722,Big scale projects,,1,2,,,,CC BY-SA 4.0, +405031,1,,,2/12/2020 7:14,,1,104,"

I'm currently working on a project which is built as a microservice architecture.

+ +

We have one ""Gateway"" which aggregates the data coming from the different microservices to return one aggregated result to the frontend. The ""Gateway"" exposes GraphQL queries and mutations to CRUD the data.

+ +

The microservices are build using java and spring boot and expose several REST API endpoints.

+ +

The problem I'm currently facing happens when it comes to generating an aggregated report.

+ +

For example:

+ +
  1. The frontend queries all orders within a time span.
  2. The orders microservice returns a list of orders, each containing a list of products and a customer reference.
  3. Now, for each product in an order, a GET to the product microservice is invoked to get the product data. That means if an order contains 10 products, 10 GET requests are created to get the data.
  4. The same goes for the customer reference: a GET request is invoked to get the concrete data of the customer from the customer microservice.
+ +

Now, if we imagine we want to query all orders of the last month, this ends up as a long-running query which might run into a timeout.

+ +
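
(One frequently used mitigation, so an order with 10 products costs one call instead of ten, is a batch endpoint on the product service; a sketch in Spring-style Java, where the repository field is an assumption:)

+ +
// on the product microservice
+@GetMapping(""/products"")
+public List<Product> getProducts(@RequestParam List<Long> ids) {
+    return productRepository.findAllById(ids);
+}
+
+ +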

I know that microservices in general do not claim to be fast and that this is not their main purpose. But the current situation is not sustainable.

+ +

Are there any ideas to improve the performance without re-implementing the whole project?

+ +

Sorry if this is the wrong Stack Exchange site for such questions.

+",357282,,,,,43874.56597,Performance issues in an pseudo microservice environment,,4,2,,,,CC BY-SA 4.0, +405038,1,405040,,2/12/2020 8:26,,69,22311,"

When sending a request to another module and expecting a result, it seems to me there are two ways of dealing with the 'non-happy paths'.

+ +
  • Throw an exception
  • Return a result object that wraps different results, such as value and error (sketched below)
+ +

I would say the first one seems better in general. It keeps the code clean and readable. If you expect the result to be correct, just throw an exception to handle happy path divergence.

+ +

But what about when you have no clue what the result will be?

+ +

For example, calling a module that validates a lottery ticket. The happy path would be that you won, but it probably won't be. (As pointed out by @Ben Cottrell in the comments, ""not winning"" is also a happy path, maybe not for the end user, though.)

+ +

Would it be better to consider that the happy path is getting a result from the LotteryTicketValidator and just handle exceptions for when the ticket could not be processed?

+ +

Another one could be user authentication when logging in. Can we assume that the user entered the correct credentials and throw an exception when the credentials are invalid, or should we expect to get some sort of LoginResult object?

+",116570,,292892,,43873.74514,43880.60278,Result object vs throwing exceptions,,12,21,22,,,CC BY-SA 4.0, +405039,1,,,2/12/2020 8:49,,1,293,"

TL;DR: How do you build an airtight license control system for your on-prem application, which revokes access once the subscription ends?

+ +
+ +

Premise

+ +

I am building an application in Python that needs to be deployed on the client's premises. This application runs constantly, like a service, spewing out stats, and it is based on a subscription model.

+ +
+ +

Considerations

+ +
  1. Ensuring the IP stays intact, as the application resides on the client.
    • Achieving this by building a binary using Cython.
  2. When the subscription ends, the application should not run.
    • Thinking of using HOTP (HMAC-based OTP), so each client will have a seed baked into the executable. HOTP uses a seed and a counter (TOTP is the time-based variant) to generate a one-time code, so for every time slot, say 1 month, a new key should be given to the executable (a minimal sketch follows this list).
+ +
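
(Since HOTP came up: the algorithm itself is small; a minimal RFC 4226-style sketch in Python, with a hypothetical seed:)

+ +
import hashlib, hmac, struct
+
+def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
+    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
+    digest = hmac.new(secret, struct.pack('>Q', counter), hashlib.sha1).digest()
+    offset = digest[-1] & 0x0F
+    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
+    return str(code % 10 ** digits).zfill(digits)
+
+# e.g. one counter value per monthly time slot:
+# hotp(b'per-client-seed', months_since_epoch)
+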
+ +

Questions

+ +
  1. Is there a better way to achieve this?
  2. How can the application keep time in a tamper-proof way? (OS time is not reliable.) Or: how do I store the count of time slots in a secure way such that it cannot be hacked?
  3. As this is a service that is always running, how do I enforce a check at the end of a time slot like 1 month? Also, as the application is on-prem, it can be shut down.
+ +

One ugly way is encrypting and storing a timer.txt which is constantly updated by a separate Python process. Is there a better way?

+ +
+ +

The above is under the assumption of no internet access.

+ +

But even if internet access exists, it does not guarantee a fool-proof solution, because the network can be monitored and calls to an API can be redirected to a local service with the same name. Does this mean there should be two-way SSL here?

+",357288,,209774,,43873.53472,43903.91667,License control for an on-premise application,,1,6,,,,CC BY-SA 4.0, +405042,1,,,2/12/2020 9:43,,0,64,"

I am confused about what code should be placed in the view and what in the viewmodel.

+ +

For example: click events.

+ +

Here is a code snippet from Google's ToDo app example, which showcases application architecture (Link):

+ +
//this code is placed in view (TasksFragment)
+override fun onOptionsItemSelected(item: MenuItem) =
+        when (item.itemId) {
+            R.id.menu_clear -> {
+                viewModel.clearCompletedTasks()
+                true
+            }
+            R.id.menu_filter -> {
+                showFilteringPopUpMenu()
+                true
+            }
+            R.id.menu_refresh -> {
+                viewModel.loadTasks(true)
+                true
+            }
+            else -> false
+        }
+
+private fun showFilteringPopUpMenu() {
+    val view = activity?.findViewById<View>(R.id.menu_filter) ?: return
+    PopupMenu(requireContext(), view).run {
+        menuInflater.inflate(R.menu.filter_tasks, menu)
+
+        setOnMenuItemClickListener {
+            viewModel.setFiltering(
+                when (it.itemId) {
+                    R.id.active -> TasksFilterType.ACTIVE_TASKS
+                    R.id.completed -> TasksFilterType.COMPLETED_TASKS
+                    else -> TasksFilterType.ALL_TASKS
+                }
+            )
+            viewModel.loadTasks(false)
+            true
+        }
+        show()
+    }
+}
+
+ +

As you can see, there are three items in the menu.

+ +

Two of them are delegated to the viewmodel and one is handled in the view.

+ +

Besides that, there is some logic in the view that makes decisions based on the view's ID, but the MVVM definition says there must be no conditional logic in the view.

+",357293,,,,,43873.40486,confusion for view's and viewmodel's responsibility in mvvm architecture in android,,0,3,,,,CC BY-SA 4.0, +405047,1,,,2/12/2020 11:43,,1,193,"

I'm setting up a development environment for my application. As such, to run it locally, the API dependencies need to be mocked, to keep it as lightweight as possible.

+ +

The problem, however, is keeping the data that the mocks return up to date; i.e., as the APIs and their data mature, they may return vastly different data from what is being mocked. These APIs are our own, so we have access to all their data, but something like a static JSON file which needs to be manually updated seems unmanageable, especially at scale. What are some good approaches to this problem?

+ +

FWIW: Strategies to maintain contract between mocks and APIs came the closest to answering this, but not specifically enough; hence this question.

+",356572,,356572,,43874.65417,43874.65417,Mocking APIs: Keeping mock data up to date?,,1,2,,,,CC BY-SA 4.0, +405054,1,,,2/12/2020 14:02,,-3,125,"

I would like to know if it is possible to code a function f that takes a structure S as a parameter in such a way that, in case we want to make a drastic change to struct S, we can do it without touching a single hair of f.

+ +

OK, I'll share here what follows, and finally your comments will help me to understand things better. They need a kind of contract, for example:

+ +

R1) Each time f needs something from struct S, f accesses it through a function that struct S's module promises to keep accessible.
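
+ +

(R1 in code is the classic C opaque-struct idiom; a minimal sketch with names of my own choosing:)

+ +
/* s.h -- the contract: S is opaque here, access goes through functions */
+typedef struct S S;
+int s_get_value(const S *s);
+
+/* f.c -- f depends only on the contract, never on S's layout */
+#include ""s.h""
+
+int f(const S *s)
+{
+    return s_get_value(s) * 2;
+}
+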

+",357311,,357311,,43873.60972,43873.63333,Is it possible to isolate C function from changes to its parameter type?,,1,3,,43873.66528,,CC BY-SA 4.0, +405059,1,,,2/12/2020 15:52,,4,100,"

Background

+ +

The high level overview of my situation is described here. I am breaking it apart into smaller, specific questions, such as this one, regarding extensive use of global variables in a procedural style.

+ +

Problem

+ +

In the large legacy Fortran codebase that I and some other new team members have inherited, the use of global variables by way of common blocks is extensive. There are thousands of arrays that are passed around between various modules (via include syntax).

+ +

Is there an overall strategy for working with and managing Fortran global variables and common blocks at such a massive scale (thousands of shared arrays)? Are there any alternatives to global variables (e.g., a database) that don't take a huge performance hit?

+ +

It seems to me that since the common block feature is part of the Fortran language (in order to make it fast by basically doing in-memory compute?), it was intended to be used this way. On the other side of the argument, however, is the advice that using global variables is bad practice, mainly because it is hard to debug when changes to state can happen from so many different places (modules, subroutines).

+ +
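
(For context, the usual incremental alternative inside Fortran itself is to migrate each common block into a module, which keeps the in-memory speed but makes access explicit; a sketch with made-up names:)

+ +
! before: globals via a common block, included everywhere
+!   real :: pressure(1000), temperature(1000)
+!   common /state/ pressure, temperature
+
+! after: a module with explicit, per-routine access
+module state_mod
+    implicit none
+    real :: pressure(1000), temperature(1000)
+end module state_mod
+
+subroutine update_pressure()
+    use state_mod, only: pressure
+    pressure = pressure * 1.01
+end subroutine update_pressure
+
+ +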

Also, by alternatives to the global variable approach, I am thinking about some less tightly coupled data layer, like a database, where the data storage is abstracted away from the main Fortran program. This way we could be more flexible with our choice of data storage and could also interface to it from other places like Python programs.

+",175040,,9113,,43883.39167,43883.39167,Global variables and common block management in Fortran,,0,2,,,,CC BY-SA 4.0, +405063,1,,,2/12/2020 18:01,,0,42,"

I am facing difficulties establishing constraints in the following situation. This is a hypothetical scenario for practicing DB designing.

+ +
+

Imagine a rule in a School states that the Courses there will be taken in certain fixed Rooms; e.g., the Math Course can only be taken in Rooms 101, 102 and 110. Now, a Course can have many Sections, so I need to make sure that any Section of the Math Course is taken only in the specified Rooms.

+
+ +

NOTE: A particular Section will be taken in only one room. That is, Section 1 of the Math course will be taken in Room 101 only

+ +

Now, I make a Course-Room relation by taking the PKs of both Course and Room. The Section relation is created by taking the PK of Course and adding the Section Number to it.

+ +

For storing the Rooms of the Sections, a relation Section-Room is created. The Section-Room table should only contain rows that conform to the Course-Room relation. How can I create a constraint from this relation to the Course-Room relation?

+ +
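
(One classic shape for this kind of constraint, sketched in generic SQL rather than tied to the DBML below, is to carry course_id into section_room and point a composite foreign key at course_room; a matching composite key on section would also be needed to guarantee the section really belongs to that course:)

+ +
-- make (course_id, room_id) referenceable
+ALTER TABLE course_room
+    ADD CONSTRAINT uq_course_room UNIQUE (course_id, room_id);
+
+CREATE TABLE section_room (
+    section_id INT REFERENCES section (section_pk),
+    course_id  INT NOT NULL,
+    room_id    INT NOT NULL,
+    -- the room must be one of the permitted rooms for that course
+    FOREIGN KEY (course_id, room_id)
+        REFERENCES course_room (course_id, room_id)
+);
+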
+ +

A sample ERD is created here: https://dbdiagram.io/d/5e4433459e76504e0ef15e24

+ +

The Schema as DBML is pasted below as a backup.

+ +
// Store info about a Course
+Table course {
+  course_pk int [pk] 
+  course_code varchar(13) [not null, unique]
+  course_name varchar(200) [not null]
+}
+
+Table room {
+  room_pk int [pk]
+  room_no int
+}
+
+Table section {
+  section_pk int [pk]
+  course_id int
+  section int
+}
+Ref: section.course_id > course.course_pk
+
+Table course_room {
+  course_room_pk int [pk]
+  course_id int
+  room_id int
+}
+Ref: course_room.course_id > course.course_pk
+Ref: course_room.room_id > room.room_pk
+
+",353363,,353363,,43874.39931,43874.39931,Ensure that Room No. of a Course's Section is within the permitted Rooms for that Course with some Constraint,,1,6,,,,CC BY-SA 4.0, +405073,1,405090,,2/12/2020 21:24,,1,76,"

For teaching purposes, I'm trying to replicate, in a more faithful way, this conceptual UML (from Wikipedia):

+ +

+ +

In a ""so-so"" real world example, in my case, families of Loans and Insurances:

+ +

+ +

So, can it be considered a valid GOF Abstract Factory?

+ +

Have I followed all the original principles? Or does it needs any kind of fix?

+",356206,,356206,,43873.89861,43874.34792,Can this simple Bank example be considered as a valid Abstract Factory?,,1,3,,,,CC BY-SA 4.0, +405075,1,,,2/12/2020 22:18,,3,160,"

Suppose I have a domain entity representing a person. (Examples in TypeScript)

+ +
class Person {
+    constructor(public name: string) {}
+}
+
+ +

Now, because other parts of the domain will need to reference the Person entity from outside the aggregate, I need to expose the Person's database ID.

+ +
class Person {
+    constructor(public id: number, public name: string) {}
+}
+
+ +

Now, because id is Non Nullable, I can't create new instances freely in the domain layer:

+ +
const p = new Person(undefined, 'Patrick');
+error TS2345: Argument of type 'undefined' is not assignable to parameter of type 'number'.
+
+ +

Well, then I have to make the id property possibly undefined - or null if you're in another language? Not sure how this problem translates to other type systems.

+ +
class Person {
+    constructor(public id: number | undefined, public name: string) {}
+}
+
+const p = new Person(undefined, 'Patrick');
+
+ +

But now, I'll have the ID being nullable in places where I'm sure that's not going to happen:

+ +
async function doSomething(): Promise<number> {
+    const p = await personRepository.getPersonById(id);
+    // let's say for some reason I'll use the ID
+    return p.id + 500;
+    error TS2532: Object is possibly 'undefined'.
+}
+
+ +

For every property that I expect to always be there but might not, which in this example is the ID but might also include creation dates and other database managed stuff, I need to protect that path of execution:

+ +
async function doSomething(): Promise<number> {
+    const p = await personRepository.getPersonById(id);
+    // let's say for some reason I'll use the ID
+    if (p.id === undefined) throw 'Should not happen';
+    return p.id + 500;
+}
+
+ +

This is adding a lot of cruft to our codebase.

+ +

So what is the best course of action here? Attempt to not expose the ID on the domain (seems hard)? Don't make id nullable, and provide a fake value that the repository will treat differently (seems smelly)? Strengthen the id type from number to something else?
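
A minimal sketch of one way to avoid the nullable id entirely, by splitting the ""not yet persisted"" case into its own type (UnsavedPerson, PersistedPerson and PersonRepository are my illustrative names, not from the codebase):

class UnsavedPerson {
+    constructor(public name: string) {}
+}
+
+class PersistedPerson {
+    constructor(public id: number, public name: string) {}
+}
+
+// Only the repository turns one into the other:
+interface PersonRepository {
+    save(person: UnsavedPerson): Promise<PersistedPerson>;   // the DB assigns the id
+    getPersonById(id: number): Promise<PersistedPerson>;     // id is guaranteed here
+}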

+",146025,,,,,43874.54306,Should entities have nullable id because they're autoganerated by the database?,,2,4,,,,CC BY-SA 4.0, +405076,1,405095,,2/12/2020 22:26,,1,266,"

Consider a feature that needs changes in many places across different modules of the software: UI, business logic, backend, etc.

+ +

What is a good approach for doing so?

+ +

We are using dependency injection and are considering using the ApplicationBuilder to exchange the modules in one single place, but this would require code duplication or many different states inside the modules.

+ +
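
A minimal sketch of toggling only at the composition root, assuming an ASP.NET Core-style container (ICheckoutService and its two implementations are hypothetical names, not our real modules):

// services is an IServiceCollection, config an IConfiguration
+if (config.GetValue<bool>(""Features:UseNewCheckout""))
+    services.AddSingleton<ICheckoutService, NewCheckoutService>();
+else
+    services.AddSingleton<ICheckoutService, LegacyCheckoutService>();
+// Everything else depends only on ICheckoutService and never sees the flag.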

Any better idea?

+",333929,,209774,,43877.54236,43877.54236,Feature toggle: How to toggle without spreading same toggle all over the code?,,3,7,,,,CC BY-SA 4.0, +405078,1,405128,,2/12/2020 23:27,,0,795,"

In a REST API, when I want to update all the properties of an entity, which is better to use in terms of good practice: PUT or PATCH? If it is better to use PATCH, why is PUT necessary at all? What would be the difference between the two?

+ +

If all fields are updated, then in that case both operations are idempotent, right? So, what is the difference?
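
For concreteness, the two requests being compared might look like this (a hypothetical /users/42 resource, purely for illustration):

PUT /users/42
+{ ""name"": ""Ann"", ""email"": ""ann@example.com"" }     (full replacement of the resource)
+
+PATCH /users/42
+{ ""name"": ""Ann"", ""email"": ""ann@example.com"" }     (same body, sent as a partial update)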

+",334842,,334842,,43873.97847,43874.73333,REST - PUT or PATCH when updating all properties of an entity,,1,13,,,,CC BY-SA 4.0, +405079,1,,,2/13/2020 0:15,,2,250,"

Just now I was reading a Python question on stack overflow, about clamping a list/array of results to be within a certain range.

+ +

One of the simpler answers suggested something like:

+ +
clamped_list = [ max(64, min(128, i)) for i in source_list ]
+
+ +

This sort of list/array construction loop is championed as ""The Pythonic Way"". If an answer implements this same algorithm as a series of steps in a loop body, there will be suggestions that it is ""not pythonic"", and probably down-votes.

+ +
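
For reference, the explicit loop-body version being compared against might look like this (my reconstruction, not from the original thread):

clamped_list = []
+for i in source_list:
+    if i < 64:
+        i = 64        # clamp from below
+    elif i > 128:
+        i = 128       # clamp from above
+    clamped_list.append(i)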

Yet if a C/C++ for() loop were constructed similarly to that one-liner ~

+ +
for (i=0; i<MAX_ELEMENTS; clamped[i] = iMAX( 64, iMIN( 128, source[i] ) ), i+=1 );
+
+ +

It would never pass any sort of code review (at least none I have ever participated in).

+ +

What is the history of this complicated ""Pythonic Way""? How did something that had been frowned upon in other languages long before Python was invented become the idiomatic way in Python?

+",321255,,321255,,43874.89375,43875.57153,Loop complexity in C Vs Python,,2,6,,,,CC BY-SA 4.0, +405080,1,405082,,2/13/2020 0:31,,9,554,"

Why is assembly language called ""assembly""?

+ +

I was just watching the 1st video in the ""Crockford on JavaScript"" series.

+ +

In it, Douglas says,

+ +
+

"". . . the first program to make programming easier was the assembler, + and using something called assembly language. We don't know why its + called assembly language. . . "".

+
+ +

Thoughts?

+",357353,,209774,,43874.32361,43874.32361,"Why is Assembly Language called ""Assembly""?",,1,2,,,,CC BY-SA 4.0, +405084,1,405176,,2/13/2020 2:12,,-2,95,"

I have been developing an app that will require a cron task every minute. We are handling our cron tasks with Spring Boot Scheduling. However, I am a little worried about the following question:

+ +

One part of our product must be highly available for the mentioned task; if it fails even for one minute, it will have a great impact on our processes and customers. The question is: is Google Cloud App Engine reliable enough to support these processes, so that our product won't be affected easily? And if Google Cloud App Engine does fail, what options do we have to handle this kind of situation, where we need an application that cannot fail for even a single minute?

+",357368,,173647,,43875.42986,43875.65764,Thoughts of Google Cloud App Engine Reliability,,1,2,,,,CC BY-SA 4.0, +405091,1,405104,,2/13/2020 8:25,,0,150,"

I have a class whose purpose is to provide file operations, i.e. functionality to create a file and to read, write and rewrite it. So, the main constituents of the class are functions.

+ +

However, there is one piece of functionality where I am using a Timer to make the thread wait for one second. So, basically, I have an AutoResetEvent and a Timer working in conjunction.

+ +

Now, since we have to register the Timer_Elapsed callback on the Timer, we need a place to do it. Since the class is static, I can use a static constructor for that, but this makes me wonder whether the class should be static in the first place.

+ +

I am confused about whether to make the class static or non-static, as it is mostly about providing simple functionality, but at the same time the Timer and AutoResetEvent have become state elements of the class, of sorts. I would appreciate suggestions.

+ +

Edit: The reason why I am using a Timer is:

+ +

I am trying to synchronize file access between two processes by using FileShare.None. In order to do that, I loop, reattempting to gain access to the FileStream and waiting 1 second on each iteration; a sketch of the mechanism is below. I am developing functionality in legacy code, so I want to avoid Thread.Sleep. I also know that a mutex would be a better option to synchronize access to a file, but I cannot use one, as I have so many such files that creating a mutex for each is not preferred (I have been asked not to create mutexes).
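
A sketch of how the Timer and AutoResetEvent work together (class and member names are illustrative, not my real code):

using System.IO;
+using System.Threading;
+
+static class FileAccessHelper
+{
+    private static readonly AutoResetEvent signal = new AutoResetEvent(false);
+    private static readonly System.Timers.Timer timer = new System.Timers.Timer(1000);
+
+    static FileAccessHelper()              // the static constructor registers the callback
+    {
+        timer.AutoReset = false;
+        timer.Elapsed += (s, e) => signal.Set();
+    }
+
+    public static FileStream OpenExclusive(string path)
+    {
+        while (true)
+        {
+            try
+            {
+                return new FileStream(path, FileMode.OpenOrCreate,
+                                      FileAccess.ReadWrite, FileShare.None);
+            }
+            catch (IOException)
+            {
+                timer.Start();             // wait ~1 second before the next attempt
+                signal.WaitOne();
+            }
+        }
+    }
+}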

+",319425,,319425,,43874.43264,43874.45486,Should I make the class static or non-static for the following case?,,3,9,,,,CC BY-SA 4.0, +405098,1,405103,,2/13/2020 10:08,,1,42,"

I have a 24-hour ""sliding window"" sequence of ""start"" and ""stop"" events in memory, coming from an IoT device.

+ +

I'm only interested in finding ""stop"" events followed by ""start"" events, in order to determine that the object is offline between those two events.

+ +

If I only have a start event, the device is considered offline before and online after.

+ +

If I only have a stop event, the device is considered online before and offline after.

+ +

This seems like a standard pattern in programming, but I can't figure out a clever way to do it properly given the tools at my disposal, such as LINQ for instance.

+ +

What would be the best way to achieve this goal?

+ +

EDIT: I'd like to get a list of tuples of ""stop"" and ""start"" objects as a result, as sketched below.
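
A minimal sketch of that desired output, assuming a hypothetical Event type with a Timestamp and a Kind, and an events collection (this is only to illustrate what I am after):

var offlineWindows = new List<(Event Stop, Event Start)>();
+Event pendingStop = null;
+foreach (var e in events.OrderBy(ev => ev.Timestamp))
+{
+    if (e.Kind == EventKind.Stop)
+    {
+        pendingStop = e;                       // remember the latest stop
+    }
+    else if (e.Kind == EventKind.Start && pendingStop != null)
+    {
+        offlineWindows.Add((pendingStop, e));  // stop followed by start => offline window
+        pendingStop = null;
+    }
+}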

+",357389,,357389,,43874.49306,43874.49306,Finding Event-Oriented Patterns in temporal sequence,,1,1,,,,CC BY-SA 4.0, +405101,1,,,2/13/2020 10:22,,-1,40,"

While writing this question I found out that I'm faced with two problems, a testing one and a production one, but I hope that resolving the situation will solve both issues.

+ +

Now the question:

+ +

I have a set of data coming from a database. For one of the data fields I have different rules depending on the value. To simplify, let's say I have SubsetA with RuleA and SubsetB with RuleB.

+ +

In pseudocode I have:

+ +
Rule(Value) {
+ if (Value in [ValueA, ValueB] /*SubsetA */) {
+  RuleA();
+ }
+ else {
+  RuleB();
+ }
+}
+
+ +

I'm writing non-regression unit tests for Rule, expecting results based on the RuleA and RuleB calls. Since the set of potential values is limited and small, I wanted to test each case and check that for each value either RuleA or RuleB is called accordingly (and does what is expected). However, since my data comes from a database, a new value could appear anyway and would have to be tested against either RuleA, RuleB, or even a new RuleC.

+ +

C would be the easiest case, since I would expect a new rule to be matched by a new test. A and B are different, because a new value in the database would not fail any test for now; it would simply default to RuleB.

+ +

In the else case directing to RuleB, should I define the complementary set of values (SubsetB) of the if case and add a new else case generating an error, as in the sketch below? Is that viable for a large set of values? Also, it is still not caught by a unit test. Doesn't it go against the need to have those values in a database?
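
A sketch of that defensive variant, in the same pseudocode style as above:

Rule(Value) {
+ if (Value in SubsetA) {
+  RuleA();
+ }
+ else if (Value in SubsetB) {
+  RuleB();
+ }
+ else {
+  Error(Value); // an unknown value has appeared in the database
+ }
+}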

+",293499,,,,,44144.54653,"Non regression test for small set of data data driven from database, how to handle future data",,1,0,,,,CC BY-SA 4.0, +405106,1,405163,,2/13/2020 10:53,,3,206,"

The problem is the following: I have to download a set of JSON files and convert them to a certain format. There are 5 output formats (let's call them A, B, C, D, E) and all of the downloaded JSON files will be in one of two formats (call them J1 and J2).

+ +

So once I decide whether I am downloading J1- or J2-type files, I should be able to convert them into one of the 5 formats. Basically this means that there are potentially 10 different conversions.

+ +

I was wondering how to better design this application; the current setup is the following:

+ +
  1. Converter class - which implements common behaviors.
  2. An interface which all strategies should have.
  3. 10 different strategies: from J1 to (A,B,C,D,E) and J2 to (A,B,C,D,E).
+ +

All 10 strategies implement the conversion interface; a Converter object contains a strategy, which is set at run time once I understand exactly which one is required (based on user input).
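
A minimal sketch of that setup, in Java-like form (the names are illustrative, not my real code):

interface ConversionStrategy {
+    String convert(String json);   // one of the 10 J{1,2} -> {A..E} conversions
+}
+
+class Converter {
+    private final ConversionStrategy strategy;
+
+    Converter(ConversionStrategy strategy) { this.strategy = strategy; }
+
+    String run(String json) {
+        // common behaviour (validation, logging, ...) lives here
+        return strategy.convert(json);
+    }
+}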

+ +

This seems unbelievably sloppy; maybe I am not understanding the design pattern properly. If I have all the strategies implemented in different classes, and they all have a convert function, why should I bother to create a Converter object and initialize it with my chosen strategy, when I could simply make a ConcreteStrategy-type object and call the convert function on it?

+",357391,,,,,43875.575,How to use strategy pattern more effectively?,,3,6,1,,,CC BY-SA 4.0, +405107,1,,,2/13/2020 11:03,,3,125,"

Almost every post about OOP that I read by purists keeps stressing how using static methods is an anti-pattern and breaks the testability of the code. On the other hand, every time I look for example code using factories (irrespective of programming language), especially for the purpose of object construction, I see a static method in the factory class returning the constructed object (pseudocode below).

+ +
class ProductFactory {
+
+    public static function make(string name): Product
+    {
+        if(name contains TV)
+            return new TVProduct(name)
+        else 
+            return new HomeAppliance(name);
+    }
+}
+
+product = ProductFactory::make('LCD TV');
+
+ +

To me this looks perfectly fine, because ultimately I want an instance of the product, not an instance of the factory, as would be the case below.

+ +
productFactory = new ProductFactory();
+product = productFactory->make('LCD TV');
+
+ +

My question is twofold here.

+ +

1- What is the real way of using factories? Is a static method on the factory the proper and accepted way to use them?

+ +

2- How do we write unit tests for a factory that uses a static method?

+ +

I hope my understanding of using factories for object construction is not flawed at fundamental level.

+",340618,,,,,43876.2625,Factories and static methods,,4,0,,,,CC BY-SA 4.0, +405110,1,405118,,2/13/2020 12:55,,1,118,"

The code below shows setting the value of an object's property and calling a private method in the setter to update the status of the object. Is this call good practice, or should a setter at most only validate the incoming value and not take part in other kinds of logic?

+ +
public class Toolbar
+{
+    private bool isMenuButtonVisible;
+    public bool IsMenuButtonVisible
+    {
+        get => isMenuButtonVisible;
+        set
+        {
+            isMenuButtonVisible = value;
+            UpdateToolbarVisualState();
+        }
+    }
+
+    private void UpdateToolbarVisualState()
+    {
+        MenuButtonBackground.IsVisible = IsMenuButtonVisible;
+        MenuButtonIcon.IsVisible = IsMenuButtonVisible;
+        MenuButtonLabel.IsVisible = IsMenuButtonVisible;
+        //...
+    }
+}
+
+",357400,,,,,43874.64236,Calling a private method in a setter to update object at every change of the property,,1,3,,,,CC BY-SA 4.0, +405119,1,405121,,2/13/2020 14:33,,4,152,"

Recently we had a discussion about whether a rough DB schema design (high-level tables and columns) should be part of the system design phase or not.

+ +

We have two competing approaches in the company. Let's assume system design consists of designing queues, lambdas, integration with other microservices, etc.

+ +

Approach 1) In system design we should also include (at least as a rough idea) which tables should be in the DB, with relations and normalization.

+ +

Approach 2) During system design we should just state that a DB is there, and that's it. Later, the DB schema will naturally evolve once logic is implemented and needs to be persisted.

+ +

What do you think?

+",343027,,,,,43894.88125,DB design while designing the system,,2,0,1,,,CC BY-SA 4.0, +405123,1,,,2/13/2020 15:47,,0,29,"

To create a concrete factory that extends an abstract factory class, I usually create a 'producer' class to determine which abstract factory to use.

+ +

For example:

+ +
public abstract class AbstractFactoryProducer
+{
+  public AbstractFactory createFactory(String foo)
+  {
+    if (foo.equals(""foo""))
+    {
+      return new FooFactory();
+    }
+    else if (foo.equals(""bar""))
+    {
+      return new BarFactory();
+    }
+    // ... n if/elses, where n is the number of factories
+    throw new IllegalArgumentException(foo); // fallback added so the sketch compiles
+  }
+}
+
+ +

Is there a way around using a ton of if/else statements or switch statements?

+",,user328592,,user328592,43879.69514,43879.69514,What to Use In Abstract Factory Producer to Choose Concrete Factory?,,0,3,,43874.65972,,CC BY-SA 4.0, +405127,1,,,2/13/2020 16:47,,2,158,"

I have data in my database in UTC, in this (pseudo) format:

+ +
[
+  {
+     ""date"": ""Date: 2020-02-12T08:00:00+0000""
+     ""productsSold"": {
+       0: 122,
+       1: 130,
+       2: 90,
+       ...
+     }
+  }
+]
+
+ +

So in this example, on the 12th of February, 122 products were sold in an online shop between midnight and 1 o'clock (UTC); 130 products were sold between 1 am and 2 am, etc.

+ +

I want to display in the frontend the products that have been sold during the last week, aggregated on a daily basis (e.g. on Monday 1200 products were sold; on Tuesday 1500 products, etc.). This should be done according to the user's time zone.

+ +

Now I see two possibilities (a sketch of the second one follows the list):

+ +
  1. The frontend does an HTTP request with the user's time zone. The backend does the calculation, i.e. aggregates the products sold depending on the requested time zone.
  2. The frontend always performs the same request. The backend returns detailed, non-aggregated data of the last eight days. The frontend does the transformation and calculation.
+ +
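
A sketch of option 2, assuming the pseudo-structure above and an IANA zone name like ""Europe/Berlin"" (my illustration, not production code):

type SalesRecord = { date: string; productsSold: { [hour: number]: number } };
+
+function dailyTotals(records: SalesRecord[], timeZone: string): Map<string, number> {
+    const day = new Intl.DateTimeFormat(""en-CA"", { timeZone }); // formats as yyyy-mm-dd
+    const totals = new Map<string, number>();
+    for (const rec of records) {
+        // treat the record's date as the start of its UTC day
+        const utcDayStart = new Date(rec.date.replace(""Date: "", """").slice(0, 10) + ""T00:00:00Z"");
+        for (const [hour, sold] of Object.entries(rec.productsSold)) {
+            const bucket = new Date(utcDayStart.getTime() + Number(hour) * 3600 * 1000);
+            const localDay = day.format(bucket); // the calendar day in the user's zone
+            totals.set(localDay, (totals.get(localDay) ?? 0) + sold);
+        }
+    }
+    return totals;
+}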

What would be your approach and why?

+",342466,,342466,,43876.69028,43876.69028,How should an API handle timezone related data?,,3,3,1,,,CC BY-SA 4.0, +405130,1,,,2/13/2020 17:32,,1,69,"

In my team, we use git branching to keep different features separate (of course). However, one co-worker insists on keeping these features in different files as well, to avoid merge conflicts. For example, we may have a FormView.java on master, but have both a FormView.java and a ValidatedFormView.java on the form-validation branch. I suggested that rather than duplicating files to ""work around"" git, we should let git do its thing and merge features when we want to. He also wants to keep duplicates in order to be able to merge some parts of a feature into other branches without having to deal with merging the FormView.java file itself.

+ +

What should we do? If he is right, why? If I am right, how can I convince him as such?

+",307317,,,,,43876.07431,Should we keep very similar files with slight feature differences under different names?,,1,1,,,,CC BY-SA 4.0, +405131,1,,,2/13/2020 17:41,,0,126,"

What is the best way to name two classes that describe the same object, but where one of those classes does not hold the complete information? And should I make one of the classes inherit from the other?

+ +

I want to have (for the sake of example) one API that gets all people, to display a list of names, and then, when a person has been selected, another API that gets the detailed content for that person.

+ +

Here is an example of the classes I am currently using, demonstrating my current decisions: the naming convention, and that one class derives from the other.

+ +
    public class PersonBase
+    {
+        public int PersonId { get; set; }
+        public string Name { get; set; }
+    }
+
+    public class Person : PersonBase
+    {
+        public DateTime DateOfBirth { get; set; }
+        // Other properties...
+    }
+
+ +

EDIT: The reason I have chosen to separate the same entity into two classes is that the database query for the complete Person is a lot more complex/intensive, and therefore I want a lightweight version to use when searching for a particular person in a list.

+ +

Note that what I have called PersonBase actually contains some other properties that are used for filtering (you could imagine IsManager or similar), so in my opinion it does represent the person, but simply does not have the full information.

+",321922,,321922,,43875.36875,43875.36875,"How to name two versions of the same object, where one has a smaller amount of data and one is the ""complete"" object",,2,4,1,,,CC BY-SA 4.0, +405133,1,,,2/13/2020 19:03,,3,94,"

I am developing a piece of commercial software for Windows that requires users to obtain a license key in order to use it. It is run locally by clients, with the assumption that their machines may not have internet access, so all licensing processes must happen offline.

+ +

The binaries are iterated frequently (new updates once or twice a week), and I am looking for ways to prevent a single patcher from being created that would be able to automatically circumvent the licensing measures in all versions of the software in one go.

+ +

What I'm wondering is whether there is a procedural way to set up a licensing system such that each binary would have to be cracked in a different way in order to bypass the anti-piracy measures.

+ +

A naive implementation of this might be to simply relocate/rewrite the license-checking function within the code, so a patcher couldn't just hook the same function address in each version in order to bypass it. But that relocation would have to be done manually by me, and I'm sure the function might still have some obvious signature that a competent patcher could find automatically.

+ +

Are there any clever ways to force crackers to re-do their work on every single build of a piece of software like this? I'm just trying to avoid a situation where a pirate could download a single patcher and immediately gain access to every version of my software that I release.

+ +

A few extra notes:

+ +
    +
  • I am well aware all locally-run software can be easily cracked. I am not asking how to prevent cracking, only slow it down by forcing crackers to re-do their work on every build.

  • +
  • I am aware of partial key verification that would sort of accomplish this type of thing on the license-key side of things, however that's not really relevant here because PKV would do nothing to stop a cracker from simply jumping the verification function in the binary.

  • +
  • My frequently-iterated software may end up with hundreds of builds over time, so I am looking for an automatic/procedural way to accomplish this task, rather than instructions for how I could manually change my code in each build

  • +
+",357429,,,,,43874.79375,Windows c++ (piracy): strategies for preventing multiple binaries from being cracked by a single patcher,,0,6,,43876.04097,,CC BY-SA 4.0, +405136,1,,,2/13/2020 20:11,,1,51,"

It turns out that I have two abstractions in my pet project: asset loaders and drawing tasks. For each abstraction I have some classes representing it (currently a single class for the drawing-task abstraction and, for asset loaders, a factory which returns new instances), a store (formerly a manager, but I found that it is bad practice to use the word ""Manager"" in naming) and some other helpers (e.g. decorators). I am thinking about how to organize all this in a project structure. When I started the project, there was one folder for the abstractions themselves, a separate folder for stores, and another one for helpers. I have now realized that these two systems are actually parallel, and hence should be developed separately, so I decided to create a folder for each.

+ +

So what I am interested in is:

+ +
  • How do you split your files into directories:
    • By use
    • By the abstraction they describe
    • By language construct type (i.e. classes, functions)
    • Something else
  • How do you name them:
    • Do you use words indicating it is a group (e.g. assetLoadersSystem, assetLoadersUtils, ...)?
    • Do you name it after the main concept (just an assetLoaders directory with the same file in it)?
+ +

Is there any place where I can read about this? I found some interesting chapters in ""Clean Code"" and ""Code Complete"" with tips on variable naming. Is there anything like that for my problem?

+",357436,,353068,,43877.63264,43877.63264,Tips for module naming and criteria for grouping files in a directory,,1,2,,,,CC BY-SA 4.0, +405139,1,,,2/13/2020 21:39,,1,176,"

I'm trying to create a website and mobile app with Clean Architecture and microservices.

+ +

I have created an identity service with a separate database A, and another API service with another database B. For example, I have a table Competence in DB(B) that has a many-to-many relationship with the table AspNetUsers in DB(A).

+ +

How do I link them?

+ +

To save two competences for a user, would I be required to store the data from the user database as (user_id, competence_id) rows without a real relationship?

+ +

Is it a good idea to create a Profile table in DB(B) with a column Id,

+ +
Profile
+-Id
+-FullName
+-Email
+
+ +

and, on each user registration, save the user's data to Profile (""user_id"", ""john doe"", ""johndoe@gmail.xyz""), and after that link Profile with the other tables in DB(B)?

+ +

Or should I make Profile like this:

+ +
  Profile
+    -Id
+    -FullName
+    -Email
+    CONSTRAINT [FK_Profile_ToAspNetUsers] FOREIGN KEY ([Id]) REFERENCES [dbo].[AspNetUsers]([Id])
+
+ +

Do we need that?

+",357315,,,,,44025.46042,Separate database in microservice architecture,<.net>,2,0,,,,CC BY-SA 4.0, +405140,1,,,2/13/2020 21:41,,1,21,"

So, I have an API that authenticates with Identity Server (OAuth2) and gives our user a JWT token when logging in through a front-end application; this token has access to everything.

+ +

The API is also external-facing, so that third parties can access it using client id + secret via OAuth2; that token is a lot more restricted in scope and limits.

+ +

The problem I'm facing is that I can easily grab hold of the token from logging in via the UI and use it to directly access the API, with no limits.

+ +

I've done some searching around on best practices and found many references to using server-side cookies stored as HttpOnly, or to encrypting the token with a key only the application knows, but I wanted to ask the question in case someone has hit an issue like this before and knows a better solution.

+",357440,,,,,43875.91806,Identity Server 4 Authentication - Security questions around using the same API for internal application and third parties,,1,0,,,,CC BY-SA 4.0, +405150,1,,,2/13/2020 23:12,,0,43,"

I'm wondering how to properly secure this example app (https://github.com/brunodrugowick/spring-thymeleaf-vue-crud-example).

+ +

The app is:

+ +
  • A Spring (Boot) app with server-rendered pages via Thymeleaf.
  • It also serves an API providing data for the pages.
+ +

I serve the pages via Thymeleaf (for example, Users, Roles, etc.) and perform CRUD operations with Vue.js using the API.

+ +

What's the proper way to secure this stack, and what are its underlying limitations, considering this architecture?

+ +

Some questions that pop on my mind, to help you understand why I'm asking:

+ +
  • Can I go with the Spring Security defaults (adding a CSRF token on my forms that POST/PUT with Vue.js)?
  • How do I integrate this with my DELETE via the API, for example?
  • Should I disable CSRF?
  • Does a session even make sense? Maybe I should give access to all the pages and only control access to the API. In that case, wouldn't the way I developed it (Vue scripts for every page) be wrongly repetitive?
+ +

Does this architecture make sense? What are the caveats?

+",352386,,,,,43874.96667,How do I properly secure a Spring Boot app serving server-side rendered pages and API?,,0,0,1,,,CC BY-SA 4.0, +405154,1,,,2/13/2020 23:24,,0,47,"

We are building some microservices that we will likely deploy using GAE, and I am fairly new to GAE. I've done a lot of other development in my day, but this paradigm is a little different. I'm wondering if someone can provide some advice on the most common methodology to follow when developing services deployed using GAE. Specifically, do developers tend to do their coding locally, test the functionality out, and then deploy to GAE to test everything? Or do they make each small change to their code and actually push it to GAE to test it? It seems like one would first build out the basic functionality of the service locally, and then, once it is working, push it to GAE to make sure it still works, since it seems it would be more difficult to debug when running on GAE. I'm just wondering what the typical model is that developers follow when building services to be deployed to GAE.

+",244419,,353068,,43875.89236,44179.25069,Deployment best practices for GAE Microservices,,1,0,,,,CC BY-SA 4.0, +405156,1,,,2/14/2020 1:44,,-1,175,"

In a SysML/UML activity diagram, how do you implement an OR gate?

+ +

A merge node immediately came to mind. But does a merge node always have to come after a decision node?

+ +

Also, I have seen an action with two inputs where each input has a multiplicity definition, e.g. [0..1]. But how could this be used to implement an OR gate, where the action would be fired if at least one of the inputs contained a token?

+",357041,,90149,,43875.70486,43876.14931,SyML/UML Activity diagram: how to model an OR gate,,2,7,,,,CC BY-SA 4.0, +405158,1,405161,,2/14/2020 2:05,,3,174,"

In my ASP.NET Core application with an Angular 2+ client, I work with a complicated object graph. In the object graph I have some objects with references to each other. I have included a simplification as an illustration below, where MainObject represents the graph:

+ +
public class MainObject
+{
+    public int Id { get; set; }   
+    public ObjectProperty MyProperty { get; set; }
+    public List<GreenObject> GreenObjects { get; set; }
+    public List<YellowObject> YellowObjects { get; set; }
+}
+
+public class GreenObject
+{
+    public int Id { get; set; }
+    public Guid Guid { get; set; }
+    public GreenObjectProperties MyProperty { get; set; }
+    public Guid? YellowObjectRef { get; set; }
+}
+public class YellowObject
+{
+    public int Id { get; set; }
+    public Guid Guid { get; set; }
+    public YellowProperties MyProperty { get; set; }
+    public Guid? GreenObjectRef { get; set; }
+}
+
+ +

Now I have the requirement to create clones of MainObject, such that every relationship between objects in the object graph is maintained, but also such that the clone can be persisted. This means I need new UUIDs/GUIDs for the clone (to be able to persist it in my DB), while continuing to respect the relationships between the cloned objects and thus the newly created UUIDs/GUIDs.

+ +

I am already using AutoMapper in my ASP.NET application, so I was considering using this library to create the clone. But I am willing to consider other libraries, or even to perform the cloning on the client side.

+ +

Question: Is there a common pattern or library to easily clone the object graph, creating new UUID/GUID's while keeping the references to the newly created objects/GUID's?

+ +

In the simplification, I could easily iterate over the list of GreenObject, issue new GUIDs, and then find the YellowObject whose GreenObjectRef equals the original GUID and replace it with the new GUID (and vice versa for the list of YellowObject); a sketch of that remapping is below. However, my actual implementation is very complicated, and writing such a custom cloning method that is specific to the relationships would also be troublesome to maintain.
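
For the simplified model above, that naive remapping might look like this (a sketch only, using a shared old-to-new GUID dictionary; it needs using System, System.Collections.Generic and System.Linq, and leaves the database Ids at their defaults):

public static MainObject CloneWithNewGuids(MainObject source)
+{
+    var map = new Dictionary<Guid, Guid>();            // old GUID -> new GUID
+
+    Guid Remap(Guid old)
+    {
+        if (!map.TryGetValue(old, out var fresh))
+            map[old] = fresh = Guid.NewGuid();
+        return fresh;
+    }
+
+    return new MainObject
+    {
+        MyProperty = source.MyProperty,
+        GreenObjects = source.GreenObjects.Select(g => new GreenObject
+        {
+            Guid = Remap(g.Guid),
+            MyProperty = g.MyProperty,
+            YellowObjectRef = g.YellowObjectRef.HasValue
+                ? Remap(g.YellowObjectRef.Value) : (Guid?)null
+        }).ToList(),
+        YellowObjects = source.YellowObjects.Select(y => new YellowObject
+        {
+            Guid = Remap(y.Guid),
+            MyProperty = y.MyProperty,
+            GreenObjectRef = y.GreenObjectRef.HasValue
+                ? Remap(y.GreenObjectRef.Value) : (Guid?)null
+        }).ToList()
+    };
+}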

+",316827,,326536,,43875.84931,43878.33611,How to clone an object graph and keep relationships of objects intact?,<.net-core>,3,2,1,,,CC BY-SA 4.0, +405159,1,,,2/14/2020 2:40,,0,42,"

I am unable to find an algorithm for optimizing or eliminating Phi instructions after some control flow graph modifications.

+ +

I have found algorithms for destructing and constructing the SSA form, and for regenerating the SSA in the event where it was broken (by introducing multiple assignments to the same variable.)

+ +

However, I am not sure whether there are any published algorithms for optimizing Phi placement in an existing SSA graph where, due to some CFG changes, it may no longer be in optimal form.

+ +

Or do I need to convert out of SSA form and then back into SSA form?

+ +

Consider a node A with two predecessors, one of which is identified as dead code. Now, there could be Phi instructions for the edge between that predecessor and node A that would no longer be valid. Are there algorithms for reconstructing the SSA to accommodate this?

+",138350,,,,,43875.11111,Optimizing Phi Instruction Placement after changes in CFG,,0,0,,,,CC BY-SA 4.0, +405165,1,406144,,2/14/2020 9:28,,-2,26,"

We have an API (written in .NET Core 3.1), published to Azure Web Services. This API talks to a database for authentication. After a successful authentication, the API connects to other databases, depending on the user that logs in.

+ +

Now we have come to the insights part. I want to know:

+ +
  • The datetime the API is called
  • By which client
  • What end-point was triggered
+ +

I'm not sure how I can get these insights. Does Azure provide such functionality? Or do I have to write the records to the database myself?

+",349803,,,,,43895.57431,Azure web service api usage per user insights,,1,0,,,,CC BY-SA 4.0, +405167,1,405738,,2/14/2020 9:36,,0,140,"

We're designing and developing an enterprise application using Spring Boot for the REST APIs and Angular 8 as the web client. It's been a year since we started.

+ +

When I started, I read multiple articles which clarified my understanding of why REST without HATEOAS is not actually REST. I went ahead with the then-available Spring HATEOAS project, which was on v0.25 (not stable) at the time but good enough to get us started.

+ +

Now, it has moved to a stable 1.x version and we have upgraded to it.

+ +

In our case, we have a collection of documents stored in MongoDB which is used by multiple APIs. Think of it as a collection which contains a lot of information and has more than 3 ways to be represented. 95% of our APIs are GET requests; only 5% are PUT, POST and DELETE. It's a monitoring application API, you get it.

+ +

Now, I believe that the relation between a Resource and a Controller should be one-to-one.

+ +
+

Question 1: Am I correct with this?

+
+ +

And,

+ +
+

Question 2: Am I correct with the premise that as long as the URL is the same, the resource structure must not change?

+
+ +

Like, every time I hit the URL https://example.com/status, I must get a consistent structure in the response, irrespective of the query params I give to the API; i.e., the number of keys in the response will be the same, but the values may vary.

+ +
+

Question 3: How do we define a parent-child relation in a resource? Or should we even care about it?

+
+ +

Moving on,

+ +
+

Question 4: If I want a subset of keys for a Resource, should I define a new Resource for it, or can I manipulate the structure by giving some query parameters?

+
+ +

An answer to even one of these questions could help me a lot.

+",149660,,,,,43887.70556,RESTful API design using HATEOAS - Decision on Structure,,3,0,,,,CC BY-SA 4.0, +405170,1,,,2/14/2020 10:26,,-1,70,"

I would like to know which architecture is more suitable when considering data sharing between tenants: a multi-instance (single-tenant) architecture, or a multi-tenant architecture with a database per tenant.

+ +

Imagine this first scenario with two clients, each having an instance of the same application and therefore a separate database. A third client also connects to its own instance but, in addition, must be able to read and/or write certain data of client 1 and/or client 2.

+ +

What are the possibilities to allow such data sharing in an architecture as described above?

+ +

And a second scenario: if we have 100 tenants, and therefore 100 databases, and I need global analysis functionality, do I have to query each of these 100 databases to get the complete information?

+",357476,,209774,,43875.49861,43875.53681,Data sharing on a multi-instance application,,1,1,,,,CC BY-SA 4.0, +405173,1,,,2/14/2020 10:59,,6,1364,"

I would like to create a REST API with NestJS, but I want to add GraphQL as another top-level layer later on. So for the start I have the basic layers: controller, service and TypeORM repository. Let's assume you want to update a user's username by id. The controller route could be

+ +

PATCH /users/:id/username

+ +

Two problems might come up in the service or repository layer:

+ +
  • The user id might not exist
  • The username exists already
+ +

The basic flow of this operation would be

+ +
  • Fetch the user by id
  • Handle the error if the user does not exist
  • Check if the username exists already
  • Handle the error if the username exists already
  • Update the user's username
+ +

I'm thinking about how I should handle those errors. I could throw exceptions immediately based on this concept

+ +

https://en.wikipedia.org/wiki/Fail-fast

+ +

NestJs provides some out of the box exceptions I can use

+ +

https://docs.nestjs.com/exception-filters#built-in-http-exceptions

+ +

The problem is that I don't think I should throw HTTP exceptions in my service layer. They should be thrown in my controller logic. So what is a common approach for those errors?

+ +
  • Should I just return undefined instead of an updated user? The controller wouldn't know which part failed.
  • Should I create my own exceptions extending Error and throw them (see the sketch below)?
  • Given that exceptions come with a performance cost, should the return type of the function be something like <User | NotFoundError | ConflictError>?
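
A minimal sketch of the second option, i.e. domain-level error classes the controller could translate into HTTP responses (the names are mine, for illustration):

export class UserNotFoundError extends Error {
+    constructor(readonly userId: number) {
+        super(`User ${userId} does not exist`);
+    }
+}
+
+export class UsernameTakenError extends Error {
+    constructor(readonly username: string) {
+        super(`Username '${username}' is already taken`);
+    }
+}
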
+",343175,,,,,43875.90069,Error handling in Nest service layer,,1,6,,,,CC BY-SA 4.0, +405177,1,,,2/14/2020 14:39,,2,331,"

For teaching purposes, I am trying to create a PHP implementation of a conceptual example of the Builder pattern:

+ +

First of all, some products:

+ +
class Product1 {
+    private string $attribute1;
+    private string $attribute2;
+    function getAttribute1(): string {
+        return $this->attribute1;
+    }
+    function getAttribute2(): string {
+        return $this->attribute2;
+    }
+    function setAttribute1(string $attribute1): void {
+        $this->attribute1 = $attribute1;
+    }
+    function setAttribute2(string $attribute2): void {
+        $this->attribute2 = $attribute2;
+    }
+}
+
+class Product2 {
+    private string $attribute1;
+    private string $attribute2;
+    function getAttribute1(): string {
+        return $this->attribute1;
+    }
+    function getAttribute2(): string {
+        return $this->attribute2;
+    }
+    function setAttribute1(string $attribute1): void {
+        $this->attribute1 = $attribute1;
+    }
+    function setAttribute2(string $attribute2): void {
+        $this->attribute2 = $attribute2;
+    }
+}
+
+ +

The Builder interface:

+ +
interface Builder {
+    public function createNewProduct();
+    public function makePart1($value);
+    public function makePart2($value);
+    public function getProduct();
+}
+
+ +

The concrete Builders:

+ +
class ConcreteBuilder1 implements Builder {
+    private Product1 $product;
+
+    function __construct() {
+        $this->product = $this->createNewProduct();
+    }
+    public function createNewProduct() {
+        return new Product1();
+    }
+    public function makePart1($value) {
+        $this->product->setAttribute1(""variation $value a1""); 
+    }
+    public function makePart2($value) {
+        $this->product->setAttribute2(""variation $value a2"");
+    }
+    public function getProduct() {
+        return $this->product;
+    }
+
+}
+
+class ConcreteBuilder2 implements Builder {
+    private Product2 $product;
+
+    function __construct() {
+        $this->product = $this->createNewProduct();
+    }
+    public function createNewProduct() {
+        return new Product2();
+    }
+    public function makePart1($value) {
+        $this->product->setAttribute1(""variation $value b1"");
+    }
+    public function makePart2($value) {
+        $this->product->setAttribute2(""variation $value b2"");
+    }
+    public function getProduct() {
+        return $this->product;
+    }
+
+}
+
+ +

And the Director receiving builder instances by references:

+ +
class Director {
+    public function createVariation1(Builder &$builder){
+        $builder->makePart1(1);
+        $builder->makePart2(2);
+    }
+    public function createVariation2(Builder &$builder){
+        $builder->makePart1(3);
+        $builder->makePart2(4);
+    }
+}
+
+ +

Using it:

+ +
$builder = new ConcreteBuilder2();
+$director = new Director();
+$director->createVariation1($builder);
+var_dump($builder->getProduct());
+
+ +

So, is this approach acceptable? Is it still a valid GoF Builder?

+ +

If not, what would a correct conceptual Builder in PHP look like?

+",356206,,356206,,43875.62639,43878.15208,"Builder Pattern: Is it acceptable to use ""passing-by-reference"" on Director methods?",,2,8,2,,,CC BY-SA 4.0, +405178,1,405185,,2/14/2020 14:58,,6,225,"

The book ""Implementing Domain Driven Design"" (page 361) suggests to use special types to distinguish several kinds of IDs, e. g. using BookId(1) instead of just 1 of type Int or Long. In my Clean Architecture or Onion Architecture the outermost layer is a HTTP adapter taking an id of type Long. Then this adapter calls an application service, which in turn calls the domain (a repository in this case). The flow is as follows:

+ +
HTTP Adapter → Application Service → Domain
+
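
For reference, such an ID type can be as small as this (my sketch, in Kotlin to match the example further down):

data class BookId(val value: Long)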
+ +

The question is where to convert the Long to BookId. In principle there are two options:

+ +

1) Convert it in the HTTP adapter. This would have the benefit that the raw type would no longer exist, and no one could ever use an AuthorId where a BookId should have been used. The conversion could even be performed by the framework, so that no code would ever have to convert or handle raw types. An example with Jersey (JAX-RS) could look like this:

+ +
@GET
+@Path(""/books/{id}"")
+fun findById(id: BookId): Response { ... }
+
+ +

A converter registered by the framework would take the Long value from the request and convert it to a BookId automatically.

+ +
HTTP Adapter  → Application Service → Domain
+Long → BookId   BookId                BookId
+
+ +

2) The Long value could be passed to the application service. However, I don't see any benefit here.

+ +
HTTP Adapter → Application Service → Domain
+Long           Long → BookId         BookId
+
+ +

Question: Where is the best place for the conversion? Do you see any good reason not to convert the value as early as possible?

+",63946,,63946,,43876.87222,44043.80694,Where to convert primitive types in meaningful types in Clean Architecture / Onion Architecture,,3,2,2,,,CC BY-SA 4.0, +405179,1,,,2/14/2020 15:14,,-2,51,"

I am very confused about what I did with this Mercurial repository... I reverted a commit, and after that I was never able to have a ""unique"" structure in my repository again. Can someone help me?

+ +
> hg log
+
+672[tip]: 670,671   9abd695c57ee   2020-02-14 10:01 +0100   
+  commit before
+
+671[localgit/master][master]:669   02169eecd8d0   2020-02-13 17:15 +0100 
+  script to subscribe shared flows
+
+670   922dcfabbc33   2020-02-13 15:34 +0100   
+  script to subscribe flows
+
+669[localgit/feature][feature]   126caa38767f   2020-02-13 10:22 +0100   
+  debug algosec connection
+
+
+> hg heads
+
+changeset:   672:9abd695c57ee
+tag:         tip
+parent:      670:922dcfabbc33
+parent:      671:02169eecd8d0
+user:        grigoli
+date:        Fri Feb 14 10:01:37 2020 +0100
+summary:     commit before
+
+> hg branches
+
+default                      672:9abd695c57ee
+
+> hg bookmarks
+
+feature                   669:126caa38767f
+master                    671:02169eecd8d0
+
+",357475,,173647,,43875.84861,43875.84861,How can I get back to having only the master bookmark in Mercurial?,,1,1,,,,CC BY-SA 4.0, +405182,1,405211,,2/14/2020 18:33,,-2,392,"

Since I cut my teeth on code with OO, I’m biased toward using structs as classes without methods. However, there’s probably a good reason that typedef isn’t the default behavior of struct. What is it?
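
For reference, the difference in question (standard C; the names are just for illustration):

struct point { int x, y; };      /* declares the tag ""point"" */
+struct point p1;                 /* without typedef, the keyword must be repeated */
+
+typedef struct point point;      /* opt-in shorthand */
+point p2;                        /* now usable like a class-style type name */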

+",334254,,,,,43876.05069,Why would you make a struct without typedef?,,1,7,,,,CC BY-SA 4.0, +405183,1,,,2/14/2020 18:52,,3,348,"

I've been assigned to a code base that is responsible for millions of dollars of transactions per quarter and has been in use for over a decade. Sifting through the solution, I see doubles used everywhere to represent money, with arithmetic done on these variables; only on rare occasions is a decimal type used.

+ +

What is an apt approach to understand the extent of possible damage done by rounding errors due to using an inappropriate type for currency?

+",218080,,,,,43887.84861,How do you assess the damage in a system that has been using floats or doubles for money?,,5,4,1,,,CC BY-SA 4.0, +405186,1,405190,,2/14/2020 19:29,,1,177,"

I have a collection of animals (an interface) which contains birds and cats (both also interfaces). I want to print out all the cats to the console, and I am forbidden to use instanceof (I did not make up this arbitrary restriction).

+ +

What is the correct way in OOP to solve this? getClass() is not possible, because I only have access to the API and it would not comply with OOP concepts. Another alternative would be the visitor pattern (I can change the API), but this would just be a very convoluted way to effectively perform a type check.

+",306566,,,,,43876.13125,How to avoid type checking with Java/OOP in this situation?,,2,6,,,,CC BY-SA 4.0, +405191,1,,,2/14/2020 20:57,,-4,87,"

I have this unit test:

+ +
[Test]
+        [AutoMoqData]
+        [TestCaseSource(typeof(PhoneNumberTestCases))]
+        public void PopulatesPhoneProperty(
+          string inputValue,
+          string expectedValue,
+          Entity source,
+          [NoDefaultEnum] ConcreteUserMapper sut)
+        {
+            source.LogicalName = ""user"";
+            source.Attributes.Add(""phone"", inputValue);
+
+            sut.Map(source).Phone.Should().Be(expectedValue);
+        }
+
+ +

The problem is that I want my last two test-method parameters (sut and source) to be instantiated automatically, without doing so explicitly. Right now it does not work (""Not enough arguments provided, provide at least 4 arguments""). Does anyone have a solution for that? I searched for a day and could not find anything that helps.

+ +
internal class PhoneNumberTestCases : IEnumerable
+{
+    public IEnumerator GetEnumerator()
+    {
+        yield return new object[] { ""800) 814-1103 ext. 3120 ext."", ""80081411033120"" };
+        yield return new object[] { ""80081411033120"", ""80081411033120"" };
+        yield return new object[] { ""1a2b3cc4dd800"", ""1234800"" };
+        yield return new object[] { ""555.555.5555"", ""5555555555"" };
+        yield return new object[] { ""555-555-5555"", ""5555555555"" };
+        yield return new object[] { ""555.555.5555"", ""5555555555"" };
+        yield return new object[] { ""555?555!5555"", ""5555555555"" };
+        yield return new object[] { ""555-555-5555 x6666"", ""55555555556666"" };
+        yield return new object[] { string.Empty, string.Empty };
+        yield return new object[] { null, null };
+    }
+}
+

+",357521,,357521,,43876.19583,43877.74931,TestCaseSource in NUnit 3.12.0,,1,4,,,,CC BY-SA 4.0, +405200,1,405201,,2/14/2020 22:07,,1,131,"

I am struggling with naming 3 variables defining the following concept:

+ +

An action, after being performed, is not available for an amount of time and then is available again.

+ +

The concept comprises 3 variables:

  • A boolean (is the action available?)
  • An int (the action 'cooldown' duration)
  • Another int (the action 'cooldown' countdown/timer)

+ +

As you can imagine, naming them actionCooldown, actionCooldownDuration and actionCooldownCountdown is awkward, as the names are too long.

+ +

I can't, however, find a better naming standard for this concept.

+ +

I thought about delay and pause but they don't necessarily communicate ""period of unavailability after use"".

+ +

The context is a bit generic; this is game development, but I am using these cooldowns everywhere from input to movement to collisions.

+",248301,,,,,43876.52361,"Succinct and explicative way to name ""cooldown"" variables",,5,3,,,,CC BY-SA 4.0, +405205,1,,,2/14/2020 23:47,,1,95,"

I'm writing a web server as a test project in Node.js.

+ +

What's really bothering me so far is the lack of control over, or even awareness of, memory usage. Naturally I want to cache some stuff in RAM for faster operations, but how much are my objects using? There seems to be no reliable way to tell.

+ +

The best I can think of is estimating worst-case scenarios: each reference 8 B, each character in a string 2 B. But what is the base cost of an array or an object? It could be nothing or it could be something, depending on how they are implemented; I've got no idea.

+ +

I tried measuring the sizes of things by creating 10M of them and measuring the difference before and after, forcing GC and reading process.memoryUsage().heapUsed (a sketch of the measurement is below), but I got some really odd results: Numbers would be 10 B, each character in a string 5 B, an empty array 42 B, and an empty object 66 B. I doubt it's correct, but it's the only result I have.

+ +
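
Roughly what my measurement looked like (run with node --expose-gc; this is a sketch, not my exact code):

function measure(makeOne, n = 10000000) {
+    global.gc();
+    const before = process.memoryUsage().heapUsed;
+    const keep = new Array(n);
+    for (let i = 0; i < n; i++) keep[i] = makeOne();
+    global.gc();
+    const after = process.memoryUsage().heapUsed;
+    console.log((after - before) / n, 'bytes per item (including the array slot)');
+    return keep; // hold the reference so the GC cannot collect the objects
+}
+
+measure(() => ({})); // empty object
+measure(() => []);   // empty array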

At this point I'm thinking that allocating a huge blob and manually converting JS/V8 native types into bytes stored there is the only way to actually have a clue, which sounds pretty lame.

+ +

Is there a practical way to actually utilize my server's physical memory near its limit, without the fear of running out?

+",143433,,,,,44163.75069,Memory management in node.js,,1,2,1,,,CC BY-SA 4.0, +405207,1,,,2/15/2020 0:24,,0,624,"

I primarily work with C#, .NET and Visual Studio 2019, with extensions like ReSharper enabled. Visual Studio with ReSharper is a memory hog.

+ +

I currently have the following PC at home where I do my development work:

+ +
  • CPU - Ryzen 7 2700X
  • RAM - 2 x 16 GB 3000 MHz
  • Storage - 512 GB SSD - WD Blue
  • Motherboard - Asus ROG Strix B350M-I Gaming
  • GPU - Gigabyte GeForce Windforce GTX 1080 8 GB
  • OS - Win 10 Pro

With a few Firefox windows and a few instances of Visual Studio debugging Docker containers, I hit 100% CPU utilization and 100% memory utilization for 3-4 minutes. The machine slowed to a crawl; even the mouse wasn't moving smoothly.

+ +

The build-compile-debug-run-tests cycle could also do with faster speeds; it's not good for productivity when I sit idle and watch build progress. I would like to auto-run unit tests on every saved code change, to ensure I haven't broken anything and to have a rapid feedback loop. I can't run this effectively now, because it slows down the build-compile-run-test flow.

+ +

Does the Visual Studio build/compile/debug/run-tests workflow benefit from multi-core CPUs, or from higher single-core clock speeds?

+ +

Looking at the results at https://www.cpubenchmark.net/high_end_cpus.html, what do these scores mean for a workflow like mine?

+ +
  • The Ryzen 7 2700X has a score of 16,927.
  • The Ryzen 9 3900X has a score of 31,943 (almost twice the score).
  • The Ryzen 9 3950X has a score of 35,702 (more than double that of the 2700X, and around 11% more than the 3900X).
+ +

The Ryzen 9 3950X is 50% more expensive than the Ryzen 9 3900X, but its CPU benchmark score is only 11% higher. Is that synthetic score not relevant for my workflow? Would having the extra cores (compared to the 3900X) help me a lot?

+ +

So would this mean that if my build/compile takes around 60 s now, getting a Ryzen 3900X would bring it to near 30 s?

+ +

Right now I have a SATA SSD (a Western Digital Blue); would getting an NVMe SSD help?

+ +

I am also thinking of upgrading to 64GB RAM.

+",115186,,110531,,43877.63264,43877.63264,"How to speed up my software development workflow? I'm using Visual Studio 2019 to build, compile, run tests for dotnet, C#, Docker containers, etc",,2,4,,,,CC BY-SA 4.0, +405217,1,405220,,2/15/2020 7:05,,-3,76,"

I saw the definition below on https://http2.github.io/:

+
+

What is HTTP/2?

+

HTTP/2 is a replacement for how HTTP is expressed “on the wire.” It is not a ground-up rewrite of the protocol; HTTP methods, status codes, and semantics are the same, and it should be possible to use the same APIs as HTTP/1.x (possibly with some small additions) to represent the protocol.

+

The focus of the protocol is on performance; specifically, end-user perceived latency, network and server resource usage. One major goal is to allow the use of a single connection from browsers to a Web site.

+
+

My questions are:

+
  1. In the above definition, what does "single connection from browsers to a Web site" mean? After all, we can open multiple instances of a website from a browser.
  2. While writing an API, how can I specify that the HTTP/2 protocol should be used?
  3. When we say "the HTTP/2 protocol", does it include HTTP as well, or is it just the term for the new protocol only?
+

PS: Not sure about the title of the question, feel free to correct it.

+",351766,,-1,,43998.41736,43876.39444,What exactly is HTTP/2 protocol,,1,2,,,,CC BY-SA 4.0, +405221,1,,,2/15/2020 8:43,,3,354,"

This is my first PM experience. I have put together a developer team and we want to work on an Angular/Node.js project.

+ +

I have defined some tasks and divided the project into different modules/components. For example, each page of the website has a component. We also have a routing component, different services, etc.

+ +

As this is my first project, I decided to create a basic framework for the project and put the routing component and other basic services into it. I will then put it into a public repository that every developer on the project can clone to start his/her work. But I will also put it inside a private repository to which the other developers can only commit (send pull requests) once their tasks are done; the only person who can see, accept, and hold the whole project will be me.

+ +

I'd like to follow the Scrum methodology, but only after having solved the above problem. Since I don't have previous experience, I don't know how right or wrong my idea is, and I would like your advice: am I on the right track?

+",334479,,209774,,43876.56181,43889.83889,How to ensure that developers see only the project modules they are working on?,,4,7,1,,,CC BY-SA 4.0, +405223,1,405234,,2/15/2020 9:39,,1,218,"

I have a .NET Core API, and I am trying to implement it according to domain-driven design principles.

+ +

In the domain layer there is a public Create method that contains all validation and business rules. If one fails, it throws an exception. So basically you cannot create an invalid entity or an invalid value type. For example:

+ +
public class Adult
+{
+   private Adult(string name,int age)
+   {
+       this.Name = name;
+       this.Age = age;
+   }
+
+   public string Name { get; }
+
+   public int Age { get; }
+
+   public static Adult Create(string name, int age)
+   {
+      if (string.IsNullOrWhiteSpace(name))
+          throw new ArgumentException(nameof(name));
+
+      if (age < 18)
+         throw new ArgumentException(""age should not be less that 18"");
+
+     return new Adult(name, age);
+ }
+
+}
+
+ +

In the web layer, my web DTOs have the basic validation attributes. For example:

+ +
public class AdultWebDto
+{
+   [Required]
+   public string Name {get; set; }
+
+   public int Age {get; set; }
+}
+
+ +

So basically, if the user tries to create an adult with no name, it will return a 400 error; but if he tries to create one with an age less than 18, it will pass the DTO validation and return a 500 error.

+ +

Of course, if the user requests an Adult with either an invalid name or an invalid age, it will also return a 500 error, since the data coming from the repository was invalid.

+ +

Is what I am doing right? That is, should simple validation rules be checked in the web layer and return a 400 error, while violations of more complicated business rules return a 500 error?

+",156652,,156652,,43876.41319,43876.82431,What http error to return in case of validation and business rules in a domain driven design api,,3,1,,,,CC BY-SA 4.0, +405226,1,405227,,2/15/2020 10:19,,2,86,"

I'm implementing a custom templated container as part of a learning project in C++. The container makes use of different components, such as serialization, memory management and iterators. I am wondering what the best way to organise the code is.

+ +

Option 1: Put everything into one header file:

+ +

I've seen this in several examples online, but by doing this I'm going to end up with a file several thousand lines long:

+ +
// container.hpp
+template <typename params>
+class Container {
+
+public:
+    // params is a struct with typedefs and static variables 
+    // used to configure the container at compile time
+
+    // I define additional typedefs and static variables here from params::
+
+public:
+    class iterator {
+
+        // can use the typedefs and variables above
+    };
+
+    class memory {
+
+        // can use the typedefs and variables above
+    };
+};
+
+ +

Option 2: Split the code into multiple headers

+ +

Each file contains the implementation of 1 component.

+ +
// memory.hpp
+template <typename Container>
+class memory {
+
+public:
+    // typedefs and static variables from Container::
+
+};
+
+ +
+ +
// iterator.hpp
+template <typename Container>
+class iterator {
+
+public:
+    // typedefs and static variables from Container::
+
+};
+
+ +
+ +
// container.hpp
+#include ""iterator.hpp""
+#include ""memory.hpp""
+
+template <typename params>
+class Container {
+public: 
+    // typedefs and static variables here from params::
+
+
+    typedef Container<params> self_type;
+
+
+    iterator<self_type> begin();
+
+private:
+    memory<self_type> mem;
+};
+
+ +

What are the arguments for choosing between them?

+ +

Solution 1 seems cleaner; something bugs me about having to pass the container type to the memory and iterator classes in solution 2. But at the same time, in solution 2 the code is organised into different files, each containing the implementation of one component.

+ +

Is it just a matter of personal preference? Or are there objective reasons to choose one solution over the other? Or is there a completely different and better approach?

+",357538,,209774,,43876.45764,43876.49583,Organization of C++ source code for reusable components,,2,1,,,,CC BY-SA 4.0, +405232,1,,,2/15/2020 14:28,,3,60,"

Context: a system tracks some sort of transactions (e.g. money flow) for its whole user base. At the end of the month, each entity capable of receiving transactions has to be sent exactly one bill containing all of its transactions for the current month. This system is instanced, meaning that there are multiple instances (servers) of the system running.

+ +

How should this sort of scenario be handled? I must not allow duplicate monthly transaction evaluations. Given the nature of the system, a master-slave architecture is ruled out, as master election (or master-slave in general) does not scale to a large number of instances.

+ +

I would like to spread the load of the monthly transaction aggregation across all running instances. The transactions are stored in a database in the form (user_id, transaction_target_id, transaction_amount). Failed monthly aggregations should be retried by another instance.

+ +

In short: I need exactly-once guarantees for a distributed system without a master-slave concept

+",336564,,336564,,43877.07847,43877.91667,Managing multi-server monthly transaction aggregations,,1,0,2,,,CC BY-SA 4.0, +405233,1,,,2/15/2020 14:57,,-2,59,"

In a tennis match, each game has the possible scores of Love, 15, 30, 40, or Advantage. This could be modeled as a dictionary [0: ""Love"", 1: ""15"", 2: ""30"", 3: ""40"", 4: ""Advantage""] or as enumerated type cases. However, in the situation where both players win six games, the next game is a tiebreak, where points are simply scored as 0, 1, 2, etc., and the first player to reach 7 with a margin of 2 wins the game. What would be a good approach for taking both modes into account when scoring?
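
+ +

To make the two modes concrete, here is a minimal sketch of the state I have in mind (a Python sketch; all names are just placeholders):

+ +
GAME_POINTS = {0: 'Love', 1: '15', 2: '30', 3: '40', 4: 'Advantage'}
+
+class GameScore:
+    # Standard game: points are rendered through the GAME_POINTS table.
+    def __init__(self):
+        self.points = [0, 0]             # one counter per player
+    def display(self, player):
+        return GAME_POINTS[self.points[player]]
+
+class TiebreakScore:
+    # Tiebreak: points are shown as plain integers.
+    def __init__(self):
+        self.points = [0, 0]
+    def display(self, player):
+        return str(self.points[player])
+    def winner(self):
+        a, b = self.points
+        if max(a, b) >= 7 and abs(a - b) >= 2:   # first to 7 with a margin of 2
+            return 0 if a > b else 1
+        return None
+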

+",267972,,,,,43876.96806,"In a tennis scoring program, what would be a good approach for storing and presenting both standard games and tiebreaks?",,1,3,,,,CC BY-SA 4.0, +405236,1,,,2/15/2020 16:04,,-3,50,"

I'm trying to find the best design approach to handle a design change in a new project I'm working on.

+ +

At the moment, the flow runs and makes some calculations based on a parameter which is used.

+ +

Now, different parameters need to be added, and a new flow would have to be run for each of the other params as well.

+ +

Just to emphasize, the flow of the logic doesn't change, it remains the same.

+ +

However, since the flow now needs to be run not only for one parameter but for others, and potentially more in the future, the output would change for each flow.

+ +

I want to refactor it in such a way that another change in the future won't cause a major rework.

+ +

Updated

+ +

Ok, so I added some contrived code which represents the code flow.

+ +

var service = new Service();
+service.Start();
+
+class Service {
+    public void Start() {
+        var manager = new LogicManager();
+        manager.Calculate();
+    }
+}
+
+class LogicManager {
+    public void Calculate() {
+        var dataProvider = new DataProvider();
+        dataProvider.GetData(1);
+    }
+}
+
+class DataProvider {
+    public void GetData(int filter) {
+        // common logic
+
+        // separate logic per filter
+    }
+}
+
+ +

The filter itself is used for a query against a data source.

+ +

Inside DataProvider, I managed to extract the common code, regardless of the param being passed, into different methods, which leaves me with the logic that needs to be executed per filter.

+ +

Now, I can create a dictionary/array in the provider and iterate over it in a simple loop, but I'm not sure if that is the best way.

+ +
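
For concreteness, the dictionary idea would look roughly like this (a Python sketch for brevity; the names are hypothetical):

+ +
class DataProvider:
+    def __init__(self):
+        # One handler per filter; a new parameter just adds an entry here.
+        self.per_filter_logic = {
+            1: self._filter_one_logic,
+            2: self._filter_two_logic,
+        }
+
+    def get_data(self, filter_id):
+        self._common_logic()                      # logic shared by every filter
+        return self.per_filter_logic[filter_id]()
+
+    def _common_logic(self):
+        pass
+
+    def _filter_one_logic(self):
+        pass                                      # query variant for filter 1
+
+    def _filter_two_logic(self):
+        pass                                      # query variant for filter 2
+

+ +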

@candied_orange: +What you are saying is what I was thinking about - separate what is changing - which is the param. +However, I can't find a suitable pattern for it.

+",357560,,357560,,43877.57639,43877.57639,Different input param for the same logic flow,,1,4,,,,CC BY-SA 4.0, +405237,1,,,2/15/2020 16:50,,1,64,"

I'm making a chat server using sockets and a MySQL database, and after it's working I want to expand it to become a more complex game server.

+ +

I want to know whether my design is missing anything.

+ +

Specifically, I'm worried about database access from different threads, or whether I may be creating too many threads, and whether there is likely to be a bottleneck somewhere.

+ +

I have 3 functions:

+ +

The main function starts a second thread for the ServerHandler function, which loops and accepts new client connections. ServerHandler then opens a new thread running a ClientHandler function for each client connection.

+ +

The code/pseudocode (so far incomplete while I'm still considering the architecture) is below. I'm writing it in Scala.

+ +

My main questions are regarding the ClientHandler function and whether I'm doing anything in the wrong order. Will I be at risk of separate client threads making database writes & reads in an unexpected order causing unreproducible issues?

+ +

I wonder whether I need a separate thread with a list of commands to be executed so I can be sure that 1 client writes then reads, then another client writes then reads, etc. Do I need to maintain some database read/write list? Or is that handled by the database server somehow?

+ +

Mostly I'd just like to be enlightened about my lack of understanding of database reads/writes across different threads, and whether there's something obvious that I'm doing wrong here. Does the program structure design look good?

+ +

Regarding threads, will I need to test the server with many client connections (1k, 10k, 100k?) and then set some client connection limit to what I think is safe?

+ +
// ** ClientHandler **
+// This is run in a new thread for each client that connects to the ServerHandler
+// This does:
+// 1. Set up input and output streams to communicate with 1 client
+// 2. Set up connection to database (MySQL server running on the same machine), including:
+//    a. Establish connection
+//    b. Read some data from the database (latest version #, # of clients connected, etc)
+// 3. Send welcome message to client (latest version #, # of clients connected, etc)
+// 4. Set ""startTime"" variable to current system time in milliseconds to detect client timeout
+// 5. Loop and do this (nothing is blocking, so irrelevant steps will be skipped):
+//    1. Handle client message (if there is a new one), including:
+//       a. Parse client message, including:
+//          i. If we have not verified the client yet, we only accept one command: ""connect""
+//          ii. On ""connect"", we verify the ID and update the database (most recent log in time)
+//       b. Database reads/writes/updates as necessary, depending on the command
+//       c. Send a response message back to the client with the results
+//    Even if there is no client message received, we do this:
+//    2. Check for server-side updates that should be notified to the client, including:
+//       a. Database reads
+//       b. Timestamp comparisons, checking when we last notified the client of the server state
+//    3. Notify client of any server-side state changes (if necessary)
+// 6. If the client times out (~5000ms), update the database that the client has disconnected
+// 7. Close database connection
+// 8. Close socket
+ClientHandler(socket){
+    // Set up input and output streams to communicate with 1 client
+    val inputstream = new BufferedReader(new InputStreamReader(socket.getInputStream()))
+    val outputstream = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()))
+
+    // Set up connection to database (MySQL server running on the same machine)
+    val db = Database.forConfig(""mydb"")
+    // Read some data from the database (latest version #, # of clients connected, etc)
+    ...
+
+    // Send welcome message to client (latest version #, # of clients connected, etc)
+    outputstream.write(...).newLine().flush()
+
+    // Set ""startTime"" variable to current system time in milliseconds to detect client timeout
+    var startTime = System.currentTimeMillis()
+
+    //
+    ...
+}
+
+// ** ServerHandler **
+// This runs once in its own thread (because it contains a blocking call)
+// This creates a new thread to run ClientHandler for each client that connects
+ServerHandler(server_socket){
+    while(true){
+        val socket = server_socket.accept() // this is a blocking call
+        val client_handler = new ClientHandler(socket)
+        val thread = new Thread(client_handler)
+        thread.start() // when a client connects, start a new thread to handle that client
+    }
+}
+
+// ** main **
+// This does 3 things:
+// 1. Start a new thread for ServerHandler which sits and waits for clients to connect
+// 2. Loop and accept commands from admin via console
+// 3. Process server-side logic in real-time
+main{
+    // Start a new thread for ServerHandler
+    val server_socket = new ServerSocket(10000) // listen on port 10000
+    val server_handler = new ServerHandler(server_socket)
+    val thread = new Thread(server_handler)
+    thread.start() // start a thread for the server handler
+
+    // Loop and accept commands from admin via console
+    while(true){
+        input match{
+            case ""stats"" =>
+                // print stats (# of clients logged in etc)
+            case ""quit"" =>
+                // close server_socket etc and stop program
+            case _ =>
+                // unrecognized command
+        }
+    }
+}
+
+",357563,,353068,,43876.98542,43876.98542,Is my server design safe regarding multiple threads and concurrent database reads/writes?,,1,3,,,,CC BY-SA 4.0, +405243,1,,,2/15/2020 19:19,,2,626,"

I have a Python library that performs a kind of calculation given a parameter-object. A requirement of the parameter object is that it be both hashable and serializable. It's a long calculation, so it memoizes itself. Let's say this is implemented something like this:

+ +
# Notes: params *must* be serializable and hashable;
+#        equal params *always* produce equal results.
+def long_calc(params):
+   if params in long_calc._cache:
+      return long_calc._cache[params]
+   result = _long_calc_but_with_math(params)
+   long_calc._cache[params] = result
+   return result
+long_calc._cache = {}
+
+ +

In particular, this calculation is really slow, so I would like to add an optional feature to the function such that, if you provide it with a cache location, it will check for a cache file and either load/return it instead of running the calculation or, if that file is not found, will calculate the result and save it before returning it. This would be something like this:

+ +
def long_calc(params, cache=None):
+   if params in long_calc._cache:
+      return long_calc._cache[params]
+   if cache is not None and os.path.isfile(cache):
+      return _load_calc_result(cache)
+   result = _long_calc_but_with_math(params)
+   long_calc._cache[params] = result
+   if cache is not None:
+      _save_calc_result(cache, result)
+   return result
+long_calc._cache = {}
+
+ +

In typical use, this calculation will be run many times with many sets of parameters, and a given Python instance will likely examine many results. Accordingly, I would like to make this easier for the user and allow them to provide just a cache directory; the long_calc function would then be responsible for ensuring that its contents represented a correct/unique hashing of the cached data. My first idea for how to do this was to use the parameter-hashes as directory names; each directory represented a set of parameter values, all of which have the same hash and thus all collide in this schema. The parameters themselves would be serialized out to a file key-001.pickle and the result to val-001.pickle so that exact parameters could be checked and collisions resolved. I realize this is slow, but the calculations are plenty long enough to justify the wait. For context, this is scientific software, so security is not really a concern, but reproducibility is. I wrote this approach up something like this:

+ +
def long_calc(params, cache=None):
+   if params in long_calc._cache:
+      return long_calc._cache[params]
+   if cache is not None:
+      cache = _cache_filename(params, cache)
+      if os.path.isfile(cache):
+         return _load_calc_result(cache)
+   result = _long_calc_but_with_math(params)
+   long_calc._cache[params] = result
+   if cache is not None:
+      _save_calc_result(cache, result)
+   return result
+long_calc._cache = {}
+
+def _cache_filename(params, cache_dir):
+   # Get a hash for the params:
+   h = hash(params)
+   # Turn it into a directory name
+   hstr = ('p' if h > 0 else 'n') + str(abs(h))
+   hdir = os.path.join(cache_dir, hstr)
+   # make it if it doesn't exist
+   os.makedirs(hdir, mode=0o755, exist_ok=True)  # don't fail if it already exists
+   # look for a key that either matches or is not yet filled
+   k = 0
+   while True:
+      # get the keyfile's name
+      kfilename = os.path.join(hdir, 'key_%d.pkl' % k)
+      if not os.path.isfile(kfilename):
+         # Claim this spot
+         _save_params(kfilename, params)
+         break
+      v = _load_params(kfilename)
+      if params == v:
+         break
+      k = k + 1
+   # return the value file that matches the key file 
+   return os.path.join(hdir, 'val_%d.pkl' % k)
+
+ +

For the record I know there's a race condition here and am not worried about it.

+ +

This approach worked just fine and tested just fine... until I restarted my Python instance. It turns out that Python's hash function is intentionally salted as a security measure, so the line h = hash(params) isn't producing a consistent hash, which is what I need. This puts me in a bit of a bind, because previous versions of the software have already established the norm that the params need to be hashable (i.e. via the hash() function) to be valid. Changing this requirement to instead say that the params object must be hashable by some other library's scheme will break code unless that other library has a drop-in replacement for hash().

+ +

TLDR; Question 1: Is there a drop-in replacement for hash(x) that yields a consistent hash of x for any (or almost any) x normally hashable by hash(x)? Note that I'm willing to sacrifice a few unusual edge cases regarding the type of x if there's a drop-in for this that is pretty close.

+ +

Other answers about this kind of question have pointed at hashlib, but it looks to me like, with that library, I would need to convert objects like frozensets to a unique string of bytes for hashing myself, which means it doesn't offer a clear replacement for hash(). (I can't guarantee that equal parameter objects will have identical serialization strings, only that, once deserialized, the objects will again be equal.)

+ +
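
To illustrate what I mean by a drop-in replacement, here is a minimal sketch of the kind of function I am after, assuming params are built out of common built-in types (my own sketch, not an existing library API):

+ +
import hashlib
+
+def stable_hash(x):
+    # Deterministic stand-in for hash(); caveat: 1 and 1.0 hash
+    # differently here, unlike the built-in hash().
+    h = hashlib.sha256()
+    _update(h, x)
+    return int.from_bytes(h.digest()[:8], 'little', signed=True)
+
+def _update(h, x):
+    if isinstance(x, (bool, int, float, str, bytes, type(None))):
+        h.update(repr(x).encode())               # repr distinguishes 1 from '1'
+    elif isinstance(x, (tuple, list)):
+        h.update(b'seq')
+        for item in x:
+            _update(h, item)
+    elif isinstance(x, (set, frozenset)):
+        h.update(b'set')
+        for item in sorted(x, key=stable_hash):  # order-independent
+            _update(h, item)
+    elif isinstance(x, dict):
+        h.update(b'map')
+        for k in sorted(x, key=stable_hash):
+            _update(h, k)
+            _update(h, x[k])
+    else:
+        raise TypeError('no stable-hash rule for %r' % type(x))
+

+ +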

TLDR; Question 2: Is there an easy way to make an object that is hashable (via hash()) and serializable (into a byte-string that can be different from that of another equal object) work with existing hash-libraries like hashlib?

+ +

A bit of research shows that you can pass Python the environment variable PYTHONHASHSEED=0 to disable salting. This is basically what I want, but for various reasons I don't think it's a good idea to force/require my users to do this themselves, and, as far as I can tell, you can't update the hash-seed during a python process. This has led me to the uncomfortable conclusion that the best way to get at this problem is possibly to fork/exec and have the child-process manage the calculation/caching of the result with an updated seed. I think that this is a pretty bad solution, and it's hard for me to imagine that there would be a consistent hash algorithm deep in python that just can't be accessed in any other way than this one.

+ +

TLDR; Question 3: Is there a way to temporarily disable the python hash salting aside from starting a new python process?

+ +

One last piece of context is that it's okay if a cache directory for one node/OS/python-version isn't compatible with that of another. But if I have a local cache on my local machine that is being updated within the same context, it should all work correctly. (Also it would be even better if the cache were consistent across all these things! it's just not required of a solution.)

+ +

Any other solutions that fall within the constraints I've laid out would also be very welcome! Thanks in advance.

+",183467,,183467,,43876.80903,44147.00278,How to perform consistent hashing on any Python object that works with hash()?,,1,0,,,,CC BY-SA 4.0, +405244,1,405266,,2/15/2020 19:25,,36,6792,"

Suppose I'm creating a game played on a 2D coordinate grid. The game has 3 types of enemies which all move in different ways:

+ +
    +
  • Drunkard: moves using type 1 movement.
  • +
  • Mummy: moves using type 1 movement, except when it's near the main character, in which case it will use type 2 movement.
  • +
  • Ninja: moves using type 3 movement.
  • +
+ +

Here are the ideas I've come up with in organizing the class hierarchy:

+ +

Proposal 1

+ +

A single base class where each enemy is derived from:

+ +
abstract class Enemy:
+    show()   // Called each game tick
+    update() // Called each game tick
+    abstract move() // Called in update
+
+class Drunkard extends Enemy:
+    move() // Type 1 movement
+
+class Mummy extends Enemy:
+    move() // Type 1 + type 2 movement
+
+class Ninja extends Enemy:
+    move() // Type 3 movement
+
+ +

Problems:

+ +
    +
  • Violates DRY since code isn't shared between Drunkard and Mummy.
  • +
+ +

Proposal 2

+ +

Same as proposal 1 but Enemy does more:

+ +
abstract class Enemy:
+    show()            // Called each game tick
+    update()          // Called each game tick
+    move()           // Tries alternateMove, if unsuccessful, perform type 1 movement
+    abstract alternateMove() // Returns a boolean
+
+class Drunkard extends Enemy:
+    alternateMove(): return False
+
+class Mummy extends Enemy:
+    alternateMove() // Type 2 movement if in range, otherwise return false
+
+class Ninja extends Enemy:
+    alternateMove() // Type 3 movement and return true
+
+ +

Problems:

+ +
    +
  • Ninja really only has one move, so it doesn't really have an ""alternate move."" Thus, Enemy is a subpar representation of all enemies.
  • +
+ +

Proposal 3

+ +

Extending proposal 2 with a MovementPlanEnemy.

+ +
abstract class Enemy:
+    show()   // Called each game tick
+    update() // Called each game tick
+    abstract move() // Called in update
+
+class MovementPlanEnemy:
+    move() // Type 1 movement
+    abstract alternateMove()
+
+class Drunkard extends MovementPlanEnemy:
+    alternateMove() // Return false
+
+class Mummy extends MovementPlanEnemy:
+    alternateMove() // Tries type 2 movement
+
+class Ninja extends Enemy:
+    move() // Type 3 movement
+
+ +

Problems:

+ +
    +
  • Ugly and possibly over-engineered.
  • +
+ +

Question

+ +

Proposal 1 is simple but has a lower level of abstraction. Proposal 3 is complex but has a higher level of abstraction.

+ +

I understand the whole thing about ""composition over inheritance"" and how it can solve this whole mess. However, I have to implement this for a school project which requires us to use inheritance. So given this restriction, what would be the best way to organize this class hierarchy? Is this just an example of why inheritance is inherently bad?

+ +

I guess since my restriction is that I have to use inheritance, I'm really asking the broader question: in general, when is it appropriate to introduce a new layer of abstraction at the cost of complicating the program architecture?

+",357570,,198876,,43876.93819,43883.43542,When is it appropriate to introduce a new layer of abstraction into a class hierarchy?,,10,8,17,,,CC BY-SA 4.0, +405247,1,405250,,2/15/2020 21:19,,3,1355,"

+ +

As stated in Arlow, J., and Neustadt, I., UML 2 and the Unified Process, 2nd ed., there are 7 types of relationships between different objects.

+ +
    +
  1. Dependency

  2. +
  3. Association

  4. +
  5. Aggregation

  6. +
  7. Composition

  8. +
  9. Containment

  10. +
  11. Generalization

  12. +
  13. Realization

  14. +
+ +

But I have read in somewhere else that if a relationship has one of the following conditions, it must be an aggregation relationship:

+ +

a) Membership b) Containment c) Assembly

+ +

The problem is that I can't figure out the difference between the Aggregation and Containment relationships as two separate relationships!

+",352516,,352516,,43876.98056,43884.87639,What is the difference between containment and aggregation relationship in UML?,,2,0,,,,CC BY-SA 4.0, +405248,1,,,2/15/2020 21:26,,-3,96,"

I need to check my understanding regarding the spiral process model as it is confusing me.

+ +

According to my understanding, the spiral model is similar to the waterfall model with the activities as follows:

+ +
    +
  1. Requirements analysis
  2. +
  3. design
  4. +
  5. implementation
  6. +
  7. testing
  8. +
+ +

But in each phase we do the following: first we do a risk analysis, in which we discover and resolve risks; then we implement the phase itself (whether it is the requirements analysis phase, the design phase, and so on); and then we plan for the next phase.

+ +

Is my understanding right?

+",351790,,332877,,43895.57014,43895.57014,How is the spiral model to be understood?,,1,1,1,,,CC BY-SA 4.0, +405256,1,405287,,2/15/2020 23:45,,1,50,"

I've been reading Unicode's core specification (see https://www.unicode.org/versions/latest/). I mostly understood what the text was explaining in section 2.1 Architectural Context until it started talking about layout behaviors. It feels like there's no preface or explanation of the phrase 'layout behavior of characters'.

+ +

Section 2.1 explains how a character encoding must be designed with text processes and algorithms in mind since an encoding choice can make text rendering and other processes more complex (or simpler, depending on the choice).

+ +

The specification then continues under a ""Character Identity"" sub-header that uses multiple phrases including the word 'layout'. Here are some examples from the text:

+ +
    +
  • ""Whenever Unicode makes statements about the default layout behavior of characters, it is done to ensure that users...""
  • +
  • ""The actual layout in an implementation may differ in detail.""
  • +
  • ""A mathematical layout system, .., will have many domain-specific rules for layout..""
  • +
  • ""The purpose of defining Unicode default layout behavior is not to enforce a single and specific aesthetic layout for each script..""
  • +
+ +

What does 'layout' and/or 'layout behavior of characters' mean in this context?

+",357577,,,,,43877.69236,Layout Behavior of Characters (question about unicode standard),,1,0,,,,CC BY-SA 4.0, +405268,1,405269,,2/16/2020 7:17,,23,5046,"

I have a C header that is generated from a CSV file and a python script. The C header mainly contains a list of #define constants.

+ +

I want to be able to detect manual changes to this header during compilation (which tends to happen frequently in this early phase of development), and have the compiler display a warning to indicate to the developer to update the CSV file and regenerate the header.

+ +

If I were to go about doing this, I would have the python script generate some kind of metadata about the file itself, perhaps a hash, and then the compiler would somehow check this hash and compare to what's in the file. But I'm not sure what's the best way to go about it. Does GCC have any facilities I can use for this kind of thing?
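
+ +

To sketch the direction I was imagining (Python on the generator side, plus a pre-build check that a Makefile rule could run; all names here are made up):

+ +
import hashlib
+
+def emit_header(defines, path):
+    # Write the generated header with a checksum of its own body.
+    body = '\n'.join('#define %s %s' % (name, value) for name, value in defines)
+    digest = hashlib.sha1(body.encode()).hexdigest()
+    with open(path, 'w') as f:
+        f.write('/* AUTOGENERATED - checksum: %s */\n' % digest)
+        f.write(body + '\n')
+
+def check_header(path):
+    # Pre-build step: warn if the body no longer matches the stored checksum.
+    with open(path) as f:
+        first, body = f.readline(), f.read()
+    stored = first.split('checksum:')[1].split()[0]
+    if hashlib.sha1(body.rstrip('\n').encode()).hexdigest() != stored:
+        print('warning: %s was edited by hand; update the CSV and regenerate' % path)
+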

+",124792,,209774,,43877.65139,43881.54028,Detect manual changes to an autogenerated C header,,8,8,2,43879.80347,,CC BY-SA 4.0, +405276,1,405282,,2/16/2020 12:13,,4,114,"

UPDATE I totally get why this question could/should get dinged ... but hey, if you feel inclined to vote down this question, can you also provide some feedback on where a better forum could be for me to take this question?

+ +

I am at the start of a new effort to create an internal product for my company. So, right off the bat, the stakes aren't crazy high since my customers are my co-workers across the org (I personally work with a team of 8 developers).

+ +

Our main focus is to combine our multiple (~4) services under one, unified product (so a lot of refactoring, merging, and complete rewrites of code).

+ +

We're starting to cobble together the plan/strategy for this new product and I've been questioning a lot of our existing strategies to see if there is an advantage to another approach.

+ +
+

When is it appropriate to have scheduled releases (vs. pushing to prod when a sizable effort has been completed)?

+
+ +

A stakeholder asked me about a release schedule and my first thought was, ""Why don't we do this?""

+ +

We can definitely continue with ad-hoc pushes, but is there a value-add to telling the business owners/stakeholders that they should be able to expect a release every third Monday of a month (for example)?

+",357601,,357601,,43877.51528,43878.38125,When to have Scheduled Releases (vs pushing to production when a ticket is complete)?,,2,4,1,,,CC BY-SA 4.0, +405279,1,,,2/16/2020 12:59,,1,33,"

I am new to neo4j, and the graph I'm shaping will be used by third-party applications for some fixed (Cypher) queries - think of the classical ""Who is a friend of Alice?"" question.

+ +

I'd like these queries to be easily requested from the other applications - that can be developed in different languages - while not having them reimplement each query nor putting too many layers between the client application and the neo4j engine (that to me would slow down responses).

+ +

These questions can, of course, be answered entirely by Cypher queries, using only resources from the neo4j database itself.

+ +

So the graph should act as a service to other applications: what is the best way to provide such functionality? I'm aiming at two goals: a) centralize query contents; b) minimize layers overhead.

+ +

A couple of solution ideas came to mind:

+ +
    +
  1. Add a service layer in front of the graph database, written in one language, that clients may send request to
    +So the path would be client (i.e. Node.js) -> (Rest perhaps?) -> service layer (Python?) -> Cypher (over Bolt) -> neo4j
  2. +
  3. neo4j UDF: so client (i.e. Node.js) -> function over Bolt -> neo4j
    +I didn't understand whether these would perform similar to native functions
  4. +
  5. Client (i.e. Node.js) -> Cypher over Bolt -> neo4j
    +Of course this would duplicate query logic in each client and potentially generate errors
  6. +
  7. Other options I haven't found / thought about?
  8. +
+ +

If this were an SQL database, I'd have written stored functions for everything and clients could just do SELECT fx(data) - but this being neo4j, I'd like to hear some advice.

+ +

Solution #2 seems to be the best for me - I don't care if that leaves clients the chance to issue other queries; maybe I can block them with permissions. Of course if another graph database offers today better support for my needs, I'll evaluate that too.

+",145966,,209774,,43877.63889,43877.63889,Queries as a service for other applications (neo4j),,0,0,1,,,CC BY-SA 4.0, +405281,1,405417,,2/16/2020 15:40,,1,120,"

I want to create a multi-layered backend architecture with a REST layer and GraphQL layer later on.

+ +

So let's say you start with the basic layers (controller, service, repository): would it make sense to create request/response interfaces for each layer?

+ +

When creating a user you might come up with

+ +
    +
  • create-user DTO => controller request => holding username and password from the body
  • +
  • user DTO => controller response => user object sent back to the client
  • +
  • create-user BO => service request => request data coming from the controller
  • +
  • user BO => service response => user object coming from the service
  • +
  • create-user DAO => repository request => request data coming from the service
  • +
  • user DAO => repository response => the database user
  • +
+ +

So each layer would define its own input and output data. Would this be considered best practice, or are there better ideas?

+ +
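
In code, the boundary types I have in mind would look roughly like this (a Python sketch; the names and the hashing helper are hypothetical, for illustration only):

+ +
import hashlib
+from dataclasses import dataclass
+
+def hash_password(plain: str) -> str:
+    # Illustrative only; a real system would use a proper password KDF.
+    return hashlib.sha256(plain.encode()).hexdigest()
+
+@dataclass
+class CreateUserDto:      # controller request: username/password from the body
+    username: str
+    password: str
+
+@dataclass
+class CreateUserBo:       # service request, built from the DTO
+    username: str
+    password_hash: str
+
+@dataclass
+class UserDao:            # repository-level user, i.e. the database row
+    id: int
+    username: str
+    password_hash: str
+
+def to_bo(dto: CreateUserDto) -> CreateUserBo:
+    # The mapping between layers lives at the boundary between them.
+    return CreateUserBo(dto.username, hash_password(dto.password))
+

+ +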

It would also be possible to create a separate library for each layer and the controller would consume the service library.

+",317611,,209774,,43877.65833,43879.92847,Define input and output interfaces for each application layer,,1,5,,,,CC BY-SA 4.0, +405284,1,405292,,2/16/2020 16:24,,2,145,"

The spiral model is a risk-driven SDLC model. There are many diagrams that describe this SDLC model; one of them was shown here (diagram omitted).

+

As we can see, there are many iterations (one for the concept of operation, one for requirements, and so on), and every iteration has its own prototype.

+

For example, there is a prototype associated with the requirements analysis iteration, but what is the meaning of the prototype developed for this iteration? The same can be asked for the design iteration. Could someone please explain these prototypes, with an example if possible? One more question: how can prototyping reduce risks?

+",352196,,31267,,44196.66458,44196.66458,Prototyping in Spiral model,,2,1,,,,CC BY-SA 4.0, +405286,1,405299,,2/16/2020 16:37,,-2,74,"

I am a CIS student and want to follow industry best practices for organizing projects. I find my desktop and dev folder cluttered with different projects, and I would like some advice on how to organize them like a professional. Thanks.

+",357611,,,,,43877.88056,Industry standard for organizing dev work,,1,4,,43879.625,,CC BY-SA 4.0, +405289,1,,,2/16/2020 17:08,,23,2393,"

I am trying to plan a system which validates the compatibility of different components by comparing their semantic version numbers, especially the major release number (since it indicates API changes and backwards compatibility). I came across the following scenario for which I could not find an exact answer:

+ +

Let's say the code is on version 2.3.5 and I add a new major API change, therefore updating the version to 3.0.0. However, a few days after releasing, I find that this change does not suit the users' needs, and I revert it so that everything that was compatible with the 2.x.x versions will be compatible again (note that I'm not doing a version-control revert, but rather putting the old version of the code back in a regular commit). Now I can't figure out whether the new version should be 4.0.0, because I again made a major API change and the numbers should always be incremented, or 2.4.0, because it will be backwards compatible again.

+ +

I see advantages and problems with both solutions. Are there ground rules or best practices for such cases?

+",352123,,,,,43878.9125,How to change the semantic version number when reverting the last major change,,3,4,2,,,CC BY-SA 4.0, +405290,1,,,2/16/2020 17:19,,1,24,"

After some extensive research I still don't know how to properly implement the following case. I think this question answers something similar, but I'm not 100% sure (Should client have access to 3rd party API access token?).

+ +

Let's say I have my resource server (my-api.com), my identity provider and authorization server (my-idp.com) and have an client app (native or browser) (com.my-app).

+ +

The standard use-case is implemented with the authorization grant flow.

+ +

I now have a new use-case, where I need to request data from a 3rd party resource server (other-api.com). The 3rd party has an identity provider as well and offers OAuth 2.0 authorization and OpenID authentication flows. Resource owners of the 3rd party need to give their consent to my application so I can request their data and further use it in my application.

+ +

My questions are the following:

+ +

Is the on-behalf-flow what I need? It seems to be for two APIs which I control, not for 3rd party APIs.

+ +
+ +

How do I handle the 3rd party access, refresh and id token to make requests on-behalf of the resource owner?

+ +
    +
  1. I could store the 3rd party tokens in my-api.com and append it to every request I do to request data for my user?
  2. +
  3. I could store the 3rd party tokens in my-idp.com next to my user information?
  4. +
  5. I could send the 3rd party tokens to com.my-app which would result in two tokens for each party. This seems to be awkward.
  6. +
+ +

I would go for option 2 and extend the functionality of my-idp.com. Is this a valid approach? My API my-api.com would then fetch the 3rd party tokens before it makes requests on behalf of my user.

+ +

Thanks.

+",310934,,,,,43877.72153,Handling 3rd party OAuth2 tokens,,0,3,,,,CC BY-SA 4.0, +405303,1,405307,,2/16/2020 22:25,,1,142,"

Let's say you have a wireless network that acts as a bridge/wireless repeater. How would both factor into a class diagram? In my mind it makes sense to have a parent object that can exist on its own, and then a child object that is both dependent on the parent and also creates an instance of the parent. The child can access all the inherited attributes while being able to override methods that make use of the methods and attributes of the parent instance.

+ +

+ +

In my mind, a wireless repeater has a parent wireless network (an aggregate), but is also a completely separate network that inherits the WirelessNetwork attributes. I want it in that direction because the bridge network is a private extension of the main wireless network, and I need to be able to show this in a sequence diagram.

+",357627,,,,,43878.33958,Combined Inheritance and Composition,,2,5,,,,CC BY-SA 4.0, +405308,1,405311,,2/17/2020 0:54,,24,5451,"

Coupling is defined as the knowledge one object has about another one, which describes how dependent they are. The more dependent, the worse, since changes in one would impact in the second. High coupling is bad, low coupling good. There are different coupling types.

+ +

Let's assume we talk about call coupling.

+ +

In Java, when A creates an object B and calls one of its methods it's said to be tightly coupled. But if we create an interface IB, used from A where B implements IB it's said to be loosely coupled. I don't see why since one change in the interface would have an impact in A and B. And one change in B would have an impact in IB and A. They still seem to be call coupled.

+ +

The same rationale applies to the Facade GoF design pattern. It's said to promote low coupling, since we put an intermediary between a subsystem and the client code. In this case, it looks like we transferred the problem from the client code to the Facade, since a change in the subsystem would have an impact on the Facade instead of the client code. The client code is no longer coupled to the subsystem, but the Facade is.

+ +

I don't see how coupling is reduced.

+ +

This has been asked in: How is loose coupling achieved using interfaces in Java when an implementation class is mandatory and bound to interface contract?

+ +

But the answers are not specific enough. First, since encapsulation is also about treating objects as black boxes, the first answer does not specify any gain from using interfaces compared to regular classes (= tight coupling, if it's all about black boxes); therefore the answer is invalid. What interfaces provide is decoupling between the interface and the implementation when multiple implementations exist, but that doesn't solve anything related to call coupling. Maybe the link provided above should add a new category called ""implementation coupling"". Regardless of this, the solution is still call coupled. The second answer mentions data coupling, but as far as I know the issue is about call coupling.

+ +

The rest of the answers are irrelevant.

+ +

Regarding ""Design Patterns - Understanding Facade Pattern"", I understand the Facade pattern. I'm only asking about the coupling reduced by the pattern. Which based on my reasoning is not reduced but transferred.

+ +

This subject has been treated but no proper answer has been given.

+",357636,,173647,,43879.35556,43879.35556,Coupling: Theory vs Reality,,6,7,7,,,CC BY-SA 4.0, +405318,1,405329,,2/17/2020 8:53,,4,135,"

We were discussing how to design an API response. For simplicity, think of having to give information about all the different types of facilities available in a city:

+ +
{
+    ""city"": {
+        ""cityName"": ""Gotham"",
+        ""population"": ""8620000"",
+        ""facilities"": {
+            ""airport"": {
+                ""name"": ""Batman International Airport"",
+                ""numRunways"": 7,
+                ""dailyCommuters"": 340000
+            },
+            ""railwayStation"": {
+                ""name"": ""Gotham Downtown Rail Station"",
+                ""platforms"": 12,
+                ""dailyCommuters"": 40000
+            },
+            ""hospital"": {
+                ""name"": ""Joker Memorial Nursing Home"",
+                ""beds"": 300
+            },
+            ""port"": null,
+            ""market"": null
+        }
+    }
+}
+
+ +
    +
  1. For now, we know that there are 5 facilities available generally in cities that our clients are interested in. In future, we might add more facility types to the list, though.
  2. +
  3. In the example response, Gotham city has an Airport, Railway Station and a Hospital, but no Port or a Market.
  4. +
  5. Each facility is very different from each other, and would have different types of fields in it. We need that flexibility.
  6. +
+ +

Somebody suggested adding an extra buildings list, to indicate which buildings are present:

+ +
{
+    ""city"": {
+        ""cityName"": ""Gotham"",
+        ""population"": ""8620000"",
+        ""buildings"": {
+            ""buildings"": [""AIRPORT"", ""RAILWAY_STATION"", ""HOSPITAL""],
+
+            ""airport"": {
+                ""name"": ""Batman International Airport"",
+                ""numRunways"": 7,
+                ""dailyCommuters"": 340000
+            },
+            ""railwayStation"": {
+                ""name"": ""Gotham Downtown Rail Station"",
+                ""platforms"": 12,
+                ""dailyCommuters"": 40000
+            },
+            ""hospital"": {
+                ""name"": ""Joker Memorial Nursing Home"",
+                ""beds"": 300
+            },
+            ""port"": null,
+            ""market"": null
+        }
+    }
+}
+
+ +

Generalising: if there are lots of fields in an object which might not be set, is it a good practice to specify them explicitly in the response sent to the UI? What are the pros and cons, and what's the best way to do this?

+",180461,,,,,43878.81944,How to designing API JSON Response with nullable fields,,5,0,,,,CC BY-SA 4.0, +405321,1,405326,,2/17/2020 9:47,,2,447,"

We're reviewing the popular Gitflow as our git branching model, and I've liked it so far. One case I'm not sure about is a production rollback. Say there's a new feature that we're planning to publish in the upcoming release. The feature is tested and finally merged to master. Either automatically or manually, the source code in master is built and deployed. Shortly after deploying, we discover a critical bug and immediately create a hotfix branch. All right, so far so good. But then we roll the production version back to the most recent working one. That means the program running in production no longer matches the source in the master branch, and anyone who branches off from master will get something different from what's actually running. How do you cope with this case?

+",287396,,287396,,43878.54931,43878.54931,How to cope with production rollbacks when using gitflow,,1,2,,,,CC BY-SA 4.0, +405322,1,405340,,2/17/2020 9:51,,1,42,"

Currently, I have a REST API service which serves many different consumers. One of the endpoints of the API is for retrieving an order. Inside a single application, I am making 5 different service calls to other external APIs to generate and return the response to the caller.

+ +

Half of my API's consumers do not need the responses of 3 of those services (there are 5 external service calls in total to generate the response). Some of them only want to see basic information about the order. For those consumers, only 2 API calls are enough to generate their content on their side.

+ +

However, the rest of the consumers do use the entire response.

+ +

Because of that, the response time sometimes increases dramatically. Even though a consumer does not need the response of a specific service, they still have to wait for that service in any case.

+ +

Is there a way to determine which services we should call depending on the consumer's request?

+ +

In short, I want my system to determine which services need to be called depending on the consumer of the API.

+ +

Note: the application is monolithic, not a microservice; all 5 API calls are made within the same application.

+",260992,,260992,,43878.42847,43878.62014,Determining the service calls within the API based on the each consumer request,,1,4,,,,CC BY-SA 4.0, +405334,1,405335,,2/17/2020 12:45,,2,122,"

(Almost) all numbers in my program are parsed and treated as complex numbers. There is one corner case that specifically needs an integer. Unfortunately my programming language does not allow comparing integers to complex numbers, so I needed to come up with the following code:

+ +
function_name(Complex c)
+    if c has imaginary part != 0 
+        throw exception
+    if ( integer(real(c)) - real(c) ) != 0
+        throw exception
+    return integer(real(c))
+
+ +
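
For concreteness, here is the same logic in Python, keeping the placeholder name (just a sketch of the pseudocode above):

+ +
def function_name(c: complex) -> int:
+    if c.imag != 0:
+        raise ValueError('non-zero imaginary part')
+    if int(c.real) != c.real:
+        raise ValueError('real part is not an integer')
+    return int(c.real)
+

+ +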

I'm stuck on what to call this function. If I call it something like complex_to_int, I think it hides the fact that it fails on non-integer complex numbers. If the complex number is real, then I (or other people) might expect this function to work like a normal cast from a floating-point value to an integer (i.e. the decimal places are just cut off).

+ +

My question is: is there a good name for this function?
+-Is there a good name to call this function

+ +

Or, at the risk of the question being too subjective

+ +

Should I split this function in two and actually have a real casting function (that just ignores potential imaginary parts and cuts off decimal places), plus a function that checks whether the complex number fulfills my criteria before casting it?

+",301421,,301421,,43878.65,43878.65,What should I call an internal casting function that may fail,,1,2,,,,CC BY-SA 4.0, +405339,1,,,2/17/2020 14:40,,0,175,"

I have a Python class with methods that perform CRUD operations via a REST API:

+ +
import requests
+
+class my_class():
+    def get_obj(self, ...) -> requests.Response:
+        res = requests.get(...)
+        return res
+
+    def create_obj(self, ...) -> requests.Response:
+        res = requests.post(...)
+        return res
+
+    def modify_obj(self, ...) -> requests.Response:
+        res = requests.put(...)
+        return res
+
+    def delete_obj(self, ...) -> requests.Response:
+        res = requests.delete(...)
+        return res
+
+ +

I want to test these functionalities and to do that I can think of 3 ways:

+ +
    +
  1. a single test that tests in cascade all the functionalities, something like:

    + +
    class test_myclass(unittest.TestCase):
    +    def setUp(self):
    +        self.api_class = my_class()
    +
    +    def test_all(self):
    +        res = self.api_class.create_obj()
    +        self.assertTrue(res.ok)
    +        res = self.api_class.get_obj()
    +        self.assertTrue(res.ok)
    +        res = self.api_class.modify_obj()
    +        self.assertTrue(res.ok)
    +        res = self.api_class.delete_obj()  # delete last, so everything is cleaned up
    +        self.assertTrue(res.ok)
    +
    + +

    This has the advantage of being very compact, and when the test succeeds everything is clean: all the created stuff is deleted. On the contrary, if one of the middle steps fails, then problems arise, so maybe a tearDown() is needed.

  2. +
  3. The second way is to test each functionality with a method:

    + +
    class test_myclass(unittest.TestCase):
    +    def setUp(self):
    +        self.api_class = my_class()
    +
    +    def test_create(self):
    +        res = self.api_class.create_obj()
    +        if res.ok:
    +            self.api_class.delete_obj()
    +        self.assertTrue(res.ok)
    +
    +    def test_get(self):
    +        res0 = self.api_class.create_obj()
    +        res = self.api_class.get_obj()
    +        if res0.ok:
    +            self.api_class.delete_obj()
    +        self.assertTrue(res.ok)
    +
    +    def test_modify(self):
    +        res0 = self.api_class.create_obj()
    +        res = self.api_class.modify_obj()
    +        if res0.ok:
    +            self.api_class.delete_obj()
    +        self.assertTrue(res.ok)
    +
    +    def test_delete(self):
    +        self.api_class.create_obj()
    +        res = self.api_class.delete_obj()
    +        self.assertTrue(res.ok)
    +
    + +

    This way is much more redundant but in my opinion tests are cleaner if a problem arises.

    + +
      +
    1. The third way is to write a test class for each functionality: each one with a setUp and tearDown method:

      + +
      class test_myclass_create(unittest.TestCase):
      +    def setUp(self):
      +        self.api_class = my_class()
      +    def tearDown(self):
      +        self.api_class.delete_obj()
      +
      +    def test_create(self):
      +        res = self.api_class.create_obj()
      +        self.assertTrue(res.ok)
      +
      +class test_myclass_get(unittest.TestCase):
      +    def setUp(self):
      +        self.api_class = my_class()
      +        self.api_class.create_obj()
      +    def tearDown(self):
      +        self.api_class.delete_obj()
      +
      +    def test_get(self):
      +        res = self.api_class.get_obj()
      +        self.assertTrue(res.ok)
      +
      +class test_myclass_modify(unittest.TestCase):
      +    def setUp(self):
      +        self.api_class = my_class()
      +        self.api_class.create_obj()
      +    def tearDown(self):
      +        self.api_class.delete_obj()
      +
      +    def test_modify(self):
      +        res = self.api_class.modify_obj()
      +        self.assertTrue(res.ok)
      +
      +class test_myclass_delete(unittest.TestCase):
      +    def setUp(self):
      +        self.api_class = my_class()
      +        self.api_class.create_obj()
      +
      +    def test_delete(self):
      +        res = self.api_class.delete_obj()
      +        self.assertTrue(res.ok)
      +
    2. +
  4. +
+ +

Does a best practice exist for such a case, or is it just up to the programmer?

+",301620,,,,,43894.95208,Which of these is a better practice to write Python unittest for CRUD operations of REST api?,,1,0,,,,CC BY-SA 4.0, +405345,1,405346,,2/17/2020 16:28,,10,494,"

Help me settle an internal question.

+ +

We have an endpoint which we all agree should be a GET, because all it's doing is calling a stored proc and returning a set of data. However, there is a set of filters that we need to pass to the endpoint. Below is an example of what the filters might look like if all of the filter options are passed:

+ +
{
+    ""DistributorCode"":""6065"",
+    ""Model"":""123-xyz"",
+    ""Serial"":""654654065"",
+    ""CurrentSMR"":""11350.47"",
+    ""SoldbyDistributor"":"""",
+    ""ServiceDistributor"":"""",
+    ""LatestSMRDate"":""02/12/2020"",
+    ""Coverage"":"""",
+    ""Customer"":"""",
+    ""CoverageExpirayDate"":"""",
+    ""SortBy"":{
+        ""name"" :""serial"",
+        ""order"":""asc""
+    },
+    ""select"":[""eh"",""wt"",""fc""]
+} 
+
+ +

We could just pass in this json string as a querystring parameter in the GET call, or we could have each of them be their own parameter (although SortBy might get tricky).

+ +

But some are concerned that this will make the URL too long, and we will risk running into max query string length errors. So, they want to make the call a POST instead. If it were a POST call, then it would require both an object in the body and at least one QS param (&code=) which is non-negotiable (it must be there.)

+ +

So we have two options (that I can think of):

+ +
    +
  1. Make it a GET call with a potentially very long URL due to the parameters.

    + +

    1a) each filter is its own parameter

    + +

    1b) the list of filters is a json string in it's own parameter

  2. +
  3. Make it a POST call that requires both QS parameters and a json object in the Body

  4. +
+ +

Which would you do, and why?

+ +

Thanks!

+",189394,,,,,43885.64306,API endpoint POST vs GET,,3,7,,,,CC BY-SA 4.0, +405347,1,405363,,2/17/2020 17:29,,2,197,"

I am developing small lottery games where the participant can win one of many gifts, each having various quantities. (The code is in PHP)

+ +

Simple scenario:

+ +

I have 5 types of gifts, 1000 in total, and am expecting 1000 participations. What algorithm can I use to ensure that these 1000 gifts are distributed somewhat evenly? The time period doesn't matter.

+ +

Uneven: 2 drones given away first, then 10 tablets, etc...

+ +
| Title     | Quantity |
+|-----------|----------|
+| Drone     | 5        |
+| Tablet    | 10       |
+| Hotel     | 100      |
+| Gift card | 200      |
+| Coupon    | 685      |
+
+ +

More complex scenario:

+ +

I have the same gifts and gift quantities, but I anticipate 600 participations. I wish for the drones, tablets, and hotel nights to all be given out by the 600th participation.
+So there should be 400 gifts left (gift cards + coupons), preferably split in the same proportions: 116 gift cards (200/685 = 29%) and 284 coupons (71%).

+ +

What I tried:

+ +

For the first scenario, I tried using these probabilities, and looping through the 5 gifts sequentially:
+- Each gift has its own counter
+- The gift's counter increases every time the gift is considered
+- The gift is given if the gift's counter == originalQuantity
+- The loop starts back from the drone when a gift is given

+ +
| Title     | Quantity | Probability                   |
+|-----------|----------|-------------------------------|
+| Drone     | 5        | 5/1000                        |
+| Tablet    | 10       | 10/(1000-5)                   |
+| Hotel     | 100      | 100/(1000-5-10)               |
+| Gift card | 200      | 200/(1000-5-10-100)           |
+| Coupon    | 685      | 685/(1000-5-10-100-200) = 1/1 |
+
+ +

This solution works, and by adding a coefficient to each gift (drone, tablet and hotel must have a 20% higher chance to come out, and gift card and coupon a 30% lower chance), I could also handle the more complex scenario.

+ +
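
For reference, the per-participation draw my probability table describes boils down to something like this (a Python sketch; it is equivalent to giving out a gift with probability proportional to its remaining quantity):

+ +
import random
+
+def draw_gift(gifts):
+    # gifts: ordered list of dicts like {'title': 'Drone', 'remaining': 5}
+    pool = sum(g['remaining'] for g in gifts)    # total gifts still available
+    for gift in gifts:
+        # e.g. Drone: 5/1000, then Tablet: 10/(1000-5), and so on
+        if gift['remaining'] and random.random() < gift['remaining'] / pool:
+            gift['remaining'] -= 1
+            return gift['title']
+        pool -= gift['remaining']
+    return None                                  # only once everything is gone
+

+ +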

My question is: are there better, fairer, or simpler ways to distribute these gifts?

+ +

Edit 1: Even distribution would mean random, such that, for example, after 20% of the participations, approximately 20% of each gift has been given out (1 drone, 2 tablets, etc.).

+",349540,,349540,,43879.34514,43879.34514,Efficient and fair way to evenly distribute multiple items with various quantities,,1,2,,,,CC BY-SA 4.0, +405348,1,,,2/17/2020 17:48,,1,157,"

I am trying to develop an Instant messenger using WebSocket.

+ +

I have multiple instances of my server running (say server1 and server2). Two users (say userA and userB) want to chat with each other but are connected to different servers: userA to server1 and userB to server2.

+ +

I need some suggestions on how to implement this.

+ +

Approach 1 :

+ +

Initially, I was thinking of having a centralized database which stores the connections whenever a connection is established. Something like this (simplest table structure):

+ +
User_id , Socket_object  
+
+ +

Let's say userA is sending a message to userB.

+ +

So I would fetch the socket object of the receiver (userB) from the database and send the message directly to userB. But later I came to know that the ServerSocket object is not serializable in Java.

+ +

Approach 2:

+ +

Whenever a connection is established, the server which accepted the connection can save the below details in a table.

+ +
 user_id , server_name , server_port 
+
+ +

When some server receives a message, it gets the destination server's details from the table, makes a connection (probably HTTP, since we don't need persistence) to that destination server, and pushes the message. The destination server then passes the message on to the respective user.

+ +
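
In pseudo-code, approach 2 would look roughly like this (a Python sketch; all names are made up for illustration):

+ +
import requests
+
+def route_message(registry_db, sender_id, receiver_id, text):
+    # Look up which server currently holds the receiver's connection.
+    row = registry_db.lookup_connection(receiver_id)  # (server_name, server_port) or None
+    if row is None:
+        return                                        # receiver offline; store for later
+    server_name, server_port = row
+    # Plain HTTP hop between servers; the destination server then
+    # writes the message out on its locally held socket.
+    requests.post('http://%s:%s/relay' % (server_name, server_port),
+                  json={'to': receiver_id, 'from': sender_id, 'text': text})
+

+ +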

How should I proceed with the implementation? Is there a better way to implement it?

+ +

Please do not suggest some third-party libraries.

+",54050,,,,,43878.93681,Instant Messaging with WebSocket,,1,2,,,,CC BY-SA 4.0, +405349,1,,,2/17/2020 18:02,,1,125,"

I am making an SNMP agent. In order to pass information to this SNMP agent, I need to periodically extract data from two different sources (and there may be more sources in future.) I am trying to design the interface for these periodic data extractors.

+ +

However, both of these extractors return different sets of information. Also, in the case of one extractor, we additionally need to pass in some state in order to get the information, while the other extractor does not need any state passed in as an argument.

+ +

So the input argument and the return type are dependent on the specific extractor. Now in order to have a common interface for these data extractors, I will need generics for both the input parameter and return type.

+ +
public interface DataExtractor {
+  R GetData<T,R>(T state = null) where T : DataExtractInitParam;
+} 
+
+ +

where DataExtractInitParam is a marker abstract class with no fields and will inherit to create initial state parameter types. Also, it can have similar hierarchy for the return type, but all of this makes me think that the interface definition has become needlessly complex.

+ +

Although semantically the implementations of this interface are doing same thing (i.e. they follow a contract to extract data for SNMP) it is hard to come up with a simple common interface because of the different inputs and outputs of the two extractors.

+ +

Another bad thing with this design that comes to my mind is that the caller of the interface methods will be somehow able to call the method via the interface reference but in the end we will need to downcast the data that will be returned. So, the caller needs to know about the specific implementation's return type which altogether defeats the purpose of using an interface for loose coupling. (This makes me feel that my design is wrong).

+ +

One possible solution that I thought of is to have void return type and maintain the state returned by the methods as class-level global data, but again global state variables are not a good design.

+ +

I would appreciate any help designing this interface.

+",319425,,283761,,43893.71389,43893.71389,How to design the interface method for the following case?,,0,12,,,,CC BY-SA 4.0, +405351,1,405354,,2/17/2020 19:05,,0,45,"

In the most abstract, platform-agnostic way possible, can someone explain what actually determines the end of input/output on a socket? Is this something the programming language itself typically handles and indicates by a special return value (for example, -1 bytes read)?

+ +

I am working with an IO API that will throw EOF when the known end of a stream is reached. However, for network IO, EOF is never thrown (until the socket is closed), so when I am reading a socket I never know when to stop reading: rather than -1, it returns 0 bytes read. Is 0 bytes read where I should stop reading? Or should I keep trying to read bytes and consider the input finished when 0 is returned so many times (polling, basically)?

+ +
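
For reference, this is the kind of read loop I mean (a Python sketch; my understanding is that, for a blocking socket, a read only ever returns 0 bytes once the peer has closed its side):

+ +
import socket
+
+def read_all(sock: socket.socket) -> bytes:
+    chunks = []
+    while True:
+        data = sock.recv(4096)   # blocks until data arrives or the peer closes
+        if not data:             # 0 bytes signals an orderly shutdown
+            break
+        chunks.append(data)
+    return b''.join(chunks)
+

+ +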

That solution (polling) seems quite inelegant, but I can't think of any other way to know when to stop attempting to read input from a socket.

+",355847,,,,,43878.81111,What Actually Indicates the end of a Socket Input/Output feed?,,1,6,,,,CC BY-SA 4.0, +405358,1,405359,,2/17/2020 19:58,,1,221,"

By ""first"", I mean I have read articles which put project planning as the first phase of the SDLC.

+ +

In my opinion it should come after requirements analysis and specification, because if you don't know the requirements of a project, how can you plan for it?

+ +

It does not make any sense to me.

+ +

You're planning for a project whose requirements you don't know; it's simply stupid!

+",357711,,,,,43879.71667,"What Comes First, Project Planning or Requirment Analysis?",,5,2,,,,CC BY-SA 4.0, +405360,1,,,2/17/2020 20:52,,-2,51,"

Forgive my jargon, as I'm not very familiar with Constraint Satisfaction Problems or Linear Programming procedures (e.g. presolve).

+ +

I have a very trivial constraint set over variables with continuous domains. However, the problem may have to be solved at scale. All constraints are equality constraints. I'll give a small example (but I will have to code the solution to scale):

+ +
  x1=0.01
+  x1+x2=0.02
+  x1+x2+x3=0.02
+  x1+x2+x3+x4+x5=0.05
+
+ +

The solution will be something like this:

+ +
x1=0.01
+x2=0.01
+x3=0.0
+x4=[0,0.03]
+x5=[0,0.03]
+
+ +

I'm looking for suggestions on algorithms for optimization in constraint programming or linear programming. Any recommendations on readily available library implementations will also be welcome. Thanks in advance.
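
+ +

One concrete direction I have sketched, assuming the problem stays linear and all variables are non-negative (as my example solution implies): solve 2n small LPs, minimizing and then maximizing each variable subject to the equality constraints, e.g. with scipy.optimize.linprog:

+ +
import numpy as np
+from scipy.optimize import linprog
+
+# Equality constraints A x = b for the example above
+A = np.array([[1, 0, 0, 0, 0],
+              [1, 1, 0, 0, 0],
+              [1, 1, 1, 0, 0],
+              [1, 1, 1, 1, 1]], dtype=float)
+b = np.array([0.01, 0.02, 0.02, 0.05])
+n = A.shape[1]
+
+domains = []
+for i in range(n):
+    c = np.zeros(n)
+    c[i] = 1.0
+    lo = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * n)    # min x_i
+    hi = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, None)] * n)   # max x_i
+    domains.append((lo.fun, -hi.fun))
+
+print(domains)   # expected: x4 and x5 each in [0, 0.03]
+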

+",357721,,357721,,43878.93681,43879.60417,Finding the domain(s) of variables in a Linear Program using the constraints? (Constraint programming/Linear programming),,1,2,,,,CC BY-SA 4.0, +405365,1,,,2/17/2020 21:42,,1,70,"

I have a back-end service that is in charge of sending notifications to my end users via a mapping from some identifier to their actual web sockets.

+ +

Currently the service works in a synchronous way: on every loop iteration it checks whether there are new messages in the queue, and if so it polls a message from the queue and sends it to the relevant users based on their identifiers, synchronously (the loop won't continue until the message has been sent).

+ +

I came up with an idea to send these messages in an async way, meaning that after polling a message from the queue I will delegate it to some thread pool to deal with it and send it when possible.
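
+ +

For reference, a minimal sketch of one way to get the concurrency without losing per-user ordering (names are hypothetical): give each user identifier its own single-threaded executor, so sends for different users overlap while each user's messages keep their queue order.

+ +
import java.util.Map;
+import java.util.concurrent.*;
+
+class NotificationDispatcher {
+    private final Map<String, ExecutorService> perUser = new ConcurrentHashMap<>();
+
+    // Tasks for the same user run one after another (ordered);
+    // tasks for different users run concurrently.
+    void dispatch(String userId, Runnable sendTask) {
+        perUser.computeIfAbsent(userId, id -> Executors.newSingleThreadExecutor())
+               .submit(sendTask);
+    }
+}
+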

+ +

I have some concerns about this solution.

+ +

First, is there any scenario in which messages will be received by the user out of order?

+ +

Second, will I see an improvement in the time it takes a user to receive their messages? The sending involves I/O operations, so I thought it would be a good idea to make it async.

+ +

Are there any other concerns you think I should consider?

+",357728,,,,,43880.675,async notifications delivery,,1,0,,,,CC BY-SA 4.0, +405368,1,405383,,2/17/2020 22:43,,0,100,"

I have a new website product which I'm beginning to demo. I have the opportunity to demo to a relatively large potential client. I've found out (from the individual) that they already have a similar product that they previously tried to monetize but would like to look at mine and are interested in switching over to my platform instead given its price point.

+ +

There's a natural reluctance since this potential client already has invested time and money into their own platform and has shown that they have tried to monetize it. Most of what I would demo could be completely viable for their own platform.

+ +

My first inclination is to ask for an NDA prior to the demo; however, as I've read and would have suspected, that's off-putting so early in the relationship. I've also read that an NDA, from the perspective of a small company like my own, is not very useful against a large company with resources.

+ +

So, does this situation warrant an NDA or is there another mechanism to protect my IP and ease my concern that I'm not thinking of?

+",85064,,,,,43879.34514,NDA for potential competitor demo,,1,1,,43879.6375,,CC BY-SA 4.0, +405369,1,,,2/17/2020 23:25,,0,60,"

SETUP: Local Windows 10 machine. It runs a VM (Ubuntu Server, NAT, VirtualBox).

+ +

QUESTION: I want to make a VNC connection from my local machine to the VM. Since VNC is not a secure protocol, it is recommended to make the connection through an SSH tunnel. However, since the VM is running on the same machine, I wonder if this is still necessary.

+",357733,,,,,43879.10903,VNC security: SSH tunnel from local machine to VM necessary?,,1,1,,,,CC BY-SA 4.0, +405371,1,,,2/18/2020 0:54,,7,359,"

I have a project at work that was written entirely by a scientist who was learning programming while writing it. It is almost 200,000 lines of C++, and almost every variable is a global variable (over 2,000 global variables). I think he found out about local variables about half way through writing the project. The few local variable names are almost always x, xx, i, ii, j, jj, M, or some other single letter name. This program is prone to seg faulting and running valgrind reveals nearly a thousand instances of memory corruption. It is totally reliant on undefined behavior, to the point where it only produces the correct results on CentOS 7, and Ubuntu yields completely different and incorrect results (it's scientific code). It is completely indecipherable by anyone but the original author. We are in a unique position now where the company is going to bet everything on this one piece of software written by someone who has never written production software before. There is an extreme bus factor here, because after working on this codebase for months, I'm baffled by almost every line, and implementing the most basic features takes an incredibly long time. No other developer at the company wants to touch this code. This is by far the worst code I've ever seen. I never saw code this bad even from freshman CS students.

+ +

Given this situation, is this an appropriate case to ""burn it down, and start from scratch""? What is a good strategy to make this a maintainable code base? This code will need frequent updates as research progresses, meaning freezing the code is not an option.

+ +

For clarification, this project is not yet being used in production. It has been entirely proof of concept until now, and will be used in production this summer.

+ +

For anyone curious of what this would look like, the functions look something like this.

+ +
void doSomething(void) {
+  sideEffect1();
+  sideEffect2();
+  sideEffect3();
+  ...
+  sideEffect145();
+}
+
+void sideEffect1(void) {
+  if (globalVar1) {
+    return;
+  }
+  anotherSideEffect1();
+  if (globalVar2) {
+    globalVar1 = globalVar2 + 1;  
+  }
+  ... hundreds of lines later
+}
+
+",352925,,352925,,43879.06736,43879.48819,When is a rewrite appropriate?,,3,7,,43879.62014,,CC BY-SA 4.0, +405373,1,,,2/18/2020 2:25,,1,36,"

I have a scenario in our Web Application.

+ +

The GUI invokes a REST web service, and the web service in turn calls a procedure. The procedure returns data properly in most scenarios; in some scenarios it takes more than 5 minutes and a session timeout happens. Now the client has suggested that if any procedure call takes more than 2 minutes, the service should return to the GUI with the message ""XLS WILL BE SENT VIA EMAIL"", and when the procedure eventually returns the data (let's say, after 10 minutes), it should mail the XLS file to the requested mail id.

+ +

I have one approach as of now.

+ +

Implement a check when the procedure call starts; if it crosses 2 minutes, it will invoke another REST service, and the message will be returned from the existing service. The second web service will then mail the XLS file.

+ +

But this approach will take more time: 2 minutes + 10 minutes (the procedure response).
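
+ +

For what it's worth, here is a sketch in Java (the stack isn't specified and all names are hypothetical) of a pattern that avoids paying the 2 minutes on top of the procedure time: start the procedure exactly once, wait up to 2 minutes on that same call, and only fall back to the e-mail path on timeout.

+ +
import java.util.concurrent.*;
+
+class ReportEndpoint {
+    private final ExecutorService pool = Executors.newCachedThreadPool();
+
+    String getReport() throws Exception {
+        Future<byte[]> job = pool.submit(this::runProcedure);   // started exactly once
+        try {
+            byte[] xls = job.get(2, TimeUnit.MINUTES);          // fast path: return the data
+            return render(xls);
+        } catch (TimeoutException e) {
+            // Let the same call finish in the background, then mail the file.
+            pool.submit(() -> { emailXls(job.get()); return null; });
+            return ""XLS WILL BE SENT VIA EMAIL"";
+        }
+    }
+
+    private byte[] runProcedure() { return new byte[0]; }   // stub for the sketch
+    private String render(byte[] xls) { return """"; }        // stub for the sketch
+    private void emailXls(byte[] xls) { }                   // stub for the sketch
+}
+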

+ +

Please guide me to the best approach for this implementation.

+ +

Please note that the GUI team, web service team, and database team are different and work in different organizations.

+",357738,,90149,,43879.62222,43879.93472,Multiple thread invocation in Rest Web Service,,2,0,,,,CC BY-SA 4.0, +405382,1,,,2/18/2020 8:09,,0,223,"

I was talking to a colleague recently about hashCode and equals being methods in class Object in Java (among other languages). I am from a more theoretical background while my colleague is more of a pragmatic person.

+ +

In my opinion it does not make any sense to have the hashCode and equals methods defined in class Object. While our personal discussion was quite amusing, I am more interested in an official retrospective on this subject from the original inventors of Java. Have they ever publicly expressed regret about having made that decision? Or have they ever explicitly defended their decision against criticism in public?

+ +

Public statements from people other than the original developers are also welcome but please only from people with at least some level of authority and notability.

+ +

Thanks in advance!

+",357757,,,,,43879.38611,Have the inventors of Java ever publicly expressed regret about hashCode and equals in class Object?,,1,6,,43879.57014,,CC BY-SA 4.0, +405394,1,405398,,2/18/2020 16:52,,-2,173,"

I am confused: from what I have read, the design phase has nothing to do with what the software looks like; it is about how the software should be built in the next phase, which is the development phase?

+ +

My Understanding of Design phase:

+ +

How the software should be built, like a blueprint of its mechanism, I guess?

+ +

I want to know if I'm wrong or if there is more to it.

+ +

REF#1:

+ +
+

Phase 3: Design: In this third phase, the system and software design documents are prepared as per the requirement specification document. This helps define the overall system architecture.

+ +

This design phase serves as input for the next phase of the model.

+ +

There are two kinds of design documents developed in this phase:

+ +

High-Level Design (HLD)

+ +

  • Brief description and name of each module
  • An outline about the functionality of every module
  • Interface relationship and dependencies between modules
  • Database tables identified along with their key elements
  • Complete architecture diagrams along with technology details

Low-Level Design (LLD)

+ +

  • Functional logic of the modules
  • Database tables, which include type and size
  • Complete detail of the interface
  • Addresses all types of dependency issues
  • Listing of error messages
  • Complete input and outputs for every module

+
+ +

REF#2:

+ +
+

Design and prototyping: Once the requirements are understood, software architects and developers can begin to design the software. The design process uses established patterns for application architecture and software development. Architects may use an architecture framework such as TOGAF to compose an application from existing components, promoting reuse and standardization.

+ +

Developers use proven Design Patterns to solve algorithmic problems in a consistent way. This phase may also include some rapid prototyping, also known as a spike, to compare solutions to find the best fit. The output of this phase includes:

+ +

  • Design documents that list the patterns and components selected for the project
  • Code produced by spikes, used as a starting point for development

+
+ +

REFERENCES:

+ +

https://www.guru99.com/software-development-life-cycle-tutorial.html
+https://raygun.com/blog/software-development-life-cycle/

+",353299,,,,,44195.14097,what is the job of Design Phase of SDLC?,,2,2,,,,CC BY-SA 4.0, +405399,1,,,2/18/2020 17:40,,2,149,"

I have the resource OrderRequest, which I guess can be qualified as a process. The OrderRequest can be created or updated. The create should be idempotent, because creating the same order request 2 times is undesired. At the same time, POST is not idempotent and does not always return the same result. I use a combination of a client-side generated Id and a server-side generated Id. When a create is executed, the client-side generated Id is used, so it ends up as

+

PUT /orders/clientGeneratedID

+

It will also return a server-generated Id that can be used. There is a subtle difference between the two Ids, because having a clientGeneratedID also means that there is a "dealer order" associated with the order.

+

Once created, the order request is modified via the clientGeneratedID or the server-generated Id.

+

POST /orders/{clientGeneratedID or serverGeneratedID}/modifications

+

Reading through various posts, I understand there is some confusion about POST and PUT with regard to what is a create and what is an update.

+

My understanding is that since I supply the Id and the URL, the correct verb for my create is PUT. At the same time, it does not seem right to me to also use PUT for updates: I am not entirely replacing the resource, which is a prerequisite for using PUT as an update. So I have decided to use POST for modifications.
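
+

To make that concrete, here is a rough sketch of the two endpoints (Spring Web style; OrderRepository and the model types are hypothetical stand-ins, so this is not compilable as-is). The PUT stays idempotent because a replay with the same client id just finds the existing order, while the POST appends a new modification each time, which is exactly why it is not idempotent:

+
import org.springframework.http.ResponseEntity;
+import org.springframework.web.bind.annotation.*;
+
+@RestController
+class OrderController {
+    private final OrderRepository orders;   // hypothetical
+
+    OrderController(OrderRepository orders) { this.orders = orders; }
+
+    @PutMapping("/orders/{clientId}")
+    ResponseEntity<Order> create(@PathVariable String clientId, @RequestBody OrderRequest body) {
+        Order existing = orders.findByClientId(clientId);
+        if (existing != null) {
+            return ResponseEntity.ok(existing);   // replay-safe: same call, same result
+        }
+        return ResponseEntity.status(201).body(orders.create(clientId, body));
+    }
+
+    @PostMapping("/orders/{id}/modifications")
+    ResponseEntity<Modification> modify(@PathVariable String id, @RequestBody Modification m) {
+        return ResponseEntity.status(201).body(orders.addModification(id, m));
+    }
+}
+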

+

Is it wrong that I have both PUT and POST in such a way? Am I doing it right? I understand that I can do an idempotent POST for create. But is there a general guideline that you should use either PUT or POST but not combine them? And what about the fact that I use them in reverse, PUT for create and POST for update?

+",314234,,-1,,43998.41736,43880.43403,Combining PUT and POST on the same resource,,2,5,,,,CC BY-SA 4.0, +405400,1,,,2/18/2020 18:07,,4,143,"

We're looking to split up a monolith. In order to do so, we've identified some business areas that look like good candidates for subdomains, and we're trying to figure out how to split the functionality across those subdomains.

+ +

The domain is sports league management software.

+ +

We've tentatively identified a few subdomains, three of which are scheduling (when and where is the fixture), accounting (how much should each team be billed for a fixture) and competition management (if Team A wins a fixture, how many points are they allocated etc).

+ +

An example of the functionality we're trying to split is the deletion of a fixture, which is a sports match between two teams.

+ +

When it comes to deleting a fixture, all three of these subdomains may need to get involved. Specifically, the deletion will free up the space that was previously taken, the teams involved should no longer be charged for the fixture, and the league standings need to be updated because the results of the fixture are now gone.

+ +

Similarly, all three subdomains may have a say in whether or not a fixture actually can be deleted. Well, perhaps not scheduling, but accounting might say ""Can't delete that fixture, Team A has already paid for it, it should be cancelled instead"", and competition management may say ""Can't delete that fixture, we've already calculated the particpant of a subsequent fixture based on the result of this fixture"".

+ +

We've sketched out an architecture where each subdomain can provide validators and handlers for particular commands. So the command bus would find all the validators from all the subdomains, loop through them and, within the context of a transaction, ask if the process is OK to go. If they all say ""GO!"", the bus then loops through all the command handlers and passes them the command, and finally either rolls back the transaction due to an error in any of the handlers, or commits the transaction and publishes any events that need publishing.
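
+ +

For what it's worth, a bare-bones Java sketch of the dispatch we have in mind (all names hypothetical), mostly to make the validate-all-then-handle-all ordering explicit:

+ +
import java.util.*;
+
+interface CommandValidator<C> { List<String> validate(C command); }
+interface CommandHandler<C>   { void handle(C command); }
+
+class CommandBus {
+    private final Map<Class<?>, List<CommandValidator<Object>>> validators = new HashMap<>();
+    private final Map<Class<?>, List<CommandHandler<Object>>> handlers = new HashMap<>();
+
+    void dispatch(Object command) {
+        List<String> errors = new ArrayList<>();
+        for (CommandValidator<Object> v : validators.getOrDefault(command.getClass(), List.of())) {
+            errors.addAll(v.validate(command));   // every subdomain gets a veto
+        }
+        if (!errors.isEmpty()) {
+            throw new IllegalStateException(String.join(""; "", errors));
+        }
+        // begin transaction (omitted)
+        for (CommandHandler<Object> h : handlers.getOrDefault(command.getClass(), List.of())) {
+            h.handle(command);                    // all subdomains react to the same command
+        }
+        // commit and publish events (omitted)
+    }
+}
+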

+ +

So far so good. Only I've not come across anyone recommending such an approach. All the recommendations seem to be that command should have one command handler only, and that events should be used to drive subsequent processing in other subdomains, or potentially to create additional commands to put onto the command bus. Which is fine, but what about the case where the command should not be processed due to some validation error known only by a subdomain 3 events down the line?

+ +

Are we barking up the wrong tree here? Are we talking about Sagas? Or application services? Or does the concept of multiple command validators and handlers actually seem feasible? Can anyone suggest any pitfalls we're not spotting?

+ +

Thanks for your opinions!

+ +

EDIT

+ +

It should be noted that this is not a multi tenanted app, each customer has their own site, and so scaling is achieved by adding machines horizontally - ie, more customers = another server. All the services will run on the same machine and can participate in the same transaction. We're not necessarily looking to move to a layout with different services/microservices running on different machines - just trying to better encapsulate the logic for each subdomain into separate sections of the code so it becomes easier for new devs to pick up and understand and maintain.

+",254643,,254643,,43880.35625,43880.35625,Are there any pitfalls to having multiple handlers for a command?,,1,6,,,,CC BY-SA 4.0, +405401,1,405404,,2/18/2020 18:20,,8,444,"

My coworker and I are debating the correct design for an API. Say we have a function void deleteBlogPost(int postId). What should this function do if the blog post indexed with postId does not exist?

+ +

I believe it would be appropriate to throw an exception, because the function should be designed to do one thing. When the user calls a function called deleteBlogPost, they always expect the post with ID postId to be deleted. To try to delete a post with an invalid postId does not make sense, so an exception should be thrown.

+ +

My colleague argues that the caller does not really intend to delete a specific post, just to ensure that after the call, the post does not exist. If you call deleteBlogPost with a nonexistent post ID, the goal state is already achieved, so nothing should happen. He also noted that this design ensures calls to deleteBlogPost are idempotent, but I'm not convinced that this is a good thing.

+ +

We found examples of both patterns in several APIs. For instance, compare deleting a dictionary/map entry with a key that does not exist between Python and Java:

+ +

Python:

+ +
my_dict = {}
+del my_dict['test']   # KeyError: 'test'
+
+ +

Java:

+ +
Map<String, Object> map = new HashMap<>();
+map.remove(""test"");   // no exception thrown
+
+ +

Should a function throw exceptions based on its expected behavior or its goal state?
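
+ +

For completeness, there is also a middle ground we considered (a sketch, not from any particular API): report whether anything was actually deleted, and let each caller decide whether a miss is an error.

+ +
import java.util.HashMap;
+import java.util.Map;
+
+class BlogService {
+    private final Map<Integer, String> posts = new HashMap<>();   // stand-in store
+
+    // Idempotent, as my colleague wants, but the caller can still
+    // choose to treat 'false' as an error and throw.
+    boolean deleteBlogPost(int postId) {
+        return posts.remove(postId) != null;
+    }
+}
+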

+",357812,,,,,43880.84861,Should a function throw exceptions based on its expected behavior or its goal state?,,8,7,,,,CC BY-SA 4.0, +405405,1,405409,,2/18/2020 19:44,,0,358,"

I have always written my if statements like this when I want the negative condition to enter me into the if block.

+ +

Example 1

+ +
if(condition == false){}
+
+ +

However, we just hired a new senior on the team who insists we should refactor to

+ +

Example 2

+ +
if(!condition){}
+
+ +

and use that moving forward.

+ +

I find example #1 easier to reason about, especially when checking a nested property.

+ +

e.g. if(person.name.middle.hasValue == false){}
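
+ +

A third option sometimes suggested (a sketch with hypothetical names): push the negation behind an intention-revealing helper, so the call site avoids both the == false and the easy-to-miss !:

+ +
class Example {
+    // The call site reads positively in either style:
+    // if (isMissing(middleName)) { ... }
+    static boolean isMissing(String value) {
+        return value == null || value.isEmpty();
+    }
+}
+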

+ +

Question: Which example is better practice? Or is there a better practice than either example?

+ +

Edit (scope): Please limit answers to negative conditions only. Of course if(condition == true){} is worse than if(condition){}, because the == true is redundant for the true case but not the false case.

+",293429,,293429,,43879.93194,43881.57708,Which is better if(condition == false) or If(!condition),,4,14,,43879.97569,,CC BY-SA 4.0, +405410,1,,,2/18/2020 20:29,,-3,366,"

I'm developing a FOSS library which I am pretty fond of. More specific details probably don't matter.

+ +

I've already ""finished"" a feature set sufficient for an initial release IMHO. However - some of the features I introduced in there were added for completeness of the API; I didn't use them in my own applicative coding (which gave rise to the library), and they've never been tested at all. The ones I have used had only been tested through my use, which has not focused on questionable corner cases.

+ +

So here is the vicious cycle:

+ +
  1. To be released, even initially, the library code needs to be functionally correct and well-performing - if not perfectly, then at least to a good degree.
  2. To ensure correctness (not to mention performance), testing is necessary; at least, unit test coverage.
  3. To be involved in writing and running tests for the library, and resolving issues which come up during testing (and they do naturally come up), people have to like it and be interested in it.
  4. People won't get to know and perhaps become fond of the library before it's released (for some definition of release).
+ +

It's 1 -> 2 -> 3 -> 4 -> 1, and thus on and on in a vicious cycle.

+ +

So far, I've spent quite some time writing unit tests myself, and at this rate it seems I'll release when I'm retiring and the whole thing is irrelevant.

+ +

My question: How do I break this vicious cycle? Or in other words, how can I get some potential users to help me with the annoying and somewhat boring work of writing and running unit tests (and perhaps resolving the issues that come up)?

+ +
+ +

Edit: I ended up writing the tests myself. It was ""just"" 108,849 different assertions... :-(

+",63497,,63497,,43925.75069,43925.75069,How to break the vicious cycle of test-writing preceding an initial release?,,4,22,,,,CC BY-SA 4.0, +405421,1,405429,,2/19/2020 1:10,,0,71,"

Premise: This is for learning purposes.

+ +

I'm trying to adapt my Console Application code to be served through a WPF Application GUI that I would like to create.

+ +

One problem is troubling me.

+ +

Currently I'm using something like a procedural approach (even if it's async in some paths). How could I decouple my code so that it can serve as the back end for a WPF application view?

+ +

Currently I have models representing the managers, some helpers with functions to edit the queries that I get from a controller, and DB entities scaffolded from an existing DB.

+ +

Example:

+ +
        static async Task Main(string[] args)
+        {
+            IManager pManager = new PrimaryManager(new HttpClient());
+            UserController controller = new UserController(new DBContext());
+
+            Console.WriteLine(""Starting"");
+
+            // Load the users first; the original snippet used an undefined 'users' variable.
+            List<User> users = controller.GetAll();
+            List<UserQuery> userQueries = await pManager.GetAllUsersQueriesAsync(users);
+
+            Console.WriteLine(""InsertInDb"");
+
+            /* code to insert data in db */
+        }
+
+        public class UserController
+        {
+            private readonly DBContext context;
+
+            public UserController(DBContext context)
+            {
+                this.context = context;
+            }
+
+            public List<User> GetAll()
+            {
+                // Use the injected context; there is no need to create a second one here.
+                return context.Users
+                    .Where(s => s.Active == true)
+                    .Select(s => new User() { /* code to populate */ })
+                    .ToList();
+            }
+        }
+
+        public class PrimaryManager : IManager
+        {
+            private readonly HttpClient client;
+
+            public PrimaryManager(HttpClient client)
+            {
+                this.client = client;
+            }
+
+            public Task<List<UserQuery>> GetAllUsersQueriesAsync(List<User> users)
+            {
+                var queries = new List<UserQuery>();
+
+                foreach (var user in users)
+                {
+                    var query = new UserQuery() { /* code to populate */ };
+                    queries.Add(query);
+                    Console.WriteLine($""Added {user.Name} query {query.Id}"");
+                }
+
+                return Task.FromResult(queries);
+            }
+        }
+
+ +
  1. Look at the Console.WriteLine calls I put everywhere: how could I move them out of the code so that the output can be consumed in the same way by the console app and the WPF view?

  2. I'm writing here because I'm missing fundamentals like: should my WPF application be aware of the other solution? Should I reference it?
+",357840,,1476,,43880.50069,43880.50069,Structuring code to support console output and WPF views,,1,1,,,,CC BY-SA 4.0, +405428,1,,,2/19/2020 6:28,,0,27,"

I'm looking to build a graph that's tied to any caching system that I can interact with. Its only goal is, when an item is deleted, to also delete its ""linked items"": a partial rebuild of the cache, if you will.

+ +

For example, in an action-driven system, my object would be listening for the action post_updated to be fired. It will then look inside my graph and see that the values with the keys post_123_gallery, post_123_recipe, post_123_small_images need to be deleted, and the cache for these needs to be rebuilt.

+ +

Here are some schematics I came up with to try to represent this graph/tree in memory:

+ +
[
+    <context> 'page:post_1' => [
+        keys => [
+            'page:post_1:gallery:gallery_3257',
+            'page:post_1:recipes:recipe_2',
+        ],
+        actions => [
+            'post_updated',
+            'gallery_created',
+            'recipe_created',
+        ]
+    ]
+]
+
+ +

The first allows for contextualization of my values. Every value I save will be stored under a prefix, which is its context, as decided by the developers. For example, when I'm on a post page and saving new data, the system will save the supposed cached values under these prefixed keys.

+ +
[
+    <key>'post_1:gallery_3257' => [
+        'actions_to_clear_on' => [
+            'post_updated',
+            'gallery_created'
+        ],
+        'linked_to' => [
+            'post_1:recipes:recipe_2'
+        ]
+    ]
+]
+
+ +

The second one doesn't have the concept of context (but it's expected of developers to name things in a natural, predictable way), rather, each key is treated as an individual that can have other linked keys. When one key gets deleted, whatever is under linked_to also gets deleted.

+ +

In my head, I'm stuck. Something doesn't feel right about either of the approaches, in the sense that, at their core, they won't work. One thing that comes to mind is that the first approach is totally flawed: a key (A) can depend on another key (B), and (B) can depend on multiple keys of its own. Basically, a small deletion can quickly turn into a mammoth chain of deletes.
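
+ +

In case it helps the discussion, here is a small Java sketch (names hypothetical) of the chained-delete concern: a plain graph walk with a visited set keeps shared or cyclic dependencies from being processed twice, though it does nothing to shrink the blast radius itself.

+ +
import java.util.*;
+
+class CacheGraph {
+    private final Map<String, Set<String>> linkedTo = new HashMap<>();
+
+    void link(String key, String dependent) {
+        linkedTo.computeIfAbsent(key, k -> new HashSet<>()).add(dependent);
+    }
+
+    // Everything transitively reachable from 'root' must be invalidated.
+    Set<String> keysToInvalidate(String root) {
+        Set<String> visited = new LinkedHashSet<>();
+        Deque<String> stack = new ArrayDeque<>(List.of(root));
+        while (!stack.isEmpty()) {
+            String key = stack.pop();
+            if (visited.add(key)) {
+                stack.addAll(linkedTo.getOrDefault(key, Set.of()));
+            }
+        }
+        return visited;   // delete these from the cache, then rebuild
+    }
+}
+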

+",353781,,,,,43880.26944,Cache graph with hierachy,,0,3,,,,CC BY-SA 4.0, +405432,1,405445,,2/19/2020 7:52,,0,65,"

So I was watching Jimmy Bogard giving a talk on Effective Microservice Communications (the PowerPoint is available on his GitHub). One thing he mentions is that the messages between the services should be small, something like this (example from the presentation):

+ +
POST /dress
+{
+    ""order"": ""/order/23""
+}
+
+ +

To my understanding, this would be a link to where the full order details live (which would be needed to perform any action on the order). So this raises two questions:

+ +
  1. Does that mean there would have to be an Order service in which the order details are stored, and which all the other services that need to do something with the order read from?
  2. If yes to the above, what is the advantage of this approach, rather than just sending the order information inside the message?
+",271714,,,,,43880.59931,Microservices communications content,,2,0,,,,CC BY-SA 4.0, +405436,1,405441,,2/19/2020 10:15,,0,80,"

I have a monolith application in .NET Core 3.0 with Entity Framework Core 3.0, using:

+ +
  • a table with ~3 million records. Its structure is BusinessUnitId | ProfileId | Amount (it has more fields, but these are the important ones);
  • another table that looks like this: ProfileId | Price.
+ +

When I need to generate a report for all business units, I have to read ~3 million records from one table, then match profiles with their prices, then generate a view model and send it.

+ +

The problem is that it takes more than one minute, and this report should appear on a web page in the form of a table, i.e. take no more than ~5 seconds.

+ +

I tried to create a separate table just for this report, but its population/recalculation takes something like 10 minutes. Currently I think that I will move the report data to a separate table and make a service that will update it when needed.

+ +

What is the best practice or standard approach for this kind of problem, when you need to process huge amounts of data at run time / run queries that take long to execute, but need the result to be fast?

+",350019,,209774,,43880.48472,43881.67917,Need advice on reporting with big amounts of data,,3,2,,,,CC BY-SA 4.0, +405440,1,,,2/19/2020 13:06,,1,100,"

I'm doing a bit of refactoring work for software that uses hardware, specifically cameras, to gather images and process them in different ways. A few different cameras are supported, and there will likely be others in the future; therefore I thought it would be a good idea to hide the implementations of those cameras behind an interface, ICamera, acting like some kind of Facade that provides the base functionality any camera might have, thus letting the main application logic remain untouched when a new camera is added.

+ +

Now, these cameras can have different features, and new cameras might be introduced in the future. I thought about defining additional interfaces, such as IZoom for cameras that can zoom, so that the implementation of that zoom functionality can be hidden and any user interface elements can be reused.
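
+ +

To make the idea concrete, here is a sketch in Java purely for illustration (names hypothetical): optional capabilities live in their own interfaces, and whoever builds the UI discovers them per camera instance.

+ +
interface ICamera { void capture(); }
+interface IZoom   { void zoom(double factor); }
+
+class ZoomCamera implements ICamera, IZoom {
+    public void capture() { /* grab a frame */ }
+    public void zoom(double factor) { /* drive the lens */ }
+}
+
+class ControlPanelBuilder {
+    void build(ICamera camera) {
+        // Base controls are always drawn; extra controls appear only if
+        // the camera advertises the capability.
+        if (camera instanceof IZoom) {
+            IZoom zoom = (IZoom) camera;   // wire a zoom slider to zoom()
+        }
+    }
+}
+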

+ +

However, now I'm a bit confused regarding ""who"" should keep track of these different interfaces (for example, someone needs to tell the GUI to draw its control for an interface). I suspect I'm closing in on a design problem.

+ +

So, my problem is that I want to prepare the software architecture for future hardware features while letting the base functionality remain ""untouched"". Am I missing something basic, or am I trying too much? Is the problem creational or structural?

+ +

I visualized my idea below:

+ +

+ +

I've read up on a few structural design patterns (such as Visitor or Decorator) but my initial feeling is that they're not right for this problem. Please prove me wrong!

+",357880,,,,,43880.58403,Base interface with extensions,,2,2,,,,CC BY-SA 4.0, +405449,1,,,2/19/2020 15:03,,3,68,"

I'm new to AWS serverless applications and am looking for something like ocelot request aggregation on the AWS serverless stack.

+ +

Assume I have two AWS lambdas that return data needed by a SPA: A and B. They each take 1 second to run to load data from different sources. When the browser makes a call to the API I wish to run both concurrently and return the resulting JSON to the browser.

+ +

Two options I'm considering:

+ +
    +
  1. A parallel state in a step function that runs A and B
  2. +
  3. Have another lambda whose sole purpose is to run A and B concurrently and return the results
  4. +
+ +

Testing both these I've found the step function takes an extra 220ms per call and the concurrent lambda takes an extra 500ms each call. These are both too slow for a SPA api. Is there a faster solution?

+",319740,,,,,44158.58611,Robust way to aggregate results from 2 AWS lambdas for a SPA,,1,3,,,,CC BY-SA 4.0, +405452,1,405500,,2/19/2020 15:57,,1,132,"

Here's an example of my question in python. Notice there's only a very subtle difference: changing an if to an elif. There's no difference in behavior; if the first if statement is executed, the second will never be reached. Is it better practice to use an if or an elif in this situation? (Which is better: Option 1 or Option 2?)

+ +

Option 1:

+ +
def add_two_numbers(a, b):
+  if not isinstance(a, int):
+    raise TypeError('a must be an int')
+  if not isinstance(b, int):
+    raise TypeError('b must be an int')
+  return a + b
+
+ +

Option 2:

+ +
def add_two_numbers(a, b):
+  if not isinstance(a, int):
+    raise TypeError('a must be an int')
+  elif not isinstance(b, int):
+    raise TypeError('b must be an int')
+  return a + b
+
+",346430,,,,,43881.68611,Should I use an `else lif` or an `if` for the second of two consecutive assertions?,,1,6,,,,CC BY-SA 4.0, +405456,1,,,2/19/2020 17:09,,1,174,"

General overview

+ +

We recently had lots of problems with automated tests in our team. Part of it was that the people designated to write them had little experience. After this failed, we adopted a different approach, where the product owner and the developers are much more engaged in the overall process of automated testing, and the quality improved.

+ +

There is still one more problem that we are facing without a good solution, and that is the test data and its maintenance.

+ +

We have integration tests written using RestSharp and xUnit (the back end is ASP.Net Core) and GUI tests written with the use of Selenium and xUnit (the front end being in Angular).

+ +

Both of these rely on MS SQL as the data store.

+ +

Now best practices state that the tests should be as independent as possible from each other. This is quite easy to accomplish in unit tests where we can arrange test data because the tested part is very small. In GUI/Integration tests however, matters are a bit different.

+ +

Problem Description

+ +

Since GUI/integration tests often perform complicated actions (which test many more elements than unit tests), it is very difficult to create independent mock data in a database for each individual test.

+ +

Scenario

+ +

Let's assume that we want to write automated tests for some hierarchical organization structure which is added through CRUD screens. We have sites with child areas which then have child subareas and so on.

+ +

The first test would be to test adding and editing Sites. This is quite simple. The second test for Areas requires assigning a preexisting Site in the system so we can create hierarchy. Third would require Site and Area and so on.

+ +

So basically the more complex the test scenario, the more preexisting specific data it may require and failures at the beginning of the hierarchy-add will fail other tests which may work fine on their own.

+ +

Possible solutions

+ +
  1. Tests rely on each other for the creation of data - this is, in my opinion, a very bad approach, because failing the first test will fail the rest. Plus, it doesn't scale very well for more complex scenarios.

  2. Each test loads its data individually - this allows the tests to be independent, but we will have lots of code just to load the data for each test; in case of changes or problems it will also cause a lot of rework, so it will make the tests a bit fragile.

  3. We could use containerization with Docker: create a database image and load the database with data on each run. This will allow us to make sure that more tests can be run in parallel, but solutions 1 and 2 would still need to be implemented to push the data into the container.

  4. We could divide the tests into different categories, simple/complex ones. Simple ones, we would assume, can be written in as much isolation as possible. Complex ones could have shared steps (hierarchy would be such an example, and we could accept that there will be some chained test steps there).
+ +

Any guidance here would be greatly appreciated. Creating successful test automation is a lot of work and the best thing is to start in the best direction possible.

+",154994,,283761,,43881.36667,44188.50278,How to Mock Test Data for complicated Integration/GUI automated tests,,1,3,,,,CC BY-SA 4.0, +405461,1,,,2/19/2020 19:31,,2,84,"

I have two data sets. The first data set has approx. 50,000 movie and song titles, and the second one has 20,000 blacklist strings. I am looking for the best algorithm to detect movie/song titles which contain blacklisted word(s).

+ +

Example: Dataset #1

+ +
The Lord Of The Rings
+E.T.
+Star Wars
+...
+(50k items)
+
+ +

Blacklist Data set

+ +
Lord
+Home Alone
+Matrix
+ar
+...
+(20k items)
+
+ +

Items in these data sets may be a single character or a few words. String search algorithms like Boyer-Moore are not helping me here, since I have more than one needle to search for in the haystack. I (probably) need an algorithm to find all combinations efficiently and then do a string search (regex maybe?) for each combination.
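
+ +

To pin down the baseline I am trying to beat, the naive version looks like this in Java (a sketch; case handling simplified) and is roughly O(titles x patterns x pattern length):

+ +
import java.util.*;
+
+class NaiveMatcher {
+    // Every title is checked against every blacklist entry - this is the
+    // baseline that a proper multi-pattern algorithm should beat.
+    static List<String> flagged(List<String> titles, List<String> blacklist) {
+        List<String> hits = new ArrayList<>();
+        for (String title : titles) {
+            String t = title.toLowerCase();
+            for (String bad : blacklist) {
+                if (t.contains(bad.toLowerCase())) {
+                    hits.add(title);
+                    break;
+                }
+            }
+        }
+        return hits;
+    }
+}
+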

+",13725,,13725,,43880.82014,43880.82014,Algorithm to search very long blacklist in another very long data set,,0,10,,,,CC BY-SA 4.0, +405462,1,405466,,2/19/2020 20:17,,1,390,"

I'm currently in the process of transforming a monolithic application into a microservices-based architecture. The monolith is dependent on third-party services (as in, other departments) for its data. Most of these third-party services are accessible via SOAP, while some of them use REST.

+ +

Some of these third-party services have terrible (even unusable, IMO) APIs. This adds a lot of unneeded complexity and boilerplate to our aggregator service (which is a monolith at the moment). I'm in the process of mapping the domain of one of these third-party APIs to a usable domain in an ACL, to be able to offer a decent API to our aggregator service.
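
+ +

For context, the shape of the ACL I am building looks roughly like this sketch (Java here purely for illustration; all names hypothetical): an adapter translates the third party's awkward model into our own domain type, so the rest of the service never sees the external shape.

+ +
class ExternalOrderDto {               // the third party's shape
+    String ord_no;
+    String cust_ref;
+}
+
+class Order {                          // our domain's shape
+    final String id;
+    final String customerId;
+    Order(String id, String customerId) { this.id = id; this.customerId = customerId; }
+}
+
+interface OrderProvider { Order fetchOrder(String id); }
+
+class LegacySoapOrderProvider implements OrderProvider {
+    public Order fetchOrder(String id) {
+        ExternalOrderDto dto = callSoapService(id);   // the ugly call stays isolated here
+        return new Order(dto.ord_no, dto.cust_ref);   // translation in one place
+    }
+
+    private ExternalOrderDto callSoapService(String id) {
+        ExternalOrderDto dto = new ExternalOrderDto();   // stubbed for the sketch
+        dto.ord_no = id;
+        dto.cust_ref = ""unknown"";
+        return dto;
+    }
+}
+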

+ +

However, it got me wondering: is this even the correct way to go? It's a lot of work; should I just bite the bullet and use those terrible APIs? How do you handle terrible third-party APIs in a microservices architecture when they make your service less maintainable?

+ +

Thanks in advance!

+",340212,,340212,,43880.84931,43880.94514,How to deal with bad third party APIs in a microservices architecture?,,2,4,,,,CC BY-SA 4.0, +405467,1,,,2/19/2020 21:43,,-2,120,"

I'm working on a REST API for a turn-based game from scratch, and I'm having some trouble figuring out the best architecture for it.

+ +

I need to explain how the game works and what my architecture is so you can help me out.

+ +

The game is a turn-based board game with up to 4 characters that can interact with each other in various ways. Everything is decided in the backend and returned to the game via the API.

+ +

In the backend I have the following entities:

+ +
  • Board, which holds all sorts of map details like terrain type, items, etc.
  • User, which holds the actual data of the user (nickname, email, password, etc.).
  • Player, which holds things like health, skills, etc.
  • Grenade, a special power that players can use once each round.
+ +

I have just a few more entities, but let's keep it simple for the purpose of this question.

+ +

So I have an endpoint called start-game that returns all the initial stats of the game, and here is where my problem begins. Since I need to send everything to the game, I came up with a structure like this:

+ +
{
+    gameData: {
+        board: {
+            players: [], // all players data
+            terrain: {}, // all terrain data
+        },
+    },
+    userData: {} // user data, nickname, etc
+}
+
+ +

I don't really know where to put the Grenade, because we throw the grenade at the board, so I feel like it should be a node inside board; but it is also an item used by the Player, so in my class diagram I have Grenade inside Player.

+ +

Also, I don't have the wrappers ""gameData"" or ""userData"" in my classes; they are wrappers created specifically to make the API response more readable.

+ +

My question is: what should I map my API responses to? I have heard that I should always map my responses to a class in my code, but if I do that, the API response looks weird and will also ""leak"" a lot of unused data to the game.
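
+ +

For what it's worth, the usual way out of the ""leak"" problem is a response DTO shaped for the client and separate from the domain class; a Java-flavoured sketch (names hypothetical):

+ +
class Player {                                     // domain class
+    private final String nickname;
+    private final int health;
+    private final String password;                 // must never leave the backend
+    Player(String n, int h, String pw) { nickname = n; health = h; password = pw; }
+    String getNickname() { return nickname; }
+    int getHealth() { return health; }
+}
+
+class PlayerDto {                                  // what the API actually returns
+    String nickname;
+    int health;
+
+    static PlayerDto from(Player p) {
+        PlayerDto dto = new PlayerDto();
+        dto.nickname = p.getNickname();
+        dto.health = p.getHealth();                // password simply has no field here
+        return dto;
+    }
+}
+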

+ +

It's not only the grenade problem, but also the custom wrappers that I have in place in the responses: are those considered bad practice? If yes, what is the best way to handle my case?

+ +

I'm also not using a database yet, so I'm wondering if I should have a database diagram before diving into API responses.

+ +

Since this is my first time writing an API, I want to do it in the most scalable and proper way.

+ +

EDIT: Why the downvotes?

+",349986,,349986,,43881.37083,43881.37083,How should I design my API responses?,,1,6,,,,CC BY-SA 4.0, +405468,1,,,2/19/2020 21:56,,0,32,"

I have a private PHP library that is working well for the most part. Let's call it ""private library"". I can move into the ""private library"" and edit, git add, git commit and git push -- all as you would expect. The changes I make update the ""private library"" on github.com as expected.

+ +

I have a second composer library that contains code from the ""private library"" and code from symfony, aws, google and the like. This composer library is included by other scripts running on our server. Let's call this the ""include library"". One of the directories is vendor/raystedman/library/src which contains our ""private library"".

+ +

Here is the scenario I am trying to sort out.

+ +
  1. Make a change to the ""private library"", git add, git commit and git push
  2. Run composer update on the ""include library""
+ +

I can see composer making updates to the ""include library"" from github as expected -- checking out our code.

+ +

Here is the problem. I move into the ""include library"" directory vendor/raystedman/library/src. I then issue git status. I'm told this library is 1 commit ahead and I should perform a git push. This is not actually the case, as the code in the ""include library"" is the same as on GitHub and in the ""private library"".

+ +

I have a workaround to the problem. I delete (rm -rf) vendor/raystedman in the ""include library"" prior to the composer update. Now composer performs a clone operation and everything is fine.

+ +

What am I doing wrong here? I think the key could be the difference between ""Checking Out"" our code in the original scenario and ""Cloning"" our code in the workaround.

+",357935,,357935,,43881.28542,43881.28542,Composer Update -- Need to Push?,,0,0,,,,CC BY-SA 4.0, +405472,1,405484,,2/20/2020 0:05,,0,65,"

We have several microservices and are now in a position where we need a GET request with a list of ID's in a querystring.

+ +

I'm reluctant to use a POST for the purposes of a GET request because of the general principle of using HTTP verbs as intended.

+ +

However, there are plenty of drawbacks to serializing our array of IDs in the querystring:

  1. querystring limits on the server and in the browser
  2. having to add our own serialization code to put the array into the querystring
  3. deserializing on the server

+ +

Is it overly dogmatic to strictly adhere to the GET / POST definitions? Am I correct in assuming that using a POST in order to GET data back (not changing the state of the server) is the most sensible option here?

+",274088,,,,,43881.53542,Tradeoffs between RESTful GET and querystring serialization,,3,0,,,,CC BY-SA 4.0, +405474,1,405479,,2/20/2020 0:33,,4,116,"

I have a Java background and am studying Python's data model. Specifically, I am curious about how and when special methods (e.g. __add__) get called.

+ +

It seems like the Python interpreter may execute these special methods when it encounters certain built-in functions. To take an example from the book ""Fluent Python"", if you have a class like the following FrenchDeck...

+ +
import collections
+
+Card = collections.namedtuple('Card', ['rank', 'suit'])
+
+class FrenchDeck:
+    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
+    suits = 'spades diamonds clubs hearts'.split()
+
+    def __init__(self):
+        self._cards = [Card(rank, suit) for suit in self.suits
+                                        for rank in self.ranks]
+
+    def __len__(self):
+        return len(self._cards)
+
+    def __getitem__(self, position):
+        return self._cards[position]
+
+ +

then __getitem__ will get called when the Python interpreter encounters things like for card in FrenchDeck() or FrenchDeck()[11].

+ +

To me, this seems expressive but also extremely vague. How do I determine what built-in functions will call my special methods? There seems to be some implicit mention of mappings between built-ins and special methods in statements of the Python Data Model section like

+ +
+

It is recommended that both mappings and sequences implement the __contains__() method to allow efficient use of the in operator.

+
+ +

But I can't find a clear reference doc with statements like ""for user-defined classes, in will use __contains__ if it exists, otherwise it will use __getitem__"".

+",357949,,,,,43881.57014,Difficulty understanding the Python data model and how built-in functions map to special methods,,1,1,,,,CC BY-SA 4.0, +405476,1,,,2/20/2020 2:38,,0,85,"

I have the following dataset:

+ +
[234,565,678,90,66,7,8,44,5]
+[275,23,54,34,5]
+[4745,23,54,4,556,65,89,5,4,569,87,412]
+[75,273,59,5,4,567,412]
+[44,34,556,69,5,4,569,812]
+
+ +

Each array represents a web session. Each element in the array represents an activity that the user did during the session.

+ +

What's shown above is only a subset of the entire array list (which consists of over a million arrays).

+ +

My task is to provide the next element, across all arrays, given an element or list of elements.

+ +

Here's an example: if [234] was given, I need to return 565, because [234] matches the first element of the first array, so I need to return the subsequent 565.

+ +

But I also need to be able to return a result given an array of elements, like the following: if [275,23,54] was given, I need to return 34, because [275,23,54] matches the first three elements of the second array, and 34 follows that.

+ +

This can be done fairly easily in a Java-like language if I cache the entire data set in memory and then search across it. But my challenge is using a database to store and search these. I was leaning towards key/value databases like Redis. My challenge here is that I don't have a ""good"" key: my input could be any of the elements in any array, or a list of elements.
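
+ +

To make the key problem concrete, here is a Java sketch of the precomputation I have in mind (entirely illustrative): every prefix of every session maps to the element that follows it, and the joined prefix is a perfectly ordinary key for a key/value store like Redis (a HashMap stands in below).

+ +
import java.util.*;
+
+class NextElementIndex {
+    private final Map<String, Integer> next = new HashMap<>();
+
+    // Store, for every prefix, the element that follows it. The first
+    // session seen wins for a given prefix (putIfAbsent).
+    void addSession(int[] session) {
+        StringBuilder prefix = new StringBuilder();
+        for (int i = 0; i < session.length - 1; i++) {
+            prefix.append(session[i]).append(',');
+            next.putIfAbsent(prefix.toString(), session[i + 1]);
+        }
+    }
+
+    Integer lookup(int[] query) {
+        StringBuilder prefix = new StringBuilder();
+        for (int v : query) prefix.append(v).append(',');
+        return next.get(prefix.toString());   // null if the prefix was never seen
+    }
+}
+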

+ +

What would be the optimal way of storing and searching this?

+ +

Cheers!

+",357953,,,,,43911.37639,Key-value database for dynamic/changing keys,,1,3,,,,CC BY-SA 4.0, +405478,1,,,2/20/2020 3:08,,5,820,"

This is a general question, but I work in the .NET world, so I'd like to know if there are any specific quirks of the .NET Framework / Core platforms that I should be concerned about here.

+ +

I think it's safe to say that, as a general rule, breaking long-running jobs into threads could be slightly faster than breaking jobs into long-running processes. However, the performance difference would be negligible. I haven't done any benchmarks on this, but this answer seems to indicate that there is a consensus on the matter, at least as far as .NET is concerned.

+ +

However, I still hear the argument ""It was broken up into separate processes for performance reasons"" often. I'm puzzled by this and I wonder if this argument ever holds weight. Please remember, this discussion has nothing to do with maintainability. The question I am asking: are multiple jobs running on a single process each necessarily faster than multiple jobs running on a single thread each inside one process?

+ +

Intuitively, I would have to guess that the answer is no. For example, 10 long-running jobs running on 10 threads should run with roughly the same performance as 10 long-running jobs in 10 different processes. But I see people making the design choice of breaking services up into smaller parts purely for the purpose of performance.

+ +

What I suspect is going on is that shoddy code has created scenarios where starvation is occurring. I think that what is happening is that parallelism is being overused and the availability of CPUs is not being honoured, so that the benefits of multiple cores are being eroded because the jobs are fighting for CPU power. Still, this doesn't really explain why breaking a job out into separate processes would improve performance.

+ +

So, how can I prove that either a) performance is not improved by breaking jobs out into separate processes, or b) the improvements made by breaking the code out into separate processes are only because of poor code design in the first place?

+",329526,,,,,44176.80069,Performance: Is There a Reason to Use Processes over Threads?,<.net>,5,13,1,,,CC BY-SA 4.0, +405490,1,,,2/20/2020 14:19,,1,345,"

I've read a lot about Domain Driven Design including books from Eric Evans and Vaughn Vernon. So I am familiar with the concepts Aggregate Root, Entity, and Value Object.

+ +

But while I was modeling some domain using the Domain-Driven Design approach, a question arose which I never had before. I realized that I could model an aggregate root's state entirely as a value object which also includes the child entities. Let me first show you the ""normal"" approach:

+ +
class EntityDataVO {}
+
+class Entity {
+    String id;
+    EntityDataVO data;
+
+    Entity(String id, EntityDataVO data) {
+        this.id = id;
+        this.data = data;
+    }
+
+    void update(EntityDataVO data) {
+        this.data = data;
+    }
+}
+
+class AggregateRoot1 {
+    private Map<String, Entity> entities = new HashMap<>();
+
+    void addEntity(String id, EntityDataVO data) {
+        this.entities.put(id, new Entity(id, data));
+    }
+
+    void updateEntity(String id, EntityDataVO data) {
+        this.entities.get(id).update(data);
+    }
+}
+
+ +

There is an Entity class with an id and a value object EntityDataVO. The AggregateRoot1 class creates instances of this class and keeps a list of them. Updates to specific entities are delegated to the Entity class.

+ +

Now let me show you the alternative modeling approach:

+ +
class AggregateRootDataVO {
+    private Map<String, EntityDataVO> entities = new HashMap<>();
+
+    AggregateRootDataVO addEntity(String id, EntityDataVO data) {
+        var rootData = new AggregateRootDataVO();
+        rootData.entities = new HashMap<>(entities);
+        rootData.entities.put(id, data);
+        return rootData;
+    }
+
+    AggregateRootDataVO updateEntity(String id, EntityDataVO data) {
+        var rootData = new AggregateRootDataVO();
+        rootData.entities = new HashMap<>(entities);
+        rootData.entities.put(id, data);
+        return rootData;
+    }
+}
+
+class AggregateRoot2 {
+    private AggregateRootDataVO data;
+
+    void addEntity(String id, EntityDataVO data) {
+        this.data = this.data.addEntity(id, data);
+    }
+
+    void updateEntity(String id, EntityDataVO data) {
+        this.data = this.data.updateEntity(id, data);
+    }
+}
+
+ +

In this case, the list of entities is ""encoded"" into the AggregateRoot2's value object AggregateRootDataVO. As you can see, the value object's identity is defined by its members, which are, of course, value objects themselves. So two instances of this class with an equal internal map of entities are equal, and I wouldn't care which one to use.

+ +

Now I am confused. I really have no idea which alternative I should use. The second approach has definitely an advantage. The complete aggregate's state is represented as a value object. So it is easy to serialize it in order to send it over the network. For instance, a backend service could send it to a GUI frontend and the latter would see the complete aggregate's state. This is not true for the first approach implemented in AggregateRoot1. In that case I would have to define a DTO (Data Transfer Object) for that purpose which would look similar to the AggregateRootDataVO class.

+ +

I wonder what others think about those two modeling approaches and which one they'd prefer under what circumstances. Currently, I really don't know which one I should prefer. In fact, I am tempted to always prefer the second approach when there is no good reason against it, because I like to deal with value objects and their nice properties.

+",30077,,,,,43882.84792,When to model an aggregate's entities as part of a value object?,,2,1,,,,CC BY-SA 4.0, +405492,1,405576,,2/20/2020 14:48,,2,159,"

I am trying to design a HashTable from scratch. I am starting with an initial bucket size of 11 and trying to maintain a load factor of 0.75.

+ +

The Java documentation mentions that whenever the number of items reaches the load factor threshold, the Hashtable doubles the number of buckets.

+ +

My question is: if I am starting with an initial bucket size of 11 and I double the buckets as soon as the load factor is reached, the number of buckets would be 22. Since the load factor has been reached, I need to rehash the table, and 22 will be used to rehash the keys (because I am using the division hashing method); but since 22 is not a prime, it will cause a lot more collisions.
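
+ +

One simple fix I am considering (sketch below): grow to the next prime at or above double the old capacity, so division hashing keeps spreading keys well. (For comparison, java.util.Hashtable itself grows to 2n + 1, which takes 11 to 23, also a prime.)

+ +
class Resizer {
+    // 11 -> 23, 23 -> 47, 47 -> 97, ...
+    static int nextCapacity(int oldCapacity) {
+        int candidate = oldCapacity * 2;
+        while (!isPrime(candidate)) {
+            candidate++;
+        }
+        return candidate;
+    }
+
+    static boolean isPrime(int n) {
+        if (n < 2) return false;
+        for (int i = 2; (long) i * i <= n; i++) {
+            if (n % i == 0) return false;
+        }
+        return true;
+    }
+}
+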

+ +

What is the best way to increase the bucket size in this case?

+",342967,,90149,,43881.97361,43883.46944,Load factor and prime number for hashing in HashTable,,2,1,,,,CC BY-SA 4.0, +405493,1,,,2/20/2020 15:14,,1,84,"

I was designing an email service in .NET Standard 2.0 using the MailKit library. I proposed the following design. This design was not accepted, stating the following reasons:

+ +
  1. This design has too many components. Basically, it adds more LoC.
  2. It proposes interfaces for models.
  3. If anyone wants to implement IMAP, it increases the complexity.
+ +

Based on these comments, another design was proposed and accepted as follows:

+ +

+ +

For me, the 1st design is better because:

+ +
  1. It's more pluggable, while the 2nd design is structured towards the MailKit library.
  2. It's easier to unit test, while the 2nd one requires reading through the MailKit library to find the implementations of MailTransport and IMailService.
  3. MailKit already provides SmtpClient and ImapClient. We can make use of those instead of going through the library to find that both implementations inherit from MailTransport/IMailService.
  4. More readable code, and the responsibilities are defined correctly.
+ +

Are those valid points? If not, what are the flaws in the 1st design? While designing a service, how far do we have to consider pluggability?

+",243528,,243528,,43881.65208,43952.91319,How far we have to consider the plug-ability while designing an email service using external library,,1,8,,,,CC BY-SA 4.0, +405497,1,,,2/20/2020 16:07,,-1,404,"

We are setting up a pattern library. Its use is, on the one hand, to document and display our UI elements in a web view. On the other hand, it should also be the place where components are built. The actual web application should then consume the components (markup, CSS and JavaScript) from the pattern library and only display them.

+ +

We are mainly using PHP, Mustache and JavaScript in our frontend at the moment, which is why we decided to build the components in the pattern library using those technologies.

+ +

At the same time, we are completely rethinking the technologies used in our frontend and are looking to build certain parts using React or web components (Stencil).

+ +

The tool we use for the pattern library (Fractal) makes you choose the template engine in a config. This means: choose Mustache, no React, and vice versa.

+ +

Has anyone here already set up a pattern library combining multiple technologies successfully, and been able to consume the components in the frontend application as well as display components built in different technologies in the pattern library?

+ +

If yes, I would very much appreciate your shared experience. Especially in terms of how you set up the pattern library, if you used a certain tool that helped a lot and how the frontend of the web application consumes the components.

+ +

Best wishes!

+",358012,,358012,,43882.38403,43882.38403,Recommend a way to handle multiple UI frameworks in a pattern library,,1,2,,,,CC BY-SA 4.0, +405504,1,,,2/20/2020 17:33,,1,99,"

I am currently designing a process which requires me to send alerts, like email notifications, to users who meet certain business criteria (we can also call these business rules).

+ +

I want to make this process more dynamic, so any rule can be deleted, modified, or added on demand (because there is no predefined set of rules, and they may change from time to time), and the process should be able to send alerts on the fly.

+ +

I have a rough draft in my mind and am looking for corrections/changes. My initial approach is something like this:

+ +
  • We already have some processes running which are performing a set of different functions.
  • I am thinking of applying the rules in those processes; any data which falls under the criteria will be stored in a newly created database table (this table will have toAddress, from, message, subject, description, rule, processName, timestamp, isSentMessage, status, etc.).
  • Then another program will run continuously, like a scheduler; it will look for the newly added records and send emails accordingly.
+ +

The problem with my approach: as the business rules may keep changing, we cannot keep updating the code, which would not be efficient.

+ +

So I wanted to know if there is any better approach to achieving this, or any modifications to this design (anything like adding additional columns, changing the design, etc.).

+ +

Technologies being used: Java 8, SQL Server, Spring, Autosys.

+ +

Any suggestions, other approaches, or designs are welcome.

+ +

UPDATE:

+ +

Now I have a solution to the above question, but I am still considering alternatives, so I am posting what is in my mind. I am thinking of setting up an Autosys job which runs on a schedule (using Spring), using Groovy scripts which are stored in the database. All other processes will send the binding parameters in a JSON format wherever we think the rules get triggered. Then my new scheduler will look for those records in that table; it knows which rule is to be applied, and it uses that particular Groovy template to execute the code for that rule and send an email.
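
+ +

To sketch the moving parts in Java (names hypothetical; the Groovy evaluation hides behind the interface): the rules are data loaded from the database, so adding or changing one never means redeploying the scheduler.

+ +
import java.util.*;
+
+interface AlertRule {
+    boolean matches(Map<String, Object> record);   // e.g. backed by a Groovy script
+    String name();
+}
+
+class AlertScheduler {
+    private final List<AlertRule> rules;           // refreshed from the DB each run
+
+    AlertScheduler(List<AlertRule> rules) { this.rules = rules; }
+
+    void runOnce(List<Map<String, Object>> pendingRecords) {
+        for (Map<String, Object> record : pendingRecords) {
+            for (AlertRule rule : rules) {
+                if (rule.matches(record)) {
+                    sendEmail(record, rule);       // then mark isSentMessage/status
+                }
+            }
+        }
+    }
+
+    private void sendEmail(Map<String, Object> record, AlertRule rule) { /* SMTP call */ }
+}
+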

+ +

(All processes will insert records into the DB, and the decision of whether or not to send an email will be made in the Groovy script.)

+ +

So please suggest whether my thought process is OK or whether any modifications are required.

+",353454,,353454,,43887.94931,44158.00278,How to design a process and use the business rules for sending alerts/notifications,,2,1,1,,,CC BY-SA 4.0, +405505,1,405516,,2/20/2020 17:53,,3,106,"

I am currently maintaining a data warehouse for a quickly growing startup company. There are a lot of reporting demands from the clients, and these are usually handled by a data warehouse we set up. However, unlike the bigger, more established companies I worked for in the past, this company has a rapidly growing schema that sees almost 1 new table and 2 to 3 table changes every two weeks. These rapid schema changes have been picked up by the ETL team, but it has been challenging to keep up, so I was wondering if there is a better way to handle it.

+ +

To explain the process in more detail: we have been using a traditional star schema data warehouse model. The tables are transferred to the ODS DB first without any changes, and are then transferred to the data warehouse layer as dimension and fact tables. If there is one change in the production DB schema, we need to change the DB structure for the ODS and the data warehouse, and change the ETL steps as well. In essence, we have a meeting every sprint to check whether there is any change in the production DB, and apply these changes to the ODS DB, the data warehouse, and the ETL. At this point I am wondering whether there is a more systematic way of doing this maintenance.

+",295466,,,,,43883.30625,How should a data warehouse be maintained for a quickly changing schema,,2,4,,,,CC BY-SA 4.0, +405508,1,,,2/20/2020 19:24,,2,178,"

I've been thinking about how to import a project B into a project A, both of which are GitHub repositories.

+ +

Project B is a library I reuse across many projects; I add things to it directly from projects X, Y, or Z.

+ +

Project B is simple, it's a directory with C# files and an assembly definition, i.e. a Unity class library. It's like a Shared Project in Visual Studio in the sense that it is minimalist and self-contained.

+ +

I've been trying to look for alternatives such as git submodules and so on, but in the end, nothing beats the ease and flexibility of deploying that library to another project through a symbolic link.

+ +

Pros:

+ +
    +
  • no pre-built library (a pre-built one would be impossible to augment, obviously)
  • +
  • code can be changed directly
  • +
  • IDE sees no difference, these are just files even though they're somewhere else
  • +
  • I do not have to mix commits in the library with those of the project
  • +
+ +

Cons:

+ +
    +
  • I have to commit the library to its own repository
  • +
  • Some things can break if I make API changes
  • +
+ +

But in reality, when I look at these cons, I don't think they outweigh the pros, and they would occur even with a different approach.

+ +

This is how I set things up:

+ +
    +
  • dev folder + +
      +
    • library
    • +
    • project + +
        +
      • library symlink
      • +
    • +
  • +
+ +

I use Link Shell Extension which eases the creation of symlinks in Windows Explorer.

+ +

Question:

+ +

Is this a good approach for importing external code into a project while still being able to modify it?

+ +

If not, could you suggest some alternatives?

+",59626,,,,,43881.84722,Are symbolic links a good way to 'import' a project into another?,,1,5,1,,,CC BY-SA 4.0, +405514,1,,,2/20/2020 20:28,,-1,244,"

I assumed that the purpose of pull requests within a team is to get a second set of eyes and basically proofread the code. But lately I am noticing that they can serve an additional function of preventing knowledge silos. The logic is that if every piece of code you write goes through a PR, then we preclude the danger of code which is unfamiliar to anyone except the author, increase the bus factor, and generally drive developers staying in the loop about projects they are not personally developing. If PRs are strategically spread out instead of always assigning to the same person, then these benefits become pretty strong.

+ +

By this logic, I am tempted to consider the ""proofreading"" to be a very secondary benefit of PRs. Correctness is checked in any case by automated and manual tests, careful design, and double checking my own work (though the fresh look from someone else doesn't hurt). The primary purpose of the PR is ""FYI"" - it is like an announcement to other developers of possibly significant changes I am implementing (if they are definitely significant, I would make an actual announcement eg. during a meeting).

+ +

This has impact on how the PRs are made. With the ""proofread"" logic, the main question is too big and hard to review PR vs. too trivial PR (simple changes that are obviously correct). But with the ""FYI"" logic, the question is whether the PR encapsulates a change that someone else should know about. Therefore extremely trivial PRs are still a good idea because even though there is no real need to proofread, the change is still worth communicating.

+ +

Is my thinking logical, or is this understanding of PRs an antipattern that causes problems down the line?

+",348440,,,,,43882.40972,Pull requests as knowledge transfer: an anti-pattern?,,3,2,,,,CC BY-SA 4.0, +405515,1,,,2/20/2020 20:43,,0,131,"

I have been working on a project and I'm hoping to get a gauge on possible performance issues before I put it in any production environment. Currently, on my local machine, Chrome's task manager says I average about 60000-70000 K for my memory footprint, while the JavaScript memory varies from about 5000-12000 K, with 3000-9000 K live.

+ +

I don't really know what all of this means for me. My production environment will more than likely be a single server with 16-32 GB of RAM. The total number of sessions is likely to never exceed 100. Not sure if it matters, but I use PHP $_SESSION to store session variables rather than cookies.

+ +

I understand this is a rather broad question, but on its face, does it look like there could be performance issues once the maximum number of sessions is reached?

+ +

Other than switching to cookies, caching, and managing any memory leaks, is there anything I can do to lower my memory usage and mitigate any performance concerns?

+ +

This is not the only application that will be on the server but there is a large portion of the current memory that remains unused.

+",352360,,353068,,43881.925,44191.50069,Memory Usage for Website,,1,3,,,,CC BY-SA 4.0, +405524,1,,,2/21/2020 3:05,,0,167,"

Say that I am developing a web application that has the following structure:

+ +
    +
  • An SPA web frontend (angular in my case)
  • +
  • Postgresql database with: + +
      +
    • A bunch of initial data in CSV's and JSON's that need to be loaded.
    • +
    • A bunch of cron-job scripts which periodically fetch data from external sources and feeds it into the database.
    • +
  • +
  • A thin 'middleware' web server which provides a GraphQL API to the database (as well as auth).
  • +
+ +

Currently, I develop this app with all three parts running separately... I run the frontend using Angular's dev server, I run the middleware as a standalone process (with nodemon), and I have a development database against which I run scripts manually.

+ +

This works ""ok"" as long as I am working solo, but it is quickly become unmanageable as I try to bring on other frontend devs (I will still be the only middleware/database guy). For example, I have put instances of the middleware and database on a development server that's accessible to all. However, this makes it hard for me to make changes (the data model and API are changing rapidly) because it may break whatever the others are working on. I think I need to be able to version the api and database, but I'm not exactly sure how to do that. Also, there may be problems with version skew between the components.

+ +

One thing I've thought about is putting the middleware and database (with all data preloaded) into a docker container and having the other frontend devs run it with docker compose or something. However, I'm not sure how well that would work on windows. I don't have the expertise to run something more complicated like kubernetes at the moment. Also, in the long term, I'm not sure I want to deploy in containers because putting a database in docker does not seem 'right' to me for some reason (maybe I'm biased).

+ +

Any advice on the correct development workflow and/or project structure and/or products and services that might help?

+",87358,,,,,44152.91875,Proper development workflow for fullstack app with multiple developers?,,3,3,,,,CC BY-SA 4.0, +405525,1,,,2/21/2020 4:03,,0,22,"

Traditionally, web applications have been built with one single page served to the client for every piece of functionality (/settings, /users …). So every time you need a new GUI, you have to navigate to a new page/new request. For example: suppose we have an app, and the user wants to change some settings (for example, which users should have access). This might be done in something like the following way: the user navigates to the ""settings page"", then the user navigates to the ""users page"". In this way we lose both GUIs/states, for the primary page and the settings page.

+ +

With the rise of single page apps these days, it looks to me like the idea is still the same as before: there should still be ""one page"" for every piece of functionality requested in the app. Using the example above, we would still create a totally new GUI for the settings page and then a totally new GUI for the users page. So the only difference is that we don't have to generate this GUI on the server side; we can now do it on the client instead.

+ +

So my question is: is there some special reason for not generating a GUI in the web application that is more window based? Using the example above again: instead of ""clearing the GUI and adding a new one"", we would just create a new dialog on top of the existing GUI once we open the settings, and do the same again once we open the ""users"" settings. (The dialog would just be some code like a div with the content, a static location, and a z-index higher than the last dialog.)

+ +

Is there any research showing that users prefer a web application without windows, or is it just done this way because of old habits in web application development?

+",292082,,,,,43882.16875,Window based singel page applications,,0,2,,,,CC BY-SA 4.0, +405527,1,,,2/21/2020 4:14,,0,69,"

We had an application break in production during a deployment because a load-balancer package in our top-level Dockerfile had pulled its latest version, which happened to have a new API. Our app broke during a time when most of our developers were out of the office, so I and another dev had to scramble into the night trying to fix the error. Because our latest build had many new features, it took us a few hours to discover that it was a version change in the Dockerfile that had caused the entire application to break.

+ +

Since we use CI/CD practices, I thought perhaps it might be a good idea to hardcode the version of this package in the Dockerfile, since it is such a high-level component of the application. Which I did.

+ +

My reasoning is that in the future, when staff are ""hands on deck"" and available to fix any issues, we can upgrade the top-level packages in our Dockerfile (there aren't many of them), carefully checking for versions which break the app.

+ +

Is this considered good or bad practice? Why?

+",223744,,,,,43882.37778,Is it considered a good practice to hardcode package versions in something as high-level as a Dockerfile?,,1,1,,,,CC BY-SA 4.0, +405528,1,,,2/21/2020 5:34,,4,252,"

I'm not a low-level programmer, I mainly program in C# which is a managed language. Still, every now and then I read articles, news and patch notes about the most varied software talking about memory unsafety fixes, including a video (if I remember correctly about Rust and it's memory safety) in which the host says that Microsoft loses lots of money just paying people to fix memory unsafeties. It got me thinking what exactly does ""memory safety"" mean, what kind of code generates memory unsafety, how does scientists (or attackers) find them, how and to which extent can they exploit it and how do you fix the part of the code that has it? It would be very informative if someone could answer these questions with minimal code examples (preferably written in C).

+",339024,,,,,43890.14722,"How is memory unsafety generated, found, exploited and fixed?",,4,1,2,,,CC BY-SA 4.0, +405532,1,,,2/21/2020 8:21,,1,46,"

Say I want to include a third-party HTML component in my site...

+ +

I know that I can simply include a <script> tag to pull in the component on the client's side; however, because I do not trust this component to load in a speedy manner, or to load at all, I want to server-side render it for users so that I am not left with egg on my face when it inevitably fails to load due to some connection issue with the host.

+ +

This would be simple if the component were in a final, static state; however, I know that there is a strong possibility of updates being rolled out to this component over time, which would leave any static version on my server out of date. A problem that would easily be avoided if I'd just use the <script> tag mentioned earlier.

+ +

My question is... How do I get around this? Should I be polling the link in the <script> tag I have been given once a day, for example, just to ensure I have the current version? Should I attempt some sort of subscription service to the link?
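
+ +

To illustrate the once-a-day polling idea, here is a minimal sketch, shown in Java (11+) for concreteness; the URL, the 24-hour interval, and storeLatestCopy are placeholders, not recommendations:

import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+
+void refreshComponentDaily() {
+    HttpClient client = HttpClient.newHttpClient();
+    // Fetch the component once a day and keep the last good copy for server side rendering.
+    Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
+        HttpRequest request = HttpRequest.newBuilder(
+                URI.create(""https://thirdparty.example.com/component.js"")).build();
+        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
+              .thenAccept(response -> storeLatestCopy(response.body())); // assumed helper
+    }, 0, 24, TimeUnit.HOURS);
+}
+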

+",335397,,,,,43882.37431,Server side rendering of third party updated html components,,1,0,,,,CC BY-SA 4.0, +405536,1,405540,,2/21/2020 9:50,,0,302,"

I'm trying to unit test some repositories and have no idea what I'm doing; if someone could point me in the right direction that would be great. Currently, I'm testing the behaviour of creating a new user:

+ +
    [Fact]
+    public void Create_New_User_Test()
+    {
+        var userRepository = new Mock<IUserRepository>();
+        var user = new User();
+        userRepository.Setup(s => s.CreateNewUser(user)).Returns(Task.FromResult(1));
+
+        Assert.IsAssignableFrom<User>(user);
+    }
+
+ +

I'm not sure how to set up the right methods, what to pass, or what to assert. Is this even a proper unit test? Because it doesn't feel like it.

+",358083,,,,,43882.64097,Is this proper way to unit test?,,3,3,,,,CC BY-SA 4.0, +405537,1,,,2/21/2020 9:53,,-1,642,"

We develop a web app which consists of a frontend and a backend project. The frontend consumes an API that is provided by the backend. Frontend and backend are developed separately by different teams, but in the end we deliver one application to our customer. Both applications get built separately by GitLab CI and deployed independently of each other to the dev/test/staging systems. They then run in different Docker containers connected to each other.

+ +

If we simply stick to git flow for both projects, we will finally have a tested combination of frontend and backend in our testing environment. For a simple one-sided application I would suggest this state simply gets tagged and released. Then you would have something like v1.0, which can be delivered to the customer by deploying it on the prod system.

+ +

But if we look into the individual FE/BE projects, there are very likely different versions. For example:

+ +

Frontend: 3.14.324
Backend: 2.1.92

+ +

My question here is: does it make sense to think about which versioning strategy to apply for each individual project, to avoid incompatible states between these two applications? Or do we just say ""this combination works""? Is there a need for a convention for how these versions have to fit together? How do we manage that dependency between the two sides?

+ +

What is the usual way to handle different apps working together, especially if two different teams are working on these projects separately?

+ +

One scenario I know of is that we change something on the backend so that the frontend gets an error - the typical ""does not work anymore but still worked 5 min ago"" scenario, which causes our devs so much headache. How can we avoid that but still have a working product in the end?

+ +

Some opinions from experienced developers might be helpful here too.

+ +

Thanks a lot

+",310893,,,,,43882.51319,Deployment / Release strategy for separated frontend + backend web app,,1,0,,,,CC BY-SA 4.0, +405541,1,,,2/21/2020 11:32,,0,199,"

I'm trying to simulate the hiring process of a record label. Formally, I decided that the steps are these:

+ +
    +
  1. The artist sends their new album to the record label, which will review it and decide whether to approve it.
  2. +
  3. If the record label approves it, they will prepare an interview in order to decide whether to hire the artist. If the album is not approved, they will reject it and then terminate the process.
  4. +
  5. Once the interview is done, they can reject the artist (terminating the process) or send an offer to the artist, who will decide whether to accept it.
  6. +
+ +

This is how I modeled it: [BPMN diagram]

+ +

What do you think about how I used the objects in this model? I didn't understand whether I need to connect them to other activities or not. Regarding the two messages, I used the one in the artist lane to communicate with the record label; is that correct?

+",358092,,,,,44152.87639,First steps in BPMN 2.0 - Music production diagram,,1,0,,,,CC BY-SA 4.0, +405548,1,,,2/21/2020 16:38,,-1,41,"

Imagine I am creating a command called UpdatePhoneNumberCommand. Imagine also that in my application a business rule dictates that no-one may update anyone else's phone number - it is always you updating your own Phone Number.

+ +

The number itself obviously needs to go onto the Command as a property (UpdatePhoneNumberCommand.Number); my question is, does the UserId belong on there too? You could either a) make it a property on the Command, figure out who is logged in, and then execute the Command, or b) leave it off the Command and look up the currently logged-in User in the CommandHandler (via some abstraction - IUserContext or similar).
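
+ +

To make option b) concrete, here is a minimal sketch in Java; IUserContext is the abstraction mentioned above, while UserRepository, UserId, and the method names are assumptions for illustration only:

class UpdatePhoneNumberHandler {
+    private final IUserContext userContext;
+    private final UserRepository users;
+
+    UpdatePhoneNumberHandler(IUserContext userContext, UserRepository users) {
+        this.userContext = userContext;
+        this.users = users;
+    }
+
+    void handle(UpdatePhoneNumberCommand command) {
+        // The subject of the command is resolved inside the handler,
+        // so the command itself carries only the new number.
+        UserId currentUser = userContext.currentUserId();
+        users.updatePhoneNumber(currentUser, command.getNumber());
+    }
+}
+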

+ +

I don't like the idea of CommandHandlers being concerned with who is logged in - even via an abstraction - but I can't really make a convincing case why. It just feels wrong to me: obviously your Handler needs to accept dependencies, but I don't feel that figuring out who the subject of a Command is should be looked up within the Handler.

+ +

Do you guys have any advice/heuristics when designing Commands which do something similar?

+",315572,,,,,43912.70972,Does logged-in User information belong on a Command?,,1,0,,,,CC BY-SA 4.0, +405551,1,405659,,2/21/2020 17:10,,0,115,"

This question is highly related to this other question. I'm asking another question because I'm still a bit confused about the topic.

+ +

The Issue I Ran Into

+ +

I currently have a Django web application where users can have a list of their hobbies and can increment a counter each time they do one of their hobbies. My issue is, I want to extend this functionality outside of a front-end experience. In other words, I want users to be able to increment their hobbies using a POST request, say, from the terminal or from their own script or something.

+ +

Where I'm Confused

+ +

Do I have to create an API for my webapp to add this functionality?

+ +

Another way to put it is: do I have to handle requests coming from the front end differently than requests coming from somewhere else? Would front-end requests go to the URL www.hostname.com/hobbies/1 and other requests go to the URL www.hostname.com/api/hobbies/1?

+ +

If I do route the POST requests to the same URL as the requests coming from the front end (i.e. www.hostname.com/hobbies/1), then why does Google have external APIs for Google Maps? It seems like, if other web applications are calling Google Maps functionality, Google has separated those instances from their front end.

+",358119,,358119,,43882.72153,43885.39861,Where is the distinction between a web application and an API?,,1,2,,,,CC BY-SA 4.0, +405553,1,,,2/21/2020 17:22,,0,48,"

We currently maintain a well-separated monolithic service which handles various 'contexts' to help with the separation. I haven't had any issues so far when dealing with problems that required the creation of a new context, since that was simple enough.

+ +

Now I'm dealing with a group of entities (3, to be exact) that relate to each other and I'm not sure how to properly organize one of them in the project.

+ +

For the example we'll name them Branch, BranchService, Service.

+ +

Branch lives in its own context, as does Service, but at the moment I am not sure where to place BranchService. In this case, BranchService is not just an entity to handle the many-to-many relation; it has some information of its own that is used around the service.

+ +

Since they are relatively coupled, should they be together and share the same context? Or is there an alternative for this case?

+",303101,,,,,44152.95833,Context separation for a group of entities,,1,1,,,,CC BY-SA 4.0, +405555,1,,,2/21/2020 17:50,,37,8715,"

Say that A is working on a branch based off master and B merges changes into the master branch, which introduces merge conflicts between A's branch and master.

+ +

Whose responsibility is it to fix merge conflicts? I am not intending to be petty, so in other words - is it more productive in general if A fixes these conflicts or B?

+",313167,,,,,43886.41944,Whose responsibility is it to fix merge conflicts?,,6,14,9,,,CC BY-SA 4.0, +405567,1,,,2/21/2020 21:43,,0,74,"

Our company is trying to find a good generic way to have Many-to-One data for an entity. For example, a user might have 1 primary email, but many other emails also attached to their account.

+ +

So we have a users table (1 row maps to 1 user):

+ +
| id     | handle   | primary_email | is_verified | first_name | last_name |
+|--------|----------|---------------|-------------|------------|-----------|
+| (int)  | (string) | (string)      | (boolean)   | (string)   | (string)  |
+
+ +

but then we may want to store multiple emails for the same user, so we have another table, let's call it ""users_map"", where many rows map to 1 user:

+ +
| id     | user_id | key      | value  |
+|--------|---------|----------|--------|
+| (int)  | (uuid)  | (string) | (json) |
+
+ +

so for example if there were multiple emails for the same user, we would do something like this:

+ +
| id | user_id | key   | value            |
+|----|---------|-------|------------------|
+| 1  | 1       | email | ""foo1@bar.com""   |
+| 2  | 1       | email | ""foo2@bar.com""   |
+| 3  | 1       | email | ""foo3@bar.com""   |
+| 4  | 2       | email | ""sing@user2.com"" |
+| 5  | 2       | email | ""song@user2.com"" |
+
+ +

So my question is: is there a better way to do this other than using JSON for the value column? If not, is there a way to enforce a schema on the JSON somehow? Last question: from my brief research, the inverse table design is called an ""unpivot"" table, but if there is a better name for it please let me know.

+ +

One potential advantage of a generic table like this: if you shard by user, each shard has only 2 tables instead of 5 or 10.

+",101210,,209774,,43999.89583,43999.89583,How to use strict schema with seemingly fluid data type,,2,2,,,,CC BY-SA 4.0, +405579,1,405582,,2/22/2020 17:24,,0,98,"

By way of background, I recently coded a small app that detects the number of boxes in an image. The app defines some types, for example:

+ +
class Box(val x1: Int, val y1: Int, val x2: Int, val y2: Int)
+
+ +

This app is Kotlin, but the principles should apply to any strongly-typed language. The definition of Box above is how I've typically seen code written. It works just fine. However, just using Int for the type, we could inadvertently assign an X-coordinate, where Y-coordinate is expected. I realised I could use the type system to avoid such an error, by defining XCoordinate and YCoordinate types:

+ +
class Box(val x1: XCoordinate, val y1: YCoordinate, 
+          val x2: XCoordinate, val y2: YCoordinate)
+
+ +

In fact, when I did this I actually detected a bug - I'd copy&pasted code to detect horizontal lines to detect vertical lines, and forgotten to update one bit.

+ +
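
Since the principle should apply to any strongly-typed language, here is the same idea sketched in Java using records, purely as an illustration (not part of the original app):

record XCoordinate(int value) {}
+record YCoordinate(int value) {}
+
+// The compiler now rejects a call site that swaps an X for a Y coordinate.
+record Box(XCoordinate x1, YCoordinate y1, XCoordinate x2, YCoordinate y2) {}
+

+ +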

I figure this is not a novel technique, but I've not really come across it in practice, and without knowing what it's called, it's hard to Google. If you have information on what the technique is called, or even better, advice on how to do it well, that would be great!

+",108835,,,,,43883.77361,What's the name of the technique of using very specific types to help catch errors?,,1,0,,,,CC BY-SA 4.0, +405586,1,,,2/22/2020 18:50,,0,53,"

I am using a 3rd party library that must be initialized with Lib::init() before any of its other functions may be called and that must be uninitialized with Lib::destroy() before the application ends. My classes that use the library just assume that it is initialized. This is not a problem in my application - I am calling init() immediately when entering main and I am calling destroy() directly before leaving it.

+ +

Unfortunately, in my unit tests, I sometimes forget to initialize the library. This got me thinking: if I forget to call Lib::init(), other users of my classes will also forget to call it. To tackle this problem, I added a RAII wrapper that calls Lib::init() in the constructor and Lib::destroy() in the destructor:

+ +
struct LibInstance
+{
+    LibInstance()
+    {
+        Lib::init();
+    }
+    LibInstance(LibInstance const&) = delete;
+    LibInstance(LibInstance&&) = delete;
+    LibInstance& operator=(LibInstance const&) = delete;
+    LibInstance& operator=(LibInstance&&) = delete;
+    ~LibInstance()
+    {
+        Lib::destroy();
+    }
+};
+
+ +

Now all of my classes that use the library require a LibInstance const& argument in the constructor or in the respective member function. This way, it is guaranteed that init() has been called before:

+ +
struct MyClass
+{
+    int do_something_with_lib(LibInstance const& /*instance*/)
+    {
+        // Here, we can be sure that Lib::init() has been called,
+        // because the user was able to pass a LibInstance argument.
+        Lib::foo();
+        return Lib::bar();
+    }
+};
+
+ +

At first, this wrapper seemed like a really good solution and I was happy about it. It is impossible to use my class without initializing the library. However, after some time, I find it really strange that many of my classes now require a LibInstance argument and never access it. Additionally, it is sometimes inconvenient to pass the LibInstance from main down to the place where it is actually used.

+ +

Is it a good idea that my classes take an additional (unused) argument just for ensuring that Lib::init() has been called?

+ +

If it is not a good idea, what is a better way to ensure the initialization?

+",301401,,,,,43883.78472,Ensure that library has been initialized,,0,6,,,,CC BY-SA 4.0, +405587,1,405650,,2/22/2020 18:53,,1,150,"

Let's say you have a username value object which has formatting rules e.g.,

+ +
class Username
+{
+    private $username; 
+
+    public function __construct(string $username)
+    {
+        // throw an exception if the username doesn't match the regex ^[a-zA-Z0-9]+$
+        // ...
+
+        $this->username = $username;
+    }
+}
+
+ +

Like above, usually you would pass the raw string from ""outside"" to validate the username.

+ +

My question is: can value objects have ""additional functionality"", like generating their own value - for example, generating a username? E.g.:

+ +
class Username
+{
+    private $username; 
+
+    // original __constuct() remains here
+    // ...
+
    public function generateUsername(UserRepository $userRepository): self
+    {
+        // retry until the generated username is not taken yet
+        do {
+            $username = $this->generateRandomUsername();
+        } while ($userRepository->userExist($username));
+
+        return new self($username);
+    }
+}
+
+ +

The example is just quick pseudocode, but the idea is that generateUsername() would take a UserRepository to verify that the newly generated username does not exist in the database yet.

+ +

Can value objects have functionality like this, or should they behave more like the first example, where they just accept and validate input from the ""outside""?

+",319317,,319317,,43883.80208,43892.87569,Can Value Objects create their own value?,,1,14,,,,CC BY-SA 4.0, +405589,1,,,2/22/2020 20:46,,0,43,"

Title pretty much says it. I ran into trouble getting condition coverage of a logical statement and found that the order in which the statement is written (rather than the order of operations) results in different condition coverage. The equivalent statements are the following. I'm using Google Test.

+ +
if (A && (B || C)) 
+{
+    do x
+}
+
+if ((B || C) && A) 
+{
+    do x
+}
+
+ +

For both if statements I have the same three test cases, and for each case I've noted in parentheses which conditions I think actually get evaluated by gtest (the rest are short-circuited).

+ +

For the first if statement:

+ +
    +
  1. A = true, B = true, C = false (A and B evaluated; C short-circuited)
  2. +
  3. A = true, B = false, C = false (A, B, and C all evaluated)
  4. +
  5. A = false, B = false, C = true (only A evaluated)
  6. +
+ +

For the second if statement:

+ +
    +
  1. B = true, C = false, A = true (B and A evaluated; C short-circuited)
  2. +
  3. B = false, C = false, A = true (B and C evaluated; A short-circuited)
  4. +
  5. B = false, C = true, A = false (B, C, and A all evaluated)
  6. +
+ +

The second if statement results in full condition coverage (since A, B, and C each get evaluated as both true AND false), but the first does not.

+ +

Could this be considered a bug? Technically, conditions in parentheses should evaluate first, but this doesn't seem to be happening.

+ +

That being said, I don't think this behavior could result in things being missed, but it could result in more test cases than necessary depending on how your logical statements are ordered, so maybe it's not a bug.

+ +

Is there a reason for this behavior that I'm missing? Much appreciated!

+",358185,,358185,,43883.87222,43883.87222,"Logically equivalent conditionals, same test, but different condition coverage (google test)",,0,2,1,,,CC BY-SA 4.0, +405590,1,,,2/22/2020 20:47,,1,40,"

In our current architecture, a user has multiple ways they can change an address (as an example). They can do it through an online portal, through our core processor directly, or via a number of apps that talk to the core processor. I want to design a solution that has one place for all of those users to funnel to, and one place for the business logic, instead of half a dozen. I'm writing a wrapper API (RESTful) that abstracts our core processor's SOAP API for ease of use. The reason for the abstraction is because we use a primitive proprietary programming language that doesn't generate XML as easily as it can simply call out to a URL.

+ +

I'm evaluating third-party proxy layers such as Mulesoft for this, but I'm also researching solutions that would allow for business logic to be placed into the RESTful API before it calls out to the core processor. We also want to plan for the occurrence that the core processor may eventually change, and the SOAP API could be replaced with something else. What design pattern (if any) would I use to decouple the API interface from the underlying SOAP API and core processor so that if we need to switch the backing API to a completely different API that provides similar data and maybe uses a different transport mechanism, the outer API wouldn't have to change?

+ +

I believe it would either be the adapter pattern or the bridge pattern, but I'm not very experienced in SOLID or design patterns and am not sure which would apply (if any). I know the bridge pattern is for up front design, while the adapter pattern is after the fact. It seems like both to me.

+ +

Or is this just inversion of control at work?

+",307409,,,,,43883.86597,How can I decouple my wrapper API from its underlying SOAP implementation?,,0,0,,,,CC BY-SA 4.0, +405594,1,,,2/22/2020 23:18,,0,70,"

I asked this question originally on StackOverflow, and was advised to post it here instead. My question is concerning the best practices for continuous deployment and continuous integration especially with small teams.

+ +

Within our project we have a situation, where we would have to have a server running code (optimally from a git repository or something comparable) at the customer's facilities. As the server would potentially contain sensitive information, we would have to follow a specific procedure for deployment to these machines:

+ +
    +
  1. cut off connection to some of the data drives
  2. +
  3. connect to the internet
  4. +
  5. check for updates in the software
  6. +
  7. update and run pre- and post-deployment codes on the machine
  8. +
  9. disconnect from the internet
  10. +
  11. connect data drives again
  12. +
+ +

Additionally, a proper solution would also have to include some precautions concerning security in transmission of the code itself, this, however, is optional.

+ +

I have searched for solutions online, but most of them only contain information on deployment to online services such as Heroku or Microsoft Azure. In our case the use of such services is off-limits, as customer data must not be stored off-site. Another solution would be to include handcrafted bash scripts and cron jobs on the customer's machine. This is tedious and error-prone. Moreover, a trip to the customer's facilities would potentially be necessary if deployment scripts change.

+ +

What is the best way to fulfill the described requirements?

+ +

Additional question: is there a way to improve the routine described above so that it does not include cutting off the data drives if there is no update to deliver?

+",358194,,,,,43885.45625,Best practices for python CD/CI with machines on-site at customer,,2,0,,,,CC BY-SA 4.0, +405600,1,,,2/23/2020 2:59,,-3,151,"

What type of programming language is C? It is not object-oriented, but what orientation is it? I googled it and got no good answers.

+",348453,,,,,43885.67431,"If C isn't object oriented, then C is _____ oriented?",,3,1,2,,,CC BY-SA 4.0, +405602,1,405644,,2/23/2020 3:09,,1,117,"

I am implementing an ECS system with a data-oriented design, and with a TDD methodology using Catch 2. I have the following class declaration snippet for an EntityManager:

+ +
using Entity = std::uint16_t;
+using Signature = std::bitset<MAX_NUM_COMPONENTS>;
+
+const std::uint16_t MAX_NUM_ENTITIES = 10000;
+
+class EntityManager
+{
+public:
+  ...
+  Entity createEntity();
+  ...
+private:
+  Signature entities[MAX_NUM_ENTITIES];
+  ...
+};
+
+ +

The purpose of EntityManager::createEntity() is simply to return an Entity (which we will refer to as e), and set entities[e] to some value. However, the method must throw an exception when the number of entities that exist is more than MAX_NUM_ENTITIES.

+ +

I would like to unit test the aforementioned method, but I am unsure how to do so. One idea I have is to fill up the entire entities array by calling createEntity() MAX_NUM_ENTITIES + 1 times in my test to trigger an exception. However, I have a feeling that this is not the best way to do it, and it may even result in a slow test run. Another, similar idea is to somehow mock entities so that it has a small size; we would then call createEntity() multiple times until an exception is raised, just like the previous idea.

+ +

What approach would you recommend?

+",208923,,208923,,43884.13611,43884.96528,"In C++, how would one unit test a method that must throw an exception when a private array no longer has ""space""?",,1,8,,,,CC BY-SA 4.0, +405605,1,,,2/23/2020 5:50,,0,36,"

Currently I'm working on creating something with the following general structure: I want to call 4 different APIs in sequential order (the results of one are needed for the next one). If one throws an exception, I undo the efforts of the previous APIs with their sister delete APIs.

+ +

Currently my structure/control flow looks like this:

+ +
boolean A = false;
+boolean B = false;
+boolean C = false;
+boolean D = false;
+String response = StringUtils.EMPTY;
+
+try {
+    apiACreate();
+    A = true;
+} catch (Exception e) { // A's exceptions
+    A = false;
+}
+
+if (!A) {
+    response = ""failed"";
+    return response;
+}
+
+try {
+    apiBCreate();
+    B = true;
+} catch (Exception e) { // B's exceptions
+    B = false;
+}
+
+if (!B) {
+    apiADelete(); // roll back A
+    return response;
+}
+
+ +

Basically this is repeated 2 more times, and if all 4 calls are successful my response says success. I was wondering if there is a cleaner way to approach this.

+ +

I think I will use Optional for each API call, but I was wondering: do I need to write 4 different rollback functions?

+",356812,,,,,43884.24306,Better way to structure several different API calls with rollbacks?,,0,1,,,,CC BY-SA 4.0, +405609,1,,,2/23/2020 8:47,,-1,263,"

So I was reading https://hackernoon.com/objects-vs-data-structures-e380b962c1d2

+ +

and I stumbled upon this quote:

+ +

""A Person data structure has a first name, last name, and phone number. A Person object walks, runs, jumps, and speaks. A Person object does things.""

+ +

As I understand it, the Person data structure would be like:

+ +
public class Person
+{
+    public String name;
+    public int weight;
+}
+
+ +

Now for the Person object that ""walks, runs, jumps, and speaks"": I'm confused how this would be any different from the previous class with methods simply added, like so:

+ +
public class Person
+{
+    public String name;
+    public int weight;
+
+    public void run(){
+        stuff 
+    }
+
+    public void jump(){
+         stuff
+    }
+}
+
+ +

Sorry, I'm just confused and any clarification would be extremely helpful!

+",356812,,,,,43884.39444,Difference between objects and data structures?,,1,6,,,,CC BY-SA 4.0, +405612,1,,,2/23/2020 10:30,,-3,35,"

I have been given a task to take on a legacy application (which has a very poor API in terms of user experience, is largely undocumented, and performs slowly) and build a new API and SDK to improve the user experience. I have access to the source code of this legacy application and I can see only one endpoint which does everything.

+ +

I would like to know the best way to wrap a new API that improves user experience for a legacy application. These are some of things I have thought about:

+ +
    +
  1. Design the API so that it follows REST principles
  2. +
  3. Document the API using something like RAML/Swagger etc
  4. +
  5. Improve the user experience of the API so that they can enter query data much more easier such as specifying it in a JSON payload rather than as query parameters in the URL (as is with the legacy application)
  6. +
  7. Generate tests to query all permutations and so exercise as much as possible the underlying legacy service.
  8. +
  9. Some sort of regression testing framework to make sure the new API does not break the legacy app by working outside it's constraints.
  10. +
+ +

One thing which troubles me is how best to communicate with the legacy web service. I would have to translate the JSON from my web service into a query which the legacy API accepts. What are the best practices around that?

+ +

Any other suggestions which I have missed would be appreciated.

+",315865,,315865,,43884.44306,43884.47708,Best Practices for Building An API and SDK for a Legacy Application,,1,1,,,,CC BY-SA 4.0, +405615,1,,,2/23/2020 12:19,,-2,243,"

Let's say we have an abstract superclass A with abstract methods m1 and m2. B inherits from A, overrides m1 and m2, and defines a new public method m3. To access m3 we have to cast to B, and this requires the use of the instanceof operator:

+ +
A a = new B();
+if(a instanceof B) {
+  B b = (B) a;
+  b.m3();
+}
+
+ +

Does this mean that adding m3 was a poor design decision and all subclasses should stick to the interface defined in the superclass without adding new methods? In fact, ideally, we should always program against the interface, so even casting to a subtype would be poor design.

+",357636,,83178,,43885.58403,43885.65139,Is adding methods in a subclass poor design?,,4,5,,,,CC BY-SA 4.0, +405618,1,,,2/23/2020 14:07,,-5,41,"

What is the difference between a Class and an Actor in UML? Can I use something as an actor in a use case diagram when it is also used as a class in a class diagram?

+",358219,,208831,,43885.58403,43885.58403,Difference in UML,,2,4,,,,CC BY-SA 4.0, +405619,1,,,2/23/2020 14:21,,-2,295,"

Situation: I am designing a REST API that needs one or more potentially large objects to do its work.

+ +

I am facing a decision to either

+ +
    +
  1. Pass the large object by reference and have the API retrieve it
  2. +
  3. Pass in the large object as a parameter
  4. +
+ +

Approach 1:

+ +
@PostMapping(""/api/{id}"")
+String getSomeObj(int id){
+    //make another rest call with id and get CustomObj
+    // then do some logic and return something
+    //Here response time will be more as it has again another rest calls
+
+}
+
+ +

Approach 2:

+ +
@PostMapping(""/api/{id}"")
+String getSomeObj(@PathParam(""id"") int id, @RequestBody CustomObj obj){
+    //directly do logic with the provided obj and return something
+    //Here Response time would be less as we are directly getting the actual Object from Request Body
+    //BUT is this a good practice to pass an object in which we need only few details?
+}
+
+ +

Issue 1):

+ +
for(int i=0; i<100; i++){
+    id = i;
+    //make rest call to /api/{id} 
+}
+
+ +

The above makes 100 unnecessary internal REST calls if we follow approach 1; we can avoid this issue if we follow approach 2.

+ +

Issue 2): What if CustomObj is a huge nested JSON object? Of course, the memory taken by the JSON would be quite small even if the JSON object has many arrays and nested objects, BUT we don't need all the information in the JSON; only a few details from the request object are required. Is it good practice to send such a huge object as the request body?

+ +

So of the two above, which approach is good practice?

+",348609,,353068,,43887.66181,43887.66181,Pass ID or Object which has irrelavant details as RequestBody in Rest Call?,,1,10,,,,CC BY-SA 4.0, +405620,1,,,2/23/2020 14:57,,0,74,"

Why, in C#, is it not important to know when generation 1 was collected by the GC when implementing GCNotification?

+ +

While reading the CLR via C# book I came across the following excerpt:

+ +
+

The GCNotification class shown below raises an event whenever a generation 0 or generation 2 collection occurs. With these events, you could have the computer beep whenever a collection occurs or you calculate how much time passes between collections, how much memory is allocated between collections, and more. With this class, you could easily instrument your application to get a better + understanding of how your application uses memory.

+
+ +
public static class GCNotification {
+ private static Action < Int32 > s_gcDone = null; // The event's field
+ public static event Action < Int32 > GCDone {
+  add {
+   // If there were no registered delegates before, start reporting notifications now
+   if (s_gcDone == null) {
+    new GenObject(0);
+    new GenObject(2);
+   }
+   s_gcDone += value;
+  }
+  remove {
+   s_gcDone -= value;
+  }
+ }
+
+ private sealed class GenObject {
+  private Int32 m_generation;
+  public GenObject(Int32 generation) {
+   m_generation = generation;
+  }
+  ~GenObject() { // This is the Finalize method
+   // If this object is in the generation we want (or higher),
+   // notify the delegates that a GC just completed
+   if (GC.GetGeneration(this) >= m_generation) {
+    Action < Int32 > temp = Volatile.Read(ref s_gcDone);
+    if (temp != null) temp(m_generation);
+   }
+   // Keep reporting notifications if there is at least one delegated registered,
+   // the AppDomain isn't unloading, and the process isn’t shutting down
+   if ((s_gcDone != null) &&
+    !AppDomain.CurrentDomain.IsFinalizingForUnload() &&
+    !Environment.HasShutdownStarted) {
+    // For Gen 0, create a new object; for Gen 2, resurrect the object
+    // & let the GC call Finalize again the next time Gen 2 is GC'd
+    if (m_generation == 0) new GenObject(0);
+    else GC.ReRegisterForFinalize(this);
+   } else {
+    /* Let the objects go away */ }
+  }
+ }
+}
+
+ +

When a generation 1 collection occurs, the listeners will be passed 0 instead of 1, due to this line:

+ +

if (temp != null) temp(m_generation);

+ +

(since m_generation is never 1 in this example, but only 0 and 2).

+ +

Why didn't we add a new GenObject(1);? Is it because the listeners would then be called twice (once for 0 and once for 1 when generation 1 is collected)? In that case, why aren't they called twice due to the new GenObject(2); (once for 0 and once for 2 when generation 2 is collected)?

+ +

Maybe the author just omitted the new GenObject(1); accidentally, rather than intentionally?

+",357525,,,,,43884.62292,Why in C# it is not important to know when the generation 1 was collected by GC while implementing the GCNotification?,,0,6,,,,CC BY-SA 4.0, +405624,1,405637,,2/23/2020 15:21,,6,335,"

In TypeScript, the following is called a union type:

+
number | string
+
+

and this is called an intersection type:

+
number & string
+
+
+

However, is the resulting type of the | not actually an intersection, and that of & a union?

+

From the docs:

+
+

For example, Person & Serializable & Loggable is a Person and Serializable and Loggable. That means an object of this type will have all members of all three types.

+
+

and

+
+

number | string | boolean is the type of a value that can be a number, a string, or a boolean.

+

If we have a value that has a union type, we can only access members that are common to all types in the union.

+
+
+

Q: Would it therefore not be more correct to switch their names?

+",55929,,-1,,43998.41736,43885.08264,Naming of union and intersection types in TypeScript,,3,2,1,,,CC BY-SA 4.0, +405630,1,405733,,2/23/2020 17:33,,0,697,"

I have a website which offers pages in the format https://www.example.com/X, where X is a sequential, unique number increasing by one every time a page is created by a user, and never reused even if the user deletes their page. Since the site doesn't offer a quick and painless way to know which of those pages are still up, I resorted to checking them one by one, contacting them through an HttpClient and analyzing the HttpResponseMessage.StatusCode for 200 or 404 HTTP codes. My main method is as follows:

+ +
private async Task CheckIfPageExistsAsync(int PageId)
+    {
+        string address = $""{ PageId }"";
+        try
+        {
+            var result = await httpClient.GetAsync(address);
+
+            Console.WriteLine($""{ PageId } - { result.StatusCode }"");
+
+            if (result.StatusCode == HttpStatusCode.OK)
+            {
+                ValidPagesChecked.Add(PageId);
+            }
+        }
+        //Code for HttpClient timeout handling
+        catch (Exception)
+        {
+            Console.WriteLine($""Failed ID: { PageId }"");
+        }
+    }
+
+ +

This code is called as follows in order to have a certain degree of parallelism:

+ +
public void Test()
+    {
+        var tasks = new ConcurrentBag<Task>();
+        var lastId = GetLastPageIdChecked();
+        //Here opens up 30 requests at a time because I found it's the upper limit before getting hit with a rate limiter and receiving 429 errors
+        Parallel.For(lastId + 1, lastId + 31, i =>
+        {
+            tasks.Add(CheckIfPageExistsAsync(i));
+        });
+        Task.WaitAll(tasks.ToArray());
+
+        lastId += 30;
+        Console.WriteLine(""STEP"");
+
+        WriteLastPageIdChecked(lastId);
+        WriteValidPageIdsList();
+    }
+
+ +

Now, from what I understand, starting tasks through Parallel should let the program handle for itself how many concurrent threads should be active at the same time, and adding them all to a ConcurrentBag enables me to wait for all of them to end before moving on to the next batch of pages to check. Since this whole operation is incredibly expensive time-wise, I'd like to know if I've opted for a good approach when it comes to parallelism and asynchronous methods.

+",343856,,,,,43886.80069,"How to approach a large number of multiple, parallel HttpClient requests?",,1,13,,,,CC BY-SA 4.0, +405631,1,405671,,2/23/2020 17:56,,1,315,"

I know the majority of databases use B-Trees, and I can see how using a balanced tree gives fast sort times when ordering by ID or whatever else the primary key is; but how are databases able to ORDER BY different fields like Name or Age? Does the database just run an efficient sorting algorithm like merge sort or quick sort on the data, or does it store sorted data in B-Trees for all fields (which seems really inefficient in terms of storage)? Because ID ordering and Name ordering would be different, unless it stores all fields sorted in B-Trees it must perform some other sorting algorithm.

+ +
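
As a conceptual aside: a secondary index is essentially an extra sorted structure mapping the indexed column to the row's primary key. The following in-memory Java analogue is only an illustration of that idea, not how an actual engine stores pages:

import java.util.TreeMap;
+
+// A TreeMap is an in-memory stand-in for the on-disk B-Tree of a secondary index.
+TreeMap<String, Long> nameIndex = new TreeMap<>(); // name -> primary key
+nameIndex.put(""Alice"", 42L);
+nameIndex.put(""Bob"", 7L);
+
+// ORDER BY name can walk the index in sorted order and fetch each row by
+// primary key, instead of sorting the whole table at query time.
+nameIndex.forEach((name, id) -> System.out.println(name + "" -> "" + id));
+

+ +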

TLDR: How are databases able to perform fast sorting on non-primary-key fields, if the data stored in the B-Tree is ordered by the primary key?

+",358189,,358189,,43887.76458,43887.76458,How do B-Trees used in databases sort data based on different fields?,,5,2,,,,CC BY-SA 4.0, +405634,1,,,2/23/2020 18:31,,2,366,"

As I understand it, a RESTful URL is more like a resource locator than a command. For example, to get user 1234, you wouldn't use this:

+ +
//Not restful
+https://server/GetUser?UserID=1234
+
+ +

You would use this:

+ +
//Restful
+https://server/users/1234
+
+ +

This makes sense for resources that take a single identifier.

+ +

But what if I want to get, say, a range of transactions between two dates?

+ +
//Old style
+https://server/GetTransactions?StartDate=2020-02-20&EndDate=2020-02-21
+
+ +

What would a RESTful URL look like?

+",115084,,115084,,43885.31111,43885.63542,What would be a RESTful URL pattern for a date range?,,3,3,,,,CC BY-SA 4.0, +405635,1,405639,,2/23/2020 18:43,,0,226,"

In another question on this site asking to clarify the open-closed principle, @Kate Gregory gave this answer. I'm interested in this part specifically:

+ +
+

Imagine you wrote an Invoice class that works perfectly and has no bugs. It makes a PDF of an invoice. + Then someone says they want an HTML invoice with links in it. You don't change any code in Invoice to satisfy this request. Instead, you make another class, HTMLInvoice, that does what they now want. You leverage inheritance so that you don't have to write a lot of duplicate code in HTMLInvoice.

+
+ +

I'm wondering how this view can be reconciled with proper encapsulation. In my view, a properly encapsulated Invoice class would take the data it needs, do all its work privately, and then spit out a PDF. However, in that case HTMLInvoice would be unable to derive from Invoice in order to reduce ""duplicate code"". Without access to Invoice's private fields/methods, there is nothing for HTMLInvoice to re-use.

+ +

Surely the idea can't be to make Invoice's private members protected (from the start) to facilitate this?

+",341699,,,,,43884.97222,Open closed principle: code duplication and encapsulation,,4,2,,,,CC BY-SA 4.0, +405636,1,,,2/23/2020 19:11,,0,259,"

Let's assume a SimpleFactory that creates a group of objects:

+ +
public class SimpleFactory {
+   public Bycicle createBycicle(String type) {
+     if (type.equals(""ONE"")) return new OneWheelBycicle();
+     if (type.equals(""TWO"")) return new TwoWheelBycicle();
+     if (type.equals(""THREE"")) return new ThreeWheelBycicle();
+     throw new IllegalArgumentException(""Unknown type: "" + type);
+   }
+}
+
+ +

This has the advantage of centralizing the creation of Bycicles in one method/class. If new Bycicles are added, there's only one place to change the code. So far so good.

+ +

What I don't understand is the benefit of the Factory Method pattern. As far as I'm concerned, I would always use the SimpleFactory. Does the Factory Method exist because a doSomething method includes code that manipulates the created object, i.e. because object creation includes object manipulation? Or maybe because I can add n ConcreteFactories to group objects by factory, which I could also do in SimpleFactory by adding a new parameter for the group?
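
+ +

For reference, here is a minimal sketch of what the Factory Method pattern usually looks like; the shop classes and the orderBycicle method are illustrative names only:

abstract class BycicleShop {
+    // the ""factory method"": subclasses decide which concrete type to create
+    protected abstract Bycicle createBycicle();
+
+    public Bycicle orderBycicle() {
+        Bycicle bycicle = createBycicle();
+        // shared post-creation logic (assemble, test, ...) would live here
+        return bycicle;
+    }
+}
+
+class OneWheelShop extends BycicleShop {
+    @Override
+    protected Bycicle createBycicle() { return new OneWheelBycicle(); }
+}
+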

+ +

+",357636,,209774,,43884.97431,44035.46181,SimpleFactory vs Factory Method,,2,1,,,,CC BY-SA 4.0, +405648,1,,,2/24/2020 0:07,,0,63,"

I was wondering if someone could give a practical understanding of what it means for an algorithm to perform in (log a / log b) time.

+ +

In other words, practically speaking, for an algorithm to perform in n log n time it would need to split at most log n times and perform n work at each level. For log n performance it would split at most log n times and perform at most a constant amount of work at each level, etc.

+ +
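
For reference, the change-of-base identity rewrites the expression as a single logarithm in base b:

$$\frac{\log a}{\log b} = \log_b a$$

so (log a / log b) can be read as log base b of a - for instance, the height of a balanced b-ary tree over a elements.

+ +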

What would be the equivalent way of explaining the performance of a log a / log b algorithm?

+ +

Thanks

+",340385,,,,,43885.00486,Practical Understanding of Log a / Log b Running Time,,0,3,,,,CC BY-SA 4.0, +405651,1,,,2/24/2020 7:04,,-4,53,"

I need some guidance on how to capture the face when the user blinks. I have looked at the Core Image framework, but it only detects blinks after an image has been captured; I need live detection.

+",358258,,,,,43885.58958,Capture the face when we blink eye,,1,3,,,,CC BY-SA 4.0, +405652,1,,,2/24/2020 7:07,,0,97,"

My setting is the following (please assume that these points are entirely rock solid and unchangeable, some for good reasons, some ""just because""):

+ +
    +
  • In a super-scalable microservice environment I receive messages
  • +
  • These messages have a logical order (a timestamp that they carry with them)
  • +
  • The messages are fed into a messaging system
  • +
  • The messages are idempotent
  • +
  • As soon as a message has arrived in the data storage it is okay to discard any older message for the same objectId
  • +
  • Neither the technical order of the messages (the order of publishing to the system) nor their uniqueness can be guaranteed (messages can be delivered multiple times)
  • +
  • Consumers are consuming messages in a highly parallel fashion
  • +
  • At the end of the processing of the messages most of them will be stored in a SQL Database
  • +
  • No hard consistency constraints apply (eventual consistency can be applied)
  • +
  • Using Kafka or other event streaming platforms is too expensive because of the total cost of ownership
  • +
+ +

So the central problem I am facing is: what to do if an older message gets processed after a newer one?

+ +

Here is a simple example; let's say the following messages are processed in this order:

+ +
{
+  objectId: '4711'
+  logicalOrder: 2
+  content: ""World""
+}
+
+ +
{
+  objectId: '4711'
+  logicalOrder: 1
+  content: ""Hello""
+}
+
+ +

In the end, the field content will be written to the database. So if I do nothing special about it, the result will be wrong:

+ +
+----------+---------+
+| objectId | content |
++----------+---------+
+|     4711 | Hello   |
++----------+---------+
+
+ +

Now, the first thing that comes to mind is adding the logicalOrder as a column, and then there are two approaches:

+ +
    +
  1. Adding everything that is processed to the table (keeping up the ability to easily batch insert records)
  2. +
  3. Checking whether something newer (according to logicalOrder) is already in the table, and only writing to the table if there is no newer dataset there (see the sketch after the table below)
  4. +
+ +

In the first case we will end up with this:

+ +
+----------+---------+--------------+
+| objectId | content | logicalOrder |
++----------+---------+--------------+
+|     4711 | World   |            2 |
+|     4711 | Hello   |            1 |
++----------+---------+--------------+
+
+ +
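
To make the second approach concrete, here is a minimal JDBC sketch of a ""write only if newer"" conditional update; the table/column names, the Message type with its accessors, and the insert-if-absent handling are assumptions for illustration only:

import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+
+// Single conditional statement: only overwrite when the stored row is older.
+void storeIfNewer(Connection connection, Message msg) throws SQLException {
+    try (PreparedStatement ps = connection.prepareStatement(
+            ""UPDATE objects SET content = ?, logical_order = ? "" +
+            ""WHERE object_id = ? AND logical_order < ?"")) {
+        ps.setString(1, msg.getContent());
+        ps.setLong(2, msg.getLogicalOrder());
+        ps.setString(3, msg.getObjectId());
+        ps.setLong(4, msg.getLogicalOrder());
+        if (ps.executeUpdate() == 0) {
+            // Either no row exists yet (fall back to an INSERT), or a newer
+            // version is already stored and this message can be discarded.
+        }
+    }
+}
+

+ +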

Now here come the questions:

+ +
    +
  • What do you think is more performant in a SQL environment: taking the overhead of checking first before writing to the table, or
  • +
  • using a fancy PARTITION BY SQL statement to always select the entries with the highest logicalOrder every time data is read (maybe a cleanup job will also be needed to free storage and bring back performance)?
  • +
  • Do you encounter similar problems?
  • +
  • How did you solve them?
  • +
+",89164,,89164,,43886.46875,43886.67847,Keeping up logical order in a messaging environment,,3,1,,,,CC BY-SA 4.0, +405657,1,,,2/24/2020 9:17,,0,42,"

I have gone through how Elasticsearch works and understand both the inverted index (for faster lookup) and the index itself (which stores the actual documents on disk in ES). My understanding is that the inverted index has most of the required data when a search query is executed. For example: word-to-document-id mapping, location, TF-IDF, boost score, etc.

+ +

Say I search for Java developer new york: the inverted index has everything needed to return a response, such as the score, document id, the primary key of the record in the DB, etc.

+ +

So my question is: shouldn't we store only the inverted index, and not the actual documents on disk, since the query search is done on the inverted index only, not on the documents?

+",260829,,260829,,43886.05903,43886.05903,Elastic Search inverted index and index containing actual documents?,,0,4,,,,CC BY-SA 4.0, +405663,1,,,2/24/2020 10:41,,0,112,"

I have a task in my project; to complete it I have to call (via REST) multiple external systems. If a call fails at some stage, I have to roll back all my previous calls (making a call with an undo action).

+ +

I am looking for an effective way of implementing this in Java. Can someone point me to a design pattern, if one exists?

+ +
    public void myTask() {
+
+        subtask1(); // external system call
+
+        try {
+            subtask2();
+        }
+        catch (Exception e) {
+            undoSubtask1();
+        }
+
+        try {
+            subtask3();
+        }
+        catch (Exception e) {
+            undoSubtask1();
+            undoSubtask2();  // calling in any order is fine.
+        }
+
+        // ... and so on for each further subtask
+    }
+
+ +

The above code is similar to a database rollback. Tomorrow there could be subtasks 4, 5, 6. I don't want to keep adding try/catch blocks. Is there a recommended way to achieve this?

+ +
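
For illustration, one shape an answer could take is a stack of compensating actions, so that new subtasks only need to be registered once. A sketch (Java 16+ records; all names are made up, not an established pattern implementation):

+ +
import java.util.ArrayDeque;
+import java.util.Deque;
+import java.util.List;
+
+public class CompensatingRunner {
+
+    /** Pairs a step with the action that undoes it. */
+    public record Step(Runnable action, Runnable undo) {}
+
+    /** Runs steps in order; on failure, undoes the completed steps. */
+    public static void run(List<Step> steps) {
+        Deque<Runnable> undos = new ArrayDeque<>();
+        try {
+            for (Step step : steps) {
+                step.action().run();
+                undos.push(step.undo()); // remember how to roll this step back
+            }
+        } catch (RuntimeException e) {
+            undos.forEach(Runnable::run); // compensate in reverse order
+            throw e;
+        }
+    }
+}
+
+ +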

When I googled this I found the Command pattern, but I don't understand how it would help in my case.

+ +

Thanks

+",54050,,54050,,43885.45972,43885.51806,Design pattern for executing and reverting the tasks in same order,,2,4,,,,CC BY-SA 4.0, +405664,1,,,2/24/2020 10:53,,5,648,"

Suppose you have a bounded context with ~30 business events and, for simplicity's sake, the same number of commands, like ChangeUserEmailCommand -> UserEmailChangedEvent, initiated from a web UI. Processing a command may fail for the following main reasons (besides infrastructure failures, of course):

+ +
    +
  1. Validation issue (email uniqueness)
  2. +
  3. Technical issue (optimistic concurrency version mismatch)
  4. +
+ +

I'd like to provide the best possible user experience to the clients and display what went wrong.

+ +

What is the best practice to signal the failures?

+ +
    +
  1. Would you create 30 more events like ChangeUserEmailFailedEvent? If not, what's your rule of thumb for which events to create a paired *FailedEvent?
  2. +
  3. Is it a good idea to just have bool Success {get;set;} property in the existing events? It's probably not the best way when you need to signal more failure details than just an error message
  4. +
  5. Would you create a single ConcurrencyFailedEvent for all concurrency issues, adding the source command type as part of its payload, just to separate this kind of failure from business validation failures? (See the sketch below.)
  6. +
+ +
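
For concreteness, options 2 and 3 could be folded into a single generic failure event. A sketch in C# (record syntax, C# 9+; all names are illustrative, not an established convention):

+ +
using System;
+
+public enum FailureKind { Validation, Concurrency, Infrastructure }
+
+public record CommandFailedEvent(
+    string CommandType,    // e.g. the name of the original command type
+    Guid CorrelationId,    // ties the failure back to the original command
+    FailureKind Kind,      // lets consumers route/aggregate by failure type
+    string[] Errors);      // human-readable details for the UI
+
+ +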

The commands are processed asynchronously (via a broker). The read storage is separated from the write storage. No event sourcing.

+ +

As for why I would need this, I can think of the following:

+ +
    +
  1. Detailed error message pushed back to a client via web sockets, for example
  2. +
  3. Threat detection - reacting to an increased number of failed user registrations, which might be an attack
  4. +
  5. Monitoring - displaying the number of failed orders on a dashboard, for example. If it's within a certain range I'd feel safe letting support handle it. If it's above a certain number, I probably need to dig through the logs.
  6. +
+",38702,,38702,,43885.73194,43885.73194,Handle failures in Event Driven Architecture,,2,9,1,,,CC BY-SA 4.0, +405669,1,,,2/24/2020 12:03,,4,318,"

I'm interested in starting coding dojos at the company I work at (during or after working hours). I think it will help to spread knowledge on how to write clean, concise and consistent code. I've been part of coding dojos in the past and it's really helped me be a better software engineer. I especially think that this could be useful for engineers who are switching to a new language e.g. Java to Golang.

+ +

What I'm not sure of is how to measure their success or failure. Is it worth taking 30-60 minutes of the company's time per week to let its engineers hone their skills?

+ +

I could send out a simple questionnaire after every session and try to measure how much people have learned. Or I could try to measure the quality of code in PRs; that would be really interesting to do!

+ +

Has anyone done such a thing? If so, what did you do?

+",320115,,326536,,43886.42639,43887.39792,Measuring the success of coding dojos,,4,3,0,,,CC BY-SA 4.0, +405672,1,,,2/24/2020 14:04,,0,30,"

I have an API server that relies on an auth server (both owned by the same company). Once the client gets a grant from the auth server, the auth server is no longer needed, because the only information I need from it is the user profile. Should I in this case cache the user profile on the API server, along with the access token? I want to do this so that I don't have to call the auth server on every non-auth-related request.

+ +
    +
  • The API has no authentication of its own; it trusts the auth server and vice versa.
  • +
+ +

Is this a good pattern? Or should I instead issue an access token from the API server after the authentication with the auth server has been validated?

+ +

+",358284,,,,,43885.58611,Proper way to use oauth with external auth server and an api,,0,5,,,,CC BY-SA 4.0, +405686,1,,,2/24/2020 17:38,,-2,266,"

I'm a programming teacher. My students learn structured and then object oriented programming in JavaScript and C#. They learn SQL, MS SQL Server, Dapper and EntityFramework Core.

+ +

After this they build applications with data access code directly in the ASP.NET Core WebAPI controllers. I tell them not to do that, and teach them about layering: the old n-layer architecture, and the onion architecture that reverses the dependencies. We work with testability and unit tests, and they learn not to expose objects of the domain model outside the application.

+ +

Let's assume that a full understanding of Domain Driven Design and CQRS are out of scope for my students, and/or assume that they will make some small and simple applications.

+ +

What architectural topics could I consider teaching them in this context? Another way of asking the same thing: in a simple application based on good object-oriented programming, an ASP.NET Core WebAPI, a SQL Server DB and a Vue.js frontend, what would be a simple but still good architecture to consider?

+",358307,,,,,43885.89097,What is a simple implementation of onion architecture for C# ASP.NET Core WebAPI and SQL db that is not full DDD and CQRS?,,1,3,,,,CC BY-SA 4.0, +405688,1,405698,,2/24/2020 18:06,,11,3464,"

I am new to design patterns and working my way through the Factory Method and Strategy patterns. I understand that Factory is a creational pattern and Strategy is behavioral but I struggle to understand when to use which one. For example, I have the following classes that updates client details based on the company specified:

+ +
    public interface IUpdateDetails { void UpdateDetails(); }
+
+    public class UPSUpdateDetails : IUpdateDetails
+    {
+        public void UpdateDetails() => Console.WriteLine(""Call web service to update details for UPS..."");
+    }
+
+    public class FedExUpdateDetails : IUpdateDetails
+    {
+        public void UpdateDetails() => Console.WriteLine(""Update database for FedEx..."");
+    }
+
+    public class DHLUpdateDetails : IUpdateDetails
+    {
+        public void UpdateDetails() => Console.WriteLine(""Send Email for DHL..."");
+    }
+
+ +

I can create a Factory to return the correct class:

+ +
    public class UpdateDetailsFactory
+    {
+        public static IUpdateDetails Create(string type)
+        {
+            switch (type)
+            {
+                case ""UPS"":
+                    return new UPSUpdateDetails();
+                case ""DHL"":
+                    return new DHLUpdateDetails();
+                case ""FedEx"":
+                    return new FedExUpdateDetails();
+                default:
+                    throw new ArgumentException();
+            }
+        }
+    }
+
+ +

and implement it as follows:

+ +
    // Using Factory
+    string company = ""UPS""; // Get company from database
+    IUpdateDetails updateDetails = UpdateDetailsFactory.Create(company);
+    updateDetails.UpdateDetails();
+    Console.ReadLine();
+
+ +

or I can use the Strategy pattern:

+ +
    public class UpdateDetailsContext
+    {
+        private IUpdateDetails strategy;
+
+        public void SetStrategy(IUpdateDetails updateDetails) => strategy = updateDetails;
+
+        public void Update() => strategy.UpdateDetails();
+    }
+
+ +

and implement it as follows:

+ +
    // Using Strategy
+    UpdateDetailsContext context = new UpdateDetailsContext();
+    context.SetStrategy(UpdateDetailsFactory.Create(company));
+    context.Update();
+    Console.ReadLine();
+
+ +

My question is: am I understanding the patterns correctly? What is the difference between using one over the other?

+",300601,,,,,43887.81458,Strategy vs Factory design pattern,,6,0,,,,CC BY-SA 4.0, +405691,1,406336,,2/24/2020 18:54,,1,1252,"

I think I am just look for a bit of code review advice. It might possibly be a methodology question?

+ +

Essentially, when I am pulling data (usually from a REST request), I generate a service and then inject Angular's HttpClient so I can use a simple GET to grab the data, like below:

+ +
interface ReportContact{
+  Email: string;
+  Name: string;
+  ReportName:string;
+  ReportType:string;
+  ReportFunction:string;
+}
+
+@Injectable({
+  providedIn: 'root'
+})
+
+
+export class ReportingContactService {
+
+  private readonly url = environment.url; 
+  private _http:HttpClient;
+  constructor(http:HttpClient) {
+    //DI 
+    this._http=http;
+  }
+
+    getReportContacts():Observable<ReportContact[]>{
+    return this._http.get<ReportContact[]>(this.url);
+  }
+}
+
+ +

This is generally my go-to approach when I am making a simple service. I then usually use this service in my component like below:

+ +
export class ReportingContactComponent implements OnInit {
+
+  _service:ReportingContactService;
+  data:ReportingContact[];
+
+  constructor(service:ReportingContactService) {
+    //DI service
+    this._service = service;
+   }
+
+  ngOnInit() {
+    this._service.getReportContacts().subscribe(
+      data=> {
+            for(var element of data){
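+                // map each DTO element to a ReportingContact here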
+
+            }
+    });
+  }
+
+}
+
+ +

Now, what I was hoping to get some review and pointers on is: should I now be mapping the return of that service to a class that I create as a model? I am used to the mindset of separating data-transfer objects from the objects used in the rest of the application. So essentially in that for loop (in the component.ts) I would map to these new objects, which look like below:

+ +
export class ReportingContact{
+    Email: string;
+    Name: string;
+    ReportName:string;
+    Type:string;
+    Function:string;
+}
+
+ +
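
For what it's worth, if the mapping is kept, it could live inside the service using the RxJS map operator instead of the subscribe loop. A sketch (the field mapping is guessed from the two shapes above):

+ +
import { map } from 'rxjs/operators';
+
+getReportContacts(): Observable<ReportingContact[]> {
+  return this._http.get<ReportContact[]>(this.url).pipe(
+    map(dtos => dtos.map(dto => Object.assign(new ReportingContact(), {
+      Email: dto.Email,
+      Name: dto.Name,
+      ReportName: dto.ReportName,
+      Type: dto.ReportType,          // DTO field names differ from the model's
+      Function: dto.ReportFunction,
+    })))
+  );
+}
+
+ +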

Am I wasting my time and is this generally not the ""norm""? Again, I just have always come from the mindset of separating, but the Angular framework (or maybe just TS and JS) has introduced new ways of thinking for me, so I like to be keeping to the generally accepted standards.

+",327284,,,,,43900.33333,Typescript and Angular 6 - Mapping Service Results to Data Transfer Objects,,1,4,,,,CC BY-SA 4.0, +405699,1,405700,,2/24/2020 22:29,,1,224,"

Reading the blue book at page 152, we can find this:

+ +
+

[A repository] provide methods to add and remove objects, which will + encapsulate the actual insertion or removal of data in the data store. + Provide methods that select objects based on some criteria and return + fully instantiated objects or collections of objects whose attribute + values meet the criteria, thereby encapsulating the actual storage and + query technology.

+
+ +

One of advantages in repositories introduction is that:

+ +
+

They allow easy substitution of a dummy implementation, for use in + testing

+
+ +

Crystal clear! I can define the repository with an interface; implement it with several classes (e.g. a real database one, and an in-memory one for testing); and inject one implementation or the other using a DI engine.

+ +

On the other hand:

+ +
+

The hexagonal architecture (aka ports and adapters) divides a system into several + loosely-coupled interchangeable components, such as the application + core, the database, the user interface, test scripts and interfaces + with other systems.

+
+ +

To me this looks like extending the repository concept beyond the database. Do I have a notification port? Then I will implement several adapters: one for RabbitMQ, another for Amazon SNS, or an in-memory topic.

+ +

Put this way, a repository could be considered an adapter for the database port.

+ +
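
In code, that reading would look roughly like this. A sketch in Java (all names are made up):

+ +
import java.util.HashMap;
+import java.util.Map;
+import java.util.Optional;
+
+// Port: an interface owned by the domain core.
+interface UserRepository {
+    Optional<String> findNameById(int id);
+    void save(int id, String name);
+}
+
+// Adapter: an in-memory implementation, e.g. for tests. A JDBC/JPA-backed
+// class implementing the same interface would be the database adapter.
+class InMemoryUserRepository implements UserRepository {
+    private final Map<Integer, String> store = new HashMap<>();
+    public Optional<String> findNameById(int id) { return Optional.ofNullable(store.get(id)); }
+    public void save(int id, String name) { store.put(id, name); }
+}
+
+ +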

Am I missing something?

+",117422,,,,,43885.99375,Is the Repository pattern a part of the Ports and Adapters concept,,1,0,,,,CC BY-SA 4.0, +405703,1,,,2/25/2020 1:19,,-4,307,"

When talking about scaling a system, it's often said that vertical scaling has limitations. So after a point, we need to scale the system horizontally.

+ +

What are the limitations of:

+ +
    +
  1. Main memory - What is the maximum amount of memory supported by a 32-bit and a 64-bit processor, and why? Is it the width of the address bus, so that 2^32 and 2^64 bytes are the maximum supported sizes?
  2. +
  3. Hard disk/SSD - Maximum storage that we can add to a system?
  4. +
  5. Network - Is it measured in the requests per second that are supported? The number of connections? What is a typical load supported by a modern-day laptop with, say, an i5 / 8 GB RAM / 256 GB SSD?
  6. +
+",162250,,9113,,43886.49097,44217.97708,Why can't we vertically scale a system infinitely?,,2,6,,,,CC BY-SA 4.0, +405712,1,,,2/25/2020 11:29,,0,33,"

First of all, forgive me if this is not the right place to ask this question, but I would like more discussion on it from experienced engineers and developers.

+ +

We have been working on a multi-tenant SaaS system and have used a semi-isolated, schema-based architecture where each tenant has their own schema: for example tenant1.UserProfile, tenant2.UserProfile, ..., tenant10000.UserProfile for the UserProfile table of the 10000th tenant. There will be more than 20 apps (UserManagement, PayrollManagement, EventManagement, HRManagement, ..., n) under the same hood.

+ +

We have tried to set up a microservice architecture in this way:

+ +

A separate database for each app, so there will be 20 databases on 20 different machines for future scaling purposes.

+ +

A separate container for each app (20 Docker containers, for instance), all running on one machine for now.

+ +

The request domain for each tenant will be tenant1.app_payroll.mydomain.com.

+ +

Having said that, I have noticed Google has several apps, and each app opens under its own subdomain: https://mail.google.com/mail/u/0/?tab=Cm1#inbox for the Gmail app, https://contacts.google.com/?hl=en-GB&tab=rC1 for the Contacts app, and so on.

+ +

Is it possible to structure our system the way Google has? If yes, how?

+ +

Are we heading in the right direction to implement a microservice architecture?

+ +

Are we consuming (computing) resources economically for the future, when thousands of tenants register with our system?

+ +

Are we on the right track to maintain the system and scale it when needed?

+",288152,,,,,43886.47847,How to setup micro architecture in python-django multi tenant SAAS?,,0,0,,,,CC BY-SA 4.0, +405714,1,,,2/25/2020 12:02,,0,198,"

I'm struggling to fully understand the MVC pattern. I found a lot of information on the web, but it is really confusing because there seem to be various ways to implement it.

+ +

What I understood is that the user interacts with the view, which calls a method of the controller, which modifies the model. +The model notifies the view that its state has changed, and the view in turn asks the model for the new state (this can be achieved with an observer pattern).

+ +

I need an example to understand all of that, because I'm very confused. +In this example I have a view which displays all users (a photo, a name, a surname, an email) with a given name. +The view is something like this:

+ +

+ +

Imagine the view shows a list of users retrieved from a database: how would you use the MVC pattern to achieve this behaviour? More importantly, what is the model, and how can it notify the view that the user searched for another name (and therefore that the user list to show has changed)?

+ +
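
For reference, a minimal sketch of the observer part in plain Java (the names and canned data are made up):

+ +
import java.beans.PropertyChangeListener;
+import java.beans.PropertyChangeSupport;
+import java.util.List;
+
+// Model: holds the current user list and notifies observers when it changes.
+class UserListModel {
+    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
+    private List<String> users = List.of();
+
+    void addListener(PropertyChangeListener l) { pcs.addPropertyChangeListener(l); }
+
+    // Called by the controller when the user submits a search.
+    void search(String name) {
+        List<String> old = users;
+        users = queryDatabase(name);
+        pcs.firePropertyChange("users", old, users);
+    }
+
+    List<String> getUsers() { return users; }
+
+    private List<String> queryDatabase(String name) {
+        return List.of(name + " Smith", name + " Jones"); // stub for the real DB call
+    }
+}
+// The view registers a listener and re-reads model.getUsers() on each change.
+
+ +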

Thank you all.

+",358382,,,,,43889.57708,Model View Controller pattern for a Java desktop application,,2,1,1,,,CC BY-SA 4.0, +405715,1,,,2/25/2020 14:00,,-1,220,"

I have an API whose job is to aggregate resources obtained by calling multiple other APIs and then return the aggregated response to the client. Currently, even if one or more dependent API calls fail, I go ahead and aggregate the responses from the other dependent APIs and then return them to the client with a 2xx status. In case all of the dependent API calls fail, I still return a 2xx with an empty list.

+ +

In case one or more dependent API calls fail, I want to give the client an indication. I was thinking of using the HTTP status code 206. What would be the best approach to handle this scenario? Is returning a 206, with the names of the dependent services that failed to return a 2xx in a header, the correct approach? If yes, what should this header be called?
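
+ +

For illustration, the kind of response shape I have in mind (the header name here is purely hypothetical, not a standard one):

+ +
HTTP/1.1 206 Partial Content
+X-Failed-Dependencies: inventory-service, pricing-service
+Content-Type: application/json
+
+{ "items": [ ... ], "failedSources": ["inventory-service", "pricing-service"] }
+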

+",358384,,,,,43892.53264,Http status and reponse in case the API gives out partial response,,2,2,,,,CC BY-SA 4.0, +405718,1,,,2/25/2020 15:35,,4,436,"

Is there a pattern or design that I could refer to for dealing with bulk data in inter-service communication?

+ +

My use case is to import data from upstream feed files (say 50k records) into our distributed system, so those records would end up in multiple services. Each record represents an instance of an entity in our system, e.g. a user.

+ +

We use async communication via RabbitMQ for other use cases, where a single instance of an entity is involved.

+ +

We were thinking of batching the records, say 1000 records per message, so that we have a manageable volume of messages in the queue and can also make use of bulk inserts/updates within the consuming service.

+ +

What if a batch of 1000 records has partial success in one of the services?

+ +
    +
  • Because the other services might have already been updated, this will be an inconsistent state of the overall system.
  • +
  • Although the 1000 records would be rejected in the erroring service due to the DB transaction failure, the message will be retried until the root cause is fixed. It feels like this would clog the service.
  • +
+ +

Making REST calls is an option, but it's not as scalable as async communication.

+ +

Has anyone come across a similar use case before? If so, how did you handle it?

+",358402,,,,,43887.13958,Communicating bulk data among microservices,,2,1,1,,,CC BY-SA 4.0, +405722,1,405728,,2/25/2020 16:20,,2,104,"

I would like to create a state machine.

+ +

Each state would have its run method and, according to some logic, would then set the next state.

+ +
+ +
    +
  • Option 1:
  • +
+ +

If each state is responsible for determining the next state, then it would have a next_state() method returning a pointer or an id of some other state, thus forcing each state to know about the existence of the other states ==> bad.

+ +
    +
  • Option 2:
  • +
+ +

If some other entity is responsible for the next state, then some logic there would have to compute the next state. But that logic would depend on the current state, thus breaking the current state's encapsulation (or forcing the creation of a getter-like method, which in reality would be option 1 again) ==> bad.

+ +
+ +

So, I can't come up with a way that doesn't break encapsulation for the part of a state machine that decides the next step.

+ +

I would like to hear the best practice in this case, as even Wikipedia doesn't shed light on this.
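
+ +

For concreteness, a third shape would be a transition table owned by the machine itself, so that neither the states nor an outside entity inspect each other. A sketch in C (all names are made up):

+ +
#include <stdio.h>
+
+typedef enum { IDLE, RUNNING, DONE, STATE_COUNT } StateId;
+typedef enum { EV_START, EV_FINISH, EVENT_COUNT } Event;
+
+/* The machine owns the mapping; individual states stay unaware of each other. */
+static const StateId transitions[STATE_COUNT][EVENT_COUNT] = {
+    /* IDLE    */ { RUNNING, IDLE },
+    /* RUNNING */ { RUNNING, DONE },
+    /* DONE    */ { DONE,    DONE },
+};
+
+int main(void) {
+    StateId s = IDLE;
+    s = transitions[s][EV_START];   /* IDLE -> RUNNING */
+    s = transitions[s][EV_FINISH];  /* RUNNING -> DONE */
+    printf("final state: %d\n", s);
+    return 0;
+}
+
+ +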

+",329411,,,,,43886.73542,State Machine: what object is responsible for state transfer?,,1,3,,,,CC BY-SA 4.0, +405723,1,,,2/25/2020 17:01,,1,223,"

I am having problems writing my first simple piece of pseudocode.

+ +

The input of the algorithm is an ontology, which contains axioms, and an inference set; precisely, an inference is defined as an object with a set of premises and a conclusion.

+ +

The output of the algorithm should be the set of conclusions that we can derive from the given ontology using the inference set.

+ +

A conclusion is derivable if there exists an inference that has it as its conclusion and whose premises are in turn derivable.

+ +

I wrote the pseudocode, but I think it is not the best solution, due to the goto instruction. I think the goto can be removed by adding some data structure that takes care of the propagation phase. I hope that I was clear :)

+ +
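
For reference, a goto-free version could be a plain fixpoint (forward-chaining) loop. A sketch in Python (the Inference structure is an assumption):

+ +
from collections import namedtuple
+
+Inference = namedtuple("Inference", ["premises", "conclusion"])
+
+def derivable_conclusions(axioms, inferences):
+    # Forward chaining to a fixpoint: keep scanning until nothing new fires.
+    derived = set(axioms)
+    changed = True
+    while changed:
+        changed = False
+        for inf in inferences:
+            if inf.conclusion not in derived and set(inf.premises) <= derived:
+                derived.add(inf.conclusion)
+                changed = True
+    return derived - set(axioms)  # only the newly derived conclusions
+
+ +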

+",358413,,209774,,43886.88472,43887.62847,Inefficient Pseudocode,,1,8,,,,CC BY-SA 4.0, +405726,1,,,2/25/2020 17:28,,-3,77,"

So I am trying to design a simple URL shortener application where, every time a URL is queried, the number of times it has been queried is updated.

+ +

I'm thinking of using MongoDB, with a sample schema like this:

+ +
{
+_id : ...., // Mongo-generated ObjectId
+originalUrl: .... // The original URL
+shortUrlKey: .... // The shortened URL key
+createdAt: ....
+updatedAt: .....
+hitCount: ...... // Number of times the document has been queried.
+}
+
+ +

I want to make sure that every time a particular URL is queried, the request increments the hitCount field by one and then returns the entity.

+ +

Now I read somewhere that writes on MongoDB result in locking that particular document.

+ +

So I have the following questions:

+ +
    +
  1. Since every read here is going to update the document, how best can I design my application so that it can be scaled efficiently?

  2. +
  3. Also, I want to serve the URLs from Redis cache once the hitCount crosses a certain number. But while I'm serving the URL from the cache, I still want to update the hitCount field. How do I do that?

  4. +
+ +

It makes sense to make an asynchronous (fire-and-forget) call to update the field once the URL starts being served from the cache, because at that point keeping hitCount synchronised doesn't matter. But until that point, how can I keep the document in sync without degrading performance or losing the chance of scalability?
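
+ +

For what it's worth, the read-plus-increment can be a single atomic MongoDB operation, so no separate read is needed. A sketch (mongo shell syntax; option names vary slightly between drivers):

+ +
db.urls.findOneAndUpdate(
+  { shortUrlKey: key },
+  { $inc: { hitCount: 1 }, $currentDate: { updatedAt: true } },
+  { returnNewDocument: true }  // return the document after the increment
+)
+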

+",252919,,,,,43886.83958,How to scale an application where every read results in a write?,,2,1,,,,CC BY-SA 4.0, +405730,1,,,2/25/2020 18:21,,5,274,"

If we have a long list of JSON entries, we could put them in an HTTP body, so we could parse the whole body as JSON with something like JSON.parse():

+
HTTP/1.1 200 OK
+Date: Sun, 10 Oct 2010 23:26:07 GMT
+Server: Apache/2.2.8 (Ubuntu) mod_ssl/2.2.8 OpenSSL/0.9.8g
+Last-Modified: Sun, 26 Sep 2010 22:04:35 GMT
+ETag: "45b6-834-49130cc1182c0"
+Accept-Ranges: bytes
+Content-Length: (big number)
+Connection: close
+Content-Type: application/json
+
+[
+"imagine",
+"this",
+"list",
+"having",
+"thousands",
+"of",
+"entries",
+...
+]
+
+

Alternatively, we could split the body up into lines, one JSON value per line (so-called newline-delimited JSON, NDJSON).
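
+

For what it's worth, HTTP/1.1 does support sending such a body piecewise via chunked transfer encoding: omit Content-Length, send Transfer-Encoding: chunked, and emit each line as soon as it is ready. A sketch of the wire format (each chunk is its size in hex, counting the trailing newline, followed by the data):

+
HTTP/1.1 200 OK
+Content-Type: application/x-ndjson
+Transfer-Encoding: chunked
+
+A
+"imagine"
+7
+"this"
+0
+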

+

Is there a way to write the HTTP body in a streaming manner (line by line) and also read the HTTP body in a streaming manner? Or is there no point since the HTTP body is always buffered?

+",358145,,4,,44000.89028,44007.60625,How to stream JSON using HTTP instead of pure TCP,,4,6,,,,CC BY-SA 4.0, +405732,1,,,2/25/2020 18:59,,0,36,"

I've designed my app so it stores the data of each sale in the following way: each order contains different products, so for each product in an order a table stores the quantity (of the product), the net sale and the earnings.

+ +

I will need earnings and net sales per day and per month for display. My approach would be to sum the individual earnings and nets of every product sold in a day, but I think that as the data gets bigger this would get expensive for the app's performance. So my question is: should I make another table holding net sales and earnings for each day and month, so that as time passes it stays efficient? I'm using a code-first database with .NET Core 2.1, and I'm a junior developer working on a full-stack project.

+",358417,,,,,43886.83125,data model design of a store net sales and earnings,<.net>,1,0,,,,CC BY-SA 4.0, +405742,1,,,2/25/2020 22:19,,0,97,"

I'm pretty experienced with developing both frontend as well as backend applications, with a variety of programming languages and frameworks. I know the problem space and concepts involved in both quite well.

+ +

Still, I find my thinking around the differences in application architecture, and around the common problems that both have to solve, to be quite fuzzy. I'd usually consider myself at least competent at making interesting and/or relevant distinctions and communicating them, but somehow I'm drawing a blank here, even in properly describing what I'm looking for in this question.

+ +

So, for example, user interfaces tend to have to react a lot more to events (e.g. reacting to a users' click or touch).

+ +

Additionally, user interfaces, both SPAs and mobile apps (and even simple websites, really), tend to have a lot of state, whatever that means in more precise terms. Sometimes backends are architected so that the only event they need to handle is a request. All state is set up at the beginning of the request and cleaned up / discarded after the response (e.g. PHP). Other backends are obviously more long-lived, keeping state in memory across requests.

+ +

Both, frontend and backend, access data/state. A backend might query a database, another service, or a cache. A frontend might send a request to an API, have a local cache, use a state container like Redux, or even have a database-like abstraction on its own (I believe PouchDB and/or Apollo might count here).

+ +

What are the conceptual differences in data/state between frontend and backend, if there are any? What are differences in the usual patterns of interacting with data/state in frontend and backend, if there are any?

+ +

I feel like in describing all of this I'm missing either the entirely obvious distinctions, or the more subtle differences in approaches and the problem space.

+ +

When I search for ""frontend vs backend development"", there are tons of introductory articles describing what a frontend is (=user interface) and what a backend is (=everything that is not the user interface). I'm not looking for the answer that frontends are concerned with drawing pixels to the screen, but rather something more ""meaty"" -- which I seem to be unable to describe further.

+ +

I'm asking this question here hoping that some people can share their approaches to making sense of this. I also hope that this question is appropriate to this forum and counts as answerable. I appreciate any input and approach to this, if just to see what kind of responses this vague query produces.

+ +

[EDIT] +Ah, obviously asynchronicity and callbacks are two topics that come up much more often when dealing with user interfaces (because it's important not to block the main UI thread and to keep the app responsive).

+",270383,,270383,,43887.00139,43887.00139,What are differences in application architecture in frontend and backend applications?,,1,0,,,,CC BY-SA 4.0, +405745,1,405809,,2/25/2020 23:10,,1,137,"

I'm using some modified MIT licensed code directly in my project and am concerned about how to correctly attribute it.

+ +

The code in question does not have license text in each file, only a single LICENSE file at the top level. I have included the contents of that file in my own licenses file. In this file I also have the licenses of the other libraries I use, but those are all unmodified and live in separate directories in the project, making their origin clear.

+ +

I am concerned that this leaves the origin of any particular piece of code in my project unclear. I'm curious whether this is actually an issue and, if it is, how I could manage it better.

+ +

Thanks

+",358430,,,,,43887.94375,How to correctly credit the authors of an MIT licensed library in this case?,,1,6,,,,CC BY-SA 4.0, +405749,1,405775,,2/26/2020 4:01,,47,9346,"

Disclaimer: I don't expect zero tech debt. In this post, a technical debt problem refers to debt severe enough to have a negative impact, say on productivity.

+ +

Recently I was thinking of building a tool to automatically generate a tech debt report from the issue tracker: introduction rate vs cleanup rate over time. Apart from the totals, there would also be numbers broken down by project team and by manager, so that managers could easily get insight into the current tech debt level without delving into the issue tracker and its details (such a tool might already exist; I need to research this to avoid reinventing the wheel).

+ +

Motivation-wise, tech debt has been snowballing for years. Whenever developers increase a project estimate to include tech debt cleanup, they are more often than not asked to remove those numbers from the estimate, so refactoring/cleanup work usually ends up indefinitely postponed. I hope a periodic report will help improve the tech debt management issue.

+ +

However, on second thought, I wonder whether increasing the visibility of the tech debt level really helps to raise its priority. Generally, is a tech debt issue an org culture issue, or just a lack of tooling/insight? I suppose there's no universal answer; I wonder which is the more common cause. What's your experience?

+ +

--- Update 2/28

+ +

Clarification: I believe most managers are intelligent enough to realise there's an impact, especially after teammates report pain in terms of project productivity. My gut feeling is that they don't have a concrete picture of how serious the problem is. My idea is to help management gain a clearer picture, via two steps:

+ +
    +
  1. Have tech debts logged and their impact tracked (there are challenges, but that's beyond the scope of this question).
  2. +
  3. Have a report for introduction rate vs cleanup rate (there could be further breakdown by high/low impact).
  4. +
+ +

My curiosity comes from whether these efforts will help or are just a waste of time, generally speaking (not specifically within my org) - hence the question: what's your experience? If it's an org culture issue, then most likely these efforts won't help much.

+",358438,,358438,,43889.27431,43890.85347,Is technical debt management problem more of a culture issue or insight issue,,8,16,33,,,CC BY-SA 4.0, +405756,1,,,2/26/2020 8:06,,0,55,"

Sorry for the bad title. Couldn't really think of a good name without an explanation.

+ +

In our system (source is inside Oracle PL/SQL packages) we have quite a lot large SQL queries with a sh..load of variables, dozens of joins to multiple tables and a lot of conditions. Some of those tables and variables are even configurable by the user.

+ +

Example would be a warehouse. The user has some orders/positions and items in different boxes. He can see the order needs item_x and he can see that there is a box with item_x in it. When he tries to perform some action he get's an error that item_x for his order is not available.

+ +

Most of the time the problem is some configuration or special case that prevents the system for using that box the user has in mind (he isn't telling the system which boxes, the system is searching for a valid box with a lot of ordering etc.).

+ +

At the moment a problem like this ends up in a procedure which looks like this:

+ +
    +
  • Client calls and says action_x isn't working all though he has the item_x that is needed by the order in box 1234
  • +
  • We connect to the database and check that what he saying is true
  • +
  • We copy the freakin huge SQL into a worksheet.
  • +
  • We debug into the procedure to copy all the variables and constants being used in the operation he is performing and replace them (by that time probably 20 minutes have passed).
  • +
  • Now we can comment out the WHERE-clause or change some JOINs to LEFT JOINs to find the row that was filtered out and the reason why.
  • +
+ +

Those things can happen after updates because some bug sneaked into the conditions, it can be some configuration problem caused by the user or it is just right that those item wasn't seen but the user can't keep track on all the reasons why, nether can we.

+ +

What would be good practices to change this problem. Of course it might be possible to remove the conditions, change everything to LEFT JOINs and validate everything after, while looping over the data, so it is possible to print out a neat message. All though it has to be more like returning a freakin huge result set with reasons behind every row why that wasn't a match. And of course this would lead to really, really bad performance and his application would probably freeze for minutes.

+ +

Another solution might be having two queries. One with conditions and INNER JOINs and another without and with LEFT JOINs which only gets executed if the first doesn't return any valid rows. This wouldn't impact performance for working cases but would probably end up in bugs etc. because of redundancy and at the end displaying that for the user would be pretty hard.

+ +

And of course we could speed up the search/replace part with some aggressive logging of the variables but at the end that would just speed up the debugging. Are there good solutions for problems like that which might also benefit the user?

+",175825,,175825,,43887.53611,43887.65833,Analyse/show user why data has been filtered out by SQL,,1,0,1,,,CC BY-SA 4.0, +405759,1,,,2/26/2020 8:25,,28,8030,"

Let me try to summarize a bit more with a simple example:

+ +

You're building a large application, a user portal for example, with feeds, news, account management, and a whole range of different features.

+ +

During development it's decided you need to implement mocked data, for either ease of testing, or perhaps because the APIs you are communicating with are unreliable.

+ +

Should you ever need to make changes to your core application in order to accommodate the use of mock data? Or should your application remain pure and agnostic about how it is being used, forcing your mock data to find a way to inject itself into the application instead?

+ +
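
(For concreteness, the agnostic alternative usually means hiding the data source behind an interface and choosing the implementation at wiring time. A sketch in TypeScript, with all names made up:)

+ +
interface FeedSource {
+  fetchFeed(): Promise<string[]>;
+}
+
+class ApiFeedSource implements FeedSource {
+  async fetchFeed(): Promise<string[]> {
+    const res = await fetch('/api/feed'); // the real backend call
+    return res.json();
+  }
+}
+
+class MockFeedSource implements FeedSource {
+  async fetchFeed(): Promise<string[]> {
+    return ['mock entry 1', 'mock entry 2']; // canned data for tests/demos
+  }
+}
+
+// The application only ever sees FeedSource; no mock conditionals inside it.
+function createApp(source: FeedSource): void { /* ... */ }
+
+ +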

I am of the mindset that an application should not concern itself with how the outside wants to use or test it, and that to add conditional statements in the application checking whether it is mocked or not is not just bad practice, it's terrible practice. It's a complete coupling that should never even be entertained, no matter how difficult it is to get mock data into the application.

+ +

On a current assignment I'm seeing literally hundreds of references throughout the entire code base that will do different things depending on the context of data being used. This sort of stuff makes me want to cry, but maybe I just have it all wrong and there is a legitimate benefit in this?

+ +

What are some arguments for why this tight coupling could ever be a good thing?

+ +

What are your general thoughts and experience with tightly ingraining mock-data usage into the application itself?

+",358456,,,,,43888.76944,"In software design, should an application remain agnostic regarding its usage with real world data / mock data?",,9,8,8,,,CC BY-SA 4.0, +405762,1,405763,,2/26/2020 9:13,,-3,405,"

I'm developing GUI for controlling and testing hardware device.

+ +

The GUI consists of many basic controls, like textboxes and radio buttons, which are mostly independent of each other: each control sends a command to the device to read or write a parameter. There are also a couple of timer-based monitors querying the device for status every few seconds.

+ +

It was initially written as a WinForms application without any separation between view, data and communication layers.

+ +

There are no performance/scalability requirements. I'm looking for a better way to build something that is open to future modification for similar devices.

+ +

Which architectural pattern would be recommended for re-organizing such an application?

+ +

Edit: adding to helb's answer, I found this SO answer helpful in explaining different GUI architectures.

+",52463,,52463,,43888.39931,43888.39931,Choosing architecture for Winforms C# application,,1,3,,,,CC BY-SA 4.0, +405766,1,,,2/26/2020 9:42,,3,198,"

There is a feature that is now deprecated and going to be removed. Adding a logging statement, observed by some alerting mechanism, can help find out whether the feature is still actively used, so that the clients that still use it can be upgraded.

+ +

Although not necessarily technically important for the diagnosis, I wonder what the appropriate log level would be in a semantic sense.

+ +
    +
  • INFO, because it's just information. The use of the feature does not cause any potential harm.
  • +
  • WARNING, because it’s necessary to eventually upgrade all clients. After upgrade, this will become an error.
  • +
+ +

Or create a custom level, if supported by the language/framework/ecosystem?

+",190107,,,,,43889.11944,What log level use for deprecated features?,,4,0,,,,CC BY-SA 4.0, +405769,1,,,2/26/2020 9:57,,2,65,"

I've learned many ways to keep a domain model flexible over the years, but there is a remaining case where the setup resists change.

+ +

Suppose that we have kept our domain model properly isolated: we can change it easily. Now a change requires a fundamental restructuring of the domain model. There is one thing in our way: our persistent data structure will no longer fit the heavily-changed domain model.

+ +

In most cases, the data structure does remain usable, and may need only some minor adjustments. We can change our domain model, and then simply fix the translation to the classes that map to our data storage.

+ +

However, this is not one of those cases. Our data structure needs to change to accommodate the updated domain model. Let's also say we have a GB of data stored. (Should this be avoided to solve the problem? Definitely up for discussion.) After the change, we must still have that data, albeit in its new form.

+ +

Particularly the updating of data to its new form seems to introduce an undesirable amount of work and risk.

+ +

How can we create a setup where the above scenario is easiest to manage? Particularly, it would seem good to minimize work and risk.

+ +

Feel free to apply whatever persistent data storage you deem most suitable.

+",213637,,,,,43887.48125,Persistent data structure changes for changing domain model,,1,1,,,,CC BY-SA 4.0, +405776,1,,,2/26/2020 12:52,,-3,185,"

Are there other general-purpose programming languages, besides Objective-C (with ARC) and Swift, which target LLVM and use static, compile-time Automatic Reference Counting for memory management?

+",358479,,358479,,43887.74722,43887.74722,What programming languages besides Apple Swift & Objective-C use the Llvm compile-time Automatic Reference Counting exclusively for memory management?,,1,8,,,,CC BY-SA 4.0, +405780,1,405786,,2/26/2020 13:55,,1,87,"

I have a project in which one module keeps the state of the target device (things like the current command level, but mostly status register caches).

+ +

I'm aware that having a global public variable (Singleton pattern) is considered a very bad practice, and I understand why.

+ +

Instead, my approach is to use an opaque pointer (to a struct), allocate a single (static) instance of that struct in the *.c file (so it's not public) and provide it through a Handle GetHandle(void) function.

+ +
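
In code, the approach described above looks roughly like this (a sketch; all names are illustrative):

+ +
/* device_state.h */
+typedef struct DeviceState *DeviceHandle;  /* opaque: fields hidden from callers */
+DeviceHandle GetHandle(void);
+
+/* device_state.c */
+struct DeviceState {
+    int commandLevel;
+    unsigned statusCache[8];
+};
+
+static struct DeviceState instance;  /* single static instance, no malloc */
+
+DeviceHandle GetHandle(void) { return &instance; }
+
+ +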

The above strategy basically uses the adapter pattern; I still have a single instance. I have read that this is an anti-pattern as well.

+ +

Is there any better way to design this module?

+ +

Note: in this project I can't use dynamic memory, so malloc etc. are forbidden.

+",304203,,,,,43887.63681,Alternatives to service locator with opaque pointer in C,,1,6,,,,CC BY-SA 4.0, +405784,1,,,2/26/2020 15:00,,0,81,"

Suppose I have in my domain model two aggregates: Dog and Cat. Dog is composed of DogName and DogFood. Cat is composed of CatName and CatFood.

+ +

The repositories, however, are backed by a PetNames API, which only provides a single endpoint to retrieve all names, and a PetFoods API, which only provides a single endpoint to retrieve all foods. They are both 3rd party; I have no control over them.

+ +

There are times in my Use Cases where I need to rehydrate both a Dog and a Cat for some operation. The calls to the two repositories are expensive and I don't want to make multiple calls that retrieve identical data for each aggregate.

+ +

How do I go about rehydrating the aggregates in this case?

+",170610,,,,,43887.67639,How do I deal with multiple repositories for one aggregate?,,1,3,,,,CC BY-SA 4.0, +405788,1,,,2/26/2020 15:31,,1,751,"

Here is an example to show what I mean by exploring a deeply nested object structure.

+ +

This is real code I deal with today (it is Vue, but that is irrelevant to my question).

+ +
data() { 
+    return {
+        customer: {},
+        settings: [],
+        task: {},
+        ...
+        }
+}
+
+ +

I read the code several times to find out that the customer is created with the spread operator, combining objects returned from 2 REST API calls!!! The same goes for task & settings.

+ +

Object literals and the spread operator make creating deeply nested objects very easy. But from time to time I have to read the code several times to find out ""where did I get that property"" or ""why did nobody tell me this object has such-and-such property"" (normally deeply nested). So is there any way to help me explore what properties customer has when I see customer: {}?

+ +

Requiring the original author to write thorough documentation is easier said than done. The same goes for setting a hard rule like ""no more than 3 nested levels"".

+ +
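
For what it's worth, one low-ceremony aid is a JSDoc typedef next to the data, so editors can surface the shape on hover. A sketch (the Customer properties here are invented purely for illustration):

+ +
/**
+ * @typedef {Object} Customer
+ * @property {string} name
+ * @property {{line1: string, city: string, state: string}} address
+ */
+
+/** @type {Customer} */
+let customer = {};
+
+ +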

BTW, my question is not about how to access a deeply nested object and avoid the error ""Cannot read property 'foo' of undefined""

+ +

(Using an example from ""Accessing Nested Objects in JavaScript"".)

+ +
user = {
+    id: 101,
+    email: 'jack@dev.com',
+    personalInfo: {
+        name: 'Jack',
+        address: {
+            line1: 'westwish st',
+            city: 'wallas',
+            state: 'WX'
+        }
+    }
+}
+
+ +

After all, there are some best practices for accessing nested objects, e.g. How can I access and process nested objects, arrays or JSON? Or I can use lodash get.

+ +

So my question is: are there any recommended ways to make working with deeply nested objects less painful, in general and in JavaScript in particular?

+ +

-------- update --------

+ +

When I said ""Requiring the original author to write thorough documentation is easier said than done"", there were a couple of reasons beyond the obvious one that the original author may not have the time/resources/energy to do it.

+ +

The other reasons (they happen to me a lot) include: the original author has quit the job, or there has been more than one designer of this data structure and the latecomers added something that turned out to be totally unnecessary because they didn't understand the original design in the first place. I am sure anyone who inherits a legacy system can relate to that.

+ +

And in the JavaScript world, a legacy system can mean something developed in 2018!

+",217053,,217053,,43888.51597,43888.51597,How to explore a deeply nested object structure more easily?,,1,9,,,,CC BY-SA 4.0, +405790,1,,,2/26/2020 15:55,,-2,90,"
        wasted requests     not enough time for those requests
+                 |                       |
+                 |                       |
+(1) |-x--x-----------------x----x--x--x| x  x   (executing requests)
+      ..                   .    .....           (sending requests)
+(2) |-x--x-----------------x----xxxx-x-|        (executing requests)
+
+(3) |xx-x--x--x---x---x----x----x-----x|        (executing requests, ideal)
+
+

I'm trying to figure out when to send requests to an API so that they are spaced as consistently as possible.

+

The most straightforward solution would be to keep the requests spaced far enough apart that the API wouldn't throttle (1). However, this technique wastes a lot of requests.

+

I came up with the idea of letting the requests run a lot more quickly while there's a lot of the window left and slowing down towards the end (3). However, when the API had just received its 120th request within the last minute, the requests came out too slow: it would have been optimal to send one every 0.5 seconds (60/120), but my script was sending them every 0.85 seconds.

+

How would I accomplish something similar to the example shown in (2)?

+

Edit: +Here's the code I tried. It's quite finicky.

+
import time
+from threading import Lock
+
+class Limitter:
+    def __init__(self, num, interval, minimal=.2):
+        self.num = num
+        self.interval = interval
+        self.minimal = minimal
+
+        self._next = None
+
+        self._requests = []
+        self._lock = Lock()
+
+    def _get_duration(self):
+        now = time.time()
+        self._requests = [x for x in self._requests if now - x < self.interval]  # keep a list so len() and indexing below work
+
+        if len(self._requests) < 2:     # if not enough requests stored, return interval / num
+            return float(self.interval) / self.num
+        else:
+            oldest = self._requests[0]
+            newest = self._requests[-1]
+            req_left = self.num - len(self._requests)
+
+            if self.num == len(self._requests):             # if num reached, do this. It doesn't work at all
+                return max(.5, oldest - (now - 60) + .4)    # tried to hotfix with some constants
+
+            time_left = float(oldest + self.interval - newest)
+            unit_time = time_left / req_left
+            slant = -1.4 / self.num * req_left + 1.7        # slant that enables the different timing (first short, then long)(just a linear function)
+            return unit_time * slant
+
+    def acquire(self):
+        with self._lock:
+            now = time.time()
+
+            if self._next:
+                duration = self._next - time.time()
+                if duration > 0:
+                    time.sleep(duration)
+
+            delta = self._get_duration()
+            self._requests.append(time.time())
+            self._next = time.time() + delta
+
+if __name__ == '__main__':
+    l = Limitter(120, 60)
+
+    for i in range(240):
+        l.acquire()
+
+
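
For what it's worth, the behaviour sketched in (2), bursting while capacity is available and settling to a steady pace near the limit, is essentially a token bucket. A minimal sketch (not a drop-in replacement for the class above):

+
import time
+from threading import Lock
+
+class TokenBucket:
+    # Bursts up to `capacity` requests; sustained rate is capacity / interval.
+    def __init__(self, capacity, interval):
+        self.capacity = capacity
+        self.rate = float(capacity) / interval  # tokens added per second
+        self.tokens = float(capacity)
+        self.updated = time.time()
+        self.lock = Lock()
+
+    def acquire(self):
+        with self.lock:
+            while True:
+                now = time.time()
+                self.tokens = min(self.capacity,
+                                  self.tokens + (now - self.updated) * self.rate)
+                self.updated = now
+                if self.tokens >= 1:
+                    self.tokens -= 1
+                    return
+                time.sleep((1 - self.tokens) / self.rate)  # wait for the next token
+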

I'm developing an "API" wrapper for the Hypixel API, which has a limit of 120 requests per minute. The requests just communicate with the official API; I'm trying to get information as fast as possible. I don't think the limit is there to protect the API from me.

+",358498,,358498,,44081.38333,44081.38333,Smooth out requests to rate limitted API,,1,12,,,,CC BY-SA 4.0, +405797,1,405817,,2/26/2020 17:28,,1,92,"

The question is about my database design. Is it OK?

+ +

I feel uncomfortable about having 2 separate WordTraining and SyllableTraining tables and about too simple design (not much normalized). Maybe there are any other issues.

+ +

My web-app teaches kids to read syllables and words.

+ +

First the user (kid) chooses - whether to train to read syllables, or to train to read (whole) words.

+ +

Then the user is presented with words/syllables from a fixed list (WordBank and SyllableBank Tables below), if the user fails a read exercise - the app asks the same question again next time and then periodically asks again at increasing time intervals. If the user sees the word/syllable for the first time and passes the read exercise at the first attempt - this word/syllable would never be shown to the user again (for the exception of the special Cram Mode when the user can practise all words from the database regardless of his previous test results).

+ +

I expect not more than 60 000 words / 300 syllables / ~10 mln users.

+ +

My database design is as follows:

+ +
WordBank
++----+-------+------------------+
+| id | Word  | SyllabilizedWord |
++----+-------+------------------+
+|  1 | hello | hel/lo           |
+|  2 | papa  | pa/pa            |
++----+-------+------------------+
+
+SyllableBank
++----+----------+
+| id | Syllable |
++----+----------+
+|  1 | hel      |
+|  2 | lo       |
++----+----------+
+
+WordTraining
++----+--------+------------+---------------+--------+--------+--------+
+| id | Failed |  NextRep   | FirstSeenDate | Ignore | WordId | UserId |
++----+--------+------------+---------------+--------+--------+--------+
+|  1 | True   | NULL       | 2020-02-26    | False  | 1      | 1      |   
+|  2 | True   | 2020-02-30 | 2020-02-26    | False  | 4      | 2      |
+|  3 | False  | NULL       | 2020-02-26    | False  | 7      | 3      |
++----+--------+------------+---------------+--------+--------+--------+
+
+SyllableTraining
++----+--------+------------+---------------+--------+------------+--------+
+| id | Failed |  NextRep   | FirstSeenDate | Ignore | SyllableId | UserId |
++----+--------+------------+---------------+--------+------------+--------+
+|  1 | True   | NULL       | 2020-02-26    | False  | 1          | 1      |   
+|  2 | True   | 2020-02-30 | 2020-02-26    | False  | 4          | 2      |   
+|  3 | False  | NULL       | 2020-02-26    | False  | 7          | 3  | 
++----+--------+------------+---------------+--------+------------+--------+
+
+User
++----+---------------+------------+--------------+---------+---------------+
+| id |   LoginName    |  FullName  |    Email     | PswHash | LastLoginDate |
++----+---------------+------------+--------------+---------+---------------+
+|  1 | johnLoginName | John Black | jb@gmail.com | acb3456 | 2020-02-22    |
++----+---------------+------------+--------------+---------+---------------+
+
+ +

WordTraining Table - Once the user (who clicked button (chose) Word-trainig Mode) is presented a word for the first time (sees it first) - a record is created. WordId is a ForeignKey matching with (primary key) of the table Word (all words available for practise), UserId is a foreign key for respective User. Ignore atribute means that user wants to see respective word (exercise with it) no more.

+ +

WordBank.SyllabilizedWord - is just to show a prompt how to syllabilize the word correctly.

+ +

More detailed app logic:

+ +
    +
  1. I have multiple users
  2. +
  3. For each user there are a few training modes possible (train read whole words, train read syllables, 3rd mode is complicated to describe)
  4. +
  5. For each user I store statistics - ids of word or syllable the user was shown (at least once) + whether the user failed or passed the exercise + date when user was shown that kind of Problem.
  6. +
+ +

Algorithm

+ +
    +
  1. User logs in
  2. +
  3. User selects mode (train syllables or words). In the future different modes might be added or initial modes can be often modified.
  4. +
  5. At first start of the app when user clicks ""Next Problem"" button, a new (unseen yet) Problem is presented to the user. He answers and the Problem (respective word or syllable) is marked passed or failed. Plus FirstSeenDate table-field (WordTraining or SyllableTraining table - depending on what user chose to practice) is set (ISO-date when the user saw this Problem).
  6. +
  7. At the second start of the app when user clicks ""Next Problem"" button, problem selection is like this: If there are any Failed Problems - show them all first. If there are no Failed Problems - show unseen-yet Problems. If Failed (at previous game) Problem is passed during this time, Problem is marked ""Passed"" (Fail is false), and the field NextRep (next repetition (review)) is set (ISO-date when to show that problem to the user again). Field NextRep is set to 2 * (todayDate - FirstSeenDate) - but this algorithm is subject to change and might be very complicated.
  8. +
  9. At third start of the app when user clicks ""Next Problem"" button - If there are any NextRep due today (or before today = missed) - show them all first. Then logic is like in item 4 above (show all failed, then unseen). If NextRep Problem is passed, Field NextRep is set to new value = 2 * (todayDate - (current)NextRep) - - but this algorithm is subject to change and might be very complicated.
  10. +
+",358496,,,,,43888.4,Database design of a web-app which trains kids to read,,2,0,,,,CC BY-SA 4.0, +405806,1,,,2/26/2020 20:54,,-1,148,"

We have 200-300 live projects on a given day, which receive online traffic from our vendors. We store this data in two different DBs (I know this sounds bad, but it seemed good at design time for various reasons):

+ +
    +
  • project information in the projects table in MySQL
  • +
  • traffic data for those projects in a MongoDB collection
  • +
+ +

Now, we have to generate analytical data out of what we store, for a given date range: revenue earned, profit, top vendors, top projects (with most conversions), etc. This data has to be computed and stored daily somewhere so that we can query it by date range.

+ +

The only problem is that one project's traffic is divided between multiple vendors. This kind of makes it nested data, because for one project we can have 10 vendor records.

+ +

What's the best way to store this kind of data so that I can query it easily for reporting purposes?

+ +

UPDATE

+ +

The Project Table contains details about project like -

+ +

Project_Code | Client Code | Sales Manager |Payout | Status | Country Code ...

+ +

The Traffics collection contains detail about the traffic on a particular project Like -

+ +

UUID | Project_Code | Vendor Code | Start Time | End Time | Conversion Status |...

+ +

When we have to pull project-wise stats, we fetch all the projects and then sum up the traffic for the corresponding project_code, so that we get:

+ +

Project Code | Total Starts(sum of traffic for that project code) | Conversion (Sum of Traffic with conversion_status=1) | Incidence Length (Avg of Median(Endtime - Start time))

+ +

So this is what we have so far in our DB

+ +
+ +

Now, what we want is to mine this data daily so that we can calculate:

+ +

(Given Date Range - Like Last 7 Days, Today, This Month)

+ +

First View. Project Code | Revenue Earned (Conversion * Payout) | Cost to Company (sum(Vendor Conversion * Vendor Cost)) | Profit (Revenue - Cost)

+ +

Second View. First view but where Sales Manager = ""John""

+ +

Third View. Top n vendors | Top n Projects | ...

+ +

Fourth View. Same as the Third View but where Sales Manager = ""John""

+ +

So far we are using Laravel, MySQL, MongoDB and Redis as our tech stack.
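
+ +

For illustration, one flat shape for such pre-aggregated data is a daily rollup table keyed by date, project and vendor (a sketch; all names are illustrative). It keeps the vendor breakdown while staying easy to query by date range:

+ +
CREATE TABLE daily_project_vendor_stats (
+  stat_date    DATE          NOT NULL,
+  project_code VARCHAR(32)   NOT NULL,
+  vendor_code  VARCHAR(32)   NOT NULL,
+  starts       INT           NOT NULL,
+  conversions  INT           NOT NULL,
+  revenue      DECIMAL(12,2) NOT NULL,
+  cost         DECIMAL(12,2) NOT NULL,
+  PRIMARY KEY (stat_date, project_code, vendor_code)
+);
+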

+",320838,,320838,,43888.55556,43890.35694,How to store nested data in efficient way in DB,,1,1,,,,CC BY-SA 4.0, +405811,1,405812,,2/26/2020 22:55,,0,149,"

For context, I'm writing TypeScript, but I believe the concept works for many languages.

+ +

I have a function getFooParams(): FooParams that gets data that is later passed to foo. (It's a React Native project that has some limitations on how data can be passed to the native code, so this architecture seems unavoidable.)

+ +

Should the terminology be getFooParams or getFooArgs, and FooParams or FooArgs?

+ +

I'm leaning toward FooParams for the type, and fooArgs for instances of the type.

+ +

Update: Specific use-case example

+ +

I have a situation where an API client object is responsible for formatting and sending requests to the API. Unfortunately, there are a few endpoints where I have to hand off the actual requesting to a 3rd-party react-native-background-geolocation module. This module sends HTTP requests in the background without hitting my code, so I need to describe to it how to format those queries. To avoid having API logic inside my code that deals with the geolocation module (and because I have 3 implementations of that API client), I have a function getTimeEstimateInfo that returns the type TimeEstimateInfo, letting the API client object create an object to pass on to the 3rd-party module.

+",182339,,182339,,43888.6875,43888.69375,"Should the type of a parameter that holds the arguments for a function be named ""argument"" or ""parameter""",,3,3,,,,CC BY-SA 4.0, +405814,1,405816,,2/27/2020 6:37,,2,110,"

Suppose I store libraries as attributes of a Python object:

+ +
#filename: file_A
+import file_
+class A():
+    def __init__(self):
+        self.pd = file_.pd
+        self.np = file_.np
+
+ +

And suppose the content of file_ is:

+ +
#filename: file_
+import pandas as pd
+import numpy as np
+
+ +

You might be wondering what the point of including those libraries as attributes is, since import file_ will include the pd & np names in its namespace anyway. The reason is to make these libraries available to child classes, as below.

+ +
#filename: file_B
+from file_A import A
+class B(A):
+    # Getting all the libraries without even importing them again.
+    pass
+
+ +

My concern is about the implications of this strategy. Is it a good idea to adopt this approach?

+",357770,,,,,43888.34722,Is it a good software engineering practice to store libraries as attributes of objects?,,1,0,1,,,CC BY-SA 4.0, +405815,1,,,2/27/2020 7:09,,0,84,"

Brief description of the problem: providing factories which create the same object type in different ways, while following the rules of DDD (an isolated domain model, with independent domain objects inside it).

+ +

I have some domain objects in my program and a couple of functionally different blocks which satisfy needs of my domain model and its objects. Now I'm facing the following issue: some of the functional blocks outside of my domain model need to create domain objects (e.g. a database adapter: when I'm loading an object back, I need to recreate it). But this also means that my domain model core is no longer as isolated as I would like (and also, as far as I understood, according to DDD concepts the domain is isolated and should not be affected by changes in other functional blocks).

+ +

The author of DDD proposes using factories to create objects, which might conceptually solve my problem. For example, the book explicitly mentions (Part 2, Chapter 6, p. 145, ""Reconstructing Stored Objects"") that you need a separate factory for the reconstruction of stored objects, because those objects are created differently.

+ +

The other side of this problem is unit tests: another functional block may have unit tests that use domain objects to test behavior, which means those tests need to create the objects. Again, I only need to set a limited number of fields and don't care about the rest, but if I construct the objects explicitly in the tests, then whenever I modify my domain model I have to update two separate functional blocks that my domain should normally be independent of: the DB layer and the unit tests in another layer.

+ +

I think this can be solved by providing factories for the different ways of creating instances of the same object type, but I'm missing examples and can't figure out the implementation on my own because of the constraints of factory patterns: you need a single interface function (e.g. create() or create(argsList)). That's exactly my problem; I want to keep the object with only one constructor, and after modifications of this domain object I want to avoid updating other functional blocks. I would rather update the one factory responsible for that particular part of the object's functionality.

+ +

I would be really grateful for any examples of this particular use case, or any advice. Thanks in advance! P.S. I found an example in TypeScript (http://www.taimila.com/blog/ddd-and-testing-strategy/) but it seems impossible in C++ with its mechanism of default arguments.

+ +

Here is a scheme of the solution I was trying to create: +

+ +

How, according to DDD concepts, is it possible to implement such a factory? The API is not clear to me. I can imagine that if, for instance, DomainFactory has some unique attributes (like a CustomType field), we can inject this additional variable into the factory's constructor, but only if this variable is shared between instances; it does not work if the variable is unique per object instance.

+ +

I can also imagine the case where all other factories use DomainFactory, but what if a specific factory (e.g. StorageFactory) tries to pass something the domain one does not expect (e.g. a loaded unique id)? And how can they communicate using only one ""create"" API method? A sketch of the shape I have in mind follows.
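
+ +

To make this concrete, here is a minimal Python sketch of the shape I mean (all names are hypothetical, and my real code is C++; the point is only that fresh creation and reconstitution are two different factories over the same single constructor):

+ +
import uuid
+
+class Order:
+    def __init__(self, order_id, customer):   # the one and only constructor
+        self.order_id = order_id
+        self.customer = customer
+
+# Used inside the domain: generates the identity itself.
+class OrderFactory:
+    def create(self, customer):
+        return Order(uuid.uuid4().hex, customer)
+
+# Used by the storage layer: reuses the identity loaded from the DB.
+class StoredOrderFactory:
+    def reconstitute(self, order_id, customer):
+        return Order(order_id, customer)
+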

+",358548,,,,,43888.29792,How to create factories for same object type but different ways of creation following Domain Driven Design rules? (C++),,0,9,,,,CC BY-SA 4.0, +405821,1,,,2/27/2020 12:04,,2,75,"

This question is regarding the better architecture to follow for API Development in Java using any RDBMS database backend.

+ +

Currently, we are using the approach below to fetch data from the database and pass it to the client.

+ +

Here, once a request reaches the REST controller (e.g. Spring Boot/Java EE), it makes a database procedure call which returns a database cursor. Using this cursor, the data is parsed row by row and column by column (e.g. Spring RowMapper/JDBC) into a list of POJOs. After that, a JSON API (e.g. Jackson/GSON) serializes these POJOs into the JSON response, which is then passed to the client through the REST controller.

+ +

We noticed some disadvantages with this approach

+ +
    +
  • Performance Issue for large result sets
  • +
  • Many populated objects in the heap (too many POJOs) that keeps the +garbage collector busy
  • +
  • Slow Development time (creating a POJO for every result set)
  • +
+ +

Now we have come up with a new approach which increases the performance of the application, reduces garbage collection, and optimizes development time. With this approach, once the request is made, a database procedure call is made which builds the JSON response using in-database JSON API features (e.g. Oracle/SQL Server JSON APIs). This JSON response is returned as a VARCHAR/CLOB and passed to the client by the Java REST controller.

+ +

This has been found to be more effective and productive compared to the traditional way of implementation because of the following benefits:

+ +
    +
  • Better Performance because the JSON is generated when the data is +fetched using the SQL commands
  • +
  • Less overhead on the garbage Collection because the serialization +happens in the database
  • +
  • Fast Development as developers just need to worry about hand-coding +their SQLs (at least in our team )
  • +
+ +

Can anyone please advise whether it is better to serialize the data to JSON at the database level instead of in the middle tier? Which is the better architecture?

+",358578,,,,,43888.63889,Which is the better architecture to follow for API Development in Java using any RDBMS database backend?,,1,1,,,,CC BY-SA 4.0, +405827,1,405839,,2/27/2020 13:04,,2,437,"

A programmer keeps making cosmetic changes to the code while we have a strict deadline and the contract stipulates ""no changes to the existing code"". I am wondering where this ""attitude"" comes from: DevOps? Agile?

+ +

Changes performed:

+ +
    +
  1. Replacing explicit variables with ""var""

  2. +
  3. Renaming short variable names to longer ones

  4. +
  5. Refactoring code injections into MVC controller classes

  6. +
  7. Adding design patterns (like the command pattern) to existing code (with no functionality changes)

  8. +
  9. Adding constructors with parameters to ViewModel classes (forgetting to add a parameterless one, so the POST breaks...)

  10. +
+ +

Hundreds and hundreds of changes made after the tests were done, making merging far more complicated.

+ +

Is this Agile ?

+",358581,,177980,,43888.64097,43890.51944,Making hundred of cosmetic changes to the code at the last minute,,4,7,,43890.66597,,CC BY-SA 4.0, +405829,1,405872,,2/27/2020 13:13,,3,348,"

I understand the rationale for avoiding using namespace std: it brings in too many common names the developer may not even be aware of.

+ +

I tried to work around the problem with the help of the using construct. Most often it was using std::string and using std::vector, because these are very common cases and writing std:: everywhere seems to clutter the code a lot. During code review it was pointed out that if I do this in headers, the declarations propagate over many files without obvious consent, as header files tend to include one another. In response, I removed all using statements from the headers and moved them into the .cpp files, where most of the code resides anyway. However, during the next code review I was told to remove them from there as well.

+ +

I still have some doubts whether a using declaration at the top of a .cpp file that is never included into another file really has big negative consequences. It seems to me that dropping namespace prefixes for very common cases like string or vector makes the code easier to read.

+ +

I would like to clarify whether the current consensus in the C++ community really discourages putting using declarations for common names (using std::vector and using std::string) at the top of the .cpp (not header) file. If this is really the case, it would be good to know why.

+",81278,,,,,44126.4,Are namespace constructs like 'using std::string' unacceptable also in .cpp files?,,3,6,,,,CC BY-SA 4.0, +405832,1,,,2/27/2020 13:29,,-2,154,"

I use XAMPP to host an Apache web server and a MySQL database. Most of the data processing is done on the user's mobile phone. After a certain activity, the user's information is sent to the web server. The PHP script checks whether the data is already in the database; if it is, the data is sent to the Firebase Cloud Messaging server, which in turn sends the data to the user.

+ +

I have read information about the different system architectures but could not find one that fits my case.

+ +

My question is, which system architecture am I using?

+",358584,,,,,43888.61319,"Which Software Architecture am I using? XAMPP, Apache, MySQL, PHP Android",,1,4,,,,CC BY-SA 4.0, +405834,1,,,2/27/2020 14:28,,-1,90,"

I'm working on a parser for a (very small) toy language, and I want to test that it's parsing expressions with the appropriate precedence. Previously I just had arithmetic operators, so there weren't many possible combinations of operations; however, now I'm adding logical/relational operations, which means there are many more possible expressions. I'm not sure how best to test the parser without writing an excessive number of unit tests covering every possible combination of operators. How can I go about writing a reasonable number of tests that make sure all operator precedence is handled correctly?
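
+ +

The most promising idea I've found so far is a table-driven test that compares each expression against its fully parenthesised form, so each case pins down one precedence relation rather than one of the many operator combinations. A minimal pytest-style sketch (the parse function and its parenthesised string output are hypothetical stand-ins for my parser's API):

+ +
import pytest
+
+from mylang.parser import parse  # hypothetical: str(parse(src)) is fully parenthesised
+
+CASES = [
+    ('1 + 2 * 3',   '(1 + (2 * 3))'),    # * binds tighter than +
+    ('1 < 2 + 3',   '(1 < (2 + 3))'),    # arithmetic binds tighter than relational
+    ('a && b || c', '((a && b) || c)'),  # && binds tighter than ||
+    ('!a == b',     '((!a) == b)'),      # unary binds tighter than comparison
+]
+
+@pytest.mark.parametrize('src,expected', CASES)
+def test_precedence(src, expected):
+    assert str(parse(src)) == expected
+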

+",212963,,,,,43888.66042,How can I efficiently test that a parser handles multiple levels of operator precedence correctly?,,1,3,,,,CC BY-SA 4.0, +405840,1,405855,,2/27/2020 15:35,,0,203,"

I was planing to create an API based on microservices, I'm stuck on how to solve a specific scenario.

+ +

The initial plan is that all microservices are only accessible through a REST API.

+ +

Based on the next schema, if you want to get all the orders you simply go to /orders and get a list of them.

+ +

But what happens when you want to get all orders related to a client? You go to /clients/123/orders; internally, clients makes a request to orders and then returns you the orders list. But it doesn't make much sense to make an internal request to orders, receive a long list from it, and return that list again from clients. Also, pagination would be a pain here.

+ +

I read about giving clients an accessible view of the orders database, but I'm not sure what the best approach is. One option I'm considering is sketched below.
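
+ +

A rough Flask sketch of that option (all service names and parameter names are hypothetical): clients forwards the pagination parameters and streams the orders response through, instead of buffering the whole list inside the clients service.

+ +
import requests
+from flask import Flask, Response, request
+
+app = Flask(__name__)
+ORDERS_URL = 'http://orders-service/orders'  # hypothetical internal endpoint
+
+@app.route('/clients/<client_id>/orders')
+def client_orders(client_id):
+    upstream = requests.get(
+        ORDERS_URL,
+        params={'clientId': client_id,
+                'page': request.args.get('page', 1),
+                'perPage': request.args.get('perPage', 50)},
+        stream=True,
+    )
+    # Stream the body through instead of materialising the full list here.
+    return Response(upstream.iter_content(8192),
+                    status=upstream.status_code,
+                    content_type=upstream.headers.get('Content-Type'))
+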

+ +

+",339125,,325277,,43892.82917,43892.82917,"Microservices based API, how to handle long lists requests",,2,6,,,,CC BY-SA 4.0, +405846,1,,,2/27/2020 16:25,,3,78,"

I'm processing data and writing it into a database that is used by my colleague, a designer working on data visualization.

+ +

How can I efficiently, say, add a new column for my colleague to use in their visualization?

+ +

I have a ""production"" output which they uses, and a ""staging"" output that I use to check my result, they also have a ""production"" visualization (website for instance) and a ""staging"" viz.

+ +

I'm thinking of 2 options:

+ +
    +
  1. Option 1:
    • Add new column into my staging output,
    • once I'm happy with my dev, launch this new feature (new column) into production,
    • then write some sort of release note to warn my colleague that this new column is available,
    • and let them do their part of dev/staging/production for their visualization.
  2. Option 2:
    • Add this new column into my staging output,
    • warn my colleague that this new column is available,
    • wait for them to do their part of the dev,
    • once we're both happy with our dev, launch the new column and the new visualization in production.
+ +

Each method seems to have cons, so I can't make up my mind about what to do:

+ +
    +
  1. Option 1:
    • A bit slow: my colleague has to wait for me to launch my feature before starting work on their visualization.
    • If I'm removing a column I need to be extra careful that nobody is using it, since it's in production.
  2. Option 2:
    • We need to deploy in sync.
    • My colleague is working against changing data: if I need to rename the column I'm using during my development phase, my colleague needs to update the source of their visualization.
+",358600,,,,,43888.72153,How to synchronize changes between Data Engineers and Designers?,,2,1,,,,CC BY-SA 4.0, +405849,1,405857,,2/27/2020 17:11,,6,1412,"

State of the union:

+ +

C# Events/Eventhandlers are:

+ +
    +
  • blocking
  • +
  • throwing Exceptions
  • +
  • sequential
  • +
  • deterministic executed in order
  • +
  • MulticastDelegates
  • +
  • a handler is always dependent on the behavior of the handlers registered earlier
  • +
+ +

Actually they are pretty close to regular function pointers, despite being a sequence of them.

+ +

If any event-subscriber is doing evil things (blocking, throwing Exceptions):

+ +
    +
  • the event invoker is blocked (in the worst case indefinitely)
  • +
  • the event invoker has to deal with unpredictable exceptions
  • +
  • the internal sequential eventhandler calling will break at the first exception
  • +
  • any EventHandler internally stored after the failing Eventhandler will not be executed
  • +
+ +

C# Example on .Net Fiddle

+ +

I always thought of C# events as an implementation of the publish-subscribe pattern.

+ +

But:

+ +

This contradicts my intuition of publisher/subscriber semantics. Actually, it seems to be the opposite.

+ +

If i publish a news/website/book/podcast/newsletter:

+ +
    +
  • publishing is non-blocking (in relationship to the subscribers)
  • +
  • consuming is concurrent
  • +
  • reader/subscriber errors don't interfere with my publishing
  • +
  • reader/subscriber errors don't interfere with other readers/subscribers
  • +
+ +

Transferred to .NET this would mean (see the sketch after this list): event.Invoke(...) leads to:

+ +
    +
  • event.Invoke(...) is fire and forget
  • +
  • all subscriptions are dispatched to the thread-pool
  • +
  • and executed concurrently and independently of each other (not threadsafe though)
  • +
  • non-deterministic order of execution
  • +
  • you might have to take care of threadsafety while accessing objects
  • +
  • one handler cannot ""kill"" the execution of other handlers
  • +
+ +
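
A minimal Python sketch of those semantics, just to pin down what I mean (not production code):

+ +
from concurrent.futures import ThreadPoolExecutor
+
+class Event:
+    # Fire-and-forget pub/sub with the semantics listed above.
+    def __init__(self, pool=None):
+        self._handlers = []
+        self._pool = pool or ThreadPoolExecutor()
+
+    def subscribe(self, handler):
+        self._handlers.append(handler)
+
+    def invoke(self, *args):
+        for handler in list(self._handlers):
+            # Each handler runs on the pool: the publisher never blocks,
+            # and one handler's exception cannot kill the others.
+            self._pool.submit(handler, *args)
+

+ +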

Other people seem to be confused too:

+ +

PS: I'm aware that this might be highly subjective. I guess there have been good reasons to do it this way.

+",22182,,22182,,43888.73681,43888.80556,Is the C# EventHandler designed the wrong way?,<.net>,3,4,2,,,CC BY-SA 4.0, +405859,1,,,2/27/2020 19:39,,9,545,"

This is the case:

+ +
    +
  • Clients want to know how much time will be needed to finish a particular task (not a group of tasks). They are asking for an absolute estimation in man-days, and only when they get it do they decide whether or not to approve.
  • +
  • Teams are trying to avoid giving absolute estimations and to focus on relative estimations (t-shirt sizes for example)
  • +
+ +

The attempt:

+ +
    +
  • Use t-shirt sizes and agree with the team that sizes have ranges (XS: 1 day or less, S: 1 to 2 days, etc.). Communicate the highest or lowest number in that range to the client. Track cycle time for sizes and then figure out the cycle time for XS, S, M, L... If you succeed in this, then communicate this cycle time to the client?
  • +
+ +

Any thoughts?

+",339365,,,,,43894.38681,"Teams do relative estimations, business wants absolute estimations. How to make everyone satisfied?",,7,4,,,,CC BY-SA 4.0, +405868,1,,,2/27/2020 23:02,,3,125,"

I have a medium-sized Angular-based web application that I'm currently implementing some permission components for. Overall, the areas where the permissions components will be used are virtually identical, but the specific endpoints (and the parameters that need to be provided to build those endpoints) are slightly different.

+ +

As it stands now, there are 4 separate branches to the logic (and those will likely only grow over time). The 4 branches basically work out to:

+ +
    +
  • Organization + +
      +
    • Group
    • +
    • User
    • +
  • +
  • Project + +
      +
    • Group
    • +
    • User
    • +
  • +
+ +

I've created a few basic interfaces and classes to try to facilitate this design.

+ +

First, we have my actual service interface:

+ +
export interface PermissionService {
+  getPermissionsList(): Observable<PermissionGroup[]>;
+
+  getAppliedPermissions(objectId: number): Observable<AppliedPermission[]>;
+
+  setObjectPermission(objectId: number, permissionId: number, allow: boolean): Observable<any>;
+
+  removeObjectPermission(objectId: number, permissionId: number): Observable<any>;
+}
+
+ +

Next, we have the service factory interface:

+ +
export interface PermissionFactory {
+  applies(params: PermissionServiceParams): boolean;
+
+  create(params: PermissionServiceParams): PermissionService;
+}
+
+ +

The factory uses another interface, PermissionServiceParams, which contains the base parameters that all requests share, along with an enum indicating the service's type.

+ +
export interface PermissionServiceParams {
+  type: PermissionServiceType;
+  organization: string;
+}
+
+ +

Finally, we have the actual implementation of my abstract factory, which is responsible for selecting the appropriate service factory to create the necessary service. It looks something like:

+ +
@Injectable({
+  providedIn: 'root'
+})
+export class PermissionServiceFactoryService {
+  private _factories: PermissionFactory[] = [];
+
+  constructor(organizationUserPermissionFactoryService: OrganizationUserPermissionFactoryService,
+              organizationGroupPermissionFactoryService: OrganizationGroupPermissionFactoryService,
+              projectUserPermissionFactoryService: ProjectUserPermissionFactoryService,
+              projectGroupPermissionFactoryService: ProjectGroupPermissionFactoryService) {
+    this._factories.push(organizationUserPermissionFactoryService);
+    this._factories.push(organizationGroupPermissionFactoryService);
+    this._factories.push(projectUserPermissionFactoryService);
+    this._factories.push(projectGroupPermissionFactoryService);
+  }
+
+  getFactory(params: PermissionServiceParams): PermissionFactory {
+    const factories = this._factories.filter(fac => fac.applies(params));
+
+    if (!factories || factories.length === 0) {
+      throw new Error('No matching factories found!');
+    } else if (factories && factories.length > 1) {
+      throw new Error('Ambiguous Invocation! Multiple factories apply to the provided params.');
+    }
+
+    return factories[0];
+  }
+}
+
+ +
+ +

If it's not immediately apparent from the code above, the algorithm works by housing a list of every PermissionFactory in the application within my concrete implementation of my abstract factory, PermissionServiceFactoryService. Whenever getFactory is called, the abstract factory iterates over each of the defined permission factories, and finds the one that applies to the parameters passed into the method.

+ +

Once the appropriate factory is found, the developer is then free to call the create method, like so:

+ +
const params = {organization: data.organization, type: PermissionServiceType.OrganizationUserPermissions};
+this._permissionsService = permissionServiceFactoryService.getFactory(params).create(params);
+
+ +

The factory is responsible for passing the proper static parameters into the constructor of the corresponding permission service. So if additional parameters aside from those that are present on the base PermissionServiceParams are required, the factory handles that.

+ +

Is this the simplest approach? It seems rather verbose, and it may be a bit of overkill for what I'm trying to accomplish.

+",204600,,9113,,43889.29028,43889.32431,"Is there a simpler approach than abstract factories for handling similar, but branching, logic?",,1,1,,,,CC BY-SA 4.0, +405870,1,,,2/28/2020 3:57,,2,115,"

After visiting dozens of pages searching for a ""non-sockets-or-iPhone"" conceptual example of the Adapter pattern, I found this one:

+ +

+ +

+ +
+

Lloyds bank is an international bank offers services worldwide. For + offshore account holders, the tax rate is 0.03%. And, in India it + offers two types of accounts, Standard and Platinum. Tax rules are not + applied to indian bank accounts. Now the offshore bank is incompatible + to Indian account types. We need to design an AccountAdapter to make + both the incompatible account types to work together.

+
+ +

So, is this example enough to demonstrate the concepts of object adapter, adaptee and client? A minimal transliteration of my understanding is sketched below.
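
+ +

A minimal Python sketch of how I map the quoted scenario onto the pattern roles (the rate comes from the quote; the class structure is my own assumption):

+ +
class OffshoreAccount:                  # adaptee: the incompatible interface
+    TAX_RATE = 0.0003                   # the quoted 0.03% offshore tax
+
+    def balance_after_tax(self, amount):
+        return amount * (1 - self.TAX_RATE)
+
+class IndianAccount:                    # target: what the client code expects
+    def get_balance(self, amount):
+        return amount                   # no tax rules for Indian accounts
+
+class AccountAdapter(IndianAccount):    # adapter: wraps the adaptee
+    def __init__(self, offshore):
+        self._offshore = offshore
+
+    def get_balance(self, amount):
+        return self._offshore.balance_after_tax(amount)
+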

+",356206,,,,,43889.53681,Is this a valid GoF Adapter example?,,1,1,,,,CC BY-SA 4.0, +405878,1,405886,,2/28/2020 11:09,,2,229,"

I hope I'm directing this very general question to the right audience. If not, don't hesitate to redirect me elsewhere if possible.

+ +

I'm part of an initiative at a large company that is starting its journey towards open business APIs to offer data and new digital services. Our team has examined loads of existing APIs out there, many of them award-winning ones, but we have yet to find more than a handful (not even that, actually) that are OData. All are ""classic REST"".

+ +

We do have experience with OData and as an SAP-driven company it's supported out-of-the-box for a majority of our backend consumption already so it would be a logical choice.

+ +

I can see problems with OData, such as making it more difficult to control load; it might also be more challenging to create clients for such APIs, depending on one's dev platform. Are issues like these behind the common design decision to go ""classic REST"" rather than OData in public business APIs?

+ +

Would really appreciate a good discussion on this topic, or suggestions to sources with analyses.

+ +

Thanks

+",358662,,,,,44048.47708,Why not OData in public business APIs?,,1,1,,,,CC BY-SA 4.0, +405879,1,,,2/28/2020 11:49,,11,2157,"

I am currently researching approaches for moving our application to Docker containers and stumbled upon a question to which I could not find a clear answer.

+ +

Our application has several separate databases that are currently hosted in one database server. When moving to Docker should we keep the architecture similar (i.e. one container with all databases) or should we use one container per database?

+ +

The latter approach seems more ""docker"" to me. Similarly to not hosting 2 applications in one container, it seems to make sense to also not host 2 databases in one container.

+ +

Are there any established best practices? Does it depend on the parameters of the databases in question (size, access frequency, etc.) or the used database server (SQL server, PostgreSQL, etc.)?

+ +

As far as I can tell the ""container per DB"" approach gives more flexibility (e.g. enforce memory limit per DB) at the cost of more overhead (i.e. the database server overhead is incurred once per database instead of just once in total). Are there any other advantages/disadvantages I should consider?

+",358664,,326536,,43889.675,43890.96944,Docker: One container per database?,,5,1,1,,,CC BY-SA 4.0, +405885,1,405887,,2/28/2020 14:20,,1,199,"

I am making a data visualization application in the Unity game engine that simply shows data in a 3D environment using the Google Maps API.

+ +

I am using the three-tier architecture to explain the application. It has the following layers

+ +
    +
  1. Presentation Layer Shows the visualization
  2. +
  3. Logic Layer Transforms the data as per logic from the API and gives it to presentation layer
  4. +
  5. Resources Layer google maps
  6. +
+ +

My question is: where does the API call module go, the one which requests data from Google Maps? Should it sit between the logic and resource layers as another layer?

+ +

Or am I doing something wrong here...

+ +

+",358672,,,,,43889.63681,external API in three-tier architecture?,,1,0,,,,CC BY-SA 4.0, +405891,1,,,2/28/2020 17:06,,-3,51,"

I'm developing a real-time fraud detection system for a bank. The job of this fraud detection system is to decide whether an incoming transaction is fraudulent or not. The system has no interaction with the banking customer. The fraud detection system has use cases like capturing incoming transaction data, calculating the risk level of a transaction, etc.

+ +

How do I draw a use case diagram for this type of situation?

+",254401,,,,,43889.71458,How to draw a use case digram for a autonomous system?,,1,8,,,,CC BY-SA 4.0, +405898,1,405899,,2/28/2020 19:48,,2,248,"

Why did C++11 add a separate find_if() instead of simply overloading the existing find()? Wouldn't overloading the function be sufficient?

+",358698,,155513,,43889.86319,43890.48958,Why did C++11 add find_if() instead of overloading find()?,,1,1,,,,CC BY-SA 4.0, +405901,1,,,2/28/2020 20:55,,2,209,"

Note: this is a general, conceptual question about performance optimization, motivated by the following real-world case.

+ +

I have a file on a Windows network drive that has a 100 Mbps limit; it is a binary file of 165 MB.

+ +

My local machine has software on it specifically designed to manipulate this file format, and when the file is opened in that software it takes less than a second to display all the information. Monitoring Task Manager during this split second, the software's process shows:

+ +
    +
  • 13% Network (@ 26.4 Mbps briefly)
  • +
  • 08% CPU (@ 1.2% briefly)
  • +
+ +

Since the format is known, I wrote a Python script to parse it, and the fastest I can do using the struct module is roughly 15-17 seconds. During this time, CPU usage for the Python process doesn't change, but network usage reaches 94% (82 Mbps on average).

+ +

What could the software be doing such that it's able to read the file so quickly, while my script saturates the network bandwidth and takes much longer? (One idea I want to rule out is sketched below.)
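
+ +

Given that the viewer only briefly touched 26.4 Mbps, my suspicion is that it reads just a header/index and seeks to the ranges it needs, rather than pulling all 165 MB. A hedged Python sketch of that access pattern (the offsets and layout are hypothetical, not the real format):

+ +
import struct
+
+def read_summary(path):
+    # Read only a fixed-size header plus an index block, not the whole file.
+    with open(path, 'rb') as f:
+        header = f.read(64)                      # hypothetical 64-byte header
+        count, index_off = struct.unpack_from('<IQ', header, 0)
+        f.seek(index_off)                        # jump straight to the index
+        index = f.read(16 * count)               # hypothetical 16-byte entries
+    return count, index
+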

+",325526,,9113,,43890.39583,43890.39583,How is a software able to read a network file faster than it appears to be possible?,,1,20,,,,CC BY-SA 4.0, +405902,1,,,2/28/2020 22:07,,0,74,"

I am looking for an algorithm to compare two trees.

+ +

I have this class in Java

+ +
public class TreeNodeDSP {
+
+  TreeNodeDSP parent;
+  List<TreeNodeDSP> children;
+  NodeDSP value;
+
+  public TreeNodeDSP(TreeNodeDSP parent) {
+    this.parent = parent;
+    children = new ArrayList<>();
+  }
+
+  public TreeNodeDSP(TreeNodeDSP parent, NodeDSP value) {
+    this.parent = parent;
+    children = new ArrayList<>();
+    this.value = value;
+  }
+
+  public void addChild(TreeNodeDSP node) {
+    if (node != null && node.getValue() != null) {
+      if (children.stream().noneMatch(child -> Objects.equals(child.getValue(), node.getValue()))) {
+        children.add(node);
+      }
+    }
+  }
+
+  public TreeNodeDSP getParent() {
+    return parent;
+  }
+
+  public void cleanChildren() {
+    children = new ArrayList<>();
+  }
+
+  public int getChildrenCount() {
+    return children.size();
+  }
+
+  public TreeNodeDSP getChildrenAt(int position) {
+    if (children.size() > position && position > -1) {
+      return children.get(position);
+    }
+    return null;
+  }
+
+  public List<TreeNodeDSP> getChildren() {
+    return children;
+  }
+
+  public NodeDSP getValue() {
+    return value;
+  }
+
+  public boolean isLeaf() {
+    return children.isEmpty();
+  }
+
+}
+
+ +

Now, I have an algorithm that populates my tree (TreeNodeDSP) automatically, and I'm facing problems performing operations (calculating some conditions) based on the automatically populated tree.

+ +

When I populate my tree manually, it works.

+ +

I need to discover where my trees (manually populated vs. automatically populated) differ internally, node by node, not just that the trees are different. When some node/leaf differs, I need to print its contents.

+ +

But I don't know how to begin with the comparison algorithm. The shape of the recursion I'm after is sketched below.
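
+ +

This is the recursion shape I have in mind, sketched in Python for brevity (the Java version would be a direct transliteration). Note the assumption that children are compared positionally, which may not hold if child order differs between the two trees:

+ +
def diff_trees(a, b, path='root'):
+    # Recursively report every place where the two trees differ.
+    if a is None and b is None:
+        return
+    if a is None or b is None:
+        print(path + ': node exists in only one tree')
+        return
+    if a.value != b.value:
+        print(path + ': values differ: ' + repr(a.value) + ' vs ' + repr(b.value))
+    for i in range(max(len(a.children), len(b.children))):
+        ca = a.children[i] if i < len(a.children) else None
+        cb = b.children[i] if i < len(b.children) else None
+        diff_trees(ca, cb, path + '/' + str(i))
+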

+",186710,,353068,,43890.88333,43890.88333,Tree Comparison Algorithm,,0,1,,,,CC BY-SA 4.0, +405905,1,405907,,2/28/2020 23:12,,1,139,"

Hexagonal Architecture seems to make so much sense when I read about it (like more than I can say; the ultimate eureka moment), but actually implementing it is a different story.

+ +

I more or less understand the structure, but my biggest problem I think is the delineation between components.

+ +

In the simplest example, say we're just doing simple retrieval of data, with a UI library, a Logic library, and a Data library (and a program running it all).

+ +

The user opens a window, or a webpage, and a list of data should appear there. The series of requests is something like: UI Event -> Primary Adapter -> Logic -> Secondary Adapter -> Database (and then returning back up the chain). (The Primary and Secondary Adapters have each implemented their respective Ports).

+ +

The main questions I have are:

+ +
    +
  • Who does what?
  • +
  • Who lives where?
  • +
+ +

Let's say my DB Model is differently structured than my Business Logic Model. And then there is also a UI Model of sorts (presentation types).

+ +

I want to say that it should be the DB library's responsibility to translate the DB Model into the Logic Model, and that the Core Logic should be ignorant of this process. Similarly it should be the UI library's responsibility to translate the Logic Model into the UI Model.

+ +

But if I do this, then the issue is that I have not actually captured anything of value in the Logic Layer. And also, the UI Model is something that is not actually specific to any particular UI. The ""presentation rules"" are something that could be taken out of the UI and put into a reusable place.

+ +

It's almost as if there could be UI and DB ""translation layers"" put in between the Logic Library and UI and DB Libraries. You would then have a Logic Model, a separate UI Model, and an actual UI Implementation Library. Similarly you could separate the DB into one or more Data Access Libraries (in my case I have at least two separate databases), and a Data Translation Library that translates data from the DBs into the Logic Model.

+ +

Or... the translation could live in the UI and DB Libraries? Or it could live in the Logic Library?

+ +

As concrete examples, imagine that a single concept in the Business Logic actually comes from two different places (databases) and must be composed together. Similarly, when presented, there are things like running totals, grand totals, numeric text formats, etc. that are basically ""UI Rules"" rather than Business/Logic rules and are (or can be) separate from the UI implementation itself.
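
+ +

For instance, my current leaning, sketched as a hedged Python outline (db_a and db_b stand in for the two database gateways; all names are hypothetical): the secondary adapter owns both the translation and the composition of the two sources, so the core only ever sees the domain type.

+ +
class Location:                    # domain model type (lives in the core)
+    def __init__(self, id, name, capacity):
+        self.id, self.name, self.capacity = id, name, capacity
+
+class LocationPort:                # secondary port (defined by the core)
+    def find(self, location_id):
+        raise NotImplementedError
+
+class SqlLocationRepository(LocationPort):   # secondary adapter (data library)
+    def __init__(self, db_a, db_b):
+        self._a, self._b = db_a, db_b
+
+    def find(self, location_id):
+        row_a = self._a.fetch(location_id)   # first database
+        row_b = self._b.fetch(location_id)   # second database
+        # Composition happens here, behind the port, out of the core's sight.
+        return Location(row_a['id'], row_a['name'], row_b['capacity'])
+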

+ +

(And then I still feel like I would not have determined where things actually go in practice. For each interface that is defined there is the question of whether it should be implemented alongside it, or in the neighbouring library.)

+",47207,,47207,,43890.01319,43890.07917,Who does what and who lives where?,,1,0,1,,,CC BY-SA 4.0, +405909,1,,,2/29/2020 5:16,,0,401,"

Well as per title, does it matter in any way functionally (say when rewriting history or something), from which branch one derives a new branch?

+ +

That is, if at the point in time of creating the new branch, all existing (or all relevant) branches have been merged together so they point to the same commit?

+ +

To explain how this ""could"" matter:
+If I merge branch ""feature A"" into ""dev"", and then (accidentally) make a new feature branch ""feature B"" while in checkout of ""feature A"" does it make any difference if I would've created it instead from ""dev""? And should I undo the branch creation ""feature B"" and ""properly"" create it from ""dev""? -- There would be no changes to A after the merging.

+",43635,,43635,,43890.31667,43895.98194,Does it actually matter from which branch you create a new branch in git?,,3,5,,,,CC BY-SA 4.0, +405912,1,,,2/29/2020 7:41,,1,326,"

Versioning is always a big question, and I see many similar questions here; however, none really answers mine, so I'll try to highlight the things that are important to me specifically.

+ +

We have a single repository for a multi-module maven project. The decision for a single repository was made, because the modules are (in part) tightly coupled, and it makes refactoring across the whole codebase easier. The move to a monorepo was done just recently without thinking about how to properly handle versioning and releases.

+ +

Before, each module (then a separate project in a separate repository) was released independently and luckily never introduced breaking changes, so customers could freely choose which versions to use and never had any issues.

+ +

What I want to do is keep offering releases for the individual modules, because that makes updating much easier (bugfix release? just exchange the jar file). But I always want to release distribution packages as well.

+ +

Consider the following layout:

+ +
pom.xml
+  --- app1 pom.xml
+  --- app2 pom.xml
+  --- lib1 pom.xml
+  --- dist pom.xml
+
+ +

Each module has its own version. An application might get a new feature which justifies a bump in the minor version; a library might get constant bugfixes, so its micro version increases quickly.

+ +

It comes down to two questions:

+ +
    +
  • Under these circumstances - what is the meaning of the version number of the maven root project?
  • +
+ +

I could imagine it simply tracking the state of the repository. A new module is added? Bump the micro version. A module is removed/merged? Bump the major version. Things like that. But I'm not sure if this is a good approach.

+ +
    +
  • What version should the distribution package have?
  • +
+ +

When all components have different versions, I just can't pick a single one. Every update of the components has to be reflected in the distribution's version. Although I really like semantic versioning, I feel like a date-based approach might be better suited here. Make a distribution release every 6 months or so and call it 2020_06. If an important bugfix required the update of a component in between, a version like 2020_06.2 might work as well.

+",358722,,358722,,43890.325,44113.95972,Maven: Versioning for multi module repository,,1,2,1,,,CC BY-SA 4.0, +405916,1,,,2/29/2020 9:54,,0,25,"

In a web app I'm working on, there's some i18n/l10n support. Some entities (I'm using locations as an example) support having names in multiple locales. The app has a primary_locale (defaulting to 'en' if unspecified) as well as an other_locales setting.

+ +

Besides that, each user also has a locale setting (again defaulting to 'en').

+ +

There is a list view, and it is required to be able to search for locations by name, as well as sort them by name in that view.

+ +

What I'm having issues with is selecting a single locale for the searching and sorting, since:

+ +
    +
  • the user locale setting might not match the app primary locale setting, and hence, location names might not be in the user's selected (or defaulted) locale
  • +
  • there is no way to set the locale for the sorting or searching from the user interface.
  • +
+",358727,,209774,,43891.66319,43891.66319,Locale selection for searching and sorting in web app,,0,2,,,,CC BY-SA 4.0, +405917,1,405926,,2/29/2020 11:16,,-3,65,"

How specific is hardware optimization when building from source and what should I look for in the documentation to decide if building for my hardware might be worth it?

+ +

From threads like this one I gather that optimization for my ""CPU and environment"" is a possible advantage, but that there are also risks. How does one know which optimizations are already standard and which ones I can benefit from with minimal risk? Compilation time and size of the binaries are not a priority.

+ +
    +
  • What are the CPU optimizations tuned to? Specific models? AMD vs Intel? CPU generation? Chipset? Number of cores?

  • +
  • Are these optimizations overlapping or separate from optimization flags like O2 and O3?

  • +
  • Does the ""environment"" part of the optimization have to do with my kernel version, which libraries I have installed, or what? Is part of optimization deciding which libraries to build with?

  • +
  • If I stick to recommended flags, is there still some hardware optimization, or do I need to use the ""riskier"" flags? And are they still risky if I know exactly what hardware I will run the build on?

  • +
  • My understanding of O-flags is limited to having read that the higher +O-numbers are more optimized but have a higher risk of instability. +Do I need to be a software engineer in order to make educated guesses +about flags?

  • +
+",358733,,,,,43890.72778,How specific is hardware optimization when building from source/how do I know?,,2,1,,,,CC BY-SA 4.0, +405918,1,,,2/29/2020 11:48,,-1,130,"

As a learning exercise I want to develop a system using microservices. +I have designed my authentication/authorization architecture and would like to know if there are drawbacks to my design. It is as follows:

+ +
    +
  • Authentication service has following tasks:

    + +
      +
    • Issue JWT signed with RSA key that does not expire
    • +
  • +
  • Gateway that has following tasks

    + +
      +
    • route unauthenticated requests to authorization service
    • +
    • route authenticated services to resource services
    • +
    • issues session tokens to end users that are stored in distributed key-value datastore (e.g. Redis) and maps them to JWT issued by auth service
    • +
  • +
  • Resource services that validate JWT signature using static public key

  • +
+ +

Are there any drawbacks? What could I improve, change?

+",158213,,,,,44161.79722,Microservice Architecture auth design,,1,3,0,,,CC BY-SA 4.0, +405929,1,405931,,2/29/2020 20:18,,1,41,"

Say you pre-validate whether a username already exists in a registration form in the application layer; e.g., you send back a nice ""username already exists"" error message to the user.

+ +

While unlikely, there's still a possibility that two different users can ""simultaneously"" pass the application layer validator and one of them will eventually get a ""duplicate entry"" exception from the database layer.

+ +

In this scenario, is it good practice to still catch the database layer exception and send back a nice error message to the user? Or is that overkill, and you just let it ""slide"" and halt the application with a 500 error?
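
+ +

What I have in mind is something like this minimal sqlite sketch (any RDBMS driver raises an equivalent error; the users table is assumed to have a UNIQUE constraint on username):

+ +
import sqlite3
+
+def register(conn, username):
+    try:
+        with conn:
+            conn.execute('INSERT INTO users (username) VALUES (?)', (username,))
+    except sqlite3.IntegrityError:
+        # The race loser gets the same friendly message as the
+        # pre-validation path, instead of a 500.
+        return 'username already exists'
+    return 'ok'
+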

+ +

Note: letting it slide may also trigger your 500 error alerts, if any, at 3AM for what is actually just a ""minor"" validation error :-)

+ +

I know this is an unlikely scenario, but it can still happen. What's your approach to it?

+",319317,,,,,43890.89722,"Still catch ""duplicate entry"" exception of database even after pre-validation of user input?",,1,0,,,,CC BY-SA 4.0, +405934,1,,,2/29/2020 22:55,,0,22,"

I made a little application using Django REST Framework + Vue + Axios that is basically a CRUD app. In this little SPA (there are two tabs) I send info on the first page, and on the second one I fill a table with the data. To fill the data I use Axios in the mounted hook, so every time I open the app it refreshes the data; also, every time I send a ticket I use ""location.reload"" to refresh the page, which of course refreshes the data again.

+ +

The app works perfectly on my machine, but the problem arises when I share it with my colleagues through our internal network. When they open the app the first time it refreshes the data in the table normally, but when they send a ticket it does not: the page refreshes, but the Axios call doesn't bring the data from the server, and the only way my colleagues can get it right is to log out and then log in again.

+ +

After some research I found the ""never_cache"" decorator, which solved the problem; it worked perfectly after applying it. But the thing is, I don't know why.

+ +

I mean, I read that said decorator only adds some data to the response headers: ""Cache-Control: max-age=0, no-cache, no-store, must-revalidate"". Is this always needed? My problem is that (and I admit I'm a n00b) this completely changed how I understand AJAX calls. Do I need to add that decorator every time? Why does this cause problems sometimes and not others? How do I know if I should add that header to every AJAX call? Can someone explain a little bit how this works? A sketch of my current understanding is below.
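
+ +

My current understanding, as a hedged Django sketch (the view name and payload are placeholders): the browser was serving the GET response from its HTTP cache, and the decorator's job is just to add headers telling it not to. Setting the header by hand is roughly equivalent:

+ +
from django.http import JsonResponse
+
+def ticket_list(request):
+    resp = JsonResponse({'tickets': []})  # placeholder payload
+    # What @never_cache does for you: mark this response as non-reusable
+    # so the browser re-fetches instead of replaying its cached copy.
+    resp['Cache-Control'] = 'max-age=0, no-cache, no-store, must-revalidate'
+    return resp
+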

+",305795,,,,,43890.95486,"Cache problem on a django made api, already solved but i need to understand what is happening",,0,0,,,,CC BY-SA 4.0, +405935,1,,,2/29/2020 23:31,,1,55,"

Searching for a good PHP real world example, I've found this example of ""composite"" using:

+ +
    +
  • FormElement as Component
  • +
  • Fieldset and Form as Containers
  • +
  • Input as Leaf
  • +
+ +

(this is my UML from the code):

+ +

+ +

So, is this form generator a valid GoF Composite?

+",356206,,356206,,43890.99167,43891.66042,Is this form generator a valid Composite GoF?,,1,7,,,,CC BY-SA 4.0, +405936,1,,,3/1/2020 2:40,,0,125,"

When would I use reactive programming libraries like RxJava and Project Reactor, compared to stream processing engines such as Storm and Flink?

+ +

I am aware that these concepts might not be directly comparable, but I would like to understand the relationship between both concepts better.

+ +

My current understanding is that both focus on unbounded streams of events/data.

+ +

Reactive programming is a lower-level concept, in the sense that a stream processing engine could be implemented with it. I think this is actually the case for Akka Streams.

+ +

Stream processing engines seem to aim to solve the problem of horizontally scaling stream processing tasks, i.e., clustering etc.

+ +

Please clarify the commonalities and differences between both concepts, typical use cases for each of them, and how they otherwise relate to each other.

+",358774,,,,,43892.59861,What is the relationship between reactive programming and stream processing engines?,,1,0,,,,CC BY-SA 4.0, +405941,1,,,3/1/2020 10:29,,-2,41,"

Context

+ +

We have an application that is written in .NET and runs on a Citrix server. This app consists of shortcuts to external tools (like DameWare, VNC Viewer, mstsc.exe, msra.exe, ...) that are installed on the server. The user fills in a hostname in the app, and after clicking the tool's button the external tool is started with the hostname as a parameter, so a remote connection to the client is set up.

+ +

Goal

+ +

The goal here is to rewrite this application to make it accessible through an internal website, so Citrix is no longer required.

+ +

idea

+ +

The idea is to use node.js to create the API. This API will be used to add new tools that are installed on the server, launch executables on the server with the arguments from the input box on the website, ...

+ +

This will allow us to install all these remote connection tools on one server and have them used by our service desk agents from their web browsers.

+ +

Question

+ +

Is it possible for an SPA, created with, say, Vue.js, to launch an application installed on the server and open it on the user's PC?

+ +

Sorry if this is a dumb question, but I couldn't find any information about this anywhere.

+ +

In a reply to this question it was said that the API can launch an executable, but can it also display the GUI from, say, VNC Viewer on the client PC?

+",273652,,273652,,43891.44375,43891.45694,Can a JavaScript SPA launch executables installed on the server?,,1,1,,,,CC BY-SA 4.0, +405943,1,405988,,3/1/2020 10:51,,-1,104,"

I'm currently designing my first API and I had a question: Should an API return all information about an object and its foreign keys?

+ +

To explain the above a bit better, this is a part of the database I have made:

+ +

+ +

When I get a single event and I know that the person making the request needs the information about Event Location and Zone, should the API return the full information in one request, or should the person calling the API have to make a second and then third API call to get the remaining information about Event Location and Zone?

+ +

Thanks for any replies! If any further details are needed let me know :)

+",358789,,,,,43892.39306,Should an API be designed so that all information needed is returned in one request?,,1,5,,,,CC BY-SA 4.0, +405946,1,,,3/1/2020 11:21,,1,137,"

I have the following code:

+ +
const string endPoint = @""foo{0}?pageNum={1}&itemsPerPage={2}"";
+const int itemsPerPage = xxx;
+InvoiceCollection response = await _apiClient
+    .GetAsync<InvoiceCollection>(string.Format(endPoint, _apiClient.OrgId, 1, itemsPerPage));
+
+if (response?.TotalCount > itemsPerPage)
+{
+    var allInvoices = (await DoPagination(endPoint,response.TotalCount, itemsPerPage))
+        .SelectMany(i => i.Invoices);
+    response.Invoices = (response.Invoices ?? Enumerable.Empty<Invoice>()).Concat(allInvoices);
+}
+
+return response;
+
+ +

The logic here is to call an API; if the total number of results exceeds the pre-defined itemsPerPage, call all the other pages, collect those results in allInvoices, combine them with the first response, and return everything together. Since IEnumerable.Concat is used, a new object is created and assigned to response.Invoices.

+ +

I got a code review comment stating ""All responses sent down from the wire, to the HttpClient should be immutable."" The comment is specifically about the response variable, with which I disagree, because it's a locally scoped variable and whether it's mutable or immutable does not affect execution. I am not able to foresee any scenario where the code breaks because of that.

+ +

Is my argument valid?

+ +

Update:

+ +

Below is my InvoiceCollection class:

+ +
 public class InvoiceCollection
+ {
+     [JsonProperty(""results"")]
+     public IEnumerable<Invoice> Invoices { get; set; }
+     public int TotalCount { get; set; }
+ }
+
+ +

response is a bad name, which leads to confusion. Here response could be renamed to firstCollection or something similar; it does not hold any information about the API client.

+",243528,,243528,,43891.57153,43891.57153,Should we consider immutability for local scoped variables,,3,4,,,,CC BY-SA 4.0, +405952,1,,,3/1/2020 13:58,,1,40,"

I have a question about best-practices in terms of reliability.

+ +

There is some data residing in the RAM of some process, and it needs to be delivered to a number of external storage systems with the least possibility of data loss. The data is not required to be read again by the process.

+ +

I currently think that the way to go is to dump the data on a local drive and then deliver it into the storage systems with some external tool. I see the following benefits:

+ +
    +
  1. Writing to a single drive is much less likely to fail than, say, 1 of the 10 external storage systems the data needs to be delivered to
  2. +
  3. Abstracting away the storage type the data are delivered into.
  4. +
  5. Simplifying the logic of the process which generates the data.
  6. +
+ +

I see the following drawbacks:

+ +
    +
  1. Local disks usually have much less capacity than external storage
  2. +
+ +

So I assume that, when the data is not required to be read again, it is good practice to dump it on a local drive first and then deliver it with some external tool. Is it?

+",319610,,,,,43891.58194,The most reliable way to deliver data to external storages,,0,3,,,,CC BY-SA 4.0, +405959,1,405961,,3/1/2020 19:41,,4,660,"

Imagine a Twitter-like app where users are allowed to follow a limited number of users; say, 100.

+ +

I have this code:

+ +
fun follow(followerId: String, followedId: String) {
+  if (repo.countFollowing(followerId) >= 100) {
+    throw MaxFollowingReached()
+  }
+  repo.insertFollowingRelationship(followerId, followedId)
+}
+
+ +

And this test case:

+ +
@Test
+fun `users can following a maximum of 100 users`() {
+   val followerId = ""1""
+   val followedId = ""2""
+
+   val repo = mockRepository()
+   `when`(repo.countFollowing(followerId)).thenReturn(100)  // `when` must be escaped in Kotlin
+
+   service = Service(repo)
+
+   assertThrows {
+     service.follow(followerId, followedId)
+   }
+}
+
+ +

As you see, the number 100 is present in the test name. Is this a good practice? Or should I avoid it and write a ""more generic"" test?

+",145346,,,,,43896.49722,"Is it a good practice to include ""magic numbers"" in test case name?",,5,0,1,,,CC BY-SA 4.0, +405965,1,,,3/1/2020 22:06,,-3,93,"

In programming terms, what defines Open Source?

+ +

I am building a platform that I want to be open, to be used by other programmers in the context of running their own apps on it at the same time for companies on their public cloud.

+ +

The only restriction I want is that if anyone, including companies, wants to make a profit off of it by turning it into a SaaS (""Software-as-a-Service"") deployed in a public cloud, they must first acquire a commercial license.

+ +

Does this break the spirit of open source? What are the options in terms of licenses that will allow programmers/developers to take part in programming/shaping the platform while keeping it from being SaaS-ified without a commercial license from the organization that oversees the platform's code?

+ +

I've looked into MongoDB SSPL and MariaDB BSL

+ +

The BSL leans closer to being open source, while the SSPL is not open source.

+",92366,,,,,43891.97153,"Choosing the correct ""opensource"" license for codes",,1,2,,43892.47917,,CC BY-SA 4.0, +405968,1,,,3/1/2020 23:44,,0,440,"

When evaluating code performance, CPU instructions are not the best metric, since the exact number of operations depends on the compiler, CPU model, architecture, and so on.

+ +

And we came up with a bunch of mathematical tools to describe the performance of an algorithm (the most popular being big-Oh notation) and we simply report the running time to get a feel for how the implementation runs in practice (yes, big-Oh is not always that easy to use).

+ +

I need to optimize a piece of code. This implies a lot of reading and staring at the code looking for places to improve, but also a great deal of experimentation. I need to check if idea A does improve (and by how much) the performance. And I do this by measuring the time it takes to execute a piece of code.

+ +

But this is quite imperfect: I do use a multitasking system. I cannot totally reproduce the running conditions of the previous experiment. My browser might decide to start 10 more threads which get in the way, or I might get some CPU throttling at a different point in time due to various reasons.

+ +

So, usually, I need not only to wait for my program to run, but I also tend to close my browser, make sure the IDE is not indexing or doing some expensive operation, wait a few minutes for the CPU to cool down, and then run my experiment. This is the only way I could get reliable timings.

+ +

The question is: is there a way to count the number of operations on my CPU, for a specific process? This would (theoretically) be an ideal way of comparing 2 versions of my code, unless there is some issue I am missing.
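
+ +

The closest cheap proxy I've found is measuring process CPU time instead of wall time (on Linux, perf stat can additionally report actual retired instructions for a single process). It is not an operation count, but it is far more stable than wall-clock time on a busy machine. A minimal Python sketch:

+ +
import time
+
+def cpu_seconds(fn, *args, **kwargs):
+    # time.process_time() counts only this process's CPU usage, so the
+    # browser's extra threads do not pollute the comparison.
+    start = time.process_time()
+    fn(*args, **kwargs)
+    return time.process_time() - start
+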

+",112693,,,,,43893.63889,Why don't we count CPU operations?,,4,6,,,,CC BY-SA 4.0, +405973,1,405978,,3/2/2020 3:06,,9,4296,"

I have been studying Clean Architecture (CA) by Robert C. Martin and have found it quite useful in promoting architectural standards for large applications. Through implementation of a case study, I have a bit of experience of how it can help build applications that are more flexible, robust and scalable. Finally I have also come into grips with its potential shortcomings (many of which are outlined in this excellent response).

+ +

My question, though, is how Clean Architecture relates to Domain Driven Design (DDD) by Eric Evans. While not quite as familiar with DDD, I have noticed many similarities between DDD and CA. So here are my questions:

+ +
    +
  1. Are there any differences between CA and DDD (other than their naming scheme)?
  2. +
  3. Should they be used in tandem, drawing insight from both, or should one be used over the other?
  4. +
+ +

From research, the only thing I was able to find on this was that CA ""uses higher level of abstraction on the business objects"" sourced from here.

+",358776,,,,,44049.38958,Difference between Domain Driven Design and Clean Architecture,,3,4,6,,,CC BY-SA 4.0, +405976,1,,,3/2/2020 4:57,,-1,69,"

I have a proxy server that waits for another server to come online. A client sends a request to the proxy server - is there a way for the proxy server to send a preliminary response to the client to let the client know it has to wait for the other server to come online? Maybe a header that the browser can display?

+ +

For example, it seems possible for one header to be sent by the server and received by the client 30 seconds before subsequent headers, but in my testing with Node.js, all the headers are buffered and sent only after the first line of the body is written:

+ +

https://gist.github.com/ORESoftware/97d12e44af1fd7706468fff2eeb2add6

+",357515,,357515,,43892.79028,43892.79028,How to send a preliminary HTTP response,,1,7,,,,CC BY-SA 4.0, +405980,1,,,3/2/2020 6:41,,7,189,"

Software libraries targeting resource-constrained environments like embedded systems use conditional compilation to let consumers shave space by removing unused features from the final binaries distributed in production.

+ +

What are the tradeoffs of building a unique binary? Surely there are the obvious space savings on disk, in memory, and in CPU caches, and the performance benefits brought by space-time tradeoffs. But at first glance it seems this would multiply the configurations to test and design against.

+ +

It seems that the implications differ from those of traditional runtime path complexity, not just because there are now 2 different types of branching to consider, but because compiler conditional syntax is far less safe than its runtime counterpart (at least in C).

+ +

The specific example that sparked this question is BusyBox's heavy use of conditionally compiled feature flags: https://git.busybox.net/busybox/tree/networking/httpd.c

+",249024,,249024,,43892.29028,43892.80694,"How does conditional compilation impact product quality, security and code complexity?",,2,5,,43892.80972,,CC BY-SA 4.0, +405992,1,406000,,3/2/2020 13:21,,0,96,"

I am trying to figure out the right project structure for C++ and I am working on Ubuntu using CMake. I mostly work on AI/ Robotics/ Data Science. Assume that I want to generate executables and libraries. I have looked at a few links including link1, link2, link3. I am also looking at OpenCV git repo to gain more understanding. I understand that some aspects of this question can be opinion-based. But I still think there are enough parts that can be answered specifically. If not please let me know if there is another stack exchange site more suitable for this question (Code Review?).

+ +

1) Regarding test folder: Consider writing code to perform camera calibration. Assume that the test for the calibration procedure is done using calculation of re-projection error or something similar. In this case do we include the code for this test inside the test folder? Or is this folder only to perform tests from a software engineering perspective; things like time complexity, space complexity, edge cases, bugs?

+ +

2) Regarding app/apps folder: What exactly goes in here? My initial impression was that it is the source for final executable/ application. But OpenCV seems to have all kinds of files inside this folder.

+",267086,,,,,43892.66458,C++ Project Structure in UNIX/Linux environment: test and app folder,,1,0,,,,CC BY-SA 4.0, +405997,1,,,3/2/2020 15:01,,0,169,"

In a C# application, I've got a behaviour that I would like to be available in different classes that do not necessarily share the same ancestor. What better opportunity to ""favour composition over inheritance""?

+ +

In this case not only is composition a better way, it may also be the only way. Unfortunately, the code is the following. I left only the relevant part; obviously it would make little sense to delegate only this, but this is the problematic part.
+Unfortunately, the code is the following. I left only the relevant part, obviously it would make little sense to delegate only this, but this is the problematic part.

+ +
public void NotifyPropertyChanged([CallerMemberName] String propertyName = """")
+    {
+        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
+
+   }
+
+ +

It turns out that since PropertyChanged is an event, it can only be raised inside the declaring class.

+ +

I evaluated all the proposed alternatives to this, found in this Stack Overflow question: Alternatives. All of them seem a bit of a hack; that is, events are not meant to be delegated. Fundamentally, I would like someone to prove me wrong on this point.

+ +

Unfortunately this seems to be another case of the pattern: you use inheritance? No no no, there is a better approach. And when you try to implement it you suddenly bump into problems that require various levels of hacking the language, or bring the complexity of the solution to a whole other level.

+ +

At the end of the day, inheritance or composition is not a marriage, so I don't want to stay with either of them for life; I consider myself married to the non-duplication (DRY) principle. So I wanted to find a good way of not repeating the code. Here I simply cannot find a reasonably simple solution, and I hope that someone who has already bumped into this problem can propose some 'third way' solution.

+",28667,,1204,,43892.64861,43892.70764,favor composition over inheritance: practical problems,,1,13,,,,CC BY-SA 4.0, +405999,1,,,3/2/2020 15:46,,-1,64,"

I'm currently developing a REST API which is supposed to be consumed by my app and my desktop website. The user should not need to log in every single time.

+ +

Currently I implemented the following:

+ +
1. First call to auth with username & password -> a master token is returned (valid for 1 month) and stored in the DB. On the client side, it's stored in local storage.
2. Call to the access token endpoint with the master token. Check if it is expired; if not, check in the DB whether it was revoked by the user. Return a 15-minute access token.
3. Every secure resource needs the access token. If it's expired, the client needs to provide the master token to the access token endpoint again.
+ +

If the master token expired, a new login is required. By the way, how would I do that without disrupting the user flow? Let's say the user creates something and just at that moment the master token expires. A simple login popup?

+ +

The real question is: let's say the user uses incognito mode to log in. I create a master token in the DB. The user closes the window. Since no logout happened, the master token stays in my database until it really expires (1 month). If my users use incognito mode a lot, the database may hold many master tokens which might never be used again (since they aren't remembered by any application). Is there a flow to deal with that? Or should I just live with having them in the database?

+ +

It also seems a bit confusing if you have tons of signed-in devices in the ""Signed In Devices"" list.

+",358860,,,,,44162.83542,Implement Sign Out for Specific Device in REST API,,2,0,,,,CC BY-SA 4.0, +406001,1,,,3/2/2020 16:09,,3,209,"

We have a service with billions of key-value pairs stored in some storage. Before actually querying the data, we query a bloom filter to determine whether the key may exist or definitely does not exist in the underlying store.

+ +

But the problem is that with ~1 billion keys and a 0.1% false positive rate, the size of the bloom filter can easily exceed 2 GB. Also, we need to load many such filters (for every category of data there is a different filter).

+ +

In such a scenario, what is an efficient way to load a bloom filter (or any data structure, for that matter) whose size exceeds the JVM heap size?

+ +
1. Should I not load the filter into memory, and instead operate from disk itself?
2. Is there any standard library which provides this abstraction of keeping the bloom filter on disk while still being able to query it?
3. Is there any standard way to deal with such an issue? I have heard about off-heap storage, but don't have much understanding of it. (A memory-mapping sketch follows below.)
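
For what it's worth, a minimal sketch of the memory-mapped route: the bit array lives in a file and in the OS page cache rather than on the JVM heap, so only the touched pages consume RAM. The class name and the two toy hash functions are invented for illustration; a real filter would use k independent hashes, and off-heap libraries such as Chronicle Map are built around the same mechanism.

import java.io.IOException;
+import java.nio.MappedByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.file.Path;
+import java.nio.file.StandardOpenOption;
+
+public class MappedBloomFilter {
+    // MappedByteBuffer is limited to 2 GiB, so the file is mapped in 1 GiB chunks.
+    private static final long CHUNK = 1L << 30;
+    private final MappedByteBuffer[] chunks;
+    private final long bits;
+
+    public MappedBloomFilter(Path file, long sizeBytes) throws IOException {
+        this.bits = sizeBytes * 8;
+        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ,
+                StandardOpenOption.WRITE, StandardOpenOption.CREATE)) {
+            chunks = new MappedByteBuffer[(int) ((sizeBytes + CHUNK - 1) / CHUNK)];
+            for (int i = 0; i < chunks.length; i++) {
+                long offset = i * CHUNK;
+                chunks[i] = ch.map(FileChannel.MapMode.READ_WRITE, offset,
+                        Math.min(CHUNK, sizeBytes - offset));
+            }
+        }
+    }
+
+    private boolean getBit(long bitIndex) {
+        long byteIndex = bitIndex >>> 3;
+        MappedByteBuffer buf = chunks[(int) (byteIndex / CHUNK)];
+        return (buf.get((int) (byteIndex % CHUNK)) & (1 << (bitIndex & 7))) != 0;
+    }
+
+    public boolean mightContain(byte[] key) {
+        // Two toy mixing functions stand in for k proper hash functions.
+        long h1 = java.util.Arrays.hashCode(key) * 0x9E3779B97F4A7C15L;
+        long h2 = (h1 ^ (h1 >>> 31)) * 0xC2B2AE3D27D4EB4FL;
+        return getBit(Long.remainderUnsigned(h1, bits))
+            && getBit(Long.remainderUnsigned(h2, bits));
+    }
+}
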
+",358861,,,,,43897.14167,Operate on data that doesn't fit into JVM,,2,6,,,,CC BY-SA 4.0, +406002,1,,,3/2/2020 16:20,,0,85,"

I'm working on porting an iOS app to Xamarin, where it's hoped that the Xamarin project will match as much of the logic and layout of the iOS project as possible.

+ +

One big area that the iOS project includes is the use of parent/child contexts in Core Data. As far as I can tell, there's nothing similar in Entity Framework.

+ +

Some online guidance has suggested using the repository pattern; other online guidance has suggested that this isn't a good approach (because ""Entity Framework is a repository pattern"").

+ +

I'm wondering if anyone has any specific experience or knowledge on this. In particular, what's the best way to replicate parent-child context relationships, as per Core Data, or should we be looking at a whole new EF-specific approach?

+ +

Thanks.

+",358864,,,,,43892.68056,Implementing Core Data parent-child context relationship via Entity Framework,,0,0,,,,CC BY-SA 4.0, +406003,1,,,3/2/2020 17:03,,1,82,"

I have the class AircraftManager that keeps track of a list of Aircraft objects and a list of AircraftListener objects.

+ +

A DataSource object (a custom class, not SQL related) retrieves information from an external source; the data source decodes the message and calls the appropriate logic on the aircraft, e.g. setAltitude, setHeading. The program is designed to plot aircraft onto a live map.

+ +

Calling the setters on Aircraft should trigger the onAircraftValueChanged(Aircraft aircraft, String fieldName, Object fieldValue) method on each registered AircraftListener in the AircraftManager.

+ +

This first of all requires the Aircraft to keep a reference to the AircraftManager in order to invoke methods on the event listeners, as such:

+ +
public void setAltitude(int altitude) {
+    if (this.altitude != altitude) {
+        this.altitude = altitude;
+        this.aircraftManager.getListeners().forEach(aircraftListener ->
+                aircraftListener.onAircraftValueChanged(this, ""altitude"", altitude));
+    }
+}
+
+ +

This also means AircraftManager has the responsibility of keeping hold of all AircraftListeners as well as all Aircraft.

+ +

My question is whether this solution is satisfactory, and what improvements/changes you would make. It's also worth noting that a class will not want to add a listener on a single aircraft, but on all aircraft.

+",358794,,1204,,43892.91944,43892.91944,Structuring application to comply with Single responsibility,,1,8,1,,,CC BY-SA 4.0, +406007,1,,,3/2/2020 18:29,,0,165,"

Based on what I have been reading about the Liskov Substitution Principle, I understand that a square class and a rectangle class cannot be part of the same inheritance tree.

+ +

I would like to apply these ideas to a Folder and a File, as they commonly exist on disk. Is there a property of one, the other, or both which would force the conclusion that they too should not be part of the same inheritance tree according to Liskov?

+ +

What are some properties we could consider?

+ +
1. The data. Files consist of bytes. However, folders can be considered to consist of the bytes of the files they contain.
2. Access to both is defined by permissions.
+ +

I suppose the property where inheritance breaks down is the one concerning containment: a folder contains files; a file does not. However, if one considers that a file is just a collection of bytes, then a folder could be considered to be just the collection of bytes of the files it contains.

+ +

Is there a property of a file or a folder that requires us to maintain two inheritance trees?

+",9966,,,,,43893.43819,Inheritance: Folders and Files & Liskov Substitution Principle,,2,11,,,,CC BY-SA 4.0, +406013,1,406015,,3/2/2020 20:46,,0,106,"

The title says it all: how would I, in theory, create bindings to a GUI toolkit like GTK, Tk, or Qt for a programming language that has no GUI bindings? I ask because I want to experiment with this sometime in the future.
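
To make the usual mechanism concrete, here is a hedged sketch using Java's JNI: each native method is a thin declaration backed by a small hand-written C stub that forwards to the real toolkit call. The class and the library name ""gtkbinding"" are invented.

public final class Gtk {
+    static {
+        System.loadLibrary(""gtkbinding""); // the C glue library wrapping the toolkit
+    }
+
+    // Each declaration mirrors one C entry point (gtk_init, gtk_window_new, ...).
+    public static native void init();
+    public static native long windowNew();            // returns an opaque native pointer
+    public static native void windowShow(long handle);
+    public static native void mainLoop();
+
+    private Gtk() {}
+}

The other common route is generating such stubs from the toolkit's headers with a tool like SWIG, or using a foreign-function interface such as java.lang.foreign on newer JDKs.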

+",358879,,,,,43892.87917,"How Would I Create Bindings to a GUI toolkit like GTK, Tk, or Qt for a Programming Language?",,1,1,,,,CC BY-SA 4.0, +406020,1,,,3/3/2020 0:10,,3,169,"

Software libraries targeting resource-constrained environments like embedded systems use conditional compilation to let consumers shave space, and thus increase performance, by removing unused features from the final binaries distributed in production.

+ +

Assume that the library developers produced the compiler flags, and that these flags were a consideration in the design and test phases of the library.

+ +

As with most design decisions, there are tradeoffs; in this case the code complexity and product quality unarguably suffer due to the increased number of branches to design and test against.

+ +

However, with regard to security the net impact is not clear; there are both positive and negative effects from removing features. On the one hand, removing code reduces the attack surface. On the other hand, building a custom binary means that a bug, and thus an exploit, might be present only in that specific combination of flags.

+ +
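
As an aside, the same tradeoff shows up even in languages without a textual preprocessor. A minimal Java analogue (class and flag invented): the compiler is allowed to drop branches guarded by a compile-time constant, so every flag combination is a genuinely different artifact to test.

public class HttpServer {
+    static final boolean FEATURE_CGI = false; // flipped per build configuration
+
+    record Request(boolean cgi) {}
+
+    void handle(Request req) {
+        if (FEATURE_CGI && req.cgi()) {
+            runCgi(req);     // may be omitted from the bytecode when the flag is false
+        } else {
+            serveStatic(req);
+        }
+    }
+
+    void runCgi(Request req) { /* the conditional feature */ }
+    void serveStatic(Request req) { /* the always-present path */ }
+}
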

The implications differ from those of traditional runtime path complexity, not just because of the consideration of two different types of branching, but because compile-time condition syntax is far less safe than its runtime counterpart (at least in C).

+ +

An interesting phenomenon is that building a custom binary might expose users to targeted attacks, while using a standard build might expose users to mass exploits.

+ +

The question is: considering there are both positive and negative influences on security, if we were to quantify them, would the net impact be positive or negative? In other, less academic words: if one is concerned with security, should one build a custom binary containing only the features one needs?

+ +

The specific example that sparked this question is Busybox's heavy use of conditionally compiled feature flags: https://git.busybox.net/busybox/tree/networking/httpd.c

+",249024,,249024,,43893.01528,43893.64514,Does removing unused features from libraries through compiler flags increase or reduce security risks?,,2,5,,,,CC BY-SA 4.0, +406021,1,,,3/3/2020 3:02,,0,38,"

In a simple web architecture using Spring and Maven, which consists of the following layers:

+ +
• Controller
• Service
• Repository
• Shared
+ +

The Shared module keeps all the classes that are used across the different layers.

+ +

When creating a flat Maven structure, should we add the Shared module as a dependency to Repository only, or to every layer?

+ +

If we add the Shared module to the Repository module, then since the Service module includes the Repository dependency, and the Controller adds the Service module as its dependency, the Controller (and also the Service) module will transitively have access to the Shared artifacts.

+ +

However, is it better to add the Shared dependency to every module individually? Or should we break the artifacts shared between each pair of layers into their own modules? For instance, objects/exceptions shared between Repository and Service into one module, those shared between Service and Controller into another, and another shared module for clients of the Controller module?

+",334752,,,,,44083.83819,Where Should You Add a Shared Maven Module as a Dependency in an N-tier Architecture,,1,0,,,,CC BY-SA 4.0, +406027,1,,,3/3/2020 8:29,,2,90,"

So I learned that the feature I am interested in is called a ""displacement map"". This makes it so you can take a blank t-shirt (with all its curves and subtle textures) and apply an image to it so it looks like the t-shirt actually has that image printed on it.

+ +

The part I'm wondering about is how to make this a dynamic process. How do you, say, take a book cover at an angle with specific lighting, or a model wearing a t-shirt in specific lighting at a specific angle, and apply the graphic? How could I go about creating such a ""template"" image of a blank book or a model with a t-shirt, such that I can upload an image and it gets nicely applied to the t-shirt, with all I have to specify being the width/height/x/y of the image? How do you make it so the image gets rotated/skewed properly? What sorts of data models am I dealing with here?

+ +

I am a software engineer, but I've never had to deal with such a problem before, and it seems like I would have to create something like this: somehow build a 3D model out of the image I take of a model wearing a t-shirt, turn this 3D model into a displacement map or UV texture or (insert more graphics jargon that I don't fully understand), and then the image will become a texture on the 3D model. That means I have to do a lot of work and essentially sculpt a 3D model out of an image, because I don't see how it could be done automatically.

+ +

That's as much as I can imagine. I'm wondering if someone could gently guide me through the process, not in too much detail, but in some detail. I would like to create something like this or this.

+",73722,,,,,43893.38819,How to create a T-shirt or book displacement map in practice?,<3d>,1,0,,,,CC BY-SA 4.0, +406029,1,,,3/3/2020 9:07,,1,83,"

We are designing a system in which we need to store the amount of SalesTax applied as well as the tax percentage value. We decided to keep the tax value in a separate table (simplified example):

+ +

Taxes table:

+ +
+----+---------+---------------+------------+------------+
+| Id |  Area   | TaxPercentage | StartDate  |  EndDate   |
++----+---------+---------------+------------+------------+
+|  1 | Japan   | 5%            | 2019-01-01 | 2020-12-31 |
+|  2 | Japan   | 6%            | 2021-01-01 | null       |
+|  3 | France  | 21%           | 2019-01-01 | null       |
+|  4 | Germany | 19%           | null       | 2022-12-31 |
+|  5 | Germany | 18%           | 2023-01-01 | null       |
++----+---------+---------------+------------+------------+
+
+ +

and then we have purchased history table:

+ +

OrdersHistory table - option 1:

+ +
+----+------------+---------+----------+------+--------+---------+
+| Id |    Date    | OrderId | PriceVal | Curr | TaxVal | TaxPerc |
++----+------------+---------+----------+------+--------+---------+
+|  1 | 2020-11-11 |     123 | 100.00   | YEN  | 5.00   | 5%      |
+|  2 | 2020-11-12 |     456 | 200.00   | EUR  | 28.00  | 19%     |
++----+------------+---------+----------+------+--------+---------+
+
+ +

OrdersHistory table - option 2:

+ +
+----+------------+---------+----------+------+--------+-------+
+| Id |    Date    | OrderId | PriceVal | Curr | TaxVal | TaxId |
++----+------------+---------+----------+------+--------+-------+
+|  1 | 2020-11-11 |     123 | 100.00   | YEN  | 5.00   |     1 |
+|  2 | 2020-11-12 |     456 | 200.00   | EUR  | 28.00  |     4 |
++----+------------+---------+----------+------+--------+-------+
+
+ +

The difference is in the last column.

+ +

In other words: one developer claims we should keep the history table (ledger) in denormalized form (TaxPercentage is a value object, thus it should be copied), while another developer claims that the database (tables) should be in normalized form (thus we should keep a TaxId reference to the Taxes table). The OrdersHistory table can have hundreds of thousands or millions of records.

+ +

Which approach is better and why?

+",343027,,,,,43894.70625,Denormalized history (ledger) table with tax - yes or no?,,1,1,,,,CC BY-SA 4.0, +406030,1,,,3/3/2020 9:42,,0,59,"

For example, our app has an Access Groups feature which may be enabled/disabled per user. So we would have multiple checks for the feature status in different layers, like:

+ +
if ($user->access_groups_enabled) {...}
+
+ +

This may happen in all kinds of app layers, e.g. the check may be needed for SQL query logic (if AG is disabled, then show all Posts instead of only the Posts attached to an AG).

+ +

I hope it makes sense. So what is the best practice (the project should use OOP) to do that in an MVC context? Think of it as a Laravel-based app. I am thinking about polymorphism and separating the Access Groups feature into a module (like domain modules in DDD), then using AG-related objects through the same interface as we would use for modules unrelated to AG. Multiple if conditions all over the codebase related to checking the AG feature smell somehow. A sketch of the polymorphic idea follows below.
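
For illustration only, here is that idea sketched in Java (all names invented; the same shape works in PHP): resolve the user's visibility strategy once, and no other layer needs the if.

record User(long id, boolean accessGroupsEnabled) {}
+
+interface PostVisibility {
+    String postsQuery(); // the criteria/SQL this strategy contributes
+}
+
+class AllPostsVisibility implements PostVisibility {
+    public String postsQuery() { return ""SELECT * FROM posts""; }
+}
+
+class AccessGroupVisibility implements PostVisibility {
+    private final long userId;
+    AccessGroupVisibility(long userId) { this.userId = userId; }
+    public String postsQuery() {
+        // table names invented; use bound parameters instead of concatenation in real code
+        return ""SELECT p.* FROM posts p JOIN access_group_posts agp ""
+             + ""ON agp.post_id = p.id WHERE agp.user_id = "" + userId;
+    }
+}
+
+class VisibilityResolver {
+    static PostVisibility forUser(User user) {
+        return user.accessGroupsEnabled()
+                ? new AccessGroupVisibility(user.id())
+                : new AllPostsVisibility();
+    }
+}
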

+",312946,,,,,43893.40417,Best practice for organizing code with checks for enabled feature,,0,5,,,,CC BY-SA 4.0, +406034,1,,,3/3/2020 10:47,,0,100,"

Sometimes a class A can have an ""associated"" class B such that the implementation of B depends on the implementation of A. For example, this can happen when B's objects are to be created by A's objects and are to be ""linked"" to their creator object.

+ +

An example from Python: dictionary view objects are created by certain dictionary methods.

+ +

My problem: I am implementing an implicit graph in Python, and in addition to the class Node of nodes of this graph, I wish to have a class NeighbourExplorer whose objects contain more state than a simple node but allow more efficient access to and iteration over neighbour nodes.

+ +

If the ""main"" class is subclassed, the ""associated"" class may need to be subclassed too. Thus, when using inheritance, extra care must be taken to keep the classes synchronised.

+ +

What would be an appropriate object-oriented approach to group such classes together? If Python dictionaries and dictionary views were implemented in Python, how would it be done?

+ +

Currently, for the implicit graph example, I am thinking about just making the ""associated class"" an attribute of the ""main class"":

+ +
class Node:
+    ...
+    class NeighbourExplorer:
+        ...
+
+",60408,,60408,,43893.875,43893.875,"""Best practice"" or ""design pattern"" to group a class with ""associated"" classes in an object-oriented language",,0,11,,,,CC BY-SA 4.0, +406037,1,,,3/3/2020 12:00,,1,259,"

I'm thinking about a ""simple"" problem. I have a database with messages and email addresses to send:

+ +
emailAddress string
+message string 
+Sent bool
+
+ +

Is it possible to write a system in which I can be sure, whatever happens, that each email will be sent only once? I don't care if it was received; let's assume that the email send operation is atomic and that just after sending it I know whether it succeeded or not.

+ +

So the pseudocode can look like:

+ +
OpenTransaction
+   try
+      UpdateRow (sent = true)
+      SendEmail()
+      CommitTransaction
+   catch
+      RollbackTransaction
+
+ +

But in this case, after the email is sent someone may turn off the server, and then that information will never be stored in the DB.

+ +
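
For context, the usual compromise looks roughly like this sketch in Java (Db, Smtp and MailRow are invented placeholders): claim the row and commit before sending, so a crash leaves a recognizable in-between state instead of a silent duplicate or a silent loss.

interface Db { int execute(String sql, Object... args); }
+interface Smtp { void send(String to, String body); }
+record MailRow(long id, String address, String message) {}
+
+class Sender {
+    void processOne(MailRow row, Db db, Smtp smtp) {
+        // 1. Claim the row in its own committed transaction.
+        boolean claimed = db.execute(
+            ""UPDATE mails SET status = 'SENDING' WHERE id = ? AND status = 'PENDING'"",
+            row.id()) == 1;
+        if (!claimed) return; // another worker owns it
+
+        smtp.send(row.address(), row.message()); // 2. the side effect
+
+        // 3. Confirm. A crash between 2 and 3 leaves the row in SENDING; recovery
+        // must treat it as 'possibly sent' and either drop it or re-send (accepting
+        // a rare duplicate). Exactly-once is impossible without one atomic commit
+        // spanning both the database and the mail server.
+        db.execute(""UPDATE mails SET status = 'SENT' WHERE id = ?"", row.id());
+    }
+}
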

I would like to read a bit about such problems. Do you have any resources to take a look at?

+",358918,,90149,,43895.57222,43895.57222,Be sure to send email only once,,3,1,,,,CC BY-SA 4.0, +406041,1,,,3/3/2020 12:34,,0,145,"

I am designing an MVP for a simple gamification system for trampoline parks.

+ +

An external company provides bracelets for customers, then collects the data in their own web app, where they extract things like ""time in air"", ""number of jumps"", etc. in real time from the accelerometer readings (x, y, z).

+ +

I need to receive this transformed data somehow and display it on a scoreboard (among other things). My problem is how to integrate their data with my system.

+ +

First I considered the simplest possible scenario: one trampoline park and 10 bracelets.

+ +

The simplest solution would be to connect my web app with their web app directly via websocket, receive the transformed data from the bracelets in real time, and update the score on my website.

+ +

Things start to fall apart when I consider scaling the app to multiple trampoline parks and hundreds or thousands of bracelets.

+ +

I started researching integration patterns, but this is a very broad area and I am new to things like this. I stumbled upon a technology called Apache Kafka; it deals with streaming data (I don't know whether data from bracelets can be called streaming data, or whether that term is reserved for things like audio/video streaming, for example on Twitch or Netflix).

+ +

So the basic architecture I came up with is like this (I don't know if it makes any sense, if it's a good use case for Apache Kafka, or if it's overkill in my scenario):

+ +
1. The external company collects the data from thousands of bracelets and transforms it (from x,y,z readings to things like ""time in air"" or ""jump occurred"", etc.).
2. The transformed data is published to Apache Kafka on AWS (each trampoline park has its own topic?).
3. I run multiple instances of my web app, one per trampoline park; all of them subscribe to one topic each, and all are connected to the same database (PostgreSQL on AWS), so users can move between different trampoline parks and still have all their data persisted.
4. I consume data from Apache Kafka and use it for my internal purposes, e.g. calculating the current bracelet user's score (a consumer sketch follows below).
+ +
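
For step 4, a minimal consumer sketch (broker address, topic name and group id are invented):

import java.time.Duration;
+import java.util.List;
+import java.util.Properties;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+public class ParkScoreConsumer {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put(""bootstrap.servers"", ""kafka.example.com:9092"");
+        props.put(""group.id"", ""park-42-scoreboard"");
+        props.put(""key.deserializer"",
+                ""org.apache.kafka.common.serialization.StringDeserializer"");
+        props.put(""value.deserializer"",
+                ""org.apache.kafka.common.serialization.StringDeserializer"");
+
+        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
+            consumer.subscribe(List.of(""park-42-events""));
+            while (true) {
+                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
+                for (ConsumerRecord<String, String> record : records) {
+                    // key = bracelet id, value = a JSON event such as a jump
+                    updateScore(record.key(), record.value());
+                }
+            }
+        }
+    }
+
+    static void updateScore(String braceletId, String eventJson) {
+        // write to PostgreSQL and/or push to the scoreboard via websocket
+    }
+}
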

So my questions would be:

+ +
1. Does this architecture make any sense?
2. Is this a good use case for Apache Kafka or is it overkill? Or maybe there are better technologies to use? Maybe RabbitMQ is enough?
3. I'm not sure whether I should have one central web app gathering data about all trampoline parks or spread it among many smaller apps, one per trampoline park; but then I would still need some central server to coordinate...
+",131757,,131757,,43895.53194,44171.37778,Integrating with real-time data from multiple devices with accelerometer,,1,2,,,,CC BY-SA 4.0, +406042,1,406059,,3/3/2020 13:01,,2,172,"

I have a git repository and I would like to add the installation steps required to make the project work locally. The project is a private company project and its users will also be contributors. I'm using GitLab.

+ +

But I don't know in which file to write these steps. I have several options in mind, such as:

+ +
• the README.md file: it's the default page that users and contributors will see, but I fear too much information inside one file will make it hard to read.
• the CONTRIBUTING.md file: in my case, the people who install the project will be contributors; if they want to contribute, they will need to install the project locally.
• an INSTALL.md file: it's not recognized by GitLab the way CONTRIBUTING and README are, but the name makes it far more obvious that it contains the installation steps.
+ +

What is the best practice in this case?

+",358921,,,,,43893.82986,In what file do I put the installation steps in my git repository?,,1,2,,,,CC BY-SA 4.0, +406046,1,,,3/3/2020 13:28,,0,120,"

First bounded context: Requisites

+ +

Here we store user's organization requisites (Title, Logo, ID numbers, Bank requisites)

+ +

Second bounded context: Bank Integration

+ +

Here we have all use cases that somehow connected with integrations with user's banks.

+ +

Use case 1: User adds new bank account requisites from UI.

+ +

RequisitesApplicationService.AddNewBankAccount(requisitesDTO)

+ +

Use case 2: User sets up integration with the bank from UI (Button ""Set up integration"" near bank account requisites).

+ +

BankIntegrationService.SetUpBankIntegration(bankAccountRequisitesId)

+ +

Use case 3: Our app somehow knows about the user's bank account requisites even before the user adds them to the requisites. We want to automatically add the known requisites to the app and immediately integrate the account.

+ +

What should I choose?

+ +
1. BankIntegrationService.AddNewIntegratedBankAccount(requisitesDTO)?
2. RequisitesApplicationService.AddNewIntegratedBankAccount(requisitesDTO)?
3. var bankAccountRequisitesId = RequisitesApplicationService.AddNewBankAccount(requisitesDTO) and then BankIntegrationService.SetUpBankIntegration(bankAccountRequisitesId)?
4. Other options?
+ +

If 1 or 2 are suitable options, how should different Bounded Contexts talk to each other?

+",60346,,,,,43893.56111,Can one Application Service call the Application Service from another Bounded Context?,,0,6,1,,,CC BY-SA 4.0, +406048,1,,,3/3/2020 13:58,,2,89,"

I'm building a multi-tenant system that consists of one (SPA) client calling multiple APIs, all under my control.

+ +

User authentication is done with OpenID Connect; I'm sending an ID token and an access token to the client, and the client uses the access token to call the APIs.

+ +

At this stage, the APIs know they are receiving a request from an authenticated user, but they still need to know what that user can do: the authorization part.

+ +

I would like to prevent the scenario where all my APIs call a store where authorization data is persisted; it feels like a performance bottleneck and would tightly couple all my APIs.

+ +

I would like to keep my OpenID Connect server as decoupled from this specific project as possible, processing only the user's identity and API scope claims. Therefore, putting all authorization-related claims inside the access token seems like the wrong move.

+ +

I came up with the following solution:

+ +
• The user navigates to the SPA.
• The SPA calls the OpenID Connect server and gets an access token with identity info.
• The SPA calls a custom API endpoint, specific to this project; let's call it 'user-info-api'.
• This API issues a custom JWT, signed with a certificate or shared secret.
• This custom JWT is appended to every subsequent API request. It contains all the info needed to do user authorization in each API, thereby eliminating round trips to the authorization store; the only thing the APIs need to do is verify its signature (a verification sketch follows below).
+ +
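
That last verification step, sketched with plain JDK crypto (this is what JWT calls HS256; a real service would use a JWT library and also check the exp claim):

import java.nio.charset.StandardCharsets;
+import java.security.MessageDigest;
+import java.util.Base64;
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+
+public class TokenVerifier {
+    private final byte[] secret;
+
+    public TokenVerifier(byte[] secret) { this.secret = secret.clone(); }
+
+    public boolean isValid(String token) throws Exception {
+        String[] parts = token.split(""\\."");           // header.payload.signature
+        if (parts.length != 3) return false;
+        Mac mac = Mac.getInstance(""HmacSHA256"");
+        mac.init(new SecretKeySpec(secret, ""HmacSHA256""));
+        byte[] expected = mac.doFinal(
+                (parts[0] + ""."" + parts[1]).getBytes(StandardCharsets.US_ASCII));
+        byte[] actual = Base64.getUrlDecoder().decode(parts[2]);
+        return MessageDigest.isEqual(expected, actual); // constant-time comparison
+    }
+}
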

I would like a second opinion on this approach, for the following reasons:

+ +
• This seems a bit over-engineered.
• I'm worried about the increased payload this would add to each HTTP request.
• I'm wondering if there are standardized approaches tackling this same issue.
+",358922,,,,,43894.53958,Delegating authorization across multiple API's,,1,4,,,,CC BY-SA 4.0, +406053,1,,,3/3/2020 15:47,,-1,131,"

According to my understanding, in the MVC design the Model can only receive function calls from the Controller, not from the View directly.

+ +

Is it true that the Model can change or access the View directly?

+",358935,,209774,,43894.35417,43894.35417,Model View Controller: Can the Model access the view directly?,,2,3,,,,CC BY-SA 4.0, +406057,1,406058,,3/3/2020 17:38,,-2,74,"

I need to set up newsletter signup for a site.

+ +

Initially, I was eyeing services like MailChimp and SendPulse, but I'm not comfortable with their prices, and I'm not really keen on giving up control over the mailing list and giving them access to it (for both my own and my users' sake).

+ +

The downside of not picking such a service seems to be that they are supposedly far more resistant to being blacklisted or rate limited, or to having their emails marked as spam.

+ +

The question is: even if I follow Google's recommendations for mass email service configuration, is it still worth going with one of these big companies, from a blacklist/rate-limiting/marked-as-spam perspective, over setting up a DIY mass email service with Firebase Cloud Functions and SMTP?

+",303233,,,,,43893.74444,DIY email service vs MailChimp and pals,,1,5,,,,CC BY-SA 4.0, +406061,1,406089,,3/3/2020 19:24,,2,165,"

I have a repository function in my repository layer. I use sequelize for data access, and I will write tests for this function.

+

Here is the logic I want in English:

+
• My function should save a user to the database using the email and password.
• My function should return an object representation of the created user.
+

For this logic I wrote the code below.

+
const UserModel = require('../models/user');
+
+exports.create = async (email, password) => {
+    const createdUser = await UserModel.create({
+        email,
+        password
+    });
+
+    return createdUser.toJSON();
+};
+
+

So, I am planning to write integration tests. I will write some tests as below.

+
const userRepository = require('./repositories/user');
+const userModel = require('./model/user');
+const chai = require('chai');
+
+it('should create a user by using the correct email and password', async () => {
+    const email = 'testemail@test.com';
+    const password = 'test@Pass123';
+
+    const createdUser = await userRepository.create(email, password);
+
+    const userInDb = await userModel.findOne(
+        {
+            where: { id: createdUser.id }
+        }
+    );
+
+    chai.expect(userInDb.email).to.be.equal(email);
+    chai.expect(userInDb.password).to.be.equal(password);
+});
+
+it('should return object representation of created user', async () => {
+    const email = 'testemail@test.com';
+    const password = 'test@Pass123';
+
+    const createdUser = await userRepository.create(email, password);
+
+    chai.expect(typeof createdUser).to.be.equal('object');
+});
+
+

So I have several questions.

+
1. Isn't the integration test enough? Should I write a unit test for it as well? In other words, what are the benefits of having a unit test for my method, with or without the integration test?

2. Should I write the integration test and the unit test together?

3. How can I write a unit test for my method?

Also, two things seem incorrect to me:

4. My tests depend on each other: if the 'should return object' test fails, the other test could fail too, because the first test relies on the return value being an object.

5. My tests depend on the package I use. My repository method uses sequelize for its logic; however, if I swap sequelize for another ORM, I have to change my tests as well.
+",358952,,-1,,43998.41736,43894.52083,Why should I write unit test for my example instead of (or with) my integration test,,1,2,,,,CC BY-SA 4.0, +406063,1,,,3/3/2020 21:05,,1,66,"

I'm modeling the ""Domain"" layer of ""Clean Architecture"" for an application that gets its data from an XML file when it starts.

+ +

The XML file looks like:

+ +
<?xml version=""1.0"" encoding=""UTF-8"" ?>
+<BluePrint>
+    <BluePrintVersion>1.0</BluePrintVersion>
+    <BluePrintMaxMetaData>500</BluePrintMaxMetaData>
+    <ZoneData>
+        <MetaDataId>0</MetaDataId>
+        <ZoneConfigs>
+            <ZoneInfo>
+                <Model>
+                    <Name>Simple</Name>
+                    <DetailedName>Simple model</DetailedName>
+                </Model>
+                <MetaDataId>1</MetaDataId>
+            </ZoneInfo>
+            <ZoneInfo>
+                <Model>
+                    <Name>Medium</Name>
+                    <DetailedName>More than simple model</DetailedName>
+                </Model>
+                <MetaDataId>2</MetaDataId>
+            </ZoneInfo>
+        </ZoneConfigs>
+    </ZoneData>
+...
+</BluePrint>
+
+ +

Based on above I had the following entities (in C#):

+ +
public class Zone
+{
+    public short MetaDataId{get;set;}
+    public float Value{get;set;} //Not specified in XML but this comes from MetaData through MetaDataId
+    public ICollection<ZoneInfo> ZoneConfigs{get;set;}
+}
+
+public struct ZoneInfo
+{
+    public struct Model
+    {
+        public string Name;
+        public string DetailedName;
+    }
+    public short MetaDataId{get;set;}
+    public float Value{get;set;} //Not specified in XML but this comes from MetaData through MetaDataId
+}
+
+public struct MetaData
+{
+    public short Id{get; set;}
+    public string Name{get;set;}
+    public float Value{get;set;}
+}
+
+
+public struct BluePrint
+{
+    public string Version{get;set;}
+    public int MaxMetaData{get;set;}
+}
+
+ +

So that I could have MetaData, Zone and BluePrint repositories in ""Infrastructure"" and ""Persistence"" layers.

+ +

However, it turns out that the XML files have versions, and between one version and another the definition of elements can change. The only immutable element is ""MetaData"". For example, the XML element <ZoneData> can change to something like:

+ +
...
+        <ZoneData>
+            <ZoneConfigs>
+                <ZoneInfo>
+                    <Model>
+                        <Name>Simple</Name>
+                        <DetailedName>Simple model</DetailedName>
+                    </Model>
+                    <MetaDataId>1</MetaDataId>
+                </ZoneInfo>
+                <ZoneInfo>
+                    <Model>
+                        <Name>Medium</Name>
+                        <DetailedName>More than simple model</DetailedName>
+                    </Model>
+                    <MetaDataId>2</MetaDataId>
+                </ZoneInfo>
+            </ZoneConfigs>
+            <CustomConfigs>
+                <CustomValue>
+                    <Name>Custom0</Name>
+                    <Value>0</Value>
+                </CustomValue>
+                <CustomValue>
+                    <Name>Custom1</Name>
+                    <Value>1</Value>
+                </CustomValue>
+            </CustomConfigs>
+        </ZoneData>
+...
+
+ +

So my question is: should I keep the ""MetaData"" object as the only entity in the ""Domain"" layer, with the rest formed in the ""Application"" layer depending on the XML file version provided?

+",358955,,,,,43893.87847,Changing entities in Clean Architecture,,0,2,,,,CC BY-SA 4.0, +406064,1,,,3/3/2020 22:35,,1,33,"

Why is it that some abstract methods in interface hierarchies are redeclared as abstract further down?

+ +

iterator() for example, abstract in Collection is redeclared in Set and List, and again further down the Set hierarchy in NavigableSet.

+ +

I'd like to understand why this is done from a design point of view, or if there's some historical reason, etc. (I haven't seen anything about it in the texts I've read so far, and could not find anything in the JLS).

+ +

It is similar for other methods in Collection: add, size, remove, removeAll, clear, contains, containsAll, equals, isEmpty.
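
For what it's worth, a commonly cited reason is that the redeclaration carries a stronger documented contract: NavigableSet's Javadoc for iterator(), for example, promises ascending order, which Collection's cannot. A minimal sketch with invented interfaces:

import java.util.Iterator;
+
+interface Container<E> {
+    /** Returns an iterator over the elements, in no particular order. */
+    Iterator<E> iterator();
+}
+
+interface SortedContainer<E> extends Container<E> {
+    /** Returns an iterator over the elements, in ascending order. */
+    @Override
+    Iterator<E> iterator(); // same signature, stronger documented guarantee
+}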

+",70112,,,,,43893.94097,subinterfaces redeclaring abstract methods,,0,2,,,,CC BY-SA 4.0, +406066,1,406067,,3/3/2020 23:10,,1,217,"

I was given an API document that contained this diagram, and I was amazed at how cleanly it presents the flow of data between multiple endpoints by putting the endpoints in vertical silos. The data/event flow moves from top to bottom, going back and forth across the diagram.

+ +

Is there a name for this kind of diagram in particular, something more specific than just ""Data flow diagram""? And especially, how can I easily make one with this kind of arrangement? I don't know what to search for to find something this specific.

+ +

I've redacted the information that was originally shown.

+ +

+",112626,,,,,43920.51875,What kind of diagram is this?,,2,4,,,,CC BY-SA 4.0, +406068,1,406075,,3/4/2020 0:00,,0,72,"

Say our team owns 3 services: one is responsible for creating persons, another for creating buildings, and a third for creating jobs. We also have one website, which is connected to all these services. Now we want to add functionality to our UI that takes a CSV with fields for creating persons, buildings and jobs; once that CSV is uploaded, it should create persons with some buildings and jobs. Also, say this kind of functionality is required by some client calling our service from their service. We want this CSV upload functionality, at least in the UI, to be asynchronous, which means we will return some job id to the user/client, who can use it to track the progress of the job.

+ +
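
To make the asynchronous part concrete, here is a minimal in-memory sketch of that job-id flow (all names invented; a real service would persist jobs and fan out to the three domain services):

import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+public class CsvImportJobService {
+    public enum Status { RUNNING, DONE, FAILED }
+
+    private final Map<UUID, Status> jobs = new ConcurrentHashMap<>();
+    private final ExecutorService pool = Executors.newFixedThreadPool(4);
+
+    public UUID submit(byte[] csv) {
+        UUID id = UUID.randomUUID();
+        jobs.put(id, Status.RUNNING);
+        pool.submit(() -> {
+            try {
+                // parse the csv, then call the person/building/job services
+                jobs.put(id, Status.DONE);
+            } catch (Exception e) {
+                jobs.put(id, Status.FAILED);
+            }
+        });
+        return id;
+    }
+
+    public Status status(UUID id) { return jobs.get(id); } // polled by UI/clients
+}
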

Then, is it a good idea to create a service which does this orchestration kind of work, calling our other services, and provide its API endpoint to the client as well as to the UI? Or should the UI backend do the orchestration on its side while the client does it on theirs?

+ +

I have never seen a service whose only purpose is to create jobs and call other services. What do you suggest and why?

+",287375,,,,,44159.40903,Should there be a separate service for creating asynchronous job?,,2,0,1,,,CC BY-SA 4.0, +406079,1,406096,,3/4/2020 7:47,,33,6151,"

Doing a code review, I ran into this assertion in a unit test:

+ +
assertThatThrownBy(() -> shoppingCartService.payForCart(command))
+  .isInstanceOfSatisfying(PaymentException.class, 
+    exception -> assertThat(exception.getMessage()).isEqualTo(
+      ""Cannot pay for ids ["" + item.getId() +""], status is not WAITING""));
+
+ +

I feel that testing the literal text of an Exception is bad practice, but I'm unable to convince the author. (Note: this is not localized text; it's a String literal in the code with the item ID as a parameter.) My reasoning:

+ +
• If we decided that, e.g., the item's ID was a critical piece of information, it might make sense to test that the message contains the ID. But in this case, it would be better to include the ID as a field of the Exception class (a sketch of that alternative follows below).
• If we had some sort of external system that automatically read the logs looking for certain strings (we don't), then yes, this test would be justified.
• What's important in the message is that it clearly communicates what the problem is. But there must be hundreds of ways of constructing such a message, and no testing library can tell us whether a plain-text human-language String is ""clear"" or ""useful"".
• Thus, this amounts to, e.g., writing unit tests for translations: it's pointless, because the best you can do boils down to duplicating your messages file in the tests.
+ +
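
For comparison, the first bullet's alternative as a sketch (class and getter names invented): the exception carries the IDs as data, so the test pins down the semantics and the wording stays free to change.

import java.util.List;
+
+public class PaymentException extends RuntimeException {
+    private final List<Long> itemIds;
+
+    public PaymentException(List<Long> itemIds) {
+        super(""Cannot pay for ids "" + itemIds + "", status is not WAITING"");
+        this.itemIds = List.copyOf(itemIds);
+    }
+
+    public List<Long> getItemIds() { return itemIds; }
+}
+
+// In the test (AssertJ):
+// assertThatThrownBy(() -> shoppingCartService.payForCart(command))
+//     .isInstanceOfSatisfying(PaymentException.class,
+//         ex -> assertThat(ex.getItemIds()).containsExactly(item.getId()));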

What's the best practice here, and why?

+",38424,,123316,,43896.65486,43896.76875,Testing the wording of an Exception message,,9,3,3,,,CC BY-SA 4.0, +406084,1,,,3/4/2020 10:05,,0,84,"

I'm planning to set up a hybrid encryption procedure in my app. Basically, I did encryption for data sent from the client to the server using this method; now I'm confused about how to encrypt data sent from the server to the client. What I have done so far is this:

+ +

I generated a public-private key pair using asymmetric encryption (RSA), saved the private key on the server side, and shipped the public key with my app (the client). Now, when I need to send data to the server, I generate a secret key and encrypt the data with this secret key. I then encrypt the secret key using the initially created public key (from the asymmetric pair) and send this encrypted secret key, along with the encrypted data, to the server. The server can use its private key to decrypt the secret key and then use the decrypted secret key to decrypt the data.

+ +
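
That client-to-server flow, as a sketch using standard JCA algorithm names (IV transport, error handling and public-key authentication are omitted; a vetted protocol such as TLS is the safer route):

import java.security.PublicKey;
+import javax.crypto.Cipher;
+import javax.crypto.KeyGenerator;
+import javax.crypto.SecretKey;
+
+public class HybridEncryptor {
+    public static byte[][] encrypt(byte[] plaintext, PublicKey serverRsaKey) throws Exception {
+        // 1. a fresh symmetric key per message
+        KeyGenerator kg = KeyGenerator.getInstance(""AES"");
+        kg.init(256);
+        SecretKey aesKey = kg.generateKey();
+
+        // 2. the bulk data under AES
+        Cipher aes = Cipher.getInstance(""AES/GCM/NoPadding"");
+        aes.init(Cipher.ENCRYPT_MODE, aesKey);
+        byte[] ciphertext = aes.doFinal(plaintext);
+
+        // 3. wrap the AES key under the server's RSA public key
+        Cipher rsa = Cipher.getInstance(""RSA/ECB/OAEPWithSHA-256AndMGF1Padding"");
+        rsa.init(Cipher.WRAP_MODE, serverRsaKey);
+        byte[] wrappedKey = rsa.wrap(aesKey);
+
+        return new byte[][] { wrappedKey, aes.getIV(), ciphertext };
+    }
+}
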

My question is: how do I encrypt the data sent from the server to the client using this method? Should I create another asymmetric key pair for this, or is there a way to use the existing pair? I'm confused about the decryption part on the client side, since we can't save a private key in the app.

+",282748,,,,,43894.42014,How can i implement two way asymmetric encryption for my app and backend server?,,0,5,,,,CC BY-SA 4.0, +406088,1,,,3/4/2020 11:50,,3,67,"

I want some advice regarding my architecture and hosting options. I'm attempting to build an e-commerce site for e-books. It will use NestJS for the backend and React+TypeScript for the frontend; PostgreSQL will be the DB, and I want to use Elasticsearch to provide search capabilities.

+ +

Initially, I thought to host each of these projects on its own server. But since I'll be using Elasticsearch for the portion most in need of scalability, I can put the front and back ends on the same server. Something akin to this:

+ +

I would still need the backend as a separate project to perform user authentication and other utilities.

+ +

Does this make sense? Would a monolithic architecture work better in this case? I'm not serving multiple frontends, nor would the backend API be public. Maybe it will become an issue if I succeed with this and think of making a mobile app.

+ +

I was thinking of starting with DigitalOcean as my hosting platform; I'd need three servers at minimum (DB, ES, and front+back ends).

+ +

I would love to read your insights.

+",358992,,,user354459,43894.59792,44165.00278,ReactJS with Elasticsearch App Architecture,,1,0,,,,CC BY-SA 4.0, +406090,1,,,3/4/2020 12:34,,1,193,"

I have just been confirmed for an interview, and one of the hint questions given by the recruiter was:

+ +
""Explain Idempotency and a case when you can't make processing idempotent""
+
+ +

I understand idempotency, but I cannot figure out when it can't be achieved.
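
For what it's worth, a minimal sketch of the distinction:

// setBalance is idempotent: applying it twice leaves the same state as applying
+// it once. addToBalance is not, and it cannot be made idempotent without extra
+// information (e.g. a client-supplied request id to deduplicate on).
+public class Account {
+    private long balance;
+
+    public void setBalance(long value) { balance = value; }    // idempotent
+    public void addToBalance(long delta) { balance += delta; } // not idempotent
+}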

+",57613,,1204,,43894.60486,43894.72708,A case when you can't make processing idempotent,,3,2,,,,CC BY-SA 4.0, +406091,1,406338,,3/4/2020 12:38,,0,133,"

I face an issue where I want to inject Entity Framework's DbContext into a service class in a WPF application. The problem is that the service classes are instantiated and contained by the view models, which could mean a rather long lifetime for the DbContext if it were injected directly into the service.

+ +

The solution I have come up with is to inject a DbContextFactory instead. It is described by a very simple interface:

+ +
public interface IContextFactory<out T> where T : DbContext
+{
+    T Create();
+}
+
+ +

When creating a service, any class that implements IContextFactory can then be injected, like this:

+ +
private readonly IContextFactory<MyCustomContext> _factory;
+
+public MyService(IContextFactory<MyCustomContext> factory)
+{
+    _factory = factory;
+}
+
+ +

Which makes it really easy to get and use an instance of the context, in a disposable manner:

+ +
public async Task AddEntity(MyEntity entity)
+{
+    // Some validations first
+
+    using (var db = _factory.Create())
+    {
+        db.Add(entity);
+        await db.SaveChangesAsync();
+    }
+}
+
+ +

Is this a good way to do things? And are there any ways I could improve upon my concept?

+",271714,,,,,43900.39514,Services injected with factories,,2,1,,,,CC BY-SA 4.0, +406097,1,,,3/4/2020 15:37,,0,81,"

In our product there are many config files (we have many processes). For ""logical"" configuration, we store all configuration in a document-based database and then distribute it to the different components upon configuration change.

+ +

But we also store configuration in app.config files, and sometimes these files get modified; e.g. we store connection strings, ports, and other settings there. Is there a common way to prevent end users from playing with the app.config of a process?

+ +

The simple way is to encrypt the fields or convert them to Base64 and decode them when loading the XML attribute, but that is a bit hacky.

+ +

Thanks

+",66294,,,,,43894.91319,app.config prevent end users from modifying it,,4,1,,,,CC BY-SA 4.0, +406104,1,,,3/4/2020 16:50,,0,33,"

The mainstream opinion about using a reactive non-blocking backend is that it increases performance for large numbers of clients but sacrifices maintainability due to increased complexity.

+ +

I am in a situation in which the performance issue is not that important; it is rather low-scale, with probably no more than a dozen simultaneous clients. It could well be handled with plain old blocking I/O from this perspective.

+ +

However, we found that providing an SSE-based API built on Spring WebFlux would simplify the communication between the backend and the frontend, which uses Angular and RxJS. On the client side you could just subscribe to the reactive EventSource instead of handling polling, etc.

+ +

I am aware that SSE can be done with Spring MVC as well, but manually producing SSE messages seems much more effort than just creating a Flux and exposing it in the controllers, especially considering that the team uses the reactive programming model in the frontend anyway.

+ +
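
Roughly what ""almost out-of-the-box"" means here, as a sketch (path and payload are invented, assuming a WebFlux setup):

import java.time.Duration;
+import org.springframework.http.MediaType;
+import org.springframework.web.bind.annotation.GetMapping;
+import org.springframework.web.bind.annotation.RestController;
+import reactor.core.publisher.Flux;
+
+@RestController
+public class EventsController {
+    @GetMapping(path = ""/events"", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
+    public Flux<String> events() {
+        // each emitted element becomes one SSE data frame on the wire
+        return Flux.interval(Duration.ofSeconds(1)).map(i -> ""tick "" + i);
+    }
+}
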

So the goal would not be to go full reactive in the backend, but to provide an API that is easy to consume and to take advantage of a framework that provides SSE support, based on Project Reactor, almost out of the box.

+ +

The question is whether I am missing potential drawbacks of using Webflux to implement an SSE-based API in this scenario.

+ +

I am curious because my reason for considering this approach does not align with the common answers, e.g. on this site, which weigh performance aspects against the increase in complexity. The conclusion there seems to be: only go reactive in the 10% of cases with very high parallelism requirements for which thread-based models do not suffice.

+ +

Do the performance issues still apply if the use of reactive Streams is limited e.g. to the use of simple factories in the presentation layer?

+",358774,,,,,43894.70139,Can reactive streams simplify API development even for low-scale applications?,,0,0,0,,,CC BY-SA 4.0, +406107,1,406127,,3/4/2020 17:49,,1,113,"

I'm working on a webapp that is supposed to be ""database-driven"". Now, the stuff we do is a bit involved (a little configurator), and I hesitate to re-read the DB on every callback, because I fear the data might become inconsistent (several linked tables are involved) and a lot of checking would possibly have to be added to avoid that.

+ +

So my idea was to read all the required data when a user session starts and work with that. The data in general is rather static; new configurations created in a session will be stored and have no impact on other sessions. However, a coworker involved with testing now claimed that this was ""not database-driven"", as the data was cached (for the duration of a session).

+ +

My understanding was very naive: I had assumed ""database-driven"" to mean ""based on data held in a database"" (as opposed to ""data held in .xml files""). But does the general understanding of the term also imply ""no caching""?

+",288540,,,,,43894.96458,Is it legitimate for a database-driven website to read the DB once (at SessionStart)?,,2,1,,,,CC BY-SA 4.0, +406108,1,406113,,3/4/2020 17:50,,2,236,"

I'm fairly new to web development and I have a bit of a weird question.

+ +

I'm currently working on a personal project that may or may not eventually become a real, commercialized product.

+ +

In my database, I have a ""user"" table that stores the user's information, including their username, encrypted password, first name, last name, email address, etc.

+ +

Now, I've decided that the usernames will always be email addresses, since the users won't be able to interact with each other (therefore, no sensitive-data security issues). The problem is that I have both a ""username"" and an ""email"" column. As of now, when users create their account, they don't type in a username and an email; they type in an email, which gets written into both SQL columns.

+ +

Later on, they can change their username and email address separately, and the two could technically differ. I realize now that this might not be ideal; rewriting the parts of the code affected by this, changing the database structure, etc. is going to take a little bit of work, which I'm 100% willing to do, but I'm not sure yet whether it's necessary at all, so that's why I'm asking this question here.

+ +

Could this situation be causing issues in the future, either terms of data management, security management or simply user experience?

+",359039,,,,,43894.85625,Is it good or bad practice to use the user email in two separate columns of the user SQL table?,,3,0,,,,CC BY-SA 4.0, +406116,1,406437,,3/4/2020 19:48,,-1,249,"

I am doing some code review for a project in which a name-based UUID has been generated using SHA-256 as the hashing algorithm.

+ +

I found some Java code that creates name-based (hashing) UUIDs using SHA-256 in Java.

+ +

The Java code takes the first 16 bytes of the SHA-256 digest, sets the UUID type to 5, and sets the RFC 4122 variant. However, type 5 is the SHA-1 name-based type, not SHA-256.

+ +

I found another implementation (sorry, I lost the link) that did the same thing but set the UUID type to 6. Type 6 is not a type found in RFC 4122.

+ +

So my question is: is there a standard expansion to RFC 4122 for generating a name-based UUID using SHA-256?

+",40184,,40184,,43901.67222,43911.57222,Any standard expansions to RFC 4122 for generating a name based UUID using SHA256,,1,3,,,,CC BY-SA 4.0, +406117,1,406122,,3/4/2020 20:02,,2,130,"

Consider that you've got a POJO that you intend to serialize and send through a socket.

+ +

You can use whatever serialization strategy you wish (JSON, XML, protobuf, ..., etc) to serialize the actual POJO into a byte[], then you send it through the socket.

+ +

The byte[] arrives on the other end, but in this receiving context you do not know what class the information represents, so how do you know which POJO class to construct in order to begin populating its fields?

+ +

I'd want to do this without needing multiple endpoints/sockets within whose context I could assume the type of the data being received. I want to receive all sorts of different POJOs in the same socket context.

+ +

One idea was to share a mapping across these contexts, mapping a type code to a class type. I could then build some sort of user defined frame with which to transport the data.

+ +
| 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 |
++----------------+---------------+---------------+----------------+
+|                      TYPE CODE SECTION (32 bit int)             |
++----------------+---------------+---------------+----------------+
+|                      PAYLOAD LENGTH SECTION (32 bit int)        |
++----------------+---------------+---------------+----------------+
+|                          PAYLOAD SECTION                        |
++----------------+---------------+---------------+----------------+
+|                          PAYLOAD CONTINUED....                  |
++----------------+---------------+---------------+----------------+
+:                              .....                              :
++----------------+---------------+---------------+----------------+
+
+ +

On serialization, I would insert this type code into the frame, then append the byte[] that is the serialized POJO, then send this frame!

+ +

On the other end, I could extract this type code from the frame and look it up in the shared mapping. Voilà! I now know which class was sent through the socket, and can deserialize smoothly.

+ +
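
A minimal sketch of that frame, plus an invented registry interface. This is essentially what serialization ecosystems call a type tag or schema registry; protobuf's Any and Confluent's Schema Registry are off-the-shelf versions of the same idea.

import java.nio.ByteBuffer;
+import java.util.function.Function;
+
+public class Framing {
+    // Hypothetical registry: maps type codes to deserializers.
+    public interface TypeRegistry {
+        Function<byte[], Object> deserializerFor(int typeCode);
+    }
+
+    public static byte[] encode(int typeCode, byte[] payload) {
+        return ByteBuffer.allocate(8 + payload.length)
+                .putInt(typeCode)
+                .putInt(payload.length)
+                .put(payload)
+                .array();
+    }
+
+    public static Object decode(byte[] frame, TypeRegistry registry) {
+        ByteBuffer buf = ByteBuffer.wrap(frame);
+        int typeCode = buf.getInt();
+        byte[] payload = new byte[buf.getInt()];
+        buf.get(payload);
+        return registry.deserializerFor(typeCode).apply(payload);
+    }
+}
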

However, this seems bad, because now all of these contexts are coupled together by this shared mapping. What if I want to make my project micro-service oriented? It could get ugly, especially since the mapping could be different for a different use case.

+ +

It occurred to me that this is a problem people have solved, and maybe I just don't know the name for this kind of thing, or the high-level design patterns/ideas.

+ +

Could someone provide some context? What solutions already exist that already solve this problem? Is there a name for this type of thing?

+",359044,,1204,,43895.03958,43895.03958,How do you resolve a byte[] into a class instance in a way that doesn't couple the serialization/deserialization contexts together,,3,5,,,,CC BY-SA 4.0, +406129,1,406133,,3/4/2020 23:40,,-3,128,"

I'm experimenting with creating a website as an IT ticketing system. This website has a top nav and a side nav, and I want the content area to change depending on the link clicked. I've managed to get the concept down using just PHP, via isset($_GET['page']) and conditional statements which trigger the include command to pull in the PHP documents I want to fill the content area with.

+ +

This partially works, but the entire page gets completely reloaded when I click one of the links. The links are basically just a href=""?page=tickets"". This defeats the whole purpose of my template.

+ +

Basically, I'm looking for advice as to which direction to head. I know there are frameworks like Laravel that use things like Blade for MVC, there are iframes, and so many other things, but I don't know what's right for this specific task. Thanks for the help.

+",359067,,359067,,43894.99722,43916.85556,What are some of the best ways to create single page websites with dynamic content?,,3,2,,,,CC BY-SA 4.0, +406134,1,,,3/5/2020 1:13,,1,118,"

As of iOS 6, Apple decided that Unwind Segues would be added to their layout/views. What this means is that when you're on the 7th view of a stack, you can pop back to any earlier one. For those of you who are pure OO fans (like myself): I cringed a little. I cringed even more when I learned these also work with pop-ups.

+ +

Why would a modal on top of a stack know about the Nth view that came before it?

+ +

I'm a big fan of OO because when you stick to its principles, you don't get spaghetti. And once you have a complex set of modals shared among multiple navigation controllers, these segues begin to cluster into a bunch of lines on your storyboard that effectively don't enhance the navigation (compared to an object-oriented solution), but do take up space on your storyboard.

+ +

If you change a parent view, you immediately break the unwind segue and have to edit all the code that pointed to it, which is exactly one of the reasons we avoid doing this in code.

+ +

This has made me cringe for a while, and I don't see any questions that bring this up in relation to views.

+",74778,,,,,43925.62569,Should child objects not know parents while in views/view code?,,3,3,,,,CC BY-SA 4.0, +406140,1,,,3/5/2020 10:13,,0,91,"

First, as far as I know, an Entity in DDD is almost the same as a Value Object, except that an Entity has identity. Every article I have read says the same thing: the entity id has an ORM mapping via some ORM tool. But I don't want to use ORM mapping in my entities. Instead, I would like to do database operations through repository interfaces, without mapping; and in this case, I am stuck on how to do this.

+ +

I will explain what I have in mind with the example below.

+ +

Let's assume I have a TODO application, and there are some questions in the TODO and some answers for each of those questions.

+ +

We have 3 value objects (or entities): Todo, TodoQuestion, TodoQuestionValue.

+ +

Now, my thought is that I have a value object (or entity) for the TODO. This value object has a method to get the questions, which returns an array of TodoQuestion value objects. And inside the TodoQuestion value object we have a method to get the values of the question, which returns an array of TodoQuestionValue.

+ +
<?php
+class Todo{
+   private int $id;
+   /**
+    * @var array<TodoQuestion> $questions
+    */
+   private array $questions;
+   private TodoRepositoryInterface $repository;
+   public function __construct(TodoRepositoryInterface $repository){
+      $this->repository = $repository;
+   }
+   public function getQuestions(){
+      $this->questions = $this->repository->listQuestionsByTodoId($this->id);
+      return $this->questions;
+   }
+}
+
+ +
<?php
+class TodoQuestion{
+   private int $id;
+   private string $question;
+   /**
+    * @var array<TodoQuestionValue> $values
+    */
+   private array $values;
+   private TodoRepositoryInterface $repository;
+   public function __construct(TodoRepositoryInterface $repository){
+      $this->repository = $repository;
+   }
+   public function getValues(){
+      $this->values = $this->repository->listValuesByQuestionId($this->id);
+      return $this->values;
+   }
+}
+
+ +

Now, I would like to get your opinions on how I could shape this structure while following DDD rules.

+ +

Thank you.

+",265548,,,,,43895.42569,DDD Value Objects and Entity Without ORM Mapping in PHP,,0,3,,,,CC BY-SA 4.0, +406142,1,,,3/5/2020 11:07,,1,164,"

I am developing a hobby project where I try to use DI to get testable code. Until now, I found that it improved the readability, usability, and testability of the code. However, now I have a situation where the usability suffers a lot, and I think there is some key part of DI that I am missing.

+ +

In the initial setup phase of my application, I need to load a project file. In the Application class, the code is quite simple and readable:

+ +
void Application::run(int argc, char** argv)
+{
+    m_project.load(argc, argv);
+    // other stuff following, e.g., run main loop
+}
+
+ +

Its constructor receives an ICmdLineProject interface with a void load(int argc, char** argv) function. The concrete implementation reads a file name from the command line arguments, parses the file, and then applies the stored values:

+ +
void FooProject::load(int argc, char** argv)
+{
+    std::string file_name = read_file_name_from_cmd_line(argc, argv);
+    std::ifstream file(file_name);
+    IniConfig ini(file);
+    FooConfig config = parse_ini_config(ini);
+    m_window_system.create_window(config.m_width, config.m_height);
+}
+
+ +

When implementing the unit tests for FooProject::load(), I found that the function does too many things and has many error cases that need to be checked:

+ +
• It should throw if the cmd line args cannot be parsed.
• It should throw if the file does not exist.
• It should throw if the file cannot be parsed as an ini file.
• It should throw if the window_width and window_height values are missing in the ini file.
+ +

Additionally, I need to check that create_window() is called with the correct arguments. That last test was easy with the gmock framework. However, if this test fails, it does not show which part went wrong: did parsing the cmd line args go wrong? Did parsing the ini file go wrong? Did I swap width and height?

+ +

In order to split the responsibilities and simplify testing, I decided to split the class:

+ +
• CmdLineToFileProject: reads a file name from the cmd line args and calls load(file_name) on some IFileProject that was passed in the constructor.
• FileToStreamProject: opens the file as a stream and calls load(stream) on some IStreamProject.
• StreamToIniProject: creates an IniConfig from the stream and calls load(ini) on some IIniProject.
• IniToFooConfigProject: creates a FooConfig from the ini and calls load(config) on some IFooProject.
+ +

Now every class does exactly one thing. It is very easy to test both success and failure cases for each step. The FooProject is now very simple:

+ +
void FooProject::load(FooConfig const& f_config)
+{
+    m_window_system.create_window(f_config.m_width, f_config.m_height);
+}
+
+ +

Unfortunately, the application initialization has now become a lot more complicated. Previously, it was as simple as this:

+ +
FooProject project;
+Application app(project);
+app.run(argc, argv);
+
+ +

Now, however, I need to chain all the project classes together:

+ +
FooProject p0;
+IniToFooConfigProject p1(p0);
+StreamToIniProject p2(p1);
+FileToStreamProject p3(p2);
+CmdLineToFileProject p4(p3);
+Application app(p4);
+app.run(argc, argv);
+
+ +

I think this is really inconvenient and the users of FooProject and Application are suffering. I just wanted to improve a small method (5 lines) and needed to create 4 new classes and their interfaces. How can you avoid this class chain? This must be a common problem when applying DI techniques. Are there any solutions to this? Did I get the whole application setup wrong?

+",301401,,,,,43895.83125,Avoid class chains that emerge from DI,,2,1,,,,CC BY-SA 4.0, +406143,1,,,3/5/2020 12:14,,-1,483,"

I have this project with MVVM and clean architecture well implemented, but I've decided to split it into modules. Right now I have:

+ +
    +
  • apimodule: with the retrofit dependencies
  • +
  • app: with the data, domain and presentation layers.
  • +
+ +

I want to modularize it even more:

+ +
    +
  • Create a domain module with the domain models, usecases and repository interfaces. In theory, this module shouldn't depend on the data layer.
  • +
  • Create a data module with the repositories and DataSources that depends on the apimodule (and eventually databasemodule) and the domain module.
  • +
  • Move the activities and viewmodels to presentation module that only depends on the domain module.
  • +

+ +

So far so good. The problem starts when I try to add a DI library to the equation (I'm using Koin for this).

+ +
    +
  • If I define the repository injection in the domain module, then it has to depend on the data module to retrieve the repository implementation (see the sketch after this list).
  • +
  • If I define the injection in the data module, then I'm coupling presentation module to the data module, also making the repository interface useless.
  • +
+ +
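
To make the first option concrete, the Koin module declared in the domain module would contain something roughly like this (all names are made up; note how UserRepositoryImpl, which lives in the data module, leaks into the domain module):

+ +
import org.koin.dsl.module
+
+val domainModule = module {
+    // naming the implementation forces :domain to depend on :data
+    single<UserRepository> { UserRepositoryImpl(get()) }
+    factory { GetUserUseCase(get()) }
+}
+
+ +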

I think the first approach is much better than the second one, but I don't like it either as it would violate the principle ""Domain Layer does NOT depend on Data Layer"".

+",359107,,,,,44166.67083,"Android project, clean architecture and modular approach",,1,0,,,,CC BY-SA 4.0, +406146,1,406150,,3/5/2020 14:30,,0,178,"

I have recently encountered multiple articles with title Everytime a mock returns a mock a fairy dies And I ran into exact same situation while using factory class in my code. I am writing a sample Java code here to explain in detail what I mean.

+ +
@RequiredArgsConstructor
+class Battle{
+  private final GunFactory gunFactory;
+
+  public Damage attack(Bullet availableBullet, Enemy enemy){
+    Gun gun = gunFactory.create(availableBullet);
+    return gun.shoot(enemy);
+  }
+}
+
+class BattleTest{
+  private GunFactory gunFactory = mock(GunFactory.class);
+  private Gun gun = mock(Gun.class);
+  private Battle battle = new Battle(gunFactory);
+
+  @Test
+  public void attack(){
+    Enemy anyEnemy = new Enemy(10); // strength
+    Bullet anyBullet = new Bullet(""Machine gun bullet"");
+    Damage anyDamage = new Damage(100);
+
+    when(gunFactory.create(anyBullet)).thenReturn(gun);
+    when(gun.shoot(anyEnemy)).thenReturn(anyDamage);
+
+    assertThat(battle.attack(anyBullet, anyEnemy), is(anyDamage));
+  }
+}
+
+ +

As you can see in the sample (ugly) code my factory is a mock and is returning another mock. So it got me wondering if there is anyway to avoid this. I tried moving one class higher, so that the GunFactory sends Gun object as a parameter to attack function, but I would still run into same issue when I test the other class.

+ +

Is there a way to avoid this. Or with factory pattern is this inevitable?

+",264246,,,,,43895.63819,"How to avoid ""mock returning mock"" when using factory pattern",,1,4,,,,CC BY-SA 4.0, +406151,1,406158,,3/5/2020 15:37,,1,254,"

I'm just not sure why JIT (Just-in-time) and AOT (Ahead-of-time) compilation are often presented in opposition to one another.

+ +

If we do not care about portability, it feels to me that a program could very well be AOT-compiled, and then at runtime the JIT could be used to re-optimize the hot parts.

+ +

What are some known implementations using this scheme? If there are none, why not?

+",316634,,316634,,43895.82986,43896.44375,Why do we oppose AOT and JIT compilation. Can they be complementary?,,3,5,,,,CC BY-SA 4.0, +406154,1,,,3/5/2020 16:07,,1,11,"

I have a displayProjectTable component that gets its state from the projectData reducer and populates itself.

+ +

Currently displayProjectTable has local state that stores focusedRowID. My problem is that I want other components to have the ability to modify focusedRowID and the content of the row it points to.

+ +

Some examples:

+ +
    +
  • A component tells displayProjectTable to add 1 to focusedRowID (i.e. focus on the next row).
  • +
  • A component tells displayProjectTable to set the value of the row at focusedRowID to ""test"". Because displayProjectTable gets its state from projectData, that reducer's state will have to be changed at the focusedRowID index.
  • +

+ +

A possible solution is to store focusedRowID inside of the projectData.js reducer. However, this seems like a bad solution to me. The projectData reducer is responsible for fetching, storing, and saving projectData. Would it really make sense to store a focusedRowID when projectData shouldn't even know what a row is?
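
+ +

Concretely, storing it there would mean something like this in projectData.js (the action names are invented):

+ +
// projectData.js - sketch of the solution I'm unsure about
+const projectData = (state = { rows: [], focusedRowID: 0 }, action) => {
+  switch (action.type) {
+    case 'FOCUS_NEXT_ROW':
+      return { ...state, focusedRowID: state.focusedRowID + 1 };
+    case 'SET_FOCUSED_ROW_VALUE':
+      return {
+        ...state,
+        rows: state.rows.map((row, i) =>
+          i === state.focusedRowID ? { ...row, value: action.value } : row
+        ),
+      };
+    default:
+      return state;
+  }
+};
+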

+",359124,,,,,43895.67153,Best way to allow other components to change table's focused row?,,0,0,,,,CC BY-SA 4.0, +406156,1,406159,,3/5/2020 17:24,,0,125,"

I have been using a pattern in a lot of places (mainly C#) that I would like to know the name of.

+ +

Here is an example of it in C#:

+ +
public enum ThingType
+{
+    A,
+    B,
+    C
+}
+
+public interface IThing
+{
+    ThingType Type
+    { get; }
+}
+
+public class ThingA : IThing
+{
+    public ThingType Type => ThingType.A;
+}
+
+public class ThingB : IThing
+{
+    public ThingType Type => ThingType.B;
+}
+
+public class ThingC : IThing
+{
+    public ThingType Type => ThingType.C;
+}
+
+ +

As long as all implementations of IThing have a corresponding member in the enum, I can safely cast to the actual type of an IThing after checking the value of IThing.Type.

+ +
public void HandleThing(IThing thing)
+{
+    switch(thing.Type)
+    {
+        case ThingType.A:
+            ThingA a = (ThingA)thing;
+            // Doing something with a...
+            break;
+
+        case ThingType.B:
+            ThingB b = (ThingB)thing;
+            // Doing something with b...
+            break;
+
+        case ThingType.C:
+            ThingC c = (ThingC)thing;
+            // Doing something with c...
+            break;
+    }
+}
+
+ +
+ +

I apologize if this question is a duplicate; I went through a few pages of search results for multiple different search phrases and couldn't find it asked before.

+",336959,,336959,,43895.875,43895.875,Name of this enum-based design pattern to get the type,,1,19,,,,CC BY-SA 4.0, +406161,1,,,3/5/2020 19:42,,0,111,"

I would like to run tasks in parallel. At this time, I am using a very simple +worker pool using a single concurrent queue shared by all the threads.

+ +

Every task has a non-unique ""tag"" (an integer in my case).

+ +

At this time, I have implemented it by having a Runner object for each +tag. New tasks are sent to the Runner associated with the task tag.

+ +

The Runner then enqueues the task into an internal queue and then checks
+whether it is already scheduled into the worker pool. If not, it schedules
+itself into the pool.

When run inside the pool, it will pop a task from its internal queue, +run it and if the queue is not empty, reschedule itself.

+ +

Most operations (push, pop, the already-running check using a simple
+boolean, and the queue-not-empty check) are ""protected"" using a mutex
+for each Runner instance.

My solution seems to be working in a few simple cases, but I fear it

+ +

Is there a cleaner solution, maybe less race condition-prone?

+ +

Here is sample code illustrating my implementation (it may not compile as-is, since I'm writing it from home, but I hope it is enough to get the idea):

+ +
#include <map>
+#include <memory>
+#include <mutex>
+#include <utility>
+
+class Task {
+public:
+        Task(int tag) : tag_(tag) {};
+        void run() {
+                // do something
+        }
+
+        int getTag() { return tag_; }
+
+private:
+        int tag_;
+};
+
+class Runner;
+
+class WorkerPool {
+public:
+        void scheduleRunner(Runner *runner);
+};
+
+class TaskQueue {
+public:
+        void push(std::shared_ptr<Task> task);
+        std::shared_ptr<Task> pop();
+        bool isEmpty();
+};
+
+class Runner {
+public:
+    Runner(WorkerPool& pool) : pool_(pool), running_(false) {} // running_ needs an initial value
+
+    // schedule execution of the task.
+    // called by producers
+    void scheduleTask(std::shared_ptr<Task> task) {
+        std::unique_lock<std::mutex> lock(lock_);
+        queue_.push(std::move(task));
+        if (!running_) {
+            pool_.scheduleRunner(this);
+            running_ = true;
+        }
+    }
+
+    // run the task from the pool
+    void runTaskFromPool() {
+        std::unique_lock<std::mutex> lock(lock_);
+        std::shared_ptr<Task> task = queue_.pop();
+        // we can't leave it locked because the task may need
+        // to enqueue another element
+        lock.unlock();
+        task->run();
+        lock.lock();
+        if (queue_.isEmpty())
+            running_ = false;
+        else
+            pool_.scheduleRunner(this);
+    }
+
+private:
+    WorkerPool& pool_;
+    TaskQueue queue_;
+    std::mutex lock_;
+    bool running_;
+};
+
+class Dispatcher {
+public:
+    Dispatcher(WorkerPool& pool) : pool_(pool) {};
+
+    void scheduleTask(std::shared_ptr<Task> task) {
+        std::unique_lock<std::mutex> lock(lock_);
+        createRunnerIfNotExists_(task->getTag());
+        runners_[task->getTag()]->scheduleTask(std::move(task));
+    }
+
+private:
+    void createRunnerIfNotExists_(int tag); // creates runners_[tag] if it is missing
+
+    std::map<int, std::unique_ptr<Runner>> runners_;
+    WorkerPool& pool_;
+    std::mutex lock_;
+};
+
+",359136,,359136,,43895.89167,44165.92083,Worker pool running tasks of the same kind serially,,1,6,,,,CC BY-SA 4.0, +406163,1,,,3/5/2020 21:03,,0,132,"

I'm using a listener pattern where a class A listens for events from various classes B, C, D with the help of a listener interface I

+ +

Essentially the structure looks like:

+ +

+interface I {
+    void generalCallback();
+}
+
+class A implements I {
+    @Override
+    public void generalCallback() {
+        /// Do some stuff
+    }
+
+    void initCommon(Common common) {
+        common.setListener(this);
+    }
+}
+
+class Common {
+    I i; 
+    void setListener(I i) {
+        this.i = i;
+    }
+}
+
+class B extends Common {
+    void doStuff() {
+        i.generalCallback();
+    }
+}
+
+class C extends Common {
+    void doSomeOtherStuff() {
+        i.generalCallback();
+    }
+}
+
+class D extends Common {
+    void doSomeGeneralStuff() {
+        i.generalCallback();
+    }
+}
+
+ +

Now, for some reason, I want to inform A about a specific event of D. So is it okay to add one more method to the interface, one that is specific to a particular client (D here) rather than general?
+Updated Code

+ +

+interface I {
+    ...
+    void callbackFromD();
+}
+
+class A implements I {
+    ...
+    @Override
+    public void callbackFromD() { }
+}
+
+class D extends Common {
+    ...
+    void doSpecificStuff() {
+        i.callbackFromD();
+    }
+}
+
+ +

So my questions are :

+ +
    +
  1. Is this a good approach to solve this problem?
  2. +
  3. Or should I create a new interface just for one callback from D (roughly sketched below)?
  4. +
  5. What happens when I have the requirement for specific callbacks from other classes B and C as well?
  6. +
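
+ +

For reference, option 2 (a separate interface just for D's callback) would look roughly like this; all names are placeholders:

+ +
interface DListener {
+    void onSpecificEventFromD();
+}
+
+class A implements I, DListener {
+    @Override
+    public void onSpecificEventFromD() {
+        // handle D's specific event
+    }
+}
+
+class D extends Common {
+    DListener dListener;
+
+    void setDListener(DListener dListener) {
+        this.dListener = dListener;
+    }
+
+    void doSpecificStuff() {
+        dListener.onSpecificEventFromD();
+    }
+}
+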
+",359145,,,,,43903.70764,Adding client specific method to a listener interface is a good idea?,,6,1,,,,CC BY-SA 4.0, +406167,1,,,3/5/2020 23:14,,0,23,"

I'm teaching myself python and am working on a space invaders clone. Pretty much everything is working as I want it to. But I'm also trying to teach myself good principles of object-oriented design.

+ +

I have one class AlienInvasion that initializes a bunch of game objects and runs the while loop for the game (it currently also does some other things that should probably be in their own classes). When I create each object I pass it the AlienInvasion object so that each class can get access to objects that it needs to interact with. For example here is a class that checks for and responds to events:

+ +
import sys
+
+import pygame
+
+
+class EventHandler:
+
+    def __init__(self, ai_game):
+        """"""Initialize attributes.""""""
+        self.ai_game = ai_game
+        self.settings = ai_game.settings
+        self.display = ai_game.display
+        self.stats = ai_game.stats
+        self.sb = ai_game.sb
+        self.ship = ai_game.ship
+
+    def check_events(self):
+        """"""Respond to keypresses and mouse events.""""""
+        for event in pygame.event.get():
+            if event.type == pygame.QUIT:
+                self.sb.record_high_score()
+                sys.exit()
+            elif event.type == pygame.KEYDOWN:
+                self._check_keydown_events(event)
+            elif event.type == pygame.KEYUP:
+                self._check_keyup_events(event)
+            elif event.type == pygame.MOUSEBUTTONDOWN:
+                mouse_pos = pygame.mouse.get_pos()
+                self._check_button(mouse_pos)
+
+    def _check_keydown_events(self, event):
+        """"""Respond to keypresses.""""""
+        if event.key == pygame.K_RIGHT:
+            self.ship.moving_right = True
+        elif event.key == pygame.K_LEFT:
+            self.ship.moving_left = True
+        elif event.key == pygame.K_q:
+            self.ai_game.quit()
+        elif event.key == pygame.K_SPACE:
+            self.ship.fire_bullet()
+        elif event.key == pygame.K_e and not self.stats.game_active:
+            self.settings.set_difficulty(self.settings.easy)
+            self.ai_game.start_game()
+        elif event.key == pygame.K_n and not self.stats.game_active:
+            self.settings.set_difficulty(self.settings.normal)
+            self.ai_game.start_game()
+        elif event.key == pygame.K_h and not self.stats.game_active:
+            self.settings.set_difficulty(self.settings.hard)
+            self.ai_game.start_game()
+
+    def _check_keyup_events(self, event):
+        """"""Respond to key releases.""""""
+        if event.key == pygame.K_RIGHT:
+            self.ship.moving_right = False
+        elif event.key == pygame.K_LEFT:
+            self.ship.moving_left = False
+
+    def _check_button(self, mouse_pos):
+        """"""Set the difficulty setting.""""""
+        if self.display.easy_button.rect.collidepoint(mouse_pos):
+            self.settings.set_difficulty(self.settings.easy)
+            self.ai_game.start_game()
+        elif self.display.normal_button.rect.collidepoint(mouse_pos):
+            self.settings.set_difficulty(self.settings.normal)
+            self.ai_game.start_game()
+        elif self.display.hard_button.rect.collidepoint(mouse_pos):
+            self.settings.set_difficulty(self.settings.hard)
+            self.ai_game.start_game()
+        elif self.display.quit_button.rect.collidepoint(mouse_pos):
+            self.ai_game.quit()
+
+ +

The initialization method consists entirely of setting up attributes referencing objects from ai_game (the AlienInvasion class) so that EventHandler can tell each object what it needs to do in response to an event. This is how I've been handling all object interactions. The AlienInvasion class 'knows' about all the different objects and allows each object to interact with other objects via the sorts of attributes you see above in the EventHandler. This doesn't strike me as good design, in that the objects are (I think) more coupled than they should be.

+ +

I'm interested in learning how to make my code more maintainable. I've spent the last few days looking for an answer. I haven't seen anything that explains how I should be handling object interactions (or, more accurately, I haven't seen anything that I recognized as something I could/should apply here to minimize how much each object 'knows' about the other objects). I've read some things on inheritance, composition, mixins, and class (as opposed to instance) attributes and none of it seems right here (very possibly because I just don't see how to apply any of the things I've looked at properly here).

+ +

I suppose I'm essentially asking for a code review, but a design-oriented one.

+ +

You can see the full code at https://github.com/nmoore32/Space-Invaders-Clone

+",359152,,,,,43895.96806,Python - Options for accessing class attributes/methods from other classes while keeping classes encapsulated?,,0,1,,,,CC BY-SA 4.0, +406168,1,406210,,3/5/2020 23:32,,0,60,"

Let's say I have a booking system where I want to fire an AWS Lambda function to send an email or message to users 10 minutes before their booking starts. I have looked online and found a few solutions.

+ +

AWS Lambda + CloudWatch + DynamoDB:

+ +

Someone suggested adding the job to DynamoDB with the TTL set to the time I want to notify the user, and then connecting CloudWatch to listen for remove events on the DynamoDB table. I did not like this method, as it seems like a hacky way to do it.

+ +
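
As I understand the suggestion, the scheduling side would be something like this (table and attribute names are invented; a Lambda would then be subscribed to the table's delete events):

+ +
import boto3
+
+table = boto3.resource('dynamodb').Table('booking-notifications')
+
+def schedule_notification(booking_id, booking_start_epoch):
+    # expire the item 10 minutes before the booking starts;
+    # the TTL deletion is what would end up triggering the Lambda
+    table.put_item(Item={
+        'booking_id': booking_id,
+        'ttl': int(booking_start_epoch) - 10 * 60,
+    })
+
+ +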

ATrigger

+ +

This website provides a REST API which you can use to set up a scheduled job in the future. This is exactly what I need, but the last update on their social media was in 2018, so it is probably not maintained.

+",359153,,,,,44069.65556,Fire aws lambda at a specific time,,2,2,,,,CC BY-SA 4.0, +406170,1,406171,,3/6/2020 1:03,,1,83,"

This is (again) a question of methodology...

+ +

Suppose we are testing a service that returns Articles given ids, i.e. List<Article> getArticles(List<Integer> ids);. In addition, the corresponding element in the returned list will be null if an id is invalid.

+ +

Firstly fill the database by the SQL:

+ +
INSERT INTO articles VALUES (2001, ""aaa""), (2002, ""bbbb""), (2003, ""cc""), (2004, ""ddddd"");
+
+ +

Then test by the following. Which way shall I do it (see details in the comments below)?

+ +
void testGetArticle() {
+    // NOTE: WHICH WAY? (only one of the two declarations would be kept)
+
+    // way1: one nonexisting id, one existing id.
+    var ids = Arrays.asList(9999, 2002);
+
+    // way2: many nonexisting ids, many existing ids.
+    // var ids = Arrays.asList(1000, 1001, 2002, 2003, 2004, 9997, 9998, 9999);
+
+    List<Article> a = xx.getArticles(ids);
+    assertNull(a.get(0));    assertEquals(a.get(1), ...);  ...etc... // assert each of the results
+}
+
+ +

Thanks for any suggestions!

+",340897,,,,,43918.41042,"Is ""one representative"" enough or need to have many same-type things in testing?",,2,0,,,,CC BY-SA 4.0, +406172,1,,,3/6/2020 2:29,,-1,50,"

What is the best way to send messages from a Contact Us page in a website as a notification email to an administrator?

+ +

So far, this is what I did, but I feel like it is not really the proper way.

+ +
    +
  • I created a gmail account that will serve as an email sender
  • +
  • From the Contact Us page, I fetched the values from the text fields and sent them using the email library of CodeIgniter via the Gmail SMTP (rough sketch after this list)
  • +
+ +
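
Here is roughly what the sending code looks like right now (CodeIgniter 3; config values trimmed):

+ +
$config = array(
+    'protocol'  => 'smtp',
+    'smtp_host' => 'ssl://smtp.googlemail.com',
+    'smtp_port' => 465,
+    'smtp_user' => 'mailsender@gmail.com',
+    'smtp_pass' => '...',
+);
+$this->load->library('email', $config);
+
+$this->email->from('mailsender@gmail.com', $this->input->post('name'));
+$this->email->to($admin_email);
+$this->email->subject('New Contact Us message');
+$this->email->message($this->input->post('message'));
+$this->email->send();
+
+ +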

The result is that the administrator will receive an email from mailsender@gmail.com. What I'm wondering is: what if a lot of people send a message at the same time? Problems might arise from such a situation. Please pardon my ignorance, I'm quite new to this industry. Thank you for the response.

+",359160,,359160,,43896.10903,43926.45903,Ideas For Sending Messages from Contact Us Page in Website as a Notification Email to an Administrator,,1,0,,,,CC BY-SA 4.0, +406173,1,,,3/6/2020 3:14,,0,33,"

We are developing a GraphQL API for our DAL at work, and I'm having trouble seeing how, as a consumer of the API, we can use our current model in a beneficial way. I'm developing a desktop app that uses Entity Framework Core and interacts with an Oracle database.

+ +

From what I understand of EF Core, DbContext acts as a Unit of Work and DbSet acts as a Repository. So in this instance I can have different DbContext implementations for different database types (e.g. Oracle, MySQL, etc.) depending on the environment and available providers.

+ +

Now enter graphql. There isn't a provider for graphql as a consumer, so if I'm to use the graphql api where does that fit into the above structure?

+ +

Would graphql replace DbContext and DbSet?

+ +

The natural place I could see would be in method calls at one level closer to the ViewModel. I'm going to try to elaborate this in code.

+ +

DbContext's

+ +
public class MyContext : DbContext
+{
+    public DbSet<Person> People {get;set;}
+    public DbSet<Pet> Pets {get; set;}
+
+    // Map using fluent stuff...
+}
+
+public class MyContext2 : DbContext
+{
+    public DbSet<Person> People {get;set;}
+    public DbSet<Pet> Pets {get; set;}
+
+    // Map using fluent stuff going to a different Db...
+}
+
+ +

Using the Context

+ +
// interface this to enable swapping out dbcontext with something else.
+public interface IMyConsumerClass
+{
+   public Person GetPersonById(int id);
+}
+
+public class MyConsumerClass : IMyConsumerClass
+{
+    public MyConsumerClass(DbContextFactory contextFactory)
+    {  
+       // inject the correct factory for the type of endpoint MyContext or MyContext2.
+       ContextFactory = contextFactory;
+    }
+
+    private DbContextFactory ContextFactory {get;}
+
+    public Person GetPersonById(int id)
+    {
+       using(var context = ContextFactory.CreateContext())
+       {
+          return context.People.Include(person => person.Pets).SingleOrDefault(person => person.Id == id);
+       }
+    }
+}
+
+ +

So Where Does GraphQL Go?

+ +

Do I need to implement the interface IMyConsumerClass and skip entityframework entirely?

+ +

Have I overlooked a graphql provider library for entityframework!?

+ +

This seems great for the guy/gal writing the api, but not so great for the people consuming it.

+",206934,,206934,,43896.15972,43896.15972,How does GraphQL and EntityFramework work together consumer side,,0,0,,,,CC BY-SA 4.0, +406174,1,406183,,3/6/2020 4:31,,0,117,"

I don't know if this is the right place to post this sort of question, so feel free to move it.

+ +

Lets say that Vendor X is selling a certain software package. There are also other competitors, including Vendor Y, selling similar software packages. My company is leaning toward purchasing the software package from Vendor X, probably due to our long working relationship with the vendor.

+ +

For various reasons my company requires a certain software feature. Vendor X's software package does not include this feature, but Vendor Y's software package does. This feature has nothing to do with our company or environment, but is a general feature that other companies with similar needs could also benefit from (which is probably why Vendor Y offers this feature as standard).

+ +

The quote given by Vendor X for the development cost of the feature in question was a little high. Also, Vendor X's package is newish, whereas Vendor Y's package has been around a long time and has many customers. I am in favor of using Vendor Y's package, but most other people prefer Vendor X's package, most likely due to the past working history of our two companies. I have a feeling that in the end Vendor X's package will be chosen regardless of the cost. Given this, I want to try to get Vendor X to reduce their estimate to minimize the blow somewhat. (For the record, we have not yet received an estimate from Vendor Y even though we have been waiting months. There is some potential weirdness going on, but for the sake of simplicity I won't elaborate any further.)

+ +

Would it be unfair or unusual to ask Vendor X to reduce, or possibly eliminate, the development cost for the feature that arguably could be used to add to the overall desirability of the package in the software market? Is this an atypical thing to request? BTW, we asked casually about this once, but the vendor said no. (I have a feeling that Vendor X is aware of their status as preferred vendor and so is using that to their advantage the best they can.)

+ +

If Vendor X does not budge on the development cost, would it be impractical, unorthodox, or maybe even crazy to require that the vendor not use any of the code used in the development of our custom feature in the standard package that they sell to other customers? How would that even work anyway?

+",359161,,,,,43896.42222,What is the practicality of asking vendor to not include a specially developed feature in their standard package offering,,3,1,,,,CC BY-SA 4.0, +406175,1,,,3/6/2020 5:02,,0,99,"

The goal within my framework is to provide facades, or front-facing functions/APIs, so that people can easily interact with my system. In most cases that works super well by having static functions that people can just call and have everything handed to them. But I have an issue when these static functions (often grouped within a class) actually depend on objects that you'd normally see in the class's constructor.

+ +

So, I use services. They provide a way to enforce strong rules for what is ""just ok"" global state. For example, let's define a service:

+ +

registerService( 'serviceEveryoneCanUse', new ServiceObject ) makes this service available to everyone. Normally, when you do DI, you simply construct your object by calling the same services API to retrieve this service, like so:

+ +

new SomeObject( getService( 'serviceEveryoneCanUse' ) ), and so it's very clear that the object depends on a certain thing. The problem arises when I have a class with wrapper/helper functions which, as such, are static:

+ +
class Helpers
+{
+    public static function doSomething()
+    {
+        $service_I_need = getService( 'serviceEveryoneCanUse' );
+
+        //Do stuff with the service
+    }
+}
+
+ +

Now I have a hidden dependency. Of course, I can handle it as an error if it cannot retrieve the service, but my goal is to achieve clarity: I don't want someone to look at my class and say ""Oh, cool, no outside dependencies"", and then, once they see this function, realize ""Nope, guess it breaks if that object isn't available"".

+ +
+

""But why not just initialize the Helpers class with the proper + dependency if it actually is one?""

+
+ +

As I was saying, I'd rather people call Helpers::doSomething() than ( new Helpers( getService( 'serviceEveryoneCanUse' ) ) )->doSomething() every time they want to use that function.

+ +

One guess I have is that I actually have to make the Helpers class itself a service and then I would be able to do proper, by the book DI, as such:

+ +
class Helpers
+{
+    private ServiceINeedInterface $service;
+
+    public function __construct( ServiceINeedInterface $service )
+    {
+        $this->service = $service;
+    }
+
+    function doSomething()
+    {
+        //I can now use $this->service
+    }
+}
+
+registerService( 'Helpers', new Helpers( getService( 'serviceEveryoneCanUse' ) ) );
+
+getService( 'Helpers' )->doSomething();
+
+ +

But this has a problem: it takes so, so long to write, and, frankly, I believe the first approach, even if it's not perfect or ""pure"", still satisfies my testing needs because I can just swap these objects.

+ +

Is there no better approach for all this?

+ +

(As an additional note, this is not hard-coding. We don't depend on a specific object that we cannot ever change; we depend on a central service container/provider that is ever-present and the heart & brain of my app to serve us a certain object that can always change.)

+",353781,,,,,44076.48681,Usage of objects as services in static functions,,3,3,,,,CC BY-SA 4.0, +406178,1,,,3/6/2020 5:47,,0,36,"

A methodology question again...

+ +

Say we have an article module, a feed module, and a user module. We encapsulate them, so the user module only exposes one method, User getUserById(int id) (all other details are internal, e.g. whether it uses Redis or a database, how it stores data, ...).

+ +

Now we need to write many tests: test the DAO, the service, and the controller of the article module, and the same for the feed module. Of course, during testing we need to stub the user module. Which way is better?

+ +
+ +

Way A:

+ +

First create a shared stub:

+ +
class UserModuleStub implements UserModule {
+    public static final int EXISTING_USER_ID_A = 10;
+    public static final int EXISTING_USER_ID_B = 11;
+    ... (several more) ...
+    public static final int NON_EXISTING_USER_ID_D = 99;
+    public User getUserById(int id) {
+        // if id == EXISTING_USER_ID_XXX, return a user with that data; otherwise return null.
+    }
+}
+
+ +

Then, all the test classes will inject the UserModuleStub, say:

+ +
@Test void shouldReturnArticle() {
+    do_some_setup(..., UserModuleStub.EXISTING_USER_ID_A);
+    Article a = articleService.getById(...);
+    assertEquals(a.getAuthor().getId(), UserModuleStub.EXISTING_USER_ID_A);
+}
+
+ +
+ +

Way B:

+ +

Do not create any shared stub. Instead, mock it whenever needed, in every class, again and again.

+ +

Say, for one of the many test functions and classes:

+ +
@Test void shouldReturnArticle() {
+    final int USER_ID = 100; final User USER_OF_THE_ID = new User(...);
+    when(userModule.getUserById(USER_ID)).thenReturn(USER_OF_THE_ID);
+    // ...repeat for several times... because we may call the userModule for *many* ids!
+
+    Article a = articleService.getById(...);
+    assertEquals(a.getAuthor().getId(), USER_ID);
+}
+
+ +

And for all others, repeat this again and again.

+ +
+ +

Which is better? Thanks for any ideas!

+",340897,,,,,43896.24097,"Shared stub or ""private"" stub for other modules in testing?",,0,0,,,,CC BY-SA 4.0, +406179,1,406180,,3/6/2020 6:11,,5,1780,"

I'm currently working with a very large system and have been asked to add an additional parameter to a method that's called from over 200 different places directly.

+ +

The method signature looks something like this:

+ +
public static bool SendMessageAndLog(long id, long userId, string message, string cc="""", params Attachment[] attachments) 
+{ ... }
+
+ +

I need to be able to log the id of the event this message is associated with. I'm kinda stuck between 2 solutions:

+ +
    +
  1. Creating a new method that does exactly the same thing but takes the event ID as well, stripping the old method and making it call the new method (roughly sketched after this list)
  2. +
  3. Adding an optional parameter for the event id and going through and using named parameters for the 200 calls which seems like a massive pain
  4. +
+ +
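
For clarity, option 1 would look roughly like this (whether 0 is a sensible sentinel for ""no event"" is part of my question):

+ +
public static bool SendMessageAndLog(long id, long userId, string message, long eventId, string cc="""", params Attachment[] attachments)
+{ ... } // original body, now also logging eventId
+
+// old signature kept as a thin forwarder so the 200 call sites keep compiling
+public static bool SendMessageAndLog(long id, long userId, string message, string cc="""", params Attachment[] attachments)
+    => SendMessageAndLog(id, userId, message, 0, cc, attachments);
+
+ +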

Are there any other potential solutions to this? What would be the best practice in this case, keeping in mind that I can't refactor this too much?

+",96091,,261965,,43897.40694,43897.40694,Best Practice for Adding an Additional Parameter to a Function,<.net>,2,6,1,,,CC BY-SA 4.0, +406185,1,,,3/6/2020 9:42,,0,15,"

This could be viewed as a follow-up of Organizing maven modules and spring profiles with a more specific case.

+ +

I currently have an application with a GUI (mix of Swing for Cartography/JavaFX/Spring).

+ +

I am now adding some JUnit 5 tests and I want to lay a decent foundation. Currently I am not running on an integration server, but I am trying to be able to run the tests in headless mode (using TestFX and its Monocle platform support). This is meant to speed up execution, but it is also a good opportunity to shake up my current code base a bit and see how it goes.

+ +

Technically I am able to do it: for instance, instead of allocating a JFrame, I can just allocate a JPanel with a BorderLayout in headless mode, because JFrame throws a HeadlessException when running with headless enabled.

+ +

Now the problem: I am organizing each feature of my application by splitting the @Configuration into three different ones:

+ +
    +
  • Base Configuration: contains the basic stuff like services/repositories.
  • +
  • UI Configuration: contains the JavaFX beans I need to use (mainly controllers of views with Spring dependency injection).
  • +
  • Map Configuration: contains everything related to the implementation of the cartography engine I am using.
  • +

+ +

In the case of stuff shown on the map, I have the following dependency graph:

+ +

Map Configuration > UI Configuration > Base Configuration

+ +

The dependencies are currently resolved using Spring's @Import annotation.

+ +

But now I need the ability to swap the current UIConfiguration for a HeadlessUIConfiguration while still using the other two, in order to unit test the beans instantiated within.

+ +

For that, I have many ways to do it with Spring :

+ +
    +
  • In my ""UI Configuration"" I could have define two times the same set of beans with differents @Profile
  • +
  • I could have a UIConfiguration with @Profile(""with-gui"") and a UIHeadlessConfiguration with @Profile(""headless""), and import both of them from my MapConfiguration
  • +
  • I could override the beans declared in the UIConfiguration by importing it along with the other required ones and then redefining the beans.
  • +
  • I can consider that the UIConfiguration is not directly imported from the MapConfiguration but has to be provided on a higher level.
  • +
+ +

Note that I exclude approaches based on @ComponentScan, because I want to group my beans and have a clear understanding of all the dependencies they use.

+ +

However, I have some difficulty choosing between these options:

+ +
    +
  • In the first two cases, it means that if I package the application and deliver it to the client, I will have ""dead code"" in it; that ""dead code"" could still eventually mess something up if a @Profile is not placed properly.
  • +
  • The third case will make it harder to understand what is being used and what is not, especially if I use it more widely.
  • +
  • The fourth could basically be seen as the equivalent of a Maven ""provided"" dependency, which means that I cannot run the MapConfiguration alone but always need to add a UIConfiguration along with it. It will obviously work technically, but I am not sure I should go that way.
  • +
+ +

The question is: in the case of unit testing, how should I go about it? If possible, without polluting (or not polluting too much) the running code with the testing code, even if it is discriminated with @Profile.

+",216452,,,,,43896.40417,Organizing spring profile : specific case with unit test and multi module,,0,0,,,,CC BY-SA 4.0, +406189,1,,,3/6/2020 10:30,,1,78,"

This is not about the specification of semver itself (which is crystal clear), but rather about the best approach to implement it within a development pipeline when building libraries.

+ +
+ +

TL;DR: who/what sets the version and deploys the artifacts, and when?

+ +
+ +

We're planning to opensource some of our internal Maven-based GitHub-hosted Java libs as they might be useful for others. What we currently have:

+ +
    +
  • CI pipeline via GitHub Actions, running the tests on PRs going to master, as well as master directly each time it's updated.
  • +
+ +

Now, what's the best approach to update the version and deliver the artifacts? I can see several:

+ +
    +
  • Being ""release-driven"": each time a GitHub Release is manually created via the GitHub UI (github.com/owner/repo/releases), another pipeline starts, reads and fetches the created tag, runs mvn versions:set then mvn deploy (put simply). The actual committed POM's version then doesn't need to change, e.g. can stay at 0-SNAPSHOT.

  • +
  • Being ""merge-driven"": each time a PR is merged to master, roughly the same pipeline as above triggers, but uses the version of the POM (or auto increments the patch number by default, or something along those lines). A corresponding GitHub Release then needs to be created, either automatically or manually I guess.

  • +
  • Being entirely manual: some dev needs to mvn deploy manually after having dealt with the versioning and the release somehow.

  • +
  • Your approach.

  • +
+ +

How do people proceed out there? Is there one best approach? For our internal services and libs, a successful merge to master increments some version number and triggers a deployment, but is it the right approach if we want to follow semver strictly?

+ +

And because semver gives meaning to versions, I guess it cannot be entirely automatic, can it? I believe some human must know and tell the system whether the coming release is a patch, a minor or a major one?

+ +

One case to keep in mind as well, is when a change (for instance a security patch) needs to be backported to an older version that is still maintained.

+ +
+ +

I originally asked this question on opensource.stackexchange.com, but I noticed it wasn't really related to OSS, but rather semver, so I'm trying my luck here instead.

+",72730,,72730,,43896.68958,43896.86736,Implementing semver within a development pipeline when building libraries,,1,0,,,,CC BY-SA 4.0, +406190,1,,,3/6/2020 10:31,,7,236,"

Oh, abbreviations in code, the bane of every developer.

+ +

Consensus seems to be against them, because readability might be affected. But the examples given are often extreme: like ""repr"" for ""represent"". I call that extreme, because it is shortening a word not in need of shortening, into something that has a very ambiguous meaning.

+ +

Please consider my example as a possible example of when abbreviations may be justified. I've got a project that uses a number of projections, which are basically views on a Mongo-database. To name a projection descriptively, you have to describe the perspective, which takes a few words. And each projection has a number of classes to make it work, each of which have names of their own, suffixed to the projection name.

+ +

So, because I have a projection named StatemachineDefinitionRelevantEvents (take my word for it that this is the shortest way to describe the projection), I also have classes named StatemachineDefinitionRelevantEventsAggregateProjection, StatemachineDefinitionRelevantEventsReadModel, StatemachineDefinitionRelevantEventsRepository. Those are around 50 characters long.

+ +

Because vars are also disallowed (company policy), you get code like this:

+ +
StatemachineDefinitionRelevantEventsAggregateProjection statemachineDefinitionRelevantEventsAggregateProjection = new StatemachineDefinitionRelevantEventsAggregateProjection(<bunch of parameters with similarly lengthy names>);
+
+ +

Repeated over and over if I need to create several of these classes in a row.

+ +

I decided to just replace every instance of StatemachineDefinitionRelevantEvents with SDRE. I'm also clearly noting in my ubiquitous language and documentation that SDRE stands for StatemachineDefinitionRelevantEvents, with an explanation of the projection's definition and purpose. Lo and behold, my initialisations suddenly fit on a single line! And the suffix always remains unabbreviated, so even if you don't know what SDRE means, the function of an SDRERepository, SDREReadModel and SDREAggregateProjection should be interpretable from the name alone.

+ +

Is this an example of justified and 'good' use of abbreviations, or is it still wrong? I want to get a second opinion before I propose it to the team.

+",292603,,292603,,43896.44514,43896.70278,Is there a correct way to use abbreviations in code?,,2,5,,43896.72083,,CC BY-SA 4.0, +406194,1,406208,,3/6/2020 11:06,,3,631,"

I'm wondering what's the best approach, and its advantages, when specifying parameters for the Web Service methods. Best to explain it through examples.

+ +

In my (SOAP) WebService, used by a Xamarin mobile app, I have a WebMethod SubmitForm(int, TransactionData, List<Answer>), where:

+ +

int is the ID of the Project and specifies which Database to connect to,

+ +

TransactionData is a DTO containing data about the user and the form, and

+ +

Answer is a DTO containing the ID and answer for a single question.

+ +

Because I have separate tables for TransactionData and Answers, these are fairly unrelated to each other, but I'm considering creating a new DTO, SubmitRequest, which would contain these 3 objects. What are the advantages and disadvantages of these options, apart from readability and the minuscule overhead of instantiating and extracting from SubmitRequest?

+ +
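
The wrapper DTO I have in mind would be nothing more than this (property names are provisional):

+ +
public class SubmitRequest
+{
+    public int ProjectId { get; set; }
+    public TransactionData TransactionData { get; set; }
+    public List<Answer> Answers { get; set; }
+}
+
+ +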

Another situation to consider is a WebMethod which accepts a single primitive type, like int. Would it be better to just let it accept an int, or to wrap it in a DTO that contains just a single property? Frankly, I'm not a fan of the latter, because you end up with a DTO for a string, an int, etc.

+ +

Hence the question, what is the most advantageous for the system? Is there even a difference? Or maybe it's just a matter of personal preference?

+",359185,,,,,43896.62847,Multiple DTOs vs Single DTO vs primitives,,2,0,,,,CC BY-SA 4.0, +406195,1,,,3/6/2020 11:12,,1,57,"

We maintain a technical project whose specifications are in a document of few hundred pages. It's a text document (Word), not a Wiki or a Web-based stuff.

+ +

The document contains lots of tables with values (strings, numerical, parameter names) . Some tables are huge (hundreds of data), other are quite small.

+ +

I'm looking for good practices/best approaches for readable, comfortable documents, especially data tables: avoiding rows that are hard to distinguish from the surrounding content, but also avoiding overly colorful styling that makes it hard to focus. (As a side note, the specification also contains pseudo-code and code samples, formatted and highlighted with styles much like the SE markdown engine does.)

+ +

I found this article: Web Typography: Designing Tables to be Read, Not Looked At. It's aimed at Web design, but it might also be good for such documents.

+ +

Are there guidelines for writing an easy-to-read technical specification? I understand it might be a subjective question... Some people like colorful documents, others sober ones.

+",217421,,,,,43896.48472,How to improve readability of a specification document containing lots of tables?,,1,2,,,,CC BY-SA 4.0, +406197,1,,,3/6/2020 11:37,,1,17,"

I'm trying to figure out the best way to ""partially"" use the validates_uniqueness_of validator for a particular kind of problem.

+ +

Let's say I have a Book class with multiple comments, with the Comment class having a belongs_to :book. Now let's say I have certain comments which I consider notable, such as ""most quotable"" or ""most famous"" or whatever. To date, I would generally add a most_quotable_comment_id and a most_famous_comment_id foreign key to the Book class. But this gets unwieldy when you also have ""most tweeted"" and so on. Eventually your Book object has a ton of foreign keys. But assuming I only want one ""most_x"" for any given type (in other words, two comments can't equally be ""most quotable""), it's the easiest mechanism to implement.

+ +

An alternative might be to define a bunch of values like MOST_COMMENTED = 1 (etc.) and then add a column such as add_column :comments, :most_what, :integer.

+ +

So now I need to add a validator to make sure that for a given book, only one comment has most_what == MOST_COMMENTED. So that seems pretty straightforward:

+ +
validates_uniqueness_of :most_what, :scope => :book_id, :conditions => -> { where.not(:most_what => 0) }
+
+ +

The last bit avoids the problem that most comments aren't most-anything! :)

+ +

So this all seems a bit clunky to me, and I'm wondering what the opinions are on using a mechanism like this over just adding all the foreign keys to the main class. For the record, I'm not actually talking about books and comments, so it is safe to assume that I'll never have a comment which scores in two categories.

+ +

As I said, I originally coded it with foreign keys and my own view is that if you're dealing with a ""handful"" of foreign keys, the original method is best, but eventually as you add more categories, the other method is preferred. But what's a ""handful""? Two? Ten? Also, I'm curious about that second pattern and whether it's really wholesome (for want of a better word).

+",232378,,,,,43896.48403,Best RoR pattern for special instances of a subclass,,0,0,,,,CC BY-SA 4.0, +406201,1,,,3/6/2020 12:36,,-3,46,"

I'm trying to find a source of information to explain how to sort a list of items considering their multiple attributes and units. All I can find on the internet is code example on sorting Javascript lists and SQL queries.

+ +

The key is that it must compare Price, Probability and Color at the same time, for example. A higher price not necessarily should be the first element of the sorted list. If Probability is too high or if the color is Red, it should be priorized.

+ +

I understand that I must transform the values to be sorted to the same unit.

+ +

Anyone has a clue how do this?

+",354702,,,,,43896.55903,How to sort a list by multiple attributes and units?,,1,3,,,,CC BY-SA 4.0, +406216,1,406224,,3/6/2020 17:23,,3,253,"

General question

+ +

how can I design an interface that can support both

+ +
// v1beta1.Deployment
+type Deployment struct {
+    metav1.TypeMeta
+    metav1.ObjectMeta
+    Spec v1beta1.DeploymentSpec
+    Status v1beta1.DeploymentStatus
+}
+type DeploymentInterface interface {
+    Create(*v1beta1.Deployment) (*v1beta1.Deployment, error)
+    Update(*v1beta1.Deployment) (*v1beta1.Deployment, error)
+    UpdateStatus(*v1beta1.Deployment) (*v1beta1.Deployment, error)
+}
+
+ +
// v1.Deployment
+type Deployment struct {
+    metav1.TypeMeta
+    metav1.ObjectMeta
+    Spec v1.DeploymentSpec
+    Status v1.DeploymentStatus
+}
+type DeploymentInterface interface {
+    Create(*v1.Deployment) (*v1.Deployment, error)
+    Update(*v1.Deployment) (*v1.Deployment, error)
+    UpdateStatus(*v1.Deployment) (*v1.Deployment, error)
+}
+
+ +

which have different parameter and return types?

+ +
+ +

Details

+ +

The above two interfaces are from the Kubernetes go-client; they define different versions of the API. Since we have to support both of them for the different versions of clusters we are running, and I don't want to copy our application code for every version, I want to design an interface that can support different versions of Deployment.

+ +

The current code of our application has a lot of helper functions relying on the specific type, for example:

+ +
func (s *KubeControllerService) deploymentCustomImage(deployment *v1beta1.Deployment, appGitBuildConfig *models.AppGitBuildConfig) *v1beta1.Deployment {
+}
+
+ +

And we have hundreds of them. It would be very hard to support a new version by copying each function and impossible to maintain such code.

+ +

From what I know, given Go's lack of generics, the only viable way to support two different types is to use interfaces. But I'm facing methods with different parameter and return types, and I have no idea how to design for this scenario in Go.

+",169972,,,,,43896.80833,Golang Interface Design for Multiple Parameter and Return Types,,1,0,,,,CC BY-SA 4.0, +406227,1,,,3/6/2020 20:53,,1,64,"

I'm curious if there is a pattern or at least a better way to code this situation. For example, say you're writing a rest api for a reporting workflow. You have a User class and a Job class. Each User has a Job, and each Job a JobType. Some actions will be the same for the user but have additional logic based on their Job Type - submitting a Report for a user in Finance or Legal will have some more logic involved than an HR user's Report. Based off of only the User's id I want to check if their report requires extra work and then execute the proper logic.

+ +

Pseudocode:

+ +
void SubmitReportForUser(int userId, Report report) {
+  DoCommonReportWork(report);
+  if (ReportsNeedMoreWork(userId)) {
+    DoExtraReportWork(userId, report);
+  }
+}
+
+bool ReportsNeedMoreWork(int userId) {
+  var jobType = GetJobTypeForUser(userId); // Get from db
+  return jobType == JobType.Finance || jobType == JobType.Legal;
+}
+
+void DoExtraReportWork(int userId, Report report) {
+  var jobType = GetJobTypeForUser(userId); // Get from db
+  if (jobType == JobType.Finance) { DoFinanceReportWork(report); }
+  else if (jobType == JobType.Legal) { DoLegalReportWork(report); }
+}
+
+ +

Having to load the JobType twice and doing multiple checks on what the JobType is both come off as code smells to me, but I'm not sure of the best way to handle this situation. The only immediate fix I see would be to load the JobType earlier and have ReportsNeedMoreWork and DoExtraReportWork take the JobType instead of the user id (sketched below), but that doesn't get around the duplicated if/elses. This is a simplified example; in my case there are many more JobTypes that would involve extra work.

+ +
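
To be explicit, the immediate fix I mention would look like this, with DoExtraReportWork's if/elses staying exactly as they are:

+ +
void SubmitReportForUser(int userId, Report report) {
+  var jobType = GetJobTypeForUser(userId); // loaded once
+  DoCommonReportWork(report);
+  if (ReportsNeedMoreWork(jobType)) {
+    DoExtraReportWork(jobType, report);
+  }
+}
+
+ +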

Thanks for reading so far, any feedback is appreciated! This is an old codebase I'm working with, but looking to improve upon. Thanks!

+",311842,,,,,43896.87014,Best way to check if a value meets a condition and then perform additional logic based on the condition it meets?,,0,6,,,,CC BY-SA 4.0, +406229,1,,,3/6/2020 21:38,,1,84,"

I am making a C++ glfw wrapper for myself to use. I want to have classes like Monitor, Window, Context that would be wrappers for glfw objects like GLFWmonitor* or GLFWwindow*. The problem is that if I want to have a method create_window() in the Window class, it must have access to the underlying raw glfw pointer in the Monitor class. At the same time, if I wanted the Monitor class to have a method like enumerate_windows(), it would need access to the underlying glfw window pointer. I could have a public method get_raw_handle() in both classes, but implementation details are supposed to be hidden, and I don't want the user to be able to access this information. It should only be available to the library.

+ +
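
To make it concrete, this is the kind of accessor I'm trying to avoid making public (simplified):

+ +
class Monitor {
+public:
+    // needed by Window::create_window(), but I don't want users calling it
+    GLFWmonitor* get_raw_handle() const { return m_handle; }
+
+private:
+    GLFWmonitor* m_handle = nullptr;
+};
+
+ +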

I thought that I could friend the classes to each other, but that doesn't seem right, as if I wanted to add a new class, I would have to friend all existing classes to it and the other way around. It would also require forward declarations for everything and right now my library is header only.

+ +

I also thought that maybe all these objects could inherit from some kind of super WindowingSystemObject class that would have a protected method get_raw_handle(), but that would mean that the handle would need to be universal (like a void*) and then cast to the appropriate type.

+ +

All this seems so complicated. Instead of writing code I feel like some kind of a philosopher. I don't know anymore if I should make it in C++ (with classes) or just make it in C and don't care about any encapsulation as long as it works.

+",359229,,,,,43897.22083,How to make a system of mutually related classes?,,1,2,,,,CC BY-SA 4.0, +406230,1,406234,,3/6/2020 22:00,,4,122,"

I am working on building an API and SDK for a web service. My question is: what is the correct practice for logging? Should the SDK do logging for the API methods? All the SDKs I have seen do not do this, so I am wondering whether that is the recommended best practice.

+",315865,,,,,43896.99167,Should An SDK Have Logging in the API,,3,0,1,,,CC BY-SA 4.0, +406231,1,,,3/6/2020 22:30,,0,58,"

I'm struggling a bit to find a preferred way to organize a sequence of asynchronous tasks that can be applied in parallel. Say you are parsing data from many files. In my case I'm using JavaScript and promises, but this could be almost any language. (Hence the weird tags, ""javascript"" and ""language-agnostic"".)

+ +

Option A: Parallelize at the end

+ +

1) First, create the chain of tasks for a single file / stream, e.g.

+ +
function readAndParseAndConvert(filename) {
+  return read(filename)
+    .then((body) => parse(body))
+    .then((parsed) => convert(parsed));
+}
+
+ +

2) Then, put it all together

+ +
Promise.all(theArrayOfFilenames.map(readAndParseAndConvert));
+
+ +

Option B: Parallelize each step

+ +

1) Create the steps

+ +
function readFiles(filenames) {
+   return Promise.all(filenames.map((filename) => read(filename)));
+}
+
+function parseBodies(bodies) {
+  return Promise.all(bodies.map((body) => parse(body)));
+}
+
+function convertAll(parsed) {
+  return Promise.all(parsed.map((item) => convert(item)));
+}
+
+ +

2) Put them together

+ +
readFiles(filenames)
+  .then(parseBodies)
+  .then(convertAll);
+
+ +

Ultimately this may get flagged as ""opinion based"", but any objective thoughts? Remember that real code would have try/catch, closing files, etc...

+",90992,,,,,44168.08472,Organizing Parallel Arrays of Promises / Async tasks,,1,1,,,,CC BY-SA 4.0, +406236,1,,,3/7/2020 1:15,,2,107,"
Given that I want to write a Given/When/Then scenario
+When I write a Given/When/Then scenario
+Then my Given and When are generally the same thing
+
+ +

I've been trying to practice writing user stories in Given/When/Then but I often end up with something like above; the Given and When are identical. I find that high level stories come natural but when I try to break them down into manageable tasks I end up with stories like the above.

+ +

Here's an example on a ""reporting"" system I'm working on where security issues can be reported and reviewed. At a high level, the stories are easy and make sense:

+ +
Given a Security Problem is discovered
+When the Issue is reported to SecurApp
+Then an Item is created for review
+
+ +

At the system level, we'll take in a ""request"" which goes on a queue, +gets picked up, and a report is generated

+ +
Given a request to create a report
+When a report is created
+Then the user is notified that the report is created
+
+ +

But now I've got this weird duplication between Given and When. This happens a lot.

+ +

What's a good way to think about Given/When/Then? Because I have a feeling I'm approaching this incorrectly. Or maybe, does Given often get repeated across similar stories?

+",31508,,,,,43897.89514,How to write Given/When/Then Scenarios without Given and When being the same,,3,0,,,,CC BY-SA 4.0, +406237,1,406240,,3/7/2020 1:34,,4,286,"

I'm currently writing a small language of my own, and have been considering the difference between the C++ style, where the access modifier applies to a block of members, and the C#/Java style, where the access modifier is specified separately for every member. I'm not going to ask which is better; I realize that's very opinion based.

+ +

Why does C# require an access modifier for each member?

+ +

I'm not trying to focus just on C# - it may have just been following Java or another language's convention, in which case, the question could become 'Why does that language require an access modifier for each member?'

+ +

I'm looking for any documentation, supporting quotes, etc in which the language designers discuss the pros and cons behind each syntax, and the reason they chose one over the other. I've had a look around on Google, and couldn't find anything, but I am aware that some members of Stack Overflow have actually been a part of the .Net development team, so I was hoping I might be able to find some answers here.

+",172077,,620,,43897.35069,43898.30139,Design decisions behind access modifiers in C#,,1,1,,,,CC BY-SA 4.0, +406245,1,,,3/7/2020 8:46,,-3,199,"

I know that in some contexts best practice would be DDD, CQRS and Event Sourcing, but in my case this would be too complicated, for two reasons:

+ +
    +
  • My team consists of beginners, and we want them to be productive as early as possible
  • +
  • The application is really simple, but obviously the complexity may grow in time
  • +
+ +

This question is an attempt at rephrasing this question: What is a simple implementation of onion architecture for C# ASP.NET Core WebAPI and SQL db that is not full DDD and CQRS?

+ +

My thoughts so far are:

+ +
    +
  • using onion or clean architecture is better than the old n-tier model, and no harder to learn, and it makes it easier to unit test the domain entity layer
  • +
  • The API should not return data as objects of the domain entities, but it should be mapped to separate view models
  • +
+ +

I am not sure whether to design the database and SQL manually and then use Dapper, or go for something like Entity Framework Core, code-first.
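
Regarding the view-model bullet above, I mean something as simple as this (a minimal sketch, written in Java for brevity since the idea is identical in C#; every name is invented):

public class RegistrationViewModel {
+    public String courseName;
+    public String employeeName;
+    public boolean approved;
+}
+
+// the mapping lives in the application layer, so the entity never leaves it
+RegistrationViewModel toViewModel(Registration registration) {
+    RegistrationViewModel vm = new RegistrationViewModel();
+    vm.courseName = registration.getCourse().getName();
+    vm.employeeName = registration.getEmployee().getFullName();
+    vm.approved = registration.isApproved();
+    return vm;
+}
+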

+ +

I wonder if CQRS would be a good idea, even without DDD.

+ +

I would also like to say a little bit about our simple application. It is for an amusement park where seasonal employees in different departments are required to take some training courses. They should be able to register for a course and cancel their registration, and the trainer responsible for the course should be able to mark registrations with attendance and approval. There is also some reporting, which is all about reading and displaying this data. This is version 1.

+ +

Version 2 is an admin dashboard for registering users as trainers and admins (who can see all the reports). Admins should also be able to map which departments require which training courses.

+ +

Maybe many will see this as opinion-based, but I believe it should be possible to define some broad steps in a learning ladder for application architecture, and even though the exact sequence may be debatable, I expect some kind of consensus about which steps should be considered. So that is my question: above simple layering, n-tier and onion/clean, and below CQRS, DDD and Event Sourcing, what concrete topics/learning steps exist in application architecture in this context?

+",358307,,358307,,43897.39931,43901.59931,What is the simplest version of best practice application architecture for a backend in C# and ASP.NET Core WebAPI?,,1,4,,,,CC BY-SA 4.0, +406252,1,,,3/7/2020 19:42,,0,169,"

I am trying to determine the best architecture for my application. I am planning to use Python, MySQL, and Angular, with Flask as an intermediary between Python and Angular.

+ +

I have all the shots of basketball matches of different seasons, with the corresponding player, team, position, etc., and I want to plot them on a basketball court:

+ +

https://plot.ly/~agmm23/1/

+ +

I want to add filters to this plot so that, for example, I can select the shooter's team, the player name, the opposing team, the players of the other team, etc.

+ +

I was planning to keep all the data in MySQL, export everything to a pandas DataFrame when I need it, add some filters (Plotly widgets) for the items above, and graph the shots missed and scored based on these filters.

+ +

But now I am not sure which would be the best approach: should the entire graph, with the filters and the scatter plot of the positions, be done in Angular, or is it fine to do it in Plotly in Python, add the widgets in Plotly, and fetch the data every time? My main doubt at this point can be summarized as: where should I plot the court and the scatter - in Angular, obtaining the values from Flask, or in Plotly with the filters added? What is the best advice, considering that I am going to keep changing the filters and obtaining new data?

+",359264,,,,,44134.67222,Exchange data between python and angular with flask,,1,1,,,,CC BY-SA 4.0, +406254,1,,,3/7/2020 21:51,,1,47,"

My team is building a tenanted primary service, with shared microservices used by that primary to separate common business tasks. We're building a new microservice that is responsible for running a series of business rules against a primary object in our system. Think of it like the Order in a shopping cart, and maybe the new service is responsible for doing a set of validation checks on the Order.

+ +

We like the microservice idea because we can add new business rules very quickly, without necessarily needing to deploy them to all tenants. We have a number of existing rules to build from, and we expect to need new rules on a very frequent basis, although the amount of data on the Order entity will change infrequently.

+ +

The question -- What is a good way to design the inputs to this service?

+ +
    +
  1. We could pass the new service the entire Order, but that feels like overkill if we only need some of the values on it (it's a lot of data).
  2. +
  3. On the other hand, we don't want to have to deploy all of the tenanted systems to add a new business rule, so only passing the currently needed values is not enough.
  4. +
  5. ?? Your other idea here! I can imagine a great number of complex plans!
  6. +
+",333151,,,,,43897.91042,"Microservice with ""grab bag"" set of inputs, how to manage for the long term?",,0,7,,,,CC BY-SA 4.0, +406255,1,,,3/7/2020 22:26,,3,98,"

Suppose I'm writing C or C++ code which deals with... ok, let's make it citizens in a state. In this state, citizens have numeric id's (not strings - numbers); and for reasons of performance, or compatibility with other software, it is assumed there can be less than 2^32 citizens.

+ +

Now, in my code, I have a bunch of functions which take or return a citizen's numeric index; and other functions which take or return a number of citizens (e.g. number of people who were naturalized as citizens last year).

+ +

My dilemma regards the types. Do I:

+ +
    +
  • define a size-type, and use it for both citizen indices and numbers-of-citizens?
  • +
  • define an index-type, and use it for both citizen indices and numbers-of-citizens?
  • +
  • Define both, despite them actually being just aliases of each other?
  • +
+ +

I'm also not sure what name to use for them. It wouldn't be citizen, as that would be a data type describing a citizen. Should it be something like citizen_index or citizen_index_t? But then, what about the size type? It's not citizen_size, after all... so - num_of_citizens? num_citizens_t?

+ +

I want to do this in a consistent, convincing and non-contrived way, but it seems like I only have bad options. Am I missing something?

+ +

Note: I can't just use std::size_t due to the constraint I mentioned earlier, in case you were thinking of suggesting that.

+",63497,,,,,43898.17153,A size-type vs index-type conundrum,,1,3,,,,CC BY-SA 4.0, +406257,1,406280,,3/8/2020 0:16,,2,181,"

I want to build a REST API, but I have some gaps when it comes to the security part. I would like to get my head around how to authenticate the calls to the API.

+ +

Therefore, this is my first draft of how the process should work:

+ +
    +
  1. User account is created
  2. +
  3. A token is automatically generated and stored in the DB, associated with that user
  4. +
  5. User calls login endpoint
  6. +
  7. Obtains their token in the response (see the sketch below)
  8. +
  9. The token is stored in the client side as long as the session lasts
  10. +
  11. The token is used to make the rest of the calls
  12. +
+ +

I have to admit I am not fully convinced about the idea of returning the token when login is called, but I need your advice on whether this is a good idea or not. If not, please give me some advice on how to approach this situation.
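
To make steps 3 and 4 concrete, this is roughly what I picture for the login endpoint (a Spring-style sketch; TokenResponse, Credentials, users, and tokens are all invented names):

@PostMapping(""/login"")
+public TokenResponse login(@RequestBody Credentials credentials) {
+    User user = users.verify(credentials);   // fails on bad credentials
+    String token = tokens.lookupFor(user);   // the token generated at sign-up (step 2)
+    return new TokenResponse(token);         // the client stores this for later calls
+}
+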

+",359059,,,,,43899.91667,Securing REST API with authenticated user,,3,4,,,,CC BY-SA 4.0, +406259,1,,,3/8/2020 2:07,,3,196,"

Example 1:

+ +

Let me assume that I have a base class A. Class B extends Class A (Class B is a derived class).

+ +

Can I conclude that Class B doesn't obey the single responsibility principle, since it uses inheritance? I.e., certain changes to Class A will require changes to Class B as well, hence the responsibility for these changes is shared between Class A and Class B?

+ +

Example 2:

+ +
class Monitorable_stack extends Stack {
+    private int high_water_mark = 0;
+    private int current_size;
+
+    public Object push( Object article ) {
+        // java.util.Stack.push returns the pushed item, so the override
+        // must also return Object (a void return type would not compile)
+        if( ++current_size > high_water_mark )
+            high_water_mark = current_size;
+        return super.push(article);
+    }
+
+    public Object pop() {
+        --current_size;
+        return super.pop();
+    }
+
+    public int maximum_size_so_far() {
+        return high_water_mark;
+    }
+}
+
+ +

What SOLID principles does this code obey, and can somebody help me to better understand the SOLID principles?

+",359274,,9113,,43898.59236,43899.52708,Relationship between inheritance and single responsibility principle,,4,5,2,,,CC BY-SA 4.0, +406262,1,,,3/8/2020 6:22,,5,64,"

I am curious about these three questions when building a push-based (write-fanout) feed system:

+
    +
  1. How does it handle the unsubscribe logic?
  2. +
+

It seems to me that we have two choices: delete the historical data from one's inbox, or keep it.

+

If we choose to keep it, does that mean that when processing a timeline read request, we have to do an extra filtering step - checking whether the author of each message retrieved from the inbox is still included in the requester's following list?
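
In code, I picture that extra step roughly like this (a Java-style sketch; every name is invented):

Set<Long> following = followService.followingIdsOf(requesterId);
+List<Message> visible = new ArrayList<>();
+for (Message m : inbox.fetch(requesterId)) {
+    if (following.contains(m.getAuthorId())) {  // is the author still followed?
+        visible.add(m);
+    }
+}
+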

+
    +
  2. How does it handle message erasure?
  2. +
+

For example, when an author deletes one of his blog posts, it seems to me that the design decision here is similar to the unsubscribe case.

+
    +
  3. For a newly registered user with an empty following list: when he/she follows a new user, what should we do to fill his/her timeline? Fall back to the pull-based method?
  2. +
+",349362,,379622,,44208.75347,44208.75486,"How does a push-based(write fanout) feed system handle unsubscribe logic, message erasure and new following user's message",,1,1,1,,,CC BY-SA 4.0, +406263,1,,,3/8/2020 8:55,,-1,61,"

The only way I know of is to copy the whole heap: allocate copies of all objects on a new heap and drop the old one, as Couchbase does, for example. Presumably you could also do the same on a subset of the heap to copy less data around each time. Google isn't helping me much, since it only talks about heap compaction in GC contexts. Are there other ways to do this?

+",128997,,128997,,43898.37708,43928.62569,What's the canonical way to do compaction on a refcounted heap?,,2,1,,,,CC BY-SA 4.0, +406273,1,406275,,3/8/2020 16:20,,2,355,"

I was playing one of my favorite games from Supercell; I imagine it's a rather complicated game, and it spans the two main mobile platforms.

+ +

My question is: do the developers write the game logic twice? I've never developed games or applications on iOS or Android, so my knowledge of this is limited.

+ +

Do companies really hire two sets of teams that not only create the application, but then make sure key aspects of the game are maintained and updated in sync?

+ +

I doubt anyone will be able to give me an answer to the specific company I mentioned above, but maybe something similar?

+",327284,,1204,,43898.69444,43901.22569,In mobile games that work across android and iOS is game logic written twice?,,4,0,,,,CC BY-SA 4.0, +406274,1,,,3/8/2020 16:26,,1,88,"

I am trying to create a good pattern for my application. I want to reduce duplicate code as much as possible and make heavy use of generics. I have created a pattern for this, but I want to know whether there is anything wrong with the code, or whether there is a better, established pattern for this.

+ +

BaseDao interface:

+ +
public interface BaseDao<T extends DTO> {
+
+    List<T> fetchAll();
+
+}
+
+ +

BaseDaoImpl (the base implementation): all the DAO classes will extend this class.

+ +
public abstract class BaseDaoImpl<T extends DTO> implements BaseDao<T> {
+
+    private Class<T> clazz;
+
+    public BaseDaoImpl(Class<T> clazz) {
+        this.clazz = clazz;
+    }
+
+    @Override
+    public List<T> fetchAll() {
+        List<T> entities = null;
+        Session session = null;
+        try {
+            session = doOpenSession();
+            Query<T> query = session.createQuery(""from "" + clazz.getName());
+            entities = query.list();
+        } finally {
+            if (session != null) {   // avoid an NPE if doOpenSession() failed
+                session.close();
+            }
+        }
+        return entities;
+    }
+}
+
+ +

UserDao interface:

+ +
public interface UserDao extends BaseDao<User> {
+    User getUserByUsername(String username);
+}
+
+ +

UserDao implementation:

+ +
public class UserDaoImpl extends BaseDaoImpl<User> implements UserDao{
+
+    public UserDaoImpl() {
+    this(User.class);
+    }
+    private UserDaoImpl(Class<User> clazz) {
+    super(clazz);
+    }
+
+    public User getUserByUsername(String name) {
+
+    User user = null;
+    try (Session session = HibernateUtils.getSessionFactory().openSession()) {
+        String hql = ""FROM User u WHERE u.username = :name"";
+        Query<User> query = session.createQuery(hql).setMaxResults(1);
+        query.setParameter(""name"", name);
+        user = query.uniqueResult();
+    }
+    return user;
+    }
+}
+
+ +

PS: I want to create all the basic operations (add, save, update, getAll, etc.) in my BaseDaoImpl and then have all the DAO classes inherit from it.

+ +

Do you think this approach can bring me problems later?

+",359303,,,,,43898.76667,Dao pattern Java and Hibernate,,0,4,,,,CC BY-SA 4.0, +406277,1,406302,,3/8/2020 18:53,,4,88,"

I run a set of VBA-rich Excel files on a daily basis. Most of them involve cross-talk between MS Office applications, but they also employ third-party applications and MySQL. Because those files must run in a specific order and at a specific time of day, I set up an xlsm-based scheduler to run them and control their outputs. Since my 'application' is growing, I'm facing resource-usage problems and the typical garbage-collector errors in VBA (not to mention the lack of active VBA support from MS, the poor IDE, weak debugging, etc.), so I'm looking for other ways to automate.

+ +

The processes I'm running are typical ETL from third-party sources, CRMs, and RDBMSs, plus integration of the data into a form usable by non-data-science users and tools (which must cover both 64-bit and 32-bit Excel). Most of it involves data validation after imports, type testing before further analysis, sometimes picture2data replacement, applying business logic to data, pushing Outlook notifications, SharePoint data I/O, etc.

+ +

My question is whether a PowerShell-based scheduler running those VBA and VB scripts would be more efficient in terms of memory usage, or whether it will only be as efficient as the VBA/VB code itself.

+ +

It is worth noting that, due to company policy, I'm strongly discouraged from using the excellent R or Python libraries for working with data, so I need to stick with MS tools.

+ +

Looking forward to any advice, hint, or even a loose tip on the topic.

+",358426,,358426,,43898.88611,43899.52014,PowerShell performance when running Excel macros,,1,6,,,,CC BY-SA 4.0, +406283,1,406289,,3/9/2020 2:05,,32,9082,"

I want to code a little program that takes in head-tracking data and moves a 3D object accordingly on the screen. To achieve this I found software called opentrack that has a C++ API. The problem is that all the game dev environments I know of / have a way to access use C# as the language.

+ +

I'm very comfortable with C#, used to be comfortable with C++ and C a while back, and could easily get back into them if a solution required it.

+ +

This is a silly little personal project, but one I'm passionate about and would love to solve, so any help in resolving this would be appreciated. Thanks!

+ +

UPDATE:

+ +

Wow, that's an amazing amount and quality of responses, I would like to deeply thank everybody who contributed!

+",359326,,359326,,43900.96042,43900.96042,Is there a way to use a C++ API in C#?,,5,2,7,,,CC BY-SA 4.0, +406285,1,,,3/9/2020 4:44,,2,99,"

I am creating a RestTemplate to consume a REST API secured by OAuth 2.0.

+ +

The provider has implemented a 5-minute expiry for the access token. Using the RestTemplate, I will be reading records from the database and sending them to the API.

+ +

The records may vary from 10k to 1 million, so I want to build a fault-tolerant system: even if the token expires in the middle of sending records, the records should still reach the server.
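
To make it concrete, the sending loop is roughly this (every name here is a placeholder, not my real code):

RestTemplate rest = new RestTemplate();
+for (Record record : loadRecordsFromDatabase()) {
+    // the request must carry a *valid* token even if the original one
+    // expired while the earlier records were being sent
+    sendRecord(rest, record, currentAccessToken());
+}
+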

+ +

What is the best way of achieving this?

+ +

Any help or inputs are highly appreciated.

+",353454,,353454,,43899.85833,44169.87847,Handling OAuth 2.0 access token,,1,2,,,,CC BY-SA 4.0, +406286,1,,,3/9/2020 5:53,,2,133,"

I'm working on an application which needs to open a database file. There are 2 ""versions"" of this database: one of them is more general data storage, and the other contains ""less"" information. That being said, the table structures are different, so I need to issue different queries to get the ""same"" information from each of them.

+ +

My first thought is to create a query factory abstract interface, which will only have pure virtual methods returning the queries:

+ +
class IQueryFactory
+{
+public:
+    virtual QString getNames() const = 0;
+    virtual QString getSurnames() const = 0;
+    // ...
+};
+
+ +

and have this interface implemented for both ""versions"" of database:

+ +
class GeneralQueryFactory : public IQueryFactory
+{
+public:
+    QString getNames() const override
+    {
+        return ""SELECT DISTINCT Name FROM People;"";
+    }
+
+    QString getSurnames() const override
+    {
+        return ""SELECT DISTINCT Surname FROM People;"";
+    }
+
+    // ...
+}
+
+class SpecificQueryFactory : public IQueryFactory
+{
+public:
+    QString getNames() const override
+    {
+        return ""SELECT DISTINCT FirstName FROM Employees;"";
+    }
+
+    QString getSurnames() const override
+    {
+        return ""SELECT DISTINCT LastName FROM Employees;"";
+    }
+
+    // ...
+}
+
+ +

I'm creating an instance of GeneralQueryFactory and SpecificQueryFactory at program startup; then, when a database is to be loaded, I check whether a certain table exists in the database, and based on that I store the corresponding query factory pointer together with the database name in a map:

+ +
GeneralQueryFactory generalFactory;
+SpecificQueryFactory specificFactory;
+
+// ...
+
+if (...)
+    dbInfo.add(dbPath, &generalFactory);
+else
+    dbInfo.add(dbPath, &specificFactory);
+
+ +

Later I use the query factory as follows (in another function):

+ +
auto queryFct = dbInfo.getQueryFactory(dbPath);
+QSqlQuery sqlQuery(queryFct->getNames());
+sqlQuery.exec();
+// ...
+
+ +

I want to know if this is done correctly. Is the usage of this design pattern appropriate for this problem? What can I improve? Thanks.

+",112713,,,,,43899.46528,Should I use Factory Method design pattern for this problem?,,1,3,,,,CC BY-SA 4.0, +406292,1,,,3/9/2020 10:09,,-3,100,"

In Java, the String class is final. We know that we can't inherit from a final class and can't modify it, so how are we able to assign a value to a String if it's a final class?
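
For example, this compiles fine even though String is final, which is exactly what confuses me:

String s = ""hello"";   // assigns a reference to a new String object
+s = s + "" world"";     // does not modify ""hello""; it creates another object
+// final on the class only forbids: class MyString extends String { }
+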

+",359343,,,,,43899.47222,"In java string classes are final, than how did we enter value on string even its a final class?",,2,0,,,,CC BY-SA 4.0, +406294,1,,,3/9/2020 10:23,,1,92,"

N.B. Several months after initially asking this question (and not coming up with any satisfactory answers) I am now learning to use HTML Custom Elements / WebComponents. It seems the same question comes up again:

+
+

I know I can turn everything in my HTML into a Custom Element... so what do / what don't I turn into a Custom Element?

+
+
+

This is a question about best practice in data architecture when it comes to component-based systems. There are a number of "component" technologies in vanilla javascript, not least:

+
    +
  • Web Workers
  • +
  • ES6 Modules
  • +
  • Optional External JS files
  • +
+

Self-evidently, whenever a document can be architected as a single block or built up dynamically from separate components, there is a spectrum of architecture possibilities, ranging from:

+
    +
  • The single document imports no components
  • +
  • The single dynamic document imports some components
  • +
  • The single dynamic document imports many components
  • +
  • Every part of the dynamic document is an imported component
  • +
+

Most system architectures will represent a position somewhere between the two poles.

+

But is there a sensible rule of thumb or principle which can help an information architect decide whether something should be a separate, self-contained, importable component or not?

+

How might I decide sensibly what UI elements ought (and ought not) be written as components to import?

+

What indicators suggest that a UI element ought to be written as a separate, importable component?

+
+

Example:

+
<h1>My Weather Page</h1>
+
+<nav>Navigation Here</nav>
+
+<p>Introductory Paragraph Here</p>
+
+<!-- Dynamic Weather Module Here -->
+
+<p>Paragraph explaining how to navigate / use the weather module</p>
+
+<footer>Footer Links and Notices Here</footer>
+
+

Given the page architecture above, I would be most inclined to nominate the following two as components (handled by Web Workers, ES6 Modules etc.) to be imported into the page template:

+
    +
  • <nav>Navigation Here</nav>
  • +
  • <!-- Dynamic Weather Module Here -->
  • +
+

and the following four as non-imported sections of the page-template:

+
    +
  • <h1>My Weather Page</h1>
  • +
  • <p>Introductory Paragraph Here</p>
  • +
  • <p>Paragraph explaining how to navigate / use the weather module</p>
  • +
  • <footer>Footer Links and Notices Here</footer>
  • +
+

But that feels to me like an arbitrary decision. I can imagine others proposing that any of the sections immediately above (especially the <footer> if it were, to some degree, a custom, dynamic <footer>) might feasibly be written as separate components imported into the page template, too.

+

Eventually, I'm left scratching my head and wondering if there is anything on the page which shouldn't be a component. That said, turning everything into a component feels like going completely overboard.

+

Clearly the degree of componentization is a choice ultimately made by each architect...

+

... but are there any sensible guidelines / rules of thumb etc. which can help an information architect decide what ought to be an imported component and what ought not to be an imported component in an architecture which allows for componentization?

+
+

I can think of the following example questions:

+
    +
  • Does the section appear on every page? If not, probably make it an importable component.
  • +
  • Is the section dynamic and does it appear differently on every page? If so, probably make it an importable component.
  • +
+

I imagine there will be other questions similar to those above, the collected answers to which may assist in deciding whether to turn a piece of code into a separate, importable component or to leave it as a non-imported section within the main template.

+
+

Added:

+

Having thought about this on and off for 5 days and finding myself unable to imagine a UI element for which there could not be at least a halfway-reasonable argument to turn it into an importable component, I am starting to wonder if the question isn't better, reversed:

+

"What guiding indicators suggest that a UI element ought not be written as a separate, self-contained, importable component?"

+",359345,,359345,,44095.39306,44125.42083,Component based architectures in JS / PHP: what indicators suggest that a UI element ought to be written as an importable component?,,1,4,,,,CC BY-SA 4.0, +406295,1,,,3/9/2020 10:42,,2,157,"

We are creating a new page on our website which will require ~5000 LOC overall. Now the problems here are:

+ +
    +
  1. How to get so many lines reviewed?
  2. +
  3. When to merge these changes to master for release?
  4. +
+ +

How we currently solve these problems: we create a temporary release branch into which everyone merges their features related to this page. These individual features/pull requests are reviewed by peers. We also keep rebasing this branch onto master after every release.

+ +

When this temporary release branch is functionally ready to go live, we merge it into the next release branch, which will eventually be merged to master.

+ +

Main question:

+ +
    +
  1. Are we right in maintaining this temporary release branch and merging features into it, or is there some better way? Some websites suggest merging each small feature of the new page to master and shipping it (keeping it hidden), then making these features visible in the last release.
  2. +
+ +

I have read around online but couldn't find my problem listed elsewhere.

+ +

Some research:

+ +

Is it better to merge "often" or only after completion do a big merge of feature branches?

+ +

https://google.github.io/eng-practices/review/developer/small-cls.html

+",331598,,,,,43899.48194,How to review and merge a big feature,,1,1,,,,CC BY-SA 4.0, +406296,1,,,3/9/2020 11:17,,-1,87,"

I have to code a GUI like this:

+ +

+ +

It's a homepage with a left menu and a changing right part: on a button1 click form1 must be displayed, on a button2 click form2, and so on. Each form has its own ""ok"" button that calls a controller function which saves the data entered by the user in a database (but that is not relevant here).

+ +

My question is: I know that according to the MVC pattern a user button click should call a controller function, so what should the controller do on a button1 click? There are 2 scenarios in my mind:

+ +

1) The controller creates form1 and then passes it to the homepage view, which just shows it on the right.

+ +

2) The controller just calls the appropriate method of the homepage view, which creates form1 and shows it.

+ +

EDIT: What I want to know is: is it the controller's job to create each view corresponding to a form and then pass it to the homepage view, or does it just call a method of the homepage view, which will create and display it?

+",358382,,358382,,43899.50417,43899.50417,Display different panel on button click (MVC pattern),,1,1,,,,CC BY-SA 4.0, +406300,1,406307,,3/9/2020 12:05,,1,99,"

Summary

+

Instead of calling the WebApi straight from a Web Forms User Control, I have created a JS class which contains functions returning jQuery AJAX requests. The control would instantiate the class and call these functions, which would allow me to separate the code that calls the API from my control.

+

Is this the best approach? What downfalls could there be later?

+

Implementation

+

MyService.js

+

Assume my WebApi controller is called MyServiceController. The client-side JS class would be:

+
class MyService
+{
+    //  Example method to call
+    start = function(userId) {
+        return $.ajax({
+            url: '/api/myService/start',
+            type: 'POST',
+            data: {
+                id: userId
+            }
+        });
+    }
+
+    // Another example method
+    stop = function() {
+        return $.ajax({
+            url: '/api/myService/stop',
+            type: 'POST'
+        });
+    }
+}
+
+

MyCode.ascx

+

This will then be consumed from the web forms control as so:

+
// Instantiate the class
+let api = new MyService();
+
+// Assume document has loaded and add click-handler
+var someId = 1;
+
+// Call Start
+$('.myService-start').click(function(){
+    api.start(someId)
+        .done(function(){
+            console.log('start completed');
+        })
+        .fail(function(){
+            console.log('start failed');
+        });
+});
+
+// Call Stop
+$('.myService-stop').click(function(){
+    api.stop()
+        .done(function(){
+            console.log('stop completed');
+        })
+        .fail(function(){
+            console.log('stop failed');
+        });
+});
+
+
<span class"myService-start">Start</span>
+<span class"myService-stop">Stop</span>
+
+

Question

+

Am I following a bad pattern? Will this lead to a code-smell later on?

+

Please let me know if there's a better way to implement this

+",116205,,-1,,43998.41736,43899.60486,"Should I separate client-side API calls into a separate .js file and class, and reference that?",,1,0,0,,,CC BY-SA 4.0, +406301,1,,,3/9/2020 12:14,,1,48,"

Problem

+ +

I am working on an application for my personal use. Basically, what I require is a 24-hour running cloud application that gets alerted about new articles posted on given websites and fetches the headings or specific content from those articles. The info is processed and put together, after which it is posted on my web app. As soon as this happens I should get a notification (web or SMS) on my phone. Then I can go and approve these posts personally in the web app. Approving them must send and store them to my Google Docs (or any similar platform). The post should disappear after my interaction.

+ +

My approach

+ +

I have done some research and broken down this into sub problems.

+ +
    +
  1. For getting the content I need to web scrape these websites. But I don't know how my application would know if a new article is posted. I think an RSS feed can help, but I am not sure.

  2. +
  3. For the 24-hour running app, I was thinking I could deploy a Python script on a cloud platform like Heroku. But I would love to know if there is a better option that can integrate everything with my web app.

  4. +
  5. I can easily get a notification on my phone from my deployed web app.

  6. +
  7. For the Google Docs storage part, I can use Google Apps Script. But I need to know whether that can be easily integrated with everything else, as I haven't used it before.

  8. +
+ +

More info

+ +

As this is for my personal use, I want it done in the cheapest way possible. The whole problem mentioned above is basically getting, processing, and storing desired content from a website to my Google sheet. I wanted to know all possible ways and constraints before I start working on it. Any help or suggestions on how I should do it are highly appreciated.

+",359352,,,,,43899.50972,Designing flow for application,,0,9,,,,CC BY-SA 4.0, +406311,1,406314,,3/9/2020 16:01,,-1,3167,"

I know that ""mvn clean install"" cleans everything that has already been built by maven and rebuilds everything as specified by pom.xml. However, if things have already been installed, and I just run ""mvn install"", does it reinstall things that already been installed? In other words, does it install things twice or does it only install the additional files that are needed?

+",355519,,,,,43899.69236,"What is difference between 'mvn install"" and ""mvn clean install"" in maven?",,1,1,,,,CC BY-SA 4.0, +406315,1,406326,,3/9/2020 17:11,,3,254,"

We (in my company) save the JWT token in a cookie. The web application is a Spring Boot + JSP application. The flow is: on a successful login the service sends a JWT token, that token is saved in a cookie, and on every subsequent request to the service the token is retrieved from the cookie. The current code we write looks like the following.

+ +

In Spring Controller

+ +
@GetMapping(""/"")
+@ResponseBody
+public List<Node> test(HttpServletRequest request) {
+    var nodeList = service.testService(request);
+    return nodeList;
+}
+
+ +

In Service Layer

+ +
public List<Node> testService(HttpServletRequest request) {
+  // business logic
+  // some other service call
+  someService.get(request)
+}
+
+ +

In Rest Service Layer

+ +
public List<Node>  get(HttpServletRequest request){
+  // finally we retrieve the token from the sevletRequest
+  token = WebUtils.getCookie(request, ACCESS_TOKEN);
+  // rest call with this token.
+}
+
+ +

My concern is with the servletRequest parameter: I have to carry this request everywhere we could possibly make a REST call. What can I improve in this design? I am also seeking advice on how others handle this.

+ +

==UPDATE==

+ +

Suppose A (a controller) calls B, and B calls C. Now C has to call D, which makes a REST call. The code then has to be refactored to pass the token parameter from A all the way down to D.
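
In code, the refactoring I mean looks like this (a simplified sketch of our layers; restCallWith is a placeholder):

// before: only A and B ever saw the request
+List<Node> a(HttpServletRequest request) { return b(request); }
+List<Node> b(HttpServletRequest request) { return c(request); }
+
+// after: C suddenly needs D, so the parameter ripples through every layer
+List<Node> c(HttpServletRequest request) { return d(request); }
+List<Node> d(HttpServletRequest request) {
+    String token = WebUtils.getCookie(request, ACCESS_TOKEN).getValue();
+    return restCallWith(token);  // placeholder for the actual REST call
+}
+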

+",102780,,102780,,43900.72708,43906.86528,Where to save JWT token?,,2,0,1,,,CC BY-SA 4.0, +406316,1,406322,,3/9/2020 17:52,,2,58,"

I've run into a common annoyance where I'll have an event that sends a parameter, but some consumers of the event don't need the parameter at all. So I wrap it in another function that decouples the signature from the actual method call. This adds one more level of thought that a reader needs to parse before understanding the code.

+ +
void OnEnable() {
+    HeightSnapEvents.OnSnapToHeight += PictureMightHaveMoved; //event has float parameter
+}
+
+void PictureMightHaveMoved(float height) {
+    AlignWithPictureTaking(); //don't need height at all.
+}
+
+void AlignWithPictureTaking() {
+    Debug.Log(""Aligned!"");
+}
+
+ +

I've also tried:

+ +
void PictureMightHaveMoved(float unused) {
+    Debug.Log(""Aligned!"");
+}
+
+ +

I've also considered having the event also call a parameter-less version of itself, but then it adds complexity making sure all the events are fired properly.

+ +

Is there a more elegant solution that keeps readability?

+ +

Details: I'm programming in C# and Unity, but this shouldn't really matter.

+",157621,,,,,43899.85486,Remove unwanted parameter from event,,1,1,,,,CC BY-SA 4.0, +406317,1,,,3/9/2020 17:53,,1,67,"

I'm currently refactoring an older legacy application and am using React to rebuild some of the former functionality. This application has a form with 10+ different input fields. The aim is to build a kind of wizard that leads the user through the fields, instead of bombarding them with 10+ fields at once.

+ +

Now, I've thought a little about how to tackle this. I could have a component for each step (let's say three steps), each containing part of the form's fields. These could then be wrapped in a parent component that deals with the state.

+ +

But this would imply that each ""step"" component is very specific to its fields, which makes it very static.

+ +

Maybe instead of writing out all the steps as individual components, I could have a very generic one, which takes a number of elements but doesn't care about anything specific to this form. The idea would be to externalise the form's structure into a JSON object, which defines the form and provides the information on how to build it.

+ +

Something like:

+ +
{""user_input_form"": {
+    ""steps"": [{
+        ""index"": 1,
+        ""fields"": {
+            ""first_name"": {
+                ""required"": true,
+                ""label"": ""The first name label"",                
+                ""value"": """",
+                ""type"": ""text"",
+            },
+            ""last_name"": {
+                ""required"": true,
+                ""label"": ""The last name label"",
+                ""value"": """",
+                ""type"": ""text"",
+            }
+        }
+    },
+    {
+        ""index"": 2,
+        ""fields"": {
+            ""date_of_birth"": {
+                ""required"": true,
+                ""label"": ""The birthday label"",
+                ""value"": """",
+                ""type"": ""date"",
+            },
+            ""home_country"": {
+                ""required"": false,
+                ""label"": ""The last name label"",
+                ""value"": """",
+                ""type"": ""select"",
+                ""options"": [""England"", ""Wales"", ""Scotland"", ""Northern Ireland""]
+            }
+        }
+    }]
+}
+
+ +

Whilst the form component is tightly tied to the JSON structure, it would not be tied to the underlying data.

+ +

Is that a common thing to do or rather something to avoid?

+ +

Cheers

+",359381,,,,,43899.7625,Building a form based on json object?,,1,1,,,,CC BY-SA 4.0, +406323,1,,,3/9/2020 20:36,,0,52,"

If I obtain a release build of a github project from a Maven repository, how can I securely verify that it's the authentic build?

+ +

DETAILS

+ +

That was my X question. My Y question follows.

+ +

Maven's Guide to uploading artifacts to the Central Repository says:

+ +
+

To improve the quality of the Central Repository, we require you to provide PGP signatures for all your artifacts (all files except checksums), and distribute your public key to a key server like http://pgp.mit.edu. Read Working with PGP Signatures for more information.

+
+ +

Should I typically be looking for a PGP signature for the GitHub project, or for the GitHub user who made the release, or something else?

+",1755,,,,,44051.62847,Validating provenance of release binaries of github projects obtained from Maven repository,,1,6,,,,CC BY-SA 4.0, +406328,1,,,3/9/2020 21:54,,-1,140,"

I have recently decided upon a micro service design to merge two Spring Boot applications.

+ +

I basically have 2 applications that deliver two entirely different front ends, but share user accounts.

+ +

My current structure is (in no particular order):

+ +
    +
  • Application 1
  • +
  • Application 2
  • +
  • Gateway service
  • +
  • Discovery service
  • +
+ +

The gateway to the applications is something like:

+ +

localhost:8080/application1

+ +

OR

+ +

localhost:8080/application2.

+ +

If you log into one, you are presumably authenticated to both.

+ +

In a nutshell, the application-specific pages are delivered through Zuul and the gateway handles all of it; however, the actual JSPs live inside the specific Spring Boot application.

+ +

Does it make sense to have the web pages delivered to the client inside the gateway service, and the specific application services (1 and 2) just contain the REST calls?

+",359387,,,,,43900.60278,is this microservice design fine?,,2,5,,,,CC BY-SA 4.0, +406335,1,406337,,3/10/2020 7:25,,-2,94,"

I am trying to imagine how Google Chrome automatically updates but have some questions:

+ +
    +
  1. Is this against Apple's terms of service? I feel like I've seen somewhere that Chrome for MacOS isn't in the Apple Store because of this. Not sure.
  2. +
  3. How is it swapping out its runtime?
  4. +
  5. Is it possible to autoupdate on iOS/iPad?
  6. +
+ +

Basically I just would like to know how it works, and from there I can figure out the platform-specific implementation details (I'm thinking of doing this in Swift for iOS and MacOS).

+ +

From my imagination, it seems that for this to work, there would be a daemon polling the network for updates. Once it finds an update is ready, it downloads something (what does it download?). Does it just download the whole new app binary? Or some sort of patch?

+ +

Then to get the autoupdate, you must close and reopen the app. What this means is that the app icon you click on actually just pings the daemon, which looks for the ""last download"" of the app. And then the daemon runs the latest download. That's pretty much it.

+ +

So there is a ""shell"" of an app, which is the app icon. Then there is a ""daemon"" app, which stays around. Then there is the ""real"" app, which is what gets loaded as the browser, for example. So 3 apps at least.

+ +

Is this how it works? If not, what actually does happen?

+ +

Also, can you do this same sort of thing on iOS/iPad/Windows?

+",73722,,,,,43900.3375,How do you implement an app autoupdate feature from scratch?,,1,1,,,,CC BY-SA 4.0, +406340,1,,,3/10/2020 9:56,,5,433,"

We're Scrum teams building microservices. Our GitHub repositories are single-branch, each of us integrates his/her code into master several times a day, with no feature branches. Our Jenkins pipelines compile the code, run automatic tests, supply the code to other services for static code scans, and deploy our software across a CloudFoundry landscape for further testing. If all the steps succeed, our pipelines automatically deploy the software into production spaces on Amazon Web services. We live Uncle Bob's Clean Code, write reliable unit tests with > 90% mutation coverage, and honor Jez Humble's ideas from Continuous Delivery. We think we are doing everything right.

+ +

It doesn't work.

+ +

99% of our builds fail on their way through the pipelines. In many weeks our velocity is nearly 0.

+ +

Now the first impulse is to say we'd need to code cleaner, test more, stop pushing into red pipelines, roll back faster, perform root cause and post mortem analyses, and so on. But we've been there and done all that, chomped through our code, practices, and culture. We've streamlined and upskilled our own teams as best as we could.

+ +

The problem is that the builds fail for reasons that our teams have no control over: the Jenkins servers, the Maven Nexus, the npm registry, the code scanning services, the Cloud Foundry landscape, all of these are maintained by other teams in our company. Individually, all of the involved 15 tools and teams are nice, and suffer only sporadic outages that might block a pipeline from minutes to a handful of days max. But in combination, the failure probabilities sum up to a nearly impenetrable wall of random failure.

+ +

What are strategies to improve this situation?

+",302186,,302186,,43900.53125,43901.31111,How do you do continuous delivery in an unstable environment?,,3,8,,,,CC BY-SA 4.0, +406341,1,406347,,3/10/2020 10:01,,1,174,"

I am new to Golang and I've seen it is very common to check for errors all the time. I am trying to find a way to avoid having my code polluted with ""if error { log... }"" or ""if error { exit }"". What do you think of a function like:

+ +
func exitIfError(err error) {
+  if err != nil {
+    os.Exit(1) // requires importing ""os""
+  }
+}
+
+ +

The body of that function would otherwise be spread throughout the main function several times, but the helper would probably make it less explicit where the program exits. Any opinions?

+",359427,,359427,,43900.44931,43900.58681,exit and error handling in golang,,1,1,1,,,CC BY-SA 4.0, +406343,1,406522,,3/10/2020 10:27,,-2,67,"

I have an external process in my program. To interact with it I have a configuration file I can edit. The external process then reads this file periodically and updates its state accordingly.

+ +

Now what is a good name for this file, as represented in code?

+ +

ExternalProcessApplicationConfigurationFile feels 'static', since the word 'configuration' sounds static, as in: the configuration is not mutable. Although I feel something like 'ExternalProcessApplicationSettingsFile' sounds even more static.

+ +

What would you normally call something like this, where an external module in your code has a configuration that is mutable?

+",359430,,,,,43903.79583,What do i call a 'Mutable configuration',,1,3,,,,CC BY-SA 4.0, +406345,1,,,3/10/2020 12:53,,0,191,"

I am trying to write domain code and I am pretty happy with my architecture so far. The problem started when I needed to create a different repository implementation that required an asynchronous approach. Synchronous and asynchronous interfaces are not the same from the code's point of view, so I had to change the interfaces of all repositories to be asynchronous.

+ +

Now I don't want my business logic to be asynchronous. I feel like that adds more complexity, makes some of the code harder to test and, most of all, it isn't really part of the business logic. Most of the time there isn't any good justification for making business logic asynchronous, unless you are trying to have everything asynchronous or some of the logic requires very long calculations, which isn't my case.

+ +

I was thinking about how to structure my model in general. I guess I can first load all necessary data from the repositories and then call everything synchronously. The problem with that is that sometimes I need to save things. Not in the models directly, but in some event listeners. I can either wrap that in another interface (which I don't like) or keep the save method synchronous with no need for a return value. That's what I came up with so far, and it works pretty well without many changes to the code (mostly changing interfaces from sync to async). I was also thinking about wrapping async over sync or sync over async, but I really don't like that road.

+ +
    +
  • What is the correct approach for this? I'd like to read about it a bit more, but haven't found much.
  • +
  • What is the relation between asynchronous programming and domain-driven design? Should there be asynchronous code in models?
  • +
+",353552,,,,,43900.53681,Domain driven design and asynchronous code (repository),,0,1,,,,CC BY-SA 4.0, +406349,1,,,3/10/2020 14:24,,-1,212,"

Consider a SimpleCalculator class which contains four methods (Add, Subtract, Multiply, Divide).

+ +

I need to create another class, ScientificCalculator, which needs the above methods plus other methods like Sin, Cos, Tan, Log, etc.

+ +

Should I reuse the functionality by inheriting from SimpleCalculator, or should I think of some other alternative (such as a pattern like Chain of Responsibility)? Below is rough pseudocode of what I am thinking.

+ +
if (operation in SimpleOperations) then use SimpleCalculator else ScientificCalculator
+
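
Spelled out, that pseudocode would look roughly like this (a Java-style sketch, not a recommendation):

double calculate(String operation, double... args) {
+    if (SIMPLE_OPERATIONS.contains(operation)) {
+        return simpleCalculator.calculate(operation, args);
+    }
+    return scientificCalculator.calculate(operation, args);
+}
+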
+ +

Should I further break the ScientificCalculator into TrigonometricCalculator, LogarithmicCalculator, etc.?

+",296142,,296142,,43900.61458,43900.81597,Is inheritance without polymorphism/overriding a right practice?,,4,11,,,,CC BY-SA 4.0, +406352,1,,,3/10/2020 14:45,,0,219,"

I have a MySQL database with a column Body MEDIUMTEXT. Until now I only stored content in it; there was no update option for the users of the application. Now I want to add an option to update the content (to keep it simple, think of editing a post on Stack Exchange).

+ +

I know about not optimizing something until it's needed but at the same time I don't want to design a system now and later realise it was a stupid thing to do in the first place.

+ +

So, I got some ideas:

+ +

Retrieving the whole Body from the database and comparing both character by character.

+ +

The problem with this idea: the database overhead (even though it exists for inserts too, I guess the connecting, sending, parsing, and closing cost is the same for every query) of sending a query (2 units), parsing (2 units), retrieving the huge data over the network (even though it's in the same datacenter, there will still be some latency... right?), and the load on the application server to actually hold the data in variables and compare both strings (both memory and CPU load).

+ +

In MySQL, MEDIUMTEXT is 16 MB max. So, in the worst case, I need to transfer that over the network and hold that data (old and new) in a variable on the application server. So, even if I'm handling 32 requests at a time, it will occupy 1 GB of RAM (32 MB * 32 requests = 1024 MB) plus additional CPU load while comparing that content.

+ +

It might be very rare that 16 MB is sent every time, and the really inefficient part here is comparing that 16 MB of content character by character. But is it too much to always assume the worst case while designing a system?

+ +
+ +

Storing a hashed value of Body in a column and comparing the hashes.

+ +

I store an additional column in the table with the hashedBody value of the Body. Whenever an update comes to the server, I just retrieve that hashedBody value from the database; this saves network transfer because only the hash is transferred. Then I hash the new content that came from the user, compare the smaller hashes, and update if the hashes are different.

+ +

But if the hashing algorithm is fast, chances are it will encounter collisions, and new content which is different will not be updated; if the hashing algorithm is secure and slow, there are fewer chances of collision, but I might use more CPU than in the first naive method above.
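
For reference, the comparison I have in mind is just this (a Java sketch using the standard library; note that getInstance throws a checked NoSuchAlgorithmException):

MessageDigest md = MessageDigest.getInstance(""SHA-256"");
+byte[] newHash = md.digest(newBody.getBytes(StandardCharsets.UTF_8));
+if (!MessageDigest.isEqual(newHash, storedHash)) {
+    // content changed: copy the old row to History, then update Post
+}
+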

+ +
+ +

And for the questions I have:

+ +
    +
  1. Is it too much to always estimate the worst case while designing a system?

  2. +
  3. Is the first method good?

  4. +
  5. Is the second method good? Are there any hash algorithms other than MD5 that are fast with fewer collisions?

  6. +
  7. Is there any other new approach or idea to tackle this?

  8. +
+ +

P.S. Yeah, I know that I'm clearly overthinking this, but I'm afraid that not considering these small mistakes/design decisions can cause problems which I can't foresee. I just want to make a decision based on all the information and know all the problems that I can or will encounter in the future, rather than make a decision based on ignorance and come across problems out of the blue and panic.

+ +
+ +

Edit 1: I forgot to mention that I'm maintaining a History table that gets updated with the old values (like maintaining a versioning system). So I can't just update in place: I need the old values to insert into the History table and the new values for the Post table.

+",295520,,295520,,43900.66667,43902.68819,Comparing whether two very large text contents are different or not efficiently,,3,6,,,,CC BY-SA 4.0, +406357,1,,,3/10/2020 16:04,,0,71,"

I have mainly three groups of CSV files (each group is divided into several small files). The first group of CSV files is 600+ GB in total (maybe 200+ GB if stored as integers, since CSV stores numbers as characters); each file has the same number of lines, and the whole dataset should be gathered line by line. The application should read some specific lines from these files, get data_1, and fetch data_2 and data_3 from the second and third groups of files using data_1. The second and third groups of CSV files are about 60 GB each. Concatenating the lines of all files in order yields the whole dataset.

+ +

Here comes my question: it takes a very long time to read specific lines from the three groups of files, where I use fgets() in C to read the CSV files. I have a lot of RAM (about 190 GB), but not enough to load all the files of the first group. I tried PostgreSQL as well; it works much better than reading from the CSV files. But I am wondering: is there any other way to make this perform better? Would HDFS be a good idea?

+",359454,,,,,43900.85903,A program design question: Good idea using HDFS in c for reading large data?,,1,8,,,,CC BY-SA 4.0, +406358,1,406368,,3/10/2020 16:23,,2,111,"

I am working with C++ in a Linux/Unix environment. I am trying to learn the physical design of large-scale projects. In one of my projects, I am using an SDK from a camera manufacturer. They released a new version of this SDK, and one of the applications I had built on the previous version stopped working because the SDK had undergone some restructuring and changes. I fixed this by making changes to my code. This was not production-level code and was only for research.

+ +

Recently I came across another project that had a folder for a specific gcc build inside its root directory. When I checked the CMakeLists.txt, it looked like it was set up to use whatever tools were available inside this folder.

+ +
    +
  1. In the first case, should I have configured my cmake and other associated tools to ensure that the right version of SDK was shipped (to other researchers)? It looks like I could have used either of the two approaches here:

    + +
      +
    1. Use a script that ensures that a specific version of SDK is downloaded and installed prior to building my code. Ensure that the user runs this script in their system.

    2. +
    3. Grab all the library files I need from the SDK and provide it inside my project folder, asking cmake to use specifically these files.

    4. +
    + +

    Which of the two is the right approach? If it is the second approach, do the SDK library files go in a lib directory inside the project root folder?

  2. +
  3. In the second case, is it a bit extreme to include a specific gcc-build? Or is it a common practice to ensure that things don't break?

  4. +
  5. In the case of shared library files, doesn't keeping and shipping separate copies partially defeat the purpose of having shared libraries? Isn't one of the ideas to use one set of files throughout the system and avoid shipping bulky code?

  6. +
+",267086,,110531,,43900.87153,43901.275,Is it common to include a specific build of a library/ tool for production level project?,,1,0,,,,CC BY-SA 4.0, +406361,1,,,3/10/2020 17:15,,0,24,"

I'm developing a data consumer application in C# which connects to multiple remote clients over TCP. Since it's C# application, each client is monitored by a Task whose purpose is basically to run a polling loop waiting for TCP messages, and decode them to get relevant data we want to store. Let's say this is done by a class ClientMonitor

+ +

The sole purpose of the application is to persist incoming data to a single (simple) database, we're talking a couple of main tables. You could imagine each remote client is reporting vehicles passing a traffic light.

+ +

It is almost as if each remote client runs in a separate mini-application, each executing ClientMonitor.Monitor() in parallel. But I'm nervous about how to handle persistence. I have two main thoughts on how to do this:

+ +
    +
  1. Each ClientMonitor, as part of its polling loop, receives TCP data, decodes it, then performs SQL operations to store the data; i.e., it really is a parallel application, multiple workers talking to their traffic light and storing the data.
  2. +
  3. Each ClientMonitor continually receives data and then pushes it onto some central processing queue. The processing queue continually pulls data off the queue and persists it to the DB (sketched below).
  4. +
+ +

My concerns about option 1 are about having multiple parallel workers all trying to write to the database concurrently. I have no idea how well modern DBs (we are using SQL Server) cope with such scenarios - we're talking maybe a couple dozen traffic lights.
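
For option 2, concretely, I mean something like this (a Java sketch, ignoring InterruptedException handling; in C# the analogue would be a BlockingCollection, and every name here is a placeholder):

BlockingQueue<Reading> queue = new LinkedBlockingQueue<>();
+
+// each ClientMonitor, inside its polling loop:
+queue.put(decode(tcpMessage));
+
+// a single writer loop on its own thread:
+while (running) {
+    Reading reading = queue.take();  // blocks until data arrives
+    repository.insert(reading);      // only this thread touches the DB
+}
+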

+ +

Any thoughts on the better approach (mine or another one)? I will say this is quite a small application. It doesn't require Architecture Astronautics or the ability to split over multiple nodes using a message queue like Rabbit, just an efficient, sensible design. We're talking dozens of remote clients, and the traffic light analogy extends well to the amount of traffic (!) being recorded - we maybe get a chunk of data to record every second, and we can reasonably batch this if needed to reduce INSERT calls... but this may be premature optimization.

+",104623,,,,,43900.71875,How to design data persistence in an application which stores data-streams from multiple clients,,0,1,,,,CC BY-SA 4.0, +406364,1,,,3/10/2020 17:42,,0,104,"

For teaching purposes, I would like to create a simple implementation of the State pattern using PHP 7.4. So I've tried to create a simple ""document state machine"": a document starts as a Draft, is sent for review, and after three ""votes"" can be published:

+ +
<?php
+
+namespace StatePatternPHP;
+
+use Exception; // without this import, ""Exception"" inside the namespace would resolve to \StatePatternPHP\Exception
+
+interface DocumentManagement {
+    public function review();
+    public function approve();
+    public function reject();
+    public function publish();
+}
+
+abstract class State implements DocumentManagement {
+    protected Document $document;
+    public function __construct(Document $document) {
+        $this->document = $document;
+    }
+    public function review() {
+        throw new Exception(""Document cannot be reviewed in this current state ("". get_class($this)."")"");
+    }
+    public function approve() {
+        throw new Exception(""Document cannot be approved in this current state ("". get_class($this)."")"");
+    }
+
+    public function publish() {
+        throw new Exception(""Document cannot be published in this current state ("". get_class($this)."")"");
+    }
+
+    public function reject() {
+        throw new Exception(""Document cannot be rejected in this current state ("". get_class($this)."")"");
+    }
+}
+
+class Document implements DocumentManagement {
+
+    private string $content;
+    private State  $currentState;
+    private int $approvals = 0;
+
+    public function __construct(string $content) {
+        $this->content = $content;
+        $this->currentState = new Draft($this);
+    }
+    public function getContent(): string {
+        return $this->content;
+    }
+    public function setContent(string $content){
+        $this->content = $content;
+        $this->currentState = new Draft($this);
+        $this->approvals = 0;
+    }
+    public function setState(State $state){
+        $this->currentState = $state;
+    }
+    public function addApproval(){
+        $this->approvals++;
+    }
+    public function disapprove(){
+        $this->approvals--;
+    }
+    public function getApprovals(){
+        return $this->approvals;
+    }
+    public function review() {
+        $this->currentState->review();
+    }
+    public function approve() {
+        $this->currentState->approve();
+    }
+    public function publish() {
+        $this->currentState->publish();
+    }
+    public function reject() {
+        $this->currentState->reject();
+    }
+
+}
+
+class Draft extends State {
+
+    public function review() {
+        $this->document->setState(new InReview($this->document));
+    }
+}
+
+class InReview extends State {
+
+    public function approve() {
+        $this->document->addApproval();
+    }
+    public function publish() {
+        if($this->document->getApprovals() > 2){ //needs 3 votes at least
+            $this->document->setState(new Published($this->document));
+        }else{
+            parent::publish();
+        }
+    }
+    public function reject() {
+        $this->document->disapprove();
+    }
+
+}
+
+class Published extends State {
+    public function __construct(Document $document) {
+        parent::__construct($document);
+        print('document published !');
+    }
+}
+
+$document = new Document(""hello world !"");
+$document->review();
+$document->approve();
+$document->approve();
+$document->approve();
+$document->publish();
+
+ +
    +
  • So, could this be considered a valid GoF State pattern implementation?
  • +
  • Is this valid S.O.L.I.D. too? My main fear is about how open it is to adding a new state...
  • +
  • Is there any kind of improvement to be made to it?
  • +
+ +
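For instance, my hope is that adding a new state later only requires a new subclass plus a transition into it. A minimal sketch of what I mean (the Archived state and the archive() transition are invented and not part of the interface above):

// Hypothetical new state: every action stays forbidden, because
// Archived inherits the throwing defaults from State.
class Archived extends State {
}

// The only other change would be a transition somewhere, e.g. on Published:
//     public function archive() {
//         $this->document->setState(new Archived($this->document));
//     }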

The sand-boxed version is here.

+",356206,,5099,,43901.39167,44171.54306,Could this be considered a valid State GoF Pattern implementation?,,1,0,,,,CC BY-SA 4.0, +406366,1,,,3/10/2020 18:25,,-2,135,"

My language is C#.

+ +

I have a set of seven classes that all ultimately derive from a single class. The image of the class diagram is posted below. I will frequently need to iterate through collections where the members are all the seven classes intermixed. An example collection might look like this:

+ +
Collection: A, D, C, A,..., B
+
+ +

No two collections would have the same mix of elements or the same order of elements.

+ +

I think I understand the basic idea of creating classes using the factory pattern but I have never used one before.

+ +

My issue is that I need to do various things with each of these elements. Some elements share common methods (perhaps inherited from the top base class) and some have unique methods/properties that none of the others have. I find myself having to do a lot of detecting what element I have and then casting to that element. This smells like I am doing the wrong thing, as I can see I will have to write the same detection code and do the casting over and over again.

+ +

Is there a better way that I need to be handling these classes?

+ +

+ +

Adding info on the classes...

+ +

The classes work with three different types of CAD files, and each class wraps one CAD file. The three CAD file types make up seven different types of ""things"". The underlying API of the CAD files is COM-based and was first released in 1995. It has seen many yearly changes over the years, and the API is pretty convoluted now because of this.

+ +

The seven classes detail my ""things"". All of the ""things"" share some basic information like file meta data (name, path, file type, etc) but they all also have different uses and therefore need different classes. +One of the main classes is an assembly that holds components. The components can be a part or another assembly. It is recursive with the possibility of multiple layers of assemblies referencing assemblies referencing assemblies.

+ +

The class structure is set up to consolidate the methods and properties that are common at the top, and as you progress each level down they diverge. This strategy was meant to prevent duplication of code.

+ +

I have seen that Interfaces should only have things that are related together. If you were to look at any one class it would be made up of a lot of different kinds of completely different methods and properties that are totally unrelated to each other. Kind of like how motors, tires, body parts, etc of a car are unrelated. The only way they would be related is that they would be in the same class.

+ +

If I wrote a post on what each class is it would make up a very complicated book. That is why I did not post more detailed information - we would end up getting mired in the details.

+ +

Something that was helpful is knowing that I can do the following using Interfaces:

+ +
public void DoSomething(ISomeInterface value)
+{
+    // Works for any concrete type that implements ISomeInterface.
+    value.action();
+} 
+
+ +

This was a new concept to me - that was very, very helpful. My simple understanding of Interfaces before asking this question was that they create a ""contract"" that helps keep an API consistent over time. I did not understand that they could be used for polymorphism.

+ +
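To make this concrete, here is the kind of dispatch I hope interfaces can give me, instead of detecting and casting (all type names invented):

using System.Collections.Generic;

// All names here are invented for illustration.
interface IReportable { void Report(); }

class PartThing : IReportable
{
    public void Report() { /* part-specific logic */ }
}

class AssemblyThing : IReportable
{
    public void Report() { /* assembly-specific logic */ }
}

static class Reporting
{
    // No type detection, no casting: each element dispatches to its own code.
    public static void ReportAll(IEnumerable<IReportable> things)
    {
        foreach (var thing in things)
            thing.Report();
    }
}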

I think my course to better code here relies on trying to flatten the class hierarchy without duplicating code, and on using interfaces to their full extent.

+ +

Thank you all for challenging my thinking - I am grateful and of course up for more of that!

+",341020,,341020,,43901.68403,43901.68403,What is the best way to handle classes descended from the same base class in a collection?,,2,13,,,,CC BY-SA 4.0, +406373,1,406390,,3/10/2020 22:20,,1,133,"

I have been coding some octave .oct files lately (C++), and for my purposes speed is of the essence.

+ +

It seems to me that creating C++ objects (in general) can take some time. I was wondering whether there is some way to circumvent this: maybe somehow create a statically allocated object the first time the function is called and then return it every time? Although I don't know how this could be done memory-safely.

+ +

It is safe to assume I want to return a sparse matrix object, and I know an upper bound on the number of nonzero elements, so that all the information required for creating the object once-and-for-all exists at first function call.

+ +
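To illustrate what I mean by creating the object once on first call, a rough sketch (the SparseMatrix constructor signature is from memory, and the sizes are assumed fixed across calls):

#include <octave/oct.h>

// Build the expensive object once, on the first call, and reuse it after
// that. A function-local static is initialized exactly once (thread-safe
// since C++11). Note: later calls ignore the arguments, so this is only
// valid if the dimensions and the nonzero bound never change.
SparseMatrix& cached_matrix(octave_idx_type n, octave_idx_type max_nnz)
{
    static SparseMatrix m(n, n, max_nnz);
    return m;
}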

Just creating the object seems to take some 0.05 seconds, but the overhead of calling an .oct file is only of magnitude 5e-5 seconds. That is a factor of a thousand, which can be very impactful if the rest of the code is heavily optimized computational C code that often runs in less than 1e-5 seconds.

+",220866,,,,,43915.48958,How to create objects and allocate data only once in C++ to improve speed with octave .oct files?,,2,11,,,,CC BY-SA 4.0, +406375,1,,,3/10/2020 23:48,,1,355,"

I am confused about the Open/Closed principle. The principle says "open for extension, closed for modification". My thinking is that if there is a class and you need to add new functionality to it, you can do so by creating a derived class that inherits from the base class. That way you add the required functionality without any modification to the base class.

+
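For example, here is how I picture it (all names invented):

// The base class stays untouched: "closed for modification".
class PriceCalculator {
    public double priceFor(double basePrice) {
        return basePrice;
    }
}

// New behaviour is added in a subclass: "open for extension".
class DiscountedPriceCalculator extends PriceCalculator {
    @Override
    public double priceFor(double basePrice) {
        return super.priceFor(basePrice) * 0.9; // apply a 10% discount
    }
}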

In a similar fashion, the OCP principle can be obeyed by applying the above methodology to any class. Is that true?

+",359274,,173647,,44076.69236,44076.86875,Understanding Open Closed Principle,,7,1,,,,CC BY-SA 4.0, +406378,1,406380,,3/11/2020 1:04,,-1,168,"

I'm trying to apply Clean Architecture to a mobile Android App, but I still have some doubts about how to manage API calls.

+ +

Currently, the classes are structured like this:

+ +

View -> ViewModel -> UseCase -> Repository -> DataSource

+ +

It's all well and good when the API calls are about fetching some data from/sending some data to an API. The Repository handles the data I put into it or retrieve from it, and behaves like a collection.

+ +

Now, what should I do when my API call has nothing to do with some data I could put in a collection?

+ +

Example: +The user hasn't received the ""confirm your email"" link on his email, so he clicks a button named ""Resend Email Confirmation Link"". Then, I'll have a ResendEmailConfirmationUseCase.

+ +
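To make it concrete, the use case would be something like this (a Kotlin sketch with invented names); the open question is what kind of collaborator it should depend on:

// Sketch only; every name here is invented.
interface EmailConfirmationGateway {           // is this really a repository?
    suspend fun resendConfirmation(email: String)
}

class ResendEmailConfirmationUseCase(
    private val gateway: EmailConfirmationGateway
) {
    suspend operator fun invoke(email: String) {
        // No entity is fetched or stored; we just trigger a remote action.
        gateway.resendConfirmation(email)
    }
}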

My question is: should I have a repository to make that API call?

+ +

That doesn't make sense to me, as I'll handle no data, but just send a signal to an API saying ""Hey, send that confirmation email again"". There is no collection of data that translates into a repository here. How would I even name that repository?

+ +

How would you approach this problem?

+",353470,,,,,43901.13403,Clean Architecture: are repositories always needed?,,1,0,,,,CC BY-SA 4.0, +406382,1,,,3/11/2020 4:13,,0,45,"

I am using this page to learn about an Adjacency List graph. The diagram shown makes sense to me:

+ +

+ +

However, the code example's output has me confused. The input is this:

+ +
struct Graph* graph = createGraph(6);
+addEdge(graph, 0, 1);
+addEdge(graph, 0, 2);
+addEdge(graph, 1, 2);
+addEdge(graph, 1, 4);
+addEdge(graph, 1, 3);
+addEdge(graph, 2, 4);
+addEdge(graph, 3, 4);
+addEdge(graph, 4, 6);
+addEdge(graph, 5, 1);
+addEdge(graph, 5, 6);
+
+printGraph(graph);
+
+ +

And the output is:

+ +
 Adjacency list of vertex 0
+ 2 -> 1 -> 
+
+ Adjacency list of vertex 1
+ 5 -> 3 -> 4 -> 2 -> 0 -> 
+
+ Adjacency list of vertex 2
+ 4 -> 1 -> 0 -> 
+
+ Adjacency list of vertex 3
+ 4 -> 1 -> 
+
+ Adjacency list of vertex 4
+ 6 -> 3 -> 2 -> 1 -> 
+
+ Adjacency list of vertex 5
+ 6 -> 1 -> 
+
+ +

I am confused because the diagram seems to show that all nodes that are directly connected to one another end up in that node's list. For example, 0 is directly connected to 3, 2, and 1, and thus its list reflects this. Likewise, node 3 is only connected to 0, and its list reflects this as well.

+ +

However, as shown in the program's output, node 1 is connected to nodes 2, 4, and 3, but node 1's list looks like: 5 -> 3 -> 4 -> 2 -> 0 ->. Why are 5 and 0 in this list? Likewise, node 4 is connected to 6 only in the code, but its output is: 6 -> 3 -> 2 -> 1 ->.

+ +

At first, I thought maybe it's showing nodes which are connected via some other node, but if that were the case, then the list for node 3 in the diagram would also contain nodes 1 and 2, because they are connected via node 0 to node 3. So I am completely confused at this point and could use some guidance.

+",237893,,,,,43901.17569,What am I failing to understand about this adjacency list graph?,,0,1,,,,CC BY-SA 4.0, +406384,1,406387,,3/11/2020 5:34,,7,280,"

I'm writing a Python function to replace all the non-alphanumeric characters in the keys of a dictionary with underscores.

+ +

To make sure it's working as expected (I don't have a ton of experience in the language), I created a sample dictionary with a few different cases.

+ +
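A minimal sketch of the kind of check I mean (the function and sample values here are simplified stand-ins):

import re

def sanitize_keys(d):
    # Replace every non-alphanumeric character in each key with an underscore.
    return {re.sub(r'[^0-9a-zA-Z]', '_', k): v for k, v in d.items()}

# The ad-hoc check I currently run by hand:
sample = {'first name': 1, 'e-mail': 2, 'ok': 3}
assert sanitize_keys(sample) == {'first_name': 1, 'e_mail': 2, 'ok': 3}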

Is this the kind of thing that would go into a unit test? Once I have it working is there any point to preserving this? And if so, would test driven development have been creating the sample dictionary and expected outcome first before writing the function?

+",122681,,,,,43918.40694,Should examples for functions be unit tests?,,2,2,,,,CC BY-SA 4.0, +406388,1,406396,,3/11/2020 7:48,,-1,88,"

I am writing a program for Windows which uses two databases. One is very big (dozens of tables and thousands of records) and comes from another program, but I only download data from it. The second is ""mine"" and used only by my program (a dozen tables and hundreds of records). I have a repository to keep track of changes. I develop this application by myself, and when some new functionality or a fix is ready, I publish it to a network hard drive as a ClickOnce application, which means that other people in the company get automatic updates whenever I publish a new version. So far it has been working fine, but now I need to add a new module which is going to increase the scope of the project by, let's say, 50%. Now I want to implement better methods to separate development from production. How should I do this in the most efficient way?

+ +
    +
  1. I thought of making a second branch in my repository called ""production"", switching to it only when I want to publish something, and merging it with the main branch. But when a fix is needed while I am working on something in the main branch, I don't know how to handle it other than writing the fix on the production branch and copying it to the main branch manually, so that the partially written code of something else does not get transferred to the production branch.

  2. My second idea is to create a second folder and copy the project into it, but then I would have to manually copy code from one to the other.

  3. My third problem is with the database. Should I create a copy of ""my"" smaller database, for example once a day, and use the copied version for development, leaving the original version only for people using the released application?
+ +

Can someone guide me on how to advance my project from a one-person, amateur-style, single-instance, manual-changes-everything setup to something resembling a more professional approach: developing an application that is easily adaptable to new functionality, ready for new people to join me in development, keeping production and development separate, and avoiding big mistakes and crashes on the live database?

+",342274,,,,,43901.43125,How to improve my development strategy?,,1,2,,,,CC BY-SA 4.0, +406389,1,406391,,3/11/2020 7:50,,1,29,"

I am building the comment API for a blog system, and decided to provide REST endpoints like these:

+ +
post: /blogs/{blogId}/comments/
+put: /blogs/{blogId}/comments/{commentId}
+
+ +

These api accept the same parameter:

+ +
public class CommentEditParam {
+    private String content;
+
+    // identify the reply to user's id
+    private Long replyTo;
+}
+
+ +

And the comment entity has these fields:

+ +
@Entity
+public class Comment {
+    @Id
+    @GeneratedValue(strategy = GenerationType.IDENTITY)
+    private Long id;
+
+    private String content;
+
+    private Integer status;
+
+    @ManyToOne
+    @JoinColumn(name = ""author_id"")
+    private User author;
+
+    @ManyToOne
+    // nullable is declared on @JoinColumn, since @Column is not allowed on associations
+    @JoinColumn(name = ""reply_to"", nullable = true)
+    private User replyTo;
+
+    @ManyToOne
+    @JoinColumn(name = ""blog_id"")
+    private Blog blog;
+}
+
+ +

Now here's the problem: do I need to check the existence of blogId/replyToUserId when adding a new comment? If I do the check, it seems that I need to query the database two more times. But if I don't, what's the drawback?
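
For reference, the extra check I am considering would look roughly like this (Spring Data style; the repositories are assumed to exist):

// Two extra queries before saving (existsById is a standard CrudRepository
// method; EntityNotFoundException comes from javax.persistence).
if (!blogRepository.existsById(blogId)) {
    throw new EntityNotFoundException(""blog not found"");
}
if (param.getReplyTo() != null && !userRepository.existsById(param.getReplyTo())) {
    throw new EntityNotFoundException(""reply-to user not found"");
}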

+",349362,,349362,,43901.33194,43901.35139,Do i need to check the existence of blogId/replyToUserId when building a blog's comment api?,,1,0,,,,CC BY-SA 4.0, +406393,1,,,3/11/2020 8:37,,3,376,"

Consider two services (bounded contexts by DDD):

+ +
    +
  • Sales
  • +
  • Billing
  • +
+ +

Sales is responsible for creating orders and Billing for handling payments.

+ +

Sales tracks orders and Billing holds payments:

+ +
 Sales DB                 Billing DB
++----------+-------+     +------------+----------+-------+
+| order_id | paid  |     | payment_id | order_id | total |
++----------+-------+     +------------+----------+-------+
+| 123      | true  |     | 456        | 123      | 789.5 |
++----------+-------+     +------------+----------+-------+
+
+ +

When an action is finished an event is published:

+ +
    +
  • Sales + +
      +
    • OrderPlaced
    • +
  • +
  • Billing + +
      +
    • PaymentReceived
    • +
  • +
+ +

Billing collects a payment after OrderPlaced is received and Sales updates the order state when PaymentReceived comes.

+ +

This creates a cyclic dependency between Sales and Billing.

+ +
Sales(OrderPlaced) <---> Billing(PaymentReceived)
+
+ +

This makes it impossible to build the services as separate artifacts (e.g. JARs).

+ +

The idea behind this is to have independently deployable artifacts, which can be later brought together in an application:

+ +
Application.jar (-> Sales.jar, -> Billing.jar)
+WebApp.jar (-> Sales.jar, -> Billing.jar)
+StandaloneApp.jar (-> Sales.jar)
+
+ +

A possible solution would be to create a technical cut package Events:

+ +
    +
  • Sales
  • +
  • Billing
  • +
  • Events + +
      +
    • OrderPlaced
    • +
    • PaymentReceived
    • +
  • +
+ +

Put both event classes into it and make the services depend on it:

+ +
Sales ---> Events(OrderPlaced,PaymentReceived) <--- Billing
+
+ +

But then I see some drawbacks:

+ +
    +
  1. The domain events leave their domain contexts.
  2. The package Events can easily explode in size.
  3. Additional services depending on Events get a dependency on more than what they actually need.
+ +

Is there a better way?

+",350226,,350226,,43901.44444,43903.39167,How to resolve cyclic dependencies in Event-driven systems?,,4,5,,,,CC BY-SA 4.0, +406397,1,,,3/11/2020 10:23,,1,128,"

Task Description

+ +

I need a way to wrap different types of progress bars in a way that an algorithm can update the progress without knowing about the exact implementation. The progress implementation may allow the user to request to cancel the operation and the algorithm can query if it should stop at points where it can safely abort the current operation.

+ +

Question

+ +

Now I am looking for how to design a clean API, that does not require backward incompatible changes in the future.

+ +

I need these functions:

+ +
    +
  • Start a task with an optional description.
  • +
  • Increase the progress.
  • +
  • Enable or disable a cancel button.
  • +
  • Communicate cancel events to the algorithm, that should stop as soon as possible.
  • +
  • Finish the progress, e.g., by hiding the progress bar.
  • +
  • Possibly update the task description between different steps of the algorithm.
  • +
  • Maybe output some logging information that should be displayed in the UI (e.g. ""rounding error occured, the result may be inaccurate"")
  • +
+ +

Why decide on a final API now?

+ +

I am breaking up a large code base by creating smaller libraries and I want to open source some of the functions that may be useful for others. For most functions the API is quite clear and will not need larger changes later on and such changes may provide backward compatibility by using overloaded functions.

+ +

But when trying to design a generic Progress object, I foresee problems if I later need to add a function that has to be threaded through the algorithms or implemented by every progress implementation. To avoid trouble for other users, I am looking for an API that will not need to be changed later on.

+ +

Restrictions

+ +

As I do not write (all of) the progress bar implementations myself, I have the restriction that I cannot require them to use callbacks for the cancel operation.
+In fact I need to wrap one API (Maya's progressBar) that uses an isCancelled method which must be polled in order to determine whether the user clicked the cancel button.

+ +

How the interface could look like

+ +

An implementation with a progress bar (GUI or text UI) may set the maximum value to 50 and display a cancel button when canStop() is true.

+ +
[====      ] 40% [cancel]
+
+ +

(text representation of a GUI)

+ +

A text mode implementation may calculate the percentage and display a simple progress bar, but ignore canStop()

+ +
0 --------------- 100
+  |||||
+
+ +

(actual text output)

+ +

Current Implementation

+ +

This is my current attempt for the API (updated with some feedback from codereview.stackexchange.com):

+ +
#include <string>
+
+class Progress {
+public:
+    virtual ~Progress() {}
+    virtual void start(std::string description, unsigned int maxProgress, unsigned int initial_progress, bool can_stop) = 0;
+    virtual void end() = 0;
+    virtual void progress(int value) = 0;
+    virtual void incProgress(int steps = 1) = 0;
+    virtual bool shouldStop() const = 0;
+    virtual bool wasStopped() const = 0;
+};
+
+ +

Here I already dropped the info(std::string message) method, as it probably does not belong in the progress implementation.

+ +

Example Implementation

+ +

This is an example implementation for a simple text output, that does not support the cancel operation.

+ +
#include <cassert>
+#include <iostream>
+
+class SimpleProgress: public Progress {
+public:
+    SimpleProgress(int progress_bar_length = 50): progress_bar_length(progress_bar_length) {}
+
+    virtual void start(std::string description, unsigned int max_progress = 0, unsigned int initial_progress = 0, bool can_stop = false) override {
+        std::cerr << ""Starting: "" << description << std::endl;
+        this->description = description;
+        this->max_progress = max_progress;
+        this->current_progress = initial_progress;
+    }
+
+    virtual void end() override {
+        progress(max_progress);
+        std::cerr << ""Finished: "" << description << std::endl;
+    }
+
+    virtual void progress(int progress) override {
+        assert(progress >= 0);
+        assert(progress <= max_progress);
+        // Multiply before dividing, so the integer division does not
+        // truncate the advance to zero; guard against an unset maximum.
+        if(max_progress > 0) {
+            for(int i = 0; i < (progress - current_progress) * progress_bar_length / max_progress; i++) {
+                std::cerr << ""="";
+            }
+        }
+        current_progress = progress;
+    }
+
+    virtual void incProgress(int steps = 1) override {
+        assert(steps > 0);
+        if(max_progress >= 0) {
+            assert(current_progress < max_progress);
+            progress(current_progress + steps);
+        } else {
+            // Progress without a known maximum value.
+            std::cerr << ""."" << std::endl;
+        }
+    }
+
+    virtual bool shouldStop() const override {
+        return false;
+    }
+
+    virtual bool wasStopped() const override {
+        return false;
+    }
+
+private:
+    int current_progress = 0;
+    int max_progress = -1;
+    int progress_bar_length;
+    std::string description;
+};
+
+ +

For a non-interactive implementation, you could for example want to stop when a certain runtime is exceeded, implementing the methods like this:

+ +
        // _can_stop, runtime and max_runtime are members of this
+        // hypothetical implementation class.
+        virtual void setCanStop(bool can_stop) {
+                _can_stop = can_stop;
+        }
+
+        virtual bool canStop() {
+                return _can_stop;
+        }
+
+        virtual bool shouldStop() {
+                // Stop once cancelling is allowed and the time budget is used up.
+                return _can_stop && runtime >= max_runtime;
+        }
+
+ +

Example Usage

+ +

This is how I would use the Progress object:

+ +
void doSomething(Progress *progress = nullptr) {
+    // DummyProgress is a no-op implementation of Progress (not shown here).
+    DummyProgress dummyProgress;
+    if(progress == nullptr) {
+        progress = &dummyProgress;
+    }
+    // progress bar for a 50 step operation that can be stopped.
+    progress->start(""Calculating something"", 50, 0, true);
+    for(int i=0; i < 50; i++) {
+        progress->progress(i);
+        // Calculate something for step i
+        if(progress->shouldStop()) {
+            progress->end();
+            break; // stop the calculcation
+        }
+    }
+    if(progress->wasStopped()) {
+        std::cerr << ""Not all items were processed."" << std::endl;
+    } else {
+        progress->end();
+    }
+}
+
+ +
+ +

Note: I originally posted this question to Code Review and they recommended to ask such more theoreic questions about good APIs here.

+",282089,,282089,,43902.59236,43902.59236,How to design a stable API for a progress display function?,,3,6,,,,CC BY-SA 4.0, +406400,1,,,3/11/2020 10:53,,1,65,"

A superclass should never be aware of its subclasses. +Or so I thought.

+ +

I have a Filter class, which wraps a function and some of its kwargs. Some other kwargs are 'callable kwargs', e.g.

+ +
specific_filter=Filter(some_funct,day='sunday')
+
+ +

you can then use the Filter object like so

+ +
resulting_object=specific_filter(some_object) 
+
+ +

or

+ +
resulting_object=specific_filter(some_object,runtime_kwarg=8)
+
+ +

You specify the callable_kwargs when constructing the filter.

+ +

Filter can have 0 or more callable_kwargs. +LabelFilter inherits from Filter, and has at least one callable_kwarg: label.

+ +

I want the result of Filter & LabelFilter (the filter constructed from the binary & operation between filters) to be a LabelFilter, since it has at least one callable kwarg, namely label.

+ +
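Roughly, the operator in question looks like this (a simplified sketch, not my actual code; the combination semantics are made up):

class Filter:
    def __init__(self, funct, **callable_kwargs):
        self.funct = funct
        self.callable_kwargs = callable_kwargs

    def __and__(self, other):
        # The crux of the question: for Filter & LabelFilter to yield a
        # LabelFilter, this method would have to name the subclass, which
        # couples the superclass to it.
        def both(obj, **runtime_kwargs):
            return other.funct(self.funct(obj, **runtime_kwargs), **runtime_kwargs)
        return Filter(both, **{**self.callable_kwargs, **other.callable_kwargs})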

At the same time, I probably don't want the Filter class to be aware of its subclasses (from software engineering considerations).

+ +

What's the appropriate way to achieve that?

+ +

(If it matters at all, the actual programming language is Python, so there are no friend functions.)

+",359500,,,,,43901.47014,Superclass operations should result in subclass- a unique situation,,1,4,,,,CC BY-SA 4.0, +406402,1,406422,,3/11/2020 11:32,,5,253,"

Apply is a fitting name for a member function of a function-type class that applies the function to the given arguments:

+ +
class Addition {
+   int apply(int a, int b) {
+      return a + b;
+   }
+}
+
+ +

But what about the opposite, when the class that a function is applied to has a method that applies a given function to itself?

+ +
class FktArg {
+   // java.util.function.Function takes two type parameters: argument and result.
+   Object whatShouldIBeNamed(Function<FktArg, Object> fkt) {
+      return fkt.apply(this);
+   }
+}
+
+ +

What is the naming convention here?

+",76353,,,,,43902.49861,What should a member function be called that applies an argument function to the object?,,3,3,,,,CC BY-SA 4.0, +406406,1,,,3/11/2020 13:15,,-3,270,"

I have a GitHub project for a Django web app that one team uses. Other teams are interested in also using it, and they'll need separate instances of it (it's a stock-tracking database, so they must not see each other's stock). It's also likely that there will be small modifications that different teams will need.

+ +

For the different instances of the database, I can easily solve that by having a different database connection. However, I'm not sure of the best way to go about maintaining the different versions.

+ +

The project is currently on GitHub, as they'll all be based off the same original code would it make sense to manage the differences using GitHub branches, even if the intent is never to merge them? Or would it make more sense to just make different repositories to maintain the different versions?

+",359513,,,,,43901.58403,Using Github Branches to maintain two versions of a project?,,2,4,,,,CC BY-SA 4.0, +406409,1,406412,,3/11/2020 13:45,,2,211,"

Are streams of binary data considered a form of bit banging? Does this definition change if the array is buffered? I am referring to software which handles binary data on a general-purpose CPU; for example, Java's Input/OutputStream, C++'s istream/ostream, and C's character arrays.

+ +

I'm aware that working on individual bytes rather than a buffer is slower due to efficiency lost at the I/O barrier.

+ +
+

In contrast to bit banging, dedicated hardware (e.g., UART, SPI + interface) satisfies these requirements and, if necessary, provides a + data buffer to relax software timing requirements.

+
+ +

How is timing related?

+ +

This question is in reference to this Wikipedia entry:

+ +

https://en.wikipedia.org/wiki/Bit_banging

+",212561,,,,,43901.59653,Are streams of binary data considered a form of bit banging?,,1,5,,,,CC BY-SA 4.0, +406423,1,406428,,3/11/2020 17:54,,6,258,"

My company works in the field of public infrastructure in Europe, specifically as a software provider and operator. All of our development work is currently outsourced to two main suppliers relevant for this question. Our company background is in the (semi) public sector, so some development methods take a little longer to arrive here.

+ +

One of our suppliers has recently undergone a merger with a larger IT company and, as a result, is now embracing agile development methods more deeply. +Thus far, we have been working with detailed specification sheets and lump-sum payments for our systems, but from my (limited) understanding of agile development methods this is not really feasible with that model. Additionally, we have found that projects run over budget anyway, so the meticulous planning beforehand seems a bit like crystal-balling, with the same amount of relevance to the future ahead.

+ +

We have had a discussion with our suppliers about this, of course, and they told us very neat-sounding things, including how we will monitor total expenditure during the project as it goes on and which knobs we can turn as it progresses in order to control the cost. +They also mentioned that instead of developing large, unwieldy specification sheets we should, where possible, formulate our requirements as User Stories [""As an X I want to be able to do Y in order to achieve Z"", in all brevity].

+ +

Our other supplier mentioned today that they have trouble estimating the amount of work beforehand, and discussing a similar model to above, they were enthusiastic about it, however they have relatively little experience with agile development methods thanks to their background being in the public sector as well. +They mentioned that agile models mean ""You can't say at the beginning what the final product will look like."" and ""You cannot say with reasonable certainty how much a specific product or feature will cost."", which are usually dealbreakers for their clients.

+ +

From my understanding, however, not being able to say what the final product looks like is alleviated by the fact that as the project progresses, the client (us, in this case) is kept in the loop and able to bring input into the project, to shape it the way they wish it to be or how they 'meant' it to be. Because, to be frank, I do not know how each system should look as I specify it either, and I doubt many of their other clients do. +Regarding the cost, as explained above, I have come to understand that while an estimate ahead of time is not possible, the cost can be controlled by e.g. reducing features, simplifying parts, etc.

+ +

Questions: +Is my above understanding of the differences between agile development and traditional waterfall models correct, in terms of planning a project?

+ +

How can I, as a customer, best support agile development by my suppliers? +Any recommended reading would be helpful.

+ +

Is there something we should ask for or insist on, like a maximum amount of manhours per month that we're willing to pay, a minimum amount of features/functionalities, or something like that?

+ +

P.S. If this is the wrong Stack to ask the question feel free to move it.

+ +

EDIT: For clarity, the first company proposed going with a Kanban model, perhaps borrowing some aspects from a Scrum approach.

+",359530,,359530,,43901.75903,43903.49931,How can a client support agile development methodology?,,8,4,1,,,CC BY-SA 4.0, +406430,1,406434,,3/11/2020 19:22,,-2,157,"

I need to generate a large Excel file (around 50 MB) and send it in a response to another API, which will provide it to the front end for download.

+ +

My question is whether it would be better to save the generated Excel file and return its path in the response to the API for the front end (similar to what web mail apps do), or to return the file content to the front-end API as a byte array instead of a path to the file.

+",276014,,276014,,43901.81667,43901.8625,Generate large Excel files and response from API,,1,2,,,,CC BY-SA 4.0, +406433,1,,,3/11/2020 20:41,,0,221,"

In most of the examples for Abstract Syntax Trees (ASTs), I see no functions or classes.

+ +

I am wondering whether functions and classes are represented in the AST. If not, where should the functions, classes and templates be stored? What would be a possible data structure representation for the code?

+ +

I was thinking about

+ +
struct SourceModule
+{
+    AST ast;
+};
+
+ +

or

+ +
struct SourceModule
+{
+    std::vector<AST> ast;
+};
+
+ +

or

+ +
struct SourceModule
+{
+    std::vector<AST> ast;
+    std::vector<Class> class_list;
+    std::vector<Function> function_list;
+};
+
+ +

By AST example, I mean something like this:

+ +

+",359538,,,,,43902.11944,Does compiler AST include functions and classes?,,3,1,,,,CC BY-SA 4.0, +406449,1,,,3/12/2020 5:17,,0,138,"

I am currently designing a system that lets users connect their cloud storage (Google Drive, Dropbox, etc.) and also their physical filesystem (personal laptop, FTP server, etc.), and then open a single web page where they can find all the files in each of the storages and download them. My system has 3 main components, which are

+ +
    +
  • A front end UI which the user will use to view files from all the different storage providers (React frontend)
  • +
  • A backend server which does authentication using a DB (Mongo) and once authenticated, creates connections to the different storage providers. (Nodejs server)
  • +
  • A backend client to be run ONLY on the physical filesystem (personal laptop) which will connect to the server to provide the files in the file system. (Node js client server)
  • +
+ +

Now, I also provide the ability for multiple users to connect to the server, authenticate themselves and have access to their storage. (I decided to use the Google API and Dropbox API to connect to the server (using OAuth) and store their tokens in the DB so that they don't have to authenticate every time for fetching the files.)

+ +

So far, I think my design looks solid, except for the File System client part. I decided to use Web Sockets for this, i.e. the client will connect to the server by providing credentials, and once authenticated, the server will generate a unique UUID, store it along with the user details in the DB, and also send the UUID to the client on successful authentication. The client will verify the token for every emit from the server. So now the flow is as follows:

+ +
    +
  • Everytime the user requests a file from the FS (File System), he will send the token along with the emit request.
  • +
  • The client verifies if the token matches with what it received during authentication and if +true, will send the file.
  • +
+ +

Now, this is where my doubt is.

+ +

Since multiple users can be connected to the server and each might have their FS connection open via sockets, is it right that every time a user requests a file from his File System, all the other users' clients also check whether their token matches?

+ +

I feel this part of the design is problematic.

+ +

Of course, inactive clients will time out and their connections will be closed, but suppose 1000 users are actively using the application; then whenever user 1 requests a file from his file system, the event emitted by the server with user 1's UUID would also trigger the clients of all the other 999 users to check whether the token matches theirs.

+ +

Is there also a security concern here?

+ +

The only reason I wanted to have a socket connection is that connecting the FS client to the server requires a constant connection as long as the FS is alive, and I don't know whether HTTP requests would be secure, as the client's port would be exposed to the outside world.

+ +

Please give me your valuable thoughts on this.

+",340080,,340080,,43902.26528,44172.92153,Is using web sockets between client-server to tranfer files the right approach? ( when multiple users connected on the same socket connection ),,1,1,1,,,CC BY-SA 4.0, +406452,1,,,3/12/2020 7:13,,-3,70,"

Compared with HTML (Hypertext Markup Language), TLV (Tag-Length-Value) can also be used as a data representation. If the TLV scheme is designed smartly enough, it can represent hierarchies and trees with no problem.

+ +

Unlike XML or HTML, which are human readable, TLV is not, because it represents tags with numbers rather than text. On the other hand, TLV does not depend on a human language (such as English, German or Chinese). Now, my question is, regardless of marketing challenges:

+ +
    +
  • Can TLV theoretically be used for browsers to render a website?
  • +
  • If yes, will rendering the website in the browser be faster, since each element's length is known with minimal effort?
  • +
  • Is there any reason that prevents people from using TLV for website rendering?
  • +
+",359567,,359567,,43902.42153,43902.45833,Can TLV theoretically be used for browsers?,,1,4,,,,CC BY-SA 4.0, +406456,1,,,3/12/2020 9:23,,0,78,"

In Form-based application (WPF/MVVM/SQLServer), consider the form that handles the classical actions that you can perform on any entity.

+ +
    +
  • Create
  • +
  • Read
  • +
  • Update
  • +
  • Delete
  • +
+ +

The problem, in general terms, is simple to state:
+which class in the viewmodel layer is responsible for triggering each of the above actions?
+What is clear:
+* the actions themselves belong to the model, which handles the business logic.
+What is unclear, I'll explain with this example. Say that I have a model class that represents a Person. This class will have basic CRUD methods, say static NewInstance (C), static Load (R), Save (U) and static Delete(object key) (D).

+ +

Now the problem.
+When I bring up a Person instance, I will most likely do it through a PersonViewModel, which will expose a SaveCommand that in turn calls the Save method of the underlying model.

+ +

Should this viewmodel also have the create command? This would be quite unnatural, because either:

+ +
    +
  • the PersonViewModel is capable of unloading the underlying model and loading another one, which may be less than easy, depending on how much 'wiring' is going on between model and viewmodel

  • +
  • the command acts just as a relay for an external class that actually unloads the viewmodel and loads the new one. This is a bit contrived, because in this case we have code that unloads the object containing it, and this requires additional correctness checks.

  • +
+ +

So all of these seem to be code smells indicating that Create doesn't belong to PersonViewModel.
+Which class does it belong to, then?
+I'm thinking of a factory method in a 'viewmodel' variation. But solutions like that all seem over-engineered, so I'm asking whether there is a well-established pattern (possibly multiple ones) for this issue.

+",28667,,,,,43902.42083,CRUD actions rexponsibility in the MVVM,,1,0,,,,CC BY-SA 4.0, +406459,1,406468,,3/12/2020 10:31,,1,246,"

I work with Spring applications. Recently I have found this article about the Anemic Domain Model.

+ +

They recommend putting logic in Entity classes. It solves a problem that Martin Fowler described in his article. We can move logic from our service layer to the domain layer. It sounds good.

+ +

But can we do the same with our DTOs? There are plenty of methods with a logic that would be convenient to put in those DTOs. Or they should remain as bags of getters and setters?

+ +

For example, can we move this method to ExpenseSearchDTO?

+ +
private void processSearchChartDTO(ExpenseSearchDTO expenseSearchDTO) {
+        if (expenseSearchDTO.getDateTo() == null) {
+            expenseSearchDTO.setDateTo(LocalDateTime.now());
+        }
+        if (expenseSearchDTO.getDateFrom() == null) {
+            expenseSearchDTO.setDateFrom(LocalDateTime.of(0, 1, 1, 0, 0));
+        }
+}
+
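i.e., moving it into the DTO would give something like this (a sketch; the method name is invented):

import java.time.LocalDateTime;

public class ExpenseSearchDTO {
    private LocalDateTime dateFrom;
    private LocalDateTime dateTo;

    // The defaulting logic moved from the service into the DTO itself.
    public void applyDefaultDates() {
        if (dateTo == null) {
            dateTo = LocalDateTime.now();
        }
        if (dateFrom == null) {
            dateFrom = LocalDateTime.of(0, 1, 1, 0, 0);
        }
    }
}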
+",347888,,,,,43903.49583,Role of DTOs in Rich Domain Model,,4,2,,,,CC BY-SA 4.0, +406463,1,406465,,3/12/2020 11:42,,0,71,"

I have an application. It needs to send emails. We've all been there, done that, got the t-shirt.

+

I've got an IEmailMessage interface:

+
public interface IEmailMessage
+{
+    string From { get; }
+
+    string To { get; }
+
+    string Subject { get; }
+
+    string Body { get; }
+
+    bool IsBodyHtml { get; }
+}
+
+

with a readonly implementation:

+
public class DefaultEmailMessage : IEmailMessage
+{
+    public DefaultEmailMessage(string from, string to, string subject, string body, bool isBodyHtml)
+    {
+        // you can probably guess what this does
+    }
+
+    public string From { get; }
+
+    public string To { get; }
+
+    public string Subject { get; }
+
+    public string Body { get; }
+
+    public bool IsBodyHtml { get; }
+}
+
+

There's a service to actually send an email (implementations use actual email providers like SendGrid):

+
public interface IEmailSender
+{
+    Task<bool> SendAsync(IEmailMessage email, CancellationToken cancellationToken = default(CancellationToken));
+}
+
+

and finally, a service that handles pulling the messages from a queue, transforms them into generic IEmailMessages, then uses an instance of the above interface to actually send them off:

+
public interface IEmailDeliveryService
+{
+    Task SendOutstandingEmailsAsync(int batchSize);
+}
+
+public class EmailDeliveryService : IEmailDeliveryService
+{
+    public EmailDeliveryService(IEmailSender emailClient)
+    {
+        ...
+    }
+
+    public async Task SendOutstandingEmailsAsync(int batchSize)
+    {
+        IEnumerable<IEmailMessage> serviceEmails = GetListOfEmailsFromDb(batchSize)
+            .ConvertAll(dbEmail => ToIEmailMessage(dbEmail));
+
+        foreach (var email in serviceEmails)
+            await _emailClient.SendAsync(email);
+    }
+}
+
+

The reason for this somewhat convoluted chain is to allow the actual email sending provider IEmailSender to be substituted (this is a PoC and SendGrid is unlikely to be the final choice).

+
+

In our dev environment we don't want test emails to potentially go out to clients, so we need to modify the "To" field to set it to a dev mailbox. For this purpose I've written the following visitor:

+
public interface IEmailTransformer
+{
+    void Transform(IEmailMessage email);
+}
+
+

The intention is that I will have 0...N instances of IEmailTransformer injected into the EmailDeliveryService, and each instance will get every email passed to it so it can do whatever it needs:

+
public interface IEmailDeliveryService
+{
+    Task SendOutstandingEmailsAsync(int batchSize);
+}
+
+public class EmailDeliveryService : IEmailDeliveryService
+{
+    public EmailDeliveryService(IEmailSender emailClient, IEnumerable<IEmailTransformer> transformers)
+    {
+        ...
+    }
+
+    public async Task SendOutstandingEmailsAsync(int batchSize)
+    {
+        IEnumerable<IEmailMessage> serviceEmails = GetListOfEmailsFromDb(batchSize)
+            .ConvertAll(dbEmail => ToIEmailMessage(dbEmail));
+
+        foreach (var email in serviceEmails)
+        {
+            foreach (var transformer in _transformers)
+                transformer.Transform(email);
+
+            await _emailClient.SendAsync(email);
+        }
+    }
+}
+
+

I'll then have an implementation that sets the "To" field, instantiated only in the dev environment:

+
public class EmailToTransformer : IEmailTransformer
+{
+    public void Transform(IEmailMessage email)
+    {
+        email.To = _emailTo;
+    }
+}
+
+

Problem

+

The above obviously doesn't work since the interface's properties are readonly.

+
+

Solution #1

+

Make the properties writable.

+

Issues #1

+

The properties shouldn't be writable because IEmailSender shouldn't be able to mutate them.

+
+

Solution #2

+

Define an intermediate interface, IWritableEmailMessage, that inherits from IEmailMessage:

+
public interface IWritableEmailMessage : IEmailMessage
+{
+    string From { get; set; }
+
+    string To { get; set; }
+
+    string Subject { get; set; }
+
+    string Body { get; set; }
+
+    bool IsBodyHtml { get; set; }
+}
+
+

Change EmailDeliveryService, DefaultEmailMessage and IEmailTransformer to use IWritableEmailMessage.

+

Issues #2

+

Feels like using abstraction to achieve the same thing as #1, just in an unnecessarily convoluted manner.

+
+

Question

+

Neither of the above solutions is optimal, I feel like there must be a better way to achieve this, but I don't know what it is. I feel like I'm either overlooking something simple, or unnecessarily overcomplicating this. Any suggestions (especially pointing to particular design patterns) are highly appreciated!

+

Apologies for the large amount of code, but I felt it's necessary to fully explain the problem. I tried to follow the advice in this Meta answer.

+",132348,,-1,,43998.41736,43903.60139,Pros and cons of different implementations wrt encapsulation,,2,4,,,,CC BY-SA 4.0, +406464,1,,,3/12/2020 11:57,,0,17,"

Let's say you have created a monolithic jhipster application with JWT authentication and where typically the login is by a user registering themselves and activating through email:

+ +
    +
  1. Let's say that this application has a simple monolithic architecture for storing documents, and we simply call it documentsApp.

  2. You would like to share document data on that app with another application.

  3. You do not want to do this using HTTP-style feign clients, or through a messaging framework like Kafka, because you want to manage data access (i.e. keep records of access and login attempts, and review or deny access based on criteria).

  4. You particularly want to avoid feign clients because you do not want the client application that needs document data to have to ""know"" the structure of objects in the documentsApp.

  5. The client application could be created in any stack: PHP Laravel, .NET or Ruby on Rails; it could even be a terminal script using curl. This complicates the use of an application registry.
+ +

How would you go about setting up access for the client application, given that you cannot create an email address for the client app (my assumption) and activate its account on the documentsApp? What would be the best and safest way to allow other applications to access data on the documentsApp programmatically?

+ +

I have thought about how Google can issue app-specific passwords for an email account and grant access to a specifically named application using that app password, even if the Google account has two-step verification. How is this created in an app like a Jhipster monolith?

+",359582,,,,,43902.49792,Programmatic Application login into Jhipster Application,,0,0,,,,CC BY-SA 4.0, +406469,1,,,3/12/2020 14:20,,0,67,"

The initial task was to find, very quickly, free time for resources, say hotel rooms or service availability. +I came up with the following model:

+ +

Let's say we have 24 hours in 1 day and 365 (366) days in a year. 1) We have a sequence of bits:

+ +

timetable = 0001011....000, length(timetable) = 24*365 = 8760, where 0 is busy and 1 is free.

+ +

So if we need to merge two timetables, the output is: 1 && 1 = 1 (free and free give free); 1 && 0 or 0 && 0 = 0 (busy and free, or busy and busy, give busy).

+ +

2) We have array of sequences:

+ +

arr = [timetable1, timetable2, timetable3....N]; N < 1,000,000

+ +

representing free/busy intervals for a timetable of some resources (hotel room, or any service counting in hours).

+ +

The task is the following:

+ +

We need to find free time in a given interval (i, j), with i ≠ j and 0 ≤ i < j < 8760.

- i, j can represent a day interval in the year, so we can reduce the initial long array by the rule above. Example: (120, 125).
- i, j can represent an hour interval in the year, so we can use the same array as above.

+ +

The total runtime should be around 50-100 ms.

+ +

My algorithm is the following: 1) the input is a bit sequence 11111...1111; 2) make a bitwise ""and"" (&) with each timetable.

+ +

Bitwise operations are really fast, so I think this should be an optimal solution. +Does anyone think this can be done better?
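
A sketch of the idea, using Python's arbitrary-precision integers as bitsets (illustrative only):

HOURS = 24 * 365  # 8760 slots in the timetable

def merge_free_time(timetables, i, j):
    # Keep only the hours in [i, j) that are free in *all* timetables.
    mask = ((1 << (j - i)) - 1) << i   # ones exactly in positions i..j-1
    acc = mask
    for t in timetables:
        acc &= t                       # one bitwise AND per timetable
        if acc == 0:
            break                      # early exit: no common free slot left
    return acc

# Two resources, free at hours {0,1,2} and {1,2,3}: common free hours are 1 and 2.
rooms = [0b0111, 0b1110]
print(bin(merge_free_time(rooms, 0, 4)))  # -> 0b110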

+",359595,,,,,43902.61319,Booking problem,,1,2,,,,CC BY-SA 4.0, +406474,1,,,3/12/2020 15:50,,0,605,"

I have a C# MVC application which is basically a large application form. We are using a large ViewModel to store all the information the user enters as they pass through multiple steps in the application.

+ +

My problem is that certain steps are loaded/saved dynamically through AJAX. I need to store some of the database-generated IDs in my view model, which is a problem because those IDs won't persist to the next page unless I add something like hidden fields, so that they are populated in the view model when the server posts to the controller. For various reasons I can't always do this: for instance, some sections of the page are dynamically generated through AJAX, and in some cases I can't have these IDs appear on the page for security reasons.

+ +

So I started moving these IDs into session values. However, I got to thinking: why don't I just save the entire view model in session and then update it each time the user submits their form data and hits the controller? Are there any downsides to storing the entire view model in session? Does this go against MVC architecture? Or should I only keep the database IDs in session, something like the code below:

+ +
public static int? CustomerAppID
+{
+        get
+        {
+            if (HttpContext.Current.Session[""CustomerAppID""] != null)
+            {
+                return (int)HttpContext.Current.Session[""CustomerAppID""];
+            }
+            else
+            {
+                return null;
+            }
+        }
+        set
+        {
+            HttpContext.Current.Session[""CustomerAppID""] = value;
+        }
+}
+
+",233287,,,,,43902.73889,Is it good practice to save an entire ViewModel in Session (C# ASP.NET MVC),,1,1,,,,CC BY-SA 4.0, +406475,1,,,3/12/2020 15:51,,0,109,"

I have several POCOs (simple classes holding data) that I need to flush from memory from time to time, whenever a collection grows too large. Most often I want to upload the data to a REST service, but if that fails I want to save the data to disk.

+ +

In the best of worlds, the code should be able to handle any data type. The problem I'm facing is who should be responsible for the conversion between object -> string | xml | json | ....

+ +

I could let all POCOs implement an interface (say, with a ConvertToJson() method). But that would mean every class that needs to be written has to carry that interface and method, which seems tedious; instead, I would be happy with a default conversion rather than a class-specific one.

+ +
writer.Write(poco.ConvertToJson());
+
+ +

Simplified code of what I have right now. It lacks support for different conversions (XML, Json).

+ +
public interface IWriter
+{
+    bool Write(string data);
+    bool Write(Object data);
+}
+
+ +

Writers (File and REST)

+ +
public class FileWriter : IWriter
+{
+    public bool Write(string data)
+    {
+        File.WriteAllText(""file.txt"", data);
+        return true;
+    }
+
+    public bool Write(object data)
+    {
+        XmlSerializer xs = new XmlSerializer(data.GetType());
+        using (TextWriter tw = new StreamWriter(""file.txt"")) { xs.Serialize(tw, data); }
+        return true;
+    }
+}
+
+public class RestApiCaller : IWriter
+{
+    public bool Write(string data)
+    {
+        MyHttpClient.CallRestService(data);
+        return true;
+    }
+
+    public bool Write(object data)
+    {
+        // Serialize to a string first; XmlSerializer writes to a TextWriter.
+        XmlSerializer xs = new XmlSerializer(data.GetType());
+        using (StringWriter sw = new StringWriter())
+        {
+            xs.Serialize(sw, data);
+            MyHttpClient.CallRestService(sw.ToString());
+        }
+        return true;
+    }
+}
+
+ +

Example of calling code

+ +
    static void Main(string[] args)
+    {
+         // Write DeathStar as XML to the rest service. This was easy!
+         List<DeathStar> deathStars = new List<DeathStar>(...);
+         IWriter restWriter = new RestApiCaller();
+         IWriter fileWriter = new FileWriter();
+         if(!restWriter.Write(deathStars))
+         {
+             // I don't know how to write my deathStar as JSON
+             fileWriter.Write(deathStars);
+         }
+    }
+
+ +

Who should do the conversion?

+ +
    +
  • The Poco classes?
  • +
  • The Writer?
  • +
  • a dedicated converter? + +
      +
    • if so, who calls the converter, the poco, Writer, or the calling code?
    • +
  • +
+ +

And should a writer only accept string as input? Both XML and JSON are strings in the end, but I'm not sure whether that is desirable. I would like the answer to follow SOLID.

+",174594,,174594,,43902.66875,43903.68958,"How to divide responsibility between poco, writer and converter",,2,0,,,,CC BY-SA 4.0, +406484,1,,,3/13/2020 3:02,,2,79,"

I have proposed a REST API called 'getSessionState', which is basically a backend API that retrieves some state info from a Redis server and returns it to the clients.

+ +

Because the state data is kept in a Redis server, whenever a user calls the function I will extend the expiry of the redis data structure by a bit in order to avoid key miss.

+ +
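In other words, the read path would look roughly like this (redis-py style calls; redis_client and the key layout are assumptions):

# Sketch of the read path; redis_client and the key layout are invented.
def get_session_state(session_id):
    key = 'session:' + session_id
    state = redis_client.get(key)
    if state is not None:
        # Side effect of the read: slide the TTL forward to avoid key misses.
        redis_client.expire(key, 300)
    return state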

One of my colleagues raised the point that, strictly speaking, this makes a change to the state, so it shouldn't be called 'getSessionState', because 'get' implies no change to the backing data.

+ +

Is it a fair comment?

+ +

If so, what would be a good alternative name for getSessionState?

+ +

My team generally uses Google's API design documentation https://cloud.google.com/apis/design/ as a design guideline, but I am not sure whether it covers the aspect of TTLs.

+",15257,,15257,,43903.24653,43903.24653,REST API: is it a violation of naming convention if a GET method changes the expiry of the a redis key?,,1,1,,,,CC BY-SA 4.0, +406485,1,406489,,3/13/2020 4:32,,1,207,"

This is an existing C# .NET WinForms project. I assume it was not developed with unit tests in mind from the very beginning. It uses a Model-View-Controller architecture, and the backend is a content repository (a kind of heirarchical database) which I am not too familiar with, since I have only touched the front end.

+ +

I have been asked to create interfaces for each public class, including the forms. These will be used for unit testing with NUnit. This is complete. What is the best approach now? The only thing that comes to mind at the moment is to create concrete classes from these interfaces, which is just re-implementing the existing classes in the application. Outside of the fact that this will make the unit tests ""mind their own business"", I don't see what the purpose of re-writing the classes is.

+ +

If this is the correct approach, it does not seem difficult to test things such as the Model, which contains an OOP representation of the database. How should the forms be tested?

+",350409,,,,,43903.5,Unit testing an existing project by creating interfaces for all public classes (including GUI forms),<.net>,1,7,,,,CC BY-SA 4.0, +406490,1,,,3/13/2020 7:39,,-4,187,"

It's mind-bending reading code like this:

+ +
""Aggregation"".Equals(evt.Id))
+
+ +

I was unsuccessful in trying to talk the person out of this style; maybe I'm wrong? +Or maybe I wasn't articulate enough, hence the post here.

+",9445,,,,,43903.33333,Is using yoda conditions with c# justified?,,1,1,,,,CC BY-SA 4.0, +406493,1,,,3/13/2020 11:35,,1,55,"

Is it standard practice to develop an HTML project using partial views? +Consider a project where the design team develops the HTML based on the requirements and then the backend team works on it.
+Consider a project where the design team will develop the HTMLs based on the requirement & then the backend team will work on that.

+

For example:

+

index.html:

+
<html>
+<head>
+    //reference to jQuery
+</head>
+<body>
+    <div class="main">
+       //all the page content will load here
+    </div>
+    
+    <script>
+        $(document).ready(function(){           
+            $.get("header.html", function(html_string)
+            {
+                $(".main").append(html_string);
+            },'html');  
+            $.get("banner.html", function(html_string)
+            {
+                $(".main").append(html_string);
+            },'html'); 
+            $.get("intro.html", function(html_string)
+            {
+                $(".main").append(html_string);
+            },'html');
+            $.get("footer.html", function(html_string)
+            {
+                $(".main").append(html_string);
+            },'html');          
+        });
+    </script>
+</body>
+</html>
+
+

header.html

+
<header>
+...
+</header>
+
+

banner.html

+
<section>
+  <img src="" />
+  <h1>hello</h1>
+</section>
+
+

intro.html

+
<section>
+  <h2>heading</h2>
+  <p>description</p>
+</section>
+
+

footer.html

+
<footer>
+...
+</footer>
+
+

The reasons for this approach:

+
    +
  1. A change request, say in the header menu, will be made in only one place and will be reflected in all other pages, unlike the regular approach, where the designer has to update the header markup in every page.

  2. Re-usability; it is also easy to change the order of a section in a page as and when required.

  3. If a particular section's markup is updated by the design team, it will be easier for the back-end developer to just pick that view instead of going through the entire HTML markup.

And now my questions:

  1. Is this a standard design approach, and is there a specific name for this kind of HTML development?

  2. What are the challenges, if anyone has tried this?

  3. I'm unable to find any material online that will help a designer better understand & practice this approach. Please share any pointers.
+",148582,,-1,,43998.41736,43903.65972,Html development with partial views,,1,3,,,,CC BY-SA 4.0, +406496,1,406517,,3/13/2020 12:13,,0,130,"
my_dict = { 1: 11, 2: 12, 3: 13 }
+
+ +

If I want to work on the list of keys from my_dict, there appear to be (at least) three ways to do this in Python >= 3.5 (i.e. on or after PEP 448).

+ +

List Comprehension:

+ +
[key for key in my_dict]
+
+ +
+

[1, 2, 3]

+
+ +

Unpacking:

+ +
[*my_dict]
+
+ +
+

[1, 2, 3]

+
+ +

Constructing a list from the view of the keys:

+ +
list(my_dict.keys())
+
+ +
+

[1, 2, 3]

+
+ +

Are these three methods interchangeable?

+ +

I think

+ +
    +
  • [key for key in my_dict] is the most 'Pythonic',
  • +
  • [*my_dict] is the most terse, and
  • +
  • list(my_dict.keys()) is the most readable.
  • +
+ +

Are there any technical reasons for choosing between these?

+",50096,,50096,,43903.69444,43903.72153,"Unpacking python dictionaries, or list comprehension, or …",,1,0,,,,CC BY-SA 4.0, +406497,1,406524,,3/13/2020 12:21,,0,68,"

I have a domain service and I need to create an aggregate inside it, because the logic for creating this aggregate involves other aggregates and calls to a repository to check some business rules.

+ +

Is it correct to call Repository.Add(newEntity) inside a domain service, or can a domain service only query entities?
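
+ +

To make the question concrete, here is a rough sketch (C#-style, all names hypothetical) of the split I am considering, where the domain service only builds the aggregate and the application service is the one calling Add:

+ +
// Domain service: uses repositories read-only, to check business rules.
+public class OrderFactoryService
+{
+    private readonly ICustomerRepository customers;
+
+    public OrderFactoryService(ICustomerRepository customers)
+    {
+        this.customers = customers;
+    }
+
+    public Order CreateOrder(CustomerId id)
+    {
+        var customer = customers.FindById(id); // query to check rules
+        // ...rules involving other aggregates...
+        return new Order(customer);            // note: no Add() here
+    }
+}
+
+// Application service: the only place that persists the new aggregate.
+public class PlaceOrderHandler
+{
+    private readonly OrderFactoryService factory;
+    private readonly IOrderRepository orders;
+
+    public PlaceOrderHandler(OrderFactoryService factory, IOrderRepository orders)
+    {
+        this.factory = factory;
+        this.orders = orders;
+    }
+
+    public void Handle(CustomerId id)
+    {
+        orders.Add(factory.CreateOrder(id));   // Repository.Add lives up here
+    }
+}
+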

+",359653,,,,,43903.81389,where to call repository update/add methods?,,1,5,,,,CC BY-SA 4.0, +406498,1,406648,,3/13/2020 13:03,,1,78,"

I have the following models.

+ +

First, there is a Vector which has a circular DNA sequence.
+Second, there is a LinearizedVector, which could be one of the classes below.

+ +
+LinearizedVectorBase:  
+  LinearizedVectorByOneCut   
+     ConcreteLinearizedVectorByOnePosition  
+     ConcreteLinearizedVectorByOneEnzyme  
+     ConcreteLinearizedVectorByOneEnzymeOrPosition  
+  ConcreteLinearizedVectorByTwoCuts is composed of two 
+    `ConcreteLinearizedVectorByOneEnzymeOrPosition`     
+
+ +

The base class LinearizedVectorBase has the following abstract methods:

+ +
linearize_vector()
+get_feature_instances()
+
+ +

The current design

+ +

+ +

The problem with the current design is that ConcreteLinearizedVectorByOneEnzymeOrPosition has two parents with different implementations.

+ +
class ConcreteLinearizedVectorByOneEnzymeOrPosition:
+    def get_feature_instances(self):
+        if self.restriction_enzyme is not None:
+            # it should be `ConcreteLinearizedVectorByOneEnzyme().get_feature_instances()`
+            return ?
+        elif self.cut_position is not None:
+            # it should be `ConcreteLinearizedVectorByOnePosition().get_feature_instances()`
+            return ?
+
+ +

I want to have two types, enzyme and position, within one model so ConcreteLinearizedVectorByTwoCuts could have a composition relation of two objects of ConcreteLinearizedVectorByOneEnzymeOrPosition. Otherwise, the number of models will grow exponentially (ConcreteLinearizedVectorByTwoEnzymes, ConcreteLinearizedVectorByTwoPositions, ConcreteLinearizedVectorByEnzymeAndPosition, ConcreteLinearizedVectorByPositionAndEnzyme), which is clearly wrong.

+ +

I'm trying to minimize the number of concrete classes that exist as a result of the possible permutation of Position and Enzymes. When I used a concrete EnzymeOrPosition class and used it in the LinearizedVectorByTwoCuts by a composition relation, I faced the issue of having two parents with a different implementation.

+ +
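
A rough sketch (Python, all names mine) of the delegation idea I am weighing as an alternative: a single Cut object wraps either an enzyme or a position, and the two-cuts vector simply holds two of them, so no extra concrete subclasses are needed:

+ +
class Cut:
+    # Wraps either a restriction enzyme or a raw position (hypothetical).
+    def __init__(self, enzyme=None, position=None):
+        self.enzyme = enzyme
+        self.position = position
+
+    def get_feature_instances(self):
+        # Delegate to whichever strategy this cut was built with;
+        # enzyme_features / position_features are placeholder helpers.
+        if self.enzyme is not None:
+            return enzyme_features(self.enzyme)
+        return position_features(self.position)
+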

The flow of the program:

+ +
linearized_vector = linearize_vector(vector, **kwargs)
+feature_instances = linearized_vector.get_feature_instances()
+
+ +

What is a better way of designing this use case?

+",359652,,335038,,43908.60278,43908.60278,Choosing a suitable design structural pattern for a use case,,1,8,,,,CC BY-SA 4.0, +406499,1,,,3/13/2020 13:18,,2,221,"

I'm refactoring a health monitoring system which requires that certain attributes of an Entity have to be unique across the system. The attributes of an Entity are configurable by the end-user and the user can pick one or more attributes to be unique (either ""universally"" unique or unique across a geographical area).

+ +

Currently, the solution performs very poorly when looking up these unique values (we use Postgres). Using Postgres partial indexes mitigates the performance issue, but on large datasets (500 million rows, which is not unusual) the performance is still not acceptable.

+ +

One solution I'm considering is to hash the attribute + value using a trigger before INSERT and UPDATE. The trigger would check a unique index over these hashes before allowing the INSERT: if the hash is missing, it inserts; otherwise it blocks the operation.

+ +
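
A minimal sketch of that trigger idea (table and column names are hypothetical; the UPDATE path, which would also have to retire the old hash, is omitted). md5() output is 128 bits and casts to uuid, so the unique index stays compact:

+ +
CREATE TABLE unique_hashes (
+    hash uuid PRIMARY KEY          -- md5(attribute || value), 16 bytes
+);
+
+CREATE OR REPLACE FUNCTION enforce_unique() RETURNS trigger AS $$
+BEGIN
+    -- The primary key makes this INSERT fail on a duplicate value.
+    INSERT INTO unique_hashes (hash)
+    VALUES (md5(NEW.attribute || ':' || NEW.value)::uuid);
+    RETURN NEW;
+END;
+$$ LANGUAGE plpgsql;
+
+CREATE TRIGGER check_unique BEFORE INSERT ON entities
+    FOR EACH ROW EXECUTE FUNCTION enforce_unique();
+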

Is there a better solution to this problem, considering the size of the dataset?

+ +

Edit:

+ +

Following @JimmyJames' suggestion (use a Bloom index), I ran some tests to verify which index is faster for a direct lookup. +Env: Postgres 12, 64 GB RAM, 16-core AMD

+ +

First, I created 500 million pseudo-hashes:

+ +
insert into bloom_filter (
+   hash
+)
+select
+    gen_random_uuid()
+from generate_series(1, 500000000) s(i);
+
+ +

Created a b-tree index:

+ +
CREATE INDEX idx_btree_bar on bloom_filter (hash);
+
+ +

Index creation took ~19 min.

+ +

A simple lookup takes 24 ms (milliseconds):

+ +
select count(*) from bloom_filter where hash= '99c2b46f-cc36-4249-ae36-f16f047f2962';
+
+ +

Then I dropped the b-tree index and created a bloom index:

+ +
CREATE EXTENSION bloom;
+
+CREATE INDEX idx_bloom_hash ON bloom_filter USING bloom(hash)
+WITH (length=64, col1=4);
+
+ +

Index creation took: 2m 54s

+ +

The same lookup query as above takes 1.536 sec., which is significantly more than with a b-tree index. +Not surprisingly, a hash index has a similar lookup speed to a b-tree index.

+",148927,,148927,,43904.6125,43904.6125,Efficient uniqueness check on large dataset,,2,2,,,,CC BY-SA 4.0, +406504,1,,,3/13/2020 15:03,,0,245,"

The application that I am working on has numerous ...toMany relations, i.e. class Model can have several parameters. In a unidirectional world, it is simple to manage a collection: I can clear the collection (Java List/Set) and add the elements again (very inefficient for DB performance, but doable).

+ +

In a bidirectional world, mapping from a DTO to an entity is a bit harder. I have to identify which elements in the collection changed, which were added and which were removed. That creates a lot of boilerplate code comparing IDs and field values. If class Parameter has another ...toMany relation, this becomes a real mess and forces me to write many nested loops.

+ +
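
For reference, the merge-by-id boilerplate I mean looks roughly like this (Java; the types and helpers such as fromDto/applyChanges are hypothetical):

+ +
Map<Long, Parameter> existing = model.getParameters().stream()
+        .collect(Collectors.toMap(Parameter::getId, p -> p));
+
+for (ParameterDto dto : dtos) {
+    Parameter p = existing.remove(dto.getId());
+    if (p == null) {
+        model.addParameter(fromDto(dto));   // element added in the DTO
+    } else {
+        p.applyChanges(dto);                // element changed in place
+    }
+}
+// Whatever was never visited no longer exists in the DTO: remove it.
+existing.values().forEach(model::removeParameter);
+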

A few possible solutions, hoping to get more from you!

+ +
  • Use model mappers such as Dozer, MapStruct, ModelMapper, JMapper, ... you name it. However, trying ModelMapper I realized that this takes much more time than querying the parameter from the DB and editing only specific fields; much more time for very complex objects.
  • Track collections: write a class that keeps track of changes. However, that would require editing code in many places, as well as increasing the collection size (any known libraries/extensions to do that?)
  • ???
+ +

What's the best approach to map a DTO to an entity and vice versa for complex objects?

+",110752,,,,,43904.52083,Mapping bidirectional 'toMany' relation from DTO to entity,,1,0,,,,CC BY-SA 4.0, +406510,1,,,3/13/2020 16:05,,-7,131,"

I hate PHP dollar signs. With that in mind I tried and found a way to access variables and function parameters without using them.

+ +

First, to get value of a variable I used get_defined_vars:

+ +
// echo $foo;
+echo get_defined_vars()['foo']; 
+
+ +

That was easy. To assign value to the variable I used extract:

+ +
// $foo = 'Hello World!';
+extract(['foo' => 'Hello World!']);
+
+ +

To access function parameters: func_get_args:

+ +
// function print_string($msg) {
+function print_string() {
+    extract(['msg' => func_get_args()[0]]);
+    echo get_defined_vars()['msg']; 
+}
+
+ +
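
One direction I already tried and abandoned (shown only to save answerers the trouble): wrapping the calls in helper functions. It cannot work, because extract() and get_defined_vars() operate on the current function's own symbol table:

+ +
// Purely illustrative dead end: extract() here populates let()'s own
+// local scope, not the caller's, so $foo never appears at the call site.
+function let(array $vars) {
+    extract($vars);
+}
+let(['foo' => 'Hello World!']);
+echo $foo; // Undefined variable: foo
+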

But after all, it's ugly, messy, impractical. I'd like to simplify it, add some syntactic sugar, but I don't know how. I was going in crazy directions like making array constants, classes with modifiable members etc., but I couldn't get to anything prettier.

+ +

What can I do more to simplify above code? (If such a question is not to be asked on this stack exchange board then I kindly ask moderators to move it to the proper one.)

+",127960,,,,,43903.69514,Getting rid of dollar signs in PHP in a best way,,1,4,,,,CC BY-SA 4.0, +406530,1,,,3/13/2020 23:06,,1,20,"

I am creating software on macOS that has over 10 GB of assets associated with it. I can create a pkg that properly installs these, but I have two problems. First, the notarization process required for Catalina is going to introduce a brutal bottleneck and weeks of debugging. Second, the distribution of this file is quite cumbersome.

+ +

I am looking at creating a minimal installer for the primary code components, and then downloading the thousands of assets during install. Is there a standard way to use a pkg to do this, or do I need to roll my own? What is the standard way to do this?

+",113143,,,,,43903.9625,Big MacOS Install PKG Versus Downloads,,0,0,,,,CC BY-SA 4.0, +406539,1,406546,,3/14/2020 11:24,,4,975,"

For example, I have a static C++ array {'d', 'o', 'c', 's'}, and an x86 architecture with 32-bit words.

+ +

I want to replace the letter c with g. As far as I understand, when we make a read operation from byte-addressable RAM, the CPU always reads 1 word from it. So it will read the whole array.

+ +

After the CPU reads the array, I will replace the letter c with g and write it back to memory.

+ +
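
In C-like terms, the two possibilities I can imagine look like this (purely illustrative; the question is what actually happens on the memory bus):

+ +
#include <stdint.h>
+#include <string.h>
+
+char docs[4] = {'d', 'o', 'c', 's'};
+
+void replace_c_with_g(void) {
+    // Option 1: a byte-sized store (e.g. an x86 MOV r/m8 instruction).
+    docs[2] = 'g';
+
+    // Option 2: read-modify-write of the whole 32-bit word.
+    uint32_t w;
+    memcpy(&w, docs, 4);              // read the word into a register
+    ((unsigned char *)&w)[2] = 'g';   // patch byte 2 in the copy
+    memcpy(docs, &w, 4);              // write the whole word back
+}
+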

Will the CPU write 1 byte or 4 bytes to RAM?

+ +

If it writes 4 bytes, what's the point of using byte-addressable memory instead of word-addressable?

+ +

UPDATE

+ +

Let's consider another example. The CPU uses 32-bit words. We have an array of 4 bytes in RAM. It starts at address 0x000:

+ +
0x000: 1110 1111 0000 1100 [1110 1111] 0000 1100 // we want to update byte in brackets
+0x004: ....
+0x008: ....
+
+ +

Since RAM is byte-addressable, each cell contains exactly 1 byte. So, theoretically, we can read or write 1 byte using a single memory access. But since the computer has a 32-bit bus, the CPU always reads 32 bits in a single memory access.

+ +

So, as far as I understand, the CPU reads the whole word into its register:

+ +
1110 1111 0000 1100 [1110 1111] 0000 1100
+
+ +

Then it updates the 3rd byte in this register (for example with 1000 1111):

+ +
1110 1111 0000 1100 [1000 1111] 0000 1100
+
+ +

And I don't understand what happens next. +[Main question] Does the CPU write all 32 bits from the register to RAM, or only 8 bits?

+ +

[Answered] If it updates 32 bits, what's the point of not using word-addressable RAM, which works exactly the same? According to the answers below, we still do it because of our past and Sneakernet.

+",359709,,173647,,43907.50972,43908.60069,Is it possible to update exactly 1 byte in RAM?,,7,7,1,,,CC BY-SA 4.0, +406542,1,,,3/14/2020 12:19,,2,270,"

I'm trying to build a section of my app where two users can message each other. I've read about TCP and UDP, and it seems like TCP is better suited due to ordered packet delivery. However, TCP requires a connection between the two users at all times for data to be sent, but in my app one user may send another user a message even when the receiving user isn't in the application.

+ +

How do I get around this problem? I would ideally like to make the application P2P so I don't need a dedicated server, but is the reality that I do need a dedicated server running TCP that can then deliver the messages when the user connects?

+ +

Or, alternatively, since I'm using MySQL as the database to store the chat history, can I just load the latest messages off that when the user logs in?
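
+ +

For context, the store-and-forward lookup I have in mind would be something like this (hypothetical schema):

+ +
SELECT sender_id, body, created_at
+FROM messages
+WHERE recipient_id = ?        -- the user who just logged in
+  AND created_at > ?          -- their last-seen timestamp
+ORDER BY created_at;
+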

+",359466,,,,,44176.79167,Chat part of application - using UDP or TCP?,,2,9,1,,,CC BY-SA 4.0, +406544,1,406576,,3/14/2020 12:32,,-2,74,"

I am writing an HTTP client and am trying to decide what the correct time to wait would be before retrying the HTTP request, in case the downstream service is for some reason not responding. I intend to use a library which has a great feature for scheduling requests. It has a function called +Schedule.exponential, e.g.

+ +
val exponential = Schedule.exponential(10.milliseconds)
+
+ +

The definition is:

+ +
  /**
+   * A schedule that always recurs, but will wait a certain amount between
+   * repetitions, given by `base * factor.pow(n)`, where `n` is the number of
+   * repetitions so far. Returns the current duration between recurrences.
+   */
+  def exponential(base: Duration, factor: Double = 2.0): Schedule[Clock, Any, Duration] =
+    delayed(forever.map(i => base * math.pow(factor, i.doubleValue)))
+
+ +
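
To get a feel for the numbers, here is what the formula above produces for a base of 10 ms and the default factor of 2.0 (plain Scala, nothing library-specific):

+ +
val baseMs = 10.0
+val delays = (0 until 6).map(n => baseMs * math.pow(2.0, n))
+println(delays.mkString(", "))  // 10.0, 20.0, 40.0, 80.0, 160.0, 320.0 (ms)
+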

I am not sure what the best practice is for setting the base and factor values. What is a reasonable or preferred practice here?

+",315865,,,,,43905.80764,What Is A Good Default Exponential Backoff Setting For Making Http Calls,,1,0,,,,CC BY-SA 4.0, +406545,1,,,3/14/2020 12:41,,1,73,"

I'm building a multiplayer game where anyone can add their own server, but there is only one central server, which contains the database where player information, like experience and gold, is stored. To authenticate as a game server, and thus have authorization to give gold to a player, you attach a JWT, issued by me, to your requests. However, whoever owns the server could use this token and boost up his own characters. My problem is how I can trust a game server request. To me, except by obfuscating the communication between a game server and the central one, there is not much to be done. But this problem reminds me of cryptocurrencies: would a blockchain help me here?

+",359644,,359644,,43904.54722,43904.54722,How to trust servers in a network where anyone can add its own,,0,6,,,,CC BY-SA 4.0, +406551,1,,,3/14/2020 23:47,,2,39,"

At our company we have a monolith PHP application which has been broken down into multiple (self developed) packages around the (self developed) framework package.

+ +

This application isn't a SAAS solution, so the application is installed at the client's domain when developing their websites.

+ +

For each package we have a structure like this:

+ +
  • StackOverflowController
    • StackOverflowPackageController
      • Controller
+ +

In the example above the StackOverflowController can be easily customized on the application side. +The StackOverflowPackageController is defined in the package and the Controller is defined in the framework.

+ +

Our biggest problem now is that we have quite a large number of different packages (with a large number of versions). +Since last year, a daily check is run to see if the code has changed, and if so a new version tag is created.

+ +

We are facing the following issues:

+ +
  • Due to the number of clients from over the years, there's an enormous number of different versions of our application active. So, when we fix an issue for client A, the package is not always upgradable and the error won't be fixed without fixing it again.
  • Some packages are tightly coupled to each other, so the application breaks when you update one of the packages but not the other.
  • We'd like to move onto SemVer, but we don't know exactly how, or who will be responsible for determining which version component to bump.
  • Also, when someone fixes a bug in v1, he needs to create a PR for every child branch and merge it, which is often forgotten.
  • When person A introduces a new feature and person B does too, in SemVer the next minor version has already been taken.
+ +

Conclusion: +How are you dealing with a non-SAAS solution divided into multiple packages, and managing when and where bugfixes or features should be merged?

+",359740,,90149,,43906.67917,43906.67917,How to deal with package management when having a monolith broken down in packages?,,0,0,,,,CC BY-SA 4.0, +406552,1,,,3/15/2020 1:08,,-2,208,"

A bakery shop has to provide a stream of muffins for customers. The muffins are made by a baker in the kitchen and placed on a conveyor belt. The conveyor belt carries the muffins to where the customers are waiting to buy them. +This scenario has been simulated using two processes: the baker and the customer, and a shared conveyor belt implemented as a circular array called conveyor, where each space in the array can hold one muffin. There are two shared general semaphores, empty and full, and a mutex +buffer_mutex. In this scenario, there are multiple bakers and a single customer.

+ +

The pseudo-code for the baker is as follows. The baker makes use of an integer variable in for noting the next available space on the conveyor belt.

+ +
  1. while(true){
  2.     muffin = makeMuffin(); // Create a muffin
  3.     wait(empty);
  4.     wait(buffer_mutex);
  5.     conveyor[in] = muffin; // Put the muffin on the conveyor belt
  6.     in = (in + 1) mod n;
  7.     signal(buffer_mutex);
  8.     signal(full);
  9. }
+ +

The pseudo-code for the customer is as follows. The customer makes use of an integer variable out for noting the next location on the conveyor belt that contains a muffin.

+ +
  1. while(true){
  2.     wait(full);
  3.     muffin = conveyor[out]; // Get a muffin from the conveyor belt
  4.     conveyor[out] = null;
  5.     out = (out + 1) mod n;
  6.     signal(empty);
  7.     eat(muffin); // Eat the muffin
  8. }
+ +

What will happen if the order of the semaphores in the customer is changed? +So, we would have signal(empty) at line 2 and wait(full) at line 6.
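
+ +

That is, the modified customer would read (same pseudo-code conventions as above):

+ +
  1. while(true){
  2.     signal(empty);
  3.     muffin = conveyor[out]; // Get a muffin from the conveyor belt
  4.     conveyor[out] = null;
  5.     out = (out + 1) mod n;
  6.     wait(full);
  7.     eat(muffin); // Eat the muffin
  8. }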

+",359744,,,,,43906.68611,what would happen if you signal() before wait()?,,2,0,,,,CC BY-SA 4.0, +406553,1,406557,,3/15/2020 3:03,,-4,170,"

Suppose we want to develop a small module (time needed: two weeks of one developer). Then what about this new (maybe?) pipeline:

+ +
  1. The test engineer starts working:
     • Think about all cases (including many edge cases) that we need to test.
     • Write them down as code using an automated testing framework.
     • At this stage, the testing code all fails, because the business code does not yet exist.
  2. After the test engineer finishes, the developer starts:
     • Develop the business code. During development, run the test code above. (Now the dev does not need to use things like Postman to manually test his code over and over again.)
     • The testing code may contain bugs, and the developer should fix them. (Since the testing code is mostly straightforward, it should not be challenging to fix.)
     • When all the test code passes, the code is almost done.
  3. The test engineer checks that the modification of the tests does not throw away things he wants to test.
  4. Done.
+ +

P.S. We are talking about E2E tests here, not unit tests. IMHO unit tests should be done by programmers (is that correct?).

+ +

P.S. We are a small team, so we may not be able to use the methodologies of big companies.

+ +

Is this methodology acceptable/wonderful/terrible/awful?

+",340897,,1204,,43906.59514,43918.40278,An idea about cooperation between development engineers and test engineers,,2,6,,,,CC BY-SA 4.0, +406554,1,406561,,3/15/2020 4:39,,-1,100,"

I have a problem in my app where I have many entities that can all reference each other in different ways. For example, I have a Job (e.g. build house) that I might assign to a team called ""Plumbers"" and a separate single user called ""Bob"". Jobs, Teams and Users are all entities with unique ids.

+ +

When I make a request for this data I want to be able to compose the depth of data I want, and whether my returned type should contain ids or objects. The goal is to still keep type safety for the return type.

+ +

For example I may just want the JobSummary, which will give me the name of the job, and the ids of the Users and the Teams assigned.

+ +

Alternatively I may want to request the full ""tree"" of data where IDs are resolved to actual entities. I'll call this the HydratedJob. I'm not sure if ""hydrated"" is the right term, but I've heard it used in this sense a lot. In this case the HydratedJob will contain a User object, and a Team object. That team object, in turn will contain a list of User objects.

+ +

I would really like to know if this is a pattern with a name that I could research more, and if anyone knows any pitfalls of this approach as I'm looking to integrate it in my app.

+ +

Below I've made an implementation of this in Kotlin to demonstrate:

+ +

All entities or entity IDs implement this:

+ +
interface Identifiable {
+    val value: String
+}
+
+ +

For the Job entity I create an interface that is either implemented by just the id, or the full entity (FlexibleJob). I call it flexible, because the job can either contain ids or objects for other entities. A type parameter specifies which of these must be provided.

+ +
interface IdentifiableJob : Identifiable
+data class JobId(override val value: String) : IdentifiableJob
+data class FlexibleJob<USER : IdentifiableUser, TEAM : IdentifiableTeam>(
+        val jobId: JobId,
+        val name: String,
+        val users: List<USER>,
+        val teams: List<TEAM>
+) : IdentifiableJob {
+    override val value = jobId.value
+}
+
+ +

I do the same for teams and users.

+ +
interface IdentifiableTeam : Identifiable
+data class TeamId(override val value: String) : IdentifiableTeam
+data class FlexibleTeam<USER : IdentifiableUser>(
+        val teamId: TeamId,
+        val name: String,
+        val users: List<USER>? = null
+) : IdentifiableTeam {
+    override val value = teamId.value
+}
+
+interface IdentifiableUser : Identifiable
+data class UserId(override val value: String) : IdentifiableUser
+data class User(val userId: UserId,
+                val name: String) : IdentifiableUser {
+    override val value = userId.value
+}
+
+ +

Then I use typealiases to make the list of type parameters easier to read (there could be about 10 type parameters, with multiple nesting of type parameters :O). Here I achieve my goal of being able to compose data types with any level of nesting I want.

+ +
typealias SummaryJob = FlexibleJob<IdentifiableUser, IdentifiableTeam> // No nested objects, just ids
+typealias SummaryTeam = FlexibleTeam<IdentifiableUser>
+typealias HydratedTeam = FlexibleTeam<User>
+typealias PartiallyHydratedJob = FlexibleJob<User, SummaryTeam>
+typealias HydratedJob = FlexibleJob<User, HydratedTeam> // All objects nested, no ids
+
+ +

Then below you can see that type safety works

+ +
fun main() {
+    val userId = UserId(""user-id"")
+    val user = User(userId = userId, name = ""Bob"")
+    val summaryTeam = SummaryTeam(
+            teamId = TeamId(""team-id""),
+            name = ""Plumbers"",
+            users = listOf(userId)
+    )
+    val hydratedTeam = HydratedTeam(
+            teamId = TeamId(""team-id""),
+            name = ""Plumbers"",
+            users = listOf(user)
+    )
+
+    // Fails with type mis-match compile error. Expecting User not UserId and expecting HydratedTeam not SummaryTeam. This is good type safety.
+    val attemptFullyHydratedJobForDisplay = HydratedJob(JobId(""123""),
+                                                              ""Build stuff"",
+                                                              listOf(userId),
+                                                              listOf(summaryTeam))
+
+    // Compiles
+    val fullyHydratedJobForDisplay = HydratedJob(JobId(""123""), ""Build stuff"", listOf(user), listOf(hydratedTeam))
+}
+
+",339098,,339098,,43906.16736,43906.16736,Is there a name for this pattern of composing a type safe return type from different levels of nested related entities?,,1,9,,,,CC BY-SA 4.0, +406565,1,406567,,3/15/2020 14:32,,-2,67,"

I am working for a small electronics company and I was assigned the task of re-inventing and re-writing the software for our product delivery flow, which gathers design components, verifies them, and creates a delivery tarball which later gets sent to the client. I am debating whether to choose Perl or Python for this, and am considering the pros and cons of both. I am not a software engineer by trade, hence I wanted to ask you for advice.

+ +

Some background info:

+ +

The old flow is an ancient, long Perl script which runs all the sign-off checks the moment the delivery is created; it is ugly, has been patched a thousand times already, and desperately needs to be replaced.

+ +

I plan to decentralize this such that each component is signed-off by an engineer responsible for the given component. The sign-off would amount to running a script which would kick-off a couple of EDA tools and parse their output to verify the status. Information about the status would be stored as a file containing some serialized data structure or an object.

+ +

The package-building script will only gather information about readiness of design components from the serialized files (is it signed-off? were the files modified since last sign-off? are there any waived errors?) and create a tarball.

+ +

I am considering the pros and cons of both solutions. I wanted to ask for some advice on this. Here are the things I have considered so far:

+ +
  • I am familiar with both languages at the same level, so no clear advantage here, except for Python being nicer to write in.
  • Almost all scripts in the company are written in Perl and this is the language that our engineers are familiar with. That said, we have almost no standardized libraries, so there's not much that I could reuse anyway.
  • Perl is a little bit outdated and has far fewer actively developed libraries compared to Python.
  • There is an interface allowing Perl to instantiate Python objects, however no easy way to go the other way around.
  • Parsing text files (EDA tool logs) is easier with Perl.
  • Fresh engineers coming from university are far more likely to know Python than Perl.
+ +

I want to make an informed choice since this piece of software will certainly be used for years to come. I would appreciate it if you could give me some more arguments than those listed above that I could consider before I make a decision.

+",359758,,,,,43905.65764,Choosing between perl and python for my application,,1,5,,43905.79375,,CC BY-SA 4.0, +406566,1,406578,,3/15/2020 15:41,,-2,192,"

So I've been an indie app developer for 2 years (I launched my first Android app at the end of 2017), and I've built a few apps since then, but all of them were simple apps.

+ +

Currently I'm planning my next project, and in this next app the user will store sensitive data (it'll be a calendar app). I'm planning on storing the data outside the user's device (I'll probably use one of Firebase's services) for backup purposes, and to sync data between different devices, so I'll need to encrypt and secure the data.

+ +

So I've been wondering: is it required of an app developer to learn information security (I literally don't have any experience or knowledge about information security), or is it enough to use libraries made by other people for encryption purposes?

+",359761,,,,,43907.55139,Is it required for an app developer to know information security?,,4,3,,,,CC BY-SA 4.0, +406569,1,406681,,3/15/2020 16:24,,0,67,"

Problem +

A streaming application should perform matching transitively i.e. if A == B & B == C then A == C

+ +

Current Implementation +

The application accepts domain objects in a streaming fashion and performs matching to filter out all duplicates. A scoring engine helps determine equality by assigning scores between two domain objects, which are considered equal if they score above a certain threshold. All equal objects are grouped, viz. a unique match group is created, and only the top-scoring object is considered for further processing.

+ +

Since the domain objects can flow out of sequence and / or randomly, the transitive relations / equality is not achieved as desired. Consequently, similar (or almost similar) objects create different match groups and duplicates are submitted to the downstream application, which creates some unacceptable business scenarios.

+ +

For e.g., if the threshold score for equality is ZERO, i.e. for any positive score two domain objects will be deemed equal, then considering the following scores (A & B = 50, A & C = -30, B & C = 70), the situations below arise:

+ +

Situation 1 +
Considering the sequence as A followed by B and C => +A will create its own match group, as it is processed first with no other matching objects available. +B will join A in the same match group (since it has a positive score of 50 with A). +C will join B in the same match group (since it has a positive score of 70 with B). [Note - The negative score between A & C will not be considered, as B & C have a positive score.]

+ +

This is desired where all objects joins same match group.

+ +

Situation 2 +
Considering the sequence as B followed by C and A => +B will create its own match group, as it is processed first with no other matching objects available. +C will join B in the same match group (since it has a positive score of 70 with B). +A will join B in the same match group (since it has a positive score of 50 with B). [Note - The negative score between A & C will not be considered, as B & A have a positive score.]

+ +

This is desired where all objects joins same match group.

+ +

Situation 3 +
Considering the sequence as C followed by A and B => +C will create its own match group, as it is processed first with no other matching objects available. +A will create its own match group (since it has a negative score of -30 with C). +B will join C in the first match group (since it has a positive score of 70 with C). [Note - Although B has a positive score with both A and C, it will join C's match group, as it has a higher score with C compared to A.]

+ +

This is not desired as two match groups are created, whereas all objects should have joined same match group, them being similar (or almost similar). The downstream system will receive two match groups where it expected only single thus causing business problems.

+ +

Current solution to overcome the transitive in-equality
+
In case of situation 3, the moment an object (i.e. B) has more than one positive score across more than one match group, it joins the match group with the highest score (i.e. the match group of C) and triggers re-processing of the remaining match group(s) (i.e. the match group of A) and all of their objects (i.e. A). Thus, when these object(s) (i.e. A) are re-processed, they find a positive match with B and join B in the same match group as B & C, which essentially puts all of them together in the same match group.

+ +

Limitations / Challenges faced +
Although the above solution (more of a workaround to me) works for smaller sets of data (a couple of hundred or thousands sometimes), it falls apart for larger sets of data, say when millions of objects are processed in an hour or so. The re-processing occurs multiple times, thus reducing the overall throughput of the system. +Also, the solution is not immune to race conditions, e.g. if the re-processed match group has more than one object, it can again potentially produce different match groups for similar objects.

+ +

Expected outcome +

An algorithm which ensures that out-of-order processing of similar (or nearly similar) objects still puts them in the same match group. Addressing the race condition would be an add-on, however not a must-have as of now.

+ +
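
To make the expected outcome concrete, here is a tiny batch-mode illustration (Python, illustrative only): with all scores known up front, transitive grouping is just connected components over the positive-score pairs, regardless of arrival order. Achieving the same result in the streaming, out-of-order setting is what I cannot get right:

+ +
# Batch illustration of the desired transitive grouping (union-find).
+scores = {("A", "B"): 50, ("A", "C"): -30, ("B", "C"): 70}
+parent = {x: x for x in "ABC"}
+
+def find(x):
+    while parent[x] != x:
+        x = parent[x]
+    return x
+
+for (a, b), score in scores.items():
+    if score > 0:                     # threshold of zero, as above
+        parent[find(a)] = find(b)     # union: a and b match transitively
+
+groups = {}
+for x in parent:
+    groups.setdefault(find(x), []).append(x)
+print(list(groups.values()))          # [['A', 'B', 'C']] - a single group
+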

EDIT 1 - +
Similarity Score +
Every domain object has a few properties which are considered for equality in a weighted fashion, i.e. property X has the highest weight while Y & Z have lower weights. Thus, if two domain objects have a similar X but not Y & Z, the similarity score will be higher compared to objects which have Y & Z similar but not X.

+ +

P.S.: I have tried isolating the issue and describing it in its minimalist form. If required, feel free to comment and I can help with the relevant details.

+",163748,,163748,,43907.17778,43908.13333,Transitive matching in streaming application,,1,2,,,,CC BY-SA 4.0, +406570,1,406575,,3/15/2020 16:26,,0,279,"

Perhaps I'm not defining my pytests right, but I'm seeing this: +[screenshot: flake8 D103 missing-docstring warnings on my test functions]

+ +

Does pep8 demand a docstring for each unit test function too? I can't find pep8 docs specific to this; wondering if pep8/flake8 is unit-test aware?

+ +

I have many tests, so it might be kind of a mess to have docstrings, but maybe I should have fewer tests with many asserts instead of many small tests?

+ +

For example I have specific tests like:

+ +
test_method_one_returns_x_if_y_invalid()
+test_method_one_returns_z_if_y_valid()
+
+ +

But perhaps it should be more like:

+ +
test_method_one_validates_y()
+... then assert both cases in this single method
+
+ +

So if I have big tests with many cases grouped under them, docstrings would be more feasible. Does pep8 or another convention have anything to say about this?
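
+ +

For completeness, the workaround I know of (setup.cfg syntax; this assumes the D103 code comes from the flake8-docstrings plugin) is to silence the check for test files, but I would rather follow a convention than suppress it:

+ +
# setup.cfg - purely illustrative
+[flake8]
+per-file-ignores =
+    tests/*: D103
+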

+",118738,,,,,43905.78403,pep8 (D103) I need docstrings for my unit test functions too?,,1,2,,,,CC BY-SA 4.0, +406573,1,,,3/15/2020 18:19,,3,136,"

I am writing a bytecode interpreter in C for a simple programming language.

+ +

I want to add GUI capabilities to the language. As a first step, I decided to bake into the interpreter a wrapper for the GTK library. It is exposed to user code as a builtin module.

+ +

My problem is that GTK works by taking control of the thread: once you call the C function g_application_run, the thread enters an endless listening loop inside GTK.

+ +

Why is this a problem? Because while we are ""stuck"" in this GTK loop, the bytecode interpretation loop of the interpreter is frozen. Pseudo bytecode to demonstrate:

+ +
0 SOME OPCODE
+1 SOME OTHER OPCODE
+2 OPCODE INVOKING GTK WRAPPER FUNCTION <-- GTK invoked in C level and enters an endless loop
+3 MORE OPCODES  <-------- This is never executed
+
+ +

My first thought to combat this was to design my GTK wrapper in a ""close to 1:1 mirroring"" style to the way the C library is supposed to be used. For example, in pseudo code:

+ +
import gtk
+func app_code() {
+    # ... app code ...
+}
+gtk.application_run(app_code)
+
+ +

This supposedly solves the problem - we don't care that we never exit the GTK endless loop inside g_application_run, because it is now its responsibility to invoke our own user code.

+ +

The reason this won't work: the way user functions are invoked inside the interpreter is by

+ +
  1. Pushing a new stack frame on the call stack
  2. Setting the instruction pointer to point to the beginning of the called function's bytecode
  3. In the next iteration of the bytecode interpretation loop, the new function will naturally start running
+ +

The GTK wrapper can easily do steps 1 and 2. But - step 3 will never happen, because we are stuck inside the GTK infinite loop. The interpreter loop never begins a new iteration.

+ +

So my question is: what are the possible options to solve this issue? Preferably, what are examples of how this issue is dealt with in existing projects?

+ +
  • Start the GTK loop on a different C-level thread?
  • Forsake GTK and look for a less ""framework-y"" and more ""library-y"" type of GUI toolkit, one that doesn't take thread control?
  • Any other options? (One mechanism I've been eyeing is sketched below.)
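
+ +

The mechanism behind that last bullet, sketched out (g_idle_add is real GLib API; the interpreter names are mine): invert control, let GTK own the loop, and drive the interpreter one bounded slice at a time from an idle callback:

+ +
#include <gtk/gtk.h>
+
+/* Hypothetical interpreter API: runs a bounded number of bytecode
+ * instructions and returns FALSE once the program has finished. */
+extern gboolean interpreter_step(gpointer vm);
+
+static gboolean tick(gpointer vm) {
+    /* Returning TRUE keeps this idle source installed, so GTK keeps
+     * calling us between events; FALSE removes it. */
+    return interpreter_step(vm);
+}
+
+static void on_activate(GtkApplication *app, gpointer vm) {
+    gtk_window_present(GTK_WINDOW(gtk_application_window_new(app)));
+    g_idle_add(tick, vm); /* the interpreter now runs inside GTK's loop */
+}
+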
+",121368,,121368,,43905.76736,43906.84097,How can one bake a GUI framework inside an interpreter without freezing the interpreter?,,1,2,,,,CC BY-SA 4.0, +406579,1,,,3/15/2020 20:36,,5,336,"

I'm writing a library which includes some wrapper code for some underlying API. In that underlying API, there are two concepts, ""foo"" and ""bar"", whose literal meanings in the English dictionary are roughly swapped relative to their meanings in the API, i.e. it uses ""foo"" for something that's closer to ""bar"" and vice versa.

+ +

(For the curious: It's basically ""thread index"" vs ""thread id"" in CUDA. Which one do you suppose is 3-dimensional and which is a linearization of the other?)

+ +

Now, in my wrapper code, should I...

+ +
  • Stick with their incorrect and confusing terminology?
  • Switch it back, explaining profusely in comments when I'm doing that?
  • Go for other terms, which won't get people mixed up, but introduce more ""concept clutter""?
+ +

I'm not asking for input based on your experience with my specific case, but rather with similar situations you may have faced. Also, both a recommendation and unexpected considerations would be useful.

+ +

Note: The wrapper code does not wrap all of the underlying API, so it is not the case that you only ever go through my code. It is quite possible users will encounter the mis-named terms in other contexts. On the other hand, uses of my wrappers are always through a namespace, i.e. quux::foo or quux::bar.

+",63497,,63497,,43906.41875,43906.91667,Stick with mis-named concepts or rename them in wrapper code?,,4,6,2,,,CC BY-SA 4.0, +406583,1,,,3/16/2020 0:10,,-1,1260,"

I have a backend Spring Boot API that should have one(?) endpoint that returns some statistics to display in a frontend. These statistics are calculated from data that comes from two different databases.

+ +

Now since I'm using Spring Boot, I understand that each Controller should have a Service and each Service should have one Repository. How bad would it be to have a service with two repositories that merges the data from them? Would it be better to have two different controllers with their respective services and repositories, and then aggregate the results on the front-end side after calling these two endpoints?

+ +
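
To make the "one service, two repositories" option concrete, nothing in Spring prevents this shape (all names illustrative):

+ +
@Service
+public class StatisticsService {
+
+    private final OrderRepository orders;       // talks to database #1
+    private final CustomerRepository customers; // talks to database #2
+
+    // Spring wires both repositories via constructor injection.
+    public StatisticsService(OrderRepository orders, CustomerRepository customers) {
+        this.orders = orders;
+        this.customers = customers;
+    }
+
+    public Statistics computeStatistics() {
+        // Merge data from both sources into one result object.
+        return new Statistics(orders.count(), customers.count());
+    }
+}
+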

What other alternatives do I have for this scenario?

+",271435,,316848,,43966.41389,43996.41667,Is it okay to have one Service with two different Repositories in Spring Boot MVC?,,1,1,,,,CC BY-SA 4.0, +406584,1,406596,,3/16/2020 1:04,,-1,74,"

I have been learning backend development for quite a while now, and I decided to build a bigger project using Express.js and the MVC architecture. The project is basically a RESTful API with Vue.js in the frontend (it may contain some real-time communication later as well). Since I am not from a CS background, I have stumbled upon multiple resources about how to architect my application, and it felt somewhat overwhelming.

+ +

To make it clear, I will tell you what I understood regarding the MVC, what I wanted to do, and I will show you a screenshot of my current design, all I need is a friendly criticism with guidance from an experienced developer without complicating stuff.

+ +

While I was building my application, I anticipated that as my code grows, it would look uglier and be harder to maintain, so I stopped and spent a couple of days thinking about my application's architecture before jumping back into writing the code.

+ +

The MVC architectural pattern splits the code into three logical components: the model, the view, and the controller. The view is the layer with the visuals, for example the UI, or JSON for a RESTful API request. The model is the layer that interacts with the database; it sometimes contains the ""business logic"" of the application. Finally, the controller just receives the request, calls a method from the model, and returns the response to the user. I found that it is debatable whether to add the business logic inside the model or inside the controller.

+ +

In all cases, when the application code grows in size, it becomes hard to maintain, since the controller/model grows accordingly, resulting in a big fat controller/model that is harder to maintain and test. Also, it doesn't make sense to me to add all the business logic to either the controller or the model. Thus, we could make a layer (a service layer) just between the model and the controller that contains our business logic in a SOLID way; we could also have multiple utility classes.

+ +
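
To make the layering concrete, this is roughly the shape I mean (Express.js; all file, function, and error-class names are illustrative):

+ +
// routes/user.routes.js - thin routing layer
+router.get('/users/:id', userController.getUser);
+
+// controllers/user.controller.js - HTTP concerns only
+exports.getUser = async (req, res, next) => {
+  try {
+    const user = await userService.getUser(req.params.id); // delegate
+    res.json(user);                                        // the "view" of a REST API
+  } catch (err) {
+    next(err);                                             // error middleware
+  }
+};
+
+// services/user.service.js - business logic lives here
+exports.getUser = async (id) => {
+  const user = await userModel.findById(id);               // model/data access
+  if (!user) throw new NotFoundError('user');              // a business rule
+  return user;
+};
+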

Here's a screenshot with the design I came up with. This post is open for anyone who wants to share their ideas about the MVC architecture and provide some guidance, resources, or useful criticism. This is my first draft; please correct me if you see something that is wrong or could be done better.

+ +

Regards, Hazem

+ +

Note: I have posted the same question on stackoverflow.

+ +

+",359786,,,,,43906.43056,Need some criticism for my backend app Architecture,,1,2,,,,CC BY-SA 4.0, +406587,1,,,3/16/2020 3:42,,-1,89,"

I sometimes write Java code that looks like:

+ +
success: {
+    fail: {
+        if (...) break fail;
+        // some code
+        if (...) break fail;
+        // some code
+        if (...) break fail;
+        // some code
+        break success;
+    }
+    // failure handling code
+}
+
+ +

Is this bad practice? I could extract this to a method that returns an Optional<SomeResultClass> and then call that method from the main method, but I'm worried that this will be bad for performance.
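
+ +

For comparison, the extraction I am considering looks roughly like this (same ... placeholders as above); my performance worry is about the extra call and the Optional allocation:

+ +
Optional<SomeResultClass> tryCompute() {
+    if (...) return Optional.empty();   // was: break fail
+    // some code
+    if (...) return Optional.empty();   // was: break fail
+    // some code
+    return Optional.of(result);         // was: break success
+}
+
+void caller() {
+    Optional<SomeResultClass> r = tryCompute();
+    if (!r.isPresent()) {
+        // failure handling code
+    }
+}
+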

+",,user359787,,,,43906.67153,Is it bad to break out of a labelled block in Java?,,1,4,,,,CC BY-SA 4.0, +406589,1,,,3/16/2020 6:40,,1,143,"

I have read several times that a microservice should be an independent software unit. +But what does that mean exactly, and is it really achievable for every business case? Does it mean that I can run a microservice standalone, without any other microservice, and use the full functionality of that microservice? Or do I have the wrong idea of independent microservices?

+ +

A ProductCatalog microservice for example:

+ +

Yes, this microservice can run independently. I can start it and perform different CRUD operations on my products without the need to run another microservice. Further, I can also use this service to define different product categories and so on. Of course, the microservice has its own database. This microservice has no dependencies on other services and stands on its own.

+ +

The same goes for a Customer microservice, which stores the corresponding data of the customers of the system. Further, it is also responsible for user authentication.

+ +

But what about a Checkout microservice?

+ +

This microservice performs a checkout to purchase one or more products for one user with a given paymethod. To perform a checkout, the service requires a request containing the products, the customer and maybe the preferred paymethod. When running this service independently, it would be possible to perform a checkout for products which may not exist in the ProductCatalog, as there is no dependency on the ProductCatalog microservice and therefore no verification is possible. This would or could lead to a data integrity problem. If there were a verification against the ProductCatalog microservice, the Checkout microservice would not run independently.

+ +

So my question is whether this is what is meant when saying that microservices should be independent units of software. Or am I missing something here?

+ +

Personally, I think that the microservices should run independently. This makes testing and development much easier. However, I am concerned about data integrity and the fact that this may result in redundant data. For example, the transaction history in the Checkout service: the products there may carry different or less data than the products in the ProductCatalog microservice, since not all data is required for the checkout itself. To make this clear: transaction history means that customer X has bought products [A,B,C] with paymethod YZ.

+ +

PS: I am aware that the validation can take place in an API Gateway / Aggregator or similar. The API Gateway / Aggregator is a facade for bundling the microservice backends. The clients communicate with the API Gateway and not directly with the microservice backends.

+",357282,,,,,44198.03958,Microservice should be an independent software unit - Up to which level?,,4,4,,,,CC BY-SA 4.0, +406591,1,406592,,3/16/2020 8:02,,46,6980,"

While going over a Java book I came across this phrase:

+ +
+

Different JVMs can run threads in profoundly different ways.

+
+ +

While it's completely understandable to me that code can behave differently depending on the underlying JVM implementation, it does bring up the question:

+ +

Why are there multiple different implementations of JVM in the first place?

+ +

Why might I, as a developer, be dissatisfied with the official JVM implementation that Oracle provides, and decide to build a different one?

+",295696,,,,,43908.20347,Why are there multiple different implementations of JVM?,,4,8,6,,,CC BY-SA 4.0, +406597,1,,,3/16/2020 10:25,,-1,42,"

Can anybody point me to the official source where it is explained what operators and operands are in Halstead metrics for code? I would prefer the original paper by Halstead. Please don't post the Wikipedia link, as anyone can edit the page. +My original question is: let's say there is a function ""fun"" taking one value as an input. In the expression

+ +
x = a + fun(b)
+
+ +

is fun an operator, an operand, or both? And is ""()"" one operator, or are ""("" & "")"" two different operators?

+",359804,,,,,43906.66806,What exactly are the Operators and Operands in Halstead Metrics?,,1,2,,,,CC BY-SA 4.0, +406600,1,,,3/16/2020 11:43,,1,40,"

We have 800-900 services we expose via an ESB. Each service is a web app hosted on Tomcat servers. We have 4 tomcat servers per group of services. Our services are split into 4 groups.

+ +

Each service (web app) is free to include external libraries in its WEB-INF\lib folder.

+ +

The server class path tomcat\lib only contains a few hand-picked libraries which are common to all services. Recently we added one more library to tomcat\lib; however, it has dependencies on 4 other libraries, so a total of 5 libraries were added to the server class path.

+ +

The developers will see that 5 libraries are added to the server class path, and will start using all 5 libraries, whereas the intention was only to expose 1.

+ +

How is this best handled?

+ +

The platform team is thinking of adding an ""internal"" service as a web app, letting this web app contain the 4 dependent libraries, and then using an HTTP call from the tomcat\lib jar to the web app, thereby hiding the 4 dependent libraries. This however introduces new errors that must be handled (timeouts, HTTP errors, etc.).

+",294971,,,,,43906.48819,"how to handle external shared libraries, which we do not want to expose",,0,0,,,,CC BY-SA 4.0, +406605,1,,,3/16/2020 13:41,,0,35,"

I'm having trouble with how to structure a proper game server using the Python 3 socket library. My game is a server-side game, where the client sends basic commands to the server, which are interpreted and run by the server. The server then returns a GameState object to the client.

+ +

Since the game is a server-side game, the server-side part runs a Game instance, which includes a game loop and the game logic.

+ +

Right now, the code works, but I'd like the Server class and the Game class to be two independent units (or classes), which they are not right now. That way I would be able to simply copy-paste my game server class to other projects without modification.

+ +

Right now my setup is as such (""->"" meaning ""owns instance"" of):

+ +

Server -> Game -> (game objects)

+ +

However, i would like it to be as such:

+ +

Game -> Server

+ +

Game -> Game objects

+ +

My goal is to make the Server an independent unit, which can be instantiated and used by multiple game objects; ""decouple"" Server and Game, so to speak.

+ +

I will provide code from Server, Client and GameInstance below, and describe why I haven't been able to solve this problem yet:

+ +
import sys
+import pickle
+from _thread import *
+from ..Settings.Settings import *
+from ..Events.EventBus import EventBus
+from ..Events.Event import Event
+from ..Events.EventType import EventType
+from ..API_Entities.Tank import Tank
+from ..Events.IEventListener import IEventListener
+from ..Clients.ConnectedClient import ConnectedClient
+from .GameState import GameState
+from ..CommandIntepreter.ClientCommandInterpreter import ClientCommandInterpreter
+
+class Game():
+    def __init__(self):
+        pygame.init()
+        pygame.mixer.quit()
+        self.clock = pygame.time.Clock()
+
+        self.state = GameState.Paused
+        self.clients = {}
+
+        self.__EventBus = EventBus()
+        self.__EventBus.InitializeEventBus()
+
+        start_new_thread(self.GameLoop,())
+
+    def StartGameLoop(self):
+        self.state = GameState.Running
+
+    def PauseGameLoop(self):
+        self.state = GameState.Paused
+
+    def GameLoop(self):
+        while True:
+            if(self.state == GameState.Running):
+                self.dt = self.clock.tick(FPS)
+                self.__EventBus.ProcessEvents()
+
+    ### NOT USED FOR NOW
+    def UpdateClients(self, clients : list):
+        self.clients = clients
+
+    def AddClient(self, id : int) -> None:
+        NewClient = ConnectedClient(id)
+        self.__EventBus.SubscribeMultiple([EventType.Movement,
+            EventType.Rotation,EventType.Shooting],NewClient.Tank)
+        self.clients[id] = NewClient
+
+    def RemoveClient(self, id : int) -> None:
+        DisconnectedClient = self.clients[id]
+        self.__EventBus.UnsubscribeAll(DisconnectedClient.Tank)
+        del self.clients[id]
+
+
+    def ProcessServerCommands(self, msg : str):
+        if msg == ""pause"":
+            self.PauseGameLoop()
+        elif msg == ""start"":
+            self.StartGameLoop()
+        elif msg == ""status"":
+            print(self.state)
+        elif msg == ""players"":
+            print(len(self.clients))
+        elif msg == ""sub"":
+            __tank = Tank(1,360,0)
+            self.__EventBus.Subscribe(EventType.GameState,__tank)
+        elif msg == ""post"":
+            self.PostEvent(Event(EventType.GameState,""Heeey!"",0))
+        elif msg == ""move"":
+            self.PostEvent(Event(EventType.Movement,""Moving!"",0))
+        elif msg == ""reply"":
+            self.PostEvent(Event(EventType.Output,""Hi Server!"",0))
+        elif msg == ""map"":
+            self.PostEvent(Event(EventType.Output,""map"",0))
+
+    def PostEvent(self,_Event : Event): 
+        if self.state == GameState.Running:
+            self.__EventBus.PostEvent(_Event)
+
+    def InterpretUserInput(self, UserInput, id : int):
+        ClientCommandInterpreter.InterpretateCommand(UserInput,id)
+        return ""Hej : Fra serveren""
+
+    def AddServerToEventBus(self, _Server):
+        self.__EventBus.Subscribe(EventType.Output, _Server)
+
+    def ChangeMap(self):
+        return ""New map""
+
+
+# TODO
+# Add Running,Pause to __EventBus
+# ADD list of EventTypes to subscribe to in IEventListener 
+# Call InterpretUserInput from Server - add method here
+
+ +
import pickle
+from _thread import *
+import socket
+from ..Clients.ConnectedClient import ConnectedClient
+from ..Settings.Settings import *
+from ..Game.Game import Game
+from ..Events.EventBus import EventBus
+from ..Events.IEventListener import IEventListener
+from ..Events.Event import Event
+from ..Events.EventType import EventType
+import typing
+
+class Server(IEventListener):
+    def __init__(self,Ip_address : str, Port : str):
+        self.Port = Port
+        self.Ip_address = Ip_address
+        self.Connections = {}
+
+        self.Game = Game()
+        self.Game.AddServerToEventBus(self)
+
+        self.Running = True 
+
+        self.Server = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
+        self.Server.setsockopt(socket.SOL_IP,socket.SO_REUSEADDR,1)
+
+    def StartServer(self):
+        try:
+            self.Server.bind((self.Ip_address,self.Port))
+        except socket.error as e:
+            str(e)
+
+        self.Server.listen()
+        start_new_thread(self.CommandLoop,())
+        print('Waiting for connection. Server started at ip:',self.Ip_address)
+
+    def ServerLoop(self):
+        CurrentPlayerId = 0
+
+        while self.Running:
+            print(""serverloop running"")
+            conn, addr = self.Server.accept()
+            print(""Connected to"", addr)
+            ###
+            start_new_thread(self.__ThreadedClient,(conn,CurrentPlayerId))
+            CurrentPlayerId += 1
+
+    def CommandLoop(self):
+        while True:
+            Command = input(""Command : "")
+            self.Game.ProcessServerCommands(Command)
+
+    def __ThreadedClient(self, connection, playerid : int): 
+        self.__AddConnection(connection,playerid)
+        ### dummy for test
+        connection.send(pickle.dumps(""start""))
+
+        while True:
+            data = self.RecieveFromClient(connection)
+            if not data:
+                break
+            else:
+                self.SendToClient(
+                        connection,self.Game.InterpretUserInput(data,playerid))
+
+        ## On disconnection
+        self.__RemoveConnection(playerid)
+        print(""Disconnected : "",connection)
+
+    def __AddConnection(self, connection, playerid : int):
+        self.Connections[playerid] = connection
+        self.Game.AddClient(playerid)
+
+    def __RemoveConnection(self,playerid : int):
+        del self.Connections[playerid]
+        self.Game.RemoveClient(playerid)
+
+    def __PostEventToGame(self,command : str):
+        _event = None
+
+    def SendToClient(self, connection : socket, data):
+        try:
+            connection.sendall(pickle.dumps(data))
+            return True
+        except:
+            return False
+
+    def SendToClientById(self, playerid : id, data):
+        return self.SendToClient(self.Connections[playerid],data)
+
+    def SendToAllClients(self, data):
+        for key in self.Connections:
+            self.SendToClient(self.Connections[key],data)
+
+    def RecieveFromClient(self, connection : socket):
+        try:
+            return pickle.loads(connection.recv(2048))
+        except Exception as e:
+            return False
+
+    def RecieveFromClientById(self, playerid : id):
+        return self.RecieveFromClient(self.Connections[playerid])
+
+    def ProcessEvent(self, ET : EventType, EV : Event):
+        if ET == EventType.Output:
+            if EV.msg == ""map"":
+                print(""Server : Changed map"")
+                self.SendToAllClients(self.Game.ChangeMap())
+
+
+
+ +
import socket
+import pickle
+from .ip_address import Ip_address
+
+#Client
+class Network:
+    def __init__(self,serverip):
+        self.client = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
+        self.serverip = serverip
+        self.port = 5555
+        self.addr = (self.serverip,self.port)
+        self.p = self.connect()
+
+    def connect(self):
+        try:
+            self.client.connect(self.addr)
+            return pickle.loads(self.client.recv(2048))
+        except:
+            print(""server not found!"")
+
+    def getInit(self):
+        return self.p
+
+    def send(self,data):
+        try:
+            self.client.send(pickle.dumps(data))
+            return pickle.loads(self.client.recv(2048))
+        except socket.error as e:
+            print(e)
+
+
+ +

Problem 1: In the Server.py class, the ThreadedClient() func handles the connection (transfer of data) between the Server and the Client. The Client expects the Server to reply with data from the Game class; however, the Server must not access data from Game if I want total decoupling.

+ +

Problem 2: The Server, Client and Game all run on different loops, and are thus not synchronous.

+ +

Problem 3: I want to use Server.SendToClient(params) to communicate from Game -> Server -> Client; however, the Server needs to reply to the Client right away from within _ThreadedClient(params), and can therefore not rely on the SendToClient(params) function.

+ +

I have thought of the following solutions; however, I would like to know a good and proper way to solve this problem:

+ +

Solution 1: Have some kind of data-queue object in the Server, to handle the asynchrony between Client, Server and Game. (A rough sketch follows below.)

+ +
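
A rough sketch of what I mean (standard-library queue.Queue; everything else is pseudo-code):

+ +
import queue
+
+# The Server only moves bytes <-> queue items; the Game drains/fills the
+# queues on its own loop. Neither class needs a reference to the other.
+incoming = queue.Queue()   # (player_id, data) received from clients
+outgoing = queue.Queue()   # (player_id, data) waiting to be sent
+
+# Server thread (pseudo-code):
+#   incoming.put((player_id, self.RecieveFromClient(conn)))
+#   player_id, data = outgoing.get(); self.SendToClientById(player_id, data)
+#
+# Game loop (pseudo-code):
+#   while not incoming.empty():
+#       player_id, data = incoming.get_nowait()
+#       outgoing.put((player_id, self.InterpretUserInput(data, player_id)))
+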

Solution 2: Use the ThreadedClient(params) func to just ping between the Client and Server to keep up the connection. The Server.SendToClient(params) could then be modified to handle ""real"" data transfer between Client and Server.

+ +

Solution 3: Create a GameInstanceInterface used by the Server, to handle function calls from Server to Game. It would not be 100% decoupled, however the Server with the GameInstanceInterface could be easily copy-pasted to other games.

+",131834,,,,,43906.57014,How to properly structure a Game Server (using Python3 socket),,0,0,,,,CC BY-SA 4.0, +406607,1,406624,,3/16/2020 15:23,,2,133,"

I'm trying to build a distributed system and have been researching how best to manage dependencies between services. The common example in tutorials is an ordering system - let's say I have a Catalog, Order and Account service.

+ +

The Order service needs to accept an order containing a number of items, cost up the total value, do a credit check against the account, and complete the order assuming all is okay.

+ +

This single action requires data from the other two services in a synchronous fashion (e.g. we need to look up the cost of each catalog item since we can't trust the user to have not tampered with it, and we need to validate that the user has sufficient credit), and I've experimented with ways of dealing with this:

+ +
  1. Maintain a persistent cache of the required data in the Order service, which is kept synchronised using messaging (e.g. events in RabbitMQ or Kafka such as itemCreated or accountUpdated). The Order service contains a list of accounts with only the information it requires (e.g. the current credit), and a list of items (again, with a price and quantity in stock) to allow it to work autonomously. I've seen this referred to as a 'data-pump' architecture (a rough sketch of the event handlers follows after this list).
  2. Synchronous calls (e.g. REST/RPC) from the Order service to the Catalog and Account services for it to verify the information.
  3. Another service (I think sometimes referred to as an aggregation service) sits above these three services, and aggregates the data from the Catalog and Account services into a format that the Order service can accept without doing the query itself.
+ +
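
For option 1, the Order service would hold something like the following local read model, updated by the message-bus consumers. This is only a sketch of what I have in mind; the event and member names are invented:

+ +
using System.Collections.Concurrent;
+
+// Local read model inside the Order service, kept in sync by events
+// (e.g. itemCreated / itemUpdated) instead of by synchronous calls.
+public class CatalogCache
+{
+    private readonly ConcurrentDictionary<string, decimal> prices =
+        new ConcurrentDictionary<string, decimal>();
+
+    // Invoked by the RabbitMQ/Kafka consumer when an event arrives.
+    public void OnItemCreated(string itemId, decimal price) => prices[itemId] = price;
+    public void OnItemUpdated(string itemId, decimal price) => prices[itemId] = price;
+
+    // Used while validating an incoming order.
+    public bool TryGetPrice(string itemId, out decimal price) =>
+        prices.TryGetValue(itemId, out price);
+}
+
+ +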

Each method has issues and I'm having difficulty deciding if any are correct:

+ +
    +
  1. A lot of data will end up being cached. In the future, an order may also depend on a new Discount service - do I cache all of its data as well? My actual domain already depends on over 8 other microservices, and it is in its infancy. Would this new Discount service also need to cache catalog items?
  2. +
  3. This seems to go against some chief use-cases for independent and autonomous microservices - I've simply taken a monolithic application and swapped fast in-process calls for slower HTTP calls. If I change one service, I am likely to need to update another. If either the Catalog or Account service goes offline, the Order service cannot accept orders.
  4. +
  5. This technique simply moves many of the problems of option 2 to another layer.
  6. +
+ +

I like the first approach due to its simplicity and resilience - if either of the other services goes offline, I can still take an Order. However, I am not sure I can justify the amount of data 'duplication' (I realise it's not strictly duplication) - is caching all of that information for validation (and potentially many times over for different services) acceptable practice? I can foresee the catalog being cached in many different services, for example.

+ +

Thanks in advance!

+",359815,,,,,43936.75,Synchronous communcation vs caching in microservices,,2,2,0,,,CC BY-SA 4.0, +406608,1,406617,,3/16/2020 15:47,,1,166,"

Let's assume I have an attribute called Ticket_Case with a predefined set of values (for the sake of argument, say 1 to 20) that model different real-life use cases. It is an attribute of the ticket itself.

+ +

In a different context of the system, I need to generate different messages based on the ticket case. These specific messages are used in this one place, and nowhere else.

+ +

The pragmatic solution would be to create a huge switch/case function that maps a case to a message. The less pragmatic, but also less error-prone, solution would be to attach the message directly to the Ticket_Case object in some way. Okay, sure, that works. It muddles two different contexts a bit, but that's ok.

+ +

But now assume we have a growing number of contexts. They don't really have a lot in common, so code reuse is not an option. We could group them all together in the Ticket_Case object itself, but that puts a lot of responsibility into that object. The other alternative is a huge switch/case method at every context boundary to translate the Ticket_Case into the corresponding object; that alone invites many easy-to-miss bugs if a new Ticket_Case is ever introduced.

+ +
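
To make the second alternative concrete, the per-context mapping I'm describing would look something like this C# sketch (the enum values and messages are invented), with a completeness check so that a newly added Ticket_Case fails fast instead of silently falling through:

+ +
using System;
+using System.Collections.Generic;
+
+public enum TicketCase { Lost, Damaged, Delayed /* ... up to ~20 values */ }
+
+// One mapping per context; the static constructor verifies completeness.
+public static class NotificationMessages
+{
+    private static readonly Dictionary<TicketCase, string> Messages =
+        new Dictionary<TicketCase, string>
+        {
+            [TicketCase.Lost]    = ""We are sorry, your item was lost."",
+            [TicketCase.Damaged] = ""We are processing your damage claim."",
+            [TicketCase.Delayed] = ""Your ticket is being delayed."",
+        };
+
+    static NotificationMessages()
+    {
+        foreach (TicketCase c in Enum.GetValues(typeof(TicketCase)))
+            if (!Messages.ContainsKey(c))
+                throw new InvalidOperationException($""No message mapped for {c}"");
+    }
+
+    public static string For(TicketCase c) => Messages[c];
+}
+
+ +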

So my question is: what is an acceptable way to deal with these ""huge enums"" that have correct, yet different, uses in different places of the system, without either letting them grow out of control internally or creating a lot of brittle mapping code?

+",328802,,,,,43908.33403,How to deal with huge enums that internalize a lot of different contexts without instead creating brittle mapping code in said contexts?,,4,1,,,,CC BY-SA 4.0, +406621,1,,,3/16/2020 17:15,,3,261,"

We have a scenario where we'd like to use two branches in our git workflow, otherwise known as develop and master.

+ +

The current flow is as follows:

+ +
    +
  • Create feat-branch - this branch will be based off of master.
  • +
  • Make your changes in feat-branch, then finally commit your changes, and push them upstream to the remote feat-branch.
  • +
  • Finally, the goal is you will eventually make two Pull Requests (via GitHub). One will be: develop <- feat branch and the other will be master <- feat branch
  • +
+ +

This works perfectly fine for merging into the master branch, since the feature branch already branches off of master (aside from having to git pull origin master every now and then when you are out of date, but this is expected; I personally like to rebase my feature branches as well).

+ +

However, the idea is you will be making a Pull Request to develop first (before you even make a secondary Pull Request to merge to master).

+ +

Merging to the develop branch becomes dicey at this point, because each contributor is effectively making the exact same PR to both develop and master, which creates different merge commit hashes in each branch for the same changes.

+ +

However, since everyone branches off of master, the merge commits that end up in master eventually get merged back into the develop branch over time... this causes PRs to display erroneous differences which essentially say that the same changes are being made by another person.

+ +

I'm guessing this is a combination of how commit hashes work and the fact that every merge commit created after a Pull Request is merged into master ends up being included in a later PR to the develop branch.

+ +

At this point I'm kind of just looking for maybe some sort of general direction or ideas regarding resolution, whether it be an entirely different way to approach the 2 branch model or something different altogether.

+",359822,,,,,43908.32431,Git commit hashes and Git merging between branches,,2,6,1,,,CC BY-SA 4.0, +406622,1,,,3/16/2020 17:18,,0,22,"

I'm not sure how to improve this architecture, involving view models and Koin:

+ +
    +
  • Typical purchase flow with several steps, each is a screen / fragment (using the nav component)

  • +
  • You can navigate back and forward between the steps. The entered data should be kept. So if e.g. you are in step 2, navigate back and forward, the data you entered in 2 is still there. This means it needs a (for now only in memory) shared data storage.

  • +
+ +

So far I actually have it working, but I'm not happy with it. I created a dependency OrderHolder which contains the order (all the user inputs). The view models manipulate this. So far so good. But my problem is that the contained order is optional, and I want to instantiate the step view models with a non-optional order.

+ +
    +
  • Why is it optional? Because OrderHolder is a global dependency, so when it's created, there's no ongoing order. And I can't instantiate the order without initial data (like the selected product ids).

  • +
  • Why is this bad? Because the steps only make sense with an ongoing order. They should be instantiated only with an existing order. Otherwise everything is littered with ""safe access"" or throwing ""illegal state: no order"" exceptions.

  • +
  • So the solution would be probably to have some sort of intermediate / factory, that creates view models with an OrderHolder containing a non optional order.

  • +
  • But this means I have to create the factory at runtime, when I create the order. So I need another factory? And how do I access the created factory from the fragments, in order to create the view models? Total confusion here.

  • +
  • Maybe I'm missing a different solution.

  • +
+",75425,,,,,43906.72083,Factory to create view models with non optional mutable data storage,,0,0,,,,CC BY-SA 4.0, +406627,1,406629,,3/16/2020 18:37,,-4,67,"

Let's say you have a class whose responsibility is to produce a set of finished data, but the method of producing that data is intentionally an implementation detail and as such should be left inaccessible to clients. How do you unit test such an API, knowing as the implementor that the data will change over time? Ideally you'd want to mock the underlying dataset, but the fact that it's an implementation detail means the dataset is not exposed and should not be. The client application should never have any reason to change the underlying data access methods, but the test still needs to mock them.

+ +

For a simple example, let's say you have two APIs. The first returns a set of integers that form a data set. The second returns a subset of those, representing integers that need to be filtered out of the first set. Your function has the responsibility of returning the final set of integers, but the fact that it makes API calls internally is an implementation detail that there is otherwise no reason to expose. Following principles of good class design, the only method visible from the outside should be the one function that returns the set of integers, but in order to unit test it you would need to override those internal API calls to provide consistent, controlled data. I want to be able to unit test this function to make sure it returns the right data, but by all rights I shouldn't expose any of the necessary components, which are purely implementation details.

+ +
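
To show the tension in code, here is a hedged C# sketch of the integer example. The seam (IValueSource) is internal, so the public surface is still just the one method, but a test assembly could substitute a fake via [InternalsVisibleTo]; production code would add a public constructor that wires up the real API-backed source. All names here are made up for illustration:

+ +
using System.Collections.Generic;
+using System.Linq;
+
+// Internal seam: invisible to clients, reachable from tests via [InternalsVisibleTo].
+internal interface IValueSource
+{
+    IEnumerable<int> GetValues();          // first API: the full data set
+    IEnumerable<int> GetExcludedValues();  // second API: integers to filter out
+}
+
+public class FilteredValues
+{
+    private readonly IValueSource source;
+
+    internal FilteredValues(IValueSource source) => this.source = source;
+
+    // The only publicly visible behaviour: the final, filtered set.
+    public IReadOnlyCollection<int> GetFilteredValues()
+    {
+        var excluded = new HashSet<int>(source.GetExcludedValues());
+        return source.GetValues().Where(v => !excluded.Contains(v)).ToList();
+    }
+}
+
+ +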

Is there a way to have the best of both worlds? Are there better design principles that I'm unaware of that describe how to break down a problem of this style?

+",210161,,210161,,43906.87917,43906.87917,Unit testing data-transformation functions that call external APIs,,1,6,,,,CC BY-SA 4.0, +406630,1,406704,,3/16/2020 19:25,,0,223,"

I have been experimenting with the idea of function classes as explained in this article and Composition applied to function dependencies as described in the following questions:

+ + + +

Consider the following mathematical functions, where x is a variable and a, b and c are constants:

+ +
f(x) = a*x**2+b*x+c
+g(x) = a*sin(3*x)
+
+ +

As Python function classes, these can be expressed as follows:

+ +
import numpy as np
+
+class F:
+    def __init__(self, a, b, c):
+        self.a = a
+        self.b = b
+        self.c = c
+    def __call__(self, x):
+        return self.a*x**2+self.b*x+self.c
+
+class G:
+    def __init__(self, a):
+        self.a = a
+    def __call__(self, x):
+        return self.a*np.sin(3*x)
+
+ +

For several reasons which I prefer not to elaborate on here, I prefer to have one generic ""meta"" class which can be instantiated and assigned any function. The difficulty is of course the constants, as self is undefined outside of a class. Here's what I mean:

+ +
def f(x):
+    return a*x**2+b*x+c
+
+F = FunctionClass(f)
+#how to make a, b and c instance variables?
+
+ +

If I don't do this, I am condemned to writing a class for every function, which defeats the purpose of what I am trying to achieve. I've asked a question addressing this on a more technical level on Stack Overflow, but it was not very popular at all.

+ +

Is there a design pattern or Python construct that can allow me to achieve this? Perhaps decorators? Perhaps use of the __dict__ attribute? Any ideas?

+",225187,,225187,,43907.42569,43911.85347,Design pattern for a function class,,4,11,,,,CC BY-SA 4.0, +406634,1,,,3/16/2020 20:24,,-4,182,"

I have a web application running under the .NET Framework that uses JS/jQuery on the client side. I am the owner and the only developer of this web application, and it is not yet fully in production. This is just to say that I can change the whole architecture of this application with no problem.

+ +

I want to implement a heartbeat on the client side so that I can know who is connected and who is not, and also to inform the user if there are any live updates etc.... My biggest fear is that, if I implement it myself, it might slow down the system in the future. So I have a lot of questions in mind:

+ +
    +
  • Is 5 seconds a good interval between heartbeats?

  • +
  • Can a 5-second heartbeat slow down the system?

  • +
  • How should I implement it in terms of best practice? Can I simply use setInterval and send an ajax request to the server every 5 seconds?

  • +
  • I heard that Node.js also has this option; should I migrate some part to Node.js? (I am not a Node.js expert.)

  • +
+ +

PS: my app will only have 100-300 clients in the future

+",359439,,106566,,43907.39097,44015.01667,Implementing heartbeat in terms of best practice,<.net>,2,6,,,,CC BY-SA 4.0, +406637,1,406716,,3/16/2020 21:57,,0,121,"

I know there is plenty of questions about this, but there is a lot of seemingly conflicting information.

+ +

My assumption: the view model is an abstraction of the view, an interface to the business logic, if you will. Therefore, if the view model is an abstraction of the view, it makes sense for it not to depend on the view's implementation, so the view model is unaware of any view. I assume the view model is a kind of proxy for the view, exposing controls that can be manipulated by whoever owns the view model. So which class owns/uses the view model? The model is of course unaware of any view model, as it only holds the state of the application. So I like to introduce a controller to take care of concrete business logic, which in my opinion allows for much looser coupling.

+ +
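
A minimal C# sketch of that assumption - the view model is observable but holds no reference to any view type (CounterViewModel is a made-up example):

+ +
using System.ComponentModel;
+
+// Exposes state and change notifications; knows nothing about any view.
+public class CounterViewModel : INotifyPropertyChanged
+{
+    private int count;
+
+    public int Count
+    {
+        get => count;
+        private set
+        {
+            count = value;
+            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Count)));
+        }
+    }
+
+    // Called by whoever owns the view model (the view's data binding, or a controller).
+    public void Increment() => Count++;
+
+    public event PropertyChangedEventHandler PropertyChanged;
+}
+
+ +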

This is how I imagine the dependency relationship:

+ +

[sketch: a diagram of the imagined dependency relationships]

+ +

So, to be more concrete: how should I picture the dependency relations between the different components? How much sense does the introduction of a controller (for processing logic) make? And, most of all, how correct are my assumptions?

+",359837,,,,,43909.74722,Dependency relations in MVVM and the place of the controller,,2,0,,,,CC BY-SA 4.0, +406638,1,,,3/16/2020 22:31,,1,62,"

I'm reading a paper and they use the term ""discriminative power"" in reference to a recognizer for road sign recognition. What exactly is discriminative power?

+",347874,,,,,43906.97014,What is discriminative power?,,1,1,,,,CC BY-SA 4.0, +406640,1,,,3/16/2020 23:43,,1,104,"

Often, when programming, you'll have different degrees of information to you in different contexts.

+ +

For example, a web server may have two routes which receive information about a Person: one of these routes receives a Person with a Name and an id, and the other just receives an id.

+ +

We want to write code that looks something like this, but probably more complicated:

+ +
def lookupPersonAge(person: Person): Int
+
+// this doesn't receive a name
+def route1(person: Person): Response = {
+    complete(lookupPersonAge(person))
+}
+
+// this does receive a name
+def route2(person: NamedPerson): Response = {
+    complete(s""${person.name} is ${lookupPersonAge(person)} years old"")
+}
+
+ +

It's tempting to model Person as an interface, so that we can write generic code, that's usable between the two routes:

+ +
trait Person {
+    def id: Int
+}
+
+case class PersonID(id: Int) extends Person
+
+case class NamedPerson(id: Int, name: String) extends Person
+
+ +

The problem with this representation is that if we want to write code against the generic Person interface, we lose a lot of useful things, such as pattern matching and the auto-generated copy method. This also looks confusingly similar to an Algebraic Data Type, despite the fact that it shouldn't be used like one.

+ +

Because of this, it's appealing to model NamedPerson using composition instead, for example

+ +
case class PersonId(id: Int)
+case class NamedPerson(person: PersonId, name: String)
+
+ +

This is unwieldy, as it requires you to ""lift"" any methods written in terms of Person, in order to apply them to NamedPerson, in this case, we'd have to write route2 as:

+ +
def route2(person: NamedPerson): Response = {
+    complete(s""${person.name} is ${lookupPersonAge(person.person)} years old"")
+}
+
+ +

It also scales poorly, as different combinations of fields are added, or if either type of Person may appear nested within another object.

+ +

For completeness, we could also model this using an Option:

+ +
case class Person(id: Int, name: Option[String])
+
+ +

This isn't appealing, because we do know whether or not the name is going to be present; we want to write code that's generic over whether or not it is. This representation would also lead us to write the following code, which requires an unsafe call to get:

+ +
def route2(person: NamedPerson): Response = {
+    complete(s""${person.name.get} is ${lookupPersonAge(person)} years old"")
+}
+
+ +

I feel like what I'm missing here is some form of structural typing, with that said, I'm sure I'm missing a nicer way to solve this kind of problem. +What's an effective way to resolve this issue?

+",289146,,289146,,43907.00347,43946.19722,"How to avoid code duplication from handling ""structually similar types"" in Scala?",,1,2,,,,CC BY-SA 4.0, +406644,1,,,3/17/2020 1:04,,1,62,"

I have 2 Aggregate Roots: one is the Lessor user and the other is the Lessee user.

+ +
class Lessor extends AggregateRoot {}
+
+class Lessee extends AggregateRoot {}
+
+ +

Each of these Aggregate Roots has an email property, which is a Value Object.

+ +
class Email extends ValueObject { }
+
+ +
class Lessor extends AggregateRoot {
+    private email: Email
+}
+
+class Lessee extends AggregateRoot {
+    private email: Email
+}
+
+ +

I am not sharing an instance of Email; I am sharing the code of Email.

+ +

Can I put the Email code in the SharedKernel folder?

+ +
    +
  • I don't want to duplicate the Email code by placing it in the Domain folders of each Aggregate Root.
  • +
  • Nor do I want to put the Email code in one of the Aggregate Roots and import it into the other.
  • +
+",359842,,,,,43907.04444,Where to place the Value Object code that is shared by more than one Aggregate Root?,,0,3,,,,CC BY-SA 4.0, +406649,1,,,3/17/2020 4:14,,-1,59,"

Setup

+ +

Say I have a database table called Events which contains events for when a user becomes active or inactive

+ +
EventId|Timestamp     |User|Status  |
+-------|--------------|----|--------|
+1      |11/03/20 04:34|A   |ACTIVE  |
+2      |11/03/20 05:11|A   |INACTIVE|
+3      |11/03/20 05:15|A   |ACTIVE  |
+4      |11/03/20 05:44|A   |ACTIVE  |
+5      |11/03/20 06:15|A   |INACTIVE|
+
+ +

And another table called StatusTransition which keeps track of when the Status changes for a certain user, this links two items from the Events table (basically a ""start"" event and a corresponding ""end"" event). The goal here is to track the durations of when a user was active or inactive.

+ +
Id|StartEventId|EndEventId|
+--|------------|----------|
+1 |1           |2         |
+2 |2           |3         |
+3 |3           |5         |
+
+ +

The process whenever the application receives an event is to:

+ +
    +
  1. Write the record in the Events table. Call this Record A.
  2. +
  3. Fetch the most recent record (the one before Record A was written, assume there is always at least one) in the Events table for that user. Call this fetched record Record B.
  4. +
  5. If Record B has a different Status value from Record A then write an entry to the StatusTransition.
  6. +
+ +

Problem

+ +

The system that does the above is supposed to be designed as a job-based distributed system, in this case I have multiple worker applications being fed from a job queue.

+ +

Since there are multiple applications which might handle two distinct events from the same user, it will be a problem if the events are written out of order.

+ +

Example:

+ +
    +
  • Some external application publishes two events to the job queue
  • +
  • Worker A receives an event with a 11/03/20 04:54 timestamp
  • +
  • Worker B receives an event with a 11/03/20 04:52 timestamp
  • +
  • Worker A writes a StatusTransition record, linking this event and some other past event whereas it should be linked to the event Worker B received (but isn't written yet)
  • +
+ +

Question

+ +

How do I design this such that it still yields the correct behavior even if workers process events out-of-order? (e.g. like separating the logic that writes to StatusTransition as another job, locking the Events table while writing to StatusTransition, etc.)

+",359491,,359491,,43907.21597,43907.22847,Concurrency issue in job-based system,,2,0,,,,CC BY-SA 4.0, +406682,1,,,3/18/2020 4:20,,-4,58,"

How do you successfully adopt TDD in an organization? Training alone is not enough in my opinion, as I feel it requires a change in process and mindset. If TDD has been implemented in your organization, can you describe what steps/approach were taken to implement it successfully?

+",359918,,,,,43908.24861,TDD Implementation,,2,1,,,,CC BY-SA 4.0, +406685,1,,,3/18/2020 5:14,,-4,51,"

Why does Java's V Map.put(K key, V value) not comply with Command-Query Separation?

+",249350,,,,,43908.38056,Command-Query Separation non-compliance,,1,6,,,,CC BY-SA 4.0, +406690,1,,,3/18/2020 8:06,,1,96,"

Consider the construction of the FixAcceptor type below. The code snippet is part of a unit test.

+ +
var logSource = LogSource.OnMessage | LogSource.Event;
+var stateStore = new StateStore();
+
+var newOrderSingleMessageValidator = new NewOrderSingleMessageMessageValidator();
+var newOrderSingleMessageHandler = new NewOrderSingleMessageHandler(this.logger, newOrderSingleMessageValidator, stateStore);
+
+// More specific message handlers to come ...
+
+var messageHandler = new AcceptorMessageHandler(this.logger, logSource, newOrderSingleMessageHandler);
+
+var messageStoreFactory = new MemoryStoreFactory();
+var sessionSettings = new SessionSettings(""test_acceptor.cfg"");
+var logFactory = new FileLogFactory(sessionSettings);
+
+var acceptor = new FixAcceptor(messageHandler, messageStoreFactory, sessionSettings, logFactory);
+
+ +

The creation of FixAcceptor feels wrong and confusing to me, especially as there will be more specific message handlers in the future.

+ +

As I have to new-up many dependencies here, I was looking for a creational design pattern, that I can apply to the AcceptorMessageHandler or FixAcceptor type. I thought the builder pattern could be a match.

+ +

But after reading the checklist provided, e.g., here: Builder Pattern, I am not so sure anymore. In particular, the following condition is not fulfilled:

+ +
+
    +
  1. Decide if a common input and many possible representations is the problem at hand
  2. +
+
+ +

There will always be only one possible representation. Sure, I could hard-wire all my dependencies and make the creation of my FixAcceptor type easier. But then, what about testability?

+ +

Further things I don't like about the code:

+ +
    +
  1. Passing around the cross-cutting concern/aspect this.logger.
  2. +
  3. The NewOrderSingleMessageHandler is responsible for message validation and processing. Is this a violation of the SRP?
  4. +
+ +

So the question is:

+ +

What is the cleanest way of constructing my types here? Which pattern should I apply?

+ +

Constructor of FixAcceptor

+ +
public class FixAcceptor
+{
+    private readonly ThreadedSocketAcceptor acceptor;
+
+    // ... 
+
+    public FixAcceptor
+    (
+        IAcceptorMessageHandler messageHandler, // external type
+        IMessageStoreFactory messageStoreFactory, // external type
+        SessionSettings sessionSettings, // external type
+        ILogFactory logFactory // external type
+    )
+    {
+        this.acceptor = new ThreadedSocketAcceptor
+        (
+            messageHandler,
+            messageStoreFactory,
+            sessionSettings,
+            logFactory
+        );
+    }
+
+    // ...
+}
+
+ +

Constructor of NewOrderSingleMessageHandler

+ +
// The abstract MessageHandlers implements the validation
+public class NewOrderSingleMessageHandler : MessageHandler<NewOrderSingle>
+{
+    private readonly IStateStore<NewOrderSingle> stateStore;
+
+    public NewOrderSingleMessageHandler
+    (
+        ILogger logger, 
+        IValidator<NewOrderSingle> validator,
+        IStateStore<NewOrderSingle> stateStore
+    ) : base(logger, validator)
+    {
+        this.stateStore = stateStore;
+    }
+
+    public override void Process(NewOrderSingle message, SessionID sessionId)
+    {
+        // ...
+    }
+}
+
+ +

Constructor of AcceptorMessageHandler

+ +
public class AcceptorMessageHandler : MessageCracker, IAcceptorMessageHandler
+{
+    private readonly ILogger logger;
+    private readonly LogSource logSource;
+    private readonly IMessageHandler<NewOrderSingle> newOrderSingleMessageHandler;
+
+    public AcceptorMessageHandler
+    (
+        ILogger logger, 
+        LogSource logSource,
+        IMessageHandler<NewOrderSingle> newOrderSingleMessageHandler
+    )
+    {
+        this.logger = logger;
+        this.logSource = logSource;
+        this.newOrderSingleMessageHandler = newOrderSingleMessageHandler;
+    }
+
+    // Lots of callbacks following... 
+}
+
+",319795,,319795,,43908.56806,43908.64167,How should I construct a complex object having many dependencies?,,1,5,,,,CC BY-SA 4.0, +406691,1,406705,,3/18/2020 8:18,,1,65,"

There are multiple aspects and quality that are desirable for a distributed system. Among these are consistency and availability. According to the CAP theorem, if partition tolerance is a must, the system designer can choose from a spectrum of models that trade off between consistency and availability. Jepsen provides an interactive diagram that shows multiple models that trade off between very consistent and very available (although not an exhaustive list). For systems that require high availability, the designers may choose to use a weaker consistency model.

+ +

In this question, let's take strong consistency to mean linearizability and weak consistency to mean any other models weaker than linearizability (e.g. sequential consistency, causal consistency, strong eventual consistency (SEC), etc.). One of the consequential requirements of linearizability is that the system needs to have a global clock.

+ +

In many modern systems, every operation performed is associated with a timestamp at which the operation was executed. This implies that some global clock is used. Therefore, does it make any sense to design a system to be weakly consistent?

+",272428,,,,,43908.55972,Is weak consistency still relevant in systems that record timestamps?,,1,3,,,,CC BY-SA 4.0, +406692,1,406699,,3/18/2020 8:52,,-2,638,"

I'm writing a Software Requirements Specification (SRS) document compliant with the IEEE 830-1998 standard. I've also drawn a couple of UML diagrams, specifically a use case diagram and an activity diagram. I wonder whether it is correct to insert those diagrams into the SRS. And, if so, in which section of the SRS should I put them: 2 Overall description or 3 Specific requirements?

+",359598,,83178,,43908.37569,43908.45556,UML use case and activity diagrams in SRS document,,1,4,,,,CC BY-SA 4.0, +406693,1,,,3/18/2020 8:58,,1,167,"

I know that global mutable variables are bad, as they can be accidentally modified and make testing difficult. However, there are situations when a class needs to cache some information - e.g. the time when the last request was made to a public function of the class - and this information is totally private to the class, used to make certain decisions in its functions. In this scenario, a class may look like this:

+ +
public class MutableStateClass {
+  //some dependencies
+  private DateTime lastRequestTime;
+
+  public void SomeFunc() {
+    ...
+    if(some decision based on lastRequestTime) {
+      ...
+    }
+  }
+
+  //other functions may also access lastRequestTime
+}
+
+ +

One might say that we can parameterize the functions with a DateTime parameter which holds the last request time, but I think this knowledge is totally private to the class, and it doesn't make sense to introduce this parameter into the public interface of the class, as the callers shouldn't have to know about the last request time.

+ +

Another example could be a bunch of booleans which decide what message to log based on some historical state. For example, a periodic function which is executed every 10 seconds might need to log some information on each invocation, and this logging might depend on some historical knowledge. In this case as well, we might end up with class-level booleans which can be modified by functions within the class.

+ +

So how can we avoid these mutable class-level variables when parameterizing them is not an option? This generally happens when we need to cache some information. How can we better design the class for this scenario?

+ +

I know there are many questions related to global variables but here I want to address on the mutable variables that exist within a class and I am not talking about application wide mutable globals.

+",319425,,,,,43909.25417,How to avoid global mutable variables within a class?,,4,9,,,,CC BY-SA 4.0, +406698,1,406708,,3/18/2020 10:48,,-2,76,"

The question is related to software engineering best practice. If I have a function foo() that needs some non-trivial variables (say some 2D arrays) to be set before it can perform its action, is it better to have the caller set those variables and pass them in when it calls foo(), or is it better to write foo() such that it calls a third function bar() that sets the variables for foo()? The latter will result in a chain of function calls, possibly one for each input argument. Should I favor this over having everything set in the caller's scope before calling foo()? A small sketch of the two shapes follows.
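
+ +

To make the two shapes concrete, here is a small C# sketch (the 2D arrays, values and names are placeholders, not real code from my project):

+ +
// Option A: the caller prepares the inputs and passes them in explicitly.
+public static class OptionA
+{
+    public static double Foo(double[,] weights, double[,] samples) =>
+        weights[0, 0] * samples[0, 0];   // placeholder computation
+}
+
+// Option B: foo() pulls its own inputs through bar()-style helper calls.
+public static class OptionB
+{
+    public static double Foo()
+    {
+        var weights = LoadWeights();   // hidden dependency on the helper chain
+        var samples = LoadSamples();
+        return weights[0, 0] * samples[0, 0];
+    }
+
+    private static double[,] LoadWeights() => new double[,] { { 2.0 } };
+    private static double[,] LoadSamples() => new double[,] { { 3.0 } };
+}
+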

+",359938,,,,,43908.57708,Function call sequence,,1,5,,43908.59028,,CC BY-SA 4.0, +406706,1,406719,,3/18/2020 13:36,,4,446,"

I am analysing a Windows Forms application in .NET Framework 4.5.2 with 4 separate solutions with a combined 1.5million lines of code (and 10 years of development)

+ +
    +
  • Libraries.sln (54 projects)
  • +
  • Tools.sln (18 projects with many depending on 28 projects in Libraries)
  • +
  • MainApp.sln (140 projects with many depending on 28 projects in Libraries)
  • +
  • MainAppTest.sln (60 projects with dependencies on many projects in MainApp)
  • +
+ +

All references are currently hardcoded to specific DLLs in the corresponding bin/debug folders.

+ +

Currently it is only possible to do a build on 1 specific machine (with the correct drive mapping and folder structure)

+ +

Architectural Thoughts

+ +
    +
  • Put everything into a single solution and reference projects (so no direct referencing of build dlls). Could use solution folders to virtually segregate areas (Libraries, Tests, Tools)
  • +
  • Keep the existing 4 solutions and start by using a private NuGet feed for the 28 shared Libraries (I tried this using Artifacts on Dev Ops and it works fine - however getting into complexity regarding whether to include pdb debugging symbols in the package)
  • +
  • Pack multiple projects (e.g. the 28 shared libraries) into a single NuGet package - this has issues: https://github.com/NuGet/Home/issues/3891
  • +
+ +

Question

+ +

With today's modern machines and fast SSDs, is there any reason not to have a single massive solution (270 projects, 1.5m LOC) for a codebase which doesn't currently have code reuse anywhere else?

+ +

Update 1

+ +

Thank you all I really appreciate your time and answers.

+ +
    +
  • The problem I'm trying to resolve (and it is good to be made to define it) is getting the build working on another machine, so we can then automate the release process, which is currently manual (10 hours of builds / running 11,000 tests / building the Inno installer). I imagine we will use Azure DevOps for CI/CD.

  • +
  • I believe the Libraries project changes sometimes, but the client wants to be able to step into the code when debugging, so they can get clarity in case the issue is in there.

  • +
  • The full clean build time of all the solutions is unknown (as I am leaving the only working machine alone, since it would take significant effort to rebuild).
  • +
  • Good idea on a build script (if it comes to that)
  • +
+ +

Update 2

+ +

All the responses have pertinent points - thank you again.

+",25850,,25850,,43915.29028,43915.29028,Single massive solution - good idea?,<.net>,3,10,,,,CC BY-SA 4.0, +406720,1,,,3/18/2020 20:09,,0,303,"

There's a scenario where, in the main entry point of my C# program, I iterate over a directory of managed DLLs, load each of them, and pass a factory object to each of them to get a mapping of implementations for various public interfaces. After the iteration is completed, I have one big map of interfaces and their corresponding implementations.

+ +

This factory object is needed by many unrelated objects in the system. There's also state that goes with the factory that identifies if specific DLLs were found, because knowing if I have a certain implementation sometimes influences the UI (disabling elements in the UI when they are not present).

+ +

What is the best way to share this factory + the accompanying informational state? I can either store it as a global, or I can pass it through hundreds of functions in my code, but I like neither of these solutions.

+ +

Challenges like this make it really hard to avoid the Singleton pattern. I can't be creating these factories on-demand all over the place because I would be repeating the same, heavy work each time and that's unnecessary.

+ +

What are some alternative solutions that don't involve abusing global variables (which is error prone), polluting interfaces, or impacting performance? Note that if the singleton pattern (as a class) is justified here and there are no reasonable alternatives, I can accept that as an answer.

+ +

As far as research goes, I did find answers like this one that mention dependency injection; however, I'm not sure how that would apply here. I've also considered some sort of observer pattern for this, but again I don't see how I could use that mechanism to avoid the problems here. It feels like no matter what, when you boil each idea down to the essentials, it comes down to either global state or polluting interfaces.
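
+ +

For reference, this is the dependency-injection shape I keep circling back to - a hedged sketch where IImplementationCatalog and IFeatureX are hypothetical names, and the catalog is built exactly once at startup from the scanned DLLs:

+ +
// Built once in Main() from the DLL scan, then shared as a read-only service.
+public interface IImplementationCatalog
+{
+    bool Has<TInterface>();
+    TInterface Resolve<TInterface>();
+}
+
+public interface IFeatureX { }   // hypothetical plugin interface
+
+public class MainViewModel
+{
+    private readonly IImplementationCatalog catalog;
+
+    // The catalog arrives through the constructor instead of a global.
+    public MainViewModel(IImplementationCatalog catalog) => this.catalog = catalog;
+
+    // UI elements can be disabled when an implementation is missing.
+    public bool FeatureXEnabled => catalog.Has<IFeatureX>();
+}
+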

+",31950,,,,,43908.94514,"How to avoid global variables for state that is the result of one-time, heavy operations?",,3,5,,,,CC BY-SA 4.0, +406729,1,406739,,3/19/2020 4:46,,2,1564,"

I am trying to use EF6 in my project. I just watched the video ""Repository Pattern with C# and Entity Framework, Done Right | Mosh"". I understand the necessity of Repositories, but I still don't understand UnitOfWork. I already have my DbContext to use instead of a UnitOfWork. In the sample code of the video it looks like this:

+ +
using(var UnitOfWork = new UnitOfWork(new DBContext()))
+{
+    UnitOfWork.Courses.Get(1);
+}
+
+ +

I don't understand the benefit of this. I can just directly use my dbContext in all of my repos,

+ +
using(var dbContext = new DBContext())
+{
+    var courseRepo = new CourseRepo(dbContext);
+    var otherRepo = new OtherRepo(dbContext);
+
+    courseRepo.Get(1);
+}
+
+ +

So my repositories will all be using the same dbContext. What is the reason for UnitOfWork then?
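
+ +

For concreteness, this is roughly the shape of the UnitOfWork being discussed - a sketch whose member names follow the video's usage (UnitOfWork.Courses.Get(1)); the repository interfaces are left empty here:

+ +
using System;
+
+public interface ICourseRepository { /* Get, Add, Remove, ... */ }
+public interface IOtherRepository { /* ... */ }
+
+// Groups the repositories behind one seam and owns the single commit point;
+// the repositories themselves expose no SaveChanges of their own.
+public interface IUnitOfWork : IDisposable
+{
+    ICourseRepository Courses { get; }
+    IOtherRepository Others { get; }
+    int Complete();   // the implementation forwards to DbContext.SaveChanges()
+}
+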

+",358952,,,,,43909.41736,Is unit of work pattern really needed with repository pattern,,2,5,1,,,CC BY-SA 4.0, +406733,1,406734,,3/19/2020 7:13,,2,226,"

I'd like to match a regex pattern on a stream, but I am not sure what algorithm to use. I certainly don't want to load the entire file into memory.

+ +

I tried to figure out how to do this, but I have only very basic ideas. For example, I could concatenate the chunks into some sort of aggregate string until I find a match; after that I cut the aggregate at the end of the match and continue. But if there is never a match, I would end up loading the entire string into memory. A sketch of this idea follows.

+ +
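
This is the rough shape of that idea as a hedged C# sketch; maxMatchLength is an assumed upper bound on match size, and the sketch has exactly the weaknesses described in the next paragraph (it can drop or truncate longer matches):

+ +
using System.Collections.Generic;
+using System.IO;
+using System.Text;
+using System.Text.RegularExpressions;
+
+public static class StreamRegex
+{
+    public static IEnumerable<string> Matches(TextReader reader, Regex pattern, int maxMatchLength)
+    {
+        var buffer = new StringBuilder();
+        var chunk = new char[4096];
+        int read;
+        while ((read = reader.Read(chunk, 0, chunk.Length)) > 0)
+        {
+            buffer.Append(chunk, 0, read);
+            Match m;
+            while ((m = pattern.Match(buffer.ToString())).Success && m.Length > 0)
+            {
+                yield return m.Value;
+                buffer.Remove(0, m.Index + m.Length);   // cut at the end of the match
+            }
+            // No match: keep only a tail that could still begin a match.
+            if (buffer.Length > maxMatchLength)
+                buffer.Remove(0, buffer.Length - maxMatchLength);
+        }
+    }
+}
+
+ +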

I could give the aggregate a maximum size, but I might lose matches that way. Another issue is that if there is a longer match starting at the same position, I would miss it. So I guess this algorithm is incompatible with possessive quantifiers too.

+ +

Is there a better solution for this problem?

+",65755,,7422,,43909.30278,44194.78264,What is the right algorithm to match regex on a stream?,,3,2,,,,CC BY-SA 4.0, +406738,1,406743,,3/19/2020 9:50,,-2,66,"

I want to ask experienced software developers and architects about best architectural practices for the following problem.

+ +

Suppose we have two entities, Student and Teacher, and each of these two entities has an image. How should I model the database? A single Image table with two nullable foreign keys (to reference the student or teacher table), or one table per entity (StudentImage and TeacherImage)?

+",344176,,,,,43909.46389,Best architectural practices,,2,2,,43909.79306,,CC BY-SA 4.0, +406740,1,,,3/19/2020 9:55,,0,41,"

I have an aggregate root called ""Billing Document"" which has some additional entities as attributes (""Billing Lines""). I want to persist these objects together in my database. I read about the repository pattern in combination with a mapper class; this would be a good solution in my eyes. My domain objects don't need to know anything about persistence. The repository should save the ""Billing Document"" and the ""Billing Lines"" together in a consistent state. Note that I'm not able to use any kind of OR mapper here.

+ +

The attributes to be mapped and persisted are private, so by default the repository and the mapper don't have access to them. What is a good solution to grant access to these attributes so the data can be persisted? I don't really want to add special methods to my domain objects that are only needed for persistence; that could lead to these methods also being used in other contexts.

+ +

1) Should I implement a method called ""getContractData"" or ""toDatabase"" which returns the attributes of the ""Billing Document"" and the corresponding ""Billing Lines"" as a structure that the mapper can use? 2) Should I give my repository and mapper a special ""friend"" status, so they can access private attributes directly?

+",360016,,,,,43909.41319,Repository Pattern without OR Mapper - Accessing of Attributes,,0,4,,,,CC BY-SA 4.0, +406742,1,,,3/19/2020 10:24,,0,25,"

I am working on an application Angular/Spring boot.

+ +

This is the database shema (there were minor changes)

+ +

+ +

In the backend (Spring Boot) we are using eager fetching. This means that when I load a mission, it also loads the type (which loads documents) and the client and user (which load the role and user details). So in the frontend, when I display a table of missions with the client name, users, type, etc., I'm pretty much extracting the whole database in one query.

+ +

We've moved past it for now, but I think I should optimize, so a few questions. I'm going to display a mission, so I need the client name from the client table, the user information from userdetails, and the type. Should I leave it eager and load everything, or should I first send a query to get the mission only, then a query to get the client, then userdetails, then the type? It looks like in both cases I'm calling the whole database.

+ +

Another issue: when I want to add a mission, I will list all managers (users with the manager role). When I display them, should I send a request that eagerly gives me the user and userdetails, so that I can display the first name and last name, and then submit the form with the user id (because a mission has a manager_id which is a user_id)? Or should I just send a request to get everything from userDetails, and when I am about to submit the form, send another request to get the user_id?

+ +

These are the problems I am facing. I am looking more for advice on how to think about the design and fetch types than for an explicit answer.

+ +

Thank you.

+",360021,,,,,43909.43333,Database design and Spring fetch type eager,,0,0,,,,CC BY-SA 4.0, +406748,1,,,3/19/2020 12:09,,-2,84,"

I want to release a program, my first program, but I don't know how to protect it. By protecting it I mean protecting it against piracy and against being copied by others. Is there anything I can do to protect it? Can I protect the program or the code?

+",360015,,,,,43909.51319,How to protect my program?,,1,5,,43909.55139,,CC BY-SA 4.0, +406751,1,406762,,3/19/2020 12:28,,0,125,"

I have a requirement for a web service which should return a ""business log"" of the actions the service performed. Usually I only return error logs, which are based on exceptions; the exceptions get aggregated into a log at the topmost level in the web service.

+ +

Injecting a logger object into every business class so it can write to the ""business log"" seems to be a solution, but it pollutes my code with a lot of logging commands inside the business code, and every constructor has to be modified. Adding an attribute to every class which holds the log messages also looks strange to me. Is there any other solution?

+",360016,,155433,,43909.79306,43909.79306,Architectural solution for business logging,,2,0,,,,CC BY-SA 4.0, +406755,1,,,3/19/2020 13:10,,4,655,"

I have been struggling with this concept in the context of web applications ever since I first read about it. The theory states that the domain objects should encapsulate their behaviour and business logic. A model which contains only data and has its logic somewhere outside is called an ""anemic domain model"", which is a bad thing. Also, the domain should not perform data access.

+ +

If for instance I had a social app which had a bunch of objects of type User, and users should be able to add other users as their friends, the User class should contain a method named Befriend(User user) so that I could do something like userA.Befriend(userB).

+ +
class User {
+
+    Friends[] friends;
+
+    void Befriend(User user) { ... }
+}
+
+ +

However, the act of befriending might contain some restrictions and so I would have to do some validation in my Befriend method. Here are some purely theoretical restrictions:

+ +
    +
  1. The user must not already be your friend
  2. +
  3. You and the other user must not have common friends
  4. +
  5. In Bucharest it must be raining
  6. +
+ +

Now let's imagine that the friends lists might be huge: userA might have 50,000 friends, userB might have 100,000 friends. So, for validating 1 and 2, it wouldn't be efficient to eagerly pull the entire friends lists from the database when constructing the user object and then do those checks in my Befriend method by iterating over the friends list. In the database I have indexes, and checks like these would be trivial (and fast). So naturally I would prefer to put these queries somewhere in my Data Access Layer and use them whenever needed.

+ +
class FriendsRepository: IFriendsRepository {
+
+    bool HasFriend(User user, User friend);
+    bool HasCommonFriends(User userA, User userB);
+
+}
+
+ +

But how am I supposed to use this object inside my Befriend method from my User object? People say domain objects must not use repositories (even through abstractions such as interfaces), though there seems to be some disagreement here. Say I violated this rule. Domain objects don't benefit from Dependency Injection so I would have to change my Befriend method to:

+ +
void Befriend(User user, IFriendsRepository friendsRepository) { ... }
+
+ +

Alright. Now what about the weather? That's something completely unrelated to our entity and that information comes from an IWeatherService. Again, I need it in my Befriend method.

+ +
void Befriend(User user, IFriendsRepository friendsRepository, IWeatherService weatherService) { ... }
+
+ +

This already makes me feel like this method does not belong inside the User class. I have a lot of external dependencies and I don't get Dependency Injection, which sucks. But pulling this out of the User into a service (or whatever) inside my Application Layer makes my domain model anemic. I have very rarely encountered methods which could either be executed without validation or which contain only extremely simple validation rules, depending only on the immediately available properties of the entity (like primitive fields, such as a Username string, an ActiveUntil date, etc.).

+ +

So I'm left asking: what kind of methods could naturally fit in the domain objects? Let's be honest, real apps often deal with huge amounts of data, many object relations and very complex validation logic. Rarely you only have to do trivial checks like ""is this user over 12 years old?"".

+ +

P.S.: I used that example purely for demonstration purposes. Please don't cling on it.

+",280779,,,,,43915.66667,What kind of logic can Domain Objects realistically contain?,,4,1,5,,,CC BY-SA 4.0, +406757,1,,,3/19/2020 13:34,,-1,61,"

Assume I have a module of classes:

+
    +
  • EntityLoader loads some entity by id.
  • +
  • EntityValidator checks preconditions, input data before saving changes.
  • +
  • EntityUpdater saves changes.
  • +
+

EntityLoader has a method loadById($entityId); it loads data and caches it in its own state. It is called first in the validator, and again later in the updater once validation has completed successfully. The usage pattern is:

+
if ($validator->validate()) {
+    $updater->update()
+}
+
+

I want the return value of EntityLoader::loadById($id) to have a non-nullable type, and to throw a CouldNotFindEntity exception when the record cannot be loaded. The validator expects this exception: it catches it and produces the corresponding result status. The CouldNotFindEntity exception therefore looks like a checked exception in terms of the validator's responsibility.

+

The updater should only be called after a successful validation result, so I don't want to expect any exception from EntityLoader in the updater's execution context. If I get an exception there, it should mean either an unrecoverable system error or a developer mistake. So the CouldNotFindEntity exception looks like an unchecked exception in terms of the updater's action.

+

I write code in PHP, where there is no checked/unchecked exception concept, so I can adopt the convention that unchecked exceptions inherit from RuntimeException, while checked exceptions inherit from DomainException.

+

I can't decide which kind of exception I should use in my case. This exception is private to the module; it is not part of the module's observable behaviour for clients. It is a domain-kind exception, because it describes a business logic problem in the validator. At the same time, it is an unrecoverable exception that looks like a run-time error in the updater's calling context.
+Wouldn't LogicException fit my needs for both the updater and the validator?

+

I consciously choose exception mechanism for code simplification. I don't want to replace it with result status, or null-object, or Optional object.

+",81800,,342873,,44197.71181,44199.06736,RuntimeException or DomainException or LogicException in loader function?,,1,3,,,,CC BY-SA 4.0, +406758,1,,,3/19/2020 13:46,,-1,109,"

I have a requirement to list country codes (e.g. CA, US) by username. A single user may have zero, one or more countries and a single username. A country has a single country code.

+ +

I have a UserRepository and a CountryRepository.

+ +

My question is - which repository should I be putting this code into?

+ +

Is there a generally accepted standard? Or is it more of an 'either works, pick one and stick with it' kind of thing?

+ +

Regarding the content of the repositories, they're largely just CRUDs. CountryRepo just has two methods - GetCountryByCode and ListCountries. UserRepository has ListUsers, GetUserByUsername, DeleteUserByUsername, UpdateUser, CreateUser.

+",309190,,309190,,43909.60972,43940.54167,"When adding a ListFooForBar method, should it go into the FooRepository or BarRepository?",,3,6,1,,,CC BY-SA 4.0, +406759,1,,,3/19/2020 13:51,,-4,172,"

I have many dependent statements. What is the best approach to handle these cases dynamically?

+ +

Example

+ +
enum UserType {
+
+    case buyer
+    case seller
+} 
+
+enum ViewType {
+
+    case active
+    case inactive
+} 
+
+enum ActionType {
+
+     case buy
+     case offer
+}
+
+ +

These types are retrieved from an API.

+ +

I have a matrix of dependent cases. I need to check all types.

+ +
if userType == UserType.buyer && viewType == ViewType.active && actionType == ActionType.buy {
+    // return view
+} else if userType == UserType.buyer && viewType == ViewType.active && actionType == ActionType.offer {
+    // return view
+} else if userType == UserType.seller && viewType == ViewType.active && actionType == ActionType.buy {
+    // return view
+} else if userType == UserType.seller && viewType == ViewType.active && actionType == ActionType.offer {
+    // return view
+} else if userType == UserType.buyer && viewType == ViewType.inactive && actionType == ActionType.buy {
+    // return view
+} else if userType == UserType.buyer && viewType == ViewType.inactive && actionType == ActionType.offer {
+    // return view
+}
+..... too many cases 
+
+ +

I've tried to handle these cases using the command design pattern, but this results in too many classes, especially when adding a new ViewType or ActionType. I would need to create 4 new classes: buyerNewViewTypeOffer, SellerNewViewTypeOffer, buyerNewViewTypeBuy, SellerNewViewTypeBuy...

+ +

Any idea how to handle these cases dynamically when adding a new type, without needing to create many statements or classes?

+",360034,,304738,,43962.26736,43962.26736,How to Replace Many if Statements for many types,,3,6,,,,CC BY-SA 4.0, +406760,1,,,3/19/2020 14:22,,0,107,"

This is a general software architecture related question. This question is not related to a specifc programming language or service.

+ +

The question is: ""Should a server do things on behalf of the user without user interaction by providing the users credentials (e.g. token or certificate)?""

+ +

The following diagram shows one option, where the token/certificate is persisted by the web API. To run the actual process, Service A will be called from a server. This server will ask for the credentials to create a user context and call Service A on behalf of the user.

+ +

+ +

Edit (adding the result of our research):

+ +

The problem with this approach is that persisting tokens or any kind of credentials outside of the user directory is a risk. Tokens or certificates can be used to create a user context -- not only by the server, but also by others.

+ +

Instead, another option is to explicitly grant the principal running on the server permission to access Service A with its own credentials. The request might include a token from Service A to let Service A know it is acting on behalf of the user. In this case, no credentials (like tokens or certificates) need to be persisted, so the risk of ""losing"" this kind of credential does not exist.

+ +

Edit (adding SAS Token example):

+ +

The last option looks like this:

+ +

+ +

The Web App requests something like a SAS (Shared Access Signature) Token. This token is bound to the principal. Only the principal can use the SAS Token. Instead of persisting the users token, a SAS token is persisted.

+ +

The last option is the favourite solution, since it does not involve persistence of user credentials or anything that can be used to create a user context.

+ +

Edit (updating diagrams, including Service Bus)

+",360037,,360037,,43909.66528,43909.6875,Should a server call services on behalf of the user?,,2,8,,,,CC BY-SA 4.0, +406774,1,406776,,3/19/2020 21:09,,-1,51,"

Let's say you need to create an API around machines, and you want to be able to query machines many different ways:

+ +
    +
  • Get List of all Machines
  • +
  • Get Machine By ID
  • +
  • Get Machines By Client
  • +
  • Get Machines By Type
  • +
  • Get Machines By Mfg Date
  • +
  • Get Machines By Foo
  • +
  • Get Machines By Bar
  • +
+ +

You get the idea. And keep in mind this is a very simplistic example. Don't take it too literally. It's the general theory I'm after. The way I see it, there are two ways you can go about this

+ +
    +
  1. You can have a controller for each (with the exception of the first two options)
  2. +
  3. You can put them all in the same controller
  4. +
+ +

Under Option 1, each endpoint name would be the same as the HTTP method (GET in this example). Under Option 2, each endpoint name would match the query type (example: ByClientId).

+ +

I've always been of the opinion that #1 is better. But is there ever a case for #2? For example, if you are going to end up with hundreds of controllers each with a single endpoint inside of it?

+ +

Thanks!!

+ +
+ +

The following are only examples of what I mean by 1 and 2...

+ +

Option 1 would look like this:

+ +

MachineController.cs

+ +
[HttpGet]
+public string Get()
+{
+    return ""all machines"";
+}
+
+[HttpGet]
+public string Get(int id)
+{
+    return id.ToString();
+}
+
+ +

and then...

+ +

MachinesByClientController.cs

+ +
[HttpGet]
+public string Get(int clientId)
+{
+    return $""all machines for client of id "" {id};
+}
+
+ +

and then...

+ +

MachinesByTypeController.cs

+ +
[HttpGet]
+public string Get(int typeId)
+{
+    return $""all machines with type of id "" {id};
+}
+
+ +

And so on down the line.

+ +

--- OR ---

+ +

Option Two would look like this:

+ +

MachineController.cs

+ +
[HttpGet]
+public string All()
+{
+    return ""all machines"";
+}
+
+[HttpGet]
+public string ByMachineId(int id)
+{
+    return id.ToString();
+}
+
+[HttpGet]
+public string ByClientId(int id)
+{
+    return id.ToString();
+}
+
+[HttpGet]
+public string ByTypeId(int id)
+{
+    return id.ToString();
+}
+
+",189394,,156767,,43913.62083,43913.62083,Configuring Controllers and Endpoints in a HTTP API,<.net>,1,2,,,,CC BY-SA 4.0, +406775,1,406780,,3/20/2020 0:11,,3,257,"

I often find myself creating class cycles by using the observer pattern. Consider the following scenario:

+ +
    +
  • I have a central accessible global data source (Subject)
  • +
  • The data source is reflected by many GUI components (Observers) in various ways (e.g. different states for buttons, size indicators, numbers, etc.)
  • +
  • Some of those GUI components (like buttons) can update the global data source (e.g. a button can remove something from the data source or it shows as disabled if the data source is empty)
  • +
+ +

So now I have a cycle between some GUI components and the data source.

+ +
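
The cycle in miniature, as a hedged C# sketch (names invented): the button observes the source to keep its enabled state current, and also mutates the same source when clicked, so each side ends up referencing the other:

+ +
using System;
+using System.Collections.Generic;
+
+public class DataSource
+{
+    private readonly List<string> items = new List<string>();
+    public event Action Changed;
+
+    public int Count => items.Count;
+    public void Add(string item) { items.Add(item); Changed?.Invoke(); }
+    public void RemoveLast() { items.RemoveAt(items.Count - 1); Changed?.Invoke(); }
+}
+
+public class RemoveButton
+{
+    private readonly DataSource source;   // observer -> subject reference...
+    public bool Enabled { get; private set; }
+
+    public RemoveButton(DataSource source)
+    {
+        this.source = source;
+        source.Changed += () => Enabled = source.Count > 0;   // ...and subject -> observer via the event
+    }
+
+    public void Click() { if (Enabled) source.RemoveLast(); }
+}
+
+ +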

Do I abuse the observer pattern here for something it is not meant for? Is there another way to solve this?

+",360073,,,,,43910.27847,I often create class cycles by using the Observer Pattern. How can I avoid this?,,1,3,,,,CC BY-SA 4.0, +406781,1,406787,,3/20/2020 6:55,,1,141,"

I recall seeing a large email corporation discourage POP client access, suggesting it was insecure.

+ +

Assuming this is not FUD meant to encourage adoption of their client, I am wondering whether either of these may be true:

+ +
    +
  1. This has a basis in truth in that POP is insecure without the proper extensions, but POP can be as secure as any other method by use of the right extensions/protocols.
  2. +
  3. There is something inherently defective as to security with the POP protocol even if the most secure extensions are used.
  4. +
+ +

Might someone be able to offer a high level introduction to what, if any, issues POP may suffer from?

+",21948,,,,,43914.39167,Does POP suffer from any inherent security problems,,2,4,,,,CC BY-SA 4.0, +406784,1,406795,,3/20/2020 8:28,,1,286,"

I am creating a system which performs static analysis on code when a commit is pushed to GitHub; the results are then shown as a GitHub review.
+My problem is that this means the developer (actor) will not engage with my system directly, as it is actually GitHub that does this. The developer (actor) uses GitHub and sees the results from my system in GitHub, hence they never actually use my system directly.
+I feel like the diagram below doesn't really capture this very well.

+ +

+ +

I then created another diagram, where the developer (actor) communicates with GitHub (actor) and GitHub then communicates with my system. However, this also seems wrong to me, though still better than before. One of my big problems with this is that GitHub is on the left side; on the other hand, GitHub is kind of a primary actor, so it also makes sense. I have never seen any examples of use case diagrams doing this. Is this actually valid and, more importantly, does it make sense?
+
+However this also seems wrong to be, but still better than before. One of my big problems with this is that GitHub is on the left side, however GitHub is kinda a primary actor, so it also makes sense. I have never seen any examples of use case diagrams doing this, is this actually valid and more importantly does it makes sense? +

+ +

Side question: +The CTO (actor) has a use case saying:

+ +
+

As a CTO, I want the tool to not save the code on the server running it, so that I won’t have to worry about our code being leaked.

+
+ +

For me this sounds like a valid use case, but when putting it into a use case diagram it feels off, as this is not really an action; it is more of a requirement. Should it just be captured as a functional requirement instead?

+ +

Edit: +So I tried improving it based on the recommendations from you guys.
+I ended up removing a lot of use cases, as they were really non-functional or functional requirements. Then I wanted to change the diagram to show that the tool runs remotely when a commit is pushed; I'm not sure if I did that correctly. +

+",360093,,360093,,43910.53542,43910.53542,Use case diagram where actions are going through a third party system,,1,5,,,,CC BY-SA 4.0, +406785,1,,,3/20/2020 9:02,,0,237,"

During my years developing software I have, most of the time, tried to improve the readability of my code. As one example, I often try not to use boolean flag parameters in methods, and I try to refactor them when I encounter them. I know that using boolean parameters can be a sign of a problem in the design, but fixing them is usually a simple refactor rather than a refactoring of entire classes and interfaces. Whether using boolean parameters is a good or bad thing is out of scope for this question, however.

+ +

Recently I have been told by colleagues that this is not necessary, as IntelliJ will tell you the parameter name of a boolean when you use inlay hints. For example, if you have the Java method

+ +
public void sendEmail(Contact contact, boolean isCompany)
+
+ +

whenever you call this sendEmail method, IntelliJ adds a little box saying that your boolean is isCompany (only when you use true or false directly in the method call, I believe). My question is whether it is acceptable to rely on your IDE for these kinds of hints so you don't have to refactor, or whether you should refactor nonetheless if you don't like boolean parameters.
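
+ +

For reference, the refactor I would otherwise apply replaces the flag with something self-describing, e.g. (a sketch; the enum is mine):

+ +
// before: sendEmail(contact, true) tells the reader nothing at the call site
+public enum RecipientType { COMPANY, PERSON }
+
+// after: the call site reads sendEmail(contact, RecipientType.COMPANY)
+public void sendEmail(Contact contact, RecipientType recipientType)
+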

+",360096,,173647,,43910.41458,43910.56319,Is relying on an IDE for code readability acceptable,,3,3,,,,CC BY-SA 4.0, +406786,1,406801,,3/20/2020 9:04,,-4,55,"

I am working on a project which is introducing a new Business Product that leverages existing systems.

+ +

From a requirements perspective, they fall into all these types:

+ +
    +
  1. A system change is required --> i.e. new field, field modification
  2. +
  3. Plain use of the existing field --> i.e. field will be used as it currently does for other products
  4. +
  5. Use of the existing field, but with manual processes specific to that field --> i.e. if the value being entered is greater than $500, then call the manager over from their desk to say ""go for it!"" before proceeding
  6. +
  7. Completely manual processes outside of the system context --> i.e. ID and verify the customer on the phone
  8. +
+ +

Would all these scenarios be eligible for a User Story? How would it be best to ID and segregate build vs non-build stories when tasks are later assigned?

+",360078,,,,,43910.59375,Should User Stories be written for non-build requirements?,,1,1,,,,CC BY-SA 4.0, +406791,1,,,3/20/2020 10:54,,1,34,"

Say I have some code which consumes a class called Subject, which implements ISubject. +Would there be any concerns if I were to build a ProxySubject, which inherits Subject?

+ +

I like this style because it makes it so simple to wrap the real subject: ProxySubject inherits Subject and overrides the required fields (mostly relying upon ThisBase.ThatOverridenProperty). If the targeted fields are not virtual (not overridable), then you can always re-implement ISubject directly from ProxySubject.

+ +
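
A minimal sketch of the style I mean (the logger is only an example of added behaviour; ILogger is illustrative):

+ +
public class ProxySubject : Subject
+{
+    private readonly ILogger _logger;   // stands in for whatever the proxy adds
+
+    public ProxySubject(ILogger logger) { _logger = logger; }
+
+    // override only what needs intercepting; everything else
+    // falls through to the real Subject via inheritance
+    public override string ThatOverridenProperty
+    {
+        get
+        {
+            _logger.Log(""accessed"");
+            return base.ThatOverridenProperty;
+        }
+    }
+}
+
+ +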

I did not find any suggestions for doing so, which makes me wonder whether it might actually not be such a good idea?

+",329035,,,,,43910.45417,Implementing Proxy pattern via concrete inheritance,,0,5,,,,CC BY-SA 4.0, +406794,1,,,3/20/2020 12:12,,-4,83,"

I am modifying software for trash collection. While reading the code, I asked myself whether there is any formula or trick to quickly calculate the number of times the statement inside the nested loop gets executed, rather than doing it manually in my head. Here's the pseudo code for it:

+ +
for(i=1; i<=9;i++)
+{
+   sum = 0;
+   for(j=i; j<=N && sum<=10; j++)
+      sum = sum + arr[j]; // any quick trick to calculate its number of executions?
+}
+
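+ +

I don't know a closed-form formula myself, but for checking any candidate formula empirically I use an instrumented version like this (the array contents are placeholders):

+ +
#include <stdio.h>
+
+int main(void)
+{
+    int arr[] = {0, 3, 4, 2, 5, 1, 6, 2, 3, 4};  /* placeholder data */
+    int N = 9;
+    long count = 0;
+    for (int i = 1; i <= 9; i++) {
+        int sum = 0;
+        for (int j = i; j <= N && sum <= 10; j++) {
+            sum = sum + arr[j];
+            count++;  /* one increment per execution of the statement */
+        }
+    }
+    printf(""inner statement ran %ld times\n"", count);
+    return 0;
+}
+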
+",360107,,,,,43921.29375,Is there any formula or trick to count how many times a statement inside a nested loop gets executed?,,1,14,,,,CC BY-SA 4.0, +406797,1,,,3/20/2020 13:41,,2,106,"

I'm working on a fairly large C++ project which uses Boost serialization. +The issue I have with the way it is currently organized is that serialization is woven into the main source code on all levels of the application, from the GUI to the core. But sometimes I need to build the core (or other components) separately, and I don't want to have to link against Boost for serialization when I don't need serialization at all. So the issues are:

+ +
    +
  1. No way to build low-level components separately without having to link to boost;
  2. +
  3. There's no need for serialization when separate components are compiled (for testing for example), so it's a waste of compilation time;
  4. +
  5. Serialization creates code clutter when it's mixed with logic.
  6. +
+ +

I came up with a scheme that I'd like to share with the community before initiating a project-wide refactoring, which is time-consuming. I have also looked into ways of making the serialization non-intrusive, as suggested in the Boost documentation, but that only solves one of the problems, namely code clutter, and even then it's unclear how to make it work with derived classes.

+ +

So at the moment, the typical serialized class looks somewhat like this

+ +
#include <boost/serialization/...>
+...
+#include <boost/serialization/...>
+
+class A 
+{
+    T1 field1;
+    T2 field2;
+public:
+    T operation1();
+    T operation2();
+private:
+    friend class boost::serialization::access;
+
+    template<class Archive>
+    void serialize(Archive &ar, const unsigned int version) {
+        ar & boost::serialization::make_nvp(""field1"", field1);
+        ar & boost::serialization::make_nvp(""field2"", field2);
+    }
+};
+
+BOOST_CLASS_VERSION(A, 1);
+BOOST_CLASS_IMPLEMENTATION(A, boost::serialization::object_class_info)
+//BOOST_CLASS_EXPORT(A) // for derived classes
+
+ +

And I'd rather it looked like this

+ +
#include <serialization.h>
+
+class A 
+{
+    SERIALIZABLE
+private:
+    T1 field1;
+    T2 field2;
+public:
+    T operation1();
+    T operation2();
+};
+
+ +

Here serialization.h contains includes of all necessary boost headers and defines a macro SERIALIZABLE that expands to

+ +
private:
+friend class boost::serialization::access;
+template<class Archive> void serialize(Archive &ar, const unsigned int version);
+
+ +

At the top-level CMakeLists.txt an option ENABLE_SERIALIZATION is introduced. If the option is ON, the names SERIALIZABLE and SERIALIZABLE_SPLITTED are defined as above (the latter is for the save() and load() functions when we need those separately). If ENABLE_SERIALIZATION is OFF, the names SERIALIZABLE and SERIALIZABLE_SPLITTED are defined empty and no Boost headers are included in serialization.h.

+ +
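
In other words, serialization.h would look roughly like this (a sketch; only one Boost header is shown):

+ +
// serialization.h
+#pragma once
+
+#ifdef ENABLE_SERIALIZATION
+    #include <boost/serialization/access.hpp>
+    // ... other boost serialization headers ...
+    #define SERIALIZABLE \
+        private: \
+        friend class boost::serialization::access; \
+        template<class Archive> void serialize(Archive &ar, const unsigned int version);
+#else
+    #define SERIALIZABLE
+#endif
+
+ +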

The implementation of the function serialize() (or save() and load() when needed) is in a separate cpp. For example for a library called geometry we'll have library geometry_serialization containing cpp with implementations of serialize(), save() and load() functions. One of the key features is that it compiles and links only when ENABLE_SERIALIZATION is ON.

+ +

The advantages that I see in it:

+ +
    +
  • solves the problems that have motivated this refactoring;
  • +
  • looks a lot cleaner;
  • +
  • while the serialization is now separate from the main source code and doesn't cause clutter, there is a reminder left in the code that there's serialization code for this class somewhere else (which is good because total omission of that fact might also lead to problems).
  • +
+ +

Possibly shortcomings that I worry about:

+ +
    +
  • the system is homebrewed so it needs to be explained to each and every developer working on the project;

  • +
  • the whole thing might be a reinvention of something that already exists in the boost serialization library and I just missed it;

  • +
  • possible existence of a more common and sensible approach that I don't know about.

  • +
+ +

Thank you in advance for any feedback and suggestions!

+",264344,,,,,43910.57014,How to separate a serialization code from application in a large c++ project,,0,2,,,,CC BY-SA 4.0, +406798,1,,,3/20/2020 13:41,,0,54,"

I am implementing a feature in an app where a user can unlock achievements, and when the client requests which achievements the user has unlocked, the client needs to know which ones the user hasn't seen before. There are UI considerations around new achievements vs. old ones.

+ +

The only way I can see to solve this nicely is to break the idempotent-and-safe rule of GET requests. Here is how I am currently thinking of solving it (which breaks this rule):

+ +

GET /users/:id/achievements

+ +

Achievements is data stored something like:

+ +
last_accessed: Date
+achievements: [{name: Enum, unlocked_at: Date}]
+
+ +

The GET endpoint would then compare what is new and not based on last_accessed and unlocked_at, and return data, BUT it would then have to update the last_accessed Date, which is what breaks the GET rule. That, and the fact that if you query it again, the 'new' achievements would be moved over to the 'not new' list on subsequent queries.

+ +
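
In other words, a first call would return something like this (names illustrative), and the entry under ""new"" would move to ""seen"" on the next call:

+ +
{
+    ""new"":  [ { ""name"": ""FIRST_WIN"", ""unlocked_at"": ""2020-03-20"" } ],
+    ""seen"": [ { ""name"": ""SIGNUP"", ""unlocked_at"": ""2020-03-01"" } ]
+}
+
+ +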

Please note that solving this purely client-side is not an option: if the user logs in on a different device, we cannot have every achievement appear as new.

+ +

Any advice or recommendations on how to solve this problem?

+",360112,,,,,43910.66181,API GET Request: Annotating data that the user hasn't seen before without breaking GET rules,,2,1,,,,CC BY-SA 4.0, +406804,1,,,3/20/2020 15:06,,1,20,"

The main source of D3js solutions is observableHq.com, but it seems impossible (?) to reuse its algorithms by copy/paste... Is it? Even checking tutorials like this, there is no simple way (with fewer plugins or less of the programmer's time consumed!) to check and reuse them.

+ +

Example: I need a fresh 2020 D3js v5 algorithm for indented-tree visualization, and there is a good solution: observableHq.com/@d3/indented-tree.
The download is not useful because it is based on the complex Runtime class...

+ +

But it seems to be a simple chart-builder algorithm,

+ +
chart = {  // the indented-tree algorithm
+  const nodes = root.descendants();
+  const svg = d3.create(""svg"")// ...
+  // ...
+  return svg.node();
+}
+
+ +

Can I, by simple human steps, convert it into a simple HTML page, with no complex adaptations, that starts with <script src=""https://d3js.org/d3.v5.min.js""></script> and involves no use of the Runtime class?

+ +
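
Something like this standalone skeleton is what I am hoping to arrive at (illustrative and untested; chart() is the notebook cell rewritten as a plain function):

+ +
<script src=""https://d3js.org/d3.v5.min.js""></script>
+<div id=""chart""></div>
+<script>
+  function chart(root) {            // the pasted notebook cell, as a function
+    const svg = d3.create(""svg"");
+    // ... pasted algorithm body ...
+    return svg.node();
+  }
+  // document.getElementById(""chart"").appendChild(chart(root));
+</script>
+
+ +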
+ +

FAQ

+ +

Assorted questions and answers:

+ +
+

Could you explain what the problem is with including the D3 script tag and using Runtime library in the code?

+
+ +

When the original ObservableHq algorithm is simple, I need another, simpler way to reuse it: by copy/paste and minimal adaptations.

+ +
+

Did you read the Downloading and embedding notebooks tutorial?

+
+ +

Yes, and there is nothing new there: no ""human instructions"" about how to reuse the code... only ""install this"" and ""install that"". No instructions about the ""copy/paste and minimal adaptations"" approach that I explained above.

+",84349,,84349,,43910.63403,43910.63403,How to interpret the set of simpler algorithms of ObservableHq as an reusable library?,,0,0,,,,CC BY-SA 4.0, +406808,1,,,3/20/2020 18:52,,-2,48,"

I have been wondering about this issue for quite some time now. I find it interesting, and I have not been able to come up with a solution.

+ +

This must be seen in the context of a shopping cart where you can first choose to add 1 gold bar to your cart and then add another 2, so the sum is 3.

+ +

Let's say we have these products:

+ +

1 gold bar = 40$
+2 gold bars = 80$
+3 gold bars = 110$

+ +

So when I add 1 gold bar, my cart is worth 40$. Adding another 2, my cart is worth 120$ (40$ + 80$).

+ +

Now I have 3 gold bars in my cart.

+ +

But there is a price for 3 gold bars at 110$, and I want that price to kick in of course, because I do have 3 gold bars in my cart.

+ +

Can this be categorized as a normal knapsack problem? Or is there another way of looking at it?

+ +
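
For reference, the formulation I have in mind is the classic ""minimum cost to reach a quantity"" dynamic program; here is a sketch in Java with the prices hard-coded from the example:

+ +
static int cheapestPriceFor(int qty) {
+    int[] bundleQty  = {1, 2, 3};
+    int[] bundleCost = {40, 80, 110};  // from the example above
+    int[] best = new int[qty + 1];
+    java.util.Arrays.fill(best, Integer.MAX_VALUE);
+    best[0] = 0;
+    for (int q = 1; q <= qty; q++) {
+        for (int b = 0; b < bundleQty.length; b++) {
+            if (bundleQty[b] <= q && best[q - bundleQty[b]] != Integer.MAX_VALUE) {
+                best[q] = Math.min(best[q], best[q - bundleQty[b]] + bundleCost[b]);
+            }
+        }
+    }
+    return best[qty];  // cheapestPriceFor(3) == 110, so the 3-bar price kicks in
+}
+
+ +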

Thank you

+",333082,,,,,43911.01111,Shopping Cart Offers,,1,2,,,,CC BY-SA 4.0, +406809,1,406811,,3/20/2020 18:52,,2,99,"

I am working on wrapping an old project written in C# in a test framework.

+ +

The largest problem I have is that I have a bunch of classes that are all VERY tightly coupled with other classes. All of these classes also make their own calls to the database, and many classes instantiate others directly within their methods.

+ +

I think I have come up with a good resolution to the problem, but I want to do a quick sanity check. I was thinking of moving instantiation of all objects to an abstract factory. Instead of directly instantiating adapters that speak to the database within the class, I will use Dependency Inversion to pass the adapter to the object at instantiation within the factory.

+ +

I will then be able to create a test factory that passes in mocked adapters to the objects instead of the ones that directly call the database.

+ +

When objects instantiate other objects within their own methods, they need to use the same type of factory that instantiated themselves. I was thinking of passing the factory that created the object into its constructor to save as a property when they want to make other objects. Would that be a good idea?

+ +
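
A rough sketch of what I have in mind (all type names are illustrative):

+ +
public interface IComponentFactory
+{
+    OrderProcessor CreateOrderProcessor();
+}
+
+public class ProductionFactory : IComponentFactory
+{
+    public OrderProcessor CreateOrderProcessor() =>
+        new OrderProcessor(new SqlOrderAdapter(), this); // real adapter, factory passed along
+}
+
+public class TestFactory : IComponentFactory
+{
+    public OrderProcessor CreateOrderProcessor() =>
+        new OrderProcessor(new FakeOrderAdapter(), this); // mocked adapter, no database
+}
+
+ +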

Sorry for the ramble, let me know if people need more information.

+",360131,,,,,43912.60069,Wrapping a legacy project in a test framework,,1,2,,,,CC BY-SA 4.0, +406810,1,,,3/20/2020 19:06,,1,58,"

I have an Angular SPA and I spun up my own basic form-based authentication using a .net core web API and SQL server to store user accounts / hashed and salted credentials. This is obviously not an ideal way to handle auth and it was only meant to be temporary. I have since gotten a request to integrate Azure AD, SSO for organizations using the software already. Since I wanted to move in this direction anyways, I think this is the perfect opportunity to rethink my auth solution.

+ +

I drew up this plan:

+ +

+ +

As detailed, I want all user access to be handled through Microsoft accounts, and do away with my custom solution. Now, I still need a way to make sure that only accounts allowed by me have access to the software; I plan to do this in the orange layer above. Once a user signs into an account via the login.microsoftonline.com portal, I would then validate that the email address is associated with an account in my SQL database already. If not, I would not allow them access to the software / secure route. The SQL database account entries will be added by me manually.

+ +

I figured this was a great solution because it removes the need for me to store sensitive password/user information in my database. I would only be storing some IDs (email address, etc.) for the Microsoft accounts that I allow in.

+ +

So a few questions:

+ +
    +
  1. Is this a proper SSO solution?
  2. +
  3. Are there any apparent major flaws in handling things this way?
  4. +
  5. Should I use the email address in my SQL database as the user identifier, or some other ID given by Azure AD?
  6. +
+",338194,,338194,,43911.01736,43911.01736,Using Azure AD MSAL as only authentication solution for SPA,,0,0,1,,,CC BY-SA 4.0, +406813,1,,,3/20/2020 21:11,,-1,102,"

Moved

+ +

I originally posted this on SoftwareEngineering because that's where this related question was; but having looked into the Help in detail, I think my question is more on-topic for stackoverflow, and I'm moving it there. Will probably delete this question once the move is complete. Sorry I can't move the comments, which have been helpful, along with the question.

+ +

Original question

+ +

I'm trying to modify a SQLite query (in Android) to return its results in pseudorandom order. As in this question, the order needs to be stable over time (e.g. paging, screen rotation, etc.), so I can't just use ORDER BY RANDOM(). Instead I want to use a hash function that depends on a couple of input values that provide stability and sufficient uniqueness. One of these values is ROWID of the table, which is a set of integers fairly close together; the other value is more like a session ID, that remains invariant within this query.

+ +

According to this well-researched answer, FNV-1 and FNV-1a are simple hash functions with few collisions and good distribution. But as simple as they are, FNV-1 and FNV-1a both involve XOR operations, as well as looping over the bytes of input.

+ +

Looping within each row of a query is pretty awkward. One could fake it by unrolling the loop, especially if only a few bytes are involved. I could make do with two bytes, combining LSBs from the two input values (val1 & 255 and val2 & 255).

+ +

XOR isn't supported directly in SQLite. I understand A ^ B can be implemented as (A | B) - (A & B). But the repetition of values, combined with the unrolling of the loop, starts to get unwieldy. Could I just use + (ignoring overflow) instead of XOR? I don't need very high quality randomness. The order just needs to look random to a casual observer over small-integer scales.

+ +
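
For example, a single emulated XOR written out in SQLite looks like this (my_table and val1 are placeholders, and 170 is just a sample constant):

+ +
-- a ^ b emulated as (a | b) - (a & b)
+SELECT ((val1 | 170) - (val1 & 170)) AS xored FROM my_table;
+
+ +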

So I'm wondering if anyone has already implemented such a thing. Given how widely used this hash function is, it seems like there would likely already be an implementation for this situation.

+ +

Here's my attempt at implementing FNV-1a:

+ +
SELECT ..... ORDER BY (((fnvbasis + (val1 & 255)) * fnvprime) + (val2 & 255)) * fnvprime;
+
+ +

I'm ignoring the fact that in FNV, the XOR operation (which I've replaced with +) is only supposed to affect the lowest 8 bits of the hash value. I'm also ignoring any overflow (which I hope just means the upper bits that I don't care about are lost).

+ +

For fnvbasis I'll use 2166136261, and for fnvprime I'll use 16777619. These are the specified values for 32-bit FNV, since I don't see specified values for 16-bit input.

+ +

So is this a reasonable way to approximate FNV-1a in a SQLite query? Is there a better, existing implementation? I.e. will it actually produce an ordering that looks pretty random to a casual user, despite my mutilating the operations of the real FNV-1a?

+",8465,,8465,,43913.63125,43913.63125,How to implement FNV-1(a) in SQLite?,,1,11,,,,CC BY-SA 4.0, +406818,1,,,3/21/2020 7:56,,-1,69,"

Assume we have data structure objects - some of them are DTOs and some of them are VOs. Also assume that Value Objects are immutable and Data Transfer Objects are immutable and readonly, then:

+ +

How would you generally call/define mutable data structures?

+ +

Note that they aren't entities either because they don't have identifiers. For example, it could be some event passed between ordered listeners which depend on information from previous listeners (like stopping propagation, etc.)

+ +

EDIT: I realize that objects are supposed to tie state and behavior - I'm not trying to separate them. If you think these objects should be DTOs or VOs or what I described are just objects then that's fine - this is still an answer to me. I'm interested in popular opinion - ultimately those programming rules are largely opinion based and shaped by politics. Maybe some will deem this question ill-suited for stack exchange as primarily opinion-based, but I think most of the answers here are just that.

+",360155,,360155,,43911.72222,43911.72222,Defining characteristics of mutable data structure objects,,1,11,,,,CC BY-SA 4.0, +406822,1,406824,,3/21/2020 11:21,,3,699,"

Is it bad for a REST API to have a non-unique ID for a child resource?

+ +

For example, the endpoint is:

+ +

GET /parent/:parent_name/child/:child_name

+ +

The :parent_name is unique, but a :child_name is only unique for that parent.

+ +

I decided to use names at first because the parent and child are resources that exist in external systems. This would allow the API consumers to enter the names into the URI intuitively without requiring an extra query to get an ID unique to the API like a GUID.

+ +

However, I'm starting to think this might have been a mistake. Any endpoints that use a child now need both the :parent_name and :child_name. Any endpoints where the child is the root also require the :parent_name in a query parameter, which gives me a bad feeling and I'm not sure how intuitive it is for API consumers.

+ +

Is this design with non-unique IDs for child resources acceptable, or is it better to just generate a unique ID within the API and always use that ID?

+",360161,,,,,43911.54583,"Designing a REST API resource with a non-unique ID, but unique composite ID",,1,0,,,,CC BY-SA 4.0, +406825,1,406826,,3/21/2020 12:11,,-1,391,"

I know that Python is an interpreted language and that C++ is a compiled one, or at least I like to think that I've understood some of their differences.

+ +

Although C++ is apparently faster than Python, what if you compiled Python code to an exe: would they be the same speed, or would C++ still be much faster?

+",360166,,209774,,43911.53611,43911.55556,Will compiled python code be as fast as compiled C++ code?,,2,3,1,,,CC BY-SA 4.0, +406829,1,406830,,3/21/2020 16:18,,-1,371,"

I am very new to UML and UML activity diagrams. My question: suppose that, in a student course registration system, after a student successfully logs in there are 3 options: add a course, delete a course, or review courses. How can we model this in a UML activity diagram?

+ +

Can I use a fork here? I know a fork is used when there are simultaneous actions, and this is not a simultaneous action: the student can either add a course, delete a course, or review courses. Can anyone help me? Any help is appreciated.

+",350162,,209774,,43911.70278,43911.70278,Select one from multiple options in UML activity diagram,,1,0,,,,CC BY-SA 4.0, +406831,1,406853,,3/21/2020 17:50,,0,231,"

Why are there only Domain Events (not talking about CQRS) in Domain-Driven Design theory?

+ +

Domain Commands like CreateOrder or CollectPayment seem to be a valid concept as well.

+ +

Is there a reason for this? Or are commands seen as a subset of events?

+",350226,,,,,43912.63056,Why are there no Domain Commands in DDD?,,1,0,,,,CC BY-SA 4.0, +406840,1,,,3/21/2020 21:50,,2,74,"

I’m developing a Matlab GUI for a scientific computing application and need to plot fairly heavy intermediate results.

+ +

Currently, the computation is represented as a function. The GUI accepts user input and passes it to the computation function, which returns the final data results.

+ +

Architecturally, the GUI incorporates a view and controller class. Main initializes these objects, in addition to defining the computation function, which lives in a sourced file.

+ +

Now, because I need to plot intermediate results, I’m trying to figure out the best way to create separation of concerns between the computation logic and plotting. The physical display of intermediate results to the user does not need to take place until the computations are completed.

+ +

My instincts are that the computation functions should not know that they’re being plotted.

+ +

Ideas so far:

+ +
    +
  1. Computation logic accepts UI axes handles as an input argument, plotting takes place inside the computation logic. This appears to violate separation of concerns.
  2. +
  3. Computation logic returns a data structure containing intermediate results. The GUI is responsible for binding the data to GUI plots. I don’t like this, as it seems cumbersome to keep all the intermediate data around.
  4. +
  5. I create a plotter object, which is initialized in main() and can be configured (e.g., turned on or off). During the computation procedure, the computation function calls a plotter method that generates the needed plots and discards the data. The plotting logic in the computation function is reduced to a single plotter method call, but it still exists. This is analogous to idiomatic solutions for logging in large applications.
  6. +
  7. The computation now lives inside a class, with intermediate results as a continuously overwritten member variable. The completion of an intermediate result triggers an event, a plotter object listens for that event and accesses/plots the computation class’s intermediate result member variable. This seems most fully separated, as now the computation logic has no knowledge in its code to being plotted.
  8. +
+ +

Ultimately, the computation is generating and discarding intermediate results that need to be captured and processed by another object, and I need some way of interspersing additional logic without modifying the original function.

+ +
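
For concreteness, the plotter-object idea would look roughly like this (sketched in Python, since the pattern is language-agnostic; render, step_one and step_two are placeholders):

+ +
class Plotter:
+    # a configurable sink for intermediate results; a no-op when disabled
+    def __init__(self, enabled=True):
+        self.enabled = enabled
+
+    def plot_intermediate(self, data):
+        if self.enabled:
+            render(data)  # draw to a figure, then let the data be discarded
+
+def computation(inputs, plotter):
+    intermediate = step_one(inputs)
+    plotter.plot_intermediate(intermediate)  # a single call, like a log statement
+    return step_two(intermediate)
+
+ +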

Any other ideas? This is being done in Matlab, but if there’s a conventional way of tackling this in other languages, I’d still love to learn about it.

+ +

Past Research:

+ +

This problem appears very analogous to organizing logging code, as in this question. However, while it seems widely accepted that logging statements are interspersed with business logic, it's unclear whether plotting demands stronger separation, or whether it should itself be considered inseparable business logic and contained within the computation code.

+",309478,,,,,43914.90139,Separating Plotting and Computation Logic in Scientific Computing MVC App,,1,1,,,,CC BY-SA 4.0, +406842,1,406844,,3/22/2020 0:39,,1,195,"

I am learning to create Use Case diagrams using the UML specification, but I have a couple of doubts about Use Case relationships which I cannot solve on my own.

+ +

My question regards two different situations:

+ +
    +
  1. the completion of a specific Use Case is a pre-condition for the execution of another Use Case. I have seen in some articles that a ""precedes"" connector is suggested, but I don't find anything similar in UML (the ""precedes"" and ""invokes"" relationships should be defined in the OML specification). The ""include"" relationship of the UML 2.5 specification seems suitable in some scenarios, but not in all of them.
  2. +
  3. there is an alternative (i.e. optional) flow during the execution of a Use Case, which can also be identified as a different Use Case which is meaningful independently. Reading the ""extend"" relationship description, I understand that ""the extending UseCase typically defines behavior that may not necessarily be meaningful by itself"". Also, some articles say that ""The extending use case is dependent on the extended (base) use case"" (i.e. doesn't make much sense on its own). I'm not sure that ""extend"" is the proper relationship in such a situation.
  4. +
+ +

I have faced both situations while designing the same diagram, thus I will report it here for completeness and a better understanding (I haven't already included all the system Use Cases in the diagram):

+ +
    +
  1. The User must be logged into the system before re-scheduling a booking (some articles suggest this to be an ""include"" relationship), and the User must be viewing the booking to ""trigger"" the re-scheduling Use Case (this is closer to a ""precedes"").
  2. +
  3. If no suitable alternatives are available during re-scheduling, the User should be able to Delete the current booking (instead of re-scheduling it).
  4. +
+ +

+",356140,,278015,,43920.43819,43920.43819,UML Use Case Diagrams Relationship - Required / Optional AND independent,,1,3,,,,CC BY-SA 4.0, +406847,1,,,3/22/2020 8:22,,1,41,"

So I have a problem regarding logic that I can't wrap my head around. I need to expand my application to two different countries, which both have their own respective languages (some countries require up to 3 languages).

+ +

As it turns out Country2 and Country3 have totally different views and logic for some of the major components in my application which require different views and services regarding the business logic.

+ +

We have built a modular system. Right now, for Country1, I have two languages, so under each module's resources I have views/lang1 or views/lang2, and based on middleware I show the correct language view.

+ +

Now how should I implement another country since most of the logic is different?

+ +
    +
  • Should I create modules for Country1 and Country2 under my main application, where each country's own application is running (meaning multiple views etc.)? I believe this would make updating and maintaining the application harder among different developers from different countries.

  • +
  • Should I create sub-directories under my TLD, so app.companyname.com/country1/en, where each application would have its own respective git repository?

  • +
  • Should I invest in buying domains for each country, e.g. app.companyname.fr/en?

  • +
  • Should I make subdomains for fr.app.companyname.com?

  • +
+ +

Which of these options would make the most sense in my case?

+",360201,,,,,43912.34861,How to structure a website with both multiple countries and languages?,,0,13,,,,CC BY-SA 4.0, +406848,1,406849,,3/22/2020 9:41,,2,289,"

I had a quick question. I was working on understanding malware, and I started to wonder how malware is able to run on its own once installed. For example, if I were to click on a bad link that someone sent me, the malware installs on my computer, but isn't that it? The file is installed, but it can't run unless I click on the actual .exe file itself, right? If that is the case, then what is the worry about malware; as long as you don't run it, you're fine? I am currently working in C++ on creating a GUI, and I was trying to produce a finished .exe product; while learning about it, I have stumbled on some pretty interesting questions I never even asked myself.

+",360166,,,,,43913.65556,How can malware run on a pc when installed?,,1,9,,,,CC BY-SA 4.0, +406856,1,,,3/22/2020 17:02,,0,263,"

I'm working on a blog-like application with Flask and SQLAlchemy and I'm unsure how to store the blog posts (articles) in the database. These are going to contain text and images (placed between paragraphs). I expect the text to have formatting, with bold, italic, different font sizes, etc.

+ +

Would markup be a good option? If so, would each blog post have an HTML file of its own? That doesn't seem very efficient, but maybe there's an elegant way of doing it.

+ +
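
If it is, I imagine the storage side would just be a text column holding the source, rendered on display (a sketch; the column names are invented):

+ +
class Article(db.Model):
+    id = db.Column(db.Integer, primary_key=True)
+    title = db.Column(db.String(200), nullable=False)
+    body = db.Column(db.Text, nullable=False)  # markup/markdown source
+
+ +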

If not markup, then what can I use?

+ +

In this answer to a similar question, they mention a react markdown package. If markdown is the 'best' way to solve this, is there something similar to this package in Flask or SQLAlchemy?

+ +

Note that by 'best', I mean the most appropriate and simplest, the easiest to integrate to a Flask SQLAlchemy project.

+ +

If any code is necessary, please tell me.

+",348910,,90149,,43913.625,43913.625,How to store articles (with text and images) in SQLAlchemy database,,0,2,,,,CC BY-SA 4.0, +406858,1,,,3/22/2020 17:33,,0,57,"

Edit -- I have now narrowed my question down to a more specific question.

+ +

Is there an argument not to wrap a 3rd party user input validation library?

+ +

To me there are a few strong arguments to do so.

+ +

1) Writing a wrapper makes it easy for me to extend the library or inject additional implementation around it.

+ +

2) If I change the dependency, say I have found a better input validation library, I can swap them out relatively painlessly; I just need to update my wrapper.

+ +

3) If the dependency updates, it is easy to migrate.

+ +

4) It makes testing easier
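
+ +

For illustration, the kind of thin wrapper I mean (all names are invented; SomeLibrary stands for the third-party dependency):

+ +
public interface InputValidator {
+    boolean isValidEmail(String value);
+}
+
+public final class ThirdPartyValidatorAdapter implements InputValidator {
+    @Override
+    public boolean isValidEmail(String value) {
+        // delegate to the third-party library; swapping libraries later
+        // only means writing a new adapter
+        return SomeLibrary.emailCheck(value);
+    }
+}
+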

+",357939,,357939,,43912.76042,43912.76042,Should a 3rd party user input validation library be wrapped?,,1,4,,,,CC BY-SA 4.0, +406861,1,,,3/22/2020 19:56,,-2,196,"

I am coming from a typical monolithic background and I've been experimenting a lot with the Spring Framework. I have also built some simple microservices communicating with each other, etc. +Now I want to go a step forward and build my own more complex app based on microservices. Right now I am at the phase where I am doing my research on how to do that. I have a simple idea of composing an online shop app with, let's say, two microservices:

+ +
    +
  1. Customer Relationship Service
  2. +
  3. Order Management Service
  4. +
+ +

What I still don't understand is how those two services could exchange information about customers. In a typical monolithic project I would just relate a Customer entity to an Order entity over a @OneToMany relationship, assuming a customer can have more than one active order. But how would this relationship work in a microservice environment without both services having to read the same database?
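
+ +

For example, is the idea that the Order service keeps only the customer's id instead of a JPA relationship, like this sketch (names invented)?

+ +
import javax.persistence.Entity;
+import javax.persistence.Id;
+
+@Entity
+public class Order {
+    @Id
+    private Long id;
+
+    private Long customerId;  // reference by id only; the Customer entity
+                              // lives in the other service's database
+}
+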

+",360219,,,,,43913.93333,@OneToMany relationship in a microservice enviroment,,2,7,0,,,CC BY-SA 4.0, +406865,1,,,3/22/2020 23:23,,0,178,"

I am building a mini SQL parser that reads and converts SELECT, INSERT, UPDATE and DELETE statements into an AST. My current design is as follows:

+ +

Statement.java:

+ +
public interface Statement {
+    String getOperation();
+    String getTable();
+    Map<String, Object> getColumns();
+    String getConditions();
+}
+
+ +

Select.java, Insert.java, Update.java and Delete.java are all POJOs that implement this interface, with varying attributes (for example, different toString() implementations). I have a separate utility class that defines a method called parse(String query) as follows:

+ +
public Statement parse(String query) {
+    String operation = getOperation(query);
+    if (operation.equals(""insert"")) {
+        String table = getTable(query);
+        Map<String, Object> keyValues = getColumns(query);
+        return new InsertStatement(table, keyValues);
+    } else if ...
+}
+
+ +

I'm wondering if it makes sense to have a separate utility class for defining the parse method or if it may be a better idea to change Statement to an abstract class and move the parse method inside it. That way, someone could call Statement.parse(""INSERT INTO ..."") and get an InsertStatement object back:

+ +
public abstract class Statement {
+    public abstract String getOperation();
+    public abstract String getTable();
+    public abstract Map<String, Object> getColumns();
+    public abstract String getConditions();
+
+    public static Statement parse(String query) {
+        String operation = getOperation(query); // getOperation(String) would be a static helper, as in the utility class
+        if (operation.equals(""insert"")) {
+            return parseInsert(query);
+        } else if ...
+    }
+
+    private static InsertStatement parseInsert(String query) {
+        // parse and return InsertStatement instance
+    }
+}
+
+ +

Is there a general good practice recommended here?

+",241284,,,,,43913.62431,Designing a Parser - Abstract vs Interface,,2,2,,,,CC BY-SA 4.0, +406867,1,,,3/23/2020 0:05,,1,78,"

We have a system consisting of

+ +
    +
  • Detectors
  • +
  • Converters/Transformers
  • +
  • Business Rules Engines and State Managers
  • +
  • Views/Displays/Applications
  • +
+ +

There are several potential ways these may interact and share state

+ +
    +
  • Message passing mediated by Queues
  • +
  • Point to Point Message Passing via Http
  • +
  • Shared State in a Database/Datastore and state-change notifications via messaging
  • +
+ +

I am drawing up a simplistic flowchart / system diagram and am wondering how best to represent the publish-subscribe or notifier/listener(s) mechanisms. I do not recall standardized flowchart symbols for these.

+ +

An example would be:

+ +
1  A Parts Modification detection event occurs
+
+2  Events are written to `Events` and `Parts_State` topics (or possibly tables)
+
+3  A State_Management module that listens on those topics performs validations and 
+   transformations then updates downstream topics with the results
+
+ +

What I am looking for is how to represent, in the flowchart, the Publisher/Notifier aspect of the Detection module and the Subscriber/Listener aspect of the State_Management module.

+",102887,,,,,43913.00347,Flowchart symbol or pattern for Publisher/Subscriber or Listen/Notify,,0,2,,,,CC BY-SA 4.0, +406870,1,406874,,3/23/2020 6:10,,-3,131,"

I'm working on a solution to ""suspend / hibernate and resume"" an application written in C # /. Net and looking for the right name for it.

+ +

This solution ""hibernate"" the completed thread, allows you to save its state (stack, local data ...) to disk and restart (from hibernation). +Something like the hibernation function in the Windows operating system, working only on individual threads in .Net. +The key in this solution is that the thread lives very long. Such a thread can be serialized (at the execution point)/hibernate, saved to disk and resumed after a few days (even on another computer).

+ +

The name ""Hibernate"" would be appropriate, but is immediately associated with ORM Hibernate.

+ +

The name should reflect the intentions of the solution and be understood by potential users. +Do you have any suggestions?

+",343225,,343225,,43913.32986,43913.37292,"How to name the functionality of ""suspend/hibernate and restore application execution"" - something like Hibernate in Windows, but only for one Thread",,1,4,,43913.64722,,CC BY-SA 4.0, +406871,1,,,3/23/2020 6:13,,2,194,"

Consider the following JavaScript code:

+ +
class MyClass {
+   #myPrivateField;
+   constructor() {
+      this.#myPrivateField = new AnotherClass();
+      this.theGetter.method1(); // or: this.#myPrivateField.method1(); 
+      this.theGetter.method2(); // or: this.#myPrivateField.method2();
+   }
+   get theGetter() {
+      return this.#myPrivateField;
+   }
+}
+
+ +

Obviously, invoking the ""getter"" method of theGetter causes some negligible overhead, or probably gets inlined. Inside of the class, the getter could've perfectly been alternated with a ""direct access"" notation (i.e, through this.#myPrivateField.method1(), for example).

+ +

Regardless of the performance penalty, from a software engineering perspective, why should or shouldn't I stick to one notation over the other within the class, and what is the best practice in this regard?

+",360238,,360238,,43913.26667,43915.50347,The usage of getter notation inside the context of the class,,2,10,,,,CC BY-SA 4.0, +406876,1,,,3/23/2020 9:33,,0,174,"

I have a form that's being submitted from my front end to my back end, and the back end is responding. I'd like to ensure that they both send JSON in a format that the other side will interpret correctly.

+ +

From this question, I figured out that JSON schemas are the way to do that.

+ +

However, that's not quite what I want. JSON schemas are a way of validating that data conforms to the schema.

+ +

For an example of what I've tried so far, in one of my earlier sites, I was using a Python Flask backend with a React frontend. I was only passing one value in the JSON so I was just using JSON in the form {'key': number} where the key named 'key' was the only key in the JSON. I stored the text 'key' as a constant both in Python and in Javascript. Ideally, both would look it up from the same file.

+ +

That is, I want my program to be able to look up the values in the schema and update. If the name of one key changes to something else, the rest of my program should update accordingly. Just using schemas doesn't work because they only validate; they don't update my program.

+ +

How do people usually make their back end and front end update to accommodate new schemas without changing all the constants in the code by hand?

+ +

As a fake example, it would be nice if the python schema program allowed me to give unchanging names to the properties. For example I have a 'price' property described as ""the cost of the object"". I then give a constant name like ""PRICE"" to this property. It's then referenceable in my code as Schema.PRICE. This means that if I get a new spec from the store owner that says ""We decided that what used to be 'price' should now be called 'cost' because it's how much it costs our customers to pay for it,"" all I have to do is go to the schema and change 'price' to 'cost'. But in my code, everything remains Schema.PRICE which will now just evaluate to 'cost' when run. This allows the property names within my schema to be flexible while the code remains effortless to maintain.

+ +
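
In code, the hypothetical mechanism above might look like this on the Python side (everything here is invented for illustration):

+ +
# shared_schema.py -- the only place the wire name is written down
+WIRE_NAMES = {""PRICE"": ""cost""}  # was ""price"" before the rename
+
+class Schema:
+    PRICE = WIRE_NAMES[""PRICE""]
+
+# elsewhere in the backend: no literal ""cost"" anywhere
+payload = {Schema.PRICE: 9.99}
+
+ +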

So, how is this sort of flexibility usually accomplished?

+",234187,,,,,43913.39792,Structure of constants between back end and front end,,0,0,,,,CC BY-SA 4.0, +406877,1,406881,,3/23/2020 11:04,,2,973,"

Hi, I am trying to create a role-based dynamic route authorization system on my NodeJS/ExpressJS-powered API server.

+ +

Scenario:

+ +
    +
  • There will be some roles like Admin, StandardUser etc. and they can't be edited or removed.

  • +
  • More roles can be added by authorized users dynamically. These roles of course can be edited or removed.

  • +
  • Authorized users can set all the route permissions. They can specify that Role A can use Route X. They can do it for all kinds of roles: dynamic roles or ""unchangeable"" roles like Admin and StandardUser.

  • +
+ +

Now my scenario shows that I need a Roles table because of my dynamic roles, and a RoutePermissions table for dynamically assigning route access to my roles. However, I can't find a good way to store my ""unchangeable"" data.

+ +
    +
  1. Where should I put my ""unchangable"" (I don't know if there is a term for it. Please let me know.) role data. Where should I put SuperAdmin and StandardUser roles? Because if I put them in roles table in my DB I will have 2 unchangable rows in it my dynamic data table.

  2. +
  3. Where should I put my ""unchangable"" route data. Since the routes (POST /users/deactivate) are coded in the system I don't think it is correct to put them in database. Because I think, the database should contain only the dynamic data but my routes are not dynamic.

  4. +
  5. If i don't put my ""unchangable"" data to DB. I can store it in my code like enums or consts but how can I dynamically create permissions with them? Because permissions are dynamic for each role. (Even for Admin and StandardUser remember they are not dynamic roles.)

  6. +
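
+ +

The closest thing I can imagine is a hybrid: constants in the code, seeded into the table at startup (a sketch; the db API here is invented):

+ +
const FIXED_ROLES = Object.freeze([""SuperAdmin"", ""StandardUser""]);
+
+async function seedFixedRoles(db) {
+    for (const name of FIXED_ROLES) {
+        // builtIn rows would be rejected by the edit/delete endpoints
+        await db.roles.upsert({ name, builtIn: true });
+    }
+}
+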
+",358952,,358952,,43914.74444,43914.74444,Storing unchangable data in database vs in code,,2,1,,,,CC BY-SA 4.0, +406879,1,,,3/23/2020 11:25,,0,22,"

I am looking for standards for implementing access tokens to do authorization in our services. +So far I have found the OAuth standard, which seems to offer everything we need.

+ +

I am however looking for alternatives, to ensure that the choice we make is the right one. +I tried googling, but https://www.google.com/search?client=firefox-b-d&q=access+token+standards only seems to result in OAuth2.0.

+ +

Are there any other access token standards?

+",269550,,90149,,43913.62292,43913.62292,Standards for access tokens (authorization),,0,1,,,,CC BY-SA 4.0, +406889,1,406895,,3/23/2020 15:20,,-1,126,"

Imagine you have a software component named A which knows component B.
+Let's say ""knowing"" each other is via a reference or imports, or both.

+ +

Is it safe to make the following statements:
+1. A-->B: this connection is loosely coupled?

+ +
    +
  1. If, in addition, B also knows A (B-->A), could we then safely say that the components are tightly coupled, and only then?
  2. +
+",114196,,11732,,43913.66875,43913.94861,Software components tightly coupled or loosely coupled- simple case examples,,3,3,,,,CC BY-SA 4.0, +406891,1,,,3/23/2020 16:53,,0,248,"

I'm trying to understand why some frameworks introduce control over the number of consumers on an AMQP queue.

+ +

For example, Spring AMQP introduced this feature in version 1.3.0, with the maxConcurrentConsumers property:

+ +
+

Since version 1.3.0, you can now dynamically adjust the concurrentConsumers property. If it is changed while the container is running, consumers are added or removed as necessary to adjust to the new setting.

+ +

In addition, a new property called maxConcurrentConsumers has been added and the container dynamically adjusts the concurrency based on workload.

+
+ +
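
For reference, this is the configuration I am talking about (illustrative values; connectionFactory is assumed to exist):

+ +
SimpleMessageListenerContainer container =
+        new SimpleMessageListenerContainer(connectionFactory);
+container.setConcurrentConsumers(2);      // baseline number of consumers
+container.setMaxConcurrentConsumers(10);  // the container scales up to this under load
+
+ +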

But I can't figure out the advantage of using it, given that I could just set concurrentConsumers to my maximum value from the start.

+ +

The only advantage that I know of is reducing the average number of connections to the RabbitMQ server. But considering that most applications are very far from reaching a number of connections that could cause problems for a RabbitMQ instance, this consumer-control feature seems to be one that rarely provides an advantage.

+",172464,,172464,,43913.77083,43913.77083,Why control the minimum and maximum concurrent consumers on an AMQP queue?,,0,0,,,,CC BY-SA 4.0, +406899,1,,,3/23/2020 22:20,,0,105,"

I am in the process of implementing a persistent collection in C, specifically, an immutable hash trie. In order to increase acceptance and reusability, I have identified the following key areas that should be abstracted:

+ +
    +
  • typing of key and value
  • +
  • allocation & freeing of additional nodes
  • +
+ +

Typing of key and value also comes with the complexity abstracting hash and equality functions.

+ +

I am mostly interested in being able to handle the following cases in particular, regarding memory management:

+ +
    +
  • creating new nodes with malloc and using reference counting to free them
  • +
  • using the Ravenbrook memory pool system (MPS) as a garbage collector, which means I'll have to use the MPS API to request new memory
  • +
+ +

and typing:

+ +
    +
  • (void *) as default for key and value types
  • +
  • making any or both of key and value types a struct
  • +
+ +

In addition I am wondering whether it would be useful to give users the option to provide hash and equals functions either:

+ +
    +
  • at runtime, and passing them down the trie every time, or
  • +
  • at compile time, ""baking"" them into the various functions with the help of preprocessor macros
  • +
+ +

Are there ""standard"" ways to achieve those goals? Patterns, so to speak? Or are these goals misguided?

+ +

I haven't gotten to the point of a working implementation yet, but from first experiments I think I can achieve these goals via heavy use of macros and an unorthodox #include order.
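
+ +

Roughly, the compile-time ""baking"" direction looks like this so far (an untested sketch; all names are invented):

+ +
/* hash and equality are baked in at expansion time */
+#define TRIE_DECLARE(NAME, KEY, VAL, HASH, EQ)              \
+    typedef struct NAME##_s NAME;                           \
+    VAL  NAME##_get(const NAME *t, KEY key);                \
+    NAME *NAME##_assoc(const NAME *t, KEY key, VAL val);
+
+TRIE_DECLARE(str_trie, const char *, void *, fnv1a_hash, str_eq)
+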

+",238766,,,,,43950.83333,How should I provide generic typing and allocation for a collection library in C?,,1,5,,,,CC BY-SA 4.0, +406908,1,,,3/24/2020 7:23,,0,24,"

I have several devices that need to send data to and receive data from a message broker (RabbitMQ in my case, but that is not so relevant to the discussion), and I need, as you can imagine, to provide connectivity in a way that can be considered secure both for the user and for the service provider.

+ +

I know that mutual SSL certificate authentication is a common way to authenticate devices, but what if I don't want to use client certificates? Maybe I can't afford a PKI, or I work with untrusted devices, so an SSL certificate would be useless since it is not private by design.

+ +

I think it would be quite good if I were able to generate random, revocable, temporary credentials for each device, maybe after a handshake made with HTTP APIs and based on some sort of hardware secret.

+ +

But what are the patterns, the tools, and the state of the art on this subject?

+ +

Thanks in advance for replies

+",354790,,354790,,43914.34931,43914.34931,Broker Temporary Credentials for IoT Usage,,0,0,,,,CC BY-SA 4.0, +406910,1,406913,,3/24/2020 9:09,,1,87,"

Can requirements be expressed in class diagrams?

+ +

For example:

+ +

A student can enroll in a class. +There are different classes. +Some classes have dependencies, e.g. you can't take Spanish III before you have taken Spanish II and Spanish I.

+ +

How to model that?

+",360306,,,,,43914.45278,Class-diagram dependencies between classes,,2,6,,,,CC BY-SA 4.0, +406911,1,406921,,3/24/2020 9:44,,2,213,"

While making an application I've come to the point where I want to add logging for the inevitable case when something goes wrong. Now this seems like a problem that should have been solved decades ago, and indeed - there's no shortage of logging libraries for any decent programming language. But while trying to choose among them I ran across a rather fundamental dilemma.

+ +

Do I emphasize safety or performance?

+ +

Let's look at both extremes. If I emphasize safety, then I should make sure that the log entry is safely stored before I proceed with work. Otherwise I could end up in a situation where my program thinks it's logged 5 log entries and has done substantial work, but then an error makes these entries disappear and later forensics turn up nonsensical results. ""I see that the program only logged up to point A, but there is already data stored which suggests it reached point B. Then where are the log entries between A and B?! What's going on here?!"" At the furthest extremes of safety this means that after I produce a log entry I also need to wait until it has been successfully flushed to physical storage. However that's a VERY expensive operation and would basically kill my program's performance, since every log call would probably take tens if not hundreds of milliseconds.

+ +

On the other extreme - performance. In this case I need to shove the log entry to wherever as fast as I can and move on without looking back. Another background process then can take said entry and try to write it to physical storage, but the main process is unaffected. The main risk here is the one mentioned before - it's easy to lose log entries if everything crashes and burns before they could be written out.

+ +
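
To make the fast end concrete, I am picturing something like this (a sketch; the background writer and error handling are omitted):

+ +
import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+
+class FastLogger {
+    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(10_000);
+
+    void log(String entry) {
+        buffer.offer(entry);  // never blocks the caller; entries may be lost on a crash
+    }
+    // a background thread drains the queue, writes batches and flushes periodically
+}
+
+ +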

And this whole thing is on a spectrum, of course. By employing various strategies you can make your chosen approach either safer (but slower) or faster (but riskier). And... I can't decide how to choose.

+ +

Is there some commonly accepted middle ground? Has anyone ever come up with a way of deciding where your particular situation should lie?

+",7279,,,,,43914.72431,Logging: safety vs performance. How do I choose?,,2,13,1,,,CC BY-SA 4.0, +406912,1,,,3/24/2020 10:01,,-4,76,"

I need 16GB of RAM on my development machine to do some Android programming, but I can't really afford to buy a new one. So, what's to stop me from just creating a VM on Azure and using it for just a few hours every day (since it's billed per second), saving a lot of costs? If I use Azure's spot instances, I get a significant amount of savings, since I don't care about 24x7 deployment anyway.

+",179065,,,,,43914.82778,"If my development machine is slow and has low end specifications, why can't I just rent a VM on Azure?",,1,4,,43914.59931,,CC BY-SA 4.0, +406922,1,,,3/24/2020 15:11,,0,208,"

I'm using a DDD approach for the Domain classes. However, I have a problem in my design that I'm handling now, but I haven't had a good idea for how to tackle it.

+ +

My architecture is the following one:

+ +

- Core

+ +
    +
  • Application(here I have a bunch of command/queries that use Domain entities and CQRS to process use cases)
  • +
  • Domain
  • +
+ +

-Services

+ +

-Infrastructure

+ +

-Presentation

+ +

My problem lies in the following. I have a class called Template that implements the interface ITemplate. This interface declares several methods and properties, as we can see below:

+ +
 public interface ITemplate
+ {
+        public int Id { get; set; }
+
+        string Name { get; set; }
+
+        string Definition { get; set; }
+
+        ISource Source { get; set; }
+
+        ITemplateDefinition GetTemplateDefinitionObject();
+
+        void SetTemplateDefinition(string templateDefinitionString);
+ }
+
+ +

Inside a command, placed on the Application folder, I have the following:

+ +
public async Task<Unit> Handle(UpdateTemplateCommand request, CancellationToken cancellationToken)
+{
+       var templateDto = request.Template;
+
+
+       var sourceForTemplate = await SourceRepository.SingleAsync(x => x.Id == templateDto.SourceId);
+
+       if (sourceForTemplate == null)
+       {
+           throw new NotFoundException(nameof(Domain.Entities.Source), request.Template.SourceId);
+       }
+
+       var templateToUpdate = await TemplateRepository.SingleAsync(x => x.Id == templateDto.Id);
+
+       if (templateToUpdate == null)
+       {
+            throw new NotFoundException(nameof(Domain.Entities.Template), request.Template.Id);
+       }
+
+
+       if (!string.IsNullOrEmpty(templateDto.Definition))
+       {
+            try
+            {      
+              templateToUpdate.SetTemplateDefinition(templateDto.Definition); // <-- my problem
+            }
+            catch (Exception e)
+            {
+               throw e;
+            }
+
+        }
+
+       TemplateRepository.UpdateRFEntity((Domain.Entities.Template)templateToUpdate);
+   }
+
+ +

The line marked ""// <-- my problem"" is the one I'm struggling with.

+ +

When unit testing this, I can't fake that call, since the repository returns a concrete implementation and not an interface. I think returning the concrete type is the right way: when dealing with an ORM like EF, which offers the possibility to track entities and other mechanisms, we shouldn't lose that by mapping responses to an interface.

+ +

Does anyone have an idea how to do this more cleanly, so that I can mock class calls without having to make members virtual (since I already have an interface) and without having to map all responses to the interface?

+",319070,,232369,,44096.59236,44126.91736,Entity Framework and Domain Driven Design Testability,,2,8,,,,CC BY-SA 4.0, +406924,1,,,3/24/2020 17:31,,1,82,"

We have a large monolithic database. As part of a new requirement, I am proposing to move our financial transactional system into a new separate database via microservice-esque services. These financial transactions are very important for our customers and performance is key. It’s a very large module with a lot of business complexities. Expectation is that our system should be able to handle 1000 TPS minimum.

+

There are two schools of thought in our development team.

+

Side A wants to keep it as a monolithic DB but improve some bad designs and improve code quality in general. The advantages are a less steep learning curve and fewer surprises at a later stage.

+

Side B (me included) wants to use this opportunity as a challenge and implement a microservice-based solution with a smaller, scalable database. The challenges are fear of the unknown, stepping out of our comfort zone and, since the project is time-bound, the chance of failure.

+

The following are the challenges in designing this as an independent system.

+
    +
  • A lot of our data will reside in a monolithic badly designed database. A lot of tables expect reference of transaction ID generated as result of a financial transaction. For example, players can get offers. If they redeem offers, their account will be credited with X amount of money. Now to support various reports, we need to maintain which player redeemed which offer and what was the corresponding ID of that transaction.
  • +
  • Reports will be tricky. They will need to refer to both databases. The only option is to use a linked server or OPENQUERY, but performance will surely be degraded.
  • +
  • Right now, most of our logic is in the database. So C# simply needs to make a database call to invoke a HUGE SP, which will call a lot of functions and further SPs. But yes, there will be only one call. If we move to a service-bus-driven design, there will be a lot of calls over the network. A typical flow could easily have 8 to 10 calls, with a lot of back and forth. It's not easy to do a POC to guarantee that multiple API calls will perform better than a single procedure call.
  • +
+

I am looking for your opinion on microservice architecture and smaller databases. In your experience, is it really worthwhile to emphasize smaller databases? Separating the databases will definitely trigger a rewrite of many components, since we will need to break the complex SP logic into API calls.

+",194130,,173647,,44047.74444,44047.74444,Moving a module out of monolithic database using microservice patterns - challenges,,1,1,,,,CC BY-SA 4.0, +406929,1,,,3/24/2020 22:21,,3,88,"

Whether to use GET or POST for searching has very often been at the center of discussion in the past.

+ +

If you are making, for example, a geospatial request, both values are needed (lat and long). If you do it with GET, you cannot really annotate in OpenAPI that the fields are dependent on each other: the whole group lat/lon/distance is required if one of them is set. There are some open tickets that could solve these use cases by defining so-called fragments, but I don't believe it will be standardized in OpenAPI anytime soon.

+ +

Is it really a bad thing to use POST in such cases? From my understanding, POST is used when data is mutated; however, it gives us:

+ +
    +
  • ++ the ability to validate very early that all parameters of a specific search request are valid, instead of in the controller/service layer
  • +
  • ++ makes the URI more readable (it can be used along with filtering)
  • +
  • -- not able to save it in bookmarks
  • +
  • -- for testing you need Postman/curl/...
  • +
+ +

So basically I would have something like this:

+ +
POST /stores?country=US
+{
+    ""nearLocation"": { ""lat"": 10, ""lon"": 10, ""radius"": ""10km"" }
+}
+
+ +

instead of

+ +
GET /stores?country=US&lat=10&lon=10&radius=10km
+
+ +

which would require validating the fields in the controller and could not be defined in the OpenAPI file.

+ +
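
For illustration, the early validation I mean could look like this with a POST body in ASP.NET Core (a sketch only: the class and property names are my own, and an [ApiController] controller is assumed, so that invalid models are rejected with a 400 before the action runs):

+ +
using System.Collections.Generic;
+using System.ComponentModel.DataAnnotations;
+
+public class NearLocation
+{
+    public double? Lat { get; set; }
+    public double? Lon { get; set; }
+    public string Radius { get; set; }
+}
+
+public class StoreSearchRequest : IValidatableObject
+{
+    public NearLocation NearLocation { get; set; }
+
+    // The whole lat/lon/radius group must be present together.
+    public IEnumerable<ValidationResult> Validate(ValidationContext context)
+    {
+        var n = NearLocation;
+        if (n != null && (n.Lat == null || n.Lon == null || n.Radius == null))
+            yield return new ValidationResult(
+                ""lat, lon and radius must all be set together"");
+    }
+}
+
+ +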

What do you think: is it really bad practice to move some fields to POST? I think we should differentiate between filtering and searching. You can filter on enums, strings, etc., and you can search with geospatial requests or date-range requests, where the comparator is not just equality. Am I thinking about this the right way?

+",318664,,,,,44185.96042,Geospatial search in microservices - GET or POST?,,1,2,1,,,CC BY-SA 4.0, +406933,1,,,3/25/2020 0:02,,3,143,"

I am wondering how software with premium modes, such as Spotify, can operate offline. If a user is not connected to the internet, then they cannot authenticate via a remote server, which has, e.g., a database of users and their account expiration dates. The only explanation I can think of is that the software locally stores the account's expiration date and encrypts this file while the user is online. But for this to work offline, the decryption key and method must also be stored locally rather than being retrieved from a server. In that case, it seems that authentication is only secured through obfuscation of the encryption method and key in the machine code.

+ +
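
One scheme I can imagine (an assumption on my part, not something I know any vendor actually does) is to sign the license data rather than encrypt it: the server signs the expiration date with a private key, and the client ships only the corresponding public key, which can verify licenses but cannot forge them. A minimal C# sketch, where the payload format is hypothetical:

+ +
using System;
+using System.Security.Cryptography;
+using System.Text;
+
+public static class OfflineLicense
+{
+    // payload is e.g. ""expires=2020-12-31;user=42"", issued while online.
+    public static bool IsValid(string payload, byte[] signature, RSA publicKey)
+    {
+        byte[] data = Encoding.UTF8.GetBytes(payload);
+        bool authentic = publicKey.VerifyData(
+            data, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
+        // A tampered payload fails verification; clock rollback is a
+        // separate problem this sketch does not address.
+        return authentic && DateTime.UtcNow <= ParseExpiry(payload);
+    }
+
+    private static DateTime ParseExpiry(string payload)
+    {
+        string date = payload.Split(';')[0].Split('=')[1];
+        return DateTime.Parse(date);
+    }
+}
+
+ +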

Is the above the typical way that offline authentication is handled, or is there a better way of achieving this?

+",360357,,,,,43915.28333,How does authentication work for offline software?,,3,0,1,,,CC BY-SA 4.0, +406941,1,406958,,3/25/2020 11:49,,2,109,"

Often enough, the terms data model and data format are used interchangeably, but here I disagree. Let's start with the simpler one, the data format. I don't know the exact definitions, but the data format describes the layout of a piece of data, the meaning of the individual bytes. There are many formats available, more general-purpose ones being, for example, JSON and XML. However, one can represent the same piece of data in different formats. A short example: a location.

+ +

JSON

+ +
{
+    ""location"": {
+        ""longitude"": 41.25,
+        ""latitude"": -120.9762
+    }
+}
+
+ +

And XML

+ +
<location>
+    <longitude>41.25</longitude>
+    <latitude>-120.9762</latitude>
+</location>
+
+ +

So, the basis for those two is a common data model, which states that a location is comprised of two fields, longitude and latitude, and that both of those fields are floating point values.

+ +
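
To make that point concrete in code (C# is just my illustration language), the same in-memory model can be handed to serializers for both formats:

+ +
using System.IO;
+using System.Text.Json;
+using System.Xml.Serialization;
+
+// The abstract model: a location is two floating point fields.
+public class Location
+{
+    public double Longitude { get; set; }
+    public double Latitude { get; set; }
+}
+
+public static class Demo
+{
+    public static void Main()
+    {
+        var loc = new Location { Longitude = 41.25, Latitude = -120.9762 };
+
+        // Same model, rendered as JSON...
+        string json = JsonSerializer.Serialize(loc);
+
+        // ...and as XML.
+        var writer = new StringWriter();
+        new XmlSerializer(typeof(Location)).Serialize(writer, loc);
+        string xml = writer.ToString();
+    }
+}
+
+ +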

Now my question is: how can I formally write down such a data model, preferably in a common, machine-readable format (even a standardized diagram would already be better than a sheet of paper)?

+ +

XSD for XML and JSON Schema for JSON are mappings of a data model onto a specific data format. What I was hoping to find is one level higher: an abstraction of the data model that can be consumed by multiple, different data formats. I don't design the application, I design the data model. If application A consumes a piece of data from a web service, JSON might be a good choice of data format there. But an application B, which works with the same data but only over the wire, might prefer a binary format. I hope it is clear what kind of abstraction I want: not to model data for one specific format, but for all formats.

+",321010,,321010,,43915.52986,43915.82986,How to represent a data model?,,2,3,1,,,CC BY-SA 4.0, +406946,1,406951,,3/25/2020 15:30,,3,248,"

Some of our customers from time to time report an unexpected behavior in one feature of our software and we suspect that we have a bug.

+ +

The feature itself and the kind of bug are not interesting for the purpose of this discussion, but just to make it concrete: the broken piece is a command scheduler. From time to time, scheduled commands are lost and are not executed at the scheduled time of day. We are currently not able to reproduce this issue in a controlled way.

+ +

By investigating the service in charge of implementing the broken feature we noticed that the current implementation has an insufficient number of logs and this makes understanding the runtime behavior quite difficult. So, we decided to improve the logging in order to have better insights about the runtime behavior in the customer installation.

+ +

While reasoning about this problem I asked myself a basic question: is it a good choice to depend on debug-level logs in order to have a full understanding of what's going on in a software product? Is there a better way to handle this kind of situation?

+ +

The point is that no one is going to run software in production with debug-level logs enabled (at least not in standard scenarios). When debug-level logs are enabled, plenty of logs are written, and this can harm the log store in terms of storage consumption and performance.

+ +

So, the first problem is that debug-level logs are not enabled by default in production, which means that when a problem arises for the first time you don't have the precious logs that could help you fully understand what exactly happened. You just observe an unexpected behavior, but you don't have a clear idea of the root cause.

+ +

This point can be very harmful because, in many cases, the pattern needed to reproduce an unexpected behavior is unknown or not very clear. This means that, once debug-level logs are enabled to carry on the investigation, it is entirely possible that you won't be able to reproduce the issue observed before, and you are stuck, unable to understand the root cause.

+ +
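
One idea we are evaluating (just a sketch of a well-known middle ground, not something we run in production) is to keep recent debug messages in a bounded in-memory buffer and flush them to the log store only when an error occurs, so the context is preserved without paying for debug-level logs all the time:

+ +
using System;
+using System.Collections.Concurrent;
+
+public class RingBufferLogger
+{
+    private const int Capacity = 1000;
+    private readonly ConcurrentQueue<string> _recent = new ConcurrentQueue<string>();
+
+    public void Debug(string message)
+    {
+        _recent.Enqueue($""{DateTime.UtcNow:o} DEBUG {message}"");
+        // Keep only the most recent Capacity entries.
+        while (_recent.Count > Capacity) _recent.TryDequeue(out _);
+    }
+
+    public void Error(string message, Action<string> sink)
+    {
+        // On error, dump the buffered debug context first.
+        foreach (var line in _recent) sink(line);
+        sink($""{DateTime.UtcNow:o} ERROR {message}"");
+    }
+}
+
+ +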

Are there better alternatives to debug-level logs for handling these scenarios?

+",310179,,,,,43917.63194,Are there better alternatives to debug level logs to investigate a bug in a production environment?,,3,2,,,,CC BY-SA 4.0, +406949,1,,,3/25/2020 15:39,,1,118,"

During my first attempt at implementing a project with the ""Clean Architecture"", I tried to implement a job portal and came across a problem concerning the communication between two (hopefully) loosely coupled modules.

+ +

The two modules are:

+ +
    +
  • Identity - responsible for user operations like register
  • +
  • Catalog - responsible for job operations like find
  • +
+ +

The Problem

+ +

When searching for one or more jobs, information about the employer should be sent along with the job information.

+ +

Brief Architecture Description

+ +

A Job (inside Catalog) references a User (inside Identity) by an id called employee:

+ +
class Job {
+    private Identifier id;
+    private Identifier employee;
+    /* ... */
+}
+
+ +
+ +

First Attempt

+ +

My first idea was to query the information inside a Use Case but this would couple the two modules:

+ +
class FindJobByIdUseCase {
+
+    private final FindUserByIdUseCase findUserById; // from Identity module
+    /* ... */
+
+
+    JobResponse execute(FindJobInput input) {
+      User employee = findUserById.execute(input.employee);
+      Job job = /* ... */;
+      /* ... */
+      return new JobResponse(jobDTO, employeeDTO); 
+    }
+
+}
+
+ +

Second Attempt

+ +

I thought I could create a third module coupled to Identity and Catalog to aggregate the required data:

+ +

Identity <----- Identity-Catalog-Aggregate -------> Catalog

+ +
// in Identity-Catalog-Module
+class FindJobAndEmployeeUseCase {
+
+    private final FindUserByIdUseCase findUserById; // from Identity module
+    private final FindJobByIdUseCase findJobById;   // from Catalog module
+    /* ... */
+
+
+    Response execute(FindJobAndEmployeeInput input) {
+      User employee = findUserById.execute(input.employee);
+      Job job = findJobById.execute(input.job);
+      / * ... */
+      return new Response(jobDTO, employeeDTO); 
+    }
+
+}
+
+ +
+ +

Both attempts feel wrong. Can you advise me on the solution that makes the most sense?

+ +

I appreciate any kind of answer that will help me.

+ +

Thank you for your time and effort!

+",284647,,284647,,43915.69653,44100.45903,Collect Data of different Modules inside a Monolith,,2,1,,,,CC BY-SA 4.0, +406955,1,,,3/25/2020 19:04,,3,120,"

The data for my application is retrieved from an XML file. The XML file is versioned, and new versions appear very often. The structure of each XML file version changes with respect to the others; sometimes the changes are minor, sometimes they are big.

+ +

For example, if version XMLFileV1 contains something like:

+ +
<Shape>
+  <Id>0</Id>
+  <Features>
+    <Feature>
+      <Name>Name0</Name>
+      <Color>Color0</Color>
+    </Feature>
+    <Feature>
+      <Name>Name00</Name>
+      <Color>Color00</Color>
+    </Feature>
+  </Features>
+</Shape>
+
+ +

The content of XMLFileV2 for the same ""Shape"" element may look like:

+ +
<Shape>
+  <SubShapes>
+    <Subshape>
+      <Id>0</Id>
+      <Code>00</Code>
+    </Subshape>
+    <Subshape>
+      <Id>1</Id>
+      <Code>01</Code>
+    </Subshape>
+  </SubShapes>
+  <Features>
+    <Feature>
+      <Name>Name0</Name>
+      <Color>Color0</Color>
+    </Feature>
+    <Feature>
+      <Name>Name00</Name>
+      <Color>Color00</Color>
+    </Feature>
+  </Features>
+</Shape>
+
+ +

The application must be capable of reading and handling any version of the XML file, not only the current one.

+ +

For the situation shown above, it is difficult to have a stable ""Shape"" class that can be serialized/deserialized across all versions of the XML file.

+ +

I thought of the following options to deal with such a situation:

+ +
    +
  1. Create a base class Shape that corresponds to the oldest version of the XML file. Then, for, let's say, XMLFileV1, create a ShapeV1 that derives from Shape and adds the changes needed for XMLFileV1. When XMLFileV2 arrives and the element is modified again, create a new ShapeV2 that derives from ShapeV1 (or from the base Shape if needed) to match the new XMLFileV2 requirements. Serialization/deserialization then works because each XML file version is mapped exactly onto the corresponding ""Shape"" class (a minimal sketch of the version dispatch this implies follows this list).
  2. +
  3. Create only one Shape class matching the current version of the XML file. When the application reads an older version of the XML file, it would leave blank the fields of the Shape class that do not exist in the older file. When a version newer than the current one arrives, I would just modify the Shape class to add the needed fields. I won't be able to serialize/deserialize the Shape class as in option 1, but I could read the XML ""line-by-line"" style (using XmlReader in C#, for example).
  4. +
+ +
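
The sketch referenced in option 1: a dispatcher that inspects the version first and routes to the matching serializable class. This assumes the version is discoverable, e.g. from a root attribute or the file name, which the files above do not show; all names are illustrative:

+ +
using System;
+using System.Xml;
+using System.Xml.Serialization;
+
+[XmlRoot(""Shape"")] public class Shape { }
+[XmlRoot(""Shape"")] public class ShapeV1 : Shape { /* XMLFileV1 fields */ }
+[XmlRoot(""Shape"")] public class ShapeV2 : ShapeV1 { /* XMLFileV2 additions */ }
+
+public static class ShapeFileReader
+{
+    public static Shape Load(string path)
+    {
+        using (var reader = XmlReader.Create(path))
+        {
+            reader.MoveToContent();
+            string version = reader.GetAttribute(""version"") ?? ""1"";
+            switch (version)
+            {
+                case ""1"": return Deserialize<ShapeV1>(reader);
+                case ""2"": return Deserialize<ShapeV2>(reader);
+                default: throw new NotSupportedException(version);
+            }
+        }
+    }
+
+    private static Shape Deserialize<T>(XmlReader reader) where T : Shape
+        => (Shape)new XmlSerializer(typeof(T)).Deserialize(reader);
+}
+
+ +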

So my questions are:

+ +
    +
  • Which of the above approaches is easier to maintain in the long term?
  • +
  • Do you have a better approach than the two I have presented?
  • +
+ +

Thanks!

+",358955,,,,,43918.57569,More maintainable way for handing a variable XML,,4,1,,,,CC BY-SA 4.0, +406963,1,,,3/25/2020 22:56,,4,363,"

Suppose I have a class Texture that will be passed to a Renderer to be displayed on screen. One possible design is as follows:

+ +
class Texture
+{
+    public:
+    Texture(unsigned w, unsigned h) : w_ {w}, h_ {h}, buf_(w*h) {}
+
+    void set_pixel(size_t i, Color c) { ... }
+    std::vector<Color> image_data() const { ... }
+    unsigned width() const { ... }
+    unsigned height() const { ... }
+
+    private:
+    unsigned w_, h_;
+    std::vector<Color> buf_;
+};
+
+ +

This design is ""safe"". The buf_ vector will never be uninitialized and the implementation details are hidden. On the other hand, this design:

+ +
struct Texture
+{
+    std::vector<Color> buf;
+    unsigned w {}, h {};
+};
+
+ +

is much simpler. In my case especially, when I find myself only using the class once or twice, it's hard to decide whether I should keep it simple as in the second case, or write ""proper"" code as in the first case.

+",282787,,,,,43917.91667,"How to know where to draw the line between ""safe"" code and ""over-engineered"" code?",,3,5,1,,,CC BY-SA 4.0, +406965,1,,,3/26/2020 0:12,,1,91,"

I'm trying to figure out where to put business logic, and why, and where performance fits in. I'm trying to get away from the fat, logic-filled repositories that we produce a lot of in my company.

+ +

So I'm trying to refactor this method DDD-style:

+ +
var doesCollide = _bookingRepository.HasCollisionWithOtherBookings(bookingEntity, newEnd);
+
+ +

What I think I want is the following:

+ +
var doesCollide = booking.CollidesWithOtherBookings(allBookings, newEnd);
+
+ +

The internal logic is just doing some classic date mathematics. I still need to load allBookings from the database.

+ +
var allBookings = context.bookings.ToList();
+var doesCollide = booking.CollidesWithOtherBookings(allBookings, newEnd);
+
+ +

But now I'm thinking, ALL of the bookings? I can improve performance a lot by just adding some simple conditionals to my query.

+ +
var allBookings = context.bookings.Where(x => x.Start < newEnd && booking.Start < x.End && x.Id != booking.Id).ToList();
+var doesCollide = booking.CollidesWithOtherBookings(allBookings, newEnd);
+
+ +

Great. But I literally just duplicated the exact logic that I attempted to hide in CollidesWithOtherBookings. What does that even mean? Do I have to test this new query somehow?

+ +
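
One way out of that duplication I have read about is the Specification pattern: define the overlap predicate once as an expression and reuse it both in the EF query and in memory. A sketch (untested; type names are from my example):

+ +
using System;
+using System.Linq.Expressions;
+
+public static class BookingOverlap
+{
+    // Single source of truth for the collision rule.
+    public static Expression<Func<Booking, bool>> With(Booking booking, DateTime newEnd)
+        => other => other.Id != booking.Id
+                 && other.Start < newEnd
+                 && booking.Start < other.End;
+}
+
+// In the query:      context.bookings.Any(BookingOverlap.With(booking, newEnd))
+// In memory (tests): BookingOverlap.With(booking, newEnd).Compile()(other)
+
+ +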

I'm not sure what I'm even testing here. The datetime logic is easy to mess up, so I should probably test it. If I test Booking.CollidesWithOtherBookings that's great, but in my actual application I could still have a bug, since I can mess up the where clause.

+",360425,,,,,43916.60972,DDD does this (database)logic belong in the model,,5,0,1,,,CC BY-SA 4.0, +406972,1,,,3/26/2020 2:11,,-1,36,"

I am facing an issue that I don't believe is novel, but nonetheless I am having trouble finding a solution that fits well with our system. We have a constant stream of events going into AWS Kinesis. The way Kinesis works (to my understanding) is that it reacts to new events and recalculates some query based on them. What I am trying to do is react to a lack of events. Example: I want to detect when some event (let's say a login event) does not happen for x days in a row.

+ +
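
Roughly what I picture (an illustrative sketch, not Kinesis-specific): track a last-seen timestamp per key, and let a periodic sweep treat ""too old"" as the absence event:

+ +
using System;
+using System.Collections.Concurrent;
+using System.Collections.Generic;
+using System.Linq;
+
+public class AbsenceDetector
+{
+    private readonly ConcurrentDictionary<string, DateTime> _lastSeen =
+        new ConcurrentDictionary<string, DateTime>();
+
+    public void OnEvent(string userId) => _lastSeen[userId] = DateTime.UtcNow;
+
+    // Run periodically; yields every key silent for longer than maxSilence.
+    public IEnumerable<string> Sweep(TimeSpan maxSilence) =>
+        _lastSeen.Where(kv => DateTime.UtcNow - kv.Value > maxSilence)
+                 .Select(kv => kv.Key)
+                 .ToList();
+}
+
+ +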

Now, my question is not necessarily about Kinesis (though I am definitely open to thoughts on how one might accomplish this there) but about this concept of ""debouncing"" events in a stream processor, i.e. treating the absence of an event as an event. I am looking for material on how other platforms handle this at scale. A big use case I can think of is detecting when some log stream fails or stops producing logs, which could indicate an issue.

+",360431,,,,,43916.37014,Event Processing a Lack of Events,,1,0,,,,CC BY-SA 4.0, +406975,1,,,3/26/2020 5:17,,-4,93,"

How can I distinguish between server-client and master-worker architectures?

+ +
    +
  • is a pair of client and server a pair of a master and worker?

  • +
  • is a pair of a master and worker a pair of client and server?

  • +
  • must a worker be created by its own master? In comparison, a server is often not created by a client.

  • +
  • must a worker have only one master, never more than one? In comparison, a server can serve multiple clients.

  • +

Thanks.
+",699,,699,,43916.62569,43916.62569,How can I distinguish between server-client and master-worker architectures?,,1,5,,,,CC BY-SA 4.0, +406978,1,406981,,3/26/2020 8:12,,-1,71,"

What are the OOP best-practice guidelines (in general, or specifically for ABAP) for using attributes to store data versus just passing it along as method parameters?

+ +

I'm having a hard time deciding when to do which, and I am also interested in learning more about correct OO modeling.

+ +

Cheers

+",360439,,,,,43916.85278,What are the guidelines for using attributes vs. passing local variables along from method call to method call?,,2,2,,,,CC BY-SA 4.0, +406980,1,407003,,3/26/2020 9:11,,2,1664,"

I just read the book 'Clean Architecture' by Uncle Bob and really like the approach. But the big disappointment came when I tried to implement it in C#. I really hope you can help me with some questions.

+ +

+ +

What I tried was to implement the following component diagram:

+ +

+ +

A = Enterprise Business Rules

+ +

B = Application Business Rules

+ +

C = Interface Adapters

+ +

D = Frameworks & Drivers

+ +

But some things seem strange to me:

+ +
    +
  1. A presenter converts the ""OutputData"" (with, for example, date objects) into a ""ViewModel"" (that has only strings and flags). So the interface ""Output Boundary"" could have a function ""present"" with a parameter of type ""OutputData"" and a return value of type ""ViewModel"". But the layer that defines the ""Output Boundary"" is not allowed to have a code reference to ""ViewModel"", so how should this work? (My best attempt at a resolution is sketched after this list.)

  2. +
  3. The outermost layer is the only layer where ""frameworks"" are allowed. But the ""controllers"" are usually built into the framework. So do I have to build a wrapper, i.e. put the framework's controllers in the outermost layer and my own ""architecture controllers"" in the adapter layer? That seems like over-engineering to me.

  4. +
+ +
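
The sketch mentioned in question 1 (my own attempt at a resolution, not from the book; type names are illustrative): let the boundary return nothing, so the inner layer never needs to know the ViewModel type, and the presenter in the adapter layer owns it:

+ +
// Use-case layer: knows only OutputData.
+public interface IOutputBoundary
+{
+    void Present(OutputData data);
+}
+
+// Adapter layer: implements the boundary and owns the ViewModel,
+// so the source-code dependency still points inward.
+public class Presenter : IOutputBoundary
+{
+    public ViewModel ViewModel { get; private set; }
+
+    public void Present(OutputData data)
+    {
+        ViewModel = new ViewModel { Date = data.Date.ToString(""d"") };
+    }
+}
+
+ +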

If I look at concrete implementations on the web, this does not really answer my questions.

+ +

For example this guy created a project with three components:

+ +
+

Web.Api - Maps to the layers that hold the Web, UI and Presenter concerns. In the context of our API, this means it accepts input in the form of http requests over the network (e.g., GET/POST/etc.) and returns its output as content formatted as JSON/HTML/XML, etc. The Presenters contain .NET framework-specific references and types, so they also live here as per The Dependency Rule we don't want to pass any of them inward.

+ +

Web.Api.Core - Maps to the layers that hold the Use Case and Entity + concerns and is also where our External Interfaces get defined. These + innermost layers contain our domain objects and business rules. The + code in this layer is mostly pure C# - no network connections, + databases, etc. allowed. Interfaces represent those dependencies, and + their implementations get injected into our use cases as we'll see + shortly.

+ +

Web.Api.Infrastructure - Maps to the layers that hold the Database and + Gateway concerns. In here, we define data entities, database access + (typically in the shape of repositories), integrations with other + network services, caches, etc. This project/layer contains the + physical implementation of the interfaces defined in our domain layer.

+
+ +
    +
  1. He is not the only one out there having a layer called ""infrastructure"". But I did not find this layer in the book. Where does it come from? Sometimes people use it for database access, sometimes for services. Where does the term ""infrastructure"" come from?

  2. +
  3. He puts ""Web, UI and Presenter concerns"" in the Web-Layer. So he mixes layers.

  4. +
  5. For web applications, like an SPA with Angular that accesses an API in the background, I think the web application should have its own ""clean architecture"" as described here. My .NET Core application on the server also has its own ""clean architecture"". This makes sense to me. But Uncle Bob states that ""the web is a detail"", so it looks like he would treat Angular only as a ""detail"" and the UI. But he probably did not think of SPAs while writing the book... SPAs can contain part of the business rules. So how do you deal with that?

  6. +
+",344151,,,,,43917.04236,Implementing clean architecture,,1,8,1,,,CC BY-SA 4.0, +406983,1,406992,,3/26/2020 9:27,,1,55,"

I have been tasked writing a ""fire-and-forget"" push web application, that can push high-volume XML messages (of several types) to multiple client endpoints over the internet (HTTPS). I don't need a response, or even to know if they've received the message or not - I don't want it to fail at my end if the message doesn't arrive.

+ +

In other words, given a url (e.g. https://192.168.3.45/MessageTypeA/v1, https://192.168.3.45/MessageTypeB/v3, etc.), my application should forward a copy of all XML messages of a given type to that url, and if a client is listening at that url then it can do what it likes with those messages.

+ +

I can define how the client urls are defined, security, etc - there's nothing already existing and so I'm not limited by an existing approach.

+ +

I am fairly new to web-based apis. I have been looking into REST, SOAP, WebSub...; and trying to find out what is the best approach for this.

+ +

REST-based APIs, it seems to me, act on objects at the receiving end: ""GET"" the list of trains, ""PUT"" an update to the driver, ""POST"" a new train, and so on; which isn't relevant for me here. I guess all I would want in this approach is to ""POST"" a new message of type x, y or z? The point is that the XML message, when interpreted, may well represent a POST or a PUT, but I don't want to be pre-processing the messages to decide that; all I'm doing is providing the endpoint with raw data.

+ +
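
Roughly what I picture implementing, whatever protocol it ends up being (an HttpClient sketch; the content type and error handling are my assumptions):

+ +
using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+public class Pusher
+{
+    private static readonly HttpClient Client = new HttpClient();
+
+    public async Task PushAsync(string url, string xml)
+    {
+        try
+        {
+            var content = new StringContent(xml, Encoding.UTF8, ""application/xml"");
+            await Client.PostAsync(url, content);  // response intentionally ignored
+        }
+        catch (HttpRequestException)
+        {
+            // Fire-and-forget: a dead endpoint must not fail our side.
+        }
+    }
+}
+
+ +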

In WebSub language, I think I'm the ""publisher"" and I'm publishing to multiple ""Hubs""? But the difference is there's no subscriber in my scenario - I maintain the list of targets per message type, rather than them subscribing.

+ +

So I don't really know which protocol/approach is best for this kind of scenario, and so am looking for some advice. Whichever protocol I use needs to allow for encryption on the message and authentication by the receiving client, to ensure it's me sending them messages.

+",119489,,119489,,43916.40903,43916.56389,Sending an xml message as the payload to a web api,,1,1,,,,CC BY-SA 4.0, +406985,1,406987,,3/26/2020 11:06,,1,436,"

I have looked through most of the answers given regarding using in-memory database for unit test, but I could not find one that was clear to me.

+ +

I am writing some unit tests to cover some of our codebase. The codebase is written in ASP.NET Core and EF Core. I am using xUnit and Moq.

+ +

I have read and understood the concept of unit test vs integration tests.

+ +

As I understand, writing unit tests means testing a code in isolation from other dependencies and to achieve this, we can mock those dependencies so we can test the code only.

+ +

However, I found that it is a bit more work setting up mock of dependencies that I need especially if these dependencies are repository dependencies.

+ +

When I tried using an in-memory database, all I needed to do was set up the in-memory database, seed it and create a test fixture, and it worked. And in each subsequent test, all I have to do is create the dependencies and it works just fine.

+ +
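
For reference, the in-memory setup is essentially this, using EF Core's InMemory provider (the context and entity names are placeholders from my project):

+ +
using Microsoft.EntityFrameworkCore;
+using Xunit;
+
+public class UserRepositoryTests
+{
+    [Fact]
+    public void Query_works_against_in_memory_store()
+    {
+        var options = new DbContextOptionsBuilder<AppDbContext>()
+            .UseInMemoryDatabase(databaseName: ""TestDb"")
+            .Options;
+
+        using (var context = new AppDbContext(options))
+        {
+            context.Users.Add(new User { Name = ""seeded"" });  // seed once
+            context.SaveChanges();
+        }
+
+        using (var context = new AppDbContext(options))
+        {
+            Assert.Single(context.Users);
+        }
+    }
+}
+
+ +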

To mock the repository, I have to set up a mock and its return value. And then there is the complexity of mocking an async repository, as explained here: How to mock an async repository with Entity Framework Core. For each test, I have to mock every repository that is needed, which means more work.

+ +

With all this in mind, would it be better if I just ditched mocking and used an in-memory database? Also, would this not be seen as integration testing, even though it's an in-memory database?

+ +

I created two versions of the same tests, one using the in-memory database and one using mocks. The in-memory database tests understandably took more time, but the difference is usually only about one second compared to the tests using mocks.

+",316001,,,,,43916.54444,Are in-memory database a form of integration tests?,,2,0,,,,CC BY-SA 4.0, +406991,1,407016,,3/26/2020 13:21,,0,654,"

From Software Architecture and Design Illuminated

+ +
+

In the master-slave architecture, slaves provide replicated + services to the master, and the master selects a particular result among slaves by certain selection strategies. The slaves may + perform the same functional task by different algorithms and methods or by a + totally different functionality.

+ +

...

+ +

Master-slave architecture is used for the software system + where reliability is + critical. This is due to the replication (redundancy) of servers.

+ +

It should be noted that in database schema design, the terms master-slave or + parent-child are employed to specify the dependency of one entity on another. If + the master node is deleted then the slave node has reason to stay. This concept + does not apply to the discussion here.

+
+ +

Is it correct that there are two or more different meanings of the term ""master-slave"" or ""master-worker""?

+ +
    +
  1. Why does the concept of ""master-slave"" in database schema design ""not apply to the discussion"" of software architecture?
  2. +
  3. In database schema design, what do the following mean:

    + +
      +
    • ""the dependency of one entity on another""

    • +
    • ""If the master node is deleted then the slave node has reason to stay""?

    • +
  4. +
  5. In the software architecture of master-slave, given ""if the master node is deleted then the slave node has reason to stay"", does the slave node have no reason to stay?
  6. +
  7. Also, in distributed systems with data replication, is the concept of ""leader and follower"" the same as ""master and slave"" in software architecture?

    + +

    From https://en.wikipedia.org/wiki/Master/slave_(technology)

    + +
    +

    In database replication, the master database is regarded as the authoritative source, and the slave databases are synchronized to it.

    +
  8. +
+ +

Thanks.

+",699,,699,,43916.67917,43916.86528,"Does the term ""master-slave"" have the same meaning in software architecture, database schema, and distributed system replication?",,2,11,,,,CC BY-SA 4.0, +406997,1,,,3/26/2020 14:37,,0,113,"

I have one microservice named UserRegistration for user creation, and another microservice named Billing which is able to show the billing details for users.

+ +

After a new user is created, the userId is sent from UserRegistration to Billing through a message queue like Apache Kafka or RabbitMQ. Since these two microservices have different databases and the Billing microservice only has the userId, how is Billing able to retrieve the user details like name, age and address? Through a REST API in UserRegistration?
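
+ +

For illustration, the event is currently just the id; one option I have read about (""event-carried state transfer"") would be to enrich it with the details Billing needs, so no call-back is required. The payload shape below is my own sketch:

+ +
using System;
+
+public class UserCreatedEvent
+{
+    public Guid UserId { get; set; }
+
+    // Option: carry the state Billing needs along with the event.
+    public string Name { get; set; }
+    public int Age { get; set; }
+    public string Address { get; set; }
+}
+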

+",275397,,,,,43917.3,How to get data from different microservice?,,2,0,,,,CC BY-SA 4.0, +407006,1,,,3/26/2020 19:37,,-2,54,"

I am working on an app and creating models now. I think this is a basic 1NF normalization question, but I am not sure: should I break location/address out as a separate table?

+ +

Let's say I have Parents and Children. Sometimes I will have children with no currently identified parents, found at location x. Sometimes I will know that parents live at location y but then move to z; they may have known children 1, 2 and 3. Sometimes I will have children (1 and 2) at a shelter at location r, with known parents at location s. Sometimes I will have child 1 and not know that s/he is related to child 3, with the children at locations r and s, known parents f, and a need to be reunited. So parents-to-children is one-to-many. Children and parents are always at a specific location, but this can change, and I need to track that. I need to be able to say: we found child 3 here, they moved to shelter x, then y, then got reunited with a parent.

+ +

Should location/address be a separate table, or is it more helpful to have location/address fields in the parent, children and shelter tables?

+",360480,,360480,,43916.89722,43917.62569,Simple OO/normalization question - common fields in new table/separate object?,,1,5,,,,CC BY-SA 4.0, +407008,1,407015,,3/26/2020 19:51,,49,8335,"

A friend of mine is working in a 200-employee company. The company's business has nothing to do with IT, but they do have an IT department to work, among others, on their website, used by the customers.

+ +

The website started with a core idea that programmers have to test the application themselves using automated testing. However, it quickly became problematic, as programmers were spending too much time writing functional tests with Selenium (and later Cypress.io), trying to deal with complicated interactions, such as drag and drop or file uploads, or to figure out why the tests randomly fail. For a while, more than 25% of the time was spent on those tests; moreover, most programmers were pissed off by those tests, as they wanted to produce actual value, not try to figure out why the tests randomly fail.

+ +

Two years ago, it was decided to pay a company from Bulgaria to do the functional, interface-level tests manually. Things went well, as such testing was pretty inexpensive. Overall, programmers were delivering features faster, with fewer regressions, and everyone was happy.

+ +

However, over time, programmers started to become overconfident. They would write fewer integration or even unit tests, and would sometimes mark features as done without even actually checking that they work in a browser: since the testers will catch the mistakes, why bother? This creates two problems: (1) it takes more time to solve the issues when they are discovered by testers a few days later (compared to when they are discovered within minutes by the programmers themselves) and (2) the overall cost of the outsourced testers grows constantly.

+ +

Recently, the team lead tries to change this behavior by:

+ +
    +
  • Measuring, per person, how many tickets are reopened by the testers (and sharing the results with the whole team).

  • +
  • Giving congratulations to the people who performed best, i.e. those who had the fewest tickets reopened.

  • +
  • Spending time pair programming with those who performed worst, trying to understand why they are so reluctant to test their code, and showing them that it's not that difficult.

  • +
  • Explaining that it's much faster to solve a problem right now, than to wait for several days until the feature gets tested.

  • +
  • Explaining that testers do system tests only, and the lack of unit tests make it difficult to pinpoint the exact location of the problem.

  • +
+ +

However, it doesn't work:

+ +
    +
  • The metrics are not always relevant. One may work on an unclear or complex ticket which gets reopened several times by the testers because of edge cases, while a colleague may meanwhile work on a ticket which is so straightforward that there is absolutely no chance of introducing any regression.

  • +
  • Programmers are reluctant to test code, because (1) they find it just boring, and because (2) if they don't test code, it looks like they deliver the feature faster.

  • +
  • They also don't see why fixing a problem days after developing a feature would be a problem. They understand the theory, but they don't feel it in practice. Also, they believe that even if it would take a bit longer, it's still cheaper for the company to pay inexpensive outsourced testers rather than spend programmers' time on tests. Telling them repeatedly that this is not the case has no effect.

  • +
  • As for system vs. unit testing, programmers reply that they don't spend that much time finding the exact location of a problem reported by a tester anyway (which seems to be actually true).

  • +
+ +

What else can be done to encourage programmers to stop overly rely on testers?

+",6605,,25373,,43920.43681,43920.43681,Company outsourced testing, how can we encourage programmers to stop overly relying on testers?,,15,17,13,,,CC BY-SA 4.0 +407036,1,408126,,3/27/2020 10:30,,-2,35,"

I'm building an API in .net that wil capture webhooks events of a third party service.

+ +

Lots of actions on the third party service will trigger the webhooks.

+ +
    ""account.deactivated"",
+    ""account.deleted"",
+    ""company.added"",
+    ""company.deleted"",
+    ""company.updated"",
+    ""contact.added"",
+    ""contact.deleted"",
+    ""contact.linkedToCompany"",
+    ""contact.unlinkedFromCompany"",
+    ""contact.updated"",
+    ""creditNote.booked"",
+    ""deal.created"",
+    ""deal.deleted"",
+    ""deal.lost"",
+    ""deal.moved"",
+    ""deal.updated"",
+    ""deal.won"",
+    ""invoice.booked"",
+    ""invoice.deleted"",
+    ""invoice.drafted"",
+    ""invoice.paymentRegistered"",
+    ""invoice.paymentRemoved"",
+    ""invoice.sent"",
+    ""invoice.updated"",
+    ""milestone.created"",
+    ""milestone.updated"",
+    ""product.added"",
+    ""project.created"",
+    ""project.deleted"",
+    ""project.updated"",
+    ""timeTracking.added"",
+    ""timeTracking.deleted"",
+    ""timeTracking.updated"",
+    ""user.deactivated""
+
+ +

My question is: should I capture all of this in one controller? That would reduce the amount of code, but it would certainly make things a lot more complex: checking which object it is, making a dynamic insert/update to the database, and so on.

+ +
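
Roughly what the single-controller option would look like (a sketch; the payload shape and the handling inside each case are my assumptions, not the third party's actual format):

+
using Microsoft.AspNetCore.Mvc;
+
+public class WebhookEvent
+{
+    public string Type { get; set; }
+    public string Payload { get; set; }
+}
+
+[ApiController]
+[Route(""webhooks"")]
+public class WebhookController : ControllerBase
+{
+    [HttpPost]
+    public IActionResult Receive([FromBody] WebhookEvent evt)
+    {
+        // One endpoint, dispatching on the event type string.
+        switch (evt.Type)
+        {
+            case ""invoice.booked"": /* update invoice row */ break;
+            case ""contact.added"":  /* insert contact row */ break;
+            default:               /* log unknown event  */ break;
+        }
+        return Ok();
+    }
+}
+
+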

Or should I create a different controller for every object, so that I know which object I have to handle?

+",267518,,,,,43919.75833,Capture webhooks of third party service with an API,<.net>,1,0,,,,CC BY-SA 4.0, +407040,1,,,3/27/2020 13:08,,2,108,"

In some .NET libraries, there's a pattern of two alternative ways to call a function.

+ +

int i = x.GetValue(k); /* Might throw. */
+if (x.TryGetValue(k, out int i)) { /*...*/ } else { /* ... */ }

+ +

I don't like the names of these two functions, as the not-Try variant presents itself as the ""normal"" way to call the function. TryX, having the added word in the name, feels like a specialist form. For my new library, which doesn't have to worry about existing users, I'd like to switch that around.

+ +

If I rename TryGetValue to just GetValue, what do I call the original GetValue that will throw an exception if it can't complete my request?

+ +

Is there a standard name for this pattern? (I'm hoping the industry has an established pattern rather than coining a new word in the comments, but you're welcome to do so if you wish.)

+ +

EDIT: I could be persuaded that TryGetValue is the correct name, but I'd still like to change the name of GetValue that throws an exception, leaving no function called just GetValue.

+",3912,,3912,,43917.57917,43917.57917,What is the name of the throw-an-exception on invalid inputs pattern?,,1,7,,,,CC BY-SA 4.0, +407041,1,,,3/27/2020 13:16,,1,237,"

I have read about functions vs. procedures: both are subroutines, but a function returns a value and a procedure doesn't.

+ +

Can a function returning void be called a procedure?

+ +

Give reasons with definitions...

+",360516,,209774,,43917.71875,43917.78611,Can a function returning void be called a procedure?,,3,0,,43917.81111,,CC BY-SA 4.0, +407042,1,,,3/27/2020 13:25,,1,782,"

I've seen a debate on this. A nested try inside the catch I can see being okay: the outer try has already triggered a catch by that time, hence no scope issues. The other way around hides errors (it is a scope issue).

+ +
try {

    try {
        somethingThatThrows();
    } catch (Exception innerE) {
        innerE.printStackTrace(); /* I know the error and print it, but have no way of passing what I know to the outer try */
    }

} catch (Exception outerE) {
    outerE.printStackTrace(); /* Never sees the inner exception, because the inner catch already handled it and it is not visible in this scope */
}
+
+ +

How is this not an anti-pattern?

+",360525,,44470,,43927.34306,43927.87431,How is a nested Try/Catch (inside the try) not an anti-pattern,,4,4,,,,CC BY-SA 4.0, +407055,1,,,3/27/2020 18:03,,1,38,"

My Laravel project is starting to grow and I'm starting to deal with fat controllers and models. I'm probably far away from the SOLID principles path, and my code is not DRY.

+ +

I decided to refactor my layer logic to follow SOLID more closely and to try to be more DRY. After some research on the internet, I ended up with the following conclusion:

+ +

+ +

Let me explain this diagram:

+ +
    +
  1. First, we start with a view, where the user performs some action and the request is sent to the Controller.
  2. +
  3. Controller responsibility is to handle the request and give the response to the user. It will be able to call 3 different layers (blue numbers): + +
      +
    • Service: to handle business logic like calculations, special actions, etc.
    • +
    • Repository: where all query logic will be placed. For example, if in the index method we want to return the list of users that have more than 100 posts, ordered by name (Example 1).
    • +
    • Laravel Resource (Transformers): with the responsibility to transform a model into JSON. When we change our table we don't have to change all the views and controllers or models affected by that change. It will be all done in one place.
    • +
  4. +
+ +
+ +

Example 1:

+ +
 # UserController.php
+ public function index()
+ {
+     $users = new UserCollection($this->UserRep->usersWithPost());
+     return view('user-list', compact('users'));
+ }
+
+ # UserRepository.php
+ public function usersWithPost($min = 100)
+ {
+     return $this->model->where('post_count', '>=', $min)->orderBy('name');
+ }
+
+ # UserResource.php
+ public function toArray($request)
+ {
+     return [
+         'id' => $this->id,
+         'name' => $this->name,
+         'email' => $this->email,
+         'post_count' => $this->post_count,
+         'created_at' => $this->created_at,
+         'updated_at' => $this->updated_at,
+     ];
+ }
+
+ +
+ +
    +
  1. Service calls (green numbers): + +
      +
    • It may call Repository if it needs any data from my model to perform some action.
    • +
    • It also may call Laravel Resource ""Transformers"" if there were repository calls.
    • +
  2. +
  3. The repository will use the eloquent model to persist query on my data storage (MySQL).
  4. +
+ +

This is how I plan to refactor my code, however, I do not have experience with Laravel and I would like to ask more experienced developers if they can indicate me if I'm on the right path.

+ +

Do you think it makes sense and it is a good practice?

+ +

I would like to highlight that I will not switch between ORMs. I will use eloquent with MySQL as my data storage, that's why I'm planning to put all my queries to repositories to have a different layer for queries logic.

+",360546,,,,,43917.75208,Refactoring a Laravel aplication layers,,0,1,,,,CC BY-SA 4.0, +407056,1,,,3/27/2020 18:08,,5,228,"

I am preparing for a discussion with my fellow programmers which will be about their use of the C/C++ #include directive. The codebase which I have to retrofit to Automotive standards is using includes of the form #include <path/out/of/the/blue.h>. To be precise: the projects carry around a large set of include paths for the compiler (-Iinclude/me etc.) BUT the path expressions even reach outside of these places so that blue.h can only be found if the compiler internally produces a combination of all include paths with the path in the statement itself: include/me/+path/out/of/the/blue.h. There are many gripes I have against this practice:

+ +
    +
  • AFAIK <> is reserved for system headers coming from the platform, and its use is strongly discouraged for project code. The compilation only works because the -I directories are part of the search path for <...> includes as well. (The standard's fallback actually goes the other way: a header given with """" that is not found is reprocessed as if it had been written with <>.)
  • +
  • It creates a review nightmare: the include file is neither at the path rooted at the directory where the including C or C++ file sits, nor in any of the include locations. You have to repeat the compiler's search to eventually find it somewhere, and even then you aren't really sure: the compiler could have searched differently.
  • +
  • There are a number of duplicate file names in the project tree: blue.h can be found in several locations, and sometimes one blue.h serves as a dispatcher file for the inclusion of a more specific, the true, blue.h further down the directory tree. Which blue.h is selected is determined by #define PLATFORM macros and the like.
  • +
  • It creates a monolithic, reorganization-resisting project structure, coupling C and C++ interfaces (which live in directory-less space as far as the language is concerned) with the file system.
  • +
  • It spills over into new projects: as soon as one uses a header which itself includes other path-dependent headers, the new project's build script has to adapt to this usage.
  • +
+ +

We are using mbed-os and it looks like its source tree suffers from the same in (IMHO) bad code structuring choice.

+ +

As a TL;DR one could say that I have the firm belief that one is ill-advised to carry the project structure into the source code. One has to supply a lot of structure and dependencies to the build system and linker anyway - introducing a secondary coupling per the source files wreaks havoc at least when one tries to change the build system (as I am forced to now).

+ +

What is the public opinion on this? How flat or tree-like do you manage your includes?

+ +

PS: MISRA is only remotely talking about this issue though one could read it as ""don't use anything else but header file names""

+ +

PPS: I am not completely against the use of paths (well, I am in my own code, but I could live with them in inherited code) as long as they aren't visible from the outside; but the current versions of the projects rather force one to adapt to exactly this usage.

+ +

PPPS: To give you an idea where carelessness regarding physical structure leads, here is a part of the include paths which are required to compile mbed-os, which I mentioned before:

+ +
mbed-os/features/nanostack/mbed-mesh-api/
+mbed-os/features/nanostack/sal-stack-nanostack-eventloop/
+mbed-os/features/nanostack/mbed-mesh-api/mbed-mesh-api/
+mbed-os/features/nanostack/mbed-mesh-api/source/include/
+mbed-os/features/nanostack/nanostack-interface/
+mbed-os/features/nanostack/sal-stack-nanostack-eventloop/nanostack-event-loop/
+mbed-os/features/lwipstack/
+mbed-os/features/lwipstack/lwip-sys/
+mbed-os/features/lwipstack/lwip/src/include/
+mbed-os/features/lwipstack/lwip/src/include/lwip/
+
+ +

Some of these paths are just starting points for a deeper reach (i.e. ""sub/subsub/inc_this.hpp"") and some are plain old ""there you will find your includes"" directories.

+ +

This leads to yet another counter argument to anything beyond the simple ""set your include paths to where your include files are"" rule: it is obviously impossible to communicate anything more complex over time and different coding cultures.

+",100647,,100647,,43928.34792,43928.34792,The case against path expressions in #include directives,,1,10,,,,CC BY-SA 4.0, +407064,1,,,3/27/2020 23:21,,5,189,"

I am in the process of trying to model a transportation module for an ERP type system using C# and EF Core. This service is responsible for managing customer pickups and company-owned truck deliveries. Customer pickups contain a list of orders and trucks contain a list of stops and those stops contain a list of orders.

+ +

The primary interface for managing pickups and trucks is through a REST based api. Order creation/update/cancellation events are being received from an ordering module via a service bus queue.

+ +

Business Rules

+ +
    +
  • At order entry, an order is assigned an attribute to specify if it is a customer pickup or delivery via a truck. However, it is up to users within this transportation module to associate those orders to a specific pickup or truck/stop instance.
  • +
  • An order can be associated to only a single pickup or truck stop at a given time.
  • +
  • Orders have properties which include a status and shipping metrics (dimensions, weight).
  • +
  • Orders on a truck cannot exceed a certain volume or total weight.
  • +
  • Order updates can be received at any time, even after a shipping assignment is made. Those updates can change the shipment type (the user decides to pick up the order vs. having it shipped to them) or the order build, which would alter the shipping metrics. If the shipping type is changed, the order should be unassociated from any current pickup or truck stop.
  • +
  • Trucks have a status of open/in-transit/closed with each stop on the truck having a status of open/delivered.
  • +
  • Users can mark a truck as in-transit only if all orders have a status of produced.
  • +
  • Once truck is in-transit, users mark each stop as delivered once the delivery has been made. Only after all stops are marked as delivered, can the truck status be updated to closed.
  • +
  • If an order is cancelled, it needs to be automatically removed from either the pickup or truck it may be associated to.
  • +
+ +

We are using a relational database (SQL Server) for storing the individual entities. My question is really around how to model these various aggregate roots/entities/value objects/domain services/domain events as well as the database tables backing them.

+ +

Initial Thoughts

+ +
    +
  • Have aggregate roots of Pickup, Truck and Order.
  • +
  • Pickup has a list of value objects containing linked order ids.
  • +
  • Trucks have a list of stop entities and a list of value objects containing linked order ids, order status and order shipping metrics.
  • +
  • Other options considered - add nullable foreign keys directly to order that reference a pickup or truck stop; additional option is to have an OrderAssociations table that maps orders to a pickup or truck stop.
  • +
+ +

Enforcing the business rules for the truck, though, is where things get a bit interesting. When adding an order to a truck, the total weight and volume of all other orders need to be taken into account, and an error should be returned if the order would cause the truck to exceed the prescribed thresholds. If an order update is received for an order already associated to a truck, and that update would cause the truck to exceed the allowed thresholds, an alert needs to be fired and the truck cannot be marked as in-transit until the issue is resolved. (A sketch of how I picture this invariant follows.)

+ +
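
A sketch of how I picture the Truck aggregate enforcing that capacity invariant (property names and limits are illustrative):

+ +
using System;
+using System.Collections.Generic;
+using System.Linq;
+
+public class TruckOrder
+{
+    public Guid OrderId { get; set; }
+    public decimal Weight { get; set; }
+    public decimal Volume { get; set; }
+}
+
+public class Truck
+{
+    private readonly List<TruckOrder> _orders = new List<TruckOrder>();
+    public decimal MaxWeight { get; } = 24000m;
+    public decimal MaxVolume { get; } = 90m;
+
+    public void AddOrder(TruckOrder order)
+    {
+        // The aggregate is the one place that checks the invariant,
+        // so it must see all currently linked orders.
+        decimal weight = _orders.Sum(o => o.Weight) + order.Weight;
+        decimal volume = _orders.Sum(o => o.Volume) + order.Volume;
+        if (weight > MaxWeight || volume > MaxVolume)
+            throw new InvalidOperationException(""Truck capacity exceeded."");
+        _orders.Add(order);
+    }
+}
+
+ +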

Questions

+ +
    +
  • When an order update is received, we need to know if it is associated to a pickup or truck stop and if it is, that object will need to be notified. If we have separate tables for PickupOrders and TruckStopOrders, determining the association doesn't exactly seem efficient as a query would need to be made to both tables. In addition, we'd need to load the entire truck of stops/orders to call an update order method on the truck aggregate. How would you recommend this order update be handled? Is this an application level event handler to update the order entity itself as well as the truck? Does the order entity get updated and raise a domain event that the truck is somehow notified of? Curious on thoughts of if this logic belongs in the domain layer or application layer.
  • +
  • Are truck stops value objects or entities? They only exist within the context of a truck. However, they do have a status associated to them (open/delivered) and a list of associated orders.
  • +
  • It would be nice for orders to maintain a reference to the pickup or truck stop that they are associated with. However, doing so would couple the truck stop / pickup and order so any order updates would require updating both in the same transaction. Not sure if this would be managed via a domain service or application layer?
  • +
  • If maintaining a separate truck stop orders table, what is the best way to keep the order status/metrics in this table in sync with the orders table?
  • +
+ +

Happy to provide additional clarification where needed. Any thoughts are appreciated.

+",360564,,,,,43919.58125,DDD Domain Modeling of Transportation Module,,2,1,1,,,CC BY-SA 4.0, +407065,1,,,3/28/2020 0:42,,2,138,"

tldr at the bottom if you don't want to read all this! :)

+ +

First of all, the DB I'm using is MongoDB! I've been building a fun project and all has been well, but I hit a small problem. Effectively, the project could be thought of as a simple button on a website. For each click of the button, the click is sent to the back end via sockets, recorded, and stored in a database. This is very small and easy to do, as I just have a simple Mongo document with a key called ""buttonClicks"" that I increment by 1 each time the button is clicked. There's also some other arbitrary incremental data sent as well, like, for example's sake, the browser the click was sent from. Regardless, the schema currently looks like this:

+ +
{
+    name: Admin
+    buttonClicks: 22,
+    whichBrowser: {chrome: 10, edge: 12}
+}
+
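
For reference, the increment is essentially one atomic update per click; with the C# driver it would look roughly like this (collection and field names are from my example):

+ +
using MongoDB.Bson;
+using MongoDB.Driver;
+
+public static class ClickRecorder
+{
+    // Bump the total and the per-browser counter in one write.
+    public static void RecordClick(IMongoCollection<BsonDocument> admins, string browser)
+    {
+        var filter = Builders<BsonDocument>.Filter.Eq(""name"", ""Admin"");
+        var update = Builders<BsonDocument>.Update
+            .Inc(""buttonClicks"", 1)
+            .Inc($""whichBrowser.{browser}"", 1);
+        admins.UpdateOne(filter, update);
+    }
+}
+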
+ +

As you can see, this is all incremental data, and since there's only a handful of browsers on the planet, the whichBrowser object won't get that big.

+ +

The problem is I also need a time stamp. What I mean is, the way it's set up now, I just know how many times the button was clicked from day 1 until now. What if I want to see how many button clicks there were in the past day/week/month? I plan to show this in a graph, so I would also like to bin the data by X amount as well. So I may want to graph the clicks through the past day in 5-minute intervals, or the clicks through the past week in 10-minute intervals, etc.

+ +

How can I achieve this? As it is right now my database is very small, since it basically has one key whose value simply increments, but if I were to store the time stamp for each click, I imagine the database would get MASSIVE, with the ""buttonClicks"" key holding a huge object, something like:

+ +
{
+    name: Admin
+    buttonClicks: [{
+        date: ""3/27/2020 - 12:44:15"",
+        browser: chrome
+    },{
+        date: ""3/27/2020 - 12:44:15"",
+        browser: chrome
+    },{
+        date: ""3/27/2020 - 12:44:15"",
+        browser: edge
+    },{
+        date: ""3/27/2020 - 12:44:16"",
+        browser: chrome
+    },{
+        date: ""3/27/2020 - 12:44:16"",
+        browser: fire fox
+    }]
+}
+
+ +

As you can see, only 5 clicks were recorded but it's already so big. Not to mention (in this example) only 2 seconds have passed and this array already has 5 objects.

+ +

So, does anyone have any advice on how I can store each click (and other info like the browser)? Thank you!

+ +

tldr: A button is clicked on a site and recorded in the back end. It takes little space for now, since it's just incremental data (just bump the value by 1 when a button click is recorded), but I realized I also need the date and time for data visualization purposes. How can I record the data for each click in a way that won't take a lot of space and is easy to query (like how many clicks there were in the past week, past 10 days, etc.)?

+",360567,,360567,,43918.03542,43918.63333,How should I store time stamps so that it's easily accessible and won't take too much space,,3,0,,,,CC BY-SA 4.0, +408096,1,,,3/28/2020 22:31,,3,414,"

I've been wondering if this concept has a name and a consolidated theory behind it.

+ +
+

If you need to build software, but you don't need it right now, + it's always better to wait because the technology will be better in + the future.

+
+ +

I'm pretty sure it is true:

+ +
    +
  • IT tech is getting better over time: better software, better libraries, better IDEs, ...
  • +
  • So any development is going to be cheaper (not taking into account other factors like market state, ...)
  • +
  • And any resulting product is going to be better in terms of speed, robustness, look and feel, ...
  • +
+ +

But I can't find any theory or name for this concept. Can you point me into the right direction?

+ +

EDIT

+ +

I wanted to clarify some things, as all answers, although interesting, are slightly off topic:

+ +
    +
  • I know I will need this software at some point in the future, as its functionality covers some critical parts of my business. It's for internal use, and it will replace an existing one that relies on obsolete tech and that users don't like.
  • +
  • There is no competition
  • +
  • I'm not talking about cutting-edge tech. It's plain, normal development. The assumption is that this normal development will get better and easier as time passes. Neither now nor in the future do I plan on using tech that would qualify me as an ""early adopter"".
      +
    • In other words, the idea is to take advantage of the fact that what will be mainstream in a few years will be better than what is mainstream now.
    • +
  • +
+",361611,,361611,,43921.38056,43921.45625,Always better to wait?,,5,6,,,,CC BY-SA 4.0, +408099,1,,,3/28/2020 23:49,,0,82,"

I'm thinking about this in terms of a NoSQL database, more specifically MongoDB. I want to build something like Google Analytics, where I will be taking in a ton of data along with when it occurs, so I can show how many times X happened this year/month/week/day/hour/etc. Like Google Analytics, other than the time stamp, the only other data being stored is small. There will also be multiple points from which I'm taking in data, like how there are different admins in Google Analytics, each with their own dashboard. I'll give an example.

+ +

There are 3 ""admins"", each with their own dashboard and website, and we need to take in data from each site. For simplicity's sake, the only data we'll be taking in is when a user leaves a comment on their page and whether the comment had a picture.

+ +

This is how I originally had it, but of course there's no time stamp

+ +
{
+    Name: ""Admin 1"",
+    numComments: 10,
+    hadPics: {Yes: 6, No: 4}
+},{
+    Name: ""Admin 2"",
+    numComments: 12,
+    hadPics: {Yes: 6, No: 6}
+},{
+    Name: ""Admin 3"",
+    numComments: 5,
+    hadPics: {Yes: 3, No: 2}
+}
+
+ +

Then I changed it to this

+ +
{
+    Name: ""Admin 1"",
+    numComments: 3,
+    time: [{
+        time: ""3/28/2020 - 6:45:36 am"",
+        hadPics: ""No""
+    },{
+        time: ""3/28/2020 - 6:45:37 am"",
+        hadPics: ""Yes""
+    },{
+        time: ""3/28/2020 - 6:46:10 am"",
+        hadPics: ""Yes""
+    }]
+},{
+    Name: ""Admin 2"",
+    numComments: 5,
+    time: [{same structure as above}]
+},{
+    Name: ""Admin 3"",
+    numComments: 6,
+    time: [{same structure as above}]
+}
+
+ +

But I imagine this would get insanely huge and is not a good way to store the data. Not to mention I also just found out that documents have a max size of 16 MB, which sounds like a limit I would hit if I were to store it like that, since the time array would get very big. This is only from 3 comments; imagine if there were 10 comments each second and there were 100 ""Admins"". Within a minute each admin's time array would have 600 items in it, and I'd have done 60,000 writes to the database.

+ +
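
One pattern I came across while researching (time-bucket pre-aggregation; this is my understanding of it, not something I have built yet) is to keep one document per admin per hour with counters per minute, so each event is a cheap increment and a day's graph is a small range scan:

+ +
{
+    ""admin"": ""Admin 1"",
+    ""hour"": ""2020-03-28T06:00Z"",
+    ""total"": 9,
+    ""minutes"": {
+        ""45"": { ""comments"": 6, ""withPics"": 4 },
+        ""46"": { ""comments"": 3, ""withPics"": 1 }
+    }
+}
+
+ +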

If anyone has any suggestions on how to store this data well, so that I can easily store and retrieve it, I would love to hear them! Especially if it makes it easier to grab data like from the last day, week, etc.

+",360567,,,,,43919.74931,"How can I store time series data like Google Analytics, Facebook, etc?",,1,1,,,,CC BY-SA 4.0, +408100,1,408103,,3/28/2020 23:59,,3,214,"

Suppose I have a table that saves plane ticket purchases. I need to make sure that I don't oversell, so when a user purchases a ticket it needs to be stored in that table, and other customers need to be aware of this.

+ +

Now suppose that this table may be queried and written to too often due to some promotion I'm running. In order to not overload the connections to the DB and get errors due to timeouts, I choose to put the DB in an elastic SQL server by some provider (e.g. Azure Elastic DB).

+ +

But if a user is connected to one instance of this DB and writes to it, how do the other copies of the DB find out about it and prevent another customer from making an insert that violates this constraint I have? Is there some pattern I should be using? Do the copies communicate with each other somehow?

+ +

And if this setting actually can't solve this problem, what patterns can be used to solve an issue like this?

+ +

Thanks

+",361618,,,,,43924.57431,How can I scale a database that requires that writes are consistent?,,3,1,,,,CC BY-SA 4.0, +408106,1,,,3/29/2020 5:44,,0,97,"

When microservices need to talk with each other, the common practice will be to have some REST (or gRPC) communication.

+ +

I'm wondering which would be the better approach (let's assume all services are in Java):

+ +
    +
  1. Each service is using a freestyle REST client (e.g. OkHttp)
  2. +
  3. When service A needs to talk with service X, it has to include a dependency jar library of ""service X client"" that hides the network communication from service A.
  4. +
+ +

Let's say this is our system - where service A uses services X and Y as its data resources:

+ +
   /-X
+A--
+   \-Y
+
+ +

Here are some cases:

+ +

New Functionality in X

+ +

If service X has new functionality for A, both approaches require updating the code of A to support it and adding the new HTTP calls. If we are using a dependency JAR, we will also need to release a new version of the jar.

+ +

New versions of X / Y

+ +

Version updates in X and Y, as long as they don't break the interface, don't require any change in A - in either approach.

+ +

Different HTTP library versions in X & Y clients

+ +

It is possible that the X and Y client libraries will include different versions of the same HTTP client - this may cause dependency conflicts in service A.

+ +

What is the best practice these days? I found this post and this post, each supporting a different approach. I also came across Feign as a REST client, which might be a third option.
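
To make option 2 concrete: a ""service X client"" jar usually ships little more than a typed interface plus one HTTP-backed implementation, so service A depends on the interface and never touches the HTTP library directly. A hedged sketch (all names hypothetical):

+public interface ServiceXClient {
+    User getUser(String id);
+}
+
+public final class HttpServiceXClient implements ServiceXClient {
+    private final OkHttpClient http = new OkHttpClient();
+    private final String baseUrl;
+
+    public HttpServiceXClient(String baseUrl) { this.baseUrl = baseUrl; }
+
+    @Override
+    public User getUser(String id) {
+        Request request = new Request.Builder().url(baseUrl + ""/users/"" + id).build();
+        try (Response response = http.newCall(request).execute()) {
+            return parseUser(response.body().string()); // JSON parsing elided
+        } catch (IOException e) {
+            throw new UncheckedIOException(e);
+        }
+    }
+}
+

This is also essentially what Feign generates for you from an annotated interface.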

+",311114,,,,,43920.59444,Calling Microserice using REST or dedicated client jar,,2,0,,,,CC BY-SA 4.0, +408114,1,408120,,3/29/2020 12:50,,7,372,"

When reading the Python docs for writing C extensions, one can find the following text in the part about CPython's garbage collection strategy (emphasis mine):

+ +
+

... The disadvantage [of automatic garbage collection] is that for C, there is no truly portable automatic garbage collector, while reference counting can be implemented portably (as long as the functions malloc() and free() are available — which the C Standard guarantees). Maybe some day a sufficiently portable automatic garbage collector will be available for C. Until then, we’ll have to live with reference counts.

+
+ +

What exactly is the author's intention in the emphasized statement? Surely one can easily implement a portable tracing GC, at least for the specific VM at hand, using only malloc, free and basic data structures.

+ +

I suppose (and please correct me if I'm wrong) that the author refers to more advanced GC strategies, such as a ""moving GC"" (I'm actually not sure what that is), etc.

+ +

If so, please explain why (assuming the author is correct) it is difficult to create a portable, advanced garbage collector for C.
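
For a concrete sense of where portability breaks down, here is a hedged sketch of the heart of a conservative (Boehm-style) collector: scanning the C stack for values that look like heap pointers. Every line marked (!) relies on behaviour the C standard does not guarantee, and stack_bottom, looks_like_heap_pointer and mark are hypothetical helpers:

+extern char *stack_bottom;                     /* hypothetical: platform-specific */
+extern int  looks_like_heap_pointer(void *);   /* hypothetical */
+extern void mark(void *);                      /* hypothetical */
+
+void scan_stack(void) {
+    char *top_marker;                          /* (!) assumes locals live on a contiguous stack */
+    for (char *p = stack_bottom;               /* (!) no portable way to learn the stack base   */
+         p < (char *)&top_marker;              /* (!) assumes which way the stack grows         */
+         p += sizeof(void *)) {
+        void *candidate = *(void **)p;         /* (!) scanning raw stack words is undefined     */
+        if (looks_like_heap_pointer(candidate))
+            mark(candidate);
+    }
+    /* ...and none of this sees pointers held only in CPU registers. */
+}
+

Reference counting sidesteps all of this because the runtime itself maintains the bookkeeping through plain malloc/free-compatible code.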

+",121368,,,,,43919.67083,Why is it difficult to create a truly portable garbage collector for C?,,1,4,,,,CC BY-SA 4.0, +408116,1,,,3/29/2020 14:17,,2,60,"

I am not sure if this is the correct place to ask, please direct me to the correct place if this is not it.

+ +

Context

+ +

I have written a do-all, keep-all, serve-all API (it started small, then grew out of proportion) to function as the User Management System for one of our solutions.

+ +

This API keeps track of all the data, and also communicates with external APIs to perform some configuration on the user accounts. Some of the endpoints include:

+ +
• User (details: Firstname, Lastname, Email, Mobile, UserCompany)
• UserCompany (retrieved from CMDB)
• Company (Customer retrieved from CMDB)
• Connection (User connected to Company)
• ConnectionSystem (Connection connected to CMDB-System implemented at Customer)
• ConnectionSystemAccess (Access to assets in CMDB-System for Connection)
+ +

Example: UserA John Doe from UserCompanyA Contoso connected to SystemA Office365 at CustomerA Microsoft with access to AssetA ExchangeOnline

+ +

The model is something like this:

+ +

Question(s)

+ +

So I am looking to expand my system to make it a bit more task-oriented, as well as separating concerns a bit more. I am afraid my system has bloated into doing too much (in a way that definitely works for our teams) while breaking a lot of principles of good design. I have also been learning a lot since I started (with practically no programming experience) and have identified areas of improvement.

+ +
1. Task one: Create a job-handling system/API
   • Should this be a separate API, or can I build this into the existing one?
2. Task two: Create workers (I'm thinking of using the .NET Core 3 Workers here) to handle jobs
+ +

I'm thinking of RabbitMQ for the job queuing, as we have people in the company with experience running this software. The different JobTypes would produce different messages, which Workers for those JobTypes would process, reporting status back to the JobAPI. A minimal worker sketch follows below.
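
A minimal sketch of such a worker, assuming RabbitMQ.Client 6.x and a hypothetical Job payload (error handling, retries and the status report back to the JobAPI are elided):

+public class JobWorker : BackgroundService
+{
+    protected override Task ExecuteAsync(CancellationToken stoppingToken)
+    {
+        var factory = new ConnectionFactory { HostName = ""localhost"" };
+        var connection = factory.CreateConnection();
+        var channel = connection.CreateModel();
+        channel.QueueDeclare(""user-jobs"", durable: true, exclusive: false,
+                             autoDelete: false, arguments: null);
+
+        var consumer = new EventingBasicConsumer(channel);
+        consumer.Received += (_, ea) =>
+        {
+            var job = JsonSerializer.Deserialize<Job>(ea.Body.Span);
+            // ...process the job, then report status back to the JobAPI...
+            channel.BasicAck(ea.DeliveryTag, multiple: false);
+        };
+        channel.BasicConsume(""user-jobs"", autoAck: false, consumer);
+        return Task.CompletedTask;
+    }
+}
+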

+ +

JobAPI model example (simplified for brevity):

+",361644,,,,,43919.59514,How to design job-handling for API,,0,0,,,,CC BY-SA 4.0, +408117,1,,,3/29/2020 14:23,,3,419,"

Coming from Java, I was surprised to find out that Kotlin doesn't allow assignments as expressions.

+ +

Is there a reason for that?

+ +

Java (Works)

+ +
    @Test
+    public void test_x() {
+        List<String> elements = null;
+        for (final String x : (elements = List.<String>of(""1"", ""2""))) {
+            System.out.println(x);
+        }
+    }
+
+ +

Kotlin (Compile-time error)

+ +
    /*Assignments are not expressions, and only expressions are allowed in this context*/
+    @Test
+    fun test_x() {
+        var elements: List<String> = listOf()
+        for (x in (elements = listOf(""1"", ""2""))) {
+            println(x)
+        }
+    }
+
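
For reference, the usual workarounds keep the assignment a statement, or turn it into a side effect of an expression via a scope function. This compiles (a sketch, using the same data as above):

+    @Test
+    fun test_x() {
+        var elements: List<String> = listOf()
+
+        // 1) plain statement before the loop
+        elements = listOf(""1"", ""2"")
+        for (x in elements) println(x)
+
+        // 2) or assign as a side effect via also()
+        for (x in listOf(""1"", ""2"").also { elements = it }) println(x)
+    }
+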
+",289700,,,,,43919.61111,Why Kotlin doesn't allow assignments as expressions?,,2,2,1,,,CC BY-SA 4.0, +408123,1,,,3/29/2020 16:17,,3,62,"

First of all, a small introduction: I'm relatively new to Swift and to programming in general. I've been doing it for the last year and am loving each new thing in this vast world.

+ +

My post is about getting some technical advice and finding out whether the decisions being made in my company make any sense.
I will first describe the design being suggested, and then my conclusions.

+ +

The design that's being implemented:

+ +

We are working on a big app with some flows that follow a sequence of 5 to 8 controllers, and our team leader insists on this design pattern:
let's call the first controller a holder. The holder (black border) is responsible for having a container, and this container has its own navigation controller (red border); the holder also holds all the data (orange) that the secondary controllers generate.

+ +

Diagram of what this pattern is trying to achieve

+ +

+ +

To do this, every red-border controller has a method:



+ +
private func getParent() -> HolderViewController? {
+
+    if let parent = navigationController?.parent as? HolderViewController {
+
+        return parent
+    }
+    return nil
+}
+
+ +

And to write to the holder, we call the method:

+ +
getParent()?.someModel.someModelProperty = ""some data""
+
+ +

Conclusions:

+ +

Passing data through the navigation controller seems to go against the single responsibility principle.
Creating strong references in every controller, even if I ensure that the navigationController is properly deinitialized when the flow ends, does not seem like a good option; it could lead to memory leaks and retain cycles.
I cannot rule out that, for some odd reason, two controllers might try to access the same property at the same time.
This feels like the Singleton design pattern, but with a limited ""scope"".

+ +

Resolutions:

+ +

Since I am new and working in a company - and every company has a hierarchy - my objective above all is to learn whether my conclusions are wrong and to get a better, more concise explanation of this.

+ +

First of all, to address the thread-safety concerns, I created a concurrent queue. Instead of writing to the model directly, I go through a method that takes a keyPath. This is the method I'm using to write to the model:

+ +

In holder:

+ +
class HolderViewController: UIViewController {
+
+   @IBOutlet weak var bottomNavigationContainer: UIView!
+
+   private var bottomNavigationController: UINavigationController!
+   private var someModel: SomeModel!
+   private let concurrentQueue: DispatchQueue = DispatchQueue(label: ""concurrentQueue"", attributes: .concurrent)
+
+    override func viewDidLoad() {
+        super.viewDidLoad()
+
+        setupBottomNavigation()
+    }
+
+    private func setupBottomNavigation() {
+
+        let rootController: SecondaryViewController? = SecondaryViewController()
+
+        if let root = rootController {
+
+            bottomNavigationController = UINavigationController(rootViewController: root)
+            addChild(bottomNavigationController)
+            bottomNavigationController.view.frame = bottomNavigationContainer.bounds
+            bottomNavigationContainer.addSubview(bottomNavigationController.view)
+        }
+    }
+}
+
+extension HolderViewController {
+
+    public func setValueInModel<Value>(_ value: Value, forKey path: WritableKeyPath<SomeModel, Value>) {
+
+       concurrentQueue.async(flags: .barrier) { [weak someModel] in
+
+            if var obj = someModel {
+
+                obj[keyPath: path] = value
+            }
+        }
+    }
+
+    public func readFromHolder() -> SomeModel {
+
+        concurrentQueue.sync {
+            return self.someModel
+        }
+    }
+}
+
+ +

The red-border controllers are the ones inside the bottom navigation controller:

+ +
class RedBorderViewController: UIViewController {
+
+    var someString: String?
+
+    @IBOutlet weak var textField: UITextField!
+
+    override func viewDidLoad() {
+      super.viewDidLoad()
+    }
+
+    override func viewWillAppear(_ animated: Bool) {
+      super.viewWillAppear(animated)
+
+      if let text = readFromHolder() {
+
+        textField.text = text
+      }
+    }
+
+    @IBAction func continueButton(_ sender: Any) {
+
+      if let text = textField.text {
+
+        setValueInHolder(text)
+      }
+    }
+
+    private func getParent() -> HolderViewController? {
+
+      if let parent = navigationController?.parent as? HolderViewController {
+
+        return parent
+      }
+      return nil
+    }
+
+    private func setValueInHolder(_ string: String) {
+
+      getParent()?.setValueInModel(string, forKey: \.someModelProperty)
+
+    }
+
+    private func readFromHolder() -> String? {
+
+       return getParent()?.readFromHolder().someModelProperty
+    }
+
+} 
+
+ +

The code above is just an example of how we are doing things.

+ +

This looks like messy code for a simple thing; we could use closures, delegates, segues, etc., but my team leader does not like the simpler and more common solutions. I forgot to mention: every one of our controllers has a xib, and we do not use storyboards.
I know the basics of how to use the other options; what I'm trying to understand is whether or not this is a good solution, and why, and whether my way of thinking and my methods make any sense. For comparison, a minimal delegate sketch follows below.
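
For comparison, a minimal sketch of the delegate alternative mentioned above (names hypothetical): the child talks to an abstraction it does not retain, instead of reaching into its parent:

+import UIKit
+
+protocol StepDelegate: AnyObject {
+    func step(_ controller: UIViewController, didProduce value: String)
+}
+
+class StepViewController: UIViewController {
+    @IBOutlet weak var textField: UITextField!
+    weak var delegate: StepDelegate?   // weak: no retain cycle
+
+    @IBAction func continueButton(_ sender: Any) {
+        delegate?.step(self, didProduce: textField.text ?? """")
+    }
+}
+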

+ +

Remember, every conclusion I've drawn and every solution I've implemented could be wrong; that's why I'm sharing my thoughts, in order to learn from your advice and experience.

+ +

Thanks in advance. :) +Stay safe!

+",361658,,,,,44202.83611,Need technical advice about passing data through UINavigationController,,1,0,0,,,CC BY-SA 4.0, +408129,1,408130,,3/30/2020 2:03,,0,144,"

I understand accessors have to do with OOP. For other languages such as C, imagine you have a function that, given 2 numbers, returns their sum. Should you name it getSum() or sum()?

+",358678,,,,,43920.29167,Should a function that returns something be named as an accessor (getSomething)?,,3,0,,,,CC BY-SA 4.0, +408133,1,,,3/30/2020 6:19,,3,229,"

I am trying to revamp my legacy application to make it scalable and performant. Its current architecture is something like this

+ +

Consider a short-lived script that gets invoked 500k+ times a day. Each invocation is unique (identified by a Key) and writes a couple of structured files to its own unique dated directory. There can be script reruns too (reruns update the files in the same partition).

+ +

Now I have a web application that shows information from the executions of this script (data persisted by the script runs to the file system) in the UI.

+ +

The backend of this web application is Java-based. It has a 7-day in-memory cache (hashmaps), with dedicated threads that wake up every 30 seconds and refresh the data in the cache by reading fresh information from the file system. Note that the in-memory cache with 7 days of data takes around 40 GB of RAM.

+ +

The frontend is React-based. We refresh the data in the browser by querying the Java backend every 30 seconds via an API call.

+ +

As you could notice there are three main issues with this architecture:

+ +
1. There can be a delay of up to 30 sec (backend refresh) + 30 sec (frontend refresh) before fresh data shows in the UI, because we poll for data instead of pushing updates.
2. Because of the in-memory cache, we cannot scale this application horizontally by replicating server instances: we would end up with multiple caches across the different server instances, each with its own refresh cycle.
3. Queries for data older than 7 days are too slow, because that data is not in the Java server's in-memory cache and has to be read from the file system on the fly.
+ +

How can I improve this architecture?

+ +

One possible Architecture:

+ +

I was thinking of introducing a Kafka queue that the scripts publish events to, in addition to writing to the file system. The Java server can subscribe to these Kafka events. On receiving an event, the Java server can (see the sketch after this list):

+ +
1. Update the data in the Redis cache,
2. Persist it to the database, and
3. Push updates to the UI over WebSockets.
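
A hedged sketch of those three steps as a single consumer loop (the topic name, props, running flag, and the redis/repository/broadcaster collaborators are all hypothetical):

+try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
+    consumer.subscribe(List.of(""script-events""));
+    while (running) {
+        for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
+            redis.set(rec.key(), rec.value());                 // 1) refresh cache
+            repository.save(rec.key(), rec.value());           // 2) persist
+            webSocketBroadcaster.push(rec.key(), rec.value()); // 3) push to UI
+        }
+    }
+}
+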
+ +

Does this sound good, or do you see any flaws?

+",361588,,361588,,43925.37292,43925.37292,Architecture for real time updates from the data in file system,,2,7,1,,,CC BY-SA 4.0, +408137,1,408150,,3/30/2020 7:35,,-2,163,"

I am creating a small program to parse the contents of Excel files. There are two types of Excel files, containing the same data, but with different templates.

+ +

It is possible to tell them apart with 100% certainty by checking the contents of a specific cell, which I call the fingerprint.

+ +
+ +

Follows an MVCE:

+ +
• Creates an A if the fingerprint contains foo
• Creates a B if the fingerprint contains bar
+ +

Interesting data (two characters on the left/right, depending on the class) is then displayed. (Obviously the real use case is more complex than extracting data from the fingerprint: in some cases I want the content of column A, in others the first word of column B concatenated with column D, etc.)

+ +

However, should I ever want to add a new implementation where the interesting data is, say, the middle two characters, I would need to modify the CreateItem function as well.

+ +

I have tried to find a way to loop through all implementations of the base class and make it automatic, but figured the cure was worse than the disease (a sketch of what that might look like is at the end of this question)...

+ +
using System;
+
+public class Program
+{
+    public static void Main()
+    {
+        var factory = new Factory();
+
+        var test1 = factory.CreateItem(""something foo"");
+        Console.WriteLine(test1.GetInterestingData());
+
+
+        var test2 = factory.CreateItem(""something bar"");
+        Console.WriteLine(test2.GetInterestingData());
+    }
+}
+
+public abstract class Base
+{
+    protected readonly string Data;
+    public Base(string data){
+        this.Data = data;
+    }
+    public abstract string GetInterestingData();
+}
+
+public class A:Base
+{
+    public const string Fingerprint = ""Foo"";
+    public A(string data):base(data){}
+
+    public override string GetInterestingData(){
+        return this.Data.Substring(0,2);
+    }
+}
+
+public class B:Base
+{
+    public const string Fingerprint = ""Bar"";
+    public B(string data):base(data){}
+
+    public override string GetInterestingData(){
+        return this.Data.Substring(this.Data.Length - 2,2);
+    }
+}   
+
+public class Factory
+{
+    public Base CreateItem(string data){
+        if(data.Contains(A.Fingerprint, StringComparison.OrdinalIgnoreCase)){
+            return new A(data);
+        }
+        if(data.Contains(B.Fingerprint, StringComparison.OrdinalIgnoreCase)){
+            return new B(data);
+        }
+        throw new Exception(""No fingerprint match"");
+    }
+}
+
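
For reference, the ""automatic"" variant alluded to above could look roughly like this hedged reflection sketch (requires System, System.Collections.Generic and System.Linq). It assumes every subclass keeps the public const Fingerprint plus (string) constructor convention, which is exactly the fragility that can make the cure worse than the disease:

+public class ReflectionFactory
+{
+    // discover all non-abstract Base subclasses once, reading their Fingerprint constants
+    private readonly List<(string Fingerprint, Func<string, Base> Create)> creators =
+        typeof(Base).Assembly.GetTypes()
+            .Where(t => t.IsSubclassOf(typeof(Base)) && !t.IsAbstract)
+            .Select(t => (
+                (string)t.GetField(""Fingerprint"").GetValue(null),
+                (Func<string, Base>)(data => (Base)Activator.CreateInstance(t, data))))
+            .ToList();
+
+    public Base CreateItem(string data)
+    {
+        foreach (var (fingerprint, create) in creators)
+            if (data.Contains(fingerprint, StringComparison.OrdinalIgnoreCase))
+                return create(data);
+        throw new Exception(""No fingerprint match"");
+    }
+}
+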
+",361694,,288976,,43920.47569,43921.40833,Using const values in factory class to choose implementation,,2,5,,,,CC BY-SA 4.0, +408145,1,,,3/30/2020 13:24,,0,1755,"

I'm wondering about best practices for upserting a document based on two identifiers.

+ +
@Document(collection = ""cars"")
+public class CarEntity {
+    @Id
+    private String id;
+    private String color;
+    private String brand;
+    private Instant repairedAt;
+        ...
+}
+
+ +

Each car is identifiable by the combination of color and brand, which is the logical ID for upserting a document. I want to update the repairedAt field.

+ +

Now there are (to my knowledge) three good ways to handle that.

+ +

Option 1: Custom repository with custom upsertOneByColorAndBrand which either

+ +

1.a) uses the CarRepository internally:

+ +
public void upsertOneByColorAndBrand(final CarEntity carNew,
+                                           final String color,
+                                           final String brand) {
+    // findOneByColorAndBrand is no custom implementation, just from the interface
+    var carExisting = repo.findOneByColorAndBrand(color, brand);
+    if (carExisting == null) {
+        // write to DB if not existent
+        repo.save(carNew);
+    } else {
+        // update document if existing
+        carExisting.setRepairedAt(carNew.getRepairedAt());
+        repo.save(carExisting);
+    }
+}
+
+ +

1.b) or uses MongoTemplate internally:

+ +
public void upsertOneByColorAndBrand(final CarEntity carNew,
+                                           final String color,
+                                           final String brand) {
+    // define find query
+    Query query = new Query();
+    query.addCriteria(Criteria.where(""color"").is(color));
+    query.addCriteria(Criteria.where(""brand"").is(brand));
+
+    // create document from entity
+    Document doc = new Document();
+    mongoTemplate.getConverter().write(carNew, doc);
+    Update update = Update.fromDocument(doc);
+
+    // upsert
+    mongoTemplate.upsert(query, update, CarEntity.class);
+}
+
+ +

Option 2: You could also skip the whole custom repository implementation and just define the ID of the document yourself, based on the combination of color & brand, since that combination is unique in this example (sketched below). When doing that, you can just use the native repo.save(car), as it will automagically use the new composite ID to operate, which skips the whole ""find"" part.
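
A hedged sketch of option 2, deriving the id from the natural key so that repo.save() becomes a true upsert (the separator choice and constructor shape are assumptions):

+@Document(collection = ""cars"")
+public class CarEntity {
+    @Id
+    private String id;              // e.g. ""blue#dacia""
+    private String color;
+    private String brand;
+    private Instant repairedAt;
+
+    public static String idOf(String color, String brand) {
+        return color + ""#"" + brand;
+    }
+
+    public CarEntity(String color, String brand, Instant repairedAt) {
+        this.id = idOf(color, brand);
+        this.color = color;
+        this.brand = brand;
+        this.repairedAt = repairedAt;
+    }
+}
+
+// usage: repo.save(new CarEntity(""blue"", ""dacia"", Instant.now()));
+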

+ +

I compared the performance of options 1.a and 1.b, and the difference is in the milliseconds for a three-digit number of documents. That's because 1.a splits the ""find"" and ""save"" into two operations.

+ +

Are there huge downsides/benefits with option 2? In your experience, is using a composite ID in the Spring Data environment a good idea? Would you rather use compound indexes or ""set"" the ID manually when creating the entity?

+",309192,,309192,,43920.66597,43920.66597,Spring Data MongoDB: Update document based on multiple identifiers with Composite ID vs. MongoTemplate upsert vs. MongoRepository find & save,,0,0,,,,CC BY-SA 4.0, +408146,1,408154,,3/30/2020 13:52,,1,75,"

So I have a few XML files that need to be encrypted, and it works - no problems here. Now, since the latest update of the CodeAnalyzers, I get a new warning (CA5401) telling me that it is a bad idea to have a ""nonstandard initialization vector"". A little research showed me why it is indeed a good idea to have a new IV each time a message is encrypted. I shall use random IVs, it says.

+ +

However, I still need the exact same IV to decrypt a file, so using a newly generated IV to open a file from last Friday won't work.

+ +

However, then I saw this: https://www.codeproject.com/Articles/556454/Encrypt-and-Decrypt-Text-with-a-Specified-Key

+ +

There, the IV is basically stored as the first bytes of each encrypted text, and is thus an unencrypted part of the data.
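
For illustration, a minimal C# sketch of that prepend-IV scheme (not taken from the linked article; key management and error handling omitted, and System.Linq is assumed for Concat/Take):

+byte[] Encrypt(byte[] plaintext, byte[] key)
+{
+    using var aes = Aes.Create();
+    aes.Key = key;
+    aes.GenerateIV();                        // fresh random IV per message
+    using var enc = aes.CreateEncryptor();
+    byte[] cipher = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
+    return aes.IV.Concat(cipher).ToArray();  // IV travels in the clear, up front
+}
+
+byte[] Decrypt(byte[] blob, byte[] key)
+{
+    using var aes = Aes.Create();
+    aes.Key = key;
+    aes.IV = blob.Take(16).ToArray();        // first 16 bytes = one AES block = the IV
+    using var dec = aes.CreateDecryptor();
+    return dec.TransformFinalBlock(blob, 16, blob.Length - 16);
+}
+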

+ +

Is this how it is meant to be done?

+",361728,,,,,43920.63403,Encryption using a nonstandard-IV,,1,2,,,,CC BY-SA 4.0, +408148,1,,,3/30/2020 14:16,,-1,45,"

My friends and I are trying to figure out which one of our professors would win in a fist fight (i.e., 2 profs fight at a time, the victors of those fights then fight each other, etc., until there is only 1 winner). We have not asked, but we suspect they will be unwilling to cooperate, so we want to compare them ourselves and algorithmically determine a winner. The basic idea is that each person is presented with several (~20) dialogs, each along the lines of ""Would X or Y win in a fist fight?"", where X and Y are random professors. At the end, we need some way to use all of these comparisons to pick a winner.

+ +

I looked into using Elo scoring, but it won't work as it is time-dependent. It seems to me this problem must have been solved already; I can think of many scenarios where you would need something similar (e.g. several people want to go to a restaurant together and must decide what the group's favourite is).
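
One simple, time-independent aggregation (a hedged sketch, Copeland-style: a professor scores a point for each head-to-head matchup they win on majority vote):

+from collections import defaultdict
+from itertools import combinations
+
+votes = defaultdict(int)        # votes[(x, y)] = times x was picked over y
+
+def record(winner, loser):
+    votes[(winner, loser)] += 1
+
+def copeland_winner(professors):
+    score = defaultdict(int)
+    for x, y in combinations(professors, 2):
+        if votes[(x, y)] > votes[(y, x)]:
+            score[x] += 1
+        elif votes[(y, x)] > votes[(x, y)]:
+            score[y] += 1
+    return max(professors, key=lambda p: score[p])
+

More principled models for exactly this setting exist (Bradley-Terry, for instance), but they reduce to the same input: counts of pairwise wins.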

+",361731,,,,,43920.61528,Algorithm for subjective ranking by many people,,1,2,,,,CC BY-SA 4.0, +408157,1,,,3/30/2020 15:59,,2,54,"

In our organisation, we're planning to migrate a Single-Page Application (with a complex back-end) to another front-end technology (Angular to React). We'd like to proceed with an Agile mindset, migrating progressively (page by page) and deploying frequently.

+ +

However, we're in a bit of a conundrum as to how to proceed, and we'd like some advice or feedback from the community. Here are some constraints I've tried to summarize:

+ +
• UX: Roaming from an Angular page to a React page degrades the user experience: instead of a single-app experience (where the header & sidebar don't reload), the user experiences the page as a full reload.
• Technically: we can solve this by enclosing an Angular page in a React page, but that induces a lot of risk in session management, JS/CSS overload, etc. The Angular app is also heavily coupled with the back-end, and one of the migration's goals is to decouple it.
• Methodology: Working on a 9-month redesign before delivering puts us at great risk, as there is little user feedback. We're keen on the frequent delivery and feedback cycle that has worked so well for us.
• New features: We also have to deliver new features to honour our commitments. Initially, we wanted to tie each page migration to new features and reconcile our migration roadmap with our feature roadmap.
+ +

There's a paradox: a progressive migration process is methodologically sound, but technically risky, and it degrades the UX during the transition phase. We may consider living with some of these constraints, but we'd love to collect some feedback. I hope the question is appropriate for this site. Have you ever experienced this situation before? Thanks!!

+",158501,,,,,43920.69653,Front-end migration in an Agile process,,1,2,,,,CC BY-SA 4.0, +408166,1,408167,,3/30/2020 20:13,,0,100,"

Given the following class in a legacy code base without any unit tests: any refactoring should be done on the smallest possible scale, just enough to be able to write unit tests.

+ +
public class Person
+{
+    private readonly PersonValidator personValidator;
+
+    public Person()
+    {
+        this.personValidator = new PersonValidator(this);
+    }
+}
+
+public class PersonValidator
+{
+    public PersonValidator(Person person)
+    {
+        // Logic here.
+    }
+}
+
+ +

Please note that this is very simplistic. Should I extract an interface from the PersonValidator class and use the factory design pattern to create it? Note that it cannot be injected directly, since it requires this in its constructor.

+ +

Basically, the code would become something like that:

+ +
public interface IPersonValidator 
+{ }
+
+public interface IPersonValidatorFactory
+{
+    IPersonValidator CreatePersonValidator(Person person);
+}
+
+public sealed class PersonValidatorFactory : IPersonValidatorFactory
+{
+    public IPersonValidator CreatePersonValidator(Person person)
+    {
+        return new PersonValidator(person);
+    }
+}
+
+public class PersonValidator : IPersonValidator
+{
+    public PersonValidator(Person person)
+    {
+        // Logic here.
+    }
+}
+
+public class Person
+{
+    private readonly IPersonValidator personValidator;
+
+    public Person(IPersonValidatorFactory personValidatorFactory)
+    {
+        this.personValidator = personValidatorFactory.CreatePersonValidator(this);
+    }
+}
+
+ +

By using this approach, I get rid of the hard-wired dependency on PersonValidator. And since I'm using an interface that represents a factory, I am able to mock it and return any type that I like, so my unit tests would not depend on any other concrete types.

+ +

Is this the preferred approach or am I missing something?

+",186577,,,,,43920.86458,Should I refactor this class to use a Factory?,,2,0,1,,,CC BY-SA 4.0, +408169,1,408173,,3/30/2020 20:48,,0,83,"

I am building a REST API with Spring Boot, and I would like to clarify a concept about my architecture, to see if the community can help me.

+ +

Imagine that in my database I have a table called Person. I am building an architecture based on the three-layer architecture. The general scheme would be as follows:

+ +
• On the one hand I have PersonDao.java, which is in charge of accessing the database to retrieve the tuples from the Person table. To do this, it uses a class called PersonEntity.java, which contains (as attributes) all the columns of the table. PersonDao.java returns objects of the PersonModel.java class, the class that represents the Person business model.
• On the other hand, I have PersonService.java, which is responsible for carrying out the business logic of my application. This service calls PersonDao.java to access the information stored in the database. PersonService.java works with objects of the PersonModel.java class, since this class represents the business objects of my application. PersonService.java will always return a PersonDto.java.
• Finally, PersonController.java. This controller is the one that exposes the Rest API. It always works with DTOs and communicates with PersonService.java through DTOs as well.
+ +

PersonController <-> (PersonDto) <-> PersonService <-> (PersonModel) <-> PersonDao <-> (PersonEntity) <-> DB
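
In code, each arrow above implies a small, boring mapping step at the layer boundary, e.g. (a sketch; getter and constructor names hypothetical):

+public final class PersonMapper {
+    public static PersonModel toModel(PersonEntity e) {
+        return new PersonModel(e.getId(), e.getName());
+    }
+    public static PersonDto toDto(PersonModel m) {
+        return new PersonDto(m.getId(), m.getName());
+    }
+}
+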

+ +

The question is: Is it necessary to use the PersonModel.java class so that PersonService.java only works with objects of this class? Or would it be better for PersonService.java to work directly with objects from class PersonEntity.java? If I do it this way it is to maintain the principle of single responsibility, so that each layer only works with objects of its scope.

+ +

If the answer is that PersonModel.java is necessary to maintain the principle of single responsibility of each layer. Would something change if JpaRepository were used? If I ask this question it is because in many tutorials and examples I see that when JpaRepository is used, the services work directly with the entities. In this case, shouldn't we create a class that represents the business object for the services?

+ +

EDIT: The answer to this question (https://stackoverflow.com/questions/34084203) reflects the architecture that makes sense in my head, though surely it is not the most correct thing. The conclusion I come to is that each layer uses its own kind of objects. Copy/paste of the answer:

+ +
+

Typically you have different layers:

+ +
• A persistence layer to store data
• A business layer to operate on data
• A presentation layer to expose data
+ +

Typically, each layer would use its own kind of objects:

+ +
• Persistence Layer: Repositories, Entities
• Business Layer: Services, Domain Objects
• Presentation Layer: Controllers, DTOs
+ +

This means each layer would only work with its own objects and never ever pass them to another layer.

+
+ +

Thanks in advance.

+",334842,,334842,,43921.30486,43921.30486,Need for a domain model in a service in 3 tier architecture?,,1,0,,,,CC BY-SA 4.0, +408172,1,,,3/31/2020 3:29,,1,24,"

I have around 50 microservices which communicate with each other using an Apache MQ implementation.

+ +

I want to know if I can avoid a big-bang approach when I need to upgrade the MQ version.

+ +

The current option I can think of is to put a switch in each microservice and then, on D-day, switch over to the new MQ. But this is still a big-bang approach. I want to know whether there is another way to perform this kind of middleware upgrade.

+",168441,,,,,43921.14514,How to Upgrade MQ with incremental approach with 50 micro services,,0,0,,,,CC BY-SA 4.0, +408174,1,408176,,3/31/2020 5:05,,2,128,"

I am currently working on a project that requires a series of (almost 86) calculations to run based on user data input. The problem is that each calculation has a series of requirements:

+ +
• It should hold a version variable to distinguish changes to each calculation algorithm's implementation. This way, any time we modify an algorithm, we know which version was used for a specific calculation.
• It should be able to load specific data from other modules within the application (namely, we have 8 entities), so each calculation can choose the information necessary for its operation.
• It should be able to determine whether it is ""runnable"": for this we would write a function that verifies the extracted data (from the previous requirement) meets some custom criteria, per calculation, guaranteeing the algorithm will execute correctly.
• Each should have a different algorithm implementation.
• It should generate a series of execution metrics (logs) and store them, such as data-fetching time, algorithm execution time, and the sampleSize, i.e. the amount of data loaded for that specific calculation to run.
+ +

Currently, what I've done is create an abstract class Calculation with this structure:

+ +
abstract class Calculation<T, F> {
+  /**
+ * Logging Variables.
+   */
+  private initialDataFetchTime: Date;
+  private finalDataFetchTime: Date;
+  private initialAlgorithmTime: Date;
+  private finalAlgorithmTime: Date;
+
+  // Final result holding variable.
+  private finalResult: T;
+
+  // The coverage status for this calculation.
+  private coverage: boolean;
+
+  // Data to use within the algorithm.
+  private data: F;
+
+  // The version of the Calculation.
+  public abstract version: string;
+
+  // The form data from the User to be used.
+  public static formData: FormData;
+
+  /**
+ * This is the abstract function to be implemented with
+ * the operation to be performed with the data. Always
+ * called after `loadData()`.
+   */
+  public abstract async algorithm(): Promise<T>;
+
+  /**
+ * This function should implement the data fetching
+ * for this particular calculation. This function is always
+ * called before `calculation()`.
+   */
+  public abstract async fetchData(): Promise<F>;
+
+  /**
+ * This is the abstract function that checks
+ * if enough information is met to perform the
+ * calculation. This function is called always
+ * after `loadData()`.
+   */
+  public abstract async coverageValidation(): Promise<boolean>;
+
+  /**
+ * This is the public member function that is called
+ * to perform the data-fetching operations of the
+ * calculation. This is the first function to call.
+   */
+  public async loadData(): Promise<void> {
+    // Get the initial time.
+    this.initialDataFetchTime = new Date();
+
+    /**
+     * Here we run the data-fetching implementation for
+     * this particular calculation.
+     */
+    this.data = await this.fetchData();
+
+    // Store the final time.
+    this.finalDataFetchTime = new Date();
+  }
+
+  /**
+ * This is the public member function that is called
+ * to perform the calculation on this field. This is
+ * the last function to be called.
+   */
+  public async calculation(): Promise<T> {
+    // Get the initial time.
+    this.initialAlgorithmTime = new Date();
+
+    /**
+     * Here we run the algorithmic implementation for
+     * this particular calculation.
+     */
+    this.finalResult = await this.algorithm();
+
+    // Store the final time.
+    this.finalAlgorithmTime = new Date();
+
+    // Return the result.
+    return this.finalResult;
+  }
+
+  /**
+ * This is the public member function that is called
+ * to perform the coverage-checking of this calculation.
+ * This function should be called after the `loadData()`
+ * and before `calculation()`.
+   */
+  public async coverageCheck(): Promise<boolean> {
+    // Execute the check function.
+    this.coverage = await this.coverageValidation();
+
+    // Return result.
+    return this.coverage;
+  }
+
+  /**
+ * Set FormData statically to be used across calculations.¡
+   */
+  public static setFormData(formData: FormData): FormData {
+    // Store report.
+    this.formData = formData;
+
+    // Return report.
+    return this.formData;
+  }
+
+  /**
+ * Get the coverage of this calculation.
+   */
+  public getCoverage(): boolean {
+    return this.coverage;
+  }
+
+  /**
+ * Get the data for this calculation.
+   */
+  public getData(): F {
+    return this.data;
+  }
+
+  /**
+ * Get the result for this calculation.
+   */
+  public getResult(): T {
+    return this.finalResult;
+  }
+
+  /**
+   * Function to get the class name.
+   */
+  private getClassName(): string {
+    return this.constructor.name;
+  }
+
+  /**
+   * Function to get the version for this calculation.
+   */
+  private getVersion(): string { return this.version; }
+
+  /**
+   * Get all the Valuation Logs for this Calculation.
+   */
+  public async getValuationLogs(): Promise<CreateValuationLogDTO[]> {
+    // The array of results.
+    const valuationLogs: CreateValuationLogDTO[] = [];
+
+    // Log the time the algorithm took to execute.
+    valuationLogs.push({
+      report: Calculation.formData,
+      calculation: this.getClassName(),
+      metric: 'Algorithm Execution Time',
+      version: this.getVersion(),
+      value:
+        this.initialAlgorithmTime.getTime() - this.finalAlgorithmTime.getTime(),
+    });
+
+    // Log the time to fetch information.
+    valuationLogs.push({
+      report: Calculation.formData,
+      calculation: this.getClassName(),
+      metric: 'Data Fetch Load Time',
+      version: this.getVersion(),
+      value:
+        this.finalDataFetchTime.getTime() - this.initialDataFetchTime.getTime(),
+    });
+
+    // Sample size is calculated and not an issue for this matter.
+
+    // Return the metrics.
+    return valuationLogs;
+  }
+}
+
+ +

And then, created subsequent classes for each calculation that extend the previous class, such as:

+ +
export class GeneralArea extends Calculation<number, GeneralAreaData> {
+  /**
+   * Versioning information.
+   * These variable hold the information about the progress done to this
+   * calculation algorithm. The `version`  field is a SemVer field which
+   * stores the version of the current algorithm implementation.
+   *
+   * IF YOU MAKE ANY MODIFICATION TO THIS CALCULATION, PLEASE UPDATE THE
+   * VERSION ACCORDINGLY.
+   */
+  public version = '1.0.0';
+
+  // Dependencies.
+  constructor(private readonly dataSource: DataSource) {
+    super();
+  }
+
+  // 1) Fetch Information
+  public async fetchData(): Promise<GeneralAreaData> {
+    // Query the DB.
+    const dataPoints = await this.dataSource.getInformation(/**  **/);
+
+    // Return the data object.
+    return {
+      mortgages: dataPoints,
+    };
+  }
+
+  // 2) Validate Coverage.
+  public async coverageValidation(): Promise<boolean> {
+    // Load data.
+    const data: GeneralAreaData = this.getData();
+
+    // Validate to be more than 5 results.
+    if (data.mortgages.length < 5) {
+      return false;
+    }
+
+    // Everything correct.
+    return true;
+  }
+
+  // 3) Algorithm
+  public async algorithm(): Promise<number> {
+    // Load data.
+    const data: GeneralAreaData = this.getData();
+
+    // Perform operation.
+    const result: number = Math.min.apply(
+      Math,
+      data.mortgages.map(mortgage => mortgage.price),
+    );
+
+    // Return the result.
+    return result;
+  }
+}
+
+/**
+ * Interface that holds the structure of the data
+ * used for this implementation.
+ */
+export interface GeneralAreaData {
+  // Mortgages matching some criteria.
+  mortgages: SomeDataEntity[];
+}
+
+ +

The idea is to allow us to perform three basic operations:

+ +
1. Load the data for every calculation.
2. Validate coverage for every calculation.
3. If the previous step returns a general ""true"", run the calculations.
+ +

However, this pattern has raised an issue: the FormData (the information the user uploads) is stored statically, which means that if one user's calculations are already running and another user performs an upload, I cannot set the FormData without making the first user's calculations go nuts. Passing the FormData to each constructor seems like a lot of work, though (if you feel that is the way, I have no fear of writing code ;) ) - a sketch of that route follows below.
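
For scale, the constructor route may be less work than it looks. A hedged sketch where the base class takes the per-request FormData once, making every calculation instance request-scoped (no static state):

+abstract class Calculation<T, F> {
+  constructor(protected readonly formData: FormData) {}
+  // ...everything else unchanged...
+}
+
+class GeneralArea extends Calculation<number, GeneralAreaData> {
+  constructor(formData: FormData, private readonly dataSource: DataSource) {
+    super(formData);
+  }
+  // ...
+}
+
+// the whole per-request cost is one extra argument per instantiation:
+const generalArea = new GeneralArea(formData, dataSource);
+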

+ +

Perhaps it's this quarantine, but am I not seeing something here? Currently, the final execution looks something like this:

+ +

+public async performCalculation(formData: FormData): Promise<FormDataWithCalculations> {
+  // Set general form data.
+  Calculation.setFormData(formData); // <--- Error in subsequent requests :(
+
+  // Instance Calculations.
+  const generalAreaCalculation: GeneralAreaCalculation = new GeneralAreaCalculation(/** data service **/);
+  // 85 more instantiations...
+
+  // Load data for Calculations.
+  try {
+    await Promise.all([
+      generalAreaCalculation.loadData(),
+      // 85 more invocations...
+    ]);
+  } catch(dataLoadError) { /** error handling **/ }
+
+  // Check for coverage.
+  const coverages: boolean[] = await Promise.all([
+    generalAreaCalculation.coverageCheck(),
+    // 85 more coverage checks...
+  ]);
+
+  // Reduce coverage.
+  const covered: boolean = coverages.reduce((previousValue, coverage) => coverage && previousValue, true);
+
+  // Check coverage.
+  if (!covered) { /** Throw exception **/ }
+
+  // Perform calculations!
+  const result: FormDataWithCalculations = new FormDataWithCalculations(formData);
+
+  try {
+    result.generalAreaValue = await generalAreaCalculation.calculation();
+    // 85 more of this.
+  } catch (algorithmsError) { /** error handling ***/ }
+
+  /*
+   (( Here should go the log collecting and storing, for each of the 85 calculations ))
+  */
+
+  // Return processed information.
+  return result;
+}
+
+ +

I am not afraid of writing a lot of code if it makes things reusable, maintainable, and, more importantly, testable (oh yes, test each calculation to ensure it does what it is supposed to do in normal and edge cases - that's why classes were my approach, so each one would have a test attached). However, I am completely overwhelmed by writing this tremendous amount of code instead of just writing 85 functions (which is what is already in use) and calling each one of them.

+ +

Are there any patterns? Guidance? Advice? References? Study material? I can't seem to reduce this code any further, but wanted to ask in case someone knows a better pattern for this kind of problem. In case it is useful, the code is TypeScript (Node.js with a NestJS API), to understand how everything gets wired.

+ +

Thanks in advance, and apologies for my awful English!

+",326734,,,,,43921.81528,How to structure OOP multiple calculations?,,1,2,1,,,CC BY-SA 4.0, +408175,1,408204,,3/31/2020 5:30,,2,63,"

I am implementing a system for a customer who is asking me to use a fixed hash as authorization to protect the API. This fixed value will be sent in the header of the HTTP call as ""Authorization"": ""[the hash]"".

+ +

Meanwhile, when I was looking at RFC implementations, I learned that the Authorization: <type> <credentials> pattern was introduced by the W3C in HTTP 1.0. Can somebody please tell me whether what I am doing is wrong (going against this standard)?

+ +

I was also reviewing other APIs using a fixed hash value, and noticed that they send it in the URL parameters themselves, e.g.: POST https://language.googleapis.com/v1/documents:analyzeEntities?key=API_KEY. I would like to know what standard they are following.
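
For comparison, the common shapes look like this (values hypothetical):

+Authorization: Bearer 9a3f...                    # RFC 6750 style: <type> <credentials>
+X-Api-Key: 9a3f...                               # common custom header for fixed keys
+POST /v1/documents:analyzeEntities?key=9a3f...   # key in the query string (what Google does here)
+

The query-string variant follows no RFC; it is simply a vendor convention, with the drawback that URLs tend to end up in logs.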

+",361788,,,,,43926.85347,Proper way to put a fixed hash as Authorization,,1,1,,,,CC BY-SA 4.0, +408180,1,,,3/31/2020 10:38,,3,313,"

I'm a rather experienced software developer. I worked with many teams and projects throughout my career so far. The recent two projects, however, challenged me in an unusual way. Namely: they were disorganized, or rather organized in a way that I perceive as not effective.

+ +

Both of the projects are for the banking industry. They showed some surprising similarities:

+ +
• full-fledged Scrum (I mean planning, refinements, retrospectives)
• big teams (9-15 team members)
• no mockups before development
• 20-25% of developers' time spent in meetings
• developers assigned to work on things they lack expertise in (React devs assigned to Angular, .NET devs assigned to React)
• low quality of the backlog and functional requirements
• superficial code reviews (pointing out that you forgot to delete commented-out code, but never mind that the code is not clean, the names are bad, and there are no unit tests)
• low developer productivity
• missed deadlines
• poor code quality
+ +

In the case of the first project, I tried to convince the team to reorganize: I suggested splitting the team (it was around 12 people at first, later 18 and even more), and I tried to convince them that we must have mockups first and should reduce the number and/or length of the meetings. Unfortunately, all the requests and suggestions were rejected. They just said: 'it is what it is', 'we won't be able to fix the situation', 'we have always worked that way'.

+ +

Finally, after a few attempts to fix the project and improve it - I quit.

+ +

The second project shows the same symptoms at this moment and I don't want it to fail, but I see the same patterns emerge.

+ +

How do you harness the chaos within a development team? How do you convince the team to follow well-known good principles? Is it possible at all, or will some organizations always produce such an environment? If it's impossible to fix such a team, how do you convince yourself to keep working in a team that you know works ineffectively and against good principles?

+",143750,,1204,,43921.52153,43922.20903,How to harness the chaos within the development team?,,6,7,,,,CC BY-SA 4.0, +408189,1,408191,,3/31/2020 14:33,,2,211,"

We're using the gitflow branching strategy and it works well. What I can't seem to find, though, is a recommendation on at what point people close their releases.

+ +

For example, suppose we have 4 environments:

+ +
1. develop
2. test
3. uat
4. production
+ +

We've been working on a selection of features on the develop environment and we are now creating a release from develop, v1.0.0.

+ +

So we take v1.0.0 to the test environment where bugs are raised and we can hotfix them on the release branch. QA pass the release and we want to take it to uat for the client to sign off before going to production.

+ +

Where in this process do people close the release branch? When they move from test to uat or when they move from uat to production?

+ +

Is there a standard that most people use, or is it whatever suits your project? To me there are pros and cons to each option. TBH I see greater benefit in closing the release as I take it from test to uat, but keeping the release open until the client has signed off and it's gone to production also makes a lot of sense.

+",295126,,,,,43921.875,"Using gitflow, when do people tend to close off the release branch?",,3,2,,,,CC BY-SA 4.0, +408193,1,408195,,3/31/2020 15:40,,0,177,"

So I'm writing a network simulator in C++ as part of a university project. Right now, I have a class structure that looks something like:

+ +
//includes type aliases
+#include ""GlobalTypes.h""
+
+//main body of simulator, contains various static members and methods that define behavior of simulator
+class mySimulator : public myErrorHandler {static member1...static method1()...};
+
+//these are things like a file IO handler, simulator environment initialization, etc
+class largeSimulatorSubcomponent1 : private mySimulator {static member1...static method1()...};
+class largeSimulatorSubcomponent2 : private mySimulator {static member1...static method1()...};
+class largeSimulatorSubcomponent3 : private mySimulator {static member1...static method1()...};
+
+//various non-static object classes that are manipulated 
+//by the simulator and each other to perform the simulation
+class cellTower{member1...method1()...};
+class cellPhone{member1...method1()...}; 
+class dataContainer1{member1...method1()...};
+class dataContainer2{member1...method1()...};
+class etc{member1...method1()...};
+
+ +

In a general sense, the mySimulator and largeSimulatorSubComponent classes form the ""universe"" and control the external behavior of the various objects that are created in order to perform the simulation.

+ +

From my point of view, my structure makes sense because the ""Simulator"" is never created more than once, so it doesn't seem necessary to make it a non-static class that can be traditionally instantiated. However, I've come to understand that static classes are typically frowned upon, with many suggesting that namespaces are better. At first, that is what I was doing, but I switched to classes with static members and methods because that gave me control over which parts of the simulator were allowed to access the other parts.

+ +

I fully expect that this project will be taken over by students in the future, and I am trying to make the structure as clear and safe for them to use as possible, and I know that a lot of them will not be experts in C++ and may inadvertently cause bugs or implement behavior in a logically inconsistent ways. I am trying to protect them from that by carefully restricting access between parts of the program with clear logical encapsulation, const correctness, error handling, etc. I believe that my current structure allows that, but I'm interested to hear any suggestions or comments on my logic/structure.

+ +

Also, on a point of lesser importance, what is your opinion on using global aliases in this manner?

+ +
using typename1 = float;  // typename1 & typename2 serve to ""label"" floats
+using typename2 = float;  // that are used to represent a certain category of
+                          // values that are important to the simulator
+
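
On the alias question: a using alias is purely cosmetic, so typename1 and typename2 remain interchangeable to the compiler. If the intent is to stop future students mixing the categories up, a minimal strong-type wrapper does enforce it (a sketch; names hypothetical):

+struct Frequency { float value; };   // distinct types: cannot be assigned
+struct Power     { float value; };   // to one another by accident
+
+void tune(Frequency f);              // tune(Power{1.0f}) is now a compile error
+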
+",361840,,,,,43924.3375,Should I use a class with only static members to encapsulate my program?,,2,3,1,,,CC BY-SA 4.0, +408196,1,,,3/31/2020 16:18,,1,17,"

Not too sure if this is the right place to ask this.

+ +

But I have more of a theory-related question than a completely technical one.

+ +

I want to lay out the issue at hand; I'm looking for someone to bring a fresh mind to this and an alternative way I'd be able to handle it.

+ +

So I have this mobile e-commerce application. Everyone's happy with the application; the only complaint has been that the checkout (""Placing the Order"") takes too long: 30-ish items take around 20-30 seconds, 1 item around 3-5.

+ +

I know why this is, but the issue is I can't do much about it. It's an initial design flaw on my part.

+ +

So essentially, when the items in the cart get checked out, two things happen. 1. I pull each item from the API backend and then check the pulled item (A) against the item in the cart (B).

+ +

For example, if A has less stock than B requests, you can't check out B. 2. If everything checks out, B is added to an order-line list and placed in an order call.

+ +

Now, the actual order call for B takes 1-2 seconds per instance of B; however, getting A to check against B takes 2-4-ish seconds per instance of B. From that you can conclude that the more items there are, the longer A makes this process. But how else do I do this? I have to cater for the fact that Person X and Person Z both want product B (the product being a can of paint). Person X adds 10 of B to the cart and Person Z adds 20, but there are only 25 of B.

+ +

So now Person X adds B to the cart and keeps shopping, while Person Z adds B and checks out immediately. Then Person X adds another item and decides to check out: he requires 10, but there are only 5 left, so I need to know that there are only 5 left at that instant.

+ +

So, other than fetching another product A to compare against, how would I go about this kind of thing?

+ +

Small note before I get asked: I've done what I can (as far as my skill level allows) to the API (not my own) to try and improve these times.

+",356905,,,,,43921.67917,Cart Checkout Item Check alternative,,0,4,,,,CC BY-SA 4.0, +408198,1,408200,,3/31/2020 17:13,,1,582,"

I have an example of a DFD for a patient information system implemented in a certain hospital. The figure below represents the overview diagram (the level-0 diagram, if we consider the first level to be the context diagram, the second level the level-0 diagram, and so on):

+ +

+ +

In this system, patients can search for and make appointments. I am trying to understand the data flow that is labeled patient name and directed from the process Make appointment to the data store Patients (see the yellow highlight in the diagram).

+ +

I don't understand why we have this flow in our system: in which scenario would the process Make appointment send the patient name to the data store Patients? Shouldn't the patient name already have been sent by the process Maintain patient info?

+",351790,,209774,,43921.85764,43921.86806,Data Flow Diagram for patient information system for a hospital,,1,2,2,,,CC BY-SA 4.0, +408199,1,,,3/31/2020 18:30,,-1,80,"

While players like Rappi, Alibaba and WeChat have gone the super-app way, i.e. one single app hosting many unrelated services, ecosystems like Google and Amazon still have multiple single-purpose applications, e.g. the Google suite apps like Mail, Calendar, Hangouts, Meet, etc.

+ +

What criteria would a product manager look at when deciding how to bundle an enterprise application?

+ +

That is, would you have a single super app with many SDKs, or multiple single-purpose applications linking multiple services via APIs?

+",361854,,,,,43921.87639,One Super App or Many Single-Purpose Apps,,1,1,,,,CC BY-SA 4.0, +408206,1,408222,,4/1/2020 1:35,,1,106,"

Per the title: how important is the data in assessing code coverage?

+ +

As background, let's say you're given 20% of the entire dataset to help trace the different pathways each row of data takes through the code. My understanding is that this limitation of the data can mean that not all possible pathways are reached.

+ +

Now, the big question is: does one need the other 80% of the data to assess code coverage, or just the one or two extra rows needed to hit the remaining pathways? To put it differently, if I hit all of the remaining pathways, is that sufficient to assess code coverage?

+ +

As an example, let's say you have the following:

+ +
> df.columns
['numerator', 'denominator']
> df.head(3).values
array([[2, 4],
+      [4, 0]])
+
+
+# Code (wrapped in a function so the returns are valid; Python's
+# built-in division-by-zero exception is ZeroDivisionError)
+def check_rows(df):
+    for idx, row in df.iterrows():
+        try:
+            if row[0] / row[1] == 0.5:
+                return row[0] / row[1]
+        except ZeroDivisionError:
+            return 'ZeroDivisionError'
+
+ +

So in this case, this sample exercises two outcomes: 0.5 and ZeroDivisionError (strictly, there is a third, silent path: a row whose ratio is defined but not 0.5 falls through without returning).

+ +

As such, would we need additional values from the dataset, such as numerator: 1, denominator: 2, if its corresponding pathway has already been identified by numerator: 2, denominator: 4?
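
To make that concrete, a minimal covering set for the snippet above needs only one row per distinct path; extra volume adds nothing to coverage (values hypothetical):

+covering_rows = [
+    (2, 4),   # ratio == 0.5      -> value returned
+    (4, 0),   # denominator == 0  -> exception branch
+    (1, 3),   # ratio != 0.5      -> silent fall-through (no return)
+]
+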

+ +

My background is in statistics and data science, so my apologies if my interpretation comes off as naive.

+",361872,,361872,,43922.75486,43922.75486,How important is the data to assess code coverage?,,2,1,,,,CC BY-SA 4.0, +408209,1,,,4/1/2020 3:57,,1,85,"

Say I have a Person object. I need to ask the user to choose, from a list, which laptop they have.

+ +

They can also choose the option ""My product isn't listed here"".

+ +

Now the Person object will look like:

+ +
{
+  name: 'John',
+  laptop: 'Dell'
+}
+
+ +

If the user selects ""My product isn't listed here"", I can store it as:

+ +
laptop: 'OTHER'
+
+ +

But this is not recommended, as I shouldn't mix the laptop name string with the ""other"" string. In the code I'll be checking against the string ""other"", like a magic constant, which doesn't seem right.

+ +

I can't leave it as laptop: null because then I wouldn't know if the user selected ""other"" or did not make a choice at all.

+ +

Another option is to have a separate field like:

+ +
{
+  name: 'John',
+  laptop: null,
+  laptopFoundInList: false
+}
+
+ +

With this, I'll have to maintain 2 fields for the same thing.
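
One hedged alternative, if your stack is typed (TypeScript here), is to make the three states explicit in the type instead of overloading laptop:

+type LaptopChoice =
+  | { kind: ""listed""; name: string }   // picked from the list
+  | { kind: ""other"" }                  // ""my product isn't listed here""
+  | { kind: ""unanswered"" };            // no choice made yet
+
+interface Person {
+  name: string;
+  laptop: LaptopChoice;
+}
+
+const john: Person = { name: ""John"", laptop: { kind: ""listed"", name: ""Dell"" } };
+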

+ +

I'm sure this is a very common problem. What's the correct convention to store such information?

+",324920,,,,,43922.29722,"What's the best convention to store ""other"" option?",,2,3,,,,CC BY-SA 4.0, +408214,1,408216,,4/1/2020 7:32,,3,103,"

I would like to include fastparquet as a dependency in a Python library I am working on, but it requires Microsoft Visual C++ to build. The goal is for the end user to be able to easily install my library via PyPI or with the pip install git+(url) one-liner. I found pip wheels for pre-compiled fastparquet binaries here, but obviously that is only a solution for users with the same CPU architecture.

+ +

I'm basically asking whether there's a good way to include C++-dependent extensions without the end user needing build tools in order to run my library.
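
One hedged option is to make the compiled dependency opt-in via an extra, so a plain pip install of your library never triggers a C++ build (a setup.py sketch; names hypothetical):

+from setuptools import setup
+
+setup(
+    name='mylib',
+    version='0.1.0',
+    packages=['mylib'],
+    # `pip install mylib[parquet]` pulls in fastparquet; a plain install does not
+    extras_require={'parquet': ['fastparquet']},
+)
+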

+",361887,,,,,43922.35347,Best way to go about including C/C++ dependencies in Python packages?,,1,0,,,,CC BY-SA 4.0, +408221,1,408290,,4/1/2020 10:25,,4,1031,"

I am pretty new to programming languages and only have limited knowledge about design patterns, so I hope you can help me with the following problem:

+ +

I have an application that operates on a group of different services. One function of the application is to provide an interface for the user to call every method of the available services. Therefore, I want to use the Command pattern, because it allows me to add new commands just by adding new classes rather than by changing existing code. The parameters for each service command are passed to the constructor.

+ +

Commands:

+ +
public interface ICommand {
+    void Execute();
+}
+
+public abstract class Command<T> : ICommand {
+    public T Service { get; set; }
+
+    public abstract void Execute(); // implementations use Service
+}
+
+public class Command1 : Command<Service1> {
+    T1 param1;
+    ...
+
+   public Command1(T1 param1, ...) { /* set parameters */ }
+
+   public override void Execute() { /* call first service1 method */ }
+}
+
+...
+
+public class Command2 : Command<Service2> {
+    T2 param1;
+
+    ...
+
+   public override void Execute() { /* call first service2 method */ }
+}
+
+...
+
+ +

The advantage is that the user can instantiate a group of commands without knowing the application's interface, and execute them later once the service has been set. The problem is that I don't know how to elegantly inject the services.
+The application is mainly responsible for starting and stopping the services and keeping an instance of each service in a central place.

+ +

Application:

+ +
public class Application {
+    S1 Service1;
+    S2 Service2,
+    ...
+
+    public void StartService(/* params */) { /* ... */ }
+    public void StopService(/* params */) { /* ... */ }
+    ...
+}
+
+ +

Question:

+ +

So my question is: how do I get the correct service inside a command?
+I thought about using some kind of dependency injection, a service locator, or the builder pattern, but I have never used these patterns and am not sure what the best solution is in this case, nor how to implement it correctly.

+ +

Update:

+ +

Thanks to the comments of @Andy and @Anders, it's probably best to use a Command class for my parameters and a CommandHandler class for the logic. The advantage is that I can instantiate a command handler inside the Application class and pass the correct service in the handler's constructor. Also, I can create a command outside the Application without knowing the service, and pass this command to the Application for execution.
+To map commands to the correct command handler I use a CommandBus, as suggested by @Andy, but I have some trouble implementing the Java example in C# because there is no generic map like Map<Class<? extends CommandHandler<?>>, CommandHandler<? extends Command>> in C#.

+ +

So what's a clean solution to map a command to its handler in C#? I don't really like my solution below because I have to upcast the command.

+ +

My updated code:

+ +
public interface ICommand { }
+
+public class ConcreteCommand : ICommand {
+    public Type1 Param1 { get; set; }
+    public Type2 Param2 { get; set; }
+    /* some more parameters */
+}
+
+public interface ICommandHandler<in TCommand> {
+    Task Handle(TCommand command);
+}
+
+public class ConcreteCommandHandler : ICommandHandler<ConcreteCommand> {
+    private readonly S1 service;
+
+    public ConcreteCommandHandler(S1 service) {
+        this.service = service;
+    }
+
+    public Task Handle(ConcreteCommand command) {
+        return service.DoSomething(command.Param1, ...);
+    }
+}
+
+
+public class CommandBus {
+    /* BAD: Return type of command handler hardcoded */
+    private readonly IDictionary<Type, Func<ICommand, Task>> handlerMap = new Dictionary<Type, Func<ICommand, Task>>();
+
+    public void RegisterCommandHandler<TCommand>(ICommandHandler<TCommand> handler) where TCommand : ICommand
+    {
+        Type commandType = typeof(TCommand);
+
+        if (handlerMap.ContainsKey(commandType))
+            throw new HandlerAlreadyRegisteredException(commandType);
+
+        /* BAD: Narrowing cast */
+        handlerMap.Add(commandType, command => handler.Handle((TCommand) command));
+    }
+
+    public Task Dispatch<TCommand>(TCommand command) where TCommand : ICommand
+    {
+        Type commandType = typeof(TCommand);
+
+        if (!handlerMap.TryGetValue(commandType, out Func<ICommand, Task> handler))
+            throw new HandlerNotRegisteredException(commandType);
+
+        return handler(command);
+    }
+}
+
+
+public class Application {
+    private CommandBus bus;
+    private S1 service1;
+    private S2 service2;
+    ...
+
+    private void InitializeBus() {
+        bus.RegisterCommandHandler(new ConcreteCommandHandler(service1));
+        ...
+    }
+
+    public void ExecuteCommand<TCommand>(TCommand command) where TCommand : ICommand {
+        bus.Dispatch(command);
+    }
+
+    ...
+}
+
+",361710,,361710,,43924.60764,43924.62361,Design pattern: How to inject dependencies into a Command pattern,,3,8,,,,CC BY-SA 4.0, +408224,1,408430,,4/1/2020 13:30,,0,43,"

Due to the large number of columns to choose from, the standard column chooser in DevExpress doesn't cut it for me. So I thought of something like a ""grouped"" column chooser. This should work like a treeview with expandable ""main nodes"" showing all columns under each main node.

+ +

Here is a visual representation of what I mean:

+ +

+ +

Clicking on any of the Category buttons should expand the clicked button and show all columns underneath. From that point, it should be possible to drag columns into the grid just as in the standard column chooser.

+ +

I've searched the documentation, but there is no mention of anything like that. +So my question is:

+ +

Is it even possible to implement this in ASP.NET WebForms?

+",361923,,,,,43927.26181,Treeview column chooser devexpress asp.net WebForms,,1,0,,,,CC BY-SA 4.0, +408226,1,408227,,4/1/2020 13:59,,3,40,"

This is not about storing my user's login details in the app; I already use hashes and JWT tokens for that. There is a part of our app where we need to store the user's login details for another website.

+ +

E.g. they enter their username and password, it gets stored in my system, and then my Node server wakes up at night and uses their details to log in to the other system and pull out the data it needs. What's the best and safest way to store these details in the system, given that I need the password in plain text form to log in to the website?

+ +

We are receiving permission from the user to do this, as it is automating an essential part of our data collection for the user. It's just that the other website/system doesn't have any other way of providing the data to us, unless we want it in paper form in the post.
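
+ +

For context, the direction I have in mind is reversible encryption at rest rather than hashing, since the plain text is needed at night. A minimal sketch in TypeScript/Node, assuming a 32-byte key kept in an environment variable (all names here are hypothetical):

+ +
import crypto from 'crypto';
+
+// Assumption: CREDENTIALS_KEY is a 64-char hex string (32 bytes) kept
+// outside the database, e.g. in an env var or a secrets manager.
+const KEY = Buffer.from(process.env.CREDENTIALS_KEY!, 'hex');
+
+export function encryptPassword(plain: string): string {
+    const iv = crypto.randomBytes(12); // unique IV per stored credential
+    const cipher = crypto.createCipheriv('aes-256-gcm', KEY, iv);
+    const data = Buffer.concat([cipher.update(plain, 'utf8'), cipher.final()]);
+    const tag = cipher.getAuthTag(); // detects tampering on decrypt
+    return [iv, tag, data].map((b) => b.toString('hex')).join(':');
+}
+
+export function decryptPassword(stored: string): string {
+    const [iv, tag, data] = stored.split(':').map((h) => Buffer.from(h, 'hex'));
+    const decipher = crypto.createDecipheriv('aes-256-gcm', KEY, iv);
+    decipher.setAuthTag(tag);
+    return Buffer.concat([decipher.update(data), decipher.final()]).toString('utf8');
+}
+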

+",361927,,,,,43922.60486,Storing username and password for another site in Node and MongoDB,,1,0,,,,CC BY-SA 4.0, +408228,1,,,4/1/2020 14:45,,2,78,"

I'm using Knex.js with TypeScript for database access.

+ +

Table

+ +
    +
  • uid (UUID, auto-generated by Postgres)
  • +
  • name (varchar)
  • +
+ +

Model

+ +
    interface User {
+        uid: string | null;
+        name: string;
+    }
+
+ +

Repository

+ +
    class UserRepository {
+        public async insert(user: User): Promise<void> {
+            await knex('user').insert(user);
+        }
+    }
+
+ +

Usage

+ +
function someRouteHandler(req, res) {
+    const repo = new UserRepository(...)
+
+    // Next line will error out due to the object literal
+    // not having the uid property
+    repo.insert({
+        //uid: null // satisfies interface constraint but errors during insert
+        name: req.body.name,
+    });
+}
+
+ +

Problem

+ +

The only way I see this working is to modify the repository class methods to explicitly say which fields I need to insert:

+ +
public async insert(user: User): Promise<void> {
+    await knex('user').insert({
+        name: user.name,
+    });
+}
+
+ +

Unfortunately, this adds maintenance effort, since every time I modify the model I also need to modify the methods that do the queries to reflect the changed properties.

+ +

Is there a cleaner way of doing this?
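
+ +

One direction I have been wondering about is deriving the insert shape from the model with a mapped type, so that new columns flow through without touching every query method (untested sketch, reusing the User and knex names from above):

+ +
// uid is generated by Postgres, so it is the only field dropped here
+type NewUser = Omit<User, 'uid'>;
+
+class UserRepository {
+    public async insert(user: NewUser): Promise<void> {
+        await knex('user').insert(user); // uid never reaches the query
+    }
+}
+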

+",361931,,,,,44190.29375,Database Model Classes in TypeScript,,1,2,,,,CC BY-SA 4.0, +408230,1,,,4/1/2020 15:55,,1,39,"

I have a solution for a Xamarin.iOS and Xamarin.Android application with the structure below:

+ +
    +
  • Solution Root + +
      +
    • Common Project
    • +
    • Xamarin.iOS Project
    • +
    • Xamarin.Android Project
    • +
  • +
+ +

I need to branch both the iOS and Android projects, but I am facing a problem: when I branch the Xamarin.iOS project alone, for example, I get only the Xamarin.iOS project in my workspace, so I am missing the dependency it has on the Common Project. And since the iOS and Android projects evolve independently, I don't want to branch the whole solution; I want to keep branching specific to each platform.

+ +

How do you recommend I implement branching for this case? I am using TFS 2015 source control.

+",101940,,,,,44192.87778,TFS Branching for Xamarin Common Projects,,1,2,,,,CC BY-SA 4.0, +408232,1,408235,,4/1/2020 16:02,,33,7194,"

We work in scrum teams with a product owner who is responsible for the backlog and prioritisation of that backlog. Recently the topic of un-ticketed work came up: developers for one of the applications are doing un-ticketed work that they regard as important. Typically this is tech debt, but it can also be things like migrating to a better library, etc.

+ +

The argument from the developers was that these are generally small things, and that in an agile team they should be able to exercise their judgement and fit them in around sprint work. E.g. while waiting for the CI system to build and deploy, they could tidy up some code. Raising a ticket would take longer than actually doing the work. The work being done is tested via automated tests, so there is no additional burden on the QA members of the team.

+ +

The argument against this is that the developers are effectively saying their opinion on what work is a priority is more important than any other stakeholder and are not going to go through the PO so it can be compared against other work in the backlog. There is also a case to be made that if a developer has spare time then it would be more productive for them to be elaborating upcoming stories. The state of stories coming into sprint has been raised at retros before and so more elaboration can only help this. There is also a concern that the self policing of what size falls into this category may start to stretch and result in even more time being spent on un-ticketed work.

+ +

I can see both sides of the argument to an extent, but should all work, no matter how small, be ticketed and go through sprint planning, rather than being done as and when by developers?

+",174177,,,,,43923.49028,"Un-ticketed work, how much is too much?",,9,10,3,,,CC BY-SA 4.0, +408242,1,408253,,4/1/2020 20:29,,10,662,"

Background

+ +

At work we've been trying to find a new workflow. We currently follow git-flow pretty closely. We don't have release branches; we just merge develop directly into master once we feel like it's well tested enough. When we have a hotfix, we independently merge it into both develop and master.

+ +

One of the approaches we've been talking about is that instead of merging develop into master, we cherry-pick develop's commits into master. After a few cherry-picks, we deploy and that's our release. Alternatively, we could cherry-pick from develop to a release branch (based on master), then merge into master.

+ +

At first I was opposed to this, because it would flatten the history out. And, we merge using the --no-ff option, meaning we can revert entire features if needed. This would be lost as well.

+ +

Other than that, though, I can't really think of any reasons this would be a bad idea. At the same time I can't see any benefits to this approach. But I'm just a mid-level engineer with less than 10 years of professional experience. So I'm really interested in hearing people's experiences / thoughts about this.

+ +

Question

+ +

Are there benefits to cherry-picking instead of merging? And is there anything that we would lose by using this approach over traditional merging?

+",79027,,79027,,43941.6375,43942.75139,Is cherry-picking commits into master (instead of merging) a good idea?,,2,1,,,,CC BY-SA 4.0, +408245,1,409016,,4/1/2020 20:46,,1,211,"

So, I have an application here that posts requests to another service via REST API calls. I have to implement a health check within my application that ensures that the other service is up. I'm a software noob here and my question is this:

+ +

When do I ping the other service to check if it's up? During my application startup (this is just going to be a one-time thing), or when I initialize the client which makes the REST API calls to the other service?

+",361962,,31260,,44008.52083,44008.52083,When do you do a health check?,,3,4,,,,CC BY-SA 4.0, +408246,1,408249,,4/1/2020 21:05,,28,8792,"

Imagine two classes:

+ +
class Person {
+    MarriedPerson marry(Person other) {...}
+}
+
+ +
class MarriedPerson extends Person {
+}
+
+ +

The idea is that a Person is automatically ""morphed"" to type MarriedPerson when the method marry is called on it. For this reason you would need to create an instance of MarriedPerson and transfer the data from the initial Person object. The Person object would still exist, which is bad. Another way would be the application of a design pattern ...

+ +
    +
  • What design pattern would fit the problem?
  • +
  • Are there better ways to model this relationship?
  • +
  • (Off-topic) Are there Java features to define a custom downcast?
  • +
+ +
+ +

This is not homework. I just got curious after commenting here.

+",360199,,360199,,43936.59444,43936.82569,Polymorphism case study - Design pattern for morphing?,,7,11,11,,,CC BY-SA 4.0, +408251,1,,,4/1/2020 22:12,,1,103,"

I am building an interpreter in C for a simple programming language.

+ +

The interpreter is fitted with a built in garbage collector. The GC simply marks all objects which are linked from some root (the call stack, the evaluation stack, loaded modules, etc.) and collects the rest.

+ +

I am now writing an extension DLL for the interpreter. The DLL is ""wrapping"" another C library for graphics (SDL). This library creates objects which represent operating system resources, such as graphical windows. So I now need to think about how the interpreter and GC integrate with the ""outside world"" in a correct way.

+ +

In the current situation, one could write the following code in my language and it will work fine:

+ +
import graphics
+window = graphics.new_window()
+# ... all the rest
+
+ +

However, the following code I suppose will cause problems:

+ +
import graphics
+graphics.new_window()
+
+ +

Since the result of new_window() isn't stored anywhere, at some point the GC will collect the Window object. When being collected, the Window object will return the native resources to the OS, as part of its deallocate implementation that is called by the GC.

+ +

I'm not sure if this is reasonable behavior for a programming language. On one hand, if we lose the reference to the Window it doesn't make much sense to leak it.

+ +

On the other hand, if I remember correctly - in Java (and I suppose we can find equivalent examples in most languages), it is common and legal to write code such as the following (psuedo Java):

+ +
public void makeWindow() {
+    new JFrame();
+}
+
+ +

This would create a GUI window that will appear on the screen and stay visible. And as you can see, the JFrame object isn't saved anywhere. Still, the GC will not collect it (or at least it doesn't collect the OS resource the JFrame presumably wraps).

+ +

What is the standard way to implement this in a language VM?

+ +

The only approach I can think of right now, is that objects that wrap native resources will not free these resources when being claimed by the GC. The only way to free these resources would be with an explicit call in user code to a dispose method on the object. What approach is standard in VM implementations?

+",121368,,121368,,43922.95347,43924.23125,"How do VMs and GCs treat objects which wrap yet active resources, but that are unreachable from user code?",,2,5,,,,CC BY-SA 4.0, +408262,1,,,4/2/2020 5:30,,6,402,"

CSS is a syntax for specifying the appearance of text and other web content. Sizes and lengths are defined in pixels; however, pixels have different physical sizes on different devices, which is most notable on mobile devices.

+ +

If the developer wishes to specify lengths in physical units like centimeters or inches, the CSS specification and its implementations appear to grant that wish but fail to keep the promise. If a box is defined as 1 inch wide, the browser will convert the inch unit to pixels using a fixed ratio (CSS defines 1in as 96px) and will produce a 96-pixel-wide box, which will have different physical widths on mobile and desktop displays, and generally none of them will be 1 inch wide.

+ +

The question is two fold:

+ +
    +
  1. Why can't developers specify lengths using real world SI units?

  2. +
  3. Why does CSS appear to support SI lengths but fail to do so? Admitting that it cannot do so would be better than lying. It appears there is a history of a failed feature here.

  4. +
+ +

Thank you.

+",249024,,,,,43924.09306,Why don't web browsers know the physical dimensions of a display?,,2,4,,,,CC BY-SA 4.0, +408264,1,408380,,4/2/2020 9:54,,2,155,"

I'm developing a Spring Boot / Spring Batch application.

+ +

What I'd like to do is to have a separate module for every job. This is a reasonable decision, because different tasks (Spring Batch jobs) have different domains.

+ +

The first thing I've thought about is to make a Maven module for every job. Then you only need to add the Maven dependency to the main project in order to run it.

+ +

The problem with this solution is that if one module fails to compile or has failing unit tests, the whole application is stuck as well.

+ +

So ideally, +I'd like the failing module to simply not be loaded into the application, while the other modules continue their own CI/CD process.

+ +

How can that be achieved? Can Java 9 modularity be used for this use case?

+ +

Thanks

+",313400,,,,,43925.63542,How to build a modular/extensible Spring Boot application?,,1,0,,,,CC BY-SA 4.0, +408267,1,,,4/2/2020 11:07,,1,102,"

Context

+ +

We are creating a library that makes an API (HTTP) request to a 3rd party.
+During testing we have written mock versions of the functions that make external requests so that we can test the other functionality in isolation.

+ +

Question

+ +

Should we export the mock versions of the functions in the library, so that the application using the library can use the mocks in its own tests to avoid duplicating them?

+ +

e.g: (pseudocode)

+ +
if (ENV==""test"") {
+  return MockImplementation
+}
+else {
+  return RealImplementation
+}
+
+
+ +

Is this considered an ""anti-pattern"" or a ""best-practice"" when creating a library?

+ +
+

Note: not looking for opinions or flame-war, + just some clarity on what to do in practice.
+ We've tried googling for this, but no relevant result surfaced ... + hence the direct question.
+ If this is not the appropriate place to ask this type of question, where is?

+
+ +

Any insight much appreciated! Thanks!

+ +

Additional Detail

+ +

We aren't using a mocking library/framework. Our mocks simply return a sample response. The application that is consuming the library we are building has to mock the same request, because the API in question (Google!) does not have a ""test"" endpoint. I'm trying not to be too specific so that the question stays general. The library we've created is intended for general use, but we are using it in our own app/project, where we are having to duplicate the library's mocks.

+",211301,,211301,,43923.62639,43923.64306,Is it a best practice or anti-pattern to export mock versions of functions for a library?,,2,4,,,,CC BY-SA 4.0, +408274,1,408309,,4/2/2020 13:36,,-2,48,"

I'm currently building an education website which contains a lot of content, such as questions, videos and notes, all of which need to be securely locked away and exposed only to users with the relevant permissions.

+ +

Now one thing I'm stuck on is how to implement a way to store all the course data, questions and videos.

+ +

I believe I have videos down pat; I'm going to use Cloudflare to store the videos securely and serve them to the user. But for everything else I'm not sure if I'm following an antipattern.

+ +

Currently, I plan to store the questions and course outlines as JSON files on my Express server. Then, on the server's boot, I scan the folder and store all the question/note JSON paths in a JS object which I treat as an index.
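
+ +

As a rough sketch of that boot-time index (TypeScript/Node; the paths and names are made up):

+ +
import { promises as fs } from 'fs';
+import path from 'path';
+
+// question id -> absolute path of its JSON file
+const index = new Map<string, string>();
+
+export async function buildIndex(root: string): Promise<void> {
+    for (const file of await fs.readdir(root)) {
+        if (file.endsWith('.json')) {
+            index.set(path.basename(file, '.json'), path.join(root, file));
+        }
+    }
+}
+
+export function lookup(id: string): string | undefined {
+    return index.get(id);
+}
+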

+ +

Then, when a user wants something, I authenticate them and send the relevant JSON files using the index. I will also be storing Cloudflare video IDs to help organize my videos into course structures using JSON. Some questions may have images and such as well, for which I create a time-limited URL that I send along with the JSON.

+ +

While it seems good in theory, I have no idea if this is a good approach to take.

+ +

Will it be fast? +Scalable?

+ +

It just seems weird I'm not using a Database like what I'm using to store user data. Folders seem the easiest while also allowing my team to add more questions or notes whenever.

+ +

Is this the best approach to take or is there something better I can do?

+",362030,,,,,43924.09861,What is the best way to store secure static data which can be easily updated by clients?,,1,0,,,,CC BY-SA 4.0, +408278,1,,,4/2/2020 14:36,,-1,45,"

I just learned about the Business Rule Engine concept. I'm a junior software developer, so I don't know much about it yet.

+ +

But I was wondering whether it would be possible to take inspiration from this concept and create an engine for input validation. Depending on the type and purpose of an input, the engine would trigger a rule and actions related to that type of input.

+ +

If such an engine already exists, it'd be awesome. Please let me know about it, because I have not found it on Google.

+ +

My question is: is my idea at all feasible? Or does it even make sense?

+",362031,,,,,43923.76458,Could you use business rule engine concept in other realm such as security?,,1,7,,,,CC BY-SA 4.0, +408281,1,408284,,4/2/2020 16:00,,2,91,"

I mainly work on custom web applications that have just one production deployment.

+ +

While we are moving to continuous delivery, I was wondering if that approach reduces the need to make settings configurable.

+ +

For example, I was writing code that sends confirmation e-mails from a web server through Exchange.

+ +

Quite some configuration needs to be done:

+ +
    +
  1. Version of Exchange
  2. +
  3. An account to login to Exchange
  4. +
  5. A sender e-mail address
  6. +
  7. The URL of an Outlook web access service
  8. +
+ +

The version of Exchange will probably change only once every couple of years. +The login account and the sender e-mail address will possibly never change during the lifetime of the product. +The URL may change at some point for some external reason.

+ +

Traditionally, I would make all of these configurable through a settings file, like web.config. The consideration would be that if changing any of these were unexpectedly required, it could be done without a release of the product, since a release may not be feasible at the time it is required, or may introduce risks.

+ +

However, if your continuous delivery pipeline works as it should, couldn't you rely on the possibility to release whenever such a change is required?

+ +

What are your opinions? Have you observed a trend in this?

+",104335,,,,,43925.71875,Does continous delivery reduce the need to make settings configurable?,,3,0,,,,CC BY-SA 4.0, +408282,1,408316,,4/2/2020 16:04,,2,111,"

In a tree structure, is there a standard term (in software engineering industry practice, Computer Science or Mathematics) for the predicate f(x, y) defined in human language as ""x is an ancestor of or identical to y""?

+ +

Example:

+ +

Given the tree:

+ +
    A
+   / \
+  B   E
+ / \
+C   D
+
+ +

Then f(x, C) is satisfied by x in {A, B, C} but not D or E.
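
+ +

In code the predicate is trivial to state, which is why I expect it has an established name; a TypeScript sketch, assuming nodes carry parent links:

+ +
interface TreeNode {
+    parent: TreeNode | null;
+}
+
+// f(x, y): true when x is y itself or any node on y's path to the root
+function isAncestorOrSelf(x: TreeNode, y: TreeNode): boolean {
+    for (let n: TreeNode | null = y; n !== null; n = n.parent) {
+        if (n === x) return true;
+    }
+    return false;
+}
+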

+ +

The best term I can find is ""ancestor or self"", but I am doubtful that is a standard term because I can find so few references to it.

+",305797,,305797,,43923.90278,43924.29306,"In a tree, what's the standard term for predicate ""ancestor of or identical to""?",,3,9,1,,,CC BY-SA 4.0, +408289,1,408294,,4/2/2020 19:16,,2,91,"

Some node.js libraries (just as an example) can pull in literally hundreds of dependencies. Some of these dependencies are small packages that only have one contributor. Oftentimes the contributor doesn't even have any personal information listed other than their username.

+ +

How is it possible to trust that nobody in those hundreds of libraries will ever act maliciously, get their account hacked, have a change of heart, purposefully introduce a vulnerability, etc.?

+ +

It seems that all it would take is one point in the chain to get compromised, or have never had good intentions from the beginning, and that could lead to huge security problems or data breaches. A compromised plugin in a build system could steal source code.

+ +

It just seems like the wild west and a disaster waiting to happen. Do companies really review the source code of every single package they use, and the changes on every single update? That just sounds unmaintainable.

+ +

I could understand trusting a paid library published by a business, but I don't understand projects using these deeply nested dependencies published by internet strangers.

+ +

I guess my question is that I am having trust issues and don't know how to properly include a library without worrying that I could accidentally compromise a customer's business.

+",85551,,,,,43923.87014,How do projects manage security with so many dependencies in open source projects?,,1,4,1,,,CC BY-SA 4.0, +408291,1,,,4/2/2020 19:59,,1,126,"

I have a question in regard to designing relationships in REST APIs. Imagine I have a relationship like the one in this diagram:

+ +

+ +

Now, should I show the relationship in the team endpoint like this?

+ +
/v1/teams/:id
+
+{
+    id: ""66e08a2b-2495-4edc-a528-ef00a3f908f1"",
+    name: ""Sales"",
+    description: ""Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Fusce tellus odio, dapibus id fermentum quis, suscipit id erat"".
+    members: [
+        {
+            id: ""a31604c6-2289-4a0b-9127-7219abf4f494"",
+            name: ""John Doe"",
+            role: ""manager""
+        },
+        {
+            id: ""fa3c68f4-2008-4bd3-8280-8ae8c4c40a26"",
+            name: ""James Smitt"",
+            role: ""member""
+        },
+        {
+            id: ""946d2c3d-415c-4461-850d-892385a65f86"",
+            firstName: ""Jane Doe"",
+            role: ""member""
+        }
+    ]
+}
+
+ +

And an employee endpoint like this:

+ +
/v1/employees/:id
+
+{
+    id: ""a31604c6-2289-4a0b-9127-7219abf4f494"",
+    name: ""John Doe"",
+    teams: [
+        {
+            id: ""66e08a2b-2495-4edc-a528-ef00a3f908f1"",
+            name: ""Sales"",
+            role: ""manager""
+        },
+        {
+            id: ""66e08a2b-2495-4edc-a528-ef00a3f908f1"",
+            name: ""Account management"",
+            role: ""member""
+        },
+
+    ]
+}
+
+ +

OR should I have a separate endpoint for team members?

+ +
/v1/teams/:id/members
+
+[
+    {
+        id: ""a31604c6-2289-4a0b-9127-7219abf4f494"",
+        name: ""John Doe"",
+        role: ""manager""
+    },
+    {
+        id: ""fa3c68f4-2008-4bd3-8280-8ae8c4c40a26"",
+        name: ""James Smitt"",
+        role: ""member""
+    },
+    {
+        id: ""946d2c3d-415c-4461-850d-892385a65f86"",
+        firstName: ""Jane Doe"",
+        role: ""member""
+    }
+]
+
+ +

And if I use the separate endpoint, should I still include the teams and members properties in the employee and team endpoints?

+",362047,,,,,44126.59375,Rest API relationship design,,2,1,,,,CC BY-SA 4.0, +408293,1,,,4/2/2020 20:25,,2,19,"

I want to have a client-server model between smartphones within a range of 50 meters inside an application. One of the clients is supposed to be the ""Host"", and its only job is to receive data from all clients and do something with it (not important). Everyone should be able to see all the other smartphones that have the app started in a list. In addition, every client should be able to see which of the clients in the list the ""Host"" is, in order to send data to it.

+ +

I want to have the application decentralised, so that it doesn't depend on a database or anything else, just on the few people who are using it within a range of 50m. The benefit I gain by that is that multiple groups at different locations can use the app.

+ +

I'm struggling to choose the technology on which I want to build this setup.

+ +

My thoughts about Bluetooth Low Energy:

+ +
    +
  • easy to find all the people in range
  • +
  • not dependent on Wifi/Internet
  • +
  • Problem: the clients can't determine the Host
  • +
  • Problem: there are a lot of other Bluetooth devices which aren't smartphones and should be ignored
  • +
+ +

Is there a better technique I could use? Wi-Fi's disadvantage is that everyone has to be on the same network.

+",362051,,,,,43923.85069,Client-Server Model between Smartphones via BLE?,,0,1,,,,CC BY-SA 4.0, +408300,1,408331,,4/2/2020 23:07,,1,42,"

Imagine a grid like the one in the image. I have to model an API which identifies a single segment inside a grid (like the one painted in red).

+ +

I thought the segment should be represented with something like this:

+ +
int rowIndex -> specifies the row in which the segment is placed
+int columnIndex -> specifies the column in which the segment is placed
+enum (vertical|horizontal) orientation -> specifies the orientation of the segment
+
+ +

The point at 0,0 is the upper left corner.

+ +

For my example, the red line would be rowIndex: 1, columnIndex: 0, orientation: horizontal.
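
+ +

As a sketch of the shape I have in mind (TypeScript; the names are exactly what I am asking about):

+ +
enum Orientation {
+    Horizontal = 'horizontal',
+    Vertical = 'vertical',
+}
+
+interface GridSegment {
+    rowIndex: number;    // 0-based, counted from the upper left corner
+    columnIndex: number; // 0-based
+    orientation: Orientation;
+}
+
+// the red segment from the example
+const red: GridSegment = {
+    rowIndex: 1,
+    columnIndex: 0,
+    orientation: Orientation.Horizontal,
+};
+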

+ +

Do you think this is a valid and understandable way to represent a segment inside a grid, or is there perhaps a clearer solution? Is the naming clear?

+ +

+",362064,,,,,43925.07153,Identify segment in grid,,2,2,,,,CC BY-SA 4.0, +408301,1,408305,,4/2/2020 23:20,,0,80,"

This question is asked in a general way. In case it is hard to understand, I have added a concrete example below. I am interested in the answer to the general question.

+ +

I have a lot of experience writing programs of type X, but now I need to write a program of type Y.

+ +

Everybody says programs of type Y are easily done in language A with libraries P, Q, and R. However, I'm really only comfortable with language B (and I don't even have A installed).

+ +

It's impossible for me to know how much time I will save by using A + PQR. I do know, however, that my task will take a long time using language B.

+ +

How do professional software engineers decide what to do? Do they risk spending more time than they save by learning a new set of tools? Or do they go with what they know despite it being slow?

+ +



+ +

Same question, but A, B, P, Q, R, X, and Y are replaced with a concrete example. This question is not asking for Python library suggestions!

+ +

I have a decent amount of experience writing smallish embedded applications in C. Now, I need to write a program that displays a UI and sends/receives messages over the network.

+ +

Everybody goes on and on about how Python makes everything easy. However, since I am a Master's student whose graduation date keeps getting further away, I'm thinking about brute forcing this program using C code instead of messing around with a new language.

+ +

It's impossible for me to know how much time I will save by using Python (after fixing my broken Python/pip installation and then going through a bunch of tutorials to decide which networking and UI libraries will work for me). However, I know that I will be fighting an uphill battle doing this in C using sockets and threads...

+ +

How would a professional developer decide what to do? Invest in new tools, or keep using known techniques?

+",362066,,,,,43924.05903,When to use known languages/libraries vs. investing in learning new ones?,,4,0,,,,CC BY-SA 4.0, +408311,1,408313,,4/3/2020 2:47,,2,311,"

I've joined a small software development team to work on an existing code base. While I am tasking myself with learning the required programming languages and actually reading the code, I'm wondering if there are things software engineers would recommend I do in order to efficiently get up to speed on working with an unfamiliar code base.

+ +

I'm concerned that my question is too broad, so comments on how I can improve it would be appreciated.

+",160079,,,,,43924.17014,What steps should I take to become familiar with a new code base?,,2,1,,43924.60139,,CC BY-SA 4.0, +408318,1,,,4/3/2020 6:56,,1,145,"

When you have an application that suffers from the unfortunate interaction between TCP delayed acks and Nagle's algorithm, the common solution offered is to turn off Nagle's algorithm.

+ +

However, searching through the net in general, it looks like Nagle's algorithm is overall the better idea (basing this off the algorithms in general and this, this, this, etc.), but it also looks like it is pretty hard to turn off delayed acks; even if you turn them off, the TCP stack turns them back on again on subsequent data exchanges. Nagle's algorithm, on the other hand, can easily be turned off using the TCP_NODELAY or similar option, and it stays off.
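
+ +

For reference, disabling Nagle stays a stable one-liner on most stacks; e.g. in Node (TypeScript sketch, host and port made up):

+ +
import net from 'net';
+
+const socket = net.connect({ host: 'example.com', port: 8080 }, () => {
+    // Disables Nagle's algorithm: small writes go out immediately
+    // instead of being coalesced while waiting for outstanding ACKs.
+    socket.setNoDelay(true);
+    socket.write('ping');
+});
+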

+ +

What is the reason behind the bias towards delayed acks over Nagle's algorithm? What are the technical/non-technical reasons to prefer delayed ack over Nagle's algorithm?

+ +

Edit: As pointed out by @Bart van Ingen Schenau, when you don't have control over the client, all you can do is turn off Nagle's algorithm; but it is fairly common to have control over the client, and I would like to know the reasons in that case.

+",207378,,207378,,43924.75278,44075.37292,TCP delayed acks vs Nagle's algorithm,,2,0,,,,CC BY-SA 4.0, +408320,1,,,4/3/2020 8:09,,3,260,"

I'm working on a RPG so my character can equip a Weapon, Hat, Boots, Gloves, etc.

+ +

So I have an Item class for the different items, and the only subclass that adds new behaviour is Weapon. I'm not sure if it is a good idea to extend the class for the different types of items.

+ +

I think it is, because, for example, in the Hat slot you can put only Hats, so it adds type safety; but I'm looking for an experienced opinion.
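
+ +

To make the trade-off concrete, this is the kind of thing I mean (TypeScript sketch, names made up; the private brand only exists because TypeScript's typing is structural):

+ +
class Item {
+    constructor(public name: string) {}
+}
+
+// Weapon is the only subclass that adds behaviour
+class Weapon extends Item {
+    attack(): number { return 5; }
+}
+
+// Hat adds no behaviour; the brand just makes the type distinct
+class Hat extends Item {
+    private readonly brand = 'hat';
+}
+
+class Character {
+    weapon?: Weapon;
+    hat?: Hat; // equipping a Weapon here is now a compile error
+}
+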

+ +

Thanks!!

+",362086,,,,,44078.29306,Is it a good idea to extend a class if it doesn't add new behaviour?,,3,4,,,,CC BY-SA 4.0, +408322,1,,,4/3/2020 8:35,,1,29,"

Currently we are looking for a solution to sync databases across multiple locations. We have a location hierarchy such as Country -> State -> District -> Center. To increase the speed and reliability of data access, we thought that we could have partial synchronisation of the database as you go up the hierarchy.

+ +
e.g. 
+                                             USA
+                                            /   \
+                                           /     \
+                                     New York   Illinois
+                                     /     \
+                                    /       \
+                                NY City    Long Island
+                                /   \              \
+                               /     \              \
+                       Manhattan   Brooklyn      Suffolk
+
+
+ +

Manhattan and Brooklyn will sync to NY City; NY City and Long Island will sync to New York; and New York and Illinois will sync to the USA database.

+ +

One main reason to do this is that each location will only need to access data about itself and its children. So the only node with all the data would be the USA node (which has regular backups and is possibly mirrored).

+ +

Would this be a reliable way to store data and increase speed?

+ +

If so, what database would be best to implement this architecture? SQL, NoSQL? +Also, are there any systems that already do this?
+Also are there any systems that already do this?

+",362087,,,,,43924.35764,Partial database synchronisation across multiple databases,,0,1,,,,CC BY-SA 4.0, +408323,1,408325,,4/3/2020 9:10,,-6,56,"

I'm new to programming and I know some programming languages. +Whenever I look up information about any program, it turns out to be written in more than one programming language. +How can I do that? And is it important to use multiple languages to make advanced programs?

+",362090,,173647,,43970.27986,43970.27986,How I can make multi language program?,,1,3,,,,CC BY-SA 4.0, +408326,1,408348,,4/3/2020 10:20,,14,5572,"

I am still thinking about clean architecture and just ran into a question regarding the higher levels (Views and Presenters). +I am posting Uncle Bob's picture first here so that you remember what I am talking about:

+ +

+ +

Lets say I have a small application where CRUD operations can be performed for customers.

+ +

Lets also say there are three ways to output the customer data to the user:

+ +
    +
  • A form component where a customer's name and birthdate can be updated.
  • +
+ +

(The same form is also used to create a new customer, but in this case I do not need to load existing data)

+ +
    +
  • A detail component where the name and birthdate of the customer are shown on the screen.
  • +
  • A PDF export where a customer should be displayed in a PDF with the birthdate in US format. If the customer is exactly 60 years old, the birthdate should be decorated with a special color - these customers get a present. :)
  • +
+ +

Now let's say I want to implement this. +My approach looks like this:

+ +

+ +

You can see that I have only one class for output data (one file) and also just one abstract presenter class that gets the OutputData.
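
+ +

In code, the structure I am describing is roughly this (TypeScript sketch; two of the three concrete presenters shown for brevity, and all names are invented):

+ +
interface CustomerOutputData {
+    name: string;
+    birthdate: Date;
+}
+
+const ageInYears = (b: Date): number =>
+    Math.floor((Date.now() - b.getTime()) / (365.25 * 24 * 3600 * 1000));
+
+class DetailPresenter {
+    viewModel = { name: '', birthdate: '' };
+    present(d: CustomerOutputData): void {
+        this.viewModel = { name: d.name, birthdate: d.birthdate.toLocaleDateString() };
+    }
+}
+
+class PdfPresenter {
+    viewModel = { name: '', birthdate: '', highlight: false };
+    present(d: CustomerOutputData): void {
+        this.viewModel = {
+            name: d.name,
+            birthdate: d.birthdate.toLocaleDateString('en-US'), // US format
+            highlight: ageInYears(d.birthdate) === 60,          // gets a present
+        };
+    }
+}
+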

+ +

But then I have three concrete presenters with three concrete view models that are responsible for my views. +Now my doubts:

+ +
    +
  1. Is this correct? Did I understand the clean architecture?

  2. +
  3. My concrete presenters and my view models for the HTML form and the HTML detail are exactly the same. This feels redundant. Should I avoid this and re-use my view models for both scenarios? Or should I stick with the boundaries, as future requirements for forms might change, so that I would need separate models/concrete presenters?

  4. +
  5. If I eventually don't want the US format for the date anymore, but, let's say, a European one: do I need a new concrete presenter just for this? Or should the presenter ask the use case for the global format of the application and then decide on the basis of that?

  6. +
  7. If customers that are 60 should receive a birthday present, this logic has to be in a use case. But I am a little lost as to where this would go... Probably another use case that fetches all customers aged 60 from the repository every night and sends a present...? But this would not relate at all to the decoration in my view model? So the information about the number ""60"" would be in both my view model and my use case? Redundantly, in two places? Or how?

  8. +
+",344151,,344151,,43925.76736,43925.76736,Clean Architecture: Should each view have its own presenter and viewmodel?,,1,3,5,,,CC BY-SA 4.0, +408328,1,,,4/3/2020 11:41,,2,38,"

I have a few microservices that I would like to combine in the form of an API. The main purpose of the API is to be used by our (first-party) mobile app. As a side note, we don't have a mobile app or web app right now, and we want the backend completely decoupled from the frontend. But I can see that soon a few third parties might need access to the same API as well. I have designed the API with the basic endpoints for all the types of resources. My question is regarding the authorization flow of the API and client.

+ +

I am leaning towards using OAuth 2.0: the Password flow for our own (first-party) app, and the Authorization Code flow, or the Authorization Code flow with PKCE, depending on the kind of third-party client. Is this a secure way to solve it? How is it typically done?

+ +

So for a third-party developer, the flow would be something like this: they sign up using our mobile app, create an app (client ID and secret), and can then use it for authorization. This leads me to one more question:

+ +

1) Would it be safe to expose the signup and create-an-app endpoints to third parties? Or should only our app be capable of creating apps and signing up new users?

+",362103,,362103,,43924.49653,43924.49653,Api Authorization for first party apps and third party apps,,0,0,,,,CC BY-SA 4.0, +408332,1,,,4/3/2020 13:45,,0,22,"

We have 22 (.NET, web-service based) component teams, each consisting of at least 3 developers and 1 business analyst. To make sure every one of the 22 components integrates without trouble, we work in what is known as ""contract first"" style. Two teams get together and create an XSD for their to-be-created .NET web service. They create this XSD by hand or with tooling like XMLSpy.

+ +

What/why we do this:

+ +
    +
  • Developers can generate their C# code from the XSD.
  • +
  • We have tools that nicely display the structure of an XSD. Making it easier for teams to discuss the contents.

  • +
  • We (apparently) have a need for a web-service catalog, to make it easy to oversee everything we have across all web services of all components. We do this by uploading every single XSD to Confluence and generating pages for +them, versioning each page by hand to ensure previous versions +are still documented. + Currently the page generation is broken, which +is another motivation to ask this question.

  • +
+ +

My problem with the above:

+ +
    +
  • Many developers seem to edit the XSDs by hand, but still claim it +saves them time because they do not need to write code (because, as +you know, developers hate to write code?). How do other companies work contract first? Or maybe you're adopting a different strategy - let me know!
  • +
  • There must be a way to display all services of every component without developers having to create Confluence pages or having to generate them by uploading the XSD. This feels like waste.
  • +
+ +

How do you do this? (PS: I know there is no best answer (or maybe there is)... but I just want to know how other companies solve this. We have a habit of thinking we are unique.)

+",131894,,,,,43924.57292,Defining componentcontracts (XSD) manually in a large organisation to facillitate communication between componentteams feels bad,<.net>,0,0,,,,CC BY-SA 4.0, +408336,1,,,4/3/2020 14:57,,1,78,"

I am implementing a dashboard. I figured I'd make it work like a SPA for cleaner flow and better performance. When you click a sidebar link, the page/section gets loaded with AJAX. You can still browse the URL for each page, loading the whole thing.

+ +

I'm unhappy with the architecture I chose, and that's because of my lack of familiarity with modular js. This is how it works:

+ +
let Dashboard = (function () {
+   // ...
+
+   let load = html => {
+    $container.html(html);
+    CurrentPage.init();
+   }
+
+   let requestAndLoad = url => $.get(url, html => load(html));
+
+   let openPage = $link => requestAndLoad($link.attr('href')).done(() => updateURL());
+
+   return {
+    init: () => {
+        $sidebar.find('a').on('click', function (e) {
+            e.preventDefault();
+            openPage($(this));
+        });
+    },
+    setCurrentPage: page => {....}
+   }
+})();
+
+ +

When the HTML is loaded (clicking sidebar link or browsing to the URL), we set CurrentPage so the corresponding init function is called, e.g:

+ +
Dashboard.setCurrentPage('USERS');
+
+ +

I think it is a pretty awkward way to initialize a page. Suggestions?

+ +

Then comes the part where we have different modules to handle each page. You will see a lack of DRY. I'm going to present a couple of them, so you get the idea. This would be inside Dashboard:

+ +
let Home = (function () {
+    const name = 'HOME';
+    let pageURL = null;
+
+    return {
+        init: () => {...},
+        load: () => requestAndLoad(pageURL),
+        setURL: url => pageURL = url,
+        getURL: () => pageURL,
+        getName: () => name
+    }
+})();
+
+let Users = (function () {
+    const name = 'USERS';
+    let pageURL = null;
+
+    return {
+        init: () => {...},
+        load: () => requestAndLoad(pageURL),
+        setURL: url => pageURL = url,
+        getURL: () => pageURL,
+        getName: () => name
+    }
+})();
+
+ +

Now, coming from an OOP background, there would be an obvious base class and all pages would inherit this one. This is the js alternative I've thought about, but I'm not sure this is advised:

+ +
let Page = (name, pageURL) => {
+    let init;
+    return {
+        init: () => init,
+        load: () => requestAndLoad(pageURL),
+        setURL: url => pageURL = url,
+        getURL: () => pageURL,
+        getName: () => name
+    }
+};
+
+let Users = Page('USERS', usersURL);
+Users.init = () => {...};
+
+ +

Having a base object and then overriding some function from an outer scope doesn't seem an ideal solution.

+ +

This has zero extensibility if I need to add a new page or worse, the need that has arised now, a page within a page or rather an ""extended"" page; i.e. I would like to make the ""user profile"" (specific user) page ""browsable"", which would require having a new page object to handle it, initialize components, update URL... With the current architecture that'd be an awful lot of work and would bloat the monster even further.

+ +

To clarify, the point of all these init functions would be running your typical ""onload"" actions, like initializing widgets and binding event listeners. Typically ran after loading some HTML. The expression something.init() keeps repeating in my code and I don't like it. If there is a more standard and proper way of doing such a thing in js development please let me know.

+ +

Back to the topic: I took a similar approach with other modules used within some pages. For instance, Draft is used both in the Emails and Users page. Initially, it only existed in Emails, but when the need of loading a draft in the Users page came up I rolled out a ""child class"". However, its behavior is of course different.

+ +
    +
  • In the emails page the draft is displayed as a panel in the page, and some of its interactions are showing a preview when clicking the ""save"" button, or going back to the inbox when clicking the ""back"" button.

  • +
  • In the users page the draft is displayed as a modal. Some of its interactions are different, like showing a preview modal when saving it or simply closing the modal (there is no ""back"" button).

  • +
+ +

An OOP solution is evident here; I just ""can't"" do it with js. Some parts of the behavior are common, while others are slightly different (a subclass). Here's my approach:

+ +
let Emails = (function () {
+    const name = 'MESSAGES';
+    let pageURL = null;
+
+    let Draft = ($container, formSelector) => {
+        let $form;
+
+        // handlers/listeners
+        let handlers = {
+            close: () => {
+                // back to inbox
+            },
+            submit: () => {
+                // save and show previw
+            },
+            // ...
+        }
+
+        let init = () => {
+            $form = $container.find(formSelector);
+            // widgets...
+            // initialize ""pluggable"" handlers
+            Object.values(handlers).forEach(handler => handler());
+        }
+
+        return {
+            init: () => {...},
+            requestAndLoad: () => {...},
+            submit: () => {...},
+            // expose handlers
+            handlers: () => handlers,
+        }
+    }
+
+    return {
+        init: () => {...},
+        load: () => requestAndLoad(pageURL),
+        setURL: url => pageURL = url,
+        getURL: () => pageURL,
+        getName: () => name,
+        // expose Draft
+        Draf: Draft
+    }
+})();
+
+let Users = (function () {
+    let DraftModal = ($container, formSelector) => {
+        const base = Emails.Draft($container, formSelector);
+        // ""override"" base handlers
+        delete base.handlers().close;
+        // plug in a different submit handler
+        base.handlers().submit = () => {...};
+        return base;
+    }
+})();
+
+ +

I have an original Draft object with the original behavior, exposing its handlers so that they can be changed from an outer scope, therefore defining a different behavior with the new DraftModal object, which acts as a wrapper. I'm not familiar with patterns, but this looks like a decorator?

+ +

Anyway, again, it seems like a bad idea and an awkward solution.

+ +

Any tips would be welcome.

+",223143,,223143,,43924.63819,43965.29167,Single page dashboard architecture,,0,3,,,,CC BY-SA 4.0, +408337,1,,,4/3/2020 15:15,,3,174,"

Here is where I am at right now: I know that relational data, like that found in relational databases like MySQL or Postgres, is relational because there are relations between the tables. That is the reason why RDBMS have rigid schemas. On the other hand, NoSQL databases offer a looser schema but lose those relations. But MongoDB, for instance, offers official support (or at least blog posts) for associations between data (One-To-One, One-To-Many, etc).

+ +

Now I am really unsure if the data I am going to have in a project is relational and therefore if MongoDB would be a bad choice. People keep mentioning you should not use NoSQL databases unless you have a specific reason to use them over traditional RDBMS. But I really like MongoDB, especially for GraphQL. Working in JavaScript, I love working with Mongoose. +My data structure, on the other hand, seems kind of relational (users have machines, machines are inside rooms, that are also owned by users).
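
+ +

That ""kind of relational"" structure is expressible in Mongoose with references, which is part of what confuses me; a sketch (schema names made up):

+ +
import { Schema, model } from 'mongoose';
+
+// machines reference their owner and room instead of embedding them
+const machineSchema = new Schema({
+    name: String,
+    owner: { type: Schema.Types.ObjectId, ref: 'User' },
+    room: { type: Schema.Types.ObjectId, ref: 'Room' },
+});
+
+export const Machine = model('Machine', machineSchema);
+
+// .populate() resolves the references at query time, join-style:
+// Machine.find().populate('owner room')
+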

+ +

I am really confused. Everyone keeps going on and on about the MERN/MEVN/MEAN stack and how awesome MongoDB is with Node which I agree and I love it, but sometimes people tend to jump onto the hype train too much. Can you give me your thoughts on this one?

+",330010,,,,,43925.51389,What *really* is the difference between relational and non-relational data?,,1,10,,,,CC BY-SA 4.0, +408341,1,,,4/3/2020 16:38,,0,31,"

I am learning Django, and to make the most out of the educational process I am thinking of this hypothetical system and trying to figure out how I would design and implement it. The hypothetical system is a profile management system for social networks, and I came across a couple of design/implementation questions:

+ +
    +
  1. Assuming the only required methods are post_status and change_profile_picture. Would it be correct to implement a Profile abstract class (maybe using ABC?) and then have both TwitterProfile and FacebookProfile inherit from it and implement these methods in the required manner?

  2. +
  3. Assuming the system should handle multiple users for each social network, what would be the most elegant way to store each Profile's configuration, so it could be instantiated when the system goes up? I thought about storing it in the database or in a file on the machine, and then loading from it when the system goes up.

  4. +
+ +

Thanks!

+",362127,,,,,43924.69306,Inheritance and instance-storage design suggestions in Django system,,0,0,,,,CC BY-SA 4.0, +408344,1,,,4/3/2020 18:29,,2,51,"

Working in insurance usually means you need to deal with a myriad of rules and logic for rating, policy, and claims processing. These are steps in the insurance lifecycle.

+ +

Every insurance product (life insurance, travel insurance, etc.) generally follows the same flow above, but with big differences in the logic for each step.

+ +

Would it make sense to design an insurance system using a microkernel architecture?

+ +

This would mean that all parts of the insurance lifecycle (rating, policy, claims, etc.) are modeled in the core system, with pluggable components for every insurance product. This provides a very quick way to launch new insurance products by implementing a new plugin and adding it to the registry.
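
+ +

A sketch of what I mean by a core plus plug-ins (TypeScript; the interface and names are hypothetical):

+ +
// each insurance product ships as a plug-in implementing the lifecycle
+interface InsuranceProduct {
+    name: string;
+    rate(application: unknown): number;
+    processClaim(claim: unknown): void;
+}
+
+const registry = new Map<string, InsuranceProduct>();
+
+export function register(product: InsuranceProduct): void {
+    registry.set(product.name, product);
+}
+
+// the core never knows concrete products, only the interface
+export function quote(productName: string, application: unknown): number {
+    const product = registry.get(productName);
+    if (!product) throw new Error(`unknown product: ${productName}`);
+    return product.rate(application);
+}
+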

+ +

Considerations and trade-offs: I frequently see microkernels being used for monolithic applications. Does this mean that the scalability and deployability of individual components are compromised?

+",312836,,,,,43925.69028,Microkernel architecture for insurance,,1,3,,,,CC BY-SA 4.0, +408345,1,,,4/3/2020 18:45,,3,159,"

Let's say I'm trying to write a library that abstracts certain actions. In this example I want to turn a light on or off. There could be hundreds of different kinds of lights that are controlled in lots of different ways, so I have an interface like so:

+ +
public interface ILight
+{
+    bool TurnOnLight();
+    bool TurnOffLight();
+}
+
+ +

Or like:

+ +
public interface ILight
+{
+    Task<bool> TurnOnLight();
+    Task<bool> TurnOffLight();
+}
+
+ +

The design of the interface is that calling the method should turn on/off the light and then return a boolean saying if it was successful or not. The amount of time this takes is indeterminate.

+ +

The desire/goal is to have a common interface that developers who are unfamiliar with the hardware can use. They'd just call TurnOnLight and get a return value saying whether it worked or not, regardless of the implementation.

+ +

The issue is that, in the implementation of the interface, some of the implementations involve async operations, and others do not. These ""restrictions"" exist in the various 3rd party libraries used to communicate with the lights. Whether or not these libraries are performing ""real"" async operations is unknown, just that they return an awaitable task with results.

+ +
Library1.TurnOnLight();
+
+ +

vs

+ +
await Library2.TurnOnLight();
+
+ +

The developer using this stuff shouldn't have to worry about what type of light is being used, just that they want to turn it on or off. That makes me think I'd end up with an interface and implementation like:

+ +
public interface ILight
+{
+    Task<bool> TurnOnLight();
+    Task<bool> TurnOffLight();
+}
+
+public class Light1 : ILight
+{
+    public Task<bool> TurnOnLight()
+    {
+        bool result = Library1.TurnOnLight();
+        return Task.FromResult(result);
+    }
+}
+
+public class Light2 : ILight
+{
+    public async Task<bool> TurnOnLight()
+    {
+        return await Library2.TurnOnLight();
+    }
+}
+
+ +

I've seen some people reference using a similar design pattern, and others who say it's bad practice to have something return a task that is synchronous. I lean towards the implementation I've got here; are there gotchas or issues with this design that I should be aware of?

+",113590,,113590,,43924.94792,44085.88611,Issues with an interface treating a synchronous action as async,,3,0,,,,CC BY-SA 4.0, +408350,1,,,4/3/2020 20:57,,0,86,"

I am working on a project in which I have a tree with 4 layers, and the hierarchy is like this: +Customer -> Site -> Location -> Guardroom

+ +

In DB each entity has its own table and the child knows its parent's Id.

+ +
public class Guardroom{
+   public int Id {get; set;}
+   public string Name {get; set;}
+   public int LocationId {get; set;}
+}
+
+ +

A rule is that a guardroom should be the only kind of leaf. What I mean is that a Customer must have at least one child (site), a site must have at least one child (location), and a location must have at least one child (guardroom).

+ +

My challenge is that if I delete a guardroom, I must delete the location only if the guardroom has no siblings, the grandparent only if the location has no siblings, and so on. In other words, if I delete the guardroom, I must delete the nodes above in cascade, but only as long as each node to delete has a single child.

+ +

I do not want to have a highly coupled structure, so I do not want the guardroom to know about the location. I was thinking of using events, but I am not sure what pattern to use or how to implement it.

+ +

My infrastructure looks like this.

+ +

I implemented dependency injection and Domain Driven Design. I have a service for each layer in the tree: +i.e.

+ +
public class GuardroomService: IGuardroomService {
+   private readonly IGuardroomDataAccess _guardroomDataAccess;
+
+   // The data access dependency is injected here.
+   public GuardroomService(IGuardroomDataAccess guardroomDataAccess){
+      _guardroomDataAccess = guardroomDataAccess;
+   }
+
+   public void Delete(int guardroomId){      
+      //Here I want to raise an event to notify parent (location) and take further action.          
+   }
+
+   //...
+   //The remaining CRUD operations and functions.
+
+}
+
+ +

What pattern/structure should I implement to let the other services know that I am deleting a child, without allowing the guardroom service to know about the other services?

+ +

Any help will be much appreciated. Thank you in advance.

+",353845,,,,,44108.50139,How to implement a chain of events on a tree's CRUD operations?,,2,0,,,,CC BY-SA 4.0, +408351,1,408358,,4/3/2020 20:58,,4,257,"

In software engineering, I usually see that the word module, when written, is usually followed by a bracket (components, packages, classes, etc.), meaning that a module can be a class or a package or a component. I know that a module is a piece of code that does something. I am confused about when to use the word module, or class, or package, or even function, for my system.

+",331812,,,,,43925.67083,What is module in Software engineering context?,,4,0,,,,CC BY-SA 4.0, +408356,1,,,4/4/2020 0:15,,1,31,"

Let's say I have a very large student list and want to check whether each of these students satisfies some specific school and classroom limitations. So, in my data model, I have School and Classroom entities. Limitations (let's call them filters) are like:

+ +
    +
  • Age is greater than 9
  • +
  • The grade is equal to 2
  • +
  • Wants to use the school bus
  • +
+ +

Classrooms and the school have the same filters but classroom filter values have precedence over the school's filters. If there is no specific filter defined for the classroom, the schools one will be used. +For example, the school's filter definition is like this:

+ +
    +
  • Age is greater than 7
  • +
  • Must want to use the school bus
  • +
+ +

But the classroom's definition is:

+ +
    +
  • Age is greater than 10
  • +
+ +

In this example, if a student is older than 10 and wants to use the school bus, I can assign him/her to the classroom.

+ +

This is my database design:

+ +
school
+----------
+id
+name
+...
+
+classroom
+----------
+id
+name
+school_id
+...
+
+filter
+----------
+id: primary_key
+name: string
+
+filter_data
+----------
+id: primary_key
+filter_id - FK to filter.id
+owner_id - Polymorphic association to `School` and `Classroom` (the main point of my question)
+filter_type - ENUM('CLASSROOM', 'SCHOOL')
+operator - should be a from a defined list of comparison operators such as: >, <, =, !=, etc...
+filter_value - The value to filter against.
+
+ +

Some example data:

+ +
school
+------
+ID   |  name
+-----+------
+1    |  School #1
+2    |  School #2
+
+classroom
+------
+ID   |  name          | school_id
+-----+----------------|----------
+1    |  Classroom #1  | 1
+2    |  Classroom #2  | 1
+
+filter
+------
+ID   |  name
+-----+------
+1    |  Age
+2    |  Grade
+3    |  School bus
+
+filter_data
+------------
+ID   | filter_id  | owner_id       | filter_type | operator | filter_value
+-----+------------+----------------+-------------+----------+-------------
+1    | 1          | 1              | SCHOOL      | >        | 7
+2    | 1          | 1              | CLASSROOM   | >        | 10
+3    | 3          | 1              | SCHOOL      | =        | 1
+
+ +

With this structure, I can easily query and understand that, to assign a student to Classroom #1, the student's age must be greater than 10 and he/she must want to use the school bus. Something like:

+ +
SELECT * FROM filter_data
+WHERE (filter_type='SCHOOL' AND owner_id=1) OR (filter_type='CLASSROOM' AND owner_id=1)
+GROUP BY filter_id HAVING filter_type='CLASSROOM'
+
+ +

(update: the query above didn't work as I expect :-) )

+ +

And now the main question: as you can see, there is a polymorphic association on filter_data.owner_id; it can refer to School or Classroom, so I'm losing important benefits of foreign keys, like data integrity. I am looking for ideas for a better, extendable design which guarantees data integrity.

+ +

I am planning to use PostgreSQL, if that helps.

+",13725,,13725,,43925.04653,43925.07778,Data integrity on polymorphic association to two different table,,1,0,,,,CC BY-SA 4.0, +408361,1,,,4/4/2020 4:23,,9,520,"

Is porting code from one programming language to another considered as plagiarism?

+",362161,,13500,,43930.56736,43930.56736,Is porting code from different language plagiarism?,,3,2,,,,CC BY-SA 4.0, +408364,1,408370,,4/4/2020 6:57,,0,59,"

I have a situation like this: in my online game, players are represented by class Player, which is instantiated and assigned their socket upon their connection to the server. +When simplified, some of my code looks like this:

+ +
class Monster
+{
+  Player targetedPlayer;
+
+   void FollowPlayer(Player p)
+   {
+      targetedPlayer=p;
+   }
+}
+
+ +

But I realized that I should probably not hold references to players, because when a player disconnects (I did implement IDisposable), the reference in targetedPlayer will still point to an object that cannot be released because of it. Obviously I need that, when a player disconnects, all references to his Player instance become null, because he is no longer in the game. What would be the best solution?
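
+ +

One direction I am considering (a rough sketch only; the Disconnected event and handler names are invented for illustration) is to have Player announce its disconnection so that every holder drops its reference:

+ +

using System;
+
+class Player
+{
+    public event Action<Player> Disconnected;
+
+    // Called by the networking code when the socket closes.
+    public void OnDisconnect() => Disconnected?.Invoke(this);
+}
+
+class Monster
+{
+    Player targetedPlayer;
+
+    void FollowPlayer(Player p)
+    {
+        if (targetedPlayer != null)
+            targetedPlayer.Disconnected -= HandleTargetDisconnected;
+
+        targetedPlayer = p;
+        p.Disconnected += HandleTargetDisconnected;
+    }
+
+    void HandleTargetDisconnected(Player p)
+    {
+        // Drop the reference so the Player object can be collected.
+        if (ReferenceEquals(targetedPlayer, p))
+            targetedPlayer = null;
+    }
+}
+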

+",60327,,,,,43925.35417,How to handle a situation when I have references to an object that should be removed,,1,5,,,,CC BY-SA 4.0, +408365,1,408367,,4/4/2020 6:58,,1,54,"

I have designed a simple objects validation framework in Java in the context of a code refactoring.

+ +

The framework has a ValidationRule interface with one method Errors validate(MyObject myObject, Context ctx). +There are a number of classes (Validators) implementing different validation rules, each validation checking a different part of the object properties and children collections.

+ +

The rules are executed in a loop and validation errors accumulated and returned to the user.

+ +

Now, the code that I'm refactoring is often modifying the object under validation in the ""middle"" of some validation logic:

+ +
if (myObject.getName() == null) {
+    myObject.setName(DEFAULT_NAME);
+}
+
+ +

Certain rules rely on the assumption that the object has been modified (so, for instance, they expect the name to be always not null).

+ +

From a design perspective, I would like to keep the validation framework ""read-only"". This makes the rules simple to understand and reason about, but I need to be able to mutate certain object properties from one rule to the next.

+ +

What would be a good pattern to achieve such a result?

+ +

1) validation rules do not mutate the object under validation (this can be enforced by passing an ImmutableMyObject to the validation interface)

+ +

2) each rule should be able to mutate the objects under validation in order for the next rules to function

+ +

Ideas welcome!

+",148927,,,,,43925.32014,Validation framework and immutability,,1,0,,,,CC BY-SA 4.0, +408374,1,408418,,4/4/2020 12:01,,1,46,"

I have a huge DB, and CalculateTasksFromDB() takes a long time (and lots of memory). Once that method is done, there is a huge list of tasks. There are worker processes in the system (at any point in time the system might have zero to 100 of those), and each needs to get the next tasks it should work on. The tasks different processes get are mutually exclusive, e.g. the task with id 234234 should be processed by one process and one process only.

+ +

I have chosen to have a 'service process' (not sure if that is the correct terminology) running in the system, and implemented it as an HTTP server. So, literally, to get its next list of tasks, each worker process goes to http://127.0.0.1:2131/tasks and gets a sublist of that huge list of tasks. To ensure that the tasks are mutually exclusive, I use Flask with threaded=False in the constructor.

+ +

I looked for info about my design (whether it makes sense, what the alternatives are, etc.) but couldn't find any. So:
+1. Does my approach make sense?
+2. What are the alternatives?
+3. What improvements would you suggest?

+",359500,,,,,43926.80417,System design : Implementing a common service process with a http server,,1,2,,,,CC BY-SA 4.0, +408385,1,,,4/4/2020 16:55,,2,469,"

I'm trying to wrap my head around the best possible solution in the following situation:

+ +

When updating part of an aggregate, could be any part of the aggregate so either the root or any other entity, how could these changes be persisted back to the db layer.

+ +

There have been a lot of solutions on StackExchange advising something like using your ORM models as Domain objects so you could change the attribute on the Aggregate and let the ORM layer diff and flush the changes to the db, most examples contain references to the Enity Framework if I'm not mistaken. Like the solution here, Article which is a Domain object contains ORM persistence logic

+ +

My partial understanding of DDD was that you shouldn't define any persistence-layer logic in your domain models. The persistence logic should be defined in your repository, opening the possibility of having a variety of persistence mechanisms (Postgres, MongoDB, S3, etc.). Also, 'littering' domain models with persistence logic and/or 'original' SQL objects makes it a lot harder to test my domain objects / aggregates.

+ +

I'm having trouble finding an easy and simple solution (maybe there isn't any) for mapping these changes back to my ORM layer; in my case I'm using Postgres. Unless the answer is a strong mapping between your ORM model and domain model in your repository, it looks really hard and verbose to do.

+ +

I figured out, by reading other solutions, that there are a couple of other possibilities (they all have their own drawbacks):

+ +
    +
  1. Generate a ModelAttributeChanged event whenever you change something in your Domain model. You could either return this event, or store it somewhere on the Domain model. When persisting this model in the repository, query the ORM model first and map these changes back onto the ORM model before committing it (a C# sketch of this idea follows after this list).

    + +
    changed_name_event = person_aggregate.set_name('Henk')
    +Repository.save(person_aggregate, changed_name_event)
    +
  2. +
  3. After you change something on the Aggregate, explicitly call the update method on the Repository as well to update the attribute. You'd need to update everything both on the Aggregate and in the repository, and you need to know in advance which attributes of your aggregate are going to change before calling the right method on your repository.

    + +
    person_aggregate.change_name('Henk')
    +repository.change_person_name(person_aggregate, 'Henk')
    +
  4. +
+ +
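
To make option 1 more concrete, here is a rough C# sketch of the event-recording idea (PersonAggregate, NameChanged and the in-memory _fakeDb are all hypothetical stand-ins, not real code):

+ +

using System.Collections.Generic;
+using System.Linq;
+
+// The aggregate records events as it mutates; the repository replays them.
+public record NameChanged(string NewName);
+
+public class PersonAggregate
+{
+    private readonly List<object> _pendingEvents = new();
+
+    public PersonAggregate(long id, string name) { Id = id; Name = name; }
+
+    public long Id { get; }
+    public string Name { get; private set; }
+
+    public void ChangeName(string newName)
+    {
+        Name = newName;
+        _pendingEvents.Add(new NameChanged(newName));
+    }
+
+    // The repository drains the recorded events when saving.
+    public IReadOnlyList<object> DequeueEvents()
+    {
+        var events = _pendingEvents.ToList();
+        _pendingEvents.Clear();
+        return events;
+    }
+}
+
+// Stand-in for the ORM entity (in reality an EF/SQLAlchemy-style model).
+public class PersonOrmModel
+{
+    public long Id { get; set; }
+    public string Name { get; set; }
+}
+
+public class PersonRepository
+{
+    private readonly Dictionary<long, PersonOrmModel> _fakeDb = new();
+
+    public void Save(PersonAggregate aggregate)
+    {
+        if (!_fakeDb.TryGetValue(aggregate.Id, out var orm))
+            _fakeDb[aggregate.Id] = orm = new PersonOrmModel { Id = aggregate.Id };
+
+        // Replay each recorded domain event onto the tracked ORM model;
+        // a real ORM would then diff and flush only what changed.
+        foreach (var e in aggregate.DequeueEvents())
+        {
+            if (e is NameChanged nameChanged)
+                orm.Name = nameChanged.NewName;
+        }
+    }
+}
+

+ +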

Ideally I just want to be able to update my aggregate and save it through my repository. However, because I map my ORM model to an AR (aggregate root), I 'lose' the mapping to the ORM model. I could of course keep track of all the changes I make to the aggregate and, whenever I call the repository, apply these changes to the ORM model and commit it to the database. My 'problem' with this solution is that I need a strong mapping, and tracking changes for nested entities is a hard, complex and error-prone process.

+ +

If this is the necessary evil required to completely isolate your domain logic, I'm okay with it, but it just feels like a lot of logic has to be defined in order to get this abstraction working.

+",187619,,187619,,43925.78125,43957.80903,Domain Driven Design - Updating and persisting aggregates,,2,0,1,,,CC BY-SA 4.0, +408389,1,408399,,4/4/2020 20:58,,3,138,"

I'm building a library application. Let's assume that we have a requirement to let registered people in the library to borrow a book for some default period of time (4 weeks).

+ +

I started to model my domain with an AggregateRoot called Loan with code below:

+ +
public class Loan : AggregateRoot<long>
+{
+    public static int DefaultLoanPeriodInDays = 30;
+
+    private readonly long _bookId;
+    private readonly long _userId;
+    private readonly DateTime _endDate;
+    private bool _active;
+    private Book _book;
+    private RegisteredLibraryUser _user;
+
+    public Book Book => _book;
+    public RegisteredLibraryUser User => _user;
+    public DateTime EndDate => _endDate;
+    public bool Active => _active;
+
+    private Loan(long bookId, long userId, DateTime endDate)
+    {
+        _bookId = bookId;
+        _userId = userId;
+        _endDate = endDate;
+        _active = true;
+    }
+
+    public static Loan Create(long bookId, long userId)
+    {
+        var endDate = DateTime.UtcNow.AddDays(DefaultLoanPeriodInDays);
+        var loan = new Loan(bookId, userId, endDate);
+
+        loan.Book.Borrow();
+
+        loan.AddDomainEvent(new LoanCreatedEvent(bookId, userId, endDate));
+
+        return loan;
+    }
+
+    public void EndLoan()
+    {
+        if (!Active)
+            throw new LoanNotActiveException(Id);
+
+        _active = false;
+        _book.Return();
+
+        AddDomainEvent(new LoanFinishedEvent(Id));
+    }
+}
+
+ +

And my Book entity looks like this:

+ +
public class Book : Entity<long>
+{
+    private BookInformation _bookInformation;
+    private bool _inStock;
+
+    public BookInformation BookInformation => _bookInformation;
+    public bool InStock => _inStock;
+
+    private Book(BookInformation bookInformation)
+    {
+        _bookInformation = bookInformation;
+        _inStock = true;
+    }
+
+    public static Book Create(string title, string author, string subject, string isbn)
+    {
+        var bookInformation = new BookInformation(title, author, subject, isbn);
+        var book = new Book(bookInformation);
+
+        book.AddDomainEvent(new BookCreatedEvent(bookInformation));
+
+        return book;
+    }
+
+    public void Borrow()
+    {
+        if (!InStock)
+            throw new BookAlreadyBorrowedException();
+
+        _inStock = false;
+
+        AddDomainEvent(new BookBorrowedEvent(Id));
+    }
+
+    public void Return()
+    {
+        if (InStock)
+            throw new BookNotBorrowedException(Id);
+
+        _inStock = true;
+
+        AddDomainEvent(new BookReturnedBackEvent(Id, DateTime.UtcNow));
+    }
+}
+
+ +

As you can see, I'm using a static factory method for creating my Loan aggregate root, where I'm passing the identity of the borrowed book and the identity of the user who is going to borrow it. Should I pass the references to these objects (book and user) here instead of ids? Which approach is better? As you can see, my Book entity also has a property which indicates the availability of a book (the InStock property). Should I update this property in the next use case, for example in the handler of LoanCreatedEvent? Or should it be updated here within my aggregate root? If it should be updated inside my aggregate, I would have to pass the entire book reference instead of just an id, to be able to call its method _book.Borrow(). I'm stuck at this point because I would like to do it correctly with the DDD approach. Or am I starting from the wrong side and missing something, or thinking about it in the wrong way?
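
+ +

For comparison, this is roughly how the reference-passing variant of the factory method would look (assuming the private constructor is changed to accept Book and RegisteredLibraryUser instead of ids, and that both expose an Id property):

+ +

// Sketch of the alternative: pass references so the aggregate can
+// enforce the availability rule itself via book.Borrow().
+public static Loan Create(Book book, RegisteredLibraryUser user)
+{
+    var endDate = DateTime.UtcNow.AddDays(DefaultLoanPeriodInDays);
+    var loan = new Loan(book, user, endDate);
+
+    book.Borrow();
+
+    loan.AddDomainEvent(new LoanCreatedEvent(book.Id, user.Id, endDate));
+
+    return loan;
+}
+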

+",360301,,,,,43936.64792,"DDD, Aggregate Root and entities in library application scenario",,2,0,2,,,CC BY-SA 4.0, +408395,1,408403,,4/4/2020 23:17,,0,91,"

TLDR with bold

+ +

I want to create a library (I think this is the right term) for my own reinforcement learning environments (envs for short). Most of the envs would be based on self-implemented games created either in pure Python or in C++ with Python bindings. What would be the best way to structure this project in a manner that is easy to expand and maintain and that makes the most sense? I want to be able to reuse code, such as using a general board class for all my board game implementations (e.g. chess, go, gomoku). I plan on making it cross-platform with the help of CMake and might even venture to package it as a Conda package.

+ +

Through my initial search I found that this layout (and its variants) is popular and decided to rely on that.

+ +

My initial plan was to structure my library into projects, create a repo for each project, and include a project in another one as a git submodule. For creating the environment for the game 2048 the structure would have been the following (CMakeLists omitted):

+ +

(the env-vector relies on the env-2048, which relies on the game-2048 which uses the general-board class)

+ +
general-board
+├── external
+│   └── Catch2/
+├── include
+│   └── general-board
+│       └── file.h
+├── src
+│   └── file.cpp
+└── tests
+    └── tests.cpp
+
+game-2048
+├── app
+│   └── manual_game.cpp
+├── external
+│   ├── Catch2
+│   └── general-board
+├── include
+│   └── game-2048
+│       └── file.h
+├── src
+│   └── file.cpp
+└── tests
+    └── tests.cpp
+
+env-2048
+├── external
+│   ├── Catch2/
+│   └── game-2048/
+├── include
+│   └── env-2048
+│       └── file.h
+├── src
+│   └── file.cpp
+└── tests
+    └── tests.cpp 
+
+env-vector <---- this would be on the top, bundling the envs together
+├── external
+│   ├── Catch2/
+│   ├── env-2048/ 
+│   ├── env-chess/ <---- another board game
+│   └── env-go/ <---- another board game
+├── include
+│   └── env-vector
+│       └── file.h
+├── python
+│   └── pybind11_magic_here
+├── src
+│   └── file.cpp
+└── tests
+    └── tests.cpp
+
+ +

After some implementation I became concerned that the number of submodules and the redundancy were getting out of hand. With this structure, the project at the top would contain the general-board project n times (where n is the number of games relying on a board), and Catch2 would be included even more often. This seems fishy and error-prone.

+ +

My second idea was to create one big project and to include everything into it in a 'flat' way and not in a 'nested' way as before. It would look like this:

+ +
(line ending with '/' depicts a folder)
+environments_all_in_one
+│
+├── external
+│   └── Catch2/
+├── include
+│   └── environments_all_in_one
+│       └── **not_even_sure_what_to_put_here**      
+├── python
+│   └── pybind11_magic_here
+├── src
+│   ├── env_vector
+│   ├── envs
+│   │   ├── env-2048/
+│   │   ├── env-chess/
+│   │   └── env-go/
+│   ├── games
+│   │   ├── game-2048/
+│   │   ├── game-chess/
+│   │   └── game-go/
+│   └── general-board
+│       ├── board_abc/
+│       ├── board_array/
+│       └── board_vector/
+└── tests
+    └── tests.cpp
+
+ +

This way code would not be present multiple times and it definitely helps transparency. However, as I am not experienced I must ask:

+ +

Is there a better way to do it?

+",362211,,362211,,43926.37083,43926.37083,To structure big and expandable project(s),,1,1,1,,,CC BY-SA 4.0, +408401,1,408405,,4/5/2020 7:35,,2,78,"

I have a function which takes the incoming request, parses the data and performs an action and posts the results to a webhook. This is running as background as a Celery Task. This function is a common interface for about a dozen Processors, so can be said to follow the Factory Pattern. Here is the psuedo code:

+ +
processors = {
+    ""action_1"": ProcessorClass1, 
+    ""action_2"": ProcessorClass2,
+    ...
+}
+
+def run_task(action, input_file, *args, **kwargs):
+    # Get the input file from a URL
+    log = create_logitem()
+    try:
+        file = get_input_file(input_file)
+    except:
+        log.status = ""Failure""
+
+    # process the input file
+    try:
+        processor = processors[action](file)
+        results = processor.execute()
+    except:
+        log.status = ""Failure""
+
+    # upload the results to another location
+    try:
+        upload_result_file(results.file)
+    except:
+        log.status = ""Failure""
+
+    # Post the log about the entire process to a webhoook
+    post_results_to_webhook(log)
+
+ +

This has been working well for the most part, as the inputs were restricted to action and a single argument (input_file). As the software has grown, the processors have increased in number and the input arguments have started to vary. All the new arguments are passed as keyword arguments and the logic has become more like this:

+ +
try:
+    input_file = get_input_file(input_file)
+    if action == ""action_2"":
+       input_file_2 = get_input_file(kwargs.get(""input_file_2""))
+except:
+    log.status = ""failure""
+
+
+try:
+    processor = processors[action](file)
+    if action == ""action_1"":
+        extra_argument = kwargs.get(""extra_argument"")
+        results = processor.execute(extra_argument)
+    elif action == ""action_2"":
+        extra_1 = kwargs.get(""extra_1"")
+        extra_2 = kwargs.get(""extra_2"")
+        results = processor.execute(input_file_2, extra_1, extra_2)
+    else:
+        results = processor.execute()
+except:
+    log.status = ""Failure""
+
+ +

Adding the if conditions for a couple of things didn't make a difference, but now almost 6 of the 11 processors have extra inputs specific to them, the code is starting to look complex, and I am not sure how to simplify it, or whether I should attempt to simplify it at all.

+ +

Something I have considered:

+ +
    +
  1. Create a separate task for the processors with extra inputs - But this would mean, I will be repeating the file fetching, logging, result upload and webhook code in each task.
  2. +
  3. Moving the file download and argument parsing into the BaseProcessor - This is not possible as the processor is used in other contexts without the file download and webhooks as well.
  4. +
+",362242,,9113,,43926.34028,43926.36111,How to simplify a complex factory pattern?,,1,0,,,,CC BY-SA 4.0, +408412,1,408431,,4/5/2020 16:58,,2,75,"

I aim to understand pull model MVC. I'm stuck at defining a model for a simple color-guessing game in Java Swing I chose to practice it.

+ +

I borrowed the model's initial version from an example which was very simple:

+ +
class Model extends Observable {
+
+    private static final Random rnd = new Random();
+    private static final Piece[] pieces = Piece.values();
+    private Piece hidden = init();
+
+    private Piece init() {
+        return pieces[rnd.nextInt(pieces.length)];
+    }
+
+    public void reset() {
+        hidden = init();
+        setChanged();
+        notifyObservers();
+    }
+
+    public void check(Piece guess) {
+        setChanged();
+        notifyObservers(guess.equals(hidden));
+    }
+}
+
+enum Piece {
+
+    Red(Color.red), Green(Color.green), Blue(Color.blue);
+    public Color color;
+
+    private Piece(Color color) {
+        this.color = color;
+    }
+}
+
+ +

The initial model seems great and does the minimum necessary to play the game, with the caveat that it is the so-called push model: the model is observable, the view registers itself as an observer of the model, and it is notified when the model changes. It is also not named properly (although I'm not sure how I should name it; I attempted to call it GameLogic).

+ +

I want to implement a pull model version of MVC, in which the View asks the model for its state in order to update itself. So I assume the model will have at least one getter for the View to retrieve that information. Actually I believe it will have more, for retrieving all the possible pieces in order to paint the choice buttons. The gameplay will become more complicated as well, I guess, with the view setting a given choice (throughout the controller) and asking the model if the game was won. A boolean check(Piece guess) method seemed more suitable but separating it into two methods void setGuess(Piece guess) and boolean won() worked better for the MVC implementation.

+ +

What would be the proper modeling for this case?

+ +

I'm also stuck on one more part, which is how the View will send its position (or rather its piece or color) to the model, because from my understanding the view does not hold state. I'd appreciate any help on that too.

+ +

This is what I came up with. I stopped using an enum (turned it into a Piece class for extensibility) and got stuck at the following model:

+ +
public class GameLogic {
+
+    private static final Random RANDOM_NUMBER_GENERATOR = new Random();
+    private final Piece [] pieces;
+    private Piece hidden = createRandomPiece();
+    private Piece guess = null;
+
+    public GameLogic(Piece [] pieces) {
+        this.pieces = Objects.requireNonNull(pieces);
+    }
+
+    private Piece createRandomPiece() {
+        return pieces[RANDOM_NUMBER_GENERATOR.nextInt(pieces.length)];
+    }
+
+    public void reset() {
+        hidden = createRandomPiece();
+    }
+
+    public Piece getGuess() {
+        return guess;
+    }
+
+    public Piece [] getPieces() {
+        return Arrays.copyOf(pieces, pieces.length);
+    }
+
+    public void setGuess(Piece guess) {
+        this.guess = guess;
+    }
+
+    public boolean won() {
+        return guess.equals(hidden);
+    }
+
+    public Piece getPieceAtPosition(int position) {
+        if (position < 0 || position >= pieces.length) {
+            return null;
+        }
+
+        return pieces[position];
+    }
+
+}
+
+ +

Was the original model okay (except for being push-model MVC)? The model now seems to depend a lot on the View's requirements, and this seems wrong to me. But for either the push model or the pull model, the model will have to satisfy the View somehow.

+",93338,,93338,,43927.06806,43927.27014,How to properly model an MVC model in this case?,,1,0,,,,CC BY-SA 4.0, +408419,1,408447,,4/5/2020 19:29,,2,137,"

Essentially I've got a bunch of formulas in two giant methods in a class designed to do math transformations and evaluations on multiple inputs, where the inputs are actually lists of inputs (as there are some sums involved too). Later on I want to optimize this code by utilizing GPU/CPU-accelerated matrix multiplications and additions, but for now I'm using basic for-loops.

+ +

Let's say, hypothetically, that I'd like to grow to several dozen cases; right now I have fewer than 10.

+ +

Something like:

+ +
enum EnumType {
+    SUPER_FUNCTION,
+    MEGA_FUNCTION,
+    ..
+}
+
+float doMathStuff(EnumType functionType, List<float> a, List<float> b...) {
+    switch(functionType) {
+        case SUPER_FUNCTION:
+                    if(situationA) {
+                        switch(something else) {
+
+                        }
+                    } else {
+                        switch(something else) {
+
+                        }
+                    }
+                return stuff;
+        case MEGA_FUNCTION:
+                for(..) {
+                    if(situationA) {
+                        switch(something else) {
+
+                        }
+                    } else {
+                        switch(something else) {
+
+                        }
+                    }
+                }
+                return stuff;
+        ...
+    }
+}
+
+ +

My problem is that to support the functions I'm ending up with SEVERAL hundred lines of code in each of my switch statements which is making it rather cumbersome to go through. I shudder to think about maintaining this once I add more cases.

+ +

Any ideas as to how to keep this nightmare-in-the-making in check?

+ +

BTW: This is my own personal project and I have total freedom to do any changes.

+",285018,,,,,43927.73056,Large method with nested switch case(s) refactoring (Java),,2,3,1,,,CC BY-SA 4.0, +408420,1,,,4/5/2020 19:30,,1,82,"

The base class (in the base lib, not owned by me) has upgraded its code and added a new method to support additional use cases.

+ +

This is the existing method signature in the base class:

+ +
public void Alert(string someAlertString);
+
+ +

With the new release, the base class is supporting a list of AlertObject (at some point the base class might deprecate the string alert)

+ +
public void Alert(List<Alert> alertObj);
+
+ +

The base class in lib looks something like this:

+ +
public BaseClass {
+   public void Alert(string message) {
+     //Print msg on the UI.
+   }
+
+   public void Alert(List<Alert> alerts) {   <-- New Addition.
+     // Loop through each alert and show the list of messages.
+   }
+
+   // Other methods.
+}
+
+ +

On my side of the code, I have the alert in multiple places in multiple subclasses (>500 alerts), like this:

+ +
public SubClass: BaseClass {
+
+    public void Execute(){
+       // Execute some logic
+       Alert(""This is a warning message.""); <-- Call base calls alert
+    }
+ }
+
+ +

I want to update all these alert statements to use an Alert object (and I want to add a category only to the new alerts; old alerts can continue using the default category):

+ +
public class Alert {
+   public string message {get;set;}
+   public string category {get;set;}
+}
+
+ +

One way to do this is to define a helper class which takes the existing string and returns a List of Alert objects:

+ +
public static class AlertHelper {
+   public static List<Alert> getNewAlert(string msg, string category=""Not Defined"") {
+      Alert a = new Alert();
+      a.message = msg;
+      a.category = category;
+      return new List<Alert>() { a };
+   }
+}
+
+ +

Then I can replace all the instances of my Alert calls with:

+ +
base.Alert(AlertHelper.getNewAlert(""This is a warning message.""))
+
+ +

The one problem I see here is that, as the Alert class (in a separate lib) keeps adding properties to support more detailed alerts, I need to keep updating my helper class, and potentially all the places where I call the helper class.
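
+ +

One variation I am considering (only a sketch; the Action<Alert> configurator is my own idea, not part of the lib) is an extension method, so call sites set just the properties they care about, and new Alert properties don't force changes to existing calls:

+ +

using System;
+using System.Collections.Generic;
+
+public static class BaseClassAlertExtensions
+{
+    public static void Alert(this BaseClass instance, string message,
+                             Action<Alert> configure)
+    {
+        var alert = new Alert { message = message, category = ""Not Defined"" };
+        configure(alert); // the caller sets only the properties it cares about
+        instance.Alert(new List<Alert> { alert });
+    }
+}
+
+// Usage in a subclass (the two-argument call resolves to the extension,
+// so it does not clash with the base class's Alert(string) overload):
+// this.Alert(""This is a warning message."", a => a.category = ""Validation"");
+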

+ +

I was wondering if there is a better way to design this.

+",362273,,44470,,43927.3375,43927.85972,Design Pattern when base class supports new method overload,,3,0,,,,CC BY-SA 4.0, +408423,1,,,4/5/2020 21:05,,-4,44,"

I am trying to develop a demo application and I am confused about how to design the bounded contexts.

+ +

Let's say we have three entities and their properties:

+ +
User
+    Id
+    Email
+Todo
+    Questions
+    Answers
+        Value
+        User (who answered)
+Survey
+    Questions
+    Answers
+        Value
+        User (who answered)
+
+ +

Now I have two questions about this.

+ +

First, I wonder what the best practice is for defining bounded contexts in such a situation.

+ +

Second, depending on the answer to the first question, and assuming that User is used within both Todo and Survey, how should we define our context structure with respect to User?

+ +

Thanks.

+",265548,,131624,,43926.91181,43926.94861,How to decide what should be bounded context,,1,1,,,,CC BY-SA 4.0, +408424,1,408473,,4/5/2020 21:51,,1,45,"

I know the proper folder structure for an Angular application should be like this:

+ +
/app
+    /core module
+        /components
+        /services
+    /feature module
+        /components
+        /services
+        /models
+    /shared
+        /components
+        /constants
+        /directives
+        /interceptors
+        /interfaces
+        /models
+        /pipes
+        /sass
+        /services
+
+ +

But I've got a welcome page and a bunch of sub modules associated with a main module. Does it make sense to nest those modules within the parent module, like this?

+ +
/app
+    /core module
+        /components
+        /services
+    /main plan configuration module
+        /sub module one (plan configuration step one)
+            /components
+            /services
+            /models
+        /sub module two (plan configuration step two)
+            /components
+            /services
+            /models
+    /shared
+        /components
+        ...
+
+ +

Or should I not nest them and leave the folder structure like this:

+ +
/app
+    /core module
+        /components
+        /services
+    /main plan configuration module
+        /components
+        /models
+        /services
+    /sub module one (plan configuration step one)
+        /components
+        /services
+        /models
+    /sub module two (plan configuration step two)
+        /components
+        /services
+        /models
+    /shared
+        /components
+        ...
+
+ +

I know this is probably a very personal choice for most people but I'm not the only one working on this codebase and I'd like to put together something that other developers will be able to use easily enough.

+",282705,,,,,43928.24514,Angular Folder directory approach,,1,0,,,,CC BY-SA 4.0, +408425,1,408488,,4/5/2020 22:30,,8,742,"

I have developed a task management tool, and some task lists can be very large (I myself have more than 300 tasks to do).

+ +

I would like to do some task reviews from time to time as the tasks pile up to be able to sort them by priority.

+ +

I imagine presenting tasks two by two to the user and asking him which task is more important than the other, repeating until all the tasks are sorted.
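
+ +

To make that loop concrete, here is a minimal sketch of the kind of review loop I picture (the binary-insertion choice and all names are just an illustration, not a decision; see also the constraints below):

+ +

using System;
+using System.Collections.Generic;
+
+public class TaskReview
+{
+    private readonly List<string> _sorted = new();   // prefix already prioritized
+    private readonly Queue<string> _unsorted;        // still to review
+
+    public TaskReview(IEnumerable<string> tasks)
+    {
+        _unsorted = new Queue<string>(tasks);
+    }
+
+    // askIsMoreImportant(a, b) returns true if the user says a is more important than b.
+    public void Review(Func<string, string, bool> askIsMoreImportant,
+                       Func<bool> userWantsToStop)
+    {
+        while (_unsorted.Count > 0 && !userWantsToStop())
+        {
+            var task = _unsorted.Dequeue();
+            int lo = 0, hi = _sorted.Count;
+            while (lo < hi) // binary search: about log2(n) questions per task
+            {
+                int mid = (lo + hi) / 2;
+                if (askIsMoreImportant(task, _sorted[mid])) hi = mid;
+                else lo = mid + 1;
+            }
+            _sorted.Insert(lo, task);
+        }
+        // If the user aborts here, the _sorted prefix is still fully ordered
+        // and can be persisted as ranks; nothing answered so far is lost.
+    }
+}
+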

+ +

Note that:

+ +
    +
  • The algorithm needs to be ""halt resistant"": the user can abort the review whenever he wants, and the results of previous comparisons must not be lost.
  • +
  • The tasks already have an initial rank, set by hand by the user, that represents the priority.
  • +
+ +

My questions are

+ +
    +
  1. What is the best algorithm to reduce the number of comparisons needed to sort all tasks?
  2. +
  3. Do you have a better strategy to propose (or a better way of presenting the tasks and questions) in order to speed up these reviews?
  4. +
+ +

EDIT 1

+ +

Thank you very much for all this feedback !

+ +

To answer some questions:

+ +
    +
  • Yes, each task has responsible persons, and each responsible person can order all (thus their own) tasks.
  • +
  • We already have tags, but a task review is there to determine priorities among the tasks. I agree that priorities can also be set per feature and at the task level.
  • +
  • I do not want to have a tree of tasks classified by features and lose the flat list. I prefer a flat backlog with tagged tasks. It is sometimes very useful to prioritize a task belonging to a not-so-important feature placed later in the queue.
  • +
  • I realize that I may have first framed this problem at the software level (asking for an algorithm), but my question maybe belongs more in a project management group. I wonder, for example, how agile review meetings are held.
  • +
+ +

EDIT 2

+ +

The more I think about it, the clearer my goal becomes (the reason I am asking about this algorithm): I want to make doing the task review fun.

+ +

The ultimate goal is to keep the tasks ordered by priority, but reviewing tasks is daunting. I'd like to have a sexy interface where tasks are presented and the user can play while doing his review. The sexy/playful part is up to me, but I have a hard time finding a way to logically (algorithmically) present the tasks and then order them, given that the user can stop this process at any time.

+ +

Say I present 2 random tasks, the user says that B is more important than A, and then stops there. Then what? Does that mean I simply update the rank to b.rank = a.rank - 1 (the lower the rank, the more important the task)? And why pick the tasks randomly, by the way? Is there a more logical thing to do there? You see, I don't know how to proceed.

+ +

EDIT 3

+ +

I received a lot of input and I will greatly profit from all of it, so a big thank you to all of you.

+",28263,,28263,,43928.61181,43969.29931,Best algorithm to sort tasks by priorities by a human,,7,6,2,,,CC BY-SA 4.0, +408435,1,,,4/6/2020 9:37,,2,47,"

Partly for use in my own projects, and partly for fun while being stuck at home, I'm attempting to build yet another ""block"" based editor.

+ +

So far I've been using simple objects, starting with a global window.myeditor object, then various sub objects to handle different things. For example I have window.myeditor.toolbar to abstract functions for creating toolbars in each block. I also have made use of Object.create to extend my base window.myeditor.block object to create different block types (paragraph/heading/etc), then create instances of these objects for each element on the page.

+ +

It's starting to get a bit messy though. I can't use global variables in the objects as these don't get re-instantiated with Object.create and so all my objects end up seeing the exact same variables, not a big issue but isn't ideal. I'm also creating functions like _focus() in my main block object, which does some standard stuff, then calling focus() from inside there, which is a function designed to be replaced by child objects so they can extend that functionality. I'm also finding myself duplicating code (such as code to make bold/italic work) which I could probably centralise if I had a more well defined way of ""modularising"" the logic.

+ +

Looking at how other projects work, it seems building on classes, most likely via modules, is the current favourite, although I'm having problems getting my head around a base implementation and think I'm just screwing up the basic architecture.

+ +

For example at the moment I have something along the lines of the following -

+ +

Editor class/module with a global instance to handle the main editor (not that it's required but I'd ideally design it such that nothing relies on having a global instance to anything, so that multiple editor instances would actually be possible without messing each other up). That instance contains an instance of BlockRegistry which stores the different types of block and a reference to their class. The editor also has an array of BlockContainer instances which control a section of blocks on the page. There is a base Block class to handle a single content block that should be extended by plugin block types.

+ +

The BlockContainer class needs to list all the HTML sub-elements and create a SomeBlockTypeThatExtendsBlock instance for each of them, using the correct handler; but in order to do that, it needs access to the BlockRegistry instance to see whether it's a supported block type and which class is the correct one.

+ +

I can see myself getting this issue of various class instances needing access to features of other instances regularly, but not sure what the correct method here is, other than resorting to constantly accessing the global instance from inside modules, passing references all over the place, or just sticking any code that might need to reference something else in the same place.

+ +

I think I'm just missing something obvious and screwing up the basic design.

+",279595,,,,,43927.40069,Javascript module design patterns,,0,2,,,,CC BY-SA 4.0, +408439,1,408440,,4/6/2020 12:36,,4,207,"

I have a state machine in which I might have to restart something when needed (say, restart a piece of hardware). But after restarting, I need to run some tests on the thing. If the tests fail, I need to stop the thing and exit the state machine with a failure status.

+ +

In case of failure, how should I handle the second stopping of the thing? The state machine diagram loses its readability if I do it the following way, and the implementation becomes ugly, since I need to decide the Stop thing state's output based on the previous state (SUCCESS the first time, and ERROR if the state was reached after failing the tests):

+ +

+ [state machine diagram: a single Stop thing state shared by the success and failure paths] +

My other option is to create a 2nd instance of the Stop thing state as follows. But then it makes the state machine bigger

+ +

+ [state machine diagram: a second instance of the Stop thing state] +

Which one of these is the correct approach? Or is there another way to approach this kind of problem?

+ +

If it matters here, I am using ROS and smach for the state machine implementation

+",339669,,339669,,43927.54028,43928.22778,Repeating a state in a state machine,,1,3,1,,,CC BY-SA 4.0, +408460,1,,,4/6/2020 22:03,,5,125,"

Disclaimers:

+ +
    +
  1. This question is reposted from SO upon SO user's suggestion to put it here since there is no specific code in question.
  2. +
  3. This question is a subset of my larger theme of Fortran modernization.
  4. +
  5. There are useful versions of this question asked already (1, 2, 3) and blog posts, and although helpful, I am curious of what is best practice now (some of those posts are 5-10 years old) and the context of my situation, which is the reason for asking it in a similar way.
  6. +
+ +

Background

+ +

Our current code base is largely Fortran (400K LOC, most F90/some F77/maybe some newer versions), dispersed amongst some 10 separate dev teams (organizationally) in (somewhat) standalone modules.

+ +

We are exploring the best practice idea of taking one of those modules and writing it in a faster-to-develop language (Python), and wrapping and extending bottlenecks in Fortran.

+ +

However, since the module is currently completely in Fortran and interacts with other Fortran modules and a main Fortran loop, we need to keep the appearance or interface of the module to main Fortran loop the same.

+ +

So the incremental change approach for this Fortran module – and other similarly architected Fortran modules – is to keep the interfaces the same and change the internals.

+ +

In other words, the shell or signature of the Fortran subroutines or functions would be unchanged, but the functionality inside the function would be changed.

+ +

This functionality inside the function is what we want to rewrite in Python.

+ +

From discussions online - and my intuition, whatever that’s worth – it seems unadvised to embed Python like this, and instead do the reverse: extend Python with Fortran (for bottlenecks, as needed).

+ +

However, this embedding Python approach seems the only logical, smallest, atomic step forward to isolate changes to the larger Fortran system.

+ +

It appears that there are some options for this embedding Python problem.

+ +

The most future-proof and stable way is via ISO_C_BINDING and using the Python C API (in theory…).

+ +

There are also stable solutions via Cython and maybe ctypes that are reliable and well maintained.

+ +

There are more dependency-heavy approaches like cffi and forpy, that introduce complexity for the benefit of not writing the additional C interface code.

+ +

We are also aware of simple system calls of Python scripts from Fortran, but these seem too disk write/read heavy; we would really like to do minimal disk reads/writes and keep the array passing in memory (hopefully just pointer locations passed, not array copies).

+ +

Another constraint is that this Fortran-Python interface should be able to handle most common datatypes and structures, including multi-dimensional arrays, or at least have a way to decompose complex custom datatypes to some collection of simpler datatypes like strings and arrays.

+ +

Question:

+ +

Given the above description, do you have any advice on what are the best practice ways that are currently available to call Python from Fortran, with minimal writing of additional C code (so a package like cffi would be preferred over Python C API approach)?

+ +
    +
  • using 2003 or 2008 Fortran versions; not sure if all 2018 features are implemented in our Intel Fortran compiler.
  • +
+ +

Edit (2020-04-16):

+ +

To specify this question further, I am rephrasing the question to be specific to writing as little C overhead code as possible.

+",175040,,175040,,43937.63819,43937.63819,Fortran-Python Interface,,0,3,1,,,CC BY-SA 4.0, +408461,1,408484,,4/6/2020 22:52,,0,59,"

So this is a very beginner question, so please do be patient with me. I am building a little practice project, and what I am struggling to understand is the overall structure of the solution, in terms of breaking projects up. I assume this is for ease of readability and maintainability, and as I have never worked like that before, I have a few questions.

+ +

So, for example's sake, I am building a payment app where people transfer money to each other (""pretend"" money). There will be a basic front end using Razor Pages, and an API project that captures all info from the database and exports it as JSON.

+ +

This is how I've structured my solution so far:

+ +

project.Domain (this contains all the actual logic, such as the core models and controllers, and the code to transfer money from one account to another, etc.)

+ +

project.Presentation (this will be where all the Razor pages live; in effect the data from Domain is passed in here and rendered to the page)

+ +

project.API (this gathers app info from the database and exports it as JSON. It is RESTful and will allow verbs like PUT, GET, POST, etc.)

+ +

Like I said, I've never done this before, so I'm trying to use common sense here. Typically, what I would normally have done was create a single MVC project and put everything in there! But I've been learning SOLID, and that made me think that not just the code should be structured, but the solution itself too!

+ +

I would be very grateful for input on this!

+ +

Andy

+",351324,,,,,43928.45139,How to structure separate projects in a single solution? i.e project.Domain etc in C#,,1,1,,,,CC BY-SA 4.0, +408466,1,,,4/7/2020 0:31,,0,165,"

In The Art of Unit Testing, 2nd Ed., the author gives the following example for injecting a stub using constructor injection and a ""fake object"". The goal of the ""fake object"" is to inherit the interface and break dependencies so that way you can unit test it.

+ +

I don't understand how these are examples of legitimate tests. The call to LogAnalyzer.IsValidLogFileName() at runtime would call some implementation of IExtensionManager.IsValid(), which would interrogate the string in some way before returning a bool or throwing an exception. So to me, the significant test here is: for IsValidLogFileName(), does the method return what you expect when you give it a specific string?

+ +

In the examples below, the author hard-codes myFakeManager.WillBeValid to true within the test, passes myFakeManager to the LogAnalyzer, and immediately asserts that the result is true... the very value they literally just set?!

+ +

How are these tests in LogAnalyzerTests.cs useful?

+ +

LogAnalyzer.cs

+ +
public class LogAnalyzer
+{
+    private IExtensionManager manager;
+    public LogAnalyzer(IExtensionManager mgr)
+    {
+        manager = mgr;
+    }
+
+    public bool IsValidLogFileName(string fileName)
+    {
+        return manager.IsValid(fileName);
+    }
+}
+
+ +

IExtensionManager.cs

+ +
public interface IExtensionManager
+{
+    bool IsValid(string fileName);
+}
+
+ +

FakeExtensionManager.cs

+ +
internal class FakeExtensionManager : IExtensionManager
+{
+    public bool WillBeValid = false;
+    public Exception WillThrow = null;
+
+    public bool IsValid(string fileName)
+    {
+        if (WillThrow != null)
+            throw WillThrow;
+
+        return WillBeValid;
+    }
+}
+
+ +

LogAnalyzerTests.cs

+ +
[Test]
+public void IsValidFileName_NameSupportedExtension_ReturnsTrue()
+{
+    FakeExtensionManager myFakeManager = new FakeExtensionManager();
+    myFakeManager.WillBeValid = true;
+
+    LogAnalyzer log = new LogAnalyzer(myFakeManager);
+
+    bool result = log.IsValidLogFileName(""short.ext"");
+    Assert.True(result);
+}
+
+[Test]
+public void IsValidFileName_ExtManagerThrowsException_ReturnsFalse()
+{
+    FakeExtensionManager myFakeManager = new FakeExtensionManager();
+    myFakeManager.WillThrow = new Exception(""this is fake"");
+
+    LogAnalyzer log = new LogAnalyzer(myFakeManager);
+    bool result = log.IsValidLogFileName(""anything.anyextension"");
+    Assert.False(result);
+}
+
+",218080,,218080,,43928.03403,43928.59653,Unit Testing: Constructor Injection with Fake Objects - Bad Tests?,,3,2,,,,CC BY-SA 4.0, +408468,1,,,4/7/2020 1:34,,0,10,"

I have a Spark cluster that contains my customer's data. I want to allow my customer to query their data via our admin dashboard and generate their own reports, self-service.

+ +

A key consideration is that the customer is technical.

+ +

The best option I can think of is to embed Jupyter Notebook or Zeppelin Notebook in the dashboard, since that would give them direct access to their data.

+ +

What other options do I have?

+",337878,,,,,43928.06528,Best way to expose data from a Spark cluster for queries and custom dashboards in a web app?,,0,0,,,,CC BY-SA 4.0, +408469,1,,,4/7/2020 2:30,,0,61,"

I've written an app using golang which uses OAuth2(Authorization code flow with PKCE) to interact with the Gmail API.
+If I build the app using my own client ID, then my client ID can easily be discovered through the authorization request URL which my app passes to the system browser during user consent. This won't be good practice, since basically anyone could impersonate my client using my client ID.

+ +

My question

+ +

How am I supposed to distribute my app with my own client ID? If it's not possible with this flow, do any other alternatives exist?

+",362386,,,,,43930.27014,"How should I distribute my app with my own OAuth2 client ID, without letting anyone find it out?",,1,6,,,,CC BY-SA 4.0, +408471,1,,,4/7/2020 4:56,,-4,24,"

I've been researching different approaches to i18n in web applications and have found a lot of great resources for locale specific information like CLDR.

+ +

There is one area where I really can't find any kind of aggregated information and that is the naming of individuals and organisations. There are plenty of articles on the web that talk about the difficulties in generating simple forms for recording people's names, due to cultural differences in naming, and those articles go on to list those differences, but where are these differences recorded?

+ +

Is there some machine readable resource somewhere (like CLDR) that describes how names for people and organisations are properly formatted for their locale and other locales? What components of a name are required, ordering of components, formatting of names for formal/informal purposes etc?

+",248366,,,,,43928.48472,Locale specific guidelines for naming people and organisations,,1,0,,,,CC BY-SA 4.0, +408475,1,,,4/7/2020 7:44,,0,13,"

I have an application which mainly deals with business workflow management. In this, there are going to be many events which will be processed async. The application raises the event and these events are pushed to the cloud service bus. We are on GCP and using GCP's pub-sub as service bus.

+ +

Once a message is received on Pub/Sub, I use a Google Cloud Function to work with these events.

+ +

On processing the messages, the actions could be many, like

+ +
    +
  • Send Notification
  • +
  • Update the database
  • +
  • Build search index
  • +
  • Audit events to OLAP
  • +
+ +

In many cases the actions involve, adding/updating the primary database.

+ +

For example, let's say a user updates the workflow data; the activity log could then be processed via Pub/Sub and subsequently written to the database.

+ +

The way my application is structured is that there are 2 separate applications:

+ +
    +
  • ""core"" - nodejs, express based API which is the main application api layer. This API the events
  • +
  • ""task-hub"" - nodejs, express based API, but hosted as cloud functions on google. This is responsible for processing the events
  • +
+ +

The ""task-hub"" application works with many of the same entities as the ""core"", like WorkflowData, WorkflowActivityLog etc.

+ +

Is it a good idea to have ""task-hub"" directly access the database via its own services, or does it make sense to keep the APIs on the ""core"" API and have ""task-hub"" make HTTP requests to interact with those APIs?

+ +

The pros of having the direct interaction to database are

+ +
    +
  • Easier to write and manage
  • +
  • No overhead on making another http call and managing the potential failures of those requests
  • +
  • ""core"" api don't need to handle these additional requests (though this might not be an issue, since its just upserts)
  • +
+ +

On the other side

+ +
    +
  • It feels like logic is distributed across multiple applications
  • +
+ +

I would have isolated the entire service layer in a separate package and re-used it across both; however, that's too much refactoring, which I won't be able to undertake now. Also, my application being a Node.js application doesn't make this effort any easier.

+ +

I would like to hear some thoughts on how I should draw the separation.

+",121826,,,,,43928.32222,How to design event processing on cloud with write backs to primary data source,,0,0,,,,CC BY-SA 4.0, +408478,1,408479,,4/7/2020 9:07,,0,60,"

We have put in place an account management service: its responsibility is to let a user register his account, confirm his email, etc.

+ +

We have also put in place a process layer that is responsible for notifying the user when his account is altered.

+ +

The notifications are done by a notification service that is decoupled from our account service and that notifies the user based on an event mechanism. Here is an overview of the principle: [architecture diagram]

+ +

As a new requirement, our customer now wants us to send, for any notification, a reminder to the user every 2 hours. For example, if the account is not yet confirmed, we will send the user an email notification asking him to confirm his account 2 hours later, and so on. The team in charge of the UserNotification service thinks that it's not their responsibility to handle this, because they have a stateless process and do not know about the account status, for example. So they ask that the account management service raise an event ""AccountStillNotConfirmed"" to request the notification every two hours:

+ +

+ [event flow diagram: the account service raises AccountStillNotConfirmed every two hours] +

The alternative is to make the UserNotification service stateful, or at least make it ask the account service for the list of ""not yet confirmed"" accounts every two hours and send a notification if needed. The event ""AccountStillNotConfirmed"" seems odd to me. I am searching for the cleanest way to tackle this problem.
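
+ +

To illustrate the polling alternative, here is a rough sketch (IAccountServiceClient, IEmailSender and the rest are hypothetical names, not our real code):

+ +

using System;
+using System.Threading;
+using System.Threading.Tasks;
+
+// The notification service stays stateless about accounts: on a two-hour
+// schedule it asks the account service which accounts still need a reminder.
+public interface IAccountServiceClient
+{
+    Task<AccountSummary[]> GetUnconfirmedAccountsAsync();
+}
+
+public interface IEmailSender
+{
+    Task SendConfirmationReminderAsync(string email);
+}
+
+public record AccountSummary(string Email);
+
+public class ConfirmationReminderJob
+{
+    private readonly IAccountServiceClient _accounts;
+    private readonly IEmailSender _email;
+
+    public ConfirmationReminderJob(IAccountServiceClient accounts, IEmailSender email)
+    {
+        _accounts = accounts;
+        _email = email;
+    }
+
+    public async Task RunAsync(CancellationToken token)
+    {
+        while (!token.IsCancellationRequested)
+        {
+            foreach (var account in await _accounts.GetUnconfirmedAccountsAsync())
+                await _email.SendConfirmationReminderAsync(account.Email);
+
+            await Task.Delay(TimeSpan.FromHours(2), token);
+        }
+    }
+}
+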

+ +

Which architecture would be preferable to you, and why? Is there another alternative way to satisfy our customer's need?

+",285517,,,,,43928.38681,Notify user in a event driven architecture,,1,0,,,,CC BY-SA 4.0, +408485,1,,,4/7/2020 11:29,,-4,74,"

I found this previous question that addresses the issue. The question is: How do I deal with global variables in existing legacy code (or, what's better, global hell or pattern hell)?

+ +

My question asks for more detail on how to accomplish the last part of the solution. The most popular answer says: ""Over time you can hope to kick the can all the way to the end of the road, by removing all direct knowledge of the global instance from every class, and finally getting rid of the global instance...""

+ +

I have a project where I have grouped the globals into a class, as suggested in the answer. I am also passing an instance of this globals class into the other classes that need the globals. What I don't understand is how to accomplish that last step of ""finally getting rid of the global instance"".
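
+ +

To show where I am stuck, here is a minimal sketch (all names hypothetical) of where I think this is supposed to end up, with the former globals split into focused dependencies that live only inside Main (the composition root); what I don't see is how to get my real code base there:

+ +

using System;
+
+public class AppSettings { public string ConnectionString { get; set; } }
+public class Logger { public void Log(string msg) => Console.WriteLine(msg); }
+
+public class ReportService
+{
+    private readonly AppSettings _settings;
+    private readonly Logger _logger;
+
+    // Each class declares only what it actually needs.
+    public ReportService(AppSettings settings, Logger logger)
+    {
+        _settings = settings;
+        _logger = logger;
+    }
+
+    public void Run() => _logger.Log(""Running with "" + _settings.ConnectionString);
+}
+
+public static class Program
+{
+    public static void Main()
+    {
+        // The former globals now live only inside Main and flow down via
+        // constructors; no class holds a global reference anymore.
+        var settings = new AppSettings { ConnectionString = ""Server=..."" };
+        var logger = new Logger();
+        new ReportService(settings, logger).Run();
+    }
+}
+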

+",362414,,5099,,43928.49514,43928.50139,How to refactor Global variables?,,1,2,,,,CC BY-SA 4.0, +408489,1,,,4/7/2020 12:20,,-1,99,"

I am looking for a name/paradigm/research area etc. that describes the notion of working with data not in the traditional file-based sense, but instead based on semantics. I can best explain what I am looking for with an example:

+ +

Text editors typically work file-based. I write some text, save it in a file and if I later want to edit the text, I open this exact file. If I write a book, I might have many files for each chapter that I have to deal with individually, even though semantically they form a unit 'book' and the fact that my book is stored in separate files is not relevant when I am writing the book. It is an implementation detail.

+ +

Contrast this with for example a game engine such as Unity. If I edit a level for a game, the game engine might create multiple files to store the data for the level in, but I as a user don't have to know these details. I am working with the semantic unit 'level' and let the computer figure out the details at the storage level.

+ +

Does this concept (let users work with semantic units instead of files) have a name?

+ +

Edit: Maybe my initial question did not convey what I am looking for. I know that what I am describing is a form of abstraction (as some answers/comments have pointed out). But I am asking whether this form of abstraction has a well-defined name in the literature and whether there is research related to developing software that supports this abstraction.

+ +

As an example for a similar situation, take cloud computing. It is an abstraction over computer system resources, yet the development of systems that work with this abstraction is an active research area that is called 'cloud computing'.

+",362419,,362419,,43929.48472,43929.58264,Is there a paradigm for working with data semantically instead of file-based?,,2,7,,,,CC BY-SA 4.0, +408491,1,,,4/7/2020 12:49,,18,3087,"

Instead of using singletons, I made one class that holds an instance of every component; let's call it MainApp. MainApp is initialized in the entry point of the program and determines the lifetime of all components and resources. It has a single responsibility, namely being a collection of components, which is why it didn't feel like a bad idea at first. However, some components need access to other components, and there is one component that is needed by almost all other components. To solve this I used dependency injection, but this makes these components dependent on MainApp. It feels like MainApp has just become a big mediator class for all components.
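
+ +

For reference, here is a small sketch (component names invented for illustration) of the variant I am considering, where MainApp still constructs everything but components receive only the specific collaborators they need rather than MainApp itself:

+ +

public class Database { }
+public class Renderer { }
+
+public class Physics
+{
+    private readonly Database _db;
+
+    // Depends on Database directly, not on MainApp.
+    public Physics(Database db) { _db = db; }
+}
+
+public class MainApp
+{
+    // MainApp owns the lifetimes and knows all components,
+    // but no component knows MainApp.
+    private readonly Database _db = new();
+    private readonly Renderer _renderer = new();
+    private readonly Physics _physics;
+
+    public MainApp()
+    {
+        _physics = new Physics(_db);
+    }
+}
+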

+ +

Back to my question: is this kind of class bad design? What would be better alternatives? Or is this class not necessarily bad design in itself, with the problem being more that so much depends on it?

+",362422,,,,,43946.37639,Is it bad design to have a class represent your entire program?,,4,10,3,,,CC BY-SA 4.0, +408500,1,408501,,4/7/2020 15:06,,1,173,"

I have a RESTful API built with PHP. In POST request saving to DB is triggered. The problem is that now I have to support long running tasks. For example a user triggers POST request that is going to take a few minutes to be processed and having to wait is not a good option.

+ +

From what I've learned so far, one of the best practices is something like this: the user sends a POST request with data, the API returns status 202 Accepted and sends a Location header with a URL where the user will obtain the result (once it's ready).

+ +

But how can I achieve this if there is no asynchronous/multithreading stuff in PHP? My endpoint will have to return status 202 and trigger a process (start writing to DB).

+ +

I don't want to keep the user waiting on his POST request until the result is obtained (so it needs to happen as I've described).

+ +

Does anyone have any ideas?

+",362441,,,,,43928.63333,"Long running REST API in PHP, async?",,1,0,,,,CC BY-SA 4.0, +408504,1,408506,,4/7/2020 15:46,,-1,86,"

Given two programs that are exactly the same, where one is compiled directly on the host machine and the other is compiled using cross compilation (e.g. from macOS to Linux): can there be a difference in runtime performance between the two sets of binaries generated?

+ +

Intuitively, I'm thinking that when compiling on the host machine directly, the compiler can use more accurate information (exact OS version, type of hardware, ...) and use it to perform some optimizations.

+ +

If differences exist, then I'm wondering how significant they can be.

+",316634,,316634,,43928.66111,43928.96806,runtime performance for binaries compiled on a different OS than target OS,,4,0,,,,CC BY-SA 4.0, +408511,1,408520,,4/7/2020 18:15,,1,283,"

I am building a simple client-side web page that can be updated from an admin page. I am using PHP for server-side manipulation of the client targeted page.

+ +

The setup feels like an overkill to me:

+ +
    +
  1. index.html has some basic HTML input elements. A button triggers an XMLHttpRequest GET for makeClientPage.php. The PHP script takes a variable generated from the HTML inputs and writes out a clientPage.html file.
  2. +
  3. In the generated client page, I have two JavaScript variables: new_image, which is assigned the value from the admin page, and old_image, which starts out with an arbitrary string value.
  4. +
  5. The generated variable is a directory name for a PNG image. There are over a thousand PNG images that the admin may choose from. This path is given to a javascript variable (new_image) between <script> tags in clientPage.html.
  6. +
  7. The image is not simply viewed in an <img> tag. It should be passed to Panolens.js, a panorama image viewer. A JS function updates Panolens.js with the new image.
  8. +
  9. Finally, in clientPage.html, I run a setInterval(function(){checkUpdate()}, 1000). You can find checkUpdate() below as well. It basically runs every second, comparing new_image to old_image; if they are different, the PanoLens API is called to update the image, and old_image is assigned new_image, preventing the same image from being updated every second.
  10. +
+ +

The trick here is that, in the clientPage.html file, the new_image variable is not read directly by the JavaScript itself. I actually send a request to another PHP function, which reads clientPage.html, extracts the new_image variable, and compares it that way instead. WHY? Because once the client opens url/clientPage.html, their browser cannot follow a file change. Therefore a request must be sent back to the server, asking for the actual value of the variable that was updated by the admin's action.

+ +

So in the end, imagine the Admin on his computer and the client with his phone. The admin enters the input and hits the button, sending the desired image to the page client is viewing. The client will be 'wearing' their phone with headset, viewing the image as if they are in an Oculus device. So we do not have the option to simply click 'refresh'.

+ +

My question is about the setInterval(). When this application is distributed to a hundred clients, our server will have to handle that many PHP requests every second. And no 'premature optimization' reflexes please, guys; I am asking this question so I can learn. Below is what I tried; I am asking whether it can be done differently. I hope this is not asking for an 'opinion'.

+ +

I am quite new in my programming career, and this is the first server-side web development I am doing. I am super confused by how PHP must go back and forth to do something useful. Yesterday I learned how to make an XMLHttpRequest to PHP, in order to pass a variable from JS to PHP. I feel bright and dark at the same time.

+ +

I keep searching the internet for how to trigger a JavaScript function from PHP, in order to avoid a check every second. What would make sense is that, once the admin hits the generate-image button, the client page should get a notification of that. But I cannot understand how it is done across different files.

+ +

I would appreciate if you can share your experiences on this topic.

+ +

Code:

+ +

Between script tags in clientPage.html:

+ +
...
+var newImageInfo; // filled in by callPHPUpdate() before the comparison below
+new_image = '_name_%new_image%_name_';
+old_image = 'old_image';
+var setInterval_forUpdate = setInterval(function(){checkUpdate()}, 1000); // every second
+
+function checkUpdate(){
+  callPHPUpdate(); // synchronous request, so newImageInfo is set when it returns
+  if(newImageInfo != old_image){
+    old_image = newImageInfo;
+    updateImage();
+  }
+}
+function updateImage(){
+  ...
+  // update PanoLens.js image
+  ...
+}
+function callPHPUpdate(){
+  var xmlhttp = new XMLHttpRequest();
+  xmlhttp.onreadystatechange = function(){ 
+    if(xmlhttp.readyState==4 && xmlhttp.status==200){
+      var res = xmlhttp.responseText;
+      newImageInfo = res.split('_name_')[1];
+    }
+  }
+  xmlhttp.open(""GET"",""updateInfo.php"",false); // false = synchronous; blocks the page and is deprecated
+  xmlhttp.send();
+}
+
+ +

updateInfo.php that clientPage.html calls for itself.

+ +
<?php 
+// Returns the first line of $fileName that contains $str, or -1 if none is found.
+function getLineWithString($fileName, $str) {
+    $lines = file($fileName);
+    foreach ($lines as $line) {
+        if (strpos($line, $str) !== false) {
+            return $line;
+        }
+    }
+    return -1;
+}
+
+$new_line = getLineWithString('clientPage.html', 'new_image = '); // notice how I read the variable
+
+echo $new_line;
+?>
+
+",362454,,362454,,43928.92083,43928.94861,How to update HTML/JS client from PHP server without refresh intervals?,,1,2,,,,CC BY-SA 4.0, +408513,1,,,4/7/2020 18:37,,-1,48,"

There are two compatible versions of the same Product (e.g. database engine) that I want to compare to each other for the same input scenario (SQL query).

+ +

The problem I want to address first is the instability and stochastic nature of the System the Product runs on:

+ +
    +
  • CPU throttling,
  • +
  • random context switching,
  • +
  • third-party processes running aside,
  • +
  • memory allocator page faults,
  • +
  • filesystem fragmentation (at least when loading the binary in memory),
  • +
  • etc.
  • +
+ +

All of this could potentially be factored out by a good probabilistic model, but I don't know how to devise one.

+ +

Let me formulate the task:

+ +
    +
  • approximate test mean time is 500ms, which is comparable to a systematic error of the System,
  • +
  • determine with probability p1 that the new version is faster than old one,
  • +
  • determine with probability p2 that the new version is slower than old one,
  • +
  • run both versions on the given scenario as few times as possible.
  • +
+ +

How do I build such a model? Can it be done at runtime, or should the parameters be tuned in simulations? How do I deal with different Systems, i.e. if I run the two versions on different machines every time? (Articles, literature or scholarly links are welcome.)

+ +

Also, maybe there are other approaches to performance testing, like full CPU cycle counts, or other probabilistic approaches?

+ +

Example.

+ +

I want to run each of the two versions no more than 7 times. I have a formula representing some Model, like

+ +
bool NewFaster(new_run_time1, new_run_time2, …, old_run_time1, old_run_time2, …)
+
+ +

and it's proven that a true result is correct with probability 95%, and a false result is correct with probability 80%.
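
+ +

For what it's worth — this is not from the question, just one standard probabilistic option — a permutation test estimates how often a mean difference at least as large as the observed one would occur if the two versions were identical; a sketch:

+ +

import java.util.Arrays;
+import java.util.Random;
+
+public class PermutationTest {
+    static double mean(double[] xs) { return Arrays.stream(xs).average().orElse(0); }
+
+    // Pools both samples, reshuffles them many times, and counts how often the
+    // shuffled mean difference reaches the observed one (null: versions identical).
+    static double pValue(double[] oldT, double[] newT, int rounds, Random rng) {
+        double observed = mean(oldT) - mean(newT); // > 0 suggests the new version is faster
+        double[] pool = Arrays.copyOf(oldT, oldT.length + newT.length);
+        System.arraycopy(newT, 0, pool, oldT.length, newT.length);
+        int hits = 0;
+        for (int r = 0; r < rounds; r++) {
+            for (int i = pool.length - 1; i > 0; i--) { // Fisher-Yates shuffle
+                int j = rng.nextInt(i + 1);
+                double tmp = pool[i]; pool[i] = pool[j]; pool[j] = tmp;
+            }
+            double diff = mean(Arrays.copyOfRange(pool, 0, oldT.length))
+                        - mean(Arrays.copyOfRange(pool, oldT.length, pool.length));
+            if (diff >= observed) hits++;
+        }
+        return (double) hits / rounds; // small value: the observed gap is unlikely by chance
+    }
+}
+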

+",119169,,,,,43928.82847,How to execute a proper comparative performance testing?,,1,1,,,,CC BY-SA 4.0, +408518,1,,,4/7/2020 21:27,,1,110,"

I'm building a backend service powered by Spring Batch which makes it possible to define and run Jobs.

+ +

Currently, I have several jobs, that essentially, aren't related one to another.

+ +

So,

+ +
    +
  • I have one application.properties file that contains all the properties for all jobs
  • +
  • Packages are bloated with classes that aren't related to each other.
  • +
+ +

So I thought about separating the Jobs into different Maven modules and having an engine module that gathers them all into one jar.

+ +

The problem I was facing is that Spring wouldn't automatically load the application.properties files from the other modules.

+ +

i.e. @Value(""${some.module.property}"") will fail the application (engine.jar), as this property isn't found in the application.properties of the engine module.
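
+ +

For illustration, one common way to pull a module's own properties file in explicitly is @PropertySource on a configuration class inside that module — a sketch, with a hypothetical file name:

+ +

import org.springframework.beans.factory.annotation.Value;
+import org.springframework.context.annotation.Configuration;
+import org.springframework.context.annotation.PropertySource;
+
+@Configuration
+@PropertySource(""classpath:module-a.properties"") // hypothetical file shipped inside the module's jar
+public class ModuleAConfiguration {
+
+    @Value(""${some.module.property}"")
+    private String someModuleProperty;
+}
+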

+ +

I've understood that there might be some solutions for that problem, but my main question is:

+ +

Is it a good idea to make this separation, or is it bearable to have all the classes and configurations in one single project?

+",313400,,313400,,43928.89722,43941.49514,Spring Boot/Batch: Should every Job be separate Maven module?,,1,0,1,,,CC BY-SA 4.0, +408525,1,408529,,4/8/2020 4:30,,-3,39,"

using a python function signature as an example:

+ +
def this_function_has_an_optional_parameter(x, y = 42): ...
+
+ +

I'm wondering if there is an existing set of guidelines specifically for when and how to use optional parameters like this.

+",344563,,,,,43929.38819,Is there a good set of heuristics around how/when to use optional parameters,,1,0,,,,CC BY-SA 4.0, +408526,1,408527,,4/8/2020 5:44,,-2,112,"

I am new to VBA and am struggling with overall program design:

+ +

Toy Example

+ +

Input: Spreadsheet with 20000 rows and 50 columns. Also, order of rows matters.

+ +

Task: Create 3 different reports on a new sheet. Computing these reports requires the following logic:

+ +

Filter rows based on criteria, perform computation with order of remaining rows in mind, filter more, perform computations, repeat 5-6 times.

+ +
Considerations:
+    1. Runtime/Performance
+    2. Readability/Maintainability of Code
+    3. Provide formulas for end users in excel if possible (only when applicable)
+    4. Error checking
+
+ +

My question is which of the following approaches should I take for tasks like the one presented?

+ +

Approach 1 (Read Data into Variant Array):

+ +

I am under the impression that the fastest way to read in data is to an unallocated variant array as seen here:

+ +

Dim Arr() As Variant

+ +

Arr = Range(""A1:A10"")

+ +

After this, I can perform logic by accessing data such as Arr(1,3) and it is easy to filter this array by skipping rows that don't meet certain conditions. In terms of runtime, I think this a good approach. However, constantly accessing the array like Arr(1,3) makes code hard to understand (I suppose one method could be to assign some Constants to the column numbers). Also, with the Variant array, there is no type checking of inputs in each column. One source also mentioned that Variants can have overhead.

+ +

Approach 2 (Read Data into Array of Types):

+ +

I can define a Type with attributes with appropriate data types. The major upside is that I think this method produces the most readable code as you can have meaningful attribute names with type checking. The major downside is that it is more difficult to write to worksheets and filter due to having to constantly resize arrays. Another downside I see is that the overhead increases runtime.

+ +

Approach 3 (Use Worksheet functions and Temporary Columns):

+ +

The main benefit of this is providing some transparency to the end user in the form of formulas; we can also use built-in worksheet functions that may help performance in rare cases. There are two huge downsides to this approach. The first is that the readability of the code suffers tremendously, as you have to properly format Excel formulas in the code, which can get out of hand with strings and other conditions. The second is huge performance bottlenecks. In a few runtime tests on toy examples, working with the worksheet outperformed the previous approaches; however, in my actual large programs things like AutoFilter can create 10-minute delays. For the toy example above, I ended up having to create temporary columns and delete rows to create a report instead of filtering, then close and reopen the input file and start the next report.

+ +

Any help is appreciated.

+",347108,,,,,43929.61667,Trouble Designing Programs in VBA,,1,4,,,,CC BY-SA 4.0, +408530,1,408531,,4/8/2020 10:02,,-2,65,"

So, the question I have is how do companies handle transactions from a user with bad connectivity. The scenario in my head is.

+ +

A user X (with $150 in their account) is trying to transfer $100 from their account to someone else's account using an app on their phone, but they happen to have a poor internet connection. The user clicks the transfer button after selecting the recipient and the amount of money to transfer. After the user clicks the transfer button, they lose connectivity, but the request is sent to the server and it starts the transaction; a few seconds later the user regains the connection, ends up on the same page, and happens to click the transfer button again. As we know, once the request is sent, it will be processed regardless of whether the user still has the app open after clicking the transfer button.

+ +

Imagine the app has a microservice architecture deployed on Kubernetes, and the service which handles the transfer transactions has multiple pods. The first time user X clicks the transfer button the request is sent to pod1, and the next time the request is sent to some other pod. How will this be handled? I imagine reentrant locks would not work in this scenario.

+",188783,,,,,43929.44444,How to handle financial transactions across multiple pods of a service deployed on Kubernetes?,,1,0,1,,,CC BY-SA 4.0, +408532,1,,,4/8/2020 11:07,,1,67,"

I have many continuously growing (through scraping) collections in MongoDB Atlas. The documents in each collection follow this schema:

+ +
{
+""source_url"": ""<some url on the web>"",
+""html"": ""<S3 url>"",
+""text"": ""<S3 url>"",
+""source"": ""<data source>"",
+}
+
+ +

The html and text are Amazon S3 urls of extracted html and text from the source_url.

+ +

I want to save/sync all these information in Elasticsearch with following form of the document structure:

+ +
{
+""source_url"": ""<some url on the web>"",
+""html"": ""<html from S3 url>"",
+""text"": ""<text from S3 url>"",
+""source"": ""<data source>"",
+}
+
+ +

I can write a Python script that saves all the existing data in Elasticsearch and then periodically looks for new entries (all entries in MongoDB minus the existing entries in ES) and saves them to ES.

+ +

This, however, seems like too much work to me, and there must be a way so that as soon as a document enters MongoDB, some service does all the work and sends the data to ES.

+ +

Any help in making this efficient will be appreciated, and if there are known solutions for this type of task, please mention them.

+",362506,,362506,,43929.72847,43929.72847,Combine/Sync Amazon S3 and MongoDB Atlas with Elastic Search,,0,0,0,,,CC BY-SA 4.0, +408538,1,,,4/8/2020 14:55,,-1,103,"

Our legacy application provides a static method, public static boolean persist(Data data), that service/class callers use for data persistence.

+ +

I do see a unit-testing issue for callers. Is this also an example of tight coupling? As far as I understand, changes can be made inside the static method without requiring any changes to callers, so this should not be a tight-coupling issue.

+ +

Are there additional disadvantages to this approach compared to a dependency-injection approach for exposing services?
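
+ +

For comparison, a minimal sketch of the dependency-injection alternative (the Data type and the caller are hypothetical stand-ins):

+ +

class Data { } // hypothetical payload type
+
+interface PersistenceService {
+    boolean persist(Data data);
+}
+
+class OrderProcessor { // hypothetical caller
+    private final PersistenceService persistence; // injected; easy to swap for a mock in tests
+
+    OrderProcessor(PersistenceService persistence) {
+        this.persistence = persistence;
+    }
+
+    void process(Data data) {
+        persistence.persist(data);
+    }
+}
+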

+ +

P.S. This service will be called from multiple threads simultaneously. The service performs database lookups, adds additional data for persistence, calls business logic, etc. The service itself has no state.

+ +

I also read the following related article on dependency injection vs. static methods for functions: for a function with no dependencies we can use static invocation. But again, this is a service and not a plain function call.

+ +

Please add comments if this is off-topic per community standards, and point me to where such questions should be posted and where to learn about best practices for exposing Java classes/services.

+",362445,,362445,,43929.79931,43930.47431,Exposing java service as static method or seam dependency,,1,0,,,,CC BY-SA 4.0, +408540,1,408558,,4/8/2020 15:43,,3,874,"

What are the relationships between these pairs of concepts:

+ + + +

For the first pair of concepts, it seems to me that an object of value type is an element (data or procedure), and an object of reference type is the location (absolute or relative) of an element.

+ +

Questions:

+ +
    +
  • Is an object of value type a value object?
  • +
  • Is an object of reference type a reference object?
  • +
  • Does an object of value type have value semantics?
  • +
  • Does an object of reference type have reference semantics? (See the sketch below.)
  • +
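
+ +

As one concrete illustration of the two semantics (my reading, in Java terms: primitives copy on assignment, object variables alias the same object):

+ +

public class SemanticsDemo {
+    static class Box { int value; }
+
+    public static void main(String[] args) {
+        int a = 1;
+        int b = a;       // value semantics: b gets its own copy
+        b = 2;
+        System.out.println(a);       // still 1
+
+        Box p = new Box();
+        Box q = p;       // reference semantics: p and q name the same object
+        q.value = 2;
+        System.out.println(p.value); // 2, visible through p as well
+    }
+}
+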
+",184279,,45814,,43937.72361,44027.41667,"Value/reference type, object and semantics",,4,0,1,,,CC BY-SA 4.0, +408544,1,,,4/8/2020 17:02,,0,34,"

I love gRPC, but I find every step of the protobuf process rather frustrating (particularly in Python). Even though they are structurally similar to data structures composed of lists and dicts, you can't really handle them that way.

+ +
    +
  • You can't assign a list to a repeated field or a dict to a message (also means you can't use iterators/comprehensions)
  • +
  • You can't delete field attributes either (but you can del msg.lst[:])
  • +
  • Unpopulated fields are not ""falsey"": z.HasField('foo') will be false but not not z.foo is True.
  • +
+ +

I could go on. Basically, it feels like the folks who wrote the Python implementation of protobuf either don't actually know how to write idiomatic Python, or want to actively discourage devs from using protobufs as data structures within a service and only use them at API boundaries. Which is really unfortunate, since they are extremely handy for enforcing types and structures (I aggressively use mypy/hints/autocomplete; **kwargs infuriates me). I've heard (though I haven't been able to find a source) that you aren't actually supposed to use protos dynamically like this for performance reasons, but I've also heard that Google does precisely this.

+ +

That leaves me with a few options:

+ +
    +
  • Deal with un-ergonomic protobuf interfaces, take the possible performance hit
  • +
  • Use dicts and lists and parse them with json_format at ser/de boundaries (files, gRPC, etc) and lose static type checking
  • +
  • Parse at boundaries, but use stricter typed data objects. But now I have to maintain these data structures in lock step with the .proto files (I want to write a custom protoc generator but I don't have the time right now).
  • +
+ +

What's the best approach, given that I like to write rather structured python?

+",217717,,,,,43929.70972,Is it idiomatic to use protobufs as containers within a service?,,0,3,,,,CC BY-SA 4.0, +408545,1,,,4/8/2020 17:03,,0,16,"

We are working on a project where we need to accept a series of business events related to a set of common users. Those events are produced by various external systems. We need to ingest them, detect anomalies (based on shared referential data and formats), detect potential fraud, and try to rebuild user actions across the various systems in order to detect any suspicious behavior.

+ +

Here is a quick diagram illustrating the current solution:

+ +

+ +

For now, we are going to receive 40 GB of data each day from those external systems, and we have to analyze it as quickly as possible. In the future, the data will arrive in a more real-time fashion (by API, or as small files every 15 minutes, for example).

+ +

Our first architectural proposition was to create a sort of data pipeline, with data pulled by the interested services at each step. But as I research this topic, I am wondering whether Kafka Streams, or a solution based on Apache Spark/Flink, would be more appropriate. Could you confirm (or not) whether, in your view, we should switch to a solution based on Kafka, Spark or Flink, and why? Additionally, a large infrastructure could be necessary to run a Kafka- or Spark-based solution, whereas for now we are closer to a batch-oriented solution that may require fewer servers, don't you think?

+",285517,,,,,43929.71042,Datapipeline integration and analysis,,0,0,,,,CC BY-SA 4.0, +408547,1,408554,,4/8/2020 17:08,,0,183,"

My controller calls the service layer. The service layer calls a repository or does whatever it does. If I just return a Person object, for example, how do I know it was successfully retrieved?

+ +

I can only think of two ways to handle this, but wasn't sure if there was another way or if one of these was a more appropriate method for handling this.

+ +
    +
  1. Throw an exception in the service layer if there is an issue. The controller can catch the exception and do whatever it needs to do.
  2. +
  3. Wrap the object I want to return in a ServiceResponse object that has a success boolean and error messages that can be checked in the controller? (A sketch of this option follows below.)
  4. +
+ +
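
A minimal sketch of what option 2 could look like (the exact shape of the wrapper is just an assumption):

+ +
public class ServiceResponse<T> {
+    private final boolean success;
+    private final String errorMessage; // null when success is true
+    private final T data;              // null when success is false
+
+    private ServiceResponse(boolean success, String errorMessage, T data) {
+        this.success = success;
+        this.errorMessage = errorMessage;
+        this.data = data;
+    }
+
+    public static <T> ServiceResponse<T> ok(T data) {
+        return new ServiceResponse<>(true, null, data);
+    }
+
+    public static <T> ServiceResponse<T> fail(String errorMessage) {
+        return new ServiceResponse<>(false, errorMessage, null);
+    }
+
+    public boolean isSuccess() { return success; }
+    public String getErrorMessage() { return errorMessage; }
+    public T getData() { return data; }
+}
+

+ +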

Am I way off base?

+",362542,,,,,43929.83403,What should the response of the Service Layer be?,,1,1,,,,CC BY-SA 4.0, +408559,1,,,4/8/2020 21:55,,4,196,"

Consider the following general form of a layered architecture :

+ +

+ +

I need to check my understanding of the second layer from the top, specifically the meaning of ""User interface management"". As I understand it, UI management is the code that underlies the UI, for example the code behind each button and each UI element. So, consider a user who needs to generate a report from a system that uses this layered pattern as its architecture. The user presses the generate-report button; the code underlying this button knows that it needs to run the code for the generate-report system functionality (which lives in the third layer). The generate-report code (let's assume it is implemented as a class) then refers to the database to get the information required to generate the report, and the information ""propagates upward"" until it reaches the UI again to be viewed by the user. Is this right? In other words, is this the meaning of user interface management, and do I understand the mechanism of this architecture correctly?

+",351790,,,,,43960.50347,A question regarding the Layered Architecture,,1,0,,,,CC BY-SA 4.0, +408565,1,,,4/9/2020 7:49,,1,221,"

I have just heard for the first time of the Unified Modeling Language, or UML (note: I am only an amateur software engineer), which Wikipedia states is a "general-purpose modeling language in the field of software engineering that is intended to provide a standard way to visualize the design of a system."

+

The UML website seems to claim that UML is quite general, stating:

+
+

You can model just about any type of application, running on any type and combination of hardware, operating system, programming language, and network, in UML

+

You can use UML for business modeling and modeling of other non-software systems too

+
+

However, when I look at the 14 types of diagrams in UML, I am not sure but it seems to me that this modelling language was primarily designed for specifying particular aspects of what I would call the "specification". Clearly, UML is not suited for specifying literally all aspects of what we need from a system, and it shouldn't be: As an extreme example, a specification language like UML cannot fully capture the specification that the user interface should be "easily understandable", and we obviously shouldn't expect this of UML.

+

However, given that (of course) UML can't be used to model or make precise all aspects of what we want from our system, I am wondering what the benefits of using UML are and are not: after reading some introductions, I am still unsure what these diagrams can and cannot usefully be used for.

+
    +
  • Is UML really as useful for non-object-oriented projects as it is for object-oriented ones?

    +
  • +
  • What aspects of a specification can UML fruitfully capture in practice, and what is it less useful for?

    +
  • +
+",293724,,-1,,43998.41736,43930.37083,How general is UML actually?,,2,0,1,,,CC BY-SA 4.0, +408569,1,,,4/9/2020 8:35,,0,49,"

I'm looking at the design of Dropbox's edgestore. It manages several thousand MySQL instances, where all of Dropbox's metadata resides (users, filenames, etc.). I understand why sharding is necessary, and I understand why it's very useful to have the client talk to an edgestore core that routes the request to the correct MySQL shard (edgestore engine). What I don't understand is the benefit of adding the following complexity:

+ +
+

Since our workload is read-heavy, Cores use a Caching Layer to speed lookups. The caches are also partitioned and replicated for high-availability. Edgestore provides strong-consistency by default, which requires invalidating caches on writes. If the workload can tolerate stale reads, clients can request eventual consistency.

+
+ +

What's wrong with just relying on the LRU cache each MySQL instance provides by default? Is the application caching so much more efficient? How? Extra points for showing me how it is so much more efficient that it justifies maintaining such a complex mechanism.

+ +

+",285259,,,,,43930.48611,what extra benefit do I get from a caching layer over the caching layer of the database?,,1,0,,,,CC BY-SA 4.0, +408571,1,,,4/9/2020 8:54,,2,142,"

I have three branches in my git tree: master contains the validated source version, develop contains staging versions, and then I have some feature branches.

+ +

For now, I have the following structure; the numbers represent individual commit hashes for simplicity.

+ +
master              develop     feature
+  o----------o----------o----------o
+  1          2          3          4
+
+ +

Commits 2 and 3 come from a feature I merged into the develop branch. I then started working on a new feature branch.

+ +

I need to rewrite what was done in commits 2 and 3. For this, I need to branch my current work on the new feature from master and rebuild the merged branch I had containing commits 2 and 3.

+ +

Here is what my git repo tree should look like.

+ +
[What I want]
+master/develop
+      o 1
+      |                rewrite-feature
+      +----------o-----------o
+      |          2           3
+      |
+      |       feature
+      \----------o
+                 4
+
+ +

master and develop are pushed. feature is a local branch.

+ +

I am the only one working on this project, so I can mess with the origin if needed without impacting anyone.

+ +

I'm really not sure what I should do, especially regarding the pushed master and develop branches.

+ +

I think I should rebase the feature branch onto commit 1. But I don't understand how to deal with those commits 2 and 3 without messing with the public repository.

+",362601,,362601,,43930.40417,44110.71042,Rewind commits to new branch in git,,2,0,,,,CC BY-SA 4.0, +408574,1,,,4/9/2020 10:00,,-3,54,"

I want to design a system that performs certain actions at a predefined time in the future.

+ +

i.e. at 2pm on April 10th 2020, do X

+ +

I'm looking for patterns that would ensure that the intended action is triggered at the right time & only triggered once.

+ +

Ideally, I think it would be best if the action has no knowledge of the scheduling system, and that the scheduling system has little to no knowledge of the action.

+ +

Therefore, the action should be idempotent and handle that itself; then we can, to a certain extent, ignore the triggered-only-once requirement.

+ +

The scheduler would run off a data store containing records like:

+ +
{
+   action: <some way to know the correct action to perform>
+   trigger_after: <timestamp>
+   triggered: <boolean>
+}
+
+
+ +

Then the scheduler would run every N seconds (however accurate you need it to be)

+ +

Then we trigger any jobs that match where trigger_after < now() and triggered = False

+ +

Then we update the triggered status when the actions have been dispatched.
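
+ +

A minimal sketch of that polling pass (JobStore is a hypothetical abstraction over the data store; the action is assumed idempotent, as above):

+ +

import java.util.List;
+
+public class SchedulerLoop {
+    interface JobStore {
+        List<String> dueJobIds(long nowMillis); // trigger_after < now AND triggered = false
+        void markTriggered(String jobId);
+    }
+
+    private final JobStore store;
+
+    SchedulerLoop(JobStore store) { this.store = store; }
+
+    // One polling pass; call this every N seconds from a timer.
+    void pollOnce() {
+        for (String jobId : store.dueJobIds(System.currentTimeMillis())) {
+            store.markTriggered(jobId); // mark first; a crash then re-runs at most an idempotent action
+            dispatch(jobId);
+        }
+    }
+
+    void dispatch(String jobId) { /* hand the job off to its action */ }
+}
+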

+ +

What am I missing? What patterns are out there that perform something similar?

+",362603,,,,,43930.54167,What patterns exist out there for a job scheduler?,,1,2,,,,CC BY-SA 4.0, +408580,1,,,4/9/2020 12:53,,-1,67,"

I'm reading a design suggestion for facebook/instagram social network in grokking the system interview (closed content :( so I'll describe the relevant part in my question).

+

They are speculating about how the sharding of the data is done and, based on that choice, how the allocation of photo_id (story_id/content_id/whatever_id) is done:

+
    +
  1. Shard by user_id, and then use a simple auto-incrementing sequence for the photos of that user
  2. +
  3. Shard by photo_id. this requires some key generating service, as there's no single database for the user's photos.
  4. +
+

They say that the reason to prefer the more complex alternative (2) could be that option (1) can cause:

+
    +
  1. Non-uniform distribution of storage
  2. +
  3. Running out of space for a single user
  4. +
  5. Unavailability of all user's photos if a shard is down
  6. +
  7. High latency for a user if there's a high load
  8. +
+

My question (finally): are these all real problems? Isn't the scale of Facebook/Instagram large enough that working at the granularity of a single user is good enough to spread the load?

+",285259,,379622,,44154.51458,44154.51458,Facebook sharding by user_id or by photo_id?,,1,1,,,,CC BY-SA 4.0, +408584,1,409003,,4/9/2020 14:13,,0,45,"

I'm following the python documentation about ""distributing packages using setuptools"". It doesn't mention what to do if a python package you want to include in your install_requires requires a system installation. For example:

+ +

I'd like to include the package pysoundfile, but this requires me to install libsndfile1 via my system package manager, e.g. sudo apt-get install libsndfile1.

+ +

I've got as far as understanding I need to make a binary distribution, given advice in the docs here, but that sends me to the docs listed above.

+ +

How do I automate this process such that users of my package can simply pip install my_package and not have to apt-get install libsndfile1?

+",362628,,,,,43940.72222,Packaging a python project which requires non-python packages,,1,0,,,,CC BY-SA 4.0, +408586,1,408588,,4/9/2020 14:41,,2,197,"

Say two microservices provide information for a product: one provides general information (A), the other product images (B). Similar to the architecture described in the gateway aggregation pattern, I would like to create an aggregation gateway between these services A & B.

+ +

Now, if multiple applications accessed that aggregation gateway as in the image below, would that conflict with microservice architecture?

+ +

+",362630,,,,,43930.62917,Is the Aggregator Pattern with Multiple Clients in Microservice Architecture Valid?,,2,1,,,,CC BY-SA 4.0, +408589,1,,,4/9/2020 15:42,,-1,41,"

I'm working on a product for tracking workers'/staff's locations, and in that case the application needs to be able to run on the phone for at least 10 hours without the phone suddenly closing the app. I cannot find much information on how to make sure the app is not closed, and on how the app can run in the background and still receive location-change events.

+ +

The code is based on Xamarin. So the question is: would there be any difference in app priority in these cases:

+ +
    +
  1. Running the app as a WASM app from the browser on the phone
  2. +
  3. Creating a ""shell app"" with a WebView and then running the WASM app from there
  4. +
  5. Creating a Xamarin app
  6. +
+ +

The app should target IOS and Android

+ +

The app will do all the calculation on the phone and not send any location data back to the server. It will just report ""now and then"" ""2 hours at customer x, 1 hour at customer y""

+ +

What would be the best way to make sure the app will run the whole day after it has been opened?

+",362633,,,,,43931.70972,GPS application targeting mobile devices,,1,1,,,,CC BY-SA 4.0, +408591,1,408593,,4/9/2020 16:33,,-4,126,"

Java was last revised in 2020, while C++ had its last update back in 2017.

+",362642,,,,,43930.71181,Why is C++ not updated as frequently as Java?,,2,9,,43930.71944,,CC BY-SA 4.0, +408597,1,408607,,4/9/2020 18:19,,-2,56,"

I am currently developing a karaoke app in JavaFX, and I need advice on the best data structure for storing a huge list of songs (the library). I am inclined to use a binary search tree due to its O(log n) performance.

+ +

I am using Java's String compareTo function for insertion, but searching proves difficult with compareTo. Therefore, I am not sure it is the best option for searching through strings.

+ +

I am using https://www.bigocheatsheet.com/ for Big O Performance Metrics.

+ +

Should I just ditch binary search trees, or use another simple structure like a List? (Its search performance, however, will be O(n).)

+ +

Song Data & Sorting: I load the sample song data provided to me by reading the file and storing it in the data structure. I want to search only by song name (using Java's String contains method), since it's a basic program. Here is the sample song data (a search sketch follows it below):

+ +
Decades (2007 Remaster) Joy Division    374 test.mp4
+Lets Stay Together  Al Green    199 test.mp4
+Jump For Joy    New York Trio   286 test.mp4
+Victims Of The Revolution   Bad Religion    197 test.mp4
+Unstable Condition  John Tejada 348 test.mp4
+Go or Linger    Natas loves you 173 test.mp4
+Ocean Front Property    George Strait   197 test.mp4
+Negai (Album-Mix)   Perfume 298 test.mp4
+A Little Bit More   Jamie Lidell    186 test.mp4
+Good Position   Yin Yoga Academy    219 test.mp4
+Weekend Kane Brown  226 test.mp4
+Oh Industry Bette Midler    244 test.mp4
+
+",362651,,362651,,43930.82083,43930.89931,Optimal Data Structure for Searching though String,,1,6,,,,CC BY-SA 4.0, +408600,1,408603,,4/9/2020 19:35,,-5,59,"

About 4 years ago, where I was working, software was still being developed with Java 7 + Swing.

+ +

It took me a couple years, but we could finally migrate to Java 8 + JavaFX, two years after that.

+ +

Now I intend to migrate again. This upgrade, however, seems more complex at a glance. While Java is what I have developed with the most, I am by no means knowledgeable on this topic.

+ +

What is the best practice on upgrading versions, regarding Java? +Should I only migrate to LTS versions? (In which case, I would go from 8 to 11) Is there any other favored version at the moment?

+ +

Keep in mind that this is intended for software developed for a production environment, and both, stability and maintenance are my main concern.

+ +

Should there be any concerns with software still being developed under JavaFX? Along with the new Java version, should I change to another GUI-module?

+ +

Please give me guidance, Gurus!

+",362655,,,,,43930.85278,Java version: When to migrate?,,1,2,,43932.32153,,CC BY-SA 4.0, +408604,1,408606,,4/9/2020 20:40,,1,41,"

I'm an IT guy (read: not a professional programmer) and have made an internal monitoring webtool which allows users to search for printers on our print servers. Based on the filtering criteria given, it returns a series of divs representing the matching printers. Each printer div contains a bunch of live information about the relevant printer.

+ +

JS handles actually populating the divs in an asynchronous AJAX-y way, because live information is being polled from the printers, and that can take a while for each printer, and different printers respond faster or slower, or not at all.

+ +

Each AJAX call made by JS runs some PHP which, among other things, pulls data from one or more CSV files containing information about how to talk to the different printers. However, this design means that for every single printer being polled in a given search, these files must be re-read and the pulled data re-processed.

+ +

Granted, the tool works fine like this, and has for years, but I've always wanted to optimize it. The repeated file reads seem wasteful. But given the AJAX implementation, I've not been able to conceive of a way around this. It would be nicer if this data could somehow be read only once, stored in memory, and accessed as necessary by all AJAX calls prompted by a given search. But I have no idea how that would work, since each call is a separate PHP process.

+ +

I suppose an efficient database (instead of using files) is the obvious answer, but I've always avoided that for several reasons:

+ +
    +
  • Nothing about this tool needs to store new information, or bank historical information. All the data kept in the CSV files is only updated occasionally, by an external scheduled task, or manually as necessary.
  • +
  • Everything about this tool is intended to be real-time data (i.e. uncached, and user-agnostic).
  • +
  • A database is just another dependency, that requires maintaining, and is separate from the tool's text files, lowering the portability and increasing the complexity of maintaining the tool.
  • +
  • It still doesn't solve the problem of doing a bunch of repeat (albeit asynchronous) processing of the pulled data.
  • +
+ +

So maybe my preferences stated above preclude me from making the desired improvements, and that's fine. But I wanted to ask if there are any potential solutions you can think of that do fit into this design.

+ +

Thanks for your time.

+",362658,,,,,43930.8875,How could I optimize an AJAX-based site by avoiding unnecessary/duplicate file-reads for each AJAX call?,,1,2,,,,CC BY-SA 4.0, +408610,1,,,4/10/2020 2:03,,1,81,"

I'm in charge of designing the entire backend for the REST API of an application that works more or less like an online browser game (think OGAME, Travian, and the likes). In this game, players are able to loot procedurally generated equipment, like in Borderlands. This equipment can have any amount of bonii to all the different stats present in the game; so to speak, a gun could potentially grant you more Health or Intelligence.

+ +

Right now, they have given me a database schema they were using as a Proof of Concept. Said DB schema is in dire need of a redesign, and since we are in such early stages of development, I might as well go all in with it.

+ +

My problem is as follows: the server application generates random equipment on demand, and the generated equipment is stored in the database. Depending on the type of the generated piece of equipment, it may appear either in the Weapons table (1:n with the player, using a third table [?] for the relation), the Armor table (1:n with the player, same as with the Weapons table), the Ammo table (1:1 with the player, since Ammo types are hardcoded) and the Consumables table (1:1 with the player, same as with the Ammo table). Armor and Weapon tables are similar, but not equal, so merging them in one table isn't possible unless we use a solution similar to what I am about to ask.

+ +

One of the usecases is, obviously, listing the player's current Inventory. As is, this would take three different queries (I'm not expecting a huge concurrent user volume, but I'd rather be safe than sorry). In addition, this design has some limitations:

+ +
    +
  1. A scenario in which we would need to run several versions of the game (v1, v2, v3, etc) simultaneously is likely, and these versions may be forwards-compatible. Using our current schema, this would mean creating a new table or even database for all the new versions.
  2. +
  3. As is, the schema does not allow for n:m relationships between Players and objects. In case two users were to roll identical pieces of equipment (which is likely, given the algorithms in place), the row would be duplicated. The changes required to make this work are relatively minor.
  4. +
  5. Consumables and Ammo are hardcoded into the application.
  6. +
  7. This design does not allow for misc. items, like quest items and the likes.
  8. +
+ +

I have thought of a new ""denormalized"" schema in which all objects are stored in a Warehouse table. In this Warehouse table, we would have an UUID for the object and/or maybe a hash field and, since we are using PostgreSQL, a JSONB field with the information about the object. Said information will be mostly read-only, requiring an update only in some rare cases; in addition, searches using data present in these fields will not only be rare, but only performed in some small subsets of data (finding all the Weapons owned by a player, for example, would filter first by Player). This denormalization would not create redundant data in any case, just data coupling. Furthermore, the JSON documents should never be bigger than 2kb, and they would never have nested objects, just one-dimensional arrays at worst.

+ +

I've asked some colleagues and they said it didn't sound like a good idea, but couldn't pinpoint exactly why. As far as I know, this isn't any worse than using MongoDB; in fact, if I were to start using MongoDB, I would have to denormalize much harder than I am doing right now.

+ +

Is this schema that bad of an idea? Should I keep the current design? What are the reasons against coupling different types of JSON schemas in the same column, other than not being able to perform data validation DB-side (which we should be doing on the server, anyway)?

+ +

EDIT: Adding some more context to this question.

+ +
    +
  • Right now, there is a working client for the game. Since it's still in an internal testing stage, the client directly performs the queries against the database.
  • +
  • I was hired to program the server and design the REST API for the release version of the game. I've been given pretty much free reign designing the server architecture.
  • +
  • There is another programmer in the project, but he is working with the client. Communication between server and client will be exclusively done via the REST API.
  • +
  • We will be using Node.js and PostgreSQL for the server because it's what I'm more comfortable with. I can't foresee whether we will be changing the system in the future.
  • +
  • We are using an in-house ORM. It wouldn't be a problem to modify the ORM in case we found a limitation.
  • +
  • Data from/to the API will be sent and received in JSON format.
  • +
  • A very hastily made estimate of the maximum concurrent number of players, based on wild especulation using a single sample of a similar platform, is 2000.
  • +
  • I could ask the other programmer to modify the way his app works (like adding cache and the likes), but I'd rather not bother him that much right now. Consider the app sees the server as a black box.
  • +
  • The proof of concept DB schema is using TPC right now. I had considered TPH (and that's more or less why I thought using JSONB fields would be cleaner; at best I would save a bit of disk space, but the DB still couldn't guarantee the given object is a valid object). Didn't consider TPT but that still doesn't reduce the number of queries, which is why I wanted to use the JSONB schema.
  • +
  • I also thought about emulating a TPH with a view over the Armors and Weapons table, but we still have the issue of not being flexible enough, and we still don't contemplate the Misc. items table.
  • +
  • I can't tell much more about it without saying too much about the project, but trust me when I say a scenario in which we would have to run several game versions at once is likely. Export scripts could be used.
  • +
  • An usecase in which I would need to filter by any of the JSONB's fields is unlikely. Even then, PostgreSQL would allow it with more than acceptable performance, according to most benchmarks.
  • +
  • An usecase in which I would need to update a single member of a JSONB field is unlikely. Even then, PostgreSQL also allows it.
  • +
  • An usecase in which I would need to update the whole JSONB field is slightly more likely, but then again, most objects are read-only.
  • +
  • The given JSON objects would be very simple and relatively small. They would be, at worst, just a key-value pair with arrays filled with primitives.
  • +
  • Weapons and armor, in the proof of concept, are a n:1 relationship with the player. So to speak, both the instance and the weapon blueprint are the same object. This makes sense, but so would be treating weapon blueprints/archetypes as their own thing and then instancing them in the player's inventory using another table.
  • +
  • If this were a personal project I wouldn't be asking this question: I would just pick the JSONB option and gladly shoot myself in the foot with it, if only for the learning experience. But this is a project some other people will be using in production, so I'd rather check twice and thrice to commit to a solution.
  • +
+",362671,,362671,,43931.90417,43931.90417,Should I perform some minor denormalization to save myself several queries in the future?,,1,0,,,,CC BY-SA 4.0, +408611,1,408612,,4/10/2020 2:33,,-2,23,"

I've been working on an implementation of TD-backgammon. The paper/project I'm basing my implementation on is here:

+ +

https://www.cs.cornell.edu/boom/2001sp/Tsinteris/gammon.htm

+ +

Everything makes sense to me up until the point that it talks about the procedure for back-prop. I haven't taken a lot of upper-level maths past Calc II, and I've never taken a formal course on ML/RL.

+ +

The description:

+ +

Backpropagation procedure:

+ +

Given an input vector V and a desired output O.

+ +

Calculate error E between the network's output on V and the desired output O.

+ +

e(s) = (lambda)*e(s) + grad(V)

+ +

V = V + (alpha)*error(n)*e(s)

+ +

where error(n) is:

+ +

For the weight between hidden node i and the output node, error(i)=E*activation(i)*weight(i)

+ +

For the weight between input node j and hidden node i, error(j,i)=error(i)*activation(j)*weight(j,i)
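
+ +

Reading the quoted procedure literally, per weight it seems to reduce to the following update (a sketch of my reading, not the paper's code; all names are mine):

+ +

public class TdUpdate {
+    // e[i] is the eligibility trace for weight w[i]; lambda decays old credit;
+    // grad[i] is the gradient of the output with respect to w[i]; alpha is a
+    // learning-rate step size; delta is the error E between output and target.
+    static void update(double[] w, double[] e, double[] grad,
+                       double lambda, double alpha, double delta) {
+        for (int i = 0; i < w.length; i++) {
+            e[i] = lambda * e[i] + grad[i]; // step 2: e(s) = (lambda)*e(s) + grad(V)
+            w[i] += alpha * delta * e[i];   // step 3: V = V + (alpha)*error*e(s)
+        }
+    }
+}
+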

+ +

The main points I'm confused about are:

+ +

What information is included in the ""eligibility trace vector"" e(s)?

+ +

What is ""(lambda)"" in step 2?

+ +

What is ""grad(V)"" in step 2. Does it stand for the gradient? and if so what does this mean?

+ +

What is meant by ""alpha"" in step 3?

+ +

Any help or resources would be greatly appreciated.

+",362673,,,,,43931.15347,What do these terms refer to in the context of RL/TD learning?,,1,0,,,,CC BY-SA 4.0, +408613,1,,,4/10/2020 3:45,,9,888,"

I am writing a recipe manager for a software engineering class I am taking, and I would like to implement classes (in Java, if it matters) to allow quantities and units.

+ +

Now, taking recipes into account, we have the typical 'scientific' units like kg, g, lbs, ml, l (dividing between SI, SI-derived, avoirdupois, US customary, imperial, etc); then there are the non-standard units like tbsp, tsp, cups, and finally, we have the countable units like 'cloves', 'slices', 'sticks'.

+ +

The good thing is that all of these units have officially been defined in terms of SI units, and can hence be easily converted amongst one another. This is ideally another feature I'd like to implement.

+ +

Right now I have an extremely long enum of these units and their values in the corresponding SI units. I feel I'm doing something wrong.
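
+ +

For what it's worth, the enum can stay short if each constant only stores its factor to a base unit — a sketch with a deliberately tiny set of units (dimension checking omitted):

+ +

public enum Unit {
+    // Mass (base: gram) and volume (base: millilitre); factors per the official definitions.
+    GRAM(1.0), KILOGRAM(1000.0), POUND(453.59237),
+    MILLILITRE(1.0), LITRE(1000.0), TEASPOON(4.92892159375);
+
+    private final double toBase; // factor to the dimension's base unit
+
+    Unit(double toBase) { this.toBase = toBase; }
+
+    // Converts between units; assumes both share a dimension (not checked here).
+    // e.g. Unit.KILOGRAM.convert(2, Unit.POUND) is roughly 4.41
+    public double convert(double amount, Unit target) {
+        return amount * this.toBase / target.toBase;
+    }
+}
+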

+ +

How best should I implement something like this? I have already explored the javax.measure API and its reference implementation, Indriya, but documentation is rather sparse and I'm not sure how to proceed.

+ +

The examples above make use of Java generics, which is something I might use.

+",362678,,,,,43932.76736,Implementation of quantities and units in a recipe manager,,3,9,,,,CC BY-SA 4.0, +408621,1,408662,,4/10/2020 10:27,,1,203,"

I am trying to design an online chess game (figuring out the required classes). I need some suggestions on choosing the better option for validating a move. So, let's say I have the classes below.

+ +

Option 1: My initial thought was to define a validator factory, with a set of validators specific to each piece.

+ +
public class Game {
+    private String id;
+    private Player player1;
+    private Player player2;
+    private GameStatus status;
+    private Board board;
+    private Color currentTurn; // whose turn is it to move now?
+
+    private static Map<PieceType, List<IValidate>> validators;
+
+    static {
+        // validators = initialize the appropriate validators per piece type
+    }
+
+    public boolean makeMove(Move move) {
+
+        PieceType type = move.getPiece().getType();
+
+        for (IValidate validator : validators.get(type)) {
+            if (!validator.validate(move))
+                throw new InvalidMoveException(""Move not allowed!"");
+        }
+
+        // .............
+    }
+}
+
+public class KnightMovesValidator implements IValidate {
+    public boolean validate(Move move) {
+        // TODO: validate whether this is a proper move for this piece
+    }
+}
+
+public class Box {
+    private int x;
+    private int y;
+    private Piece piece;
+}
+
+public class Board {
+    private Box[][] boxes;
+}
+
+public class Move {
+    private Box src;
+    private Box dest;
+    private Piece piece;
+    private Player player;
+}
+
+public enum PieceType {
+    KNIGHT, ROOK, KING, QUEEN // .......
+}
+
+public enum Color {
+    BLACK, WHITE
+}
+
+public class Piece {
+    private PieceType type;
+    private Color color;
+    private boolean isKilled;
+}
+
+ +
+ +

Option 2: When I searched the internet to validate my implementation, I found the design below in many places.

+ +
public class Knight extends Piece {
+
+    public boolean canMove(Box src, Box dest) {
+        // TODO: implement
+        return false; // placeholder
+    }
+}
+
+ +

The idea is that, instead of having an external validator, each piece says whether the requested move is possible or not. It sounds better than my design to me. But I still have some questions here... Box is aware of the class Piece because it holds one.
+ Should Piece be aware of Box (its location)? Shouldn't a piece be independent of Box?

+ +

Is this the correct way of designing our POJOs? What is the thought process behind it? Please shed some light on this.

+ +

Thanks.

+",54050,,54050,,43932.52986,43934.40486,Design Chess - Object Oriented Design,,1,10,,,,CC BY-SA 4.0, +408622,1,408692,,4/10/2020 11:48,,-1,98,"

When developing a REST API, is it OK to use form data in POST requests, or is that frowned upon? All my methods return a JSON body or only an HTTP code; should this be extended to my POST and PUT requests as well? I think that form data is a lot easier to work with, but that might just be because I'm a bit new to developing REST APIs.

+",362692,,,,,43933.11528,Is it bad practice to in a REST API to use Form Data in POST requests when all responses are in JSON?,,2,1,,,,CC BY-SA 4.0, +408624,1,,,4/10/2020 12:09,,1,57,"

I have a question and it is quite simple. Here are the details:

+ +
    +
  • I want to pass an options object into my mainFunction and some other functions inside of it, i.e. someFunctionCall and anotherFunctionCall. This options object is very much for configs. Because my algorithm has many different settings, I am much better off passing the whole object rather than certain properties.

  • +
  • Additionally, within my mainFunction, I compute a value and assign it to a variable (hardToComputeThing). This computation is computationally intensive, so when my config states that it isn't needed anywhere in mainFunction, we shouldn't compute it.

  • +
+ +

The tricky part is when I use my internal functions, like someFunctionCall. Assume that someBool=false (from options), which means that hardToComputeThing=null. I am passing this empty variable into each function and depending on my config object (options) to inform each internal function whether it should use that empty value or not. I know it seems a little funky, but the great thing I love about using options is that I can still make use of the other keys in options, and it makes explicit what my current config is.

+ +

So my question is: is this good practice? If not, then what should I do? Is it better to pass {x,y,hardToComputeThing} as a function argument and handle the missingness of hardToComputeThing, e.g. banana = (hardToComputeThing) ? hardToComputeThing[0]*f(apple) : f(apple)?

+ +

Below is a some pseudocode to illustrate my problem.

+ +
const mainFunction = (x, options) => {
+
+  const {someBool} = options
+
+  const hardToComputeThing = (someBool) ? computeTheThing() : null
+
+  someFunctionCall(x, y, hardToComputeThing, options)
+
+  anotherFunctionCall(z, hardToComputeThing, options)
+
+}
+
+const someFunctionCall = (x, y, hardToComputeThing, options) => {
+  const {someBool} = options
+
+  let banana;
+  if (someBool) {
+    banana = hardToComputeThing[0] * f(apple)
+    ... // Some big piece of code
+  } else {
+    banana = f(apple)
+  }
+
+}
+
+
+
+
+",355774,,355774,,43931.51458,43934.325,How to pass variable to function that may or may not be available based upon options?,,2,1,,,,CC BY-SA 4.0, +408630,1,408665,,4/10/2020 13:25,,1,413,"

I was wondering about a use-case scenario for a nested subsystem. Right now, I get why and how to use subsystems, but when should a nested subsystem be used?

+ +

In the picture below, I decided to create 2 different subsystems as they have their own business logic, but they are for the same company. Is it necessary to put them in one big sub-system called ""Intera"" (company name)?

+ +

+",286580,,4,,43932.51042,43932.51042,When should you use nested subsystems inside usecase diagrams?,,1,3,,,,CC BY-SA 4.0, +408632,1,,,4/10/2020 13:46,,-4,147,"

I'm writing a compiler, and I want it to compile to a native executable (just Linux, for now). I don't want it to emit assembly; it needs to be PURE machine code. Can anyone point me in the right direction?

+ +

EDIT: I want to produce x86 Linux machine code.

+",355434,,355434,,43931.675,43931.675,How to Write Pure Machine Code for Linux?,,2,12,,,,CC BY-SA 4.0, +408634,1,408638,,4/10/2020 15:03,,-4,81,"

I'm using the Mocha library, and by default the library uses a test folder. So I have to copy the code I've written for production into the test folder when doing unit tests with Mocha.

+ +

Can anyone think of the reason behind this method? Is it appropriate?

+ +

I'm just a hobbyist and haven't done programming for large companies. I also avoided unit tests through Mocha for years, but it seems it's the right time for me to actually start using Mocha to remove the pain of manual testing.

+ +

EDIT: Actually, I can import the code directly from production, but the practice in the examples I've seen is to use a dedicated folder for unit tests; I'm looking to understand the reasons behind this practice.

+",362721,,362721,,43931.63611,43931.64097,Why do we have to copy all files into dedicated folder when doing unit tests?,,1,7,,,,CC BY-SA 4.0, +408639,1,408641,,4/10/2020 15:51,,0,66,"

Imagine the following pattern: you have a window with information that needs to be updated asynchronously. You launch and detach a thread to handle fetching the information, but while the information is being fetched the window is closed and unloaded from memory. When the background thread goes to update the information in the window's memory, the memory has already been deallocated and a segmentation fault occurs.

+ +

I'm programming in C++, so I could wrap the memory being updated in a std::shared_ptr, but this means the memory has to stay allocated for the duration of the background task, and I'd rather free it immediately. Is there a better way to solve this problem?

+",224382,,224382,,43931.66667,43931.67361,Handling background tasks that may not be relevant when they are completed,,1,5,,,,CC BY-SA 4.0, +408645,1,,,4/10/2020 18:50,,9,499,"

I use Python but I guess my question applies to OOP in general. Whenever I create a class I am never sure whether I should put a parameter/attribute in the constructor or in the method the parameter relates to.

+ +

For example, let's take a Person class which has a days_away method. The role of days_away is to calculate how many days the Person has been away given some timestamps. The constructor of Person (i.e. __init__) will get name as parameter. Will it also get timestamps as parameter or should timestamps go as a parameter of the days_away method which is supposed to calculate how many days the person was away given some datetime periods? Why?

+ +

Edit: To add some context. This is to split an electricity bill between persons sharing an apartment. The bill will be split as a function of the number of days a person has been away so that they don't have to pay for those dates. Days away will be calculated by the days_away method given the timestamps a person left and returned to the apartment.

+",322312,,322312,,43933.33403,43933.33403,How do you decide if a parameter should go to the constructor of the method it relates to?,,6,6,2,,,CC BY-SA 4.0, +408651,1,,,4/10/2020 23:03,,-4,106,"

I'm currently learning Jest, Enzyme, Detox and testing in general, but I'm still trying to grasp the benefit of testing. From what I understand, testing is about creating hypothetical situations. Let's say when a hypothetical parameter is passed into a component, what is the expected result versus the actual result? For example, Jest goes so far as mocking functions, classes, modules, and API calls to create these hypothetical situations.

+ +

I'd understand if the test ran all the possible hypothetical scenarios without me having to individually list all assertions; then creating a test would make sense. For example, if I wanted to test a function:

+ +
test('the data is peanut butter', done => {
+  function callback(data) {
+    try {
+      expect(data).toBe('peanut butter');
+      done();
+    } catch (error) {
+      done(error);
+    }
+  }
+
+  fetchData(callback);
+});
+
+ +

I have to list multiple hypothetical results myself individually, i.e. ""peanut butter, strawberry jam, butter, etc."", to see if they'd all return the expected results. I'm not sure how this is different from just running the actual code and finding out the results or the errors, or from creating arbitrary parameters myself and passing them in. If I didn't want the API to be hit, I could just create a development server. Plus, tools like TypeScript and ESLint provide some safeguards during the initial coding and refactoring, granted they don't specify the expected behaviour of functions and components.

+ +

I've looked at the documentation and the examples from tutorials, but I have yet to find a good real-life example of testing something that I couldn't check by simply running the code. Does the app navigate to another screen? Does a modal open? Is a component visible when an event happens? I can't fathom the benefit of running tests for these over simply running the code. What am I missing?

+ +

Update
+I have no idea why this is getting downvoted, but thank you to those taking the time to answer my question.

+",,user357916,,user357916,43932.57083,43932.64444,What can an assertion test that running the actual code cannot?,,3,0,,,,CC BY-SA 4.0, +408658,1,,,4/11/2020 7:56,,2,222,"

Say you have the following entity that represents an exact resource from a table; we're talking .NET Core with Entity Framework Core, code-first approach.

+ +
public class Person 
+{
+    [Key]
+    public int Id { get; set; }
+    public string FirstName { get; set; }
+    public string LastName { get; set; }
+}
+
+ +

Generally speaking, when returning DTOs from this class I might decide to just return one field called FullName where I would join the first and last name, but rather than duplicating that logic around, how bad of an idea is it to just declare a method on Person?

+ +
public string GetFullName() 
+{
+    return $""{FirstName} {LastName}"";
+}
+
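+ +

For comparison, an alternative shape would be a computed property that EF Core does not map (a sketch; [NotMapped] comes from System.ComponentModel.DataAnnotations.Schema):

+ +
[NotMapped]
+public string FullName => $""{FirstName} {LastName}"";
+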
+",362762,,,,,43932.38889,Are methods a bad practice on .NET Core EF entity classes?,,3,0,,,,CC BY-SA 4.0, +408663,1,408666,,4/11/2020 10:24,,-1,75,"

I have written many HTTP(S) servers in Node.js and just take for granted that I can receive many requests and that all I/O operations are async. However, now I would like to try to implement an HTTP server in C, like this one, and I'm wondering what it would take to make it robust.

+ +

Specifically, what I mean is, I don't want to see a full-fledged HTTP framework in C, because they are quite large and complicated. I just would like to understand how to maximize the number of requests and handle the requests efficiently on one server.

+ +

Is it just as simple as a while (1) loop that looks for incoming socket connections and then parses the HTTP request and serves a response in one fell swoop? (Assuming we don't have any async resources we are fetching in this simplified example.)

+ +
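
In other words, is the naive version below (a rough POSIX sketch, error handling omitted) the essence of it, just done faster?

+ +
#include <arpa/inet.h>
+#include <netinet/in.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <unistd.h>
+
+int main(void) {
+    int srv = socket(AF_INET, SOCK_STREAM, 0);
+    struct sockaddr_in addr = {0};
+    addr.sin_family = AF_INET;
+    addr.sin_addr.s_addr = htonl(INADDR_ANY);
+    addr.sin_port = htons(8080);
+    bind(srv, (struct sockaddr *)&addr, sizeof addr);
+    listen(srv, 128);                 /* kernel queues pending connections */
+
+    while (1) {                       /* one request at a time */
+        int client = accept(srv, NULL, NULL);
+        char buf[4096];
+        read(client, buf, sizeof buf);    /* read (part of) the request */
+        const char *resp = ""HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"";
+        write(client, resp, strlen(resp));
+        close(client);
+    }
+}
+
+ +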

Or are there more tricks you need somewhere within this, such as creating a queue of incoming requests before processing them, then partially processing them in a round-robin sort of fashion until they are individually done, or something along those lines? I'm wondering if someone could outline what needs to be done to maximize performance/throughput.

+",73722,,,,,43932.47014,How does a robust http server handle requests?,,1,1,1,,,CC BY-SA 4.0, +408667,1,408691,,4/11/2020 13:58,,-3,64,"

Suppose there are 2 layers below the layer being tested:

+ +
  1. Technical Logic Layer: calls the DAO layer.
  2. DAO layer: calls the database.
+ +

(The layer being tested can call the Technical Logic Layer but not the DAO layer.)

+ +

If mocks have to be used, when is it better to mock the Technical Logic Layer vs the DAO layer? Let's suppose one has to be mocked because no other solution is allowed.

+ +

I'm looking for the pros and cons of each choice and why one is considered better. Be as comprehensive as possible.

+",362776,,362776,,43932.63958,43933.10625,Layer to mock in tests: database or higher?,,2,2,,,,CC BY-SA 4.0, +408670,1,,,4/11/2020 15:08,,-1,44,"

Recently I've been working on a project to teach myself PHP and SQL, and as the project has gotten more complex I've been wondering what the idiomatic approaches are for creating the backend models from SQL query results.

+ +

For instance, I am building out a feature that lets users of my service invite each other to events. I want to have an endpoint that will fetch all of a user's received invites, which is easy and efficient enough to fetch in a single query. An Invite consists of a sent time, a sending user, a receiving user, and an event (and an event has a ""created-by"" user). Writing out a query for this ends up looking like:

+ +
SELECT 
+    invite.sent_time AS invite_sent_time,
+    event.id AS invite_event_id,
+    event.name AS invite_event_name,
+    event.start_time AS invite_event_start_time,
+    event.end_time AS invite_event_end_time,
+    owner.user_id AS invite_event_owner_id,
+    owner.user_name AS invite_event_owner_name,
+    owner.user_email AS invite_event_owner_email,
+    owner.user_friend_since AS invite_event_owner_friend_since,
+    sent_by.user_id AS invite_sent_by_id,
+    sent_by.user_name AS invite_sent_by_name,
+    sent_by.user_email AS invite_sent_by_email,
+    sent_by.user_friend_since AS invite_sent_by_friend_since,
+    sent_to.user_id AS invite_sent_to_id,
+    sent_to.user_name AS invite_sent_to_name,
+    sent_to.user_email AS invite_sent_to_email,
+    sent_to.user_friend_since AS invite_sent_to_friend_since
+FROM invite 
+INNER JOIN ...
+
+ +

That is, the query ends up containing a column for every nested property in the model that I'll ultimately be creating.

+ +

The resulting rows are then transformed into the actual PHP objects via constructors that look like (simplified for the sake of clarity):

+ +
class Invite {
+    public function __construct(array $arr, string $prefix = '') {
+        $this->sent_time = new DateTime($arr[$prefix . 'sent_time']);
+        $this->event = new Event($arr, $prefix . 'event_');
+        $this->sent_by = new User($arr, $prefix . 'sent_by_');
+        $this->sent_to = new User($arr, $prefix . 'sent_to_');
+    }
+}
+
+class Event {
+    public function __construct(array $arr, string $prefix = '') {
+        $this->id = $arr[$prefix . 'id'];
+        $this->owner = new User($arr, $prefix . 'owner_');
+        $this->start_time = new DateTime($arr[$prefix . 'start_time']);
+        $this->end_time = new DateTime($arr[$prefix . 'end_time']);
+        $this->name = $arr[$prefix . 'name'];
+    }
+}
+
+ +

While this works well enough, there's a number of things I don't love about this solution. It involves a good amount of repetition since I have to repeat the PHP property names across the SQL query and as the keys in the associative array, and it also relies on me typing the strings correctly each time. I'm also wary of having the query blow up in size as my objects get more and more complex.

+ +

So, my questions are: is building the query in this way appropriate? If I've verified via EXPLAIN that the queries are running in a relatively efficient manner, is getting all the info in a single result the right way to do things, or would I be better served returning, say, just the sent_by_id and then querying the users separately before returning the response? If getting everything in a single query is the correct approach, what are the best practices/patterns for turning those query results into the application models?

+ +

I would much appreciate everyone's thoughts on this, or any resources for backend application design that someone could recommend!

+",36826,,,,,43932.7,"Best practices for creating structured data from ""flat"" SQL query results",,1,1,,,,CC BY-SA 4.0, +408676,1,,,4/11/2020 17:42,,3,76,"

I'm writing a language interpreter in C. I'm currently implementing a system that allows writing extension modules in C for the interpreter.

+ +

These modules are loaded into a code file like a normal module, but behind the scenes they are dynamically loaded libraries written in C. This is an approach similar to the one in the Python interpreter.

+ +

Inside C code (whether in the main interpreter or in an extension module), currently the way to instantiate a class is the following:

+ +
ObjectInstance* instance = vm_instantiate_class(class_object, arguments);
+
+ +

vm_instantiate_class takes care of allocating the new instance, calling its classes' initializer method on it if present, and some more bookkeeping.

+ +
ObjectInstance* vm_instantiate_class(ObjectClass* klass, ValueArray args) {
+    // Allocate instance
+    ObjectInstance* instance = object_instance_new(klass);
+
+    // This goes to the instance's class, gets the requested attribute, and if it's a method,
+    // the method is wrapped with a BoundMethod so it remembers it's associated instance when it's called
+    ObjectBoundMethod* init_method = (ObjectBoundMethod*) object_load_attribute((Object*) instance, ""@init"");
+
+    // Invoke the initializer on the instance
+    vm_call_bound_method(init_method, args);
+
+    instance->is_initialized = true;
+    return instance;
+}
+
+ +

This works great for creating instances, whether it's called for user code in the main interpreter loop or from a C extension.

+ +

However, I'm running into a dilemma when considering how I should handle instantiating native classes.

+ +

Currently, a native class in an extension is implemented like so. We define a struct which ""inherits"" ObjectInstance:

+ +
typedef struct {
+    ObjectInstance base;
+    // ... other native fields
+    int x;
+    int y;
+} ObjectInstanceMyClass;
+
+ +

When object_instance_new is called by vm_instantiate_class as seen above, it checks on the class object if it's a native or user class. If it's a native one, the class object will say how much memory actually needs to be allocated for an instance (e.g. sizeof(ObjectInstanceMyClass)). If it's a user class, sizeof(ObjectInstance) is allocated.

+ +

This allows a native instance to be visible to user code like any other instance - but it can actually carry native data inside it, outside of the data types exposed through the language.

+ +

In an extension I'm writing, there is a native class that wraps a native OS resource - suppose a file handle. Inside the extension itself, I would like to be able to instantiate it the same elegant way I instantiate a regular class:

+ +
ValueArray args = make_value_array();
+value_array_write(args, <a native C file handle>);
+ObjectInstanceFile* file_object = (ObjectInstanceFile*) vm_instantiate_class(file_class, args);
+
+ +

However - I can't do that, since <a native C file handle> isn't of a legal type in my language, and thus can't be written to a ValueArray.

+ +

A workaround I can think of is this:

+ +
// Use vm_instantiate_class to create an instance, and do actual initialization
+// ad-hoc in the outside code.
+
+ValueArray args = make_value_array(); // Empty args
+ObjectInstance* instance = vm_instantiate_class(file_class, args);
+ObjectInstanceFile* file_object = (ObjectInstanceFile*) instance;
+file_object->handle = <a native C file handle>; // Initialize the field
+
+ +

This approach would work - but it's just not elegant.

+ +

I would like to be able to call vm_instantiate_class on a class which wraps a native resource (one that has no equivalent at the language level), and have it work without having to further patch the instance after the call.

+ +

How is this sort of thing commonly implemented in language implementations? Specifically, I would like to know about the CPython implementation, but otherwise any language implementation example would do.

+",121368,,,,,43939.38681,How do language implementations implement native-extension class instantiation?,,2,0,,,,CC BY-SA 4.0, +408677,1,,,4/11/2020 17:46,,1,48,"

I have the following Job class representing runs of a job. I keep a list of start and end times in the Job class because the same Job can be rerun.

+ +
public class Job {
+
+      private final List<Timestamp> jobStartTimes = new SortedList<>();
+      private final List<Timestamp> jobEndTimes = new SortedList<>();
+      private String jobName;
+      private String jobKey;
+      private String host;
+      ....
+      ....
+}
+
+ +

I have this Map for querying jobs given jobkey.

+ +
public class JobMap {
+
+         /**
+         * Here value of 'String' key is jobKey
+         */
+         private final Map<String, Job> jobCache;
+}
+
+ +

I have also created the following hierarchy of hashmaps for storing (startTime, jobKey) and (endTime, jobKey) entries in a Map so that I can retrieve job records faster. This is needed because my queries are timestamp-based, for example: return all jobs that ran between timestamps x and y.

+ +
public class YearCache<T> {
+
+        /**
+         * Here value of 'Integer' key is month number (0, 11)
+         */
+        private final Map<Integer, MonthCache> monthCache;
+}
+
+public class MonthCache {
+
+        /**
+         * Here value of 'Integer' key is week number in a month(0, 4)
+         */
+        private final Map<Integer, WeekCache> weekCache;
+}
+
+public class WeekCache {
+
+        /**
+         * Here value of 'Integer' key is day number in a week (0, 6)
+         */
+        private final Map<Integer, DayCache> dayCache;
+}
+
+private class DayCache
+{
+        /**
+         * Here value of 'Integer' key is hour value in a day (0, 23)
+         * T is of type String representing jobKey
+         */
+         private final NavigableMap<Integer, NavigableMap<Timestamp, Set<T>>> hourCache;
+}
+
+ +

I want to get rid of these Java hashmaps and move to Redis Cache. How can I model/architect this hierarchy in Redis Cache?
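
+ +

For example, would a sorted set keyed by start time (a sketch in redis-cli syntax; key names are made up) replace the whole year/month/week/day/hour nesting?

+ +
ZADD job:startTimes 1586700000 jobKey1    # score = job start time (epoch seconds)
+ZADD job:startTimes 1586703600 jobKey2
+ZRANGEBYSCORE job:startTimes 1586700000 1586710000   # jobs that started between x and y
+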

+",361588,,361588,,43958.225,43958.225,Architect Java Hashmap Hierarchy to its equivalent in Redis Cache?,,0,0,,,,CC BY-SA 4.0, +408680,1,408681,,4/11/2020 18:49,,18,3458,"

For example if I have a class like

+ +
class Foo {
+    public int bar;
+
+    public Foo(int constructor_var) {
+        bar = constructor_var;
+    }
+
+    public int bar_plus_one() {
+        return bar + 1;
+    }
+}
+
+Foo foo = new Foo(2);
+
+ +

and in the IDE I type foo.ba I get bar suggested, or if I type String x = foo.bar() I get red squiggles. How does the IDE become context-aware? Is there a code-querying language, is it reflection, or what?

+ +

To clarify my question a little, I am asking because I want to be able to query my code base. I am looking for a tool where I can (essentially) say SELECT name FROM methods WHERE signature IS 3 ints or something like that. I figure whatever something like IntelliSense uses to make suggestions is where I should be looking.
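
+ +

At runtime, that kind of query can at least be faked with plain reflection (a sketch; it only sees loaded classes, whereas IDEs work from static analysis of the source):

+ +
import java.lang.reflect.Method;
+import java.util.Arrays;
+
+class MethodQuery {
+    public static void main(String[] args) {
+        // ""SELECT name FROM methods WHERE signature IS 3 ints"", against one class
+        for (Method m : java.util.Calendar.class.getDeclaredMethods()) {
+            Class<?>[] p = m.getParameterTypes();
+            if (p.length == 3 && Arrays.stream(p).allMatch(t -> t == int.class)) {
+                System.out.println(m.getName());  // prints e.g. ""set""
+            }
+        }
+    }
+}
+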

+",342959,,155513,,43932.84097,43934.49514,What do IDEs use to do code completion suggestions?,,4,7,1,,,CC BY-SA 4.0, +408686,1,,,4/11/2020 22:08,,0,199,"

I'm trying to understand the LSP history rule. I have read the Wikipedia entry, which states the requirement and provides an example:

+ +
+

History constraint (the ""history rule""). Objects are regarded as being modifiable only through their methods (encapsulation). Because subtypes may introduce methods that are not present in the supertype, the introduction of these methods may allow state changes in the subtype that are not permissible in the supertype. The history constraint prohibits this. It was the novel element introduced by Liskov and Wing.
A violation of this constraint can be exemplified by defining a mutable point as a subtype of an immutable point. This is a violation of the history constraint, because in the history of the immutable point, the state is always the same after creation, so it cannot include the history of a mutable point in general. Fields added to the subtype may however be safely modified because they are not observable through the supertype methods. Thus, one can derive a circle with fixed center but mutable radius from immutable point without violating LSP.

+
+ +
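
To make the quoted example concrete, a minimal Java sketch (hypothetical names):

+ +
class ImmutablePoint {
+    private final double x, y;                 // fixed after construction
+    ImmutablePoint(double x, double y) { this.x = x; this.y = y; }
+    double getX() { return x; }
+    double getY() { return y; }
+}
+
+// Breaks the history rule: state observable through the supertype's
+// methods can now change over time via the new mutator.
+class MutablePoint extends ImmutablePoint {
+    private double mx, my;
+    MutablePoint(double x, double y) { super(x, y); mx = x; my = y; }
+    void moveTo(double x, double y) { mx = x; my = y; }
+    @Override double getX() { return mx; }
+    @Override double getY() { return my; }
+}
+
+ +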

In my opinion, there is a problem with this example. Deriving the subtype MutablePoint from the supertype ImmutablePoint already breaks the invariant requirement.

+ +
+

Invariants of the supertype must be preserved in a subtype.

+
+ +

Can you think of a more suitable example of a bad OO design that does not comply with the history rule but nevertheless satisfies the invariant requirement?

+ +

Alternatively can you provide another explanation for why this requirement is necessary?

+",362800,,209774,,43933.96181,43933.96181,"Liskov substitution principle: clarification about the ""history rule""",,2,4,1,,,CC BY-SA 4.0, +408687,1,,,4/11/2020 22:29,,3,253,"

So I am trying to speed up my program by using concurrency and/or multi-threading and/or process parallelism. The topics are pretty complex and I am sort of new to them so I am still trying to figure out which one to use and when.

+ +

My task (rather sub-task):

+ +
  1. Get size of a UNIX directory (recursively). In fact, I will be processing multiple directories at once.
+ +

Based on what I understand, scanning a directory is an I/O-bound process, and, as a result, I decided to use threading instead of multiple processes.

+ +

Here is what I tried (functions work but the results are not really what I expect):

+ +

My directory scanning function - utils.py:

+ +
def get_path_size(path):
+    """"""Returns total size of a file/directory.
+
+    Args:
+        path: File/directory path.
+
+    Returns:
+        Total size of a path in bytes.
+
+    """"""
+    # Size in bytes (B).
+    total = 0
+
+    if os.path.isdir(path):
+        with os.scandir(path) as direc:
+            for entry in direc:
+                if entry.is_dir(follow_symlinks=False):
+                    total += get_path_size(entry.path)
+                else:
+                    total += entry.stat(follow_symlinks=False).st_size
+    else:
+        total += os.stat(path).st_size
+
+    return total 
+
+ +

Here is my multi-threaded function that calls the function above - file1.py:

+ +
import concurrent.futures
+
+import utils  # the module containing get_path_size above
+
+def conc():
+    reqs = [{'path': '/path/to/disk1'}, {'path': '/path/to/disk2'}]
+
+    with concurrent.futures.ThreadPoolExecutor(max_workers=12) as executor:
+        future_to_path = {
+            executor.submit(utils.get_path_size, req['path']): req for req in reqs
+        }
+
+        for future in concurrent.futures.as_completed(future_to_path):
+            path = future_to_path[future]
+            size = future.result()
+            print(path, size)
+
+ +

And here is my function using process parallelism - file2.py:

+ +
import concurrent.futures
+
+from utils import get_path_size
+
+# assumed defined elsewhere in my code
+PATHS = ['/path/to/disk1', '/path/to/disk2']
+
+def paral():
+    with concurrent.futures.ProcessPoolExecutor(max_workers=6) as executor:
+            for path, size in zip(PATHS, executor.map(get_path_size, PATHS)):
+                    print(path, size)
+
+ +

The reason why I am having doubts is that the program seems to finish faster (or at least about as fast) using ProcessPoolExecutor rather than ThreadPoolExecutor. Based on my understanding that get_path_size() is rather I/O intensive, and the docs saying that ThreadPoolExecutor is more suited for I/O work, I find it surprising that paral() runs faster.

+ +

My questions:

+ +
  1. Am I doing it right overall? I mean, should I be using ProcessPoolExecutor or ThreadPoolExecutor?
  2. Any other suggestions on how to make this code better/faster, etc.?
+ +

Edit #1 - Test results:

+ +

I ran 5 tests for each of the 3 options (each test was run one after another on a non-loaded machine): non-parallel, ProcessPoolExecutor, and ThreadPoolExecutor.

+ +

The total size of all directories was 65 GB in this test. Yesterday, I ran these tests on directories with a total size of ~1.5 TB, and relatively speaking the results were pretty much the same.

+ +

Machine spec:

+ +
CPU(s):                20
+Thread(s) per core:    1
+Core(s) per socket:    10
+Socket(s):             2
+
+ +

Non-parallel run-times:

+ +
Duration 38.25443077087402 seconds
+Duration 16.98011016845703 seconds
+Duration 21.282278299331665 seconds
+Duration 37.90052556991577 seconds
+Duration 40.511338233947754 seconds
+
+ +

ProcessPoolExecutor:

+ +
Duration 7.311123371124268 seconds
+Duration 15.097688913345337 seconds
+Duration 15.133012056350708 seconds
+Duration 13.949966669082642 seconds
+Duration 4.563556671142578 seconds
+
+ +

ThreadPoolExecutor:

+ +
Duration 28.408297300338745 seconds
+Duration 7.303474187850952 seconds
+Duration 26.91611957550049 seconds
+Duration 4.6026129722595215 seconds
+Duration 3.424044370651245 seconds
+
+",360573,,360573,,43933.34792,43933.38681,How can I improve the speed of scanning multiple directories recursively at the same?,,3,7,1,,,CC BY-SA 4.0, +408690,1,408713,,4/12/2020 1:25,,0,38,"

I'm a CS student, and I'm doing a project on shared libraries and dynamic linking/loading. One of the questions I have to answer is how symbols are resolved with dynamic linking/loading. I've scoured the internet and haven't been able to find anything conclusive. I understand different linkers may resolve symbols differently across different operating systems. I'm just looking for a general, Windows-based answer: how are symbols resolved in dynamic linking and loading?

+ +
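
For example, explicit run-time loading on Windows looks like this (a small sketch; the DLL and symbol are just examples), and what I want to understand is what happens inside these calls:

+ +
#include <stdio.h>
+#include <windows.h>
+
+int main(void) {
+    HMODULE lib = LoadLibraryA(""user32.dll"");         /* map the DLL into the process */
+    if (lib == NULL) return 1;
+    FARPROC sym = GetProcAddress(lib, ""MessageBoxA""); /* resolve a symbol by name */
+    printf(""MessageBoxA resolved at %p\n"", (void *)sym);
+    FreeLibrary(lib);
+    return 0;
+}
+
+ +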

Thank You!

+",362815,,,,,43933.64097,How are symbols resolved in dynamic linking and loading?,,1,0,,,,CC BY-SA 4.0, +408693,1,408694,,4/12/2020 4:19,,4,301,"

I'm working at an enterprise that has some Angular/TypeScript projects, and to avoid repeating code (basically copying and pasting) between them, we decided to go for a monorepo and start writing a utils library, with unit tests, docs and everything.

+ +

At the moment we're implementing a util function:

+ +
export const normalizeNames = (value: string): string => {
+  if (!isString(value)) {
+    // throw some error
+  }
+
+  // ...
+}
+
+ +

Just as the company is relatively new to testing concepts in general, so am I.

+ +

Since we are at an impasse trying to establish a standard of how the tests should be structured and what we should test, I decided to open this question here.

+ +

The first thing that came to my mind was to separate them in two main groups:

+ +
  • Invalid -> a test for each invalid type I could imagine, like: 1 null, 1 undefined, 1 NaN, 1 boolean, 1 number, 1 array, and others like Buffer, Map, Object, RegExp, Set, etc.;
  • Valid.
+ +

... something like this:

+ +
describe('normalizeNames', () => {
+  describe('invalid', () => {
+    it(`should throw error for the value 'null'`, () => {
+      expect(() => normalizeNames(null as any)).toThrowError(
+        TypeError,
+      );
+    });
+
+    it(`should throw error for the value 'undefined'`, () => {
+      expect(() => normalizeNames(undefined as any)).toThrowError(
+        TypeError,
+      );
+    });
+
+    // other types
+  });
+
+  describe('valid', () => {
+    it(`should return '' for the value ''`, () => {
+      expect(normalizeNames('')).toBe('');
+    });
+
+    it(`should return 'Stack' for the value 'stack'`, () => {
+      expect(normalizeNames('stack')).toBe('Stack');
+    });
+
+    // ... more tests
+  });
+});
+
+ +

...but then I noticed that if I test all the types I can imagine, the tests would be too big and maybe difficult to maintain.

+ +

Another solution I thought of is to create two arrays and do something like below, to avoid the repetition:

+ +
const invalidTestCases = [
+  { actual: null, expected: TypeError },
+  { actual: undefined, expected: TypeError },
+  // more...
+];
+const validTestCases = [
+  { actual: '', expected: '' },
+  { actual: 'stack', expected: 'Stack' }, // it's just a sample data
+  // more...
+];
+
+describe('normalizeNames', () => {
+  describe('invalid', () => {
+    for (const { actual, expected } of invalidTestCases) {
+      it(`should throw error for the value '${actual}'`, () => {
+        expect(() => normalizeNames(actual as any)).toThrowError(
+          expected,
+        );
+      });
+    }
+  });
+
+  describe('valid', () => {
+    for (const { actual, expected } of validTestCases) {
+      it(`should return '${expected}' for the value '${actual}'`, () => {
+        expect(normalizeNames(actual)).toBe(expected);
+      });
+    }
+  });
+});
+
+ +
+ +

So, the questions are basically:

+ +
  1. Is it okay to separate the tests into these two main ""groups""?
  2. Is it acceptable to have tests for all possible ""types""? Otherwise, which entries would you recommend for the invalid tests?
  3. For the second solution: is it good practice to write tests in that way, with loops?
+",362821,,362821,,43934.16181,43934.16181,Testing unexpected inputs for unit tests and loops?,,3,4,1,,,CC BY-SA 4.0, +408708,1,408711,,4/12/2020 13:18,,-3,225,"

We have multiple applications with different features. We want to create a central security system where each user is assigned a role. A role has access to certain applications and to certain features within those applications. What is the best way to define this in a database with good normalization?

+ +

We want to identify the roles, the applications they have access to, and the features within those applications they have access to, ahead of time, so a user is only given a role and by default gets everything that role has privileges to.
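
+ +

A rough sketch of the shape I have in mind so far (table and column names are placeholders):

+ +
CREATE TABLE users         (user_id    INT PRIMARY KEY, name VARCHAR(100));
+CREATE TABLE roles         (role_id    INT PRIMARY KEY, name VARCHAR(100));
+CREATE TABLE applications  (app_id     INT PRIMARY KEY, name VARCHAR(100));
+CREATE TABLE features      (feature_id INT PRIMARY KEY, name VARCHAR(100),
+                            app_id INT REFERENCES applications(app_id));
+
+-- each user is assigned one role; a role grants a set of features
+CREATE TABLE user_roles    (user_id INT PRIMARY KEY REFERENCES users(user_id),
+                            role_id INT REFERENCES roles(role_id));
+CREATE TABLE role_features (role_id    INT REFERENCES roles(role_id),
+                            feature_id INT REFERENCES features(feature_id),
+                            PRIMARY KEY (role_id, feature_id));
+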

+",362848,,362848,,43934.07569,43934.07569,Database design user user roles applications and features,,1,3,,,,CC BY-SA 4.0, +408710,1,408714,,4/12/2020 14:38,,-4,112,"

I'm new to TDD. That being said, I'm trying to understand why I would have to use an assertion library when there is console.log/print.

+ +

In fact, in JavaScript I can see a more detailed error log through console.log than what a single assertion statement displays. console.log helps me more than an assertion library.

+ +
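
To show the contrast I mean (a tiny sketch with a made-up value):

+ +
const { expect } = require('chai');
+
+const result = 'actual value';              // stand-in for the code under test
+
+console.log(result);                        // prints, but the test run still passes
+expect(result).to.equal('expected value');  // throws an AssertionError on mismatch
+
+ +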

I do use Mocha for testing and Chai for assertions. I like Mocha for its grouping features, but Chai feels like an optional thing.

+ +

A similar question was asked here, https://stackoverflow.com/questions/29725571/what-is-assert-in-javascript , but none of the answers addressed the console.log point.

+",362721,,362721,,43933.61458,43933.65833,Why use Assertion Library when there is print and console.log?,,2,1,1,,,CC BY-SA 4.0, +408717,1,,,4/12/2020 18:12,,0,31,"

I need clarification about the correct process for using CD to update an environment where a Docker Swarm runs. I understand that I can configure my CD to execute docker service update --image foo:1.1.0 fooservice inside the swarm, and so keep my stack on the latest image. But I do not understand how to manage my stack.yml files: I initialize the swarm by copying onto the machine my stack files, which define the ""state of the art"" of my services; but when the CD updates the running images, the files become outdated.

+ +

Should I replace the service update command with something that modifies the stack files and redeploys? Or is it expected that the yml files become obsolete?
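
+ +

In other words, would the CD step look something like this instead (a sketch; the image tag and stack name are examples)?

+ +
# bump the image tag inside the stack file from the CD pipeline
+sed -i 's|image: foo:.*|image: foo:1.1.0|' stack.yml
+
+# redeploy; swarm only touches services whose definition changed
+docker stack deploy -c stack.yml mystack
+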

+",263163,,326536,,43933.79792,43933.82222,Docker swarm update through Continuous Deployment,,1,0,,,,CC BY-SA 4.0, +408719,1,,,4/12/2020 19:14,,1,116,"

My question is more educational than an actual coding problem. I tried searching the web, but got little help. I am trying to learn how to write ISRs and understand how they interact with user threads.

+ +

Consider a situation where I have an ISR which, on getting triggered, will copy data from an interrupt source (UART) to a struct. I have a user thread which will want to read this struct at a later point in time. My questions are twofold: (a) What are the different ways to share common data (e.g. a struct) between an ISR and a user thread? (b) In the case of a shared struct (as above), how do we perform synchronization between the ISR and the user thread?

+ +
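
To make (b) concrete, here is the kind of pattern I have seen sketched elsewhere (bare metal, no particular RTOS; the interrupt mask functions are hypothetical placeholders):

+ +
#include <stdbool.h>
+#include <stdint.h>
+
+typedef struct { uint8_t buf[64]; uint8_t len; } UartPacket;
+
+static volatile UartPacket shared;   /* written by the ISR, read by the thread */
+static volatile bool       ready;
+
+void disable_uart_interrupt(void);   /* hypothetical: mask the UART IRQ */
+void enable_uart_interrupt(void);    /* hypothetical: unmask it again */
+
+void uart_isr(void) {                /* interrupt context: keep it short */
+    /* ... copy received bytes into shared.buf and set shared.len ... */
+    ready = true;                    /* publish only once the struct is complete */
+}
+
+void user_thread(void) {
+    UartPacket local;
+    for (;;) {
+        if (ready) {
+            disable_uart_interrupt();           /* critical section begins */
+            local = *(UartPacket *)&shared;     /* take a consistent snapshot */
+            ready = false;
+            enable_uart_interrupt();            /* critical section ends */
+            /* ... process 'local' outside the critical section ... */
+        }
+    }
+}
+
+ +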

Please note: I am new to these topics.

+",362866,,173647,,43935.87292,43935.87292,How do ISRs and user threads synchronize and share data?,,1,1,,,,CC BY-SA 4.0, +408729,1,408736,,4/13/2020 2:35,,-2,90,"

I am making a small ""search engine app"". The app should get three pieces of input from the user:

+ +
  1. The name of the search engine as a string (e.g. ""google"", ""duckduckgo"", etc.)
  2. The search term
  3. The name of the website the user wants to search on, if they want to search only webpages of a particular website (e.g. ""reddit.com"", ""stackoverflow.com"", etc.)
+ +

I have a SearchEngine class for this, which I am thinking would initialize like so:

+ +
se = SearchEngine(url=""https://google.com"")
+
+ +

However, the user will be entering ""google"" as input, not the entire https://google.com URL.

+ +

Now, I plan to have both a command line and a web interface.

+ +

For example, in the CLI the engine name would come through an input function:

+ +
engine_name = input(""Search engine name: "")
+
+ +

In a web interface the engine name would come through an HTML input element.

+ +

So, the client is sending a name while the SearchEngine class is getting a URL. One way to solve that ""incompatibility"" would be to initialize a SearchEngine object like below:

+ +
se = SearchEngine(name=""google"")
+
+ +

and convert the name to a URL inside the SearchEngine constructor. I am afraid that would be a bad design.

+ +

Would it be better to have another intermediate layer (a class?) that converts the name to a URL? That way SearchEngine would be independent of the user/client input format.

+ +
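
The intermediate layer I am picturing would be something like this (a sketch; the mapping is hardcoded and uses the SearchEngine class above):

+ +
ENGINE_URLS = {
+    'google': 'https://google.com',
+    'duckduckgo': 'https://duckduckgo.com',
+}
+
+def search_engine_from_name(name):
+    # translate the user-facing name into the URL SearchEngine expects
+    return SearchEngine(url=ENGINE_URLS[name])
+
+ +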

What design pattern am I using here? What design pattern should I use?

+",322312,,,,,43934.32083,What design pattern am I up to here?,,1,3,,,,CC BY-SA 4.0, +408738,1,,,4/13/2020 8:52,,-1,31,"

I read a few articles and Stack posts but I'm still unsure how to use MVC properly.

+ +

One of my app features is handling meetings - each having a list of subjects to discuss. My database tables look like this (simplified):

+ +
CREATE TABLE meetings (
+  id int(11) NOT NULL,
+  date date NOT NULL,
+  moderator varchar(128)
+);
+CREATE TABLE meeting_subjects (
+  id int(11) NOT NULL,
+  meeting_id int(11) NOT NULL,
+  `order` int(11) NOT NULL, -- quoted because ORDER is a reserved word
+  subject varchar(128) NOT NULL
+);
+
+ +

My controller allows browsing and creating new meetings.

+ +

I don't know how to properly design the controller and model. I'd like to ask a few questions that should help me use a proper MVC design:

+ +
  1. Should I have one or two models?
    • It seems having a model per table could fit the definition of the anemic domain model anti-pattern. I don't expect to work with meeting_subjects on their own.
    • On the other hand, when using a model per table I could gain from my framework helpers (it happens to be CodeIgniter\Model providing CRUD methods).
  2. How do I handle input data in the controller?
    • Should my controller deal with the meeting vs. its subjects separately?
      +$meeting_id = $meetingsModel->addMeeting($_POST['date'], $_POST['moderator']);
      +$meetingsModel / $subjectsModel->addSubjects($meeting_id, $_POST['subjects']);
    • Or should I rather have a single model method and pass all data to it?
      +$meetingsModel->addMeeting($_POST['date'], $_POST['moderator'], $_POST['subjects']);
  3. [optional] In case of two models and the controller calling one method: I assume it's fine to call a subjects model method from a meeting model method?
+",362898,,,,,43934.46042,Using MVC with multi-table data,,1,0,,,,CC BY-SA 4.0, +408739,1,408742,,4/13/2020 9:08,,43,7697,"

In this Docker beginner video it's explained that different stacks may depend on different libraries and dependencies, and that this can be handled with Docker.

+ +

However, I don't get what the difference should be between a library and a dependency. As I see it, a library is a collection of code/packages and a dependency is a library that the database/webserver/tool depends on.

+ +

So is there any difference? Or is saying ""a database relies on specific libraries and dependencies"" the same as ""a database relies on specific libraries""?

+",224677,,173647,,43935.51319,43936.63056,What is the difference between a library and a dependency?,,4,8,9,,,CC BY-SA 4.0, +408745,1,,,4/13/2020 13:55,,0,309,"

We are about to plan our server architecture and we want to use the BFF strategy with node.js servers to serve multiple front-end apps.

+ +

However we also want to be able to scale easily (e.g. a new front-end should be served) and want to reuse code with minimal maintenance overhead.

+ +

So my questions are:

+ +
  1. How can I scale and manage a network of BFFs
    • that have similarities but also differences,
    • that are written with Node.js and follow the REST pattern (but don't make it a requirement)?
  2. Are there other ways to reuse code for multiple BFFs without the use of packaging (as you will need to maintain many packages this way, and development of new features requires changes in those packages as well)?
+ +

My idea was to create some kind of server manager that creates and runs new BFFs based on a configuration and a module. But I'm not completely satisfied with that approach because it isn't dynamic enough, generates overhead with the modularization, and might become monolithic, and because I smell a better solution. I can't be the only one with this kind of problem, can I?

+ +

Edit

+ +

I have added a picture of the code duplication that I see. This is the simplest form; the question is what you are going to do if you have 10 BFFs all using service2.

+ +

Now service2 gets a new feature (or any other kind of update) that you have to add to each BFF. If the developer is really unlucky, he may have to think about configuring e.g. BFF7 differently for service2 than the rest.

+ +

Also, the problem is not only how to connect the BFF to the services, but also what to provide to the frontends if it is the same thing.

+ +

So on the one hand you have many similarities that should be abstracted, but they should be configurable as well.

+ +

Hope that clarifies things.

+ +


Edit2

+ +

I noticed that I only thought about REST APIs, but maybe a BFF is also supposed to support other stuff as well (e.g. socket.io for a chat).

+",362922,,362922,,43936.65069,43936.65069,DRY(Don't repeat yourself) Principle and BFFs (Backend-for-Frontend),,0,8,2,,,CC BY-SA 4.0, +408746,1,408770,,4/13/2020 14:38,,2,177,"

My question is about an ""edge case"" of the UML class diagram. In particular, I have loads (about 30) of classes that implement an interface. They can be split into two groups of similar classes. Within each group the classes only differ by the implementation of the methods from the interface; however, they represent fundamentally different problems (different partial differential equations). Since they are so similar, I figured I could ""stack"" them on top of each other and clarify the idea with a note.

+ +

Sure, I could also create only two classes (one for each group) and pass the implementation of the function at instantiation. However, I think that this is even worse than the current representation. Also, this would not be practical, since I expect to instantiate 100 or more objects of one class.

+ +

What do you think about the current state of the class diagram in the picture? Do you have any other ideas? Any help is welcome!

+ +

+",362920,,362920,,43935.26667,43935.37569,How to visualise multiple similar classes in an UML class diagram?,,1,6,,,,CC BY-SA 4.0, +408752,1,,,4/13/2020 17:19,,1,64,"

My View has a textfield and a button. According to the MVC pattern, a function of the controller should be called on button click. This function should do some operation on the View's textfield content. Is it the job of the View to pass the String contained in the textfield, or should the controller grab it?

+ +

To let you understand, here are the two sequence diagrams:

+ +

[Image: the two sequence diagrams]

+ +

Which one best fits MVC?
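
+ +

In code, the two variants would look roughly like this (a plain-Java sketch, no UI framework; names are made up):

+ +
class View {
+    private String textFieldValue = ""hello"";
+    String getTextFieldValue() { return textFieldValue; }
+
+    // Option 1: the View pushes its own data to the Controller
+    void onButtonClick1(Controller c) { c.process(textFieldValue); }
+
+    // Option 2: the View only signals the event; the Controller pulls the data
+    void onButtonClick2(Controller c) { c.onButtonClicked(this); }
+}
+
+class Controller {
+    void process(String value) { /* ... operate on the value ... */ }
+    void onButtonClicked(View v) { process(v.getTextFieldValue()); }
+}
+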

+",358382,,,,,44205.58403,Should the View pass data inserted from user to Controller or should the Controller get data from View's fields? (MVC Pattern),,2,0,,,,CC BY-SA 4.0, +408755,1,408771,,4/13/2020 18:02,,3,324,"

Rewritten Question

+ +

I appreciate the feedback and in response to that I'm re-writing my question. I can't give my specific situation (classes, etc), nor do I think that it would be helpful, as I work in a very niche area that wouldn't make much sense to those outside of it, but I will try and use a similar but invented analogous situation to give something more concrete.

+ +

I have an application and two libraries of interest here. These are quite mature (about ten years old) and relate to a product that has been selling that long. The application reads files of various types, including images, and makes them searchable and viewable. It also produces a detailed report.

+ +

One library (ImageIO) is responsible for reading images. It doesn't just read JPEGs and PNGs, but hundreds of different formats that have been encountered over the years. Formats are continuously being added to it. It can also spit out standard formats like PNGs and JPEGs.

+ +

Another library is responsible for the reporting. It doesn't just handle images, but all sorts of file types. It gives a detailed report including a list of all of the metadata used.

+ +

When I got handed the code, the main application has a class called Document which contains, among other things, a list of Images. An Image has some set properties and methods including Height, Width, and GetBitmap. Each of the image types has its own subclass; JpegImage, PngImage, TiffImage, DicomImage and so on. Most of these have custom properties; camera used, white point, colorspace, title, GPS location and so on. Most have between one and six extra properties. Some properties are common to many types (like exif data), while many image types, particularly the more niche types (like BobsImage) have properties that are unique to that image type.

+ +
Image
+
+// Some methods
+int[][] GetBitmap()
+
+// Some properties
+int Height
+int Width
+
+ +

The main application only uses a few of these properties when they exist. The reporting library reports on them all. There are dozens of properties. There are no special methods, though behind the scenes, some types use some properties for the standard methods. For example, using the aspect ratio for producing the BitMap.

+ +

The application uses a magic string to tell the reporting library what sub-class the images really are. The reporting library then uses that to cast the Image back to its subclass, and then heaps of ifs and switches to report accordingly.

+ +

I was not happy with this architecture. My first attempt was to turn Image into and an IImage interface, and then bundle properties into groups and have relevant interfaces for the extra. The IImage seems to work fine, but the properties are an issue; there were about as many interfaces as properties, and then they were tested with an ""is a"" style test, which felt like I was pretty much back with the switch statements.

+ +
IImage
+
+// Some methods
+int[][] GetBitmap()
+
+// Some properties
+int Height
+int Width
+
+IGps
+
+Double[] GetGps()
+
+ +

My second attempt was to just add bool HasProperty(PropertyId id) and T GetProperty<T>(PropertyId) to the IImage. Then none of the other interfaces were required.

+ +
enum PropertyId
+GpsData,
+ExifData, ...
+
+
+IImage
+
+// Some methods
+int[][] GetBitmap()
+
+// Some properties
+int Height
+int Width
+
+// New methods
+bool HasProperty(PropertyId id)
+T GetProperty<T>(PropertyId)
+List<PropertyId> GetSupportedProperties()
+
+ +

This really cleaned up the Reporting library; it could enumerate over the GetSupportedProperties and no ifs or switches. It also means it didn't have to care about the hundreds of sub-classes, and in fact, sub-classes weren't even required. A generic Image class that implemented IImage could be made that just contained a list of properties, types for run-time type checking and values.

+ +

It still feels bad. It removes compile-time type checking. For example, var gps = GetProperty<string>(PropertyId.Gps) would compile, but a Gps is a double array, not a string. So it would throw an exception at runtime.

+ +
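
A variation that would restore the compile-time check (a sketch with made-up names, not something we have implemented) is a generic key object that carries its value type:

+ +
public sealed class PropertyKey<T>
+{
+    public string Name { get; }
+    public PropertyKey(string name) { Name = name; }
+}
+
+public static class Keys
+{
+    public static readonly PropertyKey<double[]> Gps =
+        new PropertyKey<double[]>(""Gps"");
+}
+
+// IImage would then expose:
+//   bool HasProperty<T>(PropertyKey<T> key)
+//   T GetProperty<T>(PropertyKey<T> key)
+// so GetProperty<string>(Keys.Gps) no longer compiles.
+
+ +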

Also, Flater points out I'm corrupting the point of interfaces, and he is completely right. The reason I'm asking this question is because I think my answer is dirty; it's just the least dirty answer I have. The first and second approaches seemed worse (the original seemed much worse).

+ +

The solution would preferably be able to handle adding properties easily. I have no control over what data image formats decide to use. We have not written a single image format; we either get them from specs (like PNG), or as with about 95% of out formats, we reverse engineer them. That is the benefit our software brings; it understands, views and reports on rare file types (including image formats). About 70% of our time goes into reverse engineering new formats, which arrive on our doorstep faster than we can reverse engineer them.

+ +

The reverse engineering really hampers forward planning. You might find it hard to believe some of the data that is stored. I'm constantly surprised, and I've been doing this for over a decade. This means that we have to be reactive, as we can't be proactive.

+ +

When I used the tree of interfaces (I don't care if they inherit from IImage or from others as needed) I find that I do have fewer interfaces than there are image types, or properties, but still dozens. And checking to see if an object implements an interface doesn't feel much better than calling HasProperty, but perhaps that is my own subjective issue.

+ +

Flater's suggestion seems to line up with my first attempt (the second model) a bit, and Simon B seems to be suggesting my current, second attempt (the third model) is best. I could be reading this wrong. If either is true, I'll live with the dirty feelings; it just felt like there must be some better approach out there, though I haven't found it.

+ +

I hope the context, though fake (but only a little fake) helps. I'm sorry I wasn't more clear the first time around. I hope this is better. I appreciate the time people took to help, and I will eventually accept an answer.

+ +
+ +

Old question kept for reference only

+ +

I am refactoring a smelly class and I'm sure I'm making a pig's ear of it. It feels like a common problem, but I can't see a common solution. As the domain is fairly niche, I've changed names etc.

+ +

I have an interface, let's say IThing, which has a few methods and started with a few properties. As time went on, many different IThings cropped up with different properties. (IThing is a sort of interface to multiple different reverse-engineered Things that we have no control over, so the properties are thrust on us.)

+ +

We ended up with a pattern of the sort bool HasSpecialNumber, int SpecialNumber {get; set;}. This got smelly as we added more and more properties, with every implementation having to implement 20+ methods just to say they don't support a property.

+ +

I thought of using a mixin approach, but maybe I'm thinking of this wrongly, because it would involve as many interfaces as properties or combinations of properties, and a lot of casting. It also seems heavy-handed when I'm only providing properties here and the methods are not changing.

+ +

An IThing looks sort of like this (C#ish pseudo-ish code)

+ +
IThing
+
+// Some methods every Thing supports
+DoSomething
+DoSomethingElse
+
+// A bunch of properties some Things support
+bool HasSpecialNumber { get; }
+int SpecialNumber { get; }
+
+bool HasName { get; }
+string Name { get; }
+
+... and so on
+
+ +

Apart from the smell, every time a property was added, a whole bunch of classes broke. These all needed to be serialized too, using protobuf-net. Many of these classes were only distinct in that they had special objects.

+ +

The next thing we tried was reducing the properties to two methods, with a private method for adding properties.

+ +
IThing
+
+// Some methods every Thing supports
+DoSomething
+DoSomethingElse
+
+// A bunch of properties some Things support
+bool HasProperty( PropertyIdEnum propertyId )
+T GetProperty<T>( PropertyIdEnum propertyId )
+
+// Private method for adding properties
+void AddProperty<T>( PropertyIdEnum propertyId, T value )
+
+
+ +

This sort of worked. Dozens of properties became two accessor methods, and updating the PropertyIdEnum didn't break anything. The AddProperty was used to add properties to a dictionary that mapped the IDs to objects, with a Type stored alongside to ensure no weird casting errors. But I exchanged compile-time type checking for run-time type checking. Also, protobuf-net doesn't support serializing Objects or Types, though that is an implementation detail.

+ +

We ditched the AddProperty abstraction and went back to dozens of classes. That resolved the protobuff-net issue at the cost of having a lot more classes to worry about. We still lack the compile-time type safety.

+ +

I see this issue all over the place in areas I work in. For example, ffmpeg and the CODECs they deal with, each with special behaviour. The solutions they use are constrained by backwards compatibility though, and they are working in heavily optimized C while I'm in C#. Is there some pattern or advice for dealing with a runaway set of properties that need to be handled through a single common interface? If I had control over the properties I would just not be in this situation in the first place, but I don't, so here I am.

+",126281,,126281,,43945.72847,43950.84583,Problem with runaway number of properties,,7,17,,,,CC BY-SA 4.0, +408759,1,,,4/13/2020 20:14,,-4,64,"

I have a piece of code that I developed in an academic context for which I would like to build a nice frontend. My approach to coding has been very academic to this point (read: I made stuff up as I went and didn't worry about other users), but that is no longer a luxury I have, so I want to plan this out carefully before I start anything, and I am unfamiliar with best practices so I am hoping for advice.

+ +

My code so far consists of a standalone piece of C software that is compiled to an executable and run via the command line. It is highly optimized, runs like lightning, runs clean through Valgrind, and has been thoroughly tested by >20 users over 4 years of daily use, but it isn't very user friendly and doesn't do any input validation, so it's easy to break if you don't know how to use it.

+ +

In order to incorporate this into a python GUI, I have a few options, and I would like opinions or alternatives from those more experienced than I.

+ +
  1. Write the Python GUI as a completely separate entity, use it to build the necessary config files and validate the input. Make a system call to the compiled C executable when appropriate.
+ +

Pros: easy, simple, won't require much refactoring. The C library is very stable, and not having to change it at all to use it is worth consideration.

+ +

Cons: Almost certainly the wrong way to do things. Not sure why, exactly, but please talk me out of this one.

+ +
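
For concreteness, option 1 would boil down to something like this (a sketch; the tool name, flag and config file are made up):

+ +
import subprocess
+
+# the GUI validates input, writes the config, then shells out to the compiled tool
+subprocess.run(['./mytool', '--config', 'run.cfg'], check=True)
+
+ +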
  2. Repackage the C code into a library from which I build Cython modules. Use these modules within the Python GUI.
+ +

Pros: Using Cython as intended, basically to glue together fast and optimized modules of C code. Unit testing can happen entirely on the Python side (right?), and all of the input validation can be done on the Python side as well, completely hiding the C backend from the user. It can also be turned into a more object-oriented structure than the C code presently allows.

+ +

Cons: I am new to Cython, so there will be a learning curve. It will also require refactoring of the C code (in a nutshell, the usual main() function that handles control flow would be moved to the Python side), which comes with the potential for bug introduction and added complexity.

+ +
  3. Some third thing I have not thought of yet.
+ +

Any advice or alternatives to the above are welcome.

+",189511,,,,,43934.93056,Best practices when interfacing Python and C code,,1,0,,,,CC BY-SA 4.0, +408760,1,408789,,4/13/2020 20:48,,-3,65,"

We have a case where as it stands our API looks like

+ +

api/workOptions/{workOptionsId}/items/{workOptionId}

+ +

There exists business logic that only one workOptionId can have a status of ""preferred.""

+ +

Rather than forcing the client to submit an entire resource document containing all sub elements pertaining to individual workOptions we want to create a separate resource to handle POSTs or PUTs of individual workOptions, e.g.

+ +

api/workOption/{workOptionId}

+ +

The issue is whether it is acceptable, from a RESTful perspective, to toggle ""off"" the ""preferred"" status of other workOptions in the event that someone POSTs or PUTs to the latter individual workOption endpoint a workOption document having a status of ""preferred."" Thoughts?

+ +

We're also faced with creating a ""proxy"" resource on another resource, i.e. departments e.g.

+ +

api/departments/{departmentId}/workOption/{workOptionId}

+ +

Naturally this would imply that POSTs to the departments/.../workOption API would cause a further API call to the workOptions API to do the actual update. Are ""proxy"" sub-resources acceptable RESTful practice? I assume they are, i.e. the backend should be an implementation detail and not the concern of the API. However, I'm not completely positive on this.

+",288577,,,,,43935.68194,"Is a toggle side affect or propagation side affect, Restful?",,2,7,,,,CC BY-SA 4.0, +408761,1,,,4/13/2020 22:20,,-4,31,"

I'm developing my first Python plugin for a 3D application.

+ +

What I'm looking to do

+ +

I would like users who've purchased a subscription to the plugin from my WordPress/WooCommerce web site to log in to the plugin with the user name and password from my web site. The plugin would then send a REST request to see if the user's subscription is valid. I have a few questions about how this is done, if anyone with experience can advise:

+ +
  1. It seems like I would need to store an OAuth consumer key & secret with read-only privilege on the client's computer for requesting a REST response. The main plugin file will be encrypted. Should I store it there, or should I consider another method?
  2. I'd imagine I'd need to either require an internet connection for the plugin to work, or somehow create a token that stops usage if the user cancels their subscription. The WooCommerce REST result is JSON. How would I generate such a token?
+ +

Any guidance would be so helpful. Thank you!

+",362960,,,,,43934.94583,How would I authenticating a Software Plugin with Username/Password using REST?,,1,1,,,,CC BY-SA 4.0, +408766,1,,,4/14/2020 4:36,,-4,64,"

I know that browsers use a separate port for each tab. However, in each tab, there might be multiple scripts doing data transfer over the network. How does a browser make sure that the data is delivered to the right script, to prevent security issues? How does it differentiate between all the scripts?

+",362994,,362994,,43935.54375,43935.54375,How do browsers isolate traffic within a single tab?,,2,0,0,,,CC BY-SA 4.0, +408768,1,,,4/14/2020 7:33,,0,25,"

I am designing an application that will run on AWS ECS. The app will be able to run with multiple configurations for jobs.

+ +

There will be two packages on Git:

+ +
  1. Config Repository
  2. Application
+ +

It is a long-running application, and the configuration repository will have the following structure.

+ +
  • joba
    • config.json
  • jobb
    • config.json
  • jobc
    • config.json
+ +

and I have 300 job configurations; the service will be running in 300 Docker containers on ECS, one for each job config. My deployment strategy needs to provide the following:

+ +
  1. All services must be running.
  2. A config change for a job should be detected, and only that job should be deployed, so there are no unnecessary deployments of the other jobs.
  3. If the software changes, all jobs should be deployed.
+ +

It looks like I need to somehow keep my config repository in some external storage, such as S3 or DynamoDB. However, how can I detect changes in the jobs, and how should I build my deployment strategy?

+ +
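
One detection idea, assuming the one-folder-per-job layout above (a sketch for the CI job):

+ +
# list job folders whose config changed since the previous commit
+git diff --name-only HEAD~1 HEAD -- '*/config.json' | cut -d/ -f1 | sort -u
+
+ +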

If you could point me to some sources or provide your thoughts on the problem, that would be great.

+",362999,,,,,43935.31458,How to design fan-out deployments for Docker containers?,,0,0,1,,,CC BY-SA 4.0, +408779,1,,,4/14/2020 11:11,,2,124,"

Let's say we have two variables, eta and phi related by eta = cos(phi).

+ +

Is there a way to link these variables in any programming language such that there's no need for two different functions, phiToEta(phi) and etaToPhi(eta)?

+",363023,,,,,43935.7125,Is there a way to specify a two-way relationship between variables?,,3,5,,,,CC BY-SA 4.0, +408780,1,,,4/14/2020 11:50,,2,165,"

This is for an Android app, but I think the question applies to any software designed with a service layer.

+ +

Our app is structured with a presentation layer that handles the UI and a service layer beneath it, comprising lots of service objects that the UI layer will call when it needs to perform some business logic. Say there's an EventRecordingService that records whenever the user clicks something and lots of UI classes hold a reference to the same EventRecordingService object.

+ +

Now let's say the EventRecordingService needs to assign an incrementing number to each event it sees, which means it has to maintain a counter. Obviously it would be better to be stateless, but sometimes that can't be avoided. Now, if an event is recorded from two different threads simultaneously, unless access to the counter is synchronized, it could get confused and give the wrong result.

+ +
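
For concreteness, the counter hazard in question, next to the cheap fix (a sketch):

+ +
import java.util.concurrent.atomic.AtomicInteger;
+
+class EventRecordingService {
+    private int unsafeCounter = 0;                        // lost updates possible
+    private final AtomicInteger safeCounter = new AtomicInteger();
+
+    int nextUnsafe() { return ++unsafeCounter; }          // read-modify-write race
+    int nextSafe()   { return safeCounter.incrementAndGet(); }
+}
+
+ +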

My attitude to this kind of thing is normally to never make any classes thread safe unless they need to be, because thread safety is expensive and difficult. Right now, all calls to this service happen in the same thread and there are no plans to add more, so it's fine.

+ +

My colleague argues that it should be thread safe because in the future, someone else might come along and call the service from a different thread. The service doesn't look stateful from the outside, the counter is an implementation detail, and while there's no plans to add more calls to it currently it's easy to imagine it happening in the future. In this case not making it thread safe could be dangerous, because it might appear to work with the new call added but just occasionally go badly wrong.

+ +

I feel he has a point, but also, it seems like if we make everything thread safe even if it doesn't need to be we'll never get anything done.

+ +

So what's the normal thing to do in cases like this? Should we add a warning to the class saying it isn't thread safe, or make it thread safe? Or should we find a way to make it stateless at all costs?

+",344326,,,,,43935.85139,Should services in a service layer be thread safe?,,3,0,,,,CC BY-SA 4.0, +408781,1,409271,,4/14/2020 11:55,,1,51,"

We have a multi-vendor marketplace (e-Commerce) system and plan to move our data structures into a polyglot architecture to improve the performance of reads (critical) and writes (not as critical).

+ +

Offers are placed on a product by several dealers. We have local and online offers and the default offer (first to show in the result) is the cheapest online offer.

+ +

[Image: the two request types described below]

+ +

Therefore we typically have two types of requests, as the image shows: one for lists (search results), and a details view with all offers (local and all/online).

+ +

Entity Relationships:

+ +
  • 1 Offer has 1 Product
  • 1 Product has N Offers
  • 1 Offer has 1 Dealer
  • 1 Dealer has N Offers
+ +

My idea is to store the product information in a Cosmos document store, since the product information changes rarely. Product entities have search filters (characteristics) as key-value pairs.

+ +

Here is a simplified JSON of an offer:

+ +
{
+ ""offerNumber"": 1234,
+ ""dealerId"": 1000,
+ ""price"": 19.99,
+ ""quantity"": 5,
+ ""product"" : {
+  ""title"": ""My Title"",
+  ""filters"": 
+  [
+   { ""key"": ""value"" },
+   { ""key"": ""value"" }
+  ],
+  ""description"": """"
+ }
+}
+
+ +

An offer typically has a stock (quantity) and price as you see in the model. My first approach was to simply store the cheapest online offer in the document database and all other offers into a relational database (Azure SQL).

+ +

Because of frequently changing stock and price information and due to the nature of document databases not being optimized for updates, a relational database seems to be the more appropriate storage for the offers. For the products, since they don't change frequently, a document database is the better choice.

+ +

My concern is the volume of updates made to stock and prices. Is there any way to optimize a search index using both sources or does anyone have experience with similar use cases and the different storage types on Azure?

+",351056,,351056,,43936.44028,43946.24583,"Storing, indexing and searching data using Azure with frequently updated fields. Multiple Sources",,1,2,,,,CC BY-SA 4.0, +408787,1,,,4/14/2020 12:48,,0,94,"

In a game development project, the client wants the game to have a short duration. +Is the requirement ""the duration of the game should be short"" a non-functional requirement or a functional requirement?

+",363032,,,,,43935.56944,Is this non-functional requirement?,,1,2,,,,CC BY-SA 4.0, +408794,1,,,4/14/2020 15:36,,2,45,"

Wikipedia gave an example of State Pattern:

+ +

Define LowerCaseState and MultipleUpperCaseState, both of which implement State.

+ +
interface State {
+    void writeName(StateContext context, String name);
+}
+
+class LowerCaseState implements State {
+    @Override
+    public void writeName(StateContext context, String name) {
+        System.out.println(name.toLowerCase());
+        context.setState(new MultipleUpperCaseState());
+    }
+}
+
+class MultipleUpperCaseState implements State {
+    /* Counter local to this state */
+    private int count = 0;
+
+    @Override
+    public void writeName(StateContext context, String name) {
+        System.out.println(name.toUpperCase());
+        /* Change state after StateMultipleUpperCase's writeName() gets invoked twice */
+        if (++count > 1) {
+            context.setState(new LowerCaseState());
+        }
+    }
+}
+
+ +

Then we have a Context class:

+ +
class StateContext {
+    private State state;
+
+    public StateContext() {
+        state = new LowerCaseState();
+    }
+
+    /**
+     * Set the current state.
+     * Normally only called by classes implementing the State interface.
+     * @param newState the new state of this context
+     */
+    void setState(State newState) {
+        state = newState;
+    }
+
+    public void writeName(String name) {
+        state.writeName(this, name);
+    }
+}
+
+ +

The usage will be:

+ +
public class StateDemo {
+    public static void main(String[] args) {
+        StateContext context = new StateContext();
+
+        context.writeName(""Monday"");
+        context.writeName(""Tuesday"");
+        context.writeName(""Wednesday"");
+        context.writeName(""Thursday"");
+        context.writeName(""Friday"");
+        context.writeName(""Saturday"");
+        context.writeName(""Sunday"");
+    }
+}
+
+ +

Note that both the State interface and StateContext have a writeName() method. If we add more methods like that, say writePoem(), writeCode(), writeEssay(), etc., we have to add them twice: once in State and once in StateContext.

+ +

The writeName() method in StateContext merely propagates the ""write name"" request (or event) to the current State.

+ +

Is there any way to eliminate this duplication? Or is it usually considered acceptable?
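
+ +

For illustration, here is one shape such a refactoring might take (a hypothetical sketch, not part of the Wikipedia example): collapse the per-method forwarding in StateContext into a single generic dispatch method that takes a method reference.

+ +
@FunctionalInterface
+interface StateAction {
+    void apply(State state, StateContext context, String arg);
+}
+
+class StateContext {
+    private State state = new LowerCaseState();
+
+    void setState(State newState) {
+        state = newState;
+    }
+
+    // One forwarding method covers every write operation, so adding
+    // writePoem() etc. to State no longer requires touching the context.
+    public void perform(StateAction action, String arg) {
+        action.apply(state, this, arg);
+    }
+}
+
+// usage: context.perform(State::writeName, ""Monday"");
+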

+",363043,,,,,43935.71528,State Pattern: duplication in State and Context,,1,2,,,,CC BY-SA 4.0, +408806,1,,,4/14/2020 19:36,,1,98,"

I need to model the persistence of combinatorial information. For example, suppose that a combination of 3 attributes (A, B, and C) is used to reference a given product. Also suppose that a given product can belong to different combinations of A, B and C. The queries will be used to constrain the possibilities of product selection based on user requirements. Here I'm giving a simple example, but this system is intended to combine hundreds of tables with thousands of combinations.

+ +

For now I can think of two approaches. +Model 1:

+ +
Table: PRODUCT_SEARCH_CONDITIONS
+| key search id | attribute 1 | attribute 2 | product id |
+| 1             | value 1     | value 1     | 1          |
+| 2             | value 1     | value 2     | 1          |
+| 3             | value 2     | value 3     | 2          |
+| 4             | value 2     | value 4     | 2          |
+| 5             | value 3     | value 5     | 1          |
+
+ +

Model 2:

+ +
Table: PRODUCT_SEARCH_CONDITIONS
+| key search id | product id  |
+| 1             | 1           |
+| 2             | 2           |
+| 3             | 1           |
+
+Table: PRODUCT_SEARCH_ATTRIBUTE_1
+| key search id | attribute 1 |
+| 1             | value 1     |
+| 2             | value 2     |
+| 3             | value 3     |
+
+Table: PRODUCT_SEARCH_ATTRIBUTE_2
+| key search id | attribute 2 |
+| 1             | value 1     |
+| 1             | value 2     |
+| 2             | value 3     |
+| 2             | value 4     |
+| 3             | value 5     |
+
+ +

With model 1, the queries and the modeling are simple, basically because I have only one table that stores all possible combinations explicitly. However, I have the combinatorial explosion problem, because each row represents one valid combination: if I add more attributes and values, the table grows exponentially with the number of valid combinations.

+ +

With model 2, the queries and the modeling are more complex, but I don't need to store all possible combinations explicitly; I can group valid combinations together (e.g. key search id 1 groups values of attribute 2). Besides that, the queries can be solved more efficiently, as I can calculate the intersection between the IDs found for each attribute given the user requirements.

+ +
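
To sketch the intersection idea for model 2 (SQL shown as a Java string; identifiers are normalized from the tables above and are illustrative):

+ +
// Which products match attribute 1 = ? AND attribute 2 = ?
+String sql =
+    ""SELECT DISTINCT c.product_id "" +
+    ""FROM product_search_conditions c "" +
+    ""JOIN product_search_attribute_1 a1 ON a1.key_search_id = c.key_search_id "" +
+    ""JOIN product_search_attribute_2 a2 ON a2.key_search_id = c.key_search_id "" +
+    ""WHERE a1.attribute_1 = ? AND a2.attribute_2 = ?"";
+
+ +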

My question is: is there a better modeling approach that combines the advantages of both models (simplicity and performance)? Or even a specific database (or search engine) technology that better supports this kind of problem?

+",342319,,,,,44171.21111,How to model combinatorial information in RDBMS,,2,1,1,,,CC BY-SA 4.0, +408808,1,,,4/14/2020 20:01,,-2,85,"

I'm building a project and I only know the MERN stack. Everywhere I see the general consensus is that Mongo is not good and should only be used in very specific use cases. More importantly, I also want to expand my skills so I thought this would be a great time to try out PostgreSQL since there also seems to be a ton of jobs requiring this so I can kill two birds with one stone.

+ +

Here's my problem.

+ +

I'm building an application with unstructured data. What I mean is that I'm building something like a counter where users can input something and it will be counted how many times it has been entered that day. To give an example, let's say it's an app that tracks the snacks you've eaten throughout the day. In Mongo the structure would look like this

+ +
{
+    user_id: ""someId"",
+    date: 1586822400,
+    snacks: {
+        ""orange"": 5,
+        ""kit-kat"": 100,
+        ""peanuts"": 30
+    }
+}
+
+ +

Then if the user were to add ""peanuts"" again, it would increment the value of peanuts by 1, so it would say 31 instead of 30. However, if the user were to add a random snack called whizits, it would upsert a key-value pair, so the document would look like this

+ +
{
+    user_id: ""someId"",
+    date: 1586822400,
+    snacks: {
+        ""orange"": 5,
+        ""kit-kat"": 100,
+        ""peanuts"": 30,
+        ""whizits"": 1
+    }
+}
+
+ +

Then once the day ends, a new document will be created with an empty snacks object and it'll start all over. This is unstructured because I have no idea what snacks the user will end up adding. Sure, there will probably be a few common snacks that will be added like fruits and chips and whatever else, but there will be the occasional whizits.

+ +

How can I create a schema like this in Postgres? The only way I can think of is to have a table where the snacks column is just a really long string of snacks separated by spaces or something, so I can count all the snacks. So it would look like

+ +
+---------+------------+-------------------------------------------------+
+| user_id | date       | snacks                                          |
++---------+------------+-------------------------------------------------+
+| someId  | 1586822400 | ""orange orange orange orange orange kit-kat ... |
++---------+------------+-------------------------------------------------+
+
+ +

This just seems like a horrible way of doing this so any other ideas are appreciated!

+ +
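
For comparison, a normalized alternative (sketched as SQL carried in Java strings; the table and column names are made up): one row per user/day/snack, and an UPSERT that bumps the counter.

+ +
String ddl =
+    ""CREATE TABLE snack_counts ("" +
+    ""  user_id TEXT NOT NULL,"" +
+    ""  day DATE NOT NULL,"" +
+    ""  snack TEXT NOT NULL,"" +
+    ""  count INT NOT NULL DEFAULT 1,"" +
+    ""  PRIMARY KEY (user_id, day, snack))"";
+
+// Insert a new snack, or increment the existing row, in one statement.
+String upsert =
+    ""INSERT INTO snack_counts (user_id, day, snack) VALUES (?, ?, ?) "" +
+    ""ON CONFLICT (user_id, day, snack) DO UPDATE "" +
+    ""SET count = snack_counts.count + 1"";
+
+ +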

What's important is that I want writes to be very fast, because I want to update in real time; imagine a ton of users entering snacks at the same time. I'm using web sockets to relay it as fast as possible. Reads won't happen as frequently, since the only time a read will occur is when the user goes to their dashboard to see statistics about how many snacks they ate and other data.

+ +

PS. Maybe this is one of those cases where Mongo or any other NoSQL db would do better? I posted my project elsewhere asking what db to use, and people said to use a relational db since this is relational data (users have snacks, etc.). Maybe it's because I'm new to this, but I don't see how I can make this work. If you have any other suggestions I'm all ears!

+",360567,,,,,43935.88056,Adding unstructured data to a relational database (PostgreSQL),,1,2,,,,CC BY-SA 4.0, +408812,1,,,4/14/2020 21:29,,0,48,"

I've willingly inherited a VB.Net forms project based on .Net 3.5, last edited with VS2012. I was able to open it up and up-convert it to VS2017. I can compile and run it and make some little tweaks. The code is written a little like a VBA project: modules filled with random classes behind form files, and 1000-line methods behind button actions.

+ +

No namespaces are defined and it's not clear to me how I should try to re-organize the code without breaking it. (Because I already have done that once.)

+ +

Every time I create a new project in the solution, it creates subfolders and I need to add references to the existing project from the new one. It feels a little clunky and disorganized.

+ +

So far, I've started to pull classes into their own files, but nothing more than that.

+ +

How do I best break apart the code and forms so I can get separate exes for the forms that require them? +How should the underlying folder structure look: flat or with subfolders? +How should I apply namespaces? (I know this may be a little opinion-based, but I'm stuck on where to start.)

+",125105,,,,,43935.89514,"Structuring a ""Large"" Windows Forms Project and Solution To something with Multiple Sub-Projects",,0,8,,,,CC BY-SA 4.0, +408813,1,,,4/14/2020 22:34,,0,57,"

I am new to RabbitMQ, and I want to make sure that I am not missing some advanced RabbitMQ feature or pattern I am not aware of.

+ +

I need to develop a reliable system that processes a steady stream of smallish RabbitMQ messages (~40/sec). My application receives those messages by subscribing to a RabbitMQ queue (which I do not control). The messages contain data as structured XML. The end goal is to extract the data from the XML messages and persist it into an SQL database for later use.

+ +

It is important for the queue operator that the messages are acknowledged as quickly as possible, before they are processed. This is more important than the processing latency, i.e. how long it takes for the messages to arrive in the SQL database.

+ +

Another requirement is that no message is lost under any circumstances. There is no way to get the information later on via a bulk request.

+ +

Currently I am thinking of writing a service which listens to the queue and writes every received message to Azure Blob Storage, with the message's binary body as blob content and the message headers as blob metadata. All fail-over strategies live in this service, too. That way I can record all messages and quickly acknowledge to the queue that I have received them. After that, a second service can process them without any time constraints.

+ +
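
A sketch of that consumer using the Java RabbitMQ client (assuming an open channel; blobStore and the queue name are placeholders):

+ +
import com.rabbitmq.client.DeliverCallback;
+
+// Persist first, then ack: with autoAck=false an unacked message is
+// redelivered after a crash, so nothing is lost before the blob write.
+DeliverCallback onMessage = (consumerTag, delivery) -> {
+    blobStore.write(delivery.getProperties().getHeaders(),  // metadata
+                    delivery.getBody());                     // XML payload
+    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
+};
+channel.basicConsume(""incoming-xml"", false, onMessage, consumerTag -> {});
+
+ +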

I am happy with such an architecture, but I am concerned that I am not aware of a RabbitMQ feature that can handle such a scenario in a more correct way or with less engineering effort.

+",189962,,,,,43936.45694,Process AMPQ messages both reliably and fast,,1,0,,,,CC BY-SA 4.0, +408814,1,,,4/15/2020 0:13,,2,121,"

I am working on an application where I will be using Android and iOS biometric authentication.

+ +

Coding the use of biometrics on the devices, including prompting and determining whether or not the user's device supports it, is easy. I can get fingerprint authentication, face recognition, etc. up and running without any issues. On Android I can easily use the CryptoObject to store data locally, and it seems just as easy on iOS.

+ +

I am wondering what the best practices are for verifying this authentication with the back-end as well.

+ +

If I understand things correctly, the happy path process should simply be:

+ +
    +
  1. Prompt user
  2. +
  3. User authenticates with thumbprint, face, etc.
  4. +
  5. Callback is made on authentication success
  6. +
  7. Extract encrypted data on the device
  8. +
  9. Restore session with the back-end services
  10. +
+ +

My problem is with steps #4 and #5.

+ +

What is recommended to actually be stored?

+ +

I am hesitant to store sensitive data, even if encrypted, on the user's physical device. It seems obvious not to store the username/password combination, but what are people actually using to restore the session?

+ +
    +
  • Long-lived Session ID?
  • +
  • License key or bearer token-like identifier for the user?
  • +
  • Should I be adding a new endpoint that uploads custom data from the results that I may have missed in documentation?
  • +
+ +

Right now, after the user authenticates, I am storing the Device ID alongside a generated token (stored with one-way encryption on the server side). It feels insecure to store this data on the local device, unless that is the accepted way to do it.

+",363076,,,,,43936.00903,Biometric authentication with back-end verification service process,,0,4,,,,CC BY-SA 4.0, +408815,1,408819,,4/15/2020 0:54,,-3,178,"

When writing a long process, i.e. one filled with many steps of business logic, what are the best practices for organising it? There are a few different options here that I can see:

+ +
    +
  1. Just write a long script - however steps aren't modular or reusable
  2. +
  3. Write each step as a function and call each in order in the main method (sketched after this list) - good for visualising the whole script logic, better for unit testing. You might end up with many single-use functions though, and the whole environment needs to be passed to each function every time.
  4. +
  5. Write a long script, and turn parts of code into functions if another process uses them - difficult to visualise and remember what functions exist when writing other processes
  6. +
  7. Create a singleton class that has methods to be called in order by the class init process - reuses class scope variables but impossible to unit test
  8. +
  9. Create a singleton class that has methods to be called in order externally - reuses class-scope variables and is easier to unit test, but then each method relies on being called in a certain order, which seems to violate the best practice of keeping methods uncoupled.
  10. +
  11. Create a singleton class that has methods to be called in order by a ""main process"" method - reuses class scope variables but same issues as 5)
  12. +
  13. Review the whole process, as this very question screams that it is being approached the wrong way in the first place
  14. +
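
To make option 2 concrete, a minimal sketch (hypothetical Java; names are illustrative) where a small context object carries the shared environment instead of globals:

+ +
class ReportContext {
+    // whatever the steps share: config, connections, intermediate results
+}
+
+class ReportProcess {
+    void run(ReportContext ctx) {
+        loadInput(ctx);      // each step is independently unit-testable
+        transform(ctx);
+        writeOutput(ctx);
+    }
+
+    void loadInput(ReportContext ctx)   { /* ... */ }
+    void transform(ReportContext ctx)   { /* ... */ }
+    void writeOutput(ReportContext ctx) { /* ... */ }
+}
+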
+",358256,,,,,43936.55,"What are the best practices for writing a long, multi-step process?",,1,1,,43937.22708,,CC BY-SA 4.0, +408817,1,,,4/15/2020 2:09,,0,89,"

I have a device that connects using Bluetooth Low Energy. I would like it to be able to communicate with a server over the internet, using a smartphone as a gateway. As I approach this I am coming up with different ways of doing it and would like objective input.

+ +

Approach 1: write a smartphone app that shuttles data between the internet and Bluetooth. Data coming from the server would be sent over Bluetooth Low Energy to the device; data coming from the device would be sent to the server.

+ +

Since the smartphone is a gateway, it doesn't know where to send the data over IP, because the device may want to communicate with one of many servers. So the device would need to specify the address and other information along with the payload when sending. In the other direction, the server would have to somehow identify the device outside of the message payload. So the IP communication terminates at the smartphone. There can also be more than one device connected to the smartphone, which adds some overhead. We also have to account for the speed of the transport medium, so some sort of buffer would have to be used.

+ +

Approach 2: implement a simplified form of the TCP/IP stack over Bluetooth on the device. I am considering the MQTT protocol. In this case, I assume the device would have its own IP address so the server can address it directly. With this approach the protocol is TCP/IP end to end, but the transport medium stays the same as before. Of course, this will increase the amount of code on the device by a lot.

+ +

I would like input on which is the better design approach, based on best practices and design tradeoffs.

+",363088,,363088,,43937.13889,44207.25417,Creating a internet gateway for bluetooth low energy device,,1,0,1,,,CC BY-SA 4.0, +408820,1,,,4/15/2020 3:11,,3,66,"

I have the following situation, which seems to be coming up in multiple teams/services in our Microservices architecture.

+ +

We have a service called ""Program"". A call can be made to create a new program; the first thing it does is the basic initialization of a program, and then it fires an event saying ""Program Initialized"". Any number of services will see that event and do some work; when they are done, they fire their own events saying so. The Program service monitors all these events, and once they have all completed, it possibly does some final work that needs to be done and then fires another event saying ""Program Created"".

+ +

So the problem comes in the middle. The Program service needs to know which events it needs to monitor in order to know when to do its final bit of work and then fire the ""Program Created"" event. We can't hard-code this into the service, because that would mean that any time a new service is added (or removed), the Program service needs a code change, and we obviously don't want that.

+ +

So my thought is that we could have some sort of registration service. All services would register with it. In the case of the Program Service, it would declare which events it provides (""Program Initialized"" and ""Program Created""), as well as that it wants to know which services consume the ""Program Initialized"" event. Any time another service registers with the Registration Service and says that it cares about the ""Program Initialized"" event, the Registration Service will fire an event saying so. The Program Service will see that and update a local list of the services and events it needs to track before the ""Program Created"" event can be fired.

+ +

I initially thought this would be in the realm of Service Discovery/Service Registry, but from my reading, a Service Registry seems to be a lot more simplistic (just keeps a general list of services and their locations).

+ +

Is there some other pattern out there that can serve this sort of need?

+",363090,,,,,44217.72153,I need something and I'm not sure if it's a service registry,,1,2,,,,CC BY-SA 4.0, +408822,1,,,4/15/2020 3:26,,1,44,"

I am working with an API that is used for registering bookings in a ticket system. +Some of the calls I have to make take complex XML objects as arguments. +Some data points in these objects have to be retrieved via multiple API calls, between which a transformation is necessary before this data is combined with existing user data.

+ +

This preparation process is too complex for one class and is therefore distributed over multiple classes, each with its own single responsibility.

+ +

So far I created a class Booking that holds all data that belongs to one booking request. This class is passed around between modules in the process of preparing the data for the final call.

+ +

Doing so works, but it couples all modules to the Booking class. +If Booking changes, a lot of other classes have to change, which as far as I understand is a violation of the single responsibility principle.

+ +

I have considered just passing the data around as arguments between the modules but I definitely see this getting messy and prone to errors. +Every method involved would have 4-6 arguments that in some cases might be null or optional.

+ +

How do I deal with having to aggregate multiple data points that all have to feed into one API-method call without coupling all modules to one class?

+ +

Edit:

+ +

These are the data points needed in every step. +I considered defining classes for every set of arguments, but I think this would be hard to maintain, because it becomes very opaque which object is responsible for which data and how the data flows between the modules.

+ +

Availability

+ +
    +
  • rates
  • +
  • rate quantities
  • +
  • date, either from user request or retrieved via additional API call
  • +
+ +

→ returns BookingComponents

+ +

Start Booking

+ +
    +
  • booking key
  • +
  • promo code
  • +
  • BookingComponents
  • +
  • email
  • +
  • first name
  • +
  • last name
  • +
+ +

→ returns booking id, expected total

+ +

Credit Card Payment

+ +
    +
  • booking id
  • +
  • credit card token
  • +
  • total amount
  • +
+ +

→ returns boolean

+ +

PayPal Payment

+ +

Commit

+ +
    +
  • booking id
  • +
+ +

SettlePayment

+ +
    +
  • total amount
  • +
  • paypalNonce
  • +
  • paypalOrderId
  • +
+ +

Record Payment

+ +
    +
  • booking id
  • +
  • total amount
  • +
  • payment method string
  • +
  • paypalOrder id
  • +
+ +

Edit - Added Context for follow-up questions

+ +

Modules (Example #1)

+ +

+ +

The class AvailabilityManager has the responsibility of getting the availability for tours either in the booking process or in order to display availability on the website. (Relies on very similar calls to the API but has to be done before the booking can be processed)

+ +

Is this what is meant by passing behaviour? (Example #2)

+ +
    class InitialBooking{
+        protected $date;
+        protected $rates;
+        protected $quantities;
+
+        //the actual operation is more complex, involves more methods 
+        // and will be needed outside of the Booking process as well
+        public function verifyAvailability(): BookingComponent
+        {
+            $availabilityResult = $this->api->getAvailability($date, $rates, $quantities)
+            return $this->bookingComponentTransformer->transform($availabilityResult);
+        }
+    }
+
+",363094,,363094,,43936.89722,43936.89722,aggregate data for complex API call without coupling modules to parameter object,,1,0,,,,CC BY-SA 4.0, +408823,1,,,4/15/2020 3:33,,1,85,"

I have a little React app and I'm ready to test it. The first thing I need to do is to create some input objects with random data. I can proceed in one of two ways:

+ +
    +
  1. I can create my own fake data line by line using something like faker.js. This will create new fake data every time I run my test. For example:
  2. +
+ +
    let car = {
+      Model: faker.lorem.word(),
+      Year: faker.random.number({ min: 2010, max: 2020 })
+    }
+
+ +

The pro of doing this is that I can precisely control all of my input fields. The cons are that it takes longer to write each line of fake data (especially if there are many objects) and that the output is less deterministic.

+ +
    +
  1. I can automatically generate all my input up front one time using something like intermock. The pro is that this is much less labour intensive than the first choice. The con is that I lose some control of the data that gets generated.
  2. +
+ +
    // inside car.ts...
+    interface Car {
+      Model: string
+      Year: number
+    }
+
+    // in the terminal...
+    node ./node_modules/intermock/build/src/index.js --files ./car.ts --interfaces ""Car""
+
+ +

Which option is better? Are there any other pros or cons?

+",295798,,,,,43936.18889,Is it better to test with dynamically generated input data or static data?,,1,0,,,,CC BY-SA 4.0, +408826,1,,,4/15/2020 7:54,,0,41,"

I've been designing and developing a very scalable logging library for a while.

+ +

The main goal of this library is pretty simple. But, as with many other projects, a simple goal does not mean a simple way to achieve it. Days after designing the API and the main architecture, this library seems to have a quite large engine in the background.
+Most of this engine is based on streams, disk flushing, text formatting, buffering and, of course, interaction with files and directories; nothing surprising, indeed.

+ +

After coding a bit, I realized I had never thought about the test design. Well, I trusted it to be easier than it seems.
+For the sake of explanation, I point out that I am developing in C# (v8.0) against .NET Standard (v2.1). I prefer to avoid third-party libraries (I will develop separate packages for integration with other libraries or frameworks, such as ASP.NET, Autofac, and so on...).

+ +

The point of the question is: since I am almost bound to the System.IO environment that .NET exposes, I don't know how to really test my library's features.

+ +

Well, there are lots of methods, classes and modules that are easy to test (I am trying to apply a TDD strategy). But there are many other components beyond those, which interact directly with the file system.

+ +

For a testing development environment, I use NUnit with FakeItEasy.

+ +

The point is: how can I test and mock those classes, methods and modules that are tightly bound to the machine's file system?

+ +

Thanks.

+",362479,,362479,,43936.33403,43936.34792,Test logging library,,1,0,,,,CC BY-SA 4.0, +408829,1,,,4/15/2020 9:14,,0,171,"

As I am moving a part of the monolith app logic to a microservice, I am standing before a problem with scalability.

+ +

Currently, the main monolith runs on multiple instances and has some scheduled services. Some of those services are pretty straightforward, but some send emails based on the result of a database query. The current solution is based on optimistic locks: the service on the instance that sends the email first saves the last notification time to the database; if another instance tries to send the notification to the same user again, this happens:

+ +
try {
+    final Long userId = notificationService.getUserIdToNotify();
+    if (nonNull(userId)) {
+        mailService.sendNotification(userId);
+    }
+} catch (ConcurrencyFailureException e) {
+    LOGGER.debug(
+        ""User has already been notified by another thread or instance."");
+    LOGGER.trace(""Optimistic lock"", e);
+}
+
+ +

The NotificationService returns a user id if the user wasn't notified previously, or null if the user was already notified.

+ +

Doesn't seem to be the best solution, but it somehow works.

+ +

I could move the logic to a single microservice and remove the optimistic lock logic, but I have received a requirement which states that the microservice should be prepared for scaling. Therefore I should keep those optimistic locks in mind and design the microservice so that the cron-style services inside it include logic that prevents other instances from performing an action more than once (within the given period in which they run).

+ +

I am not a microservice expert, but from what I see, there are some design problems rolling at me at dangerous speed.

+ +

Should this microservice be scaled at all? It does some operations on the database on an hourly/daily/weekly basis, and as the logic will be separated from the monolith, it shouldn't affect it... except for performing some operations on the same database that the monolith is using.

+ +

And if the service should be scalable, what solution would be best? I was thinking about using Redis to store userId keys with timestamp values, and to check those entries before performing actions, to prevent duplicate/unnecessary ones.

+ +
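
A sketch of that Redis idea (assuming the Jedis client and an open connection; the key naming and TTL are illustrative):

+ +
import redis.clients.jedis.params.SetParams;
+
+// SET key value NX EX <ttl> succeeds for exactly one caller, so only the
+// instance that wins the race sends the mail; the others see null and skip.
+String key = ""notified:"" + userId;
+String reply = jedis.set(key, Long.toString(System.currentTimeMillis()),
+        SetParams.setParams().nx().ex(24 * 60 * 60));
+if (""OK"".equals(reply)) {
+    mailService.sendNotification(userId);
+}
+
+ +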

I am a little bit stuck now, and something in my lizard brain is saying that there is already a solution for this, but I just seem to be missing it.

+",363115,,,,,44210.75139,Designing a scalable microservice: how to prevent one instance from performing an action if another instance allready did?,,2,1,,,,CC BY-SA 4.0, +408834,1,,,4/15/2020 11:31,,0,24,"

I'm part of an initiative where we are moving monolithic-ish applications, each running on their separate VMs and using a common database cluster, to a container architecture with the goal of eventually breaking down all the monoliths to microservices.

+ +

A big part of the deployment is initially creating users and databases for the different applications that together make up the whole system. A typical (very simplified) deployment flow would be:

+ +
    +
  1. Create the application instance (virtual machine)
  2. +
  3. Create users, databases, tables, etc. in the database cluster (if needed)
  4. +
  5. Start the application on the application VM
  6. +
+ +

Despite some drawbacks in how we do the deployment, this generally works fine. Step 2 is part of a framework that runs during the instance deployment job, where the application packaging provides information on what users to create, etc.

+ +

Are there any typical approaches to deal with step 2 in a container environment?

+ +

My vague idea was something like this:

+ +
    +
  1. Launch a short-lived ""deployment"" container that creates users, databases, etc. for the soon-to-be deployed application. This would probably be an image packaged by the application that inherits from the ""deployment"" container.
  2. +
  3. Tear down the deployment container
  4. +
  5. Deploy the application container(s).
  6. +
+ +

Potentially I'm overthinking this; perhaps step 2 should just be a part of container deployment somehow but I feel like we'd want to decouple this from the application containers.

+",10987,,,,,43936.47986,"Resource creation (users, secrets, etc) from monoliths to microservices",,0,4,1,,,CC BY-SA 4.0, +408837,1,,,4/15/2020 13:08,,8,1647,"

Consider the following sample C# Data Transfer Object (DTO):

+ +
public class MailingAddress
+{
+   public string StreetAddress {get; set;}
+
+   public string City {get; set;}
+
+   public string StateOrTerritory {get; set;}
+
+   public string PostalCode {get; set;}
+}
+
+ +

I can write unit tests to check for someone setting the various string members to null or empty strings. I could also make the setters private on all the fields so that it has to be constructed via a constructor.

+ +

I'm asking about this because the CodeCoverage tool is reporting 0% on all these DTO methods and I'm trying to figure out if there's some reasonable testing I might do here.

+ +

I have googled a bit and not come up with a lot. I also searched here but haven't found anything that seems to address this. If I've missed something please link it in the comments.

+ +

EDIT:

+ +

Someone helpfully suggested that my question might be answered by this question. The thing is that while it doesn't look like there's code being run for the various fields, there is, in fact, default code there.

+ +

It wouldn't be a case of testing language features. If someone modifies the default behavior of the get/set pairs, then I should have unit tests around them to ensure they still behave as expected.

+",7912,,7912,,43937.65069,44175.6125,How Should I Unit Test A Data Transfer Object?,,4,7,,,,CC BY-SA 4.0, +408842,1,,,4/15/2020 14:55,,0,21,"

I'm looking into developing a distributed pubsub with p2p messaging akin to ROS where instead of being brokered, messages are transported directly from producer to consumer. XY problem - each node needs a copy of the graph to know who is listening to what topic.

+ +

However, I have read that Raft should be implemented with 3/5/7 participants, but not really more. That seems logical to me, as there is a nontrivial amount of overhead associated with heartbeats, elections, and staying in sync. But I want dozens, if not hundreds, of pubs and subs (processes) on at least a dozen physical hosts, on potentially lossy WiFi connections.

+ +

Intuitively, I feel the solution looks somewhat like a Raft-y core stack, which provides DNS, kv, logs and health checks, e.g. 3 Consul.io instances. Each node process coming online (think rospy.init_node()) would need to establish a connection to some ""master"" process to listen for updates to the kv (again, Consul makes a lot of sense here with the watches pattern). On kv change, each node would pull state from the core stack.

+ +

Another option would be some polling service which checks the hash of the relevant subtree of the core state, and when it is invalid, it does a pull. That seems like it could be less performant; I am not a big fan of busy-polling, unless it's a slow rate to just catch state drift, accompanied by a state-update announcement.

+ +
    +
  1. Is this generally how things are done?
  2. +
  3. 1a: should I be using state machines in each of the clients?
  4. +
  5. are there better techniques/design patterns?
  6. +
+",217717,,217717,,43936.62569,43936.62569,How to scale raft state to dozens of followers?,,0,0,,,,CC BY-SA 4.0, +408843,1,,,4/15/2020 15:09,,0,151,"

Context - I'm building a flight booking system (online travel agent) that will partner many airlines to sell their seats.

+ +

I've designed my architecture to work like this:

+ +
    +
  1. When a user searches for a flight route, a request goes to look up a cache
  2. +
  3. If the cache has available flight inventory, return it to the user
  4. +
  5. If it does not, create a job in a queue
  6. +
  7. Sometime later, a queue consumer will pick up the job and hit the airline's API / website to retrieve flight inventory. Once retrieved, they will be placed into the cache.
  8. +
  9. The code in #1 polls again, retrieves the inventories from the cache, and displays the search results to the user.
  10. +
+ +

Benefits of this setup so far:

+ +
    +
  • the queue / asynchronous fetching approach provides a buffer to prevent hitting airline systems too much
  • +
  • the cache provides very fast response
  • +
+ +

Problems encountered:

+ +
    +
  • Very high memory usage on the cache due to the huge variety of flight routes available
  • +
  • For routes that are not searched often, users will need to wait a while as the system is polling to get the search content
  • +
  • If an airline has gone down, then there won't be any inventory data available, causing the user to see nothing, plus a loss of sales
  • +
+ +

How would you improve this design?

+ +

———— EDIT ————

+ +

More context:

+ +
    +
  • In the travel industry, every airline has a Look-To-Book ratio given to distribution partners. For example, if you hit their API / website too often without booking, you will be penalized.

  • +
  • In addition, every airline also stipulates a maximum number of clients that can connect from a partner.

  • +
  • This distribution site works with hundreds of airlines and serves millions of requests a day.

  • +
  • Latency to the airlines easily runs from 2s to 15s. They are not too tech savvy.

  • +
  • There are additional middlemen partners too like GDS (example: Amadeus) to pull flight routes from.

  • +
+",312836,,312836,,43940.2125,43941.73889,How would you improve this architecture for a travel website?,,3,1,0,,,CC BY-SA 4.0, +408845,1,,,4/15/2020 15:55,,2,420,"

For example, does it make sense to refactor the following code:

+ +
a = a * 2;
+
+ +

as:

+ +
const int INT_TWO = 2;
+
+// ...
+
+a = a * INT_TWO;
+
+ +

My question hinges on the fact that the new constant conveys no extra meaning (as opposed to calling it, say, FOOBAR_FACTOR).

+",88848,,,,,43943.64306,Does it make sense to use meaningless named constants?,,2,9,,,,CC BY-SA 4.0, +408851,1,,,4/15/2020 18:18,,-3,39,"

I was wondering whether there is some cost saving, in time or space, from passing and/or returning smaller arguments (char vs. int).

+ +

I have heard the compiler will optimize the code based on the processor type (8, 16, or 32 bits). +I think arguments are passed in registers, so that would suggest using the register width. +Personally, I think that if the count ever needs to grow, the function should not need to change. +Still, others will argue: ""Since we are only counting to, say, 254, we only need an unsigned char. We are saving 24 bits of space, etc."". +Also, I think there is more trouble with casting, and it's better to use the full register-width type. +So I think it is better to pass the larger argument types. Am I right or wrong?

+ +
/* Using smaller parameter */
+unsigned char max_count = 10;
+count(max_count);
+
+void count(unsigned char max_count)
+{
+}
+
+/* Using larger parameter */
+unsigned long max_count = 10;
+count(max_count);
+
+void count(unsigned long max_count)
+{
+}
+
+/* Examples that I come across (MISRA goes nuts with this statement) */
+unsigned char do_something(unsigned char a)
+{
+    ...
+    return ((a | 0x12) << 2);
+}
+
+",363162,,,,,43938.37569,Passing different sized parameters cost savings,,2,2,,,,CC BY-SA 4.0, +408858,1,409630,,4/15/2020 21:59,,0,117,"

I'm working on an API that allows users to invite each other for events. When someone is searching for users to invite, I want to include in the response information about whether a user has already been invited. One way to do this would be to have the response return objects that look like:

+ +
{
+    ""is_invited"": false,
+    ""user"": {
+        ""id"": 1,
+        ""name"": ""John Doe""
+    }
+}
+
+ +

This requires an additional model on the server and client side, so I was also considering formatting the response like a normal user response:

+ +
""user"": {
+    ""id"": 1,
+    ""name"": ""John Doe"",
+    ""is_invited"": false
+}
+
+ +

Where the is_invited property would only appear on responses requested from the context of a particular event. I was curious if this sort of contextually-conditional structure for responses is considered an anti-pattern in API design. If so, would the first structure be the best way to accomplish my goal, or is there another approach I haven't thought of?

+",36826,,36826,,43936.95903,43954.44514,Is it an anti-pattern for a REST-ful API object to contain different fields depending on context?,,2,2,,,,CC BY-SA 4.0, +408859,1,,,4/15/2020 22:11,,0,156,"

I'm building a project that stores time series data on a per-user basis. On the user's dashboard it'll show some simple statistical analysis like averages, but more importantly, it'll create charts based on the data. These charts could be line charts plotting values from one date to another, or a pie chart showing the occurrence/distribution of things.

+ +

I've never worked with big data and data that needs to be manipulated so I have a few design questions.

+ +
    +
  1. Should I grab all the data at once, or grab the data for a certain time frame as needed? What I mean is this: users can view their statistics over the past day, past week, month, whatever. So one thing I can do is grab ALL the data belonging to the user as soon as they log in and store it. Then when they ask to see a line chart of their data over the past week, I iterate over the data, pick the points from the past week, and display them; then if they want data over the past year, I just iterate over the data again, and so on. The second way is to query as needed: on load it'll show weekly data by default, and if they ask for a month it'll re-query the DB and get data for the past month. Option 1 is good because it saves having to query the DB every time by storing ALL the data. Option 2 is good since it'll be much less work, as I can use SQL to get the exact data I want and display it easily.

  2. +
  3. Once the data is available to me after I get it from the DB, I have two options. I model the data server-side and send it properly labeled and ready to be plotted, or I send all the data to the front end and use JavaScript to model the data client-side. Modeling client-side would save server resources, because the same server is already getting hit hundreds or thousands of times a second recording the time series data. I'm not sure if this will be an issue since, like I mentioned, I've never built anything this big. If it's not an issue for the server to keep getting hit a lot every second (through websockets) and to do data manipulation and everything else, then this isn't a problem and I'd rather do it all server-side so I don't expose raw data to the client (not that it's an issue, but still).

  4. +
+ +

I'm using NodeJS for the back end and Postgres (TimescaleDB) for the DB.

+",360567,,360567,,43936.92986,43937.05486,Doing data manipulation on server side vs client side,,1,0,,,,CC BY-SA 4.0, +408864,1,,,4/16/2020 0:12,,0,40,"

I'm developing a traceability platform for the agro industry (a web and Android application) where the user can define a series of processes through which the raw material becomes a final product. Each process generates data that must be captured (using the Android app). Because the platform isn't oriented to a specific kind of crop, the data to register will be different for each client.

+ +

Because I'm aware of the cons of EAV (entity-attribute-value model), the difficulty of writing the queries being what concerns me most, I'm evaluating these other options:

+ +
    +
  1. XML data type in SQL Server: I would store the user-defined +data in an xml-type column. It gives me the possibility of querying +it within the SQL statement.
  2. +
  3. PostgreSQL Hstore: I can store the data in a key-value pair structure, with +the possibility of querying it within a SQL statement as well.
  4. +
  5. Since, once the user has defined the data to collect in each process, the +structure will not be modified often (I would say once at most), I +could generate the DDL statements (CREATE, ALTER, DROP TABLE ...) based +on the user input, and I could develop a module to create the +reports and queries for the user.
  6. +
+ +

I don't have experience using the XML data type or Hstore in a project. Which of these 3 options would be best suited to my case?

+",363188,,,,,43937.00833,Xml data type and Hstore to create and query dynamic fields on a SQL database,,0,0,,,,CC BY-SA 4.0, +408866,1,408874,,4/16/2020 1:54,,1,61,"

I work in a DevOps role and mainly write glue code and OS-level automation. I've rarely needed to deviate from existing language implementations or ""reinvent the wheel"" in the small programs I've written.

+ +

What is the advantage of creating a ""manual"" implementation of an algorithm, i.e. merge sort, than using a language's builtin implementation if it exists? For example in Python, sort() is a very fast way to sort a list in place. At what point does it make sense to abandon sort() and write my own implementation of a sort algorithm?

+",348428,,,,,43937.37222,Using Builtin Method vs Manual Implementation,,1,0,,,,CC BY-SA 4.0, +408867,1,,,4/16/2020 2:17,,0,14,"

PSA: I am new to Authentication/security.

+ +

I'm building a web app which makes client-side GET requests to the Spotify API. From my understanding, I have a public and private key used to access the API, which I can use to grant tokens (which last 1 hour) to clients. I'm confused about how I should go about granting these tokens to clients. I have 3 methods which I'm considering.

+ +
    +
  1. Grant a new token on every request to the Spotify API.
  2. +
  3. Have one token on the server side which is updated every hour via a cloud function. Allow all authenticated clients access to the token.
  4. +
  5. Add a token field to the user schema and write a cloud function to update the tokens of all logged in users one hour after their respective grant.
  6. +
+ +

Option 3 seems like the best to me; however, I don't know how to implement it. Firestore offers scheduled functions, but not ones conditional on user login, as far as I can tell.

+",363196,,,,,43937.09514,Third Party Token Grants via Firestore,,0,1,,,,CC BY-SA 4.0, +408872,1,,,4/16/2020 5:25,,1,65,"

Consider the following entities:

+ +

Bob, Sally, Apples, Oranges, SupermarketA, SupermarketB.

+ +

The relationships between these are as follows:

+ +
    +
  • Bob can buy Apples from SupermarketA and SupermarketB
  • +
  • Bob can buy Oranges from SupermarketA
  • +
  • Sally can buy Apples from SupermarketA and SupermarketB
  • +
  • Sally can buy Oranges from SupermarketA and SupermarketB
  • +
+ +

How can I answer the question:

+ +

Who can buy Oranges from SupermarketB?

+ +

I have tried to represent these relationships in a graph. However, when traversing this graph, Bob appears to be able to buy Oranges from SupermarketB because Sally can:

+ +

Bob <-> Oranges <-> SupermarketA

+ +

Sally <-> Oranges <-> SupermarketA

+ +

Sally <-> Oranges <-> SupermarketB

+ +

An alternative approach I have tried is storing these in a 3-dimensional array of booleans as a lookup, so that you can determine whether Bob can buy Oranges from SupermarketB by indexing lookup[0][1][1] = false. You can find the answer for Sally by indexing lookup[1][1][1] = true, or, to answer the original question, iterate over the first dimension looking at [*][1][1].

+ +

The lookup approach seems quite wasteful, especially with more dimensions.

+ +
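
For contrast, here is a sketch (Java 16+; data taken from the example above) that stores the ternary relation directly as triples, which avoids both the false graph paths and the sparse 3-D array:

+ +
import java.util.List;
+
+public class WhoCanBuy {
+    // The relation is ternary, so model it as (person, fruit, shop)
+    // triples rather than three separate binary edges.
+    record Purchase(String person, String fruit, String shop) {}
+
+    public static void main(String[] args) {
+        List<Purchase> allowed = List.of(
+            new Purchase(""Bob"", ""Apples"", ""SupermarketA""),
+            new Purchase(""Bob"", ""Apples"", ""SupermarketB""),
+            new Purchase(""Bob"", ""Oranges"", ""SupermarketA""),
+            new Purchase(""Sally"", ""Apples"", ""SupermarketA""),
+            new Purchase(""Sally"", ""Apples"", ""SupermarketB""),
+            new Purchase(""Sally"", ""Oranges"", ""SupermarketA""),
+            new Purchase(""Sally"", ""Oranges"", ""SupermarketB""));
+
+        // Who can buy Oranges from SupermarketB? Prints Sally, never Bob.
+        allowed.stream()
+            .filter(p -> p.fruit().equals(""Oranges"")
+                      && p.shop().equals(""SupermarketB""))
+            .map(Purchase::person)
+            .distinct()
+            .forEach(System.out::println);
+    }
+}
+
+ +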

How can I effectively model these relationships so I can determine the compatibility between a given set of entities?

+",363211,,,,,43937.58125,"How can the relationship of these objects be modelled, so that when given a set of objects I can determine what other objects are compatible?",,3,0,,,,CC BY-SA 4.0, +408882,1,,,4/16/2020 10:33,,0,55,"

I have a GET RESTful API used for querying user info by user name. I don't want to return all columns; +the response columns are decided by the client.

+ +

Example: +my user has many columns (userName, sex, phone, address, age). +When my client only wants to query the user's phone and age by user name, +my RESTful API call looks like: /rest/user?userName=Bob&resultColumn={""column"":[""phone"",""age""]}

+ +

Is there a better way to represent the resultColumn parameter? +Maybe something like resultColumn=phone,age, resultColumn=phone;age, or another format?

+",363242,,118878,,43937.49167,43937.49167,RESTful query parameter at GET method,,0,2,,,,CC BY-SA 4.0, +408890,1,408966,,4/16/2020 12:32,,4,1822,"

I'm struggling to find an elegant and idiomatic way of coding the following scenario using the MVVM paradigm in WPF and was wondering how other people would approach it.

+ +

I have a UserControl in my WPF application which I want to reuse in a number of places. The control is a filtered ComboBox setup allowing users to refine their selections. My example is Department > Team > Person.

+ +

+ +

In each scenario I might have the control configured in a slightly different manner. e.g. Window 1 might have all the departments, teams, and people; Window 2 might only display a subset of all departments; Window 3 might be locked to the user's department and team.

+ +

First (and probably worst) Solution: Give the UserControl its own ViewModel

+ +

This works, in that I can drop the control on each window and it appears to require no further work. The filtering logic lives in the control's ViewModel, as does the loading of all the lookup values. The problem comes when I want to get the selected values out, and when I want to configure it slightly differently for each scenario, largely because I've broken the DataContext inheritance chain. I ended up having the control's ViewModel subscribe to messages for configuration settings and send messages reporting value selections, and it feels like I'm fighting MVVM/WPF rather than working with it.

+ +

Second Solution: No ViewModel for the UserControl; Rely on the Window's ViewModel. This has the advantage that it is easy to interact with the UserControl through the Window's ViewModel, but it feels like I'm duplicating a lot of the lookup-value loading logic as well as the filtering logic.

+ +

I feel like there's an elegant solution combining code-behind and MVVM, but I can't seem to find it! How would you go about solving this requirement?

+",127018,,,,,43940.34375,WPF UserControl Reuse With MVVM,,5,1,1,,,CC BY-SA 4.0, +408894,1,408901,,4/16/2020 13:26,,3,257,"

I have several classes like Button, Textbox and so on, but at instantiation those objects all need one object reference. The button represents a physical button on the screen, but it is not a UI element. Instead it is used by Selenium to interact with the physical buttons.

+ +
    public class Button : Element, IButton
+    {
+        public Button(IService someService) : base()
+        {
+
+        }
+    }
+
+ +

somewhere later

+ +
var b = new Button(someService);
+var b = new Button(someService);
+var b = new Button(someService);
+var b = new Button(someService);
+
+ +

The problem now is that throughout the application I need to instantiate this button object several times, so I always need to pass the reference into the constructor. To me this looks like not-so-good code, so if anybody has a clue how to do this better I would be very grateful!

+ +

Thanks in advance

+",179848,,179848,,43937.63194,43943.49375,Multiple classes depend on one object reference,,3,6,,,,CC BY-SA 4.0, +408900,1,408902,,4/16/2020 14:38,,3,229,"

In short: are email end-to-end tests a thing?

+ +

As part of my CI I would like to run email integration/end-to-end tests: the app would send an email to some SMTP server and then perhaps hit an endpoint to check whether the SMTP server got the email. I have a hard time finding anything similar on the net though... Have you heard of something like that, or does it just not make sense to test something like this? +Email notifications are crucial in our app; maybe there are other ways to make sure that they work (we do have unit tests)?

+ +

The SMTP server is Postfix, the app is built with Python/Pyramid.
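
+ +

For reference, the general pattern with an in-memory SMTP server looks something like this (shown with the Java GreenMail library purely as an illustration; app.triggerNotification() is a placeholder, and analogous fake-SMTP tools exist for Python):

+ +
import com.icegreen.greenmail.util.GreenMail;
+import com.icegreen.greenmail.util.ServerSetupTest;
+
+GreenMail smtp = new GreenMail(ServerSetupTest.SMTP); // in-memory SMTP on port 3025
+smtp.start();
+try {
+    app.triggerNotification();          // point the app at localhost:3025
+    // The fake server records everything it received:
+    assert smtp.getReceivedMessages().length == 1;
+} finally {
+    smtp.stop();
+}
+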

+",363263,,173647,,43937.61944,43937.61944,End to end email tests,,1,0,,,,CC BY-SA 4.0, +408904,1,,,4/16/2020 15:43,,0,26,"

If I have a repository, like a Dropbox server or a file sharing server, and a client uses this repository for storing its data, then from a design perspective, whose responsibility is it to calculate how much of the resource has been used/occupied by the client, assuming that there are no limits set on this and no security concerns?

+ +

One might argue that, since it is the client's repository and the client's data, it is the client's responsibility to calculate the repo size, by iterating or other available means.

+ +

A counter-argument could be that since it is the server which provides these resources, it is in the best position to calculate how much of the filesystem is in use by which client, and to provide this information to a specific client when asked.

+ +

Dropbox clients, like the website and the app, provide the user with this information, but I am not sure who calculates it, and where. Does the client iterate through files and folders to get the number, or is it calculated and maintained on the server side, possibly in a file, with the client merely reading that file? +Dropbox, at least, is an application that is not constrained by the use of a standard protocol.

+ +

File sharing protocols like FTP or SFTP, or even WebDAV, to my knowledge do not provide a means to query folder size; rather, they return the sizes of individual files, from which one can calculate it.

+",363270,,110531,,43942.16806,43942.16806,Client - Server responsibility with respect to client specific resource utilization,,0,2,,,,CC BY-SA 4.0, +408907,1,,,4/16/2020 17:47,,0,253,"

Say I have a class to model my customers

+ +
class Customer 
+{
+    public static customerType = 'customer';
+}
+
+ +

And a subclass class CorporateCustomer extends Customer to model my corporate customers. Presently I have three subclasses

+ +
class AlphaCustomer extends CorporateCustomer {...}
+class BetaCustomer  extends CorporateCustomer {...}
+class GammaCustomer extends CorporateCustomer {...}
+
+ +

Each subclass of Customer overrides the customerType property. This property is used when my model is loaded into my database.

+ +

Of course, as my business evolves new subclasses of CorporateCustomer might be added.

+ +

Now in another part of my application, I have a variable called currentCustomerType and I need to determine if there exists a subclass c of CorporateCustomer such that c::customerType is equal to currentCustomerType.

+ +

How would you do this?
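
+ +

One shape an answer could take, sketched in Java since the snippets above are pseudocode (all names are hypothetical): keep an explicit registry that each subclass adds itself to, avoiding reflection or classpath scanning.

+ +
import java.util.HashSet;
+import java.util.Set;
+
+final class CorporateCustomerTypes {
+    private static final Set<String> TYPES = new HashSet<>();
+
+    static void register(String customerType) {
+        TYPES.add(customerType);
+    }
+
+    static boolean exists(String customerType) {
+        return TYPES.contains(customerType);
+    }
+}
+
+// Each subclass registers once, e.g. in a static initializer:
+// static { CorporateCustomerTypes.register(""alpha""); }
+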

+",98327,,98327,,43937.83889,43941.22431,Dynamically knowing all the subclasses of a superclass,,6,11,,,,CC BY-SA 4.0, +408909,1,,,4/16/2020 18:40,,-4,65,"

Two possibilities. Which is better, with respect to readability, maintenance and clean code:

+ +

SQL injection can be disregarded.

+ +
    +
  1. Constructing complex parameterized SQL queries using Python string concatenation. Up to 7 parameters, each adding an additional WHERE condition, ORDER BY logic, or wrapping a subquery in a new SELECT. The SQL string construction for a specific request is handled by a Python function that calls other functions that add parts to the SQL string.

  2. +
  3. Using the Jinja templating language for the SQL query construction. There is a base.sql template that is extended for specific requests with a main SQL query, and Jinja macros handle additional logic. SQL lives inside .sql files. For a specific request, a Python function is called that opens the corresponding .sql file and passes the parameters, and Jinja builds the final SQL query.

  4. +
+",363290,,363290,,43937.78958,43937.79167,Is it better (from a maintenance perspective) to dynamically construct complex SQL queries using Jinja or String concatenation?,,1,1,,43937.81875,,CC BY-SA 4.0, +408916,1,408970,,4/17/2020 0:18,,0,29,"

I have a web application that depends upon a set of R analytics. These R analytics read data from a database and perform machine learning, so have high CPU use.

+ +

The R analytics are accessed through HTTP using endpoints set up by the R-plumber REST library, all hosted in a Docker container on an Azure Linux App Service. As R is single-threaded and can only process one request at a time, it cannot handle multiple requests in parallel; they must be executed sequentially.

+ +

At the moment the web application and the R analytics container sit on the same App Service plan (so the same underlying VM), and when an API request is made that requires heavy CPU load, it can affect the performance of the web application for other users.

+ +

I feel I need to move the R analytics API and underlying computational libraries to another hosting service on Azure, but I'm not sure what the best (and most cost effective) option would be.

+ +

Ideally the hosting service would keep a small set of containers idle, and when an HTTP request came in, allocate the request to a free container to perform the processing. If lots of requests were coming in, it would create extra containers elastically to scale with the load, and for a low number of requests it would scale down automatically. That is what I'm imagining, but I have no experience with Kubernetes and was hoping there would be an easy way to set this up. I looked at Azure Container Instances, but it seems to be for a single container, with little control over scaling out.

+ +

What (Azure) solution would be best in this scenario?

+",330137,,,,,43939.56736,Autoscaling containers on Azure,,1,0,,,,CC BY-SA 4.0, +408920,1,408922,,4/17/2020 6:37,,0,281,"

I have to implement some backend web services which provide a given, final JSON structure that is already in use on the front-end side. +This structure doesn't match the database structure, so I have to convert the database entities to the needed serialization format.

+ +

It's an on-demand service; that means the backend service has to determine and calculate the situation based on a user-given time. So I cannot return the selections from the repository directly to the user; instead I need some additional layers for the business code (determining and calculating the situation at time X).

+ +

Here are my database tables

+ +
entry
+-------------
+  id int (PK)
+  result_id (FK)
+  ...
+
+result
+-------------
+  id int (PK)
+  t1 double precision
+  t2 double precision
+  t3 double precision
+  t4 double precision
+  total double precision
+
+ +

The response has to look like this:

+ +
entry: {
+    id: x,
+    currentResult: {
+        id: x,
+        t1: xx.xxx,
+        t2: xx.xxx,
+        t3: xx.xxx,
+        t4: xx.xxx,
+        total: xx.xxx
+    },
+    lastResult: {
+        id: x,
+        t1: xx.xxx,
+        t2: xx.xxx,
+        t3: xx.xxx,
+        t4: xx.xxx,
+        total: xx.xxx
+    }
+}
+
+ +

So here are my questions:

+ +

At the moment I select all needed data into DTOs, not into the entity records. The DTOs already have the structure of the output format (e.g. EntryDto already has the current and last result, which isn't possible in the Entry entity). Right now all calculations are made directly in the DTO (which is actually wrong, because it should be responsible only for transferring the data). This approach works well, and for me the code has a clean structure, because the business code is completely in the DTO (which, however, is wrong).

+ +

If I want to conform to good practices, I have to move the business code into another layer. But which layer should be responsible for that?

+ +
    +
  • Entity: It's not the right spot, because some related properties which are needed for the calculations are only present in the DTO.
  • +
  • DTO: All properties for the calculations are given, but the DTO shouldn't be so complex with the amount of methods / business logic.
  • +
  • ?
  • +
+ +

Would the following be a better solution?

+ +
    +
  • Entity entry is only responsible for database internals (joins,...)
  • +
  • Repository selects data directly into a new layer EntryBo (EntryBusinessObject) via a constructor expression. All calculations are made in those BOs.
  • +
  • After all calculations are made, transform the BOs into DTOs and finalize the response (a minimal sketch of this flow follows below).
  • +
+ +
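
To illustrate the proposed split, here is a rough sketch in Python for brevity (the real project is not Python, and all class and field names are placeholders for my entities):

+ +

from dataclasses import dataclass
+
+@dataclass
+class ResultEntity:  # persistence layer: mirrors the result table
+    id: int
+    t1: float
+    t2: float
+    t3: float
+    t4: float
+
+@dataclass
+class ResultBo:  # business layer: owns the calculations
+    entity: ResultEntity
+
+    def total(self):
+        e = self.entity
+        return e.t1 + e.t2 + e.t3 + e.t4
+
+@dataclass
+class ResultDto:  # transfer layer: dumb bag of output fields
+    id: int
+    t1: float
+    t2: float
+    t3: float
+    t4: float
+    total: float
+
+def to_dto(bo):
+    e = bo.entity
+    return ResultDto(e.id, e.t1, e.t2, e.t3, e.t4, bo.total())
+

+ +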

Note: the above project is only a simplified example. Please comment only on the given problems, not on the example itself.

+",363331,,,,,43938.32708,Decouple business logic from DTO,,1,3,,,,CC BY-SA 4.0, +408923,1,,,4/17/2020 8:06,,2,103,"

I have a range of different animals in my zoo, such as turtles, birds, etc. As they all share common behaviours such as swimming, flying, etc., I thought the strategy pattern would be appropriate to model them. The thing is, though, that I want to call a method in the composition from the compositor. See this MWE:

+ +

Animal.java (abstract class, composition)

+ +
public abstract class Animal {
+    Movement movement;
+    int metersSwam = 0;
+
+    void swimMeters(int meters){
+        metersSwam += meters;
+    }
+
+    void swim() {
+        movement.swim();
+    }
+
+    void fly() {
+        movement.fly();
+    }
+}
+
+ +

Turtle.java (extends animal)

+ +
public class Turtle extends Animal {
+    public Turtle() {
+        movement = new TurtleMovement(this);
+    }
+}
+
+ +

Movement.java (interface, compositor)

+ +
public interface Movement {
+    void swim();
+    void fly();
+}
+
+ +

TurtleMovement.java (here is the issue, where I call a method of Turtle)

+ +
public class TurtleMovement implements Movement {
+    Animal turtle;
+
+    public TurtleMovement(Animal turtle) {
+        this.turtle = turtle;
+    }
+
+    @Override
+    public void swim() {
+        turtle.swimMeters(10); //<--- here
+        System.out.println(""I can swim, just swam 10 meters"");
+    }
+
+    @Override
+    public void fly() {
+        System.out.println(""I can't fly"");
+    }
+}
+
+ +

Zoo.java (contains the main method)

+ +
public class Zoo {
+
+    public static void main(String[] args){
+        Animal animal = new Turtle();
+        animal.fly();
+        animal.swim();
+    }
+}
+
+ +

So my question is basically: am I allowed to call a method of Turtle from TurtleMovement? If not, is there a way to work around it, or is the strategy pattern perhaps not ideal for my situation after all?

+",363337,,363337,,43938.40139,43940.68125,Is it allowed to include the composition in the compositor in the Strategy Pattern,,1,5,,,,CC BY-SA 4.0, +408927,1,,,4/17/2020 11:12,,1,43,"

I'm using flatbuffers for the first time. I've generated my Java classes and have tested serializing / deserializing a message. Now I'm thinking about how to integrate these into my JavaFX and Android applications.

+ +

Is it valid to pass a DataMessage or a MessageA directly into business logic classes or my UI? It seems really unwieldy to pass the DataMessage and then have to extract the correct payload type. So, I created wrappers around each type: I have a MessageAWrapper that contains the MessageA and Header. Should the client/UI code directly access the values from a MessageA, or is it better practice to copy all the fields from MessageA into new primitives defined in MessageAWrapper? Doing so seems against what flatbuffers is for, but I also feel dirty exposing the MessageA, which has flatbuffers-specific parsing methods and such. It's also a bit of work to have my wrapper classes copy over and store fields, and it could be error prone. But somewhere in my stack I may want to, say, put a latitude/longitude float pair into a convenience class called GeographicPosition. I can do this easily in a wrapper.

+ +

So I'm asking: what are the best practices for using flatbuffers / protobufs in an application with respect to abstraction / separation of concerns? Most examples I see just do quick serialization to compare speed to JSON parsing, and I'm not seeing a full, practical example of using it in an application to, say, put values into UI controls.

+ +

For reference, I created a flatbuffers schema as follows:

+ +

+table MessageA{
+    myVal1:float;
+    myVal2:float;
+    myVal3:int;
+}
+
+
+union MessagePayload { MessageA, MessageB, MessageC }
+
+
+// top level message class. Contains a header, and a payload consisting of one of the message types specified in the MessagePayload union.
+table DataMessage {
+    header:Header (required);
+    payload:MessagePayload (required);
+}
+
+root_type DataMessage;
+
+
+ +

I do this so I can examine the payload type to determine which payload I have and deserialize it accordingly.
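
+ +

For reference, the union dispatch I mean looks roughly like this; I'm sketching it in Python for brevity (my app is Java), assuming the schema above is compiled with flatc --python under an assumed MyGame namespace, so the module paths are placeholders that depend on your schema's namespace:

+ +

from MyGame.DataMessage import DataMessage
+from MyGame.MessagePayload import MessagePayload
+from MyGame.MessageA import MessageA
+
+def handle(buf):
+    msg = DataMessage.GetRootAsDataMessage(buf, 0)
+    if msg.PayloadType() == MessagePayload.MessageA:
+        a = MessageA()
+        table = msg.Payload()  # generic flatbuffers table for the union value
+        a.Init(table.Bytes, table.Pos)
+        return a.MyVal1(), a.MyVal2(), a.MyVal3()
+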

+",239738,,,,,43938.46667,Separating generated flatbuffer/protobuf files from domain model - best practices,,0,0,,,,CC BY-SA 4.0, +408928,1,,,4/17/2020 11:20,,-1,85,"

Sorry if this is a silly question, but I am not a native English speaker, and a lot of the time it is difficult for me to come up with meaningful variable names.

+ +

I have a table of users in our app. All of the following columns in the table are unique:

+ +
    +
  • userid (numeric)
  • +
  • username (must contain at least one letter)
  • +
  • email
  • +
+ +

I want to create a variable that holds ""userid or username or email"". This variable is basically a unique identifier for the user, since all of the columns are unique and I can easily distinguish what column to search by performing some simple lexical analysis on the variable.

+ +

A unique id is usually called ""id"" in most languages :) But I don't want to call it ""id"" because it will become ambiguous with the numeric id.

+ +

This property/variable will possibly be in hundreds of different forms, so I really don't want to change all of them later. Is there a standard naming convention for such identifiers?

+",363354,,,,,43938.74097,Is there a naming convention for variables that hold one of several possible ids?,,1,6,,,,CC BY-SA 4.0, +408933,1,,,4/17/2020 14:20,,3,292,"

We've recently started working on an API, and I'm running into a philosophy issue. This is only the second API I've worked on, but the standard I've seen for retrieving a single model is always a GET, and the endpoint is something like api/model/1, with 1 being the ID. However, my coworker is REALLY adamant about not passing any data through a URL and wants to use POST instead, sending the ID through the body. His reasoning is that he feels it's a security risk.

+ +

At the same time, he wants to follow a philosophy of one POST per file. This means we need a file for UpdateModel, DeleteModel, and EditModel.

+ +

What I'm proposing is we follow this structure:

+ +
GET    /api/Model       Get all to-do items
+GET    /api/Model/{id}  Get an item by ID
+POST   /api/Model       Add a new item
+PUT    /api/Model/{id}  Update an existing item
+DELETE /api/Model/{id}  Delete an item 
+
+ +

But he's proposing something like this:

+ +
GET  /api/Model Get all to-do items
+POST /api/Model Get an item by ID
+POST /api/Model Add a new item
+POST /api/Model Update an existing item 
+POST /api/Model Delete an item 
+
+ +

Is there anything to my coworker's philosophy that I'm not understanding?

+",66899,,,,,43957.82083,Coworker wants to use POST to pass ID's in API routes,,3,3,1,,,CC BY-SA 4.0, +408934,1,408936,,4/17/2020 15:06,,1,36,"

I'm currently writing a paper on my React frontend, and I'm struggling to find the right verb for the interaction between child components and components in React.

+ +

For example:

+ +

""I have a table component which ?inherits? from the child components tableHeader and tableBody.""

+ +

Is the use of ""inheritance"" correct here, or how should I describe it?

+ +

EDIT: Here is an example of my table's code:

+ +
import React from 'react';
+import TableHeader from './tableHeader';
+import TableBody from './tableBody';
+
+// Stateless Functional Component
+
+const Table = ({ columns, sortColumn, onSort, data }) => {
+  return (
+    <table className=""table"">
+       <TableHeader
+          columns={columns}
+          sortColumn={sortColumn}
+          onSort={onSort}
+       />
+       <TableBody columns={columns} data={data} />
+    </table>
+  );
+};
+
+
+
+export default Table;
+
+",333223,,333223,,43938.64653,43938.64653,what kind of relationship do child components have with components in react?,,1,0,,,,CC BY-SA 4.0, +408937,1,408969,,4/17/2020 16:05,,-3,44,"

Speaking about DDD, in which layer should a hypothetical MailerInterface be?

+ +

I know the implementations (or adapters) for each specific mail sender package should be in the infrastructure layer, but these implementations will implement an interface.

+ +

Should this interface live in the domain layer or in the application layer?

+",263378,,,,,43939.56528,In which layer should a `MailerInterface` be?,,1,2,,,,CC BY-SA 4.0, +408938,1,,,4/17/2020 17:15,,-3,39,"

I am working on a project to save external credit bureau reports in a database. These reports are typically big, ranging from 0.5 MB to 5.0 MB. The number of files will grow exponentially over time, based on incoming traffic; the estimated total size would be 1 TB in 3 months. Files will be read immediately, and multiple times, by other services/applications once they are retrieved from the external bureau in real time and saved in the data store. These files will be associated with a uniqueId, and file / credit report retrieval will be based on that uniqueId. Response time is one of the critical factors, as the online experience depends on it.

+ +

We are considering a couple of options:

+ +

1. SQL blob store with uniqueId
+2. NoSQL column or document store with uniqueId
+3. Object store with a path mapping in a SQL DB against a uniqueId

+ +

Backup / archive / overall maintenance would play a critical role down the line, as our app should be up 24x7 with 99.9999% availability.

+ +

I need some input and guidance here to finalize a database option based on the above scenario.

+",363389,,,,,43938.74167,Saving large files like Credit Reports in Data Store - SQL/NoSQL/ObjectStore,,1,3,,,,CC BY-SA 4.0, +408947,1,,,4/17/2020 20:56,,0,79,"

Wanting to expand my programming horizons I recently started building a website.

+ +

I have started to build up my website, and it is heavily focused around an external API.

+ +

The reason for using this API is that I could not gather all the information it provides myself, by a huge margin.

+ +

I am making a video games site where users can comment on games, so I want users to be able to find all their favourite games on the site to talk about. As well as just titles, it contains so much great information about a game: websites, developers, release dates, screenshots, reviews, trailers... the list goes on. I could never bring this much information together myself.

+ +

It is built by a community of hundreds if not thousands of people contributing to the information.

+ +

Now you understand that this API is crucial to my site.

+ +

As this is an experimental site for my education only, it doesn't really matter, but I did wonder whether it is ""good practice""/""smart"" to do things like this.

+ +

The reason I am thinking this is that if the owners of the site decide to kill the API or stop allowing it to be updated, then my site would become worthless! The owners could even change the licensing on how it can be used, etc.

+ +

Of course, when it is purely for my own experience it wouldn't be a major deal, but what if you had spent a lot of time on your site and had a lot of users?

+ +

It seems to me like it might be a recipe for disaster to have your website based around something that is out of your control. On the other hand, there are some fantastic APIs out there whose information you could not source yourself even if you wanted to, so why not take full advantage?

+",363408,,,,,43939.44653,How sensible is it to build a website/app etc. using an external api?,,2,1,,,,CC BY-SA 4.0, +408959,1,,,4/18/2020 7:48,,-3,51,"

I have a bunch of small utility PHP functions that I made to solve different scripting problems, functions like UUID() and trackUserActivity(), etc. There are tons of these functions, and the number is increasing every day.

+ +

Say I have around 50 different small functions and around 100 different scripts for my application. Sometimes those 100 scripts have to call 3 or even 5 of those functions and sometimes they just call one single function out of 50.

+ +

I want to know how you organize such a bunch of functions. Do you put all of your functions in a separate folder, with each function in a separate file included individually, or do you create a single class, add all those functions to it, include that class, and call the functions on an instantiated object?

+ +

I don't think it would be a good idea to make a class of 50 functions and include that class even for calling a single function. But I wanted to be sure.

+ +

I found this question at https://stackoverflow.com/questions/1618895/organize-small-utilities-functions but this is particularly for Java and it does not seems to fit the PHP situation.

+",150407,,110531,,43939.34722,43939.38403,Proper way to organize Small Functions in PHP,,1,6,1,,,CC BY-SA 4.0, +408961,1,409032,,4/18/2020 9:04,,0,76,"

I am facing a dilemma with designing an API gateway. Currently, I am using the pipeline pattern, with the different stages being the requests made to various services (HTTP, sockets, AMQP, ...).

+ +

I have a base class, RequestStage, with some subclasses, HttpStage, SocketStage, AmqpStage. These classes extend RequestStage because they can have different unique parameters for each of their requests.

+ +

The problem is that each stage/event (subclass) must be a call either to a middleware API or to a downstream API. If it is a middleware API, then there should be a fail-response condition to short-circuit the pipeline.

+ +

How can I fit this extra field/trait in? If the stage/event class is calling a middleware API, then it should contain an extra field ""failCondition""; but how could I implement this cleanly, since my inheritance is based on request protocol type and not on service type?
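
+ +

Here is a minimal sketch of one possible shape, in Python for brevity and using the class names above; modelling the middleware concern by composition (a wrapper) instead of a second inheritance axis is just an assumption on my part, not settled design:

+ +

from dataclasses import dataclass
+from typing import Callable
+
+@dataclass
+class RequestStage:  # protocol axis
+    url: str
+
+@dataclass
+class HttpStage(RequestStage):
+    method: str = 'GET'
+
+@dataclass
+class AmqpStage(RequestStage):
+    queue: str = ''
+
+@dataclass
+class MiddlewareStage:  # service-type axis: wraps any protocol stage
+    inner: RequestStage
+    fail_condition: Callable
+
+    def should_short_circuit(self, response):
+        return self.fail_condition(response)
+
+# usage: a middleware HTTP call and a plain downstream AMQP call
+auth = MiddlewareStage(HttpStage('https://auth.internal/check'),
+                       fail_condition=lambda status: status != 200)
+publish = AmqpStage('amqp://broker', queue='orders')
+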

+",363426,,,,,43941.42639,Inheritance but with two different shared traits?,,1,6,,,,CC BY-SA 4.0, +408967,1,,,4/18/2020 11:29,,2,93,"

Firstly, I am new to software engineering, and my last question was closed. I am doing my best to ask relevant questions and improve. If you are going to downvote my question, I'd really appreciate it if you explained why, so I can learn.

+ +

Background:

+ +

I am developing a new Python API that deals with vehicles. A user can request a background check by POSTing a registration to a vehicles endpoint. What is returned is a defined vehicle object with things including engine size, colour, number of previous owners, and whether it was in a crash.

+ +

My program makes calls to various different third party APIs depending on the manufacturer and combines the results to create the vehicle object.

+ +

Current Approach

+ +

At the moment the code is laid out so the request will go through the following stages.

+ +

If the registration belonged to a car it would be:

+ +
-> view.py
+-> vehicle.py 
+   -> car.py
+       -> ford.py
+           -> ford_api.py
+
+ +

If the registration belonged to a van it would be:

+ +
-> view.py
+-> vehicle.py 
+   -> van.py
+       -> toyota.py
+           -> toyota_api.py
+
+ +

Purpose of each file:

+ +

view.py -> layer to perform input validation and authentication

+ +

vehicle.py -> Calls an API to get the basic vehicle details and routes the request to the relevant vehicle type class, e.g. car.py, van.py, bus.py

+ +

car.py -> This deals with the vehicle type specific API call. For example buses and commercial vehicles have a different endpoint to call compared to cars.

+ +

ford.py -> Will get the relevant manufacturer-specific details.

+ +

ford_api.py -> Actually does the calls connecting to Ford's API.

+ +

Obviously there is a lot more going on in each file but I'm looking to get feedback on the general approach.

+ +

My Questions

+ +

I have two concerns and I am not sure if they are founded or not.

+ +

(1) I'm worried the program might have too many layers: view.py calls vehicle.py, which calls car.py, which calls ford.py, which calls ford_api.py.

+ +

I am wondering if it is better to have one file call car.py, get the response, then call ford.py, and then ford_api.py, but that results in a lot of if/elif/elif code.

+ +

Should I be concerned about having such a layered structure?

+ +

(2) I am trying to work out the most efficient way of getting the data retrieved from the APIs back to the caller. I am considering creating a Python class that represents the vehicle check:

+ +
class VehicleCheck:
+    def __init__(self):
+        self.registration = None
+        self.manufacturer = None
+        # ... further fields omitted
+
+ +

in the vehicle.py layer and pass it down. The methods that are called will update the VehicleCheck instance and not return anything, apart from the default None. So at the end of the check, when control gets back to vehicle.py, that VehicleCheck object will have all the relevant information filled in.

+ +
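
To make that concrete, here is a minimal sketch of the pass-down-and-mutate idea, using the VehicleCheck class above; the function names are placeholders, and the hard-coded value stands in for a real API call:

+ +

def run_vehicle_check(registration):
+    check = VehicleCheck()
+    check.registration = registration
+    populate_basic_details(check)  # fills manufacturer, vehicle type, ...
+    # ...further layers, e.g. car.py / ford.py, would mutate it too
+    return check
+
+def populate_basic_details(check):
+    # would call the generic vehicle API; hard-coded for illustration
+    check.manufacturer = 'Ford'
+

+ +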

I know a method should either update an object OR return something, never both. I also know updating instance attributes directly is fair game in Python; the idea of getters and setters isn't really relevant.

+ +

Is this approach considered bad? Is it unusual? I have never seen this approach taken before but I am wondering why.

+",363402,,363402,,43939.48403,43939.58542,Python: Return or update object,,1,0,,,,CC BY-SA 4.0, +408971,1,408975,,4/18/2020 13:57,,1,98,"

This may seem like an odd question but it's something I've yet to find a ""proper"" answer for. I've tried googling but I don't get anything useful (maybe I'm looking for the wrong terms).

+ +

I'm attending a couple of classes where I'm building web APIs, one using Spring Boot and the other using NodeJS (with Express), and we've been told to use logical layers like the ""Service Layer"" or the ""Data Layer"", but I haven't yet fully understood which responsibilities belong to which layer.

+ +

For instance, in the Spring project, I get a POST request and my handler receives a DTO and then I need to perform these steps:

+ +
    +
  • Transform the DTO into a Model object
  • +
  • Validate the model according to the business rules + +
      +
    • Throw an exception if validation fails and return an error response
    • +
  • +
  • Save the model to the database
  • +
  • Return an OK response
  • +
+ +

I'm having a hard time understanding which ""logical"" layer each step belongs to (since I don't fully understand the layering yet), especially who handles the errors and how. For example, if validation fails due to business rules, the exception I throw shouldn't know about HTTP; but then whose job is it to catch it and map it to a proper HTTP error?

+ +
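
To show the kind of split I mean, a minimal sketch in Python for brevity (my projects are Java and Node; ValidationError and the handler shape are placeholders I made up):

+ +

class ValidationError(Exception):
+    pass  # domain-level error: knows nothing about HTTP
+
+# service layer: business rules and persistence orchestration
+def create_user(dto):
+    user = {'name': dto.get('name')}  # DTO -> model
+    if not user['name']:
+        raise ValidationError('name is required')
+    # repository.save(user) would go here
+    return user
+
+# web layer: the only place that knows about HTTP
+def handle_post(dto):
+    try:
+        return 201, create_user(dto)
+    except ValidationError as err:
+        return 400, {'error': str(err)}
+

+ +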

Thanks for the help.

+",363172,,,,,43939.63681,Web server archicture,,2,1,,,,CC BY-SA 4.0, +408974,1,409273,,4/18/2020 15:15,,2,74,"

It seems that I need to have a MergeNode with incoming ControlFlow and ObjectFlow while according to paragraph 15.3.3.5 Merge Nodes of OMG® Unified Modeling Language® (OMG UML®) Version 2.5.1

+ +
+

If the outgoing edge of a MergeNode is + a ControlFlow, then all incoming edges must be ControlFlows, and, if the outgoing edge is an ObjectFlow, then all incoming edges must be ObjectFlows.

+
+ +

Here is the diagram:

+ +

+ +

The intent was to model a process which, once started, continuously receives and processes Order Records and Trade Records. As soon as a Stop Request is received, the process stops.

+ +

The nodes A and B are fine, since all their flows are ObjectFlows. The nodes marked C and D are problematic, since the flows from the InitialNodes are ControlFlows.

+ +

I need flows from the InitialNodes to initially enable Trade Record Received and Order Record Received nodes since according to paragraph 15.2.3.6 Activity Execution

+ +
+

When an Activity is first invoked, none of its nodes other than input ActivityParameterNodes will initially hold any tokens. However, nodes that do not have incoming edges and require no input data to execute are immediately enabled.

+
+ +

Thus Stop Request Received will be enabled when the activity is first invoked, while Trade Record Received and Order Record Received will not be.

+ +

Is there a way to make this diagram conform to the UML specification?

+ +

If I set isControlType=true for the Trade Record, Stop Request and Order Record pins, would it mean that all the flows are now acceptable, since according to paragraph 15.4.3.1 Object Nodes:

+ +
+

If isControlType=true for an ObjectNode, ControlFlows may be incoming to and outgoing from the ObjectNode, objects tokens can come into or go out of the ObjectNode along ControlFlows, and these tokens can flow along ControlFlows reached downstream of the ObjectNode. The values on such object tokens may be used to affect the control of ExecutableNodes that are the targets of such ControlFlows, though the specific meaning of such values is not defined in this specification

+
+",363225,,363225,,43939.66944,43946.3125,How do I merge ControlFlow and ObjectFlow in UML2 Activity Diagram?,,2,3,,,,CC BY-SA 4.0, +408984,1,,,4/19/2020 4:48,,0,128,"

In this simple example I have two activities ActivityA and ActivityB.

+ +

ActivityA is the foreground activity. The user clicks a button that executes dispatchActivityBIntent(), which creates an intent and dispatches it to start ActivityB:

+ +
// inside dispatchActivityBIntent() in ActivityA: build an explicit intent
+// targeting ActivityB and hand it to the framework
+Intent intent = new Intent(this, ActivityB.class);
+startActivity(intent);
+
+ +

These are the diagrams I came up with

+ +

+ +

+ +

I found a previous answer discussing how to portray this in a class diagram:

+ +

Android Class Diagram UML

+ +

+ +

An alternative way. Is it wrong to show it like this?

+ +

+",363489,,,,,43940.32222,Android : How to represent one activity starting another activity with an intent in a UML sequence diagram and class diagram?,,0,1,,,,CC BY-SA 4.0, +408993,1,,,4/19/2020 12:41,,-3,104,"

I have 2 microservices.

+ +
    +
  1. Users Service - REST API which provides user detail
  2. +
  3. Statistics Service - REST API to provide different stats
  4. +
+ +

My goal is to provide a single interface to a mobile app which will be used by our users to see the stats. The stats should be shown to the user according to their roles.

+ +

I think an API Gateway with the Backend-For-Frontend variation is something I have to build.

+ +

The API Flow will be like this.

+ +
[Mobile App]            [API Gateway]                           [User Microservice]             [Stats Microservice]
+|                         |                                        |                                    |   
+1---------Get Metrics---->|                                        |                                    |
+                          2--------- Get User Roles -------------->|                                    |
+                          3----------------------------Get Stats According to Roles-------------------->|
+                          4(Wrap data in FrontEnd json)
+<---Send JSON to App------5
+
+ +

I am thinking of using Node.js for this, as most of our team members have already worked with it. There are some good API Gateways out there for Node.js, like Express Gateway and fast-gateway, but neither of them provides a data aggregation (combining and transforming data from multiple services) feature.

+ +

I can understand that data aggregation can become very use-case specific, hence there is not much support for it in these open-source API Gateways, but I have not found any guidelines on how to achieve it with API Gateways either.

+ +

I want to know how we can leverage the caching mechanism provided by the API gateway if I write custom aggregation plugin/code to call the microservices.

+ +

I also want to know how to leverage the failure handling provided by the API gateway when the Statistics Microservice is down.
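
+ +

To make the aggregation step concrete, here is a rough sketch of what my custom aggregation code might look like, in Python for brevity (my implementation would be Node.js); the URLs and the naive TTL cache are made up for illustration:

+ +

import time
+import requests
+
+_cache = {}
+TTL = 30  # seconds
+
+def get_cached(url):
+    now = time.time()
+    hit = _cache.get(url)
+    if hit and now - hit[0] < TTL:
+        return hit[1]
+    data = requests.get(url, timeout=2).json()
+    _cache[url] = (now, data)
+    return data
+
+def get_metrics(user_id):
+    # assume the user service returns a JSON list of role names
+    roles = get_cached('http://users/api/users/%s/roles' % user_id)
+    try:
+        stats = get_cached('http://stats/api/stats?roles=' + ','.join(roles))
+    except requests.RequestException:
+        stats = {'error': 'stats service unavailable'}  # graceful degradation
+    return {'roles': roles, 'stats': stats}
+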

+",234199,,234199,,43940.54028,43941.29583,"Managing Caching, Service Failure handling when building Backend for frontend in API Gateway",,1,1,,,,CC BY-SA 4.0, +408994,1,,,4/19/2020 13:52,,0,100,"

I'm doing a Fruit Ninja clone in Java (the language is not really relevant). It's not completed yet, but here is the point I'm at now.

+ +

Currently I've:

+ +
    +
  • Difficulty interface, where several classes will implement it to decide the fruit moving speed based on the game time, and also determine how many bombs will be created compared to fruits (the more bombs, the more difficult).
  • +
  • GameOverCondition interface, which is implemented by ArcadeGameOverCondition and ClassicGameOverCondition, because in Classic you lose when lives run out, and in Arcade you lose when time is up.
  • +
  • There is a GameObject abstract class that's extended by Fruit abstract class and Bomb abstract class. Fruit is extended by concrete fruit classes, and Bomb is extended by concrete different bombs.
  • +
  • There is a FruitNinja class, which takes a GameOverCondition and a Difficulty in the constructor. In the constructor it runs a thread that constantly creates more GameObjects, and each GameObject runs a thread in its constructor that moves it at the specific speed determined by the Difficulty.
  • +
+ +

At this point, I just noticed that the Difficulty logic is currently the same for all game types (Arcade & Classic), namely making the game faster and also increasing the number of bombs. However, I'm thinking that later I may want different logic. How should I change my design?

+ +

Feel free to tell any improvements you see.

+",363525,,363525,,43940.61875,43940.65833,Fruit Ninja Design Decisions,,1,0,,,,CC BY-SA 4.0, +408997,1,,,4/19/2020 14:53,,0,19,"

Background

+ +

Having the client web application and the server on the same network is safer, as one can expose the server ports only within the network instead of making them publicly available. I want to expose my Unity3D game using this safety measure and follow a DMZ approach.

+ +

Current State

+ +

Currently my game is compiled and built as a native application and shipped to the Android/iOS stores. Users make requests over HTTPS to my (server) REST API. This opens up a whole bunch of security issues.

+ +

Goal

+ +

What I ultimately want is to have a small native application, essentially just a wrapper that loads my actual game from my remote Unity3D server. The Unity3D server communicates over internal network ports to my actual server and the database.

+ +

Question

+ +

Is it possible to expose Unity like this? The only way I have tested is compiling Unity as a WebGL project and embedding it into a web server/framework like NodeJS, Angular, or the like.

+",363528,,,,,43940.62014,Unity3d app as web client,,0,0,,,,CC BY-SA 4.0, +409001,1,,,4/19/2020 16:13,,-2,54,"

I am currently adding functionality to an Order entity, which has a Status column storing the state of the current order.

+ +

An Order goes through the following workflow, with minor deviations I have not captured in order to keep this simple:

+ +

Placed -> Payment -> Inventory procurement -> Shipped -> Delivered OR Cancelled

+ +

So my order status currently looks like

+ +
public enum OrderStatus{
+   PLACED, PAID, INVENTORY_PROCESSED, SHIPPED, DELIVERED, CANCELLED
+}
+
+ +

Delivered and Cancelled are the terminal states here. There could be a lot of steps in between, for example when a payment is attempted and fails.

+ +

Should I store granular states like Payment Initiated and Payment Failed in the order status, or keep the status coarse and let the APIs derive and expose the granular state if someone wants to add functionality on top of this?

+ +

Point in favour of having granular states at Order:

+ +
    +
  • Keeps everything observable by just looking at the Order entity and lets others use it easily.
  • +
+ +

Point against granular states:

+ +
    +
  • At some point the state transitions might become too complicated to derive and comprehend, and there is no telling how large the number of states could grow.
  • +
+ +

What are the best ways to think about this? If possible, please mention your experiences/references.

+ +

Apologies if this is not the right forum to ask this, which if the case is, please redirect me to a place where this would fit best.

+",184172,,,,,43940.79722,How many states to be associated with an Order entity?,,1,6,1,,,CC BY-SA 4.0, +409004,1,409007,,4/19/2020 17:20,,-2,54,"

Since Neo4j is implemented in Java and therefore uses the JVM, wouldn't an equivalent graph database written in C++, Rust, or Go be more performant? Why would one decide to build a DBMS in a comparatively ""high-level"" language like Java?

+",333044,,,,,43940.78194,Is a JVM based dbms like Neo4J implemented ideally?,,2,0,,,,CC BY-SA 4.0, +409006,1,409012,,4/19/2020 18:11,,3,163,"

In C#/.NET, I have a class that I want to provide extension points for. I can do this either using inheritance:

+ +
public class Animal {
+    public virtual void Speak() { }
+}
+public class Dog : Animal {
+    public override void Speak() => Console.WriteLine(""Woof"");
+}
+var dog = new Dog();
+dog.Speak();
+
+ +

Or using passed-in delegates:

+ +
public class Animal {
+    private Action speak;
+    public Animal(Action speak) => this.speak = speak;
+    public void Speak() => speak();
+}
+var dog = new Animal(() => Console.WriteLine(""Woof""));
+dog.Speak();
+
+ +

I can already see some differences between them:

+ +
    +
  • Access to the base behavior -- if via inheritance, the overriding method can choose whether to invoke the base method or not; if via delegates, there is no automatic access to the base behavior.
  • +
  • Can there be no behavior? -- if via inheritance, there is always some behavior at Speak, either the base class behavior, or the derived class behavior. When using delegates, the delegate field could potentially contain null (although with nullable reference types, this shouldn't happen).
  • +
  • Explicit definition of scoped data / members -- When extending via inheritance, other members or data defined in the derived class are explicitly defined as being part of a class. When using delegates together with lambda expressions, lambda expressions can access the surrounding scope, but the parts of that scope aren't necessarily explicitly defined as such (e.g. closed-over variables).
  • +
+ +

When is it appropriate to expose extension points via inheritance, and when is it appropriate to use delegates?

+",100120,,,,,43941.49375,Extension points via inheritance vs via delegate fields,,4,5,,,,CC BY-SA 4.0, +409013,1,,,4/19/2020 20:14,,-2,43,"

I have millions of objects, each with an array of fewer than 10 elements, which are the names of other objects in the dataset. Basically:
+Basically

+ +
{
+ a:[b,c,d,],
+ b:[c,d,e],
+ c:[a,e,f],
+ ...
+ e:[a,b,c]
+}
+
+ +

this will result in (a,c), (b,e), (c,e), as for each of these tuples element A points to element B and vice versa; e.g. b has e in its list, and e has b.

+ +

Any ideas besides the naive for elem in elems: { for other in elems: { ... } } double loop?
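
+ +

For reference, a minimal sketch of the linear-time approach I am hoping for (or hoping to improve upon): precompute a set per object and only check each object's own, at most 10, neighbours, which is O(total references) rather than O(n^2):

+ +

data = {
+    'a': ['b', 'c', 'd'],
+    'b': ['c', 'd', 'e'],
+    'c': ['a', 'e', 'f'],
+    'e': ['a', 'b', 'c'],
+}
+
+# precompute sets for O(1) membership tests
+refs = {name: set(targets) for name, targets in data.items()}
+
+pairs = set()
+for name, targets in refs.items():
+    for target in targets:
+        # the pair is mutual if the target also references us
+        if name in refs.get(target, ()):
+            pairs.add(tuple(sorted((name, target))))  # dedupe (a,b)/(b,a)
+
+print(sorted(pairs))  # [('a', 'c'), ('b', 'e'), ('c', 'e')]
+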

+",363552,,363552,,43940.86389,43940.8875,Find circular references of first order (bi-directional referencing),,1,2,,,,CC BY-SA 4.0, +409017,1,,,4/19/2020 21:40,,0,317,"

Looking at the web development landscape, I see that there are two approaches to making websites:

+ +
    +
  • If the site is simple, you're better off using HTML, CSS and JS.
  • +
  • If the site is complex, it's worth it to use a frontend framework.
  • +
+ +

I have trouble understanding what a simple website means in this context. Is an e-commerce website a simple site? It does not have many interactions from users (selecting items, checking out, etc.). Do we mean only static websites (e.g. blogs, though even blogs have interactivity through user comments)? Is YouTube a simple site?

+ +

So what is the extent of the ""simple website"" category for which you're better off using plain HTML, CSS and JS?

+",350227,,,,,43942.50208,Understanding when to use plain HTML CSS JS vs Frameworks,,1,2,,,,CC BY-SA 4.0, +409020,1,,,4/20/2020 3:08,,4,118,"

I have a problem placing and shaping the following method in my program:

+ +
private void PrintWarning(params string[] messages)
+{
+    if (!_suppressWarnings)
+    {
+        if (_warningsAsErrors)
+        {
+            foreach (string message in messages)
+            {
+                Console.WriteLine($""Error: {message}"");
+            }
+            Environment.Exit(1);
+
+        }
+        else
+        {
+            foreach (string message in messages)
+            {
+                Console.WriteLine($""Warning: {message}"");
+            }
+        }
+    }
+}
+
+
+ +

This is C#, but it could just as well be Java or another high-level OOP language.

+ +

I have already moved it between three different places, and it feels wrong in each case. Note that it's not a practical question as such: for a simple program like mine, it does not matter much where I put it; it will work in any case. But I'd like to find out what changes I need to make in order for it to appear logical from OOP and class design perspectives.

+ +

The program is a command line utility, with many command line flags, and _suppressWarnings and _warningsAsErrors represent two of these. They normally reside in the Command class:

+ +
class Command
+{
+    public bool SuppressWarnings { get; set; }
+    public bool WarningsAsErrors { get; set; }
+    //... other command line options follow
+
+ +

The command line utility is used in a CI/CD automation pipeline, and in this mode the WarningsAsErrors option will be set, so that the process bails out as soon as possible if there is even a single warning. The utility will also be used locally while building a pipeline, and without WarningsAsErrors set it can display many warnings. Once the pipeline is built it's not likely to change, so bailing out on the first warning and not displaying the rest makes sense and is the safest.

+ +

Command line options, in the form of a Command object, are passed to an instance of the Processor class and saved as a private field in it. The Processor class works off a TOML file, so the first thing it does when it gets control is read and parse that TOML file with the help of a third-party library. TOML as such does not define a ""schema"", only a certain structure of the TOML file, and only certain types make sense for the command line utility, so after reading the TOML file, the Processor class constructs a TomlTypeChecker and passes the instance of the TOML object to it for validating this ephemeral schema:

+ +
TomlTypeChecker typeChecker = new TomlTypeChecker(
+  _command.SuppressWarnings, 
+  _command.WarningsAsErrors, 
+  //... some other parameters);
+typeChecker.CheckTypes(toml);
+
+ +

The PrintWarning method in the incarnation shown above, is a method of the TomlTypeChecker class.

+ +

Unfortunately, the Processor class also needs this same PrintWarning logic, because it can sometimes detect likely errors in the TOML file based on the data it sees, which are not related to the schema. Clearly, the TomlTypeChecker class is not a good home for this method, as it could be used elsewhere.

+ +

I can, of course, add warningsAsErrors and suppressWarnings parameters to the method and move it to a static method in a utility class. This is somewhat unsatisfying, because this method is called a few dozen times in the program, and specifying the same warningsAsErrors and suppressWarnings each time feels wrong. But putting it in a base class does not feel right either, because, let's be honest, Processor and TomlTypeChecker are logically completely independent; they don't have the same parent.

+ +

What is the best way to solve this? I'm seriously considering making Command a static property. Is this the best solution?

+",55565,,,,,43941.34514,"Utility method with several parameters, that have same values, that does not belong in any class. Where to put?",,2,0,,,,CC BY-SA 4.0, +409025,1,409031,,4/20/2020 7:20,,-5,42,"

My client has a business which works mostly in remote areas where internet connectivity is limited. We have a central database, and the branches in remote areas need to connect to it.

+ +

We are developing a web application (Django, Postgres) for this, but it should work even if the client is offline.

+ +

We plan to achieve that by having a local database and syncing it with the central database (using some jobs running on the client, perhaps Celery). We don't need to sync all the tables in the database. Among the following, which is a good approach?

+ +
    +
  1. Should the application always connect to the local database and sync the data with the remote?
  2. +
  3. Should the client connect to the remote when it is available and sync the rest of the data?
  4. +
+ +

Let me know is there any better approaches. + Thanks in advance.

+",363580,,5099,,43941.40972,43941.40972,Sync local database with remote,,1,2,1,,,CC BY-SA 4.0, +409028,1,409034,,4/20/2020 8:47,,1,83,"

In a microservice architecture, I have a BREAD (CRUD) service that has methods for interacting with a database holding a number of different entities that reference each other. I also use a message broker to publish messages when a change occurs to the entities (Add, Edit, Delete methods), after the query returns successfully.

+ +

Problem

+ +

When deleting rows from a table that are referenced by rows in another table, there are only two options: delete the referencing rows first, or use the ON DELETE CASCADE option. In my case, that alone is not enough. The problem I need to solve is:

+ +

How do I delete an entity from the database when rows reference it, whilst keeping the ability to publish changes to the MQ?

+ +

Considered Solutions

+ +

I. Channels (those languages that have them)

+ +

Complexity: High

+ +

Create a bidirectional channel for every entity. When a slave entity subscribes to the master entity's channel, it sends a registration message on the channel that notifies the master entity which entities depend on it. When an entity is to be deleted, push a message to the channel that notifies listeners that it will be deleted. Wait for the confirmation messages of all the slaves to be received, and then delete the entity.

+ +
Create user, email, attachment channels (or a single one with filtering in each service)
+Create user, email, attachment services with its master channel and channels of the parent dependency.  
+
+(later)
+
+Request (delete user) 
+  -> publish on user channel that a user will be deleted
+    -> email entity receives, send a requests in its channel that it will be deleted
+      -> attachment entity receives, deletes the attachments for the email
+      <- publish change to MQ and respond that the attachments are deleted
+    <- publish change to MQ and respond that the emails have been deleted
+    -> contact entity receives, deletes all the user contacts
+    <- publish change to MQ and respond that the contacts have been deleted
+  <- deletes the user, responds to a requests
+
+ +

II. Callbacks

+ +

Complexity: Medium

+ +

Create a callback that will be called before the entity is deleted. The callback will call delete methods on all dependent entities.

+ +
Create user, email, attachment services
+Create user, email, attachment callbacks with references to its dependencies
+Attach callback to service
+
+(later)
+
+Request (delete user) 
+  -> run user callback
+    -> email entity `Delete()` is called. run email callback
+      -> attachment entity `Delete()` is called, deletes the attachments for the email
+      <- publish change to MQ and return that the attachments are deleted
+    <- publish change to MQ and return that the emails are deleted
+    -> contact entity `Delete()` is called, deletes all the user contacts
+    <- publish change to MQ and return that the contacts are deleted
+  <- deletes the user, publish change to MQ and responds to a requests
+
+
+ +

III. DB Engine Notification

+ +

Complexity: Medium

+ +

Some database engines have internal channels for sending and listening for notifications (e.g. PostgreSQL's LISTEN/NOTIFY).

+ +
Create listener on user, email, attachment DB channels (or a single channel with filtering in each service)
+Create user, email, attachment services and pass the channel listener(s)
+
+(later)
+
+Request (delete user) 
+  -> run user callback
+  <- deletes the user with `CASCADE` and responds to a requests
+    -> email entity listener handler is called - publish change to MQ
+    -> attachment entity listener handler is called - publish change to MQ
+    -> contact entity listener handler is called - publish change to MQ
+
+ +
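
For Postgres specifically, here is a minimal sketch of the listener side of option III, using psycopg2; the channel name, payload handling and connection string are placeholders:

+ +

import select
+import psycopg2
+import psycopg2.extensions
+
+conn = psycopg2.connect('dbname=app')
+conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
+cur = conn.cursor()
+cur.execute('LISTEN entity_deleted;')
+
+while True:
+    # block until the connection has something to read (5 s timeout)
+    if select.select([conn], [], [], 5) == ([], [], []):
+        continue
+    conn.poll()
+    while conn.notifies:
+        notify = conn.notifies.pop(0)
+        # here I would publish the change to the MQ
+        print(notify.channel, notify.payload)
+

+ +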

Not applicable solutions

+ +

SQL CASCADE

+ +

If I used ON DELETE CASCADE (without notify), I would lose the ability to publish events for, in the above example, the email, attachment and contact entities, and therefore I wouldn't be able to send those changes to the MQ.

+ +

Solutions for streaming changes at a database level (ex Debezium)

+ +

I would like to have full control of when, how and where the messages are sent, and would not like to introduce a 3rd-party dependency if it's not strictly necessary.

+ +

Conclusion

+ +

I would like to hear your opinions on handling deletion under the given requirements, and your thoughts on the solutions stated above.

+",221683,,,,,43941.47014,How to approach deleting entities from database with foreign keys,,1,0,,,,CC BY-SA 4.0, +409038,1,409279,,4/20/2020 13:10,,-4,38,"

I am struggling to find methods/concepts or even implementations of partially automated processes for splitting a monolith into microservices. Of course I do not expect a solution that makes the migration happen in a poof, but any kind of concept or software that assists in the process would be nice.

+ +

I have searched for hours now and cannot seem to find anything. It is very surprising to me that I cannot find even one tool that analyses the way a monolith accesses the database and suggests how cutting off certain areas might help in migrating the system. I can imagine there may be some other points that could be automated in some way. Of course such software would not understand the meaning of the accessed information, but software that manages some kind of context should be (at least internally) somehow divided into contextual and/or functional services etc. that correspond to specific parts of the information.

+ +

So finally my question is:

+ +

Does anyone know of any concept or software that does this that I just did not find? If not, are there reasons why such a concept or software would be impossible to build?

+ +

Thank you very much +Tim

+",363609,,,,,43946.51528,Partially Automated Migration of Monolith to MicroService,,2,0,,,,CC BY-SA 4.0, +409043,1,409044,,4/20/2020 14:18,,-4,69,"

I'm working on a Flutter project. We have test cases on the server side.

+ +

On the client side, we have the idea of testing every acceptance criterion only at the E2E level.

+ +

The question is

+ +

What problems will we encounter as the project gets bigger?

+",200689,,200689,,43941.60833,43941.63333,Any problems if we test every things on E2E?,,1,8,,,,CC BY-SA 4.0, +409046,1,,,4/20/2020 15:07,,0,54,"

I need to design a Node.js TypeScript API in a typical OOP way with controllers and models. Here I am adding the code base for the invoice API of the system. Can anyone suggest the best approach to designing this in OOP with database access?

+ +

I need to implement this system using pure OOP. Here is my application for invoice CRUD.

+ +

Database table structure of invoice and invoice-items:

+ +

invoice

+ +
id: int | date: Date | custId: int | created: Date | updated: Date
+
+ +

invoice-items

+ +
id: int | invoiceId: int | itemId: int | price: float | Created : Date | updated: Date
+
+ +

folder structure

+ +
--modals
+    --router.class.ts
+    --base.class.ts
+    --invoice-items.class.ts
+    --invoice.class.ts      
+--controllers
+    --invoice.controller.ts
+--routes
+    --invoice.route.ts
+--app.ts
+--database.ts
+
+ +

Here are the files:

+ +

base.class.ts

+ +
//base class for the classes which are used throughout the system
+abstract class Base {
+
+    //common properties
+    public id: number | undefined
+    public created: Date | undefined
+    public updated: Date | undefined
+
+    constructor(base: Base) {
+        this.id = base.id
+        this.created = new Date()
+        this.updated = new Date()
+    }
+}
+export default Base
+
+ +

invoice-items.class.ts

+ +
import Base from ""./base.class""
+import pool from ""../database""
+
+class InvoiceItem extends Base {
+
+    invoiceId: number | undefined
+    itemId: number | undefined
+    price: number |undefined
+
+    constructor(item: InvoiceItem) {
+        super(item)
+        this.invoiceId = item.invoiceId
+        this.itemId = item.itemId
+        this.price = item.price
+    }
+
+    async create() {
+
+        let data = {
+            invoiceId: this.invoiceId,
+            price: this.price,
+            itemId: this.itemId,
+            created: this.created,
+            updated: this.updated
+        }
+
+        await pool.query('insert into `invoice-items` set ?', data)
+
+        return true;
+    }
+
+    async update() {
+
+        let data = {
+            invoiceId: this.invoiceId,
+            price: this.price,
+            itemId: this.itemId,
+            updated: this.updated
+        }
+
+        await pool.query('update `invoice-items` set ? where id = ?', [data, this.id])
+
+        return true;
+    }
+
+    async delete() {
+
+        await pool.query('delete from `invoice-items` where id = ?', [this.id])
+
+        return true;
+    }
+
+
+}
+
+export default InvoiceItem
+
+ +

invoice.class.ts

+ +
import Base from ""./base.class"";
+import pool from ""../database"";
+import InvoiceItem from ""./invoice-item.class"";
+
+class Invoice extends Base {
+
+    date: Date | undefined
+    custId: number | undefined
+    invoiceItems: InvoiceItem [] = []
+
+    constructor(invoice: Invoice) {
+        super(invoice)
+        this.date = invoice.date
+        this.custId = invoice.custId
+    }
+
+    async selectAll(){
+
+        let result = await pool.query('select * from invoice')
+        return result
+    }
+
+    async create() {
+
+        //insert invoice data
+        let data = {
+            date: this.date,
+            custId: this.custId,
+            created: this.created,
+            updated: this.updated
+        }
+
+        const result = await pool.query('insert into invoice set ?', data) // WHAT IF THIS FAILS?
+
+        //insert invoice items
+        this.invoiceItems.forEach(async item => {
+
+            //add the invoice id
+            item.invoiceId = result.insertId
+
+            //insert each items into invoice-items table
+            await item.create()
+        });
+
+        return result;
+    }
+
+    async update() {
+
+        //update invoice data
+        let data = {
+            date: this.date,
+            custId: this.custId,
+            updated: this.updated
+        }
+
+        await pool.query('update invoice set ? where id = ?', [data, this.id])
+
+        //delete existing invoice items
+        await pool.query('delete from `invoice-items` where invoiceId = ?', [this.id])
+
+        //insert invoice items
+        this.invoiceItems.forEach(async item => {
+
+            //add the invoice id
+            item.invoiceId = this.id
+
+            //insert each items into invoice-items table
+            await item.create()
+        });
+
+        return true;
+    }
+
+    async delete() {
+
+        await pool.query('delete from invoice where id = ?', [this.id])
+
+        //delete existing invoice items
+        await pool.query('delete from `invoice-items` where invoiceId = ?', [this.id])
+
+        return true
+    }
+}
+
+export default Invoice
+
+ +

invoice.controller.ts

+ +
import { Request, Response } from 'express'
+import Invoice from '../modals/invoice.class'
+import InvoiceItem from '../modals/invoice-item.class'
+
+class InvoiceController {
+
+    constructor() {}
+
+    public async index(req: Request, res: Response): Promise<void> {
+
+        let invoice = new Invoice(req.body || {})
+        let rslt = await invoice.selectAll()
+        res.status(200).send(rslt)
+    }
+
+    public async create(req: Request, res: Response): Promise<void> {
+
+        // sample payload: {
+        //     custId: 44,
+        //     items:[{ price: 99, itemId: 5 }, { price: 99, itemId: 5 }]
+        // }
+
+        let invoice = new Invoice(req.body || {}) 
+
+        //create invoice items
+        req.body.items.forEach( (item : any) => {
+
+            invoice.invoiceItems.push( new InvoiceItem(item))
+        })
+
+        let rslt = await invoice.create()
+        res.status(200).send(rslt)
+    }
+
+    public async update(req: Request, res: Response): Promise<void> {
+
+        // sample payload: {
+        //     custId: 44,
+        //     id: 5,
+        //     items:[{ price: 99, itemId: 5 }, { price: 99, itemId: 5 }]
+        // }
+
+        let invoice = new Invoice(req.body || {})
+
+        //create invoice items
+        req.body.items.forEach( (item : any) => {
+
+            invoice.invoiceItems.push( new InvoiceItem(item))
+        })
+
+        let rslt = await invoice.update()
+        res.status(200).send(rslt)
+    }
+
+    public async delete(req: Request, res: Response): Promise<void> {
+
+        // sample payload: {
+        //     id: 5
+        // }
+        let invoice = new Invoice(req.body || {})
+        let rslt = await invoice.delete()
+        res.status(200).send(rslt)
+    }
+
+}
+
+export const invoiceController = new InvoiceController()
+
+ +

invoice.route.ts

+ +
import RouterClass from ""../modals/router.class""
+import { invoiceController } from ""../controllers/invoice.controller""
+
+ class InvoiceRoutes extends RouterClass{
+
+    constructor() {
+        super()
+        this.config()
+    }
+
+    //configure user related routes
+    config(): void {
+
+        //user CRUD routes
+        this.router
+            .get('/', invoiceController.index)
+            .post('/', invoiceController.create)
+            .put('/', invoiceController.update)
+            .delete('/', invoiceController.delete)
+    }
+}
+
+const invoiceRoutes =  new InvoiceRoutes()
+export default invoiceRoutes.router
+
+ +

database.ts

+ +
import mysql from 'promise-mysql'
+
+//application mysql connection options
+const conectionConfig = {
+    host: 'localhost',
+    user: 'root',
+    password: '',
+    database: 'test'
+}
+
+//set up the connection pool with the mysql library
+const pool:any = mysql.createPool(conectionConfig)
+
+//connect to the database
+pool.getConnection()
+    .then((connection: any) => {
+        pool.releaseConnection(connection)
+        console.log('Database connected.')
+    })
+
+export default pool
+
+ +

app.ts

+ +
import express, { Application, NextFunction, Response, Request } from 'express'
+import indexRoute from './routes/index.route'
+import { AppError } from './modals/apperror.class'
+import invoiceRoutes from './routes/invoice.route'
+
+class Server {
+
+    public app: Application
+
+    constructor() {
+
+        this.app = express()
+        this.config()
+        this.routes()
+    }
+
+    config(): void {
+
+        //set app configuration variables
+        this.app.set('port', process.env.PORT || 80)
+
+        //set application middlewares
+
+        this.app.use(express.json())
+        this.app.use(express.urlencoded({ extended: false }))
+    }
+
+    //application routes
+    routes(): void {
+
+        this.app.use('/', indexRoute)
+        this.app.use('/invoice', invoiceRoutes)
+
+        //handle the errors
+        this.app.use((err: Error, req: Request, res: Response, next: NextFunction) => {
+
+            let status = 500
+            let message = 'Something went wrong!'
+
+            //defined exceptions
+            if (err instanceof AppError) {
+
+                status = err.status
+                message = err.message
+            }
+
+            //not defined exceptions log the stack
+            else {
+
+                console.log(err.stack)
+            }
+
+            res.status(status).send({ status, message })
+        });
+    }
+
+    start(): void {
+
+        //listen the port
+        this.app.listen(this.app.get('port'), () => {
+            console.log('Server running on PORT', this.app.get('port'))
+        })
+    }
+}
+
+//bootstrap
+const server = new Server()
+server.start()
+
+ +

**NOTE**: the above is a sample code base to demonstrate how I design the API using an object-oriented pattern. There might be typos or syntax errors.

+ +

I expect there must be a better approach to organizing the files and designing the classes in an object-oriented API design.

+ +

Some of my confusions are:

+ +

- Should all the model classes access the database for creating, deleting and updating rows in their tables? If so, is there any best practice?
+- An invoice has invoice items; do we really need to create the invoice items in the create and update routes?
+- When creating/updating an invoice, the routes need to deal with two tables (invoice and invoice-items); I think there must be a better approach than inserting/deleting the items one by one.

+ +

I need a better solution than this. Can anyone demonstrate (even with less code) how to deal with tables in the model classes and how to create a CRUD API using OOP?

+ +

Thank you,

+",204375,,,,,43941.62986,How to design a nodejs API as typical OOP way using typescript?,,0,1,,,,CC BY-SA 4.0, +409047,1,410745,,4/20/2020 15:39,,1,134,"

The docs explicitly state this:

+
+

Avoid using refs for anything that can be done declaratively.

+

For example, instead of exposing open() and close() methods on a Dialog component, pass an isOpen prop to it.

+
+

I get the idea and largely agree for reusable components: not having state (which would need to be manipulated imperatively via refs) hidden inside a component makes it easier to reason about and reuse. I'm wondering whether this also applies to concrete dialogs in an application; if so, it seems I'm missing the understanding needed to use the right tools.

+

For example: in my application, I have a file tree that shows a menu with options to create, rename, and delete files. Each option will show an appropriate dialog, and these dialogs are encapsulated in their own components. At the bottom of the component hierarchy is a reusable Dialog component:

+
FileTree
+  FileMenu
+  CreateFileDialog
+    Dialog
+  RenameFileDialog
+    Dialog
+  DeleteFileDialog
+    Dialog
+
+

I can see two approaches here:

+
  1. FileTree manages all three dialogs' visibility states, meaning no refs
  2. the individual dialogs or even the underlying reusable Dialog managing visibility state, using refs for showing dialogs when needed
+

In my concrete case Dialog is a third-party stateless component, but I could add a stateful wrapper for the second approach. And to me, that seems far superior: there is only a single instance of state handling code, encapsulated in the "correct" component. Having three instances of visibility state in FileTree means I need a way to distinguish these states (long names such as createFileDialogVisible or namespacing objects such as createFileDialog.visible).
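
+ +

For concreteness, approach 1 would look roughly like this (a sketch only; the prop names on FileMenu and the dialogs are made up):

+
function FileTree() {
+  //approach 1: one visibility flag per dialog, all owned by the parent
+  const [createFileDialogVisible, setCreateFileDialogVisible] = React.useState(false);
+  const [renameFileDialogVisible, setRenameFileDialogVisible] = React.useState(false);
+  const [deleteFileDialogVisible, setDeleteFileDialogVisible] = React.useState(false);
+
+  return (
+    <div>
+      <FileMenu
+        onCreateFile={() => setCreateFileDialogVisible(true)}
+        onRenameFile={() => setRenameFileDialogVisible(true)}
+        onDeleteFile={() => setDeleteFileDialogVisible(true)}
+      />
+      <CreateFileDialog visible={createFileDialogVisible} onClose={() => setCreateFileDialogVisible(false)} />
+      <RenameFileDialog visible={renameFileDialogVisible} onClose={() => setRenameFileDialogVisible(false)} />
+      <DeleteFileDialog visible={deleteFileDialogVisible} onClose={() => setDeleteFileDialogVisible(false)} />
+    </div>
+  );
+}
+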

+

(In practice there is additional state to be managed, e.g. the new file's name, but that is out of the scope of what the title asks. That said, I think it does impact the feasibility and clarity of putting all dialog state into FileTree.)

+

Given that I feel option two is so clearly superior, I can't imagine that the guideline is meant to encourage the first. Am I missing an alternative approach, or is this just something that is outside the scope of this guideline?

+
+

Appendix

+

Steps in my reasoning that I think are good candidates for being wrong:

+
  • the only alternative to using refs is pulling state into the parent component
  • managing state in the parent component makes the described code duplication/namespacing issues unavoidable
  • using refs will actually result in less code duplication/more ergonomic code
+

The following code sample shows a stateless dialog (Dlg), a stateful wrapper (Dialog) and a Consumer for that stateful dialog. In approach 1, FileTree corresponds to Dialog except that there would be multiple visibility states. In approach 2, FileTree corresponds to Consumer, again with multiple refs.

+
import * as React from 'react';
+
+function Dlg({ visible, onClose }) {
+  return visible ? (
+    <div>
+      dialog is shown
+      <button onClick={onClose}>close</button>
+    </div>
+  ) : (
+    <div>dialog is hidden</div>
+  );
+}
+
+function Dialog(props, ref) {
+  const [visible, setVisible] = React.useState<boolean>(false);
+
+  const show = () => setVisible(true);
+  const hide = () => setVisible(false);
+
+  React.useImperativeHandle(ref, () => ({ show, hide }));
+
+  return <Dlg visible={visible} onClose={hide} />;
+}
+
+Dialog = React.forwardRef(Dialog);
+
+function Consumer() {
+  const dialogRef = React.useRef(null);
+
+  const showDialog = () => {
+    // eslint-disable-next-line no-throw-literal
+    if (dialogRef.current === null) throw 'ref is null';
+
+    dialogRef.current.show();
+  };
+
+  return (
+    <div>
+      <div>
+        <button onClick={showDialog}>open</button>
+      </div>
+      <Dialog ref={dialogRef} />
+    </div>
+  );
+}
+
+",148002,,-1,,43998.41736,43979.56111,React says refs and imperative code are not the right tool for showing and hiding dialogs. Why though?,,2,1,,,,CC BY-SA 4.0, +409049,1,,,4/20/2020 15:54,,1,141,"

I often see the terms ""early binding"" and ""static dispatch"" used interchangeably, and I also often see the terms ""late binding"" and ""dynamic dispatch"" used interchangeably.

+ +

Do these terms mean the same thing?

+",274108,,,,,43942.73611,"Is ""early binding"" the same as ""static dispatch"", and ""late binding"" the same as ""dynamic dispatch""?",,3,3,,,,CC BY-SA 4.0, +409052,1,409067,,4/20/2020 16:48,,28,6127,"

I've been researching Event Sourcing, and it seems there are two philosophies hidden within what I've read. The core difference is whether actors in the system are proactive, making changes first and publishing events based on what they have done, or reactive, consuming events and updating data based on those events.

+ +

However, the former isn't really Event Sourcing, right? The events aren't the source of change, but just a record of change. That's just an event-based log that can be used to rebuild the data later. When you rebuild from the log, you're using different code than when you executed the change in the first place; in the original run you send an event, which you only read the second time around. On top of all that, you have to introduce commands to actually trigger those changes, and these need to be sent directly to the consumer, causing tighter coupling.

+ +

Meanwhile, the ""reactive"" style reverses all of those concerns. Since every change is a reaction to an event, there's basically no difference between listening to the ""live"" system as it churns on and listening to a replay sometime later. There's no need for explicit ""commands"", because services aren't told what to do. Rather, they're in charge of maintaining consistency in the face of the events that occur elsewhere, and of notifying others of their own events. The flip side of this is an inversion of control: instead of knowing about other services/aggregates so you can tell them what to do, you just broadcast your event to the system and let them respond according to their rules. +The only comparative downside I see is that you have to explicitly ignore new messages when replaying old messages, but that can be done with configuration/flags.

+ +

And yet, many guides and products seem to endorse a proactive style. For example, Event Store expects events to be divided into streams based on their target - meaning there is only one target per event, as if you're either sending it to a single, designated target (which makes it a glorified command) OR because the ""target"" is really just the source making a record of the action it took.

+ +

There must be a gap in my understanding, but after a week or so of reading I haven't come across a well-supported explanation for this. I suppose two questions come out of this:

+ +
  1. Which of these two approaches is truly Event Sourcing?
  2. Are there benefits that the proactive approach has over the reactive approach that I haven't mentioned here?
+",127044,,127044,,43941.76389,43966.64514,Which comes first: event or the change?,,3,2,13,,,CC BY-SA 4.0, +409054,1,,,4/20/2020 17:49,,0,17,"

Thinking about some route/controller structuring here:

+ +

Say I have a Company, under which it can have many Locations, and each Location can have multiple Forms.

+ +

On the Company's profile there is a short, simple form to create a new Location; however, I want a whole page with a drag-and-drop interface for creating/editing Forms.

+ +

Would it be better to use a button on the Location profile to send a GET request to a route something like /locations/:locationId/forms/new so I can access req.params.locationId to know which Location to put it under?

+ +

Or is it better to have the button on the Location profile be a form with a hidden, read-only input that contains the current Location's ID which POSTs to /forms/new so I could have req.body.locationId to know which Location to add the Form to?
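
+ +

In Express terms, the two options would look roughly like this (handler bodies omitted; this is just a sketch):

+
const express = require('express');
+const router = express.Router();
+
+//option 1: the Location id is carried in the URL
+router.get('/locations/:locationId/forms/new', (req, res) => {
+    const locationId = req.params.locationId;
+    //...create the Form with default Questions under this Location, then send to /forms/edit...
+});
+
+//option 2: the Location id is carried in the POST body (from a hidden input)
+router.post('/forms/new', (req, res) => {
+    const locationId = req.body.locationId;
+    //...create the Form with default Questions under this Location, then send to /forms/edit...
+});
+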

+ +

Either way, what I'd like to do is have the back-end create a Form, add a list of default Questions to it, add it to the Location via its ID in an array property, and save both of them. Then I'd assign the new Form to req.session.currentForm and send the user to a /forms/edit route, because I want users to be able to edit a Form at the same time they create it, and to recycle the back-end logic for getting to the /edit route.

+ +

Let me know if I'm being clear as mud, or if I'm missing something obvious that would be easier.

+",363643,,,,,43941.74236,Node.JS/MongoDB/Mongoose route structuring,,0,0,,,,CC BY-SA 4.0, +409062,1,,,4/20/2020 21:48,,-2,133,"

I have had this question for a long time: is it possible for someone with no prior embedded systems programming experience to write unit tests for embedded code? I have good knowledge of other languages like Java, Python, C# and a few others, and I can write unit tests in those languages. But I have never done any project in C or C++ and have only basic knowledge of embedded systems.

+ +

I want to know how challenging this can be, and what possible issues I could face while moving from general business software development to embedded systems.

+ +

I know that with time one can learn any field. I want to hear from those who have worked in both areas about the challenges they faced, so that one can be better prepared.

+",363658,,363658,,43942.32222,43942.58125,Is it possible to write unit tests for embedded systems with no prior embedded programming knowledge?,,2,5,,,,CC BY-SA 4.0, +409068,1,,,4/21/2020 4:30,,-2,56,"

Let's suppose that I have 5 microservices; let's name them ServiceA, ServiceB, ServiceC, ServiceD, and ServiceE.

+ +

To perform an operation X, communication needs to happen between these services.

+ +

ServiceD has an API which is pretty expensive (in terms of execution time and response size), and this API gets consumed by ServiceA at the beginning of operation X. (Note that the response of this API is per-user, so it will be different for different users.)

+ +

Based on the response received, ServiceA takes some decisions and calls subsequent services:

+ +

Example:

+ +
ServiceA --> (req.) ServiceD (expensive API)
+ServiceA <-- (resp.) ServiceD
+ServiceA --> (req.) ServiceB
+    ServiceB --> (req.) ServiceC
+    ServiceB <-- (resp.) ServiceC
+
+    ServiceB --> (req.) ServiceE
+        ServiceE --> (req.) ServiceD (same expensive API called earlier in the same flow)
+
+ +

Now, ServiceE needs to call the same expensive API of ServiceD that ServiceA called earlier in the same flow (by ""same flow"" I mean the chain of API calls made to perform X for a user).

+ +

But since this is a very latency-sensitive flow, calling this API again will increase the latency, which is unwanted.

+ +

Possible solutions I can think of:

+ +
  1. Pass the initial response from ServiceD along through the subsequent API calls, and consume the required data from the request in ServiceE, so there is no need to call the expensive API again.

     Concerns here:

       • More network bandwidth is consumed because of the large payload being passed between the different services.
       • Security concerns, like mutation of the data in the subsequent API calls between different services.
       • Serialization/deserialization cost for the large payload.

  2. Cache the response of the expensive API for the user (a sketch of this is below).

     Concerns here:

       • How scalable would it be, since caching will be happening for many users?
       • How much latency improvement would caching give, if I go with a cache service running on a separate cluster?
+ +
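
For option 2, I am imagining something like this (a minimal in-memory sketch; a real setup would use something like Redis, and the TTL is a placeholder value):

+
const TTL_SECONDS = 60; //how long a user's cached response stays fresh (placeholder)
+const cache = new Map<string, { value: string; expiresAt: number }>();
+
+//per-user caching of the expensive ServiceD response
+async function getExpensiveData(userId: string,
+        fetchFromServiceD: (u: string) => Promise<string>): Promise<string> {
+    const hit = cache.get(userId);
+    if (hit && hit.expiresAt > Date.now()) {
+        return hit.value; //later callers in the flow (e.g. ServiceE) get the cached copy
+    }
+    const value = await fetchFromServiceD(userId); //only the first caller pays the cost
+    cache.set(userId, { value, expiresAt: Date.now() + TTL_SECONDS * 1000 });
+    return value;
+}
+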

What solution do you propose to reduce the latency?

+",313906,,,,,43942.29167,Reducing duplicate API call between micro services in a latency sensitive flow,,1,1,,,,CC BY-SA 4.0, +409075,1,,,4/21/2020 8:50,,0,58,"

I'd like to implement a fast, smooth search. There are not that many searched items: ~100 max. Each item holds about the amount of data a Facebook event would hold. They will all show up on initial load (maybe with an infinite scroll). The data won't change frequently. No more than 100 concurrent users.

+ +

What's the best caching strategy for search results, given above conditions?

+ +

What's the most scalable strategy?

+ +

Stack

+ +
  • Frontend: Nuxt (VueJS) + InstantSearch (no Algolia!)
  • Backend: Spring Boot
  • Dockerized
+ +

Possible solutions

+ +
  1. An extra caching service on the backend (e.g. Redis, Memcached), and make the UI go to the server on each search operation. This would basically spam the backend on each keystroke.
  2. Load all items into local state (e.g. Vuex) and search there directly (see the sketch below). This will increase the app's memory footprint and may turn out messy over time.
  3. A combination of the two?
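
+ +

For option 2, the search itself would be trivial, something like this (the field names are placeholders):

+
interface Item { id: number; title: string; description: string }
+
+//all ~100 items live in client-side state and are filtered locally
+function search(items: Item[], query: string): Item[] {
+    const q = query.toLowerCase();
+    return items.filter(i =>
+        i.title.toLowerCase().includes(q) || i.description.toLowerCase().includes(q));
+}
+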
+",99207,,,,,43972.58472,Caching strategy for a fast front-end search,,1,3,,,,CC BY-SA 4.0, +409085,1,,,4/21/2020 14:18,,0,15,"

I want to store a ""Strategy"" or ""Set of Rules"" that describes how a process is done.

+ +

In a customer base, each user has certain orders. These orders depend on certain other values. I want to store all user decisions for any case in relation to the ""type of order"", and suggest them again or even automate them.

+ +

I have an idea: a ""Config Object"" that holds all decisions of the user for each order type and customer. But this could be a little too static, and the configurations could end up being big.

+ +

I need to integrate this into a SQL database, but I also have the option to store it some other way, as long as it is reasonable.

+ +

I tried to research design patterns for configuration, but found none.

+ +

Are there typical mistakes in this area, or ""state of the art"" examples for storing individual configurations?

+",363719,,,,,43942.59583,"How to preserve dynamic Configurations for a Process to historize it, load it and use it again for another process?",,0,0,,,,CC BY-SA 4.0, +409087,1,,,4/21/2020 15:53,,-4,41,"

I am writing a program in Kotlin which parses some input data and writes it to a MySQL database (through JDBC).

+ +

The database includes tables such as users and each table has a corresponding data class representing the entity:

+ +
data class User(val id: Int, val surname: String, val forename: String, ...)
+
+ +

The users table has a primary key (the ID) which links it to other tables: I will call these the foos and bars tables, which have their associated data classes too. A user has a one-to-many relationship with ""foo""s and ""bar""s.

+ +

I have classes for reading/writing to the database, such that when I do QueryManager.getAll(...) with the appropriate parameters I can get a list of User instances from the database.

+ +

However, when I am parsing input data, the IDs of all items are unknown, so I cannot use the User class. Also, I need to store a list of ""foo""s and ""bar""s associated with each user, but without using the Foo and Bar classes.

+ +
+ +

At the moment I have something which can be simplified to this:

+ +
class UserDataHolder {
+
+    // unknown userID
+    val userSurname: String
+    val userForename: String
+    ...
+    val relatedFoos: List<FooDataHolder>
+}
+
+class FooDataHolder {
+
+    // unknown userID and fooID
+    val fooProperty1: String
+    val fooProperty2: Boolean
+    ...
+}
+
+ +

These are given to the database, which then:
+ 1. Uses the user properties to write to the users table, getting back the auto-increment ID
+ 2. Adds foo data to the foos table using the user ID (but we don't know the foo ID)
+ 3. Gets back the foo auto-increment ID to use for something else that it is linked to

+ +
+ +

Is there a common design pattern for transferring incomplete or related data like this, that I can learn about and implement in my program?

+ +

(E.g. I have read about the data transfer object on StackOverflow, this site and Martin Fowler's site but am unsure whether it is most suited here, as I am not currently aware of other patterns for similar/related problems.)

+ +

Edit: To clarify, I want to know what design patterns exist for solving a problem like this.

+",192774,,192774,,43942.70069,43943.35625,"Is there a design pattern for transferring ""partial"" or related data objects?",,1,8,,,,CC BY-SA 4.0, +409089,1,,,4/21/2020 17:02,,0,42,"

I am creating a database that will keep track of my company's clients, and an app that will allow users inside the company to read/update/etc. the database. I'm using code-first EF Core to manage the database. The app is a C# .NET Blazor Server website, and the database is MySQL.

+ +

I initially decided to use a RESTful web API between the EF Core/database layer and the app layer, so it was roughly structured like this:

+ +
MyCo.WebApp -> MyCo.SDK -> (Server running MyCo.API) -> MyCo.Models -> (Server running MySQL DB)
+
+ +

Where:

+ +
  • MyCo.Models contains the data classes and the EF Core code (including the DbContext) that interacts with the database,
  • MyCo.API provides the RESTful web API,
  • MyCo.SDK provides an interface to call the API using an HttpClient (from the .NET framework) and parses the JSON back into the data classes from MyCo.Models (I'm referencing the data classes in that project so I didn't need to rewrite them), and
  • MyCo.WebApp is the user interface that displays the data and allows users to interact with it.
+ +

I initially thought having the API in the middle of everything was the right thing to do. We have other apps that will need to use this database, so having a single point of entry to the data would give us a way of controlling that. We also need to have an audit of all the changes that users make, and having the API layer gives us a single point where we can add a row to a Log table every time a user adds, updates or deletes an entity.

+ +

The problem came when I tried to retrieve the Company property of a Client. Each Client must belong to a single Company, but in the JSON that the API returned for a particular client, the Company property was always null. This is because EF Core uses lazy loading as an optimization. Since I wasn't explicitly requesting the Company property, EF Core had no reason to load it.

+ +

I could solve this by configuring EF Core to use eager loading, so all relations to an entity are loaded at the same time as the entity. But this comes at a performance cost, and there are times when I'd be happy to load a Contact entity without getting the associated Company with it.

+ +

This led me to realize that by introducing the Web API layer, I was actually losing a lot of the benefits of EF Core, like optimizing the DB calls using LINQ. I could return the CompanyId inside the Contact and make another call to the API, but again I'm losing the benefit of being able to use EF Core to turn this into one DB call.

+ +

This answer to another question reasons that you shouldn't use the Repository pattern with Entity Framework because it itself is a Repository layer, which I think I agree with. The Web API I've built is essentially an additional Repository layer.

+ +

I'm now wondering if I need the API at all, since all apps that will use it in future will very likely be built in C#.NET, so they could use the EF Core DbContext directly. So instead I would have this structure:

+ +
MyCo.WebApp -> MyCo.Models -> (Server running MySQL DB)
+
+ +

Alternatively I could add in a Service layer which would handle the calls to DbContext, so the WebApp wouldn't need to know anything about how the database is structured.

+ +
MyCo.WebApp -> MyCo.Service -> MyCo.Models -> (Server running MySQL DB)
+
+ +

Or I could keep the API but change it to be non-RESTful, so it essentially acts as the Service layer - e.g. one endpoint to get a simple Contact data object, one endpoint to get a different Contact object which includes the Company data.

+ +

So my question is:

+ +
  1. Is there any value to having the RESTful Web API layer in my scenario?
  2. If not, should I use a Service layer, or is there some other common way of structuring applications of this type?
+",363712,,,,,43942.70972,Should I use a Web API between a client and Entity Framework?,,0,5,,,,CC BY-SA 4.0, +409095,1,,,4/21/2020 17:53,,1,422,"

Background

+ +

I'm building a Private Chef booking service where you can book a Chef to cook you a custom Menu. I'm having trouble creating a SQL db schema that accurately represents the domain while maintaining data integrity.

+ +

Requirements

+ +
  • Customers create a Booking by selecting a Chef and a Menu, and choose the MenuItems they want for each MenuCourse
  • A Chef defines a set of MenuItems that a customer can choose from to create their Booking
  • A Menu is a collection of MenuCourses (e.g. a Menu named ""Tasting Menu"" is a 6-course meal, where each MenuCourse costs between $10-20)
  • A Chef should be able to associate their MenuItems with multiple Menus and MenuCourses
  • A customer Booking should contain the Chef the customer selected along with the Menu (and the MenuItems) that will be served. The Booking price is determined by the Menu and MenuCourse selections (an appetizer costs less than an entree)
+ +

Problem

+ +

In my current data model, I have the following issues that I'm not sure how to fix:

+ +
  • it's possible to create a Booking with Chef ""A"" but then have a BookingMenuItem that references a MenuItem belonging to Chef ""B"" (all the BookingMenuItems for a Booking should belong to the same Chef)
  • a Booking references a particular Menu (which I need for pricing; pricing is based on Menu and MenuCourse), however a BookingMenuItem for that Booking could reference a completely different Menu or MenuCourse
+ +

Is it possible to redesign my DB schema to fix the integrity problems I'm having, or do I just have to implement these checks at the application level? Thanks!

+ +

+",363744,,,,,43943.34028,SQL Database schema for Catering/Menu management,,1,8,,,,CC BY-SA 4.0, +409097,1,,,4/21/2020 18:02,,-1,58,"

Many resources regarding UIs mention that the evaluation of a user interface is a highly subjective process. So my question is: what is the exact meaning of ""subjective process"" in the context of UI, and why is the evaluation process subjective?

+",352196,,352196,,43942.85,43942.85,Evaluation of user interface,,2,4,,,,CC BY-SA 4.0, +409099,1,409119,,4/21/2020 18:06,,-3,71,"

I'm writing a basic password cracker in C as an introduction to multithreaded programming. I've already implemented this using a 'parallelization' approach that spins up a set number of threads and divides the given wordlist among them, with each thread more or less self-contained. Once one finds a password, the whole program is terminated.

+ +

For exercise sake though I also want to try implementing this in a 'pipeline' approach, where each 'stage' of the cracking process (for instance, password hashing, AES decryption, checksum verification etc) are all assigned their own dedicated threads, and each passes data to the next as they work through the wordlist. The problem here though is that if one stage of the pipeline is significantly faster or slower than the one before or after it, it could stall the whole pipeline waiting for new data.

+ +

I figured the best way to address this would be to somehow find a rough ratio of how long each stage of the pipeline takes to compute, and then assign multiple threads to each one based on that. For example, if stage 1 takes roughly double the time of stage 2, then I could prevent stalling by assigning two threads to stage 1 and only one to stage 2.

+ +

The problem then is how to reliably figure out how long each stage is going to take to operate on a given piece of data in relation to the others. I'm not really too sure where to begin with this, currently my main idea is to use the clock function provided by time.h to get run times for specific functions, or maybe perf to get relative overhead for each function, but as I understand benchmarking is a very volatile process with lots of moving parts and it's difficult to get reliable results, let alone portable ones.

+ +

As this is just a toy program, portability isn't necessarily needed, perhaps though if I could get reliable results on one machine I could make some elaborate makefile to re-benchmark on every machine it's built on, but at the very least I would like to get accurate results on my own machine to test this style of multithreading.

+ +

Is there any established way to get results like these when pipelining is used in industry?

+",360422,,,,,43943.35139,Methods for reliably benchmarking own program's run time,,1,3,,,,CC BY-SA 4.0, +409107,1,,,4/21/2020 20:05,,0,18,"

I have a program that contains both client and API logic.

+ +

In summary, the client works like this:

+ +
1) Client handles client code
+2) Calls to the API are wrapped in (let's call it) ClientService
+3) If the client needs to retrieve or do something from the API, it calls a ClientService method that has some added business logic and calls the API
+
+ +

The API is basically a proxy around another proprietary service/API, so, following clean code, if the proprietary API has the methods

+ +
readData()
+pushData()
+deleteData()
+
+ +

the proxy has the same methods

+ +
readData()
+pushData()
+deleteData()
+
+ +

that call these methods and maybe do some logging.

+ +

Now, since I need business logic in the API too, I created another service called proxyService.

+ +

This means all the logic is handled in proxyService, which calls the proxy for the primitive calls to the API.
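
+ +

A sketch of the structure I mean (all names and the ""business logic"" are made up):

+
type VendorData = { raw: string };   //object type of the proprietary API
+type ApiData = { value: string };    //object type defined in my API module
+
+class ApiProxy {
+    //thin pass-through with logging; returns proprietary objects as-is
+    readData(): VendorData {
+        console.log('readData called');
+        return { raw: 'from vendor' }; //stand-in for the real proprietary call
+    }
+}
+
+class ProxyService {
+    constructor(private proxy: ApiProxy) {}
+
+    //business logic lives here; rebuilds vendor objects into API module objects
+    readData(): ApiData {
+        const vendor = this.proxy.readData();
+        return { value: vendor.raw.toUpperCase() }; //placeholder transformation
+    }
+}
+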

+ +

Now my questions are

+ +

1) The proxy returns proprietary API objects directly to proxyService, which rebuilds them into objects defined in my API module. So inside my API module I work with proprietary API objects, but the API itself returns objects defined in the API module, thus not revealing the inner implementation. Is this correct - in particular the part about returning proprietary API objects directly from the proxy?

+ +

2) What would the tests look like? Would it be good to mock the proxy and test proxyService? In that case, in the tests, I would need to work with proprietary API objects directly (e.g. mock my proxy to return them) - is this good practice?

+",279669,,,,,43942.83681,Testing Proxy with its service,,0,0,,,,CC BY-SA 4.0, +409108,1,,,4/21/2020 20:51,,1,127,"

I'm learning about the Layered Architecture Pattern for software development, but I'm confused about how objects are sent 'up' the layers. In general, I know that there are about 4 main layers: UI layer, use case layer, domain layer, and data access layer.

+ +

So now, let's say I am developing a use case to display all todo items from a todo list on the screen. My domain would have two classes: TodoList and TodoItem. The TodoList class will have a list of TodoItems and the TodoItem class will have a description attribute. Both these classes are created in the domain layer and 'only' the use case layer has access to the domain layer. So, the use case layer will call on the domain layer to get the list of todo items. At this point, will the method getTodoList() return the 'TodoList' object defined in the domain layer back to the UI layer?
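
+ +

In code, I imagine something like this (just a sketch):

+
//domain layer
+class TodoItem {
+    constructor(public description: string) {}
+}
+class TodoList {
+    constructor(public items: TodoItem[]) {}
+}
+
+//use case layer
+class TodoUseCases {
+    constructor(private list: TodoList) {}
+
+    //does this hand the domain object straight up to the UI layer?
+    getTodoList(): TodoList {
+        return this.list;
+    }
+}
+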

+ +

If this is correct, then the UI layer would have access to the domain layer object, which in my opinion breaks the layered architecture pattern, because now the developer can call directly into the domain layer through that object.

+ +

I'm confused at this part and would greatly appreciate clarity on how objects are passed down and up the layers of the layered architecture pattern.

+",363763,,,,,43943.32778,"In a layered architecture, is a domain object sent 'up' to the ui layer so that the ui layer can display the fields that are in the domain object?",,3,0,,,,CC BY-SA 4.0, +409116,1,409117,,4/21/2020 22:42,,-2,82,"

I'm trying to implement data-upload functionality in JavaScript/Node.js. I want to be able to switch between different storage providers, e.g. AWS, GCP, Azure, with no code change (for instance, via a config file), and at the same time avoid chains of if-else/switch-case in the code when choosing the optimal API/service provider.
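
+ +

To illustrate what I am aiming for, a minimal sketch (the provider names, the interface, and the env variable are all placeholders):

+
interface StorageProvider {
+    upload(name: string, data: Buffer): Promise<void>;
+}
+
+class AwsStorage implements StorageProvider {
+    async upload(name: string, data: Buffer): Promise<void> { /* call S3 here */ }
+}
+
+class GcpStorage implements StorageProvider {
+    async upload(name: string, data: Buffer): Promise<void> { /* call GCS here */ }
+}
+
+//the choice happens once, from config - no if-else chains at the call sites
+const providers: Record<string, () => StorageProvider> = {
+    aws: () => new AwsStorage(),
+    gcp: () => new GcpStorage(),
+};
+
+const provider = providers[process.env.STORAGE_PROVIDER ?? 'aws']();
+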

+ +

Personally, I considered going with one of the following:

+ +
  • Façade;
  • Adapter;
  • Decorator;
  • Dependency Injection.
+ +

The questions:

+ +
  • Which design patterns (not necessarily from the proposed list) should I focus on?
  • What are the pros and cons of the proposed patterns?
+",51986,,51986,,43942.96597,43942.97361,Design pattern for a switching between APIs with no code changes,,1,4,,,,CC BY-SA 4.0, +409118,1,,,4/22/2020 0:28,,0,111,"

I know how polymorphism works, but I am trying to understand in what cases polymorphism is useful.

+ +

Now, all the examples that I have found about polymorphism are one of the following (the below code is in C++):

+ +
  • Either we have an array of type Base* that holds pointers to objects derived from the Base class, and then we loop through the array and call something like base->foo()

  • Or we have a function that has a Base* parameter, and then we call this function and pass it a pointer to any of the objects derived from the Base class
+ +

Are these the only cases where Polymorphism is useful?

+",274108,,,,,43943.01944,In what cases is Polymorphism useful?,,0,5,,,,CC BY-SA 4.0, +409124,1,,,4/22/2020 5:16,,1,538,"

The main reason, to my understanding, that Flash is being phased out is that it is fundamentally insecure (i.e. it isn't just tons of individual issues; it's insecure on a 'core' level). I severely doubt that anyone would knowingly build something so insecure (so they wouldn't just accept massive security issues even in favour of program flexibility). This leads me to wonder: how can we analyze Flash's shortcomings in order to prevent the same underlying problem from occurring?

+ +

Is there a particular form of analysis that can address this issue?

+",335470,,,,,43947.93333,How can we learn from Flash's vulnerabilities?,,3,1,1,43943.72431,,CC BY-SA 4.0, +409131,1,409134,,4/22/2020 7:11,,0,63,"

Example scenario,

+ +

An API endpoint for a Product, with the product information as JSON data; there can be 0 to n binary images.

+ +

Currently we upload both the images (as binary files) and the JSON data through the /v1/products POST method.

+ +

Is this the right way? Or what other approaches can I try out to make a simpler RESTful endpoint?

+ +

So when I retrieve the data using GET /v1/products/:productID I get:

+ +
{ ""id"":""123"", ""name"":""product name"", imgUrls:[""pathToImage1"",""pathToImage2""]}
+
+ +

Advantage:

+ +
  1. Single API call
+ +

Disadvantage:

+ +
  1. The Product actually needs the image URLs, not the binary files themselves, so the POST and GET payloads look different - am I deviating from REST principles?
  2. As binary files are being uploaded, it is not easy to do a POST from simple REST tools like Postman
+ +

Alternate approach

+ +

Upload the images separately to get the image URLs, and later refer to those URLs in the Product payload.
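
+ +

On the client, that would look something like this (the /v1/images endpoint is just an assumed name for illustration):

+
//alternate approach sketch: upload the image first, then reference its URL
+async function createProduct(name: string, image: Blob): Promise<void> {
+    //step 1: upload the binary and get back a URL
+    const form = new FormData();
+    form.append('file', image);
+    const imgRes = await fetch('/v1/images', { method: 'POST', body: form });
+    const { url } = await imgRes.json();
+
+    //step 2: create the product, referring to the image by URL only
+    await fetch('/v1/products', {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({ name, imgUrls: [url] }),
+    });
+}
+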

+ +

Advantages:

+ +
  1. The GET and POST payloads look the same
  2. Simple to invoke the POST API from Postman and other tools
+ +

Disadvantages:

+ +
  1. Multiple API calls.
+",210720,,,,,43943.32292,"How to manage HTTP, POST with JSON data payload and Binary Image file?",,1,0,,,,CC BY-SA 4.0, +409132,1,409193,,4/22/2020 7:28,,-3,184,"

I am reading ISO 25023, and I am not sure whether I understand the concept of Functional Completeness correctly, but I think it is useless in comparison to Functional Correctness.

+
+

Functional Completeness measures what proportion of the specified functions has been implemented. A missing function is detected when the system or software product does not have the ability to perform a specified function.

+

Functional Correctness measures what proportion of functions provides the correct results. An incorrect function is one that does not provide a reasonable and acceptable outcome to achieve the specific intended objective.

+
+

Could someone please explain to me how a function can be complete without being correct? How do you measure functional completeness? Do you just ask developers, or check if a function is defined in code?

+",363804,,-1,,43998.41736,43944.28056,What is the difference between Functional Completeness and Functional Correctness in ISO 25023?,,1,2,,,,CC BY-SA 4.0, +409136,1,409138,,4/22/2020 8:32,,-2,71,"

So I have a website which does a bunch of different stuff; most importantly, it allows the user to upload videos.

+ +

Now, unfortunately, videos are a bit annoying, as they need a poster image to display correctly on mobile (I'm looking at you, Safari...). So anyway, I made a nice function that uses MoviePy to extract the first frame and then Pillow to process it (basically just adding the duration of the video in the bottom right corner).

+ +

So I'm wondering: is this good practice? I feel like depending on those big and clunky modules may introduce vulnerabilities - for example, what happens if a user provides a malformed mp4 which then gets processed? It feels very risky, but then again I am unsure how I would approach this problem without those extra tools.

+",363811,,363811,,43943.38194,43947.38889,Depending on fairly big modules to do just a simple operation,,1,0,,,,CC BY-SA 4.0, +409140,1,,,4/22/2020 9:56,,-4,15,"

Let's say you have 2 Docker stacks: stack-A.yml defines serviceA and stack-B.yml defines serviceB; you also have an archive of common data that will be mounted by the services.

+ +

How could I verify that the APIs exposed by a given version of serviceA are compliant with what a given version of serviceB needs?

+ +

How could I verify that the version of the services is compatible with the version of the common data archive?

+ +

Is there any tool that does this dependency check? I would prefer the verification to be static, not happening at runtime when I deploy the stack.

+ +

I haven't found anything useful searching for ""dependency management"": I only found information on npm, Maven, apt, etc., but I don't think I can use them for my requirement.

+",263163,,,,,43943.61875,Components dependency check,,1,0,,,,CC BY-SA 4.0, +409141,1,409149,,4/22/2020 10:32,,1,108,"

The automated tests of an API should be stable and simple.

+ +

When writing automated tests for an API, we often have to check that the data created with a program implementing this API is persistent. If the persistence layer is checked directly (for example: a database), then the test is tied to the usage of this persistence layer, and could break often.

+ +

We'd rather use the API to check that the data was persisted, so that the implementation could change without breaking the tests.

+ +

The example

+ +

Let's say we are testing an API to add users, ""myfakeapi.com"". Given this JSON file ""user.json"":

+ +
{
+  ""name"": ""Harry""
+}
+
+ +

... And this testing script:

+ +
#!/bin/bash
+
+# Deleting all the users
+curl -X DELETE https://myfakeapi.com/users
+
+# Adding the new user, returns {""id"": 1, ""name"": ""Harry""}
+curl -X POST -H ""Content-type: application/json"" --data-binary ""@user.json"" https://myfakeapi.com/users
+
+# Checking that the user was saved, returns {""id"": 1, ""name"": ""Harry""}
+curl -X GET -H ""Content-type: application/json"" https://myfakeapi.com/users/1
+
+ +

We could say that the test is successful. In addition to that, it wouldn't break if the implementation changed.

+ +

The issue is that nothing proves that myfakeapi.com stores the user in a database. In fact, it could pass the tests by storing the user in memory. In the end, was checking the persistence layer directly the right solution?

+ +

I don't think so (EDIT: I changed my mind afterwards).

+ +

The question

+ +

From the point of view of the API's user, persistence only means that if there is a reboot of the program, the data isn't lost. This is exactly what I would like to test.

+ +

My question: Is it OK to reboot the program that implements the API during the testing session to check persistence?

+",306079,,306079,,43943.62361,43943.62361,Persistence layer in the automated testing of an API,,1,0,,,,CC BY-SA 4.0, +409142,1,,,4/22/2020 10:52,,-5,94,"

I could not find a good book for learning JavaScript. I googled, but most books seem to assume that you have some programming experience and don't teach from the ground up. They assume you should already know how things work and how to make certain things happen right off the bat. I have no programming experience, except that I know how to write HTML and CSS. I would like to practice alongside the book, be able to build projects (alongside the book too), and reach an intermediate level of proficiency. Could you please point me to some books for absolute beginners learning JavaScript? Please note that the book should follow the latest ECMAScript standard, since they keep adding and removing features. Thank you!

+",363821,,,,,43944.26667,What is a good book to learn JavaScript for complete beginners?,,1,4,,,,CC BY-SA 4.0, +409143,1,,,4/22/2020 10:58,,5,453,"

I work for the in-house IT department of one of the largest companies in my country.

+ +

The infrastructure and software systems are heavily based on Oracle Database. Most core business processes and business logic are built using SQL and PL/SQL batch jobs - importing data into the database, transforming, consolidating, communicating via DB links, etc. This system has been gradually built up over the last 30 years. It's a very homogeneous system, which also has its advantages.

+ +

Now recently there has been a push to move towards different technologies, diversifying, and less reliance on Oracle (cost is one factor - we're hosting several hundred Enterprise Edition databases and thousands of Standard Editions).

+ +

However, one question often comes up: Oracle Database has been fairly stable and backwards compatible - how do we ensure the long-term stability (10+ years) of the system in a more heterogeneous environment? Say we have components A, B, C, D using a certain framework, hosted on a cloud somewhere. What if the cloud provider drops support for the framework? What if components B and C are no longer compatible due to a breaking change?

+ +

I haven't heard a satisfying answer yet - basically the only answer I have gotten so far is ""we'll just have to rewrite it"".

+ +

So I'm hoping to find out what strategies should be employed to prevent us from basically having to rewrite everything every 3 years.

+",173169,,,,,43945.39375,How to ensure long-term enterprise software stability with changing frameworks / things going out of support?,,3,5,2,,,CC BY-SA 4.0, +409145,1,,,4/22/2020 11:30,,0,163,"

I have a process that I have been able to draw, more or less, as a flowchart.

+ +

This process involves several classes, loops and several threads of execution.

+ +

However I am unsure which (if any) UML diagram could be adequate to represent it.

+ +

I first thought of sequence diagrams, since they include loops and can - I think - represent threads of execution, but I am not sure what goes along the lifelines; it seems that I cannot represent anything of what is happening.

+ +

Then I thought about activity diagrams, but my textbook does not show loops in them. However, looking at examples such as these, it seems that loops can be represented simply in activity diagrams.

+ +

I suppose that I can represent threads through forks and joins, but I am open to suggestions from more experienced designers.

+",296531,,,,,43943.93542,Which UML diagram is more adequate to represent a multithread process with loops,,2,1,,,,CC BY-SA 4.0, +409154,1,,,4/22/2020 12:39,,-1,86,"

I often come across the need to write one-time scripts (related to API evaluations, data extraction, experiments, etc.) that have the potential to be used only very rarely in the future. These could be one-function Python scripts, Jupyter notebooks, etc. Although they are rarely used, they are useful if the need to use them arises in the future. I am finding it very difficult to manage and maintain such scripts. They might not necessarily fit into any Git repo in our company (the company mostly maintains source code related to products in Git repos). How would one deal with such scripts in a systematic manner? Ideally I would like to keep them in one repo dedicated to such scripts, with the scripts divided (grouped as sub-folders) by the job they belong to. However, such a repo would represent only my scripts, and I don't know whether that's the right way to do it. Currently, these scripts are scattered across my local computer and some of the servers I normally work on.

+ +

Update:

+ +

The following two questions don't address what this question is referring to

+ +

Best practices for sharing tiny snippets of code across projects

+ +

What is the right way to manage developer scripts?

+ +

My question is regarding scripts that were important one time for the company to perform some operation (like an API evaluation) and would very rarely be useful in the future. But these scripts are important enough to be held onto because there could be instances where you would need to reuse part of such scripts to do some similar job in the future.

+",363832,,363832,,43943.53819,43943.92014,How to Maintain Rarely Used Scripts,,2,4,,,,CC BY-SA 4.0, +409160,1,409164,,4/22/2020 13:51,,-1,173,"

Let's assume that you are trying to refactor legacy code to make it easier to understand and more testable. How can you do that?

+ +

In the book ""Working with Legacy Code"", a characterization/regression test was recommended. First, you start with a test that invokes the part of the code you want to refactor (e.g. a big method). +What this test tells you is for that input you should expect that output. Therefore, if you refactor and the output has changed, this means something was broken.

+ +

The assumption here is that the legacy code is harder to understand and thus harder to test. You can find this case explained in another question here: Writing tests for code whose purpose I don't understand

+ +

Now you have refactored code that you can start writing unit tests for (e.g. you broke the big method into smaller testable ones).

+ +

Unit tests might help you uncover bugs that you need to fix. Fixing those bugs will make your unit tests pass, but will break the characterization/regression tests. Those tests expect a certain output - the output of the code before the bug was found and fixed.

+ +

So what is the appropriate course of action here? Can I just remove (or comment out) the test? I have seen in the book ""Working with Legacy Code"" an example of a test that was removed after refactoring. Does this apply here?

+",198503,,198503,,43943.59861,44082.57083,Won't a characterization/regression test fail when a bug is fixed?,,2,5,,,,CC BY-SA 4.0, +409161,1,,,4/22/2020 14:00,,-6,56,"

From what I have read about domain-driven design, one aspect of it is that there is a clear separation between domain objects and DTOs.
+So the application-level components deal with domain objects and are completely unaffected by any changes in the DTOs.
+That seems like a clear win to me, but I am wondering if it really scales.
+If we have, for example, a network data source that sends a couple of thousand DTOs in a JSON payload, and we need to parse the JSON into DTOs and then transform those into the corresponding domain objects (thereby holding a second list of the domain objects), it seems we would spend a significant amount of time just converting objects from one type to the other - let alone the fact that the DTOs won't have any use at all in the rest of the application lifecycle until the next fetch.
+Am I oversimplifying this, or am I misunderstanding something?

+",345109,,,,,43943.67986,Are aspects of domain driven design limiting when we scale?,,1,4,,,,CC BY-SA 4.0, +409163,1,,,4/22/2020 14:26,,-2,118,"

I have code in the following form:

+ +
public void drawObject(MyObject myObject) {
+  RootElement root = new RootElement();
+
+  if (myObject.hasA()) {
+     root.addElement(new XElement());
+  } else {
+     root.addElement(new YElement());
+  }
+  if (myObject.hasB()) {
+     root.addElement(new ZElement(""A""));
+  }
+  if (myObject.hasC()) {
+     root.addElement(new ZElement(""B));
+  }
+
+  //and so on for around 20 more conditions
+
+  root.draw();
+}
+
+ +

I think that's not optimal. However, I can't think of any better solution to this. Is there any design pattern that I could use?

+",363847,,,,,43943.90417,Which Design Pattern to use to avoid conditional adding of elements to list?,,2,3,,,,CC BY-SA 4.0, +409172,1,,,4/22/2020 17:53,,-4,46,"

When making a website for a client, how do you deal with payments (hosting, theme, plugins, software, etc.) when you are setting everything up for them, but the client will be paying for the systems/software?

+ +

In the past it's just been friends and family who give me a credit card, though I'm not sure my client would be OK with that. I want to make this very easy for them; they are not tech-savvy, and I do not want to have too many back-and-forths with them when creating the site. What do you normally do for clients? Do you send them a list of everything to pay for, and they give you codes (this seems messy)? Or do you put everything in your name and change the payment details once you hand over the site (this seems like it could cause issues)?

+ +

I'd love to hear thoughts on the easiest way to handle this.

+ +

Thank you so much, really appreciate it.

+",363865,,,,,43943.77569,"When making websites for clients, what's the best way to handle payments that clients pay for, but you design?",,1,0,,,,CC BY-SA 4.0, +409173,1,,,4/22/2020 18:28,,1,58,"

In the book Software Engineering by Ian Sommerville (8th edition), more specifically in chapter 16, the author proposes a set of usability attributes for evaluating a UI, among them ""Speed of operation"" and ""Adaptability"".

+

I can't understand the meaning of "Speed of operation" and "Adaptability" according to their given definitions. In more detail, I can't understand the meaning of "user's work practice" and "tied to a single model of work". Could someone please give an easy example to help understand these two attributes? Many thanks :)

+",317597,,-1,,43998.41736,43943.83819,Usability measures for evaluating UI,,1,2,,,,CC BY-SA 4.0, +409180,1,409189,,4/22/2020 21:33,,-3,44,"

I am pretty much new to programming, but recently I began to learn C# intensively, for Visual C# and for Unity. I have noticed that I use many scripts that have exactly the same content across different applications.
+An example would be a script I use in most of my Unity games, ObjectPooler. It has absolutely identical contents in every game, so I can just copy it from one project to another and it will work as intended.
+What do you think: is it better to write a script from scratch every time I need it, or to keep a library of the most used scripts and simply copy and paste them on demand?

+",363881,,,,,43943.94444,Do you recommend having a template scripts?,,1,0,,,,CC BY-SA 4.0, +409183,1,,,4/22/2020 21:48,,-1,103,"

The more I read about the Single Responsibility Principle, the less I see a class as an object type and the more I see it as a servant that does something.

+ +

For example, let us suppose we have a new requirement: we need to generate a PDF file containing data we get from a Bill class. I would create a class for this. Do I name the class PdfFile or PdfGenerator?

+ +

From my OO perspective, a new PDF file object has just emerged, so I would name it PdfFile; but it doesn't make much sense to talk about the responsibilities of a PdfFile object.

+ +

Edit: Instead of ""on what basis do I name classes"", this question could just as well be ""on what basis do I create classes"".

+",322312,,7422,,43944.25486,43944.25486,Do I name classes based on the object type they represent or the responsibility they have?,,2,0,,,,CC BY-SA 4.0, +409184,1,,,4/22/2020 21:50,,0,20,"

I'm requesting some feedback on a database we use in a live streaming service. We've Frankensteined this one as we've grown, and I'm looking to make a new one because there are now some obvious pain points. I'm not a database engineer and I've likely broken some beginner rules in the current design. Here are some specific redesign goals:

+ +
  1. A single user should be able to log in with several different providers, e.g. YouTube, Twitch, Mixer
  2. Better authorization - faster, more fine-grained permissions, extensible
  3. Payments from user A to B can be made while user B is not streaming
  4. Better notification system - faster, more fine-grained controls (e.g. ""don't email me""), extensible
  5. Improved Chat table where it is easier to express that a certain part of a message is a link or a GIF
+ +

Here is the current database generated by TypeORM:

+ +
CREATE DATABASE `testserver` /*!40100 DEFAULT CHARACTER SET utf8mb4 */;
+CREATE TABLE `badge` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `userId` varchar(255) NOT NULL,
+  `channelId` varchar(255) NOT NULL,
+  `banned` tinyint(4) NOT NULL,
+  `moderator` tinyint(4) NOT NULL,
+  `streamer` tinyint(4) NOT NULL,
+  `admin` tinyint(4) NOT NULL,
+  PRIMARY KEY (`id`),
+  KEY `FK_e3655a46bc77b0c5f46aa80f4be` (`userId`),
+  KEY `FK_a1797bd5abdce8beb750a78fde0` (`channelId`),
+  CONSTRAINT `FK_a1797bd5abdce8beb750a78fde0` FOREIGN KEY (`channelId`) REFERENCES `channel` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
+  CONSTRAINT `FK_e3655a46bc77b0c5f46aa80f4be` FOREIGN KEY (`userId`) REFERENCES `user` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+CREATE TABLE `channel` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `bannedWords` text NOT NULL,
+  `handle` varchar(25) DEFAULT NULL,
+  `creatorId` varchar(255) DEFAULT NULL,
+  PRIMARY KEY (`id`),
+  UNIQUE KEY `IDX_57dbe27a2e7b4a2eb0a168dcba` (`handle`),
+  UNIQUE KEY `REL_ea990eb9792cca9333f6b507cd` (`creatorId`),
+  CONSTRAINT `FK_ea990eb9792cca9333f6b507cdf` FOREIGN KEY (`creatorId`) REFERENCES `user` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+CREATE TABLE `chat` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `message` varchar(450) NOT NULL,
+  `provider` varchar(255) NOT NULL,
+  `showId` varchar(255) DEFAULT NULL,
+  `creatorId` varchar(255) DEFAULT NULL,
+  `paymentId` varchar(255) DEFAULT NULL,
+  PRIMARY KEY (`id`),
+  UNIQUE KEY `REL_638f0561ff905a3000d6116a61` (`paymentId`),
+  KEY `FK_a2a35edbc5c7b349b5fc5792259` (`showId`),
+  KEY `FK_77b3c245a0b1252384b64e53f57` (`creatorId`),
+  CONSTRAINT `FK_638f0561ff905a3000d6116a618` FOREIGN KEY (`paymentId`) REFERENCES `payment` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
+  CONSTRAINT `FK_77b3c245a0b1252384b64e53f57` FOREIGN KEY (`creatorId`) REFERENCES `user` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
+  CONSTRAINT `FK_a2a35edbc5c7b349b5fc5792259` FOREIGN KEY (`showId`) REFERENCES `show` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+CREATE TABLE `credentials` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `providerId` varchar(255) NOT NULL,
+  `provider` varchar(255) NOT NULL,
+  `displayName` varchar(255) NOT NULL,
+  `accessToken` varchar(255) DEFAULT NULL,
+  `refreshToken` varchar(255) DEFAULT NULL,
+  `creatorId` varchar(255) DEFAULT NULL,
+  PRIMARY KEY (`id`),
+  KEY `FK_fd9ec3aad3cb6d187216332112f` (`creatorId`),
+  CONSTRAINT `FK_fd9ec3aad3cb6d187216332112f` FOREIGN KEY (`creatorId`) REFERENCES `user` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+CREATE TABLE `notification` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `userId` varchar(255) NOT NULL,
+  `channelId` varchar(255) NOT NULL,
+  PRIMARY KEY (`id`),
+  KEY `FK_66d14f6315ae0e95fc7db92af0b` (`channelId`),
+  KEY `FK_1ced25315eb974b73391fb1c81b` (`userId`),
+  CONSTRAINT `FK_1ced25315eb974b73391fb1c81b` FOREIGN KEY (`userId`) REFERENCES `user` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
+  CONSTRAINT `FK_66d14f6315ae0e95fc7db92af0b` FOREIGN KEY (`channelId`) REFERENCES `channel` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+CREATE TABLE `payment` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `status` varchar(255) NOT NULL,
+  `refId` varchar(255) NOT NULL,
+  `amount` int(11) NOT NULL,
+  `currency` varchar(255) NOT NULL,
+  `receipt_email` varchar(255) NOT NULL,
+  `dismissed` tinyint(4) NOT NULL,
+  `streamerId` varchar(255) NOT NULL,
+  `showId` varchar(255) NOT NULL,
+  `message` varchar(450) DEFAULT NULL,
+  `userId` varchar(255) DEFAULT NULL,
+  PRIMARY KEY (`id`),
+  KEY `FK_b046318e0b341a7f72110b75857` (`userId`),
+  CONSTRAINT `FK_b046318e0b341a7f72110b75857` FOREIGN KEY (`userId`) REFERENCES `user` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+CREATE TABLE `show` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `provider` varchar(255) NOT NULL,
+  `providerUrl` varchar(255) NOT NULL,
+  `showSetupId` varchar(255) DEFAULT NULL,
+  PRIMARY KEY (`id`),
+  KEY `FK_fe335f213894773ff80033204bb` (`showSetupId`),
+  CONSTRAINT `FK_fe335f213894773ff80033204bb` FOREIGN KEY (`showSetupId`) REFERENCES `show_setup` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+CREATE TABLE `show_setup` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `name` varchar(255) NOT NULL,
+  `config` json NOT NULL,
+  `channelId` varchar(255) DEFAULT NULL,
+  PRIMARY KEY (`id`),
+  KEY `FK_5687f3cf3ef4a3745366aa66c22` (`channelId`),
+  CONSTRAINT `FK_5687f3cf3ef4a3745366aa66c22` FOREIGN KEY (`channelId`) REFERENCES `channel` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+CREATE TABLE `user` (
+  `id` varchar(255) NOT NULL,
+  `createdAt` datetime NOT NULL,
+  `updatedAt` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
+  `displayName` varchar(255) NOT NULL,
+  `email` varchar(255) DEFAULT NULL,
+  `imgUrl` varchar(255) NOT NULL,
+  `streamUrl` varchar(255) DEFAULT NULL,
+  `verified` tinyint(4) NOT NULL,
+  PRIMARY KEY (`id`),
+  UNIQUE KEY `IDX_e12875dfb3b1d92d7d7c5377e2` (`email`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+
+ +

I appreciate your feedback, thank you.

+",363885,,,,,43943.90972,Live Streaming Service Schema Design,,0,0,,,,CC BY-SA 4.0, +409190,1,409208,,4/22/2020 23:57,,0,98,"

Suppose Person has a Car. Car is a separate resource with its own URI. For the sake of this example, assume a person can only have one car.

+ +

We want to include the Person's Car in the response when requesting a person via /person/{id}, but we don't want to include the entire Car resource with all of its properties which may be a complicated nested structure. We only want to include a few of the main properties, such as Color, Make, and Model, along with a link to the full Car resource.

+ +
  1. Is it normal to have a different representation of Car that only exists as a child property of Person? Car may also have even more simplified representations as a child of other resources as well, depending on what properties it makes sense to expose based on the parent resource.

  2. Since I'm not including an ""official"" representation of the Car resource (one that has its own URI) inside Person, do I include the link to the full Car resource inside the child Car property on Person, or do I put the link in the list of links on the parent resource Person?
+ +

Include link on the parent resource Person:

+ +
{
+  ""Person"": {
+    ""Name"": ""John Smith"",
+    ""Age"": 25,
+    ""Car"": {
+      ""Make"": ""Toyota"",
+      ""Model"": ""Camry"",
+      ""Year"": 2012
+    },
+    ""Links"": [
+      {
+        ""rel"": ""car"",
+        ""href"": ""/person/{id}/car""
+      },
+      {
+        ""rel"": ""some-other-resource"",
+        ""href"": ""/person/{id}/some-other-resource""
+      }
+    ]
+  }
+}
+
+ +

Or include the Car link in the child property Car:

+ +
{
+  ""Person"": {
+    ""Name"": ""John Smith"",
+    ""Age"": 25,
+    ""Car"": {
+      ""Make"": ""Toyota"",
+      ""Model"": ""Camry"",
+      ""Year"": 2012,
+      ""Links"": [
+        {
+          ""rel"": ""self"",
+          ""href"": ""/person/{id}/car""
+        }
+      ]
+    },
+    ""Links"": [
+      {
+        ""rel"": ""some-other-resource"",
+        ""href"": ""/person/{id}/some-other-resource""
+      }
+    ]
+  }
+}
+
+",280428,,,,,43944.50208,HATEOAS with Child Resources,,1,6,,,,CC BY-SA 4.0, +409196,1,,,4/23/2020 7:49,,2,140,"

The application user has a lot of standard functionality we see in most applications. At a high-level, this includes some form of authentication, authorization, and session management. At a low-level, we have functions like:

+ +
  1. Checking the password strength;
  2. Hashing the password, perhaps with a bit of salt;
  3. Manipulating a hashmap of data tied to the session object;
  4. Checking if the user has a role that has the permission necessary to call a function.
+ +

Looking at Martin's Clean Architecture, I believe that the high-level user logic falls under use cases/application business rules. But do the details belong here as well?

+ +

For example, when a User registers or performs a login, the application needs to utilize best-practice algorithms that should change as new attacks and vulnerabilities are discovered.

+ +

The request arrives from some UI and is handled by a controller at its first stop. Now, from a security perspective, we would want the password to be hashed as close to the entry point as possible, to reduce the severity of bugs that would leak the password into error messages, session data, logs, etc. This would mean that such functionality belongs in the controller. However, my understanding is that the controller is very thin and serves only as a proxy adapter between the UI and the application logic. As such, it shouldn't worry about complex operations such as hashing, but should focus on data translation and invoking the correct functions.

+ +

Now, at the use case layer, we might have a UserService class that offers a registerUser(username, password) function. The registerUser might have code such as:

+ +
registerUser(username, password) {
+    if(isNotUnique(username)) throw new UserNameNotUniqueException();
+    if(isNotStrong(password, username)) throw new InsufficientPasswordStrengthException();
+    var password_hash = hashPassword(password);
+    userRepo.save(new User(username, password_hash));
+}
+
+ +

The isNotUnique calls the userRepository to retrieve a user with the given username. If the list is empty, the username is unique. The isNotStrong performs a series of checks for length, complexity, and similarity with the username. The hashPassword calls the latest and greatest library for performing the password hashing and supplies the strongest configuration parameters.

+ +

Ideally, we would have an adapter between the UserService and the PasswordHasher so that we can switch between implementations if the library becomes vulnerable. Would that make the PasswordHasher an interface adapter, while the actual library belongs to the Frameworks and Drivers layer?

+ +
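
To make the adapter idea concrete, here is a minimal sketch of what I have in mind (Python used for brevity; all names here are my own invention, not from the book):

+ +
import hashlib
+import os
+
+# Port: the use-case layer depends only on this abstraction.
+class PasswordHasher:
+    def hash(self, password: str) -> str:
+        raise NotImplementedError
+
+# Adapter: wraps a concrete algorithm from the frameworks/drivers layer.
+# If PBKDF2 is ever deemed too weak, only this class gets replaced.
+class Pbkdf2PasswordHasher(PasswordHasher):
+    def hash(self, password: str) -> str:
+        salt = os.urandom(16)
+        digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 600_000)
+        return salt.hex() + ':' + digest.hex()
+
+class UserService:
+    def __init__(self, hasher: PasswordHasher, user_repo):
+        self._hasher = hasher  # injected, so the service never names the library
+        self._user_repo = user_repo
+
+ +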

My second question is how does all of this fit into the old MVC design? Would the user-related application logic fall inside the model? Is the MVC controller a similarly thin layer as Martin's interface adapter layer?

+ +

Third, would access control logic be more suitable at the application logic layer, instead of the controllers?

+ +

Finally, I would welcome comments and corrections for all the reasoning given above.

+",289975,,209774,,43945.75208,43945.75208,Identity and access management in Clean Architecture and MVC design,,0,0,1,,,CC BY-SA 4.0, +409198,1,,,4/23/2020 7:54,,2,170,"

In our production environment we have a large database (SQL Managed Instance), roughly 400-500 GB. We create a copy of this database and use it to restore into our development environment, creating a database (SQL Server 2019) our developers can use. We create a backup of our production database every day; however, the restore process takes a very long time, so we don't restore the development database very often.

+ +

With our growing development team we are noticing that we are stepping on each other's toes and causing delays. We are looking into creating a single database per developer, as well as increasing the frequency with which we restore these databases.

+ +

I am writing this to get some advice on how to approach this task. Please let me know if you require more information.

+",363912,,,,,43944.36667,How do you approach one database per developer when working with a very large databases?,,1,3,1,,,CC BY-SA 4.0, +409199,1,,,4/23/2020 8:29,,-3,48,"

My webapp is periodically collecting data samples (say of type A and B) from some web services. Its database has therefore columns for A and B. So we have values A1/B1, A2/B2 etc.

+ +

Requirements have changed and now we also want to collect C. More code and more DB columns are added, so now we collect A3/B3/C3 etc. But we also want to retroactively collect the missing data, i.e. C1 and C2.

+ +

Is there an established term for this process of retroactively filling data in order to make a DB complete again? I initially called it ""migration"" but my fellow developers use it for DB migrations, i.e. adding/removing columns. I'm looking for a term that is more specific than ""data upgrade"".

+",212914,,,,,43944.37431,How to call the process of filling missing data after a database upgrade?,,1,3,,,,CC BY-SA 4.0, +409203,1,,,4/23/2020 9:44,,-1,44,"

In search engine indexing, a body of text is often processed before it is indexed. A common example is stemming, where words are reduced to their root form (plurals are dropped, tense is normalized). Other examples are lemmatization, soundex transformation, casing, etc.

+ +

So, this sentence...

+ +
+

My name is Bond, James Bond

+
+ +

...might be indexed as the following tokens.

+ +
    +
  • my
  • +
  • name
  • +
  • be
  • +
  • bond
  • +
  • jame
  • +
  • bond
  • +
+ +

A basic principle of information retrieval is that this only works if you do the transformation on the query as well.

+ +

If I was to search for ""James"", it would not match because that token was transformed to ""jame"". My search can only reliably work if the exact same set of transformations take place on the query as well, so my query for ""James"" would be equally transformed into ""jame"" before any token matching was attempted.

+ +
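
A tiny sketch of the principle (Python; the normalize function below is just a stand-in for whatever analyzer/stemmer the engine actually uses):

+ +
def normalize(text):
+    # Stand-in analyzer: strip punctuation, lowercase, crude plural stripping.
+    return [w.strip(',.').lower().rstrip('s') for w in text.split()]
+
+index = normalize('My name is Bond, James Bond')
+
+def matches(query):
+    # The same transformation is applied to the query before matching.
+    return all(token in index for token in normalize(query))
+
+print(matches('James'))  # True, because both sides were reduced to 'jame'
+
+ +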

(I liken this to algebra, where you have to do the same calculations on both sides of the equals sign.)

+ +

Is there a name for this principle of having to transform both indexed content and query content equally before attempting to compare them? I'm trying to explain this concept to some students, and it would be helpful if there were an existing term for it.

+",230046,,209774,,43945.52153,43945.52153,Equal transformations on both indexed content and query content before a search is attempted,,1,2,1,,,CC BY-SA 4.0, +409204,1,409207,,4/23/2020 10:29,,-1,89,"

I am a senior software developer and once worked for a big consulting house. I have been using various cloud services for many years, and DevOps is already in my blood.

+ +

Recently I moved into another organization. This organization is very lovely in many ways (that's why I moved), but its IT, tools and processes are a bit aged, and the culture is a bit conservative. Basically, engineers work on commands/requirements from business units, every engineer has a clearly defined responsibility, and engineers' own initiatives were not really encouraged or wanted. Some progressive engineers are unhappy about the lack of tools such as Git, and some business units are unhappy about the lack of agility in IT. Management saw this and would like a change. I, among other progressive colleagues, was asked to help bring modern tools/methodologies/processes into my organisation.

+ +

Introducing Cloud and DevOps to this org is very different from introducing such things to my former clients. In most organizations, it is usually enough to mention some obvious benefits of Cloud/DevOps; my focus at other clients was always execution rather than persuasion. My new org is very, very different. It is not a normal business, it has almost no competition, and my peer engineers here worry much less about innovation. For a long while, ""serving the business units stably"" has been the No. 1 value in this IT department (this is not blame at all: many of my colleagues here are very diligent, and once a task is defined, they deliver high-quality work in the defined context). If one mentions those obvious benefits of modern IT, the first peer questions could well be: ""Well, do our business units really care about these? Do they care how we develop our software? Do they care whether we host our applications on our own server or on a cloud? If the business units don't care, why should we modernize? Why should we learn new techs, new tools, and new methodologies? We have enough chores to do, and we don't have time for that hype (yes, Cloud and DevOps are often viewed as hype here).""

+ +

I am trying to persuade some of my peers. I have already prepared some example applications and example docker files, and some very progressive peers have already been trying out Kubernetes on their private computers and on public clouds. Good example code also needs a good introduction. My introductory article begins like this:

+ +
+

As the xxxxxxx project launched in xxxxxx, many of us raised good questions such as:

+ +
    +
  • What is DevOps?
  • +
  • What is Cloud?
  • +
  • Do we really need Cloud?
  • +
  • What organizational impact do DevOps and Cloud have on us?
  • +
  • How do we approach DevOps and Cloud?
  • +
+ +

Now let me try to explain.

+ +

A brief excursion into the history of software engineering over the last 60 years might bring out the rationales behind the current cloud/DevOps movement.

+ +

In the ancient days, from the 1960s to the early 1980s, software was written with ....

+
+ +

While I am digging into historical literature and constructing a compelling story for my peer engineers, I'd like to humbly ask for your ideas on how to introduce Cloud and DevOps (their motivation, essence, and a roadmap for a conservative org) under my circumstances. Any suggestion or idea will be highly appreciated.

+ +

Please note that I am not asking for opinions about whether we need modern practices such as devops / cloud in my organization. The question is rather: Assume there is a real need, how to approach these given my circumstances?

+ +

Best,
+Nicole

+",363923,,363923,,43944.54583,43944.54583,How to introduce Cloud and DevOps in an organisation that is conservative in nature?,,1,10,1,43944.51042,,CC BY-SA 4.0, +409206,1,409212,,4/23/2020 11:24,,-1,177,"

The context of this question is the early stage of introducing a VCS into an academic setting consisting of non-SW-engineers, largely unaware of modern best practices related to coding as a team. At the time of introducing the VCS, many projects already had non-negligible amounts of code (serving as the initial commits in the respective repositories).

+

Team member X, who spent a significant amount of time implementing a complex mathematical algorithm, and is the only one who really knows what's going on inside it, is reluctant to adopt VCS and instead prefers to distribute periodic snapshots of everything they worked on in a certain period of time, and have somebody else push it to the VCS. Whoever ends up pushing it, due to their inevitable lack of understanding of the modifications, has no way to split them logically into smaller (atomic) commits, and no way of documenting the changes other than "Changes by X from period Y" - resulting in one giant commit that spans many files. This of course defeats much of the purpose of VCS, turning it into little more than a file storage service.

+

I believe that person X doesn't bother with the VCS not because of malice or an inability to learn, but because they don't see the added value in this process. I would therefore like to explain to this person the significance of introducing modifications in small and well-documented commits, in the hopes of getting them on board.

+

We don't have a large team, develop concurrently, use automation or anything but the latest version of the code. Therefore, many arguments in favor of VCS aren't really applicable in our case.

+

The best reasons I could come up with are:

+
+
    +
  • One day, somebody else will need to maintain/modify this code. The external documentation (i.e. article/thesis) and internal documentation (i.e. comments in the code) may not explain why certain implementation details are the way they are (e.g. default values). If some line was changed, and the change was properly documented, this can help avoid repeating old mistakes.

    +
  • +
• Unless you accompany your code with the exact commit messages that should appear with it, information might get "lost in translation".

    +
  • +
  • You're needlessly creating work for another person.

    +
  • +
  • One or more of the reasons given as answers here.

    +
  • +
+
+

What other arguments, specific to our scenario, can I use?

+",146757,,-1,,43998.41736,43944.55556,Explaining why a code's modifier should also be its committer,,2,21,,43944.56528,,CC BY-SA 4.0, +409209,1,409213,,4/23/2020 12:34,,3,340,"

I am creating a MEAN stack application.

+ +

I have noticed by chance that whenever I send the user's credentials to the backend, I can ""fish"" them out of the Network tab in the browser's developer tools (F12), as shown in the image below.

+ +

What is the fastest and simplest way to handle this?

+ +

And what would a more thorough, well-elaborated approach look like?

+ +

+",363951,,110531,,43944.61181,43950.24583,How can I protect the user password?,,2,3,,,,CC BY-SA 4.0, +409229,1,,,4/24/2020 2:38,,-4,78,"

If I were building an online food ordering app but I wanted to print a receipt to the kitchen, how would I do that?

+ +

There are many SaaS solutions available, like Uber Eats and other apps, that allow customers to order food online, but all the orders just show up on the tablet they provide. If the kitchen were huge or the receipt massive, how would the person at the front tell the kitchen exactly what they need? Would they have to re-write it on paper or something?

+ +

Regardless, the tablet solution is easy to implement. Is it possible to connect to a small receipt/kitchen printer through the internet and send it a request to print an order with X items? I've been searching around but I can't seem to find a solution to this. Most printers have Bluetooth (which I don't think can help me in this case) and some say they connect using an Ethernet cable, but I haven't found any info on if/how I can connect to the printer and make it print orders.

+ +
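
For what it's worth, my reading so far suggests that many networked receipt printers (e.g. Epson TM models) accept raw ESC/POS bytes on TCP port 9100, so something like this sketch might be all that is needed (Python; the printer address is made up, and the cut command is an assumption about the specific model):

+ +
import socket
+
+PRINTER_ADDR = ('192.168.1.50', 9100)  # hypothetical printer IP; 9100 is the common raw-print port
+
+def print_order(lines):
+    data = '\n'.join(lines).encode('ascii') + b'\n\n\n'
+    data += b'\x1d\x56\x00'  # ESC/POS full-cut command on many models
+    with socket.create_connection(PRINTER_ADDR, timeout=5) as conn:
+        conn.sendall(data)
+
+print_order(['ORDER #42', '1x Burger', '2x Fries'])
+
+ +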

I've searched around and found some other software that says it can send online orders to the kitchen printer. How does it do this?

+ +

Alternatively, if you have any suggestions as to how I should handle online orders I'm all ears! Thanks

+",360567,,,,,43945.2625,How can I connect to a kitchen printer (or any receipt printer) and make it print custom online orders?,,1,4,,,,CC BY-SA 4.0, +409230,1,,,4/24/2020 4:19,,0,114,"

I am creating a service to get the total yearly expenditure of a customer. The service input is the customer ID. If the service receives an ID for a customer that does not exist, should a 404 (Not Found) be returned to the consumer, or should it be a 400 (Bad Request)?
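
+ +

For concreteness, here is a sketch of the endpoint in question (Flask used purely as a stand-in; the route and toy data are hypothetical):

+ +
from flask import Flask, abort, jsonify
+
+app = Flask(__name__)
+
+CUSTOMERS = {'42': 1234.56}  # toy data standing in for the real lookup
+
+@app.route('/customers/<customer_id>/yearly-expenditure')
+def yearly_expenditure(customer_id):
+    if customer_id not in CUSTOMERS:
+        abort(404)  # the choice under discussion: 404 vs. 400 would go here
+    return jsonify(total=CUSTOMERS[customer_id])
+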

+",364023,,183449,,43945.31944,43946.36458,Should we return a HTTP 404 or 400 for a customer record that doesn't exist?,,2,0,,43946.50625,,CC BY-SA 4.0, +409242,1,,,4/24/2020 9:56,,0,62,"

A library of audio algorithms is modeled, tested and verified in Simulink (a graphical block diagramming tool). It needs to go from these existing Simulink models down to multiple embedded platforms (floating- and fixed-point DSP processors: Qualcomm, Tensilica, ARM, etc.).

+ +

Several workflows are considered:

+ +
1) Simulink diagram -> Matlab code -> generic C floating point -> C embedded platform 1
+                                                               -> C embedded platform 2
+                                                               -> C embedded platform 3
+2) Simulink diagram -> generic C floating point -> C embedded platform 1
+                                                -> C embedded platform 2
+                                                -> C embedded platform 3
+3) Simulink diagram -> C embedded platform 1
+                       C embedded platform 2
+                       C embedded platform 3
+
+ +

The Simulink models keep changing, so versioning and ease of maintenance should be considered.

+ +

Which workflow should be preferred?

+ +

Note: Simulink has a C code generation feature. It can generate both generic and embedded C code (with some of the required embedded platforms supported (ARM), and some not (Tensilica)). I am not sure about the quality and usability of the generated C code, though.

+",223603,,,,,44216.5875,How to go from Simulink to embedded fixed-point DSP processor?,,1,0,,,,CC BY-SA 4.0, +409243,1,409300,,4/24/2020 9:58,,1,41,"

Let me explain my thoughts about the architecture of the project I'm working on. The project code repository consists of:

+ +
    +
  • Scrapy component - of course it serves to scrape data, process it and calculate relations between data. It populates MySQL database.
  • +
  • Django visualization component - it simply displays data stored in database using many filters.
  • +
+ +

Right now they are deployed as two separate docker containers, which works fine. The idea of former colleagues was to go further and also split the code repositories.

+ +

I can see the potential to create a CI/CD pipeline per repository, so it will only run tests/linters/checks and will only deploy the container that was actually modified. It won't run everything for the other container, which is OK (logical separation).

+ +

But because they actually work on the same database tables (Scrapy populates them, Django reads them), it looks like overkill to me. I would need to keep two separate DB model specifications in sync across both repositories. Right now Scrapy uses the Django ORM to interact with the DB.

+ +

What do you think? Is it worth splitting the code repository into two separate ones and keeping the models in sync in both of them? Or maybe not? Is there a way to trigger/run the GitLab CI/CD process for only the affected container in a single repository?

+ +

Thank you

+",364047,,,,,43947.3,Scraper in separate repo from visualization component?,,1,4,,,,CC BY-SA 4.0, +409246,1,,,4/24/2020 11:46,,0,61,"

I am working on learning native iOS development in Swift, and I am trying to find something that is similar to what I've learned in Android development with Kotlin.

+ +

In particular I am referring to the concept of having the db as the ""single source of truth"", that is advertised by the Android team as one of the main points of their recommended app architecture. This is possible by the use of LiveData, an observable data holder, and Room, a persistence library based on SQLite that returns LiveData objects. Those objects are observed in the UI layer to display data to the user. With this architecture, the typical Repository call looks something like this:

+ +
/**
+ * The database serves as the single source of truth.
+ * Therefore UI can receive data updates from database only.
+ * Function notify UI about:
+ * [Result.Status.SUCCESS] - with data from database
+ * [Result.Status.ERROR] - if error has occurred from any source
+ * [Result.Status.LOADING]
+ */
+fun <T, A> resultLiveData(databaseQuery: () -> LiveData<T>,
+                          networkCall: suspend () -> Result<A>,
+                          saveCallResult: suspend (A) -> Unit): LiveData<Result<T>> =
+        liveData(Dispatchers.IO) {
+            emit(Result.loading<T>())
+            val source = databaseQuery.invoke().map { Result.success(it) }
+            emitSource(source)
+
+            val responseStatus = networkCall.invoke()
+            if (responseStatus.status == SUCCESS) {
+                saveCallResult(responseStatus.data!!)
+            } else if (responseStatus.status == ERROR) {
+                emit(Result.error<T>(responseStatus.message!!))
+                emitSource(source)
+            }
+        }
+
+ +

(taken from here)

+ +

This also allows encapsulating both the data and its loading status in a single object (Result in the example), so the UI can start showing the old data from the DB, along with the loading status, while the new data is retrieved from the network.

+ +

I have been looking for something similar in iOS but it seems that there is no straightforward solution. The closest to it seems to be a combination of Realm or CoreData for the data layer, and then RxSwift for the observable bits. This seems to involve quite a bit more coordination between the different parts, and is not as straightforward as in Android.

+ +

Am I perhaps missing an easier path to doing what I need?

+",98069,,,,,43945.49028,Common patterns for Observable data layer on iOS,,0,2,,,,CC BY-SA 4.0, +409248,1,,,4/24/2020 14:26,,0,25,"

My company has an application largely designed around the idea of running reports. When we had originally conceived the project, we had decided that it made the most sense to use the Android provided 'SingleTask' launch mode, with the intention of restricting each report to a single instance through the lifespan of the session, meaning that returning to Report X after working in some other screens for some time would load up that prior instance rather than a new one entirely.

+ +

Years later, I was discussing with my team the idea of converting our app to use the 'Standard' launch mode instead, meaning we could have multiple instances of each report and we'd rely more on the native Android backstack if users wished to return to a previous instance of a report.

+ +

I feel conflicted about this, on the one hand, we've operated for several years with the restriction of a single instance of a report, and switching could be more of a problem than it's worth. On the other hand, perhaps it would be more logical to use more standard Android design practices.

+ +

My question to the community: for an app largely centered around the idea of running reports, would you expect to have the ability to run multiple instances of each report? Or would it be more logical to expect a single instance of a report that's perhaps easier to return to?

+",364068,,364068,,43945.61111,43945.61111,Android Design Question - Should a Reporting App Operate with Non-Standard Launch Modes?,,0,0,,,,CC BY-SA 4.0, +409251,1,,,4/24/2020 15:13,,1,169,"

I have a tree-like structure as shown in the picture below (as one small example). The tree consists of two different node types, which are:

+ +
    +
1. Data Nodes: These nodes, colored in yellow, contain about ten attributes. They always appear at the leaves of the tree.
  2. +
3. Collection Nodes: These are shown in blue. They share the Id and Type attributes with the Data Nodes and also keep an ordered list of their children.
  4. +
+ +

This tree is created in one component and sent to another system for manipulation (saving, showing in a GUI, etc.). To design this tree, there are different approaches, which I have described below with their advantages and disadvantages. What do you think about them, and which one is the most appropriate in this case?

+ +

Approach 1: Have an interface for the common functionality (e.g. getId, getType) and put the specific functionality (e.g. getChildren) in the concrete classes. This way, the other application that needs to manipulate the tree (for saving or showing in a GUI) needs lots of down-casting based on the type of the node.

+ +

Approach 2: Have one interface for all of the functionality of both node types. In each concrete class, either implement the method (if applicable) or throw an exception. It is the responsibility of the user to call the right method based on the type of the node.

+ +

Approach 3: Do not go for an interface at all, but just have one class (e.g. TreeNode) with all the data and methods of both node types. It is again the responsibility of the user to call the right method on the right object type.

+ +

p.s. The implementation language will be C++11, if it matters.

+ +
+ +

Update 1: More information about this tree and more specific requirements are described below:

+ +
    +
  1. The tree is created in a source system using a provided API. In this context, the user creates the tree hierarchy and sets the value of each node.
  2. +
3. The second system that uses this tree is responsible for saving/loading the tree and also for displaying it in a GUI. These actions usually need to walk over ALL nodes of the tree (see the sketch after this list). In addition, this second system can edit the tree elements using the GUI or search the tree for some values.
  4. +
+ +
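
Regarding the full-tree walks mentioned in point 2, here is a sketch of how Approach 1 can avoid scattered down-casts via double dispatch (written in Python for brevity; the same shape maps to C++11 with virtual accept/visit methods):

+ +
class Node:
+    def accept(self, visitor):
+        raise NotImplementedError
+
+class DataNode(Node):
+    def __init__(self, node_id, node_type):
+        self.id, self.type = node_id, node_type
+    def accept(self, visitor):
+        visitor.visit_data(self)
+
+class CollectionNode(Node):
+    def __init__(self, node_id, node_type, children):
+        self.id, self.type, self.children = node_id, node_type, children
+    def accept(self, visitor):
+        visitor.visit_collection(self)
+        for child in self.children:  # ordered walk over all nodes
+            child.accept(visitor)
+
+class SaveVisitor:  # one visitor per manipulation: save, display, search...
+    def visit_data(self, node):
+        print('saving data node', node.id)
+    def visit_collection(self, node):
+        print('saving collection', node.id)
+
+CollectionNode(1, 'root', [DataNode(2, 'leaf')]).accept(SaveVisitor())
+
+ +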

+",44649,,209774,,43990.80694,43990.80694,What is the best object-oriented design approach for a tree with two node types?,,3,6,,,,CC BY-SA 4.0, +409258,1,,,4/24/2020 19:47,,1,71,"

I am writing an application for different geometrical types of fuel tanks.

+ +

My design problem is that I will only receive the exact type of tank from the end user at runtime, and I don't know how to create/handle such a dynamic object on the server side.

+ +

For example, a tank can have 3 geometrical types of head: conical, dished, and flat. Each type of head needs to be validated differently.

+ +

I have created a parent class named Head that has all the parameters common to all geometrical head types (diameter, thickness, etc.). Each child class (Conical, Dished and Flat) extends Head and has its own instance variables.

+ +

The end user will choose, for example, a tank with conical heads and input all the required parameters and send it to the server for validation.

+ +

At this stage I am stuck. I don't know how to handle the dynamic data. I got a suggestion to use the Factory design pattern, but because every class has different variables, I don't think that is the right direction.

+ +
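
In case it helps frame the discussion, this is roughly what the factory suggestion usually looks like: the client sends a type discriminator plus a bag of parameters, and the factory picks the subclass that knows which fields it needs, so differing per-type variables are not a problem (Python sketch with hypothetical names):

+ +
class Head:
+    def __init__(self, params):
+        self.thickness = params['headThickness']  # fields common to all heads
+
+class ConicalHead(Head):
+    def __init__(self, params):
+        super().__init__(params)
+        self.height = params['conicalHeadHeight']  # type-specific field
+
+class DishedHead(Head):
+    def __init__(self, params):
+        super().__init__(params)
+        self.depth = params['dishedHeadDepth']
+
+HEAD_TYPES = {'CONICAL': ConicalHead, 'DISHED': DishedHead}
+
+def make_head(params):
+    # Each constructor reads only the keys relevant to its own type.
+    return HEAD_TYPES[params['headShape']](params)
+
+head = make_head({'headShape': 'CONICAL', 'headThickness': 2.0, 'conicalHeadHeight': 5.0})
+
+ +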

code without constructor and get/set methods

+ +
public class Head {
+    private float headThickness=0;
+    private float headThicknessTolerance=0;
+
+    private ShapeOfHead headShape;
+    private float knuckleRadius=0;
+    private int numberOfHeadPieces=4;
+    private HeadSide headSide;
+
+    private Figure8_1WeldingDetails headLongitudinalWeld;
+    private Figure9_1WeldingDetails headCircumferentialWeld;
+
+    private Bracing headBracing;
+}
+
+public class DishedHead extends Head {
+    private float dishedHeadDepth;  
+}
+
+public class ConicalHead extends Head {
+    private float conicalHeadHeight;
+}
+
+",364094,,326536,,43946.44792,43947.325,design problem handling a dynamic object,,2,2,1,,,CC BY-SA 4.0, +409268,1,,,4/25/2020 1:22,,-2,76,"

Given a software component diagram, a sprint could be represented as a delta of that diagram. That delta would reflect how the sprint affects the components. Each component could, for example, have a colour reflecting its change in state:

+ +
    +
  • New component, or progress on a partially complete component: green
  • +
  • Component to change: brown
  • +
  • Component to delete: red
  • +
  • No change: grey
  • +
+ +

Is such a delta visualisation technique used in real-world projects, and does it provide benefits for the project work? Is it supported by tools?

+",210442,,209774,,43946.86667,43946.86667,Visualising a sprint as a delta to a software component diagram?,,3,2,1,43946.68889,,CC BY-SA 4.0, +409280,1,,,4/25/2020 13:49,,-1,41,"

I'm starting a brand new project implementing microservices with domain driven design. We will have microservices written in different languages like C#, Python, and Node. I'm thinking about hosting these microservices as AWS lambda functions and using SNS and/or SQS for event pub/sub. My first attempt at hosting C# functionality as lambda functions had very poor performance when cold-starting the C# lambdas, so we are getting pushback from the business on using lambda functions.

+ +

Another option is hosting these microservices on a Linux EC2 instance. How do you implement pub/sub of events between these microservices living on a Linux EC2 instance? Can I still use AWS SNS/SQS? It's my understanding that implementing a message bus across different technologies is difficult.

+ +
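
My current understanding is that SNS/SQS remain usable from EC2 regardless of implementation language, since every service just calls the AWS HTTP APIs; for example, a Python producer/consumer sketch using boto3 (the topic ARN and queue URL are placeholders):

+ +
import boto3
+
+sns = boto3.client('sns')
+sqs = boto3.client('sqs')
+
+TOPIC_ARN = 'arn:aws:sns:...'  # placeholder topic
+QUEUE_URL = 'https://sqs...'   # placeholder queue, subscribed to the topic
+
+# Publisher side (could just as well be C# or Node talking to the same topic):
+sns.publish(TopicArn=TOPIC_ARN, Message='{""event"": ""OrderPlaced"", ""id"": 42}')
+
+# Consumer side:
+resp = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20)
+for msg in resp.get('Messages', []):
+    print(msg['Body'])  # real code would dispatch to a handler here
+    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])
+
+ +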

What's the best approach for implementing pub/sub of events across microservices built with different technologies? Thanks for your time.

+",342429,,,,,43947.45833,pub/sub events to/from microservices in different languages,,1,2,,,,CC BY-SA 4.0, +409281,1,,,4/25/2020 13:58,,2,71,"

There are a number of questions about the differences between UML Association, Aggregation and Composition out there, and many, many answers, some practical and some philosophical. Here I'm asking that we talk about practical differences!

+ +

In some answers I found:

+ +
    +
1. Reference-based languages like Java can't really implement Compositions, since the instance life cycle is controlled by garbage collection;
  2. +
  3. Associations and Aggregations have no practical difference, so we should just drop Aggregations and work with Associations and Compositions; still, those two kinds of relationship exist;
  4. +
  5. Those three concepts make sense only in programming languages like C++, that have an instance-based (and not reference-based) object model;
  6. +
7. Aggregation allows many owners, Composition does not; a few sources advocate differently.
  8. +
+ +

However, no answer I've found so far has approached those concepts in the context of persistent objects. No examples were given considering persistence, even though it is a very common development scenario.

+ +

When an object is persisted in a database system, we do have a life cycle model free from garbage collection, since the deletion of an instance (or table row, if you will) happens in response to a deliberate act by some part of the software implementing some product requirement.

+ +

The difference between Association and Composition is indeed very clear; they will produce different annotations in code. A very noticeable difference is that with a Composition we will have cascade delete enabled, so when the Owner is deleted, the Items are also deleted. With an Association, cascade delete is not enabled.

+ +

However, what differences will we find when annotating Association and Aggregation, especially when in both cases we have a cardinality greater than 1?

+",97062,,,,,43946.88125,"How to correctly translate UML Association, Aggregation and Composition to a Hibernate mapping?",,1,4,,,,CC BY-SA 4.0, +409283,1,409287,,4/25/2020 16:36,,-4,39,"

I've recently started reading about CQRS, DDD and EventSourcing. From what I've read one of the best ways to do ES is to have an event store and then a regular DB or cache for easier querying. However, something that confuses me is that in some examples I've seen both of these scenarios:

+ +
    +
1. Store in the event log first and then persist in a regular DB/Cache (sketched below)
  2. +
  3. Store in DB/Cache first and then raise an event while appending to the event log
  4. +
+ +
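
For concreteness, a minimal sketch of scenario 1 as I picture it (names are hypothetical; the read store is only updated after the append succeeds):

+ +
EVENT_LOG = []   # stand-in for the event store (the source of truth here)
+READ_MODEL = {}  # stand-in for the query-side DB/cache
+
+def handle_deposit(account_id, amount):
+    event = {'type': 'Deposited', 'account': account_id, 'amount': amount}
+    EVENT_LOG.append(event)  # 1. append to the event log first
+    READ_MODEL[account_id] = READ_MODEL.get(account_id, 0) + amount  # 2. then project
+
+handle_deposit('acc-1', 100)
+print(READ_MODEL['acc-1'])  # 100
+
+ +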

Which one is considered best practice?

+",364148,,,,,43946.73333,What do you store first in a CQRS + ES architecture,,1,1,,,,CC BY-SA 4.0, +409284,1,,,4/25/2020 16:46,,0,56,"

That is, let's say I have a model that's used in several views. Should I create a single view model to represent it across all those views, or should I create a separate view model for each view that uses the model?

+",67076,,,,,43947.48681,Should a view-model be coupled to a view or to a model?,,2,1,,,,CC BY-SA 4.0, +409289,1,409297,,4/25/2020 17:38,,4,1241,"

I've got to build some somewhat complicated WHERE clauses in SQL for a project I'm working on, and the clauses feel very hierarchical with their combination of ANDs and ORs. Instead of:

+ +
WHERE ([userId] NOT IN @excludeUsers) AND ((([firstname] LIKE @nameFilter) OR ([surname] LIKE @nameFilter)) AND (([jobTitle] LIKE @infoFilter) OR ([mobileNo] LIKE @infoFilter)))
+
+ +

... I want to be able to write something like the following:

+ +
// Wcb is a WhereClauseBuilder
+OrClause innerOr;
+var whereClause =
+    Wcb.And(
+        ""[userId] NOT IN @excludeUsers"",
+        Wcb.And(
+            Wcb.Or(
+                ""[firstname] LIKE @nameFilter"",
+                ""[surname] LIKE @nameFilter""
+            ),
+            innerOr = Wcb.Or(
+                ""[jobTitle] LIKE @infoFilter"",
+                ""[mobileNo] LIKE @infoFilter""
+            )
+        )
+    );
+
+ +

The idea is to eliminate mistakes like missing whitespace, brackets, and AND/OR keywords from the query. The And and Or static methods would create instances of AndClause and OrClause classes, and they'd overload ToString, allowing the whole object graph to resolve to a string upon $""{whereClause}"". I'd also like to be able to add to the query later on, like:

+ +
if (extraInfoFilter != null) {
+    innerOr.Or(
+        ""[extraInfo] LIKE @extraInfoFilter""
+    );
+}
+
+ +

However, the code I'm writing for this has gotten complex enough to prompt me to ask: is this solution over-engineered? Should I just build the strings manually instead of generating them from a hierarchical object model like this? Are there any practical reasons why that would be a better approach?
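
+ +

For scale, this is roughly the amount of machinery I expect the hierarchical model to need; a Python sketch of the same idea (not my actual C# code, and the names are illustrative):

+ +
class Clause:
+    def __init__(self, keyword, parts):
+        self.keyword, self.parts = keyword, list(parts)
+
+    def add(self, part):
+        self.parts.append(part)  # supports appending conditions later on
+
+    def __str__(self):
+        # Each part is wrapped in brackets; nested clauses stringify recursively.
+        return f' {self.keyword} '.join(f'({p})' for p in self.parts)
+
+def And(*parts): return Clause('AND', parts)
+def Or(*parts): return Clause('OR', parts)
+
+inner = Or('[jobTitle] LIKE @infoFilter', '[mobileNo] LIKE @infoFilter')
+where = And('[userId] NOT IN @excludeUsers', inner)
+inner.add('[extraInfo] LIKE @extraInfoFilter')
+print(where)
+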

+",125671,,,,,43951.20625,Is this WHERE clause builder an over-engineered design?,,4,7,,,,CC BY-SA 4.0, +409292,1,409302,,4/25/2020 19:02,,-2,55,"

I have an Acconeer XB112 breakout board and an XM112 radar module. It all works just fine per the Acconeer documentation and tutorials.

+ +

Now I want to modify some of the code to output to a GPIO. My problem is that there are six different header options: two micro USB (J1 & J2), two 2x5 headers (J5 & J6), a 2x30 board-to-board connector (J4; where the XB and the XM mate), and a 2x20 header (J3) that is only solder pads.

+ +

I need to use pin 5 on header J6. How does one determine the name of the specific header, and how would that be written in Python?

+",364162,,209774,,43975.66806,43975.66806,Designating a pin header and a GPIO,,1,3,,,,CC BY-SA 4.0, +409294,1,409295,,4/25/2020 20:55,,-2,126,"

Suppose that I have a Value Object representing an Image URL of a Cake.

+ +

To check that it really is a cake, I make an asynchronous API call to a server that checks whether the image really represents a cake.

+ +

Would it be OK to put this kind of validation inside the constructor of the Value Object? After all, I can't identify my CakeImage object without checking first. The other solution would be to use an ImageURL object and check whether it represents a cake when using it, but that seems like a bad solution.

+",330878,,,,,43948.92778,Async Value Object Creation (DDD),,2,3,,,,CC BY-SA 4.0, +409301,1,,,4/26/2020 7:25,,0,48,"

I'm working on a multiplatform CMake project that depends on SDL2. I'm currently putting the SDL2 dlls in a directory in my project and committing them through git-lfs. A post-build CMake step is then able to copy the DLLs to the output folder without other developers needing to track them down and manually copy them over.

+ +

My question is: is this bad practice? Are there downsides that I'm not considering?

+ +

I can think of two alternatives:

+ +
    +
  1. Use some method to find the DLLs in the syspath and copy them from there.
  2. +
  3. Have a script or build step that checks for the files and downloads them if not present.
  4. +
+",364191,,,,,43947.30903,Are there downsides to putting dll dependencies in version control?,,0,1,,,,CC BY-SA 4.0, +409304,1,,,4/23/2020 8:34,,0,172,"

This is for C#.

+ +

OK, so I'm coding a very basic money-managing program. You have an account with money in it, and you deposit funds into it or withdraw funds from it.

+ +

Obviously you don't want to accidentally perform the same transaction twice, so I'm trying to throw an InvalidOperationException when a transaction has already been attempted in the past and to specify the reason for this exception. But I have no idea how to go about this.

+ +

This is what I have so far:

+ +
public void Execute(decimal amount)
+{
+    // Guard against running the same transaction twice.
+    if (_executed)
+    {
+        throw new InvalidOperationException(""transaction has already been executed"");
+    }
+    // Use < so that withdrawing the exact remaining balance is still allowed.
+    if (_balance < amount)
+    {
+        throw new ArgumentOutOfRangeException(nameof(amount), ""insufficient funds"");
+    }
+    _balance -= amount;
+    _executed = true;  // record that this transaction has now run
+}
+
+ +

Conceptually I imagine a transaction's unique 'identity' can be stored somewhere and if one attempts another transaction I can use a command that says something along the lines of ""if 'unique code' exists in this place then show error. Or else, continue with transaction"". Any other way to go about it? I have very little experience coding, I'm in my 1st year at uni.
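
+ +

That is exactly the shape of an idempotency check: record each transaction's unique id in durable storage and refuse repeats (Python sketch for brevity; in C# this could be a HashSet<string> or, better, a DB table with a unique constraint):

+ +
processed_ids = set()  # stand-in for a durable store keyed by transaction id
+
+def execute(transaction_id, amount, balance):
+    if transaction_id in processed_ids:
+        raise RuntimeError('transaction ' + transaction_id + ' was already executed')
+    if balance < amount:
+        raise ValueError('insufficient funds')
+    processed_ids.add(transaction_id)
+    return balance - amount
+
+balance = execute('txn-001', 50, 200)      # ok, balance becomes 150
+balance = execute('txn-001', 50, balance)  # raises: duplicate transaction
+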

+",363927,Matt,,,,44115.54583,How do I stop a repeated action?,,1,7,,,,CC BY-SA 4.0, +409309,1,409313,,4/26/2020 10:31,,-3,85,"

I am trying to understand processes and their use in software engineering; not processes in general, but rather creating a process from within a program. It seems a really powerful tool, and I have a feeling that it is used in some very important areas of software engineering.

+ +

Why would someone create a child process? What are the use cases for having a program that calls other programs? Are there any design patterns that specifically recommend a multi-process structure?

+ +

One example I found was the ""make"" program calling GCC to compile some source files into object files, and then invoking GCC again to link the resulting object files.

+ +
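
For reference, this is the kind of thing I mean: a parent program spawning child processes and consuming their output (Python sketch; the gcc lines assume a main.c exists):

+ +
import subprocess
+
+# Child process 1: run an external tool and capture what it prints.
+result = subprocess.run(['echo', 'hello from the child'],
+                        capture_output=True, text=True)
+print(result.stdout.strip())
+
+# Child process 2: the make/GCC style, where exit codes drive the build.
+compile_step = subprocess.run(['gcc', '-c', 'main.c'])  # assumes main.c exists
+if compile_step.returncode == 0:
+    subprocess.run(['gcc', 'main.o', '-o', 'main'])  # link only if compiling worked
+
+ +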

What other use cases have any of you worked on?

+",364206,,,,,43948.19236,What are some use cases for creating child processes?,,2,1,,44017.86806,,CC BY-SA 4.0, +409314,1,409319,,4/26/2020 11:41,,0,140,"

I'm currently writing unit tests for ASP.NET Core controllers. Some controllers inject UserManager<T>, which seems to be a really hard type to mock. After some attempts to mock or even fake it, I eventually had the idea to create a facade wrapping the UserManager<T> and to inject the facade interface into the controllers instead of the UserManager.

+ +

I think I'll need to explain this approach to the team and fear that my reason (to make it mockable) is not convincing enough. Is there some documentation or best practice that supports or justifies this plan?

+",33479,,,,,43948.37222,Is it good practice to create a facade only to be able to mock the wrapped implementation?,,3,3,,,,CC BY-SA 4.0, +409315,1,,,4/26/2020 12:21,,1,44,"

I have implemented pagination using SQL, storing the search result in a temporary table.

+ +

The temporary table is named after a unique tab id.

+ +

That way each browser tab can have its own separate result set.

+ +

The table is deleted using the browser's unload event.

+ +

The issue I am hitting is that when the browser crashes or the system restarts, the unload event does not fire.

+ +

I have to store results in a temporary table because of the large result size (that is what I found easiest to implement).

+ +

And it is not feasible to re-run the search query on every first/previous/next/last navigation.

+ +

Is there a better way to handle this scenario?

+ +
SELECT * INTO Search_1587904945298 -- Timestamp as unique id using javascript new Date().getTime()
+FROM    ( SELECT    ROW_NUMBER() OVER ( ORDER BY OrderDate ) AS RowNum, *
+          FROM      Orders
+          WHERE     OrderDate >= '1980-01-01'
+        ) AS RowConstrainedResult
+WHERE   RowNum >= 1
+    AND RowNum < 20
+ORDER BY RowNum
+
+",310389,,310389,,43947.52986,43947.52986,Paginate large data and store in temporary table for navigation,,0,7,,,,CC BY-SA 4.0, +409321,1,,,4/26/2020 14:30,,0,100,"

I have a class with one method that is called from another class. This method internally calls several other methods to do its work. Those other methods are all public and could be called by the other class, but I don't use them that way because I'm trying to follow the ""Tell-Don't-Ask"" principle. All these methods return something and don't mutate the state of the input parameters; they do all their processing using only the input parameters passed in.

+ +


Now, when it comes to unit testing, I see the following options, but I'm not sure which one to choose:


+ +
    +
  1. Test all methods by passing in input and asserting the actual output against an expected one.
  2. +
  3. Test just the internal ""public"" methods (instead of all methods) like I described above and also test that the method that is called by the other class invokes the internal methods with the correct parameters i.e. use a mocking framework to mock out those calls but still assert that the calls are made.
  4. +
+ +

This is an example. The enhanceThingInformation method is the one that is called by the other class. The other methods are pretty specific to this class, so I don't think breaking them out into their own class is the right approach here (though it might be?).

+ +
class ThingInformationEnhancer {
+    addErrorInformation(thing, someDictionary1, someDictionary2) {
+        thing = /* some logic here */
+        /* more logic here */
+        return thing;
+    }
+
+    removeInvalidInformation(thing, someDictionary2) {
+        thing = /* some logic here */
+        /* more logic here */
+        return thing;
+    }
+
+    addAdditionalInformation(thing, someDictionary1, someDictionary2) {
+        thing = /* some logic here */
+        /* more logic here */
+        return thing
+    }
+
+    enhanceThingInformation(thing, someDictionary1, someDictionary2) {
+        thing = this.addErrorInformation(thing, someDictionary1, someDictionary2);
+        thing = this.removeInvalidInformation(thing, someDictionary2);
+        thing = this.addAdditionalInformation(thing, someDictionary1, someDictionary2);
+        return thing;
+    }
+}
+
+ +

I'm interested to hear how others handle this sort of situation.

+",359675,,,,,43948.53403,What approach do I take to unit testing a class which has a method that internally calls other methods?,,2,2,,,,CC BY-SA 4.0, +409324,1,,,4/26/2020 15:02,,-2,122,"

I have this class diagram exercise in UML. The main problem was modelling the permissions over the works, which I did not know how to do well. The specification says:

+ +
+

Users can publish their work on a website for external users. In this process the work will be equipped with a Web service interface.

+ +

The environment will have a Web system so that users outside the company can run the simulation works for a fee. Web users must identify themselves to access the different works.

+ +

External users may request permission to simulate the different works on the Web system; to obtain said permission they must pay the price that the company has established for each work. The system will keep the date on which the simulation permission for each work was granted to each external user. Once permission has been granted, external users will be able to run the work as many times as necessary, and the system will store the number of runs of each work that each external user has performed.

+
+ +

The solution I came up with is the following diagram:

+ +

+ +

But I have some doubts: are access permissions and external user jobs represented correctly?

+",364228,,209774,,43947.66389,43947.77778,Class diagram for permissions,,1,1,,,,CC BY-SA 4.0, +409325,1,,,4/26/2020 17:32,,-5,71,"

The question/title might be ambiguous, so please feel free to improve (or even migrate if necessary) it.

+ +

My primary concern is what are the innovations of relational model/theory/databases? (In particular, so groundbreaking that it deserves a Turing Award?)

+ +

It feels to me that, regardless of the mathematical notation (formal language, proved closure properties, etc.), the intuition behind relational theory is very simple. After all, we had tabular representations with headers and columns long before modern computing emerged. Examples: rosters, military records, bank sheets, etc.

+ +

Based on these real life counterparts, table/relation seems very direct and intuitive. So in historical context,

+ +
    +
1. what prevented pioneers from inventing the relational model until the 70s?
  2. +
3. what is the relational model's advantage over hierarchical/network/navigational databases?
  4. +
+",267470,,,,,43948.40556,Novelty of relational model in historical context,,1,5,,,,CC BY-SA 4.0, +409332,1,,,4/27/2020 2:50,,-2,157,"

Suppose I'm in some function of class A. In this function I define an object of class B, say b, and call some function using b.func(), which updates some values and arrays, say x and A. Then I create an object of another class C, say c, and call one of its functions using c.func(). This function is supposed to use the x and A that were updated during the call to b.func(). How can I access them within c.func()? I tried defining an object of class B inside c.func(), but the values of x and A appear to be empty...

+",364258,,,,,43948.41458,Accessing one class variable from another class in C++,,3,0,,,,CC BY-SA 4.0, +409333,1,409363,,4/27/2020 3:06,,-1,168,"

There are a number of books written about OOAD (object-oriented analysis and design). A few of them can probably be considered good books, written by people with a lot of experience in the field.

+ +

One could say there is reasonable agreement on what OOAD is, to the point that one could read a few books and not find (very) disparate elements.

+ +

So far on that subject I've read Larman (Applying UML and Patterns: an introduction to OOAD and the iterative process).

+ +

It begins with the functional requirements (because the non-functional ones are not the book's focus), then goes on to elaborate the use cases, and then models the domain across a number of iterations. Somewhere else I've seen activity diagrams being used at an early stage, but maybe they didn't fit the examples mentioned in the book. Also, the user interface is not defined early (it is not the book's focus and is postponed to later iterations).

+ +

Can we say:

+ +

a) activity diagrams can optionally be used at the early stages of OOAD?

+ +

b) user interface is meant to be defined in later stages?

+ +

c) are the steps/activities in OOAD mostly well established?

+ +

or:

+ +

a) the steps are not well established, to the point that activities cannot be reasonably standardized, and instead have to be defined and applied as needed.

+",93338,,93338,,43954.85417,43954.85417,Are the steps involved in OOAD well-established?,,1,33,,43948.63264,,CC BY-SA 4.0, +409334,1,409345,,4/27/2020 3:25,,0,112,"

In one of his books, Robert Martin provides an example of a test-driven pair-programming conversation while developing a bowling game scorer, in which one developer asks about the user stories' inputs and outputs.

+ +

The inputs are throws and the outputs are scores.

+ +

So I realized I had never thought about a requirement's inputs and outputs (to my recollection).

+ +

Are the outputs the acceptance criteria for the user stories?

+ +

In that case what are the inputs?

+",93338,,93338,,43948.53542,43950.37083,Are user story inputs and outputs the same as acceptance criteria?,,1,0,,,,CC BY-SA 4.0, +409337,1,,,4/27/2020 5:06,,0,94,"

I have built several small REST services based on spring-boot. Each of these REST services has its own database and configuration, and can run independently of the other services. Each service is mapped to a separate port; you can just start it and invoke any of the provided REST endpoints.

+ +

In front of them, I have built an ""API Gateway"" based on spring-boot, which acts as an aggregate service communicating with each of the small REST services over HTTP to gather data or to perform operations using one or more of them. Currently everything runs on my machine, so all communication goes over localhost and the specific port of each service.

+ +

What I now want to achieve or to try is the following:

+ +
    +
  1. The current API Gateway should use the existing spring-boot REST services as a dependency in the POM file
  2. +
3. It should still be possible to run each of the small REST services independently. That means I can just spring-boot:run and the service exposes its REST API on a specific port, even if it is used as a hard dependency in my API Gateway
  4. +
  5. Each of the small service should still use its own database so there should be no fat database which holds all data.
  6. +
7. When running the big fat API Gateway with all the small services as dependencies, the small services should not be mapped to specific ports and should not expose their REST APIs. I assume this will not happen anyway, since the services are not started using spring-boot:run, but I just wanted to point this out.
  8. +
  9. From architectural point of view: The communication with the dependency should be implemented using an Interface defining the necessary methods. That should make it easy in future to replace the calls to the dependency directly with some separate running REST endpoint by providing another implementation of that interface.
  10. +
  11. Question: I assume that the big fat API Gateway should not use the already existing Controller implementation of the REST services for invoking the methods. Instead, I assume, the REST service should provide a separate API for direct communication?
  12. +
+ +

What is the best way of achieving the requirements noted above?

+ +

Thanks in advance and I hope for your help and your ideas.

+ +

PS: Please inform me if this is the wrong Stack Exchange site for this question.

+",357282,,,,,43948.2125,Aggregate small spring-boot REST services into a big one as dependencies,,0,0,0,,,CC BY-SA 4.0, +409350,1,,,4/27/2020 10:23,,0,35,"

In a microservices-based web backend, most of the services (Node.js) contain modules that handle reading and writing data separately. When a particular service restarts, it grabs the data from other microservices and rebuilds its read model cache (sometimes a document-based DB). This service runs inside a docker container.

+

When it comes to clustering, or simply when running several instances of the same service, or when a manual restart is needed, this read model creation mechanism runs multiple times, which seems like very bad practice. Especially imagine a scenario where it needs to cache a few hundred thousand entities every time; this affects the startup time and performance of that service.

+

What are the industry best practices to create/manage read model data when services are booting or restarting?

+",241588,,173647,,44029.72847,44029.72847,"Read models creation, maintaining in stateless sevices CQRS",,0,1,0,,,CC BY-SA 4.0, +409353,1,,,4/27/2020 11:09,,-2,69,"

TLDR:

+ +

In a Headless CMS (or a Decoupled CMS), the content retrieved by the front-end needs to be identifiable (somehow). This is where I'm stuck. I can describe my guesses of how platform-agnostic content might be made identifiable (see my attempt at a guess below), but I can't find real-world confirmation of tried and tested approaches anywhere, detailing how the decoupled front end can request the content from the content repository in a meaningful, identifiable manner.

+ +

Where can I find a straightforward description of this core aspect of the mechanics of Headless CMS (or Decoupled CMS) architecture?

+ +
+ +

Recently I've been intrigued by the term Headless CMS.

+ +

There seem to be no shortage of articles and blog posts explaining:

+ +
    +
  • What is a Headless CMS?
  • +
  • What's the difference between a Headless CMS and a Traditional CMS?
  • +
+ +

but the explanations always seem to be pitched at the reader who is planning to start using a Headless CMS, not at the engineer who wants to try their hand at writing a Headless CMS.

+ +
+ +

Eventually, after trying repeatedly to read between the lines of the articles I was reading, I grasped the single, fundamental innovation built into the Headless CMS:

+ +
    +
  • in Headless CMS architecture, content and structure are completely separated
  • +
+ +

A conventional approach in web development is to maintain substantial separation between:

+ +
    +
  • Structure (HTML)
  • +
  • Presentation (CSS)
  • +
  • Behaviour (JS)
  • +
+ +

In this model, the written and media Content isn't listed as a (fourth) separate concern because it's implicitly interwoven throughout the Structure.

+ +

e.g.:

+ +
    +
  • You can take the HTML markup: <button type=""button"">Launch Jaguar Slideshow</button>
  • +
  • and present it as a big red button with capitalised white text and a drop-shadow using CSS
  • +
  • and enable it to trigger the creation and drop-down of a console, containing an animated slideshow using Javascript
  • +
+ +

But what you can't conventionally do is separate the textual content:

+ +
    +
  • Launch Jaguar Slideshow
  • +
+ +

from the markup structure:

+ +
    +
  • <button type=""button""> ... </button>
  • +
+ +

But - if I understand correctly - this is what a Headless CMS enables:

+ +
    +
  • on one page <button type=""button""> ... </button> may contain: Launch Jaguar Slideshow
  • +
  • on another page, <button type=""button""> ... </button> may contain: Launch Leopard Slideshow
  • +
  • on a third page, <button type=""button""> ... </button> may contain: Launch Tiger Slideshow etc.
  • +
+ +

I'm not even sure that I've understood everything correctly up to here - and I've never used a templating language - but, my first (incidental) question:

+ +

Does this, essentially, make a Headless CMS the conceptual child of:

+ +
    +
  1. a Traditional CMS; and
  2. +
  3. Templating Languages such as Mustache, Handlebars.js, HAML, Pug, Slim, Nunjucks etc.
  4. +
+ +

with the addition that whereas the templating languages above tend to work exclusively with HTML, the kind of templating engine in a Headless CMS will insert Content into any structure (i.e. not just into HTML in a web document, but also into the XML of an RSS feed, or into a Social Media platform component, or into the UI structure of a Native App, etc.)

+ +

and, my second, main question:

+ +

How on earth might one approach decoupling Content from Structure?

+ +

My best guess is something like the following JSON, where I have tried to express the relationship of the content only to itself (so it remains structurally agnostic and can be queried and return data to slot into any structure).

+ +
 {
+   ""Summary"":{
+      ""Title"":""Apples"",
+      ""Created"":""[TIMESTAMP HERE]"",
+      ""Last Modified"":""[TIMESTAMP HERE]"",
+      ""ShortDesc"":""An 8-10 word description of Apples here"",
+      ""LongDesc"":""A 20-30 word intro to Apples here.""
+   },
+
+   ""Related"":{
+       ""Parents"":[
+          ""Woodland_Fruit""
+       ],
+       ""Siblings"":[
+          ""Blackberries"",
+          ""Cherries"",
+          ""Pears""
+       ],
+       ""Children"":[
+          ""Granny Smith"",
+          ""Braeburn"",
+          ""Gala"",
+          ""Red Delicious""
+       ]
+   },
+
+   ""Media"":{
+      ""Images"":{
+         ""Hero_1"":{
+            ""Sizes"":[
+
+            ],
+            ""URL"":""[URL HERE]"",
+            ""Title"":""Title Here"",
+            ""Alt"":""Alternative text here"",
+            ""Created"":""[TIMESTAMP HERE]"",
+            ""Credits"":{
+               ""Photographer"":""""
+            },
+            ""Licence"":{
+               ""Type"":"""",
+               ""URL"":"""",
+               ""Holder"":""""
+            }
+         },
+         ""Primary_1"":{
+            ""Sizes"":[
+
+            ],
+            ""URL"":""[URL HERE]"",
+            ""Title"":""Title Here"",
+            ""Alt"":""Alternative text here"",
+            ""Created"":""[TIMESTAMP HERE]"",
+            ""Credits"":{
+               ""Photographer"":""""
+            },
+            ""Licence"":{
+               ""Type"":"""",
+               ""URL"":"""",
+               ""Holder"":""""
+            }
+         },
+         ""Associated_1"":{
+            ""etc."":""etc.""
+         },
+         ""Associated_2"":{
+            ""etc."":""etc.""
+         }
+      }
+   },
+
+   ""Editorial"":{
+      ""Primary"":{
+         ""Title"":""Hesperides and Beyond"",
+         ""Author"":""Ann Onne"",
+         ""Created"":""[TIMESTAMP HERE]"",
+         ""Last Modified"":""[TIMESTAMP HERE]"",
+         ""Last_Modified"":""[TIMESTAMP HERE]"",
+         ""Sections"":[
+            {
+               ""Paragraphs"" : [
+                  {
+                     ""Paragraph"": ""[SECTION 1, PARAGRAPH 1 HERE]"",
+                     ""Pull_Quotes"": [
+                         ""PULLQUOTE HERE""
+                     ]
+                  },
+
+                  {
+                     ""Paragraph"": ""[SECTION 1, PARAGRAPH 2 HERE]"",
+                     ""Pull_Quotes"": [
+                        ""PULLQUOTE HERE"",
+                        ""PULLQUOTE HERE""
+                     ]
+                  }
+               ]
+            },
+
+            {
+               ""Section_Heading"" : ""[SECTION HEADING HERE]"",
+
+               ""Paragraphs"" : [
+                  {
+                     ""Paragraph"": ""[SECTION 2, PARAGRAPH 1 HERE]""
+                  }
+               ]
+            }
+         ]
+      },
+
+      ""Secondary_1"":{
+
+         ""Sections"":[
+            {
+               ""Section_Heading"" : ""[SECTION 1 HEADING HERE]"",
+
+               ""Paragraphs"" : [
+                  {
+                     ""Paragraph"": ""[SECTION 1, PARAGRAPH 1 HERE]""
+                  }
+               ]
+            },
+
+            {
+               ""Section_Heading"" : ""[SECTION 2 HEADING HERE]"",
+
+               ""Paragraphs"" : [
+                  {
+                     ""Paragraph"": ""[SECTION 2, PARAGRAPH 1 HERE]""
+                  }
+               ]
+            },
+
+            {
+               ""Section_Heading"" : ""[SECTION 3 HEADING HERE]"",
+
+               ""Paragraphs"" : [
+                  {
+                     ""Paragraph"": ""[SECTION 3, PARAGRAPH 1 HERE]""
+                  }
+               ]
+            }
+         ]
+      }
+   }
+}
+
+ +

(Hmmm. Does that begin to look like a pseudo-version of JSON-LD + Schema.org to you? Because it does to me...)

+ +

The JSON above describes the topic Apples:

+ +
    +
  • It has 4 sections: Summary, Related, Media, Editorial
  • +
  • It indicates that the topic Apples has a parent topic, as well as sibling and child topics.
  • +
  • The topic has associated media (a hero image, a primary image and 2 more associated images)
  • +
  • The topic also has two Editorial ""articles"" - one is a proper article, one is supplementary information
  • +
+ +

So far, so good.

+ +

But most of this feels very much like guesswork.

+ +

I'd like to confirm that this sort of approach is on the right track.

+ +

Is this how I'm supposed to approach separating Content from Structure when building a Headless CMS?

+",359345,,359345,,43949.41181,43949.41181,Understanding Headless CMS architecture from an engineering (rather than a user) perspective,,1,4,,,,CC BY-SA 4.0, +409355,1,409362,,4/27/2020 11:42,,2,93,"

I am currently in the process of creating a use case diagram for a new system that we are building and have stumbled upon an interesting scenario.

+ +

The system has a number of primary actors which include Corporate and Non-Corporate users. Each of these users can “do something” within the system from a variety of different device types which may or may not be under the direct control of my company:

+ +
    +
  • A corporate user could “do something” from a corporate-managed device or their personal device.

  • +
  • A non-corporate user could “do something” from their device which is “unmanaged” by my company.

  • +
+ +

How do I show this relationship on the use case diagram? The combination of actor and device type is a fundamental consideration for the design of the system.

+ +

Currently I have settled on “managed device user” and “unmanaged device user” with the view of articulating all the possible combinations on a separate artefact perhaps within the actor descriptions tab. Others say I should have the device as an actor on the use case.

+ +

+ +

I could have “corporate user”, “non-corporate user” and “device” as 3 actors on the system. I’m not precious, I just want to make sure it’s correct.

+ +

+ +

So which is the most suitable approach?

+",364291,,209774,,44023.84375,44023.84375,UML Actor and Device Relationship,,1,7,,,,CC BY-SA 4.0, +409361,1,,,4/27/2020 14:38,,1,26,"

I would like to generate analytics on a per-user basis: how many times a user has viewed a particular page over the last 7 days, and how many times they've viewed it during their lifetime as a user.

+ +

This will be tracked in 1d, 7d, 14d, 30d, and lifetime intervals. If a user visits the page today, they will show 1 visit in the 1d bucket, as well as 1 visit in every bucket above 1d, since visiting once today means you've visited at least once in all the wider date ranges.

+ +

+ +

To do this I am storing all events in a data lake and rolling up these date-range counts every 24 hours. The actual user data is in a document, but it wouldn't be feasible to store all of a user's events in their document, given how much data that would eventually be and the inevitable data skew it would create on the cluster. This works now, but generating these rollups takes longer and longer as the data grows, and even with partitioning, the number of pages we are rolling up is growing at a pace where the current method may not scale gracefully.

+ +

When we receive these events, we could update the user's counts at runtime. But without the context of the date, there would be no ""dropoff"": if it were a lifetime count we could always increment that field, but the windowed counts need to update daily, since a page view today is not a page view tomorrow.
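
+ +

To make the dropoff idea concrete, the shape I have in mind is roughly the following sketch (class and field names are made up): keep a per-day counter per (user, page) and derive the windowed counts on read, so a view ages out of the narrower windows automatically.

+ +
import java.time.LocalDate;
+import java.util.TreeMap;
+
+// Sketch: per-(user, page) daily counters; the 1d/7d/14d/30d figures are
+// derived sums, so views ""drop off"" on their own as days pass.
+class PageViewCounts {
+    private final TreeMap<LocalDate, Long> perDay = new TreeMap<>();
+    private long lifetime = 0;
+
+    void record(LocalDate today) {
+        perDay.merge(today, 1L, Long::sum);
+        lifetime++;
+        perDay.headMap(today.minusDays(30)).clear(); // anything older than 30d is irrelevant
+    }
+
+    long window(LocalDate today, int days) {         // e.g. window(today, 7) for the 7d count
+        return perDay.tailMap(today.minusDays(days - 1)).values()
+                     .stream().mapToLong(Long::longValue).sum();
+    }
+
+    long lifetimeCount() { return lifetime; }
+}
+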

+ +

Something that stands out to me is that the counts will consistently gravitate towards one end. But that may just be an observation and not anything useful.

+ +

+ +

Is the only way to do this the way I've described, or is there a more clever way to update these fields?

+",293939,,,,,43948.60972,Rolling Analytics - Dates and Dropoffs,,0,3,,,,CC BY-SA 4.0, +409364,1,409373,,4/27/2020 15:47,,-1,131,"

The current workflow we have in place at my company with regards to git is:

+ +
    +
  • Develop on a branch
  • +
  • Push every so often
  • +
  • When done developing, open a pull request in Github
  • +
+ +

We have reviewers on that pull request who comment with suggestions for improvement. However, a lot of the changes they request could be made earlier in the development cycle, rather than waiting until I'm finished.

+ +

I am suggesting to the team lead that the workflow should be different. Rather than waiting until I have finished my code for other developers to comment, wouldn't it make more sense for them to comment while I'm developing? I think this would catch many bugs earlier in the dev cycle.

+ +

What are the pros and cons of this approach? Is this a bad idea?

+ +

Suggested workflow:

+ +
    +
  • I create a new pull request when I start a new project
  • +
  • Let's say I add some code and push
  • +
  • I comment on the pull request asking the reviewers what they think of the change.
  • +
  • They comment back and I implement the change
  • +
+",362636,,,,,43948.74028,Good / Bad idea to start a pull request in the beginning of a project,,1,5,,,,CC BY-SA 4.0, +409365,1,,,4/27/2020 15:55,,1,84,"

I am trying to design a REST API backend based on LoopBack. Since I've heard Node.js is not great at heavy computation (CPU-bound work blocks the event loop), can I make an async call that offloads the calculation to Java, hosted on the same server where the Node.js application is running? This way the Node side stays non-blocking. Is it a good idea, say when hosting on an AWS EC2 instance, to run a Tomcat server and the Node.js/nginx server on the same instance?
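
+ +

To make the idea concrete, here is a minimal sketch of what the Java side could look like, using only the JDK's built-in com.sun.net.httpserver (the port, path and the ""work"" are all made up for illustration). The Node.js app would call it with an ordinary non-blocking HTTP request, so its event loop stays free:

+ +
import com.sun.net.httpserver.HttpServer;
+import java.io.OutputStream;
+import java.net.InetSocketAddress;
+import java.util.concurrent.Executors;
+
+public class ComputeService {
+    public static void main(String[] args) throws Exception {
+        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
+        server.createContext(""/compute"", exchange -> {
+            long result = heavyCalculation();                 // CPU-bound work runs here
+            byte[] body = Long.toString(result).getBytes();
+            exchange.sendResponseHeaders(200, body.length);
+            try (OutputStream os = exchange.getResponseBody()) {
+                os.write(body);
+            }
+        });
+        server.setExecutor(Executors.newFixedThreadPool(4)); // worker threads, not Node's loop
+        server.start();
+    }
+
+    static long heavyCalculation() {                          // stand-in for the real computation
+        long sum = 0;
+        for (long i = 0; i < 100_000_000L; i++) sum += i;
+        return sum;
+    }
+}
+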

+",364308,,,,,43948.74931,Can I use Nodejs for intensive calculation and computing by offloading them to Java framework?,,2,1,,,,CC BY-SA 4.0, +409368,1,409402,,4/27/2020 16:52,,0,73,"

I have the following setup:

+ +
    +
  • different teams
  • +
  • shared eslint config (that imports airbnb rules as a basis).
  • +
+ +

Whenever a developer decides to update libraries, if eslint/prettier have updates, it's a pain. The main reason is that new eslint or prettier versions usually introduce new rules. If we update these libraries, most of the time that means reformatting the entire codebase. I personally don't think this is a good practice, because:

+ +
    +
  1. it clutters the log.
  2. you're adding new rules that the teams are not aware of (the impact can be huge)
  3. Steps 1 and 2 will always repeat whenever you update the libs.
+ +

Given point 3 above, it makes me wonder whether it makes sense to update these libs at all.

+ +

When we didn't have anything, following the airbnb rules was great. But does it make sense to keep including new rules and reformatting the code whenever you update a library? (For instance, we have established a process of updating the frontend libraries once a month to avoid deprecated libs; not updating the eslint rules and waiting longer proved to be even worse.)

+ +

I wanted to know how you handle this and what the best practices are.

+ +

Since this is a more process-related question, Software Engineering seems to be the right place to ask.

+",95339,,,,,43949.53819,When does it make sense to update eslint/prettier?,,1,6,,,,CC BY-SA 4.0, +409379,1,,,4/27/2020 21:39,,0,76,"

Abstract:

+ +

I'm attempting to create a ""data interoperability API"", or in other terms a ""high-level query interface API"", that will be consumed by data scientists, web apps, and anyone who wants to query multiple datasets.

+ +

Assumptions:

+ +

The underlying data will usually be in these formats:

+ +

1) Best case scenario - XML (with proper XSD):

  • the XML serves as meta-data that describes where the data resides (file, web service, etc.) and the field descriptions
  • points to delimited data (CSV) or even binary data

+ +

OR

+ +

2) Just plain XML as meta-data (NO XSD).

+ +

  • the XML serves as meta-data that describes where the data resides (file, web service, etc.) and the field descriptions
  • points to delimited data (CSV) or even binary data

+ +

OR

+ +

3) Plain data (CSV no headers)

+ +

This API will be provided as a distributable (in the case of Java a JAR, or a Python extension) that can be loaded by consumer applications.

+ +

Progress so far:

+ +

1) I'm able to load the XSD using JAXB for Java (and PyXB for the Python version) and generate classes based on the XSD info. I'm calling “xjc” as a system process via Java:

+ +
   ProcessBuilder processBuilder = new ProcessBuilder(CMD_ARRAY);
+   Process process = processBuilder.start();
+
+ +

2) I'm also able to bind objects & data in memory and issue what I call “native queries”.

+ +
 //1. instance 
+            jaxbContext = JAXBContext.newInstance(clazz);
+            //2. Use JAXBContext instance to create the Unmarshaller.
+            unmarshaller = jaxbContext.createUnmarshaller();
+            //3. Use the Unmarshaller to unmarshal the XML document to get an instance of JAXBElement.
+            inStream = new FileInputStream( this.xmlFile);
+            OaisInteropFrameworkStates.setState(OaisInteropFrameworkStates.STATE_LOAD_NATIVE_API);
+            returnedObject  = unmarshaller.unmarshal(inStream);
+
+ +

3) Next step (high-level public API mapping) - this is what I need advice on; see the diagram below (everything in black boxes is tested and working; the red shapes are where I need advice).

+ +

High level data flow architecture:

+ +

+ +

Actual question 1) What is the best design-pattern approach in order to automatically map/wrap the different functions?

+ +

Assumptions:

+ +

1) (User native-to-public mapping) - The user MUST provide an XML (or maybe JSON) 1-to-1 mapping of function signatures

+ +

2) The mapped function will always return a String

+ +

3) The native API must be decoupled from the public API: they have no knowledge of each other, except via the mapping file or any code auto-generated from the XML mapping file (similar to client & server interfaces).

+ +

4) I have very limited time to work on this project, 8-10 hours per week. Therefore simplicity over elegance.

+ +

Some thoughts:

+ +

1) I was looking at XML-RPC for Python (https://docs.python.org/3/library/xmlrpc.client.html#module-xmlrpc.client) or Java (https://www.tutorialspoint.com/xml-rpc/xml_rpc_examples.htm).

+ +

2) Also REST-RPC, or some other service-oriented architecture.

+ +

3) Have the user create XSD that maps the function endpoints and use JAXB (Rather not, XSD is too verbose, and these days Python minded scripting people will hate that...)

+ +

I like the approach where XML-RPC uses a similar mapping XML:

+ +

Request:

+ +
<?xml version=""1.0"" encoding=""ISO-8859-1""?>
+<methodCall>
+   <methodName>sample.sum</methodName>
+   <params>
+      <param>
+         <value><int>17</int></value>
+      </param>
+
+      <param>
+         <value><int>13</int></value>
+      </param>
+   </params>
+</methodCall>
+
+ +

Response:

+ +
<?xml version=""1.0"" encoding=""ISO-8859-1""?>
+<methodResponse>
+   <params>
+      <param>
+         <value><int>30</int></value>
+      </param>
+   </params>
+</methodResponse>
+
+ +
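
Regardless of the wire format, the core of the public-to-native mapping could be as small as a dispatch table built from that mapping file. A rough sketch of what I mean (the names and the registration step are invented; every mapped function returns a String, per assumption 2):

+ +
import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.function.Function;
+
+// Sketch: public API = name -> function table; entries would be registered
+// from the XML/JSON mapping file, so public and native sides stay decoupled.
+public class Dispatcher {
+    private final Map<String, Function<List<String>, String>> table = new HashMap<>();
+
+    public void register(String publicName, Function<List<String>, String> nativeCall) {
+        table.put(publicName, nativeCall);
+    }
+
+    public String invoke(String methodName, List<String> params) {
+        Function<List<String>, String> f = table.get(methodName);
+        if (f == null) throw new IllegalArgumentException(""Unknown method: "" + methodName);
+        return f.apply(params);
+    }
+
+    public static void main(String[] args) {
+        Dispatcher d = new Dispatcher();
+        d.register(""sample.sum"", p -> Integer.toString(
+                Integer.parseInt(p.get(0)) + Integer.parseInt(p.get(1))));
+        System.out.println(d.invoke(""sample.sum"", List.of(""17"", ""13""))); // prints 30
+    }
+}
+

+ +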

Thanks a million!

+",364338,,,,,43952.89444,Best design pattern to map functions/wrap functions that will be used by client apps,,0,5,,,,CC BY-SA 4.0, +409382,1,,,4/28/2020 3:51,,1,189,"

In the book UML 2 and the Unified Process by Arlow and Neustadt it is stated: analysis classes should have 3 to 5 responsibilities.

+ +

But as you know we have the SRP (Single Responsibility Principle), which tells us something else!

+ +

Why is that? I'm confused.

+ +

Which of the following deductions is correct?

+ +

Case 1 - A different definition of responsibility has been adopted by the authors.

+ +

Case 2 - The adopted definition is the same, and the actual intent is the following: we can have multiple responsibilities in analysis classes, but in the design phase, with regard to the SRP, we split the analysis classes into design classes with one responsibility each.

+ +

Please help me

+",353037,,353037,,43949.87847,43949.87847,Is there multiple definitions for responsibility of class?,,4,2,1,,,CC BY-SA 4.0, +409383,1,,,4/28/2020 5:06,,2,144,"

I'm trying to understand why sizeof(a)/sizeof(t), where t is the element type, is considered inferior to sizeof(a)/sizeof(a[0]) for getting the length of an array. Just as it's possible to have different types, couldn't my elements also be of different lengths? So what makes dividing by the element size uniform?

+",364356,,173647,,43949.37917,43950.7875,sizeof(a)/sizeof(a[0]) vs sizeof(a)/sizeof(t) where t is type in C from K.N.King,,2,7,,,,CC BY-SA 4.0, +409385,1,,,4/28/2020 6:10,,0,85,"

Background

+

Building a mobile app for product X, which is currently hosted as a SaaS solution. Product X does not support OAuth currently; it implements basic authentication and generates a session token after authentication. Product X also implements SSO supporting native SAML 2.0.

+

Use Case

+

The mobile app also implements SSO using the existing SSO framework. In this flow the mobile app calls the SSO URL, and on authentication the IdP redirects back to the SaaS application, which on receiving the SAML token issues a session token that is passed to the mobile app. Then the mobile app calls the Product X APIs (hosted along with the SaaS application) using the session token.

+

Problem

+

As per our organisation's security recommendations, session token storage on mobile is not secure. They recommend OAuth/JWT tokens.

+

Probable Solutions

+
    +
  1. Product X implements OAuth - which is not feasible in the given time frame
  2. AWS Cognito as a federated cloud proxy - did some research on this, and found that it is used to facilitate authentication for AWS services
+

Any recommendations?

+
    +
  1. For storing the session token on the mobile app securely, in a way that can be used to convince the security team
  2. For using any other federated cloud proxy to get OAuth/JWT tokens
+",361812,,-1,,43998.41736,44099.54375,Best way to store Session token on mobile App,,1,0,0,,,CC BY-SA 4.0, +409393,1,411973,,4/28/2020 9:35,,-1,53,"

A relation R(A,B,C,D) is given.

+ +

C and D are equivalent (C is the course ID and D is the course name, one implies the other).

+ +

C and D are prime attributes.

+ +

Does that violate the requirement of 1NF or any other NF?

+",360306,,,,,44007.53958,Equivalent attributes and normal forms,,1,7,,,,CC BY-SA 4.0, +409405,1,,,4/28/2020 15:43,,0,80,"

I am pretty new to DDD, so any help/ideas will be appreciated. I will explain my initial design and problem below.

+ +

The user can ask the system to generate a products proposal. A proposal is basically something that has an ownership and a set of products, and it can be renewed: if the system evolves, new products should be automatically added to the proposal and returned to the user. Proposal is designed as a separate AR, as in the following class.

+ +
public class Proposal : AggregateRoot
+{
+    public Ownership OwnedBy { get; internal set; }
+
+    public IProposalState State { get; internal set; } =
+        ProposalStateCollection.ProposalInitializedState;
+
+    internal Proposal(Ownership ownedBy)
+        => SetIdentity(ownedBy.DeviceId, this);
+
+
+    public Result AttachTo(string meteringPoint)
+    {
+        if (string.IsNullOrWhiteSpace(meteringPoint))
+            return Result.Failure($""{nameof(meteringPoint)} cannot be null or empty."");
+
+        return State.AttachMeteringPoint(this, meteringPoint);
+    }
+
+    public Result AttachTo(OwnerAddress ownerAddress)
+    {
+        if (ReferenceEquals(null, ownerAddress))
+            return Result.Failure($""{nameof(ownerAddress)} cannot be null."");
+
+        return State.AttachOwnerAddress(this, ownerAddress);
+    }
+
+    internal Result<Product> PopulateWith(ProductType productType)
+    {
+        if (ReferenceEquals(null, productType))
+            return Result.Failure<Product>($""{nameof(productType)} cannot be null."");
+
+        return ProductSpecification
+            .Instance
+            .OwnedByProposal(Id)
+            .ForProductType(productType)
+            .InState(ProductStateCollection.ProductInitializedState)
+            .AndNoExistingIdentity()
+            .Build();
+    }
+
+    public Result ScheduleRecalculation()
+        => State.ScheduleRecalculation(this);
+
+ +

That's pretty straightforward: the user sends a request to the system, the application service serves the request (checking whether the proposal already exists because it was generated before, or creating one if not) and then calls the ScheduleRecalculation method, which simply generates a domain event based on the current state of the proposal.

+ +

When the handler receives the event that the proposal itself has been recalculated (ProposalRecalculationCompletedEvent), it asks a domain service to provide all the latest products plus the existing ones, and then starts recalculation of each product. Something like this. Product is modeled as a separate AR.

+ +
    public Task Handle(
+        DomainEventNotification<ProposalRecalculationCompletedEvent> notification,
+        CancellationToken cancellationToken)
+    {
+        var domainEvent = notification.DomainEvent;
+
+        return _proposalRepository.ProposalOfIdAsync(domainEvent.ProposalId, cancellationToken)
+            .Bind(proposalOrNone => proposalOrNone.ToResult(
+                $""Proposal [{domainEvent.ProposalId}] cannot be found. Won't be populated with latest products.'""))
+            .Bind(proposal => _productRepository.ProductsOfProposalAsync(proposal.Id)
+                .Tap(existing =>
+                {
+                    _productService.LatestProductsOfProposal(proposal, existing)
+                        .Tap(latest =>
+                        {
+                            var products = latest.ToList();
+                            _productService.RecalculatePriceOfProducts(products);
+                            _productRepository.PersistProductsAsync(products);
+                        });
+                }))
+            .OnFailure(error =>
+                _logger.LogError(
+                    ""Error occured while populating proposal [{proposalId}] with the latest products. [{error}]"",
+                    domainEvent.ProposalId,
+                    error));
+    }
+
+ +

RecalculatePriceOfProducts is a simple wrapper around the internal SchedulePriceRecalculation on the Product AR, which sends a domain event for each product to start recalculation. The Product AR itself:

+ +
public class Product : AggregateRoot, IEnumerable<PriceOffer>
+{
+    public string ProposalId { get; internal set; }
+
+    public IProductState State { get; internal set; } =
+        ProductStateCollection.ProductInitializedState;
+
+    public ProductType Type { get; internal set; }
+
+    public Maybe<PriceOffer> CurrentPriceOffer { get; internal set; }
+        = Maybe<PriceOffer>.None;
+
+    internal IList<PriceOffer> PriceOffers { get; set; }
+
+    private IDictionary<PriceOffer, PriceOffer> _priceOffersMap =>
+        PriceOffers.ToDictionary(p => p, p => p);
+
+    internal Product()
+        => SetIdentity(Guid.NewGuid().ToString(""D""), this);
+
+    internal Result SchedulePriceRecalculation(HourlyPrecisedDatePeriod forPeriod)
+    {
+        if (ReferenceEquals(null, forPeriod))
+            return Result.Failure($""{nameof(forPeriod)} cannot be null."");
+
+        if (HasValidPrice(forPeriod))
+            return Result.Success();
+
+        return State.SchedulePriceRecalculation(() =>
+            AddDomainEvent(this.ToProductPriceRecalculationScheduled(forPeriod)));
+    }
+
+    public Result EnrollPriceFor(
+        HourlyPrecisedDatePeriod validForPeriod,
+        IPriceOfferCalculation priceOfferCalculation)
+    {
+        if (ReferenceEquals(null, validForPeriod))
+            return Result.Failure($""{nameof(priceOfferCalculation)} cannot be null."");
+
+        if (ReferenceEquals(null, priceOfferCalculation))
+            return Result.Failure(
+                $""{nameof(priceOfferCalculation)} cannot be null."");
+
+        return State.EnrollPriceFor(() =>
+                priceOfferCalculation.Calculate(validForPeriod)
+                    .Ensure(
+                        priceOffer => !_priceOffersMap.ContainsKey(priceOffer),
+                        ""Price offer already enrolled within the product."")
+                    .Tap(priceOffer =>
+                    {
+                        PriceOffers.Add(priceOffer);
+                        CurrentPriceOffer = priceOffer;
+                    }))
+            .Tap(state => State = state);
+    }
+
+    private bool HasValidPrice(HourlyPrecisedDatePeriod forPeriod)
+        => CurrentPriceOffer.HasValue && CurrentPriceOffer.Value.IsValidForPeriod(forPeriod);
+
+    public IEnumerator<PriceOffer> GetEnumerator()
+        => PriceOffers.GetEnumerator();
+
+    IEnumerator IEnumerable.GetEnumerator()
+        => GetEnumerator();
+}
+
+ +

Here is the question: as soon as I iterate the products and schedule a price recalculation for each (via a domain event that is eventually mapped to a command and sent to the service bus), the products are processed in parallel. After processing is done for ALL products (no matter whether it succeeded or not) I need to send another event to the service bus. Any suggestion how I can do that? Since the Products are separate ARs, I need some way to know that, for that particular recalculation, all products finished with a Success or Failed result.
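
+ +

What I picture is some kind of ""recalculation tracker"" (a process manager) that counts the outstanding products and publishes a summary event once the last one reports back, whatever its result. A rough sketch of the idea, in Java for brevity since the shape is language-neutral (all names are invented):

+ +
import java.util.Collection;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Consumer;
+
+// Sketch: created when the proposal schedules N product recalculations;
+// each Succeeded/Failed event decrements the counter, and the completion
+// callback fires exactly once when the counter reaches zero.
+final class RecalculationTracker {
+    private final AtomicInteger remaining;
+    private final Map<String, Boolean> results = new ConcurrentHashMap<>();
+    private final Consumer<Map<String, Boolean>> onAllDone;
+
+    RecalculationTracker(Collection<String> productIds,
+                         Consumer<Map<String, Boolean>> onAllDone) {
+        this.remaining = new AtomicInteger(productIds.size());
+        this.onAllDone = onAllDone;
+    }
+
+    // Called from the handler of each product's Succeeded/Failed event.
+    void markDone(String productId, boolean success) {
+        results.put(productId, success);
+        if (remaining.decrementAndGet() == 0) {
+            onAllDone.accept(results); // e.g. publish ""proposal recalculated"" to the bus
+        }
+    }
+}
+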

+ +

Another question: look at the EnrollPriceFor method, which accepts an IPriceOfferCalculation. Basically, when a price calculation is scheduled and the command is sent to the service bus, the receiver builds an IPriceOfferCalculation implementation and passes it to the Product AR. The requirements say that if the calculation failed, another calculation should be scheduled with a fallback calculation. So who should do that? The receiver? The Product AR itself? And what would be the best design for that?

+ +

Hope it is possible to understand...

+",364398,,,,,44100.33542,DDD design: Tracking price calculation progress and fallback price calculation,,1,7,,,,CC BY-SA 4.0, +409408,1,,,4/28/2020 16:32,,-1,72,"

Writing automated unit tests is part of our development process.

+ +

We also do have an established code review process for the development code that is written.

+ +

Should the test code be reviewed? If yes, how can we make it less time-consuming for both the author and the reviewer?

+",364406,,,,,43949.71944,Code Review for Automated Unit Tests,,2,0,,43950.31042,,CC BY-SA 4.0, +409416,1,,,4/28/2020 19:22,,1,114,"

I understand an interface is a contract and if a class implements that interface, it must define those abstract methods from the interface. What I don't understand is, how is data passed between two classes that use an interface?

+ +

So for example: in Android, a Fragment has an interface, say OnFragmentInteractionListener. Somewhere in the code it calls:

+ +
onFragmentInteractionListener.displaySomeMessage(message);
+
+ +

The Activity will implement:

+ +
void displaySomeMessage(String message) {
+     System.out.println(message);
+}
+
+ +

How is ""message"" actually passed to the Activity? How does it retrieve this specific piece of data? I use interfaces all the time and know how to use them. But I just don't quite understand what the ""contract"" is actually doing behind the scenes, so that everyone uses the same data.

+",364424,,,,,43950.46042,What goes on behind the scenes when data is passed through the use of interfaces?,,2,5,1,,,CC BY-SA 4.0, +409417,1,409439,,4/28/2020 19:25,,0,35,"

I'm writing my first Redux app. In my store, I have ~300-500 Island objects that I retrieve from an API and index by an id string in an object (treated like a map). When I'm editing one of these Islands by setting editId in the store, the buttons for the other Islands need to be disabled. I can think of three ways I can do this.

+ +
    +
  1. Have all of the components watch editId for changes and check their Island.id against the editId (I'm guessing this is very slow)
  2. Add a disabled prop to my island component, have the container that maps the Islands to components watch for editId, and calculate disabled for each island on render
  3. Add a disabled field to the Island objects in the store and update each of these objects when my edit action fires
+ +

My hunch is option 2, as that would definitely be the cleanest approach.

+ +

This question is coming from a place of migrating from mobx (which had terrible performance) to redux, and hoping performance improves. If any of the three of these will have virtually the same performance, then I'll choose option 2. I just don't know if there's a redux pattern established that I should be using.

+",169692,,169692,,43949.84514,43950.19167,In Redux is it better for performance to add a property to the items in the store or to calculate it in the container?,,1,2,,,,CC BY-SA 4.0, +409418,1,,,4/28/2020 19:26,,-5,92,"

I have been trying to use Node.js since it was first released, and in all of the intervening years, not once have I encountered a single Node.js project that worked as described when using the directions for initial setup, testing, and/or deployment in that project's official documentation.

+ +

Is this due to immature package management? Are there other structural problems with Node.js projects that prevent them from working as intended? Or am I just unlucky and happen to test Node.js projects and frameworks only when they have critical bugs?

+",182971,,,,,43949.92917,Why do Node.js projects not function properly?,,1,3,,43950.46181,,CC BY-SA 4.0, +409420,1,,,4/28/2020 19:28,,0,25,"

I'm trying to find a good way to manage permissions for a high number of mongo documents.

+ +

What I want to do: apply group, user, and/or application permissions to mongo documents.

+ +

Idea 1: Use MySQL to control the relationship (cons: duplication of data, 2 different systems)

+ +

Idea 2: Add metadata to each document (cons: would have to update a ton of docs)

+ +

Idea 3: Create another collection to manage the relationship (leaning toward this)

+ +

I keep going in circles.. posting here out of desperation. If anyone has a design pattern or any ideas I'd really appreciate it.

+ +

I'm dealing with a high number of mongo docs (1 million+) and am trying to avoid the situation where I need to update a high number of docs every time a new group/application/permission is added for a doc or a subset of docs.

+ +

Thanks!

+",364426,,,,,43949.81111,Creating group/applicaton/user permissions around mongodb documents,,0,2,,,,CC BY-SA 4.0, +409424,1,409436,,4/28/2020 20:36,,-2,52,"

I'm struggling to find how to correctly design a database in order to store and retrieve data for my software. +Basically, I'm designing an application for a gym (C++, MySQL with phpMyAdmin):

+ +
    +
  1. each user (unique ID, name, surname) can have one or more training routines
  2. a training routine is divided into training schedules (from A to E)
  3. each training schedule contains a series of exercises (name, set, reps) divided into muscular groups (i.e. CHEST, SHOULDER, BICEPS and so on).
+ +

The training routines can be created or retrieved from the database by checking whether they are associated with a specific user; a combination of exercises is not unique and can be shared by one or more users.

+ +

For example:

+ +
              -- Start of training routine #1 --
+EXERCISES                                SET      REP   
+
+Training schedule A
+- CHEST
+Wide chest hammer                         4       10
+Incline press hammer                      3       15
+Pek Fly                                   4       10
+- BICEPS
+Scott Machine                             4       10
+Curls                                     4       12
+- LEGS
+Calf press                                4       16
+
+Training schedule B
+- SHOULDER                               
+Should press hammer                       3       10
+Reverse pek fly                           3       10
+Push Press                                3       10
+- QUADRICEPS
+Leg Curls                                 4       10
+Leg Press                                 5       12
+
+Training schedule C
+.. so on..
+              -- End of Training routine #1 --
+
+ +

I thought about creating a table for the users, then a table for the training routines with a unique ID, where I would store the schedules (A, B, C, ...) under the same ID if they are included in the same training routine; then a muscular group table where I store the exercises. +But I do not know if this is the correct solution, because I still have to handle the SETs and REPs.

+ +

What do you suggest? +Maybe the training routine table is redundant, because I can retrieve the same information by looking at all the schedules which report the same ID.

+ +

Should I use a single table to store the whole training routine, or would it be better to use multiple tables?

+",364431,,,,,43950.10556,Design a database with multiple relations and tables,,1,2,,,,CC BY-SA 4.0, +409426,1,409441,,4/28/2020 21:28,,0,57,"

Currently I'm developing a movie reservation system for diving into Spring Framework.

+

I'm using Spring Data JPA for database modeling. I have several model classes, but I am stuck with the time-based logic in my application and can't finish the modelling phase.

+

Here's the general design of my classes.

+
+

every CinemaHall has one or more Showrooms

+

every Showroom has different number of Seats and Sessions

+

every Session is assigned to a Movie.

+
+

My problem is, I can't decide how a reservation should be done. A reservation of course must be made for a specific Seat instance. That Seat instance in fact belongs to a Showroom, but the reservation status of a Seat can vary across different Sessions.

+

In order to solve this problem, I could declare a seats field in Session, but then lots and lots of Seat instances will be created for all Sessions of that Showroom (and all these instances are eventually persisted to the database).
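
+

One alternative I'm considering for the first problem (not sure if it is idiomatic) is to keep Seat a physical property of the Showroom and create a small reservation row only when a seat is actually booked for a Session. A rough sketch with abbreviated entities (field lists and names are made up):

+
import javax.persistence.*;
+
+@Entity class Seat { @Id @GeneratedValue Long id; /* row, number, showroom, ... */ }
+@Entity class Session { @Id @GeneratedValue Long id; /* movie, showroom, dateTime, ... */ }
+
+// Sketch: rows exist only for actual bookings, so no Seat copies per Session.
+@Entity
+@Table(uniqueConstraints = @UniqueConstraint(columnNames = {""session_id"", ""seat_id""}))
+class SeatReservation {
+    @Id @GeneratedValue Long id;
+
+    @ManyToOne Seat seat;       // the physical seat in the showroom
+    @ManyToOne Session session; // the screening it is reserved for; the unique
+                                // constraint above prevents double booking
+}
+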

+

The second problem is that every Session naturally has a datetime field, so that a user can choose the day and hour from the available Sessions. But since datetime is a temporal notion, it can go on forever. What I mean is, I can't decide if I should pre-create a week's, a month's or a year's worth of Sessions in the database, or create them on demand whenever a user requests a reservation for a Session.

+

If I choose pre-creating, say, a week of Sessions, then when a week passes, should I recreate new ones, or is a scheduled database procedure called every night a more convenient way of doing it?

+",278767,,-1,,43998.41736,43950.63889,Mapping of time based models to database,,1,4,,,,CC BY-SA 4.0, +409428,1,,,4/28/2020 21:50,,0,312,"

I am developing an app and splitting it into microservices. The app is a booking system. There are two microservices: 1) bookings and 2) company details.

+ +

For the bookings microservice, this contains entities such as customers, bookings, services, etc. All these entities are related to each other at the database level.

+ +

What I am wondering is, should each of these entities (bookings, customers, services, etc) be their own API and microservice? If so, how would I then handle relationships?

+",333734,,,,,43983.83889,Microservice boundaries and table relationships,,3,1,,,,CC BY-SA 4.0, +409433,1,409444,,4/29/2020 0:21,,-1,89,"

On Android, do people create separate apps accessing the same database? For example, an app that has users, producers and customer support.

+ +

How can one decide if it makes more sense to have a single app for this case, or multiple apps accessing a shared database?

+ +

For example: having a single app and classifying each user via their input when they log in, redirecting them to their activities. Or: making separate apps, which I guess could be easier for development and maintenance purposes?

+ +

A good example of what I am talking about would be an app like ""Uber"".

+",364444,,9113,,43950.44444,43950.44444,What makes more sense: (a) To have multiple apps accessing the same DB (b) single app,,3,2,,,,CC BY-SA 4.0, +409438,1,409442,,4/29/2020 4:35,,-3,56,"

Please help me understand what event-based data integration is, in simple layman's terms, with some examples. How is it different from other forms of data integration? Some sample use cases would be an additional help.


+",364402,,,,,43950.24375,what is event based data integration?,,1,0,,,,CC BY-SA 4.0, +409445,1,,,4/29/2020 7:19,,0,21,"

Our dev environment is currently sharing an ActiveMQ service hosted on AWS. +This however resulted in problems where dev A's service produces a message that triggers dev B's localhost consumer.

+ +

Is there a way to segregate and ""fence"" a common ActiveMQ service so that we can share the same MQ service without messages being consumed by another person?

+ +

We have attempted the following:

+ +
    +
  1. Each dev points to their own localhost MQ installation (workable, but if I can get it working on a shared instance, it will avoid additional setup steps for new devs).
  2. Looking for a VirtualHost feature like RabbitMQ's, which I read may work for my use case. Unable to find a similar feature for ActiveMQ.
+ +

Further info:

+ +

Our project is based on .NET Core 3.1 / C#. The MQ implementation in code is abstracted behind the MassTransit library.

+",364464,,,,,43950.30486,Can a common ActiveMQ service be shared in a dev environment without messages being consumed between developers?,<.net-core>,0,0,,,,CC BY-SA 4.0, +409448,1,409453,,4/29/2020 9:26,,-1,58,"

I'm not sure if this is the adequate site (maybe Code Review?) but it's the only one here on Stack Exchange that has a ""clean code"" tag ... There is no need for downvotes; if this is not the correct place, just inform me in the comments and I will move it.

+ +

I've got this piece of Java code:

+ +
 private void executeCommand() {
+
+    boolean part1ExecutionOK = executePart1();
+    boolean part2ExecutionOK = false;
+    boolean part3ExecutionOK = false;
+
+    if (part1ExecutionOK) {
+         part2ExecutionOK = executePart2();
+
+         if (part2ExecutionOK) {
+               part3ExecutionOK = executePart3();
+
+               if (part3ExecutionOK) {
+                    finishCommand();
+               }
+         }
+    }
+
+    // [*] Log what happened depending on boolean flags
+}
+
+ +

I know that I can simplify it just like this

+ +
private void executeCommand() {
+
+    if (executePart1() && executePart2() && executePart3()) {
+         finishCommand();
+    } else {
+         // [*] Log what happened - I don't know which part failed!
+    }
+}
+
+ +

I could throw a specific exception from each execute method and catch it:

+ +
 private boolean executePart1() throws MyException {
+
+      // Do whatever
+
+      if (!checkConditionsPart1()) {
+           throw new MyException(ERROR_CODE_1);
+      }
+
+      return true;
+ }
+
+ ...
+
+
+private void executeCommand() {
+
+    try {
+         if (executePart1() && executePart2() && executePart3()) {
+               finishCommand();
+         } 
+
+         // [*] Log what happened -> Execution OK
+
+    } catch (MyException e) {
+        // [*] Log what happened -> Execution KO depending on e.getMyErrorCode();
+    }
+}
+
+ +

But I'm not comfortable throwing exceptions, because an executePart method that does not meet its expected conditions is not exceptional behaviour.

+ +

Is there a better way to structure the sequential but conditional execution of my three methods and log which one of them has failed?
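
+ +

Here is a rough sketch of one direction I've considered (""log"" stands for whatever logger is in use): model the parts as a named, ordered table and stop at the first failure, so the log always knows exactly which step broke.

+ +
import java.util.LinkedHashMap;
+import java.util.Map;
+import java.util.function.BooleanSupplier;
+
+private void executeCommand() {
+    // LinkedHashMap preserves insertion order, so the steps run sequentially.
+    Map<String, BooleanSupplier> steps = new LinkedHashMap<>();
+    steps.put(""part1"", this::executePart1);
+    steps.put(""part2"", this::executePart2);
+    steps.put(""part3"", this::executePart3);
+
+    for (Map.Entry<String, BooleanSupplier> step : steps.entrySet()) {
+        if (!step.getValue().getAsBoolean()) {
+            log.warn(""Command aborted: step '{}' failed"", step.getKey());
+            return;
+        }
+    }
+
+    finishCommand();
+    log.info(""Command executed successfully"");
+}
+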

+",223337,,223337,,43950.42153,43951.54514,Execute sequence of methods conditionally and log possible error,,2,1,,,,CC BY-SA 4.0, +409449,1,,,4/29/2020 9:38,,-2,177,"

I would like to use UML diagrams to show some processes I am designing and would like to implement.

+ +

The processes involve using a queue, and adding and taking out elements from it.

+ +
    +
  • In one of these processes, adding and removing elements is done by two different threads.

  • +
  • In the other, both operations are done by the same thread.

  • +
+ +

So far I have used sequence diagrams to represent the process involving multiple threads, but these diagrams don't have the level of granularity to show the queue processing.

+ +

Which is the best way to represent this with UML?

+",296531,,209774,,43952.34931,43952.34931,How can I represent with UML a process that involves queues?,,1,6,,,,CC BY-SA 4.0, +409450,1,,,4/29/2020 9:42,,0,188,"

In order to build a messaging app, I have followed this example : https://github.com/gorilla/websocket/tree/master/examples/chat

+ +

This consists of a Hub, running in a single goroutine in the program, which binds together the Clients (the intermediary between the socket and the Hub).

+ +

In this example, each Client (connected user/websocket) runs 2 goroutines: one for reading messages, one for writing. +So we end up with a single goroutine for the Hub, and many goroutines for all the connected users in the app (2 per user).

+ +

But this is for an app which all it does is broadcast all the messages to all the connected users. Since my goal is to build a messaging app (similar in functionality to whatsapp, messenger and the likes, where a user has private conversations with a friend(s)), I need to adjust this design accordingly.

+ +

So, I've split this into two layers :

+ +
    +
  1. Connection layer - Hub + Client. This layer is responsible for keeping the connections with the clients (plus everything that it entails - reading/writing messages, encoding/decoding, etc.)
  2. Business Logic layer - the ""core"" / the logic itself. Meaning: communicating with the DBs, be it SQL or a cache like Redis, in order to insert messages, find participants in each conversation, etc.
+ +

Challenges :

+ +

Since the Logic layer has the more resource-consuming operations (reading and writing to DBs, finding out which users belong to each conversation so it knows which users in the Hub's Clients mapping to send the message to), I'm having a hard time deciding where that should live and how the two layers should interact/communicate with each other.

+ +

Because the Hub is central to the app and has 1 goroutine to handle all traffic, I'm assuming it cannot just call ""processMessage"" in the Service layer and wait for its response. Otherwise we can potentially reach a state where we have many pending messages and users start receiving messages with great delay. (Please do correct me if I'm wrong here.)

+ +

Ideas so far + problems :

+ +

So what I thought of doing, is to have another Service goroutine for the app, just like the Hub, which the Hub will run once it's initialized, and then the Hub could send a message for it to process using a channel between them. Then, once the Service is done, it sends back to the Hub's channel something like ""hey dear Hub, send this message I gave you, to users: 1, 2, 3"".

+ +

But here I think I have another problem - since Hub and Service are in different packages, it means I need to have a cyclic dependency, because Hub knows about Service (it forwards a received message for it to process), and Service knows about the Hub (it tells the Hub what message to send to which users after it's done processing the received message from hub).

+ +

So it seems that each option has either a performance issue or an architectural/design issue. Or both. +Since I'm not an expert in Go, it would be really helpful to learn how to handle this kind of case both efficiently (performance-wise) and correctly (design-wise).

+ +

EDIT :

+ +

Following Jory Geerts's fantastic answer using one goroutine for the service layer, I have tried to pull together a solution that runs a service/logic goroutine for each client instead of one goroutine for the app. But I'm not sure about this solution, and I'm also not sure how to terminate the service goroutine when a client disconnects.

+ +

This is half-baked code, just to show what I had in mind:

+ +
type Service struct {
+    // the 'processing' channel (chan messaging.Incoming) will come in as a parameter to run()
+
+    delivery chan messaging.Outgoing // reference from Hub
+    repo *sqlx.DB // pool of postgres connections.
+    cache redis.Pool // pool of redis connections.
+} 
+
+type Hub struct {
+    clients map[*Client]bool
+    delivery chan messaging.Outgoing
+}
+
+type Client struct {
+    hub *Hub
+    send chan []byte
+    conn *websocket.Conn
+    processing chan messaging.Incoming 
+} 
+
+ +

and the code itself :

+ +

main.go :

+ +
func main() {
+    ...
+    ...
+
+    hub := newHub()
+    go hub.run()
+
+    // note: Go has no named arguments; passing positionally: delivery, repo, cache
+    service := logic.NewService(hub.delivery, postgresPool, redisPool)
+
+    http.HandleFunc(""/ws"", func(w http.ResponseWriter, r *http.Request) {
+        serveWs(hub, service, w, r)
+    })
+}
+
+func serveWs(hub, service, w, r) {
+    processing := make(chan messaging.Incoming)
+
+    go service.run(processing)
+
+    client := &Client{hub: hub, processing: processing, conn: conn, send: make(chan []byte, 256)}
+    go client.readPump()
+}
+
+ +

client.go (package connection):

+ +
func readPump() {
+    ...
+    defer func() {
+        c.conn.Close()
+        close(c.processing)
+    }()
+
+    for {
+       _, msg, _ := c.conn.ReadMessage() // the struct field is conn; error handling elided
+       c.processing <- msg
+    }
+}
+
+ +

service.go (package logic) :

+ +
func (s *Service) run(processing <-chan messaging.Incoming) {
+    for {
+        select {
+        case incomingMessage := <-processing:
+            result := processMessage(incomingMessage)
+            s.delivery <- result
+        }
+    }
+}
+
+ +

hub.go (package connection) :

+ +
func (h *Hub) run() {
+    for {
+        select {
+        case result := <-h.delivery:
+            h.clients[result.userId].send <- result.msg // assumes clients is keyed by user id
+        }
+    }
+}
+
+",356993,,356993,,43953.65347,43953.65347,Go (Golang) efficient logic processing in a chat system,,1,1,0,,,CC BY-SA 4.0, +409451,1,,,4/29/2020 9:58,,0,29,"

I'm working on an air quality index prediction model for a certain city.

+ +

I have a dataset composed of hourly pollutant readings (5 pollutants) from up to 24 basestations. Not all basestations can read all 5 pollutants at the same time, and some of them are read alternately, i.e. at a certain time pollutants A, B and C are read, and an hour later only pollutants D and E are read at the same basestation.

+ +

The way I have treated those NaN values has been to replace them with the pollutant column's mean value. Since there are up to 24 basestations distributed all over the city to be studied, I have chosen only one with all its readings in order to work on the prediction model.

+ +

1 - Is working with just one basestation's readings a good approach in order to predict the air quality index in particular city zones?

+ +

2 - Would it make sense to use all the basestations' readings and feed the prediction model with all of them?

+ +

Once I have filtered the readings of a basestation, I average each pollutant to obtain a 24-hour mean reading value, hence ending up with a daily readings dataframe from a single basestation. In order to obtain the air quality index, the maximum value across all pollutants is computed for each row.

+ +

The row's maximum value is then looked up in a chart of pollutant ranges, and the range it falls into gives the air quality index.

+ +

+ +

For example, from the following sample of dataframe (df):

+ +
            O_3         NO_2        SO_2        PM10        PM25        CO          Label
+date                            
+2001-01-01  19.685217   53.789130   10.870435   20.306522   12.505127   1.055217    2.0
+2001-01-02  25.496667   64.332083   10.119167   27.647917   12.505127   0.965417    2.0
+2001-01-03  17.052917   69.595833   10.700833   33.777500   12.505127   0.965833    2.0
+2001-01-04  18.335000   69.926666   11.472500   36.369583   12.505127   0.855000    2.0
+2001-01-05  9.731667    65.272917   10.611250   32.444167   12.505127   1.174583    2.0
+... ... ... ... ... ... ... ...
+2018-04-27  52.875000   52.125000   1.000000    15.166667   7.125000    0.362500    1.0
+2018-04-28  63.208333   30.625000   1.000000    13.000000   7.791667    0.245833    1.0
+2018-04-29  68.375000   29.833333   1.000000    5.458333    3.750000    0.241667    1.0
+2018-04-30  60.916667   37.375000   2.708333    4.083333    3.208333    0.279167    1.0
+2018-05-01  52.000000   43.000000   4.000000    6.000000    4.000000    0.300000    1.0
+
+ +

It can be appreciated that for the first row, the maximum value belongs to NO_2. If we check the pollutant range chart, for NO_2 a reading of 53.7891 falls in range 2 (Good/Bueno). So far so good.

+ +

My initial idea is to use an RNN. First I would vertically shift the labels column by X days in order to be able to predict the air quality index X days ahead. In other words, if I shift the labels in the above df by 2, the label for the first row (2001-01-01) will be the one from 2001-01-03. So, in a sense, by training the RNN model with the features from 2001-01-01, the model should predict what the air quality index is going to be on 2001-01-03.

+ +

My problem is that when I observe the variability of labels:

+ +
2.0    4095
+1.0    1354
+3.0     296
+4.0      64
+5.0      15
+Name: Label, dtype: int64
+
+ +

As can be seen, there are a lot of days labeled 2 and 1, and not so many labeled 3, 4 and 5, which makes me think that the prediction model is going to be biased. In a sense, the RNN is going to perform great when predicting labels 2 and 1, but not for labels 3, 4 and 5.

+ +

Any ideas on how to approach this issue? Any recommendations?

+",364470,,364470,,43950.48333,43950.48333,How to approach the lack of labels variability in a dataset for a RNN model?,,0,1,,,,CC BY-SA 4.0, +409454,1,,,4/29/2020 10:16,,1,438,"

I wrote this class:

+ +
using System;
+using System.Threading;
+using System.Threading.Tasks;
+
+    public interface IJobScheduler {
+        void RunDaily(Task task, int hour, int minutes );
+    }
+
+    public class JobScheduler : IJobScheduler, IDisposable
+    {
+        Timer timer;
+        bool isEnabled = true; // not in the original snippet, but referenced by the timer callback
+
+        /// <summary>
+        /// Run the provided task every day at the defined time.
+        /// </summary>
+        /// <param name=""task"">Action to execute</param>
+        /// <param name=""hour"">At what time (hour) the task have to be executed. LOCAL time.</param>
+        /// <param name=""minutes"">At what time (minutes) the task have to be executed. LOCAL time.</param>
+        public void RunDaily(Task task, int hour, int minutes)
+        {
+            var todayRun = DateTime.Today.Add(new TimeSpan(hour, minutes, 0));
+
+            var timeToGo = todayRun > DateTime.Now ?
+                (todayRun - DateTime.Now) :          // run today
+                todayRun.AddDays(1) - DateTime.Now;  // run tomorrow
+
+            timer = new Timer(x => { if (isEnabled) task.RunSynchronously(); },
+                state: null,
+                dueTime: (int)timeToGo.TotalMilliseconds,
+                period: 24 * 60 * 60 * 1000 /* 24h */);
+        }
+
+        public void Dispose()
+        {
+            try { timer?.Dispose(); } catch { }
+        }
+    }
+
+ +

Now, this is the initial test I wrote:

+ +
        [Test, Category(""long_test"")]
+        public void TaskScheduler_RunDaily__should__execute_at_the_specified_time() {
+
+            // scheduler has a precision of 1 minute so...
+            var runAt = DateTime.Now.AddMinutes(1);
+
+            var runCounter = 0;
+            Task task = new Task(
+                () => runCounter++
+                );
+
+            int hours = runAt.Hour;
+            int minutes = runAt.Minute;
+            var scheduler = new JobScheduler();
+            scheduler.RunDaily(task, hours, minutes);
+
+            // scheduler has a precision of 1 minute so...
+            Thread.Sleep(TimeSpan.FromSeconds(60+2));
+
+            runCounter.Should().Be(1);
+        }
+
+
+ +

It can be improved to exit as soon as the Task is executed, or, by introducing seconds as a parameter (with default zero), maybe I'll be able to reduce the test time to 1 or 2 seconds.

+ +

But my real question is this:
+How can I check that the Scheduler has a Timer set to 24 hours (i.e. that it will run the task after 24 hours)?

+ +

I started thinking about verifying the internal Timer...

+ +
        [Test]
+        public void TaskScheduler_RunDaily__should__have_an_internal_timer_set_to_24_hours()
+        {
+            // scheduler has precision of 1 minute so...
+            var runAt = DateTime.Now.AddMinutes(1);
+
+            var runCounter = 0;
+            Task task = new Task(
+                () => runCounter++
+                );
+
+            int hours = runAt.Hour;
+            int minutes = runAt.Minute;
+            var scheduler = new JobScheduler();
+            scheduler.RunDaily(task, hours, minutes);
+
+            // use reflection to check the internal Timer
+            var timerField = typeof(JobScheduler).GetField(""timer"",
+                System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
+
+            if (timerField == null) Assert.Fail(""Cannot read the Timer field"");
+
+            var timer = (System.Threading.Timer)timerField.GetValue(scheduler);
+
+            // what can I test here ?  
+            timer.<internal ""enabled""> Should().Be(true);           
+            timer.<internal ""interval""> Should().Be(24_HOURS);
+        }
+
+ +

and I think it is ok to use reflection and check some internal implementation of MY code, but it is not acceptable to verify the internal implementation of Timer itself.

+ +

The real behavior to test here is the fact that the task runs after 24 hours.
+I can think of ""exposing"" those 24 hours so as to mock them, but I really don't like the idea: RunDaily means 24 hours, why should I expose this value as a parameter?
+Another rule I like to follow is: ""Don't make the code ugly just because it has to be testable; prefer simplicity.""
+To be clear, the RunDaily method is 10 lines of code and it takes only the needed input; I don't want to change it just because there is no practical way to unit test it as it is.
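
+ +

The compromise I keep circling back to is to leave the Timer itself untested and pull the date arithmetic into a pure function that is trivially testable, with the 24h period as an assertable constant rather than a parameter. A sketch of the idea in Java, since the shape is the same (the C# translation is mechanical):

+ +
import java.time.Duration;
+import java.time.LocalDateTime;
+import java.time.LocalTime;
+
+final class DailySchedule {
+    static final Duration PERIOD = Duration.ofHours(24); // asserted directly in a test
+
+    // Pure function: no Timer, no sleeping, no reflection needed to test it.
+    static Duration dueTime(LocalDateTime now, int hour, int minutes) {
+        LocalDateTime todayRun = now.toLocalDate().atTime(LocalTime.of(hour, minutes));
+        return todayRun.isAfter(now)
+                ? Duration.between(now, todayRun)              // run today
+                : Duration.between(now, todayRun.plusDays(1)); // run tomorrow
+    }
+}
+
+// In a test:
+// assertEquals(Duration.ofMinutes(30),
+//         DailySchedule.dueTime(LocalDateTime.of(2020, 4, 29, 7, 30), 8, 0));
+// assertEquals(Duration.ofHours(24), DailySchedule.PERIOD);
+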

+ +

[Update] It is worth explaining why I'm so reluctant to replace a simple implementation with ... something else.
+This is the current implementation I found in a project:
+

+ +

and this is CurrentTimeProvider:

+ +
    public class CurrentDateTimeProvider : ICurrentDateTimeProvider
+    {
+        public DateTime GetNow()
+        {
+            return DateTime.Now;
+        }
+    }
+
+ +

and SchedulerTimeProvider:

+ +
    public class SchedulerTimeProvider : ISchedulerTimeProvider
+    {
+        #region Fields
+
+        private readonly ICurrentDateTimeProvider _currentDateTimeProvider;
+
+        #endregion
+
+        #region Constructor
+
+        public SchedulerTimeProvider(ICurrentDateTimeProvider currentDateTimeProvider)
+        {
+            _currentDateTimeProvider = currentDateTimeProvider;
+        }
+
+        #endregion
+
+        #region Public Methods
+
+        public ISchedulerTimeInfo GetSchedulerTimeInfo(string scheduledTime)
+        {
+            var defaultSchedulerTime = scheduledTime.Split("":"");
+            var defaultSchedulerHour = int.Parse(defaultSchedulerTime[0]);
+            var defaultSchedulerMinute = int.Parse(defaultSchedulerTime[1]);
+
+            TimeSpan due;
+            TimeSpan period;
+
+
+            var now = _currentDateTimeProvider.GetNow();
+
+            DateTime dueDateTime = new DateTime(now.Year, now.Month, now.Day, defaultSchedulerHour,
+                defaultSchedulerMinute, 0);
+
+
+            if (now > dueDateTime)
+            {
+                var tomorrowsDate = now.AddDays(1);
+                var tomorrowsScheduleDateTime = new DateTime(tomorrowsDate.Year, tomorrowsDate.Month, tomorrowsDate.Day, defaultSchedulerHour, defaultSchedulerMinute, 0);
+                due = tomorrowsScheduleDateTime.Subtract(now);
+                dueDateTime = tomorrowsScheduleDateTime;
+            }
+            else
+            {
+                dueDateTime = new DateTime(now.Year, now.Month, now.Day, defaultSchedulerHour, defaultSchedulerMinute,
+                    0);
+                due = dueDateTime
+                    .Subtract(now);
+            }
+
+
+            period = due.Add(new TimeSpan(1, 0, 0, 0));
+            //period = due.Add(new TimeSpan(0, 0, 2, 0));
+
+
+            return new SchedulerTimeInfo(due, period, dueDateTime);
+        }
+
+        #endregion
+
+    }
+
+ +

(just 2 of the 6 interfaces/classes as example)

+ +

Now, that scheduler is used/started in this way:

+ +
private void StartProcess()
+        {
+            if (_scheduleTimer == null)
+                _scheduleTimer = new Timer(ExecuteExporterProcess);
+
+            var schedulerTimeInfo = _schedulerTimeProvider.GetSchedulerTimeInfo(_config.RegistrationsSchedulerTime);
+
+            var due = schedulerTimeInfo.Due;
+            var period = schedulerTimeInfo.Period;
+
+            _scheduleTimer.Change(due, period);
+        }
+
+ +

The ExecuteExporterProcess method does something and then calls StartProcess again.

+ +

I spent some time understanding it, and then I wrote my simple implementation.
+The current scheduler code is not tested in this solution/project because it was copied from another project, and I strongly suspect there are no tests there either (I knew the developer who wrote it).
+Also, the class that contains StartProcess is not tested, so the timer itself is not tested at all. I put my simple implementation in a common package, but before making it available to everyone I want to fully test it; because I put the Timer ""logic"" inside the new implementation, I want to test that too.

+ +

As you can see, the current implementation is a little bit ""verbose"", and I think I'm able to introduce some interfaces to make mine testable but less complicated.
+But, I hope, you can see WHY I'm reluctant/scared to go in the direction of writing such ""verbose"" code.

+",156281,,156281,,43950.47778,43955.54931,C# | How to Unit test a Timer (in a daily job run scheduler)?,,1,7,,,,CC BY-SA 4.0, +409455,1,409535,,4/29/2020 10:58,,42,9942,"
+

Don't be afraid to make a name long. A long descriptive name is better + than a short enigmatic name. A long descriptive name is better than a + long descriptive comment.

+ +

Robert C. Martin

+
+ +

Did I understand Clean Code right? +You put all the information that you would put into a comment into the class/method/... name. +Wouldn't that lead to long names like:

+ +
PageReloaderForPagesDisplayingVectorGraphicsThatAreUsedInTheEditorComments
+PageReloaderForPagesDisplayingVectorGraphicsThatAreUsedInTheEditorDescriptions
+
+",364477,,364477,,43956.35556,43966.26597,Clean Code: long names instead of comments,,10,16,5,,,CC BY-SA 4.0, +409458,1,409468,,4/29/2020 11:20,,1,188,"

One common issue with secure passwords is that users tend to ""cheat""; one common cheating pattern we've met recently is the ""password swap"" antipattern, where the user basically keeps alternating between the same two passwords forever, e.g.:

+ +
    +
  • Password1!
  • Secret2$
  • Password3!
  • Secret4$
+ +

This antipattern works because:

+ +
    +
  • any new password is completely different from the previous one
  • the history of hashes does not contain any exact match
+ +

Is there any algorithm which is ""similar"" to a hash but allows extracting a distance metric from the current plain-text password, in order to avoid those risks?

+ +

Here is my analysis so far

+ +
    +
  • Hashing explicitly requires that ""similar"" passwords turn into very different hashes: otherwise it would be very easy to converge from a generic password toward the one that generated the hash. Any ""hash""-like algorithm which allowed calculating a distance metric from the current password would be a security threat.
  • I can't think of any workaround to come up with a hash which allows measuring ""similarity"" without giving away some kind of ""distance"" metric, which, as stated above, would render it insecure.
  • Another approach would be storing hashes of subsets of the password. E.g. we store the hashes of the previous 10 passwords plus the hashes of those passwords minus the last two chars: this would block the above example. However, in order to work we might have to collect too many hashes of too-small substrings (e.g. every group of 6 chars), and this would give away the plain-text password!
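
+ +

For context, the only point where I can compare plain texts at all is at password-change time, when the user supplies the old password. A sketch of that check (Python; the Levenshtein package and the threshold are my own assumptions) also shows the limitation: it only covers the immediately previous password, not the full history:

+ +

import Levenshtein  # assumption: the python-Levenshtein package is available
+
+def too_similar(old_plain: str, new_plain: str, threshold: int = 3) -> bool:
+    # Both values are plain text here, so no special hash is needed;
+    # but this cannot be extended to older passwords we only hold hashes of.
+    return Levenshtein.distance(old_plain, new_plain) < threshold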
+",364479,,,,,43950.59861,"hash-like algorithm to identify passwords which are ""too similar"" to previous ones from history",,3,7,,,,CC BY-SA 4.0, +409459,1,,,4/29/2020 11:20,,-1,163,"

In the project I am working on, we are building a feature where you have a file with 2 buttons. Button 1 opens the file in your browser; button 2 starts downloading the file. I see these as 2 distinct interactions.

+ +

We are using Angular 8 in the front-end and ASP.NET 3.0 in the back-end, but I think the front-end stack shouldn't matter since we can just get away with an <a> tag that points to the action where you download the file. No JS needed.

+ +

I know the user can decide in the browser what to do when downloading certain file types. For example, in Firefox you can decide to open PDF files with the built-in PDF viewer, ask the user what to do, or point to some application that handles the action. But (correct me if I am wrong!) I believe that this applies when a PDF is ""downloaded to the browser"", which is different from downloading it to the disk? I hope my code makes the matter more obvious.

+ +
+ +

The main question: Is it correct that the API has an option to return the file either so that it opens in the browser or so that it downloads to disk, or should the API just return the file and should the front-end handle the action that each button advertises?

+ +

If the answer is that the front-end should do all the magic, which return File() statement in my code below should be used?

+ +
+ +

The important part of my controller action looks like this:

+ +

URL: https://localhost:3435/api/foo/1/bar/download?openInBrowser=true

+ +
// Dummy code, we are downloading a file called Hello.pdf with the content-type application/pdf
+
+var byteArray = GetBytesFromFile();
+var contentType = GetContentTypeFromFile();
+var fileName = GetFileNameFromFile();
+
+if (openInBrowser)
+{
+    return File(byteArray, contentType);
+}
+else
+{
+    return File(byteArray, contentType, fileName);
+}
+
+
+ +

Now, if you have the following HTML:

+ +
<a href=""https://localhost:3435/api/foo/1/bar/download?openInBrowser=true"">Open in browser</a>
+<a href=""https://localhost:3435/api/foo/1/bar/download?openInBrowser=false"">Download</a>
+
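
+ +

(For completeness, the one purely front-end lever I'm aware of is the HTML5 download attribute, which hints the browser to save the file instead of displaying it; I'm not sure how it interacts with the Content-Disposition header my API sends:)

+ +

<!-- Front-end alternative: the download attribute asks the browser to save. -->
+<a href=""https://localhost:3435/api/foo/1/bar/download"" download=""Hello.pdf"">Download</a>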
+ +

Clicking on ""Open in browser"" results in the following in firefox using the built-in PDF viewer:

+ +
    +
  • It opens the PDF in the built-in viewer
  • The response headers are the following, according to cURL:
+ +
HTTP/1.1 200 OK
+Date: Wed, 29 Apr 2020 11:11:24 GMT
+Content-Type: application/pdf
+Server: Kestrel
+Content-Length: 161689
+Content-Disposition: attachment; filename=Hello.pdf; filename*=UTF-8''Hello.pdf
+
+ +

Clicking on ""Download"" results in the following using firefox:

+ +
    +
  • It asks me what to do with the file (this is a user-specific setting)
  • If I select ""save file"", it saves it to my Downloads folder; otherwise I select which application should handle the file.
  • Both options result in the same response headers:
+ +
HTTP/1.1 200 OK
+Date: Wed, 29 Apr 2020 11:18:32 GMT
+Content-Type: application/pdf
+Server: Kestrel
+Content-Length: 161689
+Content-Disposition: attachment; filename=Hello.pdf; filename*=UTF-8''Hello.pdf
+
+ +

I hope my question is clear :)!

+ +

Thank you for your assistance

+",313892,,,,,43951.61181,Should file download preference be decided in front-end or back-end?,,1,1,,,,CC BY-SA 4.0, +409464,1,,,4/29/2020 12:37,,0,50,"

I'm using Node.JS to build a system where data gets consumed via WebSocket requests instead of classical REST API calls. WebSockets were originally used for realtime bidirectional communication, but were then extended to also reply to requests from clients.

+ +

At the moment I'd like to unit-test that part of the socket server, but the code is really spaghetti code.

+ +

So the first thing should be to decouple methods and put them in a specific module.

+ +

Therefore I'd add a mix of PubSub and Request/Reply patterns, because sometimes I need one approach and sometimes the other. Or at least I think so.

+ +

Let's see an example:

+ +
// socketServer.js
+// someone sends a request to search for updates.
+socketServer.on('searchForUpdates', (wsClient) => {
+    // 1: do the search
+    const found = updater.findUpdates();
+    // 2: reply to the applicant only (excluding other clients)
+    socketServer.sendMessageToClient(wsClient, found);
+});
+
+ +

Ok, but the request to search for updates might come from another source (let's say a USB sensor).

+ +
// usbSensor.js
+usbSensor.on('trigger', () => {
+    // 1: do the search
+    const found = updater.findUpdates();
+    // 2: do not reply to the applicant: USB sensor doesn't have an interface!
+    // instead, alert all the web socket clients currently connected
+    socketServer.sendMessageToAllClients(found);
+});
+
+ +

So in the latter example I must have a reference to the socketServer object, but it seems illogical that the USB sensor has to know about a SocketServer object. +It sounds better to use the PubSub mechanism, e.g.:

+ +
// usbSensor.js
+usbSensor.on('trigger', () => {
+    // 1: do the search
+    const found = updater.findUpdates();
+    // 2: emit an event
+    pubSub.publish('didFoundUpdates', found);
+});
+
+// socketServer.js
+pubSub.subscribe('didFoundUpdates', data => {
+    socketServer.sendMessageToAllClients(data);
+});
+
+ +
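
For what it's worth, the pubSub object in the snippets above could be a very small module built on Node's built-in EventEmitter (a sketch; the module shape is just my placeholder):

+ +

// pubSub.js - a minimal publish/subscribe facade over EventEmitter
+const { EventEmitter } = require('events');
+const emitter = new EventEmitter();
+
+module.exports = {
+    publish: (topic, data) => emitter.emit(topic, data),
+    subscribe: (topic, handler) => emitter.on(topic, handler),
+};

+ +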

Unit testing should be easier because I don't have to mock objects around; I can actually test each component separately.

+ +

The readability of the code should also improve because of the separation of concerns.

+ +

But... is it a best practice? Would you approach the problem in another way?

+",292872,,,,,43950.52569,Is it ok to have a mix of PubSub and Request/Reply in a WebSocket server (focus on unit-test)?,,0,0,,,,CC BY-SA 4.0, +409469,1,,,4/29/2020 14:42,,-3,49,"

I am trying to build a system that tells the user on which platforms (like Netflix, Prime, etc.) a movie or series is available. What is the best way to go about it?

+ +

I have considered the following:

+ +
    +
  1. Scraping the web to collate this information (it seems too unwieldy)
  2. Manual entry in a DB (along with scraping to adjust the data)
  3. Using 3rd-party APIs (like JustWatch or Guidebox)
  4. Requesting APIs from the OTTs themselves (probably not going to hear back from them)
+ +

But so far I have not found a solution that works properly. Are there other ways to collate this data that I'm missing? For example, how do these 3rd-party providers do it? +Any help or insight will be highly appreciated.

+",217740,,,,,43950.71389,How to manage source data for movies and series?,,1,0,,,,CC BY-SA 4.0, +409471,1,,,4/29/2020 15:33,,0,380,"

The last time I created a REST API client in .NET, I used exceptions to represent status codes which don't indicate success (404 returned null). It's been quite some time since then and my ideas have evolved a bit. Now I am using Blazor for part of my project and I am faced with the problem of API client design. I am specifically interested in the design of the response object. C# 8.0 provides pattern matching, which opens some new options for this design.

+ +

My API returns the object serialized as JSON if the call executes correctly, 500 on internal server error, 404 if the object is not found, 400 if validation fails, 401 and 403 in the corresponding authentication and authorization cases and, the interesting part, 409 Conflict when the error is one that should be displayed to the user. An example of such an error might be a registration call where the email is found to already exist in the database, or trying to post a comment without having confirmed the email. These are basically errors that would require additional API calls to validate on the client. So my question is: how do I design a statically typed API client in C# 8.0 which allows me to express as much intent as possible in the method signature and is easy to call and handle? 404 becomes a nullable type, but what about the rest? As I see it I have a few options:

+ +
    +
  • Do it the old-school way and make a bunch of exceptions for the other cases. This means that exceptions will effectively be used for flow control, as I will have to use try/catch to branch. Even 500 errors are handled in an app with a UI. For example, if you get a 500, you display something like ""Unexpected error"" to the user and need to unlock the form that was locked during the post. This is different from server-side programming, where you would just bubble the exception up to some global handler, craft an error response and terminate the request. You can't just kill the client app if one request fails (or maybe you can?). I can think of some workarounds here, like a message bus that handles the exception and displays the message in a universal way, but what about unlocking the form, and what about these 409 responses which should result in the message from the server being displayed to the user?

  • Craft response objects. ASP.NET Identity has examples of this, like the IdentityResult class, which has a bool Succeeded property and an Errors property. You are supposed to if-check the Succeeded property and use the Errors property if it is false. This pattern can be extended with a Result property which can be examined if the request needs to return some object. What I don't like about this approach is that you can forget the check: you can access the result even if you never checked the property.

  • My final idea is to create Result types designed to be used with pattern matching, for example like this:
+ +
public abstract class Result
+{
+
+}
+
+public sealed class Success<T> : Result
+{
+    public T Value { get; }
+
+    public Success(T value)
+    {
+        Value = value;
+    }
+}
+
+public sealed class Error : Result
+{
+    public string Message { get; }
+
+    public Error(string message)
+    {
+        Message = message;
+    }
+}
+
+public sealed class ExpectedError : Result
+{
+    public string Message { get; }
+
+    public ExpectedError(string message)
+    {
+        Message = message;
+    }
+}
+
+
+ +

They would be used like this

+ +
Result userResult = await apiClient.RegisterUserAsync(something);
+
+if (userResult is Success<User> success)
+{
+   User user = success.Value;
+   //do something
+}
+else if (userResult is ExpectedError expectedError)
+{
+   DisplayError(expectedError.Message);
+}
+else
+{
+    DisplayError(""Something went wrong"");
+}
+
+ +

This will at least force the client code to check whether the request was executed successfully, although it will not force handling of the errors. I've also seen libraries that do a similar thing with methods like OnSuccess; does anyone have experience with these?

+ +
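
(For reference, with C# 8 the same dispatch can also be written as a switch on the result type, which at least makes the fallback branch explicit; HandleUser is a hypothetical helper for the success path:)

+ +

switch (userResult)
+{
+    case Success<User> success:
+        HandleUser(success.Value);  // hypothetical success handler
+        break;
+    case ExpectedError expected:
+        DisplayError(expected.Message);
+        break;
+    default:
+        DisplayError(""Something went wrong"");
+        break;
+}

+ +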

So which approach would you recommend? How can these approaches be improved? Any other options I have not considered? Maybe exceptions are fine and I can add some form of global handlers on the Blazor side that will keep the form's state?

+",17857,,17857,,43950.66042,43959.56042,How to design a statically typed REST API client?,<.net>,3,13,,,,CC BY-SA 4.0, +409474,1,,,4/29/2020 17:20,,-4,103,"

Let's get to an example. Suppose I have a Course object with the following properties: course code, course title, course credit, etc. When expressing this as a JS object, I can do it in the following ways:

+ +

1.

+ +
const course = {
+    course_code: 'CSE123',
+    course_title: 'An example title',
+    course_credit: 4
+}
+
+ +

2.

+ +
const course = {
+    code: 'CSE123',
+    title: 'An example title',
+    credit: 4
+}
+
+ +

The pro of Option 1, to me, is that it's easy to understand which object's properties are being accessed just by looking at the property name. But it's quite verbose.

+ +

But Option 2 is less verbose and avoids repeating the same thing.

+ +

What are the conventions regarding this? I would love some resources on this.

+",353363,,,,,44022.94028,Should I prefix keys of JS object with the object's name?,,3,0,,,,CC BY-SA 4.0, +409479,1,409482,,4/29/2020 18:28,,1,307,"

I have used Utf8Json a lot (it is very good) but have since adapted some lower-level code and started using Utf8JsonReader directly.

+ +

Looking into the code of the Utf8Json library, I see that JsonSerializer.DeserializeAsync ultimately uses the System.IO Stream class, which itself has Stream.ReadAsync functions.

+ +

Looking at Utf8JsonReader, it uses ReadOnlySequence<byte> and does not have any async functions.

+ +

Looking here https://github.com/dotnet/runtime/issues/29906 it mentions:

+ +
+

Utf8JsonReader is re-entrant and so an asynchronous wrapper around Utf8JsonReader that contains additional state can shell out to a fully synchronous helper function that is able to create the needed types and perform the streaming read.

+
+ +

My questions are (and thanks for taking the time to read):

+ +
    +
  1. Why would one library access memory buffers asynchronously (Utf8Json), and another synchronously (Utf8JsonReader)?

  2. I understand async makes sense for long-standing IO, like on network ports or files on disk, but on memory that overhead would be detrimental? (Is this why there are no async functions on ReadOnlySequence<byte>?)

  3. What does the author on the GitHub site mean by creating an async wrapper, and what would this look like? (My current guess is sketched below.)
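
+ +

For question 3, my current guess (purely my own sketch; happy to be corrected) is that the awaiting happens at the Stream level while the parsing stays in a synchronous helper, since Utf8JsonReader is a ref struct and cannot live in an async method:

+ +

static async Task ReadJsonAsync(Stream stream)
+{
+    var buffer = new byte[4096];
+    // Async I/O happens only here (ignoring partial reads for brevity).
+    int read = await stream.ReadAsync(buffer, 0, buffer.Length);
+    ProcessBuffer(buffer.AsSpan(0, read));  // fully synchronous parsing
+}
+
+static void ProcessBuffer(ReadOnlySpan<byte> data)
+{
+    var reader = new Utf8JsonReader(data, isFinalBlock: true, state: default);
+    while (reader.Read()) { /* handle reader.TokenType here */ }
+}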
+",81639,,,,,43950.82361,Why no async functions for Utf8JsonReader and ReadOnlySequence,<.net>,1,3,1,,,CC BY-SA 4.0, +409480,1,,,4/29/2020 18:53,,0,72,"

I began scouring my code for optimization opportunities (execution speed, data parallelism, workload parallelism) yesterday, when I noticed that there are very few opportunities for speeding up integer divisions. Integer division is heavily used in a hot-spot area. Not only is it slow relative to bit shifts, it's possibly a showstopper for vectorization (data parallelism). If I want to use SSE/AVX as the design currently stands, the easiest way is to sacrifice some range so that all the intermediate calculations fit in 53 bits (the precision of a double), since there are no SIMD instructions for integer division. The lack of integer division support is also apparent with ""division-free"" being a frequent selling point, such as for specialized algorithms in the compression/encoding space.

+ +
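
(To illustrate the kind of division-free rewrite I mean: for a constant divisor, the division can become a multiply plus a shift. Compilers already do this for constants, which is exactly why runtime divisors and SIMD lanes are the painful cases:)

+ +

#include <stdint.h>
+
+/* x / 10 without a divide: multiply by a precomputed reciprocal
+   (0xCCCCCCCD is ceil(2^35 / 10)) and shift; valid for all 32-bit x. */
+static inline uint32_t div10(uint32_t x) {
+    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
+}

+ +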

How common or relevant is it in decision-making processes to pursue (or consider) division-free algorithms and data structures that will end up in hot spot areas of applications? It’s still pretty early in the design process for me so there’s opportunity to consider future needs like parallelization and CPU-architecture-dependent performance characteristics.

+",275493,,275493,,43950.79167,43951.01458,Design Algorithms and Data Structures Around a Lack of Fast Integer Division,,1,5,,,,CC BY-SA 4.0, +409489,1,,,4/29/2020 22:38,,-4,117,"

I want to write a program that performs heavy computations and I want it to be as fast as possible, so I chose C as the language. Nevertheless, I was told that, in spite of its simplicity and high efficiency, one can easily make things slow instead of fast AND thoroughly shoot oneself in the foot.

+ +

Okay, so if I do not want to shoot my feet off, I can consider adding extra static checks inside my code as well as using code tools, static analyzers and such (no one wants bugs). On the other hand, runtime checks eat processor time. How do I reach the proper balance?

+",251005,,,,,43951.24306,How to exploit efficiency of C?,,2,4,,43951.25625,,CC BY-SA 4.0, +409491,1,,,4/29/2020 23:04,,0,148,"

Often, when any low-cost computer peripheral device is sold, it is sold with a CD containing the device's controlling firmware. Why can't it simply be sold with the firmware pre-installed by the manufacturer so that it does not rely on the host system itself to do this task? Does this have to do with price/memory constraints of the device, and if so, does it really impact the cost so much that firmware must be distributed on CDs?

+",213588,,173647,,43951.41736,43951.93264,Why is device firmware often distributed on CDs with the device?,,2,4,,,,CC BY-SA 4.0, +409496,1,,,4/30/2020 2:33,,0,29,"

I'm looking at porting a Unix-based multi-process architecture server application.

+ +

The app uses SYS V IPC shared memory and semaphores. It has a main message queue in shared memory, with semaphores to protect concurrent access, and separate processes for enqueue, dequeue, and various message type handlers, etc. The single binary initializes the shared memory then forks itself to make all of those processes.

+ +

I have successfully run the app on Windows under Cygwin, which supports SYSV IPC. So one option is to deploy the app for Windows under Cygwin.

+ +

The Unix version has these nice features due to its architecture:

+ +
    +
  • If one process crashes (e.g. a handler for a certain message type), the master can catch this and fork a new instance of the process, all the while continuing to process other message types. This helps maximize uptime for the server, as opposed to a single-process multithreaded server where a crash would kill everything currently in process and also require some other supervisor process to restart the server.
  • A process can be hot-upgraded: the binary can be recompiled and run, and the new master takes over managing the existing processes, which can themselves be upgraded by killing them and having the new master fork them off again. So new features can be added to the server without losing any uptime.
+ +

My question is: What architecture should I use if I were to port this app to native Windows (i.e. not running under Cygwin)?

+ +

And if the answer is to stay multi-process, which Windows IPC functions would be best for the job? E.g. use CreateFileMapping to make some shared memory and run the queue in that?

+ +
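
(For reference, my rough understanding of the CreateFileMapping route, as a sketch; the region name and size are placeholders, and error handling is omitted:)

+ +

#include <windows.h>
+
+/* Named, pagefile-backed shared memory: roughly the counterpart of shmget/shmat. */
+void *create_queue_memory(void)
+{
+    HANDLE hMap = CreateFileMappingW(
+        INVALID_HANDLE_VALUE,      /* backed by the page file, not a real file */
+        NULL, PAGE_READWRITE,
+        0, 65536,                  /* 64 KiB region for the message queue */
+        L""Local\\MyMsgQueue"");     /* others attach via OpenFileMappingW */
+    return MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
+}

+ +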

I am concerned that switching to a single-process multithreaded architecture loses both of the above bullets. And if the plan is to keep a multi-process architecture on Windows, my understanding is that having a single binary fork itself into multiple processes is not possible, so I would have to redesign the build system and startup code to use a different binary for each process.

+",123271,,,,,43951.10625,Porting a SYSV IPC multiprocess architecture to Windows,,0,0,,,,CC BY-SA 4.0, +409497,1,409501,,4/30/2020 3:31,,1,71,"

I am looking at all the green and red boxes here, and am wondering what it would look like if one were to ""implement TLS"" today. What should you implement if you were to implement TLS today?

+ +
    +
  • Do you need to implement all of the previous versions? (SSL 1-3, TLS 1.0-1.3)
  • Do you need to implement all the cryptography algorithms, or just the TLS 1.3 ones? (RSA is not used in TLS 1.3, it looks like, but is the standard for HTTPS as far as I know, for example.)
  • Do you need to implement all the ciphers, or just the TLS 1.3 ones?
+ +

Rather than trying to describe everything that needs to be implemented in order to support TLS today, the simpler question is: do browsers support TLS 1.3? If not, why not? This leads to the above questions, plus...

+ +

In order to fetch ""older"" websites, do you need to support older TLS protocols? Or can you have, let's say, a brand new TLS 1.3 implementation that only implements the following, and be good to go in terms of communicating with ""older"" sites and services? Basically TLS 1.3 seems to only implement:

+ +
    +
  1. DHE-RSA
  2. ECDHE-RSA
  3. ECDHE-ECDSA
  4. AES GCM
  5. AES CCM
  6. ChaCha20-Poly1305
  7. AEAD
+ +

That's only 7 things; meanwhile, there are about 20 key exchange algorithms across the full range of TLS/SSL versions from SSL 1.0 to TLS 1.3, about 20 ciphers, and 6 data integrity algorithms.

+ +

Basically, if TLS were implemented today in a new open-source project, for example, would it only need the code for those 7 things in order to support browsing the web, or all 20 + 20 + 6 ~ 46 algorithms, basically supporting all SSL and TLS versions of the past?

+ +

I am basically confused as to what needs to be implemented to support secure web browsing in the 21st century, given that everything before TLS 1.3 has (it seems) been found to be insecure. The answer to this question would help me come up with a project scope for what it would take to implement TLS properly today.

+ +

If you can't just implement TLS 1.3 (and need to support the older TLS/SSL versions), what happens if you only have TLS 1.3? What about only TLS 1.3 and 1.2? Etc. Basically, what handicaps do you have by only supporting the latest version? Does it simply mean that you can't fetch certain webpages, or what?

+ +

It sounds like browsers stopped supporting SSL 3.0 and earlier quite a while ago, but I'm not sure.

+",73722,,73722,,43951.15625,43951.24167,Do you need to implement TLS versions < 1.3 if you were to implement a TLS supporting library today?,,1,1,,,,CC BY-SA 4.0, +409504,1,,,4/30/2020 6:28,,0,21,"

I have data that I'm going to collect on every pageview that a user has on a given site.

+ +

It will be collected with a client side script and be stored in a mongodb database.

+ +

One site can generate up to 50 million pageviews a month, and the hope is to grow the amount of sites that use this piece of software.

+ +

This could scale quickly, and my question is oriented around MongoDB's ability to handle this.

+ +

From what I've researched, the type of repetitive data, without the need for joins, is a good use case for mongodb.

+ +

My question is, at what point should I be concerned about the total size of a mongodb collection?

+ +

At a certain point, should I start to convert this data into more condensed collections? For example, after an hour goes by, convert all of the pageview data into session data; after a day goes by, convert all of the session data into hourly data; etc., and get to the point where I have multiple collections that hold the pageview data at different granularities for different periods of time, based on how far into the past you go. In this case, hourly data could be kept for a month, session data for a couple of days, and pageview data for a couple of hours.

+ +

Or is this even necessary with mongodb?
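
+ +

(For concreteness, the kind of condensing step I have in mind, sketched with the aggregation pipeline; the collection and field names are made up:)

+ +

// Roll raw pageviews up into hourly counts and store them in another collection.
+db.pageviews.aggregate([
+    { $group: {
+        _id: { y: { $year: ""$ts"" }, m: { $month: ""$ts"" },
+               d: { $dayOfMonth: ""$ts"" }, h: { $hour: ""$ts"" } },
+        views: { $sum: 1 }
+    } },
+    { $merge: { into: ""hourly_views"" } }  // $merge needs MongoDB 4.2+
+]);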

+",364549,,,,,43951.26944,Size of mongodb collection and scallability,,0,0,,,,CC BY-SA 4.0, +409508,1,409537,,4/30/2020 8:57,,-2,126,"

I have two approaches to solving a problem but I don't know which one is better. +I will give a simplified example.

+ +

First approach

+ +

The database will look like this:

+ +
color  |
+--------
+FFFFFF |
+0000FF |
+
+ +

The code would have to deal with the specific values and do something like:

+ +
If (color == 0000FF)
+   displayBlue()
+If (color == FFFFFF)
+   displayWhite()
+
+ +

But for this approach I will always have to update both the code and the database if there is a new color.

+ +

Second approach

+ +

The database will provide more information:

+ +
color | Name
+-----------------
+FFFFFF | White
+0000FF | Blue 
+
+ +

Then the code would be like:

+ +
Foreach c in color 
+    display( c.Color, c.Name )
+
+ +

With this approach, if there is a new color I can just update the database. But I feel like I'm mixing concerns: the UI is driven by database values.

+ +

Could you advise me on the pros and cons of both approaches and, therefore, which one is recommended?

+",364558,,73508,,43952.54236,43952.54375,Dependency of program code on specific database values,,2,3,,,,CC BY-SA 4.0, +409511,1,409516,,4/30/2020 10:22,,-2,89,"

The rule I was taught: ""the method should be in the Object that it is invoked on""

+ +
    +
  • a student joins a course => the join method should be in the Course class
  • a player drives a car => car.drive(..)
+ +

But another possibility is: the join method is in Student, and the course has an addStudent method.

+ +

The reasoning is ""I tell the student to join a course"" and ""I tell the course to add a student"". +I can't tell a course to join a student!

+ +
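
To make it concrete, the two placements I'm weighing look roughly like this (a sketch; names are mine):

+ +

class Course {
+    void addStudent(Student s) { /* ... */ }
+}
+
+class Student {
+    void join(Course c) { c.addStudent(this); }  // delegates to the course
+}

+ +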

I guess a possible underlying question is: in object-oriented design, should I think of objects as physical entities where there is a ""join"" button that I press, OR as agents which I tell to do things?

+ +

Context: this example was featured in an introduction-to-SE course that I just started. +We were told to apply the rule, but we were not told why the rule exists.

+",364560,,,,,43951.4875,should the join method be in the Course class or the Student class?,,1,3,,,,CC BY-SA 4.0, +409517,1,,,4/30/2020 12:20,,1,46,"

I am going to use FreeRTOS on my hardware platform. My plan is to model individual tasks as C structs containing the task execution period, task priority, task stack size, task name and maybe other attributes. Then I am going to define a public “constructor” function called task, accepting values for the individual task attributes mentioned above, an “abstract” function called taskFun which contains the code of the task (each task will have this function, but the implementation will differ among the tasks), and public set/get functions.

+ +

+ +
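
In other words, something like this (a sketch; all names are my placeholders):

+ +

#include <stdint.h>
+#include ""FreeRTOS.h""   /* for UBaseType_t */
+
+typedef struct Task Task;
+struct Task {
+    const char  *name;
+    uint32_t     periodMs;              /* execution period */
+    UBaseType_t  priority;              /* FreeRTOS priority */
+    uint16_t     stackDepth;            /* stack size, in words */
+    void       (*taskFun)(Task *self);  /* per-task ""abstract"" body */
+};

+ +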

My motivation for developing such an overhead is to improve the organization of the source code and data encapsulation. My goal is to have a sort of layered architecture where the top view offers only information about the number of tasks, their execution periods and so on. The more detailed information regarding what is executed with which period and priority is hidden in the taskFun function.

+ +

Do you think it is a good idea? What are better approaches when the C language is used? Thanks for any ideas.

+",360122,,,,,43951.51389,FreeRTOS based application architecture,,0,2,1,,,CC BY-SA 4.0, +409518,1,409521,,4/30/2020 12:25,,0,59,"

I have a function whose job is to look through a string that is a post's content and find certain pieces:

+ +
public static function findInsidePostContent( $post = Null )
+{
+
+    //post_content comes from a WP_Post object.
+    $post_blocks = \parse_blocks( $post->post_content );
+    //Look inside, do some things.
+
+}
+
+ +

The function has an absolute truth that we can always rely on: it only and solely works with a WP_Post object.

+ +

But, naturally, since the parameter defaults to Null, there is a clear contradiction here, so let's try to solve it by adding a specific line right at the start of the function:

+ +
$post = Utils::getPostObject( $post ); if( \is_wp_error( $post ) ) return $post;
+
+ +

Great: that function is always supposed to return a WP_Post object, unless something went wrong.

+ +

But what we just did was heavily couple two seemingly unconnected pieces of code and, usually, that's bad. We also hid a dependency. But what if we can predict that, in all cases, you will always work with a WP_Post object? You'll always have to resolve that WP_Post object somewhere beforehand:

+ +
$post = Utils::getPostObject( 21 );
+Utils::findInsidePostContent( '', [], $post );
+
+ +

This ""call path"" will never, ever change.

+ +

Wouldn't the coupling be justified?

+",362942,,,,,43951.58194,Is coupling functionality desired when the usage of a function can be predicted with near-perfect confidence?,,1,2,,,,CC BY-SA 4.0, +409525,1,409532,,4/30/2020 15:44,,1,194,"

Context: I am a new hire out of university at a large software company tasked with either refactoring or re-writing a large legacy method (~500 lines, ~2000 lines expanded with private method calls) that performs a complex workflow with many responsibilities. You can imagine it applying a complex series of interdependent transformations to its input before returning the transformed data. Static code analysis indicates it has a cyclomatic complexity of 67. This class is part of a package used across multiple services not owned by my team and as such the interface cannot be modified. This method is part of a package that is being deprecated feature-by-feature over a long period of time. The goal of the refactor/rewrite is to make it so that we maintain feature parity with the original implementation, but can easily disable individual transformations over time. The method has few tests, so my first task will be to create a full test suite for the class to enforce stable behavior.

+ +

What sort of approach can I apply in order to perform this refactor successfully? My current idea is some sort of Pipeline where we take a collection of enums which describe which transformations to disable when assembling the pipeline. This way, deprecating transformations in the future is a matter of adding an additional enum to the parameter. +E.g. something along the lines of:

+ +
public class LegacyClass {
+...
+    public WorkflowResponse performWorkflow(WorkflowInput workflowInput) {
+        WorkflowPipeline pipeline = this.assemblePipeline(this.featureDeprecations);
+        pipeline.validate();
+        return pipeline.execute();
+    }
+
+    private WorkflowPipeline assemblePipeline(EnumSet<FeatureDeprecations> featureDeprecations) {
+        WorkflowPipeline pipeline = new WorkflowPipeline();
+        pipeline.addTransformation(new TransformationOne());
+        if(!featureDeprecations.contains(DEPRECATE_TRANSFORMATION_TWO)) {
+            pipeline.addTransformation(new TransformationTwo());
+        }
+        pipeline.addTransformation(new TransformationThree());
+        return pipeline;
+    }
+
+ +

Is following this approach a good idea for my situation? The main downside I can see is that the code for assembling the pipeline will grow to be very complex even if I manage to encapsulate each transformation rather than having them live within the performWorkflow method.

+",364595,,9113,,43951.79097,43955.89167,"Approach for rewriting a large, mission-critical method",,3,6,1,,,CC BY-SA 4.0, +409527,1,,,4/30/2020 16:57,,-2,69,"

I have X number of servers in NLB listening to an event that gets published by a backend system.

+ +

I am planning to develop a Windows service that listens to the event and then calls an internal API. Here, each server in the NLB subscribes to the event.

+ +

Is there a way that I can make sure that my API is called only once per published message, instead of X times (once per server)?

+ +

The API I am calling updates the data and I don't want the data to be updated more than once.

+",362273,,209774,,43952.37014,43952.53125,How to call the API once per event-message?,,2,2,1,,,CC BY-SA 4.0, +409530,1,,,4/30/2020 18:19,,3,244,"

Say I have 100 users, each with varying strength, and each with a top 5 set of ""preferred teammates"" and a top 5 set of ""preferred enemies"". I want to sort the users into two teams.

+ +
User
+{
+   int Id;
+   int strength;
+   List<int> PreferredTeammatesIds;  // arbitrary limit of 5
+   List<int> PreferredEnemiesIds;  // arbitrary limit of 5
+}
+
+ +

I am trying to come up with an algorithm where the total strength of each team is near equal, and as many of each user's preferences as possible are satisfied.

+ +

First, I assume a perfect everyone-gets-what-they-want assignment is highly unlikely, especially with 100 users. But is there a way to calculate the optimal alignment, or would I just have to do some sort of random mutation or genetic algorithm and keep the best lineup found in N generated solutions?
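
+ +

To make ""best"" concrete, the fitness function I'd feed such a search might look like this (a sketch of my own scoring idea, not an established algorithm; assumes using System.Linq):

+ +

static int Score(List<User> teamA, List<User> teamB)
+{
+    // Penalize strength imbalance, reward satisfied preferences.
+    int imbalance = Math.Abs(teamA.Sum(u => u.strength) - teamB.Sum(u => u.strength));
+    int hits = 0;
+    foreach (var team in new[] { teamA, teamB })
+    {
+        var ids = new HashSet<int>(team.Select(u => u.Id));
+        foreach (var u in team)
+        {
+            hits += u.PreferredTeammatesIds.Count(id => ids.Contains(id));
+            hits += u.PreferredEnemiesIds.Count(id => !ids.Contains(id));
+        }
+    }
+    return hits - imbalance;
+}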

+",6212,,6212,,43951.78819,43956.25347,An algorithm for self picking teams,,4,12,1,,,CC BY-SA 4.0, +409531,1,,,4/30/2020 18:55,,0,61,"

I designed a REST API with 2 simple layers: Controller and Service. The controller handles the incoming HTTP request and redirects to a service method. In the beginning of development everything was going well with the standard CRUDs. The problem now is the complex reporting queries.

+ +

For each persistence entity I create a typed service and controller.

+ +

E.g.: a Person class has a PersonService that is called by a PersonController.

+ +

To achieve the reporting task, I've been creating lots of query methods, putting the data into DTOs and manipulating the data through them. The report data needs a lot of calculation and comparison with other tables' data, and I'm basically doing those calculations in DTOs because I don't want to query my complete domain object just to work with 2 or 3 fields.

+ +

My service layer has grown a lot with all those query methods, and I know it's wrong to manipulate the state of the data inside DTOs before exposing them, but how could I fix that with a simple solution? I decided on a few layers to keep things simple, and now I'm not sure I made the right decision.

+",335922,,,,,43952.21458,Problems with software layers in complex query methods,,1,1,,,,CC BY-SA 4.0, +409546,1,409591,,5/1/2020 5:36,,3,337,"

I have a JSON file which I'm trying to de-serialize into POJOs.

+ +
public abstract class BaseClass {
+    private String baseClassField;
+    abstract String execute();
+}
+
+ +

ClassA extends BaseClass

+ +
public class ClassA extends BaseClass {
+   private String classA;
+   private SomeInterface someInterface;  // renamed from 'interface', a reserved word in Java
+
+   @Override
+   public String execute() {
+      return someInterface.execute();
+   }
+}
+
+ +

I've simplified my use case, but as you can see from the way I've structured my classes, I will have business logic within my core POJO definitions. I don't think that's a good idea because, soon, I will start trying to bring in dependency injection of other member instances to help perform the execution.

+ +

How should I model the classes so that I'm able to maintain the inheritance hierarchy and still have child classes execute the contracts defined by the base class?

+",281546,,,,,43953.68333,De-coupling business logic from POJO de-serialization design pattern,,4,0,,,,CC BY-SA 4.0, +409553,1,,,5/1/2020 9:49,,2,79,"

Problem description

+ +

Publisher-Subscriber architecture with a central registry where agents can either promote their capabilities or search for a given capability.

+ +

The project must be developed with C/C++. A GUI used to compose a chain of agents is a nice-to-have (not necessarily C or C++).

+ +

Current state of my thinking

+ +
    +
  • Set up the Publisher-Subscriber in place using ZeroMQ
  • The central registry's (or notice-board's) only role is to connect agents between them using sockets
  • Data serialization is performed either with MessagePack or FlatBuffers
+ +
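
As a first smoke test of that stack, I sketched a minimal publisher (cppzmq; the endpoint and topic are placeholders):

+ +

#include <zmq.hpp>
+#include <string>
+
+int main() {
+    zmq::context_t ctx{1};
+    zmq::socket_t pub{ctx, zmq::socket_type::pub};
+    pub.bind(""tcp://*:5556"");
+    // An agent would advertise a capability as a topic prefix on each message.
+    const std::string msg = ""capability/temperature 21.5"";
+    pub.send(zmq::buffer(msg), zmq::send_flags::none);
+}

+ +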

Questions

+ +

This is the first time I've been asked to develop a Publisher/Subscriber architecture, and I've never used the libraries I listed above.

+ +

Does my approach look good to you?

+",364651,,9113,,43952.41528,44102.54306,Publisher-Subscriber architecture with central registry,,1,0,,,,CC BY-SA 4.0, +409562,1,,,5/1/2020 14:58,,0,115,"

I've really been struggling with this concept a lot in my head, and I was hoping I could get a poke in the right direction here...

+ +

I have a process that is supposed to take a certain number of phone numbers and send them all text messages (through Twilio). That's all fine and working. My problem is that I don't know how to ensure execution of each function when there could be anywhere from 10 to 10,000 phone numbers to send to.

+ +

I don't think playing with the timeouts in my cloud environment would be best, because the time would be variable, so what if I set it too short? I just could never really know.

+ +

Or maybe timeouts are the best way to do this, and I'm just ignorant.

+ +

I guess my question boils down to:

+ +

How do I best execute a process that could take anywhere from 30 seconds to 6 hours, and make sure the entire process runs all the way through?

+",281809,,,,,43954.12153,How to ensure execution of a process that could take anywhere from 30 seconds to 3 hours,,2,0,,,,CC BY-SA 4.0, +409569,1,409578,,5/1/2020 18:41,,-3,61,"

How would you architect a simple cascading-stylesheet-like inheritance object?

+ +

For example, I have Apple that extends Fruit.

+ +
class Fruit {
+
+    constructor() {
+       this.total = 10;
+    }
+}
+
+class Apple extends Fruit {
+    constructor() {
+        super();  // required in a derived-class constructor
+    }
+}
+
+ +

On Fruit there is a property named total with a value of 10. The apple instance can change the property value to anything else. But if I delete the property value, I somehow want it to return to the original base-class value.

+ +
var apple = new Apple();
+log(apple.total); // 10
+
+apple.total = 200;
+log(apple.total); // 200
+
+delete apple.total;
+log(apple.total); // 10  <- the behaviour I want
+
+ +
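
For reference, the closest I've gotten is moving the default onto the prototype (sketch below), though I'm not sure it scales to the full cascading behaviour I want:

+ +

class Fruit {}
+Fruit.prototype.total = 10;   // shared default, not an own property
+
+class Apple extends Fruit {}
+
+const apple = new Apple();
+console.log(apple.total);     // 10 (read through the prototype chain)
+apple.total = 200;            // own property shadows the default
+console.log(apple.total);     // 200
+delete apple.total;           // removes the shadowing own property
+console.log(apple.total);     // 10 again

+ +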

I'm using ES6 for this project, but I can probably figure out the syntax once I have a design pattern.

+ +

This may or may not be Cascading Style Sheets.

+",48061,,,,,43953.07361,How would you architect a simple cascading style sheet object?,,1,1,,,,CC BY-SA 4.0, +409572,1,,,5/1/2020 20:42,,0,35,"

I am creating a Spotify app that will allow 2 users to compare which songs they have in common. I have never designed a super-complicated web app before, so I wanted to get some advice on the structure of how this works. I created a flowchart describing how I felt the site should work.

+ +

Taking into account security and possible error points, what changes can I make to my flowchart / website design to develop the most robust app possible (even if it is a simple one)?

+ +

+ +

I have never posted on softwareengineering before, so if this is the wrong forum or more information is needed, please let me know instead of downvoting.

+",364697,,364697,,43953.14792,43953.14792,Structure for a spotify / OAUTH web app,,0,0,,,,CC BY-SA 4.0, +409575,1,,,5/2/2020 0:02,,2,133,"

To make a long story short, I needed a property of a button to act as a "secondary text property" and retain the original .Text value of a button, and the only String property that wasn't ReadOnly was the .Name property.

+

I was sure it was going to break at runtime (as tons of code references the name of the control), but just for kicks, I wanted to run it and see what it would do.

+

To my shock and surprise, IT DIDN'T BREAK! And it actually served my purpose of temporarily holding the original text of the button, in time for me to retrieve it again.

+

Thus my two questions

+

(like the surprised kid after he pushed a glass plate on the floor):

+
    +
  1. Why didn't it break?
  2. And why shouldn't I do it again?
+

I have a few guesses, but I wouldn't know how to confirm them, so this is my first venture for answers:

+
    +
  1. I was thinking one possibility is that the .Name property "already served its purpose" by runtime, as the code is already compiled and all references are already made to the instance, and therefore altering the .Name property does nothing at runtime. But that almost seems too simple an answer.

  2. Maybe I was not actually changing the property, but that seems a silly thing to consider since I was able to retrieve the value.

  3. I finally considered that I've entered the twilight zone and I'll be hunted down and imprisoned by Microsoft for not learning the moral lesson of following proper coding conventions. (Sarcasm; no need to edit my post for this)
+

I'm inclined to think #1 is the answer, but it seems too simple, and I can't help but think that I'm going to get scolded for messing with it.

+

Any insight or notes about conventions (even a slap on the wrist) would be appreciated.

+

Added code example:

+

(NOTE: I now understand .Tag is a better to use than .Name for this purpose, but this is still a good example for my question.)

+
Private Sub btn_Paint(sender As Object, e As System.Windows.Forms.PaintEventArgs)
+    Dim btn As Button = DirectCast(sender, Button)
+
+    'Make sure to only pull the text when it actually has text.
+    If btn.Text > " " Then
+        btn.Name = btn.Text
+    End If
+
+    btn.Text = String.Empty
+
+    'Set flags to center text on button
+    Dim flags As TextFormatFlags = TextFormatFlags.HorizontalCenter Or TextFormatFlags.VerticalCenter Or TextFormatFlags.WordBreak   'center the text
+
+    'Render the text onto the button
+    TextRenderer.DrawText(e.Graphics, btn.Name, btn.Font, e.ClipRectangle, btn.ForeColor, flags)
+End Sub
+
+",150262,,-1,,43998.41736,43963.63403,Changing control name property during runtime -- Why doesn't anything break and why shouldn't I do it?,<.net>,3,5,,,,CC BY-SA 4.0, +409588,1,409590,,5/2/2020 12:40,,-2,45,"

I am in a situation right now where a solution has been proposed that uses a CSV file. The proposed file structure essentially contains three atomic values:

+ +
    +
  • ID
  • Thing 1
  • Thing 2
+ +

Fine. As we were discussing this solution, someone mentioned that some customers might use our ID while other customers might use their own, different ID. Development's suggestion was to simply add a fourth field and have the customer choose between the two (use one or the other), i.e.:

+ +
    +
  • Internal ID
  • +
  • External ID
  • +
  • Thing 1
  • +
  • Thing 2
  • +
+ +

In this way, the system can account for either situation. The response to this suggestion is what is troubling me - our implementation team came back and said to stick with the original file, and then development should add a different, internal designator somewhere else within the system that consumes the CSV file, which treats the ID field as either internal or external, depending upon which customer submitted the file. The argument here was that having two fields for ID is confusing to the customer and could lead to problems.

+ +

Alright - finally my question. I feel rather strongly that we should have four fields, but I cannot find any basic software premise to back up my insistence. The CSV file is essentially a table, so I keep looking to DB normalization rules for an answer (1NF keeps coming to mind), but I don't think that's quite right either.

+ +

What rule is being broken by using a field/variable for multiple purposes? This has got to be in a few basic coding books, white papers, lists of do's and don'ts, right? Anyone have anything I might be able to point to?

+ +

Thanks so much in advance!

+ +

Marshall

+",364745,,,,,43953.62222,"Using a field, variable, or column for multiple purposes",,1,4,,,,CC BY-SA 4.0, +409592,1,409596,,5/2/2020 15:10,,0,159,"

I know that:

+ + + +

and I also know that:

+ + + +

But are we using the same word ""escape"" in these two distinct contexts?

+ +

Or are ""escape"" (in escape character) and ""escape"" (in escape key) simply homographs with no meaningful connection apart from coincidental orthography?

+ +
+ +

Why am I asking this question?

+ +

I am writing some functionality (and some documentation) for a custom CMS and I would like to use the

+ +

+ +

symbol as a shorthand notation (within a namespace context where namespace prefixes are normally automatically applied) to mean:

+ +
+

don't use the namespace prefix here

+
+ +

That seems to correlate, approximately, with the definition of ""escape"" when we're talking about escape characters in HTML & CSS and the backslash escape character in Javascript.

+ +

But I'm less keen to use the:

+ +

+ +

character (conventionally used to denote the [ESC] key) for this purpose, if these two uses of the word escape are simply coincidental and don't have the same root.

+",359345,,359345,,43953.68819,43981.35208,"Using the symbol ⎋ to denote any ""escape"" in Javascript, CSS, HTML etc",,3,12,,,,CC BY-SA 4.0, +409593,1,409595,,5/2/2020 15:14,,0,181,"

Some background: +We have a similar entity stored in different databases because, historically, the entity came from different vendors (and it still does), and we stored it in three different databases. +All the DBs have stored procedures (SPs) written over them which return the entities based on a few filters; the table schema of the result set returned by all the SPs is the same. +The UI doesn't know or care where the entity is stored or fetched from. So currently we call SPs on the different DBs, bind the result set tables to entities, apply some business logic, and sort, group and aggregate the entities in a C# service based on the service request.

+ +

Question: +We are refactoring another part of the application and, for this service, a senior team member and a manager think we should change each DB call into a microservice, with the reasoning that no one should be able to access the entity in a given DB without connecting to the microservice written over that DB (making a common gateway for that entity on that database), and they said we should follow the database-per-service pattern.

+ +

I agree with all the reasons given for a database-per-service architecture, but no external consumer is going to consume our microservice without all the business logic; hence we would be writing a microservice just to bind data sets to entities, sacrificing performance by making an IPC call over the network. The other reason I can think of is scalability, as we can spin up more containers of the microservices when load increases, but we can also scale our current ""monolithic"" service (which makes direct async DB calls) in the same way.

+ +

Can someone please tell me some reasons (or point me to an article with this information) why we should create entity-binding microservices rather than make direct SP calls? Why should we sacrifice the performance of the service in the name of microservice architecture? Or is making direct SP calls fine in our case?

+ +

I appreciate the time you have invested in this question.

+",321521,,,,,43953.65347,Why to prefer microservice over direct stored Procedure call?,,1,0,,,,CC BY-SA 4.0, +409599,1,,,5/2/2020 16:26,,-1,74,"

I want to add some custom retry logic to the AWS SDK (but this isn't specifically about that). So whenever it throws a specific network exception it waits and tries again, and whenever it throws a specific auth-related exception it calls a function that tries to update its credentials and tries again.

+ +

I want to avoid adding/duplicating this logic everywhere I call the SDK.

+ +

I could try to wrap the SDK in my own service; I guess it would be a function that just takes an AWS client object and a method call?

+ +
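
(To make the wrapper idea concrete, a decorator along these lines is what I'm imagining, in Python; the exception types and the refresh hook are placeholders, not the real SDK types:)

+ +

import functools
+import time
+
+def with_retry(refresh_credentials, retries=3, delay=1.0):
+    def decorator(fn):
+        @functools.wraps(fn)
+        def wrapper(*args, **kwargs):
+            for _ in range(retries):
+                try:
+                    return fn(*args, **kwargs)
+                except ConnectionError:      # placeholder for the network error
+                    time.sleep(delay)
+                except PermissionError:      # placeholder for the auth error
+                    refresh_credentials()
+            return fn(*args, **kwargs)       # final attempt, errors propagate
+        return wrapper
+    return decorator

+ +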

But I'm wondering if it's possible to do something like register a callback globally so whenever a specific exception is raised a function is executed. Could I monkey patch the AWS client?

+ +

Is this possible? How does one normally do this sort of thing in a DRY, lightweight way?

+",118738,,,,,43983.78611,Is it possible to have a function executed when a specific exception is raised anywhere in the program?,,2,0,,,,CC BY-SA 4.0, +409600,1,,,5/2/2020 16:38,,-3,125,"

I have a build pipeline that builds my C++ project on Windows, macOS and Linux. The build process generates 100 libraries and files on each OS. So I have a directory with these files, and I want to package them properly.

+ +

File foo.dylib goes into osx.zip.

+ +

File bar.dll goes into win.zip

+ +

Is there a standard format for this or do I have to invent my own? If I have to invent my own, I thought of describing this in an inverted gitignore-style file format, combined with a C-preprocessor-like solution. It looks very maintainable and would do a good job.

+ +
#if OSX
+    *.dylib
+    !not_me.dylib
+#else WIN
+    *.dll
+    !not_me.dll
+#endif
+
+ +

The focus should be readability and maintainability. Makefiles have elements for this, but they are not easy to read, and with the number of files I have they are difficult to maintain. For unit tests there is, for example, the JUnit file format. Is there a similar solution for packaging?

+",345143,,345143,,43953.70556,43960.45,Is there a standard format for build packaging or do I need to invent my own format?,,1,6,,,,CC BY-SA 4.0, +409606,1,,,5/2/2020 18:40,,2,147,"

I'm trying to engineer this:

+ +
    +
  • 200 subclasses [ Derived Classes ]
  • After a subclass is defined, I won't need to edit any other file. [ Decoupled ]
  • Subclass definition registers itself. [ Definition is Registration ]
+ +

What I could do instead:

+ +
+

Create a class [ One big one ]

+ +

Create an enum value (or similar) for every functional variant [ Highly Coupled ]

+ +

Alter behaviour in a switch [ Cumbersome ]

+
+ +

While that would work and has its own advantages, I'd like to see if something more polymorphic, decoupled, and modular is achievable, like this:

+ +
class Base { ... };
+static QList<Base> s_RegisteredClasses; // Empty
+class A : public Base{ ... };
+class B : public Base{ ... };
+/* s_RegisteredClasses should now have A & B inside of it */
+
+ +

What I've tried involves static constructors, where every base class has a different signature. The initialization works, but I wouldn't know how to register it:

+ +
class Register { public: Register() { /* Static Constructor to run any code */ } };
+
+template<class T>
+class Base { static inline Register selfRegister{}; }; // `Register` initiated
+
+static QList<Base<?>> s_RegisteredClasses; // ? prevents polymorphism
+class A : public Base<A>{ ... };
+class B : public Base<B>{ ... };
+/* Both A & B run `Register()` constructor,
+   but how do you add them to a list? */
+
+ +

The self initialization works, registration doesn't.

+ +

Should I consider a QList<void *> instead?

+ +

The other variant looks like this, with the base class having one signature:

+ +
class Register { public: Register() { /* Static Constructor to run any code */ } };
+
+template<class T>
+class Base { static inline T selfRegister{}; }; // Register initiated only once.
+
+static QList<Base<Register>> s_RegisteredClasses; // but I can now polymorph
+class A : public Base<Register>{ ... };
+class B : public Base<Register>{ ... };
+/* How can I self initialize A & B, adding them to s_RegisteredClasses ? */
+
+ +
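
For reference, the closest working variant I have is this C++17 sketch; it still needs one namespace-scope line per class, which is exactly what I'm trying to eliminate:

+ +

#include <QList>
+#include <functional>
+
+class Base { public: virtual ~Base() = default; };
+
+inline QList<std::function<Base*()>>& registry() {
+    static QList<std::function<Base*()>> r;  // function-local: no init-order issues
+    return r;
+}
+
+template <class T>
+struct Registrar {
+    Registrar() { registry().append([] { return static_cast<Base*>(new T); }); }
+};
+
+class A : public Base { };
+class B : public Base { };
+
+static Registrar<A> s_registerA;  // the extra line per class I want to avoid
+static Registrar<B> s_registerB;

+ +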

I am close to giving up, but I haven't tried registering function pointers yet.

+ +

Is a text macro the appropriate solution here, or is there another paradigm I am overlooking?

+ +
+ +

Context

+ +

I am designing a card game where the instructions are printed in pseudo-C++ code. I am hoping a subclass would represent a Card, looking like this:

+ +

+",136084,,136084,,43953.83264,44109.12708,"C++: Achieving a decoupled ""Definition is Registration"" paradigm for derived classes?",,1,11,1,,,CC BY-SA 4.0, +409607,1,,,5/2/2020 19:05,,2,192,"

I work for a small-medium sized startup with a development team of 20 people and a very strong engineering culture. Engineering itself is split into smaller sub-teams and I am the only person responsible for a particular component (which is quite critical to the whole organisation).

+ +

We have now finally managed to hire another engineer to start working with me, which is great and will help me a lot, but comes with the inherent challenge of moving away from a solo dev team.

+ +

What are some of the challenges that exist when adding a second engineer to a team? I've already had a look at this question, where the accepted answer suggests having a good onboarding process but doesn't really go into the details of what that actually means. I'm more interested in understanding what kinds of things I should change on a day-to-day basis to make sure that this transition is a success.

+",364779,,,,,43955.21528,What are the challenges when scaling a dev team from one to two?,,4,1,,,,CC BY-SA 4.0, +409610,1,,,5/2/2020 19:18,,0,38,"

I have a multi-language web site. On the web site, users can search products and services by geographic areas.

+ +

I am asking which is the best approach to perform searches of geographic areas. At the moment I am using geographic coordinates.

+ +

I store in my database the boundaries of different areas; then, when a user inputs a city (for example) on the web site, I make a request to Google, obtain coordinates, and check whether the coordinates returned by Google are within the boundaries I previously loaded into the database.

+ +

This approach has some pros but also some cons.

+ +

PRO

+ +
    +
  1. I do not care about language. Users input cities. I call Google. Google gives me coordinates. Then I search on coordinates.
+ +

CONS

+ +
    +
  1. I must find the boundaries of the regions and then store them in my database. It is not always easy to find free or detailed boundaries;
  2. Often different users input the same cities over and over. This means I repeatedly call Google to obtain the same coordinates. Google has a cost I cannot afford at the moment, but I found it is the most accurate service.
+ +

What I would like is to remove the dependency on Google.

+ +

I think I should store the cities, provinces, regions and countries of the whole world in my database, then search on the name. But names change according to language (for example Paris, Parigi, Paryz, Parizh...), so perhaps that is not the most correct way to perform the search.

+ +

I am not looking for a language-specific solution; this is purely theory. +What do you suggest?

+ +

Thank you

+",364778,,,,,43954.29514,Best way to lookup cities/provinces and regions,,1,1,,,,CC BY-SA 4.0, +409611,1,,,5/2/2020 19:19,,1,45,"

I want to clear my application cache after updating an element of my domain.

+ +

In that case I raise an event, NodeEndPointReferenceChangedEvent, from my Domain Layer.

+ +

The cache is managed in the Services Layer.

+ +

I am not sure where to place the NodeEndPointReferenceChangedHandler, which will be responsible for catching the event and invoking the service to clear the cache.

+ +

Is it OK if I divide the handlers into two layers? +DomainHandlers, which handle events at the domain level, and +ServiceHandlers, which handle events at the application layer (cache services).

+ +

The other point is when the event should be raised. In the case of clearing the cache I prefer to raise the event after the transaction commits, but in other cases, or in the domain handler, I prefer to handle the event within the same transaction. +So I could define two different interfaces to manage this (for example ITransactionNotification and ICommitedTransactionNotification), and the infrastructure layer (in my case EF Core, because I use .NET Core) would raise the events pre- or post-transaction.

+",358459,,,,,43953.80486,Clear Applcation Cache with Event Domain DDD,,0,2,,,,CC BY-SA 4.0, +409614,1,,,5/2/2020 21:08,,1,17,"

I wanted to ask about your ideas on how to solve the problem that I have to solve in my application (App1). This is the classic Frontend + Backend (Angular + Java EE) application to which I am currently adding authentication and authorization. However, the matter does not seem to be simple, for several reasons. First, there are different kinds of customers who will use this application:

+ +
    +
  1. end-users using the GUI (Angular calls the backend API)
  2. end-users using the backend API directly
  3. another system calling the backend API
+ +

In addition, I have an OIDC provider (OIDC_Organization) available in my organization; unfortunately, it can only authenticate users and does not return any information about roles. My application also needs information about which roles belong to the logged-in user. For this reason, I would like to build my own OIDC system (OIDC_Internal), which would delegate authentication to the existing system (OIDC_Organization) in the organization and would read user permissions / roles from the database I manage (Permission_DB). I would return such combined information (user id + roles) in new OAuth token(s) to my application. However, I would not like to implement the entire OIDC system (OIDC_Internal) from scratch; there must surely be better options for this. +Another problem is that I can't put any technical users in OIDC_Organization; only real users are there. So for options (1) and (2) I can use OIDC_Organization, but for option (3) I need to have a user account in a different place (e.g. in the authorization database that I have -> Permission_DB).

+ +

Below is a diagram of what seems necessary to ensure security for App1. I assume that the Authorization Code flow will be used for the App1 -> OIDC_Internal and OIDC_Internal -> OIDC_Organization communication.

+ +

Here is a diagram of the technical user's (ApiCli) connection to the system (I assume the Client Credentials grant flow will be used here).

+ +

I was thinking about using Keycloak as a kind of proxy provider, but I don't know if this solution is even feasible. For example, I don't want to show Keycloak's login page; the login page provided by OIDC_Organization would be sufficient. Or would you suggest writing my own solution using some OAuth / OIDC / JWT libraries?

+ +

I would be very grateful for any ideas. +Best Regards, +Peter

+",364783,,,,,43953.88056,Defining custom OIDC provider with delegating authentication to another OIDC provider and using own authorization database,,0,0,,,,CC BY-SA 4.0, +409617,1,409618,,5/2/2020 23:51,,1,96,"

I'm learning about Software Architecture and especially about scaffolding large-scale architecture and patterns for modern web applications.

+ +

I've noticed that I don't have a pattern for data validation or rules: sometimes I add validations or checks in the client-side layer, sometimes on the server side, or by adding constraints to database schemas, and I end up with several redundant validations.

+ +

Let's say I have an input with a username, and this username should have at most 10 characters. As far as I understand, one validation in the front-end layer (client side) would be enough, without adding requirements/validations in the database for this property of our schema (user in MongoDB).
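
For what it's worth, one common way to avoid such redundancy is to declare each rule once in a shared, machine-readable form and derive both the client-side and server-side checks from it. A sketch of the username rule above expressed as a JSON Schema (illustrative only, not a recommendation of a specific library):

{
  ""type"": ""object"",
  ""properties"": {
    ""username"": { ""type"": ""string"", ""maxLength"": 10 }
  },
  ""required"": [""username""]
}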

+ +

My question is: how do I organize or create a standard validation flow for a web application?

+ +

I would appreciate it if you could recommend a practical book, a blog, or a series of videos from an expert.

+",291221,,291221,,43954.08611,43955.52431,How to avoid duplicate data validations ( checks ) in web applications?,,2,0,,,,CC BY-SA 4.0, +409634,1,,,5/3/2020 12:13,,1,75,"

I'm new to API design. I've been reading numerous threads on RESTful vs RPC patterns for API design, but I feel like none directly addresses my exact use case, so I just wanted to get a second opinion.

+ +

Let's say I want to build a web app which recommends neighbourhoods to live in, based on two criteria:

+ +
    +
  • Your income (i.e. what you can afford in terms of rent)
  • +
  • Where your office is (i.e. commuting times)
  • +
+ +

I can think of three possible ways of doing this:

+ +
    +
  1. RPC: probably the most straightforward of the three. I define a method on the server called ""getNeighbourhoods"" which takes the user's income / office location as input, finds the neighbourhoods within the price range (by querying the database) and within commuting distance (by calling an external API, for example, to get the drive time between two locations), and returns a list of eligible neighbourhoods with their details. All logic is on the server side. My client just sends the criteria by POST and receives a list of recommended neighbourhoods or a single best neighbourhood.
    +POST /getNeighbourhoods { office_location: foo, income: bar }
+One issue with this is that it's tightly coupled to the client, and I'd need to take precautions when sending sensitive information to the server.
  2. +
  3. RESTful: I define a ""neighbourhoods"" resource and make a GET request.
    +a) GET /neighbourhoods/ would return all neighbourhoods from the database.
    +b) GET /neighbourhoods/?office_location=foo&max_cost=bar would filter for the recommended neighbourhoods. I would basically apply a similar server-side logic as in the RPC example based on the query string params I get.
    One big problem with this system is that I would be pretty apprehensive about sending this data in the query string, even if obfuscated to something like ""max_cost="". What if I need to use more sensitive information in the future?
  4. +
  5. Client: the other solution that I can think of is that I implement two RESTful resources, one which returns all neighbourhoods from the database, and the other which makes API calls and returns commute times. For example:
    +a) GET /neighbourhoods/ returns neighbourhoods from the database.
    +b) GET /directions/<origin>/<destination> calls the external API to get commute times
    +Then, I apply the complex logic on the client side, checking which neighbourhoods returned from the API are within budget and, of those, which are within a 30-minute commute. +
  6. +
+ +

The final method seems to be best since it's the most decoupled, I'm not making any direct third-party API calls from the client (so I don't expose my API key, for example), and I'm not sending any sensitive user information over the network. However, a lot of the logic is on the client side. Do you see any other downsides to this method?

+ +

Thank you!

+",364820,,364820,,43954.61736,43954.61736,API Design (RPC - RESTful),,0,2,,,,CC BY-SA 4.0, +409647,1,,,5/4/2020 6:11,,-4,38,"

I am building a system composed of a few micro-services on AWS. I encountered the need for a certain MS to do the same logical work across a big range of workloads. For example, one piece of work could need 2 vCPU and 4 GB RAM while another needs 4 vCPU and 16 GB RAM. What is the best practice here? Make all the instances big enough to fit the largest need? Have two ECS services, one with 2 vCPU and one with 4 vCPU, with different SQS queues delivering the workload? Or is there another rational option here?

+",153030,,,,,43955.40972,Microservices of different resource type,,1,2,,,,CC BY-SA 4.0, +409648,1,409649,,5/4/2020 8:19,,-2,42,"

Let's say we have a situation in which we have a particular entity, for example a User, which has a set of related entities, e.g. Book.

+ +

At some point in time, a group of users physically receives a new Book.

+ +

That means that each user in that particular group has now a new Book.

+ +

At this point someone has to connect to a page used to manage the users and either

+ +
    +
  1. Edit each User who got a new book, one by one.
  2. +
  3. Select all the Users in the group and add the book to all of them in one single operation.
  4. +
  3. Skip the part where someone has to manually insert data, and automatically update the database when something changes. This would probably be the best solution, but for reasons beyond the technical side (e.g. the customer requires that updates are inserted manually) it is not always feasible.
  6. +
+ +

While browsing different pre-packaged solutions, I haven't been able to find one that allows what I have described in point 2.

+ +

Some of the pre-made GUIs I looked at include:

+ + + +

I am aware that the solution in point number 2 is prone to some problems (e.g. if two users are selected, what books are displayed to the user visiting the page? The books of the first user? The books of the second? The books which both users have? The books that at least one of the two users has?)

+ +

Still, I haven't been able to find a pre-packaged solution for multi-line edits, and this led me to ask myself:

+ +
    +
  1. Is it because, for the backend I am currently using (ASP.NET Core), none of the companies who create pre-made GUIs has built a multi-line edit option, and I should just create one myself?
  2. +
  3. Have I been looking in the wrong direction? Is multi-line edit available, even in pre-made packages, and I have just missed it?
  4. +
  5. Should multi-line edits be avoided altogether?
  6. +
  7. Does any of what I am asking make sense at all? Am I completely missing something?
  8. +
+",347052,,347052,,43955.43194,43955.43194,Is it appropriate to include an option to edit multiple rows at once for a CRUD page?,,1,6,,,,CC BY-SA 4.0, +409650,1,,,5/4/2020 9:21,,0,83,"

We have built an iOS/macOS library that is being used by several iOS & Mac apps of a very big company.

+ +

The library is distributed through CocoaPods and Carthage, the package managers for iOS and macOS libraries.

+ +

We have pipelines set up to build on every commit. The test suite of unit tests, UI tests and integration tests runs on every PR.

+ +

But we are not sure how to go about Continuous Deployment. We cannot release on every PR merge, as that would mean:

+ +
    +
  1. Too many versions of the library on CocoaPods.
  2. +
  3. If upgrading requires code changes in the apps' code, the documentation will be distributed across different versions' release notes.
  4. +
  5. Our library is not significant enough to dedicate engineers to upgrading it frequently on the app side.
  6. +
+ +

Please share if you have faced similar problems, and what the standard practices are.

+",358919,,,,,44214.25208,How to do Continuous Delivery for public libraries distributing through package managers?,,1,3,,,,CC BY-SA 4.0, +409655,1,409768,,5/4/2020 11:17,,0,193,"

I have had hard times naming boolean fields.

+ +

On the one hand, I've read that it is good to name them as affirmative statements, like hasAge or canDance. It's not a problem when naming local variables, for example

+ +
boolean itemIsActive;
+boolean everyItemHasName;
+
+ +

But I don't see how to apply these rules to field names.

+ +

Let's assume that in our project a user can define multiple currencies and determine which one of them is the default one.

+ +
class Currency {
+  String name;
+  boolean myProblematicVariable; // this variable should say 'I AM or I AM NOT the default currency'
+}
+
+ +

How would you name myProblematicVariable? I have a few candidates, but none of them seems right:

+ +
    +
  • default - Setting aside the fact that it's a keyword in many languages (that's not the point of this discussion), I like the fact that if we are using standard getters then I can write boolean currencyIsDefault = currency.isDefault(). However, it's not affirmative.
  • +
  • isDefault - My next guess, but its standard getter and setter look the same as they would for a field named default. This is a problem when it comes to serialization/deserialization. For example, Java's Jackson (a JSON serialization/deserialization library) will serialize this
  • +
+ +
class Test {
+  private boolean isActive = true;
+
+  public boolean isActive() {
+    return this.isActive;
+  }
+
+  public void setActive(boolean isActive) {
+    this.isActive = isActive;
+  }
+}
+
+ +

as this

+ +
{""active"":true}
+
+ +

And this is true for all serialization/deserialization libraries operating on getters and setters. This creates confusion and inconsistency between, for example, how the DTO looks on the server and the client side. I could, of course, tell Jackson that ""this field is named isActive"" using an annotation, but that seems like a hack used to repair bad design.

+ +
    +
  • currencyIsDefault - seems kind of right, but feels like too much. I mean, what's the point of using currency as part of its own field's name? That comes from context. And imagine writing boolean currencyIsDefault = currency.isCurrencyIsDefault(). Awful.
  • +
+ +

And please don't stick to single-language standards; assume that Currency is a DTO which can be sent between client and server.

+ +

Edit

+ +

Some ground rules:

+ +
    +
  • Only one currency at a time can be default
  • +
  • This discussion is not about design patterns but strictly about field naming. Instead of determining whether the currency is the default one, it could be whether it's active, enabled, archived, maybe hovered (in a GUI application), or really anything else that comes to your mind.
  • +
+",349168,,349168,,43955.92431,43957.45972,Naming boolean fields in classes,,6,2,,,,CC BY-SA 4.0, +409664,1,,,5/4/2020 13:15,,0,90,"

I'm at the stage where I'm taking my MVP app and trying to refactor and structure it in a way that is maintainable and extensible. I'm new to software architecture and design, so I'm having early struggles deciding which pattern would best suit my application.

+ +

How do I go about investigating whether Domain-Driven Design (DDD) would be appropriate here, considering that I have a distinct domain (tournament registration) and a lot of complexity in terms of generating brackets and interacting with multiple services?

+ +

The web app runs on Nuxt (using Vue.js and Express) and it communicates with services through network calls.

+ +

Here is a list of things that I have

+ +
    +
  • Bracket API – responsible for generating bracket, fetching matchups, getting current state of the bracket, updating byes, updating players in the bracket etc.
  • +
  • Tournament API – responsible for putting users into tournaments, starting tournaments, ending them and rewarding players
  • +
  • Reward API – picks appropriate reward based on tournament place and sends rewards to users
  • +
  • Cognito API – user accounts
    +
  • +
  • PostgreSQL to store users, tournaments, matchups, rewards, etc.
    +
  • +
  • FTPService – responsible for taking game config files and uploading it onto the game server
  • +
  • FileService – builds JSON files from POCOs
  • +
  • CloudHostingService - using Digital Ocean as a service that allows me to interact with digital ocean API; creating droplets, shutting them off, accessing droplet IPs, etc.
  • +
  • PaymentService - using Stripe API to initialize payments, utilizing hooks to listen for payment events. Upon successful payments, players are put into a tournament
  • +
  • GameService - responds to game call-backs. I’m using CounterStrike as a game of choice and this service will respond to in-game call-backs such as; round start, game end, score changed, etc. which I am using to update the web UI
  • +
  • NotificationService - utilizes Discord API and I’m using it for bot development responsible for sending notifications to discord channels
  • +
+ +

Any help will be extremely appreciated.

+ +

Thanks, +Patrick.

+",348263,,,,,43955.55208,How do I decide if Domain-Driven Design would be applicable to this project?,,0,2,,,,CC BY-SA 4.0, +409667,1,409669,,5/4/2020 13:24,,1,103,"

I am building an SDK that will simplify the use of my API.

+ +

The problem arises when I have to return a property of an enum type. For example, I use strings instead of ints to represent enum values, such as

+ +
{
+  ""type"": ""CAR""
+}
+
+ +

instead of

+ +
{
+""type"": 1
+}
+
+ +

Now, my question regarding SDK design/development is: should I parse this ""type"" property as an enum or as a string? If I parse it as an enum, I am always in danger that when a new enum value is added, the conversion will throw an error. Even if I anticipate that and catch the error, I still end up with either a caught exception or an invalid enum value.

+ +

On the other hand, if I return it as a string this problem is gone, but then I am expecting the client app to keep track of all the enums and enum changes.
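
For illustration, one middle ground that is sometimes used (sketched in Python; the enum name and members are hypothetical): parse into an enum, but reserve an explicit UNKNOWN member for values the SDK does not know about yet.

from enum import Enum

class VehicleType(Enum):
    CAR = 'CAR'
    UNKNOWN = 'UNKNOWN'  # fallback for values added to the API later

    @classmethod
    def parse(cls, raw: str) -> 'VehicleType':
        try:
            return cls(raw)
        except ValueError:
            return cls.UNKNOWN  # optionally keep the raw string alongside

This keeps the type safety of an enum without breaking old SDK versions when the API grows a new value.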

+ +

What would be the expected approach? Or if there is another way to handle this, please suggest it.

+",107234,,107234,,43955.56319,43955.57778,SDK design: Should I parse enum as string or as enum?,,1,1,,,,CC BY-SA 4.0, +409675,1,,,5/4/2020 15:55,,0,86,"

I'm building a .NET Core/Standard application that should support different UI platforms (ASP, API, WinForms/WPF). A rich domain model is perfect for data binding in WinForms and WPF by itself. However, due to (potentially) multiple read-only props (incl. calculated props), it is not so for ASP (Razor etc.) and the API; I need view model classes for those. The dilemma is where to put the business rules (validation). On one hand, I definitely need validation on the domain classes. On the other hand, I also need the very same validation for the UI (to tell the user what's wrong), which leads to code duplication. I read a bunch of articles on the topic and found two approaches:
- just duplicate the rules (data annotations) and ignore the duplication, as the domain model and view model are different beasts even though they look similar;
- define the data annotations in a separate interface and use the ModelMetadataType attribute on both the domain model and the view model.

+ +

I lean towards the second approach, because the business rules are absolute: they do not depend on the angle from which the business data is approached (or am I wrong?). Maybe someone could give me practical advice on which is better in the long run?

+",353988,,,,,43955.66319,.NET core: Independent view model and domain vs. common data annotation via ModelMetadataType and interface,<.net>,0,0,,,,CC BY-SA 4.0, +409677,1,,,5/4/2020 17:34,,1,16,"

I created a forum where I want to display notifications and messages as soon as the user gets them. I don't want them to need to refresh the page to see whether any notification/message has arrived, but I also don't really want to bring in a third party (like Pusher); that's why I thought I would go with polling.

+ +

Basically, if a user is logged in, the polling starts in Angular. Every 10 seconds a request is sent to the Laravel backend asking whether the message table has a new row; if yes, Laravel returns a ""true"" response, and in Angular I display a red dot in the user's profile menu.

+ +

When they navigate to the profile, they get all the messages, including the new ones. I don't stop the polling there, in case they get more new messages later. I unsubscribe from polling only if the user logs out manually.

+ +

Is this acceptable? Should I ""clear"" something at some point so the website won't crash on the client side, or is it fine?

+ +

Thanks for answering!

+",364926,,,,,43955.73194,Is this kind of polling from backend (Angular + Laravel) acceptable?,,0,0,,,,CC BY-SA 4.0, +409679,1,409681,,5/4/2020 18:07,,-2,96,"

I know the difference between waterfall and incremental strategies, but I'm somewhat confused because I see methodologies which use an incremental or iterative approach, yet I cannot find any examples of the waterfall strategy.

+ +

For example, Scrum is a methodology that uses an incremental/iterative strategy, but are there any examples of methodologies that use the waterfall strategy? Or is waterfall itself a methodology?

+",330132,,,,,44216.62639,Which methodologies use waterfall strategy and which use incremental strategy?,,3,0,,,,CC BY-SA 4.0, +409680,1,,,5/4/2020 18:16,,-4,113,"

I don't think any of them is good practice. In addition, they make the code longer.

+ +

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for

+ +

Optional initialization

+ +
var i = 0;
+for (; i < 9; i++) {
+    console.log(i);
+    // more statements
+}
+
+ +

Disadvantages:

+ +
    +
  • Can have side effects
  • +
+ +

Optional condition

+ +
for (let i = 0;; i++) {
+   console.log(i);
+   if (i > 3) break;
+   // more statements
+}
+
+ +

Disadvantages

+ +
    +
  • The only use case I can think of for this construct is when you are programming a microcontroller that waits for an input. Other than that, I simply don't know what you gain by pulling the condition inside the for-loop.
  • +
+ +

Optional final-check

+ +
for (let i = 0;i < 3;) {
+   console.log(i);
+   i++;
+   // more statements
+}
+
+ +

Disadvantages

+ +
    +
  • This is so useless. I don't know what to think of it
  • +
+ +

All expression optional

+ +
var i = 0;
+
+for (;;) {
+  if (i > 3) break;
+  console.log(i);
+  i++;
+}
+
+ +

Disadvantages

+ +
    +
  • This combines all the disadvantages from above
  • +
+ +

I fail to see the benefit of using optional expressions in for-loops. But maybe I'm missing something? Could someone provide examples where the optional for-loop statement is superior to the standard for-loop?

+",151207,,,,,43962.17222,What are use cases of using optional for loop statements?,,4,4,,,,CC BY-SA 4.0, +409682,1,,,5/4/2020 18:20,,2,87,"

I have come across a small issue with the git work flow in the team.

+ +

When starting to work on a user story, we create a feature branch from the develop branch. Once the user story is finished, a pull request is created and another developer completes the code review. Then the QA tests the feature on the feature branch.

+ +

Often it takes a day or more from creating the pull request to finishing testing and merging the new feature into the develop branch.

+ +

The problem comes when I start working on the next feature, which depends on the previous feature that is still under testing. I cannot branch off develop because the previous feature has not been merged into develop yet.

+ +

In that case, I usually create the feature branch from the other feature branch instead of develop.

+ +

I would like to know whether this flow is acceptable and whether it can be improved. Or is this just a communication issue?

+",58220,,,,,43956.56319,Git work flow with pull requests,,2,2,,,,CC BY-SA 4.0, +409683,1,,,5/4/2020 18:24,,0,48,"

I am working on an API that allows a client document to be updated under the endpoint PUT /client/{documentId}. As part of this document, I need to include an extra field, 'ReplicateInManager', that tells the server whether the document needs to be replicated in an external system, alongside the database for this API.

+ +

I am wondering if I am adhering to REST guidelines, as the 'ReplicateInManager' field will not be stored as part of the entity in the database. The server will read the field and, if its value is true, raise actions asynchronously in other microservices, effectively throwing the field away.

+ +

My understanding is the best way to represent actions in a RESTful manner is to model the action as part of the entity, such as in this answer: https://softwareengineering.stackexchange.com/a/261647/364929

+ +

I believe modelling the action as part of the request is right; however, I'm not sure if it's acceptable to then not store this field in the database entity.

+",364929,,,,,44107.12917,REST action as part of request body - action not stored alongside entity,,1,1,,,,CC BY-SA 4.0, +409684,1,409690,,5/4/2020 18:48,,6,283,"

I am in the process of refactoring a large C++ code base (~2300 files, ~600K lines, mostly older C/C++98-style code) and there are definitely memory leaks that could be shored up using C++ smart pointers. Is there an incremental path towards migrating to smart pointers, or is this an ""all or nothing"" proposition?

+ +

For example, all the ""factory classes"" should return std::unique_ptrs, but this will (appropriately) force all of the callers to save the result as a std::unique_ptr. Local code could still just get the raw pointer (treated as a local weak pointer) to process it. It seems I could also follow a similar path where std::shared_ptr should be used: e.g., use std::shared_ptr (and std::weak_ptr for back references) when storing pointers within (multiple) data structures, but use raw pointers as local weak pointers.

+",189402,,,,,43955.87778,Procedure for migrating a large C++ code base to use smart pointers,,1,0,3,,,CC BY-SA 4.0, +409686,1,,,5/4/2020 18:57,,-2,395,"

After reading/watching a lot about Event Sourcing, there is one thing I don't fully understand: events that lead to the triggering of other events.

+ +

Let's take StackExchange's ""close question"" feature as an example. The feature requires

+ +
    +
  • commands like VoteToCloseQuestion, and CloseQuestion (direct close for moderators), and
  • +
  • events like VotedToCloseQuestion and ClosedQuestion -- note the use of past participle to express ""historical facts"".
  • +
+ +

The crucial point: if we reach a state of having 5 VotedToCloseQuestion events, where exactly does the ClosedQuestion event come from?

+ +
+ +

Often the architecture of Event Sourcing + CQRS is presented in diagrams like:

+ +

(from a talk by Dennis Doomen)

+ +

+ +

(from a talk by Mathew McLoughlin)

+ +

+ +

What surprises me in these diagrams: the command handler apparently has no knowledge of past events. Because of that, I fail to see how events triggering other events can work exactly.

+ +

So far, the best hint I have found was this Q/A. The top-voted answer mentions the notion of ""Event Handlers"" that are responsible for feeding commands back into the system as a reaction to events. In an attempt to make this more concrete, I came up with the following interpretation (focusing on the command side only; pipes are topics/queues, solid arrows are publish, dashed arrows are subscribe):

+ +

+ +

However, such an architecture has some weird implications:

+ +
    +
  • The CommandHandler is only subscribed to the commands topic. In particular, a CommandHandler is not subscribed to the event stream itself. This means that the CommandHandler cannot know the full state of the system. As a result, the only decisions it can make are stateless transformations of commands to events -- which is surprisingly boring.
  • +
  • The EventHandler on the other side listens to the stream of events, so it can compute the entire application state (which it may store in a materialized view local to the EventHandler service). This is the place where the business logic can live, and where we can make decisions to trigger follow-up commands.
  • +
+ +
+ +

Looking again at the introductory example, the sequence of commands/events would become:

+ +
    +
  • Command: VoteToCloseQuestion
  • +
  • CommandHandler issues event: VotedToCloseQuestion
  • +
  • EventHandler increments internal vote counter, but does nothing because < 5
  • +
  • ... 3 more ...
  • +
  • Command: VoteToCloseQuestion
  • +
  • CommandHandler issues event: VotedToCloseQuestion
  • +
  • EventHandler increments internal vote counter, sees the count is 5, and issues the CloseQuestion command
  • +
  • Command: CloseQuestion
  • +
  • CommandHandler issues event: ClosedQuestion
  • +
+ +
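
For concreteness, a minimal sketch (Python, with hypothetical names) of the EventHandler in the sequence above: it keeps a small local view of vote counts and feeds a command back into the system when the threshold is reached.

class CloseVotePolicy:
    def __init__(self, command_bus):
        self.command_bus = command_bus
        self.votes = {}  # question_id -> count; a tiny local materialized view

    def on_voted_to_close(self, event):
        count = self.votes.get(event.question_id, 0) + 1
        self.votes[event.question_id] = count
        if count == 5:
            # feed a command back in; CloseQuestion is a hypothetical command type
            self.command_bus.send(CloseQuestion(event.question_id))

(This mirrors the command/event sequence listed above.)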

It seems to work, but I'm not sure if I'm on the right track. In particular I'm wondering:

+ +
    +
  • Does the separation into commands and events really add value like this? It feels like I have just duplicated every command as an event again, and since the CommandHandler cannot contain significant business logic due to lack of context, it becomes mostly boilerplate?
  • +
  • Others (e.g. the CQRS FAQ) seem to imply that the business logic is in the command handler. How is this possible if it cannot know the application state?
  • +
+",125593,,125593,,43956.25694,43957.45694,"How does ""event triggers event"" really work in Event Sourcing / CQRS?",,3,13,1,,,CC BY-SA 4.0, +409691,1,,,5/4/2020 19:57,,1,106,"

I am trying to learn DDD. I am modeling a property management domain and I think I have two contexts (subdomains?): a property management context and a resident context.

+ +

Let's say I have an aggregate root Apartment, that holds Floorplans and Units. Each Apartment can have many Floorplans, and each Floorplan can have many Units.

+ +
public class Apartment : IAggregateRoot // for clarity
+{
+    public int Id { get; }
+    public Address Address { get; set; }
+    public ICollection<Floorplan> Floorplans { get; set; }
+}
+
+public class Floorplan
+{
+    public int Id { get; }
+    public int ApartmentId { get; set; }
+    public string Name { get; set; }
+    public int Bedrooms { get; set; }
+    public int Bathrooms { get; set; }
+    public ICollection<Unit> Units { get; set; }
+}
+
+public class Unit
+{
+    public int Id { get; }
+    public int FloorplanId { get; set; }
+    public string Number { get; set; }
+}
+
+ +

Let's say in the property management context I now introduce a Resident who gets assigned to a Unit. My Unit and Resident classes now look like this:

+ +
public class Unit
+{
+    public int Id { get; }
+    public int FloorplanId { get; set; }
+    public string Number { get; set; }
+    public ICollection<Resident> Residents { get; set; }
+}
+
+public class Resident // in the property management context
+{
+    public int Id { get; }
+    public string FirstName { get; set; }
+    public string LastName { get; set; }
+
+    public void UpdateBalance(...);
+}
+
+ +

My question: if I now introduce a Resident in the resident context (one that can PayRent() or UpdateProfile(), etc.), it must have a 1:1 relationship with the Resident in the property management context. But I thought I cannot reference a non-aggregate-root entity without going all the way through Apartment, because a Resident cannot exist without an Apartment.

+ +

Is my understanding of aggregate roots incorrect? Is Resident an aggregate root in both contexts? I'm not sure how that would be modeled.

+",17928,,,,,43956.67222,DDD referencing child of aggregate root?,,1,8,,,,CC BY-SA 4.0, +409692,1,409693,,5/4/2020 20:08,,4,521,"

I'm working on a Python application in which there are two Singleton classes: App and Configuration.

+ +

The former seems straightforward: only ever instantiate one App instance. The latter seems controversial.

+ +

From the searches I've done, I need Configuration to be accessible from other modules that update the application's configuration file (and subsequently update the application). To do this, I've designed my class as a Singleton by controlling instantiation through its metaclass's __call__ method. To access the instance, in any module I do the following:

+ +
from app.config import Configuration
+
+class Foo:
+
+    def __init__(self, *args, **kwargs):
+        self.config = Configuration()
+
+ +

Now, Foo is some feature belonging to App, so I could've just as easily done:

+ +
from app import App
+
+class Foo:
+
+    def __init__(self, *args, **kwargs):
+        self.config = App().configuration
+
+ +

Where App() returns the application Singleton and .configuration was an attribute where the Configuration was first instantiated. Further searching shows I could even use the app.config module as a Singleton, since Python only loads modules once. So regardless of defining a class as a Singleton or treating the module as a Singleton, the Singleton Pattern remains.
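
For reference, a minimal sketch of the metaclass-controlled instantiation described above (an illustration, not the actual app code):

class SingletonMeta(type):
    _instances = {}  # one cached instance per class

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Configuration(metaclass=SingletonMeta):
    pass

assert Configuration() is Configuration()  # always the same object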

+ +

So what's wrong with either of these:

+ +
    +
  1. Using the class Singleton
  2. +
  3. Treating the module as a Singleton
  4. +
+ +

In any case, I need a single configuration for the entire application. Creating multiple instances would lead to potential race conditions, so this seems like a smart way to handle it. Further, it seems extensible for features such as logging, right?

+",253431,,209774,,44018.75208,44018.82778,What's wrong with using a Singleton?,,2,2,,,,CC BY-SA 4.0, +409695,1,409699,,5/4/2020 21:33,,1,42,"

Let me start by saying that I'm not questioning the utility of encrypting EBS volumes, nor asking how it works.

+ +

I'm just wondering what specifically encrypting EBS volumes is protecting against?

+ +

For my personal laptop, the reason to encrypt the hard drive is if it ever gets stolen, while the thief could create a copy of my hard drive, the data is encrypted at rest and can't be decrypted without logging into my laptop and/or providing the decryption key.

+ +

For an unencrypted EBS volume attached to an EC2, I would assume that the data can only be accessed by the EC2 that it's attached to. Or at least, the data cannot be accessed by anything/anyone besides that EC2 without specifically allowing access to it. Is this assumption wrong?

+ +

If this assumption is correct, then encrypting the EBS volume is protecting against...what? The possibility of the hard drive being stolen from Amazon's datacenter? Or I guess someone could infiltrate their network and digitally copy the data from hard drives, which would then be encrypted?

+ +

I'm just curious about the threat model.

+",237355,,,,,43956.01319,What is the threat model for deciding between unencrypted vs. encrypted EBS volumes?,,1,2,,,,CC BY-SA 4.0, +409701,1,409702,,5/5/2020 0:41,,7,297,"

I was reading about the clean architecture, and I don't get this part:

+
+

Don't use your Entity objects as data structures to pass around in the outer layers. Make separate data model objects for that.

+
+

As long as I don't let the entities leak out through an API (that's why I create Presenters), what's wrong with letting Entities leave the inner domain layer? Why create DTOs for that? Entities are the most stable thing in the clean architecture.

+

The only thing I should avoid (in the onion model) is dependencies pointing outward, but the case I presented points inward (the API depending on the Entities).

+",354722,,-1,,43998.41736,43956.54514,Why can't Entities leave the inner layers in the Clean Architecture?,,2,2,1,,,CC BY-SA 4.0, +409708,1,409754,,5/5/2020 7:33,,-4,47,"

I'm looking for a good way to find a value from a given list based on a text. Example:

+ +
+

This computer has 16GB ram and with the best processor in it. Case is made from aluminium.

+
+ +

And I have a criterion like the amount of RAM, with possible values:

+ +
    +
  • 4GB
  • +
  • 6GB
  • +
  • 8GB
  • +
  • 16GB
  • +
+ +

I can't do a plain substring search for each value in the text, because it could match e.g. 6GB inside 16GB.
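
A small sketch of one way around that particular problem (Python; the values and text are from the example above): match the candidate values on word boundaries, so 6GB cannot fire inside 16GB.

import re

values = ['4GB', '6GB', '8GB', '16GB']
text = 'This computer has 16GB ram and with the best processor in it.'

# \b anchors prevent '6GB' from matching inside '16GB'
found = [v for v in values
         if re.search(r'\b' + re.escape(v) + r'\b', text, re.IGNORECASE)]
print(found)  # ['16GB']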

+ +

It would also be great if this could match similar text, e.g. healed and sealed, based on some correctness factor.

+ +

I've tried the Sørensen–Dice coefficient; it kind of works, but with a low correctness factor.

+",364963,,,,,43957.12986,Choose a most probable value from the list based on some text,,1,2,,43957.31806,,CC BY-SA 4.0, +409709,1,409713,,5/5/2020 7:40,,0,42,"

I've got an Algorithm class whose responsibility is to find if a given word is in a list of words.

+ +

As part of doing that, the algorithm first has to lowercase the words, remove punctuation, and remove any stopwords. Then, finally, the algorithm tries to find the given word in the cleaned list via the find_word method.

+ +

As you can see in the code, I have split everything into small methods to avoid having one big method. However, I am not sure what the proper way to call those methods is. I am calling all of them inside the find_word method, one after the other, but that smells like bad design to me. How do I do this the right way?

+ +
import string
+from nltk.corpus import stopwords
+stopwords = stopwords.words('english')
+
+class Algorithm:
+
+    def __init__(self, words: list):
+        self.words = words
+
+    def _lower(self):
+        # Lowercase all words of the list
+        self.words = [word.lower() for word in self.words]
+
+    def _remove_punctuation(self):
+        # Remove punctuation from all words of the list
+        punct_dict = dict((ord(punct), None) for punct in string.punctuation)  # creates dict of {33:None, 34:None, etc}
+        self.words = [words.translate(punct_dict) for words in
+                     self.words]
+
+    def _remove_stopwords(self):
+        # Remove stopwords from list such as in, at, who, etc.
+        self.words = [word for word in self.words if word not in stopwords]
+
+    def find_word(self, user_word):
+        # Find if the user given word is in the cleaned list
+        self._lower()
+        self._remove_punctuation()
+        self._remove_stopwords()
+        if user_word in self.words:
+            return user_word
+
+class Chatbot:
+
+    def answer(self, user_word):
+        algorithm = Algorithm(['Rabbit', 'Horse', 'turtle'])
+        return algorithm.find_word(user_word)
+
+
+chatbot = Chatbot()
+print(chatbot.answer('horse'))
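
One possible rearrangement, just as a sketch: run the cleaning steps once, as an explicit pipeline (here at construction time), so find_word only does the lookup and the steps are not re-run on every call.

class Algorithm:
    def __init__(self, words: list):
        self.words = words
        self._clean()  # normalize once, up front

    def _clean(self):
        # explicit pipeline: each step mutates self.words in a fixed order
        for step in (self._lower, self._remove_punctuation, self._remove_stopwords):
            step()

    def find_word(self, user_word):
        return user_word if user_word in self.words else None

(The three private methods are unchanged from the code above.)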
+
+",322312,,,,,43956.39583,What's a proper way to call a chain of methods that modify an instance attribute?,,1,0,,,,CC BY-SA 4.0, +409710,1,,,5/5/2020 7:51,,-3,51,"

I was wondering what the principle behind it is, and whether I should treat all functions in a module that are not used anywhere else at the time as private and mark them with an underscore prefix, so others know these are not used anywhere else. Or should I mark as private only those that are 'not meant' to be used anywhere else?

+ +

For example, I have a situation now where I created a module that basically has one click function used from the outside, with the logic split into many smaller functions that are currently only used by this main function. But I think some of those smaller functions are likely to be used by other modules in the future. I wonder what to do in such a case. Do I add an underscore prefix to each function that is currently not being used anywhere else?
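
For illustration, the convention in question on a hypothetical module with one public entry point and underscore-prefixed helpers:

# my_module.py (hypothetical)
def click(event):
    # the only function intended for external use
    _validate(event)
    _apply(event)

def _validate(event):
    ...  # internal helper; the leading underscore signals 'not public API'

def _apply(event):
    ...

If _validate later becomes useful elsewhere, it can be renamed and the underscore dropped; the prefix documents today's intent, not a permanent decision.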

+",364965,,,,,43956.40903,What is the rule for making functions private in Python modules?,,1,0,,,,CC BY-SA 4.0, +409718,1,409720,,5/5/2020 10:51,,-2,68,"

Is there a way through encryption/keys/jwt or anything else to ensure that the data being sent through a POST request is only data coming from another request I made on the client to a 3rd party endpoint like Instagram?

+ +

I haven't been able to think up a secure solution to this. I've tried thinking about solutions where I sign some data, but unless the data comes from and originates from the server, I don't think there is a way to make that work.

+",156719,,156719,,43957.06736,43957.06736,Is there a secure way to ensure a data in an API endpoint of mine came from an Instagram endpoint?,,3,0,,,,CC BY-SA 4.0, +409719,1,,,5/5/2020 10:52,,8,686,"

Is it bad practice to add false or ... or true and ... for the sake of promoting code genericness and/or ease of use?

+ +

As in:

+ +
SELECT *
+FROM table
+WHERE TRUE
+AND IsEnabled
+AND SomeField = some_value
+AND SomeOtherField != some_value
+
+ +

in SQL or in javascript for example:

+ +
if (false
+      || prop === 'category'
+      || prop === 'subCategory'
+      || prop === 'productLogisticGroup'
+      || prop === 'productName'
+      || prop === 'color'
+      || prop === 'configuration'
+)
+
+ +

This makes adding or removing conditions a little easier by removing the need to think about whether a condition is at the edge or not. Is this bad in some way?

+",250313,,250313,,43956.45833,43956.74167,"Is it bad practice to add ""false or"" or ""true and"" to conditionals?",,3,11,,,,CC BY-SA 4.0, +409726,1,,,5/5/2020 14:10,,1,60,"

I have these classes and I want to instantiate each one of them easily.

+ +

All the data to create them is in a JSON file, and there will be a lot of different objects in the end. Each object has a lot of attributes, so I would prefer using a helper. A, B and C can be instantiated and used separately, even if A might depend on a C object (or B).

+ +

+ +

First I thought that a factory method would suffice. But each one of them has a lot of attributes, and since A depends on all the other objects it would be a mess.

+ +

Then I thought that a pseudo-Builder would work best, since it would let me set every attribute one by one while I parse the JSON file. This works, but it means that every class has its own builder and I would need to handle the builders' dependencies when I parse everything.

+ +

There are solutions, but I would like to find something more elegant that does not create objects I don't need. The other thing is that the parser is abstract and implemented by whoever uses the library (needed for several reasons).

+ +

My current solution is a pseudo-builder (a simple inner class that lets me setAtt1(...).setAtt2(...)...Build()) for each class, plus one Factory that has createA(id), createB(id), createC(id), and an actual builder pattern for H1, H2, H3. The Factory is a realization of the abstract JsonParser class that the user needs to implement, made possible by the Builders.

+ +

I'm not looking for the definitive solution, but I would like some advice on my solution and to know whether I am making a big mistake in how I handle things.

+",364996,,,,,44214.17083,Complex objects creation from parsing a file,,1,4,,,,CC BY-SA 4.0, +409727,1,410186,,5/5/2020 14:54,,-1,169,"

I have a system that subscribes to Kafka topics (i.e. a message bus) that publish life-cycle events. My system needs to digest and save these events, and later serve its clients (the users of this database) with information about the time intervals between the events.

+ +

For example, if the Kafka topic exposes the availability/status of a system (say UP, DOWN), I need to be able to query the database for:

+ +
    +
  • all the intervals where a system was DOWN
  • +
  • how much time was this system DOWN in the last 7 days
  • +
  • what was the unavailability (percentage) of this system in the the previous calendar month.
  • +
+ +

Question:

+ +

How should I save this data in the databases considering the following constraints:

+ +
    +
  • the events need to be saved in a relational database
  • +
  • the events could theoretically be received out of order (they still carry the timestamp at origin)
  • +
  • performance is not an issue (we will not reach the limits of Kafka, DB, or clients)
  • +
  • we want to minimise code complexity and size (hence increase the maintainability of the solution)
  • +
+ +

The following example aims to facilitate the understanding of the use case. The incoming events have a source (originating system, here system1), a status (UP or DOWN) and a timestamp (ISO format):

+ +

Input:

+ +
system1,UP,2020-05-05T14:00:00Z
+system1,DOWN,2020-05-05T16:00:00Z
+system1,DOWN,2020-05-05T17:00:00Z
+system1,UP,2020-05-05T20:00:00Z
+
+ +

Outputs

+ +

For a query of all UP intervals for system1:

+ +
system1,[(2020-05-05T14:00:00Z - 2020-05-05T16:00:00Z),(2020-05-05T20:00:00Z - <now>)]
+
+ +

For a query of all DOWN intervals for system1:

+ +
system1,[(2020-05-05T16:00:00Z - 2020-05-05T20:00:00Z)]
+
+ +

For a query on how much time system1 was DOWN:

+ +
system1,4h
+
+ +

where:

+ +
    +
  • events with the same status can arrive multiple times (i.e. duplicates)
  • +
  • now represents the time of generating this result
  • +
+ +

While researching, I found one possible solution:

+ +
| name    | status | from                 | to                   |
+------------------------------------------------------------------
+| system1 | UP     | 2020-05-05T14:00:00Z | 2020-05-05T16:00:00Z |
+| system1 | DOWN   | 2020-05-05T16:00:00Z | 2020-05-05T20:00:00Z |
+| system1 | UP     | 2020-05-05T20:00:00Z | null                 |
+
+ +

Upon receiving a new event, you:

+ +
    +
  • update the latest event you have so far for that system (if any) adding the to (closing the interval) and
  • +
  • start a new open-ended interval (i.e. with to being null).
  • +
+ +

But this solution does not feel like the most natural way to do this, and I am asking myself if there is another solution to:

+ +
    +
  • avoid multiple updates: INSERT + UPDATE on every event, or INSERT + 2 UPDATEs if the events come out of order
  • +
  • avoid data duplication: needing two columns instead of a single one (considering that the time intervals are contiguous)
  • +
  • simplify querying and aggregation: in this solution you would have to group by name, sort by from, then eliminate consecutive duplicate status rows and join the consecutive intervals with the same status (one way to do this is sketched after this list).
  • +
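
For reference, a small sketch (Python; names and structures are illustrative) of the derive-at-query-time alternative: store the raw events and collapse consecutive duplicate statuses into intervals when reading.

from itertools import groupby

events = [
    ('system1', 'UP',   '2020-05-05T14:00:00Z'),
    ('system1', 'DOWN', '2020-05-05T16:00:00Z'),
    ('system1', 'DOWN', '2020-05-05T17:00:00Z'),  # duplicate status, collapsed away
    ('system1', 'UP',   '2020-05-05T20:00:00Z'),
]

def intervals(events):
    events = sorted(events, key=lambda e: e[2])   # tolerates out-of-order arrival
    # keep the first event of each run of equal consecutive statuses
    runs = [next(group) for _, group in groupby(events, key=lambda e: e[1])]
    closed = [(a[1], a[2], b[2]) for a, b in zip(runs, runs[1:])]
    return closed + [(runs[-1][1], runs[-1][2], None)]  # open-ended last interval

print(intervals(events))
# -> UP 14:00-16:00, DOWN 16:00-20:00, UP 20:00-open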
+ +

Searching for a solution that leverages more:

+ +
    +
  • numerical integration (as the problem can be reduced to the area between the graph and the X-axis, i.e. time; InfluxDB offers the integral aggregation function for such data)
  • +
  • the numeric value behind the dates and use them in some mathematical function
  • +
+",364999,,364999,,43967.79028,43971.53889,How to save events in a relational database better support time-based queries on these events,,1,1,,,,CC BY-SA 4.0, +409730,1,,,5/5/2020 15:55,,1,51,"

I have built a few stacks in my time as a self-taught full-stack developer, but I think there may be a hole in my knowledge that you can hopefully fill in for me. The stacks I have built all share a core architecture like this:

+ +
    +
  • Backend in Go, interfacing with a Postgres database or doing some core logic
  • +
  • RESTful API server (also written in Go) serving over HTTPS that triggers database methods/backend logic
  • +
  • Frontend in JavaScript using ReactJS or some other framework. The front end makes calls to the API server (using fetch()) whenever data is needed/logic should be executed (on the backend).
  • +
+ +

I find that this is a pretty good, common architecture that can work in many different use cases. However, I am wondering if there is any other type of core architecture that differs from this. I have heard of GraphQL and I see how it differs from this traditional REST-like architecture, but I am wondering if there is some other type of architecture used to build web apps that does not revolve around a REST/SOAP/GraphQL API. I've heard of WebSockets, and I think they fit into what I am asking about here, so how are they different from just calling JavaScript's fetch() against my REST API (or using a third-party library like axios)?

+",365005,,,,,43956.66736,Different Types of Full Stack Architectures,,1,0,,,,CC BY-SA 4.0, +409731,1,,,5/5/2020 15:59,,1,124,"

I'll demonstrate with an example of the normal distribution in Python.

+ +
def norm_pdf(x, mu=0, v=1, p=1):
+  """"""Returns un-normalized probability density of normal distribution at x.
+  mu: mean
+  v: variance
+  p: precision (inverse of variance)
+  """"""
  # if they specify v (note: with both returns present, the second one below
  # is unreachable; the two returns illustrate the two alternatives)
+  return 2.71828**(-(x-mu)**2/v/2)
+  # if they specify p
+  return 2.71828**(-(x-mu)**2*p/2)
+
+ +

The idea is that the user should be able to specify the variance v or the precision (inverse variance) p, but not both. What is the proper way to deal with something like this? What is the etiquette that libraries like NumPy use in these cases? Normally I'd overload the function, but v and p are the same type, so it's not that simple.
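
One common convention, as a sketch (using None sentinels; an illustration, not a claim about any particular library's internals): default both parameters to None, resolve them to a single internal parameter, and raise if the caller passes both.

def norm_pdf(x, mu=0, v=None, p=None):
    # v and p are mutually exclusive; neither given means v=1
    if v is not None and p is not None:
        raise ValueError('specify either v or p, not both')
    if p is None:
        p = 1 / (v if v is not None else 1)
    return 2.71828 ** (-(x - mu) ** 2 * p / 2)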

+",331410,,,,,43956.74861,Good etiquette for 2 optional arguments that can't both be used,,3,1,1,,,CC BY-SA 4.0, +409738,1,,,5/5/2020 17:56,,3,193,"

Let's say we are using TDD while developing some Calculator class (the simplest case: it should provide public add, sub, mul and div methods). We initially start with a develop branch. The following patterns come to my mind:

+ +

Pattern 1:
First, adding numbers: we create a feature-add branch and do the typical TDD flow. At the end we have fully implemented and tested functionality, so we merge the branch into develop. Next, subtracting: a new branch called feature-sub is created, TDD cycles are done, and feature-sub is merged into develop. The same workflow takes place for the mul and div methods.

Pattern 2: we create a feature-operators branch. All the code and tests for the above functionality are done inside this single branch (they are logically ""one piece"" of code).

+ +

Is it better to have one bigger branch, or many small branches, each covering one TDD-developed piece of functionality (e.g. one public method)?

+",365010,,,,,43957.41736,TDD and GIT workflow. How big should branches be?,,3,2,1,,,CC BY-SA 4.0, +409744,1,,,5/5/2020 19:05,,-4,26,"

I am creating a Node.js website for selling photos. I will store the images in the static folder, but only in low quality or watermarked. When a user purchases an image, I want to give them a link to the full-quality image over a secure connection. Please guide me, sensei! I am new to the developer's world and doing a real project for the first time. Your help will be appreciated. Thank you!

+",365013,,,,,43957.11736,Where to store the images on Server-side and not to store in the static folder and How to retrieve them on request?,,1,1,,,,CC BY-SA 4.0, +409746,1,409777,,5/5/2020 19:56,,1,51,"

Consider the following

+ +

Group A

+ +
Job A {
+    Depends on Job B of Group A
+    Run User -> User1
+}
+
+Job B {
+    Depends on Job C and Job D of Group A
+    Run User -> User2
+}
+
+Job C {
+    Depends on Job D of Group A and Job A of GroupB
+    Run User -> User1
+}
+
+Job D {
+    Depends on Job E of Group A
+    RunUser -> User3
+}
+
+Job E { 
+    Run User -> User3
+}
+
+ +

Group B

+ +
Job A {
+    Depends on Job C of Group B
+    Run User -> User4 
+}
+
+Job F {
+    Depends on Job C and Job D of Group B
+    RunUser -> User2
+}
+
+Job C {
+    Depends on Job D of Group A
+    Run User -> User1
+}
+
+Job D {
+    Run User -> User5
+}
+
+ +

Group C

+ +
Job C {
+    Depends on Job A of Group A
+    Run User -> User6 
+}
+
+Job G {
+    Depends on Job H of Group C
+    Run User -> User5
+}
+
+Job H {
+    Run User -> User7
+}
+
+ +

Group D

+ +
Job I {
+    Run User -> User8
+}
+
+ +

and so on...

+ +

For simplicity, let us assume that I have ~50-60 such groups, and in each group I have around 1000 Jobs. Run users are Unix users: the user a Job runs as.

+ +

If you look closely, you will notice that this is a cross-group directed acyclic graph of Jobs. Hence I am thinking of building an event-driven system for triggering these Jobs, and for that I am thinking of using Kafka.

+ +
    +
  1. Producer: Each invocation of a Job is a separate process. These are my producers (short lived).
  2. +
  3. Consumer: Assume we have one consumer per run user. I am not sure how to trigger Jobs when a Job depends on more than one event (i.e. on the completion of more than one Job).
  4. +
  5. Topics: I am not sure about Kafka Topics. Should I have + +
      +
    • One topic per group?
    • +
    • One topic per user?
    • +
    • One topic per group per user? Or,
    • +
    • One topic per user per group per job?
    • +
  6. +
+ +

Basically I want to solve the following use-cases:

+ +

Usecase 1: Secretive Job A run by user X depends on secretive Job B run by user Y. Neither A nor B wants to tell anyone else in the world about its existence. A and B need to trust each other, which means they may know of each other's existence.

+ +

Usecase 2: Public Job A run by user X depends on secretive Job B run by user Y

+ +

Usecase 3: Secretive Job A run by user X depends on public Job B run by user Y

+ +

Usecase 4: Public Job A run by user X depends on public Job B run by user Y

+ +

Any ideas on how I should go about designing:

+ +
    +
  1. The Kafka topics, from a security perspective, to solve the use cases above.
  2. +
  3. How do I consume events and launch Jobs (for Jobs that depend on multiple other Jobs)? (A sketch of the dependency-counting idea follows this list.)
  4. +
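
For the multi-dependency case, a minimal sketch (Python; launch and the job keys are hypothetical): track the set of upstream Jobs each Job still waits for, and trigger it when that set becomes empty.

pending = {
    ('GroupA', 'JobB'): {('GroupA', 'JobC'), ('GroupA', 'JobD')},
    ('GroupA', 'JobD'): {('GroupA', 'JobE')},
}

def on_completion_event(finished_job):
    # called for every 'Job finished' event consumed from Kafka
    for job in list(pending):
        pending[job].discard(finished_job)
        if not pending[job]:
            del pending[job]
            launch(job)  # hypothetical: start the job as its run user

In practice this bookkeeping would have to be durable (e.g. in a compacted topic or a small database) so a consumer restart does not lose the counters.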
+",361588,,361588,,43956.84861,43957.58611,Designing Kafka topics for secured event driven job scheduling system,,1,3,,,,CC BY-SA 4.0, +409748,1,409750,,5/5/2020 20:49,,1,170,"

I'm used to using static link libraries in my projects. This keeps the solution lighter and allows the libraries to be updated more easily.

+ +

However, I see some GitHub repos shipping the code of the libraries they use inside their own sources (the example that made me raise this question is the libffi library, included wholesale in the source code of the Racket language).

+ +

Why such a choice? If I want to make a project public on GitHub, should I include the libraries' sources in the project, or consider using static libraries?

+ +

EDIT

+ +

I'm asking this question in a somewhat encompassing way, although I imagine the answers will vary depending on what is being used as well as the target audience.

+ +

To focus the question a bit more: I was looking at projects written in C, for non-embedded platforms. I have a virtual machine project (under the Apache 2.0 license) which uses MIT- and LGPL-licensed libraries. I'm targeting x86 architectures.

+",365020,,365020,,43958.50417,43958.58333,Should we include the entire sources of the libraries used in our project?,,2,3,,,,CC BY-SA 4.0, +409755,1,,,5/6/2020 4:30,,-1,85,"

I am developing an application and want to implement certain functionality. I find that this can be done in the stack I am using; however, it's hard to implement. Alternatively, I can create a microservice with this functionality using another technology and use it for that particular aspect of my overall application. This is much easier than the first option.

+ +

Which option would normally be cheaper overall when actually deploying and scaling the application, not including the cost and time of developing it? Thanks.

+",365037,,365037,,43957.23542,43957.47222,Developing your application as microservices or as a monolithic app - which is cheaper when deploying and scaling?,,2,3,,,,CC BY-SA 4.0, +409759,1,,,5/6/2020 7:10,,0,37,"

I have a web service which is exposed to a UI owned by our team. This web service is responsible for creating objects and saving them in the DB (a NoSQL database). The object being created has multiple implementations and behaviours which can be changed at runtime. How should the APIs be split? I have three approaches in mind:

+ +
    +
  1. Have one single API and handle the implementations and behaviours in that API.
  2. +
  3. Have N APIs for N implementations.
  4. +
  5. Have N * M APIs for N*M implementations and behaviours combinations.
  6. +
+ +

In the first approach, the API would be overloaded with decision points for each property/behaviour. In the second approach, I see the issue of sharing common logic for behaviours across the N implementations. In the third approach, any new implementation or behaviour would require N or M new APIs.

+ +

Example: the service is responsible for creating various Furniture objects. a) A piece of furniture can be of type Chair, Table, Stool, etc. b) An object can be made of different materials, e.g. Steel, Wood, Iron (more in the future). c) Each object can have a different number of legs. I am assuming the formula to calculate the strength of a K-legged object is the same for all objects irrespective of implementation.

+ +

The UI would know what shape to create (e.g. a table), with what material (wood) and how many legs (six, for example), and would call the backend API. Thanks in advance. Please let me know if more details are required.

+",365044,,,,,43957.64722,API Split for creating object with inheritance and behaviors,,1,2,,,,CC BY-SA 4.0, +409766,1,,,5/6/2020 10:03,,0,23,"

I have multiple apps (app 1, ..., app n) that don't know each other but use the same services. At the same time, there is a single application to configure the services (let's call it the backend).

+ +

One service in particular simply stores content that we have created using an editor, and provides it to the applications through the API gateway. The editor uses the same renderer as the applications, in order to ensure the content will be displayed as desired. This renderer (ContentRenderer) has knowledge of the ContentModel.

+ +

The service, however, uses a validator (ContentValidator) when receiving the content from the backend, in order to ensure it fits the ContentModel. Here I feel that this service is really part of a distributed monolith, because of the shared ContentModel.

+ +

To visualize this I have created a simple (non-UML) overview:

+ +

+ +

How can I avoid this coupling but ensure

+ +
    +
  • the editors still work with the same renderer as the apps will use and
  • +
  • the content is still validated with the appropriate model
  • +
+ +

I first had the idea of adding a ContentModel designer to the backend, which would allow defining the model and injecting it into the validator and renderer, but I have the strange feeling that I would just move the coupling to runtime. Am I missing something here? Is this the wrong approach for this situation?

+",290260,,,,,43957.41875,Avoid tight coupling when configuring a service from a backend,,0,3,,,,CC BY-SA 4.0, +409772,1,409776,,5/6/2020 11:37,,1,63,"

I'm working on a project that uses Pub/Sub (GCP); my question is not specific to GCP, it's more about the architectural pattern (I'm used to statically typed languages, and I have a hard time figuring out how to do this the right way).

+ +

The services that I'm working on are written in Go, and what I would like (at least to me this seems the right way) is to force the consumers and producers to use the same message format, i.e. agree on the schema at compile time. Right now the two parts are totally independent, so we have the message format specified in two places (this really bugs me).

+ +

In the beginning I thought that the consumer should own the message format (don't judge, I'm new to this kind of architecture). After a discussion with a coworker and some reading afterwards, I agree that this would kind of break the pattern, as the producer would know about the consumer; an issue also appears when you have multiple consumers.

+ +

My next thought was to extract the message format into a separate package and have both consumers and producers use the format from there, but this again increases the coupling. I tried to do some reading on this, but I can't find a more detailed explanation/diagram of the pattern that answers my question, and I'm surely not the only one who has thought about this problem.

+ +

Am I on the right track, or what would be the right way to solve this? Or am I just making my life more complicated than it has to be?

+",365058,,,,,43957.60278,How to agree on message schema in a Publish–subscribe pattern,,2,0,,,,CC BY-SA 4.0, +409773,1,,,5/6/2020 11:47,,19,6008,"

I often read definitions for Polymorphism such as the following:

+ +
+

Polymorphism is the ability to have objects of different types + understanding the same message

+
+ +

But the above definition also applies if we don't use polymorphism: for example, if we have an object of type Circle with a method draw() and another object of type Rectangle with a method draw(), we can do:

+ +
circle1.draw();
+rectangle1.draw();
+
+ +

So circle1 and rectangle1 understood the same message draw() without using polymorphism!
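
+ +

For contrast, here is a minimal Java sketch of what I understand the definition to actually mean: a single variable of a common supertype, where the call site does not know which draw() runs until runtime:

+ +
interface Shape { void draw(); }
+
+class Circle implements Shape {
+    public void draw() { /* draw a circle */ }
+}
+
+class Rectangle implements Shape {
+    public void draw() { /* draw a rectangle */ }
+}
+
+class Demo {
+    public static void main(String[] args) {
+        Shape[] shapes = { new Circle(), new Rectangle() };
+        for (Shape s : shapes) {
+            s.draw(); // same call, different behaviour depending on the runtime type
+        }
+    }
+}
+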

+ +

Am I missing something?

+",280327,,9771,,43958.73056,43969.54236,What is polymorphism if you can already have methods that are the same defined in different types?,,10,14,4,,,CC BY-SA 4.0, +409778,1,409783,,5/6/2020 14:07,,5,763,"

The classic example for a transaction is withdrawing money from a savings account and depositing it to a checking account in the same bank.
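
+ +

Just to make the terms concrete, here is a minimal JDBC sketch of that textbook transfer (table and column names are made up; this only illustrates the textbook pattern, it is not a claim about what real banks do):

+ +
import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+
+class Transfer {
+    // Both updates commit together or not at all.
+    static void move(Connection con, long from, long to, long cents) throws SQLException {
+        con.setAutoCommit(false);
+        try (PreparedStatement debit = con.prepareStatement(
+                 ""UPDATE account SET balance = balance - ? WHERE id = ?"");
+             PreparedStatement credit = con.prepareStatement(
+                 ""UPDATE account SET balance = balance + ? WHERE id = ?"")) {
+            debit.setLong(1, cents);
+            debit.setLong(2, from);
+            debit.executeUpdate();
+            credit.setLong(1, cents);
+            credit.setLong(2, to);
+            credit.executeUpdate();
+            con.commit();
+        } catch (SQLException e) {
+            con.rollback();
+            throw e;
+        }
+    }
+}
+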

+ +

Yet I have the suspicion that DB transactions are actually not used for such examples, but rather that eventual consistency is achieved with a journaling system that reconciles inconsistencies at the end of the day. I imagine that transactions are used for a single update (withdrawal/deposit).

+ +

What is the best practice? I realize that there is no single answer, but I wonder what is commonly done.

+ +

(This asks the same question, but about distributed systems. I am asking what is done within a single bank.)

+",14493,,,,,43957.64583,Are bank transactions run with DB transactions?,,2,3,1,,,CC BY-SA 4.0, +409779,1,,,5/6/2020 14:10,,1,33,"

I have an ongoing project where the directory structure is currently a mess (GitHub directory) and I would like to structure it to eventually create a Python package out of it. Below I have outlined a potential structure for packaging it and would like some feedback on whether it is a sensible way to do it, or whether there are other things that should be considered. Input on how it can be structured in a friendlier way, or on what should be added/deleted, would be highly appreciated.

+

+cli_stats/      
+|__ __init__.py 
+|__ README.md   
+|__ docs/   
+|     |__ docs
+|__ cli_stats/  
+|     |__  cli_stats.py
+|     |__  league_season_init.pickle (generated from setup.py)
+|__ pickle/ 
+|     |__  __init__.py
+|     |__  setup.py
+|     |__  get_id/
+|     |       |__ __init__.py
+|     |       |__ get_id.py
+|     |__ get_stats/
+|     |       |__ __init__.py
+|     |       |__ get_stats.py
+|     |__ api_scraper/
+|         |__ __init__.py
+|         |__ api_scraper.py
+|         |__  check_api.py
+|       
+|__ directory/  
+|    |__ __init__.py
+|    |__ directory.py
+|__clean_stats/ 
+     |__ __init__.py
+     |__ clean_stats.py
+
+
+

The dependencies are the following; the order in which they are presented represents how they build on top of each other:

+

(The import statements represent the dependencies between modules in the current hierarchy, which is flat.)

+

api_scraper.py

+

This is the "main" module that handles the requests to the API. It's the building block on which get_id.py and get_stats.py are built.

+

get_id.py

+
imports:
+
+from api_scraper import Football
+from directory import Directory
+from directory import StorageConfig
+
+

This module retrieves IDs from the API and stores them in a folder named params/.

+

get_stats.py

+
imports:
+
+from api_scraper import Football
+from directory import Directory
+from directory import StorageConfig
+
+

This module reads IDs from params/, retrieves stats from the API, and stores them in a folder named raw_data/.

+

setup.py

+
imports:
+
+from get_stats import Stats
+from get_stats import SeasonStats
+from directory import Directory
+from directory import StorageConfig
+
+

This module generates the league_season_init.pickle which is an essential file for cli_stats.py

+

clean_stats.py

+
imports:
+
+from directory import Directory
+from directory import StorageConfig
+
+

This module cleans the data in raw_data/

+

directory.py

+

This module is a helper for creating paths and for saving/loading/writing JSON files; it also holds the directory paths as class variables.

+

cli_stats.py

+
imports:
+
+from directory import Directory
+from directory import StorageConfig
+from get_stats import SeasonStats
+import clean_stats as clean
+
+

This module is an interactive command-line interface that utilizes clean_stats.py and get_stats.py. It will be expanded to include more features, such as pushing data from clean_data/ to a database.

+",365076,,-1,,43998.41736,43957.59028,Optimal package structure - Command Line Interface,,0,4,,,,CC BY-SA 4.0, +409786,1,409791,,5/6/2020 15:41,,2,76,"

I recently learned about the decorator-pattern to dynamically extend existing behaviour. So I have this code:

+ +
IMyInterface b = new A();
+if(someCondition)
+    b = new B(b);
+if(secondCondition)
+    b = new C(b);
+
+ +

and so on, where all classes A, B and C implement IMyInterface and B and C are the decorators. So the final request for a method might be handled by up to three instances in the above example.

+ +

However, I now want to inject a decorator into my code, because I have a plugin mechanism which allows me to separate product code from project-specific code. This is why I don't even know anything about B and C. So I want a way to obtain a decorated instance of my interface such that I can chain multiple plugins and thus multiple decorators:

+ +
IMyInterface b = new A();
+/* collect the decorators from all plugins and chain them to get the final instance */
+
+ +
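
What I am picturing, as a rough sketch in Java-like syntax (Plugin and Bootstrapper are hypothetical names I made up; IMyInterface and A are the types from above):

+ +
import java.util.List;
+import java.util.function.UnaryOperator;
+
+interface Plugin {
+    // Each plugin contributes a decorator that wraps the current instance
+    // (or returns it unchanged).
+    UnaryOperator<IMyInterface> decorator();
+}
+
+class Bootstrapper {
+    static IMyInterface build(List<Plugin> plugins) {
+        IMyInterface instance = new A();
+        for (Plugin p : plugins) {
+            instance = p.decorator().apply(instance); // chain in plugin order
+        }
+        return instance;
+    }
+}
+

+ +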

Does anyone have a good idea on how to create the instances?

+",273281,,,,,43957.73958,Inject Decorators,,1,2,1,,,CC BY-SA 4.0, +409789,1,409792,,5/6/2020 17:09,,1,242,"

I want to write some invoicing logic, and I start coding it, using TDD.

+ +

The following example is silly, but I'm confident it represents well the everyday dilemma I'm facing

+ +
function createTestInvoice(client) {
+  return Invoice.new(
+    {
+      client: client, rows: [
+        {item: 'apple', quantity: 1, unit_price: 1},
+        {item: 'banana', quantity: 2, unit_price: 2}
+      ]
+    }
+  )
+}
+
+al = Client.new({name: 'Al', can_buy_apples: true})
+
+assert(
+  createTestInvoice(al).total
+).to(equal(5))
+
+john = Client.new({name: 'John', can_buy_apples: false})
+
+// John cannot buy apples, so his apple row should be dropped
+assert(
+  createTestInvoice(john).total
+).to(equal(4))
+
+
+ +

When I start with the implementation I realise that just the following

+ +

+ +

is not enough, since I need some kind of InvoiceRowFactory that decides how and if to allow the row to be stored.

+ +

At this point the test in this unit would be exercising logic that belongs to the InvoiceRowFactory. A test for the factory itself would be:

+ +
/* remember: John cannot buy apples */
+assert(InvoiceRowFactory.call(john, {item: 'apple', quantity: 1, unit_price: 1})).to(be(null))
+
+ +

How should I proceed now? Keep the tests in the specs of Invoice, or move them to a brand-new test file for InvoiceRowFactory? Both options feel wrong to me, because:

+ +
    +
  • if I keep the tests here, I'll be testing something that belongs to another unit, one that can and should be tested separately
  • +
  • if I move the tests, I'll be stubbing the dependency. Then at some point in the future I realise this code is horrible OOP and want to replace InvoiceRowFactory with InvoiceRow. A refactorer would love to keep all the tests green while refactoring, but how? He can't, because my test of Invoice is too tightly coupled to the InvoiceRowFactory stub
  • +
+ +

I just cannot reach the point where I can look at some code and make changes while staying confident I won't break anything, because I write such small units that the only way to improve them is to change the way they interact.

+ +

Thank you

+",365080,,,,,43990.56806,How does TDD behave when the tested unit needs to be expanded?,,3,0,,,,CC BY-SA 4.0, +409790,1,,,5/6/2020 17:13,,1,68,"

I am working on a GUI application that would work as follows:

+ +

It will retrieve and save data from any of the following sources:

+ +
    +
  • A ""Cloud Library"": this library would get and save data to a cloud.
  • +
  • JSON file: some initial data is loaded from a large JSON file.
  • +
  • Text file: data retrieved from the ""cloud library"" can be saved to a text file so that the application can load data from it as well.
  • +
+ +

When the application starts it would ask for the JSON file to load the initial data. Once the JSON is loaded, the data from it would be shown in the main window in a grid-like widget.

+ +

The application will have 3 states:

+ +
    +
  • Offline: the initial and default state. In this state, the data displayed in the GUI cannot be modified.
  • +
  • Cache: data displayed in the GUI can be modified and saved only to local text files on request (i.e. when the user clicks the ""save"" button).
  • +
  • Online: data displayed in the GUI can be modified and is saved to the cloud automatically, through the ""Cloud library"", once a change occurs. Optionally the user can save the modifications to local files (i.e. when the user clicks the ""save"" button).
  • +
+ +

The application would change to states as follows:

+ +
    +
  1. When the application starts it is in the ""offline"" state and behaves as follows: + +
      +
    • After loading data from the given JSON file, the data is displayed in the main window and disabled for modification.
    • +
    • The name of the loaded JSON file is shown in the status bar.
    • +
    • Entry ""Show online data"" from the menu bar is disabled.
    • +
    • Entry ""Real time graphic"" is disabled.
    • +
    • Entry ""Save data"" is disabled.
    • +
  2. +
  3. If a text file with previously saved data is loaded, the application changes to the ""Cache"" state and: + +
      +
    • The name of the loaded file is shown in the status bar.
    • +
    • Entry ""Show online data"" from the menu bar is enabled.
    • +
    • Entry ""Real time graphic"" is disabled.
    • +
    • Entry ""Save data"" is enabled.
    • +
  4. +
  5. There is an entry called ""Get online"" in the menu; when it is pressed the application connects to the ""Cloud library"" and data is retrieved from the cloud. The application reaches the ""Online"" state: + +
      +
    • The label ""Online"" is shown in the status bar instead of the JSON file name.
    • +
    • Entry ""Show online data"" from the menu bar is enabled.
    • +
    • Entry ""Real time graphic"" is enabled.
    • +
    • Entry ""Save data"" is enabled.
    • +
    • Data modified in the main grid is automatically updated in the cloud.
    • +
  6. +
  7. When the ""Get offline"" menu entry is pressed, the application returns to the ""Cache"" state.
  8. +
+ +

I am implementing Clean Architecture plus MVP to isolate the GUI from business rules. Also, to handle the states I plan to use the State pattern, and here is where my question comes in:

+ +
    +
  • As I would use MVP and it seems the states listed above affects only to GUI elements, the state pattern would only apply to ""Presenter(s)"" object(s)?
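
+ +

For reference, here is a minimal Java sketch of what I mean by applying the State pattern only to the Presenter (all names are hypothetical):

+ +
interface PresenterState {
+    boolean canSave();
+    boolean canShowOnlineData();
+    boolean canShowRealTimeGraphic();
+}
+
+class OfflineState implements PresenterState {
+    public boolean canSave() { return false; }
+    public boolean canShowOnlineData() { return false; }
+    public boolean canShowRealTimeGraphic() { return false; }
+}
+
+class CacheState implements PresenterState {
+    public boolean canSave() { return true; }
+    public boolean canShowOnlineData() { return true; }
+    public boolean canShowRealTimeGraphic() { return false; }
+}
+
+class OnlineState implements PresenterState {
+    public boolean canSave() { return true; }
+    public boolean canShowOnlineData() { return true; }
+    public boolean canShowRealTimeGraphic() { return true; }
+}
+
+class MainPresenter {
+    private PresenterState state = new OfflineState();
+    void goOnline() { state = new OnlineState(); refreshView(); }
+    void goCache()  { state = new CacheState();  refreshView(); }
+    void refreshView() {
+        // enable/disable menu entries based on the current state
+    }
+}
+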
  • +
+",358955,,358955,,43957.83333,43957.83333,Clean architecture and state pattern,,0,3,0,,,CC BY-SA 4.0, +409798,1,409803,,5/6/2020 20:01,,0,36,"

While learning Hadoop and Spark, I came across ""distributed data processing"" and ""distributed computing"".

+ +

Could you let me know whether they are the same, or whether they refer to different concepts?

+",365101,,209774,,43957.87986,43957.90486,"what is the difference between ""distributed data processing"" and ""distributed computing""?",,1,2,,,,CC BY-SA 4.0, +409799,1,409811,,5/6/2020 20:03,,-2,48,"

So basically we have a library that contains a series of BiFunctions that are passed the metadata and datum, looking like: +Transform1:

+ +
package transformation1;
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.node.ObjectNode;
+import java.util.function.BiFunction;
+
+public class Transform1 implements BiFunction<JsonNode, ObjectNode, JsonNode> {
+
+    @Override
+    public JsonNode apply(JsonNode metadata, ObjectNode datum) {
+        //business logic
+    }
+}
+
+ +

Transform2:

+ +
package transformation2;
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.node.ObjectNode;
+import java.util.function.BiFunction;
+
+public class Transform2 implements BiFunction<JsonNode, ObjectNode, JsonNode> {
+
+    @Override
+    public JsonNode apply(JsonNode metadata, ObjectNode datum) {
+        //business logic
+    }
+}
+
+ +

Transform3:

+ +
package transformation3;
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.node.ObjectNode;
+import java.util.function.BiFunction;
+
+public class Transform3 implements BiFunction<JsonNode, ObjectNode, JsonNode> {
+
+    @Override
+    public JsonNode apply(JsonNode metadata, ObjectNode datum) {
+        //business logic
+    }
+}
+
+ +

The clients will use transform1, transform2, and some may use transform3. The output of 1 goes to 2 and 2 to 3. So currently a client can just call them individually in order, but I am wondering if there is a way to method chain them similar to say the builder pattern.

+ +

So it will be accessed:

+ +
Transform1 transform1 = new Transform1();
+ObjectNode datum1 = (ObjectNode) transform1.apply(transform1Metadata, datum);
+Transform2 transform2 = new Transform2();
+ObjectNode datum2 = (ObjectNode) transform2.apply(transform2Metadata, datum1);
+Transform3 transform3 = new Transform3();
+ObjectNode datum3 = (ObjectNode) transform3.apply(transform3Metadata, datum2);
+
+ +

But I am wondering: is it possible to create another class that would do something akin to a builder pattern for building objects? So something like:

+ +

TransformationBuilder
+    .transform1(transform1Metadata)
+    .transform2(transform2Metadata)
+    .build(datum);
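
+ +

A rough sketch of how such a builder could be implemented (only one possible shape, not an existing API; it reuses the Transform classes above and assumes each intermediate result is an ObjectNode):

+ +
import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.node.ObjectNode;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.function.Function;
+
+public class TransformationBuilder {
+
+    private final List<Function<ObjectNode, JsonNode>> steps = new ArrayList<>();
+
+    public TransformationBuilder transform1(JsonNode metadata) {
+        Transform1 t = new Transform1();
+        steps.add(datum -> t.apply(metadata, datum)); // metadata is captured per step
+        return this;
+    }
+
+    public TransformationBuilder transform2(JsonNode metadata) {
+        Transform2 t = new Transform2();
+        steps.add(datum -> t.apply(metadata, datum));
+        return this;
+    }
+
+    public TransformationBuilder transform3(JsonNode metadata) {
+        Transform3 t = new Transform3();
+        steps.add(datum -> t.apply(metadata, datum));
+        return this;
+    }
+
+    public JsonNode build(ObjectNode datum) {
+        JsonNode current = datum;
+        for (Function<ObjectNode, JsonNode> step : steps) {
+            current = step.apply((ObjectNode) current); // assumes every result stays an ObjectNode
+        }
+        return current;
+    }
+}
+

+ +

This would be used as new TransformationBuilder().transform1(transform1Metadata).transform2(transform2Metadata).build(datum), or with a static entry method to match the chained call above.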

+",365100,,365100,,43957.88403,43958.13889,Design to method chain across packages,,1,2,,,,CC BY-SA 4.0, +409805,1,,,5/6/2020 23:07,,1,110,"

I'm working with Web (React) and Mobile (iOS, Android) teams, and I find that even with a microservices architecture we always end up doing duplicated work at the front-end/client level. How can we implement a micro-frontends architecture that reduces duplication of front-end work but still provides a high-quality experience for native clients?

+ +

more context:

+ +

I've read Cam Jackson's microfrontends article and browsed Michael Geers' microfrontends blog. And I realise that we are closer to the Microservices pattern:

+ +

+ +

Than the Microfrontends one:

+ +

+ +

This is mostly due to Conway's law. Even with dedicated teams that own a full product, we still have dedicated teams only for iOS and Android native work. +Every new discrete piece of functionality, even with a clear bounded context at the API/service level, ends up being fragmented into multiple front-end implementations of the same feature/component, mostly to support the native apps (which demand that every component that runs in them has to be native). That got me thinking of implementing micro-frontends. +In an ideal world there wouldn't be any dedicated native-app teams, and iOS and Android developers would join the corresponding product teams (as described in the image above). But that isn't an option.

+ +

Given that our web app has the same look and feel as the mobile app, is it a good idea to create a single web component as part of the micro-frontend and consume it in the native apps with embedded webviews? Would that be considered a micro-frontends architecture? What are the drawbacks of that approach?

+ +

That way the same team that builds the responsive web component will be responsible for that feature end to end, leaving only the core user journeys and native-specific features (like AR and barcode scanning) to run as native components.

+",327254,,,,,43957.96319,How to implement micro-frontends with Native Apps?,,0,0,,,,CC BY-SA 4.0, +409806,1,,,5/7/2020 0:12,,1,156,"

I am starting to explore using DI more in my project design, and I find myself asking the same question:

+ +

""If I pass this object to that constructor, does that new object now have a dependency?""

+ +

For example, if I had the following:

+ +
from pathlib import Path
+
+class Config:  # defined before App so the annotation below resolves
+
+    def __init__(self, file: Path) -> None:
+        self.file = file
+
+class App:
+
+    ROOT_DIR = Path('some_dir')
+    SOME_CFG = Path(ROOT_DIR, 'some_cfg.cfg')
+
+    def __init__(self, config: Config) -> None:
+        self.config = config
+
+ +

From that, App has a dependency on Config because Config will be responsible for all configuration tasks.

+ +

Where I start to question it is how Config is constructed: Config(App.SOME_CFG).

+ +

So if I pass anything to the constructor of Config, does that mean it has a dependency?

+",253431,,173647,,43958.27361,43958.44236,Is passing arguments to a constructor always considered DI?,,2,3,1,,,CC BY-SA 4.0, +409809,1,,,5/7/2020 2:03,,-3,156,"

I'm reading the ""Guide to the Software Engineering Body of Knowledge"" (SWEBOK), but I am very confused.

+ +

Does a typical process consist of: implementation and change, definition, assessment, process and product measurement?

+ +

Or does it consist of Software requirements, design, construction, testing and maintenance? I thought this was the answer to my question but I'm starting to have doubts

+",365116,,,,,43958.72361,What typical processes does the Software Engineering discipline consist of?,,4,3,,,,CC BY-SA 4.0, +409817,1,,,5/7/2020 5:47,,-3,42,"

I am doing a report for university on an application developed by the team I am in. We have to deliver, among other things, a Technical Solution Document along with the application. One of the sections of this document is called ""Product Architecture and Software Stack"".

+ +

I have googled the phrase ""product architecture"", but what I have found are explanations about ""software architecture"". I'd like to know if they are the same or are related at all. Also, I have found plenty of information about what a software stack is, but because I don't know what product architecture is, I don't know what the relationship between the two is.

+ +

I'd highly appreciate it if someone could explain the relationship between these concepts.

+",365122,,9113,,43958.36181,44198.14792,"Relationship between the concepts ""Product Architecture"" ""Software Architecture"" and ""Software Stack""",,3,0,,,,CC BY-SA 4.0, +409824,1,,,5/7/2020 8:34,,0,43,"

I have a bounded context Profiles that manages n profiles in a graph-like fashion. Based on profiles' information it calculates a score for each edge in the graph (i.e. for each pair of profiles), making it n*n scores. When a profile p is updated (event ProfileUpdate) it re-calculates the scores of p with all other profiles (n-1 events ScoreUpdate).

+ +

Another BC Account is interested in the average score AVG(p) of an account's associated profile. For this, it keeps the profile's id and I wonder how and where to handle calculation and/or storage of AVG per profile in an efficient way.

+ +

My thoughts so far

+ +

Approach 1: Pre-Calculate & Batch

+ +

Naively, the Profiles BC is responsible for the scoring, hence I would put the AVG calculation in this domain and emit AVGUpdate A LOT (more often than ScoreUpdate occurs; with optimization, at least n times per ProfileUpdate). The Account BC would just store the AVG and be able to show it on demand.

+ +

But this feels very inefficient. Normally we see some ProfileUpdate events that result in A LOT of ScoreUpdate events, while the Account BC rather seldom wants to show the AVG, and I am worried about the overhead/performance of handling the huge number of AVGUpdate events and all the DB work involved in the Account BC.

+ +

If Account BC handles each AVGUpdate on its own it would need n queries per ProfileUpdate which makes it unacceptably expensive to host that service. So I wonder if we should batch the AVGUpdate events but then again I am worried about the size per event (probably not a good idea to put n scores in a single event and paging event batches feels strange as well).

+ +

But to me it feels just strange that some users update their profiles and in turn all accounts will update and even override their AVGs a lot before the account's owner looks at it weeks later. That just makes it unnecessary to calculate & store the intermediate AVGs of that account.

+ +

Approach 2: On Demand

+ +

Since the Account BC seldom shows the AVG, we could instead calculate it on demand. Since there are A LOT of ScoreUpdate events, we would not like to react to each of these to re-calculate the average, because of the overhead. Rather, we would like to calculate it on demand, but this adds the overhead of having events such as TRIGGER_AVG_CALCULATION and AVG_RESULT, and is obviously bad for UX, as the user would need to wait and could suffer from service degradation.

+ +

Questions

+ +

How would you approach this scenario? Any other arguments towards the approaches, or even other ideas for approaches?

+",365130,,,,,43958.37222,DDD scores between all profiles (graph). Where/when to calculate AVG efficiently?,,1,2,,,,CC BY-SA 4.0, +409832,1,409862,,5/7/2020 10:36,,-1,426,"

Let's say I have an astonishingly big table (300 columns) with lots of records (in the millions).

+ +

This table is overly complicated to use. I'd like to refactor this table and split it into multiple more meaningful tables.

+ +

I have only surface knowledge of databases: I'm more a user than a database developer. What are the difficulties when splitting large tables with lots of entries, and how should one proceed properly?

+ +

I've tried googling it, but I didn't find information on how to split a table. Can you propose an approach or some advice on how to proceed?

+",293499,,9113,,43962.2375,43962.33681,Splitting large table into multiple tables,,2,4,,,,CC BY-SA 4.0, +409833,1,,,5/7/2020 10:46,,-3,106,"

I've learned UML recently and I am trying to build a MonsterDuel system. However, there are a lot of classes in this project, and I am confused about abstract classes and their usage. Now, I have created:

+ +
    +
  1. Abstract class Players, and its inherited class Player.
  2. +
  3. Abstract class Field, and its subclasses: Monster, Spell, Tomb, FieldSpell, and CardDeck.
  4. +
  5. The multiple card classes associated with each other. The Monster, SpellCard, and TrapCard form up the entire PlayerCardSelection.
  6. +
+ +

To be specific, I will try my best to wrap up the questions.

+ +
    +
  1. Are there any rules or principles to determine whether the UML is a good design before coding?
  2. +
  3. I am not sure whether an abstract class can inherit from another abstract class (see the sketch after this list).
  4. +
  5. If the design is bad, how and what can I do to improve it?
  6. +
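
+ +

Regarding the abstract-class question, here is a minimal Java sketch (PlayableField is a made-up intermediate class, only for illustration):

+ +
// An abstract class may extend another abstract class;
+// nothing needs to be implemented until a concrete leaf class appears.
+abstract class Field {
+    abstract int cost();
+}
+
+abstract class PlayableField extends Field {   // abstract extending abstract: legal
+    abstract void play();
+}
+
+class Monster extends PlayableField {          // first concrete class implements both
+    int cost() { return 3; }
+    void play() { /* summon the monster */ }
+}
+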
+ +

What I've tried is to separate the entities that I think are independent of the others. For example, I can add players by instantiating the Player class, which inherits its attributes from the abstract class Players.

+ +

For the card section, I built a hierarchical structure to form a card deck that can be selected and used by a player.

+ +

I haven't added all of the setter and getter, just want it to be as clean as possible. Any help is highly appreciated.

+ +

+",365143,,,,,43958.60833,How to know if the UML class diagram design is good (well-planned) or not in java,,1,2,,,,CC BY-SA 4.0, +409834,1,,,5/7/2020 11:00,,1,34,"

I'm very new to DDD and sometimes confused by it, because many people inside the DDD community tend to have very different approaches when it comes to implementation. I tried to summarize all that I've read so far in a diagram representing the control flow of an ASP.NET request. I think I got at least the basics right, but there are a couple of points where I'm not sure whether the decisions I've made will lead to a change-proof and clean software project. The diagram looks like the following:

+ +

+ +

What I'm not sure about:

+ +
    +
  1. I've read that domain models shouldn't be accessed in the Presentation layer and that I should use DTOs instead. I've also read that Repositories shouldn't return DTOs and Transaction Scripts should be avoided for simple CRUD behaviour. So what I came up with for simple CRUD operations is a service layer, which is essentially just a repository layer mapping the domain model to DTOs. The purpose of this is to have a complete separation between the Presentation and Domain layers. However, it feels a bit awkward because the Service Layer is essentially just a duplication of the Repository layer with mappings.

  2. +
  3. When executing a request I'm mapping between three different kinds of objects: ViewModels, DTOs, and the domain model. I know that this gives me flexibility and separation, but mapping three different types of objects at least once per request seems like a lot of overhead.

  4. +
  5. I'm not very sure where to put validation. While some people in the DDD community argue it belongs in the domain layer, others recommend putting it in the Application Layer or even just the Presentation Layer inside the ViewModels. I haven't come up with a solution for validation yet; maybe there's one which fits well with the architectural approach I've chosen.
  6. +
+",365147,,,,,43958.45833,Does my ASP.NET architecture implement DDD in a safe and future proof way?,,0,1,1,,,CC BY-SA 4.0, +409835,1,,,5/7/2020 11:12,,-3,160,"

This question may be borderline for The Workplace. Also, I apologize if it sounds more like a rant than anything.

+ +

We have a legacy code base (millions of lines of code). No evolution is ever possible; there is always an excuse:

+ +
    +
  • Other teams depend on our interfaces, so we don't change them to avoid breaking anything
  • +
  • Any change is too big to be developed in a short time frame
  • +
  • Any change is too big to be considered ""safe"" and without impact requiring dedicated validation process
  • +
+ +

Even though everyone knows that there is a problem and that it has been expressed, we're always piling up features, never having the budget to reduce debt or a time slot to make high-impact changes. Of course, the more features are piled on, the more complicated the code becomes and the slower we can develop the next features, not accounting for bugs being discovered by clients.

+ +

Also, the more bugs there are, the less trust there is in our team (or rather the IT/dev function), meaning we're listened to even less and it's more difficult to justify wanting to do ""purely"" technical work.

+ +

This situation is not new in the organisation, and it doesn't seem like it will change anytime soon.

+ +

How do you cope with the fact that you can't do your work properly due to the situation?

+",293499,,,,,43959.4875,How to deal with code impossible to change,,1,4,,43959.59722,,CC BY-SA 4.0, +409841,1,,,5/7/2020 14:01,,0,274,"

I'm trying to design a new microservice architecture that will eventually replace our current two monolithic APIs (we develop desktop payroll software that uses our API to send payroll documents to employees or have them sign contracts, with more features and front-ends to come).

+ +

There is a lot of complexity regarding user relationships, and I'd like to future-proof my design as much as possible.

+ +

Please excuse me if I'm not clear enough, I'm not a native English speaker and there are a lot of fine details :)

+ +

TL;DR

+ +

I will give many details below in case you're interested, but here is a summary:

+ +
    +
  • I have many types of actors who are tied with chained, business logic relationships (ex: Organization > Company > Employee).
  • +
  • Most of (maybe all in the future) these actors are candidates for being linked to a web user account. And a user account should be able to act like multiple actors (ex: an account X could act as a Company A, an Employee from Company A and an Employee from Company B).
  • +
+ +

And I hesitate between at least 2 solutions:

+ +

1) Creating a microservice per actor type

+ +
    +
  • This could make sense because each actor has its own business logic (properties, actions).
  • +
  • But I'm afraid it will generate a lot of inter-service requests for basic actions (ex: DocumentService must check if the user account has rights to send Documents to an Employee, and this permission is defined as Company business logic...).
  • +
  • Moreover, each microservice would have to be aware of the others' business logic, which greatly defeats the microservice philosophy in my opinion.
  • +
+ +

+ +

2) Creating a generic UserManagement service

+ +
    +
  • This could also make sense because I feel ""actors and users relationships"" could be seen as a business logic in itself, and all these actors are tightly coupled.
  • +
  • My main fear with this option is that this service could become bigger and bigger as we bring features to the web. But then again I could create new microservices for these features and keep using my UserManagementService as a sort of RBAC provider.
  • +
+ +

+ +

What sounds more like a future-proof, best practice to you?

+ +

Do you have any advice or thoughts?

+ +

More details

+ +

Currently, these are the different types of actors in our system:

+ +

+ +

Please note these edge (but nonetheless common) cases:

+ +
    +
  1. A Customer could be an accounting firm and manage payrolls for multiple Companies.
  2. +
  3. A Customer could be managing its Company alone, so all these actors could be one single person. And the Customer could be one of their own Employees as well.
  4. +
  5. Employees, being artists for the most part, often work for several Companies (which could be managed by different Organizations).
  6. +
+ +

This is basically how our software works (locally, without APIs):

+ +
    +
  1. Customer pays for their licence.
  2. +
  3. Customer provides licence key to create an Organization (can create multiple ones).
  4. +
  5. Customer creates SoftwareUsers inside this Organization, with different access levels (which determines which companies/employees one can read/write).
  6. +
  7. SoftwareUsers can create Companies inside this Organization to manage their payroll.
  8. +
  9. SoftwareUsers can create Employees attached to a single Company.
  10. +
  11. SoftwareUsers can generate payslips for Employees.
  12. +
+ +

Then, these are the web-based features that are implemented by our current APIs:

+ +
    +
  1. SoftwareUser can invite an Employee to create their web account (on behalf of their attached Company).
  2. +
  3. SoftwareUser can send PDF Documents to an Employee's account (on behalf of their attached Company).
  4. +
  5. SoftwareUser can create a Campaign to have an Employee sign a contract online (on behalf of their attached Company).
  6. +
+ +

As for future features, there will be:

+ +
    +
  1. Customer can pay for their licence online and view some data (remaining payroll volume, consumed phone support time, etc...).
  2. +
  3. Companies (which are not using the software if the Customer is an accounting firm) can view Employees related to them, signed Campaigns, and Documents generated on their behalf.
  4. +
  5. SoftwareUsers who don't have access to the desktop software can enter data to populate payslips (this ""feature"" is expected to grow in the next years as we will progressively bring all the software's features to the web).
  6. +
+ +

Finally, this system needs to be flexible, considering the following use cases:

+ +
    +
  • Companies could be transferred to other Organizations (ex: a company could ask its accountant to provide its data if they wanted to manage their payroll itself).
  • +
  • The same principle applies to each actor type.
  • +
  • Account creation invitations (with Documents attached) could be sent to wrong email addresses, so we must be able to transfer an Employee to a different account.
  • +
+ +

These are the reason I was planning this for relationships and authentication:

+ +
    +
  • For relationships, each actor will only know its parent ID and its linked Account(s) ID(s): this will enable to easily switch owners (a required feature).
  • +
  • For authentication, I'm planning to use JWTs with only the Account ID as payload.
  • +
+ +

UPDATE

+ +

I took your comments into consideration and talked with some teammates, here is a new option:

+ +

3) With a PermissionService

+ +
    +
  • Accounts are not linked to ""actors"" anymore. Instead, they are linked to multiple Permissions that each specifies who has access to which resource, and possibly additional claims (ex: accessLevel for a Permission on an Organization).

  • +
  • This gives the possibility to provide either ""account tokens"" (with only the account ID), or ""permission tokens"" (with also the target ID and claims), depending on whether we enable SSO or need restricted service accounts.

  • +
  • As for the ActorService (not sure if the name fits well), its purpose is to hold the business logic regarding ""actors relationships"" as well as their properties (some of which were added on the diagram to illustrate their business value).

  • +
+ +

+ +

I'm still not 100% sure about this design, so every comment is welcome.

+",357898,,357898,,43963.38125,44113.41875,Many different user types in a microservice architecture,,3,3,,,,CC BY-SA 4.0, +409847,1,409850,,5/7/2020 16:09,,0,71,"

I have seen around the Internet several REST web services with the following behaviour: in case there are any errors, they return an Error object; otherwise they return, say, MyClass.

+ +

See the following example...

+ +

+/*
+ * Data transfer objects
+ */
+@Data
+@Builder
+public class Error {
+  private Integer id;
+  private String description;
+}
+
+@Data
+public class MyClass {
+  // Whatsoever attribute...
+}
+
+/*
+ * Web service (code omitted and abbreviated for the sake of simplicity)
+ */
+@PostMapping(""myclass"")
+public ResponseEntity<?> createMyClass(
+    @RequestBody MyClass pMyClass
+) {
+    Optional<MyClass> optMyClass = facade.createMyClass(pMyClass);
+    if (optMyClass.isEmpty()) {
+        Error err = Error.builder().id(400).description(""Any reason..."").build();
+        return ResponseEntity.ok(err); // may well return HTTP 400 instead of 200 OK
+    } else {
+        return ResponseEntity.ok(optMyClass.get());
+    }
+}
+
+
+ +

I feel that this is an antipattern for two major reasons. The first is that the web method is not returning one kind of object, usually declaring T or ? as the return type. The second is that it takes on the responsibility of returning any error that may occur in the body, instead of taking advantage of HTTP status codes and headers themselves.
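
+ +

For comparison, a sketch of the alternative I have in mind, letting the HTTP status itself carry the error instead of a 200 with an Error body (same simplifications as above):

+ +
@PostMapping(""myclass"")
+public ResponseEntity<MyClass> createMyClass(@RequestBody MyClass pMyClass) {
+    Optional<MyClass> optMyClass = facade.createMyClass(pMyClass);
+    if (optMyClass.isEmpty()) {
+        // 400 with an empty (or problem-details) body; the status line says it all
+        return ResponseEntity.badRequest().build();
+    }
+    return ResponseEntity.ok(optMyClass.get());
+}
+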

+ +

Do you agree with me?

+",191801,,,,,43958.73542,Is an antipattern returning differente objects in a single rest method?,,1,2,,,,CC BY-SA 4.0, +409848,1,409857,,5/7/2020 16:26,,0,72,"

Say I am creating an app which contains selectable components and a toolbar/action bar. Some tools work on selected components only, some work on all components together. Pretty standard stuff.

+ +

I can think of 3 approaches to go about the architecture of the relations between the toolbar and the components:

+ +
    +
  1. Toolbar is ""stupid"". Made from nothing but buttons that merely throw an event on click. Event data will contain the clicked action name. The actions' execution logic is defined in the components. Components listen to the ""onToolClick"" event and decide if they should execute the action on themselves (i.e. IF this.selected -> this.actions[event.actionName](this); see the sketch after this list). This approach will allow flexibility in the implementation of actions, make it more robust, and simplify the toolbar module. Yet it will complicate the structure of the components.

  2. +
  3. Toolbar is somewhat ""stupid"". Actions execution logic is defined in the toolbar generically (Probably delegated to an external handler). Click on buttons will throw an event, event data will contain a reference to the logic. Components listen to ""onToolClick"" event and decide if they should execute the action on themselves (i.e IF this.selected -> event.action(this)). This approach will keep actions logic in one designated place and simplify components structure. Yet it spreads the action handling over multiple modules.

  4. +
  5. A ""smart"" Toolbar. Actions' execution logic is defined in the toolbar generically (probably delegated to an external handler). The toolbar will also keep a reference to all components. A click on a button will go over the list of components, decide if the action is applicable, and execute it on the component (i.e. FOR EACH component in components -> IF component.selected -> action(component)). Components will add themselves to the toolbar's component list and update their status. This approach encapsulates actions away from the components entirely and keeps it all in one place. Yet it's not scalable if some new components require more specific action handling, plus it gives the toolbar direct access to components.

  6. +
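
For concreteness, a minimal Java sketch of approach 1 (all names are hypothetical; the real app may not be in Java):

+ +
import java.util.List;
+
+class ToolClickEvent {
+    final String actionName;
+    ToolClickEvent(String actionName) { this.actionName = actionName; }
+}
+
+interface Component {
+    boolean isSelected();
+    void onToolClick(ToolClickEvent event); // the component decides whether it applies
+}
+
+class Toolbar {
+    private final List<Component> listeners;
+    Toolbar(List<Component> listeners) { this.listeners = listeners; }
+    // The toolbar knows nothing about the actions themselves.
+    void buttonClicked(String actionName) {
+        ToolClickEvent event = new ToolClickEvent(actionName);
+        for (Component c : listeners) {
+            c.onToolClick(event);
+        }
+    }
+}
+

+ +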
+ +

Is there really a ""best practice"" for this issue? Is it a matter of taste, or app specific? Are there any other approaches?

+ +

Thank you very much!

+",331100,,331100,,43958.73472,43959.23056,How to implement an app with an actions toolbar?,,2,4,1,,,CC BY-SA 4.0, +409849,1,409851,,5/7/2020 17:10,,0,58,"

Is it true that we never enable foreign key constraints in the dimensional model of a data warehouse? If yes, then what is the rationale behind that?

+ +

As per my research:

+ +

Some experts told me that in a dimensional model FKs are never enabled, and that it is the responsibility of the ETL process to ensure consistency and integrity.

+ +

Data integrity issues may still come into the picture, even when the ETL handles dependencies properly.

+ +

Examples:

+ +
    +
  • A late-arriving dimension from the source.

  • +
  • A few records could not pass data-quality checks and were routed to the error table.

  • +
  • Intermediate tables are not populated due to a batch-load failure, and proper restart or recovery steps are not followed; someone restarted the last session to load data into the fact table while some of the dimensions were yet to be populated.

  • +
  • Primary key constraints will help me avoid populating duplicate records if data in intermediate tables gets processed one more time because the target-table load session was accidentally re-triggered.

  • +
+",365183,,4,,43959.38542,43959.4625,Do Data Warehouse standards allow foreign key constraints at a dimensional model?,,1,0,,,,CC BY-SA 4.0, +409856,1,,,5/7/2020 20:39,,0,77,"

I am currently developing an application with Qt/C++ which needs to access a Shopify store using the Admin API. In order to access the API, my application needs to know the following information:

+ +
    +
  • API Token
  • +
  • Shared Secret
  • +
  • API Password
  • +
+ +

These values are stored as strings in a header using a simple #define, for example:

+ +
#ifndef STORE_INFO_H
+#define STORE_INFO_H
+
+#define ADMIN_API_TOKEN         ""...""
+#define ADMIN_API_PASSWORD      ""...""
+#define ADMIN_API_SHARED_SECRET ""...""
+
+// Sha la la la
+
+#endif
+
+ +

Afterwards, these values are used across the application as needed. However, there's one big issue with this approach: if someone is smart enough to open the binary inside a hex editor or a decompiler, he or she may be able to see this information (which, if used for malicious purposes, could do a lot of damage).

+ +

Regardless of the needs of my application or which API I am using, is there a way to ""hide"" or protect sensitive constants in a C/C++ application?

+",87708,,87708,,43958.86597,43958.87778,How can I protect sensitive information inside a binary?,,2,1,,,,CC BY-SA 4.0, +409859,1,,,5/7/2020 20:59,,0,57,"

Hi, I need to develop a new web application for my company, and I must use DDD as per my senior dev's requirement.

+ +

In addition, I'm maintaining (adding new features and fixing bugs) a large brownfield web app (a FinTech app) that was developed by senior developers 5 years ago, before I joined this company; I have only spent a year and a half here.

+ +

This large web app I'm maintaining is architected using DDD with a NoSQL (MongoDB) database for scalability, and I also use this app as an example when learning DDD.

+ +

Now I have a new requirement to develop a web app using DDD coupled with a SQL Server database only. +I have already started modeling the database tables, but I'm stuck on the implementation of the SQL Server database in DDD C# code.

+ +

Indeed, my bounded contexts are similar to the ones on page 124 of this book written by Microsoft authors (Dino and Andrea), with the same entities and value objects per bounded context. +I have two bounded contexts (BackOffice and ClientSite) with the same entities/value objects as in the book/picture.

+ +

My concern is :

+ +

1) Do I really need to create 8 tables in SQL Server, then create aggregates in the repository to update them according to their bounded context, given that I have two bounded contexts?

+ +

2) Do I need to use event-driven messaging to also update the other bounded context (BackOffice) if one bounded context (like ClientSite) is updated, as both form a context map?

+ +

3) As per CQRS, what methods do I need to create in the aggregate for each bounded context? As far as I know, I need save and delete commands, and a view (query).

+ +

4) Do I need to create the same methods for each aggregate? Is that not a violation of the DRY principle?

+ +

Please, I just want you to confirm whether what I'm doing is right, as I'm not an expert in DDD.

+ +

EDIT :

+ +

The requirement I've got from the domain expert is: they need an application to manage the recruitment process for their recruitment agency. +They are opening a small local recruitment agency and need a web app to manage the whole process.

+ +

So he told me they will have recruitment Agents for each Area (IT, Finance, Mechanics, Telecom, Medical...) and a back office to control everything, i.e. the whole process the recruitment agents carry out in the app.

+ +

The process will be as follows: +1) The back office gets a request from a company to publish a job offer for position X (in area X; let's assume company A is looking for a junior finance hire).

+ +

2) The back office forwards it to the relevant recruitment agent (finance in this case).

+ +

3) The recruitment agent in charge of finance publishes the ad online and manages it (sorts all the submitted resumes and selects some candidates for interview).

+ +

4) If the candidate passes the interview, they give them an offer.

+ +

+",365212,,365212,,43959.76667,43959.76667,Use of SQL Server with DDD in ASP.Net Web App,<.net>,0,9,,,,CC BY-SA 4.0, +409861,1,,,5/7/2020 21:06,,1,49,"


+ +

I am new to MVC. Most of my career I have used Web Forms in ASP.NET. Reading about MVC has been really confusing, because from what I have learnt, in traditional MVC the model was supposed to hold the business logic as well as house the domain models/objects. But when it comes to Microsoft's MVC, ""model"" means view models, and MVC lives just in the UI/presentation layer. Now I am trying to combine that with the Onion/Clean Architecture. I am using ""business logic layer"" and ""service layer"" interchangeably, just for simplicity's sake for this app.

+ +

So my web application basically queries a few CSV files. This is the player data, where we have tennis players' bios, their match records, and their head-to-head records. Beyond the obvious displaying of data in the UI layer inside the view, this data needs to be displayed very differently from how it is stored; let's assume there are also business-logic decisions here which are handled in the service/business layer.

+ +

Assuming the app is going to get much bigger than its starting point, I have decided to go with the Repository pattern and not access the data source directly. So the repository classes and corresponding interfaces reside in the data access layer. These are all tightly coupled to each other, so if we decided to switch from CSV files to a SQL DB, a lot of this layer would need to be rewritten. This is an outer layer, so to speak.

+ +

The innermost layer holds the domain models/objects, which make sense from a logical perspective and do not necessarily map one-to-one either to how they are stored in the database or to how they are displayed in the view. All three layers (DAL, BL, UI) can access these model classes directly. However, these live in their own project and do not depend on anything outside of that project. From my understanding, this is where being technology-unaware is important, as we want the ability to switch the DAL without any modification required to this particular layer.

+ +

DI is used both for the business layer to be able to use the DAL and for the UI layer to use business-layer functionality. There are no separate DTOs; instead, domain model objects are used for this.

+ +

This is how I intend to set it up for scalability and SoC. Can someone please point out the cons of this and if I am understanding things correctly?

+",166270,,166270,,43958.89167,43958.89167,Am I structuring this correctly for a .net mvc web app based on onion architecture?,,0,2,1,,,CC BY-SA 4.0, +409866,1,409890,,5/8/2020 0:46,,5,235,"

A coworker has started advocating for a style for our user-facing APIs where every function takes a single struct parameter containing all the real parameters. That is, instead of:

+ +
void fn(int foo, bool bar);
+
+ +

we should do:

+ +
typedef struct {
+    int foo;
+    bool bar;
+} fn_args_t;
+
+void fn(fn_args_t args);
+
+ +

I've never encountered this style before. He argues that it makes it easy to add new parameters later without needing to update callers, although I worry that it also makes it easy to leave out an argument (either an existing one that was just overlooked, or one added later that you would want to pass non-zero for).

+ +

Is there a name for this style? Pros/cons? It seems like a bad sign that I've never seen it used before.

+",485,,,,,43959.57153,Merits of passing function arguments through a single struct,,2,5,,,,CC BY-SA 4.0, +409872,1,,,5/8/2020 5:38,,0,89,"

In a large iOS application, I have a database module which is dedicated to handling the application databases, with public read/write APIs for other modules. The UI module has a feature to share the database, in which a copy of the database can be sent to another person. Now, to attach my database for the share feature, the UI module requires my database path. So, the UI team requested that I expose a public API to get the database path.

+ +

I am concerned about the security implications of exposing the database path as a public API. What are the general guidelines for sharing the database with other modules? Should I expose my database path via a public API? If so, are there any implications for further security vulnerabilities, and does it violate the abstraction?

+",208831,,5099,,43959.57431,43961.48264,What's the pattern to share database to other module in security perspective?,,2,4,2,,,CC BY-SA 4.0, +409878,1,409883,,5/8/2020 8:42,,-2,230,"

In Java, C, and C++ we have jump statements such as break, continue, and return (C and C++ also have goto; in Java, goto is only a reserved word and cannot be used). In C#, there is also throw.

+ +

I'm not really familiar with all of these languages; this is simply what I have read on the web.

+ +

All these jump statements are unconditional. I tried to find mentions of conditional jump statements, but all links lead to assembly.

+ +

Is it correct to say that conditional jump statements exist in Assembly only?

+ +

Some people on the Internet are telling me that if is actually a conditional jump statement, but I don't think so. At least, it's not described as such in the Microsoft or Qt documentation.

+ +

Regarding comments to this question:

+ +
+

How are if or switch ... case not conditional jump statements? Why do you think they aren't? – πάντα ῥεῖ

+
+ +

@πάνταῥεῖ - As I see it, if, switch, return, break etc. are control flow statements, and jump statements are a subset of them. And please note that neither the Microsoft, nor the Qt, nor any other documentation treats if as a jump statement.

+ +

The difference between control flow statements and jump statements, as I see it, is described here: https://www.inf.unibz.it/~calvanese/teaching/06-07-ip/lecture-notes/uni06/node45.html

+",215211,,215211,,43960.43403,43962.82917,Conditional jump statements in middle- and high-level languages,,2,10,,,,CC BY-SA 4.0, +409882,1,,,5/8/2020 10:15,,0,18,"

This is a question about design; the use case below is more of an example explaining the problems and constraints. Also, I'm trying to get more experience with parallel programming, so forgive me if my terminology isn't always correct.

+ +

I have a program that processes 3D reconstructed meshes (like structure from motion). These meshes are usually huge (like 7 million triangles), my current processing program is really simple (essentially exhaustive iteration through all the triangles), and I cannot use an acceleration structure (or assume one exists). But the computations done per triangle are independent from each other, so I could use a parallel programming framework to carry out my computations.

+ +

My system has only 1 GPU and the number of work items is much lower than the number of triangles.

+ +

For simple programs where the size of the input isn't too big, what I would do, and have done in the past, is essentially structure my program as follows:

+ +
    +
  1. Choose the platform
  2. +
  3. Choose the device
  4. +
  5. Create one command queue and one context
  6. +
  7. Create buffers (read and write)
  8. +
  9. Create/build a program and kernel
  10. +
  11. Set the arguments and enqueue write + run the kernel + enqueue the read
  12. +
  13. Teardown
  14. +
+ +

Which is pretty standard and please notice I'm reading + running + writing only once.

+ +

Therefore my thought for dealing with a bigger input would be to break it down into multiple batches to be processed by the GPU sequentially, say 10k triangles at a time, or even more if I can instantiate enough work items.

+ +

What I cannot figure out is the right way to do this. +Is it efficient, for example, to iterate through steps 3 to 6? I would assume not, because I think one command queue and one context would suffice (in my case). Maybe creating multiple buffers might be a solution? But I'm not so sure it's that simple, because I'd need to synchronize from the host side, I guess.

+ +

Essentially the question is: what would be a typical program structure in this case?

+ +

Please let me know if the description of the problem I'm giving is incomplete and you need more details. +I also have a very simple code example for steps 1 to 7 (the simplest I could come up with).

+",239439,,,,,43959.42708,How to design an openCL program when the size of the input is much greater than the workitems,,0,0,,,,CC BY-SA 4.0, +409885,1,,,5/8/2020 10:50,,-1,140,"

I’m a Solution Architect trying to understand and use the TOGAF artefacts that could benefit us.

+ +

Now I’m stuck on Application/Data Matrix and none of my Googling ninja skills have helped: I want to visualize how each data element (with classification) is used by each application in a way that you get a good overview.

+ +

I have found examples that show how data elements are used, but they don’t show classification (master data, reference data, transactional data, content data, and historical data.) +

+ +

Then I have found other examples where the classification is shown, but the data is not on the x-axis (as claimed) so there is no real overview. It’s not a matrix over Application/Data. +

+ +

How should/could I create the artifact so that it enables me to get the maximum insigts and still be maintainable?

+",306698,,306698,,43962.5,43962.5,Understanding usage of TOGAF Artifact Application/Data Matrix,,1,2,,,,CC BY-SA 4.0, +409897,1,409904,,5/8/2020 15:12,,0,75,"

I have a Message class that represents some data sent by a smartwatch. A Message has a header (sender, length...) and a type; it can be a location update, an alarm message... There are about thirty different types of messages.

+ +

Based on the type, the message should have a specific payload attached to it. For example, a message of type ""UD"" should have a payload containing latitude and longitude fields.

+ +

I created the Message class like so:

+ +
// Simplified
+class Message extends ValueObject {
+  serial: Serial
+  length: number
+  payload: Payload
+}
+
+class Payload {
+  static create(type: MessageType, payload: any): Payload {
+    switch (type) {
+      case MessageType.UD:
+        return UDPayload.create(payload)
+      case MessageType.LK:
+        return LKPayload.create(payload)
+      ...
+    }
+  }
+}
+
+class UDPayload extends Payload {
+  location: Location
+
+  static create(props: any): UDPayload {
+    return new UDPayload({location})
+  }
+}
+
+ +

Now, my question is: who should instantiate all these value objects? For example, UDPayload.create() should take a Location object as a parameter; should Payload.create() construct it? The caller constructing the Payload doesn't know the format of the actual object.

+ +

Where should the validation occur (empty fields, for example)?

+ +

Finally, how should I handle persistence concerns? That is, when reading the data, I basically have to instantiate all the underlying value objects before constructing the actual payload.

+",330878,,,,,43959.74792,DDD - Complex value object,,1,0,1,,,CC BY-SA 4.0, +409898,1,409907,,5/8/2020 15:54,,11,1935,"

Something that has really been getting under my skin recently is that Salesforce uses the term ""Declarative Development"" to mean ""Low Code"" or ""visual code"".

+ +

For example, this article explains the difference between imperative and declarative programming, while making claims that their ""low code"" solution is declarative (and thus superior).

+ +

However, I'm not sure I can agree with that... In implementation, it seems very much to be imperative programming.

+ +

+ +

Am I wrong here? Does replacing text with shapes somehow make the procedure shown above ""Declarative""?

+ +

Is this not just a transaction script in fancy new clothes?

+",40937,,40937,,43959.725,44156.09583,"Is ""Low Code"" declarative by default?",,5,8,3,,,CC BY-SA 4.0, +409901,1,,,5/8/2020 16:59,,0,152,"

When using routing in a SPA web app (Angular, React, etc.), the user doesn't have to start at the entry point of the application. They can use a URL in the browser to drill down into any part of the application.

+ +

When implementing HATEOAS in a RESTful backend API, we assume that the front-end only knows the URL to the entry point of the API, and then the API provides links to other parts of the application from there.

+ +

So this raises the question: if a user enters a URL in the browser that loads a specific part of the SPA (not the entry point), how does the SPA get the appropriate API link needed for just that part of the SPA?

+ +

Does the SPA just make a bunch of API calls all at once, starting at the entry point of the API and following links until it gets the link it needs for the state it needs to load? And what happens when the API does not include the link needed because it's not a valid link based on the current state of the application?

+ +

HATEOAS doesn't seem to be very compatible with a modern SPA where you can load the application at very specific sections/states.

+",280428,,,,,44109.79583,SPA Routing with a RESTful API using HATEOAS,,1,24,1,,,CC BY-SA 4.0, +409908,1,,,5/8/2020 21:32,,1,100,"

I'm working with a system that is very similar to a BI dashboard. Basically, let's say the dashboard will show a couple of the company's business metrics, for example revenue, refunds, number of orders, average order value, etc.

+ +

On the front end it will show one year's data; right now daily values are shown on a line graph for one year. But later it will start to allow the user to select different aggregation options, like one year's data aggregated by week, by month, etc. (or it could be by 7 days, 14 days, etc.; yes, this is still unknown at this point). On the backend we are using a big data warehouse solution (SQL) and a Node.js server.

+ +

Now I'm considering 3 options, not sure which approach to go with. If you have some experience / insights to share, it will be really appreciated!

+ +

1) aggregation logic on backend, specifically the data layer, basically does the aggregation in sql queries.

+ +

pro: 1)fast 2)scales well if data size grows (say we start to show 3 years data, more metrics)

+ +

con: 1)if the query aggregation logic changes (like from calendar month/week to rolling x days), you might end up re-write most queries (might not be true, if so pls point out). 2) Require more work to setup solid test.

+ +

2) Aggregation logic on the backend, specifically the application layer: basically the query returns daily data points and the application handles the aggregation logic (a small sketch of this option follows the three options below).

+ +

Pros: 1) relatively easy to change if the aggregation logic changes.

+ +

Cons: 1) slower than having this in the data layer (more network traffic, language performance differences, more load on the server); 2) scales worse compared with the data-layer approach.

+ +

3) Aggregation logic on the frontend: most chart libraries support different aggregation scenarios. Basically the API returns all daily data points.

+ +

Pros: 1) very flexible if the aggregation logic changes.

+ +

Cons: 1) slow (network traffic, browser engine; we also support mobile, so it could be very bad on mobile); 2) scales the worst of the three.
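As promised above, a minimal sketch of option 2's application-layer aggregation (plain Python purely as illustration, since the actual service is Node.js; the fixed-size day-bucket rule is my own assumption):

from collections import defaultdict
from datetime import date

def aggregate(points, bucket_days=7, agg=sum):
    # points: daily (date, value) pairs, sorted by date. Buckets are
    # consecutive bucket_days-sized windows counted from the first point,
    # so changing the window (weekly, monthly-ish, rolling N days) only
    # changes bucket_days, not the underlying query.
    start = points[0][0].toordinal()
    buckets = defaultdict(list)
    for day, value in points:
        buckets[(day.toordinal() - start) // bucket_days].append(value)
    return [agg(values) for _, values in sorted(buckets.items())]

daily = [(date(2020, 1, d), d * 10.0) for d in range(1, 29)]  # 28 daily points
weekly_revenue = aggregate(daily, bucket_days=7)              # 4 weekly sums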

+ +

Edit: there is no real-time requirement. Right now the data needed to display the charts is around 365 data points for each metric (basically a timestamp and a value pair), and there are around 20 metrics.

+",365304,,365304,,43960.92917,43960.92917,design a BI dashboard system - data aggregation logic on frontend or backend?,,1,3,,,,CC BY-SA 4.0, +409910,1,409928,,5/8/2020 23:42,,25,5166,"

I'm trying to get a review of my lists of pros/cons about how to structure commits, which came out of a discussion at my work.

+ +

Here's the scenario:

+ +
    +
  • I have to add feature X to a legacy code base
  • +
  • The current code base has something I can't mock making unit testing feature X impossible
  • +
  • I can refactor to make unit testing possible, but it results in a very large code change touching many other non-test classes that have little in common with feature X
  • +
+ +

My company has the following strictly enforced rules:

+ +
    +
  • Each and every commit must be stand-alone (compiles, passes tests, etc.). We have automation that makes it impossible to merge until these have been proven to pass.
  • +
  • Only fast-forward merges are allowed (no branches, no merge commits; our origin repository only has a single master branch and it is a perfectly straight line)
  • +
+ +

So the question is how to structure the commits for these 3 things (refactoring, feature X, and the test for feature X). My colleague referred me to this other article, but it doesn't seem to tackle the refactoring part. (I agree that without the refactoring, source and test should be in one commit.) The article talks about ""breaking git bisect"" and ""making sure every commit compiles/passes"", but our strict rules already cover that. The main other argument they give is ""logically related code kept together"", which seems a bit too philosophical for me.

+ +

I see 3 ways to proceed. I'm hoping that you can either a) add to the lists, or b) comment on why one of the existing pros/cons is not important and should be removed.

+ +

method 1 (one commit): includes feature X, test for feature X, and refactoring

+ +

pros:

+ +
    +
  • ""Logically related code kept together"" (Not sure this is actually a ""reason"". I would probably argue all 3 methods do this, but some may argue otherwise. However, no one can argue against it here).
  • +
  • If you cherry-pick / revert without merge conflict, it will probably always compile & pass tests
  • +
  • There is never code not covered by test
  • +
+ +

cons:

+ +
    +
  • Harder to code review. (Why is all this refactoring done here despite not being related to feature X?)
  • +
  • You cannot cherry-pick without the refactoring. (You have to bring along the refactoring, increasing chance of merge conflict and time spent)
  • +
+ +

method 2 (two commits): the first includes feature X; the second includes the refactoring and the test for feature X

+ +

pros:

+ +
    +
  • Easier to code review both. (Refactoring done only for the sake of testing is kept with the test it is associated with)
  • +
  • You can cherry-pick just the feature. (e.g. for experiments or adding feature to old releases)
  • +
  • If you decide to revert the feature, you can keep the (hopefully) better structured code that came from the refactoring (However, revert will not be ""pure"". See cons below)
  • +
+ +

cons:

+ +
    +
  • There will be a commit without test coverage (even though it's added immediately after, philosophically bad?)
  • +
  • Having a commit without test coverage makes automated coverage enforcement hard/impossible for every commit (e.g. you need y% coverage to merge)
  • +
  • If you cherry-pick only the test, it will fail.
  • +
  • Adds load for people wanting to revert. (They need to either know to revert both commits, or remove the test as part of the feature revert, making the revert not ""pure"".)
  • +
+ +

method 3 (two commits): the first includes the refactoring; the second includes feature X and the test for feature X

+ +

pros:

+ +
    +
  • Easier to code review the second commit. (Refactoring done only for the sake of testing is kept out of feature commit)
  • +
  • If you cherry-pick / revert either without merge conflict, it should compile & pass tests
  • +
  • There is never code not covered by test (both philosophically good and also easier for automated coverage enforcement)
  • +
+ +

cons:

+ +
    +
  • Harder to code review the first commit. (If the only value of the refactoring is for the tests, and the tests are in a future commit, you need to go back and forth between the two to understand why it was done and if it could have been done better.)
      +
    • Arguably the worst of the 3 for ""logically related code kept together"" (but probably not that important???)
    • +
  • +
+ +

So based on all this, I'm leaning towards 3. Having the automated test coverage is a big win (and it's what started me down this rabbit hole in the first place). But maybe one of you has pros/cons I missed? Or maybe there's a 4th option?

+",365315,,78589,,43979.37917,43984.67014,How to structure commits when unit test requires refactoring,,6,10,15,,,CC BY-SA 4.0, +409911,1,,,5/9/2020 1:41,,0,186,"

What are the design defects that can be spotted by using a class diagram, and how?

+ +

I am not concerned about syntactical/representational defects, but about things like the following:

+ +
    +
  • Do the classes/objects have only data elements or getters/setters and no actual business methods? If yes, that's an immediate red flag; the design is probably a database model or some procedural-type design.

  • +
  • Cyclic dependencies.

  • +
  • The amount of technical jargon in class and method names. Ideally there is none; the more there is, the worse the design probably is. Technical jargon means things like Manager, Entity, ValueObject, Object, Repository, Service, etc. None of these should be visible.

  • +
  • Deep hierarchies in classes

  • +
+ +

I could not find a list of the commonly occurring errors, only research papers.

+ +

Which OO principles can be validated by looking at a class diagram, for example, the SOLID principles (like in this former SE question)? Or is there anything more?

+",349099,,9113,,43961.37569,43961.37569,How to find design defects by using a class diagram?,,3,11,1,43960.69792,,CC BY-SA 4.0, +409913,1,,,5/9/2020 4:00,,0,44,"

Scenario:-

+ +
    +
  • The load balancer distributes traffic to backends with a simple round-robin mechanism.
  • +
  • With default config, each backend is assigned weight ""1"" so all backends are given equal chance of receiving requests.
  • +
  • If some backend is given weight 2, then that backend is given twice the number of connections.
  • +
+ +

Hence, changing weights is a way to make sure all backends have the same load.

+ +

Problem Statement:-

+ +

Based on the load feedback from the backends, define the weight of each backend such that all backends end up with a very similar distribution of requests.

+ +

For eg, consider we've 3 backends A, B, C.

+ +
    +
  • If the backends have a similar load, then weights stay 1, 1, 1.
  • +
  • If A & B have load ""L"" but C has load ""0.8L"", the weight should increase for C so that it can ultimately catch up and reach load L in the future.
  • +
+ +

Actual example:

+ +

In my case, this is a TCP load balancer (L4) acting as the first hop in front of multiple L7 load balancers.

+ +
TCP Load Balancer -> LVS (ipvs+linux)
+Backend -> Nginx
+
+ +

Is there a well-known algorithm for how the weights should be chosen, to ensure that all backends will ultimately achieve a similar load?
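I don't know of a canonical, named algorithm for exactly this, but here is a minimal proportional-control sketch of what I mean (illustrative only; the gain and the load metric are assumptions):

def rebalance(loads, weights, gain=0.5):
    # loads: backend -> observed load; weights: backend -> integer weight.
    # Each weight is nudged toward the mean load: a backend below the mean
    # gets a proportionally larger weight. A real system would smooth the
    # feedback and cap the step size to avoid oscillation.
    mean = sum(loads.values()) / len(loads)
    return {
        b: max(1, round(weights[b] * (1 + gain * (mean - loads[b]) / mean)))
        for b in loads
    }

# The example above: A and B at load L, C at 0.8L; all weights start at 100.
print(rebalance({'A': 100.0, 'B': 100.0, 'C': 80.0},
                {'A': 100, 'B': 100, 'C': 100}))
# -> {'A': 96, 'B': 96, 'C': 107}: C now receives more connections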

+",23180,,23180,,43960.51319,43960.51319,Assigning weights dynamically to ensure equal distribution,,0,5,,,,CC BY-SA 4.0, +409915,1,,,5/9/2020 5:28,,0,63,"

I have three resource types - character, word, and write (these are three separate database tables, with word and write having a foreign key dependency on character).

+ +

When character is created (upon a user save process), word and write must also be created. Character has a few values, but one of the most important is ""active"", which is a boolean flag.

+ +

However, I don't necessarily care that the client knows the state of character in the database. I want a user to be able to click something and send a PUT request, and the end result should be an ""active"" character in the database - whether or not that character entry existed beforehand.

+ +

Ultimately what should happen for every PUT is that character is created or set to active status (which it will have by default if created - so basically, character will exist in its default state after a PUT)

+ +

However, if the two children have already been created, I obviously don't need to create them again, nor do I want to reset them (they must track their state separately from their parent).

+ +

I have a few different ideas on how I could do this, but I'm not sure which is the most ""REST""ful.

+ +

I could:

+ +

1) Create the records in the word and write tables on the initial PUT request, and check for their existence and do nothing on subsequent requests. This is probably the easiest/cheapest way, but it is not idempotent. (A sketch of a fully idempotent variant follows option 3 below.)

+ +

2) Send three PUT requests (with upsert functionality on the backend) from a client-side action, with the end result being that a record exists in all three tables. The downsides are that I have to send three separate PUT requests (which is expensive), and they'd have to be timed correctly - very messy.

+ +

3) Do a GET call on any relevant character a user tries to save first, and if it already exists, issue a PATCH updating its status to the opposite of its current active status. This is a bit cleaner than #2, but also requires quite a bit of back and forth between client and server.
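As mentioned under option 1, here is a minimal sketch of what a fully idempotent PUT handler could look like; Flask and in-memory dicts are stand-ins here, since the real stack isn't specified:

from flask import Flask

app = Flask(__name__)
characters = {}  # stand-in for the character table
children = {}    # stand-in for the word and write tables

@app.route('/characters/<char_id>', methods=['PUT'])
def put_character(char_id):
    # Upsert the parent: after this call it exists and is active regardless
    # of prior state, so repeated PUTs converge on the same end state.
    characters[char_id] = {'active': True}
    # Create the children only if absent; their independent state is never reset.
    for table in ('word', 'write'):
        children.setdefault((table, char_id), {'state': 'default'})
    return '', 204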

+ +

I'm not really sure of the best way to design the REST API in this context - can anyone give me advice on how they think it should be done?

+",42793,,,,,43960.55903,How do I stay RESTful when I have two child resources that need to change state independently of a parent?,,1,2,,,,CC BY-SA 4.0, +409936,1,409941,,5/9/2020 18:14,,4,167,"

When I call a function, I want to receive updates when the function reaches some milestones:

+ +
def do_something():
    start_with_something()
    # update
    for x in iterate_something():
        ...  # update each iteration
    do_something_expensive()
    # update again
    finish_with_something_else()
    # final update
    return the_nice_result
+
+ +

This would allow me, for example, to update a UI with a progress bar, or to show some initial results to the user after the first update.

+ +

A way to do this would be using callbacks:

+ +
def do_something(initial_update, iteration_update, update_again, final_update):
    some_initial_data = start_with_something()
    initial_update(some_initial_data)
    for x in iterate_something():
        iteration_update(x)
    ...
+
+ +

Another way, in Python, would be to (abuse?) generators:

+ +
def do_something():
    some_initial_data = start_with_something()
    yield some_initial_data
    for x in iterate_something():
        yield x
    do_something_expensive()
    yield ...
+
+ +

There are pros and cons to both approaches, but I do not really fancy either of them. For example, the first one clutters the function signature and pushes the update information through passed-in callables, and the second one abuses an iteration mechanism.
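For comparison, a variant of the callback approach that keeps the signature small would be a single optional callback (this reuses the same hypothetical helper functions as the sketches above):

def do_something(on_progress=lambda stage, payload=None: None):
    some_initial_data = start_with_something()
    on_progress('started', some_initial_data)
    for x in iterate_something():
        on_progress('iteration', x)
    do_something_expensive()
    on_progress('expensive_done')
    finish_with_something_else()
    on_progress('finished')
    return the_nice_result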

+ +

In the end I want to keep my do_something function separated from any UI stuff.

+ +

I would like to know if there is an approach for this situation, or a way to architect this, especially in Python, which can avoid the cons of both.

+",238430,,9113,,43960.79028,43961.33819,Approach for updating status of a function,,3,1,,,,CC BY-SA 4.0, +409948,1,,,5/9/2020 22:48,,-10,76,"

Given a class diagram, how can we find design issues in it? I am not talking about whether the relationships it represents match the real world, but about design issues that may show up in the diagram itself, like cycles among classes.

+",349099,,,,,43960.95,Find design issues via a class diagram?,,1,3,0,43961.32431,,CC BY-SA 4.0, +409958,1,,,5/10/2020 12:45,,0,150,"

I was reading the Urbit docs and stopped at this paragraph (emphasis mine):

+ +
+

The main thing to understand about our ‘overlay OS’, as we call it, is that the foundation is a single, simple function. This function is the Urbit OS virtual machine. We call it ‘Nock’. The entire Urbit OS system compiles down to Nock, and Nock is just 33 lines of code.

+
+ +

As per their other docs, Nock seems to be this Turing-complete function.

+ +

What I'm trying to get my head around is the ""compiles down"" wording. How can an operating system reduce to one single function?

+",301468,,,,,44105.64097,What does it mean for an OS to compile down to a function?,,1,3,1,,,CC BY-SA 4.0, +409959,1,409968,,5/10/2020 13:36,,3,225,"

In my project I'm using the observer pattern in several places, i.e. the subject notifies the observers about something and expects them to act. The subject does not know anything about the details of these observers.

+ +

With Spring I autowired/injected the observers as follows (constructor injection):

+ +

@Autowired public Subject(List<Observer> observers) {...}

+ +

This works, and as Observer is an interface, there's no compile-time dependency from Subject to the concrete Observer implementations. However, there needs to be a runtime dependency, so that the observers can be notified by the subject.

+ +

With the approach shown above, I experienced bean dependency cycle issues, as one of the observer's transitive dependencies was the subject itself.

+ +

To fix this, I introduced a new class SubjectInitializer (and changed the Subject accordingly):

+ +
@Component // or however the bean is otherwise declared
public class SubjectInitializer {

    @Autowired
    public SubjectInitializer(Subject subject, List<Observer> observers) {
        subject.addObservers(observers);
    }
}
+
+ +

This way I don't have any bean dependency cycle, as SubjectInitializer is not the target of any dependency.

+ +

However, this fix seems weird to me:

+ +
    +
  1. The observer pattern is used quite a lot, so I guess there's a lot of experience on how to use it with Spring. However, I didn't find anything helpful.
  2. The whole purpose of the SubjectInitializer is to initialize the Subject, which only has to happen once. Creating a new bean (and having the singleton in memory?) seems over the top.
+ +

Is there a better way to autowire the observer instances?

+",68984,,68984,,43962.34028,43962.34028,How to avoid DI dependency cycle for observer pattern,,2,2,,,,CC BY-SA 4.0, +409961,1,,,5/10/2020 15:18,,0,213,"

In my company, we're using Spring Boot to implement the backend API and React to implement the frontend, including the web interface and the Android/iOS apps.

+ +

Since our product is enterprise software, customers actually have to pay to get the latest backend API to deploy on their own servers. However, our mobile apps are regularly updated on the App Store. This leads to a situation where the mobile apps on end-users' devices may be a newer version while the backend API on the customer's machine is an older one. We plan to support up to 3 minor versions backward, meaning FE 5.4 will support backends down to 5.2.

+ +

The backend does have an endpoint that returns the current version number. However, I'm a bit clueless as to how our frontend implementation can maintain backward compatibility with older API versions as we add new features and possibly introduce breaking changes in the backend API.

+ +

I completely understand there might not be any beautiful solution to this problem. I'm hoping that if you've gone through this pain, you can share your experiences about what you've tried, the final approach that you took, and the potential pitfalls to look out for.

+ +

I'm sure myself and other people who are running into this issue would be really grateful :).

+",37202,,37202,,43961.76111,43970.69097,How to design front-end to handle multiple back-end versions,,2,4,,,,CC BY-SA 4.0, +409967,1,,,5/10/2020 17:55,,0,375,"

I am writing an application in C++ and Qt5. It would be very convenient for me to create a virtual file system so I can unit test code working on files.

+ +

I have found that in Qt4 there was a QAbstractFileEngine class which would perfectly match my needs, but it was removed (from the public interface) in Qt5.

+ +

The only thing that comes to my mind is to create the needed file structure in a temporary directory and work on it, but that's not a perfect solution (both in terms of unit tests and in my case).

+ +

Are there any other options? I need a solution working on both Linux and Windows.

+",165468,,,,,43963.29514,fake filesystem for unit tests,,1,0,1,,,CC BY-SA 4.0, +409972,1,,,5/10/2020 20:56,,0,84,"

These would be the base classes like Npc and NpcTask.

+ +
public abstract class NpcTask
+{
+    public Npc Npc { get; private set; }
+
+    public NpcTask(Npc npc)
+    {
+        Npc = npc;
+    }
+
+    public abstract void Update();
+}
+
+public abstract class Npc : MonoBehaviour
+{
+    [HideInInspector]
+    public NavMeshAgent NavMeshAgent;
+
+    public NpcTask CurrentTask;
+
+    private void InitializeComponents()
+    {
+        NavMeshAgent = GetComponent<NavMeshAgent>();
+    }
+
+    protected virtual void Initialize()
+    {
+        InitializeComponents();
+    }
+
+    protected abstract void SetDefaultCurrentTask();
+
+    public void Log(object text)
+    {
+        Debug.Log(""["" + name + ""] "" + text);
+    }
+
+    private void Start()
+    {
+        Initialize();
+    }
+
+    private void Update()
+    {
+        CurrentTask?.Update();
+    }
+}
+
+ +

The next classes are concrete and abstract implementations of the above, to provide more flexibility to the Npc classes.

+ +
public class NpcEnemy : Npc
+{
+    protected override void SetDefaultCurrentTask()
+    {
+        CurrentTask = new NpcEnemyTaskWander(this);
+    }
+}
+
+public abstract class NpcEnemyTask : NpcTask
+{
+    public new NpcEnemy Npc { get { return base.Npc as NpcEnemy; } }
+
+    public NpcEnemyTask(Npc npc) : base(npc)
+    {
+    }
+}
+
+public class NpcEnemyTaskWander : NpcEnemyTask
+{
+    public NpcEnemyTaskWander(Npc npc) : base(npc)
+    {
+    }
+
+    public override void Update()
+    {
+        // Wandering logic.
+        Debug.Log(""NpcEnemyTaskWander.Update()"");
+    }
+}
+
+public class NpcAllie : Npc
+{
+    public List<NpcEnemy> Enemies;
+
+    private void Awake()
+    {
+        Enemies = new List<NpcEnemy>();
+    }
+
+    protected override void SetDefaultCurrentTask()
+    {
+        CurrentTask = new NpcAllieTaskFightEnemies(this);
+    }
+}
+
+public abstract class NpcAllieTask : NpcTask
+{
+    public new NpcAllie Npc { get { return base.Npc as NpcAllie; } }
+
+    public NpcAllieTask(Npc npc) : base(npc)
+    {
+    }
+}
+
+public class NpcAllieTaskFightEnemies : NpcAllieTask
+{
+    public NpcAllieTaskFightEnemies(Npc npc) : base(npc)
+    {
+    }
+
+    public override void Update()
+    {
+        // Fighting logic.
+        Npc.NavMeshAgent.SetDestination(Npc.Enemies[0].transform.position);
+    }
+}
+
+ +

But then there is the player, which will not need pre-coded fighting logic, as the fighting is done by the person playing the game, but which still needs a list of enemies to later know which enemies the player can attack, or something like that.

+ +
public class Player : MonoBehaviour
+{
+    public List<NpcEnemy> Enemies;
+
+    private void Awake()
+    {
+        Enemies = new List<NpcEnemy>();
+    }
+
+    private void Move()
+    {
+        // Whatever used to listen for input to move (just an example).
+    }
+
+    private void Attack()
+    {
+        // Whatever used to listen for input to attack (just an example).
+    }
+
+    private void Update()
+    {
+        Move();
+        Attack();
+    }
+}
+
+ +

So as you can see, both NpcAllies and the Player have a list of NpcEnemies. I think the most feasible/simple solution would be to centralize the list in a singleton placed in the scene?

+ +

Let's also say each enemy could have a target, which could be either the Player or an NpcAllie (I guess I would need to use the GameObject class here to be more general?), so the Player and NpcAllie classes would need to know whether they are a target or not to attack the right NpcEnemies.

+ +
public class NpcEnemy : Npc
+{
+    public GameObject Target;
+
+    protected override void SetDefaultCurrentTask()
+    {
+        CurrentTask = new NpcEnemyTaskWander(this);
+    }
+}
+
+",365365,,365365,,43963.64306,44113.71111,Avoiding duplicate code between the player and npc classes deriving from Unity's MonoBehaviour,,1,0,,,,CC BY-SA 4.0, +409974,1,,,5/10/2020 22:39,,-1,71,"

I am working on building a generic API that should ideally work with any data (Bring Your Own Data), where the overall functionality remains quite similar at the top level.

+ +

For example, let's say we are building a generic recommendation system. Some clients might want recommendations on books, some on movies, some on clothes, etc. The workflow required for each could differ based on the content type and/or client. I am wondering how I should wire the components in the system.

+ +

Option 1

+ +

Should the interfaces involved in my application be super generic, like the tasks and workflows described in Netflix Conductor, where I can dynamically define a workflow from a bunch of existing tasks?

+ +

void process(JsonObject json) - reads existing key-value pairs from the JSON input and adds new information to the same payload, which gets passed through the chain.

+ +

If this type of generic workflow mechanism is so good and provides loose coupling and high cohesion, it should be quite popular. Is that the case?

+ +

Option 2

+ +

Should I keep it a little bit more typed: I pass a request object to a list of processors that return a decorated request object, which I can then execute against my datastore, do some post-processing on, and return?

+ +

Request processRequest(Request request) - takes the original request and returns an augmented request
Response execute(Request request) - executes the request
Response processResponse(Response response) - takes the basic response and returns an augmented response

+ +

Here I am thinking of using dependency injection, having the processors wired up in the different required combinations, and then using the appropriate one depending on the request.
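To make option 2 concrete, here is a minimal typed pipes-and-filters sketch (Python purely for illustration, since my interfaces above are Java-flavored; all names are made up):

from dataclasses import dataclass, field

@dataclass
class Request:
    content_type: str
    params: dict = field(default_factory=dict)

def compose(processors):
    # Chain Request -> Request processors into a single callable.
    def pipeline(request):
        for process in processors:
            request = process(request)
        return request
    return pipeline

def add_default_limit(request):
    request.params.setdefault('limit', 10)
    return request

# Wired up per content type / client (e.g. via dependency injection):
books_pipeline = compose([add_default_limit])
augmented = books_pipeline(Request(content_type='books'))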

+ +

Questions

+ +
    +
  1. I am mainly curious to hear whether highly generic solutions like Netflix Conductor are widely adopted and used.
  2. I have heard about the Pipes and Filters architecture; can we call both options different implementations of Pipes and Filters, or can only option 1 be called that?
  3. I thought having clearly typed interfaces is better than very generic interfaces. Are there some examples of the Pipes and Filters architecture with strict interfaces?
+",237259,,237259,,43962.67708,43992.79514,How to implement different workflows for a single api call in microservice architecture,,1,0,,,,CC BY-SA 4.0, +409976,1,,,5/11/2020 0:07,,-4,44,"

So I'm trying to make a quiz app where the answers are typed rather than multiple choice. However, even though there is always one right answer, in English we have many ways of writing the same thing.

+ +

E.g. Q: What is the definition of obesity? A: When a person weighs over 20% more than the recommended weight for their height.

+ +

However, a user could write: when someone is more than 20% heavier than their recommended weight based on height.

+ +

So my question is, how do I account for all types of correct answers? Machine learning?

+",188210,,,,,43962.37014,How to create a quiz app without multiple choice?,,1,2,,,,CC BY-SA 4.0, +409977,1,409995,,5/11/2020 0:24,,-2,105,"

I understand that for all these ways of measuring, the time and space are dependent on the user's hardware. I was wondering: is there a similar way to calculate the battery usage of a program over some amount of time n, or is it completely dependent on the hardware?

+",350411,,,,,43962.37569,Is there a battery complexity similar to how there is time complexity and space complexity?,,1,3,,,,CC BY-SA 4.0, +409981,1,,,5/11/2020 2:36,,-1,96,"

I'm trying to rapidly develop my frontend, but every time I change my code I find myself refreshing my browser and running some macro to test whether the changes in my code solved the problem.

+ +

I tried changing the process to headless PySelenium, but it takes so long for the driver to launch every time I change my code.

+ +

I also tried Cypress.io, but after following the tutorial, the directory just didn't load.

+ +

I'm looking for a headless option that runs as fast as possible.

+ +

Ideally, the testing framework would be independent of the JS development framework (e.g., React or Angular).

+",160679,,160679,,43962.83889,43963.55139,How to do test-driven front end development?,,1,7,,,,CC BY-SA 4.0, +409982,1,,,5/11/2020 2:38,,-1,144,"

I guess it would be too complex for Node.js / JavaScript to leverage, but I've been working with clusters in Node to break big tasks down so all cores can work at once. However, the inter-process messaging provided by Node seems to stringify the data to JSON and re-parse it, causing high overhead when sharing calculated results between worker and master threads.

+ +

It seems like you could let multiple threads all read from / write to the same memory location in RAM if you wanted. Yes, you'd have a problem if two threads tried to write to the same memory location at the same time, but you could do a timeshare, allowing each thread to only read/write the location during certain nanosecond intervals. Is such a thing possible in lower-level languages, and if so, is it a common practice? If yes, what's the terminology for it? If not, why?

+",100972,,,,,43962.26667,Do other languages have variables shared between threads?,,1,4,,,,CC BY-SA 4.0, +409983,1,,,5/11/2020 3:34,,-4,55,"

Dear programmers and Stack Overflow friends,

+ +

First, I want to say thank you to Stack Overflow users for helping me finish 20% of my PhD project since last year (using Python to draw some technical graphs). Now I am waiting for the viva at my university. I have been a Python user for one and a half years. I keep searching for solutions to coding problems and troubleshooting my code until it works and gives a correct answer. I also copy and paste other people's code and learn from it by googling and modifying it. However, a few days ago something started to worry me: I watched a YouTube video about how to solve coding problems fast, and the presenter said this kind of behavior is bad and that you will fall into a trap called 'tutorial purgatory' forever and ever. It has been quite worrying me. I want to ask: is this true, and what should I do? Should I give up watching other people's coding tutorials and directly start coding on my own? I really need advice from expert programmers, since I am still new to programming. Thank you very much.

+",365440,,,,,43962.21806,Question about the tutorial purgatory in coding path,,1,0,,,,CC BY-SA 4.0, +409985,1,410010,,5/11/2020 4:22,,1,66,"

I have a string variable named status in my code that can take on three values ""Starting"", ""In-progress"", and ""Complete"". A friend of mine said it was good practice to enforce that status has one of those three values by using an enum. What is the best practice way to do this? (And for what it's worth, is an enum the right tool to use here?)

+ +

I've tried:

+ +
from enum import Enum

class Status(Enum):
+    Starting = ""Starting""
+    InProgress = ""In-progress""
+    Complete = ""Complete""
+
+ +

and then in my code I have assert statements as so:

+ +
assert(status in Status._value2member_map_)
+
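For reference, the same membership check can be written without reaching into the private _value2member_map_ attribute; this is just one alternative sketch, not necessarily the best practice I'm asking about:

def is_valid_status(value):
    try:
        Status(value)  # enum lookup by value raises ValueError if unknown
        return True
    except ValueError:
        return False

assert is_valid_status(status)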
+ +

Is this the right way to do it, or is there something else better?

+",365441,,365441,,43967.79306,43967.79306,Best way to enforce variable has certain values using enum in Python,,2,3,,,,CC BY-SA 4.0, +409993,1,409999,,5/11/2020 8:34,,3,731,"

I built my API service and now I want to consume the information from a WPF application.
So far I have created the class ApiHelper, which initializes and provides the HttpClient used to call the API endpoints, and the class MemberService, which exposes the methods to get the data it retrieves from the API.

+ +
public class ApiHelper : IApiHelper
+{
+    private readonly IConfiguration _config;
+
+    public HttpClient ApiClient { get; private set; }
+
+    public ApiHelper(IConfiguration config)
+    {
+        _config = config;
+        InitializeClient();
+
+    }
+
+    private void InitializeClient()
+    {
+        string apiBase = _config[""api""];
+
+        ApiClient = new HttpClient
+        {
+            BaseAddress = new Uri(apiBase)
+        };
+
+        ApiClient.DefaultRequestHeaders.Clear();
+        ApiClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(""application/json""));
+    }
+}
+
+ +

This is the service:

+ +
public class MemberService : IMemberService
+{
+    private readonly IApiHelper _apiHelper;
+
+    public MemberService(IApiHelper apiHelper)
+    {
+        _apiHelper = apiHelper;
+    }
+
+    public async Task<List<Member>> GetAllMembersAsync()
+    {
+        using (HttpResponseMessage response = await _apiHelper.ApiClient.GetAsync(""/api/Members""))
+        {
+            // If the response has a successful code...
+            if (response.IsSuccessStatusCode)
+            {
+                // Read the response
+                var result = await response.Content.ReadAsAsync<List<MemberResponse>>();
+                var output = AutoMapping.Mapper.Map<List<Member>>(result);
+                return output;
+            }
+            else
+            // If the response is not successful
+            {
+                // Error handling here
+            }
+        }
+
+        return null;
+    }
+}
+
+ +

My question is not about handling errors such as HttpRequestException, which I could handle with a try/catch block, but about how to handle responses with a status code not in the 2xx range.

In particular, the endpoints of the API are token-protected, so a user has to authenticate before making a request, and after the token expires it needs to be refreshed.

What I'd like to do is this: when the response has a status of 401 - Unauthorized, instead of throwing an exception or returning an object with an error message, the method should try to refresh the token (calling the appropriate method in another service), because it could be expired, and only if the call fails again throw the exception. I'd like to have this behavior for all the API calls.

It's the first time I've worked with API calls, so this is what I thought of doing, but I'd like some tips on how to do it. What is the best approach to achieve this? Or do you suggest another way to make API calls and manage their responses?

+",333331,,,,,43962.58403,How to handle failed API calls in C#,,1,0,,,,CC BY-SA 4.0, +409998,1,410005,,5/11/2020 10:27,,1,129,"

I'm trying to make a sports stats app in Java/Android + Realm.

+ +

I have the following classes:

+ +
Season 
+Player
+Matches
+
+ +

I would like the Season to contain a ""list"" of all the players that played that season and Players to have a ""list"" of matches.

+ +

I currently have actual lists as attributes of each class. For example, the Player class:

+ +
public class Player{
+    String name;
+    int wins;
+    int losses;
+    List<Matches> matches;
+}
+
+ +

Here I would just use the .add method to add new matches to the player. Also, I removed some Realm-related syntax for simplicity, but this would be a RealmObject.

+ +

Is there a better way to do this? I noticed issues with this implementation: if I wanted to get all the Matches associated with a Season, I would have to go through each player and account for duplicates. Similarly if I wanted to view all players regardless of Season.

+ +

What would the better way be?

+ +

Edit: to clarify this is with combat sports, and I'm looking for ways to associate objects with other objects. The real purpose of the app is on a small level, for coaches to keep track of their players. A player is a team essentially. The user is pretty much focused on player stats. Here's the layout with UFC as example.

+ +

The app would open to a list of years. From the list, let's say I select 2019. It would then open up a list of all players in 2019 that had matches. I select Khabib. It would then show me his stats and matches for that year (with the option to change the date range, so I can view the past 3 years or lifetime if needed). And then I can select a specific match and view its details.

+ +

The issue with my current implementation is modifying ranges. For example, if the user wants to view all players regardless of season, or change the range in which matches are viewed, etc.

+",365463,,365463,,43963.09444,43965.14236,Best way to Model Classes associated with other Classes?,,3,3,,,,CC BY-SA 4.0, +410002,1,410003,,5/11/2020 12:30,,-2,42,"

I have a scenario where a service needs to sync information back and forth between different channels (and each channel should have the same information, e.g. inventory of products). It all works fine if the process is sequential. The problem is that by adding more channels, or with more information, the process gets slow.

+ +

Creating microservices for each channel is not a problem; the problem is the communication between them and logging errors if there are any (once you write it down, it seems better organized than just planning it).

+ +

My question is: is the use of a message broker (like RabbitMQ) the right tool to consider in such cases, or should one rely purely on direct communication between the services? Is there any practical pattern or use case to follow in such cases?

+",121239,,,,,43962.55417,"Synchronisation of different ""channel"" in an asynchronous way",,1,5,,,,CC BY-SA 4.0, +410004,1,,,5/11/2020 14:22,,1,50,"

I am leading a team of developers creating a Java library. We are following the standard Semantic Versioning model. The code is versioned with Git. Artifacts are built and pushed to an artifact server using Gradle.

+ +

As is the standard, the artifact version, when published, is determined in build.gradle with the line version 'xxx.xxx.xxx'. I want to enforce a policy that all merges to the master branch require that this version be changed. This is to ensure we don't accidentally publish a new artifact over an old version, which could cause a dependency nightmare for downstream users of our library.

+ +

It looks to me like the Git Client Side Hooks may hold the answer, but the methodology for scanning through the changes in a merge and looking for a particular one eludes me.
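For what it's worth, here is a minimal sketch of such a check as a client-side pre-push hook in Python (it assumes the version lives on a line like version 'x.y.z' in build.gradle and that origin/master has been fetched; both are assumptions about the project layout):

#!/usr/bin/env python3
import re
import subprocess
import sys

def version_at(rev):
    # 'git show <rev>:build.gradle' prints the file as of that revision.
    out = subprocess.run(['git', 'show', rev + ':build.gradle'],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"version\s+'([^']+)'", out)
    return match.group(1) if match else None

if version_at('HEAD') == version_at('origin/master'):
    sys.exit('build.gradle version unchanged - bump it before pushing')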

+ +

Note that I do not need the hook to do anything intelligent about the changes to the version line; I trust the developers to increment the values correctly. This is just about making sure a forgetful dev (which will undoubtedly be me) is reminded that the version needs to be changed.

+",50376,,,,,43962.59861,Gradle + Git - enforcing a version change upon merge to master,,0,1,,,,CC BY-SA 4.0, +410006,1,410008,,5/11/2020 14:58,,-5,91,"

If I try to rewrite specific regex functionality (e.g. substituting a string) in pure Python, a solution using the regex module is always faster.

+ +

Is regex written in C?

+",365478,,9113,,43963.60139,43963.60139,In what programming language is Python's regex module written in?,,1,4,,43962.75347,,CC BY-SA 4.0, +410007,1,,,5/11/2020 15:42,,0,86,"

I'm building a SaaS application with a single-tenant architecture in Google Cloud. Each tenant has its own instance with its own subdomain (e.g., client1.myapp.com). I interact with some third-party services by using webhooks. These webhooks are used by the third parties to notify my app of certain events, but each event is relevant to just one tenant. However, these third-party services do not allow me to specify a different endpoint for each of my tenants (I can't specify different subdomains, query params or headers...), so I must stick with just one URL for all instances (e.g., main.myapp.com).

+ +

+ +

Thus, I need to set up a reverse proxy between the third parties and my app, but the unorthodox thing for me here is that this reverse proxy will need to route requests to the appropriate tenant instance based on the content of the requests (a JSON body). That rules out a cloud load balancer such as the one offered by GCP.

+ +

What would be the standard approach to follow here? I've thought of simply setting up a small Flask server to decode the JSON body, read the key needed to decide the relevant tenant (e.g., a ""store_id"" parameter; each tenant has N stores), and route to the tenant by following a routing table in an SQLite DB that holds the relation between tenants and stores. But I'm afraid this will severely reduce the number of requests I can process. I do not have strong requirements in terms of latency, and each tenant would likely receive around 1k-2k messages per day, but over time I will likely have thousands of tenants. Of course I could scale this Flask instance horizontally, but somehow that feels too complex.
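For concreteness, the router I have in mind would look roughly like this (the table schema and the store_id field are assumptions):

import sqlite3

import requests
from flask import Flask, request

app = Flask(__name__)

def tenant_for(store_id):
    # Hypothetical routing table: stores(store_id, tenant)
    with sqlite3.connect('routing.db') as db:
        row = db.execute('SELECT tenant FROM stores WHERE store_id = ?',
                         (store_id,)).fetchone()
    if row is None:
        raise LookupError('unknown store %s' % store_id)
    return row[0]

@app.route('/webhook/<provider>', methods=['POST'])
def route_webhook(provider):
    payload = request.get_json(force=True)
    tenant = tenant_for(payload['store_id'])
    # Forward the body unchanged to the tenant's own instance.
    resp = requests.post('https://%s.myapp.com/webhook/%s' % (tenant, provider),
                         json=payload, timeout=10)
    return resp.text, resp.status_code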

+ +

Context and bonus question

+ +

The previous problem is making me second guess myself in choosing a single-tenant arch vs multi-tenant. My thoughts are pretty similar to this answer. The PROs I see are:

+ +
    +
  • Simplicity in development
  • +
  • Easier scaling
  • +
  • Increased reliability (staged rollouts, impossibility of failing to all customers at once).
  • +
+ +

But I see that this may mean a lot of devops complexity as the tenant number grows (and higher costs). I'm betting on being able to address devops complexity with tools such as Octopus... Is it still such a nightmare to maintain many instances vs having a single point of failure with just one instance and scaling vertically?

+",132771,,132771,,43963.33125,43963.33125,Content-based reverse proxy for single tenant architecture?,,0,0,,,,CC BY-SA 4.0, +410012,1,,,5/11/2020 17:38,,0,28,"

I am developing multiple Spring Boot projects that share the same DB, and therefore Entity and Repository classes/interfaces. The Entities and Repositories are stored in a ""common"" library so that one implementation can serve all projects. The DB tables are generated automatically by hibernate.

+ +

I stumbled upon a problem where one service may have a previous version of the ""common"" library, and therefore outdated Entity classes, whereas another project will have an up-to-date version. This may cause issues when the outdated project tries to access columns that no longer exist in the database.

+ +

I thought about creating an intermediary service which would be the only service with access to the DB. It would allow other services to talk to it through an API in order to retrieve, create or modify data.

+ +

I'd like to find out your opinions on whether this is the right thing to do, and if there is a better solution to the problem?

+",339266,,,,,43962.73472,Spring JPA shared repositories - do I need an intermediary data access point?,,0,1,,,,CC BY-SA 4.0, +410014,1,,,5/11/2020 17:58,,0,117,"

I'm finding the use of IoC containers to be quite the shift in my application design, and it's for the better. I'm using a framework called injector that aims to mimic (albeit not entirely) the Guice framework.

+ +

With that in mind, I have 4 classes that I'd like to outline, showing where my mixing of container use and ""poor man's"" (manual) DI takes place:

+ +
from injector import inject, singleton

@singleton
+class App:
+
+    @inject
+    def __init__(self, config: Config, logger: Logger) -> None:
+        self.config = config
+        self.logger = logger
+
+@singleton
+class Config:
+
+    def __init__(self, path) -> None:
+        self.path = path
+
+@singleton
+class Logger:
+
+    def __init__(self, path) -> None:
+        self.path = path
+
+@singleton
+class Client:
+
+    def __init__(self, config: Config, logger: Logger) -> None:
+        self.config = config
+        self.logger = logger
+
+ +

In production I use a module to tell the container how to instantiate Config and Logger:

+ +
from injector import Module, provider

class AppModule(Module):
+
+    CONFIG_PATH = '/foo/bar/settings.config'
+    LOGGER_PATH = '/foo/bar/logs'
+
+    @provider
+    def provide_config(self) -> Config:
+        return Config(self.CONFIG_PATH)
+
+    @provider
+    def provide_logger(self) -> Logger:
+        return Logger(self.LOGGER_PATH)
+
+ +

The entry point of the program looks like this:

+ +
from injector import Injector
from program import App
+from program import AppModule
+
+if __name__ == '__main__':
+
+    injector = Injector(AppModule)
+    app = injector.get(App)
+    app.start()
+
+ +

Now, in practice, the start method for the App class would look like this:

+ +
def start(self) -> None:
+    self.config.load()
+    Client(self.config, self.logger).run()  # is a blocking operation for a GUI to run
+    self.end()
+
+def end(self) -> None:
+    self.config.save()
+
+ +

It is in the start method that ""poor man's"" DI takes place. I never define to the IoC container how a Client object is instantiated (hence no @inject), and it uses the same singleton Config and Logger that App uses. The IoC container is responsible for injecting the appropriate dependencies into the App class, but afterwards I explicitly pass them to a new instance of Client. I do this because App does not depend on Client (right?), given that none of its member functions ever need to reference it again beyond start.

+ +

If I were to define it in the IoC container (through AppModule), I would have to somehow reference the Injector instance inside App, which would mean passing it around. To my knowledge, Guice discourages this and it is bad practice for DI.
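For concreteness, that discouraged alternative would look roughly like this (assuming, as I believe is the case, that injector binds the Injector instance so it can itself be injected; this sketch builds on the classes above):

@singleton
class App:

    @inject
    def __init__(self, config: Config, logger: Logger, injector: Injector) -> None:
        self.config = config
        self.logger = logger
        self._injector = injector  # service-locator style: App now knows the container

    def start(self) -> None:
        self.config.load()
        self._injector.get(Client).run()  # container builds Client from the same singletons
        self.end()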

+ +

Therefore, is it normal or problematic to use manual DI in tandem with an IoC container?

+",325526,,325526,,43962.75486,43993.79236,"Mixing IoC container use with ""poor mans"" DI",,1,0,1,,,CC BY-SA 4.0, +410015,1,,,5/11/2020 18:15,,0,40,"

I work on a product that is a multi-tenant cloud solution. When it comes to repeatable batch processing, we have a set pattern.

+ +
    +
  1. We configure a job to wake up and start executing its logic at regular intervals. Specifically, in each run it iterates through all tenants, one by one, reads information pertaining to the respective tenant (stored in a DB) and executes business logic.
  2. We have more than one instance of any particular job running (on different nodes/servers) to ensure fault tolerance. Thus, an instance of a job acquires a lock on a database table row which is statically mapped to the job. By 'acquiring' I mean that the job marks a column in that row as claimed. This ensures that only one amongst the multiple instances of the same job acquires the lock and processes data. This way we don't have duplicate processing of any tenant.
+ +

Of late we have seen the need to re-architect this to deal with larger volumes and general scalability issues.

+ +

We want to go the way of multiple instances of jobs working on mutually divided workloads, so that we use our resources better and our throughput increases.

+ +

Are there known patterns/technologies to do this?

+",197820,,197820,,43962.76389,43962.76389,Batch processing : Solutions for distribution of workloads,,0,1,0,,,CC BY-SA 4.0, +410016,1,,,5/11/2020 18:28,,1,387,"

I have read many tutorials on Polymorphism and all of them show examples of Polymorphism used in arrays, for example you would have an Animal (parent class) array, and its elements are Cat and Dog and Rabbit (child classes), and then you would loop through the array and call Animal.speak() on each element.

+ +

Are there others ways in which Polymorphism is used (other than in arrays)?

+",280327,,,,,43965.88958,In what ways is Polymorphism used (other than in arrays)?,,6,4,,,,CC BY-SA 4.0, +410021,1,,,5/11/2020 23:02,,0,22,"

I am in the process of designing an iOS application that will allow users to capture new photos and select existing photos of items. The intended purpose of the application is to allow users the ability to track growth of living organisms over time so that growth can be more easily identified.

+ +

The mechanisms of capturing photos within the application are:

+ +
    +
  1. Select saved photos from device photo library
  2. +
  3. Capture new photos with camera
  4. +
+ +

I intend to associate the photos with user-created objects within the application and persist the object management graph on-device via Core Data. Using APIs that were released at WWDC 2019, CloudKit will be used in concert with Core Data to provide iCloud persistence so that users can access their data across devices. However, this is where I am having difficulty identifying an appropriate solution for object-photo management.

+ +

The solutions that I have identified are:

+ +
    +
  1. Store each user-selected and user-captured photo within the application's data independently of the device's photo library
  2. +
  3. Store references to each user-selected and user-captured photo within the application and iCloud
  4. +
+ +

Regarding the first solution, I do not think that duplicating photos is a good decision with regard to device and iCloud storage. However, this solution allows copies of the original photos to be made, which users can then modify within the application, leaving their original photos unmodified. Additionally, if a user captures many photos (think daily or weekly), then the application's storage may grow considerably if they are tracking many items.

+ +

Regarding the second solution, I have been unable to find properties of PHObject objects, which correspond to each photo in a device's photo library, that identify photos across devices. The localIdentifier property seems to be unique to each photo on each device, according to Apple's documentation for fetching PHObjects. I have been unable to find a solution that will allow me to identify the same photo on two different devices that belong to the same user and are signed into the same iCloud account.

+ +

I want to ensure that I am not consuming more space on a user's device(s) or iCloud storage than necessary, but I am currently convinced that storing duplicated photos is the better solution.

+ +

Are there other options that I may not know about?

+",365504,,365504,,43962.96458,43962.96458,How to Properly Store Photos in iOS Application When Using iCloud and Photos Frameworks,,0,0,,,,CC BY-SA 4.0, +410022,1,,,5/11/2020 23:34,,1,117,"

I have an application that's mostly one large data pipeline. That pipeline runs daily and stores processed data in the database (it takes the execution date as its argument). Occasionally the client finds some bug in the data processing algorithm or simply requests a change to it. Then I write the bug fix or modify the algorithm, push my changes to VCS, and can nicely and easily deploy my changes to the production environment. However, this doesn't solve the problem of applying those changes retroactively. There's still a lot of data in the database from the past, processed with an erroneous or outdated version of the pipeline.

+ +

Moreover, I don't want to rerun a whole pipeline. If my pipeline looks like this:

+ +

A -> B -> C -> D -> E

+ +

and I've made a change only in the second arrow, then I want to rerun only the second, third and fourth (consecutive arrows depend on the previous ones).

+ +

I've been solving this issue manually, namely by deleting old data from some tables in the database and forcing the pipeline to run sequentially with old dates as arguments, starting from the proper arrow. Sometimes the deletion part requires something more sophisticated, like deleting values only from one column - the exact action depends on the algorithm modification.

+ +

This creates a new problem - new pipeline runs require old ones to be finished before they can start, otherwise the results would be wrong. So each time I do this, I have to mess with the production environment even more: disable pipeline autoruns and wait for the retroactive runs to end before I re-enable them.

+ +

This whole process seems really wrong and error-prone. Can I do better? I've been thinking about putting all those actions into small transition scripts each time I need them, but then what? How do I deploy those scripts to production? It seems really weird to put such one-time-use scripts into VCS. They are not part of the application; they're just ways of transitioning between versions - the application could be built from scratch without them. On the other hand, it does resemble database migrations a bit...

+ +

But OK, even if I did so, this complicates my CD. My runners would need to check what kind of changes were made in the last app version, and if they're algorithm changes they would additionally need to run transition scripts. It seems like a lot of additional, non-generic effort for a generic-looking problem.

+",365506,,365506,,43962.98819,44145.37778,How to deploy pipeline rerun?,,2,0,,,,CC BY-SA 4.0, +410028,1,410029,,5/12/2020 6:20,,0,34,"

I am looking for general guidelines on allocating table space quota to the different layers/schemas in the ETL flow of a data warehouse (% of total space in each layer). As per my research, an ETL flow for a data warehouse can consist of 4 layers:

+ +
    +
  1. Staging - truncate and load data from source files.
  2. ODS - Type 1 persistent tables.
  3. Transformation layer.
  4. DWH layer - final dimensional model layer.
+ +

I understand space requirements may vary based on project requirements; however, any general guideline (if any such exists in the data warehousing and ETL space) to estimate the space would be helpful.

+ +

Thanks,

+",365183,,365183,,43963.27083,43963.77014,Is there any general guidelines to allocate table space quota to different layers in ETL?,,1,3,,,,CC BY-SA 4.0, +410034,1,,,5/12/2020 9:31,,-2,299,"

I have a multi-threaded application with n threads, implemented using ExecutorService.

+ +

There are x tasks (tens of millions) submitted to the ExecutorService.

+ +

Each task needs to generate millions of items (the number of items is not constant), where each item must have a unique numeric id (with a maximum length).

+ +

How can I generate such ids?

+ +

Where the restrictions are:

+ +
    +
  • No locking! No blocking! Performance is a top priority (so we can't use Random.next, AtomicInteger, or a concurrent hash of ids to pull from).

  • +
  • Id must be unique across all threads.

  • +
  • Preferably the ids would be sequential, but this is not required.

  • +
  • And of course no databases, etc.

  • +
+ +

Having said that,

+ +
    +
  • It’s not possible to give each thread a “range of ids” for it to use, +as in theory 1 thread can get 99% of tasks – so we don’t know what +range to give it (and we can’t just give it a huge range).

  • +
  • It’s not possible to create Random instance for each thread as +uniqueness is not guaranteed.

  • +
+ +

I will appreciate your ideas.

+",365530,,,,,43964.34375,Generating unique numeric ids in multiple thread application without locking,,2,17,,,,CC BY-SA 4.0, +410038,1,,,5/12/2020 13:35,,-2,196,"

So, most software depends upon third-party libraries to some extent or another. Specifications of such libraries' behaviour usually take the form of human-readable documentation.

+ +

We write integration tests to ensure that interactions between our software and these libraries yield the expected behaviour.

+ +

But suppose instead that package maintainers published some formal (machine-readable) specification of the contract to which their library's API adheres:

+ +
    +
  • repositories could enforce semantic versioning by comparing changes in specification

  • +
  • repositories could reject packages that fail automated verification against their specification

  • +
  • users of those repositories could be comfortable that the library adheres to its specification, modulo trust of the repository's automated verification process (but could also perform automated verification of their own)

  • +
  • users of the libraries could automatically have test stubs generated from the specification

  • +
  • (some) integration tests may no longer be required

  • +
+ +

These all seem like pretty big wins to me. Yet I don't see any package management tools or repositories doing anything like this.

+ +

What am I missing?

+",365547,,,,,43964.33681,Why don't packages formally specify (and repositories verify) their contracts,,6,15,,,,CC BY-SA 4.0, +410047,1,410071,,5/12/2020 16:19,,0,284,"

What would be the best practice for handling input validation in a microservice, especially for duplicated data?

+ +

To give context, say I have 3 services:

+ +

User

+ +

A typical user service with a User object containing a lot of details (~40 fields in the object).

+ +

Asset

+ +

It has an Asset object like:

+ +
{
+  id,
+  name,
+  companyId,
+  description
+}
+
+ +

News

+ +

The News service has a Feed object that has references to both User and Asset; however, News only cares about a subset of both the User and Asset fields (i.e. it only needs 10 of the User fields).

+ +
{
+  id,
+  title,
+  description,
+  asset,
+  user
+}
+ +

I'm aware of the concept that News should have its own view of User and Asset, and that data consistency can be achieved through a message hub. So the question now becomes: if I have an HTTP POST request for a new feed, the request body looks something like

+ +
{
+  ""title"": ""title"",
+  ""description"":""description"",
+  ""asset"": {
+    ""id"": ""asset1"",
+    ""name"": ""phone"",
+    ""companyId"": ""company-1""
+  },
+  ""user"": {
+    ""id"": ""user-1"",
+    ""name"": ""name"",
+    ""companyId"": ""company-1"",
+    ""roles"": [""reporter""]
+  }
+}
+
+ +

How should I perform the input validation for both the user and asset fields in the News service? Say the input validation for user is something like:

+ +
    +
  • the user role can't be a mix of reporter and manager (it must be either one)
  • +
  • name < 120 chars
  • +
  • companyId must be null if the user role is client; otherwise it must be set
  • +
+ +

Should I then have this input validation sit in both the News and User services? Using a common/shared object with validation seems simpler, but I always thought that was kind of an antipattern. Or am I missing something here?

+ +

PS - I tried to avoid direct service-to-service calls (also, the client can't perform multiple requests due to limited network bandwidth), so just storing assetId and userId in the Feed won't work.

+ +

PPS - the backend code is written in Go, so an OOP approach may not be best suited.

+",37685,,,,,43964.33542,How to handle input validation in microservices for duplicated data,,1,5,1,,,CC BY-SA 4.0, +410049,1,,,5/12/2020 17:31,,4,165,"

It seems common practice to not positively indicate during a password reset attempt whether the email requested is actually registered or not. This makes sense because knowing that an email is registered will give an attacker a potential vector for exploit.

+ +

However, how does one do this during the new account registration process? I cannot create an account for an email that already exists; I can merely tell the user that they already have an account and ask whether they meant to log in instead (etc.).

+ +

Obviously this exposes the fact that the given email is already registered. Is there any clever way to prevent this? Am I overthinking this?

+",110451,,,,,43964.55903,How to protect potential attackers from finding emails that are already registered?,,0,4,1,,,CC BY-SA 4.0, +410057,1,,,5/12/2020 21:43,,0,1410,"

Given the following architecture and frameworks: a Spring Boot application with Spring Data JPA (Hibernate is used as the OR mapper), and a layered architecture as follows.

+ +
    +
  • REST layer
  • +
  • Service layer
  • +
  • Persistence layer (JPA)
  • +
+ +

DTOs are used to transfer the data between frontend and backend. Mappings for DTOs -> entity objects and entity objects -> DTOs already exist.

+ +

My questions are:

+ +
    +
  • Should DTOs contain only the id as a reference to another object (DTO), or the other DTO object itself as a reference type?
  • +
  • Should entities contain only the id as a reference to another object (entity), or the entity object itself as a reference type?
  • +
+ +

To be more precise: e.g. given a fictitious application where orders and items exist, and an order consists of multiple items, how should the corresponding DTO (1) and entity object (2) look?

+ +

(1)(a)

+ +
public class OrderDTO {
+   private Long id;
+   private List<Long> items; // id of item entities
+   // further stuff omitted
+}
+
+ +

or

+ +

(1)(b)

+ +
public class OrderDTO {
+   private Long id;
+   private List<ItemDTO> items;
+   // further stuff
+}
+
+ +

whereas ItemDTO is

+ +
public class ItemDTO {
+   private Long id;
+   private String name;
+   // further stuff
+}
+
+ +

(2)(a)

+ +
@Entity 
+public class Order {
+   @Id
+   // value generation strategy would be set here
+   private Long id;
+
+   // some JPA annotations to reference the id column of items
+   private List<Long> items;
+   // further stuff omitted
+}
+
+ +

or

+ +

(2)(b)

+ +
@Entity
+public class Order {
+   @Id
+   // value generation strategy would be set here
+   private Long id;
+
+   @OneToMany
+   // further annotations
+   private List<Item> items;
+   // further stuff omitted
+}
+
+ +

whereas Item is

+ +
@Entity
+public class Item {
+   @Id
+   // value generation strategy would be set here
+   private Long id;
+   private String name;
+   // further stuff
+}
+
+ +

I would be glad if you could share your best practices/opinions. Thanks.

+",365578,,365578,,43963.92569,43964.08681,Best practice for references in DTOs and entities in Spring,,1,3,,,,CC BY-SA 4.0, +410062,1,410066,,5/13/2020 2:17,,-1,44,"

Problem Description

+

I'm working in Python and I've been having a problem designing something to handle the following (abstracted) system:

+

I have some objects (let's call them Nodes) that can be in states. These correspond to physical objects in real life. One node can be in multiple states. Additionally, each state can act on one or more nodes. For example:

+
# Create nodes
+A = Node("A")
+B = Node("B")
+# Apply states to A
+Blue(A)
+Red(A)
+# Apply states to B
+Blue(B)
+# Apply states to both A and B
+BlueConnection(A,B)
+BlueConnection(B,A) # Not necessarily the same thing, sometimes order matters!
+
+

Each state might have some states that it requires or some states that prevent it. For example:

+
Blue(A)
+Green(A)
+Blue(B)
+RedConnection(A,B) # Invalid! RedConnection requires all nodes to be red.
+BlueConnection(A,B) # Also invalid! BlueConnection won't work if any of the nodes are green.
+
+

EDIT: These restrictions correspond to real-life physical restrictions. For a more concrete example:

+
A = Node("A")
+Grounded(A)
+Flying(A) # Invalid! A can't be standing on the ground and flying at the same time.
+B = Node("B")
+FlyingEast(B) #Invalid! B can't be flying East if it's not Flying.
+
+

It's possible to remove states. If this causes other states to no longer meet their requirements, those states are automatically removed as well.

+
Blue(A)
+Blue(B)
+BlueConnection(A,B)
+# Use a static method of State to remove Blue from A
+State.remove(Blue, A) # Also automatically removes BlueConnection(A,B), since BlueConnection requires both nodes to be blue.
+
+
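+ +

A rough Python sketch of that cascading removal, with all names and the data layout made up just to show the worklist idea:

+ +
# Each applied state is a (kind, nodes) pair; REQUIRES maps a state kind
+# to the kinds it depends on for every node it touches. Names are made up.
+REQUIRES = {'BlueConnection': {'Blue'}}
+active = [('Blue', ('A',)), ('Blue', ('B',)), ('BlueConnection', ('A', 'B'))]
+
+def remove(kind, node):
+    worklist = [(kind, node)]
+    while worklist:
+        k, n = worklist.pop()
+        for entry in list(active):
+            other, nodes = entry
+            if entry == (k, (n,)) or (k in REQUIRES.get(other, set()) and n in nodes):
+                active.remove(entry)
+                worklist.extend((other, m) for m in nodes if (other, m) != (k, n))
+
+remove('Blue', 'A')
+print(active)   # only ('Blue', ('B',)) survives; the connection cascaded away
+
+ +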

Finally, we'll write "Transitions", which are groups of state changes. Users will select transitions to do what they want to do. Each of these transitions corresponds to a physical process. Users cannot apply States individually, only Transitions.

+
def FeelingBlue(A, B):
+    '''
+    A and B must not be Green.
+        (Assume this is somehow enforced and this transition isn't available
+        to the user if either is green.)
+    '''
+    Blue(A)
+    Blue(B)
+    BlueConnection(A,B)
+
+

EDIT: To clarify, I will be writing all the transitions. Hence, I'm not trying to solve my problems for all possible transitions, I just want an algorithm that will find 'bad' transitions and tell me to remove them.

+

What I Want to Do

+

I want a nice way to do the following (in Python, I'm not worried about a GUI for now):

+
    +
  1. Applying a State to some Node(s) +
      +
    • If this would make the configuration invalid because some old states are incompatible with the new state, those old states should be removed
    • +
    +
  2. Removing a State from some Node(s) +
      +
    • If this would cause other states to no longer meet their requirements, those dependent states should be removed as well
    • +
    +
  3. Each Transition should only have one valid output +
      +
    • What I don't want is a situation where multiple different removals could all lead to acceptable, but different, States being assigned to the nodes.
    • +
    +
+

My Problems

+

I'm not sure whether it's possible to guarantee condition 3. The main issue I see is that one removal could lead to another, which could lead to another, and so on. It seems like this has the potential to create situations where the order in which I resolve these removals leads to different results.

+

As a possible alternative, I've also considered making it so that States can't be removed if an existing State requires it. That would mean I'd have to watch out for dependency cycles. How would I manage those cycles and ensure that no combination of transitions could lead to such a cycle?

+

This whole thing feels a bit like a graph theory problem that someone smarter than me has probably already written a paper about, but the closest thing I can find is dependency graphs.

+

EDIT: Hans-Martin Mosner has shown how fulfilling 3 is impossible if there is a requirement of the form 'choose 2 of 3'. If we were to restrict ourselves to requirements where ALL requirements must be met (no using 'choose x of y') and ALL incompatibilities must be avoided, would it be possible to fulfill condition 3?

+",365588,,-1,,43998.41736,43964.275,Resolving Dependencies and Incompatibilities Deterministically,,1,3,,,,CC BY-SA 4.0, +410067,1,410068,,5/13/2020 6:09,,3,169,"

Let's say a developer is tasked to implement a certain feature in the codebase. That developer tries to implement the feature using Design A (e.g. a certain design pattern). The developer finds out halfway through implementation that, while Design A appeared to be an appealing solution at first, it actually introduces more problems than it solves. Therefore, he decides to go ahead and use Design B to implement the feature.

+ +

How can this developer communicate to any future developers (including himself) that they should stay away from using Design A in the implementation of this feature? The developer wants to do this in order to prevent a fellow developer from accidentally re-attempting something that was already tried before and found not to be a good solution (this might happen in a code-refactoring session, for example).

+",360382,,,,,43964.32708,How to let others developer know what was tried and didn't work?,,1,3,,43964.33403,,CC BY-SA 4.0, +410076,1,,,5/13/2020 10:11,,0,26,"

We're quite lucky in being able to rearchitect an existing system so that we can learn from existing practices. One of the issues that we want to address first is setting up a flexible, but rational, Authentication process.

+ +

We have a mix of:

+ +
    +
  • internal users (for these, our existing ActiveDirectory setup makes sense to use, but we can't add additional fields/claims, etc., as it's company-wide)
  • +
  • external users (assuming that authentication for these users will be held elsewhere)
  • +
+ +

Our current plan is to set up IdentityServer as a dedicated API somewhere in our systems. Then, when a user authenticates, we'd route to either ActiveDirectory (internal users all have @company.com emails) or to the ""other"" authentication source.

+ +

If that step is successful we would add some additional meta-claims from the dedicated IdentityServer (company, role, applications, etc.)

+ +

Then, when an Identity is returned to the requesting application, any domain-specific additional claims would be added at that point (admin, view-permissions, edit-permissions, etc.).

+ +

Does this ""nested call"" approach sound reasonable, achievable and are there any major pitfalls that we've overlooked?

+",265175,,,,,43964.42431,Can IdentityServer or an alternative be used to bring together multiple authentication sources?,,0,0,,,,CC BY-SA 4.0, +410077,1,,,5/13/2020 12:13,,0,29,"

I'm working on a containerized API where, in two steps (one asynchronous and one synchronous), the user can interact with the output generated by the first step through a front-end service.

+ +

I wasn't entirely sure whether I should post it here or on SO, but since the question is more about the design than the implementation, I decided to post it here; let me know if you think it's more suitable for SO.

+ +

The flow looks as follows:

+ +
    +
  1. The user has requested some asynchronous job, for which they received a unique identifier.
+ +

The job produces a model that the user wants to test:

+ +
    +
  2. They send a POST request to a front-end service with the unique identifier and the custom data (JSON) with which they want to test the model.

  3. The front-end service starts a Kubernetes Job.

  4. The job's Init Container requests the model.

  5. The job's main container loads the model.

  6. The front-end service somehow sends a compute request with the user-supplied JSON to the job's main container.

  7. The job's main container performs a computation with the model and data. The response is passed on to the user via the front-end service.

  8. The pod stays up for some time so that when similar requests for the same model are received it doesn't have to spin up new pods every time.

  9. After some time the pod shuts down, finishing the job.
+ +

I'm having trouble with step 6 (and by extension step 8). As far as I know, pods created by a job can't be connected by a service. And even if that's possible, multiple requests for different models can occur concurrently, so the service has to be able to dynamically differentiate the pods.

+ +

The first iteration of this project was to let the back-end container dynamically load new models, but after review it was decided that this was not desirable, so now, in order to load a new model, the container has to be restarted and the Init Container retrieves the correct data.

+ +

My first thought was to let the back-end job send a request to retrieve the data, but that creates several problems:
+ 1. The front-end service has to store the request JSON in a database even though it's read only once, because the back-end request can be routed to a different front-end pod.
+ 2. How would the job know to request new data? (step 8)
+ 3. How are the results sent to the user?

+ +

The second thought was to forgo steps 8 and 9, let the job run to completion, and let the front-end read the job status and, after it's finished, read the logs. At least, that's how the example in the Job documentation does it. This would mean that the job logs must be reserved for output, though, which seems like bad design.

+ +

We can build upon this, though, and instead of writing to the logs, write to the database. This shares problem 1 of my first idea, in the sense that the database will contain read-once data, but so far this seems to be the only workable solution.

+ +

What are your thoughts? Is this the way to go, or do you perhaps have an entirely different way to encapsulate this behavior?

+",309139,,309139,,43965.23611,43965.23611,Sending and receiving data from a Kubernetes Job,,0,2,,,,CC BY-SA 4.0, +410081,1,410082,,5/13/2020 13:46,,0,725,"

If Python dictionaries are essentially hash tables, and the length of the hashes used in the Python dictionary implementation is 32 bits, that would mean that, regardless of the number of key-value pairs that actually exist in any given dictionary, the size of the dictionary would be static, fixed at 2^32 slots.

+ +

Such an implementation would be a waste of memory, so I'm assuming this isn't the actual space complexity. How is a Python dictionary actually implemented, and what is its space complexity?
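
+ +

A quick experiment showing that CPython dicts resize as they fill up, so memory grows with the number of entries rather than being fixed:

+ +
import sys
+
+# Watch the reported size jump at each internal resize.
+d = {}
+print(sys.getsizeof(d))              # size of the empty dict, in bytes
+for i in range(20):
+    d[i] = i
+    print(len(d), sys.getsizeof(d))
+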

+",364282,,,,,43964.58125,What is the space complexity of a Python dictionary?,,1,0,,,,CC BY-SA 4.0, +410086,1,410093,,5/13/2020 16:18,,0,55,"

In an MVC scenario, is it OK to pass arguments to functions from the View to satisfy different formatting needs?

+ +

For example: let's say that the model has a DateTime object and the view must show this date in various formats (sometimes as ""01-Jan-2020"", sometimes as ""Monday, 01-Jan-2020 08:00"", sometimes in some other way, possibly on the same screen).

+ +

Scenario 1:

+ +

The view receives a different variable for each format:

+ +
DateShort: ""01-Jan-2020""
+DateLong: ""Monday, 01-Jan-2020 08:00""
+
+...
+
+echo {{ DateShort }}
+echo {{ DateLong }}
+
+ +

Pros: the View is super-simple: no function call, just print variables (yay!). The available formats are centralized and consistent across the whole project (no hotshot printing the date based on their cultural preference)

+ +

Cons: slippery separation of concerns: displaying and formatting content should be the View's responsibility. This bloats the model exposing the DateTime object with extra, unrelated code. Strong coupling: to display the DateTime in another, new format, we need to change the model (!!)

+ +

Scenario 2:

+ +

The view receives the object directly, then it formats it as-needed:

+ +
Date: (object)DateTime
+
+...
+
+echo {{ Date.format('%D %M %Y') }}
+
+//variant: format is actually exposed as a constant
+echo {{ Date.format(Formats::DATETIME_FORMAT_DATELONG) }}
+
+ +

Pros: better separation of concerns, better decoupling.

+ +

Cons: calling functions from the View feels awful. What if, somewhere, the {{ Date }} variable doesn't have a format() method because it's of another type?

+ +

My feeling: the benefits of Scenario 2 vastly outweigh the cons. Still, it feels bad and can get very ugly, very fast: {{ Price.format_currency('EUR', {fraction_digit: 2}) }} becomes acceptable, which ""feels"" terrible.

+ +

Is there a best practice or SOLID rule I can apply to choose one way over the other with confidence?

+",165409,,,,,43965.43264,MVC (templates): is it OK to call functions with arguments from the View?,,1,1,,,,CC BY-SA 4.0, +410090,1,,,5/13/2020 16:57,,-3,62,"

So I want to find out how to get a list of all modifications made by installing a piece of software.

+ +

For example: if I install Word, I want to see all directories created, classes registered, DLLs added, modifications to drivers, etc.

+ +

Is there a way to do it?

+ +

In short, I want to know what my OS was before installation and after, and how it differs.

+",365642,,,,,43964.72639,How do I find out all modifications made by installing software?,,1,0,,43965.46458,,CC BY-SA 4.0, +410099,1,410104,,5/13/2020 20:48,,2,116,"

My job regularly involves writing and reviewing (in the form of pull requests) English prose in the documentation for our internal system, which solely faces other developers, in a strongly American firm spread around the world.

+ +

That writing falls into two categories - a documentation website, and documentation snippets in our API (e.g. Swagger docs for our REST API). I feel that both of these have a large effect on users' initial impressions of our system/product, and that initial impression is important to me. We do not have formal technical writers on the team, or indeed any in the firm that I'm aware of.

+ +

I'm a native English speaker, and I did OK in English in school. Over half of my team are not native English speakers. Obviously their English is infinitely superior to my skills in other languages, and they deserve credit for the English they do write and speak.

+ +

However, I'm often disappointed in the quality of English that I see in the documentation, including from native English speakers. I'm very comfortable reviewing code - drawing on static analysis, best practices, team guidelines, rules of thumb and personal experience. I'm significantly less comfortable reviewing documentation changes - I don't feel I have reference points or concrete evidence to rely on, just the gut feel that I've used ever since school. I often end up commenting on PRs with what I think the text should be, without much justification or logic.

+ +

On a bad day, that makes me question my own writing. Nobody ever comments on my English, so I don't get regular feedback like I do on my code. It also makes me feel that if I can't be sure of my own writing, I shouldn't be coaching junior developers on improving theirs.

+ +

I'm not sure if I'm just lacking the comfort and safety of a programming language and compiler - or whether there is genuine room for improvement here. How can I improve what I write, the constructive criticism I make on documentation, and ultimately the whole team?

+",338400,,,,,43965.37847,"How do I write, review and coach better English language in documentation?",,4,0,,,,CC BY-SA 4.0, +410105,1,410127,,5/14/2020 1:09,,-4,40,"

What would be the proper way to describe a system whose sole purpose is to give users access to large amounts of data in a structured way, according to their own criteria?

+ +

Say, for example, someone, for whatever reason, wanted to build a system that gave the user access to every permutation of the integers 0-9, but let the user narrow down the results by specifying that only sets containing four integers should be returned, that they want all permutations that don't include the numbers 7 or 3, or some other criteria.

+ +

What would you call such a system (or program)? An “information system”? Or a “data access framework”? Or something else...?

+",348754,,9113,,43965.54583,43965.71806,What to call a system that gives users customized access to data?,,1,1,,,,CC BY-SA 4.0, +410107,1,,,5/14/2020 2:52,,-1,138,"

New to Python and all things database-related. I'm wondering when I should consider using a database, and why. I have what is essentially a list of objects that is around 30000 lines long. I'm developing a web app which will need to search this list for the correct object depending on user input.

+ +

For more context, the list contains all valid and unique musical chord names, each with the attributes that make that chord what it is, e.g. notes and interval notation. The user will type in the name of the chord, and they will get back that chord's attributes.

+ +

I would like to know the quickest way of pulling an object from this list of objects by searching for the object's name, and how I can implement it in Python with minimal external libraries.

+ +

Currently, I'm just scanning through a .txt file, converting the matching line back into the proper Python object, and returning that.
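
+ +

One option I'm looking at, sketched roughly below, is to parse the file once into a dict keyed by name, so every later lookup is O(1); the Chord fields and the file format here are made-up placeholders:

+ +
# Rough sketch; the Chord fields and the file format are made up.
+from dataclasses import dataclass
+
+@dataclass
+class Chord:
+    name: str
+    notes: list
+    intervals: list
+
+def load_chords(path):
+    chords = {}
+    with open(path) as f:
+        for line in f:
+            name, notes, intervals = line.rstrip('\n').split(';')
+            chords[name] = Chord(name, notes.split(','), intervals.split(','))
+    return chords
+
+chords = load_chords('chords.txt')  # parse the file once at startup
+print(chords['Cmaj7'])              # O(1) lookup by name afterwards
+
+ +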

+ +

Any help is appreciated!

+",365674,,,,,43965.37778,When should I use a database instead of a list?,,2,1,,43965.76528,,CC BY-SA 4.0, +410121,1,410574,,5/14/2020 11:17,,-2,143,"

I am creating a microservice-based application which has to interact with GitHub through its API. Some of the microservices are Projects, Tasks, etc.

+ +

My question is: should I have a microservice dedicated only to communication with GitHub? Then, when I need to create a project in the Projects microservice, I would register it in my database and send an event to the GitHub microservice to create the project in GitHub (same for the Tasks microservice: send an event to the GitHub microservice to create the task).

+ +

Or should I do the communication with GitHub in each microservice, registering the project (or task) in my database and registering it in GitHub?

+",365699,,,,,43976.39097,Microservices with External Integration,,2,4,,,,CC BY-SA 4.0, +410122,1,,,5/14/2020 11:31,,0,38,"

I learnt that,

+ +

the reason context (P) was introduced in the Go runtime is that we can hand it off (with its LRQ of goroutines) to another OS thread (say M0) if the running OS thread (M1) needs to block for some reason.

+ +

+ +

Above, we see a thread (M1) giving up its context so that another thread (M0) can run it. The Go scheduler makes sure there are enough threads to run all contexts (P1, P2, P3, etc.).

+ +
+ +

The above model is an M:N threading model, where each OS thread (M1) running on a CPU core (C) is assigned a context (P) holding K goroutines in its LRQ.

+ +

vis-a-vis

+ +

A 1:1 threading model, where each core (C) has one OS thread (M), e.g. created via pthread_create().

+ +

Comparing the above two threading models, context switching of goroutines (in the M:N threading model) is much faster than context switching of OS threads (in the 1:1 threading model).

+ +
+ +

To understand the purpose of the context (P):

+ +

what is the advantage of handing off context (P1) to another thread (say M2) running on core (C2)?

+ +

Is the advantage about efficiency in re-using cache lines (L1/L2) on core C2 for the related set of goroutines sitting in the LRQ of context (P1)?

+",131582,,,,,43965.47986,GoLang concurrency model - Using context,,0,0,,,,CC BY-SA 4.0, +410123,1,,,5/14/2020 11:42,,-1,47,"

I am trying to design a wine cellar application for my father, and for fun in my free time. I've decided to make a simple MVC with Pixi.js / jQuery and Bootstrap served by Express for the front end, and a REST API with JAX-RS Jersey for the back end (so in Java).

+ +

The software is supposed to be used locally: hosted by the user, with only the user's cellar feeding the application with data.

+ +

I've studied ontologies, RDF triples and OWL during my semester, and I want to use them in the project to organize bottles of wine and the data related to them (like the color, the château, the grapes used and so on), but we never applied them to a web app.

+ +

I've been looking on the web how to use and apply ontologies in Java and I found Apache Jena which seems to be the tool I want to work with.

+ +

However, I have no idea how to organize and store the bottles of wine. Should I use only the XML file used by the OWL ontology, populated with the user-provided data? Or should I store the bottles of wine in SQL and populate my ontology from that data?

+ +

The first solution seems a bit weird because storing data in a single file does not seem efficient. And the second solution also seems a bit off because it duplicates the data.

+ +

I think my question comes from the fact that I do not understand how ontologies work within a web app, so I could use some of your help to clarify my thinking.

+ +

Thank you !

+",365694,,,,,43966.05833,Ontology + Relational bases or only ontology to store data in a web app?,,1,0,0,,,CC BY-SA 4.0, +410126,1,,,5/14/2020 12:55,,-5,23,"

I have an existing .NET server infrastructure that resides in a company's intranet. Now, for a new feature, I'd like normal users (employees) to receive notifications on their machines when specific events occur on the server.

+ +

What's the best way to accomplish this? Polling periodically is a no-go for security reasons. But for a push method to work, we would need a constant connection to the server, which gets complicated when there are too many users, disconnections and reconnections need to be handled, etc. Someone suggested using a push message broker, which essentially is the ""constant connection"" approach, but with the ""complicated"" stuff already handled by the broker.

+ +

Does anyone have an idea how to go about this?

+",320367,,320367,,43966.7125,43966.7125,Implementing an intranet software in which client machines get push updates from a server?,,1,1,,,,CC BY-SA 4.0, +410129,1,,,5/14/2020 13:34,,3,239,"

I'm working in a Scrum team. We have Sprint Planning 2 to break down the backlog into technical tasks.

+ +

The team is pretty big, around 12 developers. We can't split it, because that's not under our control.

+ +

We already have the architecture designed, but it may not cover everything, since the code base continues to evolve.

+ +

And when it comes to pull requests, many of them surprise me with unexpected designs.

+ +

We struggle with how much technical detail we should discuss or provide before the team starts working.

+ +

If we go into too much detail, the discussion may not stay accurate: we will see design changes once we get hands-on and implement it.

+ +

If we go with more autonomy and let people think up their own solutions, people come up with very different approaches compared to how it should be done.

+ +

So question is,

+ +

How much detail should we discuss in Sprint Planning 2 to make this better?

+ +

And what factors or approaches could help solve this problem?

+",200689,,,,,43969.42778,How much technical detail should we talk in Sprint Planning 2,,6,4,0,,,CC BY-SA 4.0, +410135,1,410350,,5/14/2020 15:53,,-2,126,"

I am doing a project for a client where I am getting my first really heavy, hands-on exposure to JavaScript.

+ +

Since I learned about adding .bind(this) to callback functions, I find I am doing it everywhere, and I wonder whether it is excessive, whether it's good practice, or whether I am structuring my code badly.
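
+ +

For illustration, here is the kind of thing I mean (every name is invented): a class method passed as a callback loses its this unless it is bound, and an arrow function is the common alternative:

+ +
// Invented example of the pattern in question.
+class Poller {
+  constructor() {
+    this.count = 0;
+  }
+  tick() {
+    this.count += 1;  // 'this' must be the Poller instance
+  }
+  start() {
+    // Without bind, 'this' inside tick would be undefined here:
+    setInterval(this.tick.bind(this), 1000);
+    // Arrow-function alternative that avoids bind:
+    // setInterval(() => this.tick(), 1000);
+  }
+}
+
+new Poller().start();
+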

+ +

Is it normal to have .bind(this) on nearly every callback?

+",25420,,1204,,43970.57153,43970.57153,Is frequent use of bind(this) in Javascript a code smell?,,1,3,,,,CC BY-SA 4.0, +410136,1,410246,,5/14/2020 16:13,,1,213,"

I just had an argument with my professor about surrogate key usage in my project's database design. My professor insisted that a primary key (natural or surrogate) should never be exposed, even in the URL, and that using an auto-increment as the primary key is bad practice because of something like the German tank problem.

+

I argued that the table, for example the USER table, doesn't have a natural key by design, because it doesn't store unique stuff like email, and that using it in the URL should be okay (is this considered "exposing"?) because there is an authorization step to check whether the user is authorized to access the resource. I also argued that using auto-increment should be okay for a project at a scale that won't need a database merger (which is usually when auto-increment becomes a problem). Even big software like Oracle uses auto-increment (sequences).

+

But my professor won't acknowledge it, and even brought column naming into the argument. My design is that the ID column of a table is named just ID, not [tablename]_ID, because it should be clear that the column is the ID of that table. So, for example, if I had a table called USERS with an ID column and a table called PROFILES with a USER_ID column, it should be clear that USER_ID is related to USERS.ID (I presented the ERD drawing). But my professor insisted that I should use [tablename]_ID, for reasons I no longer even understand, because my professor just keeps saying that people who see the design won't know which ID is for which table. Isn't that the point of the ERD drawing?

+

I'm quite bothered by it, so is there any "academic" or "practical" reason why my design "should" be changed? I feel that my professor is only arguing from his/her own knowledge/experience, because his/her reasoning doesn't quite click with me.

+

Edit: Thank you everyone for all the input. I'll learn more about how to work around the exposed ID in the URL, and I have to agree that using only ID as the column name can be confusing for a lot of people.

+",314344,,314344,,44017.76042,44018.52361,Academic question about table design,,7,10,,,,CC BY-SA 4.0, +410138,1,,,5/14/2020 17:07,,0,209,"

We currently have a monolithic code base from which we are in the process of extracting some microservices where it's obvious to do so.

+ +

One thing that stands out is our email delivery. We have numerous points in the code base where emails are created and sent out via a number of different templates. What I want to do is take this load and move it to a queuing system. Going forward, the user-facing web app would simply send a message to the queue stating, for example, that a project has been won.

+ +

So what I'm trying to work out is where the data for the email (which can be a fair amount at times) should come from. Should we be packaging that data up with the message, or should the queue handler look at the type of email that needs to be sent and then go and get that information at that point?

+",43458,,,,,43965.91736,Email via a Microservice,<.net>,1,1,,,,CC BY-SA 4.0, +410139,1,410153,,5/14/2020 17:12,,1,52,"
+-------------+         +--------+        +----------+
+| repository  +-------->+service +------->+controller|
++-------------+         +-^------+        +------------+
++-------------+           |  +^  ++------>+controller2 |
+| repository2 +-----------+  |    |       +------------+
++-------------+              |    +------>+controller3 |
++------------+               |            +------------+
+| repository3+---------------+
++------------+
+
+ +

I want to be able to support multiple different implementations of my repository interface. I use my interface in my business service and I can select whatever implementation I want. So far so good.

+ +

When I try to insert multiple entries with the same key, the DB connector will throw an exception. In the case of JdbcTemplate this will be a DuplicateKeyException; in the case of Hibernate it will likely be some kind of constraint exception. Not ideal, but still good enough.

+ +

I must be able to tell the service consumer what exactly went wrong and why, and passing the exception through is not good enough (as in a plain 500 response). Getting muddy.

+ +

So my options are either to catch that exception in the controller, which I would then have to implement in each controller (muddy waters, it seems to me), or to throw some kind of exception in the business service (which seems better) and then handle that in the controller. I think this is better because then I'll be catching my own, one-of-a-kind exception, and I'll be sure what's going on.

+ +

The problem is that most of my business service implementation actually implements an existing interface, which means that in order to do this, I would have to override the interface I'm using with new declarations that include exceptions. This is not what I want.
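
+ +

For what it's worth, here is a rough sketch of the translation I have in mind, using an unchecked exception so the existing interface does not have to change; the Entry* names are placeholders for my real types:

+ +
import org.springframework.dao.DuplicateKeyException;
+
+// Unchecked, so the existing service interface needs no new throws clause.
+class DuplicateEntryException extends RuntimeException {
+    DuplicateEntryException(String message, Throwable cause) {
+        super(message, cause);
+    }
+}
+
+interface EntryRepository { void insert(String id); }   // placeholder
+
+class JdbcEntryService {                                 // placeholder service
+    private final EntryRepository repository;
+    JdbcEntryService(EntryRepository repository) { this.repository = repository; }
+
+    public void create(String id) {
+        try {
+            repository.insert(id);
+        } catch (DuplicateKeyException e) {
+            throw new DuplicateEntryException("entry already exists: " + id, e);
+        }
+    }
+}
+

+ +

A single Spring @ControllerAdvice with an @ExceptionHandler for DuplicateEntryException could then map it to a 409 for every controller at once.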

+ +

What is the proper way to handle this kind of situation? I'm using Spring but I would like a more general answer.

+",365730,,,,,43965.99653,Where to handle duplicate key exceptions in multy layer application,,1,6,,,,CC BY-SA 4.0, +410144,1,,,5/14/2020 20:34,,3,198,"

Based on many tutorials that I have read, the following is the definition of Polymorphism:

+ +
+

Polymorphism is the ability of an object to take on many forms

+
+ +

Now let's assume that we have an Animal parent class, and Dog and Cat child classes.
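
+ +

In code form, a minimal sketch of that hierarchy (the speak method is just made up for illustration):

+ +
// Minimal sketch; speak() is invented for the example.
+class Animal {
+    String speak() { return "..."; }
+}
+
+class Dog extends Animal {
+    @Override String speak() { return "Woof"; }
+}
+
+class Cat extends Animal {
+    @Override String speak() { return "Meow"; }
+}
+
+public class Demo {
+    public static void main(String[] args) {
+        Animal a = new Dog();            // an Animal variable holding a Dog
+        System.out.println(a.speak());   // prints Woof
+        a = new Cat();                   // the same variable, another form
+        System.out.println(a.speak());   // prints Meow
+    }
+}
+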

+ +

Does the above polymorphism definition mean that an Animal variable can have many forms, in the sense that an Animal variable can hold an Animal, a Dog, or a Cat, or does it mean something else?

+",280327,,,,,43965.9,"What is meant by ""Polymorphism is the ability of an object to take on many forms""?",,3,2,,,,CC BY-SA 4.0, +410151,1,,,5/14/2020 22:10,,1,31,"

I have a system in which some data is inserted into a database. Whenever data is inserted, subscribers should be notified (it doesn't have to happen immediately after inserting; there is a time limit, so I can wait, say, 10 s and then notify about the X records inserted).

+ +

The problem is that each subscriber may have a filter. So let's say a new record was inserted into the database and it is a product of type shoes. I have 1,000 subscribers who are interested in that, so they should receive an email with that information, but some of them have a filter, e.g. only if it is size 10.

+ +

What would be the best design for that, to make it scalable?

+ +

Right now I have a process that keeps looking for new records in the database; for each one, it iterates through all subscribers to work out whether that user should be notified, and if yes, I send a message to the queue with the record and subscriber ID so that another component will receive it and send it to that user.

+ +

This way I can have X components looking for new data and Y components receiving events from the queue and sending notifications to users, so it can be scaled out, but I still think it is not the optimal way of doing this.

+ +

Thanks for the help.

+",333144,,,,,43996.00208,optimal way of dealing with subscriptions,,1,1,1,,,CC BY-SA 4.0, +410155,1,,,5/15/2020 0:39,,0,72,"

This is my first UML diagram, for a user registration flow.

+ +

After double-checking the diagram, I feel that it is crowded compared to what I can find in a Google search.

+ +

My questions:

+ +

Is my usage of alt and else branches correct?

+ +

Is it valid to include error responses in the sequence diagrams?

+ +

Is it valid to include the notes I have included?

+ +

Here is my Sequence Diagram design:

+ +

+ +

And here is my Sequence Diagram code:

+ +
@startuml
+
+actor Client
+participant Controller
+participant UserRepository
+participant UserProfileRepository
+database MongoDB
+
+Client -> Controller ++ : Registration Request
+note left
+Http Method: POST
+Token Type: CREDENTIALS
+Token: EMAIL:PASSWORD
+end note
+
+Controller -> Controller : Validate token type,\nemail, password\nand email not already\nused
+
+alt if all are valid
+Controller -> UserRepository ++ : Create new user
+UserRepository -> MongoDB : Save created\nuser in DB
+UserRepository -> Controller -- : Return created user
+
+Controller -> UserProfileRepository ++ : Create new user profile
+UserProfileRepository -> MongoDB : Save created user\nprofile in DB
+UserProfileRepository -> Controller -- : Return created user profile
+
+Controller -> Client : Return user id
+note left
+Http Status : 201 Created
+User id returned in Header
+Location: /users/USER_ID,/usersProfiles/USER_ID
+end note
+else if invalid token
+Controller -> Client : invalid request
+note left
+Http Status : 400 Bad Request
+end note
+else if invalid email
+Controller -> Client : invalid email
+note left
+Http Status : 400 Bad Request
+end note
+else if invalid password
+Controller -> Client : invalid password
+note left
+Http Status : 400 Bad Request
+end note
+else if email already used
+Controller -> Client -- : user already exists
+note left
+Http Status : 409 Conflict
+end note
+end
+
+@enduml
+
+",46238,,,,,43966.40347,User registration flow UML diagrams,,1,1,,,,CC BY-SA 4.0, +410158,1,,,5/15/2020 4:20,,1,74,"

I am having trouble trying to represent a multithreaded process, and I would appreciate some advice.

+ +

Please let me explain this with code (in C++-ish, because that is what I am familiar with, but consider it just pseudocode; it is easy to understand).

+ +

Say I have two classes MainClass.cpp and ElementClass.cpp

+ +

the MainClass has as an element an object of ElementClass

+ +
class MainClass{
+
+  MainClass() : myElement() {}  // myElement is constructed when the MainClass is constructed
+
+ void method1()
+ {
+  myElement.method1();
+ }
+
+  void method2();
+
+  ElementClass myElement;
+};
+
+ +

and

+ +
class ElementClass{
+
+   // the constructor starts a second thread running separateThreadMethod
+   ElementClass() : thread_handler(&ElementClass::separateThreadMethod, this) {}
+
+   void method1();
+
+   void separateThreadMethod()
+   {
+     // calls MainClass::method2() on the owning object
+   }
+
+   std::thread thread_handler;  // requires <thread>
+};
+
+ +

So, as you can see, when an object of MainClass is constructed, its myElement is also constructed, and that initiates another thread that runs separateThreadMethod.

+ +

The MainClass object is kept spinning so its method1 can be called at any time. This also calls myElement.method1()

+ +

So as you can see there are two threads. However they both use parts of MainClass and ElementClass.

+ +

Now I want to represent this in a Diagram and I am wondering how.

+ +

Initially I used Activity Diagrams and I put different partitions for each thread in ElementClass. I also put a different partition for MainClass.

+ +

But MainClass has methods that are called by both threads so in the partition for MainClass there is not a clear line of process.

+ +

Is there a better way to represent this? Perhaps eliminate representing the classes and just represent the threads?

+ +

Any advice will be greatly appreciated.

+",296531,,,,,43968.44722,How to represent this multithread application?,,1,0,,,,CC BY-SA 4.0, +410162,1,,,5/15/2020 6:21,,0,28,"

I have an open source API server project on GitHub and I am trying to put it under continuous integration. The project uses SQLAlchemy, and I use ERAlchemy to manually generate a database relationship graph in PNG format and put it into my documentation (in .rst format, uploaded to Read the Docs).

+ +

Now that I am using Travis CI, I am wondering whether I can copy the PNG file from my Travis CI build result and put it into Read the Docs, so that I no longer need to generate the ER diagram manually.

+",358768,,,,,43966.34722,Use CI to generate ER diagram and put it into readthedocs,,1,0,,,,CC BY-SA 4.0, +410164,1,,,5/15/2020 6:49,,1,80,"

As a creator of a software library, how can I verify backward compatibility with earlier versions?

+ +

When using dependency management (here: Maven), multiple versions of my library could be (transitively) in use:

+ +
com.example:some-project:jar:1.0.0
++- com.example:my-library:jar:1.1.0:compile
++- org.example:another-library:jar:1.0.0:compile
+   \- com.example:my-library:jar:1.0.0:compile
+
+ +

As you can see, some-project is directly using my-library, but my-library is also a transitive dependency of another-library, in an older version. Maven will include version 1.1.0 (see dependency mediation), and I want to ensure backward compatibility between different minor versions so this is guaranteed to work (no accidental API changes from 1.0.0 to 1.1.0).

+ +

Are there any common practices to verify compatibility? All I can think of is some kind of smoketest script that builds such scenarios and tries to run them.

+",38735,,38735,,43966.40278,43966.47986,Backward compatibility testing,,1,3,,,,CC BY-SA 4.0, +410173,1,,,5/15/2020 8:48,,0,211,"

I'm building my project with Visual Paradigm and I have some use cases that implement the CRUD pattern.

+ +

As specified in the book Use Cases: Patterns and Blueprints, Övergaard and Palmkvist suggest implementing a single use case as one of the best ways to handle this type of use case. There is a different flow for each action: one operation is considered the main flow, the others extending flows.

+ +

The question is: considering that I'm using Visual Paradigm, what is the best way to write the corresponding sequence diagram for such a use case?

+",365773,,,,,43967.58611,CRUD use case and relative sequence diagram,,1,4,,,,CC BY-SA 4.0, +410176,1,410210,,5/15/2020 9:46,,1,93,"

I have no experience with C (only C++ and higher-level languages). I have tried and failed to find general guidelines on how to write good C code in a way that separates the program logic cleanly into replaceable parts that can be combined with other parts or reused in other projects.

+

To give some context:

+

I will have two parties communicating over some channel (e.g. TCP, a serial port, etc.) that both have to perform computations on the received messages and send their results back and forth many times. Both the exact nature of the computations and the type of communication may need to change. For this reason, I want to separate the communication-and-peripherals part from the computation-logic part into "modules" (whatever that might mean exactly in terms of the implementation), which I will call the "peripheral module" and the "protocol module". The communication is asymmetric; I will call the parties "Master" and "Slave", which (I think) should both be implemented as the same single compiled program, selecting Master/Slave behaviour as specified by user input.

+

The kind of thing I'm imagining is as follows: the "Master" process starts in its "peripheral module", which establishes a connection with the "Slave" process and initializes/fetches some data. It passes the data to the "protocol module", along with (as general as possible) instructions on how it can send messages to (and receive from) the "Slave" process. The protocol module takes over and sends messages to the "Slave" process, where they are received by its "protocol module", and communication continues. During execution, the "protocol module" should be able to send debug/status information back to the "peripheral module", which may or may not run as a separate thread (I currently don't think that is absolutely necessary). The number, length and structure of the messages sent depend on the "protocol module", as does the exact nature of the status/debug information it can provide. There may be several different versions of the "peripheral module", designed to run on different OSes and possibly also integrated devices, that should all be able to use any "protocol module" implementation. There may also be several different implementations of the "protocol module".

+

My current implementation ideas:

+

Make a header file "interface.h" that acts as an interface between the "protocol module" and the "peripheral module", to which all implementations of each module should conform. This header file declares functions (all with external linkage) that look something like the list below, returning error codes whose meanings are also defined in "interface.h" (a rough sketch of the header follows the list):

+
    +
  • int recieve_data(char* data, int num_bytes)
  • +
  • int send_data(char* buffer, int num_bytes)
  • +
  • int status_info_output(char* buffer, int num_bytes)
  • +
  • int begin_protocol(char* initial_data, int num_bytes, SOME_STRUCT config)
  • +
+
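
+ +

To make that concrete, here is a rough sketch of what such a header could look like; the struct fields and error codes are placeholders, not a finished design:

+ +
/* interface.h -- rough sketch; struct fields and error codes are placeholders */
+#ifndef INTERFACE_H
+#define INTERFACE_H
+
+typedef struct {
+    int timeout_ms;      /* placeholder field */
+    int max_msg_size;    /* placeholder field */
+} protocol_config;       /* stands in for SOME_STRUCT */
+
+/* error codes shared by both modules */
+#define IF_OK      0
+#define IF_ERR_IO -1
+
+/* defined by the peripheral module, called by the protocol module */
+int recieve_data(char* data, int num_bytes);
+int send_data(char* buffer, int num_bytes);
+int status_info_output(char* buffer, int num_bytes);
+
+/* defined by the protocol module, called by the peripheral module */
+int begin_protocol(char* initial_data, int num_bytes, protocol_config config);
+
+#endif /* INTERFACE_H */
+
+ +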

The functions recieve_data, send_data and status_info_output are defined by "peripherals.c" and called from "protocol.c", while begin_protocol is defined by "protocol.c" and called from "peripherals.c". The module "protocol.c" will have global variables defining configuration parameters (with static linkage and perhaps visible access methods, declared in a "protocolXY.h" header file that is unique to each "protocol module" implementation).

+

The issue is where and how to define SOME_STRUCT, since its exact content will be different (with large overlaps) for different implementations of "protocol.c". Also, the "protocol module" will define message headers containing information that needs to be monitored during program execution. I'm not sure if status_info_output is versatile enough to provide such functionality. I'm also not sure how to include status messages and commands such as "the user interrupting communication during execution" in a way that is cleanly communicated to the other party and can be implemented independently of the "protocol module" (i.e. will most likely not differ between different implementations of it).

+

What bothers me further is that, from a design perspective, the "protocol module" could be viewed as some sort of static library (it doesn't have to be linked as such), and I have never seen library headers demanding that the user (here the "peripheral module") define certain functions. I considered a scheme where begin_protocol is passed function pointers to tell it how to communicate, but this seems more convoluted and perhaps bad for performance optimization (which may be important for the integrated system).

+

Can anyone provide some advice on how to structure such a project in C, and on common practices for writing modular programs (and perhaps libraries)?

+

Thanks in advance!

+",365780,,-1,,43998.41736,43967.34444,Modular programming with C: Separate device logic from communication protocol logic,,1,13,,,,CC BY-SA 4.0, +410182,1,,,5/15/2020 12:27,,-4,43,"

What are the different types of SDLC models, and which kinds of systems is each most suitable for?

+",365792,,365792,,43969.39583,43969.39583,SDLC models example?,,1,6,,43966.55903,,CC BY-SA 4.0, +410184,1,,,5/15/2020 13:04,,0,176,"

I've recently started a hosted app project for fun. It is effectively an application for playing tabletop card dealing/guessing/team games remotely. This is for games with custom sets of cards (Balderdash, Articulate, Linkee, Chameleon, Boggle etc.). It's 2020, you get the idea.

+ +

There is an (admin/dealer) user who takes pictures of the cards and uploads them; the cards can then be dealt to the players when the dealer performs that action.

+ +

This all runs on a single Node HTTP server: the players connect to the App with a socket.io connection, the AppObserver pushes changes to the user, and the AppMediator receives commands from users (like deal, or draw from pile, etc.). The resources are served from an Express instance. There is no persistent storage or external service involved.

+ +

When the admin wants to upload card images, this goes through a POST multipart form API handled by multer middleware. A rough diagram of what is going on is below:

+ +

I want the upload middleware (multer) to feed the new resource info directly to the App instance. What are sensible methods to do this?

+ +

The app needs to know the paths to the uploaded images to create objects for the App to manage from here on, but I can't trust the user to route the POST response back through the socket and ActionMediator (Sending a response to the POST upload, for the user to send the new cards back).

+ +

I need to know the User instance (within App) that the data came from, in order to add the new resources to the correct user. However, I think this request must go via the ActionMediator to ensure the user has permission to upload resources.

+ +

I can't just emit an upload event on the socket.io namespace since all users would receive it.

+ +

Can I send the socketId in the POST request, figure out which ActionMediator is relevant, and then internally call that action? I think this might mean the user could inject data too, since the socket data streams directly to this. (A rough sketch of this idea follows.)
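
+ +

Roughly what I have in mind, with every name invented for the sketch: track live sockets on connection, and reject any upload whose socketId is not in that map:

+ +
// All names here are invented for the sketch.
+const express = require('express');
+const multer = require('multer');
+const http = require('http');
+
+const app = express();
+const server = http.createServer(app);
+const io = require('socket.io')(server);
+const upload = multer({ dest: 'uploads/' });
+
+// Track live sockets ourselves so a forged socketId can be rejected.
+const liveSockets = new Map(); // socketId -> per-user state
+io.on('connection', (socket) => {
+  liveSockets.set(socket.id, {});
+  socket.on('disconnect', () => liveSockets.delete(socket.id));
+});
+
+app.post('/upload', upload.single('card'), (req, res) => {
+  if (!liveSockets.has(req.body.socketId)) {
+    return res.sendStatus(403); // unknown or forged socketId
+  }
+  // Only multer's own file path is trusted, not user-supplied content.
+  io.to(req.body.socketId).emit('cardUploaded', { path: req.file.path });
+  res.sendStatus(201);
+});
+
+server.listen(3000);
+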

+ +

I could apply a restriction on some actions to be internal-only; is this wise?

+ +

Is the socket.io SocketId an acceptable thing to send as a POST field? I assume the Client and Server have the same value.

+ +

Is there something I'm missing that would make this a whole lot easier? Maybe a separate ResourceMediator?

+ +

Current setup

+ +

The POST request's source is unverified, and it just calls a member function of the App, which updates an internal list of resources. This then propagates to the admin user, regardless of whether the admin user actually sent the POST request.

+ +

This is obviously problematic, given that uploads are just accepted without knowledge of the associated user; and since I want the App to have multiple instances, I need to identify which App instance to add resources to in the future.

+ +

I consulted this relevant question as well: +https://stackoverflow.com/questions/32532271/is-a-node-js-app-that-both-servs-a-rest-api-and-handles-web-sockets-a-good-idea

+",365791,,,,,43966.54444,Express HTTP and socket.io - User upload of image and pass success of upload through to app instance,,0,0,,,,CC BY-SA 4.0, +410187,1,410189,,5/15/2020 13:48,,-3,66,"

I am working on a problem where I need to extract data from text. I first considered using regular expressions, but some of the data is not in a format I am sure how to handle, and I am not even sure regex is the best way to handle it. Some lines are simply [fieldname]: value \newline. Unfortunately, others have nested data, such as contacts. Here is an example:

+ +
Contacts: last update 11/30/2015 10:25 AM (PST)
+
+          Dispatch and Operations: Mike (Dispatcher) (Primary Contact)
+          Phone: 111-111-1111          Fax: 111-111-1111
+          Email: test@yahoo.com
+          Owner or Officer: Jane Doe (President)
+          Phone: 222-222-2222          Fax: 111-111-1111
+          Email: test@yahoo.com
+
+SERVICES: last update 11/12/2016 03:41 PM (PST)
+
+ +

You will see it has a starting piece of text I can find, but I only want the two contacts below it, excluding the last-update time. Additionally, the first line of each contact is their title, not something I can count on for pattern matching since it is a freeform field. Now, I could go line by line, but this would mean I need to hard-code this knowledge into my code. I tried Googling it, but I haven't been able to find something that addresses this issue. So, what I am hoping is that someone can help me with some direction to get back on track.
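
+ +

For reference, here is my rough Python attempt at just the block extraction; the ALL-CAPS section-header pattern is an assumption based on the sample above:

+ +
import re
+
+with open('sample.txt') as f:
+    text = f.read()
+
+# Skip the "Contacts:" header line (with its update stamp), then capture
+# lazily up to the next ALL-CAPS section header such as "SERVICES:".
+pattern = re.compile(r'Contacts:[^\n]*\n(.*?)(?=\n\s*[A-Z][A-Z ]+:)', re.S)
+match = pattern.search(text)
+contacts_block = match.group(1).strip() if match else ''
+print(contacts_block)  # the two contacts, titles kept, stamp excluded
+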

+",5804,,-1,,43966.87431,43966.87431,Best Way to Extract Different Types of Data from Text File,,1,0,,,,CC BY-SA 4.0, +410188,1,,,5/15/2020 13:56,,3,129,"

I am traditionally a desktop app developer, but circumstance has thrust me into the role of doing the web client and corresponding REST API logic for a project I am involved in. Unfortunately, I'm a one-man show, so my opportunities to learn new patterns or techniques from co-workers are somewhat limited. When I was ramping up, I had the opportunity to work (briefly) with a contractor who exposed me to the idea that my server REST logic should be separated into a Controller (where the actual GET/PUT/POST/DELETE methods live) and a Service that does the heavy lifting. As it was explained to me, the Service might further interact with one or more Providers or Stores.

+ +

My understanding is that a Provider would wrap logic that interacts with another system, maybe another web API, a weird bit of legacy code, or some proprietary bit of hardware (like a temperature gauge, for example). Additionally, a Store would wrap the CRUD logic for actual data objects to SQL, NoSQL, text files, whatever.

+ +

Assuming all this makes sense, and is indeed how the pros do it, he further advised me to incorporate the naming into my classes, like this:

+ +

PizzaController might proxy the received web API calls to the PizzaService, which in turn could talk to both the PizzaProvider and the RefridgeratorStore.

+ +

I'm not 100% positive this is how the real world does things - but it made enough sense to me to sound credible and I've generally adopted this pattern and so far it has worked well enough to organize my logic.

+ +

But this is where a few questions comes in:

+ +

First, is this view of the separation of my classes really how others structure their code? And if I am close, but not quite, what corrections should I be making?

+ +

Second, is it legitimate for one Service to instantiate and leverage a second Service? For example, what if my PizzaService has to decide whether we want delivery or are going to make a pizza from scratch - it might want to either invoke the PizzaProvider -or- simply defer to the PizzaMakerService. If the PizzaService doesn't make this decision, then the decision logic would have to live earlier in the food chain (no pun intended). That would imply my PizzaController would have to decide whether it should use the PizzaService -or- the PizzaMakerService, and that doesn't smell right to me.

+ +

And finally (following the pattern I was shown), my Services frequently return a data object back to my Controller, where the Controller maps one or more properties into a ViewModel that is returned to my client. I've found that I could just as easily map the relevant bits of data into an anonymous object (C#) on the fly and return it to my client. The returned JSON is the same, so why introduce the class definition for a ViewModel at all? Is there a taboo against just crafting an anonymous object in the Controller and returning it?

+ +

I realize (in my situation) I can pretty much do anything I want - how I name classes, how I separate logic, if I use anonymous objects - it's really entirely my code. But these questions have been nagging at me for a while, and I'd like to be doing things as close to 'correctly' as possible. It's likely these questions (or a variation) have been asked and answered before, so I'll apologize now for any duplication - but for the life of me I can't seem to find any direct answers.

+ +

Thanks!

+",365795,,250856,,43988.06319,44168.21111,"Separation of concerns and other best practices across Controllers, Services, Providers and Stores in ASP.NET when building a REST web api",,1,1,0,,,CC BY-SA 4.0, +410192,1,,,5/15/2020 16:01,,0,56,"

We have a legacy project on Windows. It has been going on since the 1990s.

+ +

Until very recently it was not backed by any version control; today it has been moved into Git.

+ +

The question is about all the prior snapshots. As of now, they are spread across network shared folders, with folder names usually suggesting the part of the project, the date or product version, and often the specific programmer from whose development machine it was copied.

+ +

Needless to say, this takes up a lot of disk space, mostly on many copies of exactly the same files, and today, when Covid-19 makes many of us work from home, that is becoming a problem.

+ +

Those legacy sources are rarely needed, but sometimes one just needs to look for a specific identifier (variable, function, file, etc.) or even a statement, to find when and why it was introduced.

+ +

Copying them onto my home machine would be slow and would occupy too much of my local disk. Same for other developers. Even if I Zipped (or PPMd'd) the files on the server, it would still be a big blob to copy over RDP, and I would not be able to search for text in that archive without a speed impact, or without unpacking it first, which would take a huge hit on local disk space on every developer machine.

+ +

I cannot move them into the existing Git repository: when I set it up for my own convenience, I did not even know about those archives, much less have access to them.

+ +

Anyway, Git is not a solution here. Git (or another DVCS) will do data de-duplication and will greatly decrease the disk space required, indeed. But Git maintains a ""one true current snapshot"" model, which is useful for development but not for ""industrial archeology"" (yes, I can set up several repositories and switch them to different branches, but that defeats the idea). I just cannot run a text search through ALL the snapshots stored inside a Git repo, across all their branches and revisions/commits, with the same ease that I can run grep (or a similar GUI tool) over thousands of mostly plain-text files, even on a shared network folder.

+ +

Additionally, I do not have the information to recover the ""real"" scheme of all the ""branches"" that were happening among people I never met. While guessing the timestamps of every folder is doable, even if time-consuming, inventing some formally correct relationship of snapshots like ""from John"" and ""from Mary"" is not.

+ +

Additionally, putting files into Git will lose their timestamp metadata. While that is not needed much for sources that were in a DVCS from the get-go, for pre-VCS sources it has a lot of meaning, even for merely guessing which modules were alive and which were derelicts that people just could not practically delete without VCS.

+ +

So, how do people manage situations like this?

+ +

I think I need a system that:

+ +
    +
  • runs on Windows
  • +
  • deduplicates files
  • +
  • maintains original paths, names and timestamps
  • +
  • has useful, easy-to-learn FTS without restricting me to one ""current"" snapshot out of many stored
  • +
  • preferably FTS available as the usual ""Search text in files"", with no practical performance hit compared to just storing files and folders ""as is"" (so a Zip archive is not an option, and it has no deduplication anyway); this means zero learning curve and freedom to use any grep-like and diff-like tools one might like
  • +
  • does not require formal relationships from one snapshot to another, though optional timeline relations would be a good bonus. It would be okay to have no concept of snapshots at all; that would be no worse than what we have today
  • +
  • can be easily and efficiently copied over the relatively slow network (so even if there were some natively de-duplicating filesystem for Windows Servers, it would not help much: it would be problematic to copy the data over the network without re-duplication, and there would probably be no such filesystem on non-Server development machines)
  • +
  • low maintenance burden
  • +
  • preferably detects and fixes, or at least warns about, random data degradation events (bit flips on HDD/SSD, etc.)
  • +
  • no need to modify data after initially storing it; actually, it had better be strictly read-only
  • +
+ +

I think that, hypothetically, it could mostly be implemented on top of NTFS 5 symbolic links, and perhaps someone has already done it?

+ +
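
To make that idea concrete, here is a rough sketch of the direction I am imagining - content-hash every file, keep one copy per hash, and hard-link the duplicates (Python, stdlib only; the store path is made up and would have to live on the same NTFS volume as the snapshots):

+ +
import hashlib
+import os
+
+STORE = r'D:\snap-store\objects'   # hypothetical content-addressed store
+
+def dedup_file(path):
+    # Hash the content; keep one copy in the store and turn every
+    # duplicate into an NTFS hard link pointing at that copy.
+    h = hashlib.sha256()
+    with open(path, 'rb') as f:
+        for chunk in iter(lambda: f.read(1 << 20), b''):
+            h.update(chunk)
+    target = os.path.join(STORE, h.hexdigest())
+    if os.path.exists(target):
+        os.remove(path)            # duplicate content - drop this copy
+    else:
+        os.makedirs(STORE, exist_ok=True)
+        os.replace(path, target)   # first occurrence - move it into the store
+    os.link(target, path)          # the original name and path stay grep-able in place
+

+ +

One caveat I can already see: hard links share a single set of file timestamps, so where two copies of a file differed in their dates, the per-copy timestamps would have to be recorded somewhere before linking.

+ +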

Or maybe I just fail to think outside the box and see a different workflow that would be easy to learn and use.

+",180028,,180028,,43966.675,44117.36111,How to store legacy source texts snapshots with FTS (full text search) all through their history?,,2,4,,,,CC BY-SA 4.0, +410196,1,,,5/15/2020 18:39,,1,84,"

We have an application that has served us and our clients well for some 20 years now. Pretty good track record, but it's obviously showing its age in some areas.

+ +

We are looking for advice and suggestions for bringing the product to the next level. The idea is to address some issues without throwing away or rewriting tens of thousands of lines of code.

+ +

Here is what we have today: The application server side (actually dozens of programs and CGIs) runs on Linux (CentOS specifically) where interactive operations are handled via browsers (lots of javascript/jquery, some ajax calls, mostly server side rendering). The server side is almost exclusively written in C. We have our own proprietary forms engine that handles data retrieval, posts and launching of programs. MySQL is used as our database.

+ +

The application is financial in nature and handles 100's of thousands of transactions and customer records. The core of the product is solid. By core I'm talking about our business logic, batch processing, etc. It's the interfaces and the periphery where we need retooling.

+ +

We could use a more granular access/permissions model and better access to data and processes through APIs, and we need to be able to develop these components more rapidly than is possible with our current system.

+ +

To preserve our investment and not fix what is not broken, we are investigating what changes can be made to address these concerns but still keep most of the libraries and programs written in C.

+ +

One idea we have is to replace our front end with a Node.js based application. The application will initially provide API (Rest based json data) services for read and update of data as well as business logic process endpoints. It would also manage access to data (read and write) and processes via some roles based permissions model.

+ +

Eventually the existing interfaces will be redone to use these APIs and possibly do more client side rendering and better interactivity via socket-io and client side javascript frameworks.

+ +

Finally, my questions:

+ +
    +
  • Does this idea have merit?
  • +
  • If not, what other approaches should we consider?
  • +
  • What other considerations are there for this project?
  • +
  • Will Node.js used in this fashion scale?
  • +
  • What server side js framework should we consider? Express seems promising but maybe some other more ""opinionated"" framework would be better.
  • +
+ +

For read-only operations, I expect that all server side activity will be handled in the Node.js application. For business logic, and maybe database writes, I expect we will hand off the processing to a separate program. We assume the async/await ability will allow this without performance degradation. Add-on integration might be possible, but as our libraries aren't thread safe that might have limited usefulness.

+ +

Thanks in advance for your comments/ideas/suggestions.

+",365821,,,,,43967.52431,Modernizing legacy application,,1,7,,,,CC BY-SA 4.0, +410199,1,410206,,5/15/2020 21:36,,2,136,"

My question

+ +

I built an inverted pendulum on an Arduino using C (ie. everything was done procedurally). I'm trying to self study application design and would like to refactor my code into a more OO approach with SOLID, loose coupling, and testability in mind.

+ +

How can I improve this design to achieve a more loosely coupled system and better testability?

+ +


+ +

UML

+ +

+ +


+ +

Brief summary of classes

+ +

MotorController - An interface for motor controllers.

+ +

DrokL928 - Is the motor controller I use, implements MotorController.

+ +

Cart - Is just a convenient wrapper for DrokL928. Allows for more intuitive control of the cart.

+ +

Encoder - An external library for reading values from rotary encoders.

+ +

EncoderWrapper - A wrapper for Encoder. That way I encapsulate the external API into one place.

+ +

StateVector - Holds the current state data.

+ +

StateUpdater - Processes encoder values from EncoderWrapper and assigns them to StateVector.

+ +

LQRController - Computes the PWM signal (based on the current state) to send to the Cart in order to stabilize the pendulum. see: Wikipedia's linear-quadratic-regulator.

+ +


+ +

Specific design questions

+ +
    +
  1. StateUpdater uses constants IDLER_PULLEY_RADIUS and SYSTEM_LOOP_RATE. These smell like they don't belong inside StateUpdater, but they are relevant to the calculation of the state. I suppose all of those private methods inside StateUpdater could be put into a separate class StateCalculator?

  2. +
  3. LQRController uses the constant pendulumBound. That is, the LQR controller should only calculate the input if the pendulum angle is within a certain bound. For some reason I feel like this doesn't belong here, but maybe I'm wrong on that. For the sake of completeness, maybe I should add a bound for each variable in StateVector.

  4. +
  5. As of right now I can't instantiate EncoderWrapper into a test harness because it requires a reference to an Encoder. And by extension, I can't instantiate a StateUpdater into a test harness. How can I fix this?

  6. +
  7. In the most ideal case, I would like for StateVector to not be so hardcoded and the variables should be open to modification. Therefore, LQRController should have a gainVector of any length (right now it is hardcoded to 4). Is this too much abstraction? If not, how can I go about achieving this? Now that I think about it, I believe it would be too much abstraction because then I'm not sure how the StateUpdater would calculate the state on arbitrary state variables, because the algorithm is very specific.

  8. +
+",358119,,,,,43967.58542,How can I improve this design to achieve a more loosely coupled system and better testability?,,2,0,1,,,CC BY-SA 4.0, +410200,1,,,5/15/2020 21:44,,2,50,"

I am a scala developer new to swift. In scala we can share implementation across a variety of classes by ""extending"" a trait that has default implementations for the methods. I would like to see how to do that in swift. Here is my shot at it.

+ +

Let's consider an Alerting mechanism to be added to many application classes

+ +
protocol Alerting {
+    func sendWarning(_ msg: String)
+    func sendAlert(_ msg: String)
+}
+
+extension Alerting {
+    func phoneHome(_ msg: String) { print(msg) }
+    func sendWarning(_ msg: String) { phoneHome(""WARN: \(msg)"") }
+    func sendAlert(_ msg: String)  { phoneHome(""ALERT: \(msg)"") }
+}
+
+ +

Now let's add that Alerting to a couple of App classes:

+ +
class MyAppClass : Alerting {
+    func myFunc(somethingHappened: Bool) {
+        if (somethingHappened) { sendWarning(""Something happened in MyAppClass.myFunc"") } 
+    }
+}
+
+class AnotherAppClass : Alerting {
+    func anotherFunc(somethingHappened: Bool) {
+        if (somethingHappened) { sendAlert(""Something _bad_ happened in AnotherAppClass.anotherFunc"") } 
+    }
+}
+
+ +

Is this a common pattern /idiom in Swift? Is there a different / more preferred approach, and why is it preferred?

+ +

Objectives for the approach:

+ +
    +
  • Horizontally-focused functionality can be shared across disparate application classes. Think aspect-oriented programming.
  • +
  • The functionality can be implemented in one place yet overridden in classes inheriting the behavior as needed
  • +
  • The behaviors are mix-ins ie they can be combined with other behaviors and do not require a rigid inheritance hierarchy
  • +
+ +

Taking a look at how my attempted implementation meets those objectives:

+ +
    +
  • Alerting was added to two App classes that have no inherent relationship
  • +
  • The extension provides default implementations of the required behavior
  • +
  • Other unrelated protocol/extension could be added without restrictions: they meet the mix-ins goal
  • +
+ +

But I would note that my approach comes from a scala centered world and may not reflect what swift folks would prefer.

+",102887,,1204,,43966.98472,43966.98472,What is the idiomatic Swift way to add general functionality via protocol/extension?,,0,13,,,,CC BY-SA 4.0, +410202,1,,,5/16/2020 2:29,,0,79,"

I am working on a web application as a hobby and trying to learn some concepts related to cloud development and distributed applications. I am currently targeting an AWS EC2 instance as a deployment environment, and while I don't currently have plans to deploy the same instance of my API application to many servers, I would like to design my application so that is possible in the future.

+ +

I have a search operation that I currently have implemented using a Trie. I am thinking that it would be slow to rebuild the trie every time I need to perform the search operation, so I would like to keep it in memory and insert into it as the search domain grows. I know that if I only wanted to have one server, I could just implement the trie structure as a singleton and dependency inject it. If I do this in a potentially distributed application, though, I would be opening myself up to data consistency issues. My thought was to implement the trie in another service, deploy it separately and make requests to it (this sounds like microservice concepts, but I have no experience with those). Is this common practice? Is there a better solution for maintaining persistent data structures in this way?
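
+ +

To make the ""separate service"" idea concrete, this is the rough shape I have in mind (Python with Flask purely for illustration; the routes and the '$' end-of-word convention are made up):

+ +
from flask import Flask, jsonify, request
+
+app = Flask(__name__)
+trie = {}   # nested dicts; the '$' key marks the end of a word
+
+@app.route('/words', methods=['POST'])
+def add_word():
+    node = trie
+    for ch in request.json['word']:
+        node = node.setdefault(ch, {})
+    node['$'] = True
+    return '', 204
+
+@app.route('/search/<prefix>')
+def search(prefix):
+    node = trie
+    for ch in prefix:
+        if ch not in node:
+            return jsonify([])
+        node = node[ch]
+    return jsonify(collect(node, prefix))
+
+def collect(node, prefix):
+    # Gather every completed word in the subtree under this prefix.
+    words = [prefix] if '$' in node else []
+    for ch, child in node.items():
+        if ch != '$':
+            words += collect(child, prefix + ch)
+    return words
+

+ +

Every API instance would then call this one process over HTTP, so there is a single copy of the trie and no consistency issue between instances - at the cost of a network hop per search, and the writes would need a lock (or a single writer) once the server runs multi-threaded.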

+",364939,,,,,44179.54375,Maintaining Objects Across API Deployment Instances,,1,0,,,,CC BY-SA 4.0, +410204,1,410286,,5/16/2020 6:24,,3,313,"

Lately, I've been working on a project which basically is a huge rewrite in .NET Core F# + Event Sourcing + PostgreSQL of an old sub-ledger legacy app written in .NET 4.6 C# + SQL Server.

+ +

Since the whole rework cannot happen overnight and the legacy process needs to run until every single piece is tested and replaced, we opted for distributed transactions via the TransactionScope class. It usually works, but the tradeoff is that you need to clean up orphaned transactions in case there is a crash (which can basically happen whenever you're updating a service); the chances are not high, but it can still happen and already has.

+ +

Long story short, we need to keep a certain consistency between what is written in the legacy system (i.e. SQL Server) and what is written in the new system (i.e. PostgreSQL) until everything is fine; it's a critical system, so we can't really mess with it.

+ +

So I'm wondering: is there really an alternative when it comes to writing some data into both databases (albeit in a different format)?

+ +

So that we can have the guarantee that the transaction has worked out (or not) for both DBs (I put the emphasis on both, because it should be either true or false for the pair). What we absolutely want to avoid is a piece of data written into one system and not the other.

+ +

I've heard about the saga pattern, but I'm not too sure how it can be applied in this context, knowing that we can't change the legacy system much.

+",171752,,,,,43969.52431,Alternatives to distributed transactions in .NET?,<.net>,2,2,1,,,CC BY-SA 4.0, +410218,1,,,5/16/2020 15:55,,-3,163,"

As my first job, I joined a startup with a small number of developers.

+ +

I mainly work with this one other person -- let's call him Paul for now. We are the only two devs on our team.

+ +

He joined the company a year before I did, pretty much at its inception, and built a considerable part of our company's core and gets a lot of approval from management.

+ +

However, looking into his code, I realize it's bad. I mean really bad.

+ +

He sort of crash-learned programming in a few months and, to be honest, he's still a bad programmer. He doesn't know how to do list comprehensions (we use Python mostly) or class inheritance (or anything relating to OOP), and doesn't know how to use Git.

+ +

For instance, one of his old scripts currently takes 2 hours to complete a cron job; it works through a huge list of if statements which I'm sure could be narrowed down to 10 minutes using regular expressions.

+ +

But obviously workplace politics exist, and I can't just bluntly say that his code is the crux of our problems. My team leader doesn't know how to code, and besides, I can't just say ""he really just sucks at coding, and his code is terrible.""

+ +

However, problems need to be amended -- and at some point, I feel like I need to say ""your code really sucks, and I need to rewrite it, because you'll still do an awful job refactoring it"".

+ +

How can I bring this up tactfully?

+",365881,,,,,43967.77639,How can I tactfully point out that a colleague's work is bad?,,3,5,,43967.84236,,CC BY-SA 4.0, +410225,1,410227,,5/16/2020 18:42,,1,54,"

I am developing a webservice which offers different services, but currently under different ports. Existing services like GitHub or GitLab also offer several services, yet they manage to expose them under a single port, and I am wondering how. Do they look at the headers of a request, decide whether it comes from a git client or a web browser, and tunnel the request to the corresponding handler?

+ +

Are there any keywords or topics in this regard, which could help me to understand how its technically done? Thank you!

+",345143,,345143,,43967.78472,43967.81111,Multiple webservices on a single port,,1,0,1,,,CC BY-SA 4.0, +410226,1,,,5/16/2020 18:53,,0,57,"

I'm trying to learn about MODBUS as a free-time project. It is a long-standing desire of mine to write my own driver in Golang. That being said, I'm now trying to design the concurrency model for device connections.

+ +

I'm using the MODBUS specifications from modbus.org as guidance. Specifically, my question arises after reading the MODBUS Messaging on TCP/IP Implementation Guide V1.0b +section 4.2 TCP CONNECTION MANAGEMENT:

+ +
+

Implementation Rules :

+ +
    +
  1. Without explicit user requirement, it is recommended to implement the automatic TCP connection management
  2. +
  3. It is recommended to keep the TCP connection opened with a remote device and not to open and close it for each MODBUS/TCP transaction. Remark: However the MODBUS client must be capable of accepting a close request from the server and closing the connection. The connection can be reopened when required.
  4. +
  5. It is recommended for a MODBUS Client to open a minimum of TCP connections with a remote MODBUS server (with the same IP address). + One connection per application could be a good choice.
  6. +
  7. Several MODBUS transactions can be activated simultaneously on the same TCP Connection. + Remark: If this is done then the MODBUS transaction identifier must be used to uniquely identify the matching requests and responses.
  8. +
  9. In case of a bi-directional communication between two remote MODBUS entities (each of them is client and server), it is necessary + to open separate connections for the client data flow and for the + server data flow.
  10. +
  11. A TCP frame must transport only one MODBUS ADU. It is advised against sending multiple MODBUS requests or responses on the same TCP PDU.
  12. +
+
+ +

1. What I normally would do

+ +

Use a connection pool. Each request takes a connection out of the pool, and a maximum number of concurrent connections is defined. If there are no connections available in the pool and N < Max, a new one is created. Idle connections close after a timeout. Closed and errored connections are redialed as required. Requests can be made from multiple goroutines.

+ +

A Send() method will build an ADU and Write() it on an acquired connection. It then attempts to Read(). Until the result comes back, this goroutine is blocked. On success the resulting ADU is parsed into a PDU and returned to the caller. The connection is Put() back into the pool. In case of a connection error during Write() or Read(), the error is returned and the closed connection is discarded. (Perhaps I'll implement some retrying later.)

+ +

This method keeps a perfect relation between requests, results and potential errors. It's a commonly used pattern in Go networking applications like SQL drivers, HTTP, gRPC and more.

+ +

2. If I would follow the spec's advice

+ +

Use a single connection. Request PDUs are passed through a channel to a handler; the channel sequences the requests in a concurrency-safe way. A handler runs in an infinite loop over the channel. For every PDU it builds an ADU, which contains an incremental transaction ID, as defined by the spec. This ID needs to be stored in a concurrency-safe way in the connection object, probably with a return channel attached to it - for example a map[uint16]chan struct{pdu, error}, protected by a mutex.

+ +

A second loop iterates over Read(). When a reply arrives, it reads the header and decodes the transaction ID. The return channel is then obtained from the map, over which the PDU is sent back.

+ +

In this situation connection error handling and propagation become a bit more vague. In case of any error I would somehow lock the Write() loop until the redial logic completes. But how should I deal with the callers that are still waiting for their Read() results on a channel? It can be assumed that their result will never arrive. Should the error be sent to all channels currently waiting? Optionally, I can store the original request in the same map and re-transmit.

+ +

In any case, this approach feels fragile at best.

+ +
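
To make approach (2) more concrete, here is roughly the bookkeeping I have in mind, sketched in Python asyncio for brevity since the pattern itself is language-neutral (Mux, frame() and read_adu() are made-up names; the two helpers are not shown):

+ +
import asyncio
+import itertools
+
+class Mux:
+    def __init__(self, reader, writer):
+        self.reader, self.writer = reader, writer
+        self.pending = {}            # transaction id -> future awaiting its reply
+        self.tids = itertools.count()
+
+    async def send(self, pdu):
+        tid = next(self.tids) % 65536
+        fut = asyncio.get_running_loop().create_future()
+        self.pending[tid] = fut
+        self.writer.write(frame(tid, pdu))   # frame() would build the ADU
+        return await fut                     # each caller blocks only on its own reply
+
+    async def read_loop(self):
+        try:
+            while True:
+                tid, pdu = await read_adu(self.reader)
+                fut = self.pending.pop(tid, None)
+                if fut is not None:
+                    fut.set_result(pdu)
+        except Exception as exc:
+            # The connection died: fail every caller still waiting,
+            # then let the redial logic build a fresh Mux.
+            for fut in self.pending.values():
+                fut.set_exception(exc)
+            self.pending.clear()
+

+ +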

My questions

+ +

Which way should I go? My preference is clearly (1). Will this lead to future problems on embedded devices? Unfortunately I don't have devices to work with yet, nor any experience with them.

+",327642,,,,,44118.54236,Modbus over TCP concurrency pattern,,1,0,,,,CC BY-SA 4.0, +410231,1,410241,,5/17/2020 3:11,,1,143,"

I am watching this talk by Sean Parent. He notes that:

+ +
+

Choosing the same syntax for the same semantics enables code reuse and avoids combinatoric interfaces

+
+ +

What does ""combinatoric interface"" mean?
+Could you explain with an example?

+ +

P.S. Thinking about this some more, my guess is that if we have n types and m operations with common semantics on those types (they do the same thing on each type), then instead of writing m x n functions (i.e. every combination of type and operation), we only write m generic operations. Is that what it means?
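
+ +

If my guess is right, a toy illustration of what gets avoided would be something like this (Python, purely for illustration):

+ +
# list, tuple, set and range all share the same iteration syntax,
+# so one generic operation covers all of them:
+def total(items):
+    result = 0
+    for x in items:
+        result += x
+    return result
+
+print(total([1, 2, 3]), total((1, 2, 3)), total({1, 2, 3}), total(range(4)))
+
+# If every type spelled 'visit each element' differently, we would instead
+# need total_list, total_tuple, total_set, ... - one function per
+# operation/type combination.
+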

+",324635,,324635,,43969.11806,43969.43611,What does Combinatoric Interface mean?,,3,3,,,,CC BY-SA 4.0, +410234,1,,,5/17/2020 7:36,,-4,108,"

Please, consider the below merge sort algorithm. Here we start with a divide portion which splits the array into halves and we do recursive operation on each half separately. I have ignored the merge portion of the algorithm to reduce complexity.

+ +
function mergeSort(unsortedArray)
+{
+
+  let midpoint = Math.floor(unsortedArray.length/2);
+
+  if(unsortedArray.length == 1)
+  {
+    return unsortedArray;
+  }
+
+  let leftArray = mergeSort(unsortedArray.slice(0,midpoint));
+  let rightArray = mergeSort(unsortedArray.slice(midpoint,unsortedArray.length));
+
+}
+
+ +

I know that for binary search, which ignores half of the array in every iteration, the answer is easily arrived at as log2 n.

+ +

Now I would like to calculate the Worst Case Time Complexity for only the portion which splits the array into left half i.e. let leftArray = mergeSort(unsortedArray.slice(0,midpoint));

+ +

Even though the above code splits the array from index 0 to midpoint, the next level of recursion still works on the entire array - unlike binary search, where only one half survives - with index 0 to midpoint/2 going to the left recursive call and index midpoint/2 to midpoint going to the right one.

+ +

So, how would we calculate the time complexity in a scenario where each level of recursion involves multiple recursive calls instead of one?
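
+ +

To restate the question as recurrences (assuming constant work per call and ignoring the cost of slice itself), the whole function gives the first relation below, while binary search - which I can solve - gives the second:

+ +
T(1) = c
+T(n) = 2*T(n/2) + d     # two recursive calls per level
+
+B(1) = c
+B(n) = B(n/2) + d       # one recursive call per level  ->  O(log2 n)
+

+ +

It is the first form, where the calls multiply at every level, that I do not know how to reason about - both for the whole function and for the left-half line alone.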

+",365917,,,,,43968.65764,What is the worst case Time Complexity for only the Divide Portion of the Merge Sort Algorithm?,,1,2,,,,CC BY-SA 4.0, +410242,1,,,5/17/2020 14:52,,0,65,"

I have an architectural question. I have an application which is subscribed to a log-compacted Kafka topic. I have to process each event and store it in a persistent datastore. I am planning to run the app in 4 instances with the same group id so that the partitions are distributed across all the nodes.

+ +

During the lifetime of the application, there will be a need for

+ +
    +
  • Resetting consumer to start from 0th offset
  • +
  • Pausing the consumer for a while
  • +
  • Resuming a paused consumer
  • +
+ +

The best way to achieve this seemed to be to implement an API which can be called from a script. The API will have access to the KafkaConsumer object and will call the pause operation.

+ +

Now, the issue I see with this approach is that there isn't a way I know of to make the call reach all 4 instances - behind the load balancer, a single API call will only hit one of them. How can I make sure that one API call (or N API calls, where N = number of instances) will perform this operation on every instance?

+ +

Any help on this would be appreciated.

+",54436,,,,,44124.50069,How to make sure all of the nodes process one API request behind a load balancer,,1,0,1,,,CC BY-SA 4.0, +410248,1,410250,,5/17/2020 17:29,,42,7740,"

I'm working on a PHP web application that depends on a few 3rd-party services. These services are well documented and provided by fairly large organisations.

+ +

I feel paranoid when working with responses from these APIs, which leads me to write validation code that checks that the responses match the structure and data types specified in the documentation. This mainly comes from the fact that it's out of my control, and if I blindly trust that the data will be correct and it's not (maybe someone changes the JSON structure by accident), it could lead to unexpected behaviour in my application.
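
+ +

For what it's worth, the kind of check I mean looks roughly like this - sketched with Python's jsonschema for brevity (my real code is PHP, and the schema, field names and exception type here are made up):

+ +
from jsonschema import validate, ValidationError   # pip install jsonschema
+
+class ThirdPartyContractError(Exception):
+    pass
+
+PAYMENT_SCHEMA = {                 # the shape promised by the provider's docs
+    'type': 'object',
+    'required': ['id', 'amount', 'currency'],
+    'properties': {
+        'id': {'type': 'string'},
+        'amount': {'type': 'number'},
+        'currency': {'type': 'string'},
+    },
+}
+
+def parse_payment(response_json):
+    try:
+        validate(response_json, PAYMENT_SCHEMA)
+    except ValidationError as err:
+        raise ThirdPartyContractError(err.message)
+    return response_json
+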

+ +

My question is, do you think this is overkill? How does everyone else handle this situation?

+",87243,,340885,,43971.85,43971.85556,Is it common practice to validate responses from 3rd party APIs?,,8,12,7,,,CC BY-SA 4.0, +410254,1,,,5/17/2020 20:50,,0,131,"

...to prevent messing up orders with later data updates - to prices, titles, etc. - of the products that are placed in an order.

+ +

Namely, a customer buys 3 items, for $5, $10 and $33, and pays for them. All is well. Then I, as the owner of the shop, step in and, before I've delivered those products to the customer, decide to decrease the price of, say, product #2. Its new price will be $8 instead of $10. And I'll also rename product #3 a little bit. The customer would then go to the status tracking page and see the updated data. Yes, they've paid and the order is being delivered, but the data has already changed in the database. They'll be confused.

+ +

And so will I be, a year from now, after I've changed the prices again, renamed something, etc., when I decide to view the order history for the year.

+ +

Question 1: is there a practice of making a snapshot of the current prices, names, total order price and other characteristics of the products that constitute an order, as well as of other details (chosen shipping rate and shipping method, taxes, discounts, etc.), at the moment when it's being placed - rather than calculating those dynamically when an ""order page"" is opened?

+ +

If yes, does it have a name? And are there recommendations of how to do it properly?

+ +

Question 2: where and how should I store a snaphot?

+ +

option #1:

+ +

I'd have to create multiple tables then:

+ +
    +
  • frozen_products
  • +
  • frozen_discounts
  • +
  • frozen_shipping_method_and_rates

    + +

    etc...

  • +
+ +

that will have the same structure as their dynamics corresponding ones.

+ +

Laborious. Is there a better way?

+ +

option #2:

+ +

along with an order, in the ""orders"" table. But how, again, given the fact that an order is a single row? For instance, the products in an order are a list. How would I store a list with its characteristics (price, weight, colour, material, what have you) in a row in such a way that will be more or less easy to retrieve in the future? Not as a string.
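
+ +

For completeness, the simplest shape I can think of so far is a single extra table holding the copied values - roughly this (sqlite purely for illustration; the table and column names are made up):

+ +
import sqlite3
+
+db = sqlite3.connect('shop.db')
+db.execute('''
+    CREATE TABLE IF NOT EXISTS order_items (
+        order_id    INTEGER NOT NULL,
+        product_id  INTEGER,            -- link back to the live product (may be NULL)
+        name        TEXT    NOT NULL,   -- copied from the product at purchase time
+        unit_price  INTEGER NOT NULL,   -- copied at purchase time, in cents
+        quantity    INTEGER NOT NULL
+    )
+''')
+

+ +

But I am not sure whether that is the accepted practice, or whether the practice has a proper name.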

+",365957,,319783,,43969.98194,43971.96389,"When placing an order in a shop, should a snapshop be taken of the products in an order also?",,2,6,,,,CC BY-SA 4.0, +410259,1,410308,,5/17/2020 23:11,,-4,96,"

I am writing a paper trading system, and what I have works, but I can't help but feel that there's a better way to partially close a stock position; what I currently have seems a little overboard.

+
+

Partially Close Scenario: Customer A buys 100 shares of stock X at $1 then, later the customer sells 80 shares at $2; leaving 20 shares on the table.

+

Close plus Open Scenario: Customer A buys 100 shares of stock X at $1 then later the customer sells 120 shares at $2; opening a new short position with 20 shares.

+
+

Pseudo Code

+

I think posting the actual code is irrelevant for answering this question.

+
if (customer has active trade) {
+    if (quantity to close >= quantity open) {
+        close open trade
+        if (quantity to close > quantity open)
+            create new open trade with remaining quantity (close - open)
+    } else {
+        // This is where I think it could be better.
+        close old position at entry price
+        open new position at old entry price (close quantity)
+        close new position at specified exit price
+        open new position at old entry price (old quantity - close quantity)
+    }
+} else {
+    create new open trade with specified quantity
+}
+
+

Is there a more efficient way to write the logic above? Both scenarios can be executed as a single trade on most platforms.

+

NOTE: The community may feel this question has a better home over at Code Review; if this happens to be the case, can it be migrated? I thought it was a better fit here since there's no actual code involved.

+",319749,,319749,,44049.63819,44049.63819,"Is there a better way to write the logic for handling ""close plus open"" and ""partially close"" trades?",,2,2,,,,CC BY-SA 4.0, +410260,1,410272,,5/17/2020 23:25,,-2,158,"

I wrote a small video library app. It renders private Vimeo videos for paid subscribers.

+ +

The owners of the app would like to reward users who watch videos. My first implementation was to trigger a ""watched"" mutation at 80% of the video playback (this was not ideal, but clients demand features). Users have figured this out and are cheating by skipping to 75%, letting it tick over, and then moving on to the next video. Users only score one watched event, so replays aren't a concern.

+ +

I can think of a few ideas to stop this behavior but would be interested in other answers to this. It doesn't have to be YouTube levels of fake engagement prevention.

+ +
    +
  1. Check n times through the watch; if the user hits m of n checkpoints (they are allowed to skip a bit), they are granted their reward
  2. +
  3. Compare the playback start time against the time the 80% mark is reached, and check whether they have spent enough time on the video.
  4. +
+ +

If it matters I am using React and Apollo.

+",250183,,,,,43969.34583,Prevent users cheating a view count,,4,2,,,,CC BY-SA 4.0, +410262,1,410327,,5/18/2020 1:24,,8,453,"

Oftentimes, ""dirty"" is used to represent unsaved code, memory, or files. For example, a file can be ""dirty"", meaning it's unsaved, memory can be ""dirty"", meaning it's been modified but hasn't been written to RAM, and Git reports its working tree as ""clean"" when there are no uncommitted changes.

+ +

I understand why you would use the term, but where did it originate?

+",312598,,,,,43971.50764,"What's the origin of the term ""dirty"" in regards to unsaved progress?",,3,2,1,,,CC BY-SA 4.0, +410264,1,,,5/18/2020 3:13,,-2,33,"

We finish a sprint with stories tested, ""Done"" and thereby closed in Jira. They often sit in a branch somewhere until someone remembers we have code waiting to be pushed live. Or worse, we go to push something live and someone pipes up with ""oh, that also includes the xyz change..."", or even worse, we push one change not realising it includes another one. Usually it works, because it's tested, but it is kinda scary that we often don't know exactly what is going live without doing a thorough code review of all merge requests and diffs across multiple services/components.

+ +

Often we can't push things live immediately, as there are dependencies, related work in progress, marketing, or any number of other reasons. I know feature flags are a good solution, but we're not currently planning to implement them.

+ +

How do you track issues and related code that is Done, but not yet released in a methodical manner so you can see the code changes involved and the issues involved across any environment at any time?

+",339664,,,,,43969.44792,Tracking stories and code to be released after its tested and out of the sprint,,2,0,0,,,CC BY-SA 4.0, +410266,1,410278,,5/18/2020 4:03,,-3,93,"

I am working on a system where I am planning to send a large amount of data in a message to an event queue. About 150 or so small objects. I know the technology should be able to handle this, but is it good practice?

+",365981,,,,,43969.34028,How large should messages in an event queue be?,,2,1,0,43978.85139,,CC BY-SA 4.0, +410273,1,,,5/18/2020 7:11,,-1,135,"

How should I set up my project when I want to run the present version of a class against previous versions? I'm interested in issues related to code organization, file naming, and source control.

+ +

I have a stable version A of my code, and am working on some accuracy improvements for version B. (It's a forecasting model in Python, but my question is perhaps broader than either of those things.)

+ +

I want to be able to run both of these versions of code at the same time in at least these two situations:

+ +
    +
  1. Testing. To test the new changes, I'd like to run a test data set through both version A and version B. Version B ought to give more accurate results (and if not, it needs to be improved/fixed). In the future I'd start on version C, and I'd like to be able to test C alongside B (and maybe A).

  2. +
  3. Speed/accuracy tradeoffs. Each version is intended to offer an incremental improvement in forecasting accuracy, although this will typically require greater complexity and processing. For some purposes I may prefer a faster answer, even if it is less accurate. In those cases I would default to an earlier version for a quick-and-dirty answer.

  4. +
+ +

I have seen suggestions for feature flags. So I'd have one class, but I'd specify if I want the A or B (or C) logic. I think this would get messy quickly. It would be easier to read if I had a version with just the A code.
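
+ +

For reference, the alternative shape I am leaning towards is freezing each version as its own module behind a common interface - roughly this (the module, file and function names are all made up):

+ +
from importlib import import_module
+
+# Layout I am considering:
+#   models/__init__.py
+#   models/v1.py   - frozen copy of version A, exposing forecast(data)
+#   models/v2.py   - current work on version B, same interface
+
+def load_model(version):
+    return import_module('models.' + version)
+
+# e.g. in a test, compare both versions on the same fixture:
+#   assert load_model('v2').error(test_set) <= load_model('v1').error(test_set)
+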

+ +

What's the best way to do this?

+",365991,,365991,,43969.59167,43969.78472,Running multiple version of code,,2,7,,,,CC BY-SA 4.0, +410292,1,410294,,5/18/2020 13:40,,-2,101,"

Is there a standard for DB alter scripts (both data and DDL)? Should they be re-runnable, and if so, what are the reasons for making them re-runnable?

+ +

The only web pages I could find tell me how to do it, but I want to know why to do it.
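
+ +

To make sure we mean the same thing by ""re-runnable"", here is a toy example of what I understand it to be (sqlite flavour; the table and column names are made up, and the table is assumed to already exist):

+ +
import sqlite3
+
+db = sqlite3.connect('app.db')
+
+# Re-runnable: the change is guarded, so executing the script twice is harmless.
+columns = [row[1] for row in db.execute('PRAGMA table_info(customer)')]
+if 'middle_name' not in columns:
+    db.execute('ALTER TABLE customer ADD COLUMN middle_name TEXT')
+

+ +

I can see how to write such guards; what I cannot find is an explanation of why (or whether) teams should insist on them.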

+",310910,,,,,43970.28681,Re-runnable db alter scripts,,2,2,,,,CC BY-SA 4.0, +410298,1,,,5/18/2020 15:05,,0,36,"

I am developing a function that produces an inherently inductive output (calculating periods based on a start and end date).

+ +

E.g., period n depends on the calculation of period n-1, and so forth.

+ +

Without increasing my unit tests' cyclomatic complexity (loops, etc.) and while adhering to the principle that each unit test be explicit and assert specific outcomes, how would I achieve this?

+",300307,,,,,43969.62847,Unit testing an inductive problem?,,0,1,,43969.64306,,CC BY-SA 4.0, +410302,1,,,5/18/2020 17:05,,0,18,"

I'm dealing with a native .net WPF app which is more or less a shopping client operated on one of multiple self service terminals.

+ +

On these terminals the users have four possibilities in order to buy the stuff they like:

+ +
    +
  1. Enter cash and buy whatever they want anonymously
  2. +
  3. Login with their user account and buy whatever they want with money from their account
  4. +
  5. (Possibilities 3 & 4) The same as 1 and 2, but there is an employee logged in on the app who performs the actions for the user
  6. +
+ +

Each of these four possibilities may call different APIs for the same operation. For example, buying a new item could be seen as (in the order of the four possibilities again):

+ +
    +
  1. Call buying api with the amount stored on the client
  2. +
  3. Call user api to retrieve amount stored in account and then call buying api
  4. +
  5. (Possibilities 3 & 4) Again, just use cases 1 & 2, with the addition that at the end an employee API is called which records that the purchase was performed by a specific employee
  6. +
+ +

My question now: how should/could I implement those scenarios using OAuth? Would it be better to have multiple access tokens, one for each ""session"", or should I rather use one access token which is enriched with additional information upon login of a user? Or is there any other way I'm currently not aware of? I'm learning OAuth while trying to design a good solution for this right now.

+",366026,,,,,43969.71181,Multiple simultaneous users with oauth,,0,0,,,,CC BY-SA 4.0, +410311,1,,,5/18/2020 21:32,,6,268,"

I have a library (an npm package, LIB) which is used by the application code (APP). In a release of APP, multiple features are worked on in parallel. Sometimes these features need support from LIB, so changes need to happen there too; LIB uses semver.

+ +

When multiple features like F1 and F2 are being worked on in APP, LIB is released with, let's say, x.y.1 for F1 and then x.y.2 for F2. The problem is that the features are developed and tested in parallel, and it's possible that F2 lands but F1 doesn't - in which case LIB's release x.y.2 contains the undesirable code of x.y.1.

+ +

Right now I solve this using two methods:

+ +
    +
  1. Optimistically release LIB, revert undesirable code and release x.y.3 which is basically x.y.2 without x.y.1. Issues with this approach are:

    + +
      +
    • x.y.3 is a new release and probably needs to be tested again.
    • +
    • Overhead of reverting versions.
    • +
  2. +
  3. Use dirty code from LIB's F1 and F2 branches before releasing, to test APP. Once one of the features is ready to land, the version is released. Let's say F2 is ready to land; then x.y.1 of LIB is released with support for F2 just in time. Issues with this approach are:

    + +
      +
    • Overhead of releasing the dependency (LIB) versions just ahead of release of APP.
    • +
    • Unversioned code on git branches is hard to manage and prone to bugs. We use something similar to git flow, and reverting features from LIB's release branch because only F2 gets to go makes for more errors, as reverts are error-prone. Also, the testing stands invalidated, as the new release branch will be (F1 + F2 - F1) while the tests were done on (F1 + F2).
    • +
  4. +
+ +

Both of these solutions have issues. I want to know how to solve this scenario where parallel development meets sequential semver, and I realize that reverting features just before release is an issue and an unusual practice.

+ +

Thanks in advance.

+",125592,,52522,,43974.37083,43974.37083,Versioning in parallel features development,,2,0,,,,CC BY-SA 4.0, +410318,1,410332,,5/18/2020 23:50,,0,97,"

I am writing a client-facing library and wanted to check whether the approach below sounds reasonable. This library is a service that consumes HTML and creates a greeting card.

+ +

I feel that writing both CardService and CardServiceImpl may not be necessary, and that I only need CardServiceImpl, because the interface just duplicates the method signatures of the implementation. What would be a good design?

+ +

Consumed by user:

+ +
public class CardPrinter {
+    private final CardService cardService = new CardServiceImpl();
+
+    public void create(String html) {
+        Card card = cardService.createCard(html);
+        // ..
+    }
+}
+
+ +

Service:

+ +
public interface CardService {
+    Card createCard(String html);
+}
+
+ +

Service impl:

+ +
public class CardServiceImpl implements CardService {
+    @Override
+    public Card createCard(String html) {
+        return Card.builder().content(html).build();
+    }
+}
+
+ +

Pojo:

+ +
public class Card {
+    private String content;
+    // Card.builder() above implies a builder here (e.g. Lombok's @Builder) - omitted for brevity
+}
+
+",186820,,186820,,43970.025,43970.26389,Is this a good design for my library?,,1,2,,,,CC BY-SA 4.0, +410321,1,,,5/19/2020 0:22,,0,119,"

I have a block of code that branches into 2 pathways, let's call them the ""simple"" and ""complex"" branch, based on user input.

+ +

Either the simple or complex logic has 4 steps, let's call them A, B, C and D. A, B and C each have 2 different variations depending on whether we're in the simple or complex branch.

+ +

In the complex case, A, B and C are long enough where it makes sense for them to be broken up into their own methods. In the simple case, these reduce to nearly 2-3 liners.

+ +

Because the simple and complex cases correspond to different user inputs, I feel it's easier and less mentally taxing to think about the problem as one if statement branching into 2 different methods, rather than 1 method with multiple if statements inside. This way, when I look at the methods of either branch, I only ever have to think about the logic of one case at a time, rather than looking in and out of if statements to see the program flow.

+ +

Now the dilemma is that both methods share identical code for the D logic, and while they correspond to different user inputs, D is unlikely to ever vary between them. In the complex method, A, B and C are broken out into separate subfunctions, so it is logically consistent for D, which sits at the same logical level as A, B and C, to also be broken out into a separate function. In-lining, in this case, hurts readability.

+ +

However, in the simple case, A, B and C are all in-lined into the simple() method. Each of these steps is short enough that any separation into subfunctions would hurt readability (and, since D represents a similar level of step in the process, it would be logically inconsistent to break D into a sub method).

+ +

I'm currently most inclined to have D in-lined in the simple case, so that A, B, C and D can be read in one function in simple(), and then have identical logic for D broken out into a sub-function in the complex case. This violates DRY, but I feel this improves readability.

+ +

In summary, in pseudocode:

+ +
if(condition)
+  simple()
+else
+  complex()
+
+function simple() //All in-lined in simple()
+ Asimple;
+ Bsimple;
+ Csimple;
+ D;
+
+function complex()
+  Acomplex();
+  Bcomplex();
+  Ccomplex();
+  D();
+
+
+ +

D is copy-pasted: inlined in simple() and broken out as D() in the complex case. But I believe this is more readable, as both simple and complex keep their internal logic at a consistent level of decomposition. And by keeping the logic all together in the simple case and decomposed in the complex case, this maximizes readability by keeping functions neither too short nor too long. Does this seem like an appropriate time to violate DRY?

+",309478,,309478,,43970.48403,43971.28472,DRY Violation for Logical Code Organization and Readability,,2,2,,,,CC BY-SA 4.0, +410322,1,,,5/19/2020 0:32,,5,454,"

I've got a project with an HTTP API which returns data from a database. The layers it goes through to get to the API look like this:

+ +

DB -> Repository -> Controller

+ +

I'm looking to restrict the results which are returned based on the permissions of the requester. Should this be included in the repository layer, or the controller layer and why?

+",366047,,366047,,43970.05903,43989.50764,Should access control be implemented in controller or repository layer?,,4,6,1,,,CC BY-SA 4.0, +410325,1,,,5/19/2020 1:53,,2,126,"

I am trying to plant a garden. Certain plants are good for some plants and bad for others, and I am trying to find the best order of plants: most adjacent friends and no adjacent foes, as defined in this table:

+ +
Num Vegetable     Friends       Foes
+1   Watermelon    7,4,3         8,6
+2   Tomatoes      9,8,6,5,1     7
+3   Sunflowers    7,6,11
+4   Zucchini      9,7,3
+5   Eggplant      9,6,2         7,10
+6   Cucumbers     9,7,3         8,1
+7   Corn          8,6,4,3,1     5,2
+8   Cantaloupe    7,4,3         6,1
+9   Bell peppers  6,5,11,10,2
+10  Swiss chard   2             5
+11  Rhubarb       9,3
+
+ +

Assuming I have one of each plant and they are being planted in a row, how do I sort them (most efficiently) so that I will get the most adjacent friends and no adjacent enemies? There are tools online, but I am trying to understand the thought process and the implementation. Java is a language I know, so that would be the most helpful out of any language, but the concepts are the main point for me.
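
+ +

To pin down what ""best order"" means, here is the scoring I have in mind, written out in Python for brevity since the concepts are what matter (a row scores None if any neighbours are foes; otherwise friendly adjacencies are counted in both directions):

+ +
from itertools import permutations   # only needed by the commented-out search below
+
+friends = {1: {7, 4, 3}, 2: {9, 8, 6, 5, 1}, 3: {7, 6, 11}, 4: {9, 7, 3},
+           5: {9, 6, 2}, 6: {9, 7, 3}, 7: {8, 6, 4, 3, 1}, 8: {7, 4, 3},
+           9: {6, 5, 11, 10, 2}, 10: {2}, 11: {9, 3}}
+foes = {1: {8, 6}, 2: {7}, 5: {7, 10}, 6: {8, 1}, 7: {5, 2}, 8: {6, 1}, 10: {5}}
+
+def score(row):
+    total = 0
+    for a, b in zip(row, row[1:]):
+        if b in foes.get(a, set()) or a in foes.get(b, set()):
+            return None              # adjacent foes disqualify the row
+        total += (b in friends[a]) + (a in friends[b])
+    return total
+
+print(score((1, 7, 8, 4, 9, 5, 2, 6, 3, 11, 10)))   # -> 15, with no adjacent foes
+
+# Exhaustive search would work but is O(n!) - about 40 million rows for 11 plants:
+# best = max((p for p in permutations(friends) if score(p) is not None), key=score)
+

+ +

It is the smarter-than-O(n!) part - the thought process behind pruning or ordering the search - that I am trying to understand.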

+",366054,,,,,43970.74722,Algorithm for rule-based sorting?,,2,2,,,,CC BY-SA 4.0, +410331,1,,,5/19/2020 6:05,,-2,49,"

We are struggling to decide the scope of our end-to-end tests. As per our understanding, we have automated the form exactly the way users interact with it. Below are the steps a user performs while submitting the form.

+ +

Step 1. User fills the form

+ +

Step 2. If all the validations pass, we submit the form using an API.

+ +

Step 3. We show success/failure message to the user

+ +

In automation, the success or failure of the form is decided by the message we show in step 3.

+ +

Now, here are the questions

+ +
    +
  1. How do we ensure that all the information filled in by the user is passed to the API? Relying on the success/failure message may not be enough, especially in the case of optional fields.
  2. +
  3. Should we worry about this case at all?
  4. +
  5. Should this case be covered in E2E tests at all? Or can this be verified with any other kind of testing?
  6. +
+ +

We are using Cypress for automation (if this helps).

+",317027,,,,,43970.375,How to identify scope of end-to-end automation testing?,,3,0,,,,CC BY-SA 4.0, +410334,1,410338,,5/19/2020 6:39,,0,140,"

So I'm currently playing around with web development as a project, and I've been looking at React recently. My current issue is that I'm having trouble with the separation between front end and back end.

+ +

From my understanding, the front end is the UI that the client interacts with, and the back end is the server itself.

+ +

My current goal is to develop a quick note-taking application. To do this, I currently have two different approaches floating in my head.

+ +

In the first approach, I'd have only a single directory which stores the project: I'd use React to generate the UI, and then have a component in the UI call a function to write the data into a text file, and store that file somewhere in the front-end directory. With this approach, it doesn't really seem like a back end is necessary, as the front end is handling all of it?

+ +

In the second approach, where things are seperated into 2 different directories (one for the front and one for the back), I'd have the front end communicate with the back end, and have the back end write the data into a database somewhere.

+ +

Additionally, is it considered bad practice to have the front end handle stuff like writing into files?

+",366063,,,,,43970.49375,React - When is a backend necessary?,,3,1,,,,CC BY-SA 4.0, +410342,1,,,5/19/2020 9:28,,-4,51,"

I have a problem with my scalable Pub/Sub application: it cannot take more than 30 RPS when a big number of sessions are open against it. First I will explain the application structure.

+ +

The application is a Spring Boot embedded Tomcat with 200 Tomcat threads. It is, as I said, a Pub/Sub application with multiple instances behind a load balancer. When a user opens a connection against the application, it is made sticky to a specific instance via a cookie the user receives back. The Pub/Sub pattern is implemented via Redis, which sends each request to the instance that holds the recipient's session.

+ +

The load balancer is round-robin, so the connections are spread evenly between the instances. When a single instance holds more than 3k connections, things start to slow down: the RPS I can send decreases drastically, from the 3000 RPS I could send before the connections were opened to 30 RPS with them. I first assumed that the problem was too few available Tomcat threads, but that is not the case, as I monitor and see 100+ free Tomcat threads not in use per instance. I can not find the culprit; it seems to be something related to the number of sessions open against the instance, but I can not say why. BTW, the protocol used to maintain the sessions is xhr-streaming.

+",357728,,,,,43972.09167,application can not handle certain amount of RPS when 3k+ sessions connected,,1,1,,,,CC BY-SA 4.0, +410353,1,410362,,5/19/2020 15:23,,2,87,"

I've looked through similar questions and tried to Google what I'm trying to do, but I haven't found anything concrete.

+ +

What I'm trying to do: where I work, we use an HMI/SCADA system that displays tags, machine states, and other standard SCADA functions. Recently, I figured out how to send HTTP requests from the SCADA system and I tested sending tag values and other similar data to an Express server I spun up, where a value is POSTed approximately every second (for monitoring or reporting; think of a temperature gauge reporting temps every second). Even though the system we use is considered ""cutting-edge"", the user interface is blocky and lacking, especially when compared to modern web applications. I had the idea to send data from the SCADA system to a POST endpoint, which will then be processed by the Node.js / Express application and reported to an internal website. This way, I can use graphs and charts (from D3 or chart.io) and pre-built UI component libraries like Bootstrap or TailwindCSS.

+ +

My question: I use the SCADA application to send an HTTP POST request with a data payload. I can verify that the Express server receives the request, and I log the message. However, I don't know how to push the POSTed data to the single (for now) HTML page. I have a simple HTML page with a Bootstrap progress bar component that I'm trying to control from the SCADA application. How do I get the front-end components (say, a graph or a Bootstrap progress bar) to update whenever the POST endpoint is hit? I could use JavaScript's fetch(), but the SCADA system won't necessarily POST at set intervals, so how do I make changes to the UI only when a POST request is made, without polling (i.e. without calling fetch() every so often from the frontend, which would be a bad idea, I think)?

+ +

Also, if you can foresee other issues with what I'm trying to do, I would greatly appreciate any feedback. This is uncharted territory for our company, so if it works, we will likely build out reports and other user content in this manner, and I'd like to create a solid foundational architecture to support many requests per second without lag, with instant changes to the UI.

+",321458,,,,,43970.78194,How do I send data to HTML page only when a POST endpoint is hit?,,1,1,,,,CC BY-SA 4.0, +410357,1,,,5/19/2020 16:51,,1,14,"

So my project is that I'd like to pull data from a bunch of different services/APIs and show them in a single dashboard. SSO is a requirement, so I want to make sure the user doesn't have to put in their password over and over.

+ +

The system already has SAML2.0 set up with ADFS which is configured as the identity provider for all the services.

+ +

The issue is that when a user uses SAML to log into a service, they leave my dashboard app and don't get any of the user permissions for that service. For example, say the user is trying to access Gitlab and they only have access to a certain number of projects, unless they go into gitlab and generate a user API key and copy/paste it into my dashboard app, I can't get that user's permissions when making API calls.

+ +

From my research, it sounds like using OAuth2.0 is the solution to this; set up ADFS to generate tokens for a user (as the Authorization server) and then integrate it with applications that use OAuth2.0 (which would be the resource servers) and my dashboard app would be basically a client hanging on to the tokens. This would enable me to make API calls with scoped permissions.

+ +

Does this sound like a good idea? Instinctively, I feel that this has got to be a common problem that's been solved many times before, but I am not managing to formulate the Google/Stack Overflow searches that would find the solution.

+",366109,,,,,43970.70208,How to integrate multiple services via API's into a single dashboard on a per-user basis with SSO?,,0,0,,,,CC BY-SA 4.0, +410359,1,410360,,5/19/2020 18:01,,-2,198,"

In my opinion it should not, since that violates the single responsibility principle - but I find myself often calling both.

+ +

Edit: here are the 'boring' details:
+As you all know, we cannot escape the existence of a Util class. It simply exists. I have a static class Util that has some helper methods, and there dwells RemoveDuplicateSpaces along with:

+ +

NormalizeWhitespaces - replaces all spacing characters with spaces and \r\n with \n
+StripSpaces - removes all spaces

+ +

As some comments and answers pointed out, my problem comes down to finding a good name for a function that does both. But how do I do that without further littering the class? Trimming a string and removing duplicate spaces are similar and yet different things.

+",366114,,366114,,43970.82917,43971.49653,Should a helper function that remove duplicate spaces also trim the string?,,6,8,1,,,CC BY-SA 4.0, +410366,1,,,5/19/2020 19:59,,-2,61,"

My web app is built around classes that I call widgets. Their goal is to be reusable and modular, to suit different scenarios. For example, I have a widget called BreadcrumbWidget which has two functions:

+ +
    +
  • addHome(): adds the ""Home"" breadcrumb.
  • +
  • addBreadcrumb($title, $url = null): adds any number of breadcrumbs.
  • +
+ +

I have this function in a separate class:

+ +
public function getBreadcrumbs()
+{
+    return (new BreadcrumbWidget())
+        ->addHome()
+        ->addBreadcrumb($this->product->name); // The product's name is Foo
+}
+
+ +

The above function returns a BreadcrumbWidget instance on which I can do ->toArray() and get the output:

+ +
[
+    'widget_type' => 'breadcrumb',
+    'list' => [
+        [
+            'text' => 'Home',
+            'link' => '/some-link',
+        ],
+        [
+            'text' => 'Foo',
+            'link' => null,
+        ],
+    ],
+]
+
+ +

I'm uncertain about how best to write tests for getBreadcrumbs(). I already have a test class for BreadcrumbWidget which checks that addHome and addBreadcrumb work as expected. For getBreadcrumbs() I'd need to test that the result contains the Home and Foo breadcrumbs.

+ +

One possible approach would be to mock the BreadcrumbWidget class and define the following expectancies:

+ +
    +
  1. addHome is called,
  2. +
  3. addBreadcrumb is called with the string 'Foo'.
  4. +
+ +

Though this is currently not possible, because BreadcrumbWidget is instantiated inside the function and presents a hard dependency. I'd probably need to have factories for all my widgets to enable such testing (and in getBreadcrumbs()'s case, inject the factory into the class where the function is defined). Is this a common practice?

+ +

Another approach would be to compare the output of ->toArray() with an expected array.

+ +

Which would be the better approach - or does someone maybe know an even better one?

+ +

** I'm using Mockery, and I know about its overload feature, which can mock hard dependencies, but I'd rather have a solution without it.

+",,user366124,,,,43970.85208,Testing function that return objects,,1,0,,,,CC BY-SA 4.0, +410375,1,,,5/20/2020 3:37,,1,67,"

I have been doing problems on LeetCode recently, and keep getting stuck on some graph or tree problems. I understand the underlying concepts of Graphs, Trees, and different methods of traversal just fine. For example, I can code a tree from scratch by creating a class for the Node and then passing the root to a function that traverses the tree using these interconnected node objects.

+ +

Example below.

+ +
from collections import deque
+
+class TreeNode():
+    def __init__(self, val=None, left=None, right=None):
+        self.val = val
+        self.left = left
+        self.right = right
+
+def BFS_tree(root):
+    queue = deque([root])
+
+    while queue:
+        node = queue.popleft()   # O(1) pop, unlike re-slicing a list
+        print(node.val)
+        if node.left:
+            queue.append(node.left)
+        if node.right:
+            queue.append(node.right)
+
+ +

What I have trouble with is applying this when the input data structure is not already an object specially created for representing part of a Graph or a Tree.

+ +

For example, I see problems where you are supposed to use a DFS on a 2D array, with each second-level element being a vertex and the edges given by indices i+1 and i-1 (Example)

+ +

Or array of numbers representing a combo lock where each combo 0000-9999 is a possible element on a graph (Example)

+ +

Or an array representing the BFS traversal of a tree that then must be used to delete certain elements and keep track of the disjointed trees that result (Example)

+ +

At a very high level, this makes perfect sense.

+ +

But I don't really understand how to actually code it. What are some generalized approaches I can use when thinking of a different data structure in terms of a graph or tree? Obviously the approach of defining a Node class, populating the graph with a Node representation of every element, and then traversing it would work... but that is far too much work, especially since I want to solve these problems quickly, such as in an interview!
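
+ +

The closest I have come is keeping the graph implicit: the ""nodes"" are just coordinates (or lock combos, etc.) and a neighbors function computes the edges on the fly instead of building Node objects up front. A rough sketch for the 2D-array case:

+ +
from collections import deque
+
+def bfs_grid(grid, start):
+    def neighbors(r, c):
+        # Edges are computed on demand from the indices (grid assumed rectangular).
+        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
+            nr, nc = r + dr, c + dc
+            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
+                yield nr, nc
+
+    seen = {start}
+    queue = deque([start])
+    while queue:
+        r, c = queue.popleft()
+        print(grid[r][c])
+        for nxt in neighbors(r, c):
+            if nxt not in seen:
+                seen.add(nxt)
+                queue.append(nxt)
+
+bfs_grid([[1, 2], [3, 4]], (0, 0))
+

+ +

But I do not see how to generalize that instinct beyond grids, so a description of the general thought process would help.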

+",242231,,242231,,43971.15556,43971.61736,How to apply Graphs/Trees/BFS/DFS concepts to Lists,,1,0,,,,CC BY-SA 4.0, +410376,1,,,5/20/2020 4:46,,-2,70,"

I am building an app where you can invite your friends to join the platform and discuss various topics. (almost Reddit).

+ +

Functionality: When you create a topic you can enter the email of users you want to add. If some of the users are not registered with the platform, we send them an email, asking to get onboarded and see what discussion is happening on the topic.

+ +

Implementation: create an entry in the topic table. For each user being added, check if they exist in the user table.

+ +

If yes, then create entry in user_topic table.

+ +

If no, then add them to the user table, send an invite to join, and create an entry in user_topic.

+ +

Confusion: Should the user be added to the user table, even though he has not been onboarded to the platform? And when he signs up, we patch in all the details we ask for. Is this the right approach?

+ +

Solutions?:

+ +
    +
  1. Add the users to the table. When a user signs up, we POST a new user, but we patch in the details in the backend. This allows the user to see everything that happened on the topic when he signs up. Issues: we have created an entry for a user even though he has not signed up. You could end up creating entries for users who are unwilling.
  2. +
  3. We create a separate table where we keep track of the topics an unregistered email has been added to. When a new user is created, we check both the user and unregistered_user tables and add the user. We do not create a user entry until he signs up. Issues: we have to check two tables when doing any operation. Every reply made to the topic will have to use the two tables.
  4. +
+ +

Question: how do you do it in your application? I am willing to change the approach as well. What is the recommended approach? (Not sure if there is one.)

+",366146,,,,,43971.40833,Signing up users who have not registered,,2,1,,,,CC BY-SA 4.0, +410378,1,410381,,5/20/2020 5:56,,0,49,"

We have some entities in our code:

+ +

-""View"" and View has some attributes and also contains one or many ""SubViews""

+ +

-And each ""SubView"" has some attributes and contains one or more ""Tweet"" entities.

+ +

So far so good. The user edits Views/SubViews/Tweets (adding/removing tweets from subviews, adding/removing subviews from views, etc.) in our UI, and as everything is linked it updates fine. At the moment, after editing, the user can decide to make her changes ""Live"" or just discard them; there is no option to save and come back later.

+ +

So now we want to introduce a ""Draft"" system. So user can edit a View/SubView/Tweet in a draft state (This is to be persisted) and eventually decide to make it ""Live"". On making it ""Live"" the draft doesn't die. User can continue to work on the draft and make it live again. So they both live side-by-side and have to be linked.

+ +

I'm not sure how to go about introducing this in code, as ""Live"" is just implicit in the code/DB at the moment. Some options I could think of:

+ +
    +
  1. Add a flag to each class with a Live/Draft state indication. This will mean that I need to keep the flags in sync between the entities that are linked together
  2. +
  3. Add another ""Manager""-like class that holds the objects related to the Draft/Live versions of the views and manages the relationship between them, using only the current types. The Manager is the only one that knows what is a draft and what is a live object
  4. +
  5. Add completely new types ""DraftView""->""DraftSubView""->""DraftTweet"" and manage the relationship between Draft and Live entities
  6. +
+ +

At the DB level I'm thinking of just adding another column to the table to say Live/Draft.

+ +

More likely, there is a better way to do this and there are patterns to help with it, so I'm hoping I can get some opinions on my suggestions or better ways to do this. Hopefully I have made the question clear.

+ +

Thanks,

+",366154,,,,,43971.28472,"Class Design question concerning adding a new ""DRAFT"" state to an object",,1,0,,,,CC BY-SA 4.0, +410388,1,,,5/20/2020 14:24,,0,135,"

The first alternative can result in lines that are too long, and thus reduced readability when browsing the web.

+ +

The second one can result in better web readability, but might annoy command-line debugger users.

+ +

Example:

+ +
blah_bleh_blih_bloh = something(arg_a, arg_b, arg_c, arg_d, arg_e);
+
+ +
blah_bleh_blih_bloh =
+        something(
+                arg_a, arg_b,
+                arg_c, arg_d,
+                arg_e);
+
+",94239,,131624,,43971.92153,43971.92153,Which is better? One line per action or shorter lines?,,1,12,,43971.84514,,CC BY-SA 4.0, +410391,1,,,5/20/2020 16:31,,0,37,"

I'm currently designing an IoT device that connects to a home network, and an app to communicate with it over said network. I'm trying to find a way to reliably connect to the IoT device and make it easily discoverable. I've had a few ideas on how I could do this:

+ +

1) Local DNS. However, not all routers support this, so this may be a ""try it, and if it doesn't work do something else"" option.

+ +

2) Send packets to the entire range of IPs on the network on a specified port and wait for a correct response. This would be fine for smaller networks, but for places like universities and offices where the network is huge it would be very slow. Also, this could raise red flags with network analyzers.

+ +

3) Use SSDP (Simple Service Discovery Protocol). This seems like the best option I've come across; however, I've seen that some people block the protocol's multicast address due to the possibility of DDoS attacks, which wouldn't allow it to work. Also, I'm not sure if this would work on all routers.
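
+ +

As a hedged illustration of the probe idea behind options 2 and 3 (hypothetical Python; the port and payload are invented), a single UDP broadcast on a well-known port that only your devices answer avoids scanning every IP:

+ +
import socket

+PROBE, PORT = b'MYIOT_DISCOVER_V1', 37020   # both values invented here
+
+def discover(timeout=2.0):
+    # Broadcast one probe on the LAN; devices that know the magic payload
+    # answer, everything else stays silent.
+    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
+    s.settimeout(timeout)
+    s.sendto(PROBE, ('255.255.255.255', PORT))
+    found = []
+    try:
+        while True:
+            reply, (ip, _port) = s.recvfrom(1024)
+            found.append((ip, reply))
+    except socket.timeout:
+        pass
+    return found
+
+print(discover())   # [] unless a device on the LAN replies to the probe
+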

+ +

If there are any other solutions/ any arguments for/against the three options I've presented please let me know! Thank you in advance

+",366204,,,,,44213.6,Fnding the IP of a specific IoT device on a local network reliably without DNS?,,0,3,,,,CC BY-SA 4.0, +410392,1,410432,,5/20/2020 17:02,,3,110,"

Recently I was asked to refactor some code that leverages JavaScript's array.reduce() method because other developers felt the code was hard to read. While doing this I decided to play around with some JavaScript array iteration performance benchmarks to help me validate various approaches. I wanted to know what the fastest way to reduce an array would be, but I don't know if I can trust the initial results:

+

http://jsbench.github.io/#8803598b7401b38d8d09eb7f13f0709a

+

I added the test cases for &quot;while loop array.pop() assignments&quot; to the benchmarks linked above (mostly for the fun of it), but I think there must be something wrong with the tests. The variation in ops/sec seems too large to be accurate. I fear that something is wrong with my test case, as I don't understand why this method would be so much faster.

+

I have researched this quite a bit and have found a lot of conflicting information from over the years. I want to better understand what specifically is causing the high variance measured in the benchmark test linked above. Which leads to this post: given the benchmark example (linked above and shown below), why would the While Loop test case measure over 5000x faster than its For Loop counterpart?

+
//Benchmark Setup
+var arr = [];
+var sum = 0; //set to 0 before each test
+for (var i = 0; i < 1000; i++) {
+  arr[i] = Math.random();
+}
+
+
// Test Case #1
+// While loop, implicit comparison, inline pop code
+var i = arr.length; 
+while ( i-- ) {
+    sum += arr.pop(); // note: pop() mutates arr, leaving it empty afterwards
+}
+
+
// Test Case #2
+// Reverse loop, implicit comparison, inline code
+for ( var i = arr.length; i--; ) {
+    sum += arr[i];
+}
+
+
+

*Edited

+

In response to the downvotes: I want this post to be useful. I added images to provide context for the links, removed unnecessary details, and refined the content to focus on the questions I am seeking answers to. I also removed a previous example that was confusing.

+
+",366137,,-1,,43998.41736,43972.74861,What is the expected performance of While loops using `array.pop()` assignment vs other methods,,1,8,,,,CC BY-SA 4.0, +410394,1,,,5/20/2020 21:14,,5,405,"

Assume I have the following two functions:

+ +
function storeObject(object) {
+    // Connect to database
+    // Prepare query
+    // Execute query
+}
+
+function retrieveObjectWith(id) {
+    // Connect to database
+    // Prepare query
+    // Execute query
+    // Parse results
+
+    return object;
+}
+
+ +

And that I want to write tests for them:

+ +
function testStore() {
+    storeObject(storedObject)
+
+    // Connect to mocked database
+    // Prepare query to retrieve stored object
+    // Execute query and retrieve stored object
+
+    [retrievedObject] should equal [storedObject]
+}
+
+function testRetrieve() {
+    // Connect to mocked database
+    // Prepare query to store object
+    // Execute query and store object
+
+    retrievedObject = retrieveObjectWith(storedObject.id)
+
+    [retrievedObject] should equal [storedObject]
+}
+
+ +

I could simplify the second test if I ""trusted"" the results from the first, like this:

+ +
function testRetrieve() {
+    // Since I have a separate test for storeObject, I can use it here
+    storeObject(testObject)
+    retrievedObject = retrieveObjectWith(testObject.id)
+
+    [retrievedObject] should equal [testObject]
+}
+
+ +

Question is

+ +

Are there any cons to using storeObject(object) instead of rewriting its logic inside the test?

+ +

Is it considered a bad practice to do this?

+",220480,,,,,43972.29236,"Is it a bad practice for a unit test to ""trust"" the other?",,2,5,,,,CC BY-SA 4.0, +410398,1,410399,,5/21/2020 0:34,,-4,94,"

Is it mandatory to implement a REST API even if it doesn't make sense?

+ +

I have created an app that consumes REST services from other apps. Now that I have the final result, I wonder if I need to implement my own REST API, because as I try to implement one it seems that my API would have the same functionality as the APIs I'm using without adding anything new; hence my doubt.

+",352087,,110531,,43972.56528,43972.56528,Does it makes sense to implement a REST API for every app?,,1,3,,,,CC BY-SA 4.0, +410401,1,410416,,5/21/2020 2:42,,-2,162,"

I'm pretty new to C, so I have encountered many doubts about pointers. I've already searched a lot about this, but there are some things that are still not clear to me, and I also think this will help other beginners:

+ +

I'm currently working with something like a binary search tree, more specifically a dictionary. In both cases I need a pointer to the left subtree and another one to the right; in the dictionary, also a key and a value.

+ +

Let's suppose we have a struct node with key, value, left and right (just a summary idea), where left and right are pointers to other nodes:

+ +
dictionary new_dict = calloc(1,sizeof(struct node));
+new_dict->key=..;        
+new_dict->value=..;      // some values
+new_dict->left=..;    
+new_dict->right=.. ;    
+
+ +

I'm doing pretty well so far but in some cases I find myself in a situation like this:

+ +
dictionary->left->key
+
+ +

Or maybe more: ptr->ptr->ptr->ptr (any number of times)

+ +
    +
  • Does this represent a problem?

  • +
  • Does this affect the performance?

  • +
  • Is it a bad practice?

  • +
  • Should I try avoiding this?

  • +
+",366235,,1204,,43972.20417,43972.37153,What is the correct use of -> operator when working with pointers?,,2,3,,,,CC BY-SA 4.0, +410407,1,,,5/21/2020 4:17,,-2,35,"

I am planning to do a university project with a desktop application + web application + mobile application. I have planned to use these languages for the three:

+ +
    +
  • Desktop application: Java SE
  • +
  • Web application: PHP
  • +
  • Mobile application: flutter with Dart
  • +
  • Database: Mysql
  • +
+ +

Now I have a problem with how to connect these 3 to the same database.

+ +

For example: the desktop application can connect without any issue because both are on the same network. The problem is how to connect the mobile and web applications to a local database.

+ +

Kindly advise me on how to manage that situation.

+ +

Thank you!!

+",366240,,366240,,43972.18194,43972.20139,"Interconnected technique for web, mobile and desktop",,2,1,,,,CC BY-SA 4.0, +410412,1,,,5/21/2020 6:39,,0,54,"

Given I have a message bus that can handle commands and events. A command handler can dispatch an event. But should an event handler be able to dispatch another command? Or should event handlers only be able to dispatch further events?

+ +

Example:

+ +
    +
  1. PostReceivedEventHandler subscribes to PostReceivedEvent, invokes ValidatePostCommand
  2. +
  3. ValidatePostCommandHandler handles ValidatePostCommand, dispatches PostValidatedSuccessfullyEvent
  4. +
+ +

Is this legit? Or should the logic from the ValidatePostCommandHandler go into the PostReceivedEventHandler and this one should then dispatch the PostValidatedSuccessfullyEvent directly?

+ +

Because having event handlers that have no real business logic in them but just dispatch other commands feels somehow wrong to me.
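
+ +

For concreteness, a toy sketch (hypothetical Python; only the handler names come from the example above) of a bus in which an event handler merely translates an event into a command:

+ +
# Toy in-memory bus; everything here is invented for illustration.
+class Bus:
+    def __init__(self):
+        self.handlers = {}
+
+    def subscribe(self, name, handler):
+        self.handlers.setdefault(name, []).append(handler)
+
+    def dispatch(self, name, payload):
+        for handler in self.handlers.get(name, []):
+            handler(payload, self)
+
+bus = Bus()
+# Event handler with no business logic: it only issues a command.
+bus.subscribe('PostReceivedEvent',
+              lambda post, b: b.dispatch('ValidatePostCommand', post))
+# Command handler owning the logic; it emits the follow-up event.
+def validate_post(post, b):
+    if post.get('body'):
+        b.dispatch('PostValidatedSuccessfullyEvent', post)
+bus.subscribe('ValidatePostCommand', validate_post)
+bus.subscribe('PostValidatedSuccessfullyEvent',
+              lambda post, b: print('validated:', post['body']))
+
+bus.dispatch('PostReceivedEvent', {'body': 'hello'})  # prints: validated: hello
+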

+",114195,,,,,43972.35625,Message bus: should an event handler trigger other commands?,,1,1,,,,CC BY-SA 4.0, +410420,1,410428,,5/21/2020 12:41,,-2,82,"

In a couple of my programs, the program needs to know an IP address and a port to which it should connect or send data.

+ +

The solution I have right now is to ask for user input via the console, but this just doesn't look like a good long-term solution to me for passing the parameters to the program. In a later stage of the project, these programs will probably run in Docker containers managed by Kubernetes. (I haven't been able to research this yet, but it may make one solution better than another; my question is not only about that case.)

+ +

Other information which might be important: I am writing these programs in C++ and am quite new to software engineering and programming. I split up the code so that one class manages the functionality (e.g. sending data) and another manages getting user input, so I may write different versions with different parameter-passing solutions if necessary. The ""How to use"" does not have to be self-explanatory; for each program there will be a readme explaining the ""How to use"".

+ +

There will probably always be a default version, which is used if no parameter is given (or the default is explicitly chosen).

+ +

So my question: What are good solutions (and their pros and cons) for passing these parameters to a program?

+ +

Things I thought of myself:

+ +
    +
  • Asking for user input in the console (doesn't look nice, is probably incompatible with docker)
  • +
  • A file which contains this information and from which the programs read
  • +
  • Passing this information as arguments to main
  • +
+ +

Are there other solutions I didn't think of? What are the usual ways to handle this? Any hint is useful! Thank you!

+",366265,,366265,,43972.53125,43972.625,Best way to pass an optional parameter to a program,,1,2,,,,CC BY-SA 4.0, +410421,1,,,5/21/2020 13:13,,-2,151,"

As I'm spending more time working with SQL, I'm starting to feel like this language promotes logic obfuscation. Here are some examples:

+ +

An INNER JOIN in a query can act as an implicit filter that reduces the result set into a subset. However, to understand what is being filtered, you need to have knowledge of the tables involved, which is information not present in the query syntax. A query like

+ +
SELECT *
+FROM quotations
+INNER JOIN shipments USING(quotationid)
+
+ +

...can either give you quotations that have been shipped if shipments contains only shipment records that actually happened, or it could give you an undefined subset of quotations if shipments contains shipment entries that go through multiple stages, and a line in shipment can mean anything from 'shipment in preparation' to 'shipment partially sent' to 'shipment reached destination'. Needless to say, unless you know this information, you can't look at the above query and figure out what it means.

+ +

Then we can look at something like this:

+ +
SELECT *
+FROM quotations
+LEFT OUTER JOIN shipments USING(quotationid)
+WHERE shipment_date IS NULL
+
+ +

Clearly this query returns the set of quotations that have no shipment records, but the fact that you obtain this by filtering for shipment_date IS NULL is insane from a logic perspective. You can say that this is a contrived example and that I should just pick a better column, but there isn't always a very logical, self-documenting alternative to choose from, so bear with me on this.

+ +

Then there's the fact that naming result sets is sometimes a compromise between properly describing the result set data, and communicating its intended meaning. Let's say that you want to use a subquery on shipments that returns some information about foreign shipments from last year. All you really need from this subquery is the answer to whether a shipment is foreign or not, but you also need to return a quotation id to JOIN it with the quotations table, and you also need to return the shipment date to filter by last year so you might as well do it from this sub query, and maybe you actually care about some other extra details like the specific country. So what do you call this collection of quotation_id, is_foreign, shipment_date, and country? Either something fairly generic which won't communicate its indented purpose but will encompass all 4 data points properly, or something very descriptive but long and cumbersome. None of these is an appealing option.

+ +

I can rant some more, and I know these aren't perfect examples, but I hope you get my point. It seems to me that declarative languages pose some unique challenges for code clarity that require unique tools and guidelines to handle, and I wonder if there's some good literature on this?

+",366269,,,,,43972.73958,Are there any guidelines about making SQL queries easier to comprehend?,,4,16,,,,CC BY-SA 4.0, +410422,1,,,5/21/2020 13:48,,0,52,"

I am playing with a simple personal project (a simple REST API application) and I am currently struggling with a kind of design problem.

+ +

The problem:

+ +

How to insert an operation ID (request ID, an identifier of each operation) into each ""layer""?

+ +

Context:

+ +

Let's say I have a UserRepository trait (interface) with 2 implementations (say, InMemory and Database), which is used by multiple services (say, CreateUserService, UpdateUserService). All these services are then used by a facade, let's say the UserFacade. This facade is called by the CLI tool or the Users REST handler. What I would like to be able to do is create a unique Operation ID for each ""operation"" (request or CLI call). This ID would be used for logging across the whole application; I would like to be able to access it in the repositories, services and facade. Later I would like to read the logs and trace how the request was processed by each of the layers.

+ +

Possible solutions that come to my mind are:

+ +
    +
  • Pass this operation ID to each method as an additional parameter. I consider this ugly, since the interface would be polluted by the extra parameter, unrelated to the business logic.

  • +
  • Create a whole structure of facades and services using the operation ID (the ID would be provided to each instance via the constructor for each request or CLI action). I like this approach, but I think it would be performance/memory heavy because it would need to create a lot of objects for each processed request.

  • +
  • Some ""global state"" (thread-local...) storing the operation ID?
  • +
+ +

I am trying not to specify any concrete language, since I consider this a general problem, more related to the design than to the language or technology used.
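
+ +

Purely to make the third option concrete in one language (a hedged Python sketch; the layer functions are invented), a request-scoped context can carry the ID without polluting any signatures:

+ +
import contextvars, logging, uuid

+logging.basicConfig(level=logging.INFO, format='%(message)s')
+# One context variable shared by all layers (the 'global state' option).
+operation_id = contextvars.ContextVar('operation_id', default='-')
+
+def handle_request():                 # the edge of the system (REST/CLI)
+    operation_id.set(uuid.uuid4().hex[:8])
+    create_user_service()
+
+def create_user_service():            # no extra parameter needed
+    logging.info('[%s] service: creating user', operation_id.get())
+    repository_save()
+
+def repository_save():
+    logging.info('[%s] repository: INSERT ...', operation_id.get())
+
+handle_request()
+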

+",365683,,,,,43973.4,Inject an Operation ID across multiple application layers,,2,1,,,,CC BY-SA 4.0, +410431,1,410438,,5/21/2020 17:46,,0,46,"

The project I am working on is a SaaS application with multiple payment tiers. Each one has multiple limits for different actions. One example would be that a free user can only create 1 space, a premium user can create 5 spaces, and a pro user can create unlimited spaces.

+ +

I am thinking about a system that has the following permissions:

+ +
    +
  • create-1-space
  • +
  • create-5-spaces
  • +
  • create-unlimited-spaces
  • +
+ +

In all examples I have seen, a permission is used just to protect the endpoint. In this scenario, however, the endpoint would need to query the number of spaces, meaning that the permission would not be self-contained.
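
+ +

One common alternative, shown only as a hedged sketch (hypothetical Python; all names invented): keep a single boolean permission and move the numeric limit into plan data, so the count check lives next to the quota rather than inside the permission name:

+ +
# One boolean permission; the limit is data attached to the tier.
+PLAN_LIMITS = {'free': 1, 'premium': 5, 'pro': None}   # None = unlimited
+
+def can_create_space(plan, current_space_count):
+    limit = PLAN_LIMITS[plan]
+    return limit is None or current_space_count < limit
+
+print(can_create_space('free', 1))      # False: limit reached
+print(can_create_space('premium', 1))   # True
+print(can_create_space('pro', 10000))   # True: unlimited
+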

+ +

What is the recommended practice for controlling this usage?

+",365981,,365981,,43972.83611,43973.66111,How to limit resource creation with RBAC permission?,,1,0,,,,CC BY-SA 4.0, +410434,1,410460,,5/21/2020 18:30,,-2,54,"

We will be streaming data from phones (<= 5000) to a server. We were previously sending the data using MQTT to AWS IoT. Now we want to run this locally. The estimated ingest rate is 5000 phones * 500 bytes = 2.5 MB per second (up to 5 MB per second). 5 MB/s = 5*8 = 40 Mbit/s, i.e. we are looking at something like 20 to 40 Mbit/s. This is not a bottleneck; however, the number of independent source instances is.

+ +

So, I was trying to scale down the problem and use something light which can stream 1 or 2 msgs per second from 5000 devices. I tested WebSockets, but one socket is only able to read asynchronously from 200 end clients. Is it better to code this up ourselves, or to use something like Eclipse MQTT, or WebSockets into Elasticsearch or Kafka? We had a gRPC instance before that was sending a larger packet, so it was draining the phone battery faster, as the structure was JSON.

+ +

I want to use a simple system so that it is easy for the end client to maintain once the software is deployed.

+",366290,,,,,43973.82153,Event Sourcing Architecture: Ingesting event data from Phones,,2,0,2,,,CC BY-SA 4.0, +410441,1,,,5/21/2020 23:37,,1,65,"

We have an SSO application that provides authentication for a native mobile application, as well as for a web application. There are some features that the web application has that the mobile application does not, such as account management. We would like to keep the user logged in (without having to sign in again) when directing them from the app to the web browser, let's say via a settings button click in the native application.

+ +

What is the recommended approach to carry along the mobile application's session to the web browser?

+ +

Some clarifying points:

+ + +",56871,,,,,44003.97778,Recommended strategy for maintaining a session when navigating from app to browser?,,2,3,,,,CC BY-SA 4.0, +410442,1,410445,,5/21/2020 23:52,,0,152,"

In general terms, what's the added value in using the techniques described in the iconic book ""Modern C++ Design""? Is it simply the ability to write reusable code that's easily extensible? Or are there particular classes of applications where development can be greatly facilitated through the use of such techniques?

+",366302,,,,,43973.12083,What's the added value in the sophisticated use of C++ templates advocated by Alexandrescu and others?,,1,3,,,,CC BY-SA 4.0, +410443,1,,,5/22/2020 1:11,,0,92,"

In SAP ERP (probably in other SAP systems as well), you have transport orders, which you could see as ""commits"", I guess, in git terms. These transports are sent from the development system to the test or production system and then ""activated"", which means compiled in their terms.

+ +

With git you normally solve this with different branches, or by tagging one branch for a release. Could you, at least in theory, keep the whole ERP system in a git repo and create a branch for each system? Would this not solve the issues (needing to care about the import order of transports, unwanted different states in different systems) that the transport system has?

+",365017,,,,,43973.52917,Could git be implemented in SAP ERP instead of the transport system?,,2,2,1,,,CC BY-SA 4.0, +410447,1,410454,,5/22/2020 3:25,,-2,174,"

Say we have a boolean variable indicating whether an input array is 2D or 3D (just an example). Is it acceptable to name it is_2d_or_3d instead of is_2d, to make it clearer that if false, it indicates the input is 3D?

+ +

I've personally been doing this for quite some time. I don't remember if I picked it up from some guidelines or just thought of it myself. Are there any conventions suggesting this? If not, are there any drawbacks to this approach (besides not being recommended by guidelines)?

+",366310,,,,,43973.29722,"Is it acceptable to use ""or"" in the name of a boolean variable?",,3,5,,43973.43125,,CC BY-SA 4.0, +410451,1,,,5/22/2020 5:36,,-2,53,"

I have some external SDK library that makes IO calls (either networking or database) in the form of blocks, like so:

+ +
SomeClass.doWork(success: {}, failure: {})
+
+ +

Now I need to chain about 60 different calls because we are working on data replication. Each operation is distinct enough that it's not the same call, but the principle is there: all of these take a success and a failure block.

+ +

What is the best way to organise this spaghetti:

+ +
let failureBlock: () -> Void = { // something
+}
+
+SomeClass.doWork(success: { [unowned self] in
+   self.runChecks(success: {
+       SomeOtherClass.somethingElse(success: { 
+           SomeClass.doWork(success: { [unowned self] in
+              self.doMore() ///... and on and on she goes
+           }, failure: failureBlock)
+       }, failure: failureBlock)
+    }, failure: failureBlock)
+}, failure: failureBlock)
+
+ +

Update: I am concerned with performance and memory management over legibility, as my stack traces look quite ugly with lots of thunk and closure frames in them :/

+",111197,,111197,,43973.31667,43974.32708,Best way to deal with lots of nested closures in Swift,,1,3,,,,CC BY-SA 4.0, +410459,1,410462,,5/22/2020 8:28,,1,91,"

I'm trying to understand what microservices are and MINIX's microkernel architecture seems to be a good analog. (I'm a systems engineer.) In my understanding:

+ +
    +
  • Microservices are like user space drivers
  • +
+ +

Applications don't ask the kernel to do all the work. Instead they send requests to various services (drivers) to eventually get the job done. If a service crashes, it is easy to just restart it and hopefully recover the previous state, and the other services are not affected.

+ +
    +
  • Microservices add communication overhead
  • +
+ +

Like userspace drivers that need more resources due to the extra IPC, microservices cause more HTTP (or whatever protocol) requests to be sent so one service can ask other services to do work.

+ +
    +
  • Microservices can be partially upgraded without downtime
  • +
+ +

Like in MINIX I can upgrade my EXT2 driver to also support EXT4 without rebooting, microservices can allow some parts to be upgraded while other parts still run properly (maybe the requests get delayed, but heck).

+ +
    +
  • Microservices do not need to speak HTTP.
  • +
+ +

Something like DBus (an ultra-low-latency RPC used by open source UNIX-like systems such as BSD, Linux and MINIX) can also be used to build microservices.

+ +

Are my comparisons fair? Did I get anything wrong?

+",366324,,,,,43973.39514,Comparing microservice to MINIX microkernel,,2,1,0,,,CC BY-SA 4.0, +410464,1,,,5/22/2020 10:13,,-1,97,"

I'm currently evaluating Event Sourcing and CQRS for an implementation of a new business requirement at my day job. While I can't really speak about the actual business problem, I can give a few reasons for why we think that Event Sourcing might be a good fit:

+
    +
  • great auditing capabilities based on the history of events
  • +
  • "travelling back in time" to recreate a previous state of an aggregate (e.g. for debugging purposes)
  • +
  • the ability to create new projections that take the full history into account
  • +
+

Since I can't go into detail about the exact domain we're in, I will describe my problem using the domain described in this Kata dealing with quiz games.

+

I think I got the general idea of Event Sourcing and how CQRS links to it. However, all examples I can find use domains with clear separations between aggregates as well as between different instances of the same aggregate (in the Kata mentioned above, quizzes and games have a clear relationship. There's no interdependence between different quizzes or different games).

+

The problem

+

In my case I have the problem that it must be possible to merge different instances of the same aggregate (in our sample domain this could mean that it must be possible to merge different quizzes together into one quiz) as well as undoing this merge later on (reconstructing the two original quizzes from the merged one).

+

This constraint adds quite some complexity when it comes to constructing the current state of an aggregate, because it's necessary to read the whole event stream from the beginning to be sure that all relevant events are taken into consideration. It's not possible to partition the event stream in a useful way because it's impossible to tell which aggregates will be merged later in the future. It might even be a problem when the event stream gets partitioned, because the temporal order of related events gets lost.
+From what I understand, partitioning the event stream allows for a fast provision of the events that are necessary to build up the current state of an aggregate. For instance, if I want to know the current state of the quiz with ID 124ecf, I technically could filter the event streams to just have the events for this exact ID which would drastically reduce the number of events. If this is not possible, like in my case, reading the event stream ad hoc to recreate the state of an aggregate will become very slow and impractical over time.

+

The solution I came up with so far

+

The only solution for this problem that seems to be possible to me is to work with rolling snapshots for all necessary projections. The snapshots would update themselves continuously, building up a state optimized for their specific use case (processing commands, answering queries etc.).
+I'm skeptical about this idea, because it requires quite some effort. Most of the implementations of typical applications don't require rolling snapshots for most use cases because building up the desired state from the event stream is fast enough. This simplicity is lost in my case.
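
+<br>

To make the shape of such a rolling snapshot concrete, a minimal sketch (hypothetical Python; MemoryStore and every method name are invented stand-ins for a real event store) that folds only the events newer than the last snapshot:

+<br>
class MemoryStore:
+    # tiny in-memory stand-in so the sketch can actually run
+    def __init__(self, events):
+        self.events, self.snap = events, None
+    def latest_snapshot(self, agg_id):
+        return self.snap
+    def events_after(self, agg_id, version):
+        return self.events[version:]
+    def save_snapshot(self, agg_id, state, version):
+        self.snap = (state, version)
+
+def load_aggregate(store, agg_id, apply, empty_state):
+    snapshot = store.latest_snapshot(agg_id)      # (state, version) or None
+    state, version = snapshot if snapshot else (empty_state, 0)
+    for event in store.events_after(agg_id, version):
+        state = apply(state, event)               # fold only the new events
+        version += 1
+    store.save_snapshot(agg_id, state, version)   # the snapshot rolls forward
+    return state
+
+store = MemoryStore(['quiz-created', 'question-added'])
+print(load_aggregate(store, 'quiz-1', lambda s, e: s + [e], []))
+<br>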

+

The question

+

My question could be split up in several parts:

+
    +
  • Is it a good idea to use Event Sourcing for domains like these where it's not possible to draw clear boundaries between different instances of the same aggregate?
  • +
  • Does it make sense to heavily rely on using rolling snapshots to get the desired performance?
  • +
  • Is there another way other than rolling snapshots to implement this?
  • +
  • I can't think of a way for partitioning the event stream. Am I missing something? Are there some techniques that allow partitioning/sharding under the given circumstances?
  • +
+",192600,,209774,,43999.89514,44000.46806,Merging aggregates with Event sourcing,,2,4,1,,,CC BY-SA 4.0, +410467,1,,,5/22/2020 12:36,,-2,113,"

My question is related to designing new software. Let's say I work in a medium-sized company that has a core product. Now we want to start new software (the purpose of the application is different from the core product) with a completely different approach. There is a document with the main features from a business point of view.

+ +

Now I think we should specify a solution and, last but not least, specify the language, libraries, etc. Am I right? If so, how do I do it? I know how to document the last part with UML, use cases, etc. But how do I document that we want the software to be able to run as a standalone service or as a replicated service in the cloud, the need for authentication, etc.?

+ +

Is there any standard, methodology, or something?

+",366335,,1204,,43973.57014,43974.47361,Architecture of new software documentation,,2,9,,,,CC BY-SA 4.0, +410472,1,410478,,5/22/2020 16:21,,1,208,"

I am working on a WebAPI application which follows a layered approach: Controller > Service Layer > Repository Layer > Entity Framework Core (SQL / Cosmos). The view is in Angular.

+ +

In many of our APIs, we need some transformation of the Request object (the DTO sent by the UX) into a domain entity that my repository understands. This transformation is usually handled in the Service layer. This is the standard approach, I think.

+ +

Now I have a Request object (the DTO sent by the UX, shown below) which is a simple class, and I don't need any transformation from DTO to domain entity. In fact I have a DbSet matching exactly this, and the database table has exactly these 3 columns only. In this case, I end up doing the transformation from DTO to domain entity unnecessarily.

+ +
    public class BookDTO //Received in the API request
+    {
+        public string Name { get; set; }
+        public string Author { get; set; }
+        public decimal price { get; set; }
+    }
+
+ +

To avoid this pointless mapping / transformation I could use the same DTO across all the layers (Controller to Service to Repository), but I feel this is not the right way to do it. (Do let me know if there is nothing wrong with this approach.)

+ +

Essentially, either I will be doing a transformation of DTO to domain model even when they have exactly the same attributes, or I will end up referencing the DTO in all layers, including the repository.

+ +

I am not sure if these are the only two options for me or there is a gap in my understanding.

+",366348,,366348,,43973.68889,43973.79931,Is it necessary to have DTO to domain entity mapping always?,,1,0,,,,CC BY-SA 4.0, +410473,1,410475,,5/22/2020 16:47,,-4,71,"

How to tell if we have unused require expressions?

+ +

For example:

+ +
require 'colorize'
+require_relative './helpers'
+
+puts 'Hello, world!'
+
+ +

This is a very simple example, but there can be larger/more complex cases where, after some refactoring, we forget to remove the requires. Is there an easy way to identify those unused requires? RuboCop doesn't seem to have a rule for that.

+",185535,,185535,,43975.28264,43975.28264,How to identify require expressions that are not needed?,,1,1,,,,CC BY-SA 4.0, +410474,1,,,5/22/2020 17:31,,0,128,"

I'm trying to find the correct flow to manage this kind of development, where A and B are two independent features, and C is a third feature that relies on A and B.

+ +

An obvious approach would be to develop the three features in a sequential way, such as:

+ +
----A----B----C
+
+ +

But I want to be able to push A and B for review while still working on C.
+If I do branches for A and B, I'm not sure how to handle C:

+ +
----A--
+ \--B--   ?-C
+
+ +

Is there a good canonical way of doing this?

+",366352,,209774,,43974.37917,43974.42014,What's the correct Git flow to develop on two independent features + one feature that relies on both?,,2,1,,,,CC BY-SA 4.0, +410476,1,,,5/22/2020 18:43,,1,28,"

I am looking for a way to automate deployment of configmaps to our dev/qa/prod environments. Right now our applications rely on kubernetes configmap/secrets that we manually apply in the cluster using

+ +

kubectl apply -f {name-of-configmap}.yaml

+ +

This is fine and all but before I send my code to our dev environment for integration testing/load testing I have to go and apply the new configmap if I've made changes. Our configmaps aren't kept in the same repository as our code because we don't want the values leaking into git. It would be ideal if we had to use git or some version control that if I work on feature/A and I make a change to the configmap for the application it will be automatically applied as long as I set the values in some more abstract layer. Is there any pattern/tool I can use to do this? Currently for deployment we create a Helm chart for our application and it goes through CircleCI for CICD testing/deployment. But the configmap needs to be already applied before we can deploy.

+",324504,,,,,43973.77986,How to manage kubernetes configmap values in version control across environments?,,0,0,,,,CC BY-SA 4.0, +410477,1,,,5/22/2020 19:04,,1,31,"

Solution as it is right now

+ +

I have this solution where I gather information from a proprietary product of a different company in various sites. The solution is based on a single go binary that contains everything needed to run the application (even an embedded sqlite) and is deployed to Windows computers running the proprietary software. All those Windows computers are connected to the Internet and sit behind a firewall that allows all outgoing traffic but incoming traffic is blocked and the owners of the computers don't have the knowledge to configure their firewalls (to be honest for most of them it is already a demanding task to install the go program)

+ +

Users can access the data that is stored in a sqlite database using their mobile device or a different computer (1). The components needed for this are installed on a server that provides its services over the Internet. I created REST webservices (2) that send graphql queries to the computers on the respective site via RPC over NATS (3). The go program installed on the site computers (4) runs those queries against the local sqlite (5) and sends the result back to the NATS queue (6) (7). The result is taken from the NATS queue and returned to the caller by the same REST service that processed the incoming call (8)

+ +

+ +

Improvement I'm looking for

+ +

This setup works fine when I query single sites. But I should also be able to query several sites in parallel and retrieve a single ""recordset"".

+ +

Here's a made up example:

+ +

Lets assume there is a Persons table available on each site. I can query that table by running SELECT SiteNumber, PersonName FROM Persons

+ +

I need to run that query on for example 3 sites and merge/join them into one result that would look like this:

+ +
2, Daisy
+2, Eve
+2, Adam
+5, Bob
+7, Alice
+
+ +

The SQL statements I need to run are much more complicated than this; I would need to do GROUP BY and ORDER BY, for example. This excludes approaches where I would, for example, create three maps and join them into one.

+ +

So far I intentionally don't store or accumulate data on the server. What are my options for post-processing the data? I would rather not INSERT all subresults into a temporary table on the server. I found no distributed database that can be embedded into Go and works across firewall borders.

+",180333,,,,,43973.79444,Offloading database joins to IOT devices,,0,0,,,,CC BY-SA 4.0, +410479,1,410501,,5/22/2020 19:16,,-2,60,"

I'm working on a project that is an API with many controllers and modules. Which of the following is the best architectural practice for organizing my API controllers by dll (.NET 4.7 WebAPI)? Why?

+ +

Note: each item inside a module is a dll.

+ +
    +
  1. One dll for each controller. Each csproj contains only one Controller.cs file:
  2. +
+ +
+

Module 1
+ -- API Controller 1
+ -- API Controller 2
+ -- Domain
+ -- Application
+ -- Infra

+ +

Module 2
+ -- API Controller 3
+ -- API Controller 4
+ -- Domain
+ -- Application
+ -- Infra

+
+ +
    +
  1. Group controllers in projects by context
  2. +
+ +
+

Module 1
+ -- API
+ -- Domain
+ -- Application
+ -- Infra

+ +

Module 2
+ -- API
+ -- Domain
+ -- Application
+ -- Infra

+
+ +
    +
  1. Create only one csproj with all controllers.
  2. +
+ +
+

Module 1
+ -- Domain
+ -- Application
+ -- Infra

+ +

Module 2
+ -- Domain
+ -- Application
+ -- Infra

+ +

API (controllers)

+
+ +

Note: I'm using DDD (as much as possible), separating my business rules by context. Each context has its layers: domain, application and infra.

+ +

I started by centralizing all controllers of all contexts in one API project. It started to get messy, so I broke them into separate projects. Is there a performance issue, or an architectural pattern I should follow, to organize my project?

+",339793,,339793,,43973.81736,43974.01389,How to organize my controllers in projects in .Net API?,<.net>,1,2,,,,CC BY-SA 4.0, +410482,1,,,5/22/2020 7:45,,204,36766,"

I have found this also happens in my team, although the author may have exaggerated the situation a little bit.

+
+

Scrum is a way to take a below average or poor developer and turn them +into an average developer. It's also great at taking great developers +and turning them into average developers.

+

Everyone just wants to take something easy off the board that you can +get done in a day so you have something to report in tomorrow's daily +scrum. It's just everyone trying to pick the low hanging fruit. +There's no incentive to be smart and to take time to think about +solutions, if nothing is moving across what are you even doing? You're +letting the team down! The velocity is falling!

+

I think if you have hard problems to solve you solve them by giving +them to smart people then leaving them alone. You don't +constantly harass them every day demanding to know what they did +yesterday and what they plan to do today. With daily updates where is +the incentive for the smart people to work on the hard problems? They +now have the same incentive as the junior developer; find the easiest +tickets to move across the board.

+

Sometimes I will want to just be alone and think about a solution for +a few days. If I do that though I'd have nothing to say at the scrum. +So instead I'll pick the user story where the colour on a front end +was the wrong shade of green or a spelling mistake! See, I knocked out +2 stories in one day, before lunch! Go me!

+

...

+
+

I don't fully agree with his words. E.g., I agree with one comment said, it's not that they (managers) don't trust them, it's that they don't get things done without constant supervision.

+

When a great developer becomes an average developer there are always multiple reasons, but I do find that the daily scrum could be one of the reasons. So how do I prevent this side effect of scrum meetings?

+

I also realize it is easier said than done, but I'd like to see how others see this problem.

+

----- update -----

+

After reading all the answers I have got so far I realize I need to add some information to make my question more relevant.

+

But before I get into that, I want to repeat the words Martin Maat gave in his answer: "The mere fact that so many people feel the need to say something about it is an indicator of the frustration Scrum causes."

+

I totally agree! When I first asked the question I already expected some answers would be "oh, you don't do scrum right!"

+

Some corrections I want make to my original question are,

+
    +
  1. I used the word "great developer" and I probably should just have said a decent/good developer, because I have seen that word choice sidetrack answers. Besides, throughout my career I have never worked with great developers, so I shouldn't have used it in the first place. What I meant was that I see from time to time that scrum has made a good developer perform less well.
  2. +
  3. Some answers focus on the sentence "it's that they don't get things done without constant supervision" and believed that was a micromanaging issue. But this was not my case, e.g. I don't micromanage. The problem I experienced (again from time to time) is good/tech-savvy developers are not necessarily business-savvy ones. Sometimes they will focus on perfecting their technical solution too much without realizing we have a product to deliver in the end. Other times it is a cross-function feature that needs coordination, especially each team may have its own priority/schedule. That is why they need supervision. But I guess I shouldn't just copy the word "constant supervision" from the original post and should not use constant in the first place. But again, if someone argues that "great developers" and "great team" don't do that, I have no counterargument then.
  4. +
  5. One answer said "the daily scrum somehow turned into a competition who has completed the most tickets". I never experienced that. A mature team does not do that (a mature is not necessarily a great team though). Has anyone experienced that?
  6. +
  7. For those suggested me to read the agile manifesto, my counterargument is this was a long book review I wrote in 2008 (12 years ago) for the book "The Enterprise and Scrum (Developer Best Practices)" by scrum cofounder Ken Schwaber. I listed my review here, not to show off, but to show (1) I believe I have done scrum long enough to see its strength and weakness. (2) I know what agile is about.
  8. +
+",217053,Qiulang,591,,44012.35486,44180.07917,How do I prevent Scrum from turning great developers into average developers?,,22,21,123,,,CC BY-SA 4.0, +410509,1,410528,,5/23/2020 9:01,,4,235,"

I realize this is a subjective question, but after studying F# and functional programming I've seen the benefits of immutability.

+ +

Now, I'm thinking of getting rid of mutability entirely in C# unless specifically required. In essence, where before I might have written a function with a signature like -

+ +
public static int F(List<string> input)
+
+ +

I'm now paranoid that the function might mutate the list unless I want it to. So now I'm thinking of doing something like

+ +
public static int F(ImmutableList<string> input)
+
+ +

Also, if I actually do want to mutate a list, before I'd do something like

+ +
public static void F(List<string> input)
+
+ +

and mutate it directly, but now I'm thinking of doing something like this instead:

+ +
public static ImmutableList<string> F(ImmutableList<string> input)
+
+ +

and just re-assigning the new list.

+ +

I don't see this pattern widespread in C# though, even with the benefits of immutability. I'm wondering whether it would be wrong for me to transition to function signatures like this in C#, and if there's a reason this isn't more widespread?

+",355487,,,,,43975.47222,Incorporating immutable signatures in C#?,<.net>,2,7,,,,CC BY-SA 4.0, +410514,1,,,5/23/2020 12:52,,-4,97,"

Java collection streams were introduced in Java 8, which came out in March of 2014.

+ +

By that time, we already had well-established mechanisms for manipulating collections in several other languages, at least two that I can speak of:

+ +
    +
  • In C#, Linq provides extension methods such as Select(), Where(), etc. including collector methods such as ToList(). Parallel extensions have been supported with PLinq since 2010.

  • +
  • In Scala, built-in collections have functions such as map(), flatMap(), filter(), etc. including collector methods such as collect(). Parallel collections have been supported since 2011.

  • +
+ +

Since the creators of Java decided that Java collection streams were worthy of inclusion in Java 8, they must think that Java collection streams offer something which the (arguably simpler) pre-existing mechanisms do not.

+ +

So, what do Java collection streams offer which Scala and C# collections do not?

+",41811,,41811,,43974.55347,43975.53611,What is the benefit of Java collection streams over C# or Scala collections?,,2,9,,,,CC BY-SA 4.0, +410515,1,,,5/23/2020 13:14,,0,75,"

I have a collection /users/{userId}/tools

+ +

and I want to GET and POST to that collection. Can I have a different representation of that object based on the method?

+ +

For example, for POST I want to send only

+ +
{""name"": ""Toolname"", ""material"": ""MaterialName""}
+
+ +

so I built a DTO with only those 2 fields.

+ +

And for GET I want to retrieve only the name

+ +
[{""name"": ""Toolname""}, {""name"": ""AnotherToolName""}, ...]
+
+ +

so I built a DTO with only the name.

+ +

Because it's the same URI, can I do that?

+",366413,,83178,,43974.56181,43974.97778,Different fields for GET and POST methods in REST,,1,3,,,,CC BY-SA 4.0, +410518,1,410525,,5/23/2020 15:44,,0,124,"

Assume that there is a library that provides an interface to its users in order to call them back. Users implement this interface and receive notifications from the library.

+ +

Let's say, ICallback is the interface, and Notify1(arg1), ..., Notify5(arg5) are the interface methods.

+ +
interface ICallback;
++ Notify1(arg1)
++ ...
++ Notify5(arg5)
+
+ +

This library also provides a concrete class of the ICallback and distributes this class with the library package.

+ +
class CallbackAdapter : implements ICallback
++ Notify1(arg1)
++ ...
++ Notify5(arg5)
+
+ +

The owner of the library calls this concrete class an ""adapter"". Users are encouraged to use this class instead of the interface because it is claimed that:

+ +

a. You may not want to implement all notify methods 1 to 5 because you want to keep your code clean. If you extend this adapter class instead of implementing the interface directly, then you can select which methods to override. (This is the main motivation written in the class document.)

+ +

b. If the interface is changed, say Notify6() is added, then users don't have to change anything in the client code. No compilation error occurs when the version is bumped. (This is an extra motivation suggested by people who extend the adapter class.)

+ +

Note that some overridden methods of the CallbackAdapter class aren't just empty methods; they contain code and do some work with objects provided by the library (args).

+ +

This design really disturbs me. Firstly, I will explain why I'm not comfortable with it and then I will suggest a solution to above motivations. Finally, the question will be asked at the end.

+ +

1. Favor object composition over class inheritance

+ +

When user code extends CallbackAdapter, there will be coupling between the user code and an external class. This can break the user code easily, since encapsulation is terribly broken by inheriting an external class. Anyone can have a look at Effective Java 1st Edition, Item 14 for more details.

+ +

2. Ambiguous adapter pattern

+ +

I think the adapter pattern is misused here, unless this is actually a different pattern that merely has an ""adapter"" suffix in its name.

+ +

As far as I know, the adapter pattern is used when there is an external alternative implementation that we want to use but our interface doesn't match to use alternative solution directly. Hence, we write an adapter to gain capabilities of alternative implementation (adaptee).

+ +

For all the adapter examples that I've seen, there is an adaptation to a concrete class, a class which does a real job and have a capability. However, for the given example, the adaptation is against an interface but not a concrete class.

+ +

Is this a valid adapter as we know it? I don't think so.

+ +

There is a statement in the applicability section of the Adapter pattern in the GoF Design Patterns book:

+ +

Use the Adapter pattern when you want to use an existing class, and its interface does not match the one you need.

+ +

I think developers misinterpreted the word ""interface"" in this statement. The author means the adaptee's interface, which declares the methods of the concrete adaptee class. It seems that developers thought like this: there will be a class that a user created, this class will implement the interface provided by us (as library developers), the user will want to keep using this existing class, one day we will change the interface and it won't match the user's code, so we must provide an adapter and distribute this adapter with our library.

+ +

I have just tried to understand the motivation for this adapter design. The above reasoning may be wrong, but that doesn't make the code safer; it's still insecure because of 1.

+ +

3. Distribution of adapter class

+ +

If there is to be an adapter class, it shouldn't be in the library package; it should be in the user's package. This makes more sense to me because the user adapts their code to work with new implementations.

+ +

4. It's not good to break contract silently

+ +

Interfaces define behavior and are used to make contracts among participants. If a contract ends and a new contract starts, I think both sides must be aware of this change. Breaking a contract silently, as in the above example, may produce undefined behavior that we can't notice at compile time but encounter at run time.

+ +

Solutions to Motivations

+ +

a. Just override methods and keep them empty if you don't want to do anything with them.

+ +

If users can work without the new method, Notify6(), this smells like a large-interface problem. Segregation may be considered.

+ +

If you insist on having such a feature, you can design a callback registration mechanism. Users can register any methods they want: any of them, all of them, or none of them. Keep the function objects in the library, and call back the registered functions. This seems a better OOP design compared to using inheritance.
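
+ +

A minimal sketch of that registration idea (Python only for brevity; all names are invented):

+ +
# The library keeps function objects instead of requiring subclassing.
+class Notifier:
+    def __init__(self):
+        self._callbacks = {}                 # event name -> list of callables
+
+    def register(self, event, fn):
+        self._callbacks.setdefault(event, []).append(fn)
+
+    def notify(self, event, *args):          # invoked by the library
+        for fn in self._callbacks.get(event, []):
+            fn(*args)
+
+notifier = Notifier()
+notifier.register('notify1', lambda arg1: print('got', arg1))
+notifier.notify('notify1', 42)   # only registered callbacks fire
+notifier.notify('notify6')       # unregistered events are simply ignored:
+                                 # no recompilation, no silent override
+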

+ +

b. Just avoid silent contract breaks. If an interface changes, it's more secure to see compilation errors and solve them one by one at compile time.

+ +

The discussed design is currently in use in a widely used open source project. I've explained the thoughts in my mind about it. All of them seem sensible to me.

+ +

I don't think the discussed motivations are a huge gain. I can't understand why someone would take the risks given in 1 and 4. Help me understand the advantages of such a design. What am I missing here?

+",65950,,,,,43974.82569,Object Oriented Design of Callback Methods,,1,6,,,,CC BY-SA 4.0, +410520,1,,,5/23/2020 18:06,,1,58,"

Is it doable, desirable to organize project classes, dependencies in Tree/DAG structure?

+ +

To be more specific: in applications (not libraries) we always have some entry point, am I right? Some main class. Here we keep some core dependencies, which have other dependencies, and so on. We treat it as the root of the dependency tree.

+ +

I wonder if we should keep our dependencies in hierarchical order with some abstraction levels. How do we organize complex class connections, keep coupling low, and prevent spaghetti code?

+",363052,,,,,43974.8875,Should class dependencies be organized in tree structure?,,0,5,,,,CC BY-SA 4.0, +410523,1,410527,,5/23/2020 18:58,,0,45,"

The following is an example requesting an explanation for one specific file in one specific filesystem, not helper classes generally.

+ +

I have configured a LEPP stack on a CentOS server. The server hosts an API which is built using Slim PHP Framework. There is an official skeleton repo on Slim's github.

+ +

The repo has a basic file structure which is clearly defined as follows:

+ +
app
+├───dependencies.php
+├───middleware.php
+├───repositories.php
+├───routes.php
+└───settings.php
+logs
+└───app.log
+public
+└───index.php
+src
+├───Application
+│   ├───Actions
+│   │   ├───User
+│   │   │   ├───ListUsersAction.php
+│   │   │   ├───UserAction.php
+│   │   │   └───ViewUserAction.php
+│   │   ├───Action.php
+│   │   ├───ActionError.php
+│   │   └───ActionPayload.php
+│   ├───Handlers
+│   │   ├───HttpErrorHandler.php
+│   │   └───ShutdownHandler.php
+│   ├───Middleware
+│   │   └───SessionMiddleware.php
+│   └───ResponseEmitter
+│       └───ResponseEmitter.php
+├───Domain
+│   ├───DomainException
+│   │   ├───DomainException.php
+│   │   └───DomainRecordNotFoundException.php
+│   └───User
+│       ├───User.php
+│       ├───UserNotFoundException.php
+│       └───UserRepository.php
+└───Infrastructure
+│   └───Persistence
+│       └───User
+│           └───InMemoryUserRepository.php
+tests
+var
+└───cache
+
+ +

I have added my own endpoints to this, for example a 'ViewWordAction' class which utilises a 'PostgresWordRepository' class and returns dictionary definitions for a single word found within a database table when the /dict?word={word} endpoint is called.

+ +

This action is excluded from the file structure for clarity. There are various other endpoints which I will add, used to analyse word data.

+ +

I have a separate class called 'RegExHelper' which separates punctuation, recognises ends of sentences, and performs other similar functions. In previous versions of my application, the file structure was a mess and did not follow proper conventions, so this and similar classes were stored in a 'Helper' folder within the src directory. This class will be shared amongst various other classes across multiple endpoints.

+ +

I would like to know where to store this helper class and which naming conventions to use. I presume that the class should be refactored as 'RegExHandler' and stored in a src/Application/Handlers/RegEx directory, although I cannot find any guides or documentation which explain the proper file structure for this example.

+",313133,,,,,43974.86736,PHP: What code should be removed to its own helper class and where should such classes be located in the filesystem?,,1,0,,,,CC BY-SA 4.0, +410524,1,410526,,5/23/2020 19:19,,-1,110,"

In the Model-View-Controller pattern, I do understand the role of each component.

+ +

The Model represents our application's domain model. The View presents this information, and the Controller handles the business logic.

+ +

From the diagrams I have seen, I do not understand who knows about whom. Does the View know about the Controller? Does the Controller know about the View?

+ +

All I know is the Model should not know anything about the view, because those two should be independent. As for the opposite I'm not sure.

+",161155,,9113,,43974.82014,43977.13264,MVC who knows about whom?,,2,3,,,,CC BY-SA 4.0, +410531,1,,,5/24/2020 2:24,,3,257,"

Let us say that for a web application, inside the source code of the app, you created a function called calculateAmount. Inside the web app, you need to call that function.

+ +

But for some reason, due to the need for 'automation', you want the app itself to 'think for itself' about which function needs to be executed at which stage of the program.

+ +

For example, when a user of the app is performing 'Action A', the function calculateAmount is called. When the user of the app is performing 'Action B', another function, calculateTax, is called.

+ +

First idea: For the effect of 'automation', I was taught that one can simply store the function names calculateAmount and calculateTax inside a database table. Then, when the web app is running, at different stages of execution, your web app makes a call to the database in order to know which function needs to be invoked, based on the result you get from the database table.
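
+ +

For concreteness, a minimal sketch of this name lookup (hypothetical Python; the question's own example is JavaScript): the table stores only a name, and the app maps that name to vetted source code.

+ +
# The DB row stores only a name; the implementations stay in source code.
+def calculate_amount(a, b):
+    return a + 5 / 100 + b
+
+def calculate_tax(amount):
+    return amount * 0.2
+
+HANDLERS = {'calculateAmount': calculate_amount,
+            'calculateTax': calculate_tax}
+
+def run(name_from_db, *args):
+    try:
+        return HANDLERS[name_from_db](*args)
+    except KeyError:
+        raise ValueError('unknown handler configured in DB: ' + name_from_db)
+
+print(run('calculateAmount', 1, 2))   # 3.05
+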

+ +

Second idea: Just as an example we say calculateAmount is a javascript function like the one below:

+ +
function calculateAmount(paramA,paramB){
+  return paramA + (5/100) + paramB;
+}  
+
+ +

Again, because of 'automation', I was taught that one can simply store the function body return paramA + (5/100) + paramB; inside a database table. Then the web app can simply 'think for itself': when it needs to call the calculateAmount function, the web app makes a call to the database, retrieves the function body from the database table, and can execute any function whenever the correct situation arises.

+ +

My question is whether the above two ideas are very commonly practiced. Is there a situation where the above practices can have a negative impact on the performance of the database or the web app?

+ +

EDIT: The overall picture that someone was trying to promote to me is that you can store many things inside a database. If a software developer can do this well enough, there is less reliance on developers to write source code. This is because configurations, function names, and function bodies can be stored in a database. Instead of relying on many developers to write source code, you can rely on the software to decide for itself at which point in time which source code should be executed, and retrieve it from the database accordingly.

+ +

If an entire software product can be architected in this manner, then, I believe, the person who was trying to teach me this philosophy holds the following belief: once development and testing are complete, the software product is not very hard to maintain and customise, because 'a lot' of the essential 'blocks' of the software 'live inside the database'. To change or maintain the software, one can simply edit the data inside the database.

+",366450,,366450,,43975.56042,43976.4875,Storing a function body inside a database table,,2,2,,,,CC BY-SA 4.0, +410532,1,,,5/24/2020 3:34,,1,66,"

On multiple occasions, we've deployed frontend code to production only to find out the backend (REST or GraphQL) hasn't shipped their side yet. Worse yet, we sometimes unexpectedly find out that a param name has changed, which may throw an error. Another example: the backend removes an API, thinking that clients no longer use it, and the frontend crashes. If any layer of communication between frontend and backend breaks down, we may end up with catastrophic errors.

+ +

I think the ""best solution"" is to use a tool like Cypress or Codecept to create a suite of integration tests which checks every API call the frontend may use. Sadly, that's a heavyweight solution that requires significant upfront investment in developer time.

+ +

Anyway, I'm looking for a simple solution to this problem: maybe something that checks affected APIs when the frontend opens a PR, and/or something that checks the frontend repo when the backend deploys to production.
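
+ +

One lightweight idea I have been toying with (just a sketch; the endpoint list and host are assumptions) is a CI step that probes every endpoint the frontend depends on:

    import urllib.request

    # Endpoints the frontend uses, kept in one file the backend can also see.
    ENDPOINTS = ["/api/users", "/api/orders?limit=1"]
    BASE = "https://staging.example.com"   # assumed staging host

    failures = []
    for path in ENDPOINTS:
        try:
            status = urllib.request.urlopen(BASE + path).status
            if status >= 400:
                failures.append((path, status))
        except Exception as exc:           # HTTPError, URLError, ...
            failures.append((path, exc))

    if failures:
        raise SystemExit(f"contract check failed: {failures}")

It would not catch renamed parameters inside response bodies, though.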

+ +

Any ideas or experience solving this problem?

+",366453,,366453,,43975.99306,43975.99306,How can I ensure the client and server both have access to all API calls in use?,,2,2,,,,CC BY-SA 4.0, +410538,1,,,5/24/2020 10:01,,-1,82,"

I'm developing a social network with Django (Python) + Postgres SQL. My site will have a chat feature so that users can communicate with each other in real time, and the communication will only ever be user-to-user (so there won't be chat rooms with more than two people).

+ +

Let's say that in the future my social network has ten million registered users (I know, I know, but for the sake of my question let's assume that this happens) and an average of 20,000 chats open between users at the same time, 24/7.

+ +

Assuming that I run my app in the cloud (DigitalOcean, AWS or whatever) with a load balancer, can I expect my Django + SQL app to run seamlessly, or should I use Node.js + NoSQL to scale my app with less pain as it grows?

+ +

I heard that the ME*N stack is meant for these kinds of use cases (real-time applications with thousands of concurrent connections), but I have already developed around 25% of my app in Django + Postgres, and it is discouraging to think that I will probably have to redo everything from scratch. On the other hand, I heard that some other big websites such as Instagram have been developed using Django, so I don't know what to think.

+ +

I'm aware that it's possible to connect Django with MongoDB, but I still have the problem of managing a large number of concurrent real-time connections... Plus, I will use React heavily on the front end, and it might be easier to couple it with Node than with Django.

+ +

What is the best decision here?

+",366467,,,,,43977.12431,Can I manage thousands of concurrent connections with a non-Node stack?,,3,3,,,,CC BY-SA 4.0, +410542,1,,,5/24/2020 12:52,,-1,31,"

Background

+ +

Simplifying, assume I want to write a code-analysis tool which tells me which files (classes/modules) have some kind of network interface. It doesn't matter whether it's a REST controller, a DB connection, RPC, or a plain socket.

+ +

Restrictions

+ +

I want to achieve this automatically. What do I mean by that? I don't want to use any regex tricks to search for network-specific keywords in method names, or for strings like GET, POST, other HTTP operations, URLs, IP addresses and other things which are tied to a certain protocol. I want to detect whether a class uses socket operations under the hood, even if it's an app built on, e.g., an HTTP framework and all network operations are hidden in deeper dependencies.

+ +

Example

+ +

We have some REST controller which listens on a particular URL. If we use some HTTP framework, then such a controller is probably injected as a dependency into another class, that class into yet another, and so on, until finally there is some network socket listening on a low-level network interface. I assume that every programming environment has some atomic socket abstraction, or a couple of them, which all network interfaces are built on.

+ +

Question

+ +

How can I properly fulfill such requirements? The goal is to indicate the places where the network inputs and outputs of the system are located. Can I achieve that with a tool written in only one programming language, or do I have to write an individual tool for every technology?

+ +

PS. The question is language-agnostic in general, but an example in Java or Python would be helpful.
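
+ +

For the Python side, here is a minimal static-analysis sketch using the standard ast module. The list of 'atomic' socket modules is my assumption, and only direct imports are checked; indirect uses would need a walk of the import graph:

    import ast
    import sys

    # Modules treated as the "atomic" network layer (an assumption).
    SOCKET_MODULES = {"socket", "ssl", "asyncio"}

    def uses_sockets(path):
        tree = ast.parse(open(path).read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                if any(a.name.split(".")[0] in SOCKET_MODULES
                       for a in node.names):
                    return True
            elif isinstance(node, ast.ImportFrom) and node.module:
                if node.module.split(".")[0] in SOCKET_MODULES:
                    return True
        return False

    for path in sys.argv[1:]:
        mark = "network" if uses_sockets(path) else "no direct socket use"
        print(f"{path}: {mark}")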

+",363052,,,,,43975.57708,"How to check whether module or class is network interface, socket?",,1,2,,,,CC BY-SA 4.0, +410548,1,,,5/24/2020 14:51,,0,17,"

I am studying resources related to non-abstract large-scale system design, and I'm currently working through the Google system design exercise in this video.

+ +

In the example solution presented, a calculation is made for the number of write requests a write server can pass through to an underlying storage subsystem. Essentially, the write service receives 4MB images from users and, for each one, calls the storage system's write operation in parallel. We assume that the storage subsystem scales infinitely. The write service hardware has a NIC capable of 1GB/s operation. We assume that the server has reasonable CPU and cache/memory to fully saturate the link up and down.

+ +

The example video tried to estimate the total number of write operations that a single server can achieve per second.

+ +

They state that:

+ +
    +
  • it takes 4ms to receive the file from the user (4MB / 1000 MB/s)
  • +
  • it takes 4ms to send the file to the storage back end (4MB / 1000 MB/s)
  • +
  • Therefore 8ms to 'save' the file.
  • +
  • Therefore a single server instance can process 125 writes / second.
  • +
+ +

But this feels a bit wrong to me. If the server hardware is a standard NIC connected to a standard switch, then the connection is full duplex, so the bandwidth up and down is not shared. Wouldn't the write throughput therefore be roughly 250 / second?
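
+ +

Working the numbers through under that assumption (an idealization that ignores protocol overhead and CPU time):

    receive from user:  4 MB / (1000 MB/s) = 4 ms   (downlink only)
    send to storage:    4 MB / (1000 MB/s) = 4 ms   (uplink only)

    full duplex + pipelining: while file N is being sent to storage,
    file N+1 can already be received, so each 4 ms slot completes one
    write => 1000 / 4 = 250 writes per second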

+",340420,,,,,43975.61875,Trying to understand NIC receive / send operations per second estimations for non-abstract system design,,0,1,1,,,CC BY-SA 4.0, +410557,1,410558,,5/24/2020 18:57,,2,63,"

There are different methods for recognizing classes in UP methodology:

+ +
    +
  1. noun/verb analysis
  2. +
  3. using CRC analysis
  4. +
  5. using RUP stereotypes
  6. +
  7. other sources
  8. +
+ +

I have read about the above methods, fully detailed in the book UML 2 and the Unified Process: Practical Object-Oriented Analysis and Design (2nd Edition), Addison-Wesley Professional (2005).

+ +

Definition: There is some ambiguity in deciding whether to model something as a class of its own or as an attribute of an existing class.

+ +

Suppose we have a goods transportation system with the following obvious classes:

+ +
    +
  1. driver
  2. +
  3. system admin
  4. +
  5. goods
  6. +
  7. order
  8. +
  9. customer
  10. +
  11. etc.
  12. +
+ +

In this system the customer should see the live location of the driver, who transports his/her order.

+ +

The problem is that we don't know whether to model the location (longitude and latitude) as attributes of the driver (and the related functionality as operations), or as a separate class which has a relationship with the driver class.
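
+ +

To make the two alternatives concrete (a rough sketch in code rather than UML; the distance method is plain Euclidean, purely for illustration):

    from dataclasses import dataclass

    # Alternative 1: location folded into the driver as plain attributes.
    @dataclass
    class DriverA:
        name: str
        longitude: float
        latitude: float

    # Alternative 2: location promoted to a class of its own.
    @dataclass
    class Location:
        longitude: float
        latitude: float

        def distance_to(self, other: "Location") -> float:
            # having behaviour of its own is one common argument
            # for promoting an attribute to a class
            return ((self.longitude - other.longitude) ** 2 +
                    (self.latitude - other.latitude) ** 2) ** 0.5

    @dataclass
    class DriverB:
        name: str
        location: Location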

+ +

Question: Is there any method for finding the solution and clarifying this kind of ambiguity?

+",352516,,209774,,43976.79722,43976.79722,Consider as a single class or attribute/operation of existing class?,,2,0,,,,CC BY-SA 4.0, +410560,1,410562,,5/24/2020 20:41,,2,115,"

I am at the stage of implementing my first ever View, after developing a Model and a Controller; however, there is a problem.

+ +

I have been reading this article on MVC, which is what I have been aiming for when implementing, but I kind of went off-script when making the Controller because I didn't remember to look back at it, not that I think it would have been all that much help.

+ +

Currently my controller contains the methods:

+ +
    public function add($name, $price, $dimensions): string
+
+    public function remove($sku): void
+
+    public function removeAll(): array
+
+    public function updatePrice($sku, $price): void
+
+ +

But when it comes to writing the View, I realise that I would have to implement database access in order to list entities, unless I ask the Controller for the data from the Model. That seems kind of wrong to me, but given some of the description in the article mentioned above, it seems like this might be required.

+ +

It seems neatest to me to supply 'get' functionality from the class which already has database access. In short, should a View interface with the database directly?

+",366503,,,,,43977.37292,Does the Controller contain get methods in MVC?,,3,0,,,,CC BY-SA 4.0, +410565,1,,,5/24/2020 22:23,,-4,47,"

I know that TensorFlow uses symbolic model-building APIs, which developers can use to build static computational graphs, whereas PyTorch offers an imperative programming paradigm in which computations are performed as the program runs.

+ +

Could anyone help me understand the development paradigms of Scikit-Learn, Keras, Caffe and Theano? Do they use an imperative development paradigm or symbolic APIs?
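
+ +

To be clear about what I mean by the two paradigms, here is a toy illustration in plain Python, deliberately using no real framework API:

    # Imperative: the computation happens as each line runs.
    a, b = 2, 3
    c = a * b                 # c is already 6 at this point

    # Symbolic: first build a graph, then run it.
    class Mul:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def run(self, env):
            return env[self.x] * env[self.y]

    graph = Mul("a", "b")                 # nothing is computed yet
    print(graph.run({"a": 2, "b": 3}))    # computation happens only now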

+",366507,,,,,43977.15486,Development paradigms of ML libraries,,1,0,,,,CC BY-SA 4.0, +410566,1,410568,,5/25/2020 0:04,,0,95,"

I'd like to try to create an application where 2 players can play chess online. The (possibly) novel feature would be that the process for joining a game would be similar to how Typeracer works. The first player creates a lobby then shares a link. The friend can then click the link to join the session immediately.

+ +

However, I'm a beginner when it comes to how information is shared over the internet. I've only created a few REST APIs and games in Java/Python so a lot of this would be new to me.

+ +

The first question I have is how/where I would actually implement the logic for this game. Would everything be done client-side (I'd probably use React, so I'd then have a JS library to hold all the game rules), or is everything done on the server?

+ +

Also, how would I keep the players' game clients in sync? I've heard about WebSockets, but any elaboration would be helpful.

+ +

Lastly, how could I go about implementing the feature where users can share their game link to get the other player to join?
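
+ +

For the link itself, what I picture is just an unguessable game id (a sketch with invented names):

    import secrets

    games = {}   # game_id -> game state

    def create_game():
        game_id = secrets.token_urlsafe(8)           # e.g. 'kX3_tQ9ZrA'
        games[game_id] = {"players": [], "moves": []}
        # the friend who opens this URL joins the session directly
        return f"https://example.com/game/{game_id}"

    print(create_game())

Is that the right general idea, or is there more to it?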

+ +

For context, I have quite a bit of experience with React for UI, I've used NodeJS on the server, and I'm currently learning about how to create web APIs with ASP.NET web API (C#). I figure that I'll be able to create the board UI in React and handle any animations with CSS. If that's not a good idea let me know!

+ +

Overall I'm not really sure how I'll tie this all together, so a big-picture view might help, and I'd love to hear about any frameworks or tools that might make this job easier. Thanks for the help in advance!

+",366510,,366510,,43976.00625,43976.24444,Architecture of Online Chess (2-player web-based board game)?,,1,2,,,,CC BY-SA 4.0, +410569,1,,,5/25/2020 6:26,,2,70,"

We have data in Kafka which we need to export to multiple destinations. Each message key is to be exported to one destination.

+ +

A destination can be a REST endpoint, a file, a database etc.

+ +

Each exporter can have its own speed or rate limits and one exporter should not slow down the other.

+ +

In Kafka, the parallelism depends on the number of partitions rather than on the number of messages.

+ +

Approach #1

+ +

We decided to use Akka where we read each message from the Kafka topic and tell to the exporter actors each of which will export to their respective destination like REST, file, database etc.

+ +

Problem: at-most-once semantics only. The problem here is that we have to commit the messages in Kafka. When we tell a message to an actor, we do not know whether that actor has processed it or not; it may still be lying in the mailbox when we commit. Such committed messages are not read again after a process restart.

+ +
while (true) {
+    // poll() takes a timeout in recent Kafka client versions
+    consumer.poll(Duration.ofMillis(100)).forEach(record ->
+        getExporter(record.key()).tell(record, ActorRef.noSender()));
+    // may commit records that are still sitting, unprocessed, in a mailbox
+    consumer.commitAsync();
+}
+
+ +

Approach #2

+ +

Read each message, store it in a persistent file, export it and remove it from the persistent file after export.

+ +

We need a persistent actor to tell this to. So we may use ask instead, wait until the actor has put the record into the persistent map, and only then tell it to the exporter.
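
+ +

A third variant I have been considering (not one of the two approaches above; the names and the assumption of consecutive per-partition offsets are mine) is to track exporter acknowledgements and commit only a contiguous prefix of offsets, which gives at-least-once semantics. A language-neutral sketch:

    import heapq

    acked = []            # min-heap of acknowledged offsets
    next_to_commit = 0    # lowest offset not yet safe to commit

    def on_export_ack(offset):
        """Called by an exporter once a record is durably exported."""
        global next_to_commit
        heapq.heappush(acked, offset)
        while acked and acked[0] == next_to_commit:
            heapq.heappop(acked)
            next_to_commit += 1
        # consumer.commit(next_to_commit) would go here; a crash before
        # the commit only causes re-delivery, never loss

    for off in (2, 0, 1):    # acks can arrive out of order
        on_export_ack(off)
    print(next_to_commit)    # 3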

+ +

Are there any better ways of doing this? Are there any reference architectures?

+",97722,,,,,44157.42083,Processing messages in parallel with Kafka & Akka,,2,0,,,,CC BY-SA 4.0, +410570,1,,,5/25/2020 7:19,,3,263,"

This is something that has been bothering me for quite some time. I tried different approaches over the past year, but I always come back to reflect on this, so I'll throw myself out here!

+ +

So, first let's get some common ground: if we follow the blue book, transaction handling should be done in the application service. We open a transaction, retrieve 1 or multiple aggregates, adapt only 1 aggregate, and commit that transaction (1 aggregate per transaction).

+ +

If we then follow the theory of domain events, it seems that we have 2 camps: one publishing events before committing, the other publishing events after committing. I'm fine with the second one, so let's stay in that context.

+ +

OK, I had different approaches to this one; I'll explain them so you can judge for yourself or maybe find other alternatives.

+ +

I'm using .NET btw.

+ +

So my first attempt was quite easy, and is the one I'm still using now.

+ +

I have a generic repository with 2 methods, like this:

+ +
public interface IRepository<TId, T> where T : Aggregate
+{
+    T GetById(TId id);
+    void Save(T aggregate);
+}
+
+ +

And in my save method I do a couple of things:

+ +
    +
  • Check if the entity already exists; if not, add it; if it does exist, attach it back to the change tracker (using E.F. Core for now; don't focus on this part, it's not important).
  • +
  • As we have only 1 aggregate per transaction, I decided to call my SaveChanges(); method, the one that commits the changes to the database if we don't have a parent scope open, in the Save method of my repository. So I'm basically committing in my repository.
  • +
  • My aggregate base class holds a list of events that have occured from the moment we get it to the moment we save it. So after my save changes in this method, I basically retrieve the unpublished events of my aggregate, dispatch them, and clear the list.
  • +
+ +

As you can see, the Save method does quite a few things, some of them maybe considered outside of its responsibility. But it has been quite a straightforward experience using this method, as my application services don't really have transactions involved anymore, and each save commits a transaction.

+ +

My second approach consisted of overriding the .SaveChanges method of Entity Framework. I would retrieve the list of tracked aggregates and the events on them, then call the base.SaveChanges() method that actually commits, and then dispatch the events.

+ +

The repository Save method in this case would only attach the entity to the change tracker when saving it (I tend to retrieve my entities without change tracking and only track changes in the Save part, as it is possible to retrieve multiple entities in the same method for information purposes while only 1 of them will be changed).

+ +

What I then do is wrap my DbContext into a IUnitOfWork interface, like this

+ +
public interface IUnitOfWork
+{
+    void Commit();
+}
+
+ +

And the Commit method calls the context.SaveChanges() method, which has been overridden to dispatch events after the commit.

+ +

My application service would then inject the needed repositories, call the .Save() method on one of them, and then commit the UoW, which is also injected.

+ +

The problem I have with those 2 approaches is that, in the context of E.F., they are not always 100% correct for what I want to do.

+ +

If I open a transaction scope in my application service, both of those methods will actually publish events when saving on the repo / committing in the UoW, but if I rollback the parent transaction scope, I will have an invalid state.

+ +

My ultimate goal would be to always use a transaction scope in my application service, using that transaction scope's commit to actually commit the changes instead of the context's .SaveChanges() method, and only dispatch the events after the transaction scope is committed.
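
+ +

A language-neutral sketch of that goal, with invented names: events are buffered and published only when the outermost scope commits, and dropped on rollback:

    class UnitOfWork:
        def __init__(self):
            self.pending_events = []
            self.depth = 0

        def __enter__(self):
            self.depth += 1
            return self

        def __exit__(self, exc_type, exc, tb):
            self.depth -= 1
            if exc_type is not None:
                self.pending_events.clear()   # rollback: drop the events
            elif self.depth == 0:
                # the *outermost* scope committed: now publishing is safe
                for event in self.pending_events:
                    dispatch(event)
                self.pending_events.clear()

    def dispatch(event):
        print("published:", event)

    uow = UnitOfWork()
    with uow:
        with uow:                              # nested scope: nothing yet
            uow.pending_events.append("OrderPlaced")
    # "OrderPlaced" is published only here, after the outer commit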

+ +

Now, my questions are: do the first 2 approaches seem correct to you, and am I too worried about small details? Is my third idea actually achievable in a correct way? Am I maybe missing an extra alternative?

+ +

Thank you!

+",264017,,,,,43987.28889,When to publish domain events when handling transactions in the application service,,1,0,0,,,CC BY-SA 4.0, +410571,1,410572,,5/25/2020 8:10,,2,60,"

To reiterate the question: what does it mean to have a weaker or stronger postcondition when overriding a method that only has side effects with another one that only has side effects?

+ +

P.S. What about a mix of side effects & return values?
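
+ +

To make the question concrete, here is a toy example of my own (not from any book) where the postconditions are promises about side effects only:

    class Base:
        def save(self, record):
            """Postcondition: record is written to disk."""

    class Stronger(Base):
        def save(self, record):
            """Postcondition: record is written to disk AND fsync'ed."""
            # stronger: promises everything Base promised, plus more,
            # so callers of Base are never surprised

    class Weaker(Base):
        def save(self, record):
            """Postcondition: record is merely *scheduled* for writing."""
            # weaker: a caller relying on Base's promise can now observe
            # a state that Base ruled out (the LSP violation)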

+",354171,,,,,43976.41528,How to determine whether the postcondition of overridden methods is weaker or stronger if there is no return value?,,1,0,1,,,CC BY-SA 4.0, +410573,1,,,5/25/2020 9:14,,1,96,"

I'm developing an application where one of the main components of the app heavily relies on a third-party framework to work. The choice of framework is highly volatile and it is likely that business requirements will change in the foreseeable future, meaning that there's a good chance the framework I currently am using will be replaced with a similar but different framework.

+ +

What are some measures I can take to minimize the amount of code I have to change when I inevitably do switch frameworks, in order to avoid tying myself to the framework? I want to leave as much code untouched as possible and limit the changes to the classes that interact with the framework, allowing for easy plug and play.
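
+ +

The direction I am considering is a thin adapter layer that I own; a minimal sketch with invented names, shown in Python only for brevity:

    from abc import ABC, abstractmethod

    # My code depends only on this interface that I own...
    class PaymentGateway(ABC):
        @abstractmethod
        def charge(self, amount_cents: int) -> str: ...

    # ...and each framework gets its own thin adapter behind it.
    class FrameworkAAdapter(PaymentGateway):
        def charge(self, amount_cents: int) -> str:
            # the only place that would talk to framework A's API
            return f"A-receipt-{amount_cents}"

    def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
        return gateway.charge(amount_cents)   # never touches the framework

    print(checkout(FrameworkAAdapter(), 250))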

+",360382,,,,,43977.11389,How do I minimize the collateral damage caused by a framework change?,,3,6,,,,CC BY-SA 4.0, +410576,1,,,5/25/2020 10:05,,3,76,"

We've got 2 very large platforms for our services & jobs.

+ +

Both platforms consist of 20+ servers hosting 1000+ services/jobs.

+ +

Each job/service is essentially a java web application.

+ +

Both platforms are managed centrally, which allows us to get a holistic overview of the platform, as well as drill down capability to each server.

+ +

Each server has the following logs:

+ +
    +
  1. System out log
  2. +
  3. System err log
  4. +
+ +

The service/job log is centralized and not stored on the server.

+ +

A service/job application may include 3rd-party libraries that write to sys out/err, which makes operations difficult for us, as the centralized log is then not the sole source for troubleshooting.

+ +

Architecturally we only want the sys out/err log to contain server log entries, not service/job log entries. This would allow the development department to troubleshoot business logic via the centralized log and the operations department to troubleshoot sick servers via server logs.

+ +

As we have no control over these 3rd party libraries, and the sys out/err is global to all the applications on the server, how do we best approach this challenge?

+",294971,,,,,43976.42014,Logging by 3rd party libraries,,0,9,1,,,CC BY-SA 4.0, +410577,1,410580,,5/25/2020 10:56,,1,301,"

I'm a firmware developer, and I'm interested in applying SOLID practices in low-level programming, especially in Hardware Abstraction Layers on ARM microcontrollers.

+ +

Every example I come across on the internet is implemented in C++ or C# or Java and it seems a little hard to follow those patterns in C.

+ +

Are there any examples that could give me a hint of how to make that work in C?

+",298122,,,,,43976.69861,How do I implement Dependency Inversion in C?,,1,4,,,,CC BY-SA 4.0, +410584,1,,,5/25/2020 12:37,,3,212,"

I am currently working in a team which, when I joined, did not do any sort of unit or integration testing.

+ +

Over the last 2 years I have, bit by bit, pushed dotnet unit testing to the point where it is now considered part of the day-to-day workflow. Integration tests have also had a place and are coming up once again as a point of focus, which is great. The team generally agrees that dotnet unit tests are definitely worth it and that dotnet integration tests are also a great addition, although they take longer to write and set up.

+ +

The area where unit tests are almost non-existent is our Angular/UI code. I've pushed for us to add unit tests for reusable functions, etc.; however, component unit testing has largely been left alone, as both I and the rest of the team struggle to quantify whether it is worth it or not. A few attempts to add unit testing to the UI have ended with us writing a bunch of unit tests without any of us really understanding the value.

+ +

Just recently, an experienced (and quite openly opinionated) front-end developer joined the team and his opinion is that angular UI unit testing is completely and utterly worthless. I am afraid of simply agreeing with this point of view as this was similar to the team's opinion about dotnet unit testing initially and now they think the opposite.

+ +

Question: What do you test in your Angular application, and how do you think it improves your software quality/stability?

+ +

Possible discussion points: Do you have unit tests? If so, what do you test? Components? Services? State? Actual DOM changes? Do you instead have end-to-end tests? How would you quantify the time (cost) vs. benefit?

+ +

Any insight/etc would be highly valued!

+ +

Thanks!

+",237712,,,,,43976.75,What do you unit test in your angular applications?,,1,2,1,,,CC BY-SA 4.0, +410586,1,410647,,5/24/2020 21:22,,0,104,"

I am new to UML sequence diagrams. I saw a few YouTube videos and a few tutorials such as this one.

+ +

I have a system with multiple inputs that can interact with the system asynchronously: for example, first input 4 (out of 5), then input 1 (out of 5), etc. How do I represent them in a UML sequence diagram?

+",366631,just_learning,4,,43976.58542,43977.62083,How do I represent parallel (multiple) inputs in a UML Sequence diagrams?,,1,1,,,,CC BY-SA 4.0, +410592,1,,,5/25/2020 16:59,,0,32,"

My web application has a UI. Some aspects of the UI can be changed (e.g. the language, the theme, the text size). As a concrete example, let's assume that I have a ""theme"" dropdown box available on every page.

+ +

Now, when a user changes the theme, the current page should reload with the new theme, and the new theme should be used for all subsequent page calls. In other words, I want to (a) change some server-side state (the session variable containing the theme name) and (b) reload the page.

+ +

I know how to implement this. In fact, I know more than one way to implement this, and this bugs me. I want to find the best way to implement this, and since best is subjective, I want to identify the advantages and drawbacks of both options (and maybe learn about alternative solutions that I didn't even think of).

+ +
+ +

Here is solution 1:

+ +
    +
  • I add the new theme name as a query parameter (i.e. http://example.com/show_orders?customerId=2 becomes http://example.com/show_orders?customerId=2&changeTheme=dark) and reload the page.
  • +
  • A generic server-side handler checks for the changeTheme parameter, updates the state and then execution passes on to the specific handler (show_orders), which ignores the unknown changeTheme parameter.
  • +
+ +

What I don't like about this solution is that

+ +
    +
  • page-specific parameters (customerId) and site-specific parameters (changeTheme) are mixed and
  • +
  • ""state-changing"" parameters are in the URL: If the user sends the URL to a colleague directly after changing the theme, calling the URL will also cause the colleague's theme to change. I don't know if there is a specific term for this, but if just ""feels wrong"" to have a GET request to a resource called show_orders change something. (Feel free to correct me, if my gut feeling is wrong here.)
  • +
+ +
+ +

Here is solution 2:

+ +
    +
  • I call a dedicated URL for changing the theme and pass the original page as a parameter, i.e. http://example.com/change_theme?newTheme=dark&redirectTo=%2Fshow_orders%3FcustomerId=2.
  • +
  • The change_theme handler changes the session state and then redirects (HTTP 302) to the original URL as defined in the query string parameter.
  • +
+ +

What I don't like about this solution is that

+ +
    +
  • I am redirecting back and forth, which might affect performance, and
  • +
  • I am redirecting to a user-supplied value, so I might need to take extra precautions w.r.t. security.
  • +
+ +
+ +

Today, I implemented solution 1, and I am considering changing it to solution 2 for the reasons outlined above. Is this a good idea? Are there pros and cons that I have missed? Is there some superior solution 3 which is well-established industry practice? If yes, what problems does it solve that options 1 and 2 have?
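
+ +

For concreteness, here is roughly what I picture the solution 2 handler looking like (a Flask-style sketch with invented names; the netloc check is the kind of extra security precaution I mean):

    from urllib.parse import urlparse
    from flask import Flask, redirect, request, session

    app = Flask(__name__)
    app.secret_key = "dev"   # assumption: the real key comes from config

    @app.route("/change_theme")
    def change_theme():
        session["theme"] = request.args.get("newTheme", "light")
        target = request.args.get("redirectTo", "/")
        if urlparse(target).netloc:   # refuse absolute URLs (open redirect)
            target = "/"
        return redirect(target)       # HTTP 302 back to the original page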

+",33843,,,,,43977.36667,How should I represent a UI state change in the URL?,,1,0,,,,CC BY-SA 4.0, +410595,1,,,5/25/2020 17:33,,57,13273,"

This question may sound strange to you, but I am learning C++ all by myself. I have nobody whom I could ask for mentoring and I would be very glad for some advice.

+ +

I recently started to program in C++ (about 3-4 intensive months with about 6-8 daily hours). My background is Java, and I have done some bigger projects in Java with over 10k LOC (which is big for a university student like me).

+ +

I have used C++ mainly for implementing and visualizing algorithms, but I aim for bigger software projects as well. The only libraries I have used are Catch2, OpenCV, and a little Boost. The strange thing about my programming style is that I have never used pointers in my journey; it is not that I don't know how to use pointers, I just never found a moment where I thought a pointer would be useful. When I have to store primitive data, I prefer std::vector over an array. When I need to use an object of a class, I prefer to create the object on the stack and pass it by reference; no new/delete, no smart pointers.

+ +

The reason why I ask this (strange) question is that I feel like I am missing a big area of C++ programming. Could you share your experience with me and maybe give me some tips?

+",312915,,44,,43977.77917,43983.86736,I never use pointers in my C++ code. Am I coding C++ wrong?,,11,17,16,43979.02431,,CC BY-SA 4.0, +410600,1,410601,,5/25/2020 18:53,,1,178,"

I'm trying to create a .NET Core project and followed some guides to create a basic architecture.

+ + + +

I'm not using an ORM like Entity Framework; I want to use raw SQL for my Maria database, so I'm using the official MySQL Connector. I have set up the database connection and dependency injection parts.

+ +

As you can see in the sample repositories linked above (sample here) the database access happens in the Application layer and is coupled to the business logic. The persistence layer only acts like a database configuration container.

+ +

I want to know about best practices for organizing the database access logic.

+ +
    +
  • Before executing queries I have to open a connection and close it afterwards. Should I do this in my Application command / query because then I can fire multiple queries with one connection?
  • +
  • How should I structure the database logic? Should I create one file per SQL statement which handles the sql query and the mapping from database result to domain object?
  • +
+ +

It would be awesome if you could also provide a sample folder structure for visual purposes.

+",317611,,9113,,43976.84583,43976.86528,How to organize database access logic for the infrastructure and application layer when avoiding ORM tools?,,2,0,,,,CC BY-SA 4.0, +410606,1,,,5/26/2020 0:38,,0,68,"

I have a software product I am going to release soon. It is built with Java/Spring/Tomcat and will be installed on the customer's local network.

+ +

I plan to sell this to corporate customers, and I want them to buy an annually renewing license to run it.

+ +

I am imagining something like this: the customer provides me with their company and contact name, and they receive a code by email. They enter the three pieces of info ...

+ +
Company Name: Vance Refrigeration Inc.
+Contact Name: Bob Vance
+License Code: A2B2-A3B3-A4B4-A5B5
+
+ +

... and the application is licensed and works for them, either by ""phoning home"" or by decrypting the code and activating itself locally.

+ +

The license will permit the application to run on X number of servers, for X number of users, with X number of features activated, and it will have an expiry date.

+ +

THOUGHTS AND QUESTIONS:

+ +

Should I encapsulate all of the license feature details (# servers, # users, expiry date) in that license code and decrypt it locally without having to ""phone home"" to a verification server?

+ +

Is there a way to link the three pieces of info so that all three have to be entered correctly for it to work? My thinking is that this way, the name of the licensed owner is always accurately displayed.
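
+ +

One way I can imagine linking them (a sketch using an HMAC; the secret and field names are made up, the code is not chunked into AAAA-BBBB groups, and a real product would likely use asymmetric signatures so the key never ships inside the application):

    import base64
    import hashlib
    import hmac
    import json

    SECRET = b"vendor-private-secret"   # assumption: kept vendor-side

    def make_license(company, contact, features):
        payload = json.dumps(
            {"co": company, "by": contact, **features}).encode()
        tag = hmac.new(SECRET, payload, hashlib.sha256).digest()[:8]
        return base64.b32encode(payload + tag).decode()

    def verify(code, company, contact):
        raw = base64.b32decode(code)
        payload, tag = raw[:-8], raw[-8:]
        good = hmac.new(SECRET, payload, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, good):
            return None                       # tampered or mistyped code
        data = json.loads(payload)
        # all three pieces must match for activation to succeed
        return data if (data["co"], data["by"]) == (company, contact) else None

    code = make_license("Vance Refrigeration Inc.", "Bob Vance",
                        {"servers": 2, "users": 50, "expires": "2021-01-01"})
    print(verify(code, "Vance Refrigeration Inc.", "Bob Vance"))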

+ +

If I require the software to hit a verification server, how often should I have it ""phone home"" ... and how would you handle deployments that are on internal networks with no outside internet access (I've worked in environments like this).

+ +

Is there a good product out there that already does this for me? I don't necessarily want to re-invent the wheel.

+",366580,,,,,43977.11181,"What is a good way to make a ""software activation code"" system?",,1,3,,,,CC BY-SA 4.0, +410608,1,,,5/26/2020 2:19,,-1,99,"

Should Developers conduct Sanity Testing in Dev Public Server, before sending code over to QA team?

+ +

We are developing a property application. Our company uses C# .NET Core with Angular and a DevOps process in the Azure cloud.

+ +
    +
  1. After a software engineer codes and tests locally (having pulled all the latest changes from the Azure Git repos);
  2. +
  3. We finalize the code and write automated unit tests (through XUnit and Karma/Jasmine);
  4. +
  5. Additionally, developers point the local database to the Developer Environment public database to gain access to a wider range of data;
  6. +
  7. Finally, we send the application to the Dev Public Environment for a basic smoke test
  8. +
+ +

and then eventually to the QA environment through an Azure DevOps deployment.

+ +

Our QA team is complaining, 'Developers should not only Smoke Test, but Sanity Test to ensure all the functional changes are there'.

+ +

Smoke Test is different from Sanity testing, https://www.guru99.com/smoke-sanity-testing.html https://www.softwaretestinghelp.com/smoke-testing-and-sanity-testing-difference/

+ +

A smoke test ensures the application basically functions (e.g. it does not render a blank webpage, and the APIs are actually turned on). Other things that belong to sanity testing, like the correct HTML layout or the exact property tax calculation being shown, we leave out, as they are covered by the steps above.

+ +

Our philosophy is: we have already tested locally, then pointed our database to the actual dev server, and written unit tests. Why do sanity testing again? If things are not deployed correctly, why even pay Azure licensing fees to Microsoft for GitHub and their DevOps services? We trust that they work.

+ +

The truth is, some people are not writing code properly even in the local environment, which is causing problems for QA. However, they are prescribing an incorrect solution to that problem.

+",,user354368,,user354368,43977.36528,43977.36528,"Should Developers Conduct Sanity Testing in Public Dev Environment, if they Tested Locally and Wrote Unit Tests?",<.net>,2,1,,,,CC BY-SA 4.0, +410611,1,410622,,5/26/2020 2:47,,6,523,"

Sometimes when I look at other people's code I see functions that make a bunch of assumptions about the inputs but do not explicitly assert their assumptions. For example, look at the code below:

+
def func(a: list, b: list, c: int):
+    total = 0
+    for i in range(len(a)):
+        total += a[i] + b[i]
+    return total/c
+
+

My first instinct when I see code like this is to add a bunch of assert statements, like so:

+
def func(a: list, b: list, c: int):
+    assert len(a) <= len(b)
+    assert c, "cannot be 0"
+    total = 0
+    for i in range(len(a)):
+        total += a[i] + b[i]
+    return total/c
+
+

My argument is that I would much rather get an AssertionError so I know the exact problem (especially if there's a useful message) than an IndexError or something else and then have to figure out what the root cause is. Sometimes I see 5 or 6 assumptions made about the input, but in practice I don't see functions starting with lots of assert statements very often. I'm tempted to add a bunch of asserts to some code I found to make debugging easier. Is there any reason not to do this?

+

EDIT: Another way of asking this: if I get an error while running code and debug it to realize that input x from two calls higher in the traceback should always have some property (e.g. always be positive), is there any reason not to just add an assert statement right away in the code?

+

EDIT2: Here's an example from a popular code repo. In this case, the argument direction has to be in range(8). If it is not, the user gets an error that says

+
Exception has occurred: UnboundLocalError
+local variable 'targ_pts' referenced before assignment
+
+

To me, this is much harder to debug than if it started with an assert statement like assert direction in range(8), "skew direction must be integer between 0 and 7". Should an assert statement be added in this case?

+",285315,,285315,,43999.00833,43999.00833,Is it a good idea to start a function with a bunch of assert statements?,,5,5,,,,CC BY-SA 4.0, +410623,1,410650,,5/26/2020 7:54,,0,109,"

I'm planning to work with different libraries that use different conventions. One uses snake_case, another one uses camelCase. This leads to code that looks like I can't make up my mind:

+ +
Some_Result result = namespace1::this_cool_function() + Namespace2::thatOtherCoolFunction();
+
+ +

Is this just a 'suck it up and deal with it' situation? Should I create a wrapper that makes everything use the same convention (and will hopefully get optimised away)? Or is there a different way? Choosing a different library that matches my naming conventions (and those of the other libraries) is out of the question.

+",301421,,,,,43977.64514,How to avoid messy code when working with different libraries,,1,3,,,,CC BY-SA 4.0, +410624,1,,,5/26/2020 7:57,,1,225,"

I mean the question in this sense: should the occurrence of simple loops over collections in Java 8+ code be regarded as a code smell (apart from justified exceptions)?

+ +

When Java 8 came out, I assumed that it would be good to handle everything with the Stream API wherever possible. I thought that, especially when I use parallelStream() wherever I know order doesn't matter, this gives the JVM the ability to optimize the execution of my code.

+ +

Teammates think differently here. They think lambda transforms are hard to read and we don't use streams much now. I agree that streams are hard to read if the code formatter forces you to write them like this:

+ +
return projects.parallelStream().filter(lambda -> lambda.getGeneratorSource() != null)
+        .flatMap(lambda -> lambda.getFolders().parallelStream().map(mu -> Pair.of(mu, lambda.getGeneratorSource())))
+        .filter(lambda -> !lambda.getLeft().equals(lambda.getRight())).map(Pair::getLeft)
+        .filter(lambda -> lambda.getDerivative().isPresent() || lambda.getDpi().isPresent()
+                || lambda.getImageScale().isPresent() || lambda.getImageSize().isPresent())
+        .findAny().isPresent();
+
+ +

I would rather prefer to write them like this:

+ +
return projects.parallelStream()
+
+// skip all projects that don’t declare a generator source
+.filter(λ ->
+    λ.getGeneratorSource() != null
+)
+
+/* We need to remove the folders which are the source folders of their
+ * project. To do so, we create pairs of each folder with the source
+ * folder … */
+.flatMap(λ ->
+    λ.getFolders().stream()
+    .map(μ ->
+        Pair.of(μ, λ.getGeneratorSource()) // Pair<Folder, Folder>
+    )
+)
+// … and drop all folders that ARE the source folders
+.filter(λ ->
+    !λ.getLeft().equals(λ.getRight())
+)
+/* For the further processing, we only need the folders, so we can unbox
+ * them now */
+.map(Pair::getLeft)
+
+// only look for folders that declare a method to generate images
+.filter(λ ->
+    λ.getDerivative().isPresent() || λ.getDpi().isPresent() ||
+    λ.getImageScale().isPresent() || λ.getImageSize().isPresent()
+)
+
+// return whether there is any
+.findAny().isPresent();
+
+ +

Yes, the return is at the top, and it looks quite different from:

+ +
for (Project project : projects) {
+    if (Objects.isNull(project.getGeneratorSource())) { // skip projects without a generator source
+        continue;
+    }
+    for (Folder folder : project.getFolders()) {
+        if (folder.equals(project.getGeneratorSource())) {
+            continue;
+        } else if (folder.getDerivative().isPresent() || folder.getDpi().isPresent()
+                || folder.getImageScale().isPresent() || folder.getImageSize().isPresent()) {
+            return true;
+        }
+    }
+}
+return false;
+
+ +

But isn’t that just a matter of habit? Therefore my question is more a question of assessment:

+ +

Should Stream be something basic to use (should Java beginners learn for loops at all/first, or should they first learn to use streams), or is that too much premature optimization, so that one should use streams only after the code is shown to be a bottleneck?

+ +

(Less opinion-based: How do modern Java teaching books (are there still books in IT training?) handle this?)

+ +

Edit 1: To keep the question focused, my question is: is Stream intended as a basic programming paradigm that every Java programmer should use often, or as a performance feature for senior software engineers that should otherwise be avoided? You can also look at it internally: can the JVM decide whether to use a stream construction or process the objects sequentially (e.g., may it parallelize only if it detects a hotspot?), or does it have to set up complicated parallelization logic in the background each time, with multiple threads, so that this could produce a lot of overhead? In the second case, that would be a clear argument to me against using Stream always and everywhere.

+",80404,,80404,,43977.42917,43978.49722,Is the Java Stream API intended to replace loops?,,3,1,,,,CC BY-SA 4.0, +410626,1,,,5/26/2020 8:06,,2,139,"

I'm migrating my current application to a multi-tenant setup.

+ +

Now I have multiple RabbitMQ workers to process async jobs, publish and consume integration events, and other things. I'm planning to use vhosts in RMQ (1 vhost per tenant per service).

+ +

Right now, I'm still not convinced about which approach to take to implement this multi-tenancy setup. Here are the options I have:

+ +
    +
  • One docker container per tenant (which runs 1 consumer process) -> This sounds like overkill to me. Also, I have to figure out how to read active_tenants from a central tenant config service in order to spawn containers dynamically in docker-compose.

  • +
  • Use the multiprocessing module in Python to spawn one worker process per tenant. I've tried this out and had one process dynamically fork multiple tenant processes. I'm confused about the monitoring side here. Also, to be able to handle signals for each tenant process (so that we can track & restart an individual tenant process on failure), is there some standard approach, like a fixed set of signals to worry about? Should I even do it manually, or let something like supervisor take care of that?

  • +
  • Use supervisord inside docker to manage multiple tenant processes. But this requires a pre-configured supervisor config, so it probably can't read the active tenant ids from a central tenant service?

  • +
+ +

Now, out of these options, which really would be the most robust approach? We don't really have much of a scale issue; ours is a subscription-based tenant model, and the number of tenants is probably not going to be more than 15 or 20 in the near future.
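
+ +

For reference, this is roughly what my option 2 prototype looks like (a stripped-down sketch; the tenant table and URLs are invented, and restart/monitoring is exactly the part I am unsure about):

    import multiprocessing as mp

    def run_tenant_worker(tenant_id, amqp_url):
        # A real worker would open a Kombu connection to the tenant's
        # vhost here and start consuming; printing stands in for that.
        print(f"[{tenant_id}] consuming from {amqp_url}")

    def main():
        # Invented stand-in for the central tenant config service.
        tenants = {"acme": "amqp://broker/acme",
                   "globex": "amqp://broker/globex"}
        workers = {}
        for tid, url in tenants.items():
            proc = mp.Process(target=run_tenant_worker, args=(tid, url))
            proc.start()
            workers[tid] = proc
        for tid, proc in workers.items():
            proc.join()
            # restart-on-failure and per-tenant signal handling would go
            # here, and that is the part I cannot find a standard for

    if __name__ == "__main__":
        main()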

+ +

I'm also worried about the large number of processes now being spawned per service, which also increases our server load requirements.

+ +

Another approach, which I don't think I should take, is multi-threading: having a long-running thread for each tenant under one process. Of course, given that it's Python, it won't really scale well.

+ +

Also, is there already something available to have my RabbitMQ workers spawn multiple workers for a given set of connection strings? I haven't found anything in the last 3-5 days.

+ +

Languages and frameworks currently used: Python, Flask, Kombu (for RMQ), Postgres, and Docker for containerization.

+ +

Let me also explain our current deployment approach, as it could be relevant to finding an optimal solution:

+ +
    +
  • Every app defines its Dockerfile in its repository.
  • +
  • When CI/CD kicks in, Jenkins builds the docker images for the app and pushes them to Docker Hub.
  • +
  • We have environment-specific docker-compose files (these could possibly be merged).
  • +
  • We store build_env (required to inject variables into docker-compose) separately and pass it to the production machines in order to run docker-compose.
  • +
  • With this, Jenkins connects to the prod machines, copies the deployment-related files as a tar archive, and then runs the docker-compose command on each machine.
  • +
+",66382,,66382,,43985.55486,43985.55486,Approach to setup multi-tenant RabbitMQ workers in Python,,0,10,,,,CC BY-SA 4.0, +410630,1,410645,,5/26/2020 9:03,,-2,68,"

I am trying to program a little game, and for that I need to determine which player's turn it is. I am solving it first with a Nassi-Shneiderman diagram using a variable turn which can be 1 or 2, but my problem is that I end up with too many branched decision blocks. (In the first round player 1 always begins; in every other case the player that lost starts; if it was a tie, player 1 begins.)
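
+ +

For comparison, here is the turn logic reduced to one small function (the encoding of the last result is my own assumption):

    def next_starter(round_no, last_loser):
        """last_loser is 1, 2, or None for a tie; returns who starts."""
        if round_no == 1 or last_loser is None:
            return 1          # first round, or a tie: player 1 begins
        return last_loser     # otherwise the loser of the last round starts

    print(next_starter(1, None))   # 1
    print(next_starter(2, 2))      # 2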

+ +

Do I need another variable, or does someone have an idea of how I can solve my problem without making it too complicated?

+",366600,,,,,43977.59722,Nassi Shneiderman diagram: Which player has its turn?,,2,0,,,,CC BY-SA 4.0, +410635,1,,,5/26/2020 10:35,,-3,70,"

Using GitHub as an example, www.github.com is the website people visit and api.github.com is the REST API server that programs talk to. They probably share some codebase, and in my case they share a lot.

+ +

We actually developed the REST server first, for our mobile apps; then we decided to develop a website because we thought quite a bit of code could be reused (same programming language, of course). But now the code is not clean at all, and I am trying to refactor it.

+ +

The similarities between them, and the problems I need to fix, include:

+ +

First, the logic to access the database is basically the same. After getting the data, the REST code just returns it as JSON, while the ""website code"" (for lack of a better word) feeds the data to templates which then generate the HTML pages.

+ +

Second, the router logic of the two has a lot in common.

+ +

Third, I need to make deployment easy. We use nginx to direct REST requests to the REST server and website requests (again, for lack of a better word) to the website server, which is separate from the REST server (2 standalone servers).

+ +

So how do I organize the source code to stay DRY and keep deployment easy?
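
+ +

What I am picturing is a shared service layer that both servers call; a Flask-flavoured sketch with invented names, just to show the shape:

    from flask import Flask, jsonify, render_template_string

    app = Flask(__name__)

    # Shared service layer: the only code that touches the database.
    def get_orders(user_id):
        return [{"id": 1, "total": 42}]   # stand-in for the real query

    # REST endpoint: the same data, serialized as JSON.
    @app.route("/api/orders")
    def api_orders():
        return jsonify(get_orders(user_id=7))

    # Website endpoint: the same data, fed into a template.
    @app.route("/orders")
    def web_orders():
        tmpl = "<ul>{% for o in orders %}<li>{{ o.id }}</li>{% endfor %}</ul>"
        return render_template_string(tmpl, orders=get_orders(user_id=7))

Would splitting it this way, with nginx still routing to two processes that import the same service package, be a reasonable structure?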

+ +

--- update ---

+ +

If you downvoted or voted to close, can you please leave a comment explaining why? To me this is a legitimate question, and I have been doing refactoring for a while. Thanks!

+",217053,,217053,,43977.4875,43978.26181,How do I organize my REST API codes along with the codes for generating the website?,,2,0,,,,CC BY-SA 4.0, +410638,1,410641,,5/26/2020 11:29,,-4,70,"

I've been working as a software developer for around 8 years now, mostly with companies offering outsourcing services. The challenge so far was mostly: learn the new tech, showcase it through a prototype/POC, and implement it in the assigned project. Over the years technologies changed, but the nature of the work remained the same: JavaScript, jQuery, WebForms, Silverlight, MVC, etc. But now I am trying to transition to a software consultant role, and one of the pieces of feedback that I received was: ""Try to think in depth, so as to not just use a feature but to relate it to the most basic concept that you know."" E.g., as a best practice for string concatenation in C#, we know that we should be using StringBuilder and also know why we should be using it (string immutability), but the question was: how would you implement a StringBuilder class from scratch?

+ +

I'm not sure how to train myself to become ready for design thinking and in-depth analysis, since most day-to-day tasks never require such a rigorous drill-down.

+ +

PS: Sorry for the long question. Even though the question may not be suitable for this platform, I would still be happy if anybody could point me to the right forum/community where I can reach out for guidance.

+",48250,,,,,43977.52708,Evolving thought process and design thinking,,1,10,,43977.58403,,CC BY-SA 4.0, +410651,1,410659,,5/26/2020 16:12,,-4,60,"
+

Write the impact of software architecture on Requirement Analysis

+
+

I was given the above question in my assignment. I have studied Software Architecture and Requirement Analysis, but I'm still confused by this.

+

Does Software Architecture have any impact on Requirement Analysis?

+

I have gone through my teacher's lecture and I couldn't find anything notable regarding this matter. He has also said that we have to do our own research on this.

+

According to my research, Requirement Analysis is done before any prototyping or architecture design, which means Software Architecture isn't involved in the Requirement Analysis process. So how is it even possible to describe the impact of Software Architecture on Requirement Analysis?

+",366632,,319783,,43999.28333,43999.28333,What is the impact of Software Architecture on Requirement Analysis?,,1,14,,43977.79931,,CC BY-SA 4.0, +410655,1,410713,,5/26/2020 16:39,,-3,273,"

In a refund tech scam, tech scammers use Chrome Developer Tools to edit the HTML directly on the victim's bank webpage through a Remote Desktop (Teamviewer, AnyDesk, etc) to fool their victim into thinking that they received a 'refund'.

+ +

I was wondering if we could detect that the innerHTML showing the victim's bank balance has changed (via some DOM event?), compare the value to a private variable storing the bank balance, and reload the page if a mismatch is found. A timer could also be used. Is there any way the scammer could stop the listener/timer code from running without this being detected by the JavaScript code?

+ +

Although this may be a stupid idea, making these scams harder for scammers to execute would definitely slow down their progress in scamming more vulnerable victims.

+",287790,,,,,43979.41111,Is it possible to prevent tech scammers from editing bank webpages?,,4,3,1,,,CC BY-SA 4.0, +410660,1,410663,,5/26/2020 16:57,,-3,83,"

I am very new to data structures and algorithms. I want to use recursion + a stack/queue in a grocery store inventory program that I am making. However, I can't come up with any good ways of using recursion that contribute something relevant to my grocery store inventory system.

+ +

I know recursion can be used for the Tower of Hanoi, recognizing palindromes and other random things like generating patterns. However, I am not sure how to apply it to a grocery store inventory system in a way that provides value to the staff who interact with the program.

+ +

I don't know how to implement recursion here, and I just need ideas on what I could possibly do with it in a way that adds value to my grocery store inventory application. Would an ordering system be able to use recursion effectively?
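
+ +

One place where recursion seems to fit an inventory naturally is a nested category tree; a small sketch (the data shape is my own assumption):

    def count_items(category):
        """Recursively count items in a category and all sub-categories."""
        total = len(category["items"])
        for sub in category["subcategories"]:
            total += count_items(sub)
        return total

    store = {
        "items": ["cart"],
        "subcategories": [
            {"items": ["apple", "pear"], "subcategories": []},
            {"items": ["milk"], "subcategories": [
                {"items": ["cheddar", "brie"], "subcategories": []},
            ]},
        ],
    }
    print(count_items(store))   # 6

Would something like this, or a recursive breakdown of bulk orders, count as a meaningful use?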

+",366637,,1204,,43977.97014,43978.58472,Recursion + Stack/Queue usage in practical life implementation,,2,7,1,43978.22708,,CC BY-SA 4.0, +410670,1,410688,,5/26/2020 19:48,,-2,114,"

I work on a small component in an embedded device (sensor). This component :

+ +
    +
  • Every 5 seconds, sends requests to other components using sockets (get health check status, operating status, etc.) and stores the responses in memory.
  • +
  • Creates a socket, listens for incoming requests from UI. And sends the stored stats.
  • +
+ +

To keep the design simple, the component is single-threaded and asynchronous. I use Boost.Asio and timers. Some components take a while to respond, but the async calls (socket create/listen/connect/send/receive) make sure none of the operations block the main thread. This has been working really well.

+ +

I am working on a new task:

+ +
    +
  • Talk to a new component(every 5 secs) and get a list of active events. There could be thousands of active events at any point of time.
  • +
  • Store this info in a database. Hundreds of thousands of records in a month.
  • +
+ +
component_id integer
+event_starttime integer // timestamp
+event_endtime integer // timestamp
+event_info text
+
+ +
    +
  • Send this information to the UI.
  • +
+ +

There will be a single writer and multiple readers. I chose sqlite because it fits my needs: simple, fast, suitable for embedded system. Here are some questions that I have:

+ +
    +
  1. Indices: the UI will query things like the list of active events (endtime is null) and the number of events per component in a date range. I could make component_id an index; see the sketch after this list. Would it also make sense to make a text column an index if it's used in searching?
  2. +
  3. The UI can query historical stats and close the connection. It can also get a continuous feed of active events over a single HTTP connection. These db read operations could block the main thread from talking to other components. Would it make sense to:

    + +

    a. Use a separate thread and boost io_context to listen for incoming db requests from UI? io_context will handle the listening/accepting connections asynchronously. The new thread will handle db reads synchronously. Even if it takes time, it won't block the main thread. Cons: Short queries might be delayed by longer ones since they run in the same thread, share db connection.

    + +

    b. Spawn a new thread per connection? Also, share/create a new db connection?

  4. +
  5. In async_receive(), I need to store the event info in the database. I can't find any async sqlite3 methods to write this info to the database. The events are stored in a vector, and each item is inserted/updated in a foreach loop. This will block the main thread, which I want to avoid. Any suggestions?

  6. +
  7. I have seen many people recommending WAL mode. Will it be suitable for my use case?

  8. +
+ +
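
On the index question (point 1), a partial index is one option I have been looking at. It is illustrated here via Python's sqlite3 purely for brevity; the SQL is the same from C++:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE events (
        component_id    INTEGER,
        event_starttime INTEGER,
        event_endtime   INTEGER,
        event_info      TEXT)""")

    # Partial index: only rows of *active* events (endtime IS NULL) are
    # indexed, keeping the "list active events" query cheap even with
    # months of history in the table.
    conn.execute("""CREATE INDEX idx_active_events
                    ON events(component_id) WHERE event_endtime IS NULL""")

    # Note: an index on a TEXT column helps equality/prefix lookups only;
    # for substring search, an FTS table is the usual SQLite answer.

+ +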

Edit: Slightly unrelated, but I was reading about Node.js sqlite libraries and came across a discussion of node-sqlite3 vs better-sqlite3. Apparently, better-sqlite3 is much faster than the former even though it is synchronous while node-sqlite3 is async. They say SQL operations are CPU-intensive, so it doesn't make sense to use async. Thoughts?

+",31321,,31321,,43977.86597,43978.29028,Designing a sqlite component in C++,,1,1,,,,CC BY-SA 4.0, +410672,1,410695,,5/26/2020 20:49,,-2,74,"

I would like to have the following interface:

+ +
Resource {
+public:
+void copyInto(Resource* src) = 0;
+}
+
+ +

But in order to implement this, the implementation would need to know (or make assumptions about) the implementation that it is copying from. +When i instead have a copyFrom method, the problem would just reverse, meaning that the source would need to know about the target it is copying to.

+ +

I thought about two possible solutions: +The simple one would be to have a staging-Resource with a defined form, into which a source-Resource copies, and from which the destination-Resource copies. +This would create overhead, and make all Resources depend on the implementation of this staging-Resource.

+ +

The other one would be two define a CopyOperation class, that takes information about the source from one side, and information about the destination on the other. It then resolves the copy based on that information.

+ +

Are there any goto-solutions/patterns to this problem (which doesn't seem too special)? +If so, advice/resources, and/or considerations in respect to my mentioned ideas would be highly appreciated!

+",339631,,,,,43978.4875,Behaviour that depends on two sides,,3,5,,,,CC BY-SA 4.0, +410682,1,410685,,5/27/2020 3:16,,3,149,"

I'm wondering if there's a standardized name for the following refactoring:

+ +
class Foo:
+  def do_something_awesome(self):
+    my_bar = Bar(42)
+    return my_bar.reticulate_splines()
+
+ +

Here class Foo is explicitly coupled to class Bar because it relies on that class name to create the my_bar object. If I don't like this explicit coupling, I'd go

+ +
class Foo:
+  def __init__(self, bar_generator):
+    self.bar_generator = bar_generator
+
+  def do_something_awesome(self):
+    my_bar = self.bar_generator.create_bar(42)
+    return my_bar.reticulate_splines()
+
+ +

I can swear I read this in either Martin Fowler's refactoring book or in Kerievski's refactoring to pattern book, but can't seem to find it.

+ +

Apart from that, would this be considered a reasonable refactoring? I feel it's a mix of Factory Object and Dependency Injection.

+",161522,,,,,43979.72222,Refactoring for removing explicit object construction inside class,,2,0,1,,,CC BY-SA 4.0, +410684,1,410694,,5/27/2020 5:07,,0,107,"

I am planning to make a service which will have simple REST APIs and a database in the backend. I also want to add logic that listens to notifications emitted by another service; some business logic will then update a row in the database.

+ +

For updating the database row from Notifications, I can think of 2 approaches:

+ +
    +
  1. Should I create an API which is internal, used only by the service itself, so that the listener process calls this API instead of directly updating the database?

  2. +
  3. The listener process directly updates the database.

  4. +
+ +

I can see some pros and cons for each approach. In Approach 1, we are unnecessarily adding a REST API which is never used by clients.

+ +

In Approach 2, we are opening a back door to the database instead of having all requests come through the REST API.

+ +

Can someone help me here and tell me whether one of these is an anti-pattern, and which one is better to use?

+",366679,,,,,43978.41111,Is this an anti-pattern to have a service have both APIs and listening to events?,,1,2,,,,CC BY-SA 4.0, +410704,1,,,5/27/2020 17:49,,1,44,"

So I have an engine in progress that's structured like this:

+ +
    +
  • entities are simple ids (unsigned short)
  • +
  • components are stored in pools which are static members
  • +
  • systems are stored by the manager, and all inherit from a single base
  • +
+ +

What this looks like in code is like this:

+ +
struct transformComponent {
+    const unsigned short id;
+    vec2 pos, dim;
+
+    transformComponent(vec2 pos, vec2 dim, unsigned short id): pos(pos), dim(dim), id(id) {}
+
+    static componentPool<transformComponent> pool; // allows for scalability
+};
+
+struct physicsComponent {
+    const unsigned short id;
+    vec2 v, a, f;
+    unsigned short mass;
+    float invmass;
+
+    physicsComponent(vec2 v, vec2 a, vec2 f, unsigned short mass, unsigned short id): v(v), a(a), f(f), mass(mass), invmass(1.0/mass), id(id) {}
+
+    static componentPool<physicsComponent> pool;
+};
+
+// same for other components
+
+struct system {
+    virtual void update() = 0; // don't care about virtual call, there will be only one per system
+};
+
+struct physicsSystem: public system {
+    virtual void update() override; // the problem
+};
+
+struct room { // the manager
+    std::vector<system*> systems;
+    std::unordered_set<unsigned short> activeIds;
+
+    void update() {for(auto* sys: systems) {sys->update();}}
+};
+
+ +

So far I have done everything I could to make this as cache-friendly as possible. The thing is: if the physics loop reads the physics components and writes to the transforms, doesn't that defeat the whole point? I'll let some code explain what I mean:

+ +
/* either:
+  physicsSystem has a hashmap from the ids it has to loop over to the component pointers, which only gets updated when necessary (std::unordered_map<unsigned short, std::tuple<transformComponent*, physicsComponent*>> comps;); this breaks purity (systems have no state)
+or
+  comps is created every frame (for every system), terribly inefficient
+*/
+
+// either:
+void physicsSystem::update() {
+    for(auto& pair : comps) { // take a reference to avoid copying the tuple each iteration
+        physicsComponent* phy = std::get<physicsComponent*>(pair.second);
+        transformComponent* tra = std::get<transformComponent*>(pair.second);
+
+        phy->a = phy->f * phy->invmass;
+        phy->v += phy->a;
+        tra->pos += phy->v;
+
+        // cache misses to load both transform and physics twice, for every entity, for every system, for every frame
+    }
+}
+
+// or:
+// comps is actually an std::unordered_map<unsigned short, std::tuple<transformComponent*, physicsComponent*, vec2>>
+void physicsSystem::update() {
+    for(auto& pair : comps) { // must be a reference: 'schedule' below has to alias the stored tuple, not a copy
+        physicsComponent* phy = std::get<physicsComponent*>(pair.second);
+        vec2& schedule = std::get<vec2>(pair.second);
+
+        phy->a = phy->f * phy->invmass;
+        phy->v += phy->a;
+        schedule = phy->v;
+    }
+    for(auto& pair : comps) {
+        std::get<transformComponent*>(pair.second)->pos += std::get<vec2>(pair.second);
+    }
+    // smooth first loop, but still cache misses in the second (I think)
+}
+
+ +

So, my question is: how would one go about making this loop cache-friendly? By that I mean no cache misses for every entity, only when the array size exceeds the cache and when switching between component types to update. I'd be happy to receive any completely different approaches as well. TIA.

+",366715,,,,,43980.68403,Writing a data-oriented ECS update loop that handles multiple components,,1,1,1,,,CC BY-SA 4.0, +410705,1,,,5/27/2020 17:59,,-4,96,"

I'm more or less learning the MEAN stack (have yet to start on Angular, so currently using straight vanilla JS for front-end) and part of what I'm building for my portfolio is a drag-and-drop form builder.

+ +

Currently the form has sections, which contain questions, which can contain multiple options as well as multiple follow up/sub questions (which can then contain their own options, but not follow ups).

+ +

EJS helps me with the initial render of the stored form using nested for..of loops; however, my question comes into play when adding new elements to the form.

+ +

Currently, I have vanilla front-end JS that is looking at some template tags within the page, then filling those in for new sections, questions, and options.

+ +

However it doesn't seem very DRY because I'm using essentially the same logic in EJS when initially rendering the page (albeit multiple times).

+ +

Should I write functions on the back end which are passed into the EJS render call, used both for the initial render and then made available to the front-end JS? Or should I pass the EJS variable containing the form (from MongoDB) into the front-end JS directly, and use functions there both to draw the page initially and to add new elements? Both of these, hopefully, would make use of template tags in the HTML. Is one faster and/or safer than the other?

+ +
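
For context, the second option would look roughly like this in the EJS template (form and formData are placeholder names):

+ +
<script>
+  // Serialize the form document from MongoDB into a global the front-end JS can use
+  window.formData = <%- JSON.stringify(form) %>;
+</script>
+
+ +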

Another option could also be to use EJS partials for sections, questions, and options to render the page, but I wouldn't know how to incorporate that into the front-end JS to add new elements without using templates, which is essentially what I'm doing now.

+",363643,,363643,,43978.75903,43979.76944,"Best way to structure reusable code using Node.JS, EJS, and front end JS?",,1,1,,,,CC BY-SA 4.0, +410710,1,,,5/27/2020 19:04,,1,52,"

Assume there is a class Shape. The class has two functions area() and perimeter().

+ +

Let's say Circle and Square inherit from Shape and override these methods. Obviously the results are going to be fractional.

+ +

What is the best way to set a precision value for the results? Say the user sets PRECISION=2 or PRECISION=3, and the results returned by the functions of the Circle and Square classes are rounded accordingly. Where should I define this functionality? I was also wondering how libraries like numpy handle this situation.

+ +

One approach could be to add an optional parameter to these functions and round the results accordingly, for example (in Python):

+ +
import math
+
+def getArea(radius, prec=2):
+    # the optional 'prec' parameter controls how many decimal places are kept
+    return round(math.pi * radius ** 2, prec)
+
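+ +

Alternatively, a class-level default that the methods fall back to might look like this (a rough sketch; the names are placeholders):

+ +
import math
+
+class Shape:
+    precision = 2  # class-wide default, overridable per subclass or instance
+
+class Circle(Shape):
+    def __init__(self, radius):
+        self.radius = radius
+
+    def area(self, prec=None):
+        # fall back to the class-level default when no precision is given
+        return round(math.pi * self.radius ** 2, prec if prec is not None else self.precision)
+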
+",366731,,118878,,43978.80833,43979.48542,Suitable way to round results returned by any function of a class,,2,0,,,,CC BY-SA 4.0, +410712,1,410729,,5/27/2020 19:27,,-3,54,"

I'm working on a C++ project which is currently divided into ""sub-modules"" / ""components"". Each of these is compiled into a separate library (components are usually 10-20 files). The libraries are linked into tests which ensure that each component works as expected.

+ +

I've now started to work on the ""main"" part of the project, which uses all those different components. The problem is that, in the end, I want to 'ship' this project as a dynamic library. However, I'm running into problems linking libraries into a library. I'm not sure whether this is because I am doing something wrong with my tools, or whether it is simply not possible.

+ +

As such, my question is:

+ +

Does my approach of having separate components be libraries, so I can easily develop and test them individually, make sense in C++, given that I want to deliver this project as a library itself? For concreteness, the layout I am attempting is sketched below.
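
+ +

Expressed in CMake terms, just to make the structure concrete (target names are placeholders; my actual tooling may differ):

+ +
# Each component is its own static library, testable on its own
+add_library(component_a STATIC a1.cpp a2.cpp)
+add_library(component_b STATIC b1.cpp b2.cpp)
+
+add_executable(component_a_tests test_a.cpp)
+target_link_libraries(component_a_tests PRIVATE component_a)
+
+# The shipped artifact is a shared library composed of the components.
+# Static libraries linked into a shared one generally need position-independent code.
+set_target_properties(component_a component_b PROPERTIES POSITION_INDEPENDENT_CODE ON)
+add_library(myproject SHARED api.cpp)
+target_link_libraries(myproject PRIVATE component_a component_b)
+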

+",366732,,366732,,43978.84167,43979.3,"In C++, does it make sense to have a library project be composed of other libraries?",,1,7,,,,CC BY-SA 4.0,

Is there a logical reason why the integer is promoted to 32+ bits? I was trying to make an 8-bit mask, and found myself a bit disappointed that the promotion will corrupt my expressions.

+ +
sizeof( quint8(0)); // 1 byte
+sizeof(~quint8(0)); // 4 bytes
+
+ +
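
For example, to get the value I expected I have to truncate the result back down:

+ +
quint8 mask = static_cast<quint8>(~quint8(0)); // 0xFF again, stored in 1 byte
+sizeof(mask);                                  // 1 byte
+
+ +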

Usually something like this is done for a good reason, but I do not see any reason why a bitwise operator would essentially need to add more bits. It would seem to me that this would hurt performance [slightly] because now you have more bits to allocate and evaluate.

+ +

Why does C++ [Or other languages] do this?

+",136084,,,,,43979.30972,"In C++, Why do bitwise operators convert 8 or 16 bit integers to 32 bit?",<32-bit>,2,4,,,,CC BY-SA 4.0, +410718,1,410722,,5/27/2020 21:43,,3,204,"

I have been pushing unit testing lately. This is a new skill for my team. I have had 10+ years experience writing unit tests, but I am basically the only person on the team with any experience with this at all. I have been struggling lately with how to budget for learning these skills. Forcing people (me included) to learn all new skills outside work hours doesn't work. We have families. Work at work. Home at home. We are all allotted training hours each quarter, which is great. However blog posts, YouTube videos and PluralSight tutorials only get you so far.

+ +

I got this harebrained idea to increase story points for stories where unit tests are required. This effectively reduces the amount of functionality we can deliver per story point. At the time it felt fine, since we were increasing the total effort. In my mind this increase was justified by the ""unknowns"" of writing unit tests. I also expect story point estimates to come back down after our team members have become competent at unit testing.

+ +
+ +

I originally got this harebrained idea from another harebrained idea: increasing story point estimates for stories that required writing automated end-to-end tests with Selenium. This resulted in features that used to be 1 story exploding into 6+ stories. Story #1 included development and writing a single automated test. This usually turned out to be a 13 point story. As a general rule the team feels comfortable delivering an 8 point story in a 3 week sprint. Anything higher and our confidence goes down exponentially. A 13 point story is worrisome. A 20 point story in one sprint? Yeah, and while we're at it I'd like a pony too.

+ +

So that first story would be 13 points, then we would have 4-5 stories estimated at 3 to 5 points each. The smaller stories were literally the effort required to write the automated test, including the addition of any test infrastructure code, like Selenium page models. These tests all verified distinct, testable end user behavior.

+ +

Team velocity initially suffered, but eventually went up. Story point estimates never came back down. We continued our story breakdown of a single 13 point story and then a bunch of 3 to 5 point stories to write automated tests.

+ +
+ +

Now we fast forward to my current situation of learning unit testing. The team estimated a story at 13+ story points again, and there is no way to break this story down into anything smaller. For our team, a ""story"" is basically something an end user can interact with. Pretty general, but if an end user cannot see or interact with it, then it is not a user story.

+ +

I requested we write unit tests that require mocking a single method on an interface used to send an e-mail. We create and send the e-mail using the Postal NuGet package, which makes sending an e-mail no more complicated than rendering a web page with a view model and a razor template (our team has extensive experience with ASP.NET MVC).

+ +

The unit tests would cover a ""service"" class invoked when removing people from a business customer account. Anyone who is removed should get an e-mail notification. The new unit tests should cover the fact that e-mails get sent to each person who is removed. They do not need to assert the contents of the e-mail, just that the e-mail gets sent. This involves mocking the IEmailService.Send(Email) method.

+ +
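
For context, the kind of test I am asking for is about this small (sketched with Moq and xUnit purely for illustration; AccountService and the method signatures are placeholders):

+ +
[Fact]
+public void Removing_a_person_sends_that_person_an_email()
+{
+    var emailService = new Mock<IEmailService>();
+    var sut = new AccountService(emailService.Object); // hypothetical class under test
+
+    sut.RemovePerson(accountId: 42, personId: 7);      // hypothetical signature
+
+    // The only assertion: an e-mail was sent for the removed person
+    emailService.Verify(s => s.Send(It.IsAny<Email>()), Times.Once);
+}
+
+ +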

This 13 point story makes me nervous. We are halfway through our 3 week sprint and I am still getting basic questions about unit testing fundamentals. I'm afraid we are going to miss our goal this sprint, which is why the story got a 13 point estimate. Each time I tried introducing unit tests, even in smaller, simpler stories, the team always gave me a 13+ point estimate. I'm afraid no story is small enough for a single sprint anymore once you factor in development, automated tests and unit tests. This is simply too much for the speed and skill level of this team, a trend I have noticed over the entire 4 years I've led this project. I'm simply hitting a brick wall.

+ +

We do not adjust story points based on who gets assigned the story. To be honest, no single person works on a story anyhow. I've read Where does learning new skills fit into Agile?, but at some point you must utilize the new skill, and this is my conundrum. Since I am the team lead, scrum master, business analyst, graphic designer, BDD practitioner and architect of this project, I frequently do not have time to pair program with every person on the team. This large number of responsibilities is not changing any time soon, either.

+ +

It seems we must deal with a reduced velocity, or increase the estimates. I've chosen the latter of the two.

+ +

After increasing story point estimates in order to learn unit testing, should the team reduce future story point estimates for similar work based on the assumption that the ""unknowns"" of learning to write unit tests are no longer unknown?

+",118878,,,,,43985.69236,"Should the team reduce future estimates after becoming competent at a new skill, because estimates were increased while learning?",,4,5,,,,CC BY-SA 4.0, +410724,1,410726,,5/27/2020 23:18,,79,16164,"

AFAIK, an Option type will have runtime overhead, while nullable types won't, because an Option type is an enum (consuming memory).

+ +

Why not just mark optional references as optional, so the compiler can follow the code's execution and find wherever the reference can no longer be null?

+ +

Edit: I see I was misunderstood. I understand and agree with the advantages of avoiding null pointers. I'm not talking about arbitrary pointers that accept null. I'm only asking why not use compile-time metadata, like C# 8's nullable reference types and TypeScript with strict null checks, where default pointers can't be null and there's a special syntax (mostly ?) to indicate a pointer that can accept null.

+ +
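
To make what I mean concrete, in TypeScript with strictNullChecks the nullability lives entirely in the type, and flow analysis removes it:

+ +
function findName(id: number): string | null {
+  return id === 1 ? 'Ada' : null;
+}
+
+const userName = findName(1);
+// userName.length;       // compile error: 'userName' is possibly 'null'
+if (userName !== null) {
+  userName.length;        // OK: the compiler has narrowed the type to 'string'
+}
+
+ +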

Edit 2:

+ +

Also, Some is strange, in my opinion. Implicit conversion would be better. But that's a language feature, and not really relevant here.

+",366741,,366741,,44110.19236,44110.19236,"Why F#, Rust and others use Option type instead of nullable types like C# 8 or TypeScript?",,11,10,23,,,CC BY-SA 4.0, +410728,1,,,5/28/2020 6:12,,-5,43,"

Suppose I have a Ban User process that should definitely modify the Accounts data store, and I also have a Create Account process that should modify the Accounts data store. Can both of these processes write to the same data store in a DFD?

+",366756,,9113,,43979.88264,43979.88264,Can you have 2 processes modify a data store in a DFD diagram,,1,1,,,,CC BY-SA 4.0, +410731,1,,,5/28/2020 7:30,,1,63,"

I made a pretty simple CRUD API to store customers and some related information in a database.
+My customer has 20 properties like Name, Telephone etc. that are all stored in an anemic domain model.
+My application has a core assembly where my domain logic is located. There I have a business model, looking pretty much like this:

+ +
public class Customer 
+{
+   public string FirstName {get; set;}
+   public string LastName {get; set;}
+   public string Telephone {get; set;}
+   //Further 20 properties
+}
+
+ +

I also have an assembly for my current database. I need another model there for my customer, having the same 20 properties, plus some db-related things, to achieve persistence.
+I also have an adapter for my UI (currently just a .NET Core API), and this adapter also needs a UI model with the same 20 properties.

+ +

To avoid repeating all 20 properties in each layer, I made my life simple and just inherited my DB models and UI models from the domain models.
+The example for the customer looks like this:

+ +

+ +
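
In code form, roughly (the extra members here are just examples of the db-related additions):

+ +
// DB assembly
+public class CustomerEntity : Customer
+{
+    public int Id { get; set; }                // db-related additions
+    public DateTime LastModified { get; set; }
+}
+
+// UI assembly
+public class CustomerViewModel : Customer
+{
+}
+
+ +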

My UI adapter and DB adapters have dependencies on the domain model, but the domain model has no dependencies at all.
+This way I am not violating any principle I have learned over the years, but for some reason it still feels strange to do it.

+ +

Is my approach okay, or do you have any ideas about why this might be the wrong thing to do?

+",366765,,366765,,43979.31806,44129.45972,Inheriting DBModels and UI Models from Domain Models,,1,6,,,,CC BY-SA 4.0, +410732,1,,,5/28/2020 8:05,,-3,65,"

While in general terms I realize this is a sub-version number that's bumped based on patches that don't bump the major version, is there a more exact definition for this?

+ +

Must the number match the number of patches applied for example?

+ +

Or is this a general term that each project uses slightly differently?

+",99957,,,,,43979.40347,"What is the exact meaning of ""patch level""?",,1,2,,,,CC BY-SA 4.0, +410747,1,,,5/28/2020 14:35,,0,43,"

I'm not sure this question matches this forum's purpose, but I didn't think it should belong to the stackoverflow one either, so here it goes:

+ +

I created a model binder that keeps one mapping ""clean"" by putting it in the model binder itself, but now I want to do the same thing in another action method, and I am wondering whether there is a better way, since I'm not convinced the binder is the right place for this.

+ +

My action method looks like this:

+ +
[HttpPost(""Register"")]
+public async Task<IActionResult> Register(
+    [FromQuery] UserToRegister userToRegister,
+    User user /*This parameter is never bound from the request body, since I set it in my custom model binder*/)
+{
+    var response = await _userService.RegisterAsync(user);
+    return Ok(response);
+}
+
+ +

I created a custom Model Binder, where I map the properties from userToRegister into the user param. So this custom binder looks like this:

+ +
public Task BindModelAsync(ModelBindingContext bindingContext)
+{
+    var values = bindingContext.ValueProvider;
+    User user = new User()
+    {
+        Id = Guid.NewGuid(),
+        Name = values.GetValue(""Name"").FirstValue,
+        Password = /*password encrypted*/,
+        ...
+    };
+    bindingContext.Result = ModelBindingResult.Success(user);
+    return Task.CompletedTask;
+}
+
+ +

So do you find this a good solution? Would it be better to receive the User object in the request body so I could modify it later? Should I do these modifications (hashing the password, creating a new Id, etc.) in the ModelBinder?

+",366358,,,,,44130.75139,Best place to map a model received in an action method,<.net>,1,2,,,,CC BY-SA 4.0,

While looking for a simple heuristic to detect when inheritance is being abused, I came up with the following hypothesis:

+ +
+

If subclass B overrides method foo, but does not call base.foo(), it seems like inheritance in that method is broken, as it only uses the same method signature, but not necessarily the same behavior.

+
+ +

Is this a right way to identify an inheritance that shouldn’t be?

+",348024,,9113,,43980.60903,43980.60903,OOP - How to identify inheritance abuse?,,2,2,,,,CC BY-SA 4.0,

In our microservice architecture, this is how requests flow.

+ +

+ +

Service Layer: Requests hit this layer from the public LB (load balancer). Response composition is also performed here.

+ +

API Gateway: The core job of the API gateway is to make parallel calls to the respective microservices.

+ +

Please note: usually, response composition is done at the API gateway. However, we intentionally moved composition to the service layer due to frequent changes in the compositions.

+ +

Being in the automotive domain, we deal with vehicles, dealers, leads etc. Now, there is some confusion about when to prefer synchronous over asynchronous communication and vice versa. Let me take the example of the two microservices below.

+ +

DealersService: This service holds all the information about dealers.

+ +

LXMSService: This service is responsible for processing the leads we capture on our platforms and sending them to the respective clients (dealers).

+ +

While a lead is being captured, we want to show a list of dealers the user can choose from. The logic for showing dealers may vary from client to client. This logic is configured in LXMS, and we store only dealer ids in this service (LXMS).

+ +

Now there can be two possible ways to show dealers!

+ +

Possibility 1: Synchronous way

+ +

Step 1: Make a call to LXMS microservice

+ +

Step 2: Get a list of dealer ids

+ +

Step 3: Pass this list of dealer ids to the Dealer microservice to resolve.

+ +

Step 4: The Dealer microservice returns the id and name of each dealer, which we show to the user.

+ +

+ +
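
In code, the service layer would execute possibility 1 roughly like this (the client names are placeholders):

+ +
# Step 1-2: ask LXMS which dealer ids apply to this lead
+dealer_ids = lxms_client.get_dealer_ids(lead)
+# Step 3-4: resolve the ids to displayable dealers, e.g. [{'id': 1, 'name': '...'}]
+dealers = dealer_client.resolve(dealer_ids)
+
+ +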

Possibility 2: Asynchronous way

+ +

We remove the run-time dependency by also storing the dealer name in the LXMS microservice, and we maintain consistency using pub/sub. This way we isolate LXMS and remove run-time dependencies from the service layer (see the sketch below).

+ +

+ +
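
The consistency part of possibility 2 would then be a small subscriber, roughly (names are placeholders):

+ +
# LXMS subscribes to dealer-change events and keeps its local copy of names fresh
+def on_dealer_updated(event):
+    lxms_db.upsert_dealer(dealer_id=event['dealer_id'], name=event['name'])
+
+ +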

Here are the questions

+ +
    +
  1. Which approach should we go with, and why?

  2. +
  3. What is the problem with making sequential calls?

  4. +
  5. In the first approach, the service layer makes sequential calls. Can we still call the microservices independent?

  6. +
  7. Isolation comes at the additional overhead of syncing data via pub/sub. Is the isolation worth this overhead?

  8. +
  9. When to prefer synchronous over asynchronous and vice versa?
  10. +
+ +

Thank you!

+",317027,,317027,,43979.73056,44198.0875,"In microservice architecture, when to prefer synchronous over asynchronous communication and vice versa?",,2,2,,,,CC BY-SA 4.0, +410754,1,,,5/28/2020 16:51,,0,36,"

I develop an in-house Java framework. I provide an interface so that my end users can provide their own custom implementation (i.e. plugin/SPI).

+ +
public interface SomePlugin {
+    SomeResponse doSomething(SomeRequest request);
+}
+
+ +

There are a number of small configurations to be made, some required and some optional. For example: plugin name, response formatter.

+ +

My initial interface looked like this (V1):

+ +
public interface SomePlugin { // V1
+    String getName();
+    default SomeFormatter getResponseFormatter() { return new DefaultFormatter(); }
+    SomeResponse doSomething(SomeRequest request);
+}
+
+ +

Over time I learned this is not easily extensible, without breaking existing user implementations or abusing the default keyword. Reasons:

+ +
    +
  • What if I want to receive another config value, e.g. plugin description?
  • +
  • What if I need to provide an alternative way to receive response formatting?
  • +
+ +

V2

+ +

Now, this blog post suggests that I may use ""argument objects"" to better achieve this.

+ +
+

It is generally a bad idea to have non-void methods in SPIs as you will never be able to change the return type of a method again. Ideally, you should have argument types that accept “outcomes”.

+
+ +
public interface SomePlugin { // V2
+    void configure(SomeConfigurator config);
+    SomeResponse doSomething(SomeRequest request);  
+}
+
+public interface SomeConfigurator {
+    void setName(String name);
+    void setDescription(String description);
+    void useFormat(SomeFormatter formatter);
+    void useFormat(SomeParser parser, SomeOutputBuilder builder);
+}
+
+ +
public class MyPlugin { // Usage example
+    void configure(SomeConfigurator config) {
+        config.setName(""Test plugin"");
+        config.useFormat(new MyFormatter());
+    }
+    ...
+}
+
+ +

This nicely solves the extension/evolution problem that my initial design (V1) had.

+ +
    +
  • Deprecation and addition are easier.
  • +
  • I can now receive multiple objects at once, i.e. parser and builder, without creating a wrapper object.
  • +
  • I can overload some methods because what was a return type is now a parameter.
  • +
+ +

I lose the required-ness of mandatory configs, i.e. users may forget to provide name and it will still compile. But I think this is an acceptable trade-off for maintainability, with help of Javadoc and runtime checks.

+ +

V3

+ +

One concern on the concept of ""argument objects"" with void return type is that it is a foreign idea. When I showed this prototype to my end users, they didn't grasp how to use it unless an example was provided. This leads me to consider:

+ +
public interface SomePlugin { // V3
+    SomeConfig getConfig();
+    SomeResponse doSomething(SomeRequest request);  
+}
+
+public final class SomeConfig {
+    private String name;
+    private String description = """";
+    private SomeFormatter formatter = new DefaultFormatter();
+    private SomeParser parser;
+    private SomeOutputBuilder builder;
+    ... getters and setters ...
+}
+
+ +
public class MyPlugin { // Usage example
+    SomeConfig getConfig() {
+        SomeConfig config = new SomeConfig();
+        config.setName(""Test plugin"");
+        config.useFormat(new MyFormatter());
+        return config;
+    }
+    ...
+}
+
+ +

Now, this feels more like a ""traditional"" Java and should be more intuitive to my end users. I still get the benefits of V2. It's unlikely that I'd have to worry about deprecating or replacing getConfig() itself.

+ +

One small con: SomeConfig is now more verbose with all getters and setters, compared to SomeConfigurator in V2 which looked like a clean ""header file"". Despite this, I feel like V3 is the way to go.

+ +

Does my reasoning make sense? Are there any reasons to favor V2 over V3, or do you have other approaches to suggest? Thanks!

+",366802,,,,,43979.70208,Designing a public facing Java interface for future extensibility (API evolution),,0,0,0,,,CC BY-SA 4.0, +410757,1,,,5/28/2020 17:23,,0,93,"

I am building a game which has rooms with clients connected. Each room has it's own websocket.

+ +

At the end of the game, some calculation needs to be made about who won and it's a complicated math CPU intensive one and the result will be sent to the websocket so there is no REST request waiting.

+ +

I am thinking of implementing the game logic in node.js and then make a second REST api in another faster multithreaded language (go/Java/c#)? to make the computation and return the result to node.js.

+ +

Question 1) I read a lot of examples about using redis or rabbitmq but why do I need this? Just in case the server handling the computations gets flooded?

+ +

Question 2) Do I need this or should I just use a language that has multithreaded support and be done with it since it's a simpler approach?

+",154794,,,,,43979.72431,Creating a microservice REST api to offload CPU intensive tasks from node.js,,0,0,0,,,CC BY-SA 4.0, +410758,1,410763,,5/28/2020 17:26,,3,87,"

In a typical server software design, business logic will generally invoke ""services"" (such as a database or web service).

+ +

When I design such a system, I tend to think of each service as a singleton which is created when the system starts up and handles multiple concurrent requests throughout the lifetime, usually not storing details of any particular request within itself.

+ +

However, there is an alternative design whereby a new instance of the service is instantiated for each request. Personally I would normally use the term ""handler"" instead of ""service"" for this.

+ +

It seems to me both patterns can work equally well. Is there a reason to prefer one over the other, or a de-facto best practice regarding this?

+",187486,,,,,43979.74861,Should a service object be transient or persistent?,,1,0,,,,CC BY-SA 4.0, +410760,1,,,5/28/2020 17:47,,0,40,"

We have a weird use-case that I need to support in my application and needed some opinions on how to design it.

+ +

At a basic level the application allows users to work on a ""list"" of ""tweets"" she cares about

+ +
    +
  • To work on any list the user ""locks"" the list and then can do actions like reordering the tweets, add some small comments etc and then ""save"" (save sends a message over kafka for the updated list)
  • +
  • While the list is ""locked"" no other user can work on the same list
  • +
  • 2 different lists can have tweets in common.
  • +
+ +

Now the weird case we need to support is an operation we are calling ""torpedo"":

+ +
    +
  • ""Torpedo"" will allow any user who has locked a list, to make certain updates to a tweet in that list which it can ""torpedo"" to every list that contains that tweet (even if they are locked by another user, and the other user can continue working).
  • +
  • That tweet gets updated across the board
  • +
+ +

Though the user-interface is a challenge, my question is more about the messaging of these updates:

+ +
    +
  • For a single list update a message (containing all the tweets in that list) is sent over kafka with the key being the list-id.
  • +
  • The messages for a same list always go to the same partition and are hence processed in-order
  • +
+ +

Now for a torpedo we have to send a message containing ALL the lists that were updated.

+ +

The problems I have:

+ +
    +
  1. I cannot choose a single list-id as key which means that torpedo +message containing an update for list-id-1 could go in a different +partition than another regular update to list-id-1 and lead to out-of-order processing of list-id-1 updates
  2. +
  3. The consumer needs to know ALL the lists that were updated as +part of the torpedo
  4. +
+ +

The only option I could think is to split the torpedo message per list, that will solve (1) above but will make it harder to do (2) but if don't split I can solve (2) but not (1).

+ +

Please share any opinions on above and any suggestions on how I can make this a design I can still reason about in future.

+",366154,,,,,43979.74097,Designing sane messaging paradigm for concurrent updates,,0,4,0,,,CC BY-SA 4.0, +410761,1,,,5/28/2020 17:48,,0,33,"

I am in the middle of a project building an internal admin web site using Asp.Net MVC Core 3.1 +Some of the features I need in the app could benefit from Blazor Client side. (Single Page App features?) +My question is would it be logical to try to somehow bring Blazor Client into the existing project or would it make sense to start from scratch using the supplied templates to create a Blazor app and discard my MVC app?
+From reading various posts, most which are pre core 3.1, it's hard to get an understanding of the viability of going this route.

+",3435,,,,,43979.74167,Integrating Blazor Client side into an ASP.Net MVC Core 3.1 app. Is it logical or am I better off building new Blazor app from scratch?,,0,2,,,,CC BY-SA 4.0, +410764,1,410767,,5/28/2020 18:25,,1,194,"

I am wondering why all LinkedIn profile URLs are of this type linkedin.com/in/username. Why not just linkedin.com/username? Is there a design reasoning here that I'm missing? (Ex: linkedin.com/in/williamhgates)

+",341250,,,,,43980.81111,Why LinkedIn profile URLs are like `linkedin.com/in/username`?,,2,0,,,,CC BY-SA 4.0, +410771,1,,,5/28/2020 18:48,,2,99,"

My application I have two types of customers: individuals and business customers. My users sometimes need to be able to see the full details of one customer, but at other times they only need to see a brief summary with a few fields (e.g. in a list of search results). I currently have three classes: IndividualCustomer, BusinessCustomer, and CustomerSummary. These are plain old Java objects (aka POJOs) with getters and setters as their only methods.

+ +

I am contemplating whether to combine them using inheritance or possession. For example, maybe there should be a Customer interface which they all implement. In that case, perhaps the interface requires the methods of CustomerSummary so I can do away with that type. Or perhaps there should be a Customer class with a few fields (like the current implementation of CustomerSummary and it should ""have a"" BusinessCustomer or IndividualCustomer for the details.

+ +

The things that concern me are: if I make a Customer interface and eliminate CustomerSummary, then the types IndividualCustomer/BusinessCustomer will sometimes be only partially populated with details, having lots of null fields, and it would be a disaster if one of those ""summaries"" ended up hitting a .save() method and overwriting real data with nulls. If I go with the possession approach where a Customer object has a business or individual customer as a private field, then I have to do frequent type checks or type casts in order to be able to use the two types interchangeably.

+ +

What's the best design pattern for this kind of case, where an object is sometimes only partially populated (i.e. a ""summary""), but can be populated with two different subtypes of details?

+",320228,,,,,43980.34653,How to relate summary to detail data objects?,,4,2,,,,CC BY-SA 4.0, +410772,1,410804,,5/28/2020 18:49,,2,71,"

I want to have a control flow decide whether an object can pass through a point in a flow or not. From my understanding of control and object flows, this would not work in the way I have done it, since an object and control flow are being joined together into something undefined. What would the correct way to do this look like?

+ +

+ +

EDIT:

+ +

I've done some research and stumbled over this in the UML specification:

+ +

+ +
+

Figure 15.59 is an example of using a DataStoreNode. Records for hired employees are persisted in the Personnel + Database. If an employee has no assignment, then one is made using Assign Employee. Once a year, all employees have + their performance reviewed. The JoinNode blocks the flow of tokens to Review Employee except when the + AcceptEventAction (see sub clause 16.10) is triggered “Once a year”. When the AcceptEventAction generates its yearly + control token, this satisfies the join condition on the JoinNode and, as the outgoing edge from the Personnel Database + has “{weight=*}”, object tokens for all the persisted employee records can then flow to Review Employee.

+
+ +

Would this not be the exact same case of an undefined join as well?

+",366824,,366824,,43979.82778,43980.59861,What is the best way to trigger on object flow with a control flow in an UML activity diagram?,,2,1,,,,CC BY-SA 4.0, +410782,1,410786,,5/29/2020 5:54,,-2,55,"

To Put My Question In Better Context...

+ +

I am about done writing my first real-world Node application, which would be classified as a REST API. For myself, it was a bit challenging to wrap my head around Node's Async event processing. I still don't think I fully grasp it, as you will see by the specifics of this post. That being said...

+ +

Am I Making This Overly Complicated?

+ +

I found some code snippets online that helped me get my API working. Below is one function that deals with finding a client. I guess you would call the file this is in, a Controller, for those of you familiar with MVC. But this being Node, and NOT MVC, my question is this:

+ +

GET http://localhost/clients/3 -> brings me to this code...

+ +
// Find a single client with a Id
+exports.findOne = (req, res) => {
+  Client.findById(req.params.clientId, (err, data) => {
+    if (err) {
+      if (err.kind === ""not_found"") {
+        res.status(404).send({
+          message: `Not found Client with id ${req.params.clientId}.`
+        });
+      } else {
+        res.status(500).send({
+          message: ""Error retrieving Client with id "" + req.params.clientId
+        });
+      }
+    } else res.send(data);
+  });
+};
+
+ +

What is the reason for this call to have a callback itself???

+ +
Client.findById(req.params.clientId, (err, data) => {
+
+ +

which in turn, looks like this:

+ +
Client.findById = (clientId, result) => {
+  sql.query(`SELECT * FROM clients WHERE id = ${clientId}`, (err, res) => {
+    if (err) {
+      console.log(""error: "", err);
+      result(err, null);
+      return;
+    }
+
+    if (res.length) {
+      console.log(""found client: "", res[0]);
+      result(null, res[0]);
+      return;
+    }
+
+    // not found client with the id
+    result({ kind: ""not_found"" }, null);
+  });
+};
+
+
+ +

This seems like a lot of work for a simple query function. Coming from a PHP background, this could be done in very few lines of code there.

+ +

The whole thing seems complicated. Is all this really necessary for such a simple API that returns a client record of only four columns?

+ +

For that matter, do I even need that intermediate function (controller)? What's the matter with just routing right to the final function (in this case, a function named findById ) ??

+ +

I'd sure appreciate some input on this before I get too far ahead. I have another dozen endpoints to code, so if I need to change directions, now would be the time.

+ +

Thanks!

+",366852,,,,,43980.35139,Is there a less complicated alternative to handling this simple mySQL query in Node?,,1,1,,,,CC BY-SA 4.0, +410783,1,410785,,5/29/2020 7:59,,2,80,"

I have an API I wrote that I want to test at the API level.

+ +

Given that I'm testing from an external point of view, how can I manage data sets for each tests? +The simplest solution I could come up with is to create a test suite where each test depend on the state the previous one set.

+ +

For example to test that a comment is successfully added to a post I need to make sure that a certain post exists first and that post may have been created by a previous test.

+ +

Is this kind of storytelling common in API testing? Or is there a better way?

+",341209,,341209,,43980.34097,43980.53958,Should API tests depend on each other?,,2,2,,,,CC BY-SA 4.0, +410788,1,410791,,5/29/2020 8:41,,0,76,"

I'm trying to test an API (see API testing).

+ +

Some operations depend on time. +Here are some examples:

+ +
    +
  • A post may only be edited within the first 5 minutes
  • +
  • You may not try to login more than 10 times in 1 minute
  • +
  • After X weeks of inactivity a post is archived and set as read-only
  • +
+ +

If I'm testing from an external point of view, how can I manage time (moving it forward)?

+ +

One possible solution, for example using docker, is installing some interceptor to requests to the system clock. Are there other alternatives I'm not seeing?

+",341209,,,,,43980.38681,How to control time with API testing?,