So I know how to print time since epoch in seconds, and evidently even milliseconds, but when I try nanoseconds I keep getting bogus output of an integer that's way too small, and it sometimes prints numbers smaller than the last run.
#include <stdio.h>
#include <time.h>
int main (void)
{
long int ns;
struct timespec spec;
clock_gettime(CLOCK_REALTIME, &spec);
ns = spec.tv_nsec;
printf("Current time: %ld nanoseconds since the Epoch\n", ns);
return 0;
}
The nanosecond part is just the "fractional" part, you have to add the seconds, too.
// otherwise gcc with option -std=c11 complains
#define _POSIX_C_SOURCE 199309L

#include <stdio.h>
#include <time.h>
#include <stdint.h>
#include <inttypes.h>

#define BILLION 1000000000L

int main(void)
{
    long int ns;
    uint64_t all;
    time_t sec;
    struct timespec spec;

    clock_gettime(CLOCK_REALTIME, &spec);
    sec = spec.tv_sec;
    ns = spec.tv_nsec;
    all = (uint64_t) sec * BILLION + (uint64_t) ns;
    printf("Current time: %" PRIu64 " nanoseconds since the Epoch\n", all);
    return 0;
}
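The same bookkeeping can be cross-checked in Python (purely illustrative — `time.time_ns()` plays the role of `clock_gettime` here): the sub-second part on its own is always below one billion, which is exactly why printing only `tv_nsec` looks too small and sometimes shrinks between runs.

```python
import time

total_ns = time.time_ns()       # full nanoseconds since the Epoch
sec = total_ns // 10**9         # plays the role of spec.tv_sec
frac_ns = total_ns % 10**9      # plays the role of spec.tv_nsec

# The fractional part alone is always < 1 billion, so printing it by
# itself gives a "way too small" number that can shrink between runs.
assert 0 <= frac_ns < 10**9
assert sec * 10**9 + frac_ns == total_ns
print(sec, frac_ns)
```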
With Respect To ScrollBar To Rectangle
Hello Guys.
i have a rectangle, how to make rectangle width, height increase with respect to text inside rectangle, as text increases, rectangle has to increase, and how to add scrollbar to rectangle as soon as text increases. please help me out.
@Pradeep-Kumar.M
i have a rectangle, how to make rectangle width, height increase with respect to text inside rectangle, as text increases, rectangle has to increase
You can just simply bind the pointSize or pixelSize of Text to that of Rectangle's width and height with some multiplier.
width: txtItm.font.pointSize * 10
height: txtItm.font.pointSize * 5
how to add scrollbar to rectangle as soon as text increases.
Rectangle doesn't have a built-in scrollbar. You will need to create one yourself.
But I'm wondering what you would do by adding a Scrollbar to it.
Rectangle doesn't scroll. You need something like Flickable where it would make sense. Example here.
Also have a look at TextArea, it automatically adds Scrollbar when required.
thank you, because i want to have a scrollbar on my rectangle as soon as text increases and i didn't make use of modaldialog or messagedialog. i made use of rectangle as messagedialog, i will try it.
- Pradeep Kumar
@Pradeep-Kumar.M
and one more thing i want to know the difference between contentWidth, contentHeight, contentX, contentY.
i have a sample code, please look into it.
import QtQuick 2.4
import QtQuick.Window 2.2

Window {
    id: win
    visible: true
    width: 300
    height: 300

    Flickable {
        id: flick
        width: 80; height: 80
        contentWidth: image.width
        contentHeight: image.height
        contentX: image.x
        contentY: image.y
        // contentY: 100

        Image {
            id: image
            width: 50
            height: 50
            x: 50
            y: 100
            source: "qrc:/new/prefix1/logo.png"
        }
    }
}
@Pradeep-Kumar.M
x and y are for positions while width and height are for dimensions.
@p3c0
import QtQuick 2.4
import QtQuick.Window 2.2
Window {
id: win
visible: true
width: 700
height: 500
Rectangle
{
id: rect
width: t1.font.pixelSize * 50
height: t1.font.pixelSize * 10
color: "pink"
Text {
id: t1
text: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaajjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjaaaaaaaaaaa"
}
}
}
as text size increases the rectangle has to increase simultaneously, but in this case, it's not increasing.
@Pradeep-Kumar.M hmm you meant length not size. Well in that case you will need to get the text length and bind it with some multiplier.
Its better to use a TextArea here. Isn't it ?
because data is coming from c++ class that's why, inserting it to rectangle directly. if so how to find text length, do we have length method for text, i didn't find it.
@Pradeep-Kumar.M Yes. Use length.
thank u for example of scrollbar to rectangle, it worked, one issue in it is, when i drag down, scrollbar moving upwards, i don't want it in that fashion, drag should be with respect to scrollbar, not opposite, please help me.
@Pradeep-Kumar.M Must be some problem in your code. Try doing same as done in that example.
same thing i did it, will you please try, i want the scrollbar to be dragged with respect to mouse simultaneously, not in opposite fashion, i tried,
@Pradeep-Kumar.M Can you explain it w.r.t to that example ? That example works fine as expected.
code is working fine, but the thing is scrollbar has to move w.r.t to mouse, when i want to scroll, for example in this forum if u want to scroll up, u will move your mouse up, w.r.t to scrollbar, but in the example case moving mouse upwards, scrolling takes place downwards, i mean to say opposite direction.
@Pradeep-Kumar.M Nope. If you scroll down the scrollbar moves down and of course the image will go up and vice versa. That's the intended behaviour.
and one more thing: if we have implementation that works with keys press, will it work for mouse area on clicked?
@Pradeep-Kumar.M Yes should work. Create a function and call that function when required.
you mean to say create function, for ex:

function sample()
{
    // implementation
}

MouseArea {
    anchors.fill: parent
    onClicked:
    {
        sample()
    }
}
@Pradeep-Kumar.M Yes, so that same function can be called from Keys and MouseArea. No need to duplicate the code.
what i thought was if there is implementation for keys event, there will be different implementation for mousearea also
@Pradeep-Kumar.M Of course you will need to define a MouseArea as well as Keys.onUpPressed. But if the code that moves the scrollbar is similar then no need to duplicate it. Just put it in the function and call that function.
ya got it,
keys.onUpPressed, keys.onDownPressed, coming from .js file, will the same logic work for mousearea { } also.
@Pradeep-Kumar.M
will the same logic work for mousearea
Depends upon the logic :) | https://forum.qt.io/topic/55488/with-respect-to-scrollbar-to-rectangle | CC-MAIN-2017-43 | refinedweb | 789 | 67.96 |
The QTreeView class provides a default model/view implementation of a tree view. More...
#include <QTreeView>
Inherits QAbstractItemView.
Inherited by QHelpContentWidget and QTreeWidget.
The QTreeView class is one of the Model/View Classes and is part of Qt's model/view framework.
QTreeView implements the interfaces defined by the QAbstractItemView class to allow it to display data provided by models derived from the QAbstractItemModel class.
It is simple to construct a tree view displaying data from a model. In the following example, the contents of a directory are supplied by a QFileSystemModel and displayed as a tree:
QFileSystemModel *model = new QFileSystemModel;
model->setRootPath(QDir::currentPath());
QTreeView *tree = new QTreeView(splitter);
tree->setModel(model);
The model/view architecture ensures that the contents of the tree view are updated as the model changes.
Items that have children can be in an expanded (children are visible) or collapsed (children are hidden) state. When this state changes a collapsed() or expanded() signal is emitted with the model index of the relevant item.
The amount of indentation used to indicate levels of hierarchy is controlled by the indentation property.
Headers in tree views are constructed using the QHeaderView class and can be hidden using header()->hide().
QTreeView supports a set of key bindings that enable the user to navigate in the view and interact with the contents of items:
See also header(). By default, this property has a value of 20.
Access functions:
This property holds whether the items are expandable by the user.
This property holds whether the user can expand and collapse items interactively.
By default, this property is true.
Access functions:
By default, this property is false.
Access functions:
Constructs a tree view with a parent to represent a model's data. Use setModel() to set the model.
See also QAbstractItemModel.
Destroys the tree view.
Collapses the model item specified by the index.
See also collapsed().
See also expanded().
Reimplemented from QAbstractItemView::sizeHintForColumn().
Returns the size hint for the column's width or -1 if there is no model.
If you need to set the width of a given column to a fixed value, call QHeaderView::resizeSection() on the view's header.
If you reimplement this function in a subclass, note that the value you return is only used when resizeColumnToContents() is called. In that case, if a larger column width is required by either the view's header or the item delegate, that width will be used instead.
See also QWidget::sizeHint and header().
/*
 Copyright (C) 2002-2004 MySQL AB

 This program is free software; you can redistribute it and/or modify
 ...
 */
package org.gjt.mm.mysql;

import java.sql.SQLException;

/**
 * Here for backwards compatibility with MM.MySQL
 *
 * @author Mark Matthews
 */
public class Driver extends com.mysql.jdbc.Driver {
    // ~ Constructors
    // -----------------------------------------------------------

    /**
     * Creates a new instance of Driver
     *
     * @throws SQLException
     *             if a database error occurs.
     */
    public Driver() throws SQLException {
        super();
    }
}
Over on the Asp.Net forums, a user asked how to store an ordered list of pages in an Xml file so he could use it to control the navigation of a group of pages on his web site. It was for a 'Wizard' where the user needed to go through the pages in sequence…no jumping directly to a page. He wanted it to be easily editable.
I thought about it and decided that Xml was overkill. There is no hierarchical data structure required. A simple text file would suffice (although it's not as sexy as Xml is) and there are more text editors than Xml editors….long live Notepad!
I thought about it some more and realized that once the contents of the text file was read in, it should be cached so that each page with the navigation control didn't have to reread it.
Yet more thought: Why put the list of pages in a separate file? If you are going to make a navigation control, why not put the list into the control where it's used? True, it's not a generic control in a traditional sense, but if you are going to have to edit the list of pages anyway, why not do it in the control?
Quick and dirty can be bad, but simple is always good…yah?
Well one thing led to another and I ended up writing a page navigation user control:
Features:
Pages are easily removed and added to the array of strings in the control (edit the file).
The array of strings (pages) is static so there is only one copy in memory.
Navigation is automatic.
The Prev and Next buttons are automatically enabled and disabled.
Usage:
Edit the list of pages in the control.
Drop the User Control on each page to be navigated...or drop it on a Master Page.
Here's the UC_Navigator.ascx file (UC is for User Control). It's basically two buttons in a div.
<%@ Control Language="C#"
AutoEventWireup="true"
CodeFile="UC_Navigator.ascx.cs"
Inherits="UserControls_UC_Navigator" %>
<div style="border-style:ridge;
border-width:medium;
padding: 5px;
background-color:#808080;
text-align:center;
width: 134px;
vertical-align: middle;">
<asp:Button ID="ButtonPrev" runat="server" Text="Prev" OnClick="ButtonPrev_Click" />
<asp:Button ID="ButtonNext" runat="server" Text="Next" OnClick="ButtonNext_Click" />
</div>
Here's the code-behind file:
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.IO;
public partial class UserControls_UC_Navigator : System.Web.UI.UserControl
{
// list of pages to navigate - in order
// page names must be in lower case
static String[] Pages =
{
"default.aspx",
"child1.aspx",
"exportcalendar.aspx"
};
enum DIRECTION { PREV, NEXT }
// ---- Page_Load ----------------------------
//
// Disable Prev/Next buttons when appropriate
protected void Page_Load(object sender, EventArgs e)
{
int CurrentIndex = GetCurrentIndex();
if (CurrentIndex == 0)
ButtonPrev.Enabled = false;
if (CurrentIndex == Pages.Length - 1)
ButtonNext.Enabled = false;
}
// ---- GetCurrentIndex ----------------------------------
//
// Gets the index of the current page
// (from the array of Pages)
//
// returns -1 if the Page isn't in the Pages array
private int GetCurrentIndex()
{
String CurrentPage;
// get the current page the User Control is on (from parent page)
CurrentPage = Parent.BindingContainer.TemplateControl.AppRelativeVirtualPath;
// get filename only and force lower case
CurrentPage = Path.GetFileName(CurrentPage).ToLower();
// get the index from the page array
int CurrentIndex = Array.IndexOf(Pages, CurrentPage);
return CurrentIndex;
}
// ---- MoveToPage ---------------------------------
//
// Moves to previous, or next page if possible
private void MoveToPage(DIRECTION Direction)
{
int CurrentIndex = GetCurrentIndex();
if (Direction == DIRECTION.PREV)
{
if (CurrentIndex == 0)
return; // can't move before first page
//Response.Redirect(Pages[CurrentIndex - 1]);
Server.Transfer(Pages[CurrentIndex - 1]);
}
if (Direction == DIRECTION.NEXT)
{
if (CurrentIndex == Pages.Length - 1)
return; // can't move after last page
//Response.Redirect(Pages[CurrentIndex + 1]);
Server.Transfer(Pages[CurrentIndex + 1]);
}
}
// ---- Navigation buttons ---------------------------
protected void ButtonPrev_Click(object sender, EventArgs e)
{
MoveToPage(DIRECTION.PREV);
}
protected void ButtonNext_Click(object sender, EventArgs e)
{
MoveToPage(DIRECTION.NEXT);
}
}
A few problems I had on the way:
Getting the current page the user control was on took a bit of digging around. I ended up using the AppRelativeVirtualPath property. It works for normal pages and child-of-master pages.
The first time AppRelativeVirtualPath was accessed, it returned the page in mixed case: subsequently it returned the page in all lower case. I decided to make everything lower case (remember the article is prefaced with Quick and Dirty :) ).
I wasn't sure which to use, Response.Redirect(…) or Server.Transfer(…). Both worked so I left them both in (one is commented out).
Possible Enhancements:
I like the Spartan functional look…plain gray buttons. You could however, "sexy up" the control with CSS and graphics. For sure dude, this ain't purty.
Adding First and Last buttons would be trivial.
You could make the control more "generic" by keeping the list of pages in a separate file or a database and reading them in once (do it in a static constructor).
You could add a property to switch between using Response.Redirect(…) and Server.Transfer(…) methods.
You can download the code Here. My pithy advice: Think before you type.
C API: StringSearch. More...
#include "unicode/utypes.h"
#include "unicode/localpointer.h"
#include "unicode/ucol.h"
#include "unicode/ucoleitr.h"
#include "unicode/ubrk.h"
Go to the source code of this file.
C API: StringSearch.
C APIs for an engine that provides language-sensitive text searching based on the comparison rules defined in a UCollator data struct, see ucol.h. This ensures that language eccentricity can be handled, e.g. for the German collator, characters ß and SS will be matched if case is chosen to be ignored. See the "ICU Collation Design Document" for more information.
The algorithm implemented is a modified form of Boyer-Moore search. For more information on the algorithm, see "Efficient Text Searching in Java", published in Java Report in February, 1999.
There are 2 match options for selection:
Let S' be the sub-string of a text string S between the offsets start and end <start, end>.
A pattern string P matches a text string S at the offsets <start, end> if
option 1. Some canonical equivalent of P matches some canonical equivalent of S'
option 2. P matches S' and if P starts or ends with a combining mark, there exists no non-ignorable combining mark before or after S' in S respectively.
Option 2. will be the default.
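Option 1's notion of "canonical equivalent" can be illustrated with Python's standard `unicodedata` module (a sketch of the Unicode concept only, not of the ICU API): é as a single code point and e plus a combining acute are different code point sequences but canonically equivalent.

```python
import unicodedata

composed = "\u00e9"     # é as a single code point
decomposed = "e\u0301"  # e followed by COMBINING ACUTE ACCENT

# Different code point sequences...
assert composed != decomposed
# ...but canonically equivalent: they normalize to the same forms
assert unicodedata.normalize("NFD", composed) == decomposed
assert unicodedata.normalize("NFC", decomposed) == composed

# A search honouring option 1 would treat "café" spelled either way alike
assert unicodedata.normalize("NFC", "cafe\u0301") == "caf\u00e9"
```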
This search has APIs similar to that of other text iteration mechanisms such as the break iterators in ubrk.h. Using these APIs, it is easy to scan through text looking for all occurrences of a given pattern. This search iterator allows changing of direction by calling a reset followed by a next or previous. Though a direction change can occur without calling reset first, this operation comes with some speed penalty. Generally, match results in the forward direction will match the result matches in the backwards direction in the reverse order
usearch.h provides APIs to specify the starting position within the text string to be searched, e.g. usearch_setOffset, usearch_preceding and usearch_following. Since the starting position will be set as it is specified, please take note that there are some dangerous positions at which the search may render incorrect results:
A breakiterator can be used if only matches at logical breaks are desired. Using a breakiterator will only give you results that exactly matches the boundaries given by the breakiterator. For instance the pattern "e" will not be found in the string "\u00e9" if a character break iterator is used.
Options are provided to handle overlapping matches. E.g. In English, overlapping matches produces the result 0 and 2 for the pattern "abab" in the text "ababab", where else mutually exclusive matches only produce the result of 0.
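The two policies are easy to state precisely in code. This is a language-agnostic sketch of the idea (a naive scan, not the ICU implementation): an overlapping search resumes one position after each hit, while a mutually exclusive search resumes after the whole match.

```python
def find_matches(text, pattern, overlapping):
    # Naive scan: on a hit, either step by 1 (overlapping) or skip
    # past the entire match (mutually exclusive).
    matches, i = [], 0
    while i <= len(text) - len(pattern):
        if text[i:i + len(pattern)] == pattern:
            matches.append(i)
            i += 1 if overlapping else len(pattern)
        else:
            i += 1
    return matches

assert find_matches("ababab", "abab", overlapping=True) == [0, 2]
assert find_matches("ababab", "abab", overlapping=False) == [0]
```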
Though collator attributes will be taken into consideration while performing matches, there are no APIs here for setting and getting the attributes. These attributes can be set by getting the collator from usearch_getCollator and using the APIs in ucol.h. Lastly, to update String Search to the new collator attributes, usearch_reset() has to be called.
Restriction:
Currently there are no composite characters that consists of a character with combining class > 0 before a character with combining class == 0. However, if such a character exists in the future, the search mechanism does not guarantee the results for option 1.
Example of use:
char *tgtstr = "The quick brown fox jumped over the lazy fox";
char *patstr = "fox";
UChar target[64];
UChar pattern[16];
UErrorCode status = U_ZERO_ERROR;

u_uastrcpy(target, tgtstr);
u_uastrcpy(pattern, patstr);

UStringSearch *search = usearch_open(pattern, -1, target, -1, "en_US",
                                     NULL, &status);
if (U_SUCCESS(status)) {
    for (int pos = usearch_first(search, &status);
         pos != USEARCH_DONE;
         pos = usearch_next(search, &status)) {
        printf("Found match at %d pos, length is %d\n", pos,
               usearch_getMatchLength(search));
    }
}

usearch_close(search);
ICU 2.4
Definition in file usearch.h. | http://icu.sourcearchive.com/documentation/4.8.1.1-1/usearch_8h.html | CC-MAIN-2018-13 | refinedweb | 628 | 55.64 |
After working on binary classification in the Kaggle competition with data from the Titanic, how about tackling another facet of machine learning? Multi-class classification through the recognition of handwritten digits from MNIST (Modified National Institute of Standards and Technology database) is a classic first step in machine learning. So let's get carried away in this new Kaggle competition
NB: The MNIST database, for Modified or Mixed National Institute of Standards and Technology, is a database of handwritten digits. The MNIST base has become a standard test. It brings together 60,000 training images and 10,000 test images, taken from an earlier database, simply called NIST. These are black and white images, normalized and centered, 28 pixels per side. (Source: Wikipedia)
Retrieve and read the dataset
To do this, first register in the Kaggle competition, then retrieve the data in the data tab.
Here is the structure of the training and test files:
As usual, of course, the labels (label) are not present in the test data set… where would be the challenge otherwise?
We have to process images. However, these images are bitmaps, that is to say they are represented via matrices of pixels. Each point / pixel is therefore an entry in the matrix. The value in the matrix defines the intensity of the point (gray level) of the image.
The only catch is that the data we recover is not exactly in this matrix format. In fact each image is flattened onto a single row. We will therefore have to retrieve the entire row (without the label of course) and resize the row to a 28 × 28 matrix. Luckily the reshape() function comes to our rescue, but we'll see that in the next chapter.
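What reshape(28, 28) does can be sketched without any library: pixel (r, c) of the image lives at index r*28 + c of the flattened row, so rebuilding the matrix is just slicing (the values below are stand-ins for real pixel data):

```python
row = list(range(28 * 28))  # one flattened image: 784 stand-in pixel values

# rebuild the 28x28 matrix: pixel (r, c) sits at flat index r*28 + c
matrix = [row[r * 28:(r + 1) * 28] for r in range(28)]

assert len(matrix) == 28
assert all(len(r) == 28 for r in matrix)
assert matrix[3][5] == 3 * 28 + 5
```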
This step is not necessary if you do not wish to retouch or rework the images. You could quite simply run a machine learning algorithm directly on the matrix data online:
import pandas as pd
from sklearn.linear_model import SGDClassifier

pd.options.display.max_columns = None

TRAIN = pd.read_csv("./data/train.csv", delimiter=',')  # , skiprows=1)
TEST = pd.read_csv("./data/test.csv", delimiter=',')    # , skiprows=1)

X_TRAIN = TRAIN.copy()
X_TEST = TEST.copy()
y = TRAIN.label
del X_TRAIN["label"]

sgd = SGDClassifier(random_state=42)
sgd.fit(X_TRAIN, y)
print("Score Train -->", round(sgd.score(X_TRAIN, y) * 100, 2), " %")
Score Train --> 85.88 %
Without any adjustments and in a few lines you will have an honorable score of 85% (if you submit it as is to kaggle you will get 84% which is consistent). But of course we can do better, much better.
View images
The following portion of code retrieves a row from the dataset. The row is converted to a 28 × 28 matrix via the reshape () function and and visualized via matplotlib .
import matplotlib
import matplotlib.pyplot as plt

# returns the image as a digit matrix (28x28)
def getImageMatriceDigit(dataset, rowIndex):
    return dataset.iloc[rowIndex, 0:].values.reshape(28, 28)

# returns the image as a single flattened row
def getImageLineDigit(dataset, rowIndex):
    return dataset.iloc[rowIndex, 0:]

imgDigitMatrice = getImageMatriceDigit(X_TRAIN, 3)
imgDigit = getImageLineDigit(X_TRAIN, 3)

plt.imshow(imgDigitMatrice, cmap=matplotlib.cm.binary, interpolation="nearest")
plt.axis("off")
plt.show()
The result is then displayed:
Multi-class classification
In the Titanic project, we were working on a binary classification machine learning project. Indeed we had to determine whether the passengers were either survivors or dead. Only 2 possibilities, hence the term binary classification. Multi-class classification extends this principle to several labeling classes. This is exactly the case here because we have to classify the digits over the different possibilities [0..9].
In terms of Machine Learning algorithms, we therefore have two ways of handling this type of problem:
- Using binary classification algorithms. In this case it will be necessary to apply these algorithms several times by "binarizing" the labels. For example by applying an algorithm which will recognize the 1s, then the 2s, etc. This is what we call a one-against-all strategy (One versus All). Another method will consist of comparing the pairs / tuples to each other (One versus One)
- Using Multi-class algorithms. The scikit-learn library offers a good number of them:
- SVC (Support Vector Machine)
- Random Forest
- SGD (Stochastic Gradient Descent)
- K near neighbors (KNeighborsClassifier)
- etc.
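The one-versus-all strategy from the first bullet can be sketched in plain Python. Everything here is illustrative — a toy nearest-centroid learner stands in for a real binary classifier, and the data is made up — but the wiring is the real thing: one binary model per class, and the most confident model wins.

```python
def train_binary(xs, ys):
    # Toy binary learner: centroid of positives vs centroid of negatives.
    pos = [x for x, y in zip(xs, ys) if y]
    neg = [x for x, y in zip(xs, ys) if not y]
    mean = lambda pts: tuple(sum(c) / len(pts) for c in zip(*pts))
    return mean(pos), mean(neg)

def score(model, x):
    # Higher = more confident "positive": farther from the negative
    # centroid than from the positive one.
    p, n = model
    d = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return d(x, n) - d(x, p)

def train_one_vs_all(xs, ys, classes):
    # one binary model per class: "this class" against everything else
    return {c: train_binary(xs, [y == c for y in ys]) for c in classes}

def predict(models, x):
    # the class whose binary model is most confident wins
    return max(models, key=lambda c: score(models[c], x))

xs = [(0, 0), (0, 1), (5, 5), (5, 6), (9, 0), (9, 1)]
ys = [0, 0, 1, 1, 2, 2]
models = train_one_vs_all(xs, ys, classes=[0, 1, 2])
assert [predict(models, x) for x in xs] == ys
```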
Results analysis
In a previous chapter we started training on data with a Stochastic Gradient Descent (SGD) algorithm. We saw in another article how to analyze the results of a binary classification, but what about a multi-class classification?
Well, we’re just going to use the same analysis tools as the binary classification, but of course a little different.
The most practical tool in my opinion remains the confusion matrix. The difference here is that we'll read it differently.
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix

sgd = SGDClassifier(random_state=42)
cross_val_score(sgd, X_TRAIN, y, cv=5, scoring="accuracy")
y_pred = cross_val_predict(sgd, X_TRAIN, y, cv=5)
mc = confusion_matrix(y, y_pred)
print(mc)
Here is the result:
The latter is very practical and allows you to compare, class by class (here, I remind you, [0..9]), the number of erroneous values between the prediction and the value actually observed. As a corollary, a diagonal confusion matrix means a score of 100%! We will therefore only be interested in the values which are off the diagonal, because they are the ones which point to the prediction errors.
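What confusion_matrix computes is simple enough to sketch by hand (standard library only, toy labels): row = observed class, column = predicted class, so everything off the diagonal is a prediction error.

```python
def confusion(y_true, y_pred, n_classes):
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1          # row = observed class, column = predicted class
    return m

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 2, 1, 1, 2, 0]   # two mistakes: a 0 read as 2, a 2 read as 0
m = confusion(y_true, y_pred, 3)

assert m == [[1, 0, 1],
             [0, 2, 0],
             [1, 0, 1]]
# diagonal = correct predictions; a perfect classifier is diagonal-only
assert sum(m[i][i] for i in range(3)) == 4
```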
A visualization in the form of a heat map is also very useful. A simple call to matplotlib via the matshow () method and voila:
plt.matshow(mc)
First conclusions
In this first article we haven't really done much yet. Reading the data and launching a first algorithm is essential as a first step: it gives us an overview and an idea of what we will implement later to achieve our objective.
In a second article, we will see how to make the best use of this dataset. For example, we could rework the information (by extending the dataset) or scale the raster data. And above all, we will finally test and optimize our machine learning algorithms.
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
Hi everyone
I'm having some trouble figuring out how to import a library when using atom.io as my IDE
I'm trying to use peasycam, with this at the top of my sketch:
import /lib/peasycam/library/peasycam.jar.*;
and a folder structure like:
projectFolder/
    sketch.pde
    lib/
        peasycam/
            library/
                peasycam.jar
Is the .jar even the correct file to be importing? Can anyone explain how to do this?
Thanks
Answers
Don't know atom but I assume it's a Java editor. Is there a way to import a standard java .jar library? I assume it would work in the same way. Since it's pure Java, it won't know to look in the lib folder unless you pointed atom there.
This should be the package name, not the filename. If you do the same within processing you get
(From memory)
Then you have to make sure the jar is in your class party so it can find it at both compile time and runtime
@koogs You meant class path L-)
Problem
-------

A lot of us are trying to backport or create additional Zope 3-related technology in Zope 2. Whit Morriss, for instance, needs integer IDs in Five, the Plone project is heavily using local components now and will sooner or later need a more sophisticated site manager implementation than PersistentComponents. There's also the need for customizing global template-based views ("Customerize" button), something that doesn't even exist in Zope 3 yet.

I don't think all of these things should go into the Products.Five anymore, for several reasons:

* We're trying to get away from writing products and into writing Python packages that can e.g. be installed as eggs.

* Some of these things are not essential to bringing Zope 3 technology into Zope 2 (e.g. like the intid stuff), so it may be questionable whether it should ship with Zope 2/Five out of the box.

* Starting with Zope 2.10, Products.Five is actually getting smaller which is a trend I would like to continue to see. And if I manage to land my Acquisition refactoring in 2.11, Products.Five will pretty soon decrease in size tremendously, and there are also other things in Products.Five that should really be in Zope 2 proper (e.g. OFS).

The namespace for these packages should probably be 'five', as we already have five.intid and five.customerize and we are, after all, the Five project.

Advantages
----------

* We'll be able to use stuff we get for Python packages for free, such as installation via eggs, Cheeseshop presence and much less majyck for initialization.

Comments welcome :)
So, the file name of my C# MonoBehaviour script matches the class name exactly (including capitalization), but I'm getting the "The scripts file name does not match the name of the class defined in the script!" error when I drag it onto a game object in the editor. Has anyone seen this or have any thoughts?
Do you have more than one class defined in the script? Did you put the class in a namespace? (Namespaces don't work for MonoBehaviours.) Can you post your filename and your script? (Remove the body of the script if you prefer not to share.)
hello I have similar problems. I put the class in a "namespace System.IO.Ports" because I need to use the serial port. But the same error appears whether or not I add the namespace. Do you have any idea how to fix it?
Answer by JermAker · Mar 24, 2011 at 04:52 PM
Fixed by removing the namespace definition in the MonoBeh.
Here's another recent question I've received on bit twiddling in VBScript:
You discussed the issues with interpreting error results that come back interpreted as signed longs last year.
Suppose we have a large unsigned long value, something like E18F4994. VBScript returns this value as -510703212. How can we go from this to the "representation" that a C user would get, the value 3784264084? Or given a string containing that representation, "3784264084", how can a VBScript user work out the hex?
Indeed, I did discuss that last year,
here. The key to solving the problem in JScript and VBScript is the same. Consider a 32 bit pattern interpreted as a signed integer and an unsigned integer. If the high bit is zero, both interpretations agree. If the high bit is one then the signed interpretation is equal to the unsigned interpretation minus 2^32. Or, stated the other way, the representation of a negative number is determined by subtracting it from 2^32 and then using the representation of that unsigned number.
This makes constructing the conversion functions you want pretty easy:
Function ReinterpretSignedAsUnsigned(ByVal x)
If x < 0 Then x = x + 2^32
ReinterpretSignedAsUnsigned = x
End Function
Function UnsignedDecimalStringToHex(ByVal x)
x = CDbl(x)
If x > 2^31 - 1 Then x = x - 2^32
UnsignedDecimalStringToHex = Hex(x)
End Function
print &hE18F4994 ' -510703212
print ReinterpretSignedAsUnsigned(&hE18F4994) ' 3784264084
print UnsignedDecimalStringToHex("3784264084") ' "E18F4994"
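Since Python integers are arbitrary-precision, the same round-trip can be cross-checked there (a sanity check of the arithmetic, not VBScript code):

```python
def signed_to_unsigned(x):
    # the representation of a negative number is that of x + 2**32
    return x + 2**32 if x < 0 else x

def unsigned_string_to_hex(s):
    x = int(s)
    if x > 2**31 - 1:
        x -= 2**32
    # VBScript's Hex() shows the 32-bit pattern; mask to get the same view
    return format(x & 0xFFFFFFFF, 'X')

assert 0xE18F4994 - 2**32 == -510703212          # signed interpretation
assert signed_to_unsigned(-510703212) == 3784264084
assert unsigned_string_to_hex("3784264084") == "E18F4994"
```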
You might wonder why it is that we use such a goofy way to represent negative integers as "if the high bit is set then interpret it as an unsigned integer but subtract 2^32". The
"obvious" way to represent negative integers is to declare that the high bit is the "sign bit" and then just have a 31 bit integer. In that system
00000000000000000000000000000111 = &h00000007 = 7
10000000000000000000000000000111 = &h80000007 = -7
very simple and straightforward, right? However, that system has one minor problem, and
the system we actually use has a major advantage.
The minor problem is that in this system there are two zeros -- a "positive zero" and a "negative zero", which is darn weird. That's a pretty minor problem though -- a problem shared, in fact, by floating point numbers. A 64 bit float consists of a 52 bit unsigned integer, a sign bit, and
eleven bits of exponent; if you can twiddle the bits then it's possible to represent +0 and -0 differently in a float, though of course it is silly to do so. (The exact details of how zeros, infinities, nans and denormals work in floating point arithmetic is a subject for another day.)
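Python floats are the same 64-bit IEEE doubles, so struct can make the two zeros visible as distinct bit patterns even though they compare equal:

```python
import math
import struct

bits = lambda x: struct.pack(">d", x).hex()

assert bits(0.0)  == "0000000000000000"   # sign bit clear
assert bits(-0.0) == "8000000000000000"   # sign bit set, all else identical

# they compare equal, so only the sign itself survives as evidence
assert 0.0 == -0.0
assert math.copysign(1.0, -0.0) == -1.0
```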
The major advantage of the "subtract off
2^32" representation for negative numbers becomes apparent when you notice this interesting fact: adding 2^32 to a 32 bit integer is a no-op. You'd go to add the 33rd bit, and there isn't one there, so nothing happens.
Think about that in the context of implementing subtraction. You want to calculate 10 - 3:
10 - 3
= 0000000A + (-3)
= 0000000A + (-3) + 2^32, since in 32 bit arithmetic, adding 2^32 is a no-op.
= 0000000A + (2^32 - 3), but that is
representable as a 32 bit integer
= 0000000A + FFFFFFFFD
= 00000007, because we throw away the high bit that doesn't fit into the 32 bit integer
Get it? If you represent negative integers this way then you don't have to build another circuit on your chip to handle subtraction. You just build a circuit that handles unsigned integer addition. Integer subtraction and unsigned integer addition are the same operation at the bit level.
" if you can twiddle the bits then it’s possible to represent +0 and -0 differently in a float, though of course it is silly to do so"
Except it’s a necessary and useful thing to do; complex numbers make good use of them. A negative zero allows the following identities to hold true:
sqrt(conjugate(z)) = conjugate(sqrt(z))
log(conjugate(z)) = conjugate(log(z))
Waaaay back in Windows 3.1, the format for help files stored integers (for margin indents and the like) in the most bizarre format I’ve ever seen: threes-complement signed integers, with a reversed sign bit.
In other words, if you had an 8-bit field then
10000010 meant "2"
10000001 meant "1"
10000000 meant "0"
01111111 meant "-2", yes, "-2"
01111110 meant "-3", and so on.
There was simply no way to represent the integer -1 in a Windows help file. Did you want to set the paragraph indent in your help file to -10 twips for some reason? Too bad: paragraph indents were stored in tens of twips, so the Windows help compiler would reset that particular indent to 0 twips.
Nice. I had solved the problem of getting unsigned numbers in Javascript this way:
function ReinterpretSignedAsUnsigned(v)
{ return (v >>> 1) * 2 + (v&1) }
Same result but not as elegant.
> Indeed, I did discuss that last year, here.
"here" says:
> .Text – Application Error!
> Details
> NullReferenceException
> Object reference not set to an instance of an object.
By the way, even though you showed (and I thank you) that in cases where programmers want unsigned longs in VB it really does turn out to be not so hard to produce printable strings showing their values in decimal, still don’t you think that conversion to double precision floating is slightly sub-optimal? VB has unsigned bytes; why haven’t unsigned longs ever been added?
> (The exact details of how zeros, infinities,
> nans and denormals work in floating point
> arithmetic is a subject for another day.)
You misspelled "decade".
"You might wonder why it is that we use such a goofy way to represent negative integers…"
Are there actually people reading this who are unaware of two’s complement?
Dude, contain the condescension.
A considerable majority of script developers have no computer science degree. Many have no formal training in programming whatsoever — their entire knowledge of programming languages comes from reading existing scripts and puzzling them out, maybe with a copy of "Learn VBScript in 21 Days" at hand. They have no idea what "twos complement" means. Why would they? It’s an implementation detail that is usually abstracted away.
Sorry, didn’t mean to be condescending; it was genuine surprise! (I don’t have a comp sci degree either, by the way.) I was also a bit surprised that in your explanation you didn’t mention its name…
I even had a colleague who was a software engineer, more highly paid than me and seemed to be deserving of it, who participated in development of debuggers and such stuff … and one day it turned out that she wasn’t aware of two’s complement. That was sure startling. I don’t think I ever asked what she had studied in university, but that wasn’t the moment to ask anyway.
There are quite a lot of engineers who don’t understand that floating point has less than perfect precision in hardware and in ordinary programming languages, and they don’t know how to deal with the fact. Even someone who did a C-style printf to write numbers with 3 digits after the decimal point and then read them back later didn’t understand that the results didn’t match the original numbers before they were written. I don’t know if this is harder or easier than two’s complement, and in these cases maybe the engineering skills aren’t in the computer area, but still… You have to learn to be patient with some of these people.
Of course when someone has studied the matter and still manages to be unaware of it, or when someone markets a product and it fails and they renege on their warranties because they don’t understand it, then it’s different.
PingBack from
A month ago I was discussing some of the issues in integer arithmetic , and I said that issues in floating
Might be kinda dumb, but I learned VBScript by making keyboard macros. I used this website to get started | https://blogs.msdn.microsoft.com/ericlippert/2004/12/03/integer-arithmetic-in-vbscript-part-two/?replytocom=2033 | CC-MAIN-2017-43 | refinedweb | 1,344 | 58.32 |
NOTE: I have written a small sample application with vanilla JS which you can find here. It doesn't work in Firefox or IE11 though due to missing Web Component support in those browsers.
While researching a way to build a micro frontend that works with any javascript framework under the stars, I stumbled upon Web Components, more specifically HTML Imports and Custom Elements and fell in love. As a part-time bare metal afficionado, something that can be done with little hassle in pure html and vanilla js is like crack. And since Custom Elements are a W3C spec, it's extremely future proof which was a really nice bonus for the project.
Creating a simple Custom Element
A custom element is not very different from a React component. It might also be similar to whatever Vue or Angular have but I have no experience with either framework so I will only use React for comparison.
Custom Element:
class VanillaJSWebComponent extends HTMLElement { constructor() { super(); this.innerHTML = `<button type="button">I am a web component written in Vanilla JS</button>`; } } window.customElements.define('wc-vanilla', VanillaJSWebComponent);
React:
class ReactComp extends React.Component { render() { return ( <button type="button">I am a web component written in React</button> ) } } ReactDOM.render(<ReactComp />, document.getElementById('root'));
Both extend their respective base classes and you need to bind them to the DOM. There is one big difference though. In React, you bind the component directly to the spot in the DOM where you want it to show up. With Custom Elements you need to put in a bit more work. With
window.customElements.define() you only declare that what you've written is a Custom Elements that can be used. For the Custom Element to show up, you need to directly reference it on the site you'll be using it on. In React this happens (more or less) implicitly with the
ReactDOM.render() command.
Adding a Custom Element to Your Site
Actually adding the Custom Element to a site involves two steps:
In the
<head> tag of your html file, add
<link href="/path/to/element" rel="import">
and then just call the Custom Element where you want it in the
<body>
<web-component></web-component>
But what about that CORS error?
Now, when you load the site in a browser, chances are you get a CORS error and the component refuses to be loaded. Why CORS is complaining on localhost, I don't know, but generally it's a smart move by your browser to prevent other sites inserting malicious code. The easiest way around this, is to spin up a small server in the folder that you're currently working in. If you have python installed, it is simple one command in the terminal of your choice:
For python 3:
python -m http.server
For python 2:
python -m SimpleHTTPServer
If you don't have python, I'm sure there are other ways.
Now, if you go to the localhost port that the python server just opened for you, the web component will be displayed. That is assuming you use Chrome as your browser.
Conclusion
And that's it. You just created and used a simple web component. Not that complicated, right?
Before you go off building everything with Web Components, there are a few things you should be mindful of:
- Web Components are far from being implemented by modern day browsers and IE 11 (which won't be phased out until 2023) will probably never get there.
- If you want to host a component on a different server that opens up a whole different can of worms
Problem number one can be solved using polyfills.
Problem number two has also been solved by other people but there is a certain degree of effort in there. If you're interested in the solution, check out Micro Frontends by Neuland. | https://sophieau.com/article/custom-elements/ | CC-MAIN-2019-39 | refinedweb | 647 | 61.26 |
SPSite.GetChanges method (SPChangeToken)
Returns a collection of changes, starting from a particular point in the change log.
Namespace: Microsoft.SharePointNamespace: Microsoft.SharePoint
Assembly: Microsoft.SharePoint (in Microsoft.SharePoint.dll)
Parameters
- changeToken
- Type: Microsoft.SharePoint.SPChangeToken
An SPChangeToken object that specifies a starting date and time. An SPException exception is thrown if the token refers to a time before the start of the current change log. To start at the beginning of the change log, pass a a null reference (Nothing in Visual Basic) token.
Return valueType: Microsoft.SharePoint.SPChangeCollection
A collection of SPChange objects that represent the changes.
You can get an SPChangeToken object to pass as an argument to this method by extracting one from the ChangeToken property of the last change returned by a previous call to the GetChanges method. Or you can use the SPChangeToken constructor to create a new change token.
If you construct an SPChangeToken object to use with this method, pass SPChangeCollection.CollectionScope.Site as the constructor’s first argument, the value of the current object’s SPSite.ID property as the second argument, and a DateTime object as the third argument.
The following example is a console application that demonstrates how to get all changes in the log. The program loops while getting changes in batches and breaks out of the loop when it retrieves a collection with zero members, signifying that it has reached the end of the log.
using System; using Microsoft.SharePoint; namespace Test { class ConsoleApp { static void Main(string[] args) { using (SPSite site = new SPSite("")) { long total = 0; SPChangeToken token = null; // Get the first batch of changes. SPChangeCollection changes= site.GetChanges(token); // Loop until the end of the log is reached. while (changes.Count > 0) { total += changes.Count; foreach (SPChange change in changes) { string str = change.ChangeType.ToString(); Console.WriteLine(str); } changes= site.GetChanges(token); token = changes.LastChangeToken; } Console.WriteLine("Total changes = {0:#,#}", total); } Console.Write("\nPress ENTER to continue..."); Console.ReadLine(); } } } | http://msdn.microsoft.com/en-us/library/ms464475.aspx | CC-MAIN-2014-23 | refinedweb | 324 | 60.11 |
Opened 2 years ago
Closed 18 months ago
#775 closed defect (fixed)
'memoryview' gets included in the namespace
Description
The following code:
import pyximport; pyximport.install() from mv import * memoryview(b'asdf')
with mv.pyx:
import array # # commenting the following two lines stops the error being thrown ar2 = array.array('i',[1,2,3]) cdef int[:] myslice = ar2 # shows that memoryview is in the namespace of my.pyx, if above two lines are not commented print(dir())
produces the error:
Traceback (most recent call last): File "run_mv.py", line 3, in <module> memoryview(b'asdf') File "stringsource", line 323, in View.MemoryView.memoryview.__cinit__ (/home/mauro/.pyxbld/temp.linux-x86_64-3.2/pyrex/mv.c:2945) TypeError: __cinit__() takes at least 2 positional arguments (1 given)
(cython 016, python 3.2.3)
So, through the "from mv import *" cython's version of memoryview is imported to python-space which is not quite the same as CPython's and causes the error. However, this only happens if a memoryview is used in the .pyx file.
Change History (1)
comment:1 Changed 18 months ago by scoder
- Milestone changed from wishlist to 0.19.1
- Resolution set to fixed
- Status changed from new to closed
Note: See TracTickets for help on using tickets.
Fixed here: | http://trac.cython.org/ticket/775 | CC-MAIN-2014-42 | refinedweb | 213 | 63.59 |
On Wed, Jun 10, 2015 at 01:40:43AM -0700, Christoph Hellwig wrote:
> Here is the fix. Since that commit the build accidentally relied on the
> installed platform_defs.h:
>
> ---
> From e406bcdcdad80ca491d5b854cde5ad893bef6f8c Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <hch@xxxxxx>
> Date: Wed, 10 Jun 2015 10:34:45 +0200
> Subject: xfsprogs: fix platform_defs.h include path
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
>
> Since 2fe8a2 ("libxfs: restructure to match kernel layout") platform_defs.h
> the xfs subdirectory under include/ only contains selected headers instead
> of being a directory symlink.
>
> Because of this the build does not properly pick up platform_defs.h, which
> isn't symlinked into include/xfs. Builds only work if a recent enough
> platform_defs.h is available under /usr/include/xfs.
>
> Fix this by including platform_defs.h without the xfs/ prefix.
Actually, I think that platform_defs.h needs to be symlinked into
include/xfs. that was the intent, but I bet I missed it because
the build wasn't failing and so I didn't notice that I'd failed to
put it into the the include/Makefile rule for installed headers.
Yeah, there we are - the install-dev rule has a specific install
rule for platform_defs.h as well, so it was being included in the
package builds correctly (i.e. installed in /usr/include/xfs)
without being mentioned in the HFILES definition that defines header
files to be packaged for /usr/include/xfs....
Patch below (which uncovers another issue to do with include files
on the distclean side, but is not fatal and I'll fix tomorrow).
Cheers,
Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
xfsprogs: build fails to find platform_defs.h
From: Dave Chinner <dchinner@xxxxxxxxxx>
Commit 2fe8a2 ("libxfs: restructure to match kernel layout") failed
to link plaftorm_defs.h into include/xfs, and so the system header
file is used instead if it exists. If it doesn't exist, the n the
build fails.
Classify platform_defs.h as a header file that is installed in the
xfsprogs package into /usr/include/xfs, and remove the special
one-off install rule that puts it into that directory. This also
ensures that a build will always find platform_defs.h in
./include/xfs rather than relying on the system includes to provide
it, hence also solving the build issue.
Reported-by: Christoph Hellwig <hch@xxxxxx>
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
include/Makefile | 22 +++++++++++++++++-----
libxfs/crc32.c | 7 +++----
2 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/include/Makefile b/include/Makefile
index 70e43a0..6b9c7f0 100644
--- a/include/Makefile
+++ b/include/Makefile
@@ -18,8 +18,16 @@
TOPDIR = ..
include $(TOPDIR)/include/builddefs
-QAHFILES = libxfs.h libxlog.h \
- atomic.h bitops.h cache.h kmem.h list.h hlist.h parent.h radix-tree.h \
+QAHFILES = libxfs.h \
+ libxlog.h \
+ atomic.h \
+ bitops.h \
+ cache.h \
+ hlist.h \
+ kmem.h \
+ list.h \
+ parent.h \
+ radix-tree.h \
swab.h \
xfs_arch.h \
xfs_btree_trace.h \
@@ -30,8 +38,13 @@ QAHFILES = libxfs.h libxlog.h \
xfs_trace.h \
xfs_trans.h
-HFILES = handle.h jdm.h xqm.h xfs.h
-HFILES += $(PKG_PLATFORM).h
+HFILES = handle.h \
+ jdm.h \
+ $(PKG_PLATFORM).h \
+ platform_defs.h \
+ xfs.h \
+ xqm.h
+
PHFILES = darwin.h freebsd.h irix.h linux.h gnukfreebsd.h
DKHFILES = volume.h fstyp.h dvh.h
LIBHFILES = command.h input.h path.h project.h
@@ -62,7 +75,6 @@ include $(BUILDRULES)
install-dev: default
$(INSTALL) -m 755 -d $(PKG_INC_DIR)
$(INSTALL) -m 644 $(HFILES) $(PKG_INC_DIR)
- $(INSTALL) -m 644 platform_defs.h $(PKG_INC_DIR)
install-qa: install-dev
$(INSTALL) -m 644 $(QAHFILES) $(PKG_INC_DIR)
diff --git a/libxfs/crc32.c b/libxfs/crc32.c
index 0a8d309..bc1fc98 100644
--- a/libxfs/crc32.c
+++ b/libxfs/crc32.c
@@ -33,10 +33,9 @@
* match the hardware acceleration available on Intel CPUs.
*/
-//#include <libxfs.h>
-#include <xfs/platform_defs.h>
-#include <xfs/swab.h>
-#include <xfs/xfs_arch.h>
+#include "xfs/platform_defs.h"
+#include "xfs/swab.h"
+#include "xfs/xfs_arch.h"
#include "crc32defs.h"
/* types specifc to this file */ | http://oss.sgi.com/archives/xfs/2015-06/msg00178.html | CC-MAIN-2017-43 | refinedweb | 661 | 53.07 |
Use Case - Responding To User Input in QML
Supported types of user input
The Qt Quick module provides support for the most common types of user input, including mouse and touch events, text input, and key-press events. Other modules provide support for other types of user input for example, the Qt Sensors module provides support for shake-gestures in QML applications.
This article covers how to handle basic user input; for further information about motion-gesture support, see the Qt Sensors documentation. For information about audio-visual input, see the Qt Multimedia documentation.
Mouse and touch events
The input handlers let QML applications handle mouse and touch events. For example, you could create a button by adding a TapHandler to an Image, or to a Rectangle with a Text object inside. The TapHandler responds to taps or clicks on any type of pointing device.
import QtQuick 2.12 Item { id: root width: 320 height: 480 Rectangle { color: "#272822" width: 320 height: 480 } Rectangle { id: rectangle x: 40 y: 20 width: 120 height: 120 color: "red" TapHandler { onTapped: rectangle.width += 10 } } }
For more advanced use cases such as, drag, pinch and zoom gestures, see documentation for the DragHandler and PinchHandler types.
Note: Some types have their own built-in input handling. For example, Flickable responds to mouse dragging and mouse wheel scrolling. It handles touch dragging and flicking via synthetic mouse events that are created when the touch events are not handled.
Keyboard and button events
Button and key presses, from buttons on a device, a keypad, or a keyboard, can all be handled using the Keys attached property. This attached property is available on all Item derived types, and works with the Item::focus property to determine which type receives the key event. For simple key handling, you can set the focus to true on a single Item and do all your key handling there.
import QtQuick 2.3 Item { id: root width: 320 height: 480 Rectangle { color: "#272822" width: 320 height: 480 } Rectangle { id: rectangle x: 40 y: 20 width: 120 height: 120 color: "red" focus: true Keys.onUpPressed: rectangle.y -= 10 Keys.onDownPressed: rectangle.y += 10 Keys.onLeftPressed: rectangle.x += 10 Keys.onRightPressed: rectangle.x -= 10 } }
For text input, we have several QML types to choose from. TextInput provides an unstyled single-line editable text, while TextField is more suitable for form fields in applications. TextEdit can handle multi-line editable text, but TextArea is a better alternative as it adds styling.
The following snippet demonstrates how to use these types in your application:
import QtQuick 2.12 import QtQuick.Controls 2.4 import QtQuick.Layouts 1.3 ApplicationWindow { width: 300 height: 200 visible: true ColumnLayout { anchors.fill: parent TextField { id: singleline text: "Initial Text" Layout.alignment: Qt.AlignHCenter | Qt.AlignTop Layout.margins: 5 background: Rectangle { implicitWidth: 200 implicitHeight: 40 border.color: singleline.focus ? "#21be2b" : "lightgray" color: singleline.focus ? "lightgray" : "transparent" } } TextArea { id: multiline placeholderText: "Initial text\n...\n...\n" Layout.alignment: Qt.AlignLeft Layout.fillWidth: true Layout.fillHeight: true Layout.margins: 5 background: Rectangle { implicitWidth: 200 implicitHeight: 100 border.color: multiline.focus ? "#21be2b" : "lightgray" color: multiline.focus ? "lightgray" : . | https://doc-snapshots.qt.io/qt5-dev/qtquick-usecase-userinput.html | CC-MAIN-2019-22 | refinedweb | 522 | 57.98 |
Subject: [OMPI users] Shared memory communication limits parallelism?
From: João Luis Silva (jsilva_at_[hidden])
Date: 2007-11-30 13:01:47
Hello,
I'm using the OpenMPI version that is distributed with Fedora 8
(openmpi-1.2.4-1.fc8) on a dual Xeon 5335 (which is a quad core CPU), and
therefore I have 8 cores in a shared memory environment.
AFAIK by default OpenMPI correctly uses shared memory communication (sm)
without any extra parameter to mpirun, however the programs take longer and
don't scale well for more than 4 processors. Here are some example timings
for a simple MPI program (appended to this email):
time mpirun -np N ./mpitest
(the timings are the same for time mpirun --mca btl self,sm -np N)
N t(s) t1/t
-------------------------------
1 35.7 1.0
2 18.8 1.9
3 12.7 2.8
4 10.2 3.5
5 8.2 4.4
6 8.0 4.4
7 7.2 5.0
8 6.4 5.6
You can see that processes 5 and up barely speeds up the process. However
with tcp it has a nearly perfect scalling:
time mpirun --mca btl self,tcp -np N
N t(s) t1/t
-------------------------------
1 34.8 1.0
2 17.7 2.0
3 11.7 3.0
4 8.8 4.0
5 7.0 5.0
6 6.0 5.8
7 5.2 6.8
8 4.5 7.8
Why is this happening? Is this a bug?
Best regards,
João Silva
P.S. Test program appended:
----------------------------------------------------------
#include "stdio.h"
#include "math.h"
#include "mpi.h"
#define N 1000000000
int main(int argc, char* argv[]){
int i;
/* Init MPI */
int np,p;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&np);
MPI_Comm_rank(MPI_COMM_WORLD,&p);
printf("Process #%d of %d\n", p+1, np);
for (i = p*N/np; i < (p+1)*N/np; i++) {
exp(i);
}
return 0;
}
---------------------------------------------------------- | https://www.open-mpi.org/community/lists/users/2007/11/4569.php | CC-MAIN-2016-30 | refinedweb | 323 | 79.67 |
Ask HN: Relationship between set theory and category theory
I have an idea about the relationship between set theory and category theory and I would like some feedback. I would like others to see it too, and I don't know how to do it. I think it's at least interesting to look at as a slightly crazy collage, but I was a bit more excited than normal when the idea hit, so I just had to dump it all at once in this image: (You will have to zoom the picture in order to be able to read the scribbles.)
It has to do with resonance in the energy flowing in emergent networks. Can't quite put my finger on it, so I'll be here to answer any questions.
Thanks for reading.
"Categorical set theory" > "References"...
From "Homotopy category" > "Concrete categories"... :
> While the objects of a homotopy category are sets (with additional structure), the morphisms are not actual functions between them, but rather a classes of functions (in the naive homotopy category) or "zigzags" of functions (in the homotopy category). Indeed, Freyd showed that neither the naive homotopy category of pointed spaces nor the homotopy category of pointed spaces is a concrete category. That is, there is no faithful functor from these categories to the category of sets.
My "understanding" of category theory is extremely shallow, but that's exactly why I think my proposal makes sense. It is the kind of thing that everybody ignores for decades precisely because it's transparently obvious, like a fish that doesn't understand water.
Here is the statement:
The meaning of no category is every category.
reference:...
This was already understood by everybody in the field, no doubt. It's just that somebody has to actually say it to someone else in order for the symmetry to break. The link above has the exact description of this, from Terence Tao.
The most popular docker images each contain at least 30 vulnerabilities
Although vulnerability scanners can be a useful tool, I find it very troublesome that you can utter the sentence "this package contains XX vulnerabilities, and that package contains YY vulnerabilities" and then stop talking. You've provided barely any useful information!
The quantity of vulnerabilities in an image is not really all that useful information. A large amount of vulnerabilities in a Docker image does not necessarily imply that there's anything insecure going on. Many people don't realize that a vulnerability is usually defined as "has a CVE security advisory", and that CVEs get assigned based on a worst-case evaluation of the bug. As a result, having a CVE in your container barely tells you anything about your actual vulnerability position. In fact, most of the time you will find that having a CVE in some random utility doesn't matter. Most CVEs in system packages don't apply to most of your containers' threat models.
Why not? Because an attacker is very unlikely to be able to use vulnerabilities in these system libraries or utilities. Those utilities are usually not in active use in the first place. Even if they are used, you are not usually in a position to exploit these vulnerabilities as an attacker.
Just as an example, a hypothetical outdated version of grep in one of these containers can hypothetically contain many CVEs. But if your Docker service doesn't use grep, then you would need to manually run grep to be vulnerable. And an attacker that is able to run grep in your Docker container has already owned you - it doesn't make a difference that your grep is vulnerable! This hypothetical vulnerable version of grep therefore makes no difference in the security of your container, despite containing many CVEs.
It's the quality of these vulnerabilities that matters. Can an attacker actually exploit the vulnerabilities to do bad things? The answer for almost all of these CVEs is "no". But that's not really the product that Snyk sells - Snyk sells a product to show you as many vulnerabilities as possible. Any vulnerability scanner company thinks it can provide most business value (and make the most money) by reporting as many vulnerabilities as it can. For sure it can help you to pinpoint those few vulnerabilities that are exploitable, but that's where your own analysis comes in.
I'm not saying there's not a lot to improve in terms of container security. There's a whole bunch to improve there. But focusing on quantities like "amount of CVEs in an image" is not the solution - it's marketing.
This whole pattern of depending on a base container for setup and then essentially throwing away everything you get from the package manager is part of the issue. There is no real package management for Docker. Hell, there isn't even an official way to determine if your image needs an upgrade (I wrote a ruby script that fetches the latest tags from a Docker repo, extracts the ones with numbers, sorts them in order, and compares them to what I have running).
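That tag comparison is mostly a filter-and-sort problem; the version-picking half of such a script could be sketched in shell (the registry fetch is omitted here, and the function name and tag list are just illustrative):

```shell
#!/bin/sh
# Given image tags on stdin (one per line), drop non-numeric tags like
# "latest" or "edge" and print the highest remaining version.
newest_numeric_tag() {
  grep -E '^[0-9]+(\.[0-9]+)*$' |
    sort -t. -k1,1n -k2,2n -k3,3n |
    tail -n 1
}

printf '3.9\nlatest\n3.10\n3.2\nedge\n' | newest_numeric_tag   # prints 3.10
```

Comparing that against the tag you are currently running tells you whether a rebuild is due.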
Relying on an Alpine/Debian/Ubuntu base helps to get dependencies installed quickly. Docker could have just created their own base distro and some mechanism to track package updates across images, but they did not.
There are guides for making bare containers; they contain nothing: no ip, grep, bash, only the bare minimum libraries and requirements to run your service. They are minimal, but incredibly difficult to debug (sysdig still sucks unless you shell out money for enterprise).
I feel like containers are alright, but Docker is a partial dumpster fire. cgroup isolation is good, the crazy way we deal with packages in container systems is not so good.
Sure, if you're just checking base-distro packages for security vulnerabilities, you're going to find security issues that don't apply (e.g. an exploit in libpng even though your container runs nothing that even links to libpng), but it doesn't excuse the whole issue with the way containers are constructed.
I think this space is really open too, for people to find better systems that are also portable: image formats that are easy to build, easy to maintain dependencies for, and easy to run as FreeBSD jails OR Linux cgroup managed containers (Docker for FreeBSD translated images to jails, but it's been unmaintained for years).
I agree the tooling is a tire fire :(
> e.g. an exploit in libpng even though your container runs nothing that even links to libpng
It's a problem because some services or APIs could be abused, giving the attacker a path to these vulnerable resources; they can then use that vulnerability regardless of whether it is currently used by your service.
I like my images to contain only what's absolutely needed to do that job. It's not so difficult to do, provided people would be willing to architect systems from the ground up, instead of pulling in a complete Debian or Fedora installation and then removing things (that should be outlawed imho lol). Not only do I get less attack surface but also smaller updates (which in turn is an incentive to update more often), less complexity, fewer logs, easier auditability (now every log file or even log line might give valid clues), faster incident response, easier troubleshooting, sorry for going on and on ...
It's a cultural problem too: people work in environments where it's normal to have every command available on a production system (really?), where there is no barrier to installing anything new that is "required" without discussion & peer review (what are we pair programming for?), and where nobody even tracks the dead weight in production or whether configs are locked down.
I sometimes think many companies lost control over this long ago. [citation needed] :(
I'm not familiar with Docker infrastructure but what is the alternative to "pulling in a complete debian or fedora installation and then removing things"? Compiling your own kernel and doing the whole "Linux From Scratch" thing? Isn't that incredibly time-intensive to do for every single container?
Just have an image with a very minimal userland. Compiling your own kernel is irrelevant because you need the host kernel to run the container, and container images don't contain a kernel.
The busybox image is a good starting point. Take that, then copy your executables and libraries. If you are willing to go further, you can rather easily compile your own busybox with most utilities stripped out. It's not time intensive because you need to do it just once, and it takes just an afternoon to figure out how.
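A minimal sketch of that pattern, using a multi-stage build (the service name, base images, and build command here are hypothetical, not a recommendation for any particular stack):

```dockerfile
# Build stage: use a full-featured image with compilers and a
# package manager, none of which ends up in the final image.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
# Static linking so the binary has no runtime library dependencies.
RUN CGO_ENABLED=0 go build -o /myservice .

# Final stage: busybox gives you a shell and a handful of utilities
# for debugging; FROM scratch would strip even those.
FROM busybox:1.31
COPY --from=build /myservice /myservice
USER 1000
ENTRYPOINT ["/myservice"]
```

A scanner pointed at the result has almost nothing left to flag, because there are no distro packages in the image at all.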
I don't think this is a tooling problem at all.
"The tooling makes it too easy to do it wrong." Compared to shell scripts with package manager invocations? Nobody configures a system with just packages: there are always scripts to call, chroots to create, users and groups to create, passwords to set, firewall policies to update, etc.
There are a bunch of ways to create LXC containers: shell scripts, Docker, ansible. Shell scripts preceded Docker: you can write a function to stop, create an intermediate tarball, and then proceed (so that you don't have to run e.g. debootstrap without a mirror every time you manually test your system build script; so that you can cache build steps that completed successfully).
With Docker images, the correct thing to do is to extend FROM the image you want to use, build the whole thing yourself, and then tag and store your image in a container repository. Neither should you rely upon months-old liveCD images.
"You should just build containers on busybox." So, no package management? A whole ensemble of custom builds to manually maintain (with no AppArmor or SELinux labels)? Maintainers may prefer for distros to field bug reports for their own common build configurations and known-good package sets. Please don't run as root in a container ("because it's only a container that'll get restarted someday"). Busybox is not a sufficient OS distribution.
It's not the tools, it's how people are choosing to use them. They can, could, and should try and use idempotent package management tasks within their container build scripts; but they don't and that's not Bash/Ash/POSIX's fault either.
> With Docker images, the correct thing to do is to extend FROM the image you want to use, build the whole thing yourself, and then tag and store your image in a container repository. Neither should you rely upon months-old liveCD images.
This should rebuild all. There should be an e.g. `apt-get upgrade -y && rm -rf /var/lib/apt/lists` in there somewhere (because base images are usually not totally current (and neither are install ISOs)).
`docker build --no-cache --pull`
You should check that each Dockerfile extends FROM `tag:latest` or the latest version of the tag that you support. It's not magical; you do have to work at it.
Also, IMHO, Docker SHOULD NOT create another Linux distribution.
Tinycoin: A small, horrible cryptocurrency in Python for educational purposes
The 'dumbcoin' jupyter notebook is also a good reference: "Dumbcoin - An educational python implementation of a bitcoin-like blockchain"...
When does the concept of equilibrium work in economics?
"Modeling stock return distributions with a quantum harmonic oscillator" .
"Quantum harmonic oscillator"
The QuantEcon lectures have a few different multiple agent models:
"Rational Expectations Equilibrium"
"Markov Perfect Equilibrium"
"Robust Markov Perfect Equilibrium"
"Competitive Equilibria of Chang Model"
... "Lectures in Quantitative Economics as Python and Julia Notebooks" (data sources (pandas-datareader, pandaSDMX), tools, latex2sympy)
"Econophysics"
> Indeed, as shown by Bruna Ingrao and Giorgio Israel, general equilibrium theory in economics is based on the physical concept of mechanical equilibrium.
Simdjson – Parsing Gigabytes of JSON per Second
> Requirements: […] A processor with AVX2 (i.e., Intel processors starting with the Haswell microarchitecture released 2013, and processors from AMD starting with Ryzen)
Also noteworthy that on Intel at least, using AVX/AVX2 reduces the frequency of the CPU for a while. It can even go below base clock.
iirc, it's complicated. Some instructions don't reduce the frequency; some reduce it a little; some reduce it a lot.
I'm not sure AVX2 is as ubiquitous as the README says: "We assume AVX2 support which is available in all recent mainstream x86 processors produced by AMD and Intel."
I guess "mainstream" is somewhat subjective, but some recent Chromebooks have Celeron processors with no AVX2:...
Because someone wanting 2.2GB/s JSON parsing is deploying to a chromebook...
It doesn't seem that laughable to me to want faster JSON parsing on a Chromebook, given how heavily JSON is used to communicate between webservers and client-side Javascript.
"Faster" meaning faster than Chromebooks do now; 2.2 GB/s may simply be unachievable hardware-wise with these cheap processors. They're kinda slow, so any speed increase would be welcome.
AVX2 also incurs some pretty large penalties for switching between SSE and AVX2. Depending on the amount of time taken in the library between calls, it could be problematic.
This looks mostly applicable to server scenarios where the runtime environment is highly controlled.
There is no real penalty for switching between SSE and AVX2, unless you do it wrong. What are you referring to specifically?
Are you talking about state transition penalties that can occur if you forget a vzeroupper? That's the only thing I'm aware of which kind of matches that.
A faster, more efficient cryptocurrency
Full disclosure: I work on the cryptocurrency in this article, Algorand.
There are a lot of questions and speculation here about this paper and Algorand. I would be happy to try to answer them to your satisfaction. Some context may be helpful first, though. This paper is an innovation about one aspect of our technology.
The Algorand pure proof-of-stake blockchain and associated cryptocurrency has many novel innovations aside from Vault. It possesses security and scalability properties beyond what any other blockchain technology allows while still being completely decentralized. Our website, algorand.com, and whitepaper are great places to start to learn more.
If you learn best from videos then I suggest you watch Turing award winner and cryptographic pioneer, Silvio Micali, talk about Algorand:. He is a captivating speaker and the founder of Algorand.
Are there reasons that e.g. Bitcoin and Ethereum and Stellar could not implement some of these more performant approaches that Algorand [1] and Vault [2] have developed, published, and implemented? Which would require a hard fork?
[1]
[2]
My understanding is that PoS approaches follow normal byzantine agreement theory which states that adversaries cannot control more than 1/3rd of the accounts (or money in the case of algorand). You can also delay new blocks more easily.
Ethereum is scared of that, so they are implementing some hybrid form.
Bitcoin is doomed from my perspective, because of the focus on proof of work and the confirmation times. When you realize that algorand is super fast, there is no "confirmation time", and there is no waste in energy to mine, then it is hard to back up any cryptocurrency focusing on proof of work.
And what of decentralized premined chains (with no PoW, no PoS, and far less energy use) that release coins with escrow smart contracts over time such as Ripple and Stellar (and close a new ledger every few seconds)?
>.
What prevents a person from using a chain like IPFS?
Ethereum Casper PoS has been under review for quite some time.
Why isn't all Bitcoin on Lightning Network?
Bitcoin could make bootstrapping faster by choosing a considered-good block hash and balance set, but AFAIU, re-verifying transactions, as Bitcoin and derivatives do, prevents hash collision attacks that are currently considered infeasible for SHA-256 (especially given a low block size).
There was an analysis somewhere where they calculated the cloud server instance costs of mounting a ~51% attack (which applies to PoW chains) for various blockchains.
Bitcoin is not profitable to mine in places without heavily subsidized dirty/clean energy anymore: energy and Bitcoin commodity costs and prices have intersected. They'll need any of: inexpensive clean energy, more efficient chips, higher speculative value.
Energy arbitrage (grid-scale energy storage) may be more profitable now. We need energy storage in order to reach 100% renewable energy (regardless of floundering policy support).
Ripple is not decentralized. I don't know enough about Stellar to answer.
Bitcoin is software and can easily implement these features but the community is divided and can't reach consensus on anything. Lightning Network as layer two solution is pretty good from what I know.
Ethereum improvements are coming along very slowly and that's good. They're the only blockchain with active engagement by thousands of multiple parties.
Algorand and Vault's papers might sound good, but who knows how they'll turn out in production.
People argue this all day. There's a lot of FUD.
Ripple only runs ~7% of validator nodes; which is far less centralized control than major Bitcoin mining pools and businesses (who do the deciding in regards to the many Bitcoin hard forks); that's one form of decentralization.
Ripple clients can use their own UNL or use the Ripple-approved UNL.
Ripple is traded on a number of exchanges (though fewer than Bitcoin for certain); that's another form of decentralization.
As an open standard, ILP will further reduce vendor lock in (and increase interoperability between) networks that choose to implement it.
There are forks of Ripple (e.g. Stellar) just like there are forks of Bitcoin and Ethereum.
From... :
>.
How does your definition of 'decentralized' differ?
Git-signatures – Multiple PGP signatures for your commits
Is there anything out there that doesn't need GPG? Having a working GPG install is a huge lift for developers.
I take this to mean: apart from the barnacles on GPG, could there be a system which does what GPG does for software development (signing), without the non-functioning web-of-trust of GPG, or the hierarchical system of x509 signing? Something that deals with lost keys, compromised keys/accounts, loss of DNS control, MitMing, MitBing, etc?
I think it is probably in the class of problems where there are no great foolproof solutions. However, I can imagine that techniques like certificate transparency (all signed x509 certificates pushed to a shared log) would be quite useful. Even blockchain techniques. Maybe send someone to check on me, I'm feeling unwell having written that.
> I think it is probably in the class of problems where there are no great foolproof solutions. However, I can imagine that techniques like certificate transparency (all signed x509 certificates pushed to a shared log) would be quite useful.
Securing DNS: ""
> Certs on the Blockchain: "Can we merge Certificate Transparency with blockchain?"
> Namecoin (decentralized blockchain DNS):
(Your first link is broken.)
My main problem with blockchain is the excessive energy consumption of PoW. I know there are PoS efforts, but they seem problematical.
I like the recent CertLedger paper:
My mistake. How ironic. Everything depends upon the red wheelbarrow. Here's that link without the trailing ":".
> My main problem with blockchain is the excessive energy consumption of PoW. I know there are PoS efforts, but they seem problematical.
One report said that 78% of Bitcoin energy usage is from renewable sources (many of which would otherwise be curtailed and otherwise unfunded due to flat-to-falling demand for electricity). But PoW really is expensive and hopefully the market will choose less energy-inefficient solutions from the existing and future blockchain solutions while keeping equal or better security assurances.
>> Proof of Work (Bitcoin, ...), Proof of Stake (Ethereum Casper), Proof of Space, Proof of Research (GridCoin, CureCoin,)
The spec should be: DDoS resilient (without a SPOF), no one entity with control over API and/or database credentials and database backups and the clock, and immutable.
Immutability really cannot be ensured with hashed records that incorporate the previous record's hash as a salt in a blocking centralized database because someone ultimately has root and the clock and all the backups and code vulnerable to e.g. [No]SQL injection; though distributed 'replication' and detection of record modification could be implemented. git push -f may be detected if it's on an already-replicated branch; but git depends upon local timestamps. google/trillian does Merkle trees in a centralized database (for Certificate Transparency).
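To illustrate the "each record incorporates the previous record's hash" idea and its limits, here's a toy hash chain in Python (illustrative only; a real system would add signed timestamps and replication, and as noted above, someone with root, the clock, and all the backups can still rewrite the whole suffix):

```python
import hashlib
import json

def chain(records):
    """Build a toy hash chain: each entry commits to the previous entry's hash."""
    prev, out = "0" * 64, []
    for rec in records:
        payload = prev + json.dumps(rec, sort_keys=True)
        h = hashlib.sha256(payload.encode()).hexdigest()
        out.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return out

def verify(entries):
    """Detect in-place tampering by recomputing every link.
    A root user who rewrites the entire chain (and every replica and
    backup) still defeats this check, which is the point made above."""
    prev = "0" * 64
    for e in entries:
        payload = prev + json.dumps(e["record"], sort_keys=True)
        h = hashlib.sha256(payload.encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != h:
            return False
        prev = h
    return True
```

Modifying any record invalidates every later hash, so distributed replicas can detect (though not prevent) rewrites, which is essentially what google/trillian's Merkle trees buy you with better asymptotics.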
In quickly reading the git-signatures shell script sources, I wasn't certain whether the git-notes branch with the .gitsigners that are fetched from all n keyservers (with DNS) is also signed?
I also like the "Table 1: Security comparison of Log Based Approaches to Certificate Management" in the CertLedger paper. Others are far more qualified to compare implementations.
You read my mind. I'd love if it could be rooted in a Yubikey.
Decoupling the "signing" and "verifying" parts seem like a good idea. As random Person signs something, how someone else figures out how to go trust that signature is a separate problem.
> I'd love if it could be rooted in a Yubikey.
FIDO2 and Yubico helped develop the new W3C WebAuthn standard:
But WebAuthn does not solve for WoT or PKI or certificate pinning.
> Decoupling the "signing" and "verifying" parts seem like a good idea. As random Person signs something, how someone else figures out how to go trust that signature is a separate problem.
Someone can probably help with terminology here. There's identification (proving that a person has the key AND that it's their key (biometrics, challenge-response)), signing (using a key to create a cryptographic signature – for the actual data or a reasonably secure cryptographic hash of said data – that could only could have been created with the given key), signature verification (checking that the signature was created by the claimed key for the given data), and then there's trusting that the given key is authorized for a specific purpose (Web of Trust (key-signing parties), PKI, ACME, exchange of symmetric keys over a different channel such as QKD) by e.g. signing a structured document that links cryptographic keys with keys for specific authorized functions and trusting the key(s) used to sign said authorizing document.
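A tiny sketch of the signing/verification split, using Python's stdlib. HMAC is a symmetric stand-in here because real asymmetric signatures need a third-party library; the mechanics line up but the trust model does not (with HMAC, anyone who can verify can also sign):

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> str:
    """Sign a cryptographic hash of the data rather than the data itself,
    as most signature schemes do. (HMAC stands in for an asymmetric
    signature purely for illustration.)"""
    digest = hashlib.sha256(data).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, signature: str) -> bool:
    """Verification recomputes the signature and compares in constant time.
    Whether the KEY is trusted or authorized for this purpose is the
    separate problem (WoT, PKI, ...) discussed above."""
    return hmac.compare_digest(sign(key, data), signature)
```

Note that `verify` only answers "was this made with this key?"; it says nothing about whose key it is or what it is authorized for.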
Private (e.g. Zero Knowledge) blockchains can be used for key exchange and key rotation. Public blockchains can be used for sharing (high-entropy) key components; also with an optional exchange of money to increase the cost of key compromise attempts.
There's also WKD: "Web Key Directory"; which hosts GPG keys over HTTPS from a .well-known URL for a given user@domain identifier:
Compared to existing PGP/GPG keyservers, WKD does rely upon HTTPS.
TUF is based on Thandy. TUF: "The Update Framework" does not presume channel security (is designed to withstand channel compromise)
The TUF spec doesn't mention PGP/GPG:...
There's a derivative of TUF for automotive applications called Uptane:
The Bitcoin article on multisignature; 1-of-2, 2-of-2, 2-of-3, 3-of-5, etc.:
Running an LED in reverse could cool future computers
"Near-field photonic cooling through control of the chemical potential of photons" (2019)
Compounding Knowledge
Buffett’s approach to life is interesting for the same reason an Olympic gymnast is interesting. He has specialized to an extreme and is taking advantage of the rewards of that specialization and natural talent in a unique way.
It’s easy for me to feel shame that I don’t read 8 hours per day, as Warren and Charlie do. Buffett is a phenomenal investor but by all accounts, rather odd. He eats like crap, doesn’t exercise, had a profoundly weird relationship with his wife, and seems addicted to his work at the expense of everything else.
My point is this: his practice of reading income statements and business reports 8 hours per day for 60 years doesn’t make him the kind of person I wish to emulate.
Don’t get me wrong, I respect his levelheadedness toward money, lack of polish wrt PR, and generous philanthropic efforts. There’s a lot of good. But it’s easy to idolize the guy.
Controversial take: Buffett is actually not a great investor in the way people think. Via the float in his insurance companies, he receives a 0% infinite maturity loan to plow into the market. That financial leverage gives him the ability to beat the market year after year - not his own stockpicking prowess.
If you were to start with $1B, then get an extra $2B that you never had to pay back, you too would do quite well holding Coca-Cola and other safe companies with moats. Your returns would be 3x everyone else's.
Buffett himself has tried to explain the impact of float on Berkshire Hathaway but it never seems to sink in with people.
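The arithmetic of the parent's point, as a back-of-envelope sketch (the numbers are the hypothetical ones from above; real insurance float is not free every year and must eventually be paid out as claims):

```python
def leveraged_return(equity, float_, market_return, cost_of_float=0.0):
    """Return on equity when `float_` dollars of cheap borrowing are
    invested alongside `equity` at the same market return.
    Illustrative arithmetic only."""
    gross = (equity + float_) * market_return
    return (gross - float_ * cost_of_float) / equity

# $1B equity + $2B of 0%-cost float at a 10% market return
# -> 30% return on equity, i.e. 3x the unlevered investor.
```

That 3x is exactly the "your returns would be 3x everyone else's" claim: it comes from the balance sheet, not from stock selection.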
I'll take it even further: Buffett isn't statistically different from average. He just found a strategy that happened to work, was stubborn enough to stick to it through bad times, and used copious amounts of leverage to juice returns.
A really interesting paper called "Buffett's Alpha" talks about this, and was able to replicate his performance by following a few simple rules. They found that he produced very little actual alpha. To his credit, he seems to have been observant enough to stumble into factors (value, quality, and low beta) before anyone else knew they existed, which is his real strength and contribution.
A summary of that paper from the authors is here:....
They work at AQR, a firm which is notable for being one of the only successful hedge funds to actually publish meaningful research.
The paper's conclusion is noteworthy:
> The efficient market counterargument is that Buffett was simply lucky. Our findings suggest that Buffett’s success is neither luck nor magic but is a reward for a successful implementation of value and quality exposures that have historically produced high returns. Second, we illustrated how Buffett’s record can be viewed as an expression of the practical implementability of academic factor returns after transaction costs and financing costs. We simulated how investors can try to take advantage of similar investment principles. Buffett’s success shows that the high returns of these academic factors are not simply “paper” returns; these returns can be realized in the real world after transaction costs and funding costs, at least by Warren Buffett. Furthermore, Buffett’s exposure to the BAB factor and his unique access to leverage are consistent with the idea that the BAB factor represents reward to the use of leverage.
BTW, AQR funded the initial development of pandas; which now powers tools like alphalens (predictive factor analysis) and pyfolio.
There's your 'compounding knowledge'.
(Days later)
"7 Best Community-Built Value Investing Algorithms Using Fundamentals"
(The Zipline backtesting library also builds upon Pandas)
How can we factor ESG/sustainability reporting into these fundamentals-driven algorithms in order to save the world?
I wonder to what extent Warren Buffett is Warren Buffett because of how he thinks and acts (like all of these non-fiction authors selling books by using his name would have us believe), and to what extent he is the product of media selection bias. -- If you take a large enough group of people who take risky stakes that are large enough (like the world of financial asset management), then one of them is bound to be as successful as Warren Buffett, even if they all behave randomly.
Funnily enough, Buffett actually calculated this in his essay "The Superinvestors of Graham and Doddsville"
Basically, advocates of the Efficient Market Hypothesis believed that it was impossible for anyone to deliberately, repeatedly generate alpha. The market was rational, and success was probabilistically distributed. With a large enough population, you will get Buffett-level returns; therefore Buffett is a fluke.
However, Buffett calculated that there weren't enough investors for his success to be a product of random distribution, plus there were two dozen others who followed a similar strategy and also consistently generated alpha.
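The "luck" null model is easy to compute: treat each year as an independent coin flip and count how many investors you'd expect to beat the market every single year. (The population size and streak length below are illustrative, in the spirit of the coin-flipping argument the essay responds to.)

```python
def expected_streaks(n_investors, years, p_beat=0.5):
    """Expected number of investors who beat the market `years` years
    in a row if each year is an independent coin flip."""
    return n_investors * p_beat ** years

# Among 1,000,000 random investors, 20 straight winning years:
# expected_streaks(1_000_000, 20) -> ~0.95, i.e. about one "Buffett" by luck.
```

Buffett's rejoinder was that the pool of Graham-and-Dodd investors was far too small for this model, and that their outperformance clustered by method rather than by chance.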
"The Superinvestors of Graham and Doddsville" (1984)...
From... :
> The speech and article challenged the idea that equity markets are efficient through a study of nine successful investment funds generating long-term returns above the market index.
This book probably doesn't mention that he's given away over 71% to charity since Y2K. Or that it's really cold and windy and snowy in Omaha; which makes for lots of reading time.
"Warren Buffett and the Interpretation of Financial Statements: The Search for the Company with a Durable Competitive Advantage" (2008) [1], "Buffetology" (1999) [2], and "The Intelligent Investor" (1949, 2009) [3] are more investment-strategy-focused texts.
[1]...
[2]...
[3]...
Value Investing:
> This is why it’s commonly telling you what happened, not why it happened or under what conditions it might happen again.
Why CISA Issued Our First Emergency Directive
There are a number of efforts to secure DNS (and SSL/TLS which generally depends upon DNS; and upon which DNS-over-HTTPS depends) and the identity proof systems which are used for record-change authentication and authorization.
Domain registrars can and SHOULD implement multi-factor authentication.
Are there domain registrars that support FIDO/U2F or the new W3C WebAuthn spec?
Credentials and blockchains (and biometrics):...
DNSSEC:...
ACME / LetsEncrypt certs expire after 3 months (*) and require various proofs of domain ownership:...
Certificate Transparency:
Certs on the Blockchain: "Can we merge Certificate Transparency with blockchain?"
Namecoin (decentralized blockchain DNS):
DNSCrypt:
DNS over HTTPS:
DNS over TLS:
DNS:
Chrome will Soon Let You Share Links to a Specific Word or Sentence on a Page
I would like to urge the browser developers/makers to adopt existing proposals which came through open consensus which do precisely cover the same use cases (and more!)
W3C Reference Note on Selectors and States:
It is part of the suite of specs that came through the W3C Web Annotation Working Group:
More examples in W3C Note Embedding Web Annotations in HTML:
Different kinds of Web resources can combine multiple selectors and states. Here is a simple one using `TextQuoteSelector` handled by the clientside application:(type=TextQuoteSelecto...
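For reference, a minimal annotation targeting a text quote, with field names taken from the W3C Web Annotation Data Model (the source URL and text values here are placeholders):

```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "type": "Annotation",
  "body": "Comment text goes here",
  "target": {
    "source": "https://example.com/page.html",
    "selector": {
      "type": "TextQuoteSelector",
      "exact": "the quoted passage",
      "prefix": "text immediately before ",
      "suffix": " text immediately after"
    }
  }
}
```

The `prefix`/`suffix` fields disambiguate repeated occurrences of `exact`, which is the same problem the Chrome proposal has to solve with its own parameters.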
A screenshot/how-to:
"Integration with W3C Web Annotations"
> It would be great to be able to comment on the linked resource text fragment. W3C Web Annotations [implementations] don't recognize the targetText parameter, so AFAIU comments are then added to the document#fragment and not the specified text fragment. [...]
> Is there a simplified mapping of W3C Web Annotations to URI fragment parameters?
Guidelines for keeping a laboratory notebook
My paper notebooks were always horrific. Writing has always been painful and awkward for me. But I could type like nobody's business thanks to programming. I survived college in the early 80s by being one of the first students to get a word processor.
By the time I was keeping a notebook, my work was generating mountains of computer readable data, source code, and so forth. We managed by agreeing on a format for data files, where the filename referenced a notebook page, and it worked OK.
Today, it's unavoidable that people are going to keep their notes electronically, and there are no perfect solutions for doing this. Wet chemists still like paper notebooks, since it's hard to get a computer close to the bench, and to type while wearing rubber gloves. Academic workers are expected to supply their own computers, and are nervous about getting them damaged or contaminated. Plus, drawing pictures and writing equations on a computer are both awkward.
Computation related fields lend themselves well to purely electronic notebooks, no surprise. Today, a lot of my work fits perfectly in a Jupyter notebook.
Commercial notebook software exists, but it tends to be sold largely for enterprise use, i.e., the solution it solves is how to control lab workers and secure their results, not how to enable independent, creative work.
> Computation related fields lend themselves well to purely electronic notebooks, no surprise. Today, a lot of my work fits perfectly in a Jupyter notebook.
Some notes and ideas regarding Jupyter notebooks as lab notebooks from "Keeping a Lab Notebook [pdf]":
Superalgos and the Trading Singularity
Though others didn't, you might find this interesting: "Ask HN: Why would anyone share trading algorithms and compare by performance?" ( )
I think there is value in a back-testing module; however, sharing an algo doesn't make sense to me, unless someone wants to buy mine for an absurd amount.
I think part of the value of sharing knowledge and algorithmic implementations comes from getting feedback from other experts; like peer review and open science and teaching.
Case in point: the first algorithm on this list [1] of community contributed algorithms that were migrated to their new platform is "minimum variance w/ constraint" [2]. Said algorithm showed returns of over 200% as compared with 77% returns from the SPY S&P 500 ETF over the same period, ceteris paribus. In the 69 replies, there are modifications by community members and the original author that exceed 300%.
Working together on open algorithms has positive returns that may exceed advantages of closed algorithmic development without peer review.
[1]...
[2]
How well does it do in production though and what happens when multiple algos execute the same trades? Does it cause the rest of the algos to adapt and change results? It makes sense to back-test together and work on it, but if it's proven to work, someone will create something to monitor volume on those trades and work against it. I'd be curious to see the same algo do 300% in production, and if so, then my bias would be uncalled for.
> How well does it do in production though and what happens when multiple algos execute the same trades?
Price inflation.
> Does it cause the rest of the algos to adapt and change results?
Trading index ETFs? IDK
> It makes sense to back-test together and work on it, but if it's proven to work, someone will create something to monitor volume on those trades and work against it.
Why does it need to do lots of trades? Is it possible for anyone other than e.g. SEC to review trades by buyer or seller?
> I'd be curious to see the same algo do 300% in production, and if so, then my bias would be uncalled for.
pyfolio does tear sheets with Zipline algos: pyfolio/examples/zipline_algo_example.ipynb...
alphalens does performance analysis of predictive factors: alphalens/examples/pyfolio_integration.ipynb...
awesome-quant lists a bunch of other tools for algos and superalgos:
What's a good platform for paper trading (with e.g. zipline or moonshot algorithms)?
I disagree with price inflation just because everything is hedged, but it may be true.
The too many trades is if there are 300 algos, and I look in the order book and see different orders from different exchanges at the same price point, then I would be adapting to see what's happening, not myself, but there are people who watch order flows.
I don't paper trade, either it works in production with real money or not. Have to get a feel for spreads, commissions, and so on.
Also, in my case, I am hesitant to even use paid services as someone can be watching it, so most of my tools are made by me. Good luck with your trading though; if it works out, let me know, I'd pay to use it alongside my other trades.
Crunching 200 years of stock, bond, currency and commodity data
Coming from a CS/Econ/Finance background....
Efficiency as “the market fully processes all information” is decidedly untrue.
Efficiency as “Its very hard to arb markets without risk better than a passive strategy” is very true.
Most professional money managers don’t beat the market once you factor in fees. Many private equity funds produce outsize returns, but it’s based on higher risk and taking advantage of tax laws.
Over time, positive performance for mutual funds doesn't persist. (If you're in the top decile of performance one year, you're no more likely to be there next year.)
Despite all of this, there are ways people make money. But it's a small subset of professionals. It's high frequency traders who find ways to cut the line on mounds of pennies. It's inside information from hedge funds. Or earlier access to non-public information. But it's generally not in areas that normal people like you and me can access.
> It inside information from hedge funds. Or earlier access to non public information. But it’s generally not in areas that normal people like you and me can access.
Asymmetric information is pretty far from what used to be said about the perfect market and rational actors. It's "there's a sucker born every minute" and "if it seems too good to be true it probably is" economics.
I might be misunderstanding what you're saying here, but are you sure you're right? Fama originally predicated the model of the efficient market (the efficient market hypothesis) on the idea of informational efficiency. Information asymmetry is a fundamental measure involved in the idealized model of an efficient market.
What you're mentioning about rational actors is actually a different topic altogether in economics.
Or have I misunderstood what you're getting at?
I was interested, so I did some research here.
Rational Choice Theory
Rational Behavior
> Most mainstream academic economics theories are based on rational choice theory.
> While most conventional economic theories assume rational behavior on the part of consumers and investors, behavioral finance is a field of study that substitutes the idea of “normal” people for perfectly rational ones. It allows for issues of psychology and emotion to enter the equation, understanding that these factors alter the actions of investors, and can lead to decisions that may not appear to be entirely rational or logical in nature. This can include making decisions based primarily on emotion, such as investing in a company for which the investor has positive feelings, even if financial models suggest the investment is not wise.
Behavioral finance
Bounded rationality > Relationship to behavioral economics
Perfectly rational decisions can be and are made without perfect information; bounded by the information available at the time. If we all had perfect information, there would be no entropy and no advantage; just lag and delay between credible reports and order entry.
Information asymmetry
Heed these words wisely: What foolish games! Always breaking my heart....
>.
Which, I think, brings me to equitable availability of maximum superalgo efficiency and limits of real value creation in capital and commodities markets; which'll have to be a topic for a different day.
Show HN: React-Schemaorg: Strongly-Typed Schema.org JSON-LD for React
I have a slightly longer post describing this work and the reasoning behind it on dev.to[1].
[1]:...
Is there a good way to generate JSONschema and thus forms from schema.org RDFS classes and (nested, repeatable) properties?
By JSONschema do you mean [this standard]()? I don't know of a tool that does that yet, the JSON Schema is general enough with allOf/anyOf that it should express schema.org schema as well.
Depends on the purpose here. With this, my goal was to speed up the Write-Update-Debug development loop. Depends on the use case, simply using Google's Structured Data Testing Tool [1] might be a better way to verify schema than JSON-schema?
[1]:
There are a number of tools for generating forms and requisite client and serverside data validations from JSONschema; but I'm not aware of any for RDFS (and thus the schema.org schema [1]). A different use case, for certain.
Definitely a cool area for exploration. I'm not aware of JSON Schema generators from RDFS either.
It should be possible to model the basics (nested structure, repeated structure, defining the "domain" and "range" of a value).
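As a toy sketch of those basics, here's what a mapping could look like. Everything here is illustrative: real schema.org ranges are classes rather than JSON primitives, and the tuple representation of RDFS properties is invented for the example:

```python
def rdfs_to_json_schema(class_name, properties):
    """Toy mapping from RDFS-style (property, range, repeatable) triples
    to a JSON Schema object. `range_map` grossly simplifies schema.org's
    datatypes; real ranges are classes and would recurse."""
    range_map = {
        "Text": {"type": "string"},
        "Number": {"type": "number"},
        "Date": {"type": "string", "format": "date"},
    }
    props = {}
    for name, rng, repeatable in properties:
        schema = range_map.get(rng, {"type": "object"})
        # schema.org properties are repeatable, hence the array wrapping.
        props[name] = {"type": "array", "items": schema} if repeatable else schema
    return {"title": class_name, "type": "object", "properties": props}
```

A real generator would also have to handle multiple ranges per property (anyOf) and subclass inheritance, which is where it gets interesting.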
Schema.org definitions however have no conception of "required" values[], however, so some of the cool form validation we see in some of these tools might not apply here.
[*] _Consumers_ of Schema.org JSON-LD or microdata, however, might define their own data requirements. E.g. Google has some concept of required fields, which you can see when using the Structured Data Testing Tool.
Consumer Protection Bureau Aims to Roll Back Rules for Payday Lending
From the article:
>.
390%
From... :
> TARP recovered funds totalling $441.7 billion from $426.4 billion invested, earning a $15.3 billion profit or an annualized rate of return of 0.6% and perhaps a loss when adjusted for inflation.[2][3]
0.6%
Is your point that the US government should get into the payday lending business?
They are ultra short term loans, so the rate is going to be outrageous when expressed as an APR...
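The article's 390% figure is just fee annualization. With a typical $15 fee per $100 borrowed for two weeks (illustrative but commonly cited terms):

```python
def simple_apr(fee, principal, term_days):
    """Annualize a flat fee as a simple (non-compounding) APR."""
    return (fee / principal) * (365 / term_days)

# simple_apr(15, 100, 14) -> ~3.91, i.e. roughly 390% APR
```

Nobody holds the loan for a year at that rate; the problem is that many borrowers roll it over repeatedly, which is when the annualized number starts to describe reality.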
Lectures in Quantitative Economics as Python and Julia Notebooks
It's amazing how we are watching use cases for notebooks and spreadsheets converging. I wonder what the killer feature will be to bring a bigger chunk of the Excel world into a programmatic mindset... Or alternatively, whether we will see notebook UIs embedded in Excel in the future in place of e.g. VBA.
That’s not a bad idea. Spreadsheets are pure functional languages that use literal spaces instead of namespaces.
Notebooks are cells of logic. You could conceivably change the idea of notebook cells to be an instance of a function that points to raw data and returns raw data.
Perhaps this is just Alteryx, though.
This is brilliant.
I'm picturing the ability to write a Python function with the parameters being just like the parameters in an Excel function. You can drag the cell and have it duplicated throughout a row, updating the parameters to correspond to the rows next to it.
It would exponentially expand the power of Excel. I wouldn't be limited to horribly unmaintainable little Excel functions.
VBA can't be used to do that, can it? As far as I understand (and I haven't investigated VBA too much) VBA works on entire spreadsheets.
Essentially, replace the excel formula `=B3-B4` with a Python function `subtract(b3, b4)` where Subtract is defined somewhere more conveniently (in a worksheet wide function definition list?).
This would require a reactive recomputing of cells to be anything like a spreadsheet.

> Essentially, replace the excel formula `=B3-B4` with a Python function `subtract(b3, b4)`
as of now jupyter/ipython would not recompute `subtract(b3, b4)` if you change b3 or b4, this has positive and negative (reliance on hidden state and order of execution) effects.
I too would really like something like this, but I think it is pretty far away from where jupiter is now.
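A minimal, stdlib-only sketch of the reactive recomputation being discussed (the `Cell` class and names are hypothetical, not from any notebook project):

```python
class Cell:
    """A spreadsheet-style cell: holds a value, or a formula over other cells."""
    def __init__(self, value=None, formula=None, deps=()):
        self.formula = formula      # callable over dependency values, or None
        self.deps = list(deps)      # cells this cell reads from
        self.dependents = []        # cells that read from this cell
        for d in self.deps:
            d.dependents.append(self)
        self._value = value
        if formula is not None:
            self._recompute()

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        self._value = v
        self._propagate()

    def _recompute(self):
        self._value = self.formula(*(d.value for d in self.deps))

    def _propagate(self):
        # Naive push-based update; a real engine would order updates topologically
        for c in self.dependents:
            c._recompute()
            c._propagate()

# =B3-B4 as a reactive cell:
b3 = Cell(10)
b4 = Cell(4)
diff = Cell(formula=lambda a, b: a - b, deps=(b3, b4))
print(diff.value)   # 6
b3.value = 20       # changing an input recomputes the dependent cell
print(diff.value)   # 16
```

This is exactly the hidden-state problem mentioned above: without dependency tracking like this, a notebook cell silently keeps its stale result.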
You can build something like this with Jupyter today.
> Traitlets is a framework that lets Python classes have attributes with type checking, dynamically calculated default values, and ‘on change’ callbacks.
> Traitlet events. Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the observe method of the widget can be used to register a callback...
You can definitely build interactive notebooks with Jupyter Notebook and JupyterLab (and ipywidgets or Altair or HoloViews and Bokeh or Plotly for interactive data visualization).
> Qgrid is a Jupyter notebook widget which uses SlickGrid to render pandas DataFrames within a Jupyter notebook. This allows you to explore your DataFrames with intuitive scrolling, sorting, and filtering controls, as well as edit your DataFrames by double clicking cells.
Qgrid's API includes event handler registration:
> neuron is a robust application that seamlessly combines the power of Visual Studio Code with the interactivity of Jupyter Notebook....
"Excel team considering Python as scripting language: asking for feedback" (2017)
OpenOffice Calc ships with Python 2.7 support:
Procedural scripts written in a general purpose language with named variables (with no UI input except for chart design and persisted parameter changes) are reproducible.
What's a good way to review all of the formulas and VBA and/or Python and data ETL in a spreadsheet?
Is there a way to record a reproducible data transformation script from a sequence of GUI interactions in e.g. OpenRefine or similar?
OpenRefine/OpenRefine/wiki/Jupyter
"Within the Python context, a Python OpenRefine client allows a user to script interactions within a Jupyter notebook against an OpenRefine application instance, essentially as a headless service (although workflows are possible where both notebook-scripted and live interactions take place)."
Are there data wrangling workflows that are supported by OpenRefine but not Pandas, Dask, or Vaex?
This is interesting; I need to have a closer look. Possibly Refine can be more efficient? But I haven't used it enough to know, just played around with it a bit. Didn't realise you could combine it with Jupyter.
There are undergraduate and graduate courses in each language:
Python version:
Julia version:
Does anyone else find it strange that there is no real-world data in these notebooks? It's all simulations or abstract problems.
This gives me the sense, personally, that economists aren't interested in making accurate predictions about the world. Other fields would, I think, test their theories against observations.
pandas-datareader can pull data from e.g. FRED, Eurostat, Quandl, World Bank:...
pandaSDMX can pull SDMX data from e.g. ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank; with requests-cache for caching data requests:...
The scikit-learn estimator interface includes a .score() method. "3.3. Model evaluation: quantifying the quality of predictions"...
statsmodels also has various functions for statistically testing models:
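As a quick illustration of the estimator `.score()` interface mentioned above (synthetic noiseless data, assuming scikit-learn is installed):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data with an exact linear relationship: y = 3x + 1
X = np.arange(20, dtype=float).reshape(-1, 1)
y = 3 * X.ravel() + 1

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
# For regressors, .score() returns R^2; ~1.0 here since the data is noiseless
print(model.score(X_test, y_test))
```

Against real economic data (e.g. pulled via pandas-datareader), the same one-line `.score()` call quantifies how well a model's predictions hold up out of sample.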
"latex2sympy parses LaTeX math expressions and converts it into the equivalent SymPy form" and is now merged into SymPy master and callable with sympy.parsing.latex.parse_latex(). It requires antlr-python-runtime to be installed.
IDK what Julia has for economic data retrieval and model scoring / cost functions?
If Software Is Funded from a Public Source, Its Code Should Be Open Source
From the US Digital Services Playbook [1]:
> PLAY 13
> Default to open
> When we collaborate in the open and publish our data publicly, we can improve Government together. By building services more openly and publishing open data, we simplify the public’s access to government services and information, allow the public to contribute easily, and enable reuse by entrepreneurs, nonprofits, other agencies, and the public.
> Checklist
> - Offer users a mechanism to report bugs and issues, and be responsive to these reports
> [...]
> - Ensure that we maintain contractual rights to all custom software developed by third parties in a manner that is publishable and reusable at no cost
> [...]
> - When appropriate, publish source code of projects or components online
> [...]
> Key Questions
> [...]
> - If the codebase has not been released under an open source license, explain why.
> - What components are made available to the public as open source?
> [...]
[1]
Apache Arrow 0.12.0
>.
Statement on Status of the Consolidated Audit Trail (2018)
> Put simply, the CAT is intended to enable regulators to oversee the securities markets on a consolidated basis—and in so doing, better protect these markets and investors.
U.S. Federal District Court Declared Bitcoin as Legal Money
"Application of FinCEN's Regulations to Persons Administering, Exchanging, or Using Virtual Currencies" (2013)...
"Legality of bitcoin by country or territory"...
"Know your customer"
"Anti-money-laundering measures by region"...
"Anti-money-laundering measures by region > United States"
Post Quantum Crypto Standardization Process – Second Round Candidates Announced
>,”
Links to the 17 public-key encryption and key-establishment algorithms and 9 digital signature algorithms are here: "Round 2 Submissions"...
"Quantum Algorithm Zoo" has moved to .
Ask HN: How do you evaluate security of OSS before importing?
What tools can I use to evaluate the security posture of an OSS project before I approve its usage with high confidence?
Oddly, whether a project has at least one CVE reported could be interpreted in favor of the project.
Do they have a security disclosure policy? A dedicated security mailing list?
Do they pay bounties or participate in e.g Pwn2own?
Do they cryptographically sign releases?
Do they cryptographically sign VCS tags (~releases)? commits? `git tag -s` / `git commit/merge -S`
Downstream packagers do sometimes/often apply additional patches and then sign their release with the repo (and thus system global) GPG key.
Whether they require "Signed-off-by" may indicate that the project has mature controls and possibly a formal code review process requirement. (Look for "Signed-off-by:" in the release branch: `git commit/merge -s/--signoff`.)
How have they integrated security review into their [iterative] release workflow?
Is the software formally verified? Are parts of the software implementation or spec formally verified?
Does the system trust the channel? The host? Is it a 'trustless' system?
What are the single points of failure?
How is logging configured? To syslog?
Do they run the app as root in a Docker container? Does it require privileged containers?
If it has to run as root, does it drop privileges at startup?
Does the package have an SELinux or AppArmor policy? (Or does it say, e.g., "just set SELinux to permissive mode"?)
Is there someone you can pay to support the software in an enterprise environment? Open or closed, such contacts basically never accept liability; but if there is an SLA, do you get a pro-rated bill?
As far as indicators of actual software quality:
How much test coverage is there? Line coverage or statement coverage?
Do they run static analysis tools for all pull requests and releases? Dynamic analysis? Fuzzing?
Of course, closed or open source projects may do none or all of these and still be totally secure or insecure.
This is a pretty extensive list. Thanks for sharing!
Ask HN: How can I use my programming skills to support nonprofit organizations?
Lately I've been thinking about doing programming for nonprofits, both because I want to help out with what I'm good at and because I want to hone my skills and potentially get some open source credit.
So far I've had a hard time finding nonprofit projects where I can just pick up something and start programming. I know about freecodecamp.org, but they force you to go through their courses, and as I already have multiple years of experience as a developer, I feel like that would be a waste of time.
Isn't there a way to contribute to nonprofit organization in a more direct and simple manner like how you would contribute to an open source project on GitHub?
There are lots of project management systems with issue tracking and kanban boards with swimlanes. Because it's unreasonable to expect all volunteers to have a GH account or even understand what GH is for, support for external identity management and SSO may be essential to getting people to actually log in and change their password regularly.
Saddling a nonprofit with custom-built software that has no other maintainers is not what they need. Build (and pay for development, maintenance, timely security upgrades, and security review) or Buy (where is our data? who backs it up? how much does it cost for a month or a few years? Is it open source with a hosted option, so that we can pay a developer to add or fix what we need?)
"Solutions architect" may be a more helpful objective title for what's needed.
What are their needs? Marketing, accounting, operations, HR
Marketing: web site, maps service, directions, active social media presence that speaks to their defined audience
Accounting: Revenue and expenses, payroll/benefits/HR, projections, "How can we afford to do more?", handle donations and send receipts for tax purposes, reports to e.g. and infographics for wealth-savvy donors
Operations: Asset inventory, project management, volunteer scheduling
HR: payroll, benefits, volunteer scheduling, training, turnover, retaining selfless and enlightenedly-self-interested volunteers
Create a spreadsheet. Rows: needs/features/business processes. Columns: essential, nice to have, software products and services.
Create another spreadsheet. Rows: APIs. Columns: APIs.
Training: what are the [information systems] processes/workflows/checklists? How can I suggest a change? How do we reach consensus that there's a better way to do this? Is there a wiki? Is there a Q&A system?
"How much did you sink on that? Probably seemed like the best option according to the information available at the time, huh? Do you have a formal systems acquisition process? Who votes according to what type of prepared analysis? How much would it cost to switch? What do we need to do to ETL (extract, transform, and load) into a newer better system?"
When estimating TCO for a nonprofit, turnover is a very real consideration. People move. Chances are, as with most organizations TBH, there's a patchwork of partially-integrated and maybe-integrable systems that it may or may not be more cost-effective and maintainable to replace with a cloud ERP specifically designed for nonprofits.
Who has access rights to manually update which parts of the website? How can we include dynamic ([other] database-backed) content in our website? What is a CMS? What is an ERP? What is a CRM? Are these customers, constituents, or both? When did we last speak with those guys? How can people share our asks with social media networks?
If you're not willing or able to make a long-term commitment, the more responsible thing to do is probably to disclose any conflicts of interest and recommend a SaaS solution hosted in a compliant data center.
q="nonprofit erp"
q="nonprofit crm"
q="nonprofit cms" + donation campaign visibility
What time of day are social media posts most likely to get maximum engagement from which segments of our audience? What is our ~ARPU "average revenue per user/follower"?
... As a volunteer and not an FTE, it may be a worthwhile exercise to build a prototype of the new functionality with whatever tools you happen to be familiar with, with the expectation that they'll figure out a way to accomplish the same objectives with their existing systems. If that's not possible, there may be a business opportunity: are there other organizations with the same need? Is there a sustainable market for such a solution? You may be building to be acquired.
Ask HN: Steps to forming a company?
Hey guys, I'm leaving my firm very shortly to form a startup.
Does anyone have a checklist of proper ways to do things?
I.e. 1. Form a Delaware C-corp with Clerky 2. Hire payroll company X 3. Use this company for patents.
any info there?
From "Ask HN: What are your favorite entrepreneurship resources" :
> USA Small Business Administration: "10 steps to start your business."-...
> "Startup Incorporation Checklist: How to bootstrap a Delaware C-corp (or S-corp) with employee(s) in California"
> FounderKit has reviews for Products, Services, and Software for founders:
... I've heard good things about Gusto for payroll, HR, and benefits through Guideline:
A Self-Learning, Modern Computer Science Curriculum
Outstanding resource.
jwasham/coding-interview-university also links to a number of other helpful OER resources:
MVP Spec
> The criticism of the MVP approach has led to several new approaches, e.g. the Minimum Viable Experiment MVE[19] or the Minimum Awesome Product MAP[20]....
Can we merge Certificate Transparency with blockchain?
From "REMME – A blockchain-based protocol for issuing X.509 client certificates" :
""". """
TLA references "Certificate Transparency Using Blockchain" (2018): q="Certificate+Transparen...
Thanks for the references! The main issue isn't the support and maintenance of such a distributed network, but its integration with current solutions and avoiding centralized middleware services that will weaken the schema described in the documents.
> The main issue isn't the support and maintenance of such a distributed network,
Running a permissioned blockchain is nontrivial. "Just fork XYZ and call it a day" doesn't quite describe the amount of work involved. There's read latency at scale. There's merging things to maintain vendor strings.
> but its integration with current solutions
- Verify issuee identity
- Update (domain/CN/subjectAltName, date) index
- Update cached cert and CRL bundles
- Propagate changes to all clients
> and avoiding centralized middleware services that will weaken the schema described in the documents.
Eventually, a CDN will look desirable. IPFS may fit the bill, IDK?
> Running a permissioned blockchain is nontrivial.
You are right. It needs a relevant BFT protocol, a lot of work with the masternode community, and a smart economic system inside. You can look at an example of such a protocol:
google/trillian
>.
Why Don't People Use Formal Methods?
Which universities teach formal methods?
- q=formal+verification
- q=formal-methods
Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?
Is there a certification for formal methods? Something like for engineer-status in other industries?
What are some examples of tools and [OER] resources for teaching and learning formal methods?
- JsCoq
- Jupyter kernel for Coq + nbgrader
- "Inconsistencies, rolling back edits, and keeping track of the document's global state" (jsCoq + hott [+ IJavascript Jupyter kernel], STLC: Simply-Typed Lambda Calculus)
- TDD tests that run FV tools on the spec and the implementation
What are some examples of open source tools for formal verification (that can be integrated with CI to verify the spec AND the implementation)?
What are some examples of formally-proven open source projects?
- "Quark : A Web Browser with a Formally Verified Kernel" (2012) (Coq, Haskell)
What are some examples of projects using narrow and strong AI to generate perfectly verified software from bad specs that make the customers and stakeholders happy?
From reading through the comments here, people don't use formal methods because: cost-prohibitive, inflexible, perceived as incompatible with agile / iterative methods that are more likely to keep customers who don't know what formal methods are happy, lack of industry-appropriate regulation, and the cognitive burden of often-incompatible shorthand notations.
Almost a University of New South Wales (Sydney, Australia) alum, and it's the biggest difference between the Software Engineering course and Comp Sci. There are two mandatory formal methods courses and more optional ones to take. They are really hard, and that means a lot of the SENG cohort ends up graduating as COMPSCI, since they can't hack it and don't want to repeat formal methods.
Steps to a clean dataset with Pandas
To add to the three points in the article:
Data quality
Imputation
Feature selection
datacleaner can drop NaNs, do imputation with "the mode (for categorical variables) or median (for continuous variables) on a column-by-column basis", and encode "non-numerical variables (e.g., categorical variables with strings) with numerical equivalents" with Pandas DataFrames and scikit-learn.
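The same column-by-column imputation (median for continuous, mode for categorical) plus numerical encoding can be sketched with plain pandas (toy data, assuming pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({
    "age":   [25, None, 31, 40, None],               # continuous -> median
    "color": ["red", "blue", None, "blue", "blue"],  # categorical -> mode
})

# Impute missing values column by column
df["age"] = df["age"].fillna(df["age"].median())
df["color"] = df["color"].fillna(df["color"].mode()[0])

# Encode the categorical column with numerical equivalents
df["color_code"] = df["color"].astype("category").cat.codes
print(df)
```

Libraries like datacleaner wrap exactly this kind of boilerplate so it's applied consistently across every column.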
sklearn-pandas "[maps] DataFrame columns to transformations, which are later recombined into features", and provides "A couple of special transformers that work well with pandas inputs: CategoricalImputer and FunctionTransformer"
Featuretools
> Featuretools is a python library for automated feature engineering. [using DFS: Deep Feature Synthesis]
auto-sklearn does feature selection (with e.g. PCA) in a "preprocessing" step; as well as "One-Hot encoding of categorical features, imputation of missing values and the normalization of features or samples"...
auto_ml uses "Deep Learning [with Keras and TensorFlow] to learn features for us, and Gradient Boosting [with XGBoost] to turn those features into accurate predictions"...
Reahl – A Python-only web framework
I feel bad about projects like these, I really do. I'm sure a lot of effort has been put in, and it probably satisfies what OP wants to do. But in the end, no one serious would ever consider using it.
>But in the end, no one serious would ever consider using it.
Why not?
I would say it is the other way around. For any web application project of decent size there is almost always a transpilation step that converts whatever programming language your source code is in, into JavaScript (usually ES5). We also had very successful and widely used projects like GWT for years (GWT was first released in 2006!!!).
Before GWT, there was the Wt framework (C++), and then JWt (Java), which handle both the server and client sides (with widgets in a tree).
Wt:
JWt:
GWT:
Now we have Babel, ES YYYY, and faster browser release cycles.
Ask HN: How can you save money while living on poverty level?
I freelance remotely, making roughly $1200 a month as a programmer because I only work 10 hours maximum each week (limited by my contract). I share the apartment with my mom, and it's a Section 8, so our rent contributions are based on the income we make. My contribution towards rent is $400 a month.
Although I make more money than my mom (she's of retirement age and only works 1-2 days a week), while I'm looking for more work I want to figure out how to move out and live more independently on only $1200 a month.
I need to live frugally and want to know what I can cut more easily. I own a used car (already paid in full), and pay my own car insurance, electricity, phone and internet. After all that I have about $400 left each month which can be eaten up by going out or some emergency funds.
More recently I had to pay for my new city parking sticker so that's $100 more in expenses this particular month. I would be satisfied just living in a far off town paying the same $400 a month, I feel my dollars would stretch further since I now get 100% more privacy for the same price.
On top of that, this job is a contract job, so I need to put money aside to pay my own taxes. This $1200 is basically living at the poverty level. Any ideas to make saving work? Is it really possible for people in the US to save while living in poverty?
That's not a living wage (or a full time job). There are lots of job search sites.
Spending some time on a good resume / CV / portfolio would probably be a good investment with positive ROI.
Is there a nonprofit that you could volunteer with to increase your hireability during the other 158 hours of the week?
Or an online course with a credential that may or may not have positive ROI as a resume item?
Is there a code school in your city with a "you don't pay unless you land a full time job with a living wage and benefits" guarantee?
What is your strategy for business and career networking?
From :
> Personal Finance (budgets, interest, growth, inflation, retirement)
Personal Finance
Khan Academy > College, careers, and more > Personal finance...
"CS 007: Personal Finance For Engineers"
A DNS hijacking wave is targeting companies at an almost unprecedented scale
Source link:-...
> The National Cybersecurity and Communications Integration Center issued a statement [1] that encouraged administrators to read the FireEye report. [2]
[1]...
[2]-...
Show HN: Generate dank mnemonic seed phrases in the terminal
From :
> The first four words will be a randomly generated Doge-like sentence.
The seed phrases are fully valid checksummed BIP39 seeds. They can be used with any cryptocurrency and can be imported into any BIP39 compliant wallet.
> […] However there is a slight reduction in entropy due to the introduction of the doge-isms. A doge seed has about 19.415 fewer bits of entropy than a standard BIP39 seed of equivalent length.
Can you sign a quantum state?
> Abstract.].
"Quantum signcryption"
Lattice Attacks Against Weak ECDSA Signatures in Cryptocurrencies [pdf]
From the paper:
> Abstract. In this paper, we compute hundreds of Bitcoin private keys and dozens of Ethereum, Ripple, SSH, and HTTPS private keys by carrying out cryptanalytic attacks against digital signatures contained in public blockchains and Internet-wide scans.
> Countermeasures. All of the attacks we discuss in this paper can be prevented by using deterministic ECDSA nonce generation [29], which is already implemented in the default Bitcoin and Ethereum libraries.
REMME – A blockchain-based protocol for issuing X.509 client certificates
pity it's blockchain based
It's unclear to me why you would want a distributed PKI to authenticate a centralized app. Or maybe it's only for dapps?
California grid data is live – solar developers take note
> It looks like California is at least two generations of technology ahead of other states. Let’s hope the rest of us catch up, so that we have a grid that can make an asset out of every building, every battery, and every solar system.
+1. Are there any other states with similar grid data available for optimization; or any plans to require or voluntarily offer such a useful capability?
Why attend predatory colleges in the US?
> Why would people attend predatory colleges?
Why would people make an investment with insufficient ROI (Return on Investment)?
Insufficient information.
College Scorecard [1] is a database with a web interface for finding and comparing schools according to a number of objective criteria. CollegeScorecard launched in 2015. It lists "Average Annual Cost", "Graduation Rate", and "Salary After Attending" on the search results pages. When you review a detail page for an institution, there are many additional statistics; things like: "Typical Total Debt After Graduation" and "Typical Monthly Loan Payment".
The raw data behind CollegeScorecard can be downloaded from [2]. The "data_dictionary" tab of the "Data Dictionary" spreadsheet describes the data schema.
[1]
[2]
Khan Academy > "College, careers, and more" [3] may be a helpful supplement to funding a full-time college admissions counselor in a secondary education institution.
[3]
(I haven't the time to earn 10 academia.stackexchange points in order to earn the prestigious opportunity to contribute this answer to such a forum with threaded comments. In the academic journal system, journals sell academics' work (i.e. schema.org/ScholarlyArticle PDFs, mobile-compatible responsive HTML 5, RDFa, JSON-LD structured data) and keep all of the revenue).
"Because I need money for school! Next question. CPU: College Textbook costs and CPI: All over time t?!"
Ask HN: Data analysis workflow?
What kind of workflow do you employ when designing a data-flow or analyzing data?
Let me give a concrete example. For the past year, I have been selling stuff on the interwebs through two payment processors one of them being PayPal.
The selling process was put together with a bunch of SaaS hooking everything together through webhooks and notifications.
Now I need to step up that control and produce a proper flow to handle sign-up, subscription, and payment.
Before doing that I'm analyzing and trying to conciliate all transactions to make sure the books are OK and nothing went unseen. There lies the problem. I have data coming from different sources such as databases, excel files, CSV exports and some JSON files.
At first, I started dealing with it by having all the data in CSV files and trying to make sense of them using code and running queries within the code.
As I found holes in the data I had to dig up more data from different sources and it became a pain to continue with code. I now imported everything into Postgres and have been "debugging" with SQL.
As I advanced through the process I had to generate a lot of routines to collect and match data. I also have to keep all the data files around and organized which is very hard to do because I'm all over the place trying to find where the problem is.
How do you handle it? What kind of workflow? Any best practices or recommendations from people who do this for a living?
Pachyderm may be basically what you're looking for. It does data version control with/for language-agnostic pipelines that don't need to always redo the ETL phase.
Dask-ML works with {scikit-learn, xgboost, tensorflow, TPOT,}. ETL is your responsibility. Loading things into parquet format affords a lot of flexibility in terms of (non-SQL) datastores or just efficiently packed files on disk that need to be paged into/over in RAM.
The sklearn.pipeline.Pipeline API ({fit(), transform(), predict(), score(),}...) can also minimize ad-hoc boilerplate ETL / feature engineering:
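For instance, a Pipeline can chain imputation, scaling, and a model so the same ETL runs identically at fit and predict time (toy data and step names are made up; assumes scikit-learn is installed):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data with missing values; two well-separated classes
X = np.array([[1.0, 2.0], [2.0, np.nan], [np.nan, 1.0],
              [8.0, 9.0], [9.0, np.nan], [np.nan, 8.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Each step exposes fit/transform; the final estimator adds predict/score
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipe.fit(X, y)
print(pipe.predict(np.array([[1.0, 1.0], [9.0, 9.0]])))
```

Because the imputer and scaler are fitted inside the pipeline, there's no hand-rolled "remember to apply the same transform to new data" step to get wrong.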
> Featuretools is a framework to perform automated feature engineering. It excels at transforming temporal and relational datasets into feature matrices for machine learning.
The PLoS 10 Simple Rules papers distill a number of best practices:
"Ten Simple Rules for Reproducible Computational Research"...
“Ten Simple Rules for Creating a Good Data Management Plan”...
In terms of the scientific method, a null hypothesis like "there is no significant relation between the [independent and dependent] variables" may be dangerously unprofessional p-hacking and data dredging; and may result in an overfit model that seems to predict or classify the training and test data (when split with e.g. sklearn.model_selection.train_test_split and a given random seed).
One of these days (in the happy new year!) I'll get around to updating these notes with the aforementioned tools and docs:...
IDK what has specifically in terms of analysis workflow? Their docker containers have very many tools configured in a reproducible way:...
Ask HN: What is your favorite open-source job scheduler
Too many business scripts rely on cron(8) to run. Classic cron cannot handle task duration, failures (only via email), same-task pile-up, linting, ...
So, what is your favorite open-source, easy-to-bundle/deploy job scheduler that is easy to use, has logging capacity and config-file linting, and can handle common use cases: kill if longer than X, limit resources, prevent launching when the previous run is not finished, ...
systemd-crontab-generator may be usable for something like linting classic crontabs?
Systemd/Timers as a cron replacement:...
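As an illustration, a hypothetical timer/service pair covers several of the asks above: journald logging for free, a kill-if-longer-than limit, resource limits, and no same-task pile-up (a timer will not start its service again while a previous run is still active). Unit names and paths here are made up:

```ini
# /etc/systemd/system/backup.service (hypothetical)
[Unit]
Description=Nightly backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
# Kill the job if it runs longer than 30 minutes
RuntimeMaxSec=30min
# Cap memory via cgroups
MemoryMax=512M

# /etc/systemd/system/backup.timer (hypothetical)
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

`systemd-analyze verify backup.service` gives the config-file linting that classic crontabs lack.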
Celery supports periodic tasks:
>)....
How to Version-Control Jupyter Notebooks
Mentioned in the article: manual nbconvert, nbdime, ReviewNB (currently GitHub only), jupytext.
Jupytext includes a bit of YAML in the e.g. Python/R/Julia/Markdown header.
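For illustration, the commented YAML header Jupytext writes at the top of a paired `.py` script looks roughly like this (format and kernel values are examples):

```python
# ---
# jupyter:
#   jupytext:
#     formats: ipynb,py:percent
#     text_representation:
#       extension: .py
#       format_name: percent
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
```

Because the header is just comments, the paired script stays runnable as plain Python and diffs cleanly under version control.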
Huge +1s for both nbdime and jupytext. Excellent tools both.
Really enjoying jupytext. I do a bunch of my training from Jupyter and it has made my workflow better.
Teaching and Learning with Jupyter (A book by Jupyter for Education)
Still under heavy development, but already useful. And we accept pull requests to add items or fix issues. Join us!
A small suggestion.
Since Jupyter Notebook can be used for both programming and documentation, why don't you use Jupyter Notebook itself as the source of your document?
It is actually very easy to set up a Jupyter-Notebook-driven .ipynb -> .html publishing pipeline with GitHub + a local Jupyter instance.
Here is a toy example (for my own github page)
The convert script is here (also a Jupyter Notebook)...
You get the idea.
BTW, to make the system fully replicable, I use docker for the local Jupyter instance, which can be launched via the Makefile...
Here is the custom Dockerfile:...
Margin Notes: Automatic code documentation with recorded examples from runtime
Slightly related are Go examples - they're tests, and documentation at the same time. It'd be nice if someone hooks in a listener to automatically collect examples tho
And Elixir's doctests...
And Python's Axe!
I mean doctest:
1. sys.settrace() for {call, return, exception} (c_call, c_return, and c_exception require sys.setprofile())
2. Serialize as/to doctests. Is there a good way to serialize Python objects as Python code?
3. Add doctests to callables' docstrings with AST
Mutation testing tools may have already implemented serialization to doctests but IDK about docstring modification.
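A stdlib-only sketch of steps 1–2 above, recording calls to a single hypothetical function `add` and emitting doctest lines (a real tool would also need step 3's AST rewriting, and a robust answer to "serialize Python objects as Python code"):

```python
import sys

examples = []   # recorded (name, args, result) triples
_pending = {}   # frame -> arguments captured at call time

def tracer(frame, event, arg):
    name = frame.f_code.co_name
    if name != "add":              # only record the function we care about
        return tracer
    if event == "call":
        _pending[frame] = dict(frame.f_locals)
    elif event == "return":        # arg is the return value here
        examples.append((name, _pending.pop(frame, {}), arg))
    return tracer

def add(a, b):
    return a + b

sys.settrace(tracer)
add(2, 3)
add("doc", "test")
sys.settrace(None)

# Step 2: serialize the recorded runtime examples as doctest lines,
# using repr() as a crude "object -> Python code" serializer
for name, args, result in examples:
    argstr = ", ".join(repr(v) for v in args.values())
    print(f">>> {name}({argstr})\n{result!r}")
```

repr() only round-trips for simple values; that's exactly the hard part of "is there a good way to serialize Python objects as Python code?".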
... MOSES is an evolutionary algorithm that mutates and simplifies a combo tree until it has built a function with less error for the given input/output pairs.
Time to break academic publishing's stranglehold on research
Add in that the quality of the system is massively broken... peer review is about as accurate as the flip of a coin. It does not promote ground-breaking or novel research; it barely (arguably doesn't) even contribute to quality research. I had a colleague recently be told by a journal editor, 'we don't publish critiques from junior scholars.' So much for the notion of peer review being entirely driven by the quality of the work.
As one of those academics...I keep getting requests to peer review, I respectfully make clear I don't review for non open source journals anymore. Same with publishing. I'm not tenure-track so am not primarily evaluated based on output.
Publishing is broken, but it is really just part of the broader and even more broken nature of academic research.
What would be a good alternative to peer-review though? Genuinely interested.
Open publishing and commenting would be a good start---having a dialog, like is done at conferences. Older academic journal articles (pre 1900) read much more like discussions than like the hundred dollar word vomits of modern academic publishing. The broken incentives are at the core of this rotten fruit, though. Just making journals open isn't enough.
We have (almost) open publishing and open commenting. Did that improve anything?
There's open commenting? I've never seen the back and forth of the review process be published. It should be published.

Hypothesis supports threaded comments on anything with a URI, including PDFs and specific sentences or figures thereof. All you have to do is register an account and install the browser extension or include the JS in the HTML.
It's based on open standards and an open platform.
W3C Web Annotations:
About Hypothesis:
Ask HN: How can I learn to read mathematical notation?
There are a lot of fields I'm interested in, such as machine learning, but I struggle to understand how they work as most resources I come across are full of complex mathematical notation that I never learned how to read in school or University.
How do you learn to read this stuff? I'm frequently stumped by an academic paper or book that I just can't understand due to mathematical notation that I simply cannot read.
These might help a bit.
But as someone with similar problems, I'm beginning to think there's no real solution other than thousands of hours of studying.
There are a number of Wikipedia pages which catalog various uses of symbols for various disciplines:
Outline_of_mathematics#Mathematical_notation...
List_of_mathematical_symbols
List_of_mathematical_symbols_by_subject...
Greek_letters_used_in_mathematics,_science,_and_engineering...
Latin_letters_used_in_mathematics...
For learning the names of symbols (and maybe also their meaning as conventially utilized in a particular field at a particular time in history), spaced repetition with flashcards with a tool like Anki may be helpful.
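For building such flashcards, the standard library can already name most mathematical symbols, which helps when turning unfamiliar notation into Anki cards:

```python
import unicodedata

# Look up the official Unicode names of math symbols encountered in papers.
for ch in "∀∃∈∑∂":
    print(ch, unicodedata.name(ch))
# ∀ FOR ALL
# ∃ THERE EXISTS
# ∈ ELEMENT OF
# ∑ N-ARY SUMMATION
# ∂ PARTIAL DIFFERENTIAL
```

The Unicode name gives you a searchable term even when you don't know what a symbol is called.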
For typesetting, e.g. Jupyter Notebook uses MathJax to render LaTeX with JS.
latex2sympy may also be helpful for learning notation.
… data-science#mathematical-notation...
New law lets you defer capital gains taxes by investing in opportunity zones
I believe the way that the program works is you can defer taxes from your original capital gains (and the cost basis gets increased so your deferred taxes are less than you would pay otherwise), reinvest them in an "opportunity zone", and not pay capital gains on your investment on the opportunity zone, if you hold it long enough
e.g. Bob bought Apple stock for $50 a share back in the day, and sells it for $200/share. He defers his taxes until 2026. Instead of paying capital gains tax on $150/share, the cost basis is adjusted by 15% so in addition to benefiting from the time value of money, future Bob will only be taxed on $127.50 of capital gains. Bob can buy a house in an "opportunity zone" (from scrolling around the embedded map, there's plenty of million dollar+ houses in these areas. There's also lots of sports teams and stadiums in these areas, so maybe Bob buys an NFL team or a parking lot next to their stadium), rent it out for 10 years, sell it, and not have to pay any capital gains tax on the appreciation. Definitely not a bad deal for him!
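Working through Bob's numbers (illustrative only, not tax advice), the 15% basis step-up on the deferred gain comes out as:

```python
# Hypothetical Opportunity Zone basis step-up, using the Bob example above.
original_basis = 50.0   # $/share, what Bob paid
sale_price = 200.0      # $/share, what Bob sold for
gain = sale_price - original_basis  # 150.0 of deferred gain per share

step_up = 0.10 + 0.05   # 10% step-up at 5 years + 5% more at 7 years
taxable_gain = gain * (1 - step_up)
print(round(taxable_gain, 2))  # 127.5
```

So the deferred tax is eventually paid on $127.50/share rather than $150, plus the separate exclusion on the opportunity-zone investment's own appreciation after 10 years.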
Yes, it's more akin to a 1031 exchange, but opened up so that non-real-estate capital gains are eligible.
Is it just capital gains? Wondering if it applies to any other forms of active or passive income.
How are profits from these investments treated?
Can you "swap til you drop" like with a 1031 exchange?
> Is it just capital gains? Wondering if it applies to any other forms of active or passive income.
I would also like some information about this.
+1 for investing in distressed areas; self-nominated with intent or otherwise.
If it's capital gains only, -1 on requiring sale of capital assets in order to be sufficiently incentivized. (Because then the opportunity to tax-advantagedly invest in Opportunity Zones is denied to persons without assets to liquidate; i.e. unequal opportunity).
Q: "Why don't I get the same tax-advantage for investing in a/my opportunity zone community?"
A [AFAIU]: "Because you don't have capital gains; only regular income" (~="Because you're not an accredited investor")
How to Write a Technical Paper [pdf]
A few years ago I found a great version of a similar piece that proposed something like a 'sandwich' model for each section, and the work as a whole. A sandwich layer was a hook (something interesting), the meat (what you actually did), and a summary.
I failed to save it and I haven't been able to dig it up again, but I liked the idea. The paper was written in the style it described, as well.
The digraph presented in the OP really is a great approach, IMHO:
## Introduction
## Related Work, System Model, Problem Statement
## Your Solution
## Analysis
## Simulation, Experimentation
## Conclusion
... "Elements of the scientific method"...
no, it was on arxiv somewhere. It was written as a journal article.
edit: AH! It was linked upthread on bioRxiv.
JSON-LD 1.0: A JSON-Based Serialization for Linked Data
JSON-LD 1.1
"Changes since 1.0 Recommendation of 16 January 2014"
Jeff Hawkins Is Finally Ready to Explain His Brain Research
Cortical column:
> In the neocortex 6 layers can be recognized although many regions lack one or more layers, fewer layers are present in the archipallium and the paleopallium.
What this means in terms of optimal artificial neural network architecture and parameters will be interesting to learn about; in regards to logic, reasoning, and inference.
According to "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function"... , the human brain appears to be [at most] 11-dimensional (11D); in terms of algebraic topology
Relatedly,
"Study shows how memories ripple through the brain"...
> The [NeuroGrid].
Re: Topological graph theory [1], is it possible to embed a graph on a space filling curve [2] (such as a Hilbert R-tree [3])?
[1]
[2]
[3]
[4] (git packfiles)
Interstellar Visitor Found to Be Unlike a Comet or an Asteroid
It's too bad we're not ready to launch probes to visit and explore such transient objects at a moment's notice.
I think it would make sense to plan a probe mission where the probe would be put into storage to stand by for these kind of events. It would still be a challenge and might be impossible to find a launch window and reach the passing object in time, but it would be worth trying.
Just imagine how tragic it would be, if an ancient artifact of an alien civilization drifts by earth and we don't manage to have at least a look at it.
Wouldn’t such an object be emitting radio signals that earth could receive?
Not if it's something like another civilization's Tesla Roadster.
> Not if it's something like another civilization's Tesla Roadster.
'Oumuamua is red and headed toward Pegasus (the winged horse) after a very long journey that started a long time ago in spacetime. It is wildly tumbling off-kilter and potentially creating a magnetic field that would be useful for interplanetary space travel.
They're probably pointing us to somewhere else from somewhere else.
If this is any indication of the state of another civilization's advanced physics, and it missed us by a wide margin, they're probably laughing at our energy and water markets; and indicating that we should be focused on asteroid impact avoidance (and then we will really laugh about rockets and red electromagnetic kinetic energy machines and asteroid mining).
"Amateurs"
[We watch it fly by, heads all turning]
Maybe it would've been better to have put alone starman in the passenger seat or two starpeoples total?
Given the skull shape of October 2015 TB145 [1] (due to return in November 2018), maybe 'Oumuamua [2] is a pathology of Mars and an acknowledgement of our spacefaring intentions? Red, subsurface water, disrupted magnetic field.
[1]
[2]
In regards to a red, unshielded, earth vehicle floating in solar orbit with a suited anthropomorphic creature whose head is too big for the windshield:
"What happened here?"
"That's not a knife... This is a knife." -- Crocodile Dundee
Publishing more data behind our reporting
Publishing raw data itself is definitely a good start, but there also needs to be a push towards a standardized way of sharing data along with its lineage (dependent sources, experimental design/generation process, metadata, graph relationship of other uses, etc.).
> Publishing raw data itself is definitely a good start, but there also needs to be a push towards a standardized way of sharing data along with its lineage (dependent sources, experimental design/generation process, metadata, graph relationship of other uses, etc.).
Linked Data based on URIs is reusable. ( )
The Schema.org Health and Life Sciences extension is ahead of the game here, IMHO. MedicalObservationalStudy and MedicalTrial are subclasses of . {DoubleBlindedTrial, InternationalTrial, MultiCenterTrial, OpenTrial, PlaceboControlledTrial, RandomizedTrial, SingleBlindedTrial, SingleCenterTrial, and TripleBlindedTrial} are subclasses of schema.org/MedicalTrial.
A schema.org/MedicalScholarlyArticle (a subclass of ) can have a. is the inverse of .
More structured predicates which indicate the degree to which evidence supports/confirms or disproves current and other hypotheses (according to a particular Person or Persons on a given date and time; given a level of scrutiny of the given information) are needed.
In regards to epistemology, there was some work on Fact Checking ( e.g. ) in recent times. To quote myself here, from :
>.
"#LinkedReproducibility"; "#LinkedMetaAnalyses", "#StudyGraph"
CSV 1.1 – CSV Evolved (for Humans)
Well, if you want to improve tabular data formats:
1. Add a version identifier / content-type on the first line!
2. Create a formal grammar for this CSV format
3. Specify preferred character-encoding
4. Provide some tooling (validation, CSV 1.1 => HTML, CSV => Excel)
5. Add the option to specify column type (string, int, date)
6. Specify ISO-8601 as the preferred date format
7. Allow 'reheading' the columns in the file itself. This is useful in streaming data.
8. Specify the format of the newlines.
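Points 5 and 6 above could be prototyped with a second header row of types; a minimal sketch (the two-row header convention here is my own, not part of any CSV spec):

```python
import csv
import io
from datetime import date

# Hypothetical typed CSV: row 1 = column names, row 2 = column types,
# with ISO-8601 as the date format.
CASTS = {"string": str, "int": int, "date": date.fromisoformat}

raw = "name,count,when\nstring,int,date\nwidget,3,2018-09-30\n"
rows = list(csv.reader(io.StringIO(raw)))
names, types, data = rows[0], rows[1], rows[2:]
records = [
    {n: CASTS[t](v) for n, t, v in zip(names, types, row)}
    for row in data
]
print(records[0])  # {'name': 'widget', 'count': 3, 'when': datetime.date(2018, 9, 30)}
```

CSVW (below) solves the same problem by putting the schema in a separate JSON metadata file instead of in-band.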
CSVW: CSV on the Web
"CSV on the Web: A Primer"
"Model for Tabular Data and Metadata on the Web"
"Generating JSON from Tabular Data on the Web" (csv2json)
"Generating RDF from Tabular Data on the Web" (csv2rdf)
...
N. Allow authors to (1) specify how many header rows are metadata and (2) what each row is. For example: 7 metadata header rows: {column label, property URI [path], datatype URI, unit URI, accuracy, precision, significant figures}
With URIs, we can merge, join, and concatenate data (when e.g. study control URIs for e.g. single/double/triple blinding/masking indicate that the meets meta-analysis inclusion criteria).
"#LinkedReproducibility"; "#LinkedMetaAnalyses"
Ask HN: Which plants can be planted indoors and easily maintained?
Chlorophytum comosum (spider plants) are good air-filtering houseplants that are also easy to take starts of:
Houseplant:
Graduate Student Solves Quantum Verification Problem
"Classical Verification of Quantum Computations" Mahadev. (2018)...
The down side to wind power
.
I am confused: How does the warming work exactly and is this actually a global climate effect? Because this part of the article makes it sound to me as if it's just a very localised change of temperature caused by the exchange of different air layers, which can't be right? Because you couldn't really compare that to climate change on a global scale.
The example is clearly hypothetical only. We're never going to cover one third of the continental US with wind turbines.
The more important information to me is that neither wind nor solar has the power density that has been claimed.
For wind, we found that the average power density —.
Then you have the separate problem that the wind doesn't always blow and the sun doesn't always shine, so you need a huge storage infrastructure (batteries, presumably) alongside the wind and solar generating infrastructure.
IMO nuclear is the only realistic alternative to coal to provide reliable, zero-emission "base load" power generation. Wind and solar could make sense in some use cases but not in general.
> IMO nuclear is the only realistic alternative to coal to provide reliable, zero-emission "base load" power generation. Wind and solar could make sense in some use cases but not in general.
How much heat energy does a reactor with n meters of concrete around it, located on a water supply in order to use water in an open or closed cooling loop, protected with national security resources, waste into the environment?
I'd be interested to see which power sources the authors of this study would choose as a control for these otherwise sensational stats.
From :
> Canada (2030), France (2021), and the UK (2025) are all working to entirely phase out coal-fired power plants for very good reasons (such as neonatal health).
Would you burn a charcoal grill in an enclosed space like a garage? No.
Thermodynamics of Computation Wiki
"Quantum knowledge cools computers: New understanding of entropy" (2011)...
>.
"The thermodynamic meaning of negative entropy" (2011)
Landauer's principle:
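Landauer's principle puts a concrete floor on the energy cost of erasing information; a quick back-of-envelope at room temperature:

```python
import math

# Landauer bound: minimum energy to erase one bit is k_B * T * ln(2).
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K
E_min = k_B * T * math.log(2)
print(E_min)  # ≈ 2.87e-21 J per bit erased
```

Real hardware dissipates many orders of magnitude more than this per bit, which is part of why the thermodynamics-of-computation question is interesting.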
"Thin film converts heat from electronics into energy" (2018)-...
>).
"Pyroelectric energy conversion with large energy and power density in relaxor ferroelectric thin films" (2018)
Carnot heat engine > Carnot cycle, Carnot's theorem, "Real heat engines":
Carnot's theorem > Applicability to fuel cells and batteries:(thermodyna...
> Since fuel cells and batteries can generate useful power when all components of the system are at the same temperature [...], they are clearly not limited by Carnot's theorem, which states that no power can be generated when [...]. This is because Carnot's theorem applies to engines converting thermal energy to work, whereas fuel cells and batteries instead convert chemical energy to work.[6] Nevertheless, the second law of thermodynamics still provides restrictions on fuel cell and battery energy conversion
[deleted]
Is there enough heat energy from a datacenter to -- rather than heating oceans (which can result in tropical storms) -- turn a turbine (to convert heat energy back into electrical energy)?
Is there a statistic which captures the amount of heat energy discharged into ocean/river/lake water? "100% clean energy with PPAs (Power Purchase Agreements)" while bleeding energy into the oceans isn't quite representative of the total system.
"How to Reuse Waste Heat from Data Centers Intelligently" (2016)-...
> [Data center waste heat is typically between] 27° and 35°C (80-95°F), [while reuse applications often require] 55° to 70°C (130-160°F).
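A rough Carnot-limit estimate shows why such low-grade heat is hard to turn back into electricity with a turbine (the temperatures here are illustrative guesses):

```python
# Carnot upper bound on converting warm data-center exhaust back to work.
T_hot = 35 + 273.15   # K, warm exhaust air
T_cold = 15 + 273.15  # K, cooling water
eta_max = 1 - T_cold / T_hot
print(f"{eta_max:.1%}")  # 6.5% at the theoretical maximum
```

With a ceiling of a few percent before any real-world losses, direct reuse of the heat (heat pumps, district heating) tends to beat reconversion to electricity.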
Heat Pump:
"Data Centers That Recycle Waste Heat"...
Why Do Computers Use So Much Energy?
> Also, to foster research on this topic we have built a wiki, combining lists of papers, websites, events pages, etc. We highly encourage people to visit it, sign up, and start improving it; the more scientists get involved, from the more fields, the better!
Thermodynamics of Computation Wiki...
HN:
Justice Department Sues to Stop California Net Neutrality Law
Sorry if I get some basic understanding of the law wrong but...
Isn't this the same thing as regulating car emissions? Doesn't SB 822 only apply to providers in the state itself? Wouldn't it be that the telecoms are welcome to engage in another method of end-customer billing in other states?
What am I missing?
> Like California’s auto emissions laws that forced automakers to adopt the standards for all production, the state’s new net neutrality rules could push broadband providers to apply the same rules to other states.
I think you're right, the motives are very similar.
>.
I thought Republicans were pro-states rights and limited government? How does their position on this jibe with their ideology?
Expansion of federal jurisdiction under the Commerce Clause is an egregious violation of Constitutional law.
Does the federal government have the enumerated right under the Commerce Clause to, for example, ban football for anyone that doesn't have a disability? No!
Was the Commerce Clause sufficient authorization for Federal prohibition of alcohol? No! An Amendment to the Constitution was necessary. And Federal prohibition, along with the uneven state prohibitions it necessitated, miserably failed to achieve the intended outcomes.
Where is the limit? How can they claim to support a states' rights, limited government position while expanding jurisdiction under the Interstate Commerce Clause? "Substantially affecting" interstate commerce is a very slippery slope.
Furthermore, de-classification from Title II did effectively - as the current administration's FCC very clearly argued (in favor of special interests over those of the majority) - relieve the FCC of authority to regulate ISPs: they claimed that it's the FTC's job, and now they're claiming it's their job.
Without Title II classification, FCC has no authority to preempt state net neutrality regulation. California and Washington have the right to regulate ISPs within their respective states.
Outrageous!
Limited government:
States' rights:
[Interstate] Commerce Clause:
Net neutrality in the United States > Repeal of net neutrality policy:...
The limit is established, and constantly reevaluated, by the Supreme Court. For example, it was held that gun control cannot be done through the Commerce Clause.
Car emissions and ISPs are different. As ISPs are very much perfect examples of truly local things (they need to reach your devices with EM signals either via cables or air radio), the Federal government might try to argue that the net neutrality regulation of California affects the whole economy substantially, because it allows too much interstate competition due to the lack of bundling/throttling by ISPs.
Similarly, the problem with car emissions might be that requiring things at the time of sale affects what kinds of cars are sold in CA.
Is the Commerce Clause too vague? Yes. Is there a quick and sane way to fix it? I see none. Is it at least applied consistently? Well, sort of. But we shall see.
ISPs are the very opposite of local, as the only reason I have an ISP is to deliver bits from the rest of the world. Of course, the FCC doesn't seem to understand that...
To summarize the points made in [1]: products can be sold across state lines, internet service sold in one state cannot be sold across state lines.
[1]
In my opinion, the court has significantly erred in redefining interstate commerce to include (1) intrastate-only-commerce; and (2) non-commerce (i.e. locally grown and unsold wheat)
Furthermore - and this is a bit off topic - unalienable natural rights (Equality, Life, Liberty, and pursuit of Happiness) are of higher precedence. I mention this because this is yet another case where the court will be interpreting the boundary between State and Federal rights; and it's very clear that the founders intended for the powers of the federal government to be limited -- certainly not something that the Commerce Clause should be interpreted to supersede.
What penalties and civil fines are appropriate for States or executive branch departments that violate the Constitution; for failure to uphold Oaths to uphold the Constitution?
The problem is, someone has to interpret what kind of economy the Founders intended.
Is it okay if a State opts to withdraw from the interstate market for wheat? Because without power to meddle with intra-state production, consumption and transactions, it's entirely possible.
White House Drafts Order to Probe Google, Facebook Practices
This sounds like they want the equal representation policies that the Republican Party got rolled back in the 80s (ruled unconstitutional iirc). It’s what allowed the rise of partisan “news”. It seems like any “equal exposure” policies would hit the same issues.
That said the primary “imbalanced exposure” seems to be due to evicting people who simply spend their time attacking minorities, attack equal rights, and promoting violence towards anyone that they dislike. For whatever reason the Republican Party seems to have decided that those people represent “conservative” views that private companies should have to support.
i. e. : People who are exercising their right to free speech.
'Right to free speech' does not exist outside the government. It never has, unless there's an amendment to the First Amendment that no one is telling us about.
Repeating what I said in an earlier comment... they were able to grow to the size they have become because they are exempted from libel laws under safe harbor. The argument for that was that they were neutral platforms. They no longer are, so they either need to remove the protections or be subject to the first amendment, but they should not be able to have it both ways... and I don't think there is case law to back this one way or the other yet.
> they were able to grow to the size they have become because they are exempted from libel laws under safe harbor
This was not a selective protection. When the government grants limited resources like electromagnetic spectrum and right of way, they're not directly making a monopoly, but the FCC does then claim right to regulate speech.
In the interest of fairness, the FCC classed telecommunication service providers as common carriers; thus authorizing FCC to pass net neutrality protections which require equal prioritization of internet traffic. (No blocking, No throttling, No paid prioritization). The current administration doesn't feel that that's fair, and so they've moved to dismantle said "burdensome regulations".
The current administration is now apparently attempting to argue that information service providers - which are all equally granted safe harbor and obligated to comply with DMCA - have no right to take down abuse and harassment because anti-trust monopoly therefore Freedom of Speech doesn't apply to these corporation persons.
Selective bias, indeed! Broadcast TV and Radio are subject to different rules than Cable (non-broadcast) TV.
Other regimes have attempted to argue that the government has the right to dictate the media as well.
Taking down abuse and harassment is necessary and well within the rights of a person and a corporation in the United States. Taking down certain content is now legally required within 24 hours of notice from the government in the EU.
Where is the line between a media conglomerate that produces news entertainment and an information service provider? If there is none, and the government has the right to regulate "equal time" on non-granted-spectrum media outlets, future administrations could force ConservativeNewsOutletZ and LiberalNewsOutletZ to carry specific non-emergency content, to host abusive and offensive rhetoric, and to be sued for being forced to do so because no safe harbor.
Can anyone find the story of how the GOP strongarmed and intimidated Facebook into "equal time" (and then we were all shoved full of apparently Russian conservative "fake news" propaganda) before the most recent election where the GOP won older radio, TV, and print voters and young people didn't vote because it appeared to be unnecessary?
Meanwhile, the current administration rolled back the "burdensome regulation" that was to prevent ISPs from selling complete internet usage history; regardless of age.
Maybe there's an exercise that would be helpful for understanding the "corporate media filter" and the "social media filter"?
You, having no money -- while watching corporate profits soar and income inequality grow to unprecedented heights -- will choose to take a job that requires you to judge whether thousands of reported pieces of content a day are abusive, harassing, making specific threats, inciting specific destructive acts, recruiting for hate groups, depicting abuse; or just good 'ol political disagreement over issues, values, and the appropriate role of the punishing and/or nurturing state. You will do this for weeks or months, because that's your best option, because nobody else is standing in the mirror behind these people who haven't learned to respectfully disagree over facts and data (evidence).
Next, you will plan segments of content time interspersed with ads paid for by people who are trying to sell their products, grow their businesses, and reach people. You will use a limited amount of our limited electromagnetic spectrum which the government has sold your corporate overlords for a limited period of time, contingent upon your adherence to specific and subjective standards of decency as codified in the stated regulations.
In both cases, your objective is to maximize profit for shareholders.
Your target audiences may vary from undefined (everyone watching), to people who only want to review fun things that they agree with in their safe little microcosm of the world, to people who know how to find statistics like corporate profits, personal savings rate, infant mortality, healthcare costs per capita, and other Indicators identified as relevant to the Targets and Goals found in the UN Sustainable Development Goals (Global Goals Indicators).
Do you control what the audience shares?
Ask HN: Books about applying the open source model to society
I've been thinking for some time now that as productivity keeps growing, not all people will need to work any more. Society will eventually start to resemble an open source project where a few core contributors do the real work (and get to decide the direction), some others help around, and the majority of people just benefit without having to do anything. I'm wondering if any books have been written to explore this concept further?
> I've been thinking for some time now that as productivity keeps growing, not all people will need to work any more.
How much energy do autotrophs and heterotrophs need to thrive?
"But then we'll be rewarding laziness!"
Some people do enjoy the work they've chosen to do. We enjoy the benefits of upward mobility here in the US; the land of opportunity.
Why would I fully retire at 65 (especially if lifespan extension really is in reach)?
> Society will eventually start to resemble an open source project where a few core contributors do the real work (and get to decide the direction), some others help around, and the majority of people just benefit without having to do anything.
Open-source governance
Free-rider problem
As we continue to reward work, the people who are investing in the means of production (energy, labor, automation, raw materials) and science (research and development; education) continue to amass wealth and influence.
This concentration of wealth -- wealth inequality -- has historically presaged and portended unrest.
How contributions to open source projects are reinforced, what motivates people who choose to contribute (altruism, enlightened self-interest, compassion, acceptance), and what makes a competitive and thus sustainable open source project is an interesting study.
... Business models for open-source software:...
... Political Science:
... National currencies are valued in FOREX markets:
> I'm wondering if any books have been written to explore this concept further?
"The Singularity is Near: When Humans Transcend Biology" (2005) contains a number of extrapolated predictions; chief among these is that there will continue to be exponential growth in technological change
... Until we reach limits; e.g. the carrying capacity of our ecosystem, the edge of the universe.
"The Limits to Growth" (1972, 2004)
"Leverage Points: Places to Intervene in a System" (2010)
Who owns what and who 'gets to' just chill while the solar robots brush their teeth? Heady questions. "Tired yet?"
The Aragon Project has a really interesting take on open source governance:
""" IMAGINE A NATION WITHOUT LAND AND BORDERS
A digital jurisdiction
>. """
Today, Europe Lost The Internet. Now, We Fight Back
Here's a quote from this excellent article:
> An error rate of even one percent will still mean tens of millions of acts of arbitrary censorship, every day.
And a redundant -- positively defiant -- link and page title:
"Today, Europe Lost The Internet. Now, We Fight Back."...
Firms with 50 or fewer employees should stay that small, really.
VPN providers in North and South America FTW.
> VPN providers in North and South America FTW.
Article 13 will affect the entire Internet, not just people in Europe. Most people on the Internet use large, multinational platforms. Those platforms will set rules according to the lowest common denominator, because it's the easiest to implement.
This means that people all over the world are going to have a much more difficult time with any user-generated content. That's true even for user-created content, even if there is no other copyright holder involved (look at how badly Content ID has played out).
Or they just stop doing business in Europe.
Since Europe is quite a big market, that's maybe not an option; it's easier to geolocate and restrict just EU traffic.
Or just not, if there's no nexus to europe. I suspect for a small business, the right thing to do would be to simply ignore EU directives. I would not be surprised if the US passes a law to make judgements against US companies without a European nexus (users in Europe would not count) unenforceable in the US. That was done with the SPEECH act to stop libel tourism.
Technically, the phrase "Useful Arts and Sciences" in the Copyright Clause of the US Constitution applies to just that; the definitions of which have coincidentally changed over the years.
The harms to Freedom of Speech -- i.e. impossible 99% accuracy in content filtering still results in far too much censorship -- so significantly outweigh the benefits for a limited number of special interests intending to thwart inferior American information services which also currently host "art" and content pertaining to the "useful arts" that it's hard to believe this new policy will have its intended effects.
Haven't there been multiple studies which show that free marketing from e.g. content piracy -- people who experience and recommend said goods at $0 -- is actually a net positive for the large corporate entertainment industry? That, unimpeded, content spreads like the common cold through word of mouth, resulting in a greater number of artful impressions.
How can they not anticipate de-listing of EU content from news and academic article aggregators as an outcome of these new policies? (Resulting in even greater outsized impact on one possible front page that consumers can choose to consume)
For countries in the EU with less than 300 million voters, if you want:
- time for your headline: $
- time for your snippet: $$
- time for your og:description: $$
- free video hosting: $$$
- video revenue: $$$$
- < 30% American content: $$$$$
Pay your bill.
And what of academic article aggregators? Can they still index schema:ScholarlyArticle titles and provide a value-added information service for science?
Consumer science (a.k.a. home economics) as a college major
> That's why we need to bring back the old home economics class. Call it "Skills for Life" and make it mandatory in high schools. Teach basic economics along with budgeting, comparison shopping, basic cooking skills and time management.
Some Jupyter notebooks for these topics that work with could be super helpful. A self-paced edX course could also be a great intro to teaching oneself though online learning.
* Personal Finance (budgets, interest, growth, inflation, retirement)
* Food Science (nutrition, meal planning for n people, food prep safety, how long certain things can safely be left out on the counter)
* Productivity Skills (GTD, context switching overhead, calendar, email labels, memo app / shared task lists)
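For the personal-finance topic, even a tiny simulation makes compounding tangible; a sketch with hypothetical numbers (these are illustrative, not a forecast):

```python
# Compound growth of a recurring monthly contribution:
# $100/month at a 7% nominal annual return for 30 years.
monthly = 100.0
rate = 0.07 / 12      # monthly rate
months = 30 * 12      # 30 years of contributions
balance = 0.0
for _ in range(months):
    balance = balance * (1 + rate) + monthly
print(round(balance))  # ≈ 122,000 from only 36,000 contributed
```

That gap between contributions and ending balance is the whole "start early" lesson in one loop.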
There were FACS (Family and Consumer Studies/Sciences) courses in our middle and high school curricula. Nutrition, cooking, sewing; family planning, carry a digital baby for a while
Home economics
* Family planning
> * Personal Finance (budgets, interest, growth, inflation, retirement)
Personal Finance
Khan Academy > College, careers, and more > Personal finance...
"CS 007: Personal Finance For Engineers"
> * Food Science (nutrition, meal planning for n people, food prep safety, how long certain things can safely be left out on the counter)
Food Science
Dietary management
Nutrition Education:
MyPlate
Healthy Eating Plate-...
How to make salads, smoothies, sandwiches
How to compost and avoid unnecessary packaging
* School, College, Testing, "How Children Learn"
GED, SAT, ACT, MCAT, LSAT, GRE, GMAT, ASVAB
Defending a Thesis, Bar Exam, Boards
Khan Academy > College, careers, and more
Educational Testing
529 Plans (can be used for qualifying educational expenses for any person)
Middle School "Glimpse" project: Past, Present, Future. Present, Future: plan your 4-year highschool course plan, pick 3 careers, pick 3 colleges (and how much they cost)
High school literature: write a narrative essay for college admissions
* Health and Medicine
How to add emergency contact and health information to your phone, carseat (ICE: In Case of Emergency)
How to get health insurance ( )
"What's your blood type?" (?!)
Khan Academy > Science > Health and Medicine
Facebook vows to run on 100 percent renewable energy by 2020
Is there a list of 100% renewable energy companies?
OTOH, Apple and Google are 100% renewable -- accounting for Power Purchase Agreements -- today.
{Company, Usage, PPA offsets, Target Year}
Are there sustainability reporting standards which require these facts?
Miami Will Be Underwater Soon. Its Drinking Water Could Go First
Now, now, let's focus on the positives here:
- more pollution from shipping routes through the Arctic circle (and yucky-looking icebergs that tourists don't like)
- less beachfront property
- more desalinatable water
- hotter heat
- more revulsive detestable significant others (displaced global unrest)
- costs of responding to natural disasters occurring with greater frequency due to elevated ocean temperatures
- less parking spaces (!)
What are the other costs and benefits here?
I've received a number of downvotes for this comment. I think it's misunderstood, and that's my fault: I should have included [sarcasm] around the whole comment [/sarcasm].
I've written about our need to address climate change here in past comments. I think the administration's climate change denials (see: "climate change politifact') and regulatory rollbacks are beyond despicable: they're sabotaging the United States by allowing more toxic chemicals into the environment that we all share, and allowing more sites that must be protected with tax dollars that aren't there because these industries pay far less than benchmarks in terms of effective tax rate. We know that vehicle emissions, mercury, and coal ash are toxic: why would we allow people to violate the rights of others in that way?
A person could voluntarily consume said toxic byproducts and not have violated their own rights or the rights of others, you understand. There's no medical value and low potential for abuse, so we just sit idly by while they're violating the rights of other people by dumping toxic chemicals into the environment that are both poisonous and strongly linked to climate change.
What would help us care about this? A sarcastic list of additional reasons that we should care? No! Miami underwater during tourist season is enough! I've had enough!
So, my mistake here - my downvote-earning mistake - was dropping my generally helpful, hopeful tone for cynicism and sarcasm that wasn't motivating enough.
We need people to regulate pollution in order to prevent further costs of climate change. Water in the streets holds up commerce and travel, hampers national security, and destroys roads.
We must stop rewarding pollution if we want it - and definitely resultant climate change - to stop. What motivates other people to care?
Free hosting VPS for NGO project?
GCP, OpenShift, AppEngine, Firebase, and Heroku all have a free plan.
The Burden: Fossil Fuel, the Military and National Security
Scientists Warn the UN of Capitalism's Imminent Demise
The actual document title: "Global Sustainable Development Report 2019 drafted by the Group of independent scientists: Invited background document on economic transformation, to chapter: Transformation: The Economy" (2018) [PDF]
Why I distrust command economies (beyond just because of our experiences with violent fascism and defense overspending and the subsequent failures of various communist regimes):
We have elections today. We don't choose to elect people that regard the environment (our air, water, land, and other natural resources) as our most important focus. A command economy driven by these folks for longer than a term limit would be even more disastrous.
The market does not solve for 'externalities': things that aren't costed in. We must have regulation to counteract the blind optimization for profit (and efficiency) which capitalism rewards most.
Environmental regulation is currently insufficient; worldwide. That is the consensus from the Paris Agreement which 195 countries signed in 2015.
Maybe incentives?
We could sell tokens for how much pollutants we're allowed to f### everyone else over with and penalize exceeding the amount we've purchased. That would incentivize firms to pollute less so that they can save money by having to buy fewer tokens. (Europe does this already; and it's still not going to save the planet from industrial production externalities)
So, while I'm wary of any suggestion that a command economy would somehow bring forth talent in governance, I look to this article for actionable suggestions that penalize and/or incentivize sustainable business and living practices.
Sustainable reporting really is a must: how can I design an investment portfolio that excludes reckless, irresponsible, indifferent, and careless investments and highly values sustainability?
No one likes to be driven by harsh penalties; everyone likes to be rewarded (even with carrots as incentives).
Markets do not solve for long-term outcomes. Case in point: the market has not chosen the most energy-efficient cryptocurrencies. Is this an information asymmetry issue: people just don't know, or just don't care because the incentives are so alluring, the brand is so strong, or the perceived security assurances of the network outweigh the energy use (and environmental impact) in comparison to dry cleaning and fossil fuel transport?
How would a command economy respond to this? It really is denial and delusion to think that the market will cast aside less energy efficient solutions in order to save the environment all on its own.
So, what do we do?
Do we incentivize getting inefficient vehicles off of the road and into a recycling plant where they belong?
Do we shut down major sources of pollution (coal plants, vehicle emissions)?
Do we create tokens to account for pollution allowances (for carbon and other toxic f###ing chemicals)?
Do we cut irrational subsidies for industries that don't pay their taxes (even when they make money); so that we're aware of the actual costs of our behavior?
Do we grow hemp to absorb carbon, clean up the soil, replace emissions, and store energy?
Who's in the mood to dom these greedy shortsighted idiots into saving themselves and preventing the violation of our right to health (life)? No, you can't because you're busy violating your own rights and finding drugs/druggies and that's not allowed? Is that a lifetime position?
"Go burn a charcoal grill and your gas vehicle in your closed garage for awhile and come talk to me." That's really what we're dealing with here.
Anyways, this paper raises some good points; although I have my doubts about command economies.
[strikethrough] You can't do that to yourself. [/strikethrough] You can't do that to others (even if you pay for their healthcare afterwards).
Where's Captain Planet when you need 'em, anyways?
The problem may actually be solved by a free market... But it has to have different optimizations.
"Value" or "Quality" of a company has to stop being measured in unconstrained growth and instead measured in minimized externalities, while achieving your stated goal.
Accounting has to become a hell of a lot more complicated. We're talking "keep track of your industrial waste product", possibly even having to take responsibility in some manner for dealing with it directly.
There also needs to be some societal change. We need to look at how we've stretched our supply and industry chains worldwide and start to figure out ways to minimize transportation and production costs.
Competition may need to fundamentally change its nature as well. Trade secrets need to be ripped out into the light of day. It's not a contest of outdoing the other guy, but of seeing who can find a way to make the entire industry more efficient.
Definitely a lot of change needed. That is for sure.
Firefox Nightly Secure DNS Experimental Results
> The experiment generated over a billion DoH transactions and is now closed. You can continue to manually enable DoH on your copy of Firefox Nightly if you like.
...
>.
Long-sought decay of Higgs boson observed at CERN
Furthermore, both teams measured a rate for the decay that is consistent with the Standard Model prediction, within the current precision of the measurement.
And now everyone: Noooo, not again.
(Explanation: it's well-known that the Standard Model can't be completely correct but again and again physicists fail to find an experiment contradicting its predictions, see... for example)
Well, the standard model can be correct. It is correct until some experiment proves otherwise.
It is full of unexplained hardcoded parameters, indeed, which need an explanation from outside of the SM.
> It is full of unexplained hardcoded parameters, indeed, which need an explanation from outside of the SM....
> The term magic number or magic constant refers to the anti-pattern of using numbers directly in source code
Who is going to request changes when doing a code review on a pull request from God?
But which branch do you pull from? God has been forked so many times, it's hard to keep track. There are several distros available, so I guess you just get to pick the one that checks the most of your needs at the time.
God doesn't use GitHub.
You have to run `git format-patch` and email the results to Him.
Yeah him and Torvalds alike, maybe it's an ego thing. Or a graybeard thing?
Sen. Wyden Confirms Cell-Site Simulators Disrupt Emergency Calls
Building a Model for Retirement Savings in Python
re: pulling historical data with pandas-datareader, backtesting, algorithmic trading:...
re: historical returns
- [The article uses a constant 7% annual return rate]
- "The current average annual return from 1923 (the year of the S&P’s inception) through 2016 is 12.25%." (but that doesn't account for inflation)
- (300%+ over n years (from a down market))
Is there a Jupyter notebook with this code (with a requirements.txt for (repo2docker))?
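A minimal sketch of the compounding the article describes, with an inflation adjustment folded in (all parameter values here are illustrative, not from the article, which uses a flat 7% nominal return):

```python
def retirement_balance(annual_contribution, years, nominal_return, inflation):
    """Compound a fixed annual contribution and report the final
    balance in today's dollars (inflation-adjusted, i.e. real)."""
    real_return = (1 + nominal_return) / (1 + inflation) - 1
    balance = 0.0
    for _ in range(years):
        # Contribute at the start of each year, then grow the balance.
        balance = (balance + annual_contribution) * (1 + real_return)
    return balance

# Illustrative: $10k/year for 30 years at 7% nominal, 2% assumed inflation.
final = retirement_balance(10_000, 30, 0.07, 0.02)
```

The real-return conversion is the piece the flat-7% model omits; with 2% inflation the effective growth rate drops to roughly 4.9%.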
New E.P.A. Rollback of Coal Pollution Regulations Takes a Major Step Forward
Would you move your family downwind from a coal plant? Why or why not?
Coal ash pollutes air, water, rain (acid rain), crops (our food), and soil. Which rights of victims does coal pollution infringe? Who is liable for the health effects?
Canada (2030), France (2021), and the UK (2025) are all working to entirely phase out coal-fired power plants for very good reasons (such as neonatal health).
~"They're just picking on coal": No, we're choosing renewables that are lower cost AND don't make workers and citizens sick.
If you can mine for coal, you can set up solar panels and wind turbines.
If you can run a coal mine; you can buy some cheap land, put up solar panels and wind turbines, and connect it to the grid.
Researchers Build Room-Temp Quantum Transistor Using a Single Atom
"Quasi-Solid-State Single-Atom Transistors" (2018)...
New “Turning Tables” Technique Bypasses All Windows Kernel Mitigations
Any word yet on whether MacOS, BSD, or Linux are also vulnerable?
Um – Create your own man pages so you can remember how to do stuff
Interesting project. I believe these two shell functions in the shell's initialization file (~/.profile, ~/.bashrc, etc.) can serve as a poor man's um:
umedit() { mkdir -p ~/notes; vim ~/notes/"$1.txt"; }
um() { less ~/notes/"$1.txt"; }
If you write these in .rst, you can generate actual manpages with Sphinx:...
sphinx.builders.manpage:...
[deleted]
Leverage Points: Places to Intervene in a System
Extremely interesting take, with a lot of good stuff to think about.
I'm unclear about the claim that less economic growth would be better, though, and the author seems very committed to it. I wasn't able to find the article they reference as explaining how less growth is what we really need (J.W. Forrester, World Dynamics, Portland OR, Productivity Press, 1971), and it comes from almost 50 years ago, which might as well be another economic era altogether.
Does anyone know what the arguments are, what assumptions they require, and whether they still apply today? My understanding is that "less growth is better" is a distinctly minority take amongst modern economists, but the rest of this article seems very intelligently laid out, so I'd like to dig deeper.
I've always thought that for any dial we have, there's always an optimal setting, whether it's tax rates, growth rates, birth rates, etc., and blindly pushing one way or the other (like both political parties tend to do) is not helpful, or at the very least merely indicates different value systems.
The book is called "Limits to Growth", and it's a work of the Club of Rome. You can watch this video to have an idea of their work [1].
At first it's a very strange idea; after all, we work and consume every day in order to grow the economy. But every healthy system has a homeostatic point, a point where it doesn't need to grow, only to be maintained. We are now a fat society and we need to get our health back; we need to degrow. We need to work far fewer hours and consume much less.
Reducing working hours is a great leverage point. People will start to have time to care about the community and take care of their own health. Maybe have more time to take a walk instead of using the car. This can improve health and the environment but will not contribute to economic growth; healthy people who don't use cars are not good friends of economic growth as measured by GDP.
English is not my mother language. Ursula Le Guin had an eloquent post about this on her blog, but it was removed to be in her last book. The post is called "Clinging to a Metaphor" (the metaphor is economic growth) and the book is called "No Time to Spare". [1]
"The Limits to Growth" (1972)
"Thinking in Systems: a Primer" (2008)
Glossary of systems theory
Systems Theory
...
Computational Thinking
Which of the #GlobalGoals (UN Sustainable Development Goals) Targets and Indicators are primary leverage points for ensuring - if not growth - prosperity?
SQLite Release 3.25.0 adds support for window functions is a good introduction to window functions.
Besides that, the comprehensive testing and evaluation of SQLite never ceases to amaze me. I'm usually hesitant to call software development "engineering", but SQLite is definitely well-engineered.
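As a quick illustration of what the new window functions enable (a sketch using Python's bundled sqlite3 module; it assumes the interpreter is linked against SQLite 3.25 or newer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("east", 30), ("west", 20)])

# Rank each sale within its region: an aggregate-style result attached
# to every row, without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
""").fetchall()
```

This "aggregate per row, no collapsing" behavior is exactly the thing that was previously awkward to express in SQLite without self-joins or correlated subqueries.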
This looks great, but I couldn't get through the first question on aggregate functions. Are there any SQL books/tutorials that go over things like this?
A lot of material I've seen has been like the classic image of "How to draw an owl. First draw two circles, then draw the rest of the owl", where they tell you the super basic stuff, then assume you know everything.
Ibis uses window functions for aggregations if the database supports them. IDK when support for the new SQLite window functions will be added.
[EDIT]
I created an issue for this here:
Update on the Distrust of Symantec TLS Certificates
Is the certifi bundle (2018.8.13) on PyPI also updated?
> Are these still in the bundle?
> Should projects like requests which depend on certifi also implement this logic?
The Transport Layer Security (TLS) Protocol Version 1.3
Is PKI still an optional feature of TLS? Can one still use self-signed x.509 certificates and have key-signing parties?
Academic Torrents – Making 27TB of research data available
All the .torrent files are served over HTTP, so with a simple MITM attack a bad actor could swap in their own custom-tweaked version of any data set here, serving whatever goals the attacker might have.
I really wish we could get basic security concepts added to the default curriculum for grade schoolers. You shouldn't need a PhD in computer security to know this stuff. These site creators have PhDs in other fields, but obviously no concept of security. This stuff should be basic literacy for everyone.
> This stuff should be basic literacy for everyone.
Arguably, one compromised PKI x.509 CA jeopardizes all SSL/TLS channel sec if there's no certificate pinning and an alternate channel for distributing signed cert fingerprints (cryptographically signed hashes).
We could teach blockchain and cryptocurrency principles: private/secret key, public key, hash verification; there's money on the table.
GPG presumes secure key distribution (`gpg --verify .asc`).
TUF is designed to survive certain role key compromises.
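The hash-verification piece of that literacy fits in a few lines (a toy sketch; the hard part, as the MITM comment above notes, is getting the expected digest to the user over a channel the attacker can't tamper with):

```python
import hashlib

def verify(data: bytes, expected_sha256_hex: str) -> bool:
    """Return True iff data matches the published fingerprint.
    This only helps if expected_sha256_hex itself arrived over a
    trusted channel (or was cryptographically signed)."""
    return hashlib.sha256(data).hexdigest() == expected_sha256_hex

payload = b"dataset-v1.torrent contents"
fingerprint = hashlib.sha256(payload).hexdigest()  # publisher's side

assert verify(payload, fingerprint)            # untampered download
assert not verify(b"swapped contents", fingerprint)  # MITM'd download
```

Serving the .torrent files over HTTPS, or publishing signed fingerprints out of band, closes the gap the sketch leaves open.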
1/0 = 0
1/0 = 1(±∞)
> How many times does zero go into any number? Infinity. [...]
> How many times does zero go into zero? infinity^2?
Zero goes into zero x times, for any real x. Infinity isn't real, therefore neither is infinity^2 so no.
Extrapolate.
What value does 1/x approach?
What about 2/x?
And then, what about ∞/x? What value would we expect that to approach? ∞(±∞)
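The limit argument being gestured at, written out (and also the reason 1/0 is conventionally left undefined rather than set to ±∞):

```latex
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty,
\qquad
\lim_{x \to 0^{-}} \frac{1}{x} = -\infty
```

Since the two one-sided limits disagree, no single value (finite or infinite) can be consistently assigned to 1/0 in the real numbers.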
Power Worth Less Than Zero Spreads as Green Energy Floods the Grid
I don't truly understand this "problem". I understand storing the energy in batteries is currently very expensive economically and materially.
However I believe there are plenty of "goods" (irrespective of whether they are bulk materials or partially processed products) which have a high processing-energy-per-volume ratio (this does not need to be recoverable stored energy).
Allow me to give an example: currently we have a drought in Belgium (or at least Flanders). We are not landlocked; there is plenty of water in the sea. Desalination is energy intensive. Instead of only looking at energy storage, why can't we increase the processing capacity (more desalination sites capable of working in parallel) and desalinate sea water during the energy flood? I don't expect this to be an ideal real-world example, only a pattern for identifying such examples: any product (could be composite parts, or bulk material) which is relatively compact and has some high energy-per-product-volume processing step. Just do the process (desalination, welding some part to another part...) when the sun shines, and store the output for later.
Products with very high step energy density are good candidates for storing, and could help flatten daily variations, and perhaps even seasonal variations!
Now some companies would prefer avoiding risk if they don't have guaranteed orders far enough into the future, then perhaps there should be a market for insurance or loans, so that the company is encouraged to take the risk, instead of wasting the cheap energy...
Capital costs vs. marginal costs.
You're going to build a $100M desalination plant and run it for three hours a day? That's a ton of money sitting idle most of the day, far more than what is recovered with zero operating costs.
(This is called the utilization factor -- how long a piece of equipment is used vs. staying idle)
Ideally you want useful processes with low capital costs and expensive marginal/energy costs. Desalination is not one of those.
A desal plant is a useful thing and can be run around the clock off traditional energy sources. The "free energy" hours would help lower the costs.
I can even imagine a SETI-like application where people who over-generate power are able to donate it to causes of their preference...
But if you are using traditional energy sources to run it all the time, it is no longer 'solving' the problem of burning off excess energy during peak renewable times.
Someone is having to build a lot of highly wasteful, redundant infrastructure.
Rational cryptocurrency mining firms can use the excess (unstorable) energy by converting it back to money (while the sun shines and the wind blows).
Money > Energy > Money
> Someone is having to build a lot of highly wasteful, redundant infrastructure.
We're nowhere near having the energy infrastructure necessary to support everyone having an electric vehicle yet.
Energy storage is key to maximizing returns from renewables and minimizing irreversible environmental damage.
Kernels, a free hosted Jupyter notebook environment with GPUs
Hey Ben, are these going to support arbitrary CUDA?
At the moment, we're focused on providing great support for the Python and R analytics/machine learning ecosystems. We'll likely expand this in the future, and in the meantime it's possible to hack through many other usecases we don't formally support well.
How do you handle custom environment requirements, whether it’s Python version, library version, or more complex things in the environment that some code might run on?
Basically, suppose I wanted everything that I could define in a Docker container to be available “as the environment” in which the notebook is running. How do I do that?
I ask because I’ve started to see an alarming proliferation of “notebook as a service” platforms that don’t offer that type of full environment spec, if they offer any configuration of the run time environment at all.
I’ve taught probability and data science at university level and worked in machine learning in a variety of businesses too, and I’d say for literally all use cases, from the quickest little pure-pedagogy prototype of a canned Keras model to a heavily customized use case with custom-compiled TensorFlow, different data assets for testing vs ad hoc exploration vs deployment, etc., the absolutely minimum thing needed before anything can be said to offer “reproducibility” is complete specification of the run time environment and artifacts.
The trend to convince people that a little "poke around with scripts in a managed environment" offering is value-additive is dangerous, very similar to MATLAB's approach of entwining all data exploration with the atrocious development habits that are facilitated by the console environment (and of specifically targeting university students with free licenses, using a drug-dealer model to get engineers hooked on MATLAB's workflow model and then leveraging that to oblige employers to buy and standardize on abjectly bad MATLAB products).
Any time I meet young data scientists I always try to encourage them to avoid junk like that. It’s vital to begin experiments with fully reproducible artifacts like thick archive files or containers, and to structure code into meaningful reproducible units even for your first ad hoc explorations, and to absolutely always avoid linear scripting as an exploratory technique (it is terrible and ineffective for such a task).
Kaggle Kernels seems like a cool idea, so long as the programmer must fully define artifacts that describe the complete entirety of the run time environment, and nobody is sold on the Kool Aid of just linear scripting in some other managed environment.
Each kernel for example could have a link back to a GitHub repo containing a Dockerfile and build scripts for what defined the precise environment the notebook is running in. Now that’s reproducible.
Here are the Kaggle Kernels Dockerfiles:
- Python:...
- R:... builds containers (and launches free cloud instances) on demand with repo2docker from a (commit hash, branch, or tag) repo URL:...
That’s a great first step! Adding the ability to customize on a per-notebook basis would be impressive.
Solar and wind are coming. And the power sector isn’t ready
I don't know that fatalism and hopelessness are motivating for decision makers (who are seeking greater margins regardless of policy and lobbies).
Is our transformation to 100% clean energy ASAP a certain eventuality? On a long enough timescale, it would be irrational for utilities to not choose both lower cost and more sustainable environmental impact ('price-rational', 'environment-rational').
We should expect storage and generation costs to continue to fall as we realize even just the current pipeline of capitalizable [storage] research.
Solar energy is free.
"Solar Just Hit a Record Low Price in the U.S."
(emphasis mine)
>>.”
I'm assuming that's without factoring in the health cost externalities.
Yes, this is solely for cost of power. The healthcare savings and quality of life improvements are an additional bonus on top of very cheap power.... (Air pollution is triggering diabetes in 3.2 million people each year)-... (The Other Reason to Shift away from Coal: Air Pollution That Kills Thousands Every Year)
Causal Inference Book
Causal inference (Causal reasoning) ( )
Tim Berners-Lee is working on a platform designed to re-decentralize the web
Does anyone have some links on Solid that aren't media articles? I can't find anything, not even in Tim's homepage.
Spec:
Source:
...
From ( )
>.
Mastodon has now supplanted GNU StatusNet.
More States Opting to 'Robo-Grade' Student Essays by Computer
edX can automate short essay grading with edx/edx-ora2 "Open Response Assessment Suite" [1] and edx/ease "Enhanced AI scoring engine" [2].
1: 2:
... I believe there's also a tool for peer feedback.
Peer feedback/grading on MOOCs is pretty bad in my experience. There’s too much diversity of skills, language ability, etc. And too many people who bring their own biases and mostly ignore any grading instructions.
Peer discussion and feedback are useful in things like college classes. Much less so with MOOCs.
Ask HN: Looking for a simple solution for building an online course
I want to build an online course on graph algorithms for my university. I've tried to find a solution which would let submit, execute and test student's code (implement an online judge), but have had no success. There are a lot of complex LMS and none of them seem to have this feature as a basic functionality.
Are there any good out-of-box solutions? I'm sure I can build a course using Moodle or another popular LMS with some plugin, but I don't want to spend my time customizing things.
I'm interested both in platforms and self-hosted solutions. Thanks!
Maybe look at Jupyter Notebook? It does much of this out of the box, but may not be exactly what you are looking for.
nbgrader is a "A system for assigning and grading Jupyter notebooks."
jupyter-edx-grader-xblock
> Auto-grade a student assignment created as a Jupyter notebook, using the nbgrader Jupyter extension, and write the score in the Open edX gradebook
... networkx is a graph library written in Python which has pretty good docs:
There are a few books which feature networkx.
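For the auto-grading itself, the usual nbgrader pattern is a hidden test cell of plain assertions run against the student's function. A sketch for a BFS assignment (the function name and adjacency-dict format here are whatever the assignment happens to specify, not anything nbgrader mandates):

```python
from collections import deque

def bfs_order(graph, start):
    """Student-submitted solution: return vertices in BFS order.
    `graph` is an adjacency dict like {"a": ["b", "c"], ...}."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

# Hidden test cell: the grader executes these against the submission,
# and the student's score is derived from which asserts pass.
g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert bfs_order(g, "a") == ["a", "b", "c", "d"]
assert bfs_order(g, "d") == ["d"]
```

Executing untrusted submissions this way needs sandboxing (containers, resource limits), which is the part most LMS plugins leave out.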
There is now a backprop principle for deep learning on quantum computers
"A Universal Training Algorithm for Quantum Deep Learning"
New research a ‘breakthrough for large-scale discrete optimization’
Looks like it may be this paper:
"An Exponential Speedup in Parallel Running Time for Submodular Maximization without Loss in Approximation"
The ACM STOC 2018 conference links to "The Adaptive Complexity of Maximizing a Submodular Function"...
A DOI URI would be great, thanks.
Wind, solar farms produce 10% of US power in the first four months of 2018
Take note: 10% PRODUCED, not 10% CONSUMED.
This is counting all output by wind and solar regardless if it is needed and usable when the power is being produced. This is quite important because wind and solar are not on-demand sources of power.
> This is counting all output by wind and solar regardless if it is needed and usable when the power is being produced. This is quite important because wind and solar are not on-demand sources of power.
I think you have that backwards: in the US, we lack the ability to scale down coal and nuclear plants. Solar and Wind are generally the first to get pulled offline when generated capacity exceeds demand and storage.
TIL this is called "curtailment" and it's an argument that utilities have used to justify not spending on renewables that are saving the environment from global warming (which is going to require more electricity for air conditioning).
Solar energy production peaks around noon. Demand for electricity peaks in the evening. We need storage (batteries with supercapacitors out front) in order to store the difference between peak generation and peak use. Because they're unable to store this extra energy, they temporarily shut down solar and wind and leave the polluting plants online.
Consumers aren't exposed to daily price fluctuations: they get a flat rate that makes it easy to check their bill; so there's no price incentive to e.g. charge an EV at midday when energy is cheapest.
The 'Duck curve' shows this relation between peak supply and demand in electricity markets:
Developing energy storage capabilities (through infrastructure and open access basic research that can be capitalized by all) is likely the best solution. According to a fairly recent report, we could go 100% renewable with the energy storage tech that exists today.
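The curtailment-vs.-storage arithmetic can be sketched in a few lines (toy hourly numbers, purely illustrative; real dispatch models are vastly more detailed):

```python
def dispatch(solar, demand, capacity):
    """Greedy toy model: charge a battery with midday surplus,
    discharge it against the evening peak. Surplus the battery
    can't absorb is curtailed; deficit it can't cover is unserved
    (i.e. must come from other plants)."""
    stored, curtailed, unserved = 0.0, 0.0, 0.0
    for gen, load in zip(solar, demand):
        surplus = gen - load
        if surplus > 0:
            absorbed = min(surplus, capacity - stored)
            stored += absorbed
            curtailed += surplus - absorbed
        else:
            draw = min(-surplus, stored)
            stored -= draw
            unserved += -surplus - draw
    return curtailed, unserved

# Midday solar peak vs. evening demand peak (the "duck curve"):
solar  = [0, 2, 8, 8, 2, 0]
demand = [3, 3, 4, 4, 6, 6]
no_battery   = dispatch(solar, demand, 0)  # midday surplus thrown away
with_battery = dispatch(solar, demand, 8)  # surplus shifted to evening
```

With zero storage, every unit of the midday surplus is curtailed while the evening deficit is met by the polluting plants; the battery converts that curtailed energy directly into served evening load.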
But there's no money for it. There's money for subsidizing oil production (regardless of harms (!)), but not so much for wind and solar. There's money for responding to natural disasters caused by global warming, but not so much for non-carbon-based energy sources that don't cause global warming. A film called "The Burden: Fossil Fuel, the Military, and National Security" quotes the actual unsubsidized price of a gallon of gasoline.
Wouldn't it be great if there was some kind of computer workload that could be run whenever energy is cheapest ('energy spot instances') so that we can accelerate our migration to renewable energy sources that are saving the environment for future generations? If only there were people who had strong incentives to create demand for power-efficient chips and inexpensive clean energy.
Where would be if we had continued with Jimmy Carter's solar panels on the roof of the White House (instead of constant war and meddling with competing oil production regions of the world)?
It's good to see wind and solar growing this fast this year. A chart with cost per kWhr or MWhr would be enlightening.
FDA approves first marijuana-derived drug and it may spark DEA rescheduling
Perhaps someone from the US could help me understand the federal vs state legality of cannabis in the USA?
Can a state override any federal law?
Could federal-level law enforcement theoretically charge someone in a cannabis-legal state for drug offenses?
states cannot override any federal law.
the federal government merely tolerates the semi-autonomous nature of states in order to maintain order. but the legality of the federal government's supremacy is well evolved, its power and might is disproportionately greater than that of any collective of states, and at this point the semi-autonomous nature of states is pure fiction as the federal government can assume jurisdiction over any intra-state matter if it wanted to through its interstate commerce laws.
yes, federal level law enforcement can theoretically charge someone in a cannabis-legal state for drug offenses. this still happens. the discretion of the words of the President, the heads of the DEA and DOJ prevent the MAJORITY of it from happening and also guide the discretion of the courts and public sympathy. so right now, for the last two administrations it has not been a priority to upset the social order in cannabis-legal states. but it can still happen.
> states cannot override any federal law.
Not actually true:(U.S._Constituti...
> not actually true
a concept which requires agreement from the federal courts themselves, who have never upheld a single argument related to this concept.
not actually what?
Selective incorporation:...
10th Amendment:...
> The Tenth Amendment, which makes explicit the idea that the federal government is limited to only the powers granted in the Constitution, has been declared to be a truism by the Supreme Court.
Supremacy clause:
> federal acts take priority over any state acts that conflict with federal law
Natural rights ('inalienable rights': Equal rights, Life, Liberty, pursuit of Happiness):
9th Amendment:...
> The Ninth Amendment (Amendment IX) to the United States Constitution addresses rights, retained by the people, that are not specifically enumerated in the Constitution. It is part of the Bill of Rights.
If the 9th Amendment recognizes any unenumerated rights of the people (with Supremacy, regardless of selective incorporation), it certainly recognizes those of the Declaration of Independence (secession from the king ('CSA'), Equality, Life, Liberty, pursuit of Happiness), our non-binding charter which frames the entirety of the Constitutional convention.
All of these things have been interpreted by the courts and their conclusion was not like yours
It is nice that you are interested in these things, but they simply cannot be read verbatim and then extrapolated to other things.
This isn't educational for anybody, this is a view that lacks all consensus and all avenues to ever garner consensus in this country.
Again, I ask you to explain how the current law grants equal rights.
> We tend to have issues with Equal rights/protections: slavery, voting rights, [school] segregation. Please help us understand how to do this Equally:
>> Furthermore, (1) write a function to determine whether a given Person has a (natural inalienable) right: what information may you require? (2) write a function to determine whether any two Persons have equal rights.
Abolitionists faced similar criticism from on high.
States Can Require Internet Tax Collection, Supreme Court Rules
I have a hunch that this will, in the end, be a massive win for large retailers vs. small ones. The task of figuring out how to calculate tax for all states is more or less the same amount of work regardless of size, which means for someone like Amazon it's more or less trivial, but for a mom-and-pop store it's a major hassle.
Thankfully, with Shopify it is extremely easy and straightforward to manage for my wife's small online store. Their platform does a great job properly charging taxes by state, county, and city in certain situations. Then, using an inexpensive plan, the entire filing and paying process is 100% automated.
In 10 minutes I was able to file and pay all the sales taxes to several state, dozens of California counties and a handful of cities that charge additional taxes on top.
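The per-jurisdiction stacking described above can be sketched minimally as summing state, county, and city rates per sale. All jurisdiction names and rates below are made up, and real tax boundaries don't follow a tidy state/county/city hierarchy, which is exactly why this is hard for small merchants:

```python
# Hypothetical layered rates; a real system resolves boundary polygons,
# product-category exemptions, and tax holidays, not just name lookups.
RATES = {
    ("CA", None, None): 0.0725,            # hypothetical state base rate
    ("CA", "Alameda", None): 0.0300,       # hypothetical county add-on
    ("CA", "Alameda", "Oakland"): 0.0025,  # hypothetical city add-on
}

def sales_tax(amount, state, county=None, city=None):
    # Sum every applicable layer's rate (deduplicated), then apply it.
    keys = {(state, None, None), (state, county, None), (state, county, city)}
    rate = sum(RATES.get(key, 0.0) for key in keys)
    return round(amount * rate, 2)

print(sales_tax(100.00, "CA", "Alameda", "Oakland"))  # 10.5
print(sales_tax(100.00, "CA"))                        # 7.25
```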
So you assume. As a small business you are unlikely to be audited, but that software could easily be wrong creating a huge minefield and potential liability.
So that would be the software companies' liability. And such business practice can differentiate good ones from the bad ones. I'm seeing a new business market here even.
There is zero economic gain from more complex tax rules. Further, the software does not absolve you of liability. At best they may agree to cover it, but that's unlikely and they can also go broke if they get it wrong.
Actually, current sales tax software provided by South Dakota and other states does absolve a merchant of liability if used to calculate sales tax due.
I agree with this approach!
Actually, the federal government should oblige each member state to provide the algorithm, and sign it cryptographically and have it expire every X fixed time interval, and have signed algorithms for the current and next time interval, so that software can automatically fetch and stay up to date.
Then the "business opportunity" of navigating FUD evaporates. Currently any such enterprise charging for such a service can spend a fraction of their budget lobbying against harmonization...
Since it would be an obligation of the states to the federal government, these algorithms (provided by each member state) should be hosted on a fixed federal government site.
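A minimal sketch of the publish-and-verify flow proposed above. HMAC with a shared key stands in for a real public-key signature here (a state would sign with a private key and host the public key); the key material and field names are all invented:

```python
import hmac, hashlib, json, time

# Illustrative only: real deployments would use asymmetric signatures.
STATE_KEY = b"example-state-signing-key"

def publish(tax_table, valid_seconds=86400, now=None):
    # A state publishes its rates plus an expiry, then signs the payload.
    now = time.time() if now is None else now
    payload = json.dumps({"rates": tax_table, "expires": now + valid_seconds},
                         sort_keys=True)
    sig = hmac.new(STATE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload, sig

def verify(payload, sig, now=None):
    # Client software verifies the signature and freshness before use.
    now = time.time() if now is None else now
    expected = hmac.new(STATE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None   # tampered payload
    data = json.loads(payload)
    if data["expires"] < now:
        return None   # stale payload: software should fetch the next one
    return data["rates"]

payload, sig = publish({"state_rate": 0.045})
print(verify(payload, sig))         # {'state_rate': 0.045}
print(verify(payload + " ", sig))   # None: signature no longer matches
```

With overlapping current/next validity windows, client software could refresh automatically with no service interruption.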
Time to start a petition?
This would reduce costs of tax collection for all parties.
What is the most convenient format for this layered geographic data? Are the tax district boundary polygons already otherwise available as open data? What do localities call these? Sales tax tables, sales tax database, machine-readable flat files in an open format with a common schema?
How much tax revenue should it cost to provide such a service on a national level?
States, Counties, Cities, 'Tax Zones'(?) could be required to host tax.state.us.gov or similar with something like Project Open Data JSONLD /data.json that could be aggregated and shared by a server with a URL registry, a task queue service, and a CDN service.
While the Bitcoin tax payments bill passed the Senate and House in Arizona, it was vetoed in May 2018. Seminole County in Florida now allows tax payment with crytocurrencies such as Bitcoin:...
This could also help reduce the costs of tax collection and possibly increase the likelihood of compliance with the forthcoming tax bills!
these are all very good questions, and only a community discussion of people with the right skills and interests can draft a petition, if enough people contribute to the discussion we can make the proposal more reasonable and robust against valid criticisms... but I believe we can make this happen by just starting the discussion. We can bitch on Hacker News, or we can draft a proposal for the different government levels. The more reasonable we draft it, the higher the probability the petition will be a success. I think it wouldn't be hard to argue against this proposal that a legally enforced computation should be open source, i.e. not just the algorithm for the computation but also all the data lists and boundary polygons used in the algorithm...
Ask HN: Do you consider yourself to be a good programmer?
if not, why? how do you validate your achievements?
> For identifying strengths and weaknesses: "Programmer Competency Matrix":
Don't read too much into that. TDD, for example, is not leveling up; it's an opinionated approach to development.
Automated testing is not a choice in many industries.
If you're not familiar with TDD, you haven't yet achieved that level of mastery.
There's a productivity boost to being able to change quickly without breaking things.
Is all unit/functional/integration testing and continuous integrating TDD? Is it still TDD if you write the tests after you write the function (and before you commit/merge)?
I think this competency matrix is a helpful resource. And I think that learning TDD is an important thing for a good programmer.
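For anyone unfamiliar, the TDD loop is small: write a failing test, then write just enough code to pass it, then refactor. A toy red-green sketch (the function is made up for illustration):

```python
# A minimal red-green TDD sketch. The asserts below were written first, ran
# red, and drove the implementation; leap_year() is a made-up example.
def leap_year(year):
    # Just enough implementation to turn the failing tests green.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The "test first" part: these existed (and failed) before the function body.
assert leap_year(2024) is True   # plain divisible-by-4 case
assert leap_year(1900) is False  # centuries are not leap years...
assert leap_year(2000) is True   # ...unless divisible by 400
```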
There is absolutely no need to follow TDD to be good at testing.
This is all unfounded conjecture: it seems easier to remember which parameter combinations may exist and need to be tested when writing the function; so "let's all write tests later" becomes a black box exercise which is indeed a helpful perspective for review, but isn't the most effective use of resources.
IMHO, being convinced that there's only one true and correct methodology (TDD, Scrum, etc.) or paradigm (functional, object-oriented, reactive programming, etc.) is a sign of being a bad programmer.
A good programmer finds common attributes and behaviors and organizes them into namespaced structs/arrays/objects with functions/methods and tests. Abstractly, which terms should we use to describe hierarchical clusters of things with information and behaviors if not those from a known software development or project management methodology?
And a good programmer asks why people might have spent so much time formalizing project development methodologies. "What sorts of product (team) failures are we dealing with here?" is an expensive question to answer as a team.
By applying tenets of Named agile software development methodologies, teams and managers can feel like they're discussing past and current experiences/successes/failures with comparable implementations of approaches that were or are appropriate for different contexts.
To argue the other side, just cherry picking from different methodologies is creating a new methodology, which requires time to justify basically what we already have terms for on the wall over here.
"We just pop tasks off the queue however" is really convenient for devs but can be kept cohesive by defining sensible queues: [kanban] board columns can indicate task/issue/card states and primacy, [sprint] milestone planning meetings can yield complexity 'points' estimates for completable tasks and their subtasks. With team velocity (points/time), a manager can try to appropriately schedule optimal paths of tasks (that meet the SMART criteria (specific, measurable, achievable, relevant, and Time-bound)); instead of fretting with the team over adjusting dates on a Gantt chart (task dependency graph) deadline, the team can
What about your testing approach makes it 'NOT TDD'?
How long should the pre-release static analysis and dynamic analyses take in my fancy DevOps CI TDD with optional CD? Can we release or deploy right now? Why or why not?
'We can't release today because we spent too much time arguing about quotes like "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines." ("Self Reliance" 1841. Emerson) and we didn't spec out the roof trusses ahead of time because we're continually developing a new meeting format, so we didn't get to that, or testing the new thing, yet.'
A good programmer can answer the three questions in a regular meeting at any time, really:
> 1. What have you completed since the last meeting?
> 2. What do you plan to complete by the next meeting?
> 3. What is getting in your way?
And:
Can we justify refactoring right now for greater efficiency or additional functionality?
The simple solution there is to not use specific parameters (outside obvious edge cases, i.e. supplying -1 and 2^63 to your memory allocator). Writing a simple reproducible fuzzer is easy for most contained functions.
I find blackbox testing itself also fairly useful. The part where you forget which parameter combinations may occur can be useful since you now A) rely on documentation you made and B) can write your test independent of how you implemented it just like if you had written it beforehand. (Just don't forget to avoid falling into the 'write test to pass function' trap)
IMHO, it's so much easier to write good, comprehensive tests while writing the function (FUT: function under test) because that information is already in working memory.
It's also easier to adversarially write tests with a fresh perspective.
I shouldn't need to fuzz every parameter for every commit. Certainly for releases.
"Building an AppSec Pipeline: Keeping your program, and your life, sane"
I mean, in general I don't think you should write fuzzers for absolutely everything (most contained functions => doesn't touch a lot of other stuff, and few parameters with a known parameter space).
The general solution is to use whatever testing methodology you are comfortable with that is effective, efficient, and covers a lot of the problem space. Of course no single testing method does all of that, so you'll have to constantly balance whatever works best (which is why I think pure TDD is overrated).
> Is all unit/functional/integration testing and continuous integrating TDD?
No. They differentiate in the matrix.
> If you're not familiar with TDD, you haven't yet achieved that level of mastery.
That's not true - I've worked on teams with far lower defect rates than the typical TDD team.
TDD can help keep a developer focused - and this can help overall productivity rates - but it doesn't directly help lower defect rates.
> TDD can help keep a developer focused - and this can help overall productivity rates - but it doesn't directly help lower defect rates.
We would need to reference some data with statistical power; though randomization and control are infeasible: no two teams are the same, no two projects are the same, no two objective evaluations of different apps' teams' defect rates are an apples to apples comparison.
Maybe it's the coverage expectation: do not add code that is not run by at least one test.
Handles are the better pointers
A few years ago I wanted to make a shoot-em-up game using C# and XNA I could play with my kids on an Xbox. It worked fine except for slight pauses for GC every once in a while which ruined the experience.
The best solution I found was to get rid of objects and allocation entirely and instead store the state of the game in per-type tables like in this article.
Then I realized that I didn't need per-type tables at all, and if I used what amounted to a union of all types I could store all the game state in a single table, excluding graphical data and textures.
Next I realized that since most objects had properties like position and velocity, I could update those with a single method by iterating through the table.
That led to each "object" being a collection of various properties acted on by independent methods which were things like gravity and collision detection. I could specify that a particular game entity was subject to gravity or not, etc.
The resulting performance was fantastic and I found I could have many thousands of on-screen objects at the time with no hiccups. The design also made it really easy to save and replay game state; just save or serialize the single state table and input for each frame.
The final optimization would have been to write a language that would define game entities in terms of the game components they were subject to and automatically generate the single class that was union of all possible types and would be a "row" in the table. I didn't get around to that because it just wasn't necessary for the simple game I made.
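A minimal sketch of that final design: every entity is a row holding the union of all fields, and independent "systems" iterate over the single table, touching only the rows flagged for them. All field names and constants are illustrative:

```python
# Single-table, component-flag design: one table of union-of-all-fields rows,
# with independent systems (gravity, movement) iterating over it.
GRAVITY = -9.8       # m/s^2 (illustrative)
DT = 1.0 / 60.0      # one 60 fps frame

def make_entity(x=0.0, y=0.0, vx=0.0, vy=0.0, has_gravity=False):
    return {"x": x, "y": y, "vx": vx, "vy": vy, "has_gravity": has_gravity}

def gravity_system(table):
    for e in table:
        if e["has_gravity"]:
            e["vy"] += GRAVITY * DT

def movement_system(table):
    for e in table:
        e["x"] += e["vx"] * DT
        e["y"] += e["vy"] * DT

# One table holds all game state, so saving or replaying a game is just
# serializing this list plus the per-frame input.
state = [
    make_entity(y=100.0, has_gravity=True),  # a falling crate
    make_entity(vx=60.0),                    # a projectile ignoring gravity
]
for _ in range(60):  # simulate one second
    gravity_system(state)
    movement_system(state)
```

Flat dict rows stand in for the generated union class; each system reads only the fields it cares about, which is what makes adding new behaviors cheap.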
I was inspired to do this from various articles about "component based game design" published at the time that were variations on this technique, and the common thread was that a hierarchical OO structure ended up adding a lot of unneeded complexity for games that hindered flexibility as requirements changed or different behaviors for in-game entities were added.
Edit: This is a good article on the approach....
> The final optimization would have been to write a language that would define game entities in terms of the game components they were subject to and automatically generate the single class that was union of all possible types and would be a "row" in the table
django-typed-models
> polymorphic django models using automatic type-field downcasting
> The actual type of each object is stored in the database, and when the object is retrieved it is automatically cast to the correct model class
...
> the common thread was that a hierarchical OO structure ended up adding a lot of unneeded complexity for games that hindered flexibility as requirements changed or different behaviors for in-game entities were added.
So, in order to draw a bounding box for an ensemble of hierarchically/tree/graph-linked objects (possibly modified in supersteps for reproducibility), is an array-based adjacency matrix still fastest?
Are sparse arrays any faster for this data architecture?
> django-typed-models
Interesting, never came across it. But I've got a Django GitHub library (not yet open source, because it's in a project I've been working on, on and off) that does the same for managing GitHub's accounts. An Organization and User inherit the same properties and I downcast them based on their polymorphic type field.
ContentType.model_class(), models.Model.meta.abstract=True, django-reversion, django-guardian
IDK how to do partial indexes with the Django ORM? A simple filter(bool, rows) could probably significantly shrink the indexes for such a wide table.
Arrays are fast if the features/dimensions are known at compile time (if the TBox/schema is static). There's probably an intersection between object reference overhead and array copy costs.
Arrow (with e.g. parquet on disk) can help minimize data serialization/deserialization costs and maximize copy-free data interoperability (with columnar arrays that may have different performance characteristics for whole-scene transformation operations than regular arrays).
Many implementations of SQL ALTER TABLE don't have to create a full copy in order to add a column, but do require a permission that probably shouldn't be GRANTed to the application user and so online schema changes are scheduled downtime operations.
If you're not discovering new features at runtime and your access pattern is generally linear, arrays probably are the fastest data structure.
Hacker News also has a type attribute that you might say is used polymorphically:...
Types in RDF are additive: a thing may have zero or more rdf:type property instances. RDF quads can be stored in one SQL table like:
_id,g,s,p,o,xsd:datatype,xml:lang
... with a few compound indexes that are combinations of (s,p,o) so that triple pattern graph queries like (?s,?p,1) are fast. Partial indexes (SQLite, PostgreSQL) would be faster than full-table indexes for RDF in SQL, too.
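A runnable sketch of that layout using SQLite; the column, index, and property names are made up, and real quad stores typically add more index permutations:

```python
import sqlite3

# One-table quad store with compound (s, p, o) permutation indexes so
# triple patterns like (?s, ?p, 1) stay fast.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE quads (
    _id INTEGER PRIMARY KEY,
    g TEXT, s TEXT, p TEXT, o TEXT,
    datatype TEXT, lang TEXT
);
CREATE INDEX idx_spo ON quads (s, p, o);
CREATE INDEX idx_pos ON quads (p, o, s);
CREATE INDEX idx_osp ON quads (o, s, p);
""")

db.executemany(
    "INSERT INTO quads (g, s, p, o, datatype, lang) VALUES (?, ?, ?, ?, ?, ?)",
    [("g1", "hn:item1", "rdf:type", "hn:Story", None, None),
     ("g1", "hn:item1", "hn:score", "1", "xsd:integer", None),
     ("g1", "hn:item2", "hn:score", "1", "xsd:integer", None)])

# Triple pattern (?s, ?p, 1): which subjects have any property with value 1?
rows = db.execute(
    "SELECT s, p FROM quads WHERE o = ? ORDER BY s", ("1",)).fetchall()
print(rows)  # [('hn:item1', 'hn:score'), ('hn:item2', 'hn:score')]
```

Note the additive typing: `hn:item1` has an `rdf:type` row like any other property, and nothing stops an entity from having several.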
Neural scene representation and rendering
This work is a natural progression from a lot of other prior work in the literature... but that doesn't make the results any less impressive. The examples shown are amazingly, unbelievably good! Really GREAT WORK.
Based on a quick skim of the paper, here is my oversimplified description of how this works:
During training, an agent navigates an artificial 3D scene, observing multiple 2D snapshots of the scene, each snapshot from a different vantage point. The agent passes these snapshots to a deep net composed of two main parts: a representation-learning net and a scene-generation net. The representation-learning net takes as input the agent's observations and produces a scene representation (i.e., a lower-dimensional embedding which encodes information about the underlying scene). The scene-generation network then predicts the scene from three inputs: (1) an arbitrary query viewpoint, (2) the scene representation, and (3) stochastic latent variables. The two networks are trained jointly, end-to-end, to maximize the likelihood of generating the ground-truth image that would be observed from the query viewpoint. See Figure 1 on Page 15 of the Open Access version of the paper. Obviously I'm playing loose with language and leaving out numerous important details, but this is essentially how training works, as I understand it based on a first skim.
EDIT: I replaced "somewhat obvious" with "natural," which better conveys what I actually meant to write the first time around.
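To make the two-network split concrete, here's a toy pure-Python sketch of just the data flow; there is no learning here, and every function body is a stand-in for a trained net:

```python
import random

def represent(observations):
    # Representation net stand-in: sum per-view embeddings into a single,
    # order-invariant scene representation.
    dim = len(observations[0])
    return [sum(view[i] for view in observations) for i in range(dim)]

def generate(query_viewpoint, representation, rng):
    # Generation net stand-in: mix the query viewpoint, the scene
    # representation, and a stochastic latent variable into a prediction.
    z = rng.random()
    return [q + r + z for q, r in zip(query_viewpoint, representation)]

views = [[0.1, 0.2], [0.3, 0.4]]  # embeddings of 2D snapshots (made up)
scene = represent(views)
prediction = generate([1.0, 0.0], scene, random.Random(0))
```

Training would adjust both stand-ins jointly so that `prediction` matches the ground-truth image from the query viewpoint; only the shape of the pipeline is shown here.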
I, literally just 15 minutes ago, had a chat with a friend of mine exactly about how what we are doing right now with computer vision is all based on a flawed premise (supervised 2D training set). The human brain works in 3D space (or 3D+time) and then projects all this knowledge in a 2D image.
Here I was, thinking I finally had thought of a nice PhD project and then Deepmind comes along and gets the scoop! Haha.
"Spatial memory"
It may be splitting hairs, but I think the mammalian brain, at least, can simulate/remember/imagine additional 'dimensions' like X/Y/Z spin, derivatives of velocity like acceleration/jerk/jounce.
Is space 11 dimensional (M string theory) or 2 dimensional (holographic principle)? What 'dimensions' does the human brain process? Is this capacity innate or learned; should we expect pilots and astronauts to have learned to more intuitively cognitively simulate gravity with their minds?
New US Solar Record – 2.155 Cents per KWh
"Cost of electricity by source"
"Electricity pricing"
> United States 8 to 17 ; 37[c] 43[c] [cents USD/kWh]
Ask HN: Is there a taxonomy of machine learning types?
Besides classification and regression, and the unsupervised methods for principal components, clustering, and frequent item-sets, what tools are there in the ML toolkit, and what kinds of problems are amenable to their use?
Outline of Machine Learning
Machine learning # Applications
"machine learning map" image search:...
Senator requests better https compliance at US Department of Defense [pdf]
The "Mozilla SSL Configuration Generator" has a checkbox for 'HSTS enabled?' and can generate SSL/TLS configs for Apache, Nginx, Lighttpd, HAProxy, AWS, ELB....
You can select 'nginx', then 'modern', and then 'apache' for a modern Apache configuration.
Are the 'modern' configs FIPS compliant?
What browsers/tools does requiring TLS 1.3 break?
Banks Adopt Military-Style Tactics to Fight Cybercrime
> In a windowless bunker here, a wall of monitors tracked incoming attacks — 267,322 in the last 24 hours, according to one hovering dial, or about three every second — as a dozen analysts stared at screens filled with snippets of computer code.
Is this type of monitoring possible (necessary, even) with blockchains? Blockchains generally silently disregard bad/invalid transactions. Where could discarded/disregarded transactions and forks be reported to in a decentralized blockchain system? Who would pay for log storage? How redundantly replicated should which data be?
How DDOS resistant are centralized and decentralized blockchains?
Exchanges have risk. In terms of credit fraud: some crypto asset exchanges do allow margin trading, many credit card companies either refuse transactions with known exchanges or charge cash advance interest rates, and all transactions are final.
Exchanges hold private keys for customers' accounts, move a lot to offline cold storage, and maybe don't do a great job of explaining that YOU SHOULD NOT LEAVE MONEY ON AN EXCHANGE. One should transfer funds to a different account; such as a hardware or paper wallet or a custody service.
Do/can crypto asset exchanges participate in these exercises? To what extent do/can blockchains help solve for aspects of our unfortunately growing cybercrime losses?
Premined blockchains could reportedly handle card/chip/PIN transaction volumes today.
No, Section 230 Does Not Require Platforms to Be “Neutral”
Ask HN: Do battery costs justify “buy all sell all” over “net metering”?
Are batteries the primary justification for "buy all sell all" over "net metering"?
Are next-gen supercapacitors the solution?
> Ask HN: Do battery costs justify "buy all sell all" over "net metering"?
> Are batteries the primary justification for "buy all sell all" over "net metering"?
> Are next-gen supercapacitors the solution?
With "Net Metering", electric utilities buy consumers' excess generated energy at retail or wholesale rates.
With "Buy All, Sell All", electric utilities require consumers to sell all of the energy they generate from e.g. solar panels (usually at wholesale prices, AFAIU) and buy all of the energy they consume at retail rates. They can't place the meter after any local batteries.
Do I have this right?
Net metering:
(used-generated) x (retail || wholesale)
Buy all, sell all:
(used x retail) - (generated x wholesale)
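A worked example of the two formulas, using made-up rates in whole cents per kWh to keep the arithmetic exact:

```python
# Illustrative rates only; actual retail/wholesale rates vary by utility.
RETAIL, WHOLESALE = 12, 4  # cents/kWh

def net_metering_bill_cents(used_kwh, generated_kwh):
    # (used - generated) x retail  (retail-rate net metering)
    return (used_kwh - generated_kwh) * RETAIL

def buy_all_sell_all_bill_cents(used_kwh, generated_kwh):
    # (used x retail) - (generated x wholesale)
    return used_kwh * RETAIL - generated_kwh * WHOLESALE

used, generated = 900, 400  # one month of usage and generation, in kWh
print(net_metering_bill_cents(used, generated))      # 6000 cents = $60.00
print(buy_all_sell_all_bill_cents(used, generated))  # 9200 cents = $92.00
```

Same usage and generation, but the buy-all-sell-all customer pays $32 more per month here, which is why break-even on panels comes later under that scheme.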
For the energy generating consumer, net metering is a better deal: they have power when the grid is down, and they keep or earn more for the energy generation capability they choose to invest in.
Break-even on solar panels happens sooner with net metering.
Utilities argue that maintaining grid storage and transfer costs money, which justifies paying energy generating consumers less than they pay for more constant sources of energy like dams, wind farms, and commercial solar plants.
Building a two-way power transfer grid costs money. Batteries require replacement after a limited number of cycles. Spiky or bursting power generation is not good for batteries because they don't get a full cycle. [Hemp] supercapacitors can smooth out that load and handle many more partial charge and discharge cycles.
Is energy storage the primary justifying cost driver for "buy all, sell all"?
What investments are needed in order to more strongly incentivize clean energy generation? Do we need low cost supercapacitors to handle the spiky load?
Are these utilities granted a monopoly? Are they price fixing?
Energy demand from blockchain mining has not managed to keep demand constant so that utilities have profit to invest in clean energy generation and a two-way smart grid that accommodates spiky consumer energy generation. Demand for electricity is falling as we become less wasteful and more energy efficient. As the cost of renewable energy continues to fall (and become less expensive than nonrenewables), there should be more margin for energy utilities which cost-rationally and environmentally-rationally choose to buy renewable energy and sell it to consumers.
Please correct me with the appropriate terminology.
How can we more strongly incentivize consumer solar panel investments?
Here's a discussion about the lower costs of hemp supercapacitors as compared with graphene super capacitors:
""" [energy. ""'
Portugal electricity generation temporarily reaches 100% renewable
Currently we are passing through an atypical wind and rain period. Our dams are full and pouring out water as we are producing more energy than we consume.
Despite all of this, energy prices don't drop, as they are bound to one of the few legal monopolies in Europe: the grid operation cartel (REN in the article; only one is allowed for each EU country).
Also this feels like fake news, since we have a few very old (and historically insecure, like the Sines and Carregado power plants) fossil fuel plants that are still operating with no signs of slowing down while making profits through several schemes that engineer their way into the zero-emissions lot.
Are batteries (or some sort of storage) in the infrastructure plan for the future?
If you're going to be mentioning this again in the future please correct your usage of power/energy density. Power density is measured in W/kg, energy density is measured in Wh/kg. Supercapacitors tend to excel in the former but be poor in the latter. You mentioned power density but used units for energy density. This happens so often in media that I feel the need to correct it even in a comment.
> please correct your usage of power/energy density. Power density is measured in W/kg, energy density is measured in Wh/kg. Supercapacitors tend to excel in the former but be poor in the latter.
I'd update the units; good call. You may have that confused? Traditional supercapacitors have had lower power density and faster charging/discharging. Graphene and hemp somewhat change the game, AFAIU.
It makes sense to put supercapacitors in front of the battery banks because they last so many cycles and because they charge and discharge so quickly (a very helpful capability for handling spiky wind and solar loads).
I think you may still be a little confused. Power density is the rate at which energy can be added to or drawn from the cell per unit mass. So faster charging and discharging means high power density. Energy density is the total amount of energy that can be stored per unit mass. Supercapacitors are typically higher in power density and lower in energy density than batteries[1].
You're right that it makes sense to put supercapacitors in front of the battery banks for the reasons you said.
[1]...
I must have logically assumed that rate of charge and discharge include time (hours) in the unit: Wh/kg.
My understanding is that there's usually a curve over time t that represents the charging rate from empty through full.
[edit]
"C rate"
Battery_(electricity)#C_rate
Battery_charger#C-rates
> Charge and discharge rates are often denoted as C or C-rate, which is a measure of the rate at which a battery is charged or discharged relative to its capacity. As such the C-rate is defined as the charge or discharge current divided by the battery's capacity to store an electrical charge. While rarely stated explicitly, the unit of the C-rate is [h^−1], equivalent to stating the battery's capacity to store an electrical charge in unit hour times current in the same unit as the charge or discharge current.
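A quick worked example of the definition quoted above; the battery capacity and charging current are illustrative:

```python
# C-rate = current / capacity; the unit works out to A / Ah = 1/h, so
# 1 / C-rate is the hours for a full charge at constant current.
def c_rate(current_amps, capacity_amp_hours):
    return current_amps / capacity_amp_hours  # units: 1/h

capacity_ah = 10.0  # a hypothetical 10 Ah battery
current_a = 2.0     # charging at a constant 2 A

rate = c_rate(current_a, capacity_ah)
print(rate)         # 0.2  (i.e. the battery charges at 0.2C)
print(1.0 / rate)   # 5.0  hours to a full charge
```

So Wh/kg (energy density) and W/kg (power density) really are different quantities, and the C-rate carries the per-hour part.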
It does sound amazing and economical, like almost too good to be true, but I very much hope it is true. What are the downsides? Is there a degradation problem or something similar? Other than the stoner connection in people's minds, what kind of resistance is there to this? Why isn't it widely known?
You know, I'm not sure. This article is from a few years ago now and there's not much uptake.
It may be that most people dismiss supercapacitors based on the stats for legacy (pre-graphene/pre-hemp) supercapacitors: large but quick and long-lasting.
It may be that hemp is taxed at up to 90% because it's a controlled substance in the US (but not in Europe, Canada, or China; where we must import shelled hemp seeds from). A historical accident?
GPU Prices Drop ~25% in March as Supply Normalizes
How do these new GPUs compare to those from 10 years ago in terms of FLOPs per Watt?
The new ASICs for Ethereum mining can't be solely responsible for this percent of the market.
(Note that NVIDIA's stock price is up over 1700% over the past 10 years. And that Bitcoin mining on CPUs and GPUs hasn't been profitable for quite awhile. In 2007, I don't think we knew that hashing could be done on GPUs; though there were SSL accelerator cards that were mighty expensive)
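A back-of-the-envelope FLOPS-per-watt comparison. The peak single-precision FLOPS and TDP figures below are approximate published specs, used only as rough assumptions for illustration:

```python
# Approximate spec-sheet numbers; treat both rows as rough assumptions.
gpus = {
    "GeForce 8800 GTX (2006)": (518e9, 155),   # ~518 GFLOPS, ~155 W TDP
    "GeForce GTX 1080 (2016)": (8873e9, 180),  # ~8.9 TFLOPS, ~180 W TDP
}

for name, (flops, watts) in gpus.items():
    print(f"{name}: {flops / watts / 1e9:.1f} GFLOPS/W")
# GeForce 8800 GTX (2006): 3.3 GFLOPS/W
# GeForce GTX 1080 (2016): 49.3 GFLOPS/W
```

Roughly a 15x efficiency gain over the decade, under these assumed specs.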
Apple says it’s now powered by renewable energy worldwide
As this article [1] explains, Apple does not (and cannot) actually run on 100% renewable energy globally, as any of its stores/premises/facilities that are connected to local municipal power grids will use whatever power generation method is used on that grid, and that is still likely to be fossil fuel in most locations.
But they purchase Renewable Energy Certificates to offset their use of non-renewable energy, so they can make the claim that their net consumption of non-renewable electricity is negative.
[1]...
This is weird. So if I run a wind farm that generates 1 MW(...h? why is this an energy unit and not power? but whatever...) and I use all of that electricity myself, I can also sell 1 REC to someone else so that they claim they run on green electricity. Which means that now either (a) I have to legally claim I run on dirty electricity (which is a lie on its face and make no sense???) or (b) we both claim we run on green power, double-dipping and screwing up the accounting of greenness.
Am I misunderstanding something? How does this work?
No that's how it works. That's also the reason why Norway, despite only using hydro, has only 40-50% renewable energy in some statistics. They sell green energy certificates to consumers abroad (e.g. in Germany). Officially, Norwegians then use coal power whereas in reality it's all hydro power. There isn't even enough transmission capacity to the south to get that kind of exchange physically.
Wow! And I just realized there seems to be another loophole: that means (say) a company like Apple could start a separate power company in Norway based on hydro power, have that company completely waste 100% of the energy it produce there, and yet "buy" the equivalent REC in another jurisdiction where they run on coal and suddenly get to 100% "green" power... potentially even making more money in tax credits, if there are any, all while consuming more and more dirty power without actually helping anybody shift to renewable energy. Right?
I think it comes down to giving them credit for funding the construction of massive clean energy projects, even if they don't exclusively use the electricity from that project themselves.
Look at Microsoft. They just funded a deal to build out an absolutely massive solar farm in Virginia....
They do have facilities in state that will only need a fraction of that power to be 100% green, and the excess will be pumped into the local Virginia power grid and used by consumers there.
Basically, Microsoft is saying that they funded the project to generate excess green energy in one place to offset the dirty energy they consume in areas where there is no local green power option available.
100% renewable energy by purchasing and funding renewable energy is an outstanding achievement.
Is there another statistic for measuring how many kWh or MWh are sourced directly from renewable energy sources (or, more logically, 'directly' from batteries + hemp supercapacitors between use and generation)?
Hackers Are So Fed Up with Twitter Bots They’re Hunting Them Down Themselves
This is an interesting approach. Maybe Twitter shouldn't solve the fake accounts problem directly, maybe they should come up with an evaluation criteria and then create a market for identifying fake accounts.
If their evaluation criteria is good, they could get away with 0 cost to build the best possible system (motivated by competition on a market).
There's an open call for papers/proposals for handling the deluge. "Funding will be provided as an unrestricted gift to the proposer's organization(s)" ... "Twitter Health Metrics Proposal Submission"...
A real hacker move would be to just leave Twitter and go to Mastodon
Are you suggesting that Mastodon has a better system for identifying harassment, spam, and spam accounts? Or that, given that they're mostly friendly early adopters, they haven't yet encountered the problem?
It seems to me like you don't understand the crucial difference between Twitter and Mastodon.
There's no such thing as Mastodon, a singular social network. Mastodon is a series of instances that talk to each other. A sysadmin running the instance can do whatever he pleases in his instance, including closing the registration, banning entire instances from communicating with his instance, and enforcing whichever rules he wants to enforce.
Mastodon doesn't deal with such issues at all. It's sysadmins running Mastodon instances that are supposed to deal with such issues.
It's more like reddit, where mods of subreddits have nearly complete authority over their own space on the social network, than it is like Twitter, in which a single entity is in charge.
Mastodon is a federated system like StatusNet/GNU Social.
So, in your opinion, Mastodon nodes - by virtue of being federated - would be better equipped to handle the spam and harassment volume that Twitter is subject to?
I find that hard to believe.
“We’re committing Twitter to increase the health and civility of conversation”
First Amendment protections apply to suits brought by the government. Civil suits are required to prove damages ("quantum of loss").
There are many open platforms. (I've contributed to those as well). Some are built on open standards. None of said open platforms have procedures or resources for handling the onslaught of disrespectful trash that the people we've raised eventually use these platforms for communicating at other people who have feelings and understand the Golden Rule.
The initial early adopters (who have other better things to do) are fine: helpful, caring, critical, respectful; healthy. And then everyone else comes surging in with hate, disrespect, and vitriol; unhealthy. They don't even realize that being hateful and disrespectful is making them more depressed. They think that complaining and talking smack to people is changing the world. And then they turn off the phone or log out of the computer, and carry on with their lives.
No-one taught them to be the positive, helpful energy they want to attract from the world. No-one properly conditioned them to either respectfully disagree according to the data or sit down and listen. No-one explained to them that a well-founded argument doesn't fit in 140 or 280 characters, but a link and a headline do. No-one explained to them that what they write on the internet lasts forever and will be found by their future interviewers, investors, jurors, and voters. No-one taught them that being respectful and helpful in service of other people - of the group's success, of peaceful coexistence - is the way to get ahead AND be happy. "No-one told me that."
Shareholders of public corporations want to see growth in meaningless numbers, foreign authoritarian governments see free expression as a threat to their ever-so-fragile self-perceptions, political groups seek to frame and smear and malign and discredit (because they are so in need of group acceptance; because money still isn't making them happy), and there are children with too much free time reading all of these.
No-one is holding these people accountable: we need transparency and accountability. We need to focus on more important goals and feel good about helping; about volunteering our time to help others be happier.
Instead, now that these haters and scam artists have all self-identified, we must spend our time conditioning their communications until they learn to respectfully disagree on facts and data or go somewhere else. "That's how you feel? Great. How does that make your victim feel?" is the confrontation that some people are seeking from companies that set out to serve free speech and provide a forum for citizens to share the actual news.
Who's going to pay for that? Can they sue for their costs and losses? Advertisers do not want a spot next to hateful and disrespectful.
"How dare you speak of censorship in such veiled terms!?" Really? They're talking about taking down phrases like "kill" and "should die"; not phrases like "I disagree because:"
So, now, because there are so many hateful economically disadvantaged people in the world with nothing better to do and no idea how to run a business or keep a job with benefits, these companies need to staff 24 hour a day censors to take down the hate and terror and gang recruiting within one hour. What a distorted mirror of our divisively fractured wealth inequality, indeed.
"Ban gangs ASAP, please: they'll just go away"
How much does it cost to pay prison labor to redundantly respond to this trash? Are those the skills they need to choose a different career with benefits and savings that meet or exceed inflation when they get out?
What is the procedure for referring threats of violence to justice in your jurisdiction? Are there wealthy individuals in your community who would love to contribute resources to this effort? Maybe they have some region-specific pointers for helping the have-nots out here trolling like it's going to get them somewhere they want to be in life?
Let me share a little story with you:
A person walks into a bar/restaurant, flicks off the bartender/waiter, orders 5 glasses of free water, starts plastering ads to the walls and other peoples' tables, starts making threats to groups of people cordially conversing, and walks out.
Gitflow – Animated in React
Thanks! A command log would be really helpful too.
The HubFlow docs contain GitFlow docs and some really helpful diagrams:
I change the release prefix to 'v' so that the git tags for the release look like 'v0.0.1' and 'v0.1.0':
I usually use HubFlow instead of GitFlow because it requires there to be a Pull Request; though GitFlow does work when offline / without access to GitHub.
git config --replace-all gitflow.prefix.versiontag v
git config --replace-all hubflow.prefix.versiontag v
Ask HN: How feasible is it to become proficient in several disciplines?
For example to become a professional in:
- back-end api development
- DevOps
- Data Engineer (big data, data science, ML, etc)
It is feasible, though as with any broad generalization, you're then a "jack of all trades, master of none". Maybe a title like "Full Stack Data Engineer" would be descriptive.
You could write an OAuth API for accepting and performing analysis of datasets (model fitting / parameter estimation; classification or prediction), write a test suite, write Kubernetes YAML for a load-balanced geodistributed dev/test/prod architecture, and continuously deploy said application (from branch merges, optionally with a manual confirmation step; e.g. with GitLab CI) and still not be an actual Data Engineer.
After rising for 100 years, electricity demand is flat
Cryptocurrency mining is about to consume more electricity than home usage in Iceland.[1] I assume it's similar in other places that have cheaper electricity.
Seems that power companies should encourage consumers to mine Bitcoin. Problem solved.
[1]...
> Seems that power companies should encourage consumers to mine Bitcoin. Problem solved.
Blockchains will likely continue to generate considerable demand for electricity for the foreseeable future.
Blockchain firms can locate where energy is cheapest. Currently that's in countries where energy prices go negative due to excess capacity and insufficient energy storage resources (batteries, [hemp/graphene] supercapacitors, water towers).
With continued demand, energy companies can continue to invest in new clean energy generation alternatives.
Unfortunately, in the current administration's proposed budget, funding for ARPA-E is cancelled and allocated to clean coal; which Canada, France, and the UK are committed to phasing out entirely by ~2030.
A framework for evaluating data scientist competency
Something HTML with local state like "Programmer Competency Matrix" would be great.
Levi Strauss to use lasers instead of people to finish jeans
> The firm says the new techniques will reduce chemical use and make the way in which jeans are faded, distressed and ripped more efficient.
Yes, but can they make them as comfortable as this pair I've been working on for many years?
Can they sew/weave cool patches in?
Chaos Engineering: the history, principles, and practice
awesome-chaos-engineering lists a bunch of chaos engineering resources and tools such as Gremlin:
Scientists use an atomic clock to measure the height of a mountain
Quantum_clock#More_accurate_experimental_clocks:...
AFAIU, this type of geodesy isn't possible with 'normal' time structs. Are nanoseconds enough?
"[Python-Dev] PEP 564: Add new time functions with nanosecond resolution"...
The thread you linked makes a good point: there isn't really any reason to care about the actual time, only the relative time. These sorts of clocks just use 256- or 512-bit counters; it's not like they're having overflow issues.
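For reference, the rationale in the PEP 564 thread linked above is that float seconds lose sub-microsecond resolution at the current epoch, which is why `time.time_ns()` returns an integer; a quick check:

```python
import time

ns = time.time_ns()   # integer nanoseconds since the epoch (PEP 564)
s = time.time()       # float seconds: only a 53-bit mantissa

# Near 1.7e9 seconds, adjacent doubles are 2**-22 s (~238 ns) apart,
# so the float form cannot carry full nanosecond resolution.
print(type(ns).__name__, ns)
```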
Resources to learn project management best practices?
My side project is beginning to attract interest from a few people who would like to hop on board. At this point I am just doing what feels familiar and sensible, but the project manager perspective is new to me. Are there any sort of articles/books/podcasts/etc that could clue me into how to become better at it?
Project Management:... ... #requirements-traceability, #work-breakdown-structure (Mission, Project, Goal/Objective #n; Issue #n, - [ ] Task)
"Ask HN: How do you, as a developer, set measurable and actionable goals?"
- Burndown Chart, User Stories
... GitHub and GitLab have milestones and reorderable issue boards. I still like for complexity points; though you can also just create labels for e.g. complexity (Complexity-5) and priority (Priority-5).
Ask HN: Thoughts on a website-embeddable, credential validating service?
Reading Troy Hunt's password release V2 blog post [0], I came across the NIST recommendation to prevent users from creating accounts with passwords discovered in data breaches. This got me thinking: would a website admin (ex. small business owner with a custom website) benefit from a service that validates user passwords? The idea is to create a registration iframe with forms for email, password, etc., which would check hashed credentials against a database of data from breaches. Additionally, client-side validation would enforce rules recommended by the NIST's Digital Identity Guidelines [1], which would relieve admins from implementing their own rules. I'm sure there are additional security features that can be added.
1. Have you seen a need for this type of service, and could you see this being adopted at all?
2. Do you know of a service like this? I've looked, no hits so far.
3. Does the architecture seem sound?
[0]:
[1]:
blockchain-certificates/cert-verifier-js:
> A library to enable parsing and verifying a Blockcert. This can be used as a node package or in a browser. The browserified script is available as verifier.js.
> The cert-issuer project issues blockchain certificates by creating a transaction from the issuing institution to the recipient on the Bitcoin blockchain that includes the hash of the certificate itself.
... We could/should also store X.509 cert hashes in a blockchain.
Exactly what part of such a service would benefit from anything related to a blockchain?
Are you asking me why blockcerts stores certs in a blockchain?
Or whether using certs (really long passwords) is a better option than submitting unhashed passwords at a given datetime to a third party in order to make sure they're not in the pwned-passwords tables?
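To be concrete, here is a minimal sketch of the k-anonymity scheme the Pwned Passwords range API uses: only the first five hex characters of the SHA-1 leave the client, and the suffix is matched locally against the returned list (the function name is illustrative):

```python
import hashlib

def split_for_range_query(password: str) -> tuple[str, str]:
    """Split the password's SHA-1 into the 5-char prefix sent to the
    range API and the 35-char suffix checked locally (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = split_for_range_query("correct horse battery staple")
# Only `prefix` is transmitted; the server returns every hash suffix
# sharing that prefix, and the client looks for `suffix` in the response.
print(prefix, len(suffix))
```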
I was just reading about a company trying to make self-sovereign identity, including actual certs (like degrees and such), an accessible and widely applicable/acceptable technology using the Ethereum blockchain. I thought it showed some real practicality and promise. I believe it begins with U; forgot the name. Perhaps UPort? Anyhow, I'd be interested in hearing from anyone here about why that might be a bad or good idea. I don't personally have the skill in that tech to know.
Known Traveler Digital Identity system is a "new model for airport screening and security that uses biometrics, cryptography and distributed ledger technologies."
Blockcerts are for academic credentials, AFAIU.
[EDIT]
Existing blockchains have a limited TPS (transactions per second) for writes; but not for reads. Sharding and layer-2 (sidechains) do not have the same assurances. I'm sure we all remember how cryptokitties congested the txpool during the Bitcoin futures launch.
Thank you. I looked into "clear", one of the airport known traveler ID systems. (I'm assuming there are others.) It's pretty cool/concerning. Takes ~8 min to load a traveler into its system. Thanks for reminding me of TPS: the info on read vs. write.
Ask HN: What's the best algorithms and data structures online course?
These aren't courses, but from answers to "Ask HN: Recommended course/website/book to learn data structure and algorithms" :]
Using Go as a scripting language in Linux
I, too, didn't realize that shebang parsing is implemented in the `binfmt_script` kernel module.
Does this persist across reboots?
echo ':golang:E::go::/usr/local/bin/gorun:OC' | sudo tee /proc/sys/fs/binfmt_misc/register
No, but different init systems may autoload formats based on some configuration files. Systemd, for instance:...
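For example, dropping the same registration string into a file under /etc/binfmt.d/ (the filename here is illustrative) lets systemd-binfmt re-register the format at boot:

```
# /etc/binfmt.d/golang.conf
:golang:E::go::/usr/local/bin/gorun:OC
```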
Guidelines for enquiries regarding the regulatory framework for ICOs [pdf]
This is a helpful table indicating, for Payment, Utility, Asset, and Hybrid coins/tokens, whether each is a security and whether it qualifies under Swiss AML payment law.
The "Minimum information requirements for ICO enquiries" appendix seems like a good set of questions for evaluating ICOs. Are there other good questions to ask when considering whether to invest in a Payment, Utility, Asset, or Hybrid ICO?
Are US regulations different from these clear and helpful regulatory guidelines for ICOs in Switzerland?
> Are there other good questions to ask when considering whether to invest in a Payment, Utility, Asset, or Hybrid ICO?
This paper doesn't seem to cover that; it only covers how regulators should treat investments.
On investing in ICOs, the questions are the same as any other IPO. And, in most cases, just speculation.
The Benjamin Franklin method for learning more from programming books
> Read your programming book as normal. When you get to a code sample, read it over
> Then close the book.
> Then try to type it up.
According to a passage in "The Autobiography of Benjamin Franklin" (1791) regarding re-typing from "The Spectator"...
EBook:
Avoiding blackouts with 100% renewable energy
I notice that cases A and C require batteries for storage.
Should there be a separate entry for new gen supercapacitors? Supercapacitors built with both graphene and hemp have different Max Charge Rate (GW), Max Discharge Rate (GW), and Storage (TWh) capacities than even future-extrapolated batteries and current supercapacitors.
The cost and capabilities stats in this article look very promising:
....
To be clear, supercapacitors are an alternative to li-ion batteries.
"Matching demand with supply at low cost in 139 countries among 20 world regions with 100% intermittent wind, water, and sunlight (WWS) for all purposes" (Renewable Energy, 2018)...
Ask HN: What are some common abbreviations you use as a developer?
These are called 'codelabels'. They're great for prefix-tagging commit messages, pull requests, and todo lists:
BLD: build
BUG: bug
CLN: cleanup
DOC: documentation
ENH: enhancement
ETC: config
PRF: performance
REF: refactor
RLS: release
SEC: security
TST: test
UBY: usability
DAT: data
SCH: schema
REQ: requirement
REQ: request
ANN: announcement
STORY: user story
EPIC: grouping of user stories
There's a table of these codelabels here:...
Someday TODO FIXME XXX I'll get around to:
- [ ] DOC: create a separate site/organization for codelabels
- [ ] ENH: a tool for creating/renaming GitHub labels with unique foreground and background colors
YAGNI: Ya' ain't gonna need it
LOL, lulz
DRY: Don't Repeat Yourself
KISS: Keep It Super Simple
MVC: Model-View-Controller
MVT: Model-View-Template
MVVM: Model-View-View-Model
UI: User Interface
UX: User Experience
GUI: Graphical User Interface
CLI: Command Line Interface
CAP: Consistency, Availability, Partition tolerance
DHT: Distributed Hash Table
ETL: Extract, Transform, and Load
ESB: Enterprise Service Bus
MQ: Message Queue
VM: Virtual Machine
LXC: Linux Containers
[D]VCS, RCS: [Distributed] Version/Revision Control System
XP: Extreme Programming
CI: Continuous Integration
CD: Continuous Deployment
TDD: Test-Driven Development
BDD: Behavior-Driven Development
DFS, BFS: Depth/Breadth First Search
CRM: Customer Relationship Management
CMS: Content Management System
LMS: Learning Management System
ERP: Enterprise Resource Planning system
HTTP: Hypertext Transfer Protocol
HTTP STS: HTTP Strict Transport Security
REST: Representational State Transfer
API: Application Programming Interface
HTML: Hypertext Markup Language
DOM: Document Object Model
LD: Linked Data
LOD: Linked Open Data
URI: Uniform Resource Identifier
URN: Uniform Resource Name
URL: Uniform Resource Locator
UUID: Universally Unique Identifier
RDF: Resource Description Framework
RDFS: RDF Schema
OWL: Web Ontology Language
JSON-LD: JSON Linked Data
JSON: JavaScript Object Notation
CSVW: CSV on the Web
CSV: Comma Separated Values
CIA: Confidentiality, Integrity, Availability
ACL: Access Control List
RBAC: Role-Based Access Control
MAC: Mandatory Access Control
CWE: Common Weakness Enumeration
CVE: Common Vulnerabilities and Exposures
XSS: Cross-Site Scripting
CSRF: Cross-Site Request Forgery
SQLi: SQL Injection
ORM: Object-Relational Mapping
AUC: Area Under Curve
ROC: Receiver Operating Characteristic
DL: Description Logic
RL: Reinforcement Learning
CNN: Convolutional Neural Network
DNN: Deep Neural Network
IS: Information Systems
ROI: Return on Investment
RPU: Revenue per User
MAU: Monthly Active Users
DAU: Daily Active Users
STEM: Science, Technology, Engineering, Mathematics/Medicine
STEAM: STEM + Arts
W3C: World Wide Web Consortium
GNU: GNU's not Unix
WRDRD: WRD R&D
... The Sphinx ``.. index::`` directive makes it easy to include index entries for acronym forms, too
There Might Be No Way to Live Comfortably Without Also Ruining the Planet
"A good life for all within planetary boundaries" (2018)
> Abstract: [...] based on current relationships. Strategies to improve physical and social provisioning systems, with a focus on sufficiency and equity, have the potential to move nations towards sustainability, but the challenge remains substantial.
Perhaps ironically, our developments in service of the sustainability (resource-efficiency) needs of a civilization on Mars are directly relevant to solving these problems on Earth.
Recycle everything.
Survive without soil, steel, hydrocarbons, animals, oxygen.
Convert CO2, sunlight, H20, and geothermal energy to forms necessary for life.
Algae, carbon capture, carbon sequestration, lab grown plants, water purification, solar power, [...]
Mars requires a geomagnetic field in order to sustain an atmosphere in order to [...].
"The Limits to Growth" (1972, 2004) [1] very clearly forecasts these same unsustainable patterns of resource consumption: 'needs' which exceed and transgress our planetary biophysical boundaries.
The 17 UN Sustainable Development Goals (#GlobalGoals) [2] outline our worthwhile international objectives (Goals, Targets, and Indicators). The Paris Agreement [3] sets targets and asks for commitments from nation states (and businesses) to help achieve these goals most efficiently and most sustainably.
In the US, the Clean Power Plan [4] was intended to redirect our national resources toward renewable energy with far less external costs. Direct and indirect subsidies for nonrenewables are irrational. Are subsidies helpful or necessary to reach production volumes of renewable energy products and services?
There are certainly financial incentives for anyone who chooses to invest in solving for the Global Goals; and everyone can!
[1]
[2]...
[3]
[4]
Multiple GWAS finds 187 intelligence genes and role for neurogenesis/myelination
> We found evidence that neurogenesis and myelination—as well as genes expressed in the synapse, and those involved in the regulation of the nervous system—may explain some of the biological differences in intelligence.
re: nurture, hippocampal plasticity and hippocampal neurogenesis also appear to be affected by dancing and omega-3,6 (which are transformed into endocannabinoids by the body):
Could we solve blockchain scaling with terabyte-sized blocks?
These numbers in a computational model (or even Jupyter notebooks) would be useful.
We may indeed need fractional satoshis ('naks').
With terabyte blocks, lightning network would be unnecessary: at least for TPS.
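As a rough sketch of the TPS arithmetic (block size, average transaction size, and block interval are all assumptions):

```python
# Back-of-envelope throughput for terabyte blocks.
BLOCK_BYTES = 10**12      # 1 TB block (assumption)
TX_BYTES = 250            # typical simple-transaction size (assumption)
BLOCK_INTERVAL_S = 600    # ~10-minute block interval

txs_per_block = BLOCK_BYTES // TX_BYTES
tps = txs_per_block / BLOCK_INTERVAL_S
print(f"{txs_per_block:,} tx/block, ~{tps:,.0f} TPS")
```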
There will need to be changes to account for quantum computing capabilities somewhere in the future timeline of Bitcoin (and everything else in banking and value-producing industry): probably a different hash function rather than just a routine difficulty increase (and definitely something other than ECDSA, which isn't a primary cost). $1.3m/400k a year to operate a terabyte mining rig with 50 Gbps bandwidth would affect decentralization; though maybe not any more than it is already affected now... (51%)
Confidence intervals for these numbers would be useful.
Casper PoS and beyond may also affect future Bitcoin volume estimates.
Ask HN: Do you have ADD/ADHD? How do you manage it?
Also, how has it affected your CS career? I feel that transitioning to management would help, as it does not require lengthy periods of concentration, but rather distributed attention for shorter periods.
Music. Headphones. Chillstep, progressive, chillout etc. from di.fm. Long mixes from SoundCloud with and without vocals. "Instrumental"
Breathe in through the nose and out through the mouth.
Less sugar and processed foods. Though everyone has a different resting glucose level.
Apparently it's called alpha-pinene.
Fidget things. Rubberband, paperclip.
The Pomodoro Technique: work 25 minutes, chill for 5 (and look at something at least 20 feet away (20-20-20 rule))
Lists. GTD. WBS.
Exercise. Short walks.
Ask HN: How to understand the large codebase of an open-source project?
Hello All!
What are the techniques you all used to learn and understand a large codebase? What tools do you use?
Write the namespace outline out by hand on a whiteboard or a sheet of paper.
Use a static analyzer to build a graph of the codebase.
Build an adjacency list and a graph of the imports; and topologically + (…) sort.
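For instance, with Python's stdlib graphlib, the import graph and topological sort from the last step can be sketched as (module names are made up):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy import graph: module -> modules it imports. A real graph would
# come from a static analyzer / AST walk over the codebase.
imports = {
    "app": {"views", "models"},
    "views": {"models", "utils"},
    "models": {"utils"},
    "utils": set(),
}

# Dependencies come before dependents: a reasonable reading order.
order = list(TopologicalSorter(imports).static_order())
print(order)
```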
What is the best way to learn to code from absolute scratch?
We have been hosting a Ugandan refugee in our home in Oakland for the past 9 months and he wants to learn how to code.
Where is the best place for him to start from absolute scratch? What resources can we point him to? Who can help?
Here's an answer to a similar question: "Ask HN: How to introduce someone to programming concepts during 12-hour drive?" (Python3) (Javascript) (Git) (Markdown)
Read the docs. Read the source. Write docstrings. Write automated tests: that's the other half of the code.
Keep a journal of your knowledge as e.g. Markdown or ReStructuredText; regularly pull the good ones from bookmarks and history into an outline.
I keep a tools reference doc with links to Wikipedia, Homepage, Source, Docs:
And a single-page log of my comments:
> To get a job, "Coding Interview University":
Tesla racing series: Electric cars get the green light – Roadshow
Tesla Racing Circuit ideas for increasing power discharge rate, reducing heat, and reducing build weight:
Hemp supercapacitors (similar power density as graphene supercapacitors and li-ion, lower cost than graphene)
Active cooling. Modified passive cooling.
Biocomposite frame and panels (stronger and lighter than steel and aluminum (George Washington Carver))
> Biocomposite frame and panels (stronger and lighter than steel and aluminum (George Washington Carver))
"Soybean Car" (1941)
What happens if you have too many jupyter notebooks?
These days there is a tendency in data analysis to use Jupyter Notebooks. But what happens if you have too many jupyter notebooks? For example, there are more than a hundred.
Actually, you start creating some modules. However, it is less convenient to work with them compared to what was before. It happens that you should code in web interface, somewhere in similar to the notepad++ form or you should change your IDLE.
Personally, I work in Pycharm and so far I couldn't assess remote interpreter or VCS. It is because pickle files or word2vec weighs too much (3gb+) and so I don't want to download/upload them. Also Jupyter isn't cool in pycharm.
Do you have better practices in your companies? How to correctly adjust IDLE? Do you know about any possible substitution for the IPython notebook in the world of data analysis?
> what happens if you have too many jupyter notebooks? For example, there are more than a hundred.
Like anything else, Jupyter Notebook is limited by the CPU and RAM of the system hosting the Tornado server and Jupyter kernels.
At 100 notebooks (or even just one), it may be a good time to factor common routines into a packaged module with tests and documentation.
It's actually possible (though inefficient) to import code from Jupyter notebooks with ipython/ipynb (pypi:ipynb): (... )
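As a rough illustration of what such a notebook import does under the hood (a toy sketch, not the `ipynb` package's actual implementation):

```python
import json
import types

def import_notebook(path: str, name: str = "nb") -> types.ModuleType:
    """Toy sketch: exec a notebook's code cells into a fresh module
    namespace. The real `ipynb` package also handles defs-only
    filtering, magics, and import hooks."""
    mod = types.ModuleType(name)
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            exec("".join(cell["source"]), mod.__dict__)
    return mod
```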
> Actually, you start creating some modules. However, it is less convenient to work with them compared to what was before. It happens that you should code in web interface, somewhere in similar to the notepad++ form or you should change your IDLE.
The Spyder IDE has support for .ipynb notebooks converted to .py (which have the IPython prompt markers in them). Spyder can connect an interpreter prompt to a running IPython/Jupyter kennel. There's also a Spyder plugin for Jupyter Notebook:
> Personally, I work in Pycharm and so far I couldn't assess remote interpreter or VCS. It is because pickle files or word2vec weighs too much (3gb+) and so I don't want to download/upload them.
Remote data access times can be made faster by increasing the space efficiency of the storage format, increasing the bandwidth of the connection, moving the data to the code, or moving the code to the data.
> Do you have better practices in your companies?
There are a number of [Reproducible] Data Science cookiecutter templates which have a directory for notebooks, module packaging, and Sphinx docs:...
Refactoring increases testability and code reuse.
> How to correctly adjust IDLE?
I don't think I understand the question?
"Configuring IPython"
Jupyter > "Installation, Configuration, and Usage"...
> Do you know about any possible substitution for the IPython notebook in the world of data analysis?
From :
> > "Examples of the notebook interface include the Mathematica notebook, Maple worksheet, MATLAB notebook, IPython/Jupyter, R Markdown, Apache Zeppelin, Apache Spark Notebook, and the Databricks cloud."
There are lots of Jupyter kernels for different tools and languages (over 100; including for other 'notebook interfaces'):
And there are lots of Jupyter integrations and extensions:...
Cancer ‘vaccine’ eliminates tumors in mice
The article is about this study:
"Eradication of spontaneous malignancy by local immunotherapy"
> In situ vaccination with low doses of TLR ligands and anti-OX40 antibodies can cure widespread cancers in preclinical models.
Boosting teeth’s healing ability by mobilizing stem cells in dental pulp
Tideglusib
> "Promotion of natural tooth repair by small molecule GSK3 antagonists"
> [...] Here we describe a novel, biological approach to dentine restoration that stimulates the natural formation of reparative dentine via the mobilisation of resident stem cells in the tooth pulp.
This Biodegradable Paper Donut Could Let Us Reforest the Planet
"These drones can plant 100,000 trees a day"
> Called the Cocoon, this simple invention protects seedlings from harsh arid climates and reduces the amount of water they need to thrive–and boosts their survival rate by as much as 80%.
Drones that can plant 100k trees a day
> It’s simple maths. We are chopping down about 15 billion trees a year and planting about 9 billion. So there’s a net loss of 6 billion trees a year.
This is a regional thing [0] though. We need to plant the trees in Latin America, Caribbean, and Sub-Saharan Africa. The rest of the world is gaining forests.
[0]...
Now, the question is: which industry sector cuts down the largest percentage of trees? And is there a way to do the same thing without cutting trees?
Answering that question and then executing a business plan is probably worth billions.
Planting trees to combat the 6 billion trees lost every year is a pure expense, with no profit to be made at all; at least not unless you cut them down a few decades later.
A very good way to absorb atmospheric carbon is to plant new trees and cut down and use old ones for anything else but burning it.
"This Biodegradable Paper Donut Could Let Us Reforest The Planet"
I think this kind of thing is funny for us westerners. I recently found this tech has been used in arid countries for possibly hundreds or thousands of years....
I do realize the ollas are for more permanent gardens, but it's the same concept. I do hope the paper version gets used, there are plenty of places that could benefit from it.
What are some YouTube channels to progress into advanced levels of programming?
There are some cool YouTube channel suggestions on But I wanted to know which of those are great to progress into advanced level of programming? Which of the channels teach advanced techniques?
These aren't channels, but a number of the links are marked with "(video)":...
Multiple issue and pull request templates
+1
Default: /ISSUE_TEMPLATE.md
/ISSUE_TEMPLATE/<name>.md
Default: /PULL_REQUEST_TEMPLATE.md
/PULL_REQUEST_TEMPLATE/<name>.md
You can leave off the last S (for savings) if you want /ISSUE_TEMPLATE(S)/ or /PULL_REQUEST_TEMPLATE(S)/
Five myths about Bitcoin’s energy use
Regarding the proof-of-stake part: Ethereum devs are also working on sharding which will make scaling on the Ethereum Blockchain way easier. I really think that Ethereum will be the No.1 cryptocurrency in the near future. Bitcoin devs showed plenty of times, that they are not capable of keeping pace with demand (couldn't even get the block size thing right). Bitcoin is virtually dead. No one can really use it for real world transactions. Plus Bitcoin in reality is a really central coin, completely in the hands of the miners. Even if the Bitcoin devs decided to go with PoS, the miners wouldn't agree.
Proof of Work (Bitcoin*, ...), Proof of Stake (Ethereum Casper), Proof of Space, Proof of Research (GridCoin, CureCoin, ...)
Plasma (Ethereum) and Lightning Network (Bitcoin (SHA256), Litecoin (scrypt), ...) will likely offload a significant amount of transaction volume and thereby reduce the kWh/transaction metrics.
> But electricity costs matter even more to a Bitcoin miner than typical heavy industry. Electricity costs can be 30-70% of their total costs of operation.
> [...] If Bitcoin mining really does begin to consume vast quantities of the global electricity supply it will, it follows, spur massive growth in efficient electricity production—i.e. the green energy revolution. Moore’s Law was partially a story about incredible advances in materials science, but it was also a story about incredible demand for computing that drove those advances and made semiconductor research and development profitable. If you want to see a Moore’s-Law-like revolution in energy, then you should be rooting for, and not against, Bitcoin. The fact is that the Bitcoin network, right now, is providing a $200,000 bounty every 10 minutes (the mining reward) to the person who can find the cheapest energy on the planet.
This is ridiculous. The economy is already incentivized to find cheaper electricity by forces far more powerful than Bitcoin. By this logic, you would defend a craze for trying to boil the sea.
If the market had internalized the external health, environmental, and defense costs of nonrenewable energy, we would already have cheap, plentiful renewable energy. But we don't: the market is failing to optimize for factors other than margin. (New Keynesian economics admits market failure, but not non-rationality.)
So, (speculative_valuation - cost) is the margin. Whereas with a stock in a leveraged high-frequency market with shorting, (shareholder_equity - market_cap) is explainable in terms of the market information that is shared.
So, it's actually (~$200K-(n_kwhrs*cost_kwhr)) for whoever wins the block mining lottery (which is about every 10 minutes and can be anyone who's mining).
But the point about Bitcoin maintaining demand for energy while we move to competitive, lower-cost renewable energy and greater efficiency is good.
What we should hope to see is the blockchain industry directly investing in clean energy capacity development in order to rationally minimize their primary costs and maximize environmental sustainability.
It is kind of missing the point though. If everyone was competing to make electrical single-person planes to help people commute to work it would also increase demand for renewable/electrical energy - but can we agree, that just using electrical cars instead of planes is going to take a lot less electricity overall?
Yes, and then energy prices would decrease due to less demand. Blockchain energy usage maintains demand for energy; which keeps prices high enough that production of renewables can profitably compete with nonrenewables while we reach production volumes of solar, wind, and hemp supercapacitors for grid storage.
> Throughout the first half of 2008, oil regularly reached record high prices.[2][3][4][5] Prices on June 27, 2008, touched $141.71/barrel, for August delivery in the New York Mercantile Exchange [...] The highest recorded price per barrel maximum of $147.02 was reached on July 11, 2008.
At that price, there's more demand for renewables (such as electric vehicles and solar panels)
> Since late 2013 the oil price has fallen below the $100 mark, plummeting below the $50 mark one year later....
... Energy costs and inflation are highly covariate. (Trouble is, CPI All rarely ever goes back down)
Bitcoin's energy use will tend to rise to match the mining profits. As long as both BTC's price and transaction fees keep rising, the energy miners spend will continue to rise. BTC's price is going up due to speculation, and transaction fees due to blocks being full.
Even at current levels, BTC's built in block rewards dominate tx fees (12.50BTC built in + 1.38 tx fees). The 12.5 built in reward is essentially a subsidy for mining.
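At the figures quoted above, the subsidy's share of miner revenue is easy to check (a back-of-the-envelope sketch using those numbers, not live network data):

```python
# Rough split of miner revenue per block at the figures quoted above
# (12.5 BTC block subsidy + ~1.38 BTC in transaction fees; illustrative only).
block_subsidy_btc = 12.5
tx_fees_btc = 1.38

total_reward = block_subsidy_btc + tx_fees_btc
subsidy_share = block_subsidy_btc / total_reward

print(f"Total reward per block: {total_reward:.2f} BTC")
print(f"Subsidy share of revenue: {subsidy_share:.1%}")  # ~90%
```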
The block reward is an incentive for redundant distributed replica nodes.
Ask HN: Which programming language has the best documentation?
Python! The Python docs are written in ReStructuredText and built with Sphinx.
Ask HN: Recommended course/website/book to learn data structure and algorithms
I am a full-time Android developer who does most of his programming work in Java. I am a non-CS graduate, so I didn't study a data structures and algorithms course at university. I am not familiar with this subject, which is hindering my prospects of getting better programming jobs. There are so many resources out there on this subject that I am unable to decide which one is best for my case. Could someone please point me in the right direction? Thanks.
Why is quicksort better than other sorting algorithms in practice?
The top-voted response is helpful: throwing away constants as in Big-O notation can be misleading, and the worst case isn't the average case; see Sedgewick's Algorithms book.
ORDO: a modern alternative to X.509
There are a number of W3C specs for this type of thing.
Linked Data Signatures (ld-signatures) relies upon a graph canonicalization algorithm that works with any RDF format (RDF/XML, JSON-LD, Turtle,)
> The signature mechanism can be used across a variety of RDF data syntaxes such as JSON-LD, N-Quads, and TURTLE, without the need to regenerate the signature
A defined way to transform ORDO to RDF would be useful for WoT graph applications.
WebID can express X509 certs with the cert ontology. {cert:X509Certificate, cert:PGPCertificate,} rdfs:subClassOf cert:Certificate
ld-signatures is newer than WebID.
(Also, we should put certificates in a blockchain; just like Blockcerts (JSON-LD))
Wine 3.0 Released
Kimbal Musk is leading a $25M mission to fix food in US schools
+1. The introduction to "Nudge: Improving Decisions about Health, Wealth, and Happiness" discusses how choices about food placement in cafeterias influence students' dietary decisions.
Spinzero – A Minimal Jupyter Notebook Theme
+1. The Computer Modern serif fonts look legit. Like LaTeX legit.
Now, if we could make the fonts unscalable and put things in two columns (in order to require extra scrolling and 36 character wide almost-compiling copy-and-pasted code samples without syntax highlighting) we'd be almost there!
I searched for Computer Modern fonts and they're all available here:
I am surprised why these beauties are not widely adopted on websites and such. I agree, they just look very disciplined and professional.
I'd hope someday these relics are hosted on Google Fonts.
What does the publishing industry bring to the Web?
Q: What does the publishing industry bring to the Web?
A: PDF hosting, comments, a community of experts
FWIU, Publishing@W3C proposes WPUB [1] instead of PDF or MHTML for 'publishing' .
How do WPUB canonical identifiers (which reference/redirect(?) to the latest version of the resource) work with W3C Web Annotations attached to e.g. sentences within a resource identified with a URI? When the document changes, what happens to the attached comments? This is also a problem with PDFs: with a filename like document-20180111-v01.pdf and a stable(!) URL like, we can add Web Annotations to that URI; but with a new URI, those annotations are lost.
[1]
Git is a blockchain
Bitcoin is very much inspired by git; though in terms of immutability it's more similar to mercurial and subversion (git push -f)
Git accepts whatever timestamp a node chooses to add to a commit. This can cause interesting sorts in terms of chronological and topological sort orders.
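A toy sketch of why client-supplied timestamps break chronological ordering (the commit graph and timestamps here are made up, not real Git internals):

```python
# Toy commit DAG: each commit names its parent and carries a client-supplied
# timestamp. A child can claim an *earlier* timestamp than its parent, so
# sorting by date disagrees with the ancestry (topological) order.
commits = {
    "a1": {"parent": None, "timestamp": 100},
    "b2": {"parent": "a1", "timestamp": 300},
    "c3": {"parent": "b2", "timestamp": 200},  # backdated child of b2
}

def topological_order(commits):
    """Walk parent links from the tip back to the root, then reverse."""
    tip = "c3"  # assume a single known tip for this sketch
    order, node = [], tip
    while node is not None:
        order.append(node)
        node = commits[node]["parent"]
    return list(reversed(order))

chronological = sorted(commits, key=lambda c: commits[c]["timestamp"])
print(topological_order(commits))  # ['a1', 'b2', 'c3']
print(chronological)               # ['a1', 'c3', 'b2'] -- disagrees
```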
Without an agreed-upon central git server there is not a canonical graph.
You can use GPG signatures with Git, but you need to provide your own keyserver and then there's still no way to enforce permissions (e.g. who can ALTER, UPDATE, or DELETE which files).
Git is a directed acyclic graph (DAG). Not a chain. Blockchains are chains to prevent double-spending (e.g. on a different fork).
Bitcoin was accepted by The Linux Foundation (Linus Torvalds wrote Git):
It's always been a mix of inspiration in my mind:
- Torrent P2P file sharing.
- Git like data structure and protocol
- Immutability from functional programming
- Public key cryptography
Show HN: Convert Matlab/NumPy matrices to LaTeX tables
LaTeX must be escaped in order to prevent LaTeX injection.
AFAIU, numpy.savetxt does not escape LaTeX characters?
Jupyter Notebook rich object display protocol checks for obj._repr_latex_() when converting a Jupyter notebook from .ipynb to LaTeX.
The Pandas _repr_latex_() function calls to_latex(escape=True†).
† The default value of escape (and a few other presentational parameters) is determined from the display.latex.escape option.
df = pd.read_csv('filename.csv', ); df.to_latex(escape=True)
Or, with a Jupyter notebook:
df = pd.read_csv('filename.csv', ); df
# $ jupyter nbconvert --to latex filename.ipynb
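The escaping concern can be sketched without pandas; this hypothetical minimal escaper is not the implementation pandas uses, just an illustration of why untrusted cell values must be escaped before they reach LaTeX:

```python
# Minimal LaTeX escaper for untrusted cell values (hypothetical sketch;
# pandas' to_latex(escape=True) does its own, more complete escaping).
LATEX_SPECIALS = {
    "\\": r"\textbackslash{}",
    "&": r"\&", "%": r"\%", "$": r"\$", "#": r"\#",
    "_": r"\_", "{": r"\{", "}": r"\}",
    "~": r"\textasciitilde{}", "^": r"\textasciicircum{}",
}

def escape_latex(value) -> str:
    # Map each input character independently, so replacement text
    # (which itself contains braces) is never re-escaped.
    return "".join(LATEX_SPECIALS.get(ch, ch) for ch in str(value))

print(escape_latex("50% of $100"))  # 50\% of \$100
```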
Wouldn't it be great if there was a LaTeX incantation that allowed for specifying that the referenced dataset URI (maybe optionally displayed also as a table) is a premise of the analysis; with RDFa and/ or JSONLD in addition to LaTeX PDF? That way, an automated analysis tool could identify and at least retrieve the data for rigorous unbiased analyses.
#StructuredPremises
A Year of Spaced Repetition Software in the Classroom
What a great article about using Anki during class in a Language Arts curriculum.
NIST Post-Quantum Cryptography Round 1 Submissions
Are there any blogs that talk about what’s required in a crypto algorithm to withstand quantum computing? Is a straightforward increasing the key size enough? Or are there new paradigms explored? Are these algorithms symmetric or asymmetric?
This paper lists a few of the practical concerns for quantum-resistant algos (and proposes an algo that wasn't submitted to NIST Post-Quantum Cryptography Round 1):
"Quantum attacks on Bitcoin, and how to protect against them" (~2027?)
A few Quantum Computing and Quantum Algorithm resources:
Responsive HTML (arxiv-vanity/engrafo, PLoS,) or Markdown in a Jupyter notebook (stored in a Git repo with a tag and maybe a DOI from figshare or Zenodo) really would be far more useful than comparing LaTeX equations rendered into PDFs.
What are some good resources to learn about Quantum Computing?
Quantum computing:
Quantum algorithm:
Quantum Algorithm Zoo:
Jupyter notebooks:
* QISKit/qiskit-tutorial > "Exploring Quantum Information Concepts"...
* jrjohansson/qutip-lectures > "Lecture 0 - Introduction to QuTiP - The Quantum Toolbox in Python"...
* sympy/quantum_notebooks...
krishnakumarsekar/awesome-quantum-machine-learning:...
arxiv quant-ph:
Gridcoin: Rewarding Scientific Distributed Computing
In all honesty this is what I've been waiting for in terms of a useful cryptocurrency. Now if we could only decentralize the control of what projects the processing goes towards with smart contracts then we could have a coin with more actual utility. Imagine the hash rate of the BTC network going towards some useful calculations.
> Imagine the hash rate of the BTC network going towards some useful calculations.
Unfortunately, BTC mining now runs almost entirely on ASICs that can't be used to compute anything but SHA-256.
Imagine an equivalent amount of computing power in an alternative future. I mean, you're right, and this limitation is a flaw in Bitcoin, but you've also missed the point.
It's also hard to call it a flaw in Bitcoin, considering it was an intentional design decision.
Even intentional design decisions can be flawed.
There's a pretty hard limit bounding the optimizability of SHA256. That's why hashcash uses a cryptographic hash function.
There may be - or, very likely are - shortcuts for proof of research better than Grover's; which, when found, will also be very useful for science and medicine. However, that advantage is theoretically destabilizing for a distributed consensus network; which is also a strange conflict in incentives.
Sort of like buying "buy gold" commercials when the market was heading into the worst recession since the Great Depression.
SSL accelerators may benefit from the SHA256 ASIC optimizations incentivized by the bitcoin design.
"""The accelerator provides the RSA public-key algorithm, several widely used symmetric-key algorithms, cryptographic hash functions, and a cryptographically secure pseudo-random number generator"""
GPU prices are also lower now; probably due to demand pulling volume. The TPS (transactions per second) rate is doing much better these days.
How would you solve the local datetime problem with Git and signatures?
Power Prices Go Negative in Germany
The article doesn’t actually answer the questions it purports to answer. Better source: ...
Energy markets are artificial markets designed to create various price signals that result in certain incentives on both generation and demand, subject to numerous constraints. One constraint is that demand and supply must balance. The grid can’t store much energy. Oversupply can cause grid frequency to go above 50/60 Hz, threatening grid stability:. Power prices go negative when there is too much generation capacity online at a given instant, relative to demand. That creates incentives for generators that can shut down (like natural gas) to do so.
Negative power prices are not a good thing for consumers. A negative price in the wholesale electric markets does not mean the electricity is "less than free." Obviously, even wind power or solar always costs positive money to generate in real terms. Instead, it signals a mismatch between generation capacity, storage capacity, and demand. In a grid with adequate storage capacity, negative prices would be extremely rare.
I think you're missing the critical piece of your explanation of why negative prices aren't good for consumers:
The reason that negative power prices look like they would be good for consumers is that it seems like they should lower their monthly bills. But in practice their bill should stay the same, even if there were significant periods during which energy prices were negative. Why? The negative prices don't indicate that the cost of power production is negative for the power company, so the power company is losing money. The power company has to recoup those losses somehow, and they do it by charging more when power prices are positive.
In order to take advantage of negative power prices, a power consumer would have to dynamically increase their power consumption in response to the negative prices. If they have any significant power storage capacity, maybe they could store power and sell it back to the grid when prices go positive, or maybe turn on their mining rig while prices are negative, if they go negative frequently.
I'll just add in some empirical results. German power has risen to be some of the most expensive in Europe since they started the Energiewende. They have very low wholesale prices (driving the coal producers out of business) combined with very high retail prices (might be top 3 in Europe?).
The high retail price appears to be driven by the mechanisms driving the Energiewende, i.e. the transition to wind and solar.
The NYT article reads as though Germany is benefiting from the switch to renewables, and that the only problems are that sometimes there's so much power around that they have to pay people to use it.
It mentions high cost of electricity, and that it's due to fees and "renewable investment costs" but then immediately hand-waves that away because "household energy bills have been rising over all anyway."
When combined with your information above (those hand-wavy fees actually account for fully 50% of the costs, with 24% being the renewables surcharge) it would seem that the NYT is being misleading. It seems they want to convey the idea that the renewables are an immediate good thing for everyone (which I do not take issue with politically) while downplaying the significant costs to consumers. Am I missing something?
If I'm not, then this does nothing but contribute to the current view of American media as being intentionally misleading when it suits their interests.
No, I don't think you're missing something here.
The price for electricity is indeed very high in Germany, rising to new heights with the new year, again.
The problem is, unsurprisingly, regulation and policy, i.e. the fees you already mentioned, and the subsidies for industrial usage, etc.
So, yes, it actually seems that the NYT is misleading here.
Let's leave this as an exercise for the reader, to judge if anyone should really be surprised here.
I always think of this old "Trust, but verify" quibble, and that it is actually based on an old Russian proverb. The irony.. :)
> The price for electricity is indeed very high in Germany
It's not only very high, but the second highest in the world, and we are about to take over the first spot [1]
> The problem is, unsurprisingly, regulation and policy, i.e. the fees you already mentioned, and the subsidies for industrial usage, etc.
The problem is the subsidies for all the green energy. It's not only the direct costs but also the costs of grid interventions (turning capacity on/off), paying for renewables even when they are not producing because there is too much energy available, the huge requirements for new transmission lines from north to south, getting rid of nuclear power, etc.
Subsidies for industrial usage are often cited by interested parties as increasing costs, but they are also a direct need of the Energiewende, because the economic damage would otherwise be gigantic. Unless you want to push the energy-intensive factories out of your country, you have to factor in those costs.
> So, yes, it actually seems that the NYT is misleading here.
Yes, as does most media - especially in Germany. Those negative prices are no win for any german. Why else is it, that we will soon pay the highest prices for electricity in the world?
[1]-...
"Several countries in Europe have experienced negative power prices, including Belgium, Britain, France, the Netherlands and Switzerland."
> Yes, as does most media - especially in Germany. Those negative prices are no win for any german. Why else is it, that we will soon pay the highest prices for electricity in the world?
AFAIU, it's because you're aggressively shaping the energy market in order to reduce health and environmental costs now.
The technical issue here is that batteries are not good enough yet; and [hemp] supercapacitors are not yet produced at the volume needed to lower the costs. So maintaining a high price for energy keeps the market competitive for renewables, which have far smaller negative externalities.
Can the excess energy on certain days be converted back to money through cryptocurrency mining? (While society decides whether batteries are a crucial energy security investment)
Mathematicians Find Wrinkle in Famed Fluid Equations
Navier-Stokes equations:
Bitcoin is an energy arbitrage
In addition to relocating to where energy is the least expensive, Bitcoin creates incentive for miners to lower the local cost of energy: invest in renewable energy.
Renewable Energy / Clean Energy is now less expensive than alternatives; with continued demand, the margins are at least maintained.
> In addition to relocating to where energy is the least expensive, Bitcoin creates incentive for miners to lower the local cost of energy: invest in renewable energy.
We have lots of direct and effective subsides for nonrenewable energy in the United States. And some for renewables, as well. For example [1] average effective tax rate over all money making companies: 26%
"Coal & Related Energy": 0.69%
"Oil/Gas (integrated)": 8.01%
"Power": 29.22%
"Green and Renewable Energy": 26.42%
[1] "Tax Rates by Sector (US)" (January 2017)...
X-posting here from the article's comments:
The price reflects the confidence investors have in the security's ability to meet or exceed inflation and in the information security of the network.
Volatility adds value for algo traders: say the prices are [1, 101, 51, 101, 51, 201]:
(101-1)+(101-51)+(201-51)=300
(201-1)=200
For the average Joe looking at the vested options they're hodling, though, volatility is unfriendly.
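The swing-vs-hold arithmetic above can be checked directly (an idealized sketch: it assumes a trader who buys every local low and sells every local high, ignoring fees):

```python
# Swing-trading gains vs buy-and-hold on the price path above.
prices = [1, 101, 51, 101, 51, 201]

# Sum every upward swing (buy each local low, sell each local high).
swing_gain = sum(max(b - a, 0) for a, b in zip(prices, prices[1:]))
hold_gain = prices[-1] - prices[0]

print(swing_gain)  # 300
print(hold_gain)   # 200
```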
When e.g. algo-traders are willing to buy in when the price starts to fall, they're making liquidity; which some exchanges charge less for.
Enigma Catalyst (Zipline) is one way to backtest and live-trade cryptocurrencies algorithmically.
There are now more than 200k pending Bitcoin transactions
At 20 transactions per second it's a delay of 3 hours. (200000/20/60/60)
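The back-of-the-envelope delay calculation, spelled out:

```python
# Time to clear the backlog at the throughput quoted above.
pending_txs = 200_000
txs_per_second = 20

delay_hours = pending_txs / txs_per_second / 3600
print(f"{delay_hours:.2f} hours")  # 2.78 hours (~3 hours)
```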
"The bitcoin network's theoretical maximum capacity sits between 3.3 to 7 transactions per second."
The OT link does say "Transactions Per Second 22.54".
The solutions for this 3 hour backlog of unconfirmed transactions include: implementing SegWit, increasing the blocksize, and Lightning Network.
I think the link counts incoming transactions. I.e. you can always schedule more, but it doesn't mean they're going to be acted on in a reasonable timeframe.
What ORMs have taught me: just learn SQL (2014)
ORMs:
- Are maintainable by a team. "Oh, because that seemed faster at the time."
- Are unit tested: eventually we end up creating at least structs or objects anyway, and then that needs to be the same everywhere, and then the abstraction is wrong because "everything should just be functional like SQL" until we need to decide what you called "the_initializer2".
- Can make it very easy to create maintainable test fixtures which raise exceptions when the schema has changed but the test data hasn't.
- Prevent SQL injection errors by consistently parametrizing queries and appropriately quoting for the target SQL dialect. (One of the Top 25 most frequent vulnerabilities). This is especially important because most apps GRANT both UPDATE and DELETE; if not CREATE TABLE and DROP TABLE to the sole app account.
- Make it much easier to port to a new database; or run tests with SQLite. With raw SQL, you need the table schema in your head and either comprehensive test coverage or to review every single query (and the whole function preceding db.execute(str, *params))
- May be the performance bottleneck for certain queries; which you can identify with code profiling and selectively rewrite by hand if adding an index and hinting a join or lazifying a relation aren't feasible with the non-SQLAlchemy ORM that you must use.
- Should provide a way to generate the query at dev or compile-time.
- Should make it easy to DESCRIBE the query plans that code profiling indicates are worth hand-optimizing (learning SQL is sometimes not the same as learning how a particular database plans a query over tables without indexes)
- Make managing db migrations pretty easy.
- SQLAlchemy really is great. SQLAlchemy has eager loading to solve the N+1 query problem. Django is often more than adequate; and has had prefetch_related() to solve the N+1 query problem since 1.4. Both have an easy way to execute raw queries (that all need to be reviewed for migrations). Both are much better at paging without allocating a ton of RAM for objects and object attributes that are irrelevant now.
- Make denormalizing things from a transactional database with referential integrity into JSON really easy; which webapps and APIs very often need to do.
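The parametrization point above can be sketched with the stdlib sqlite3 driver rather than a full ORM (the table and data here are hypothetical):

```python
import sqlite3

# Parameterized queries (what ORMs emit for you) vs string interpolation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Hostile input that would widen a naively interpolated WHERE clause.
user_input = "nobody' OR '1'='1"

# Safe: the driver binds the value; it can never terminate the string literal.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # []

# Unsafe: interpolation lets the input rewrite the query.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
print(unsafe)  # both rows come back
```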
Is there a good JS ORM? Maybe in TypeScript?
I've used it, but I'd stay away from it; it only felt cumbersome. No productivity gain.
It's built on top of a query builder, Knex (), which is decent.
Objection.js and the Knex query builder are excellent. Think ORM light with full access to SQL.
Show HN: An educational blockchain implementation in Python
> It is NOT secure neither a real blockchain and you should NOT use this for anything else than educational purposes.
It would be nice if non-secure parts of implementation or design were clearly marked.
What's the point of education article, if bad examples aren't clearly marked as bad? If MD5 usage is the only issue, author could easily replace it with SHA and get rid of the warning at the start. If there are other issues, how can a reader know which parts to trust?
Even if fixing bad/insecure parts are "left as an exercise for the reader", learning value of the article would be much greater if those parts would be at least pointed at.
OP here.
erikb is spot on in the sibling comment. This hasn't been expert-reviewed, hasn't been audited so I'm pretty confident there is a bug somewhere that I don't know about.
It's educational in the sense that I tried, as best as I could, to implement the various algorithmic parts (mining, validating blocks & transactions, etc...).
I originally used MD5 because I thought I would do more exploration regarding difficulty and MD5 is faster to compute than SHA. In the end, I didn't do that exploration, so I could easily replace MD5 with SHA. I'll update the notebook to use SHA, but I'm still not gonna remove the warning :)
I'll also try to point out more explicitly which parts I think are not secure.
> I'll also try to point out more explicitly which parts I think are not secure.
Things I've noticed:
* Use of floating point arithmetic.
* Non-reproducible serialization in verify_transaction can produce slightly different, but equivalent JSON, which leads to rejecting transactions if produced JSON is platform-dependent (e.g. CRLFs, spaces vs tabs).
* Miners can perform DoS by creating a pair of blocks referencing each other (recursive call in verify_block is made before any sanity checks or hash checks, so they can modify block's ancestor without worrying about changing its hash).
* mine method can loop forever due to integer overflow.
* Miners can put in block a transaction with output sum greater than input sum - only place where it is checked is in compute_fee and no path from verify_block leads there.
Those are all very good points I didn't think about, thanks for these.
I'll fix the two bugs with verify_block and the possibility for a miner to inject an invalid output > input transaction.
I'll add a note for the 3 others.
For deterministic serialization (~canonicalization), you can use sort_keys=True or serialize OrderedDicts. For deserialization, you'd need object_pairs_hook=collections.OrderedDict.
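A quick sketch of why sort_keys matters when signatures are computed over serialized bytes (the transaction fields here are hypothetical):

```python
import json
from collections import OrderedDict

# Two equivalent transactions built with different key insertion orders.
tx_a = {"to": "addr1", "amount": 5, "from": "addr0"}
tx_b = {"from": "addr0", "to": "addr1", "amount": 5}

# Naive dumps preserves insertion order, so equal dicts serialize differently
# (and would therefore hash/sign differently).
print(json.dumps(tx_a) == json.dumps(tx_b))  # False

# sort_keys=True canonicalizes the output for signing and verification.
print(json.dumps(tx_a, sort_keys=True) == json.dumps(tx_b, sort_keys=True))  # True

# Round-tripping while preserving key order on the way back in:
decoded = json.loads(json.dumps(tx_b), object_pairs_hook=OrderedDict)
print(list(decoded))  # ['from', 'to', 'amount']
```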
Most current blockchains sign a binary representation with fixed length fields. In terms of JSON, JSON-LD is for graphs and it can be canonicalized. Blockcerts and Chainpoint are JSON-LD specs:
> Blockcerts uses the Verifiable Claims MerkleProof2017 signature format, which is based on Chainpoint 2.0....
FYI, dicts are now ordered by default as of Python 3.6.
That's an implementation detail, and shouldn't be relied upon. If you want an ordered dictionary, you should use collections.OrderedDict.
It's now the spec for 3.6+.
> #python news: @gvanrossum just pronounced that dicts are now guaranteed to retain insertion order. This is the end of a long journey.
More here:...
OrderedDicts are backwards-compatible and are guaranteed to maintain order after deletion.
Thanks! Simplest explanation I've seen.
Here's an nbviewer link (which, like base58, works on/over a phone):...
Note that Bitcoin does two rounds of SHA256 rather than one round of MD5. There's also a "P2P DHT" (peer-to-peer distributed hash table) for storing and retrieving blocks from the blockchain; instead of traditional database multi-master replication and secured offline backups.
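A minimal sketch of the double-SHA256 construction (the header bytes below are a placeholder, not a real block header):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

header = b"example block header bytes"  # placeholder, not a real header
print(double_sha256(header).hex())
```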
> ERROR:root:Invalid transaction signature, trying to spend someone else's money ?
This could be more specific. Where would these types of error messages log to?
My mistake, it's BitTorrent that has a DHT. Instead of finding the most network local peer with the block identified by a (prev_hash, hash) hash table key, the Bitcoin blockchain broadcasts all messages to all nodes; which must each maintain a complete backup of the entire blockchain.
"Protocol documentation"
MSU Scholars Find $21T in Unauthorized Government Spending
Unauthorized federal spending (in these two departments) 1998-2015: $21T
Federal debt (2017): $20T
$ 20,000,000,000,000 USD
Would a blockchain for government expenditures help avoid this type of error?
We now have ( ) and expenditure line item metadata.
Would having traceable money in a distributed ledger help us keep track of money collected from taxpayers?
Obviously, the volatility of most cryptocurrencies would be disadvantageous for purposes of transferring and accounting for government spending. Isn't there a way to peg a cryptocurrency to the USD; even with Quantitative Easing? How is Quantitative Easing different from just deciding to print trillions more 'coins' in order to counter debt or inflation or deflation; why is the government in debt at all?
re: Quantitative Easing
Say I have $100 in my Social Security Fund (in very non-aggressive investments which need to meet or exceed inflation), and the total supply of money (including paper notes and numbers in the debit and credit columns of various public and private databases) is $1T with $1T in debt; if $1T is printed to pay for that debt, is my $100 in retirement savings then worth $50? Or is it more complex than that?
[deleted]
Universities spend millions on accessing results of publicly funded research
Are there good open source solutions for journal publishing? (HTML abstract, PDFs, comments, ...)?
Yes- quite a few beyond what's already been listed, in fact:
Ambra is being discontinued!...
Edit: And theoj doesn't really appear to be maintained anymore either...
> Ambra is being discontinued!
The article mentions the discontinuation of Aperta but nothing about Ambra?
An Interactive Introduction to Quantum Computing
Part 2 mentions two quantum algorithms that could be used to break Bitcoin (and SSH and SSL/TLS; and most modern cryptographic security systems): Shor's algorithm for factorization and Grover's search algorithm.
Part 2:...
Shor's algorithm:
Grover's algorithm:
I don't know what heading I'd suggest for something about how concentration of quantum capabilities will create dangerous asymmetry. (That is why we need post-quantum ("quantum resistant") hash, signature, and encryption algorithms in the near future.)
Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256)
"Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256)"
> […] On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates.
From :
> NIST has initiated a process to solicit, evaluate, and standardize one or more quantum-resistant public-key cryptographic algorithms. Nominations for post-quantum candidate algorithms may now be submitted, up until the final deadline of November 30, 2017.
Project Euler
After hearing about it for years, I decided to start working through Project Euler about two weeks ago. It really is much more about math than programming, although it's a lot of fun to take on the problems with a language that has tail call optimization because so many of the problems involve recurrence relations.
I like that the problems are constructed in a way that usually punishes you for trying to use brute force. Sometimes there's a problem that doesn't have a more elegant solution, though, as if to remind us that brute force often works remarkably well.
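One example of a recurrence beating brute force, sketched on a classic early problem (summing the even Fibonacci terms not exceeding four million); iterating the recurrence is linear time, where naive recursion would be exponential:

```python
# Iterate the Fibonacci recurrence a, b -> b, a + b and accumulate even terms.
def even_fib_sum(limit: int) -> int:
    a, b, total = 1, 2, 0
    while b <= limit:
        if b % 2 == 0:
            total += b
        a, b = b, a + b
    return total

print(even_fib_sum(4_000_000))  # 4613732
```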
I agree with Project Euler being mostly about math. I prefer Codewars for practicing programming or learning a new language.
Euler was a mathematician, after all. Now thinking about it, I wonder what a Project Dijkstra would look like, or maybe a Project Stallman.
There's a Project Rosalind (named after Rosalind Franklin) which is sort of like Project Euler for bioinformatics:
I like bioinformatics problems because:
- There are problem explanations and an accompanying textbook.
- You can structure the solutions with unit tests that test for known good values.
- There's a graph of problems.
Who’s Afraid of Bitcoin? The Futures Traders Going Short
Wall Street being able to buy Bitcoin might increase demand, but they might also do things that would at times amplify sell pressure: panic selling, shorting it, leveraged shorting, margin calls.
Statement on Cryptocurrencies and Initial Coin Offerings
A little emphasis, focusing on the aftermath from the SEC's July report on the DAO:
> ."
I think people are viewing this as an attack on crypto, when it's actually just common sense. People put too much faith in the 'Contract' half of 'Ethereum/Smart Contract'.
Basically. Today ICOs are selling tokens as shares of equity in their company, or similar. Which you can then sell on..
Otherwise how can the court help you, and who else is going to help you?
> I think people are viewing this as an attack on crypto, when it's actually just common sense.
> […].
This. IRS regards coins and tokens as capital gains taxable things regardless of whether they qualify as securities. SEC exists to protect investors from scams and unfair dealing. In order to protect investors, SEC regulates issuance of securities.
Ask HN: How do you stay focused while programming/working?
I often find myself "needing" to take a mini-break after just a few minutes of concerted effort while coding. In particular, this often occurs after I've made a tiny breakthrough, prompting me to reward myself by checking Twitter or HN. This bad habit quickly derails any momentum. What are some tips to increase focus stamina and avoid distraction?
It's not exactly new and exciting, but I found that listening to calm, instrumental music helps me focus. Mostly Ambient. If you do not like electronic music, Stars Of The Lid or Bohren & Der Club Of Gore are very much worth checking out.
Also, has worked wonders for me.
In both cases, it seems that unstructured audio input, like, occupies the parts of my mind that would otherwise distract me.
> It's not exactly new and exciting, but I found that listening to calm, instrumental music helps me focus. Mostly Ambient.
Same. Lounge, Ambient, Chillout, Chillstep ( has a bunch of great streams. SoundCloud and MixCloud have complete replayable sets, too.)
I've heard that videogame soundtracks are designed to not be distracting; to help focus.
A Hacker Writes a Children's Book
The rhymes and illustrations look great! Is there a board book edition?
Other great STEM and computers books for kids:
"A is for Array"
"Lift-the-Flap Computers and Coding"
"Computational Fairy Tales"
"Hello Ruby: Adventures in Coding"
"Python for Kids: A Playful Introduction To Programming"
"Lauren Ipsum: A Story About Computer Science and Other Improbable Things"
"Rosie Revere, Engineer"
"Ada Byron Lovelace and the Thinking Machine"
"HTML for Babies: Volume 1 of Web Design for Babies"
"What Do You Do With a Problem?"
"What Do You Do With an Idea?"
"ABCs of Mathematics", "The Pythagorean Theorem for Babies", "Non-Euclidean Geometry for Babies", "Introductory Calculus for Infants", "ABCs of Physics", "Statistical Physics for Babies", "Newtonian Physics for Babies", "Optical Physics for Babies", "General Relativity for Babies", "Quantum Physics for Babies", "Quantum Information for Babies", "Quantum Entanglement for Babies"
"ELI5": "Explain like I'm five"
Someone should really make a list of these.
Ask HN: Do ISPs have a legal obligation to not sell minors' web history anymore?
I guess COPPA is still in place and in theory it applies to ISPs, although they may be allowed to assume that all traffic from a household comes from the bill payer, who is presumably over 13.
So they can currently argue that, since they don't know the age of the browser, they're not liable?
Weren't we better off with a policy making it illegal to sell web browsing history for anyone; regardless of whether their age or disability is known?
Tech luminaries call net neutrality vote an 'imminent threat'
> “The current technically-incorrect order discards decades of careful work by FCC chairs from both parties, who understood the threats that Internet access providers could pose to open markets on the Internet.”
Paid prioritization is that threat.
Again, streaming video content for all ages is not more important than online courses.
Ask HN: Can hashes be replaced with optimization problems in blockchain?
CureCoin.
From... :
>.
...
From .
From :
> Gridcoin (Berkeley 2013) is built on Proof-of-Stake and Proof-of-Research. Gridcoin is used as payment for computing resources contributed to BOINC.
> I doubt that volatility would be welcome on the Gridcoin blockchain: Wikipedia lists "6.5% Inflation. 1.5% Interest + 5% Research Payments APR" under the Supply Growth infobox attribute.
>
Ask HN: What could we do with all the mining power of Bitcoin? Fold Protein?
Instead of buzzing SHA-512 in circles like busy bees ad infinitum, is there any way we can use these calculations productively?
Instead of algo-trading the stock markets?!
There are a number of distributed computing projects (e.g. SETI@home):...
The Ethereum White Paper lists a number of applications for blockchains:
(BitCoin is built on SHA-256, Ethereum is built on Keccak-256 (~SHA-3))
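To illustrate that parenthetical with Python's hashlib (one caveat: hashlib.sha3_256 is NIST SHA-3, while Ethereum's Keccak-256 keeps the original pre-standard padding, so their digests differ):

```python
import hashlib

# SHA-256 (Bitcoin's proof-of-work hash) next to NIST SHA3-256.  Note that
# Ethereum's Keccak-256 uses the original (pre-NIST) Keccak padding, so its
# digests differ from hashlib.sha3_256 even though the permutation is the same.
msg = b"abc"
print("SHA-256: ", hashlib.sha256(msg).hexdigest())
print("SHA3-256:", hashlib.sha3_256(msg).hexdigest())
```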
Proof-of-Stake is a lower energy alternative to Proof-of-Work with tradeoffs:
Unfortunately, IDK of another way to find secure consensus (blockchains are consensus protocols) in a DDOS-resistant way with unsolved problems?
> Unfortunately, IDK of another way to find secure consensus (blockchains are consensus protocols) in a DDOS-resistant way with unsolved problems?
Gridcoin (Berkeley 2013) is built on Proof-of-Stake and Proof-of-Research. Gridcoin is used as payment for computing resources contributed to BOINC.
I doubt that volatility would be welcome on the Gridcoin blockchain: Wikipedia lists "Supply growth 6.5% Inflation. 1.5% Interest + 5% Research Payments APR" under the Supply Growth infobox attribute.
No CEO needed: These blockchain platforms will let ‘the crowd’ run startups
Mentioned in the article are Aragon, District0x, Ethlance, NameBazaar, Colony, DAOstack; all of which, IIUC, are built with Ethereum and Smart Contracts (DAOs).
How much energy does Bitcoin mining really use?
Is there a confidence interval chart with low, average, and high estimates? Maybe a Jupyter notebook with parametrized functions and a reproducible and reasonably reviewable analysis?
A sustainability index with voluntary data from mining pools would be great.
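As a sketch of what a parametrized, reviewable estimate in such a notebook could look like; every input below is an illustrative placeholder, not a measurement:

```python
# Low/mid/high scenarios for an energy-per-transaction estimate.
# Inputs: network hashrate in EH/s, miner efficiency in J/GH, transactions
# per day.  All values here are placeholders for illustration only.
def kwh_per_tx(hashrate_ehs, eff_j_per_gh, txs_per_day):
    power_w = hashrate_ehs * 1e9 * eff_j_per_gh   # EH/s -> GH/s, times J/GH = watts
    kwh_per_day = power_w * 24 / 1000             # watts -> kWh over one day
    return kwh_per_day / txs_per_day

for name, eff in {"low": 0.05, "mid": 0.10, "high": 0.25}.items():
    print(name, round(kwh_per_tx(10, eff, 300_000), 1))
```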
The Actual FCC Net Neutrality Repeal Document. TLDR: Read Pages 82-87 [pdf]
Here are some links to the relevant antitrust laws:
Sherman Antitrust Act (1890)
Aspen Skiing Co. v. Aspen Highlands Skiing Corp. (1985)....
Transparency in network management and paid prioritization practices and agreements will be relevant.
"We find that antitrust law, in combination with the transparency rule we adopt, is particularly well-suited to addressing any potential or actual anticompetitive harms that may arise from paid prioritization arrangements." (p.147)
If antitrust law is sufficient, as you've found, there would be no need for Title II Common Carrier regulation in any industry.
We can call phone numbers provided by any company at the same rate because phone companies are regulated as Title II Common Carriers. ISPs are also common carriers.
"Public airlines, railroads, bus lines, taxicab companies, phone companies, internet service providers,[3] cruise ships, motor carriers (i.e., canal operating companies, trucking companies), and other freight companies generally operate as common carriers."
The 5 most ridiculous things the FCC says in its new net neutrality propaganda
> The Federal Communications Commission put out a final proposal last week to end net neutrality. The proposal opens the door for internet service providers to create fast and slow lanes, to block websites, and to prioritize their own content. This isn’t speculation. It’s all there in the text.
Great. Payola. Thanks Verizon!
Does the FTC have the agreement information needed to hear the anti-trust cases that are sure to result from what are now complaints to the FCC (an organization with network management expertise) being redirected to the FTC?
Title II is the appropriate policy set for ISPs; regardless of how lucrative horizontal integration with content producers seems.
FCC's Pai, addressing net neutrality rules, calls Twitter biased
No. Censoring hate speech by banning people who are verbally assaulting others (in violation of Terms of Service that they agreed to) is a very different concern than requiring common carriers to equally prioritize bits.
If we extend "you must allow people to verbally assault others (because free speech applies to the government)" to TV and radio, what do we end up with?
Note that the FCC fines non-cable TV (broadcast radio and TV) for cursing on air. See "Obscene, Indecent and Profane Broadcasts"...
How can you ask social media companies to do something about fake news (the vast majority of which served to elect the current administration (which nominated this FCC chairman)) while also lambasting them for upholding their commitment to providing a hate-free experience for net citizens and paying advertisers?
"Open Internet": No blocking. No throttling. No paid prioritization.
It would be easier for us to understand the "Open Internet" rules if the proposed "Restoring Internet Freedom" page wasn't crudely pasted over (redirected to from) the page describing the current Open Internet rules. (current policy) now redirects to (proposed policy).
ISPs blocking, throttling, or paid-prioritizing Twitter, Netflix, Fox, or CNN for everyone is a different concern than responding to individuals who are threatening others with hate speech.
The current policy ("Open Internet") means that you can use the bandwidth cap that you pay for for whatever legal content you please.
The proposed policy ("Restoring Internet Freedom") means that internet businesses will need to pay every ISP in order to not be slower than the big guys who can afford to pay-to-play (~"payola").
A curated list of Chaos Engineering resources
Never having heard of 'Chaos Engineering', this seems like a bad case of 'Cargo Cult Engineering'.
That starts with the term 'chaos', which has a well-defined meaning in Chaos Theory, where it is quite obviously borrowed from: small changes in input lead to large changes in output. Neither distributed systems in general, and especially not the sort of system this engineering strives to build, fit that definition. In fact, they are the exact opposite: every part of a typical web stack is already build to mitigate changing demands such as traffic peaks or attacks.
The mumbo jumbo around "defining a steady state" and "disproving the null hypothesis" seems like a sciencey veneer on a rather well-known concept: testing.
A supreme court justice once said: "Good writing is a $10 thought in a 5 cent sentence". This is the opposite.
"Resilience Engineering" would be a good alternative term for these failure scenario simulations and analyses.
Glossary of Systems Theory > A > Adaptive capacity:
> Adaptive capacity: An important part of the resilience of systems in the face of a perturbation, helping to minimise loss of function in individual human, and collective social and biological systems
Technology behind Bitcoin could aid science, report says
Bloom is working on non-academic credit building and scoring.
Hyperledger brings together many great projects and tools which have numerous applications in science and industry.
Is a blockchain necessary? Could we instead just sign JSONLD records with ld-signatures and store them in an eventually or strongly consistent database we all contribute resources to synchronizing and securing?
That's just the centralization or decentralization question.
We can do it all centralized already but we would also all need to trust whoever is hosting this data and trust every single person who has the ability to enter the data.
The fewer nodes you need to trust the better; in a centralized solution where everyone can contribute you need to be able to trust everyone.
In a decentralized system where everyone can contribute, you don't need to trust anyone but give up benefits of centralization such as speed, performance and usability.
> We can do it all centralized already but we would also all need to trust whoever is hosting this data and trust every single person who has the ability to enter the data.
There are plenty of ways to minimize trust required with traditional cryptography though, this is not all or nothing, we have been doing this since PGP. You can get the overwhelming majority of the benefits with none of the drawbacks.
But how else are you going to hype an ICO with claims about the size of a market and get people who don't understand how blockchains work to give you their BTC/ETH?
Git hash function transition plan
> Some hashes under consideration are SHA-256, SHA-512/256, SHA-256x16, K12, and BLAKE2bp-256.
Not sure what K12 is (Keccak?), but BLAKE2 is a very attractive option.
Vintage Cray Supercomputer Rolls Up to Auction
The linked jacket looks pretty cool.
"Vintage Nylon Cray Super Computer Coat Medium, Cray Y-MP C90 Chippewa Falls"
Google is officially 100% sun and wind powered – 3.0 gigawatts worth
+1000.
TIL this is called "Corporate Renewable Energy Procurement"....
PPA: Power Purchase Agreement
Interactive workflows for C++ with Jupyter
QuantStack/xeus-cling
QuantStack/xwidgets
QuantStack/xplot (bqplot)
Vanguard Founder Jack Bogle Says ‘Avoid Bitcoin Like the Plague’
Over the past 7 years, Bitcoin has outperformed every security and portfolio that Jack Bogle has recommended.
This is pretty disrespectful to Jack Bogle.
Vanguard is almost singlehandedly responsible for returning trillions of dollars of costs, in the form of fees and underperformance by active managers, back to investors. Millions of investors have benefited.
Bitcoin has been a bubble since $1 and $100 to these people.
What evidence is there that it isn't a bubble? People buy Bitcoin only because they think they can sell it higher. Eventually, you will run out of greater fools.
So did tulip bulbs in Holland in the 17th century. It's still not enough to make it a smart choice.
Bitcoin has greatly outperformed tulips.
I think they just grew more tulips to meet demand?
Nasdaq Plans to Introduce Bitcoin Futures
My guess is that this is probably pretty meaningless.
There are a few things going against them.
- The CBOE and CME are both much larger futures exchanges and are going to be offering futures first
- Since you can't net out futures contracts from different exchanges, this means they tend to become winner-take-all.
This might be interesting as one of the things that everyone is worried about is price manipulation.
If you haven't thought about how futures work with respect to margin and marking at the end of the trading day you need to know that you can be required to deposit more money into your margin account if the futures trade moves against you on any given day.
This means the marking price is very important, and lots of institutional money is worried that the exchanges are easy to manipulate.
see:...
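A toy illustration of daily marking and variation margin, with hypothetical numbers:

```python
# Toy mark-to-market for one long futures contract (hypothetical numbers).
# Each day the position is marked at the settlement price; the P&L is
# credited/debited to the margin account, and dropping below maintenance
# margin triggers a call to top the account back up to initial margin.
def mark_to_market(entry, settlements, initial=10_000, maintenance=7_000):
    balance, calls, prev = initial, [], entry
    for day, px in enumerate(settlements, 1):
        balance += px - prev                     # one contract, multiplier 1
        prev = px
        if balance < maintenance:
            calls.append((day, initial - balance))   # deposit demanded
            balance = initial
    return balance, calls

bal, calls = mark_to_market(entry=18_000, settlements=[17_000, 14_500, 16_200])
print(bal, calls)   # → 11700 [(2, 3500)]
```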
> Nasdaq’s product will reinvest proceeds from the spin-off back into the original bitcoin in a way meant to make the process more seamless for traders, the person said.
This is awesome; right now the CBOE and CME have both punted on the question of forks, saying they'll make best efforts to figure it out. That appears to exceed CME’s plan to use four sources, and Cboe’s one. Nasdaq’s contracts will be cleared by Options Clearing Corp., the person said.
BitMEX bitcoin futures are already online. IDK how many price sources they pull?
Aren't there a few other companies already selling Bitcoin futures?
In general, when the CME enters a market for futures, they take all of the air out of the room. I don't think it's realistic to believe NASDAQ can compete with them.
Well John McAfee thinks bitcoin will hit 1 million by 2020.
Pump and dumpers gonna pump and dump....
Or, large investment banking houses will step in and create naked shorting opportunities to inflate sell pressure, creating 'death spirals' to drive prices down and scoop them up at extreme discounts. This happens in the traditional public markets every day.
> Or, large investment banking houses will step in and create naked shorting opportunities to inflate sell pressure, creating 'death spirals' to drive prices down and scoop them up at extreme discounts. This happens in the traditional public markets every day.
Is there a term for this?
Yes, this can happen in a few different ways and is the reason why Y Combinator created SAFEs. When you have a public company you will get offers for what are called "credit lines", "debt financing" or "convertible notes". They are traditionally used to create death spirals.... as the size of your float increases when you, as executive director (CEO/CFO) of a public company, "issue" more stock to cover the loan. The more you issue, the less you're worth, until somebody comes along, scoops you up, and re-engineers the cap table, which is a restructuring. However, manipulation can occur within institutions as well:...
Ask HN: Where do you think Bitcoin will be by 2020?
I have a friend who believes it will be $100,000 per BitCoin and his reasoning is 'supply and demand'.
There will be around 18M bitcoins in 2020. [1][2]
[1]
[2]
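The ~18M figure can be sanity-checked from the subsidy schedule; a sketch that ignores satoshi-level rounding:

```python
# Bitcoin's subsidy schedule: 50 BTC per block, halved every 210,000 blocks.
# Summing block subsidies up to a height gives the circulating supply
# (ignoring satoshi-level rounding of the real implementation).
def supply_at_block(height, interval=210_000, subsidy=50.0):
    total = 0.0
    while height > 0 and subsidy > 0:
        blocks = min(height, interval)
        total += blocks * subsidy
        height -= blocks
        subsidy /= 2
    return total

print(supply_at_block(630_000) / 1e6)    # ~18.4M BTC around the 2020 halving
print(supply_at_block(6_930_000) / 1e6)  # approaches the ~21M cap
```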
This paper [3] suggests we'll be needing to upgrade to quantum-secure hash functions instead of ECDSA before 2027.
[3] "Quantum attacks on Bitcoin, and how to protect against them"
Hopefully, Ethereum will have figured out a Proof of Stake [4] solution for distributed consensus which is as resistant to DDOS as Proof of Work; but with less energy consumption (thereby, unfortunately or fortunately, un-incentivizing clean energy as a primary business goal).
[4]
Ask HN: Why would anyone share trading algorithms and compare by performance?
I was speaking with a person years my senior awhile back, and sharing information about the Quantopian platform (which allows users to backtest and share trading algorithms); and he asked me "why would anyone share their trading algorithms [if they're making any money]?"
I tried "to help each other improve their performance". Is there a better way to explain to someone who spends their time reading forums with no objective performance comparisons over historical data why people would help each other improve their algorithmic trading algorithms?
Catalyst, like Quantopian, is also built on top of Zipline; but for cryptocurrencies.
Zipline (backtesting and live trading of algorithms with initialize(context) and handle_data(context, data) functions; with the SPY S&P 500 ETF as a benchmark)
Pyfolio (for objectively comparing the performance of trading strategies over time)
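For readers unfamiliar with the initialize(context) / handle_data(context, data) shape, here is a library-free toy in plain Python. It is not the real Zipline API, just the pattern, driven over synthetic prices:

```python
# A library-free toy of the initialize(context)/handle_data(context, data)
# pattern.  NOT the real Zipline API, just its shape, with a naive
# 3-bar moving-average rule over synthetic prices.
class Context:
    pass

def initialize(context):
    context.cash, context.shares, context.window = 1000.0, 0, 3

def handle_data(context, data):
    prices = data["prices"]
    if len(prices) < context.window:
        return
    sma = sum(prices[-context.window:]) / context.window
    px = prices[-1]
    if px > sma and context.shares == 0:      # enter when price is above the SMA
        context.shares = int(context.cash // px)
        context.cash -= context.shares * px
    elif px < sma and context.shares:         # exit when it drops below
        context.cash += context.shares * px
        context.shares = 0

def run(prices):
    ctx = Context()
    initialize(ctx)
    history = []
    for px in prices:
        history.append(px)
        handle_data(ctx, {"prices": history})
    return ctx.cash + ctx.shares * prices[-1]  # final portfolio value

print(run([10, 11, 12, 14, 16, 13]))           # → 1083.0
```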
...
"Community Algorithms Migrated to Quantopian 2"...
- "Reply to minimum variance w/ contrast" seems to far outperform the S&P 500.
Ask HN: CS papers for software architecture and design?
Can you please point me to some papers that you consider very influential for your work, or that you believe played a significant role in how we structure our software nowadays?
"The Architecture of Open Source Applications" Volumes I & II
"Manifesto for Agile Software Development"...
"Catalog of Patterns of Enterprise Application Architecture"
Fowler > Publications ("Refactoring ",)
"Design Patterns: Elements of Reusable Object-Oriented Software" (GoF book)
UNIX Philosophy
Plan 9
## Distributed Systems
CORBA > Problems and Criticism (monolithic standards, oversimplification,):...
Bulk Synchronous Parallel:
Paxos:
Raft: #Safety
CAP theorem:
Keeping a Lab Notebook [pdf]
I'd love to hear some thoughts about keeping a "lab notebook" for ML experiments. I use Jupyter Notebooks when playing around with different ML models, and I find that it really helps to document my thought process with notes and comments. It also seems that the ML workflow is very 'experiment' driven. I'm always thinking "Hm, I think if I tweak this hyperparameter this way, or adjust this layer this way, then I'll get a better result because X". Thus, I have a bit of a hypothesis and proposed experiment. I run that model, and see if it improved or not.
Then, I run into an issue where I can either: 1. overwrite the original model with my new hyperparameters/design and re-run and analyze or 2. keep adding to the same notebook "page" with a new hypothesis/test/analysis loop, thus making the notebook pretty large. With number 1, I often want to backtrack and re-reference how a previous experiment went, but I lose that history. With number 2, it seems to get big pretty quickly, and coming back to the same notebook requires more setup, and "searching" the history gets more cumbersome.
Does anyone try using a separate notebook page for each experiment, maybe with a timestamp or "version"? Or is there a better way to do this in a single notebook? I am thinking that something like "chapters" could help me here, and it seems like this extension might help me:...
These are ASCII-sortable:
0001_Introduction.ipynb
0010_Chapter-1.ipynb
ISO8601 w/ UTC is also ASCII sortable.
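For example, using the basic ISO 8601 form without colons (':' is not allowed in filenames on some filesystems):

```python
from datetime import datetime, timezone

# ISO 8601 timestamps in UTC sort identically lexically and chronologically,
# so they make good notebook filename prefixes.  The form below drops the
# colons, which some filesystems reject in filenames.
stamps = [
    datetime(2017, 12, 9, 23, 5, tzinfo=timezone.utc),
    datetime(2017, 1, 10, 4, 30, tzinfo=timezone.utc),
]
names = [t.strftime("%Y-%m-%dT%H%MZ") + "_experiment.ipynb" for t in stamps]
print(sorted(names))
```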
# Jupyter notebooks as lab notebooks
## Disadvantages
### Mutability
With a lab notebook, you can cross things out but they're still there.
- [ ] ENH: Copy cell and mark as don't execute (or wrap with ```language\n``` and change the cell type to markdown)
- [ ] ENH: add a 'Save and {git,} Commit' shortcut
CoCalc (was: SageMathCloud) has (somewhat?) complete notebook replay with a time slider; and multi-user collaborative editing. ("Time-travel is a detailed history of all your edits and everything is backed up in consistent snapshots.")
### Timestamps
You must add timestamps by hand; i.e. as #comments or markdown cells.
- [ ] ENH: add a markdown cell with a timestamp (from a configurable template) (with a keyboard shortcut)
### Project files
You must manage the non-.ipynb sources separately. (You can create a new file or folder. You can just drag and drop to upload. You can open a shell tab to `git status diff commit` and `git push`, if the Jupyter/JupyterHub/CoCalc instance has network access to e.g. GitLab or GitHub)
## Advantages
### Reproducibility
Executable I/O cells
The version_information and/or watermark extensions will inline the software versions that were installed when the notebook was last run.
Dockerfile for OS config
Conda environment.yml (and/or pip requirements.txt and/or pipenv Pipfile) for further software dependencies
BinderHub can rebuild a docker image on receipt of a webhook from a git repo, push the built image to a docker image repository, and then host prepared Jupyter instances (with Kubernetes) which contain (and reproducibly archive) all of the preinstalled prerequisites.
Diff: `git diff`, `nbdime`
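A stdlib-only stand-in for what the watermark / version_information extensions record (their actual output format differs):

```python
import platform
import sys
from importlib import metadata   # stdlib since Python 3.8

# Record the interpreter and selected package versions in notebook output,
# roughly what the watermark/version_information extensions inline.
def watermark(packages=("pip",)):
    lines = ["python %s (%s)" % (platform.python_version(), sys.platform)]
    for name in packages:
        try:
            lines.append("%s %s" % (name, metadata.version(name)))
        except metadata.PackageNotFoundError:
            lines.append("%s (not installed)" % name)
    return "\n".join(lines)

print(watermark())
```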
### Publishing
You can generate static HTML, HTML slides with RevealJS, interactive HTML slides with RISE, executable source with comments (e.g. a .py file), LaTeX, and PDF with 'Save as' or `jupyter nbconvert --to`. You can also create slides with nbpresent.
MyBinder.org and Azure Notebooks have badges for e.g. a README.md or README.rst which launch a project executably in a docker instance hosted in a cloud. CoCalc and Anaconda Cloud also provide hosted Jupyter Notebook projects.
You can template a gradable notebook with nbgrader.
GitHub renders .ipynb notebooks as HTML. Nbviewer renders .ipynb notebooks as HTML.
There are more than 90 Jupyter Kernels for languages other than Python....
How to teach technical concepts with cartoons
There's not a Wikipedia page for "visual metaphor", but there are pages for "visual rhetoric" and "visual thinking"
Negative space can be both meaningful and useful later on.
I learned about visual thinking and visual metaphor in application to business communications from "The Back of the Napkin: Solving Problems and Selling Ideas with Pictures"
Fact Checks
Indeed, fact checking systems are only as good as the link between identity credentialing services and a person. (as mentioned in this article) is a good start.
A few other approaches to be aware of:
"Reality Check is a crowd-sourced on-chain smart contract oracle system" [built on the Ethereum smart contracts and blockchain].
And standards-based approaches are not far behind:
W3C Credentials Community Group
W3C Verifiable Claims Working Group
W3C Verifiable News.
DHS orders agencies to adopt DMARC email security
From :
> By Jan. 2018, all federal agencies will be required to implement DMARC across all government email domains.
> Additionally, by Feb. 2018, those same agencies will have to employ Hypertext Transfer Protocol Secure (HTTPS) for all .gov websites, which ensures enhanced website certifications.
Requiring TLS (and showing an unlocked icon for non-TLS-secured emails) would also be good.
The electricity for 1BTC trade could power a house for a month
The article seems to imply that a 1BTC transaction requires 200kWh of energy.
First, what is the source for that number?
Second, what is the business interest of the quoted individual? Are they promoting competing services?
Third, how much energy does the supposed alternative really take, by comparison?
How much energy do these aspects of said business operations require:
- Travel to and from the office for n employees
- Dry cleaning for n employees' work clothes
- Lights for an office of how many square feet
- Fraud investigations in hours worked, postal costs, wait times, CPU time and bandwidth to try and fix data silos' ledgers' transaction ids and time skew; with a full table JOIN on data nobody can only have for a little while from over here and over there
- Desktop machines' idle hours
- Server machines' idle hours
With low cost clean energy, these businesses are profitable; with a very different cost structure than traditional banking and trading.
Anyone want to guess how much the quoted concerned party has invested in cryptocoins / cryptocurrencies? Guy's prolly just sitting at home, shorting it, just waiting for the price to move.
By comparison, with an ICO, there's less back-and-forth on the cap table.
"My job is to feed the machines."
PAC Fundraising with Ethereum Contracts?
I'll cc this here with formatting changes (extra \n and ---) for Hacker News:
---
### Background
- PAC: Political Action Committee
-
### Questions
- Is Civic PAC fundraising similar to e.g. a Crowdsale or a CappedCrowdsale or something else entirely, in terms of ERC20 OpenZeppelin solidity contracts?
- Would it be worth maintaining an additional contract for [PAC] "fundraising" with terminology that campaigns can understand; or a terminology map?
- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the risks of a token sale for a PAC?
--- Is there any way to check for donors' citizenship? (When/Where is it necessary to check donors' citizenship (with credit/debit cards or cryptocoins/cryptotokens?))
- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the costs of a token sale for a PAC?
--- How much gas would such a contract require?
- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the benefits of a token sale for a PAC?
---- Lower transaction fees than credit/debit cards?
---- Time limit (practicality, marketing)
---- Cap ("we only need this much")
---- Refunds in the event of […]
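To ground the Crowdsale comparison, here is a toy, non-Solidity model of CappedCrowdsale-style semantics (hard cap, close time, refund if a minimum goal is unmet). Names and rules are illustrative, not OpenZeppelin's actual contract logic:

```python
# A toy model of CappedCrowdsale-style semantics: a hard cap, a close time,
# and refunds when a minimum goal is not met.  Illustrative only; NOT
# OpenZeppelin's actual Solidity contract logic.
class TokenSale:
    def __init__(self, cap, goal, rate, close_time):
        self.cap, self.goal, self.rate, self.close = cap, goal, rate, close_time
        self.raised, self.contrib = 0, {}

    def contribute(self, donor, amount, now):
        if now >= self.close:
            raise ValueError("sale closed")
        if self.raised + amount > self.cap:
            raise ValueError("cap exceeded")
        self.raised += amount
        self.contrib[donor] = self.contrib.get(donor, 0) + amount
        return amount * self.rate            # tokens issued to the donor

    def refund(self, donor, now):
        if now < self.close or self.raised >= self.goal:
            raise ValueError("no refund: sale still open or goal was met")
        return self.contrib.pop(donor, 0)

sale = TokenSale(cap=100, goal=50, rate=10, close_time=10)
print(sale.contribute("alice", 30, now=1))   # → 300
```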
### Objectives
- Comply with all local campaign finance laws
--- Collect citizenship information for a Person
--- Collect citizenship information for an Organization 'person'
- Ensure that donations hold value
- Raise funds
- Raise funds up to a cap
- (Optionally?) collect names and contact information
Thanks for the wealth of resources in this post. Here are a few more:
"Python for Finance: Analyze Big Financial Data" (2014, 2018) ... also includes the "Finance with Python" course and this book as a PDF and Jupyter notebooks.
Quantopian put out a call for the best Value Investing algos (implemented in quantopian/zipline) awhile back. This post links to those and other value investing resources: (Ctrl-F "econo")
"Lectures in Quantitative Economics as Python and Julia Notebooks" links to these excellent lectures and a number of tools for working with actual data from FRED, ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank, Quandl.
One thing that many finance majors, courses, and resources often fail to identify is the role that startup and small businesses play in economic growth: jobs, GDP, return on direct capital investment. Most do not succeed, but it is possible to do better than index funds and have far more impact in terms of sustainable investment than as an owner of a nearly-sure-bet index fund that owns some shares and takes a hands-off approach to business management, research, product development, and operations.
Is it possible to possess a comprehensive understanding of finance and economics but still not have personal finance down? Personal finance: r/personalfinance/wiki, "Consumer science (a.k.a. home economics) as a college major"
A fresh release of RQuantLib is now on CRAN and in Debian. RQuantLib combines (some of) the quantitative analytics of QuantLib with the R statistical computing environment and language.

This follows the 0.3.3 release from last week and has again a number of internal changes. All uses of objects from external namespaces are now explicit as I removed the remaining using namespace QuantLib;. This makes things a little more verbose, but should be much clearer to read, especially for those not yet up to speed on whether a given object comes from any one of the Boost, QuantLib or Rcpp namespaces. We also generalized an older three-dimensional plotting function used for option surfaces — which had already been used in the demo() code — and improved the code underlying this: arrays of option prices and analytics given two input vectors are now computed at the C++ level for a nice little gain in efficiency. This also illustrates the possible improvements from working with the new Rcpp API that is now used throughout the package,...
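The "arrays of option prices and analytics given two input vectors" idea translates readily to other languages; for instance, a plain-Python Black-Scholes call surface over strike and maturity vectors (illustrative only, not RQuantLib's code):

```python
from math import erf, exp, log, sqrt

# A Black-Scholes call-price surface over strike and maturity vectors,
# analogous to the option-surface arrays described above.  Illustrative
# only; this is not RQuantLib's implementation.
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, tau):
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * tau) / (vol * sqrt(tau))
    d2 = d1 - vol * sqrt(tau)
    return spot * norm_cdf(d1) - strike * exp(-rate * tau) * norm_cdf(d2)

strikes = [90, 100, 110]
maturities = [0.25, 0.5, 1.0]
surface = [[bs_call(100, k, 0.01, 0.2, t) for k in strikes] for t in maturities]
for row in surface:
    print([round(p, 2) for p in row])
```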
str_to_sympy function
(Shortest import:
from brian2.parsing.sympytools import str_to_sympy)
- brian2.parsing.sympytools.str_to_sympy(expr, variables=None)[source]
Parses a string into a sympy expression. There are two reasons for not using `sympify` directly: 1) `sympify` does a `from sympy import *`, adding all functions to its namespace. This leads to issues when trying to use sympy function names as variable names. For example, both `beta` and `factor` – quite reasonable names for variables – are sympy functions; using them as variables would lead to a parsing error. 2) We want to use a common syntax across expressions and statements, e.g. we want to allow the use of `and` (instead of `&`) and function names like `ceil` (instead of `ceiling`).
- Parameters
expr : str
The string expression to parse.
variables : dict, optional
- Returns
s_expr :
A sympy expression
Raises
SyntaxError
In case of any problems during parsing.
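As a rough illustration of motivation (2), a toy regex pass that maps the Python-style names onto the sympy-style ones; this is not Brian2's actual implementation, which parses the expression properly rather than string-substituting:

```python
import re

# Toy sketch of motivation (2): normalize Python-style boolean keywords and
# function names to the forms sympy expects.  NOT Brian2's actual code,
# which parses the expression instead of string-substituting.
_REPLACEMENTS = {"and": "&", "or": "|", "not": "~", "ceil": "ceiling"}

def normalize(expr):
    pattern = r"\b(%s)\b" % "|".join(_REPLACEMENTS)
    return re.sub(pattern, lambda m: _REPLACEMENTS[m.group(1)], expr)

print(normalize("ceil(v) > 1 and not spiked"))   # → ceiling(v) > 1 & ~ spiked
```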
errno.h - system error numbers
#include <errno.h>
[CX]
Some of the functionality described on this reference page extends the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of IEEE Std 1003.1-2001 defers to the ISO C standard.
[CX]
The ISO C standard only requires the symbols [EDOM], [EILSEQ], and [ERANGE] to be defined.
- [ENODATA]
- [XSR]
No message is available on the STREAM head read queue.
- [ENOSR]
- [XSR]
No STREAM resources.
- [ENOSTR]
- [XSR]
Not a STREAM.
- [ETIME]
- [XSR]
Stream ioctl() timeout.
- [ETIMEDOUT]
- Connection timed out.
- [ETXTBSY]
- Text file busy.
- [EWOULDBLOCK]
- Operation would block (may be the same value as [EAGAIN]).
- [EXDEV]
- Cross-device link.
Additional error numbers may be defined on conforming systems; see the System Interfaces volume of IEEE Std 1003.1-2001.
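Although this header is a C specification, the same symbolic names surface in other runtimes. For instance, Python's standard errno module mirrors them, which makes the "distinct positive values" requirement easy to observe (illustrative only, not part of the specification):

```python
import errno
import os

# Symbolic error numbers are distinct positive integers, as required above.
assert errno.EDOM > 0 and errno.ERANGE > 0 and errno.EDOM != errno.ERANGE

# os.strerror() maps an error number to its implementation-defined message.
print(errno.EDOM, os.strerror(errno.EDOM))
```

The message text returned by strerror() is implementation-defined, so only the symbolic names are portable.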
None.
None.
The System Interfaces volume of IEEE Std 1003.1-2001, Section 2.3, Error Numbers
First released in Issue 1. Derived from Issue 1 of the SVID.
Updated for alignment with the POSIX Realtime Extension.
The following new requirements on POSIX implementations derive from alignment with the Single UNIX Specification:
- The majority of the error conditions previously marked as extensions are now mandatory, except for the STREAMS-related error conditions.
- Values for errno are now required to be distinct positive values rather than non-zero values. This change is for alignment with the ISO/IEC 9899:1999 standard.
This section describes a number of common use cases in which Smooks can be used.
Templating
Groovy Scripting
Support for Groovy based scripting is made available through the configuration namespace. This adds support for DOM or SAX based Groovy scripting.
Example configuration:
Processing Non-XML Data (CSV, EDI, JSON, Java etc)
Java Binding
Java to Java Transformations
Processing Huge Messages (GBs)
Message Splitting & Routing
Please refer to the Splitting & Routing section in the previous section.
Persistence (Database Reading and Writing)
Message Enrichment
Use the SQLExecutor to query a database. The queried data will be bound to the bean context (ExecutionContext). Use the bound query data to enrich your messages e.g. where you are splitting and routing.
TODO!! | http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=73793584&selectedPageVersions=52&selectedPageVersions=51 | CC-MAIN-2014-15 | refinedweb | 118 | 56.76 |
Thank you @m_adam it's really helpful! I was going to a wrong direction. I chose to do a project that has every new aspect to explore :)) I will for sure disturb you in future with my questions. I appreciate your suggestions.
Parvin
I have 90 strut shaping a sphere in this scene. each strut is a linear cloner of cubes and they represent LED strips. I want to animate the light in realtime. So, I am trying to animate the sphere with effectors and change the color, then send the color data through Lumos Library to a Raspberry Pi to animate light. I'm awfully new to C4D and Python in C4D.
I know I have to give address to each cube and link it with the address I have for the LED pixel. But that goes way further. what I'm struggling with now is capturing data of each cloner.
Thanks for your patience,
Parvin
Oh I see. I take that note for python effector, thanks. I think I explained it in a very wrong way because as I see you have a cloner as the child of other cloner. In my example, I have 90 different cloners. I will attach my example.
P
@m_adam when I ran the code you posted, I got the error "Failed to retrieves op and its child". I am calling each function in the main to realize what each of them is doing. But I think all of them are linked to one another.
So my main question is that when you say I need to select my objects in order, how should I exactly do this? I put the Python Effector in the effectors list for each 90 cloners that I have, and I assumed this would work.
Thanks,
Parvin
Wow! thanks Adam. I have to take my time to process all the useful information you gave me. I will reply on this post if I have further questions.
Hi,
I'm sorry if I'm posting a repeated topic. I am trying to get color data from multiple mo-graph in python to send out to a micro controller for realtime animation.
I am explaining what I've done so far
I have 90 linear cloners of cubes (there are 30 of them in each cloner). I want to get color data from each clone and send the rgb color to the controller. After a couple of hours searching I came to this conclusion: make a blank 2D list, append each cloner as an object to the list, and therefore I can have an ID to retrieve the data from. From my one week of experience in C4D and Python, I wrote this code. I set 2 cloners as user data. This is absolutely not working. If anyone has done something similar to what I want to do, his/her help would be much appreciated.
import c4d
from c4d.modules import mograph as mo
from c4d import utils

def main():
    n = 2
    m = 33
    Matrix = [[0] * m for i in range(n)]
    Obj1 = op[c4d.ID_USERDATA,2]
    Obj2 = op[c4d.ID_USERDATA,1]
    count = len(Obj1)
    print count
    for j in range (32):
        Matrix[j][0] = Obj1
        Matrix[j][1] = Obj2
    md = mo.GeGetMoData(op)
    if md==None:
        return False
    cnt = md.GetCount()
    marr = md.GetArray(c4d.MODATA_MATRIX)
    carr = md.GetArray(c4d.MODATA_COLOR)
    print carr
    for i in reversed(xrange(0, cnt)):
        md.SetArray(c4d.MODATA_COLOR, carr, True)
        md.SetArray(c4d.MODATA_WEIGHT, warr, True)
    return True
Hello,
I need to simulate a stream which behaves exactly like a ostringstream, except when handling objects of type color. What is the best way to do this?
user.display() << color() << "Name: " << color(color::Red)
               << setw(10) << left << str_name << endl
               << color() << "Date: " << color(color::Green)
               << setw(10) << right << int_date << endl;
should display the same as...
user.display() << color() << "Name: " << setw(10) << left
               << color(color::Red) << str_name << endl
               << color() << "Date: " << setw(10) << right
               << color(color::Green) << int_date << endl;
I am programming a telnet server that has to display text in color. The color codes are ANSI escape sequences, i.e. characters. These characters, when passed to ostringstream via <<, mess up setw() formatting alignment.
I need a mechanism to receive and process escape sequence characters from operator << without passing them on to ostringstream. What is the best way to do this?
I could write a wrapper that emulates ostringstream but then I would have to provide every overloaded << function. That's a lot of busy work. There has to be a better way.
Last night, I tried to inherit public ostringstream and provide operator << for my color class. I realized that would not work because ostringstream operator<< returns a reference to ostringstream and not my class. Could I overload operator << in color() to return console()? How would that translate into code?
class console: public std::ostringstream {
public:
    console& (operator<<) ( color &c ) {
        std::cout << "color: " << c.i << std::endl;
    } // wont work if color is not the first obj passed to the stream
};
Next I tried to emulate the entire ostringstream interface. This works except for setw, which gives me compile error "error: declaration of 'operator<<' as non-function" in g++.
#include <iostream>
#include <iomanip>
using std::setw;

class console {
private:
    std::ostringstream oStr;
public:
    console& (operator<<) ( color &c ) {
        std::cout << "color: " << c.i << std::endl;
    }
    console& (operator<<) ( std::string &s ) {
        std::cout << "string: " << s << std::endl;
        oStr << s;
    }
    console& (operator<<) ( setw p ) {
        std::cout << sw << "setw: " << std::endl;
        oStr << p;
    }
    void Render() {
        std::cout << oStr.str();
    }
};
ideas? Thanks in advance. | https://www.daniweb.com/programming/software-development/threads/266065/string-stream-operator-special-handling-for-user-class | CC-MAIN-2016-44 | refinedweb | 335 | 56.86 |
At 12:15 PM 10/7/2005 -0700, Martin Maly wrote:
Based on the binding rules described in the Python documentation, I would expect the code to throw because binding created on the line (1) is local to the class block and all the other __doc__ uses should reference that binding. Apparently, it is not the case.
Correct - the scoping rules about local bindings causing a symbol to be local only apply to *function* scopes. Class scopes are able to refer to module-level names until the name is shadowed in the class scope.
Is this bug in Python or are __doc__ strings in classes subject to some additional rules?
Neither; the behavior you're seeing doesn't have anything to do with docstrings per se, it's just normal Python binding behavior, coupled with the fact that the class' docstring isn't set until the class suite is completed.
It's currently acceptable (if questionable style) to do things like this in today's Python:
X = 1
class X: X = X + 1
print X.X # this will print "2"
More commonly, and less questionably, this would manifest as something like:
def function_taking_foo(foo, bar): ...
class Foo(blah): function_taking_foo = function_taking_foo
This makes it possible to call 'function_taking_foo(aFooInstance, someBar)' or 'aFooInstance.function_taking_foo(someBar)'. I've used this pattern a couple times myself, and I believe there may actually be cases in the standard library that do something like this, although maybe not binding the method under the same name as the function. | https://mail.python.org/archives/list/python-dev@python.org/thread/ZCGNZ4JY3MUE4QTAWFWSVDOYDDJECVD6/ | CC-MAIN-2021-43 | refinedweb | 252 | 56.08 |
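Both behaviors discussed above can be verified directly. Here is a self-contained Python 3 check (syntax updated from the thread's Python 2 examples):

```python
X = 1

class C:            # class suites can read module-level names
    X = X + 1       # reads module-level X (1), then binds class attribute X

assert C.X == 2

def function_taking_foo(foo, bar):
    return (foo, bar)

class Foo:
    # Bind the module-level function as a method under the same name.
    function_taking_foo = function_taking_foo

a_foo = Foo()
# Both call styles reach the same function:
assert a_foo.function_taking_foo("bar") == (a_foo, "bar")
assert function_taking_foo(a_foo, "bar") == (a_foo, "bar")
```

Inside a function body, by contrast, the assignment would make the name local to the whole scope and the read would raise UnboundLocalError, which is the distinction the thread is about.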
I am trying to convert boolean to string type...
Boolean b = true;
String str = String.valueOf(b);
Boolean b = true;
String str = Boolean.toString(b);
I don't think there would be any significant performance difference between them, but I would prefer the 1st way.
If you have a Boolean reference, Boolean.toString(boolean) will throw NullPointerException if your reference is null, as the reference is unboxed to boolean before being passed to the method.

String.valueOf(), on the other hand, does an explicit null check, as the source code shows:
public static String valueOf(Object obj) {
    return (obj == null) ? "null" : obj.toString();
}
Just test this code:
Boolean b = null;
System.out.println(String.valueOf(b));   // Prints null
System.out.println(Boolean.toString(b)); // Throws NPE
For primitive boolean, there is no difference. | https://codedump.io/share/FgioCWU8n7Bn/1/best-approach-to-converting-boolean-object-to-string-in-java | CC-MAIN-2017-13 | refinedweb | 131 | 61.22 |
User Interfaces
After reading this guide, you’ll know:
- How to build reusable client side components in any user interface framework.
- How to build a style guide to allow you to visually test such reusable components.
- Patterns for building front end components in a performant way in Meteor.
- How to build user interfaces in a maintainable and extensible way.
- How to build components that can cope with a variety of different data sources.
- How to use animation to keep users informed of changes.
View layers
Meteor officially supports three user interface (UI) rendering libraries, Blaze, React and Angular. Blaze was created as part of Meteor when it launched in 2011, React was created by Facebook in 2013, and Angular was created by Google in 2010. All three have been used successfully by large production apps. Blaze is the easiest to learn and has the most full-stack Meteor packages, but React and Angular are more developed and have larger communities.
Syntax
- Blaze uses an easy-to-learn Handlebars-like template syntax, with logic like {{#if}} and {{#each}} interspersed in your HTML files. Template functions and CSS-selector event maps are written in JavaScript files.
Community
- Blaze has many full-stack Meteor packages on Atmosphere, such as useraccounts:core and aldeed:autoform.
- React has 42k stars on Github and 13k npm libraries.
- Angular has 12k stars on Github and 4k npm libraries.
Performance
- One test benchmarks Angular 2 as the best, followed by React and Angular 1, followed by Blaze.
Mobile
- Cordova
- All three libraries work fine in a Cordova web view, and you can use mobile CSS libraries like Ionic’s CSS with any view library.
- The most advanced mobile web framework is Ionic 2, which uses Angular 2.
- Ionic 1 uses Angular 1, but there are also Blaze and React ports.
- Another good option is Onsen UI, which includes a React version.
- Native
- You can connect any native iOS or Android app to a Meteor server via DDP. For iOS, use the meteor-ios framework.
- You can write apps with native UI elements in JavaScript using React Native. For the most recent information on how to use React Native with Meteor, see this reference.
UI components
Regardless of the view layer that you are using, there are some patterns in how you build your User Interface (UI) that will help make your app’s code easier to understand, test, and maintain. These patterns, much like general patterns of modularity, revolve around making the interfaces to your UI elements very clear and avoiding using techniques that bypass these known interfaces.
In this article, we’ll refer to the elements in your user interface as “components”. Although in some systems, you may refer to them as “templates”, it can be a good idea to think of them as something more like a component, which has an API and internal logic, rather than a template, which is just a bit of HTML.
To begin with, let’s consider two categories of UI components that are useful to think about, “reusable” and “smart”:
Reusable: a template that decides what to render solely based on its arguments. Pure components are even easier to reason about and test than reusable ones and so should be preferred wherever possible.
Global data stores:
- Collections
- Accounts information, like Meteor.user() and Meteor.loggingIn()
- Current route information
- Any other client-side data stores (read more in the Data Loading article):
The JavaScript of this component is responsible for subscribing and fetching the data that’s used by the
Lists_show template itself:
Visually testing
For instance, in Galaxy, we use a component explorer called Chromatic to render each component one specification at a time or with all specifications at once.
Using Chromatic enables rapid development of complex components. Typically in a large application, it can be quite difficult to achieve certain states of components purely by “using” the application. For example, a component in Galaxy can enter a complex state if two deploys of the same app happen simultaneously. With Chromatic we’re able to define this state at the component level and test it independently of the application logic.
You can use Chromatic component explorer in your Meteor + React app with
meteor add mdg:chromatic. Similar projects built in React are UI Harness by Phil Cockfield and React Storybook by Arunoda Susiripala.
User interface patterns
Here are some patterns that are useful to keep in mind when building the user interface of your Meteor application.
Internationalization
Internationalization (i18n) is the process of generalizing the UI of your app in such a way that it’s easy to render all text in a different language. Meteor’s package ecosystem includes internationalization options tailored to your frontend framework of choice.
Places to translate
It’s useful to consider the various places in the system that user-readable strings exist and make sure that you are properly using the i18n system to generate those strings in each case. We’ll go over the implementation for each case in the sections about tap:i18n and universe:i18n below.
- HTML templates and components. This is the most obvious place—in the content of UI components that the user sees.
- Client JavaScript messages. Alerts or other messages that are generated on the client side are shown to the user, and should also be translated.
- Server JavaScript messages and emails. Messages or errors generated by the server can often be user visible. An obvious place is emails and any other server generated messages, such as mobile push notifications, but more subtle places are return values and error messages on method calls. Errors should be sent over the wire in a generic form and translated on the client.
- Data in the database. A final place where you may want to translate is actual user-generated data in your database. For example, if you are running a wiki, you might want to have a mechanism for the wiki pages to be translated to different languages. How you go about this will likely be unique to your application.
Using tap:i18n in JavaScript
In Meteor, the excellent tap:i18n package provides an API for building translations and using them in your components and frontend code.

To use tap:i18n, first meteor add tap:i18n to add it to your app. Then we need to add a translation JSON file for our default language (en for English) – we can put it at i18n/en.i18n.json. Once we've done that we can import and use the TAPi18n.__() function to get translations for strings or keys within our JavaScript code.

For instance for errors in the Todos example app, we create an errors module that allows us to easily alert a translated error for all of the errors that we can potentially throw from methods:
The error.error field is the first argument to the Meteor.Error constructor, and we use it to uniquely name and namespace all the errors we use in the application. We then define the English text of those errors in i18n/en.i18n.json:
Using tap:i18n in Blaze

We can also easily use translations in Blaze templates. To do so, we can use the {{_ }} helper. In the Todos app we use the actual string that we want to output in English as the i18n key, which means we don't need to provide an English translation, although perhaps in a real app you might want to provide keys from the beginning.

For example in app-not-found.html:
Changing language
To set and change the language that a user is seeing, you should call TAPi18n.setLanguage(fn), where fn is a (possibly reactive) function that returns the current language. For instance you could write

Then somewhere in your UI you can call CurrentLanguage.set('es') when a user chooses a new language.
Using universe:i18n in React

For React-based apps, the universe:i18n package presents an alternative solution to tap:i18n. universe:i18n adopts similar conventions to tap:i18n, but also includes a convenient drop-in React component and omits tap:i18n's dependencies on Meteor's templating and jquery packages. universe:i18n was intended for Meteor React applications using ES2015 modules, but it can be used without React or modules.
Using universe:i18n in JS

To get started, run meteor add universe:i18n to add it to your app. Add an English (en-US) translation file in JSON format to your app with the name en-us.i18n.json. Translation files can be identified by file name or with the {"_locale": "en-US"} JSON property. The YAML file format is also supported.
If your app uses ES2015 modules included from client/main.js and server/main.js entry points, import your JSON file(s) there. The i18n.__() function will now locate keys you pass.

Borrowing from the tap:i18n example above, in universe:i18n our displayError function now looks like this:

To change the user's language, use i18n.setLocale('en-US'). universe:i18n allows retrieval of additional translations by method as well as including JSON files with a client bundle.
Using universe:i18n in React components

To add reactive i18n inline in your React components, simply use the i18n.createComponent() function and pass it keys from your translation file. Here's an example of a simple component wrapping i18n's translation component:

See the documentation for universe:i18n for additional options and configuration.
Event handling.
Throttling method calls on user action.
If you do not, you’ll see performance problems across the board: you’ll be flooding the user’s network connection with a lot of small changes, the UI will update on every keystroke, potentially causing poor performance, and your database will suffer with a lot of writes.
To throttle writes, a typical approach is to use underscore's .throttle() or .debounce() functions. For instance, in the Todos example app, we throttle writes on user input to 300ms.
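The Todos code itself is not reproduced here. As a language-neutral illustration of the technique, here is a leading-edge-only throttle sketched in Python (my own sketch, not underscore's implementation, which also supports trailing invocation):

```python
import time

def throttle(wait_seconds):
    """Return a decorator that drops calls arriving within wait_seconds."""
    def decorator(fn):
        last_call = [float("-inf")]   # mutable cell shared by all calls
        def wrapped(*args, **kwargs):
            now = time.monotonic()
            if now - last_call[0] >= wait_seconds:
                last_call[0] = now
                return fn(*args, **kwargs)
            return None               # call suppressed
        return wrapped
    return decorator

calls = []

@throttle(0.3)
def save_text(value):
    calls.append(value)

save_text("a")
save_text("b")   # arrives within 300ms of "a", so it is dropped
```

A debounce differs in that it delays the call until input has been quiet for the whole interval, which is usually what you want for text fields.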
Limiting.
User experience patterns
User experience, or UX, describes the experience of a user as they interact with your application. There are several UX patterns that are typical to most Meteor apps which are worth exploring here. Many of these patterns are tied to the way data is loaded as the user interacts with your app, so there are similar sections in the data loading article talking about how to implement these patterns using Meteor’s publications and subscriptions.
Subscription readiness
When you subscribe to data in Meteor, it does not become instantly available on the client. Typically the user will need to wait for a few hundred milliseconds, or as long as a few seconds (depending on the connection speed), for the data to arrive. This is especially noticeable when the app is first starting up or you move between screens that are displaying completely new data.
There are a few UX techniques for dealing with this waiting:
We do this with Blaze's Template.subscriptionsReady which is perfect for this purpose, as it waits for all the subscriptions that the current component has asked for to become ready.
Per-component loading state.
We achieve this by passing the readiness of the todos list down from the smart component which is subscribing (the listShowPage) into the reusable component which renders the data:

And then we use that state to determine what to render in the reusable component (listShow):
Showing placeholders.
For example, in Galaxy, while you wait for your app’s log to load, you see a loading state indicating what you might see:
Using the style guide to prototype loading state
Loading states are notoriously difficult to work on visually as they are by definition transient and often are barely noticeable in a development environment where subscriptions load almost instantly.
This is one reason why being able to achieve any state at will in the component style guide is so useful. As our reusable component Lists_show simply chooses to render based on its todosReady argument and does not concern itself with a subscription, it is trivial to render its loading state in a style guide.
Pagination
In the Data Loading article we discuss a pattern of paging through an “infinite scroll” type subscription which loads one page of data at a time as a user scrolls down the page. It’s interesting to consider UX patterns to consume that data and indicate what’s happening to the user.
A list component
Let’s consider any generic item-listing component. To focus on a concrete example, we could consider the todo list in the Todos example app. Although it does not in our current example app, in a future version it could paginate through the todos for a given list.
There are a variety of states that such a list can be in:
- Initially loading, no data available yet.
- Showing a subset of the items with more available.
- Showing a subset of the items with more loading.
- Showing all the items - no more available.
- Showing no items because none exist.
It’s instructive to think about what arguments such a component would need to differentiate between those five states. Let’s consider a generic pattern that would work in all cases where we provide the following information:
- A count of the total number of items.
- A countReady boolean that indicates if we know that count yet (remember we need to load even that information).
- A number of items that we have requested.
- A list of items that we currently know about.
We can now distinguish between the 5 states above based on these conditions:
1. countReady is false, or count > 0 and items is still empty. (These are actually two different states, but it doesn't seem important to visually separate them).
2. items.length === requested && requested < count
3. 0 < items.length < requested
4. items.length === requested && requested === count && count > 0
5. count === 0
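Because the state is fully determined by the arguments, the conditions are easy to encode and unit-test directly. A small Python sketch (my own illustration, not code from the Todos app):

```python
def list_state(count, count_ready, requested, items):
    """Map the four list arguments onto one of the five UI states."""
    n = len(items)
    if not count_ready or (count > 0 and n == 0):
        return "initially loading"
    if n == requested and requested < count:
        return "subset, more available"
    if 0 < n < requested:
        return "subset, more loading"
    if n == requested == count and count > 0:
        return "all items shown"
    if count == 0:
        return "no items exist"
    return "inconsistent arguments"
```

Having this mapping as a pure function is exactly what makes each state trivial to pin down in a component style guide.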
You can see that although the situation is a little complex, it’s also completely determined by the arguments and thus very much testable. A component style guide helps immeasurably in seeing all these states easily! In Galaxy we have each state in our style guide for each of the lists of our app and we can ensure all work as expected and appear correctly:
A pagination “controller” pattern
A list is also a good opportunity to understand the benefits of the smart vs reusable component split. We’ve seen above that correctly rendering and visualizing all the possible states of a list is non-trivial and is made much easier by having a reusable list component that takes all the required information in as arguments.
However, we still need to subscribe to the list of items and the count, and collect that data somewhere. To do this, it’s sensible to use a smart wrapper component (analogous to an MVC “controller”) whose job it is to subscribe and fetch the relevant data.
In the Todos example app, we already have a wrapping component for the list that talks to the router and sets up subscriptions. This component could easily be extended to understand pagination:
UX patterns for displaying new data
An interesting UX challenge in a realtime system like Meteor involves how to bring new information (like changing data in a list) to the user's attention.
The reusable sub-component can then use the hasChanges argument to determine if it should show some kind of callout to the user to indicate changes are available, and then use the onShowChanges callback to trigger them to be shown.
Optimistic UI.
Indicating when:
Of course in this scenario, you also need to be prepared for the server to fail, and again, indicate it to the user somehow.
Unexpected failures:
Animation
Animation is the process of indicating changes in the UI smoothly over time rather than instantly. Although animation is often seen as “window dressing” or purely aesthetic, in fact it serves a very important purpose, highlighted by the example of the changing list above. In a connected-client world where changes in the UI aren’t always initiated by user action (i.e. sometimes they happen as a result of the server pushing changes made by other users), instant changes can result in a user experience where it’s difficult to understand what is happening.
Animating changes in visibility.
Animating:
Animating. | https://guide.meteor.com/ui-ux.html | CC-MAIN-2018-09 | refinedweb | 2,708 | 52.6 |
Hidden three came out the same day as HV19.11 Frolicsome Santa Jokes API came out. And as the API was the first challenge were we had to deal with a remote server, maybe the flag is hidden on the remote server.
A quick scan using nmap reveals that there is another port opened: port 17. Connecting using ncat got me a single character which seem to not change. I wrote a small python script to connect, receive the character, check if it changed, if so, log character, start over.
import socket

LAST_CHAR = ''

def write(c):
    with open('log', 'a') as file:
        file.write(c)

while True:
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('whale.hacking-lab.com', 17))
        char = s.recv(8).decode()[0]
        if LAST_CHAR != char:
            LAST_CHAR = char
            write(char)
    except:
        pass
While looking at the log from time to time it shows, that the character changes every hour. So this script does a lot of useless requests, as you only need one request per hour.
After a bit more than 24 hours the log file looked like the following:
AILYfl4g}HV19{an0ther_DAILY
Rearranging the characters gives us the flag, which is HV19{an0ther_DAILY_fl4g}.
Hierarchical Data Format (HDF)
- Designed to store and organize large amounts of data.
- Store multiple data files in a single data file!
- Different types of information.
- Self describing (metadata included in the file)
- Properties[ref] :
- Datasets (numpy arrays): fast slicing, compression.
- Group (dictionaries): nesting, POSIX path syntax.
- Attributrs (metadata): datasets/group, key-value.
- HDF5 is row based and really effient than csv for very large file size[ref] .
- Extensions:
.h5,
.hdf,
.hdf4, …
- Tool: HDFView
- Example[ref] :
An example HDF5 file structure which contains groups, datasets and associated metadata.
import h5py
f = h5py.File('mytestfile.hdf5', 'r')  # read a file
# h5py.File acts like a Python dict
dset = f['mydataset']
dset.attrs  # attributes
t-digest
later
Programming a Minew Tech iBeacon Bluetooth module - Nordic nRF51822
Introduction
The Minew Tech i4 Pilot iBeacon is a very low cost Bluetooth module based on the popular Nordic nRF51822 SoC, which is already supported by the mbed Compiler. The board includes a 1000mAh coin-cell battery (CR2477), making it suitable for applications where it is desirable not to replace the battery for at least a couple of years.
The module can be bought on Alibaba
Connections
Programming the MiniBeacon board might be a bit tricky because the module does not include a dedicated CMSIS-DAP interface, as opposed to the classic mbed boards. However, the board populates the SWD (Serial Wire Debug) pins, therefore it is possible to use an external programmer, for example the nRF51-DK that includes the J-Link interface, and make the required connections.
Looking at the schematics of the nRF51-DK board, it is clear which pins should be used:
Programming
Writing programs for the MiniBeacon is easy. Make sure that you choose the correct platform, for example nRF51822-mKIT. Then compile your code and download the binary to the J-Link interface. The following code snippet shows how to access to the LEDs populated on the board.
#include "mbed.h" //miniBeacon board DigitalOut myled3(P0_12); DigitalOut myled2(P0_15); DigitalOut myled1(P0_16); int main() { while(1) { myled3 = 1; myled2 = 1; myled1 = 1; wait(0.05); myled3 = 0; myled2 = 0; myled1 = 0; wait(1.95); } }
8 comments on Programming a Minew Tech iBeacon Bluetooth module - Nordic nRF51822:
Does this iBeacon have pins routed so I could connect to a sensor via I2C or SPI? Maybe if I desolder the LEDs - could those pins be used for I2C? | https://os.mbed.com/users/MarceloSalazar/notebook/programming-a-minibeacon-bluetooth-module-nordic-n/ | CC-MAIN-2018-13 | refinedweb | 286 | 58.82 |
wcsncpy - copy part of a wide-character string
#include <wchar.h> wchar_t *wcsncpy(wchar_t *ws1, const wchar_t *ws2, size_t n);
The wcsncpy() function copies not more than n wide-character codes (wide-character codes that follow a null wide-character code are not copied) from the array pointed to by ws2 to the array pointed to by ws1. If copying takes place between objects that overlap, the behaviour is undefined.
If the array pointed to by ws2 is a wide-character string that is shorter than n wide-character codes, null wide-character codes are appended to the copy in the array pointed to by ws1, until n wide-character codes in all are written.
The wcsncpy() function returns ws1; no return value is reserved to indicate an error.
No errors are defined.
None.
Wide-character code movement is performed differently in different implementations. Thus overlapping moves may yield surprises.
If there is no null wide-character code in the first n wide-character codes of the array pointed to by ws2, the result will not be null-terminated.
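The copy, truncation, and padding rules above can be modeled in a few lines of Python (illustrative only; the real function writes wchar_t codes into ws1 in place and returns ws1):

```python
def wcsncpy_model(ws2, n):
    """Return the n (or fewer) codes wcsncpy would write into ws1."""
    out = []
    for ch in ws2:
        # Stop once n codes are copied, or at the first null code.
        if len(out) == n or ch == "\0":
            break
        out.append(ch)
    # Pad with null codes only when the source ran out before n codes;
    # if n codes were copied without seeing a null, nothing is appended,
    # so the result is not null-terminated -- the pitfall noted above.
    while len(out) < n:
        out.append("\0")
    return out
```

For example, copying "hi" with n=5 yields three trailing null codes, while copying "hello world" with n=5 yields exactly five codes and no terminator.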
None.
wcscpy(), <wchar.h>.
Derived from the MSE working draft. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/wcsncpy.html | CC-MAIN-2017-17 | refinedweb | 189 | 62.27 |
Finite State Machines, often abbreviated as FSM is a mathematical computation model that could be useful for building user interfaces, especially nowadays that front-end apps are becoming much more complex due to the nature of the problems that they solve. Did you know that 🧑🚀 SpaceX used JavaScript for the spaceship flight interface? 🤯.
In this article, I'm going to explain the benefits of composing user interfaces using finite state machines. Let's dive in! 🤿
What is a finite state machine?
A finite state machine is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another, this change is called a transition.
A FSM is defined by:
- Σ: The input alphabet.
- S : A finite, non-empty set of states.
- δ : The state-transition function (δ: S x Σ -> S).
- s0 : The initial state, an element of S.
- F : The set of accepting states.
Now you're probably like wtf 🤯, this sounds scary 😱 and academic, right? Let's try to illustrate this definition with a real world example to understand it better.
Understanding a FSM
The world is full of finite state machines, in fact, you are using them every day, but probably you didn't think of them as such. I'm sure that after reading the post you'll start pointing them in the real world, trust me I'm doing it right now 😂
A traffic light 🚦 is a simple example to understand FSM. For the sake of this example, consider that our traffic light has 3 colors.
At any point in time, the traffic light will be on one of the following scenarios:
- 🟢 Green
- 🟡 Yellow
- 🔴 Red
Those scenarios are called states and because the set is limited to 3 states we can say that is finite.
The initial state of the machine is 🟢 green and whenever one of the lights is on the output of the other ones is off.
The state will change in response to an input, that in our case is a timer, through a transition. The transition is a mapping that defines the path of our state.
Let's represent the traffic light FSM on a graphic, so we can visualize the state transitions and understand how the machine works. Usually, you'll see the FSM represented like this 📸:
Try to link this example with the mathematical definition we introduced before! Seems easier right? ☺️
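To make the definition concrete, the traffic light above can be written in a few lines of plain JavaScript (an illustrative sketch; TIMER is just the label I'm using for the timer input):

```javascript
// A minimal traffic light FSM: 3 states, one input (TIMER),
// and a transition function δ implemented as a lookup table.
const trafficLight = {
  initial: 'green',
  states: {
    green:  { on: { TIMER: 'yellow' } },
    yellow: { on: { TIMER: 'red' } },
    red:    { on: { TIMER: 'green' } },
  },
};

// δ: S x Σ -> S. Given the current state and an input,
// return the next state (stay put on an unknown input).
function transition(machine, state, event) {
  return machine.states[state].on[event] ?? state;
}

let state = trafficLight.initial;                 // 'green'
state = transition(trafficLight, state, 'TIMER'); // 'yellow'
state = transition(trafficLight, state, 'TIMER'); // 'red'
state = transition(trafficLight, state, 'TIMER'); // 'green' again
```

Every part of the mathematical definition maps to a line here: the states S, the input alphabet Σ, the initial state s0 and the transition function δ.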
Ok 🆒! I explained how a traffic light works, but now what? How can we use this model to compose better UIs? 🤔. Now that we understand how FSM works, we're going to code a JavaScript application to see the advantages and benefits! 👀
Implementing a FSM with JavaScript
The traffic light is a simple example to understand the concept of FSM. However, to showcase all the benefits and the potential of this concept, we're going to build something a little bit more complex, such as a UI that could potentially fail due to external circumstances.
The application that we're going to build is a UI with a button, whenever the button is pressed we're going to call an external API and we're going to render the response of the API in our app.
Defining the state machine
Before starting to code, as we've seen in our previous example, the first thing we need to do is defining our state machine.
This is actually the first benefit. Why? Because you have to define the FSM from the very start, and this process helps you plan and consider all the possible states of your UI. So basically you won't miss any edge case.
This way of approaching a problem is called 🔝 ⬇️ top-down approach. Instead of trying to solve a specific part of the problem without understanding it fully ⬇️ 🆙 bottom-up, first, you define the whole model of your application.
This would be the statechart of the application we're going to build:
As you can see, we defined all the possible states of the user interface and also the transitions between them.
Idle: The initial state.
Fetching: The state where the UI is fetching the API.
Fulfilled: The state when the API fetch succeeds.
Rejected: The state when the API fetch fails.
Now, we can define for each state, the output and behaviour of our application. This makes our UI deterministic and what this means is that given the current state and an input you'll know what the next state is going to be all the time. When you control every state, you are free of bugs 🐛.
Let's build the wireframes 🎨 to define the output of the application:
Our wireframes, implement all the states that are defined by our FSM. We're ready to move on with coding! 👏.
Implementing the FSM
I'm going to build the FSM using plain JavaScript only. Why? I'll answer this question after the implementation 👀.
The first thing we're going to define is our input alphabet Σ. Based on the statechart we designed before. Inputs are events that will cause a state transition in our FSM. Our Σ will look like this:
const EVENTS = {
  FETCH: 'FETCH',
  RESOLVE: 'RESOLVE',
  REJECT: 'REJECT'
};
Next, we are going to define our set of states S. Also, as we defined, we should set the initial state to Idle as S0.
const STATE = {
  IDLE: 'IDLE',
  FETCHING: 'FETCHING',
  FULFILLED: 'FULFILLED',
  REJECTED: 'REJECTED',
}

const initialState = STATE.IDLE
Finally we're going to combine all those pieces into the FSM. Defining the transitions δ between every state in response to the inputs.
const stateMachine = {
  initial: initialState,
  states: {
    [STATE.IDLE]: {
      on: {
        [EVENTS.FETCH]: STATE.FETCHING
      }
    },
    [STATE.FETCHING]: {
      on: {
        [EVENTS.RESOLVE]: STATE.FULFILLED,
        [EVENTS.REJECT]: STATE.REJECTED,
      }
    },
    [STATE.FULFILLED]: {
      on: {
        [EVENTS.FETCH]: STATE.FETCHING
      }
    },
    [STATE.REJECTED]: {
      on: {
        [EVENTS.FETCH]: STATE.FETCHING
      }
    },
  }
}
The FSM is ready to be used! 🥳.
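To sanity-check the machine, the transition table can be exercised directly as a plain lookup (a self-contained sketch that redeclares the machine with its keys written out literally):

```javascript
// The same machine as above, with the computed keys expanded.
const stateMachine = {
  initial: 'IDLE',
  states: {
    IDLE:      { on: { FETCH: 'FETCHING' } },
    FETCHING:  { on: { RESOLVE: 'FULFILLED', REJECT: 'REJECTED' } },
    FULFILLED: { on: { FETCH: 'FETCHING' } },
    REJECTED:  { on: { FETCH: 'FETCHING' } },
  },
};

// δ: S x Σ -> S. Unknown inputs leave the state unchanged, which is
// exactly what makes the UI deterministic: no event can push the
// machine into a state we didn't define.
const transition = (state, event) =>
  stateMachine.states[state].on[event] ?? state;

let state = stateMachine.initial;      // 'IDLE'
state = transition(state, 'FETCH');    // 'FETCHING'
state = transition(state, 'REJECT');   // 'REJECTED'
state = transition(state, 'FETCH');    // 'FETCHING'
state = transition(state, 'RESOLVE');  // 'FULFILLED'
state = transition(state, 'RESOLVE');  // still 'FULFILLED': no such transition
```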
Why did I implement the FSM using plain JavaScript?
Because I want to show how simple it is to create one. As well as to show that FSM is totally decoupled from any library or framework 💯. They don't know anything about rendering, instead, they define the application state flow. This is one of the best things about composing UIs with FSM 😍.
You can abstract and decouple the whole flow from any framework 🙌. You can use this FSM with any library, such as React, React-Native, Vue, Svelte, Preact, Angular...
Demo time 🕹
To see our FSM in action I built a demo app with React so I can show you how our application works! The example is a dummy app that calls a Pokemon API and renders the result, a common task in front-end development nowadays.
Take a look at the CodeSandbox below 👇 and try to break the UI:
- Disable your Internet connection and try to click the button 🔌
- Try to click the button multiple times 👈
- Simulate a slow network connection 🐌
The first thing I did was to define all the UI for every state in our machine, using a simple switch statement to do the conditional rendering:
const App = () => {
  switch (state) {
    case STATES.IDLE:
      return (...)
    case STATES.FETCHING:
      return (...)
    case STATES.FULFILLED:
      return (...)
    case STATES.REJECTED:
      return (...)
    default:
      return null
  }
}
Once our app knows how to render every state of the machine, we need to define the transitions between the states in response to events (inputs). Remember that our FSM has the following inputs: Fetch, Resolve and Reject.

In this case, I'm using a useStateMachine hook from a library, just to avoid having to implement the not-so-relevant React part of the state handling. This hook receives the state machine we defined before as a configuration argument.
const [pokemon, setPokemon] = React.useState(null);
const [state, send] = useStateMachine()(stateMachine);
The hook exposes the state, an object which contains the current state of the machine we defined, and the send function, which is the state transition function (δ: S x Σ -> S). We also have a pokemon state variable to save the API response.

So, to transition from one state to another, we call the send function, passing an input as an argument.

As you can see we have an onFetchPokemon function to make the API request. As soon as you click the button, we send a FETCH input and as a result we transition the state to Fetching. If there's an error, we catch it and send a REJECT input to transition the state to Rejected. If everything works well, we save the response into the pokemon state variable and then send a RESOLVE input to transition the state to Resolved.
const App = () => {
  const onFetchPokemon = async () => {
    try {
      send(EVENTS.FETCH);
      const pokedexRandomNumber = Math.floor(Math.random() * 151) + 1;
      const pokemon = await fetchPokemon(pokedexRandomNumber);
      setPokemon(pokemon);
      send(EVENTS.RESOLVE);
    } catch (ex) {
      send(EVENTS.REJECT);
    }
  };
}
The UI already knows what they need to render on every state, so basically, we've got all the possible cases covered 🎉. We're 100% free of bugs! 💯
Benefits
Let's do a quick summary of the benefits of composing user interfaces with FSM:
- Contemplate and plan all the possible states of the application 🧠
- Document the application flow, to make it more accessible to non-dev people 📝
- Predictable and declarative UIs 📖
- Makes code bases easier to maintain 💖
- No unexpected bugs 🐛
Libraries
I didn't use any library on purpose to build the FSM, but I would really recommend looking at the following tools If you plan to incorporate them into production:
Discussion (1)
Nice, thank you.
You can add javascript-state-machine | https://practicaldev-herokuapp-com.global.ssl.fastly.net/carloscuesta/composing-uis-with-finite-state-machines-39ak | CC-MAIN-2021-39 | refinedweb | 1,598 | 63.49 |
Updated algorithm: Buy XIV when VIX is above 20, sell XIV when VIX is below 12.
Is it a good trade in real market? On paper it looks great, and it's so simple. Any suggestions?
@James Could you explain the line with the comment "Correct look-ahead bias in mapping data to times":
df = df.tshift(1, freq='b')
I am not sure about that line, but the rename_col method is only for getting VIX every day:
def rename_col(df):
    df = df.rename(columns={'CLOSE': 'price'})
    df = df.rename(columns={'VIX Close': 'price'})
    df = df.rename(columns={'Equity Put Volume': 'price'})
    df = df.fillna(method='ffill')
    df = df[['price', 'sid']]
    # Correct look-ahead bias in mapping data to times
    df = df.tshift(1, freq='b')
    df['mean'] = pd.rolling_mean(df['price'], 2)
    log.info(' \n %s ' % df.head())
    return df
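For context, here is a sketch of what tshift(1, freq='b') does, shown with shift(freq="B"), the modern pandas spelling of the now-deprecated tshift: the whole index is pushed forward one business day, so each value only becomes visible the day after it was known, which is what removes the look-ahead bias.

```python
import pandas as pd

# Two business days of hypothetical VIX closes (values are made up).
idx = pd.bdate_range("2023-01-02", periods=2)   # Mon, Tue
df = pd.DataFrame({"price": [21.5, 19.8]}, index=idx)

# tshift(1, freq='b') moved the index forward one business day;
# shift(freq="B") is the modern equivalent of the same operation.
lagged = df.shift(freq="B")

# Monday's close is now stamped Tuesday: the backtest can only "see"
# each value a day after it existed.
print(lagged.index[0])  # Timestamp for Tue 2023-01-03
```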
I just updated the algorithm and the return is amazing.
If you check my Transaction Details after you clone my code and run it, these transactions are very clear based on my simple algorithm, buy low and sell high on both XIV and UVXY.
Simple algorithm: Buy XIV and sell UVXY when VIX is above 20, sell XIV and buy UVXY when VIX is below 12.
I noticed that this algo suffers a failure to launch when testing in the current year.
When I get VIX, sometimes it is not that accurate; this year it didn't report a VIX below 12, which is not true.
Now only trade with XIV, no UVXY.
read this article about how XIV gains through time.
My thinking is very simple: VIX stays around 12-14 most of the time; once in a while (2-3 times a year) it will go up to 20, 25, 30 or even higher, but eventually it will come down to below 15. XIV is the inverse of VIX: when VIX goes up to 20, XIV will be at its low point, but XIV will not stay at its low very long, just like VIX will not stay that high very long; as VIX comes down, XIV will go up. This works because XIV is not UVXY: it gains through time, so holding it for the long term will actually profit you. The same goes for shorting UVXY, because of contango, but shorting is more risky, so that's why I prefer to trade only XIV.
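Boiled down, the rule discussed in this thread is a two-threshold switch. A sketch in plain Python (the thresholds come from the thread; the function is illustrative, not Quantopian API):

```python
# Hypothetical sketch of the rule described above: buy XIV after a VIX
# spike, hold until VIX calms back down. Thresholds come from the post.
BUY_ABOVE = 20.0   # enter XIV when VIX closes above this
SELL_BELOW = 12.0  # exit XIV when VIX closes below this

def next_position(vix_close, holding_xiv):
    """Return 'buy', 'sell', or 'hold' for today's VIX close."""
    if not holding_xiv and vix_close >= BUY_ABOVE:
        return "buy"
    if holding_xiv and vix_close <= SELL_BELOW:
        return "sell"
    return "hold"
```

Note the wide dead band between 12 and 20: this is why a single position can stay open for a year or more, as the transaction records in the thread show.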
Interesting algo - thanks for sharing. The URL ( ) seems to only pull VIX pricing info through 2015. How are the post-2015 prices calculated and compared?
I change to pipeline to pull VIX data, and now it works after 2015.
Hi James, from a previous post you mentioned:
So I'm wondering if your algo would have less drawdown if you waited for the VIX to cross the 20 mark "from above" before entering the XIV position. Otherwise you'll enter XIV at 20 for example, but if VIX is increasing, you'll be losing on the XIV position. I'm just not sure if that can be coded into the algo?
yes, I did try it, but the result somehow is a little worse than this. I think the reason is that when VIX spikes it will quickly drop again, the window to buy VIX is only a few days, so that few days will not cost us too much, but changing the algorithm might miss a few entry windows to buy.
The site above has calculated XIV etc. prices going back to 2004. It would be a good idea to figure out how to download it and run your algorithm on it. XIV has not always been as profitable and there are huge drops (8:1) in the price according to this site. The transitions seem to have become much sharper in the last few years. VIX almost hit 60 in 2008. You would have been in trouble if you bought at 20.
Because XIV is being traded in this algo, and XIV is rebalanced everyday, does the algo account for beta slippage? In instances where the long XIV position is held for months, the slippage may start eating away at profit? Or is the slippage negligible?
XIV will drop while you are holding it, but you will not sell it until VIX go below 12, as you can see from my transaction records, every trade of XIV I made money, every single one of them, that means as long as you hold it and not sell, eventually you can make profit and sell at a profit when VIX reach below 12. This algorithm works!
So the slippage is negligible, that's what I thought with such high returns. This is cool James, congrats man! Are you going live with it soon?
I do notice I get this log when I run it in Q:
1969-12-31 19:00 WARN requests_csv.py:56: UserWarning: Quandl has deprecated unencrypted url functionality. Please format all urls using https:// 1969-12-31 19:00 WARN <string>:29: FutureWarning: pd.rolling_mean is deprecated for Series and will be removed in a future version, replace with Series.rolling(window=2,center=False).mean()
Any thoughts?
I switched to use pipeline to get VIX value, so there is no more deprecated functions. Updated algorithm attached.
Hey all, he is right UVXY is too risky, but TZA isn't. Check this out
Hey Guys,
Did a little more backtesting. Just be careful if trading live. It did not handle the 2011 crash very well. James, Are you putting in any other trading guards?
Oh, I see it in the sid(). Sorry :) The UVXY threw me off. I imagine the beta slippage is considerably greater using a 3x leveraged ETF in TZA.
yeah, I was just lazy and threw it in there. Also, if you are using about 50% leverage for long only, you can get this puppy to almost 10000% if you discount the crash in 2011. Say maybe order(context.VIX, int(10000/current_price-10))? Just check it out. I'm working on putting in trading guards for XIV because this doesn't catch the crash in 2011
I have made some improvements on the algorithm, and the return has been improved a lot!
Hey James, I've been following this thread and have spent quite a few hours playing around with your algo trying to increase the gains while decreasing the drawdown. Haven't had much luck thus far. Your latest update is blowing my mind!
Question for you. When I run the new algo YTD it buys nothing - the reason being that the current VIX price is below 19.80. But, when I run it from 01/01/16 thu yesterday, it's still holding XIV from November. Wondering your thoughts on this. XIV is up 32% so far YTD so it would be nice to capture these gains if you had gone live with this algo on the first of the year. If you were to start live trading this with capital say, tomorrow, would you just deploy it and sit tight and wait for the VIX to get back up to 19.80?
Safest way is to wait until VIX go up again and then buy XIV, but the algorithm bought it on Nov 2016 and hold it until now, so like you said, you can buy XIV now and hold, and wait until the algo say it's time to sell. I bought XIV a couple days ago in my real account and we will see how it goes.
James,
Referring to you latest algo with the mad gains. Do you think you are overfitting since it does not trade much?
All of my improvements are for one goal, which is to buy XIV at lowest and sell it at highest, if you play around with my algo, change a few parameters, you will find out the return will get a little worse, maybe 1/5 of what the return is now, but still very good!
Here is a list of parameters you can change:
1. elif (current_price <= context.sell_price and xiv_current <= xiv_mean+1.6) ------> change 1.6 to 2
2. elif (current_price >= 19.8) --------> change 19.8 to 20
3. context.stop_price = 23.6 and elif (current_price >= 23.6) ----------> change 23.6 to 24
4. price_history = data.history(context.XIV, "price", 20, "1d") --------------> change 20 to 25
5. pipe.add(SimpleMovingAverage(inputs=[cboe_vix.vix_open], window_length=20), 'vix_mean') ---------> change 20 to 25
These are the parameters I have been testing all the time, the ones that I am using now gave the best result, but if you spend some time and play around with them, you will find out that as long as you change those numbers within a certain range(say change 19.8 to any number between 19 and 20), the result are still going to be great.
What that means is the idea of this algorithm works, i select those specific numbers based on past VIX and XIV values, but I think the future results will still be good since all those numbers within a certain range gave a pretty good return, as long as you pick a number in the middle of those ranges, you will be getting good result. I tested all those numbers and their corresponding ranges, I picked the numbers in the middle of their ranges, therefore even if the future VIX and XIV are going to be different, these numbers will be able to handle it.
So test those parameters and find out what ranges these parameters are in will still give decent returns and pick the best numbers for yourself. Read my comments if you don't understand what those parameters are for.
I agree with both Nathan and James. I think there may be some overfitting, but in James's defence the algo's basic concepts seem valid. I created a version that should have less risk of overfitting. I deleted much of James's code as it looks like he was experimenting a fair bit so there was quite a bit of stuff in there that either wasn't being used, or wasn't contributing significantly to gains. There was also some stuff that created risk of overfitting. The gains aren't as high but that seems a reasonable tradeoff for an algo that should be overfit less. Good job though James, you win the award for most intriguing backtest given the sheer magnitude of the returns.
I believe I am right in saying you can user fetcher_csv to load your own data? That being the case my suggestion would be to create ersatz equity curves going back to 2004 using vix futures which approximate VXX and XIV. Upload these and see how these particular parameters performed since 2004.
Returns as spectacular, but the drawdowns are huge - I wouldn't be able to stomach 50% drawdowns.
Could we implement a stoploss some how?
According to my backtesting the system described in this thread (in particular the last implementation by Warren Harding above) is a horrendous disaster when back tested against XIV/VXX simulated prices since 2004.
The VIX spiked up to 80 in late 2008 and this system as drafted was short from 19.80. The drawdown for the system was 98% and it did not recover until late 2013. CAGR was 37% for the period and the standard deviation (annualised) of daily returns was 75%.
Correct me if I'm wrong, but the Algo that James posted would switch over to UVXY once the VIX hits 63:
if data.can_trade(context.XIV):
    # If VIX index is above 59, we buy XIV.
    if (current_price >= 59):
        if (context.sell_price == 0):
            context.sell_price = 12.8
        context.stop_price = 63  # Set a stop price, if VIX goes above it, we sell
        # Place the buy order (positive means buy, negative means sell)
        order_target_percent(context.XIV, 1)
        order_target_percent(context.UVXY, 0)
        log.info("VIX: %s" % current_price)
        log.info("vix_last_price %s" % context.vix_last_price)
    # sell XIV if VIX goes above the stop price we set last time
    elif (current_price >= context.stop_price):
        order_target_percent(context.XIV, 0)
        order_target_percent(context.UVXY, 1)
        log.info("VIX_current____________ %s" % current_price)
        log.info("VIX_stop_price_________ %s" % context.stop_price)
        log.info("xiv_current____________ %s" % xiv_current)
        context.sell_price = 0
        context.stop_price = 100
Thanks for pointing that out to people Anthony.
1). This algo is not for the faint of heart in the first place.
2). My draft does not include the necessary guardrails, it's just a quick simple draft not a finished trading system.
3). I still think there is potential here and I am going to attempt to install some guardrails.
Use at your own risk!!!
I think VIX is a very good short trade. I am basing my trading at IB on a switch between Contango and a level of backwardation. That does away with guessing "ranges". In my view you are most unlikely to be able to contain DD while maintaining CAGR at very high levels.
If I actually make money on this one I will aim to rebalance periodically between the system and cash. So, a 50/50 rebalancing halves CAGR and DD. Needless to say it seems a good idea to trade very small relative to your net worth!
That way horrible DD's should be almost bearable.
Sorry I can't post the backtest - I have not uploaded my futures data. I have done the tests in my own python backtester locally.
In back testing CAGR 98% since 2004, max dd 76%. Using the front month only. XIV/VXX use a sliding combination of front and second.
Its a handsome return even if you allocate 30/70 in favour of cash. What happens going forward? No idea! You can bet it will be nothing like as good as that. Still, here's hoping.
And no doubt you won't see such a good MAR with this thing either over the next decade. XIV will go bust if the spike is big enough. Hence the importance of rebalancing, just in case of disaster.
Warren, I also see there are moments where leverage goes above 1. Do you know of a way to limit that so it could be a set it and forget it algo?
Sure, here's a version that rebalances more often to keep the leverage steady.
Use at your own risk!
Looks ok. It turns $10k into $3M. I thought with apparent returns like that I was going to have to tell you your intraday leverage hits the sky. Only 1.31. This is that code with PvR added. I turned on PnL because there was room since there were no shorts (on by default) to worry about (in record). PvR has max leverage built-in, or for minimal intraday leverage code, use this. [This backtest starts earlier and runs a few extra days].
Always best to do the back testing oneself. In 2008/2009 buy and hold of the XIV would have lost 93% or more of its value - I used futures contracts, front month only, to replicate buy and hold performance since 2004. During that crash VIX rose to 80 and the S&P 500 lost 55% at its worst.
In 1929 and following at its worst the US stock market was down 90% ish.
So, if XIV can lose 93% with the VIX at 80 and the S&P 500 down 55%...it does not have to get an awful lot worse for XIV to go bust.
But look, here is what people so often miss.
Almost everyone on these sort of forums is trying to shoot the lights out. To grow rich and retire on the proceeds of speculation. And nothing wrong with that.
I believe the mistake they usually make however is that they assume huge returns will come with modest volatility and drawdown - I do not believe that to be the case.
Even the great Medallion Fund is a pretty hairy ride.
The real problem for people with less experience is that they invent rules and change parameters so that in back testing their goal is achieved - high return for low vol and dd.
In my own experience at least, this does not come to pass. I have experienced the horrors of curve fitting at first hand. I have been guilty of leading myself up this naive garden path in the past.
Instead of cooking the books, in my view one is better off doing one of two things, or a combination of both. Above I quoted a backtest yielding a CAGR of 98% for a mad DD of 76%. Such impressive results may well not come to pass, but since the tests were based on one simple parameter, provided option sellers continue to demand a reasonable premium for selling call options on the VIX, I am working on the assumption that the trade will not entirely fail.
That being the case if you can't survive a 76% or worse drawdown (who can?) you have two reasonable options.
Trade for peanuts - maybe that is $10,000 in my case, or $100,000 in your case. If you compound, your money grows 30 times in 5 years (in your dreams!). At that stage 76% drawdowns get very destabilising so take some money off the table.
Or look at it another way. Take your entire trading capital and reckon you will devote 10% to this high risk scheme. Leave the rest in a bunch of low risk, short dated bond funds. Every month, quarter or year rebalance to maintain the 10/90 split. Over the long term you would achieve 10 or 11% on your capital for single digit drawdown and single digit volatility.
In short my own feeling is that it is misguided to twist the parameters and add rules. Accept such a scheme for what it is (horribly risky and volatile) and trade small.
Not many people will care for my suggestion.
You make the real money in a number of different ways. If VIX is flattish for a period you will benefit from contango on the short. In a spike up you aim to benefit from going long the vix. After the spike huge profits were made in 2009 going short when vix had hit 80.
The algo in this thread fails on all counts other than when VIX trades within a range.
It is a disaster in a huge spike. People will only be able to see this if they back test back through a period which includes a vast spike such as that in 2008/2009. True in the long term VIX has traded within a fairly predictable range but you will lose your pants in the odd Black Swan event.
People have assumed they can not test back beyond when the ETFs started trading. Incorrect: they can use the futures contracts back to 2004. Having traded futures for years it was simple for me. Not so simple if you have not spent time thinking about the concatenation of futures contracts and how to do it.
Nonetheless not to do so is to make a grave error. As can be seen here.
This algorithm works most times, but when there is a fundamental flaw in our economy, a real crisis with a real cause happens, you get out of this algorithm, and wait until the crisis pass. Until VIX fall back into that range again, you can continue trade with the algo.
a real crisis with a real cause happens, you get out of this algorithm, and wait until the crisis pass
Didn't look at your algo James only Warren's. And that ain't what Warren's does. Warren's is short right the way up the spike. Maybe yours is different? Or are you relying on manual intervention to stop trading?
"For traders who short volatility with multi-day long positions in XIV, they would be wise to instead use short positions in VXX."
Got it from this article:
Manually stop it, my algo is not good enough to capitalize a huge spike in VIX.
Maybe no one's algo is! Certainly not always and in all circumstances anyway.
Warren,
I tried to do a backtest with your version and it had an error with holdXIVmode on recent time spans.
Here's a bug fix.
Has anyone tried live trading this algo on either IB or Robinhood? Also any thoughts on this futurewarning?
"1969-12-31 19:00 WARN requests_csv.py:56: UserWarning: Quandl has deprecated unencrypted url functionality. Please format all urls using https://
1969-12-31 19:00 WARN <string>:29: FutureWarning: pd.rolling_mean is deprecated for Series and will be removed in a future version, replace with Series.rolling(window=2,center=False).mean() "
I changed the algorithm to only trade XIV again and removed a lot of risky over fitting stuff.
You can try to test in any time frame within the last 5 years; every single trade this algorithm made is a profitable trade. Check the Transaction Details, you will see how much each trade makes, some make 20%, some 50%, some even 200%, but the most important thing is that it never loses money!
You just need to be patient, sometimes one trade will take a year or longer, but it will always turn into a profitable trade in the end.
Reading the ETF prospectus, VelocityShares strongly recommends not holding XIV for long periods. Here's a copy/paste of the "Long holding period risk" section:
...The ETNs are only suitable for a very short investment horizon. The relationship between the level of the VIX Index and the underlying futures on the VIX Index will begin to break down as the length of an investor’s holding period increases...
...The ETNs are not long term substitutes for long or short positions in the futures underlying the VIX Index... ...The long term expected value of your ETNs is zero. If you hold your ETNs as a long term investment, it is likely that you will lose all or a substantial portion of your investment...
Prospectus link:
XIV is the inverse of VIX, it's one of the few VIX ETNs that you can actually hold for long and not going to zero in the end. Go check what XIV does and how it defers from other VIX ETFs and ETNs.
I will be very thankful if you can prove more details. I just checked with the official information provided by the issuer, and it does not say that. In summary it says "The ETNs ...may not be suitable for investors who plan to hold them for longer than one day..." The issuer says the long term expected value is zero. I don't know what other information could be better. Just find "long term" in the official prospectus.
If you have more info, it will be more than welcome.
XIV will go up in a calm, less volatile bull market(1990-1997, 2002-2008, 2011-2017, 6-7 out of every 10 years maybe?).
XIV will go down in a more volatile bear market(1997 - 2002, 2008 - 2011).
That's how XIV works, and in the long term it will grow if we are at a not-so-volatile bull market, like the one we are in right now.
You will see what I mean in this graph:
Hi James, I like the algorithm. I agree with you that the VIX will have its black swan moment and spike up during a crisis but that will be a rare once-in-a-decade event which will require manual intervention. Other than that, this algo seems to work great the majority of time. Is the latest version of the algorithm tradable live?
The trouble with this algorithm is the hard coded 1.6. This needs to be changed into the standard deviation for XIV. Other than that, like the algorithm.
I am not sure; you probably need to change a few things depending on who your broker is, IB or Robinhood.
Tried to use the standard deviation instead of 1.6, thoughts?
I did something a little different. Firstly, I am benchmarking against XIV since it seems to me we need to beat a buy and hold strategy. I then looked at two market modes - one normal and one during a collapse. This strategy goes to cash when VIX is above 25 to try to avoid crashes. In normal trading conditions it holds a percentage in XIV based on the level of VIX and the rest in cash. This feels a little less like overfitting to me than putting in specific values that are not likely to be repeated. This seems to beat XIV in the long term, and clearly beats the market by a large percentage. Any suggestions for improvements would be greatly appreciated.
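A minimal sketch of that VIX-scaled allocation idea (the 12/25 band and the linear scaling are my assumptions for illustration, not necessarily the exact rule in the attached backtest):

```python
# Scale XIV exposure down as VIX rises, going fully to cash above a
# panic level. Band edges are illustrative assumptions.
VIX_FLOOR = 12.0   # at or below this: fully invested in XIV
VIX_PANIC = 25.0   # at or above this: fully in cash

def xiv_weight(vix):
    """Fraction of the portfolio to hold in XIV for a given VIX level."""
    if vix >= VIX_PANIC:
        return 0.0
    if vix <= VIX_FLOOR:
        return 1.0
    # Linear interpolation between the two band edges.
    return (VIX_PANIC - vix) / (VIX_PANIC - VIX_FLOOR)
```

The continuous scaling avoids the hard-coded single thresholds discussed earlier in the thread, at the cost of more frequent rebalancing.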
What if we threw in a small holding of 75% XIV / 25% TLT to try to reduce the DD. Of course this will directly impact the returns, but the DDs are ridiculous, and that is during the bull run from 2009-present. These 40-50% DDs during this timeframe would be >80% during 2008
James,
Thanks for sharing the algo! Couple of q's:
- is there any reason the backtest didn't go back to the xiv inception date (2010-11-30)?
- it seems the VIX signal lags by a day, e.g., on 2011-10-05, current_price is 46.18, which is the Oct 04 open VIX, while the Oct 05 open VIX is actually 40.73
You are right about the VIX price. I used a pipeline to get VIX at the opening price every day, but it gave me the opening price of the previous day. I am looking into fixing this problem.
I have been reading quite a bit about how to get current day VIX price. The closest I got is to use the closing price from previous day, since Quantopian does not support intraday trading on VIX, so we can't get it's intraday opening price. Anyone have a better idea?
So I made a few changes in the last couple days:
1. Instead of getting VIX opening price from previous day, I get its closing price from previous day.
2. buy UVXY when you are not long XIV, but only use 10% of my portfolio to buy UVXY.
3. Changed the XIV selling condition to when XIV <= XIV_30_days_average + 2.7. The reason: sell XIV at a higher price.
This algo always buy XIV at the lowest price, because I only buy it after a VIX spike. So the question is when to sell it? How to sell it just before another VIX spike, and that is the million dollar question. Maybe set a 30% or higher stop gain?
Any idea is appreciated.
James,
I feel this algo is prone to overfitting... you may want to get replicated prices for xiv/uvxy/vxx since 2004 when vix futures started trading...the current vix regime is very different from what it was. Simply going back to the xiv inception date would expose a lot more risks than current test scenario.
You may fetch vix from csv files. Good luck.
This is the testing result with trading only XIV from 2010-11-30 to now. I removed UVXY because it only started trading after 2011.
Again, even with a -76% drawdown, every single trade is profitable. The first XIV buy waited more than 2 years to sell, but it still made money in the end.
Local csv is not supported as I know. You can use dropbox or google drive, I use http to my own server.
Notice the drawdown comes to -76%?
I tried to do that too, didn't work. Big drawdown is not a problem if the final return is still positive.
Why not allocate 100% of portfolio to UVXY when going long VIX?
Hi James, fantastic stuff you have here. Question: I ran multiple back tests on your 100,000%+ algorithm and upon analyzing the log outputs, I noticed that I got a couple 'WARN' signs indicating that X amount of shares of UVXY/XIV were partially filled. What are the repercussions of this if it were to be traded live? It doesn't seem to have partial fills early on, but they start recurring with frequency (3 times between August 2016 and November 2016) late into the algorithm's lifecycle.
@Mark Trader
I do like the idea of standard deviation, but how it sits isn't robinhood proof due to leverage going above 1. Also is that pulling the past morning vix price or closing price?
@Alex Paz
Quantopian is simulating the product's liquidity. In backtests, I think they use the product's dollar volume to determine a realistic order amount that can be filled. So as the algo makes more money and therefore places larger orders, not all orders can be filled at once because the product doesn't have enough liquidity. More info on dollar volume here:
Great returns, well done!
A question: Once strategy are holding UVXY, for example at 23.6, but vix drop down to below 19.8 in one day, and then stop in the area that vix is between 12.2 and 19.8 for a long time, what will happen? Do we have some logic to know this and avoid it happen?
If you bought UVXY when VIX is below 12.2, and hold it for a very long time, your UVXY will go to zero, that's why I removed uvxy or only allocate 10% of my portfolio for it. Long UVXY is very risky!
So James, yes you just mentioned one situation of holding UVXY, another possible situation in this strategy, which I am concerning is that the strategy holding UVXY from 23.6 and then vix drop quickly to below 19.8, and right now in strategy there is not logic dealing with this.
So you final decision will be removing UVXY? It seems the returns are affected very much. Holding UVXY like a gambling game, sometimes lose but a few chance gain bigger.
The only time I will buy UVXY is when VIX drop below 12.4, and only time hold it is when VIX is between 12.4 -19.5. When VIX is above 19.5 I will always sell UVXY, so when VIX is at 23.6, I sell UVXY, not hold it. That's how my strategy works, please double check.
Another way to trade UVXY, use a small portion of your portfolio, set a stop loss, so if you lose, you lose small, if you win, you can potentially win bigly.
HI James, sorry that I did not study your last code before, and I just checked it out.
Actually you could make your code much simple if do so: always gather XIV when vix higher than 19.8.
And if you do so, the strategy becomes a very simple task which manual operation will be better and safe, via Option Spread.
@Josh Nielsen
Thank you for clarifying. That brings up a second question: Is there any way around this issue? As the algorithm matures and grows in size, wouldn't that be hindering the potential profits it's designed to return?
In other words, at one point it'll get so big that it will, by nature, be too big to run at full capacity. And by full capacity I mean 100% of all orders executed are filled completely.
Too big = More than 10 Million dollars worth of shares trade per day. You don't need to worry about that!
also, since there's a huge dd when the market tanks, which we expect to do sooner or later, would a stop loss or trailing stop work here? there must be some way to exit a position that moves against without suffering such a large dd... many of these algos have done well the last couple of years as there has been little volatility and the market as shot straight up more or less... I suspect this may not do well in a choppy or more volatile trading environment which we could see soon enough... thoughts? seems too risky to put in play as it is..
@tony
I am not trading it as of now. Still reading into it more and learning more about it.
@James
Thanks for your thoughts. I have another question: This may be a Quantopian thing but I noticed that in the transaction details that it buys on the first day (whichever day that may be). Then on the next transaction day, it buys on and sells each security in the same minute but not in the right order. At first it seems illogical because there wouldn't be enough capital to buy again and then sell. For example:
1 May 9:32am - Buy 123 Quantity of XIV @ $'X'
2 May 9:32am - Buy 123 Quantity of UVXY @ $'X'
2 May 9:32am - Sell 123 Quantity of XIV @ $'X'
There isn't enough capital to buy again first on May 2. Shouldn't the order go 'buy, sell, buy'? Again, this may be a silly Quantopian thing putting the securities in alphabetical order, but wanted to be sure.
@tony
It won't work in a more volatile bear market, like 2008, you should get out when a big crash happened.
@alex
I think it's just the order displayed on Quantopian, in reality it should be buy, sell then buy.
Trade both XIV and UVXY with 100% of your portfolio.
James, this algorithm seems to have a forward looking bias, in that you are using the VIX closing value at the start of the day. You should look back a day when pulling VIX close value.
You are right, there is no look ahead. Very interesting. The only issue I have with this is the hardcoded 2.7 number, which is very significant in the early days of XIV when it is small, and not significant at all in the current times.
Try testing using 2.3, 2.4, .... 3.8, the worst return is 16391.3%, with a -61.3% drawdown. 2.7 is a number in the middle of that range which gives the best return. So what this means is in the future even if 2.7 will not give the best result, it will still give a pretty good one.
James, You didn't understand my point. What you are doing is overfitting the model. There is no rationale behind the 2.7, except that it works. Overfitted models perform poorly out of sample.
Thanks James, like the algo. And I appreciate the points made about the hard coded 2.7 brought up by Macro and Anthony. Since you're using the VIX close from the day before, do you guys think it would be helpful to execute this function in premarket trading once that is made available in Q?
It's not overfitting if ALL those numbers (2.3 - 3.8) produce consistent good return. It is overfitting if only one or two of those numbers give good return and the rest give not so good return.
It should be good if we can get rid of the 2.7 and instead using some standard deviation or some formula which take XIV, UVXY, VIX info as input. 2.7 might be good for the past data,
but might not work well in future
Another suggestion is if we can minimize the drawdown, several suggestions
1. when VIX is above 19.5, instead of blindly buy XIV, we only buy when we see the VIX decreases, and we might monitor the VIX price in hour granularity if it is above 19.5
2 Introduce a cool down timer = 5-10 days which do not trade when we see dramatically decline from past 3 days
saw leverage go above 1 a few times, is this still an issue say using it with Robinhood without leverage?
@haiqin Liu
Go try it out yourself, and let me know.
@elsid
you can changeorder_target_percent(context.XIV, 1) to order_target_percent(context.XIV, 0.95) to make the leverage below 1.
Thanks James yea I just wanted to make sure also to my understanding when you are long XIV when VIX Is above say 19.5. You don't sell those shares correct? So even in an 08' scenario you'll get a massive spike to say to 80-90 your drawdown will go down to say 50-80% but will soon recover when VIX come second crashing back down to normal levels correct?
I think if you add back in uxvy to this and possibly add in this then it could possible be an algo that has the crazy gains you'd like and is decently tradeable.
Hi James - thanks for the algo. Were you able to sell XIV this Friday, when it turned? Are you using IB, RH or another?
@guillermo
The algo sold XIV on Feb. 24, but I sold it on Feb. 23 on my personal account. I am not live trading with this aglo yet.
Hi James,
Nice algo. BTW , your code had me puzzled a bit at the beginning because of all the if clauses.
You can make it more understandable (and elegant) by simply removing all except 2nd to last if clause and replacing with
if (current_price >= 19.8): if (context.sell_price == 0): context.sell_price = 12.2 # Set a sell price at 12.2 order_target_percent(context.XIV, 1) #order_target_percent(context.UVXY, 0) log.info("VIX: %s" % current_price) log.info("xiv_current %s" % xiv_current) # If VIX index is below sell price and xiv below xiv_20d_mean + standard deviation, we sell XIV. elif (current_price <= context.sell_price and xiv_current <= xiv_mean+xiv_std): # Sell all of our shares by setting the target position to zero order_target_percent(context.XIV, 0) #order_target_percent(context.UVXY, 1) log.info("VIX_current____________ %s" % current_price) log.info("VIX_mean_______________ %s" % vix_mean) log.info("xiv_current____________ %s" % xiv_current) log.info("xiv_mean_______________ %s" % xiv_mean) context.sell_price = 0 context.vix_last_price = current_price
Try it. The results are identical.
Cheers,
Serge
James,
i'm a new user with Quantopian so pardon me if i'm wrong... Just cloned your last algo and ran the full backtest out of the box. Something weird with the UVXY transactions: While the sells XIV balance perfectly the buys, it's not the case with UVXY. Sometimes, but not always, the sells represent a minor fraction of the buys. For example, first UVXY buy : 4886, then 488 sold. Am I missing something ?
I'm having an issue now where it is trying to buy uvxy, but the orders are being rejected and cancelled. One reason being that it tries to order more than allotted. Although even if it did buy it, I would be losing money due to the decay of uvxy.
How so? The largest drawdown in your backtest is from XIV - not UVXY?
There is bound to be something that pops the VIX up every now and then and if you're holding UVXY it will be a nice little run (like the election)
Hi I am new with quantopian, I run James' algo backtest and daily position and gains appear to stop at 2017-2-14. Why is that?
Quick question, guys.. could any of the variations of the algorithm in this thread get me flagged as a day trader?
Thank you.
** Also, first post, been browsing quite a bit, running backtests amany, and learning a TON. I plan to stick around awhile ;) Thanks for all of your contributions. I hope I can be of assistance where needed.
Thomas, this algo is designed to trade XIV and UVXY exclusively (they're essentially just invertions of one another).
If you're looking for low beta, and thus for your earnings not to be so dependent on which way the market is headed, you're probably going to be after algos that select from a broader range of stocks/tickers.
Hello guys,
thx for sharing your code. I've got some finance/python experience, but pretty new to Quantopian API so your code was really helpful.
I've modified the algo in this post with the aim to make it more robust and, above all, a bit safer (also done some off-line back-testing from 2007 using underlying indices when ETNs were not available - results are encouranging).
In a nutshell, the algo either
1. goes short vol (i.e. long XIV): if both i. VIX futures in contango (VXV > VIX) & ii. VIX is declining (not just when high enough like in the previous algos), or
2. goes long vol (i.e. long VXX): if both i. VIX futures in backwardation (VXV < VIX) & ii. VIX is increasing.
3. Otherwise, the algo take all position off (flat).
Thoughts?
@Kern Winn,
not sure if you are still interested in answer to your recent question
If your portfolio value is always above $25,000, then you are always exempt from day trade restrictions.
Generally, buy / sell or sell / buy of same stock during same trading session counts as a day trade.
If portfolio under $25,000, then you must avoid becoming tagged as a pattern day trader.
"A pattern day trader is a stock market trader who executes four or more day trades in five business days in a margin account, provided the number of day trades are more than six percent of the customer's total trading activity for that same five-day period." - from
My current algo Robin Hood VIX Mix tends to trade infrequently enough to completely avoid pattern day trader difficulties.
Of course, the nature of these trades and the securities involved mean you can lose a lot of money.
If you are more interested in low risk, then probably not appropriate for you.
None, I've been paper trading this since january and it has been long UVXY from ~$21 (pre reverse split 4:1) Drawdown is around 60-70 so far.
My oppinion: Any algo which is not backtesting include the year of 2008, should be very very careful!
only if you are trying to go long UVXY - it is the holy grail if you're shorting it and the market doesn't crash
We need an update on this to pull VIX data from CBOE or pipeline instead of Yahoo, since that function was deprecated.
Has anyone converted their algorithm to work with iBridgePy, Zipline or QuantConnect? If not I plan to soon or would be happy to help anyone convert theirs.
@Michael
Very nice!
But I would prefer to hear what's your experience on these different platforms. Pros and Cons. :-)
Especially zipline-live. or I have to pay for that?
Zipline-live will likely have the same issues with setting up multiple instances. It sounds like you're further along in research than me. I'm not as worried about multiple accounts, my main concern currently is porting the code and getting a stable build/configuration.
@ Alessandro Muci
Interesting algorithm and risk metrics look very attractive.
I just have one thing to suggest: holding TLT (20yr bond etf) while holding no position in XIV or VXX.
It will boost all relevant risk metrics (Sharpe from 1.14 to 1.40, Alpha from 28% to 40%, Drawdown from -33% to -28%).
Seemed like a promising, simple strategy. Unfortunate that the events over Feb 2-5, 2018 (a cripping -80% destruction in the fund's value during after-hours trading) demonstrated that the VIX instrument is subject to exaggerated fluctuations that do not represent true market sentiments. The ETN is being liquidated.
Credit Suisse says it will end trading in the volatility security that's become the focus of this sell-off
I wrote in this forum that this kind of issues can happened, and that the long term price of the asset is zero. Of course, I lost more than 100% of profit in this period.
Same here warned about this on a few VIX strategies on here, especially the faulty backtests that most were showing, thinking that somehow 50%-60% DD in the lowest volatile period in history, with limited backtesting history was going to end well. Felt like a very amateurish environment of not taking risk into consideration at all.
The irony it was almost prophetic one year ago haha.
Elsid Aliaj Feb 3, 2017 Edit
or 99% during 08' | https://www.quantopian.com/posts/trade-xiv-based-on-vix-1 | CC-MAIN-2019-04 | refinedweb | 7,459 | 74.08 |
Needed to make a small change to my photoblog - a change that should be reflected on every single post. Thought I might be able to use python DOM to get it done. With PHP it's quite easy, I have done that sort of thing many times in the past. Can't that be handled by the template? no because this was a CSS change that really did involve changing the post's HTML. It was just a matter of removing an inline style (CSS) and replacing it with a CSS class. But each post only has a fragment so python minidom simply refused parse it. That forced me to look at HTMLParser which I didn't like because it's an old fashion event driven parser. There was a time when use to swear by event driven parsers (expat for example) but that was long ago. Thus I had no option but to revert to the minidom API and to make my HTML fragment well formed by adding a new start and end tag to enclose the whole fragment. These enclosing tags can be stripped out later when I am saving the modified post back to the database. With this approach the code is short and sweet. But of course I need to flesh it out by adding support for retrieving posts from the database and writing them back in, instead of working with the hardcoded bit of HTML as done during testing.
#!/usr/bin/python
from xml.dom.minidom import *
domNode = parseString('<xml><p align="center"><a href="/images/comingup.jpg"><img src="/images/comingup-t.jpg" title="dawn" alt="Sunrise close to Nuwara Eliya" style="border-color: #505050; border-width: 7px" /></a></p>Great Western, Nuwara Eliya at Dawn.</xml>')
ele = domNode.getElementsByTagName('img') ele.item(0).removeAttribute('style') ele.item(0).setAttribute("class","photo");
subNode = domNode.firstChild;
if subNode.hasChildNodes(): children = subNode.childNodes for child in children: print child.toxml() | http://www.raditha.com/blog/archives/1763.html | CC-MAIN-2017-47 | refinedweb | 324 | 74.9 |
Initializes security layer.
You need to call this function on the server before calling Network.InitializeServer. Don't call this function on the client.
Once your online game reaches a certain popularity people will try to cheat. You will need to account for this both at the game layer and at the network layer. Unity handles the network layer by providing secure connections if you wish to use them. * Uses AES encryption. Prevents unauthorized reads and blocks replay attacks * Adds CRCs so that data tampering can be detected. * Uses randomized, encrypted SYNCookies to prevent unauthorized logins. * Uses RSA encryption to protect the AES key. Most games will want to use secure connections. However, they add up to 15 bytes per packet and take time to compute so you may wish to limit usage to deployed games only.
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { void Start() { Network.InitializeSecurity(); Network.InitializeServer(32, 25000); } } | https://docs.unity3d.com/kr/2017.2/ScriptReference/Network.InitializeSecurity.html | CC-MAIN-2021-25 | refinedweb | 154 | 52.26 |
OSClisten — Listen for OSC messages to a particular path.
On each k-cycle looks to see if an OSC message has been send to a given path of a given type.
ihandle -- a handle returned by an earlier call to OSCinit, to associate OSClisten with a particular port number.
idest -- a string that is the destination address. This takes the form of a file name with directories. Csound uses this address to decide if messages are meant for csound.
itype -- a string that indicates the types of the optional arguments that are to be read. The string can contain the characters "cdfhis" which stand for character, double, float, 64-bit integer, 32-bit integer, and string. All types other than 's' require a k-rate variable, while 's' requires a string variable.
A handler is inserted into the listener (see OSCinit) to intercept messages of this pattern.
kans -- set to 1 if a new message was received, or zero if not. If multiple messages are received in a single control period, the messages are buffered, and OSClisten can be called again until zero is returned.
If there was a message the xdata variables are set to the incoming values, as interpretted by the itype parameter. Note that although the xdata variables are on the right of an operation they are actually outputs, and so must be variables of type k, gk, S, or gS, and may need to be declared with init, or = in the case of string variables, before calling OSClisten.
Below are two .csd files which demonstrate the usage of the OSC opcodes. They use the files OSCmidisend.csd and OSCmidircv.csd.
Example 289. Example of the OSC opcodes.
The following two .csd files demonstrate the usage of the OSC opcodes in csound. The first file, OSCmidisend.csd, transforms received real-time MIDI messages into OSC data. The second file, OSCmidircv.csd, can take these OSC messages, and intrepret them to generate sound from note messages, and store controller values. It will use controller number 7 to control volume. Note that these files are designed to be on the same machine, but if a different host address (in the IPADDRESS macro) is used, they can be separate machines on a network, or connected through the internet.
CSD file to send David Akbari 2007 ; Modified by Jonathan Murphy ; Use this file to generate OSC events for OSCmidircv.csd #define IPADDRESS # "localhost" # #define PORT # 47120 # turnon 1000 instr 1000 kst, kch, kd1, kd2 midiin OSCsend kst+kch+kd1+kd2, $IPADDRESS, $PORT, "/midi", "iiii", kst, kch, kd1, kd2 endin </CsInstruments> <CsScore> f 0 3600 ;Dummy f-table e </CsScore> </CsoundSynthesizer>
CSD file to receive Jonathan Murphy and Andres Cabrera 2007 ; Use file OSCmidisend.csd to generate OSC events for this file 0dbfs = 1 gilisten OSCinit 47120 gisin ftgen 1, 0, 16384, 10, 1 givel ftgen 2, 0, 128, -2, 0 gicc ftgen 3, 0, 128, -7, 100, 128, 100 ;Default all controllers to 100 ;Define scale tuning giji_12 ftgen 202, 0, 32, -2, 12, 2, 256, 60, 1, 16/15, 9/8, 6/5, 5/4, 4/3, 7/5, \ 3/2, 8/5, 5/3, 9/5, 15/8, 2 #define DEST #"/midi"# ; Use controller number 7 for volume #define VOL #7# turnon 1000 instr 1000 kst init 0 kch init 0 kd1 init 0 kd2 init 0 next: kk OSClisten gilisten, $DEST, "iiii", kst, kch, kd1, kd2 if (kk == 0) goto done printks "kst = %i, kch = %i, kd1 = %i, kd2 = %i\\n", \ 0, kst, kch, kd1, kd2 if (kst == 176) then ;Store controller information in a table tablew kd2, kd1, gicc endif if (kst == 144) then ;Process noteon and noteoff messages. kkey = kd1 kvel = kd2 kcps cpstun kvel, kkey, giji_12 kamp = kvel/127 if (kvel == 0) then turnoff2 1001, 4, 1 elseif (kvel > 0) then event "i", 1001, 0, -1, kcps, kamp endif endif kgoto next ;Process all events in queue done: endin instr 1001 ;Simple instrument icps init p4 kvol table $VOL, gicc ;Read MIDI volume from controller table kvol = kvol/127 aenv linsegr 0, .003, p5, 0.03, p5 * 0.5, 0.3, 0 aosc oscil aenv, icps, gisin out aosc * kvol endin </CsInstruments> <CsScore> f 0 3600 ;Dummy f-table e </CsScore> </CsoundSynthesizer> | http://www.csounds.com/manualOLPC/OSClisten.html | CC-MAIN-2015-40 | refinedweb | 707 | 68.3 |
Due to Microsoft's flagrant disregard for the thousands of developers who have voiced their utter dislike for the new theme colors used
in Visual Studio 2012, it was necessary to create a custom theme in order to even consider using the new IDE. Included in this article are an outline
of the key changes made to the VS 2012 UI, the tools used, and a little background on why this was necessary. I've included some files & links to make it easy
for everyone to do for themselves what Microsoft should have done in the first place: Create a clean, colorful, user-friendly interface.
Microsoft blatantly ignored the number one
complaint amongst developers who tested the Beta & Release Candidate versions of Visual Studio 2012: The horrific choice of colors used in its 'Light'
and 'Dark' color themes, as well as the switch to monotone icons. Even a cursory review of the thousands of comments reveals the new theme colors & icons result
in a significant decline in developer productivity and desire to use the new UI. Some comments even indicate that the dismal colors may lead to thoughts of suicide.
Although Microsoft did add back a miniscule amount of color in a few areas of the final product, the software giant clearly disregarded the majority of developers
who flat out hate what its design team did to Visual Studio.
Obviously, it is a very unfortunate decision on the part of Microsoft and the Visual Studio design team not to listen to those of us who tried our best
to voice our dislike for the new UI. I have already read numerous posts by developers who are either considering adopting a new IDE, or are simply not
upgrading to VS 2012 until Microsoft fixes the problem.
I personally considered not purchasing VS 2012. However, there are a few compelling additions to the .NET Framework 4.5 I would like to take advantage of.
As such, I took it upon myself to come up with a cleaner and more user-friendly color theme.
As a fan of both the VS 2010 color scheme, as well as the Microsoft Office 2010 Blue theme, I decided to base my VS 2012 Cool Blue theme on those.
Below is a screen shot that captures most of the key color elements.
Obviously, we all have differing ideas on what works best for us. It doesn't matter to me if you want to use this theme as-is or modify it to suit your
own tastes. In fact, I’d love to see more user-designed themes pop up; I know we can do a better job than Microsoft! I will note that this was
just a quick-and-dirty run through of re-coloring the UI. I tried to make things pop, ease eyestrain, create some much-needed differentiation between
the different areas of the UI, etc... And of course, hopefully keep a few of my peers from slitting their wrists.
The first key change I made was to tweak the Registry entry to eliminate the ALL CAPS menus. WHAT was Microsoft thinking when they did that?!
Here's a link to Richard Banks' article
that covers this easy fix.
To modify the UI colors, I used Brian Chavez's Visual
Studio 2012 Theme Editor. As he notes, this is a very basic, though functional, program. I began with the 'Light' theme, and modified it to get what I wanted.
One of the key problems I ran into was trying to figure out what properties (Theme Records) affected what parts of the UI. To help with this, I dumped all of the values
to an Excel spreadsheet. I've included this spreadsheet, which shows the original color values of the 'Light' theme and notes my changes.
To make things easier, I did sort each of the theme Categories alphabetically. To get the Theme Records listing to match, just add a simple sorter to the ThemeReader class:
private class ColorRecordSorter : IComparer<ColorRecord>
{
public int Compare(ColorRecord x, ColorRecord y)
{
return (x.Name.CompareTo(y.Name));
}
}
Then, within the UnpackColorCategory method, add this line before returning the Category:
UnpackColorCategory
category.ColorRecords.Sort(new ColorRecordSorter());
I've included the modified Registry Editor (.reg) file and the Excel spreadsheet with my notes in the attached .ZIP file.
Good luck. | http://www.codeproject.com/script/Articles/View.aspx?aid=453813 | CC-MAIN-2015-35 | refinedweb | 720 | 60.35 |
TensorFlow newbie here, training on a simple tutorial which I just fail.
The point is to convert an image to grayscale.
Our data is basically an
HxWx3
[r, g, b]
[gray, gray, gray]
gray = mean(r, g, b)
import tensorflow as tf
import matplotlib.image as mpimg
filename = "MarshOrchid.jpg"
raw_image_data = mpimg.imread(filename)
image = tf.placeholder("uint8", [None, None, 3])
# Reduce axis 2 by mean (= color)
# i.e. image = [[[r,g,b], ...]]
# out = [[[ grayvalue ], ... ]] where grayvalue = mean(r, g, b)
out = tf.reduce_mean(image, 2, keep_dims=True)
# Associate r,g,b to the same mean value = concat mean on axis 2.
# out = [[[ grayvalu, grayvalue, grayvalue], ...]]
out = tf.concat(2, [out, out, out])
with tf.Session() as session:
result = session.run(out, feed_dict={image: raw_image_data})
print(result.shape)
plt.imshow(result)
plt.show()
The error come from the dtype of your placeholder. Cause the type inference, intermediate tensors cannot have values greater than 255 (2^8-1). When Tensorflow compute mean(147, 137, 88), first it compute : sum(147, 137, 88)=372, but 372>256 so it keep 372% 256 = 116.
And so mean(147, 137, 88) = sum(147, 137, 88)/3 = 116/3 = 40. Change the dtype of your placeholder to "uint16" or "uint32". | https://codedump.io/share/kkQkR3JXAhfw/1/grayscale-convertion-using-tfreducemean-amp-tfconcat | CC-MAIN-2018-26 | refinedweb | 205 | 62.04 |
McDonaldPython Development Techdegree Student 6,115 Points
Is this broken?
I'm getting an error on this quiz saying that I am not passing in a string to the function but I'm pretty sure that I am.
import unittest from string_fun import is_palindrome class PalindromeTestCase(unittest.TestCase): def setUp(self): tcat = 'tacocat' cat = 'cat' def test_good_palindrome(self): self.assertTrue(is_palindrome('tacocat')) def test_bad_palindrome(self): self.assertFalse(is_palindrome('yarn'))
def is_palindrome(yarn): """Return whether or not a string is a palindrome. A palindrome is a word/phrase that's the same in both directions. """ return yarn == yarn[::-1]
1 Answer
jb3044,476 Points
Your code looks fine.
You can pass the challenge by removing
def setUp(self): tcat = 'tacocat' cat = 'cat' | https://teamtreehouse.com/community/is-this-broken-2 | CC-MAIN-2022-40 | refinedweb | 121 | 51.65 |
Drag-and-drop is the action of clicking on a virtual object and dragging it to a different location or onto another virtual object. In general, it can be used to invoke many kinds of actions, or to create various types of associations between two objects.
Step 2: Starting
Open Flash and create a new Flash File (ActionScript 3).
Set the stage size to 450x300 and add a black background (#1B1B1B).
Step 3: Draggable Clips
We'll need some MovieClips to drag; I've used some of the Envato Marketplace logos.
Convert them to MovieClips and set their instance names:
Step 4: Drop Target
A MovieClip will be used as a drop target for each draggable clip; a simple rectangle will do the job.
Convert the rectangle to a MovieClip and duplicate it (Cmd + D) to match the number of draggable objects.
The instance names will be the name of the draggable clip, plus Target, leaving us with denTarget, oceanTarget, etc.
Step 5: Guides
Let's add some guides to help the user figure out what to do.
A title that will tell the user what to do with the elements on the screen.
An icon to tell the user how to do it.
Keywords to tell the user where to match the objects.
Step 6: ActionScript Time
Create a new ActionScript Document and save it as "Main.as".
Step 7: Required Classes
This time we'll need just a few classes.
package
{
	import flash.display.Sprite;
	import flash.events.MouseEvent;
Step 8: Extending the Class
We're going to use Sprite-specific methods and properties, so we extend the Sprite class.
public class Main extends Sprite {
Step 9: Variables
These are the variables we will use, explained in the comments.
var xPos:int; //Stores the initial x position
var yPos:int; //Stores the initial y position
Step 10: Main Function
This function is executed when the class is loaded.
public function Main():void { addListeners(den, ocean, jungle, river, forest); //A function to add the listeners to the clips in the parameters }
Step 11: Position Function
A function to get the position of the MovieClips, this will help us return the MC to its original position when the drop target its incorrect or no drop target was hit.
private function getPosition(target:Object):void { xPos = target.x; yPos = target.y; }
Step 12: Start Drag
This function enables the dragging to the clip with the listener.
private function dragObject(e:MouseEvent):void { getPosition(e.target); e.target.startDrag(true); }
Step 13: Stop Drag
The next function stops the dragging when the mouse button is released, it also checks if the object is in the correct drop target.
private function stopDragObject(e:MouseEvent):void { if (e.target.hitTestObject(getChildByName(e.target.name + "Target"))) //Checks the correct drop target { e.target.x = getChildByName(e.target.name + "Target").x; //If its correct, place the clip in the same position as the target e.target.y = getChildByName(e.target.name + "Target").y; } else { e.target.x = xPos; //If not, return the clip to its original position e.target.y = yPos; } e.target.stopDrag(); //Stop drag }
Step 14: Listeners
Adds the listeners to the clips in the parameters using the ...rest argument.
private function addListeners(... objects):void { for (var i:int = 0; i < objects.length; i++) { objects[i].addEventListener(MouseEvent.MOUSE_DOWN, dragObject); objects[i].addEventListener(MouseEvent.MOUSE_UP, stopDragObject); } }
Step 15: Document Class
Go back to the .Fla file and in the Properties Panel add "Main" in the Class field to make this the Document Class.
Conclusion
Now you know how to easily make a drag target, this can be very useful for games and applications. Make your own drag app and take this concept further!
Thanks for reading!
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/create-a-drag-and-drop-puzzle-in-actionscript-30--active-2920 | CC-MAIN-2016-50 | refinedweb | 641 | 66.44 |
The .NET framework supports a wide range of languages meaning it is not uncommon to find code examples in a language that you are not familiar or comfortable with. This blog entry discusses some approaches that can be used to include portions of code written in a foreign language within your own application. We will demonstrate the technique by utilising C# code within a VB.NET project.
Technique 1: Code Translation
If you only have a couple of lines of C# to include within your VB.NET application you may like to attempt to manually convert the code into VB.NET so that it can be incorporated directly. If you are having difficulty converting the code (or do not have enough time to learn the specifics of another language) you could try an online code translation service such as the Developer Fusion C# to VB.NET code conversion tool which will attempt to automatically translate the code for you.
For example, given the following C# code:
public event EventHandler Foo; public void FireFoo() { EventHandler eh = Foo; if (eh != null) eh(this, EventArgs.Empty); }
Developer Fusions’ code translation tool will automatically provide the following VB.NET conversion:
Public Event Foo As EventHandler Public Sub FireFoo() Dim eh As EventHandler = Foo RaiseEvent eh(Me, EventArgs.Empty) End Sub
Notice how the code translator is reasonably intelligent, for example it has removed the if statement from the original C# code snippet in favour of the VB.NET specific RaiseEvent statement which internally performs the same check. As with any automated translation utility, your milage may vary with respect to how well the translator copes with your particular code snippet.
Although this technique is useful for small amounts of code it does not lend itself well to converting large amounts of source code, or source code that utilises specialised features of a given language. For this we may like to investigate an alternative technique.
Technique 2: Class Library
One of the many specifications surrounding the .NET environment is one called the Common Language Specification (CLS). This is a subset of CLR features that any language targeting the .NET runtime should support in order to be fully interoperable with other CLS-compliant languages. In other words if you stick to using CLS compliant language features your code should be usable in projects written in a wide range of other languages that also support the Common Language Specification. C# and VB.NET both support the Common Language Specification making it easy for these two languages to interoperate.
If you have a large amount of C# code (such as an entire class) that you would like to include within a VB.NET project you can create a C# class library project and include the source files within it. This allows you to compile the C# source code unmodified and without needing to fully understand how it works. After compilation all you have to do is add a reference to the class library from your VB.NET application as shown in the screenshot to the right. You will then be able to access the C# classes using standard VB.NET syntax.
While writing code how do you know if you are using the subset of language features that are CLS compliant? The easiest way is to add the CLSCompliant attribute to your assembly. If set to true, this will cause the C# or VB.NET compilers to emit warnings whenever they detect that you have used a language feature (or datatype etc) that is non CLS compliant.
For example within a C# class library you might add the following line to the AssemblyInfo.cs file:
using System [assembly: CLSCompliant(true)]
Technique 3: Netmodules and assembly linking
I find as a .NET developer it is important to understand the limitations and artificial restrictions imposed by the tools I utilise. You may be surprised to know that Visual Studio does not provide access to every feature of the .NET platform.
One feature not supported by Visual Studio is the ability to compile assemblies that consist of code written in more than one language. For example having an assembly where one class is written in C# while another is written in VB.NET (or even MSIL etc).
If you are willing to drop down to command line compilation it is possible to achieve this. For example given two source files foo.cs and bar.vb we can create an assembly called myassembly.dll by executing the following commands from a Visual Studio command prompt window.
SET NETCF_DIR="C:\progra~1\Microsoft.NET\SDK\CompactFramework\v2.0\WindowsCE" csc /noconfig /nostdlib /r:"%NETCF_DIR%\mscorlib.dll" /t:module foo.cs vbc /netcf /noconfig /nostdlib /sdkpath:%NETCF_DIR% /t:module bar.vb link /LTCG /verbose /dll /out:myassembly.dll foo.netmodule bar.netmodule /subsystem:windows
The calls to the C# (csc) and VB.NET (vbc) command line compilers specify the /t:module parameter which causes the compiler to produce a netmodule (foo.netmodule and bar.netmodule respectively). A netmodule can be thought of as being similar to an object (*.obj) file that a native C or C++ developer may be familiar with.
Using the linker (link.exe) enables one or more netmodules to be merged into an assembly, in this case a DLL named myassembly.dll. The linker is not concerned with what language the individual netmodules were compiled from.
Sample Application
[Download multilanguageprojects.zip - 16KB]
The first sample application demonstrates creating a C# class library that can be consumed by a VB.NET winforms application. It takes the textbox cursor location code snippet I wrote in C# and turns it into a C# class library called CSharpClassLibrary.dll. This DLL is then added as a reference to the VB.NET application which makes use of the functionality to highlight the current cursor location within a multiline textbox. Only a minimal amount of C# knowledge was required to make this happen, just enough to wrap the code snippet up into a public class.
This application also demonstrates accessing functionality from an assembly which has classes written in more than one language. Within the MultiLanguageClassLibrary subdirectory you will file a small batch file that demonstrates using the command line compilers to build an assembly that contains individual classes written in C# and VB.NET. As you will see from the VB.NET sample application’s menu item there is no difficulty accessing the functionality of these classes once a reference to the assembly has been added.
[Download gps.zip - 23KB]
This blog entry was spawned by a request for a GPS Intermediate Driver example in VB.NET. The Windows Mobile 5.0 SDK includes an example of how to use this API, but it is written in C#. I have re-coded the GUI part of the GPS Intermediate Driver example into VB.NET and hence produced another example of a C# class library being used from VB.NET. This sample can be downloaded from the link above.
Very nice example and tutorial for us VBer’s wishing to implement external source into our applications.
The GPS tutorial is especially appreciated by me… kudos and many thanks.
:>)
TW,
Hi, thank you for the article, it was very helpful
I am still having an issue with the software/hardware though – I don’t know which.
I am running the program on a symbol mc75 and the program will run – but it gets 0 of 12 satellites – so it does not return any gps information.
Do you have any idea what I need to do to get the machine to recognize the gps?
Thanks for any advice – I realize this article is old so I’m hoping someone will see this :-)
Hey
I can’t find a CSC.exe anywhere on my computer…
I am using Visual Studio .NET 2005.
Thanks
Nice tutorial. Good job at telling the newer programmers about multi-file assemblies.
“The linker is not concerned with what language the individual netmodules were compiled from.”
Compiled .Net code does not have the language of origin, so the linker doesn’t know what languages they are.
Also I’m pretty sure using the linker is pointless the vbc.exe and csc.exe have options to compile a multi-file assembley.
Mitch | http://www.christec.co.nz/blog/archives/290/comment-page-1 | CC-MAIN-2018-47 | refinedweb | 1,375 | 56.66 |
The following guest post comes from Alessandro Bellotti, BlackBerry Developer Elite.
In applications that manipulate and display a huge amount of data, it is very useful to integrate charts to provide a better user experience while showing that data to the user.
Currently in Cascades, the Qt class Graph can’t be used. Several solutions have been used, including the use of a background image with overlaying text written by using a Label {}, but this approach never led to optimal results.
Fortunately, thanks to the great flexibility of the BlackBerry 10 Native SDK and to Angelo Gallarello, a member of our BlackBerry Dev Group in Milan who has shown us a great JavaScript library called jQuery Sparklines, we can solve the problem of using graphs in a very simple, fast and effective way.
jQuery is very flexible and easy to use. It supports seven different chart types and all of them are easily customizable.
A very cool feature is the possibility to test the custom parameters in real time thanks to the dedicated box and very detailed documentation on customizing charts.
The integration into a Cascades project is very easy: the first step is to create an HTML file with a simple text editor. In our case, we name it “grafico.html” and it looks like the following:
Minified can be downloaded here:
Select all the text in the two links and paste between:
<script type=”text/javascript”> e </script>
of the “grafico.html” file.
To display the graph, you just need to add a WebView to your project (in the QML file that should display the graph), setting its URL to the location of your HTML file.
WebView {
id: idWebView
objectName: "idWebView"
url: "local:///assets/webview/grafico.html"
verticalAlignment: VerticalAlignment.Fill
horizontalAlignment: HorizontalAlignment.Fill
}
The next step is to create the string with the values that will be inserted in the grafico.html file to produce the graph.
In our case, I took a SQLLite database, and while rushing records with a Select, I created the string using the method of creating a function callable from the QML in the header file (HPP):
Q_INVOKABLE void readData (QString search);
and a Q_SIGNALS
void gotSparky(QString Sparky);
In CPP, I scrolled the database and for each record found and built the string in the global variable Sparky, maintaining the structure required by the jQuery library.
Finally, after reading all the records, I finished the creation by integrating the Sparky string with Sparky_Full which also contains the parameters for the creation of the chart. So I have set the string in QML “emit gotSparky(Sparky_Full);”
void ApplicationUI::readData(QString search)
{
QSqlDatabase database = QSqlDatabase::database();
SqlDataAccess *sqlda = new SqlDataAccess(database.databaseName());
QString appo = search;
QVariant result = sqlda->execute(appo);
Sparky_Full = "";
Sparky = "[";
if (!sqlda->hasError()) {
int recordsRead = 0;
if( !result.isNull() ) {
QVariantList list = result.value<QVariantList>();
recordsRead = list.size();
for(int i = 0; i < recordsRead; i++) {
QVariantMap map = list.at(i).value<QVariantMap>();
if (i == 0){
Sparky = Sparky + map.value("quantita").toString();
}
else {
Sparky = Sparky + "," + map.value("quantita").toString();
}
}
Sparky = Sparky + "]";
Sparky_Full = QString("$('.sparkline').sparkline(")+ Sparky + QString(",{type: 'line', chartRangeMin: '0', fillColor: false, width: '720', height: '250'});");
emit gotSparky(Sparky_Full);
}
if (recordsRead == 0) {
// showToast("NO Data!");
}
} else {
alert(tr("Read records failed: %1").arg(sqlda->error().errorMessage()));
}
}
In the QML file you just need to add a few lines of code. I created two “property” that have the task of checking if the app is already connected to the “onGotSparky” signal (because in that case the data is duplicated), and a support string for the output string coming from the C++ function.
import bb.cascades 1.0
import bb.system 1.0
Page {
property variant connessoSparky:0
property string sparky_String: ""
onCreationCompleted:
if (connessoSparky == 0){
_app.gotSparky.connect(onGotSparky)
connessoSparky = 1
}
_app.readData(“SELECT * FROM myDataBase”)
}
function onGotSparky(stringa) {
sparky_String = stringa
}
I put in the evaluateJavaScript() in a ActionItem to keep everything separate. In practice, when onTriggered:{} is issued, the string is passed to the HTML page that elaborates it and gives the graph.
actions: [
ActionItem {
id: actGraph
title: "Titolo"
imageSource: "asset:///images/menuicons/ic_chart.png"
ActionBar.placement: ActionBarPlacement.OnBar
onTriggered: {
waterWebView.evaluateJavaScript(sparky_String)
}
}
]
That’s all! Let us know if you have any questions in the comments below, and happy coding! | http://devblog.blackberry.com/2014/04/charts-with-cascades-integration-through-jquery/ | CC-MAIN-2014-42 | refinedweb | 707 | 55.64 |
Microsoft Corporation
March 2005
Applies to: BizTalk Server 2004
Summary: "Using XML Schemas in BizTalk Server 2004" discusses the basic concepts of XML schemas and explains how they are used in BizTalk Server 2004. (30 printed pages)
Schemas are required for most BizTalk Server applications because they are used to process and validate message information. This document covers the basic concepts of XML schemas and explains how they are used in Microsoft® BizTalk® Server 2004.
This document consists of five parts: the basic concepts of XML and schemas, how BizTalk Server uses schemas, creating schemas with BizTalk Editor, working with flat files, and a worked flat file example.
Links to further information about XML and schemas are included at the end of this document.
BizTalk schemas are documents that define the structure of the XML (Extensible Markup Language) data in BizTalk messages, and their purpose is to create templates for processing and validating XML messages. To create BizTalk schemas, you need a basic understanding of XML. The first part of this document outlines the basic concepts of XML and then discusses how schemas are used as template definitions for XML messages.
Before the creation of XML, markup languages such as HTML were primarily used to define how text is displayed on a computer screen or printed page. XML can do much more than any previous markup language. Soon after XML was created, people realized that instead of just marking up text, they could use XML as a way to structure and identify information in new ways. Previous database formats only contained raw data, but XML revolutionized the database world by making it possible to embed a description with each portion of the data. Having the description accompanying the data made it possible to reorganize that information into new categories more easily.
XML provides a way to structure data in a flexible and efficient manner. XML quickly became popular because it enabled people to create documents that included both data and the definition of that data.
XML structures and identifies information by using markup codes to enclose fields of data in a hierarchical format. XML uses a handful of standardized symbols to set off information. A typical inventory file written in XML might look like this:
<Chair>
<Name>Straight Back</Name>
<Number>040754</Number>
<Price>79.99</Price>
</Chair>
In this example, the information about a particular chair is contained in XML markup codes that define the name, number, and price. The words "Chair", "Name", "Number", and "Price" are surrounded by the standard XML code characters "<", ">", and "/". These standard codes are used to mark up the data so that a database program can read and write the information efficiently.
Each set of enclosing codes, and the word inside, is called an element. These elements surround and define the corresponding data. In the previous example, the cost of the chair, 79.99, is surrounded by the XML elements of "<Price>" and "</Price>". This combination of XML elements and enclosed data is the basic format for all XML documents, and provides an easy-to-read and easy-to-program way to store the description of the data together with the data itself.
XML by itself does not enforce any rules about element names or data types; that job falls to an XML schema, a separate document that defines which elements a message may contain and what kind of data each element must hold. In the previous chair inventory example, an XML schema document would need to be created in order to define the spelling and data type of each element. For example, it would define the spelling of "Chair," "Name," "Number," and "Price." The schema would not only include the exact spelling of these words, but would also define the requirements of the data in each element. In the case of the price, the schema would require that the price be a number, and not a text string.
A typical XML schema for the chair inventory example would look like this:
<schema xmlns="http://www.w3.org/2001/XMLSchema">
<element name="Chair" type="chairType" />
<complexType name="chairType">
<sequence>
<element name="Name" type="string" />
<element name="Number" type="string" />
<element name="Price" type="decimal" />
</sequence>
</complexType>
</schema>
This chair inventory schema creates code that corresponds to each element of the XML document that the schema references. It defines the spelling and content of each element. For example, in the chair inventory example, the price is defined with this line:
<Price>79.99</Price>
The chair inventory schema uses the following line to make sure that the element called "Price" is spelled correctly and that the data type of the information is "decimal".
<element name="Price" type="decimal" />
If an XML document had the value of "Free" for the price instead of a decimal number, an error would be generated when the document was processed with the schema.
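A schema validator applies exactly these rules mechanically. As a rough sketch of what that check involves (outside BizTalk, with invented helper names), the two rules from the chair schema — required elements, and a Price that must parse as a decimal — can be coded directly:

```python
import xml.etree.ElementTree as ET
from decimal import Decimal, InvalidOperation

def validate_chair(xml_text):
    """Check a <Chair> document the way the schema would:
    required elements present, and Price must parse as a decimal."""
    root = ET.fromstring(xml_text)
    errors = []
    for field in ("Name", "Number", "Price"):
        if root.find(field) is None:
            errors.append(f"missing element: {field}")
    price = root.find("Price")
    if price is not None:
        try:
            Decimal(price.text)
        except (InvalidOperation, TypeError):
            errors.append(f"Price is not a decimal: {price.text!r}")
    return errors

good = "<Chair><Name>Straight Back</Name><Number>040754</Number><Price>79.99</Price></Chair>"
bad = "<Chair><Name>Straight Back</Name><Number>040754</Number><Price>Free</Price></Chair>"

print(validate_chair(good))  # []
print(validate_chair(bad))   # ["Price is not a decimal: 'Free'"]
```

Running it against the "Free" document reports the same kind of error that BizTalk Server raises when a message fails schema validation.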
This is the basic principle of how schemas are structured. "Part 3: Creating Schemas" provides more detail about how to build schemas.
BizTalk Server uses XML schemas to perform two major functions:

1. Transforming the data in messages
2. Validating the data in messages
BizTalk schemas are most commonly used to alter data in a message and route it to a new destination. BizTalk Server can take data from an incoming message and copy it to a new message, or use the data to make a decision about further action. All of this data processing requires the use of XML schemas in order to read and write the data accurately.
When BizTalk Server receives messages from one computer system, it uses schemas to convert the message into the format required for a different computer system. A source schema tests the data in the incoming message and uses a destination schema to transform it to the appropriate data in the outgoing message. For example, you may have messages coming in from a furniture store requesting a chair from the warehouse. The format of the incoming message from the store may look like this:
<Chair>
<ID>040754</ID>
<Price>59.52</Price>
<Color>Red</Color>
</Chair>
However, if the computer at the warehouse uses a different message format, it may be expecting a message that looks like this:
<Chair>
<ID>040754</ID>
<Hue>Red</Hue>
</Chair>
If you send the message from the store directly to the warehouse, the warehouse computer won't be able to read the message correctly, because the data fields won't match. The data fields won't match because the data in the incoming message has three sections (ID, Price, Color), but the outgoing message has only two (ID, Hue).
You can use BizTalk Server to solve this type of problem by defining a transformation relationship between a field in one message and a field in another. You define this relationship by using BizTalk Mapper, a visual tool that you use to drag and drop the relationship criteria into place.
After you have created the relationship by using the editor and the mapper, BizTalk Server provides many different ways to copy, transform, and route the data by using pipelines and orchestrations.
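The effect of such a map — copy ID across, carry Color into Hue, and drop Price — can be sketched outside BizTalk in a few lines. This stands in for what the generated map does; it is not how BizTalk implements it internally:

```python
import xml.etree.ElementTree as ET

def store_to_warehouse(store_xml):
    """Mimic the map: copy ID across, rename Color to Hue, drop Price."""
    src = ET.fromstring(store_xml)
    dst = ET.Element("Chair")
    ET.SubElement(dst, "ID").text = src.findtext("ID")
    ET.SubElement(dst, "Hue").text = src.findtext("Color")
    return ET.tostring(dst, encoding="unicode")

incoming = "<Chair><ID>040754</ID><Price>59.52</Price><Color>Red</Color></Chair>"
print(store_to_warehouse(incoming))
# <Chair><ID>040754</ID><Hue>Red</Hue></Chair>
```

The outgoing document now matches the structure the warehouse computer expects, which is exactly the role the map plays inside a BizTalk pipeline or orchestration.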
BizTalk Mapper uses a visual drag-and-drop user interface to define transformational relationships between fields in one message and fields in another message. After you have promoted the properties you wish to use, you need to create a map file that outlines the relationships.
To create a map file using the BizTalk Mapper
The following figure shows the Add New Item dialog box.
Figure 1 Add New Item dialog box
You can display the contents of the new map file by double-clicking the map file name in Solution Explorer. The following figure shows a blank map.
Figure 2 Blank map file
Click Open Source Schema and Open Destination Schema to add schemas to the map. After you have inserted the schemas, you can drag the mouse pointer from the properties in one schema to the properties in another. When you release the mouse, a line appears showing the link between the two properties. The following figure shows a sample map with the relationships drawn in.
Figure 3 Sample map file
Using XML schemas and relationship mapping enables BizTalk pipelines and orchestrations to receive, transform, and send messages from one computer system to another.
Promoting a property makes a message field visible to other parts of BizTalk Server, such as the mapper and orchestrations. Use the following steps to promote a property that you wish to use.
To promote a property
The following figure shows the Promote Properties dialog box with a field named PetType.
Figure 4 Promote Properties dialog box
When you promote a property, you add code to the schema. In the preceding example, when you promote the PetType field, the following code will be added to the root node of the schema:
<b:properties>
<b:property distinguished="true" xpath="/*[local-name()='Pets' and namespace-uri()='']
/*[local-name()='StudentPets' and namespace-uri()='']/*[local-name()='PetType' and namespace-uri()='']" />
</b:properties>
After a property is promoted, BizTalk Server has the information it needs to process the property with the mapper and to create a transformation relationship.
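The XPath in the annotation selects each step by local-name(), so the lookup succeeds no matter which namespace prefix an instance document uses. A rough Python equivalent of that namespace-insensitive walk (the sample namespace URI here is invented for illustration):

```python
import xml.etree.ElementTree as ET

def find_by_local_name(elem, *names):
    """Follow a chain of child elements matching on local name only,
    the way the promoted property's XPath does."""
    for name in names:
        elem = next(
            child for child in elem
            if child.tag.split("}")[-1] == name  # strip any {namespace} prefix
        )
    return elem

doc = ET.fromstring(
    '<Pets xmlns="http://example.org/pets">'
    "<StudentPets><PetType>Cat</PetType></StudentPets></Pets>"
)
print(find_by_local_name(doc, "StudentPets", "PetType").text)  # Cat
```

Because the match is on the local name alone, the same lookup works whether the instance document declares a default namespace, a prefixed one, or none at all.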
For more information about BizTalk Mapper and about promoting properties, see BizTalk Server 2004 Help.
When you create a schema, the structure of the schema may not be correct. BizTalk Server provides a way to test the accuracy of the schema to make sure that it is internally consistent and valid.
To validate a schema for accuracy, add the schema to the BizTalk project and right-click it in Solution Explorer. One of the options will be to validate the schema. Select this option and BizTalk Server will run tests on the schema to make sure that it is constructed properly. If there are any errors, they will be displayed in the output window.
To validate data in an XML message, you can use schemas embedded in a custom pipeline. See BizTalk Server 2004 Help for more information about advanced types of data validation.
The best way to create schemas is to use BizTalk Editor.
BizTalk Editor makes it easy to create schemas because instead of typing commands and hoping you don't make a typing error, you can use menus to add elements to your schema. As each element is added, BizTalk Server makes sure that it is valid, preventing syntax errors.
To create a schema using BizTalk Editor
The following figure shows the Add New Item dialog box for a new schema.
Figure 5 Add New Item dialog box
After you have added your new schema, you can view it in BizTalk Editor by double-clicking the schema name in Solution Explorer. The following figure shows a newly created schema.
Figure 6 Newly created schema
When you create a schema in BizTalk Server, the editor displays the following lines of code:
<?xml version="1.0" encoding="utf-16" ?>
<xs:schema xmlns="http://BizTalk_Server_Project.Schema1" xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
targetNamespace="http://BizTalk_Server_Project.Schema1"
xmlns:
<xs:element name="Root">
<xs:complexType />
</xs:element>
</xs:schema>
This default schema has four standard parts. They are:

1. The XML declaration
2. The schema node
3. The root node
4. The data type declaration
The first part of the default schema is the declaration:
<?xml version="1.0" encoding="utf-16" ?>
Every schema has this type of declaration as the first line. The standard XML codes "<?" and "?>" indicate that this line is an XML declaration. The version number (1.0) is defined and so is the encoding. BizTalk Server prefers UTF-16 encoding. See "Using External Schemas" later in this document for more information about encoding.
The second part of the schema consists of these two lines:
<xs:schema xmlns="http://BizTalk_Server_Project.Schema1"
xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
targetNamespace="http://BizTalk_Server_Project.Schema1"
xmlns:
</xs:schema>
The first line, which takes up four lines on the page, defines the basic schema information. All BizTalk Server-generated schema parts are prefixed with "xs:", which stands for XML schema. Note that this is all one logical line that begins with "<xs:schema" and ends with ">". This part of the schema definition defines the basic namespaces of the schema. It is called the schema node because it encloses all other nodes.
Namespaces are used to make sure that components in your schema will not be confused with components of the same name in another schema. For example, you may have two schemas that both have a field named "customer". If you don't put them in different namespaces, BizTalk Server won't know which customer you are referring to.
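The "customer" collision is easy to demonstrate: when each schema's fields carry a distinct namespace URI, a consumer can ask for exactly the one it means. The URIs below are invented for illustration:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<order xmlns:a="http://example.org/billing" '
    'xmlns:b="http://example.org/shipping">'
    "<a:customer>Invoice Dept</a:customer>"
    "<b:customer>Loading Dock 7</b:customer>"
    "</order>"
)
# The same local name "customer", disambiguated by namespace URI:
billing = doc.find("{http://example.org/billing}customer").text
shipping = doc.find("{http://example.org/shipping}customer").text
print(billing, "/", shipping)  # Invoice Dept / Loading Dock 7
```

Without the two namespaces, a processor reading this document would have no way to tell which "customer" element belongs to which schema.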
The following table shows the namespaces that BizTalk Server generates.
Default namespace (xmlns): This is called the default namespace and uses a name that is generated from the solution and schema names. Even though this is prefixed with http:// it doesn't need to point to an actual Web location. This is the namespace that will be used if no other namespaces are defined.

BizTalk namespace (xmlns:b): This points to the definition of an additional schema namespace that BizTalk Server uses for specific purposes. You can use components that share this common BizTalk namespace.

Target namespace (targetNamespace): This sets the target namespace, which is the namespace that will be used in your document. The target namespace overrides the default namespace, but by making the two namespaces the same, you avoid any potential namespace overlap.

Standard schema namespace (xmlns:xs): This is the namespace that defines standard XML Schema elements. If you don't use this as the default namespace, you must put "xs:" before each standard element. This makes sure that all non-standard elements are defined by your schema.
The second line of the schema node:
</xs:schema>
is the closing tag. All the other nodes of the schema are enclosed by these two tags. As with all XML code, you must make sure that you close all opening tags. Additional properties of the schema node are defined in the BizTalk Server Programmer's Reference.
The third part of the schema is called the root node. BizTalk Server creates a default root node with the following two lines:
<xs:element name="Root">
</xs:element>
The root node is simply the top node of the XML schema tree. Additional properties of the root node are defined in the BizTalk Server Programmer's Reference.
The final part of the schema added by BizTalk Server is:
<xs:complexType />
This line declares the data type of the root node as complexType. What the complexType will contain is defined later, when you add more nodes by using BizTalk Editor.
After you have finished creating a schema, you can edit or add new items to any node by right-clicking the node name and selecting options from the menu. See BizTalk Server 2004 Help for more information about using BizTalk Editor.
BizTalk Server allows you to import externally created schemas. If you use a schema created outside BizTalk Server, the schema data must be stored in a text file in the proper format. The format used for editing and storing text file characters is called encoding. If you edit your schemas in Microsoft Notepad, the encoding will be compatible with BizTalk Server. However, if you use another text editor, you must make sure that your editor saves text data by using UTF-8 or UTF-16 encoding.
If you are creating XML data files, BizTalk Server converts the encodings to UTF-8. Note that because both UTF-8 and UTF-16 are commonly known as Unicode, you must make sure that you know which of the two encodings you are using.
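The distinction is visible in the raw bytes. A quick way to see what each encoding produces, including the byte order marks discussed in the FAQ referenced below:

```python
text = "<Chair/>"

utf8 = text.encode("utf-8")          # plain UTF-8: no byte order mark
utf8_bom = text.encode("utf-8-sig")  # UTF-8 with a BOM prepended
utf16 = text.encode("utf-16")        # UTF-16: BOM first, then 2 bytes per character

print(utf8[:4])      # b'<Cha'
print(utf8_bom[:3])  # b'\xef\xbb\xbf' (the UTF-8 BOM)
print(utf16[:2])     # the UTF-16 BOM: b'\xff\xfe' on little-endian machines
```

Because both UTF-8 and UTF-16 are loosely called "Unicode", inspecting the leading bytes like this is a reliable way to confirm which of the two a file actually uses.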
If you would like to know more about the UTF and BOM standards, the Unicode Web site has a useful FAQ.
Even though XML is quickly becoming a popular format for storing message data, many computer systems still use a "flat file" format that corresponds to standard database file storage formats. Flat files have structures that were designed many years ago, more for the convenience of computers than for humans.
XML uses a hierarchical structure for storing information, but flat files store their information in a continuous string of bytes. If you look at an XML file, you can easily view the tree-like structure of the data.
There are two basic types of flat file structures: positional and delimited.
The positional structure is based on early computer punch cards, where the meaning of the data was based on the columns in which the information was stored. For example, a name would be in the first 20 columns and the address in the next 30 columns. Each group of columns corresponded to a field and the set of columns corresponded to a single record. Positional storage was very efficient for electromechanical reading and writing, and worked well enough for slow computer systems of the past.
A typical positional record might look like this:
0123456789012345678901234567890123456789012345678901234567890123456789
FirstName           LastName            Street Address
The first 20 columns are allocated for the first name, the next 20 for the last name, and the remaining 30 columns for the street address. Note that the repeating numbers above the data are not part of the file, but are shown here to help visualize contents of the file in terms of position and length.
As computers got faster, a second type of filing system evolved to save storage space, because positional storage can be very wasteful of space. For example, if you have allocated 20 characters for a first name and the name stored is "Laura", you are wasting 15 characters of space. By using a delimited storage, instead of allocating a fixed number of characters for each field, a special character is used to separate fields from each other. The special character, called a delimiter, must not be used inside any field. A typical delimiter might be a comma.
A typical delimited file might look like this, with a comma being used as a delimiter:
0123456789012345678901234567890123456789012345678901234567890123456789
FirstName,LastName,StreetAddress
The delimited file takes up 32 bytes, whereas the positional equivalent took up 70 bytes. Note that the repeating numbers above the data are not part of the file, but are shown here to help visualize contents of the file in terms of position and length.
Delimited files take a bit more processing, but the processing difference is pretty much irrelevant with higher-speed computers. The only danger with using delimited files is that you must make sure that the data in the fields does not contain a delimiter. For example, a street address that contained a comma (for example, "24th St., NW") would be interpreted as two fields, not one.
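The two layouts can be sketched in a few lines of Python (an illustration of the file formats only, not BizTalk code; the last name "Smith" and the address are made-up sample values):

```python
# Parse one 70-byte positional record: 20 chars for first name,
# 20 chars for last name, 30 chars for street address.
def parse_positional(record):
    return (record[0:20].rstrip(),
            record[20:40].rstrip(),
            record[40:70].rstrip())

# Parse one delimited record by splitting on the delimiter.
def parse_delimited(record, delimiter=","):
    return tuple(record.split(delimiter))

positional = "Laura".ljust(20) + "Smith".ljust(20) + "24th St NW".ljust(30)
delimited = "Laura,Smith,24th St NW"

print(parse_positional(positional))  # ('Laura', 'Smith', '24th St NW')
print(parse_delimited(delimited))    # ('Laura', 'Smith', '24th St NW')
print(len(positional), len(delimited))  # 70 22
```

Note how a comma inside the address (as in "24th St., NW") would make parse_delimited return four fields instead of three, which is exactly the danger described above.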
BizTalk Server can receive flat files and transform them into XML messages for processing. Conversely, BizTalk Server can take an internal XML message and convert it to a flat file that can be sent as an outgoing message. However, to transform flat files, you must use specific schema annotations known as flat file extensions. The following sections show you how to develop a flat file solution. The solution uses the BizTalk Server flat file extensions to receive a positional flat file and transform it into an XML file.
There are many different ways to develop a BizTalk solution, but this document will use the following four-step process:
The following example uses the four-step process described earlier to develop a BizTalk solution that receives a positional flat file message and uses a flat file extension to transform the data into an XML message. The name of the solution is FFPosRec.
Before you begin, be sure that you have BizTalk Server 2004 set up on a single computer. It is suggested that you run other solutions, such as the "Hello, World" solution in the BizTalk Server SDK, to ensure that your BizTalk Server installation works properly.
You must set up your BizTalk solution every time you create a new solution. Each time, you need to create new folders, build a container assembly, and create a new project. For this example, the name of the solution is FFPosRec, which stands for Flat File Positional Receive.
First, set up a series of folders. Begin by setting up one folder that will contain all the other folders. For this example, call it FFPosRec. Then, inside this folder, create three folders, called In, Out, and Original.
The In folder is the receive location for your solution. When the solution is created, you will drop a flat file in this location for processing.
The Out folder is the send location for your solution. After your file is received and transformed, a new XML file will be created in this location.
The Original folder is not required, but is suggested as a safety procedure. Because BizTalk Server consumes any flat file you drop into the In folder, you need to make sure that you save an original so you can test it again. Put the original flat file you create in this folder and copy it to the root of the FFPosRec folder.
Your folder structure should look like this:
C:\FFPosRec
C:\FFPosRec\In
C:\FFPosRec\Out
C:\FFPosRec\Original
After you have created the folders, make sure that the user of the computer has permission to use the folders. The instance of BizTalk Server that you are running must have read and write permission for all the folders.
You must create a .NET strong-named key assembly to contain the code for your solution. Creating a strong-named key assembly guarantees that your solution will be unique and will not collide with the namespaces of other .NET assemblies. A strong name consists of the assembly name, version, culture (if available), public key, and digital signature. For more information about strong-named key assemblies, see the Microsoft .NET Framework Developer's Guide.
When you install Visual Studio .NET 2003, a command prompt shortcut is created in the Visual Studio .NET Tools folder that you can select from the Start menu. Type the following in the command prompt window:
sn –k "C:\FFPosRec\FFPosRec.snk"
This runs the strong-named key file generator program with the "k" switch, which generates the new file with the name you provide. Be sure to type the quotes to include the entire path information, and to specify the file extension as .snk.
You are now ready to create a new BizTalk project.
To create a new BizTalk project
Your project is now created and you are almost finished with the first step.
You must tell BizTalk Server where your strong-named assembly is stored on your computer.
To connect the project to the assembly
You have just created the basic framework necessary to hold your solution. The next step is to create your schema.
The schema defines what data comes in and what data goes out. This information must be completely defined before any further processing can take place. In most cases, it makes sense to create the schema before you create the rest of the solution. But because one person may design the schema and another may do the actual programming, these tasks may be done in the opposite order.
When you work with flat files, you need to know the exact requirements for your data. In other words, you must know the length of your record, how many fields it contains, and the length of each field.
This example uses one record with two fields. The record stores the name of a student at a school and the type of pet that the student owns. The following table shows a list of the data requirements for each field in the record.
Field  Field Name   Field Offset  Field Length
1      StudentName  0             20
2      PetType      0             20
Note that the field offset is measured from the end of the previous field, or, for the first field, from the beginning of the record. Offset and length are measured in bytes or characters.
A typical file that was created to this specification might look like this:
0123456789012345678901234567890123456789012345678901234567890123456789
Harry               Owl
Note that the repeating numbers above the data are not part of the file, but are shown here to help visualize the contents of the file in terms of position and length.
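As a quick illustration (plain Python, not part of the BizTalk solution), the 40-character record above splits into its two fields exactly as the field table specifies:

```python
# Split the sample student record per the field table:
# StudentName at offset 0, length 20; PetType following it, length 20.
record = "Harry".ljust(20) + "Owl".ljust(20)

student_name = record[0:20].rstrip()
pet_type = record[20:40].rstrip()

print(student_name, pet_type)  # Harry Owl
print(len(record))             # 40
```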
When you create a schema for this example, follow these steps:
Use the procedure outlined earlier in "Part 3: Creating Schemas" to create your new schema. The schema that BizTalk Server generates for this example should have the following text:
<?xml version="1.0" encoding="utf-16" ?>
<xs:schema xmlns=""
    xmlns:b=""
    targetNamespace=""
    xmlns:
  <xs:element
    <xs:complexType />
  </xs:element>
</xs:schema>
The only differences between the FFPosRec schema and the chair inventory schema are in the second and fourth lines, where the namespace is created by combining the solution and schema names: ffposrec.FFPosRec.
Now you must add an extension to the schema to define how to convert the flat file data to XML so that BizTalk Server can process it. Schemas were designed to process XML documents, but by using the Flat File Extension, you can convert flat files into BizTalk Server XML files.
To add the Flat File Extension
After you click OK, you will notice that BizTalk Server has added several lines to your schema. BizTalk Server has added two sets of annotations to the schema. Annotations are a standard technology used to extend schemas.
The first newly added annotation code section in the schema is:
<xs:annotation>
- <xs:appinfo>
<b:schemaInfo
<schemaEditorExtension:schemaInfo namespaceAlias="b"
extensionClass="Microsoft.BizTalk.FlatFileExtension.FlatFileExtension"
standardName="Flat File"
xmlns:schemaEditorExtension="
/SchemaEditorExtensions" />
</xs:appinfo>
</xs:annotation>
Note that this first annotation refers to the "b" namespace that was previously defined in the schema node. This namespace contains elements that are unique to BizTalk Server and will define additional namespaces and properties for flat files.
The second annotation is:
<xs:annotation>
  <xs:appinfo>
    <b:recordInfo
  </xs:appinfo>
</xs:annotation>
This also uses the "b" namespace and provides some default information about the record structure for flat files. For more information about Flat File Extension properties, see the BizTalk Server Programmer's Reference.
This step is not required, but it is good practice to give every schema a meaningful root name.
To name the root node
You must add at least one record for a flat file schema, because a flat file is a collection of one or more records.
To add a child record
When you view the modified schema in the schema editor, you will notice that adding the new record opened up the <xs:complexType/> element and inserted a sequence under the root node with the following code:
</xs:element>
</xs:sequence>
This allows other records to be inserted later and identifies this record as the first one in the sequence. Then it creates an element called "StudentPets" and defines the record as delimited.
You have now added a record that can contain positional fields. The next step is to add child elements for each field.
To add the child fields
<xs:element
  <xs:annotation>
    <xs:appinfo>
      <b:fieldInfo
    </xs:appinfo>
  </xs:annotation>
</xs:element>
This defines a field of 20 characters that will contain the name of the student. The field code will look like this:
<xs:element
  <xs:annotation>
    <xs:appinfo>
      <b:fieldInfo
    </xs:appinfo>
  </xs:annotation>
</xs:element>
Repeat the preceding procedure (taking care to select the record name, not the field name) to create a field named "PetType" for the type of pet. This field will also have an offset of 0 and a length of 20. (Note that in this case, the offset is from the end of the previous field.) The two fields together make a total of 40 characters.
You have now created a schema for your flat file receive solution.
For more information about additional record and field properties, see the BizTalk Server Programmer's Reference.
Most BizTalk solutions require using the BizTalk Server graphical editing tools to write the code and sometimes adding C# code for specific objects. Because the FFPosRec solution does not involve complex routing or decision-making, Orchestration Designer was not used to create this solution. However, because this solution involves converting the document format from flat file to XML, you must create a custom BizTalk Server pipeline to create the code necessary to process incoming flat file messages.
BizTalk Server pipelines process messages in stages. The FFPosRec solution uses a custom pipeline to create a stage that disassembles the message. Disassembling converts the flat-file message into the native XML format that BizTalk Server uses.
To add a pipeline to the FFPosRec solution
You have created a custom receive pipeline with a disassembler stage. Now you must tell BizTalk Server which schema you want to use for the disassembly. You will use the schema you created in step 2, the one with the Flat File Extension.
To connect the pipeline component to the schema
You have now connected your pipeline to your schema, and your custom pipeline is ready to be added to your solution.
You have created a schema with flat-file extension and you also have created a custom pipeline with a flat-file disassembly stage. The pipeline component is connected to the proper schema. Now you must combine the schema and the pipeline component into one package that BizTalk Server can use to implement the solution. The schema and pipeline information will be added to the .NET assembly you created in step 1.
To build the solution
A few moments later, the results of your build will appear in the Output window of Visual Studio. If the build had no errors, the results should read:
---------------------- Done ----------------------
Build: 1 succeeded, 0 failed, 0 skipped
The most common error in this part of the process is not filling in the offset and length of the positional fields. Another common error is not giving all components a unique name. A third error may occur if you need to revise your schema. Because of caching issues, before you can successfully rebuild, you may need to delete your custom pipeline and make sure that the revised pipeline is included when you rebuild. Any time you alter a pipeline, you may need to rebuild your assembly.
You must perform one more step before you can test your solution. Even though you have created an assembly containing your code, you must deploy it. The assembly is sitting on your computer, but to share your assembly, it must reside in the global assembly cache so that it is accessible to the .NET Framework. BizTalk Server only uses the assembly version that you have deployed, so you must redeploy your assembly every time you rebuild it.
To deploy the assembly
A few moments later, the results of your deployment will appear in the Output window of Visual Studio. If the deployment had no errors, the results should read:
---------------------- Done ----------------------
Build: 1 succeeded, 0 failed, 0 skipped
Deploy: 1 succeeded, 0 failed, 0 skipped
Note that BizTalk Server first builds your assembly one last time and then deploys it.
To make sure that your assembly is properly deployed, you can look at the deployment of all BizTalk Server assemblies on your computer by opening BizTalk Explorer in Visual Studio.
To view BizTalk Server assemblies
<YOURMACHINENAME>.BizTalkMgmtDb.dbo
The most common error in deployment is that you may already have an assembly deployed with the same characteristics. This will happen if you try to deploy the same assembly after you have already deployed it. If you try to redeploy without undeploying, you'll get an error saying that the assembly is already deployed.
To undeploy an assembly
The following sequence will be useful when you are going through an iterative build cycle with BizTalk Server:
You have now created, built, and deployed a BizTalk schema and a custom pipeline. Your solution for translating flat files to XML is almost finished. All that is left to do is hook up the send and receive ports, and when finished with that, test your solution. This involves following a set of steps in the correct order.
The first thing you must do to configure your solution is to create a receive port. This will tell BizTalk Server how you want to receive a flat file, but not where.
To create a BizTalk Server receive port
In the last section you created a receive port that told BizTalk Server how you want to receive your file, but not where. You must now specify a location where the file will be dropped so that BizTalk Server can receive it.
To create a BizTalk Server receive location
You have now finished creating the receive ports and locations.
You must configure the send port so that BizTalk Server can send out the XML file that it created after disassembling the flat file. BizTalk Server uses the XML format as the default for sending and receiving files, so this step will be much easier than configuring the receive port. Also, unlike for the receive port, you do not need to go through an extra step to specify the send location.
To create a BizTalk Server send port
BTS.ReceivePortName
BTS.ReceivePortName == FFPosRec_RP Add
To enable a BizTalk Server receive location
Your receive location is now ready to receive.
To start a BizTalk Server send port
Drop a flat file into the In folder. After a moment an XML file should appear in the Out folder. If a new XML file does not appear, use the BizTalk Administration console to see what may have gone wrong. Usually the Event Viewer will give you a clue whether the problem is in the send port, receive ports, pipeline, or schema.
Delimited flat files are similar to positional flat files except that they use delimiters between each field. A typical delimited flat file is the common CSV (Comma-Separated Values) file that Microsoft Excel generates if you choose the CSV output format. Follow the same procedure that was shown in the FFPosRec solution, but choose delimited values and specify the delimiter.
The following figure shows an Excel file with one row and two columns, representing output from a database of student pets. The first column represents a student name and the second represents a pet type.
Figure 8 Excel data file
You must choose the Excel Save As option and select the CSV option, to save your file in CSV (comma delimited) format. If you view the file in a text editor, it will look like this:
Harry,Owl
You can process this with BizTalk Server by using the same procedure provided in the FFPosRec solution for positional flat files, but you must set the appropriate properties for comma-delimited files.
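For illustration only, the same CSV record can be read with Python's standard csv module (this just demonstrates the format; BizTalk uses the schema below):

```python
import csv
import io

# Read the one-line CSV produced by the Excel "Save As CSV" step.
row = next(csv.reader(io.StringIO("Harry,Owl")))
print(row)  # ['Harry', 'Owl']
```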
The following schema can be used to process a CSV flat file that follows the data saved from the Excel file above:
<?xml version="1.0" encoding="utf-16" ?>
<xs:schema
<xs:annotation>
<xs:appinfo>
<b:schemaInfo
<schemaEditorExtension:schemaInfo namespaceAlias="b"
extensionClass="Microsoft.BizTalk.FlatFileExtension.FlatFileExtension"
standardName="Flat File"
xmlns:
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
This schema can be used to generate the following XML file:
<?xml version="1.0" encoding="utf-8" ?>
<Pets xmlns="">
  <StudentPets>
    <StudentName>Harry</StudentName>
    <PetType>Owl</PetType>
  </StudentPets>
</Pets>
You can use Microsoft InfoPath in conjunction with BizTalk Server to display XML messages sent from BizTalk Server. If you are using InfoPath to display XML data generated by BizTalk Server, you must use the Processing instruction property of the message context for InfoPath to display the data correctly.
When an XML file is created or edited using InfoPath, InfoPath creates a processing instruction at the beginning of that XML file, which indicates that the document should be edited with InfoPath. A processing instruction is part of the XML standard and does not interfere with the schema on which the XML file may be based. If you do not include the processing instruction, another XML editor may be the default XML editor and InfoPath will not be able to read or edit your document.
To determine the processing instruction data, you should create a form in InfoPath using the schema you will be using in BizTalk Server. Then publish the form and save it. The processing instruction will be part of the form.
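A sketch of what prepending such a processing instruction looks like. The mso-application PI below is the standard marker that associates a document with InfoPath; the mso-infoPathSolution attributes must be copied from your own published form, so that PI is omitted here:

```python
# Build an XML document with an InfoPath processing instruction at the top.
pi = '<?mso-application progid="InfoPath.Document"?>'
xml_body = ('<Pets xmlns="">'
            '<StudentPets>'
            '<StudentName>Harry</StudentName>'
            '<PetType>Owl</PetType>'
            '</StudentPets></Pets>')

document = '<?xml version="1.0" encoding="utf-8"?>\n' + pi + '\n' + xml_body
print(document.splitlines()[1])  # the processing instruction line
```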
The following links discuss processing instruction information in greater detail.
For more information about processing instructions, see the InfoPath documentation.
The following books provide useful information about XML and schemas:
The following Web sites provide useful information about XML and schemas: | http://msdn.microsoft.com/en-us/library/ms942182.aspx | crawl-002 | refinedweb | 5,933 | 60.35 |
This tutorial demonstrates how to generate text using a character-based RNN. Sample output from the trained model, generated from a short starting prompt, appears at the end of the tutorial.
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

import numpy as np
import os
import time

# Download and read the Shakespeare dataset.
path_to_file = tf.keras.utils.get_file(
    'shakespeare.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
print(f'Length of text: {len(text)} characters')

# The unique characters in the file.
vocab = sorted(set(text))
print(f'{len(vocab)} unique characters')
65 unique characters
Process the text
Vectorize the text
Before training, you need to convert the strings to a numerical representation.
The preprocessing.StringLookup layer can convert each character into a numeric ID. It just needs the text to be split into tokens first.

example_texts = ['abcdefg', 'xyz']

chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8')
chars
<tf.RaggedTensor [[b'a', b'b', b'c', b'd', b'e', b'f', b'g'], [b'x', b'y', b'z']]>
Now create the preprocessing.StringLookup layer:

ids_from_chars = preprocessing.StringLookup(
    vocabulary=list(vocab))
It converts from tokens to character IDs, padding with 0:

ids = ids_from_chars(chars)
ids
<tf.RaggedTensor [[41, 42, 43, 44, 45, 46, 47], [64, 65, 66]]>
Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use preprocessing.StringLookup(..., invert=True).

chars_from_ids = tf.keras.layers.experimental.preprocessing.StringLookup(
    vocabulary=ids_from_chars.get_vocabulary(), invert=True)
This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters:

chars = chars_from_ids(ids)
chars
<tf.RaggedTensor [[b'a', b'b', b'c', b'd', b'e', b'f', b'g'], [b'x', b'y', b'z']]>
You can use tf.strings.reduce_join to join the characters back into strings.

tf.strings.reduce_join(chars, axis=-1).numpy()
array([b'abcdefg', b'xyz'], dtype=object)
def text_from_ids(ids):
    return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)
The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output: the following character at each time step.
all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8'))
all_ids
<tf.Tensor: shape=(1115394,), dtype=int64, numpy=array([20, 49, 58, ..., 47, 10, 2])>
ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)
for ids in ids_dataset.take(10):
    print(chars_from_ids(ids).numpy().decode('utf-8'))

F
i
r
s
t
 
C
i
t
i
seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)
The batch method lets you easily convert these individual characters to sequences of the desired size.

sequences = ids_dataset.batch(seq_length+1, drop_remainder=True)

for seq in sequences.take(1):
    print(chars_from_ids(seq))
tf.Tensor( [b'F' b'i' b'r' b's' b't' b' ' b'C' b'i' b't' b'i' b'z' b'e' b'n' b':' b'\n' b'B' b'e' b'f' b'o' b'r' b'e' b' ' b'w' b'e' b' ' b'p' b'r' b'o' b'c' b'e' b'e' b'd' b' ' b'a' b'n' b'y' b' ' b'f' b'u' b'r' b't' b'h' b'e' b'r' b',' b' ' b'h' b'e' b'a' b'r' b' ' b'm' b'e' b' ' b's' b'p' b'e' b'a' b'k' b'.' b'\n' b'\n' b'A' b'l' b'l' b':' b'\n' b'S' b'p' b'e' b'a' b'k' b',' b' ' b's' b'p' b'e' b'a' b'k' b'.' b'\n' b'\n' b'F' b'i' b'r' b's' b't' b' ' b'C' b'i' b't' b'i' b'z' b'e' b'n' b':' b'\n' b'Y' b'o' b'u' b' '], shape=(101,), dtype=string)
It's easier to see what this is doing if you join the tokens back into strings:
for seq in sequences.take(5):
    print(text_from_ids(seq).numpy())
b'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou ' b'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k' b"now Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\nLet us ki" b"ll him, and we'll have corn at our own price.\nIs't a verdict?\n\nAll:\nNo more talking on't; let it be d" b'one: away, away!\n\nSecond Citizen:\nOne word, good citizens.\n\nFirst Citizen:\nWe are accounted poor citi'
For training you'll need a dataset of (input, label) pairs, where input and label are sequences. At each time step the input is the current character and the label is the next character.
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep:
def split_input_target(sequence):
    input_text = sequence[:-1]
    target_text = sequence[1:]
    return input_text, target_text
split_input_target(list("Tensorflow"))
(['T', 'e', 'n', 's', 'o', 'r', 'f', 'l', 'o'], ['e', 'n', 's', 'o', 'r', 'f', 'l', 'o', 'w'])
dataset = sequences.map(split_input_target)
for input_example, target_example in dataset.take(1):
    print("Input :", text_from_ids(input_example).numpy())
    print("Target:", text_from_ids(target_example).numpy())

Input : b'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou'
Target: b'irst Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
Create training batches
You used tf.data to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches.

# Batch size
BATCH_SIZE = 64

# Buffer size to shuffle the dataset
BUFFER_SIZE = 10000

dataset = (
    dataset
    .shuffle(BUFFER_SIZE)
    .batch(BATCH_SIZE, drop_remainder=True)
    .prefetch(tf.data.experimental.AUTOTUNE))

dataset
<PrefetchDataset shapes: ((64, 100), (64, 100)), types: (tf.int64, tf.int64)>
Build The Model
This section defines the model as a keras.Model subclass (for details, see Making new Layers and Models via subclassing).
This model has three layers:
- tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map each character-ID to a vector with embedding_dim dimensions;
- tf.keras.layers.GRU: A type of RNN with size units=rnn_units (You can also use an LSTM layer here.)
- tf.keras.layers.Dense: The output layer, with vocab_size outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model.
# Length of the vocabulary in chars
vocab_size = len(vocab)

# The embedding dimension
embedding_dim = 256

# Number of RNN units
rnn_units = 1024

class MyModel(tf.keras.Model):
  def __init__(self, vocab_size, embedding_dim, rnn_units):
    super().__init__(self)
    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
    self.gru = tf.keras.layers.GRU(rnn_units,
                                   return_sequences=True,
                                   return_state=True)
    self.dense = tf.keras.layers.Dense(vocab_size)

  def call(self, inputs, states=None, return_state=False, training=False):
    x = inputs
    x = self.embedding(x, training=training)
    if states is None:
      states = self.gru.get_initial_state(x)
    x, states = self.gru(x, initial_state=states, training=training)
    x = self.dense(x, training=training)

    if return_state:
      return x, states
    else:
      return x

model = MyModel(
    # Be sure the vocabulary size matches the `StringLookup` layers.
    vocab_size=len(ids_from_chars.get_vocabulary()),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units)
For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character. Try the model on the first batch to check the output shape:

for input_example_batch, target_example_batch in dataset.take(1):
    example_batch_predictions = model(input_example_batch)
    print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")

(64, 100, 67) # (batch_size, sequence_length, vocab_size)
In the above example the sequence length of the input is 100, but the model can be run on inputs of any length:
model.summary()
Model: "my_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) multiple 17152 _________________________________________________________________ gru (GRU) multiple 3938304 _________________________________________________________________ dense (Dense) multiple 68675 ================================================================= Total params: 4,024,131 Trainable params: 4,024,131 Non-trainable params: 0 _________________________________________________________________
To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary. (It is important to sample from this distribution, as taking the argmax can easily get the model stuck in a loop.) Try it for the first example in the batch:

sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy()
sampled_indices

array([41, 34,  7, 28, 21, 45, 11, 14, 59,  8, 15, 11,  6, 33, 44, 33,  8,
       40,  5, 39, 32, 22, 37, 14, 53,  0, 48, 22, 23, 46, 44, 58, 39, 41,
       47,  7,  6, 62, 48,  9,  2, 27, 58, 17, 26, 17,  6, 16, 36, 28, 36,
        8,  3, 23, 19, 57, 50, 51, 59, 27,  6, 12, 39,  9, 23, 29, 37, 30,
       62, 51, 63, 35, 45, 52, 18,  7, 58, 17, 53, 41, 28, 37, 10, 64, 55,
       49, 61, 45, 57, 56, 21,  0,  0,  7, 48, 27, 51, 12,  1, 49])
Decode these to see the text predicted by this untrained model:
print("Input:\n", text_from_ids(input_example_batch[0]).numpy())
print()
print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy())
Input:
 b'seeming man!\nOr ill-beseeming beast in seeming both!\nThou hast amazed me: by my holy order,\nI though'

Next Char Predictions:
 b"aT'NGe3?s,A3&SdS,Z$YRHW?mhHIfdrYag'&vh-\nMrCLC&BVNV, IEqjksM&:Y-IOWPvkwUelD'rCmaNW.xoiueqpG'hMk:[UNK]i"

The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.

Because your model returns logits, you need to set the from_logits flag.
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
mean_loss = example_batch_loss.numpy().mean()
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("Mean loss:        ", mean_loss)

Prediction shape:  (64, 100, 67)  # (batch_size, sequence_length, vocab_size)
Mean loss:         4.20401
A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized:
tf.exp(mean_loss).numpy()
66.954285
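This check works because a perfectly uniform prediction over vocab_size classes has cross-entropy -log(1/vocab_size), so exponentiating the loss recovers the vocabulary size. In plain Python:

```python
import math

vocab_size = 67  # vocabulary size used by the model above

# Cross-entropy of a uniform prediction over the vocabulary.
uniform_loss = -math.log(1.0 / vocab_size)
print(round(uniform_loss, 5))          # 4.20469, close to the mean loss of 4.20401
print(round(math.exp(uniform_loss)))   # 67
```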
Configure the training procedure using the tf.keras.Model.compile method, with tf.keras.optimizers.Adam (default arguments) and the loss function:

model.compile(optimizer='adam', loss=loss)

Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:

# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True)

EPOCHS = 20
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
Epoch 1/20
172/172 [==============================] - 7s 26ms/step - loss: 3.2441
Epoch 2/20
172/172 [==============================] - 6s 26ms/step - loss: 2.0601
Epoch 3/20
172/172 [==============================] - 5s 26ms/step - loss: 1.7388
Epoch 4/20
172/172 [==============================] - 5s 26ms/step - loss: 1.5589
Epoch 5/20
172/172 [==============================] - 5s 26ms/step - loss: 1.4484
Epoch 6/20
172/172 [==============================] - 5s 26ms/step - loss: 1.3771
Epoch 7/20
172/172 [==============================] - 5s 26ms/step - loss: 1.3199
Epoch 8/20
172/172 [==============================] - 5s 26ms/step - loss: 1.2721
Epoch 9/20
172/172 [==============================] - 5s 26ms/step - loss: 1.2302
Epoch 10/20
172/172 [==============================] - 5s 26ms/step - loss: 1.1853
Epoch 11/20
172/172 [==============================] - 5s 26ms/step - loss: 1.1447
Epoch 12/20
172/172 [==============================] - 5s 26ms/step - loss: 1.0984
Epoch 13/20
172/172 [==============================] - 5s 26ms/step - loss: 1.0547
Epoch 14/20
172/172 [==============================] - 5s 26ms/step - loss: 1.0081
Epoch 15/20
172/172 [==============================] - 5s 26ms/step - loss: 0.9530
Epoch 16/20
172/172 [==============================] - 5s 26ms/step - loss: 0.9024
Epoch 17/20
172/172 [==============================] - 6s 26ms/step - loss: 0.8471
Epoch 18/20
172/172 [==============================] - 5s 26ms/step - loss: 0.7954
Epoch 19/20
172/172 [==============================] - 6s 26ms/step - loss: 0.7426
Epoch 20/20
172/172 [==============================] - 6s 26ms/step - loss: 0.6958
Generate text
The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.
Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text.
The following makes a single step prediction:
class OneStep(tf.keras.Model):
  def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0):
    super().__init__()
    self.temperature = temperature
    self.model = model
    self.chars_from_ids = chars_from_ids
    self.ids_from_chars = ids_from_chars

    # Create a mask to prevent "" or "[UNK]" from being generated.
    skip_ids = self.ids_from_chars(['', '[UNK]'])[:, None]
    sparse_mask = tf.SparseTensor(
        # Put a -inf at each bad index.
        values=[-float('inf')]*len(skip_ids),
        indices=skip_ids,
        # Match the shape to the vocabulary
        dense_shape=[len(ids_from_chars.get_vocabulary())])
    self.prediction_mask = tf.sparse.to_dense(sparse_mask)

  @tf.function
  def generate_one_step(self, inputs, states=None):
    # Convert strings to token IDs.
    input_chars = tf.strings.unicode_split(inputs, 'UTF-8')
    input_ids = self.ids_from_chars(input_chars).to_tensor()

    # Run the model.
    # predicted_logits.shape is [batch, char, next_char_logits]
    predicted_logits, states = self.model(inputs=input_ids, states=states,
                                          return_state=True)
    # Only use the last prediction.
    predicted_logits = predicted_logits[:, -1, :]
    predicted_logits = predicted_logits/self.temperature
    # Apply the prediction mask: prevent "" or "[UNK]" from being generated.
    predicted_logits = predicted_logits + self.prediction_mask

    # Sample the output logits to generate token IDs.
    predicted_ids = tf.random.categorical(predicted_logits, num_samples=1)
    predicted_ids = tf.squeeze(predicted_ids, axis=-1)

    # Convert from token ids to characters
    predicted_chars = self.chars_from_ids(predicted_ids)

    # Return the characters and model state.
    return predicted_chars, states
one_step_model = OneStep(model, chars_from_ids, ids_from_chars)
Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, makes paragraphs, and imitates a Shakespeare-like vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
start = time.time()
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]

for n in range(1000):
  next_char, states = one_step_model.generate_one_step(next_char, states=states)
  result.append(next_char)

result = tf.strings.join(result)
end = time.time()
print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80)
print('\nRun time:', end - start)
ROMEO: The poxio's blows, no: bitter and call'd up Angelo. ELBOW: With all my heart the other through what they would were, Or else a knee each parks of rage, To undertake the truth of person will command. Now, by the imjured both yield three vantage. Now, by my state I should accuse me, and I will make a doum with the best on the deed. 'Tis numbed glad I break no other from her death: in arms Between'd their hearts, and fled to sleep no greater than enemy Where I have subjects for your death: Therefore they fall in substitute black,-- As I, Jove large and true mine adversaries: Make full obsenve for a quarrel or us, Elves a quarter old. Hold, then farewell. BENVOLIO: In faith, be most agreed, and want that hang'd our guest? KING RICHARD II: Why, Buckingham, be there be with you. LUCIO: 'Tis better ord, I follow it. CAMILLO: Swear you, poor soul, O valiant point on eightee out goodness, threatening stock Against black-parting 'love and gainsay the extremest warrior, nays suffer as thy n ________________________________________________________________________________ Run time: 2.4890763759613037
The easiest thing you can do to improve the results is to train it for longer (try EPOCHS = 30).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions.
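The temperature applied above (predicted_logits/self.temperature) simply rescales the logits before sampling. A small standalone sketch (plain NumPy, not from the tutorial) shows the effect:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
for temperature in (0.5, 1.0, 2.0):
    p = softmax(logits / temperature)
    print(temperature, np.round(p, 3))
# Lower temperature sharpens the distribution (less random samples);
# higher temperature flattens it (more random samples).
```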
If you want the model to generate text faster, the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above.
start = time.time()
states = None
next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:'])
result = [next_char]

for n in range(1000):
  next_char, states = one_step_model.generate_one_step(next_char, states=states)
  result.append(next_char)

result = tf.strings.join(result)
end = time.time()
print(result, '\n\n' + '_'*80)
print('\nRun time:', end - start)
tf.Tensor( [b"ROMEO:\nGood even, my lord.\n\nGoNZALO:\nI go. Now now be gold! I mean again.\n\nROMEO:\nI warrant thee, tell me, be sure the duke and all these walls\nRichard our cadismed; prick it not.\n\nBUSHBALYHAM:\nRomeo, the next way homewible.\n\nKING EDWARD IV:\nWilt thou my son, my master is so, nor dangerous\nBy help to noble peerish; whom, and 'tis be.\n\nPOMPEY:\nWhy, but a word moreof? what do yet that seek of his prince:\nMethinks a smell my part there: where are you?\nFor he is all and make you well.\n\nLUCIO:\n\nISABELLA:\nO, weed my banners be so bold to give him o'erwook\nWhich shows me for himself; but we want them,\nBut one seven years so cries, and thus with us\nStill 'tis cartly as't, we will consumed\nWhile Hereford, and my father and thy gentle senseboly,\nWhich never lived nor rage debase more bellies 'by;\nBut I'll Clarence and the lord. How far infirmity!\nWhat, will you gave her? Phile tyree, grieve now?\nHere, merry mistracious Warwick's daughter,\nO wonder is it like to a chair with sadless indeed.\nYour fresh " b"ROMEO:\nPardon is in my tent these two courses of the king.\n\nNORTHUMBERLAND:\nThere on his well-said i' the world; and brick-sprain'd up\nThy great Bolingbroke days be ruled by me.\n\nLord Mayor:\nWhat good mad hat the lords of Clarence! Well a wench!\nShould a lord wader, by Adibe to't.\n\nMENENIUS:\nWell, I beseech you,\nBy charing home and you my sir were not to know.\n\nNurse:\nYour will should knock you were not weeping\nTill he hath prosper best of all prace;\nThe proud issue with her soul flish wounds and realm; in\na leaden all the very penitence, if the\npeofle sworn, I'll crave the woroneth of my true dear kinds.\n\nKING EDWARD IV:\nThen love, as it was, but most proofing and smiles,\n'Think of wine and called by Bland,\nBut such a year and aguments. 
I'll to my true opinion!'\nAnd thinkest serve awhile to humbly brother?\n\nKING EDWARD IV:\nSent thou draw me.\nThere is no virtue, or that hear that\nto tell us our more. But Your dun ratis and Duke of Clarence weeds.\nI will tell you would not proud-heads to stan" b"ROMEO:\nYour reasons are my fairly queen?\nBetter on that son should fled to seet in heaten;\nI will prove a sweeter blood in his under well;\nTo-morrow must I tell the counterfeitns:\nAgainst the early tide that we will weep,\nBut raimould by hundry fortune and flowers.\nSups, I would think up:\nMy life drowned, untimely brought\nWhen the abound son of peeresp, nob life,\nAs is the linent bunky way to sweat: alas!\nWhy should have I with our hands, now death.\nHere in your mother?\n\nSerdnever:\nBe it so, for the lurs; whose love I had guest\nWas mutind ours: sweet Kate to meet it.\n\nGLOUCESTER:\nHarp! mark me.\nI am about me: our generance take then\nyou are like to: no man but beasts he did she be\nEmburatement, look into a hell met,\nFoe, like an your quit my disposition,\nAgainst them, fought with you.\nAh, what say'st thou? Camillo, pace me, repeat:\nMany guilty clouds and not of Mine; and it\nAre come another person. My queen's husband and my soveriness\nTo bring his daughter to my sink in mide of death\nI'ed so" b"ROMEO:\nNay, if thy wits have all for gravenced, so husband!\nAh, I by leave, and lugh his pleasures to the fair\nShould, suck any coward as the sun.\n\nAUTOLYCUS:\nHence!\n\nFLORIZEL:\nShould I.\nGo to, a bride that he against the deed.\n\nHMORE ETWARD IV:\nWell, Clifford, tell me, how believe hed,\nAnd bid her fail that he did seem to bed,\nWho look'd for Rome is next dewivers.\n\nANHERIO:\nIf so, or at a bowling bovel? 
Menenius!\n\nAll:\nCall that greater say he till then be spent myself:\nbut I shall go see the lonal bub tybalt from the cause,\nAnd back'd, as when we should hear me in my cold\nPost-take of her own sword, you would susprome them,\nwife, too weak deaf, and the king post to the senate:\nAcquaint her life, against what shot fares,\nNot that in the iced for their glift enemies,\nyet, if I would entreat it folly, my rights are all\ndearly against Their and all aforements:\n'Twas more than meant to say\nthat I am going. Metum's monstrous town;\nAnd not means yet; so weep; for I intend joy,\nand when I wand to " b"ROMEO:\nSyop wherein, the Fater, there; and with the land\nIn soldiers ributances, was so gentleman:\nher heavens have fallen out with all the world were butcher'd.\nCome, come, King Edward to his soul of mine.\n\nKING RICHARD III:\nSandly, were there will not be distinguingly.\n\nDUKE VINCENTIO:\nI know with me: in this be steaded in the king.\n\nDUCHESS OF YORK:\nWhat he's with sweat? is it a lawd we stand alone?\n\nPedant:\nHencefactors, traitors! From Ploucester's death.\nMeantime the hope to growl she were they all,\nTrue straight from the world and all the ship special sport.\n\nBIONDELLO:\nWhy, hear you, while's a spain were as tubority?\n\nBUSHY:\nDenolaculate, a vessed gunecome a moturer?\n\nPETRUCHIO:\nWhy, I thank your wanton alteration: my manory have\nlabourer out;\nEven love, and, suppine, a vessel, ever stood doth limb.\n\nLORD ROSS:\nThe senate and your crothes slow, then in praise.\n\nISABELLA:\nI warrant him, and keep thy name; And now I mean our father's sake.\n\nPETRUCHIO:\nA man hath besied long-'janion's wi"], shape=(5,), dtype=string) ________________________________________________________________________________ Run time: 2.3512351512908936
Export the generator
This single-step model can easily be saved and restored, allowing you to use it anywhere a tf.saved_model is accepted.
tf.saved_model.save(one_step_model, 'one_step')
one_step_reloaded = tf.saved_model.load('one_step')
WARNING:tensorflow:Skipping full serialization of Keras layer <__main__.OneStep object at 0x7f926497ff28>, because it is not built..
INFO:tensorflow:Assets written to: one_step/assets
INFO:tensorflow:Assets written to: one_step/assets
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]

for n in range(100):
  next_char, states = one_step_reloaded.generate_one_step(next_char, states=states)
  result.append(next_char)

print(tf.strings.join(result)[0].numpy().decode("utf-8"))

ROMEO:

HERMIONE:
Why, what man? and say you will.

WARWICK:
It will be safer than thee, and next be debt:
Advanced: Customized Training
The above training procedure is simple, but does not give you much control. It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.
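Concretely, teacher forcing means the input at every step is the ground-truth previous character (the training sequence shifted by one), never the model's own previous prediction:

```python
# Input/label pairs for teacher forcing: each input character is paired
# with the true next character, regardless of what the model predicted.
text = "Shakespeare"
inputs, labels = text[:-1], text[1:]
print(list(zip(inputs, labels))[:3])  # [('S', 'h'), ('h', 'a'), ('a', 'k')]
```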
So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement curriculum learning to help stabilize the model's open-loop output.
The most important part of a custom training loop is the train step function.
Use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide.
The basic procedure is:
- Execute the model and calculate the loss under a tf.GradientTape.
- Calculate the updates and apply them to the model using the optimizer.
class CustomTraining(MyModel):
  @tf.function
  def train_step(self, inputs):
    inputs, labels = inputs
    with tf.GradientTape() as tape:
      predictions = self(inputs, training=True)
      loss = self.loss(labels, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    self.optimizer.apply_gradients(zip(grads, model.trainable_variables))

    return {'loss': loss}
The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step and still use Keras' Model.compile and Model.fit methods.
model = CustomTraining(
    vocab_size=len(ids_from_chars.get_vocabulary()),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units)
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(dataset, epochs=1)
172/172 [==============================] - 16s 81ms/step - loss: 2.7281
<tensorflow.python.keras.callbacks.History at 0x7f918c6fb400>
Or if you need more control, you can write your own complete custom training loop:
EPOCHS = 10

mean = tf.metrics.Mean()

for epoch in range(EPOCHS):
    start = time.time()

    mean.reset_states()
    for (batch_n, (inp, target)) in enumerate(dataset):
        logs = model.train_step([inp, target])
        mean.update_state(logs['loss'])

        if batch_n % 50 == 0:
            template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}"
            print(template)

    # saving (checkpoint) the model every 5 epochs
    if (epoch + 1) % 5 == 0:
        model.save_weights(checkpoint_prefix.format(epoch=epoch))

    print()
    print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}')
    print(f'Time taken for 1 epoch {time.time() - start:.2f} sec')
    print("_"*80)

model.save_weights(checkpoint_prefix.format(epoch=epoch))
Epoch 1 Batch 0 Loss 2.1604
Epoch 1 Batch 50 Loss 2.0819
Epoch 1 Batch 100 Loss 2.0088
Epoch 1 Batch 150 Loss 1.9042

Epoch 1 Loss: 2.0039
Time taken for 1 epoch 15.29 sec
________________________________________________________________________________
Epoch 2 Batch 0 Loss 1.8115
Epoch 2 Batch 50 Loss 1.7712
Epoch 2 Batch 100 Loss 1.6842
Epoch 2 Batch 150 Loss 1.6678

Epoch 2 Loss: 1.7288
Time taken for 1 epoch 14.73 sec
________________________________________________________________________________
Epoch 3 Batch 0 Loss 1.5976
Epoch 3 Batch 50 Loss 1.6310
Epoch 3 Batch 100 Loss 1.5008
Epoch 3 Batch 150 Loss 1.5508

Epoch 3 Loss: 1.5654
Time taken for 1 epoch 14.79 sec
________________________________________________________________________________
Epoch 4 Batch 0 Loss 1.5221
Epoch 4 Batch 50 Loss 1.4466
Epoch 4 Batch 100 Loss 1.4530
Epoch 4 Batch 150 Loss 1.4677

Epoch 4 Loss: 1.4635
Time taken for 1 epoch 14.78 sec
________________________________________________________________________________
Epoch 5 Batch 0 Loss 1.4302
Epoch 5 Batch 50 Loss 1.4034
Epoch 5 Batch 100 Loss 1.4557
Epoch 5 Batch 150 Loss 1.4137

Epoch 5 Loss: 1.3943
Time taken for 1 epoch 15.19 sec
________________________________________________________________________________
Epoch 6 Batch 0 Loss 1.3380
Epoch 6 Batch 50 Loss 1.3404
Epoch 6 Batch 100 Loss 1.3174
Epoch 6 Batch 150 Loss 1.3430

Epoch 6 Loss: 1.3400
Time taken for 1 epoch 15.05 sec
________________________________________________________________________________
Epoch 7 Batch 0 Loss 1.3027
Epoch 7 Batch 50 Loss 1.3185
Epoch 7 Batch 100 Loss 1.2899
Epoch 7 Batch 150 Loss 1.2744

Epoch 7 Loss: 1.2955
Time taken for 1 epoch 15.01 sec
________________________________________________________________________________
Epoch 8 Batch 0 Loss 1.1957
Epoch 8 Batch 50 Loss 1.2315
Epoch 8 Batch 100 Loss 1.2380
Epoch 8 Batch 150 Loss 1.2457

Epoch 8 Loss: 1.2549
Time taken for 1 epoch 15.05 sec
________________________________________________________________________________
Epoch 9 Batch 0 Loss 1.2290
Epoch 9 Batch 50 Loss 1.2181
Epoch 9 Batch 100 Loss 1.1872
Epoch 9 Batch 150 Loss 1.2097

Epoch 9 Loss: 1.2160
Time taken for 1 epoch 14.81 sec
________________________________________________________________________________
Epoch 10 Batch 0 Loss 1.1551
Epoch 10 Batch 50 Loss 1.1808
Epoch 10 Batch 100 Loss 1.1605
Epoch 10 Batch 150 Loss 1.1976

Epoch 10 Loss: 1.1772
Time taken for 1 epoch 15.23 sec
________________________________________________________________________________
kandi has reviewed speechbrain and discovered the below as its top functions. This is intended to give you an instant insight into speechbrain implemented functionality, and help decide if they suit your requirements.
Various pretrained models nicely integrated with HuggingFace in our official organization account. These models come with an interface to easily run inference, facilitating integration. If a HuggingFace model isn't available, we usually provide at least a Google Drive folder containing all the corresponding experimental results.
The Brain class, a fully-customizable tool for managing training and evaluation loops over data. The annoying details of training loops are handled for you while retaining complete flexibility to override any part of the process when needed.
A YAML-based hyperparameter specification language that describes all types of hyperparameters, from individual numbers (e.g. learning rate) to complete objects (e.g. custom models). This dramatically simplifies recipe code by distilling basic algorithmic components.
Multi-GPU training and inference with PyTorch Data-Parallel or Distributed Data-Parallel.
Mixed-precision for faster training.
A transparent and entirely customizable data input and output pipeline. SpeechBrain follows the PyTorch data loader and dataset style and enables users to customize the i/o pipelines (e.g., adding on-the-fly downsampling, BPE tokenization, sorting, threshold ...).
A nice integration of sharded data with WebDataset optimized for very large datasets on Nested File Systems (NFS).
Install via PyPI
pip install speechbrain
QUESTION
Using RNN Trained Model without pytorch installed
Asked 2022-Feb-28 at 20:17
I have trained an RNN model with pytorch. I need to use the model for prediction in an environment where I'm unable to install pytorch because of some strange dependency issue with glibc. However, I can install numpy and scipy and other libraries. So, I want to use the trained model, with the network definition, without pytorch.
I have the weights of the model as I save the model with its state dict and weights in the standard way, but I can also save it using just json/pickle files or similar.
I also have the network definition, which depends on pytorch in a number of ways. This is my RNN network definition.

    )
        self.hidden = self.init_hidden()

    def forward(self, feature_list):
        feature_list = torch.tensor(feature_list)

        if self.matching_in_out:
            lstm_out, _ = self.lstm(feature_list.view(len(feature_list), 1, -1))
            output_space = self.hidden2out(lstm_out.view(len(feature_list), -1))
            output_scores = torch.sigmoid(output_space)  # we'll need to check if we need this sigmoid
            return output_scores  # output_scores
        else:
            for i in range(len(feature_list)):
                cur_ft_tensor = feature_list[i]  # .view([1,1,self.input_size])
                cur_ft_tensor = cur_ft_tensor.view([1, 1, self.input_size])
                lstm_out, self.hidden = self.lstm(cur_ft_tensor, self.hidden)
                outs = self.hidden2out(lstm_out)
            return outs

    def init_hidden(self):
        # return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
        return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device),
                torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device))
I am aware of this question, but I'm willing to go as low level as possible. I can work with numpy array instead of tensors, and reshape instead of view, and I don't need a device setting.
Based on the class definition above, what I can see here is that I only need the following components from torch to get an output from the forward function: nn.LSTM, nn.Linear, and torch.sigmoid.
I think I can easily implement the sigmoid function using numpy. However, can I have some implementation for nn.LSTM and nn.Linear using something not involving pytorch? Also, how will I load the weights from the state dict into the new class?
So, the question is, how can I "translate" this RNN definition into a class that doesn't need pytorch, and how to use the state dict weights for it? Alternatively, is there a "light" version of pytorch, that I can use just to run the model and yield a result?
I think it might be useful to include the numpy/scipy equivalent for both nn.LSTM and nn.linear. It would help us compare the numpy output to torch output for the same code, and give us some modular code/functions to use. Specifically, a numpy equivalent for the following would be great:
rnn = nn.LSTM(10, 20, 2)
input = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
output, (hn, cn) = rnn(input, (h0, c0))
and also for linear:
m = nn.Linear(20, 30)
input = torch.randn(128, 20)
output = m(input)
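For reference, here is one minimal NumPy sketch of those two forwards. It assumes a single-layer, unidirectional LSTM and PyTorch's stacked gate ordering (input, forget, cell, output) in weight_ih/weight_hh; the helper names are mine, and in practice the weights would come from the saved state dict:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def linear(x, W, b):
    # nn.Linear equivalent: y = x @ W.T + b, where W has shape (out, in)
    return x @ W.T + b

def lstm_step(x, h, c, W_ih, W_hh, b_ih, b_hh):
    # One time step of a single-layer LSTM. PyTorch stacks the four
    # gates in the order i, f, g, o along the first weight dimension.
    gates = x @ W_ih.T + b_ih + h @ W_hh.T + b_hh
    i, f, g, o = np.split(gates, 4, axis=-1)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

# Shapes mirror torch: W_ih is (4*hidden, input), W_hh is (4*hidden, hidden).
rng = np.random.default_rng(0)
input_size, hidden_size, batch = 10, 20, 3
W_ih = rng.standard_normal((4 * hidden_size, input_size))
W_hh = rng.standard_normal((4 * hidden_size, hidden_size))
b_ih = rng.standard_normal(4 * hidden_size)
b_hh = rng.standard_normal(4 * hidden_size)

x = rng.standard_normal((batch, input_size))
h = np.zeros((batch, hidden_size))
c = np.zeros((batch, hidden_size))
h, c = lstm_step(x, h, c, W_ih, W_hh, b_ih, b_hh)
print(h.shape, c.shape)  # (3, 20) (3, 20)
```

To check this against torch, load weight_ih_l0, weight_hh_l0, bias_ih_l0 and bias_hh_l0 from the state dict and loop lstm_step over the time axis; multi-layer or bidirectional LSTMs would need extra bookkeeping.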
ANSWER
Answered 2022-Feb-17 at 10:47
You should try to export the model using torch.onnx. The page gives you an example that you can start with.
An alternative is to use TorchScript, but that requires torch libraries.
Both of these can be run without python. You can load torchscript in a C++ application
ONNX is much more portable and you can use it in languages such as C#, Java, or JavaScript (even in the browser)
Just modifying your example a little to go over the errors I found
Notice that via tracing, any if/elif/else, for, while will be unrolled.

    )
        self.hidden = self.init_hidden()

    def forward(self, x, h0, c0):
        lstm_out, (hidden_a, hidden_b) = self.lstm(x, (h0, c0))
        outs = self.hidden2out(lstm_out)
        return outs, (hidden_a, hidden_b)

    def init_hidden(self):
        # return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
        return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach(),
                torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach())

# convert the arguments passed during onnx.export call
class MWrapper(nn.Module):
    def __init__(self, model):
        super(MWrapper, self).__init__()
        self.model = model

    def forward(self, kwargs):
        return self.model(**kwargs)
Run an example
rnn = RNN(10, 10, 10, 3)
X = torch.randn(3, 1, 10)
h0, c0 = rnn.init_hidden()
print(rnn(X, h0, c0)[0])
Use the same input to trace the model and export an onnx file
torch.onnx.export(MWrapper(rnn), {'x': X, 'h0': h0, 'c0': c0}, 'rnn.onnx',
                  dynamic_axes={'x': {1: 'N'},
                                'c0': {1: 'N'},
                                'h0': {1: 'N'}},
                  input_names=['x', 'h0', 'c0'],
                  output_names=['y', 'hn', 'cn'])
Notice that you can use symbolic values for the dimensions of some axes of some inputs. Unspecified dimensions will be fixed with the values from the traced inputs. By default LSTM uses dimension 1 as batch.
Next we load the ONNX model and pass the same inputs
import onnxruntime
ort_model = onnxruntime.InferenceSession('rnn.onnx')
print(ort_model.run(['y'], {'x': X.numpy(), 'c0': c0.numpy(), 'h0': h0.numpy()}))
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
In the first GAN tutorial we covered the fundamentals of training GANs. In this post we will continue of the same data but implement some improved techniques from this paper.
Main Issue:
When training GANs, our objective is to find the Nash equilibrium of a two-player minimax game. The Nash equilibrium can be intuitively defined as both players wanting to continue their strategies regardless of what the other player is doing. For GANs, the Nash equilibrium is when the cost for D is at a minimum with respect to \theta_D and the cost for G is at a minimum with respect to \theta_G.
Traditionally we would use gradient descent, but we should note that J_D = f(\theta_D, \theta_G) and J_G = f(\theta_D, \theta_G). Using gradient descent to lower J_D can increase J_G, and vice versa. This doesn't help convergence.
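A standard toy illustration of this (not specific to GANs): in the bilinear game f(x, y) = x*y, where player x descends f and player y ascends it, simultaneous gradient steps do not settle at the equilibrium (0, 0); they orbit it and spiral outward:

```python
import numpy as np

# Simultaneous gradient descent/ascent on f(x, y) = x * y.
# The unique equilibrium is (0, 0), but the iterates circle it
# with growing radius instead of converging.
x, y, lr = 1.0, 1.0, 0.1
start_radius = np.hypot(x, y)
for _ in range(100):
    gx, gy = y, x                # df/dx = y, df/dy = x
    x, y = x - lr * gx, y + lr * gy
print(np.hypot(x, y) > start_radius)  # True: we moved away from equilibrium
```

Each step multiplies the distance from the origin by sqrt(1 + lr^2), so lowering the learning rate slows the spiral but doesn't fix it.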
Feature Matching
The first improvement technique is feature matching. The objective of G is to minimize log(1 - D(G(z))), which is the same as maximizing the output of D, log(D(G(z))). Instead of maximizing directly on the output of D, we should maximize on the activation outputs from an intermediate layer in D. Think of a CNN: the intermediate conv layers are the feature detectors, whereas the final FC layers are just used for classification. So, likewise, if we train on D's intermediate-layer outputs, we are essentially training G to really learn from the discriminative features instead of just the output.
So training will now involve minimizing the difference between D's intermediate-layer activations on the real data and on G's samples. This technique works very well empirically. With f(.) denoting the intermediate-layer activations of D, the new objective for G is to minimize || f(X) - f(G(z)) ||_2 (the paper phrases this as the squared L2 distance between the expected activations).
The tensorflow implementation is very simple. Just return the activation outputs from one of the intermediate layers and use that to redesign the new objective for G. Complete code is under the repo in feature_matching.py
First, we need to return the activation outputs from an intermediate layer.
def mlp(inputs): """ Pass the inputs through an MLP. D is an MLP that gives us P(inputs) is from training data. G is an MLP that converts z to X'. """ fc1 = tf.nn.tanh(linear(inputs, FLAGS.num_hidden_units, scope='fc1')) fc2 = tf.nn.tanh(linear(fc1, FLAGS.num_hidden_units, scope='fc2')) fc3 = tf.nn.tanh(linear(fc2, 1, scope='fc3')) return fc3, fc2
Then we need to change the objective of G to the new one for feature matching.
self.cost_G = tf.sqrt(tf.reduce_sum(tf.pow(self.fc2_D_X-self.fc2_D_X_prime, 2)))
And also keep in mind that we now need to feed in batch_X when stepping G as well. With the normal GAN (without feature matching), G only cares about D(G(z)) for its objective, but now it needs to factor in D(G(z)) and D(X), since it's trying to reduce the difference between the intermediate-layer activation outputs from both. So the new step function for G looks like this:
def step_G(self, sess, batch_z, batch_X): input_feed = {self.z: batch_z, self.X: batch_X} output_feed = [self.cost_G, self.optimizer_G] outputs = sess.run(output_feed, input_feed) return outputs[0], outputs[1]
The result is basically an almost perfect decision boundary for D at 0.5. Compare this with the noisy decision boundary from the GAN implementation without feature matching. We are able to better learn the discriminative features for G by focusing on the intermediate layers rather than the binary output from D. Here is the transformation of our distributions with feature matching:
Minibatch Discrimination
Issue: I'm going to be very verbose here to clearly define the issue we are trying to solve, because it can be a bit complicated. Let's think about learning a normal distribution. You have certain values of X that will produce high probability (pdf) in the normal distribution. First we pretrain D to match our p_data, and now for those same values of X, D produces a high probability. Now it's time to train the GAN. We feed random noise z into G, which transforms it into X'. If these X' are far away from the X that result in high P from D, then these X' will generate low P. If they are very similar to the X that result in high P from D, then these X' will generate a high P. G wants D(G(z)) to be high, and this happens when X' are similar to the high-P-causing X. So as G trains, it will map more and more of its random noise z to X' that are very similar to the high-P-producing X. This is problematic because we are essentially causing G to produce X that converge to one max-P-producing point, which is certainly not learning the whole p_data distribution. This problem is called the collapse of the generator.
So why does this happen? It's because we are training D one point at a time. It receives X or X', sees just that one point, and has to determine the probability P that the point is from the training set. When it sees the point it wants to see, it generates a high P. This is the crux of the issue leading to the collapse of the generator. Note: we still get the probability for each sample in the batch one at a time, but calculating that one probability now involves all the samples in the batch.
The solution is to factor in the entire batch. We will take our input, multiply it by a trainable tensor, compute the absolute (L1) distance between this sample and all other samples in the batch for each row, apply a negative exponential, and then sum the resulting values for each row to get our minibatch discrimination values. We concat these to the normal outputs from the intermediate layer. Note: the dimension of the output will now change.
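A NumPy sketch of that computation (the shapes, sizes, and function name here are illustrative; in the real model T is a trainable variable learned with the rest of D):

```python
import numpy as np

def minibatch_discrimination(h, T, num_kernels, kernel_dim):
    # h: (N, A) intermediate-layer activations for the whole batch
    # T: (A, num_kernels * kernel_dim) trainable tensor
    M = (h @ T).reshape(-1, num_kernels, kernel_dim)              # (N, B, C)
    # L1 distance between every pair of samples, per kernel row
    l1 = np.abs(M[:, None, :, :] - M[None, :, :, :]).sum(axis=3)  # (N, N, B)
    # Negative exponential, then sum over the batch dimension
    o = np.exp(-l1).sum(axis=1)                                   # (N, B)
    # Concat the minibatch features onto the original activations
    return np.concatenate([h, o], axis=1)                         # (N, A + B)

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))       # batch of 4, 8 features each
T = rng.standard_normal((8, 5 * 3))   # 5 kernels of dimension 3
out = minibatch_discrimination(h, T, num_kernels=5, kernel_dim=3)
print(out.shape)  # (4, 13)
```

Because each row of o depends on every other sample in the batch, D can now penalize a collapsing generator: if all of G's samples land on the same point, their pairwise L1 distances go to zero and the o features become unusually large and easy to detect.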
Why does this work? Since we are using all the other samples in the batch to influence D’s prediction, we are effectively avoiding the collapse of the generator with this new side information.
Results: minibatch discrimination works really well and quickly produces visually appealing results, but empirically, feature matching results in better models (esp. for semi-supervised classification tasks).
Conclusion
There are a few more techniques which were proven to be empirically successful in the paper but these two techniques are by far the ones I found to be most impactful. I may upload implementations for a few of them later (esp. virtual batch normalization). I will also be using many of these techniques in the DCGAN implementation.
Code:
Github Repo (Updating all repos, will be back up soon!)
11 thoughts on “Improved Techniques for Training GANs”
The Github Repo is rot. How soon will it be accessible?
LikeLiked by 1 person
Hi Howard, I believe the author is working on pytorch repos for us as it will be clearer to use for a lot of these examples. until then I have still been able to follow the post and most of the code I need is within the post itself. I also have been using code tutorials online for GANs etc.
LikeLiked by 1 person
Sorry about that, I discontinued the tensorflow repo for now because of external restrictions. I will be uploading PyTorch repos and also videos for a lot of these concepts soon (late summer).
Should it be without the sqrt root the feature matching equation?
L2 norm of x is ||x||_{2} = sqrt{sum(x_{i}^2)}
but is written ||x||_{2}^2 = sum(x_{i}^2)
LikeLiked by 1 person
That is what I thought, too. | https://theneuralperspective.com/2016/11/12/improved-techniques-for-training-gans/ | CC-MAIN-2018-22 | refinedweb | 1,285 | 63.39 |
I did it like this and it somewhat works, but I have a problem with middle names, such as Barack Hussein Obama. How do I make it so it ignores the middle name and just gives me the last name???
#include <iostream> #include <iomanip> #include <string> using namespace std; int main() { string name; string first_name, last_name; char response; do{ cout << "Enter your full name: "; cin >> first_name; cin >> last_name; // get their full name, including spaces. // display last name cout << "\t" <<first_name << " "<< last_name << ", your last name is " << last_name << endl; cout << "CONTINUE(y/n)? "; cin >> response; cin.ignore(50, '\n'); } while(response == 'Y' || response=='y') ; system ("pause"); return 0; } | https://www.daniweb.com/programming/software-development/threads/492055/finding-last-name | CC-MAIN-2018-43 | refinedweb | 106 | 71.44 |
I've been writing a piece of code that will use a brute force approach to find 50 exponential equations that fit 5 points (x,y) that i've entered. The code is fine and works without a glitch.. The problem is when i try to make it keep a record of the 50 best equations. This sort routine is the one i've been using.. I've put it in its own test class just to make easier to follow. The current problem is it ends up displaying something completely unwarranted and the array ends up with only two different values.
public class sortTest{ public static void main(String [] args) { int [][] best = new int[50][3]; //50 best equations. the three placeholders are b,c,euclid in the equation y(t) = 10*b^(t*c). euclid is the euclidean distance between the equation and the points i've given it. The smaller it is, the better the equation. int tempcomp [] = new int[3]; //to temporarily store new equations boolean sorted; for(int index = 0; index<best.length; index++){ best[index][0] = 100+index; best[index][1] = 100+index; best[index][2]= 100+index; } for(int c = 0; c<best.length; c++) { tempcomp[0] = 100-c; tempcomp[1] = 100-c; tempcomp[2]= 100-c; if(tempcomp[2]<best[49][2]) best[49] = tempcomp; sorted = false; while (!sorted) { sorted = true; // assume this is last pass over array for (int index=0; index<best.length-1; index++) { if (best[index][2] > best[index+1][2]) { // exchange elements tempcomp = best[index]; best[index] = best[index+1]; best[index+1] = tempcomp; sorted = false; // after an exchange, must look again } } } } } }
Thanks for lookin!
Khodeir | https://www.daniweb.com/programming/software-development/threads/330972/defective-sort-routine-for-two-dimensional-array | CC-MAIN-2020-16 | refinedweb | 279 | 65.01 |
U.N. To Govern Internet? 1197
Falmarian writes "Apparently the rest of the world isn't happy about the US franchise on internet governance. A news.com article discusses the possibility that the U.N. will make a bid for control of such governing functions as assigning TLDs and IPs." From the article: .'"
Yuk (Score:4, Insightful)
Re:Yuk (Score:5, Funny)
Re:Yuk (Score:5, Interesting)
At that point, I start lobbying Slashdot to bring alternic back up to snuff and in use. Screw that.
I *already* pay a tax for my domain name. It's called a domain name registration fee. The money goes to support those root servers (and to the pockets of the registrars, but hey).
Re:Yuk (Score:3, Insightful)
Or wait, are you wanting to talk about high profile events that occurred recently, ignoring all of that? If so, bring it on.
* Weapons of mass destruction inspections? What do you know, they were right!
* Oil For Food: Widely distorted in the
Re:Yuk (Score:3, Insightful)
Re:Yuk (Score:5, Insightful)
They'll usually tell you that they in general blame unfair trade practices. For example, even with their low labor costs, African farms often have a hard time competing with subsidized US and European ag exports. First world nations do a lot of pretty nasty stuff as far as import regulations go (for example, declaring the Vietnamese catfish as not being a catfish, to subsidize the US catfish industry)
Not that many of their problems aren't their own fault, mind you.
Ask the people of Darfur how the UN has failed to even try to protect them
Because they *weren't authorized to intervene by the Security Council*. What, are you picturing some huge security council debate over whether cmm.com is typosquatting on cnn.com? We're not talking about troop deployments, we're talking about the internet.
Re:Yuk (Score:5, Funny)
But not to their faces.
Re:Yuk (Score:3, Interesting)
Wow! That's huge news! I had no idea that all African farmers were a single "African Economics Expert"! My army of cloner geeks will want to hear of this immediately.
had been raping the women
This has already been discussed in earlier comments. Of over 10,000 troops, everyone even remotely involved in the allegations was sent home; grand total, 77. And this is one of 16 current UN operations worldwide. Meanwhile, the troops fighting in the Congo have killed and raped sev
they ate their milk producing animals (Score:5, Insightful)
I thought I'd seen it all on slashdot, but your summation of hundreds of years of colonial exploitation and invasions, arbitrarily defined states (often encompassing many ethnic groups) which war with each other over resources, corrupt government, civil war and finally skewed trade laws which make it impossible to climb out of poverty as
'they ate their milk producing animals'
really does take my breath away.
If the UN know what they're doing, they'll surely be rushing lots of well informed teenage geniuses like yourself over to sort it out right now.
Re:Yuk (Score:5, Insightful)
Ask the starving people in Africa how well the UN has managed things. Ask the people of Darfur how the UN has failed to even try to protect them from genocide. But given that the UN lacks any real enforcement powers, I for one am not too worried about them trying to tax the internet.
My dad worked in Africa "de-mining". Why not ask Africans whether they'd prefer life without the UN. My experience was many Africans (and this wasn't your Cairo/Jo'burg Africans, this was twenty-years-of-post-colonial-conflict-sponsored-by-Washington-Moscow-London-Paris-Havana-Beijing Africa, by the way) respected the limited work the UN was able to do in extremely difficult circumstances.
The UN may be shite, but it's better than nothing. And it's a lot better than the League of Nations, which in turn was a lot better than... bugger all international cooperation.
And regarding Darfur, I've been following this since long before it hit the mainstream media. The UN's been there a long time, dealing with entrenched resistence from the (sovereign) government of Sudan, and from neighbouring states. It's not always possible - or even desirable - to just move into and occupy a country to effect change.
Israel (Score:5, Interesting)
You don't get off so lightly. What about the Carter (and later Reagan) Administration's "Join the Jihad" campaign aimed at recruiting militant Islamists and getting them together in Afghanistan (with training from the US) to fight the Soviets?
See, it is all the fault of two presidents from different political parties... At least as far as Al Qaeda and any collegiate international terrorism organization goes.
And Regarding Israel--- The history of the founding of Israel between WWI and 1949 is quite interesting and full of material that will make almost anybody uncomfortable regardless of political disposition. However, it was all started by the British who claim to have wanted to reward those Jews who fought for Britain in WWI by trying to promote British Palestine as a place where they could go to as a homeland provided that the existing Palestinians were not displaced (read the Balfour Declaration). The time between the end of WWI and 1949 was full of terrorism on the part of the Zionists and Arabs (continuing today often on both sides despite efforts of moderates on either side). And, most interestingly, the attempted collaboration between the ELHI brotherhod (in part lead by Yitzak Shameer) and Hitler (one might add that the ELHI brotherhood had no shortage of good things to say about the Nazis). As punishment for his efforts and sympathies, Shameer was later elected Prime Minister which should tell you a lot about Israeli politics.
Last time I checked... (Score:3, Interesting)
The UN is inefficient, but bad stuff tends not to come out of the UN because too many people have veto power. As opposed to here.
Such attacks are not about the UN (Score:4, Interesting)
From a year ago for example, a large number of leading indicators showed progress in Iraq's infrastructure. Compare that to the Congo or Haiti in which the UN is running peacekeeping operations.
Pardon me for being Mr. Obvious here but there is a big difference from running a peacekeeping operation and trying to rebuild a country after largely destroying it (first with sanctions, then with bombs).
"Men from roughly 50 different countries make up the U.N. forces in Congo, and the United Nations does not conduct background checks. Furthermore, U.N. troops are exempt from prosecution in Congo."
Can you say "International Criminal Court?"
While the US has made mistakes on the ground dealing with Iraqi and Afghani Prisoners and civilians, at least widespread allegations of sexual exploitation and abuse of women, boys and girls havn't been happening like they are happening in the Congo.
As others have pointed out, this never happend in Vietnam or anything, right?
Also a lot of this type of activity in the Congo has been happening between the warring factions. Sorry, but blaming the UN for their actions is like blaming the US when insurgents attack an Iraqi police station.
"Didier Bourguet, a U.N. official from France, is pictured here in an image found on his hard drive, which was obtained by ABC News. Also on the hard drive were thousands of photos of him having sex with hundreds of young Congolese girls."
If that is the case, then someone has an obligation to prosecute him. IANAL, but last time I checked, I think the country of nationality had the first right to prosecute in these matters, followed by the country where the crimes were committed, and following that, there is no reason why the ICC couldn't prosecute. Oh, wait, the ICC is a dirty word here in the US, sorry I forgot...
I would further point out that there is a large contingent of French, British, and German troops in the Congo under the EU (*not* NATO) flag, the first EU peacekeeping deployment outside Europe.
People forget that a large extent of the issue is that conservatives (the media insofar as most large media outlets are owned by other corporations such as Disney, GE, etc have inherent in their organizations a conservative bias) are largely upset that the US is no longer the dominant power in the world (except militarily). Every major trade war with the EU has ended in a US defeat. The EU has a larger population and a higher per GDP than the US. And the have two permanent seats on the UNSC, and many seats in both the GA and the WTO. Compare that to *1* for the US in each organization.
We in the US can hold our own against China and any other nationalist state. However, because we don't see internationalism as a worthy goal, we cannot hold our own against states who work together to set up common economic policy, as the EU has done.
Note that the parent poster, like many, seems to equate the UN with "France" and/or "Germany." This is further evidence of the building propaganda war against the EU. But what will happen if the EU ends up with three seats on the UNSC at some point (if, say, Russia were to join)?
I fear we are heading into a new type of cold war against an opponent we cannot hope to defeat. Thanks "New American Century..."
Re:Yuk (Score:3, Insightful)
Internet: Development of the DARPA Labs (USA)
Internet: PHYSICALLY constructed by the US
WWW: later addon from MIT
ftp: created in the US
TCP/IP: created in the US
Feel free to add on. The point of this is that the internet, as it started, was wholly concieved and created by the US. Yes other countries added to and by more people connecting, you get more content. However, the fact remains that the US created it.
Now, the UN is coming in after this wonderfu
Re:Yuk (Score:3, Informative)
Sorry to burst your bubble but WWW is a CERN invention [historyoftheinternet.com] (international organization part in Switzerland, and part in France). Check here [oup.co.uk] and here [web.cern.ch].
Re:Yuk (Score:4, Informative)
Re:Yuk (Score:4, Informative)
In order to create a conflict the US had the weapons inspectors search Saddam's palaces and harem for weapons of mass destruction, knowing that Saddam would refuse at first.
Re:Yuk (Score:4, Insightful)
Why would you want an organization whose consituents are mostly corrupt pseudo-democracies or flatout dictatorships to control anything?
Re:Yuk (Score:3, Insightful)
The UN isn't in the business of overthrowing governments. Neither is ICANN. The UN has, however, moved to stop abuses many times - including the oft American favorite, Gulf War I.
one dollar girls in Africa
Several *million* people have been killed in the Congo, and there have likely been equivalent numbers of rapes by various troops involved in the quite brutal conflict. And yet, in this one mission, of 16 worldwide, with 16,000 troops, with everyone accuse
Re:Yuk (Score:5, Insightful)
The UN isn't in the business of overthrowing governments.
I think you might want to read up a bit on why, exactly, the United Nations was founded. This article [opendemocracy.net] may or may not be believed in its entirety, but the fact of the matter is one way or another, the UN was conceived during WWII and was officially founded directly afterwards specifically to prevent dictators running roughshod over their neighbors all over the world. That was the original mandate, and that's why the five permanent members of the security council are who they are.
Even the UN's official history [un.org] is perfectly up front about its origins as a tool of the Allies in fighting Germany and Japan during WWII.
Now you see why many people in the US (and other countries) think the UN has gotten so far off track from its original mandate that it is no longer relevant. It was intended to at least contain, occasionally fight and if necessary overthrow dangerous governments like those of Adolf Hitler and Saddam Hussein. Whether you want to believe it or not, and whether you agree with that cause, that is the truth.
I am no neo-con (or even a traditional-con); I voted against Bush both times. But I get just as annoyed as anyone when people speak of the UN as if its purpose is to keep anyone from fighting, ever. That was not why it was created. It was created to keep rogue states in check - that is the entire reason it exists. It was created during wartime, with a mandate that specifically told member nations to keep fighting. Yet nowadays, it is only ever used as an excuse to do nothing because of competing political interests from those who have something to gain by standing on the sidelines.
As for the UN taking over the internet... read any of what I just posted (either the two links or my commentary, whether you subscribe to the same view or not) and tell me how this would make a lick of sense.
Re:UN never said Iraq had no WMD ... (Score:3, Informative)
Turns out they had no evidence, let alone proof, because there was no weapons of mass-destruction program worth mentioning in Iraq. Oh... and the only ones who were saying that there was were the ex-Iraqis who everyone but the Bush administration had already
Re:NPR Slave (Score:3, Informative)
There is still no proof that the weapons of mass destruction weren't moved
Except for everything such as ardently pro-war hawk (formerly uberconvinced that Iraq had WMDs) David Kay's inspection report. Except for the fact that there was no infrastructure for any sort of relevant production in the entire country, and the agents degrade.
Read Kay's report. You'll notice no mention of the "sarin" and "mustard gas" shell finds. Why? Because, like the other several dozen false positives in initial testing,
Re:Yuk (Score:5, Informative)
Here's an article [wikipedia.org] with tons of links, for those who would like to distort his views by giving decade-old quotes that were overcome by events. I suggest you start reading the *recent* quotes from each of the heads of UNSCOM/UNMOVIC as well, plus the comments of the IAEA.
Re:Yuk (Score:5, Interesting)
Cycle of the ages (Score:5, Insightful)
Re:Cycle of the ages (Score:4, Insightful)
Re:Cycle of the ages (Score:4, Insightful)
However, lazy folks just prefer handing control over to someone else, and pay lip service to ideas like "freedom" and "liberty."
Really ? (Score:3, Insightful)
In any case, what is the UN qualified to have oversight on?
Re:Cycle of the ages (Score:3, Insightful)
My experience has shown that whenever a new area of freedom opens up, some group abuses it, requiring regulation/oversight.
Pardon me if this sounds offensive, I don't mean it to be, but my first (and second and third) impression from this statement is that you like control and telling other people what to do or how to do it. Some people prefer consensus and commonly held mores of behavior to authoritarian approaches with rigid rules and regulations, as in level 3 vs. level 2 of Kohlberg's stages of moral development [wikipedia.org]
What a Great Idea! (Score:5, Insightful)
After all, that's what we elected these people to do, right? Oh wait a minute. nobody elected the UN, it's a treaty organization.
I'm not trying to sound reactionary, but this sounds like a solution in search of a problem. The internet is fine the way it is. If the U.S. Congress has managed to keep its hands off it so far, the U.N. should follow suit, imo. The more politicians we get involved in managing the net, the worse it will perform for everybody.
Being Your Own Customer [whattofix.com]
Re:What a Great Idea! (Score:3, Insightful)
I'd say it's a pretty damn well run organization despite being run by the U.N.
The U.N. is not just a bunch of incompetent politicians, although I'm sure a lot of Americans like to think that.
Re:What a Great Idea! (Score:4, Insightful)
But what about "managing Teh Intarweb"? The majority of politicians these days don't even understand that there is more to the internet than what Internet Explorer shows them. If they start throwing around regulations that are impossible to follow (like "ban all sites that might offend someone, but we can't give you a list because that would be offensive", how many times have we heard THAT now?) the majority of the politicians wouldn't figure it out until everything starts going down in flames, and if they can't see the rubble in Internet Explorer, they don't know that it's there.
And of course, being unelected, should they get an email saying the internet should be shut down for its annual cleaning and believe that it's true, there isn't anything obvious that can be done about it.
Actually it is run by incompetent politicians (Score:5, Interesting)
Re:What a Great Idea! (Score:5, Insightful)
That is what everyone with half a brain thinks. It is a joke of an organization. Libya was head of the human rights council! Other nations included Cuba (HA!) and Syria (HAHA!)
It is composed of European socialists and third-world zeros. If you want it to have any moral authority create the UDN (United Democratic Nations) and invite nations that respect the sanctity of human life.
--Joey
Re:What a Great Idea! (Score:5, Insightful)
However, on the same hand, the US has no real reason to give up control.
Hence the suggestion to use the UN - it seems like a middle ground somewhat. The people that suggested it are simply trying to create a compromise so the *net doesn't fragment.
Re:What a Great Idea! (Score:3, Insightful)
Ohh wait, you can't right?
Exactly douchebag
Re:What a Great Idea! (Score:4, Informative)
nobody elected the UN, it's a treaty organization
Re:What a Great Idea! (Score:3, Insightful)
Huh? (Score:5, Interesting)
The US _does_ control root, right?
Re:Huh? (Score:3, Interesting)
Re:Huh? (Score:3)
The US controls most of the roots (Score:3, Informative)
Re:Huh? (Score:5, Insightful)
I'm all for it (Score:5, Funny)
Re:I'm all for it (Score:4, Funny)
Re:I'm all for it (Score:3, Insightful)
Also remind the UN is more than the security council, for instance the World Health Org. and World Food Program are UN bodies with millions and millions of human lives depending on them on a daily basis.
I'm convinced the people working at the UN in the offices and in the field are higly motivated, skilled and
It isn't broke... (Score:5, Interesting)
I don't care who controls it... (Score:3, Interesting)
In fact, get rid of them entirely. They aren't truly necessary except to maintain backwards compatability.
Peace Keepers on the Net (Score:5, Funny)
In communist Europe, the internet owns YOU..... (Score:3, Insightful)
Hmmm.... (Score:5, Insightful)
Re:Hmmm.... (Score:3, Insightful)
China would get a vote (Score:3, Insightful)
Call us cowboys, but a lot of the world doesn't want our freedoms, and would be more than happy to stop them for all of us. I don't think the spirit of the internet could survive a bunch of unelected corrupt dictators setting the rules.
Re:Hmmm.... (Score:5, Insightful)
I tend to agree with most everyone else here: if it ain't broke, don't fix it.
I don't agree with the idea that "the US invented it, therefore we should control it". I don't think that's a good approach or attitude, but I also think that the internet has been humming along just fine without any real government control.
Really...what would *anyone* have to gain from allowing the UN to control the internet from a practical standpoint (no, "sticking it to the US" doesn't count)? I think it's pretty obvious that the cost/benefit ratio is really, really bad in that scenario.
A false assumption here (Score:3, Interesting)
Air travel, news, food, and Earth's economy are just as "global", and yet there are no global entities in charge of those areas. Not only does there not need to be, there are good reasons to not have global (i.e. centralized) control of such things. 20th century history is full of examples.
One big reason to fear UN control beyond taxes: how long before they try to crack down on "hate speech," which will me
Re:Hmmm.... (Score:5, Insightful)
The UN doesn't even vaguely resemble a world government. It's more like a country club for national governments. There's no real money in helping refugees, feeding starving children, or vaccinations; the UNHCR, UNICEF, and the WHO are decent branches of the UN. There is staggering amounts of money in "overseeing" oil and other commodity sales and there's probably also staggering amounts of money and power involved in domain name control. Do you really want an organization made up of unelected and unaccountable politicos running another program with money involved given the UN's track record in that regard?
Re:Hmmm.... (Score:3, Interesting)
There is not a global government. The UN is a treaty organization that wants to become a government. Your attitude is to just hand over a national asset to a questionable body that is not accountable to anyone.
Besides, why not do something better? Create alternate directories and advertise the IP numbers for those nameservers. Let software developers work out the problems with mult
The reason not is because the UN is ill suited (Score:4, Insightful)
Another problem is that the UN isn't an elected body. It's diplomats that are appointed and are not answerable to the public they supposedly represent. Politicians do enough shady shit when they ARE directly answerable, it gets far, far worse when there's no accountability.
I mean for a good example, see the recent Tsunami crisis. When the Tsunami hit, the important thing initially was getting basic aid there immediately: food, water, and medical attention. A number of nations did just that. Both their military and civilian volunteers went over and worked their asses off to save lives. The UN sent a group over to survey the damage and fact-find, gave some soundbites to the media, and whined that the troops over there should be wearing UN blue rather than the uniforms of their countries. All the while people were in desperate need of immediate help.
That's just a good example of the general problem. Look at the UN office in New York. The opulence is simply unbelievable for an organization that is supposed to be a representative of so many poor nations. Then realise they have offices like this all over the place.
Now for the US there's an additional consideration in that the UN may decide they want regulations on the Internet that are unconstitutional. The constitution can't be overridden just by some treaty organization; it overrides all other law in America (well, it's supposed to at any rate, politicians seem to forget that sometimes). So for example China might want to push a regulation that says no subversive political speech is allowed, and they'd have plenty of backers on that. Well, sorry, but that's unconstitutional.
While I think we can work out a more equitable solution than the US running the Internet, having the UN run it isn't the right answer.
Anyone but the U.N. (Score:3, Insightful)
Re:Anyone but the U.N. (Score:3, Interesting)
This is contradictory... power and protection for member states? How about we protect the member states by not giving the UN power?
The UN was supposed to be a framework for diplomatic cooperation of countries. A place for them to talk issues to death, to negotiate treaties and so forth. The failure we've seen has
That's worse than the US (Score:5, Insightful)
When the UN adopts the first amendment... (Score:5, Insightful)
...then maybe. Not before.
Re:When the UN adopts the first amendment... (Score:4, Insightful)
No. The UN pays lip service to the freedom of speech, but clearly states in the charter (have you read it?) that these "rights" are subject to abridgement or revocation by the UN itself. A right isn't a right if it can be taken away. That's why the US founding documents speak of inalienable rights, endowed by the creator. In other words, rights that transcend the power of government.
ignoring problems comes next (Score:4, Funny)
Re:ignoring problems comes next (Score:3, Funny)
I see your point. The UN will act just like Microsoft does now.
Typical UN Resolution (Score:5, Funny)
We will give you 1 year to take down your website, before 'more drastic' measures will be taken.
One year later
...
Resolution 30357B - Illegal File Traders:
Oh, did we say one year? We meant two. Take two years. But take it down! Don't make us unleash the fury!
Two years later
...
Resolution 30357C - Illegal File Traders:
We at the UN can't help but notice that you haven't taken your site down. We strongly disapprove of your actions. So much so that we're giving you three more years to do it. But you'd better believe that when those three years are up it's clobbering time. Seriously.
Three years later
...
Resolution 30357D - Illegal File Traders:
It seems you are still running your illegal website. We downloaded several Chingy tunes today (thanks for the UN discount!). But you seriously need to take that site down. Seriously. To show you how serious we are, we're going to start a plan of denying aid to people not in any way affiliated with you. Yes we know this won't affect on you personally, but it makes us look like bad-asses. Five more years! That's all we can give you. Then out come the meat hammers!
Five years later
...
Resolution 30357E - Illegal File Traders
- Rider A: Condemnation of Israel for refusing to just fucking disappear like the Mayans
- Rider B: Pay-raise and trips to Disneyland!
Maybe it's us. Are we doing something wrong? Is there something we could give you to make you take that site down? Because, seriously, we're all pussies here at the UN and don't want to do anything drastic like follow through on our empty resolution statements. So why don't we go ahead and give you as many years as you like to take that site down. Just keep those kickbacks coming! And remember, we are the world's last resort for justice!
Re:Typical UN Resolution (Score:5, Funny)
The US is making us do this again. Sorry. So, *sigh*, this is probably your last warning. First of all, thanks for taking that copy of Herbie: Fully Loaded off your site. But if you don't provide proof that you're not operating another server somewhere in some way we can't detect, we're going to come get you.
A question of Rights (Score:3, Interesting)
Right now if I want I can spew all the hate-speech I like on the internet.
Right now I can arrange the sale of firearms over the internet.
Right now I can play addictive text-based MUDs that waste more lives than either of the above.
Will these be preserved by a governing body who disapproves of all three?*
(*number three was a joke)
...other suggested responsibilities... (Score:3, Insightful)
TLD for food program (Score:4, Insightful)
As a forum for international discussion, dialog and negotiation, the U.N. is a fine organization. The U.N. as a body is, though, not actually accountable to anyone. This is why the U.N. should not be thought of as a government, or even a meta-government (a government of governments). Any body that is not accountable to (as in, risks being voted out of office or power), eventually becomes corrupt.
How much money went to Saddam Hussein in the oil for food program? How much was actually used for food? Little if any. How much money was skimmed off the top by people at the U.N.? A lot, but we can never know how much because these people neither represent my (or your) interests, nor are they accountable to me (or you)!
Why the UN bashing? (Score:5, Insightful)
It's not overly effective in some respects (stopping invasions, oppression) but that's a fault of the countries involved not the organisation itself.
Without the UN, there might still be apartheid in South Africa. There would be lots more people starving to death. There would likely still be smallpox. Free and fair elections would be unavailable in many countries. AIDS (and tuberculosis and malaria) would be far greater problems. Those accused of warcrimes might not be tried.
While it's easy to knock the UN following recent scandals, get a sense of perspective. It's extremely difficult to coordinate things on a world scale without any real authority but the UN does do an extremely admirable job.
Whether it would handle the root servers well or not is a separate issue, but don't criticise out of hand an organisation that has saved millions of lives.
Manta
Kids, stop fighting (Score:5, Interesting)
I'm sorry to have to agree though, the idea of the UN controlling the Internet is scary, for exactly the reasons that people have mentioned. It's currently largely unregulated (another word for that is "free", get it?). The comments from UN reps in other countries (e.g. Syria) revealed amazing ignorance of how the internet works, and an explicit desire to exert firm control over content. The complaint by Brazil about the
So far I have yet to hear either a good technical or policy-based argument against leaving it in US hands. I'm willing to be convinced, but so far all the arguments against US control have boiled down to, "we don't like you and/or don't want you to have it." Not good enough for me, sorry. I'm going to write my Congresscritters and ask them not to turn it over.
There is no Internet (Score:4, Insightful)
Right now, almost everybody agrees that US-centric organizations like ICANN get to govern top-level things like the root domain. But there is absolutely nothing keeping people from following their own set of standards. Indeed, some already do.
I don't even worry that much about "fragmentation". The Internet is already horribly fragmented. It's no longer safe or consistent or well-organized, which you used to be able to count on. If, say, we end up with multiple conflicting namespaces, someone will create some meta-directory protocols or search engines or something.
Of course, it would be nicer if that didn't happen. No sense making things worse than they are.
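The meta-directory idea gestured at above can be sketched in a few lines: resolve a name against several independent namespaces in a user-chosen preference order. The maps below are a toy stand-in for competing root-server sets, and every name and address in them is invented (192.0.2.x is the reserved documentation range):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class MetaDirectory {
    // Each "root" maps names to addresses; real roots would be DNS servers.
    private final List<Map<String, String>> roots;

    public MetaDirectory(List<Map<String, String>> roots) {
        this.roots = roots;
    }

    // Ask each namespace in preference order; the first answer wins, which
    // is also how conflicting claims on the same name get settled.
    public Optional<String> resolve(String name) {
        for (Map<String, String> root : roots) {
            String addr = root.get(name);
            if (addr != null) {
                return Optional.of(addr);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Map<String, String> icannLike = Map.of("example.com", "192.0.2.1");
        Map<String, String> altRoot = Map.of(
                "example.com", "192.0.2.2",   // conflicting claim, loses here
                "example.web", "192.0.2.3");  // a TLD only the alt root knows

        MetaDirectory dir = new MetaDirectory(List.of(icannLike, altRoot));
        System.out.println(dir.resolve("example.com").orElse("NXDOMAIN")); // 192.0.2.1
        System.out.println(dir.resolve("example.web").orElse("NXDOMAIN")); // 192.0.2.3
        System.out.println(dir.resolve("example.xyz").orElse("NXDOMAIN")); // NXDOMAIN
    }
}
```

Swap the list order and example.com resolves to 192.0.2.2 instead — which is exactly the fragmentation the surrounding comments worry about.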
I just keep thinking (Score:5, Interesting)
Looking at the U.N. myself though I don't really see an organization consistent enough to draw any conclusions about it. It is an evolving entity. Look at its state over time since oh, say, 1985, and you'll realize there are almost no points over this time period where the U.N. in practice clearly resembles the entity it was just five years before. The U.N. had a clearly defined role during the Cold War; now that the Cold War is over that role no longer applies, and it is trying to find its new role. I don't think there's any way to predict right now what that role is going to be. The U.S. has the option of taking an active hand in shaping the U.N.'s new role, if we want (there have been parts of the last 20 years where we've done this, though right now is not one of them); however, what we can't do is make the U.N. go away. It's going to stay around, and it's going to develop into something. That isn't our choice. Our only choice is, will it develop into something with us or without us.
One thing that it occasionally worries me the U.N. might develop into is a bloc organization that basically represents "everyone but the U.S.". That is, I think it is possible that as the U.S. increasingly acts only in its own immediate interest to the exclusion of anyone else's interests, other countries will use the U.N. as a platform on which to band together and represent their interests in common, until the U.N. eventually becomes something which pens in the U.S. the way NATO penned in the USSR. As an American, I don't think this situation would be good for me or my country. However, I think it is possible. I also think that trying to push hard against or de-emphasize the U.N. does more to make the above "U.N. vs U.S." outcome likely than it does to make the U.N. weaker. The U.N.'s potential strength stems from the countries which wish to align with it; it's exactly as strong if the U.S. appears hostile toward it as it is if the U.S. appears apathetic toward it. However if the U.S. appears hostile toward the U.N. we do begin to set the stage for a situation where the U.N. begins to behave antagonistically back.
I see this DNS thing as a small but noteworthy step toward this situation.
Four or five years ago if the U.N. expressed an interest in controlling the DNS servers (and they did) there would be no point in taking this suggestion seriously (and no one did) because there was already an independent and international body (ICANN) on track toward running the DNS system. Now the U.S. has decided to make ICANN no longer a meaningfully independent body, and the governance of the DNS servers a U.S. national issue rather than an international one. And now, as a result, we are starting to see movements where service providers and governments outside the U.S. [slashdot.org] are starting to look into ways to break away from the U.S.-commerce-department-controlled ICANN system and into nameserver independence. In this light, the U.N. proposing they control nameservers takes on a very different tone. It underscores that if the U.S. does not wish to administer the nameservers under its control in an international fashion, there are other entities perfectly willing to assume that job.
If other nations choose to break away from the U.S. controlled nameservers, well, it's likely they'll do so together, meaning that we will have the U.S. commerce department running DNS for the U.S. and an international body running DNS for "everybody else". And who will run this international body? Well, the U.N. is a likely choice. The steady smear campaign against the U.N. doesn't exist in the same way outside the
Why not allow unlimited TLDs? (Score:3, Interesting)
For example, it seems silly to rely on something like
If you need something authoritative, private authorities could use public/private keys as proof to do that. Individuals could then decide which private authorities have standards worth trusting. The U.N. could set up such an authority to authenticate government sites. When a user visits a government site, it could refer the application to whichever authorities it chooses.
Limiting TLDs just creates conflict as different powerful interests vie for their own distinctions. Sure, people can more quickly categorize this way, but the limitations seem to outweigh the benefits.
Let's not and say we did. (Score:4, Interesting)
It is really bad as it is now. Every independent board member that has overseen ICANN actions has said this. But putting it into the hands of the UN per se would just make matters much worse.
Also I have the strong feeling that many people don't have the slightest clue what the UN really is and what it does. The funny thing is everyone seems to have an opinion about it. Either they hate it or love it or like it or dislike it. Germans like it and left leaning Americans like it. French like it and conservative Americans dislike it. I don't know about Americans, but I know that Germans don't have a clue what it is they like.
Some basics:
The UN is made up of different bodies to which countries are elected. Each world region (like Africa or Asia) has a certain quota for how many countries they can vote into a certain committee. Then there are also organizations for specific purposes, like UNAIDS or the UN High Commissioner for Refugees.
The UN is very good for diplomacy, for example. All nations can go there and resolve conflicts instead of starting wars. Granted, it hasn't worked very well and could be made better, but I don't see any alternative. Kofi Annan for example pushed through some very important reforms in his first two years of office.
Anyways, I could go on for hours, but maybe You can just check their webpage. It is quite informative.
Just reading the UN Charter would most likely be very informative to many people here, I suppose.
The UN is many, many things at the same time. Maybe if a sensible set of rules were put together for some kind of organization under the UN umbrella, it would appear international and at the same time remain efficient. But it is not going to happen anyway. So keep cool and keep cursing Verisign and their control over ICANN.
US to retain what? (Score:4, Interesting)
The other root servers could stop mirroring A, ISPs could stop pointing to the current root servers, or the end users could stop using their ISPs domain servers.
If the UN wants to set up and control their own root server, they should just do it, there's nothing stopping them.
-- Should you trust authority without question?
Re:get over it... (Score:3, Insightful)
Re:get over it... (Score:4, Insightful)
Re:get over it... (Score:4, Insightful)
aren't oppressive countries (i.e. against freedom of speech and thought in this case) part of the UN? The USA anyone?
Living outside the US, all I can say is that having the US control the Internet is a bad thing.
More valuable than Lagrange points=US will keep it (Score:4, Interesting)
My bet is Bush'll nominate someone anti-UN to the UN to make it ineffective so this UN thing isn't an option. Oh....
Re:get over it... (Score:5, Funny)
Re:get over it... (Score:3, Insightful)
The US invented the internet. The internet has to be controlled (to a degree) from somewhere. If everything is and always has been working just fine from where it is, why would anyone want to move it? Because they want to change it....that's why. It belongs where it is.
TV and the Telephone do not have one worldwide location of control. You can't control which country all of the billions of TV's are in... if there was only one, you could... do you get it now? Your
Re:get over it... (Score:3, Insightful)
Normally, I'm all about fair treatment of citizens across the world, regardless of their country of origin. But in this case, I really believe the US should retain control.
First off, no one is saying that the US is doing a bad job; they want change because they don't think it "feels right" that the US is controlling everything. This requires a certain amount of faith that a body like the UN can do as good of a job as the US has been doing. I would hate to have a switch take p
Re:get over it... (Score:5, Insightful)
This kind of turf war is likely to happen with a UN controlled internet.
For example, what happens when countries like China, North Korea, and many more demand that the UN aid them in "filtering" the internet for their citizens?
The root servers are pretty stable and things are working fine right now. There's no need for a change to a venue where politics will rule the technology (I know there are politics already, but we're talking orders of magnitude difference here).
Re:get over it... (Score:3, Insightful)
I'll tell you what. Nothing happens. So what if they demand? They can be voted down by A DEMOCRATIC PROCESS, involving more enlightened nations and the USA.
Your point doesn't stand. The UN is more democratic on any day than the USA.
Re:get over it... (Score:4, Insightful)
I could just imagine the UN Security council trying to run it.... Wait, nothing would ever happen because you need unanimous consent of all nations involved to do anything. Getting some of these to agree is a far fetched notion.
Of course we could always let the general assembly run it. There's a brilliant idea. Give the United States as much say over the internet as every other tiny country in the world. That's fair and democratic?
They could also do it by population. In that case China has a huge advantage. That would be great.
Under any system we'd allow the wrong people to get their hands on it. Is letting China, Libya, Cuba, North Korea etc. tell the rest of the world what to do with the internet really the definition of DEMOCRATIC PROCESS?
A big part of the problem is the UN has no accountability. When the UN starts using it to push their viewpoint (as the topic said, universal net access), what then? What do we do when the internet becomes a vehicle for corruption? Who do we call and say change this? Someone will be getting rich while the internet collapses. Currently ICANN doesn't have the power to tax the net like this, or to create filters etc. In the hands of the UN... who knows what power it'll have. The UN has zero accountability. If ICANN tried this now they'd be stopped in a second.
What I find the most amusing about all of this is how so many Europeans are all about this idea. As if they'd actually have a say over it? It wouldn't be the EU's internet, it would be the world. Under the UN that means security council or general assembly. Tell France or Germany that Uganda has as much say over the nets infrastructure as they do. Or that China has more say (due to bribing other countries) or whatever. The EU would lose out on the deal, but the only possible thing that would make them like it is the fact that it hurts the US more than it hurts them (kind of like Kyoto). Stabbing yourself to hurt the US is not a good idea.
While maybe some more international control could be used for the internet, I would say that there is no reason for the UN to have any say over it. The UN is corrupt and getting worse. There's zero accountability. What's the best option then? Well, tell us legitimate problems with the internet as it's being run and maybe we'll examine them on that basis.
Phil
Re:get over it... (Score:3, Funny)
Since when is Tim Berners-Lee a European?
The man is British!
Re:get over it... (Score:3, Informative)
Re:get over it... (Score:3, Informative)
Re:If they don't like it... (Score:3, Funny)
Since he's the main guy behind funding it with U.S. tax dollars, that would be more of an argument for keeping things as they are...
Re:Internet comes of age (Score:3, Insightful)
Not that it matters anyway - as the parent says, a country struggling for complete hegemony over any
Re:Internet Comes of Age (Score:3, Insightful)
I don't think they have any room to point fingers.
Oh and let's not forget that the solution is to take a system that has been working perfectly fine and give it to an unelected group of people with an incredibly bad track record. A gr
Re:Internet Comes of Age (Score:3, Informative)
Re:Internet Comes of Age (Score:5, Funny)
ANYBODY BUT THE DUTCH!
Next thing you know, the streets of the internet will be littered with sites trying to sell sex and drugs.
Re:Internet Comes of Age (Score:3, Insightful)
On a serious note, if ICANN were making politically-motivated decisions, I'd be for taking that power away from ICANN and handing it over to someone less susceptible to politi
Re:Internet Comes of Age (Score:5, Insightful)
Re:The UN (Score:4, Insightful)
Which one?
-b
Re:The UN (Score:3, Interesting)
Global Use != Global Ownership (Score:4, Insightful)
If the rest of the world doesn't want to be a part of our DNS, they can set up their own. But we already have ccTLDs that expressly give such authority to governments. What do you want for nothing, a rubber biscuit? | https://tech.slashdot.org/story/05/07/14/1555244/un-to-govern-internet | CC-MAIN-2016-44 | refinedweb | 7,708 | 73.47 |
"Armin Ronacher" armin.ronacher@active-4.com wrote in message
news:loom.20080527T192243-415@post.gmane.org...
| Basically the problematic situation with iterable strings is something
| like a flatten function that flattens out every iterable object except
| strings.
In most real cases I can imagine, this is way too broad. For instance, trying to 'flatten' an infinite iterable makes the flatten output one also. Flattening a set imposes an arbitrary order (but that is ok if one feeds the output to set(), which de-orders it). Flattening a dict decouples keys and values. Flattening iterable set-theoretic numbers (0={}, n = {n-1, {n-1}}, or something like that) would literally yield nothing.
| Imagine it's implemented in a way similar to that::
|
|     def flatten(iterable):
|         for item in iterable:
|             try:
|                 if isinstance(item, basestring):
|                     raise TypeError()
|                 iterator = iter(item)
|             except TypeError:
|                 yield item
|             else:
|                 for i in flatten(iterator):
|                     yield i
I can more easily imagine wanting to flatten only certain classes, such as tuples and lists, or frozensets and sets.
def flatten(iterable, classes):
    for item in iterable:
        if type(item) in classes:
            for i in flatten(item, classes):
                yield i
        else:
            yield item
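A runnable Python 3 rendition of the class-restricted flatten above. The `classes` argument (any collection of types to descend into) comes from the thread; the sample data below is only illustrative.

```python
def flatten(iterable, classes):
    """Recursively flatten `iterable`, descending only into items whose
    exact type appears in `classes`; everything else is yielded as-is."""
    for item in iterable:
        if type(item) in classes:
            for i in flatten(item, classes):
                yield i
        else:
            yield item

nested = [1, [2, (3, 4), "keep me"], {"a": 1}]
# Descend into lists and tuples only; strings and dicts pass through whole:
print(list(flatten(nested, (list, tuple))))
# [1, 2, 3, 4, 'keep me', {'a': 1}]
```

Note that using `type(item) in classes` (rather than isinstance) means subclasses are not descended into either, which is what keeps string-like types safe here.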
| A problem comes up as soon as user defined strings (such as UserString)
| is passed to the function. In my opinion a good solution would be a
| "String" ABC one could test against.
This might be a good idea regardless of my comments.
tjr | https://mail.python.org/archives/list/python-dev@python.org/message/KC5CBZZNC5WZE5GU3AAPCXBA7DZLWKP2/ | CC-MAIN-2020-40 | refinedweb | 240 | 53 |
#include <librets/UpdateResponse.h>
Default constructor.
Returns the column names.
Get the current value for the data encoding flag.
Get the field name of the current error.
Get the error number for the current error.
Get the offset of the error.
Get the text associated with the error.
Returns the RETS-STATUS ReplyCode.
Returns the RETS-STATUS ReplyText.
Returns the value of a column as a string.
Returns the value of a column as a string.
Get the field name of the current warning.
Get the error number for the current warning.
Get the offset of the warning.
Get the response required indicator.
Get the text associated with the warning.
Parse the result sent back from the client.
Set the data encoding flag to allow for parsing of extended characters by Expat.
RETS is officially US-ASCII, but this will allow a workaround for servers that haven't properly sanitized their data.
Set the input stream for Parse. | http://lpod.org/librets/classlibrets_1_1_update_response.html | CC-MAIN-2020-10 | refinedweb | 159 | 72.12 |
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#19401 closed Bug (fixed)
'swapped' checking does not account for case insensitivity
Description
According to the code at django/db/models/options.py:230 the test for "has this class been swapped?" is essentially: "if there's a swapped meta option, and the option is not None, and the option isn't simply the name of the current model, in "appname.model" format"
However, the test doesn't take into account that the model part of appname.model is case insensitive.
While we could simply normalise app_name and that coming back from the setting, I think a better test would be:
"if there's a swapped meta option, and the option is not none, and the "get_model()" using that option is not the same as the current class instance." Ie, rather than assume model names are case insensitive, just use the get_model and ensure it doesn't return us.
Change History (19)
comment:1 Changed 4 years ago by russellm
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to invalid
- Status changed from new to closed
comment:2 Changed 4 years ago by chriscog
- Resolution invalid deleted
- Status changed from closed to reopened
Actually, this was something I only discovered myself, in that while app names are case sensitive, model names are not. Check out these lines from django/db/models/loading.py
def get_model(self, app_label, model_name,
              seed_cache=True, only_installed=True):
    """
    Returns the model matching the given app_label and case-insensitive
    model_name.

    Returns None if no model is found.
    """
    if seed_cache:
        self._populate()
    if only_installed and app_label not in self.app_labels:
        return None
    return self.app_models.get(app_label, SortedDict()).get(model_name.lower())
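The case-insensitivity boils down to one detail: the model name is lowercased both when the cache is populated and when it is queried, while the app label is used verbatim. A toy illustration with stand-in names (not Django's actual AppCache):

```python
# Toy model cache: model names are lowercased on both registration and
# lookup, so "User" and "user" resolve to the same entry, while the app
# label dictionary key stays case-sensitive.

app_models = {"auth": {}}

def register_model(app_label, model_name, model):
    app_models[app_label][model_name.lower()] = model

def get_model(app_label, model_name):
    return app_models.get(app_label, {}).get(model_name.lower())

class User:  # stand-in for django.contrib.auth.models.User
    pass

register_model("auth", "User", User)
print(get_model("auth", "User") is get_model("auth", "user"))  # True
print(get_model("Auth", "User"))  # None -- app labels stay case-sensitive
```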
comment:3 Changed 4 years ago by ptone
- Resolution set to needsinfo
- Status changed from reopened to closed
Because the swap check involves the app label, I don't see how you are ever going to have a swappable collision between myapp.Mymodel and myapp.MyMODEL because you will have plenty of other problems with having models in the same app be the name name like that.
Do you have a concrete failure example, or is this just supposition from looking at the code?
comment:4 Changed 4 years ago by chriscog
- Resolution needsinfo deleted
- Status changed from closed to reopened
Sorry sorry sorry. I understand that this appears to be a trivial issue, perhaps even leaning towards "argument over programming preference", but please hear me out.
There are two issues with this current implementation. 1. inconsistent behaviour with the rest of django. 2. DRY not being maintained
- Consistency
The code as stands will operate correctly if AUTH_USER_MODEL='auth.User' ... the swapping logic will detect that the model is being directed to be swapped out with itself, and will act like a no-op. No worries!
However, if you set AUTH_USER_MODEL='auth.user' very strange things happen. During syncdb, it sets up the auth tables as expected, but does not prompt to create a superuser. The "createsuperuser" is available in manager, but errors-out if run, and a whole bunch of other stuff comes back with AttributeError: Manager isn't available; User has been swapped for 'auth.user'
So, a quick response to setting 'auth.user' is, of course, "dont do that".
However, django developers have already been trained to use lowercase names when using the 'appname.modelname' nomenclature This is not just taking it from the code snippet in a previous comment above, but in things like this documentation:
Notice that they use "site" and "user" rather than "Site" and "User"
So, we're asking developers to "not care about" case in one circumstance, and to "very much care about" case in another. Not consistent, and the odd behaviour above will lead to (admittedly rare, at least initially) problems.
And, I'm not even including the case where there's weird interactions between swapped models and content types.
Note, I'm just using Content type as one example, there are others.. plenty of places where "django.db.models.loading.get_model" is called from developer input. Like it or not, at this point Django needs to treat model names as case insensitive when being referred to from strings.
- DRY
In 1.4, the only place where strings are turned into model names is in django.db.models.loading.get_model().
With the introduction of 1.5 model swapping, we've put that into another location: django.db.models.options.Options .. which imho is not tightly-bound enough to the above.
At the VERY least, I propose we ensure such translations stay in the same location. However, that would just highlight that object-comparisons and string-comparisons are not orthogonal.
So, a very easy fix to both of these is to replace the string based comparison with a object based. Ie turn this:
if self.swappable:
    model_label = '%s.%s' % (self.app_label, self.object_name)
    swapped_for = getattr(settings, self.swappable, None)
    if swapped_for not in (None, model_label):
        return swapped_for
into this:
if self.swappable:
    swapped_for = getattr(settings, self.swappable, None)
    if swapped_for is None:
        return None
    swapped_for_model = get_model(*swapped_for.split('.', 1))
    this_model = get_model(self.app_label, self.module_name)
    if swapped_for_model is not this_model:
        return swapped_for
Now, this appears a little "messy" because I cannot see any attribute on the Options object that has a reference to the model. Ie, it always needs to go back through get_model to get that. (that a cycle prevention technique?). However since we're calling get_model (which is the model cache), this has "cacheability" written all over it.
Also note that I've used module_name rather than object_name... module_name is always lowercased. (Not relevant here, since get_model lowercases the model names anyway)
Tested the above code, and it works for both variations of AUTH_USER_MODEL: syncdb does what it's supposed to, and the admin screens are behaving themselves. I'm going to run the regression test on it in a moment, and perhaps even do that "git pull request" thingy.
(note, I'm not really happy about the get_model(*somestr.split('.',1)) pattern... but that's used _everywhere_ all over Django right now. Very non-DRY)
Summary:
I think this, or something to the same effect is a worthwhile change as it increases consistency and DRY. However seeing that _swapped gets called frequently, this looks like something that should really be part of the model cache or a "just in time" setting on the Options object itself. (ie, not set till its called, then subsequent queries return from a cache). I would not object to this not being included in 1.5 but if swapped models gain traction, we should look at improving the design there.
comment:5 Changed 4 years ago by anonymous
Ran regression tests and it failed... in a very strange way. Got
Traceback (most recent call last):
  File "./runtests.py", line 330, in <module>
    options.failfast, args)
  File "./runtests.py", line 156, in django_tests
    state = setup(verbosity, test_labels)
  File "./runtests.py", line 135, in setup
    mod = load_app(module_label)
  File "/Users/chris/dev/dj15/Django-1.5b2/django/db/models/loading.py", line 96, in load_app
    models = import_module('.models', app_name)
  File "/Users/chris/dev/dj15/Django-1.5b2/django/utils/importlib.py", line 35, in import_module
    __import__(name)
  File "/Users/chris/dev/dj15/Django-1.5b2/tests/modeltests/fixtures/models.py", line 12, in <module>
    from django.contrib.contenttypes import generic
  File "/Users/chris/dev/dj15/Django-1.5b2/django/contrib/contenttypes/generic.py", line 17, in <module>
    from django.contrib.admin.options import InlineModelAdmin, flatten_fieldsets
  File "/Users/chris/dev/dj15/Django-1.5b2/django/contrib/admin/__init__.py", line 6, in <module>
    from django.contrib.admin.sites import AdminSite, site
  File "/Users/chris/dev/dj15/Django-1.5b2/django/contrib/admin/sites.py", line 3, in <module>
    from django.contrib.admin import ModelAdmin, actions
ImportError: cannot import name actions
when it tries to import modeltests.fixtures.models as part of the run_tests loop. This _only_ happens if I call get_model inside the _swapped method, even if I just call it with fixed params and trap every exception, and the call has no bearing on the execution of the function. Those last few entries in the stack trace have come up before in other issues, too. Viz:
Very odd... this did not happen just on the first call to get_model either... it was called 7 times in _swapped(), and then _after_ 20 or so test modules were loaded, the fixtures module failed.
comment:6 Changed 4 years ago by ptone
- Severity changed from Normal to Release blocker
- Triage Stage changed from Unreviewed to Accepted
OK - so setting AUTH_USER_MODEL='auth.user' is something that should be handled better - auth.User should not think it is swapped with 'auth.user'
The appname.model is established as a convention for Django - swapped models furthers that (for better or worse), and has the flaw in being easily confused with a python path.
Because get_model has always used lowercase, it seems that we should just say that the Django specific "appname.modelname" convention is case insensitive.
Which could make the solution as simple as:
if swapped_for.lower() not in (None, model_label.lower()):
supporting case sensitive module names is against PEP-8, and unless you get tricky about where modules are on your path, not even possible for systems that don't have case-sensitive FS.
comment:7 Changed 4 years ago by chriscog
Thanks for responding and accepting. Might not seem significant, but I've had too many recent bad experiences with getting R&D to see the issues (not Django related at all), and the acknowledgement of the contribution is highly appreciated.
I've been doing more tests, and I can't get adding get_model to work... too many circular module dependencies going on. I think that's a non-starter for now.
Your proposed solution may not be universal enough either; while model names are treated case-insensitive, app_labels (by default derived from the module names), _are_ case sensitive: there is nothing in the model loading or module loading mechanism that case-normalises app names... they're imported with the case specified, and stored in the app cache with the case specified.
So, my suggested change becomes
try:
    swapped_for = getattr(settings, self.swappable)
except AttributeError:
    return None
swapped_for_app, swapped_for_model = swapped_for.split('.')
if swapped_for_app == self.app_label and swapped_for_model.lower() == self.module_name:
    return swapped_for
- I have committed a cardinal sin and not tested this yet. (At work!)
- I'm using self.module_name, which is precalculated as self.object_name.lower().
- Does not handle cases where the app_label is blank (do they exist?)
comment:8 Changed 4 years ago by chriscog
Update: What you have _will_ work _if_ you assume that PEP-8 is being followed. While I can't come up with a real-world example, though, I would still suggest my more-awkward-looking change as it avoids breaking applications that "happen to be working in violation of PEP-8". If we're going to enforce "no case-only distinctions", we should do it in a much earlier place, such as the app loader, and with a clear error message.
comment:9 Changed 4 years ago by ptone
so yeah - case sensitive app_labels have been around too long to change now probably - I'd hope it would be rare that someone would have two apps, one Foo and the other foo, or bar.foo - but I'm sure it is out there.
We could still just lowercase the object_name for the swapped check
comment:10 Changed 4 years ago by chriscog
Yeah... I got the logic of that test wrong. Hooray for testing! Here's a working, and passing the unit tests, version:
def _swapped(self):
    """
    Has this model been swapped out for another? If so, return the
    model name of the replacement; otherwise, return None.
    """
    if self.swappable:
        try:
            swapped_for = getattr(settings, self.swappable)
        except AttributeError:
            return None
        try:
            swapped_for_app, swapped_for_model = swapped_for.split('.')
        except ValueError:
            # This is not ideal, but we want the model validation to catch the error
            return swapped_for
        if swapped_for_app != self.app_label or swapped_for_model.lower() != self.module_name:
            return swapped_for
    return None

swapped = property(_swapped)
- Returning swapped_for even if we know it is wrong was done because the unit tests expect the error to come from the model validation code, rather than here.
That this test code is run each time anything needs to determine if the model has been swapped feels really ugly. Is it too late in the release cycle to refactor this? Perhaps set up this very test inside Options.contribute_to_class() and have is_swapped, swapped_app_label and swapped_object_name as attributes on Options. The app_label.model_name format test can then be removed from the model validation (and we'll need to adjust the invalid_models.badswappablevalue test). I'll happily work on this if it's worth it. Github fork appropriate? We can use "swapped" instead of "is_swapped" to be compatible with all the code that simply uses swapped for swappededness.
comment:11 follow-up: ↓ 13 Changed 4 years ago by chriscog
I've refactored the swapped check so that its checked once when the options are applied to the class, and all tests for swapped-ed-ness are done against the "is_swapped", "swapped_app_label" and "swapped_object_name" attributes.
These changes cause three regression tests to fail, but these appear to be "non issues", or perhaps even fixing a limitation of the older method. All three tests were looking for a failure condition that appears to be perfectly fine code. There's insufficient documentation to know if the tests should have failed by design, or were failing for known or unknown reasons.
I've put the changes into a github fork and sent a pull request:
That appropriate?
I'd like someone with more knowledge about those tests to determine if they really are "tests that used to fail, but now work as it should be."
comment:12 Changed 4 years ago by chriscog
- Has patch set
comment:13 in reply to: ↑ 11 Changed 4 years ago by ptone
I've put the changes into a github fork and sent a pull request:
That appropriate?
So using a pull request is completely appropriate - but I have to let you know that changing the value of the flag from swapped to is_swapped was completely unnecessary and will cause many reviewers to give your patch less consideration. Likewise the change from property to attribute is not related to this ticket. These are design stylistic choices one gets to make when writing the original feature, not to rehash while working on a bugfix patch. I hope you take that as constructive feedback - your efforts to give this issue some significant thought are appreciated.
The main issue at hand is to remove a confusing result with a new feature when, what is likely to be a relatively common honest mistake, is made.
That means raising a clearer error in cases where the setting is incorrect - or allowing a looser (case insensitive object name) test of the swapped model.
My own choice would be to allow for the looser test, but I'm going to defer to Russ on the determination of the severity of this issue, and its best resolution.
I have worked up a more lightweight fix using the case insensitive approach (with a test) here:
comment:14 Changed 4 years ago by chriscog
Your concerns are understood. Soon after I made those changes, I realised that a far simpler one would be to keep just the one attribute, "swapped".
However, I'm adamant that turning swapped from a function into an attribute that is calculated _once_ during Options.contribute_to_class() is the better (and much more efficient) way of handling it. I'll create another pull request with that variation and I'll take my chances :)
comment:15 Changed 4 years ago by chriscog
New Pull Request:
comment:16 Changed 4 years ago by charettes
As suggested on github we could just use a cached_property instead.
comment:17 Changed 4 years ago by Russell Keith-Magee <russell@…>
- Resolution set to fixed
- Status changed from reopened to closed
comment:18 Changed 4 years ago by Russell Keith-Magee <russell@…>
comment:19 Changed 4 years ago by russellm
@chriscog Thanks for your work (and pull request); however, I opted to go for a simpler approach -- your patch didn't contain tests, but it did break a number of existing tests. From a quick analysis, this was because doing the swapped check at the time of class instantiation messes with the test suite, where swapped models are changed.
Ultimately, the swapped property isn't an expensive check, so it shouldn't be a huge performance difference.
Thanks again for your report and persistence :-)
I must be missing something here, because there's nothing case insensitive about the model name in AUTH_USER_MODEL. If your user class is called MyUser in an app called myapp, the value of AUTH_USER_MODEL should be myapp.MyUser - myapp.myuser is an error, and should be reported as such, and as far as I can make out, it is.
/* LINUXaudio.c: Copyright (C) 1995 Jonathan Moh */
/*
 * Linux code based on examples in the "Hacker's Guide to VoxWare 2.4"
 * and the file "experimental.txt" accompanying the source code
 * for VoxWare 2.9 and later (provided with the Linux kernel source),
 * both by Hannu Savolainen (hannu@voxware.pp.fi).
 *
 * The code not only initializes the sound board and driver for the correct
 * sample size, rate, and channels, but customizes the DMA buffers to
 * match Csound's input or output buffer size.
 *
 * A new option ('-V') was added to Csound to allow the user to set the
 * master output volume on the soundcard.
 *
 * Jonathan Mohr
 * Augustana University College
 * Camrose, Alberta, Canada T4V 2R3
 * mohrj@augustana.ab.ca
 *
 * 1995 Oct 17
 */

#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include "cs.h"
#include "soundio.h"

#define MIXER_NAME "/dev/mixer"

void setsndparms( int dspfd, int format, int nchanls, float sr,
                  unsigned bufsiz )
{
    int parm, original;
    unsigned frag_size;

    /* set sample size/format */
    switch ( format ) {
    case AE_UNCH:   /* unsigned char - standard Linux 8-bit format */
      parm = AFMT_U8;
      break;
    case AE_CHAR:   /* signed char - probably not supported by Linux */
      parm = AFMT_S8;
      break;
    case AE_ULAW:
      parm = AFMT_MU_LAW;
      break;
    case AE_ALAW:
      parm = AFMT_A_LAW;
      break;
    case AE_SHORT:
#ifdef LINUX_BE
      parm = AFMT_S16_BE;   /* Linux on SPARC is big-endian */
#else
      parm = AFMT_S16_LE;   /* Linux on Intel x86 is little-endian */
#endif
      break;
    case AE_LONG:
      die(Str(X_327,"Linux sound driver does not support long integer samples"));
    case AE_FLOAT:
      die(Str(X_326,"Linux sound driver does not support floating-point samples"));
    default:
      /* Linux sound driver provides these names for modes not used by Csound */
      /* parm = AFMT_IMA_ADPCM; */
      /* parm = AFMT_U16_LE; */
      /* parm = AFMT_U16_BE; */
      die(Str(X_1342,"unknown sample format"));
    }
    original = parm;
    if (ioctl(dspfd, SOUND_PCM_WRITE_BITS, &parm) == -1)
      die(Str(X_1312,"unable to set requested sample format on soundcard"));
    if (parm != original)
      die(Str(X_1199,"soundcard does not support the requested sample format"));

    /* set number of channels (mono or stereo) */
    parm = nchanls;
    if (ioctl(dspfd, SOUND_PCM_WRITE_CHANNELS, &parm) == -1)
      die(Str(X_1310,"unable to set mode (mono/stereo) on soundcard"));
    if (parm != nchanls)
      die(Str(X_239,"DSP device does not support the requested mode (mono/stereo)"));

    /* set the sample rate */
    parm = (int) sr;
    if (ioctl(dspfd, SOUND_PCM_WRITE_RATE, &parm) == -1)
      die(Str(X_1313,"unable to set sample rate on soundcard"));
    if (parm != (int) sr) {
      sprintf(errmsg,Str(X_455,"Sample rate set to %d (instead of %d)"),
              parm, (int) sr);
      warning(errmsg);
    }

#ifndef __FreeBSD__
    /* set DMA buffer fragment size to Csound's output buffer size */
    parm = 0;
    frag_size = 1;            /* find least power of 2 >= bufsiz */
    while ( frag_size < bufsiz ) {
      frag_size <<= 1;
      parm++;
    }
    parm |= 0x00020000;       /* Larry Troxler's idea */
    /* parm |= 0x00ff0000; */ /* use max. number of buffer fragments */
    if (ioctl(dspfd, SNDCTL_DSP_SETFRAGMENT, &parm) == -1)
      die(Str(X_755,"failed while trying to set soundcard DMA buffer size"));
    /* find out what buffer size the driver allocated */
    if (ioctl(dspfd, SNDCTL_DSP_GETBLKSIZE, &parm) == -1)
      die(Str(X_754,"failed while querying soundcard about buffer size"));
    if (parm != (int)frag_size) {
      sprintf(errmsg,
              Str(X_466,"Soundcard DMA buffer size set to %d bytes (instead of %d)"),
              parm, frag_size);
      warning(errmsg);
    }
    else
      printf(Str(X_823,"hardware buffers set to %d bytes\n"), parm);
#endif
}

void setvolume( unsigned volume )
{
    int mixfd, parm;

    /* open mixer device for writing */
    if ( (mixfd = open(MIXER_NAME, O_WRONLY)) == -1 )
      die(Str(X_1309,"unable to open soundcard mixer for setting volume"));

    /* set volume (left and right channels the same) */
    parm = (volume & 0xff) | ((volume & 0xff) << 8);
    if (ioctl(mixfd, SOUND_MIXER_WRITE_VOLUME, &parm) == -1)
      die(Str(X_1311,"unable to set output volume on soundcard"));

    /* close mixer device */
    if ( close(mixfd) == -1 )
      die(Str(X_734,"error while closing sound mixer device"));
}
Hello,
Hi Mahmoud,
As I can see, you’re trying to extract text without a license. I would like to share with you that the text is not completely extracted in the evaluation mode. If you have already purchased a license then please set it before extracting the text. However, if you’re still evaluating then you may get a temporary license for 30 days from this link to test the complete text extraction.
I hope this helps. If you have any further questions, please do let us know.
Regards,
Hi,
Thanks for using our products. Can you please share the source PDF document and the code snippet that you are using so that we can test the scenario at our end. We apologize for your inconvenience.
Hello,
Code:
Aspose.Pdf.Kit.PdfExtractor extractor = new Aspose.Pdf.Kit.PdfExtractor();
Hello Mahmoud,
Thanks for sharing the resource files.
I have tested the scenario and I am able to reproduce the same problem. For the sake of correction, I have logged it in our issue tracking system as PDFKITNET-27861. We will investigate this issue in detail and will keep you updated on the status of a correction.
We apologize for your inconvenience. | https://forum.aspose.com/t/arabic-text-is-not-recognized/110237 | CC-MAIN-2021-21 | refinedweb | 205 | 67.55 |
Unity 5.0.4
The Unity 5.0.4 release brings you improvements and a few fixes. Read the release notes below for details.
For more information about the previous main release, see the Unity 5.0 release notes.

Improvements

- Audio: Enabled OpenSL for GearVR.
- iOS/IL2CPP: Load embedded resource files as memory mapped read-only files so that they do not contribute to memory pressure.
- iOS/IL2CPP: Lower memory used by IL2CPP executables at runtime by removing unused overhead from memory profiling.
- Xbox One: Unity is now built with the June 2015 XDK. You must have the June 2015 XDK installed on your PC and use the matching or later recovery.
Fixes
- (none) - Android: Fixed an issue whereby unaligned access caused a crash on Tegra K1.
- (691217) - GLES: Fixed crash on Vivante GPUs when using binary shaders.
- (699694) - GLES: Fixed crash when using multithreaded renderer and using shader that uses vertex colors when the mesh data doesn't contain vertex colors.
- (none) - Graphics: Configurable vertex compression to fix lightmap UVs shifting.
- (700474) - Graphics: Fixed issue when loading single channel JPEGs using Texture2D.LoadImage.
- (691599) - Inspector: Fixed NullReferenceException caused by deleting objects during Inspector redraw.
- (685439) - iOS: Fixed an issue whereby lightmapped objects with legacy shaders lit with realtime light in legacy deferred no longer render incorrectly.
- (695118), (701548) - iOS/IL2CPP: Add support for PreserveAttribute to prevent classes, methods, fields and properties from being stripped in IL2CPP.
- (700507) - iOS/IL2CPP: Avoid deadlock during UnloadUnusedAssets.
- (691607), (667147) - iOS/IL2CPP: Correct an exception during code conversion which has the error message "Invalid global variables count" when converting some UnityScript assemblies.
- (698060) - iOS/IL2CPP: Corrected an error in generated code when a constrained generic parameter type is used in a nested lambda expression.
- (698589) - iOS/IL2CPP: Corrected RPC implementation for the UnityEngine.Networking namespace.
- (704018) - iOS/IL2CPP: Ensure that GetCurrentMethod returns the proper value, even when the generated native method is inlined.
- .693316
- : Pr700531event a C++ compiler error in generated code about an undeclared identifier with the test "Unused local just for stack balance".
- .
- (704069) - iOS/IL2CPP: Prevent the player build process from using older generated C++ source files from a previous build.
- (691008) - iOS/IL2CPP: When compiling scripts for the player, appropriate UnityEngine.UI.dll will be referenced.
- (671681) - Lighting: Fixed inaccurate screenspace scissoring of lights with large range.
- (none) - Mono: Double traverse object count to avoid stack overflow when freeing huge amounts of objects.
- (none) - Windows Store Apps: Fixed a rare crash at boot when reading AppxManifest.xml.
- (690152) - Xbox One: Fixed a bug that could cause game chat to fail when more than two players are involved.
- (none) - Xbox One: Mono: Double traverse object count to avoid stack overflow when freeing huge amounts of objects.
Changeset: 1d75c08f1c9c
Tips, tricks, and guides for developing on modern Windows platforms
Note: This method for overriding the Windows Phone Back button is for Silverlight apps only. Please see my other post on the Back button in Windows Phone 8.1 WinRT apps.
By default Windows Phone keeps pages on a ‘back stack’ and automatically navigates backwards through that stack (eventually exiting the app) when you press the hardware Back button. This is intuitive, but you may want to override the behaviour from time to time.
However, keep in mind that the expected behaviour – that pressing Back takes the user back a step – should be preserved (you’ll likely fail certification if it isn’t). Overriding the Back button is not intended to let you assign the Back button to random functions (e.g. pausing audio playback); you should override the Back button when doing so preserves the user experience in an intuitive, logical way. For example I have an app that uses popup windows to display content. If a popup window is displayed and the user presses Back, the window closes, but the page does not navigate backwards.
To override the Back button you need a very simple method in your page’s code behind.
Firstly, add the ComponentModel namespace to your page:
using System.ComponentModel;
Then add this method:
protected override void OnBackKeyPress(CancelEventArgs e)
{
// put any code you like here
MessageBox.Show("You pressed the Back button");
e.Cancel = true;
}
This little method overrides the default Back button behaviour with whatever you put inside it. There is one important line of code I’ve added in the above method:
e.Cancel = true;
Setting e.Cancel to true tells the OS to cancel the default Back button behaviour (i.e. navigating to the previous page). If you leave that line out your method would run, and then the phone would also navigate backwards (which may be the behaviour you want).
react-gif-player
Similar to Facebook's GIF toggle UI, this React component displays a still image preview by default, and swaps in an animated GIF when clicked. The images are preloaded as soon as the component mounts, or whenever a new source is passed.
Note: Unlike Facebook's UI, which uses an HTML video element to preserve playback progress, this component uses the actual GIF and will be reset on each click.
install
npm install react-gif-player react react-dom
If you're unable to use npm and need production-ready scripts, check out the releases.
usage
quick start
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <!-- gifplayer.css v0.4.1 -->
  <link rel="stylesheet" href="[email protected]/dist/gifplayer.css">
</head>
<body>
  <div id="cat"></div>

  <!-- react/react-dom served over CDN -->
  <script src="[email protected]/umd/react.development.js"></script>
  <script src="[email protected]/umd/react-dom.development.js"></script>

  <!-- gifplayer.js v0.4.1 -->
  <script src="[email protected]/dist/gifplayer.js"></script>

  <script>
    ReactDOM.render(
      React.createElement(GifPlayer, {
        gif: '/img/cat.gif',
        still: '/img/cat.jpg'
      }),
      document.getElementById('cat')
    );
  </script>
</body>
</html>
with a module bundler
var React = require('react');
var ReactDOM = require('react-dom');
var GifPlayer = require('react-gif-player');

// with JSX
ReactDOM.render(
  <GifPlayer gif="/img/cat.gif" still="/img/cat.jpg" />,
  document.getElementById('cat')
);
options
Options can be passed to the GifPlayer element as props.

gif: a string address to an animated GIF image.

still: a string address to a still preview of the GIF (e.g. JPG, PNG, etc.)

autoplay: a boolean which can be set true if you want to immediately bombard your user with a moving GIF as soon as it's available.

onTogglePlay: a function which is called whenever the GIF toggles between playing and paused. Receives one argument, playing, which is a boolean.

pauseRef: a function callback which is called with another function, pause - this can be saved and called later to remotely pause the playing of the GIF, in such cases where that might be desired. For example, you might want to stop the GIF when it scrolls offscreen. The word "ref" is used because its usage pattern is similar to React element refs:
// here's an example
class MyGifWrapper extends React.Component {
  componentDidMount () {
    addEventListenerWhenGifFlowsOffscreen(this.pauseGif);
  }

  render () {
    return (
      <GifPlayer
        src={src}
        still={still}
        pauseRef={pause => this.pauseGif = pause}
      />
    );
  }
}
Any other attribute available on the HTML img tag can be passed as well (excluding src, which would be overwritten), though keep in mind React's version of that attribute may be different than you expect.
GifPlayer expects one or both of the gif and still props. If one is left out, the other will be used as a fallback.
However, if only a gif prop is provided, the first frame will be extracted and used as the still preview as soon as the GIF image has fully loaded.
generating still frame at build time
The disadvantage of not providing a still prop, even though a stand-in will be generated, is that your GIF must fully load before the still frame appears instead of the (likely slowly moving) GIF.
One streamlined way to generate a still frame ahead of time is to incorporate the gif-frames module, which has only pure JavaScript dependencies, into your build process.
e.g.
var gifFrames = require('gif-frames');
var fs = require('fs');

gifFrames({ url: 'src/image.gif', frames: 0 }).then(function (frameData) {
  frameData[0].getImageStream().pipe(fs.createWriteStream('build/still.jpg'));
});
If you need finer-tuned control over image quality, you can try Gifsicle.
styles
Important: In order for the default styles to be used, dist/gifplayer.css must be included in your HTML.
CSS styles can be overridden easily. To add a border around the image, try including this CSS after including the default styles:
.gif_player img {
  border: 3px solid cornflowerblue;
}
usage with sass
If you preprocess your styles with Sass, you can have more powerful control via Sass variables. The defaults are located at the top of src/GifPlayer.scss:
$gif_btn_bg_base_color: #000 !default;
$gif_btn_bg_opacity: 0.5 !default;
$gif_btn_bg_opacity_hover: 0.7 !default;
// ...etc
The !default flag means that declaring alternatives before including the default styles will override them.
// Include var overrides before default styles import
$gif_btn_bg_base_color: gold;
$gif_btn_text_color: cornflowerblue;
$gif_btn_font_family: serif;

// Using webpack css/sass module import syntax
@import '~react-gif-player/src/GifPlayer';

// include other overrides afterward
.gif_player {
  margin: 1rem;

  img {
    border: 2px solid #222;
  }
}
hmmm still waiting on feedback on this topic.
THanks all
OK Copeg,
The first thing I need to figure out is how to modify the gui so that users can input the values for term in months, Rate in percent, & loan amount.
So I guess I need to modify the code...
OK let me clarify a bit.
My code as posted Compiles & runs OK.
What I need to know is how to make the necessary changes to the current code so I can accomplish what I need to do with it.
I...
Hi all,
I am working on a java program and I need to update it so that it does the following:
****Write the program in Java (with a graphical user interface) so that it will allow the user to...
so are you talking about using the f.add (new JButton(:closeButton:)); method??
If so I already tried that and it said it couldn't find the jbutton.
so where else would I add it??
OK I tried to add it to the run command, but it still gives errors.
Here is the updated code:
// Import the swing and AWT classes needed
import java.awt.EventQueue;
import...
HI all,
I am trying to add a close button to some basic swing code I was playing with, but no matter where I put the code for the Jbutton I get errors.
Here is the code for the program as is...
OK I am just going to close this thread,
I have gotten people confused.
OK let me clarify something
I have tried to compile both files in TextPad but they both give errors.
here are the errors for the 1st code:
.\InvestmentFrame.java:25: illegal combination of...
In the book we are reading for my Java 2 class there is a section on GUI that gave me some code, but I don't know how to use it in a text editor (textpad) to make the code so it will work.
There are...
Thanks everyone for all the help!!
not sure if this is correct,
But I just though U could replace
return !pig; on line 10 with
return !true;
this gives the false return
IKE
Hi all,
I am having a problem with my code,
It is supposed to display 10 lines of text when I run it, but at first it does that, then it goes and displays 20 lines at a time. I am at my wits end...
HI all,
I have my code working only problem is that on the first loan its supposed to stop after 84 months, but it goes all the way to 100.
This of course causes a negative ammount to be... | http://www.javaprogrammingforums.com/search.php?s=f5396602e13c2a0ff285878bed5d44d7&searchid=1028991 | CC-MAIN-2014-35 | refinedweb | 460 | 88.97 |
Ok, well I attempted another exercise in the book. It was to 'Write a program that reads in a sequence of positive numbers and prints out the total and average value. The end of the sequence should be signalled by entering -1'.
This is what I came up with:
Only when I enter -1 it messes up and says an error log is being created :(

Code:
#include <iostream.h>
int main()
{
int x, y = 0, z = 0;
while(x != -1)
{
x = 0;
cout << "Enter a number: ";
cin >> x;
z++;
y = x + y;
}
y = y + 1;
cout << "The average of those numbers is " << y / z;
}
Thanks if you can help
-Marlon | http://cboard.cprogramming.com/cplusplus-programming/78098-another-problem-printable-thread.html | CC-MAIN-2016-30 | refinedweb | 126 | 75.84 |
Hey, I'm not sure if this is really a bug, but I'd like to raise it to prevent some undesired behavior!

I was working on the `match` support for the returns () library when I saw a behavior similar to the one described in the `Constant Value Patterns` section of PEP-622 ().
A very small and reproducible example:
```python
from typing import Any, ClassVar, Optional


class Maybe:
    empty: ClassVar['Maybe']
    _instance: Optional['Maybe'] = None

    def __new__(cls, *args: Any, **kwargs: Any) -> 'Maybe':
        if cls._instance is None:
            cls._instance = object.__new__(cls)
        return cls._instance


Maybe.empty = Maybe()

if __name__ == '__main__':
    my_maybe = Maybe()
    match my_maybe:
        case Maybe.empty:
            print('FIRST CASE')
        case _:
            print('DEFAULT CASE')
```
The output here is `FIRST CASE`, but if I delete `__new__` method the output is `DEFAULT CASE`.
Is that the correct behavior?
Python version: 3.10.0a7 | https://bugs.python.org/msg397380 | CC-MAIN-2021-43 | refinedweb | 140 | 66.13 |
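A side note on why this happens (this explanation is mine, not part of the report): under PEP 634, a value pattern such as `case Maybe.empty:` matches when the subject compares equal to `Maybe.empty` with `==`. With the `__new__` singleton, every `Maybe()` call returns the same object, so the default identity-based `__eq__` makes the comparison true; delete `__new__` and two distinct instances compare unequal. The underlying equality difference can be shown without `match` at all:

```python
class Singleton:
    _instance = None

    def __new__(cls):
        # Reuse one shared instance for every construction.
        if cls._instance is None:
            cls._instance = object.__new__(cls)
        return cls._instance


class Plain:
    pass


# Same object every time, so default identity-based == holds.
print(Singleton() == Singleton())  # True

# Fresh object each call; default __eq__ compares identity.
print(Plain() == Plain())  # False
```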
I just made a small program to spell check a provided sentence and point errors in the sentence. Actually, the program creates a list by reading data from text file which contains dictionary words and from there it tells whether the inputted word/s are in dictionary or not. I would like to extend my program further by also adding a suggestion list to suggest user words similar to the incorrect word/s they entered so that they can modify their sentence accordingly. How would i be able to suggest similar words?
Here is the code snippet:-
def check():
    print '*'*8+" Program to check spelling errors in a sentence you entered "+'*'*8
    print "write some text in english"
    text=raw_input("Start: ")
    tex=text.lower()
    print tex
    textcheck=tex.split(' ')
    dic=open('D:\Mee Utkarsh\Code\Python\DictionaryE.txt','r')
    origdic=dic.read()
    origdicf=origdic.split('\n')
    errorlist=[]
    correctwordlist=[]
    for words in textcheck:
        if words in origdicf:
            correctwordlist.append(words)
        elif words not in origdicf:
            errorlist.append(words)
        else:
            pass
    for x in textcheck:
        if x.isdigit():
            correctwordlist.append(x)
            errorlist.remove(x)
    print '-'*50
    print 'Error words list'
    a=1
    while a==1:
        if errorlist==[]:
            print 'No Error!'
            a=a+1
        else:
            for x in errorlist:
                print '\b',x,' '
            a+=1
    print '-'*50
    y=1
    print 'Correct Words list'
    while y==1:
        if correctwordlist==[]:
            print 'Sentence Full of Errors'
            y=y+1
        else:
            for x in correctwordlist:
                print '\b',x,' '
            y=y+1
    print '-'*50
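For the suggestion feature asked about above, one standard-library option (my suggestion, not from the thread) is difflib.get_close_matches, which ranks candidate words by similarity to a misspelled word. This sketch is Python 3 (the code above is Python 2, so adapt accordingly); the wordlist below stands in for the contents of the dictionary file:

```python
import difflib

def suggest(word, wordlist, n=3):
    # Return up to n dictionary words, most similar first.
    # cutoff=0.6 discards candidates too dissimilar to be useful.
    return difflib.get_close_matches(word.lower(), wordlist, n=n, cutoff=0.6)

wordlist = ['spelling', 'spell', 'splint', 'apple', 'sentence']
print(suggest('speling', wordlist))  # 'spelling' ranks first
```

Each word in errorlist could be passed through suggest() against origdicf to build the suggestion list.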
About the Average Word Length problem
I am struggling with this problem since yesterday. I did what I can do, but I couldn't get 5/5. No matter what I did, I always get 3/5. Can you see what I'm missing in this code? (As a reminder, the average word length problem is: you have an input with multiple words, and the output should be the average of letters per word rounded to the nearest whole number.) And here's the code:

import string
punc= string.punctuation
test= input()
phrase=" "
count_letter =0
count_word =0
for i in range(len(test)):
    phrase +=str(test[i])
    if phrase[i] == " ":
        count_word +=1
    if phrase[i] != " " and (phrase[i] not in punc): # not counting spaces and punctuations
        count_letter +=1
print(round((count_letter +1)/count_word))
24 Answers
Try using: (count_letter//count_word)+1 Also, im really glad i read your question. i had no idea there was a string library. I used regex to solve this.
Your code adapted. I bypassed the word =" ", and iterated directly from the input. Also a minor change to word_count = This now works. Thanks heaps for sharing your interesting concept
Aymane Boukrouh has hit the nail on the head. The task requires you to "round up" to the nearest whole number.
I am also playing with the code and getting some discrepancies when I introduce an extra " " space into the sentence. The code counts an extra word each time it sees chr(32), " ".
Try using math.ceil() instead of round()
also you can’t use the code i gave you for rounding because if you floor a number like 3.0 it becomes 3.0. And then you add 1, it becomes 4. math.ceil() is the right choice. But thats not the issue. ——————-Spoiler ahead: The issue starts all the way at the beginning of your code with the line phrase = “ “ because then you do: phrase += str(test[i]) That means phrase is now “ word” Which means phrase[i] = “ “ And your if statement says if phrase[i] = “ “ then add a word!!! To solve your issue, all you have to do is is change: phrase = “ “ to: phrase = “” And you’re done. Lol, coding can be so cruel. All this over a damn whitespace.
It did surely made a difference. I'm now getting 4/5, but now a new problem came out. The (count_letter) part, when doing ((count_letter +1)/count_word) test 5 fails. And when instead doing (count_letter/count_word) test 1 fails 🤷♂️.
ok so i figured out your problem. First use the code that shows you the failing test case like the one i gave you. Here’s a copy of your code that i changed a little. Run it on your end and you will see what is happening that is causing you to round incorrectly.
import string
punc= string.punctuation
test= input()
phrase=" "
count_letter =0
count_word =0
for i in range(len(test)):
    phrase +=str(test[i])
    print("character: " + phrase[i] + " ~ ")
    if phrase[i] == " ":
        count_word +=1
        print("Adding a Word")
    elif phrase[i] != " " and (phrase[i] not in punc): # not counting spaces and punctuations
        count_letter +=1
        print("Adding a Letter")
print(round(((count_letter)//count_word)+1))
To round up to the next number you can use ceil() in the math module.

from math import ceil
print(ceil(4.1)) # output: 5
you have to remove the floor and replace it with ceil and get rid of the + 1 right now you are doing this: 13//3 = 3.0 + 1 + 1 = 5 i just copied yours and removed that and it worked. edit: nope. i was using the old one you wrote. Davids code is good for learning more methods, but i think you should try to make your code work. It’s almost there. I’ll post mine as well above yours. Your missing two conditional statements.
Ivan Thanks, I don't know if it does work because I copied the code David Ashton wrote to test it and it worked. But it is a completely different approach.
'''#My Answer:
import re,math as m
x = list(map(lambda y: len(y), ''.join(re.findall('[A-Za-z ]',input())).split(' ')))
print(m.ceil((sum(x)/len(x))))
'''
import string
import math
punc= string.punctuation
test= input()
phrase= ""
count_letter =0
count_word =0
for i in range(len(test)):
    phrase +=str(test[i])
    print(phrase + "-----------")
    if phrase[i] == " " or phrase[i] in punc:
        count_word +=1
        print("Add a word")
    if phrase[i] != " " and (phrase[i] not in punc):
        count_letter +=1
        print("Add a letter")
# you need to check to see if before a punctuation there is another punctuation, like “Hello there...”
print(math.ceil(count_letter/count_word))
Try my code:

# Average word length:
import string
import math
punc= string.punctuation
test= input()
phrase= ""
count_letter =-1
count_word =1
for i in range(len(test)):
    phrase +=str(test[i])
    if phrase[i] == " ":
        count_word +=1
    if phrase[i] != " " and (phrase[i] not in punc):
        count_letter +=1
print(math.ceil((count_letter//count_word)) +1)
word = input()
str = word.split()
import re, math
mystr = re.sub(r"[^A-Za-z]", "", word)
print (int(math.ceil(len(mystr)/len(str))))
Ivan, it didn't work. Apparently, it made it worse, now I'm getting 1/5. I'm sure I wrote everything right.. | https://www.sololearn.com/Discuss/2121320/about-the-average-word-length-problem | CC-MAIN-2021-10 | refinedweb | 878 | 75.91 |
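For reference, a compact solution assembled from the fixes discussed in this thread (count words with split(), skip string.punctuation, round up with math.ceil) — this is my own consolidation rather than a quote of any single answer:

```python
import math
import string

def average_word_length(sentence):
    words = sentence.split()  # split() also swallows repeated spaces
    letters = sum(1 for ch in sentence
                  if ch not in string.punctuation and not ch.isspace())
    return math.ceil(letters / len(words))

print(average_word_length("Hello, world!"))  # 10 letters / 2 words -> 5
```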
Adding a Splash Screen to Your Applications
Environment: VC6, Win32
Introduction
Seemingly, every application I create has some lengthy processing in the WM_CREATE section of code. Sometimes, this delay before my main window is displayed causes users to click again on the application icon, thus starting yet another instance of the application. This SPLASH C++ class allows me to easily display a startup splash screen or other information before the main application window is displayed.
Before developing this class, I had tried to use Dialog boxes and timers to simulate a splash screen with limited success. But that method always had less than acceptable results. This class uses a bitmap created in your resource editor and its Resource ID to define the splash screen.
This code uses the bare Win32 API. MFC is NOT REQUIRED!
Using the SPLASH Class
The first thing to do is create the splash screen bitmap. This can be done externally, and then imported into your resource editor, or created within the editor. The resource editor will assign it an ID such as IDB_BITMAP1. This bitmap can be any size. The splash screen window will size to fit it automatically.
In your C++ code, include the SPLASH.H header file; then, create a splash class instance as follows:
#include "splash.h"

//global variables
SPLASH mysplash;
Of course, you'll also have to include your project SPLASH.CPP. Alternatively, you can compile SPLASH.CPP and SPLASH.H to a LIB file and include that in your project.
In your WM_CREATE section of your WndProc, initialize the splash screen with the SPLASH::Init() method as follows:
mysplash.Init(hWnd,hInst,IDB_BITMAP1);
The Init() method takes a window handle of the parent window of the splash screen, in this case hWnd. The second parameter is the instance handle of the parent Window. Of course, the third parameter is the resource ID of the splash screen bitmap.
After initializing the splash screen, two other methods are used to show or hide the splash screen, Those methods are coincidently Hide() and Show(); both take no parameters.
To display the splash screen, you would do this:
mysplash.Show();
To hide the splash screen, oddly enough, you would do this:
mysplash.Hide();
One member variable, (BOOL) SHOWING, is used to programmatically determine whether the splash screen is currently displayed. This allows you to set a timer and hide the splash screen after a predetermined length of time, or hide it upon a mouse click or any other window event.
Example code for this is as follows:
case WM_LBUTTONDOWN:
    if(mysplash.SHOWING)
    {
        mysplash.Hide();
    }
    break;
The code below shows the SPLASH class being used in an application.
#include "splash.h"

//global variables
SPLASH mysplash;
.
.
.
WndProc(...)
.
.
.
case WM_LBUTTONDOWN:
    if(mysplash.SHOWING)
    {
        mysplash.Hide();
    }
    break;

case WM_CREATE:
    mysplash.Init(hWnd,hInst,IDB_BITMAP1);
    mysplash.Show();

    //simulate lengthy window initialization
    Sleep(4000);

    //hide the splash screen as the main window appears
    mysplash.Hide();
    break;
I hope you find this class useful in your application development. I would appreciate feedback via e-mail or the comments section of this article.
BIRT: Styles overlapping in HTML Page

BIRT BIRD (2012-10-05):
I am an in-depth lover and user of BIRT, but I am facing an issue while rendering the reports in my JSP. All the CSS styles and formatting (for example: font-color Red) I applied in my BIRT report design are displayed correctly in the JSP. But when I close this page and open another one which has different formatting properties (for example font-color Green), the style properties of the previous JSP are being applied to the newly opened one. That is, the latest JSP displays some elements in Red color and some in Green color. And the only thing which is common for these two JSPs is that both of them have the same parent JSP. I think this is happening because BIRT itself is generating some inline styles (something like style_4, style_5, style_6, etc.) which are applied while rendering HTML data. This issue does not happen when I render the report in PDF. What could be the cause of this? How can I resolve this issue so that I can render reports in a way that styles do not get overlapped? Please help.

Jason Weathersby (2012-10-05):
Are you using the API, tag libraries or the Viewer? Take a look at the namespace option when using the API. Are you applying individual properties in the BIRT report or are you using BIRT styles?

BIRT BIRD (2012-10-05):
Thanks for the fast response. I am using the API. Yes, I tried using namespace options:

options.setOutputFormat("HTML");
options.setEnableInlineStyle(false);

Also I tried setting individual properties, BIRT styles and external CSS files. Nothing seems to be working.

Jason Weathersby (2012-10-05):
Did you try the namespace?

HTMLRenderOption options = new HTMLRenderOption();
options.setOutputFileName("output/resample/renderoptions.html");
options.setOutputFormat("HTML");
options.setHTMLIDNamespace("mytest");

BIRT BIRD (2012-10-05):
No, I hadn't tried that way. So I tried just like you said now:

htmlOptions.setOutputFileName("output/resample/renderoptions.html");
htmlOptions.setOutputFormat("html");
htmlOptions.setHTMLIDNamespace("mytest");

Then it is not at all giving any HTML data. What is meant by "output/resample/renderoptions.html"? Do I have to change something in this as per my context?

Jason Weathersby (2012-10-08):
No, that was the example output. Do not use that. Just add htmlOptions.setHTMLIDNamespace("mytest"); to your existing options list. Note that your map is probably not named htmlOptions.

BIRT BIRD (2012-10-09):
I tried as you said by just giving htmlOptions.setHTMLIDNamespace("mytest"); Still no luck. It still conflicts with the styles of the previously opened JSP.

Jason Weathersby (2012-10-09):
Can you open a bugzilla entry for this?

BIRT BIRD (2012-10-09):
Sure. Do you have a link for posting BIRT issues in Bugzilla? If so, could you please share it?

BIRT BIRD (2012-10-10):
I have opened a bug in Bugzilla. Bug # 391597.

Jason Weathersby (2012-10-11):
Thanks for posting.

BIRT BIRD (2012-10-11):
Thanks Jason. Do you have any idea how long they will take to address this bug?

Jason Weathersby (2012-10-12):
Not really sure. To move it along you may want to add code that can be easily executed to show the issue. By the way, does this issue happen with only a couple of the report items or all of them? The items that have the issue: are you setting properties on them or are you applying a style?

BIRT BIRD (2012-10-12):
OK. This happens with all reports for which I have applied some formatting with Color, Font and Border. Also it happens in both cases (applying a named CSS style or applying individual properties).

Jason Weathersby (2012-10-16):
Any chance you can upload some code to reproduce?
Timer.AutoReset Property
Gets or sets a Boolean indicating whether the Timer should raise the Elapsed event only once (false) or repeatedly (true). The default is true.
The following example creates a Timer whose Elapsed event fires after 1.5 seconds. Its event handler then displays "Hello World!" on the console.
using System;
using System.Timers;

public class Example
{
    private static Timer aTimer;

    public static void Main()
    {
        // Create a timer with a 1.5 second interval.
        double interval = 1500.0;
        aTimer = new System.Timers.Timer(interval);

        // Hook up the event handler for the Elapsed event.
        aTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent);

        // Only raise the event the first time Interval elapses.
        aTimer.AutoReset = false;
        aTimer.Enabled = true;

        // Ensure the event fires before the exit message appears.
        System.Threading.Thread.Sleep((int) interval * 2);
        Console.WriteLine("Press the Enter key to exit the program.");
        Console.ReadLine();
    }

    // Handle the Elapsed event.
    private static void OnTimedEvent(object source, ElapsedEventArgs e)
    {
        Console.WriteLine("Hello World!");
    }
}
// This example displays the following output:
//    Hello World!
//    Press the Enter key to exit the program.
Available since 1.1 | https://technet.microsoft.com/en-us/library/system.timers.timer.autoreset.aspx | CC-MAIN-2017-34 | refinedweb | 179 | 55.5 |
Tableview_select
I’ve written a custom input form sub-classed from input form dialog. It works fine stand alone. It also works fine when called from a button on my tableview form. When I call it from tableview select, it blows out. Somehow container_view becomes None. I’ve tried everything I can think of, including calling the button action. Any thoughts?
@Appletrain, have you tried decorating the tableview_selected method with @on_main_thread?
@mikael Thanks @mikael, doesn’t help. Same error, same place. What I can’t understand is why it works one place and not another. I stripped every thing out of the two functions so they are doing exactly the same thing. On blows up, the other does’t.
@Appletrain, can you share your stripped-out code?
@mikael, this is about as stripped out as I can make it
def tableview_add(self):
    trxRec = {}
    result = {}
    # trxRec = {'account': self.selected_account}
    result = bucks_dialog(trxRec)  # this works

def tableview_update(self):
    result = {}
    trxRec = {}
    # trxRec = self.selected_item
    result = bucks_dialog(trxRec)  # this blows out - self.container_view changes to None

def tableview_did_select(self, tv, section, row):
    self.tableview_update()

def transaction_action(sender):  # sender is ui.button
    sender.tint_color = 'blue'
    ds.tableview_add()
bucks_dialog is a custom form_dialog from dialogs
Error is:
Pythonista3/Documents/Projects/bucks/bucks_form.py", line 217, in trans_type_action
self.container_view.name = segment_names[ind]
AttributeError: 'NoneType' object has no attribute 'name'
self.container_view is none (form_dialog)
@Appletrain, sorry, my faulty memory. You need to decorate the method with
@ui.in_background. This will mean that you do not get the method return value in the same thread, so you need to set the tableview values within the backgrounded method, or look at this thread.
@mikael, thank you, that took care of it. I had tried that, but I put it at the update function not the select function. | https://forum.omz-software.com/topic/6508/tableview_select/1 | CC-MAIN-2022-27 | refinedweb | 298 | 61.43 |
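For later readers, here is a minimal pure-Python sketch of the pattern that fixed this: run the handler off the calling thread so the delegate returns immediately. The decorator below is a stand-in for illustration only (Pythonista's real @ui.in_background lives in the ui module; this is not its actual implementation):

```python
import threading

def in_background(func):
    """Stand-in for ui.in_background: run the wrapped function
    on a worker thread so the caller returns immediately."""
    def wrapper(*args, **kwargs):
        t = threading.Thread(target=func, args=args, kwargs=kwargs)
        t.start()
        return t  # caller may join() when it needs the side effects
    return wrapper

results = []

@in_background
def tableview_did_select(tv, section, row):
    # stands in for opening the form dialog off the UI thread
    results.append((section, row))

worker = tableview_did_select("tv", 0, 2)
worker.join()
print(results)  # -> [(0, 2)]
```

Since the work happens on another thread, anything that needs the dialog's result has to run inside the backgrounded function (or after a join), exactly as described above.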
This section demonstrates you the use of method flush().
Description of code:
A stream represents a resource, so it is necessary to flush the stream after performing any file operation and before exiting the program; otherwise you could lose buffered data. You can use the flush() or close() method for this purpose.
In the given example, we have used the BufferedWriter class along with the FileWriter class to write some text to a file. Using the write() method of the BufferedWriter class, we have written the text into the file. Then, in order to keep the data safe, we have used the flush() method. This method forces the buffered data out to the underlying output stream.
Here is the code:
import java.io.*;

public class FileFlush {
    public static void main(String[] args) throws Exception {
        String st = "Hello";
        File f = new File("C:/hello.txt");
        BufferedWriter bw = new BufferedWriter(new FileWriter(f));
        bw.write(st);
        bw.flush();
    }
}
Through the method flush(), you can flush out any output stream and keep your data safe. | http://www.roseindia.net/tutorial/java/core/files/fileflush.html | CC-MAIN-2016-30 | refinedweb | 165 | 82.54 |
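A complementary sketch (the file name and helper method below are invented for illustration): with try-with-resources, close() runs automatically at the end of the block, and closing a BufferedWriter flushes it as well:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileFlush2 {
    // writes st to path, flushes, then reads the file back
    static String writeAndRead(Path path, String st) throws IOException {
        try (BufferedWriter bw = new BufferedWriter(new FileWriter(path.toFile()))) {
            bw.write(st);
            bw.flush(); // buffered data is pushed out here
        } // close() runs here and would also flush any remaining data
        return Files.readString(path);
    }

    public static void main(String[] args) throws IOException {
        Path path = Path.of("hello.txt"); // relative path instead of C:/hello.txt
        System.out.println(writeAndRead(path, "Hello"));
        Files.delete(path);
    }
}
```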
Blokkal::Ui::ProviderComboBox Class Reference

KComboBox for selection of provider setups.
#include <providercombobox.h>
Detailed Description

KComboBox for selection of provider setups.
This class is intended to simplify presentation of various provider setups.
Definition at line 46 of file providercombobox.h.
Constructor & Destructor Documentation
Creates a new ProviderComboBox.
Definition at line 90 of file providercombobox.cpp.
Destructor
Definition at line 98 of file providercombobox.cpp.
Member Function Documentation
Adds the providers in providers to the current provider list.
- Note:
- The "Custom" item is always at the end of the list that is presented to the user
Definition at line 111 of file providercombobox.cpp.
Returns the id string of the currently selected provider.
- Returns:
- the id string of the currently selected provider
Definition at line 132 of file providercombobox.cpp.
This signal is emitted when a new provider is selected.
Sets the currently selected provider to the one whose id matches id. If no such provider exists, the "Custom" element is selected.
Definition at line 141 of file providercombobox.cpp.
Replaces the current provider list with providers
- Note:
- An item for "Custom" is always added to the provider list.
Definition at line 103 of file providercombobox.cpp.
The documentation for this class was generated from the following files: | http://blokkal.sourceforge.net/docs/0.1.0/classBlokkal_1_1Ui_1_1ProviderComboBox.html | CC-MAIN-2017-43 | refinedweb | 212 | 51.04 |
This article demonstrates how to display the first several rows of all columns of a DataFrame variable using the head function. Using Jupyter Notebook as a web-based application to run the script, import the data first. After importing the data, store it in a DataFrame variable using the Pandas library. Soon after that, select the first several rows from the variable using the head function. So, run Jupyter Notebook first as in the following command execution :
Following the execution of the Jupyter Notebook, the following are scripts to demonstrate the purpose :
1. Reading data from a CSV file with this script :
import pandas as pd df = pd.read_csv("nba-2.csv", index_col="Name")
2. Print the several first rows available in the DataFrame variable called ‘df’ using the following script :
df.head()
The following is the image output of the above execution :
As shown in the image output above, all the columns are present in the output display, but only five rows are shown. By default, the head function displays the first five rows available in the DataFrame variable called df. The head function is available on any variable of type DataFrame. Furthermore, in order to create such a variable, the Pandas library must be imported in the first place.
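The head function also accepts an argument, so the row count is not fixed at five. Below is a small self-contained sketch; the inline frame is a stand-in for the nba CSV loaded above (names and numbers are made up):

```python
import pandas as pd

# inline stand-in for the CSV data loaded above
df = pd.DataFrame(
    {"Team": ["Celtics", "Celtics", "Lakers", "Bulls", "Heat", "Spurs"],
     "Number": [0, 99, 23, 91, 3, 21]},
    index=["Avery", "Jae", "LeBron", "Dennis", "Dwyane", "Tim"],
)

print(df.head(3))   # first three rows instead of the default five
print(df.head(-2))  # negative n: every row except the last two
```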
wcstombs - converts a wide character string to a multibyte character
string
Standard C Library (libc, -lc)
#include <stdlib.h>
size_t
wcstombs(char * restrict s, const wchar_t * restrict pwcs, size_t n);
The wcstombs() function converts the null-terminated wide character
string pointed to by pwcs to the corresponding multibyte character
string, and stores it in the array pointed to by s. This function may
modify at most the first n bytes of the array pointed to by s. Each
character is converted as if wctomb(3) were called repeatedly, except
that the internal state of wctomb(3) will not be affected.
For state-dependent encodings, wcstombs() ensures that the resulting
multibyte character string pointed to by s always begins in an initial
state.
The behaviour of wcstombs() is affected by the LC_CTYPE category of
the current locale.
There are special cases:
s == NULL The wcstombs() returns the number of bytes needed to
store the whole multibyte character string corresponding
to the wide character string pointed to by pwcs. In this
case, n is ignored.
pwcs == NULL undefined (may cause the program to crash).
The wcstombs() returns:
0 or positive
number of bytes stored in the array pointed to by s. The
value returned is never greater than n (unless s is a null
pointer). If the return value is equal to n, the string
pointed to by s will not be null-terminated.
(size_t)-1 pwcs points to a string containing an invalid wide
character. The wcstombs() also sets errno to indicate the error.
The wcstombs() may cause an error in the following case:
[EILSEQ] pwcs points to a string containing an invalid wide character.
setlocale(3), wctomb(3)
The wcstombs() function conforms to ANSI X3.159-1989 (``ANSI C''). The
restrict qualifier is added at ISO/IEC 9899:1999 (``ISO C99'').
BSD February 4, 2002 BSD | http://nixdoc.net/man-pages/NetBSD/man3/wcstombs.3.html | CC-MAIN-2019-43 | refinedweb | 292 | 65.01 |
(Another) Mercurial Plugin for hoe
Description
This is a fork of the [hoe-hg](bitbucket.org/mml.
Examples
# in your Rakefile Hoe.plugin :mercurial
If there isn't a '.hg' directory at the root of your project, it won't be activated.
Committing
$ rake hg:checkin
-or-
$ rake ci
This will offer to pull and merge from the default repo (if there is one), check for any unregistered files and offer to add/ignore/delete or temporarily skip them, run the *:precheckin* task (which you can use to run tests, lint, or whatever before checking in), builds a commit message file out of the diff that's being committed and invokes your editor on it, does the checkin, then offers to push back to the default repo.
Pre-Release Hook
This plugin also hooks Hoe's *prerelease* task to tag and (optionally) sign the rev being released, then push to the default repo. If there are any uncommitted files, it also verifies that you want to release with uncommitted changes, and ensures you've bumped the version number by checking for an existing tag with the same version.
If you also wish to check the History file to ensure that you have an entry for each release tag, add this to your hoespec:
self.check_history_on_release = true
You can also invoke or add the ':check_history' task as a dependency yourself if you wish to check it at other times.
It expects lines like:
== v1.3.0 <other stuff>
to be in your History file. Markdown, RDoc, and Textile headers are all supported.
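As a rough sketch of the kind of scan :check_history performs (the regex and helper names below are guesses for illustration, not the plugin's actual code):

```ruby
# Does the History file contain an entry for the version being released?
# The header regex mirrors the three formats named above
# (RDoc "==", Markdown "##", Textile "h2.").
HEADER_RE = /^(?:==|##|h2\.)\s*v?(\d+(?:\.\d+)+)/

def history_versions(text)
  text.scan(HEADER_RE).flatten
end

def history_has_version?(text, version)
  history_versions(text).include?(version)
end

history = <<~HIST
  == v1.3.0 2011-04-01

  * pre-release hook added

  == v1.2.1 2011-03-15

  * bugfix
HIST

puts history_has_version?(history, "1.3.0")  # true
puts history_has_version?(history, "1.4.0")  # false
```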
To sign tagged revisions using 'hg sign', do this in your hoespec:
self.hg_sign_tags = true
This requires that 'hg sign' work on its own, of course.
Other Tasks
It also provides other tasks for pulling, updating, pushing, etc. These aren't very useful on their own, as it's usually just as easy to do the same thing yourself with 'hg', but they're intended to be used as dependencies in other tasks.
A 'rake -T' will show them all; they're all in the 'hg' namespace.
Dependencies
Hoe and Mercurial, obviously. I haven't tested these tasks with Mercurial versions earlier than 1.6 or so.
Installation
$ gem install hoe-mercurial
License
The original is used under the terms of the following license:
Copyright 2009 McClain Looney (m@loonsoft.
My modifications are:
and are licensed under the same terms as the original. | http://www.rubydoc.info/gems/hoe-mercurial/frames | CC-MAIN-2017-47 | refinedweb | 404 | 62.48 |
Add anchor points?
- monomonnik last edited by gferreira
Is there an easy way to add anchor points to a path, like the Add Anchor Points command in Illustrator?
(I looked here and in the documentation, but I can’t find it.)
you can use the FlattenPen from fontPens.
from fontPens.flattenPen import FlattenPen # create an empty path dest = BezierPath() # create flatten pen that will draw into the dest bezierPath pen = FlattenPen(dest, approximateSegmentLength=30, segmentLines=True) # draw into the flatten pen pen.moveTo((100, 100)) pen.curveTo((100, 150), (150, 200), (200, 200)) pen.endPath() # create an other path path = BezierPath() # draw an oval path.oval(200, 200, 200, 200) # draw the path with oval in the flatten pen path.drawToPen(pen) # set stroke and fill stroke(0) fill(None) # draw the dest drawPath(dest)
to learn more about pens and how to use them see
- monomonnik last edited by
@frederik You opened a door to a whole new world for me. This is much simpler than I thought. And at the same time, I think it’s going to take me some time to get my head around all this pen-stuff. Thanks! | https://forum.drawbot.com/topic/241/add-anchor-points | CC-MAIN-2020-16 | refinedweb | 193 | 73.68 |
#include <sys/neti.h>

int net_hook_register(net_handle_t net, char *hook_name, hook_t *hook);
Solaris DDI specific (Solaris DDI).
value returned from a successful call to net_protocol_register().
hook name to be registered
pointer to a hook_t structure
The net_hook_register() function uses hooks that allow callbacks to be registered with events that belong to a network protocol. A successful call to net_hook_register() requires that a valid handle for a network protocol be provided (the net parameter), along with a hook description that includes a reference to an available event.
While it is possible to use the same hook_t structure with multiple calls to net_hook_register(), it is not encouraged.
The hook_t structure passed in with this function is described by hook_t(9S). The following describes how this structure is used.
Must be non-NULL and represent a function that fits the specified interface.
Gives the hook a name that represents its owner. No duplication of h_name among the hooks present for an event is allowed.
Currently unused and must be set to 0.
Specify a hint to net_hook_register() on how to insert this hook. If the hint cannot be specified, then an error is returned.
May take any value that the consumer wishes to have passed back when the hook is activated.
If the net_hook_register() function succeeds, 0 is returned. Otherwise, one of the following errors is returned:
The system cannot allocate any more memory to support registering this hook.
A hook cannot be found among the given family of events.
A hook with the given h_name already exists on that event.
A before or after dependency cannot be satisfied due to the hook with
The h_hint field specifies a hint that cannot currently be satisfied because it conflicts with another hook. An example of this might be specifying HH_FIRST or HH_LAST when another hook has already been registered with this value.
The net_hook_register() function may be called from user or kernel context.
See attributes(5) for descriptions of the following attributes:
net_hook_unregister(9F), hook_t(9S) | http://docs.oracle.com/cd/E36784_01/html/E36886/net-hook-register-9f.html | CC-MAIN-2017-09 | refinedweb | 332 | 56.55 |
One device. This flexibility has allowed various groups, some commercial and some hobbyist, to develop alternative distributions of Android. These are commonly referred to as “custom ROMs” however a better name would be “custom firmware.”
Since all the necessary building blocks are available, maybe you have wondered how hard it is to build your own custom ROM, your own personalized version of Android! It is indeed possible, read on to find out more.
Warning
Before we dive into the murky world of building custom versions of Android, we need to pause and assess the enormity of the task ahead, while keeping our expectations in check. If you have absolutely no coding experience, zero experience with using the command line (on Linux or macOS), or no idea what is a “Makefile” then this isn’t for you.
Android is a complete operating system. It is complex and contains many different subsystems. Creating an operating system as complex and useful as Android didn’t happen over night. This means that any customization that you wish to perform is going to have to start small. To create an alternative Android distribution that is radically different will take many, many hours of hard work and dedication.
Having said that. If you are familiar with writing code, if you do know a bit about Makefiles and compilers then making your own version of Android can be a rewarding experience!
Prerequisites
Theoretically it would be possible to build a custom Android firmware for any computing device capable of running a modern operating system. However to make life easy we will limit ourselves to building Android to devices which have support “out of the box”, namely Nexus devices. For my demo build I used a Nexus 5X.
To build Android you are going to need access to (and familiarity with) a Linux machine or a Mac. In both cases you will be using the terminal a lot, and you need to be confident with shell commands. I did my first build using a Linux virtual machine, however it wouldn't recognize the Nexus 5X when in bootloader mode, so I was unable to flash the new firmware onto the device. So then I switched to a Mac and it worked without too many problems.
You will need 130GB of disk space and probably around 8GB of RAM. I tried building Android with just 4GB of RAM and I ran into lots of problems. I also ran into similar problems with 8GB of RAM, however using some tricks (see later) I was able to create a successful build.
Learn patience. Building Android isn’t quick. To synchronize the source repository with my local machine took almost 24 hours! Also, a full clean build will take several hours to complete. Even after making a minor change you might need to wait 10 to 20 minutes for a build. It all depends on your hardware, however don’t expect to have your new version of Android up and running in just a few moments.
The Android Open Source Project version of Android does not include any Google services. So things like Google Play, YouTube, Gmail and Chrome will be missing. There are ways to flash those “gapps” onto your own custom firmware, but I will leave you to find out how to do that. Hint: Search for “install gapps”.
Where to start
The basic process is this. Download and build Android from the Android Open Source Project, then modify the source code to get your own custom version. Simple!
Google provides some excellent documentation about building AOSP. You need to read it and then re-read it and then read it again. Don’t jump any steps and don’t assume you know what it will say next and skim over parts.
I won’t repeat verbatim what is in the build instructions here, however the general steps are:
- Set up a build environment – including installing the right development tools, the Java Development Kit, and getting all the paths and directories right.
- Grab the source – this is done using the “Repo” tool and git.
- Obtain proprietary binaries – some of the drivers are only released in binary form.
- Choose a target – using the “lunch” tool.
- Start the build – using “make” and Jack.
- Flash the build onto your device – using adb and fastboot.
Tips and tricks for the build process
That all sounds easy, but there are a few gotchas along the way. Here are some notes I made during the process that you might find useful:
Set up a build environment – Ubuntu 14.04 is the recommended build environment for Linux users and OS X 10.11 for Mac users. You need to install OpenJDK 8 on Linux and Oracles JDK 8 on OS X. On OS X you also need Macports installed along with Xcode an the Xcode command line tools. I used OS X 10.12 which caused a little problem with the function syscalls being deprecated in the 10.12 OS X SDK. The work around is here:
Grab the source – This is an easy step, however it takes a long time. For me it took over 24 hours. Such a large download only happens once, further syncing with the main source tree will be incremental.
Obtain proprietary binaries – The binary drivers should be unpacked in your working directory.
Choose a target – For the Nexus 5X use aosp_bullhead-user
Start the build – You start the build using make. GNU make can handle parallel tasks with a -jN argument, and it’s common to use a number of tasks N that’s between 1 and 2 times the number of hardware threads on the computer being used for the build. However, if you find your machine struggles during the build process then try something like “make -j2”.
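For example, one way to compute such an N in the shell (getconf is used here as a portable guess that works on both Linux and OS X):

```shell
# pick a parallelism between 1x and 2x the hardware threads; here 2x
threads=$(getconf _NPROCESSORS_ONLN)
jobs=$(( threads * 2 ))
echo "make -j$jobs"   # e.g. "make -j8" on a 4-thread machine
```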
If you get build errors which seem related to memory, especially about the Jack server and memory then do these two things:
- export ANDROID_JACK_VM_ARGS=”-Xmx4g -Dfile.encoding=UTF-8 -XX:+TieredCompilation”
- change the jack.server.max-service in $HOME/.jack-server/config.properties to 1
If you change any of the Jack server configuration stuff (including setting or altering the ANDROID_JACK_VM_ARGS variable) then you need to kill the Jack server and run the make again. Use ./prebuilts/sdk/tools/jack-admin kill-server to stop the Jack server.
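Both memory tweaks can be scripted. The sketch below applies them to a scratch copy of config.properties rather than the real file in $HOME/.jack-server/ (the starting value of 4 is illustrative):

```shell
# work on a scratch copy so nothing real is touched
work=$(mktemp -d)
cfg="$work/config.properties"
printf 'jack.server.max-service=4\n' > "$cfg"   # illustrative starting value

# 1. cap the Jack VM memory (add to your shell profile to make it stick)
export ANDROID_JACK_VM_ARGS="-Xmx4g -Dfile.encoding=UTF-8 -XX:+TieredCompilation"

# 2. drop jack.server.max-service to 1
sed -i.bak 's/^jack.server.max-service=.*/jack.server.max-service=1/' "$cfg"

grep '^jack.server.max-service=' "$cfg"   # -> jack.server.max-service=1
```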
If you get any communications errors related to the Jack server then just start the build again, that normally fixes it.
Flash the build onto your device – You will find adb and fastboot in ./out/host/darwin-x86/bin/ or ./out/host/linux-x86/bin/ for OS X or Linux respectively.
Flash it
Once you have a successful build and you have flashed it onto your device using “fastboot flashall -w” then reboot your device. What you will see is a vanilla version of AOSP. There are no Google services, no Play Store and only a few core apps. This is the bare bones of Android.
However, congratulations are in order. You have managed to build Android from its source code and flash it on to a device. That is no mean feat.
Customization
Now that you have Android up and running, you can start to customize it and make your own specialist ROM. This is actually where things get hard. You are about to tinker with the guts of the Android operating system, and the problem is that Android is huge. My working directory alone is 120+ GB of data. That is the source code, the graphics, the compiled binaries, the tools, everything. That is a lot of stuff.
So, start simple. Here are two simple customizations that will get you going and start you on the path to becoming an Android firmware hacker!
Customize the messaging app
A relatively easy customization is to change one of the pre-built apps. If you were to develop a full alternative Android distribution then modifying or replacing some of the core apps would be a given. In this case we are just going to tweak it, however the principles remain the same for more complex changes and revisions.
The core apps are found in the directory ./packages/apps/ and we are interested in the Messaging app in ./packages/apps/Messaging/. Drill down through src/com/android/messaging/ and edit BugleApplication.java. You can edit it with your favorite GUI editor or if you want to stay on the command line then use vi or nano.
BugleApplication.java is the entry point for the Messaging app. To keep things simple what we are going to do is add a Toast that will be displayed when the app is first started. Near the top of the file underneath the long list of import statements add this line:
import android.widget.Toast;
Now look for the onCreate() function. Towards the end of the function, before the final call to Trace.endSection(); add the following two lines:
Toast myToast = Toast.makeText(getApplicationContext(), "Welcome!", Toast.LENGTH_LONG); myToast.show();
Save the file and start another build using the make command. Once the build has finished, flash it onto your device and reboot. Start the Messaging app and look for the “Welcome!” toast. Obviously this is a simple modification, however the potential is there to modify any of the default apps, in whatever way you please.
More customization
Any self-respecting custom Android distribution must include some information about the ROM itself. To do this we can alter the built-in Settings app and add some information to the About Phone section. To do this, edit the file device_info_settings.xml from ./packages/apps/Settings/res/xml/ and add the following two sections at the bottom of the file before final </PreferenceScreen> tag:
<!-- ROM name --> <Preference android: <!-- ROM build version --> <Preference android:
Save the file and then rebuild and re-flash the firmware on your device. Once you reboot go to Settings->About Phone and scroll to the bottom:
The above alteration is a bit of a hack, as really the strings should be defined in strings.xml for English and for other languages. If you plan to do any serious AOSP development you need to do things right!
Wrap-up
The two modifications I have made are very basic and there is loads more that could be done including pre-installing other apps, adding ringtones & wallpapers, and tweaking the kernel. However I hope this has given you a taste of what is possible or at least given you an idea about how to build AOSP and tinker with the innards of Android! | https://www.androidauthority.com/build-custom-android-rom-720453/ | CC-MAIN-2018-34 | refinedweb | 1,757 | 64.41 |
How write a calculate average monthly sales program
New to FreeBASIC? Post your questions here.
4 posts • Page 1 of 1
How write a calculate average monthly sales program
I would like to know if someone could point me in the direction of how to write a program that calculates average monthly sales. I've searched the internet and have not found one using FreeBASIC.
Re: How write a calculate average monthly sales program
You need to provide more information. What is the format of your input data?
Short answer is that you add up 12 monthly totals and divide by 12.
Re: How write a calculate average monthly sales program
Here is a very simple average calculator implementation.
You'll have to enter values one by one. Hit [Enter] after each value.
No value and [Enter] displays result, then waits and thereafter exits ...
Btw: I'm fully aware that this is a beginner's way of doing it (done so on purpose).
Code: Select all
' Average.bas -- 2018-05-20, MrSwiss
'
' compile: -s console
'
Type Average
Private:
As Double sum
As ULongInt cnt
Public:
Declare Sub AddOne(ByVal value As Double, ByRef ave As Average)
Declare Sub ShowAv(ByRef ave As Const Average)
End Type
Sub Average.AddOne(ByVal value As Double, ByRef ave As Average)
' here we modify the type to latest: sum & count
ave.sum += value ' add latest 'value'
ave.cnt += 1 ' increment counter
End Sub
Sub Average.ShowAv(ByRef ave As Const Average)
' here we use the type in 'read only' mode
Print ' get values & do the math., then _
Print "Average is: "; ave.sum / ave.cnt ' display it
End Sub
' test/demo code-start
Dim As Average cAve ' use one (of above Type)
Dim As Double cValue ' temporary variable
Do
Cls : Locate 2, 1 ' clear screen | (below) get user input
Input "enter a amount, then press [Enter] (empty = show/quit) "; cValue
If cValue <> 0.0 Then ' if: new value received
cAve.AddOne(cValue, cAve) ' add freshly entered value
Else ' no new value: show result, then exit
cAve.ShowAv(cAve) ' show result
Print "press a key to exit ... ";
Sleep()
Exit Do ' end of prog.
End If
Sleep(100,1)
Loop
' test/demo code-end ' -----EOF -----
Re: How write a calculate average monthly sales program
Can only guess the format.
My guess, a list of numbers held in a string and separated by a known separator.
Suitable for not too long a list of numbers.
Code: Select all
#Include "file.bi"
#include "string.bi"
'string.bi for format
'file.bi for fileexists in loadfile
' =========== save and load text files ============
Sub savefile(filename As String,p As String)
Dim As Integer n
n=Freefile
If Open (filename For Binary Access Write As #n)=0 Then
Put #n,,p
Else
Print "Unable to load " +
'==========================================
' ===== average and split together for average ========
Sub split(DataString As String,DataSeparator As String,var1 As String,var2 As String)
Dim As Long pst=Instr(DataString,DataSeparator),LD=Len(DataSeparator)
var1="":var2=""
If pst<>0 Then
var1=Mid(DataString,1,pst-1)
var2=Mid(DataString,pst+LD)
Else
var1=DataString
End If
End Sub
Function average(DataString As String,DataSeparator As String) As Double
Dim As String s=DataString,var1,var2
Dim As Long counter
Dim As Double d
Do
counter+=1
split(s,DataSeparator,var1,var2)
s=var2
d+=Val(var1)
Loop Until s=""
d= d/counter
Return d
End Function
'==============================================
'==== examples ========
Print "average"
Print average("1,2,3,4,5,-6.97,7,8,9,10,11,12,13,1234.08",",") 'Separator is a comma
Print "check average"
Print (1+2+3+4+5-6.97+7+8+9+10+11+12+13+1234.08)/14
Print "Press a key for a list of numbers in a file (could be sales)"
Sleep
'=====================
' ====== files example =======
Randomize
Dim As String s ' a random list of sales
Dim As String tmp
Dim As Double d
Dim As Long number=20000
For n As Long=1 To number
    tmp=Format((Rnd*500),".00")+Chr(13,10) ' formatted number with carriage return and new line separator
    s+=tmp 'string list for file
    d+=Val(tmp) 'keep a double tally to check
Next n
d=d/number 'double average
'save the list to a file
savefile ("numbers.dat",s)
'load the file into w
Dim As String w=loadfile("numbers.dat")
Print w
Dim As Double x=average(w,Chr(13,10)) 'Separator is a carriage return and new line CHR(13,10)
print "list length = ";number
Print "List Average = ";Format(x,".00"), "Check: ";d;" versus ";x
Kill "numbers.dat"
Print Iif(Fileexists("numbers.dat"),"delete file manually","File has been deleted")
Print "Done"
Sleep
First, thanks to Andrew and Ian for solving the "can't link in MSVS" problem- it turns out I was linking against the multithreaded debug *static* libraries, after all.
Now, I'm trying to use vgui from within an MFC application. The mfc_example program, as well as anything I try to write, fails with the assert in appcore.cpp:
CWinApp::CWinApp(LPCTSTR lpszAppName){
.
.
.
// initialize CWinThread state
AFX_MODULE_STATE* pModuleState = _AFX_CMDTARGET_GETSTATE();
AFX_MODULE_THREAD_STATE* pThreadState = pModuleState->m_thread;
ASSERT(AfxGetThread() == NULL); //<- FAILS HERE
After some digging, I've found that I'm creating two CWinApps in my app (I think). I come thru CWinApp::CWinApp three times, failing on the third pass:
Call stack for first pass:
> mfc70d.dll!CWinApp::CWinApp(const char * lpszAppName=0x00000000) Line 226 C++
mfc70d.dll!$E2() Line 582 + 0xf C++
msvcr70d.dll!_initterm(void (void)* * pfbegin=0x7c2e6180, void (void)* * pfend=0x7c2e6184) Line 588 C
mfc70d.dll!_CRT_INIT(void * hDllHandle=0x7c140000, unsigned long dwReason=1, void * lpreserved=0x0012fd30) Line 185 + 0xf C
mfc70d.dll!_DllMainCRTStartup(void * hDllHandle=0x7c140000, unsigned long dwReason=1, void * lpreserved=0x0012fd30) Line 266 + 0x11 C
Call stack for second pass:
mfc70d.dll!CWinApp::CWinApp(const char * lpszAppName=0x00000000) Line 226 C++
> vgui_test2.exe!Cvgui_test2App::Cvgui_test2App() Line 31 + 0x2d C++
vgui_test2.exe!$E1() Line 39 + 0x28 C++
msvcr70d.dll!_initterm(void (void)* * pfbegin=0x004834a4, void (void)* * pfend=0x004835e4) Line 588 C
vgui_test2.exe!WinMainCRTStartup() Line 336 + 0xf C
Call stack for third pass:
> mfc70d.dll!CWinApp::CWinApp(const char * lpszAppName=0x00000000) Line 226 C++
vgui_test2.exe!vgui_mfc_app::vgui_mfc_app() Line 30 + 0x11 C++
vgui_test2.exe!vgui_mfc_app_init::vgui_mfc_app_init() Line 15 + 0x22 C++
vgui_test2.exe!$E1() Line 95 + 0xd C++
msvcr70d.dll!_initterm(void (void)* * pfbegin=0x004834b4, void (void)* * pfend=0x004835e4) Line 588 C
vgui_test2.exe!WinMainCRTStartup() Line 336 + 0xf C
In vgui_register_all.cxx, the following is what makes that third call:
#ifdef VGUI_USE_MFC
# include <vgui/impl/mfc/vgui_mfc_app_init.h>
vgui_mfc_app_init theAppinit;
#endif
As long as I don't declare any variables from vgui (like vgui_tableau_sptr etc.) the third call is never made (but nothing much else is gonna happen either!)
As soon as I declare something like
vgui_tableau_sptr tableau;
it bombs.
What's going on?
Thanks in advance for your help.
Steve | https://sourceforge.net/p/vxl/mailman/message/4130541/ | CC-MAIN-2017-39 | refinedweb | 363 | 61.12 |
Asked by:
How to Get SQl Server Instance name
I am doing a Windows Application in C#.net (3.5) with SQL Server 2005 as Back end.
My task is i am required to get the SQL Server's Server Name in Combo box without logging in SQL Server/ or Instance Name.
I tried to access via Registry Keys but was unsucessfull as i am hardcoding the key values.
But is there any was i can get Servername of SQL Server in Combobox.
OSQL -L did not work, how should i approach this
Question
All replies
Check with this
If this post answers your question, please click "Mark As Answer". If this post is helpful please click "Mark as Helpful".
Hello Amit, if your Sql Server verion is equal or greater to 2005 you can use the SERVERPTOPERTY function:
For example, this code can be used for that stuff:
using System; using System.Data; using System.Data.SqlClient; namespace SqlServerExamples { class Program { static void Main(string[] args) { using (var connection = new SqlConnection("Data Source = ...; User Id = ...; Password = ...;")) { connection.Open(); using (var command = new SqlCommand("SELECT SERVERPROPERTY('ServerName') AS ServerName, SERVERPROPERTY('InstanceName') AS InstanceName", connection)) { using (var reader = command.ExecuteReader(CommandBehavior.SingleRow)) { if (reader.Read()) { Console.WriteLine(reader["ServerName"]); Console.WriteLine(reader["InstanceName"]); } } } } Console.ReadLine(); } } }
Hope this helps,
Miguel.
am getting error on connection.open();
without providing data source and other parameters how can i)
- Edited by amit_kumar Friday, August 26, 2011 3:42 PM modification
Amit, Try the below function.
public string[] GetSqlInstances() { DataRowCollection rows = SqlDataSourceEnumerator.Instance.GetDataSources().Rows; string[] instances = new string[rows.Count]; for (int i = 0; i < rows.Count; i++) { instances[i] = Convert.ToString(rows[i]["InstanceName"]); } return instances; }
Hope this helps.
Please mark this post as answer if it solved your problem. Happy Programming!
- Proposed as answer by Hasibul Haque Saturday, August 27, 2011 8:01 AM
Hi,
See the below URL to get the IP address and Servername of SQL server.
Please use Marked as Answer if my post solved your problem and use Vote As Helpful if a post was useful. | https://social.msdn.microsoft.com/Forums/vstudio/en-US/7efda165-f2da-46d2-8c25-f7b093d9e0d3/how-to-get-sql-server-instance-name?forum=csharpgeneral | CC-MAIN-2015-22 | refinedweb | 347 | 57.77 |
The simplest way to use this configuration module is to use an
~/.xmonad/xmonad.hs like this:
module Main (main) where
import XMonad
import XMonad.Config.Arossato (arossatoConfig)
main :: IO ()
main = xmonad =<< arossatoConfig
NOTE: that I'm using xmobar and, if you don't have xmobar in your
PATH, this configuration will produce an error and xmonad will not
start. If you don't want to install xmobar get rid of this line at
the beginning of arossatoConfig.
You can use this module also as a starting point for writing your
own configuration module from scratch. Save it as your
~/.xmonad/xmonad.hs and:
1. Change the module name from
module XMonad.Config.Arossato
( -- * Usage
-- $usage
arossatoConfig
) where
to
module Main where
2. Add a line like:
main = xmonad =<< arossatoConfig
3. Start playing with the configuration options...;) | http://hackage.haskell.org/package/xmonad-contrib-bluetilebranch-0.8.1/docs/XMonad-Config-Arossato.html | CC-MAIN-2015-35 | refinedweb | 137 | 59.8 |
Iron.io Laravel and Workers, Microservices
We are starting to use Iron.io and their workers for a lot of the tasks that our apps need to do. For example one app needs to scan websites for images and text and report on them. In our case that is 2 workers, one with the code needed to get the text we want and the other images. Another worker runs behat tests to take screenshots and reports back to the called with the results.
Using Iron.io has made this whole process easy and scalable. One request can be for say 100 urls and with Iron.io we can run one worker per url or using the Symfony Process library we can even use a worker to run a multi-threaded processes.
Some of the resources out there like iron`s example are great. And using this library has made it super easy. Below I cover how exactly to set this up. (hopefully this week we will have a Laravel 5 version of it out)
Step 1 Install
Install 4.2 work. (5 might be ready soon)
composer create-project laravel/laravel=4.2 example_worker --prefer-dist
Set your minimum stability in your composer.json
}, "config": { "preferred-install": "dist" }, "minimum-stability": "dev" }
Then pull in the library
composer require iron-io/laraworker
And add this one patch for PHP 5.6 TODO add code snippet
and
And of course as the readme.md notes for Laraworker
php vendor/iron-io/laraworker/LaraWorker.php -i true
As the developer notes this makes a new folder and file
/worker/libs/worker_boot.php and /worker/ExampleLaraWorker.php
Step 2 Configure
We will use the .env to do configuration not the way noted in the laraworker docs so lets install that. Just use this post to set that up.
So after you are done your, as in the Laraworker docs, we need to set the queue config.
Set Iron.io credentials in app/config/queue.php and set default to iron –> ‘default’ => ‘iron’,
So yours will look like
# 'default' => getenv('QUEUE_DRIVER'), 'connections' => array( 'iron' => array( 'driver' => 'iron', 'host' => 'mq-aws-us-east-1.iron.io', 'token' => getenv('IRON_TOKEN'), 'project' => getenv('IRON_PROJECT_ID'), 'queue' => 'your-queue-name', 'encrypt' => true, ), ),
Then make your project on Iron and get the Token and Project ID
Step 3 See if Example Worker works
Lets see if the Example works before we move forward.
php artisan ironworker:upload --worker_name=ExampleLaraWorker --exec_worker_file_name=ExampleLaraWorker.php
If it worked you will see
This will upload a worker related queue
Step 4 Make our own worker!
The goal of this worker
- It will get a JSON object of the info needed to do a job
- It will do the job by getting the json file from the S3 file system where it lives (it could live in a db or other location)
- Using the JSON object’s callback it will send back the results to the caller
That is it.
This example will be used in real life to later on parse say 100 urls for already created json render tree objects of the urls data including images and text. This job only cares about the text. Cause the job is fairly easy I will be sending to each worker 5 urls to process.
Copy the worker in /workers folder to the new Worker name
Due to bad naming abilities I am calling this
RenderTreeTextGrepper.php
So now my worker folder has
RenderTreeTextGrepper.php
But I do not want that class to have all my code so I will start to build out a namespace for all of this and the 2 classes I want to manage ALL of this work.
Class 1 @fire
So the worker will fire the class I have to handle all of this.
"autoload": { "classmap": [ "app/commands", "app/controllers", "app/models", "app/database/migrations", "app/database/seeds", "app/tests/TestCase.php" ], "psr-4": { "AlfredNutileInc\\RenderTreeTextGrepperWorker\\": "app/" } },
then
composer dump
Then in
app/RenderTreeTextGrepperWorker folder I have
/projects/example_worker/app/RenderTreeTextGrepperWorker/RenderTreeGrepperHandler.php is the class to handle the incoming request and process it.
Class 2 Event Listener
Then I register the event listener with the app/config/app.php to make it easier to handle the results of the output. You can do all of this in class 1 as well.
#app/config/app.php 'AlfredNutileInc\RenderTreeTextGrepperWorker\GrepCallbackListener'
And that is it.
What is it?
So we are going to upload and run this and here is what will happen. NO WAIT!
First lets make a test so we can see locally if all the logic is there.
Local Test
Just a quick test to see if the handler will handle things and pass results
<?php class RenderTreeTextTest extends \TestCase { /** * @test */ public function should_populate_results() { $handle = new \AlfredNutileInc\RenderTreeTextGrepperWorker\RenderTreeGrepperHandler(); $payload = new \AlfredNutileInc\RenderTreeTextGrepperWorker\RenderTreeTextDTO( 'foo-bar', ['foo', 'bar', 'baz'], ['text1', 'text2'], [ 'caller' => '', 'params' => ['foo', 'bar'] ], false, false ); $results = $handle->handle($payload); var_dump($results); $this->assertNotNull($results); } }
Running this
phpunit --filter=should_populate_results
Produces this
class AlfredNutileInc\RenderTreeTextGrepperWorker\RenderTreeTextDTO#334 (6) { public $uuid => string(7) "foo-bar" public $urls => array(3) { [0] => string(3) "foo" [1] => string(3) "bar" [2] => string(3) "baz" } public $text => array(2) { [0] => string(5) "text1" [1] => string(5) "text2" } public $callback => array(2) { 'caller' => string(41) "" 'params' => array(2) { ... } } public $results => array(1) { [0] => string(21) "Listener is listening" } public $status => bool(false) } }
Of course I need to go into more testing for the two classes to see how they react to different data going in but just to see that there are not obvious issues before I upload the worker.
Upload the worker we just made
php artisan ironworker:upload --worker_name=RenderTreeTextGrepper --exec_worker_file_name=RenderTreeTextGrepper.php
And then we see on Iron.io
Then we run it
php artisan ironworker:run --queue_name=RenderTreeTextGrepper
Before that though I updated app/commands/RunWorker.php:26 to make a better payload
public function fire() { $queue_name = $this->option('queue_name'); $payload = "This is Hello World payload :)"; if($queue_name == 'RenderTreeTextGrepper') { $payload = new \AlfredNutileInc\RenderTreeTextGrepperWorker\RenderTreeTextDTO( 'foo-bar', ['foo', 'bar', 'baz'], ['text1', 'text2'], [ 'caller' => '', 'params' => ['foo', 'bar'] ], false, false ); }
We then see the Task
And the example log output
Guzzle and the Callback
How to format the callback?
Let’s require guzzle
composer require guzzlehttp/guzzle
At this point we have a working example. The queue takes the json and the worker processes it!
/projects/example_worker/app/RenderTreeTextGrepperWorker/GrepCallbackListener.php
Thanks to the library and Iron.io it really is that simple. | https://alfrednutile.info/posts/136/ | CC-MAIN-2021-17 | refinedweb | 1,078 | 54.63 |
member objects
The static member objects are not part of the object. If the static member is declared thread_local(since C++11), there is one such object per thread. Otherwise, there is only one instance of the static member object in the entire program. The static members exist even if no objects of the class have been defined. Static data members cannot be mutable. Local classes (classes defined inside functions)
If a static data member of literal type is declared constexpr, it can be initialized with a brace-or-equal initializer that is a constant expression inside the class definition (since C++11). A definition at namespace scope is still required, but it should not have an initializer:
struct X {
constexpr static int n = 1; // since C++11
};
constexpr int X::n; | http://en.cppreference.com/mwiki/index.php?title=cpp/language/static&oldid=44829 | CC-MAIN-2015-22 | refinedweb | 132 | 53.31 |
Download presentation
Presentation is loading. Please wait.
1
Lecture 12 Recursion part 1
Richard Gesick
2
Topics Simple Recursion Recursion with a Return Value
3
Simple Recursion Sometimes large problems can be solved by transforming the large problem into smaller and smaller problems until you reach an easily solved problem. This methodology of successive reduction of problems is called recursion. That easy-to-solve problem is called the base case. The formula that reduces the size of a problem is called the general case.
4
Recursive Methods A recursive method calls itself, i.e. in the body of the method, there is a call to the method itself. The arguments passed to the recursive call are smaller in value than the original arguments.
5
Simple Recursion When designing a recursive solution for a problem, we need to do two things: Define the base case. Define the rule for the general case.
6
Printing “Hello World” n Times Using Recursion
In order to print “Hello World” n times (n is greater than 0), we can do the following: Print “Hello World” Print “Hello World” (n – 1) times This is the general case. We have reduced the size of the problem from size n to size (n – 1).
7
Printing “Hello World” n Times Using Recursion
Printing “Hello World” (n – 1) times will be done by Printing “Hello World” Printing “Hello World” (n – 2) times … and so on Eventually, we will arrive at printing “Hello World” 0 times: that is easy to solve; we do nothing. That is the base case.
8
Pseudocode for Printing “Hello World” n Times Using Recursion
printHelloWorldNTimes( int n ) { if ( n is greater than 0 ) print “Hello World” printHelloWorldNTimes( n – 1 ) } // else do nothing
9
Coding the Recursive Method
public static void printHelloWorldNTimes(int n) { if ( n > 0 ) System.out.println( “Hello World” ); printHelloWorldNTimes( n – 1 ); } // if n is 0 or less, do nothing
10
Recursion with a Return Value
In a value-returning method, the return statement can include a call to another value-returning method. For example, public int multiplyAbsoluteValueBy3( int n ) { return ( 3 * Math.abs( n ) ); }
11
Recursion with a Return Value
In a recursive value-returning method, the return statement can include a call to the method itself. The return value of a recursive value-returning method often consists of an expression that includes a call to the method itself: return ( expression including a recursive call to the method );
12
Calculating a Factorial
The factorial of a positive number is defined as factorial( n ) = n! = n * ( n – 1 ) * ( n – 2 ) * … 3 * 2 * 1 By convention, factorial( 0 ) = 0! = 1 (The factorial of a negative number is not defined.) Can we find a relationship between the problem at hand and a smaller, similar problem?
13
Calculating a Factorial
factorial( n ) = n! = n * ( n – 1 ) * ( n – 2 ) * … 3 * 2 * 1 factorial( n - 1 ) = ( n – 1 )! = ( n – 1 ) * ( n – 2 ) * … 3 * 2 * 1 So we can write: factorial( n ) = n * factorial( n – 1 ) That formula defines the general case.
14
Calculating a Factorial
factorial( n ) = n * factorial( n – 1 ) At each step, the size of the problem is reduced by 1: we progress from a problem of size n to a problem of size (n – 1) A call to factorial( n ) will generate a call to factorial( n – 1 ), which in turn will generate a call to factorial( n – 2 ), …. Eventually, a call to factorial( 0 ) will be generated; this is our easy-to-solve problem. We know that factorial( 0 ) = 1. That is the base case.
15
Code for a Recursive Factorial Method
public static int factorial( int n ) { if ( n <= 0 ) // base case return 1; else // general case return ( n * factorial( n – 1 ) ); }
16
Common Error Trap In coding a recursive method, failure to code the base case will result in a run-time error. If the base case is not coded, the recursive calls continue indefinitely because the base case is never reached. This eventually generates a StackOverflowError.
17
Greatest Common Divisor
The Greatest Common Divisor (gcd) of two numbers is the greatest positive integer that divides evenly into both numbers. The Euclidian algorithm finds the gcd of two positive numbers a and b. It is based on the fact that: gcd( a, b ) = gcd ( b, remainder of a / b ) assuming: b is not 0
18
GCD: Euclidian Algorithm
Step 1: r0 = a % b if ( r0 is equal to 0 ) gcd( a, b ) = b stop else go to Step 2 Step 2: Repeat Step 1 with b and r0,instead of a and b.
19
GCD Example: Euclidian Algorithm
If a = and b = 60378, then … % = 2694 (different from 0) % 2694 = 1110 (different from 0) 2694 % 1110 = 474 (different from 0) 1110 % 474 = 162 (different from 0) 474 % 162 = 150 (different from 0) 162 % 150 = 12 (different from 0) 150 % 12 = 6 (different from 0) 12 % 6 = 0 gcd( , ) = 6
20
GCD Code { if ( dividend % divisor == 0 ) return divisor;
public static int gcd( int dividend, int divisor ) { if ( dividend % divisor == 0 ) return divisor; else // general case return ( gcd( divisor, dividend % divisor ) ); }
Similar presentations
© 2017 SlidePlayer.com Inc. | http://slideplayer.com/slide/3225024/ | CC-MAIN-2017-17 | refinedweb | 853 | 56.29 |
Is Apache Or GPL Better For Open-Source Business? 370
mjasay writes "While the GPL powers as much as 77% of all SourceForge projects, Eric Raymond argues that the GPL is 'a confession of fear and weakness' that slows 'pure' GPL-only open-source projects, as GPL-prone developers have to modify."
Doesn't really matter (Score:5, Informative)
GPL or Apache doesn't really matter -- what matters is whether you can make money. The essential question is whether the software is a tool you use or the product you sell. If it's just a tool, the GPL makes sense, so you get contributions back. If it's the product itself, neither GPL nor Apache makes sense.
Exactly -- is the software the means, or the end? (Score:4, Interesting)
And there you hit the nail on the head. If the software is the means to some other end, then yes, the GPL or some derivative would seem to make the most sense, in order to ensure that any improvements someone else might come up with are propagated back into the main branch. I would wager that this holds true for most FOSS projects -- and the SourceForge figures of 70% of projects using the GPL would seem to back this up.
But if, as you note, the software is the end in itself, if it is the product one is trying to sell, then proprietary is really the only way to go, simply from the perspective of locking others out.
And therein lies the crux of the conflict -- those keenest to use any piece of software are also keenest to see it spread and improve as quickly and efficiently as possible, while those trying to sell any piece of software are less interested in improvements than in maintaining exclusive control. These would appear to be orthogonal goals. The alternate model of giving the software away for free and charging for service instead adds an interesting wrinkle to the equation.
Cheers,
Re:Exactly -- is the software the means, or the en (Score:4, Insightful)
i.e. you want to have your cake and eat it too. i.e. dual-license schemes like MySQL's. i.e. you want to sell your GPL code.
For the "owner" of the code, yeah--especially if you are extra weaselly and require copyright assignment. For contributors, it is a scam. Why the hell should I contribute to your dual-licensed garbage so you can turn around and profit from my work? I never understood why such companies aren't hassled more about this. It is really a great scam--you get a bunch of people contributing to your work for free and you get to sell it all. Course, I guess the same holds true for most things on the internet--flickr doesn't take pictures, its users do and flickr profits from that. Slashdot doesn't have a script to write comments, we write them and they profit from that. So I might be wrong on this... but the dual-license guys seem way more blatant, probably because I get a lot of satisfaction posting here, but don't really get much satisfaction contributing to some faceless corporation's open source project.
A sucky one, though. I doubt many programmers on this board want to be in a position where the work they produce for a company is essentially worthless and the way to move up is through the tech support department. I also doubt customers would benefit, since giving away the software and charging for support creates an incentive to make shoddy software that requires a lot of hand-holding.
Re: (Score:2)
I gather Slashdot doesn't make a lot from us writing comments. They make money from the advertisements we see while browsing. Comments are just Slashdot's way of fostering a community and increasing page counts.
Re: (Score:3, Insightful)
They don't make much from us clicking on the ads either. People who bother to register accounts become blind to them.
Slashdot doesn't directly make money from us writing comments. They indirectly make money from us because our comments give people a reason to visit. Without them, the website wouldn't be interesting and nobody would visit... thus making this place unattractive to advertisers.
Re:Exactly -- is the software the means, or the en (Score:5, Interesting)
It's actually pretty difficult to contribute to dual-license GPL projects - they'd rather do the changes themselves and not risk legal hassles. What they want is bug reports.
When I was maintaining the ISC DHCP distribution (BSD license, BTW), I dreaded getting large patches, because I'd have to go through the whole damned patch and figure out what it actually did, and correct it. I much preferred bug reports. The idea that patches are why people open source things is really a red herring - sure, if you get a regular contributor who's really good, you can start to trust their work, but that only works for projects like Linux where you have a huge number of interested geeks.
So really, what's going on with a dual-licensed model is that the owner of the copyright is using the FUD of the GPL to get people who don't trust open source or don't want to open source their own code to pay for non-GPL copies. At the same time they are offering the GPL version to the community of people who like the GPL, which spurs adoption. It's a win for everyone.
The problem with the BSD license is that the only way to get money out of it is charity, because there is no license FUD. Nothing wrong with charity, but it can make paying the bills a bit difficult.
Re:Exactly -- is the software the means, or the en (Score:4, Insightful)
Many dual-licensed projects are perfectly happy to accept patches, as long as you sign over the copyright or the right to relicense, and swear that you are the author of the work and don't know of any infringements in it.
The problem is motivating people to sign that. I think I know a way, but I'm still working on the product for my new company so don't yet have proof.
Re:Exactly -- is the software the means, or the en (Score:4, Interesting)
That is possible. What I am thinking of is a covenant to continue the development as Open Source for two years after the contribution, or to remove the contribution.
Re:Exactly -- is the software the means, or the en (Score:5, Insightful)
There is an easy answer to this. Don't make the software itself your business. Most successful Open Source applications are made by Open Source projects in which businesses participate, not by businesses whose goal is to make the software.
Am I saying that Open Source business doesn't work? Most of the time it does not. It depends on what you are doing.
Re:Exactly -- is the software the means, or the en (Score:4, Insightful)
Re:Exactly -- is the software the means, or the en (Score:4, Insightful)
Read the complaint [fsf.org]. Don't do the stupid obvious license violations alleged in the complaint. Then you'll be fine.
Nobody violates a Free Software / Open Source license for a smart reason. Cisco hasn't got their compliance act together.
Re: (Score:2)
How many people know the difference between the Apache license and the GPL? How many are more likely to adopt a project because it chose Apache over the GPL? And, most importantly, how many people are dealing with your crappy project anyway? I don't have the stats, but I can only imagine how many inactive, hardly active, or active-but-never-used projects there are on SourceForge. The difference between licenses to 99% of those projects is zero.
Re: (Score:3, Insightful)
FWIW, I'm approx. familiar with both licenses, and I expect most FOSS developers are. I prefer GPL... actually, these days I'm leaning towards either GPL3 or AGPL, but I recognize the Apache license as a good one. Just one that's a bit more open to being ripped off than I prefer.
For a business, it seems to me the important consideration would be what you want to do. You need the right code, and it's a lot easier if you can just slightly modify something already done. And in that case you must use whatever
Re: (Score:2, Insightful)
That's an odd view... personally, I think what really matters is that you can't make money with the code. Money comes from controlling a resource that is scarce. Money requires poverty as a precursor. Wealth comes from abundance.
The argument in the article misses the point when it keeps talking about code quality, efficiency and market forces, because the GPL isn't about creating higher quality code. The GPL is about protecting something that is naturally abundant from the corrupting influences of law
Re: (Score:3, Insightful)
If it's just a tool, the GPL makes sense, so you get contributions back.
The GPL doesn't make sense if your software gives you a competitive advantage, because by releasing your code under the GPL, you relinquish that competitive advantage.
Re: (Score:3, Informative)
Dual licensing is a lot of work.
First, you need to verify that everyone who contributes knows the product will be dual licensed and that all their contributions will be licensed as such. Second, you need to ensure that single-licensed GPL code doesn't leak into the commercial version.
Third, you will always need to make sure the commercial version is just as good as, if not better than, the GPL one, so if you need to replace a GPL-only module, the replacement had better work just as well if not better. As people are paying for the commercial version a
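The second point, keeping single-licensed GPL code out of the commercial tree, is the kind of thing worth automating. Here is a minimal sketch of such a check (the directory name is hypothetical, and it assumes GPL files carry the usual "GNU General Public License" notice near the top of the file):

```python
import os

# Phrases that typically appear in a GPL license header.
GPL_MARKERS = ("GNU General Public License", "GNU GPL")

def find_gpl_files(root, head_lines=50):
    """Return paths under `root` whose first `head_lines` lines mention the GPL."""
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    head = "".join(f.readline() for _ in range(head_lines))
            except OSError:
                continue  # unreadable file: skip rather than crash
            if any(marker in head for marker in GPL_MARKERS):
                hits.append(path)
    return hits

if __name__ == "__main__":
    # Fail a release build if any GPL-marked file sneaked into the paid tree.
    for path in find_gpl_files("commercial-src"):
        print("GPL-marked file in commercial tree:", path)
```

A real check would key off SPDX identifiers or a contribution manifest rather than grepping headers, but even this much catches the obvious accidents.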
Re: (Score:3, Insightful)
I'm in a business where we welcome GPL-licensed apps with open arms. Of course, we don't sell software, we sell services and expertise. Any idiot can set up a web server and mail drop, and they are free to use the same tools we use. It takes a bit more dedication to do a kickass job of it, and that's where we stand.
If a business feels "threatened" by the GPL, maybe they need to stop selling artificially-rarefied bits. That business model has been slowly collapsing for nearly 30 years.
Re: (Score:3, Informative)
I'm in a business where we welcome GPL-licensed apps with open arms. Of course, we don't sell software, we sell services and expertise.
Well, many people sell services and expertise and they use non-GPL products. For instance, the URLs below will take you to people who sell services and expertise in *BSD systems. [freebsd.org] [openbsd.org] [netbsd.org] [ixsystems.com]
It doesn't matter all that much (Score:2, Insightful)
The source availability provisions that come with distributing GPL software are a small pain for companies that want to make use of open source software, but that's about the biggest difference.
Anyway, over time, it will become obvious how big a concern the copyleft is to businesses.
Re: (Score:3, Interesting)
I work for a company that makes closed source software. We have a few pieces of core code we're not willing to open. But to make that stuff useful, we integrate with vast amounts of other tools and libraries that aren't our critical core and that we're perfectly happy to share with others.
So the first thing we look for when we need some particular functionality is a BSDish license. We can use it however we need to, but we and others can all share our improvements. As a re
Short Term vs. Long Term Thinking (Score:3, Insightful)
Without people like RMS fighting for the cause, I don't think the center would have moved so far towards FOSS today.
Supporting the GPL in business is tougher, but it is also true that the benefits a company derives from open software are benefits it won't be able to reap in the future if the world turns back towards less free licenses.
Who's business? (Score:3, Insightful)
If you are making money developing the software, the GPL with a dual license is a feature, not a bug:
"Hey Mr Customer, you can have it for free under this GPL thingy, or pay us $$$ and do whatever you want with it"
If you want to make money modifying the software, the GPL is a disaster.
Re:Who's business? (Score:4, Insightful)
You have identified the major points though:
The GPL does not preclude the open source community from forking and out-innovating me. But any innovation done has to be done in the clear, assuming those changes are beyond "customizations" for a single customer.
If it's not your CODE... (Score:2, Informative)
It isn't
"If you want to make money modifying the software, the GPL is a disaster."
It's if you want to make money modifying SOMEONE ELSE'S code, the GPL is a disaster.
If it's your own code, you can add non-GPL bits to your code and still make monopoly rent.
If it contains someone else's code under GPL, you can still make money from the modifications, but you won't make a monopoly rent from it.
Re: (Score:3, Informative)
It's if you want to make money modifying SOMEONE ELSE'S code, the GPL is a disaster.
As always, it's about what exactly you are doing: if you want to monopolize someone else's code, then you're right. But imagine some user (a company) has software whose code is available under the GPL. To get improvements, they can ask you instead of the original author, and you can make money by modifying his code.
Re: (Score:2)
X% of your customers demand GPL.
Y% of your customers don't care.
Z% of your customers want closed source.
The question is: what are X, Y, and Z in your field of dreams (err, business plan)?
More businesses and governments are demanding GPL, but is that share big enough for you?
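To make that segmentation concrete, here is a toy calculation with purely invented numbers; X, Y, and Z are whatever your own market research says, not anything from this thread:

```python
# Hypothetical market segments as fractions of potential customers.
demand_gpl = 0.15    # X: will only adopt GPL-licensed software
indifferent = 0.60   # Y: don't care about the license
want_closed = 0.25   # Z: insist on proprietary terms

# Which segments each licensing strategy can reach.
reachable = {
    "GPL only": demand_gpl + indifferent,
    "closed only": indifferent + want_closed,
    "dual license": demand_gpl + indifferent + want_closed,
}

for strategy, share in sorted(reachable.items(), key=lambda kv: -kv[1]):
    print(f"{strategy:13s} reaches {share:.0%} of the market")
```

Revenue per segment differs, of course (the GPL-demanding slice may pay nothing directly), which is exactly why reach alone doesn't settle the question.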
It depends on what you're trying to accomplish (Score:5, Insightful)
If you're trying to get a protocol or "standard" of some kind as widely adopted as possible, then you should use a more permissive license (e.g. BSD, MIT, Apache). If you want people to embrace your product but then have to buy a license from you if they want to modify it in any proprietary way, you use the GPL.
It's basically a business question of whether you plan to make money DIRECTLY from the code (i.e. GPL), or whether you have ulterior motives for making money elsewhere (i.e. Apache). For examples of the latter, most of the largest permissive-licensed projects (Apache, Firefox, etc) are bankrolled by Microsoft competitors as a means to block Microsoft from having full monopoly power in a particular niche.
This really is a TIRED and boring flamewar. There simply is no "one license to rule them all". It depends on what you're trying to accomplish.
Re: (Score:3, Insightful)
My best guess for making money off open source is still support. If you provide pay-only features then you've got to be better than the very best programmer in the open source community. You'll always be in an arms race trying to introduce
It's difficult to make money with support (Score:3, Insightful)
Say, in theory, that you decide to fund your company by supporting a single Open Source product. Put yourself in the customer's place:
The customer will have to spend a lot of time just figuring out what is breaking, so that he or she knows who to call.
The customer will then have to spend additional time proving to the vendor that something is broken in their product, while the vendor points elsewhere: hardware, OS, someone else's product.
The customer will have to manage integrating all of these piece-mal
Re:It depends on what you're trying to accomplish (Score:4, Insightful)
Re: (Score:3, Interesting)
To make sense of this you have to think about quid-pro-quo. If the company has contributed a lot of code under the GPL, and expects to continue to do so, having a way for the company to pay for that is a fair quid-pro-quo for the company.
If the contributor gets something back for their code contribution, that is fair for the contributor. The problem is that most dual-licensing projects don't even promise the code will continue to be free for one day after your contribution! And a few years back, Sun made a
Re:It depends on what you're trying to accomplish (Score:4, Insightful)
This is what happens when people used to getting attention miss the attention when their 15 minutes is up.
No one has paid attention to Eric Raymond in years, so now he has to start a flame war.
Re: (Score:2)
Amen to that! The editors of /. must have felt like rousing the old BSD vs. GPL debate one . . . more . . . time.
GPL offered protection from competitors (Score:5, Insightful)
One thing the GPL offers that BSD-type licenses don't: protection from competitors. When a business releases its code under a BSD-type license, its competitors are free to take that code and expand upon it to make new products while keeping their code secret. As a business that means you're always giving to your competitors, but they don't have to give anything to you in return. Under the GPL, you're never left holding the short end of the code-exchange stick. The only way a competitor can use your code without letting you use any improvements he makes is to not make any changes to your code at all. But if he's not making any changes or enhancements, you always have the first-mover advantage and he'll never be able to offer anything you aren't already offering. From a business standpoint, if you're going to open the source code at all, the GPL provides assurance that the only way your competitors can hitch a free ride is if they accept always being in second place behind you when it comes to new features.
That's assuming you can open the code in the first place. For code that's not critical to your business it's an easy answer. If the code is critical to your business, the first question you need to ask is whether or not you can open it to the world in the first place. Opening it means the entire world can see the exact thing that sets your business apart from others in that case, OTOH it also means the entire world can offer improvements and that means you're effectively getting a development department not even giants like IBM and Microsoft can afford for free. Keeping it closed means you can avoid revealing the keys to your success, OTOH it also means there's huge amounts of useful software out there that you can't use and will have to pay to get (either in cash to buy commercial versions or in time to duplicate the functionality). I can't say whether the trade-off's worth it for any particular business or not, but as a businessman you'd better be asking that question and getting a solid, well-grounded answer to it.
Re: (Score:2)
Re: (Score:3, Insightful)
That's actually not true. There's no obligation in the GPL for your competitor to give you any of their source code unless they, or one of their customers, or somebody else downstream, redistributes the code to you. Since they are allowed to charge a fee for the software, you might find yourself having to pay to see their code changes. Or if you can't find anybody prepared to give you or sell you a copy of the software, you may never get the changes.
Re:GPL offered protection from competitors (Score:4, Insightful)
OTOH it also means the entire world can offer improvements and that means you're effectively getting a development department not even giants like IBM and Microsoft can afford for free.
Not really. That's the theory, but in reality what it means is that nothing prevents such a development department from forming spontaneously. In reality, many open source projects languish because no one is interested in developing for them, and there's no management in place to guide the developers to plan a roadmap for the project. High-profile, successful open source projects like Linux, Mozilla, and Pidgin didn't happen by accident.
You have to have people who understand the project, its purpose and goals, and have technical expertise in coding, and who are interested in contributing to the project and see a need to do it, or who are paid to do so.
If you're a company and you want to foster this sort of environment, one of the best things you can do is set aside some budget to pay coders for contributions that make it into the trunk of the project, or, you know, hire a few full-time developers to work on your project.
Simply putting the code out there and wishing isn't going to get you very far. Although, at least that way, when you go out of business, anyone who used to depend on your company for support can come along and pick up the project code and do something with it. Which is better than nothing, I guess. Far better to fertilize your project by putting incentives out there for programmers and users to take interest, than to simply open the codebase up and wait for magic to happen.
Re: (Score:2)
Re: (Score:3, Insightful)
The whole argument about which is more free is lame semantics.
I use the GPL because it does what I want. Whether you call that "freedom", "restrictions" or "communism" is completely irrelevant.
I don't choose a license because of its freedom value, but because it does what I want.
Apache or GPL? (Score:5, Insightful)
Yes.
Debian's 'free' repo rebut (Score:2)
My example is the number of packages in the Debian Free repository. There are, no doubt, quite a few licenses among those packages, but they meet Debian's high standards for Free software.
Businesses will always do their best to capture all of the value of the work of others. There is no end to schemes meant to capture the value of GPL software and prevent others from using it. Tivo's kernel hack comes to mind...
Embrace and extend all over again? Raymond's FUD? (Score:4, Insightful)
How is having the ability to close down the product a better freedom?
Actually, with the GPL you can dual-license, since it's your own software, and thus have a free GPL version and then a privately extended version, if that is what your business is looking to do...
With BSD, well, any competing company can do the same and then compete against you with its own proprietary version; how is that helping you?
Raymond argues that GPL is bad because it's an uncertain license... what?
If Cisco can't read that it has to distribute source code, well, that is a shame with all their lawyers.
Anyone else KNOWS what they have to do. So there is no ambiguity there!
Same goes for Google's Android going with the Apache license...
Basically, in that super-proprietary cell phone world, they are more than happy to have it under a BSD-like license.
Now every company can build an OS together and each close it off on their side, leaving you, the user, with nothing out of that openness except the base system, which might well be unusable.
See how free/useful the free part of Mac OS X is compared to the full product? Haha!
...
So they save on development costs, like they would have with the GPL, but remove any guarantee that this investment will remain open in the future.
It's like a trap to win the cellphone OS race and then, when it's too late and they have such an insurmountable market share, they close it and we go back to business as usual...
As a user I can't trust that; I'll go with Maemo, OpenMoko or anything that has as much GPL as possible, if I have a choice!
(which right now I don't really have, but this year is going to be interesting! I hope...)
Re: (Score:2)
From my understanding, if you own all the copyright you have the right to republish under different terms and extend all you want, which of course doesn't remove the original GPL source code from circulation.
Like Qt did, and I think MySQL and probably others.
Re: (Score:3, Informative)
After my last comment, I went to look a bit at Wikipedia, and what do we get: [wikipedia.org]
Re: (Score:3, Informative)
I'm sorry, but why exactly are you implying that you can't dual-license with BSD as well if you own the code, and accomplish the same thing? A public BSD-licensed version and a private proprietary-licensed version with your extensions to the core.
You're right, of course, but what the GP failed to mention (but probably thought about) is that with the GPL you can have a dual-licensed product *without* proprietary extensions and still have a workable business model, all thanks to some businesses' fears of open-sourcing their own code. Not so with BSD.
With Mac OS X you're confusing two parts of the system as if they were one. The OSS portion of OS X is perfectly usable.
But how useful is it? the F/OSS part has almost none of the features that are commonly associated with OSX, can run practically none of the OSX-exclusive applications, and I've yet to see a single reason to
If all GPL code was Apache... (Score:5, Insightful)
...it'd be better for business, at least in the sense that more people would find commercial opportunities with it. But would that code be open source in the first place, were it not for the GPL? I doubt it. Most companies don't want to give away source competitors could put directly in their proprietary products. Give away GPL code? To use it means the competitors would have to open source their application, turning them into a service and support company rather than product sales, where you'll beat them on accrued skill and experience. I'd also say that a lot more individual contributors subscribe to "share and share alike" than "share and kthxbye". My point is that it's not like you got two equal options, either you use GPL code or you have to write it yourself because there is no such Apache code. Would be nice if there were, but then I'd like a pony too.
...then IBM wouldn't be into OSS at all (Score:4, Insightful)
If you look at big companies like IBM who have really embraced OSS, they have done so precisely because of the GPL. The GPL is really the only license that makes a lot of business sense. The GPL has two major advantages over other licenses. First since you own the copyright you can dual license the code as proprietary and GPL if you wish, while making sure that code can continue to be developed by a community and protected from exploitation---the only caveat here being that you have to make sure copyrights are always assigned to you, something that many projects do. The second major advantage is that no company can use your code against you in a competitive manner. The playing field is completely level. If improving your code helps a competitor, it also helps you. Given all this, if I was a commercial company, wanted to have my projects be open source, and I owned all the copyrights, then it's a no brainer. the GPL is the only way to go. It seems like the only time people complain about the GPL is when they don't happen to have a natural copyright to the code and for some reason feel some sense of entitlement to code (if it's open source I should be able to use it how I want, dang it) just because it's OSS. It's very bizarre.
Frankly I'm surprised to hear of such blatant FUD coming from someone like ESR. I think the solution to FUD is to be a bit more vocal about defending what the GPL is actually about and how it protects users, developers, *and* commercial corporations. It's not public domain software. It's source code just like source code from any other source. If it's not yours and you don't want to abide by the license, buy rights to the code or stop complaining.
Re:...then IBM wouldn't be into OSS at all (Score:4, Insightful)
Frankly I'm surprised that YOU'RE surprised. ESR's a fruitcake, and he's been spewing this kind of idiocy for years.
Re: (Score:3, Informative)
Hurd it all before (Score:2)
This is of course a centuries old debate. GPL projects have the patience, confidence, and self-respect to wait for the right business to come along, one that will make a real commitment to a long-term relationship and honour its responsibilities. Only then does it get the source.
On the other hand you've got the projects with the much more liberal BSD or Apache licenses, projects so desperate for attention that they'll jump into bed with any business, and give up their source at the drop of a hat, not cari
Tell you my "stragetgy" (Score:3, Insightful)
Eric is basically right. I've been burned in the past, so I now pay attention to the license an application uses (something you should get into the habit of doing).
Here is my decision tree for deciding to use an application licensed under any FOSS license:
1) If I plan to modify the application in any way, or use it as a library, it has to be under a BSD-derived license. This means BSD, MIT, Apache, MSPL, Perl's Artistic License, or anything similar. GPL, or any "viral" license, is out... I don't touch GPL code anymore (actually, this is a lie, see below).
1.1) There are exceptions to the "used as a library" rule. If everybody else is using said library in their application (eg: libmysql), nobody is gonna try to GPL-ize my whole application. And if they do go after me, it will only be because I'm so successful that I become a target for such nonsense. If your library is nothing more than a CPAN module and it is GPL, I can't use it, sorry.
2) If I don't plan to modify the application for use in my project, the license becomes less important. In these cases, I look at other factors such as how active the project is. I don't like depending on projects that haven't been touched since 2005.
3) If your application or code will become a non-linked dependency of my application (for example, a GPL'd version control system), I don't really care what the license is. Since it isn't linked into my application, I won't get "infected". In fact, I might even contribute to your GPL project provided my contributions are independent works and don't come out of my own "toolkit" so-to-speak.
4) If you require me to assign copyright to you before I can contribute, you are a scam and can piss up a rope. Granted, many of the big boys require this (most GNU stuff, Firefox(?), MySQL) and so I might be willing to cave in and contribute anyway, provided what I'm contributing is an important bugfix and doesn't erode ownership of my personal toolkit (i.e. the good stuff). The scam guys are companies who want ownership so they can cook up dual-license schemes and profit from your work (MySQL). Scammers can pay for their own bugfixes...
Bottom line, I won't touch GPL for anything that might make my mainline code become a derivative work and force it all to become GPL'd. BSD-ish licenses cannot do this to my mainline code, so I can use their stuff and contribute anything I think they will find useful. GPL doesn't let me cherry-pick useful stuff out of my code, so they miss out on some pretty cool things. Since I don't like leeching from GPL stuff (using it, but having no way to give back), I just avoid it instead.
In other words, if you GPL your project, $SUPER_BIG_COMPANY can't lift your code and make $MILLIONS$ but only at a heavy cost--the pool of people who are able to work on your project becomes much, much smaller. BSD-style licenses are attractive to business precisely because business knows they can contribute changes without getting into trouble. If I use a BSD anything, I know that I have the option to deeply embed the code into my application, still be able to contribute back any changes, and retain control over my intellectual property. GPL reduces control over my IP and thus I can only depend on it in the loosest way possible. The second I want to make any contributions, depending on how I used the GPL code, my entire portfolio might be in legal jeopardy. Not cool.
PS: IANAL
Re:Tell you my "stragetgy" (Score:5, Insightful)
I always thought that was the idea -
"If you want to use my stuff in your project, you have to open it. Feel free to write your own if that doesn't fit in with your plans"
Re: (Score:3, Interesting)
Your assertion that I'd have to suffer the "punishment" of writing my own is a false dichotomy that hinges on me either being able to "write my own" or use GPL code. This isn't the case.
GPL doesn't have a monopoly on open source. GPL has some real competition from alternative open source licenses. Unless your GPL code has a very compelling reason for me to use it, I'll pick the BSD code every time. If enough people do the same thing, the
Re:Tell you my "stragetgy" (Score:4, Informative)
Good for you.
I, personally, don't see many companies getting behind BSD. Neither would I, personally, want to contribute to a piece of code that could be taken, used, altered and distributed in a closed way with no recourse to having people open it.
The fact that you say I can trust you to contribute back does not help.
There will always be a significant number that think my way, and a significant number that think yours. Just don't pretend like I'm losing out by not doing things your way, it's by design and it's very simple.
Re: (Score:3, Interesting)
See, you are rational. Your "team", like any, has plenty of zealots on it that are more than happy to mod "my team" down, or call me a leech, or whatever. I think part of the reason you guys get more traction is 'cause you have more loud-mouth activists. BSD folk are pretty chill, so we don't get much visibility.
Of course you have RMS. We've just got that OpenBSD guy Theo. I'll take Theo 'cause at least nobody outside of a small group has heard of him :-)
Re: (Score:3, Insightful)
Your assertion that I'd have to suffer the "punishment" of writing my own is a false dichotomy that hinges on me either being able to "write my own" or use GPL code. This isn't the case.
It's true there are other open source licenses, but if you're looking a GPL project, there's only one of that project, and it might be the only one (or the best one) that suits you. If you can find an alternative, with a license more suitable for your needs, go ahead.
Don't think of it as "punishment." If I write something and GPL it, and you want to use it, I'm allowing you to use it provided you fulfill the conditions I establish. You do the same thing with your super-duper proprietary source portfolio
GPL projects have greater contributor diversity (Score:4, Interesting)
Here is political scientist Steven Weber, writing about the tendency toward different governance styles for projects using BSD-like and GPL licenses (when he writes "Linux" he means it as an exemplar, not the only instance):
Weber, Steven, The Success of Open Source, 2004, pp. 62-63.
Assuming his claim is true, this may be because developers see the two licenses differently (e.g. contributors may feel greater incentives to contribute to GPL projects, or they may have principled reasons, or it may have to do with their identity and membership in a community). Or it may be because of the kinds of projects that pick the licenses: the typical BSD structure he describes mirrors that of big companies, perhaps because they tend to choose such licenses. Personally, I suspect all of these factors contribute. But then I find them to be compelling reasons to pick the GPL.
Re: (Score:3, Insightful)
The second I want to make any contributions, depending on how I used the GPL code, my entire portfolio might be in legal jeopardy.
Firstly, "making contributions" does not normally trigger the GPL.
Secondly, the GPL does not put your portfolio "in legal jeopardy". The worst case scenario is that you have to remove (somebody else's) GPL'ed code from your portfolio.
Finally, it is copyright law which makes this a requirement, not GPL.
Re:Tell you my "stragetgy" (Score:4, Insightful)
In other words, if you GPL your project, $SUPER_BIG_COMPANY can't lift your code and make $MILLIONS$
They can't "lift" it as in deprive you of it, but they couldn't do that with BSD or for that matter the good old public domain, either. But they very much CAN produce a competing product based on it, then make millions supporting it while your product fails because it's not as good. You don't understand the GPL or copyright at all for that matter if you don't understand the distinction between theft and copyright infringement.
GPL reduces control over my IP and thus I can only depend on it in the loosest way possible. The second I want to make any contributions, depending on how I used the GPL code, my entire portfolio might be in legal jeopardy. Not cool.
I'm not sure why you think you should be able to use GPL code without respecting the wishes of the rights holders, but "in legal jeopardy" is a bit of an overstatement. Only the pieces in which you used GPL code will be up for debate. Even then you need only remove the GPL code and replace it with something else; it's not like you need to start over.
You may or may not have points in the rest of your comment, but this part is pure FUD. You never have to do anything to comply with the GPL until you distribute something that contains GPL code. This is considerably more free than the default situation in which you are not permitted to use someone else's code at all. Complaining that you have less rights over someone else's code than when they license under BSD/MIT/Artistic/whatever is true but obvious and thus uninteresting.
Re:Tell you my "stragetgy" (Score:5, Interesting)
Re: (Score:3, Informative)
As always with these sorts of rants:
Please name exactly the GPL project that you would use if it was BSD licensed. You may need a pretty elaborate and bullet-proof explanation as to how it is impossible for you to use this project without releasing your source code as well.
In reality all reusable code is LGPL or linking-exceptions or BSD or whatever. You list the reasons developers do this, but you seem to like turning it into an anti-GPL rant, rather than realizing that your own arguments are exactly what
Re:That's quite a rant... (Score:4, Insightful)
But the second you want to distribute it, anything that the GPL considers a derivative work becomes GPL. And *that* is why some people, like myself, prefer to avoid GPL.
The only way that something you write can "become GPL" is for you to choose to license it under the GPL. There is no other way under heaven or earth for this to happen. If you've heard otherwise, you've been misinformed.
Using the GPL takes away the *option* of ever being able to distribute our work without making it GPL.
You seem to be saying that if I choose the GPL as the license for my software, I'm removing your ability to redistribute software that you derive from mine under a non-GPL license. If so, yes, that is correct. That is the price I'm charging for allowing you to derive from my work. (If you ask me with a good reason, I might allow something else, but that's the default.)
If I start using the GPL in my code, my option to distribute my codebase under a license of my choosing goes out the window
No. You can distribute your code under the GPL and then switch to another license at any time for future versions. What you cannot do is redistribute my GPL'ed code under the license of your choice.
Protect Forking or Merging? (Score:2)
The difference between the GPLv2 and BSD is simple:
So in the end, the choice is whether forking or merging is more important to you. Forking may mean more people can use your code. However others would argue the
Would the best Linux still be free without GPL? (Score:4, Insightful)
I think we have to ask: What has the GPL done for us, or at least probably done for us?
Starting a decade ago several very large corporations poured significant resources into Linux development, and were compelled to keep their contributions open-licensed and essentially free (as in beer).
Do we think that would have been the case if Linux had been Apache- or BSD-licensed, or would we instead see a division into a deluxe IBMLinux (that works on multiprocessors and new chips and 64-bit) and an open Linux that scrapes along on simple 486 hardware?
Just look at Unix (Score:3, Interesting)
there's something weird going on (Score:2)
I prefer Bruce Perens's approach (Score:2)
As much as I appreciate Eric's contributions to open source (his early writings helped get me interested in writing open source), and our shared interest in shooting guns -grin-, I prefer Bruce Perens's approach (as I wrote about recently [markwatson.com]).
I think that a combination of AGPL, a less restrictive license like MIT or Apache 2, and perhaps something like the LGPL cover most needs, and the fewer licenses the better. The point of the article
It's simply a matter of business (Score:4, Informative)
The only reason people ask this question is because they take a simplistic "one fits all" view of Open Source.
A great many ways have been tried to make money from Open Source. Dual-licensing is one of the best. It requires a strong copyleft license.
On the other side, if you are investing your own time, without pay, in an Open Source project, having folks run away with it in their commercial product makes you feel like an unpaid employee with no rights. So, a lot of people use the GPL because of that.
Apache or BSD licensing is really good if you want everyone to use your stuff regardless of what they do with it. There are many strategic reasons to do that, for example if you are trying to evangelize a standard way of doing things (that, perhaps, ties into some other aspect of your business and will eventually make you money).
Companies that apply BSD or Apache licensing to their products are really severely limiting how they can possibly make money from that product. Having seen some of these companies fail (I've not been directly involved in one, yet) it sounds like a bad idea.
The company I'm working on now does use dual licensing.
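In practice, the dual-licensing model described above tends to show up as a file header like the following sketch. The company and product names are hypothetical and the wording is illustrative only, not actual license text:

```c
/*
 * Copyright (C) 2009 ExampleSoft, Inc.  (hypothetical company)
 *
 * This file is part of ExampleDB and is dual-licensed:
 *
 *   1. under the GNU General Public License version 2, as published
 *      by the Free Software Foundation; or
 *   2. under a commercial license purchased from ExampleSoft, Inc.,
 *      for use in proprietary products.
 *
 * Contributors must assign copyright to ExampleSoft, Inc., so that
 * both licensing options remain available for future releases.
 */
```

The copyright-assignment requirement in the last paragraph is exactly the quid-pro-quo discussed elsewhere in this thread: without owning all the copyrights, the company could not lawfully offer the second, proprietary option.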
Take Geir with a grain of salt. (Score:3, Interesting)
The linked article's comments should be seen in that light.
Long before Harmony existed, there was a GNU clean-room implementation of Java called Classpath. In the interests of the community, the thing to have done for free software would be to obtain implementation completeness and then 'pony up' the money to Sun to certify (Cacao/JamVM/Kaffe + Classpath) as Java compatible.
Perhaps Sun wouldn't have allowed that, but... Instead, the backers of Apache sought to create a second clean-room implementation, namely Harmony (code and financial resources of IBM & others, according to Wikipedia). 'They' chose to hire developers to implement Java again from scratch, a second time, in the hopes of bullying Sun into giving them the JCK for free. It would have been sensible to negotiate licensing before work started on Harmony 4 years ago. Now there's a standoff, but whose interests does it serve to have 2 almost-compatible implementations? As one javalobby poster bluntly put it recently [dzone.com]:
So in this case, the Apache license benefits faceless corporations. I believe GPL is a good license for Sun's Java, as it prevents closed forks. Apache are arguing it's good to have a JVM distinct from the reference implementation. Again, good for whom? IBM, so they can release a proprietary JVM for Websphere? Google, so they can plunder bits of it for Harmony?
In response to the above quote, Oracle may also have their own agendas for Java but at least now the code is GPL'd. Red Hat, the main contributor to IcedTea, could fork it at their leisure for the goodwill of the people - any changes they make would be subject to the GPL. Forks of Harmony don't have the same protections. And yeah, I trust Oracle more than I would IBM!
Realities (Score:5, Interesting)
While I appreciate all Stallman and the FSF have done, I still prefer the BSD-style license (you're free to do what you want with this code, including not being free with your changes :). Forced freedom isn't true freedom, IMHO. But that's a philosophical debate.
In more practical terms, businesses operate with restrictions. They have fears of licensing problems, code contamination, lawsuits and such. A less restrictive license such as BSD/Apache/X Windows (which, if I understand correctly, merely requires attribution, not giving away your business code if it interfaces too tightly with open source content) eases those fears.
Honestly, what's the problem with BSD over GPL? So I take a BSD kernel (for example), hack it up with my fancy mods, resell it as a proprietary product. I am required to note, hey, this product uses BSD software under the hood. Any competitor is free to grab the same base software, and apply his own talents to competing with me.
I take a GNU product, apply some of my special magic to it, and I'm screwed (businesswise, at least). I have to give away any enhancements I make. Blah. The LGPL at least lets me use compilers, interpreted languages, libraries, and so on, as a bit of a dodge. (I feel the LGPL only exists because if it didn't, everybody would run screaming from the GPL, and it would have died long ago. I can't link to a freakin' library without releasing my code? No thanks.)
I think that bears repeating: the LGPL has helped keep GPL'd software in use. That's a sign there's a problem there, IMO. With the lines between libraries, compilation, interpreters, interfacing, and web access becoming blurred (and Stallman wanting web services to have to release code), I think it's becoming more and more of a problem.
In a perfect happy world where all our needs and wants and income is taken care of, GPL all the way, man... But in world where one has to express one's talents to make a living, the socialistic ideal of GPL just doesn't jive with business.
In practice, I use GPL'd software a lot, and I am appreciative. But other than for the odd bug fix, I shy away from *ever* touching the source code, period; from a business standpoint, it'd be death.
On BSD style code I've used, I've gone in, made enhancements, and redistributed things; and when I found bugs in the core of the stuff I've worked with, I've contributed back. (But not my new, proprietary enhancements.) So I've been motivated to contribute more to the BSD-style-license world, than the GPL one.
If I'm rich some day, and can afford to work on some projects for free, I will likely contribute to GPL projects as much (although I still have the restricted-freedom philosophical problem). But while I'm in business, I won't spend any significant amount of time enhancing GPL code. It's sad, but it's a harsh reality of our world.
Re:Realities (Score:5, Insightful)
The problem is that there's no incentive to contribute. You can take, but nothing makes you give back. Especially because if you give back, you're effectively working for your competitors.
So yeah, BSD is excellent from a "leech" point of view. It's not that good from the "project" point of view. It's not that good from the contributor point of view either. Why should I bother contributing when that in effect makes me an unpaid employee of every company using that source?
The LGPL exists for a strategic reason.
For some things, such as the C library, there exist many reimplementations. Making that GPLd drives people to alternatives, and loses on any potential contributions. So the LGPL is a compromise to still get contributions to that code.
Stallman considers that a library should be GPLd when it provides a competitive advantage. If it's GPL or "code your own", he hopes you'll go with the GPL one.
On the contrary, it jives perfectly fine with business.
Take Red Hat for instance, and other companies that pay for GPL development. Why do they do that? Because they know that even if IBM takes advantage of their improvements, the moment they fix something in Red Hat's code, they have to give back as well. So not only does Red Hat get better drivers or SMP support, they also get free fixes from IBM for it!
The BSD on the other hand doesn't have such things. Red Hat would write their driver, release as closed source, not contribute it back obviously, and every other company would do the same. The end result is that BSD won't get the driver until some volunteer happens to contribute it.
There's also lots of GPLd code in various devices you rarely look at very closely, such as cash registers. The companies that work for those don't sell code. They sell hardware + software + support, and have no problem with contributing bugfixes for whatever GPLd code they used, because their business loses nothing by doing so. And without the GPL they wouldn't bother to contribute, because that takes programmer time, and as such won't be done if optional.
It's only death if your business is selling software on the shelf. There are many companies with different business models, which sell routers, or cash registers, or support, and for which the GPL isn't a hindrance in the slightest.
Re:Realities (Score:4, Insightful)
That assumes I have a product. What if I'm just an user?
Where's my incentive to contribute something that your company will repackage and sell?
I consider that a very weak incentive.
Yes, of course it's easier to contribute a one line fix for a buffer overrun, than to manually patch every new version.
You however have a big incentive not to contribute anything that might make you more competitive.
Long term this sort of thing will result in all the interesting technology being in the proprietary forks, and an open codebase that doesn't do anything interesting, but runs well.
And here, again, why I don't release BSD code. I don't want you to have an exclusive edge, I want you to contribute to my project.
Re:Realities (Score:4, Insightful)
I take a a GNU product, apply some of my special magic to it, and I'm screwed (businesswise, at least). I have to give away any enhancements I make.
That's the whole point. The guys that put in work for free don't want you taking their base and making alterations and then selling it with no obligations to give away your enhancements. The companies that put time and money into GPL software don't want that either, they contributed to the community and the condition for you getting the source and using the software is that you open up too.
That's the condition for using their work. Don't like it? Find or write another solution.
So many posts here seem to think that GPL code ought to be able to be used as if it was public domain. That's not the idea at all.
Things like the linksys hacking communities would not exist without the GPL.
the solution is to upgrade to GPL 3 (Score:3, Informative)
Eric (Score:3, Funny)
Everybody loves him. [geekz.co.uk]
(Except me.)
We need a more clear LGPL-like license (Score:3, Interesting)
There were good posts above but this has devolved into a typical flameware between people who see the GPL and the BSD as the only two possibilities.
What I would very much like to see is something that is "what people think the LGPL probably means before they read the fine print":
You can use the source code unchanged in any way you want in your software and distribute the result. However if you modify the source code to use it in your result, you must release your modifications (but not the rest of your program) under the same license.
In my opinion additions that don't require modification of your code are going to be creative work and thus should belong to you. But you should not be able to "steal" my code by making tiny changes to it and closing the result. I think a lot of people feel the same way.
Now this sort of license has been made dozens of times, and is often called "GPL plus a linking exception". The problem is that there is no common three-letter name so nobody can easily refer to this license, so there is license proliferation.
I think the FSF is to blame for forcing thier philosophy by activiely avoiding creation of the license described above and giving it a nice short name with a 'G' in it. No other organization seems to have the clout of the FSF to get a name standardized.
There are also commercial intereste to blame, they know this sort of license would address all the arguments against the GPL without removing the ability to compete that the BSD does. Their tactic seems to be to argue that there is no middle ground, and also to pollute the license namespace with hundereds of BSD licenses so that any such license that gets any popularity is buried.
It's simple really (Score:3, Insightful)
If you want to make your software free of restrictions, then place it under a BSD, MIT, Apache or other unencumbered license. But if you desire to control, regulate and manage what other people can do with your software, then use a restrictive license like the GPL. Many businesses like the GPL because they can be "community based" while still restricting their competitor's ability to leverage the software.
What I have never understood, though, was the use of the GPL for non-commercial community software. The usual excuse is that "Microsoft can't steal my code". That displays a shocking ignorance of the nature of information. No one can take your software away from you, or away from your users. They might be able to fork it, but your original software is still there untouched. The reciprocity of the GPL can be very useful, as with commercial open source, but it has nothing to do with protecting the software. Instead it protects the fragile sensibilities of the author.
ESR is one who gets it, who understands that free software does not need to be protected and coddled beneathed layers of licensing restrictions. Anything beyond attribution and warranty disclaimers is too much.
-1 troll (Score:4, Informative)
Seriously, troll much?
It's not so people can't take the code away, it's so they can't even use it without giving changes and enhancements back.
What's shocking is your ignorance of the reasons behind people using the GPL.
After all these years, Matt still doesn't get GPL (Score:3, Insightful)
Even after years of conversations with us in the FLOSS community, Matt still doesn't get it. He's completely focused on “businesses with a codebase that release it under some license”. He doesn't understand community-driven software that isn't tied to on specific corporate entity.
The GPL is specifically designed for community-driven software that is not tied to one company. Matt could very well be right about the limited, pro-corporate world he occupies; it could very well be better for them to use the Apache license.
However, individuals and very small contracting agencies benefit best when they can be put on equal footing with the big guys. The only types of licenses that do this are copyleft licenses.
Finally, declaring that people's life's work trying to make the world a better place — even if you disagree with their politics — is disingenuous at best. I've spent most of my adult life working to make the GPL and the codebases around it better. I'm sorry to hear that Matt thinks I've been busy dumping radioactive waste on his world.
Depends on your goals (Score:3, Interesting)
If your goal is wide acceptance in the business world, then yes, the Apache license or BSD or, for that matter, public domain is far better than the GPL in most cases -- other posters have noted the glaring exceptions.
If your goal is maximum utility for individual users, then the GPL or a close relative is the way to go.
The thing about ESR and RMS is that they approach the issue from diametrically opposed positions and assume that everyone must follow their lead, when the fact is that motivations vary. I was interested in Free Software from the beginning for reasons that were (and remain) essentially altruistic: I wanted to help develop software that would be useful to individual users and accessible to individual developers of like mind. I don't care one way or the other if anything I've done becomes especially popular or widely adopted in the business world, though I have contributed to projects that were.
ESR and many of his supporters, on the other hand, do care very much whether their work is adopted by the corporate world, and many of them are hoping to profit from it. While that's not to my personal taste, it's all fine and well, and I support their freedom to take that approach.
That said, it has often seemed to me that if you want to write software for the corporate world and to make decent sums of money at it, it makes a lot more sense to just get a programming job at a corporation or start a closed-source software company. I know there are a lot of folks out for world domination with varying amounts of tongue-in-cheek, but I've never been convinced that there is any tangible benefit for individual users (including myself, when I'm using rather than coding software) to be derived from massive corporate adoption of most FOSS software. Conversely, there is a great deal of risk, not to corporations playing with "viral" licenses, but to the freedom of free software itself, when you play games with large corporations with lots of money and attorneys. In such a contest, the small FOSS project is always overmatched unless it aligns itself with another large corporation, which entails even more risk.
Anyway, the long and the short of it is that we have many different kinds of free/open licenses for the simple reason that developers have many different goals for their projects. One size does not fit all, ESR and RMS to the contrary.
Re: (Score:2)
I believe the Apache license is the same as the BSD license, it may even be the BSD license, I'm not sure and I didn't RTFA, naturally. Anyway both Apache and BSD have been around a long, long time.
So it's not "throwing another one in the ringer", it's an old player getting up and saying "you guys suck, I'm the best". Basically I think they are trying to start the FOSS version of a fist fight.
May the best license win!
It differed from the last BSD version (Score:3, Informative)
in that the Apache license dealt with patents (which, being outside copying is still able to ensure you can never use the original BSD licensed code if someone takes a patent on it) and therefore was a better BSD than the BSD license.
Re: (Score:3, Informative)
Woops, you seem to have heard some rumor, misunderstood it, and then attempted to spread it as if it were truth.
The real story is that hard work has been done to make the GPL3 *compatible* with the Apache License 2 (or APL v2, ASF is the 'Apache Software Foundation' and not a license).
Compatible, ie that GPL3 software can link too APL software, is new (GPL2 code wasn't able to link to/include Apache licensed code), is new.. But that is not at all the same as being *equal*, the APL is a BSD like license whic
Re: (Score:3, Insightful)
I honestly don't see what the argument is about. Folks using GPL don't WANT you snatching their code to make some proprietary widget which you then sell (look up RMS writings on the subject. He has proprietary code being slightly less evil than Satan...but not by much) whereas companies like MSFT that want to take your code, not give you jack for it, and lock it up in their proprietary widget(see the NT networking stack) WANT you to release as BSD or similar so they have more code to snatch.
Since the ones
Re: (Score:3, Insightful)
"Whenever we use anything that is open source the first thing is to ensure it is not GPL'd. If it is GPL'd we find another solution or write our own."
That's the idea, dummy.
If you're not going to reciprocate, then write your own!
Re: (Score:2)
That is the catch (and this is actually becoming boring
:-)...
I can't reciprocate when your code is GPL'd. Since I would linking to your code (say, a CPAN module), your code would put my own codebase in a legal gray area. Therefore, I can't give back the changes that are relevant to you without also potentially being required to give back the entire codebase that uses your library.
So yeah... I'd like to reciprocate, but your license won't let me!
Re: (Score:2)
"I can't give back the changes that are relevant to you without also potentially being required to give back the entire codebase that uses your library."
No, that really is the idea, that you can't use my stuff without opening the whole of your codebase in the same way.
That is precisely the point! If you're not willing to open the whole of your application, you don't get to use any GPL components.
Re: (Score:3, Informative)
What you see as not letting you contribute back, I and others would see as refusal to open up more of your work.
Exactly, it's an 'if you aren't with us then you're against us' mentality, and most people decide to pick 'against us' when confronted by that attitude. Which of these do you think benefits the Free Software community most:
Personally, I'd pick option 3. In an ideal world, the new project would be open t
Re:GPL is a hindrance (Score:5, Interesting)
95% of the projects on sourceforge are rubbish that either has no release or hasn't been updated since 1992 so what does that 77% tell me? Nothing. Here is something more intereting:
Firefox - Used everywhere, Not GPL
Apache - Used everywhere, Not GPL
OpenSSH - Used everywhere, Not GPL
Perl - Used everwhere, Not GPL
PHP - Used everywhere, Not GPL
Ruby - Used everywhere, Not GPL
Rails - Used everywhere, Not GPL
PostgreSQL - used everywhere, Not GPL
My point? Dunno, but I woudn't be using sourceforce for gathering statistics.
No pity. I dont really care, honestly. Software is a tool, dammit. Not a religion. I left linux because of the politics. I just want something that works.
Re:GPL is a hindrance (Score:5, Informative)
"Firefox - Used everywhere, Not GPL"
Actually, it is. Firefox is triple licensed as GPL, LGPL and MPL. All of these licenses are so-called "copyleft" to some extent, requiring back contribution.
"Perl - Used everwhere, Not GPL"
Again. It is. PERL is dual licensed GPL and the Artistic License. The Artistic License has less restrictions than the GPL, but more restrictions than the BSD license.
"Ruby - Used everywhere, Not GPL"
Yet again. It is. Ruby is dual licensed GPL (all of it) and the Ruby license (some of it). The Ruby license does allow commercial and proprietary use, but certain parts of Ruby is not covered by the Ruby license.
Besides this name dropping is pointless. I can counter with other examples (at least with your definition of 'everywhere').
Linux - Used everywhere, GPL
OpenOffice.org - Used everywhere, GPL
MySQL - Used much more everywhere than PostgreSQL, GPL
Samba - Used everywhere, GPL.
The main point is that loads of projects see great adoption even if they use the GPL. So using the GPL to cover your bases, doesn't seem to be a great deterrent.
Re: (Score:3, Informative)
Then don't use it, your choice. Please stop thinking that Open Source == Public Domain. GPL is specifically written to make sure that it can only be used by pieces of software that are similarly licensed.
Re: (Score:3, Informative)
"And that's why it's unethical: because it attempts to dictate its morality on other people. It's headfucked."
And you're dictating yours, you fucking moron.
"These people should work for free and let me do what I want with their code! WAAAAAAAAH!"
Don't like the license, don't fucking use it. But stop bleating, for god's sake, it's pathetic.
Re: (Score:2)
Whenever we release any source we put the most permissive license on it we can - which translates to "You can do whatever you want with this EXCEPT put a GPL style licence on it" (...)
Does your license define what is a "GPL style license"?
If not, thanks but no, thanks.
Re: (Score:2)
And now let me rant about how Slashdot gets worse by the day. Not only do I have to log back in to Slashdot repeatedly when browsing (because I'm "behind a corporate firewall")(imagine!!) but now I find myself browsing this particular topic as user "1779"! Sorry, 1779, I'll try not to muck up your view settings.
You sure it wasn't '1729'? I'm getting a lot of other logins as well, at least 25 so far.
It's claimed to be a feature. (Score:3, Interesting)
From the FAQ: Why is someone else's User Name appearing on my User Page's Menu?
;) that you have visited. This is useful when you want to hop around between your user info, and someone else's: to compare friends and foes for example. Your account has not been hacked, this is totally by design.
This is not a bug. This is a feature! That name is the last user page (besides your own
It's a badly implemented feature. You don't really have someone else's identity, it just looks that way. Maybe. It may have
It's a bug (Score:2)
I've gotten 8 other accounts, with their preferences and hidden emails, as well as mod points.
It only stated happening today, too.
Re: (Score:2)
that isn't this. I suspect what they ment that answer to be for people who get confused because other user profile pages look like there. In this case, the damn website thinks I'm logged in as somebody else! Hopefully this glitch doesn't extend further than the story pages and people can't reset the password on my real account. Who knows who this will post as!
(guess it posts as me, not whoever it things is logged into this page)
Re: (Score:2)
I'm having this problem; I've gotten several other accounts with their prefs, private email addresses, and mod points (in 2 instances).
Shit's busted.
Re: (Score:3, Insightful)
Amazing - an intelligent comment in a discussion like this.
However, I think the GPL has advantages for the long-term efficiency. It doesn't fit in to current standard business practices as BSD does, but it can offer Free Software a competitive advantage, and therefore can lure people into the open source paradigm, which many of us think superior (and which I'm perfectly willing to accept for the sake of argument). In addition, it encourages some contributions by reducing the chance that the contributor
Re: (Score:3, Insightful)
Communistic? The GPL? I don't know, I don't think you can get more communistic than the view that code other people develop should be handed over to you for your benefit just because you "need" it. Those of us who favor the GPL take a more capitalistic view: if you want something I created, you're going to need to give me something of value to me in exchange. | http://tech.slashdot.org/story/09/04/29/1440254/is-apache-or-gpl-better-for-open-source-business?sdsrc=prevbtmprev | CC-MAIN-2015-40 | refinedweb | 11,987 | 69.11 |
2008 Qualification Round
Saving the Universe
Let S be the set of search engines. We maintain another set of search engines T that is initially empty. We iterate through the sequence of queries.
If the current query q is the name of a search engine, we insert it in T. Alternatively, we could have used filter here. If T is still a proper subset of S then we continue to the next query. Otherwise, S = T, in which case the search engine we’ve been using until now should be q, and we must switch to another search engine, the choice of which is yet to be determined. We set T to be the singleton set containing q and continue to the next query.
import Data.List import qualified Data.Set as S import Jam main = jam $ do [s] <- getints ss <- S.fromList <$> getsn s [q] <- getints qs <- getsn q return $ show $ solve ss qs solve s qs = fst $ foldl' g (0, S.empty) qs where g (n, t) q | S.notMember q s = (n , t) | s /= t' = (n , t') | otherwise = (n + 1, S.singleton q) where t' = S.insert q t
Train Timetable
We construct an event log, which consist of a timestamp and each event is either of the form "a train leaves station S" or "a train is ready to leave station S", where S is either A or B.
We sort this log by timestamp, and for events with the same timestamp, we ensure the "train is ready" events take precedence. Then we replay the log while maintaining two counters per station, which we’ll call
The number of trains we need at the beginning.
The current number of trains at the station.
In detail, when a train leaves station S, we decrement the second counter for station S unless it is zero in which case we increment the first counter for S. Also, when a train arrives at S, we increment the second counter for S.
After replaying the log, we print the first counters of the two stations.
import Data.List import Jam toTime s = let (hh, ':':mm) = break (== ':') s in read hh * 60 + read mm toTimes = map $ (\[a, b] -> (toTime a, toTime b)) . words record t [e0, e1] as es = foldl' (\es (t0, t1) -> ((t0, e0):(t1 + t, e1):es)) es as main = jam $ do [t] <- getints [na, nb] <- getints as <- getsn na bs <- getsn nb let es0 = record t ["1A", "0A"] (toTimes as) [] es1 = record t ["1B", "0B"] (toTimes bs) es0 es = sort es1 ((a, b), _) = foldl' (\((a0, b0), (a, b)) (_, e) -> case e of "0A" -> ((a0, b0), (a, b+1)) "0B" -> ((a0, b0), (a+1, b)) "1A" -> if a == 0 then ((a0+1, b0), (a, b)) else ((a0, b0), (a-1, b)) "1B" -> if b == 0 then ((a0, b0+1), (a, b)) else ((a0, b0), (a, b-1)) ) ((0, 0), (0, 0)) es return $ show a ++ " " ++ show b
Fly Swatter
The fly survives if it fits between the strings, that is, if its center lies in a square with side length (g - 2*f) centered within a hole, and if the hole borders the ring, then the center must be within (R - t - f) of the center of the ring.
By symmetry, we can solve for the half of a quadrant of the racket. We’ll focus on the sector between 0 and pi/4 radians.
If (g - 2f) is zero or less, then the fly dies, otherwise we consider the x-coordinates of the edges of the squares where the fly must be centered to survive; the y-coordinates are similar. The first square lies between (r + f) and (r + g - f), and in general the (k - 1)th square lies between (r + f) + (2r + g) k and (r + g - f) + (2r + g) k.
Then for:
x <- [r+f, r+f+d..R-t-f], y <- [r+f, r+f+d..x]
we check if the square with bottom-left corner (x, y) lies at least partially in the ring. If so, then we add its area to a running sum, first halving it if x and y are equal. If the square fully lies within the ring, then its area is simply (g - 2*f)^2, otherwise we must first intersect it with the circle of radius (R - t - f) centered at the origin.
There are 4 cases depending on how many corners of the square lie within the ring. All require computing the area of a segment. We find the points of intersection with the ring P and Q via the equation for a circle x2 + y2 = r2. Then we take the dot product of OP and OQ, divide by r2, and take the inverse cosine to determine the angle, which we use to find the area of the sector POQ. Lastly, we subtract the area of the triangle POQ to get the area of the segment, using a simple formula from linear algebra.
The other part is easy: we add the area of a triangle, trapezium, or truncated square.
This turns out to be fast enough for the large input. However, we also implement a faster version for training purposes: a couple of divisions can determine the number of squares in a given row lie completely in the ring. We multiply this by (g - 2*f)2 and add the areas of the partial squares as before.
import Jam import Text.Printf main = jam $ do [f, rr, t, r, g] <- getdbls let slice = pi * rr^2 / 8 s = g - 2 * f d = 2 * r + g r2 = (rr - t - f)^2 inring x y = x^2 + y^2 <= r2 seg (x1, y1) (x2, y2) = 0.5 * (acos ((x1*x2 + y1*y2) / r2) * r2 - abs (x1*y2 - x2*y1)) partial x y = case (inring (x+s) y, inring x (y+s)) of -- One corner lies inside the ring. (False, False) -> let a = sqrt(r2 - y^2) b = sqrt(r2 - x^2) in seg (a, y) (x, b) + (a - x)*(b - y)*0.5 -- Both left corners lie inside the ring. (False, True) -> let a1 = sqrt(r2 - y^2) a2 = sqrt(r2 - (y+s)^2) in seg (a1, y) (a2, y + s) + s*((a1 + a2)*0.5 - x) -- Both bottom corners lie inside the ring. -- This seems to be impossible? (True, False) -> let b1 = sqrt(r2 - x^2) b2 = sqrt(r2 - (x+s)^2) in seg (x, b1) (x + s, b2) + s*((b1 + b2)*0.5 - y) -- Three corners lie inside the ring. (True, True) -> let a = sqrt(r2 - (y+s)^2) b = sqrt(r2 - (x+s)^2) in seg (x+s, b) (a, y+s) + s^2 - (x + s - a)*(y + s - b)*0.5 try x = if 2*x^2 > r2 then 0 else (let k = floor $ (sqrt(r2 - x^2) - x) / d k1 = floor $ (sqrt(r2 - (x+s)^2) - (x+s)) / d f j = sum [partial (x + d*fromIntegral i) x | i <- [j..k]] in (if k1 >= 0 then fromIntegral k1 * s^2 + 0.5 * s^2 + f (k1 + 1) else 0.5 * partial x x + f 1)) + try (x + d) brute = sum [(if x == y then 0.5 else 1.0) * (if inring x y then (if inring (x + s) (y + s) then s^2 else partial x y) else 0) | x <- [r+f, r+f+d..rr-t-f], y <- [r+f, r+f+d..x]] -- Slower, but easier. -- return $ printf "%.6f" $ if s <= 0 then 1 else 1 - brute / slice return $ printf "%.6f" $ if s <= 0 then 1 else 1 - try (r + f) / slice | https://crypto.stanford.edu/~blynn/haskell/2008-qual.html | CC-MAIN-2018-05 | refinedweb | 1,262 | 86.54 |
Nice! I'll test and report back
EDIT: It seems to be working. It detects James IED's but you cannot difuse them. Also, dunno whether it's intentional but the debug messages have now dissapeared since the little update you done just now.
Nice! I'll test and report back
EDIT: It seems to be working. It detects James IED's but you cannot difuse them. Also, dunno whether it's intentional but the debug messages have now dissapeared since the little update you done just now.
Last edited by Acelondoner; Jan 21 2011 at 04:17.
reezo_ieddetect_debug = false; into true
I jumped on my chair when I noticed the IEDs are detectable, I didn't move to the IED because I couldn't find the triggerman..there might still be problems, but for now, they can be detected
Defusing them with a triggerman around - even when both scripts will interact correctly - won't be good for our health.
Missions: The Taking of Fallujah (ACE) | A Day in the Life (ACE) | Roadblock Duty (ACE)
Developer for: ACE RWR Radar Component | EOD Mod | SMK Animations Mod | IED Det. and Disp. Script | MOUT Generation Script | Loudspeaker Script | SniperPod Camera Mod
lol, I've been working with your script so much that I totally forgot his script had a triggerman aha
Great work mate.
Here's a couple of thoughts when testing it with buddies:
In MP it seems inconsistent across players. One person would detect it, then it would detect on the other people. The first person who detected it would get the option to Remote Detonate it -always- (note: we never were able to blow it from a distance using the action menu. When we did get the Remote Detonate it, we had to go up to it and INJECT it with the number. And only the person who first detected it could do that).
Now... that was for the James script and the _add.sqf in an object.
Suggestion: for the James script, when it detects using the James script make it say something along the lines of, "Active Frequency Detected" or something of that sort (something that gives the player a hint that there is a triggerman).
And lastly, another suggestion: lower the detach IED option so you only get it to pop up within, lets say, 1 meter. BUT, I think that you should need to SEARCH the object that has the script in it, have it trigger an animation, and THEN you'll get the detach IED option.
Edit: Also, we were getting BUNCH of Fake IED hits, are you sure the ", 01]" in the detector.sqf is 1%? It may have just been chance, but it was a lot. It may have something to do with the fact that our SCAN interval was set to 5 (note: with no lag issues).
Edit #2: Just had an idea I thought I should throw out there... EOD Suit. Terribly needed/wanted to go with this. Now, how would this be put in a practical solution? Well, in theory (and this is a theory), use a similar system the ACE Rucksacks do. Basically a remodelled/textured rucksack, that instead of a backpack it's a full body suit that covers your player model. It'll take up your second primary inventory slot, and we can actually make it very heavy so that you'll have to drop all excess equipment minus a pistol and a magazine as well as giving the model an actual armor value. Thoughts?
Last edited by SpectreRSG; Jan 21 2011 at 06:14.
EDIT: It seems to be doing it alot more now for some reason... It also seems to show the approx position of the fake IED at another player/detector.
Last edited by Acelondoner; Jan 21 2011 at 09:58.
Thanks for the feedback. I'll keep testing and improving, I have a couple clues why these things happen.
I must say that, for the moment I am forced to disregard any feedback related to James' script. I need to stay focused on testing my own script first and programming always requires a very rational approach, to avoid loosing track of progress.
I am not saying you must provide me with feedback, but if you do, make sure you follow these guidelines.
1) Make it simple, like 1 EOD 1 IED
2) Start building up gradually
3) Proceed excluding variables, e.g. at first put Fake IEDs to 00 to avoid that
4) Change one variable at the time, to maintain control of the events
This is how I am going to test things now that the script code is all polished and organized. Dedicated server testing is a must because we all know ArmA changes drastically one things go "real".
Thanks a lot for taking part in this project, I might create a dev-heaven section for this whole thing, it should help us keep track of the most constructive and "professional" feedback.
It's either us or the script, because if we don't whip it programmed code always kicks back! whipwhip!
UPDATE: I have v1.5RC2 almost ready. I've started testing on dedicated exclusively, no more local/editor simulations. Everything works, with multiple EODs, scanner units, scanner vehicles and both units and vehicles.
I need to make the following improvements, and then 1.5RC2 is released.
1) Check for isServer and !isServer calls, in order to avoid duplicate scripts running (done)
2) Improve the attachment system (to be done)
3) When a scanner unit is inside a scanner vehicle, pause the unit scanner (not really necessary because of the "busy" variable, for now).
Oky.
laters..laters..
Last edited by Reezo; Jan 21 2011 at 19:34.
bump?bump?Alright, VERY nice job with this script, I've been following it closley but was unable to post until now, great job! Only question is with respawn/Norrin's revive. Im using Norrin's revive script and wonder how I can readd the detect to only engieneers? Any ideas? And for anyone trying to get this to work with james' ied script i believe its on page 5
Keep up the great work! Let me know if you need anything!Keep up the great work! Let me know if you need anything!
To further Kolmain's bump
I have issues with Norrin's revive too, one being adding the:
In the description.ext, it just crashes to desktop everytime saying:In the description.ext, it just crashes to desktop everytime saying:class CfgSounds
{
#include "scripts\IEDdetect\IEDdetect_sounds.cpp"
};
class RscTitles
{
#include "scripts\IEDdetect\IEDdetect_screens.cpp"
};
And when placed in the revive.sqf\dialogs\config.cpp it crashes to desktop saying:
This is the only thing stopping me from using this awesome script, tried working round this with no great success
Viper you have to remove the conflicts from description.ext
It just takes tinkering
Stitching together dialog conflicts is like an art form
Last edited by Kremator; Jan 21 2011 at 22:41. | http://forums.bistudio.com/showthread.php?113289-SR5-Tactical-IED-Detection-and-Remote-Det-Script/page11 | CC-MAIN-2013-48 | refinedweb | 1,173 | 71.75 |
In this post, we will implement different ways to schedule Apex Class in Salesforce. It is very important to schedule Apex Classes in certain business scenarios. This helps to run Apex Classes at specific times, even at regular intervals. Let’s hop into the implementation.
How to make Apex Class schedulable?
To make a class schedulable, implement the Schedulable interface. You must then implement the below method, which is defined in the Schedulable interface:
global void execute(SchedulableContext sc){}
The method must be global or public.
Consider the below example, where we have implemented the Schedulable interface along with its execute method. It queries 5 Accounts and updates them.
public class AccountSchedule implements Schedulable {
    public void execute(SchedulableContext sc) {
        List<Account> lstAccount = new List<Account>();
        for (Account objAccount : [SELECT Type FROM Account WHERE Type != 'Customer - Direct' ORDER BY CreatedDate LIMIT 5]) {
            objAccount.Type = 'Customer - Direct';
            lstAccount.add(objAccount);
        }
        update lstAccount;
    }
}
Schedule Apex Class Declaratively
Now that we have implemented the Schedulable interface, we need to actually schedule the Apex Class. To do that, follow the steps below:
- Go to Apex Classes from the Quick Find box.
- Click on Schedulable Apex.
- Select the Apex Class that implements the Schedulable interface and configure the Schedulable Apex Execution: Frequency, Start Date, End Date, and Preferred Start Time. Note that we cannot specify minutes or seconds for scheduling; the job will only run at minute 0, second 0 of the selected hour.
This is how the Schedule Apex configuration would look to run Apex Class every day at 12.00 AM for the next 5 years.
And this is how the Scheduled Job would look in the All Scheduled Jobs:
Schedule Apex Class using System.schedule()
We can also schedule an Apex Class using System.schedule(). We need to pass 3 parameters:
- Name of Schedule Job: Any Text value to identify the name for this Schedule Job.
- Expression: An expression used to represent the time and date the job is scheduled to run. Check official documentation to know more about this expression and its format here.
- An instance of an Apex Class that implements Schedulable Interface.
This is how we can call System.schedule() to run Apex Class once a day at 12.00 AM:
System.schedule('Update Account Type Schedule', '0 0 0 * * ?', new AccountSchedule());
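The expression has seven space-separated fields: Seconds, Minutes, Hours, Day_of_month, Month, Day_of_week, and an optional Year. As a sketch (the job name and schedule are just examples), this schedules the AccountSchedule class above for 6:30 AM every weekday:

```
// Fields: Seconds Minutes Hours Day_of_month Month Day_of_week (Year optional)
String cron = '0 30 6 ? * MON-FRI';
System.schedule('Weekday Account Update', cron, new AccountSchedule());
```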
Schedule Batch Apex using System.scheduleBatch()
We can use the System.scheduleBatch() method to schedule a batch job to run once at a future time. Following are the parameters for System.scheduleBatch():
- An instance of an Apex Class that implements the Database.Batchable interface.
- Name of Schedule Job: Any Text value to identify the name for this Schedule Job.
- The time interval in Minutes after which the job starts executing.
- An optional scope value. By default, it’s 200. Maximum 2000.
Below is a sample Batch Apex class that queries 5 Accounts from the database and updates them.
AccountBatch.cls
public class AccountBatch implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Type FROM Account WHERE Type != \'Customer - Direct\' ORDER BY CreatedDate LIMIT 5');
    }
    public void execute(Database.BatchableContext bc, List<Account> lstAccount) {
        for (Account objAccount : lstAccount) {
            objAccount.Type = 'Customer - Direct';
        }
        update lstAccount;
    }
    public void finish(Database.BatchableContext bc) {
    }
}
And this is how we can schedule this Batch to run after 50 Minutes:
System.scheduleBatch(new AccountBatch(), 'Update Account Type Batch', 50);
These are the different ways to schedule Apex Class in Salesforce.
If you don’t want to miss new implementations, please Subscribe here. If you want to know more about Scheduling Apex Class in Salesforce, please check official Salesforce documentation here.
Thank you! See you in the next implementation!
3 thoughts on “Different ways to Schedule Apex Class in Salesforce”
Thanks a lot for this info Niki.
I have a question: where does the code (System.schedule()) go, in which part of the class?
I don’t understand, please send help.
This code:
System.schedule('Update Account Type Schedule', '0 0 0 * * ?', new AccountSchedule());
Hi Karen, you can call it from the Anonymous Window of Developer Console. | https://niksdeveloper.com/salesforce/different-ways-to-schedule-apex-class-in-salesforce/ | CC-MAIN-2022-27 | refinedweb | 674 | 60.41 |
Label doesn't show up
Hi.
I have this simple piece of code:
#include <QApplication>
#include <QLabel>

int main (int argc, char *argv[]) {
    QApplication app(argc, argv);
    QWidget w;
    w.setGeometry (0, 0, 800, 450);
    QLabel a("AAA", &w);
    a.setGeometry (10, 10, 200, 100);
    w.showNormal ();
    QLabel b("BBB", &w);
    b.setGeometry (10, 410, 200, 100);
    w.update ();
    return app.exec ();
}
When I run it, I see "AAA" but not "BBB". Why is that? It must be possible to add a child widget to 'w' after it's been shown!
I'm using Qt 5.4.0 on Ubuntu 14.10.
One problem is that you placed the B label below the bottom border of the widget. A rectangle of height 100 placed at y 410 will try to render text somewhere around y 460, which is lower than the bottom border (450).
The other problem is described in the docs of "QWidget":
"If you add a child widget to an already visible widget you must explicitly show the child to make it visible."
So all you need to do is this:
QLabel b("BBB", &w);
b.setGeometry (10, 410, 200, 100);
b.show();
You don't need the call to w.update().
- alex_malyu
Comments were removed even though I still believe that code which relies on the order of object destruction in the same scope is evil.
@alex_malyu This code is fine (for a code snippet). Order of destruction of scope variables is guaranteed by the standard and is the reverse order they were declared in. So
// this is perfectly fine:
QWidget parent1;
QWidget child1(&parent1); // will destruct first and detach from parent

// this is an error:
QWidget child2;
QWidget parent2;
child2.setParent(&parent2); // a double delete will occur on scope exit
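The guarantee itself is plain C++, independent of Qt. A minimal self-contained sketch (names are illustrative) that shows the reverse destruction order:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Records the destruction order of scoped objects into a shared log string.
struct Tracer {
    std::string name;
    std::string* log;
    Tracer(std::string n, std::string* l) : name(std::move(n)), log(l) {}
    ~Tracer() { *log += name + ";"; }
};

// Locals are destroyed in reverse order of declaration (standard C++ guarantee).
std::string destructionOrder() {
    std::string log;
    {
        Tracer first("first", &log);   // declared first  -> destroyed last
        Tracer second("second", &log); // declared second -> destroyed first
    }
    return log; // "second;first;"
}
```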
The setGeometry call is setting the position in parent coordinates, so yes, they are related. The widget is just not shown initially if the parent is already visible. This is done to allow creating hidden widgets without a flicker. If they were shown initially you would have to call hide() right away but they might still be shown for a moment causing a distraction. They are shown automatically for not yet shown parents because there's no such problem there.
Layouts should be used most of the time, but some simple non-resizable widgets can do without them fine (less typing).
Ok, Thanx a lot for your help!!
alex_malyu wrote:
"Comments were removed even though I still believe that code which rely on the object order destruction in the same scope is evil."
Agreed. But for small examples or snippets to illustrate an issue it's ok. | https://forum.qt.io/topic/51052/label-doesn-t-show-up | CC-MAIN-2018-09 | refinedweb | 445 | 66.33 |
Opened 3 years ago
Closed 3 years ago
Last modified 3 years ago
#19522 closed Bug (invalid)
ModelForm and BooleanField
Description
class TestModel(models.Model):
    b = models.BooleanField(db_index=True, default=True)

    class Meta:
        db_table = 'test'
        app_label = 'test'

class TestModelForm(forms.ModelForm):
    class Meta:
        model = TestModel

f = TestModelForm()
{{ f }} -- no view form
Change History (5)
comment:1 Changed 3 years ago by anonymous
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Version changed from 1.4-alpha-1 to 1.5-alpha-1
comment:2 in reply to: ↑ description Changed 3 years ago by anonymous
comment:3 Changed 3 years ago by anonymous
Must use _(u'Кириллица')
Remove this ticket, please.
comment:4 Changed 3 years ago by timo
- Resolution set to invalid
- Status changed from new to closed
comment:5 Changed 3 years ago by anonymous
This problem does not raise an exception.
# -*- coding: utf-8 -*-
from django.utils.translation import ugettext_lazy as _ | https://code.djangoproject.com/ticket/19522 | CC-MAIN-2016-22 | refinedweb | 164 | 57.27 |
Description
With LUCENE-5339, facet sampling disappeared.
When trying to display facet counts on large datasets (>10M documents) counting facets is rather expensive, as all the hits are collected and processed.
Sampling greatly reduced this and thus provided a nice speedup. Could it be brought back?
Activity
I'm currently experimenting with this. To increase the speed it seems logical to me the FacetsCollector needs to return less hits. I have a slightly modified version that I will attach.
It uses a sampling technique that divides the total hits into 'bins' of a given size and takes one sample from each bin. I have implemented it as keeping that one sample as a 'hit' of the search if it was a hit, and clearing all other bits. See the attached file.
By using this technique the distribution of the results should not be altered too much, while the performance gains can be significant.
A quick test revealed that for 1M results and binsize 500, the sampled version is twice as fast.
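In standalone form (detached from the Lucene API; class and method names here are just for illustration), the bin idea looks like this:

```java
import java.util.Random;

// Post-collection sketch: keep one randomly chosen hit from every bin of
// `binSize` consecutive hits, so roughly 1/binSize of the hits survive.
class BinSampler {
    static int[] sample(int[] hits, int binSize, long seed) {
        Random random = new Random(seed);
        int bins = (hits.length + binSize - 1) / binSize;
        int[] sampled = new int[bins];
        for (int bin = 0; bin < bins; bin++) {
            int start = bin * binSize;
            int end = Math.min(start + binSize, hits.length);
            // draw one representative per bin
            sampled[bin] = hits[start + random.nextInt(end - start)];
        }
        return sampled;
    }
}
```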
The problem is that the resulting FacetResults are not correct, as the number of hits is reduced. This can be fixed afterwards for counting facets by multiplying with the binsize; but for other facets it will be more difficult or will require other approaches.
What do you think?
This looks great!
To increase the speed it seems logical to me the FacetsCollector needs to return less hits.
fewer hits
(Sorry, pet peeve).
Maybe, you could add a utility method on SamplingFacetsCollector to "fixup" the FacetResult assuming it's a simple count (e.g., multiply the aggregate by the bin size)? The old sampling code ( ) has something like this.
It might be good to allow passing the random seed, for repeatable results?
Another option, which would save the 2nd pass, would be to do the sampling during Docs.addDoc.
Also, instead of the bin-checking, you could just pull the next random double and check if it's < 1.0/binSize?
This looks like a great start! I have a few comments/suggestions, based on the previous sampling impl:
- I think SamplingFC.createDocs should return a declared SampledDocs (see later) instead of an anonymous class
- Currently that SampledDocs.getDocIdSet() creates a new sample on every call. This is bad not only from a performance point of view, but more because if a few Facets* implementations call it, they will compute weights on different samples
- Instead, I think that SampledDocs.getDocIdSet() should return the same sampled DIS, i.e. by caching it.
- Also, I think it would be good if it exposes a getSampledDocIdSet which takes some parameters e.g. the sample size
- I think the original FBS should not be modified. E.g. in the previous sampling impl, you could compute the sampled top-facets but then return their exact counts by traversing the original matching docs and counting only them.
- But maybe we should leave that for later, it could be a different SFC impl. But still please don't compute the sample on every getDocIdSet() call.
- The old implementation let you specify different parameters such as sample size, minimum number of documents to evaluate, maximum number of documents to evaluate etc. I think those are important since defining e.g. 10% sample size could still cause you to process 10M docs, which is slow..
I agree with Mike that we need to allow passing a seed for repeatability (I assume it will be important when testing). Maybe wrap all the parameters in a SamplingConfig? We can keep the sugar ctors on SFC to take only e.g. sampleRatio..
Thanks guys for the feedback (also on my language skills, I need to improve my English!)
It might be good to allow passing the random seed, for repeatable results?
Yes! This is very sensible for testing and for more 'stable' screen results, and I will add this.
Another option, which would save the 2nd pass, would be to do the sampling during Docs.addDoc.
I considered sampling on the 'addDocument' but I figured it would be more expensive as then for each hit we need to do a random() calculation.
I think SamplingFC.createDocs should return a declared SampledDocs (see later) instead of anonymous class
I also considered this. It is far better for clarity's sake, but it also costs a copy of the original. I will try some approaches and will make sure the sampling is only done once.
This was more or less by accident, but indeed seems useful. All segments need the same ratio of sampling though, else it would be really hard to correct the counts afterwards. (Or am I missing something here?)
Maybe wrap all the parameters in a SamplingConfig?
Yes. Very useful and makes it more stable.
The old implementation let you specify different parameters such as sample size, minimum number of documents to evaluate, maximum number of documents to evaluate etc
The old style sampling indeed had a fixed sample size, which I found very useful. However, I have not yet found a way to implement this, as I do not know the total number of results when I start faceting, so I cannot determine the samplingRatio. I could of course first count all results, but that also impacts performance as I would need two passes. I will give it some more thought, but maybe you have an idea on how to accomplish this in a better way?
Here is a patch (against 4.7) that covers some of the feedback.
xor-shift might be a good choice here.
Great effort!
I wish to throw in another part - the description of this issue is about sampling, but the implementation is about random sampling.
This is not always the case, nor is it very fast (indeed, calling Random.nextInt 1M times would be measurable by itself IMHO).
A different sampler could be (pseudo-code):

int acceptedModulo = (int) (1 / sampleRatio);

int next() {
  do {
    nextDoc = inner.next();
  } while (nextDoc != NO_MORE_DOCS && nextDoc % acceptedModulo != 0);
  return nextDoc;
}
This should be faster as a sampler, and perhaps saves us from creating a new DocIdSet.
One last thing - if I did the math right - the sample crafted by the code in the patch would be twice as large as the user may expect.
For a sample ratio of 0.1, random.nextInt() would be called with 10, so the avg. "jump" is actually 5 - and every 5th document in the original set (again, on average) would be selected, and not every 10th. I think random.nextInt should be called with twice the size it is called with now (e.g. 20, making the avg random selection 10).
About the patch:
- SampledDocs should be a static class, to not carry over the FC reference. We've been bitten by such things in the past and I don't see a good reason to do it here.
- The 'sampling' package may be an overkill? if we want to do it, need to add package.html. But I think you can wrap them all in a SFC now (SampledDocs and SamplingParams as static classes) as it's not a lot of code.
- Could you take a look at the old tests and see if they can be ported to the new API?
About the sampling itself, Rob referred me to this link where a simple PRNG implementation is used, following the XOR-Shift method. Could you take a look and see if it can be applied here instead of using Java's Random? According to the link, it performs 7 times faster...
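For reference, the textbook 64-bit XOR-Shift recurrence is only a few lines (this is a generic sketch, not the exact variant from the link or the patch):

```java
// Minimal xorshift64 PRNG; the state must never be zero, since zero is a
// fixed point of the recurrence.
class XorShift64 {
    private long state;

    XorShift64(long seed) {
        this.state = (seed == 0) ? 0x9E3779B97F4A7C15L : seed; // avoid the all-zero state
    }

    long nextLong() {
        long x = state;
        x ^= x << 13;
        x ^= x >>> 7;
        x ^= x << 17;
        state = x;
        return x;
    }

    // Roughly uniform int in [0, bound); bound must be > 0 (modulo bias ignored).
    int nextInt(int bound) {
        long r = nextLong() >>> 1; // make non-negative
        return (int) (r % bound);
    }
}
```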
I don't think we need a sampling package (overkill), and I also don't think we need a separate "config" class (SamplingParams/Config) which will bring complexity/questions itself; I think it's a non-goal here to revive the previous sampling implementation with all of its complexity, nor to push all of this responsibility onto Rob, who's doing great work here.
Rather, I think we should start simple (what Rob needs, here), commit that, and iterate from there, bringing back features as users need them.
I agree Mike. Rob wrote though in a previous comment: "The old style sampling indeed had a fixed sample size, which I found very useful", so I assumed that's something he wants to push for in this issue as well. I'm OK w/ re-introducing sampling in baby steps, but if Rob's goal is to use sampling + fixed sampling, then we should help him do that in this issue - there's no reason to break this into two issues?
Rob, if you only want to introduce sampling ratio, then I agree with Mike that SamplingParams is an overkill for this issue. And in anyway I think a separate sampling package is an overkill as well. If sampling code grows, we can always refactor and move it under its own package.
I reviewed createSample in the patch, and I think something's wrong with it. As I understand it, you set binsize to 1.0/sampleRatio, so if sR=0.1, binsize=10. This means that we should keep 1 out of every 10 matching documents. Then you iterate over the bitset, for every group of 10 documents you draw a representative index at random, and clear all other documents.
But what seems wrong is that the iteration advances through the "bin" irrespective of which documents were returned. So e.g. if the FBS has 100 docs total, and docs 5, 15, 25, 35, ... returned and say random.nextInt(binsize) picks all indexes but 5, if I understand the code correctly, all bits will be cleared! I double-checked with the following short main:
public static void main(String[] args) throws Exception {
  Random random = new Random();
  FixedBitSet sampledBits = new FixedBitSet(100);
  for (int i = 5; i < 100; i += 10) {
    sampledBits.set(i);
  }
  int size = 100;
  int binsize = 10;
  int countInBin = 0;
  int randomIndex = random.nextInt(binsize);
  for (int i = 0; i < size; i++) {
    countInBin++;
    if (countInBin == binsize) {
      countInBin = 0;
      randomIndex = random.nextInt(binsize);
    }
    if (sampledBits.get(i) && !(countInBin == randomIndex)) {
      sampledBits.clear(i);
    }
  }
  for (int i = 0; i < 100; i++) {
    if (sampledBits.get(i)) {
      System.out.print(i + " ");
    }
  }
  System.out.println();
}
And indeed in many iterations the main prints nothing, in some only 2-3 docs "survive" the sampling.
So first I think Gilad's pseudo-code is more correct in that it iterates over the matching documents, and I think you should do the same here. When you do that, you no longer need to check bits.get in order to decide whether to clear it.
I wonder if your benchmark results are correct (unless you run a MatchAllDocsQuery?) – can you confirm that you indeed count 10% of the documents?
Another point raised by Gilad is the method of sampling. While you implement random sampling, there are other methods like "take the 10th document" etc. which will be faster. I think one can experiment with these through an extension of SampledDocs.createSample (and SamplingFC.createDocs), but just in case you want to give such a simple sampling method a try, would be interesting to compare randomness with something simpler.
Thanks all for the insight!
My implementation is not correct in the fact that I indeed run over all the bits in the returning FixedBitSet, and will work better when only looking at the set bits (the real results). The sampling I have implemented will sample the total set of documents which is correct in the MatchAllDocsQuery, but will behave worse and worse the fewer hits are returned; as the chance to sample a hit decreases fast.
The reason I choose to do random sampling in bins instead of just using modulo is to prevent accidental periodicity-effects. For example, if a document is created each day and you sample with modulo 7, you only get the 'monday' results. This can cause weird biased results.
So my next tasks will be:
- implement xorshift, as it seems a good improvement over the Math.random() (although I switched to hte faster Random.nextInt( n )).
- only sample the hits
- remove the package
Here is a patch.
I'm not totally satisfied with it though. There are two loose ends:
-.
Some small performance indicators:
Using an in memory index with 10M documents, retrieving 25% of them and using a samplingRatio of 0.001 (skipping the first call, average on the next three)
Exact: 184 ms.
Sampled: 67 ms.
Not the 10-fold increase I had hoped for, but significantly faster.
Btw. for my use case the two problems I described above are not a real issue. I do a global count anyway, and use this to determine if the facets need to be sampled or not. When they need to be sampled, I know for sure that there are enough hits to give a proper estimate. The price of counting and sampling is lower than the price of retrieving exact facets.
More performance data:
Exact: 185 ms.
Sampled with fixed interval : 51 ms.
The difference between random sampling with index in 'bins' and a fixed interval is not really that big. Although I probably need a larger index to get a better comparison.
Nice patch,
I was surprised by the gain - only 3.5 times the gain for 1/1000 of the documents... I think this is because the random generation takes too long?
Perhaps it is too heavy to call .get(docId) on the original/clone and .clear() for every bit.
I crafted some code for creating a new (cleared) FixedBitSet instead of a clone, and setting only the sampled bits. With it, instead of going over every document and calling .get(docId), I used the iterator, which may be more efficient when it comes to skipping cleared bits (deleted docs?).
Ran the code on a scenario as you described (Bitset sized at 10M, randomly selected 2.5M of it to be set, and then sampling a thousandth of it down to 2,500 documents).
The results:
- none-cloned + iterator: ~20ms
- cloned + for loop on each docId: ~50ms
int countInBin = 0;
int randomIndex = random.nextInt(binsize);
final DocIdSetIterator it = docIdSet.iterator();
try {
  for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
    if (++countInBin == binsize) {
      countInBin = 0;
      randomIndex = random.nextInt(binsize);
    }
    if (countInBin == randomIndex) {
      sampledBits.set(doc);
    }
  }
} catch (IOException e) {
  // should not happen
  throw new RuntimeException(e);
}
Also attaching the java file with a main() which compares the two implementations (the original still named createSample() and the proposed one createSample2()).
Hopefully, if this is consistent, the end-to-end sampling should be closer to the 10x gain hoped for.
Patch looks good, and those are nice performance results!
But, can we please remove the SamplingParams class? There are only 2 params now; if we get up to 6 or 7 params then maybe we should switch to a separate config class. I think one ctor taking only sampleRatio, and then one other taking all params, is enough?
I think these limitations are acceptable? In a large index, the number of such "tiny" segments should be a tiny percentage of the doc space; if not, something serious is wrong.
Another option, which would save the 2nd pass, would be to do the sampling during Docs.addDoc.
I considered sampling on the 'addDocument' but I figured it would be more expensive as then for each hit we need to do a random() calculation.
Quick update:
I implemented the single pass on the 'addDoc' method using a fixed interval (which should be the fastest implementation I guess)
Exact: 180ms
Single Pass (no-clone) Fixed Interval: 38ms.
This is an almost 4.75x speed-up.
I will compare this later on with a single-pass random-indexed approach.
Yes I thought of this too. I will update this. I think I prefer the single pass option myself, as I do not need the original bitset and it is the faster option.
+1 for removing SamplingParams.
I'm OK if the original collected hits are lost; let's just make sure we document this on the collector. Perhaps in the future (separate issue) we can add another collector (or enhance this one) to allow you to get both the original docs and the sampled docs.
I wonder what are the performance implications of sampling during collection or post collection. In the past, Mike and I saw that not interfering with collection improves performance, though what we measured was accessing the ordinals-DV during collection. Just wondering if in-collection performs better than post-collection. Especially if sampleRatio is low, it means we set far fewer hits on the FBS, just to clean most of them afterwards. On the other hand we add calls to random.nextInt/Double, so that's a tradeoff which would be good to measure. We don't even need random.nextDouble(), we can do what you/mike suggested above – work in "bins", for each bin draw a random index and discard all hits given to addDoc unless it is the binIndex.
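As a sketch, in-collection bin sampling could look like this (standalone Java, detached from the actual Collector API; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Decides, per collected hit, whether to keep it: one random index is drawn
// per bin of `binSize` hits, and only the hit at that index is kept.
class InCollectionBinSampler {
    private final int binSize;
    private final Random random;
    private int countInBin;
    private int chosenIndex;
    private final List<Integer> kept = new ArrayList<>();

    InCollectionBinSampler(int binSize, long seed) {
        this.binSize = binSize;
        this.random = new Random(seed);
        this.chosenIndex = random.nextInt(binSize);
    }

    // Called once per matching doc, in docId order.
    void addDoc(int docId) {
        if (countInBin == chosenIndex) {
            kept.add(docId);
        }
        if (++countInBin == binSize) {
            countInBin = 0;
            chosenIndex = random.nextInt(binSize);
        }
    }

    List<Integer> sampledDocs() {
        return kept;
    }
}
```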
I also think that if we keep the original FBS (whether we clone-and-clear, create-new-sampled-one or whatever), we should iterate on the matching docs and not all the bits. I don't see the logic of why that's good at all, unless the bitset.cardinality() is very big (like maybe 75% of the bitset size)? Of course, if we move to sample in-collection, that's not an issue at all.
Rob, I want to make sure we are all on the same page as to what we want to achieve in this issue (helps scope it):
- Add SamplingFacetsCollector which takes sampleRatio and optional seed and random-samples the matching docs so that facets are counted on fewer hits
- We should note that as a result the weights computed for facets are approximations only, and the application needs to correct them if it wants (e.g. multiplying by the inverse of sampleRatio, or using exact counts etc.). At any rate, this is something that we don't plan to tackle in this issue, right?
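The count correction mentioned in the second point is just a division by the sample ratio; as a sketch (the helper name is hypothetical):

```java
// Scales a facet count computed on a sample back to an estimate of the true count.
class SampledCountCorrection {
    static long correctCount(long sampledCount, double sampleRatio) {
        return Math.round(sampledCount / sampleRatio);
    }
}
```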
Maybe we should call it RandomSamplingFacetsCollector to denote it's a random sample (vs e.g the approaches mentioned by Gilad above)? Then the parameter seed is clear as to what it means.
New patch.
My little benchmark results:
Exact: 176 ms
Sampled: 36 ms
- I removed the SamplingParams.
- I use random sampling in bins in the addDoc. This has almost no performance impact over fixed-interval sampling when running with a sampleRatio of 0.001.
- Renamed the class to RandomSamplingFacetsCollector
- I decided to always add the first document of a segment. This counters the fact that the random index in the last bin may be larger than the segment size. Gives slightly better results.
I do have some questions:
- Code style-wise: I see this. is sometimes used, and sometimes not. What do you prefer?
- It seems FixedBitSet contains a lot of empty space, as only 1 in 1000 hits are preserved. I'm not that familiar with all the DocIdSet implementations, but maybe there are more suitable subclasses?
btw:
My use case is that I would like to present an 'overview' of the search results using facets. Because the index might well be over 10M documents, I need the speed-up that comes from sampling the facets. The results do not need to be exact, as the user will be informed that the results are estimated.
I do want to provide an 'order of magnitude' comparison; that is why I need the correctCountFacetResults. It is easier for the user to see that it is about 10,000 than to just provide a percentage.
Thanks Rob. A few comments:
- Typo in jdoc: "to low" – "too low"?
- About constructors: can we have two ctors, one taking samplingRatio and another taking all the parameters?
- And then I think that seed=0 could be used as "generate a seed for me", so you don't need to take a Long (can take primitive)
- I'd make the first ctor call this(false, ratio, 0) and fold in init() inside the all-params ctor.
- It's a matter of personal preference, but I like that static classes are either at the top of the class (above all class members) or at the end .. but not in the middle
- correctCountFacetResults
- I think this is redundant in the jdoc "Corrects FacetCounts if sampling is used." – you obviously call this method if you used sampling?
- XORShift64Random: maybe we should move it under o.a.l.util (and lucene/core)? I think it's a useful utility class in general?
- At any rate, it should be static class and let's not nest it inside SampledDocs
- Now that the collector sub-samples during collection and does not record the original docs, I think SampledDocs can be declared private (static) class as I don't see any use for someone to care if it's SampledDocs or just Docs?
Code style-wise: I see this. is sometimes used, and sometimes not. What do you prefer?
This is a personal preference. I use this. only when the method has a parameter which conflicts w/ class members and I often use it explicitly in the ctor.
It seems FixedBitSet contains a lot of empty space
You can use WAH8DocIdSet which offers better compression, but it means that the collector would need to declare it doesn't support out-of-order collection. If keepScores=true it doesn't support that anyway, but I think you can explore with it? Can you benchmark with it and report back? If the results are good, maybe we can turn off out-of-order collection for sampling by default and let the app override this behavior if it cares about out-of-order?
I have taken these points into account in the new patch.
- Separated XORShift64Random to o.a.l.util
- Moved SampledDocs and declared private
- Fixed some typos / redundancy in the javadocs.
- Refactored the c'tors, removed the init.
Looks good Rob. I apologize for not mentioning this, but now that XORShift64Random is a public class, it has to have jdocs on all methods and ctors, otherwise documentation linting will fail. Can you please add some in your next patch?
Also, are you planning to write some unit tests? You can either start with one of the existing tests or look at old tests. I think maybe start new will be easier. The key point is that in order to test sampling, we need to index many documents to make the samples count. So e.g. we want to make sure that if we give 10% sample ratio, then a category's count is ~10% of the expected count.
In the old tests we had issues w/ false positives - tests that failed on these asserts just because the nature of sampling isn't deterministic. Would be good if we can craft the test such that on one hand it does test sampling, but on the other hand doesn't cause unwanted noise.
I do think we can optimize SampledDocs to not use FixedBitSet even in the case of out-of-order collection (no scores) by keeping an int[] or some other compressed array, especially when the sample ratio is so small. We can do that later though - we need tests first.
I will add the documentation to the XORShift64Random.
And of course, the tests. I already have some code that does testing in my mini benchmark. I will reduce the document count, as the unit tests need to be as fast as possible. Probably about 100 docs will do fine. Providing a seed will make sure we can test exactly, but I will try to also add a test that uses a random seed and tests if the result fits in a given range. Should be doable; I will take the other tests as an example.
Btw. I also tried the WAH8DocIdSet with setInterval(8). It is slightly slower (36 ms for FBS, 42 ms for WAH8) but it is too small a margin to draw real conclusions I think.
-1 to adding this XOrShiftRandom to lucene/core. There is absolutely no need for it to be here. We are an IR library, not a JDK. Just because it's "useful" is not good enough to expose something as a public API.
Moreover, this class does not extend java.util.Random, has no statistical tests, etc. It doesn't belong as a public API in Lucene core. Please keep it package-private in facets.
Sorry if this was mentioned before and I missed it - how can sampling work correctly (correctness of the end result) if it's done 'on the fly'?
Beforehand, one cannot know the number of documents that would match the query; as such, the sampling ratio is unknown, given that we can afford faceted search over only N documents.
If the query yields 10K results and the sampling ratio is 0.001 - would 10 documents make a good sample?
Same if the query yields 100M results - is a 10K sample good enough? Is it perhaps too much?
I find it hard to figure a pre-defined sampling ratio which would fit different cases.
That's a good point, Gilad. I think once this gets into Lucene it means other people will use it, and we should offer a good sampling collector that works in more than one extreme case (always tons of results), even if it's well documented. One of the problems is that when you have a query Q, you don't know in advance how many documents it's going to match.
That's where the min/maxDocsToEvaluate came in handy in the previous solution – it made SamplingFC smart and adaptive. If the query matched very few documents, not only did it not bother to sample and save CPU, it also didn't come up w/ a crappy sample (as Gilad says, 10 docs). The previous sampling worked on the entire query; the new collector can be used to apply these thresholds per-segment.
But I feel that this has to give a qualitative solution – the sample has be meaningful in order to be considered as representative at all, and we should let the app specify what "meaningful" is to it, in the form of minDocsToEvaluate(PerSegment).
And since sampling is about improving speed, we should also let the app specify a maxDocsToEvaluate(PerSegment), so a 1% sample still doesn't end up evaluating millions of documents.
Robert, I agree w/ your comment on XORShiftRandom - it was a mistake to suggest moving it under core.
Rob, I feel like I've thrown you back and forth with the patch. If you want, I can take a stab at making the changes to SFC.
Hi all, good points.
Actually, in my application, I always do a count before any other search/facetting. This means I can set a sampling ratio to be appropriate for the number of results (or even decide not to do sampling at all). So for me, I have enough functionality as-is in this issue.
But of course if would be nice if it could be implemented in one pass, because there are more people using Lucene
I think it would not be that hard to implement a lower-bound; if for example 15.000 would be the minimum number of documents before sampling kicks in, the RandomSamplingFacetCollector could also collect docs in a int[15000] to make sure that if there are less hits, the int[] is used, else the sampled set. The exact-part collector can stop collecting after these 15k docs.
A bit harder is the upper-bound, as you only know how many hits there are after collecting them all. To prevent collecting all the documents, you need a sampling rate to start skipping documents. If the result then contains too much document; the sampled result can be re-sampled to the desired size. I think this is more like the previous implementation of the SamplingCollector.
I have already implemented a small test (and fixed an off-by-one using it) ; I will create a patch today that contains this test and the xorshift merged back into the collector.
Is it a good idea to explore this upper-bound/lower-bound approach? Maybe you already thought of alternative approaches as well?
- Reverted move of XorShift64Random
- Added basic test for random sampling. Uses sampling ratios of 1.0d and 0.1d. 1.0d should behave like no sampling at all.
Actually, in my application, I always do a count before any other search/facetting
Hmm, what do you mean? How do you count the number of hits before you execute the search?
The reason why the previous sampling solution did not do sampling per-segment is that in order to get to a good sample size and representative set, you need to know first how many documents the query matches and only then you can do a good sampling, taking min/maxSampleSize into account. Asking the app to define these boundaries per-segment is odd because app may not know how many segments an index has, or even the distribution of the segment sizes. For instance, if an index contains 10 segments and the app is willing to fully evaluate 200K docs in order to get a good sampled set, it would be wrong to specify that each segment needs to sample 20K docs, because the last two segments may be tiny and so in practice you'll end up w/ a sampled set of ~160K docs. On the other hand, if the search is evaluated entirely, such that you know the List<MatchingDocs> before sampling, you can now take a global decision about which documents to sample, given the min/maxSampleSize constraints.
At the beginning of this issue I thought that sampling could work like that:
FacetsCollector fc = new FacetsCollector(...); searcher.search(q, fc); Sampler sampler = new RandomSampler(...); List<MatchingDocs> sampledDocs = sampler.sample(fc.getMachingDoc()); facets.count(sampledDocs);
But the Facets impls all take FacetsCollector, so perhaps what we need is to implement RandomSamplingFacetsCollector and only override getMatchingDocs() to return the sampled set (and of course cache it). If we'll later want to return the original set, it's trivial to cache it aside (I don't think we should do it in this issue).
I realize it means we allocate bitsets unnecessarily, but that a correct way to create a meaningful sample. Unless we can do it per-segment, but I think it's tricky since we never know how many hits a segment will match a priori. Perhaps we should focus here to get a correct and meaningful sample, and improve performance only if it becomes a bottleneck? After all, setting a bit in a bitset if far faster than scoring the document.
How do you count the number of hits before you execute the search?
Well, I do a search of course, but collect the hits using a TotalHitCountCollector and not retrieve any stored values. I did not find any other way to determine for sure if I needed to do sampling or not. I know this takes time. When I first implemented it however, it was faster to do a count and determine whether provide exact facets or needed to sample. Not optimal, but it worked. And because I could use sampling in the facets, the total time (1 pass counting, 1 pass sampling facets) was still much less than the time it would take to do a exact facet and count in one pass.
When only considering the samplingThreshold, facetting should still be doable without counting first. It can be done by storing the first samplingThreshold documents (in the addDoc) in a separate array (in the collector) without sampling. This way the count is not needed to decide on whether to sample or not as there will always be sampled. Only the sampled result is discarded if the total number of hits <= minSampleSize. I agree that this is not the nicest way to get a sample. (but can reduce the time to retrieve estimated facet results by 5 when using a sampling rate of 1 in 1000).
The alternative is to do it more like in your snippet (and more like the first approach); collect all documents and sample afterwards. This way you know the number of hits and adjusting the sample rate based on parameters is more straightforward.
Either way is faster than using exact facets, so both ways are a win.
Well, I do a search of course, but collect the hits using a TotalHitCountCollector and not retrieve any stored values
That means that you evaluate the query twice, which is expensive ... but also, this doesn't guarantee to provide a "correct" sample. So say you found out the query matches 10M documents and you decide that 100K docs are a good sample, you'll set the sampling ratio to 0.01 but then you apply this ratio per-segment (as in this patch), and could easily end up with less than 100K docs (e.g. if randomness didn't really pick 0.01 of documents in a certain segment).
I don't think we should store the minSampleSize in an int[] and move to a bitset if we collected more docs. First, the collector works per-segment and I think sampling should work on the entire result set. So the int[] wouldn't be part of MatchingDocs, it'd need to be held inside the collector and then you'll need to know where to "cut" it for each MatchingDocs instance (per-segment).
I really think a simple solution is what we should start with. RandomSamplingFC only overrides .getMatchingDocs() and it can determine if sampling is needed or not, given minSampleSize and the sum of totalHits from all MatchingDocs. Then you do sampling per-segment, but with the "global picture" in mind, and you're able to correct the sample ratio so that we come as close to minSampleSize as possible.
To me, if we can factor in a maxSampleSize in this issue is a bonus, but I can definitely see that happening in a separate issue, as that's a performance thing. We should focus on giving our users a collector which produces a good sample, otherwise it's not valuable sampling.
I have a patch ready that implements the random sampling usign override on .getMatchingDocs(). It passes the test, so it should be ok
.
It is slower however (only 3x speedup), but maybe there is room for optimization?
Exact :168 ms
Sampled:55 ms
Thanks Rob. Few comments:
- I don't think that we need both totalHits and segmentHits. I know it may look not so expensive, but if you think about these ++ for millions of hits, they add up. Instead, I think we should stick w/ the original totalHits per-segment and in RandomSamplingFC do a first pass to sum up the global totalHits.
- With that I think no changes are needed to FacetsCollector?
About RandomSamplingFacetsCollector:
- I think we should fix the class jdoc's last "Note" as follows: "Note: if you use a counting {\@link Facets}
implementation, you can fix the sampled counts by calling...".
- Also, I think instead of correctFacetCounts we should call it amortizeFacetCounts or something like that. We do not implement here the exact facet counting method that was before.
- I see that you remove sampleRatio, and now the ratio is computed as threshold/totalDocs but I think that's ... wrong? I.e. if threshold=10 and totalHits = 1000, I'll still get only 10 documents. But this is not what threshold means.
- I think we should have minSampleSize, below which we don't sample at all (that's the threshold)
- sampleRatio (e.g. 1%) is used only if totalHits > minSampleSize, and even then, we make sure that we sample at least minSampleSize
- If we will have maxSampleSize as well, we can take that into account too, but it's OK if we don't do this in this issue
- createSample seems to be memory-less – i.e. it doesn't carry over the bin residue to the next segment. Not sure if it's critical though, but it might affect the total sample size. If you feel like getting closer to the optimum, want to fix it? Otherwise, can you please drop a TODO?
- Also, do you want to test using WAH8DocIdSet instead of FixedBitSet for the sampled docs? If not, could you please drop a TODO that we could use a more efficient bitset impl since it's a sparse vector?
About the test:
- Could you remove the 's' from collectors in the test name?
- Could you move to numDocs being a random number – something like atLeast(8000)?
- I don't mean to nitpick but if you obtain an NRT reader, no need to commit()
- Make the two collector instances take 100/10% of the numDocs when you fix it
- Maybe use random.nextInt(10) for the facets instead of alternating sequentially?
-?
- You have some sops left at the end of the test.
Make the two collector instances take 100/10% of the numDocs when you fix it
Sorry, I don't get what you mean by this.
.
Hi Rob, patch looks great.
A few comments:
- Some imports are not used (o.a.l.u.Bits, o.a.l.s.Collector & o.a.l.s.DocIdSet)
- Perhaps the parameters initialized in the RandomSamplingFacetsCollector c'tor could be made final
- XORShift64Random.XORShift64Random() (default c'tor) is never used. Perhaps it was needed for usability when this was thought to be a core utility and was left by mistake? Should it be called somewhere?
- getMatchingDocs()
-
- needsSampling() could perhaps be protected, allowing other criteria for sampling to be added
- createSample()
- randomIndex is initialized to 0, effectively making the first document of every segment's bin to be selected as the representative of that bin, neglecting the rest of the bin (regardless of the seed). So if a bin is the size of a 1000 documents, than there are 999 documents that regardless of the seed would always be neglected. It may be better so initialize as randomIndex = random.nextInt(binsize) as it happens for the 2nd and on bins.
- While creating a new MatchingDocs with the sampled set, the original totalHits and original scores are used. I'm not 100% sure the first is an issue, but any facet accumulation which would rely on document scores would be hit by the second as the scores (at least by javadocs) are defined as non-sparse.
Thanks,
How could I have missed this... Must take a break I think.
createSample
I always take the first document, as I did not implement carrying-over of the segments. If I would pick a random index and this index would be greater than the number of document in the segment, the segment would not be sampled. This results is 'too few' sampled documents. Taking the first always might result in 'too many' but that gave a better overall distribution and average.
I think your argument about not-so-random documents and the fact that carry-over should not be that hard, I should implement carry over anyway.
but any facet accumulation which would rely on document scores would be hit by the second as the scores
That's a great point Gilad. We need a test which covers that with random sampling collector.
Is there a reason to add more randomness to one test?
It depends. I have a problem with numDocs=10,000 and percents being 10% .. it creates too perfect numbers if you know what I mean. I prefer a random number of documents to add some spice to the test. Since we're testing a random sampler, I don't think it makes sense to test it with a fixed seed (0xdeadbeef) ... this collector is all about randomness, so we should stress the randomness done there. Given our test framework, randomness is not a big deal at all, since once we get a test failure, we can deterministically reproduce the failure (when there is no multi-threading). So I say YES, in this test I think we should have randomness.
But e.g. when you add a test which ensures the collector works well w/ sampled docs and scores, I don't think you should add randomness – it's ok to test it once.
Also, in terms of test coverage, there are other cases which I think would be good if they were tested:
- Docs + Scores (discussed above)
- Multi-segment indexes (ensuring we work well there)
- Different number of hits per-segment (to make sure our sampling on tiny segments works well too)
- ...
I wouldn't for example use RandomIndexWriter because we're only testing search (and so it just adds noise in this test). If we want many segments, we should commit/nrt-open every few docs, disable merge policy etc. These can be separate, real "unit-", tests.
Sorry, I don't get what you mean by this.
I meant that if you set numDocs = atLeast(8000), then the 10% sampler should not be hardcoded to 1,000, but numDocs * 0.1.
the original totalHits .. is used
I think that's OK. In fact, if we don't record that, it would be hard to fix the counts no?
Ahh thanks, I missed that. I agree it's very improbable that one of the values is missing, but if we can avoid that at all it's better. First, it's not one of the values, we could be missing even 2 right – really depends on randomness. I find this assert just redundant – if we always expect 5, we shouldn't assert that we received 5. If we say that very infrequently we might get <5 and we're OK with it .. what's the point of asserting that at all?
I renamed the sampleThreshold to sampleSize. It currently picks a samplingRatio that will reduce the number of hits to the sampleSize, if the number of hits is greater.
It looks like it hasn't changed? I mean besides the rename. So if I set sampleSize=100K, it's 100K whether there are 101K docs or 100M docs, right? Is that your intention?
...Given our test framework, randomness is not a big deal at all, since once we get a test failure, we can deterministically reproduce the failure (when there is no multi-threading)...
Ok, this makes sense to me.
It looks like it hasn't changed? I mean besides the rename. So if I set sampleSize=100K, it's 100K whether there are 101K docs or 100M docs, right? Is that your intention?
Correct, it is my intention. I actually prefer not to increase the sampleSize with more hits, as bigger samples are slower and 100K is a nice sample size anyway and more hits means more time. I adjust the sampleRatio so that the resulting set of documents is (close to) the sampleSize.
I find this assert just redundant – if we always expect 5, we shouldn't assert that we received 5. If we say that very infrequently we might get <5 and we're OK with it .. what's the point of asserting that at all?
Agreed with the <5. Asserting seems redundant, but is that not the point in unit-tests? The trick is that the assertion should still hold if you change the implementation..
I will add more next week.)
That's a great idea!
The docFreq of the category drill-down term is an upper bound - and could be used as a limit.
It's cheap, but might not be the exact number as it also take under account deleted documents.
The limit should also take under account the total number of hits for the query, otherwise the estimate and the multiplication with the sampling factor may yield a larger number than the actual results.
The limit should also take under account the total number of hits for the query, otherwise the estimate and the multiplication with the sampling factor may yield a larger number than the actual results.
I understand this statement is confusing, I'll try to elaborate.
If the sample was exactly at the sampling ratio, this would not be a problem, but since the sample - being random as it is - may be a bit larger, adjusting according to the original sampling ratio (rather than the actual one) may yield larger counts than the actual results.
This could be solved by either limiting to the number of results, or adjusting the samplingRate to be the exact, post-sampling, ratio.
Asserting seems redundant, but is that not the point in unit-tests?
The problem is when those false alarms cause noise. The previous sampling tests had a mechanism to reduce the noise as much as possible, but they didn't eliminate it. For example, the test was run few times, each time w/ increasing sample until it gave up and failed. At which point someone had to inspect the log and determine that this is a false positive. Since you expect at most 5 categories, and any number between 1-5 is fair game, I prefer not to assert on #categories at all. If you really want to assert something, then make sure 0 < #categories <= 5?
Thanks again for the good points.
I currently have an amortizeFacetCounts that uses the IndexSearcher to retrieve a reader an a FacetsConfig to determine the Term for the docFreq. I'm not really sure how this will work for hierarchies though.
Using totalHits as upper bound is not really necessary I think; as the sampling rate is determined by the total number of hits and the sample size; so reversing this can never yield numbers greater that totalHits.
I alse removed the exact assert and switch to an atLeast. Will add a patch soon.
- Javadocs:
- From the class javadocs: Note: the original set of hits will be available as documents...
I think we should just write "the original set of hits can be retrieved from getOriginal.." - I don't want anyone to be confused with the wording "will be available as documents".
- Can you make NOT_CALCULATED static?
- Typo: samplingRato
- randomSeed: I think it should say "if 0..." not "if null".
- getMatchingDocs – can you move the totalHits calculation to getTotalHits()? And then call it only if (sampledDocs==null)?
- needsSampling – I know it was suggested to make it protected for inheritance purposes, but as it looks now, all members are private so I don't see how can one extend the class only to override this method (e.g. he won't have access to sampleSize even). Maybe we should keep it private and when someone asks to extend, we know better what needs to be protected? For instance, I think it's also important that we allow overriding createSampledDocList, but for now let's keep it simple.
- I think that we need to document somewhere (maybe in class javadocs) that the returned sampled docs may include empty MatchingDocs instances (i.e. when no docs were "sampled" from a segment). Just so that we don't surprise anyone with empty instances. If people work w/ MatchingDocs as they should, by obtaining an iterator, it shouldn't be a problem, but better document it explicitly.
- Perhaps we should also say something about the returned MatchingDocs.totalHits, which are the original totalHits and not the sampled set size?
- About carryOver:
- Doesn't it handle the TODO at the beginning of createSample?
- Why does it need to always include the first document of the first segment? Rather you could initialize it to -1 and if carryOver == -1 set it to a random index within that bin? Seems more "random" to me.
- amortizedFacetCounts:
- It's a matter of style so I don't mind if you keep this like you wrote, but I would write it as if (!needsSampling() || res == null) and then the rest of the method isn't indented. Your call.
- I think it's better to allocate childPath so that the first element is already the dimension. See what FacetsConfig.pathToString currently does. Currently it results in re-allocating the String[] for every label. Then you can call the pathToString variant which takes the array and the length.
- Separately, would be good if FacetsConfig had an appendElement(Appendable, int idx) to append a specific element to the appendable. Then you could use a StringBuilder once, and skip the encoding done for the entire path except the last element.
- Perhaps we should cap this (int) (res.value.doubleValue() / this.samplingRate) by e.g. the number of non-deleted documents?
- About test:
- This comment is wrong: //there will be 5000 hits.
- Why do you initialize your own Random? It's impossible to debug the test like that. You should use random() instead.
- This comment is wrong? //because a randomindexer is used, the results will vary each time. – it's not because the random indexer, but because of random sampling no?
- Would you mind formatting the test code? I see empty lines, curly braces that are sometimes open in the next line etc. I can do it too before I commit it.
- I see that createSample still has the scores array bug – it returns the original scores array, irrespective of the sampled docs. Before you fix it, can you please add a testcase which covers scores and fails?
I think we're very close!
Thanks Shai, I really appreciate all the feedback from you guys on this issue.
I will try to fix the loose ends soon.
Thanks Shai, I really appreciate all the feedback for you guys on this issue.
No worries, it should be me thanking you for doing all this work!
Hi all,
Making good progress, only I'm not really sure what to do with the scores. I could only keep the scores of the sampled documents (creating a new scores[] in the createSample. Or just leave scoring out completely for the sampler? (Passing keepscores = false, removing the c'tor param, setting {[scores}} to null?
If it's not too much work for you, I think you should just create a new float[]? You can separate the code such that you don't check if needs to keep scores for every sampled document, at the cost of duplicating code. But otherwise I think it would be good if we kept that functionality for sampled docs too.
New patch. I'm still not really sure about the scorings, but please take a look at it.
About the scores (the only part I got to review thus far), the scores should be a non-sparse float array.
E,g, if there are 1M documents and the original set contains 1000 documents the score[] array would be of length 1000, If the sampled set will only have 10 documents, the score[] array should be only 10.
The relevant part:
if (getKeepScores()) { scores[doc] = docs.scores[doc]; }
should be changed as the scores[] size and index should be relative to the sampled set and not the original results.
Also the size of the score[] array could be the amount of bins?
Rob, I reviewed the patch and I agree with Gilad - the way you handle the scores array is wrong. It's not random access by doc. I believe if you added a test it would show up quickly. But perhaps we can keep scores out of this collector ... we can always add it later. So I don't mind if you want to wrap up w/o scores for now. Can you then fix the patch to always set keepScores=false? Also, I noticed few sops left in test.
Removed scores. Added javadoc explaining what happens to scores. Removed System.out.println
I reviewed the patch more closely before I commit:
- Modified few javadocs
- Removed needsSampling() since we don't offer an extension any access to e.g. totalHits. We can add it when there's demand.
- Fixed a bug in how carryOver was implemented – replaced by two members leftoverBin and leftoverIndex. So now if leftoverBin != -1 we know to skip the first such documents in the next segment and depending on leftoverIndex, whether we need to sample any of them. Before that, we didn't really skip over the leftover docs in the bin, but started to count a new bin.
- Added a CHANGES entry.
I reviewed the test - would be good if we can write a unit test which specifically matches only few documents in one segment compared to the rest. I will look into it perhaps later.
I think it's ready, but if anyone wants to give createSample() another look, to make sure this time leftover works well, I won't commit it by tomorrow anyway.
Commit 1579594 from Shai Erera in branch 'dev/trunk'
[ ]
LUCENE-5476: add RandomSamplingFacetsCollector
Commit 1579596 from Shai Erera in branch 'dev/trunk'
[ ]
LUCENE-5476: use empty diamond
Commit 1579598 from Shai Erera in branch 'dev/branches/branch_4x'
[ ]
LUCENE-5476: add RandomSamplingFacetsCollector
Committed to trunk and 4x. Thanks Rob and Gilad for your contributions!
Close issue after release of 4.8.0
+1 to bring it back.
I think we could expose methods that take a FBS and either sub-sample it in place, or return a new FBS? | https://issues.apache.org/jira/browse/LUCENE-5476 | CC-MAIN-2016-50 | refinedweb | 8,908 | 72.87 |
C / C++ Rules
cc_binary
cc_binary(name, deps, srcs, data, args, compatible_with, copts, defines, deprecation, distribs, features, includes, licenses, linkopts, linkshared, linkstatic, malloc, nocopts, output_licenses, restricted_to, stamp, tags, testonly, toolchains, visibility, win_def_file)
Implicit output targets
name.stripped (only built if explicitly requested): A stripped version of the binary. strip -g is run on the binary to remove debug symbols. Additional strip options can be provided on the command line using --stripopt=-foo. This output is only built if explicitly requested.
name.dwp (only built if explicitly requested): If Fission is enabled: a debug information package file suitable for debugging remotely deployed binaries. Else: an empty file.
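For example, with a hypothetical cc_binary target //app:server, these outputs would be requested explicitly on the command line (the label and the extra strip option are placeholders):

```
bazel build //app:server.stripped --stripopt=--strip-unneeded
bazel build --fission=yes //app:server.dwp
```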
Arguments
cc_import
cc_import(name, data, hdrs, alwayslink, compatible_with, deprecation, distribs, features, interface_library, licenses, restricted_to, shared_library, static_library, system_provided, tags, testonly, visibility)
The cc_import rule allows users to import precompiled C/C++ libraries.
The following are the typical use cases:
1. Linking a static library
```
cc_import(
    name = "mylib",
    hdrs = ["mylib.h"],
    static_library = "libmylib.a",
    # If alwayslink is turned on,
    # libmylib.a will be forcibly linked into any binary that depends on it.
    # alwayslink = 1,
)
```

2. Linking a shared library (Unix)

```
cc_import(
    name = "mylib",
    hdrs = ["mylib.h"],
    shared_library = "libmylib.so",
)
```

3. Linking a shared library with interface library (Windows)

```
cc_import(
    name = "mylib",
    hdrs = ["mylib.h"],
    # mylib.lib is an import library for mylib.dll which will be passed to the linker
    interface_library = "mylib.lib",
    # mylib.dll will be available at runtime
    shared_library = "mylib.dll",
)
```

4. Linking a shared library with system_provided=True (Windows)

```
cc_import(
    name = "mylib",
    hdrs = ["mylib.h"],
    # mylib.lib is an import library for mylib.dll which will be passed to the linker
    interface_library = "mylib.lib",
    # mylib.dll is provided by the system environment, for example it can be found in PATH.
    # This indicates that Bazel is not responsible for making mylib.dll available.
    system_provided = 1,
)
```

5. Linking to static or shared library

On Unix:

```
cc_import(
    name = "mylib",
    hdrs = ["mylib.h"],
    static_library = "libmylib.a",
    shared_library = "libmylib.so",
)

# first will link to libmylib.a
cc_binary(
    name = "first",
    srcs = ["first.cc"],
    deps = [":mylib"],
    linkstatic = 1,  # default value
)

# second will link to libmylib.so
cc_binary(
    name = "second",
    srcs = ["second.cc"],
    deps = [":mylib"],
    linkstatic = 0,
)
```

On Windows:

```
cc_import(
    name = "mylib",
    hdrs = ["mylib.h"],
    static_library = "libmylib.lib",  # A normal static library
    interface_library = "mylib.lib",  # An import library for mylib.dll
    shared_library = "mylib.dll",
)

# first will link to libmylib.lib
cc_binary(
    name = "first",
    srcs = ["first.cc"],
    deps = [":mylib"],
    linkstatic = 1,  # default value
)

# second will link to mylib.dll through mylib.lib
cc_binary(
    name = "second",
    srcs = ["second.cc"],
    deps = [":mylib"],
    linkstatic = 0,
)
```
Arguments
cc_library
cc_library(name, deps, srcs, data, hdrs, alwayslink, compatible_with, copts, defines, deprecation, distribs, features, include_prefix, includes, licenses, linkopts, linkstatic, nocopts, restricted_to, strip_include_prefix, tags, testonly, textual_hdrs, toolchains, visibility, win_def_file)
Header inclusion checking
All header files that are used in the build must be declared in the hdrs or srcs of cc_* rules. This is enforced.

For cc_library rules, headers in hdrs comprise the public interface of the library and can be directly included both from the files in hdrs and srcs of the library itself as well as from files in hdrs and srcs of cc_* rules that list the library in their deps.

Headers in srcs must only be directly included from the files in hdrs and srcs of the library itself. When deciding whether to put a header into hdrs or srcs, you should ask whether you want consumers of this library to be able to directly include it. This is roughly the same decision as between public and private visibility in programming languages.

cc_binary and cc_test rules do not have an exported interface, so they also do not have a hdrs attribute. All headers that belong to the binary or test directly should be listed in the srcs.

To illustrate these rules, look at the following example.
```
cc_binary(
    name = "foo",
    srcs = [
        "foo.cc",
        "foo.h",
    ],
    deps = [":bar"],
)

cc_library(
    name = "bar",
    srcs = [
        "bar.cc",
        "bar-impl.h",
    ],
    hdrs = ["bar.h"],
    deps = [":baz"],
)

cc_library(
    name = "baz",
    srcs = [
        "baz.cc",
        "baz-impl.h",
    ],
    hdrs = ["baz.h"],
)
```
The allowed direct inclusions in this example are listed in the table below. For example foo.cc is allowed to directly include foo.h and bar.h, but not baz.h.

Including file    Allowed inclusions
foo.h             bar.h
foo.cc            foo.h, bar.h
bar.h             bar-impl.h, baz.h
bar.cc            bar.h, bar-impl.h, baz.h
baz.h             baz-impl.h
baz.cc            baz.h, baz-impl.h
The inclusion checking rules only apply to direct inclusions. In the example above foo.cc is allowed to include bar.h, which may include baz.h, which in turn is allowed to include baz-impl.h. Technically, the compilation of a .cc file may transitively include any header file in the hdrs or srcs in any cc_library in the transitive deps closure. In this case the compiler may read baz.h and baz-impl.h when compiling foo.cc, but foo.cc must not contain #include "baz.h". For that to be allowed, baz must be added to the deps of foo.

Unfortunately Bazel currently cannot distinguish between direct and transitive inclusions, so it cannot detect error cases where a file illegally includes a header directly that is only allowed to be included transitively. For example, Bazel would not complain if in the example above foo.cc directly includes baz.h. This would be illegal, because foo does not directly depend on baz. Currently, no error is produced in that case, but such error checking may be added in the future.
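A minimal sketch of the fix described above: declaring baz as a direct dependency of foo, after which foo.cc may legally contain #include "baz.h":

```
cc_binary(
    name = "foo",
    srcs = [
        "foo.cc",
        "foo.h",
    ],
    deps = [
        ":bar",
        ":baz",  # now a direct dep, so foo.cc may #include "baz.h"
    ],
)
```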
Arguments
cc_proto_library
cc_proto_library(name, deps, data, compatible_with, deprecation, distribs, features, licenses, restricted_to, tags, testonly, visibility)
cc_proto_library generates C++ code from .proto files. deps must point to proto_library rules.
Example:
```
cc_library(
    name = "lib",
    deps = [":foo_cc_proto"],
)

cc_proto_library(
    name = "foo_cc_proto",
    deps = [":foo_proto"],
)

proto_library(
    name = "foo_proto",
)
```
Arguments
fdo_prefetch_hints
fdo_prefetch_hints(name, compatible_with, deprecation, distribs, features, licenses, profile, restricted_to, tags, testonly, visibility)
Represents an FDO prefetch hints profile that is either in the workspace or at a specified absolute path. Examples:
```
fdo_prefetch_hints(
    name = "hints",
    profile = "//path/to/hints:profile.afdo",
)

fdo_profile(
    name = "hints_abs",
    absolute_path_profile = "/absolute/path/profile.afdo",
)
```
Arguments
fdo_profile
fdo_profile(name, absolute_path_profile, compatible_with, deprecation, distribs, features, licenses, profile, proto_profile, restricted_to, tags, testonly, visibility)
Represents an FDO profile that is either in the workspace or at a specified absolute path. Examples:
fdo_profile(
    name = "fdo",
    profile = "//path/to/fdo:profile.zip",
)

fdo_profile(
    name = "fdo_abs",
    absolute_path_profile = "/absolute/path/profile.zip",
)
Arguments
cc_test
cc_test(name, deps, srcs, data, args, compatible_with, copts, defines, deprecation, distribs, exec_compatible_with, features, flaky, includes, licenses, linkopts, linkstatic, local, malloc, nocopts, restricted_to, shard_count, size, stamp, tags, testonly, timeout, toolchains, visibility, win_def_file) | https://docs.bazel.build/versions/0.21.0/be/c-cpp.html | CC-MAIN-2020-45 | refinedweb | 1,047 | 52.05 |
One of my favorite approaches to dealing with privileges is the idea of temporarily elevating privileges. This is in contrast to the approach in which you use RunAs to run a program using another user’s credentials. There are two ways to do this..
Control panel applets are a bit of a challenge since the RunAs option is not there when you right click an applet or Control Panel itself. So I went ahead and created my own control panel shortcut..
Hopefully this time running as a non-admin will stick. I will keep you posted during the next 1000 posts.
Scott makes a great distinction between Open Source code and Source Out in the open..
Jeff has a great post in which he compares UML to circuit diagrams and then asks, why doesn’t UML enjoy the same currency for software development?
In the comments Scott Hanselman makes a great point...
It’s because, IMHO, UML isn’t freaking obvious. It’s obtuse. What’s the open arrow, open circle mean again?
I think he is spot on. But you could also say that about any programming language, right? What does that colon between the two words mean?
public class Something : IObscurity // <-- What the heck is that?
{
}
If you are a VB programmer, it might be unfamiliar. But if you are a C# programmer my question is like asking what is that funny curly line and dot at the end of this sentence? Oh that’s an interface implementation silly. Of course!
Don’t get me started on C++ with its double colon craziness and its @variable and variable* which leave the befuddled developer asking what exactly do they mean?
The evolution of software has been a steady stream towards higher level abstractions. We no longer punch holes in cards to represent computer calculations in binary (at least I hope not). As a managed code developer, I don’t even have to worry about allocating memory (malloc anybody?) before I use code...Glory be! So doesn’t it seem natural that UML would be the next evolutionary step in that chain?
Umm...Well no.
The most successful widespread abstractions are those that abstract the underlying computing architecture, which itself is abstract. Memory, for example, is pretty much the same thing to everybody, no matter what kind of software you are working on. If the machine can handle allocating and deallocating it for you so you don’t have to think about it all the time, then all the better for everybody.
But that same principle doesn’t work as well when we start raising the abstraction level to cover our real world concepts. The next obvious level of abstraction is domain classes. How many times have you written an Order class? I’ve written one. Great! Since I did the work, I can simply post that baby on SourceForge and save the rest of you suckers a bunch of time. Now anybody can simply just drag the UML representation into their UML diagrams and bam!, their Web 2.0 revolutionary microformatted shopping cart application is complete. Sit back and watch the flood of money flow in.
If only it were that easy.
It would be nice to be able to work with such high level abstractions and wire them up. Oh, here, I’ll just draw a line from this order to the shopping cart and boom! when the user clicks this button, the item goes into the cart. But what about the various business rules triggered around adding this order to the cart? What about the fact that the cart lives in another process on a separate server and the order needs to serialized? What about the persistence mechanism? How do you express that in UML?
You can’t. Writing code is like asking an evil genie for a wish. No matter how carefully you craft the wish, there is always some pernicious detail left out just waiting to jab you in the eye. I wish I were rich and now I am some poor slob named Rich living in abject poverty. There are just too many moving parts and pitfalls in a piece of software to deal with and worry about.
UML has a bit of trouble capturing the semantics of code. Like snowflakes, no two Order classes are alike. Every client has their peculiar and idiosyncratic ideas on what an order is and how it should work in their environment. So what do we do? We start encumbering UML with all sorts of new symbols and glyphs so that we can work toward a semantically expressive UML (executable UML anyone?)
But this just turns UML into another programming language. The fact that it is in a diagram form doesn’t make it any more expressive than code. In a way, adopting UML is like changing from English to Chinese. Sure a single Chinese character can represent a whole word or even multiple words, but that doesn’t make it any easier to grasp. Now, you have to learn thousands of characters.
Not to mention the fact that you are writing the same code twice. Once by dragging a bunch of diagrams around with a mouse (how slow is that?) and again by writing out the actual compilable code. Granted, that particular issue may be solved by executable UML in which the model is the code. But that suffers from its own range of problems, not the least of which is the huge number of symbols required to make it work.
Now to be fair, my criticism is about formal UML and UML modelling tools such as Rational Rose. If you are prepared to run wild and loose with your UML, it can be useful at a very high level as a planning tool. I sometimes sketch out interaction diagrams to help me think through the interactions of my class objects. That is useful. But I rarely keep these diagrams around because hell will freeze over before I waste a bunch of my time trying to keep all of them up to date with the actual code. The code really is the design. The only diagram potentially worth keeping around is the very high level system architecture diagram outlining the various subsystems..
So go out there and steal someone else’s thunder. But do it according to the rules.
I violated one of my own rules. Can you figure out which one?..
Doh!
I should have a fix soon.
UPDATE: All fixed. Sorry for the brief disruption.

Ramseur who works on the Rainbow 2.0 Portal project looks like he has made progress in implementing addLoadEvent..
Deploying on Kubernetes
Note
This document is mainly for advanced Kubernetes usage. The easiest way to run a Ray cluster on Kubernetes is by using the built-in Cluster Launcher. Please see the Cluster Launcher documentation for details.
This document assumes that you have access to a Kubernetes cluster and have
kubectl installed locally and configured to access the cluster. It will
first walk you through how to deploy a Ray cluster on your existing Kubernetes
cluster, then explore a few different ways to run programs on the Ray cluster.
The configuration
yaml files used here are provided in the Ray repository
as examples to get you started. When deploying real applications, you will probably
want to build and use your own container images, add more worker nodes to the
cluster (or use the Kubernetes Horizontal Pod Autoscaler), and change the
resource requests for the head and worker nodes. Refer to the provided
yaml
files to be sure that you maintain important configuration options for Ray to
function properly.
Creating a Ray Namespace
First, create a Kubernetes Namespace for Ray resources on your cluster. The
following commands will create resources under this Namespace, so if you want
to use a different one than
ray, please be sure to also change the
namespace fields in the provided
yaml files and anytime you see a
-n
flag passed to
kubectl.
$ kubectl create -f ray/doc/kubernetes/ray-namespace.yaml
Starting a Ray Cluster
A Ray cluster consists of a single head node and a set of worker nodes (the provided ray-cluster.yaml file will start 3 worker nodes). In the example Kubernetes configuration, this is implemented as:
A ray-head Kubernetes Service that enables the worker nodes to discover the location of the head node on start up.

A ray-head Kubernetes Deployment that backs the ray-head Service with a single head node pod (replica).

A ray-worker Kubernetes Deployment with multiple worker node pods (replicas) that connect to the ray-head pod using the ray-head Service.
Note that because the head and worker nodes are Deployments, Kubernetes will automatically restart pods that crash to maintain the correct number of replicas.
If a worker node goes down, a replacement pod will be started and joined to the cluster.
If the head node goes down, it will be restarted. This will start a new Ray cluster. Worker nodes that were connected to the old head node will crash and be restarted, connecting to the new head node when they come back up.
Try deploying a cluster with the provided Kubernetes config by running the following command:
$ kubectl apply -f ray/doc/kubernetes/ray-cluster.yaml
Verify that the pods are running by running
kubectl get pods -n ray. You
may have to wait up to a few minutes for the pods to enter the ‘Running’
state on the first run.
$ kubectl -n ray get pods
NAME                          READY     STATUS    RESTARTS   AGE
ray-head-5455bb66c9-6bxvz     1/1       Running   0          10s
ray-worker-5c49b7cc57-c6xs8   1/1       Running   0          5s
ray-worker-5c49b7cc57-d9m86   1/1       Running   0          5s
ray-worker-5c49b7cc57-kzk4s   1/1       Running   0          5s
Note
You might see a nonzero number of RESTARTS for the worker pods. That can happen when the worker pods start up before the head pod and the workers aren’t able to connect. This shouldn’t affect the behavior of the cluster.
To change the number of worker nodes in the cluster, change the
replicas
field in the worker deployment configuration in that file and then re-apply
the config as follows:
# Edit 'ray/doc/kubernetes/ray-cluster.yaml' and change the 'replicas'
# field under the ray-worker deployment to, e.g., 4.

# Re-apply the new configuration to the running deployment.
$ kubectl apply -f ray/doc/kubernetes/ray-cluster.yaml
service/ray-head unchanged
deployment.apps/ray-head unchanged
deployment.apps/ray-worker configured

# Verify that there are now the correct number of worker pods running.
$ kubectl -n ray get pods
NAME                          READY     STATUS    RESTARTS   AGE
ray-head-5455bb66c9-6bxvz     1/1       Running   0          30s
ray-worker-5c49b7cc57-c6xs8   1/1       Running   0          25s
ray-worker-5c49b7cc57-d9m86   1/1       Running   0          25s
ray-worker-5c49b7cc57-kzk4s   1/1       Running   0          25s
ray-worker-5c49b7cc57-zzfg2   1/1       Running   0          0s
To validate that the restart behavior is working properly, try killing pods and checking that they are restarted by Kubernetes:
# Delete a worker pod.
$ kubectl -n ray delete pod ray-worker-5c49b7cc57-c6xs8
pod "ray-worker-5c49b7cc57-c6xs8" deleted

# Check that a new worker pod was started (this may take a few seconds).
$ kubectl -n ray get pods
NAME                          READY     STATUS    RESTARTS   AGE
ray-head-5455bb66c9-6bxvz     1/1       Running   0          45s
ray-worker-5c49b7cc57-d9m86   1/1       Running   0          40s
ray-worker-5c49b7cc57-kzk4s   1/1       Running   0          40s
ray-worker-5c49b7cc57-ypq8x   1/1       Running   0          0s

# Delete the head pod.
$ kubectl -n ray delete pod ray-head-5455bb66c9-6bxvz
pod "ray-head-5455bb66c9-6bxvz" deleted

# Check that a new head pod was started and the worker pods were restarted.
$ kubectl -n ray get pods
NAME                          READY     STATUS    RESTARTS   AGE
ray-head-5455bb66c9-gqzql     1/1       Running   0          0s
ray-worker-5c49b7cc57-d9m86   1/1       Running   1          50s
ray-worker-5c49b7cc57-kzk4s   1/1       Running   1          50s
ray-worker-5c49b7cc57-ypq8x   1/1       Running   1          10s

# You can even try deleting all of the pods in the Ray namespace and checking
# that Kubernetes brings the right number back up.
$ kubectl -n ray delete pods --all
$ kubectl -n ray get pods
NAME                          READY     STATUS    RESTARTS   AGE
ray-head-5455bb66c9-7l6xj     1/1       Running   0          10s
ray-worker-5c49b7cc57-57tpv   1/1       Running   0          10s
ray-worker-5c49b7cc57-6m4kp   1/1       Running   0          10s
ray-worker-5c49b7cc57-jx2w2   1/1       Running   0          10s
Running Ray Programs
This section assumes that you have a running Ray cluster (if you don’t, please refer to the section above to get started) and will walk you through three different options to run a Ray program on it:
Using kubectl exec to run a Python script.
Using kubectl exec -it bash to work interactively in a remote shell.
Submitting a Kubernetes Job.
Running a program using ‘kubectl exec’
To run an example program that tests object transfers between nodes in the
cluster, try the following commands (don’t forget to replace the head pod name
- you can find it by running
kubectl -n ray get pods):
# Copy the test script onto the head node.
$ kubectl -n ray cp ray/doc/kubernetes/example.py ray-head-5455bb66c9-7l6xj:/example.py

# Run the example program on the head node.
$ kubectl -n ray exec ray-head-5455bb66c9-7l6xj -- python example.py

# You should see repeated output for 10 iterations and then 'Success!'
Running a program in a remote shell
You can also run tasks interactively on the cluster by connecting a remote shell to one of the pods.
# Copy the test script onto the head node.
$ kubectl -n ray cp ray/doc/kubernetes/example.py ray-head-5455bb66c9-7l6xj:/example.py

# Get a remote shell to the head node.
$ kubectl -n ray exec -it ray-head-5455bb66c9-7l6xj -- bash

# Run the example program on the head node.
root@ray-head-6f566446c-5rdmb:/# python example.py

# You should see repeated output for 10 iterations and then 'Success!'
You can also start an IPython interpreter to work interactively:
# From your local machine.
$ kubectl -n ray exec -it ray-head-5455bb66c9-7l6xj -- ipython

# From a remote shell on the head node.
$ kubectl -n ray exec -it ray-head-5455bb66c9-7l6xj -- bash
root@ray-head-6f566446c-5rdmb:/# ipython
Once you have the IPython interpreter running, try running the following example program:
from collections import Counter
import platform
import time

import ray

ray.init(address="$RAY_HEAD_SERVICE_HOST:$RAY_HEAD_SERVICE_PORT_REDIS_PRIMARY")

@ray.remote
def f(x):
    time.sleep(0.01)
    return x + (platform.node(), )

# Check that objects can be transferred from each node to each other node.
%time Counter(ray.get([f.remote(f.remote(())) for _ in range(100)]))
Submitting a Job
You can also submit a Ray application to run on the cluster as a Kubernetes Job. The Job will run a single pod running the Ray driver program to completion, then terminate the pod but allow you to access the logs.
To submit a Job that downloads and executes an example program that tests object transfers between nodes in the cluster, run the following command:
$ kubectl create -f ray/doc/kubernetes/ray-job.yaml
job.batch/ray-test-job-kw5gn created
To view the output of the Job, first find the name of the pod that ran it, then fetch its logs:
$ kubectl -n ray get pods
NAME                          READY     STATUS      RESTARTS   AGE
ray-head-5455bb66c9-7l6xj     1/1       Running     0          15s
ray-test-job-kw5gn-5g7tv      0/1       Completed   0          10s
ray-worker-5c49b7cc57-57tpv   1/1       Running     0          15s
ray-worker-5c49b7cc57-6m4kp   1/1       Running     0          15s
ray-worker-5c49b7cc57-jx2w2   1/1       Running     0          15s

# Fetch the logs. You should see repeated output for 10 iterations and then
# 'Success!'
$ kubectl -n ray logs ray-test-job-kw5gn-5g7tv
To clean up the resources created by the Job after checking its output, run the following:
# List Jobs run in the Ray namespace.
$ kubectl -n ray get jobs
NAME                 COMPLETIONS   DURATION   AGE
ray-test-job-kw5gn   1/1           10s        30s

# Delete the finished Job.
$ kubectl -n ray delete job ray-test-job-kw5gn

# Verify that the Job's pod was cleaned up.
$ kubectl -n ray get pods
NAME                          READY     STATUS    RESTARTS   AGE
ray-head-5455bb66c9-7l6xj     1/1       Running   0          60s
ray-worker-5c49b7cc57-57tpv   1/1       Running   0          60s
ray-worker-5c49b7cc57-6m4kp   1/1       Running   0          60s
ray-worker-5c49b7cc57-jx2w2   1/1       Running   0          60s
Cleaning Up
To delete a running Ray cluster, you can run the following command:

$ kubectl delete -f ray/doc/kubernetes/ray-cluster.yaml
Today I will show you how to support a read operation on a SQL Server database through an HTTP service built with the ASP.NET Web API in MVC 4. I will create a very simple Web API to read a list of Employee details. To do this, first we must create a database in SQL Server.

Step 1: Create a new database in SQL Server. Then, create a new table named "Employee".

Step 2: Insert some records into the newly created table.

Now, we are going to create an HTTP service using the ASP.NET Web API in Visual Studio 2012 RC.

Step 1: Open Visual Studio 2012 RC.
Step 3: Now, we are going to add a model class that contains the data in your application:
Step 4: Now we will create a class in the Employee.cs file that contains some properties related to the database fields:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace WebApi.Models
{
public class product
{
public int Id { get; set; }
public string Name { get; set; }
public string desig { get; set; }
public decimal salary { get; set; }
}
}

Step 5: Here, we use the Repository design pattern to separate our service implementation. Add a new class, as in Step 3, to the Model folder. Give it the name EmployeeDetails.cs.

Step 6: It contains the method that fetches the data from the SQL Server database, creates a collection of Employees, and stores it in a variable.
using System.Data;
using System.Data.SqlClient;
public class productdetails
{
private List<product> products = new List<product>();
private int _nextId = 1;
SqlConnection con;
SqlDataAdapter da;
DataSet ds=new DataSet();
public IEnumerable<product> GetAll()
{
con = new SqlConnection("Data Source=MYPC;Initial Catalog=MyDb;uid=sa;pwd=wintellect");
da = new SqlDataAdapter("select * from Employee", con);
da.Fill(ds);
foreach (DataRow dr in ds.Tables[0].Rows)
{
products.Add(new product() { Id=int.Parse(dr[0].ToString()),Name=dr[1].ToString(), desig=dr[2].ToString(),salary=int.Parse(dr[3].ToString())});
}
return products;
}
}Step 7: Now, its time to create a Web API Controller that handles HTTP requests from the client. The Web API Controller contains two controllers in the Solution Explorer.
Delete the ValuesController from the Solution Explorer under the controller folder.

Step 8: Now add a new controller, as follows:
Step 9: In this controller you can add an action for Get. The Get action is for fetching all Employee details from the database. It contains HTTP GET methods.

Here is the method to get the list of all products:
using System.Net;
using System.Net.Http;
using System.Web.Http;
using WebApi.Models;
namespace WebApi.Controllers
{
public class productController : ApiController
{
static readonly productdetails repository = new productdetails();
public IEnumerable<product> GetAllProducts()
{
return repository.GetAll();
}
}
}

In the above code we create a method whose name starts with "Get"; it automatically maps to GET requests from the client URI and has no parameters.

Step 10: Now, our Web API is ready to be used for read operations. Build and run this project on your local computer. It will generate a URL, and the client will access the API at that URL.

Step 11: The client will get the complete list of Employees by entering this URI into the browser:
©2014
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/UploadFile/99bb20/create-read-operation-in-web-api-using-mvc-4/ | CC-MAIN-2014-41 | refinedweb | 549 | 58.89 |
I am messing around with try/except and have come up with this:
def print_user_input(your_number):
    print("You entered {0}".format(your_number))

a_number = input("Please enter a number: ")
while True:
    try:
        num = int(a_number)
        print_user_input(num)
        break
    except ValueError:
        print("You need to enter a number")
        a_number = input("Please try again: ")

print("Exit program...")
My question is: instead of using the break statement, what condition can I use in the loop itself in order to exit when a number has been entered? My info source says there are always ways to do this without break/continue, but I can't seem to find a way.
Any pointers are much appreciated. Thanks. | http://python-forum.org/viewtopic.php?f=6&t=4450&p=5699 | CC-MAIN-2016-44 | refinedweb | 114 | 50.97 |
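For what it's worth, one common way to restructure this (a sketch, not from the original thread) is to loop on the parsed value itself, so the loop condition replaces the break:

```python
def print_user_input(your_number):
    print("You entered {0}".format(your_number))

def read_number():
    num = None
    a_number = input("Please enter a number: ")
    while num is None:  # the loop condition replaces the break
        try:
            num = int(a_number)
        except ValueError:
            print("You need to enter a number")
            a_number = input("Please try again: ")
    print_user_input(num)
    print("Exit program...")
```

Note that testing `num is None` rather than `not num` matters here, since 0 is a valid number the user might enter.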
In contrast to OpenShift 2 where one would use
git to push up the code for your application into OpenShift, with that triggering a new build and deployment, OpenShift 3 will instead pull your code from your source code repository. This difference means that under OpenShift 3, pushing changes up to your code repository will not by default automatically trigger a new build and deployment.
To enable automatic builds when code changes are pushed up to your code repository, it is necessary to link your source code repository and OpenShift using a webhook integration.
If you were for example using GitHub to host the code repository for your application, you would configure GitHub to trigger a GitHub webhook on a push, where the target URL is a special URL exposed by the OpenShift cluster where your application is hosted. That way when a push is performed, OpenShift will be notified by GitHub and a new build triggered automatically.
Veer Muchandi has demonstrated how to do this previously in a video on this blog and further information can be found in our documentation about using the GitHub webhook mechanism.
What, though, if you didn’t want to trigger a new build immediately after code was pushed to GitHub, and instead wished to have the code passed through a service such as Travis-CI, which would run code tests first?
For this case you would need to use a Generic webhook. In this blog post I am going to walk you through setting up this scenario using a webhook proxy service to mediate between Travis-CI and OpenShift.
The problem of interoperability
A webhook in web development is a method of augmenting or altering the behaviour of a web application, through the use of custom callbacks. The idea behind a webhook is that when you update some resource on a web site or via a web service, it can trigger some further action by initiating a HTTP request against a special URL of yet another web service.
Great in concept, but one downside of webhooks is that their uses are so varied that it is hard to develop standards as to what information may be carried by the URL for the request, in request headers, or in the payload of the request itself.
What tends to happen is that the producers of the webhook callback define their own specification as to the data they will send. If you as a consumer want to allow for maximum interoperability, you have two choices. You either try to support all the different formats for webhook producers you may want to interoperate with, or you define your own format for what you expect and then either rely on others to use that format or create a webhook proxy which translates callbacks from one format to another.
In the case of OpenShift it takes the middle ground. It provides builtin support for the most popular webhook producers it would need to deal with, such as GitHub, accepting the GitHub webhook format, but also defines its own format for a Generic webhook.
Linking OpenShift with Travis-CI
If you are not familiar with Travis-CI, it is a continuous integration service used to build and test software projects hosted at GitHub. It is especially popular due to its free tier offering for projects which are released as Open Source.
Whether you are running a web site related to an Open Source project, or are a commercial organisation using its paid service, you may wish to use it to run tests on your application code and only deploy the updated code if it passes all your tests.
For OpenShift 2, the Travis-CI system had a special provider plugin for OpenShift. As explained above, the way that source code is deployed is different between OpenShift 2 and 3 and so that existing plugin will not work with OpenShift 3.
The alternative for OpenShift 3 is to make use of the webhook notification support in Travis-CI. This is where we hit the interoperability problem raised above as Travis-CI defines its own format for the webhook callback which is generated and that doesn’t match what OpenShift is expecting to receive.
Comparing webhook callback formats
The format of the webhook callback that Travis-CI generates is documented on the Travis-CI site. You can check out the full format there, but a couple of key points about the format are:
- The authorization secret it generates, so that a consumer can validate the source of a webhook callback, is sent in the HTTP Authorization request header.
- The content of the request is sent as application/x-www-form-urlencoded, but the information of interest is actually sent as URL encoded JSON data stored in the payload field of the form data.
The Generic webhook that OpenShift expects uses its own different format. The notable differences compared to what Travis-CI generates are:
- The authorization secret needs to be sent as part of the URL; the Authorization header would therefore be ignored.
- The content of the request is expected to be application/json and not form data.
- The layout of the data within the JSON part of the payload is different.
For the Generic webhook in OpenShift, the payload sent in the request content is actually optional and a build can be deployed purely based on a request sent to the correct URL. We could therefore technically still point the Travis-CI webhook notification at OpenShift and get it to work. Unfortunately though, the URL that Travis-CI triggers the webhook request against is defined in the .travis.yml file.

The problem with this, if you haven’t worked it out, is that the authorization secret would need to be included in the URL listed in the .travis.yml file. For an Open Source project this would mean it is publicly visible to everyone, which isn’t what you would want.
In addition to this issue, when a payload isn’t provided to OpenShift with the details of the Git repository branch and commit corresponding to the test run, then OpenShift can only assume that it should run a build against the last commit on the branch it is linked to. This may not correspond to what the test was run against.
It is therefore important to be able to properly pass across some of the details generated in the Travis-CI webhook callback into the OpenShift Generic webhook format to ensure everything works correctly.
The only way to connect two different webhook request formats together when differences exist, is to use a webhook proxy service.
Although there are webhook proxy services out there which offer a general format translation and proxying service, I decided to create my own webhook proxy service and run it in OpenShift to act as the required bridge, so you can see what is involved and how you could also do it yourself for any third party service you may need to work with.
Using a webhook proxy service
The webhook proxy service I have implemented to bridge between Travis-CI and OpenShift can be found at:

It consists of a small Python web application implemented using the Flask web framework. You can see the actual code in the app.py file in the repository on GitHub. I have also included a Python pip requirements.txt file with the list of Python packages which are required by the application.
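The heart of the proxy is a small translation step. The sketch below illustrates the idea only; it is not the exact code from app.py, and the OpenShift webhook URL layout and payload fields it assumes are based on the formats described in this post rather than taken from the real implementation:

```python
# Simplified sketch of the Travis-CI to OpenShift webhook translation.
# The URL layout and payload fields below are assumptions for illustration.
import json

def translate(api_host, project, application, secret, form):
    """Map a Travis-CI webhook request onto an OpenShift Generic webhook."""
    # Travis-CI sends the details as URL encoded JSON in the 'payload'
    # form field; 'secret' would come from the Authorization header.
    payload = json.loads(form['payload'])

    # OpenShift wants the authorization secret as part of the URL instead.
    url = ('https://%s/oapi/v1/namespaces/%s/buildconfigs/%s'
           '/webhooks/%s/generic' % (api_host, project, application, secret))

    # Pass through the details OpenShift needs to build the right commit.
    body = {
        'git': {
            'uri': payload['repository']['url'],
            'ref': payload['branch'],
            'commit': payload['commit'],
        }
    }
    return url, body
```

A Flask view would then call something like this and POST the resulting body as application/json to the returned URL.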
This is all that is required to host the webhook proxy service on OpenShift using the default S2I builder for Python supplied with OpenShift. All you would need to do is add a new
python:2.7 application to your project using the OpenShift web console and give it the appropriate URL for the Git repository.
An easier way is to add the application using the
oc command line tool by running:
oc create -f
This will add the webhook proxy service, including exposing it via both HTTP and HTTPS. The name of the application will be
webhook-proxy.
With the webhook proxy service running, the next steps are to configure the build configuration for the application you want to link with Travis-CI, with the authorization secret that Travis-CI will use, and update your Travis-CI configuration to generate the webhook notification when a test succeeds.
Setting the authorization secret
As mentioned above, when Travis-CI runs the webhook callback, it will send an authorization secret in the HTTP
Authorization header. The webhook proxy service when translating the request, will move that into the URL as required by OpenShift. As it is Travis-CI which dictates what the authorization secret is, we need to add it to the build configuration for the target application.
To update the build configuration you can either edit it through the OpenShift web console or use oc edit on the command line. For example, if your web application is called myapp, you would run:

oc edit bc/myapp
You then need to find the triggers section and the subsection for the generic webhook configuration.
triggers:
- generic:
    secret: replace-this-with-authorization-secret
  type: Generic
The details of how to generate the authorization secret are described on the Travis-CI site. The snippet of Python code they provide for doing this is:
from hashlib import sha256
sha256('username/repository' + TRAVIS_TOKEN).hexdigest()
The TRAVIS_TOKEN would be found, as explained in their documentation, in your account profile on the Travis-CI site.
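The snippet above is Python 2; a small self-contained helper that also works on Python 3 (the slug and token values here are made up for illustration) could look like:

```python
from hashlib import sha256

def travis_webhook_secret(repo_slug, travis_token):
    # repo_slug is 'username/repository'; travis_token is the token shown
    # in your Travis-CI account profile page.
    return sha256((repo_slug + travis_token).encode('utf-8')).hexdigest()

# e.g. travis_webhook_secret('username/repository', 'abc123')
```

The resulting hex digest is what goes into the secret field of the build configuration.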
Enabling notifications in Travis-CI
The prior step sets up the OpenShift side; next is to configure Travis-CI through the .travis.yml file, which is part of the source code repository for the application you are using for your deployment in OpenShift and in Travis-CI for testing.
For this you need to add a notifications section containing a webhooks sub section. Details of setting up this type of notification can be found in the Travis-CI documentation.
The important part of this is what URL to use. Because we need to proxy this webhook via the proxy service we need to use the URL of the webhook proxy service we just deployed.
language: python
python:
- "2.7"
install:
- pip install -r requirements.txt
script:
- python manage.py test
notifications:
  webhooks:
    urls:
    - https://<webhook-proxy-host>/travis-ci/<openshift-api-host>/<project>/<application>
    on_success: always
    on_failure: never
    on_start: never
The format of the URL is:
https://<webhook-proxy-host>/travis-ci/<openshift-api-host>/<project>/<application>
To determine the external host name allocated to the webhook proxy service, you can use the oc describe route command.
$ oc describe route/webhook-proxy | grep Host
Requested Host:  webhook-proxy-notifications.a123.apps.example.com
The OpenShift API host should be the host name of the OpenShift cluster displayed when you originally logged in using the oc login command.
Once the .travis.yml file has been updated, commit the change and push it to your code repository. When Travis-CI picks up that a push has occurred and the tests have run, the webhook should be triggered. This will go to the webhook proxy service, which will translate the webhook format to that expected by OpenShift and pass it through to OpenShift. OpenShift will then start a new build and deployment of your application based on the code that passed testing under Travis-CI.
Other webhook notification types
The webhook proxy service I implemented only supports Travis-CI. It should be relatively straight forward to adapt to other third party services used in continuous integration pipelines and which use webhook notifications. If you do use it and add support for other commonly used services, do let me know via the project issues tracker on GitHub and we can see about adding in the additional support.
As to Travis-CI in particular, they do provide default integrations for many services for triggering deployments. As noted above they have provided an integration in the past for OpenShift 2. In time hopefully we will get a builtin integration for OpenShift 3, but in the interim something like this webhook proxy service does the job. | https://blog.openshift.com/using-generic-webhook-trigger-builds/ | CC-MAIN-2017-13 | refinedweb | 2,008 | 56.08 |
Hi,
I'm making some custom maps with JOSM and PicLayer: I take the base map from OpenStreetMap and draw more accurate paths into the area using a background image (a JPG made with AutoCAD) added with PicLayer.
Now I would like to save the whole thing as one OSM file in order to run it on a local OSM server,
but I can't merge both layers (image + OSM) in JOSM in order to have both the image and the OSM data in one OSM file.
So basically the question is: how do you use your own background image with OSM?
asked 01 Oct '12, 14:21 by jtt33
Well, using OSM data together with a certain background image is not a matter of OSM itself.
You need a framework like Leaflet or OpenLayers; with one of those you can publish rendered OSM data in a web browser, possibly combined with an image.
Have a look at their websites; there are some examples of how to use these frameworks for special purposes.
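Whatever framework is used, the calibrated image has to be georeferenced, i.e. you need a mapping from image pixels to map coordinates; PicLayer's calibration essentially establishes such a transform. A minimal, axis-aligned sketch with made-up control points (a real calibration can also include rotation and shear):

```python
def make_pixel_to_lonlat(p1, ll1, p2, ll2):
    """Build a pixel -> (lon, lat) mapping from two control points.

    Assumes the image is not rotated, so x maps linearly to longitude
    and y to latitude. p1/p2 are (x, y) pixels, ll1/ll2 are (lon, lat).
    """
    sx = (ll2[0] - ll1[0]) / (p2[0] - p1[0])
    sy = (ll2[1] - ll1[1]) / (p2[1] - p1[1])
    def pixel_to_lonlat(x, y):
        return (ll1[0] + (x - p1[0]) * sx, ll1[1] + (y - p1[1]) * sy)
    return pixel_to_lonlat

# Example control points: pixel (0, 0) pinned to lon/lat (10.0, 50.0),
# pixel (100, 200) pinned to (10.1, 49.9).
to_lonlat = make_pixel_to_lonlat((0, 0), (10.0, 50.0), (100, 200), (10.1, 49.9))
print(to_lonlat(50, 100))  # an interior pixel mapped to lon/lat
```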
answered
01 Oct '12, 16:
Subject: Re: [boost] phoenix::bind
From: Joel de Guzman (joel_at_[hidden])
Date: 2008-10-02 10:32:45
Peter Dimov wrote:
> Joel de Guzman:
>
>> Oh my, there's a lot of discussion. I can't find that snippet
>> of information. Anyway, it has something to do with applying
>> the "let" semantics to "lambda" only at the top level. I think
>> it will work with the lambda syntax and behavior you sought.
>
> Not quite.
>
> The purpose of unlambda is to allow you to pass a lambda expression into
> a function that expects an ordinary function object and might use it as
> a part of another lambda expression.
>
> template<class F> void g( F f )
> {
> h( bind( f, _2, _1 ) );
> }
>
> int main()
> {
> g( _1 < _2 ); // fail
> g( unlambda( _1 < _2 ) ); // works
> }
>
> A top-level lambda[] that does let()[] will not work for this case.
Yep, I'm keeping the behavior of protect, not unlambda, as
I mentioned in my other post. val(_1 < _2) can probably be
given the unlambda behavior.
BTW, has anyone realized that if I were to rename phoenix to lambda,
then lambda[ ... ] would be ambiguous with the namespace lambda?
Argh!
Regards,
-- Joel de Guzman
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2008/10/142922.php | CC-MAIN-2019-26 | refinedweb | 219 | 73.78 |
#include <wx/wfstream.h>
This class represents data read in from a file.
There are actually two such groups of classes: this one is based on the wxFile class, whereas wxFFileInputStream is based on the wxFFile class.
Note that wxInputStream::SeekI() can seek beyond the end of the stream (file) and will thus not return wxInvalidOffset for that.
Opens the specified file using its ifileName name in read-only mode.
Initializes a file stream in read-only mode using the file I/O object file.
Initializes a file stream in read-only mode using the specified file descriptor.
Destructor.
Returns the underlying file object.
Returns true if the stream is initialized and ready.
Reimplemented from wxStreamBase.
Reimplemented in wxFileStream. | https://docs.wxwidgets.org/stable/classwx_file_input_stream.html | CC-MAIN-2018-51 | refinedweb | 117 | 60.31 |
FiPy current consumption analysis
Hi!
I'm doing a project with LoRaWAN on the FiPy where I'm going to measure the energy consumption of the device when sending and receiving packets over LoRa (it's for a master thesis I'm writing).
I have the FiPy connected to the PyMakr development board and I'm reading the current consumption of the whole node (FiPy + PyMakr), connected through USB, with a PAC1932 board. I started out reading an idle current consumption of over 300 mA. I realized after a while that some other radios on the device might be on, and after turning off WLAN, Bluetooth, LTE and the Server (don't know if this has an effect, but it was listed under network, so I gave it a go) the idle current consumption dropped to somewhere between 60 and 70 mA, which is in accordance with the FiPy specifications.
However, to be able to monitor the energy consumption of the device in a meaningful way, I need to turn off everything that I don't need (basically everything but LoRa and OneWire, since I'm using a DS18X20 thermometer for my application) so that I can get as clear a picture as possible of when there is a rise in the current consumption and what is causing it.
There seems to be something else that's running, because at some interval there is a spike or peak in current consumption to about 100-110 mA. This interval doesn't seem to be fixed: sometimes there are 10 seconds and sometimes 40-50 seconds between the peaks. This happens when I'm not running anything on the device (see picture for details). Does anybody have a suggestion as to what the cause of these spikes might be (the LoRa radio trying to signal, perhaps?), and is there something else that I could try to turn off to get the idle consumption even further down?
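For what it's worth, once you have the sampled trace from the PAC1932 you can reduce a window of it to charge and average current with a plain trapezoidal integration, which makes it easier to attribute energy to individual spikes. A generic sketch, not PAC1932-specific code, with made-up sample values:

```python
def charge_and_average(t, i):
    """Trapezoidal integration of a sampled current trace.

    t: sample times in seconds, i: currents in amps.
    Returns (charge in coulombs, average current in amps). Multiply the
    charge by the supply voltage to get an energy estimate in joules.
    """
    q = 0.0
    for k in range(1, len(t)):
        q += 0.5 * (i[k] + i[k - 1]) * (t[k] - t[k - 1])
    return q, q / (t[-1] - t[0])

# 1 s at a 65 mA idle floor with a 100 ms spike to 110 mA, sampled at 1 kHz:
t = [k / 1000.0 for k in range(1001)]
i = [0.110 if 0.4 <= tk < 0.5 else 0.065 for tk in t]
print(charge_and_average(t, i))  # average slightly above the idle floor
```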
Any help is highly appreciated!
- davidchallender
Have been looking at power consumption and have similar issues to what is reported here: 200 mA in the main program, then 190 to 150 mA for 30 s before dropping to deep sleep. In the first cycle the deep-sleep current is 200 microamps. In subsequent cycles the deep-sleep current is 30 mA. Have tried turning everything off, without change. Have the same issue with LTE but don't have a SIM. Agree that things should not be instantiated by default, but as required.
@dylan
No problems Dylan, let me know how you go. I work for the Metro Fire Brigade in Melbourne. We are about to start evaluating the FiPy board on Telstra, so I'm very keen to get feedback on how you went.
I am just waiting on our SIM pack to arrive from Telstra, and the boards.
@adrianbro I am using the Pytrack board; I must have made the assumption that it was talking about the main expansion board, but that does seem possible. As for removing the RTS/CTS jumpers, would that be literally cutting the LTE_RTS and LTE_CTS pins from the FiPy? :S
I have had success connecting to LTE, which has led to a deep-sleep current of around 20 mA. However, I have still not been able to drop to the 20 uA.
I am using bits and pieces of the code found in this discussion and I've tried it on different firmware versions as well. Does anyone see anything wrong with this code?
import machine
import network
import os
import time
import pycom
from network import LTE
from network import WLAN
from network import Bluetooth
from network import LoRa

lte = LTE()  # reconstructed: the original lines here were garbled in the post
# lte.send_at_cmd('AT+CFUN=1')
lte.attach()
while not lte.isattached():
    print("Attaching...")
    time.sleep(0.1)
print("Attached")
lte.connect()
while not lte.isconnected():
    print("Connecting...")
    time.sleep(0.1)
print("Connected!")
time.sleep(2)
quit = False
while quit == False:
    try:
        lte.deinit()
    except OSError:
        print(' Exception occured, retrying...')
        pass
    else:
        quit = True
print("Disconnected")
print('Switching off WLAN')
wlan = network.WLAN()
wlan.deinit()
print('Switching off Heartbeat')
pycom.heartbeat(False)
pycom.rgbled(0x000000)
print('Switching off Server')
server = network.Server()
server.deinit()
print('Switching off Bluetooth')
bt = Bluetooth()
bt.deinit()
print('Switching off LoRa')
lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.AU915, power_mode=LoRa.SLEEP)
print("Going to sleep")
machine.deepsleep(300000)
@nalexopo Is there an easy way of checking if your SIM supports Cat-M1 or NB-IoT? I am following up with Telstra at the moment however it could be a few days until someone who knows gets back to me.
@nalexopo @Dylan Same here. The minimum deep sleep current on average is around 12 mA in my experiments.
@dylan
I came to the conclusion that in order to have a constant low current you have to use a working SIM that supports Cat-M or NB-IoT. I just used @MartinN 's code and a working sim. Pycom has to fix this.
@nalexopo I am also having problems getting the 20uA deepsleep current draw, would you be able to share any working code you had that achieved 20uA?
So far I have gotten it down to 30mA (also tested it with MartinNs code and got the same).
This is with a SIM card, I found that without a SIM I couldn't lte.deinit() at all.
Cheers
@thomand1000 I forgot to mention that in the first case the current draw remained at about 200 mA for as long as half a minute (or less) before going down to 20 uA. I have edited that into the original post too.
@nalexopo Have you looked at what happens right before deep sleep? In my case the device draws about 200 mA of current for 1-2 seconds and then goes into deep sleep. I've discussed it with Pycom here:, which believes it might be the LTE modem getting ready for deep sleep (a potential bug), but it would be interesting to know if you experience the same behavior.
I'm currently developing a battery-based project on a FiPy with some sensors, and these are my 2 cents on current draw. I presently use LoRa, but I plan on switching between ALL networks if I can.
I tried machine.deepsleep() without switching off anything and without a SIM card. Sometimes the device would sleep with a 20 uA current draw, but it would draw 200 mA or 30 mA for 1-30 seconds after the supposed end of the program before really sleeping. Sometimes it wouldn't sleep and the current dissipation would be either 30 mA or 200 mA steady. It was random as far as I could tell.
I tried machine.deepsleep() with switching things off using @MartinN's code, and I see the same behaviour as him, but the current doesn't always decrease.
With a SIM card installed and @MartinN's code, machine.deepsleep() works as expected: always a 20 uA current draw on sleep, and LTE switches off without delay.
@jmarcelino I see that you are active on the forums and always explaining things in a good way. Maybe you can look into this and explain what is happening? Or what we are doing wrong?
I'm using the FiPy in experiments for my master's thesis and really need deep sleep to work so that I can monitor the energy consumption of my device properly. So it's pretty urgent to get this working.
Thanks!
@thomand1000 You are indeed right, I don't have a SIM card present! And indeed this whole situation is strange: after booting, the system has LTE on (based on the high power consumption) even though there is no SIM card available, and when you want to turn LTE off it does not succeed properly because there is no SIM card present... It looks like Pycom has some serious work to do here!
@thomand1000 I have not yet entered the area of deepsleep, because I first wanted to understand the normal power consumption....
Overall it is really strange: it looks like the system has more or less every power-hungry peripheral on by default, and in order to preserve power you have to manually instantiate and deinit every peripheral you do not use in your application. Pycom, can you explain and comment on this? Maybe provide some code examples showing how to reach the idle current mentioned in the spec sheets.
@martinn I have made some progress debugging your situation. When I removed the nano SIM card on my FiPy, the LTE.deinit() part of your code threw an error, just like in your case. When I put it back in, it deinited right away. So I guess you need to have a SIM card in the FiPy to be able to deinit LTE. Strange...
@martinn Hi! I have basically used the same code that you have. LTE turns off on my system. I have never experienced any trouble with it, and my output from running your code is like your console output (same device, firmware, Python and MicroPython versions), except that the exception never occurred.
I'm sorry that I can't help you with why this is happening. I am experiencing a lot of strange things when I have had the system up for a long time. An example: when I measured the idle (no radios) current it was around 60-70 mA. After a while of testing with different scripts running LoRa, this suddenly jumped to about 100 mA (with nothing else turned on). I rebooted the FiPy but it didn't help. After rebooting the computer I had connected to it as well, the current went down to 60-70 mA again...
I have another thread going on the forums on deepsleep current by the way. If you have any experience with deep sleep on the FiPy any help would be highly appreciated. The way it works now is that when running machine.deepsleep() the current rises from 60-70 mA to 160 mA. The same happens if I change the last line of your code above from machine.idle() to machine.deepsleep(). Very strange!
I have been doing some current measurements when the FiPy (and other modules) are actually doing nothing (running machine.idle() in an infinite loop)
My experiences: when using firmware 1.17.3.b1, an lte.deinit() often (not always!) results in an OSError exception. And even after looping until it succeeds, it takes a considerable amount of time before the current actually decreases to values comparable to those mentioned in the datasheets.
Question: Which firmware do you use and what code do you use to reliably and quickly switch off LTE? Any code suggestions for me, eg other peripherals I am missing?
My code:
import sys
import uos
import machine
import network
import pycom
from network import Bluetooth
from network import LoRa

print('Device: ' + uos.uname().machine)
print('Firmware: ' + uos.uname().release)
print('Python: ' + sys.version)
print('MicroPython: ' + str(sys.implementation.version[0]) + '.' +
      str(sys.implementation.version[1]) + '.' +
      str(sys.implementation.version[2]))
print('===============================================================================')
print('Switching off Heartbeat')
pycom.heartbeat(False)
pycom.rgbled(0x000022)
print('Switching off WLAN')
wlan = network.WLAN()
wlan.deinit()
print('Switching off Server')
server = network.Server()
server.deinit()
print('Switching off Bluetooth')
bt = Bluetooth()
bt.deinit()
print('Switching off LoRa')
lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.EU868, power_mode=LoRa.SLEEP)
if (uos.uname().sysname == 'FiPy'):
    print('Switching off LTE')
    lte = network.LTE()
    quit = False
    while quit == False:
        try:
            lte.deinit()
        except OSError:
            print(' Exception occured, retrying...')
            pass
        else:
            quit = True
print('Switching off Sigfox')
# sigfox = Sigfox(mode=Sigfox.SIGFOX, rcz=Sigfox.RCZ1)
print('Switching off RGB Led')
pycom.rgbled(0x000000)
print('===============================================================================')
print('Now perform measurements ... ')
while True:
    machine.idle()
Console output:
Device: FiPy with ESP32
Firmware: 1.17.3.b1
Python: 3.4.0
MicroPython: 1.8.6
===============================================================================
Switching off Heartbeat
Switching off WLAN
Switching off Server
Switching off Bluetooth
Switching off LoRa
Switching off LTE
 Exception occured, retrying...
 Exception occured, retrying...
Switching off Sigfox
Switching off RGB Led
===============================================================================
Now perform measurements ...
@thomand1000 We would need to see your code, as the LoRa radio shouldn't send anything at all unless you ask it to. Contrary to other technologies such as Wi-Fi or cellular, there is no regular activity, a class A LoRa node just sends when asked to, waits for the RX windows, listen a bit, and should then stop (unless you use confirmed packets and the ACK isn't received of course).
If you start a LoRaWAN join it will retransmit until it receives an accept, but the pattern should be quite predictable. | https://forum.pycom.io/topic/3090/fipy-current-consumption-analysis | CC-MAIN-2019-43 | refinedweb | 2,062 | 73.07 |
Bar:
00401050: E8B7000000 call _penter
00401055: 55 push ebp
00401056: 8BEC mov ebp,esp
00401058: E8A8FFFFFF call ILT+0(?Foo
0040105D: 3BEC cmp ebp,esp
0040105F: E8AE000000 call _chkesp
00401064: 5D pop ebp
00401065: C3 ret
#ifndef _STDAFX_H
#define _STDAFX_H
// This define must occur before any headers are included.
#define _CRTDBG_MAP_ALLOC
// Include all other headers here!
// Include CRTDBG.H after all other headers
#include <crtdbg.h>
#define NEW_INLINE_WORKAROUND new ( _NORMAL_BLOCK ,\
__FILE__ , __LINE__ )
#define new NEW_INLINE_WORKAROUND
#endif // _STDAFX_H
From the December 2000 issue of MSDN Magazine | http://msdn.microsoft.com/en-us/magazine/cc301382.aspx | crawl-002 | refinedweb | 236 | 54.93 |
The following snippet works for me - almost:
When I run my simulation the above code changes ica (as evidenced by a recording of _ref_ica, in which the noise is evident), but the noisy current doesn't seem to feed back into noise in the voltage (_ref_v). There is a mod file in my simulation which writes ica, so I know that ica is definitely feeding into the simulation, but the noisy component doesn't seem to be. It's as though the changes in _currents are being ignored by the voltage integrator altogether.
Code:
import neuron.rxd.rxd as nrr
import random

def _kn_currents(rhs):
    nrr._currents(rhs)
    # global nrr._rxd_induced_currents
    print "adding some noise"
    sign = 1
    cur = random.random()/10
    ## This line alters ica, but does not seem to have effect on voltage
    print nrr._curr_ptrs[0][0]
    nrr._curr_ptrs[0][0] += -sign * cur
    print nrr._curr_ptrs[0][0]
    # print nrr._rxd_induced_currents
    # nrr._rxd_induced_currents[0] += 0.0
    nrr._rxd_induced_currents[0] += sign * cur
    print nrr._rxd_induced_currents

nrr._callbacks[2] = _kn_currents
Any thoughts on this would be welcome. I realise that I'm probably missing a trick by not using the apparatus in multiCompartmentalReaction.py, but this is my attempt to understand how the code is working without losing myself completely in all the pointer maps! | https://www.neuron.yale.edu/phpBB/viewtopic.php?t=3139 | CC-MAIN-2019-18 | refinedweb | 220 | 57.77 |
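One separate detail worth checking in any noisy-current injection, independent of the pointer question: drawing a fresh random.random()/10 per callback makes the injected noise statistics depend on the integration timestep, and random.random() has a positive mean rather than zero. For white noise the per-step amplitude should scale as 1/sqrt(dt) (as in Euler-Maruyama schemes); a minimal sketch:

```python
import math
import random

def noise_current(dt, sigma=0.01):
    """Zero-mean white-noise current whose integrated effect is dt-independent.

    sigma sets the noise intensity; the 1/sqrt(dt) factor keeps the variance
    of the integrated charge the same whichever timestep the solver uses.
    """
    return sigma * random.gauss(0.0, 1.0) / math.sqrt(dt)
```

Using random.gauss also removes the constant offset that random.random()/10 would add to the mean current.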